source_id | question | response | metadata |
---|---|---|---|
98,824 | I have a project that I've been thinking about for a little while, and I've come to the realization that at some point during its development, I'm going to need an oscilloscope. Okay, not a problem. Instead of purchasing an oscilloscope, I've decided that I'd like to -- at the very least -- design my own, and hopefully build the result. To make things simpler, I'm thinking about using a Raspberry Pi to do all the fun calculations and visualizations (I don't feel like implementing the FFT on an AVR, thank you very much). The more I read about oscilloscopes, the more confused I am, to be honest. Why isn't an oscilloscope just an ADC? If I were to hook up something like this (with appropriate over-voltage protection and pre-amplification) to a circuit on one end, and an appropriately-programmed CPU on the other, wouldn't that be an oscilloscope? [In the past I've only worked with simple digital circuits -- I'm mainly a theoretical computer scientist! -- and so I'm trying to wrap my head around analog electronics right now. As such, I apologize if the answer to this is extremely obvious...] | At its heart, a (digital) oscilloscope is just an ADC, along with some memory to hold the samples. The samples are then read out of the memory and displayed. The practical implementation issues make commercial oscilloscopes complicated. The input signal needs to be scaled appropriately for the range of the ADC, which means that you need to have attenuators and/or amplifiers that have very precise gain values that are very flat across a huge range of frequencies (DC to 10s or 100s of MHz at a minimum) in order to measure waveforms with minimal distortion. Also, depending on the application, the sample rate of the ADC needs to be adjusted (very precisely) over a wide dynamic range — 1 ns/sample to 1 s/sample (9 orders of magnitude) would be typical. Then there's the question of knowing when to start — or more importantly, stop — sampling; this is known as triggering. Different applications have different needs for triggering, and commercial 'scopes have a wide selection to accommodate them. | {
"source": [
"https://electronics.stackexchange.com/questions/98824",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/33339/"
]
} |
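As a rough illustration of the "ADC plus memory plus trigger" structure described above, here is a minimal software sketch. It is not a real instrument driver: `read_adc` is a hypothetical callable standing in for the analogue front end, and the sample rate, record length and rising-edge trigger level are arbitrary assumptions.

```python
import numpy as np

def capture(read_adc, sample_rate_hz=1_000_000, record_len=1024,
            trigger_level=0.0, pretrigger=256):
    """Toy single-shot capture: fill a circular buffer, wait for a rising edge
    through trigger_level, then keep sampling until the post-trigger part of
    the record is full."""
    buf = np.zeros(record_len)
    i = 0                      # next write position in the circular buffer
    filled = 0                 # total samples taken so far
    prev = read_adc()
    remaining = None           # post-trigger samples still to take (None = not triggered yet)
    while True:
        v = read_adc()         # one conditioned, scaled sample from the front end
        buf[i] = v
        i = (i + 1) % record_len
        filled += 1
        if remaining is None and filled >= pretrigger and prev < trigger_level <= v:
            remaining = record_len - pretrigger   # trigger fired: finish the record
        elif remaining is not None:
            remaining -= 1
            if remaining <= 0:
                trace = np.roll(buf, -i)                      # oldest sample first
                t = np.arange(record_len) / sample_rate_hz    # time axis in seconds
                return t, trace
        prev = v
```

A real oscilloscope performs this loop in dedicated hardware, since software cannot keep up at MS/s-to-GS/s rates, but the data flow (front end, ADC, sample memory, trigger, display) is the same one the answer describes.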
99,320 | I've heard it suggested that "solid tantalum" capacitors are dangerous and may cause fire, may fail short circuit and are fatally sensitive to even very short over voltage spikes. Are tantalum capacitors reliable? Are they safe for use in general circuits and new designs? | Summary: "When used properly" tantalum capacitors are highly reliable. They have the advantage of high capacitance per volume and good decoupling characteristics due to relatively low internal resistance and low inductance compared to traditional alternatives such as aluminum wet electrolytic capacitors. The 'catch' is in the qualifier "when used properly". Tantalum capacitors have a failure mode which can be triggered by voltage spikes only 'slightly more' than their rated value. When used in circuits that can provide substantial energy to the capacitor failure can lead to thermal run-away with flame and explosion of the capacitor and low resistance short-circuiting of the capacitor terminals. To be "safe" the circuits they are used in need to be guaranteed to have been rigorously designed and the design assumptions need to be met. This 'does not always happen'. Tantalum capacitors are 'safe enough' in the hands of genuine experts, or in undemanding circuits, and their advantages make them attractive. Alternatives such as " solid aluminum" capacitors have similar advantages and lack the catastrophic failure mode. Many modern tantalum capacitors have built in protection mechanisms which implement fusing of various sorts, which is designed to disconnect the capacitor from its terminals when it fails and to limit PCB charring in most cases. If 'when', 'limit' and 'most' are acceptable design criteria and/or you are a design expert and your factory always gets everything right and your application environment is always well understood, then tantalum capacitors may be a good choice for you. Longer: Solid Tantalum capacitors are potentially disasters waiting to happen. Rigorous design and implementation that guarantees that their requirements are met can produce highly reliable designs. If your real world situations are always guaranteed to not have out of spec exceptions then tantalum caps may work well for you, too. Some modern tantalum capacitors have failure mitigation (as opposed to prevention) mechanisms built in. In a comment on another stack exchange question Spehro notes: The data sheet for Kemet's Polymer-Tantalum caps says (in part) : "The KOCAP also exhibits a benign failure mode which eliminates the ignition failures that can occur in standard MnO2 tantalum types.". Strangely, I can find nothing about the "ignition failure" feature in their other data sheets. Solid Tantalum electrolytic capacitors have traditionally had a failure mode which makes their use questionable in high energy circuits that cannot be or have not been rigorously designed to eliminate any prospect of the applied voltage exceeding the rated voltage by more than a small percentage. Tantalum caps are typically made by sintering tantalum granules together to form a continuous whole with an immense surface area per volume and then forming a thin dielectric layer over the outer surface by a chemical process. Here "thin" takes on a new meaning - the layer is thick enough to avoid breakdown at rated voltage - and thin enough that it will be punched through by voltages not vastly in excess of rated voltage. For an eg 10 V rated cap, operation with say 15V spikes applied can be right up there with playing Russian Roulette. 
Unlike Al wet electrolytic caps which tend to self heal when the oxide layer is punctured, tantalum tends not to heal. Small amounts of energy may lead to localised damage and removal of the conduction path. Where the circuit providing energy to the cap is able to provide substantial energy the cap is able to offer a correspondingly low resistance short and a battle begins. This can lead to smell, smoke, flame, noise and explosion. I've seen all these happen sequentially in a single failure. First there was a puzzling bad smell for perhaps 30 seconds. Then a loud shrieking noise, then a jet of flame for perhaps 5 seconds with gratifying wooshing sound and then an impressive explosion. Not all failures are so sensorily satisfying. Where the complete absence of overvoltage high energy spikes could not be guaranteed, which would be the case in many if not most power supply circuits, use of tantalum solid electrolytic caps would be a good source of service (or fire department) calls. Based on Spehro's reference, Kemet may have removed the more exciting aspects of such failures. They still warn against minimal overvoltages. Some real world failures: Wikipedia - tantalum capacitors Most tantalum capacitors are polarized devices, with distinctly marked positive and negative terminals. When subjected to reversed polarity (even briefly), the capacitor depolarizes and the dielectric oxide layer breaks down, which can cause it to fail even when later operated with correct polarity. If the failure is a short circuit (the most common occurrence), and current is not limited to a safe value, catastrophic thermal runaway may occur (see below). Kemet - application notes for tantalum capacitors Read section 15., page 79 and walk away with hands in sight. AVX - voltage derating rules for solid tantalum and niobium capacitors For many years, whenever people have asked tantalum capacitor manufacturers for
general recommendations on using their product, the consensus was “a minimum
of 50% voltage derating should be applied”. This rule of thumb has since become
the most prevalent design guideline for tantalum technology. This paper revisits this
statement and explains, given an understanding of the application, why this is not
necessarily the case. With the recent introduction of niobium and niobium oxide capacitor technologies,
the derating discussion has been extended to these capacitor families also. Vishay - solid tantalum capacitor FAQ . WHAT IS THE DIFFERENCE BETWEEN A FUSED (VISHAY SPRAGUE 893D) AND STANDARD,
NON-FUSED (VISHAY SPRAGUE 293D AND 593D) TANTALUM CAPACITOR? A. The 893D series was designed to operate in high-current applications (> 10 A) and employs an “electronic” fusing mechanism. ... The 893D fuse will not “open” below 2 A because the I2R is below the energy required to activate the fuse. Between 2 and 3 A, the fuse will eventually activate, but some capacitor and circuit board
“charring” may occur. In summary, 893D capacitors are ideal for high-current circuits where capacitor “failure” can cause system failure. Type 893D capacitors will prevent capacitor or circuit board “charring” and usually prevent any circuit interruption that can be associated with capacitor failure. A “shorted” capacitor across the power source can cause current and/or voltage transients that can trigger system shutdown. The 893D fuse activation time is sufficiently fast in most instances to eliminate excessive current drain or voltage swings. Capacitor guide - tantalum capacitors ... The downside to using tantalum capacitors is their unfavorable failure mode which may lead to thermal runaway, fires and small explosions, but this can be prevented through the use of external failsafe devices such as current limiters or thermal fuses. What a cap-astrophe I was working at a manufacturer that was experiencing unexplained tantalum-capacitor failure. It wasn't that the capacitors were just failing, but the failure was catastrophic and was rendering PCBs (printed-circuit boards) unfixable. There seemed to be no explanation. We found no misapplication issues for this small, dedicated microcomputer PCB. Worse yet, the supplier blamed us. I did some Internet research on tantalum-capacitor failures and found that the tantalum capacitors' pellets contain minor defects that must be cleared during manufacturing. In this process, the voltage is increased gradually through a resistor to the rated voltage plus a guard-band. The series resistor prevents uncontrolled thermal runaway from destroying the pellet. I also learned that soldering PCBs at high temperatures during manufacturing causes stresses that may cause microfractures inside the pellet. These microfractures may in turn lead to failure in low-impedance applications. The microfractures also reduce the device's voltage rating so that failure analysis will indicate classic overvoltage failure. ... Related: AVX - surge in solid tantalum capacitors Failure modes and mechanisms in solid tantalum capacitors - Sprague / IEEE abstract only. - OLD 1963. AVX - FAILURE MODES OF TANTALUM CAPACITORS MADE BY DIFFERENT
TECHNOLOGIES - Age ? - about 2001? Effect of Moisture on Characteristics of Surface Mount Solid Tantalum
Capacitors - NASA with AVX assistance - about 2002? Hearst - How to spot counterfeit components Sometimes it's easy :-) : Added 1/2016: Related: Test for reverse polarity for standard wet-aluminium metal can capacitors. Brief: For correct polarity can potential is ~= ground.
For reverse polarity can potential is a significant percentage of applied voltage. A very reliable test in my experience. Longer: For standard wet Al caps I long ago discovered a test for reverse insertion which I've not ever seen mentioned elsewhere but is probably well enough known. This works for caps which have the metal can accessible for testing - most have a convenient clear spot at top center due to the way the sleeve is added. Power up circuit and measure voltages from ground to can of each cap. This is a very quick test with a volt-meter - -ve lead grounded and zip around cans. Caps of correct polarity have can almost at ground. Caps of reverse polarity have cans at some fraction of supply - maybe ~~~= 50%. Works reliably in my experience. You can usually check using can markings but this depends on intended orientation being known and clear. While that is usually consistent in a good design this is never certain. | {
"source": [
"https://electronics.stackexchange.com/questions/99320",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/3288/"
]
} |
99,672 | Why do integrated circuits, mostly QFN, need to be placed in the oven for an hour or so prior to being used on a prototype board?
Is it to somehow improve the protection of the ICs against ESD or just a way of stimulating the silicon? I saw the process being done in an IC design company. | They don't, typically. IPC/JEDEC J-STD-020 provides moisture sensitivity level classifications: MSL 6 – Mandatory Bake before use MSL 5A – 24 hours MSL 5 – 48 hours MSL 4 – 72 hours MSL 3 – 168 hours MSL 2A – 4 weeks MSL 2 – 1 year MSL 1 – Unlimited where the times listed are the component "floor life out of the bag." If a component is moisture sensitive, it will come in a labelled, airtight anti-static bag, with a moisture indicator strip and desiccant. This phenomenon isn't unique to QFN. This particular example is the label on a bag of white PLCC LEDs. I've also seen it recently on DFN, MSOP, and TSSOP. Parts only require baking if they have been out of the bag longer than their floor life, or if the moisture indicator strip indicates the required humidity has been exceeded. In this case, since my parts are MSL4, from the time the bag was open, they had 72 hours to be run through a reflow oven without being baked. Had the indicator strip come out of the bag as shown, the parts would have needed to be baked prior to reflow. | {
"source": [
"https://electronics.stackexchange.com/questions/99672",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/11928/"
]
} |
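The floor-life table above lends itself to a small lookup. This is only an illustrative helper: the hour values restate the J-STD-020 floor lives listed in the answer, the function and dictionary names are made up, and it ignores cumulative-exposure rules and bake-reset details.

```python
# Floor life out of the bag, in hours, per MSL level (as listed in the answer above).
# None means "mandatory bake before use"; float('inf') means unlimited.
MSL_FLOOR_LIFE_H = {
    "1": float("inf"),
    "2": 365 * 24,      # 1 year
    "2A": 4 * 7 * 24,   # 4 weeks
    "3": 168,
    "4": 72,
    "5": 48,
    "5A": 24,
    "6": None,          # mandatory bake before use
}

def needs_bake(msl, hours_out_of_bag):
    """Return True if parts should be baked before reflow."""
    limit = MSL_FLOOR_LIFE_H[msl]
    if limit is None:
        return True
    return hours_out_of_bag > limit

print(needs_bake("4", 60))   # False: still inside the 72 h floor life
print(needs_bake("4", 100))  # True: floor life exceeded, bake first
```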
100,161 | In another question I asked about ways one might obfuscate the design of a system, to prevent unauthorized cloning. One suggestion was that IC manufacturers are often willing to put custom labels on their chips. The idea is interesting, but my quantities are low enough that this would not be cost-effective. How might one remove or otherwise render unreadable the labels on ICs? | Grinding or other abrasives is the only reliable method. I think I've seen machines that will do this for DIP components. A dedicated reverse-engineering person can probably guess the part from the pinout, surrounding circuit, and package or simply have the epoxy removed and look at the identification numbers on the die under a microscope, so it only goes so far. In my (somewhat) humble opinion, hiding the numbers on chips is kind of a red flag that the product is really easy to clone, has nothing proprietary in it, and is being sold for a very healthy margin, but perhaps that's just me. You won't find top tier manufacturers doing it. You could always incorporate one of these chips. | {
"source": [
"https://electronics.stackexchange.com/questions/100161",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/7523/"
]
} |
100,212 | Warning: this may be an extremely naive question (if so, please enlighten me). Many applications of relays require a flyback diode to protect against inductive voltage. I'm unable to find any relay that incorporates a flyback diode. Since it's such a common need, why don't relays include a flyback diode inside the relay package? Are there just too many factors to consider, making it hard to guess the circuit's need? | There is a simple answer to this question: there are many possible flyback circuits, and the reverse diode is only the simplest one. It has one big disadvantage, though - it makes the relay switch off very slowly. For that reason, other circuits are sometimes used instead. There are several examples: simulate this circuit – Schematic created using CircuitLab | {
"source": [
"https://electronics.stackexchange.com/questions/100212",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/26199/"
]
} |
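To put a rough number on the "switches off very slowly" remark above, here is a small sketch comparing a bare flyback diode with a diode-plus-zener clamp. The coil values and clamp voltages are hypothetical, and the crude Euler integration is only meant to show the trend, not model a specific relay.

```python
def decay_time(L, R, I0, v_clamp, i_stop_frac=0.1, dt=1e-6):
    """Time for the coil current to fall to i_stop_frac*I0 when the coil is
    clamped by a roughly constant voltage v_clamp (about 0.7 V for a bare diode,
    about Vz + 0.7 V for a zener in series with the diode)."""
    i, t = I0, 0.0
    while i > i_stop_frac * I0:
        di_dt = -(v_clamp + i * R) / L   # voltage across the coil drives the current down
        i += di_dt * dt
        t += dt
    return t

L, R, I0 = 0.5, 100.0, 0.12              # hypothetical 12 V relay coil: 0.5 H, 100 ohm, 120 mA
print(decay_time(L, R, I0, 0.7))         # bare diode: roughly 10 ms to release
print(decay_time(L, R, I0, 24.7))        # 24 V zener + diode: several times faster
```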
100,540 | I am wondering why some tactile switches have four terminals instead of two. For example, take a look at these switches , like the image below: (source: pranelectronics.com ) What is the use of the two remaining pins? If the pins of the exact opposite side are always shorted then why don't they have just 2 pins? | I'm going to put David Tweed's comment into an answer, which it deserves. The dual shorted pins allow inexpensive single-sided boards to be used for X-Y matrices of switches without requiring jumpers. Here (from an NKK datasheet) are a couple of examples of such layouts: X-Y matrix (this would typically be scanned by a microcontroller or ASIC): Common line (one side of each switch is common; typically it might be connected to Vss or Vdd, and a pull-up or pull-down resistor, perhaps internal to a chip, would be required for each switch). | {
"source": [
"https://electronics.stackexchange.com/questions/100540",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/35491/"
]
} |
101,191 | Just got really curious about it reading this answer from Spehro Pefhany . There Spehro comments that one should use a logarithmic pot for audio applications. So I googled for it. The best article I could find was one titled "Difference between Audio and Linear Potentiometers" [1] which now seems to have been removed from the original website. There they said this: Linear vs. Audio Potentiometers, or "pots" to electronics enthusiasts, are differentiated by how quickly their resistance changes. In linear pots, the amount of resistance changes in a direct pattern. If you turn or slide it halfway, its resistance will be halfway between its minimum and maximum settings. That's ideal for controlling lights or a fan, but not for audio controls. Volume controls have to cater to the human ear, which isn't linear. Instead, logarithmic pots increase their resistance on a curve. At the halfway point volume will still be moderate, but it will increase sharply as you keep turning up the volume. This corresponds to how the human ear hears. Well, I'm not satisfied. What does it mean that the human ear isn't linear? How does the log changes in the pot resistance relates to sound waves and how the human ear works? [1] Original (now broken) link was http://techchannel.radioshack.com/difference-audio-linear-potentiometers-2409.html . | Consider this: - Sound level is measured in dB and, a 10 dB increase/decrease in signal equates to a doubling/halving of loudness as perceived by the ear/brain. Look at the picture above and ask yourself which is the better choice for smooth (coupled with extensive) volume controller. Below are the Fletcher Munson curves showing the full range of decibels that a human can comfortably hear. Note, that unless your stereo system is very powerful, a range of 100 dB is "about right" for volume control. The Fletcher Munson curves also relate loudness to the pitch of a sound. Note also that the curves are all normalized to 1kHz in 10 db steps: - Approximately every 10% of travel of the wiper on the LOG potentiometer can reduce/increase the volume by 10 dB whereas a LIN pot will need to move all the way down to its middle position before it's reduced the volume by only 6 dB! When a linear pot is near the bottom end of its travel (sub 1% of movement left) it will be making massive jumps in dB attenuation for just a tiny movement hence it would become very difficult to set the volume accurately at a low level. It's also worth pointing out that a LOG pot is only able to cope with so much dynamic range of adjustment before it does the same (below -100 dB) but, the point is, this will hardly be noticeable at the tiny, quiet end of its travel. You might also note that the markings on a pot such as CW and CCW tell you which end of a pot is the ground end and the high-volume end. CW = clock wise and CCW is counter clock wise end points for the wiper. | {
"source": [
"https://electronics.stackexchange.com/questions/101191",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/29792/"
]
} |
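A small numeric sketch of the point made above: with an idealised log (audio) taper the attenuation in dB is roughly proportional to travel, while a linear taper only reaches about -6 dB at mid-travel and then crams the rest of the range into the last few percent. The 100 dB span and the perfectly ideal taper are simplifying assumptions.

```python
import math

def lin_taper_db(x):
    """Attenuation of a linear pot used as a volume divider, wiper at x (0..1)."""
    return 20 * math.log10(x)

def log_taper_db(x, span_db=100):
    """Idealised audio (log) taper: attenuation is proportional to travel."""
    return -span_db * (1 - x)

for x in (0.9, 0.5, 0.1, 0.01):
    print(f"travel {x:>4.0%}:  linear {lin_taper_db(x):7.1f} dB   log {log_taper_db(x):7.1f} dB")

# travel  90%:  linear    -0.9 dB   log   -10.0 dB
# travel  50%:  linear    -6.0 dB   log   -50.0 dB
# travel  10%:  linear   -20.0 dB   log   -90.0 dB
# travel   1%:  linear   -40.0 dB   log   -99.0 dB
```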
101,472 | I hear of people using FPGAs to improve performance of systems that do things like bit-coin mining, electronic trading, and protein folding. How can an FPGA compete with a CPU on performance when the CPU is typically running at least an order of magnitude faster (in terms of clock speed)? | CPU's are sequential processing devices. They break an algorithm up into a sequence of operations and execute them one at a time. FPGA's are (or, can be configured as) parallel processing devices. An entire algorithm might be executed in a single tick of the clock, or, worst case, far fewer clock ticks than it takes a sequential processor. One of the costs to the increased logic complexity is typically a lower limit at which the device can be clocked. Bearing the above in mind, FPGA's can outperform CPU's doing certain tasks because they can do the same task in less clock ticks, albeit at a lower overall clock rate. The gains that can be achieved are highly dependent on the algorithm, but at least an order of magnitude is not atypical for something like an FFT. Further, because you can build multiple parallel execution units into an FPGA, if you have a large volume of data that you want to pass through the same algorithm, you can distribute the data across the parallel execution units and obtain further orders of magnitude higher throughput than can be achieved with even a multi-core CPU. The price you pay for the advantages is power consumption and $$$'s. | {
"source": [
"https://electronics.stackexchange.com/questions/101472",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/4437/"
]
} |
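A back-of-the-envelope illustration of the answer's point: even at a much lower clock, a pipelined and replicated FPGA datapath can finish far more results per second than a sequential processor. All the numbers below (clock rates, cycles per block, number of parallel units) are made up purely for illustration.

```python
def cpu_throughput(clock_hz, cycles_per_result):
    """Sequential machine: one result every cycles_per_result clocks."""
    return clock_hz / cycles_per_result

def fpga_throughput(clock_hz, parallel_units, results_per_unit_per_clock=1):
    """Pipelined fabric: each replicated unit finishes a result every clock."""
    return clock_hz * parallel_units * results_per_unit_per_clock

# Hypothetical numbers: a 3 GHz CPU needing 1000 cycles per data block,
# vs a 200 MHz FPGA with a pipelined core finishing one block per clock,
# replicated 8 times across the fabric.
print(f"CPU : {cpu_throughput(3e9, 1000):.2e} blocks/s")   # 3.00e+06
print(f"FPGA: {fpga_throughput(200e6, 8):.2e} blocks/s")   # 1.60e+09, despite the 15x slower clock
```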
101,896 | I'm working on a layout for a PCB and I need to include a handful of pull-up resistors. The board I'm working on will be a proof of concept, and it is likely I will only need one (and order two). That being said, I'd like to keep the board area small. In addition, I'm using through-hole components to make any revisions easy. For these pull-up resistors, vertically mounting them would save some space and cost instead of alternatively mounting them horizontally. However, I rarely see vertically mounted resistors in commercial or industrial products. So, should I avoid using the vertical resistors even though they will save cost up front? Upon searching Google for an answer to my question, I came across these two links: http://www.head-fi.org/t/162556/any-reason-why-i-shouldnt-use-resistors-vertically http://www.proaudiodesignforum.com/forum/php/viewtopic.php?f=6&t=90 The consensus is that vertical resistors are less popular because: Auto-insertion machines can't (or don't prefer) vertical resistors. This isn't an issue for me since I'll be soldering the board myself. Horizontal mounting provides more stress relief. This is also no problem since my board will be safe in an enclosure that is only going to get light use to prove a concept. Are there any other reasons I am overlooking? Granted, most modern designs use SMT components that take up even less space. If the best answer to my particular situation is to just break down and learn to solder the SMT components, I would still like the background knowledge as to why the horizontal resistors are more popular. | Mounting a resistor vertically creates a bigger loop that can pick-up interference magnetically. Compare this with a resistor being mounted on the PCB flat against a flooded ground-plane. The voltage pick-up level is proportional to frequency and area of loop formed by the resistor. This is why surface-mount resistors are preferred a lot of the time. Also, A high value resistor mounted vertically is also asking for trouble in the presence of HF electric fields - what you can create is a mini-antenna. As for pull-ups and downs, surely you won't be swapping these out in your prototype - I'd consider using surface mount devices for these parts. | {
"source": [
"https://electronics.stackexchange.com/questions/101896",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/35800/"
]
} |
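The "bigger loop picks up more" argument above can be put into numbers with Faraday's law: for a sinusoidal field the peak induced voltage is 2*pi*f*B*A. The field strength, frequency and loop areas below are purely illustrative assumptions, not measurements.

```python
import math

def induced_v_peak(freq_hz, b_peak_tesla, loop_area_m2):
    """Peak EMF induced in a loop by a sinusoidal magnetic field: V = 2*pi*f*B*A."""
    return 2 * math.pi * freq_hz * b_peak_tesla * loop_area_m2

f = 100e6                        # 100 MHz interference, illustrative
B = 1e-7                         # 0.1 uT peak field, illustrative
vertical_loop = 10e-3 * 3e-3     # ~10 mm x 3 mm loop of a stood-up through-hole resistor
smd_loop      = 2e-3 * 0.2e-3    # ~2 mm x 0.2 mm loop of a flat SMD part over a ground plane

print(f"vertical: {induced_v_peak(f, B, vertical_loop)*1e3:.2f} mV peak")   # ~1.9 mV
print(f"SMD     : {induced_v_peak(f, B, smd_loop)*1e3:.3f} mV peak")        # ~0.025 mV
```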
102,393 | Can someone extract the HEX file that I burn in a microcontroller I provide them? If that is possible, how can someone ensure that their code is secured in embedded systems? In the case of PIC and AVR microcontrollers, how can one protect their firmware from being reproduced? | Most micro controllers these days have part or manufacturer specific methods to protect the embedded firmware code. This is generally done by locking out the circuits that normally allow the code memory to be read out. (You'll have to look for part specific details in the data sheet or at the manufacturers web site in applicable application notes). Once locked it is not possible to read out the code memory using normal techniques. This provides a reasonable level of protection to keep most hackers from viewing the machine code for your embedded application. Many MCU devices these days have on-board FLASH memory to strore the program code. A previously stored and protected program stored in FLASH can usually be replaced with new code but it takes a full chip FLASH erase operation to unlock the protection mechanism. Once erased the part will operate like it did before the original protection lock. If a new program is loaded it is generally possible to re-lock the part in order to protect the newly loaded machine code. Any discussion of code protection in microcontrollers would not be complete without mention that there is usually no guarantee that any protection scheme offered by the part manufacturer is fool proof. Manufacturers will even state that the protection systems are not 100% fool proof. One of the reasons for this is that there is a whole black market industry going on where, for a fee, diligent hackers will read out code from a protected part for anyone that wants to pay. They have devised various schemes that permit the code to be read out of the ROMs or FLASHes on protected micro controllers. Some of these schemes are incredibly clever but work to better success on some part families than on others. So be aware of this fact then you try to protect your program from prying eyes. Once someone has their hands on the binary image of the machine code that has been read out of a microcontroller, whether that was a protected microcontroller or not, they can process the machine code through a tool called a disassembler. This will turn the binary data back into assembly language code that can be studied to try to learn how the algorithms of your program work. Doing accurate disassembly of machine code is a painstaking job that can take huge amounts of work. In the end the process can lead to the assembler code like I described. If your program was written in some high level language such as C, C++ or Basic the assembly code will only represent the compiled and linked result of your program. It is generally not possible to reverse engineer stolen code all the way back to the high level language level. Experienced hackers can come close given enough time and experience. What this means is there is actually a benefit to writing your embedded application firmware in a high level language. It provides another layer that makes it harder for your program to be fully reverse engineered. Even greater benefit is to be had by using the highest state of the art in optimizing compilers to compile the embedded application because the highest performance optimizers can literally turn the program into a huge spaghetti bowl full of dozens of calls to short subroutines that are very hard to decipher in a disassembler. 
Most experienced embedded developers will tell you to go ahead and use any protection scheme that is offered on the MCU in your application....but not to depend upon it to the end of road for your product. They will tell you that the best way to stay ahead of the competition is to constantly upgrade your product so that the old versions are out of date and uninteresting by the time that hackers may have cloned your code. Change the code around, add new features, spin your PC boards from time to time to swap all your I/Os around and any other things that you may think of. This way you can win the race every time. | {
"source": [
"https://electronics.stackexchange.com/questions/102393",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/31060/"
]
} |
102,508 | The gain here is A = -R f /Rin. However, lets say I want a gain of 10 V/V. Which resistor value would you choose and why? I know that you could have infinite number of combinations for these resistors but why would some one use specific value. i.e R f = 100Mohm, R in = 10Mohm gives a gain of 10V/V but also R f = 10 ohm and R in = 1 ohm gives the gain of 10V/V. What difference would it make to the design? My thoughts say that higher value resistors are not precise so it wouldn't give you precise gain and using lower value resistors sink higher current from the source (V in ). Are there any other reasons? Also, let me know If I am right or wrong as well. | There are downfalls with choosing very large resistors and very small resistors. These usually deal with the non-ideal behavior of components (namely Op-Amps), or other design requirements such as power and heat. Small resistors means that you need a much higher current to provide the appropriate voltage drops for the Op-amp to work. Most op amps are able to provide 10's of mA's (see Op-amp datasheet for exact details). Even if the op-amp can provide many amps, there will be a lot of heat generated in the resistors, which may be problematic. On the other hand large resistors run into two problems dealing with non-ideal behavior of the Op-Amp input terminals. Namely, the assumption is made that an ideal op-amp has infinite input impedance. Physics doesn't like infinities, and in reality there is some finite current flowing into the input terminals. It could be kind of large (few micro amps), or small (few picoamps), but it's not 0. This is called the Op-amps input bias current . The problem is compounded because there are two input terminals, and there's nothing forcing these to have exactly the same input bias current. The difference is known as input offset current , and this is typically quite small compared to the input bias current. However, it will become problematic with very large resistance in a more annoying way than input bias currents (explained below). Here's a circuit re-drawn to include these two effects. The op-amp here is assumed to be "ideal" (there are other non-ideal behaviors I'm ignoring here), and these non-ideal behaviors have been modeled with ideal sources. simulate this circuit – Schematic created using CircuitLab Notice that there is an additional resistor R2. In your case, R2 is very small (approaching zero), so a small resistance times a small bias current I2 is a very small voltage across R2. However, notice that if R1 and R3 are very large, the current flowing into the inverting input is very small, on the same order as (or worse, smaller than) I1. This will throw off the gain your circuit will provide (I'll leave the mathematical derivation as an exercise to the reader :D) All's not lost just because there's a large bias current though! Look what happens if you make R2 equal to R1||R3 (parallel combination): if I1 and I2 are very close to each other (low input offset current), you can negate out the effect of input bias current! However, this doesn't solve the issue with input offset current, and there are even more issues with how to handle drift. There's not really a good way to counteract input offset current. You could measure individual parts, but parts drift with time. You're probably better off using a better part to begin with, and/or smaller resistors. In summary: pick values in the middle-ish range. 
What this means is somewhat vague, you'll need to actually start picking parts, looking at datasheets, and deciding what is "good enough" for you. 10's of kohms might be a good starting place, but this is by no means universal. And there probably won't be 1 ideal value to pick usually. More than likely there will be a range of values which will all provide acceptable results. Then you'll have to decide which values to use based off of other parameters (for example, if you're using another value already, that might be a good choice so you can order in bulk and make it cheaper). | {
"source": [
"https://electronics.stackexchange.com/questions/102508",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/32856/"
]
} |
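As a worked version of the bias-current effect described above (the "exercise left to the reader"), here is a rough calculation of the DC output error of the inverting amplifier for small versus large resistors, with and without the compensation resistor R2 = R1||R3. The 100 nA bias and 10 nA offset figures are plausible for a bipolar-input op-amp but are assumptions, and signs are treated loosely: this estimates magnitudes only.

```python
def output_offset(r_in, r_f, i_bias, i_offset, r_comp=0.0):
    """Approximate DC output error of an inverting amp due to input currents.
    Current into the inverting pin flows through r_f; current into the
    non-inverting pin develops a voltage across r_comp that is amplified
    by the non-inverting gain (1 + r_f/r_in). Magnitude estimate only."""
    ib_minus = i_bias + i_offset / 2
    ib_plus  = i_bias - i_offset / 2
    return ib_minus * r_f - ib_plus * r_comp * (1 + r_f / r_in)

ib, ios = 100e-9, 10e-9   # assumed 100 nA bias, 10 nA offset current

# Gain of -10 built with small resistors vs very large resistors
for r_in, r_f in [(1e3, 10e3), (10e6, 100e6)]:
    plain = output_offset(r_in, r_f, ib, ios)
    comp  = output_offset(r_in, r_f, ib, ios, r_comp=r_in * r_f / (r_in + r_f))
    print(f"Rin={r_in:.0e} Rf={r_f:.0e}: {plain*1e3:10.3f} mV uncompensated,"
          f" {comp*1e3:10.3f} mV with R2 = Rin||Rf")
```

With 1 k / 10 k the uncompensated error is about 1 mV; with 10 M / 100 M it is about 10 V, which would simply saturate the output, and even with the compensation resistor the offset-current term alone (Ios * Rf) leaves roughly 1 V of error. That is the quantitative reason the answer steers you toward mid-range values.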
102,528 | In the field of Digital signal processing I have seen people using words Complex signals and Negative frequencies. For eg. in FFT Spectrum. Does it really have significant meaning in the time domain or is just a part of the mathematical symmetry. How do you visualize negative Frequency in Time domain ? | FFTs work by treating signals as 2-dimensional -- with real and imaginary parts. Remember the unit circle ? Positive frequencies are when the phasor spins counter-clockwise, and negative frequencies are when the phasor spins clockwise. If you throw away the imaginary part of the signal, the distinction between positive and negative frequencies will be lost. For example ( source ): If you were to plot the imaginary part of the signal, you would get another sinusoid, phase shifted with regards to the real part. Notice how if the phasor were spinning the other way, the top signal would be exactly the same but the phase relationship of the imaginary part to the real part would be different. By throwing away the imaginary part of the signal you have no way of knowing if a frequency is positive or negative. | {
"source": [
"https://electronics.stackexchange.com/questions/102528",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/38310/"
]
} |
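The phasor picture above is easy to reproduce numerically: a complex exponential occupies only one side of the spectrum, while a real cosine (its imaginary part thrown away) shows mirror-image peaks at +f and -f. The sample rate and tone frequency below are arbitrary.

```python
import numpy as np

fs, n, f0 = 1000.0, 1000, 50.0               # 1 kHz sample rate, 1 s of data, 50 Hz tone
t = np.arange(n) / fs
freqs = np.fft.fftfreq(n, d=1/fs)            # bin frequencies, including the negative ones

for name, x in [("exp(+j2*pi*f0*t)", np.exp(+2j * np.pi * f0 * t)),
                ("exp(-j2*pi*f0*t)", np.exp(-2j * np.pi * f0 * t)),
                ("cos(2*pi*f0*t)  ", np.cos(2 * np.pi * f0 * t))]:
    X = np.fft.fft(x)
    peaks = freqs[np.abs(X) > n / 4]         # bins holding significant energy
    print(name, "-> peaks at", sorted(peaks.tolist()), "Hz")

# exp(+j...) -> peaks at [50.0] Hz          (counter-clockwise phasor: positive frequency only)
# exp(-j...) -> peaks at [-50.0] Hz         (clockwise phasor: negative frequency only)
# cos(...)   -> peaks at [-50.0, 50.0] Hz   (real signal: the two halves are mirror images)
```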
102,611 | Just now I realized that the I 2 C data and clock lines (SDA and SCL) must have pullup resistors. Well, I've built a couple of clocks using the DS1307 RTC (see datasheet ) according to the schematic below. Notice that I have omitted both pullup resistors. Both clocks work fine, one of them is working for more than 3 months now. How is that possible? In any case, I wanted to know: What happens when the I 2 C pullups are omitted? Is the lack of pullups likely to damage any of those two ICs in my board? I'm after answers that address my specific case of connecting ATmega328P to a DS1307 RTC like in the schematics I provided, but if the question doesn't get too broad, it would be helpful to know what happens when the pullups are omitted in general, i.e., in other scenarios of I 2 C operation. PS. I did search the Net to find the answer, but could just find articles about dimensioning the pullups. Update: I'm using Arduino IDE 1.03 and my firmware handles the RTC using the DS1307RTC Arduino lib (through its functions RTC.read() and RTC.write() ). That lib in turn uses Wire.h to talk to the RTC. Update 2: Below are a series of scope shots I took to help explain how the I 2 C is working without the external pullups. Update 3 (after I 2 C pullups added): Below is another series of scope shots I took after adding proper (4K7) pullup resistors to the I 2 C lines (on the same board). Rise times dropped from about 5 µs to 290 ns. I 2 C is much happier now. | 1) What happens when the I2C pullups are omitted? There will be no communication on the I 2 C bus. At all. The MCU will not be able to generate the I 2 C start condition. The MCU will not be able to transmit the I 2 C address. Wondering why it worked for 3 months? Read on. 2) The lack of pullups is likely to damage any of those two ICs in my board? Probably not. In this particular case (MCU, RTC, nothing else), definitely not. 3) Why was the MCU able to communicate with the I 2 C slave device in the first place? I 2 C requires pull-up resistors. But they weren't included in the schematic. Probably, you have internal pull-ups enabled on the ATmega. From what I've read 1 , ATmega have 20kΩ internal pull-ups, which can be enabled or disabled from the firmware. 20kΩ is way too weak for the I 2 C pull-up. But if the bus has a low capacitance (physically small) and communication is slow enough, then 20kΩ can still make the bus work. However, this is not a good reliable design, compared to using discrete pull-up resistors. 1 Not an ATmega guy myself. update: In response I 2 C waveforms, which were added to the O.P. The waveforms in the O.P. have a very long rise time constant. Here's what I 2 C waveforms usually look like PIC18F4550, Vcc=+5V, 2.2kΩ pull ups. Waveform shows SCL. The rise time on SDA is about the same. The physical size of the bus is moderate: 2 slave devices, PCB length ≈100mm. | {
"source": [
"https://electronics.stackexchange.com/questions/102611",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/29792/"
]
} |
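A rough feel for why the ~20 kohm internal pull-ups "work" but are marginal: the SCL/SDA rise time is set by the pull-up resistance and the total bus capacitance, t = R*C*ln(0.7/0.3) between the I2C 30%/70% thresholds. The 100 pF bus capacitance below is an assumption, not a measurement from the board in the question.

```python
import math

def i2c_rise_time(r_pullup, c_bus):
    """30%-to-70% rise time of an RC-charged I2C line: t = R*C*ln(0.7/0.3)."""
    return r_pullup * c_bus * math.log(0.7 / 0.3)

c_bus = 100e-12    # assumed total bus capacitance (pins + traces)
for r in (4.7e3, 10e3, 20e3, 50e3):
    print(f"R = {r/1e3:>4.1f} k : rise time ~ {i2c_rise_time(r, c_bus)*1e9:6.0f} ns")

# R =  4.7 k : rise time ~    398 ns
# R = 10.0 k : rise time ~    847 ns
# R = 20.0 k : rise time ~   1694 ns
# R = 50.0 k : rise time ~   4236 ns
```

Standard-mode I2C allows at most 1000 ns of rise time, so with around 100 pF on the bus anything much weaker than roughly 10 k is out of spec. That is consistent in spirit with the ~5 us rise seen with the internal pull-ups versus ~290 ns with the added 4.7 k resistors in the scope shots above.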
102,695 | I am interested in how the cpu/gpu presents (to whatever equipment that it does) video data after it has been processed. I have been told that the video is processed by the CPU/GPU and then sent to an integrated circuit by high speed serial that converts the serial signal to an appropriate display output, but I can't confirm this by searching online. I'm interested in the signaling and can't search for protocols/etc because I don't know what it is that I'm looking for. So does the CPU/GPU interact with video outputs directly (I can find these protocols easily) or is there a "middle man" so to say and if so what is it, type of chip/etc? | The image displayed on the monitor is stored in your computer's video RAM on the graphics card in a structure called a framebuffer. The data in the framebuffer is generally 24 bit RGB color, so there will be one byte for red, one for green, and one for blue for each pixel on the display, possibly with some extra padding bytes. The data in the video RAM can be generated by the GPU or by the CPU. The video RAM is continuously read out by a specialized DMA component on the video card and sent to the monitor. The signal output to the monitor is either an analog signal (VGA) where the color components are sent through digital to analog converters before leaving the card, or a digital signal in the case of DVI, HDMI, or DisplayPort. The hardware responsible for this also generate the horizontal and vertical sync signals as well as all of the appropriate delays so the image data is only sent to the monitor when it is ready for it. In the DVI and HDMI, the stream of pixel color information is encoded and serialized and sent via TMDS (transition minimized differential signaling) to the monitor. DisplayPort uses 8b/10b encoding. The encoding serves multiple purposes. First, TMDS minimizes signal transitions to reduce EMI emissions. Second, both TMDS and 8b/10b are DC balanced protocols so DC blocking capacitors can be used to eliminate issues with ground loops. Third, 8b/10b ensures a high enough transition density to enable clock recovery at the receiver as DisplayPort does not distribute a separate clock. Also, for HDMI and DisplayPort, audio data is also sent to the graphics card for transmission to the monitor. This data is inserted into pauses in the data stream between video frames. In this case, the video card will present itself as an audio sink to the operating system, and the audio data will be transferred via DMA to the card for inclusion with the video data. Now, you probably realize that for a 1920x1080 display with 4 bytes per pixel, you only need about 8 MB to store the image, but the video RAM in your computer is probably many times that size. This is because the video RAM is not only intended for storing the framebuffer. The video RAM is directly connected to the GPU, a special purpose processor designed for efficient 3D rendering and video decoding. The GPU uses its direct access to the video RAM to expedite the rendering process. In fact, getting data from main memory into video memory is a bit of a bottleneck as the PCI bus that connects the video card to the CPU and main memory is significantly slower than the connection between the GPU and the video RAM. Any software that requires lots of high resolution 3D rendering has to copy all of the 3D scene data (primarily 3D meshes and texture data) into video RAM so the GPU can access it efficiently. | {
"source": [
"https://electronics.stackexchange.com/questions/102695",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/37206/"
]
} |
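Putting numbers on the framebuffer description above; the resolution, pixel format and refresh rate are the common 1080p/60 Hz case, chosen purely for illustration.

```python
width, height = 1920, 1080
bytes_per_pixel = 4          # 24-bit RGB padded to 32 bits
refresh_hz = 60

frame_bytes = width * height * bytes_per_pixel
scanout_bandwidth = frame_bytes * refresh_hz    # continuous read load on the video RAM

print(f"framebuffer size : {frame_bytes / 2**20:.1f} MiB")      # ~7.9 MiB per frame
print(f"scan-out traffic : {scanout_bandwidth / 1e9:.2f} GB/s")  # ~0.50 GB/s read continuously
```

So one frame is only about 8 MiB, yet the scan-out DMA has to read roughly half a gigabyte per second continuously; the rest of the video RAM holds the GPU's meshes, textures and intermediate buffers, as the answer explains.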
103,053 | A motor rated for 3S (11.1 V) has an internal resistance of 0.12 ohm. The maximum current is 22 A. 11.1 V / 0.12 ohm = 92.5 A Doesn't this mean that by supplying a 11.1 V three phase current, the motor will burn up instantly? How does an electronic speed control (ESC) prevent the current from exceeding 22 A? | It doesn't, the motor itself does. Once the rotor starts spinning, the motor produces a voltage that opposes the flow of current; this is commonly called "back EMF (electromotive force)". The motor's speed increases until the back EMF reduces the current flow to the level needed to account for the actual physical load on the motor (plus losses). The heavy current you calculate is drawn only for an instant, just as the rotor starts spinning. If the rotor is prevented from spinning, then that current will be drawn indefinitely, and yes, it can destroy the motor. | {
"source": [
"https://electronics.stackexchange.com/questions/103053",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/7269/"
]
} |
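The back-EMF argument above in numbers, using the 11.1 V and 0.12 ohm figures from the question and treating the motor as a simple DC equivalent (a simplification for a three-phase brushless motor):

```python
def motor_current(v_supply, r_winding, back_emf):
    """Steady-state winding current: the back EMF subtracts from the applied voltage."""
    return (v_supply - back_emf) / r_winding

v, r = 11.1, 0.12
for bemf in (0.0, 5.0, 10.0, 10.8):
    print(f"back EMF {bemf:4.1f} V -> current {motor_current(v, r, bemf):6.1f} A")

# back EMF  0.0 V -> current   92.5 A   (stalled rotor: the worrying number from the question)
# back EMF  5.0 V -> current   50.8 A
# back EMF 10.0 V -> current    9.2 A
# back EMF 10.8 V -> current    2.5 A   (lightly loaded, near full speed)
```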
103,489 | I'm a high school student. I love computers and electronics. A few weeks ago I decided to build my own electronic gadget, but unfortunately I didn't have much knowledge of electronics, so I decided to learn. After Googling here and there, I came across a large amount of information. The one thing that daunts and intimidates me is what the term charge means. None of the books say what it means. Some say that it is a basic property of matter and leave it at that, without defining it further, whereas others don't even bother to mention it. On Wikipedia it is defined as: Electric charge is the physical property of matter that causes it to experience a force when close to other electrically charged matter. The definition is quite difficult and confusing. Similarly, from the All About Circuits website tutorials I got a different type of definition and understanding. From books, I came to know that we still don't know much about charge; even great scientists like Sir Stephen Hawking don't know much about it. Is that correct?
If not, then why was it written in the books (I mean here books not a book), what is its correct definition? Why majority of books don't define what charges is/are? | Like Ali said, charge is a property (or characteristic or feature) of a particle. The particle could be an atom, or it could just be a part of an atom like an electron or a proton. Unfortunately, we can't really say much about why particles have this property, or what causes this property to exist. We can only describe some things we observe about this property that we call charge . Charge comes in two types, which we arbitrarily label as "positive" and "negative". Positive charges repel each other with a force that we can measure, negative charges repel each other similarly, and opposite charges attract each other. We find that there are components of atoms called "protons" and "electrons" that are always positively and negatively charged, respectively. Charge is conserved. That means, in all the experiments we have tried, the difference between the amount of positive and negative charge in a closed system is the same at the end of the experiment as it was at the beginning of the experiment, and we therefore believe this is true of all closed systems in the universe. Even though we don't know what charge is or where it originally comes from, the description of what it does is enough for us to predict lots of useful things and make lots of useful tools like radios and computers. | {
"source": [
"https://electronics.stackexchange.com/questions/103489",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/38941/"
]
} |
103,941 | Coming to electronics as a hobbyist I'm not sure I understand why I have to solder header pins? On more than one occasion when dealing with Arduino / breadboard projects the circuit will not work until the header pins are soldered. Holding them in place will not solve this problem. Why? Low voltage? Need for persistent connection? Holding them in place doesn't work nearly as well as it looks like it should? Why? | Think about it mechanically - you have a straight row of pins and you insert them in slightly loose fitting holes (all in a line). Even if you hold them in place - can you be sure that one of the pins isn't fractionally bent in one direction different to the others. Think about this for a 3 pin header: - Clearly the pin in the middle isn't touching the inside of the yellow hole until you put unreasonable amount of pressure on the connector. Please, no complaints about the colours. | {
"source": [
"https://electronics.stackexchange.com/questions/103941",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/39148/"
]
} |
104,087 | I'm marginally familiar with the way that an AC transformer works. After viewing this question: Why don't all motors burn up instantly? It got me thinking about the same thing with AC transformers. The primary coil should provide very little resistance and thus allow a lot of current to flow. I'm guessing that the resistance comes from the fluctuating magnetic field. Is this correct? If so, I'm assuming that the current increases when a load is placed on the secondary coil because the magnetic field doesn't collapse into the primary coil but is used by the secondary coil instead? Also, does this mean that if a DC current was placed on a transformer that it would cause trouble? (i.e. very high current) I'm sure I'm not saying this correctly, so I'm hoping someone can set me straight. To sum up my question, what is the behavior of a transformer's primary coil (in terms of current flow) when no load is placed on the secondary coil, and what changes when a load is placed on the secondary coil? | Andy gave you the classic academic answer to your questions. Everything he stated is accurate, but I doubt as a beginner you will understand most of it. So, let me take a try at a simple explanation. The primary of a transformer is a coil wound around an iron core which can take one of several shapes. This primary winding has a very low resistance. ( Measure the resistance of a typical power transformer used in electronic bench equipment with a DMM and you will find it is just a few Ohms.) Connect a DC voltage source to this, the result is quite predictable. The voltage source will deliver as large a current as it is capable of to the primary winding and the transformer will get very hot and probably go up in smoke. That, or your DC supply will blow a fuse, burn up itself, or go into current-limit mode if it so equipped. Incidentally, while this high current is flowing, the primary winding is actually producing a uni-directional magnetic field in the transformer core. Now, measure the inductance of the secondary with an LRC meter. (That's a DMM-like device which measures only inductance, resistance and capacitance - "LRC".) For a 60 Hz power transformer you will likely read a few Henries of inductance across its primary leads. Next, apply that "L" value to the formula \$X_L = 2 \pi f L \$ to calaculate the "inductive reactance" ( "\$X_L\$" ) of the primary winding where "f" is the AC Main frequency of 60 Hz for the USA. The answer, \$X_L\$, is in units of Ohms just like DC resistance, but in this case these are "AC Ohms", aka "impedance". Next, apply this value of \$X_L\$ to "Ohm's Law" just like you would with a resistor connected to a DC source. \$I = \frac{V}{X_L}\$. In the usual USA case we have 120 volts RMS as V. You will now see that the current "I" is a quite reasonable value. Likely a few hundred milliamps ("RMS" also). That's why you can apply 120 volts to the unloaded transformer and it will run for a century without a problem. This few hundred milliamp primary current, called the "excitation current" produces heat in the transformer primary coil, but the mechanical bulk of the transformer can handle this amount of heat by design virtually forever. Nonetheless, as described above, it wouldn't take a 5 VDC power supply but a few minutes to burn up this same transformer if that DC supply was capable of supplying a large enough current to successfully drive the low-R DC coil. That's the "miracle" of inductive reactance! 
It's the self-created alternating magnetic field produced by the AC current itself in the transformer core which limits the current when driven from an AC voltage source. That's for the unloaded transformer. Now, connect an appropriate resistive load to the secondary. The excitation current described above will continue to flow at more-or-less the same magnitude. But now and additional current will flow in the primary. This is called the "reflected current" - the current which is "caused" by the secondary resistive load drawing current from the transformer's secondary. The magnitude of this reflected current is determined by the turns ratio of the power transformer. The simplest way to determine the reflected current is to use the "VA" (volts-amps) method. Multiply the tranformer's secondary voltage by the current in amps being drawn by the resistive load attached to the secondary. (This is essentially "Watts" - volts times amps. ) The "VA Method" says that the VA of the secondary must equal the incremental VA of the primary. ("Incremental" in this case means "in addition to the excitation current".) So, if you have a typcial AC power transformer with a 120 VRMS primary and a 6 VRMS secondary and you attach a 6 Ohm resistor to the secondary, that 6 Ohm load will draw 1.0 Amp RMS from the secondary. So, the secondary VA = 6 x 1 = 6. This secondary VA must numerically equal the primary VA, where the voltage is 120 VRMS. Primary VA = Secondary VA = 6 = 120 x I. I = 6/120 or only 50 milli-Amps RMS. You can verify most of this using a simple DMM to measure the currents in the primary and secondary under no-load and load conditions. Try it yourself, but be careful on the primary because that 120 VRMS is near-lethal. However, you will NOT be able to directly observe the "incremental" current in the primary caused by adding the load to the secondary. Why? That answer is not so simple! The excitation current and the reflected current are 90 degrees out-of-phase. They "add up", but they add up according to vector math, and that's another discussion altogether. Unfortunately, Andy's beautifully expressed answer above will be barely appreciated unless the reader understands vector math as it is applied to AC circuits. I hope my answer, and your verification experiments, will give you a gut-level numerical understanding of the how a power transformer "works". | {
"source": [
"https://electronics.stackexchange.com/questions/104087",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/5432/"
]
} |
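A worked version of the unloaded/loaded arithmetic described above. The 3 H primary inductance is an assumed "few henries" value; the 6 V / 6 ohm secondary example is the one used in the answer, and winding resistance and core losses are ignored.

```python
import math

v_primary, f = 120.0, 60.0            # US mains
l_primary = 3.0                       # assumed primary inductance, henries
x_l = 2 * math.pi * f * l_primary     # inductive reactance of the primary, ohms
i_excitation = v_primary / x_l        # unloaded (excitation) primary current
print(f"X_L = {x_l:.0f} ohm, excitation current ~ {i_excitation*1e3:.0f} mA RMS")   # ~1131 ohm, ~106 mA

# Reflected current by the VA method for a 6 V secondary feeding a 6 ohm load
v_sec, r_load = 6.0, 6.0
i_sec = v_sec / r_load                # 1.0 A in the secondary
va = v_sec * i_sec                    # 6 VA
i_reflected = va / v_primary          # extra primary current due to the load
print(f"secondary: {i_sec:.1f} A, {va:.0f} VA -> reflected primary current {i_reflected*1e3:.0f} mA")
```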
104,393 | Recently, I've noticed a trend in power supply manufacturers touting their PSUs (generator or battery inverter etc.) as having a pure sine wave output. I've also seen people saying that warranties will be invalidated if devices/motor homes etc. are connected to anything other that a power source with a pure sine wave output. I wonder what the world did before such power sources existed. Is there science behind this? Surely a standard petrol generator with a good automatic voltage regulator (AVR) or an old-fashioned coil regulator will be enough to stabilize the output to run sensitive electronics like LCD televisions or computers? | Historically, inverters (electronic circuits that take DC power and convert it to AC to simulate the power line) were pretty awful in the waveshapes they produced. Early inverters produced little better than square waves. This means they included significant power at frequencies that devices were not designed to handle. Most devices that are intended to plug into wall power take the sine shape of the voltage for granted. Some might count on the peaks of the sine being a particular voltage, while others count on the RMS. For a sine wave, the peaks are at \$\sqrt{2}\$ times the RMS, whereas for a square wave the peak and RMS are the same. This presents a problem in deciding what voltage square wave to produce. If you match the power line in RMS, then lightbulbs, toasters, and other "dumb" devices will largely work. However, electronic devices that full wave rectify the line will see a significantly lower voltage. If you raise the square wave voltage, then you might overdrive and damage devices that use the RMS. The extra harmonics in the square wave can also cause problems on their own. Transformers designed for the power line frequency, like 60 Hz, might not deal well with the higher frequencies. Or these frequencies might cause extra current and heating without them being harnessed for more power. The sharp transitions can also overload electronics that is expecting a maximum slope from the power voltage. For example, just a simple capacitor accross the AC line would in theory conduct infinite current if the voltage changed infinitely quickly. The next step in inverters was "modified sine", which had a extra ground "step" in the square wave. The point here is that this reduces the power in the harmonics relative to a full square wave. However, many of the problems with square waves were still present, although generally reduced. Modern electronics that can efficiently switch at many times the power line frequency can produce a output voltage that is pretty close to a sine, meaning it has little harmonic content. This eliminates the issues with square wave and modified sine outputs, since the power line itself is ideally a sine. It is still a bit more expensive to produce inverters with sine wave outputs, but the extra cost is no longer that much and is getting steadily lower. Today, sine wave output inverters are common. Note that inverters intended to drive the power line backwards, called grid-tie inverters, are all sine wave output. This is due to a lot of regulations covering what you are allowed to do with the power line, especially when you feed power backwards. | {
"source": [
"https://electronics.stackexchange.com/questions/104393",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/24130/"
]
} |
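The peak-versus-RMS dilemma described above is easy to check numerically: a square wave cannot match a sine's peak and RMS at the same time, whereas a modified-sine waveform with a dwell at zero can at least get the ratio right. The 50%-duty "modified sine" below is a crude illustration, not any particular inverter's waveform.

```python
import numpy as np

t = np.linspace(0, 1, 100_000, endpoint=False)       # one cycle, normalised amplitude
sine = np.sin(2 * np.pi * t)
square = np.sign(sine)
# crude "modified sine": +1 for a quarter cycle, 0, -1 for a quarter cycle, 0
modified = np.select([t < 0.25, t < 0.5, t < 0.75], [1.0, 0.0, -1.0], 0.0)

for name, w in [("sine", sine), ("square", square), ("modified sine", modified)]:
    peak = np.max(np.abs(w))
    rms = np.sqrt(np.mean(w**2))
    print(f"{name:14s} peak = {peak:.2f}, RMS = {rms:.3f}, peak/RMS = {peak/rms:.2f}")

# sine           peak = 1.00, RMS = 0.707, peak/RMS = 1.41
# square         peak = 1.00, RMS = 1.000, peak/RMS = 1.00  <- can't match both peak and RMS of mains
# modified sine  peak = 1.00, RMS = 0.707, peak/RMS = 1.41
```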
104,759 | I'm trying to understand why certain CPU cache memories are faster than others. When comparing cache memory to something like main memory, there are differences in memory type (SRAM vs DRAM), and locality issues (on-chip vs having to traverse a memory bus) that can affect access speeds. But L1 and L2 are generally on the same chip, or at least on the same die, and I think they are the same type of memory. So why is L1 faster? | No, they're not the same type of RAM, even though they're on the same chip that uses the same manufacturing process. Of all the caches, the L1 cache needs to have the fastest possible access time (lowest latency), versus how much capacity it needs to have in order to provide an adequate "hit" rate. Therefore, it is built using larger transistors and wider metal tracks, trading off space and power for speed. The higher-level caches need to have higher capacities, but can afford to be slower, so they use smaller transistors that are packed more tightly. | {
"source": [
"https://electronics.stackexchange.com/questions/104759",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/39540/"
]
} |
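One standard way to see why L1 latency is weighted so heavily is the average-memory-access-time formula. This is not taken from the answer above, and the cycle counts and miss rates are illustrative only.

```python
def amat(l1_hit, l1_miss_rate, l2_hit, l2_miss_rate, mem_latency):
    """Average memory access time in cycles: every access pays the L1 hit time,
    L1 misses additionally pay the L2 hit time, and L2 misses pay the trip to DRAM."""
    return l1_hit + l1_miss_rate * (l2_hit + l2_miss_rate * mem_latency)

# Illustrative numbers only: 4-cycle L1, 12-cycle L2, 200-cycle DRAM
print(amat(l1_hit=4, l1_miss_rate=0.05, l2_hit=12, l2_miss_rate=0.2, mem_latency=200))  # 6.6 cycles
# Same hierarchy, but with a bigger, slower L1 (8 cycles) that misses slightly less often
print(amat(l1_hit=8, l1_miss_rate=0.03, l2_hit=12, l2_miss_rate=0.2, mem_latency=200))  # 9.56 cycles
```

Even though the bigger, slower L1 misses less often, every single access pays its hit time, so the average gets worse - which is the speed/capacity trade-off the answer describes.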
104,760 | I'm trying to understand how the BTL driver on TI's DRV8662 Piezo Haptic Driver ( datasheet ) works. What I am confused about is how a 200Vpp output signal can be generated from a driver that only has rails at GND and 100V. If I understand that datasheet correctly the driver looks something like this: The output according to the datasheet is this: What I am confused about is that the differential outputs have a common mode voltage of 50V (as shown in figure 8). So how does the voltage difference between the Out+ and Out- signals lead to a 200Vpp swing (as in figure 10) when figure 8 clearly shows that the maximum difference between these two voltages is 100V? How can the output swing to -100V if no negative rail is ever supplied to the part? I tried reading these documents to clear up my issue but it didn't help. BTL Speakerdrive Reference Note ( link ) Audio Design Note ( link ) | In a bridge-tied load (BTL) output the load is connected between Out+ and Out-, not between one output and ground, and the two outputs are driven 180 degrees out of phase around the 50V common-mode level. When Out+ is at 100V, Out- is at 0V, so the load sees +100V; half a cycle later Out+ is at 0V and Out- is at 100V, so the load sees -100V. The differential voltage across the load therefore swings from +100V to -100V, which is 200Vpp, even though neither pin ever goes below ground or above the 100V boost rail. Figure 8 shows the two single-ended pin voltages (each confined to 0-100V), while figure 10 shows their difference, which is what the piezo actually experiences. No negative supply is needed, because "-100V" here only means that Out- is 100V higher than Out+, not that either pin is below ground. | {
"source": [
"https://electronics.stackexchange.com/questions/104760",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/23669/"
]
} |
105,064 | What is the usage of these negative voltages? Are they there only for backward compatibility? In today's PC power supplies, we have: +12V +5V +3.3V but also: -12V -5V But the current ratings of the negative rails are much smaller than the positive ones. If we were back in the '80s, where op-amps were always powered symmetrically at +12V -12V: okay. But nowadays, almost everything you may find on a motherboard is digital logic only powered by positive voltages. Except for the RS232, which is an almost obsolete bus, I don't see any reason for having negative rails distributed by the power supply. Because it's very high volume, I suppose that cost drives everything here. Thus, why does each PSU have to deliver those voltages if they are barely used? (The very low current rating of the negative rails of PSUs leads me to suppose this.) Wouldn't it be less expensive to let every hardware provider add their own embedded SMPS when a negative voltage is required? | PCs are stuffed with requirements which relate to backwards compatibility - and -Ve rails are part of that. I'm not sure about -5V, but there's a -12V line on the original PCI bus, so if you want to provide proper PCI sockets, then you need a -12V rail, even if the last person making a PCI card which needed -12V died in 2002. Then if you want to design a standard power connector pin-out which can be used by people building motherboards with PCI connectors on it, then it needs a -12V rail, or else the motherboard manufacturer needs to start adding power supplies to his motherboard. So now you have a -12V rail on your power connector even after people have stopped fitting PCI connectors. Some of these things are remarkably difficult to get away from — the 'legacy free' PC with no PS/2-style keyboard/mouse connections was being talked about as imminent 15 years ago, but desktop machines still tend to have those connectors. It just turns out to be cheaper/easier to keep supporting all the old cruft than it does to drop it and clean up the design. Or perhaps it doesn't, and PCs have sunk under the accumulated weight of all this baggage and people have moved on to other form-factors... | {
"source": [
"https://electronics.stackexchange.com/questions/105064",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/13366/"
]
} |
105,096 | I have recently been looking at the datasheets for the 74HC139 IC in order to see if it was suitable for my project, and have come across the following logic diagram which strikes me as a little bit odd: simulate this circuit – Schematic created using CircuitLab For each of the outputs Yn, there are two NOT gates after the triple-input NAND gate; I don't understand why this is necessary as simple boolean logic tells us: $$\overline{\overline{A}}\equiv A\qquad \forall A \in \{\text{TRUE}, \text{FALSE}\}$$ Therefore I am assuming there is some electronics-based reason why there are two inverters before the output? I have heard NOT gates called inverting buffers before, and these supposedly isolate the circuit before and after; however, I cannot claim to understand the use of this so I'd appreciate any enlightenment! | Possible reasons: (1) Load balancing: the driver of A has an unknown fan-out to drive. Fan-out within the circuit and the parasitics it induces can be calculated for the specific circuits, but we do not know the other circuits that are connected to the driver. Essentially the inverters are being used as a buffer equivalent and help manage the parasitics. (2) Timing and total current: to reduce the transition glitch, the second-stage inverters can be sized for a faster transition switch. Doing so makes the NAND gate's inputs update at nearly the same time. With the inputs changing less periodically, power can be saved and transition glitches can be reduced. (3) Signal boosting and power: let's say VDD = 1.2V but the input is 0.9V. The input is still a logical 1, but considered weak, which causes slower switching and burns more power. The first inverters can be sized to handle transitions better, making the voltage more predictable for the rest of the design. There is also the possibility of a change in voltage domain; in this case the inverters in the first stage can act as a step down, e.g. from a 5V input domain to a 2V domain. (4) Any combination of the above. | {
"source": [
"https://electronics.stackexchange.com/questions/105096",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/33064/"
]
} |
105,116 | With Apple's Lightning cable, and USB 3.0, reversible cables are taking off, and I personally think this is very convenient. But we have had better than reversible for a long time, in the form of the headphone jack, which can be inserted in any direction, not just 2 directions. Why isn't a headphone-jack-shaped connector used for data more often? All I ever see that shape used for is audio and power supplies (I've seen it used once for data, in the iPod shuffle, but that's it). | Digital signals are highly susceptible to the noise generated by rotating the plug. For audio, these noises (cracks) are rarely audible unless they last longer than 50us (simply because we're unable to hear frequencies over 20kHz). So, the cracks become audible only when the surface of the connector has deteriorated enough that the period of lack of connection is substantially longer. As a rule of thumb, any connector where there are moving parts while the connection is established is a terrible idea for high frequency digital data. It might be acceptable for low frequency digital data, as well as power supplies. Finally, most digital standards require quick detection of a disconnect - even though the above issue could be worked around with proper ECC (Error Correcting Codes), USB considers any loss of connectivity for over 2ms to be a disconnect. (USB 3.0 SS is even more strict.) | {
"source": [
"https://electronics.stackexchange.com/questions/105116",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/23996/"
]
} |
105,136 | What's the difference between a schematic , a block diagram , a wiring diagram and a PCB layout ? Why do engineers want a schematic instead of a wiring diagram? Where does Fritzing fit into this? | Schematic A schematic shows connections in a circuit in a way that is clear and standardized. It is a way of communicating to other engineers exactly what components are involved in a circuit as well as how they are connected. A good schematic will show component names and values, and provide labels for sections or components to help communicate the intended purpose. Note how connections on wires (or "nets") are shown using dots and non-connections are shown without a dot. Block Diagram A block diagram shows a higher level (or organizational layout) of functional units in a circuit (or a device, machine, or collection of these). It is meant to show data flow or organization between separate units of function. A block diagram gives you an overview of the interconnected nature of circuit assemblies or components. Wiring Diagram A wiring diagram is sometimes helpful to illustrate how a schematic can be realized in a prototype or production environment. A proper wiring diagram will be labeled and show connections in a way that prevents confusion about how connections are made. Typically they are designed for end-users or installers. They focus on connections rather than components . PCB Layout A PCB Layout is the resulting design from taking a schematic with specific components and determining how they will physically be laid out on a printed circuit board. To produce a PCB Layout, you must know the connections of components, component sizes (footprints), and a myriad of other properties (such as current, frequencies, emissions, reflections, high voltage gaps, safety considerations, manufacturing tolerances, etc.). Fritzing Fritzing is a popular open-source software program designed to help you create electronics prototypes. It uses a visual approach to allow you to connect components to Arduino using a virtual breadboard, and even provides ways to design a PCB. Its strength is in the ease with which new users can approach it. One of the principal working views is the virtual breadboard: However, as you can see, it can be time-consuming to tell exactly how components are connected, even if you are very familiar with how breadboard connections work (as most electronics engineers are). As a circuit gets more complex, the visualization becomes more cluttered. Fritzing provides a way to produce a schematic: Be sure to use this to produce a schematic if you need to ask questions about your circuit. It will help others to quickly understand the components and connections involved in your design. Prototype Photo Sometimes a photo can help engineers troubleshoot your design. Especially if quality issues are suspected, such as soldering reliability, improper connections, incorrect polarities, and other problems which might be revealed in a photo. However, realize that most photos are not immediately useful, and if your project is complicated, a picture will do little more than show that you've spent a lot of time and effort on your project! Hint: Not helpful! Images were obtained using internet image searches with license set to public domain or free to use for non-commercial use. | {
"source": [
"https://electronics.stackexchange.com/questions/105136",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/2028/"
]
} |
105,283 | I guess it's a bad thing to try to debug a microcontroller-based project using printf(). I can understand that you have no predefined place to output to, and that it could consume valuable pins. At the same time I've seen people consume a UART TX pin for outputting to the IDE terminal with a custom DEBUG_PRINT() macro. | I can come up with a few disadvantages of using printf(). Keep in mind that "embedded system" can range from something with a few hundred bytes of program memory to a full-blown rack-mount QNX RTOS-driven system with gigabytes of RAM and terabytes of nonvolatile memory.
1. It requires someplace to send the data. Maybe you already have a debug or programming port on the system, maybe you don't. If you don't (or the one you have is not working) it's not very handy.
2. It's not a lightweight function in all contexts. This could be a big deal if you have a microcontroller with only a few K of memory, because linking in printf might eat up 4K all by itself. If you have a 32K or 256K microcontroller, it's probably not an issue, let alone if you have a big embedded system.
3. It's of little or no use for finding certain kinds of problems related to memory allocation or interrupts, and can change the behavior of the program when statements are included or not.
4. It's pretty useless for dealing with timing-sensitive stuff. You'd be better off with a logic analyzer and an oscilloscope or a protocol analyzer, or even a simulator.
5. If you have a big program and you have to re-compile many times as you change printf statements around, you could waste a lot of time.
What it's good for: it is a quick way to output data in a preformatted way that every C programmer knows how to use - zero learning curve. If you need to spit out a matrix for the Kalman filter you're debugging, it might be nice to spit it out in a format that MATLAB could read in. Certainly better than looking at RAM locations one at a time in a debugger or emulator. I don't think it's a useless arrow in the quiver, but it should be used sparingly, along with gdb or other debuggers, emulators, logic analyzers, oscilloscopes, static code analysis tools, code coverage tools and so on. | {
"source": [
"https://electronics.stackexchange.com/questions/105283",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/13393/"
]
} |
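A minimal sketch of the kind of DEBUG_PRINT() macro mentioned in the question, assuming a hypothetical uart_write_string() routine exists elsewhere in the project; when DEBUG is not defined the macro compiles away entirely, so a release build pays neither the code-size nor the timing cost discussed in the answer above:

#include <stdio.h>

/* Hypothetical UART transmit routine provided elsewhere in the project. */
void uart_write_string(const char *s);

#ifdef DEBUG
/* Format into a small buffer and push it out of the spare UART TX pin. */
#define DEBUG_PRINT(...) \
    do { \
        char dbg_buf[64]; \
        snprintf(dbg_buf, sizeof dbg_buf, __VA_ARGS__); \
        uart_write_string(dbg_buf); \
    } while (0)
#else
/* Debugging disabled: calls disappear completely from the build. */
#define DEBUG_PRINT(...) do { } while (0)
#endif

Usage would look like DEBUG_PRINT("adc=%u\n", adc_value); note that even this sketch still links in snprintf, so the code-size caveat above applies.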
105,559 | Does a MOSFET allow current flow in reverse direction (i.e.; from source to drain)? I made a Google search, but couldn't find a clear statement about this matter. I have found this similar question , but it is about detecting current direction from the schematic symbol of a MOSFET. And under the same question, there is this answer which states that MOSFETs have no intrinsic polarity, thus they can conduct in both directions. However, that answer has no up/down votes or comments, so I can't make sure of it. I need a clear answer on this. Does a MOSFET conduct in both directions? | Yes it does conduct in either direction. Due to the body diode, most discrete MOSFETs cannot block in the reverse direction, but the channel will conduct in either direction when the gate is biased "on". If you want to conduct and block in both directions you need two MOSFETs in series. MOSFETs used as near-perfect rectifiers are usually used in the reverse direction for conduction (so they can block in the other direction). Edit: Your schematic here: Illustrates one example of switching AC with two MOSFETs (one of which will be conducting in reverse at any given time when the switches are on). Another example is here , from the LT4351 datasheet: | {
"source": [
"https://electronics.stackexchange.com/questions/105559",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/5542/"
]
} |
105,988 | How are or were vias made commercially? Wikipedia ( http://en.wikipedia.org/wiki/Via_(electronics) ) mentions
"The hole is made conductive by electroplating, or is lined with a tube or a rivet" Can anyone provide more details on these processes, with an eye towards replicating the process? (I realize the standard DIY way is to thread some single-core through and solder it. That seems relatively slow and not amenable to automation). | PCB production after stack up cure: Drill the hole. This is through the solid copper (un-etched) outer layers and feature etched internal layers (for a 4+ layer board). Copper burrs are removed in the deburring process. Melted epoxy resin is removed by a chemical desmear process. (Without this, you cannot get good plating coverage to the internal copper.) Clarification: This step is only on 4+ layer boards. The plating around the via top and bottom annular rings will get good conduction on a 2 layer board, even if the edges are epoxy insulated. Sometimes (but less seen due to nasty organic chemicals needed) the resin and glass fiber is etched back to expose more of the copper layers. (Again: only on 4+ layer boards) Around 50 microns of electro-less copper is deposited within the hole to allow electroplating. Polymer resist is added to the board to cover everything that will be etched away (all but via pads, normal pads, traces, etc). Around 1 mil of electroplated copper is deposited into the barrel and on every surface of the PCB not covered with resist. Metallic resist is plated over the electroplated copper. Polymer resist is removed. The Etching process removes all copper not covered by metallic resist. Metallic resist is removed. Solder mask is applied. Surface finish is applied (HASL, ENIG, etc.) Some things to consider about vias and DIY via replacements. Thermal expansion is the death of PCB boards, and vias are the most abused portion. An FR4 material is resin impregnated glass fibers. So you have a weave of fibers in the X and Y direction, covered with "Jello". The glass fibers have little CTE (Coefficient of Thermal Expansion). So the board will have maybe 12-18 ppm\C in the X and Y direction. There are no glass fibers restricting motion in the Z direction (the board thickness). So it might expand 70-80 ppm\C. Copper is only a fraction of that amount. So as the board heats up, it is tugging on the via barrel. This is where cracks will form between inner layers and the via barrel, severing the electrical connection and killing the circuit. For a home made via, you are most likely going to have issue with the plating being thinnest in the middle of the barrel, and this area failing with temperature expansion. | {
"source": [
"https://electronics.stackexchange.com/questions/105988",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/7639/"
]
} |
106,265 | What is the maximum length of cable that could be used to connect two I2C devices (I2C master-> I2C slave)? Yes, I know that I2C is really designed for intra-board communication. I have been tasked with a "design goal" of using a common I2C bus for multiple I2C slaves to support a demo. For purposes of clarity, let's assume the standard I2C bus rate of 100 kHz. | For fast mode, and resistor pullup, capacitance should be less than 200pF, according to this NXP document I2C-bus specification and user manual . With current source pullups you can go to 400pF, but not with resistors. If your wire is 20pF/30cm and you have another 50pF of stray and input capacitance, you're limited to 2.25m of cable length. Different assumptions will lead to different numbers. | {
"source": [
"https://electronics.stackexchange.com/questions/106265",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/923/"
]
} |
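A small sketch of the cable-length arithmetic from the answer above, treating the 200 pF budget, 50 pF of stray/input capacitance and 20 pF per 30 cm of wire as assumed example figures:

#include <stdio.h>

int main(void)
{
    const double bus_budget_pf  = 200.0;        /* max bus capacitance with resistor pull-ups */
    const double stray_pf       = 50.0;         /* stray wiring plus device input capacitance */
    const double cable_pf_per_m = 20.0 / 0.30;  /* 20 pF per 30 cm of cable */

    /* Whatever capacitance is left in the budget can be spent on cable. */
    double max_len_m = (bus_budget_pf - stray_pf) / cable_pf_per_m;
    printf("Maximum cable length: %.2f m\n", max_len_m);   /* prints 2.25 m */
    return 0;
}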
106,718 | When would you use a voltage regulator vs a resistor voltage divider? Are there any uses for which a resistor divider is particularly bad? | These two circuits types have very different applications. A resistor divider is generally used to scale a voltage so that it can be sensed/detected/analysed more easily. For example, say you want to monitor a battery voltage. The voltage may go as high as 15V. You are using a microcontroller's analog-to-digital converter ("ADC"), which is using 3.3V for its reference. In this case, you may choose to divide the voltage by 5, which will give you up to 3.0V at the input of the ADC. There are a couple of drawbacks. One is that there is always current flowing through the resistors. This is important in power-constrained (battery powered) circuits. The second problem is that the divider can't source any significant current. If you start drawing current, it changes the divider ratio, and things don't go as planned :) So, it's really only used to drive high-impedance connections. A voltage regulator, on the other hand, is designed to provide a fixed voltage regardless of its input. This is what you want to use to provide power to other circuitry. As far as creating multiple voltage rails: For this example, let's assume that you are using switching regulators that are 80% efficient. Say that you have 9V, and want to produce 5V and 3.3V. If you use the regulators in parallel, hooking each one up to 9V, then both rails will be 80% efficient. If, however, you create 5V and then use that to create 3.3V, then your 3.3V efficiency is (0.8 * 0.8) = only 64% efficient. Topology matters! Linear regulators, on the other hand, are assessed differently. They simply lower the output voltage, for any given current. The power difference is wasted as heat. If you have 10V in, and 5V out, then they are 50% efficient. They have their benefits, though! They are smaller, less expensive, and less complicated. They're electrically quiet, and create a smooth output voltage. And, if there isn't much difference between the input and output voltages, then the efficiency can top a switching supply. There are ICs which provide multiple regulators. Linear Tech, Maxim Integrated, Texas Instruments, all have a good selection. The LTC3553, for example, provides a combination of a Lithium battery charger, a switching buck regulator, and a linear regulator. They have flavors with or without the charger, some with two switchers and no linears, some with multiple linears... One of my current products uses a 3.7V battery, and needs 3.3V and 2.5V. It was most efficient for me to a linear for the 3.3V, and a switcher for the 2.5V (fed by the battery, not the 3.3V rail). I used the LTC3553. You'll want to spend some time on their respective website's product selector tools. Good luck! | {
"source": [
"https://electronics.stackexchange.com/questions/106718",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/36024/"
]
} |
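A short sketch of why a resistor divider can't source significant current, as described in the answer above. The values are assumed examples (a divide-by-5 divider from 15 V down to 3 V, matching the battery-monitor example), and the load is modeled simply as a resistance in parallel with the bottom resistor:

#include <stdio.h>

int main(void)
{
    const double vin = 15.0, r_top = 40e3, r_bot = 10e3;   /* nominal divide-by-5 -> 3.0 V */
    const double loads[] = { 1e9, 100e3, 10e3, 1e3 };      /* from "no load" down to 1 kohm */

    for (int i = 0; i < 4; i++) {
        /* The load resistance appears in parallel with the bottom resistor. */
        double r_eff = (r_bot * loads[i]) / (r_bot + loads[i]);
        double vout  = vin * r_eff / (r_top + r_eff);
        printf("R_load = %9.0f ohm -> Vout = %.2f V\n", loads[i], vout);
    }
    return 0;
}

With a high-impedance ADC input the output sits near 3.0 V, but a 1 kohm load drags it down to about 0.33 V, which is exactly the "changes the divider ratio" problem the answer mentions.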
106,933 | Right, so we have 8-bit, 16-bit and 32-bit microcontrollers in this world at the moment. All of them are often used. How different is it to program 8-bit and 16-bit microcontrollers? I mean, does it require a different technique or skill? Let's take Microchip for example. What new things does a person need to learn if they want to transition from 8-bit microcontrollers to 32-bit microcontrollers? | In general, going from 8 to 16 to 32 bit microcontrollers means you will have fewer restraints on resources, particularly memory, and the width of registers used for doing arithmetic and logical operations. The 8, 16, and 32-bit monikers generally refer to both the size of the internal and external data busses and also the size of the internal register(s) used for arithmetic and logical operations (used to be just one or two called accumulators, now there are usually register banks of 16 or 32). I/O port sizes will also generally follow the data bus size, so an 8-bit micro will have 8-bit ports, a 16-bit will have 16-bit ports etc. Despite having an 8-bit data bus, many 8-bit microcontrollers have a 16-bit address bus and can address 2^16 or 64K bytes of memory (that doesn't mean they have anywhere near that implemented). But some 8-bit micros, like the low-end PICs, may have only a very limited RAM space (e.g. 96 bytes on a PIC16). To get around their limited addressing scheme, some 8-bit micros use paging, where the contents of a page register determine one of several banks of memory to use. There will usually be some common RAM available no matter what the page register is set to. 16-bit microcontrollers are generally restricted to 64K of memory, but may also use paging techniques to get around this. 32-bit microcontrollers of course have no such restrictions and can address up to 4GB of memory. Along with the different memory sizes is the stack size. In the lower end micros, this may be implemented in a special area of memory and be very small (many PIC16's have an 8-level deep call stack). In the 16-bit and 32-bit micros, the stack will usually be in general RAM and be limited only by the size of the RAM. There are also vast differences in the amount of memory -- both program and RAM -- implemented on the various devices. 8-bit micros may only have a few hundred bytes of RAM, and a few thousand bytes of program memory (or much less -- for example the PIC10F320 has only 256 14-bit words of flash and 64 bytes of RAM). 16-bit micros may have a few thousand bytes of RAM, and tens of thousands of bytes of program memory. 32-bit micros often have over 64K bytes of RAM, and maybe 1/2 MB or more of program memory (the PIC32MZ2048 has 2 MB of flash and 512KB of RAM; the newly released PIC32MZ2064DAH176, optimized for graphics, has 2 MB of flash and a whopping 32MB of on-chip RAM). If you are programming in assembly language, the register-size limitations will be very evident, for example adding two 32-bit numbers is a chore on an 8-bit microcontroller but trivial on a 32-bit one. If you are programming in C, this will be largely transparent, but of course the underlying compiled code will be much larger for the 8-bitter. I said largely transparent, because the size of various C data types may be different from one size micro to another; for example, a compiler which targets an 8 or 16-bit micro may use "int" to mean a 16-bit signed variable, and on a 32-bit micro this would be a 32-bit variable.
So a lot of programs use #defines to explicitly say what the desired size is, such as "UINT16" for an unsigned 16-bit variable. If you are programming in C, the biggest impact will be the size of your variables. For example, if you know a variable will always be less than 256 (or in the range -128 to 127 if signed), then you should use an 8-bit (unsigned char or char) on an 8-bit micro (e.g. PIC16) since using a larger size will be very inefficient. Likewise for 16-bit variables on a 16-bit micro (e.g. PIC24). If you are using a 32-bit micro (PIC32), then it doesn't really make any difference since the MIPS instruction set has byte, word, and double-word instructions. However on some 32-bit micros, if they lack such instructions, manipulating an 8-bit variable may be less efficient than a 32-bit one due to masking. As forum member vsz pointed out, on systems where you have a variable that is larger than the default register size (e.g. a 16-bit variable on an 8-bit micro), and that variable is shared between two threads or between the base thread and an interrupt handler, one must make any operation (including just reading) on the variable atomic, that is, make it appear to be done as one instruction. This is called a critical section. The standard way to mitigate this is to surround the critical section with a disable/enable interrupt pair. So going from 32-bit systems to 16-bit, or 16-bit to 8-bit, any operations on variables of this type that are now larger than the default register size (but weren't before) need to be considered a critical section. Another main difference, going from one PIC processor to another, is the handling of peripherals. This has less to do with word size and more to do with the type and number of resources allocated on each chip. In general, Microchip has tried to make the programming of the same peripheral used across different chips as similar as possible (e.g. timer0), but there will always be differences. Using their peripheral libraries will hide these differences to a large extent. A final difference is the handling of interrupts. Again there is help here from the Microchip libraries. | {
"source": [
"https://electronics.stackexchange.com/questions/106933",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/20711/"
]
} |
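A brief sketch of the two C-level points made in the answer above: using explicit-width types instead of a plain int, and treating access to a variable wider than the native register as a critical section. The interrupt enable/disable macros are assumed placeholders for whatever intrinsics the target compiler actually provides:

#include <stdint.h>

/* Placeholders for the target's real interrupt-control intrinsics. */
#define DISABLE_INTERRUPTS()   /* e.g. cli() on AVR or __disable_irq() on ARM */
#define ENABLE_INTERRUPTS()    /* e.g. sei() on AVR or __enable_irq() on ARM */

/* Shared between an ISR and the main loop; 16 bits wide even on an 8-bit core. */
static volatile uint16_t tick_count;

/* On an 8-bit machine this read takes more than one instruction, so an
 * interrupt could update half of the value mid-read; guard the copy. */
uint16_t read_tick_count(void)
{
    uint16_t copy;
    DISABLE_INTERRUPTS();
    copy = tick_count;
    ENABLE_INTERRUPTS();
    return copy;
}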
107,128 | The Wikipedia article is very short and doesn't explain the concept very well, and there aren't any other sites that I can find that give a simple explanation. What does it mean by 100% modulation? I understand the basic concepts behind amplitude modulation, frequency modulation, and pulse width modulation, but I have never really understood what is meant by the "amount" of modulation or modulation depth. Can somebody please shed some light on the subject? Thanks! | In the image below a amplitude modulated sine wave: 0% unmodulated, the sine envelope is not visible at all; < 100% modulation depth is normal AM use; 100% modulation depth, the sine envelope touch at y=0. Maximum modulation that can be retrieved with an envelope detector without distortion; > 100% modulation depth, "overmodulation", the original sine wave can no longer be detected with an envelope detector. A still overview of the most important modulation depths (0, 50, 100 and 200%): The animation was created using gnuplot using the following script: unset xtics
set yrange [-3:3]
set samples 10000
do for [d=0:200] { plot sin(2*pi*3*x)*(1+(sin(2*pi*x/10))*d/100) title sprintf("%3i%%",d); } | {
"source": [
"https://electronics.stackexchange.com/questions/107128",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/35366/"
]
} |
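The same idea written as an equation rather than an animation: with modulation index \$m\$ (the fractional depth plotted above), an AM signal can be written as
$$s(t) = A_c\left(1 + m\cos(2\pi f_m t)\right)\cos(2\pi f_c t)$$
so \$m = 0\$ is the unmodulated carrier, \$m = 1\$ (100% modulation) lets the envelope just touch zero, and \$m > 1\$ overmodulates and distorts the output of an envelope detector.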
107,141 | Recently, i see such a circuit in a board schematic, but can't figure out how it works, why the first Op Amp has positive feedback ???? Thanks. Question, again, now i complete the circuit with part number and values, just as below. By simulation with PSpice, i know the "AC coupled" negative input of the comparator is to synchronize the oscillation frequency of the generator. But it seems there is one 'working window', if the pulse rate is two high, the output wave form will be oscillation on the amplitude. I want to know if this a 'classic' or a widely used technique. And i want to know if there are some accurate mathematical method to predicated the behavior, particularly the synchronizable frequency range. Thanks. | In the image below a amplitude modulated sine wave: 0% unmodulated, the sine envelope is not visible at all; < 100% modulation depth is normal AM use; 100% modulation depth, the sine envelope touch at y=0. Maximum modulation that can be retrieved with an envelope detector without distortion; > 100% modulation depth, "overmodulation", the original sine wave can no longer be detected with an envelope detector. A still overview of the most important modulation depths (0, 50, 100 and 200%): The animation was created using gnuplot using the following script: unset xtics
set yrange [-3:3]
set samples 10000
do for [d=0:200] { plot sin(2*pi*3*x)*(1+(sin(2*pi*x/10))*d/100) title sprintf("%3i%%",d); } | {
"source": [
"https://electronics.stackexchange.com/questions/107141",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/25264/"
]
} |
107,655 | When we go out and look for phones, the first thing I do I always look at the battery capacity, usually 2000 mAh, 3000, up to 10000 mAh for a (external?) battery. Looking at laptop batteries, they are much much larger and have only afraction of what the most massive batteries for phones have. My laptop battery has, for example, 5000 mAh, probably not much higher than 15-20V, if I get four 10,000 mAh phone batteries I get the same voltage range in serial connection and still those 10,000 mAh, or not? (or maybe 40,000 mAh). Yet I will save space. Is it a wise choice to construct such a battery for my laptop? Is my logic correct? | Until recently, all lithium laptop batteries were made up of cylindrical lithium-ion cells. Now, many designs are using a lithium polymer (pouch) type battery in laptops. This is allowing the thinner laptop designs. The properties of the cylindrical and the polymer cells are almost the same. The advantage of the soft pouch construction allows the same capacity battery fit into a smaller space, due to not having air voids between the cylindrical cells. It is also important to note that many after market phone battery capacities are pulled from a data sheet that was printed in a Fantasy Land where everyone rides unicorns. | {
"source": [
"https://electronics.stackexchange.com/questions/107655",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/34802/"
]
} |
107,869 | Im looking for reliable software that will help me to manage electronic components, tools, drills, wires in my workshop. Currently I have probably more than 1000 diffrent components. My brain can't manage this anymore. My "wishlist": preferable free or low cost software not too complicated locations feature ("where is it" eg. Shelf 1 -> Box 1) stock/shopping history - where and when I purchased part and how much I paid categories with tree structure some functionality to assign part to projects (example: I want to note somewhere that im using MCP3424 in "Acurrate voltometer v2" and "Raspberry Pi Solar Energy Logger" projects, also I want to see full part list for "Acurrate voltometer v2" project) client-server architecture would be nice (optional wish) functionality that allows me prepare shopping lists (sometimes I forget to order something, I don't want to order just 1 part an pay for the shipping, so I'm planning buying it next time) EDIT / BOUNTY COMMENT So far nothing really interesting appeared in answers. Excel or PHP based solutions are not productive enough (too much clicking). zParts is too simple, I can write something like this myself in 2 hours :) | Since my last update to this answer, I was lucky enough to find an online system that meets most of my (and the OP's) requirements for managing small electronic components inventory. The service is called Parts-in-place . The system is based on a typical workflow of small electronics products companies (so the site says - how would I know, I'm just a hobbyist) that incidentally supports the work of more serious hobbyists. The workflow looks like this: After signing up, you start by importing your parts database into the system Parts Library . That's the central repository where all information about your parts will be stored. For a minimal setup (like mine), you'll only need to fill out a few columns, such as the Company's Part Numbers and Description . This is easily done thanks to the systems excellent integration with Excel. It is worth to note that before using the system, the user must define a standard coding for its parts (the above mentioned Company's Part Number ) which will be used throughout the system. Then you create a Bill of Materials which will represent an electronic board that your company (or the hobbyist) plans on building. You specify the part numbers and quantities that will be used. This information will later be used for defining necessary part orders. Then you define how many boards are to be built based on a single Bill of Materials . This will later be used to define Parts Orders , in which the system confronts the required materials against the available inventory. You can later add more parts or change quantities in the order, before placing it with the suppliers. Once the order arrives, you record it on the Part Arrivals tab. Then, there are tabs for recording actual assemblies and parts transfer from inventory. Since I don't have such a factory, I don't really use those tabs. But you can if you want to play factory . At any moment you can update inventory information using the Parts Write-offs and Inventories . The former can be used to account for parts that were lost for any reason or that were used and not tracked by the system workflow. The latter can be used to update part counts based on ad-hoc inventories performed. The system is really easy to use, has a nice, modern and very responsive user interface. 
The free account (the one I signed up for) limits you to a single user, 3 BOMs and about 100 parts per BOM. I haven't reached any of them yet. I wanted to highlight that the system features an extraordinary integration with Excel, both for importing and exporting . For importing, it does a great job identifying column names automatically and it's really forgiving regarding formatting and other trash that you may have left in your spreadsheet. The export function results in a nicely formatted spreadsheet that may be used elsewhere without problems. It's XLS format is recognized by Excel and OpenOffice Calc as well. Here's how I think the system meets the OP's requirements: (Yes) free or low cost software - it has a free account available. (Yes) Not too complicated - it's really easy to use. Also, since it's a service, you don't have to go to the trouble of setting the software up or installing anything. (Yes) Locations feature (Shelf 1 -> Box 1) - you can determine where the parts are stored. (Yes) Stock/shopping history - the system lets you control shopping history pretty well. Orders also reflect on the stock upon arrival. (No) Categories with tree structure - the system only presents a flat structure for parts. You can workaround this by selecting a clever prefixes for part numbers. But to me, not having categories makes things simpler. To me, less is more in this case. (Yes) Functionality to assign part to projects - that's exactly what BOMs are for. (Yes) Client-server architecture would be nice - it's an online service set up online, it's client-server. (Yes) Functionality that allows me prepare shopping lists - it let's you prepare shopping lists based on BOM's and how many boards you say you want to build. There are a few implied requirements that the system doesn't meet : (No) It isn't open source . (No) Since it's a service, you normally won't have it running on your servers. If the company goes bankrupt, your data is gone. But since it provides a nice Excel export feature, you can have all your data backed up and ready for use in other ways. Also, the company says they can setup Parts-in-Place to run at your servers, but I suspect this may be expensive. Here's a screenshot of the BOM screen: PS. I'm not affiliated with Parts-in-Place in any way. Below is the original evaluation I had made of other systems. zParts - Free and open source. But it is a bit too simple for the requirements. I guess one could probably use Excel to do the same zParts does. PartKeepr - Web-based built around PHP, JavaScript and MySQL. Didn't actually install it, but read through some of its docs. Ciiva - another online service for BOM management with integrated components database search. Seems pretty powerful yet simple, but I still couldn't find a way to manage my own inventory of components, but there's gotta be one as it let's you resell excess inventory. My latest Google search on the topic - nothing really interesting, but maybe the search terms may help you filter the results better. PS. I'm not affiliated with any of those companies in any way. | {
"source": [
"https://electronics.stackexchange.com/questions/107869",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/39905/"
]
} |
108,686 | This question is about the following three integrated H-bridge drivers: - L293 or L293D (D = protection diodes added) SN754410 (protection diodes included) L298 (no protection diodes) Time after time the same question keeps coming up - someone is using one of these devices (on a low voltage, usually around 6V or less) and they are just not performing adequately. The reasons are listed further below but my question is this: - What H-bridge drivers are preferred when controlling a low-voltage motor? Information The L293 and the SN754410 are nearly identical and crucially, if you try and control a 1 amp load, you are faced with dismal performance: - The tables tell you (typical conditions) that the upper transistor drops (loses) about 1.4 volts when driving a 1A load and, the lower transistor drops (loses) about 1.2 volts when driving a 1A load. The upshot is that if you have a 6V, 1A motor and 6V battery, don't expect to see more than 3.4 volts across the motor: - \$V_{OUT} = 6V - (1.4V + 1.2V) = 3.4V\$ Worst case scenario is you might only see 2.4 volts across it. What about the L298? It's got a nice big heat-sink whereas the L293 and SN754410 are regular-looking chips. Here's what the volt drop (losses) look like: - It's the same story - for a 1A load, you can expect to lose up to 3.2 volts and, what you thought might be 6V across your motor, is at best 4.2 volts and at worst only 2.8 volts. Clearly none of the devices listed are suitable for low voltage applications where the motor might be expected to draw in excess of 0.5 amps. | This answer was provided in 2014 and reflects the devices that are available back then. The point of this answer is to demonstrate that there are far better devices around compared to the problematic (and somewhat archaic) parts listed in the original question. If you are reading this message for the first time hoping to find a recommendation for a motor H-bridge driver chip, I urge you to search for it in the regular semiconductor suppliers lists of parts rather than take product recommendations for purchases from this answer. For low voltages, it seems like the DRV8837 is pretty good: - With an 800mA load, the volt drop is: - \$I_O\cdot R_{OS(ON)}\$ = 800mA x 0.33 ohms = 0.264 volts. At this current, the power dissipation will be 0.8 x 0.8 x 0.33 watts = 211 mW. Compare this with the L293 power dissipation at about 800mA - maybe about 3V is lost giving rise to a power dissipation of 2.4 watts. The VNH5200AS-E from ST is also pretty good and is intended for supplies as low as 5.5V up to 18V: - Also, another offering from ST is the VN5770AKP-E . It can be configured as separate top-side and low-side MOSFETs (including drivers) or just wired as a H bridge. There is also the MC33887 from Freescale (formerly Motorola): - It has on resistances in the low hundreds of milli ohms too. | {
"source": [
"https://electronics.stackexchange.com/questions/108686",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/20218/"
]
} |
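A quick worked comparison behind the figures quoted in the answer above, treating an 800 mA load as the example: for a MOSFET bridge leg the loss is resistive,
$$P = I^2 R_{DS(on)} = (0.8\,A)^2 \times 0.33\,\Omega \approx 0.21\,W$$
while for a bipolar L293-style bridge the loss follows the saturation voltage drops,
$$P \approx I \times V_{drop} \approx 0.8\,A \times 3\,V = 2.4\,W$$
roughly an order of magnitude more heat, and that same 3 V is voltage that never reaches the motor.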
109,270 | I would like to ask what a Schmitt trigger is and about its applications. I searched a lot but still didn't understand it. Please explain step by step; I hope you can help me understand its applications in real circuits. | Most devices have a set point which is the same for a rising signal as it is for a falling signal. For signals that have fast rise times, this is not a problem, but for signals that have very slow rise times, or are noisy, this can cause the output of the device to oscillate back and forth from off to on and back due to the signal hovering right at the set point. So a Schmitt trigger is a device (or the input portion of a device) that has separate thresholds for a rising signal and a falling signal. Obviously the threshold for the former is higher. In this diagram, two bands are shown. The top one represents the high set point, and the low band represents the low set point. They are shown as bands since there will be some tolerance in the specification. The difference between the bottom of the high band and the top of the low band is the hysteresis of the device. As mentioned earlier, Schmitt triggers can be used for either slowly changing signals, or noisy ones. Here are some examples of places where Schmitt triggers can be used: There are many ways to buy or build Schmitt triggers. There are many logic ICs that include Schmitt triggers on their inputs, such as the 74HCT132, but it has fixed thresholds. You can also build one using discrete transistors, but the easiest is just to use an op-amp since the only additional components needed to add the hysteresis are resistors: Unlike a lot of Schmitt trigger schematics found on the web, this one uses an op-amp with a single supply. The voltage thresholds \$V_{\text{high}}\$ and \$V_{\text{low}}\$ are set using a combination of the voltage divider resistors \$R_1/R_2\$ and the feedback resistor \$R_{\text{FB}}\$: $$R_{1\text{FB}} = \frac{(R_1 \times R_{\text{FB}})}{(R_1 + R_{\text{FB}})}$$ $$V_{\text{high}} = \frac{(V \times R_2)}{(R_2 + R_{1\text{FB}})}$$ $$R_{2\text{FB}} = \frac{(R_2 \times R_{\text{FB}})}{(R_2 + R_{\text{FB}})}$$ $$V_{\text{low}} = \frac{(V \times R_{2\text{FB}})}{(R_1 + R_{2\text{FB}})}$$ There is a nice Schmitt Trigger Calculator that makes it easy to figure out the resistor values you need. | {
"source": [
"https://electronics.stackexchange.com/questions/109270",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/40761/"
]
} |
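A small sketch that turns the threshold formulas above into code, using assumed example values of V = 5 V, R1 = R2 = 10 kohm and RFB = 100 kohm purely for illustration:

#include <stdio.h>

/* Parallel combination of two resistors. */
static double par(double a, double b) { return (a * b) / (a + b); }

int main(void)
{
    const double v = 5.0, r1 = 10e3, r2 = 10e3, rfb = 100e3;

    /* Threshold formulas from the answer above. */
    double v_high = v * r2 / (r2 + par(r1, rfb));
    double v_low  = v * par(r2, rfb) / (r1 + par(r2, rfb));

    printf("V_high = %.2f V, V_low = %.2f V, hysteresis = %.2f V\n",
           v_high, v_low, v_high - v_low);
    return 0;
}

With these values the trip points come out around 2.6 V and 2.4 V, i.e. about 0.24 V of hysteresis centered near mid-supply; a smaller RFB widens the gap.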
109,284 | From what I read on the internet, we should respond to interrupts as quickly as possible when programming microcontrollers, and flags should be avoided because they tend to compound over time. But the quickest way to respond to an interrupt is by using a flag (you just need to change the boolean value of the variable and continue the code inside the main function). How do I solve this problem? Should a state machine be used in any interrupt to avoid flags, or does that not matter? | Designing by rules of thumb you found on the internet someplace is a bad idea. The right way is to understand the issues, then make intelligent tradeoffs. There is nothing wrong with a system that takes an interrupt, clears the hardware condition, then sets a flag for foreground code to do the remainder of the processing when it gets around to it. The danger in that is that the foreground code might not get around to it in a while, and if the same condition occurs again before that, information might get lost. Or, if something needs to be handled with low latency or jitter, then you probably want to handle it in the interrupt routine. Again, understand the tradeoffs. Interrupt code runs immediately after the condition occurred, at the expense of everything else the processor might have to do at the time. Is that worth it? That depends. How much delay can you tolerate in handling the condition? How important is it that the foreground code not be delayed? It should be obvious that there is no universal single answer to this. It is highly dependent on the particular application. For example, if part of the processor's job is to respond to a serial command stream that is sent to it at 115.2 kbaud, then bytes can be received as fast as every 87 µs. The interrupt routine could simply set a flag to let the foreground routine know it should read a byte from the UART, but that would require the foreground code to check the flag at least every 87 µs. In many cases, that would be difficult. A good tradeoff for many cases (again, this might not fit any one particular case) would be for the interrupt routine to grab the byte from the UART, clear the hardware condition, and stuff the byte into a software FIFO. The foreground code then empties the FIFO as it can, probably in bursts between performing other tasks that can take longer than the 87 µs byte time. On the other hand, the interrupt routine for a user button might only perform debouncing and set a flag when the button is in a new state. The system only needs to respond to the button in human time, which can be many milliseconds. If the foreground code checks all events at least every few milliseconds, then there is no need for the interrupt routine to do any more of the processing than described. In general, the interrupt routine should do whatever immediate latency- or jitter-sensitive processing needs to be performed due to the event, then set state so that processing that can respond slower can be performed later from foreground code. Again though, don't just run off using that as a rule of thumb. Understand why. Then you won't need any rules of thumb. | {
"source": [
"https://electronics.stackexchange.com/questions/109284",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/15637/"
]
} |
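A compact sketch of the UART tradeoff described in the answer above: the ISR does only the latency-critical work (grab the byte, clear the hardware condition) and hands everything else to the foreground loop through a software FIFO. UART_RX_REG is an assumed placeholder for the microcontroller's actual receive register:

#include <stdint.h>

#define FIFO_SIZE 64u                      /* must be a power of two */
static volatile uint8_t fifo[FIFO_SIZE];
static volatile uint8_t head, tail;        /* head: ISR writes, tail: foreground reads */

extern volatile uint8_t UART_RX_REG;       /* hypothetical receive data register */

void uart_rx_isr(void)                     /* runs once per received byte (~87 us at 115.2 kbaud) */
{
    uint8_t byte = UART_RX_REG;            /* reading the register also clears the request on many parts */
    uint8_t next = (uint8_t)((head + 1u) & (FIFO_SIZE - 1u));
    if (next != tail) {                    /* drop the byte if the FIFO is full */
        fifo[head] = byte;
        head = next;
    }
}

int uart_read_byte(void)                   /* called from the foreground loop whenever convenient */
{
    if (tail == head)
        return -1;                         /* nothing waiting */
    uint8_t byte = fifo[tail];
    tail = (uint8_t)((tail + 1u) & (FIFO_SIZE - 1u));
    return byte;
}

Because head and tail are single bytes, the foreground side can read them without disabling interrupts even on an 8-bit core, which keeps the design simple.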
110,478 | Most of the time RS-232 and UART come together in serial communication theories. Are they both the same? From my readings I concluded UART is a hardware form of the RS-232 protocol. Am I correct? | No, UART and RS-232 are not the same. UART is responsible for sending and receiving a sequence of bits. At the output of a UART these bits are usually represented by logic level voltages. These bits can become RS-232, RS-422, RS-485, or perhaps some proprietary spec. RS-232 specifies voltage levels . Notice that some of these voltage levels are negative, and they can also reach ±15V. Larger voltage swing makes RS-232 more resistant to interference (albeit only to some extent). A microcontroller UART can not generate such voltages levels by itself. This is done with help of an additional component: RS-232 line driver. A classic example of an RS-232 line driver is MAX232 . If you go through the datasheet, you'll notice that this IC has a charge pump, which generates ±10V from +5V. ( source ) | {
"source": [
"https://electronics.stackexchange.com/questions/110478",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/37488/"
]
} |
110,510 | Many circuit designs I see with transistors use two transistors chained together instead of just using one transistor. Case in point: This circuit is designed to allow a device with a 3.3V UART to communicate with a 5V microcontroller. I understand that when Q2 is off, TX_TTL will be high, and when Q2 is on, TX_TTL will be low. My question is, why not run UART_TXD directly to the base of Q2 instead of using Q1 to control the base voltage of Q2? | What you have is basically a two-stage amplifier - two consecutive amplifiers. In such a circuit configuration the gains of both amplifiers multiply. Since each stage has negative gain in your example, the overall gain is positive again. So let's say Q1 and R2 have a voltage gain of -10 and Q2 together with R3 creates a gain of -10, too. Then the overall gain is 100, which is positive and much larger than the gain of a single stage. In your example this means the following: if UART_TXD goes High, TX_TTL will go High, too. If you omit Q1 and directly feed Q2 with UART_TXD, then TX_TTL will go Low when UART_TXD is High. | {
"source": [
"https://electronics.stackexchange.com/questions/110510",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/7624/"
]
} |
110,574 | A diode is put in parallel with a relay coil (with opposite polarity) to prevent damage to other components when the relay is turned off. Here's an example schematic I found online: I'm planning on using a relay with a coil voltage of 5V and contact rating of 10A. How do I determine the required specifications for the diode, such as voltage, current, and switching time? | First determine the coil current when the coil is on. This is the current that will flow through the diode when the coil is switched off. In your relay, the coil current is shown as 79.4 mA. Specify a diode for at least 79.4 mA current. In your case, a 1N4001 current rating far exceeds the requirement. The diode reverse voltage rating should be at least the voltage applied to the relay coil. Normally a designer puts in plenty of reserve in the reverse rating. A diode in your application having 50 volts would be more than adequate. Again 1N4001 will do the job. Additionally, the 1N4007 (in single purchase quantities) costs the same but has 1000 volt rating. | {
"source": [
"https://electronics.stackexchange.com/questions/110574",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/7624/"
]
} |
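The same selection steps as a worked calculation, using the 5 V coil from the question and the 79.4 mA figure quoted in the answer (which implies a coil resistance of about 63 ohm):
$$I_{coil} = \frac{V_{coil}}{R_{coil}} \approx \frac{5\,V}{63\,\Omega} \approx 79\,mA$$
so the diode needs a forward current rating of at least 79 mA and a reverse voltage rating comfortably above the 5 V coil supply, requirements that a 1N4001 (1 A, 50 V) exceeds by a wide margin.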
110,649 | I have read at numerous places that NAND gate is preferred over NOR gate in industry. The reasons given online say: NAND has lesser delay than Nor due to the NAND PMOS (size 2 and in parallel) when compared to NOR PMOS (size 4 in series). According to my understanding delay would be the same. This is how I think it works: Absolute delay (Dabs) = t(gh+p) g=logical effort h=electrical effort p=parasitic delay t=delay unit which is technology constant For NAND and NOR gate (gh+p) comes out to be (Cout/3 + 2). Also t is same for both. Then delay should be the same right? | 1. NAND offers less delay. As you were saying, the equation for delay is
$$Delay = t(gh+p)$$
But the logical effort \$g\$ for NAND is less than that of NOR. Consider the figure showing 2-input CMOS NAND and NOR gates. The number against each transistor is a measure of size and hence capacitance. The logical effort can be calculated as \$g = C_{in}/3\$, which gives \$g = 4/3\$ for a 2-input NAND and \$g = \frac{n+2}{3}\$ for an n-input NAND gate, and \$g = 5/3\$ for a 2-input NOR and \$g = \frac{2n+1}{3}\$ for an n-input NOR gate (refer to the Wikipedia table). \$h=1\$ for a gate (NAND or NOR) driving the same gate and \$p=2\$ for both NAND and NOR. Hence NAND has less delay when compared with NOR. EDIT: I have two more points to add, and I am not 100% sure about the last point. 2. NOR occupies more area. Adding the sizes of the transistors in the figure, it is clear that the size of the NOR is greater than that of the NAND. This difference in size will increase as the number of inputs is increased; the NOR gate will occupy more silicon area than the NAND gate. 3. NAND uses transistors of similar sizes. Considering the figure again, all the transistors in the NAND gate have equal size, whereas the NOR gate's don't. This reduces the manufacturing cost of the NAND gate. When considering gates with more inputs, NOR gates require transistors of 2 different sizes, and the size difference is larger than in comparable NAND gates. | {
"source": [
"https://electronics.stackexchange.com/questions/110649",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/41839/"
]
} |
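Plugging the logical-effort numbers above into the delay formula for a 2-input gate driving an identical copy of itself (\$h = 1\$, \$p = 2\$) makes the difference explicit:
$$d_{NAND} = gh + p = \tfrac{4}{3}\cdot 1 + 2 = \tfrac{10}{3} \approx 3.33$$
$$d_{NOR} = \tfrac{5}{3}\cdot 1 + 2 = \tfrac{11}{3} \approx 3.67$$
(in units of \$t\$), so even though the parasitic delay \$p\$ is the same, the series PMOS stack gives the NOR gate a larger logical effort and hence more delay, and the gap widens as the number of inputs grows.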
111,569 | What is the difference between PCB fabrication and assembly? Some companies like Golden Phoenix list that based on the size of PCB, it outputs a quote. Mine was 2.1 x 1.6 inch. It outputted 29 pieces for $100. So it means if I give them my schematic and PCB layout and pay them $100, I will get 29 pieces of production-ready PCBs ready to sell? Now one company, EPS , lists fabrication and assembly differently. In assembly, it states I ought to provide the parts. In fabrication, anything but about parts is stated. So what are these companies suggesting? | In PCB world, "fabrication" refers to making the printed circuit board and nothing more . "Assembly" means soldering on all the parts. So, in your example, $100 will get you 29 pieces of copper-clad FR4, with traces etched and holes drilled. It is up to you to solder on the resistors, capacitors, etc which you need. | {
"source": [
"https://electronics.stackexchange.com/questions/111569",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/40885/"
]
} |
112,223 | Say I have a logic-level square wave, where 0V is "low" and 5V is "high". I'm pulsing this at a constant 60Hz, 50% duty cycle. My intuition says that since the voltage never goes negative, it's a DC signal, regardless of how fast I'm pulsing it. Is that correct? Furthermore, when considering op-amps to amplify signals from sensors that again produce 60 Hz square waves between 0 and 100mV, can I also consider this a DC signal, and not worry too much about my gain-bandwidth product? | Brief answer to both questions: No, that's not correct. No, you do need to worry about that. Let's start from the beginning. There is no way you will ever deal with a literally 'DC' signal. Let's say you have a bench power supply, you use it to power your circuits, that's maybe some 5V DC , right? And what about when you turn it off? What about power outages? What about when that particular bench supply didn't even exist? My point is: a real (existing) signal can never literally be DC. At some point in time it didn't, and it won't, exist. But there's hope: we can give a somewhat less strict definition of DC signal, and we're calling in our old friend Fourier. I am assuming you know what the Fourier Transform is, you can read it up or just believe me: there is this particular mathematical transform that takes in a signal that is a function of time and spits out a signal that is a function of frequency . And that works in both way, so your nice signal can be either represented in its time domain form or in its frequency domain form. But what do we need this frequency thing for? Well that's easy, let's say you have:
$$x(t)\rightleftharpoons X(f)$$
where \$x(t)\$ is your signal in the time domain, while \$X(f)\$ is the same signal in the frequency domain. Now, if you compute \$x(t_0)\$ you get the value your signal has at the instant \$t_0\$, so what about \$X(f_0)\$? Well, you get the value your signal has at the frequency \$f_0\$, plain and simple. Let's say that you record a bass drum and a violin: you have the time domain signals, you transform them and then plot them. The bass drum will be very high for low frequencies, while the violin will be very high for high frequencies. That's because the bass drum has plenty of low frequency components, while the violin has plenty of high frequency ones. So let's go back to the definition of DC. We could say that a signal is DC if "most of its components are at very low frequencies". That's better than "it never changes"; having low frequency components can actually happen. That's not a precise definition, but let's take it as is for now. What about your square wave? Let's have a look at the plot of a square wave's frequency components (also called its spectrum): (source: wikipedia ) That's a 1kHz square wave: as you can see the function plotted is very high at 1kHz, but also at 3kHz, 5kHz and so on... And (trust me) the height of the peaks goes down as 1/f, and that's slow. And please note I did not make any assumption on whether or not the wave goes below zero. So your square wave is far, far away from being DC. Now to your second question: that's a completely different one. If and only if your square wave amplitude is very, very small compared to other signals you have around, you can say "well, let's just pretend it's not there". But that's not your case: your square wave is the signal you want to amplify. And as you just learned, that's not DC at all... You'd better look carefully at the specs of the op amp you are going to choose, then. | {
"source": [
"https://electronics.stackexchange.com/questions/112223",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/43296/"
]
} |
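The spectrum described above can also be written out explicitly: ignoring the DC offset (which only adds a component at \$f = 0\$), a 50% duty-cycle square wave of amplitude \$A\$ and frequency \$f_0\$ has the Fourier series
$$x(t) = \frac{4A}{\pi}\sum_{k=1,3,5,\dots}\frac{\sin(2\pi k f_0 t)}{k}$$
so a 60 Hz logic-level square wave still contains components at 180 Hz, 300 Hz, 420 Hz and so on, with amplitudes falling only as \$1/k\$, which is exactly why the op-amp's gain-bandwidth product still matters.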
113,009 | After learning and experimenting with microcontrollers, I've understood the concept of pull-up- and pull-down resistors. I now understand when and how to use them, and how they work. I've mainly used pull-ups because I was taught to, but it has always seemed a little backwards to me, as closing the switch sets the MCU input to LOW. I think it would make more sense to use a pull-down resistor, so that the input is LOW when the switch is open, but that is just my way of thinking. Should I pull my single-throw inputs up or down? When is pulling down preferred over pulling up and vice versa? | The answer depends on what you want the "default" configuration to be. For example, say you have a down-stream N-channel MOSFET, and you want it default off. Then you would use a pull-down resistor to ensure this behavior if the input becomes high impedance. simulate this circuit – Schematic created using CircuitLab On the other hand, suppose you have an upstream P-channel MOSFET, and want it default off. This time a pull up resistor is required to create this behavior. simulate this circuit There's also the alternative case where you want a device to be default-on, in which case the above two cases would be reversed (pull-up for the N-channel MOSFET, pull-down for the P-channel MOSFET). A few other considerations: I2C lines specify pull-up resistors because devices are "expected" to have an open-drain to ground, and thus need some way to raise the line potential. Analog comparators are usually configured as open-drain devices, and thus also need pull up resistors to get a high potential output. You may draw more current using pullup/pulldown resistors, depending on what's hooked to the input/output. Either configuration could works equally well in your application (i.e. there's no significant advantage one way or the other). ... And any number of very application-specific reasons why one configuration may be preferred. | {
"source": [
"https://electronics.stackexchange.com/questions/113009",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/20569/"
]
} |
I purchased a few fuses to get familiar with their workings, and I was surprised to see that the 100mA fast-blowing fuses I bought happily conducted up to 215mA (6V power supply, 10\$\Omega\$ resistor), where the filament just started glowing. I could reproduce this with a second fuse. Am I gravely misunderstanding something here, or is this an issue with the fuses? They are Bel Fuse Inc. 5SF 100-R parts. | Even fast fuses don't respond immediately after the rated current is reached. Most fuses require a significant overcurrent to fire almost instantly. This is an obvious requirement, since it's all about heat, and minor heating (a raised ambient temperature) shouldn't affect the fuse too much (or at least not cause it to "blow"). Judging from the data-sheet, your 100mA fuse should blow after about 80 seconds @215mA @25°C. How long did you wait? | {
"source": [
"https://electronics.stackexchange.com/questions/113199",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/44073/"
]
} |
I'm using the LM4871 as an audio pre-amp in my device. If I route the output to earphones or external speakers, it works great. If I want to loop the LM4871 output (pins 5 and 8) back to one of the local audio inputs, then I have a problem: the LM4871 output is not referenced to ground (as seen on an oscilloscope), but my other inputs are. How should I solve that?
Here's the LM4871 application circuit I'm using: [ http://www.ti.com/lit/ds/symlink/lm4871.pdf ] | Even fast fuses don't respond immediately after the rated current is reached. Most fuses require a significant overcurrent to fire almost instantly. This is an obvious requirement, since it's all about heat, and minor heating (a raised ambient temperature) shouldn't affect the fuse too much (or at least not cause it to "blow"). Judging from the data-sheet, your 100mA fuse should blow after about 80 seconds @215mA @25°C. How long did you wait? | {
"source": [
"https://electronics.stackexchange.com/questions/113211",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/8614/"
]
} |
Which IC can I use to control a 7-segment display? I want to make simple projects like a reaction timer, but I don't have an Arduino or some other microcontroller available. What ICs can I use instead? | Even fast fuses don't respond immediately after the rated current is reached. Most fuses require a significant overcurrent to fire almost instantly. This is an obvious requirement, since it's all about heat, and minor heating (a raised ambient temperature) shouldn't affect the fuse too much (or at least not cause it to "blow"). Judging from the data-sheet, your 100mA fuse should blow after about 80 seconds @215mA @25°C. How long did you wait? | {
"source": [
"https://electronics.stackexchange.com/questions/113216",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/44048/"
]
} |
In many power supplies, there are CV & CC indicators. What do they mean? | They are acronyms and stand for "Constant Voltage" and "Constant Current". They are usually associated with an LED or an indicator of some kind, as you suggest. When you use a power supply you usually set the desired voltage and the maximum current. When you connect the load, two things can happen: The load requires more current than the maximum you set The load requires at most the maximum current you set In the first case the PSU becomes a current source: the current is limited to what you set and the voltage drops accordingly; that's CC for you. In the second case what is constant is the voltage, so that's CV. As an example, consider this case: you set the voltage at 10V and set the maximum current at 1A, then you hook up a load that is over \$10\Omega\$. As you know, that requires at most 1A, so the voltage is constant while the current can vary between 0A and 1A. If you then hook up a lower impedance load it would require a higher current, but now the current protection kicks in, so the current is limited to 1A, and it is constant, while the voltage varies between 10V and 0V. | {
"source": [
"https://electronics.stackexchange.com/questions/114916",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/35874/"
]
} |
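A compact way to restate the example above (same 10 V / 1 A settings, nothing new added): the crossover between CV and CC happens at the load resistance

$$R_{boundary} = \frac{V_{set}}{I_{limit}} = \frac{10\,\text{V}}{1\,\text{A}} = 10\,\Omega$$

Loads above \$10\,\Omega\$ keep the supply in CV (the current stays below 1 A), while loads below \$10\,\Omega\$ push it into CC (the voltage drops below 10 V).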
115,021 | I wish to use some extra power supply filtering for my DAC, ADC, CPLD and OpAmp devices. In this question I got the point about the global locations for ferrite beads. If I understood correctly, the ferrite bead should be placed close to the device regardless whether it is a noise-generating, or noise-susceptible device. Please, correct me if it's not a general case. I saw some example schematics where the beads are placed before or within the bypass cap circuitry: Note to the pic: Power source is Vin, Chip is Vout Is there a significant difference between the two approaches above? | I'm researching information on decoupling capacitors and came across some information about ferrite beads from TI : Ferrite beads are very handy tools to have in your circuit design arsenal. They are, however, not a good idea for all circuit power rails.
Ferrite beads effectively absorb high frequency transients by raising their resistance at higher frequencies. This makes them very good at preventing power supply noise from getitng to sensitive circuit sections, however, it also makes them a very bad idea for main digital power. When to use them: Use them on power traces in series with analog circuit sections like composite video or PLLs. These beads effectively shut down power flow in times of high noise transients, allowing the power to be drawn only from the decoupling capacitors that are downstream. This cuts noise to sensitive circuit sections considerably. How to use them: Ferrite beads should be used in between two capacitors to ground. This forms a Pi filter and reduces the amount of noise to the supply considerably. In practice, the capacitor on the chip side should be placed as close to the chip supply ball as possible. The ferrite bead placement and input capacitor placement is not as crucial. If there is not room for two capacitors to form a Pi filter, the next best thing is to delete the input capacitor. The chip side capacitor should always be there. This is very important. Otherwise the ferrite beads increased high frequency resistance may make things worse instead of better since there will be local power storage on the chip side, and therefore no way to get the high peak power pulses to the chip that it so desparately needs. When not to use them: The above ferrite traits are very handy for those circuit sections that draw power evenly and consistently, but the same traits make them unsuitable for digital power sections. Digital processors need high peak current, because most internal transistors that switch are switching on each clock edge, all the demand occurs at once. Ferrite beads (by definition) will not allow power to flow through them with the high ramp rates required by digital processor logic. This is what makes them perfect for noise filtering on analog (like PLL) supplies. Since all the power demand in digital system is instant (high frequency), instead of being a slow and steady demand, ferrite beads will block the digital supply during the peaks. Theoretically, the bypass capacitors on the processor side of the bead would supply the peak current, filling in the gaps caused by the ferrites until they were charged after the peak was over, but in reality, the impedance of even the best capacitors is too high above about 200 MHz to supply enough peak power for the processor. In systems without ferrites, the planar capacitance can help to fill in this gap, but if a ferrite is used, it's inserted between the planes and the power pin, so the benefits of planar capacitance are lost. This will cause a big instantaneous voltage drop during the period the processor needs it most, causing logic errors and strange behavior if not immediate crashing. This can be avoided by proper design if required for your system (for EMI reduction, for example), however this is beyond the scope of this note. I believe you should examine what your switching current spectrum looks like. If your digital circuits require large current transients, you should not use a ferrite bead on them. I am currently of the mindset that the ferrite bead is useful in certain, very specific applications, but it is mostly used liberally as a band-aid when issues arise that should be solved by examining the power delivery network. While it would be nice to see some graphs or other data, what I read here from TI sounds plausible. 
What do you guys think about it? | {
"source": [
"https://electronics.stackexchange.com/questions/115021",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/35526/"
]
} |
All of the air conditioners I've worked with have the following words on them: Before restarting, wait three minutes. In the event the air conditioner's compressor is switched off and back on too quickly, the compressor motor stalls with a characteristic humming noise instead of running, and either a PTC trips to stop the compressor or the circuit breaker trips altogether. The same thing happens when the same is done on a refrigerator (and by extension, any device which uses vapor-compression refrigeration). Why do refrigeration compressors stall when switched off and on quickly? | The compressor compresses coolant on one side of a closed loop. If you shut off the compressor, you still have the load side of the closed loop full of pressurized coolant. That pressurized coolant makes it much more difficult to start the motor. A motor starting at 0 RPM will want to draw large amounts of current. With an added load on the motor (pressurized coolant), the motor will draw excessive current and won't turn over. Compressors are likely leaky and will therefore allow the pressurized side to slowly decrease in pressure until the pressure is equal between the two sides. If you wait 3 minutes, it's expected that the pressures balance out and you have virtually no load when you try to start the motor again. A compressor running at speed has one side of the closed loop pressurized and so is under load, but in that case, it already has momentum to keep it going. Also, at speed the motor doesn't need as much current to continue spinning. Here's a graph depicting induction motor torque and current vs speed to help illustrate why this happens. | {
"source": [
"https://electronics.stackexchange.com/questions/115886",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/7403/"
]
} |
117,217 | So Wi-Fi runs in the 2.4 GHz band, yeah (and the new ones 5 GHz)? Which means that every second, a Wi-Fi antenna outputs 2.4 billion square wave pulses, right? So I was wondering, why can't it transmit data on every pulse, and be able to send data at 2.4 Gbit/s? Even if 50% of that was data encoding, then it would still be 1.2 Gbit/s. Or have I got the concept of how Wi-Fi works wrong...? | You are confusing band with bandwidth . Band - The frequency of the carrier. Bandwidth - the width of the signal, usually around the carrier. So a typical 802.11b signal may operate at a 2.4GHz carrier - the band - it will only occupy 22MHz of the spectrum - the bandwidth. It's the bandwidth that determines the link throughput, not the band. The band is best thought of as a traffic lane. Several people might be transferring data at the same time, but in different lanes. Some lanes are bigger, and can carry more data. Some are smaller. Voice communications is usually about 12kHz or less. Newer wifi standards allow bandwidth of up to 160MHz wide. Keep in mind that while bandwidth and bits sent are intrinsically linked, there is a conversion there too, that's related to efficiency. The most efficient protocols can transmit over ten bits per Hz of bandwidth. Wifi a/g have an efficiency of 2.7 bits per second per hertz, so you can transmit up to 54Mbps over its 20MHz bandwidth. Newer wifi standards go up past 5 bps per Hz. This means that if you want 2Gbits per second, you don't actually need a 2GHz bandwidth, you just need a high spectral efficiency, and today that's often given using MIMO technology on top of a very efficient modulation. For instance you can now buy an 802.11ac wifi router that supplies up to 3.2Gbps total throughput (Netgear Nighthawk X6 AC3200). | {
"source": [
"https://electronics.stackexchange.com/questions/117217",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/45485/"
]
} |
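To put rough numbers on the band-versus-bandwidth point above (a back-of-the-envelope restatement of the figures quoted in the answer): throughput is approximately bandwidth times spectral efficiency times the number of spatial streams, so

$$20\,\text{MHz} \times 2.7\,\text{bit/s/Hz} = 54\,\text{Mbit/s}$$

for 802.11a/g, and in round numbers something like \$160\,\text{MHz} \times 5\,\text{bit/s/Hz} \times 4\$ streams \$\approx 3.2\,\text{Gbit/s}\$ for a high-end 802.11ac setup, which is why multi-gigabit rates do not require gigahertz of bandwidth.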
Transistors serve multiple purposes in an electrical circuit, e.g. as switches, to amplify electronic signals, to control current, etc. However, I recently read about Moore's law, among other random internet articles, and learned that modern electronic devices have a huge number of transistors packed into them, with the number of transistors in modern electronics ranging into the millions, if not billions. However, why exactly would anyone need so many transistors anyway? If transistors work as switches etc., why would we need such an absurdly large number of them in our modern electronic devices? Are we not able to make things more efficient so that we use way fewer transistors than we are using currently? | Transistors are switches, yes, but switches are more than just for turning lights on and off. Switches are grouped together into logic gates. Logic gates are grouped together into logic blocks. Logic blocks are grouped together into logic functions. Logic functions are grouped together into chips. For example, a TTL NAND gate typically uses 2 transistors (NAND gates are considered one of the fundamental building blocks of logic, along with NOR): simulate this circuit – Schematic created using CircuitLab As the technology transitioned from TTL to CMOS (which is now the de-facto standard) there was basically an instant doubling of transistors. For instance, the NAND gate went from 2 transistors to 4: simulate this circuit A latch (such as an SR) can be made using 2 CMOS NAND gates, so 8 transistors. A 32-bit register could therefore be made using 32 flip-flops, so 64 NAND gates, or 256 transistors. An ALU may have multiple registers, plus lots of other gates as well, so the number of transistors grows rapidly. The more complex the functions the chip performs, the more gates are needed, and thus the more transistors. Your average CPU these days is considerably more complex than, say, a Z80 chip from 30 years ago. It not only uses registers that are 8 times the width, but the actual operations it performs (complex 3D transformations, vector processing, etc.) are all far, far more complex than the older chips can perform. A single instruction in a modern CPU may take many seconds (or even minutes) of computation on an old 8-bitter, and all that is done, ultimately, by having more transistors. | {
"source": [
"https://electronics.stackexchange.com/questions/117303",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/43414/"
]
} |
Specifically, a 2-pin and a 4-pin quartz crystal oscillator. What I know: current is applied and the crystal oscillates in order to provide an oscillating signal. What I want to know: How does the vibration cause an oscillating current? How are 2/4-pin crystals different? Lastly, why can a 4-pin run alone while a 2-pin needs capacitors? | The devices with two pins are not oscillators, they are resonators (crystals), which can be used in an oscillator circuit (such as a Pierce oscillator ), and if used with the correct circuit will oscillate at (or near) the marked frequency. The Pierce oscillator circuit, shown below, uses two capacitors (load capacitors, C1/C2), the crystal (X1), and an amplifier (U1). The devices with four pins are complete circuits including a resonator and an active circuit that oscillates. They require power and output a square wave or sine wave at (or near) the marked frequency. There are also (ceramic) resonators with three pins that act like crystals with capacitors. The way crystals (and ceramic resonators) work is that they are made of a piezoelectric material that produces a voltage when they are distorted in shape. A voltage applied will cause a distortion in shape. The crystal is made into a shape that will physically resonate (like a tuning fork or a cymbal) at the desired frequency. That means that the crystal will act like a filter - when you apply the desired frequency it will appear like a high impedance once it gets vibrating, and to frequencies a bit different, it will be more lossy. When put in the feedback circuit of an amplifier, the oscillation will be self-sustaining. Much more, and some math, here . | {
"source": [
"https://electronics.stackexchange.com/questions/117624",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/46617/"
]
} |
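One practical detail when picking C1 and C2 for the Pierce circuit described above: the crystal's specified load capacitance is the series combination of the two load capacitors plus board strays. Using the commonly quoted relation (the stray-capacitance value below is a typical assumption, not a datasheet figure):

$$C_L = \frac{C_1 C_2}{C_1 + C_2} + C_{stray}$$

so for a crystal specified with \$C_L = 18\,\text{pF}\$ and an assumed 3 to 5 pF of stray capacitance, \$C_1 = C_2 \approx 27\$ to \$30\,\text{pF}\$ is a typical starting point.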
I'm working on a project with a few stepper motors. I want to breadboard them so I know the circuit works well before creating a physical PCB. Each of the three motors is rated 2A, but I'll be prototyping with 1A (controlled with a driver chip) at 12V. How dangerous is this much current on a solderless breadboard? It's a cheap one I picked up on eBay, so I doubt that it's made with super high quality materials. I won't be leaving it plugged in for extended periods of time and it'll always be supervised with me there if it receives power. Is this a bad idea? Note: each of the motors will be on its own rail with its own wire coming from the power source. | Engineers with a conscience don't let others use breadboards. Oh, I know, it starts out easy, like any other drug. You're uncertain of yourself and instead of doing calculations, or even back of the envelope calculations you say to yourself " I'll do the easy thing, I'll breadboard it -- I don't have to think too much and then I can move on". But that is the easy lie, sort of like using SPICE (but I digress) when you should be calculating \$g_m\$. So you try it, perhaps your friends are doing it too, you want to fit in. And hey! it works, and you didn't have to spend too much time at it. But in doing so, you don't get the practice of doing those calculations, that skill never develops and eventually you end up being dependent upon the little suckers. and then they turn on you Your projects have started to grow, and get larger and say you want to show someone "the cool thing" you have been doing. And they want to touch it and you jump at them, not willing to allow them to touch your baby, your precious . I was lucky when I started out. Someone gave me a breadboard to work with, and I now suspect that they did it on purpose. The thing is, this breadboard was damaged, and I wasted hours trying to get something running when it was a faulty breadboard. I didn't have that internal modelling ability, I didn't know what to expect, so I wasted time. After that, I never used a breadboard again. Using a breadboard is an easy way out, it's an addictive drug and if you persist you'll one day find yourself wondering " hmm, can I run one amp through those little traces?" maybe you can. But the real question is, should you? With that much power, what happens when the board shorts out underneath (like they are wont to do)? Listen, there are lots of ways to prototype things that take only a little more time than a breadboard, that are semi-permanent, are not fragile, are smaller and more importantly allow you to build high performance circuits (be that high amperage or high frequency). Use protoboards or dead bug the circuit. Being a little slower actually is a good thing, because you end up with something that can be handled, doesn't stop working the moment that someone sneezes and most importantly ... because you are going to put more effort into it, you learn to actually calculate before you commit. You end up learning more. So kick the habit: dead-bug that sucker. You'll find that you get a better circuit by far. | {
"source": [
"https://electronics.stackexchange.com/questions/117994",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/18442/"
]
} |
I want to make LEDs blink rapidly (more than 1000 blinks/sec.; faster is better). First, I am curious whether common off-the-shelf LEDs are capable of blinking at such a high frequency. The datasheet of the LEDs I'm currently using is here . I have no idea which information I should look at for my purpose. Or you could suggest other products to me. Second, is there a sensor (photoresistor, etc.) that has good enough time resolution for sensing rapidly blinking LEDs? My two candidates are a CdS cell photoresistor and a light-sensitive voltage generator . Again, which information do I have to look into? P.S. I'm asking these questions because I want to build a visible light communication system. I have succeeded in making LEDs blink 32 times per second, but beyond that I cannot tell whether it works or not with the naked eye. | To address the sub-parts one by one: are common off-the-shelf LEDs capable of blinking at such a high frequency? Pretty much any LED available can be operated at far higher blink frequencies than 1 kHz: White LEDs or others which use a secondary phosphor would be the slowest, often topping out in the 1 to 5 MHz region, while standard off-the-shelf primary LEDs (red, blue, green, IR, UV etc.) are typically rated at a cut-off frequency of 10 to 50 MHz (sine wave). The cut-off frequency is the maximum frequency at which light emission drops to half the initial intensity. Few LED datasheets list the cut-off frequency, but the rise time and fall time of the LED are more common - unfortunately not for the specific datasheet linked in the question. In practice, one would be safe in topping out at one tenth the cut-off frequency for a well shaped square pulse, so 1 MHz visible light communication is very reasonable. As long as LEDs are SMD or have very short lead lengths, and PCB track / component lead capacitance and inductance are kept to a minimum, driving an LED to 1 MHz is feasible without complex pulse-shaping drive circuits. More academic info on the subject of LED cut-off frequencies can be found here . is there a sensor (photoresistor, etc.) that has good enough time resolution for sensing rapidly blinking LEDs? A CdS photocell would not be suitable for high frequency light sensing: The rise + fall time for common CdS cells are of the order of tens to hundreds of milliseconds. For instance, this randomly picked datasheet mentions 60 ms rise time and 25 ms fall time. Thus the highest frequency it can handle is below 11 Hertz. Photodiodes and phototransistors are preferred options for sensing higher speed light pulses at low to moderate intensity (i.e. at a distance from the LED source). This datasheet for the BPW34 PIN diode indicates rise and fall times of 100 nanoseconds each, which would tolerate 5 MHz signaling, so keeping a margin of safety, 1 MHz would be comfortable. For higher signaling speeds and lower signal intensity, super-expensive high speed Silicon Avalanche Photodiodes such as this one have rise and fall times of as little as 0.5 nanoseconds, allowing a 1 GHz signal, well beyond what standard LEDs will support. If the emitted signal intensity can be high enough, such as by having the LED source and the sensor near each other, or by using suitable lenses, and the desired signal bandwidth is not too ambitious, then a standard LED of suitable color is itself a suitable light sensor.
LEDs work well as light detectors, and would be ample for signaling frequencies of hundreds of KHz, perhaps even up to MHz depending on the specific LED chosen for emitter and sensor. An interesting paper by Disney Research talks about this specific application: " An LED-to-LED Visible Light Communication System with Software-Based Synchronization " | {
"source": [
"https://electronics.stackexchange.com/questions/118141",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/46498/"
]
} |
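A handy cross-check on the rise/fall-time figures quoted above is the usual first-order bandwidth-to-rise-time rule of thumb (an approximation, not a datasheet number):

$$f_{-3\,\text{dB}} \approx \frac{0.35}{t_r}$$

which gives roughly 3.5 MHz for the BPW34's ~100 ns rise time and roughly 700 MHz for a 0.5 ns avalanche photodiode, the same order of magnitude as the figures in the answer and comfortably above a ~1 MHz signalling target in both cases.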
118,338 | This is a follow-up question to my previous one, What does Low/High mean on the connections of a chip? . To create a HIGH connection, you often just connect the pin to VDD, which is often above VIH (and therefore sets it to HIGH). However, some people choose to add resistors to those pins, as shown below: Credit: Wolfson In this case, DVDD is 3.3V and the VIH is 0.7×DVDD. Looking at that, it seems that the resistors aren't necessary. What are these resistors for, and do I even need them? | These are called pull up resistors . These are used to ensure that inputs to logic systems settle at expected logic levels if external devices are disconnected or high-impedance is introduced. Usually used with open-collector or open drain outputs. | {
"source": [
"https://electronics.stackexchange.com/questions/118338",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/-1/"
]
} |
121,162 | Here is my simple reasoning. We apply a potential difference across a resistor. All the electrons begin responding. Since it takes time for electrons to respond, our current is not yet fully established. In fact, immediately after we apply the voltage, the current is zero! And yet we have a non-zero V and R. So am I wrong here, or does V sometimes not equal IR? EDIT:
I'm having a bit of an issue here. People want to say that an inductance of zero will allow for instantaneous current initiation, but I wonder. Is it ever possible to accelerate an object with mass infinitely quickly with nothing but an electric field? Sounds impossible more than likely. I feel that an instantaneous acceleration can only be caused by an infinite amount of energy given. | The only reason the current would be zero is if you have a non-zero inductance (which all circuits do). Once you factor in the inductance, you'll be able to calculate the rise time of the current after you apply a voltage. Only in the ideal world where there's no inductance or capacitance, will V=IR be true at all times over a resistor. In our non-ideal world, you're right that V=IR only applies at steady state after the influence from capacitance and inductance fall away. | {
"source": [
"https://electronics.stackexchange.com/questions/121162",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/49197/"
]
} |
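To attach the standard formula to the answer above: with a series inductance \$L\$ in the loop, applying a voltage step \$V\$ across a resistance \$R\$ gives the familiar first-order response

$$i(t) = \frac{V}{R}\left(1 - e^{-t/\tau}\right), \qquad \tau = \frac{L}{R}$$

so the current starts at zero and approaches \$V/R\$ with time constant \$\tau\$; only in the idealized \$L = 0\$ case does \$i = V/R\$ hold at the very instant the voltage is applied.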
121,180 | When someone says they "flashed" firmware to a device, I'm curious as to what this actually implies. Is firmware just a native binary that is stored in memory and that can be interpreted by the device's CPU (like any other EXE on a computer)? Or is firmware just data that acts as input to an immutable program already hardcoded/wired onto the device? If the latter example isn't firmware, what would you call it? For instance, let's say a device has some binary ( someapp.exe ) on it, and you cannot remove or modify this binary. The binary, when ran, takes input from a memory chip. You can "flash" data to this chip, and thus affect the input/configuration of someapp.exe that will take affect the next time it runs. If not firmware, what would this be called? | As often with such definitions, we agree in most cases, but there is no really firm boundary between what is firmware and what isn't. Firmware is stored permanently (except for some knowledgeable person who can change it ...) not intended to be changed (except ...) operates on the processor without the help of other software (except ... you get it?) As to data that is interpreted by a (firmware) interpreter: this is not often done in a professional setting, because it makes the product more expensive: more memory, CPU power, etc. is needed to achieve the same end goal. It is however sometimes used in hobbyist setting, often with a Basic interpreter in flash, and a (tokenized) Basic application stored in eeprom (or in Flash too). Check for instance the PICAXE and the various Basic stamps. IMO in such a setting both the Basic interpreter and the Basic application should be called firmware. An interesting use of a firmware interpreter that interprets stored code (which should IMO be considered firmware too) is the XBOX 360 startup. This excellent talk describes it in some detail. Below MSalters wonders whether FPGA code / configuration data should be considered firmware. In the aspects that matter most (it is information that is changeable late in the production process, but it is not intended to be changed at will by the end user) FPGA bits behave like firmware. That makes the question whether it is firmware according to any definition moot. The important point is that it can (and should) be written, handled and managed like firmware. (If it walks and quacks like a duck, is it a duck?) Don't bother with definitions when they are not useful. Is microcode firmware? Does representation matter? Does context matter? Are the ROM bits for an IWM firmware? Vaxquis' comment to OP's question prompted me to read the wiki article he links to. IMO the definition of firmware given there (persistent memory and program code and data stored in it) is troublesome. IMO the maps stored in a car navigation system are data, not firmware, no matter how they are stored (according to the wiki they should be firmware). And the apps in your iPhone or Android phone are applications, not firmware (according to the wiki they too should be firmware). | {
"source": [
"https://electronics.stackexchange.com/questions/121180",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/16643/"
]
} |
My CAN bus is running at 125 kbit/s and is using extended frame format exclusively. I would like to know the maximum rate of CAN frames I can send out. Suppose the data length is always eight bytes. According to this Wikipedia page , each frame has a maximum frame length of (1+11+1+1+18+1+2+4+64+15+1+1+1+7) = 128 bits: Taking into account a minimum three-bit interframe spacing , the maximum packet rate at 125 kbit/s should be: 125000 / ( 128 + 3) = 954 frames per second. But in my test, I couldn't get that high. The maximum frame rate I can achieve (with all eight data bytes) is around 850 frames per second. What's wrong here - my calculation, or my test method? | Per Olin Lathrop's suggestion, I'll expand on bit-stuffing. CAN uses NRZ coding, and is therefore not happy with long runs of ones or zeroes (it loses track of where the clock edges ought to be). It solves this potential problem by bit-stuffing. When transmitting, if it encounters a run of 5 successive ones or zeroes it inserts a bit of the other polarity, and when receiving, if it encounters 5 successive ones or zeroes it ignores the subsequent bit (unless the bit is the same as the previous bits, in which case it issues an error flag). If you are sending all zeroes or all ones for your test data, a string of 64 identical bits will result in the insertion of 12 stuffed bits. This will increase total frame length to 140 bits, with a best-case frame rate of 874 frames/sec. If the data bits are the same as the MSB of the CRC, you'll get another stuffed bit there, and the frame rate drops to 868 frames/sec. If the CRC has long runs of ones or zeroes, that will reduce the frame rate even further. The same consideration applies to your identifiers. A total of 16 stuffed bits will produce an ideal frame rate of 850.3 frames/sec, so you ought to consider it. A quick test would be to use test data with alternating bits, and see what happens to your frame rate. | {
"source": [
"https://electronics.stackexchange.com/questions/121329",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/13694/"
]
} |
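The arithmetic above is easy to reproduce. Here is a small Python sketch (an illustration only, using the frame-length assumptions from the question and answer) showing how stuffed bits pull the rate down from 954 toward roughly 850 frames per second:

```python
def max_frame_rate(bitrate_bps, frame_bits=128, stuffed_bits=0, ifs_bits=3):
    """Best-case extended-frame rate for a given number of stuffed bits."""
    return bitrate_bps / (frame_bits + stuffed_bits + ifs_bits)

for stuffed in (0, 12, 13, 16):
    rate = max_frame_rate(125_000, stuffed_bits=stuffed)
    print(f"{stuffed:>2d} stuffed bits -> {rate:6.1f} frames/s")
# 0 -> ~954, 12 -> ~874, 13 -> ~868, 16 -> ~850 frames/s, which matches the
# ~850 frames/s measured when the test data invites heavy bit-stuffing.
```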
121,522 | I'm searching around digi-key when I noticed one of the MEMS oscillators has these copper pieces on the edge of the package: I'm not talking about the pads, I'm just talking about those little copper pieces on the side of the package. I've seen them before and I've always wondered but I never really thought to question it. Now I'm curious. What are those for? Are they the internal copper connections, and if so, why are they brought out to a visible location other than the pads you solder to? | For plastic overmoulded parts, the lead-frame (correct terminology) is held in place while the plastic is injected. For packages like PTH (Pin through hole) or j-leads etc. the surrounding support is cut away with shears and the individual chip is freed. In the package you show, these pads cannot be sheared and must be moulded into the package. that means that the lead frame must exit the package while the moulding/injection is happening (for support - locating reasons). These are then sheared off the end of the package and the chip is then freed. This applies to over-moulded packages only. Ceramic packages can be multilayered mini circuit boards equivalents. That particular package is a VFQFN - Which is similar to a MLF (Micro Lead Frame) from Amkor. Here is a snip from their website Here is a manufacturing drawing for a QFN ( a little more complicated than the package in the OP) On this I've drawn a red square which shows the boundary that the saw will cut on to define the package edge. Note: the N in QFN means "No Lead" the red circle shows the die attach stabilization structure as it is brought to the edge of the package. And finally here is a picture showing the moulding process being modeled. I've put a red circle on this to show the die attach stabilization structure again. The external frame is not shown in this picture. One remaining thing to note: In the OP post the copper on the side of the package appears to be on a separate plane. Whereas the pads are exposed on both the sides and bottom, these guys are only exposed on the side. There are several ways to accomplish this, one way is to look at the very first drawing and note that the "Cu leadframe" or "exposed die paddle" have steps at the edges. The leads that do NOT need to contact on the bottom are etched back during manufacture. | {
"source": [
"https://electronics.stackexchange.com/questions/121522",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/28592/"
]
} |
121,772 | I'm not sure if I'm in the right place or not, but I figured someone here could maybe provide a good answer. I want to know how electricity is able to flow so fast. For example videos games nowadays, you can shoot someone across the world and they die almost instantly. How is electricity able to do this? I was trying to google this question but found poor results, that's why I am here. | This isn't something that can be answered in a single post, by a single person. However, I hope this answer provides enough information and links to be helpful. It is important to understand how signals are transmitted over the Internet. Note however that due to noise and the immense number of users, the same signal needs to be encoded, decoded, retransmitted, etc so the time needed for processing is many orders of magnitude more that the actual electrical signal needs to travel. Also have in mind at a millisecond is an very large amount of time for a computer; a GeForce Quadro K6000 graphics card can perform 5.000.000.000+ floating point operations in that much time (5196 GFlops times 1ms). Conductive cables: The electrons themselves don't move that fast because they bounce around inside of the conducting cables. However electricity doesn't travel based on electrons bouncing one onto the other, rather one repelling the other through electromagnetic interaction: Say you have 3 electrons in line (assume one dimensional space). Move the first a little bit. The distance of the first to the second gets a little smaller. The electrostatic force on them gets a little larger. According to Coulomb's Law it is: $$\|F\|=k_e\frac{q_1 q_2}{r^2}$$
where: \$\|F\|\$ is the magnitude of the force, \$k_e\$ is Coulomb's constant, \$q_1\$ and \$q_2\$ is the charge of each of the two particles and finally \$r^2\$ is the distance between them. As the first particle moves towards the second, the electrostatic force increases almost instantly. This causes the second particle to move a little bit towards the third etc. "Almost instantly" actually means "at the speed of light " (\$c=299,792,458m/s\$). There is an extreme number of electrons inside a conducting wire and the physics are a bit more complicated but the gist of it is a signal gets across a conductor "almost instantly" but slower than \$c\$. Optical Fibre: Optical fibre cables transmit signals by photons instead of electrons. Even in this case however, the photons don't travel in a straight line. However, the time needed for the photon to travel across the line is still very small compared to the processing time to encode and decode the signals, as well as packet retransmissions. Wireless: Finally, communication satellites as well as numerous types of wireless links are used to transmit signals, well, wireless using a great number of transmission protocols, modulations and frequencies. In this case, signals are transmitted using electromagnetic radiation . This a very complex subject and I can't possibly cover it all. Smart ways to encode information into electrical signals: It is not enough for a voltage pulse to reach the other end of a wire; that voltage is there to convey some information. The act of encoding information by modifying a carrier signal based on the information to be transmitted (carried, hence the name carrier), is called modulation . Smart ways to share the same channels: All these communication channels need to be connected and information needs to travel across this vast network in a reliable way. Initially, to have two nodes communicate with each other, they would reserve a number of cables forming a path from node A to node B. No other node would be able to use this same path. This is called circuit switching . The breakthrough that made such a vast network such as the internet possible was the ability for numerous nodes to share one particular communication channel. This sharing was enabled by packet switching . Instead of reserving a circuit just for two nodes, every node just checks if the bus is free, then transmits a packet containing data and destination info (and some other stuff) and then releases the channel. Packets need to find their destination and this is called packet routing , which is another huge subject. Routing and the need for modulation is the main reason a packet takes "so long" to reach it's destination compared to how fast electromagnetic waves travel. Routing is also necessary for all those users to coexist on the same network. The Internet: All these thing, along with numerous other technologies, are used together to form The Internet . Lag Compensation: In many applications, including competitive video games, a few milliseconds of delay would be unacceptable, especially when a server needs to register a "hit". That's where lag compensation comes into place. One of the methods used involves the server keeping a short history of each entity position and animation state. Then perform a number of tests and physics simulations to see if a "hit" would occur when a player "fires" their weapons, based on the lag, velocity, and animation state of each entity plus the world geometry. | {
"source": [
"https://electronics.stackexchange.com/questions/121772",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/49585/"
]
} |
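For a rough sense of scale behind the point above that propagation is fast but not instantaneous, here is a back-of-the-envelope estimate (the distance and the two-thirds-of-c figure for glass fibre are assumed for illustration):

```python
c = 299_792_458            # m/s, speed of light in vacuum
v_fibre = 2 / 3 * c        # assumed propagation speed in optical fibre
distance_m = 10_000_000    # ~10,000 km, e.g. an intercontinental route

delay_ms = distance_m / v_fibre * 1000
print(f"one-way propagation delay: {delay_ms:.0f} ms")   # ~50 ms
# A round trip is ~100 ms before any routing, queuing, or encoding/decoding
# time is added, which is why real-world latency across the globe is larger
# than propagation alone would suggest.
```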
I've recently talked with a friend about LaTeX compilation. LaTeX can use only one core to compile. So for the speed of LaTeX compilation, the clock speed of the CPU is most important (see Tips for choosing hardware for best LaTeX compile performance ). Out of curiosity, I've looked for CPUs with the highest clock speeds. I think it was the Intel Xeon X5698 at 4.4 GHz ( source ) which had the highest clock speed. But this question is not about CPUs that get sold. I would like to know how fast it can get if you don't care about the price. So one question is: Is there a physical limit to CPU speed? How high is it? And the other question is: What is the highest CPU speed reached so far? I've always thought that CPU speed was limited because cooling (so heat ) gets so difficult. But my friend doubts that this is the reason (when you don't have to use traditional / cheap cooling systems, e.g. in a scientific experiment). In [2] I've read that transmission delays cause another limitation in CPU speed. However, they don't mention how fast it can get. What I've found [1] Scientists Find Fundamental Maximum Limit for Processor Speeds : Seems to be only about quantum computers, but this question is about "traditional" CPUs. [2] Why are there limits on CPU speed? About me I am a computer science student. I know something about the CPU, but not too much. And even less about the physics that might be important for this question. So please keep that in mind for your answers, if possible. | Practically, what limits CPU speed is both the heat generated and the gate delays, but usually, the heat becomes a far greater issue before the latter kicks in. Recent processors are manufactured using CMOS technology. Every time there is a clock cycle, power is dissipated. Therefore, higher processor speeds mean more heat dissipation. http://en.wikipedia.org/wiki/CMOS Here are some figures: Core i7-860 (45 nm) 2.8 GHz 95 W
Core i7-965 (45 nm) 3.2 GHz 130 W
Core i7-3970X (32 nm) 3.5 GHz 150 W You can really see how the CPU transition power increases (exponentially!). Also, there are some quantum effects which kick in as the size of transistors shrink. At nanometer levels, transistor gates actually become "leaky". http://computer.howstuffworks.com/small-cpu2.htm I won't get into how this technology works here, but I'm sure you can use Google to look up these topics. Okay, now, for the transmission delays. Each "wire" inside the CPU acts as a small capacitor. Also, the base of the transistor or the gate of the MOSFET act as small capacitors. In order to change the voltage on a connection, you must either charge the wire or remove the charge. As transistors shrink, it becomes more difficult to do that. This is why SRAM needs amplification transistors, because the actually memory array transistors are so small and weak. In typical IC designs, where density is very important, the bit-cells have very small transistors. Additionally, they are typically built into large arrays, which have very large bit-line capacitances. This results in a very slow (relatively) discharge of the bit-line by the bit-cell. From: How to implement SRAM sense amplifier? Basically, the point is that it is harder for small transistors to drive the interconnects. Also, there are gate delays. Modern CPUs have more than ten pipeline stages, perhaps up to twenty. Performance Issues in Pipelining There are also inductive effects. At microwave frequencies, they become quite significant. You can look up crosstalk and that kind of stuff. Now, even if you do manage to get a 3265810 THz processor working, another practical limit is how fast the rest of the system can support it. You either must have RAM, storage, glue logic, and other interconnects that perform just as fast, or you need an immense cache. | {
"source": [
"https://electronics.stackexchange.com/questions/122050",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/49719/"
]
} |
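The per-clock-cycle dissipation mentioned above is usually summarized by the textbook CMOS dynamic-power relation (quoted here for reference):

$$P_{dyn} \approx \alpha\, C\, V_{dd}^{2}\, f$$

where \$\alpha\$ is the switching activity, \$C\$ the switched capacitance, \$V_{dd}\$ the supply voltage and \$f\$ the clock frequency. Since raising \$f\$ usually also requires raising \$V_{dd}\$ so the transistors switch fast enough, power grows much faster than linearly with clock speed, which is the trend visible in the i7 figures quoted above.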
123,059 | Put simply, why do some diodes such as most Zeners and Schottky diodes have a glass package as opposed to the more traditional plastic package? Is it ease of manufacturing, thermal properties, or some other electrical phenomenon? | Early semiconductor diodes were mostly glass packaged which provided the advantage that they were hermetic and did not depend on passivation of the chip to survive heat and humidity. The glass package also allows a very high operating temperature. Early devices such as the 1N34A (germanium) and the 1N914 as well as the 1N7xx Zener series became very popular and inexpensive. Plastic-packaged devices were developed to reduce costs where high performance was not so important. For example, the glass 1N4148 has a maximum junction temperature of 200 °C compared to only 150 °C for the plastic-packaged 1N4001. Ceramic packaged diodes have also been produced. | {
"source": [
"https://electronics.stackexchange.com/questions/123059",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/4647/"
]
} |
I'm still searching for an answer to this question: why, when the STM32 MCUs have a perfect watchdog (I mean the Window watchdog (WWDG)), is there also a simple watchdog (the Independent watchdog (IWDG))? I found this page that says: ST Microelectronics has a line of Cortex-M3 devices. The M3 has become extremely popular for lower-end embedded devices, and ST's STM32F is representative of these parts (though the WDT is an ST add-on, and does not necessarily mirror other vendors' implementations). The STM32F has two different protection mechanisms. An "Independent Watchdog" is a pretty vanilla design that has little going for it other than ease of use. But their Window Watchdog offers more robust protection. When a countdown timer expires, a reset is generated, which can be impeded by reloading the timer. Nothing special there. But if the reload happens too quickly, the system will also reset. In this case "too quickly" is determined by a value one programs into a control register. Another cool feature: it can generate an interrupt just before
resetting. Write a bit of code to snag the interrupt and you can take
some action to, for instance, put the system in a safe state or to
snapshot data for debugging purposes. ST suggests using the ISR to
reload the watchdog -- that is, kick the dog so a reset does not
occur. Don't take their advice. If the program crashes the interrupt
handlers may very well continue to function normally. And using an ISR
to reload the WDT invalidates the entire reason for a window watchdog. and this : STMicroelectronics' new series of STM32F4 Cortex™-M4 CPUs has two
independent watchdogs. One runs from its own internal RC oscillator.
That means that all kinds of things can collapse in the CPU and the
WDT will still fire. There is also a “window watchdog” (WWDT) which
requires the code to tickle it frequently, but not too often. This is
a very effective way to insure crashed code that randomly writes to
the protection mechanism does not cause a WDT tickle, and the WWDT can
generate an interrupt shortly before reset is asserted. OK, let's take a look at the reference manual : The STM32F10xxx have two embedded watchdog peripherals which offer a
combination of high safety level, timing accuracy and flexibility of
use. Both watchdog peripherals (Independent and Window) serve to
detect and resolve malfunctions due to software failure, and to
trigger system reset or an interrupt (window watchdog only) when the
counter reaches a given timeout value. The independent watchdog (IWDG)
is clocked by its own dedicated low-speed clock (LSI) and thus stays
active even if the main clock fails. The window watchdog (WWDG) clock
is prescaled from the APB1 clock and has a configurable time-window
that can be programmed to detect abnormally late or early application
behavior. The IWDG is best suited to applications which require the
watchdog to run as a totally independent process outside the main
application, but have lower timing accuracy constraints. The WWDG is
best suited to applications which require the watchdog to react within
an accurate timing window. The window watchdog is used to detect the occurrence of a software
fault, usually generated by external interference or by unforeseen
logical conditions, which causes the application program to abandon
its normal sequence. The watchdog circuit generates an MCU reset on
expiry of a programmed time period, unless the program refreshes the
contents of the downcounter before the T6 bit becomes cleared. An MCU
reset is also generated if the 7-bit downcounter value (in the control
register) is refreshed before the downcounter has reached the window
register value. This implies that the counter must be refreshed in a
limited window. As you can see, neither of them explains why there are two watchdogs. If I ask what the differences between the two watchdogs are, you will list all the features shown above, and if you compare the two, obviously the Window watchdog (WWDG) will be the winner! So why are there two watchdogs? I want to know when I should use the IWDG and when the WWDG, and is there any reason why they call the second one by this name -> "Window watchdog"? | Regular watchdog timers must be reset at some time before they time out. If you have a 100ms WDT you can reset it every 99.9ms or every 10us and it will never time out. Window watchdog timers have a time window within which they must be reset. If you reset it too early or too late (from the previous reset) it will cause the processor to reset. The purpose, if it is not obvious, is to help ensure that the code resetting the WDT is the intended code, operating in the intended fashion. Some kind of unforeseen condition that generates high-frequency WDT resets won't prevent the system from being reset. Running a WDT from the system clock could be a bit of an issue - if the clock fails and if there is not an independent clock monitor circuit, bad things can happen. The independent clock for the WDT means that if the thing for some reason started running at 1/10 speed, the WDT would reset (but the window WDT would not). Use both if you can. As the page says, resetting the WDT with an ISR is generally bad juju (but may be acceptable if the ISR verifies the rest of the firmware is functioning before resetting the timer). | {
"source": [
"https://electronics.stackexchange.com/questions/123080",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/29617/"
]
} |
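To make the "too early or too late" behaviour concrete, here is a tiny behavioural model in Python (purely illustrative; real firmware would program the WWDG registers rather than anything like this):

```python
class WindowWatchdog:
    """Toy model: a refresh is only legal inside [window_open_ms, timeout_ms]."""
    def __init__(self, window_open_ms, timeout_ms):
        self.window_open_ms = window_open_ms   # earliest legal refresh time
        self.timeout_ms = timeout_ms           # latest legal refresh time
        self.elapsed_ms = 0

    def tick(self, ms):
        self.elapsed_ms += ms
        if self.elapsed_ms > self.timeout_ms:
            raise SystemError("reset: refreshed too late")

    def refresh(self):
        if self.elapsed_ms < self.window_open_ms:
            raise SystemError("reset: refreshed too early")
        self.elapsed_ms = 0

wdt = WindowWatchdog(window_open_ms=20, timeout_ms=50)
wdt.tick(30); wdt.refresh()     # OK: the refresh lands inside the window
wdt.tick(5)
try:
    wdt.refresh()               # too early: a runaway refresh loop still resets
except SystemError as e:
    print(e)
```

This is the property a plain watchdog lacks: crashed code that hammers the refresh as fast as possible keeps a regular WDT happy, but trips a window WDT.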
I'm constructing a boost converter, and I need to measure both the input current and the output current. Currents range anywhere from 25A to 200A, depending on the model. My controller is referenced to the negative rail of the converter. I've been focusing on hall effect sensors, but it occurs to me that I could use shunt resistors in the negative leg instead. What are the advantages and disadvantages of each approach? | I am not an expert in the field but I can try to help by jotting down some quick ideas. Hall effect sensor pros: galvanic isolation between the measurement circuit and the circuit to be measured they can be placed anywhere on the current path (voltage is not a problem), thus ease of installation and of later servicing they hardly affect the measured current, so they are great if this is a concern cons: cost: a high current, precise sensor can cost tens of bucks bandwidth: the sensor and the sensed wire are coupled through a transformer, and of course it has its own frequency response. A piece of copper (aka shunt resistor) is less affected by this problem. magnetic fields: an external fixed magnetic field can cause an offset in the measurement that must be somehow taken into account Shunt resistor pros: small and cheap, I bet that with a good PCB manufacturer you can make your shunt resistor on the PCB paying only for the increased size, but keep in mind that the copper resistivity depends on temperature; moreover, the PCB outer layer thickness is not precise, while inner layers are somewhat better. You can get cheap SMD shunt resistors down to 1m\$\Omega\$ from Ohmite cons: they can dissipate quite a lot of power, and a tradeoff exists between precision and dissipated power. They can get quite hot too. they do affect the measured circuit, namely there's a voltage drop across them and that might not be acceptable for very low voltage, high current applications. You can't measure the current consumed by an array of cores that are powered with 1.8V with a shunt that drops some 100mV That is just what comes off the top of my head; I'd be very happy to integrate/correct this list reflecting any reasonable comment from below. | {
"source": [
"https://electronics.stackexchange.com/questions/123117",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/7523/"
]
} |
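To attach numbers to the power-dissipation trade-off listed above, using the current range from the question and an assumed \$1\,\text{m}\Omega\$ shunt:

$$P = I^2 R = (200\,\text{A})^2 \times 0.001\,\Omega = 40\,\text{W}, \qquad V_{drop} = I R = 0.2\,\text{V}$$

at the top of the range, versus only about \$0.6\,\text{W}\$ and \$25\,\text{mV}\$ at 25 A. At these currents the shunt's heating, and the amplification needed for the small sense voltage, both become real design considerations.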
123,132 | I've been looking for a part that I can boost a 10 mV signal to a 10 volt signal
using a single op amp, so far most of the op amps I've found only meet the gain requirements, but cannot output 10 volts. | I am not an expert in the field but I can try to help jotting down some quick ideas. Hall effect sensor pros: galvanic insulation between the measurement circuit and the circuit to be measured they can be placed anywhere on the current path (voltage is not a problem), thus easyness of installation and eventually servicing they nearly do not affect the measured current so they are great if this is a concern cons: cost: a high current, precise sensor can cost tens of bucks bandwidth: the sensor and the sensed wire are coupled through a transformer, and of course it has its own frequecny response. A piece of copper (aka shunt resistor) is less affected by this problem. magnetic fields: an external fixed magnetic field can cause an offset in the measurement that must be somehow taken into account Shunt resistor pros: small and cheap, I bet that with a good pcb manufacturer you can make your shunt resistor on the pcb paying only for the increased size, but keep in mind that the copper resistivity depends on temperature, moreover the pcb outer layers thickness is not precise while inner layers are somewhat better. You can get cheap SMD shunt resistors down to 1m\$\Omega\$ from ohmite cons: they can dissipate quite an amount of power, and a tradeoff exists between precision and dissipated power. They can get quite hot too. they do affect the measured circuit, namely there's a voltage drop across them and that might not be acceptable for very low voltage, high current applications. You can't measure the current consumed by an array of cores that are powered with 1.8V with a shunt that drops some 100mV That is just what comes to me from the top of my mind, I'd be very happy to integrate/correct this list reflecting any reasonable comment from below. | {
"source": [
"https://electronics.stackexchange.com/questions/123132",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/50257/"
]
} |
When the internet appears to have crashed, the service provider invariably gives this advice as a first-line remedy: unplug the router from the wall socket, then wait 5 minutes, then plug it back in. Many times this remedy works. Question: Why does the router care if it's been unplugged from the wall versus simply being turned off? And more interestingly, what happens during the 5-minute interval that the router is unplugged; if it has no electricity, is it not merely in a deadened state? | What you are waiting for can be one of two things. One is for the ISP to "release" your dynamic IP address; after 'x' minutes, when powered back on, the MRC (modem/router combo) will have an IP address re-assigned to its MAC address. The other reason is to allow an internal capacitor to discharge completely, so that the volatile memory that contains the cache is cleared. Clearing this cache can often "resolve" the issue. | {
"source": [
"https://electronics.stackexchange.com/questions/123157",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/50278/"
]
} |
123,660 | MCU: ATTiny13 I noticed this after trying to debug why pushing my switch (connected via R2, a 507kOhm pulldown resistor) makes the LED dimmer while depressed. The switch was powered by the same supply line as the Vcc input to the microcontroller. Upon disconnecting the Vcc input (Pin 8), I noticed that the LED was still lit when the switch was depressed. If I removed a connection from the ground pin 4, the LED still lit up, but less brighter. The circuit below represents what I observed. The switch is removed to simplify the problem: Why does this happen, and how can I stop it? It is interfering with the output when the button is depressed. Here is a picture of the circuit on a breadboard. The supply line (5V is the red wire, Ground is black): | Inputs of many modern CMOS devices have ESD protection diodes from the I/O pins to the supply rails, which hope to divert transient overvoltages to the supply before they cause damage. A side effect of this is that the chip can, at least to a degree be powered through an I/O pin, once the pin rises enough against the (unsupplied) supply to forward bias the diode. Even in technologies without explicit protection diodes, it could happen to a degree, though often resulted in very unreliable operation (classic mistake - forget to power a chip and see it "sort of" work - I did it myself with an SPI flash this past January that somehow never got a ground, and would provide expected responses right up until I tried to write flash locations). Generally you do not want to power a chip this way - it is outside the absolute maximum ratings, and the protection diode may not be sized to carry the full operating current. You do see it at times though, both in intentional experiments, such as an RF-powered ATTiny RFID tag emulator experiment, or accidentally in cases such as trying to measure power consumption of a sleeping MCU and having it actually draw power from your serial debug port rather than the supply you are trying to measure. | {
"source": [
"https://electronics.stackexchange.com/questions/123660",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/29737/"
]
} |
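A rough back-of-the-envelope estimate of the back-feeding effect described in the record above; every number here (drive voltage, diode drop, series resistance, the voltage the floating rail settles at) is an assumption for illustration, since real protection-diode behaviour varies from part to part.

    # Estimate the current sneaking into an unpowered chip through the
    # upper ESD protection diode of an I/O pin (all values assumed).
    v_pin = 5.0       # voltage driven onto the I/O pin
    v_diode = 0.6     # assumed forward drop of the protection diode
    v_rail = 1.5      # assumed voltage the floating VCC rail settles at
    r_series = 330.0  # assumed series resistance in the signal path, ohms

    i_backfeed = max(0.0, (v_pin - v_diode - v_rail) / r_series)
    print(f"Back-fed current is roughly {i_backfeed * 1000:.1f} mA")

A few milliamps is enough to make an LED glow dimly or to "sort of" run a small MCU, but it flows through a diode that was never sized for continuous operating current.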
123,665 | A standard form of a first order differential equation is: (1) $$\tau \frac{dy}{dt} + y = k * x(t)$$ The laplace transform of this: (2) $$G(s) = \frac{Y(s)}{X(s)} = \frac{k}{\tau s+1}$$ but sometimes it is given as (3) $$H(s) = \frac{1}{\tau s +1} = \frac{a}{s+a}$$ A standard form of a second order differential equation is: (4) $$\tau ^{2} \frac{d^{2}y}{dt^{2}}+2 \tau \zeta \frac{dy}{dt} + y = k * x(t)$$ The laplace transform of this: (5) $$G(s) = \frac{Y(s)}{X(s)} = \frac{k}{\tau^2s^2 + 2\tau\zeta s+1}$$ but sometimes this is given as (6) $$H(s) = \frac{\omega_n^2}{s^2+2\zeta \omega_n s + \omega_n^2}$$ Here are my questions: What is the physical meaning of "first" and "second order"? (apart from the fact that the highest power of the differential in the first is 1 and in the second is 2). How do I know if a system is first or second order? Where do equations (1) and (4) come from? Why were these decided to be the "standard form"? What is so special about this form and how were these equations derived? When given a first order system, why is sometimes equation (2) given, and sometimes equation (3) as the transfer function for this system? Likewise, when given a second order system why is equation (6) usually given, when the laplace transform is actually equation (5)? | Inputs of many modern CMOS devices have ESD protection diodes from the I/O pins to the supply rails, which hope to divert transient overvoltages to the supply before they cause damage. A side effect of this is that the chip can, at least to a degree be powered through an I/O pin, once the pin rises enough against the (unsupplied) supply to forward bias the diode. Even in technologies without explicit protection diodes, it could happen to a degree, though often resulted in very unreliable operation (classic mistake - forget to power a chip and see it "sort of" work - I did it myself with an SPI flash this past January that somehow never got a ground, and would provide expected responses right up until I tried to write flash locations). Generally you do not want to power a chip this way - it is outside the absolute maximum ratings, and the protection diode may not be sized to carry the full operating current. You do see it at times though, both in intentional experiments, such as an RF-powered ATTiny RFID tag emulator experiment, or accidentally in cases such as trying to measure power consumption of a sleeping MCU and having it actually draw power from your serial debug port rather than the supply you are trying to measure. | {
"source": [
"https://electronics.stackexchange.com/questions/123665",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/35366/"
]
} |
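On the question in the record above: the two ways of writing each transfer function are related by simple algebra. The sketch below assumes unity DC gain (k = 1) when comparing against the normalized forms (3) and (6):

$$\frac{k}{\tau s + 1} = \frac{k/\tau}{s + 1/\tau} \quad\Rightarrow\quad \frac{a}{s + a} \;\text{ with }\; a = \frac{1}{\tau},\; k = 1$$

$$\frac{k}{\tau^2 s^2 + 2\tau\zeta s + 1} = \frac{k/\tau^2}{s^2 + \dfrac{2\zeta}{\tau} s + \dfrac{1}{\tau^2}} \quad\Rightarrow\quad \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2} \;\text{ with }\; \omega_n = \frac{1}{\tau},\; k = 1$$

So forms (3) and (6) are just forms (2) and (5) with the time constant rewritten as a corner frequency (a and ωn both equal 1/τ) and the gain normalized to one; the order of the system is simply the highest power of s in the denominator.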
123,760 | Wikipedia's Instructions per second page says that an i7 3630QM delivers ~110,000 MIPS at a frequency of 3.2 GHz; that would be (110,000 MIPS / 3.2 GHz) / 4 cores = ~8.6 instructions per cycle per core?!
How can a single core deliver more than one instruction per cycle? To my understanding a pipeline should only be able to deliver one result per clock. These are my thoughts: Internal frequency is actually higher than 3.2 GHz Some parts of the CPU are asynchronous in a way a humble human like myself cannot understand There are multiple concurrent pipelines per core A pipeline can deliver more than one result per clock, an instruction can skip pipeline stages and there are multiple prefetchers to keep up I am missing something
Another technique used by some x86 processors reduces the cost of push and pop operations. Ordinarily an instruction using the stack pointer would have to wait a full cycle for a previous push or pop to update the value for the stack pointer. By recognizing that push and pop only add or subtract a small value to the stack pointer, one can compute the results of multiple additions/subtractions in parallel. The main delay for addition is carry propagation, but with small values the more significant bits of the base value—in this case the stack pointer—will only have at most one carry-in. This allows an optimization similar to that of a carry-select adder to be applied to multiple additions of small values. In addition, since the stack pointer is typically only updated by constants, these updates can be performed earlier in the pipeline separately from the main execution engine. It is also possible to merge instructions into a single, more complex operation. While the reverse process of splitting instructions into multiple, simpler operations is an old technique, merging instructions (which Intel terms macro-op fusion) can allow the implementation to support operations more complex than those exposed in the instruction set. On the theoretical side, other techniques have been proposed. Small constants other than zero could be supported in the RAT and some simple operations that use or reliably produce such small values could be handled early. ("Physical Register Inlining", Mikko H. Lipasti et al., 2004, suggested using the RAT as a means of reducing register count, but the idea could be extended to support loading small immediates and simple operations on small numbers.) For trace caches (which store sequences of instructions under particular assumptions of control flow), there can be opportunities to merge operations separated by branches and remove operations that produce unused results in the trace. The caching of the optimizations in a trace cache can also encourage performing optimizations such as instruction merging which might not be worthwhile if they had to be done each time the instruction stream was fetched. Value prediction can be used to increase the number of operations that can be executed in parallel by removing dependencies. A stride-based value predictor is similar to the pop/push optimization of a specialized stack engine mentioned earlier. It can compute multiple additions mostly in parallel, removing the serialization. The general idea of value prediction is that with a predicted value, dependent operations can proceed without delay. (Branch direction and target prediction is effectively just a very limited form of value prediction, allowing the fetching of following instructions which are dependent on the "value" of the branch—taken or not—and the next instruction address, another value.) | {
"source": [
"https://electronics.stackexchange.com/questions/123760",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/50555/"
]
} |
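Spelling out the arithmetic from the question above, using the figures quoted there (~110,000 Dhrystone MIPS, 3.2 GHz, 4 cores):

    # Instructions per cycle per core implied by the quoted benchmark numbers.
    mips = 110_000        # Dhrystone MIPS quoted for the i7 3630QM
    freq_mhz = 3_200      # 3.2 GHz expressed in MHz
    cores = 4

    ipc_per_core = mips / freq_mhz / cores
    print(f"~{ipc_per_core:.1f} instructions per cycle per core")   # prints ~8.6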
125,011 | The new USB Type C connector doesn't have a physical reverse polarity protection any more. You can plug it in any way you want on both ends, there is also no A and B end any more, it's all the same. So how does this new USB type handle that the polarity doesn't end up being reversed or signals being routed to the wrong point? Is there some sort of routing going on in the connector and the devices don't have to handle anything and can be sure the polarity is always correct? This is assuming that not half the signals in the cable are redundant. | Below is the pinout for the receptacle: GND TX1+ TX1- Vbus CC1 D+ D- SBU1 Vbus RX2- RX2+ GND
| | | | | | | | | | | |
=+====+====+====+====+====+====+====+====+====+====+====+=
| | | | | | | | | | | |
GND RX1+ RX1- Vbus SBU2 D- D+ CC2 Vbus TX2- TX2+ GND You will note that all the pins are rotationally symmetric, so if you flip the connector, TX1+ connects to TX2+, TX1- connects to TX2-, etc. and most importantly, Vbus and GND always match up. The trick lies in the controller and cable -- the CC pins are used to detect orientation, at which point the controller routes appropriately: 2.3.2 Plug Orientation/Cable Twist Detection The USB Type-C plug can be inserted into a receptacle in either one of two orientations, therefore the CC pins enable a method for detecting plug orientation in order to determine which SuperSpeed USB data signal pairs are functionally connected through the cable. This allows for signal routing, if needed, within a DFP or UFP to be established for a successful connection. As you might imagine, the cables are going to be a fair bit heftier due to the extra wires. A minimum of 15 wires plus braid required for full-featured Type-C (i.e. USB 3.1 -- recommended 4-6mm outer diameter) 10 wires plus braid for legacy Type-C USB 3.0/3.1 cables (intended to connect to Type-A or Type-B on the other end -- recommended 3-5mm outer diameter) For USB 2.0 or earlier, whether connecting to Type-C or a legacy type on the other end, the usual four wire configuration is permitted (recommended 2-4mm outer diameter) Source: USB 3.1 Specification @ usb.org -- specifically, the Universal Serial Bus Revision 3.1 Specification PDF available for download at the top of the page) Also a great blog post explaining all the details about the Configuration Channel pin: http://kevinzhengwork.blogspot.de/2014/09/usb-type-c-configuration-channel-cc-pin.html Archive.org (in case it goes offline) | {
"source": [
"https://electronics.stackexchange.com/questions/125011",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/17371/"
]
} |
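The rotational symmetry that the answer above relies on can be checked mechanically. The sketch below takes the receptacle pinout quoted in that answer and shows which pin each position lands on after a 180-degree flip of the plug:

    # USB Type-C receptacle pinout as quoted in the answer above (top and bottom rows).
    top    = ["GND", "TX1+", "TX1-", "Vbus", "CC1",  "D+", "D-", "SBU1", "Vbus", "RX2-", "RX2+", "GND"]
    bottom = ["GND", "RX1+", "RX1-", "Vbus", "SBU2", "D-", "D+", "CC2",  "Vbus", "TX2-", "TX2+", "GND"]

    # Flipping the plug 180 degrees maps top position i onto bottom position 11 - i.
    for i, name in enumerate(top):
        print(f"{name:5s} <-> {bottom[11 - i]:5s}")

Every GND and Vbus lands back on GND and Vbus, D+/D- land on the second D+/D- pair, and the TX/RX lanes and CC/SBU pins simply swap roles, which is exactly what the CC-pin orientation detection sorts out.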
125,237 | I know the symbol at the bottom is ground, but I can't get what the other symbol is. It can be a single symbol of two arrows pointing at each other, or two symbols each one a arrow. I also have the PCB for this schematic. It is nothing but two traces (1mm long) with a little gap in between. Is it a RF filter of some sort? | This looks like a spark gap . (You could also look up gas discharge tube (GDT) , which is similar to spark gap.) The likely purpose of this component in your circuit is to protect the rest of the circuit from lightning strike and/or ESD. Spark gaps are often the first line of defense against high voltage. Advantages of spark gaps: Once the arc is established, a spark gap acts as a crowbar, and it can dissipate a lot of energy. Because of that, spark gaps are used for protection against high voltage high energy threats (lightning, defibrillator). The parasitic capacitance of a spark gap is low, so it doesn't affect the signal. A spark gap can be formed as a PCB feature for free; it doesn't require an additional component in a BOM. Weaknesses of spark gaps: They fire at a high voltage, hundreds of volts, and the firing voltage is not well reproducible or predictable. To address this weakness, there's usually another overvoltage protection device (such as TVS) in parallel with a spark gap. This additional device clamps at a lower voltage. related: What is this component and what is its use? https://electronics.stackexchange.com/a/28959/7036 EEVblog #678 - What is a PCB Spark Gap? | {
"source": [
"https://electronics.stackexchange.com/questions/125237",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/37802/"
]
} |
125,692 | Bluetooth, WiFi, Zigbee, Remote Controls, Alarms, Cordless Phones, etc. Why do all of these protocols and devices use the 2.4 GHz band instead of, for example, 3.14 GHz? What is so special about it? | 2.4 GHz is one of the industrial, scientific and medical (ISM) radio bands. ISM bands are unlicensed, which makes it easier to certify the equipment with the FCC (or its counterparts in other countries). However, what is special about 2.4 GHz? There are about a dozen ISM bands. Some are at higher frequencies, others at lower. Not all ISM bands are international. But 2.4 GHz is an international band. update: Microwave ovens also operate at 2.4 GHz, which is not a coincidence. Short version in Q&A format: Q: Why does so much wireless communication operate in the 2.4 GHz band? A: Because it's an ISM band, and it's unlicensed, and it's international. Q: Why is 2.4 GHz an unlicensed band? A: The FCC originally set aside this band for microwave heaters (cookers, ovens). As a result, from the beginning, this band has been polluted by microwave ovens. Q: Why 2.4 GHz for microwave ovens? Microwave ovens can work on pretty much any frequency between 1 and 20 GHz. There's nothing special (like a resonance) about the absorption of microwaves by water at 2.4 GHz (see also here ). A: The frequency choice was based on a combination of empirical
measurements of heat penetration for various foodstuffs, design
considerations for the size of the magnetron, and frequency
considerations for any resulting harmonic frequencies. [These considerations were proposed by Raytheon and GE to FCC in 1946, when the decision about 2.4 GHz was made.] The long versions can be found here . [This link goes to Indiegogo, because this bit of historical research was crowd-funded.] Also, this FCC document (54MB) from 1947 can be of interest. Thanks, @Compro01 for finding this reference. | {
"source": [
"https://electronics.stackexchange.com/questions/125692",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/50692/"
]
} |
125,863 | I have a bag of about 50 non-rechargeable AA batteries (1.5 V) that I have collected over the years. I bought a multimeter recently and would like to know the best way to test these batteries to determine which ones I should keep and which I should toss. Sometimes a battery will be useless for certain high-power devices (e.g. children's toys) but are still perfectly suitable for low-power devices such as TV remote controls. Ideally, I'd like to divide the batteries into several arbitrary categories: As-new condition (suitable for most devices) Suitable for low-powered devices such as remote controls Not worth keeping Should I be measuring voltage, current, power or a combination of several of these? Is there a simple metric I can use to determine what to keep and what to toss? | **WARNING: Lithium Ion cells ** While this question relates to non-rechargeable AA cells it is possible that someone may seek to extend the advice to testing other small cells. In the case Of Li-Ion rechargeable cells (AA, 18650, other) this can be a very bad idea in some cases. Shorting Lithium Ion cells as in test 2 is liable to be a very bad idea indeed. Depending on design, some Li-Ion cells will provide short circuit current of many times the cell mAh rating - eg perhaps 50+ amps for an 18650 cell, and perhaps 10's of amps for an AA size Li-Ion cell. This level of discharge can cause injury and worst case may destroy the cell, in some uncommon cases with substantial release of energy in the form of flame and hot material. AA non-rechargeable cells: 1) Ignore the funny answers Generally speaking, if a battery is more than 1 year old then only Alkaline batteries are worth keeping. Shelf life of non-Alkaline can be some years but they deteriorate badly with time. Modern Alkaline have gotten awesome, as they still retain a majority of charge at 3 to 5 years. Non brand name batteries are often (but not always) junk. Heft battery in hand. Learn to get the feel of what a "real" AA cell weighs. An Eveready or similar Alkaline will be around 30 grams/one ounce. An AA NiMH 2500 mAh will be similar. Anything under 25g is suspect. Under 20g is junk. Under 15g is not unknown. 2) Brutal but works Set multimeter to high current range (10A or 20A usually). Needs both dial setting and probe socket change in most meters. Use two sharpish probes. If battery has any light surface corrosion scratch a clean bright spot with probe tip. If it has more than surface corrosion consider binning it. Some Alkaline cells leak electrolyte over time, which is damaging to gear and annoying (at least) to skin. Press negative probe against battery base. Move slightly to make scratching contact. Press firmly. DO NOT slip so probe jumps off battery and punctures your other hand. Not advised. Ask me how I know. Press positive probe onto top of battery. Hold for maybe 1 second. Perhaps 2. Experience will show what is needed. This is thrashing the battery, decreasing its life and making it sad. Try not to do this often or for very long. Top AA Alkaline cells new will give 5-10 A. (NiMH AA will approach 10A for a good cell). Lightly used AA or ones which have had bursts of heavy use and then recovered will typically give a few amps. Deader again will be 1-3A. Anything under 1 A you probably want to discard unless you have a micropower application. Non Alkaline will usually be lower. I buy ONLY Alkaline primary cells as other "quality" cells are usually not vastly cheaper but are of much lower capacity. Current will fall with time. 
A very good cell will fall little over 1 to maybe 2 seconds. More used cells will start lower and fall faster. Well used cells may plummet. I place cells in approximate order of current after testing. The top ones can be grouped and wrapped with a rubber band. The excessively keen may mark the current given on the cell with a marker. Absolute current is not the point - it serves as a measure of usefulness. 3) Gentler - but works reasonably well. Set meter to 2V range or next above 2V if no 2V range. Measure battery unloaded voltage. New unused Alkaline are about 1.65V. Most books don't tell you that. Unused but sat on the shelf 1 year + Alkaline will be down slightly. Maybe 1.55 - 1.6V Modestly used cells will be 1.5V+ Used but useful may be 1.3V - 1.5V range After that it's all downhill. A 1V OC cell is dodo dead. A 1.1V -.2V cell will probably load down to 1V if you look at it harshly. Do this a few times and you will get a feel for it. 4) In between. Use a heavyish load and measure voltage. Keep a standard resistor for this. SOLDER the wires on that you use as probes. A twisted connection has too much variability. Resistor should draw a heavy load for battery type used. 100 mA - 500 mA is probably OK. Battery testers usually work this way. 5) Is this worth doing? Yes, it is. As well as returning a few batteries to the fold and making your life more exciting when some fail to perform, it teaches you a new skill that can be helpful in understanding how batteries behave in real life and the possible effect on equipment. The more you know, the more you get to know, and this is one more tool along the path towards knowing everything :-). [The path is rather longer than any can traverse, but learning how to run along it can be fun]. | {
"source": [
"https://electronics.stackexchange.com/questions/125863",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/51673/"
]
} |
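The sorting rules from the answer above can be collapsed into a small helper. The thresholds below are taken loosely from that answer's alkaline AA figures (open-circuit voltage and brief short-circuit current) and should be treated as rough guidelines, not exact science:

    # Rough AA alkaline sorting based on the figures in the answer above.
    def classify_aa(open_circuit_v, short_circuit_a):
        if open_circuit_v < 1.1 or short_circuit_a < 1.0:
            return "not worth keeping"
        if open_circuit_v >= 1.55 and short_circuit_a >= 5.0:
            return "as-new condition"
        return "keep for low-power devices (remotes, clocks)"

    for cell in [(1.62, 7.5), (1.45, 2.0), (1.05, 0.4)]:
        print(cell, "->", classify_aa(*cell))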
125,990 | I always wondered why color coding is still used on resistors in 2014. Here is Wikipedia's word on the original reason why : Colorbands were commonly used (especially on resistors) because they
were easily printed on tiny components, decreasing construction costs.
However, there were drawbacks, especially for color blind people.
Overheating of a component, or dirt accumulation, may make it
impossible to distinguish brown from red from orange. Advances in
printing technology have made printed numbers practical for small
components, which are often found in modern electronics. However, like pointed out in this quote, printing small numbers on electronic is now quite an easy thing (or so it seems) and it would, in my opinion, be much more convenient, especially for colour-blind people. Is there reason why we still use colour coding on resistors in 2014? | The reason for using color bands on through-hole (axial) resistors is simple -- when they are inserted into the PCB, you can't guarantee their orientation -- there is no top or bottom. So you need a way to mark the value so it can be seen no matter how the part is oriented with the board. Color bands are perfect for this. For this reason, I don't expect to see color bands disappearing from through-hole resistors. With surface mount parts, there is a top and bottom, so the parts can have the value stamped on the top. | {
"source": [
"https://electronics.stackexchange.com/questions/125990",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/50963/"
]
} |
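For reference, the 4-band code that this record takes for granted decodes as two significant digits, a multiplier and a tolerance; a minimal sketch using the standard EIA colors:

    # Decode a standard 4-band resistor color code.
    DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
              "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9}
    TOLERANCE_PCT = {"brown": 1.0, "red": 2.0, "gold": 5.0, "silver": 10.0}

    def decode(band1, band2, multiplier, tolerance):
        ohms = (DIGITS[band1] * 10 + DIGITS[band2]) * 10 ** DIGITS[multiplier]
        return ohms, TOLERANCE_PCT[tolerance]

    print(decode("yellow", "violet", "red", "gold"))   # (4700, 5.0): 4.7 kOhm, 5%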
126,502 | Moving membranes or piezoelectric materials obviously produce sound waves, but how can "purely" electrical circuits such as transformers or DCDC choppers (and others) often have an audible noise? Is the material microscopically expanding and shrinking with the current? | What you are really asking is how can electrical circuits cause small motions. After all, sound is motion of the air. The answer is that there are various ways electric fields or electric currents can cause forces or motions. These effects are harnessed in the design of various transducers , which exist to deliberately cause or sense small motions. However, the laws of physics that allow these transducers to function don't stop outside the transducer case. They exist everywhere, so many things are unintended transducers. The difference is that usually the effect is rather weak without it being deliberately designed for as in a transducer. Some of these effects are: Electrostatic force . Two objects at a different voltage will have a force between them. The force is proportional to the voltage and inversely proportional to distance. This is the same force that allows a balloon to stick to your hair after rubbing it against a cat or something. For ordinary circuits, this force is very weak, and conductors are held in place much more strongly than it. Still, you can sometimes get audible sound from this with high voltage circuits. Electrodynamic force . A moving charge creates a circular magnetic field around it. The magnetic field is proportional to the current, and can be made quite strong by looping the wire into a coil. This magnetic field can be made to move things, and is the basis for how solenoids, motors, and loudspeakers work. Moving charges likewise experience a force if flowing thru a magnetic field of the right orientation. Most loudspeakers actually work on this principle; they are made so that a strong permanent magnet is fixed and the coil moves, which in turn moves the center of the speaker cone. The same thing happens in any inductor. Each piece of wire with current thru it experiences some force due to the overall magnetic field. Some of the buzzing you hear from transformers is individual pieces of wire moving a little bit as a result. Piezoelectric effect . Some materials, like quartz for example, will change their size or shape slightly as a function of applied electric field. Some small earphones work on this principle. There are also "crystal" microphones that work on this principle in reverse, meaning applying force to the crystal causes it to create a voltage. Common barbecue grill ignitors work on this principle by whacking a quartz crystal hard and suddenly enough to create a high enough voltage to cause a spark. Some capacitor materials exhibit enough of this effect that when rigidly mounted on a circuit board can cause audible sound. I had to respin a board once and replace a ceramic cap with a electrolytic just because the ceramic was causing annoying audible whine. Magnetostrictive effect . This is the magnetic analog of the piezoelectric effect. Some materials change shape or size depending on the applied magnetic field, and this effect works in reverse too. I have worked on magnetic sensors that exploited this effect. Materials in transformers and inductors are chosen to not have this effect, but a small amount is there anyway. The core of a inductor actually changes size very slightly as the magnetic field changes. 
This can cause audible sound, especially if the inductor is mechanically coupled to something that presents a greater area to the air, like a circuit board. | {
"source": [
"https://electronics.stackexchange.com/questions/126502",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/30449/"
]
} |
126,557 | In quite a lot of PCBs that I have seen, they often have these small metal bars going across from one point to another. Here is an image of what I am talking about. In this example it is a charger: J2 and J3 are what I am talking about For reference, here is the other side of the PCB: So, What are these called? What is the point of these? Why not just use tracers in the board or a wire? | These are called "jumpers" or "jumper wires" and simply connect two parts of the PCB together. They are common for single-sided PCBs to make a connection that may not be routable. The alternative would be to have a double-sided PCB, but that would be more expensive. | {
"source": [
"https://electronics.stackexchange.com/questions/126557",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/39152/"
]
} |
126,564 | I recently bought this: http://smile.amazon.com/gp/product/B00II1MN5I and am planning on using it for under cabinet lighting in the kitchen. Power adapter:
Model: TNS-1220
Input: 100-240V AC, 50/60Hz
Output: 12V DC, 2000mA I want to keep it plugged in all the time and control it via some sort of switch/button to turn off/on. If the power adapter is always plugged in, would it consume electricity (it has an LED that lights up when plugged in). If so, would it be negligible? Basically I want to know if I should: [outlet] -> [power adapter] -> [control switch] -> [LEDs] or [outlet] -> [control switch] -> [power adapter] -> [LEDs] | These are called "jumpers" or "jumper wires" and simply connect two parts of the PCB together. They are common for single-sided PCBs to make a connection that may not be routable. The alternative would be to have a double-sided PCB, but that would be more expensive. | {
"source": [
"https://electronics.stackexchange.com/questions/126564",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/52006/"
]
} |
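On the standby-power part of the question above: a small mains adapter left plugged in with no load typically idles well under a watt. The 0.3 W figure and electricity price below are assumptions, not measurements of the TNS-1220 in question:

    # Rough yearly energy and cost of leaving a small adapter plugged in (values assumed).
    standby_w = 0.3            # assumed no-load draw of the adapter, watts
    hours_per_year = 24 * 365
    price_per_kwh = 0.15       # assumed electricity price, $/kWh

    kwh_per_year = standby_w * hours_per_year / 1000
    print(f"{kwh_per_year:.1f} kWh/year, about ${kwh_per_year * price_per_kwh:.2f}/year")

In other words the standby cost is usually negligible, and switching on the mains side of the adapter removes even that.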
126,605 | When soldering, I have, on a few occasions, used my teeth to hold a strand of solder while my hands were busy. I do not do this frequently, but sometimes I forget that solder is made of lead and I instinctively use my mouth. How bad is doing that for my health? Is it very dangerous? I use mainly lead-based rosin-core solder. | It's a question of bioavailability, which is facilitated by acids and by lead that is more "absorbable" when it is in certain compounds. Certainly lead accumulates in tissues, but even sub-toxic doses would take years to accumulate. The Romans used lead-based wine mugs because the acidic wine would dissolve the lead, which apparently made the wine more palatable (sweeter). No one is willing to do the taste test of that one; it would certainly be unethical for a tester to ask someone to do that. It took the Romans years to get to toxic levels. As long as you weren't chewing down on the lead and having a good floss with it, it is very likely that very little transferred into your system. I know this because I have myself had blood tests. The Dr. told me it was unlikely that they could detect anything, as the lead tends to be gathered into tissues and does not tend to be available in blood. The next option was a biopsy, which is more dangerous than any possible lead given the predicted exposure. TLDR; don't worry about having done this, but don't continue. | {
"source": [
"https://electronics.stackexchange.com/questions/126605",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/52024/"
]
} |
126,797 | If I have a device that draws 5 amps at 12 volts, I can use any 12 volt DC adapter that can provide at least 5 amps. Why don't all DC adapters have the capacity to provide loads of amps!? If all DC adapters provided e.g. 1000 amps, we would only need to care about the voltage value. Do too many amps make DC adapters bulky, inefficient, or expensive? | The components that make up DC adaptors (inductors, transistors, capacitors, diodes, etc.) are all rated for a certain current and/or power dissipation. Components that can handle 1000A vs. components that can handle 5A are orders of magnitude apart in cost, size, and availability. For an example, let's look at an inductor that could be used in a 1000A supply vs. a 5A supply. Price: An inductor that can do 5A is $0.17 on digikey, an inductor that can do 200A is $400. Size: The 5A inductor is 5mmx5mm and the 200A inductor is 190mmx190mm. Availability: Digikey stocks well over 5,000 different inductors that can handle 5A. It didn't even have anything rated for more than 200A. It stocks only 7 that can do more than 100A. Now repeat this experiment for all the components found in a common wall adaptor and you'll quickly get to the answer of your question. To summarize: If you had two devices that needed 5A and 6A respectively, would you rather buy something that costs in the thousands-of-dollars range and is larger than your bathtub so you could use it on both, or would you rather buy two palm-sized adaptors for $30? | {
"source": [
"https://electronics.stackexchange.com/questions/126797",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/52124/"
]
} |
127,199 | In the really early days of electronics school, the teacher used to say something about not unplugging power too quickly from an inductor or capacitor, and we used to slowly turn the voltage from a signal generator down to zero. Something about the transients, something about the charge stored... I'm now interested in working with a power converter, but what was said many years ago still lingers with me and I can't remember exactly what was said at that time. Can someone please remind me what the rule is when it comes to safely handling inductors and capacitors in a (basic) circuit? | Thou shall NOT open-circuit a charged inductor. Thou shall NOT short-circuit a charged capacitor. If you think about it from their fundamental equations: \$V = L\dfrac{di}{dt}\$ - a sudden change in current (i.e. forced open circuit) will result in infinite voltage. \$I = C\dfrac{dv}{dt}\$ - a sudden change in voltage (i.e. short circuit) will result in an infinite current. It's obviously not infinite in practice (due to strays and the ability to change the voltage/current fast enough) BUT it is significant enough to damage electronics... | {
"source": [
"https://electronics.stackexchange.com/questions/127199",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/50265/"
]
} |
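A quick worked number for the inductor rule in the answer above; the 1 mH inductance, 1 A current and 1 µs interruption time are illustrative assumptions:

    # V = L * di/dt for an abruptly interrupted inductor current (values assumed).
    L_henry = 1e-3     # 1 mH inductor
    di_amps = 1.0      # 1 A of current being interrupted
    dt_seconds = 1e-6  # switch opens in about 1 microsecond

    v_spike = L_henry * di_amps / dt_seconds
    print(f"~{v_spike:.0f} V appears across the opening switch")   # ~1000 V

This is why flyback (freewheeling) diodes are placed across relay coils, and why the answer says never to open-circuit a charged inductor.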
127,525 | I have been looking around for an easy way to convert 12V to 5V . I have seen some people saying that a simple resistor is all that is needed. $$ Volts = Ohms \cdot Amps $$
$$ Amps = \frac{Volts}{Ohms} $$
$$Ohms = \frac{Volts}{Amps}$$ So applying a resistor will diminish the voltage of the circuit. That should mean that an appropriately sized resistor could simply be placed in the path of a 12V circuit, converting it to 5v. If this is the case how would one reduce amps? Would series vs parallel make a difference in this area? I have seen designs that include a regulator IC and some capacitors, but if a simple resistor/fuse/diode setup will do the trick, I would really prefer that. | There are a few ways to get 5V from a 12V supply. Each has its advantages and disadvantages, so I've drawn up 5 basic circuits to show their pros and cons. Circuit 1 is a simple series resistor - just like the one "some people" told you about. It works, BUT it only works at one value of load current and it wastes most of the power supplied. If the load value changes, the voltage will change, since there is no regulation. However, it will survive a short circuit at the output and protect the 12V source from shorting out. Circuit 2 is a series Zener diode (or you could use a number of ordinary diodes in series to make up the voltage drop - say 12 x silicon diodes) It works, BUT most of the power is dissipated by the Zener diode. Not very efficient! On the other hand it does give a degree of regulation if the load changes. However, if you short circuit the output, the magic blue smoke will break free from the Zener... Such a short circuit may also damage the 12V source once the Zener is destroyed. Circuit 3 is a series transistor (or emitter follower) - a junction transistor is shown, but a similar version could be built using a MOSFET as a source follower. It works, BUT most of the power has to be dissipated by the transistor and it isn't short circuit proof. Like circuit 2, you could end up damaging the 12V source. On the other hand, regulation will be improved (due to the current amplifying effect of the transistor). The Zener diode no longer has to take the full load current, so a much cheaper/smaller/lower power Zener or other voltage reference device can be used. This circuit is actually less efficient than circuits 1 and 2, because extra current is needed for the Zener and its associated resistor. Circuit 4 is a three terminal regulator (IN-COM-OUT). This could represent a dedicated IC (such as a 7805) or a discrete circuit built from op amps / transistors etc. It works, BUT the device (or circuit) has to dissipate more power than is supplied to the load. It is even more inefficient than circuits 1 and 2, because the extra electronics take additional current. On the other hand, it would survive a short circuit and so is an improvement on circuits 2 and 3. It also limits the maximum current that would be taken under short circuit conditions, protecting the 12v source. Circuit 5 is a buck type regulator (DC/DC switching regulator). It works, BUT the output can be a bit spikey due to the high frequency switching nature of the device. However, it's very efficient because it uses stored energy (in an inductor and a capacitor) to convert the voltage. It has reasonable voltage regulation and output current limiting. It will survive a short circuit and protect the battery. These 5 circuits all work (i.e. they all produce 5V across a load) and they all have their pros and cons. Some work better than others in terms of protection, regulation and efficiency. Like most engineering problems, it's a trade off between simplicity, cost, efficiency, reliability etc. 
Regarding 'constant current' - you cannot have a fixed (constant) voltage and a constant current with a variable load . You have to choose - constant voltage OR constant current. If you choose constant voltage, you can add some form of circuit to limit the maximum current to a safe maximum value - such as in circuits 4 and 5. | {
"source": [
"https://electronics.stackexchange.com/questions/127525",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/52219/"
]
} |
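Some representative numbers for circuits 1 and 4 of the answer above, assuming a 1 A load (the answer itself gives no load current, so that value is an assumption):

    # Dropping 12 V to 5 V at an assumed 1 A load: series resistor / linear regulator.
    v_in, v_out, i_load = 12.0, 5.0, 1.0

    r_series = (v_in - v_out) / i_load       # circuit 1: only correct at this exact current
    p_wasted = (v_in - v_out) * i_load       # heat in the resistor or linear regulator
    efficiency = (v_out * i_load) / (v_in * i_load)

    print(f"R = {r_series:.0f} ohm, {p_wasted:.0f} W wasted as heat, efficiency {efficiency:.0%}")

A buck converter (circuit 5) avoids dissipating most of that 7 W, which is why it is the usual choice at ampere-level currents.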
127,541 | I'm designing a programmable bench power supply (PS) that has multiple isolated channels (each powered by a separate transformer winding). I like the idea of isolated channels so that multiple channels can be connected in series to generate dual-polarity output (like -15/0/+15 V). The idea was to use a microcontroller (MCU) to set the output voltage and current limits of each channel using analog voltages from the MCU's DAC (or an external DAC IC controlled by the MCU). Given that channels are able to float with respect to one another, what would be the best way to ensure that the control voltages sent from the MCU to each PS output channel are properly referenced to the channel's voltage level? The options that I've come across so far are to either: (analog approach) use an isolation amplifier between the DAC and each PS channel (digital approach) use a separate DAC IC for each PS channel and digital isolators to interface each DAC with the MCU. I'm curious to know if there is a standard approach to this type of problem as I imagine that this comes up in many bench supply designs. Thanks! | There are a few ways to get 5V from a 12V supply. Each has its advantages and disadvantages, so I've drawn up 5 basic circuits to show their pros and cons. Circuit 1 is a simple series resistor - just like the one "some people" told you about. It works, BUT it only works at one value of load current and it wastes most of the power supplied. If the load value changes, the voltage will change, since there is no regulation. However, it will survive a short circuit at the output and protect the 12V source from shorting out. Circuit 2 is a series Zener diode (or you could use a number of ordinary diodes in series to make up the voltage drop - say 12 x silicon diodes) It works, BUT most of the power is dissipated by the Zener diode. Not very efficient! On the other hand it does give a degree of regulation if the load changes. However, if you short circuit the output, the magic blue smoke will break free from the Zener... Such a short circuit may also damage the 12V source once the Zener is destroyed. Circuit 3 is a series transistor (or emitter follower) - a junction transistor is shown, but a similar version could be built using a MOSFET as a source follower. It works, BUT most of the power has to be dissipated by the transistor and it isn't short circuit proof. Like circuit 2, you could end up damaging the 12V source. On the other hand, regulation will be improved (due to the current amplifying effect of the transistor). The Zener diode no longer has to take the full load current, so a much cheaper/smaller/lower power Zener or other voltage reference device can be used. This circuit is actually less efficient than circuits 1 and 2, because extra current is needed for the Zener and its associated resistor. Circuit 4 is a three terminal regulator (IN-COM-OUT). This could represent a dedicated IC (such as a 7805) or a discrete circuit built from op amps / transistors etc. It works, BUT the device (or circuit) has to dissipate more power than is supplied to the load. It is even more inefficient than circuits 1 and 2, because the extra electronics take additional current. On the other hand, it would survive a short circuit and so is an improvement on circuits 2 and 3. It also limits the maximum current that would be taken under short circuit conditions, protecting the 12v source. Circuit 5 is a buck type regulator (DC/DC switching regulator). 
It works, BUT the output can be a bit spikey due to the high frequency switching nature of the device. However, it's very efficient because it uses stored energy (in an inductor and a capacitor) to convert the voltage. It has reasonable voltage regulation and output current limiting. It will survive a short circuit and protect the battery. These 5 circuits all work (i.e. they all produce 5V across a load) and they all have their pros and cons. Some work better than others in terms of protection, regulation and efficiency. Like most engineering problems, it's a trade off between simplicity, cost, efficiency, reliability etc. Regarding 'constant current' - you cannot have a fixed (constant) voltage and a constant current with a variable load . You have to choose - constant voltage OR constant current. If you choose constant voltage, you can add some form of circuit to limit the maximum current to a safe maximum value - such as in circuits 4 and 5. | {
"source": [
"https://electronics.stackexchange.com/questions/127541",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/27177/"
]
} |
128,120 | What is the reason that most ICs (e.g. MCUs) have multiple (A/D)GND and (A)VCC pins? If it is to increase the performance of an IC, how does it help performance? Or is it just easier for the IC designer to connect some pins externally? Some IC footprints have a GND connection under the case; how does that help? Would it improve the performance of an IC if I drew a GND under the case even if it is not required? | Three reasons come to mind: 1) Take a look at this close-up of the guts of a microcontroller. There's a LOT going on in there. And every part of that die needs power. Power coming in from any one pin will probably have to snake its way around a lot of stuff to get to every part of the device. Multiple power lines give the device multiple avenues to pull power from, which keeps the voltage from dipping as much during high current events. 2) Sometimes the different power pins supply specific peripherals within the chip. This is done when certain peripherals need as clean a voltage supply as possible to operate correctly. If the peripherals share the power supply that the rest of the chip uses, it may be subject to noise on the line and voltage dips. An example is the analog power supply. You noticed it's typical to see an AVCC pin on MCUs. That pin is a dedicated supply just for the analog peripherals on the chip. Really, this is just an extension of #1 above. 3) It's not uncommon for an MCU to power its core at one voltage but operate peripherals at another. For example, an ARM chip I worked with recently used 1.8V for its core. However, the digital output pins would supply 3.3V when driven high. Therefore, the chip required a 1.8V supply and a separate 3.3V supply. The main thing to remember is that all of those supply pins are absolutely necessary to connect. They are not optional, even when doing development work. As for the bottom pad on the chip, it's there for extra heat sinking. The chip designer decided that the casing and pins of the chip may not sink the heat away from the silicon enough. So the extra pad on the bottom acts like a heat sink to help keep the temperature down. If the part is expected to need to dissipate a lot of heat, you'd want to have a large copper pour to solder that pad onto. | {
"source": [
"https://electronics.stackexchange.com/questions/128120",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/26011/"
]
} |
128,289 | Typical consumer cameras can capture wavelengths of roughly 400-1050 nm (the visible range itself is about 390-700 nm). But why is it so difficult and expensive to produce cameras for infrared, ultraviolet, hard x-rays, etc.? The only things that differ are the wavelength and the photon energy in eV. | It comes down to market size. Where is the demand for such cameras, and does the number of sales justify the production set-up costs? You can get an infra-red conversion for standard DSLR cameras (eg Do It Yourself Digital Infrared Camera Modification Tutorials ) and you can convert the camera to a 'full spectrum' type which takes in some ultraviolet (see Full-spectrum photography ). For smaller wavelengths you'll need different sensors. These, by their specialist nature and low volume production, tend to be very expensive. | {
"source": [
"https://electronics.stackexchange.com/questions/128289",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/34410/"
]
} |
128,637 | I have been reading about grounding in mixed signal systems. Do I understand correctly that it is best to group analog and digital elements and then have a single ground plane, as long as the digital routes do not pass through the analog part, and analog routes do not pass through the digital part? The highlighted part on the left figure shows the analog ground and the right one highlights the digital ground for the same circuit. The component on the right side is an 80 pin MCU with 3 sigma-delta ADC converters. Is it better to (a) let the AGND and DGND be tied at the ADC of the MCU, (b) connect the DGND and AGND through an inductor/resistor, or (c) have a single ground-plane (DGND = AGND)? P.S. As I read it, the aim is to prevent DGND from disturbing AGND, so I defined the main ground-plane as AGND | Combining digital and analog grounds is quite a contentious issue, and it might well fire up a debate/argument. A lot of it depends on whether your background is analog, digital, RF etc. Here are some comments based on my experience and knowledge, which are likely to differ from other people's (I am mostly digital/mixed signal). It really depends on what kind of frequencies you are running at (digital I/O and analog signals). Any work on combining/separating grounds will be a work in compromise - the higher the frequencies you are operating at, the less you can tolerate inductance in your ground return paths, and the more relevant ringing will be (a PCB that oscillates at 5GHz is irrelevant if it measures signals at 100kHz). Your main aim by separating grounds is to keep noisy return current loops away from sensitive ones. You can do this in one of several ways: Star Ground A fairly common, but quite drastic approach is to keep all digital/analog grounds separate for as long as possible and connect them together at one point only. On your example PCB, you would track in digital ground separately and join them at the power feed most likely (power connector or regulator). The problem with this is when your digital needs to interact with your analog, the return path for that current is half across the board and back again. If it's noisy, you undo a lot of the work in separating loops and you make a loop area to broadcast EMI across the board. You also add inductance to the ground return path which can cause board ringing. Fencing A more cautious and balanced approach than the first one: you have a solid ground plane, but try to fence in noisy return paths with cut outs (make U shapes with no copper) to coax (but not force) return currents to take a specific path (away from sensitive ground loops). You are still increasing ground path inductance, but much less than with a star ground. Solid Plane You accept that any sacrifice of the ground plane adds inductance, which is unacceptable. One solid ground plane serves all ground connections, with minimal inductance. If you're doing anything RF, this is pretty much the route you have to take. Physical separation by distance is the only thing you can use to reduce noise coupling. A word about filtering Sometimes people like to put in a ferrite bead to connect different ground planes together. Unless you're designing DC circuits, this is rarely effective - you're more likely to add massive inductance and a DC offset to your ground plane, and probably ringing. A/D Bridges Sometimes, you have nice circuits where analog and digital are separated very easily except at an A/D or D/A. In this case, you can have two planes with a line of separation that runs underneath the A/D IC.
This is an ideal case, where you have good separation and no return currents crossing the ground planes (except inside the IC where it is very controlled). NOTE: This post could do with some pictures, I'll have a look around and add them a bit later. | {
"source": [
"https://electronics.stackexchange.com/questions/128637",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/26011/"
]
} |
128,699 | I'm curious why some IC manufacturers like Maxim, Analog Devices, and Linear Tech's parts are so expensive? I know they are much higher quality (better electrical characteristics, etc), but for some parts it still doesn't make sense. For example: SD Flash Media Controllers (not identical, but to do the same thing) Maxim MAX14502 : $21.69 Mouser Microchip USB2244 : $2.18 Mouser RS-232 Transceiver (not pin-to-pin identical, but do the same thing) Maxim MAX3232 : $5.92 TI MAX3232 : $1.91 ST ST3232 : $1.06 Accelerometer. This might be an extreme example. They're a little different, but both are 3-axis, digital accelerometer with the same sensitivity. Analog Devices ADXL362 : $9.73 (!?) Freescale MMA8653FC : $1.09 And there are tons of more examples. I'll try to frame the questions so this doesn't become a discussion. As a design engineer why would you want to select a part that is 5 to 10 times more expensive? What could be some technical reasons? Obviously there are markets that require very high quality components, but is the demand for pricey, quality components even that big? How can the expensive-chip manufacturers compete with others that can make it so much cheaper? | Disclosure: I currently work for one of the manufacturers mentioned, I completed an internship with a second, and I know current and former employees of a third. I can't reveal specifics but I can give some general reasons why ICs have variable costs and prices. I also can't speak about the specific ICs mentioned -- even if I knew why my company's version is priced the way it is and could reveal that information, I couldn't possibly know why other companies priced theirs differently. There are many reasons -- both technical and non-technical -- why the price of one manufacturer's IC may be significantly higher than another's. Below are some of the major ones. Some or all of these may be true for a particular case, and manufacturers may be in different price positions for different IC types (e.g. op amps, ADCs, voltage regulators, etc.). Technical Reasons Fab/Assembly Every manufacturer has a different fab process (actually many processes), and the process used by a manufacturer may have better performance for a particular application or the process may be more expensive (which of course drives up the cost and price of the final IC). Even with identical processes and circuitry, though, the materials used in the assembly of the IC can vary in performance and cost. For example, a higher quality mold compound reduces stress on the die and therefore improves performance over temperature...at an increased manufacturing cost. To further reduce stress on the die a polyimide layer may be added. Another material choice that can affect performance and cost is the wire bond material -- for example, it is easier to meet or exceed qualification standards (e.g. temp cycle) with gold wire but gold is more expensive than copper. The added cost of higher quality materials may be important for applications which require long lifetimes, severe temperature swings, etc., but would be an unnecessary expense for shorter term applications with little temperature variation. Test Production test also has a large effect on overall cost and quality. Virtually every IC requires some sort of trim (e.g. laser trimming ) for at least an internal bandgap voltage reference or oscillator, and possibly for offset reduction, gain correction, etc. 
Adding additional trims and/or trim bits can improve the performance of the trimmed IC at the cost of increased test time (which is increased test cost). Trim may also require the addition of non-volatile memory, which may require additional data retention tests that also increase the test time. The fab process may even dictate whether trimming is done at wafer probe or final test (i.e. after the die is packaged); wafer probe generally has higher throughput (so it's cheaper) and allows the manufacturer to throw out bad die before spending money packaging it, which of course reduces overall cost of test. Also, while every IC is tested (at the very least for continuity ) overall test coverage can vary. Some applications like defense, automotive, or medical require very low or 0 DPPM, which requires the manufacturer to fully test all electrical parameters (possibly over temperature, which significantly increases test cost). Other applications do not require such low DPPM and the manufacturer may choose not to test in production certain electrical parameters which demonstrated a high \$C_{pk}\$ during characterization, especially if those parameters have a long test time or require more expensive tester equipment. Skipping these tests can result in a significant cost and price reduction with very low but non-zero risk (due to the high \$C_{pk}\$) of passing a die that does not meet the spec, which may be worth it to customers in less critical applications. Non-technical Reasons One non-technical factor affecting price is which manufacturer was the first to market. This manufacturer has a temporary monopoly or near monopoly and can command a higher price. This manufacturer may spend less time optimizing their production for lower cost in order to be first to the market. Manufacturers which enter the market later tend to optimize for cost to undercut the manufacturer that was first to market since a customer will not switch to a different manufacturer for an identical or nearly identical IC at the same price. The manufacturer who was first to market may still be able to command a higher price if they have established design wins with large customers who do not wish to qualify a new manufacturer's IC even if the new IC is offered at a lower price. Also, a manufacturer's prior relationships with major customers and perceived reputation can allow it to charge a higher price. Major customers may be willing to pay extra if they have an established relationship with a manufacturer's support teams and/or if the customer(s) have had quality problems with a different manufacturer in the past. In short Ultimately, a manufacturer's price depends on which market it is targeting: some customers are relatively low volume but have very high quality needs and are willing to pay for it (e.g. military, automotive, and medical) whereas other customers have much higher volumes and every penny counts. ICs manufactured for critical applications depend on higher margins to make up for relatively low volume, use better quality materials, have more extensive test coverage, etc. ICs manufactured for less critical but higher volume applications optimize cost to deliver lower priced ICs which make up for the lower margins with much higher volumes. | {
"source": [
"https://electronics.stackexchange.com/questions/128699",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/4940/"
]
} |
128,704 | I have an old laptop power supply which provides 3.5A at 20V. I want to re-purpose this for another project for which I require 3A at 5V. If I cut the output cable I'm assuming I'll find a +20V wire and a 0V wire? If I chuck some resistance on the +20V to drop it down to +5V will this have the desired effect? Will I end up altering the available current or otherwise messing things up? | {
"source": [
"https://electronics.stackexchange.com/questions/128704",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/52996/"
]
} |
128,986 | Why is it that in AC circuits, sine waves are represented as a complex number in polar form? I don't logically understand from a physical perspective why there is an imaginary part at all. Is it purely from a mathematical point of view to make the analysis of circuits easier? | Actually the motivation is quite simple. When you have a linear circuit and you stimulate it with only one frequency, wherever you look you will always find that very same frequency; only the amplitude and the phase of the wave you measure change. What you do then is say: well, let's forget about frequency; if I keep track of amplitude and phase of voltages and/or currents around the circuit it will be more than enough. But how can you do that? Isn't there any mathematical tool that allows you to keep track of amplitude and phase? Yeah, you've got it: vectors. A vector has an amplitude, that is its length, and a phase, that is the angle it forms with the x axis, ccw direction is positive. Now you can object: ok, vectors are cool, but isn't there anything cooler? And why do we need to use the imaginary unit? The answer to the second question is easy: making calculations with vectors is quite a pain, a notation pain:
$$
\pmatrix{2\\3}+\pmatrix{1\\7}=\pmatrix{3\\10}
$$
And that's addition alone! Well, that's only a notation problem: if we choose another basis of \$\mathbb{R}^2\$ things may be better... And this basis happens to exist, but requires the imaginary unit \$j\$. The previous mess becomes:
$$
2+3j+1+7j=3+10j
$$
Much easier, isn't it? Ok, but what does an imaginary vector have in common with a voltage? Well, try to imagine the Gauss plane: the x axis is the real axis, the y axis is the imaginary one. A voltage can be represented by a vector centered on the origin, its length being equal to the voltage value, its starting angle being equal to the phase. Now the magic trick: start rotating the vector so that its angular speed \$\omega\$ corresponds to the desired frequency: Bam. That's what we call a phasor, and that little guy is the strongest weapon you have against tough circuits. So why are these phasors special? That's because if you take two real voltages:
$$
v_1(t)=V_1\cos(2\pi f_0t+\theta_1)\\
v_2(t)=V_2\cos(2\pi f_0t+\theta_2)
$$
and you want to sum them, it happens that if you sum the corresponding phasors and then get back in the real domain the result is the same. This is not magic of course, it depends on the mathematical affinity between sinusoids and the complex exponential. Just believe me, or believe this cool picture: And the best thing is that all the real circuit analysis you've studied up to now keeps working with phasors and complex impedances. That is: Ohm's law holds with phasors and complex impedances, and that's great since we have a ton of tools to solve circuits that are built on Ohm's and Kirchhoff's laws, and we can still use them. With phasors, taking the derivative or integrating is also super easy: as you know, since we're speaking of sines and cosines all at the same frequency it's only a matter of phase shift, and that -surprise- is very clear if you use the complex exponential representation.
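To write out the "sum the phasors, then come back to the time domain" step explicitly, with the same \$V_1\$, \$V_2\$, \$\theta_1\$, \$\theta_2\$ as above:
$$
v_1(t)+v_2(t)=\mathrm{Re}\left\{V_1e^{j(2\pi f_0t+\theta_1)}\right\}+\mathrm{Re}\left\{V_2e^{j(2\pi f_0t+\theta_2)}\right\}=\mathrm{Re}\left\{\left(V_1e^{j\theta_1}+V_2e^{j\theta_2}\right)e^{j2\pi f_0t}\right\}
$$
so the single complex number \$V_1e^{j\theta_1}+V_2e^{j\theta_2}\$ carries the amplitude and phase of the sum, and the common rotation \$e^{j2\pi f_0t}\$ can be reattached at the very end.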
TL;DR: Sinusoids are represented as rotating vectors on the polar plane; it's pretty much like stopping time while they rotate and taking a photo, i.e. calculating phase and amplitude relationships. Just check out the phasor page on wikipedia. And check this other more concise answer too. | {
"source": [
"https://electronics.stackexchange.com/questions/128986",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/51137/"
]
} |
129,548 | There are a lot of movies and video games that depict defusing bombs, most of which boil down to picking the right color wire. Something like this: Now, that's the part I don't understand. (Schematic created using CircuitLab.) I'm not very familiar with circuitry and electrical engineering, but I assume this would be the place to ask it. I'm confused as to how clipping a wire would cause a bomb to explode. I assumed that if you clipped a wire, there'd be no connection from the switch to the bomb, so it wouldn't explode, but apparently, in a lot of movies as well, clipping the wrong wire leads to dangerous things. Is this a realistic scenario? I really don't understand why clipping a wire would cause a bomb to explode. Rather, shouldn't it "defuse" the bomb? | As you have identified, clipping the correct wire would stop a bomb exploding. So, a bomb maker would ensure that there are many wires, so it isn't obvious which one to clip. They would monitor whether a wire has been cut, and if it is, the bomb would explode. They are bomb makers, after all. They could also add more stuff to detect whether the bomb has been opened, etc. Many people have lost their lives trying to defuse bombs, and the UK army disposal folks prefer to either completely destroy the detonation system, or just blow the whole thing up when one is found. So I have always assumed it is somewhat realistic. Of course, I don't believe anyone would put a cute little LCD displaying the countdown. Nor do I think they would use different colors for each wire, or maintain a consistent color code across a set of similar devices. | {
"source": [
"https://electronics.stackexchange.com/questions/129548",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/46094/"
]
} |
129,907 | What is the best way to solder these stranded wires onto a vero board (strip board)? I am thinking maybe using some kind of connector instead of just soldering on. It will make things more flexible. I also want it to look semi professional. If using connectors, what connectors would be the best? | If you do want to solder them directly, you can make it more robust (and marginally better looking) by drilling holes wide enough for the wire + insulation to fit snugly, looping the wires through those holes and then soldering them. This provides strain relief so that the solder joint isn't taking a significant mechanical load. This is illustrated below, but you can also have the holes closer together, the other way round, etc - whatever's convenient. Image credit: Windell Oskay, CC-BY | {
"source": [
"https://electronics.stackexchange.com/questions/129907",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/53578/"
]
} |
129,927 | When I started learning about electrical engineering, one of the most important safety guidelines that I learned was that voltages over 36V can be dangerous. As my circuits professor put it, 120VAC will sting, and 240VAC will seriously hurt. Now that I know more about these voltages and their dangers, I'm confused about mains electricity and how safe it is. I often see people (un)plugging power cables into outlets that are behind a desk or a couch without a clear view of what they're touching - sometimes with two hands to get a better grip, a definite no-no in my electrical labs at school. How dangerous is the risk of, for example, touching a finger across both the hot and neutral lines? Do the circuit breakers in a home really guard against this, or is 120/240VAC not quite as dangerous as I think? | Circuit breakers are not enough to protect life. Circuit breakers are there to stop the cable in the walls of your house melting and possibly catching fire – circuit breakers and fuses perform the function of stopping a fire (which of course is also very dangerous to life). For direct contact with a live AC part, in the UK we have residual current devices (RCDs) – these "trip" the supply if the current taken down one of the AC wires is different to the current down the other by ~20mA: (source: diyhowto.co.uk ) Clearly a fuse wouldn't be useful because the normal current of the devices attached to the AC will be tens or more amps. So if you have an appliance taking 10 amps and you touched one of the AC conductors you'd draw an earth current of maybe 20mA and this would "imbalance" the RCD and trip the supply. As for touching both terminals simultaneously a different scenario has to be envisaged. I'm talking about AC power systems where one conductor (sometimes called neutral) is "earthy" i.e. it may have a voltage of only a couple of volts to earth – if you touched only this wire then it is very unlikely to trip the RCD BUT who cares – it's only a couple of volts put across your body at best and hardly any current will flow. If instead you touched both AC wires (live and neutral) then there will be an earth current taken from the live that is still significantly greater than the earth current from the neutral and the RCD trips. Having said all of this ~20mA is still going to sting even if it is only for sub 100 milliseconds. Will it be lethal – possibly to people with heart complaints but will those folk be rummaging under a desk to blindly push a connector into a socket? For AC systems that are "isolated" from earth, touching any one wire will barely be noticeable, but touching both will not trip an RCD and you'll be in serious danger – the current flow will be directly through the body and from conductor to conductor. Luckily these sorts of installations are not very common but certainly not unheard of. Losing the neutral-earth bond can cause this problem. | {
"source": [
"https://electronics.stackexchange.com/questions/129927",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/49251/"
]
} |
129,961 | I have an ATmega8 and I started working on servo motor HS-645MG. What frequency works with the HS-645MG? How do I get the frequency of PWM and the duration/length of each pulse? Is there a calculation? | Radio Control (RC) model servos use Pulse-Position Modulation (PPM). There is some confusion over terminology. Some people call it Pulse Width Modulation (PWM). It is very understandable, because the width of the pulse encodes information. Also, the timer hardware used to generate a PWM signal can be used to create a PPM signal. The base PPM frequency for an RC servo is 50Hz, i.e. a signal to the servo every 20ms. Model servos are quite tolerant to error in this time, and 15-25ms might work; even as short as 5ms works with some. When the pulse varies in width, the servo will sweep between 0 and 180 degrees. There is some variation in the recommended length of the PPM pulse: try between 1ms and 2ms, and if that doesn't give 180 degrees, try 0.5ms to 2.5ms. You might need to do some experiments to get it right. A 1.5ms long pulse will command the servo to the 'centre', 90 degree position. You can get a simple version of this by using delays. If the pulse position length is measured in microseconds, and stored in pos, this Arduino code would drive the servo:
int servoPin = 9; // pin attached to servo
int pos = 1500; // initial servo position
void setup() {
pinMode(servoPin, OUTPUT);
}
void loop() {
digitalWrite(servoPin, HIGH); // start PPM pulse
delayMicroseconds(pos); // wait for the pulse duration
digitalWrite(servoPin, LOW); // complete the pulse
// Note: delayMicroseconds() is limited to 16383µs
// see http://arduino.cc/en/Reference/DelayMicroseconds
// Hence delayMicroseconds(20000L-pos); has been replaced by:
delayMicroseconds(5000-pos); // adjust to 5ms period
delay(15); // pad out 5ms to 20ms PPM period
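// one pass of loop(): pos + (5000 - pos) + 15000 µs = 20000 µs, i.e. a 20 ms (50 Hz) PPM frame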
}
WARNING: This code has been compiled, but not tested. Note: delayMicroseconds() is limited to 16383µs. Hence delayMicroseconds(20000L-pos); will fail, and has been replaced by two delays: delayMicroseconds(5000-pos); to delay a convenient duration, followed by a fixed delay(15);. Some servos are happy with a shorter cycle, so they may work fine if the delay(15); is deleted. This diagram might help: PPM vs PWM The difference between PPM and PWM might seem quite subtle. However, the width of the PPM pulse directly encodes position information. If the pulse width is changed, it means a different position. PWM hardware can be used to generate a PPM signal, but that does not mean PPM is the same as PWM.
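A minimal alternative sketch using the stock Arduino Servo library, which produces the same kind of pulse train from a hardware timer so the sketch does not have to busy-wait (the pin number is just an example, not from the question):
#include <Servo.h>

Servo myServo;

void setup() {
  myServo.attach(9);               // signal pin, as in the example above
}

void loop() {
  myServo.writeMicroseconds(1500); // 1.5 ms pulse, roughly the centre position
  delay(1000);
  myServo.writeMicroseconds(2000); // 2.0 ms pulse, towards one end of travel
  delay(1000);
}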
Edit: @Adam.at.Epsilon wrote a clear, pithy explanation in the comments below: PPM encodes information only in the positive pulse width, whereas PWM encodes information in the entire duty cycle. Put another way, a PWM signal encodes a ratio. The ratio of on to off is needed to get all the information; on alone is not enough. PPM is not encoding a ratio. The active duration of the signal (it might be positive or negative) is encoding an 'absolute' position, and the dead duration of the signal (opposite sense to the active part) is just 'filling in time'. The dead duration can be varied significantly without changing the meaning of the information in the signal. For example, some 'digital hobby servos' can work reliably with the dead duration of the signal ranging from about 5ms to over 20ms, a factor of 400%. Yet they will move with a change in active signal duration of 1%. A PWM signal is typically 'encoding' power. Think of the PWM signal as a fraction of full power. The more of a cycle it is on (and the less it is off) the bigger the fraction of full power. On all of the cycle is 100%, on 60% (and hence off 40%) is 60% power, on 0% and off 100% is 0% power, etc. As a concrete example, PWM might be running at a frequency of 200Hz, or a period of 5ms. A 50%, or 0.5, of maximum power signal would be on for 2.5ms, and off for 2.5ms. That 2.5ms pulse might be decoded by an RC servo expecting a PPM signal as 180 degrees, say. Change the frequency to 1000Hz, and hence the period becomes 1ms. The 50% signal would now be 0.5ms on, and 0.5ms off. That PWM signal still encodes the same 50% power information. However, the RC servo expecting a PPM pulse will decode that pulse width as a different position, and either change its position, or 'give up' and fail to track the signal. | {
"source": [
"https://electronics.stackexchange.com/questions/129961",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/34136/"
]
} |
129,967 | For my job, I was tasked with creating a board to interface with a 2.5V device we've made. So, to keep things simple, I decided to interface it with an ATmega328P-PU running the Arduino firmware at 2.5V. All the specifications say the chip has an operating voltage of 1.8V-5.5V, so I didn't bother testing any circuit at 2.5V until today. That was a dumb mistake. I was testing the low voltage setup by running the simple blink sketch on a breadboard and found it would stop working just below 2.7V. The power supply would also show 0mA current draw as opposed to the 2mA it was showing before. I checked the pin with a DMM and the LED unplugged to confirm that no output was coming from the pin. After some research, I found a site that seemed to say that, at lower voltages, it needs to run at a slower speed. So, I burned the bootloader to have the chip used the 8Mhz internal oscillator. However, it still turns off below 2.7V. How can I get access to these lower voltages? I am guessing that this has to do with some internal fuse setting, but can't find anything about it. I'm well within the operating voltage for both the I/O pins and Supply rail. I know it can run at 2.5V as the Arduino BT can have an input voltage as low as 2.5V. Is there something I'm forgetting? | {
"source": [
"https://electronics.stackexchange.com/questions/129967",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/53603/"
]
} |
130,606 | Frequently, electric current is compared with water flow. For example, if I make a hole in a water tank, water will flow until the tank pressure and the atmospheric pressure become equal, or the tank becomes empty. Why does this not happen with electricity? | You're imagining an open circuit to look like this: A better analogy would be this: The pipes in a circuit aren't surrounded by free space for the water to flow -- they are tunnelled through rock. Where there is no pipe, there is just rock and the water does not flow. | {
"source": [
"https://electronics.stackexchange.com/questions/130606",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/53894/"
]
} |
131,531 | So I'm currently working on my high school final project, which is basically a Radar :) ... I'm using the SRF05 detector to detect objects that are near the surface of the device. My current assignment is to learn and summarize all the different components that will be assembled at the end. (UART, MAX232 74HC244 etc, if you want to know :) My teacher told me that the more I know about these components, the better I will do at my work, and in the exams. So here is my question: Why are sound waves the best choice for the SRF05? Furthermore, why UltraSonic ones? What are the benefits of using sound waves, but not invisible light waves, heat or any other means that can do the job? Light, for example, travels much faster, thus creating a better result, and will probably be more effective than sound. | Basically, sound is slow. Using sound you can easily time how long a wave takes to travel to your object and reflect off it, thus giving you a fairly accurate distance (a worked timing example is shown below). Light goes too fast for that, unless you are looking to measure the distance of the moon, say. And why ultrasonic? So you can't hear it. Imagine how annoying it would be if you were forced to hear it all the time? BeeeEEEeeeEEEEeeeEEEEEEEeeeeeeEEE....eeEEEeeEEEP | {
"source": [
"https://electronics.stackexchange.com/questions/131531",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/54370/"
]
} |
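A minimal sketch of the "time the echo" idea from the answer above (the SRF05 wiring mode and pin numbers here are assumptions, not from the original post). Because sound covers only about 343 m/s, the echo time is easy to measure with an ordinary microcontroller timer, and the distance follows directly from it:
const int trigPin = 8; // SRF05 trigger input (assumed wiring)
const int echoPin = 9; // SRF05 echo output (assumed wiring)

void setup() {
  Serial.begin(9600);
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
}

void loop() {
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);  // a 10 µs pulse starts one ping
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  unsigned long echoUs = pulseIn(echoPin, HIGH, 30000UL); // echo high time in µs (0 on timeout)
  // the round trip at ~343 m/s works out to ~58 µs per cm of target distance
  float distanceCm = echoUs / 58.0;
  Serial.println(distanceCm);
  delay(50); // let late echoes die away before the next ping
}
At the speed of light the same 1 m target would give a round trip of only about 7 ns, which is why timing light directly needs far more exotic hardware than timing ultrasound.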
131,860 | For some reason, I understand transistor logic gates, and I am able to solve problems, but for some reason I do not understand the AND / OR logic gates constructed by diodes. If someone can explain it to me using circuit analysis, I would appreciate that. | All you have to remember is that current flows through a diode in the direction of the arrow. In the case of the OR gate, if there is no potential (i.e. logic 0, or ground) on both inputs, no current will pass through either diode, and the pull-down resistor R\$_{L}\$ will keep the output at ground (logic 0). If either of the inputs has a positive (logic 1) voltage on its input (In 1 or 2), then current will pass through the diode(s) and appear on the output Out, less the forward voltage of the diode (aka diode drop). The AND gate looks more challenging because of the reversed diodes, but it's not. If either input (In 1 or In 2) is at ground potential (logic 0), then due to the higher potential on the anode side due to the positive voltage from resistor R\$_{L}\$, current will flow through the diode(s) and the voltage on the output Out will be equal to the forward voltage of the diode, 0.7V. If both inputs to the AND gate are high (logic 1), then no current will pass through either diode, and the positive voltage through R\$_{L}\$ will appear on the output Out. -------------------------------------------- As an aside, diode logic by itself is not very practical. As noted in the description of the OR gate for example, the voltage on the Out terminal when there is a logic high (1) on either of the inputs will be the voltage on the input minus a diode drop. This voltage drop cannot be recovered using just passive circuits, so this severely limits the number of gates that can be cascaded. With diode logic, it is also difficult to build any gates other than AND and OR. NOT gates are not possible. So enter DTL (diode transistor logic), which adds an NPN transistor to the output of the gates described above. This turns them into NAND and NOR gates, either of which can be used to create any other kind of logic function. Sometimes a combination of diode logic and DTL will be used together; diode logic for its simplicity, and DTL to provide negation and regeneration of signal levels. The guidance computer for the Minuteman II missile, developed in the early 1960s, used a combination of diode logic and diode transistor logic contained in early integrated circuits made by Texas Instruments. | {
"source": [
"https://electronics.stackexchange.com/questions/131860",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/54509/"
]
} |
131,881 | I want to theoretically calculate my effective resolution from the C8051F350 in-built sigma-delta ADC with different interface options like ISL28134, ADA4528, ADA4898, OPA211, MCP6V07.
I want to know their noise in the 0.1 to 10 Hz range.
Shown below is my ADC interface. (It will be different if I use the AD620.) Actually it's an old schematic. My LPF before the ADC will have a cutoff of 10Hz. So when I look at the datasheet of the AD620 it gives noise peak-to-peak at different gains RTI; my question is, why does the noise reduce at higher gains?
My second question is: I know my ADC's rms noise at different PGA gains, so how should I combine that noise with the noise of the interface circuit to find the effective resolution?
Also is it possible to get differential o/p from AD620, and will it be beneficial to reduce noise? Can someone give me math to find ADC ENOB with above interface circuits. | {
"source": [
"https://electronics.stackexchange.com/questions/131881",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/37192/"
]
} |
131,919 | Why does a microwave oven with metal (?) walls work fine, but if I (theoretically) put a metal spoon in it, "bad things" may happen? Maybe these internal walls are not conductive? | Metal in a microwave is really not a big problem. The walls of every microwave ever made are metal, the window contains metal mesh, mine has a metal shelf and a metal base for the turntable. The general guideline of "do not put metal objects into a microwave" does make sense - metal in the oven has to have a certain shape, size, alloy, distance from other pieces etc. or it will really do unpleasant things like arc and get dangerously hot. The rules are complex and as the average microwave oven owner doesn't have a post-graduate degree in physics with at least a minor in high-energy radio it's just easier to say "no metal." People who really do know better will also know that they can ignore the note on the box, but the lawyers can point to the note on the box after your attempt to home-sinter aluminum powder burns the kitchen down. | {
"source": [
"https://electronics.stackexchange.com/questions/131919",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/39905/"
]
} |