25,525
As I understand it, FPGAs are flexible "digital" circuits that let you design, build, and rebuild a digital circuit. It might sound naive or silly, but I was wondering: are there FPGAs or other "flexible" technologies that also make analog components available to the designer, like amplifiers, A/D or D/A converters, transceivers, or even simpler components?
I've used a product line called the Electronically Programmable Analog Circuit (EPAC), probably more than ten years ago by now, which claimed to be the analog equivalent of an FPGA, and Cypress has for years produced a line called the PSoC (Programmable System on Chip) which incorporates switchable arrays of both analog and digital circuitry. Note that in both cases the devices have a moderately small number of functional blocks (3 to 24 or so in the case of the PSoC) with somewhat limited routing options, rather than providing hundreds or thousands of blocks with enough interconnect to allow essentially arbitrary routing. One reason that analog FPGAs don't offer anywhere near the design flexibility of digital devices is that even if one passes a digital signal through dozens or hundreds of levels of routing and logic circuitry, each of which has only a 10 dB signal-to-noise ratio (SNR), meaning the noise amplitude is roughly a third of the signal amplitude, the resulting signal can still be clean, because each digital stage regenerates the signal. By contrast, getting a clean signal out of an analog device requires that every stage the signal goes through be clean. The more complex the routing, the more difficult it is to avoid picking up stray signals. In applications that aren't too demanding, having a small amount of analog circuitry combined into a chip can be useful. For example, I've designed a music box which uses a PSoC to drive a piezo speaker directly; the PSoC includes a DAC, a fourth-order low-pass filter, and an output amplifier. It wouldn't have been hard to use a separate chip to do the filtering and amplification, but using the PSoC avoided the need for an extra chip.
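As a quick numerical illustration of the regeneration point above (my own sketch, not from the answer; the 10 dB figure is modeled as Gaussian noise with about a third of the signal's amplitude), compare a signal passed through many noisy stages with and without digital re-thresholding at each stage:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.where(np.sin(np.linspace(0, 20, 1000)) >= 0, 1.0, -1.0)  # +/-1 bit stream

analog = signal.copy()
digital = signal.copy()
for _ in range(50):                              # 50 routing/gain stages
    noise = rng.normal(0, 0.3, signal.shape)     # noise ~1/3 of signal amplitude
    analog = analog + noise                      # analog: noise accumulates
    digital = np.sign(digital + noise)           # digital: each stage re-thresholds

print("analog RMS error:  ", np.sqrt(np.mean((analog - signal) ** 2)))
print("digital bit errors:", int(np.count_nonzero(digital != signal)))
```

The analog chain's error grows roughly with the square root of the stage count and soon swamps the signal, while the thresholded chain restores full noise margin at every stage and ends up with only a handful of bit flips.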
{ "source": [ "https://electronics.stackexchange.com/questions/25525", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2534/" ] }
25,629
How does a latch get its initial state? I'm guessing that it depends on race conditions, and whichever condition comes first determines the state the latch starts off with.
There is certainly a lot of stuff taught in school that is not required in the job market. And, of course, there is a lot that is not taught that should be. This could probably be said about any job market, since it depends on what specialty the person ends up being employed in. Unfortunately for you, neither your professors nor I can tell you what you will and will not use once you get a real job in your field. For example, I don't use calculus in my job as an EE. But a coworker, who is also technically an EE, uses calculus almost daily. I design PCBs and FPGAs, while he writes DSP algorithms. There was no way our teachers could have ever known what we would need to get the job done. That being said: your question to your teacher, about the initial value of the latch or flip-flop (FF), was a great question, and the way your professor responded shows her ignorance of the requirements for designing practical digital logic circuits. Simply put, the initial value of a latch or FF is indeterminate. Meaning, it will have an initial value, but you won't know what it is in advance. A given latch/FF might even have different initial values from one power-up to the next. Sometimes it'll be a '0', other times a '1'. Things like temperature and how fast the power rails ramp up will affect the initial value. If your circuit requires a known initial value then you must force the value. Normally this is done using some sort of set/reset/clear input that is driven by a reset signal. This is also why almost any digital circuit of reasonable complexity has a reset signal. Reset signals are not just for CPUs.
{ "source": [ "https://electronics.stackexchange.com/questions/25629", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/7726/" ] }
25,683
In truth, I've only been told this anecdotally by an instructor, but can someone explain the physics at play? I have been told that if an inductor is driven at a high enough frequency, it will begin to behave as a capacitor, but I cannot figure out why.
An ideal inductor would not behave like a capacitor, but in the real world there are no ideal components. Basically, any real inductor can be thought of as an ideal inductor with a resistor in series (the wire resistance) and a capacitor in parallel (the parasitic capacitance). Now, where does the parasitic capacitance come from? An inductor is made out of a coil of insulated wire, so there are tiny capacitors between the windings (since there are two sections of wire separated by an insulator). Each section of the winding is at a slightly different potential (because of wire inductance and resistance). As the frequency increases, the impedance of the inductor increases while the impedance of the parasitic capacitor decreases, so at some high frequency the impedance of the capacitor is much lower than the impedance of the inductor, which means that your inductor behaves like a capacitor. The inductor thus also has its own self-resonant frequency. This is why some high frequency inductors have their windings spaced far apart - to reduce the capacitance.
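To see where the crossover happens, here is a minimal sketch (assumed, round-number component values) of the usual lumped model: the ideal inductor and its winding resistance in series, all in parallel with the parasitic capacitance. The impedance is inductive below the self-resonant frequency f = 1/(2π√(LC)) and capacitive above it:

```python
import numpy as np

# Assumed values for a small RF inductor.
L = 10e-6   # 10 uH ideal inductance
R = 0.5     # 0.5 ohm winding resistance
C = 5e-12   # 5 pF parasitic winding capacitance

f_srf = 1 / (2 * np.pi * np.sqrt(L * C))
print(f"self-resonant frequency: {f_srf/1e6:.1f} MHz")

for f in [1e6, 10e6, f_srf, 100e6]:
    w = 2j * np.pi * f                   # j*omega
    z = 1 / (1 / (R + w * L) + w * C)    # series RL, in parallel with C
    kind = "inductive" if z.imag > 0 else "capacitive"
    print(f"{f/1e6:8.1f} MHz: |Z| = {abs(z):12.1f} ohm ({kind})")
```

With these numbers the self-resonance lands around 22 MHz; below it the reactance is positive (inductive), above it negative (capacitive), which is exactly the behavior the instructor described.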
{ "source": [ "https://electronics.stackexchange.com/questions/25683", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5565/" ] }
26,232
This doesn't immediately look so good, but if you think about it, why shouldn't it be? If the extension cord drops in, why should the electricity choose to flow through anyone in the pool, instead of just through the ~1 inch of water to the other conductor?
I wouldn't get into that pool, but this isn't as bad as it might first appear. If the rubber is intact, then there is no path to ground; electricity can only flow between the two conductors. There would be a strong field between them, but that diminishes rapidly with distance. The remaining currents through the water (and you) a foot away from two conductors spaced a normal outlet distance apart should be pretty negligible. The real issue is leakage from the hot side through the water to some ground connection. In this pool, it appears that would only be due to a pinhole leak in the pool floor (presumably a bigger opening would be found and fixed to prevent water loss). That would be some distance from where the power cord touched the water, so there should be significant resistance between them. For the current to be substantial, the distance would have to be short, making it less likely your body is near any significant current path. For longer distances, the current would be less due to the resistance, and more spread out anyway. Since the most serious danger is from hot-to-ground conduction, a ground fault interrupter should shut off the power feed if there were such conduction. Of course, anyone dumb enough to float an outlet strip in a pool, and then get into the pool, probably didn't think of plugging the thing into a GFI circuit. What's the real problem here? We've got 7 billion people on this planet and rising. If some of them want to do the rest of us a favor in reducing this problem, who are we to object? I'm all for Darwin Awards as long as I don't get one.
{ "source": [ "https://electronics.stackexchange.com/questions/26232", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/7962/" ] }
26,238
I need to know why people in embedded systems use AT commands. When I ask, people say that it is a standard. So my question is: what does "AT" mean? Why do people keep saying it's a standard?
One seldom-appreciated detail about "AT" commands is that many modems would start out in "auto-baud/auto-parity" mode. Initially, the modem would not try to actually decode any serial data, but would simply watch for a consecutive low pulse and high pulse whose widths matched the same valid bit period (e.g. 3.333 ms for 300 baud, 833 µs for 1200 baud, etc.). Upon finding that, it would see if the next low pulse was five times that width. If so, it would watch for either another high-low-high or else for at least 1.5 bit times of high. Finding either of those would indicate that the modem had just seen a 0x41 or 0xC1 (i.e. "A") at the identified baud rate. It would further indicate that the attached computer was using either 8-N-1 or 7-E-1, or that it was using either 7-N-1 or 7-O-1. In either case, it would look for the next character to be either 0x54 or 0xD4 (i.e. "T"). That would allow the modem to further categorize the character length and parity settings. Note that everything received before the "AT" would be ignored. If echo was turned on, the data would be echoed back to the attached computer simply by mirroring all line transitions, without any serial decoding. If a computer sent data prior to the "AT" at, e.g., 247 baud, it would be echoed back at that speed. Nowadays, a few devices use an initial "A" for auto-baud-rate detection, but otherwise the fact that commands start with "AT" is basically a historical curiosity.
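Here is a hypothetical sketch of that auto-baud idea; the pulse-width table and tolerance are invented for illustration (real modems did this in hardware against their own supported rates):

```python
# Match a measured low/high pulse pair against known bit periods.
BIT_PERIODS_US = {300: 3333.3, 1200: 833.3, 2400: 416.7, 9600: 104.2}
TOLERANCE = 0.15  # accept +/-15% timing error (arbitrary choice)

def match_baud(low_us: float, high_us: float) -> int | None:
    """Return the baud rate whose bit period fits both pulses, else None."""
    for baud, period in BIT_PERIODS_US.items():
        if (abs(low_us - period) < TOLERANCE * period
                and abs(high_us - period) < TOLERANCE * period):
            return baud
    return None

# An 'A' (0x41, sent LSB first) begins with a one-bit-wide low (the start
# bit), a one-bit-wide high (bit 0), then a five-bit-wide low (bits 1..5):
# that is the "five times that width" check described in the answer.
print(match_baud(833, 830))    # -> 1200
print(match_baud(3333, 3400))  # -> 300
print(match_baud(500, 500))    # -> None (fits no known rate)
```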
{ "source": [ "https://electronics.stackexchange.com/questions/26238", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5281/" ] }
26,294
Why do USB plugs only fit one way into USB ports? Forgive my ignorance, but there are several types of plugs that are "omnidirectional" and do not have to be oriented a certain way to fit into the corresponding jack. In the case of USB, when you're trying to plug one in blind, it can be a bit annoying if you happen to be trying to do it upside-down. I'm guessing this has to do with the pinouts, but then why doesn't the USB standard just negotiate for "pin 1" when something is plugged in, or use a functionally symmetrical pinout layout?
MOST connectors in the world only allow one mechanical orientation. Ones that are not orientation specific are usually "concentric", such as the familiar 2.5 / 3.5 / 6 mm plugs on earphones and similar. Where these have more than 2 conductors, the contacts for the conductors at the inside end of the socket ride over the conductors for the tip end as the plugs are inserted. Care must be taken to ensure that no problems are caused by these spurious short-term connections. AC power connectors in some systems can be polarity insensitive, but this can lead to safety concerns where there is some difference in attribute between the two contacts other than their ability to provide power. E.g. in many systems the mains power is ground referenced, with one conductor essentially at ground potential. Reversing the two contacts would still give a functioning power connection but may bypass protection and safety systems. BUT the vast majority of plug and socket systems are orientation sensitive. Consider the plugs for keyboards and mice (DB9, PS/2, now USB), any 3 pin power plug, trailer power connectors, telephone and network connectors (RJ10, RJ11, RJ45, ...), XLR/Cannon and similar audio connectors, video connectors for monitors ("IBM"/Apple/other), SCART AV connectors, HDMI, ... People are well used to this. Why should USB be any different? BUT, full size USB has two power connectors and two signal connectors. The signal connections could easily enough be interchanged, but interchanging the two power connections involves routing +ve and -ve correctly. This could be done with a diode bridge and two diodes, but the voltage drop of about 1.2 V represents a loss of about 25% of the 5 V supply, and an immediate 25% power loss. This could be addressed with mechanically automated switching - essentially relays - or with low voltage drop electronic switches (MOSFETs or other), but the cost and complexity is not justified in view of the ease of "just plugging it in correctly". In Mini and Micro USB systems, with potentially more conductors, this could have been addressed by redundant arrangements of contacts, but that wastes potential resources (size or contacts) and still only results in two possible alignments, 180 degrees apart rotationally. You still could not insert it long side vertical or at an angle. Super solution: for the ultimate connector, consider these two-conductor, wholly functionally symmetric, hermaphroditic connectors. Not only can these be oriented in two orientations rotationally, but there is no "male" or "female" connector - both 'plug' and 'socket' are identical. This scheme can be extended to more conductors using a coaxial arrangement. This is a General Radio GR874 connector. If you ever meet something using these you can be fairly sure you are in the presence of greatness :-). Many, many more of the same.
{ "source": [ "https://electronics.stackexchange.com/questions/26294", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/7985/" ] }
26,404
I've read a lot of topics here. I've seen people say they prefer parts that "have CMOS characteristics" and so on, and some datasheets (like the AVR's) say the device has CMOS characteristics, etc. I also remember the phrase "CMOS compatible". So why does having "CMOS characteristics" make people proud?
CMOS (complementary metal oxide semiconductor) logic has a number of desirable characteristics: High input impedance. The input signal is driving electrodes with a layer of insulation (the metal oxide) between them and what they are controlling. This gives them a small amount of capacitance, but virtually infinite resistance. The current in or out of a CMOS input held at one level is just leakage, usually 1 µA or less. The outputs actively drive both ways, and are pretty much rail-to-rail. CMOS logic consumes very little power when held in a fixed state. The current consumption comes from switching as those capacitances are charged and discharged. Even then, it has a good speed to power ratio compared to other logic types. CMOS gates are very simple. The basic gate is an inverter, which is only two transistors. This, together with the low power consumption, means it lends itself well to dense integration. Or conversely, you get a lot of logic for the size, cost, and power.
{ "source": [ "https://electronics.stackexchange.com/questions/26404", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5281/" ] }
26,417
Nowadays almost everyone has owned a smartphone or some type of GPS device. These devices also seemingly update in real time. How is a GPS satellite capable of responding to potentially millions of requests from millions of different devices, and updating all those millions of devices in real time without lag? As I understand it, websites that get traffic even in the thousands slow down if they're not properly prepared for it; how does GPS cope with amounts of traffic that seem impossible to deal with, difficult even for a supercomputer?
If someone stands on a hilltop over a large town and screams "the Mongols are coming!" then everybody knows what's up and they get out of town. The lookout doesn't have to say "Hey Timmy: The Mongols are coming! Hey John: The Mongols are coming! Hey Sarah..." GPS is just a bunch of satellites in orbit screaming "I'm over here!" in radio frequency. A GPS receiver just tries to make out the different satellites screaming their positions and does the number crunching for "If satellite 1 is over there, and satellite 2 is over THERE, and satellite 3 is just about in THAT spot... then I must be around HERE someplace". Technically, the receiver is listening for each GPS satellite's timestamp and orbital position. It calculates the time the different satellites' signals took to reach the receiver, which gives the receiver the distance from each satellite. Given the distance to each satellite, you know your own position. How? Imagine three satellites in orbit and you on the earth, with long sticks in between. Those sticks are only going to meet in one spot. With one satellite and one fixed length stick, you could be anywhere on a sphere around the satellite. With two satellites, you could be anywhere on the circle where the two spheres intersect. With three satellites, your position generally can only be in one spot. In practice a fourth satellite is required, because the receiver's clock is not accurate enough to measure the signal travel times on its own; the fourth measurement lets the receiver solve for its own clock offset along with its position. (The distance measurements are not that precise either, so seeing more satellites improves the fix.)
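Here is a toy sketch of that "sticks meeting in one spot" number crunching (made-up satellite geometry, and it idealizes away the clock-offset problem by assuming perfect ranges): given satellite positions and measured distances, a few Gauss-Newton iterations recover the receiver position:

```python
import numpy as np

# Invented geometry: four satellites and a "true" receiver position.
sats = np.array([[15e6, 0, 20e6], [0, 15e6, 20e6],
                 [-15e6, 0, 20e6], [0, -15e6, 20e6]], dtype=float)
truth = np.array([1e6, 2e6, 0.0])
ranges = np.linalg.norm(sats - truth, axis=1)   # pretend these were measured

pos = np.zeros(3)                     # crude initial guess
for _ in range(10):
    diff = pos - sats
    dist = np.linalg.norm(diff, axis=1)
    J = diff / dist[:, None]          # Jacobian of each range w.r.t. position
    delta, *_ = np.linalg.lstsq(J, ranges - dist, rcond=None)
    pos += delta                      # step toward the sphere intersection

print(pos)  # converges to ~[1e6, 2e6, 0]
```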
{ "source": [ "https://electronics.stackexchange.com/questions/26417", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8036/" ] }
26,535
Recently I asked how to charge a (lead-acid) car battery at home, and it looks like the answer is "very dangerous, don't do it unless you really, really have to". Meanwhile, people charge the Li-Ion batteries of laptops and power tools in-house every day. Those Li-Ion batteries are smaller than car batteries, yet still have enough chemistry inside to cause trouble should anything go wrong. I guess anyone who says laptop batteries shouldn't be charged in-house would be labeled paranoid immediately and then ignored. So it looks like Li-Ion batteries are much safer than lead-acid batteries, or at least are perceived so. Why exactly do these two types of batteries differ in safety so much?
In domestic use, LiIon (lithium ion) batteries are, all things considered, MORE dangerous than lead-acid batteries, not less dangerous. But both are "reasonably safe" [tm] when used properly. The advice that you linked to above is actually titled "What precautions are needed when charging a car battery in an apartment?", and that is quite different from charging a car battery at home in general. Specifically, a car battery is one of a range of variants of lead-acid batteries; it contains liquid acid, and while it has plugged vents and fillers, it is not "sealed" in any adequate manner. Under certain conditions which are reasonably liable to be encountered in normal charging, it may liberate acid fumes or hydrogen gas, or both. If it is charged in a car or outside, it is unlikely to cause many problems. Lithium ion batteries, when being charged, do not usually liberate hydrogen or release electrolyte. Both are possible, but only if a damaged or incorrect charger is used. In exchange for not doing these things, they instead occasionally catch fire and "explode" (not actually a true explosion). Each 18650 cell in a typical laptop battery contains the energy of about 12 high-energy-load .44 Magnum shells, or about 24 "standard" .44 Magnum rounds, and that's just the electrical energy. (The energy from combustion can exceed 300 kJ, or over a hundred .44 Magnum rounds!) An entry level netbook has 3 such cells, and a top notebook PC may have 9 or even 12 of them. This can be released in about 10 seconds with flame and lots of heat. The standard industry term for this, somewhat tongue in cheek, is "vent with flame". The fact that a standard industry term exists indicates that it's a well known problem. The melt-down mode can be triggered by charging to too high a voltage, discharging excessively and then charging normally, charging at an excessive current rate, puncturing the cell, or in selected cases* giving it a slightly harder than usual knock, which causes internal parts to short together and... wow. (*This is obviously a manufacturing fault, but it has happened in a number of independent cases due to too-tight manufacturing tolerances and substandard assembly.) All that said, LiIon batteries have proved to be very safe considering how many there are and how they are treated. Despite the horror stories, I had never seen a real world "vent with flames" event. Added 7 years on: I've now seen one. An old netbook with a dead LiIon battery pack was left connected to a power supply for several days. One evening it suddenly burst into magnificent melt-down: flame, smoke, ... Urgent defenestration of the battery saved the netbook, and the couch was "not badly damaged". Had we not been in the room we'd have had a house fire. Lead-acid batteries do not have an equivalent failure mode to "vent with flame". However, drop a spanner, a vacuum cleaner's metal suction tube, or similar across the terminals of a car battery and you'll get an immense energy release. The battery may be damaged by such treatment but usually won't explode. Charge them too fast or too long and hydrogen gas will be produced, WILL leak out, and can form a flammable or explosive mix in confined spaces. Battery acid from other than fully sealed lead-acid batteries seems to be special Houdini grade, skilled at escaping in many unexpected instances. If you carry the battery and your arms itch, wash them NOW. Next time you wash your clothes, small holes with brown edges may appear. (Ask me how I know.)
Charge them inside above the gassing point and you may be sorry. I was :-). BUT lead-acid batteries are also very safe as long as they are used as intended. Charging a wet-plate lead-acid car battery "inside" at home is not included in "as intended". Some excellent related links, recommended by Nick Alexeev: MPower UK - Lithium Battery Failures; SE EE - Why is there so much fear surrounding LiPo batteries? - a good question with eleven good-to-great answers.
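A back-of-envelope check of the energy comparison above, using assumed round figures (a 2.5 Ah, 3.7 V 18650 cell and roughly 1.4 kJ of muzzle energy per "standard" .44 Magnum round):

```python
cell_wh = 2.5 * 3.7          # watt-hours in one 18650 cell (assumed rating)
cell_kj = cell_wh * 3.6      # 1 Wh = 3.6 kJ, so ~33 kJ of electrical energy
rounds = cell_kj / 1.4       # ~1.4 kJ per "standard" .44 Magnum round
print(f"one 18650: ~{cell_kj:.0f} kJ electrical = ~{rounds:.0f} rounds")
print(f"9-cell notebook pack: ~{9 * cell_kj:.0f} kJ electrical")
```

That works out to roughly 33 kJ and about 24 rounds per cell, consistent with the answer's figures, and around 300 kJ of electrical energy alone in a 9-cell pack, before counting any energy of combustion of the cell materials.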
{ "source": [ "https://electronics.stackexchange.com/questions/26535", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3552/" ] }
26,547
I'm an Electrical Engineering student and I'm studying the hardware description language known as VHDL. I searched Google looking for an IDE (I'm on a Mac), but this language seems pretty dead. So here is my question: in my future job as an electrical engineer, will VHDL be useful to me? Are you using it? UPDATE: Thank you everyone for the answers; I was clearly wrong in my first impression.
I use ONLY VHDL. It is far from dead. A couple of years ago it seemed like a 50/50 split between people using VHDL and Verilog (anecdotal evidence at best), but I doubt that it has changed much since then. The most recent version of VHDL is "VHDL-2008", which in language standard terms was just yesterday.
{ "source": [ "https://electronics.stackexchange.com/questions/26547", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8092/" ] }
26,937
Will a <5 mW Class IIIa laser become less dangerous to the eye if I supply it with less current/voltage? Or is there something about the laser itself that makes it dangerous?
In general, the laser hazard depends on the laser power, the output beam diameter, and the laser wavelength. For a class IIIa or 3R laser (the "IIIa" designation is basically obsolete, although it remains in use for products certified before the new classes were defined), you're at low risk if you don't force yourself to stare into the beam. If the beam just happens to stray into your eye, you'll generally have a reflex response to look away from the painfully bright light. (Don't fight this reflex - keep yourself safe.) Note that the high end of the class 3R power limits is defined by the power where 50% of people will have an "aversion response" sufficient to avoid injury, and the other 50% won't. Reducing the current to a diode laser will reduce the output power, and thus make the output safer. However, most laser accidents don't happen when the laser is operated as intended or as planned. You also need to consider all the possible "fault conditions", or ways that things can go wrong. Say you design a control circuit that regulates the laser output to always be less than 1 mW (in most cases, a "safe" level) using a feedback photodiode. For real safety, you should also consider things like: What if the optical feedback path to the photodiode is blocked (by dirt getting in your box)? What if the electrical path from the feedback diode to its amplifier is broken? What if there is a spike or drift in the power supply to the laser or laser drive circuit? What if some bit of metal junk gets in your box and short-circuits the laser to the power supply, bypassing the power controller? How are you calibrating the limit value for the photodiode current? If you're setting it with a pot, could the pot drift or be mis-adjusted after calibration? If you're using a digital circuit, could the EEPROM with the cal data be erased or damaged? Etc. These are the kinds of conditions that marketable laser products need to consider before they can pass regulatory requirements. Before you risk your eyesight with your laser system, you should at least consider doing this kind of analysis for yourself. If you used a laser with less power capability, you would know that before any of these kinds of hazards could lead to a dangerous beam, the laser would burn itself out. If you use a laser capable of producing 5 mW before burning out, you'd be wise to treat it with a proportional level of respect.
{ "source": [ "https://electronics.stackexchange.com/questions/26937", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8219/" ] }
26,944
I have just been cogitating on the tutorial at http://www.electronics-tutorials.ws/io/io_5.html, and in the discussion of flywheel diodes it includes this sentence without further elaboration: "As well as using flywheel Diodes for protection of semiconductor components, other devices used for protection include RC Snubber Networks, Metal Oxide Varistors or MOV and Zener Diodes." I can kind of see how an RC network might be needed if it is a large device and therefore the coil could be kicking back more current than you want to dissipate through a single diode. (Please correct me if that's not the reason.) I don't have a clue what an MOV is, so for the moment I'll ignore that one. :-) I have read a bit about Zener diodes, but I don't understand why their lower reverse breakdown voltage might be desirable here. Edit: I'm also puzzled by the following diagram from the tutorial above. Wouldn't it take any flyback voltage and dump it into the Vcc net? Would it not be a better idea to have the relay coil between TR1 and ground, and the diode dissipating the flyback voltage to ground?
The current from the relay coil when the transistor switches off doesn't go into the Vcc rail at all; it circulates around the loop formed by the coil and the flyback diode. The stored energy is dissipated in the diode drop and the coil resistance of the relay. In the Zener diode configuration, the stored energy is dissipated across the full Zener voltage of the diode. V·I is a lot higher power, so the current falls faster and the relay may drop out a little faster. MOVs are different from Zeners, but fulfill a similar circuit function: they absorb energy when the voltage exceeds a certain level. They are used for overvoltage protection, not for precision things like voltage regulators.
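A rough worked comparison of the two clamps, with assumed round-number coil values. The stored energy is E = ½LI², and the current ramps to zero at roughly dI/dt = -V_clamp/L, so a higher clamp voltage ends the current sooner (this ignores the coil's own resistance, which speeds the plain-diode case up somewhat):

```python
L = 0.1      # assumed 100 mH relay coil
I0 = 0.05    # assumed 50 mA coil current at switch-off

energy = 0.5 * L * I0**2               # stored magnetic energy, E = L*I^2/2
print(f"stored energy: {energy*1e6:.0f} uJ")

for name, v_clamp in [("plain diode (~0.7 V)", 0.7), ("24 V zener", 24.0)]:
    t_off = L * I0 / v_clamp           # time for the current to ramp to zero
    print(f"{name}: current gone in ~{t_off*1e3:.2f} ms")
```

With these numbers the plain diode takes about 7 ms to kill the current, while the 24 V zener does it in about 0.2 ms, which is why the zener version releases the relay faster.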
{ "source": [ "https://electronics.stackexchange.com/questions/26944", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1317/" ] }
27,066
Thinking about it: you would never find a "grounded" multimeter as robust and useful; if a path to ground through the multimeter were introduced, it would modify the circuit's behaviour and possibly damage the multimeter with stray currents. So why are so many oscilloscopes earth referenced? Reading some educational material, a majority of the "common mistakes made by students" are placing the grounding clip incorrectly and getting poor results - when the scope is just being used as a fancy voltmeter! I've heard of a Tek scope having an isolation transformer within... however, ignoring that, and taking into account that newer DSOs may have plastic cases (isolated from you, most importantly, I would assume), could I just remove the earthing pin, install a 1:1 AC transformer between the scope and the outlet, and be on my merry way probing various hot/neutral/earthed sources with no worries about a path to ground through it any longer?
Oscilloscopes usually require significant power and are physically big. Letting a chassis that size float, including the exposed grounds on the BNC connectors and the probe ground clips, would be dangerous. If you have to look at waveforms in wall-powered equipment, it is generally much better to put the isolation transformer on that equipment instead of on the scope. Once the scope is connected, it provides a ground reference to that part of the circuit, so other parts could then be at high ground-referenced voltages, which could be dangerous. However, you'll likely be more careful not to touch parts of the unit under test than the scope. Scopes can also have other paths to ground that are easy to forget. For example, the scope on my bench usually has a permanent RS-232 connection to my computer. It would be easy to float the scope but forget about such things. The scope would then actually not be floating; at best a fuse would pop when it was first connected to a wall-powered unit under test in the wrong place. Manufacturers could isolate the scope easily enough, but that probably opens them to liability problems. In general, bench equipment is not isolated but hand-held equipment is. If you really need to make isolated measurements often, you can get battery operated handheld scopes.
{ "source": [ "https://electronics.stackexchange.com/questions/27066", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/7181/" ] }
27,210
I sometimes end up with someone's unwanted old electronics, usually faulty. Does it make sense to harvest components from them? Is there anything valuable I should be looking for?
You'll be paying yourself only pennies per hour. I wouldn't bother with basic and cheap stuff like resistors and capacitors. ICs are often hard to identify, and old ones have little value today. I would look for things like power transformers, speakers, heat sinks, relays, solenoids, motors, and large mechanical parts. Those cost more and are often hard to find. Sometimes just the box and chassis are the more valuable parts.
{ "source": [ "https://electronics.stackexchange.com/questions/27210", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8324/" ] }
27,486
I've seen many projects in which an AVR microcontroller is used with a bootloader (such as the Arduino), but I don't understand the concept very well. How can I make a bootloader (for any microcontroller)? After writing my bootloader, how is it programmed into the microcontroller (like any .hex program burned into the flash ROM of the AVR, or some other method)?
A bootloader is a program that runs in the microcontroller to be programmed. It receives new program information externally via some communication means and writes that information to the program memory of the processor. This is in contrast with the normal way of getting the program into the microcontroller, which is via special hardware built into the micro for that purpose. On PICs, this is a SPI-like interface. If I remember right, AVRs use JTAG, or at least some of them do. Either way, this requires some external hardware that wiggles the programming pins just right to write the information into the program memory. The HEX file describing the program memory contents originates on a general purpose computer, so this hardware connects to the computer on one side and the special programming pins of the micro on the other. My company makes PIC programmers among other things as a sideline, so I am quite familiar with this process on PICs. The important point of external programming via specialized hardware is that it works regardless of the existing contents of program memory. Microcontrollers start out with program memory erased or in an unknown state, so external programming is the only means to get the first program into a micro. If you are sure about the program you want to load into your product and your volumes are high enough, you can have the manufacturer or a distributor program chips for you. The chip gets soldered to the board like any other chip, and the unit is ready to go. This can be appropriate for something like a toy, for example. Once the firmware is done, it's pretty much done, and it will be produced in large volumes. If your volumes are lower, or more importantly, you expect ongoing firmware development and bug fixes, you don't want to buy pre-programmed chips. In this case blank chips are mounted on the board and the firmware has to get loaded onto the chip as part of the production process. In that case the hardware programming lines have to be made available somehow. This can be via an explicit connector, or pogo pin pads if you're willing to create a production test fixture. Often such products have to be tested and maybe calibrated anyway, so the additional cost of writing the program to the processor is usually minimal. Sometimes when small processors are used, a special production test firmware is first loaded into the processor. This is used to facilitate testing and calibrating the unit; then the real firmware is loaded after the hardware is known to be good. In this case there are some circuit design considerations to allow enough access to the programming lines for the programming process to work, but also to not inconvenience the circuit too much. For more details on this, see my in-circuit programming writeup. So far so good, and no bootloader is needed. However, consider a product with relatively complex firmware that you want to be field upgradable, or even allow the end customer to upgrade. You can't expect the end customer to have a programmer gadget, or to know how to use one properly even if you provided one. Actually, one of my customers does this: if you buy their special field customizing option, you get one of my programmers with the product. However, in most cases you just want the customer to run a program on a PC and have the firmware magically updated. This is where a bootloader comes in, especially if your product already has a communications port that can easily interface with a PC, like USB, RS-232, or Ethernet.
The customer runs a PC program which talks to the bootloader already in the micro. This sends the new binary to the bootloader, which writes it to program memory and then causes the new code to be run. Sounds simple, but it's not, at least not if you want this process to be robust. What if a communication error happens and the new firmware is corrupt by the time it arrives at the bootloader? What if power gets interrupted during the boot process? What if the bootloader has a bug and craps on itself? A simplistic scenario is that the bootloader always runs from reset. It tries to communicate with the host. If the host responds, then it either tells the bootloader it has nothing new, or sends it new code. As the new code arrives, the old code is overwritten. You always include a checksum with uploaded code, so the bootloader can tell if the new app is intact. If not, it stays in the bootloader, constantly requesting an upload, until something with a valid checksum gets loaded into memory. This might be acceptable for a device that is always connected, possibly with a background task on the host that responds to bootloader requests. This scheme is no good for units that are largely autonomous and only occasionally connect to a host computer. Usually the simple bootloader as described above is not acceptable since there is no fail-safe. If a new app image is not received intact, you want the device to continue running the old image, not to be dead until a successful upload is performed. For this reason, usually there are actually two special modules in the firmware: an uploader and a bootloader. The uploader is part of the main app. As part of regular communications with the host, a new app image can be uploaded. This requires memory separate from the main app image, like an external EEPROM, or use of a larger processor so half the program memory space can be allocated to storing the new app image. The uploader just writes the received new app image somewhere, but does not run it. When the processor is reset, which could happen on command from the host after an upload, the bootloader runs. This is now a totally self-contained program that does not need external communication capability. It compares the current and uploaded app versions, checks their checksums, and copies the new image onto the app area if the versions differ and the new image's checksum is valid. If the new image is corrupt, it simply runs the old app as before. I've done a lot of bootloaders, and no two are the same. There is no general purpose bootloader, despite what some of the microcontroller companies want you to believe. Every device has its own requirements and special circumstances in dealing with the host. Here are just some of the bootloader (and sometimes uploader) configurations I've used: 1. Basic bootloader. This device had a serial line and would be connected to a host and turned on as needed. The bootloader ran from reset and sent a few upload request responses to the host. If the upload program was running, it would respond and send a new app image. If it didn't respond within 500 ms, the bootloader would give up and run the existing app. To update firmware, therefore, you had to run the updater app on the host first, then connect and power on the device. 2. Program memory uploader. Here we used the next size up PIC that had twice as much program memory. The program memory was roughly divided into 49% main app, 49% new app image, and 2% bootloader.
The bootloader would run from reset and copy the new app image onto the current app image under the right conditions. 3. External EEPROM image. Like #2 except that an external EEPROM was used to store the new app image. In this case the processor with more memory would have also been physically bigger, and in a different sub-family that didn't have the mix of peripherals we needed. 4. TCP bootloader. This was the most complex of them all. A large PIC 18F was used. The last 1/4 of memory or so held the bootloader, which had its own complete copy of a TCP network stack. The bootloader ran from reset and tried to connect to a special upload server at a known port at a previously configured IP address. This was for large installations where there was always a dedicated server machine for the whole system. Each small device would check in with the upload server after reset and would be given a new app copy as appropriate. The bootloader would overwrite the existing app with the new copy, but only run it if the checksum was valid. If not, it would go back to the upload server and try again. Since the bootloader was itself a complicated piece of code containing a full TCP network stack, it had to be field upgradeable too. The way we did that was to have the upload server feed it a special app whose only purpose was to overwrite the bootloader once it got executed, then reset the machine so that the new bootloader would run, which would cause the upload server to send the latest main app image. Technically, a power glitch during the few milliseconds it took the special app to copy a new image over the bootloader would be an unrecoverable failure. In practice this never happened. We were OK with the very unlikely chance of that, since these devices were parts of large installations where there were already people who would do maintenance on the system, which occasionally meant replacing the embedded devices for other reasons anyway. Hopefully you can see that there are a number of other possibilities, each with its own tradeoffs of risk, speed, cost, ease of use, downtime, etc.
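Here is a hypothetical sketch of the uploader/bootloader split described above; the names, memory layout, and checksum convention are all invented for illustration:

```python
import hashlib

def checksum_ok(image: bytes) -> bool:
    """Last 32 bytes hold a SHA-256 of the rest (an assumed convention)."""
    body, stored = image[:-32], image[-32:]
    return hashlib.sha256(body).digest() == stored

def bootloader(current_app: bytes, staged_app: bytes | None) -> bytes:
    """Runs from reset: install the staged image only if it is intact."""
    if (staged_app is not None
            and staged_app != current_app     # stand-in for a version compare
            and checksum_ok(staged_app)):
        current_app = staged_app              # copy new image over the app area
    # If the staged image was missing or corrupt, the old app keeps running,
    # which is the fail-safe behavior described in the answer.
    return current_app                        # then jump to whatever app is valid
```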
{ "source": [ "https://electronics.stackexchange.com/questions/27486", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/6357/" ] }
27,561
Possible Duplicate: Correct formula for LED current-limiting resistor? Why do we use a 330 ohm resistor to connect an LED? I mean: in practice R is 330 ohm. Why this value? How do I calculate it? What's the purpose of it? Are there specific LED parameters that give this value?
This is to limit the current through the LED; without a resistor the LED will eat current until it melts. The voltage drop across an LED depends on its color; for a blue LED, for example, it's about 3.4 V. So if you have a 5 V power supply and want 5 mA of current through the LED (5 mA usually gives good visibility), you need (5 V - 3.4 V)/0.005 A = 320 ohm. (I.e. this resistance gives a voltage drop of 1.6 V across the resistor, and the remaining 3.4 V drops across the LED, 5 V total.) Red LEDs usually have a smaller voltage drop (~2 V), so you'll have slightly higher current with the same resistor, but anything below 20 mA is usually OK. Also, slightly smaller currents are OK; LEDs at 1 mA are easily visible. PS. A few extra things: 1) The light output of an LED is linearly proportional to current until it's well over specifications. That's why everyone talks about the current through the LED. 2) Personally, I throw in 220 ohm in 5 V circuits to make it really bright :-) But on my recent project where I had a 3.3 V supply and LEDs of different colors (green, red, blue), I had to calculate resistances more carefully, and they were 68 ohm for blue and 220 ohm for green and red.
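The same calculation as a tiny helper, if you want to redo it for other supplies and colors (the values in the usage line are from the answer's example):

```python
def led_resistor(v_supply: float, v_led: float, i_led: float) -> float:
    """Current-limiting resistor value: R = (Vsupply - Vled) / Iled."""
    return (v_supply - v_led) / i_led

# Blue LED (3.4 V drop) on a 5 V supply at 5 mA:
print(led_resistor(5.0, 3.4, 0.005))   # -> 320 ohm; 330 is the nearest E12 value
```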
{ "source": [ "https://electronics.stackexchange.com/questions/27561", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5281/" ] }
27,573
I was looking at an instructable and they suggested using "reflow soldering" for a certain section. "Reflow" wasn't a concept I was familiar with, so I did some Googling... I got a basic description of the process, but I still can't figure out why you would want to do this over "traditional" soldering (not sure of the proper term). What are the pros/cons of this technique, and when would I favor it over other techniques?
Commercially there are two main soldering methods: reflow and wave. "Manual" soldering may still be used to add selected mechanically complex or large parts, but this would be rare. "Manual" soldering could include the use of "robots" for the excessively keen. Wave soldering involves literally passing a wave of molten solder along a carefully preheated board. The board temperature, heating and cooling profiles (non-linear), solder temperature, wave shape (evenness), time in solder, flow rate, board rate, and more are all important factors that affect results. Pad shapes and component orientations matter, and shadowing of parts by other parts needs to be worked around. All aspects of board design, layout, placement, pad shapes and sizes, heat-sinking, and more need to be carefully considered to get good results. Where wave soldering is used with SMD components, they need to be retained in position, either with purpose-applied instant-set adhesive or advanced magic. Clearly, wave soldering is an aggressive and demanding process - so why use it? It's used because it is the best and cheapest method when it can be done, and the only practical method in some cases. Where through-hole components are used, wave soldering is usually the method of choice. Reflow soldering, by contrast, is less demanding on pad shape, shadowing, board orientation, temperature profiles (still very important), and more. For surface mount components it is often a very good choice: solder and flux mix are pre-applied with a stencil or other automated process, components are placed in position, and they are often adequately retained by the solder paste alone. Adhesive may be used in demanding cases. Use with through-hole parts is problematic or worse; usually reflow will not be the method of choice for through-hole parts. Where it can be used, reflow soldering is used in preference to wave. It is more amenable to small scale manufacture, and generally easier with SMD parts. Complex and/or high density boards may use a mix of reflow and wave soldering, with leaded parts being mounted on one side of the PCB only (call this side A) so they can be wave soldered on side B. Prior to through-hole part insertion, components can be reflow soldered on side A amidst where the TH parts are going to be inserted. Additional SMD parts can then be added to side B to be wave soldered along with the TH parts. Those keen on high-wire acts can try complex mixes with different melting point solders, allowing reflow on side B before or after wave soldering, but that would be very uncommon. FWIW, manual soldering, while slow and expensive, is the least demanding of most factors, as it usually also utilises biological computing power to control relatively crude soldering instruments in extremely flexible manners. However, precision of component heating and temperature profiles is comparatively poor. Some modern components (e.g. Nichia SMD LEDs with silicone rubber lenses) MUST be reflow soldered (according to the data sheet) and MUST NOT be hand soldered or wave soldered.
{ "source": [ "https://electronics.stackexchange.com/questions/27573", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8033/" ] }
27,619
I can't find any reliable information about this, and I don't have the full specification of the SD/MMC card hardware. Is it true that SD/MMC cards handle wear leveling internally, so my high-level application doesn't need to be concerned with it? EDIT: Could someone confirm that wear leveling is guaranteed by the SD specification? I want to be sure, because it looks like most vendors do it, but it is not required by the specification.
I work for a company that used to be a member of the SD Association; we are familiar with the 2.0 (SDHC) spec. The SD card spec has NO entry for wear leveling. That is left entirely to the SD manufacturer to handle, if they so choose. We have seen that some likely do, while others very much do not (beware the super cheap knock-off SD cards). SDXC may have changed that to include wear leveling, but I am unsure of that. Unfortunately, the only way to really settle it is to get your hands on the official spec. You can probably find it online, but the SD Association really wants you to pay for it. As a side note, taking a 2 GB card and writing it beginning to end over and over again averages about 10 TB before the card is dead and no longer writable. Also, SD cards will not let you know when data is bad, i.e. they won't return an I/O error like a PC hard drive will. This might not be an issue for embedded designs, as 10 TB is a LOT of data, but it could be a factor for someone.
{ "source": [ "https://electronics.stackexchange.com/questions/27619", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/6942/" ] }
27,677
How does one use a measured step response to tune either a PID or convolution control scheme? Inspired by this answer*, I'm interested in a more detailed explanation of how to implement a control system based on the measured step response: "I would not try to guess all the effects. There are probably some non-obvious things going on, and you can't know all the parameters. I would measure the step response. Find two pump settings that both result in the ball being within the measurable range within the tube. Then have the controller suddenly switch from one setting to the other open loop. Meanwhile measure what the ball does over time. That is the step response. You can take the derivative of that and get the impulse response. From the impulse response you can predict the motion of the ball for any history of pump settings, assuming this is a linear system. It is probably linear enough over the small range of settings to keep the ball within its normal range. You can use this as a simulation base to find the parameters for old fashioned PID control. Or you can use the impulse response directly to do a convolution control. You have to low pass filter the control input enough so that convolution kernel doesn't go negative unless your pump is actually reversible and can suck the ball back down." How exactly does this work? PID tuning is difficult; I assume "convolution control" is the use of the pole-zero or transfer function, but I don't see exactly how to get the parameters. RELATED: System Modeling for Control Systems. (* modeling transfer function of ping pong ball levitation in a tube as a damper)
Introduction

First, we need to consider what exactly this thing called the impulse response of a system is, and what it means. This is an abstract concept that takes a little thinking to visualize. I'm not going to get into rigorous math. My point is to try to give some intuition about what this thing is, which then leads to how you can make use of it.

Example control problem

Imagine you had a big fat power resistor with a temperature sensor mounted on it. Everything starts out off and at ambient temperature. When you switch on the power, you know that the temperature at the sensor will eventually rise and stabilize, but the exact equation would be very hard to predict. Let's say the system has a time constant around 1 minute, although "time constant" isn't completely applicable since the temperature doesn't rise in a nice exponential as it would in a system with a single pole, and therefore a single time constant. Let's say you want to control the temperature accurately, and have it change to a new level and settle there significantly more quickly than it would if you just switched on the appropriate power level and waited. You may need about 10 W steady state for the desired temperature, but you can dump 100 W into the resistor, at least for a few tens of seconds. Basically, you have a control system problem. The open loop response is reasonably repeatable and there is somewhere an equation that models it well enough, but the problem is there are too many unknowns for you to derive that equation.

PID control

One classic way to solve this is with a PID controller. Back in the Pleistocene when this had to be done in analog electronics, people got clever and came up with a scheme that worked well with the analog capabilities at hand. That scheme was called "PID", for Proportional, Integral, and Derivative.

P term

You start out measuring the error. This is just the measured system response (the temperature reported by the sensor in our case) minus the control input (the desired temperature setting). Usually these could be arranged to be available as voltage signals, so finding the error was just an analog difference, which is easy enough. You might think this is easy: all you have to do is drive the resistor with higher power the higher the error is. That will automatically try to make it hotter when it's too cold and colder when it's too hot. That works, sort of. Note that this scheme needs some error to cause any non-zero control output (power driving the resistor). In fact, it means that the higher the power needed, the bigger the error is, since that's the only way to get the high power. Now you might say all you have to do is crank up the gain so that the error is acceptable even at high power out. After all, that's pretty much the basis for how opamps are used in a lot of circuits. You are right, but the real world won't usually let you get away with that. This may work for some simple control systems, but when there are all sorts of subtle wrinkles to the response, and when it can take a significant time, you end up with something that oscillates when the gain is too high. Put another way, the system becomes unstable. What I described above was the P (proportional) part of PID. Just like you can make the output proportional to the error signal, you can also add terms proportional to the time derivative and integral of the error. Each of these P, I, and D signals has its own separate gain before being summed to produce the control output signal.
I term

The I term allows the error to null out over time. As long as there is any positive error, the I term will keep accumulating, eventually raising the control output to the point where the overall error goes away. In our example, if the temperature is consistently low, it will constantly increase the power into the resistor until the output temperature is finally not low anymore. Hopefully you can see this can become unstable even faster than just a high P term can. An I term by itself can easily cause overshoots, which become oscillations easily.

D term

The D term is sometimes left out. The basic use of the D term is to add a little stability so that the P and I terms can be more aggressive. The D term basically says: if I'm already heading in the right direction, lay off on the gas a bit, since what I have now seems to be getting us there.

Tuning PID

The basics of PID control are pretty simple, but getting the P, I, and D terms just right is not. This is usually done with lots of experimentation and tweaking. The ultimate aim is to get an overall system where the output responds as quickly as possible but without excessive overshoot or ringing, and of course it needs to be stable (not start oscillating on its own). There have been many books written on PID control, how to add little wrinkles to the equations, but particularly how to "tune" them. Tuning refers to divining the optimum P, I, and D gains.
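For concreteness, here is a minimal discrete-time PID loop of the kind described above; the gains are made up, and a real system would tune them against the measured response as discussed later:

```python
def make_pid(kp: float, ki: float, kd: float, dt: float):
    """Return a PID step function closing over its integrator state."""
    state = {"integral": 0.0, "prev_err": 0.0}
    def step(setpoint: float, measured: float) -> float:
        err = setpoint - measured
        state["integral"] += err * dt            # I term accumulates error
        deriv = (err - state["prev_err"]) / dt   # D term looks at the trend
        state["prev_err"] = err
        return kp * err + ki * state["integral"] + kd * deriv
    return step

pid = make_pid(kp=2.0, ki=0.1, kd=0.5, dt=0.5)   # 500 ms loop, as in the example later
power = pid(setpoint=80.0, measured=25.0)         # control output for this tick
```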
Beyond PID

Today, a closed loop control system for something like the temperature example would be done in a microcontroller. These can do many more things than just take the derivative and integral of an error value. In a processor you can do divides, square roots, keep a history of recent values, and lots more. Many control schemes other than PID are possible. PID control systems work, and there is certainly plenty of lore and tricks out there to make them work well. However, PID control is not the single right answer for a control system. People seem to have forgotten why PID was chosen in the first place, which had more to do with constraints of analog electronics than being some sort of universal optimum control scheme. Unfortunately, too many engineers today equate "control system" with PID, which is nothing more than a small-thinking knee-jerk reaction. That doesn't make PID control wrong in today's world, but it is only one of many ways to attack a control problem.

Impulse response

So forget about the limitations of analog electronics, step back, and think how we might control a system going back to first principles. What if for every little piece of control output we knew what the system would do? The continuous control output is then just the summation of lots of little pieces. Since we know what the result of each piece is, we can know what the result of any previous history of control outputs is. Now notice that "a small piece" of the control output fits nicely with digital control. You are going to compute what the control output should be and set it to that, then go back and measure the inputs again, compute the new control output from those and set it again, etc. You are running the control algorithm in a loop, and it measures the inputs and sets the control output anew each loop iteration. The inputs are "sampled" at discrete times, and the output is likewise set to new values at a fixed interval. As long as you can do this fast enough, you can think of this as happening in a continuous process. In the case of a resistor heating that normally takes a few minutes to settle, several times per second is certainly so much faster than the system inherently responds in any meaningful way that updating the output at, say, 4 Hz will look continuous to the system. This is exactly the same as digitally recorded music actually changing the output value in discrete steps in the 40-50 kHz range, that being so fast that our ears can't hear it and it sounds continuous, like the original.

So what could we do if we had this magic way of knowing what the system will do over time due to any one control output sample? Since the actual control response is just a sequence of samples, we can add up the response from all the samples and know what the resulting system response will be. In other words, we can predict the system response for any arbitrary control waveform. That's cool, but merely predicting the system response doesn't solve the problem. However, and here is the aha moment, you can flip this around and find the control output that it would have taken to get any desired system response. Note that is exactly solving the control problem, but only if we can somehow know the system response to a single arbitrary control output sample.

So you're probably thinking, that's easy: just give it a large pulse and see what it does. Yes, that would work in theory, but in practice it usually doesn't. That is because any one control sample, even a large one, is so small in the overall scheme of things that the system barely has a measurable response at all. And remember, each control sample has to be small in the scheme of things so that the sequence of control samples feels continuous to the system. So it's not that this idea won't work, but that in practice the system response is so small that it is buried in the measurement noise. In the resistor example, hitting the resistor with 100 W for 100 ms isn't going to cause enough temperature change to measure.

Step response

But there still is a way. While putting a single control sample into the system would have given us its response to individual samples directly, we can still infer it by putting a known and controlled sequence of control inputs into the system and measuring its response to those. Usually this is done by putting a control step in. What we really want is the response to a small blip, but the response to a single step is just the integral of that. In the resistor example, we can make sure everything is steady state at 0 W, then suddenly turn on the power and put 10 W into the resistor. That will eventually cause a nicely measurable temperature change on the output. The derivative of that, with the right scaling, tells us the response to an individual control sample, even though we couldn't measure that directly. So to summarize: we can put a step control input into an unknown system and measure the resulting output. That's called the step response. Then we take the time derivative of that, which is called the impulse response. The system output resulting from any one control input sample is simply the impulse response appropriately scaled to the strength of that control sample. The system response to a whole history of control samples is a whole bunch of the impulse responses added up, scaled and skewed in time for each control input. That last operation comes up a lot and has the special name of convolution.
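A small sketch of that prediction step (the impulse response here is a made-up smooth curve standing in for a measured one; in practice it comes from the differentiated step response just described):

```python
import numpy as np

dt = 0.5                                    # 500 ms control samples
t = np.arange(0, 600, dt)                   # 10 minutes of samples
h = 2.4e-4 * (t / 120) * np.exp(-t / 120)   # toy impulse response, ~1e-4 peak
drive = 0.35 * np.ones_like(t)              # a constant 0.35 drive, as a step input

response = np.convolve(drive, h)[:len(t)]   # output = drive convolved with h
print(response[-1])                         # approaches the steady-state value
```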
Convolution control So now you should be able to imagine that for any desired set of system outputs, you can come up with the sequence of control inputs to cause that output. However, there is a gotcha. If you get too aggressive with what you want out of the system, the control inputs to achieve that will require unachievably high and low values. Basically, the faster you expect the system to respond, the bigger the control values need to be, in both directions. In the resistor example, you can mathematically say you want it to go immediately to a new temperature, but that would take an infinite control signal to achieve. The slower you allow the temperature to change to the new value, the lower the maximum power you need to be able to dump into the resistor. Another wrinkle is that the power into the resistor will sometimes need to go down too. You can't put less than 0 power into the resistor, so you have to allow a slow enough response so that the system wouldn't want to actively cool the resistor (put negative power in), because it can't. One way to deal with this is for the control system to low pass filter the user control input before using it internally. Figure users do what users want to do. Let them slam the input quickly. Internally you low pass filter that to smooth it and slow it down to the fastest you know you can realize given the maximum and minimum power you can put into the resistor. Real world example Here is a partial example using real world data. This is from an embedded system in a real product that among other things has to control a couple dozen heaters to maintain various chemical reservoirs at specific temperatures. In this case, the customer chose to do PID control (it's what they felt comfortable with), but the system itself still exists and can be measured. Here is the raw data from driving one of the heaters with a step input. The loop iteration time was 500 ms, which is clearly a very short time considering the system is still visibly settling on this scale graph after 2 hours. In this case you can see the heater was driven with a step of about .35 in size (the "Out" value). Putting a full 1.0 step in for a long time would have resulted in too high a temperature. The initial offset can be removed and the result scaled to account for the small input step to infer the unit step response: From this you'd think it would be just subtracting successive step response values to get the impulse response. That's correct in theory, but in practice you get mostly the measurement and quantization noise since the system changes so little in 500 ms: Note also the small scale of the values. The impulse response is shown scaled by \$10^6\$. Clearly large variations between individual or even a few readings are just noise, so we can low pass filter this to get rid of the high frequencies (the random noise), which hopefully lets us see the slower underlying response. Here is one attempt: That's better and shows there really is meaningful data to be had, but still too much noise. Here is a more useful result obtained with more low pass filtering of the raw impulse data: Now this is something we can actually work with. The remaining noise is small compared to the overall signal, so it shouldn't get in the way. The signal seems to still be there pretty much intact. One way to see this is to notice the peak of 240 is about right from a quick visual check and eyeball filtering of the previous plot. So now stop and think about what this impulse response actually means. 
First, note that it is displayed times 1M, so the peak is really 0.000240 of full scale. This means that in theory if the system were driven with a single full scale pulse for one of the 500 ms time slots only, this would be the resulting temperature relative to it having been left alone. The contribution from any one 500 ms period is very small, as makes sense intuitively. This is also why measuring the impulse response directly doesn't work, since 0.000240 of full scale (about 1 part in 4000) is below our noise level. Now you can easily compute the system response for any control input signal. For each 500 ms control output sample, add in one of these impulse responses scaled by the size of that control sample. The 0 time of that impulse response contribution to the final system output signal is at the time of its control sample. Therefore the system output signal is a succession of these impulse responses offset by 500 ms from each other, each scaled to the control sample level at that time. The system response is the convolution of the control input with this impulse response, computed every control sample, which is every 500 ms in this example. To make a control system out of this you work it backwards to determine the control input that results in the desired system output. This impulse response is still quite useful even if you want to do a classic PID controller. Tuning a PID controller takes a lot of experimentation. Each iteration would take an hour or two on the real system, which would make iterative tuning very, very slow. With the impulse response, you can simulate the system response on a computer in a fraction of a second. You can now try new PID values as fast as you can change them and not have to wait an hour or two for the real system to show you its response. Final values should of course always be checked on the real system, but most of the work can be done with simulation in a fraction of the time. This is what I meant by "You can use this as a simulation base to find the parameters for old fashioned PID control" in the passage you quoted in your question.
{ "source": [ "https://electronics.stackexchange.com/questions/27677", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2118/" ] }
27,756
As the title says really, why do ethernet sockets need to be mag-coupled? I have a basic understanding of electronics, but mostly, I can't figure out the right search terms to google this properly.
The correct answer is because the ethernet specification requires it . Although you didn't ask, others may wonder why this method of connection was chosen for that type of ethernet. Keep in mind that this applies only to the point-to-point ethernet varieties, like 10base-T and 100base-T, not to the original ethernet or to ThinLan ethernet. The problem is that ethernet can support fairly long runs such that equipment on different ends can be powered from distant branches of the power distribution network within a building or even different buildings. This means there can be significant ground offset between ethernet nodes. This is a problem with ground-referenced communication schemes, like RS-232. There are several ways of dealing with ground offsets in communications lines, with the two most common being opto-isolation and transformer coupling. Transformer coupling was the right choice for ethernet given the tradeoffs between the methods and what ethernet was trying to accomplish. Even the earliest version of ethernet that used transformer coupling runs at 10 Mbit/s. This means, at the very least, the overall channel has to support 10 MHz digital signals, although in practice with the encoding scheme used it actually needs twice that. Even a 10 MHz square wave has levels lasting only 50 ns. That is very fast for opto-couplers. There are light transmission means that go much much faster than that, but they are not cheap or simple at each end like the ethernet pulse transformers are. One disadvantage of transformer coupling is that DC is lost. That's actually not that hard to deal with. You make sure all information is carried by modulation fast enough to make it thru the transformers. If you look at the ethernet signalling, you will see how this was considered. There are nice advantages to transformers too, like very good common mode rejection. A transformer only "sees" the voltage across its windings, not the common voltage both ends of the winding are driven to simultaneously. You get a differential front end without a deliberate circuit, just basic physics. Once transformer coupling was decided on, it was easy to specify a high isolation voltage without creating much of a burden. Making a transformer that insulates the primary and secondary by a few 100 V pretty much happens unless you try not to. Making it good to 1000 V isn't much harder or much more expensive. Given that, ethernet can be used to communicate between two nodes actively driven to significantly different voltages, not just to deal with a few volts of ground offset. For example, it is perfectly fine and within the standard to have one node riding on a power line phase with the other referenced to the neutral.
{ "source": [ "https://electronics.stackexchange.com/questions/27756", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8548/" ] }
27,763
I have a project that I think would be best suited for an ATMega328P. However, in every simple project I've seen, people always hook up a 16MHz external oscillator. From what I can see, it should have an 8MHz internal oscillator. My project doesn't require a lot of processing power, nor does timing need to be very precise (other than for a UART and I2C). I have a programmer also, so I don't need to worry about bootloaders. Is there any reason for me to use an external oscillator?
What you don't say is what the accuracy of this internal oscillator is. It took me some time to find it in the datasheet, on page 369. 10%. Ten percent! And that for a calibrated oscillator? This is awful. It isn't unreasonable to expect error as low as 1% for this. Microchip/Atmel provides a document for calibrating the oscillator yourself to 1% accuracy. I2C is a synchronous protocol, and timing accuracy isn't relevant as long as minimum and maximum pulse times are respected. UART on the other hand is asynchronous, and there timing accuracy is important indeed. Most UARTs allow a half bit error in the last bit (the stop bit), so that's 5% for a 10 bit transmission. The factory calibrated oscillator won't do here. You'll have to go through the calibration procedure to get to 1%. In that case you can use the internal oscillator. Otherwise you'll have to use a crystal.
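If you want to see why 10% is fatal for a UART but 1% is fine, here's a small C sketch of the worst-case drift at the stop-bit sampling point. It assumes the receiver samples each bit in the middle and that all the frequency error is on one side, which is the pessimistic case:

#include <stdio.h>

int main(void)
{
    /* Sampling point of the stop bit, in bit times from the start edge:
       start bit + 8 data bits + half of the stop bit. */
    double sample_point = 9.5;
    double osc_error[] = {0.10, 0.02, 0.01};  /* 10%, 2%, 1% */

    for (int i = 0; i < 3; i++) {
        double drift = sample_point * osc_error[i];  /* in bit times */
        printf("osc error %4.1f%% -> drift at stop bit = %.2f bit times (%s)\n",
               osc_error[i] * 100.0, drift,
               drift < 0.5 ? "OK" : "framing errors likely");
    }
    return 0;
}

At 10% the sampling point has drifted almost a whole bit time by the stop bit, so the factory calibration really won't do.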
{ "source": [ "https://electronics.stackexchange.com/questions/27763", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/708/" ] }
27,931
I know what torque is, but I find it difficult to understand what "Torque: 3kgcm" means. I am not sure how much weight that motor can carry, and I want to know how I can calculate that. Please give me some hints :)
Torque is a measure of "twisting force". Power is a measure of twisting force x speed. Torque is usually expressed as a Force x a distance. So for the same Torque if you double the distance you halve the force to get the same answer. So kg.cm is kg force x centimetre distance. In fact kg is a unit of mass and not of force, BUT kg is sloppily used as a unit of force in many cases. Other torque units include foot-pound, Newton-metre, dyne-centimeter (!) ... In your case 3 kg.cm means that a "force" of 3 kg acting at a radius of 1 cm would produce the same amount of torque as your motor. Equally that could be 0.1 kg x 30 cm, or 10 kg x 0.3 cm or ... FWIW - kg is a unit of mass and the Newton is the corresponding unit of force, where the "weight" of 1 kg = g Newtons, with g = 9.8 m/s/s. Close enough to g = 10 here, so 1 kg weighs 10 Newtons. BUT the pound IS in fact a unit of force. The corresponding unit of mass is the Slug, where 1 Slug weighs ~32 pounds force. You will not find people selling vegetables by the slug, or by the Newton :-). A Newton glass of beer is about 4 ounces. A useful approximation: Power in Watts ~= kg.m torque x RPM. This is just happenstance as various constants cancel almost exactly, but it is extremely useful. Accurate to about 1%. So in your case 3 kg.cm = 0.03 kg.m. So the power that your motor makes at a given RPM at this torque is Power = 0.03 x RPM Watts, ie about 30 Watts at 1000 RPM at 3 kg.cm torque. I have spent many long hours playing with dynamometers while developing alternator brakes and controllers to act as loads for exercise equipment. The approximation Watts = kg.m x RPM was a useful approximation to remember.
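If it helps, here is a small C sketch of the same arithmetic: converting the 3 kg.cm rating and playing with radii. The radii are just examples:

#include <stdio.h>

int main(void)
{
    double torque_kgcm = 3.0;                  /* the motor's rating */
    double torque_kgm  = torque_kgcm / 100.0;  /* = 0.03 kg.m        */

    /* Load the motor can just hold at the end of an arm of radius r. */
    double radius_cm[] = {0.5, 1.0, 3.0, 30.0};
    for (int i = 0; i < 4; i++)
        printf("radius %5.1f cm -> max load %5.2f kg\n",
               radius_cm[i], torque_kgcm / radius_cm[i]);

    /* The handy approximation: Power in Watts ~= kg.m torque x RPM. */
    printf("power at 1000 RPM ~= %.0f W\n", torque_kgm * 1000.0);
    return 0;
}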
{ "source": [ "https://electronics.stackexchange.com/questions/27931", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8396/" ] }
28,053
While reading a datasheet for an IC I came across the pin voltages being presented as 3V3 or 1V8. What does this representation stand for?
That's the new politically correct way of writing numbers that would normally have decimal points. Some parts of the world (Germany for example) use a comma to separate the integer and fraction digits. To avoid ambiguity in international situations, some people now put the letter for the units where the decimal point should be. So "3V3" really means 3.3 Volts and "1V8" means 1.8 Volts. If your audience is English speaking or it is obvious the document or the context is in English, then you are fine using a decimal point normally. After all, using a decimal point is part of the language no less than the words used to describe other things, so this is not ambiguous. In rare cases when numbers are by themselves without a language context, then it's probably best to use the "3V3" type notation. Otherwise, I personally find this notation rather annoying since I have to look at it and think about it rather than the brain parsing it without much conscious thought. As with most things PC, it's about choosing which group of people to piss off.
{ "source": [ "https://electronics.stackexchange.com/questions/28053", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8647/" ] }
28,078
I'm curious. I think I'm confused because I don't really understand the way ionizing radiation works. So the Wifi radiation passes through my body, but has no ionizing effects within my body?
Ionizing radiation is a little complicated, so stay with me as I try to explain it in an easy way... When talking about ionizing radiation, scientists talk about energy levels. But this refers to the energy level of the photon of electro-magnetic energy, not the quantity of photons. All electromagnetic energy (radio waves, light, x-rays, etc.) can be thought of as either a wave or a particle. The shorter the wavelength, the higher the energy. So when scientists talk about the energy level of ionizing radiation they are talking about the wavelength. Here's a wiki page showing the electromagnetic spectrum. Only the higher energy waves are ionizing. Specifically, stuff in the UltraViolet and above (X-Rays and Gamma Rays). Stuff in the visible spectrum and below (including radio waves and microwaves) are non-ionizing. Wifi signals, which are in the 2.4 to 5.something GHz range are not ionizing. I should point out that if something isn't ionizing then simply having more of it (at the same frequency/wavelength) is not going to make it ionizing. It doesn't work that way. Non-ionizing radiation can have an effect on your body, however. It can cause heating. A microwave oven, for example, operates near 2.4 GHz and obviously heats up food. But a microwave does not ionize food. But let's put all of this into perspective. A typical WiFi device can output about 0.1 watts of energy. A typical LED flashlight will output about 1 watt of light. They are both non-ionizing energy and will have a similar heating effect. The main difference is that the flashlight will heat you 10 times faster and in a more concentrated spot on your body. Yet you wouldn't think twice about shining a flashlight on your hand-- and you shouldn't worry about it. On the equator at noon the sun puts out approximately 1,000 watts of energy per square meter of ground. The vast majority of this is non-ionizing (the UV part is ionizing). This is about 1,000 times more "radiation" than the LED Flashlight, and 10,000 times more than the WiFi signal. You run more risk going outside than sitting in your house playing on the iPad. Even so, just put on some sunscreen and enjoy the outdoors! Some electro-magnetic radiation will pass through your body. The higher wavelengths and lower wavelengths in particular will pass through more easily. But passing through means that their energy did not interact with your body. It's the stuff that doesn't pass through that you're interested in. Even so, what I said above assumes that 100% of the energy gets trapped in your body and it still isn't an issue. Conclusion: A WiFi signal is non-ionizing and is thousands of times less energy than going outside in the sun. Don't worry about it. It's not going to harm you.
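You can check the physics yourself: the energy of a single photon is E = h x f, and ionization needs very roughly 10 eV or more per photon (hydrogen takes 13.6 eV). A quick C sketch, with the frequencies being typical round numbers:

#include <stdio.h>

int main(void)
{
    const double h  = 6.626e-34;  /* Planck constant, J.s     */
    const double eV = 1.602e-19;  /* joules per electron-volt */

    struct { const char *name; double f; } src[] = {
        {"WiFi (2.4 GHz)", 2.4e9},
        {"Green light",    5.5e14},
        {"UV-C",           1.1e15},
    };

    for (int i = 0; i < 3; i++)
        printf("%-15s photon energy = %.3g eV\n",
               src[i].name, h * src[i].f / eV);
    return 0;
}

A WiFi photon comes out around 0.00001 eV, roughly a million times too weak to ionize anything, and turning the transmit power up only adds more of the same weak photons.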
{ "source": [ "https://electronics.stackexchange.com/questions/28078", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8659/" ] }
28,091
I am reading the datasheet of an ARM Cortex chip, specifically the GPIO chapter. Ultimately, I want to configure various GPIO pins to use them in "Alternate Function" mode for read/write access to SRAM. Of all the GPIO registers available, I do not understand two: GPIO_PUPDR and GPIO_OTYPER, which are respectively the "pull-up/pull-down register" and the "output type register". For GPIO_PUPDR I have three choices: No pull-up or pull-down Pull-up Pull-down For GPIO_OTYPER I have two choices: Output push-pull Output open-drain What is the difference between all the different configurations, and which would be the most appropriate for SRAM communication? The documentation for the board I am working on is available here (see page 24 for the SRAM schematics). The reference manual for the ARM chip is available here (see pages 145 and 146 for the GPIO registers).
This answer is general to processors and peripherals, and has an SRAM specific comment at the end, which is probably pertinent to your specific RAM and CPU. Output pins can be driven in three different modes: open drain - a transistor connects to low and nothing else open drain, with pull-up - a transistor connects to low, and a resistor connects to high push-pull - a transistor connects to high, and a transistor connects to low (only one is operated at a time) Input pins can be a gate input with a: pull-up - a resistor connected to high pull-down - a resistor connected to low pull-up and pull-down - both a resistor connected to high and a resistor connected to low (only useful in rare cases). There is also a Schmitt triggered input mode where the input pin is pulled with a weak pull-up to an initial state. When left alone it persists in its state, but may be pulled to a new state with minimal effort. Open drain is useful when multiple gates or pins are connected together with an (external or internal) pull-up. If all the pins are high, they are all open circuits and the pull-up drives the pins high. If any pin is low they all go low, as they are all tied together. This configuration effectively forms an AND gate. _____________________________ Note added November 2019 - 7+ years on: The configuration of combining multiple open collector/drain outputs has traditionally been referred to as a "Wired OR" configuration. CALLING it an OR (even traditionally) does not make it one. If you use negative logic (which traditionally may have been the case) things will be different, but in the following I'll stick to the positive logic convention, which is what is used as of right unless specifically stated. The above comment about forming an 'AND' gate has been queried a number of times over the years - and it has been suggested that the result is 'really' an 'OR' gate. It's complex. The simple picture is that if several open collector outputs are connected together then if any one of the open collector transistors is turned on then the common output will be low. For the common output to be high all outputs must be off. If you consider combining 3 outputs - for the result to be high all 3 would need to have been high individually. 111 -> 1. That's an 'AND'. If you consider each of the output stages as an inverter then for each one to have a high output its input must be low. So to get a combined high output you need all three inputs low: 000 -> 1. That's a 'NOR'. Some have suggested that this is an OR - any of X, Y, Z being a 1 gives a 1 -> 1. I can't really "force" that idea onto the situation. _________________________________ When driving an SRAM you probably want to drive either the data lines or the address lines high or low as solidly and rapidly as possible, so active up and down drive is needed, and push-pull is indicated. In some cases with multiple RAMs you may want to do something clever and combine lines, where another mode may be more suitable. For data inputs from the SRAM, if the RAM IC is always asserting data then a pin with no pull-up is probably OK, as the RAM always sets the level and this minimises load. If the RAM data lines are sometimes open circuit or tristate you will need the input pins to be able to set their own valid state. In very high speed communications you may want to use a pull-up and a pull-down so the parallel effective resistance is the terminating resistance, and the bus idle voltage is set by the two resistors, but this is somewhat specialist.
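For reference, here is roughly what that configuration looks like in CMSIS-style C for an STM32F2/F4-class part (which is what those register names suggest). The port, pin, and alternate function number are placeholders — AF12 is the FSMC alternate function on the STM32F2/F4, but check everything against your own datasheet; this is only a sketch of the register mechanics:

#include "stm32f2xx.h"  /* CMSIS device header; adjust for your part */

/* Configure PD0 as a push-pull, no-pull alternate-function pin,
   as you would for an FSMC data or address line. Shifts are for
   pin 0; pin n uses 2n (or 4n in AFR). */
void sram_pin_setup(void)
{
    RCC->AHB1ENR |= RCC_AHB1ENR_GPIODEN;                  /* clock the port      */

    GPIOD->MODER   = (GPIOD->MODER & ~(3u << 0)) | (2u << 0); /* 10: alt func    */
    GPIOD->OTYPER &= ~(1u << 0);                          /* 0: push-pull        */
    GPIOD->PUPDR  &= ~(3u << 0);                          /* 00: no pull-up/down */
    GPIOD->OSPEEDR |= (3u << 0);                          /* high speed          */
    GPIOD->AFR[0]  = (GPIOD->AFR[0] & ~(0xFu << 0)) | (12u << 0); /* AF12 = FSMC */
}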
{ "source": [ "https://electronics.stackexchange.com/questions/28091", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5872/" ] }
28,251
There are a lot of poorly drawn schematics here. A few times people have actually asked for critiques of their schematics. This question is intended as a single repository on schematic drawing rules and guidelines can point people to. The question is What are the rules and guidelines for drawing good schematics? Note: This is about schematics themselves, not about the circuits they represent.
A schematic is a visual representation of a circuit. As such, its purpose is to communicate a circuit to someone else. A schematic in a special computer program for that purpose is also a machine-readable description of the circuit. This use is easy to judge in absolute terms. Either the proper formal rules for describing the circuit are followed and the circuit is correctly defined or it isn't. Since there are hard rules for that and the result can be judged by machine, this isn't the point of the discussion here. This discussion is about rules, guidelines, and suggestions for good schematics for the first purpose, which is to communicate a circuit to a human. Good and bad will be judged here in that context. Since a schematic is to communicate information, a good schematic does this quickly, clearly, and with a low chance of misunderstanding. It is necessary but far from sufficient for a schematic to be correct. If a schematic is likely to mislead a human observer, it is a bad schematic whether you can eventually show that after due deciphering it was in fact correct. The point is clarity . A technically correct but obfuscated schematic is still a bad schematic. Some people have their own silly-ass opinions, but here are the rules (actually, you'll probably notice broad agreement between experienced people on most of the important points): Use component designators This is pretty much automatic with any schematic capture program, but we still often see schematics here without them. If you draw your schematic on a napkin and then scan it, make sure to add component designators. These make the circuit much easier to talk about. I have skipped over questions when schematics didn't have component designators because I didn't feel like bothering with the second 10 kΩ resistor from the left by the top pushbutton . It's a lot easier to say R1, R5, Q7, etc. Clean up text placement Schematic programs generally plunk down part names and values based on a generic part definition. This means they often end up in inconvenient places in the schematic when other parts are placed nearby. Fix it. That's part of the job of drawing a schematic. Some schematic capture programs make this easier than others. In Eagle for example, unfortunately, there can only be one symbol for a part. Some parts are commonly placed in different orientations, horizontal and vertical in the case of resistors for example. Diodes can be placed in at least 4 orientations since they have direction too. The placement of text around a part, like the component designator and value, probably won't work in other orientations than it was originally drawn in. If you rotate a stock part, move the text around afterward so that it is easily readable, clearly belongs to that part, and doesn't collide with other parts of the drawing. Vertical text looks stupid and makes the schematic hard to read. I make separate redundant parts in Eagle that differ only in the symbol orientation and therefore the text placement. That's more work upfront but makes it easier when drawing a schematic. However, it doesn't matter how you achieve a neat and clear end result, only that you do. There is no excuse. Sometimes we hear whines like " But CircuitBarf 0.1 doesn't let me do that" . So get something that does. Besides, CircuitBarf 0.1 probably does let you do it, just that you were too lazy to read the manual to learn how and too sloppy to care. Draw it (neatly!) on paper and scan it if you have to. Again, there is no excuse. 
For example, here are some parts at different orientations. Note how the text is in different places relative to parts to make things neat and clear. Don't let this happen to you: Yes, this is actually a small snippet of what someone dumped on us here. Basic layout and flow In general, it is good to put higher voltages towards the top, lower voltages towards the bottom and logical flow left to right. That's clearly not possible all the time, but at least a generally higher level effort to do this will greatly illuminate the circuit to those reading your schematic. One notable exception to this is feedback signals. By their very nature, they feed "back" from downstream to upstream, so they should be shown sending information opposite of the main flow. Power connections should go up to positive voltages and down to negative voltages. Don't do this: There wasn't room to show the line going down to ground because other stuff was already there. Move it. You made the mess, you can unmake it. There is always a way. Following these rules causes common subcircuits to be drawn similarly most of the time. Once you get more experience looking at schematics, these will pop out at you and you will appreciate this. If stuff is drawn every which way, then these common circuits will look visually different every time and it will take others longer to understand your schematic. What's this mess, for example? After some deciphering, you realize "Oh, it's a common emitter amplifier. Why didn't that #%&^$@#$% just draw it like one in the first place!?" : Draw pins according to function Show pins of ICs in a position relevant to their function, NOT HOW THEY HAPPEN TO STICK OUT OF THE CHIP. Try to put positive power pins at the top, negative power pins (usually grounds) at the bottom, inputs at left, and outputs at right. Note that this fits with the general schematic layout as described above. Of course, this isn't always reasonable and possible. General-purpose parts like microcontrollers and FPGAs have pins that can be input and output depending on use and can even vary at run time. At least you can put the dedicated power and ground pins at top and bottom, and possibly group together any closely related pins with dedicated functions, like crystal driver connections. ICs with pins in physical pin order are difficult to understand. Some people use the excuse that this aids in debugging, but with a little thought you can see that's not true. When you want to look at something with a scope, which question is more common "I want to look at the clock, what pin is that?" or "I want to look at pin 5, what function is that?" . In some rare cases, you might want to go around a IC and look at all the pins, but the first question is by far more common. Physical pin order layouts obfuscate the circuit and make debugging more difficult. Don't do it. Direct connections, within reason Spend some time with placement reducing wire crossings and the like. The recurring theme here is clarity . Of course, drawing a direct connection line isn't always possible or reasonable. Obviously, it can't be done with multiple sheets, and a messy rats nest of wires is worse than a few carefully chosen "air wires". It is impossible to come up with a universal rule here, but if you constantly think of the mythical person looking over your shoulder trying to understand the circuit from the schematic you are drawing, you'll probably do alright. 
You should be trying to help people understand the circuit easily, not make them figure it out despite the schematic. Design for regular size paper The days of electrical engineers having drafting tables and being set up to work with D size drawings are long gone. Most people only have access to regular page-size printers, like for 8 1/2 x 11-inch paper here in the US. The exact size is a little different all around the world, but they are all roughly what you can easily hold in front of you or place on your desk. There is a reason this size evolved as a standard. Handling larger paper is a hassle. There isn't room on the desk, it ends up overlapping the keyboard, pushes things off your desk when you move it, etc. The point is to design your schematic so that individual sheets are nicely readable on a single normal page, and on the screen at about the same size. Currently, the largest common screen size is 1920 x 1080. Having to scroll a page at that resolution to see necessary detail is annoying. If that means using more pages, go ahead. You can flip pages back and forth with a single button press in Acrobat Reader. Flipping pages is preferable to panning a large drawing or dealing with outsized paper. I also find that one normal page at reasonable detail is a good size to show a subcircuit. Think of pages in schematics like paragraphs in a narrative. Breaking a schematic into individually labeled sections by pages can actually help readability if done right. For example, you might have a page for the power input section, the immediate microcontroller connections, the analog inputs, the H bridge drive power outputs, the ethernet interface, etc. It's actually useful to break up the schematic this way even if it had nothing to do with drawing size. Here is a small section of a schematic I received. This is from a screenshot displaying a single page of the schematic maximized in Acrobat Reader on a 1920 x 1200 screen. In this case, I was being paid in part to look at this schematic so I put up with it, although I probably used more time and therefore charged the customer more money than if the schematic had been easier to work with. If this was from someone looking for free help like on this web site, I would have thought to myself "screw this" and gone on to answer someone else's question. Label key nets Schematic capture programs generally let you give nets nicely readable names. All nets probably have names inside the software, just that they default to some gobbledygook unless you explicitly set them. If a net is broken up into visually unconnected segments, then you absolutely have to let people know the two seemingly disconnected nets are really the same. Different packages have different built-in ways to show that. Use whatever works with the software you have, but in any case, give the net a name and show that name at each separately drawn segment. Think of that as the lowest common denominator or using "air wires" in a schematic. If your software supports it and you think it helps with clarity, by all means, use little "jump point" markers or whatever. Sometimes these even give you the sheet and coordinates of one or more corresponding jump points. That's all great but label any such net anyway. The important point is that the little name strings for these nets are derived automatically from the internal net name by the software. Never draw them manually as arbitrary text that the software doesn't understand as the net name. 
If separate sections of the net ever get disconnected or separately renamed by accident, the software will automatically show this since the name shown comes from the actual net name, not something you type in separately. This is a lot like a variable in a computer language. You know that multiple uses of the variable symbol refer to the same variable. Another good reason for net names is short comments. I sometimes name and then show the names of nets only to give a quick idea what the purpose of that net is. For example, seeing that a net is called "5V" or "MISO" could help a lot in understanding the circuit. Many short nets don't need a name or clarification, and adding names would hurt more due to clutter than they would illuminate. Again, the whole point is clarity. Show a meaningful net name when it helps in understanding the circuit, and don't when it would be more distracting than useful. Keep names reasonably short Just because your software lets you enter 32 or 64 character net names doesn't mean you should. Again, the point is about clarity. No names is no information, but lots of long names are clutter, which then decreases clarity. Somewhere in between is a good tradeoff. Don't get silly and write "8 MHz clock to my PIC", when simply "CLOCK", "CLK", or "8MHZ" would convey the same information. See this ANSI/IEEE standard for recommended pin name abbreviations. Upper case symbol names Use all caps for net names and pin names. Pin names are almost always shown upper case in datasheets and schematics. Various schematic programs, Eagle included, don't even allow for lower case names. One advantage of this, which is also helped when the names aren't too long, is that they stick out in the regular text. If you do write real comments in the schematic, always write them in mixed case but make sure to upper case symbol names to make it clear they are symbol names and not part of your narrative. For example, "The input signal TEST1 goes high to turn on Q1, which resets the processor by driving MCLR low." . In this case, it is obvious that TEST1, Q1, and MCLR refer to names in the schematic and aren't part of the words you are using in the description. Show decoupling caps by the part Decoupling caps must be physically close to the part they are decoupling due to their purpose and basic physics. Show them that way. Sometimes I've seen schematics with a bunch of decoupling caps off in a corner. Of course, these can be placed anywhere in the layout, but by placing them by their IC you at least show the intent of each cap. This makes it much easier to see that proper decoupling was at least thought about, more likely a mistake is caught in a design review, and more likely the cap actually ends up where intended when the layout is done. Dots connect, crosses don't Draw a dot at every junction. That's the convention. Don't be lazy. Any competent software will enforce this anyway, but surprisingly we still see schematics without junction dots here occasionally. It's a rule. We don't care whether you think it's silly or not. That's how it's done. Sort of related, try to keep junctions to Ts, not 4-way crosses. This isn't as hard a rule, but stuff happens. With two lines crossing, one vertical the other horizontal, the only way to know whether they are connected is whether the little junction dot is present. 
In past days when schematics were routinely photocopied or otherwise optically reproduced, junction dots could disappear after a few generations, or could sometimes even appear at crosses when they weren't there originally. This is less important now that schematics are generally in a computer, but it's not a bad idea to be extra careful. The way to do that is to never have a 4-way junction. If two lines cross, then they are never connected, even if after some reproduction or compression artifacts it looks like there maybe is a dot there. Ideally connections or crossovers would be unambiguous without junction dots, but in reality, you want as little chance of misunderstanding as possible. Make all junctions Ts with dots, and all crossing lines are therefore different nets without dots. Look back and you can see the point of all these rules is to make it as easy as possible for someone else to understand the circuit from the schematic, and to maximize the chance that understanding is correct. Good schematics show you the circuit. Bad schematics make you decipher them. There is another human point to this too. A sloppy schematic shows lack of attention to detail and is irritating and insulting to anyone you ask to look at it. Think about it. It says to others "Your aggravation with this schematic isn't worth my time to clean it up" which is basically saying "I'm more important than you are" . That's not a smart thing to say in many cases, like when you are asking for free help here, showing your schematic to a customer, teacher, etc. Neatness and presentation count. A lot. You are judged by your presentation quality every time you present something, whether you think that's how it should be or not. In most cases, people won't bother to tell you either. They'll just go on to answer a different question, not look for some good points that might make the grade one notch higher, or hire someone else, etc. When you give someone a sloppy schematic (or any other sloppy work from you), the first thing they're going to think is "What a jerk" . Everything else they think of you and your work will be colored by that initial impression. Don't be that loser.
{ "source": [ "https://electronics.stackexchange.com/questions/28251", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4512/" ] }
28,393
I've researched and it says that resistors limit the current flowing through the LED. But this statement confuses me because we know that in a series circuit, the current is constant at every point, so how come a resistor can limit the current flowing?
LEDs have a fairly constant voltage across them, like 2.2V for a red LED, which only slightly rises with current. If you supply 3V to this LED without a series resistor the LED will try to settle at a voltage/current combination for this 3V. There's no current that goes with this kind of voltage; theoretically it would be 10s, maybe 100s of amperes, which would destroy the LED. And that's exactly what happens if your power supply can supply enough current. So the solution is a series resistor. If your LED needs 20mA you can calculate for the red LED in the example \$ R = \dfrac{\Delta V}{I} = \dfrac{3V - 2.2V}{20mA} = 40 \Omega\$ You may think that supplying 2.2V directly will also work, but that's not true. The slightest difference in LED or supply voltage may cause the LED to light very dim, very bright, or even be destroyed. A series resistor will ensure that slight differences in voltage have only a minor effect on the LED's current, provided that the voltage drop across the resistor is large enough.
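A tiny C sketch of the same calculation, which also checks the resistor's power dissipation (worth knowing as a habit, even though at these levels any resistor will do):

#include <stdio.h>

int main(void)
{
    double v_supply = 3.0;    /* supply voltage         */
    double v_led    = 2.2;    /* red LED forward drop   */
    double i_led    = 0.020;  /* desired current, 20 mA */

    double r = (v_supply - v_led) / i_led;  /* Ohm's law               */
    double p = (v_supply - v_led) * i_led;  /* dissipated in resistor  */

    printf("series resistor: %.0f ohm, dissipating %.0f mW\n", r, p * 1e3);
    return 0;
}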
{ "source": [ "https://electronics.stackexchange.com/questions/28393", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8769/" ] }
28,404
I want to connect the output from the audio jack of an iPhone to an Arduino. What voltage range can I expect to see on the audio lines from the iPhone? I assume that turning the volume up on the phone will produce a large AC voltage, but how large does it go up to? I want to make sure that it wont exceed the voltage level that an Arduino can read on its input pins. Will I need to provide any circuitry between the iPhone and the Arduino?
Commercial line out specification is to be able to drive 1 milliwatt to a 600 ohm load. For a sine wave, this means a voltage of 0.77 volts RMS (2.2 volts peak-to-peak) and a current of 1.3 milliamperes RMS (3.6 milliamperes peak-to-peak).
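In case it isn't obvious where those figures come from, they follow directly from the 1 mW into 600 ohm specification:

\$ V_{RMS} = \sqrt{P \cdot R} = \sqrt{1mW \cdot 600\Omega} \approx 0.77V \$

\$ V_{PP} = 2\sqrt{2} \cdot V_{RMS} \approx 2.2V \$

\$ I_{RMS} = \dfrac{V_{RMS}}{R} = \dfrac{0.77V}{600\Omega} \approx 1.3mA \$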
{ "source": [ "https://electronics.stackexchange.com/questions/28404", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5222/" ] }
28,546
What I understood from the definition of current sources is that it is a source which supplies a constant current across a load no matter how the other parameters (like resistances for example) in the circuit are changed. Am I right? If I'm right, what is an example of a current source used in a practical circuit? Wikipedia gave the example of a Van de Graaff generator as a constant current source. (I didn't read the article, because there was a note that the section appeared to contradict itself. I didn't want to get confused.) I can think of voltage sources - for example a battery which has a constant potential difference across its ends irrespective of the changes in the circuit it is connected to, but I cannot think of a current source. Any example I can think of involves change in the current when the resistances are changed.
A current source is the dual of a voltage source. An ideal voltage source has zero output impedance, so that the voltage doesn't drop under load. It shouldn't be shorted, because in theory there would flow an infinite current. An ideal current source has infinite output impedance. This means that the load's impedance is negligible and won't influence the current flowing. Like voltage sources shouldn't be shorted, current sources shouldn't be left open. An open current source will still try to source the set current, and the theoretical current source will go to infinite voltage. edit (following your comment) Here you can read impedance as resistance. If the current source would have a limited resistance changes in load would change the current, because the total resistance would change. You don't want that. So if the current source's resistance is infinite the load can be ignored and the resistance always remains the same (infinite). Therefore the current will as well. A practical current source may be constructed as follows: One diode has the same voltage drop as the base-emitter junction, so the other diode sets the transistor's emitter to about 0.7V. A fixed voltage across a fixed resistor gives a fixed emitter current, which is about the same as the collector current if the transistor's \$H_{FE}\$ is high enough. (Strictly speaking this is a current sink rather than a current source, but the principle remains the same.) Another current sink uses an opamp as control element: The main thing you need to know about opamps in this configuration is that they will try to keep the voltage on both inputs equal. So suppose you set \$V_{SET}\$ to 1V, then the opamp will try to make the - input also 1V. It does so by inserting current into the transistor's base. This will cause a current through the load \$I_{LOAD}\$ which is (almost) equal to \$I_{SET}\$. And \$I_{SET}\$ is constant to get the 1V across \$R_{SET}\$, according to Ohm's Law: \$ I_{SET} = \dfrac{V_{SET}}{R_{SET}} \$ Since \$V_{SET}\$ and \$R_{SET}\$ are constant, so will \$I_{SET}\$ be. QED.
{ "source": [ "https://electronics.stackexchange.com/questions/28546", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/7364/" ] }
28,628
In this answer I read that microcontrollers usually don't have DACs, while they do have ADCs. Why is that? edit I appreciate that integrating resistors like in an R-2R DAC is expensive in terms of real-estate (thanks Mike, for your answer), but I thought switched current DACs can be made very small since they only need a handful of transistors.
First, some microcontrollers DO have D/A converters. However, these are far less common than A/D converters. Aside from the technical issues, the main reason is market demand. Think about it. What kind of application would require a real D/A? It is quite rare to want a micro to produce a reasonably high speed analog signal unless the point is signal processing. The main market for that however is audio, and that needs a lot more resolution than you can build with the same process used to make the digital microcontroller. So audio will use external A/Ds and D/As anyway. DSPs that are intended for such applications have communication hardware built in to talk to such external devices, like I2S. Otherwise for ordinary control applications, the strategy is to convert to digital as early in the process as possible and then keep things digital. This argues for A/Ds, but D/As are useless since you don't want to go back to analog. Things that microcontrollers typically control are controlled with PWM (Pulse Width Modulation). Switching power supplies and class D audio inherently work on pulses. Motor control, solenoid control, etc., is all done with pulses for efficiency. You want the pass element to be either fully on or fully off because an ideal switch can't dissipate any power. In large systems or where input power is scarce or expensive (like battery operation), the efficiency of switching systems is important. In a lot of in-between cases the total power used isn't the issue, but getting rid of wasted power as heat is. A switching circuit that dissipates 1 W instead of 10 W may cost a little more in electronic parts than the 10 W linear circuit, but is a lot cheaper overall because you don't need a heat sink with associated size and weight, possibly forced air cooling, etc. Switching techniques also are usually tolerant of a wider input voltage range. Note that PWM outputs, which are very common in microcontrollers, can be used to make analog signals in the unusual cases where you need them. Low pass filtering a PWM output is the easiest and nicest way to make an analog signal from a micro as long as you have sufficient resolution*speed product. Filtered PWM outputs are nicely monotonic and highly linear, and the resolution versus speed tradeoff can be useful. Did you have anything specific in mind you wished a micro had a D/A converter for? Chances are this can be solved with low pass filtered PWM or would need an external D/A for higher resolution*speed anyway. The gap between filtered PWM and external is pretty narrow, and the type of applications that actually need such a signal is also narrow.
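To put a number on the filtered-PWM idea, here is a quick C sketch of a single-pole RC filter on a PWM output. The component values are just an example:

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.14159265358979;
    double f_pwm = 20000.0;  /* PWM frequency, Hz     */
    double r     = 10e3;     /* filter resistor, ohms */
    double c     = 1e-6;     /* filter cap, farads    */

    double fc  = 1.0 / (2.0 * PI * r * c);  /* cutoff, ~16 Hz here */
    double att = 1.0 / sqrt(1.0 + (f_pwm / fc) * (f_pwm / fc));

    printf("cutoff frequency: %.1f Hz\n", fc);
    printf("carrier ripple reduced to %.3f%% of the PWM amplitude\n",
           att * 100.0);
    return 0;
}

The tradeoff is exactly the resolution*speed product mentioned above: the more you filter, the cleaner the DC level but the slower the analog output can change.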
{ "source": [ "https://electronics.stackexchange.com/questions/28628", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3920/" ] }
28,634
Maybe the underlying question is what the Voltage-Current curve looks like. Can I drive it from a voltage source (like you drive a heater) or from a current source (like you drive an LED)? Or even different than those two options? ADDITIONAL1: Say (hypothetically) I have two commercially available identical Peltiers, they are spec'd 6V/3A. Can I connect these in series to a 12Vdc power supply without any worries? CONCLUSION1: The current/voltage load curve is reasonably linear; driving from either a current or a voltage source will do fine as long as the device is operated within its spec. (Olin Lathrop, Russell McMahon) CONCLUSION2: Don't drive a Peltier from PWM; power loss due to current increase grows more rapidly than cooling power. (Olin Lathrop) CONCLUSION3: Beware the mechanical wear of the device with continuous cycling. E.g. don't use a thermostat on/off controller. (Russell McMahon)
Peltier devices work on current, but usually have significant enough resistance so that voltage control is possible. Peltier devices are one of the few things you do not want to run with pulses, particularly in cooling applications. The cooling effect is proportional to current, but the internal heating due to \$I^2R\$ losses is proportional to the square of the current. Starting at 0, increasing current causes increasing cooling. However, at some point the resistive heating due to more current outweighs the additional cooling power of the higher current. Therefore, more current beyond this actually causes less overall cooling. The maximum cooling current is one of the parameters that should be supplied by the manufacturer. While maximum cooling occurs at some specified current, efficiency steadily decreases with increasing current. Therefore you don't want to PWM a Peltier cooler between 0 and the maximum cooling current. Driving it at the steady current to produce the same overall cooling is more efficient. Of course the microcontroller regulating the temperature will still produce PWM pulses. These pulses need to be filtered so that the Peltier device sees relatively smooth current. The general rule of thumb is to try to keep the ripple below 10% of nominal, but of course that is just a tradeoff someone picked. Fortunately, this is usually not a difficult requirement to design to.
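A simplified standard model makes the optimum current explicit (this is a sketch of the usual first-order thermoelectric equations, not from any particular datasheet). If \$S\$ is the module's Seebeck coefficient, \$R\$ its electrical resistance, \$K\$ its thermal conductance, and \$T_c\$ the cold-side temperature, the net heat pumped from the cold side is roughly

\$ Q_{cool} = S T_c I - \dfrac{1}{2}I^2R - K\Delta T \$

Setting the derivative with respect to \$I\$ to zero gives the maximum-cooling current

\$ I_{opt} = \dfrac{S T_c}{R} \$

Above that, the \$I^2R\$ term wins and additional current heats more than it cools, which is exactly the behavior described above.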
{ "source": [ "https://electronics.stackexchange.com/questions/28634", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8627/" ] }
28,644
I've got the chip, so how would I add: a clock, RAM, a hard drive (maybe EEPROM?), a screen (LCD graphical screen?), an input method (keyboard, mouse)?
Don't listen to the others saying that the z80 is too old or too hard. The z80 was designed for this task. It's the oldest continually produced CPU around for a reason: it's easy to build computer systems with it. It's an excellent choice for your project. There are some great books like "z80 microcomputer design projects" and "the z80 handbook" that will really help you out. Also, look at z80.info, they have a ton of information you'll want. Your design goals are realistic. The hardest part will be the LCD screen, assuming you want to drive a VGA or NTSC display. But even that, once you get into it, is not that hard. That'll be a recurring theme you'll encounter in this project: things are much easier than you expected. Early microcomputers were remarkably simple machines, and expecting you can duplicate them to some degree in 2012 is a very realistic goal. Aside from the custom sound and video chips, the rest of the machine is still available as off the shelf parts and easily understandable even as a newbie. The simplest usable z80 system will have the z80 CPU, some flash memory or EEPROM (which you can get for free from old motherboards), RAM, and a UART for serial communication (plus a MAX232 for level shifting). All of this is available at any electronics distributor, comes in through-hole packages, and can be built on a breadboard. The only special equipment you'll need is the flash/EEPROM programmer (which I built myself from an Arduino). Oh, and a few other things like some 74 series logic chips for address decoding, a reset circuit, etc., and a crystal oscillator. Alternatively, you can replace the UART with a z80 PIO chip to communicate with a modern parallel mode LCD character display. It won't really do graphics, but it's easy to use and your z80 can print things early on. A PS/2 keyboard will be rather simple to interface. But anyway, the z80 is a good choice for your project. This might sound complicated, but in the end it's just not all that bad. Build incrementally: start with the z80 test circuit, wire up an EEPROM so it can run some code, and just build from there.
{ "source": [ "https://electronics.stackexchange.com/questions/28644", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8210/" ] }
28,686
I've had some experience with FPGA/HDL tool suites such as Xilinx ISE, Lattice Diamond, etc. The general workflow is writing Verilog/VHDL, simulation, testing and then programming the FPGA. I've heard a couple of people say ASIC design is very different. What are the toolsets used for the two major types of ASICs, Gate level ASICs and Transistor level ASICs? I've been looking into High Level Synthesis tools such as Catapult C and Cadence C to Silicon, but I've never tried any yet. Can you explain the different types of tools available in the ASIC/FPGA field that can change/speeden up the typical HDL workflow?
Typically ASIC design is a team endeavor due to the complexity and quantity of work. I'll give a rough order of steps, though some steps can be completed in parallel or out of order. I will list tools that I have used for each task, but it will not be encyclopedic. Build a cell library. (Alternatively, most processes have gate libraries that are commercially available. I would recommend this unless you know you need something that is not available.) This involves designing multiple drive strength gates for as many logic functions as needed, designing pad drivers/receivers, and any macros such as an array multiplier or memory. Once the schematic for each cell is designed and verified, the physical layout must be designed. I have used Cadence Virtuoso for this process, along with analog circuit simulators such as Spectre and HSPICE. Characterize the cell library. (If you have a third party gate library, this is usually done for you.) Each cell in your library must be simulated to generate timing tables for Static Timing Analysis (STA). This involves taking the finished cell, extracting the layout parasitics using Assura, Diva, or Calibre, and simulating the circuit under varying input conditions and output loads. This builds a timing model for each gate that is compatible with your STA package. The timing models are usually in the Liberty file format. I have used Silicon Smart and Liberty-NCX to simulate all needed conditions. Keep in mind that you will probably need timing models at "worst case", "nominal", and "best case" for most software to work properly. Synthesize your design. I don't have experience with high level compilers, but at the end of the day the compiler or compiler chain must take your high level design and generate a gate-level netlist. The synthesis result is the first peek you get at theoretical system performance, and where drive strength issues are first addressed. I have used Design Compiler for RTL code. Place and Route your design. This takes the gate-level netlist from the synthesizer and turns it into a physical design. Ideally this generates a pad-to-pad layout that is ready for fabrication. It is really easy to set your P&R software to automatically make thousands of DRC errors, so it's not all fun and games in this step either. Most software will manage drive strength issues and generate clock trees as directed. Some software packages include Astro, IC Compiler, Silicon Encounter, and Silicon Ensemble. The end result from place and route is the final netlist, the final layout, and the extracted layout parasitics. Post-Layout Static Timing Analysis. The goal here is to verify that your design meets your timing specification, and doesn't have any setup, hold, or gating issues. If your design requirements are tight, you may end up spending a lot of time here fixing errors and updating the fixes in your P&R tool. The final STA tool we used was PrimeTime. Physical verification of the Layout. Once a layout has been generated by the P&R tool, you need to verify that the design meets the process design rules (Design Rule Check / DRC) and that the layout matches the schematic (Layout versus Schematic / LVS). These steps should be followed to ensure that the layout is wired correctly and is manufacturable. Again, some physical verification tools are Assura, Diva, or Calibre. Simulation of the final design. 
Depending on complexity, you may be able to do a transistor-level simulation using Spectre or HSPICE, a "fast spice" simulation using HSIM, or a completely digital simulation using ModelSim or VCS. You should be able to generate a simulation with realistic delays with the help of your STA or P&R tool. Starting with an existing gate library is a huge time saver, as well as using any macros that benefit your design, such as memory, a microcontroller, or alternative processing blocks. Managing design complexity is a big part as well - a single clock design will be easier to verify than a circuit with multiple clock domains.
{ "source": [ "https://electronics.stackexchange.com/questions/28686", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2249/" ] }
28,897
I understand that the output voltage is determined by the ratio between the two resistor values, and that if both resistors are the same then the output voltage will be exactly half the input; but what's the basis for picking the resistor values? Is there any need to consider the output current when choosing the resistor values?
The main point is current. Take a look at this circuit. Hover your mouse pointer over the ground symbol and you'll see that the current is 25 mA. Now take a look at this circuit and you'll see that the output current is \$ 2.5 \mbox{ } \mu A \$. Now let's see how the circuits behave under load. Here's the first circuit with load. As you can see, there is a 2.38 mA current going through the load resistor on the right and the voltage on it is no longer the expected 2.5 V but instead 2.38 V (because the two bottom resistors are in parallel). If we take a look at the second circuit here, we'll see that now the top resistor drops nearly the whole 5 V while the two bottom resistors have a voltage of 4.99 mV. That is because the resistor ratio has been changed here. Since the two bottom resistors are in parallel now, and we have one resistor with significantly larger resistance than the other, their combined resistance is negligible compared to the resistance of just the bottom right resistor (you can check that using parallel resistor formulas). So now the voltage output is significantly different from the 2.5 V we get in the no-load condition. Now let's take a look at the opposite situation: two small resistors in the voltage divider and one large resistor as the load here. Again the combined resistance of the two lower resistors is smaller than the resistance of the smaller resistor of the two. In this case however this doesn't make a big impact on the voltage seen by the load. It still has a voltage of 2.5 V and everything is fine so far. So the point is that when determining the resistance of the resistors, we should take into account the input resistance of the load, and the two voltage divider resistors should be as small as possible. On the other hand, let's compare the current going through the divider in the circuit with large resistors on the divider and the circuit with small resistors on the divider. As you can see, the large resistors have a current of just \$2.5 \mbox{ }\mu A\$ going through them and the small resistors have a current of 25 mA. The point here is that the current is wasted by the voltage divider and if this was for example part of a battery operated device, it would have a negative impact on the battery life. So the resistors should be as large as possible in order to lower the wasted current. This gives us two opposite requirements: resistors as small as possible to get better voltage regulation at the output, and resistors as large as possible to get as little wasted current as possible. So to get the correct value, we should see which voltage we need on the load and how precise it needs to be, get the load's input resistance, and based on that calculate the size of the resistors we need to have the load at an acceptable voltage. Then we need to experiment with higher voltage divider resistor values, see how the voltage will be affected by them, and find the point beyond which the voltage variation caused by the input resistance becomes unacceptable. At that point, we (in general) have a good choice of voltage divider resistors. Another point that needs to be considered is the power rating of the resistors. This goes in favor of resistors with larger resistance because resistors with lower resistance will dissipate more power and heat up more. That means that they will need to be larger (and usually more expensive) than resistors with larger resistance. In practice, once you do a number of voltage dividers, you'll see a few popular values for the voltage divider resistors. 
Many people just pick one of them and don't bother too much with calculations, unless there is a problem with the choice. For example for smaller loads, you can pick resistors in the \$100 \mbox{ } k \Omega\$ range while for bigger loads you can use \$10 \mbox{ } k \Omega\$ or even \$1 \mbox{ } k \Omega\$ resistors, if you have enough current to spare.
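To put rough numbers on the trade-off, here is a small C sketch (my own illustration; the component values are invented, not taken from the linked simulations). It computes the loaded divider output Vout = Vin * (R2||RL) / (R1 + R2||RL):

    #include <stdio.h>

    /* Parallel combination of two resistances. */
    static double par(double a, double b) { return (a * b) / (a + b); }

    /* Divider R1 (top) / R2 (bottom) from Vin, loaded by RL across R2. */
    static double vout(double vin, double r1, double r2, double rl)
    {
        double rbot = par(r2, rl);       /* R2 in parallel with the load */
        return vin * rbot / (r1 + rbot);
    }

    int main(void)
    {
        /* Small divider resistors, 10 k load: output stays near 2.5 V. */
        printf("100R/100R, 10k load: %.3f V\n", vout(5.0, 100.0, 100.0, 10e3));
        /* Large divider resistors, same load: output collapses.        */
        printf("1M/1M,     10k load: %.3f V\n", vout(5.0, 1e6, 1e6, 10e3));
        return 0;
    }

Running it shows roughly 2.49 V in the first case and about 0.05 V in the second - the same effect as in the simulations, with the wasted divider current being the price of the stiffer output.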
{ "source": [ "https://electronics.stackexchange.com/questions/28897", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8322/" ] }
28,954
I want to protect an MCU (PIC18F67J60) ADC input (0 to 3.3 V) against ESD surges. I have seen different approaches and am unsure which would be the preferred method, or possibly just want the pros and cons of each method. The methods are:

- A TVS diode with the correct reverse working voltage, connected to ground.
- Two Schottky diodes: one between V+ and the ADC input, one between GND and the ADC input.

What to choose?
There are several methods to do this, and a successful approach usually requires several of them at the same time. They are:

1. Use a spark-gap on the PCB itself. This is normally made using two diamond-shaped pads on the PCB separated by about 0.008 inches or less. This cannot be covered in soldermask. One pad is connected to GND (or better yet, chassis ground) and the other is the signal you want to protect. Put this at the connector where the signal comes in. This spark gap doesn't actually work very well, since it might only reduce the ESD voltage to about 600 volts -- give or take a LOT, because of humidity and dirt on the PCB. The #1 purpose for this is to remove the possibility of a spark jumping across the other protective devices like diodes and resistors. You cannot use a spark-gap alone and expect things to work.

An example of a PCB spark gap. Source: NXP AN10897, A guide to designing for ESD and EMC, rev. 02 (fig. 33 inside).

2. A series resistor between the spark gap and your sensitive components. This resistor should be as large as possible without interfering with your signal. Sometimes your signal won't allow for any resistor, or sometimes you can get away with something as large as 10 kΩ. A ferrite bead could also work here, but a resistor is preferred if possible, because a resistor has more predictable performance over a wider frequency range. The purpose of this resistor is to reduce the current flow from the spike, which can help protect the diodes or other devices.

3. Protection diodes (one connects your signal to GND, and another to VCC). These will hopefully shunt any spikes to either the power or ground plane. Put these diodes between your sensitive components and the series resistor from #2. You could use a TVS here, but that's not as good as normal diodes.

4. A 3 nF cap between your signal and GND (or chassis GND) can help to greatly absorb any spike. For best ESD protection, put it between your series resistor and the chip. For best EMI filtering, put it between the resistor and your connector. Depending on your signal, this might not work well: this cap and the series resistor form a low-pass filter that could negatively affect signal quality. Keep that in mind when designing your circuit.

Each situation will likely require a different combination of these 4 things. If your ADC input is fairly slow then I'd go with a spark gap, a 500 Ω to 1 kΩ resistor, and maybe a cap. If you have room on the PCB then the diodes wouldn't be bad either (but still overkill).

Let me elaborate on the spark gap for a moment. Let's say that a resistor in an 0402 package was all the protection you had, and a spike comes in. Even if that resistor is 1 MΩ, the spike could jump across that small resistor (effectively bypassing it) and still kill your chip. Since the gap in the spark gap is smaller than the distance between the pads of the resistor, the ESD spike is more likely to jump across the spark gap than the resistor. Of course you could just use a resistor with more distance between pads, and that's OK in some cases, but you still have the energy there that you have to deal with. With a spark gap you do dissipate some of that ESD energy, even though you don't dissipate it enough to make it benign. And best of all, they are FREE!
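As a quick sanity check on point 4, the series resistor and the cap form an RC low-pass whose -3 dB corner is f = 1/(2*pi*R*C). A short C sketch, with example values of my own choosing (not from any standard or datasheet):

    #include <stdio.h>

    int main(void)
    {
        const double pi = 3.14159265358979;
        double r = 1e3;     /* 1 k series resistor (point 2)      */
        double c = 3e-9;    /* 3 nF capacitor to ground (point 4) */
        double fc = 1.0 / (2.0 * pi * r * c);
        printf("-3 dB corner: %.1f kHz\n", fc / 1e3);   /* about 53 kHz */
        return 0;
    }

So with these particular values, signal content much above about 53 kHz is attenuated - fine for a slow ADC input, but worth checking against your actual signal.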
{ "source": [ "https://electronics.stackexchange.com/questions/28954", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3041/" ] }
29,034
When driving a brushless DC motor, which parameters control the speed. Is it the current in the windings, the voltage, or both? What determines the maximum speed? If you drive the windings with PWM, that controls the winding current, correct?
First let's consider just an ordinary brushed DC motor. The hardware mechanically ensures that the windings are switched (commutated) such that the magnetic field is always trying to pull the motor along. The magnetic field strength is directly proportional to current, so the torque is proportional to current. So at a very basic level, the speed is whatever results in enough mechanical resistance to balance the torque.

However, that is not useful in most cases, since it's not obvious what the current is. For a stalled motor, the current is the applied voltage divided by the resistance of whatever windings are switched in. However, as the motor spins it also acts like a generator. The voltage the generator produces is proportional to speed, and opposes the externally applied voltage. At some speed this equals the external voltage, in which case the effective voltage driving the motor is zero and the motor current is zero. That also means the torque is zero, so an unloaded motor can't spin that fast, since there is always some friction. What happens is that the motor spins at a slightly lower speed: just enough slower to leave a little effective voltage on the motor, which creates just enough current to produce the torque that balances the small friction in the system. This is why the speed of an unloaded motor doesn't just increase until it flies apart. The unloaded speed is pretty much proportional to the external voltage, and is just below the speed at which the motor internally generates that voltage.

This also explains why a fast-spinning motor draws less current than a stalled motor at the same external voltage. For the stalled motor, current is applied voltage divided by resistance. For the spinning motor, current is applied voltage minus the generator voltage, divided by the resistance.

Now to your question about a brushless DC motor. The only difference is that the windings are not automatically switched in and out according to the rotation angle of the motor. If you switch them optimally, as the brush system in a brushed DC motor is intended to do, then you get the same thing. In that case the unloaded current will be even lower, since there is no friction from the brushes to overcome. That allows less current to drive the motor at a particular speed, which will be closer to where the generator voltage matches the externally applied voltage.

With a brushless motor you have other options. I recently did a project where the customer needed very accurate motor speed. In that case I commutated the windings at precisely the desired speed derived from a crystal oscillator. I used the Hall effect position feedback signals only to clip the applied magnetic field to within ±90° of the position. This works fine as long as the load on the shaft is less than the torque applied when the magnetic field is at 90°.

Usually, however, you commutate a brushless DC motor optimally, just like the mechanical brushes would try to do. This means keeping the magnetic field at 90° from the current position, in the direction of desired rotation. The overall applied voltage is then adjusted to modulate speed. This is efficient, since only the minimum voltage is used to make the motor spin at the desired speed.

Yes, PWM works fine for driving the coils. Above a few 100 Hz or so for most motors, the windings only "see" the average applied voltage, not the individual pulses. The mechanical system can't respond anywhere near that fast.
However, these windings make magnetic fields which apply force. There is a little bit of force on every turn of wire. While the motor may operate fine at a few 100 Hz PWM, individual turns of the winding can be a little loose and vibrate at that frequency. This is not good for two reasons. First, the mechanical motion of the wires can eventually cause insulation to rub off, although that's rather a long shot. Second, and this is quite real, the small mechanical vibrations become sound that can be rather annoying. Motor windings are therefore commonly driven with PWM just above the audible range, like 25-30 kHz.
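The steady-state behavior described above fits in a few lines. Here is a C sketch of the usual idealized model (the motor constants below are invented for illustration, not for any real motor): current is the effective voltage divided by the winding resistance, and torque is proportional to that current.

    #include <stdio.h>

    int main(void)
    {
        double V  = 12.0;   /* applied voltage, volts                    */
        double R  = 2.0;    /* winding resistance, ohms                  */
        double Ke = 0.05;   /* back-EMF constant, V per rad/s            */
        double Kt = 0.05;   /* torque constant, N*m per amp (= Ke in SI) */

        for (double w = 0.0; w <= 240.0; w += 60.0) {   /* shaft speed */
            double I   = (V - Ke * w) / R;  /* effective voltage / resistance */
            double tau = Kt * I;            /* torque proportional to current */
            printf("w = %5.1f rad/s  I = %5.2f A  torque = %6.3f N*m\n",
                   w, I, tau);
        }
        /* At w = V/Ke = 240 rad/s the current and torque reach zero: that
           is the ideal (frictionless) unloaded speed for this voltage.   */
        return 0;
    }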
{ "source": [ "https://electronics.stackexchange.com/questions/29034", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5273/" ] }
29,037
What tradeoffs should I consider when deciding to use an SPI or I2C interface? This accelerometer/gyro breakout board is available in two models, one for each interface. Would either one be easier to integrate into an Arduino project? http://www.sparkfun.com/products/11028
Summary:

- SPI is faster.
- I2C is more complex and not as easy to use if your microcontroller doesn't have an I2C controller.
- I2C only requires 2 lines.
- I2C is a bus system with bidirectional data on the SDA line. SPI is a point-to-point connection with data in and data out on separate lines (MOSI and MISO).

Essentially SPI consists of a pair of shift registers, where you clock data into one shift register while you clock data out of the other. Usually data is written in bytes by having 8 clock pulses in succession each time, but that's not an SPI requirement. You can also have word lengths of 16 bits or even 13 bits, if you like. While in I2C synchronization is done by the start sequence, in SPI it's done by SS going high (SS is active low). You decide yourself after how many clock pulses this happens. If you use 13-bit words, SS will latch the last clocked-in bits after 13 clock pulses. Since the bidirectional data is on two separate lines it's easy to interface.

SPI in standard mode needs at least four lines: SCLK (serial clock), MOSI (Master Out Slave In), MISO (Master In Slave Out) and SS (Slave Select). In bidirectional mode it needs at least three lines: SCLK (serial clock), MIMO (Master In Master Out), which is one of the MOSI or MISO lines, and SS (Slave Select). In systems with more than one slave you need an SS line for each slave, so that for \$N\$ slaves you have \$N+3\$ lines in standard mode and \$N+2\$ lines in bidirectional mode. If you don't want that, in standard mode you can daisy-chain the slaves by connecting the MOSI signal of one slave to the MISO of the next. This will slow down communication, since you have to cycle through all the slaves' data.

Like tcrosley says, SPI can operate at a much higher frequency than I2C.

I2C is a bit more complex. Since it's a bus you need a way to address devices. Your communication starts with a unique start sequence: the data line (SDA) is pulled low while the clock (SCL) is high; for the rest of the communication data is only allowed to change when the clock is low. This start sequence synchronizes each communication. Since the communication includes the addressing, only two lines are required for any number of devices (up to 127).

edit: It's obvious that the data line is bidirectional, but it's worth noting that this is also true for the clock line. Slaves may stretch the clock to control bus speed. This makes I2C less convenient for level-shifting or buffering. (SPI lines in standard mode are all unidirectional.)

After each byte (address or data) is sent, the receiver has to acknowledge the receipt by placing an acknowledge pulse on SDA. If your microcontroller has an I2C interface this will automatically be taken care of. You can still bit-bang it if your microcontroller doesn't support it, but then you'll have to switch the I/O pin from output to input for each acknowledge or read data, unless you use one I/O pin for reading and one for writing.

At 400 kHz, standard I2C is much slower than SPI. There are high-speed I2C devices which operate at 1 MHz, still much slower than 20 MHz SPI.
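To make the "pair of shift registers" point concrete, here is a toy C model of one SPI byte exchange (a software illustration only, not driver code for any particular chip):

    #include <stdio.h>
    #include <stdint.h>

    /* Master and slave are just two shift registers that swap contents,
       one bit per clock pulse -- which is all SPI really is.            */
    static uint8_t spi_xfer(uint8_t *slave_reg, uint8_t master_out)
    {
        uint8_t master_in = 0;
        for (int i = 0; i < 8; i++) {
            int mosi = (master_out >> 7) & 1;   /* master presents its MSB */
            int miso = (*slave_reg  >> 7) & 1;  /* slave presents its MSB  */
            master_out = (uint8_t)(master_out << 1);
            /* "clock edge": both ends shift the sampled bit in           */
            master_in  = (uint8_t)((master_in  << 1) | miso);
            *slave_reg = (uint8_t)((*slave_reg << 1) | mosi);
        }
        return master_in;
    }

    int main(void)
    {
        uint8_t slave = 0x5A;
        uint8_t got = spi_xfer(&slave, 0xC3);
        printf("master got 0x%02X, slave now holds 0x%02X\n", got, slave);
        return 0;   /* 0x5A and 0xC3 have simply swapped places */
    }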
{ "source": [ "https://electronics.stackexchange.com/questions/29037", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/17697/" ] }
29,316
I was reading USB scope probe - request for comments and ideas , and it got me thinking. What I'd really like is a very high performance oscilloscope, one which would cost $10000 or so. Surely many other people would like one too. And surely, with the expertise available on this site, it should be possible to design and open-source one. Here's my idea: It would be a hand-held 'scope probe with a USB lead coming out. Battery operated to isolate it from USB power. Input stage is a high speed op-amp, like THS3201DBVT ? ADC is something like ASD5010 , which is 1 Gs/s and 650 MHz input bandwidth. FPGA to handle the 32-bit data coming out, do the triggering, and package it into the USB. Open-source software to run on the PC. Is this a fool's errand? What am I missing? Added, more details in response to the answers: This 'scope would not be able to compete with the fancy expensive scopes out there. The main aim is to have something which would make it possible to examine high speed signals, while costing less than $200 for someone to make themselves. USB bandwidth: This is not an analog scope, nor is it a fancy LeCroy . However, USB is quite capable of transfering 2k samples at 60 Hz. This still makes it extremely useful, even though it might not be capable of capturing transient events in between those frames. A clear responsive display. Well, a PC's monitor is certainly clear. Better than almost all scopes on the market. So clarity and size are no problems. Responsive? As long as the screen can be updated at 60 Hz, I think that's pretty responsive. Triggering: I was imagining simple level triggering happening on the device. Again, it would not be able to compete with fancy scopes, but remember: this is supposed to be a $200 device. It's not supposed to have 1 GHz bandwidth. Where did I say that? But surely it could have more than 100 MHz bandwidth? Take home points: It's a $200 device. The main aim of the device is to make it possible to see high speed signals without spending $10000. There would be many things that it would be unable to do. Surely something like this would be fairly useful to people here. Surely, with the expertise available on this site, we could make it happen?
This comes down to a question of bandwidth and latency. For a simple system let's assume one probe with 100 MHz bandwidth with 1GS/s sampling rate and an 10-bit A/D converter (I've had bad experiences with 8-bit scopes). I want a real-time display on the PC with a minimum sampling window of let's say 10ns - 1 cycle of a 100MHz sine wave and a maximum window of (I'll be generous to you in this) half a second. In other words, the lowest time setting will be something like 1ns/div and the highest is .05s/div. I also want several voltage modes - 100mV range up to 20V let's say. What kind of data rates does this involve? 1Gs/s * 10 bits/sample = 10Gbits/s Those aren't USB speeds. Far from it. And I didn't even take overhead into account. First off, you just don't have the bandwidth. And it's not just bandwidth either. For your real-time display you need to be consistent. You need to transfer 100 bits to your application layer every 10 nano seconds. That kind of consistency can't be had from USB. It's not designed to cater to one device with extravagant demands - it's designed as a bus. And you can't control when you own the bus - the devices are just slaves. If the host lets another device talk when you need to send data, your data is lost. You may be crying foul - why transfer real-time data to the computer when 'real-time' for a person is 60Hz? If all you need to do is update the display you certainly don't need that much data. Except you do - your display is some linear combination of all of the samples you've collected. Averaged, least-mean-square approximated, cubic spline interpolation - it doesn't matter. To make a nice pretty display that isn't just a bunch of dots, you need most to all of that data and you need to post process it. Any triggering? The calculations have to be done on the host - at the application layer. No matter what way you slice it, for real-time displays at 1GS/s rates for any accuracy worth a damn, you have to transfer orders of magnitude more data than USB can handle and you have to do it more reliably than you're guaranteed by USB. What are the ways around this? Don't do a real-time display. Some USB scopes only offer triggered modes. The triggering is handled on the device and when a trigger is found, data is collected in a buffer. When the buffer fills, the USB scope slowly transfers it to the application and then the application displays it. That suffices for lot of scope use, but it's not real-time. And the transfer - that takes a while too. It's inconvenient. And usually the drivers suck. You can tell I've had bad experiences. I've always wondered why Firewire wasn't used for scopes. It avoids some of the headaches of USB. It's peer-to-peer, offers isochronous (consistent timing) modes and is relatively high bandwidth. You might be able to make a 10MHz real-time scope or so with that. To address your points after the edit: The usability of a scope goes up tremendously with price. When you make the jump from a $200 USB scope to even a $500 standalone you get tremendous increases in features and basic functionality. Why spend just $200 when for a little bit more you can get a real scope? Now that China has opened up the floodgates of cheap, effective scopes, there's little reason to want to save $300 that will just frustrate you later. The 'fancy' scopes that have these features are cheap nowadays. 
Yes, limiting your data transfer to only provide something around 60 Hz worth of consistent data will be easier with USB, but that's still not something you want to do. Don't forget about your DSP classes - only grabbing certain data from the stream amounts to decimation. When you decimate, you have to add antialiasing filters. When you do that, you lose bandwidth. This makes your scope less useful - it will limit the bandwidth of your real-time display (and only the real-time display - triggered modes would be okay) to much less than the bandwidth of your analog front-end. Managing the signal processing aspects of an oscilloscope is tricky business.

Clear responsive display? The PC? Not consistently. Regardless of how you do this, you need to buffer data. As I said before, USB doesn't guarantee when your data gets through. I'll say it differently: USB is not designed to accommodate hard real-time data transfer. Sure, for sufficiently small amounts of data at large intervals you may get some good performance, but not consistent performance. You WILL use buffering and once in a while you WILL miss transferring your buffer in a timely manner. Then your display skips, data is stale, etc. etc. Clear and responsive real-time displays require hard real-time data links, period.

Simple triggering - again, we get back to cost vs. complexity vs. responsiveness. To do triggering on the device to detect transients, your device can't just be a dumb data pipe that transfers samples irresponsibly over USB. You have to buffer, buffer, buffer samples on the device until you see your trigger condition. That means you need memory and intelligence on your device - either a large FPGA or a large microcontroller. That adds to size and space. If you use an FPGA you have to balance the amount of triggering logic with your need for lots of RAM for buffer space. So your buffer is smaller than you'd like it to be already. That means that you get a minuscule amount of data around your trigger point. Unless you add external memory - then you can do more. That increases the size and cost of your device though - this certainly won't be just a probe with a USB cable attached to it.

You'd be lucky to get 100 MHz bandwidth - usually 10x the sampling rate is considered the minimum cutoff for bandwidth. So if you have a 1 GS/s sampling rate that barely gets you 100 MHz bandwidth. You can't get more - a 200 MHz square wave is going to look like a 200 MHz sine wave. That sucks. That's dumb - it's nowhere near professional level.

Your other set of points:

- $200? How do you figure? What's the parts list?
- Good scopes to read high-speed signals do not cost thousands of dollars. They cost maybe a thousand dollars. 100 MHz is child's play in the scope department, and your idea won't even meet that benchmark as well as a $1000 scope.
- Yes, from the way you describe it, it would be very limited indeed. The technical aspects of even the few requirements you have mean a very limited device. It wouldn't be nearly as useful as the $1100 scope I bought with a logic analyzer and 60 MHz analog bandwidth. I'd rather pay for my test equipment than dick around with intentionally-limited child's toys. You live and die by your test equipment as an engineer. If you're not certain you can trust it, you're wasting your time.
Given the lack of expertise you've shown about high-speed communication, signal processing and the power of embedded processing (in FPGAs or microcontrollers) I wouldn't wager you're up to designing it yourself and no one else who's answered is anything other than ambivalent. If there were a better-targeted set of requirements that hit upon a real need in the community that wasn't being served, that I could see being technically feasible I'd be on board. But your vague requirements don't seem researched. You need to do a survey of the available options out there for hobbyists - what USB scopes and standalones are people using, what are their strengths and weaknesses, and determine if any niches aren't being filled. Otherwise this is just fantasizing.
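For what it's worth, the arithmetic behind the bandwidth argument is easy to check. A C sketch (the USB 2.0 figure is the nominal signalling rate; real bulk throughput is considerably lower):

    #include <stdio.h>

    int main(void)
    {
        double fs   = 1e9;          /* 1 GS/s sample rate         */
        double bits = 10.0;         /* bits per sample            */
        double raw  = fs * bits;    /* 10 Gbit/s raw stream       */
        double usb2 = 480e6;        /* USB 2.0 nominal signalling */
        printf("raw: %.0f Mbit/s vs USB 2.0: %.0f Mbit/s (%.0fx short)\n",
               raw / 1e6, usb2 / 1e6, raw / usb2);

        /* A decimated 60 Hz display of 2048 10-bit samples per frame: */
        printf("60 Hz x 2048 x 10 bits = %.2f Mbit/s\n",
               60.0 * 2048.0 * 10.0 / 1e6);  /* fits -- after decimation */
        return 0;
    }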
{ "source": [ "https://electronics.stackexchange.com/questions/29316", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1024/" ] }
29,591
My professor asked me why the standard of 4-20 mA is used for current signalling in measurement circuits. Is there any reason why this range is used? I mean, why the value of 4? Couldn't it be 5-25 mA or anything else? Please note that I know why we don't use a zero starting point for measurement.
A 4-20 mA current corresponds to a 1-5 V analog voltage across a 250 Ω resistor, making it easy to adapt the 4-20 mA current loop to a 1-5 VDC analog voltage input (common on many controllers using TTL inputs) by simply placing a 250 Ω resistor across the analog input terminals of the controller.

The industry specification for Compatibility of Analog Signals for Electronic Industrial Process Instruments is ISA 50.00.01-1975 (R2012). The pre-1972 version of this standard specified 10-50 mA for analog current loops, because the technology at the time used magnetic amplifiers which required a minimum of 10 mA to operate. Since transistor circuits have become more stable and accurate, the 4-20 mA analog current loop has become the standard, requiring less power and allowing greater distances.

Of course, if you aren't already aware, one of the main advantages of a current loop over voltage-driven analog signals is that a current-driven loop allows greater distances to be achieved - typically lengths up to 1000 m are possible. Any signal sent over a long distance produces a voltage drop in proportion to the length of the cable (cable resistance); however, when a 4-20 mA signal is used to transmit the analog signal (as opposed to a 1-5 VDC signal), the voltage drop is irrelevant, since the same current has to pass through the whole circuit loop (it has nowhere else to go!), provided the power supply can handle it. For example, for 24 VDC powered transmitters, 7-15 VDC is typically used by the transmitter circuit, leaving a budget of at least 9 VDC for the loop voltage drop.

You may already be aware that having a non-zero current (4 mA) representing the "zero value" of the analog signal allows the controller to detect a broken wire (0 mA), as well as allowing loop-powered transmitter designs. Having 20 mA correspond to the "maximum value" of the analog signal is practical for industrial process instruments, since larger currents would result in a larger voltage drop for the same size cabling, thereby limiting the cable length. Also, limiting the power of signals in the transmitter is better for intrinsically safe design, by limiting the energy available for igniting explosive dust or vapors in hazardous areas (as per ANSI/ISA-RP12.06.01-2003, Recommended Practice for Wiring Methods for Hazardous (Classified) Locations Instrumentation, Part 1: Intrinsic Safety).
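On the receiving side, scaling a loop current into engineering units and using the live zero for fault detection looks something like this C sketch (the 3.5 mA fault threshold and the 0-100 degC range are example choices of mine, not part of the standard):

    #include <stdio.h>

    /* Map a 4-20 mA loop current onto a process range lo..hi.
       Currents well below 4 mA indicate a broken loop.        */
    static double loop_to_units(double mA, double lo, double hi, int *fault)
    {
        *fault = (mA < 3.5);    /* "live zero" lost: open wire */
        return lo + (mA - 4.0) / 16.0 * (hi - lo);
    }

    int main(void)
    {
        int fault;
        /* e.g. a 0-100 degC temperature transmitter: */
        double t = loop_to_units(12.0, 0.0, 100.0, &fault);
        printf("12 mA -> %.1f degC, fault=%d\n", t, fault);   /* 50.0, 0 */
        loop_to_units(0.0, 0.0, 100.0, &fault);
        printf(" 0 mA -> fault=%d\n", fault);                 /* 1       */
        return 0;
    }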
{ "source": [ "https://electronics.stackexchange.com/questions/29591", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/7746/" ] }
29,756
What will happen if, for a BJT transistor, its emitter terminal is treated as the collector and the collector as the emitter in a common-emitter amplifier circuit?
Short answer: It will work, but will have a lower \$\beta\$ (beta).

Why? The BJT is formed by two p-n junctions (either npn or pnp), so at first glance it's symmetrical. But both the concentration of dopant and the size of the regions (and, more important, the area of the junctions) are different for the three regions. So it simply won't work at its full potential (like using a reversed lever).

Wiki about BJT: look especially at the section Structure and the reverse-active operating mode:

"The lack of symmetry is primarily due to the doping ratios of the emitter and the collector. The emitter is heavily doped, while the collector is lightly doped, allowing a large reverse bias voltage to be applied before the collector–base junction breaks down. The collector–base junction is reverse biased in normal operation. The reason the emitter is heavily doped is to increase the emitter injection efficiency: the ratio of carriers injected by the emitter to those injected by the base. For high current gain, most of the carriers injected into the emitter–base junction must come from the emitter."

Another note: classical BJTs are created by stacking the three regions in a linear way (see the picture on the left), but modern bipolars, realized in surface (MOS) technology, also have a different shape for collector and emitter (on the right):

On the left a traditional BJT; on the right a BJT in MOS technology (also called Bi-CMOS when both transistor types are used in the same die).

So the behavior will be affected even more.
{ "source": [ "https://electronics.stackexchange.com/questions/29756", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8915/" ] }
29,861
This is subjective, but I am looking for other people's experience. If I am going to screw eight wires into a daughter relay board's terminal block, is there a benefit or advantage to tinning the stripped copper ends first? The environment in which the units are to be installed is not near the coast and is not normally damp or wet - rather dry and hot.
You MUST NOT fully tin copper wires that are to be inserted into a screw-down terminal block - that your days may be long on the face of the land. It is permissible to tin just the tip, to maintain the wire shape; the minimum possible amount of copper should be tinned. Any competent regulatory authority will have this requirement as a rule in their system (see below).

The reason for the prohibition is that when you fully tin a multistrand wire, the solder wicks between the strands of copper and forms a solid block, part of whose volume is metallic solder. When you clamp the solder and copper bundle, you tighten the screw or clamp against this solder block, and in time the solder metal "creeps" under the compressive forces and the joint loses tension. The wire can then either pull out or make a high-resistance connection with heating. This is a genuine real-world issue and is covered by genuine real-world regulations in many countries.
{ "source": [ "https://electronics.stackexchange.com/questions/29861", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4459/" ] }
29,945
UARTs often let you choose between 1, 1.5 and 2 stop bits. With 1 stop bit payload efficiency is 80% (8/10), with 2 stop bits that drops to 72.7% (8/11). So what's the advantage of the second stop bit?
Extra stop bits can be a useful way to add a little extra receive processing time, especially at high baud rates and/or with a soft UART, where time is required to process the received byte.

Where speed is tight, and your UART only offers division ratios in powers of 2, adding an extra stop bit can be an option to give a less drastic reduction in speed than the next lowest baud rate. I believe this may be one reason the DMX512 standard specifies 2 stop bits.

Another situation where they can be useful is if you have devices forwarding a data stream without any buffering or packetisation - small differences in clock rates between nodes and finite sampling granularity can cause errors to occur as data is received and retransmitted by a number of nodes in a chain. But if the data is sent with 2 stop bits and the receivers are set to one stop bit, this adds enough margin to accommodate these errors and leave at least one valid stop-bit period, so nodes far down the chain can still receive reliably.

I have also encountered a situation where a very long cable run caused some asymmetry in the rise and fall times, resulting in inadequate stop-bit length - sending 2 stop bits and having the receiver only require one fixed this.
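The efficiency and timing-margin numbers are easy to tabulate. A small C sketch (it assumes 8 data bits, one start bit and no parity):

    #include <stdio.h>

    int main(void)
    {
        double baud = 115200.0;
        for (double stop = 1.0; stop <= 2.0; stop += 0.5) {
            double frame = 1.0 + 8.0 + stop;    /* start + data + stop */
            printf("%.1f stop: %4.1f%% payload, %5.1f us/frame, "
                   "%.1f us extra slack\n",
                   stop,
                   8.0 / frame * 100.0,         /* 80 / 76.2 / 72.7 %  */
                   frame / baud * 1e6,
                   (stop - 1.0) / baud * 1e6);  /* guaranteed idle per frame */
        }
        return 0;
    }

At 115200 baud the second stop bit costs about 9% throughput but buys roughly 8.7 us of guaranteed idle per frame - often exactly what a soft UART needs.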
{ "source": [ "https://electronics.stackexchange.com/questions/29945", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3920/" ] }
29,955
Omron D2FC-F-7N microswitches are used in computer mice all around, and they eventually start registering several clicks per press. AFAIK there is a flexible metal plate that wears out due to metal fatigue, so there must be a way to prolong its life. The obvious solution is to remove the malfunctioning microswitch and replace it with a spare, but where I live they aren't available at all.
"Delivery costs many times more than the items themselves"

Even from eBay? Whereabouts do you live? One way you can get spares is to smash open another mouse that's broken for some other reason. Perhaps a friend has a broken one?

It may also be possible to repair them. Those little switches have a snap-fit cover, and can be opened up:

1. Carefully pull on the catch with a fine blade, and remove the cover.
2. At this point, plug in the mouse and test the switch. Gently push on the metal spring on the switch, and see if the problem still happens. If not, try to push on it in such a way that the problem happens. After you have attempted to fix the switch, you will be able to test it again without re-assembling the whole mouse. Unplug the mouse now.
3. The switches come in a variety of different designs, but they are fairly similar. There's a bistable metal spring which normally serves to ensure the contacts move rapidly and decisively. Either it's this spring which isn't pulling as hard, or the contacts are dirty.

Let's start with the spring itself:

1. You need to flatten it slightly. You might find it easier to remove the spring from the switch first.
2. Place it on a table, and squash it slightly with your finger. Not too much. Better to err on the side of caution.
3. Then put the spring back in the switch.
4. Test it again now. If it still bounces, then you might need to flatten the spring a little more.

If this doesn't work, then try cleaning the contacts:

1. Tear off a thin (5 mm wide) strip of J-Cloth or similar.
2. Apply a little abrasive cleaner (like CIF) to it.
3. Thread it through the switch and pull it back and forth to rub away any dirt from the contacts.
4. Tear off another strip, and soak it in methylated spirits (or pure alcohol). Use this to clean off the abrasive cleaner.
5. Test the switch again. If it still doesn't work, then get a new switch.
{ "source": [ "https://electronics.stackexchange.com/questions/29955", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/9243/" ] }
30,017
I was discussing pull-down resistors with a colleague of mine. Here are the two configurations for a transistor used as a switch. The input signal can come either from a microcontroller or another digital output to drive a load, or from an analogue signal to give a buffered output from the collector of the transistor to the microcontroller.

On the left, with Q1, is my colleague's configuration. He states that:

- A 10K resistor is needed directly on the base to prevent Q1 from switching ON unintentionally. If the configuration on the right, with Q2, is used, then the resistance will be too weak to pull the base down.
- R2 also protects \$V_{BE}\$ from over-voltage and gives stability in case of temperature changes.
- R1 protects Q1's base from over-current, and will be a bigger-value resistor in case the voltage from "uC-out" is high (for example, +24V). There is going to be a voltage divider formed, but that doesn't matter, as the input voltage is already high enough.

On the right, with Q2, is my configuration. I think that:

- Since an NPN transistor's base is not a high-impedance point like a MOSFET's or a JFET's gate, the \$H_{FE}\$ of the transistor is less than 500, and at least 0.6V is needed to turn the transistor ON, a pull-down resistor is not critical, and in most cases is not even needed.
- If a pull-down resistor is going to be put on the board, then the value of exactly 10K is a myth. It depends on your power budget. A 12K would do fine, as would a 1K.
- If the configuration on the left, with Q1, is used, then a voltage divider is created, which may create problems if the input signal that is used to switch the transistor ON is low.

So, to clarify things, my questions are:

- Is a 10K pull-down resistor a rule-of-thumb that I should apply every time?
- What are the things to consider when determining a pull-down resistor's value?
- Is the pull-down resistor really needed in every application? In what cases is the pull-down resistor needed?
- Which configuration would you prefer, and why? If none, what would be a better configuration?
Summarised solution:

The two configurations are close to equivalent. Either would work equally well in almost all cases. In a situation where one was better than the other, the design would be excessively marginal for real-world use (as anything so crucial as to make the two differ substantially means the operation is "right on the edge").

\$R_{2}\$ or \$R_{4}\$ are needed only when \$V_{in}\$ can be open circuit, and they are a good idea in that case. Values up to about 100K are probably OK in most cases. 10k is a good safe value in most cases.

A secondary effect in bipolar transistors (which I have alluded to in my answer) means that R2 and R4 may be needed to sink Icb reverse-bias leakage current. If this is not done, then that current will be carried by the base-emitter junction and can cause device turn-on. This is a genuine real-world effect which is well known and well documented, but not always well taught in courses. See my answer addition.

Left-hand case:

- Drive voltage is decreased by a factor of \$\frac{10}{11}\$, which means 9% less.
- Base sees 10K to ground if the input is open circuit.
- If the input is LOW, then the base sees about 1K to ground. Actually 1K//10K = essentially the same.

Right-hand case:

- Drive = 100% of \$V_{in}\$ is applied via 1K.
- Base sees 10K to ground if \$V_{in}\$ is open circuit (as opposed to 11K).
- If the input is LOW, the base sees 1K, which is essentially the same.

R2 and R4 act to shunt the base leakage current to ground. For low-power or small-signal jellybean transistors, up to several Watts rating, this current is very small and usually will not turn the transistor ON, but it just might in extreme cases - so say 100K would usually be enough to keep the base LOW. This only applies if \$V_{in}\$ is open circuit. If \$V_{in}\$ is grounded, which means it is LOW, then R1 or R5 are from base to ground and R2 or R4 are not needed. Good design includes these resistors if \$V_{in}\$ may ever be open circuit (e.g. a processor pin during startup may be open circuit or undefined).

Here is an example where a very short "blip" due to a pin floating was of major consequence: A very long time ago, I had a circuit controlling an 8-track open-reel data tape drive. When the system was first turned on, the tape would run backwards at high speed and despool. This was "very, very, very annoying". The code was checked and no fault was found. It turned out that the port drive went open circuit as the port initialized, and this allowed the floating line to be pulled high by the tape deck, which put a rewind code on the tape port. It rewound! The initialisation code did not explicitly command the tape to stop, as it was assumed that it was already stopped and would not start by itself. Adding an explicit stop command meant that the tape would twitch but not despool. (Counts on fingers of the brain - hmmm, 34 years ago. That was at the very start of 1978 - now almost 38 years ago as I edit this answer. Yes, we had microprocessors back then. Just :-).)

Specifics:

"A 10K resistor is needed directly in the base to prevent Q1 from switching ON unintentionally. If the configuration on the right, with Q2, is used, then the resistance will be too weak to pull the base down."

No! For practical purposes, 10K = 11K for 99.8% of the time, and even 100K would work in most cases.

"R2 also protects VBE from over-voltage and gives stability in case of temperature changes."

There is no practical difference in either case.
"R1 protects from over-current to Q1's base, and will be a bigger-value resistor in case the voltage from 'uC-out' is high (for example, +24V). There is going to be a voltage divider formed, but that doesn't matter, as the input voltage is already high enough."

This has some merit. R1 is dimensioned to provide the desired base-drive current, so yes:

\$R_{1} = \dfrac{V}{I} = \dfrac{(V_{in} - V_{be})}{I_{desired\, base\, drive}}\$

As \$V_{BE}\$ is low and you design for more than enough current:

\$R_{1} \cong \dfrac{V_{in}}{I_{b\,desired}}\$

\$I_{base \ desired} >> \frac{I_c}{\beta}\$ - where \$\beta\$ = current gain.

If \$\beta_{nominal} = 400\$ (e.g. BC337-40, where \$\beta =\$ 250 to 600), then design for \$\beta \leq 100\$ unless there are special reasons not to. For instance, if \$\beta_{nominal} = 400\$ then \$\beta_{design} = 100\$.

If \$Ic_{max} = 250mA\$ and \$V_{in} = 24V\$ then

$$I_b = \frac{I_c}{\beta} = \frac{250mA}{100} = 2.5mA$$

$$R_b = \frac{V}{I} = \frac{24V}{2.5mA} = 9.6k \Omega$$

We could use 10k, as beta is conservative, but 8.2k or even 4.7k is OK.

$$P_{R=4.7k} = \frac{V^2}{R} = \frac{24^2}{4.7k} = 123mW$$

This would be OK with a \$\frac{1}{4}W\$ resistor, but 123 mW may not be totally trivial, so one may wish to use the 10k resistor instead. Note that the switched collector power = V x I = 24V x 250mA = 6 Watts.

"On the right, with Q2, is my configuration. I think that: since an NPN transistor's base is not a high impedance point like a MOSFET or a JFET, and the HFE of the transistor is less than 500, and at least 0.6V is needed to turn the transistor ON, a pull-down resistor is not critical, and in most cases is not even needed."

As above - sort of, yes, BUT base leakage will bite you sometimes. Murphy says that, without the pull-down, it will accidentally fire the potato-cannon into the crowd just before the main act, but that a 10k to 100k pull-down will save you.

"If a pull-down resistor is going to be put in the board, then the value of exact 10K is a myth. It depends on your power budget. A 12K would do fine as well as a 1K."

Yes! 10k = 12k = 33k. 100k MAY be getting a bit high. Note that all of this applies only if Vin can go open circuit. If Vin is either high or low or anywhere in between, then the path through R1 or R5 will dominate.

"If the configuration on the left, with Q1, is used, then a voltage divider is created and may create problems if the input signal that is used to switch the transistor ON is low."

Only in very, very, very, very extreme cases, as shown:

$$ I_{R1} = \frac{V}{R} = \frac{V_{in}-V_{be}}{R_1} $$

$$ I_{R2} = \frac{V_{be}}{R_2} $$

So the fraction that R2 will "steal" is

$$ \frac{I_{R2}}{I_{R1}} = \frac{\frac{V_{be}}{R_2}}{\frac{V_{in}-V_{be}}{R_1}} = \frac{R_1}{R_2} \times \frac{V_{be}}{V_{in}-V_{be}} $$

If \$R_1 = 1k\$, \$R_2 = 10K\$ then

$$\frac{R_1}{R_2} = 0.1$$

and if \$V_{be} = 0.6V\$, \$V_{in} = 3.6V\$ (to make the sums clearer) then

$$ \frac{V_{be}}{V_{in}-V_{be}} = \frac{0.6}{3.0} = 0.2 $$

So the overall fraction of drive lost is \$0.1 \times 0.2 = 0.02 = 2\%\$ - i.e. even with 1k/10k the loss of drive is minimal. If you can judge beta so closely that a 2% drive loss matters, then you should be in the space program. Orbital launchers work with safety margins in the 1% - 2% range in some key areas. When your payload to orbit is 3% to 10% of your launch mass (or less), every % of safety margin is a bite out of our lunch.
The latest North Korean orbital launch attempt used an actual safety margin of -1% to -2% somewhere critical, apparently, and "summat gang agley". They are in good company - the US and USSR lost many, many launchers in the early 1960s. I knew a man who used to build Atlas missiles early on. What fun they had. (One Russian system NEVER produced a successful launch - too complex.) The UK launched one satellite ever, FWIW.

ADDED:

It has been suggested in comments that "R2 and R4 are never needed, because an NPN is a CURRENT-controlled device. R2 and R4 would only make sense for VOLTAGE-controlled devices, like MOSFETs", and "How can a pull-down be needed when the MCU output is hi-Z, and the transistor is controlled by current?"

This suggestion in various forms has been repeated by enough people that it is worth emphasizing: if a bipolar transistor's base is left floating, then reality AND the relevant data sheet information both demonstrate that a small amount of collector current can flow under specified conditions. The conditions where this typically can occur are described below. I have personally seen real-world situations where this effect caused spurious turn-on problems. If your worst-case situation, using worst-case (not typical) datasheet parameters, does not fulfill these conditions, and/or the worst-case results do not concern you, then a base pull-down is not strictly essential.

There is an important secondary effect in bipolar transistors which gives R2 and R4 a useful and sometimes essential role. I'll discuss the R2 version, as it is the same as the R4 version but slightly "purer" for this case (i.e., R1 becomes irrelevant).

If Vin is open-circuit, then R2 is connected from base to ground and R1 has no effect. The base APPEARS to be grounded and to have no signal source. However, the CB junction is effectively a reverse-biased silicon diode. Reverse leakage current will flow through the CB diode into the base. If no external path to ground is provided, this current will then flow via the forward-biased base-emitter diode to ground. This current will notionally result in a collector current of beta x Icb leakage, but at such low currents that you need to look at the underlying equations and/or published device data.

A BC337 (datasheet here) has an Icb cutoff of about 0.1 uA with Vbe = 0. Iceo, the collector-emitter current with the base open, is about 200 nA in this case. Vc is 40V in that example, but the current approximately doubles per 10 degrees C rise, that spec is at 25C, and the effect is relatively voltage-independent. The two are closely related. At around 55C you may get 1 uA - not much. If the usual Ic is 1 mA, then 1 uA is irrelevant. Probably. I have seen real-world circuits where omission of R2 caused spurious turn-on problems. With R2 = say 100K, 1 uA will produce a 0.1V voltage rise and all is well.
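For reference, the key numbers from the answer above can be reproduced in a few lines of C (the component values are the ones used in the worked examples):

    #include <stdio.h>

    int main(void)
    {
        /* Base resistor for saturated switching: */
        double Vin = 24.0, Ic = 0.25, beta_design = 100.0;
        double Ib = Ic / beta_design;                /* 2.5 mA */
        printf("Rb ~= %.1f k\n", Vin / Ib / 1e3);    /* ~9.6 k */

        /* Fraction of drive "stolen" by the pull-down (R1=1k, R2=10k): */
        double R1 = 1e3, R2 = 10e3, Vbe = 0.6, Vdrv = 3.6;
        printf("drive lost: %.1f %%\n",
               (R1 / R2) * (Vbe / (Vdrv - Vbe)) * 100.0);   /* 2 %    */

        /* Vbe rise from 1 uA of Icb leakage into a 100k pull-down: */
        printf("leakage rise: %.2f V\n", 1e-6 * 100e3);     /* 0.10 V */
        return 0;
    }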
{ "source": [ "https://electronics.stackexchange.com/questions/30017", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5035/" ] }
30,105
From what I understand, the process of programming an FPGA comes in two parts: Encode the hardware description into bits that the FPGA can understand (i.e. write some HDL and compile it) Load the compiled HDL onto the FPGA. My question is: "What does the FPGA do with the compiled HDL?". At the moment, I think of FPGAs as "mouldable hardware", where wires and logic gates can be moulded to whatever you want. One of the nice things is that the mouldability is permanent: FPGAs can be reprogrammed. How do FPGAs interpret compiled HDL? How is permanent mouldability achieved?
Judging by your other question, you're a Xilinx guy. So I highly suggest getting the data sheet for your Xilinx chip and going to the Functional Description chapter. For the Spartan 3 chip that I use, it's 42 pages of fun reading. It details exactly what components are inside an FPGA - the IOBs, CLBs, slices, LUTs, Block RAM, Multipliers, Digital Clock Manager, Clock Network, Interconnect, and some very basic configuration information. You need to understand this information if you want to know what a "compiled HDL" looks like. Once you're familiar with your FPGA's architecture, then you can understand this process. First, your HDL design is run through the synthesis engine, which turns your HDL into basically RTL. Then the Mapper processes the results from Synthesis, "mapping" them onto the available pieces of FPGA architecture. Then the Router does Place And Route (PAR), which figures out where those pieces go and how to connect them. Finally, the results from PAR are turned into a BIT file. Typically this BIT file is then transformed in some way so that it can be loaded into a Flash chip, so that the FPGA can be programmed automatically when it powers up. This bit file describes the entire FPGA program. For instance, the CLBs in a Spartan 3 are composed of slices, which are composed of LUTs, which are just 16-address 1-bit SRAMs. So one thing the BIT file will contain is exactly what data goes into each address of the SRAM. Another thing the BIT file contains is how each input of the LUT is wired to the connection matrix. The BIT file will also contain the initial values that go inside the block RAM. It will describe what is connected to the set and reset pins of each flip flop in each slice. It will describe how the carry chain is connected. It will describe the logic interface for each IOB (LVTTL, LVCMOS, LVDS, etc). It will describe any integrated pull-up or pull-down resistors. Basically, everything. For Xilinx, the FPGA's memory is cleared when configuration is initiated (i.e. PROG_B is asserted). Once memory is clear, INIT_B goes high to indicate that phase is complete. The BIT file is then loaded, either through JTAG or the Flash chip interface. Once the program is loaded, the Global Set/Reset (GSR) is pulsed, resetting all flip flops to their initial state. The DONE pin then goes high, to indicate configuration is complete. Exactly one clock cycle later, the Global Three-State signal (GTS) is released, allowing outputs to be driven. Exactly one clock cycle later, the Global Write Enable (GWE) is released, allowing the flip flops to begin changing state in response to their inputs. Note that even this final configuration process can be slightly reordered depending on flags that are set in the BIT file. EDIT: I should also add that the reason the FPGA program is not permanent is because the logic fabric is composed of volatile memory (e.g. SRAM). So when the FPGA loses power, the program is forgotten. That's why they need e.g. Flash chips as non-volatile storage for the FPGA program, so that it can be loaded whenever the device is powered on.
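To make the "a LUT is just a 16-address 1-bit SRAM" idea concrete, here is a C sketch of a 4-input LUT. The "program" is nothing but the 16 bits loaded into it - exactly the kind of data the BIT file carries (the example function is arbitrary):

    #include <stdio.h>
    #include <stdint.h>

    /* Build the LUT contents for y = (a & b) | (c ^ d), computed offline
       the way synthesis tools precompute truth tables.                  */
    static uint16_t lut_program(void)
    {
        uint16_t bits = 0;
        for (int addr = 0; addr < 16; addr++) {
            int a = addr & 1,        b = (addr >> 1) & 1;
            int c = (addr >> 2) & 1, d = (addr >> 3) & 1;
            if ((a & b) | (c ^ d))
                bits |= (uint16_t)(1u << addr);
        }
        return bits;    /* this is what a bitstream loads into the LUT */
    }

    /* At "run time" the LUT just looks the answer up by address. */
    static int lut_eval(uint16_t bits, int a, int b, int c, int d)
    {
        int addr = a | (b << 1) | (c << 2) | (d << 3);
        return (bits >> addr) & 1;
    }

    int main(void)
    {
        uint16_t bits = lut_program();
        printf("LUT contents: 0x%04X\n", bits);
        printf("f(1,1,0,0) = %d\n", lut_eval(bits, 1, 1, 0, 0));  /* 1 */
        return 0;
    }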
{ "source": [ "https://electronics.stackexchange.com/questions/30105", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5872/" ] }
30,238
I need a way to invert a digital signal i.e. if the input is high, I want the output to be low and if the input is low I want the output to be high. I think this can be accomplished with a single PNP transistor, but wanted to verify that here. The voltages that I'm dealing with are less than 5V.
Or, since you're talking about digital signals anyway, you use an inverter. A is the input (for gates with more inputs there will be A, B, C, etc.), Y is the output. If it doesn't complicate your schematic too much, place the symbol with the input to the left. Nexperia has single-gate inverters. Just four connections: power supply, ground, input and output.

It can be done with a transistor and two resistors, though. It's a simple schematic, but you still have to make a few simple calculations. You'll have exactly the same connections as with the inverter. BTW, a PNP is an option, but more often an NPN will be used.

edit (re your comment):

If the input signal is high, current will flow through R2 and the transistor's base-emitter junction (base, not gate). This current will be amplified, and the collector current through R1 will cause a voltage drop, so that the output will be low. Input high, output low.

If the input signal is low, there won't be any base current, and no collector current. No current through R1 means no voltage drop, so the output will be at +V. Input low, output high.

This already leads a bit further, but like I said in a comment to sandun, the output is highly asymmetrical. If the output is connected to a capacitor, a high output level means that the capacitor is charged through R1, which will result in an exponential slope with a time constant R1C. When the output goes low, the capacitor will be discharged through a much lower resistance and the slope will be much steeper. You won't get this difference with CMOS gates, which have symmetrical source/sink capabilities. The transistor version's input will also draw a (small) current when high; the CMOS version will only have a small leakage current, both when high and low. Overall, the integrated logic gate is the winner.
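The "few simple calculations" typically go like this C sketch (the supply, current and gain figures are example assumptions, not values from the schematic):

    #include <stdio.h>

    int main(void)
    {
        double Vcc = 5.0, Vin = 5.0, Vbe = 0.7;
        double Ic  = 5e-3;                 /* collector current when "on" */
        double beta_min = 100.0;           /* conservative current gain   */

        double R1 = Vcc / Ic;              /* collector resistor: 1k      */
        double Ib = 10.0 * Ic / beta_min;  /* ~10x overdrive to saturate  */
        double R2 = (Vin - Vbe) / Ib;      /* base resistor: about 8.6k   */
        printf("R1 = %.0f ohm, R2 = %.0f ohm\n", R1, R2);
        return 0;
    }

Round to the nearest standard values (say 1k and 8.2k) and the inverter will saturate cleanly.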
{ "source": [ "https://electronics.stackexchange.com/questions/30238", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5432/" ] }
30,370
I'm looking for a time and memory efficient solution to calculate a moving average in C. I need to avoid dividing because I'm on a PIC 16 which has no dedicated division unit. At the moment, I just store all values in a ring buffer and simply store and update the sum each time a new value arrives. This is really efficient, but unfortunately uses most of my available memory...
As others have mentioned, you should consider an IIR (infinite impulse response) filter rather than the FIR (finite impulse response) filter you are using now. There is more to it, but at first glance FIR filters are implemented as explicit convolutions and IIR filters with equations.

The particular IIR filter I use a lot in microcontrollers is a single-pole low-pass filter. This is the digital equivalent of a simple R-C analog filter. For most applications, these will have better characteristics than the box filter that you are using. Most uses of a box filter that I have encountered are a result of someone not paying attention in digital signal processing class, not a result of needing its particular characteristics. If you just want to attenuate high frequencies that you know are noise, a single-pole low-pass filter is better. The best way to implement one digitally in a microcontroller is usually:

FILT <-- FILT + FF(NEW - FILT)

FILT is a piece of persistent state. This is the only persistent variable you need to compute this filter. NEW is the new value that the filter is being updated with this iteration. FF is the filter fraction, which adjusts the "heaviness" of the filter. Look at this algorithm and see that for FF = 0 the filter is infinitely heavy, since the output never changes. For FF = 1, it's really no filter at all, since the output just follows the input. Useful values are in between. On small systems you pick FF to be 1/2^N so that the multiply by FF can be accomplished as a right shift by N bits. For example, FF might be 1/16 and the multiply by FF therefore a right shift of 4 bits. Otherwise this filter needs only one subtract and one add, although the numbers usually need to be wider than the input value (more on numerical precision in a separate section below).

I usually take A/D readings significantly faster than they are needed and apply two of these filters cascaded. This is the digital equivalent of two R-C filters in series, and attenuates by 12 dB/octave above the rolloff frequency. However, for A/D readings it's usually more relevant to look at the filter in the time domain by considering its step response. This tells you how fast your system will see a change when the thing you are measuring changes.

To facilitate designing these filters (which only means picking FF and deciding how many of them to cascade), I use my program FILTBITS. You specify the number of shift bits for each FF in the cascaded series of filters, and it computes the step response and other values. Actually I usually run this via my wrapper script PLOTFILT. This runs FILTBITS, which makes a CSV file, then plots the CSV file. For example, here is the result of "PLOTFILT 4 4":

The two parameters to PLOTFILT mean there will be two filters cascaded of the type described above. The values of 4 indicate the number of shift bits to realize the multiply by FF. The two FF values are therefore 1/16 in this case.

The red trace is the unit step response, and is the main thing to look at. For example, this tells you that if the input changes instantaneously, the output of the combined filter will settle to 90% of the new value in 60 iterations. If you care about 95% settling time then you have to wait about 73 iterations, and for 50% settling time only 26 iterations.

The green trace shows you the output from a single full-amplitude spike. This gives you some idea of the random noise suppression. It looks like no single sample will cause more than a 2.5% change in the output.
The blue trace is to give a subjective feeling of what this filter does with white noise. This is not a rigorous test, since there is no guarantee what exactly the content was of the random numbers picked as the white noise input for this run of PLOTFILT. It's only to give you a rough feeling of how much it will be squashed and how smooth it is.

PLOTFILT, maybe FILTBITS, and lots of other useful stuff, especially for PIC firmware development, is available in the PIC Development Tools software release at my Software downloads page. In addition, a web-based port of PLOTFILT can be found here.

Added about numerical precision

I see from the comments and now a new answer that there is interest in discussing the number of bits needed to implement this filter. Note that the multiply by FF will create Log2(FF) new bits below the binary point. On small systems, FF is usually chosen to be 1/2^N so that this multiply is actually realized by a right shift of N bits.

FILT is therefore usually a fixed-point integer. Note that this doesn't change any of the math from the processor's point of view. For example, if you are filtering 10-bit A/D readings and N = 4 (FF = 1/16), then you need 4 fraction bits below the 10-bit integer A/D readings. On most processors, you'd be doing 16-bit integer operations due to the 10-bit A/D readings. In this case, you can still do exactly the same 16-bit integer operations, but start with the A/D readings left-shifted by 4 bits. The processor doesn't know the difference and doesn't need to. Doing the math on whole 16-bit integers works whether you consider them to be 12.4 fixed point or true 16-bit integers (16.0 fixed point).

In general, you need to add N bits for each filter pole if you don't want to add noise due to the numerical representation. In the example above, the second filter of two would have to have 10+4+4 = 18 bits to not lose information. In practice, on an 8-bit machine that means you'd use 24-bit values. Technically only the second pole of two would need the wider value, but for firmware simplicity I usually use the same representation, and thereby the same code, for all poles of a filter.

Usually I write a subroutine or macro to perform one filter pole operation, then apply that to each pole. Whether a subroutine or macro depends on whether cycles or program memory are more important in that particular project. Either way, I use some scratch state to pass NEW into the subroutine/macro, which updates FILT, but also loads that into the same scratch state NEW was in. This makes it easy to apply multiple poles, since the updated FILT of one pole is the NEW of the next one. When a subroutine, it's useful to have a pointer point to FILT on the way in, which is updated to just after FILT on the way out. That way the subroutine automatically operates on consecutive filters in memory if called multiple times. With a macro you don't need a pointer, since you pass in the address to operate on each iteration.

Code Examples

Here is an example of a macro as described above for a PIC 18:

////////////////////////////////////////////////////////////////////////////////
//
//   Macro FILTER filt
//
//   Update one filter pole with the new value in NEWVAL.  NEWVAL is updated to
//   contain the new filtered value.
//
//   FILT is the name of the filter state variable.  It is assumed to be 24 bits
//   wide and in the local bank.
//
//   The formula for updating the filter is:
//
//     FILT <-- FILT + FF(NEWVAL - FILT)
//
//   The multiply by FF is accomplished by a right shift of FILTBITS bits.
//
/macro filter
  /write
dbankif  lbankadr
         movf    [arg 1]+0, w ;NEWVAL <-- NEWVAL - FILT
         subwf   newval+0
         movf    [arg 1]+1, w
         subwfb  newval+1
         movf    [arg 1]+2, w
         subwfb  newval+2
  /write
  /loop n filtbits            ;once for each bit to shift NEWVAL right
         rlcf    newval+2, w  ;shift NEWVAL right one bit
         rrcf    newval+2
         rrcf    newval+1
         rrcf    newval+0
    /endloop
  /write
         movf    newval+0, w  ;add shifted value into filter and save in NEWVAL
         addwf   [arg 1]+0, w
         movwf   [arg 1]+0
         movwf   newval+0
         movf    newval+1, w
         addwfc  [arg 1]+1, w
         movwf   [arg 1]+1
         movwf   newval+1
         movf    newval+2, w
         addwfc  [arg 1]+2, w
         movwf   [arg 1]+2
         movwf   newval+2
  /endmac

And here is a similar macro for a PIC 24 or dsPIC 30 or 33:

////////////////////////////////////////////////////////////////////////////////
//
//   Macro FILTER ffbits
//
//   Update the state of one low pass filter.  The new input value is in W1:W0
//   and the filter state to be updated is pointed to by W2.
//
//   The updated filter value will also be returned in W1:W0 and W2 will point
//   to the first memory past the filter state.  This macro can therefore be
//   invoked in succession to update a series of cascaded low pass filters.
//
//   The filter formula is:
//
//     FILT <-- FILT + FF(NEW - FILT)
//
//   where the multiply by FF is performed by an arithmetic right shift of
//   FFBITS.
//
//   WARNING: W3 is trashed.
//
/macro filter
  /var new ffbits integer = [arg 1] ;get number of bits to shift
  /write
  /write "         ;   Perform one pole low pass filtering, shift bits = " ffbits
  /write "         ;"
         sub     w0, [w2++], w0      ;NEW - FILT --> W1:W0
         subb    w1, [w2--], w1
         lsr     w0, #[v ffbits], w0 ;shift the result in W1:W0 right
         sl      w1, #[- 16 ffbits], w3
         ior     w0, w3, w0
         asr     w1, #[v ffbits], w1
         add     w0, [w2++], w0      ;add FILT to make final result in W1:W0
         addc    w1, [w2--], w1
         mov     w0, [w2++]          ;write result to the filter state, advance pointer
         mov     w1, [w2++]
  /write
  /endmac

Both these examples are implemented as macros using my PIC assembler preprocessor, which is more capable than either of the built-in macro facilities.
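For readers not working in PIC assembler, here is a minimal C sketch of the same two-pole, shift-by-4 arrangement. It is an illustrative translation of the formula above, not code from the FILTBITS/PLOTFILT release; the function name, variable names, and the 32-bit state width are my own assumptions.

#include <stdint.h>

#define FILTBITS 4                    /* FF = 1/2^4 = 1/16 for each pole */
#define FRACBITS (2 * FILTBITS)       /* N extra fraction bits per pole */

static int32_t filt1, filt2;          /* persistent filter state */

/* Update both poles with a new 10 bit A/D reading and return the
   filtered value scaled back to the original 10 bit range. */
int16_t filter_update(int16_t adc10)
{
    int32_t new_val = (int32_t)adc10 << FRACBITS;   /* make room for fraction bits */

    filt1 += (new_val - filt1) >> FILTBITS;         /* FILT <-- FILT + FF(NEW - FILT) */
    filt2 += (filt1 - filt2) >> FILTBITS;           /* second cascaded pole */

    return (int16_t)(filt2 >> FRACBITS);            /* drop the fraction bits */
}

One caveat: (new_val - filt1) can be negative on a falling input, and a right shift of a negative signed value is strictly implementation-defined in C. Virtually all compilers implement it as the arithmetic shift that is wanted here, but if that worries you, branch on sign or use unsigned offsets.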
{ "source": [ "https://electronics.stackexchange.com/questions/30370", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8496/" ] }
30,564
There are a number of CAN modules built into microcontrollers these days. The PIC18F2480 is an example of that. Is that microcontroller (with built-in CAN) capable of driving a CAN bus on its own or is an external CAN transceiver/controller required? I believe CAN has both a software and hardware layer and by the looks of it these CAN-enabled microcontrollers appear to have just the software, but it does not state that it can or cannot drive the CAN bus as is. I'm looking to connect more than six microcontrollers through a CAN bus and would like to know if I need a transceiver across all of them or whether the built-in stuff can handle the communication from a software and hardware perspective. Assume that I'll have necessary termination resistors and other small discrete components (caps, resistors, etc.)
This is a very good question. As a general rule, CAN requires a transceiver for every node:

However, under certain circumstances, you can actually get away without any transceivers! Those circumstances are:

Short bus length (much less than 1 meter)
Preferably all microcontrollers on the same PCB, or stack of PCBs
A low bit rate
An environment that isn't too electrically noisy

These aren't hard rules. You might get away with the maximum bit rate (1 Mbit/s) if you have a really short bus (10 cm). To achieve this, you need to know a little about what the transceiver does. Like most transceivers, it can output a high or a low to the bus (representing 1 and 0), but the 0 can dominate a 1. I.e. if two transceivers try to speak at the same time, and one is saying 1 and the other is saying 0, then the 0 will win. We can re-create the same situation simply using diodes:

See the Siemens application note AP2921: On-Board Communication via CAN without Transceiver

But here's something even more interesting: The PIC actually has hardware support for transceiverless CAN! You can configure the CAN TX pin so that it behaves in exactly the same way as the transceiver. This means you can wire up the CAN bus without the diodes. You'll still need the resistor though.
{ "source": [ "https://electronics.stackexchange.com/questions/30564", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/9409/" ] }
30,737
I'm a little confused about this one and don't know where to start. The idea is to have a microcontroller or FPGA output a PWM signal (5 V or 3.3 V while PWM is 100%), and then use a transistor to power a ventilator (fan) that needs 12 V to run. I know that I need to connect the grounds of the ventilator's power supply and the FPGA's (or µC's) power supply together. After that, I use a resistor in series with the transistor's collector to limit current. The part that's bugging me is how to connect the base to the PWM output pin. Which resistor value do I need to choose if I want 3.3 V to be 100%? And which value do I need if I want 5 V to be 100%? I mean, how can I "tell" the transistor that 3.3 V (or whatever voltage I'm operating on) is when it needs to power the ventilator at 100% capacity? I hope you can understand my question. Thank you for any answers!
A (two-level) PWM signal has two states: high and low. Regardless of whether the supply for your FPGA/MCU is 5 V or 3.3 V, you want the low state to turn into 0 V across your fan, and the high state to turn into 12 V across it (or vice versa). That way, by varying the duty cycle of the PWM signal, you will be able to drive the fan all along its working range.

The transistor (which can be a BJT or a MOSFET) has to work either completely off or completely on, to dissipate the minimum possible. If the supply is 12 V, you don't need any resistor in series with the fan. The transistor's collector or drain will be directly connected to the fan.

Also, use a Schottky diode in parallel with the fan, so that the cathode is at your +12 V node, and the anode is at the collector or drain. The fan is an inductive load, and you need to provide a path for its current once you turn off the transistor. Otherwise, excessive voltage may build up at the collector/drain of the transistor, and you may damage it.

Assume a BJT: you only need a resistor in series with the base, to limit the base current. We need to know how much current your fan draws at 12 V (let's call that \$I_{fan}\$), and also the \$\beta\$ of your transistor (the current gain from \$I_{base}\$ to \$I_{collector}\$). Choose the resistor this way:

\$ R_1 = \dfrac{V_{supply}-0.7}{10*\dfrac{I_{fan}}{\beta}} \$

\$ V_{supply} \$ is 3.3 or 5. The factor 10 is to have enough margin to make sure that the BJT will never work in the linear region.
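As a hypothetical worked example (the fan current and \$\beta\$ are assumed values, not from the question): for a 5 V PWM signal, a fan drawing 100 mA at 12 V, and a transistor with \$\beta = 100\$,

\$ R_1 = \dfrac{5 - 0.7}{10 \cdot \dfrac{100\,mA}{100}} = \dfrac{4.3\,V}{10\,mA} = 430\,\Omega \$

which happens to be a standard E24 value. The factor of 10 drives the base with roughly ten times the minimum current needed, so the BJT saturates cleanly over component tolerances.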
{ "source": [ "https://electronics.stackexchange.com/questions/30737", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5029/" ] }
30,750
I was surprised to see that the GPS receiver I'm working with has a pin reserved for outputting a 1 PPS (Pulse Per Second) signal. What is the point of this? Can't the microcontroller easily generate its own 1 PPS signal?
The 1 PPS output has a much lower jitter than anything an MCU can do. In some more demanding applications you can use that pulse to time things very accurately. With some scientific grade GPS's this 1 PPS output might be accurate to better than 1 ns.
{ "source": [ "https://electronics.stackexchange.com/questions/30750", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5872/" ] }
30,830
What happens in an embedded processor when execution reaches that final return statement? Does everything just freeze as it is (power consumption etc.), with one long eternal NOP in the sky? Or are NOPs continuously executed, or will the processor shut down altogether? Part of the reason I ask is that I am wondering if a processor needs to power down before it finishes execution, and if it does, how does it ever finish execution if it has powered down beforehand?
This is a question my dad always used to ask me. " Why doesn't it just run through all the instructions and stop at the end? "

Let's take a look at a pathological example. The following code was compiled in Microchip's C18 compiler for the PIC18:

void main(void)
{
}

It produces the following assembler output:

addr    opco    instruction
----    ----    -----------
0000    EF63    GOTO 0xc6
0002    F000    NOP
0004    0012    RETURN 0
.
.   some instructions removed for brevity
.
00C6    EE15    LFSR 0x1, 0x500
00C8    F000    NOP
00CA    EE25    LFSR 0x2, 0x500
00CC    F000    NOP
.
.   some instructions removed for brevity
.
00D6    EC72    CALL 0xe4, 0    // Call the initialisation code
00D8    F000    NOP             //
00DA    EC71    CALL 0xe2, 0    // Here we call main()
00DC    F000    NOP             //
00DE    D7FB    BRA 0xd6        // Jump back to address 00D6
.
.   some instructions removed for brevity
.
00E2    0012    RETURN 0        // This is main()
00E4    0012    RETURN 0        // This is the initialisation code

As you can see, main() is called, and at the end contains a return statement, although we didn't explicitly put it there ourselves. When main returns, the CPU executes the next instruction, which is simply a GOTO to go back to the beginning of the code. main() is simply called over and over again.

Now, having said this, this is not the way people would do things usually. I have never written any embedded code which would allow main() to exit like that. Mostly, my code would look something like this:

void main(void)
{
    while(1)
    {
        wait_timer();
        do_some_task();
    }
}

So I would never normally let main() exit.

"OK, OK," you're saying. All this is very interesting, that the compiler makes sure there's never a last return statement. But what happens if we force the issue? What if I hand coded my assembler, and didn't put a jump back to the beginning?

Well, obviously the CPU would just keep executing the next instructions. Those would look something like this:

addr    opco    instruction
----    ----    -----------
00E6    FFFF    NOP
00E8    FFFF    NOP
00EA    FFFF    NOP
00EC    FFFF    NOP
.
.   some instructions removed for brevity
.
7FF8    FFFF    NOP
7FFA    FFFF    NOP
7FFC    FFFF    NOP
7FFE    FFFF    NOP

The next memory address after the last instruction in main() is empty. On a microcontroller with FLASH memory, an empty instruction contains the value 0xFFFF. On a PIC at least, that op code is interpreted as a 'nop', or 'no operation'. It simply does nothing. The CPU would continue executing those nops all the way down the memory to the end.

What's after that? At the last instruction, the CPU's instruction pointer is 0x7FFE. When the CPU adds 2 to its instruction pointer, it gets 0x8000, which is considered an overflow on a PIC with only 32k FLASH, and so it wraps around back to 0x0000, and the CPU happily continues executing instructions back at the beginning of the code, just as if it had been reset.

You also asked about the need to power down. Basically you can do whatever you want, and it depends on your application.

If you had an application that only needed to do one thing after power on, and then do nothing else, you could just put a while(1); at the end of main() so that the CPU stops doing anything noticeable.

If the application required the CPU to power down, then, depending on the CPU, there will probably be various sleep modes available. However, CPUs have a habit of waking up again, so you'd have to make sure there was no time limit to the sleep, and no Watch Dog Timer active, etc.

You could even organise some external circuitry that would allow the CPU to completely cut its own power when it had finished.
See this question: Using a momentary push button as a latching on-off toggle switch .
{ "source": [ "https://electronics.stackexchange.com/questions/30830", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/9287/" ] }
30,952
I've come across multiple mentions of switches described as 1NO1NC. They are often described as having 2 options: ON/(OFF) and (ON)/OFF, and from what I gather they have 3 terminals for wires: a NO terminal, a NC terminal and a C (?) terminal. Can you explain what these mean? When would you use an NO or an NC switch?
NO = Normally open (open = open circuit = not creating a path for the current)
NC = Normally closed (closed = short circuit = creating a path for the current)
C = Common

(The drawings show the state in the absence of force.)

When you press a normally-open pushbutton, you provide a path for the current. When you press a normally-closed pushbutton, you impede the current from flowing. In a relay, when the coil is not energized, C and NC are connected. When the coil is energized, the magnetic field attracts the movable metal, and C and NO are connected instead.

Uses: NC pushbuttons are used in emergency stop buttons. You press them when an accident has occurred, or may occur, and you need to immediately stop some machine, whose action could damage someone or something. Normally-closed buttons are preferred for two reasons: a) they don't rely on creating a good contact, to signal something. They just have to open a circuit, which is much easier. An NC is more robust and therefore safer. b) they react quicker. For an NO button, the signal event happens at the end of the movement (when the movable part makes contact). For an NC button, the signal event happens at the beginning of the movement (when the movable part stops making contact).
{ "source": [ "https://electronics.stackexchange.com/questions/30952", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/9489/" ] }
31,603
I am trying to perform a software reset of my STM32F2. (Reference manual available here .) The relevant page of the reference manual (page 80) gives little information. Basically, the SYSRESETREQ bit of the Application Interrupt and Reset Control Register must be set. Now this page explains that to be able to modify the SYSRESETREQ , a specific "key" needs to be written to the VECTKEY bits. Neither document explains where this Application Interrupt and Reset Control Register is. What is its address, and how can I access it?
Why don't you use the CMSIS library? There is a specific function for that. Moreover, this is the code taken from the CMSIS library for a system software reset:

/******************************************************************************
 * @file:    core_cm3.h
 * @purpose: CMSIS Cortex-M3 Core Peripheral Access Layer Header File
 * @version: V1.20
 * @date:    22. May 2009
 *----------------------------------------------------------------------------
 *
 * Copyright (C) 2009 ARM Limited. All rights reserved.
 *
 * ARM Limited (ARM) is supplying this software for use with Cortex-Mx
 * processor based microcontrollers.  This file can be freely distributed
 * within development tools that are supporting such ARM based processors.
 *
 * THIS SOFTWARE IS PROVIDED "AS IS".  NO WARRANTIES, WHETHER EXPRESS, IMPLIED
 * OR STATUTORY, INCLUDING, BUT NOT LIMITED TO, IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE APPLY TO THIS SOFTWARE.
 * ARM SHALL NOT, IN ANY CIRCUMSTANCES, BE LIABLE FOR SPECIAL, INCIDENTAL, OR
 * CONSEQUENTIAL DAMAGES, FOR ANY REASON WHATSOEVER.
 *
 ******************************************************************************/

/* memory mapping struct for System Control Block */
typedef struct
{
  __I  uint32_t CPUID;    /*!< CPU ID Base Register                                  */
  __IO uint32_t ICSR;     /*!< Interrupt Control State Register                      */
  __IO uint32_t VTOR;     /*!< Vector Table Offset Register                          */
  __IO uint32_t AIRCR;    /*!< Application Interrupt / Reset Control Register        */
  __IO uint32_t SCR;      /*!< System Control Register                               */
  __IO uint32_t CCR;      /*!< Configuration Control Register                        */
  __IO uint8_t  SHP[12];  /*!< System Handlers Priority Registers (4-7, 8-11, 12-15) */
  __IO uint32_t SHCSR;    /*!< System Handler Control and State Register             */
  __IO uint32_t CFSR;     /*!< Configurable Fault Status Register                    */
  __IO uint32_t HFSR;     /*!< Hard Fault Status Register                            */
  __IO uint32_t DFSR;     /*!< Debug Fault Status Register                           */
  __IO uint32_t MMFAR;    /*!< Mem Manage Address Register                           */
  __IO uint32_t BFAR;     /*!< Bus Fault Address Register                            */
  __IO uint32_t AFSR;     /*!< Auxiliary Fault Status Register                       */
  __I  uint32_t PFR[2];   /*!< Processor Feature Register                            */
  __I  uint32_t DFR;      /*!< Debug Feature Register                                */
  __I  uint32_t ADR;      /*!< Auxiliary Feature Register                            */
  __I  uint32_t MMFR[4];  /*!< Memory Model Feature Register                         */
  __I  uint32_t ISAR[5];  /*!< ISA Feature Register                                  */
} SCB_Type;

#define SCS_BASE (0xE000E000)         /*!< System Control Space Base Address */
#define SCB_BASE (SCS_BASE + 0x0D00)  /*!< System Control Block Base Address */
#define SCB ((SCB_Type *) SCB_BASE)   /*!< SCB configuration struct          */

#define NVIC_AIRCR_VECTKEY (0x5FA << 16)  /*!< AIRCR Key for write access */
#define NVIC_SYSRESETREQ 2                /*!< System Reset Request       */

/* ##################################    Reset function  ############################################ */

/**
 * @brief  Initiate a system reset request.
 *
 * @param   none
 * @return  none
 *
 * Initialize a system reset request to reset the MCU
 */
static __INLINE void NVIC_SystemReset(void)
{
  SCB->AIRCR = (NVIC_AIRCR_VECTKEY | (SCB->AIRCR & (0x700)) | (1<<NVIC_SYSRESETREQ)); /* Keep priority group unchanged */
  __DSB();                                                                            /* Ensure completion of memory access */
  while(1);                                                                           /* wait until reset */
}
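A minimal usage sketch, assuming a project that already pulls in the CMSIS core header for the device (the stm32f2xx.h name follows ST's standard CMSIS device header convention; adjust it to your toolchain):

#include "stm32f2xx.h"      /* device header; includes the CMSIS core and SCB definitions */

void reboot(void)
{
    /* shut down or park peripherals here if needed */
    NVIC_SystemReset();     /* never returns; the MCU restarts from its reset vector */
}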
{ "source": [ "https://electronics.stackexchange.com/questions/31603", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5872/" ] }
31,618
Typically mobile devices that have a mains-powered supply will accept a voltage that is a multiple of some single battery voltage. For example, 4.5 volts is 1.5 volts (AA primary battery) 3 times and 36 volts is 3.6 volts (Li-Ion battery) 10 times. Now there're laptops that use external power supplies rated at exactly 19 volts. That isn't a multiple of anything suitable. Puzzles me a lot. Where does this voltage originate from?
Now there're laptops that use external power supplies rated at exactly 19 volts. That isn't a multiple of anything suitable. Puzzles me a lot.

This is not a design question as posed, but it has relevance to design of battery charging systems.

Summary: The voltage is slightly more than a multiple of the fully charged voltage of a Lithium Ion battery - the type used in almost every modern laptop.

Most laptops use Lithium Ion batteries. 19 V provides a voltage which is suitable for charging up to 4 x Lithium Ion cells in series using a buck converter to drop the excess voltage efficiently. Various combinations of series and parallel cells can be accommodated. Voltages slightly below 19 V could be used, but 19 V is a useful standard voltage that will meet most eventualities.

Almost all modern laptops use Lithium Ion (LiIon) batteries. Each battery consists of at least a number of LiIon cells in a series 'string' and may consist of a number of parallel combinations of several series strings. A Lithium Ion cell has a maximum charging voltage of 4.2 V (4.3 V for the brave and foolhardy). To charge a 4.2 V cell, at least slightly more voltage is required to provide some "headroom" to allow the charge control electronics to function. At the very least about 0.1 V extra might do, but usually at least 0.5 V would be useful, and more might be used.

One cell = 4.2 V
Two cells = 8.4 V
Three cells = 12.6 V
Four cells = 16.8 V
Five cells = 21 V

It is usual for a charger to use a switched mode power supply (SMPS) to convert the available voltage to the required voltage. A SMPS can be a Boost converter (steps voltage up) or a Buck converter (steps voltage down), or swap from one to the other as required. In many cases a buck converter can be made more efficient than a boost converter. In this case, using a buck converter it would be possible to charge up to 4 cells in series.

I have seen laptop batteries with 3 cells in series (3S), 4 cells in series (4S), 6 cells in 2 parallel strings of 3 (2P3S), 8 cells in 2 parallel strings of 4 (2P4S), and with a source voltage of 19 V it would be possible to charge 1, 2, 3 or 4 LiIon cells in series and any number of parallel strings of these.

For cells at 16.8 V this leaves a headroom of (19 − 16.8) = 2.2 volt for the electronics. Most of this is not needed, and the difference is accommodated by the buck converter, which acts as an "electronic gearbox", taking in energy at one voltage and outputting it at a lower voltage and appropriately higher current. With say 0.7 V of headroom it would notionally be possible to use say 16.8 V + 0.7 V = 17.5 V from the power supply - but using 19 V ensures that there is enough for any eventuality, and the excess is not wasted, as the buck converter converts the voltage down as required.

Voltage drop other than in the battery can occur in the SMPS switch (usually a MOSFET ), SMPS diodes (or synchronous rectifier), wiring, connectors, resistive current sense elements and protection circuitry. As little drop as possible is desirable to minimise energy wastage.

When a Lithium Ion cell is close to fully discharged its terminal voltage is about 3 V. How low they are allowed to discharge to is subject to technical considerations related to longevity and capacity. At 3 V/cell, 1/2/3/4 cells have a terminal voltage of 3/6/9/12 volt. The buck converter accommodates this reduced voltage to maintain charging efficiency. A good buck converter design can exceed 95 % efficiency and in this sort of application should never be under 90 % efficient (although some may be).
I recently replaced a netbook battery with 4 cells with an extended capacity version with 6 cells. The 4 cells version operated in 4S configuration and the 6 cell version in 2P3S. Despite the lower voltage of the new battery the charging circuitry accommodated the change, recognising the battery and adjusting accordingly. Making this sort of change in a system NOT designed to accommodate a lower voltage battery could be injurious to the health of the battery, the equipment and the user.
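To put a number on the "electronic gearbox" (illustrative values, not from any particular laptop): charging a 3S pack at 12.6 V and 3 A needs 37.8 W at the battery. With a 90 % efficient buck converter fed from the 19 V adapter, the input current is

\$ I_{in} = \dfrac{12.6\,V \times 3\,A}{0.9 \times 19\,V} \approx 2.2\,A \$

i.e. the adapter supplies less current than the battery receives, because the converter trades voltage for current instead of burning the difference as heat the way a linear regulator would.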
{ "source": [ "https://electronics.stackexchange.com/questions/31618", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3552/" ] }
31,675
I bought some single strand wire hoping to prototype on breadboards. Unfortunately it was too small to properly fit into the breadboard holes. So my question is: which gauge fits well in those small holes of a breadboard?
Plain single stranded copper wire works fine in these breadboards. That's what I primarily use. I find 22 gauge is about right.

Fancy specially made jumper wires may be more reliable in the long run, but cutting a piece of wire off a roll and stripping the ends is easy and quick. You can do that many times for the cost of one jumper wire.

A while ago I bought a set of pre-cut and pre-stripped wires for this use from Jameco. It sounded like a good idea at the time. Having the wires ready to use is nice, but they stupidly decided to bend the stripped ends at right angles right where the insulation ends. That makes them difficult to use except for the ones that only go 1, 2, or 3 holes. As I cut and strip more jumper wires from a 500 foot roll of #22 wire, I put them into the box the Jameco kit came in according to their lengths.

Over the years, the stripped ends of a few wires have broken right at the end of the insulation. This happens quite rarely, so the trouble to cut and strip a new wire is nothing.
{ "source": [ "https://electronics.stackexchange.com/questions/31675", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4277/" ] }
31,699
So, I understand, at least at a basic level, the method of operation of switching converters, both buck and boost. What puzzles me, though, is why buck converters in particular aren't simpler. Why not build a buck converter as a switch that charges a capacitor, with the switch controlled by a comparator comparing the output voltage to a reference? Wouldn't that be a lot simpler, allow you to use a more easily and cheaply available capacitor in place of the inductor, and skip the diode entirely?
Buck converters are as simple as boost converters. In fact, they are exactly the same circuit, just seen backwards, if we have the freedom to choose which switch (out of the two) will work as the controlled switch (or both, if it is a synchronous converter).

Regarding your second paragraph, if you did that, you would incur losses. More than with an inductor-based switched regulator, and much, much more than with a linear regulator. Every time you connect a voltage source to a capacitor whose initial voltage is not the same as that of the voltage source, you unavoidably waste energy. Even if you don't see an explicit resistor, in real life it is there, and (curiously) no matter how small it is, it will waste that same amount of energy. See here .

Charge pumps work as you say, but they are less efficient than inductor-based switched regulators. So, that's the justification for the --apparently unnecessary-- added complexity of inductor-based switched regulators.

More : To try to give you the intuition of why buck and boost converters exist, see this figure. If you try to move energy between two voltage sources that are not alike, or between two current sources that are not alike, you will have unavoidable losses. On the other hand, you can move energy (and even do some voltage or current scaling on the way) without any loss, if you connect a voltage source to a current source. The passive physical element that most resembles a current source is an inductor. That's why inductor-based switched regulators exist. Charge pumps would be on the left column. Their theoretical maximum efficiency is lower than 100% (the actual efficiency depends on the difference of voltages, and the capacitances). Inductor-based switched regulators are on the right column. Their theoretical maximum efficiency is 100% (!).
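For the record, here is the standard result behind "no matter how small [the resistance] is, it will waste that same amount of energy" (a textbook derivation, not taken from the linked page): charging a capacitor C from 0 to V through any series resistance R moves a charge Q = CV. The source delivers \$E = QV = CV^2\$, while the capacitor stores only \$\frac{1}{2}CV^2\$, so

\$ E_{lost} = CV^2 - \dfrac{1}{2}CV^2 = \dfrac{1}{2}CV^2 \$

independent of R; the resistance only sets how quickly that fixed amount of energy is dissipated. Applied to the circuit proposed in the question: in steady state the switch delivers every coulomb of charge at \$V_{in}\$ but the load consumes it at \$V_{out}\$, so the efficiency is at best \$V_{out}/V_{in}\$ - no better than a linear regulator, just noisier.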
{ "source": [ "https://electronics.stackexchange.com/questions/31699", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3707/" ] }
32,200
For example: SPI Flash largest size is 512MB at $9/ea: SPI Flash prices vs 2GB of microSD $3/ea (some with $1 shipping): microSD prices
Welcome to the world of consumer electronics and manufacturing in volume! Nobody ever said it made sense!

The difference in price has nothing to do with anything technical. It is purely the economics of the market. The SPI Flash is being sold in relatively low quantities and at somewhat high profit margins. The SD card is being sold in huge quantities and at a very low profit margin. While on the surface it might seem that the SD card would be more expensive, since it has a larger capacity and more "middlemen", that obviously isn't the case.

Another complication is that you could buy one make/model of SD card today, and then buy the same make/model in 3 months, and you would not be guaranteed to get the exact same thing. In those 3 months the internal design of the SD card could change. For most consumers this would not matter, but for some embedded users this could kill your application. Also, the SD card maker is not going to tell you about these changes. The same is not true of the SPI Flash, where you will most likely get the same thing for years. You can get SD cards from manufacturers that will guarantee that they sell the same part for years, but they will be much more expensive.

These things are true of many products, not just SPI Flash and SD cards. Memory (Flash and RAM) is the most obvious one. Another one is the iPad. In many cases it would be cheaper to buy iPads in bulk than to try to manufacture your own - even in 100,000 unit quantities. Don't underestimate the purchasing power of a large company building millions of units at a time.

There are other factors that I didn't cover: differences in part types, packages, purchasing channels, etc. But the problem you raise is more complicated than any one single factor can account for. My market/economic explanation is the biggest factor, but not the only one.
{ "source": [ "https://electronics.stackexchange.com/questions/32200", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/9844/" ] }
32,257
Noise figures in (op-amp) datasheets are expressed in V/√Hz, but:

Where does this unit come from?
Why the square root?
How should I pronounce it?
How should I interpret it? I know lower is better, but will a noise figure that doubles also double the trace width on my scope?
Is this value useful in calculating signal to noise ratio? Or what fun calculations can I do with this number?
Is noise always expressed in V/√Hz?
"Volt per square root hertz". Noise has a power spectrum, and as you might expect the wider the spectrum the more noise you'll see. That's why the bandwidth is part of the equation. The easiest is to illustrate with the equation for thermal noise in a resistor: \$ \dfrac{v^2}{R} = 4kT\Delta f \$ where \$k\$ is Boltzmann's constant in joules per kelvin, and T is temperature in kelvin. \$\Delta f\$ is the bandwidth in Hz, just the difference between maximum and minimum frequency. The left hand side is the expression for power: voltage squared over resistance. If you want to know the voltage you rearrange: \$ v = \sqrt{4kT R\Delta f} \$ That's why you have the square root of the bandwidth. If you would express the noise in terms of power or energy you wouldn't have the square root. All noise is frequency related, but energy spectra may differ. White noise has an equal power across all frequencies. For pink noise, on the other hand, noise energy decreases with frequency. Flicker noise is therefore also called \$1/f\$ noise. In that case bandwidth in itself is meaningless. The left graph shows the flat spectrum of white noise, the right graph shows pink noise decaying 3dB/octave: You can make noise visible on an oscilloscope, but you can't measure it that way. That's because what you can see is the peak value, what you need is the RMS value. The best thing you're getting out of it is that you can compare two noise levels, and estimate one is higher than the other. To quantify noise you have to measure its power/energy.
{ "source": [ "https://electronics.stackexchange.com/questions/32257", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8627/" ] }
32,310
From reading so many sources online, I still cannot grasp why different waveforms have harmonics. For example: when designing a silly amplitude modulation (AM) circuit that puts a square wave from a microcontroller into an antenna, how are harmonics generated? The signal is just "on" or "off"; how are there first, third, and fifth harmonics, and why do they get weaker? I've heard that oscilloscopes being able to measure up to the fifth harmonic of a square wave (or something similar) is important, but why would that make the reading different? Are these harmonics irrelevant in things such as data transfer (high=1, low=0) and do they only matter in situations such as audio or RF? Why do sinusoidal waves not have as many harmonics? Because the waveform is always moving and not flat going up (triangle) or horizontal (square), but circular, with an always changing value?
Sinusoidal waves don't have harmonics because it's exactly sine waves which, combined, can construct other waveforms. The fundamental wave is a sine, so you don't need to add anything to make it the sinusoidal signal.

About the oscilloscope: many signals have a large number of harmonics, some, like a square wave, in theory infinite. This is a partial construction of a square wave:

The blue sine which shows 1 period is the fundamental. Then there's the third harmonic (square waves don't have even harmonics), the purple one. Its amplitude is 1/3 of the fundamental, and you can see it's three times the fundamental's frequency, because it shows 3 periods. Same for the fifth harmonic (brown): its amplitude is 1/5 of the fundamental, and it shows 5 periods. Adding these gives the green curve. This is not yet a good square wave, but you already see the steep edges, and the wavy horizontal line will ultimately become completely horizontal if we add more harmonics. So this is how you will see a square wave on the scope if only up to the fifth harmonic is shown. This is really the minimum; for a better reconstruction you'll need more harmonics.

Like every non-sinusoidal signal the AM modulated signal will create harmonics. Fourier proved that every repeating signal can be deconstructed into a fundamental (same frequency as the waveform) and harmonics whose frequencies are multiples of the fundamental. It even applies to non-repeating waveforms. So even if you don't readily see what they would look like, the analysis is always possible.

This is a basic AM signal, and the modulated signal is the product of the carrier and the baseband signal. Now

\$ sin(f_C) \cdot sin(f_M) = \dfrac{cos(f_C - f_M) - cos(f_C + f_M)}{2} \$

So you can see that even a product of sines can be expressed as the sum of sines: the two cosines (the harmonics can have their phase shifted, in this case by 90°). The frequencies \$(f_C - f_M)\$ and \$(f_C + f_M)\$ are the sidebands left and right of the carrier frequency \$f_C\$. Even if your baseband signal is a more complex looking signal you can break the modulated signal apart into separate sines.
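What the partial construction above spells out term by term is the Fourier series of a square wave. For a unit square wave at angular frequency \$\omega\$:

\$ x(t) = \dfrac{4}{\pi}\left( \sin(\omega t) + \dfrac{1}{3}\sin(3\omega t) + \dfrac{1}{5}\sin(5\omega t) + \dots \right) \$

This is where the 1/3 and 1/5 amplitudes in the plot come from, and it also answers "why do they get weaker": the n-th (odd) harmonic has amplitude 1/n, so the harmonics roll off forever but never quite vanish.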
{ "source": [ "https://electronics.stackexchange.com/questions/32310", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/9876/" ] }
32,454
I have just bought some 555 timers and they are timers for microelectronic circuits...could anyone tell me what I need to search for in order to get larger ones which will fit a breadboard? Thanks
You want to look for a part in a "DIP" (dual in-line) package. Its 0.1 inch pin spacing fits the holes of a solderless breadboard directly.
{ "source": [ "https://electronics.stackexchange.com/questions/32454", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/9916/" ] }
32,511
The FDC855N comes in a 6-pin package, 4 of which are connected to the drain, and only 1 to the source. Why this difference? The source sees the same current as the drain, doesn't it?
That's not for the high current, it's for heat management. The single source pin can handle the current, and so would a single drain pin.

Schematically a MOSFET is often drawn symmetrically, because this way it's easier to show the asymmetry in the channel's conductivity. But discrete MOSFETs aren't constructed that way. More like this:

It will probably be packaged upside down, with the bulk of the drain connected to the lead frame, which directly connects to the 4 pins. Gate and source will be bonded to their pins. The bulk of the MOSFET will dissipate the most heat, and because of its direct contact with the pins the heat can be drained through them; it's a path with low thermal resistance. The drain may still be wire bonded as well, for proper electrical connection. But the bonding wire will pass much less of the heat. Thermal resistance in conduction (to the PCB's copper) is much lower than that of convection (the way heat is exchanged with the air above the package).

I found the following suggested pad layout for a Luxeon power LED . They claim it can easily achieve 7 K/W.

In SMT power MOSFETs which will have to dissipate quite some heat, it's advisable to have the drain pins on a larger copper plane, or allow the heat to dissipate through a series of (filled) vias, like for the Luxeon LED.
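A back-of-envelope junction temperature estimate shows why the copper matters (the numbers here are illustrative assumptions, not FDC855N datasheet values): with a junction-to-ambient thermal resistance \$R_{\theta JA}\$ of 100 K/W for a given pad layout and 0.5 W dissipated,

\$ T_J = T_A + P \cdot R_{\theta JA} = 25\,°C + 0.5\,W \cdot 100\,K/W = 75\,°C \$

Every halving of \$R_{\theta JA}\$ (more copper under the four drain pins, thermal vias) halves the temperature rise, which is exactly the job the multi-pin drain is designed to do.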
{ "source": [ "https://electronics.stackexchange.com/questions/32511", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3920/" ] }
32,533
What is a fast and elegant way to generate PWM without a microcontroller, to control a servo motor? With a potentiometer or other means to control the duty cycle, with a fixed period. Sorry about the mess; I want to control a hobby servo.
I recommend the (GASP!) 555 Timer in "astable" mode . You'll find everything you need in the link, but I copied the essentials here just for you! Astable mode gives you a variable PWM frequency, and allows for an adjustable duty cycle as well (high-time and low-time equations are in the link).

The circuit:

Note: I'd add an electrolytic cap across Vcc (positive lead) and GND (negative lead) to reduce the effect of dips in power supply voltage.

The PWM frequency:

Some defense for my answer compared to others in this post: most other answers require an intermediate waveform to generate a variable PWM signal, such as the common triangle wave/comparator method. I don't see much point in constructing a triangle wave generator (a significant circuit in and of itself) just as an intermediate step to solve your problem. The 555 is a great analog chip and does just what you need. I wish people didn't hate on them as much.
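For reference, the textbook astable relationships behind the "PWM frequency" figure above (standard 555 equations, consistent with the linked datasheet), with timing resistors R1, R2 and capacitor C:

\$ f \approx \dfrac{1.44}{(R_1 + 2R_2)\,C} \qquad duty_{high} = \dfrac{R_1 + R_2}{R_1 + 2R_2} \$

Note the plain astable can't produce a duty cycle below 50% (the high time always exceeds the low time); for short servo-style pulses the usual trick is to bypass R2 with a diode during the charging phase, or to treat the output as active-low.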
{ "source": [ "https://electronics.stackexchange.com/questions/32533", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/9838/" ] }
32,675
This question might sound silly, but it is very serious (although geeky, I must admit). When I use my electric toothbrush in front of my alarm clock (one of those with a big red LED display), the numbers seem to break apart. Why? With my current alarm clock the first 3 segments (a,b,c) make one group and the other 4 make another one. The striking thing is that both groups seem to slowly oscillate, in anti-phase. I cannot affirm that the grouping is the same for every display of the kind -- and I cannot check now because I only have this one LED display at home -- but I've seen this illusion on many different displays in the past. I think this illusion is fascinating (told you I'm a geek) and I would like to understand why it is... I believe that it has something to do with 1) the vibrations of the toothbrush making my eyes oscillate in their orbits so the image seems to move (a bit like when you touch your eye on the side with your finger and the whole image seems to move), 2) with the periodic refresh of the segments. I suspect that the two groups I'm seeing actually correspond to two groups of bars blinking in sync, so there really are two groups, (but why do they dislocate like that?) 3) with our periodic perceptual "refresh rate". Something similar to what makes us see car wheels as stationary when they rotate at a certain speed. I must say my competence in electronics is close to zero, so my questions might be trivial to you guys. How does the refresh of the line segments in a 7 segments display happen? Is it cycling (a,b,c,d,e,f,g,a,b,c...)? How come I see two groups, then? At what frequency are the LEDs blinking? I understand some of these questions depend on the specifications of my alarm clock, but since I've seen this illusion on basically every type of LED display I passed by with my toothbrush (microwave, VHS video recorder (yes the story started long ago. I bought the toothbrush I'm now using just for understanding this silly illusion, but it was actually as a teenager that I noticed it)...), I guess there is something constant there...
Yes, it sounds like you are seeing artifacts of the toothbrush frequency vibrating your head and therefore your eyes, and that beating against the LED refresh frequency.

This is a similar effect to eating potato chips (actually anything crunchy) while watching the LED display. In that case the head vibrations are more random, so parts of the LED display will appear to jump around randomly. Some segments will be displayed during a head-high part of a vibration, and others during head-low. These will appear in different locations.

LEDs are refreshed all kinds of ways. A lot has to do with how clever or competent the engineer was that wrote the firmware. I've seen naive refresh algorithms that simply do each digit in order. Those have the most apparent flicker for any one refresh rate. Better schemes interleave digits, sort of like the interlacing of old TV scan lines. The whole display is still refreshed at the same rate, but the apparent flicker is at least in part related to the interlace rate. There are fancy schemes which interleave both digits and segments, but these are often not possible with common displays where whole digits are already partially wired together.
{ "source": [ "https://electronics.stackexchange.com/questions/32675", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/9978/" ] }
32,687
I have this problem in a book and I did fine through the entire problem until I had to calculate the current gain. I've been stuck for the past 30 minutes. The formula I get is different from the one in the book and I'm wondering why. This is probably some simple mathematical thing and I'll probably end up embarrassing myself, but I just don't know why I get a different current gain. I can't continue if I don't understand this. What I get is that the second term in the current gain (circled in red on the picture) is reversed, that is, the numerator and denominator are swapped. I just need an explanation for that term in the book.
{ "source": [ "https://electronics.stackexchange.com/questions/32687", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/9980/" ] }
32,798
Well, can silicone sealant be used to encapsulate electronics, when it is in direct contact with the components and circuitry?
The best generic term is "Silicone Rubber". I'll refer to it as SR for brevity.

SR is a good to excellent encapsulant that has limitations (as does everything). It is not essential to use an electronics grade SR - these are usually dearer and may have Mil Spec ratings which are not essential. BUT see below for what IS essential.

For electronics and anything liable to be corroded you MUST use "neutral cure" SR. You can buy acid cure SR which exudes acetic acid as it sets. If it smells like vinegar then it's acid cure. Do not use "acid cure" silicone rubber for electronics. Neutral cure SRs will always say "neutral cure" or similar on the container. If they do not say this or similar they will be acid cure.

There are 2 main types of neutral cure SR (= NCSR) in common use. There are a number of other NCSRs but you will almost certainly not meet them.

Oxime cure is the cheaper and most common NCSR. It releases oximes and usually also methyl alcohol as it sets. Ventilation is needed and some people may get eczema skin reactions. The oximes can corrode bare bright copper during curing but this is usually not a major problem. Oxime cure NCSR does not bond to polycarbonate plastic.

Alkoxy NCSR is more costly and the better grades of NCSR are alkoxy. It releases methyl alcohol as it sets. This can be 5% - 10% by volume! So ventilation is an extremely good idea. It is good to work with - just be sensible.

ALL SRs that you meet are moisture cured!!!!!!!!!!!!!!!!!! Atmospheric water reacts with the SR to cause cross linking of the SR rubber. If the air is relatively dry it takes longer. If the air is completely dry then SR will not set!!! SR tubes are waterproof and water vapor proof. Once you open them they are NOT water vapor proof - a tube of SR that has been opened and then carefully resealed will still set completely hard in months to a year. Storage in 100% dry air MAY work.

To set right through, water vapor from the air must penetrate the SR. Penetration rates vary from about 1mm/day to 3mm/day. If you make a very thick blob of SR it can take many days to set in the middle. If you take two flat plates and overlap them and apply SR to the overlap, the air path to the overlap is d/2, where d is the smallest overlap dimension, and the path is through set SR and is very thin. So an overlap SR join may take many many many days to set. Major SR makers recommend not using vast overlaps in joins.

The 600 pound gorillas of the SR market are Dow Corning, Shinetsu (Japanese) and maybe BASF (BASF are the 600 pound gorilla of ANYTHING chemical but nobody notices). There are many other brands and many are good, but if it's made by DC or Shinetsu you know it's good. Not all brands are good. Some people put large amounts of filler in their SRs, to the extent that it works poorly. DC do make some cheap lower performance NCSRs but even these work well for most purposes. DC and other large makers may sell specific grades in selected markets which are not available in all countries. For example they sell "Dow Corning Neutral Plus" oxime cure NCSR in Asia. Unlike most DC SRs, it has no product number and US sources do not know of its existence. It costs a few $US a tube (e.g. in Hong Kong) and works well enough.

Many people do not know the following. Others will refuse to believe it: Note that SRs are NOT water vapor proof. Water vapor will permeate through them but liquid water will not. So a container "sealed" with SR will have an internal relative humidity comparable to that outside it!
SR is typically about 10x more water permeable than the EVA sealant/adhesive used to bond silicon "solar cells" & glass PV panels together. So a glass fronted PV panel with a "waterproof" backsheet is also not in fact sealed, and inside humidity levels are about the same as outside ones.

Keeping LIQUID water off your components is what is required to prevent major corrosion, fortunately. Corrosion still occurs with water vapour, but at a vastly reduced rate due to the much lower concentration of reactants. The other requirement is a void-free bond to the component. If there are voids then water vapour can condense to form liquid water and allow corrosion at greatly accelerated rates.

There are many many grades of NCSR - setting times vary from minutes to hours. Viscosity varies from very pourable to thixotropic. If you ask specific SR questions not covered here I may be able to answer them.

Added, February 2014: @DaveTweed linked to "Silicone materials for Electronic Devices and Component Assemblies" - here - which is probably the document that user 55online mentioned. This seems to be a very relevant document - it is specific brand related but contains much useful information.

__________________________________

Added 2018: As a guide only: Be wary of products that are unusually heavy or unusually light relative to competing brands. Heavier ones tend to be filled with CaCO3 or similar. Light ones (which I've mainly seen in "Asian market only" offerings) are filled with ???. SR when set will usually just float in water or perhaps sink slowly. Filled SR sinks more rapidly.
{ "source": [ "https://electronics.stackexchange.com/questions/32798", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1443/" ] }
32,830
Based on numerous internet resources, speaker wire acts like an antenna which picks up the transmitted signal of nearby cellphones and causes the speakers to buzz. But I'm not really buying that...

A 3.5 mm speaker cable is designed to carry 1 V. I've seen old setups where PC speakers are powered directly from the 3.5 mm jack (and I've tested playing unamplified sound directly from a PC through the jack, although the volume in my setup was not very high at all). How can the tiny bit of EM emitted by a cellphone radio cause a speaker system, designed to operate off of a fluctuating 1 V signal, to produce such a loud buzzing noise? I couldn't imagine the EM generating more than a few microvolts in a receiving antenna. Am I wrong? Thanks.

Updated - corrected voltage of line out to 1 V (see comments)

Update

I looked it up, and yes, it seems GSM transmits at 2 W. I'd like to do a sanity check with that figure to verify some of the answers which state that the transmitted power is significant. My physics is quite rusty, but I'll try... We know that the intensity of EM radiation around a source is:

$$I = \frac{P}{4\pi r^2}$$

So let's say we have a wire 2 m long and 0.2 mm wide (I hope this is a valid approximation for the wire) that is approximately 2 m away from a transmitting GSM module. Then for \$P = 2 W, I = 39 \frac{mW}{m^{2}}\$. Multiply that by the surface area of the wire (0.2 mm * 2 m). The total EM power along the wire is then 16 \$\mu W\$.

Like I said, I'm quite rusty, but is this not correct? Is this really significant enough to produce that sound without being amplified somehow? Perhaps the signal resonates? Or interferes directly with sound cards?
The buzzing is the AM-detected signal. The reason audio amplifiers are hit by GSM signals is that contemporary audio semiconductor parts are actually very functional up to the high GHz range. For the GSM 800-900 MHz range, any 80 mm copper trace works like a 1/4 wave antenna, or a stripline resonator. The signal is AM detected on any non-linearity (transistor or diode structures in chips) at multiple points of the amplifier simultaneously, also including power regulator chips and so on. It is translated into the audio range as tiny but very sharp and periodic dips or pops of the averaged conductivity of the non-linear parts (AM detection), which are DC powered. Think of a low speed oscilloscope trace showing a straight line with beads of UHF flashes. Such sharp spikes of consumed DC current will become audible through the amplifier.
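Two quick numbers back this up. The quarter wavelength at 900 MHz is

\$ \dfrac{\lambda}{4} = \dfrac{c}{4f} = \dfrac{3\cdot10^8}{4 \cdot 900\cdot10^6} \approx 83\,mm \$

hence the ~80 mm trace figure. And GSM transmits in bursts at its 217 Hz TDMA frame rate, so the detected envelope lands squarely in the audio band, which is why the interference has that characteristic dit-dit-dit buzz.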
{ "source": [ "https://electronics.stackexchange.com/questions/32830", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8380/" ] }
32,990
I'm just trying out Arduino Uno for the first time with 2 blinking LEDs on a breadboard. All the tutorials on the Internet seem to use a resistor. I do know the function of resistors, but does it really matter here? These LEDs are working just fine without a resistor.
Naughty! :-). If they say to use a resistor there's a good reason for that! Switch it off, NOW!

The resistor is there to limit the LED's current. If you omit it, the current limiting has to come from the Arduino's output, and it will not like it.

How do you find out what the resistor needs to be? You do know Ohm's Law? If you don't, write it down in big letters:

\$ V = I \cdot R \$

Voltage equals current times resistance. Or you could say

\$ R = \dfrac{V}{I} \$

It's the same thing. The voltage you know: Arduino runs at 5V. But not all that will go over the resistor. The LED also has a voltage drop, typically around 2V for a red LED. So there remains 3V for the resistor. A typical indicator LED will have a nominal current of 20mA, then

\$ R = \dfrac{5V - 2V}{20mA} = 150\Omega \$

The Arduino Uno uses the ATmega328 microcontroller. The datasheet says that the current for any I/O pin shouldn't exceed 40mA, what's commonly known as Absolute Maximum Ratings. Since you don't have anything to limit the current there's only the (low!) resistance of the output transistor. The current may very well be higher than 40mA, and your microcontroller will suffer damage.

edit

The following graph from the ATmega's datasheet shows what will happen if you drive the LED without a current limiting resistor:

Without load the output voltage is 5V as expected. But the higher the current drawn, the lower that output voltage will be; it will drop about 100mV for every extra 4mA load. That's an internal resistance of 25\$\Omega\$. Then

\$ I = \dfrac{5V - 2V}{25\Omega} = 120mA \$

The graph doesn't go that far, and the resistance will rise with temperature, but the current will remain very high. Remember that the datasheet gave 40mA as Absolute Maximum Rating. You have three times that. This will definitely damage the I/O port if you do this for a long time. And probably the LED as well. A 20mA indicator LED will often have 30mA as Absolute Maximum Rating.
{ "source": [ "https://electronics.stackexchange.com/questions/32990", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10067/" ] }
33,042
I'm using an optocoupler ( MOC3021 ) to sense the On/Off state of an electrical appliance using a microcontroller, the ATmega16L. How do I go about doing this? My mains supply specs are 230V, 50Hz. How do I design the surrounding circuit and select component values, like the resistors?

EDITED on 13th June 2012

Note: This is the first time I'm solving a circuit like this. Please send any helpful feedback (including things I did wrong or any improvements).

Referring to the above schematic: the idea is to use this circuit to determine whether the load is on or off. The output pin from the optocoupler connects to an external interrupt of the microcontroller I'm using, which is the ATmega16L. The interrupt will monitor the state of the load. After monitoring, I can toggle the state of the load using a relay (the relay acts as a control mechanism) which connects to the same microcontroller.

Now, I tried calculating the resistor values for R1, R2 and Rc. Note, the microcontroller's VIL(max) = 0.2xVcc = 660mV, VIH(min) = 0.6xVcc = 1.98V and VIH(max) = Vcc+0.5 = 3.8V.

To calculate Rc is quite easy. When the transistor is not conducting, the output is high (at 3.3V). When the transistor conducts, the output is pulled low. So from the microcontroller's point of view, output high means the load is switched OFF and output low means the load is switched ON. Looking at the datasheet for the SFH621A-3, using 34% minimum CTR at IF = 1mA: at 1mA input, the output is going to be 340uA. So in order for the microcontroller to detect a low voltage from the output of the optocoupler, can I use a resistor value of 1 kΩ? That way the output from the optocoupler will have a voltage of 340mV (which is below VIL(max)). More on this later, been a long day.

EDITED on 15th June 2012

Note: Solving for the resistors on the power line (R1 and R2). Please check my calculations and send any appropriate feedback.

Aim: the aim is to keep the LEDs ON for the maximum period of time in a 10ms half period (20ms full period of 50Hz). Let's say the LEDs have to be ON for 90% of the time; that means the LEDs require at least 1mA of current for 90% of that half period, which means the LEDs will be active for 9ms in a 10ms half period. So, 9ms/10ms = 0.9 * 180 (half period) = 162 degrees. This shows the current will be 1mA between 9deg and 171deg (and less than 1mA from 0deg to 9deg and from 171deg to 180deg). I did not consider the ON time to be 95%, as working with whole numbers is neat and 5% doesn't make any difference, not in this application at least.

Vpeak = 230V x sqrt(2) = 325V. Taking tolerances into account, with a minimum tolerance of 6%: 325 x 0.94 (100-6) x sin(9) = 47.8V. So, R1 ≤ (47.8V - 1.65V) / 1mA = 46.1 kΩ.

Choosing a value one step smaller than 46.1 kΩ: 39 kΩ (E12 series). Now that a smaller resistance is chosen compared to what was calculated, the current through the diodes will be greater than 1mA. Calculating the new current: ((325V x 110%) - 1.25V) / 39 kΩ = 9.1mA (too close to the max IF of the diodes). Coming back to this in a moment [Label - 1x].

First calculate the power rating of the resistor (considering 39 kΩ): ((230 + 10%)^2) / 39k = 1.64 watts (too high).

Going back to calculation [Label - 1x]: let's choose two 22 kΩ resistors. Together they add up to 44 kΩ, which is quite close to the 46.1 kΩ calculated above. Checking the power rating of the two resistors combined: ((230 + 10%)^2) / (2 x 22) kΩ = 1.45W. Choose 22 kΩ resistors, each with a 1W power rating.

Now, after all this, the initial CTR was 34%, which means 1mA in will be 340µA out.
But now, because of the 2x22 kΩ resistors, the current will be slightly more on the output. That means a higher potential across the pull-up resistor Rc. Would there be an issue getting a voltage drop below 500mV on the output of the optocoupler?
The MOC3021 is an optocoupler with a triac output. It's typically used to drive a power triac to switch mains operated appliances. Triacs can only be used in AC circuits. You need an optocoupler with a transistor output, preferably one with two LEDs in antiparallel at the input. The SFH620A is such a part. The two LEDs in antiparallel ensure that the transistor is activated on both half cycles of the mains. Many optocouplers only have 1 LED; that would work, but gives you an output pulse of 10ms in a 20ms period for 50Hz. You would also need to place a diode in antiparallel to the input in that case, to protect the LED from overvoltage when reverse polarized.

Important is the CTR, or Current Transfer Ratio, which indicates how much output current the transistor will sink for a given LED current. CTR is often not very high, but for the SFH620A we can choose a value of 100% minimum; only that's at 10mA in, and at 1mA it's only 34% minimum, so that 1mA in means at least 340\$\mu\$A out.

Let's suppose the output goes to a 5V microcontroller and that you would use the 2k\$\Omega\$ pull-up resistor shown in the diagram. If the transistor is off it won't draw current, except for a small leakage current, 100nA maximum, according to the datasheet. So that will cause a voltage drop of 200\$\mu\$V across the resistor, which is more than safe. If the transistor is on, and it draws 340\$\mu\$A, then the voltage drop across the resistor is only 680mV, and that's way too low to get a low level. We'll have to increase either the resistor's value or the current. Since we had a lot of margin on the leakage current we can safely increase the resistor value to 15k\$\Omega\$ for instance. Then 340\$\mu\$A will give a sufficiently low output voltage. (Theoretically a 5.1V voltage drop, but there's only 5V available, so it will go to ground.) The voltage drop because of the leakage current is still well within limits at 1.5mV. If we want to have a CTR of at least 34% at 1mA we have to use the SFH620A-3.

If this were controlled from a DC source we would almost be done. Just add R1 in series with the LEDs; R2 will probably not be needed. Then R1 \$\leq\$ (\$V_{IN}\$ - \$V_{LED}\$) / 1mA. But we have to deal with a 230V AC input signal. At the zero crossings there won't be any current; there's little we can do about that. How can we get at least 1mA for most of the cycle without wasting too much power? This is a trade-off. You can have the 1mA for just the maximum voltage, and that will give you only a small pulse, but you'll waste the least power. Or you can go for 1mA for most of the cycle, but then you'll have more current when the voltage is highest.

Let's say we want at least a 9ms pulse in a 10ms half period (50Hz). That means the current has to be 1mA from a 9° phase until 171°. 230V AC is 325V peak, but we have to take a -6% tolerance into account, so that's 306V minimum. 306V \$\times\$ sin(9°) = 48V. R1 \$\leq\$ (48V - 1.65V) / 1mA = 46.2k\$\Omega\$. (The 1.65V is the LED's maximum voltage.) The closest E24 value is 43k\$\Omega\$. Then we have more than 1mA at a 9° phase, but what about at the voltage's maximum? For that we have to work with the positive tolerance, max. 10%. Then the peak voltage is 230V \$\times\$ \$\sqrt{2}\$ \$\times\$ 110% = 358V. The maximum current is then (358V - 1.25V) / 43k\$\Omega\$ = 8.3mA. (The 1.25V is the LED's nominal voltage.) That's well below the optocoupler's limit. We won't be able to do this with just 1 resistor.
It probably can't stand the high voltage, and may have power dissipation problems too; we'll come to that in a minute. Peak voltage across the resistor is 357 V. The MFR-25 resistor is rated at 250 V maximum, so we'll need at least two of them in series. How about power? 230 V + 10 % across 43 k\$\Omega\$ is 1.49 W. The MFR-25 is only rated at 1/4 W, so two of them won't do. Now you can choose to have more of them in series, but that would have to be at least six, or choose a higher rated resistor. The MFR1WS (same datasheet) is rated at 1 W, so two in series will do. Remember that we'll have to divide the resistor value by 2: 21.5 k\$\Omega\$, which is not an E24 value. We can choose the closest E24 value and check our calculations, or choose an E96. Let's do the latter. That's all, folks. :-)

edit

I suggested in a comment that there's a lot more which has to be accounted for; this answer could well be three times as long. There's for instance the input leakage current of the AVR's I/O pin, which can be ten times as high as the transistor's. (Don't worry, I checked it, and we're safe.)

Why didn't I choose an optocoupler with a Darlington output? They have a much higher CTR. The main reason is the Darlington's saturation voltage, which is much higher than for a common BJT. For this optocoupler, for instance, it can be as high as 1 V. For the ATmega16L you're using, the maximum input voltage for a low level is 0.2 \$\times\$ \$V_{DD}\$, or 0.66 V at a 3.3 V supply. The 1 V is too high. That's the main reason. Another reason could be that it may not really help. We do have enough output current; it's just that the 1 mA input current is so high that we need power resistors for it. Darlingtons don't necessarily solve this if they're also only specified at 1 mA. At a 600 % CTR we'd get 6 mA collector current, but we don't need that.

Can't we do anything about the 1 mA in? Probably. For the optocoupler I mentioned, the Electrical Characteristics only talk about 1 mA. There's a graph in the datasheet, fig. 5: CTR versus forward current, which shows a CTR of more than 300 % at 0.1 mA. You have to be careful with these graphs. While tables often give you minimum and/or maximum values, graphs usually give you typical values. You may have 300 %, but it may be lower. How much lower? It doesn't say. If you build just one product you can try it, but you can't do that for every optocoupler if you want to run a 10k/year production. It might work. Say you use 100 \$\mu\$A in; at a relatively safe CTR value of 100 % you would have 100 \$\mu\$A out. You would have to do the calculations again, but your major advantage will be that the input resistors will only dissipate 150 mW instead of 1.5 W. It may be worth it.
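For reference, a minimal C++ sketch of the resistor arithmetic above; all numbers are taken from the answer (the power figure uses the +10 % RMS voltage, as the answer does):

#include <cstdio>
#include <cmath>

int main() {
    const double pi = 3.141592653589793;
    double vMin = 230.0 * std::sqrt(2.0) * 0.94;  // -6 % mains, ~306 V peak
    double vMax = 230.0 * std::sqrt(2.0) * 1.10;  // +10 % mains, ~358 V peak

    double v9   = vMin * std::sin(9.0 * pi / 180.0);       // ~48 V at a 9 deg phase
    double r1   = (v9 - 1.65) / 1e-3;                      // <= 46.2 k, pick E24 43 k
    double iPk  = (vMax - 1.25) / 43e3;                    // worst-case peak LED current
    double pRes = (230.0 * 1.10) * (230.0 * 1.10) / 43e3;  // ~1.49 W in the resistor

    printf("V(9deg) = %.0f V, R1 <= %.1f kohm\n", v9, r1 / 1e3);
    printf("Ipeak = %.1f mA, Presistor = %.2f W\n", iPk * 1e3, pRes);
    return 0;
}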
{ "source": [ "https://electronics.stackexchange.com/questions/33042", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10082/" ] }
33,166
Why do Arduino boards ship w/ 16MHz crystal instead of 20MHz? They are spec'ed for operating at 20MHz, after all. I guess there are a few advantages to running more slowly (lower power consumption, longer life), but I must be missing something.
I'd buy into the answer on the Arduino Forum: The original ATmega8 Arduino ran at 16MHz, which was the top rated clock speed for the ATmega8 cpu used. When "upgraded" to ATmega168 (with a 20MHz top cpu speed), the clock was left at 16MHz (probably) because the designers thought that more people/code would have backward compatibility issues with a new clock rate than would benefit from the extra 25% cpu performance. I certainly think they were right...
{ "source": [ "https://electronics.stackexchange.com/questions/33166", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/6094/" ] }
33,245
When looking at consumer electronic devices with batteries I sometimes see the battery capacity listed in Wh (watt hours) and sometimes in mAh (milliamp hours). I would like to be able to compare the two different metrics and I'm wondering how to convert from one value to the other.
The Simple Answer

DC power is defined as 1 W = 1 V × 1 A, that is, the power delivered by sustaining a 1 V potential with 1 A of current. Thus, a battery pack that can deliver 5400 mAh, that is 5.4 Ah, while sustaining a voltage of 10.4 V (this happens to be running in my laptop right now), can in theory deliver up to 5.4 × 10.4 = 56.16 Wh = 56160 mWh.

The Complicated Answer

The above gets a lot more complicated with different battery chemistries, and with different measurement methods. Firstly, the mAh rating can depend on the actual current draw: in general, the more current you draw, the less capacity the battery has, but there are exceptions at both ends of this guideline (if you drain too slowly, self-discharge affects your measurement, and if you drain quickly enough, the battery gets warmer and, if it doesn't break, tends to perform better). Also, the voltage across the battery changes with the load. This is at least simple: the more current you draw, the lower the voltage across the terminals (due to internal resistance). Finally, some devices are essentially dumb loads (battery-powered tools) and draw as much as they can from the battery, and some devices handle voltage and current changes in a more intelligent manner (mostly laptops and other DC/DC converters).

This means that for dumb loads you are more concerned with mAh ratings (perhaps measured until the battery voltage falls below some usable threshold), since this can be used to calculate time-to-empty (which is really what you or your users are after), and dumb loads are approximately constant-current/constant-resistance loads. For smart loads, the discharge controller (DC/DC converter) actually tries to draw constant power: the lower the voltage, the more current it drains so that it can continue outputting constant power on its business end.
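Since the conversion needs the pack voltage, a one-liner each way is enough; a minimal sketch using the laptop-pack numbers from the simple answer:

#include <cstdio>

// Wh = mAh / 1000 * V, and back again.
double mahToWh(double mah, double volts) { return mah / 1000.0 * volts; }
double whToMah(double wh, double volts)  { return wh / volts * 1000.0; }

int main() {
    // 5400 mAh at a 10.4 V nominal pack voltage:
    printf("%.2f Wh\n", mahToWh(5400, 10.4));    // 56.16 Wh
    printf("%.0f mAh\n", whToMah(56.16, 10.4));  // 5400 mAh
    return 0;
}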
{ "source": [ "https://electronics.stackexchange.com/questions/33245", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10133/" ] }
33,372
So, in my previous question I asked about using the SPI bus over a short distance for board to board communication. I was recommended to try termination resistors. I placed a resistor close to the destination (but not exactly there, there was a distance of 1 cm) and grounded it. (As this was a board without termination resistor footprints, I had to improvise. I couldn't solder the resistor onto the device as it's a TQFP and has delicate pins.) From some basic testing, I found that a 1 kΩ resistor barely reduced the overshoot. 470 Ω and 180 Ω worked better. The lower I went, the better it worked. With 180 Ω, the overshoot was about a volt or a little lower. Now, unfortunately, I can't go down much more than that because the current is more than my MCU can handle. I did fix the problem, on the current revision of the board, by using a 330 Ω resistance in series. This brought the overshoot to 3.7 V and the rise time was 10 or 11 ns. But I would really like a 'proper' solution on the next revision. My frequency requirements stay the same: 2 MHz, but would prefer 4 MHz. So I felt I should ask here: on the next revision of the board, should I place beefy buffers on the lines? Finding a buffer isn't really a problem but the current draw will increase significantly - I have 8 devices on the SPI which need termination and 3 lines that are always active go to each. An example, SCK goes to all 8 devices. Each device will have, say, a 100 Ω termination resistor. So that is a current draw of 12 * 3.3/100 = 390 mA! So what is the best recourse here? Should I go for 'active termination' by using Schottky diodes as clamps? EDIT: Regarding line impedance: As I mentioned previously, the intention is to connect 4 external boards. The pad to pad distance is the same for all (12 inches). However, there are also devices on the same board as the MCU - but these don't need terminations - the lengths are about an inch (or less) and there is very little overshoot (300 mV). The traces that go to external boards are roughly the same length and width. The 2nd layer on my board is an unbroken ground plane.
Talking about signal termination is like opening a can of worms. This is a HUGE subject that is difficult to summarize in just a couple hundred words. Therefore, I won't. I am going to leave a huge amount of stuff out of this answer. But I will also give you a big warning: there is much misinformation about terminating resistors on the net. In fact, I would say that most of what's found on the net is wrong or misleading. Some day I'll write up something big and post it to my blog, but not today.

The first thing to note is that the resistor value to use for your termination must be related to your trace impedance. Most of the time the resistor value is the same as your trace impedance. If you don't know what the trace impedance is then you should figure it out. There are many online impedance calculators available; a Google search will bring up dozens more. Most PCB traces have an impedance from 40 to 120 ohms, which is why you found that a 1 kΩ termination resistor did almost nothing and a 100-ish ohm resistor was much better.

There are many types of termination, but we can roughly put them into two categories: source and end termination. Source termination is at the driver, end termination is at the far end. Within each category there are many types of termination. Each type is best for different uses, with no one type good for everything.

Your termination, a single resistor to ground at the far end, is actually not very good. In fact, it's wrong. People do it, but it isn't ideal. Ideally that resistor would go to a different power rail at half of your power rail. So if the I/O voltage is 3.3 V then that resistor will not go to GND, but to another power rail at half of 3.3 V (a.k.a. 1.65 V). The voltage regulator for this rail has to be special because it needs to source AND sink current, where most regulators only source current. Regulators that work for this use will mention something about termination on the first page of the datasheet.

The big problem with most end termination is that it consumes lots of current. There is a reason for this, but I won't go into it. For low-current use we must look at source termination. The easiest and most common form of source termination is a simple series resistor at the output of the driver. The value of this resistor is the same as the trace impedance. Source termination works differently than end termination, but the net effect is the same. It works by controlling signal reflections, not preventing the reflections in the first place. Because of this, it only works if a driver output is feeding a single load. If there are multiple loads then something else should be done (like using end termination or multiple source termination resistors). The huge benefit of source termination is that it does not load down your driver like end termination does.

I said before that your series resistor for source termination must be located at the driver, and it must have the same value as your trace impedance. That was an oversimplification. There is one important detail to know about this. Most drivers have some resistance on their output. That resistance is usually in the 10-30 ohm range. The sum of the output resistance and your resistor must equal your trace impedance. Let's say that your trace is 50 ohms, and your driver has 20 ohms. In this case your resistor would be 30 ohms, since 30 + 20 = 50.
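A minimal sketch of that sum; the 20-ohm default is the rule-of-thumb figure used below when the datasheet is silent, not a universal constant:

#include <cstdio>

// Series source-termination resistor: trace impedance minus driver output
// resistance, floored at zero.
double sourceTermination(double traceZ, double driverZ = 20.0) {
    double r = traceZ - driverZ;
    return r > 0.0 ? r : 0.0;
}

int main() {
    printf("50 ohm trace: %.0f ohm resistor\n", sourceTermination(50.0)); // 30
    printf("65 ohm trace: %.0f ohm resistor\n", sourceTermination(65.0)); // 45
    return 0;
}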
If the datasheets do not say what the output impedance/resistance of the driver is, then you can assume it to be 20 ohms, then look at the signals on the PCB and see if it needs to be adjusted. Another important thing: when you look at these signals on an oscilloscope you MUST probe at the receiver. Probing anywhere else will likely give you a distorted waveform and trick you into thinking that things are worse than they really are. Also, make sure that your ground clip is as short as possible.

Conclusion: switch to source termination with a 33 to 50 ohm resistor and you should be fine. The usual caveats apply.
{ "source": [ "https://electronics.stackexchange.com/questions/33372", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3966/" ] }
33,537
In free space, lower frequency signals seem to go farther because the signal is either diffracted by the ground or reflected by the upper atmospheric layers, making it actually go farther. In urban conditions, where we need to penetrate walls, does 2.4 GHz travel further than 433 MHz radio? In the electromagnetic spectrum, do gamma rays and X-rays have good penetration because they have high frequency?
It is not true that higher frequencies always penetrate further than lower ones. The graph of transparency of various materials as a function of wavelength can be quite lumpy. Think of colored filters, and those only apply to a narrow octave of wavelengths we call visible light. What you are apparently thinking of is wavelengths so short that the energy is very high, like X-rays and gamma rays. These go thru things solely because of their high energy. At lower energies (longer wavelengths), the waves interact with the material in various ways so that they can get absorbed, refracted, reflected, and re-emitted. These effects vary in non-monotonic ways as a function of wavelength, the depth of the material, its resistivity, density, and other properties.
{ "source": [ "https://electronics.stackexchange.com/questions/33537", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/9838/" ] }
33,725
Photo #1
Photo #2
Photo #3 - A zoom of Photo #1
Photo #4 - A zoom of Photo #2

I shot these photos while travelling on a highway. Each line group has three separate lines. I think that the three lines in each group carry the same electrical potential (if not, could they be so close to each other?). Why are the three lines in each group isolated from each other? Is there an electrical reason for this?
Why are the three lines in each group isolated from each other? Is there an electrical reason for this?

Impedance, power factor, corona discharge and resistive loss effects are improved by spacing a number of conductors apart to form a larger effective single conductor. The combination of multiple wires in this manner is usually termed a "bundle".

Wikipedia notes:

Bundle conductors are used to reduce corona losses and audible noise. Bundle conductors consist of several conductor cables connected by non-conducting spacers*. For 220 kV lines, two-conductor bundles are usually used, for 380 kV lines usually three or even four. American Electric Power[4] is building 765 kV lines using six conductors per phase in a bundle. Spacers must resist the forces due to wind, and magnetic forces during a short-circuit. Bundle conductors are used to increase the amount of current that may be carried in a line. Due to the skin effect, ampacity of conductors is not proportional to cross section, for the larger sizes. Therefore, bundle conductors may carry more current for a given weight. A bundle conductor results in lower reactance, compared to a single conductor. It reduces corona discharge loss at extra high voltage (EHV) and interference with communication systems. It also reduces voltage gradient in that range of voltage. As a disadvantage, the bundle conductors have higher wind loading.

* Insulated / non-insulated spacers: Note that the above reference says "non-conducting spacers". In fact, some are and some aren't. There is no obvious gain from insulating between wires, although a conducting spacer will probably carry some current, with the potential for additional losses at the clamping joints. While the potential in all wires in a bundle is nominally identical, the magnitude of the fields produced and the imbalances due to line-line, line-ground and line-tower coupling mean there will be some differences in voltage, probably small, but more than may be intuitively obvious. Many spacers use elastomer bushes at the wire support points, aimed primarily at providing damping of Aeolian oscillations in the wires. As differences in voltage are low, these bushes may provide functional insulation.

Good discussion here. Summary of their comments:

Bundled conductors are primarily employed to reduce corona loss and radio interference. However they have several advantages:

- Bundled conductors per phase reduce the voltage gradient in the vicinity of the line, and thus reduce the possibility of corona discharge.
- Improvement in transmission efficiency, as loss due to the corona effect is countered.
- Bundled conductor lines will have higher capacitance to neutral in comparison with single lines. Thus they will have higher charging currents, which helps in improving the power factor.
- Since bundled conductor lines have higher capacitance and lower inductance than ordinary lines, they will have higher Surge Impedance Loading (\$Z = \sqrt{L/C}\$). Higher Surge Impedance Loading (SIL) means higher maximum power transfer ability.
- With an increase in self GMD (or GMR), the inductance per phase is reduced compared to a single-conductor line. This results in less reactance per phase compared to an ordinary single line, hence less loss due to reactance drop.

An extreme case: {From here}

Nice calculation toy: Power_lineparam here, including effects of bundles. The power_lineparam function computes the resistance, inductance, and capacitance matrices of an arbitrary arrangement of conductors of an overhead transmission line.
For a three-phase line, the symmetrical component RLC values are also computed.
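As a minimal numeric sketch of the surge-impedance-loading relation from the summary above; the per-km L and C here are assumed, illustrative values for a 380 kV class line, not figures from the answer:

#include <cstdio>
#include <cmath>

int main() {
    double L = 0.9e-3;    // series inductance, H per km (assumed)
    double C = 12.7e-9;   // shunt capacitance, F per km (assumed)
    double vLine = 380e3; // line-line voltage, V

    double zc  = std::sqrt(L / C);    // surge impedance, ~266 ohm
    double sil = vLine * vLine / zc;  // surge impedance loading, ~540 MW
    printf("Zc = %.0f ohm, SIL = %.0f MW\n", zc, sil / 1e6);
    return 0;
}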
{ "source": [ "https://electronics.stackexchange.com/questions/33725", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5542/" ] }
33,860
\$L_p\$: Self inductance of the primary winding.
\$L_s\$: Self inductance of the secondary winding.
\$L_m\$: Mutual inductance between the primary and secondary windings.

Assume that I need an iron core inductor with large inductance to use at 50 Hz or 60 Hz. How do I obtain an inductor from the given transformer in the image? I don't want to use any other circuit elements unless it is absolutely required. The dot convention of the transformer is given in the image; the terminal connections must be made so that the inductance of the resulting inductor is maximized (I think that happens when the fluxes generated by the primary and secondary windings are in the same direction inside the transformer core). I'm expecting an answer like "Connect \$ P_2 \$ and \$ S_2 \$ together; \$ P_1 \$ will be \$ L_1 \$ and \$ S_1 \$ will be \$ L_2 \$ of the resulting inductor.". I understand that I can use the primary and secondary windings separately by leaving the unused winding open, but I'm looking for a smart way of connecting the windings so that the resulting inductance is maximized. What will be the inductance of the inductor in terms of \$ L_p \$, \$ L_s \$ and \$ L_m \$? What will be the frequency behavior of the resulting inductor? Will it have good performance at frequencies other than the one the original transformer was rated to run at?
How do I obtain an inductor from the given transformer in the image? ... so that the inductance of the resulting inductor is maximized.

Connect the undotted end of one winding to the dotted end of the other, e.g. \$P_2\$ to \$S_1\$ (or \$P_1\$ to \$S_2\$), and use the pair as if they were a single winding. (As per the example in the diagram below.) Using just one winding does NOT produce the required maximum inductance result. The resulting inductance is greater than the sum of the two individual inductances.

Call the resultant inductance \$L_t\$. Then:

\$L_t > L_p\$
\$L_t > L_s\$
\$L_t > (L_p + L_s)\$ !!! <- this may not be intuitive

\$ L_t = (\sqrt{L_p} + \sqrt{L_s})^2 \$ <- also unlikely to be intuitive.
\$ \dots = L_p + L_s + 2 \times \sqrt{L_p} \times \sqrt{L_s} \$

Note that IF the windings were NOT magnetically linked (e.g. were on two separate cores) then the two inductances simply add, and \$L_{sepsum} = L_s + L_p\$.

What will be the frequency behavior of the resulting inductor? Will it have good performance at frequencies other than the one the original transformer was rated to run at?

"Frequency behavior" of the final inductor is not a meaningful term without further explanation of what is meant by the question, and depends on how the inductor is to be used. Note that "frequency behavior" is a good term, as it can mean more than the normal term "frequency response" in this case. For example, applying mains voltage to a primary and secondary in series, where the primary is rated for mains voltage use in normal operation, will have various implications depending on how the inductor is to be used. Impedance is higher, so magnetising current is lower, so the core is less heavily saturated. Implications then depend on application - so interesting. Will need discussing.

Connecting the two windings together so that their magnetic fields support each other will give you the maximum inductance. When this is done the field from current in winding P will now also affect winding S, and the field in winding S will now also affect winding P, so the resultant inductance will be greater than the linear sum of the two inductances. The requirement to get the inductances to add, where there are 2 or more windings, is that the current flows into (or out of) all dotted winding ends at the same time.

\$ L_{effective} = L_{eff} = (\sqrt{L_p} + \sqrt{L_s})^2 \dots (1) \$

Because:

Where windings are mutually coupled on the same magnetic core, so that all turns in either winding are linked by the same magnetic flux, then when the windings are connected together they act like a single winding whose number of turns = the sum of the turns in the two windings. i.e.

\$ N_{total} = N_t = N_p + N_s \dots (2) \$

Now: L is proportional to turns squared = \$N^2\$. So for constant of proportionality k,

\$ L = k \cdot N^2 \dots (3) \$

So \$ N = \sqrt{\frac{L}{k}} \dots (4) \$

k can be set to 1 for this purpose, as we have no exact values for L. So from (2) above: \$ N_{total} = N_t = (N_p + N_s) \$

But: \$ N_p = \sqrt{L_p / k} = \sqrt{L_p} \dots (5) \$

And: \$ N_s = \sqrt{L_s / k} = \sqrt{L_s} \dots (6) \$

But \$ L_t = k \cdot (N_p + N_s)^2 = (N_p + N_s)^2 \dots (7) \$

So \$ \mathbf{L_t = (\sqrt{L_p} + \sqrt{L_s})^2} \dots (8) \$

Which expands to:

\$ L_t = L_p + L_s + 2 \times \sqrt{L_p} \times \sqrt{L_s} \$

In words: the inductance of the two windings in series is the square of the sum of the square roots of their individual inductances. \$L_m\$ is not relevant to this calculation as a separate value - it is part of the above workings and is the effective gain from crosslinking the two magnetic fields.
[[Unlike Ghost Busters - In this case you are allowed to cross the beams.]].
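As a minimal numeric check of equation (8), showing the series-aiding result really exceeds the plain sum of the two inductances (the example values are assumed, not from the question):

#include <cstdio>
#include <cmath>

// Lt = (sqrt(Lp) + sqrt(Ls))^2 for two perfectly coupled windings in series.
double seriesAiding(double lp, double ls) {
    double s = std::sqrt(lp) + std::sqrt(ls);
    return s * s;  // = lp + ls + 2*sqrt(lp*ls)
}

int main() {
    double lp = 4.0, ls = 1.0;  // hypothetical 4 H primary, 1 H secondary
    printf("Lt = %.1f H (plain sum would be %.1f H)\n",
           seriesAiding(lp, ls), lp + ls);  // prints 9.0 H vs 5.0 H
    return 0;
}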
{ "source": [ "https://electronics.stackexchange.com/questions/33860", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5542/" ] }
34,040
This has been bugging me for a long time. Take this video for example. I have always thought that electricity will take the shortest path. When the electromagnet's windings are uninsulated, it seems that the electricity would flow straight through the "mass of metal" created by the wire, not in the circular path needed for the electromagnet to work. I have also seen solenoids that work like this. How does this design work?
It is insulated. Have you ever noticed that sometimes solenoids are made from copper wire that seems distinctly non copper coloured? This is called enamelled copper wire, and it is available in a whole range of colours. The insulation is just a very thin coating of polyurethane, polyamide or polyester. It shouldn't be confused with vitreous enamel, which is glass. The good thing about it is that you can easily remove the insulation by rubbing hot solder on the wire.
{ "source": [ "https://electronics.stackexchange.com/questions/34040", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10377/" ] }
34,046
I need to read several analog sensors on an Arduino Mega ADK. I want to use a multiplexer for this ( CD74HC4067E ), see the schematics. However, the output from the multiplexer channels is not consistent with the output which I read directly on the analog inputs:

Through the Mux x: 333 y: 276 z: 323 Direct analog readings x: 328 | y: 334 | z: 285
Through the Mux x: 333 y: 276 z: 321 Direct analog readings x: 328 | y: 335 | z: 277
Through the Mux x: 334 y: 276 z: 322 Direct analog readings x: 329 | y: 335 | z: 277
Through the Mux x: 333 y: 276 z: 324 Direct analog readings x: 328 | y: 334 | z: 283
Through the Mux x: 333 y: 276 z: 299 Direct analog readings x: 329 | y: 335 | z: 282

Although it might seem that there is a wiring problem (simply switch X and Z), my setup is correct (triple checked!). When I turn the sensor 90 degrees clockwise so that Y is up, I get the following:

Through the Mux x: 334 y: 344 z: 270 Direct analog readings x: 266 | y: 334 | z: 344
Through the Mux x: 334 y: 345 z: 269 Direct analog readings x: 265 | y: 334 | z: 344
Through the Mux x: 333 y: 343 z: 271 Direct analog readings x: 264 | y: 333 | z: 343
Through the Mux x: 335 y: 344 z: 270 Direct analog readings x: 265 | y: 334 | z: 344

so it seems that the X and Z pins should be switched. How can I improve this? And my Arduino code:

//to hold direct reads from the analog output of the ADXL335
int xAnaRead;
int yAnaRead;
int zAnaRead;

//to hold readings from the mux:
int xMuxRead;
int yMuxRead;
int zMuxRead;

//mux pins
int s0 = 8;
int s1 = 9;
int s2 = 10;
int s3 = 11;

//The pin on which the Mux outputs
int SIG_pin = A0;

//Analog read pins
const int xPin = A8;
const int yPin = A9;
const int zPin = A10;

void setup(){
  Serial.begin(9600);
}

void loop(){
  //read value on channel 0 of Mux
  xMuxRead = readMux(0);
  //read analog value
  int xAnaRead = analogRead(xPin);
  delay(100); //to let the capacitor discharge

  //read value on channel 1 of Mux
  yMuxRead = readMux(1);
  //read analog value
  int yAnaRead = analogRead(yPin);
  delay(100); //to let the capacitor discharge

  //read value on channel 2 of Mux
  zMuxRead = readMux(2);
  //read analog value
  int zAnaRead = analogRead(zPin);
  delay(100); //to let the capacitor discharge

  //Output the readings
  Serial.print("Through the Mux x: ");
  Serial.print(xMuxRead);
  Serial.print("\t y: ");
  Serial.print(yMuxRead);
  Serial.print("\t z: ");
  Serial.print(zMuxRead);
  Serial.print("\t\t Direct analog readings x: ");
  Serial.print(xAnaRead);
  Serial.print(" | y: ");
  Serial.print(yAnaRead);
  Serial.print(" | z: ");
  Serial.print(zAnaRead);
  Serial.println("");

  delay(100); //just here to slow down the serial output - Easier to read
}

//this is verbose but it works, and is more readable (i need that :)
int readMux(int channel){
  int controlPin[] = { s0, s1, s2, s3 };

  int muxChannel[16][4]={
    { 0,0,0,0 }, //channel 0
    { 1,0,0,0 }, //channel 1
    { 0,1,0,0 }, //channel 2
    { 1,1,0,0 }, //channel 3
    { 0,0,1,0 }, //channel 4
    { 1,0,1,0 }, //channel 5
    { 0,1,1,0 }, //channel 6
    { 1,1,1,0 }, //channel 7
    { 0,0,0,1 }, //channel 8
    { 1,0,0,1 }, //channel 9
    { 0,1,0,1 }, //channel 10
    { 1,1,0,1 }, //channel 11
    { 0,0,1,1 }, //channel 12
    { 1,0,1,1 }, //channel 13
    { 0,1,1,1 }, //channel 14
    { 1,1,1,1 }  //channel 15
  };

  //loop through the 4 sig
  for(int i = 0; i < 4; i ++){
    digitalWrite(controlPin[i], muxChannel[channel][i]);
  }

  //read the value at the SIG pin
  int val = analogRead(SIG_pin);

  //return the value
  return val;
}
{ "source": [ "https://electronics.stackexchange.com/questions/34046", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10262/" ] }
34,048
I know MOVs inside surge protectors degrade over time, but I also see manufacturers putting notes on the box that the protection warranty is void if I daisy-chain surge protectors. So, questions:

1. Is there an estimate of the life of a MOV if (i) it has never faced a surge before, or (ii) it has faced some surges before?
2. I'm buying surge protectors with indicators showing the effectiveness of the surge protection: how do they work? How does that indicator know if the surge protection is still effective?
3. Why do manufacturers discourage daisy-chaining of surge protectors?

I'm assuming most off-the-shelf/commercial "computer"-grade surge protectors use MOVs. I was interested in finding out whether the MOVs are connected in parallel with the AC output of the surge protectors. If so, how do MOVs connected in parallel interact (something like the way connecting resistors or capacitors in parallel or series changes their values)?
You should not daisy chain protective devices (fuses, MOVs, breakers, etc.) without first doing the appropriate research because, generally speaking, they are rated to interrupt in x seconds given a particular fault current. When you have two protection devices with similar ratings in the circuit, they end up both trying to interrupt and may very well end up interfering with each other's interruption capability, possibly to the point where neither will properly clamp or interrupt the fault, causing excessive current flow and possibly fires.

e.g. A fuse chosen more or less at random will clear in 1 s with a ~20 A fault current. If you have a second fuse with similar ratings in series, they will actually start limiting the fault current as they open up, and the fault current is no longer 20 A; it may be 15 A, or 10 A, or ... you get the idea. That same fuse will clear a 10 A fault in ~10 s, which could be enough time to heat up wire or traces, or cause a semiconductor to fail because it wasn't designed to handle that kind of current for that kind of time.

e.g. A MOV series chosen more or less at random will clamp a surge at 130 V. Two in parallel will have (slightly or significantly) different clamping voltages, usually with the lower one "winning". The breaker/fuse and MOV are usually selected so that the MOV will clamp and the breaker will open with the surge current, but when you mix and match you end up with a MOV clamping earlier, which the fuse/breaker wasn't designed to trip at, which now alters its fault ratings, leading to unpredictable protection.

In the industrial power world this kind of interaction is actually a significant part of the overall electrical design, since you have substation transformers protected with fuses, the downstream equipment protected with their own fuses or breakers, and then the load controllers protecting their semiconductors or motors, again with their own protective devices, usually a combination of MOVs or fused MOVs and either breakers or fusing. There's a lot to look at, including the \$I^2t\$ ratings of the protective devices, interrupting capabilities, pulse withstand capabilities, temperature derating, clearing times, current limiting effects as the devices become active, Joule ratings and so on.

Here are a few good references if you wish to look into it further. The term "fuseology" has come about to describe this particular aspect of electronics design.

... and I bet you thought fuses, breakers and TVS type devices were pretty simple, didn't you. :-)
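For what it's worth, the basic \$I^2t\$ coordination idea can be sketched in a few lines; the withstand rating below is hypothetical, real designs read these from manufacturer time-current curves:

#include <cstdio>

// The let-through energy I^2 * t of the upstream device must stay below the
// downstream part's rated withstand, otherwise they are not coordinated.
bool coordinated(double faultAmps, double clearSeconds, double withstandI2t) {
    return faultAmps * faultAmps * clearSeconds <= withstandI2t;
}

int main() {
    // 20 A fault cleared in 1 s vs a hypothetical 500 A^2s semiconductor rating:
    printf("%s\n", coordinated(20.0, 1.0, 500.0) ? "coordinated"
                                                 : "NOT coordinated");
    return 0;
}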
{ "source": [ "https://electronics.stackexchange.com/questions/34048", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5296/" ] }
34,071
Not all op-amps have explicit offset-null support, but all op-amps have an offset voltage. This is exactly my practical circuit: How do I correct the offset voltage of TL084 in this circuit? (Datasheet: TL084 )
There are a range of methods which can be used to provide offset voltage compensation. The best method to use varies with the application circuit, but all either apply a variable current to a circuit node or vary the voltage of a node which a circuit element connects to. The methods described below can easily be applied to your circuit by:

- Adding a divider and potentiometer at the ground end of your R2. The ease of use of this method is improved by adding one two-resistor divider to the potentiometer voltage, as explained below.
- Or a, say, 100 kΩ resistor from the op-amp inverting input can be fed by a 10 kΩ potentiometer connected to +/- 15 V. This injects a small current into the node, which causes an offset voltage.

Current injection effectively occurs at a high impedance point and voltage adjustment at a low impedance point, but both methods are functionally equivalent. That is, injecting a current causes it to flow in related circuitry and causes a voltage change, and adjusting voltage causes current flows to alter.

To compensate for an offset voltage by injecting a current you can apply an adjustable voltage from a potentiometer via a high-value resistor to an appropriate circuit node. To adjust a "ground" voltage that a resistor connects to, you can connect it to a potentiometer which is able to vary either side of ground. The diagram below shows one method. Here Rf would usually connect to ground. If R1 is a short circuit and R2 an open circuit, the whole change in potentiometer voltage is applied to the end of Rf. This causes two problems. The equivalent resistance of the potentiometer (equal to Rpot/4 at mid travel) will add to Rf and cause gain errors. For a small error the potentiometer value would need to be small, or Rf would need to be reduced by an equal amount. And for small offset voltage adjustments the adjustment of the potentiometer becomes difficult, and most of the potentiometer range is not used.

Adding R1 and R2 overcomes both these problems. R1 and R2 divide down changes in potentiometer voltage by the ratio R2/(R1+R2). If, for example, a +/- 15 mV change is required then the ratio of R1:R2 can be about 15 V:15 mV = 1000:1. The effective resistance of the R1, R2 divider is R1 and R2 in parallel, or about = R2 for large division ratios. If the resistance of R2 is small relative to Rf then minimal errors are caused. If Rf is, say, 10 kΩ then a value of R2 = 10 Ω causes an error of 10/10,000 = 0.1 %.

Maxim manage to say this in fewer words in the diagram below. If R1 and R2 form a ~1000:1 divider then R1 will be about 10 Ω x 1000 = 10 kΩ. Use of a, say, 50 kΩ potentiometer will result in an equivalent resistance of about 12.5 kΩ at the mid point, and this can be used in place of R1. The circuit becomes: R2 = 10 Ω, R1 = short circuit, potentiometer = 50 kΩ linear.

The above circuit is taken from the useful Maxim Application note 803 - EPOT Applications: Offset Adjustment in Op-Amp Circuits, which contains much other applicable information. In his answer miceuz referred to NatSemi's AN-31 pages 6 & 7. Not surprisingly, the circuits there apply the identical methods to what I describe above and to those in the Maxim app note, but the diagrams are more explanatory, so I've copied them here.
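A minimal numeric sketch of the divider arithmetic above, using the 10 kΩ / 10 Ω / Rf = 10 kΩ values from the text:

#include <cstdio>

int main() {
    double vPot = 15.0;            // potentiometer swing, +/-15 V
    double r1 = 10e3, r2 = 10.0;   // the ~1000:1 divider from the text
    double rf = 10e3;

    double vAdjust = vPot * r2 / (r1 + r2);  // adjustment range at the end of Rf
    double gainErr = r2 / rf * 100.0;        // loading error R2 adds to Rf
    printf("adjust range: +/-%.1f mV, gain error: %.2f %%\n",
           vAdjust * 1e3, gainErr);          // ~ +/-15.0 mV, 0.10 %
    return 0;
}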
{ "source": [ "https://electronics.stackexchange.com/questions/34071", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5542/" ] }
34,083
I have been having a hard time trying to figure out whether ARM is a microprocessor, a microcontroller, or something else.
Neither. ARM is a CPU architecture (more accurately, a family of related CPU architectures). If you put that (or any other) CPU on a chip all by itself, you have a microprocessor (like they did in the age-old Acorn machines). If you combine it with ROM (Flash), RAM and peripherals on one chip, you have a microcontroller (example: LPC2148). Things can get a bit muddy when you combine the CPU with ROM and RAM, but also provide the data, address and control lines on the pins, so external memory can be added. Such a chip can be used either in microcontroller mode or in microprocessor mode (example: LPC2478).

Nowadays smaller systems (up to 0.5 MB Flash, a few tens of KB of RAM) are available as microcontrollers. Larger systems (typically running Linux or something similar) are typically composed of a microprocessor with external RAM. (ROM can be external too, or a small boot ROM on chip plus an SD card or similar.) Examples: the Raspberry Pi and other small Linux boards, the ESP8266, or open up any mobile phone, set-top box, modem/router, etc.

Funny note: microcontrollers tend to be short on RAM, hence they run from Flash, which often limits their speed. Microprocessors often have plenty of RAM, and have a slower Flash, from which the code and data are loaded into RAM for execution.

Nowadays (2015) the term ARM is increasingly confusing, because it can refer to the company that makes the ARM designs, or to one of the designs. (The ARM company itself does not make chips; it licenses its designs to chip makers.) The recent Cortex 'family' of designs is sufficiently different from the old ARM designs that I prefer not to call it 'ARM'.
{ "source": [ "https://electronics.stackexchange.com/questions/34083", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10392/" ] }
34,406
I'm working on a case design. There is a rectangular switch, and I am trying to cut a rectangular hole in the metal case to mount it in. I'm not doing a good job; if it goes on like this, it looks like I'm going to ruin the case entirely. How do I cut this hole?

Picture #1
Picture #2

The dimensions of the rectangle are 20.0 mm x 25.4 mm.
There is a tool exactly for this, called a nibbler. They sell it for $10 at RadioShack.

Specialized hole punches exist for specific shapes:

For more advanced holes there are even special drills:
{ "source": [ "https://electronics.stackexchange.com/questions/34406", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5542/" ] }
34,549
Can someone tell me why people use either USB or RS232? They are both serial ports, right? And I understand that USB is much faster (especially USB 3.0), but if people wanted to, I'm sure they could make a successor to RS232 that is just as fast. So, what are the advantages and disadvantages of both?
What are the differences between USB and RS232?

You will find much more than I can tell you here about the abilities and disadvantages of RS232 by starting with a search for RS232 and then 'wandering around the web' and following the thread where it leads. No one page will tell you everything, but 10 or 20 quick skims will show you how useful it was and how utterly terrible, all at the same time.

USB is intended as a high speed, upward extensible, fully standardised interface between 1 computing device using a single port and N peripherals using one port each, with all control being accomplished by signals within the data stream. USB is formidably difficult to provide low level interfaces for. "Simple" interfaces are common, but these provide and hide a very large degree of related complexity.

RS232 was intended as a 1:1, relatively low speed, semi-standardised interface between 1 computing device and 1 peripheral per port, with hardware control being an integral part of operation. RS232 is relatively easy to provide low level physical interfaces for.

RS232 was (and to some extent still is) a very useful, powerful, flexible way of connecting computing devices to peripherals. However [tm] [!!!] RS232 was intended as a short distance (a few metres max), moderately low speed (9600 bps usual, up to about 100 kbps in some cases, faster in very specialist situations), one device per port (exceptions proving the rule) interface. Signalling was unbalanced relative to ground using about +/- 12 V, with logic one on data = -V and logic one on control = +V. There were many many many control signals on the original 25 pin connector, which led to an utterly vast range of non-standard uses and incompatibilities. The later version reduced the connector to 9 pins, with still enough control signals to allow people to utterly destandardise configurations. Getting RS232 working between a randomly chosen terminal device and a computer or similar MAY have been a matter of plug in and go, or need minutes, hours or days of playing, and in some cases just would not work. RS232 does NOT provide powering per se, although many people used it to power equipment in many different ways, none of them standard. Observation of the data lines will allow data signals to be identified. (Fast eyes and a brain that works at a suitable number of kbps would help.) Data transfer is unidirectional on a transmit and a receive line and uses asynchronous framing. Design is for 1:1 connection, with no way of multidropping in a 1:N arrangement without non-standard arrangements.

USB up to USB2 is a 4 physical wire system with two power lines and two data lines. There are no physical control lines. USB3 uses more lines; details are best left for another question and answer. Initial speed was 12 Mbps, increased to 480 Mbps with USB2 and up to 5 Gbps "Superspeed" mode with USB3. Control and configuration is all done with software, using data signals which are an utterly inseparable part of the interface. Observing the data stream with an oscilloscope will not reveal the actual data component of the system. Data transfer uses 0/+5 balanced differential voltage signalling. Data transfer is bidirectional, with ownership of the "bus" being an integral part of the protocol. Connection is almost always on a 1:1 basis physically, but a number of logical devices can be accommodated on the one port.
Connection of N physical devices to one upstream port is usually accomplished by use of a "hub", but this is essentially a visible manifestation of an internal 1:N arrangement which is an integral part of the design.

There are going to be some interesting connector issues :-):

USB2 / USB3 (from here)
USB3 superspeed microconnector with USB 2 backward compatibility (from here)
USB3.COM - USB3 superspeed cable connectors (from here)

Wikipedia RS232
USB versus serial
Wikipedia USB
USB3 Superspeed FAQ
Wikipedia USB3
USB.ORG - superspeed
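One small consequence of RS232's asynchronous framing mentioned above: with the common 8N1 framing, each data byte costs a start and a stop bit on the line. A minimal sketch:

#include <cstdio>

// 8N1 framing: 1 start + 8 data + 1 stop = 10 line bits per data byte.
double bytesPerSecond(double baud) { return baud / 10.0; }

int main() {
    printf("9600 bps   -> %.0f bytes/s\n", bytesPerSecond(9600));    // 960
    printf("115200 bps -> %.0f bytes/s\n", bytesPerSecond(115200));  // 11520
    return 0;
}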
{ "source": [ "https://electronics.stackexchange.com/questions/34549", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10423/" ] }
34,561
We often seem to use microcontrollers to control relays, and a 5 V microcontroller is often used with 12 V relays. A relay may need several times more power than the microcontroller. Not a problem if you can use an SSR, which you can drive at a few mA, but there are situations where you do need an electromechanical relay. When, is another discussion. Here I'll focus on the electromechanical. So, what are some ways to use those relays more efficiently?
This is becoming a quite long answer, but I added lots of pretty pictures, which should keep you from falling asleep ;-)

I'm aware of bistable relays, and they're the big savers, but here I'll discuss different solutions, all for the same non-latching relay, in case you don't want to use a latching relay. That could be for feedback, or more complicated drive reasons, for instance. (One way to get feedback is by using one contact of a dual pole relay, but then you reduce it to a single pole relay. Three pole relays exist, but are expensive.) Anyway, this is about your common, low-cost monostable relay. I'll be using this relay for reference.

Series resistor

A cheap and simple way to reduce power, and applicable to most relays. Look out for the must-operate voltage in the datasheet, sometimes called "pull-in voltage". For the 12 V standard version of the above relay that's 8.4 V. That means the 12 V relay will also work if you apply a minimum of 8.4 V to it. The reason for this wide margin is that the 12 V for relays is often not regulated, and may vary, for instance with mains voltage tolerances. Check the margins on the 12 V before doing this. Let's keep some margin and go for 9 V. The relay has a coil resistance of 360 Ω; then a 120 Ω series resistor will cause a 3 V drop, with 9 V remaining for the relay. Power dissipation is 300 mW instead of 400 mW, a 25 % power saving, with just a series resistor.

In this and the other graphs the common solution's power is shown in blue, normalized for 12 V input, and our improved solution in purple. The x-axis shows the input voltage.

LDO regulator

With the series resistor the power savings are a constant 25 %. If the voltage rises the power will rise quadratically. But if we can keep the relay voltage constant, independent of our power supply voltage, power will only rise linearly with rising input voltage. We can do this by using a 9 V LDO to power the relay. Note that compared to the series resistor this saves more power at higher input voltages, but less if the input voltage drops below 12 V. Power saving: 25 %.

Sensitive relay

This is the simplest way to drastically reduce power: use the sensitive version of the relay. Our relay is available in a standard version which needs 400 mW, and a sensitive version which is happy with half of that. So why not always use sensitive relays? First, not all relays come in sensitive types, and when they do they often have restrictions, like no change-over (CO) contacts, or a limited switching current. They're more expensive as well. But if you can find one that fits your application I would certainly consider it. Power saving: 50 %.

12 V relay at 5 V

Here we get to the Real Savings™. First we'll have to explain the 5 V operation. We already saw that we can operate the relay at 9 V, since the "must operate voltage" was 8.4 V. But 5 V is considerably lower than that, so it won't activate the relay. It appears, however, that the "must operate voltage" is only needed to activate the relay; once it's activated it will remain active even at much lower voltages. You can easily try this. Open the relay and place 5 V across the coil, and you'll see it doesn't activate. Now close the contact with the tip of a pencil and you'll see that it remains closed. Great. There's one catch: how do we know this will work for our relay? It doesn't mention the 5 V anywhere.
What we need is the relay's "hold voltage", which gives the minimum voltage to stay activated, and unfortunately that's often omitted in datasheets. So we'll have to use another parameter: the "must release voltage". That's the maximum voltage at which the relay is guaranteed to switch off. For our 12 V relay that's 0.6 V, which is really low. The "hold voltage" is usually only a bit higher, like 1.5 V or 2 V. In many cases the 5 V is worth the risk. Not if you want to run a 10k/year production of the device without consulting the relay's manufacturer; you may have a lot of returns. But for a hobby project with a one-time production you can see for yourself if it works.

So we only need the high voltage for a very short time, and then we can settle for the 5 V. This can easily be achieved with a parallel RC circuit in series with the relay. When the relay is switched on the capacitor is discharged and therefore short-circuits the parallel resistor, so that the full 12 V is across the coil and it can activate. The capacitor then gets charged and there will be a voltage drop across the resistor which reduces the current. This is like in our first example, only then we went for a 9 V coil voltage; now we want 5 V. Calculator! 5 V across the coil's 360 Ω is 13.9 mA; then the resistor should be (12 V - 5 V)/13.9 mA = 500 Ω. Before we can find the value for the capacitor we have to consult the datasheet once more: the operate time is 10 ms maximum. That means the capacitor should charge slowly enough to still have 8.4 V across the coil after 10 ms. This is what the coil's voltage over time should look like:

The R value for the RC time constant is the 500 Ω parallel to the coil's 360 Ω, due to Thévenin. That's 209 Ω. The graph's equation is

\$ V_{COIL} = 5 V + 7 V \cdot e^{\dfrac{-t}{RC}} \$

With \$V_{COIL}\$ = 8.4 V, \$t\$ = 10 ms and \$R\$ = 209 Ω we can solve for \$C\$, and we find 66 µF minimum (a numeric check of this value follows at the end of this answer). Let's take 100 µF. So in steady state we have an 860 Ω resistance instead of 360 Ω. We're saving 58 %.

12 V relay at 5 V, reprise

The following solution gives us the same savings at 12 V, but with a voltage regulator we'll keep the voltage at 5 V even if the input voltage were to increase. We'll use the same circuit with the analog switch, but we'll save a lot more. What happens when we close the switch? C1 gets quickly charged to 4.3 V via D1 and R1. At the same time C2 gets charged through R2. When the analog switch's threshold is reached the switch in IC1 toggles, and C1's negative pole is connected to +5 V, so that the positive pole goes to 9.3 V. That's enough for the relay to activate, and after C1 is discharged the relay is powered by the 5 V through D1.

So what's our gain? We have 5 V / 360 Ω = 14 mA through the relay, and coming from 12 V via an LM7805 or similar that's 167 mW instead of 400 mW. Power saving: 58 %.
{ "source": [ "https://electronics.stackexchange.com/questions/34561", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2064/" ] }
34,745
Power supplies are available in a wide range of voltage and current ratings. If I have a device that has specific voltage and current ratings, how do those relate to the power ratings I need to specify? What if I don't know the device's specs, but am replacing a previous power supply with particular ratings? Is it OK to go lower voltage, or should it always be higher? What about current? I don't want a 10 A supply to damage my 1 A device.
Voltage Rating

If a device says it needs a particular voltage, then you have to assume it needs that voltage. Both lower and higher could be bad. At best, with lower voltage the device will not operate correctly in an obvious way. However, some devices might appear to operate correctly, then fail in unexpected ways under just the right circumstances. When you violate required specs, you don't know what might happen. Some devices can even be damaged by too low a voltage for extended periods of time. If the device has a motor, for example, then the motor might not be able to develop enough torque to turn, so it just sits there getting hot. Some devices might draw more current to compensate for the lower voltage, but the higher than intended current can damage something. Most of the time, lower voltage will just make a device not work, but damage can't be ruled out unless you know something about the device.

Higher than specified voltage is definitely bad. Electrical components all have voltages above which they fail. Components rated for higher voltage generally cost more or have less desirable characteristics, so picking the right voltage tolerance for the components in the device probably got significant design attention. Applying too much voltage violates the design assumptions. Some level of too much voltage will damage something, but you don't know where that level is. Take what a device says on its nameplate seriously and don't give it more voltage than that.

Current Rating

Current is a bit different. A constant-voltage supply doesn't determine the current: the load, which in this case is the device, does. If Johnny wants to eat two apples, he's only going to eat two whether you put 2, 3, 5, or 20 apples on the table. A device that wants 2 A of current works the same way. It will draw 2 A whether the power supply can only provide the 2 A, or whether it could have supplied 3, 5, or 20 A. The current rating of a supply is what it can deliver, not what it will always force thru the load somehow. In that sense, unlike with voltage, the current rating of a power supply must be at least what the device wants, but there is no harm in it being higher. A 9 volt 5 amp supply is a superset of a 9 volt 2 amp supply, for example.

Replacing Existing Supply

If you are replacing a previous power supply and don't know the device's requirements, then consider that power supply's rating to be the device's requirements. For example, if an unlabeled device was powered from a 9 V and 1 A supply, you can replace it with a 9 V and 1 or more amp supply.

Advanced Concepts

The above gives the basics of how to pick a power supply for some device. In most cases that is all you need to know to go to a store or on line and buy a power supply. If you're still a bit hazy on what exactly voltage and current are, it's probably better to quit now. This section goes into more power supply details that generally don't matter at the consumer level, and it assumes some basic understanding of electronics.

Regulated versus Unregulated

Unregulated

Very basic DC power supplies, called unregulated, just step down the input AC (generally the DC you want is at a much lower voltage than the wall power you plug the supply into), rectify it to produce DC, add an output cap to reduce ripple, and call it a day. Years ago, many power supplies were like that. They were little more than a transformer, four diodes making a full wave bridge (takes the absolute value of voltage electronically), and the filter cap.
In these kinds of supplies, the output voltage is dictated by the turns ratio of the transformer. This is fixed, so instead of making a fixed output voltage their output is mostly proportional to the input AC voltage. For example, such a "12 V" DC supply might make 12 V at 110 VAC in, but then would make over 13 V at 120 VAC in.

Another issue with unregulated supplies is that the output voltage is not only a function of the input voltage, but will also fluctuate with how much current is being drawn from the supply. An unregulated "12 volt 1 amp" supply is probably designed to provide the rated 12 V at full output current and the lowest valid AC input voltage, like 110 V. It could be over 13 V at 110 V in at no load (0 amps out) alone, and then higher yet at higher input voltage. Such a supply could easily put out 15 V, for example, under some conditions. Devices that needed the "12 V" were designed to handle that, so that was fine.

Regulated

Modern power supplies don't work that way anymore. Pretty much anything you can buy as consumer electronics will be a regulated power supply. You can still get unregulated supplies from more specialized electronics suppliers aimed at manufacturers, professionals, or at least hobbyists that should know the difference. For example, Jameco has a wide selection of power supplies. Their wall warts are specifically divided into regulated and unregulated types. However, unless you go poking around where the average consumer shouldn't be, you won't likely run into unregulated supplies. Try asking for an unregulated wall wart at a consumer store that sells other stuff too, and they probably won't even know what you're talking about.

A regulated supply actively controls its output voltage. These contain additional circuitry that can tweak the output voltage up and down. This is done continuously to compensate for input voltage variations and variations in the current the load is drawing. A regulated 1 amp 12 volt power supply, for example, is going to put out pretty close to 12 V over its full AC input voltage range and as long as you don't draw more than 1 A from it.

Universal input

Since there is circuitry in the supply to tolerate some input voltage fluctuations, it's not much harder to make the valid input voltage range wider and cover any valid wall power found anywhere in the world. More and more supplies are being made like that, and are called universal input. This generally means they can run from 90-240 V AC, and that can be 50 or 60 Hz.

Minimum Load

Some power supplies, generally older switchers, have a minimum load requirement. This is usually 10% of full rated output current. For example, a 12 volt 2 amp supply with a minimum load requirement of 10% isn't guaranteed to work right unless you load it with at least 200 mA. This restriction is something you're only going to find in OEM models, meaning the supply is designed and sold to be embedded into someone else's equipment where the right kind of engineer will consider this issue carefully. I won't go into this more since this isn't going to come up on a consumer power supply.

Current Limit

All supplies have some maximum current they can provide and still stick to the remaining specs. For a "12 volt 1 amp" supply, that means all is fine as long as you don't try to draw more than the rated 1 A. There are various things a supply can do if you try to exceed the 1 A rating. It could simply blow a fuse.
Specialty OEM supplies that are stripped down for cost could catch fire or vanish into a greasy cloud of black smoke. However, nowadays, the most likely response is that the supply will drop its output voltage to whatever is necessary to not exceed the output current. This is called current limiting . Often the current limit is set a little higher than the rating to provide some margin. The "12 V 1 A" supply might limit the current to 1.1 A, for example. A device that is trying to draw the excessive current probably won't function correctly, but everything should stay safe, not catch fire, and recover nicely once the excessive load is removed. Ripple No supply, even a regulated one, can keep its output voltage exactly at the rating. Usually due to the way the supply works, there will be some frequency at which the output oscillates a little, or ripples . With unregulated supplies, the ripple is a direct function of the input AC. Basic transformer unregulated supplies fed from 60 Hz AC will generally ripple at 120 Hz, for example. The ripple of unregulated supplies can be fairly large. To abuse the 12 volt 1 amp example again, the ripple could easily be a volt or two at full load (1 A output current). Regulated supplies are usually switchers and therefore ripple at the switching frequency. A regulated 12 V 1 A switcher might ripple ±50 mV at 250 kHz, for example. The maximum ripple might not be at maximum output current.
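To put a rough number on that last point, here is a minimal Python sketch using the standard full-wave ripple approximation; the capacitor value is an illustrative assumption, not taken from any particular supply:

```python
f_line = 60.0       # mains frequency, Hz; full-wave rectification -> 120 Hz ripple
c_filter = 4700e-6  # assumed filter capacitor, F
i_load = 1.0        # load current, A

# Standard approximation for full-wave rectified ripple (ignores diode
# drops and transformer resistance): Vripple ~= I / (2 * f * C)
v_ripple = i_load / (2 * f_line * c_filter)
print(f"estimated ripple: {v_ripple:.1f} V peak-to-peak")  # ~1.8 V, i.e. "a volt or two"
```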
{ "source": [ "https://electronics.stackexchange.com/questions/34745", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4512/" ] }
34,843
I'm looking for the best RC time constant, and the reasoning behind it, for an RC filter on a PWM output used to convert the digital signal to analog, based on duty cycle, frequency, and other parameters. The PWM frequency is 10 kHz.
The best RC is infinite: then you have a perfectly ripple-less DC output. The problem is that it also takes forever to respond to changes in the duty cycle. So it's always a tradeoff. A first-order RC filter has a cutoff frequency of \$ f_c = \dfrac{1}{2 \pi RC} \$ and a roll-off of 6 dB/octave = 20 dB/decade. The graph shows the frequency characteristic for a 0.1 Hz (blue), a 1 Hz (purple) and a 10 Hz (the other color) cutoff frequency. So we can see that for the 0.1 Hz filter the 10 kHz fundamental of the PWM signal is suppressed by 100 dB, that's not bad; this will give very low ripple. But! This graph shows the step response for the three cutoff frequencies. A change in duty cycle is a step in the DC level, and some shifts in the harmonics of the 10 kHz signal. The curve with the best 10 kHz suppression is the slowest to respond; the x-axis is in seconds. This graph shows the response of a 30 µs RC time (cutoff frequency 5 kHz) for a 50 % duty cycle 10 kHz signal. There's an enormous ripple, but it responds to the change from 0 % duty cycle in 2 periods, or 200 µs. This one is a 300 µs RC time (cutoff frequency 500 Hz). Still some ripple, but going from 0 % to 50 % duty cycle takes about 10 periods, or 1 ms. Further increasing RC to milliseconds will decrease ripple further and increase reaction time. It all depends on how much ripple you can afford and how fast you want the filter to react to duty cycle changes. This web page calculates that for R = 16 kΩ and C = 1 µF we have a cutoff frequency of 10 Hz, a settling time to 90 % of 37 ms, for a peak-to-peak ripple of 8 mV at 5 V maximum.

edit

You can improve your filter by going to higher orders: The blue curve is our simple RC filter with a 20 dB/decade roll-off. A second-order filter (purple) has a 40 dB/decade roll-off, so for the same cutoff frequency it will have 120 dB suppression at 10 kHz instead of 60 dB. These graphs are pretty ideal and can be best attained with active filters, like a Sallen-Key.

Equations

Peak-to-peak ripple voltage for a first-order RC filter as a function of PWM frequency and RC time constant: \$ V_{ripple} = \dfrac{ e^{\dfrac{-d}{f_{PWM} RC}} \cdot (e^{\dfrac{1}{f_{PWM} RC}} - e^{\dfrac{d}{f_{PWM} RC}}) \cdot (1 - e^{\dfrac{d}{f_{PWM} RC}}) }{1 - e^{\dfrac{1}{f_{PWM} RC}}} \cdot V_+\$ E&OE. "d" is the duty cycle, 0..1. Ripple is the largest for d = 0.5. Step response to 99 % of end value is 5 x RC. Cutoff frequency for the Sallen-Key filter: \$ f_c = \dfrac{1}{2 \pi \sqrt{R1 \text{ } R2 \text{ } C1 \text{ } C2}} \$ For a Butterworth filter (maximum flat): R1 = R2, C1 = C2
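For readers who want to plug in their own values, here is a small Python sketch that evaluates the ripple formula and settling rules above, checked against the quoted R = 16 kΩ, C = 1 µF example; only figures from the answer are used:

```python
import math

def pwm_ripple(f_pwm, rc, d, v_supply):
    """Peak-to-peak ripple after a first-order RC filter (formula above)."""
    a = 1.0 / (f_pwm * rc)
    num = math.exp(-d * a) * (math.exp(a) - math.exp(d * a)) * (1 - math.exp(d * a))
    return num / (1 - math.exp(a)) * v_supply

r, c, f_pwm, v = 16e3, 1e-6, 10e3, 5.0
rc = r * c
print(f"cutoff frequency: {1 / (2 * math.pi * rc):.1f} Hz")                     # ~10 Hz
print(f"ripple at d=0.5:  {abs(pwm_ripple(f_pwm, rc, 0.5, v)) * 1e3:.1f} mV")   # ~7.8 mV, the quoted 8 mV
print(f"settling to 90 %: {math.log(10) * rc * 1e3:.0f} ms")                    # ~37 ms
print(f"settling to 99 %: {5 * rc * 1e3:.0f} ms")                               # 80 ms (5 x RC rule)
```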
{ "source": [ "https://electronics.stackexchange.com/questions/34843", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10612/" ] }
34,917
I am almost done routing a board. However, Eagle is telling me that there is still one more unrouted airwire. I have looked, but I just can't seem to find it. Is there a way to make Eagle tell me where it is?
I can think of three options: Zoom out as much as you can, then use the route tool on the tiny board; this catches the airwire, and you can then zoom in again and route it. You can also disable the top and bottom layers so the airwire becomes more visible. Yet another option is to run the provided "length.ulp" script (File->Run... or the ULP button). This script shows a list of all the nets; on that list there is a column "Unrouted", and if a net is not completely routed, a value appears there instead of "--". You can then type "show net_name " on the command line to highlight it.
{ "source": [ "https://electronics.stackexchange.com/questions/34917", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/6138/" ] }
34,959
I've been reading an article ( TheMagPi eMagazine) relating to the Raspberry Pi; "An ARM GNU/Linux box for $25." In the article, page 17 at the bottom it shows an area on the Pi where a track zigzags next to a straight one with the explanation text: The “wiggles” in tracks, ensure signals are matched electrically, reducing interference and signal delay. This is particularly important for high speed video data and HDMI signals. I have very limited knowledge of electrical engineering so perhaps this is a very simple question but why would you incorporate these 'wiggles' in a PCB design? I realise the quote gives me an answer and I sort of understand the interference point due to problems with power cables and coaxial cables running next to each other but I'd appreciate something assuming very little knowledge that explained why you would get the problems and how wiggles help. For example, why isn't the board covered in wiggles?
The wiggle is present on the inside track at corners (or on whichever track ends up shorter overall) to equalise the track lengths of a differential pair - that is, any two wires that use differential signalling to carry data. If the tracks were not the same length, the noise-cancelling benefit of differential signalling would be lost. While the physical-layer components of most modern LVDS signalling (PCIe, HDMI, DVI) include de-skew or 'elastic' buffers to compensate for differing track lengths between pairs, skew within a pair must be avoided with these physical layout techniques. Following comments by OP: Taking Gigabit Ethernet as an example, as this might be more familiar to you: The CAT6 cable has eight wires, which, if you tear open the outer insulating sheath, are twisted together in pairs, so wires 1+2 are twisted together as pair one. Next to this lies pair 2, which is wires 3+4 twisted together; pair 3 comprises wires 5+6 twisted together, etc. It's important to keep the two wires of a pair the same length, because they carry copies of the same signal sent with opposite polarities (one is positive, while the other is negative). If and only if the wires are the same length, the signals arrive together (given the fixed propagation speed of the signal), which allows any common-mode electrical interference to be rejected in the magnetic coupling. The four pairs themselves, however, do not have to be exactly the same length, because the gigabit auto-negotiation process calibrates the elastic buffers (and echo-cancelling units) such that any minute discrepancies in arrival time are removed before the higher-level components do their work. The same thing is happening on this circuit board. The immediately adjacent/close circuit board traces are "the pairs" and are kept the same length to allow the differential receivers to reject noise, although electrically rather than magnetically. You can see the HDMI connector carries several such pairs, and no attempt is made to keep one pair the same length as the pair next to it ("between pairs"). There are, however, some limits in the size of the elastic buffers (in bytes), after which the cable becomes non-operative or downgrades. It would be fun to experiment and find the limits in millimetres. This picture of a HDMI plug shows the differential pairs:
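To put a rough number on intra-pair skew, here is a sketch with assumed values: the effective dielectric constant is a typical figure for FR-4 microstrip, and the length mismatch is hypothetical, not measured from the Raspberry Pi board.

```python
c = 3.0e8              # speed of light, m/s
eps_eff = 3.4          # assumed effective dielectric constant (FR-4 microstrip)
v = c / eps_eff**0.5   # signal propagation speed, ~1.6e8 m/s

mismatch_mm = 5.0      # hypothetical intra-pair length difference
skew_s = mismatch_mm * 1e-3 / v
print(f"{mismatch_mm} mm mismatch -> {skew_s * 1e12:.0f} ps of skew")  # ~31 ps
# That's a meaningful slice of a sub-nanosecond HDMI bit period, which is
# why even millimetre-scale mismatches get compensated with wiggles.
```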
{ "source": [ "https://electronics.stackexchange.com/questions/34959", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/7128/" ] }
35,462
What is the extra, 5th, pin on micro usb 2.0 adapters for? Here is an image with the different connectors. Most of them have 5 pins, but the A-type host only has four. (source: wikimedia.org )
It's for On-The-Go , to select which device is the host or slave: The OTG cable has a micro-A plug on one side, and a micro-B plug on the other (it cannot have two plugs of the same type). OTG adds a fifth pin to the standard USB connector, called the ID pin; the micro-A plug has the ID pin grounded, while the ID in the micro-B plug is floating. The device that has a micro-A plugged in becomes an OTG A-device, and the one that has a micro-B plugged in becomes a B-device. The type of plug inserted is detected by the state of the ID pin.
{ "source": [ "https://electronics.stackexchange.com/questions/35462", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10423/" ] }
35,618
Is there a minimum clock rate specified by I2C? I know the most widely used clock rate is 100kHz and there is a "fast" mode of 400kHz supported by some devices, and a faster yet mode supported by other devices (I think 1MHz?). Since the SCK signal is generated by the master I presume one could operate at a much slower speed than any of those - is there a lower bound in practice? To what extent do slave devices care about the clock rate (e.g. is it common for them to have short timeouts)? The reason I'm asking is that I'm wondering if could possibly run I2C over a longer distance (e.g. 20 feet) to program I2C EEPROMs reliably in a production tester setup. I'm assuming it won't work reliably over that distance at the standard data rates. Am I off-base entirely in thinking that slowing down the clock speed will improve reliability over longer distances (e.g. is it really a question of drive strength and rise/fall times)?
No, there is no minimum frequency; the minimum clock frequency is 0, or DC. See the specification , page 48. But you will have to pay attention to rise and fall times. Those are 1000 ns and 300 ns maximum, respectively. And a longer cable, with more capacitance, will influence the edges regardless of frequency. It's that capacitance, together with the pull-up resistance, that will determine the rise time. Fall time is not a problem, because the FET which pulls the line low has a very low resistance, so the time constant for falling edges will be very low as well. So we're left with the rise time. To get a 1000 ns rise time on a 200 pF cable your pull-up resistors shouldn't be larger than 2.2 kΩ (rise time to 90 % of end value). The graph shows maximum pull-up resistance (in Ω) versus cable capacitance (in pF) to get 1000 ns rising edges. Note that I2C devices don't have to sink more than 3 mA, therefore at 3.3 V the bus capacitance shouldn't be higher than about 395 pF, otherwise the pull-up resistance would have to be smaller than 1100 Ω and allow more than the 3 mA. That's the greenish dashed lines. For 5 V operation the allowed capacitance is even lower, 260 pF, for a 1667 Ω pull-up value (the purple dashed lines).
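A minimal Python sketch of the pull-up calculation above, using the answer's 90 %-of-end-value criterion (t_rise = RC x ln 10); only figures from the answer are used:

```python
import math

T_RISE = 1000e-9  # standard-mode rise-time budget, s (to 90 % of end value)

def max_pullup(c_bus_farads):
    # Rising to 90 % takes RC * ln(10), so R <= t_rise / (C * ln(10))
    return T_RISE / (c_bus_farads * math.log(10))

for c_pf in (100, 200, 395):
    print(f"{c_pf:3d} pF -> R <= {max_pullup(c_pf * 1e-12):.0f} ohm")
# 200 pF gives ~2.2 kOhm as stated, and 395 pF lands on ~1100 ohm,
# the 3 mA sink limit at 3.3 V (3.3 V / 3 mA = 1100 ohm).
```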
{ "source": [ "https://electronics.stackexchange.com/questions/35618", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/771/" ] }
35,625
I am attempting to replace an old PLCC32 part that was directly soldered to the board with a new part of undecided form. We will definitely need an adapter as we have not been able to find a PLCC32 part that does what we need. I cannot use a PLCC adapter plug because there are also height restrictions. We are considering building a two-sided adapter board that has pads on the bottom side that match the PLCC32 layout on the current board, with the new layout on top. Theoretically, the adapter board would be soldered directly to the old board and the new chip on top of the adapter. However, I have not seen any examples of soldering two PCBs directly together in this manner, which makes me think it is a likely to be a bad idea. Can anyone comment on this sort of custom adapter?
No problem. I had to look for a picture that illustrates the technique: You make a PCB with plated through holes on the PLCC's pads, so at a 1.27 mm pitch, and mill the four sides so that you get the half holes like in the picture. These are easily solderable on the old PLCC footprint, it's an often used technique, called castellation . A picture of a complete board: and another one: or this one from a question posted 1 minute ago: You get the idea. You'll have to find a part which fits inside this small PCB, but given the miniaturization of the last years that may not be a problem. edit 2012-07-15 QuestionMan suggested to make the PCB a bit larger so that the PLCC's solder pads are under it. For BGAs the solder balls are also under the IC, but that's solid solder balls, not paste, and I don't know how solder paste will behave when squeezed between two PCBs. But today I bumped into this IC package: It's the "Staggered Dual-row MicroLeadFrame® Package (MLF)" of the ATMega8HVD , and it has pins under the IC as well. This is 3.5 mm x 6.5 mm, and weighs a lot less than the small PCB. That may be important, because thanks to the low weight capillary forces of the molten solder paste can pull the IC to its exact position. I'm not sure if that will also be the case for that PCB, and then positioning may be a problem.
{ "source": [ "https://electronics.stackexchange.com/questions/35625", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5694/" ] }
35,630
I am trying to get my Arduino working over wireless (via Bluetooth). I would like to send a serial signal through it via Bluetooth, but I am having some difficulty getting the Bluetooth module I purchased to actually show up on any of my computer's Bluetooth scans. This is the skimpy datasheet for the module. It was made by someone in China (an individual, not a company - and that is why I do not really understand it). Any help as to how I can get this connected to my PC would be great. ---Thank you--- Here is the bluetooth module that I have... Here is the complete setup...
{ "source": [ "https://electronics.stackexchange.com/questions/35630", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10423/" ] }
35,663
I've been looking at some microcontrollers and I've seen they've got some "weird" minimum operating temperatures, like -25 degrees or -10 degrees etc. But I can't really understand why there is a minimum, a maximum I do understand because everything melts down and breaks, the resistance increases making the signals too weak. But when you go to the cold side. Everything kind of gets better and better, the resistance gets reduced, everything gets more stable. But yet... the minimum operational temperature is -25 degrees... Why is it not 0 Kelvin? Because I was thinking about the mars-rover and other satellites, when they are behind the sun they are operating at nearly 0-50 kelvin, the mars-rover... according to wiki it gets as cold as −87 °C (−125 °F). And this is still very much more cold than -25 degrees. So, can anyone please explain to me why microcontrollers do have minimum operational temperature? The more thorough the better.
2nd Edit! Modified my answer about semi-conductors based on jk's answer below; read the history if you want to see the wrong bits I modified! Everything gets weird within certain limits. I mean, sure, the resistance improves in conductors, but it increases in semi-conductors, and that change affects how the IC works. Remember that transistors work on the basis that you can modify their resistance, and if the temperature drops so low that you can no longer decrease their resistance, you've got an issue! Imagine that suddenly your semi-conductor essentially became a resistor... how do you control it? It no longer behaves the same way! Now I'm a bit confused about where you're getting the -25°C, as the industrial/military spec should put it at -40°C for the minimum operating temp. But for the space question, I can answer that as I work in a space lab! In general you have three thermal concerns in space: 1) In space, you only radiate heat. Radiation is a terrible way to get rid of heat. In the atmosphere, you conduct heat into the air around you, which makes cooling a lot easier. So in space, you have to put big heatsinks on to get the heat into larger radiative surfaces. 2) If you have a component which doesn't generate heat, then space is happy to let you get really friggin' cold! In general, what you do is have active heating elements to keep warm those components which don't generate more heat than they radiate but have thermal limits. 3) Heat swings are common because you will exit and re-enter the sun's rays. Thus you need active thermal management where you have a big heatsink which can radiate heat when it's hot, and a heater for when it's not. You can also get extended-temperature-range devices which go lower and higher, but there's pretty much always a limit. Some of those limits exist because the cold temperature will crack the die, since the metal will shrink more than the plastic (or vice versa), which is why they list limits for storage as well! The limit is mostly in materials. You also tend to get space-rated chips made out of ceramic for the packaging, which can also raise or lower the thermal limits. Anyway, I hope that explains it for you. I can try and answer any other questions, but I'll admit the physics of low-temperature semiconductors is not my forte! 1st Edit: Here's a link to a wikipedia entry about the idea that at lower temperatures there are fewer electrons which are excited enough to generate a current flow through a semiconductor lattice. This should give you a good idea of why the resistance becomes higher, and why 0 Kelvin would have never been an option.
{ "source": [ "https://electronics.stackexchange.com/questions/35663", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8713/" ] }
35,773
I'd like to start implementing a system consisting of N microcontrollers (N >= 2 MCUs), but I would like to know the possibilities for letting them communicate with one another. Ideally, (N-1) microcontrollers are placed inside the house acting as clients, while the last one (the "server") is connected to a PC via USB. The problem I have right now is how to connect these (N-1) microcontrollers to the "server". The client MCUs perform very simple tasks, so it may not be a good solution to use ARMs to do such simple jobs just because they provide CAN / PHY-MAC. The communication will not happen more than once every few minutes for most of the devices, and on demand for others. Speed is not very critical (messages are short): 1 Mbit/s is, I think, WAY overkill for my purposes. The MCUs I plan on using are the following:

- Atmel AVR Tiny / Mega
- TI MSP430
- ARM Cortex M3/M4 (possibly Atmel AVR UC3 - 32-bit)

I'd like to avoid PICs if possible (personal choice), simply because there are fewer possibilities to program them (all the above have more or less open source tools as well as some official tools). Right now I have come up with these possibilities:

- Simple GPIO to send data (say > 16 bits at HIGH to indicate start of message, > 16 bits at LOW to indicate end of message). However, it has to run at a standard frequency << (frequency_client, frequency_server) to be able to detect all bits. Only needs one cable per client MCU.
- RS-232: I think this is by far the most commonly used communication protocol, but I don't know how well it scales. I'm considering up to 64 client MCUs right now (probably more later).
- USB: AFAIK this is mostly like RS-232, but I don't think it scales very well in this case (though USB supports lots of devices - 127 per bus - it may be overly complicated for this application).
- RJ45 / Ethernet: this is what I'd really love to use, because it allows transmission over long distances without a problem (at least with shielded > Cat 6 cable). The problem is the cost (PHY, MAC, transformer, ...). I don't know if you can actually solder it well at home though.
- Wireless / ZigBee: modules are very expensive, though it may be the way to go in order to avoid "spaghetti" behind the desk.
- RF modules / transceivers: I'm speaking of those in the 300 MHz - 1 GHz band, so they should be difficult to solder at home. The modules are all built-in, but they are about as expensive as ZigBee (at least the RF modules at Mouser; at Sparkfun there seem to be cheaper ones).
- CAN? It seems to be very robust. Even though I don't plan to use it in automotive applications, it may still be a good alternative.
- I²C / SPI / UART? Again - better to avoid "spaghetti" with the cables if possible.
- PLC (power-line communication) is not really an option. Performance degrades pretty fast as the length increases and depends on the capacitive load of the power network. I think price-wise it's about the same as Ethernet.

Furthermore, which protocol would be "better" in case of simultaneous transmissions? Let's assume the rare case that at the very same instant two devices begin transmitting: which protocol provides the best "conflict management system" / "collision management system"?

To sum it up: I'd like to hear what may be the best solution for a distributed client system that does very light data communication, considering flexibility (max number of devices, conflict/collision management, ...), price, ease of building at home (soldering), ... I'd like to avoid spending $20 just on the communication module, but at the same time having 30 wires behind the desk would suck. The solution I'm imagining right now would be to do basic communication between nearby MCUs by GPIO or RS-232 ( cheap !) and use Ethernet / ZigBee / Wi-Fi on one MCU per "zone" to communicate with the server ( expensive , but still a lot cheaper than one Ethernet module per client MCU). Instead of cables it may also be possible to use optical fiber, though additional conversions are necessary, and I'm not sure it'd be the best solution in this case. I'd like to hear additional details on these options.
CAN sounds the most applicable in this case. The distances inside a house can be handled by CAN at 500 kbit/s, which sounds like plenty of bandwidth for your needs. The last node can be an off-the-shelf USB-to-CAN interface. That allows software in the computer to send CAN messages and see all the messages on the bus. The rest is software, if you want to present this to the outside world as a TCP server or something. CAN is the only communications means you mentioned that is actually a bus, except for rolling your own with I/O lines. All the others are point-to-point, including Ethernet. Ethernet can be made to logically look like a bus with switches, but individual connections are still point-to-point and getting the logical bus topology will be expensive. The firmware overhead on each processor is also considerably more than with CAN. The nice part about CAN is that the lowest few protocol layers are handled in the hardware. For example, multiple nodes can try to transmit at the same time, but the hardware takes care of detecting and dealing with collisions. The hardware takes care of sending and receiving whole packets, including CRC checksum generation and validation. Your reasons for avoiding PICs don't make any sense. There are many designs for programmers out there for building your own. One is my LProg , with the schematic available from the bottom of that page. However, building your own won't be cost-effective unless you value your time at pennies/hour. It's also about more than just the programmer. You'll need something that aids with debugging. The Microchip PICkit 2 or 3 are very low-cost programmers and debuggers. Although I have no personal experience with them, I hear of others using them routinely. Added: I see some recommendations for RS-485, but that is not a good idea compared to CAN. RS-485 is an electrical-only standard. It is a differential bus, so it does allow for multiple nodes and has good noise immunity. However, CAN has all that too, plus a lot more. CAN is also usually implemented as a differential bus. Some argue that RS-485 is simple to interface to electrically. This is true, but so is CAN. Either way a single chip does it. In the case of CAN, the MCP2551 is a good example. So CAN and RS-485 have pretty much the same advantages electrically. The big advantage of CAN is above that layer. With RS-485 there is nothing above that layer. You are on your own. It is possible to design a protocol that deals with bus arbitration, packet verification, timeouts, retries, etc., but actually getting this right is a lot more tricky than most people realize. The CAN protocol defines packets, checksums, collision handling, retries, etc. Not only is it already there, thought out, and tested, but the really big advantage is that it is implemented directly in silicon on many microcontrollers. The firmware interfaces to the CAN peripheral at the level of sending and receiving packets. For sending, the hardware does the collision detection, backoff, retry, and CRC checksum generation. For receiving, it does the packet detection, clock skew adjustment, and CRC checksum validation. Yes, the CAN peripheral will take more firmware to drive than a UART such as is often used with RS-485, but it takes a lot less code overall since the silicon handles so much of the low-level protocol details. In short, RS-485 is from a bygone era and makes little sense for new systems today. The main issue seems to be people who used RS-485 in the past clinging to it and thinking CAN is "complicated" somehow.
The low levels of CAN are complicated, but so is any competent RS-485 implementation. Note that several well-known protocols based on RS-485 have been replaced by newer versions based on CAN. NMEA 2000 is one example of such a newer CAN-based standard. There is another automotive standard, J1708 (based on RS-485), that is pretty much obsolete now next to the CAN-based OBD-II and J1939.
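For the PC "server" side, here is a minimal sketch of what talking to such a bus can look like, assuming the python-can package and a Linux SocketCAN adapter exposed as can0 (the 500 kbit/s bit rate is assumed to be configured on the interface beforehand, e.g. with ip link); the arbitration ID and payload bytes are made-up placeholders:

```python
import can

# Open the assumed SocketCAN interface "can0".
bus = can.Bus(channel="can0", interface="socketcan")

# Send one frame. The CAN controller handles arbitration, CRC, and
# retries in hardware, so software only deals in whole packets.
msg = can.Message(arbitration_id=0x123, data=[0x01, 0x02], is_extended_id=False)
bus.send(msg)

# Watch all traffic on the bus, as the PC-side "server" node would.
while True:
    rx = bus.recv(timeout=1.0)  # returns None on timeout
    if rx is not None:
        print(rx)
```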
{ "source": [ "https://electronics.stackexchange.com/questions/35773", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8575/" ] }
35,778
I've got a TMP36 temperature sensor whose value I'm trying to read using an Arduino Uno. I know that the sensor has an accuracy of ±2 degrees Celsius, but my readings peak far past this range. I soldered three wires onto the three legs of the sensor. I hadn't exposed the legs (and thus the sensor) to the heat of soldering for that long a time, but could this have damaged the sensor? Also, the three wires I've soldered on are about 50 cm in length and are twisted together; could this cause interference in the voltages?
{ "source": [ "https://electronics.stackexchange.com/questions/35778", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/6059/" ] }
36,086
Oli used this circuit in an answer, and it pops up a lot on Google Images too. But does it work? If it does, a theoretical explanation would be welcome.
According to this, the photodiode does indeed produce a current even when there is zero voltage across it; it's the short-circuit current . Note that the reference direction of \$I_S\$ in the question's diagram is opposite that of the \$I_{SC}\$ of the diode, so the output voltage is: \$V_{OUT} = - I_S \cdot R_F = I_{SC} \cdot R_F\$ I found the above here . A reasonable question to ask is: how can a current be produced with zero voltage? Remember that there's an internal E field through the depletion region even when the diode terminals are shorted together. Briefly, light-generated EHPs (electron-hole pairs) in the vicinity of the depletion region are separated by the E field, resulting in charge accumulating on the P and N sides (that's how \$V_{OC}\$ is developed). A short circuit allows a current to restore charge balance.
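As a quick worked example of using \$V_{OUT} = I_{SC} \cdot R_F\$ to choose the feedback resistor (the photocurrent and target output below are assumed illustrative values, not datasheet figures):

```python
i_sc = 10e-6        # assumed short-circuit photocurrent at full light, A
v_full_scale = 3.3  # desired op-amp output at full illumination, V

r_f = v_full_scale / i_sc            # V_OUT = I_SC * R_F  =>  R_F = V_OUT / I_SC
print(f"R_F = {r_f / 1e3:.0f} kOhm")  # 330 kOhm sets the current-to-voltage gain
```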
{ "source": [ "https://electronics.stackexchange.com/questions/36086", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2064/" ] }
36,098
I'm looking to drive a magnetic door lock from an Arduino. I've found a question about driving a solenoid from an Arduino , which includes a circuit that looks perfect for this kind of situtation: What I don't understand is how to select a MOSFET for the job. What properties should I look for, if I know my logic level, device voltage and device current? In this case it's 5V logic, and the load runs at 12V / 500mA, but it'd be nice to know the general rule.
You've got a luxury problem: there are thousands of FETs suitable for your job. 1) The logic level. You have 5 V, and probably less than 200 mV or so when off. What you need is \$V_{GS(th)}\$, the gate's threshold voltage, at which the FET starts conducting. It's given for a specific current, which you want to keep an eye on too, because it may be different for different FETs. Useful for you would be a maximum of 3 V @ 250 µA, like for the FDC855N . At 200 mV (or lower) you'll have a leakage current much lower than that. 2) Maximum \$I_D\$ continuous. 6.1 A. OK. 3) The \$I_D / V_{DS}\$ graph: This one's again for the FDC855N. It shows the current the FET will sink at a given gate voltage. You can see that it's 8 A for a 3.5 V gate voltage, so that's OK for your application. 4) \$R_{DS(ON)}\$. The on-resistance determines the power dissipation. For the FDC855N it's a maximum of 36 mΩ at 4.5 V gate voltage; at 5 V it will be a little less. At 500 mA that will cause a 9 mW dissipation. That's more than good enough. You can find FETs with better figures, but there's really no need to pay the extra price for them. 5) \$V_{DS}\$. The maximum drain-source voltage. 30 V for the FDC855N, so OK for your 12 V application. 6) Package. You may want a PTH package or SMT. The FDC855N comes in a very small SuperSOT-6 package, which is OK given the low power dissipation. So the FDC855N will do nicely. If you want, you can have a look at Digi-Key's offering. They have excellent selection tools, and now you know the parameters to look out for.
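To make the checklist concrete, here is a tiny Python sketch that runs the numbers above; the requirement figures come from the question and the FET figures are the FDC855N values quoted in this answer:

```python
# Requirements from the question, FET values quoted above.
req = {"v_gate": 5.0, "v_load": 12.0, "i_load": 0.5}
fet = {"vgs_th_max": 3.0, "id_max": 6.1, "rds_on": 0.036, "vds_max": 30.0}

assert fet["vgs_th_max"] < req["v_gate"]  # gate drive turns the FET fully on
assert fet["id_max"] > req["i_load"]      # current headroom
assert fet["vds_max"] > req["v_load"]     # voltage headroom

p_diss = req["i_load"] ** 2 * fet["rds_on"]  # P = I^2 * Rds(on)
print(f"dissipation: {p_diss * 1e3:.0f} mW")  # 9 mW, as stated above
```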
{ "source": [ "https://electronics.stackexchange.com/questions/36098", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/6585/" ] }
36,308
I am trying to figure out the difference between crystals, oscillators, and resonators. I'm starting to grasp it, but I still have some questions. From my understanding, an oscillator is built from a crystal and two capacitors. What is a resonator then? Is it a difference in terminology? If an oscillator and a resonator are similar, why do these two items: http://www.digikey.com/product-detail/en/HWZT-16.00MD/535-9379-ND/675574 http://www.digikey.com/product-detail/en/FCR16.0M2G/445-1646-ND/653108 have two pins and no ground, whereas this one http://www.digikey.com/product-detail/en/ZTT-16.00MX/X908-ND/170095 has three pins, one of which is a ground? Will any of these three devices work as an external clock for a microcontroller? PS: Bonus points for an explanation of how the capacitors help the crystal work properly. :)
Both ceramic resonators and quartz crystals work on the same principle: they vibrate mechanically when an AC signal is applied to them. Quartz crystals are more accurate and temperature-stable than ceramic resonators. The resonator or crystal itself has two connections. On the left the crystal, on the right the ceramic resonator. Like you say, the oscillator needs extra components, the two capacitors. The active part which makes the oscillator work is an amplifier, which supplies the energy to keep the oscillation going. Some microcontrollers have a low-frequency oscillator for a 32.768 kHz crystal which often has the capacitors built in, so that you only need two connections for the crystal (left). Most oscillators, however, need the capacitors externally, and then you have three connections: input from the amplifier, output to the amplifier, and ground for the capacitors. A resonator with three pins has the capacitors integrated. The function of the capacitors: in order to oscillate, the closed loop amplifier-crystal must have a total phase shift of 360°. The amplifier is inverting, so that's 180°. Together with the capacitors, the crystal takes care of the other 180°. edit When you switch a crystal oscillator on it's just an amplifier; you don't get the desired frequency yet. The only thing that's there is low-level noise over a wide bandwidth. The oscillator will amplify that noise and pass it through the crystal, upon which it enters the oscillator again, which amplifies it again, and so on. Shouldn't that just give you a lot of noise? No, the crystal's properties are such that it will pass only a very small amount of the noise, around its resonance frequency. All the rest will be attenuated. So in the end it's only that resonance frequency which is left, and then we're oscillating. You can compare it with a trampoline. Imagine a bunch of kids jumping on it randomly. The trampoline doesn't move much and the kids have to make a lot of effort to jump just 20 cm up. But after some time they will start to synchronize and the trampoline will follow the jumping. The kids will jump higher and higher with less effort. The trampoline will oscillate at its resonance frequency (about 1 Hz) and it will be hard to jump faster or slower. Those other frequencies are the ones that get filtered out. The kid jumping on the trampoline is the amplifier; she supplies the energy to keep the oscillation going. Further reading MSP430 32 kHz crystal oscillators
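As a rough sketch of how the two external capacitors are typically sized: a common rule of thumb is that the crystal should see its specified load capacitance C_L, which for two equal capacitors is half of each one plus board strays. The values below are illustrative assumptions; check the crystal's datasheet for the real C_L.

```python
c_load = 18e-12   # crystal's specified load capacitance C_L, F (assumed)
c_stray = 5e-12   # assumed stray capacitance of pins and traces, F

# C_L = (C1 * C2) / (C1 + C2) + C_stray; with C1 = C2 = C this is C/2 + C_stray
c_each = 2 * (c_load - c_stray)
print(f"use about {c_each * 1e12:.0f} pF on each side")  # ~26 pF -> pick 22-27 pF
```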
{ "source": [ "https://electronics.stackexchange.com/questions/36308", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/6138/" ] }
36,317
I am fairly new to "electrical" engineering as a hobby/ past time. I have always loved electricity and have made simple circuits since I was a little baby. I am now working on one of my first arduino projects and I started thinking about what color wires I should use for the different I/O and serial communication. I know it doesn't matter, but I am curious: are there standard wire colors for things such as Tx and Rx, Digital IO, analogue IO? It is very common that Red is "Positive" and Black is "Negative." I have also noticed that USB data wires are usually green and white. Are there other standards that are commonly used for other applications? Can someone give me a list please?
Obligatory link to the XKCD comic about standards: So yeah, there are standards. There are so many of them that it's effectively the same as having no standard, as every possible wire arrangement likely has a standard that describes it.
{ "source": [ "https://electronics.stackexchange.com/questions/36317", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10423/" ] }
36,341
I just came across the word 'Burden Resistor'. Is it any different from a normal resistor? If it is different, where can I possibly get one? Sparkfun hasn't got one listed. Any help is appreciated. I am trying to build a current sensing circuit .
No, they're the same components as regular resistors. The name refers to the function, not to the resistor's construction. Current transformers act as current sources and need a load. A current source is the dual of a voltage source, and just like you shouldn't short-circuit a voltage source because it would cause infinite current, you shouldn't leave a current source open, as it would cause an infinite voltage. The burden resistor converts the current to a limited voltage.
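As a worked example of sizing a burden resistor for a current transformer (the CT ratio, measured current, and ADC range below are illustrative assumptions, not from any particular part):

```python
import math

turns = 2000          # assumed CT turns ratio, e.g. a 100 A : 50 mA part
i_primary_rms = 30.0  # largest mains current to measure, A RMS
v_peak = 1.65         # half of an assumed 3.3 V ADC input range, V

i_secondary_peak = i_primary_rms * math.sqrt(2) / turns  # CT acts as a current source
r_burden = v_peak / i_secondary_peak                      # V = I * R sets the scale
print(f"burden resistor ~ {r_burden:.0f} ohm")            # ~78 ohm -> pick a standard 75 ohm
```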
{ "source": [ "https://electronics.stackexchange.com/questions/36341", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/11009/" ] }
36,504
Like many people I'm often annoyed by ugly cable clutter, especially when it's long distance. In my opinion it would be ideal if cables could be eliminated altogether, so that you'd just have two corresponding plugs without the cable, which you plugin at either end and transmit their information wirelessly. Maybe that would be a little futuristic for now, but what about transporting data over a wireless computer network (or two paired bluetooth transmitters)? That's perfectly possible, and all you'd need is a little box that can convert data to a format that can be transmitted over a wireless network, and a box that converts it back to the appropriate format at the other end. Over time this technology would surely be miniaturized until it fits in a plug. (you could use this for HDMI, audio or other information for example) Why doesn't this technology exist already? Am I overlooking something, or are there some difficulties with it that have to be solved first?
Wireless technology is great and can be used in all sorts of scenarios, but it's complex and hard to design for. Wires are in fact superior in many ways. ( Taken from: Essentials of Short Range Wireless Standards ) With wires:

- Range is not an issue - just add more cable
- Latency is excellent - what goes in one end appears immediately at the other
- They're transparent to data protocols and formats
- Throughput is excellent
- No issue with security - you know what you plug it into
- Interoperability is excellent - at most, you only need to change the plug
- Power consumption may be higher, but the cable can carry power
- They can be specified on a single page
- Topology is simple - it's typically one-to-one
- Robustness to interference is generally a minor issue
- Backwards compatibility is normally no more difficult than changing a plug
- There's generally no license agreement, no qualification requirements and no export controls
{ "source": [ "https://electronics.stackexchange.com/questions/36504", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/-1/" ] }
36,549
For instance, the J108 JFET is listed as "N-Channel Switch", and the datasheet mentions the RDS on resistance, while the J201 JFET is listed as "N-Channel General Purpose Amplifier" (and the on-resistance would have to be deduced from the IDS curves?) Is there a difference in the way these are designed and manufactured? Can one type generally be used in the other application, but not vice versa? Related, for BJTs: What's the difference between small signal bipolar junction transistors (BJTs) marketed as switches vs. amplifiers?
There are various choices that can be made in the design of transistors, with some tradeoffs being better for switching applications and others for "linear" applications. Switches are intended to spend most of their time fully on or fully off. The on and off states are therefore important, with the response curve of the in-between states being not too relevant. For most applications, the off-state leakage current of most transistors is low enough not to matter. For switching applications, one of the most important parameters is how fully "on" the on state is, as quantified by Rdson in FETs and the saturation voltage and current in bipolars. This is why switching FETs will have Rdson specs, not only to show how good they are at being fully on, but because it is also important for designers of the circuit to know how much voltage they will drop and how much heat they will dissipate. Transistors used as general-purpose amplifiers operate in the "linear" region. They may not be all that linear in their characteristics, but this is the name used in the industry to denote the in-between range where the transistor is neither fully on nor fully off. In fact, for amplifier use you want to never quite hit either of the limit states. The Rdson is therefore not that relevant, since you plan to never be in that state. You do however want to know how the device reacts to various combinations of gate voltage and drain voltage, because you plan to use it across a wide continuum of those. There are tradeoffs the transistor designer can make that favor a more proportional response to gate voltage versus the best fully-on effective resistance. This is why some transistors are promoted as switches versus for linear operations. The datasheets then also focus on the specs most relevant to the circuit designer for the intended use.
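A quick worked comparison shows why Rdson dominates for switches while linear operation is governed by Vds x Id instead; all the numbers are illustrative assumptions:

```python
i_d = 2.0          # load current, A
rds_on = 0.05      # assumed Rds(on) of a switching FET, ohm
v_ds_linear = 6.0  # assumed drain-source drop when used as an amplifier, V

p_switch = i_d ** 2 * rds_on  # fully on: 0.2 W, a small SMT part copes easily
p_linear = v_ds_linear * i_d  # linear region: 12 W, needs a real heatsink
print(f"switch: {p_switch:.1f} W   linear: {p_linear:.1f} W")
```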
{ "source": [ "https://electronics.stackexchange.com/questions/36549", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/142/" ] }