2,272
What is a decoupling capacitor (or smoothing capacitor as referred to in the link below)? How do I know if I need one and if so, what size and where it needs to go? This question mentions many chips needing one between VCC and GND; how do I know if a specific chip is one? Would an SN74195N 4-bit parallel access shift register used with an Arduino need one? (To use my current project as an example) Why or why not? I feel like I'm starting to understand the basics of resistors and some places they're used, what values should be used in said places, etc, and I'd like to understand capacitors at the basic level as well.
I was the one that asked that question. Here is my rudimentary understanding: You attach capacitors across \$V_{CC}\$/GND to try to keep the voltage more constant. Under DC conditions, a capacitor acts as an open circuit, so there is no problem with shorting there. As your device is powered up (\$V_{CC}\$ = 5V), the capacitor charges up to the supply voltage and waits until there is a change in the voltage between \$V_{CC}\$ and GND (\$V_{CC}\$ = 4.5V). At this point, the capacitor will discharge to try to bring the voltage back to the level it was charged to (5V). This is called "smoothing" (or at least that is what I call it) because the change in voltage will be less pronounced. Ultimately, the voltage will not ever return to 5V through a capacitor; rather, the capacitor will discharge until the voltage across it is equal to the supply voltage (an equilibrium). A similar mechanism is responsible for smoothing if \$V_{CC}\$ increases too far beyond its average (\$V_{CC}\$ = 5.5V perhaps). As for why you need them, they are very important in high-speed digital and analog circuits. I can't imagine you would need one for an SN74195, but it can't hurt!
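A back-of-envelope sketch (my own numbers, not from the answer above) of the smoothing effect being described: while the cap alone supplies a load transient, the rail sags by ΔV = I·Δt/C.

```python
# Back-of-envelope sketch: how much a supply rail sags when a decoupling
# capacitor alone supplies a transient load current.
# dV = I * dt / C  (from Q = C*V and I = dQ/dt)

def droop(i_amps, dt_seconds, c_farads):
    """Voltage sag across the cap while it sources the transient current."""
    return i_amps * dt_seconds / c_farads

# A logic IC drawing a 50 mA spike for 10 ns from a 100 nF decoupling cap:
dv = droop(0.050, 10e-9, 100e-9)
print(f"rail droop: {dv*1000:.1f} mV")  # rail droop: 5.0 mV -- negligible
# Without a nearby cap, the same spike must come through the supply traces,
# whose inductance makes the sag far worse.
```

The component values here are hypothetical, but they show why even a modest 100 nF cap keeps the local rail steady during fast logic transitions.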
{ "source": [ "https://electronics.stackexchange.com/questions/2272", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/647/" ] }
2,324
A recent question asked about the advantages/disadvantages of various types of MCU. AVRs seemed not even worth a mention given the answers. Why then does it seem to an outsider that AVRs are experiencing a rush of popularity? Is this solely due to the Arduino, or is there something else that makes the AVR an especially good microcontroller?
The AVR family has a lot of good, inexpensive, hobbyist-friendly devices with nice peripherals, low power consumption, and good cross-platform support. Yes, Arduino is a big part of it. But I think that Arduino came to exist the way that it did -- and to the success that it has -- partly due to those features. Good: They work well. Easy to program in C for most basic functions. Adequate documentation. Inexpensive: Lots of $3-$5 parts, available from major distributors in small quantity. Hobbyist friendly: Parts in through-hole packages-- a big contrast to many of the chip families out there today. Newer AVR (e.g., xmega) devices are less so. Nice peripherals: Built-in oscillator, flash memory, on-board RAM, serial ports, ADC, EEPROM, and the other goodies that make it possible to run a single MCU on a protoboard to do basic stuff, without too much hassle. Low power consumption. AVR's major pitch point these days. Suckers can run on a battery almost forever if you know what you're doing. Good cross-platform support: The AVR was designed with C support in mind-- not as an afterthought. GCC support came early, and a big open source community developed around that. It's still one of the best MCUs that you can develop from any platform with free tools. This is a big one with respect to the other families, many of which use proprietary compilers or have lackluster gcc support. Even PIC was pretty late to the game with good free C compilers. As for why there wasn't much about it in the replies to your earlier question, I think that (1) you're seeing small sample bias and (2) many of the answers were specifically to discuss non-AVR solutions-- because so much of the discussion on this site is AVR/Arduino-centric. Most of the microcontroller families aren't represented in your list as of this writing-- including some that I use regularly, and others that are among the most popular in the world.
{ "source": [ "https://electronics.stackexchange.com/questions/2324", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/504/" ] }
2,425
I was looking over the schematic for the GumStix Palo 43 and noticed they used a common mode choke coil on the data lines coming in from USB. I understand how this design can help remove noise coming in on the USB lines, but I wonder if it is actually something I should start doing on my designs. The datasheet for the FT232R has no mention of adding common mode choke coils, and I have used this chip before without one. So, would you recommend I change my USB design or keep it the way it is?
The USB signal is not entirely differential, so it's not a great idea. (The end-of-packet (EOP) signal is both pins pulled low, which, I believe, is why there's always noise at 1 kHz and harmonics in USB systems, since it's sending common-mode signals every 1 ms.) A common mode (CM) choke should be used to terminate the high speed USB bus if they are needed to pass EMI testing. Place the CM choke as close as possible to the connector pins. See Section 5.1 for details. Note: Common mode chokes degrade signal quality, thus they should only be used if EMI is a known problem. Common mode chokes distort full speed and high-speed signal quality. The eye diagram above shows full speed signal quality distortion of the end of packet, but still within the specification. As the common mode impedance increases, this distortion will increase, so you should test the effects of the common mode choke on full speed and high-speed signal quality. High Speed USB Platform Design Guidelines Note: additional filtering may be achieved by winding the 4 wires through the ferrite bead an additional turn. As with the use of ferrite beads in signal paths, care should be taken to ensure that the signaling meets rise and fall times, especially the EOP signaling. EOP signaling is single ended and may be strongly affected by a single bead, which acts as a common mode only filter. Intel EMI Design Guidelines for USB Components
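To see why the EOP is a common-mode event, here's a small illustrative sketch (idealized voltage levels, not the USB spec's exact ones) that splits the two data lines into differential and common-mode components:

```python
# Illustrative sketch: a common-mode choke passes differential signals but
# opposes common-mode ones. USB's EOP (SE0) drives both lines low together,
# which is a common-mode step.

def decompose(dp, dm):
    diff = dp - dm            # what a differential receiver sees
    common = (dp + dm) / 2    # what a common-mode choke reacts to
    return diff, common

# Idealized bus states (not the spec's exact voltage levels):
states = {"J": (3.3, 0.0), "K": (0.0, 3.3), "SE0/EOP": (0.0, 0.0)}
for name, (dp, dm) in states.items():
    diff, common = decompose(dp, dm)
    print(f"{name:8s} diff={diff:+.2f} V  common={common:.2f} V")
# J and K share the same common-mode level (1.65 V here); SE0 steps it to
# 0 V -- exactly the kind of edge a common-mode choke will distort.
```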
{ "source": [ "https://electronics.stackexchange.com/questions/2425", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/319/" ] }
2,440
I am reading a basic electronics book: "There are no Electrons: Electronics for Earthlings" and I came across a clever passage about the fact that you need a closed circuit in order for current to flow. Here is the passage I am curious about: "This has always bothered me: If the negative terminals of batteries have excess electrons (a negative charge) and the positive terminals of batteries have too few electrons (a positive charge) and opposites attract, why can't I hook a wire between the negative side of one battery and the positive side of a different battery and get any current? The truth is it won't work. No current will flow. Had someone been able to explain that to me, I probably would never have written this book." Does anyone have a straight-forward answer to this question?
The confusion here is from the initial poor description of how a battery works. A battery consists of three things: a positive electrode, a negative electrode, and an electrolyte in between. The electrodes are made of materials that strongly want to react with each other; they are kept apart by the electrolyte. The electrolyte acts like a filter that blocks the flow of electrons, but allows ions (positively charged atoms from the electrodes) to pass through. If the battery is not connected to anything, the chemical force is pulling on the ions, trying to draw them across the electrolyte to complete the reaction, but this is balanced by the electrostatic force-- the voltage between the electrodes. Remember-- a voltage between two points means there is an electric field between those points which pushes charged particles in one direction. When you add a wire between the ends of the batteries, electrons can pass through the wire, driven by the voltage. This reduces the electrostatic force, so ions can pass through the electrolyte. As the battery is discharged, ions move from one electrode to the other, and the chemical reaction proceeds until one of the electrodes is used up. Thinking about two batteries next to each other, linked by one wire-- there is no voltage between the two batteries, so there is no force to drive electrons. In each battery, the electrostatic force balances the chemical force, and the battery stays at steady state. (I kind of glossed over what it means for two materials to "want" to react with each other. Google "Gibbs free energy" for more details on that. You might also google "Nernst equation.")
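The Nernst equation mentioned at the end can be sketched numerically; the standard potential and reaction quotient below are illustrative values, not data for any real battery chemistry:

```python
import math

# Sketch of the Nernst equation: E = E0 - (R*T)/(n*F) * ln(Q).
# As the reaction proceeds (Q grows), the open-circuit voltage falls.

R = 8.314      # J/(mol*K), gas constant
F = 96485.0    # C/mol, Faraday constant

def nernst(e0_volts, n_electrons, reaction_quotient, temp_kelvin=298.15):
    return e0_volts - (R * temp_kelvin) / (n_electrons * F) * math.log(reaction_quotient)

# At Q = 1 the cell sits at its standard potential (illustrative E0 = 1.10 V):
print(f"{nernst(1.10, 2, 1.0):.3f}")    # 1.100
# After the reaction has run a while (Q = 100), the voltage has sagged:
print(f"{nernst(1.10, 2, 100.0):.3f}")  # 1.041
```

This is the quantitative face of the balance the answer describes: the chemical drive and the electrostatic force meet at exactly this equilibrium voltage.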
{ "source": [ "https://electronics.stackexchange.com/questions/2440", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/-1/" ] }
2,443
What does it mean to "tristate a pin" on a CMOS microcontroller?
"Tristate" means a state of high impedance. A pin can either pull to 0 V (sinking current, generally), pull to 5 V (sourcing current, generally), or become high impedance, like an input. The idea is that if a pin is in high impedance state, it can be pulled to high or low by an external device without much current flow. You see this kind of thing on bidirectional serial lines, where sometimes a pin is an output and sometimes an input. When it's an input, it's "tristated," allowing the external chip to control its logic level. Does that make sense in your situation?
{ "source": [ "https://electronics.stackexchange.com/questions/2443", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/374/" ] }
2,641
How cheap do FPGAs get? I know they're more expensive than microprocessors of comparable capability, but I wonder if there exist FPGAs that could contain a Microblaze soft core running Linux, while leaving gates available for implementing DSP functionality (media codecs, for example) at a cost competitive with, say, a Cortex A8 ($20-30 in qty. ~100). (Apologies if my terminology is non-idiomatic i.e. wrong. Please comment with corrections, or edit directly.)
I recently attended an online conference on FPGAs with the keynote being "Should your next processor be an FPGA?" The FPGA basically makes sense in any application that requires highly parallelizable work streams; an example being used was analyzing Full HD images to find pedestrians, for instance. The thing you have to remember is that you have to initialize your FPGA every time it powers up. I think the FPGAs Xilinx is coming out with (which have an on-chip ARM core) are a good option, but probably expensive. Looking into the Actel ones with on-chip flash may be useful too. As for performance, the company BDTI did a benchmark in highly parallel computations where they saw about a 40x performance gain switching to an FPGA. The interesting thing is that they compared chips with similar costs ($23 vs $28, I believe). Here are the links that might interest you: Pocket guide to processor selection FPGA Conference Archives (Free registration, but only available for about 6 months after this answer) You cannot really compare the performance of FPGA-based systems based on MIPS or MHz stats. The way an FPGA is used to process certain tasks is simply too different from a microcontroller. The design of firmware for an FPGA is something you have to do using VHDL, for instance, which is akin to assembly: a register transfer level (RTL) of abstraction. Some environments are being produced to provide more abstraction, but these are still often vendor specific. Wikipedia has a decent overview of the languages available for programming FPGAs: Wikipedia: Programming FPGA Wikipedia: Digital Circuit Design If you have money to burn, you can use the LabVIEW systems to build FPGA-based real-time measurement systems, for instance. The devices needed for this are in a completely different price range ($1500 and up), but they open up FPGA design to a much broader audience with graphical programming.
More and more vendors are providing boards which combine microcontrollers such as an ARM chip with an FPGA to provide specific additional features and parallel processing power. An example of such products can be found here: EmbeddedARM: FPGA series
{ "source": [ "https://electronics.stackexchange.com/questions/2641", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/712/" ] }
2,869
I know the values of resistors if they are gold-colored at the end. When both ends are the same, such as brown-o-p-brown and red-x-y-z-red, I am in trouble. How do I know which side has the last colour and which side is the starting end?
I asked a similar question a long time ago here, but the resistor chart which was mentioned there appears to have moved. So, from its new home at itll.colorado.edu, here's the diagram. As far as I can tell, one band will be thicker, signifying it as the tolerance band (no one responded when I asked whether this was the case in the previous post above, so if I'm wrong please let me know).
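For reference, the chart's 4-band code can be expressed as a short sketch (it ignores gold/silver multiplier bands, and 5-band parts add a third significant digit):

```python
# Standard 4-band resistor color code: two significant digits, a
# power-of-ten multiplier, and a tolerance band (the one you read last).

COLORS = ["black", "brown", "red", "orange", "yellow",
          "green", "blue", "violet", "grey", "white"]
TOLERANCE = {"brown": 1, "red": 2, "gold": 5, "silver": 10}  # percent

def decode_4band(b1, b2, multiplier, tolerance):
    value = (COLORS.index(b1) * 10 + COLORS.index(b2)) * 10 ** COLORS.index(multiplier)
    return value, TOLERANCE[tolerance]

ohms, tol = decode_4band("brown", "black", "red", "gold")
print(f"{ohms} ohms, {tol}%")   # 1000 ohms, 5% -- i.e. a 1k resistor
```

Reading from the wrong end gives nonsense (e.g. a tolerance color in a digit position), which is one practical way to confirm the orientation when the tolerance band isn't obviously thicker.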
{ "source": [ "https://electronics.stackexchange.com/questions/2869", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2387/" ] }
2,910
This question emerged from my last question about small caps' markings; better to open a new question so that one doesn't become bloated. I've heard the term polarization in the context of light, but not with caps. Googling revealed the effect: Dipolar Polarization. I'm not sure whether it is the right effect, but at least it mentions caps. 1. What does the term "polarized" mean? 2. Why should I use a "polarized" cap instead of a non-polarized cap? 3. Why does the circuit need a polarized cap? http://en.wikipedia.org/wiki/File:RC_Filter.png
Some caps -- such as nearly all electrolytic capacitors and tantalum capacitors -- are polarized. Such caps use some sort of chemical reaction between an anode and a cathode made of two different kinds of materials to form a thin insulating layer. When you hold one of these caps in your hands, you will see a "-" mark by the pin intended to stay more negative, or a "+" mark by the pin intended to stay more positive. If a polarized cap is ever "reverse biased" more than 1 V to 1.5 V (typical), it drives that chemical reaction in reverse, eating away at the thin insulating layer, leading to a short between the two pins. Not only is that capacitor no longer working; after that, any significant voltage -- forward or reverse -- could make that "capacitor" overheat and in some cases explode. The person drawing the circuit and connecting the capacitor in a circuit must make sure the "+" end goes towards the more positive voltage, and the "-" end goes towards the more negative voltage, at all times, to prevent catastrophe. See the Wikipedia article Greg pointed out for more details. Other caps -- such as nearly all ceramic capacitors, paper disk capacitors, and mica capacitors -- are non-polarized. Such caps typically use an anode and a cathode made of identical metal, and they work just as well with "reverse biased" voltage as forward biased. They don't have either a "+" or "-" mark, because they don't need one. 2 & 3. You never "need" a polarized cap. Practically all physical circuits would work just as well, and perhaps better, if the polarized caps were all replaced with non-polarized caps of the same capacitance and voltage rating. The opposite is not true -- you often can't replace non-polarized caps with polarized caps. Some circuits require a capacitor that can handle a high positive voltage at some times and a high negative voltage at other times (polarity reversal), which requires a non-polarized capacitor.
The only reason people use polarized caps is because they often cost much less than non-polarized caps of the same capacitance and voltage rating. However, when drawing a schematic, you should always draw a "+" sign on one side of a cap whenever you intend that the cap always has positive voltage applied to it and never suffers polarity reversal. That helps the people reading the schematic understand what you meant. That gives people putting together the physical circuit the option of using polarized capacitors, even though many times it is more convenient to use non-polarized capacitors in the place of the polarized capacitors clearly marked on the schematic. It also tells people putting together the physical circuit, should they choose to use a polarized capacitor, which way around the polarized capacitors should go. It also communicates to repair people that, if they measure a negative bias voltage, something has gone horribly wrong. The schematic you show -- with the clearly marked "+" polarized capacitor -- would work just as well with a non-polarized capacitor. The "+" on one end of the capacitor is telling us that that end is expected to never be negative relative to the other end. It's also telling us that we have the option of using a polarized or non-polarized cap in that location when we build that circuit.
{ "source": [ "https://electronics.stackexchange.com/questions/2910", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2387/" ] }
3,027
A common question, here and elsewhere. Is C++ suitable for embedded systems? Microcontrollers? RTOSes? Toasters? Embedded PCs? Is OOP useful on microcontrollers? Does C++ remove the programmer too far from the hardware to be efficient? Should Arduino's C++ (with no dynamic memory management, templates, exceptions) be considered as "real C++"? (Hopefully, this wiki will serve as a place to contain this potential holy war)
Yes, C++ is still useful in embedded systems. As everyone else has said, it still depends on the system itself; an 8-bit uC would probably be a no-no in my book, even though there is a compiler out there and some people do it (shudder). There's still an advantage to using C++ even when you scale it down to something like "C+", even in an 8-bit micro world. What do I mean by "C+"? I mean don't use new/delete, avoid exceptions, avoid virtual classes with inheritance, possibly avoid inheritance altogether, be very careful with templates, use inline functions instead of macros, and use const variables instead of #defines. I've been working in both C and C++ in embedded systems for well over a decade now, and some of my youthful enthusiasm for C++ has definitely worn off due to some real-world problems that shake one's naivete. I have seen the worst of C++ in embedded systems, which I would like to refer to as "CS programmers gone wild in an EE world." In fact, that is something I'm working on with my client to improve in this one codebase they have, among others. The danger of C++ is that it's a very, very powerful tool, much like a two-edged sword that can cut both your arm and leg off if you are not educated and disciplined properly in its language and in programming generally. C is more like a single-edged sword, but still just as sharp. With C++ it's too easy to get very high levels of abstraction and create obfuscated interfaces that become meaningless in the long term, and that's partly due to C++'s flexibility in solving the same problem with many different language features (templates, OOP, procedural, RTTI, OOP+templates, overloading, inlining). I recently finished two 4-hour seminars on Embedded Software in C++ by the C++ guru, Scott Meyers. He pointed out some things about templates that I never considered before and how much more they can help in creating safety-critical code.
The gist of it is, you can't have dead code in software that has to meet stringent safety-critical code requirements. Templates can help you accomplish this, since the compiler only creates the code it needs when instantiating templates. However, one must become more thoroughly educated in their use to design correctly for this feature, which is harder to accomplish in C because linkers don't always optimize away dead code. He also demonstrated a feature of templates that could only be accomplished in C++ and would have kept the Mars Climate Orbiter from crashing had NASA implemented a similar system to protect units of measurement in the calculations. Scott Meyers is a very big proponent of templates and judicious use of inlining, and I must say I'm still skeptical about being gung-ho over templates. I tend to shy away from them, even though he says they should only be applied where they become the best tool. He also makes the point that C++ gives you the tools to make really good interfaces that are easy to use right and hard to use wrong. Again, that's the hard part. One must come to a level of mastery in C++ before knowing how to apply these features in the most efficient way to be the best design solution. The same goes for OOP. In the embedded world, you must familiarize yourself with what kind of code the compiler is going to spit out to know if you can handle the run-time costs of run-time polymorphism. You need to be willing to make measurements as well to prove your design is going to meet your deadline requirements. Is that new InterruptManager class going to make my interrupt latency too long? There are other forms of polymorphism that may fit your problem better, such as link-time polymorphism, which C can do as well, but C++ can do through the Pimpl design pattern (opaque pointer). I say all that to say that C++ has its place in the embedded world. You can hate it all you want, but it's not going away.
It can be written in a very efficient manner, but it's harder to learn how to do it correctly than with C. It can sometimes work better than C at solving a problem and sometimes at expressing a better interface, but again, you've got to educate yourself and not be afraid to learn how.
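As a rough illustration of the units-safety idea from the Mars Climate Orbiter story, here is a Python analogue (not Meyers' actual C++ template code, which performs this check at compile time with zero run-time cost):

```python
# A Python analogue of the units-safety idea: make mixed-unit arithmetic a
# loud error instead of a silently wrong number.

class Quantity:
    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def __add__(self, other):
        if self.unit != other.unit:
            raise TypeError(f"cannot add {self.unit} to {other.unit}")
        return Quantity(self.value + other.value, self.unit)

thrust_si = Quantity(4.45, "newton-seconds")
thrust_us = Quantity(1.0, "pound-seconds")

total = thrust_si + Quantity(1.0, "newton-seconds")
print(total.value)          # same units: addition succeeds

try:
    thrust_si + thrust_us   # the Mars Climate Orbiter mistake
except TypeError as e:
    print("caught:", e)
```

In C++, the equivalent check moves to the type system, so the mixed-unit addition simply fails to compile -- which is exactly the "dead code never ships" argument applied to correctness.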
{ "source": [ "https://electronics.stackexchange.com/questions/3027", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/566/" ] }
3,067
I understand that a DSP is optimized for digital signal processing, but I'm not sure how that impacts the task of choosing an IC. Almost everything I do with a microcontroller involves the processing of digital signals! For example, let's compare the popular Microchip dsPIC30 or 33 DSP and their other 16-bit offering, the PIC24 general purpose microcontroller. The dsPIC and the PIC can be configured to have the same memory and speed, they have similar peripheral sets, similar A/D capability, pin counts, current draw, etc. The only major difference that appears on Digikey's listing is the location of the oscillator. I can't tell the difference by looking at the prices (or any other field, for that matter.) If I want to work with a couple of external sensors using various protocols (I2C, SPI, etc.), do some A/D conversions, store some data on some serial flash, respond to some buttons, and push data out to a character LCD and over an FT232 (a fairly generic embedded system), which chip should I use? It doesn't appear that the DSP will lag behind the PIC in any way, and it offers this mysterious "DSP Engine." My code always does math, and once in a while I need floating point or fractional numbers, but I don't know if I'll benefit from using a DSP. A more general comparison between another vendor's DSPs and microcontrollers would be equally useful; I'm just using these as a starting point for discussion.
To be honest, the line between the two is almost gone nowadays, and there are processors that can be classified as both (the AD Blackfin, for instance). Generally speaking: Microcontrollers are integer math processors with an interrupt subsystem. Some may have hardware multiplication units, some don't, etc. The point is they are designed for simple math, and mostly to control other devices. DSPs are processors optimized for streaming signal processing. They often have special instructions that speed up common tasks, such as multiply-accumulate in a single instruction. They also often have other vector or SIMD instructions. Historically they weren't interrupt-based systems and operated with non-standard memory systems optimized for their purpose, making them more difficult to program. They were usually designed to operate in one big loop processing a data stream. DSPs can be designed as integer, fixed-point, or floating-point processors. Historically, if you wanted to process audio streams, video streams, do fast motor control -- anything that required processing a stream of data at high speed -- you would look to a DSP. If you wanted to control some buttons, measure a temperature, run a character LCD, control other ICs which are processing things, you'd use a microcontroller. Today, you mostly find general-purpose microcontroller-type processors with either built-in DSP-like instructions or with on-chip co-processors to deal with streaming data or other DSP operations. You don't see pure DSPs used much anymore except in specific industries. The processor market is much broader and blurrier than it used to be. For instance, I hardly consider an ARM Cortex-A8 SoC a microcontroller, but it probably fits the standard definition, especially in a PoP package. EDIT: Figured I'd add a bit to explain when/where I've used DSPs even in the days of application processors. A recent product I designed was doing audio processing with X channels of input and X channels of output per 'zone'.
The intended use for the product meant that it would oftentimes sit there doing its thing, processing the audio channels for years without anyone touching it. The audio processing consisted of various acoustical filters and functions. The system was also "hot-pluggable," with the ability to add some number of independent 'zones' all in one box. It was a total of 3 PCB designs (mainboard, a backplane, and a plug-in module), and the backplane supported 4 plug-in modules. Quite a fun project, as I was doing it solo: I got to do the system design, schematic, PCB layout, and firmware. Now, I could have done the entire thing with a single bulky ARM core; I only needed about 50 MIPS of DSP work on 24-bit fixed-point numbers per zone. But because I knew this system would operate for an extremely long time, I knew it was critical that it never click or pop or anything like that. I chose to implement it with a low-power DSP per zone and a single PIC microcontroller that played the system management role. This way, even if one of the uC functions crashed -- maybe a DDoS attack on its Ethernet port -- the DSP would happily just keep chugging away, and it's likely no one would ever know. So the microcontroller played the role of running the 2-line character LCD, some buttons, temperature monitoring and fan control (there were also some fairly high-power audio amplifiers on each board), and even served an AJAX-style web page via Ethernet. It also managed the DSPs via a serial connection. So that's a situation where, even in the days where I could have used a single ARM core to do everything, the design dictated a dedicated signal processing IC. Other areas where I've run into DSPs: *High-end audio - very high-end receivers and concert-quality mixing and processing gear *Radar processing - I've also used ARM cores for this in low-end apps.
*Sonar processing *Real-time computer vision For the most part, the low and mid ends of the audio/video/similar space have been taken over by application processors which combine a general-purpose CPU with co-processor offload engines for various applications.
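The multiply-accumulate instruction mentioned above is the core of streaming DSP work; an FIR filter, for example, is one MAC per tap per sample. A sketch with an arbitrary 4-tap moving average (illustrative coefficients, not from any real design):

```python
# The multiply-accumulate (MAC) loop DSPs optimize: an FIR filter is
# essentially one MAC per tap per sample.

def fir(samples, coeffs):
    out = []
    history = [0.0] * len(coeffs)
    for x in samples:
        history = [x] + history[:-1]  # shift in the newest sample
        acc = 0.0
        for h, c in zip(history, coeffs):
            acc += h * c              # the MAC a DSP does in one cycle
        out.append(acc)
    return out

print(fir([4, 8, 4, 8, 4, 8], [0.25, 0.25, 0.25, 0.25]))
# [1.0, 3.0, 4.0, 6.0, 6.0, 6.0] -- settles to the average (6.0) once the
# 4-tap history fills
```

On a general-purpose MCU each MAC is several instructions (load, multiply, add, pointer update); a DSP's single-cycle MAC with hardware address generation is why it wins at exactly this loop.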
{ "source": [ "https://electronics.stackexchange.com/questions/3067", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/857/" ] }
3,105
I am working with a PIC microcontroller with an inbuilt 10-bit ADC and want to measure a voltage in the range of -1 to -3 volts. I thought of using an op-amp in the inverting mode to make the voltage positive and then feed it to the ADC of the microcontroller; however, here I would have to power the op-amp with a negative power supply, right? I don't want to use a negative power supply at the moment and was wondering whether it was possible to achieve this configuration. Can you help out?
An inverting amplifier does not need a negative rail to invert the voltage. Try to think of your power rails as what supplies your output. If you look at the circuit, all op-amp pins are tied to a voltage of 0 V or higher. When your range of -1 to -3 comes in, it will show up as the exact opposite of 1 to 3 on the output. This also gives you some advantages as a buffer, as the input impedance of your pin will not affect this circuit very much (so long as \$R_{in} \| R_f\$ is large). I agree that a simple resistor divider does the job -- just letting you know that this also works.
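A numeric sketch of this arrangement (assuming a unity-gain stage with equal, hypothetical 10k resistors and a 5 V reference on the 10-bit ADC):

```python
# Inverting stage: Vout = -(Rf/Rin) * Vin. With Rf = Rin, the -1..-3 V
# input maps onto +1..+3 V, inside a 0-5 V single-supply ADC's range.

def inverting_out(v_in, r_f=10e3, r_in=10e3):
    return -(r_f / r_in) * v_in

def adc_code(v, v_ref=5.0, bits=10):
    return round(v / v_ref * (2**bits - 1))

for v in (-1.0, -2.0, -3.0):
    v_out = inverting_out(v)
    print(f"{v:+.1f} V in -> {v_out:+.1f} V out -> ADC code {adc_code(v_out)}")
# -1.0 V in -> +1.0 V out -> ADC code 205
```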
{ "source": [ "https://electronics.stackexchange.com/questions/3105", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/444/" ] }
3,203
What's the cheapest way to link a few microcontrollers wirelessly at low speeds over short distances? I'm looking to keep it ultra-cheap, use common discrete parts, and keep it physically small. I don't care about bands and licensing so long as it works. 802.15.4/ZigBee, Bluetooth and WiFi all require an expensive coprocessor, so they aren't an option. Alternatively, are there very cheap radio modules available to hobbyists? The kind of thing you find in car keyfobs and wireless thermometers, perhaps? Would building a simple transceiver on a homebrew PCB even be practical, or will I be plagued by tuning, interference and weirdy analogue stuff? Could something like this be driven from a microcontroller? What about receive?
You pretty much have to buy pre-made modules; you can't expect to wire up your own transmitter/receiver from a few transistors and a crystal. RF circuit design is unforgiving and all but requires a custom PCB (or custom IC) to do. You could probably build your own RF module on a PCB if you did some work, but at that point, if you are making your own PCBs, you're not saving much money versus the very cheap modules that are available. SparkFun has RF Transmitters & Receivers for $4 and $5 respectively. Since they are just basic parts, you will need to do a little extra logic on your microcontroller to compensate for interference, e.g. sending error-control codes so that missing/flipped bits can be detected and recovered. I found SeeedStudio sells almost the exact same thing, but even cheaper: it's $4.90 for a receiver and transmitter pair.
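The "little extra logic" for error control can be as simple as a checksum byte; this is one illustrative scheme (my own, not something any particular module requires):

```python
# Minimal error detection for a noisy cheap RF link: append a checksum so
# the receiver can reject packets that noise has corrupted.

def make_packet(payload):
    checksum = sum(payload) & 0xFF
    return bytes(payload) + bytes([checksum])

def parse_packet(packet):
    payload, checksum = packet[:-1], packet[-1]
    if sum(payload) & 0xFF != checksum:
        return None          # corrupted -- drop it and let the sender retry
    return bytes(payload)

pkt = make_packet([0x01, 0x42, 0x10])
print(parse_packet(pkt))                       # intact: payload comes back
corrupted = bytes([pkt[0] ^ 0x04]) + pkt[1:]   # a flipped bit in transit
print(parse_packet(corrupted))                 # None -- detected and dropped
```

A real link would also want a preamble and sync byte (these cheap OOK receivers emit garbage when idle), and something stronger than a plain sum if undetected errors matter.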
{ "source": [ "https://electronics.stackexchange.com/questions/3203", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/566/" ] }
3,343
I'd like some tips for those who want to become good embedded software developers or want to improve in this area. What do I need to learn about hardware and software? Which books are most recommended? Blogs? In the end, how could I go from a beginner hobbyist to an excellent professional?
All the answers have been good so far, but I'll throw in my two cents. Here's a repeat of some tips with a twist and some extra: Learn C: the fundamental language of the hardware that is still portable (to some degree). Don't just learn it, but become an expert of all its features, like volatile and why it is important for writing device drivers. Start out with a good development kit like Arduino, but as said before, learn other architectures once you've got a good feel for it. Luckily there are some Arduino-compatible boards built with other processors; that way you can rewrite the same design on a different uC without messing up your whole design while getting a feel for something new. In the learning stage, feel free to re-invent the wheel on device drivers or other pieces of code. Don't just plop someone else's driver code down in there. There's value in re-inventing the wheel when you're learning. Challenge yourself to re-write your code more efficiently in terms of speed and memory usage. Become familiar with different styles of embedded systems software architectures: start with basic interrupt-driven/background-loop processing, then move up to background schedulers, then real-time operating systems. Get good source control! I prefer Mercurial myself. Even sign up for some free source control hosting sites like Sourceforge.net or Bitbucket.org to host your project, even if you're the only one working on it. They'll back your code up, so you don't have to worry about that occasional hard drive crash destroying everything! Using a distributed VCS comes in handy, because you can check in changes to your hard drive and then upload to the host site when ready. Learn your tools well for whatever chip you're working on! Knowing how the compiler creates assembly is essential. You need to get a feel for how efficient the code is, because you may need to rewrite it in assembly. Knowing how to use the linker file and interpreting the memory map output is also essential!
How else are you going to know if that routine you just wrote is the culprit of taking up too much ROM/Flash! Learn new techniques and experiment with them in your designs! Assume nothing when debugging. Verify it! Learn how to program defensively to catch errors and verify assumptions (like using assert) Build a debugging information into your code where you can such as outputting memory consumption or profiling code with timers or using spare pins on the uC to toggle and measure interrupt latency on a O-scope. Here are some books: The Pragmatic Programmer by Andrew Hunt and David Thomas - more or less required reading for any practical software development Practical Arduino Programming Embedded Systems by Michael Barr Embedded Systems Building Blocks by Jean Labrosse MicroC OS II Real Time Kernel by Jean Labrosse, great intro into RTOS's in general in there along with his OS. Embedded Software Primer by David Simon - good intro to embedded software Here are some websites: Embedded Gurus Ganssle Group Jack Ganssle has some wonderful historical stories to tell. Read the articles. He gets a little preachy about some things though. Embedded.com Good info for latest techniques and tips from Ganssle, Barr, and other industry experts.
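To illustrate the defensive-programming point above, here's a minimal C sketch of validating assumptions at a driver boundary. The function name, error codes, and 16x-oversampling UART are invented for illustration, not taken from any particular vendor's library:

```c
#include <stdint.h>

/* Hypothetical status codes for a driver API. */
typedef enum { STATUS_OK = 0, STATUS_NULL_ARG = -1, STATUS_BAD_RANGE = -2 } status_t;

/* Compute a 16x-oversampling UART divisor, defensively.
   Rather than trusting the caller, verify every assumption and
   return an error code instead of silently writing garbage. */
status_t uart_set_baud(uint32_t *divisor_out, uint32_t clock_hz, uint32_t baud)
{
    if (divisor_out == 0)
        return STATUS_NULL_ARG;
    if (baud == 0 || baud > clock_hz / 16u)
        return STATUS_BAD_RANGE;

    *divisor_out = clock_hz / (16u * baud);
    return STATUS_OK;
}
```

On a real target you would typically pair checks like these with an assert-style macro that records the file and line and then halts or resets the part, so violated assumptions fail loudly during development instead of corrupting state silently.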
{ "source": [ "https://electronics.stackexchange.com/questions/3343", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/938/" ] }
3,348
There are dedicated "MOSFET driver" ICs available (ICL7667, Max622/626, TD340, IXD*404.) Some also control IGBTs. What is the practical purpose of these? Is it all about maximizing the switching speed (driving gate capacitance) or are there other motives?
A MOSFET driver IC (like the ICL7667 you mentioned) translates TTL or CMOS logic signals to a higher voltage and higher current, with the goal of rapidly and completely switching the gate of a MOSFET. An output pin of a microcontroller is usually adequate to drive a small-signal logic-level MOSFET, like a 2N7000. However, three issues occur when driving larger MOSFETs:

Higher gate capacitance - digital outputs are meant to drive small loads (on the order of 10–100 pF). This is much less than the gate capacitance of many MOSFETs, which can be in the thousands of pF.

Higher gate voltage - a 3.3 V or 5 V signal is often not enough. Usually 8–12 V is required to fully turn on the MOSFET.

Back current - a switching MOSFET can push current from the gate back into the driving circuit. MOSFET drivers are designed to handle this back current. (Ref: Laszlo Balogh, Design And Application Guide For High Speed MOSFET Gate Drive Circuits.)

Finally, many MOSFET drivers are designed explicitly for the purpose of controlling a motor with an H-bridge.
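To put rough numbers on the gate-capacitance point: the driver has to move the MOSFET's total gate charge within the time you want the switching edge to take, and it dissipates that charge-shuffling energy every PWM cycle. A back-of-the-envelope sketch in C (the 60 nC / 100 ns / 12 V / 100 kHz figures are invented for illustration; pull the real gate charge from your MOSFET's datasheet):

```c
/* Average current needed to move total gate charge Qg in switching time t_sw. */
double gate_drive_current(double qg_coulombs, double t_sw_seconds)
{
    return qg_coulombs / t_sw_seconds;      /* 60 nC in 100 ns -> 0.6 A */
}

/* Average power dissipated in the gate-drive loop: Qg * Vgs * f_switching. */
double gate_drive_power(double qg_coulombs, double vgs_volts, double f_hz)
{
    return qg_coulombs * vgs_volts * f_hz;  /* 60 nC * 12 V * 100 kHz -> 72 mW */
}
```

That amp-level drive current is why a bare logic pin (good for maybe 20 mA) produces slow, lossy switching edges on a big FET, and why dedicated driver ICs advertise high peak output current.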
{ "source": [ "https://electronics.stackexchange.com/questions/3348", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/883/" ] }
3,352
What are the scenarios where Triacs can not replace a relay or vice-versa?
Thyristors (triacs and their unidirectional cousins, SCRs) are solid-state devices, whereas relays are electromechanical devices. Triacs can switch both AC and DC, but as XTL said, they will not stop the current flow unless the current between MT1 and MT2 falls below a threshold level, or you forcibly commutate the device off. (note: I spent 13 years in industrial motor control, we designed equipment which switched up to many thousands of Amps and many thousands of Volts through thyristors.) Relays are pretty simple devices to use; you energize the coil and the contacts engage. You de-energize the coil and the contacts open. A simple transistor can drive it, but you'll want some snubbering (a reverse-biased diode across the relay coil at a minimum) to prevent your transistor from dying due to inductive kickback. Your control signal and your controlled signal are completely isolated from one another. Relay contacts aren't invincible; if you open up the contacts under load you can cause them to "ice up" (meaning they won't open up). Also, if you use a relay rated for power and try to switch small signals, the contacts can eventually get dirty and you won't get a good connection between the contacts. Triacs, being solid-state, are mostly silent. Unless you use a pulse transformer or optoisolator, your control circuit will be at the potential of your controlled circuit (generally the Neutral for your 120/220V circuits). Thyristors can be used to phase-control a load, meaning you can dim lights or (roughly) control the speed of an AC motor. This is pretty much impossible with relays. You can also do neat tricks like allowing only 'x' entire cycles through to do less "noisy" phase control. SCRs are also good for dumping all the energy in a capacitor into a load (flash or railgun type applications). Some power supplies use SCRs as crowbar devices as well; they turn on and short out the supply (blowing the fuse in the process), protecting the load from an overvoltage. 
Thyristors don't really enjoy sharp voltage or current spikes when they're turned off; these can cause them to turn on by accident or can destroy the devices. Simple snubbering helps control these failure modes. Thyristors also don't completely isolate the load from the source; if you measure the voltage on a load with the thyristor off, you'll measure full voltage. The thyristor is off, but off doesn't mean "open" -- it means "high resistance". This can cause trouble with some applications. If you are switching an AC signal, thyristors are pretty painless; they will shut themselves off around the next zero crossing. If you're controlling DC... again... you have more to think about. DC is also problematic for relays because you will almost always be opening up the relay contacts under load, so you must size your relay for this. Long story short: Yes, triacs can replace relays in almost every application. If you don't want to bother with the snubbering and isolation you can always buy solid state relays; they're triacs with the appropriate control circuitry to make them work almost the same as relays.
{ "source": [ "https://electronics.stackexchange.com/questions/3352", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/938/" ] }
3,734
Is it possible to blink an LED using just a capacitor (and maybe a resistor)? For example, if I want the LED to blink once every 2 seconds, is that possible? I know it can be done with a 555 as well as with a capacitor and transistor.
Blinking an LED can't be done with just passive elements. Interestingly, you can accomplish periodically blinking a light with a resistor and a capacitor, if your light happens to be a neon discharge lamp. The reason a neon bulb will work and an LED won't has to do with their current-vs-voltage behavior. In the LED's case, no matter what the voltage across it, some current will be passed. This effectively keeps the cap from charging up, as an operating point is established that is determined by the LED and the resistor. You'll just get a constant glow of some intensity. But with the neon bulb, no current is passed until the voltage exceeds some threshold, which is the breakdown voltage of the neon gas. This allows the capacitor to charge up while the bulb remains dark. When the breakdown voltage is reached, the gas ionizes, and the energy stored in the capacitor is dumped through it, producing a short, bright flash. Basically you need some device in the circuit that operates like an active device. To blink an LED, you need a couple of transistors (e.g., a multivibrator configuration) or possibly a single SCR (biased to have a suitably low break-over voltage). In the case of a neon bulb, the bulb itself is the active device, having distinct conducting and cutoff behavior.
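For the curious, the neon flasher's period is easy to estimate. Treat the bulb as an open circuit until the capacitor reaches the strike voltage, and assume each flash discharges the capacitor down to the gas's extinction voltage. With supply \$V_s\$, strike voltage \$V_{strike}\$, and extinction voltage \$V_{ext}\$, standard RC charging gives a time between flashes of roughly:

```latex
t_{flash} \approx RC \,\ln\!\left(\frac{V_s - V_{ext}}{V_s - V_{strike}}\right)
```

Plugging in ballpark NE-2-style numbers (\$V_s\$ = 120 V, strike around 90 V, extinction around 60 V) with R = 1 MΩ and C = 1 µF gives RC ln 2, about 0.7 s per flash. These voltages vary from bulb to bulb, so treat them as illustrative.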
{ "source": [ "https://electronics.stackexchange.com/questions/3734", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1009/" ] }
3,805
I am trying to make an LCD shield for my Arduino and am having problems trying to get the solder to stick to the Freetronics protoshield PCB. I have cleaned it as best I can. I am using a decent Proxxon soldering iron with solder that has worked well for me in the past. How can I get it to stick? It won't bond to the PCB, only the wires.
Heat! (One word answer) A classic reason solder won't stick to something is that you're not getting it hot enough. My interns come to me with this problem all the time. Make sure the tip of the iron is nice and shiny. Touch some solder on it, and it should melt almost instantly. Put a nice little blob of solder on the tip of the iron. Press the blob of solder into the metal to be soldered. Initially the solder won't be too keen, but when the metal reaches the right temperature, the solder will suddenly be attracted to it, and you'll see it move slightly. Now that the pad has reached temperature, you can touch the solder anywhere on the pad and it should melt almost instantly. I often add solder this way so I know I'm adding it to a nice hot pad. Hugo
{ "source": [ "https://electronics.stackexchange.com/questions/3805", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/665/" ] }
3,941
This is presented as both a resource for the community and a learning experience for myself. I have just enough knowledge of the subject to get myself into trouble, but I don't have the best grasp of the subject's details. Some helpful responses might be:

Explanation of the components of impedance
How those components interact
How can one transform impedances
How this relates to RF filters, power supplies, and anything else...

Thanks for the help!
To the question "what is Impedance," I would note that impedance is a broad concept of physics in general, of which electrical impedance is only one example. To get a grasp of what it means and how it works, it's often easier to consider mechanical impedance instead. Think of trying to push (slide) a heavy couch across the floor. You apply a certain amount of force, and the couch slides at a certain velocity, depending on how hard you push, the weight of the couch, the type of floor surface, the type of feet that the couch has, and so on. For this situation, it's possible to define a mechanical impedance that gives the ratio between how hard you push and how fast the couch goes. This is actually a lot like a dc electrical circuit, where you apply a certain amount of voltage across a circuit, and current flows at a certain corresponding rate through it. For the case of both the couch and the circuit, the response to your input may be simple and fairly linear: a resistor that obeys Ohm's Law, where its electrical impedance is just the resistance, and the couch may have friction slider feet that allow it to move with a velocity proportional to your force.* Circuits and mechanical systems may also be nonlinear. If your circuit consists of a variable voltage placed across a resistor in series with a diode, the current will be near zero until you exceed the forward voltage of the diode, at which point current will begin to flow through the resistor, in accordance with Ohm's law. Likewise, a couch sitting on the floor will usually have some degree of static friction: it won't begin moving until you push with a certain amount of initial force. In neither the mechanical nor electrical system is there a single linear impedance that can be defined. Rather, the best that you can do is to separately define impedances under different conditions. (The real world is much more like this.)
Even when things are very clear and linear, it's important to note that impedance just describes a ratio-- it doesn't describe the limits to the system, and it's not "bad." You can definitely get as much current/velocity as you want (in an ideal system) by adding more voltage/pushing harder. Mechanical systems also can give a pretty good feel for ac impedance. Imagine that you're riding a bicycle. With each half-cycle of the pedals, you push left, push right. You can also imagine pedaling with just one foot and a toe-clip, such that you push and pull with every cycle of your pedal. This is a lot like applying an ac voltage to a circuit: you push and pull in turn, cyclically, at some given frequency. If the frequency is slow enough-- like when you're stopped on the bike, the problem of pushing down on the pedals is just a "dc" problem, like pushing the couch. When you speed up, though, things can act differently. Now, suppose that you're biking along at a certain speed, and your bike is a three-speed with low, medium, and hi gear ratios. Medium feels natural, hi gear is difficult to apply enough force to make any difference, and at low gear, you just spin the pedals without transferring any energy to the wheels. This is a matter of impedance matching, where you can only effectively transfer power to the wheels when they present a certain amount of physical resistance to your foot-- not too much, not too little. The corresponding electrical phenomenon is very common as well; you need impedance matched lines to transmit RF power effectively from point A to point B, and any time that you connect two transmission lines together, there will be some loss at the interface. The resistance that the pedals provide to your feet is proportional to how hard you press, which relates it most closely to a simple resistance-- particularly at low speeds. Even in AC circuits, a resistor behaves like a resistor (up to a certain point).
However, unlike a resistor, the impedance of a bicycle is dependent on frequency. Suppose that you put your bike in high gear, starting from a stop. It can be very hard to get started. But, once you do get started, the impedance presented by the pedals goes down as you get going faster, and once you're going very fast, you may find that the pedals present too little impedance to absorb power from your feet. So there's actually a frequency-dependent impedance (a reactance) that starts out high and gets lower as you head to higher frequency. This is much like the behavior of a capacitor, and a fairly good model for the mechanical impedance of a bicycle would be a resistor in parallel with a capacitor. At dc (zero velocity), you just see the high, constant resistance as your impedance. As the pedaling frequency increases, the capacitor impedance becomes lower than that of the resistor, and allows current to flow that way. There are, of course, various other electrical components and their mechanical analogies**, but this discussion should give you some initial intuition on the general concept to stay grounded (pun intended) as you learn about the mathematical aspects of what can at times seem like a very abstract subject. *A word to the picky: Ohm's law is never exact for a real device, and real-world friction forces never give velocity exactly proportional to force. However, "fairly linear" is easy. I'm trying to be all educational and stuff here. Cut me some slack. **For example, an inductor is something like a spring-loaded roller on your wheel that adds drag as you get to higher frequency.
{ "source": [ "https://electronics.stackexchange.com/questions/3941", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/965/" ] }
4,027
I've reviewed the two methods, and multiplexing really only appears to have one advantage, that it would be easier to track down a failed LED than in a Charlieplex array. Can someone more knowledgeable explain any other trade-offs?
Yes, multiplexing and charlieplexing both have their advantages, and are each best suited for different tasks. The main advantage of charlieplexing is that it can be done on a medium-size microcontroller, driving more LEDs with fewer I/O pins, and potentially no external hardware besides the LEDs. Now, the advantage of "no external hardware" only applies for relatively small numbers of LEDs, up until you reach the current limit of your microcontroller, and it imposes a brightness limit that multiplexed arrays do not have. If you choose to use no external hardware, you are generally limited to driving only one LED in the matrix at any given moment-- unlike a multiplexed array where you drive a full row at a time. Once you add external hardware to drive the LEDs with higher brightness to match that of a multiplexed array, charlieplexing loses its luster. First, a multiplexed array is addressed strictly by row and column; it's very straightforward to do this. Turn on row 1, turn on all columns for row 1, turn off row 2, turn on all columns for row 2, and so on. By contrast, a charlieplexed array is much less straightforward. There's always a diagonal row that's useless, and I personally use look-up tables to relate between my arrays and a rectangular array where I store my data. Secondly, and a killer, is that charlieplexing generally requires tri-state drivers. Multiplexing, however, is performed with strict on-off binary logic. If you have a low pin count microcontroller and want to drive a large array of LEDs, it's straightforward to use external logic chips (e.g., shift register and/or LED drivers) to control both the X and Y axes. Most shift-register type chips don't support the tristating necessary to do the same thing in a charlieplexed array. Dedicated charlieplexed drivers are available, but are not nearly as versatile. 
Third, in a charlieplexed array, every pin that's not actively doing something is still hooked up to the LEDs through a weak ("high impedance") connection. And while the connection is weak, it's nonzero. Suppose that you have 25 I/O pins connected to a grid of LEDs. If you light one LED-- taking one of those 25 lines high and one of those 25 lines low, that leaves 23 high-impedance lines. Each place that the high I/O line goes through an LED to a neutral line, there's some possibility to leak current. Not much. Maybe a microamp here or there. But with modern, efficient LEDs, that's often enough to create visible ghosting. Fourth, it's harder to control LED brightness in a charlieplexed array. In a multiplexed array, you can use any number of commonly available current-regulating "sinking" LED driver chips that regulate the brightness of each column separately, by using current regulation in combination with a PWM driver. To the extent that charlieplexed matrix locations are addressed individually and use only a resistor per row or column, it's much harder to perform dot-brightness correction and full grayscale/color animation in a charlieplexed array.
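The look-up-table point can be made concrete. With N tri-stateable pins a charlieplexed array hosts N(N-1) LEDs, and each LED index must be translated into a (drive-high, drive-low) pin pair while skipping the impossible diagonal. Here's a sketch of that mapping in plain C; the actual pin-mode and pin-write calls are hardware-specific and left out:

```c
/* Map a charlieplexed LED index (0 .. n_pins*(n_pins-1) - 1) to the pin
   to drive high and the pin to drive low.  Every other pin must be left
   tri-stated (configured as a high-impedance input), or you'll light
   unintended LEDs.  The diagonal where high == low cannot host an LED,
   so the low index hops over it. */
void charlieplex_pins(int led, int n_pins, int *pin_high, int *pin_low)
{
    *pin_high = led / (n_pins - 1);
    *pin_low  = led % (n_pins - 1);
    if (*pin_low >= *pin_high)
        (*pin_low)++;
}
```

With 4 pins this addresses 12 LEDs; a 4x4 multiplexed matrix needs 8 pins for 16 LEDs, but it gets to light a whole row at a time and never needs tri-state outputs.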
{ "source": [ "https://electronics.stackexchange.com/questions/4027", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/989/" ] }
4,123
I have a conceptual question: what does "high" code density mean, and why is it so important?
Code density refers loosely to how many microprocessor instructions it takes to perform a requested action, and how much space each instruction takes up. Generally speaking, the less space an instruction takes and the more work per instruction that a microprocessor can do, the more dense its code is. I notice that you've tagged your question with the 'arm' tag; I can illustrate code density using ARM instructions. Let's say you want to copy a block of data from one place in memory to another. Conceptually, your high level code would look something like this:

void memcpy(void *dest, void *source, int count_bytes)
{
    char *s, *d;
    s = source;
    d = dest;
    while (count_bytes--) {
        *d++ = *s++;
    }
}

Now a simple compiler for a simple microprocessor may convert this to something like the following:

      movl r0, count_bytes
      movl r1, s
      movl r2, d
loop: ldrb r3, [r1]
      strb [r2], r3
      movl r3, 1
      add  r1, r3
      add  r2, r3
      sub  r0, r3
      cmp  r0, 0
      bne  loop

(my ARM is a little rusty, but you get the idea) Now this would be a very simple compiler and a very simple microprocessor, but you can see from the example that we're looking at 8 instructions per iteration of the loop (7 if we move the '1' to another register and move the load outside the loop). That's not really dense at all. Code density also affects performance; if your loops are longer because the code is not dense, you might need more instruction cache to hold the loop. More cache means a more expensive processor, but then again complex instruction decoding means more transistors to decipher the requested instruction, so it's a classic engineering problem. ARM's pretty nice in this respect. Every instruction can be conditional, most instructions can increment or decrement the value of registers, and most instructions can optionally update the processor flags.
On ARM and with a moderately useful compiler, the same loop may look something like this:

      movl r0, count_bytes
      movl r1, s
      movl r2, d
loop: ldrb r3, [r1++]
      strb [r2++], r3
      subs r0, r0, 1
      bne  loop

As you can see, the main loop is now 4 instructions. The code is more dense because each instruction in the main loop does more. This generally means that you can do more with a given amount of memory, because less of it is used to describe how to perform the work. Now native ARM code often had the complaint that it wasn't super-dense; this is due to two main reasons: first, 32 bits is an awfully "long" instruction, so a lot of bits seem to be wasted for simpler instructions, and second, code got bloated due to ARM's nature: each and every instruction is 32 bits long, without exception. This means that there are a large number of 32-bit literal values that you can't just load into a register. If I wanted to load "0x12345678" into r0, how do I code an instruction that not only has 0x12345678 in it, but also describes "load literal to r0"? There are no bits left over to code the actual operation. The ARM load literal instruction is an interesting little beast, and the ARM assembler must also be a little smarter than normal assemblers, because it has to "catch" these kinds of instructions and code them as a value stored in the object file and an indirect load of that address to the requested register. Anyway, to answer these complaints, ARM came up with Thumb mode. Instead of 32 bits per instruction, the instruction length is now 16 bits for almost all instructions, and 32 bits for branches. There were a few sacrifices with Thumb mode, but by and large these sacrifices were easy to make because Thumb got you something like a 40% improvement in code density just by reducing the instruction length.
{ "source": [ "https://electronics.stackexchange.com/questions/4123", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/170/" ] }
4,185
I am going through the book "Elements of Computing Systems". This book teaches how to build a whole computer from scratch. While I was just browsing the chapters on computer architecture, I noticed that it all focused on the von Neumann architecture. I was just curious as to what the other architectures are, and when and where they are used. I know about only two: one is von Neumann and the second is Harvard. Also, I know about RISC, which is used in AVR microcontrollers.
There are many different kinds of computer architectures. One way of categorizing computer architectures is by number of instructions executed per clock. Many computing machines read one instruction at a time and execute it (or they put a lot of effort into acting as if they do that, even if internally they do fancy superscalar and out-of-order stuff). I call such machines "von Neumann" machines, because all of them have a von Neumann bottleneck. Such machines include CISC, RISC, MISC, TTA, and DSP architectures. Such machines include accumulator machines, register machines, and stack machines. Other machines read and execute several instructions at a time (VLIW, super-scalar), which break the one-instruction-per-clock limit, but still hit the von Neumann bottleneck at some slightly larger number of instructions-per-clock. Yet other machines are not limited by the von Neumann bottleneck, because they pre-load all their operations once at power-up and then process data with no further instructions. Such non-Von-Neumann machines include dataflow architectures, such as systolic architectures and cellular automata, often implemented with FPGAs, and the NON-VON supercomputer. Another way of categorizing computer architectures is by the connection(s) between the CPU and memory. Some machines have a unified memory, such that a single address corresponds to a single place in memory, and when that memory is RAM, one can use that address to read and write data, or load that address into the program counter to execute code. I call these machines Princeton machines. Other machines have several separate memory spaces, such that the program counter always refers to "program memory" no matter what address is loaded into it, and normal reads and writes always go to "data memory", which is a separate location usually containing different information even when the bits of the data address happen to be identical to the bits of the program memory address. 
Those machines are "pure Harvard" or "modified Harvard" machines. Most DSPs have 3 separate memory areas -- the X ram, the Y ram, and the program memory. The DSP, Princeton, and 2-memory Harvard machines are three different kinds of von Neumann machines. A few machines take advantage of the extremely wide connection between memory and computation that is possible when they are both on the same chip -- computational ram or iRAM or CAM RAM -- which can be seen as a kind of non-von Neumann machine. A few people use a narrow definition of "von Neumann machine" that does not include Harvard machines. If you are one of those people, then what term would you use for the more general concept of "a machine that has a von Neumann bottleneck", which includes both Harvard and Princeton machines, and excludes NON-VON? Most embedded systems use Harvard architecture. A few CPUs are "pure Harvard", which is perhaps the simplest arrangement to build in hardware: the address bus to the read-only program memory is connected exclusively to the program counter, such as many early Microchip PICmicros. Some modified Harvard machines, in addition, also put constants in program memory, which can be read with a special "read constant data from program memory" instruction (different from the "read from data memory" instruction). The software running in the above kinds of Harvard machines cannot change the program memory, which is effectively ROM to that software. Some embedded systems are "self-programmable", typically with program memory in flash memory and a special "erase block of flash memory" instruction and a special "write block of flash memory" instruction (different from the normal "write to data memory" instruction), in addition to the "read data from program memory" instruction. Several more recent Microchip PICmicros and Atmel AVRs are self-programmable modified Harvard machines. Another way to categorize CPUs is by their clock.
Most computers are synchronous -- they have a single global clock. A few CPUs are asynchronous -- they don't have a clock -- including the ILLIAC I and ILLIAC II, which at one time were the fastest supercomputers on earth. Please help improve the description of all kinds of computer architectures at http://en.wikibooks.org/wiki/Microprocessor_Design/Computer_Architecture .
{ "source": [ "https://electronics.stackexchange.com/questions/4185", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1200/" ] }
4,382
I've worked on the Arduino family (specifically the Sanguino), built a few simple devices and a simple phototrope. I am thus pretty comfortable with microcontrollers - specifically Atmel's. I'm curious to know how do FPGA's differ from standard microcontrollers. I am from a technical background (C/C++ programming) and thus would love technical answers. Just keep in mind that I am a newbie (relative to my s/w experience) in the electronics domain. :) I did go through this query and it was good but I'm looking for further deeper details. Thanks! Sushrut.
Designing for an FPGA requires a Hardware Description Language (HDL). HDLs are absolutely nothing at all like C. Whereas a C program is a sequential series of instructions and must contort itself to achieve parallel execution, an HDL describes a concurrent circuit and must contort itself to achieve sequential execution. It is a very different world and if you try to build a circuit in an FPGA while thinking like a software developer it will hurt. An MCU is time-limited. In order to accomplish more work, you need more processor cycles. Clocks have very real limits to their frequencies, so it's easy to hit a computational wall. However, an FPGA is space-limited. In order to accomplish more work, you merely add more circuits. If your FPGA isn't big enough, you can buy a bigger one. It's very hard to build a circuit that can't fit in the largest FPGA, and even if you do there are app notes describing how to daisy chain FPGAs together. FPGAs focus way more on parallel execution. Sometimes you have to worry about how long your MCU's ISR takes to service the interrupt, and whether you'll be able to achieve your hard-real-time limits. However, in an FPGA there are lots of Finite State Machines (FSM) running all the time. They are like "femto-controllers", like little clouds of control logic. They are all running simultaneously, so there's no worrying about missing an interrupt. You might have an FSM to interface to an ADC, another FSM to interface to a microcontroller's address/data bus, another FSM to stream data to a stereo codec, yet another FSM to buffer the dataflow from the ADC to the codec...You need to use a simulator to make sure that all the FSMs sing in harmony. If any control logic is off by even a single clock cycle (and such mistakes are easy to make) then you will get a cacophony of failure. FPGAs are a PCB layout designer's wet dream. They are extremely configurable.
You can have many different logic interfaces (LVTTL, LVCMOS, LVDS, etc), of varying voltages and even drive strengths (so you don't need series-termination resistors). The pins are swappable; have you ever seen an MCU address bus where the pins were scattered around the chip? Your PCB designer probably has to drop a bunch of vias just to tie all the signals together correctly. With an FPGA, the PCB designer can then run the signals into the chip in pretty much any order that is convenient, and then the design can be back-annotated to the FPGA toolchain. FPGAs also have lots of nice, fancy toys. One of my favorites is the Digital Clock Manager in Xilinx chips. You feed it one clock signal, and it can derive four more from it using a wide variety of frequency multipliers and dividers, all with pristine 50% duty cycle and 100% in phase...and it can even account for the clock skew that arises from propagation delays external to the chip! EDIT (reply to addendum): You can place a "soft core" into an FPGA. You're literally wiring together a microcontroller circuit, or rather you're probably dropping someone else's circuit into your design, like Xilinx's PicoBlaze or MicroBlaze or Altera's Nios. But like the C->VHDL compilers, these cores tend to be a little bloated and slow compared to using an FSM and datapath, or an actual microcontroller. The development tools can also add significant complexity to the design process, which can be a bad thing when FPGAs are already extremely complex chips. There are also some FPGAs that have "hard cores" embedded in them, like Xilinx's Virtex4 series that have a real, dedicated IBM PowerPC with FPGA fabric around it. EDIT2 (reply to comment): I think I see now...you're asking about connecting a discrete MCU to an FPGA; i.e. two separate chips. 
There are good reasons to do this; the FPGAs that have hard cores and the ones that are big enough to support decent soft cores are usually monsters with many hundreds of pins that end up requiring a BGA package, which easily increases the difficulty of designing a PCB by a factor of 10. C is a lot easier, though, so MCUs definitely have their place working in tandem with an FPGA. Since it's easier to write C, you might write the "brains" or the central algorithm in the MCU, while the FPGA can implement sub-algorithms that might need to be accelerated. Try to put things that change into the C code, because it's easier to change, and leave the FPGA to be more dedicated type stuff that won't change often. MCU design tools are also easier to use. It takes several minutes for the design tools to build the FPGA bit file, even for somewhat simple designs, but complex MCU programs usually take a few seconds. There's much, much less to go wrong with the MCU, so they're also easier to debug...I cannot overstate how complex FPGAs can be. You really need to get the datasheet for the one you have, and you should try to read every page of it. I know, it's a few hundred pages...do it anyway. The best way to connect them is to use an MCU with an external address and data bus. Then you can simply memory map the FPGA circuits into the MCU, and add your own "registers" that each have their own address. Now the FPGA can add custom peripherals, like a 32-bit timer that can latch all 4 bytes at once when the first byte is read to prevent overflows between 8-bit reads. You can also use it as glue logic to memory map more peripherals from other chips, like a separate ADC. Finally, some MCUs are designed for use with an "external master" like an FPGA. Cypress makes a few USB MCUs that have an 8051 inside, but the intent is for the USB data to be produced/consumed by e.g. an FPGA.
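To make the memory-mapped-register idea concrete, here's the flavor of the latched-timer trick sketched in C. The base address and register layout are invented for illustration; in a real design they are whatever your FPGA's bus-decode logic says they are:

```c
#include <stdint.h>

/* Hypothetical map: the FPGA decodes 0x80000010..0x13 as a 32-bit timer.
   Reading byte 0 tells the FPGA to latch the full count into a holding
   register, so bytes 1..3 stay consistent with byte 0 even though the
   external bus is only 8 bits wide. */
#define FPGA_BASE     0x80000000u
#define TIMER_BYTE(n) (*(volatile uint8_t *)(uintptr_t)(FPGA_BASE + 0x10u + (n)))

/* Reassemble the four latched bytes (little-endian here by choice). */
uint32_t timer_from_bytes(uint8_t b0, uint8_t b1, uint8_t b2, uint8_t b3)
{
    return (uint32_t)b0
         | ((uint32_t)b1 << 8)
         | ((uint32_t)b2 << 16)
         | ((uint32_t)b3 << 24);
}

/* On target: read byte 0 first (that's the latch!), then the rest.
   The reads are sequenced explicitly in separate statements because
   C does not specify argument evaluation order. */
uint32_t read_fpga_timer(void)
{
    uint8_t b0 = TIMER_BYTE(0);
    uint8_t b1 = TIMER_BYTE(1);
    uint8_t b2 = TIMER_BYTE(2);
    uint8_t b3 = TIMER_BYTE(3);
    return timer_from_bytes(b0, b1, b2, b3);
}
```

timer_from_bytes is plain arithmetic and easy to unit-test off target; read_fpga_timer only makes sense on hardware whose bus actually decodes that address.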
{ "source": [ "https://electronics.stackexchange.com/questions/4382", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1143/" ] }
4,515
How should I route USB Connector shield on PCB? Should it be connected to GND plane right where USB is placed, or should the shield be isolated from GND, or should it be connected to ground through ESD protection chip, high resistance resistor or fuse? PS. Should I put the shield connections on schematic, or just route it on PCB?
For the shield to be effective, it requires as low an impedance connection as possible to your shield ground. I think those recommending resistors, or not connecting it to ground at all, are strictly talking about your digital logic ground, and assuming you have a separate shield ground. If you have a metal enclosure, this will be your shield ground. At some point, your digital ground must connect to your shield ground. For EMI reasons, this single point should be close to your I/O area. This means it's best to place your USB connector with any other I/O connectors around one section of the board and locate your shield-to-logic-ground point at that location. There are some exceptions to the single-point rule: if you have a solid metal enclosure without any apertures, for example, multiple connection points can be helpful. In any case, at the shield-to-circuit-ground connection, some may recommend using a resistor or capacitor (or both), but rarely is there a reasonable reason to do this. You want a low inductance connection between the two to provide a path for common mode noise. Why divert noise through parasitic capacitance (i.e. radiate it out into the environment)? The only reason usually given for such tactics is to prevent ground loops, but since you're talking about USB, ground loops most likely won't be an issue for most applications. Granted, such tactics will prevent ground loops, but they will also render your shielding all but ineffective.
{ "source": [ "https://electronics.stackexchange.com/questions/4515", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1177/" ] }
4,611
Pardon me, I'm a total newb to electronics. My question is, when a device is measured in watts, such as a 60-watt light bulb, is this ALWAYS supposed to be assumed to be watt-hours, i.e. 60 watts per hour?
Energy is an amount, while power is a rate at which energy is used. Energy is measured in watt-hours (W·h) or joules (J). Power is measured in watts (W) or joules per second (J/s). Watt-hours are like buckets, and watts are like buckets per hour. If you have 5 buckets of energy and you pour one bucket per hour, you'll be able to pour for 5 hours before you run out. If you turn on a 60-watt light bulb for 1 hour, you have used 60 watt-hours of energy. If you use it for 2 hours, you have used 120 watt-hours of energy. If you turn it on for only 1 minute, you have used 1 watt-hour. It's a little confusing since the "per hour" is inside the term "watt", so to make the rate into an amount, you need to multiply by a time unit to cancel it out. It would be a lot more intuitive if we worked in kilojoules and kilojoules per hour. :)
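The bucket analogy above boils down to one multiplication; a small sketch (the function name is mine, for illustration):

```python
# Energy (watt-hours) = power (watts) x time (hours)
def energy_wh(power_w, hours):
    return power_w * hours

print(energy_wh(60, 1))       # 60 Wh: the bulb on for one hour
print(energy_wh(60, 2))       # 120 Wh: on for two hours
print(energy_wh(60, 1 / 60))  # 1 Wh: on for one minute
```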
{ "source": [ "https://electronics.stackexchange.com/questions/4611", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1314/" ] }
4,640
Every RC car I've cannibalized has had a small ceramic capacitor soldered onto the contacts of the motors. What is the purpose of having this? What happens to the performance of the motor without this?
The other two people who have answered have the first part right: the small-value ceramic capacitor acts as a high frequency filter. The brushes create insane amounts of broad-spectrum high frequency noise, and this can interfere with the electronics (especially the radio receiver). The capacitor acts as a short-circuit at high frequencies (Xc = 1/(2*pi*f*C)), and it is soldered as close as possible to the commutator (i.e. right at the motor leads) to minimize the "antenna" these frequencies see. If the capacitor was not there, the noise would "see" several inches of motor lead, which would act as a great little antenna for broadcasting this noise into anything nearby, especially the sensitive radio receiver. It has nothing to do with smoothing over anything -- the capacitor is far too small to be effective as a temporary storage device. It's being used as a frequency-selective low-impedance shunt, a low-pass filter, if you will.
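To put numbers on the frequency-selective shunt, here is a quick sketch of the reactance formula from the answer; the 100 nF capacitor value is an assumption for illustration, not from the question:

```python
import math

def xc(f_hz, c_farads):
    # Capacitive reactance: Xc = 1 / (2*pi*f*C)
    return 1.0 / (2 * math.pi * f_hz * c_farads)

print(xc(100e6, 100e-9))  # ~0.016 ohm at 100 MHz: a near-short for brush noise
print(xc(1e3, 100e-9))    # ~1.6 kohm at 1 kHz: negligible effect at motor-drive rates
```

The same part thus shorts out the RF hash while leaving the low-frequency drive current alone.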
{ "source": [ "https://electronics.stackexchange.com/questions/4640", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1308/" ] }
4,651
Well, the question is the title: Is it worth really buying a Fluke for hobbyist use? I have a cheap meter at the moment. Is it worth spending a 3 figure sum on a nice, shiny, new Fluke? I honestly don't think so, but I'm curious what other's opinions are.
I think you should make sure you select a multimeter that answers the questions you need answered. Or start with a cheap one, and if you need more features / better accuracy, get a better one. Most folks working on digital electronics need to know: Is this signal high or low? Is this signal switching? Is this wire continuous? Is my transistor PNP or NPN, and is the RadioShack package correct? (It wasn't, once, for me.) How much current is running through this section of the circuit? What is the duty cycle of the PWM signal? (This is easily found by doing a little bit of math based on the voltage.) Really, to solve all those problems, a $5 USD digital multimeter does all that and more. If you need to measure capacitors or inductors, are panicky about your duty cycle, need tight accuracy, or need the thing to survive falling down a waterfall, crazy EMF fields, or general abuse, I'd look for a Fluke or something. If you need to watch really fast signals, you should be doing that with a logic analyzer or an oscilloscope anyway.
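The "little bit of math" for PWM duty cycle is just a ratio, assuming your meter genuinely averages the waveform (true-RMS or slow-averaging meters behave differently); the helper below is a sketch with made-up readings:

```python
# Duty cycle of a PWM signal from the averaged voltage a cheap DMM reports
def duty_cycle(v_avg, v_high, v_low=0.0):
    return (v_avg - v_low) / (v_high - v_low)

print(duty_cycle(1.65, 3.3))  # 0.5: a 3.3 V PWM line averaging 1.65 V
print(duty_cycle(1.0, 5.0))   # 0.2: a 5 V line averaging 1.0 V
```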
{ "source": [ "https://electronics.stackexchange.com/questions/4651", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1225/" ] }
4,678
Possibly many questions have been asked on soldering SMD parts, but I haven't found specific answers, like: Do you use a watchmaker's lens or some other type of magnifying glass while soldering these miniature components? What would be most optimal to see a larger picture? How do you solder components where pads lie beneath the package? I don't own a reflow oven and have tried to ignore these packages but can't do that anymore. Are there any techniques to manually solder BGA, iLCC, CSP, amongst others? What tools do you use, apart from tweezers, soldering iron, solder wire, and a bright/illuminated workplace? Any suitable "third hand" that you have found that makes a monster of a difference? Is there a specific tip thickness to use for the soldering iron, and what about the solder wire gauge? For prototyping it would not always be feasible to make a PCB, so do you solder these components on a veroboard or do you buy a breakout board? You could add more to these based on your experience and wisdom... Your turn.
Before I start, this is a lot of questions in one question. Please try to break it up a little more next time. One: Do you use a watchmakers lens or some other type of magnifying glass while soldering these miniature components? What would be most optimum to see a larger picture? This question discussed optics in further detail. I have a 10x loupe that I use to inspect solder joints when I'm not at the microscope workstation at school or work, but there is no doubt that the stereo microscope is the best tool. Stereo gives you depth perception. As far as seeing the larger picture, zooming out (microscopes I've used go from ~3x to 40x) gives you plenty of room to find your place if you're concerned about that. Zooming in, however, is when a scope shines. You will burn tall plastic parts (like connector shrouds) for a while, but eventually you get a feel for where your iron is outside of the field of view. A good microscope will give you about a 3" focal length (contrast with a cheap loupe, mine is probably about 1.5" for 1/4 the magnification), so you can wave your soldering iron halfway between the two until you see a fuzzy brown cloud moving through your field of view. Move the iron back until you see the tip, and only then lower it to the pad you're soldering. A lighted diopter lens doesn't provide enough magnification, in my opinion, to justify the obtrusiveness of having the lens in the way. Same with the magnifiers on helping hands. Two: How do you solder components where pads lie beneath the package, I don't own a reflow oven and have tried to ignore these packages but can't do that anymore. Are there any techniques to manually solder BGA, iLCC, CSP amongst others. If at all possible, stay away from BGA type packages for hand soldering. In a pinch, iLCC (and the more common QFN) packages can be done by placing small domes of solder on the pad (which must extend outside of the chip boundaries), fluxing the bottom of the component, and heating the solder. 
If all goes well, the solder will melt, heat the contact on the chip, and the surface tension will pull the joint together. For low pin count devices, this works quite well, including crystal oscillators . If the contacts extend up the side of the chip, just heat those. Another option is hot air guns or hot air soldering stations. Steinel makes good air guns, and many soldering stations have air attachments. I've found that air guns are more effective than solder stations for applying/reflowing chips, they just seem to apply the heat more evenly and sustainably. Pay attention to the reflow profiles: You want to start heating it up slowly, over a period of a minute or two, and only then actually apply the real heat. Thermal stress is a real concern here. Note that I've only ever used this method for rework; I haven't tried it for assembly runs. Three: What tools do you use, apart from tweezers, soldering iron, solder wire, and a bright/ illuminated workplace. Any suitable "third hand" that you have found that makes a monster of a difference? Solder wick. Miles and miles of the stuff. For most work, even fine pitches, normal .11" stuff is fine, but the smaller stuff (.05" or .03") is helpful. Most tutorials will have you apply it rather indiscriminately. For fine work, you want to lay it parallel to the edge of the chip, poke the edge nearest the pad with the tip of your soldering iron, and slide it over the PCB until it contacts the edge of the chip. Be wary of allowing little shreds to break off and cause shorts. For helping hands, I've used a Panavise 301 with the 312 tray base . It holds the work 10" off the table, which lets you steady your elbows. However, some people like to put the work on the table (on an antistatic pad, of course), so you can steady the heel of your hand instead. Last, and probably most importantly, you'll want flux. 
Flux pens are cheap and easy to find, but I have a little dropper bottle that I like better - You don't have to worry about damaging anything if you drip the flux onto the PCB. This, of course, mandates keeping some isopropyl alcohol and cotton blotters on hand to remove the residue intermittently. Oh, and you'll also want a spool of 30-gauge wire-wrap wire to fix mistakes. Four: Is there a specific tip thickness to use for the soldering iron, what about the solder wire guage? This depends completely on what you're doing. I have a 1/32" cone that I use for most everything, and I use standard .031" solder for connectors, through-hole, and wiring work, and .01" Kester 44 for fine work. You'll just have to experiment. Five: For prototyping if would not always be feasible to make a pcb, do you solder these components on a veroboard or do you buy a breakout board? I usually dead-bug tiny components: Superglue the top, attach to protoboard (like Twin Industries 8200-45-LF), and then run 30-gauge wire to each of the pads, like this , and connect to headers or whatever you need to do. (Note: Pic of someone else's work, not mine). Then, after verifying that everything's in the right places, put a blob of hot glue over the whole thing to give the wires some strain relief.
{ "source": [ "https://electronics.stackexchange.com/questions/4678", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/444/" ] }
4,755
What are the differences and where would you use each?
There is a lot of trade-off to it. From Wikipedia: Despite the additional transistors, the reduction in ground wires and bit lines allows a denser layout and greater storage capacity per chip. In addition, NAND flash is typically permitted to contain a certain number of faults (NOR flash, as is used for a BIOS ROM, is expected to be fault-free). Manufacturers try to maximize the amount of usable storage by shrinking the size of the transistor below the size where they can be made reliably, to the size where further reductions would increase the number of faults faster than it would increase the total storage available. So, NOR flash is easier to address, but is not even close to as dense. Take a look at a pretty decent comparison PDF. NOR has lower standby power, is easy for code execution, and has a high read speed. NAND has much lower active power (writing bits is faster and lower cost), higher write speed (by a lot), much higher capacity, much, much lower cost per bit, and is very easy to use for file storage. Due to its lower read speed, when using it for code execution you really need to shadow it to RAM. To quote a small section with a great table above it... The characteristics of NAND Flash are: high density, medium read speed, high write speed, high erase speed, and an indirect or I/O-like access. The characteristics of NOR Flash are lower density, high read speed, slow write speed, slow erase speed, and a random access interface.
{ "source": [ "https://electronics.stackexchange.com/questions/4755", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1411/" ] }
4,784
Why do you need to store the voltage for some time in a capacitor? I've always assumed circuits to work when you power it on and stop when you power it off. Why can't the whole circuit be drawn capacitor free? If it's meant for storage why not just use a flip-flop?
If all you wanted to build was digital circuitry, and your voltage sources really held constant voltage no matter how much current was drawn from them, and nothing produced electrical noise, you wouldn't need capacitors. But voltage sources sag when you draw current from them. Motor brushes (and lots of other components) produce horrendous voltage spikes that you want to filter out of your digital circuitry. Some people also deal with analog circuitry, where voltage and current signals vary continuously across a wide range. For that kind of time-varying circuitry, capacitors are needed.
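As a rough illustration of why sagging supplies need local capacitance: the voltage droop a rail sees while a capacitor alone feeds a current transient follows dV = I*dt/C. The numbers below are made up for illustration:

```python
def droop_v(current_a, dt_s, cap_f):
    # dV = I*dt/C: how far the rail sags while the cap supplies a transient
    return current_a * dt_s / cap_f

# A 50 mA, 10 ns switching transient against a local 100 nF decoupling cap
print(droop_v(0.05, 10e-9, 100e-9) * 1000, "mV")  # 5.0 mV of sag
```

A few millivolts of sag is harmless to logic; without the cap, the same transient would have to come through inches of inductive supply wiring.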
{ "source": [ "https://electronics.stackexchange.com/questions/4784", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1424/" ] }
4,788
Could you explain how a quartz crystal works, maybe with a simple schematics with the essential things ? I know it acts like a kind of stabilizer for an oscillator, but nothing more than that.
Quartz is a piezoelectric material, which means that if you mechanically deform it, it develops charges on its surface. Similarly, if you place charges on its surface, it causes mechanical stress in the crystal. The way a quartz crystal benefits a circuit is that mechanically the crystal acts much like a tuning fork, with a natural resonant frequency, and the piezoelectric property allows that to be coupled into an electronic circuit. Since the resonant frequency is mainly determined by the physical size and shape of the quartz, you get a frequency reference that is much less sensitive to temperature than you would get using just LC circuits.
{ "source": [ "https://electronics.stackexchange.com/questions/4788", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1272/" ] }
4,951
Say I have a 1F capacitor that is charged up to 5V. Then say I connect the cap to a circuit that draws 10 mA of current when operating between 3 and 5 V. What equation would I use to calculate the voltage across the capacitor, with respect to time, as it is discharging and powering the circuit?
Charge on a cap is the product of capacitance and voltage, Q=CV. If you plan to drop from 5V to 3V, the charge you remove is 5V*1F - 3V*1F = 2V*1F = 2 Coulombs of charge. One Amp is one Coulomb per second, so 2C can provide 0.01A for 2C / (0.01 C/sec), or 200 seconds. If you actually withdraw charge from the cap at a constant current, the voltage on the cap will decrease from 5V to 3V linearly with time, given by Vcap(t) = 5 - 2*(t/200). Of course, this assumes you have a load that draws a constant 10mA even while the voltage supplied to it changes. Common simple loads tend to have relatively constant impedance, which means that the current they draw will decrease as the cap voltage decreases, leading to the usual non-linear, decaying exponential voltage on the cap. That equation has the form of V(t) = V0 * exp(-t/RC).
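The two cases in the answer, side by side, with the question's numbers (1 F, 5 V down to 3 V, 10 mA). The constant-resistance load value is an assumption: the resistance that draws 10 mA at the full 5 V.

```python
import math

C, V0, V_END, I = 1.0, 5.0, 3.0, 0.01

# Constant-current load: charge removed is C*(V0 - V_END), discharge is linear
t_linear = C * (V0 - V_END) / I
print(t_linear)  # 200.0 seconds

# Constant-resistance load instead: V(t) = V0*exp(-t/RC); solve for t at V_END
R = V0 / I  # 500 ohms draws 10 mA at 5 V (assumption, see above)
t_exp = -R * C * math.log(V_END / V0)
print(round(t_exp, 1))  # ~255.4 seconds to fall to 3 V
```

The resistive load lasts longer because its current tapers off as the cap voltage drops.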
{ "source": [ "https://electronics.stackexchange.com/questions/4951", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1235/" ] }
5,096
I want to connect multiple I2C slave devices to a micro controller all on the same set of pins but the I2C devices all share the same address. The addresses are fixed in the hardware. Is there any way to connect multiple devices with the same address? Perhaps some kind of I2C address translation module with each device with an configurable address so I can assign my own addresses to each one.
There is nothing built into I2C to do this; normally slave devices will have some external pins that can be set to 0 or 1 to toggle a couple of the address bits to avoid this issue. Alternatively, I've dealt with a few manufacturers that have 4 or 5 part numbers for a part, the only difference being its I2C address. Most devices have specific hardware that handles the I2C communication (that is, the slave ACK is in hardware), so you really can't hack around it. As for the translation module, you could buy some $0.50 PICs with 2 I2C buses and write some quick code to make them act as address translators, I guess.
{ "source": [ "https://electronics.stackexchange.com/questions/5096", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/898/" ] }
5,111
I want to learn PID (Proportional–Integral–Derivative) control mainly for temperature. I would like to learn preferably through an easy project to do. Could you please recommend something which would take a few weeks to learn? Edit: I want to control the temperature of a water tank. The heating is done by a resistor.
Controlling temperature (it depends upon your medium) isn't terribly hard. That was my first project when I started. Pardon me, if I repeat things you already know. I assume you already have a way of controlling the system (ie, a heater or cooler unit), and a way of getting feedback from the system (a temperature sensor like a thermistor or something). You'll need both to implement a PID loop, which is a type of closed loop control. All you really need to do after that is write a bit of software to send control commands, read feedback, and make decisions upon that feedback. I'd start out by reading PID without a PhD . It's the article I used when I first had to regulate temperature in a science experiment. It provides some easy-to-understand pictures, and nice sample code (a basic loop that you can tweak only needs 30 lines) that explains how to control your 'plant' - in this case, the thing you want to control the temperature of. The gist of PID - Proportional-Integral-Differential - control is to use instantaneous, past, and predicted future performance (respectively) of the system to determine how to control a system at a given point in time to reach a specified set point. In many cases, you'll have to tune the algorithm's gain factors to get the desired performance you need - how quickly the temperature will rise, how much you want to avoid overshoot, etc. You might even find you don't need the differential or even integral control to get where you want to be!
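In that spirit, here is a minimal PID loop run against a toy first-order "heated tank" model. The plant constants and gains are invented for illustration and are not tuned for any real system:

```python
def make_pid(kp, ki, kd, dt):
    state = {"integral": 0.0, "prev_err": None}
    def step(setpoint, measured):
        err = setpoint - measured
        state["integral"] += err * dt
        deriv = 0.0 if state["prev_err"] is None else (err - state["prev_err"]) / dt
        state["prev_err"] = err
        # P uses the present error, I its accumulated past, D its current trend
        return kp * err + ki * state["integral"] + kd * deriv
    return step

# Toy plant: heater power warms the water, losses pull it back toward ambient
temp, ambient, dt = 20.0, 20.0, 1.0
pid = make_pid(kp=2.0, ki=0.1, kd=0.5, dt=dt)
for _ in range(1000):
    power = min(max(pid(50.0, temp), 0.0), 100.0)  # clamp heater to 0-100%
    temp += dt * (0.05 * power - 0.02 * (temp - ambient))
print(round(temp, 1))  # should settle near the 50 degree setpoint
```

Note the integral term is what holds the temperature at the setpoint once the error reaches zero, since some steady heater power is still needed to offset the losses.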
{ "source": [ "https://electronics.stackexchange.com/questions/5111", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/938/" ] }
5,139
Recently while routing a PCB, I came across the option to fill/pour my ground plane with either solid or hatched copper. I've also noticed that the old Arduino Duemilanove also had a hatched ground plane. What benefits does a hatched ground plane have over solid ground plane and vice versa?
As others said, it's mostly because it was easier to manufacture than solid layers for various reasons. They can also be used in certain situations where you need controlled impedance on a very thin board. The trace width needed to get 'normal' impedances on such a thin board would be ridiculously narrow, but the cross hatching changes the impedance characteristics on adjacent layers to allow wider traces for a given impedance. If for some reason you need to do this, you can only route controlled impedance traces at 45 deg to the hatch pattern. This approach greatly increases mutual inductance between signals and, consequently, cross-talk. Also note that this only works when the size of the hatch is much less than the length of the signal's rise time; this normally correlates to the frequency of the digital signals in question. As such, as frequency increases you reach a point where the hatch pattern would have to be so tightly spaced that you lose any benefit vs. a solid plane. In summary: never use a cross-hatched ground plane unless you're stuck in some really weird situation. Modern PCB construction and assembly techniques no longer require it.
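To get a feel for "the size of the hatch is much less than the length of the signal's rise time": an edge occupies a physical length along the trace. The ~6 in/ns FR4 propagation speed and the divide-by-six "critical length" factor below are common rule-of-thumb approximations, not values from the answer:

```python
# Rough sketch: how much physical trace a signal edge occupies in FR4
def edge_length_in(rise_time_ns, v_in_per_ns=6.0):
    return rise_time_ns * v_in_per_ns

def critical_length_in(rise_time_ns):
    # Structures (hatch pitch, stubs) should stay well under this
    return edge_length_in(rise_time_ns) / 6.0

print(edge_length_in(1.0))      # a 1 ns edge spans about 6 inches of trace
print(critical_length_in(1.0))  # so features should be well under ~1 inch
```

As edges get faster, the allowable hatch pitch shrinks toward a solid plane anyway, which is the answer's point.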
{ "source": [ "https://electronics.stackexchange.com/questions/5139", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1568/" ] }
5,186
I find it very difficult to read part numbers. I haven't figured out what amount of light or angle of inclination would help me. I also wonder why chipmakers don't paint the numbers in white for better contrast.
1. Clean up the text with a cotton swab dipped in rubbing alcohol. 2. Wait for the alcohol to evaporate and rub yellow or white chalk on markings. 3. Wipe gently with a cotton swab and bingo! Source (in Portuguese): http://www.piclistbr.org/paginas.php?fname=%20dicas.htm%20&autor=%2009/2010%20-%20Luciano%20Sturaro
{ "source": [ "https://electronics.stackexchange.com/questions/5186", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1298/" ] }
5,196
I'm aware that nobody actually does this at the hobbyist level, that successful commercial products have been launched without certification, and it's probably something I can't afford if I have to ask. However, I've always wondered about the ballpark cost. About how much does it cost to receive FCC certification?
As a rough estimate, the cost is $10k-20k, plus your labor cost. In the US, all products containing electronics that oscillate above 9 kHz must be certified. The law that governs this is FCC Part 15. The lawyers call this "Title 47 CFR Part 15," meaning that it is Part 15 of Title 47 of the Code of Federal Regulations. In Europe, there is a similar regulation called CISPR 22. The requirements are almost the same, but slightly stricter about emissions at certain frequencies. You can read 47 CFR 15 online. It's not as incomprehensible as you might expect. It seems overwhelming, but if you read the first few PDFs, you'll realize that most of it is irrelevant for any single product. Within 47 CFR 15, there are two classes of testing: Class A and Class B. Class A is an easier test to pass, intended for devices that are used in industrial settings. Class B is stricter, intended for devices that are targeted at consumers. There is additional testing for "intentional radiators," meaning radios, Wi-Fi, Bluetooth and such. There may be an exception if your device is intended for use as a component in a larger system (like a microprocessor or memory card in a PC), but I'm not sure of the legal details there. The major expense is renting the test chamber. This is what's called an "anechoic chamber," instrumented with a pile of sensors for detecting electromagnetic radiation. To my knowledge, these cost around $1000/hour, and each testing session takes 2 or 3 hours. It's unlikely, but not impossible, that you will pass on the first try. Here's a decent picture of a test chamber. The one I've been in was actually much larger, like a squash court. I think it was an Intertek facility in Menlo Park, CA. Unless you're experienced with emissions testing, it is worth hiring an expert, which costs around $500/hour. They can tell you things like, "Put a ferrite bead on that power cable, and that will reduce the emissions at this frequency."
The folks I've worked with arrive with a bunch of ferrite beads and inductors (and maybe caps?) of various sizes that you can use in the chamber to hack your device into compliance. (Perhaps it goes without saying, but I'm an engineer, not a lawyer. I have taken a few products through Part 15, but not in the last couple of years.) If you're thinking about doing this, start by reading EMC for Product Designers by Tim Williams. I'd avoid the books by Mark I. Montrose; I found them less helpful and more expensive.
{ "source": [ "https://electronics.stackexchange.com/questions/5196", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1363/" ] }
5,336
Every reference I can find on transistors immediately launches into theory-heavy alphabet soup. The above seems also to be assumed knowledge for reading a datasheet. I don't care; I just want to get one to work. I understand there's some relationship between the current/voltage applied to the base to get a particular current to flow from the collector to the emitter. Which numbers on the datasheet relate to that? If I'm only trying to operate the transistor in "switch" mode, do I really need to care what current I apply to the base or will I be fine just whacking a 1k resistor in between my logic level output and the transistor base? Is the only difference between an NPN and a PNP transistor which way the current flows when a current is applied to the base?
The base-emitter junction is like a diode. When the voltage across it (Vbe) exceeds approximately 0.65V (can be as low as 0.55V and as high as 0.9V, check the datasheet for your transistor) it begins conducting. The current (not voltage!) through the base emitter junction is amplified by the gain of the transistor, which is known as HFE. Ic(collector current) = Ib(base current) * HFE. Remember HFE is not constant for transistors, it varies from transistor to transistor and depending on temperature, previous usage, etc., so don't rely on it for controlled amplification. For the 2N2222 it is around 160, plus or minus 30. By applying a base-emitter voltage exceeding 0.65V to the transistor you can use it as a switch. simulate this circuit – Schematic created using CircuitLab (It is an NPN transistor you want. 2N3904 or 2N2222 will do.) If you want to use an LED which is not blue or white then use a 47 ohm resistor in series with it. When you press the switch the LED will come on.
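To see the arithmetic behind a 1 kΩ-style base resistor when using the transistor as a switch (the supply and resistor values are illustrative; HFE is the answer's ballpark 160 for a 2N2222):

```python
V_LOGIC, VBE, R_BASE = 5.0, 0.65, 1000.0
HFE = 160  # ballpark for a 2N2222; varies part to part and with conditions

ib = (V_LOGIC - VBE) / R_BASE  # base current the resistor allows
ic_max = ib * HFE              # collector current before saturation limits it
print(round(ib * 1000, 2))     # 4.35 mA of base drive
print(round(ic_max * 1000))    # 696 mA: far more than a 20 mA LED needs
```

Because the available collector current vastly exceeds what the LED circuit draws, the transistor saturates, which is exactly what you want for switch-mode operation: the exact HFE stops mattering.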
{ "source": [ "https://electronics.stackexchange.com/questions/5336", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1633/" ] }
5,338
I'm working with two chips on a board, a dsPIC33F and a PIC24F as well as a serial EEPROM (24FC1025.) I've seen these little ESD protection devices in 0603 packages: http://uk.farnell.com/panasonic/ezaeg3a50av/esd-suppressor-0603-15v-0-1pf/dp/1292692RL For MCUs like I'm using, is this necessary? The boards may be constantly handled and the external interfaces (I2C, UART) may be exposed to ESD. Would the internal diodes protect the chip anyway and make these pointless?
{ "source": [ "https://electronics.stackexchange.com/questions/5338", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1225/" ] }
5,403
Is there a standard for the sizes of PCB traces? That is, are some 25 mil and others 10 mil, or can you choose your own? I plan to run 400mA through some thicker traces, but less than 30mA for all other traces. About what size would I need?
You can use this nomograph to determine the width according to the current: Using the nomograph Locate the width of the conductor on the left side of the bottom chart. Move right horizontally until you intersect the line of the appropriate conductor thickness. Move down vertically to the bottom of the chart to determine the cross-sectional area of the conductor. Move up vertically until you intersect the line of the appropriate allowable temperature rise. This is the increase in temperature of the current-carrying conductor. Conductor temperature should not exceed 105°C. For example, if the ambient temperature might reach 80°C, the temperature rise above ambient of the conductor should be less than 25°C (105°C - 80°C). In this case, use the 20°C curve. Move left horizontally to the left side of the chart to determine the maximum allowable current. Reverse the order of these steps to calculate the required conductor width for a given current. More information at this site: http://www.minco.com/products/flex.aspx?id=1124 This graph is from IPC, but I cannot find it there.
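The nomograph encodes roughly the same relationship as the commonly cited IPC-2221 approximation I = k * dT^0.44 * A^0.725, with A in square mils and k about 0.048 for outer layers. The sketch below inverts it for the question's currents, assuming 1 oz copper and a 10°C rise; treat it as a rough cross-check of the chart, not a replacement:

```python
def trace_width_mil(current_a, temp_rise_c=10.0, oz_cu=1.0, external=True):
    # Invert I = k * dT**0.44 * A**0.725 for the cross-sectional area A,
    # then divide by copper thickness (1 oz copper is about 1.378 mil thick)
    k = 0.048 if external else 0.024
    area_sq_mil = (current_a / (k * temp_rise_c ** 0.44)) ** (1 / 0.725)
    return area_sq_mil / (1.378 * oz_cu)

print(round(trace_width_mil(0.4), 1))   # ~3.3 mil suffices for 400 mA outside
print(round(trace_width_mil(0.03), 2))  # well under 1 mil for the 30 mA traces
```

In practice you would still use a much wider trace than the formula's minimum, for manufacturability and margin.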
{ "source": [ "https://electronics.stackexchange.com/questions/5403", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1225/" ] }
5,498
I heard that the current limit for a USB port is 100mA. However, I also heard that some devices can get up to 1.8A from a port. How do you get past the 100mA limit?
I think I can attempt to clear this up. USB-100mA: USB by default will deliver 100mA of current (it is 500mW of power because we know it is 5V, right?) to a device. This is the most you can pull from a USB hub that does not have its own power supply, as they never offer more than 4 ports and keep a greedy 100mA for themselves. Some computers that are cheaply built will use a bus-powered hub (all of your USB connections share the same 500mA source, and the electronics acting as a hub use that source also) internally to increase the number of USB ports and to save a small amount of money. This can be frustrating, but you can always be guaranteed 100mA. USB-500mA: When a device is connected it goes through enumeration. This is not a trivial process and can be seen in detail on Jan Axelson's site. As you can see this is a long process, but a chip from a company like FTDI will handle the hard part for you. They discuss enumeration in one of their app notes. Near the end of enumeration you set up device parameters, very specifically the configuration descriptors. If you look on this website they will show you all of the different pieces that can be set. It shows that you can get right up to 500mA of power requested. This is what you can expect from a computer. You can get FTDI chips to handle this for you, which is nice, as you only have to treat the chip as a serial line. USB-1.8A: This is where things get interesting. You can purchase a charger that does outlet to USB at the store. This is a USB charging port. Your computer does not supply these, and your device must be able to recognize it. First, to get the best information about USB, you sometimes have to bite the bullet and go to the people who write the spec. I found great information about the USB charging spec here. The link on the page that is useful is the link for battery charging.
This link seems to be tied to the revision number, so I have linked both so that if the revision is updated people can still access the information. Now, what does this mean? If you open up the batt_charging PDF and jump to chapter three, it goes into charging ports. Specifically, section 3.2.1 explains how this is done. They keep it very technical, but the key point is simple: a USB charging port places a termination resistance between D+ and D-. I would like to copy out the chapter that explains it, but it is a secured PDF and I cannot copy it out without retyping it. Summing it up You may pull 100mA from a computer port. You may pull 500mA after enumeration and setting the correct configuration. Computers vary their enforcement, as many others have said, but most I have had experience with will try to stop you. If you violate this, you may also damage a poorly designed computer (Davr is spot on there; this is poor practice). You may pull up to 1.8A from a charging port, but this is a rare case where the port tells you something. You have to check for this, and once it is verified you may do it. This is the same as buying a wall adapter, but you get to use a USB cable and USB port. Why use the charging spec? So that when my phone dies, my charger charges it quickly, but if I do not have my charger I may pull power from a computer, while using the same hardware port to communicate files and information with my computer. Please let me know if there is anything I can add.
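As a sketch of the arithmetic in this answer: the bMaxPower field of a USB 2.0 configuration descriptor is expressed in units of 2 mA, and the power at each current tier follows from the nominal 5 V bus. The helper names below are mine, not from any USB library.

```python
# USB configuration-descriptor power maths (sketch).
# In the USB 2.0 spec, bMaxPower is expressed in 2 mA units,
# so a 500 mA request is encoded as 250 (0xFA).

def bmaxpower(milliamps):
    """Encode a current request as a bMaxPower byte (2 mA units)."""
    if milliamps > 500:
        raise ValueError("USB 2.0 caps bMaxPower at 500 mA")
    return milliamps // 2

def bus_power_watts(milliamps, vbus=5.0):
    """Power available at the given current on a nominal 5 V bus."""
    return vbus * milliamps / 1000.0

print(bmaxpower(500))         # 250 (0xFA)
print(bus_power_watts(100))   # 0.5 -- the default 100 mA budget, in watts
print(bus_power_watts(1800))  # 9.0 -- a 1.8 A charging port
```

Note the charging-port case deliberately bypasses `bmaxpower`: a 1.8 A draw is negotiated by the D+/D- termination described above, not by a descriptor.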
{ "source": [ "https://electronics.stackexchange.com/questions/5498", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1657/" ] }
5,572
Something like this Falstad sim version of it (I'm tired, I keep making mistakes, so please excuse me for the second time.) Now these are not very safe PSUs, due to the lack of isolation. But in sealed units, they can be a cheap way of getting the supply voltage for a microcontroller without an SMPS or transformer. They are not 100% efficient due to the zener and resistors. But I have several questions. How does the capacitor step down the voltage, anyway? Does it waste power as heat? If the zener were gone and the output was let to float around 50V, would it approach 100% efficiency?
This circuit is one of a category of circuits called a "Transformerless AC to DC Powersupply" or a "CR dropper circuit". For other examples, see "Massmind: Transformerless AC to DC Powersupply" or "Massmind: Transformer-less capacitive bleed power conversion" or "ST AN1476: Low-cost power supply for home appliances" . Such a device has a power factor near 0, making it questionable whether it meets EU-mandated power factor laws, such as EN61000-3-2. Even worse, when such a device is plugged into a "square wave" or "modified sine wave" UPS, it has much higher power dissipation (worse efficiency) than when plugged into mains power -- if the person who builds this circuit does not choose safety resistors and zener big enough to handle this additional power, they may overheat and fail. These two drawbacks may be why some engineers consider the "CR dropper" technique " dodgy and dangerous ". How does the capacitor step down the voltage? There are several ways of explaining this. One way (perhaps not the most intuitive): One leg of the capacitor is attached (through a safety resistor) to the "hot" mains which oscillates at over 100 VAC. The other leg of the capacitor is connected to something which is always within a few volts of ground. If the input were DC, then the capacitor would completely block any current from flowing through it. But since the input is AC, the capacitor lets a small amount of current flow through it (proportional to its capacitance). Whenever we have a voltage across a component and current flowing through the component, we electronics people can't resist calculating the effective impedance using Ohm's law: $$Z = \frac{V}{I}$$ (Normally we say R = V/I, but we like to use Z when talking about the impedance of capacitors and inductors. It's tradition, OK?) 
If you replace that capacitor with an "equivalent resistor" with a real impedance R equal to the absolute impedance Z of that capacitor, "the same" (RMS AC) current would flow through that resistor as through your original capacitor, and the power supply would work about the same (see ST AN1476 for an example of such a "resistor dropper" power supply). Does the capacitor waste power as heat? An ideal capacitor never converts any power to heat -- all of the electrical energy that flows into an ideal capacitor eventually flows out of the capacitor as electrical energy. A real capacitor has small amounts of parasitic series resistance (ESR) and parasitic parallel resistance, so a small amount of the input power is converted to heat. But any real capacitor dissipates far less power (is far more efficient) than an "equivalent resistor" would dissipate. A real capacitor dissipates much less power than the safety resistors or a real diode bridge. If the zener were gone and the output was let to float around 50V ... If you can tweak the resistance of your load, or swap out the dropping cap for one with a different capacitance of your choice, you can force the output to float at close to whatever voltage you choose. But you will inevitably have some ripple. If the zener were gone and the output was let to float ... would it approach 100% efficiency? Good eye -- the zener is the part that wastes the most energy in this circuit. A linear regulator here would significantly improve the efficiency of this circuit. If you assume ideal capacitors (which is a good assumption) and ideal diodes (not such a good assumption), no power is lost in those components. In normal operation, relatively little power is lost in the safety protection resistors. Since there's nowhere else for the power to go, such an idealized circuit would give you 100% efficiency. But it would also have some ripple. 
You may be able to follow this no-zener circuit with a linear voltage regulator to eliminate that ripple and still get a net efficiency over 75%. The "law" that " a voltage regulator always has an efficiency of \$V_{out}/V_{in}\$ " only applies to linear DC to DC regulators. That law doesn't apply to this circuit, because this circuit has AC input, and so this circuit can have much better efficiency than that "law" predicts. EDIT: Dave Tweed points out that simply replacing the zener with a linear regulator actually makes this overall circuit less efficient. I find it counter-intuitive that deliberately wasting some power makes the system perform more efficiently. (Another circuit where adding a little resistance makes it perform better: Ripple current in a linear power supply transformer ). I wonder if there is some other way to improve the efficiency of this circuit, that is less complex than a 2-transistor switching regulator ? I wonder if further modifying the circuit by adding another capacitor across the AC legs of the bridge rectifier might result in something more efficient than the original zener circuit? (In other words, a capacitive divider circuit like this Falstad simulation ?)
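To make the "equivalent resistor" idea concrete: the magnitude of the dropper capacitor's impedance is |Z| = 1/(2πfC), and dividing the mains voltage by that impedance gives a rough upper bound on the current the dropper can supply. The component values below are hypothetical, chosen only for illustration.

```python
import math

def cap_impedance(c_farads, f_hz):
    """Magnitude of a capacitor's impedance: |Z| = 1/(2*pi*f*C)."""
    return 1.0 / (2 * math.pi * f_hz * c_farads)

# Hypothetical example: a 0.47 uF X2-rated dropper on 230 V / 50 Hz mains.
z = cap_impedance(0.47e-6, 50)
# Ignoring the load/zener voltage, which is small compared to mains:
i_rms = 230.0 / z
print(round(z))                # ~6773 ohms
print(round(i_rms * 1000, 1))  # ~34.0 mA available to the load
```

This also shows why the technique only suits small loads: scaling to even 1 A would need an impractically large (and expensive) mains-rated capacitor.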
{ "source": [ "https://electronics.stackexchange.com/questions/5572", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1225/" ] }
5,592
According to Wikipedia, processing power is strongly linked with Moore's law: http://en.wikipedia.org/wiki/Moore's_law The number of transistors that can be placed inexpensively on an integrated circuit has doubled approximately every two years. The trend has continued for more than half a century and is not expected to stop until 2015 or later. The capabilities of many digital electronic devices are strongly linked to Moore's law: processing speed, memory capacity, sensors and even the number and size of pixels in digital cameras. All of these are improving at (roughly) exponential rates as well. As someone who has some background in computer architecture, I don't understand why throwing more transistors into a CPU would boost its power since ultimately, instructions are roughly read/executed sequentially. Could anyone explain which part I'm missing?
A lot of things that give you more power just require more transistors to build them. Wider buses scale the transistor count up in almost all processor components. High speed caches add transistors according to cache size. If you lengthen a pipeline you need to add stages and more complex control units. If you add execution units to help mitigate a bottleneck in the pipeline, each of those requires more transistors, and then the controls to keep the execution units allocated add still more transistors. The thing is, in an electronic circuit, everything happens in parallel. In the software world, the default is for things to be sequential, and software designers go to great pains to get parallelism built into the software so that it can take advantage of the parallel nature of hardware. Parallelism just means more stuff happening at the same time, so roughly equates to speed; the more things that can be done in parallel, the faster you can get things done. The only real parallelism is what you get when you have more transistors on the job.
{ "source": [ "https://electronics.stackexchange.com/questions/5592", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1702/" ] }
5,621
If you overclock a microcontroller, it gets hot. If you overclock a microcontroller, it needs more voltage. In some abstract way it makes sense: it is doing more computation, so it needs more energy (and being less than perfect, some of that energy dissipates as heat). However, at the level of plain old Ohm's-law electricity and magnetism, what is going on? Why does the clock frequency have anything to do with power dissipation or voltage? As far as I know, the frequency of AC has nothing to do with its voltage or power, and a clock is just a superposition of a DC and a (square) AC. Frequency doesn't affect the DC. Is there some equation relating clock frequency and voltage, or clock frequency and power? I mean, does a high speed oscillator need more voltage or power than a low speed one?
The voltage required is affected by significantly more than clock speed, but you are correct: for higher speeds you will need higher voltages in general. Why does power consumption increase? This is a lot messier than a simple circuit, but you can think about it as being similar to an RC circuit. RC circuit equivalent At DC an RC circuit consumes no power. At a frequency of infinity, which is not attainable, but you can always solve this theoretically, the capacitor acts as a short and you are left with a resistor. This means you have a simple load. As frequency decreases the capacitor stores and discharges energy, so a smaller amount of power is dissipated overall. What is a microcontroller? Inside, it is made up of many, many MOSFETs in a configuration we call CMOS . If you try to change the value of the gate of a MOSFET you are just charging or discharging a capacitor. This is a concept I have a hard time explaining to students. The transistor does a lot, but to us it just looks like a capacitor from the gate. This means that in a model the CMOS gate will always look like a capacitive load. Wikipedia has an image of a CMOS inverter I will reference. The CMOS inverter has an output labeled Q. Inside a microcontroller your output will be driving other CMOS logic gates. When your input A changes from low to high, the capacitance on Q must be discharged through the transistor on the bottom. Every time you charge or discharge a capacitor you see power use. You can see this on wikipedia under power switching and leakage . Why does voltage have to go up? As your voltage increases it makes it easier to drive the capacitance to the threshold of your logic. I know this seems like a simplistic answer, but it is that simple. When I say it is easier to drive the capacitance I mean that it will be driven between the thresholds faster, as mazurnification put it: With increased supply drive capability of the MOS transistor also increases (bigger Vgs). 
That means that actual R from RC decreases and that is why gate is faster. In relation to power consumption, because of how small transistors are, there is a large leakage current through the gate capacitance. Mark had a bit to add about this: higher voltage results in higher leakage current. In high transistor count devices like a modern desktop CPU leakage current can account for the majority of power dissipation. As process size gets smaller and transistor counts rise, leakage current becomes more and more the critical power usage statistic.
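The switching-power argument above is usually summarized as P = α·C·V²·f (α is the fraction of gates toggling per cycle), which shows both why frequency raises power linearly and why supply voltage matters quadratically. The capacitance and frequency figures below are made up for illustration.

```python
def dynamic_power(c_farads, v_volts, f_hz, activity=1.0):
    """Dynamic CMOS switching power: P = alpha * C * V^2 * f."""
    return activity * c_farads * v_volts**2 * f_hz

# Illustrative (made-up) numbers: 1 nF of total switched capacitance
# toggling at a 20 MHz clock, compared at two supply voltages.
p_3v3 = dynamic_power(1e-9, 3.3, 20e6)
p_5v = dynamic_power(1e-9, 5.0, 20e6)
print(round(p_3v3, 4))         # 0.2178 (watts)
print(round(p_5v, 3))          # 0.5
print(round(p_5v / p_3v3, 2))  # 2.3 -- the V^2 penalty of the higher supply
```

Doubling the clock doubles P; raising the supply from 3.3 V to 5 V more than doubles it, which is why overclockers who also raise Vcc see heat climb so quickly.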
{ "source": [ "https://electronics.stackexchange.com/questions/5621", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1271/" ] }
5,666
I am currently using AA batteries in my projects but would like to look at coin cell batteries for my next project, due to their compactness. Apart from the physical appearance, how do coin cells differ from the chunkier AA batteries? (I am guessing the mAh would suffer, for example?)
Let me sum up the limitations of a CR2032: 10mA is about the max current you want to pull from a single cell; it is easy to put them in parallel, but a large amount of testing (more than 2000 batteries' worth) has confirmed this.* They can be purchased to have 400mAh; the less current you pull, the closer to this you will get, and pulling more than 1mA decreases it a decent bit.** Under a 1mA load they will decay all the way to 1.5V before they fail; they will be at 2.7V almost right away. You can measure an almost-full voltage on them with a multimeter when they are dead. This is solved by placing a load on them.*** If you are lazy, it is very easy to tell how much charge they have left by how much they make your tongue tingle. Your tongue acts as both the load and the meter. This is probably by far the easiest way to test them, although it does pull a decent bit of current. I think Thomas wrote a good answer; I just thought it might be helpful to give some details of the coin cells since it seems you have used AA quite a bit. * Wikipedia says up to 15mA pulsed , but we confirmed that up to 1mA shows a nearly consistent capacity. ** Wikipedia shows a standard that is a bit lower, but my company would always purchase 400mAh or 450mAh CR2032s. When you buy a "standard battery" you can expect 200mAh, it seems. *** People often measure batteries without a load. When someone tells you their project ran out of power early, ensure their original battery measurement was under load; it is a very easy mistake.
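A quick sketch of what those capacity figures mean for battery life: runtime in hours is roughly capacity in mAh divided by average draw in mA, and, as the answer notes, real runtime shrinks further at higher draws.

```python
def runtime_hours(capacity_mah, draw_ma):
    """Idealized runtime: capacity divided by average current draw."""
    return capacity_mah / draw_ma

# Assuming the 400 mAh figure quoted above and a steady 1 mA draw:
print(runtime_hours(400, 1.0))                 # 400.0 hours
print(round(runtime_hours(400, 1.0) / 24, 1))  # 16.7 days
# At 10 mA (the practical max) effective capacity will be noticeably
# lower than 400 mAh, so treat this as an optimistic upper bound:
print(runtime_hours(400, 10.0))                # 40.0 hours, at best
```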
{ "source": [ "https://electronics.stackexchange.com/questions/5666", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1333/" ] }
5,671
Is a function generator necessary for everyday lab use, or is it special purpose equipment? That is, does it have similar utility to an oscilloscope or multimeter -- would you use it regularly enough to justify its cost?
If you're working with digital systems and square waves/pulse trains only, then it's probably not necessary. However, for analog amplifier design (ex. audio), it's an imperative. If you haven't been stymied on a project because you lacked this tool, you probably don't need it. Spend your money on an oscilloscope and logic analyzer instead. Conversely, if you can't imagine why you would need a logic analyzer, you should spend your money on a function generator.
{ "source": [ "https://electronics.stackexchange.com/questions/5671", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1225/" ] }
5,759
When I measure a 9V DC wall adapter power supply with a multimeter it shows 11.8V. When connected to my board this voltage lowers to 10V. Is this normal? Should I be afraid of toasting my 1117CD-5.0 supplied board when using this adapter? Edit : This regulator accepts up to 12V inputs. So that will be no problem. A second "9V" adapter yields 15V to the multimeter. Will this one burn my circuit?
Yes, this can be normal. There are two kinds of wall-warts -- regulated and unregulated (the latter are usually cheaper). You have two of the unregulated ones. These provide their rated voltage (e.g. 9v) under the load specified on their label, but the no-load voltage can be much higher as you have discovered. It probably won't hurt to use the one outputting 15v, since under the load of the board it will be closer to its rated voltage. But frankly, I wouldn't chance it. Use the other one instead.
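As a rough sketch of why the loaded voltage matters with that 1117 regulator: a linear regulator burns off the input-output difference as heat, P = (Vin - Vout) * I. The 100 mA load current below is an assumed figure, and the absolute-maximum input voltage is whatever your exact part's datasheet says, not a number from this answer.

```python
def regulator_dissipation(v_in, v_out, i_amps):
    """Heat dissipated in a linear regulator: (Vin - Vout) * Iload."""
    return (v_in - v_out) * i_amps

# Assumed 100 mA load on a 5.0 V output 1117-style regulator:
print(round(regulator_dissipation(11.8, 5.0, 0.1), 2))  # 0.68 W at 11.8 V in
print(round(regulator_dissipation(10.0, 5.0, 0.1), 2))  # 0.5 W once loaded
# The 15 V adapter is the real worry: beyond the extra heat, it may
# exceed the regulator's rated maximum input -- check the datasheet.
```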
{ "source": [ "https://electronics.stackexchange.com/questions/5759", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1298/" ] }
5,830
Currently I run all my gadgets from batteries and don't use decoupling capacitors. Are they generally needed/useful when drawing energy from a battery?
In broad terms, you should always use them. It is simply something that cannot hurt you to do, but could cause serious problems to ignore. You have probably not seen any major problems with your batteries because they are placed relatively close to your chips and because they have an internal resistance that snubs higher frequency signals. This could still cause power concerns with higher frequency signals. If a microcontroller runs at 20MHz then you have 20e6 pulses of current pulled per second. This may not seem like a big issue, but when enough inputs change at once you may cause ground bounce or many similar problems that come with high inductance paths to ground. The wikipedia article has some background if it helps. Little extra on decoupling capacitor terminology The job of a decoupling capacitor is to "decouple" your device's power draw from the rest of the circuit. If a decoupling capacitor does its job you will only measure a DC power draw. They remove the AC wave. There are different terms for decoupling capacitors. The bulk capacitors act as large power sources that can supply power for periods of time; these are required for functionality. Without a bulk filter cap you will have time-dependent current draw as your chip pulls power on its cycle. Bypass capacitors are often of lower value and are designed to terminate higher frequencies. As frequency increases, a capacitor's impedance decreases, and a smaller value capacitor has a higher impedance at a given frequency. These small capacitors are the backbone of terminating higher frequency waves. Decade capacitors are another term for bypass caps, but the name implies more. If your bulk filter cap is .1uF then your decade caps will be .01uF, .001uF, and even .0001uF depending on what you are doing. Normally I only see 1 decade cap, but I have had to use 2 or 3 before.
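A rough way to size a bypass cap follows from Q = C·V: if the capacitor alone must supply a current pulse, the supply voltage sags by dV = I·dt/C. The current spike and capacitor value below are illustrative numbers, not taken from the answer.

```python
def droop(i_amps, dt_seconds, c_farads):
    """Voltage sag when a capacitor alone supplies a current pulse:
    dV = I * dt / C (from Q = C*V and Q = I*t)."""
    return i_amps * dt_seconds / c_farads

# Made-up but typical numbers: a 20 mA current spike lasting one
# 50 ns clock period at 20 MHz, supplied by a 0.1 uF bypass cap.
sag = droop(0.020, 50e-9, 0.1e-6)
print(round(sag * 1000, 1))  # 10.0 mV sag -- harmless
# Fed through long, inductive battery leads instead, the same spike
# would produce a much larger transient -- hence local bypassing.
```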
{ "source": [ "https://electronics.stackexchange.com/questions/5830", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1333/" ] }
6,151
I'm trying to repair an 800W power supply (see my previous question on this.) One thing that gets me is that the design has two schottky diode packages (in TO-220) in parallel. I was always told this was A Bad Idea, but since they are thermally coupled to the same heatsink, does it present a problem in this instance? I've also noticed the same for the input bridge rectifier, two are used in parallel.
The issue with putting diodes in parallel is that as they heat up, their resistance decreases. As a result, that diode ends up taking on more current than the other diode, causing it to heat up even more. As you can probably see, this cycle will cause thermal runaway, eventually burning the diode if you give it enough current. Now, the fact that you couple them to the same heatsink will reduce this effect some, but I still would not recommend it. There are far too many unknowns affecting this to ever trust it, especially in a commercial product. Now, for the case of this power supply you are looking at, it may very well be that they spent the time to get the diodes matched as closely as possible and let the heatsink keep them at about the same temperature. It may also be that they are running the diodes far under their capacity and they put the second one in parallel so that they aren't always running them near max capacity, but I find this unlikely.
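The runaway mechanism above can be illustrated with the ideal Shockley diode equation (a simplification; real power diodes also have series resistance, which helps sharing). Two paralleled diodes are clamped to the same forward voltage, so if one part's I-V curve is effectively shifted by 60 mV -- a plausible part-to-part or temperature-induced mismatch, since silicon Vf moves roughly -2 mV/°C -- it hogs nearly all the current. The saturation current below is a made-up illustrative value.

```python
import math

VT = 0.02585  # thermal voltage kT/q near room temperature, in volts

def diode_current(v_f, i_s=1e-12, n=1.0):
    """Ideal Shockley diode equation with an illustrative Is."""
    return i_s * (math.exp(v_f / (n * VT)) - 1.0)

# Same shared forward voltage, but one diode's curve shifted 60 mV:
i_hot = diode_current(0.60)
i_cold = diode_current(0.54)
print(round(i_hot / i_cold, 1))  # 10.2 -- a ~10:1 current split
# The hotter diode then heats further, shifting its curve more:
# that positive feedback is the thermal runaway described above.
```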
{ "source": [ "https://electronics.stackexchange.com/questions/6151", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1225/" ] }
6,379
For a fan, should air be sucked in, or should it blow air out? I'm talking about an enclosure mounted fan.
Airflow is the key. Any direction will do. Just keep in mind where the hot components inside your enclosure are. However, if you blow into the enclosure, you have the option of putting a dust filter on your fan. Whereas if you have your fan blowing out, air will enter your enclosure through all sorts of holes, and lots of dust may eventually accumulate.
{ "source": [ "https://electronics.stackexchange.com/questions/6379", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1225/" ] }
6,395
I realize many people might find this question a little awkward, but I feel it is important. Where I live, I can buy a radio (audio receiver) very cheaply, and I don't know to what extent they limit the frequency ranges they pass. I guess they use ICs for frequency demodulators, frequency clippers, etc. If the frequency demodulator doesn't work properly and passes all the frequencies that arrive through the air, could that be a problem for humans? Can somebody tell me which frequency ranges are in use, and which of them are hazardous to humans?
The short answer is that unless you are dealing with professional power levels in the several watts range, RF is very difficult to cause injury with. Long answer RF doesn't affect humans directly unless there is a tremendous amount of power. Effects are typically thermal: when a particular chemical bond is struck just-so, it will absorb a photon, moving it slightly. Enough heating will damage cells by denaturing or "cooking" proteins. Particular wavelengths (2.4 GHz) are well absorbed by water and fat, but absorption is still very diffuse so it would take a tremendous dose to cause enough heating in any one area to cause damage. The FCC safe exposure limit is 1.6 W absorbed per kilogram (as per one source), and 4 W/kg (in the following link) for the entire body. More : FCC RF Safety FAQ Specifically this question which has links to the Center for Devices and Radiological Health (CDRH) , Food and Drug Administration. the Environmental Protection Agency's Office of Radiation Programs OSHA guidelines and studies Atomic nuclei can also respond to RF, allowing for nuclear magnetic resonance spectroscopy (or MRIs), but this is a strictly nuclear effect and has no influence on chemical bonds. IR and light obviously can cause burns, but only at sufficient power. IR lasers can be particularly hazardous to eyes; as they are invisible they will not trigger a blink or aversion reflex, allowing a large, damaging dose to be absorbed before the victim notices. Higher energy photons -- UV, X-ray, and gamma -- are able to ionize atoms when they strike them, causing unexpected chemical reactions which can destroy, damage, or mutate cells. These are all forms of "radiation", but the layperson can't tell what the implications of non-ionizing versus ionizing radiation are, which causes all sorts of unfounded fears of these invisible phantoms that carry our cell phone calls and webpages.
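To put the quoted exposure limits in perspective, here is a back-of-envelope calculation. It is illustrative only; real SAR assessment depends on frequency, geometry, and localized absorption, and the 70 kg body mass and 100 mW transmitter power are assumed figures.

```python
def max_absorbed_watts(body_kg, limit_w_per_kg):
    """Total absorbed power at a given whole-body SAR limit."""
    return body_kg * limit_w_per_kg

# Whole-body ceiling at the 4 W/kg figure quoted above, 70 kg adult:
whole_body = max_absorbed_watts(70, 4.0)
print(whole_body)  # 280.0 watts
# Even if *all* of a 100 mW Wi-Fi transmitter's output were absorbed
# by one person, the margin below that ceiling is enormous:
print(round(whole_body / 0.1))  # 2800x margin
```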
{ "source": [ "https://electronics.stackexchange.com/questions/6395", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1731/" ] }
6,436
I'm a software guy who wants to get into the hardware side of things so I can enjoy the same creativity from software design in the physical world. I've found plenty of posts here regarding how to get "up and running" in the electronics world, but I would like to know if there are any gotchas when embarking on this journey if your goal is to potentially have a device manufactured in the future. (probably robotics-centric solutions, boards that would control servos, sensors, etc). I would like to make sure that wherever I aim my focus, I won't be "learning myself into a corner", so to speak. I have read good things about the flexibility and easy to learn nature of Arduino devices, but have trouble finding anything about getting them manufactured. Are there manufacturers that can produce an arduino-based solution? What kind of production volume is available for something using Arduino? What realms/devices of programmable electronics are best for having manufactured? Any tips or info regarding learning and designing with manufacturing in mind? Any general tips for a newbie?
Just to let you know what lies ahead.... If you want to go from making a hand-built breadboard or prototype to actual PCBs, you have a lot of hours and anywhere from several hundred to a few thousand dollars of cost in front of you, depending on how much you are willing to do yourself. Schematic capture and PCB layout First of all you need to capture your design using some sort of schematic capture program, and then design a PCB. One of the more popular programs is EAGLE , which I use. They have an EAGLE Light version ($49), but it can only be used for schematics with one sheet (any size), two signal layers, and a 100x80mm (approx 4"x3") routing area. For any serious work, you need at least the EAGLE Standard version, which costs $747. There are probably other less costly (even free) alternatives. There are lots of others that cost thousands or tens of thousands of dollars. In any case you will have to spend considerable time learning how to use the program. Or you can pay someone like me to do it for you ($$/hour). PCB Fabrication Getting boards made is the next step, done by a PCB fabricator . The problem here is the NRE (non-recurring engineering) costs. Some board houses treat this as a separate figure, and others build it into their per-board quote. In any case, it is almost never economical to have just a few boards made. You might spend $100 for two boards, and $500 for 25. You need to have really large quantities to get down to just a few dollars per board. The gotcha is, if you make 25 boards, populate just a couple of them for testing and find they don't work (and there is not an easy fix -- e.g. because you laid out a connector backwards), you might end up throwing the other 23 blank boards away and you would have been better off just getting two. I have stacks of blank PCBs as evidence of this phenomenon. PCB Assembly Unless you are willing to build the boards by hand, you will need to have them assembled. Surface mount packages are difficult to deal with. 
If the board has BGA or QFN packages, you probably won't be able to build them yourself unless you have your own reflow oven. Getting your first two boards built by an assembly house might cost $500, whereas getting 25 built might cost $1200. (Once again, the problem here is the NRE costs.) And someone else has already discussed the problem of getting parts. Make sure you use parts that are readily available -- if both DigiKey and Mouser have hundreds of the part available you should be okay. If instead they have it in their catalog, but it is currently out of stock, try to find something else. If you need some special parts that aren't carried by DigiKey or Mouser, make sure you have a reliable source before incorporating them in your product. (Note: the more unusual parts you use, the more likely you will have to add parts manually to your PCB parts library.) Custom Cases Do you want to put your board into a case? If you need to have a custom case designed, that will be a couple thou for the designer using a program like SolidWorks (I don't do that, but can recommend someone who can). If you are going to make just a few cases to begin with, you will probably need to go with rapid prototyping, such as Selective Laser Sintering (SLS). Figure at least $100 per case in small quantities. To get down to a few dollars per case, you need to have a custom mold made. NRE time again! Plan on spending $10,000 or more for the mold. And I won't even start on EMC or EMI testing, since I don't know if it applies to your product. As you can see from all of this, until you get into production, the cost of the electronic parts is usually not the biggest item on a per-board basis. Doing your own assembly for small volumes will save you a lot of money. So it is important to design with that in mind -- no impossible-to-solder-by-hand parts. 
To get really low prices for high-volume, generally you need to go offshore -- China etc. But I would avoid doing so in the beginning.
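The NRE arithmetic that runs through this answer can be sketched numerically. The dollar figures below are hypothetical, chosen in the spirit of the quotes above, not real vendor prices.

```python
def per_unit_cost(nre, unit_cost, quantity):
    """Total run cost and amortized per-board cost:
    the fixed NRE is spread over the whole run."""
    total = nre + unit_cost * quantity
    return total, total / quantity

# Hypothetical: $400 NRE, $4/board marginal cost.
total_2, each_2 = per_unit_cost(400, 4, 2)
total_100, each_100 = per_unit_cost(400, 4, 100)
print(each_2)    # 204.0 -- why a two-board run costs ~$100+ apiece
print(each_100)  # 8.0   -- volume dilutes the NRE
```

The same shape applies to the $10,000 injection mold: spread over ten cases it dominates the cost, spread over ten thousand it adds a dollar each.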
{ "source": [ "https://electronics.stackexchange.com/questions/6436", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1886/" ] }
6,698
How do I tell which wire is which if I have a bare copper wire plus red- and green-coded wires? Is the unshielded copper wire ground?
Red is for Right. Blue (or green) is for Left. Copper is for ground (I remember this with the mnemonic Red Right bLue Left Copper Common). All 3 are coated in a lacquer you need to burn or scrape off before you solder. With standard headphone plugs, with the plug facing away from you, the right pin is right, the center pin is ground, and the left pin is left. 1. Common (or "ground") 2. Right 3. Left 4. Insulating ring
{ "source": [ "https://electronics.stackexchange.com/questions/6698", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1944/" ] }
6,760
In all the computer power supplies and other power supplies I've taken apart, I've noticed they are fully isolated from the mains. Galvanic isolation through transformers, and often optical isolation for feedback. There is usually a very visible gap in the traces between the primary and secondary sides, at least 8mm across. Why is it important that these supplies be isolated?
Because the mains supply is very unpredictable, and can do all sorts of things outside its nominal specification, which might damage components or at least break the nominal design assumptions. A non-isolated design also has all its voltages referenced to one of the mains conductors, which might or might not have a useful/safe relationship to other potentials in your environment (like earth/ground, for example). If the only stuff on the low-voltage side is inaccessible electronics, then non-isolated supplies are fine - they tend to much be cheaper/simpler than isolated supplies, and lots of household equipment uses them. Even things like televisions used to work like this, if you go right back to before the time when they had external video/audio connections. The antenna connection was the only external socket, and that was capacitor-isolated. If a human being or 3rd party piece of equipment needs to interconnect with the low-voltage side of your design, then an isolated supply both gives you a clear barrier across which dangerous voltages won't pass, even in the case of component failure, and it means your circuit is now 'floating' relative to the mains. In turn, that means you can arrange for all the electronics to operate near ground potential, with all your interconnected equipment having at least roughly the same voltage reference to work from.
{ "source": [ "https://electronics.stackexchange.com/questions/6760", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1225/" ] }
6,789
Why are power supplies almost always made using through hole components? Every computer PSU I've taken apart uses through hole components, though occasionally (not in all cases) surface mount components are found on the bottom. Don't these have to be hand assembled? (before reflow or wave soldering) If so, why are they still doing this, even though labour costs are low in China, it still must cost less for a machine to pick and place SMT stuff... or am I missing something?
Because PSUs use many big lumpy parts that are not SMDable and/or need good mechanical fixing. Also, for minimum cost they like to use single-layer PCBs - TH is a little more amenable to this as parts act as jumpers over tracks. TH parts can be machine-inserted - e.g. http://www.youtube.com/watch?v=eOQ3pZkKX24 (30k parts/hour!)
{ "source": [ "https://electronics.stackexchange.com/questions/6789", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1225/" ] }
6,846
How much will reflected signals matter in audio applications (say between an amp and a speaker, or a pre-amp and an amp)? Mostly with regards to fidelity and not power transfer. What are the different options of matching the impedance and their pro's/cons? This can be on the output terminal, input terminal, or modifying the cable?
Impedance matching is not used in modern audio electronics. A mic output might be around 600 Ω, while mic preamp inputs are 1 kΩ or more. A line output will be something like 100 Ω, while a line input is more like 10 kΩ. A loudspeaker amplifier will be less than 0.5 Ω, while loudspeakers are more like 4 Ω. A guitar output might be 100 kΩ, while a guitar amp input is at least 1 MΩ. In all these cases, the load impedance is significantly larger than the source; they are not matched. This configuration maximizes fidelity . Impedance matching was used in the telephone systems that audio systems evolved from, and was (sometimes?) used in vacuum tube amplifiers, but even then, it's a trade-off between maximum power and maximum fidelity . Transmission line effects don't apply. With a wavelength of at least 10 km (for 20 kHz), I think the most effect you'd ever see from reflection is some comb filtering (HF roll-off) with lines a few km long? But that's totally unrealistic. Bill Whitlock : Audio cables are NOT transmission lines. Marketing hype for exotic cables often invokes classic transmission line theory and implies that nano-second response is somehow important. Real physics reminds us that audio cables do not begin to exhibit transmission-line effects in the engineering sense until they reach about 4,000 feet in physical length. Maximum power theorem doesn't apply, since: low-level signals are carried by voltage, not power high-level amplifier output impedances can be made small compared to the speakers they drive. Maximum power theorem is for matching loads to sources, not matching sources to loads . Rane Corporation : Impedance matching went out with vacuum tubes, Edsels and beehive hairdos. Modern transistor and op-amp stages do not require impedance matching. If done, impedance matching degrades audio performance . For why impedance matching is not necessary (and, in fact, hurtful) in pro audio applications, see William B. 
Snow, "Impedance -- Matched or Optimum" [ written in 1957! ], Sound Reinforcement: An Anthology , edited by David L. Klepper (Audio Engineering Society, NY, 1978, pp. G-9 - G-13), and the RaneNote Unity Gain and Impedance Matching: Strange Bedfellows . Shure Brothers : For audio circuits, is it important to match impedance? Not any more. In the early part of the 20th century, it was important to match impedance. Bell Laboratories found that to achieve maximum power transfer in long distance telephone circuits, the impedances of different devices should be matched. Impedance matching reduced the number of vacuum tube amplifiers needed, which were expensive, bulky, and heat producing. In 1948, Bell Laboratories invented the transistor — a cheap, small, efficient amplifier. The transistor utilizes maximum voltage transfer more efficiently than maximum power transfer. For maximum voltage transfer, the destination device (called the "load") should have an impedance of at least ten times that of the sending device (called the "source"). This is known as BRIDGING. Bridging is the most common circuit configuration when connecting audio devices. With modern audio circuits, matching impedances can actually degrade audio performance. It's a common misconception. HyperPhysics used to show an 8 ohm amplifier output , but they've improved the page since. Electronics Design showed an 8 ohm amplifier output for a long time , but they've finally fixed it after a bunch of complaints in the comments section: Therefore, unless you're the telephone company with mile-long cables, source and load impedances do not need to be matched ... to 600 ohms or any other impedance. --- Bill Whitlock, president & chief engineer of Jensen Transformers, Inc. and AES Life Fellow.
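The "load much larger than source" rule quoted above can be checked with a few lines of arithmetic. This is only a sketch that models source and load as simple resistances; the 600 Ω and 10 kΩ figures echo the examples in the answer:

```python
import math

def load_loss_db(z_source, z_load):
    """Loss (dB) of the voltage divider formed by source and load impedance."""
    return 20 * math.log10(z_load / (z_source + z_load))

# "Matched": 600 ohm source into 600 ohm load -> half the voltage (-6 dB)
matched = load_loss_db(600, 600)

# "Bridged": 100 ohm line output into a 10 k line input -> barely any loss
bridged = load_loss_db(100, 10_000)

print(f"matched: {matched:.2f} dB, bridged: {bridged:.2f} dB")
```

Matching costs you 6 dB of signal; bridging costs well under 0.1 dB, which is one reason modern audio gear bridges.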
{ "source": [ "https://electronics.stackexchange.com/questions/6846", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1297/" ] }
6,959
My oscilloscope has 100 MHz -3dB bandwidth. -3dB is 0.707 units ( \$\sqrt{2} / 2\$ ). What does this mean, why 70.7% attenuation? Is there any particular reason for this attenuation level?
Voltage vs Power when using dB The -3dB point is also known as the "half power" point. In voltage it may not make tons of sense as to why we use ( \$\sqrt{2}/2\$ ), but let's look at an example of what it means in terms of power. First off, \$P=V^{2}/R\$ , but let's assume R is a constant 1 \$\Omega\$ . Because of the constant 1 \$\Omega\$ , we can remove it from the equation altogether. Let's say you have a signal at 6 V; its power would then be \$(6 \text{ V})^2 = 36 \text{ W}\$ . Now I take the -3dB point, \$6\text{ V} \cdot \left( \frac{\sqrt{2}}{2} \right) = 4.2426\text{ V}\$ . Now let's get the power at the -3dB point: \$(4.2426 \text{ V})^2=18 \text{ W}\$ . So originally we had 36 W, now we have 18 W (which of course is half of 36 W). Application of -3dB in Filters The -3dB point is very commonly used with filters of all types (low pass, band pass, high pass...). It is just saying the filter cuts off half of the power at that frequency. The rate at which it drops off depends on the order of the system you are using. Higher order can get closer and closer to a "brick wall" filter. A brick wall filter is one where just before the cutoff frequency you are at 0 dB (no change to your signal) and just after you are at -∞ dB (no signal passes through). Why filter the input to an Oscope? Well, many reasons. All devices (analog or digital) have to do something with the signal. You can go as simple as a voltage follower up to something more complex like showing the signal on a screen or turning the signal into audio. All of the devices required to convert your signal into something that is usable have attributes about them that are frequency dependent. One simple example of this is an opamp and its GBWP. So, on an O-scope they will add a low pass filter so that none of the internal devices have to deal with frequencies above what they can handle.
When an oscope says its -3dB point is 100 MHz, they are saying they have placed a low pass filter on its input that has a cut-off frequency (-3dB point) of 100 MHz.
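The half-power arithmetic above can be verified numerically; this sketch assumes the same 1 Ω reference load as the worked example:

```python
import math

v = 6.0                       # original signal amplitude, volts
v_3db = v * math.sqrt(2) / 2  # amplitude at the -3 dB point (~4.2426 V)

p_full = v ** 2               # power into the 1-ohm reference: 36 W
p_3db = v_3db ** 2            # ~18 W, i.e. half the power

print(p_full, p_3db)
```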
{ "source": [ "https://electronics.stackexchange.com/questions/6959", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1225/" ] }
7,003
I have finally built up a lab to design electronics in. I have quite a few designs I would like to test. I have tried the printer toner/iron technique a few times but have found that I cannot create small pitch sizes as they tear off while removing the printer paper. A few people have mentioned that this is due to using a Samsung laserjet versus a HP. I am wondering what methods you use to develop PCBs for one-offs in your lab or at home (like me). I am trying to fast track a move to SMT/SMD components and would like some tips from seasoned experts on the best PCB creation methods to test board concepts before sending them off to a PCB MFG. I would like something that balances cost, time, and beauty of the finished product geared towards a hobbyist (at this point) and geared towards SMT/SMD components. Please include pics/documentation of your preferred method. Thank you in advance for your post.
For one-offs or prototypes I use: Press-n-Peel transfer film with a laser printer (the blue one) Steel wool and detergent to clean the PCB blank, then a short etch in ammonium persulphate: that gives a very clean surface, important for a good transfer from the film A laminator to transfer the pattern to the PCB; I modified the laminator to raise its operating temperature a bit, and the PCB is a bit thick for the laminator but it works Ammonium persulphate made with hot water in an ice-cream container, and that sits in a bath of hot water (a larger ice-cream container) This gives good results down to 10 mil trace widths; could probably go finer but haven't needed to yet. For double-sided boards I tape the two layers of Press-n-Peel film to two scraps of PCB at the edges so that I can get the two layers well aligned, then put the PCB blank in and feed it through the laminator. Here are some pictures to illustrate: The bottom (left) and top (right) of a simple double-sided board (the top one is printed out mirrored so they overlay when its turned over). Normally I would print onto the blue Press-n-Peel film, just using paper here for illustration. With one side taped to the scrap PCB (left side) and the printed sides facing each other, hold them up to the light and align the other one so that all the holes and the board outline line up. Here they are both stuck to the PCB scrap. You can now put the clean blank PCB between the two (probably best to tape it to both sides to avoid any movement) and run it through the laminator (or iron it) to transfer the toner onto the PCB. You can tape the two pieces of film or paper together without using the scrap of PCB, but when you put the blank PCB between them you can get some relative movement as they flex around the thick PCB. With the scrap piece the same thickness as the blank PCB they stay in the right place. A bench drill is good for any drilling. 
I use drills down to 0.5 mm diameter but with 3 mm shanks so they are easily held in the drill chuck. For through holes I solder thin copper wire to the pads on either side. The wire comes from a multi-core flexible cable; individual strands are about 0.2 mm (8 mil) in diameter. This takes some time! And to solder I place solder paste with a fine-tipped syringe, place parts with fine tweezers then reflow in an electric frying pan. A few more pictures: Syringing solder paste onto SMD pads. Placing component with tweezers A finished board - the PCB was professionally made but I assembled components and soldered as described here. These are 0402-size resistors and capacitors (quite small, amazingly easy to lose), an accelerometer in a QFN-16 package (4x4 mm) and a memory chip in an 8 pin leadless package, similar size to a SOIC-8. (This is part of a small accelerometer data logger, see vastmotion.com.au ). Good luck!
{ "source": [ "https://electronics.stackexchange.com/questions/7003", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1575/" ] }
7,042
I have browsed several ASIC manufacturer's webs, but I haven't found an actual number. I assume there would be a fixed cost associated with creating masks and such and then there will be a cost per unit. Note: that I don't actually want to have an ASIC made, I'm just curious.
I looked into ASIC's a while ago and here's what I found: Everybody has different definitions for the word "ASIC". There are (very roughly) three categories: FPGA Conversions, "normal" ASIC, and "full custom". As expected, these are in order of increasing price and increasing performance. Before describing what these are, let me tell you how a chip is made... A chip has anywhere from 4 to 12+ "layers". The bottom 3 or 4 layers contains the transistors and some basic interconnectivity. The upper layers are almost entirely used to connect things together. "Masks" are kind-of like the transparencies used in the photo-etching of a PCB, but there is one mask per IC layer. When it comes to making an ASIC, the cost of the masks is HUGE . It is not uncommon at all for a set of masks (8 layers, 35 to 50 nm) to run US$1 Million! So it is no great surprise to know that most of the "cheaper" ASIC suppliers try very hard to keep the costs of the masks down. FPGA Conversions: There are companies that specialize in FPGA to ASIC conversions. What they do is have a somewhat standard or fixed "base" which is then customized. Essentially the first 4 or 5 layers of their chip is the same for all of their customers. It contains some logic that is similar to common FPGA's. Your "customized" version will have some additional layers on top of it for routing. Essentially you're using their logic, but connecting it up in a way that works for you. Performance of these chips is maybe 30% faster than the FPGA you started with. Back in "the day", this would also be called a "sea of gates" or "gate array" chip. Pros: Low NRE (US$35k is about the lowest). Low minimum quantities (10k units/year). Cons: High per-chip costs-- maybe 50% the cost of an FPGA. Low performance, relative to the other solutions. "Normal" ASIC: In this solution, you are designing things down to the gate level. You take your VHDL/Verilog and compile it. 
The design for the individual gates is taken from a library of gates & devices that has been approved by the chip manufacturer (so they know it works with their process). You pay for all the masks, etc. Pros: This is what most of the chips in the world are. Performance can be very good. Per-chip cost is low. Cons: NRE for this starts at US$0.5 million and quickly goes up from there. Design verification is super important, since a simple screw-up will cost a lot of money. NRE + minimum order qty is usually around US$1 million. Full Custom: This is similar to a Normal ASIC, except that you have the flexibility to design down to the transistor level (or below). If you need to do analog design, super low power, super high performance, or anything that can't be done in a Normal ASIC, then this is the thing for you. Pros: Performance is great. Cons: This requires a very specialized set of talents to do properly. Same cons as Normal ASIC, only more so. Odds of screwing something up are much higher. How you go about this really depends on how much of the work you want to take on. It could be as "simple" as giving the design files to a company like TSMC or UMC and they give you back the bare wafers. Then you have to test them, cut them apart, package them, probably re-test, and finally label them. Of course there are other companies that will do most of that work for you, so all you get back are the tested chips ready to be put on a PCB. If you have gotten to this point and it still seems like an ASIC is what you want to do then the next step would be to start Googling for companies and talking with them. All of those companies are slightly different, so it makes sense to talk with as many of them as you can put up with. They should also be able to tell you what the next step is beyond talking with them.
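A quick way to weigh the NRE figures above is a break-even calculation against a programmable part. This is only a sketch; the dollar amounts are hypothetical round numbers, not quotes from any foundry:

```python
def break_even_units(nre, asic_unit_cost, fpga_unit_cost):
    """How many units before NRE + cheaper ASIC beats buying FPGAs."""
    saving_per_unit = fpga_unit_cost - asic_unit_cost
    if saving_per_unit <= 0:
        return None  # the ASIC never pays for itself on unit cost alone
    return nre / saving_per_unit

# Hypothetical: $500k NRE, $5/chip ASIC replacing a $25 FPGA
print(break_even_units(500_000, 5, 25))  # 25000.0 units
```

Numbers like these are why ASICs only make sense at real volume.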
{ "source": [ "https://electronics.stackexchange.com/questions/7042", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1850/" ] }
7,179
I've heard that leaving a pin floating on an MCU when configured as an input (vs. the default output) is bad for the pin, and can eventually cause it to fail prematurely. Is this true? N.B. in my instance the pin is floating somewhere between 0.3V and 1.3V due to an incoming video signal. This sometimes falls in the no man's zone of 0.8V - 2.0V when operating from 3.3V.
Problem: Leaving a pin configured as an input floating is dangerous simply because you cannot be sure of the state of the pin. Like you mentioned, because of your circuit, your pin was sometimes LOW, sometimes in no-man's land, and could sometimes go HIGH. Result: Essentially, the floating input WILL cause erratic chip operation or unpredictable behaviour. I have noticed some chips froze by simply moving my hand closer to the board (I wasn't wearing an ESD wrist band) and some would have different startup behaviour each time the board powered up. Why: This happens simply because if there is external noise on that pin, the pin will oscillate, which drains power, since CMOS logic gates draw power when they switch states. Solution: Most micros nowadays have internal pullups as well, so those can prevent this behaviour from occurring. Another option would be to configure the pin as an output so it does not affect the internals.
{ "source": [ "https://electronics.stackexchange.com/questions/7179", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1225/" ] }
7,423
Can some one explain this terminology please I think I understand it but not completely sure. I think pull down is where you place a resistor between +V and the other component and pull up is where you place the resistor between 0v and the component. If I am completely wrong then let me know!
It's the other way around. Pull up is where you place a resistor between a signal and +V, pull down is pulling it to ground. (from http://roguescience.org/wordpress/?page_id=11 ) Here, you can see that when the switch is open, in the pullup scenario the input pin will read high, but for pull down it will read low.
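The two configurations can be captured in a tiny model, assuming an ideal switch, no pin leakage, and a hypothetical helper name chosen just for illustration:

```python
# Idealized model of a switch + resistor input: with a pull-up the
# resistor goes to +V and the switch to GND; with a pull-down it is
# the opposite. No leakage or contact resistance is modeled.

def read_pin(pull, switch_closed):
    """Return the logic level seen at the input pin.

    pull: 'up' (resistor to +V) or 'down' (resistor to GND).
    """
    if switch_closed:
        # the closed switch shorts the pin to the opposite rail
        return 'low' if pull == 'up' else 'high'
    # open switch: only the resistor defines the level
    return 'high' if pull == 'up' else 'low'

print(read_pin('up', False), read_pin('down', False))  # high low
```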
{ "source": [ "https://electronics.stackexchange.com/questions/7423", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1623/" ] }
7,463
On the datasheet it doesn't mention an input voltage. Must the input voltage be higher than the output voltage or does it increase to 12 V if I input 8 V for example? If so, what is the current like that it outputs?
A 7812 is a linear regulator, and does not step up the input voltage if the input is below the output (for that you need a DC-DC boost regulator). The input voltage must be above the output. All linear regulators have a minimum dropout voltage, or difference. The 7812 is not an LDO (low-dropout regulator), as the typical dropout voltage is around 2 volts. (An LDO might have a dropout voltage of 0.7 volts). On page 6 of the datasheet, the maximum dropout voltage Vd is listed as 2.5 volts, meaning you need a minimum of 14.5 volts input to the device to guarantee an output of 12 volts.
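The arithmetic is simply output voltage plus dropout; a minimal sketch using the 2.5 V worst-case figure quoted above:

```python
def min_input_voltage(v_out, v_dropout):
    """Lowest input that still guarantees regulation."""
    return v_out + v_dropout

# 7812: 12 V output, 2.5 V worst-case dropout (datasheet, page 6)
print(min_input_voltage(12.0, 2.5))  # 14.5
```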
{ "source": [ "https://electronics.stackexchange.com/questions/7463", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1623/" ] }
7,624
We had a very short FPGA/Verilog course at university (5 years ago), and we always used clocks everywhere. I am now starting out with FPGAs again as a hobby, and I can't help but wonder about those clocks. Are they absolutely required, or can an FPGA-based design be completely asynchronous? Can one build a complex bunch of logic and have stuff ripple through it as fast as it can? I realise that there are a whole lot of gotchas with this, like knowing when the signal has propagated through all parts of the circuit and the output has stabilised. That's beside the point. It's not that I want to actually build a design that's entirely asynchronous, but just to improve my understanding of the capabilities. To my beginner eye, it appears that the only construct that absolutely requires a clock is a reg , and my understanding is that a typical FPGA (say, a Cyclone II) will have its flip-flops pre-wired to specific clock signals. Is this correct? Are there any other implicit clocks like this and can they typically be manually driven by the design?
A short answer would be: yes; a longer answer would be: it is not worth your time. An FPGA itself can run a completely asynchronous design no problem. The problem is the result you get, since timing through any FPGA is not very predictable. The bigger problem is the fact that your timing and resultant design will almost definitely vary between different place and route sessions. You can put in constraints on individual asynchronous paths making sure that they do not take too long, but I'm not quite sure that you can specify a minimum delay. In the end it means that your design will be unpredictable and potentially completely variable with even a slight design change. You'd have to look through the entire timing report every time you change anything at all just to make sure that it would still work. On the other hand, if the design is synchronous, you just look for a pass or fail at the end of place and route (assuming your constraints are set up properly, which doesn't take long at all). In practice people aim for completely synchronous designs but if you need to simply buffer or invert a signal, you don't need to go through a flip flop as long as you constrain it properly. Hope this clears it up a bit.
{ "source": [ "https://electronics.stackexchange.com/questions/7624", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1951/" ] }
7,664
Convention? Easier to implement? Another reason? Is there a reason things like MCLR or RESET on microcontrollers are active-low, that is, you have to pull them down to reset the IC, and pull them up to "run" the IC. I'm just curious because this causes me some problems. If it were active high, I could avoid the capacitor on MCLR required in some instances and deal with just a pull-down resistor. It seems only to add to complexity.
Look at what happens during power-up: Vcc rises to a point where it's high enough to make everything work properly. However, that point isn't clearly defined and may vary from device to device. It makes sense not to use this voltage to reset the controller. It's easy, however, to keep a level low regardless of Vcc. After all, Reset is already active the instant you switch power on, since at that moment everything is at a low level. edit The graph below illustrates how the output voltage of the reset controller (in this case an MC34064 ) remains low until Vcc is high enough to have the complete microcontroller stable.
{ "source": [ "https://electronics.stackexchange.com/questions/7664", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1225/" ] }
7,709
A lot of times in circuits I see a resistor placed in series in a signal line and sometimes even in series with an MCU's VDD line. Is the intention of this to smooth out noise in the line? How is this different from using a small cap, like a .1µF to do the same thing?
Two common reasons are signal integrity and current limiting in lazy level conversion. For signal integrity, any mismatch in impedance of the transmission line formed by a pcb trace and attached components can cause reflections of signal transitions. If these are allowed to bounce back and forth along the trace reflecting off the mismatches at the end for many cycles until they die out, the signals "ring" and may be misinterpreted either by level or as additional edge transitions. Typically an output pin has a lower impedance than the trace and an input pin a higher impedance. If you put a series resistor of value matching the transmission line impedance on the output pin, this will instantaneously form a voltage divider and the voltage of the wavefront traveling down the line will be half the output voltage. At the receiving end, the higher impedance of the input essentially looks like an open circuit, which will produce an in-phase reflection doubling the instantaneous voltage back to the original. But if this reflection is allowed to reach back to the low-impedance output of the driver it would reflect out of phase and constructively interfere, subtracting again and producing ringing. Instead it is absorbed by the series resistor at the driver which is selected to match the line impedance. Such source termination works pretty well in point-to-point connections, but not so well in multipoint ones. Current limiting in lazy level translation is another common reason. CMOS IC technologies of different generations have different optimal operating voltages, and may have damage limits set by the tiny physical size of the transistors. Additionally, they cannot natively tolerate having an input at a higher voltage than their supply. So most chips are built with tiny diodes from the inputs to the supply to protect against overvoltage. 
If driving a 3.3 V part from a 5 V one (or more likely today, driving a 1.2 V or 1.8 V one from a 3.3 V source) it's tempting to just rely on those diodes to clamp the signal voltage to a safe range. However, they often cannot handle all the current that can potentially be sourced by the higher voltage output, so a series resistor is used to limit the current through the diode.
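The series resistor for this lazy level translation can be estimated from the voltage excess over the clamp level. This is a sketch assuming a ~0.6 V diode drop and the illustrative 5 V / 3.3 V rails mentioned above:

```python
def clamp_current(v_drive, v_supply, r_series, v_diode=0.6):
    """Current (A) through the input protection diode, 0 if not clamping."""
    excess = v_drive - (v_supply + v_diode)
    return max(excess, 0.0) / r_series

# 5 V output driving a 3.3 V input through 1 kohm -> about 1.1 mA,
# comfortably below a typical few-mA clamp-diode rating
print(clamp_current(5.0, 3.3, 1000))
```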
{ "source": [ "https://electronics.stackexchange.com/questions/7709", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1235/" ] }
7,901
My hardware team is planning to use an Atmel AVR 8-bit microcontroller for a future project. So far as I know, it must be programmed in C. I have found a JVM for AVR, though it is more limited than the native C libraries from Atmel. Can you suggest me an 8-bit microcontroller which supports Java? PS. I don't know C and I am inexperienced in microprocessor programming.
If you're inexperienced in the microprocessor/microcontroller programming field, you should probably learn C first, so that you can understand when and why Java is a poor choice for most microcontroller projects. Did you read the restrictions on the JVM you linked? It includes the following problems: As little as 512 bytes of program memory (not KB, and definitely not MB) As little as 768 bytes of RAM (where your variables go. You're limited to 768 characters of strings by this restriction.) About 20k Java opcodes per second on an 8 MHz AVR. Only includes java.lang.Object, java.lang.System, java.io.PrintStream, java.lang.StringBuffer, a JVM control class, and a native IO class. You will not be able to do an import java.util.*; and get any classes not in this list. If you're not familiar with what these restrictions mean, make sure that you have a plan B if it turns out that you can't actually do the project with Java due to the space and speed restrictions. If you still want to go with Java, perhaps because you expect the device to be programmed by a lot of people who only know Java, I'd strongly suggest getting bigger hardware, likely something that runs embedded Linux. See this page from Oracle for some specs to shoot for to run the embedded JVM; in the FAQ of their discussion they recommend a minimum of 32MB of RAM and 32MB of Flash. That's about 32,000 times the RAM and 1,000 times the Flash of the AVR you're looking at. Oracle's Java Embedded Intro page goes into more detail about the restrictions of the JVM. Their tone of voice is, as you might guess, a good deal more Java-friendly than mine. Be aware that this kind of hardware is much more difficult to design than an 8-bit AVR. I'm a computer engineering student with a computer science minor.
My university's CS department has drunk the Java Kool-aid, so a lot of students in the engineering program come in knowing only Java (which is a sad state of affairs for a programmer, at least learn some Python or C++ if you don't want to learn C...), so one of my professors published a C Cheat Sheet ( Wayback machine link ) for students with a year of Java experience. It's only 75 pages; I suggest you read or skim it before making a decision. In my opinion, C is the most efficient, durable, and professional language in which to develop an embedded project. Another alternative to consider is the Arduino framework. It uses a stripped down version of the Wiring language, which is like C++ without objects or headers. It can run on many AVR chips; it's definitely not restricted to their hardware. It will give you an easier learning curve than just jumping straight into C. In conclusion: (xkcd comic; alt text: "Took me five tries to find the right one, but I managed to salvage our night out--if not the boat--in the end.")
{ "source": [ "https://electronics.stackexchange.com/questions/7901", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2296/" ] }
7,913
I've always wondered this: every single modern PCB is routed at 45 degree angle increments. Why does the industry prefer this so much? Doesn't any-angle routing offer more flexibility? One plausible theory would be that the existing tools only support 45 degree increments and that there isn't much pressure to move away from this. But having just researched this topic on google, I stumbled across TopoR - Topological Router - which does away with the 45 degree increments, and according to their marketing materials it does a considerably better job than the 45-degree-limited competitors. What gives? What would it take for you personally to start routing arbitrary angles? Is it all about support in your favourite software, or are there more fundamental reasons? Example of non-45-degree routing: P.S. I also wondered the same about component placement, but it turns out that many pick & place machines are designed such that they can't place at arbitrary angles - which seems fair enough.
Fundamentally, it boils down to the fact that the software is way easier to design with only 45° angles. Modern autorouters are getting better, but most of the PCB tools available have roots that go back to the DOS days, and therefore there is an enormous amount of legacy pressure to not completely redesign the PCB layout interface. Furthermore, many modern EDA packages let you "push" groups of traces, with the autorouter stepping in to allow one trace to force other traces to move, even during manual routing. This is also much harder to implement when you aren't confined to rigid 45° angles.
{ "source": [ "https://electronics.stackexchange.com/questions/7913", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1951/" ] }
7,992
If I leave my cell phone with the charger plugged in all the time, would this weaken the battery, and why? I've heard that you should only charge the cell phone when you receive a 'battery low' message and leave it off the charger at other times. The cell phone has got a Li-Ion battery.
All newer phones use Lithium polymer batteries. Why is it Partially Charged? To decrease their aging they are intended to be stored at 40% charge. This means when you receive your phone it should be at 40% charge, otherwise they will have aged your battery for you. (You are probably used to the effects of aging, like a 2 year old phone seeming to have very short battery life.) When you get your phone you can use it until it is discharged, but they normally say 'charge it' because people will not notice the partial charge. Do Not Fully Discharge You should not worry about fully discharging; this is superstition carried over from earlier battery technologies. Fully discharging a lithium battery is one of the best ways to make it fail. Below a certain charge they will have their overcharge protection circuitry fail and you cannot charge it at all. I have seen studies that show that this makes up more than 75% of "failed" lithium batteries. Lithium Battery Aging Lithium batteries have a set number of charge/discharge cycles before they fail. This might be a number like 500 cycles. You actually get more like 1000 cycles if you only discharge to 50% before recharge. Lithiums really do not like a deep discharge, I cannot stress this enough. If you would like more information about lithium battery technology let me know, I can get you many links, just drop me a comment. I have a few answers on the electronics and robotics stack exchange about it. Can I leave it plugged in all the time? Yes, and no. This is very dependent on who makes your device. For example, my Lenovo laptop will not apply a charge to the battery unless it is under 97%. When it does charge the battery it charges directly to 100%, then stops until the battery sags below 97%. Many laptops did not do this; most simply applied charge whenever the battery was not at 100%. This would put the battery through thousands of charge cycles in a week when you are not using the battery. This ages a battery quickly.
If your phone maker took the time and paid the extra cash, then your phone will stop charging once it reaches full charge and just power the system from the wall outlet. It is significantly more likely that your phone is charging your battery on a short cycle and aging it thoroughly.

Myths

Some people have some confusion from the myths that go around. The primary one is memory. As Battery University will say, this is mostly extinct, and actually applies to nickel-cadmium batteries. As was stated in a comment about crystals, Battery University says, in reference to nickel-cadmium: "With memory, the crystals grow and conceal the active material from the electrolyte. In advanced stages, the sharp edges of the crystals penetrate the separator, causing high self-discharge or electrical short." Now, talking about lithium batteries, which your phone uses, there is even more difference. To quote Battery University directly from their simple guidelines: "Avoid frequent full discharges because this puts additional strain on the battery. Several partial discharges with frequent recharges are better for lithium-ion than one deep one. Recharging a partially charged lithium-ion does not cause harm because there is no memory. (In this respect, lithium-ion differs from nickel-based batteries.) Short battery life in a laptop is mainly caused by heat rather than charge / discharge patterns." I understand how this may go against what you have been taught, but I am someone who has not only researched this but uses lithium batteries in my day-to-day work as an engineer.
{ "source": [ "https://electronics.stackexchange.com/questions/7992", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1999/" ] }
8,011
Is there any way I can connect resistors to allow them to handle more power, such as using four 1/4 W resistors to get a 1 W resistor? Or do I just have to use a single 1 W resistor?
Yes, you can use (4) 0.25 W resistors to dissipate 1 W and still remain within each individual resistor's power rating. This can be accomplished a few different ways: Place them all in series : In which case you will need to use resistors with 1/4 the resistance that you want overall. e.g. If you want 1 kΩ total, put (4) 250 Ω (240 Ω nearest 5% standard) resistors in series. Place them all in parallel : In which case you will need to use ones with 4 times the overall desired resistance e.g. If you want 1 kΩ total, put (4) 4 kΩ (3.9 kΩ nearest std.) resistors in parallel. Placing them in a 2x2 array : Where you can use resistors of the same resistance you want overall (2 in parallel gives half, but you place 2 sets of parallel in series, doubling the effective resistance) In all the mentioned cases, in order to have each resistor dissipate an equal share of power, they all must be equal in value (ohms). This isn't the only way to do it, there are several other combinations you could use with differing values, etc. Pragmatically, if you're only operating this circuit very intermittently (few seconds at a time), you might be able to get away with a single 1/4 W resistor, especially if this is on a breadboard (be careful not to melt stuff). Higher power resistors often are specified to survive surges of 8-10x their normal power dissipation for several seconds, though the typical 1/4 W thru-hole resistors are carbon film, which is a little less tolerant of this.
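The three arrangements can be checked numerically; here is a quick sketch (the function names are my own):

```python
def series(resistors):
    """Total resistance of resistors in series."""
    return sum(resistors)

def parallel(resistors):
    """Total resistance of resistors in parallel."""
    return 1.0 / sum(1.0 / r for r in resistors)

# Target: 1 kOhm that can dissipate 1 W, built from 0.25 W parts.
# Option 1: four 250-ohm resistors in series.
assert series([250] * 4) == 1000
# Option 2: four 4-kOhm resistors in parallel.
assert parallel([4000] * 4) == 1000
# Option 3: 2x2 array of 1-kOhm resistors (two parallel pairs in series).
assert series([parallel([1000, 1000])] * 2) == 1000

# With equal values, each resistor sees 1/4 of the total power:
total_power = 1.0
per_resistor = total_power / 4
print(per_resistor)  # 0.25 -- watts, within each part's rating
```

All three options give the same equivalent resistance, and because the values are equal the power splits evenly four ways.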
{ "source": [ "https://electronics.stackexchange.com/questions/8011", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1623/" ] }
8,112
What's a good way to reduce the output from a 9V battery to the 1.8V to 5V required by an ATmega328 controller? The context is a small robotics platform with low power requirements (very slow movement).
I would use a 7805 linear regulator to get 5 V; it's a simple circuit. Just make sure the capacitors around it are ceramic/polymer types, since only those have a low enough ESR value, especially the one on the output side.
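One caveat worth knowing: a linear regulator like the 7805 burns the difference between input and output voltage as heat. A rough sketch of the numbers (the 50 mA load is an assumed figure for a slow robot, not from the question):

```python
# A linear regulator drops the excess voltage across itself as heat.
v_in = 9.0     # battery voltage
v_out = 5.0    # regulator output
i_load = 0.05  # assumed 50 mA load

p_dissipated = (v_in - v_out) * i_load  # watts burned in the regulator
efficiency = v_out / v_in               # best case, ignoring quiescent current

print(p_dissipated)  # 0.2 -- watts of heat in the 7805
print(efficiency)    # ~0.56, so almost half the battery energy is wasted
```

For a low-current robot this waste may be acceptable; if battery life matters, a small buck (switching) regulator avoids most of it.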
{ "source": [ "https://electronics.stackexchange.com/questions/8112", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2172/" ] }
8,121
The datasheet gives the values shown in this picture: But I've also seen it built with these values: So which values should I use, or doesn't it matter? And it's for when it's first starting up.
First Starting?

Build it with either; it will work fine for you. The second option with the large capacitors and the extra small one is "more stable." Just build this; do not worry about why if you are just wanting to start your project. Let us know if something does not work.

100nF

Use the values that are suggested in the datasheet. If you add capacitors a factor of 10 smaller (called decade capacitors) it will help with higher-frequency noise (RF, or radio-band noise), as an effect of the impedance of a capacitor. Feel free to add as many decade capacitors as you want, but you will not need them unless you start having FCC testing. They cannot hurt.

100uF

When someone increases this they are allowing their circuit to pull power from the power source for longer. If you have a long power line this will act as a small tank of power. If you are going to have long or very inductive power lines, add this.
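The "impedance of a capacitor" argument can be made concrete. For an ideal capacitor |Z| = 1/(2*pi*f*C); a quick sketch comparing the bulk and decoupling values (note that real parts have parasitic inductance and resistance, ESL/ESR, which is the practical reason small decade caps help at RF even though the ideal formula says a bigger capacitor always wins):

```python
import math

def z_cap(f_hz, c_farads):
    """Impedance magnitude of an *ideal* capacitor: |Z| = 1/(2*pi*f*C)."""
    return 1.0 / (2 * math.pi * f_hz * c_farads)

# Bulk 100 uF vs. decoupling 100 nF at a low and a high frequency:
print(z_cap(1e3, 100e-6))   # ~1.6 ohm   -- bulk cap dominates at low frequency
print(z_cap(1e3, 100e-9))   # ~1.6 kohm
print(z_cap(10e6, 100e-6))  # ~0.16 milliohm (ideal only; real ESL/ESR prevents this)
print(z_cap(10e6, 100e-9))  # ~0.16 ohm
```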
{ "source": [ "https://electronics.stackexchange.com/questions/8121", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1623/" ] }
8,253
OK, so the uA741 is 42 years old now. For its time it may have been a great opamp; the requirements weren't as high as today, and there was far less competition. But I was wondering what's the 741's appeal today.

- it's slow: GBW 1 MHz, slew rate < 0.5 V/µs
- it's not low power, nor low voltage
- it doesn't have low-bias-current FET inputs
- it doesn't have rail-to-rail inputs or outputs
- it's not low noise
- many more modern opamps have comparable price

Why is the 741 still used today?
It's an ideal op amp to learn the basics on, due to its non-ideal nature. The first thing we learn is infinite input impedance, infinite gain, as well as a few other silly things. The 741 obeys none of these idealities, forcing students to learn the hard way how to cope. They see bandwidth limitations without using expensive oscillators or function generators; they see early saturation, nowhere near the rails, allowing the use of cheap multimeters. Many textbooks use the 741 as an example due to its ubiquitous availability and the simple verification of its non-idealities. Today, we can buy op-amps with mV offset and noise, 100s of MHz bandwidth, nA leakage, etc. One of the most time-consuming parts of a design is looking for parts, especially for the inexperienced. Academics aren't experienced design engineers, and will use the parts they know, as they have better things to do than look for parts (like write that grant application, right? :). This outdated part therefore gets introduced into new designs through copied legacy modular designs, and familiarity from instruction.
{ "source": [ "https://electronics.stackexchange.com/questions/8253", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2064/" ] }
8,295
I have a 3-pin 12 V computer fan and I want to interpret its speed sensor output. On the yellow wire I get something that looks like pulse-width modulation. How would I interpret the output without actually connecting the fan to a computer?
Brief background: The tachometer output comes from a Hall-effect sensor mounted on the motor driver PCB on the fan frame. One or more magnets embedded in the fan rotor hub activate the Hall-effect sensor as they pass by. The sensor is amplified, and eventually drives a logic circuit. The fans that I have seen use an open drain/open collector output. One (or more) pulse is generated every time the fan rotor completes a revolution. The number of pulses counted in one minute is directly proportional to the RPM of the fan. In your fan's case, I think it would be reasonable to guess that there are two pulses generated for each revolution. With the frequency that you have measured, about 1500 RPM sounds right, given that you are running it at 10V (12V nominal) and the typical is 1800-2000 RPM. If you want a more visual approach, you can make a crude strobe tachometer using just a LED and resistor. Connect a LED (brighter is better) and an appropriate current-limiting resistor between power and the tachometer pin. If you mark one of the fan blades with something easy to see, like a sticker, you should be able to shine the LED on the fan blades and see the sticker illuminated in two places. You can use this technique to count the number of times the tachometer output goes low each rotation, and to approximate the duty cycle of the signal.
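The RPM arithmetic is simple enough to sketch; the two pulses per revolution is the guess made above for typical PC fans:

```python
def fan_rpm(pulse_freq_hz, pulses_per_rev=2):
    """RPM from tachometer pulse frequency.
    PC fans commonly produce 2 pulses per revolution."""
    return pulse_freq_hz * 60.0 / pulses_per_rev

# A 50 Hz tach signal with the usual 2 pulses per revolution:
print(fan_rpm(50))  # 1500.0 -- RPM, consistent with the estimate above
```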
{ "source": [ "https://electronics.stackexchange.com/questions/8295", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1240/" ] }
8,418
I need a trace on my PCB to carry up to 2.5 amps (average) current, with 5-6 amp spikes (it's going to a switch mode power supply.) How wide should the traces be? I've got a trade off between reliability and size, as the product is space constrained. Any tips would be appreciated.
After doing a quick google of "PCB Current Calculator", I found a PCB Current Calculator based on IPC-2152. It bases the width of the track on how much of a temperature rise the trace is allowed to have. It's nice in that it shows how much power you waste through your trace. I would design for your worst-case RMS current, since it's going to be a periodic signal. If you use 2 oz/ft² copper instead of the standard 1 oz/ft² copper, you won't need as wide of a trace to achieve the same resistance. For example, allowing for a 10 °C rise, you can get away with these numbers at 3 A with no copper plane nearby:

- 177 mil (4.50 mm) on 0.5 oz/ft² copper
- 89 mil (2.26 mm) on 1 oz/ft² (35 µm) copper
- 47 mil (1.19 mm) on 2 oz/ft² (70 µm) copper

Note: IPC-2221 (the standard used in the original answer) uses old measured values for its design charts, and these charts are implemented in many calculators. As best as I can tell, this data was claimed to be 50 years old, which makes IPC-ML-910 (1968) a possible source. As @AlcubierreDrive pointed out, a new standard, IPC-2152, contains new measured data, and presumably is more accurate. More importantly, a comparison of IPC-2221 values gives the following result for trace widths: IPC-2221 (internal) > IPC-2152 > IPC-2221 (external). Actual numbers for the example above (1 oz copper) are:

- IPC-2152: 89 mil
- IPC-2221 (internal): 143 mil (+60%)
- IPC-2221 (external): 55 mil (-38%)

Also note that the original numbers in this answer were based on the IPC-2221 internal calculations, which will provide a conservative estimate for all values.
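For a rough cross-check without a web calculator, the older IPC-2221 charts are commonly approximated by the fitted formula I = k * dT^0.44 * A^0.725, with A in square mils and k = 0.048 for external layers, 0.024 for internal. A sketch, keeping in mind this is the older, chart-fitted standard rather than IPC-2152:

```python
# IPC-2221 trace-width approximation: I = k * dT^0.44 * A^0.725 (A in sq. mils).
def trace_width_mil(current_a, temp_rise_c, oz_copper=1.0, internal=False):
    k = 0.024 if internal else 0.048
    area_sq_mil = (current_a / (k * temp_rise_c ** 0.44)) ** (1 / 0.725)
    thickness_mil = 1.378 * oz_copper  # 1 oz/ft^2 copper is ~1.378 mil thick
    return area_sq_mil / thickness_mil

# 3 A with a 10 C rise on 1 oz copper:
print(round(trace_width_mil(3, 10)))                 # ~54 mil, external layer
print(round(trace_width_mil(3, 10, internal=True)))  # ~140 mil, internal layer
```

These land close to the 55 mil / 143 mil IPC-2221 figures quoted above; the residual difference comes from rounding in the fitted constants.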
{ "source": [ "https://electronics.stackexchange.com/questions/8418", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1225/" ] }
8,565
This question popped up to me a few moments ago. I was measuring what is intended to be a 50 MHz square wave swinging from 0 to 2.5 V; however, what I saw on the screen was a sine wave centered around 1.2 V, swinging from 0.5 to 2.0 V, at a frequency of 4 MHz. I checked my oscilloscope's datasheet and it showed that the bandwidth was 10 MHz with a sampling rate of 50 MS/s. I'm wondering what these figures are all about. Are they a measure of the upper frequency limit an oscilloscope can measure? Is this oscilloscope capable of measuring 50 MHz at all?
System bandwidth is a combination of probe bandwidth and oscilloscope input bandwidth. Each can be approximated by an RC lowpass circuit, which means rise times add in quadrature:

t_system^2 = t_probe^2 + t_scope^2
f_system = 1/sqrt((1/f_probe)^2 + (1/f_scope)^2)

This means that a 10 MHz 'scope with 60 MHz probes can measure sinusoids of frequency 9.86 MHz with -3 dB attenuation (i.e. to about 70.8% of the true amplitude). When measuring digital pulse trains it's not so much the periodicity that matters, but the rise and fall times, as they contain the high-frequency information. Rise times can be approximated mathematically by an RC rise or a Gaussian rise, and are defined as the time for the signal to go from 10% of the difference between low voltage (logical 0) and high voltage (logical 1), to 90% of the difference. For example, in a 5V/0V system, it is defined as the time to get from 0.1*5V = 0.5V to 0.9*5V = 4.5V. With these constraints and some fancy math, one can work out that each type of characteristic rise time has frequency content up to about 0.34/t_rise for Gaussian and 0.35/t_rise for RC. (I use 0.35/t_rise for no good reason and will do so for the rest of this answer.) This information works the other way, too: a particular system bandwidth is only able to measure rise times down to 0.35/f_system; in your case, 35 to 40 nanoseconds. You're seeing something similar to a sine wave because that is what the analog front-end is letting through. Aliasing is a digital sampling artifact, and is also in effect in your measurement (aren't you lucky!). Here's a borrowed image from WP: As the analog front-end is only letting rise times of 35 ns to 40 ns through, the ADC sampling bridge sees something like an attenuated 50 MHz sine wave, but it's only sampling at 50 MS/s, so it can only read sinusoids below 25 MHz. Many 'scopes have an antialiasing filter (LPF) at this point, which would attenuate frequencies above 0.5 times the sample rate (Shannon-Nyquist sampling criterion).
Your scope doesn't seem to have this filter, though, as the peak-to-peak voltage is still fairly high. What model is it? After the sampling bridge the data gets shoved into a few DSP processes, one of which is called decimation and cardinal spans, which further reduces sample rate and bandwidths in order to better display and analyze it (especially helpful for FFT calculation). The data is further massaged such that it doesn't display frequencies above ~0.4 times the sample rate, called a guard band. I would have expected you to see a ~20MHz sinusoid -- do you have averaging (5-point) turned on? EDIT: I'll stick my neck out and guess that your oscilloscope has digital antialiasing, using decimation and cardinal spans, which basically means a digital LPF then resampling of an interpolated path. The DSP program sees a 20MHz signal, so it decimates it until it is below 10MHz. Why 4MHz and not closer to 10MHz? "Cardinal span" means halving the bandwidth, and decimation is often by a power of two as well. Some integer power of 2 or a simple fraction of it resulted in a 4MHz sinusoid being spat out instead of ~20MHz. This is why I say every enthusiast needs an analog 'scope. :) EDIT2: Since this is getting so many views, I'd better correct the above embarrassingly thin conclusion. The particular tool you linked to can use undersampling, for which a windowing analog BPF input is required for antialiasing, which this tool doesn't seem to have, so it must only have a LPF, restricting it to sinusoids of less than 25MHz even when using equiv. time sampling. Although I also suspect the quality of the analog side, the digital side likely does not do the aforementioned DSP algorithms, instead streaming data or transferring one capture at a time for brute force number crunching on a PC.
50 MS/s and 8-bit word lengths mean this is generating ~48 MB/s of raw data -- far too much to stream over USB despite its theoretical 60 MB/s limit (practical limit is 30 MB/s - 40 MB/s), never mind the packetizing overhead, so there is some decimation right out of the box to reduce this. Working with 35 MB/s gives ~37 MS/s sample rate, pointing to a theoretical measurement limit of 18 MHz, or 20 ns rise time, when streaming, though it is likely lower as 35 MB/s is amazing (but possible!). The manual indicates a Block Mode exists for capturing data at 50 MB/s 'til the internal 8k memory (cough) is full (160 µs), then sending it to the computer at a leisurely pace. I would assume that the difficulties encountered in designing a quality analog input were partially overcome by oversampling by 2X (extra half-bit accuracy), giving an effective sample rate of 25 MS/s, maximum frequency 12.5 MHz, and a 10% guard band ( (0.5*25-10)/25 ), all of which could be reduced in the hand-tool itself. In conclusion, I'm not sure why you're seeing a 4 MHz sinusoid as there are ways for this to happen, but I would want to make the same measurement in Block Mode then analyze the data with a third-party program. I have always been hard on PC-based oscilloscopes, but this one seems to have decent inputs...
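The system-bandwidth and rise-time relations at the start of this answer can be checked numerically; a quick sketch:

```python
def system_bandwidth(f_probe, f_scope):
    """Combined -3 dB bandwidth of probe + scope, modeled as cascaded RC poles."""
    return 1.0 / ((1.0 / f_probe) ** 2 + (1.0 / f_scope) ** 2) ** 0.5

def min_rise_time(f_system):
    """Fastest edge the system can resolve, using t_rise ~= 0.35 / BW."""
    return 0.35 / f_system

f_sys = system_bandwidth(60e6, 10e6)  # 60 MHz probes on a 10 MHz scope
print(f_sys / 1e6)                    # ~9.86 MHz, as quoted above
print(min_rise_time(f_sys) * 1e9)     # ~35.5 ns -- slowest resolvable rise time
```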
{ "source": [ "https://electronics.stackexchange.com/questions/8565", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/444/" ] }
8,685
This just dawned on me: if you're writing an operating system, what are you writing it on? I ask this as I am reading a microprocessor fundamentals book from 1980 and this question popped into my head: how was the first microprocessor chip programmed? The answer may be obvious but it's bugging me.
I will take your question literally and discuss mostly microprocessors, not computers in general. All computers have some sort of machine code. An instruction consists of an opcode and one or more operands. For example, the ADD instruction for the Intel 4004 (the very first microprocessor) was encoded as 1000RRRR, where 1000 is the opcode for ADD and RRRR represents a register number. The very first computer programs were written by hand, hand-encoding the 1's and 0's to create a program in machine language. This was then programmed into the chip. The first microprocessors used ROM (Read-Only Memory); this was later replaced by EPROM (Erasable Programmable ROM, which is erased with UV light); now programs are usually programmed into EEPROM (Electrically Erasable PROM, which can be erased on-chip), or specifically Flash memory. Most microprocessors can now run programs out of RAM (this is pretty much standard for everything but microcontrollers), but there has to be a way of loading the program into RAM in the first place. As Joby Taffey pointed out in his answer, this was done with toggle switches for the Altair 8080, which was powered by an Intel 8080 (which followed the 4004 and 8008). In your PC, there is a bit of ROM called the BIOS which is used to start up the computer and load the OS into RAM. Machine language gets tedious real fast, so assembler programs were developed that take a mnemonic assembly language and translate it, usually one line of assembly code per instruction, into machine code. So instead of 10000001, one would write ADD R1. But the very first assembler had to be written in machine code. Then it could be rewritten in its own assembler code, and the machine-language version used to assemble it the first time. After that, the program could assemble itself.
This is called bootstrapping and is done with compilers too -- they are typically first written in assembler (or another high-level language), and then rewritten in their own language and compiled with the original compiler until the compiler can compile itself. Since the first microprocessor was developed long after mainframes and minicomputers were around, and the 4004 wasn't really suited to running an assembler anyway, Intel probably wrote a cross-assembler that ran on one of its large computers, and translated the assembly code for the 4004 into a binary image that could be programmed into the ROM's. Once again, this is a common technique used to port compilers to a new platform (called cross-compiling ).
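Hand-assembly is easy to illustrate with the 4004-style ADD encoding mentioned above; this toy function does mechanically what the first programmers did by hand, turning a mnemonic's operand into a bit pattern:

```python
# Toy illustration: encode 'ADD Rn' as the byte 1000RRRR,
# i.e. the 4-bit opcode 1000 followed by a 4-bit register number.
def assemble_add(register):
    assert 0 <= register <= 0b1111, "register number must fit in 4 bits"
    return (0b1000 << 4) | register

print(format(assemble_add(1), '08b'))  # 10000001 -- ADD R1, as in the text
```

A real assembler is just this idea scaled up: a table of opcodes, a pass to resolve labels into addresses, and output of the resulting bytes.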
{ "source": [ "https://electronics.stackexchange.com/questions/8685", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1623/" ] }
8,767
What are some good versioning systems for hardware projects? Are there equivalents of Google Code, CVS and SVN? Are such version control systems suitable for hardware projects involving PCB files, schematics, and even firmware code?
Basically, all VCS systems can handle text & binary files gracefully. Of course you cannot merge binary ones. So as long as you are not using obsolete things like CVS you will be good with ANY system.
{ "source": [ "https://electronics.stackexchange.com/questions/8767", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1688/" ] }
8,794
Do electrolytic capacitors have a limited shelf life? I would like to know for both aluminium and tantalum.
Aluminium Electrolytic Capacitors: Epcos: 2 years, cf. this applications information Cornell Dubilier: 3 years as per this document Nichicon: 2 years; section 2-6 in this document Several documents say that longer storage is well possible, but will require reforming before use. Panasonic, amongst others, gives a concrete procedure: apply the rated voltage via a series resistor of 1 kOhm for 30 minutes (for example https://eu.industrial.panasonic.com/sites/default/pidseu/files/downloads/files/id_almiec_e.pdf#page=186 ). There is also a military handbook about reforming stored electrolytic capacitors (formerly known as MIL-STD-1131). Without reforming, applying the rated voltage after a long storage duration can produce a reforming current so high that capacitors may get (too) warm and even blow up, which we do not like because we are not Beavis or Butt-Head (he he). Tantalum Capacitors: I couldn't find similar data after my initial search, but it seems like the usual MSL (moisture sensitivity level) ratings for surface-mount parts are given and applicable.
{ "source": [ "https://electronics.stackexchange.com/questions/8794", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2064/" ] }
9,137
In low-cost mass-produced items I often run into black blobs of what looks like resin applied directly on top of something on the PCB. What are these things exactly? I suspect this is some kind of custom IC that is laid out directly on the PCB to save on the plastic housing/connector pins. Is this correct? If so, what is this technique called? This is a photograph of the inside of a cheap digital multimeter. The black blob is the only non-basic piece of circuitry present, along with an op-amp (top) and a single bipolar junction transistor.
It's called chip-on-board. The die is glued to the PCB and wires are bonded from it to pads. The Pulsonix PCB software I use has it as an optional extra. The main benefit is reduced cost, since you don't have to pay for a package.
{ "source": [ "https://electronics.stackexchange.com/questions/9137", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1802/" ] }
9,264
I know 9600, 19200, 38400, 57600, 115200 and 1.8432 Mbaud, but no others. Why are these values used, and is it simply doubling each time or is there something more complex going on? (For example, 115200 is three times 38400, so the doubling pattern breaks there.) The reason I ask this question is I'm designing something which may have to interact with a variety of different baud rates. It will initialise at 9600, and then switch to a specific baud rate. But I can't support arbitrary rates because the dsPIC33F I am using does not support arbitrary rates, as it is limited to a 16-bit BRG down counter. It's similar in this regard to many other processors.
It started a long long time ago with teletypes — I think 75 baud. Then it's been mostly doubling ever since, with a few fractional (x1.5) multiples, for example 28,800, where there were constraints on phone-line modem tech that didn't quite allow it to double. Standard crystal values came from these early baudrates, and their availability dictates future rates. E.g., \$\begin{align}{7.3728 \,\mathrm{MHz} \over 16} &= 460,800 \;\text{baud}\\\\{7.3728 \,\mathrm{MHz} \over 64} &= 115,200 \,\text{baud}.\end{align}\$ Most UARTS use a clock of \$2^n \times 16\$ of the baudrate, more modern parts (e.g. NXP LPC) have fractional dividers to get a wider range by using non-binary multiples. Other common standards are 31,250 (MIDI) and 250K (DMX), both likely chosen as nice multiples of 'round' clocks like 1MHz etc.
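The crystal arithmetic is easy to check. Assuming the common clock/16 UART scheme with an integer divisor (divisor register widths vary by part), here is a sketch of why 7.3728 MHz hits the standard rates exactly while a "round" clock does not:

```python
# Standard 16x UART: baud = clock / (16 * divisor), divisor is an integer.
def baud_error(clock_hz, target_baud):
    divisor = round(clock_hz / (16 * target_baud))
    actual = clock_hz / (16 * divisor)
    error_pct = 100 * (actual - target_baud) / target_baud
    return divisor, actual, error_pct

# The 7.3728 MHz crystal divides exactly into 115200 baud:
print(baud_error(7_372_800, 115200))  # (4, 115200.0, 0.0)
# A "round" 8 MHz clock cannot hit 115200:
print(baud_error(8_000_000, 115200))  # divisor 4 -> 125000 baud, ~8.5% error (far too much)
```

A few percent of baud error is usually the most a UART link tolerates, which is why "UART-friendly" crystal values persist.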
{ "source": [ "https://electronics.stackexchange.com/questions/9264", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1225/" ] }
9,510
What diode modifiers are used in practice to model LEDs with SPICE (Berkeley v.3f5)? These are available to me:

 #   Name  Parameter                                  Units  Default  Example   Area
 1   IS    Saturation current                         A      1e-14    1e-14     *
 2   RS    Ohmic resistance                           Ω      0        10        *
 3   N     Emission coefficient                       -      1        1.0
 4   TT    Transit-time                               s      0        0.1ns
 5   CJO   Zero-bias junction capacitance             F      0        2pF       *
 6   VJ    Junction potential                         V      1        0.6
 7   M     Grading coefficient                        -      0.5      0.5
 8   EG    Activation energy                          eV     1.11     1.11 Si / 0.69 Sbd / 0.67 Ge
 9   XTI   Saturation-current temperature exponent    -      3.0      3.0 jn / 2.0 Sbd
10   KF    Flicker noise coefficient                  -      0
11   AF    Flicker noise exponent                     -      1
12   FC    Coeff. for forward-bias dep. cap. formula  -      0.5
13   BV    Reverse breakdown voltage                  V      ∞        40.0
14   IBV   Current at breakdown voltage               A      1.0e-3
15   TNOM  Parameter measurement temp.                °C     27       50

3.4.2 Diode Model (D)

The dc characteristics of the diode are determined by the parameters IS and N. An ohmic resistance, RS, is included. Charge storage effects are modeled by a transit time, TT, and a nonlinear depletion layer capacitance which is determined by the parameters CJO, VJ, and M. The temperature dependence of the saturation current is defined by the parameters EG, the energy, and XTI, the saturation current temperature exponent. The nominal temperature at which these parameters were measured is TNOM, which defaults to the circuit-wide value specified on the .OPTIONS control line. Reverse breakdown is modeled by an exponential increase in the reverse diode current and is determined by the parameters BV and IBV (both of which are positive numbers).

For example, using this basic, cheap red: I don't care much about high-frequency characteristics -- just would like to be able to match its IV-curve within its operating specs (-10uA/-5V leakage to +100mA/+2.2-ish V forward):
As you stated, there are 3 parameters that dictate the DC response of a diode: the saturation current (IS), the emission coefficient (N), and the ohmic resistance (RS). I was able to fit the curve with fairly high accuracy, so I'll document my modeling procedure. The SPICE model for the diode closely matches the Shockley diode equation:

If = IS*(e^(Vf/(N*Vt)) - 1), where Vt = kT/q = 26 mV at room temperature.

1. Get actual values from the graphs provided in the datasheet to use for comparison. The more points the better, and the more accurate the better. Below is a table that I estimated from the figure you provided:

Vf (V)   If (mA)
1.3       0.001
1.4       0.010
1.5       0.080
1.6       0.700
1.7       5.000
1.8      20.000
1.9      40.000
2.0      65.000
2.1      80.000

2. Plug the values into Excel, and change the y-axis to a log scale. You should get a graph that looks identical to the original graph from the datasheet. Add another column for your graph, with If calculated from the forward voltage and the constants IS and N. We can use this configuration to iteratively find IS and N.

3. Solve for IS and N. We are trying to match the linear part of the graph (1.3 <= Vf <= 1.7). Adjusting IS will move the curve along the y-axis. Get the calculated graph to the same order of magnitude. The next step is to find the emission coefficient (N). N affects both the amplitude and the slope, so some adjustment of IS may be necessary to keep the curve in the same ballpark. Once the slopes match (the lines are parallel), trim IS so that the calculated data matches the datasheet values. I got IS = 1e-18 and N = 1.8 for the diode you listed.

4. Identify RS. This is a bit tricky. RS is responsible for the curving of the current from 1.7 V and above. Consider modeling the ohmic resistance as a resistor in series with the diode. As the current through the diode increases, the voltage drop across the ohmic resistance causes the forward diode voltage Vf to increase more slowly. At small currents, this effect is negligible.
The first thing to do is to get a ballpark estimate of RS to use in the more accurate solutions. You can calculate the effective value of RS from the datasheet values by back-calculating for Vf using the measured If. The voltage difference between the input value and the calculated Vf can be used with the forward current to generate a resistance. At the higher currents, this will be a good starting value. To plot the diode current using RS, you need to first calculate the diode Vf given a voltage for the resistor-diode series combination. Wikipedia lists an iterative function; it converges easily if the resistor voltage drop is significant. This function was easy enough to set up in Excel. For Vf values below 1.8, I hard-coded the input value because the iterative function did not converge. Then take this Vf value to calculate the If of the ideal diode. I plotted this with the original datasheet graph. Using trial and error, you should be able to get an RS value that gets pretty good overlap with the datasheet values. All that's left is to throw the model together in SPICE to verify your work. Below is my diode model that I verified using HSPICE. The simulation data is almost a perfect overlay of the datasheet graph.

.model Dled_test D (IS=1a RS=3.3 N=1.8)

I used this article, which helped a lot with the diode SPICE parameters. I cleaned up my spreadsheet, and tyblu has made it available for download here. Use at your own risk, results not guaranteed, etc... etc...
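The fit can be reproduced without Excel. Here is a sketch of the Shockley evaluation and the RS split, using the fitted values from this answer; note I rewrite the fixed-point iteration in log form (my variant, not the Wikipedia form used above) so it converges at high currents:

```python
import math

# Fitted values from this answer: IS = 1e-18 A, N = 1.8, RS = 3.3 ohm.
IS, N, RS, VT = 1e-18, 1.8, 3.3, 0.026  # VT = kT/q at room temperature

def diode_current(vf):
    """Shockley equation: If = IS*(exp(Vf/(N*Vt)) - 1)."""
    return IS * (math.exp(vf / (N * VT)) - 1)

def junction_voltage(v_applied, iterations=30):
    """Split v_applied between RS and the junction (log-form fixed point;
    converges when the RS drop is significant, as noted in the text)."""
    vf = v_applied - 0.1  # rough initial guess
    for _ in range(iterations):
        vf = N * VT * math.log((v_applied - vf) / (RS * IS) + 1)
    return vf

# Linear region: the ideal equation alone should match the datasheet table.
print(diode_current(1.5))               # ~8e-5 A, i.e. the 0.08 mA point
# High-current region: RS matters. At 2.0 V applied:
vf = junction_voltage(2.0)
print((2.0 - vf) / RS)                  # ~0.06 A, ballpark of the 65 mA point
```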
{ "source": [ "https://electronics.stackexchange.com/questions/9510", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2118/" ] }
9,553
I'm a trainee electrician and PC hardware enthusiast. I was just wondering why a mixture of inductors and capacitors is used on motherboards. Why not just use capacitors? I thought an inductor stores energy like a capacitor does, but using magnetism. What's so special about storing it as magnetism?
To answer this properly, you should know the properties of a capacitor and an inductor. Inductors are one of the primary components required by a switching regulator. A capacitor and an inductor are similar in that a capacitor resists a change in voltage and an inductor resists a change in current. The "strength" of their resistance depends on their value. Capacitors are widely used to clean up a power supply line, i.e. remove noise or ripple at (higher) frequencies. Inductors are used in switching power supplies, where a relatively constant current is passed through an inductor. A switching power supply works by opening and closing a switch very quickly. When the switch is closed, the inductor is 'charged'. When the switch is open, the energy is drawn from the inductor into the load. Usually such a power supply is decoupled with a capacitor to create a stable power supply line. An inductor is required to make this principle work. Where a resistor has an equal resistance at all signal frequencies, you should view a capacitor as a resistor whose resistance is infinite at DC (0 Hz) and 0 at high frequencies. An inductor is the opposite: its resistance is 0 at 0 Hz and infinite at high frequencies. However, we don't call this resistance (that term is only used for a pure resistor!) but impedance. A PC motherboard or graphics card is basically not much else than this. They have their main chips and the routing between them, and most other components are power supply or a little bit of interfacing between chips or connectors.
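The capacitor/inductor duality described in the last paragraph can be put into numbers for ideal parts, using |Z_C| = 1/(2*pi*f*C) and |Z_L| = 2*pi*f*L:

```python
import math

def z_capacitor(f, c):
    """|Z| of an ideal capacitor: infinite at DC, falling toward 0 with frequency."""
    return float('inf') if f == 0 else 1 / (2 * math.pi * f * c)

def z_inductor(f, l):
    """|Z| of an ideal inductor: 0 at DC, growing with frequency."""
    return 2 * math.pi * f * l

# 1 uF capacitor vs. 1 uH inductor -- note the opposite trends:
for f in (0, 1e3, 1e6):
    print(f, z_capacitor(f, 1e-6), z_inductor(f, 1e-6))
```

This is exactly why the pair works in a supply filter: the inductor blocks the high-frequency switching noise while the capacitor shorts it to ground.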
{ "source": [ "https://electronics.stackexchange.com/questions/9553", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2824/" ] }
10,092
Rather basic, I'm afraid, but when would you use a relay, and when would you use a transistor? In a relay the contacts wear out, so why are relays used at all?
- Relays are on-off devices. Transistors can have their voltage drop varied.
- Relays are far slower than transistors; typically 50 ms to switch, and probably more. Some types of transistors can switch in picoseconds (almost 10 orders of magnitude faster).
- Relays are isolated. Transistors can be (e.g. SSR), but are often not.
- Relays are electromagnetic and bring problems with them. For example, try building a relay computer with many relays: you will find that relays will interfere with each other in some cases. Transistors are not very EM sensitive, and they do not emit much electromagnetic interference.
- Relays consume a lot of current in the "on" state; most transistors do not.
{ "source": [ "https://electronics.stackexchange.com/questions/10092", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2992/" ] }
10,129
For the sake of not incorrectly connecting my power supply and damaging my board I'm going to ask a relatively dumb question. Is ground on my board the negative terminal on my battery? Explicitly should I connect the ground to the negative terminal of the battery?
Yes - just remember that your ground in that case will only be relative to the battery. If you go to connect this to another device (serial interface, etc) you need to link the ground lines so they're a common ground. So long as it is isolated, you're fine.
{ "source": [ "https://electronics.stackexchange.com/questions/10129", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2617/" ] }
10,322
Just a general electronics question: What is negative voltage, like -5 Volt? From my basic knowledge, power is generated by electrons wandering from the minus to the plus side of the power source (assuming DC power here). Is negative voltage when electrons wander from + to -? Why do some devices even need it, what is so special about it?
Someone may have better words to explain this than me, but the big thing you have to remember is that voltage is a potential difference. In most cases the "difference" part is a difference between some potential and ground potential. When someone says -5v, they're saying that you are 5 volts below ground. You also need to keep in mind that voltage is relative. So like I mentioned before, most people reference "ground"; but what is ground? You can say ground is earth ground, but what about the case when you have a battery powered device that has no contact with the ground? In this situation we have to treat some arbitrary point as "ground". Usually the negative terminal on the battery is what we take as this reference. Now consider the case that you have 2 batteries in series. If both were 5 volts, then you would say you have 10 volts total. But the assumption that you get 0/+10 is based on taking "ground" as the negative terminal of the battery that isn't touching the other battery, and 10V as the positive terminal that isn't touching the other battery. In this situation we can instead decide to make the connection between the 2 batteries our "ground" reference. This would then result in +5v on one end and -5v on the other end. Here is what I was trying to explain:

+10v  +++   +5v
      | |
      | |   < Battery
      | |
 +5v  ---    0v
      +++
      | |
      | |   < Another Battery
      | |
  0v  ---   -5v
{ "source": [ "https://electronics.stackexchange.com/questions/10322", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3051/" ] }
10,327
What prefixes should be used on reference designators for components of various types? I think we can all agree that "R##" refers to a resistor, "C##" refers to a capacitor, and "L##" refers to an inductor. Beyond that, it appears to be a mishmash of conventions. I've seen both "IC" and "U" used for chips. "Q", "T", and "M" have been used for transistors and MOSFETs. Ordinarily, "D##" is used for diodes, but LEDs and Zeners get special treatment on some boards. What standards are available for reference, and who endorses them? JEDEC, ISO, IEEE, and other standardization bodies are welcome references (though I'd prefer an inexpensive standard), but I'm also curious to see what's used at various companies other than my own. Edit: What I'd really like to see is a list in an answer here which complies with the standard (even if it's just a quote).
There are actually standards to address this: IEC 60617 (also known as British Standard BS 3939), ANSI standard Y32 (also known as IEEE Std 315), and Australian Standard AS 1102. Below is a table of some common markings from this link to an old revision of a Wikipedia article:

AT: Attenuator
BR: Bridge rectifier
BT: Battery
C: Capacitor
CN: Capacitor network
D: Diode (including zeners, thyristors and LEDs)
DL: Delay line
DS: Display
F: Fuse
FB or FEB: Ferrite bead
FD: Fiducial
J: Jack connector (female)
JP: Link (jumper)
K: Relay
L: Inductor
LS: Loudspeaker or buzzer
M: Motor
MK: Microphone
MP: Mechanical part (including screws and fasteners)
P: Plug connector (male)
PS: Power supply
Q: Transistor (all types)
R: Resistor
RN: Resistor network
RT: Thermistor
RV: Varistor
S: Switch (all types, including push-buttons)
T: Transformer
TC: Thermocouple
TUN: Tuner
TP: Test point
U: Integrated circuit
V: Vacuum tube
VR: Variable resistor (potentiometer or rheostat)
X: Transducer not matching any other category
Y: Crystal or oscillator
Z: Zener diode

Component name abbreviations widely used in industry:

AE: aerial, antenna
B: battery
BR: bridge rectifier
C: capacitor
CRT: cathode ray tube
D or CR: diode
DSP: digital signal processor
F: fuse
FET: field effect transistor
GDT: gas discharge tube
IC: integrated circuit
J: wire link ("jumper")
JFET: junction gate field-effect transistor
L: inductor
LCD: liquid crystal display
LDR: light dependent resistor
LED: light emitting diode
LS: speaker
M: motor
MCB: circuit breaker
Mic: microphone
MOSFET: metal oxide semiconductor field effect transistor
Ne: neon lamp
OP: operational amplifier
PCB: printed circuit board
PU: pickup
Q: transistor
R: resistor
RLA or RY: relay
SCR: silicon controlled rectifier
SW: switch
T: transformer
TFT: thin film transistor (display)
TH: thermistor
TP: test point
Tr: transistor
U: integrated circuit
V: valve (tube)
VC: variable capacitor
VFD: vacuum fluorescent display
VLSI: very large scale integration
VR: variable resistor
X: crystal, ceramic resonator
XMER: transformer
XTAL: crystal
Z or ZD: Zener diode
{ "source": [ "https://electronics.stackexchange.com/questions/10327", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/857/" ] }
10,588
Lots of new batteries (for mobile devices, MP3 players, etc) have connectors with 3 pins. I would like to know what is the purpose of this and how should I use these three pins? They are usually marked as (+) plus, (-) minus, and T.
The third pin is usually for an internal temperature sensor, to ensure safety during charging. Cheap knock-off batteries sometimes have a dummy sensor that returns a "temp OK" value regardless of actual temperature. Some higher-end batteries have internal intelligence for charge control and status monitoring, in which case the third pin is for communications.
{ "source": [ "https://electronics.stackexchange.com/questions/10588", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3125/" ] }
10,731
I have this DC socket: Which I intend to wire up to a circuit to provide power. I have a DC power supply which has a positive tip polarity. I'm good to go and wire up the socket but I am a little stuck as there are 3 pins on my socket. From the look of the socket it's fairly obvious which pin is for the center pin (which I will wire pos) so I assume the other two are for the outside (which I will wire neg). Is this assumption correct? And if so why two pins instead of one?
This is similar to the female single-channel audio connector, where two of the 3 pins are shorted together when nothing is plugged in; when a plug is inserted, one of those 2 pins (the one that usually goes to a battery) is left floating, and the remaining 2 pins are connected to your plug. This diagram might help you understand better. Here pin 3 goes to the battery, and pins 1 & 2 to the circuit. When you plug into the socket, pins 1 & 2 provide the power to the circuit and the battery is disconnected. When unplugged, the battery is connected back to the circuit. There are other configurations in which it can be used, but this is a simple example that should help you understand how these sockets work.
{ "source": [ "https://electronics.stackexchange.com/questions/10731", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2739/" ] }
10,761
Should an LM317 (specifically a TI LM317MQDCYR ) rated for 500ma output current really be getting hot to touch when only pulling 170ma at 5V? Package is SOT-223, input is 9V. Is this normal?
Dissipation is (9-5)V*170mA, or 680 mW. SOT223 thermal resistance is 62.5°C/W maximum. So the junction temperature is 62.5 * 0.680, or 42.5 degrees above ambient. And the thermal resistance to the case is about 15 °C/W, so the case will be 42.5-(15*0.680) or 32.3 degrees above ambient. If ambient is 25°C, the case is at 57°C. Does this sound right? (You should look up and substitute your own thermal values, I just grabbed some from the Zetex datasheet. But I expect all SOT223 will have similar thermal characteristics.) Notice that the 500 mA rating has nothing at all to do with it. Temperature is determined by the power dissipated, which is set by the input-output difference and the output current, and the thermal characteristics of the package and surroundings.
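The arithmetic above is easy to script. The thermal resistances below are the ones quoted in the answer (grabbed from a Zetex SOT-223 datasheet), so substitute your own part's values:

```python
THETA_JA = 62.5   # deg C per watt, SOT-223 junction-to-ambient (datasheet value)
THETA_JC = 15.0   # deg C per watt, junction-to-case

def regulator_temps(v_in, v_out, i_out, t_ambient=25.0):
    """Linear regulator: dissipation is (Vin - Vout) * Iout; junction and
    case temperatures follow from the package thermal resistances."""
    p = (v_in - v_out) * i_out
    t_junction = t_ambient + p * THETA_JA
    t_case = t_junction - p * THETA_JC
    return p, t_junction, t_case

p, t_j, t_c = regulator_temps(9.0, 5.0, 0.170)
# p ~ 0.68 W, junction ~ 67.5 C, case ~ 57 C at 25 C ambient
```

Note again that the 500 mA rating never appears: only the voltage drop, the current, and the thermal path matter.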
{ "source": [ "https://electronics.stackexchange.com/questions/10761", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2654/" ] }
10,851
It seems common that PBX and other telephone hardware use a positive-ground power supply, where the "hot" line is at -48v. What's the reason for that?
I remember this coming up many years ago in the alt.telecom newsgroup and I managed to find it for you (aren't I kind?): Why most telecommunication equipment use -48V supply voltage In summary (from the thread): "From a book I've been reading lately (Instruction in Army telegraphy and telephony, vol 1, 1917), the reason is for fault tracing. An earth fault will tend to decrease in resistance, i.e. tend towards a dead earth, if the earth is positive with respect to the conductor, thus enabling it to be located." "48V (or in the UK, 50V) seems to be arbitrary, many of the earlier CB systems of the Post Office used 22 volts or 40 volts. The automatic systems in some early exchanges of the Siemens 17 type used 60 volts IIRC. 48 to 50V may have been a happy medium (remembering that years ago, telecommunication companies were VERY conservative, and standardized across their entire network), allowing the use of long thin lines, but not risking electrocution of linemen or overheating on short circuits." "A negative voltage is really a positive earth potential. If your positive conductor (+) is earth, you can't short it to earth. It can be shorted to the exchange earth connection if it comes into contact with a suitable conductor in the cable, but as this 'earth' is the negative battery terminal (technically) you don't get the massive current flow to earth for a conductor to earth. The only way you can get massive current flow is if you short the pair together or put the positive earth to a foreign wire connected to the negative battery terminal." "corrosion reduction—the leakage to earth that would occur if insulation were damaged opposes the corrosion." "Why negative? AFAIK to reduce electrolytic corrosion of buried cables, which were lead-sheathed."
{ "source": [ "https://electronics.stackexchange.com/questions/10851", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2773/" ] }
10,909
I am trying to decide if the information on the wikipedia page http://en.wikipedia.org/wiki/Coulomb#In_everyday_terms is reasonable. In particular the statements that a lightning bolt has about 15 coulomb, where a battery has 5000. My first instinct is that this is clearly wrong (a lightning bolt being such an energetic event, and a battery seeming mostly innocent), but then on reflection a lightning bolt does last only an extraordinarily short amount of time. In the end I am not sure how to check if this makes sense.
A common source of confusion is the difference between energy and power. A Snickers bar, for example, has more energy in it than a hand grenade. One might call a grenade exploding "energetic", but what's key here is its power (P), or ability to convert energy (E) extremely rapidly, in a very short amount of time (t): $$P = \frac{E}{t}$$ Similarly, there is an analogy in the electrical world, where charge (Q), current (I), voltage (V), power and energy do not always go hand-in-hand. The equations that relate all those are as follows: $$ I = \frac{Q}{t} $$ $$ P = I{\cdot}V $$ $$ E = P{\cdot}t = I{\cdot}V{\cdot}t $$ $$ Q = I{\cdot}t $$ In the case of a lightning bolt, V and I are both extremely high, so the power is extreme, but t is fairly low, so the high current and short time mitigate each other somewhat, so there isn't an immense amount of charge. Of note, all that voltage influences is how much energy the same amount of charge carries. Plugging in some numbers, 120 kA & 30 µs, we get 3.6 coulombs, close to what you have. The Wikipedia article, however, says there is a fair bit of variability ("up to 350 C"), but they are within a couple orders of magnitude, and having seen a few lightning storms, some strikes are big and meaty, others not so much. In a battery, the voltage is pathetic compared to a lightning strike, but that's irrelevant for calculating charge. What's key is that it's able to provide a current that's several orders of magnitude less for many orders of magnitude longer. One milliamp for one hour (1 mA·h) is equal to 3.6 coulombs (look, the same as our 120 kA, 30 µs lightning strike), and batteries often have capacities in the thousands of mA·h (2000 mA·h is typical for an AA cell).
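Since Q = I·t, the whole comparison is a one-liner; the 2000 mA·h figure is the typical AA capacity mentioned above:

```python
def charge_coulombs(current_amps, seconds):
    """Q = I * t"""
    return current_amps * seconds

bolt = charge_coulombs(120e3, 30e-6)   # 120 kA for 30 us
aa_cell = charge_coulombs(2.0, 3600)   # 2000 mA sustained for one hour (2000 mA.h)
# bolt carries 3.6 C; the battery delivers 7200 C, 2000 times more charge
```

The battery wins on charge by a factor of 2000, even though the bolt wins on power by far more.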
{ "source": [ "https://electronics.stackexchange.com/questions/10909", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1773/" ] }
10,962
What is the difference between "forward" and "reverse" voltages when working with diodes and LEDs? I realize this question is answered elsewhere on the interwebs such as wikipedia, but I am looking for a short summary that is less of a technical discussion and more a useful tip to someone using diodes in a hobby circuit.
The forward voltage is the voltage drop across the diode if the voltage at the anode is more positive than the voltage at the cathode (if you connect + to the anode). You will be using this value to calculate the power dissipation of the diode and the voltage after the diode. The reverse voltage is the voltage drop across the diode if the voltage at the cathode is more positive than the voltage at the anode (if you connect + to the cathode). This is usually much higher than the forward voltage. As with forward voltage, a current will flow if the connected voltage exceeds this value. This is called "breakdown". Common diodes are usually destroyed by this, but in Zener diodes the effect is used deliberately.
{ "source": [ "https://electronics.stackexchange.com/questions/10962", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2028/" ] }
11,004
I’m trying to figure out how a step-up transformer works. A step-down transformer is simple and logical enough; you start out with a higher voltage and end with less, the remainder being wasted as heat. But with a step-up transformer, you end up with more voltage than you start with. I tried looking it up, but all I can find (whether online or even in some electronics texts) is general information on how transformers work (induction, Faraday’s law, construction, etc.) and explanations of the difference between step-ups and step-downs in terms of the number of turns, but not specifically how step-ups result in more voltage. Where does that extra voltage come from? Not magic…
I think what you're missing is the current... Step down transformers change a high voltage/low current, to low voltage/high current. Step up transformers change a low voltage/high current, to high voltage/low current. So, in an ideal 100% efficient transformer, the power doesn't change and no heat will be generated by the transformer, i.e. the power in = the power out, because Power = Volts x Amps.
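A quick sketch of that conservation for an ideal (lossless) transformer; the numbers are made up:

```python
import math

def ideal_transformer(v_primary, i_primary, turns_ratio):
    """Ideal transformer: secondary voltage scales by the turns ratio
    (Ns/Np), secondary current by its inverse, so power is unchanged."""
    return v_primary * turns_ratio, i_primary / turns_ratio

v_s, i_s = ideal_transformer(120.0, 0.5, 10.0)   # 1:10 step-up
# 1200 V at 0.05 A on the secondary: still 60 W, same as 120 V * 0.5 A
assert math.isclose(v_s * i_s, 120.0 * 0.5)
```

So the "extra" voltage is paid for in current: a real transformer delivers slightly less than this because of winding and core losses.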
{ "source": [ "https://electronics.stackexchange.com/questions/11004", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2042/" ] }
11,048
I've seen that a number of schematics will connect the center (common) pin of a potentiometer to one or the other leg, and it then functions more like a rheostat. Is that how a rheostat is wired internally? What's the difference between a potentiometer and a rheostat? Finally, why connect the common to a leg at all on a potentiometer, instead of just ignoring the unused leg?
The correct term for the common terminal of a potentiometer is the slider. A rheostat is simply a variable resistance used to control power to a load and you are correct about the wiring. Only the slider and one other terminal are used. A potentiometer uses all three terminals, enabling a variable voltage or signal to be tapped off from the slider. Potentiometers and rheostats are made the same way, but rheostats are usually much "beefier", as they are generally used in high-power situations. The slider is often connected to one or other terminal for safety reasons, in case it loses contact with the track.
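To see the divider behaviour numerically, here is a sketch of the slider voltage (values are illustrative; a real load on the slider pulls the tap voltage down because it sits in parallel with the lower part of the track):

```python
def pot_divider(v_in, r_total, frac, r_load=None):
    """Slider voltage of a potentiometer wired as a voltage divider.
    frac is the slider position (0 = bottom end, 1 = top end); an
    optional load from slider to ground loads the lower half of
    the track."""
    r_lower = r_total * frac
    r_upper = r_total - r_lower
    if r_load is not None:
        r_lower = r_lower * r_load / (r_lower + r_load)
    return v_in * r_lower / (r_upper + r_lower)

# Mid-travel on a 10 k pot across 10 V:
v_unloaded = pot_divider(10.0, 10e3, 0.5)             # 5.0 V
v_loaded = pot_divider(10.0, 10e3, 0.5, r_load=5e3)   # ~3.33 V
```

A rheostat, by contrast, is just the `r_total * frac` resistance between the slider and one end, with no voltage-divider action.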
{ "source": [ "https://electronics.stackexchange.com/questions/11048", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2028/" ] }
11,457
I'm looking to design a PCB that can reliably survive constant impact. The board will be rigidly mounted to an enclosure that will protect the board from actually hitting anything. The nature of the impact would be similar to a bowling ball, or a hammer head - not what I would consider vibration, but frequent hits from multiple directions. As part of the device functionality, I want to measure the acceleration of the board, so dampening the impact in any way is not preferable. I don't have any measured acceleration values (G's) to provide as a baseline, and I don't really have any experience in this area. As such, I have a few closely related generic questions: What is the most force that would be OK on a board with no impact hardening measures taken? (Am I worrying too much about a non-issue?) Are there any design practices that should be followed for the PCB? What are the weak points in a design that lead to mechanical failure? Are there parts that should be avoided for a more robust design? At what force levels should I start worrying about the safety of the parts themselves?
This is just general stuff, you should really try to put a bound on the expected acceleration forces, the period and duration of those forces, thermal conditions, and expected angles of impact to get the information you need to make the design robust. What is the most force that would be OK on a board with no impact hardening measures taken? (Am I worrying too much about a non-issue?) This is very difficult to put a single number on; it depends on the types of components used and the direction/frequency of the hits. Are there any design practices that should be followed for the PCB? Lots of attachments to something solid. One of the most likely failure modes is the PCB flexing, which can cause the solder joints on the PCB to crack, causing intermittent or complete failure of the connection. I would try to keep the PCB as compact as you can while providing as much attachment to something that won't flex (steel enclosure) as you can. The smaller the PCB the smaller the 'overall flex' of the board. Something like a 4+ layer design with solid copper power and ground planes should also add to the rigidity of the PCB but can cause additional thermal flex. Depending on what your needs are, there are specialized PCB substrates that are more rigid than your stock off-the-shelf FR-4, such as substrates which employ carbon fiber composites vs fiberglass. What are the weak points in a design that lead to mechanical failure? Board flex as mentioned above can cause solder joint cracking. Stiffening of the PCB can help. You could also not use stock solder, but rather a conductive adhesive such as silver conductive epoxy. You can also use a conformal coating on the PCB which will hold surface mount components in place as well as add some stiffness to the PCB. Large items: Lightweight surface mount devices are the best parts to use; large heavy items that sit further from the PCB will be the worst parts to use.
Things like large aluminium electrolytic caps, tall inductors, transformers, etc will be the worst. They will impart the most force on their leads and solder connections to the PCB. If large devices are needed, use additional attachment to the PCB. Use non-conductive, non-corrosive epoxy or something like that to attach them to the PCB or use a part with an additional PCB support. Be sure to account for the added thermal resistance when calculating the device's ability to dissipate power if using epoxy or conformal coatings. Connectors: Any connector going off the board will get beat on, so make sure it's a solid locking type and rated for the expected G-forces. Make sure the connector's attachment to the PCB is solid. Pure surface mount types without a through-hole attachment to the board are probably a bad idea. These usually require through-holes in the PCB near the edge of the PCB. Make sure your PCB substrate is strong enough to support the forces on these holes; being so close to the edge, the strength of the PCB around the hole is much less. If you need a connector that leaves the enclosure, use a locking panel mount connector and solder leads to the PCB; this will put the stress on the connector/enclosure and not on the PCB. Are there parts that should be avoided for a more robust design? See the list above, but keep all parts as light and as close to the PCB as possible. At what force levels should I start worrying about the safety of the parts themselves? Again this is hard to put a number on. If the device is getting hit 'edge on' to the PCB then your concern is lateral shear forces. What force causes a problem there is dependent on the IC. A large heavy IC with few, small attachments to the PCB is probably the worst case. Maybe a tall pulse transformer or something like that. A lightweight, short IC with many attachments is probably strongest. Something like a 64-pin QFP, even better if it has a large center pad.
Some useful reading on this topic: http://www.utacgroup.com/library/EPTC2005_B5.3_P0158_FBGA_Drop-Test.pdf Some parts may be internally damaged by high G-forces; this would be on a part-by-part basis but would mostly be limited to devices with movable internal parts. MEMS devices, transformers, mag-jacks, etc. Comments Have you considered using 2 boards? One small board with the accelerometer which is actually stiffly attached to the enclosure and a second board with the rest of the electronics on it which can then be mounted with a shock absorption system. The shock system could be as simple as rubber supports or as complex as the systems used in hard drives depending on needs. You're going to need a pretty fast processor and a pretty fast, wide-range accelerometer if you want to get accurate measurements of impact events such as getting hit with a hammer.
{ "source": [ "https://electronics.stackexchange.com/questions/11457", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/638/" ] }
11,884
You need 4 channels to determine your position (including elevation), and I can understand that a few extra channels increase accuracy. However, there are maximum 12 satellites in view at any time, so why have receivers with more channels? I've seen receivers with 50 or even 66 channels , that's more than the number of satellites up. I don't see any advantages in this explosion of number of channels, while I presume that it does increase the receiver's power consumption. So, why do I need 66 channels?
The answer is complex due to the way the GPS system operates, so I'm going to simplify a number of things so you understand the principle, but if you are interested in how it's really implemented you'll need to go find a good GPS reference. In other words, what's written below is meant to give you an idea of how it works, but is technically wrong in some ways. The below is not correct enough to implement your own GPS software. Background All the satellites transmit on essentially the same frequency. They are technically walking all over each others' signals. So how does the GPS receiver deal with this? First, each satellite transmits a different message every ms. The message is 1023 bits long, and is generated by a pseudo-random number generator. The GPS receiver receives the entire spectrum of all the transmitters, then it performs a process called correlation - it generates the specific sequence of one of the satellites, multiplies it by the signal input, and if its signal matches a satellite's signal exactly then the correlator has found one satellite. The mixing essentially pulls the satellite's signal out of the noise, and verifies that 1) we have the right sequence and 2) we have the right timing. However, if it hasn't found a match, it has to shift its signal by one bit and try again, until it's gone through all 1023 bit periods and hasn't found a satellite. Then it moves on to trying to detect a different satellite at a different period. Due to the time shifting (1023 bits, 1,000 transmissions per second), in theory it can completely search a code in one second to find (or determine there's nothing) at a particular code. Due to the code shifting (there are currently 32 different PRN codes, one each for each satellite) it can therefore take 30+ seconds to search each satellite.
Further, Doppler shift, due to the speed of the satellite relative to your ground speed, means that the timebase could be shifted by as much as +/- 10kHz, therefore requiring searching about 40 different frequency shifts for a correlator before it can give up on a particular PRN and timing. What this means This leaves us with a possible worst case scenario (one satellite in the air, and we try everything but the exact match first) of a time to first fix off a cold start (i.e. no information about the time or location of the receiver, or location of the satellites) of 32 seconds, assuming we don't make any assumptions or perform any clever tricks, the received signal is good, etc. However, if you have two correlators, you've just halved that time because you can search for two satellites at once. Get 12 correlators on the job and it takes less than a few seconds. Get a million correlators and in theory it can take a few milliseconds. Each correlator is called a "channel" for the sake of marketing. It's not wholly wrong - in a sense, the correlator is demodulating one particular coded frequency at a time, which is essentially what a radio receiver does when you switch channels. There are a lot of assumptions a GPS receiver can make, though, that simplify the problem space such that a generic 12 channel receiver can get a fix, in the worst case, in about 1-3 minutes. While you can get a 3D fix with a 4 channel GPS, when you lose a GPS signal (goes beyond the horizon, or you go under a bridge, etc) then you lose 3D fix and go to 2D fix with three satellites while one of your channels goes back into correlation mode. Now your receiver starts to download the ephemeris and almanac, which allows the receiver to very intelligently search for signals. After 12 minutes or so it knows exactly which satellites should be in view.
So the search goes pretty quickly because you know the position and code for each satellite, but you still only have a 2D fix until you actually find a new satellite. If you have a 12 channel receiver, though, you can use 4 of the strongest channels to provide your fix, a few channels to lock onto backup satellites so it can switch the calculations to them if needed, and several channels to keep searching for satellites the receiver should be able to see. In this way you never lose the full 3D fix. Since you can only see up to 12 satellites, why would you need more than 12 channels? There are 24 or so GPS satellites operating at any given time, which means that from one point on the earth you can really only see half of them. But remember - you can only search for one satellite per correlator, so the primary reason to increase correlators past twelve is to improve the time to first fix, and the main reason to improve that is for power consumption. If your GPS chipset has to be powered all the time, it's a 100mW power drain all the time. If, however, you only need to turn it on once per second for only 10ms each time, then you just cut your power consumption down to 1mW. This means your cell phone, location beacon, etc can operate for two orders of magnitude longer on the same set of batteries while still maintaining a full real-time fix on their location. Further, with millions of correlators, one can do more exact searches which can help reduce the effects of radio reflections in urban canyons (tall buildings in big cities used to foul up GPS receivers with fewer correlators). Lastly, while only 4 satellites are needed to get a 3D fix, good receivers use more satellites in their position algorithms to get a more accurate fix. So only a 4 channel receiver is required, but a 12 channel receiver can get more accuracy.
Conclusion So the millions of correlators:

Speed up satellite acquisition
Reduce power consumption
Reduce the likelihood of losing a 3D fix even in urban canyons
Provide better sensitivity, allowing fixes in dense forests and even in some tunnels
Provide better positioning accuracy

Thanks to borzakk for some corrections.
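The scaling with correlator count can be put into a rough back-of-the-envelope model. The numbers here are assumptions for illustration (32 PRNs, 1023 code phases, ~41 Doppler bins, 1 ms dwell per hypothesis); real receivers use warm-start data and other tricks, so actual times are far better:

```python
def cold_search_seconds(n_correlators, n_prns=32, code_phases=1023,
                        doppler_bins=41, dwell_ms=1.0):
    """Worst case: every (PRN, code phase, Doppler bin) hypothesis is
    tested for dwell_ms, split evenly across parallel correlators."""
    hypotheses = n_prns * code_phases * doppler_bins
    return hypotheses * dwell_ms / 1000.0 / n_correlators

t_1 = cold_search_seconds(1)     # a single channel: ~22 minutes
t_12 = cold_search_seconds(12)   # ~2 minutes
t_66 = cold_search_seconds(66)   # ~20 seconds
```

The 32-second figure in the answer corresponds to searching a single Doppler bin; with the Doppler dimension included the payoff from extra correlators is even more dramatic.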
{ "source": [ "https://electronics.stackexchange.com/questions/11884", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2064/" ] }
12,014
In a car battery charger, I found a strange rectifier. Can someone explain to me how does it work? Here I have a transformer which is unmarked. By measuring resistance between its output terminals, I determined that the plus cable is connected to the center of the transformer. The two outer connections both have a diode connected as shown on the image. To me it looks similar to this: rectifier, but the diodes are turned backwards. I checked their orientation several times with two different multimeters and I'm sure that I've drawn them correctly.
Maybe these illustrations will help: Let's assume that the start and end of the primary and secondary windings are such that the 'starts' are at the top of the picture and the 'finishes' are at the bottom. When primary current flows from the top of the picture to the bottom, the top of the primary winding is at higher voltage than the bottom. This will induce a voltage in the secondary winding with the highest potential at the top of the winding and the lowest at the bottom (and a potential somewhere in between, at the center-tap). You can tell quickly that D1 will be reverse-biased, because the voltage at the top is higher than at the center-tap. Current will flow however it can, which will be out of the center-tap, through D2 and into the bottom of the winding. When primary current flows from the bottom of the picture to the top, the reverse condition holds true in the secondary: D2 will be reverse-biased, and current will flow from the center-tap through the load and through D1 back to the winding.
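The net effect is a full-wave rectifier: each half-winding conducts on alternate half-cycles through its own diode, so the load sees both humps of the sine wave with only one diode drop. A sketch of that waveform (0.7 V per diode and 50 Hz mains are assumed values):

```python
import math

def rectified_output(v_peak_half, t, f_hz=50.0, v_diode=0.7):
    """Center-tapped full-wave rectifier: v_peak_half is the peak voltage
    of each half of the secondary. One diode conducts on the positive
    half-cycle, the other on the negative one, so the load sees the
    absolute value of the half-winding voltage minus one diode drop."""
    v = v_peak_half * math.sin(2 * math.pi * f_hz * t)
    return max(abs(v) - v_diode, 0.0)

# Both half-cycle peaks come out positive across the load:
peak_a = rectified_output(10.0, 0.005)   # sin = +1 at 5 ms (50 Hz)
peak_b = rectified_output(10.0, 0.015)   # sin = -1 at 15 ms
```

Compared with a four-diode bridge, this topology loses only one diode drop per half-cycle, at the cost of using only half the winding at a time.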
{ "source": [ "https://electronics.stackexchange.com/questions/12014", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1240/" ] }
12,312
I'm looking at a datasheet for a digital circuit and it specifies that the typical input hysteresis is 100 mV. What does this mean exactly?
Let's say you detect a low-to-high transition at 2.5 V. A 100 mV hysteresis would mean that the low-to-high transition is detected at 2.55 V and the high-to-low transition is detected at 2.45 V, a 100 mV difference. Hysteresis is used to prevent several quickly successive changes if the input signal contains some noise, for example. The noise could mean you cross the threshold of 2.5 V more than just once, which you don't want. A 100 mV hysteresis means that noise levels less than 100 mV won't influence the threshold passing. Which threshold applies depends on whether you go from low to high (then it's the higher threshold) or from high to low (then it's the lower one): edit Another way to illustrate hysteresis is through its transfer function, with the typical loop: As long as the input voltage remains below \$V_{T+}\$ the output is low, but if it exceeds this value the output switches to high (the up-going arrow). Then the output remains high as long as the input voltage stays above \$V_{T-}\$. When the input voltage drops below this threshold the output switches to a low level (the down-going arrow). Note: hysteresis can also be used for other purposes than increasing noise immunity. The inverter below has a hysteresis input (which makes it a Schmitt trigger, indicated by the symbol inside the inverter). This simple circuit is all you need to make an oscillator. Here's how it works. When it's switched on, the capacitor's voltage is zero, so the output is high (it's an inverter!). The high output voltage starts charging the capacitor through R. When the voltage over the capacitor reaches the higher threshold the inverter sees this as a high voltage, and the output will go low. The capacitor will now discharge to the low output via R until the lower threshold is reached. The inverter will then again see this as a low voltage, and make the output high, so the capacitor starts to charge again, and the whole thing repeats.
The frequency is determined by the resistor and capacitor values (the part's datasheet gives the exact equation). The frequency differs between the normal HCMOS ( HC ) and the TTL-compatible ( HCT ) versions because the threshold levels are different for the two parts.
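The noise-immunity behaviour described above can be sketched in a few lines of code. This is a hypothetical software model (the `make_schmitt` helper is mine, not from the answer), assuming the 2.45 V / 2.55 V thresholds from the example:

```python
# A minimal software Schmitt trigger with the 100 mV hysteresis described
# above: the output only rises once the input exceeds V_T+ (2.55 V) and
# only falls once it drops below V_T- (2.45 V).
def make_schmitt(v_low=2.45, v_high=2.55):
    """Return a stateful function mapping input voltages to a 0/1 output."""
    state = {"out": 0}

    def trigger(v):
        if state["out"] == 0 and v > v_high:   # rising: must exceed V_T+
            state["out"] = 1
        elif state["out"] == 1 and v < v_low:  # falling: must drop below V_T-
            state["out"] = 0
        return state["out"]

    return trigger

# A noisy signal wandering around 2.5 V: a plain 2.5 V comparator would
# toggle several times, the Schmitt trigger switches exactly once.
samples = [2.40, 2.49, 2.51, 2.48, 2.52, 2.56, 2.53, 2.58, 2.60]
schmitt = make_schmitt()
outputs = [schmitt(v) for v in samples]
print(outputs)  # [0, 0, 0, 0, 0, 1, 1, 1, 1] -- one clean transition
```

The same samples fed through a single 2.5 V threshold would produce three transitions instead of one, which is exactly the chatter hysteresis is there to prevent.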
{ "source": [ "https://electronics.stackexchange.com/questions/12312", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2968/" ] }
12,362
When reading/writing/talking about electronics, I like to understand the acronyms and mnemonics for abbreviations used for registers, functions, filenames, pin names, etc. Usually, the first time the abbreviation is used, either the context or a parenthetical note sets it off, or it's blatantly obvious. On the Microchip dsPIC parts, the TRIS register controls the data direction. I can't find a note which uses the full word/phrase that would be abbreviated or have the acronym TRIS. ('The Register Input/output Setting' is about the best I can come up with, although 'TRIS Really Is a Silly abbreviation' is a close second guess). How do you remember this? I've heard it pronounced as a word, but I'd like to know what it means to make it easier to remember, read, and write.
TRIS stands for TRIState. It means the pin is waiting for an input rather than driving a high or low signal. It's named as such because a port pin can have 3 states: Output High Output Low Input (High Impedance)
{ "source": [ "https://electronics.stackexchange.com/questions/12362", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/857/" ] }
12,390
My circuit uses mains electricity in parts of the circuit. However my AVR and other components only need 5V. So I'm using a step-down transformer to generate 12V, which is then regulated to 5V. The transformer won't convert it to DC for me, will it? It just steps the voltage down and outputs a certain voltage depending on its coils. So I would need to convert it to DC myself, right?
You are correct. The transformer will only reduce the voltage (and increase the available current), so you need to add additional circuitry to rectify, smooth and regulate your 12\$V_{AC}\$ transformer output to 5\$V_{DC}\$. This is the type of circuit you should be looking to build: The transformer reduces the voltage from mains to 12\$V_{AC}\$ (RMS). The diode bridge (known as a bridge rectifier) will convert 12\$V_{AC}\$ to roughly 15.5\$V_{DC}\$: the rectifier output follows the peak of the AC waveform (\$\sqrt{2} \times 12\,V \approx 17\,V\$), not its RMS value, minus two diode voltage drops (two diodes conduct at any instant). The first capacitor will smooth out the ripples that come from the output of the AC-to-DC bridge rectifier. The LM7805 regulator will maintain a constant voltage as the load varies. For example if you are switching a light bulb on and off, the current will go up and down, and if you didn't have a regulator then the voltage would drop as the bulb is switched on. The regulator keeps it at the 5\$V_{DC}\$ your microcontroller needs. The final small capacitor filters out any noise or interference on the regulated side of the circuit.
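The numbers above are easy to check. This is a rough back-of-envelope sketch, assuming a typical 0.7 V silicon diode drop and the standard full-wave ripple approximation \$\Delta V \approx I / (2 f C)\$ (the 100 mA load, 50 Hz mains and 1000 µF values are illustrative assumptions, not from the question):

```python
import math

# Rough numbers for the 12 V_AC -> 5 V_DC chain described above.
v_rms = 12.0       # transformer secondary (RMS)
diode_drop = 0.7   # assumed drop per silicon diode

v_peak = v_rms * math.sqrt(2)           # rectifier sees the AC peak, ~17 V
v_rectified = v_peak - 2 * diode_drop   # two bridge diodes conduct at a time
print(f"peak: {v_peak:.2f} V, rectified: {v_rectified:.2f} V")

# Smoothing-capacitor ripple estimate for full-wave rectification:
# dV ~= I / (2 * f * C), with I the load current and f the mains frequency.
i_load, f_mains, cap = 0.1, 50.0, 1000e-6
ripple = i_load / (2 * f_mains * cap)
print(f"ripple at 100 mA with 1000 uF: {ripple:.2f} V")
```

Even with a volt of ripple, the valley of the rectified voltage stays well above the few volts of headroom a 7805 needs above its 5 V output, which is why this simple chain works.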
{ "source": [ "https://electronics.stackexchange.com/questions/12390", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2824/" ] }
12,407
If I have sampled a signal using proper sampling methods (Nyquist, filtering, etc) how do I relate the length of my FFT to the resulting frequency resolution I can obtain? Like if I have a 2,000 Hz and 1,999 Hz sine wave, how would I determine the length of FFT needed to accurately tell the difference between those two waves?
The frequency resolution is dependent on the relationship between the FFT length and the sampling rate of the input signal. If we collect 8192 samples for the FFT then we will have: $$\frac{8192\ \text{samples}}{2} = 4096\ \,\text{FFT bins}$$ If our sampling rate is 10 kHz, then the Nyquist-Shannon sampling theorem says that our signal can contain frequency content up to 5 kHz. Then, our frequency bin resolution is: $$\frac{5\ \text{kHz}}{4096\ \,\text{FFT bins}} \simeq \frac{1.22\ \text{Hz}}{\text{bin}}$$ This may be the easier way to explain it conceptually but simplified:  your bin resolution is just \$\frac{f_{samp}}{N}\$, where \$f_{samp}\$ is the input signal's sampling rate and N is the number of FFT points used (sample length). We can see from the above that to get smaller FFT bins we can either run a longer FFT (that is, take more samples at the same rate before running the FFT) or decrease our sampling rate. The Catch: There is always a trade-off between temporal resolution and frequency resolution. In the example above, we need to collect 8192 samples before we can run the FFT, which when sampling at 10 kHz takes 0.82 seconds. If we tried to get smaller FFT bins by running a longer FFT it would take even longer to collect the needed samples. That may be OK, it may not be. The important point is that at a fixed sampling rate, increasing frequency resolution decreases temporal resolution. That is, the more accurate your measurement in the frequency domain, the less accurate you can be in the time domain. You effectively lose all time information inside the FFT length. In this example, if a 1999 Hz tone starts and stops in the first half of the 8192-sample FFT and a 2002 Hz tone plays in the second half of the window, we would see both, but they would appear to have occurred at the same time. You also have to consider processing time. An 8192-point FFT takes some decent processing power.
A way to reduce this need is to reduce the sampling rate, which is the second way to increase frequency resolution. In your example, if you drop your sampling rate to something like 4096 Hz, then you only need a 4096-point FFT to achieve 1 Hz bins and can still resolve a 2 kHz signal. This reduces the FFT bin size, but also reduces the bandwidth of the signal. Ultimately with an FFT there will always be a trade-off between frequency resolution and time resolution. You have to perform a bit of a balancing act to reach all goals.
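The \$\frac{f_{samp}}{N}\$ rule above is easy to demonstrate numerically. A sketch with NumPy, where I've chosen N = 20000 (an assumption for illustration, not a value from the answer) so that the bin width is 0.5 Hz — finer than the 1 Hz separation between the two tones — and both tones land exactly on bin centres:

```python
import numpy as np

# Bin width = fs / N, so telling 1999 Hz from 2000 Hz apart needs
# fs / N < 1 Hz.  With fs = 10 kHz and N = 20000 we get 0.5 Hz bins.
fs, n = 10_000, 20_000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 1999 * t) + np.sin(2 * np.pi * 2000 * t)

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(n, d=1 / fs)

# The two largest bins should correspond to the two tones.
top_two = sorted(float(f) for f in freqs[np.argsort(spectrum)[-2:]])
print(top_two)  # [1999.0, 2000.0] -- two distinct peaks, one per tone
```

Re-running the same experiment with n = 8192 (1.22 Hz bins, as in the answer) smears both tones into neighbouring bins, which is exactly the resolution limit being described.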
{ "source": [ "https://electronics.stackexchange.com/questions/12407", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/319/" ] }