53,587
I opened up a really tiny mobile phone charger I have, to see how it is designed. The entire "charger" is integrated into a small 2-pin mains plug, 1 x 1.25 x 0.5 inch, that has a USB socket for the phone's USB charging cable. I could not find anything that seems to be a transformer anywhere in the circuit, and yet I have tested that it is an isolated, well-regulated 5 volt output. The tiny flexible PCB has merely a dozen or so SMD parts, ranging from 0402 to 4516 (metric), plus the connectors at the two ends, for mains and for USB. The SMD parts all have their part numbers sanded off. How do they manage the isolation in these chargers? Responses to comments: This is a no-name "Hi-Standard USB phone charger, Extra Powerful!" I have just bought in Korea, that is supposed to work with any USB-charged cellphone. They have pictures of a half dozen different cellphones on the box, and a hydra USB cable inside that has mini-USB, micro-USB and some other types of connectors. I bought it just to see whether it is safe or not. That is why I tested for isolation first.
Although the question has provided limited details, this answer presents a somewhat different hypothesis from the standard assumption that there's an inductive coil hidden in there somewhere. The charger in question possibly uses a piezoelectric transformer instead of the magnetic (inductive) transformers usually seen for isolation. Does the charger look somewhat like this? If yes, the designers have used a piezo transformer instead of a conventional one. Interestingly, the source of this image is a paper in a Korean academic publication. This makes the hypothesis even more apt. A piezoelectric transformer designed for MHz operation, 500 mA secondary current with 5 Volt signals, using Polyvinylidene Fluoride (PVDF) as the piezoelectric medium, could be fabricated as a thick 1210 SMD part. Since the question mentions SMD parts up to 4516 metric, i.e. 1806 imperial, one of the largest of those components is probably the piezoelectric transformer providing the isolation as per the question. Some interesting information gleaned while investigating this mystery charger: Piezoelectric transformers deliver 80% to (in recent experimental versions) 90% efficiency, impressive in transformer terms. These transformers can provide galvanic isolation at multi-kV levels - of course, not in an SMD 1210 size, where the contacts would be too close together. PVDF exhibits piezoelectricity several times greater than quartz, hence it is ideal for making piezo transformers. Many LCD display CCFL backlights are made using piezoelectric transformers instead of the inductive coil ballast used in earlier versions, so it isn't really new technology. Equipment used in magnetism-sensitive areas (e.g. MRI labs) is expected to transition to non-magnetic electronics, hence piezo transformers where a transformer is needed (n.b. any current flow will still generate some magnetism, courtesy of H fields). Some articles of interest: Piezoelectric Transformer With Very High Power Density; Panasonic's Piezo transformers for LCD backlights - of course, much bigger than the dimensions usable in USB chargers; A comparison of magnetic and Piezo transformers by Texas Instruments. Full disclosure: I had never worked with, or even seen, piezoelectric transformers before today - the above information is new learning for me, picked up while investigating the mystery charger.
{ "source": [ "https://electronics.stackexchange.com/questions/53587", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/16523/" ] }
55,073
So I've been looking through and going through my Digital Computer Electronics book, and I came to this... It seems so simple and I understand the "point" of it, but I'm not sure I understand exactly how it works. "In a Schottky transistor, the Schottky diode shunts current from the base into the collector before the transistor goes into saturation." I guess this part confuses me above ^^^ http://en.wikipedia.org/wiki/Schottky_transistor From what I gather the Schottky Diode has a forward voltage of .25 V... So it's taking .25 V out of the Input Line (coming from the left of the picture) and putting THAT into the collector... So it'll just take less time to switch... Because there is .25 V less coming in the base? Or is adding .25 V to the collector so when the Transistor turns "on" it'll already have a little bit flowing through it (since .25 V isn't enough to actually flow through when it's off?)? Wikipedia entry is confusing. I feel pretty stupid for asking such a simple question lol.
What happens is: As the base voltage rises, the transistor begins to turn on and its collector voltage drops (assuming it has a collector resistor or similar current limiting element). Normally a typical bipolar transistor's saturation voltage is around 200mV or less. When the collector voltage Vce drops below Vbe - Vschottky though, the Schottky starts to conduct (now being forward biased) and the base current starts to flow through it into the collector. This "steals" current from the base, preventing the transistor turning on further and the collector reaching its saturation voltage. The system will reach a state of equilibrium, since the transistor can't turn on any more without its base current dropping (you could see it as a form of negative feedback), and will settle just around Vbe - Vschottky (e.g. somewhere in the ~700mV to ~450mV region, as opposed to ~200mV). So, to clarify things, the formula for Vce is: Vce = Vbe - Vschottky. If we have this circuit and apply a ramped voltage from 0-2V: We get simulation results like this: Note that when Vcollector drops below ~700mV, the Schottky begins to conduct and the collector voltage levels out at around 650mV. If we remove the Schottky, then: We can see the collector drops all the way to 89mV (I used the cursor as it's hard to see from the graph)
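To make the clamp arithmetic concrete, here is a tiny C sketch (the base-emitter and Schottky drops are assumed round numbers, not values from the simulation above):

    #include <stdio.h>

    int main(void)
    {
        /* Assumed round numbers, not taken from the simulation */
        double vbe       = 0.70;  /* base-emitter drop of the BJT, volts */
        double vschottky = 0.30;  /* forward drop of the Schottky clamp  */

        /* The clamp settles at Vce = Vbe - Vschottky, well above hard saturation */
        printf("Clamped Vce = %.2f V (vs ~0.2 V saturated)\n", vbe - vschottky);
        return 0;
    }

With these example values the collector is held at 0.40 V, so the transistor never accumulates the stored charge of deep saturation and can switch off much faster.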
{ "source": [ "https://electronics.stackexchange.com/questions/55073", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/-1/" ] }
55,170
I have a small 12v aquarium pump (submersible), and I'd like to hook it up to a small solar panel and put it in an outside pond. This would mean cutting some leads and soldering. Pretty simple job. For inside work, I would usually just put some electrical tape or heatshrink around the joins, but in this case, the joins will be exposed to the weather (but not submersed in water). What's the best way of waterproofing joins like this? Heatshrink + silicon? Hot glue? Something else? If there are multiple options, what are the pros and cons of each?
In order from lowest to highest cost (roughly): 1) For something simple like this, grease-filled wire nuts aren't the most beautiful option, but you can readily pick up a pack of them for a few bucks at your local hardware store. Pros: Cheap, available, work. Cons: Messy if you need to disassemble, clunky, look bad. 2) You could also use grease-filled IDC (insulation displacement) connectors. Pros: Cheap, available. Cons: Like other IDC connectors, not great at high current, messy to service. 3) Use adhesive-lined heatshrink tubing. It works just like heatshrink, but has an inner liner of hot melt glue which will melt as the tube shrinks, creating a (mostly) environmentally protected connection. Pros: Moderately easy to service, fun to shrink, looks clean. Cons: More expensive, is stiff when finished (can cause stress risers in the wire). 4) Use adhesive-lined crimp splices or adhesive-lined solder splices. The outer sleeve is similar to adhesive-lined heatshrink but is usually transparent for inspection. Crimp style are crimped, then shrunk. Solder style have low temp solder pre-applied inside; just insert the wire, apply heat, and you've got a waterproof solder splice. Pros: Professional, adhesive layer can be inspected to ensure full coverage. Cons: Really, really expensive.
{ "source": [ "https://electronics.stackexchange.com/questions/55170", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/9482/" ] }
55,233
Is there a way to setup a voltage supply with voltage jitter/noise? I want to experiment with filtering out noise on various voltages etc. but not sure how to configure LTSpice to create a noisy voltage supply.
Yes, you can inject noise using the arbitrary voltage (or current) source, then use things like the random or white function to create some noise. Here is an example circuit (I separated the noise from the signal just to make things clearer - obviously you can combine them in one function if you wish): Simulation: All the functions are detailed in the help under Circuit Elements -> Arbitrary Behavioral Voltage or Current Sources. Noise simulation mode Also, just in case you were not aware, SPICE has a noise simulation mode; to quote from the help files: .NOISE -- Perform a Noise Analysis This is a frequency domain analysis that computes the noise due to Johnson, shot and flicker noise. The output data is noise spectral density per unit square root bandwidth. Syntax: .noise V(<out>[,<ref>]) <src> <oct, dec, lin> <Nsteps> <StartFreq> <EndFreq> Basic example: Simulation: The above is rather boring as it only models the resistor noise (I stepped the resistor through various values to show how the Johnson noise increases with resistance), but it can be very useful with more complex circuits containing diodes/transistors/opamps/etc.
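If it helps to have something concrete to paste in, here is a minimal netlist along those lines (component values are arbitrary; check the exact white() semantics in your LTspice version's help):

    * Hypothetical example: a 5 V rail with ~50 mV of noise, then an RC low-pass
    B1 noisy 0 V=5 + 50m*white(1e6*time) ; white(x) returns a new random value
    *                                      each time x crosses an integer, so
    *                                      1e6*time gives roughly 1 MHz noise
    R1 noisy out 1k
    C1 out 0 100n                        ; fc = 1/(2*pi*R*C), about 1.6 kHz
    .tran 0 10m
    .end

Plotting V(noisy) against V(out) should show the RC filter stripping most of the injected noise off the rail.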
{ "source": [ "https://electronics.stackexchange.com/questions/55233", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/15299/" ] }
55,236
I hope this question won't be closed as too subjective. I'd like to know best practice - how to make traces on universal PCBs with individual holes and no traces (like the following image). My idea is to bend the ends of discrete components and use them to make traces to other components. Is this approach acceptable, or nasty? Although this is called "prototype" PCB, I'd like to use it for my simple final circuits (low frequency, low current applications), because I hope it will save time.
Protoboards are yours to use however you like; if it works, it works. The three common methods of using them are jumper wire, solder bridges, and component leads - or all three, depending on your needs. Solder bridges take a lot of solder, especially long ones. They are good for two or three adjacent points. Using the leads, or bare wire, is great for straight lines and buses, using less solder than solder bridges. And jumper wire is best for when you can't/don't want to go around an existing solder joint. It all really boils down to what you need. Of course, solder bridges can be very sloppy if you don't have practice at them. And using too many jumper wires or bare leads looks ugly and not well thought out. But this is your project; you can figure out how to mount the parts, and what's acceptable or not. If you need much of either, what you really want to do is think about the placement of your components: move stuff around to minimize the need for jumpers or long bridges.
{ "source": [ "https://electronics.stackexchange.com/questions/55236", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/7575/" ] }
55,270
I am using the MCP16322 buck regulator, powered from 12V and outputting 5V at 2A. Is it OK to connect the outputs of two of these in parallel? Does connecting the outputs in parallel mess up the maximum capacitance values on the output of the regulators? Is it better to connect the outputs in parallel via diodes? The diodes will cause a 0.7V drop though, which I'd rather avoid. Here is the application circuit.
Directly connecting the outputs of multiple regulators, switched or linear, is inadvisable for the following reasons: A marginal difference in output voltage would cause high currents to flow between the regulator output pins, potentially damaging one of the regulators. The MCP16322 is rated for 2% precision, hence for a 5 Volt nominal output, one regulator could be at 4.9 Volts, the other at 5.1 Volts. The 0.2 Volt gap would cause current flow between the outputs, limited only by the rail impedance of the regulators. Any delay in powering up or powering down of either regulator would cause a back-feed from the powered regulator to the non-powered one. By design, the approach stated in the question will have one of the regulators operating while the other may not be - if one of the power sources is off at a given time. This is a failure mode with a strong likelihood of device damage. Even if the two regulators were powered by a common source, there will be mismatches in power-up timing while the two oscillators are starting up. This is why sequencing of power supplies is required, and there are special-purpose parts for this sequencing. There will be higher peak voltage / peak current demands on the output stage capacitors of the regulators, due to additive effects of the (non-synchronized) ripple voltages of the two. A buck controller that supports synchronization and sequencing would be required, instead of the selected device. If the design proposed in the question is used as-is, even if there is no immediate failure, component deterioration would reduce the expected longevity of the device due to repeated exposure to stresses it was not designed for. The solution: Instead of a diode-OR of the outputs of the two buck regulators, use diodes to merge the 12 Volt input sources. The design can then use a single buck regulator instead of multiple. The datasheet indicates that the regulator will not have any trouble using a 11.3 Volt input instead of 12 Volts, to produce a regulated 5 Volt output as desired. This article about sequencing of multiple voltage rails might be useful reading; it discusses the sequencing and component degradation issues.
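To put a rough number on the first point above, here is a back-of-the-envelope C sketch (the rail impedance is an assumed illustrative figure, not a datasheet value):

    #include <stdio.h>

    int main(void)
    {
        double vnom = 5.0;               /* nominal output voltage        */
        double tol  = 0.02;              /* 2% regulator tolerance        */
        double dv   = 2.0 * vnom * tol;  /* worst-case gap: 0.2 V         */

        /* ASSUMED combined output/rail impedance -- not from the datasheet */
        double z_rail = 0.05;            /* 50 mOhm, for illustration      */

        printf("Gap %.1f V -> circulating current ~%.0f A\n", dv, dv / z_rail);
        return 0;
    }

Even with that generous 50 mOhm assumption, the circulating current comes out around 4 A, twice the 2 A rating of the part, before the load draws anything at all.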
{ "source": [ "https://electronics.stackexchange.com/questions/55270", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5463/" ] }
55,345
This question stems from notations on schematics and seemingly conflicting information I'm seeing. I suspect that I'm seeing different vernacular for the same concepts--but I'm in a place where no one's ever told me that an "elevator" is a "lift". Then again, I may have the concept completely off, and need to be schooled so I don't blow up my workshop. :) With DC: Batteries have +/- terminals. Most schematics I see show the circuit with voltage in and ground. I've heard that most schematics don't trace the return path to the negative terminal because that's understood and it doesn't need to clutter everything up. I've also heard that on a DC circuit, ground is SYNONYMOUS with the negative terminal. On schematics, I've seen V-in and ground, I've also seen V-in, ground, and a separate trace connecting to the negative terminal. Then, we move to AC. There's a hot wire (positive), a neutral wire, and ground. I ~assume~ that in an AC circuit, positive correlates to positive, neutral to negative, and ground to ground. Transformers will correlate the +/- when changing DC. What are the facts and what are the myths? How can I tell if I need to ground something to earth vs. "ground" to the negative terminal? When do I ground to the chassis of my device? Are there standard conventions in a schematic that would indicate GROUNDED ground vs. return-to-the-source ground? Or is that something you know from experience and doing an analysis of the circuit? Is it generally safe to assume I can connect ground to negative? Or are there cases in which that would be a Very Bad Thing, and how do I identify those cases? Just trying to wrap my head around +/-/ground in AC vs. DC, and how that voltage is used...
Ground means whatever is attached to this symbol in the schematic: Everything that touches this symbol in the schematic is actually connected to everything else that touches the symbol. Since so many things connect to it, this makes the schematic easier to read. Usually the negative side of a battery is attached to it, but there are many circuits that work differently. Some circuits need a negative voltage, so the positive side of a battery would be "ground". Some circuits need positive and negative voltages, in which case there could be two batteries, one with the negative side attached to ground, and the other with the positive side attached to ground. This works because voltages are relative. Put three \$10k\Omega\$ resistors in series, and attach them to a battery. The difference in voltage from one side of the battery to the other is 3V (because it's a 3V battery). The difference in voltage from one side of a resistor (any of the three) to the other side of the same resistor is 1V, because the battery's 3V is divided among 3 resistors of equal value. Since voltages are relative, ground exists as a sort of assumed reference voltage. If we say an input is "5 volts", we mean "the difference between the input and ground is five volts". In the context of AC, things aren't really different, except that tradition has done a good job of making the same term "ground" mean many things. It could still mean whatever is attached to that symbol, or it could mean that 3rd connector on the wall. More on that later. As far as the circuit is concerned, live and neutral are no different. Pick either one, and the other oscillates between a higher and lower voltage, relatively. If all you have are those two wires for reference, they are indistinguishable. The difference is more important when you consider safety. The things around you are at some particular electromotive potential (voltage). Current flows when there is a difference in potential. The neutral AC line should be about the same potential as most of the things around you, so in theory, if you touch it, and also Earth, you don't get shocked, because there is no difference in voltage. If you touch the live wire, you do get shocked, because there's a difference in potential. So, since neutral should be about the same potential as Earth, and you are probably touching Earth, neutral should in theory be safe to touch. But I wouldn't trust your life on it. There could be a faulty transformer on the pole near your house. There could be a lightning strike nearby. The house could be wired backwards. Or, since (as I mentioned) the circuit will function even if the wires are reversed, something could be plugged in backwards. In the US, one of the prongs is a bit fatter to prevent this, but you never know. This is why there's the third connector, called ground or earth. This should go to a big copper rod near your house stuck in Earth, like this: It doesn't otherwise connect to anything else. There are some times this is important for safety, and other times it's important for other reasons. Point is, it has nothing to do with the electrical power supplied to your home. How can I tell if I need to ground something to earth vs. "ground" to the negative terminal? When do I ground to the chassis of my device? If we are talking about a device that plugs into the wall, leave these questions to someone else. Each country has safety regulations, and these regulations exist for good reason. Buy a DC power supply that takes care of all that for you, connect to its output, and nothing else.
Don't connect to Earth through the 3rd pin on the wall, or you may circumvent the safety features of your power supply. If you are wondering whether the "ground" symbol on your schematic should also be connected to the box your project is in, well, it depends. Maybe you want to do that for RF shielding. Or maybe you don't, because you don't want some other device with a different idea of "ground" to touch it, which could result in noise in your circuit or melting something. In many circuits, it doesn't matter at all.
{ "source": [ "https://electronics.stackexchange.com/questions/55345", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10660/" ] }
55,389
I know the reasons for using terminating resistors on a CAN bus and how important they are. But why 120 ohms? How did this value come about? Is there any specific reason to use 120 ohms?
You need to be familiar with transmission line theory to understand the deeper physics in play here. That said, here's the high-level overview: How important termination is to your system is almost exclusively determined by how long the bus wires are. Here, length is measured in wavelengths. If your bus is shorter than one tenth of a wavelength, the termination is irrelevant (practically), since there is plenty of time for the reflections introduced by an impedance mismatch to die out. Length defined in wavelengths is a strange unit on first encounter. To convert to standard units you need to know the velocity of the wave and its frequency. Velocity is a function of the medium it travels through and the environment surrounding the medium; usually this can be estimated fairly well through the dielectric constant of the material, assuming free space surrounds that medium. Frequency is a little more interesting. For digital signals (such as those in CAN), you are concerned with the maximum frequency in the digital signal. That is well approximated by f_max = 1/(2*Tr), where Tr is the rise time (defined 30%-60% of the final voltage level, conservatively). Why it's 120 is simply a function of the design, limited by physical size. It isn't specifically important which value they picked within a broad range (for example, they could have gone with 300 Ohms). However, all devices in the network have to conform to the bus impedance, so once the CAN standard was published there can be no more debate. Here's a reference to the publication (thanks @MartinThompson).
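To turn the rule of thumb into actual cable lengths, here is a small C sketch (the rise time and velocity factor are assumed example values):

    #include <stdio.h>

    int main(void)
    {
        /* Example numbers -- rise time and velocity factor are assumptions */
        double tr = 50e-9;   /* 50 ns rise time                             */
        double c  = 3e8;     /* free-space speed of light, m/s              */
        double vf = 0.66;    /* typical velocity factor of a cable          */

        double fmax   = 1.0 / (2.0 * tr);   /* max frequency, ~10 MHz       */
        double lambda = (c * vf) / fmax;    /* wavelength on the cable      */
        double crit   = lambda / 10.0;      /* termination matters past this */

        printf("fmax = %.0f MHz, lambda = %.1f m, critical length ~ %.1f m\n",
               fmax / 1e6, lambda, crit);
        return 0;
    }

With these assumed numbers, any bus longer than about 2 m needs its termination taken seriously; a slower rise time relaxes that proportionally.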
{ "source": [ "https://electronics.stackexchange.com/questions/55389", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2574/" ] }
55,545
For a project I'd like three PICs (two slave PIC18F4620s, one master PIC18F46K22) to communicate over the I2C bus. Later on, more slaves may be added (like EEPROM, SRAM, ...). I'm writing the code for these PICs in C using the C18 compiler. I've looked around on the internet a lot, but couldn't find libraries to handle the (M)SSP peripheral. I've read the datasheets of both PICs on the (M)SSP peripheral in I2C mode but couldn't find out how to interface with the bus. So I need master and slave libraries. What do you recommend? Do you have such a library somewhere? Is it built into the compiler, and if yes, where? Is there a good tutorial somewhere on the net?
Microchip wrote application notes about this: AN734 on implementing an I2C slave, and AN735 on implementing an I2C master. There's also a more theoretical AN736 on setting up a network protocol for environmental monitoring, but it isn't needed for this project. The application notes work with ASM, but that can be ported to C easily. Microchip's free C18 and XC8 compilers have I2C functions. You can read more about them in the Compiler libraries documentation, section 2.4. Here's some quick start info:

Setting up

You already have Microchip's C18 or XC8 compiler. They both have built-in I2C functions. To use them, you need to include i2c.h:

    #include <i2c.h>

If you want to have a look at the source code, you can find it here:

C18 header: installation_path /v x.xx /h/i2c.h
C18 source: installation_path /v x.xx /src/pmc_common/i2c/
XC8 header: installation_path /v x.xx /include/plib/i2c.h
XC8 source: installation_path /v x.xx /sources/pic18/plib/i2c/

In the documentation, you can find in which file in the /i2c/ folder a function is located.

Opening the connection

If you're familiar with Microchip's MSSP modules, you'll know they first have to be initialized. You can open an I2C connection on an MSSP port using the OpenI2C function. This is how it's defined:

    void OpenI2C (unsigned char sync_mode, unsigned char slew);

With sync_mode, you can select if the device is master or slave, and, if it's a slave, whether it should use a 10-bit or 7-bit address. Most of the time, 7 bits are used, especially in small applications. The options for sync_mode are:

SLAVE_7 - Slave mode, 7-bit address
SLAVE_10 - Slave mode, 10-bit address
MASTER - Master mode

With slew, you can select if the device should use slew rate control. More about what it is here: What is slew rate for I2C?

Two MSSP modules

There's something special about devices with two MSSP modules, like the PIC18F46K22. They have two sets of functions, one for module 1 and one for module 2. For example, instead of OpenI2C(), they have OpenI2C1() and OpenI2C2(). Okay, so you've set it all up and opened the connection. Now let's do some examples:

Examples

Master write example

If you're familiar with the I2C protocol, you'll know a typical master write sequence looks like this:

    Master : START | ADDR+W |     | DATA |     | DATA |     | ... | DATA |     | STOP
    Slave  :               | ACK |      | ACK |      | ACK | ... |      | ACK |

At first, we send a START condition. Consider this picking up the phone. Then, the address with a Write bit - dialing the number. At this point, the slave with the sent address knows he's being called. He sends an Acknowledgement ("Hello"). Now, the master device can go send data - he starts talking. He sends any number of bytes. After each byte, the slave should ACK the received data ("yes, I hear you"). When the master device has finished talking, he hangs up with the STOP condition. In C, the master write sequence would look like this for the master:

    IdleI2C();                        // Wait until the bus is idle
    StartI2C();                       // Send START condition
    IdleI2C();                        // Wait for the end of the START condition
    WriteI2C( slave_address & 0xfe ); // Send address with R/W cleared for write
    IdleI2C();                        // Wait for ACK
    WriteI2C( data[0] );              // Write first byte of data
    IdleI2C();                        // Wait for ACK
    // ...
    WriteI2C( data[n] );              // Write nth byte of data
    IdleI2C();                        // Wait for ACK
    StopI2C();                        // Hang up, send STOP condition

Master read example

The master read sequence is slightly different from the write sequence:

    Master : START | ADDR+R |     |      | ACK |      | ACK | ... |      | NACK | STOP
    Slave  :               | ACK | DATA |     | DATA |     | ... | DATA |      |
Again, the master initiates the call and dials the number. However, he now wants to get information. The slave first answers the call, then starts talking (sending data). The master acknowledges every byte until he has enough information. Then he sends a Not-ACK and hangs up with a STOP condition. In C, this would look like this for the master part:

    IdleI2C();                        // Wait until the bus is idle
    StartI2C();                       // Send START condition
    IdleI2C();                        // Wait for the end of the START condition
    WriteI2C( slave_address | 0x01 ); // Send address with R/W set for read
    IdleI2C();                        // Wait for ACK
    data[0] = ReadI2C();              // Read first byte of data
    AckI2C();                         // Send ACK
    // ...
    data[n] = ReadI2C();              // Read nth byte of data
    NotAckI2C();                      // Send NACK
    StopI2C();                        // Hang up, send STOP condition

Slave code

For the slave, it's best to use an Interrupt Service Routine or ISR. You can set up your microcontroller to receive an interrupt when your address is called. That way you don't have to check the bus constantly. First, let's set up the basics for the interrupts. You'll have to enable interrupts, and add an ISR. It's important to know that PIC18s have two levels of interrupts: high and low. We're going to set I2C as a high priority interrupt, because it's very important to reply to an I2C call. What we're going to do is the following: Write an SSP ISR, for when the interrupt is an SSP interrupt (and not another interrupt). Write a general high-priority ISR, for when the interrupt is high priority; this function has to check what kind of interrupt was fired, and call the right sub-ISR (for example, the SSP ISR). Add a GOTO instruction to the general ISR on the high priority interrupt vector; we can't put the general ISR directly on the vector because it's too large in many cases. Here's a code example:

    // Function prototypes for the high priority ISRs
    void highPriorityISR(void);

    // Function prototype for the SSP ISR
    void SSPISR(void);

    // This is the code at the high priority vector
    #pragma code high_vector=0x08
    void interrupt_at_high_vector(void) { _asm GOTO highPriorityISR _endasm }
    #pragma code

    // The actual high priority ISR
    #pragma interrupt highPriorityISR
    void highPriorityISR() {
        if (PIR1bits.SSPIF) {   // Check for SSP interrupt
            SSPISR();           // It is an SSP interrupt, call the SSP ISR
            PIR1bits.SSPIF = 0; // Clear the interrupt flag
        }
        return;
    }

    // This is the actual SSP ISR
    void SSPISR(void) {
        // We'll add code later on
    }

Next thing to do is to enable the high priority interrupt when the chip initializes. This can be done by some simple register manipulations:

    RCONbits.IPEN = 1;   // Enable interrupt priorities
    INTCON |= 0xC0;      // Globally enable high- and low-priority interrupts
    PIE1bits.SSPIE = 1;  // Enable SSP interrupt
    IPR1bits.SSPIP = 1;  // Set SSP interrupt priority to high

Now, we have interrupts working. If you're implementing this, I'd check it now. Write a basic SSPISR() to start blinking an LED when an SSP interrupt occurs. Okay, so you got your interrupts working. Now let's write some real code for the SSPISR() function. But first some theory. We distinguish five different I2C interrupt types: Master writes, last byte was address. Master writes, last byte was data. Master reads, last byte was address. Master reads, last byte was data. NACK: end of transmission. You can check what state you are in by checking the bits in the SSPSTAT register.
This register is as follows in I2C mode (unused or irrelevant bits are omitted):

Bit 5: D/NOT A: Data/Not address: set if the last byte was data, cleared if the last byte was an address
Bit 4: P: Stop bit: set if a STOP condition occurred last (there's no active operation)
Bit 3: S: Start bit: set if a START condition occurred last (there's an active operation)
Bit 2: R/NOT W: Read/Not write: set if the operation is a Master Read, cleared if the operation is a Master Write
Bit 0: BF: Buffer Full: set if there's data in the SSPBUF register, cleared if not

With this data, it's easy to see what state the I2C module is in:

    State | Operation | Last byte | Bit 5 | Bit 4 | Bit 3 | Bit 2 | Bit 0
    ------+-----------+-----------+-------+-------+-------+-------+-------
      1   | M write   | address   |   0   |   0   |   1   |   0   |   1
      2   | M write   | data      |   1   |   0   |   1   |   0   |   1
      3   | M read    | address   |   0   |   0   |   1   |   1   |   0
      4   | M read    | data      |   1   |   0   |   1   |   1   |   0
      5   | none      | -         |   ?   |   ?   |   ?   |   ?   |   ?

In software, it's best to use state 5 as the default, which is assumed when the requirements for the other states aren't met. That way, you don't reply when you don't know what's going on, because the slave doesn't respond to a NACK. Anyway, let's have a look at the code:

    void SSPISR(void) {
        unsigned char temp, data;
        temp = SSPSTAT & 0x2d;
        if ((temp ^ 0x09) == 0x00) {
            // 1: write operation, last byte was address
            data = ReadI2C();
            // Do something with data, or just return
        } else if ((temp ^ 0x29) == 0x00) {
            // 2: write operation, last byte was data
            data = ReadI2C();
            // Do something with data, or just return
        } else if ((temp ^ 0x0c) == 0x00) {
            // 3: read operation, last byte was address
            // Do something, then write something to I2C
            WriteI2C(0x00);
        } else if ((temp ^ 0x2c) == 0x00) {
            // 4: read operation, last byte was data
            // Do something, then write something to I2C
            WriteI2C(0x00);
        } else {
            // 5: slave logic reset by NACK from master
            // Don't do anything, clear a buffer, reset, whatever
        }
    }

You can see how we check the SSPSTAT register (first ANDed with 0x2d so that we only keep the useful bits) against bitmasks in order to see what interrupt type we have. It's your job to find out what you have to send or do when you respond to an interrupt: it depends on your application.

References

Again, I'd like to mention the application notes Microchip wrote about I2C: AN734 on implementing an I2C slave; AN735 on implementing an I2C master; AN736 on setting up a network protocol for environmental monitoring. There's documentation for the compiler libraries: Compiler libraries documentation. When setting up something yourself, check the datasheet of your chip, the (M)SSP section on I2C communication. I used the PIC18F46K22 for the master part and the PIC18F4620 for the slave part.
{ "source": [ "https://electronics.stackexchange.com/questions/55545", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/-1/" ] }
55,609
As a general engineering hobbyist, I am learning more about the world of microcontrollers each and every day. One thing I don't quite understand though is the significance of the bit version of a microcontroller. I have been using the ATmega8 for several months, and it seems to work great for my purposes. I know how things like clock speed, memory, number of IO pins, types of communications buses, etc. differentiate one microcontroller from another. But I don't quite understand the significance of, say, 8-bit vs. 16-bit vs. 32-bit. I do understand that a higher bit version allows the device to store larger numbers, but again, how does this impact my decision? If I am designing a product, under which hypothetical scenario would I decide that an 8-bit processor simply won't do, and that I need something higher? Is there any reason to believe that a theoretical 32-bit variant of the ATmega8 (all other things equal) would be superior to an 8-bit version (if such a device were possible)? I may be speaking nonsense, but I guess that's a result of my confusion.
It's not the width of the numbers it can store, it's the width it can work with in a single operation. Customarily (but not necessarily) this also has a degree of correlation to the width of native memory addressing, and thus the amount of storage which can be easily mapped without ugly workarounds such as segmentation or bank switching. Today's 32-bit cores are superior to 8-bit designs in most respects (flexibility, flat memory model, and of course performance), with the major exceptions being legacy systems, applications with extreme volume and price pressure (otherwise pricing tends to correlate better with on-chip memory size than with core width), and side effects of process/density. The latter can provide things like 5 V operation, or possibly in some cases greater radiation hardness, or a simplicity advantage if trying to prove the CPU design itself to be free of logic errors. One final process/age side effect of value to many hobbyists is that 8-bit cores in DIP packages are common, while 32-bit devices in such packages are rarer (though they do exist).
{ "source": [ "https://electronics.stackexchange.com/questions/55609", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10917/" ] }
55,611
I am working on a board design that will put 16 RGB LEDs in a circle around a rotary encoder. I want this setup to be simple so that I can run it from any micro with limited software (ie. built in PWM control, no tedious management necessary on the micro). I also want it to be relatively compact so that this board will not take up much space. I would also like it to be as cheap as possible in medium quantities. These 3 criteria are likely in competition, but I would like to know what others would suggest. I have considered shift registers. These might be cheap and somewhat compact, but they would require the attached micro to spend a lot of time managing the LEDs to do any sort of colour blending. I have also looked at some PWM LED drivers. The best one I have found so far is a 16 channel chip, so I would need three to drive all my LEDs. It would be simple to use, but the space and cost would not be great. Another option might be to use some sort of FPGA or dedicated micro with lots of IO to control the LEDs. I'm not sure if the power needed by the LEDs would be too much, though. Is there some option I am missing that would fit this target usage? I am open to a solution with slightly more or less LEDs, but I would not want any less than 12. Edit: For reference, I am basically trying to replicate this board or this + this but with RGB capability, including colour blending. If the ring has to be a bit larger, that is ok to some extent. I would still like to be able to put a couple boards next to each other in a single project without too much spacing.
{ "source": [ "https://electronics.stackexchange.com/questions/55611", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8775/" ] }
55,697
I was reflowing and a 2k 0603 resistor partially tombstoned. There was solder connecting the lead to the pad, yet my DMM showed an open. When I heated both ends and the resistor came down and contacted the pad, the DMM read 2k ohms like it should. Why didn't the solder conduct between the pad and lead? Also, sometimes when I touch a probe to a solder blob on top of a pin, it shows an open, yet when I touch the pin above the solder joint, it shows conductivity. Is this the same case as when there is a cold solder joint? The solder is touching the pin and pad, yet there is no conductivity. Isn't solder a metal, and shouldn't it therefore conduct? Why doesn't it?
Sounds like a classic cold solder joint. This happens when the solder and pad are insufficiently heated (time or temp), or the surface is not clean, and the wetting action does not occur properly. If you use a meter to measure from one point to another on the solder itself, it should conduct, but between the solder and pad, there is not actually a good electrical connection, hence the open. It's not that solder isn't conducting, but that it hasn't made a bond to the copper and therefore contaminants or flux are actually creating a non-conductive barrier. As for the pin-to-pad conducting but not solder-to-pad, the pin is resting on top of the pad and likely making a physical connection (held in place by a brittle solder blob). The solder, however, may have a thin air gap left over from flux or contaminants, and isn't actually making contact with either.
{ "source": [ "https://electronics.stackexchange.com/questions/55697", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10191/" ] }
55,767
I know that in computers, the value returned by the main() function is received by the operating system. But what happens to the value returned by main() on a microcontroller?
On a microcontroller, main() is not really expected to ever exit, and the behavior if it does is not defined — so it's up to whoever wrote the C runtime for the microcontroller. I've seen systems that: Have an implicit loop around main() , so that if it exits, it simply gets called again. Have a simple "jump-to-self" loop (or a HALT instruction) that gets executed if main() ever exits. Simply execute the rest of code memory that follows the call to main() . This is called "running off into the weeds". I've never seen one that actually does anything at all with the value returned by main() . If this is something you actually care about, then you should take a look at — and possibly modify — the source code for your system's C runtime library.
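For illustration, typical firmware sidesteps the question entirely, and a hand-rolled runtime might guard against an exit like this (a sketch only, not any particular vendor's startup code):

    /* Typical firmware: main() simply never returns */
    int main(void)
    {
        /* ...initialize peripherals here... */
        for (;;) {
            /* event loop: poll inputs, service flags, sleep, ... */
        }
    }

    /* What a C runtime *might* do around main() -- illustrative only */
    void _startup(void)
    {
        for (;;) {
            main();  /* option 1 above: implicit loop, just call it again */
        }
        /* or: main(); for (;;) {}  -- option 2: a jump-to-self trap      */
    }

Option 3, running off into the weeds, is simply what you get when the startup code does neither.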
{ "source": [ "https://electronics.stackexchange.com/questions/55767", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/18118/" ] }
55,823
I understand that I can not connect an LED directly to a battery because it will draw too much current. Thus, there must be something else in the circuit to limit the current. What options are there? Are some methods more efficient than others?
An LED requires a minimum voltage before it will turn on at all. This voltage varies with the type of LED, but is typically in the neighborhood of 1.5V - 4.4V. Once this voltage is reached, current will increase very rapidly with voltage, limited only by the LED's small resistance. Consequently, any voltage much higher than this will result in a huge current through the LED, until either the power supply is unable to supply enough current and its voltage sags, or the LED is destroyed. Above is an example of the current-voltage relationship for an LED. Since current rises so rapidly with voltage, usually we can simplify our analysis by assuming the voltage across an LED is a constant value, regardless of current. In this case, 2V looks about right. Straight Across the Battery No battery is a perfect voltage source. As the resistance between its terminals decreases, and the current draw goes up, the voltage at the battery terminals will decrease. Consequently, there is a limit to the current the battery can provide. If the battery can't supply enough current to destroy your LED, and the battery itself won't be destroyed by sourcing this much current, putting the LED straight across the battery is the easiest, most efficient way to do it. Most batteries don't meet these requirements, but some coin cells do. You might know them from LED throwies. Series Resistor The simplest method to limit the LED current is to place a resistor in series. We know from Ohm's law that the current through a resistor is equal to the voltage across it divided by the resistance. Thus, there's a linear relationship between voltage and current for a resistor. Placing a resistor in series with the LED serves to flatten the voltage-current curve above, such that small changes in supply voltage don't cause the current to shoot up radically. Current will still increase, just not radically. The value of the resistor is simple to calculate: subtract the LED's forward voltage from your supply voltage, and this is the voltage that must be across the resistor. Then, use Ohm's law to find the resistance necessary to get the desired current in the LED. The big disadvantage here is that a resistor reduces the voltage by converting electrical energy into heat. We can calculate the power in the resistor with any of these: \$ P = IE \$ \$ P = I^2 R \$ \$ P = E^2/R \$ Any power in the resistor is power not used to make light. So why don't we make the supply voltage very close to the LED voltage, so we don't need a very big resistor, thus reducing our power losses? Because if the resistor is too small, it won't regulate the current well, and our circuit will be subject to large variations in current with temperature, manufacturing variation, and supply voltage, just as if we had no resistor at all. As a rule of thumb, at least 25% of the voltage should be dropped over the resistor. Thus, one can never achieve better than 75% efficiency with a series resistor. You might be wondering if multiple LEDs can be put in parallel, sharing a single current limiting resistor. You can, but the result will not be stable: one LED may hog all the current, and be damaged. See Why exactly can't a single resistor be used for many parallel LEDs?
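As a worked example of the resistor calculation above (supply voltage, LED drop and target current are assumed round numbers):

    #include <stdio.h>

    int main(void)
    {
        /* Assumed example values */
        double vsupply = 9.0, vled = 2.0, iled = 0.020;  /* 9 V, 2 V, 20 mA */

        double r = (vsupply - vled) / iled;  /* Ohm's law: 350 ohms         */
        double p = (vsupply - vled) * iled;  /* power wasted in R: 140 mW   */

        printf("R = %.0f ohm, P = %.0f mW, efficiency = %.0f %%\n",
               r, p * 1e3, 100.0 * vled / vsupply);
        return 0;
    }

Note how poor the efficiency is (about 22%) when most of the supply is dropped across the resistor; that is what motivates the next approach.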
Linear Current Source If the goal is to deliver a constant current to the LEDs, why not make a circuit that actively regulates that current? This is called a current source, and here is an example of one you can build with ordinary parts: Here's how it works: Q2 gets its base current through R1. As Q2 turns on, a large current flows through D1, through Q2, and through R2. As this current flows through R2, the voltage across R2 must increase (Ohm's law). If the voltage across R2 increases to 0.6V, then Q1 will begin to turn on, stealing base current from Q2, limiting the current in D1, Q2, and R2. So, R2 controls the current. This circuit works by limiting the voltage across R2 to no more than 0.6V. So to calculate the value needed for R2, we can just use Ohm's law to find the resistance that gives us the desired current at 0.6V. But what have we gained? Now any excess voltage is just being dropped in Q2 and R2, instead of a series resistor. Not much more efficient, and much more complex. Why would we bother? Remember that with a series resistor, we needed at least 25% of the total voltage to be across the resistor to get adequate current regulation. Even so, the current still varies a little with supply voltage. With this circuit, the current hardly varies with supply voltage under all conditions. We can put many LEDs in series with D1, such that their total voltage drop is, say, 20V. Then, we need only another 0.6V for R2, plus a little more so Q2 has room to work. Our supply voltage could be 21.5V, and we are wasting only 1.5V in things that aren't LEDs. This means our efficiency can approach \$20V / 21.5V = 93\%\$. That's much better than the 75% we can muster with a series resistor. Switched Mode Current Sources For the ultimate solution, there is a way to (in theory, at least) drive LEDs with 100% efficiency. It's called a switched mode power supply, and it uses an inductor to convert any voltage to exactly the voltage needed to drive the LEDs. It's not a simple circuit, and we can't make it entirely 100% efficient in practice since no real components are ideal. However, properly designed, this can be more efficient than the linear current source above, and maintain the desired current over a wider range of input voltages. Here's a simple example that can be built with ordinary parts: I won't claim that this design is very efficient, but it does serve to demonstrate the principle of operation. Here's how it works: U1, R1, and C1 generate a square wave. Adjusting R1 controls the duty cycle and frequency, and consequently, the brightness of the LED. When the output (pin 3) is low, Q1 is switched on. Current flows through the inductor, L1. This current grows as energy is stored in the inductor. Then, the output goes high. Q1 switches off. But an inductor acts as a flywheel for current: the current that was flowing in L1 must continue flowing, and the only way to do that is through D1. The energy stored in L1 is transferred to D1. The output goes low again, and thus the circuit alternates between storing energy in L1 and dumping it in D1. So actually, the LED blinks rapidly, but at around 25kHz, it's not visible. The neat thing about this is that it doesn't matter what our supply voltage is, or what the forward voltage of D1 is. In fact, we can put many LEDs in series with D1 and they will still light, even if the total forward voltage of the LEDs exceeds the supply voltage. With some extra circuitry, we can make a feedback loop that monitors the current in D1 and effectively adjusts R1 for us, so the LED will maintain the same brightness over a wide range of supply voltages.
Handy, if you want the LED to stay bright as the battery gets low. Replace U1 with a microcontroller and make some adjustments here and there to make this more efficient, and you really have something.
{ "source": [ "https://electronics.stackexchange.com/questions/55823", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/17608/" ] }
55,831
Let's say I have a 60W bulb in a lamp in my bedroom. If I kept the lamp on for 2 hours straight but the next day, I switched it on and off 10 times in intervals of 5 minutes. Which scenario would use more energy?
Leaving it on would use more energy, absolutely. Sometimes, people try to convince themselves that turning a light on and off uses more energy because there is some high inrush current, or some such thing. Firstly, incandescent lights hardly have any inrush current, because they don't have any capacitors to charge, and they need not strike an arc in the bulb. The current is initially higher because the filament resistance is lower, but: this is for a fraction of a second getting it up to temperature doesn't take any more energy than it would have taken to leave it on to maintain that temperature even though the current may be higher, it's not that much higher. Do all the other lights in your house dim temporarily when you turn one on? Secondly, if you take a fluorescent bulb, which may have capacitors, and thus may require some inrush current, it doesn't begin to make up for the cost of leaving the light on. Consider again how short the turn-on period is relative to the leaving-on period. Even if you consider the wear-and-tear on the bulb and the starter and the fixture, it's almost always more economical to turn the bulb off. I read a report by someone who bothered to do all the math, and they concluded that if you intend to leave the light off for more than about 60 seconds, it's more economical to do so.
{ "source": [ "https://electronics.stackexchange.com/questions/55831", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/18142/" ] }
55,870
What is the offset null on pins 1 and 5 of the IC 741 (op-amp)? Why is it provided, even though it goes unused in many circuits? Please explain the offset null, and why an offset voltage exists in the 741 at all.
The datasheet gives an example. By adjusting the pot we can null any offset error. An offset error is when the inputs are exactly equal but the output isn't exactly zero. This error is also characterized by the datasheet: It can be safely ignored in AC applications, where the offset will be removed by the AC coupling. It becomes more important in DC applications, especially amplifiers, since this DC error will be amplified by the next stage. This offset voltage exists because a real op-amp can't be ideal. There will always be some unintended asymmetries between the two halves of the input stage, due to random variation in manufacturing. In all cases, there are op-amp designs that can minimize these errors, but usually at the expense of some other parameter, like cost.
{ "source": [ "https://electronics.stackexchange.com/questions/55870", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/17336/" ] }
55,945
If copper is more conductive than water (at any reasonable pH), submerging copper electronic circuits in water should have no effect, as the electricity should continue to follow the path of least resistance (the highly conductive copper PCB paths, for example) rather than shorting into the mildly conductive water. However, dumping water on electronics clearly shorts them out, even though copper is the superior conductor. Why does this occur, when the path of least resistance should be the copper rather than the water?
Fallacy correction: "Electricity" NEVER 'follows the path of least resistance'. Electricity is NOT like a single large Zorb bounding down a hillside. Electricity would be better modelled in this context as a large reservoir of water being poured down a rough hillside. Most of the water will follow major gullies and channels, but some will find smaller side channels. The question is not "which path will the electricity follow?" but "how much current will flow along this given path?" Low resistance paths will conduct higher currents. Higher resistance paths will conduct less current. Only infinite resistance paths will conduct zero current. There are NO infinite resistance paths. Plus: When a voltage above a certain critical level is applied to water, the water will decompose into hydrogen and oxygen - known as "electrolysis". Any compounds in the water are liable to decompose as well, producing e.g. chlorine gas from chlorine products in the water. Even wholly pure deionised water can be decomposed in this manner. Once you get even a small amount of current flow you get ions formed, which promotes more current flow, lower resistance, more breakdown ... zap. I have dropped a "pager" into sea water and a portable phone into concentrated chlorine solution* and "saved" both with no long term damage by immediately taking out the batteries, washing them in copious quantities of fresh water, and then leaving them to dry completely (a very important step). And I have seen equipment destroyed by exposure to water with the batteries then left in. * Note to self: Remove portable phone from top pocket before stirring large bucket of Sodium Hypochlorite solution. When splashing along the sea edge and generally having fun, your pager should be in a salt-spray protected bag or, better still, left at home. Note to the young: Pagers were things we used to have before they invented cell phones - effectively a receive-only SMS system with no voice or any other features. Doctors still seem to use them.
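Coming back to "how much current will flow along this given path": to put a number on it, here is a trivial C sketch (the resistance of the water film is a made-up illustrative value):

    #include <stdio.h>

    int main(void)
    {
        /* Made-up illustration: a water film bridging two nodes that have
           no copper path between them at all */
        double v = 5.0;          /* potential difference between the nodes */
        double r_water = 10e3;   /* assumed resistance of the water film   */

        /* No copper connects these nodes, so even a "bad" conductor
           carries all of the unintended current */
        printf("Leakage through water: %.1f mA\n", 1e3 * v / r_water);
        return 0;
    }

Half a milliampere is nothing next to what the copper traces carry, but it is ample for electrolysis, corrosion, and upsetting logic inputs, and that, not current "choosing" water over copper, is what kills wet electronics.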
{ "source": [ "https://electronics.stackexchange.com/questions/55945", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/18198/" ] }
56,010
I often notice a pull resistor used on the base of a bipolar transistor, e.g. \$R2\$ here: Why is it used? I understand pull resistors for FETs: because of the gate's high impedance, EMI can easily switch them. But a BJT needs base current to turn on and, I think, EMI has too high an internal impedance to supply enough current. Is it safe to leave the base floating in a BJT switch?
The very short answer: R2 helps to turn the BJT off fast. A bit longer: When current ceases to be provided through R1, all that is left to stop the base from being forward biased is the current that flows through the base itself. This works for slow circuits. For fast turn-off, R2 helps to get the charge out of the base. Note that there's a parasitic capacitance from B to E and, worse, from C to B. The latter is worse because while B goes down, C goes up, and the C-B capacitance pushes charge into B just when you want to lose charge from B (the Miller effect). Also, with B floating, a tiny current (interference / a parasitic creepage path) may be enough to turn the BJT on. R2 helps to prevent this.
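To get a rough feel for the speed-up, here is a small C sketch (both values are assumed, typical-looking numbers, not from any datasheet, and real turn-off also involves stored base charge and the Miller effect mentioned above):

    #include <stdio.h>

    int main(void)
    {
        /* Assumed values for illustration only */
        double r2  = 10e3;    /* base pull-down, 10 kOhm                    */
        double cbe = 20e-12;  /* parasitic base-emitter capacitance, 20 pF  */

        /* With R1 no longer driving, the base node discharges through R2
           with time constant tau = R2 * Cbe */
        printf("tau = %.0f ns\n", 1e9 * r2 * cbe);
        return 0;
    }

So with these numbers the base is pulled down on a ~200 ns timescale; without R2, that charge has nowhere defined to go.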
{ "source": [ "https://electronics.stackexchange.com/questions/56010", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/-1/" ] }
56,058
I have a PIC18F46K22 and program it with the XC8 compiler. In the end, I'll have a system like a PC with stdin and stdout . So in the main loop there will be a function which checks if there is new input. If there's input, a function will be called accordingly. So for example when I input an A on stdin , the PIC will run a function like function_A instead of function_B , which is called when I input a B. When the PIC is done with the function, I want the new input to be sent to that function. So when pressing A opens the RS232 transmitter, from that moment every input will be sent over RS232. In the end the project is a stand-alone text editor. So when pressing A opens the file system, from that moment you aren't text editing anymore but looking through a list of files. That means that pressing Up and Down means something different than in the text-editing environment. I've done a lot of thinking on how to program this in C. I thought this up last night and would like to know if it's possible and if so, how. What I want to do is: The main function calls a function like function_A ; function_A changes a global variable function_addr to the address pointer of function in_function_A ; From that moment, main calls the function at function_addr when there is new input. So what I'd need is a main function which checks if function_addr is zero. If it is, a 'normal' function should be called, like function_A . If it isn't, the function at function_addr should be called. I also need a function_A which changes function_addr to a pointer to in_function_A . Note: when the filesystem function should be closed, in_function_A should just change function_addr to 0. So basically my question is how I can: Get the address of a function (and store it in a variable); Call a function at a specified address.
A function:

    int f(int x){ ... }

Get the address of a function (and store it in a variable):

    int (*fp)(int) = f;

Call a function at a specified address:

    int x = (*fp)( 12 );
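Applied to the dispatcher described in the question, the whole pattern might look like the sketch below (the names come from the question; the escape-key behaviour and getch-style input delivery are assumptions):

    #include <stddef.h>

    static void (*function_addr)(char) = NULL;  /* currently active input handler */

    static void in_function_A(char c)  /* handles input while "mode A" is active */
    {
        if (c == 27) {                 /* e.g. Esc leaves the mode (assumption)  */
            function_addr = NULL;      /* back to normal key dispatch            */
            return;
        }
        /* ...otherwise act on c within mode A... */
    }

    static void function_A(void)
    {
        /* set up mode A, then route all further input to its handler */
        function_addr = in_function_A;
    }

    void main_loop(char c)             /* call this for every received character */
    {
        if (function_addr != NULL)
            (*function_addr)(c);       /* a mode is active: forward the input    */
        else if (c == 'A')
            function_A();              /* no mode active: dispatch on the key    */
    }

Since the handler pointer is written from mainline code and read on every input, this stays simple; if input ever arrives in an interrupt, the pointer would need the usual volatile/atomicity care.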
{ "source": [ "https://electronics.stackexchange.com/questions/56058", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/-1/" ] }
56,138
I've run across a Hitachi V-355 oscilloscope, and a friend who's willing to give me a deal if I want it. Unfortunately, there are no probes that come with it. I was searching for probes, only to learn that they're STUPID expensive (either that or I'm STUPID cheap), and I can't find any to borrow. So here's my thought: I have a ton of test probes from, say, my multimeter. I have a stable of patch cables in every possible banana plug/alligator clip/springy pin clip configuration possible. I also have a whole mess of BNC connectors and adapters in my video cable repair kit. So here's my question: Can I just solder an alligator clip cable to a BNC connector and have an oscilloscope probe? Or is there something special going on inside those probes that makes 'em cost $50.00? If the latter, and I end up biting the bullet and getting real, honest and for true oscilloscope probes, what are some things I need to be aware of when shopping?
Oscilloscope probes aren't just pieces of wire with a pointy end attached to them. A typical probe will, in addition to the pointy stick and the alligator clip, have input attenuation circuitry and impedance matching circuitry inside. Basically the oscilloscope input front-end has its own internal capacitance and its own internal resistance. In order to prevent signal distortion, the capacitance of the probe needs to match the capacitance of the scope. If it isn't well matched, you'll get overshooting or undershooting. Most scopes have on their front a probe compensation connector which provides a test signal that can be used to tune the capacitance of the probe. Here's an image of the effect: The probe's resistance provides input attenuation and it works together with the input resistance of the oscilloscope. Usual 1x/10x probes will have a switch that inserts what's usually a \$9 \mbox{ } M \Omega\$ resistor in series with the signal. In the scope, there's usually a \$1 \mbox{ } M \Omega \$ resistor for input attenuation. In the 1x mode, you only have that resistor, while in the 10x mode, both resistors together form a 10:1 voltage divider with a total resistance of \$10 \mbox{ } M \Omega\$. In parallel with the probe's input resistance you have the compensation capacitor. When buying a probe, you should pay attention that the capacitor's value can be adjusted to match the value of the scope's input capacitance. Another important part of the probe is the tip capacitance. It's modeled as a capacitor in parallel with the signal. Its purpose is to slow down the rise time of the signal entering the probe, which is in general considered a negative effect. The tip capacitance probably won't be of too much importance for a scope such as the one you're considering, but for higher frequencies, it could cause problems when accurate measurement of rise time is needed. Some probes have what's called high frequency compensation too. You probably don't want to pay for such a probe, but I'll just mention it for completeness. The high frequency compensation part will usually be in the BNC cable connector of the probe and consists of a series connection of a resistor and variable capacitor placed parallel to the signal path. It's used to fix problems with impedance increase with frequency due to cable inductance. Basically the impedance will decrease more or less linearly with the frequency until we reach the resonant frequency of the probe. After that, it will increase. The compensation system is used to move the resonant frequency point away from the frequency range we're interested in using the scope with. Finally, there's a free book available from Tektronix (if you want to give them your e-mail address) which explains how probes work in great detail. It's called ABCs of Probes and is currently available from here.
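To make the 10x arithmetic concrete (my own worked example, assuming a typical 1 MΩ / 20 pF scope input): the 9 MΩ probe resistor and the 1 MΩ scope input form the divider \$V_{scope} = V_{tip} \cdot \frac{1 M\Omega}{9 M\Omega + 1 M\Omega} = V_{tip}/10\$. For that division ratio to hold at all frequencies, the compensation capacitor across the 9 MΩ must satisfy \$R_{probe} C_{comp} = R_{scope} C_{scope}\$, which here gives \$C_{comp} = \frac{1 M\Omega \times 20 pF}{9 M\Omega} \approx 2.2 pF\$; that product match is exactly what you are adjusting when you tune the probe against the scope's compensation output.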
{ "source": [ "https://electronics.stackexchange.com/questions/56138", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10660/" ] }
56,170
I have heard people say that in motor control circuits, one must take precautions to keep the motor from feeding back into the power supply, causing the supply voltage to rise, consequently breaking things. But how can this be? Unless some external force is accelerating the motor, the back-EMF can never get higher than the supply voltage. How then could it ever drive the supply voltage higher?
A motor driven by an H-bridge is also a boost converter. Here's an H-bridge: Replace the motor with an inductor, resistance, and voltage source (back-EMF): Let's just consider that we are driving the motor in one direction, and S3 is always open, and S4 is always closed: Rotate V1, S1, and D1 (same circuit): flip the whole thing left-for-right (still the same circuit): We don't need active rectification, so we can delete S1. D2 also serves no purpose. We can also delete R1, since it's just a small resistance and doesn't change the function of the circuit other than to make it less efficient: Looking pretty close, right? Of course, a real boost converter will have a capacitor on the output to make DC, and the load isn't a battery, but a resistor, and probably V1 isn't a motor's back-EMF but rather a battery. This step isn't necessary to demonstrate how the back-EMF can feed back into your power supply, but is provided just in case you don't recognize the boost converter: QED. It can also be shown that when the motor is being accelerated, an H-bridge is a buck converter. Consequently, it's easier to think about the interaction between the battery and the motor's kinetic energy in the frame of the law of conservation of energy. Neglecting non-ideal losses in the winding resistance, switching transistors, friction, etc, an H-bridge and a motor make an efficient energy converter. To increase the motor's kinetic energy, the battery must supply energy. To decrease the motor's kinetic energy, the battery must absorb energy. If the battery, friction, or some other load can't convert the kinetic energy into heat or chemical energy, it will go somewhere else. Most likely, into your power supply decoupling capacitors, causing the power rail voltage to rise, because the energy stored in a capacitor is: \$ E = \frac{1}{2}CV^2 \$ or equivalently, \$ V = \sqrt{\dfrac{2E}{C}} \$ Where \$E\$ is energy in joules or watt-seconds, \$C\$ is the capacitance in farads, and \$V\$ is the electromotive force, in volts. To store more energy, the voltage must go up. It's not a mistake that this looks exactly like the formula for kinetic energy: \$ E = \frac{1}{2}mv^2 \$ Where \$E\$ is energy in joules, \$m\$ is mass in kilograms, and \$v\$ is velocity in meters per second, or for rotating kinetic energy, \$m\$ is the moment of inertia in \$kg \cdot m^2\$ and \$v\$ is the angular velocity, in radians per second. The point here is that you get regenerative braking even if you didn't want it. See How can I implement regenerative braking of a DC motor?
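To get a feel for the numbers (a rough illustration with assumed values, ignoring losses, and not part of the original answer): suppose the rotor and its load store \$E = \frac{1}{2}J\omega^2 = \frac{1}{2}(10^{-4}\,kg \cdot m^2)(314\,rad/s)^2 \approx 4.9\,J\$, i.e. a small motor at 3000 RPM. If braking dumps that energy into 1000 µF of bus capacitance that starts at 12 V, the rail ends up at \$V = \sqrt{V_0^2 + 2E/C} = \sqrt{144 + 9800} \approx 100\,V\$, which is more than enough to destroy a 16 V-rated capacitor and everything else on the rail.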
{ "source": [ "https://electronics.stackexchange.com/questions/56170", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/17608/" ] }
56,210
I'm a bit confused about the concept of ground, and perhaps voltage as well, particularly when trying to analyze a circuit. When I learned about Ohm's law in grade school, I learned how to apply the law to calculate current, voltage, and resistance of simple circuits. For instance, if we were given the following circuit: We could be asked to calculate the current passing through the circuit. At the time, I'd simply compute (based on the rules given) 1.5V/1Ohms=1.5A. Later on, however, I learned that the reason the voltage of the resistor would be 1.5V is because voltage is really the difference in potential between two points, and that the difference of the voltage across the battery would be the same as that of the resistor (correct me if I'm mistaken), or 1.5V. I got confused, however, after the introduction of the concept of ground. The first time I tried to do the current calculation for a circuit similar to the previous circuit on a simulator, the program complained about not having a ground and "floating voltage sources". After a bit of searching, I learned that circuits need ground as a reference point or for safety reasons. It was mentioned in one explanation that one can pick any node for ground, although it's customary to design circuits so there is an "easy place" to pick ground. Thus for this circuit I picked ground at the bottom, but would it be okay to pick ground between the 7 ohm and 2 ohm resistor - or any other place? And what would be the difference when analyzing the circuit? I've read that there are 3 typical ground symbols with different meanings - chassis ground, earth ground, and signal ground. A lot of circuits I've seen used in exercises either use earth ground or signal ground. What purpose is there in using earth ground? What is the signal ground connected to? Another question: since the ground is at unknown potential, wouldn't there be current flowing to or from ground to the circuit? From what I've read we treat the ground as 0V, but wouldn't there be some sort of effect because of a difference in potential of the circuit and ground? Would the effect be different depending on what ground was used? Finally: In nodal analysis, one customarily picks a ground at the negative terminal of the battery. However, when there are multiple voltage sources, some of them are "floating". What meaning does the voltage of a floating voltage source have?
The first time I tried to do the current calculation for a circuit similar to the previous circuit on a simulator, the program complained about not having a ground and "floating voltage sources". Your simulator wants to be able to do its calculations and report out the voltages of each node relative to some reference, rather than have to report the difference between every possible pair of nodes. It needs you to tell it which node is the reference node. Other than that, for a well-designed circuit, the "ground" has no significance in the simulation. If you design a circuit where there is no dc path between two nodes, though, the circuit will be unsolvable. Typical SPICE-like simulators resolve this by connecting extra resistors, typically 1 GOhm, between every node and ground, so it is conceivable that the choice of ground node could artificially affect the results of a simulation of a very high-impedance circuit. I picked ground at the bottom, but would it be okay to pick ground between the 7 ohm and 2 ohm resistor - or any other place? And what would be the difference when analyzing the circuit? You can pick any node as your reference ground. Often we think ahead and pick a node that will eliminate terms for the equations (by setting them equal to 0), or simplify the schematic (by allowing us to indicate connections through a ground symbol instead of by a bunch of lines connecting together). I've read that there are 3 typical ground symbols with different meanings - chassis ground, earth ground, and signal ground. A lot of circuits I've seen used in exercises use earth ground or signal ground. What purpose is there in using earth ground? What is the signal ground connected to? Earth ground is used to indicate a connection to something that is physically connected to the ground beneath our feet. A wire leading through the building down to a copper rod driven into the ground, in a typical case. This ground is used for safety purposes. We assume that someone who handles our equipment will be connected to something like earth ground by their feet. So earth ground is the safest circuit node for them to touch, because it won't drive currents through their body. Chassis ground is just the potential of the case or enclosure of your circuit. For safety purposes it's often best for this to be connected to earth ground. But calling it "chassis" instead of "earth" means you haven't assumed that it is connected. Signal ground is often distinguished from earth ground (and partially isolated from it) to minimize the possibility that currents flowing through the earth ground wires will disturb measurements of the important signals. Another question: since the ground is at unknown potential, wouldn't there be current flowing to or from ground to the circuit? Remember, a complete circuit is required for current to flow. You would need connections to earth ground in two places for current to flow in and out of your circuit from earth ground. Realistically, you'd also need some kind of voltage source (a battery, or an antenna, or something) in one of those connection paths to have any sustained flow back and forth between your circuit and the earth. However, when there are multiple voltage sources, some of them are "floating". What meaning does the voltage of a floating voltage source have? If I have voltage source with value V between nodes a and b , it means that the voltage difference between a and b will be V volts. A perfect voltage source will generate whatever current is required to make this happen. 
If one of the nodes happens to be ground, that immediately gives you the value at the other node in your reference system. If neither of those nodes happens to be "ground", then you will need some other connections to establish the values of the voltages at a and b relative to ground.
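A short worked example (my own, to make the last point concrete): put a 9 V source between nodes a and b. If b is chosen as ground, then \$V_a = 9\,V\$ and \$V_b = 0\,V\$; if instead a is grounded, \$V_a = 0\,V\$ and \$V_b = -9\,V\$. If neither is grounded but a resistor carrying no current connects b to the ground node, then \$V_b = 0\,V\$ and \$V_a = 9\,V\$ again. The source only ever fixes the difference \$V_a - V_b = 9\,V\$; the rest of the circuit determines where that pair of voltages sits relative to ground.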
{ "source": [ "https://electronics.stackexchange.com/questions/56210", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/16834/" ] }
56,474
I have been doing hobby electronics for more than 10 years, and some of my electrolytic capacitors are easily that age. They seem to work just fine and do not show corrosion or other visible defects, but they are usually used in prototyping rather than production. Knowing that these have a limited shelf life, I'm curious if I should just discard what I have and purchase new inventory, and rotate it. How best can I tell that my old caps have failed, are out of spec, or perhaps are going to fail?
The best way to tell that an electrolytic cap is bad or about to go bad is to use an ESR meter. An ESR meter directly measures one of the biggest reasons electrolytic caps fail: when ESR gets high, P=I²R tells us that power dissipation goes up, so heat gets produced, which boils off more of the electrolyte, which causes ESR to go up, which... Eventually, poof-bang, it isn't a cap any more. Read the cap's datasheet to find out the expected value of ESR. It varies considerably among capacitor types and capacitance values. As a rule, the cheaper and smaller the cap, the higher the expected ESR. I've seen values ranging from 30 mΩ to 3 Ω. The only reason I even give numbers is to show this 100:1 ratio, not to set your expectations so you can go measuring without having read the cap's datasheet, however. You can re-form the dielectric of electrolytic caps. There are two major methods.
Re-forming the Dielectric Using a Bench Supply
One school of thought is to charge the cap up over many minutes via some current-limiting scheme to its rated voltage, then leave it there for many more minutes. There are several methods for doing this, all with the major goal of limiting the currents to levels that prevent the capacitor from blowing up in your face if the capacitor simply cannot be restored.
The Resistor Method
The simplest way to achieve this is to put a large resistor in series between the capacitor and the voltage supply. Use the RC time constant formula (τ = RC) to calculate the proper resistor value. The rule of thumb I was given is based on the fact that a capacitor is nearly fully charged after five time constants, so we set τ = 1500 in the above formula: 5 minutes in seconds × 5 time constants. We can then rearrange that to R = 1500÷C. Now simply substitute your capacitor's value into the formula to get the minimum required resistor. For example, to re-form a 220 μF cap, you'd want to charge it through a resistor no smaller than 6.8 MΩ. Set the power supply's voltage to the normal working voltage for the capacitor. If it's a 35 V capacitor, it probably has about 30 V across it in normal operation, so you'd use that as your voltage set point. I can't see a good reason to push the capacitor beyond its normal working voltage; the dielectric strength will increase over time to some physical limit and stop there. This method is nonlinear, charging fastest at the start, then slowing asymptotically as you approach the power supply's voltage set point.
The Constant-Current Method
A more sophisticated method would be to use a current-limited bench power supply, achieving the same end. The formula for that is I = CV÷τ. If we always want to charge over 30 minutes, τ=1800. To re-work our 220 µF example, we also need to know the ending voltage, which we'd select the same way as above. Let's use 30 V as our target again. Substituting that and our charge time into the above formula gives the necessary charging current, which in this case is 3.7 µA. If your power supply can only go down to 1 mA for the current limit setting, you then need to decide whether you want to risk recharging over only 6.6 seconds, which we get by a simple rearrangement of the formula. This method is linear, increasing the voltage across the capacitor a fixed amount per unit time until it hits the voltage set point. The main consequence of this is that the ending charge current will be higher for a given total charge time than with the resistor method, but the starting charge current will be lower.
Since the danger of damaging the capacitor increases as you approach the voltage set point, that makes the resistor method safer, with the charge time being equal.
Combined Method
That brings us to the combined method, which was used in the link above: a constant current power supply charging the capacitor through a resistor. The resistor slows the charge current as the voltage rises, and the current-limited power supply can limit the charge rate at low voltages below what the resistor would do alone.
Leakage Current
If you do this with a good bench supply, once you hit the charging voltage limit, if the power supply continues to show any current flow, that is your capacitor's leakage current, which you can compare to the spec in the cap's datasheet. An ideal capacitor has a leakage current of zero, but only the best capacitors approach that ideal. Electrolytic caps are far from ideal. If you leave the capacitor in the charging setup, you may find that the leakage current drops for some time after hitting the voltage limit, then stabilizes. It is at that point that you know that the dielectric is now as strong as it's going to get.
Re-forming the Dielectric In-Circuit
The second method also raises the capacitor voltage slowly over a long period, but it does so in-circuit. It only works for AC-powered equipment, and it is best used to re-form the dielectrics in linear power supplies, whether regulated or unregulated. You pull this trick off using a variac, which allows you to raise the AC supply voltage to the circuit slowly. I would start off at a volt or two, then tweak it upward a volt or three at a time, with many seconds between changes. As with the methods above, expect to spend at least half an hour on this. We're dealing with wet chemistry here, not semiconductor gates; it takes time. The more "linear" the circuit you do this with, the more likely it is to work well. Switching power supplies and digital circuitry are likely to be annoyed by the slowly rising rail voltage produced by this method. Some circuits can even self-destruct under such conditions, because they're designed with the assumption that the supply voltage will always rise rapidly from zero to its normal operating value. If you have a digital circuit powered by a linear-regulated power supply, you might want to re-form the power supply separate from the powered circuit. You might want to put a resistive load across the output of the power supply while you do this.
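As a convenience, here is a small C sketch (my own, not part of the original answer) that evaluates both rules of thumb above, R = 1500 ÷ C for the resistor method and I = CV ÷ τ for the constant-current method, using the 220 µF / 30 V example:

#include <stdio.h>

int main(void) {
    double C   = 220e-6;  /* capacitance in farads (example value)      */
    double V   = 30.0;    /* forming voltage in volts                   */
    double tau = 1800.0;  /* desired charge time in seconds (30 min)    */

    double R = 1500.0 / C;   /* resistor method: R = 1500 / C            */
    double I = C * V / tau;  /* constant-current method: I = C * V / tau */

    printf("Resistor method: charge through >= %.1f Mohm\n", R / 1e6);
    printf("Constant-current method: limit to %.1f uA\n", I * 1e6);
    return 0;  /* prints 6.8 Mohm and 3.7 uA, matching the text above */
}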
{ "source": [ "https://electronics.stackexchange.com/questions/56474", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2028/" ] }
56,622
So this seems like a simple question, but I'm trying to flip components in the schematic editor. If I select a component then use the keyboard shortcut X or Y, it brings up a menu and does not flip the component as desired. Edit --> Move --> Flip... also does not work. Suggestions? Thanks Edit: I should have mentioned that I would like to flip groups of components as well as net labels. With net labels the connection point is on the bottom left, but I would like to have it on the bottom right so that I can connect it to wires aligned right.
You have to press the X or Y key while you're holding the left mouse button pressed on the component, and it will flip. To be more explicit: left-click on the component you want to flip, keep the mouse button pressed (like when you want to move a component with the mouse) and press the X or Y key to flip it.
{ "source": [ "https://electronics.stackexchange.com/questions/56622", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/18482/" ] }
56,649
In a list of ICs, along with the familiar package names such as QFN32, LQFP48, etc., I've seen a few ICs listed as DIE for the package size. I've never seen that description before as an IC package size, and Wikipedia does not list it either. What can it be? I assume it's some kind of chip-scale package, but it does not reveal the silicon size or any other properties, like number of pins, etc.
Danger Will Robinson! It refers to a "raw die" -- meaning the chip is not packaged. You will get a piece of exposed silicon (possibly encapsulated or partially so, but typically not). If you are asking this question, then I'm pretty sure it is not what you want. ;-) If you want an example... Consider the Max3967A from Maxim Semiconductor. If you want to buy the conventional packaged version the part number is MAX3967AETG+, but if you just want the raw microchip inside (no package) you want part number MAX3967AE/D. In the catalog the "package" for the "/D" version will be "DIE" -- meaning no package. From page 12 of the datasheet: You can see they dimension the die in the drawing for you. You will need access to a wire bonding machine in order to use a raw die (among other things). In this microscope image, you can see two wires bonded (attached) to the package in the center. And in this photo of a thick-film hybrid circuit (taken with a little less magnification) you can see the wires bonded directly to the various die as well as the package forming the connections between the frame and the outside world. You commented: "it is mostly only useful for other IC manufacturers if they want to integrate it into their ICs. So you are right, it's not what I want." Why you can buy raw dies:
MCM -- What you described in your comment is called a Multi-Chip Module (MCM) and, yes, you are correct.
Low Cost -- It is also common in really cheap electronic devices to skip the cost of packaging. They use unpackaged dies and glue them to the substrate (PCB), bond the pads to the board directly, and then encapsulate the die in an epoxy to secure, seal, and protect everything.
High Reliability -- It can also be done this way for speciality applications where the absence of a package (and the manufacturing and soldering points of failure that come with it) are advantageous for reliability.
{ "source": [ "https://electronics.stackexchange.com/questions/56649", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8903/" ] }
56,882
I'm writing to a microSD card from within my firmware, but it's the lowest priority task, so it can be interrupted by other tasks while it's in the middle of reading/writing. Now suppose I communicated with this microSD card using a UART. The problem during reads would be that the hardware RX FIFO would overflow, so the maximum delay I can afford would be (FIFO size ÷ bytes per second), and during writes there would be no problem, because the other end would just wait until I send the next character. How does this work now that I'm using SPI? Is the situation the same, i.e. that for writes it doesn't matter, and for reads it depends on the SPI FIFO size?
The vast majority of SPI devices will be perfectly happy at any data rate below the specified maximum. One could perform part of a transaction, take a break at any point, come back a few years later, and finish it. Provided that there were no glitches on the clock, select, or power lines, the transaction would be completed normally. There are three main caveats to be aware of: In general, once a transaction has begun on an SPI bus, none of the wires on the bus may be used for any other purpose until that transaction is complete. In general, this means that an interrupt may not use an SPI bus except when it is the only thing that will be using the bus (it may be possible for the interrupt to have exclusive use of the bus at some times, and for the main program to have exclusive use at other times). Some devices include special pins to let them "ignore" the bus in the middle of a transaction, but even with such features I would not recommend trying to have an interrupt suspend an SPI transaction with one device, perform a transaction with some other device, and then let the underlying code resume its transaction with the first. Better to have the interrupt use a separate SPI bus. Some devices may behave oddly if a transaction goes on for too long. Some real-time clock chips, for example, don't double-buffer the time/date registers but instead latch any "time-advance" events that would occur during a transaction and apply them after the transaction is complete. If a transaction takes so long that a second time-advance event arrives, the latter event will be ignored, causing the clock to slip by that amount of time. I really see no excuse for designing a chip in such fashion (even if one didn't want the cost of double-buffering the data, specifying that software was responsible for ensuring its coherence would be cheaper than adding the "update deferral" logic, and would minimize the likelihood of clock disturbance), but such chips exist. There are a few devices which use a clock and data signal, but which use a "pause" to signify framing. The most recent example of this I've encountered was a controller-per-bulb LED light string. I don't particularly like such designs (one could just as well indicate framing using three consecutive rising edges on the data wire without any intervening clock) but again, such devices do exist. While certain types of communications require the use of particular timings, there is seldom any reason for SPI devices to require them. Nonetheless, one must be mindful of the existence of such devices.
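To illustrate the first caveat in code, here is a minimal sketch of one possible guard (my own illustration, not part of the original answer; sd_select(), sd_deselect() and spi_transfer() are hypothetical low-level helpers standing in for whatever the firmware provides):

#include <stdbool.h>
#include <stdint.h>

static volatile bool spi_busy = false;  /* set while a transaction is open */

extern void sd_select(void);            /* assert the card's chip select   */
extern void sd_deselect(void);          /* release the chip select         */
extern uint8_t spi_transfer(uint8_t b); /* clock one byte in/out           */

/* Lowest-priority task: may be preempted at any point. Pausing mid-loop
   is harmless to the card, as long as nothing else touches this bus. */
void sd_read_block(uint8_t *buf, uint16_t len) {
    spi_busy = true;
    sd_select();
    for (uint16_t i = 0; i < len; i++)
        buf[i] = spi_transfer(0xFF);
    sd_deselect();
    spi_busy = false;
}

/* Interrupts or higher-priority tasks must check this and either use a
   separate SPI bus or defer their transaction until the bus is free. */
bool spi_bus_available(void) {
    return !spi_busy;
}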
{ "source": [ "https://electronics.stackexchange.com/questions/56882", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8150/" ] }
57,071
First, I acknowledge that there are several questions regarding this topic in the forum; however, the answers assume too much background knowledge of electronics to be of use to a true beginner (like myself). That being said, if you choose to answer, please limit your responses to heuristic (non-technical) explanations. My understanding of a pull-up resistor is that it ensures a consistent charge on a line, as opposed to a disconnected line, which could potentially fall victim to electrical fields and then produce noise. The noise could then be interpreted as an input signal and cause unexpected results from your device. Question 1) Am I correct in my understanding of the purpose of pull-up and pull-down resistors? Question 2) How does this work? Can someone provide a metaphor or analogy to describe what exactly is taking place with the electrical current?
First: Yes, your understanding is essentially correct, other than the issue being voltage and not charge. Here is my analogy: Consider a door to a house, with really smooth hinges, and no bolt or latch. The door is so light and so well-hinged that the slightest breeze would cause it to flap open and closed. Now add a light door-spring to the door. The spring keeps the door shut, but not terribly firmly: A gentle push will open it, and letting it go will cause the door to close again. A so-called "floating input" is like that door - the slightest perturbations in the electromagnetic field, like the breeze above, will cause the input to randomly toggle between open and shut (low and high). Add the pull-up resistor (if you want the default to be "high") or pull-down resistor (if you want it to be "low"), and your spring is in place. Now, an external voltage applied, like the gentle push, can overcome the "keep the door shut" tendency of the spring / pull-x resistor - and once the push is removed, the input returns to the desired default value. A low value resistor in such use is like a really stiff spring - it needs a much firmer push to open, but open it will. It will also slam shut faster when the push is removed.
{ "source": [ "https://electronics.stackexchange.com/questions/57071", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/18047/" ] }
57,073
I have a system that was designed for testing digital components. I have a device in the system which is used for generation and acquisition of digital signals. Now, I will happily admit to not fully understanding everything I'm about to say, but: The outputs of the device have a 50Ω nominal impedance, The cable carrying these signals is a 50Ω cable designed for the device, The PCB the cable is plugged into is supposed to use 50Ω characteristic impedance on the lines with length matching to 0.5" and signals routed on the same layer. For clarity: the output of the device is sent through a cable, which is attached to a PCB, where the signals are routed a short distance to a high pin-count connector. I am trying to see how high of a frequency I can reasonably generate with this device, so I started with the default for the software I have: 50MHz. And the signal looked like this: When I slowed down the signal to 1MHz to get a closer look at the transition, I got this: I believe that I'm following the instructions in the device's specification sheet correctly...50Ω cable (designed for the device), 50Ω traces on the PCB (if the company who designed the PCB did what they said they did), into a high-impedance load (my scope). I suppose my question is this: am I measuring the signal wrong, or was the system designed wrong? Is there anything that I am grossly misunderstanding? If the system was designed wrong, are these graphs sufficient to show that or is there another measurement that would be more illustrative? Edit: I was asked to show what the signal looks like when "properly" terminated, aka not just floating non-terminated out of the connector of my system. The process through which this signal arrives is kind of a mess (see my comments) but here's what I've got: Edit#2: I stuck a 50Ω resistor between the digital output I'm observing and the ground that the signal is referenced to and I got the following output. From my understanding of this conversation, this seems to not be what I would have expected. Edit#3: See The Photon's accepted answer, but here's what the signal looked like after I removed a long, unterminated cable that was essentially dangling in the wind and also terminated the point I was measuring with a 50Ω resistor.
{ "source": [ "https://electronics.stackexchange.com/questions/57073", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/11836/" ] }
57,698
This New York Times article says that on November 14, 2007, Consolidated Edison Company was going to stop supplying direct current to the several remaining buildings that used it to run elevators. I don't get why there would be DC-powered elevators. Yes, at the end of the 19th or the beginning of the 20th century DC equipment was rather popular, but it's unlikely that an elevator would last a century, and all the elevators I could find information for now run on AC. Also, DC is typically preferred for easier RPM control (like in trains, tramcars, etc.), but this is not a problem for elevators - they typically use two-speed three-phase AC motors and speeds are changed by switching the number of poles. Why would there still be elevators running on DC?
Century-old motors were well built! And probably conservatively designed because electricity was new; they didn't know which corners you could safely cut. In those days, everything mechanical was designed for easy maintenance; nuts, bolts, taper pins; simple tools to take the whole lot apart, adjust to take up wear, reassemble and use for another 10000 miles. Run out of parts? Turn another one to fit! I had a 1910-era lathe still capable of turning within about 0.002" (traded it for a 1928 model!) and my 1840s watch is keeping very good time. In an era of relatively cheap labour and expensive materials, this made sense. Who knows, we may end up back there some day! Meantime it's worth studying how things from another era are made; partly to keep the skills alive and partly because good engineering is good engineering, from any era. Just to clarify, because this seems to have hit a nerve: I'm not simply equating long life with good engineering. What makes these motors good engineering is the skill with which they met their design goals using materials and techniques available at the time. And long life was almost certainly one of them; reliability (not measured as MTTF but the ratio between MTTF and MTTR) i.e. easy repair, and efficiency. Swapping motors for a fix is not the issue; replacing brushes, re-lining bearings or (major job!) rewinding the motor was what happened - and what the motors were designed for. It's NOW we kinda-sorta-fix things by replacing motors. We haven't improved THAT much on 92% efficiency in a motor in the last hundred years, but we do it with a lot less copper and iron. We can equally well admire a modern brushless motor with sealed bearings and no maintenance for ten years; they can both teach us something.
{ "source": [ "https://electronics.stackexchange.com/questions/57698", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3552/" ] }
57,824
I have been asking a few questions here to get to a proper one; the initial questions I asked are linked to at the end. I used Fritzing to make up some schematics of my initial thoughts, but at the very least I need help with the values on the components, which I only vaguely understand and picked what seem to be reasonable or common values. Basically, I have an Arduino that has 6 analog inputs. It uses a 10-bit ADC to read the voltage on any of the analog pins, so 0 = 0v, 511 = 2.5v, and 1023 = 5v, and all the values in between. It makes a LINEAR DC reading, so I'm not looking for logic 1-0 here. I have this hooked to LED lights, and I want to make them respond to music. What I want is maximum resolution with minimum components, and I think I'm using WAY too many components and making this WAY too complex. Perhaps electret microphones are not what I want here; I'm open to something else. I'd prefer to not use op-amps to conserve space on my PCB. What I want is a simple noise level sensor. I'm not looking to reproduce the audio, or have clarity or anything, but I would like, as close as I can get: Perfect Silence = as close to 0v DC (stable, not AC) as possible; Medium Noise = around 2.5v DC (stable, not AC); Loud Noise = as close to 5v DC (stable, not AC) as possible. I understand with a BJT that the best I can get is going to be 0.6v to 4.4v, but this is acceptable enough. What is not, however, is half of the wave, 0.6v to 2.5v. This seems to be wasting half of my available resolution for no reason. However, if there are other setups than a BJT that can get me closer to 0v-5v, I'd be interested in giving them a shot; as long as they're simple. Here's a simpler one, which I hope is possible, but it requires the electret signal to have enough amplitude to drive the envelope detector circuit (diode, resistor and capacitor) to get only the positive half. I don't think it can because of the diode's forward drop, but perhaps this can be rearranged or done before the output cap? What should the values of the envelope detector and amplifier resistors be? Should a sensitivity potentiometer be placed on the signal, or RE, or RL, and what should its value be? Linear or Logarithmic? However, perhaps the electret output can't survive the envelope detector, sensitivity shunt, and still drive an NPN transistor. If not, here's a more complex version. Do I need to go this route? Does getting my desired output from the circuit really require all these components? Here are some of the past questions I asked before I more fully understood what I was trying to articulate, for more details. Here's what the envelope detector is 'supposed' to do, and I'm not sure how to tune it for the electret output:
Although you could do this whole thing with just an amplifier and a microcontroller (Arduino), as far as I can see, you want the analog option. I have tried to create a circuit that outputs the voice level on the microphone. The range is from 0V to 4V. However, you can upgrade it easily to 0V to 5V by just changing the OP-AMP. Now, let's go into it. First of all, I have replaced the transistor amplifier with the OP-AMP. Here is what I came up with: This is a simple inverting amplifier with a gain of 100. Here is the formula to calculate the gain: $$ V_{out}=-\dfrac{R_f}{R_{in}}*V_{in} = -\dfrac{100k}{1k}*V_{in} = -100*V_{in} $$ As you can see, U1 takes the input signal, inverts it and then multiplies it by 100. You can change R2 or R3 and you will see that the gain of U1 changes. Inversion of the input signal does not matter here, as you will understand later on. Let's look at the output of this amplifier, and you will see that there is a big growth in the input signal. In the above graphic, you will see that the output has a DC offset voltage of 2.5 volts. That is because of the virtual ground we have used. If we create a virtual ground, that means we carry the ground to another voltage level. In this case we have moved it to 2.5 V. With the new configuration, we have created something that looks like -2.5 V, 0 V, and 2.5 V to the circuit. In order to achieve this, I had to create a new voltage rail of 2.5 volts. Since that voltage rail will not have to supply much current (less than 1 mA), it is easy to create: Notice the negative feedback in the above circuit. That will give the OP-AMP the order to make \$V+=V-\$. The OP-AMP will do its best to achieve this equation. Thus, the output will be 2.5 V, or in other words, half the supply voltage. And that is our new ground point. After the amplification, we should put the signal onto an "envelope detector", or in other words, an "envelope follower". This will get the level of the signal, as you wish and as you showed in the picture in your question. Here is what a basic envelope follower looks like: It looks all great; however, notice that here, D3 is a diode and it drops about 0.6 V across itself. So, you lose that voltage. In order to overcome this, we are going to use what is called the "super-diode". It is super, since the voltage drop is almost 0V! In order to achieve that, we include an OP-AMP with a diode, and that is all! The OP-AMP will compensate for the voltage drop of the diode, and you will have an almost ideal diode: Since there is negative feedback in this configuration, U5 will try its best to make \$V+=V-\$. So, whenever the input is, say, 3V, it will make its output 3.6V to compensate for the 0.6V voltage drop on D3. So, the output of this super-diode, and hence the \$V-\$ input, will be equal to the input voltage \$V+\$. However, when the \$V+\$ input is negative, D3 will not allow U5 to make the output negative. Also note that the negative rail for U5 is GND, which is 0 V. It will not be able to go below 0 V in any case, already. It works just like an ideal diode! Now, replace D3 in the above envelope follower circuit with a super-diode, and you have a better envelope follower! Let's look at our result: We are getting close. As you can see, the output of the envelope follower, which is the red line, can go from 2.5 V to 4 V. 2.5 V is no-sound, 4 V is loud-sound and 3.25 V is medium-sound. To scale that to what you have wanted, we can subtract the 2.5 V offset voltage and scale it.
So, when you subtract 2.5 V, it becomes: 0 V for no-sound, 1.5 V for loud-sound, 0.75 V for medium-sound, and so on. After that, if you multiply this by about 3 (strictly 10/3 ≈ 3.3 for a full 0 V to 5 V span), you will get exactly what you want: 0 V for no sound, 2.5 V for medium-sound and 5 V for loud-sound. To recap, what we want is this: \$V_{out}=(V_{in} - 2.5V) * 3 \$ In order to achieve this, we will use a differential amplifier, or in other words a "subtractor". When R1 = R2 and R3 = R4, the transfer function for the differential amplifier can be simplified to the following expression: $$V_{out}=\dfrac{R_3}{R_1}*(V_2-V_1)$$ If you make V1 = 2.5V and the R3/R1 ratio 3, then you will get the output you wish. Here is the complete schematic that will do what you want: I have used the LM324 OP-AMP here for simulation purposes. That will limit the maximum output voltage to 4V. In order to have full range output, you should use a rail-to-rail output OP-AMP. I would suggest the MCP6004. Change R1 and R2 until you have the desired result. Here is what I got with the simulation: Now, when measuring these values with the ADC, you will not get a linear sense of loudness; sound is better understood logarithmically, since our ears hear that way. So, you should use decibels. If you are not familiar with decibels, here is a great video tutorial about it. A quiet room, for example, is measured to be around 40 dB. A party in a room will make the room's level go up to 100 dB, or maybe 110 dB. On this website, you can find great info about it, and the image below is embedded from there. Think about the decibel levels and experiment with the voltage output of the circuit. Then, calculate the ADC resolution that you will need. Probably, you will be fine with a 12-bit ADC.
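As a starting point for the firmware side, here is a small Arduino sketch (my own, not part of the original answer) that converts the 10-bit ADC reading of this circuit's output into a rough decibel-style number; the quiet-room reference count and its dB offset are assumptions you would calibrate against a real sound level meter:

#include <math.h>

const int   MIC_PIN      = A0;    /* output of the subtractor stage      */
const float QUIET_COUNTS = 8.0;   /* assumed ADC reading in a quiet room */
const float QUIET_DB     = 40.0;  /* assumed dB level of that reading    */

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(MIC_PIN);               /* 0..1023 for 0..5 V */
  float counts = (raw < 1) ? 1.0 : (float)raw; /* avoid log10(0)     */
  /* 20*log10() maps the linear envelope voltage onto a dB-like scale */
  float db = QUIET_DB + 20.0 * log10(counts / QUIET_COUNTS);
  Serial.println(db);
  delay(100);
}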
{ "source": [ "https://electronics.stackexchange.com/questions/57824", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/9332/" ] }
57,845
I've seen tutorials aimed at beginners suggest the way to drive an LED from something without enough current drive is this: (option A) but why not this: (option B) Option B seems to have some advantages over option A: fewer components; the transistor does not saturate, leading to a faster turn-off; the base current is put to good use in the LED, instead of making the base resistor warm. And the advantages of option A seem to be few: it brings the load closer to the supply rail, but when Vcc is significantly greater than the forward voltage of the LED, this hardly matters. So, given these advantages, why would option A be preferred? Something I'm overlooking?
I would argue that there are fewer "gotcha's" with option A. I would recommend option A to people of unknown electronics skill because there's not a lot that can keep it from working. For option B to be viable, the following conditions must be true: (1) \$V_{CC_{LED}}\$ must be equal to \$V_{CC_{CONTROL}}\$; (2) \$V_{CC}\$ must be greater than \$V_{f_{LED}} + V_{BE}\$; (3) it is a topology unique to BJT devices. These conditions are not as universal as they might first seem. For example, with the first assumption, this rules out any auxiliary power supply for the load that is separate from the logic power supply. It also starts constricting values of \$V_{CC}\$ for a single LED when you start talking about blue or white LEDs with \$V_f\$ > 3.0 V and a controller running off a supply less than 5.0 V. And I think the other thing is that you can't really replace the BJT in option B with a MOSFET if you wanted to eliminate that base current. Additionally, it is more complicated (marginally, but still) to calculate your load resistance. With option A, you can use an analogy such as "consider the transistor to operate like a switch". This is easy to understand, and then you can use familiar equations to calculate \$R_{load}\$. \$R_{load}=\dfrac{V_{CC}-V_{f_{LED}}}{I_{LED}}\$ Compare that to what is required for option B and there is the marginal increase in difficulty: \$R_{load}=\dfrac{V_{CC}-V_{f_{LED}}-V_{BE}}{I_{LED}}\$ Couple that with the fact that the advantages of option B often are not needed. Aside from the reduced part count, the base current from option A shouldn't increase the power consumption by more than 10%, and LEDs are rarely (unsubstantiated qualitative guess) driven fast enough for BJT saturation to matter.
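A quick worked example of the two formulas (my own numbers, not from the original answer, assuming \$V_{CC} = 5\,V\$, a red LED with \$V_f = 2\,V\$, \$I_{LED} = 10\,mA\$ and \$V_{BE} = 0.7\,V\$): option A gives \$R_{load} = (5 - 2)/0.01 = 300\,\Omega\$, while option B gives \$R_{load} = (5 - 2 - 0.7)/0.01 = 230\,\Omega\$. Not harder in any deep sense, but it is one more term to remember and one more place for a beginner to go wrong.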
{ "source": [ "https://electronics.stackexchange.com/questions/57845", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/17608/" ] }
57,859
I'm getting some errors when I try to compile my design in Aldec's Active-HDL. # Warning: ELAB1_0026: BITADJ128.bde(BITADJ128.vhd) : (79, 0): There is no default binding for component "buf". (No entity named "buf" was found). # Warning: ELAB1_0026: BITADJ128.bde(BITADJ128.vhd) : (157, 0): There is no default binding for component "INV". (No entity named "INV" was found). # Warning: ELAB1_0026: BITADJ128.bde(BITADJ128.vhd) : (277, 0): There is no default binding for component "GND". (No entity named "GND" was found). I have added these items to the library multiple times, and in different ways, but it still gets hosed up. I'm wondering if anyone else has had a similar issue? I inherited a large design I'm converting from EDIF to VHDL and switching from Virtex-4 to Virtex-5, and there seems to be a symbol resolution problem.
{ "source": [ "https://electronics.stackexchange.com/questions/57859", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8245/" ] }
57,902
I've heard people pronouncing the following 'words' in different ways and I would like to know the correct way of pronouncing them: SPI (spy vs s.p.i); I2C (I.2.C vs I.squared.c); LED (lead vs L.E.D). I use s.p.i, i.2.c and l.e.d because these are all abbreviations and not really 'words'. I mean, people don't really call USB "us-b", right?
I think pronunciation is related to region, laziness and the presence of a phonetic pronunciation. Short acronyms are regional - several answers here indicate this by the variation in SPI pronunciation. Longer acronyms (4 letters or more) are generally pronounced phonetically, due to laziness, if they have a phonetic pronunciation. Some examples:
Longer acronyms with a phonetic pronunciation to some extent: UART - Yu-ART; ASCII - AS-KEY; VHSIC - VEE-H-SIC.
Acronyms with no phonetic pronunciation: VHDL - VEE-H-DEE-EL; XKCD - EX-KAY-CEE-DEE ;)
My pronunciation: SPI - ES-PE-EYE; I2C - EYE-squared-C; LED - EL-EE-DE; USB - YU-ES-BE.
{ "source": [ "https://electronics.stackexchange.com/questions/57902", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/9409/" ] }
58,027
[An antenna must have] current flowing along its length, so that the resulting fields radiate that energy into space. (Receiving antennas are just this process in reverse.) [This] explains why you can't just stick a small tank circuit on a board and expect it to radiate efficiently. (source) I understand this is true from experience, but I don't understand why. I guess the dimension of the antenna changes the fields it produces somehow, but how does this make energy radiate away more effectively? What does energy radiating away look like? I do understand the need to tune the antenna. I'm just wondering how, after we've tuned for maximal power transfer to the antenna, we get more of that energy to go to the receiving antenna.
Indeed it can be a very good antenna. Look no further than the transistor radios and AM band receivers. In those ubiquitous consumer goods the antenna consisted of a piece of very low loss ferrite with a very high permeability. This was wrapped in many amp*turns of very fine copper wire. The high permeability gave the antennas an effective cross-sectional area (due to that permeability, if I recall correctly) of a square mile or so, thus bringing the antenna's electrical size up to the dimensions of the wavelength that it was receiving. On a technical bent, you could consider that the antennas interacted with the magnetic field portion of the radiating Poynting vector.
{ "source": [ "https://electronics.stackexchange.com/questions/58027", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/17608/" ] }
58,236
I was just watching a mega factory video and wondered why they use an AC motor, which requires a power inverter, instead of a DC motor, which could be powered directly from their DC battery. Introducing an inverter means more cost (weight, controller, etc.). Are there any reasons for that? What are the differences between an AC and a DC motor that may have led to this decision? Also, does anyone know what kind of motor is used in other electric cars?
You're asking about the technical tradeoffs surrounding the selection of a traction motor for an electric vehicle application. Describing the full design tradespace is far beyond what can reasonably be summarized here, but I'll outline the prominent design tradeoffs for such an application. Because the amount of energy that can be stored chemically (i.e. in a battery) is quite limited, nearly all electric vehicles are designed with efficiency in mind. Most transit application traction motors for automotive applications range between 60kW and 300kW peak power. Ohm's law indicates that power losses in cabling, motor windings, and battery interconnects are \$P=I^2R\$. Thus cutting the current in half reduces resistive losses by 4x. As a result most automotive applications run at a nominal DC link voltage between 288 and 360V nom (there are other reasons for this selection of voltage, too, but let's focus on losses). Supply voltage is relevant in this discussion, as certain motors, like Brush DC, have practical upper limits on supply voltage due to commutator arcing. Ignoring more exotic motor technologies like switched/variable reluctance, there are three primary categories of electric motors used in automotive applications:
Brush DC motor: mechanically commutated, only a simple DC 'chopper' is required to control torque. While Brush DC motors can have permanent magnets, the size of the magnets for traction applications makes them cost-prohibitive. As a result, most DC traction motors are series- or shunt-wound. In such a configuration, there are windings on both stator and rotor.
Brushless DC motor (BLDC): electronically commutated by inverter, permanent magnets on rotor, windings on stator.
Induction motor: electronically commutated by inverter, induction rotor, windings on stator.
Following are some brash generalizations regarding tradeoffs between the three motor technologies. There are plenty of point examples that will defy these parameters; my goal is only to share what I would consider nominal values for this type of application.
- Efficiency: Brush DC: motor ~80%, DC controller ~94% (passive flyback), NET=75%. BLDC: motor ~93%, inverter ~97% (synchronous flyback or hysteretic control), NET=90%. Induction: motor ~91%, inverter ~97% (synchronous flyback or hysteretic control), NET=88%.
- Wear/Service: Brush DC: brushes subject to wear; require periodic replacement. Bearings. BLDC: bearings (lifetime). Induction: bearings (lifetime).
- Specific cost (cost per kW), including inverter: Brush DC: low - motor and controller are generally inexpensive. BLDC: high - high power permanent magnets are very expensive. Induction: moderate - inverters add cost, but the motor is cheap.
- Heat rejection: Brush DC: windings on rotor make heat removal from both rotor and commutator challenging with high power motors. BLDC: windings on stator make heat rejection straightforward; magnets on rotor have low-moderate eddy-current-induced heating. Induction: windings on stator make stator heat rejection straightforward; induced currents in the rotor can require oil cooling in high power applications (in and out via shaft, not splashed).
- Torque/speed behavior: Brush DC: theoretically infinite zero-speed torque, torque drops with increasing speed. Brush DC automotive applications generally require 3-4 gear ratios to span the full automotive range of grade and top speed. I drove a 24kW DC motor-powered EV for a number of years that could light the tires up from a standstill (but struggled to get to 65 MPH).
BLDC: constant torque up to base speed, constant power up to max speed. Automotive applications are viable with a single-ratio gearbox. Induction: constant torque up to base speed, constant power up to max speed. Automotive applications are viable with a single-ratio gearbox. It can take hundreds of ms for torque to build after application of current.
- Miscellaneous: Brush DC: at high voltages, commutator arcing can be problematic. Brush DC motors are canonically used in golf cart and forklift (24V or 48V) applications, though newer models are induction due to improved efficiency. Regenerative braking is tricky and requires a more complex speed controller. BLDC: magnet cost and assembly challenges (the magnets are VERY powerful) make BLDC motors viable for lower power applications (like the two Prius motor/generators). Regenerative braking comes essentially for free. Induction: the motor is relatively cheap to make, and power electronics for automotive applications have come down in price significantly over the past 20 years. Regenerative braking comes essentially for free.
Again, this is only a very top-level summary of some of the primary design drivers for motor selection. I've intentionally omitted specific power and specific torque, as those tend to vary much more with the actual implementation.
{ "source": [ "https://electronics.stackexchange.com/questions/58236", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/12105/" ] }
58,386
I have a project that I want to work on either a Uno or a Mega (or even a Due) and it would be nice if I didn't need two versions of the software. For example, on a Mega, to use SoftwareSerial, you have to use different pins than the ones on an Uno. See the docs on Software Serial . Anyway, it would be nice to detect that I'm using an Uno so I can just use pins 4 and 5 for TX/RX and if I'm using a Mega the software will detect and just use pins 10 and 11 (and of course, I'll have to wire it up differently but at least the software will be the same).
Run time To my knowledge you cannot detect the board type, but you can read the ATmega device ID. Check this question for how it can be done: Can an ATmega or ATtiny device signature be read while running? Note, though, that when using this method, several register assignments will change, not just the pinout. Therefore your code may get significantly more complex. The advantage is that if you manage to work around all changing register assignments and other hardware dependencies, you can use a single .hex file to program your devices directly from avrdude . Compile time Another way to figure out the board/controller type is at compile time. Basically you compile parts of the code or set macros depending on the device type configured in the Arduino IDE. Check this code snippet for an example:

#if defined(__AVR_ATmega1280__) || defined(__AVR_ATmega2560__)
#define DEBUG_CAPTURE_SIZE 7168
#define CAPTURE_SIZE 7168
#elif defined(__AVR_ATmega328P__)
#define DEBUG_CAPTURE_SIZE 1024
#define CAPTURE_SIZE 1024
#else
#define DEBUG_CAPTURE_SIZE 532
#define CAPTURE_SIZE 532
#endif

The code snippet was shamelessly copied from https://github.com/gillham/logic_analyzer/wiki Check that code for some more device-specific trickery. Depending on your host's operating system, the supported controller types can be found in the following file: Linux: /usr/lib/avr/include/avr/io.h Windows: ...\Arduino\hardware\tools\avr\avr\include\avr\io.h The use of the C preprocessor (by which the above code is handled) is probably out of scope for this site. http://stackoverflow.com would be the better place for detailed questions. If you are on Linux you can easily find all supported controller types by typing: grep 'defined (__AVR' /usr/lib/avr/include/avr/io.h | sed 's/^[^(]*(\([^)]*\))/\1/'
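For the run-time route, here is a minimal sketch of reading the device signature with avr-libc, assuming a part where the signature row is readable from application code (boot_signature_byte_get() in avr/boot.h handles the SIGRD access; the example byte values are the published signatures for the ATmega328P and ATmega2560):

#include <avr/boot.h>
#include <stdint.h>

/* Reads the 3-byte AVR device signature at run time.
   Works only on parts that allow reading the signature row
   from application code (SIGRD support). */
static void read_signature(uint8_t sig[3])
{
    sig[0] = boot_signature_byte_get(0x0000); /* 0x1E = Atmel */
    sig[1] = boot_signature_byte_get(0x0002); /* flash size indicator */
    sig[2] = boot_signature_byte_get(0x0004); /* device variant */
}

/* Example: 0x1E 0x95 0x0F identifies an ATmega328P (Uno),
            0x1E 0x98 0x01 identifies an ATmega2560 (Mega). */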
{ "source": [ "https://electronics.stackexchange.com/questions/58386", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/3534/" ] }
58,502
Are pull-up/down resistors (whether internal or external) only needed for MCU INPUT pins? In contrast, an MCU pin configured as an OUTPUT "knows what level it's at" because it does the driving - a "floating" MCU OUTPUT pin tied to some input of another circuit doesn't make sense, because the state of the MCU pin can only be high or low... do I have this right? Now, upon MCU bootup or failure, it may be beneficial to have a pull-up/down tied to this "MCU output to IC input" line to ensure that the input to some IC is never floating. Maybe I just answered my own question here... pull-up/down resistors can be used on both input and output pins, depending on application?
Pull-ups and pull-downs are normally used to ensure a line has a defined state while not actively driven. They are used on inputs to prevent floating lines, which would otherwise switch rapidly between high, low, and a middle "undefined" region. Outputs normally do not need them. But most MCU pins are GPIO, and sometimes on startup are defined as inputs instead of outputs. As you said, sometimes you don't want an IC input pin floating on startup, especially something like a reset pin that you would normally drive with your microcontroller's GPIO. This is when you use a weak pull-up or pull-down on the line. Because they are weak, and you choose the default state, they don't interfere with your circuit (if the line's idle state should be low, you choose a weak pull-down, and vice versa), but they do consume a bit of current whenever the line is driven against them. This is why you choose a resistor just weak enough for the job (the higher the value, the weaker the pull). Another common output setup that uses pull-ups (or, more rarely, pull-downs) is Open Drain or Open Collector connections. These only drive a connection low, or release the line, leaving it floating. The pull-ups are used to bring the line to a high logic state.
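As a concrete illustration of a weak pull-up on an input, here is a minimal AVR-style sketch (the pin choice is arbitrary, and the 20-50 kΩ figure is the typical spec range for AVR internal pull-ups, not a measured value):

#include <avr/io.h>

/* Configure PB0 as an input with the internal weak pull-up enabled
   (classic AVR idiom). */
void setup_input_pullup(void)
{
    DDRB  &= ~(1 << PB0);  /* direction: input */
    PORTB |=  (1 << PB0);  /* writing 1 to PORT while an input enables the pull-up */
}

With Vcc = 5 V and a nominal internal pull-up of 20-50 kΩ, the line leaks only about 0.1-0.25 mA, and only while something actively drives it low.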
{ "source": [ "https://electronics.stackexchange.com/questions/58502", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/15485/" ] }
58,591
I have in front of me a raw PCB. The surface mostly has three colors: dark green, orange-gold and white. As I understand, the green is resist, the orange-gold is copper, and the white just the silk screen. I find it surprising the color of the PCB's surface copper is quite different to the color of copper pipes. Please compare the orange-gold of PCBs, and the reddish-orange of copper pipes. Why does PCB copper look more yellow than copper pipes?
Because that board has been gold plated. Ironically, that saves money; copper tarnishes, and would need expensive cleaning immediately before soldering. And gold on the contact fingers makes for reliable connections. Other platings are possible : tin or silver, and formerly lead-based solder. Scrape some green solder-resist off the earth track and you would see a more reddish metal, though it would take a few months to tarnish to the colour of those pipes. EDIT : the gold plating on the contact fingers may be a different process (thicker plating, to resist wear) - plating on the board itself will be VERY thin.
{ "source": [ "https://electronics.stackexchange.com/questions/58591", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5872/" ] }
58,664
How fast is 1 MHz in an AVR microcontroller? Is it actually 1,000,000 Hz or is it 1,048,576 Hz (1,024 Hz * 1,024)?
Hz are always SI units: Mega = 10^6, or one million. Strictly speaking, 2^20 should use the Mi (mebi) prefix in all applications. I know those prefixes were codified rather late, but that was still in the 1990s, and they have been in popular use for over a decade.
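One place the distinction actually bites, as an illustrative sketch (my own example, using the standard AVR formula UBRR = F_CPU/(16·baud) - 1 for normal asynchronous mode): if you computed a UART divisor assuming "1 MHz" meant 2^20 Hz, the resulting baud rate would be about 7% off, far outside the few-percent tolerance async serial allows.

#include <stdio.h>

int main(void)
{
    const double baud  = 4800.0;
    const double f_si  = 1000000.0;  /* 1 MHz, SI: what the AVR actually runs at */
    const double f_bin = 1048576.0;  /* 2^20: the wrong assumption */

    /* Standard AVR formula, normal asynchronous mode: UBRR = F_CPU/(16*baud) - 1 */
    int ubrr_si  = (int)(f_si  / (16.0 * baud) - 1.0 + 0.5);
    int ubrr_bin = (int)(f_bin / (16.0 * baud) - 1.0 + 0.5);

    double actual_si  = f_si / (16.0 * (ubrr_si  + 1));
    double actual_bin = f_si / (16.0 * (ubrr_bin + 1)); /* the clock is really 10^6 */

    printf("UBRR assuming SI clock: %d -> %.0f baud (%+.1f%% error)\n",
           ubrr_si, actual_si, 100.0 * (actual_si / baud - 1.0));
    printf("UBRR assuming 2^20:     %d -> %.0f baud (%+.1f%% error)\n",
           ubrr_bin, actual_bin, 100.0 * (actual_bin / baud - 1.0));
    return 0;
}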
{ "source": [ "https://electronics.stackexchange.com/questions/58664", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/18308/" ] }
59,208
I am a licensed radio amateur, and find bewildering the many different explanations, which range from folksy urban myth to Maxwell-Heaviside Equations, of what happens at the termination of a transmission line or feeder. I realise that they all come to the same thing in the end (or should do, pun perfect), but none of them give me a gut feeling for what is going on. I like diagrams, so an answer in terms of (graphical) phasors for the currents and voltages at the load would suit me best. How, for instance, does a step pulse down the line cause twice the voltage at an open circuit termination? Similarly for current at a short circuit. And how is the reflected step generated by the inductance and capacitance of the line? Can anyone help, without getting all mathematical, and not telling any "lies to children"?
OK, for what it's worth, here's how I visualize it. As you say, a transmission line has both distributed capacitance and distributed inductance, which combine to form its characteristic impedance Z 0 . Let's assume we have a step voltage source whose output impedance Z S matches Z 0 . Prior to t=0, all voltages and currents are zero. At the moment the step occurs, the voltage from the source divides itself equally across Z S and Z 0 , so the voltage at that end of the line is V S /2. The first thing that needs to happen is that the first bit of capacitance needs to be charged to that value, which requires a current to flow through the first bit of inductance. But that immediately causes the next bit of capacitance to be charged through the next bit of inductance, and so on. A voltage wave propagates down the line, with current flowing behind it, but not ahead of it. If the far end of the line is terminated with a load of the same value as Z 0 , when the voltage wave gets there, the load immediately starts drawing a current that exactly matches the current that's already flowing in the line. There's no reason for anything to change, so there's no reflection in the line. However, suppose the far end of the line is open. When the voltage wave gets there, there's no place for the current that's flowing just behind it to go, so the charge "piles up" in the last bit of capacitance until the voltage gets to the point where it can halt the current in the last bit of inductance. The voltage required to do this happens to be exactly twice the arriving voltage, which creates an inverse voltage across the last bit of inductance that matches the voltage that started the current in it in the first place. However, we now have V S at that end of the line, while most of the line is only charged to V S /2. This causes a voltage wave that propagates in the reverse direction, and as it propagates, the current that's still flowing ahead of the wave is reduced to zero behind the wave, leaving the line behind it charged to V S . (Another way of thinking about this is that the reflection creates a reverse current that exactly cancels the original forward current.) When this reflected voltage wave reaches the source, the voltage across Z S suddenly drops to zero, and therefore the current drops to zero, too. Again, everything is now in a stable state. Now, if the far end of the line is shorted (instead of open) when the incident wave gets there, we have a different constraint: The voltage can't actually rise, and the current just flows into the short. But now we have another unstable situation: That end of the line is at 0V, but the rest of the line is still charged to V S /2. Therefore, additional current flows into the short, and this current is equal to V S /2 divided by Z 0 (which happens to be equal to the original current flowing into the line). A voltage wave (stepping from V S /2 down to 0V) propagates in the reverse direction, and the current behind this wave is double the original current ahead of it. (Again, you can think of this as a negative voltage wave that cancels the original positive wave.) When this wave reaches the source, the source terminal is driven to 0V, the full source voltage is dropped across Z S and the current through Z S equals the current now flowing in the line. All is stable again. Does any of this help?
One advantage of visualizing this in terms of the actual electronics (as opposed to analogies involving ropes, weights or hydraulics, etc., etc.), is that it allows you to more easily reason about other situations, such as lumped capacitances, inductances or mismatched resistive loads attached to the transmission line.
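If you want to watch the LC-ladder picture play out numerically, here is a toy leapfrog simulation (my own sketch, not from the answer above; the per-section L and C are arbitrary values chosen to give Z0 = 50 Ω, and the source is matched at 50 Ω). The open far end sits at 0 V until the wave arrives, then jumps to the full source voltage, i.e. twice the VS/2 wave launched into the line.

#include <stdio.h>
#include <math.h>

#define N 200  /* number of LC sections */

int main(void)
{
    /* Per-section values picked to give Z0 = sqrt(L/C) = 50 ohms. */
    const double L = 2.5e-7, C = 1.0e-10;
    const double dt = 0.5 * sqrt(L * C);    /* stable leapfrog step */
    const double Vs = 1.0, Rs = 50.0;       /* matched step source */

    double V[N + 1] = {0};  /* node voltages (V[N] is the open end) */
    double I[N]     = {0};  /* inductor currents between nodes */

    int steps = (int)(5.0 * N * sqrt(L * C) / dt); /* ~5 one-way delays */
    for (int n = 0; n < steps; n++) {
        /* inductor updates: L dI/dt = voltage across the inductor */
        I[0] += dt / L * ((Vs - I[0] * Rs) - V[1]);
        for (int i = 1; i < N; i++)
            I[i] += dt / L * (V[i] - V[i + 1]);

        /* capacitor updates: C dV/dt = current in minus current out */
        for (int i = 1; i < N; i++)
            V[i] += dt / C * (I[i - 1] - I[i]);
        V[N] += dt / C * I[N - 1];          /* open end: no outgoing current */

        if (n % (steps / 10) == 0)
            printf("t = %8.2e s  V_mid = %5.3f  V_end = %5.3f\n",
                   n * dt, V[N / 2], V[N]);
    }
    return 0;
}

Replacing the last boundary update with a permanent V[N] = 0 models the shorted end instead; the current behind the returning wave then doubles, as described above.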
{ "source": [ "https://electronics.stackexchange.com/questions/59208", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/6361/" ] }
59,212
I am trying to build a current sensor here. My original thoughts start from this paper: "Real-time current-waveform sensor with plugless energy harvesting from AC power lines for home/building energy-management systems" ( http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5746292 ). It is OK if you do not scan that paper. Now just take the DC battery into consideration. I want to detect the current as accurately as I can while it charges. The charging current could be very small or very big (the upper limit is several amps, up to tens of amps, and the lower limit is smaller; the smaller the better, I hope). I have two main thoughts on how to realize it. One way is to use a Hall effect current sensor, but I know it is costly, and I am still trying to find out more about it. (I am not sure if it is the best way; what is the state-of-the-art Hall effect current sensor now?) The other way is the shunt resistor approach, depicted like this: With regard to the current-detecting device, I can use a differential amplifier or a more complex device to get the voltage, but here is the question: the resistance changes as the battery charges, because the resistor's value is a function of the temperature T. Since I want very accurate current sensing, how can I deal with the resistor's change with temperature? By the way, even for resistors with a low temperature coefficient, such as open-air sense resistors, the resistance range is something like 0.005 ohm to 0.03 ohm at 70°, which is big to me. So is it possible that I don't even need to know the exact resistor value (just a rough value, because it changes with temperature) and can still detect the current flowing through the resistor? Or is there some temperature compensation method that makes the change smaller than an open-air sense resistor's?
{ "source": [ "https://electronics.stackexchange.com/questions/59212", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/17251/" ] }
59,277
Since the advent of the many microcontroller development boards, like Arduino, there have been a number of relay modules sold to drive mains AC loads. A lot of these seem to use an optocoupler, driver transistor and a relay to drive the load (example on Amazon ) Why are they implemented like this? Some of my thoughts: Relays provide as good or better isolation than most optocouplers There is still a driver transistor present, so it is not component saving There is still inductive kickback protection, so it is not component saving Optocouplers are not as cheap as transistors, so additional cost compared to just a driver transistor There is no need to meet any regulatory requirements as these are DIY products I have never seen small mains relays driven by optocouplers in commercial equipment A number of these boards don't seem to be designed brilliantly (no regard to clearance or creepage), so even if the optocoupler is simply to provide two layers of isolation, the board fails at this.
First, a possibly more permanent link to this product is here . And the schematic is here . (Edit 7/29/2015: Ironically my two links are now broken and OP's Amazon link is still useful) Two reasons it makes sense to use optoisolators here: The controlling device might be very far away so that it doesn't share a common ground reference with the relay board (except as connected through a long cable). Using the optoisolator means the control signal is used purely as a differential signal between Vcc and the control signal, both sourced from the controller circuit; ground potential differences won't affect the operation. The relay coil voltage is not necessarily the same as the controller's Vcc. It could even be generated by an off-line (unisolated) supply. The optoisolator then provides isolation between the potentially unisolated JD-VCC supply and the controller circuits.
{ "source": [ "https://electronics.stackexchange.com/questions/59277", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2005/" ] }
59,282
I'm using a 74HC595 with SoftPWM to control some LEDs. When the 74HC595 is off, it's outputting about 0.57V, enough to turn my LEDs on somewhat. How do I work out the value of the pull-down resistor (R1) so that the LED is completely off when it's meant to be, but also able to reach maximum brightness?
{ "source": [ "https://electronics.stackexchange.com/questions/59282", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/17575/" ] }
59,325
I've seen some circuits where a decoupling capacitor is used as well as a reservoir capacitor, like this (C4 and C5): I've read about decoupling capacitors and for me it looks as if they are meant to remove small fluctuations in the supply voltage. Then I thought - wasn't that the purpose of a reservoir capacitor as well? Why wouldn't the reservoir capacitor be able to filter out the small fluctuations, if it is able to filter out the large fluctuations? So I feel like I have a basic misunderstanding here. What is the purpose of a decoupling capacitor next to a reservoir capacitor, when we assume we place both equally near to the power consuming part? Or is the only advantage of the decoupling capacitor that it is smaller and can therefore be easily placed more near to the power consuming part?
The most likely reason this is done is that, in real life, capacitors do not have infinite bandwidth. Generally, the higher the capacitance of the capacitor, the less it is able to react to high frequencies, while small-valued capacitors react better to higher frequencies, as a typical impedance-versus-frequency curve shows. Using two different-valued capacitors together is just done to improve the response of the filtering.
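To see the effect numerically, here is a small sketch treating each capacitor as a series R-L-C (the ESR and ESL values are my assumed, typical-order-of-magnitude numbers, not datasheet figures): the big reservoir capacitor wins at low frequency, but above its self-resonance it looks inductive and the small decoupler takes over.

#include <stdio.h>
#include <math.h>

/* |Z| of a capacitor modeled as series ESR + ESL + C.
   ESR/ESL numbers below are assumed, typical orders of magnitude only. */
static double zmag(double f, double c, double esr, double esl)
{
    double w = 2.0 * M_PI * f;
    double x = w * esl - 1.0 / (w * c);   /* net reactance */
    return sqrt(esr * esr + x * x);
}

int main(void)
{
    const double freqs[] = { 1e3, 100e3, 1e6, 10e6, 100e6 };
    for (int i = 0; i < 5; i++) {
        double f = freqs[i];
        double bulk = zmag(f, 100e-6, 0.10, 10e-9);  /* 100 uF reservoir */
        double hf   = zmag(f, 100e-9, 0.01, 1e-9);   /* 100 nF decoupler */
        printf("%9.0f Hz: bulk %8.3f ohm, 100nF %8.3f ohm\n", f, bulk, hf);
    }
    return 0;
}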
{ "source": [ "https://electronics.stackexchange.com/questions/59325", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/-1/" ] }
59,583
The story for this signal is the following. I've bought an NAD C 356BEE amplifier with an integrated MDC DAC module . It has optical and USB input. The optical is OK, but if I connect the DAC to my PC with USB, then it makes clicking/popping noise at specified times. The click frequency is somehow related to the signal sample rate. For example at 96 kHz it pops in every 2.5 seconds, but at 48 kHz it pops at 30 seconds. I've played a sine wave, and I've recorded the noise and zoomed in to the waveform. It's a very short signal, about 0.008 seconds. Do you have any idea what it could be? The amplitude of the noise signal is much higher than the test signal. The length of the noise signal is random (but very short, you hear just a click), but the waveform is always the same for the same test signal. Different test frequencies cause different error signals. It seems like the error signal is some transformation of the original.
That looks like a sine wave with the y-axis wrapped around. Here's my attempt at recreating it: This is a plot of the function \$1.25 \cdot \sin(t) - \operatorname{round}(1.25 \cdot \sin(t))\$, where \$\operatorname{round}(x)\$ rounds \$x\$ to the nearest integer. Perhaps the highest bit of your signal is getting cut off? That would seem likely to produce such a waveform.
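If you want to reproduce the comparison yourself, here is a trivial sketch (my own, mirroring the formula above) that prints one cycle of \$1.25 \cdot \sin(t) - \operatorname{round}(1.25 \cdot \sin(t))\$ for plotting:

#include <stdio.h>
#include <math.h>

/* Recreates the suspected fault: a 1.25x full-scale sine whose top bit
   wraps, i.e. y = 1.25*sin(t) - round(1.25*sin(t)). */
int main(void)
{
    for (int i = 0; i < 100; i++) {
        double t = 2.0 * M_PI * i / 100.0;   /* one cycle */
        double s = 1.25 * sin(t);
        printf("%f\t%f\n", t, s - round(s)); /* pipe to a plotter of choice */
    }
    return 0;
}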
{ "source": [ "https://electronics.stackexchange.com/questions/59583", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/1514/" ] }
59,595
Spring cleaning, and I'm trying to get power supplies for all my devices with missing power supplies. They're all the typical barrel power connector, and I'm having a dickens of a time trying to figure out the pin/hole diameter. I ordered the power supplies I needed based on outside diameter (e.g., 5.5mm in my example below) and was surprised to discover that while the jack fit, the center pin did NOT. How do I prevent this from happening in the future? Do they even make calipers that can get into the hole to measure the pin diameter? Radio Shack has their little keyring behind the counter with every known tip size, but all they can get from that is which stock number fits on their universal wall wart. Personally, I think that these types of "universal" kits are the worst thing to happen to electronics in, like, FOREVER. Too many parts to misplace and the tip-to-cable connector is almost always proprietary. If I try to pump them for information about what the outer and inner diameters are, they want to know if I'm happy with my current cellular provider. As you may surmise, I'm not a big fan of trusting my local Radio Shack for electronics guidance. So...that leaves me with a bunch of power supplies that don't fit their devices, and me a little peeved that I have to deal with RMAs, return shipping, etc., especially when I really don't have a clue how to figure out what to order. That also begs the question about how to ensure that I buy the right jack when designing something that NEEDS wall wart power. Where do I even start? Anyone have any ideas on how to finding the correct barrel & pin diameters when I don't have specs on the jack? Is it really trial and error? or is there some measurement device that's available to help?
Those are barrel power connectors. Looking at Digikey , it looks like common inner diameters with a 5.5mm outer diameter are 2mm, 2.1mm, and 2.5mm, though your target application may still use a custom size that matches none of these. The one I usually use for my projects is 2.1mm*5.5mm if I can, but as far as I know this is by no means a rule of thumb. Knowing what the jack is being used for may help in identifying a correct size.
{ "source": [ "https://electronics.stackexchange.com/questions/59595", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10660/" ] }
60,199
I know that the Arduino Nano can handle 12 volts, and it says it is recommended to power it between 7 and 12 volts. So I am wondering do I just hack together a 12 volt adapter to a Mini-B or is it possible to power it through the ICSP header?
The Arduino Nano accepts the 7-12 Volt input power not from the USB port , but from the Vin pin (pin30), see the diagram below: If you want to supply regulated power, then a 5 Volt regulated adapter needs to feed the +5V pin (pin27) instead. From the official Arduino Nano page : Power: The Arduino Nano can be powered via the Mini-B USB connection, 6-20V unregulated external power supply (pin 30), or 5V regulated external power supply (pin 27). The power source is automatically selected to the highest voltage source. The FTDI FT232RL chip on the Nano is only powered if the board is being powered over USB. As a result, when running on external (non-USB) power, the 3.3V output (which is supplied by the FTDI chip) is not available and the RX and TX LEDs will flicker if digital pins 0 or 1 are high.
{ "source": [ "https://electronics.stackexchange.com/questions/60199", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/19899/" ] }
60,342
For example: The datasheet for ATtiny2313 (as do most Atmel AVR datasheets) states: 128 Bytes In-System Programmable EEPROM Endurance: 100,000 Write/Erase Cycles Imagine a program only requires two bytes to store some configuration, the other 126 bytes are effectively wasted. What concerns me is that regular updates of the two configuration bytes may wear out the device's EEPROM and render it useless. The whole device would become unreliable, because at a certain moment you just can't keep track of which bytes in EEPROM are unreliable. Is there a smart way to do wear leveling on a microcontroller's EEPROM when you effectively use only one or two bytes out of available 128?
The technique I normally use is to prefix the data with a 4-byte rolling sequence number, where the largest number represents the latest / current value. In the case of storing 2 bytes of actual data that gives 6 bytes total, which I then form into a circular queue arrangement, so for 128 bytes of EEPROM it would contain 21 entries and increase endurance 21 times. Then when booting, the largest sequence number can be used to determine both the next sequence number to be used and the current tail of the queue. The following C pseudo-code demonstrates this; it assumes that upon initial programming the EEPROM area has been erased to values of 0xFF, so I ignore a sequence number of 0xFFFFFFFF:

#include <stdint.h>

struct {
    uint32_t sequence_no;
    uint16_t my_data;
} QUEUE_ENTRY;   /* global scratch entry; sizeof is 6 on AVR (no padding) */

#define EEPROM_SIZE 128
#define QUEUE_ENTRIES (EEPROM_SIZE / sizeof(QUEUE_ENTRY))

uint32_t last_sequence_no;
uint8_t queue_tail;
uint16_t current_value;

// Called at startup
void load_queue() {
    int i;
    last_sequence_no = 0;
    queue_tail = 0;
    current_value = 0;
    for (i = 0; i < QUEUE_ENTRIES; i++) {
        // Following assumes you've written a function where the parameters
        // are address, pointer to data, bytes to read
        read_EEPROM(i * sizeof(QUEUE_ENTRY), &QUEUE_ENTRY, sizeof(QUEUE_ENTRY));
        if ((QUEUE_ENTRY.sequence_no > last_sequence_no) &&
            (QUEUE_ENTRY.sequence_no != 0xFFFFFFFFUL)) {  // skip erased slots
            queue_tail = i;
            last_sequence_no = QUEUE_ENTRY.sequence_no;
            current_value = QUEUE_ENTRY.my_data;
        }
    }
}

void write_value(uint16_t v) {
    queue_tail++;
    if (queue_tail >= QUEUE_ENTRIES)
        queue_tail = 0;
    last_sequence_no++;
    QUEUE_ENTRY.sequence_no = last_sequence_no;
    QUEUE_ENTRY.my_data = v;
    // Following assumes you've written a function where the parameters
    // are address, pointer to data, bytes to write
    write_EEPROM(queue_tail * sizeof(QUEUE_ENTRY), &QUEUE_ENTRY, sizeof(QUEUE_ENTRY));
    current_value = v;
}

For a smaller EEPROM a 3-byte sequence would be more efficient, although it would require a bit of bit slicing instead of using standard data types.
{ "source": [ "https://electronics.stackexchange.com/questions/60342", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8627/" ] }
60,427
I searched and read many similar questions, but did not find a specific answer for how to calculate the correct value for a pulldown resistor for a MOSFET's floating gate. It seems like everyone dodges the question with a 1K, 10K, or 100K "ought to work". If I had an N-Channel IRF510 and I was going to run the gate from 9V to switch a \$V_{DS}\$ of 24V at 500mA, what value should I use for the gate's pulldown resistor and how did you calculate that value?
Here is a quantitative way to determine the boundaries of acceptable gate termination resistance \$R_g\$ for power MOSFETs . This will be a lazy lazy lazy (\$L^3\$) approach. So: Very simple FET model, just \$C_{\text{gd}}\$, \$C_{\text{gs}}\$, and \$R_g\$ included. FET capacitors regarded as linear only. FET gate has been pulled down to the source through \$R_g\$. \$V_{\text{ds}}\$ forcing voltage no more complicated than a linear ramp will be used. The intent of a (\$L^3\$) approach is to get maximum insight/usefulness with minimum effort, by using a model that is as simple as possible but still meaningful. Model is a simple capacitive divider with resistive pull down. \$V_{\text{gs}}\$ was solved for in the frequency domain, and then inverse Laplace transformed for the time domain. Three operating conditions are analyzed using this model: A voltage appears on the drain to source while \$R_g\$ = \$\infty\$. This is a condition that should never occur in a real circuit, but is instructive to think about. The gate is terminated to the source through \$R_g\$ with some finite value, while any change to \$V_{\text{ds}}\$ is slow and infrequent. Every FET in use spends some time in this condition. For example during startup all FETs go through a period where they should be off and any change of \$V_{\text{ds}}\$ happens over milli-seconds. During this type of operation, the FET is essentially a passive device. Frequent short rise and fall time switching with \$R_g\$ having some finite value. Most FETs end up spending extended time in this condition. 1. The Unterminated Gate: \$R_g\$ = \$\infty\$ After setting \$R_g\$ = \$\infty\$: \$V_{\text{gs}}\$ = \$\frac{C_{\text{gd}} V_{\text{ds}}}{C_{\text{gd}}+C_{\text{gs}}}\$ So, in this case, \$V_{\text{gs}}\$ is just a scaled version of \$V_{\text{ds}}\$, and the scale factor is the capacitive divider of \$C_{\text{gd}}\$ and \$C_{\text{gs}}\$. For the IRF510: \$V_{\text{ds-max}}\$ = 100V \$C_{\text{gd}}\$ = \$C_{\text{rss}}\$ = 20pF \$C_{\text{gs}}\$ = \$C_{\text{iss}}\$ - \$C_{\text{gd}}\$ = 135pF - 20pF = 115pF \$V_{\text{gth-min}}\$ = 2V For a drain to source voltage greater than 14V, \$V_{\text{gs}}\$ will be greater than the 2V threshold and the part will start to conduct. It doesn't matter how the voltage appears on the drain, just that it is there. Pretty obvious why nobody ever leaves a FET gate unterminated. 2. FET off During System Startup: \$R_g\$ = Some Finite Value Allowing \$R_g\$ to be a variable finite value: \$V_{\text{gs}}\$ = \$C_{\text{gd}} V_{\text{dsSlp}} R_g \left(1-e^{-\frac{t}{R_g \left(C_{\text{gd}}+C_{\text{gs}}\right)}}\right)\$ \$V_{\text{dsSlp}}\$ is the slope or linear ramp forcing voltage (in volts/second) across the drain to source. If \$V_{\text{ds}}\$ rises from 0 to 25V in 2 milli-seconds, \$R_g\$ will need to be less than 11 MOhms for \$V_{\text{gs}}\$ to remain below the 2V threshold and the FET to remain off. Such slow rates of change (in the 1 to 10 milli-second range) for \$V_{\text{ds}}\$ are why Olin Lathrop can correctly say \$R_g\$ values of 1kOhm, 10kOhm, or 100kOhm ought to work. So, yes, for a passive pull down to keep a FET off during system startup or other seldom switched low dV/dt application, almost any kilo-Ohm resistor will do. Why even waste time looking at this? If that's all there is we can all just roll over, go back to sleep, and be happy. But, there's a lot more to it, so let's look at a little of that next. 3.
\$R_g\$ Requirements With High dV/dt at Drain to Source -- The dV/dt Issue Nearly all FETs end up being frequently switched, between 10KHz and 500KHz, with short rise and fall time \$V_{\text{ds}}\$ transitions. Most FETs will be turned off in 20 to 100 nano-seconds, and this is where gate termination becomes important. Let's look at the IRF510 with \$V_{\text{ds}}\$ rising linearly from 0 to 25V in 50 nano-seconds. Using the equation in condition 2 above: \$V_{\text{gs}}\$ = \$ \text{(20pF) }\text{(25V/50nsec) }\text{Rg} \left(1-e^{-\frac{\text{50 nsec}}{\text{(20pF + 115pF)} \text{ Rg}}}\right)\$ So, plugging in a value of 270 Ohms for \$R_g\$ gives \$V_{\text{gs}}\$ ~ 2V. That would be the highest value of \$R_g\$ that could be used without the FET possibly turning back on (a numeric sketch of this limit, and of the minimum below, follows at the end of this answer). \$R_g\$ greater than this maximum value allows the FET to be turned on a little or a lot, depending on the energy forcing \$V_{\text{ds}}\$. The FET could turn on just enough to leak current and dissipate power, while showing no real effect on \$V_{\text{ds}}\$, or could turn on enough to cause \$V_{\text{ds}}\$ to drop, which in the right conditions can cause oscillation. Clearly, the higher the peak value or transition rate of \$V_{\text{ds}}\$, the lower the gate circuit resistance must be. Finding the Minimum Value for \$R_g\$ Why not just make \$R_g\$ zero, or as small as possible? So far in this analysis, the gate circuit is dominated by resistance, but there is also inductance in the gate circuit. If gate resistance is minimized, gate inductance becomes dominant in circuit dynamics, and with \$C_{\text{gs}}\$ forms an LC resonant circuit. LCR circuits with Q > 1 become increasingly ringy, which is a problem for FET gate control if charge is injected through \$C_{\text{gd}}\$ from \$V_{\text{ds}}\$, or from the switching waveform of the gate drive itself. For example, an LCR circuit with a Q of 2 will ring to about 1.5 times its driving voltage. For a gate drive with a 14 V source, a Q of 2 would be enough to damage the gate of most FETs. For a series LC resonant circuit : Q = \$\frac {Z_o} {R}\$ and \$Z_o\$ = \$\sqrt {\frac {L} {C}}\$ Let's look at a specific case with the IRF510. Including routing and package inductance, the gate circuit could easily have 11 or 12 nH of inductance. Recall that the IRF510 has a \$C_{\text{gs}}\$ of 115pF, so \$Z_o\$ would be about 10 Ohms. Matching \$R_g\$ to \$Z_o\$ would give a Q of 1, which would be the maximum Q for non-overshoot of the drive waveform. Minimum \$R_g\$ should be greater than \$Z_o\$. Some Things to Keep in Mind
{ "source": [ "https://electronics.stackexchange.com/questions/60427", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8782/" ] }
60,732
I have a 1.5 amp-hour, 12 V battery and I have a 10 amp-hour 12 V battery. I know the voltage will increase to 24 V when they are put in series, but I don't know what goes on amp-hour wise. Does the new amp-hour rating take the amp hour of the low amp hour rating, does it take the 10 amp hours or is it like an average?
It is bad practice to connect batteries in series when they don't have the same capacity. The battery with the smaller capacity will be empty before the larger one, resulting in a lower voltage for the smaller battery. At that point things will start to get interesting as the larger battery will start to charge the smaller one through the connected circuit and with reversed voltage . The cell is not designed for being reversed and charged and bad things may happen a.o.: leaking acid and exploding. Neither of these situations are desirable. This is also the reason why most manuals of battery operated devices urge to replace all batteries at the same time.
{ "source": [ "https://electronics.stackexchange.com/questions/60732", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10190/" ] }
60,865
I have an IC that has an GPIO with which I would like to drive a LED. Since the device will be running off battery, keeping the power use low (Zero maybe) while the LED is off as a priority. The GPIO supplies 3.3V when turned on and 0.0V votes when turned off. It also has a limit of a maximum of 4mA. The LED has a forward current of 20mA and a desired forward voltage of 2.0V. When the LED is turned on it will most likely be blinking (using PWM) in the low kilohertz range. After poking around I believe this may be the type of circuit I need. Question 1: Am I even close to being on the right track. Question 2: What is the correct component to use for item (5), (Transistor or Mosfet), and how do I go about finding one (at the local Frys, RadioShack, Online) and how are they identified(specified)? Question 3: Will the choice of item (5) have any effect on the ohm value of the resistor item (3)? Apart from the normal Ohms law for the 3.0V power source and the 2.0V LED. Question 4: What would be the ohm value of the resistor item (2), if any is required.
The circuit you show should work, but is unnecessarily complicated and expensive. Here is something simpler and cheaper: Just about any small NPN transistor you can find will work in this role. If the B-E drop of the transistor is 700 mV, the LED drops 2.0 V, then there will be 600 mV accross R1 when the LED is on. In this example, that will allow 17 mA to flow thru the LED. Make the resistor higher if you can tolerate lower light from the LED and want to save some power. Another advantage of this circuit is that the collector of the transistor can be connected to something higher than 3.3 V. This won't change the current thru the LED, just the voltage drop on the transistor and therefore how much it dissipates. This can be useful if the 3.3 V is coming from a small regulator and the LED current would add a significant load. In that case, connect the collector to the unregulated voltage. The transistor in effect becomes the regulator just for the LED, and the LED current will come from the unregulated supply and not use up the limited current budget of the 3.3 V regulator. Added: I see there is some confusion how this circuit works and why there is no base resistor. The transistor is being used in emitter follower configuration to provide current gain, not voltage gain. The voltage from the digital output is sufficient to drive the LED, but it can not source enough current. This is why current gain is useful but voltage gain not necessary. Let's look at this circuit assuming the B-E drop is a fixed 700 mV, the C-E saturation voltage is 200 mV, and the gain is 20. Those are reasonable values except that the gain is low. I am using a low gain deliberately for now because we'll see later that only a minimum gain is needed from the transistor. This circuit works fine as long as the gain is anywhere from that minimum value to inifinity. So we'll analyze at the unrealistically low gain of 20 for a small signal transistor. If all works well with that, we're fine with any real small signal transistors you will come accross. The 2N4401 I showed can be counted on to have a gain of about 50 in this case, for example. The first thing to note is that the transistor can't saturate in this circuit. Since the base is driven to at most 3.3 V, the emitter is never more than 2.6 V due to the 700 mV B-E drop. That means there is always a minimum of 700 mV accross C-E, which is well above the 200 mV saturation level. Since the transistor is always in it's "linear" region, we know that the collector current is the base current times the gain. The emitter current is the sum of these two currents. The emitter to base current ratio is therefore gain+1, or 21 in our example. To calculate the various currents, it is easiest to start with the emitter and use the above relationships to get the other currents. When the digital output is at 3.3 V, the emitter is 700 mV less, or at 2.6 V. The LED is known to drop 2.0 V, so that leaves 600 mV accross R1. From Ohms law: 600mV / 36Ω = 16.7mA. That will light the LED nicely but leave a little margin to not exceed its 20 mA maximum. Since the emitter current is 16.7 mA, the base current must be 16.7 mA / 21 = 790 µA, and the collector current 16.7 mA - 790 µA = 15.9 mA. The digital output can source up to 4 mA, so we are well within spec and not even loading it significantly. The net effect is that the base voltage controls the emitter voltage, but the heavy lifting to provide the emitter current is done by the transistor, not the digital output. 
The ratio of how much of the LED current (the emitter current) comes from the collector compared to the base is the gain of the transistor. In the example above that gain was 20. For every 21 parts of current thru the LED, 1 part comes from the digital output and 20 parts from the 3.3 V supply via the collector of the transistor. What would happen if the gain were higher? Even less of the overall LED current would come from the base. With a gain of 20, 20/21 = 95.2% comes from the collector. With a gain of 50 it is 50/51 = 98.0%. With infinite gain it is 100%. This is why this circuit is actually very tollerant of part variation. Whether 95% or 99.9% of the LED current comes from the 3.3 V supply via the collector doesn't matter. The load on the digital output will change, but in all cases it will be well below its maximum, so that doesn't matter. The emitter voltage is the same in all cases, so the LED will see the same current whether the transistor has a gain of 20, 50, 200, or more. Another subtle advantage of this circuit which I mentioned before is that the collector need not be tied to the 3.3 V supply. How do things change if the collector were tied to 5 V, for example? Nothing from the LED or the digital output's point of view. Remember that the emitter voltage is a function of the base voltage. The collector voltage doesn't matter as long as it's high enough to keep the transistor out of saturation, which 3.3 V already was. The only difference will be the C-E drop accross the transistor. This will increase the power dissipation of the transistor, which in most cases will be the limiting factor on maximum collector voltage. Let's say the transistor can safely dissipate 150 mW. With the 16.7 mA collector current we can calculate the collector to emitter voltage to cause 150 mW dissipation: 150 mW / 16.7 mA = 9 V. We already know the emitter will be at 2.6 V, so the maximum collector voltage would be 9.0 V + 2.6 V = 11.6 V. This means that in this example we can tie the collector to any handy supply from 3.3V to 11.6 V. It doesn't even need to be regulated. It could actively fluctuate anywhere within that range and the LED current would remain nicely steady. This can be useful, for example, if the 3.3 V is made by a regulator with little current capability and most of that is already allocated. If it is running from a roughly 5 V supply, for example, then this circuit can get most of the LED current from that 5 V supply while still keeping the LED current nicely regulated . And, this circuit is very tolerant of transistor part variations. As long as the transistor has some minimum gain, which is well below what most small signal transistors provide, the circuit will work fine. One of the lessons here is to think about how a circuit really works. There is no place in engineering for knee jerk reactions or superstitions like to always put a resistor in series with the base. Put one there when it's needed, but note that it isn't always, as this circuit shows.
{ "source": [ "https://electronics.stackexchange.com/questions/60865", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/19400/" ] }
61,088
I did a search for 0603 sized high brightness Kingbright LEDs on element14, results are here . Why is there such a range of luminous intensity, even within the same colour, despite forward voltage, forward current, size and viewing angle being the same? I am using the LEDs as indicators in a very low power battery powered device. The LEDs will be pulsed at times to give user feedback. Would the LED that gives the brightest output be the one that gives the highest luminous intensity, or is there another consideration?
This is going to be really long, so just skip to the summary sentence at the end to avoid TL;DR. There are several factors contributing to varying millicandela ratings of LEDs, and more importantly the relevance of the mcd rating to the intended purpose: Angle of dispersion / beam angle : This one is the most obvious, and fairly intuitive as has been pointed out in user20264's answer. The narrower the beam angle (how far off the axis the LED's light is visible) the greater the luminous intensity for a given luminous flux: Basically the same amount of energy being pushed through a greater or smaller solid-angle. Paraphrasing Wikipedia , a light source emits one candela in a given direction if it emits monochromatic green light with a frequency of 540 THz (555 nm wavelength, yellow-green), with a radiant intensity of 1/683 watts per steradian in said direction . ( source ) This is why illumination grade LEDs are often rated in lumens rather than mCd, as the MCD can be quite misleading depending on added elements (lenses, diffusers, reflectors) that would change the effective beam angle, by definition. Practical measurement of " peak luminous intensity ": While peak luminous intensity is supposed to be measured as a single point , on-axis value, there is no global standard for the geometry and size of this "point" sensor: Is it 1 degree around the axis, 0.01 sq.mm, square bare-wafer photosensor / PIN Photodiode, circular lensed sensor (if so, what diameter lens?), half-theta angle (yes, some scientific papers use this as a measurement area), or something else entirely? Is the distance to the sensor measured from the LED package surface, the wafer surface, or the LED lens inner or outer surface? You will find nearly as many answers as there are manufacturers, and clearly, keeping this flexible allows for some "creative accounting", to favor one type of LED versus another. Geometry of lens : The specific optical arrangement used for the LED lens will change the distribution of light intensity across the illumination beam angle - One can get very intense light at the center of the beam and a long tail of fall-off, or fairly even distribution of the intensity between axis and maximum viewable angle, just like with camera optics. This impacts the " half-theta " angle, the angle at which intensity falls off to half that at the axis. Depending on the lens and thus the intensity distribution curve, half theta angles can be a small fraction of beam angle (center-intense beams), or heading towards half the beam angle or more. A smaller half-theta angle i.e. a skinny tall bell-curve with long tails, translates to high mcd values on the axis, but sharp drop-off of visibility off the axis. For greater range, such as for infrared remote controls, a smaller half-theta is of interest, while for visual indicator / illumination needs, a greater half-theta works out better, even for a fixed beam angle. Angle of view : This relates closely to the previous two points: If the half-theta or beam angles are narrow, the mcd figures can look very high, but practical usability of the LED as an indicator by itself is questionable. Yet, if a light-pipe is used, such as on some indicator boards, or for fiber optics, a narrow half-theta is a good thing . Transmission coefficient of lens This relates to the specific light wavelength emitted by an LED: Manufacturers typically standardize on one or a very small number of materials for the design of the lens element of their LEDs. 
Evidently, any given transparent material will have different transmission characteristics for different light wavelengths. Thus, what may be the best possible lens material for a green LED would likely be less than ideal for red. For white, this is even more complex, because common "white" LEDs have a phosphor layer of Yttrium Aluminum Garnet on a Gallium Nitride chip emitting a deep blue spectral line. The combination of the natural and the phosphorescence spectral lines requires compromises in transmission and phase, so the combination is anything but ideal in transmission for each spectral line, by the nature of the optical design. Clear v/s translucent LEDs: Milky LEDs make the mcd ratings practically irrelevant, since they are designed to disperse the generated light as evenly as possible across the surface of the LED - near-180 degree ( or should it be, near 90 degree? ) solid angles, and half-theta values of nearly the same, are common, and desirable. Thus, a milky LED will typically have poor mcd values for the same chemistry and construction as a "water-clear" LED, and colored clear LEDs will lie somewhere in the middle. Yet, for indication purposes a translucent LED is perhaps the most ideal! Wavelength of emitted light As would be seen from the definitions of luminous intensity, this differs from radiant intensity in taking into account human-vision perceived intensity of the light in question. Human beings characteristically are most sensitive to the yellow-green portion of the spectrum, around 555 nanometer wavelength: ( Source is Wikipedia, high resolution image here ) Thus, for a given amount of electrical power through an LED, the luminous intensity would vary widely with LED color, and of course drops down to zero for ultraviolet and infrared, which human vision cannot perceive. Chemistry of LED junction : Enough has been written about this, in other answers as well as elsewhere on the web, so just a brief mention: The chemistry determines the emitted color-spectrum ( see previous point ), as well as the conversion efficiency, of an LED's "Light Emitting" aspect. Also, minor variations cause spectral shifts, so two nominally identical chemistries need not be . It is thus stating the obvious that this determines both luminous flux and intensity. Efficiency of wafer / batch: Despite the best manufacturing process controls, LED manufacture is notorious for its variation in efficiency and output characteristics between batches of wafers, and even within a batch or a single wafer. Manufacturers address this by a process of " binning " - While white LEDs are binned by a complex process, by color as well as light output, color LEDs go through an essentially linear binning process for light output. Different light output levels are then packaged as differently rated products. While reputable manufacturers typically do a sincere job of binning and published rating for their LEDs, no-name LEDs are infamous for wide variation in intensity within a stated datasheet rating, as much as 1:3 ratios in extreme cases. n.b. Some manufacturers such as Philips (Luxeon range) are beginning to claim a binning-free process , due to modern improvements in manufacturing technique. Encapsulation of LED: While this is largely covered in the lens design discussion a few points previously, additional factors such as position of contact whisker / wire bond do make significant impact in LED light output. The wire bond creates occlusion of the light source, the nature of which varies between designs. 
An obvious response to this would be, why not always design the wire bonds to occlude as little as possible? This isn't done because the wire bond positioning, material and thickness are not just about electrical conduction, but also thermal dissipation. Some designs need better cooling, hence a whisker attached to the approximate middle of the chip, or even multiple wire bonds from a lead-frame, are opted for. Other designs do not really care about this, the power involved being too low or the substrate being better designed for thermal off-take. These trade-offs determine the occlusion compromises and thus the actual measured luminous intensity at the axis of the LED's beam. Orientation of LED substrate within package This factor has little relevance to most modern LEDs, especially SMD parts. However, older LED designs, and possibly some still in production, sometimes had orientation tolerance issues on the LED emission surface. In simple terms, the actual LED chip may or may not be perfectly perpendicular to the axis of the LED package. It is intuitive therefore that measured luminous intensity along the axis would vary from piece to piece, or between production runs, for such LEDs. Actual power of LED: While the rated current of an LED is typically controlled by your circuit to meet the datasheet specifications, the rated and actual junction voltages at that set current will invariably differ, both due to manufacturing tolerances, and due to shortcuts taken in datasheet specifications. This means that the actual power converted from electricity to light will vary as per P = V x I , for each LED design, for each minor variation in semiconductor doping, and for a variety of other factors. Part of this is addressed by the binning process, and partly the datasheets for "different LED models" which just happen to be different batches of wafers, reflect the resultant change in measured intensity. Most important, marketing mumbo-jumbo : While this fudge-factor is perhaps the least recognized by the engineering community, several years of using and recommending LEDs for various products has shown that there is a very strong influence the marketing department of a manufacturer has upon the data shown in promotional materials and datasheets for a given LED product. This is probably more pronounced in the LED industry than in most other semiconductor trades. If there are several different ways of measuring or representing any LED data, such as luminous intensity, and there are several standards or guidelines in place in the industry for any such parameter, you can be sure that the marketing drivers will ensure that different product lines or models will use different measures and measurement methodologies, even within a single manufacturer, so as to put the best possible spin on every LED. While the more reputable manufacturers may stick to merely using different intensity measurement equipment as convenient, the less scrupulous ones do not shy away from outright prevarication for their product publications. What makes this more amusing is that some of the most reputable manufacturers are also resellers, i.e. they source their non-premium product lines from the same factories as bulk sellers, so the only difference is the branding on the box or reel, and of course the 100% to 300% brand-value mark-up. How many of these resellers actually bother to re-validate the measurements and parameters, is anyone's guess. 
TL;DR summary: Don't trust the millicandela ratings on any LED; test them yourself if you absolutely need real data.
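If you do have to compare datasheet numbers anyway, a rough sanity check is to fold the viewing angle back in and compare total flux instead of on-axis intensity. Here is a minimal sketch using the common uniform-cone approximation (a crude model, per all the caveats above; the two example LEDs are invented):

#include <stdio.h>
#include <math.h>

/* Rough mcd -> lumen conversion assuming uniform intensity inside the
   stated viewing cone. Crude, but enough to compare apples to apples. */
static double mcd_to_lumens(double mcd, double full_angle_deg)
{
    double half = full_angle_deg * M_PI / 360.0;       /* theta/2 in radians */
    double solid_angle = 2.0 * M_PI * (1.0 - cos(half)); /* steradians */
    return (mcd / 1000.0) * solid_angle;
}

int main(void)
{
    /* Similar total flux can hide behind wildly different mcd figures:
       narrow beams inflate the rating. */
    printf("5000 mcd @ 15 deg  = %.2f lm\n", mcd_to_lumens(5000, 15));
    printf(" 500 mcd @ 120 deg = %.2f lm\n", mcd_to_lumens(500, 120));
    return 0;
}

The "impressive" 5000 mcd narrow-beam part here actually emits roughly a sixth of the light of the "dim" 500 mcd wide-angle part.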
{ "source": [ "https://electronics.stackexchange.com/questions/61088", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/6481/" ] }
61,674
Some of the books I have state that power comes from the negative terminal of the power supply. If that is the case, the circuit starts from the negative, right? Say I have a simple circuit with a 6 V supply, a 1 kΩ resistor and an LED. If I wire the resistor to the positive source, then wire that to the anode of the LED and back to ground, the circuit works fine. How can this be the case? Shouldn't the resistor be between the cathode and the negative source?
It is true that, in most conductors, the actual charge carriers are negatively charged electrons, which leave the negative terminal of the power source, pass through the circuit, and return to the positive terminal of the source. However, early scientists studying electricity didn't know about electrons, so arbitrarily declared that current flowed from the positive terminal of the battery, through the circuit, returning to the negative terminal of the battery. Today, almost everyone uses this "conventional" (positive charge) current, and you will avoid confusion by using it also. Circuits work equally well whether you wish to think of them using conventional (positive) or electron (negative) current. For your LED and resistor circuit, it doesn't matter which component is connected to the positive terminal of the battery, as Kirchhoff's Current Law says that the current is the same at all points in a series circuit. That is, the resistor will limit current through the LED, whether it is placed "before" or "after" the LED, regardless of which way you think the current is flowing.
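As a quick sanity check, assuming a typical red LED with a forward drop of about 2 V (a value not given in the question): the same current flows in either arrangement, I = (6 V - 2 V) / 1 kΩ = 4 mA, so the LED sees identical drive whether the resistor sits on the anode side or the cathode side.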
{ "source": [ "https://electronics.stackexchange.com/questions/61674", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/20474/" ] }
61,873
I'm trying to understand why open hardware is so much harder to come by than software. I've tried looking around online and I couldn't find a satisfactory explanation. I understand that hardware is so much easier to keep proprietary and so much harder (impossible) to reverse engineer (in the case of ICs, not PCBs), but why would that hold back open hardware initiatives? Is it the cost of manufacturing? Is it the lack of shared knowledge about hardware design? Is it the complexity involved? With the advent of FPGAs making it so easy to design hardware (although they themselves are proprietary as well), I would expect that open hardware would be taking off at a much faster rate than it has been. I'm sorry if this is the wrong place to ask, but this has been perplexing me for about a year now and has made me wish I had taken Computer Science instead of Computer Engineering.
Everyone can edit source code at home; very few people have a chip fabrication plant to knock out a couple of custom chips. Bytes are free to create and distribute, materials are not. There's also the issue that source code is portable, and although CAD files etc. are sort of portable, there's a lot more overhead, more errors, and more setup cost and wasted materials. 3D printing crosses some of the boundaries; perhaps a bit of effort could do the same for the (much older) technology of machining, for both parts and PCBs. Edited to add: re-reading the question, and perhaps the intent of the question relating to FPGAs, I would say that they're currently still a bit of a dark art to many, and just not on the radar of most people. The entry barrier is quite high, in terms of effort, understanding, and tools.
{ "source": [ "https://electronics.stackexchange.com/questions/61873", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/20211/" ] }
62,353
My Intel CPU changes clock speed depending on the usage, but how does it decide what clock speed to run at? Is the clock speed determined by the OS software using an algorithm, or is it hardware based? Is it dependent on the # of interrupts? The cache turnover? Does the CPU itself set its own clock? Or does a separate controller set it? Or software?
The core clock of the CPU isn't received directly from the motherboard. That clock is usually much slower (often by a factor of 10 or more) than the internal frequency of the CPU. Instead, the clock signal from the motherboard is used as the reference frequency for a higher-frequency phase-locked-loop controlled oscillator inside the CPU. The generated clock runs at some multiple of the reference clock, and that multiple can be changed by setting certain registers in the CPU. The actual generation of the clock is done purely in hardware. To reduce power even further, the CPU also signals the voltage regulator supplying its core voltage to run at a lower set point. At lower frequencies the CPU can run at a lower voltage without malfunctioning, and because power consumption is proportional to the square of the voltage, even a small reduction in voltage can save a large amount of power. The voltage and frequency scaling is done by hardware, but the decision to run in a low-power mode is made by software (the OS). How the OS determines the optimal mode to run in is a separate, messier problem, but it likely comes down mostly to what percentage of time the system has been idle lately. Mostly idle, lower the frequency. Mostly busy, raise the frequency. Once the OS decides the frequency to run at, it's just a matter of setting a register. Reference: " Enhanced Intel SpeedStep Technology for the Intel Pentium M Processor "
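To make that software decision concrete, here is a minimal sketch in C of the idle-based policy described above. The thresholds, multiplier range and set_multiplier() hook are invented for illustration; real governors (such as Linux's "ondemand") are considerably more elaborate.

/* Hypothetical frequency-governor tick: raise the PLL multiplier when the
 * CPU has been busy, lower it when it has been mostly idle. */
#define MIN_MULT 6
#define MAX_MULT 20

extern void set_multiplier(int m);   /* stands in for the model-specific register write */

static int multiplier = MIN_MULT;

void governor_tick(double idle_fraction)   /* fraction of the last interval spent idle */
{
    if (idle_fraction < 0.2 && multiplier < MAX_MULT)
        multiplier++;                      /* mostly busy: raise frequency */
    else if (idle_fraction > 0.8 && multiplier > MIN_MULT)
        multiplier--;                      /* mostly idle: lower frequency */
    set_multiplier(multiplier);            /* hardware PLL relocks to the new ratio */
}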
{ "source": [ "https://electronics.stackexchange.com/questions/62353", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/20710/" ] }
62,357
As I'm learning about computer architecture, I'm trying to understand it from the ground up (transistors in CMOS in particular). I came across the simple 6T schematic for SRAM (2 inverters). It mostly makes sense, but I have a few questions. What exactly is being done with whatever is stored here? Is it being sent to the CPU? A decoder? Etc.? I'd assume we use some sort of decoder/encoder to write to specific cells? And while I understand how the memory cell works (it keeps its data through feedback), are we reading the cell's contents with the word lines or the bit lines? What about writing to them? Is the ~Bit Line (~BL) completely useless to us? Like, how would I go about reading and writing to an array like this?
{ "source": [ "https://electronics.stackexchange.com/questions/62357", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/-1/" ] }
63,661
I have found at some places, especially: Raspberry Pi Forums eLinux RPi Hardware page the following wording: S2: DSI interface. 15-pin surface mounted flat flex connector (possibly no-fit) S5: MIPI CSI-2 interface. 15-pin surface mounted flat flex connector (possibly no-fit). P2: 8-pin 2.54 mm header expansion (header not fitted on Revision 2.0 boards), providing GPU JTAG (ARM11 pinout, pin 7 is nofit for locating) For a non-English reader this is not 100% clear. Could you please tell me what it means?
This must be British. For them, "fit" means something like what we would call "install". For us, "fit" means how well something fits, meaning how good it is at mechanically going into the right mounting holes or whatever, or how effective it is overall in the role it is being used in. In this case "no-fit" means "do not install". This is often done when a part may be useful for original testing in the lab. Once the product has been verified, the part is of no use anymore. Instead of respinning the PCB, you leave the pads there but just don't install the part during manufacturing. The same PCB can be used to build different variants of a product depending on which parts are installed or not. In one case I had a product that was to be sold with either a RS-232 or CAN interface. Since space was tight, I re-used the PCB area for the CAN driver chip and the RS-232 chip. The pads of the two chips sort of overlapped such that only one could be installed at a time. There were some other parts that had to be installed or not depending on the variant being built.
{ "source": [ "https://electronics.stackexchange.com/questions/63661", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/11834/" ] }
64,063
I am using a voltage regulator, and to get cleaner power, the datasheet recommends using a 0.33 uF capacitor. However, it doesn't say what type it wants. Stupidly, I went out and bought a 10-pack of 0.33 uF 50 V radial electrolytic capacitors. After looking it up on this site, I found that the symbol means it wants an unpolarized capacitor. Will mine still work even though they are polarized? Does this really affect anything in this circuit? Also, there is a 20% tolerance. Will this affect this circuit? And so I don't have to ask a similar question again, how did you get that? I know they have different tolerances and ratings depending on the material, but does it really matter? Transistor datasheet: I can get the voltage regulator datasheet if anybody needs it. Thanks in advance.
Relax... It will work just fine. Make sure to orient the positive terminal towards the input pin of the regulator (pin #1). To be extra sure, use two in parallel (since you have them anyway!) to reduce the equivalent series resistance (ESR). That's just a precaution since I haven't read your datasheet and electrolytic types can be higher ESR than ceramics. Also, I'm assuming your maximum input voltage is less than 25V. Sizing... The values given are the minimum value needed for stability plus a little margin (usually). The regulator is a closed-loop system. It watches what happens on the output and adjusts "stuff" internally to make sure the output (really a scaled-down version of the output) always equals a desired value. Problems occur when it starts chasing its tail. If, as a result of it changing "stuff" internally, the input voltage also starts to change (or the output changes too quickly) then the changes the regulator made will have too much of an effect and it will have to undo the excess. This corrective change can also overshoot the mark, requiring another corrective change... as you can see, without sufficient "stability" in the system, the regulator can output a continuously fluctuating voltage rather than the flat line you hope for when employing a regulator. The capacitors slow down voltage changes, thereby helping to ensure overall stability.
{ "source": [ "https://electronics.stackexchange.com/questions/64063", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/18442/" ] }
64,490
I want to run a microcontroller from a 1S LiPo through a 3 V linear regulator. I need to measure the battery voltage, however. The problem with using a voltage divider is that it would drain the battery over time, and the battery may or may not have protection circuitry built in. Since the AVR I'm using recommends an ADC source impedance of no higher than 10 kΩ, I can't make the divider too large either. Can anyone suggest a solution that would allow me to monitor this voltage without killing an unprotected battery over a couple of months? The circuit might enter deep sleep mode for an extended period, meaning a voltage divider solution would consume the most power. I ended up using both Hanno's and Andy's solutions. Thanks for all the input. Can only choose one answer, unfortunately.
The voltage divider needs to join the MCU in deep-sleep mode then... This can be achieved with a P-channel FET (for instance). When the MCU wakes up, it will want to measure the battery voltage, so what it can do is turn on a circuit formed around a P-channel FET that connects the battery +V to the voltage divider: The ADC input is shown to the right, and there will be no voltage reaching it unless the MCU has activated the BC547 via the 10 kΩ resistor. Without activation, the P-channel FET is turned off and virtually open circuit. If you can program the MCU to have a pull-down on its control pin when asleep, that should be it; else add another (say) 10 kΩ resistor from that point to ground, which ensures the P-channel FET is completely off. A small word of warning: choose a P-channel FET with low leakage current when off, else there will be a slight drain on battery life, but most FETs are going to be under 100 nA and many in the region of 1 nA. One final thing: how does the voltage regulator perform on its standby current when the micro is off? Do you need to take care of that as well?
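A minimal sketch of the wake-measure-sleep sequence in Arduino-style C++; the pin numbers, settle delay and divider ratio are illustrative assumptions, not values taken from the answer:

const int enablePin = 4;    // drives the BC547 base resistor
const int sensePin  = A0;   // junction of the voltage divider

void setup() {
  pinMode(enablePin, OUTPUT);
  digitalWrite(enablePin, LOW);        // divider disconnected while sleeping
}

float readBatteryVolts() {
  digitalWrite(enablePin, HIGH);       // turn on the P-FET via the BC547
  delay(2);                            // let the divider settle
  int raw = analogRead(sensePin);
  digitalWrite(enablePin, LOW);        // cut the divider's drain again
  return raw * (5.0 / 1023.0) * 2.0;   // assumes a 1:2 divider and a 5 V reference
}

void loop() {
  float v = readBatteryVolts();
  // ... act on v, then return to deep sleep ...
  delay(60000);
}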
{ "source": [ "https://electronics.stackexchange.com/questions/64490", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/6173/" ] }
65,200
Suppose I have a 100 mAh battery at 20 V. I connect a 1 kΩ resistor across it. How much heat will be generated, and how can I find the temperature rise in the resistor? As the battery operates I think that the current flow will reduce over time, but I am not sure about the voltage for a real battery. Perhaps I am not giving sufficient information here; I am sorry for that. I just wish to know: what information is needed to make such a calculation? Have you ever done it? In the ideal case (taking only the most significant factors into consideration), what factors are considered to make an estimate of the heat dissipation and temperature rise, and why would the real heat dissipation and temperature in an actual practical experiment be different? I know this question looks hard, but I will be very happy if I can finally have this mystery resolved.
The power delivered to a resistor, all of which it converts to heat, is the voltage across it times the current thru it:

P = IV

where P is power, I is current, and V is voltage. The current thru a resistor is related to the voltage across it and the resistance:

I = V/R

where R is the resistance. With this additional relation, you can rearrange the above equations to make power a direct function of voltage or current:

P = V²/R
P = I²R

It so happens that if you stick to units of volts, amps, watts, and ohms, no additional conversion constants are required. In your case you have 20 V across a 1 kΩ resistor:

(20 V)²/(1 kΩ) = 400 mW

That's how much power the resistor will be dissipating. The first step to dealing with this is to make sure the resistor is rated for that much power in the first place. Obviously, a "¼ Watt" resistor won't do. The next common size is "½ Watt", which can take that power in theory with all appropriate conditions met. Read the datasheet carefully to see under what conditions your ½ Watt resistor can actually dissipate a ½ Watt. It might specify that ambient has to be 20 °C or less with a certain amount of ventilation. If this resistor is on a board that is in a box with something else that dissipates power, like a power supply, the ambient temperature could be significantly more than 20 °C. In that case, the "½ Watt" resistor can't really handle ½ Watt, unless perhaps there is air from a fan actively blowing across its top. To know how much the resistor's temperature will rise above ambient you will need one more figure, which is the thermal resistance of the resistor to ambient. This will be roughly the same for the same package types, but the true answer is available only from the resistor datasheet. Let's say, just to pick a number (out of thin air, I didn't look anything up, example only), that the resistor with suitable copper pads has a thermal resistance of 200 °C/W. The resistor is dissipating 400 mW, so its temperature rise will be about (400 mW)(200 °C/W) = 80 °C. If it's on an open board on your desk, you can probably figure 25 °C maximum ambient, so the resistor could get to 105 °C. Note that's hot enough to boil water, but most resistors will be fine at this temperature. Just keep your finger away. If this is on a board in a box with a power supply that raises the temperature in the box 30 °C from ambient, then the resistor temp could reach (25 °C) + (30 °C) + (80 °C) = 135 °C. Is that OK? Don't ask me, check the datasheet.
{ "source": [ "https://electronics.stackexchange.com/questions/65200", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/20711/" ] }
65,246
Other then touching the leads and getting shocked, if I plug a 1M, 0.5W resistor into the 120V outlet, would it be bad? I calculated that \$\frac{120 V}{1 M\Omega} = 0.12 mA\$, which shouldn't do anything bad if only 0.12 mA is going through, but I don't know if AC is different or something.
Yes, you've basically got the right idea. 120 V / 1 MΩ = 120 µA, which is the same 0.12 mA you calculated. That's very little. The more relevant calculation is how much power the resistor will dissipate. That is the voltage across it times the current thru it. By Ohm's law you can rearrange those equations to realize that is also the square of the voltage across it divided by the resistance:

(120 V)²/1 MΩ = 14.4 mW

That's again very little. You probably wouldn't even notice that getting warm if you touched it, although in this case touching it would be a bad idea due to the high voltage across the two leads. Note that since 120 V is the RMS voltage, that is the equivalent DC voltage that will deliver the same power, so the above is correct and would also work identically with 120 V DC. However, there is one additional wrinkle with AC, which is the peak voltage the resistor can withstand. That does NOT average out, since insulation breaks down with instantaneous voltage regardless of what the average over some time might be. Since the 120 V RMS is a sine, the peaks are sqrt(2) higher, which is 170 V. The resistor needs to be rated for at least that much voltage, else its insulation might break down or it might arc between the leads.
{ "source": [ "https://electronics.stackexchange.com/questions/65246", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10190/" ] }
65,308
The test leads on my multimeter are too large to fit in the holes in my breadboard. How can I easily connect my multimeter to test voltage on my breadboard?
Use alligator clips on your DMM test leads and clip them onto the leads or wire terminals of your breadboard where you want to test. And please, for heaven's sake, don't try forcing those test lead pins into the breadboard holes; they damage the copper strips inside (personal experience).
{ "source": [ "https://electronics.stackexchange.com/questions/65308", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/18442/" ] }
65,436
I would like to trigger a PIR sensor using an infrared LED. Is that possible? My goal is to simulate motion in front of the PIR, when there otherwise is none. I tried using a handheld remote control, but that didn't seem to work.
People often use imprecise terms to describe infrared, and it is further complicated in that, with respect to infrared, "that word infrared ... I don't think it means what you think it means" in a lot of cases. PIR, aka passive infrared, sensors are pyroelectric devices that are optimized to detect mammalian body temperatures (around 300 K); these warm bodies emit light in around the 10 µm to 14 µm wavelength range. Some people call this the "mid infrared", but the trend is towards using the term "thermal infrared" or TIR. The IR LED you are using probably emits around 750 nm to 900 nm, so close to 1 µm in wavelength. Some people call this "near IR", but those people also tend to call the 2-4 µm wavelength range the "mid infrared" as well. Confusing? Yep. It comes from different historical uses: one from the military, one from chemistry/astronomy. So you are at least a factor of 10X away in wavelength terms. And an LED emits light in a very, very narrow band of energies (it is an electronic effect, after all). Also, PIRs are "designed" to detect rather largish bodies, which means a fair amount of energy or photon flux. A blackbody emitter will increase energy at all wavelengths with increasing temperature. So an emitter at 27 °C (300 K) will emit less light at 10 µm than an emitter at 100 °C (373 K) in the same band of energies. So if you want to have an emitter that will trigger the PIR, make a temperature-controlled emitter, run it at 100 °C to be safe, and it will emit a lot more light in the 10-14 µm band than a body-temperature device. On second thought, make it 70 °C just to be safe; 100 °C is a little too hot. Read up about blackbody emitters for fun.
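To put rough numbers on the mismatch (this is Wien's displacement law, added for illustration rather than taken from the answer above): λpeak ≈ 2898 µm·K / T. A 310 K body peaks near 2898/310 ≈ 9.3 µm, squarely in the PIR's passband, while a blackbody would need to be near 2898/0.9 ≈ 3200 K to peak at the LED's roughly 0.9 µm output. A remote-control LED is simply in the wrong decade of wavelength.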
{ "source": [ "https://electronics.stackexchange.com/questions/65436", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/22538/" ] }
65,503
Is there any reason why people are still using (and implementing in new systems) normal EEPROMs instead of flash memory nowadays? From the flash memory Wikipedia article: Flash memory was developed from EEPROM (electrically erasable programmable read-only memory). Would there be any disadvantages (power consumption, space, speed, etc.) to using flash instead of normal EEPROM?
To be pedantic, FLASH memory is merely a form of EEPROM. There is a marketing / branding aspect here. Typically, the distinction used today is that EEPROMs are single-byte (or storage word) erasable / rewritable, while FLASH is block-based for erase/write operations. Relevant to the question: EEPROMs continue to be popular due to maximum erase/write cycle ratings being an order of magnitude or two better than FLASH. Also, with investments in design typically having been amortized over time, as with any mature technology, the cost of production and testing reduces compared to a newer technology.
{ "source": [ "https://electronics.stackexchange.com/questions/65503", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/-1/" ] }
65,533
I recently bought this MCU and in another question asked how I could flash it. The response was to use a USB-to-serial connector (which I wanted to find on newark.com but could not). I wanted to know if I could solder this micro-USB female connector to the I/O pins of my MCU and be able to flash the MCU over micro-USB from my Raspberry Pi. Can I?
{ "source": [ "https://electronics.stackexchange.com/questions/65533", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/9515/" ] }
65,910
I am running a 160 A DC current through a 1/0 AWG wire. Will the connection be more reliable with crimped connections, soldered connections, or screw terminals? I have looked online and it seems that all of those connectors have their advantages and disadvantages, but for something with such high DC amperage, what will be the best? I would lean towards soldering because a good solder connection will not have any air in it, but I want to know if I am wrong with that thought.
For high currents and thick wires, a gas-tight crimped junction is the industry-standard choice. While solder appears to have its advantages, the key issue to keep in mind is the challenge of soldering 1/0 AWG copper wire, where the thermal conductivity of the wire itself will rapidly draw heat away from the soldering location, and the insulation etc. elsewhere on the wire would get overheated and damaged. Of course, for such uses, a blow-torch type gas soldering gun would be used instead of a conventional soldering iron, but the issue remains. Another concern with soldering is that the solder itself could potentially melt and run, leading to an all-round mess, if the junction were to heat up enough, a distinct possibility at 160 Amperes. Screw terminals would work, but the risk is of the terminal tabs coming loose over time due to mechanical vibration, and also of oxide formation at the metal contact surface, leading to increased resistance, thereby heat, and another all-round mess. A crimped spade terminal actually creates a metal-to-metal bond at the surface between the wire and the terminal, and if done right, no gas remains between the surfaces. This ensures longevity and safety, making this the preferred mechanism in industrial implementations.
{ "source": [ "https://electronics.stackexchange.com/questions/65910", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10423/" ] }
66,145
I recently got an AVRISP mkII AVR programmer, and I have an ATtiny85 and an ATmega328. I was wondering how I could program these chips (with the programmer), but when I tried getting Atmel Studio 6, it is only for Windows. Is there a way I could do this in Linux (Ubuntu specifically)? Any suggestions? Thanks!
I don't have the time for a full explanation, but I can give you, cookbook-style, the commands I use on my Linux box to program AVRs:

Preparations

On Ubuntu, make sure several required packages are installed:

sudo apt-get install avr-libc avrdude binutils-avr gcc-avr srecord

Optionally throw in gdb-avr simulavr for debug and simulation. I started by creating a directory in which all my ATtiny projects find a home:

mkdir ~/attiny; cd ~/attiny

For each project I create a dedicated subfolder (and I don't mind long names):

mkdir waveShare4digit8segmentDisplay; cd waveShare4digit8segmentDisplay

Create source

Edit the source file with your favorite text editor:

vi project.cpp

Settings

The commands below rely heavily on environment variables, to keep maintenance easy. The base name of the files used/created:

src=project

Common compiler flags:

cflags="-g -DF_CPU=${avrFreq} -Wall -Os -Werror -Wextra"

The variables below may need to be changed depending on the exact programmer you use. Refer to the man pages for details.

baud=19200                     # the baud rate your programmer communicates at with the PC
programmerDev=/dev/ttyUSB003   # the device name where your programmer is located; check dmesg output for details
programmerType=avrisp          # this may be different for your exact programmer

The variables below depend on the exact controller you want to program:

avrType=attiny2313             # check avrdude -c $programmerType for supported devices
avrFreq=1000000                # check the controller's datasheet for the default clock

Compile

First step is to create an object file:

avr-gcc ${cflags} -mmcu=${avrType} -Wa,-ahlmns=${src}.lst -c -o ${src}.o ${src}.cpp

Second step is to create an ELF file:

avr-gcc ${cflags} -mmcu=${avrType} -o ${src}.elf ${src}.o

Third step is to create an Intel Hex file; this is the file that is actually sent to the programmer:

avr-objcopy -j .text -j .data -O ihex ${src}.elf ${src}.flash.hex

Programming

Final step is to program the device:

avrdude -p${avrType} -c${programmerType} -P${programmerDev} -b${baud} -v -U flash:w:${src}.flash.hex

Makefile

As an alternative to remembering the commands, I cooked up a makefile to my personal liking; you can save it under the name Makefile (mind the capital M). It works as follows:

make makefile    Edit the makefile.
make edit        Edit the source file.
make flash       Program the device's flash memory.
make help        List other commands.

Here is the makefile (remember that the recipe lines in a real makefile must be indented with tabs):

baud=19200
src=project
avrType=attiny2313
avrFreq=4000000 # 4MHz for accurate baudrate timing
programmerDev=/dev/ttyUSB003
programmerType=arduino
cflags=-g -DF_CPU=$(avrFreq) -Wall -Os -Werror -Wextra
memoryTypes=calibration eeprom efuse flash fuse hfuse lfuse lock signature application apptable boot prodsig usersig

.PHONY: backup clean disassemble dumpelf edit eeprom elf flash fuses help hex makefile object program

help:
	@echo 'backup       Read all known memory types from controller and write it into a file. Available memory types: $(memoryTypes)'
	@echo 'clean        Delete automatically created files.'
	@echo 'disassemble  Compile source code, then disassemble object file to mnemonics.'
	@echo 'dumpelf      Dump the contents of the .elf file. Useful for information purposes only.'
	@echo 'edit         Edit the .cpp source file.'
	@echo 'eeprom       Extract EEPROM data from .elf file and program the device with it.'
	@echo 'elf          Create $(src).elf'
	@echo 'flash        Program $(src).hex to controller flash memory.'
	@echo 'fuses        Extract FUSES data from .elf file and program the device with it.'
	@echo 'help         Show this text.'
	@echo 'hex          Create all hex files for flash, eeprom and fuses.'
	@echo 'object       Create $(src).o'
	@echo 'program      Do all programming to controller.'

edit:
	vi $(src).cpp

makefile:
	vi Makefile

#all: object elf hex

clean:
	rm $(src).elf $(src).eeprom.hex $(src).fuses.hex $(src).lfuse.hex $(src).hfuse.hex $(src).efuse.hex $(src).flash.hex $(src).o
	date

object:
	avr-gcc $(cflags) -mmcu=$(avrType) -Wa,-ahlmns=$(src).lst -c -o $(src).o $(src).cpp

elf: object
	avr-gcc $(cflags) -mmcu=$(avrType) -o $(src).elf $(src).o
	chmod a-x $(src).elf 2>&1

hex: elf
	avr-objcopy -j .text -j .data -O ihex $(src).elf $(src).flash.hex
	avr-objcopy -j .eeprom --set-section-flags=.eeprom="alloc,load" --change-section-lma .eeprom=0 -O ihex $(src).elf $(src).eeprom.hex
	avr-objcopy -j .fuse -O ihex $(src).elf $(src).fuses.hex --change-section-lma .fuse=0
	srec_cat $(src).fuses.hex -Intel -crop 0x00 0x01 -offset 0x00 -O $(src).lfuse.hex -Intel
	srec_cat $(src).fuses.hex -Intel -crop 0x01 0x02 -offset -0x01 -O $(src).hfuse.hex -Intel
	srec_cat $(src).fuses.hex -Intel -crop 0x02 0x03 -offset -0x02 -O $(src).efuse.hex -Intel

disassemble: elf
	avr-objdump -s -j .fuse $(src).elf
	avr-objdump -C -d $(src).elf 2>&1

eeprom: hex
	#avrdude -p$(avrType) -c$(programmerType) -P$(programmerDev) -b$(baud) -v -U eeprom:w:$(src).eeprom.hex
	date

fuses: hex
	avrdude -p$(avrType) -c$(programmerType) -P$(programmerDev) -b$(baud) -v -U lfuse:w:$(src).lfuse.hex
	#avrdude -p$(avrType) -c$(programmerType) -P$(programmerDev) -b$(baud) -v -U hfuse:w:$(src).hfuse.hex
	#avrdude -p$(avrType) -c$(programmerType) -P$(programmerDev) -b$(baud) -v -U efuse:w:$(src).efuse.hex
	date

dumpelf: elf
	avr-objdump -s -h $(src).elf

program: flash eeprom fuses

flash: hex
	avrdude -p$(avrType) -c$(programmerType) -P$(programmerDev) -b$(baud) -v -U flash:w:$(src).flash.hex
	date

backup:
	@for memory in $(memoryTypes); do \
		avrdude -p $(avrType) -c$(programmerType) -P$(programmerDev) -b$(baud) -v -U $$memory:r:./$(avrType).$$memory.hex:i; \
	done

It may be necessary to run avrdude as root; if that happens, it justifies a question in its own right. It can be solved with udev, but that requires a bit of specific information about how the programmer is recognized by the operating system.

Hello World

Let me throw in a 'Hello World' that makes controller pin 2 (PB3) (e.g. ATtiny13, ATtiny45, ATtiny85) toggle at 1 Hz. Attach an LED and series resistor to the pin and the LED should start to blink.

make edit

Press i for insert mode, then type:

#include <avr/io.h>
#include <util/delay.h>

int main(void) {
    DDRB = 0x08;
    while (1) {
        PORTB = 0x00;
        _delay_ms(500);
        PORTB = 0x08;
        _delay_ms(500);
    }
}

Save and quit with <ESC>:wq, then:

make flash

Done.
{ "source": [ "https://electronics.stackexchange.com/questions/66145", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/19331/" ] }
66,486
It's probably not new to you, but sometimes we want to analyze a powered circuit with a multimeter. I love to do that, but I'm a little worried that I might burn some components or the whole circuit. Are there any best practices, dos and don'ts, for using multimeters on circuits and components?
When set for voltage measurements a multimeter will normally have an impedance measured in megaohms. Although that impedance combined with capacitance may affect the operation of some circuits I can't think of many practical situations where damage is likely to occur. When set for current measurement it's a different story because the shunt resistor may effectively present a short-circuit. By far the most likely cause of damage (which I've done myself on the odd occasion) are the probes slipping and shorting something out. Apart from being careful in general putting some heatshrink over most of the probe to leave only the tips exposed may help. Also consider getting a set of appropriate test clips for what you're working on so they are physically secure. Also consider personal safety if you're working at high voltages and make sure any probes you use have an appropriate voltage rating and reduce the chance of your hands slipping onto dangerous voltages.
{ "source": [ "https://electronics.stackexchange.com/questions/66486", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/22688/" ] }
66,588
I have always wondered how a single wire (signal and ground) can make a whole color image appear on a screen full of thousands of pixels. How exactly do these signals work, and what characteristics make TVs show different things?
I used to work at Panasonic on their In-Flight Entertainment systems, so I know a bit about this kind of stuff. This description won't be 100% technically accurate (some naming might be a bit off), but I am trying to write it so anyone can understand it. Hopefully this explanation helps...

The "magic" behind it can be a combination of the following things: signal amplitude, frequency, and modulation. Different types of TVs and signals work differently. This is why old TVs had to have a converter box to accept the new digital signals if they only had an analog tuner. But that really just describes how the data is presented in the signal. Basically, the color data for each pixel is sent to the TV line by line, pixel by pixel, and the TV refreshes the screen so many times per second with the new data. Even though the video is really just a lot of still images being updated on the screen, they change fast enough for us to perceive them as moving, hence the old term "moving picture."

Take a look at a typical "color bar" signal used to test video systems, from Wikipedia. The picture itself is divided into "lines" of pixels. Every screen has so many columns and so many lines, making up the total screen resolution. Each color in this picture is spread across numerous pixels of the same line. The accompanying oscilloscope waveform helps to describe what is going on here (this image is from Tektronix):

This image shows the data for two lines of pixels. Each line starts with a "sync pulse" to align the screen and the signal. This pulse (the negative part of the waveform) is followed by data for each pixel of the line. This is actually analog video: the pixel data is represented by the amplitude and phase of the signal. You can see the various colors as an analog voltage with differing maximum and minimum voltages. When one line is finished, another sync pulse signals the start of the next line. The video signal and the screen need to have matching resolution (number of pixels per line). If there is extra data, it is dropped. If there is not enough data, the pixels share the data (which makes the picture blocky).

Thanks to Pete B for mentioning this: to clarify one bit of detail regarding color signals, the luminance (brightness) of a pixel is determined by the amplitude of the signal, while the chrominance (hue) is determined by the phase of the chroma sub-carrier signal.

Digital signals are a bit different in that the signal is either HI or LO. The value of HI can vary between systems. There are different ways that this works. Sometimes, a known number of bits of data constitutes a packet carrying all of the pixel data (similar to network communication). Another way is to time how long the signal is HI vs how long it is LO to represent different pixel values. This is kind of how IR TV remote controls work, although they are sending "control codes" instead of pixel information.

As you can imagine, this all takes place very, very quickly. A common TV in the United States is updated (screen refresh) 60 times per second (60 Hz), or 30 Hz for interlaced video, although modern and HD TVs will typically refresh even more often (upwards of 240 Hz). What this refresh rate means is that every pixel in the entire screen is updated so many times per second. The more it refreshes, the more detail is available in the picture, especially when there are a lot of fast-moving images in the video (like a chase sequence).
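For a sense of scale, here are standard NTSC numbers (added as an illustration, not taken from the original answer): 525 lines redrawn about 29.97 times per second works out to 525 x 29.97 ≈ 15,734 lines per second, so each line, sync pulse included, has to be delivered in roughly 1/15,734 s ≈ 63.6 µs.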
Different TV channels (over-the-air or cable) are delivered to the TV by the same method, just with different base frequencies. The TV tuner will select one of these base frequencies to display (selecting a channel) and will update the pixels based on the modulated frequencies within the base carrier. The frequencies representing the pixel color data are much, much faster than the actual refresh rate of the screen, because each pixel's data has to be updated so many times per second, and there are, as you said, thousands of pixels. Since humans only hear sound on a spectrum of 20 Hz to 20 kHz, the sound data can easily be added to the signal on top of the video and filtered out by the TV, although, for "high definition sound", the sound signal is sent through a separate wire to the TV to fit in all of the data. To really understand what is going on, you have to comprehend signal frequencies, amplitude, time division, modulation, and spectrum analysis. But I hope this kind of explains some of it...
{ "source": [ "https://electronics.stackexchange.com/questions/66588", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10190/" ] }
66,817
I am a student in Computer Engineering, and I am wondering why programs still teach digital logic. We have already taken a computer organization class in which we learn about computer architecture including Flip-Flops, registers, ALU, Logic, etc... can anybody explain why we are still expected to take a digital logic class? There is technology out there nowadays that can simplify everything for us, and that'll do for anybody who isn't planning on going into a logic-related field, yet most schools still require digital logic to graduate.
It's the classic undergrad question: Why learn how to calculate the deflection of a beam when there are finite element analysis programs? Why learn Ohm's law when there's SPICE? Why learn compressible flow when there are fluid dynamics programs? Here's why: As engineers, we are responsible for truly understanding how our designs work. That means understanding the analysis, even if the arithmetic was done by a computer. If you don't know how to do at least a reasonable approximation by hand, then how can you trust the result of the program? How can your customer trust you?
{ "source": [ "https://electronics.stackexchange.com/questions/66817", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/23028/" ] }
66,893
I have seen here and there schematics with electrolytic capacitors put on AC. And this sounds weird to me. Electrolytic capacitors have a polarity, right? If we invert the polarities on DC, bad things happen. As far as I understand, AC inverts polarity every now and then (commonly 50Hz). Why can we put such capacitors on AC without damaging them? Example: from demonstration here: http://youtu.be/qdXbnhb1bVo?t=5m57s
"Can" and "should" are two things. Should you do this? No: this use is outside the specified operating parameters of ordinary electrolytic capacitors. You seem to understand this already. Can you do it? Yes, as the video demonstrates. To understand why requires some understanding of what's inside the capacitor. A capacitor is two conductors (usually plates) separated by an insulator. The more surface area, and the closer together they are, the higher the capacitance. Electrolytic capacitors have a thin film rolled up in the can. This film is covered in a thin oxide layer, and the thinness of this layer is what gives electrolytic capacitors their high capacitance relative to their size. This oxide layer is created by the chemistry of the materials in the capacitor, and the polarity of the voltage applied to each side of the film. A voltage applied in the correct direction builds and maintains the oxide layer. If the polarity is reversed, the oxide layer dissolves. If the oxide layer dissolves, you no longer have an insulator between the two plates of the capacitor. Instead of two plates separated by an insulator, you have two plates separated by a conductor. Instead of a device that blocks DC, you have a device that conducts it. Basically, you have a wire in a can. Usually, when you encounter this failure mode, a large current flows, rapidly heating the internals of the capacitor. The expanding fluid and gas ruptures the safety vent or the can explodes. Why then, does the capacitor in this example not explode? The reverse polarity voltage is never applied for very long, and never without a correct polarity voltage applied soon after to repair whatever damage was done. The oxide layer doesn't dissolve instantly when a reverse voltage is applied; it takes time. The time depends on the voltage applied, the size of the capacitor, the chemistry, etc, but half a cycle of 50 Hz AC is probably not long enough to cause serious damage. When the other half of the cycle comes around, the oxide layer is restored. Any fault current is significantly limited by the series resistors. With those resistors in series, the power available to heat the capacitor is small. There simply isn't enough power available to catastrophically destroy the capacitor because most of the available energy goes into the resistors. Perhaps you just warm the capacitor slightly. When the voltage reverses direction, the oxide layer can reform. Probably you still damage the capacitor eventually, to some extent, but it is operational enough for the demonstration.
{ "source": [ "https://electronics.stackexchange.com/questions/66893", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/22688/" ] }
66,958
I've seen a number of terms that make reference to the word "bias". I've read the Wikipedia article, but I'm after a more practical answer. Some examples of what a forward- or reverse-biased device is would also be well received.
Bias is another word for the operating point -- a dc voltage or current about which the instantaneous value might vary. For example, you can say you applied a "6 V peak-peak AC signal biased at +1 V". In this case the range of the signal would be from -2 to +4 V. You can see the relationship with the everyday meaning of bias , "a tendency or inclination" ( dictionary.com ) in this case with the meaning that although the voltage varies, it tends to be near the operating point. As the other answers point out, the term is often used with relation to diodes and other nonlinear components.
{ "source": [ "https://electronics.stackexchange.com/questions/66958", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/18537/" ] }
67,004
I already know how to extend the number of digital pins on an Arduino using a multiplexer. I want to increase the number of analog input channels that I have. One solution is to add another Arduino as a slave. How would I increase the number of analog input channels? (I assume some MUXing and ADCs are involved). If it's too complicated to write in an answer, a general outline of circuit/code would be fine as well. I am also open to shield suggestions, though I'd prefer it if there was a not-too-hard hardware solution.
If you're already familiar with digital multiplexer chips, good news! They can be used for analog signals as well. http://playground.arduino.cc/Learning/4051 The basic premise is exactly like digital signal multiplexing. You use the 4051 chip as a "lane changer" and read the signal of whatever lane you tell the chip to switch to. The 4051 uses 3 digital pins and 1 additional analog (or digital) pin on the Arduino to create up to 8 lanes of input. Some multiplexers are chainable, so you can fairly easily add 8 more signals without needing to keep taking pins away from the Arduino itself. It is exactly like the technique you already know for digital signals: just hook up the channel you read from to an analog pin instead of a digital one, then cycle through the binary states and read the analog values.
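Here is a quick sketch of that cycling technique in Arduino C++; the select-pin assignments are assumptions for illustration, so use whichever pins you actually wired to the 4051's S0-S2 inputs:

const int selectPins[3] = {2, 3, 4};  // to the 4051's S0, S1, S2
const int muxPin = A0;                // to the 4051's common (Z) pin

void setup() {
  for (int i = 0; i < 3; i++) pinMode(selectPins[i], OUTPUT);
  Serial.begin(9600);
}

void loop() {
  for (int channel = 0; channel < 8; channel++) {
    for (int bit = 0; bit < 3; bit++)            // put the channel number on S0-S2
      digitalWrite(selectPins[bit], (channel >> bit) & 1);
    delayMicroseconds(10);                       // let the analog switch settle
    Serial.println(analogRead(muxPin));          // read whichever lane is selected
  }
}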
{ "source": [ "https://electronics.stackexchange.com/questions/67004", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8144/" ] }
67,007
On newer Arduinos, one can set pinMode to one of three states: OUTPUT , INPUT , and INPUT_PULLUP . This page says: The Atmega chip on the Arduino has internal pull-up resistors (resistors that connect to power internally) that you can access. If you prefer to use these instead of external pull-down resistors, you can use the INPUT_PULLUP argument in pinMode(). This effectively inverts the behavior, where HIGH means the sensor is off, and LOW means the sensor is on. I'm rather sure that inverting the behavior isn't the only thing that it does, though. What does INPUT_PULLUP do? What makes it different from INPUT , and how does one decide which one to use?
The default is INPUT, which sets up the pin as an input. If the pin is totally disconnected, it will randomly read HIGH and LOW. If you tie it to +5 V or 0 V, it will obviously read HIGH or LOW. Internal to the Arduino, the ATmega chip has internal pull-up resistors with a value around 20 kΩ. (See the DigitalPins documentation for more details.) These resistors can be optionally connected internally using INPUT_PULLUP. This is functionally (and electrically) equivalent to connecting a ~20 kΩ resistor between the pin and +5 V; the only difference is that it requires no external components and you can turn it on and off in software during the execution of your program. So why pull-ups and not pull-downs? There are likely several reasons for it, but when wiring buttons or switches or anything "normally open", you only have to tie them to ground; you don't need to run +5 V out to them. Since most boards are going to be designed with large ground pours for shielding reasons anyway, tying to ground is practical. Some more fully featured ICs, like ARM chips, have both pull-ups and pull-downs, but the 8-bit AVR line only comes with pull-ups. You just have to remember that HIGH is "open" and LOW is "closed".
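As a small illustration, using the standard Arduino idiom (the pin choices here are arbitrary): a button wired between pin 2 and ground needs no external resistor at all, because the internal pull-up does the rest:

const int buttonPin = 2;   // button between this pin and GND
const int ledPin = 13;

void setup() {
  pinMode(buttonPin, INPUT_PULLUP);  // pin reads HIGH until the button closes
  pinMode(ledPin, OUTPUT);
}

void loop() {
  // LOW means pressed, because the button shorts the pin to ground
  digitalWrite(ledPin, digitalRead(buttonPin) == LOW ? HIGH : LOW);
}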
{ "source": [ "https://electronics.stackexchange.com/questions/67007", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/8144/" ] }
67,030
Is it legal for me to sell a product that's built with an Arduino? I know that it's open source, but does that mean I can sell my product for profit, or only for use in prototyping?
Since legal questions sometimes need really specific answers, I found Arduino's exact position on this: Physically embedding an Arduino board inside a commercial product does not require you to disclose or open-source any information about its design. Deriving the design of a commercial product from the Eagle files for an Arduino board requires you to release the modified files under the same Creative Commons Attribution Share-Alike license. You may manufacture and sell the resulting product. Using the Arduino core and libraries for the firmware of a commercial product does not require you to release the source code for the firmware. The LGPL does, however, require you to make available object files that allow for the relinking of the firmware against updated versions of the Arduino core and libraries. Any modifications to the core and libraries must be released under the LGPL. The source code for the Arduino environment is covered by the GPL, which requires any modifications to be open-sourced under the same license. It does not prevent the sale of derivative software or its inclusion in commercial products. In all cases, the exact requirements are determined by the applicable license. Additionally, see the previous question for information about the use of the name “Arduino”. Source: http://arduino.cc/en/Main/FAQ Disclaimer: Arduino's position on this may change over time, I'm not a lawyer and this doesn't constitute legal advice, blaa, blaa, blaa... :) In short: I'm not responsible if you get in trouble. Update: February 2021 The structure of the Arduino FAQ page has changed, but it looks like the main point remains the same. The above text has been broken out across multiple sections of the "Compatible Products" section : Physically embedding a board inside a project Deriving design from Eagle files Source code and LGPL Arduino development environment (Slightly changed wording)
{ "source": [ "https://electronics.stackexchange.com/questions/67030", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/22891/" ] }
67,087
The arduino reference states that you would use the following code to read the value from analog pin #5: int val1 = analogRead(5); However to read from digital pin #5, you would pass the same pin number to digitalRead : int val2 = digitalRead(5); Shouldn't you use analogRead(A5) instead of analogRead(5) ? If not, what does the following code do: int val3 = analogRead(A5);
To answer Tyilo's specific questions: analogRead(5) and digitalRead(5) will read from two different places. The former will read from analog channel 5, or A5, and the latter will read from pin 5, which happens to be a digital pin. So yes, if you want to read an analog pin with digitalRead, you should be using A5. Why? analogRead requires a channel number internally, but it will allow you to give it a pin number too. If you do give it a pin number, it will convert it to its corresponding channel number. As far as I can tell, analogRead is the only function which uses a channel number internally, is the only one to allow a channel number, and is the only function with this undocumented pin-to-channel conversion.

To understand this, let's start off with some examples. If you want to use analogRead on the first analog pin A0, you can do analogRead(0), which uses the channel number, or analogRead(A0), which uses the pin number. If you were to use the pin number variant, analogRead would convert the pin number A0 to its proper channel number 0. If you want to use digitalWrite on the first analog pin A0, you can only do digitalWrite(A0, x). digitalWrite doesn't use analog channels internally and does not let you pass it a channel number. Well, it'll let you, but you'll select the wrong pin. The same applies to digitalRead and even analogWrite.

What about the pin-to-channel conversions done by analogRead? The source for that function can be found in hardware/arduino/avr/cores/arduino/wiring_analog.c. You'll see that it does some simple subtraction based on the board type. The A0/A1/A2/etc. constants represent the pin numbers of the analog channels and can be used everywhere you need to refer to the analog inputs. For that reason they are the best option to use in your Arduino code, because it's very obvious that you're using the same physical port even when you're using different functions. The definitions of those constants depend on your board. For example, here's the analog pin definition code for the Arduino Uno in hardware/arduino/avr/variants/standard/pins_arduino.h:

static const uint8_t A0 = 14;
static const uint8_t A1 = 15;
static const uint8_t A2 = 16;
static const uint8_t A3 = 17;
static const uint8_t A4 = 18;
static const uint8_t A5 = 19;
static const uint8_t A6 = 20;
static const uint8_t A7 = 21;

For comparison, here is the analog pin definition code for the Arduino Mega:

static const uint8_t A0 = 54;
static const uint8_t A1 = 55;
static const uint8_t A2 = 56;
[...]
static const uint8_t A13 = 67;
static const uint8_t A14 = 68;
static const uint8_t A15 = 69;

Further EE discussion on analog pins: Can I use the analog pins on the Arduino for my project as digital?
{ "source": [ "https://electronics.stackexchange.com/questions/67087", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/13877/" ] }
67,092
What's the maximum amount of current which I can draw from each of the Arduino's pins without tripping any of the internal fuses? Is there a limit per pin as well as an overall limit for the whole board?
This is a bit complex. Basically, there are a number of limiting factors:

- The IO lines from the microcontroller (i.e. the analog and digital pins) have both an aggregate (i.e. total) current limit and a per-pin limit, from the ATmega328P datasheet.
- However, depending on how you define the Arduino "pins", this is not the entire story. The 5V pin of the Arduino is not connected through the microcontroller. As such, it can source significantly more power.
- When you are powering your Arduino from USB, the USB interface limits your total power consumption to 500 mA. This is shared with the devices on the Arduino board, so the available power will be somewhat less.
- When you are using an external power supply, through the barrel power connector, you are limited by the local 5V regulator, which is rated for a maximum of 1 Amp. However, it is also thermally limited, meaning that as you draw power, the regulator will heat up. When it overheats, it will shut down temporarily.
- The 3.3V regulated output is able to supply 150 mA max, which is the limit of the 3.3V regulator.

In Summary:

- The absolute maximum for any single IO pin is 40 mA (this is the maximum; you should never actually pull a full 40 mA from a pin. Basically, it's the threshold at which Atmel can no longer guarantee the chip won't be damaged. You should always ensure you're safely below this current limit.)
- The total current from all the IO pins together is 200 mA max.
- The 5V output pin is good for ~400 mA on USB, and ~900 mA when using an external power adapter. The 900 mA figure is for an adapter that provides ~7V. As the adapter voltage increases, the amount of heat the regulator has to deal with also increases, so the maximum current will drop as the voltage increases. This is called thermal limiting.
- The 3.3V output is capable of supplying 150 mA. Note: any power drawn from the 3.3V rail has to go through the 5V rail. Therefore, if you have a 100 mA device on the 3.3V output, you need to also count it against the 5V total current.

Note: This does not apply to the Arduino Due, and there are likely some differences for the Arduino Mega. It is likely generally true for any Arduino based off the ATmega328 microcontroller.
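As a quick worked example of how the aggregate limit bites (illustrative arithmetic, not taken from the answer above): ten LEDs driven at 20 mA apiece draw 10 x 20 mA = 200 mA, which already consumes the ATmega328's entire IO budget even though each individual pin is comfortably below the 40 mA per-pin maximum.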
{ "source": [ "https://electronics.stackexchange.com/questions/67092", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/9339/" ] }
67,103
I am new to using Arduino, and I have an Arduino Uno. For the projects I've done, I've only used the digital pins. I am building a small vehicle that uses stepper motors. I have run out of pins to control the motors for this vehicle. What are the analog pins for? Is it possible for me to use analog pins to control the rest of the step motors which I connect to the Arduino, or do I have to buy a bigger Arduino than Arduino Uno to control this contraption?
Yes, the analog pins on the Arduino can be used as digital outputs. This is documented in the Arduino input pins documentation, in the Pin Mapping section:

Pin mapping: The analog pins can be used identically to the digital pins, using the aliases A0 (for analog input 0), A1, etc.

For example, the code would look like this to set analog pin 0 to an output, and to set it HIGH:

    pinMode(A0, OUTPUT);
    digitalWrite(A0, HIGH);
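In the same spirit, an analog pin also works as a digital input; a minimal sketch (the button-to-ground wiring on A0 is an assumed example, not from the documentation quoted above):

    void setup() {
      pinMode(A0, INPUT_PULLUP);      // enable the internal pull-up on analog pin 0
      pinMode(LED_BUILTIN, OUTPUT);
    }

    void loop() {
      // with a button from A0 to ground, the pin reads LOW while pressed
      digitalWrite(LED_BUILTIN, digitalRead(A0) == LOW ? HIGH : LOW);
    }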
{ "source": [ "https://electronics.stackexchange.com/questions/67103", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/18069/" ] }
67,112
Part one of a three part series on transitioning from Arduino to a plain AVR microcontroller and minimum supporting components ( part two , part three ) I've built up a project on my Arduino Uno to control various aspects of my beer brewing system. At this point it seems to be doing what I want, but I would like to reuse my Uno for another project. How should I move my project from the Uno and breadboard to a PCB, perfboard, or whatever? Any good solutions out there?
Here are some instructions. If you just want to know what goes where on your perfboard, read on. Here's the pinout for the ATmega328:

Firstly, you'll need an LM7805 or something similar to get 5 V. If you don't know how these work, refer to this image.

Power: Now, connect the + end of your 12 V battery to the IN of the 7805, and - to the COM. Hereafter, I shall refer to any connection from COM as "GND" and any connection from OUT as "Vcc".

Reset: Connect Vcc to pins 7 and 20 of the ATmega328, and GND to pins 8 and 22. Connect Vcc to a ~10 kiloohm resistor, and connect the other end of that to the RST pin (pin 1). Also, connect GND to a reset switch, and the other terminal of the reset switch to pin 1. When the reset switch is pressed, the ATmega will restart. If you don't want a reset switch, just connect Vcc directly to pin 1.

Clock: Connect GND to one leg of each of two 22 picofarad capacitors. Connect the other leg of one capacitor to pin 9, and the other leg of the second capacitor to pin 10. Now, connect a 16 MHz crystal between pins 9 and 10:

Analog reference: If you use the AREF pin, just connect your AREF to pin 21.

Rest of the pins: These are labelled in the diagram above. Pins 23-28 are A0-A5. Pins 2-6 are digital 0-4, and pins 11-19 are digital 5-13. Use these normally. Note that digital pin 13 (pin 19 on the microcontroller) won't have an LED anymore, but if you wish to connect one, connect the pin to an LED, followed by a 200-300 ohm resistor, followed by ground:

Programming: If your Arduino is a DIP Arduino (the ATmega is removable), then just program it using the IDE, remove the ATmega, and place it in your perfboard circuit (I assume you're using an IC holder). If the Arduino has a surface-mount ATmega, see How can I use my SMD Arduino to program a separate DIP ATmega328?

That's it! Now you can easily take an Arduino project to a perfboard! Here's the final schematic:
{ "source": [ "https://electronics.stackexchange.com/questions/67112", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/-1/" ] }
67,116
I was looking at my Arduino Uno and I noticed the "~" symbol next to digital pins 11, 10, 9, 6, 5, and 3. What does it mean? Does it affect the way those pins work? Can I not use these pins in certain situations?
Relax. Don't worry. These pins are called PWM pins and are the same as the other pins, except they have an "added bonus."

Some uses, from Arduino's website: Dimming an LED. Simulating an analog output (the output is still digitally toggling from 0 V to 5 V, but a low-pass filter, i.e. a capacitor and resistor, can smooth it into an approximately analog voltage). Generating audio signals. Providing variable speed control for motors. Generating a modulated signal, for example to drive an infrared LED for a remote control.

How it works: The PWM pins are controlled by on-chip timers which toggle the pins automatically at a rate of about 490 Hz. The "pulse width modulation" (PWM) duty cycle is how long the pin stays on versus off within a single cycle of that frequency. This can dim an LED by giving the illusion that it is at, say, half its former brightness, when it is really flashing very quickly. With a 25% duty cycle, the pin is on for one-fourth of the time. Used with an LED, it would appear roughly a quarter as bright, give or take. (Note: as some people have pointed out, this isn't truly proportional, but let's leave it this way for simplicity; a 25% duty cycle isn't always exactly 1/4 of the brightness.) (If you are really electrically savvy, you could probably add a capacitor to turn it into a true analog output.)

How to use these pins for output: First, define the pin as an output. Then use

    analogWrite(ledPin, 128);

to start it. ledPin is the PWM pin on which you want to start PWM, and 128 should be replaced with a number between 0 and 255: 0 is a 0% duty cycle (turns the pin completely off) and 255 is a 100% duty cycle (turns the pin on completely). Source: http://www.arduino-tutorials.com/arduino-pwm/

Why can't I just turn the light on and off really fast in my code? Technically, you can; however, there are some problems: It may not be as precise as using the hardwired timer circuits in the Arduino. It's simpler just to type one instruction instead of having lots of "if" statements. It won't make that much of a difference if the Arduino's sole purpose is to generate PWM signals; however, if you put any delays longer than 50 ms in the main loop, it will mess up the timing. With the software approach you would want to eliminate any "delay" functions, since the Arduino only runs on one thread (it can only do one thing at a time). If you know what you're doing, software PWM won't make much difference for dimming an LED, but if you have a spare pin with hardware PWM, you're just wasting your time with a software approach.

As others have pointed out: you still need a resistor in your circuit to limit the current. You cannot skip this.
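For instance, a minimal fade sketch (pin 9 is just one assumed choice of PWM-capable pin):

    const int ledPin = 9;  // any pin marked with the ~ symbol

    void setup() {
      pinMode(ledPin, OUTPUT);
    }

    void loop() {
      for (int duty = 0; duty <= 255; duty += 5) {  // ramp the duty cycle up
        analogWrite(ledPin, duty);                  // 0 = off, 255 = fully on
        delay(30);
      }
    }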
{ "source": [ "https://electronics.stackexchange.com/questions/67116", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/18442/" ] }
67,272
I have an electronic component which has no letter or number on it. It has 2 legs and when you shake it there is something moving inside up and down. I am searching for it for hour still have no clue. Here is the photo I've just taken:
It's a tilt switch. It has a ball in it which makes or breaks a contact when it rolls toward or away from the contacts. Try measuring the resistance with it in various orientations.

The image below is copied from here. This page says about a similar one: Tiny tilt switch type BT411-2 has a built-in rolling ball (instead of mercury), so it doesn't have the environmental health hazards that mercury tilt switches have. These and similar types are used in vibration car alarms, sneakers that blink, toys, etc. Many here.

Re your link (translated from Danish): Current: 2 mA max. Size: housing Ø5.2 × 14 mm (leads + 15 mm). Max. temperature: 100 °C. Non-active contact: 10 MΩ; active contact: ±5 Ω. In other words: 2 mA current rating, 10 megohm open circuit, 5 ohms operated, 5.2 mm dia × 14 mm long with 15 mm leads, maximum temperature 100 °C.
{ "source": [ "https://electronics.stackexchange.com/questions/67272", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/16307/" ] }
67,598
How are integrated circuits (e.g. a microprocessor) fabricated from start to finish? For example, there must be some wiring with resistors, capacitors to store energy (bits) in a field, transistors, etc. How is this done? What machinery and chemical processes are required to build an integrated circuit?
No big deal really. First you get a pile of silicon. A bucket of ordinary beach sand contains a lifetime supply if you're going to make your own chips. There is lots of silicon on this planet, but it's mostly all so annoyingly bound up with oxygen. You have to break those bonds, discard the non-silicon stuff, then refine what's left over.

You need very, very pure silicon to make useful chips. Just smelting the silicon oxide into elemental silicon isn't anywhere near enough. The bucket of sand was mostly silicon dioxide, but there will be little bits of other minerals, bits of snail shells (calcium carbonate), dog poop, and whatever else. Some of the elements from this stuff will end up in the molten silicon mix. To get rid of them, there are various ways, most having to do with very carefully allowing the silicon to crystallize at just the right temperature and rate. That ends up pushing most of the impurities ahead of the crystallization boundary. If you do this enough times, enough of the impurities get pushed to one end of the ingot that the other end might be pure enough. To be sure, you wave a dead fish over it during a full moon while thinking only pure thoughts. If it turns out later that your chips are no good, then one possibility is that you botched this step by using the wrong species of fish, or that your thoughts weren't pure enough. If so, repeat from step one.

Once you have pure crystalline silicon, you're almost done; just another 100 steps or so that all have to be just right. Now cut your pure silicon into wafers. Maybe that can be done with a table saw or something. Check with Sears to see if they sell silicon-ingot-cutting blades. Next, polish the wafers so that they are very, very smooth. All the rough stuff from the table saw blade needs to be gone; preferably get it down to a wavelength or so of light. Oh, and don't let oxygen at the open surface. You'll have to flood your basement with some inert gas and hold your breath for a long time while you finish the polishing.

Next you design the chip. That's just wiring a bunch of gates together on a screen and running some software. Either spend a few hundreds of k$ or make your own if you've got a few tens of man-years free. You can probably do a basic layout system yourself, but you'll have to steal some trade secrets to be able to do the really good stuff. The people that figured out the really clever algorithms spent many M$ doing it, so they don't want to give out all the cool bits for free.

Once you have the layout, you'll have to print it on masks. That's just like regular printing, except for a few orders of magnitude finer detail. After you have the masks for the various layers and photolithography steps, you need to expose them onto the wafer. First you slather on the photoresist, making sure it has a uniform thickness to within a fraction of the wavelength of the light you will use. Then you expose and develop the resist. That leaves resist over some areas of your wafer and not over others, just as the mask specified. For each layer you want to build up, etch, or diffuse into the chip, you apply special chemicals, usually gasses, at very precisely controlled temperatures and times. Oh, and don't forget to line up the masks for each layer in the same location on the wafer to a few 100 nm or better. You need really steady hands for that. No coffee that day. Oh, and remember, no oxygen. After a dozen or so mask steps, your chips are almost ready.
Now you should probably test each one to find out which ones hit impurities or got otherwise messed up. No point putting those into packages. You'll need some really, really tiny scope probes for that. Try not to breathe as you're holding a dozen probes on their targets to within a few µm, on the special pads you designed into the chips for that purpose. If you've done the passivation step already, you can do this in an oxygen atmosphere and take a breath now.

Almost done. Now you cut up the wafer into chips, being careful to toss out the ones you found earlier were no good. Maybe you can snap them apart, or saw them, but of course you can't touch the top of the wafer.

You have the chips now, but you still need to connect to them somehow. Soldering to silicon would make too much of a mess, and soldering irons don't have fine enough tips anyway. Usually you use very thin gold bond wires that are spot-welded between the pads on the chip and the insides of the pins of whatever package you decide to use. Slap on the top and glob on enough epoxy to make sure it stays shut.

There, that wasn't so bad, was it?
{ "source": [ "https://electronics.stackexchange.com/questions/67598", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/23232/" ] }
67,663
I'm working on a speed control circuit for a brushed DC motor (24 V, 500 rpm, 2 A, 4 kg·cm). The main components I plan to use are a PIC16F873, a 4N25 optocoupler, an IRFZ44N MOSFET, and a BY500-800 diode (for freewheeling). What are the criteria for choosing the PWM frequency? What are the effects of very high and very low PWM frequencies on the system? What are the drawbacks of, and improvements to be made to, the hardware provided here?
There are several issues affected by the PWM frequency when driving a motor:

The pulses need to come fast enough that the mechanical system of the motor averages them out. Usually a few 10s of Hz to a few 100 Hz is good enough for this. This is rarely the limiting factor.

In some cases, it is important that whining can't be heard at the PWM frequency. Even if the mechanical system as a whole doesn't react to single pulses, individual windings of a coil can. An electric motor works on magnetic forces, with every loop of wire in a coil arranged to create these forces. That means every bit of wire in a winding has a sideways force on it proportional to current, at least part of the time. The wire in the windings can't move far, but it can still vibrate enough for the result to be audible. A 1 kHz PWM frequency may be fine in all other respects, but if this is going into an end-user device, the whining at that frequency could be unacceptable. For this reason, PWM for consumer motor control is often done at 25 kHz, just beyond what most people can hear.

Average coil current. This can be a tricky issue. Individual coils of the motor will look mostly inductive to the driving circuit. You want the current through the coils to be mostly what you'd expect from the average applied by the PWM, not to go up and down substantially within each pulse. Each coil has some finite resistance, which causes power loss proportional to the square of the current through it. The losses will be higher at the same average current when there is a large change in current over a pulse. Consider the extreme example of the coil reacting to the pulsed voltage almost instantly while you drive it with a 50% square wave. The resistive dissipation will be 1/2 of driving the coil full on all the time, with the average current (and therefore resulting motor torque) also being 1/2 of full on. However, if the coil were driven with a steady 1/2 current instead of pulses, the resistive dissipation would be 1/4 of full on, but with the same 1/2 of full-scale current and therefore the same torque. Another way to think about this is that you don't want significant AC current on top of the average DC level. The AC current does nothing to move the motor; only the average does that. The AC component therefore only causes resistive losses in the coils and other places.

Switching losses. The ideal switch is either fully on or fully off, which means it never dissipates any power. Real switches don't switch instantaneously and therefore spend some finite time in a transition region where they dissipate substantial power. Part of the job of the drive electronics is to minimize this transition time. However, no matter what you do, there will be some time per edge where the switch is not ideal. This time is usually fixed per edge, so its fraction of the total PWM period goes up with frequency. For example, if the switch spends a total of 1 µs in transition each pulse, then at a 25 kHz PWM frequency (a 40 µs period) the transition time is 1/40 of the total. That may be acceptable. However, if the switching frequency were increased to 100 kHz (a 10 µs period), the transition time would be 10%. That will likely cause problems.

As for your circuit, my biggest concern is how slowly Q1 will be driven. Opto-isolators are notoriously slow (relative to most other components, like individual transistors), especially when turning off. You only have R2 (although I can't read its value) pulling down on the FET gate to turn it off.
That's going to be slow. That may be OK if you can tolerate a slow PWM frequency, considering all the other trade-offs I mentioned above. You might consider putting a PIC on the motor side of the opto. You can communicate digitally with that PIC via a UART or some other interface that doesn't have to run at the PWM frequency. That PIC then generates the appropriate PWM locally and drives Q1 hard on and off with extra circuitry for that purpose. That way the high-speed signals and fast edges don't have to cross an opto-isolator.
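To illustrate the inaudible-25-kHz point from earlier in this answer, here is a minimal sketch of hardware PWM at that frequency (this assumes a 16 MHz ATmega328P with Timer1's OC1A output on Arduino pin 9; the register values would differ on other chips, and it is a sketch of the idea rather than a drop-in motor driver):

    void setup() {
      pinMode(9, OUTPUT);                             // OC1A output
      // Timer1 fast PWM, mode 14: TOP = ICR1, no prescaler
      TCCR1A = _BV(COM1A1) | _BV(WGM11);
      TCCR1B = _BV(WGM13) | _BV(WGM12) | _BV(CS10);
      ICR1 = 639;                                     // 16 MHz / (639 + 1) = 25 kHz
      OCR1A = 320;                                    // roughly 50% duty cycle
    }

    void loop() { }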
{ "source": [ "https://electronics.stackexchange.com/questions/67663", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/9454/" ] }
67,803
For instance a PIC10F200T Virtually any code you write will be larger than that, unless it is a single purpose chip. Is there any way to load more program memory from external storage or something? I'm just curious, I don't see how this could be very useful... but it must be.
You kids, get off my lawn! 384 bytes is plenty of space to create something quite complex in assembler. If you dig back through history to when computers were the size of a room, you'll find some truly amazing feats of artistry executed in <1k. For instance, read the classic Story of Mel - A Real Programmer. Admittedly, those guys had 4096 words of memory to play with, the decadent infidels. Also look at some of the old demoscene competitions, where the challenge was to fit an "intro" into the bootblock of a floppy, typical targets being 4k or 40k, and usually managing to include music and animation.

Edit to add: Turns out you can implement the world's first $100 scientific calculator in 320 words.

Edit for the young 'uns: Floppy = floppy disk. Bootblock = the 1st sector of the floppy, read at bootup. Demoscene = programming competitions amongst hacker groups. Assembler = a fancy way of programming a device if you're too soft to use 8 toggle switches and a "store" button.
{ "source": [ "https://electronics.stackexchange.com/questions/67803", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/20525/" ] }
67,975
We often see component values of 4.7 kΩ, 470 µF, or 0.47 µH. For example, Digi-Key has millions of 4.7 µF ceramic capacitors, not a single 4.8 µF or 4.6 µF part, and only one listed at 4.5 µF (a specialty product). What's so special about the value 4.7 that sets it so far apart from, say, 4.6 or 4.8, or even 4.4, when in the 3 range we usually see 3.3, 33, etc.? How did these numbers come to be so entrenched? Perhaps a historical reason?
Due to the resistor colour-coding bands on leaded components, two significant digits were preferred, and I reckon this graph speaks for itself:

These are the 13 resistors that span 10 to 100 in the old 10% series: 10, 12, 15, 18, 22, 27, 33, 39, 47, 56, 68, 82, 100. I've plotted the resistor number (1 to 13) against the log of resistance. This, plus the desire for two significant digits, looks like a good reason. I tried offsetting a few preferred values by +/-1 and the graph wasn't as straight. There are 12 values from 10 to 82, hence the name E12 series. There are 24 values in the E24 range.

EDIT - the magic number for the E12 series is the 12th root of ten. This equals approximately 1.21152766 and is the theoretical ratio of each value in the series to the one below it, i.e. 10 k becomes 12.115 k, etc. For the E24 series, the magic number is the 24th root of ten (not surprisingly).

It's interesting to note that a slightly better straight line is obtained with several values in the range reduced. Here are the theoretical values to three significant digits: 10.0, 12.1, 14.7, 17.8, 21.5, 26.1, 31.6, 38.3, 46.4, 56.2, 68.1 and 82.5. Clearly 27 ought to be 26, 33 ought to be 32, 39 ought to be 38 and 47 ought to be 46. Maybe 82 should be 83 as well. Here's the graph of the traditional E12 series (blue) versus exact (green):

So maybe the popularity of 47 is based on some poor maths?
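For anyone who wants to reproduce the theoretical values, a small C++ sketch of the calculation (this just evaluates the 12th-root-of-ten series; it is not the tool used to draw the graphs above):

    #include <cmath>
    #include <cstdio>

    int main() {
        // Ideal E12 values are spaced by a constant ratio of 10^(1/12)
        for (int k = 0; k < 12; ++k) {
            std::printf("E12[%2d] = %.3f\n", k + 1, 10.0 * std::pow(10.0, k / 12.0));
        }
        return 0;
    }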
{ "source": [ "https://electronics.stackexchange.com/questions/67975", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4642/" ] }
67,980
I have a bright white LED from a flashlight. Approximately how long will it light up with a 150 farad, 2.5 volt capacitor? Do I need a resistor? And if so, how many Ω? The capacitor is a Maxwell 150 farad 2.7 volt BOOSTCAP, here.
Direct answer to the question: The direct answer to your question, assuming you intend to just connect the capacitor to the LED with a series resistor, is no time at all. That is because a white LED takes more than 2.7 V to light. Check its datasheet; these things usually need a bit over 3 V.

There are two options. The simplest is to use a LED with a lower forward drop. Let's say you try this with a red LED that has a 1.8 V drop at 20 mA. That means at full charge there will be 2.7 V - 1.8 V = 900 mV across the resistor. If you want maximum brightness at full charge, which we are saying is 20 mA, then you need a 900 mV / 20 mA = 45 Ω resistor. Let's pick the common nominal value of 47 Ω. Now that we have a capacitance and resistance, we can compute the time constant, which is 150 F × 47 Ω = 7050 s = 118 minutes ≈ 2 hours. At full charge the LED will be nearly at full brightness, which will then decay slowly. There is no fixed limit at which it will suddenly go out, so we have to pick something. Let's say 5 mA is dim enough to be considered not usefully lit anymore in your application. The voltage across the resistor will then be 47 Ω × 5 mA = 240 mV. Using the first approximation of the LED having a constant voltage across it, that means the capacitor voltage is 2 V. The question is now how long it takes to decay from 2.7 V to 2.0 V with a 2-hour time constant. That is 0.3 time constants, or 2100 seconds, or 35 minutes. The actual value will be a bit longer due to the LED having some effective series resistance too, which increases the time constant.

A better way: The above tries to answer your question, but is not useful for a flashlight. For a flashlight you want to keep the light at close to full brightness for as long as possible. That can be done with a switching power supply, which transfers watts in to watts out, minus some loss, but at different combinations of voltage and current. We therefore look at the total energy available and required, and don't worry about specific volts and amps too much. The energy in a capacitor is:

$$E = \frac{C \times V^2}{2}$$

When C is in farads and V is in volts, E is in joules.

$$\frac{150\text{F} \times (2.7\text{V})^2}{2} = 547 \text{J}$$

The switching power supply will need some minimum voltage to work with. Let's say it can operate down to 1 V. That represents some energy left in the cap that the circuit can't extract:

$$\frac{150\text{F} \times (1.0\text{V})^2}{2} = 75 \text{J}$$

The total available to the switching power supply is therefore 547 J - 75 J = 472 J. Due to the low voltages, the losses in the switching power supply will be quite high. Let's say that in the end only half the available energy gets delivered to the LED. That leaves us with 236 J to light the LED. Now we need to see how much power the LED needs. Let's go back to your original white LED and pick some numbers. Let's say it needs 3.5 V at 20 mA to shine nicely. That's 3.5 V × 20 mA = 70 mW. (236 J)/(70 mW) = 3370 seconds, or 56 minutes. At the end of that, the light would go dead rather quickly, but you will have fairly steady brightness up until then.
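The energy bookkeeping above is easy to parameterize; a small C++ sketch (the 1 V converter drop-out and 50% efficiency are the same assumptions made in the answer, not measured figures):

    #include <cstdio>

    int main() {
        const double C = 150.0;           // farads
        const double v_full = 2.7;        // volts at full charge
        const double v_min = 1.0;         // volts, converter drop-out (assumed)
        const double eff = 0.5;           // converter efficiency (assumed)
        const double p_led = 3.5 * 0.020; // watts: 3.5 V at 20 mA

        double usable = 0.5 * C * (v_full * v_full - v_min * v_min) * eff; // joules
        std::printf("Usable energy: %.0f J, runtime: %.0f s\n", usable, usable / p_led);
        return 0;
    }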
{ "source": [ "https://electronics.stackexchange.com/questions/67980", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10190/" ] }
68,355
Because inductors share similar equations in their charging/discharging cycles, I am wondering if inductors have something like charge. Capacitors have capacitance and charge while an inductor has inductance and _ ? Is there a V = Q/C function for inductors?
Magnetic flux is the complement of charge. Just as a capacitor is defined by the relationship \$Q = CV\$, an inductor is defined by the relationship \$\varphi=LI\$, where \$\varphi\$ is the magnetic flux. Just as the capacitor formula becomes \$I = \dfrac{dQ}{dt} = C\dfrac{dV}{dt}\$ when we look at time variation, the inductor formula becomes \$V = \dfrac{d\varphi}{dt} = L\dfrac{dI}{dt}\$. Just as we can generalize the idea of a capacitor to the nonlinear case with the relationship \$f(Q,V)=0\$ we can generalize the idea of an inductor with the relationship \$f(\varphi,I)=0\$.
{ "source": [ "https://electronics.stackexchange.com/questions/68355", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/16910/" ] }
68,406
Unlike rechargeable batteries, capacitors have a lower capacitance in series. Why is this and if I charge each cap separately and then put them in series, will it still be a lower capacitance?
The answer to this comes from considering what capacitance is: it is the number of coulombs (C) of charge that we can store if we put a voltage (V) across the capacitor.

Effect 1: If we connect capacitors in series, we are making it harder to develop a voltage across each capacitor. For instance, if we connect two capacitors in series to a 5V source, then each capacitor can only charge to about 2.5V. According to this effect alone, the charge (and thus capacitance) should be the same: we connect two capacitors in series, each one charges to just half the voltage, but we have twice the capacity since there are two. So we break even, right? Wrong!

Effect 2: The charges on the near plates of the two capacitors cancel each other. Only the outermost plates carry charge. This effect cuts the storage in half. Consider the following diagram. In the parallel branch on the right, we have a single capacitor which is charged. Now imagine that we add another one in series, to form the branch on the left. Since the connection between the capacitors is conductive, bringing the two plates to the same potential, the ----- charges on the bottom plate of the top capacitor will annihilate the +++++ charges on the top plate of the bottom capacitor. So effectively we just have two plates providing the charge storage. Yet the voltage has been cut in half.

Another way to understand this is that the two plates being charged are farther apart. In free space, if we move plates farther apart, the capacitance is reduced, because the field strength is reduced. By connecting capacitors in series, we are virtually moving plates apart. Of course we can place the capacitors closer or farther on the circuit board, but we now have two gaps instead of one between the topmost plate and the bottommost plate. This reduces capacitance.
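Both effects are captured by the familiar series formula (standard circuit theory, not derived in the answer above): \$\dfrac{1}{C_{series}} = \dfrac{1}{C_1} + \dfrac{1}{C_2}\$. For two equal capacitors C in series this gives C/2: each sees half the voltage (effect 1) and the pair stores half the charge (effect 2).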
{ "source": [ "https://electronics.stackexchange.com/questions/68406", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10190/" ] }
68,410
I am an amateur, but I'd like to know if this circuit would work to double a 0-30 volt DC input to a 2-60 volt DC output. If so, what kind of caps and diodes do I need to support this? I need to power a motor that draws 5 A max, so based on my understanding, the circuit needs to be able to handle 10 A. (source: coolcircuit.com ) Whatever my input voltage is, I want to double it. I am open to other solutions as well.
{ "source": [ "https://electronics.stackexchange.com/questions/68410", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/2354/" ] }
68,574
I am looking for components which work at high temperature; in particular, I am looking for microcontrollers that work at 180 °C to 200 °C, i.e. that work reliably under extreme conditions. A built-in CAN interface would be really nice, but I am open to other methods and microcontrollers for this project. The purpose of the microcontroller is to gather information from different sensors and perform some calibration. I found one microcontroller from Texas Instruments, the SM320F28335-HT, but it is very expensive. Is there a low-cost microcontroller that works at high temperature?
When doing down-hole stuff for oil and gas, my guess is that the cost of a chip is not going to make any difference to the economics of your whole project. You may be spending more money on time wasted looking to save even $100 in parts.

Say you cost $100 per hour (salary + overhead); not unreasonable if you are a good engineer. Say you save $100 by spending 20 hours searching for a cheaper part. That's amazingly good savings on a single microcontroller. Say you build 100 units per year. That's a lot of units in that industry, even though you may destroy one on every job you do. Your added engineering cost is 20 × $100 = $2000. Your saved production cost is 100 × $100 = $10,000 per year. Looks like a great deal.

But consider what happens if your project is valuable to customers. In that industry that means a lot of money. Say $100,000 per job, and say you can do 100 jobs per year. That's $10M per year. Now the 20 hours by which you finished late because you wasted time searching for a cheaper part end up costing your company 20/(8 × 200) × $10M = $125,000 (if and if and if... I know, but you get the idea).

My short answer: stop searching when you have found something that works well, unless you are making 1K-10K units per year or more. Build it and start making money instead of blindly focusing on BOM cost.
{ "source": [ "https://electronics.stackexchange.com/questions/68574", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/14087/" ] }
68,638
I've learned that it's desirable for an ideal amplifier to have high input impedance and low output impedance. Why exactly? What are the implications if an amplifier has the opposite: low input impedance and high output impedance? I don't exactly understand how input and output impedance work.
Actually, the premise of your question is only true if the signals you are interested in are voltages. In that case, if the amplifier draws no current through its input (has infinite, or at least very high input impedance), then connecting it to a source won't affect the signal voltage, regardless of what the source impedance is. Similarly, when you connect a load to the output of your amplifier, if the amplifier has zero output impedance, the signal voltage won't change, regardless of the current drawn by the load. These properties make it much easier to analyze the behavior of the system overall. However, if the signals you're interested in are currents rather than voltages, you want your amplifier to have zero input impedance and infinite output impedance for the same reasons.
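To make the voltage case concrete, model the source as \$V_s\$ with output impedance \$Z_{out}\$ driving an amplifier input impedance \$Z_{in}\$; the two form a voltage divider (a standard result, not something stated explicitly above): \$V_{in} = V_s \cdot \dfrac{Z_{in}}{Z_{in} + Z_{out}}\$. As \$Z_{in} \to \infty\$ (or \$Z_{out} \to 0\$), \$V_{in} \to V_s\$ and the signal voltage arrives undisturbed; for current signals the divider is the dual, which is why the ideal impedances flip.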
{ "source": [ "https://electronics.stackexchange.com/questions/68638", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/15477/" ] }
68,648
I have a control system that we are working on, with a forward transfer function of \$ \dfrac{K(2s+3)}{s^2(s^4+2s^3+4s^2+2s+7)}\$. I've set up the Routh table for this and found there are two sign changes, and therefore two right-half-plane poles. But does a value of K = 4 affect the control system? Or any value, for that matter? I can't think why multiplying the transfer function by a constant would affect the stability in any way.
{ "source": [ "https://electronics.stackexchange.com/questions/68648", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/23689/" ] }
68,649
I want to interface a high-definition (best quality) CMOS camera with a development board. I have used Arduino, Raspberry Pi and PIC microcontrollers for some projects in the past, but I don't know a suitable development board for this project. Does anyone have an idea? EDIT: I want first to interface one CMOS camera, then move to 2-3 cameras and try to perform image triangulation with the images.
{ "source": [ "https://electronics.stackexchange.com/questions/68649", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/20317/" ] }
68,748
I have an N-MOSFET's gate connected to 4043 logic. Id is about 100 mA. Both the 4043 and the MOSFET run from +5 V. I plan to use a 2N7000. How large a gate resistor do I need between the 4043 and the MOSFET? The logic output is sometimes switched rapidly. (How fast? A motherboard HDD LED controls it.) Do I need to place a pull-down resistor to 0 V between the 4043 and the MOSFET?
It is generally a good idea to include a gate resistor to avoid ringing. Ringing (parasitic oscillation) is caused by the gate capacitance in series with the connecting wire's inductance, and can cause the transistor to dissipate excessive power because it doesn't turn on quickly enough; the current through drain/source, in combination with the somewhat high drain-source impedance during the transition, will heat the device up. A low-ohm resistor will solve (dampen) the ringing. As @PhilFrost mentions, a high-value resistor to ground is a good idea to avoid capacitive coupling driving the transistor when it is otherwise not connected.

simulate this circuit – Schematic created using CircuitLab

At all times, keep the wiring between the logic output, transistor gate, transistor source and ground as short as possible. This will ensure fast turn-on/off.
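For a feel of the time scales, treat the gate as a simple RC load: datasheets put the 2N7000's input capacitance in the tens of picofarads, so taking an assumed round figure of 50 pF and a 100 Ω gate resistor gives \$\tau = R_g C_{iss} \approx 100\ \Omega \times 50\ \text{pF} = 5\ \text{ns}\$, far faster than an HDD-LED signal ever changes, so a damping resistor of that order costs essentially nothing here.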
{ "source": [ "https://electronics.stackexchange.com/questions/68748", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/23731/" ] }
68,755
I got a brainwave EEG(ElectroEncephaloGram) sensor that is continuously sending data over to my program at about 200 data points per second. Can someone suggest what window/bin size I should be using if I want to do a Fast Fourier Transform(FFT) of this signal? I'm thinking of using the maximum - 1024 points, but that would mean that I need almost 5 seconds of data to update the readings. Is there some smaller size I can use for faster updates that would still be accurate? Here's how my signal looks like (orange line, top): Thank you!
{ "source": [ "https://electronics.stackexchange.com/questions/68755", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/6243/" ] }
68,757
I am trying to design an op-amp input stage to capture a guitar signal through an ADC, specifically the ADC on an Arduino Due (3.3 V, 12 bit). I am aware there are numerous ways to configure and bias the op-amps. I have designed a simple circuit based on an inverting amp followed by an active LP filter. The system will run off a 9 V battery, and the power will be provided to the op-amps and uC (regulated) from this. The aims are: bias the signal around Vdd/2 (the middle of the ADC input range); amplify the signal to use the best of the ADC's range; low-pass filter the input signal at the Nyquist rate (fs/2) to avoid aliasing. However, I have been unable to find much information on the benefits or drawbacks of different configurations and methods of biasing, specifically for this kind of signal. I have not yet measured the voltage output of my guitar, so the specifics of the required gain are not known yet! Also, will I require a high input impedance for a guitar? Any advice or comments would be greatly appreciated. simulate this circuit – Schematic created using CircuitLab
{ "source": [ "https://electronics.stackexchange.com/questions/68757", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/23732/" ] }
69,234
Flash memory storage and EEPROM both use floating gate transistors for storage of data. What differs between the two and why is Flash so much faster?
The first ROM devices had to have information placed in them via some mechanical, photolithographic, or other means (before integrated circuits, it was common to use a grid where diodes could be selectively installed or omitted). The first major improvement was a "fuse PROM": a chip containing a grid of fused diodes, and row-drive transistors that were sufficiently strong that, by selecting a row and forcing the state of the output, one could blow the fuses on any diodes one didn't want. Although such chips were electrically writable, most of the devices in which they would be used did not have the powerful drive circuitry necessary to write to them. Instead, they would be written using a device called a "programmer", and then installed in the equipment that needed to be able to read them.

The next improvement was an implanted-charge memory device, which allowed charges to be electrically implanted but not removed. If such devices were packaged in UV-transparent packages (EPROM), they could be erased with about 5-30 minutes' exposure to ultraviolet light. This made it possible to reuse devices whose contents were found not to be of value (e.g. buggy or unfinished versions of software). Putting the same chips in an opaque package allowed them to be sold more inexpensively for end-user applications where it was unlikely anyone would want to erase and reuse them (OTPROM).

A succeeding improvement made it possible to erase the devices electrically without the UV light (early EEPROM). Early EEPROM devices could only be erased en masse, and programming required conditions very different from those associated with normal operation; consequently, as with PROM/EPROM devices, they were generally used in circuitry which could read but not write them. Later improvements to EEPROM made it possible to erase smaller regions, if not individual bytes, and also allowed them to be written by the same circuitry that used them. Nonetheless, the name did not change.

When a technology called "flash ROM" came on the scene, it was pretty normal for EEPROM devices to allow individual bytes to be erased and rewritten within an application circuit. Flash ROM was in some sense a step back functionally, since erasure could only take place in large chunks. Nonetheless, restricting erasure to large chunks made it possible to store information much more compactly than had been possible with EEPROM. Further, many flash devices have faster write cycles but slower erase cycles than would be typical of EEPROM devices (many EEPROM devices would take 1-10 ms to write a byte, and 5-50 ms to erase; flash devices would generally require less than 100 µs to write, but some required hundreds of milliseconds to erase).

I don't know that there's a clear dividing line between flash and EEPROM, since some devices that called themselves "flash" could be erased on a per-byte basis. Nonetheless, today's trend seems to be to use the term "EEPROM" for devices with per-byte erase capabilities and "flash" for devices which only support large-block erasure.
{ "source": [ "https://electronics.stackexchange.com/questions/69234", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10190/" ] }
69,311
I've been on this site now a couple of months and I notice various symbols used for MOSFETs. What is the preferred symbol for an N Channel MOSFET and why?
It is likely that you saw a Circuit Lab symbol and that this caused you to ask this question. The Circuit Lab N-channel MOSFET symbol is both unusual and illogical. I'd avoid using it if at all possible. Read on...

Acceptable [tm] N-channel MOSFET symbols tend to have these characteristics: Gate symbol on one side. Three "contacts" on the other side, vertically. The top of these is the drain. The bottom of these three is the source. The middle has an arrow pointing INTO the FET, and its outside end is connected to the source. This indicates that there is a connected body diode and that it is non-conducting when the source is more negative than the drain (the arrow is the same as it would be for a discrete diode). Any symbol which obeys these guidelines should be "clear enough" and OK to use. I have very occasionally seen people use a symbol which does not comply with these guidelines but which is still recognisable as an N-channel MOSFET.

SO. Any of these are OK, and you can see the differences for the unmarked P-channels. Many more examples here.

But!!! Jippie's example shows the rogue version. [Note: see below - this is in fact intended to be a P-channel symbol.] Truly horrible. I'd have to wonder whether this was a P-channel symbol or an N-channel one. Even the discussion it is taken from has people expressing uncertainty re arrow direction. As shown, IF that is an N-channel, then it is implying body diode polarity and NOT current flow in the source. Thusly:

Circuit Lab is apparently the (or a) culprit. This is their symbol for an N-channel MOSFET. A nasty piece of work, alas. The arrow shows the usual drain-source conduction direction, BUT as a MOSFET is a two-quadrant device and will provide a true resistive on-channel with \$V_{gs}\$ positive BUT \$V_{ds}\$ negative, the arrow is meaningless, and, as it is in the opposite direction to most N-channel MOSFET symbols, it is misleading to most. (Note the proper use of this symbol in the table below.)

USER23909 helpfully pointed out this page - Wikipedia - MOSFET. This page includes the following symbols. User xxx says these may be IPC standards, but Wikipedia is silent re their source.

Wikipedia MOSFET symbols: http://en.wikipedia.org/wiki/MOSFET#Circuit_symbols
{ "source": [ "https://electronics.stackexchange.com/questions/69311", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/20218/" ] }
69,323
I have a dust sensor that emits LED light, which is scattered by the material, and records the scattered light's intensity via a photodiode. I know the LED emits light in the infrared range. I want to know the (approximate) wavelength, since the infrared range is quite large. Is there any way I can find the wavelength of my IRED (page 2) based on the spec sheet, particularly pages 4 and 5?
{ "source": [ "https://electronics.stackexchange.com/questions/69323", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/23006/" ] }
69,661
Silver has a lower resistivity than gold and is cheaper, so why are high end audio components gold plated?
Gold is highly resistant to corrosion or oxidation, so it prevents poor connections from those sources. It is also fairly soft, so the mating surfaces deform slightly, increasing contact area and reducing resistance. The gold plating is very thin, so the added resistance from the gold is easily outweighed by its other properties. Note that gold is only needed on the actual contact areas - gold plating (or colour) on the body of the connector is only there to attract the gullible. Many commercial (not audiophile) connectors have selective gold plating on the contacts - the gold is only placed where it really matters.
{ "source": [ "https://electronics.stackexchange.com/questions/69661", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/10423/" ] }