source_id | question | response | metadata
---|---|---|---|
211,843 | This is the box of an LED spot I just bought. I'm wondering why the power consumption is measured in kWh/1000h and not simply in Watts. Edit: The labeling standard can be found here . (Guide for the application of the Commission Regulation (EU) No 874/2012 with regard to energy labelling of electrical lamps and luminaires). | Anyone who has a clue about how physical units work will of course realize that kWh/1000h means "1000 watt-hours per 1000 hours", which can be shortened to just W . But when it comes to lamps, the unit "W" is already used for the light output. Light bulbs which use more energy-efficient technologies than the classical incandescent light bulb often state their light output as an equivalency to an incandescent bulb with a specific power consumption. Until 2010 you could often find LED light bulbs labeled as "equivalent to a 40W bulb". So the consumer knows that if they want to replace an old 40W incandescent bulb with an equally bright LED bulb, they need to look for a 40W LED bulb. A consumer buying an LED lamp with an input power of 40W might be surprised by how bright it is. Also, the average consumer doesn't know much about how electricity works. They know they need to pay for their electricity consumption in a unit called "kWh", so they want to know how much they need to pay when they run the device for x hours. So from the point of view of the average consumer, the unit "Watt" means "light intensity" and "kWh per 1000 hours" means "energy consumption". A physicist will of course interject that the unit for visible light radiated by a source is "Lumen" and "Watt" is the unit power consumption should be measured in, so that's what should be printed on light bulb boxes. But physicists aren't average consumers. Using different units for each - even if both of them are misleading from a physicist's point of view - is the least misleading way to communicate it to the end-user. | {
"source": [
"https://electronics.stackexchange.com/questions/211843",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/8446/"
]
} |
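The unit identity in the answer above ("kWh/1000h can be shortened to just W") can be checked with simple arithmetic. A minimal sketch; the label value used is illustrative:

```python
# Verify that kWh per 1000 h reduces to plain watts:
# 1 kWh / 1000 h = 1000 Wh / 1000 h = 1 W.
def kwh_per_1000h_to_watts(kwh_per_1000h):
    """Convert an energy-label figure in kWh/1000h to average power in W."""
    watt_hours = kwh_per_1000h * 1000.0  # kWh -> Wh
    return watt_hours / 1000.0           # divide by the 1000 h period -> W

# A label reading "6 kWh/1000h" is simply a 6 W average power draw.
print(kwh_per_1000h_to_watts(6))  # -> 6.0
```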
212,001 | I have found a hardware bug on a board design after many PCBs have been manufactured but not populated. I can fix the problem by removing a SOT-23 component and putting a wire across two of the pads. So many PCBs have been manufactured that manually installing a wire across the two pads of the removed component is not economical in time or money. How can this be fixed using an automated production method? Are there components available to fix this kind of problem, i.e. a package with just a wire between the two pins? The link in question is one of the SOT23-5 diagonals. One suggestion is to use a zero-ohm resistor. These typically come in rectangular packages with rectangular leads. Would a pick-and-place machine handle resistors placed at 45 degrees to the pads? What would happen during reflow? Would the surface tension due to the incorrect alignment of the leads to pads cause the resistor to spin and detach from the intended pad? | You can buy zero-ohm links in a SOT23 package. Various connections are available; have a look at http://www.topline.tv/SOT_jumper.html | {
"source": [
"https://electronics.stackexchange.com/questions/212001",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/40842/"
]
} |
212,320 | I have an ordinary Weller soldering iron with a conical tip, which I fear does not transfer much heat to the area. Since I am planning to do a dead bug project I thought it would be better to get a flat on the tip of the iron, possibly by grinding it so it looks like this: However, I am told that this will not work because the tip is not actually solid, but is plated, and if I grind it, then the plating will get ground off. Is this true? Can I just buy an alternative tip and then install it somehow on the iron, or do I need a special kind of iron that supports interchangeable tips? The type of station I have is an older analog station which is one piece with the holder and sponge on top and a strip of LED lights that indicate the temperature. UPDATE My Weller is an S4240. On examining it closer, it has a knurled screw-on sleeve. When this is unscrewed the tip slides out and apparently can be replaced. | DON'T grind your soldering tips. It will ruin them. Good quality tips are made from copper with a thin layer of iron or another metal on top. The copper conducts the heat, and the other metal prevents the copper from corroding. You can buy new tips of any size and shape you want. They are pretty cheap. There is usually a tiny screw on the side of the soldering iron near the hot end that lets you change the tip. Here's a cross-section of what's inside good tips (image by Hakko ) | {
"source": [
"https://electronics.stackexchange.com/questions/212320",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/39947/"
]
} |
212,916 | The timing of quartz clocks is regulated by a crystal oscillator . This crystal oscillator effectively forms an RLC circuit. If this is so, what properties does a crystal oscillator have that make it advantageous over an RLC circuit? | Crystal oscillators are much more accurate; they are small, have low temperature coefficients, and exhibit low drift, all at low cost. | {
"source": [
"https://electronics.stackexchange.com/questions/212916",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/18963/"
]
} |
213,010 | I need to use a microcontroller on a system that must stay working without major changes for a long time (decades). To ensure that there will always be replacement parts, I need a microcontroller that will stay in production for a long time, or one produced by several manufacturers in a firmware-binary- and package/pin-compatible way. What can I do to ensure that the microcontroller I choose meets these criteria? The application doesn't need much computing power. Its aim is to control motors and other industrial systems. A microcontroller of 8 bits capable of changing the state of about 8-16 IO pins at a frequency of 0.5-1 MHz is OK. An ADC may be valuable, but can be replaced by a simple external comparator. | The FPGA manufacturers say if you use a 'soft core', that is, a microcontroller written in VHDL, then that VHDL design can be implemented on any future programmable FPGA hardware, thus freeing you from the likelihood of any particular piece of hardware going out of production. To buy that argument, you would need to assume that programmable hardware will continue to be available over your timespan (which is probable), and will continue to be available in chip sizes, costs and voltages that will suit your product (which I find harder to believe). To use this approach, you would have to accept that you may need to do a new hardware design to accept a new package, which kinda defeats your objective of no major changes. My approach, and my advice, would be to isolate your control processing from the rest of the circuitry on a small board, and define your own interface to it, the fewer pins the better. Perhaps SPI makes a suitable interface, or a nybble bus with data read/write and address strobes. Then if your chosen processor becomes obsolete during the product lifetime, you only have to redesign and test a small board, rather than a large board with vital analogue product functions on it. Program the control processor in C. 
Split your code strictly into generic algorithm modules and hardware interface modules. Then if particular bits of hardware have to change, you have isolated the rewrite to a small number of modules, and are not crawling all over your code. Choose a suitable voltage; I'd prefer 3.3 V over 5 V, for instance. When you choose your small control board, you could do worse than to pick a form factor that matches an available Arduino or PIC dev board. Then, your development and prototyping get a leg-up, and you could even start low-run production with bought modules before designing a lower-cost replacement. | {
"source": [
"https://electronics.stackexchange.com/questions/213010",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/74741/"
]
} |
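The module split the answer recommends (generic algorithm code kept strictly apart from hardware interface code) can be sketched as follows. This is an illustrative sketch, not the answerer's code; Python is used for readability and all names (`MotorInterface`, `MotorController`) are hypothetical:

```python
# Sketch of the suggested split: the algorithm module talks only to a
# narrow hardware-interface module, so porting to a new MCU or board
# means rewriting just the interface implementation, not the algorithm.
from abc import ABC, abstractmethod

class MotorInterface(ABC):
    """Hardware interface module: the only code that touches registers."""
    @abstractmethod
    def set_speed(self, fraction: float) -> None: ...
    @abstractmethod
    def read_limit_switch(self) -> bool: ...

class MotorController:
    """Generic algorithm module: contains no hardware knowledge at all."""
    def __init__(self, hw: MotorInterface):
        self.hw = hw

    def home(self) -> int:
        """Drive slowly until the limit switch trips; return the step count."""
        steps = 0
        self.hw.set_speed(0.1)
        while not self.hw.read_limit_switch():
            steps += 1
        self.hw.set_speed(0.0)
        return steps
```

When the chosen processor goes obsolete, only a new `MotorInterface` implementation needs writing and testing; `MotorController` is untouched.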
213,079 | This is a "wet" temperature sensor for a Boilermate 2000 thermal store. The pin connectors are inserted into a plastic plug that connects to the control board (see 2nd image). The pins do not stick out from the plug (see 3rd image which shows that the main pins are on the PCB). Are the pin connectors some sort of standard part? If so, what are they called? | Those are called Bootlace Ferrules and, yes, they're pretty standard in electrical wiring. They come in many different sizes, and each size has its own colour. There are also Twin Entry options that allow two wires to be joined: | {
"source": [
"https://electronics.stackexchange.com/questions/213079",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/98069/"
]
} |
213,445 | I am working on a project with a group and I am responsible for the digital part of the project, so I will be writing the code. To go from Analog to Digital, I have to choose a microcontroller. I was looking at TI microcontrollers and found that they have so many. They have: Stellaris Hercules MSP430 Series And the list just goes on... My questions: Which microcontroller does one use, and why? Under what conditions should I use microcontroller X rather than Y? Why are there so many different microcontrollers? | I am a TI employee who works in an MCU development group, but this is not an official statement from TI. In particular, this is not an official statement about roadmaps or priorities. Also, I'm not in marketing, so if I contradict any of our marketing material, they're right and I'm wrong. :-) M D's answer is correct, but I thought some more detail would be helpful. TI targets different applications with different requirements. When you're competing for an MCU socket (and there is a lot of competition in this industry), both features and price matter. A ten cent cost difference can win or lose the socket. One of the main drivers of cost is die size -- how much stuff is on the chip. Thus, it makes sense to have different product lines, and different families within those product lines. Product lines differ mainly in peripheral types and architecture, while families within a line differ mainly in cost and feature set. Here are some details on the product lines: Hercules is a continuation of the TMS470/TMS570 line. It's focused on safety and performance. One of the key features of Hercules is dual CPUs running the same code in parallel ("lock-step"). This lets you immediately detect faults in the CPU itself. Check out this datasheet for some performance info on a newer product. The Cortex-R5F CPU runs at >300 MHz, and there's a large number of peripherals with higher-end features -- the CAN modules have 64 mailboxes, for example. 
Obviously, this stuff isn't cheap. But look at the applications -- defibrillators, ventilators, elevators, insulin pumps... these are places where customers are willing to pay for safety. Hercules also goes into automotive products that have a wider temperature range and longer operating life. C2000's focus is on supporting control algorithms. The C28x "CPU" is really a DSP, and its instruction set has been extended to handle things like trigonometry and complex numbers. There's also a separate task-based processor called the Control Law Accelerator (CLA) that can run control algorithms independently of the CPU. The ADCs and PWMs support a lot of timing options, too. Performance varies from midrange ( Piccolo ) to high-end ( dual-core Delfino ). The big applications here are power converters, power line communication, industrial drives, and motor control. MSP430 is all about low power. They have some products that use FRAM (ferroelectric nonvolatile memory), which uses less power than flash, and even one that runs off of 0.9V (one battery). They have some less-common peripherals to support things like LCDs and capacitive touch sensing. Look through their datasheets and you'll see applications like remote sensors, smoke alarms, and smart meters. I don't know much about the Wireless MCU group, but obviously wireless connectivity has its own special requirements. They seem to have Cortex-M and MSP430 CPUs, with applications in consumer electronics and the Internet of Things. IoT has been a big buzzword for a while now, so I'd imagine that's one of their main targets. Their newest (?) product is described as an "Internet-on-a-chip™ solution". UPDATE : Fellow TIer justinrjy commented with more info about Wireless/Connectivity MCUs: "'Wireless MCU' products are distinguished by having a processor core that runs the drivers/stack of the wireless protocol. For instance, the CC26xx runs the entire BLE stack on the uC itself, making it really easy to develop for. 
Same with the CC3200, except that processor runs the WiFi drivers all on the Cortex-M4. The integrated core and drivers are really what make these a 'Wireless MCU', instead of a transceiver." As you can see, these product lines are targeting very different applications with very different requirements. Putting a 300 MHz Hercules chip into a battery-powered device would be a disaster, but so would putting an MSP430 into an airbag. Physical size can also matter. A 337-pin BGA package is awkward to fit in a tiny sensor, but it's nothing for a piece of industrial equipment. Within the product lines, there are multiple families. C2000 Delfino devices are faster, have more peripherals, and have more pins on their packages. They can also cost (at least) twice as much as a Piccolo device. Which one do you need? It depends on your application. MSP430 has some products that balance power consumption and performance, and others that focus solely on low power. (That one-battery MCU maxes out at 4 MHz and 2 kB of RAM.) There are many products within each family because new products are developed all the time. Transistors get smaller/cheaper, so more stuff can go on a chip. A mid-range MCU today would have been ultra-high-end ten years ago. Each product is usually made to target a few specific applications and support others where possible. Finally, there are multiple variants of each product (AKA the last digit in the part number). These usually have different amounts of memory and (maybe) small variations in what peripherals are available. Again, this is all about providing a price range. The short version is that each product provides a different balance of price, performance, and features. It's plain old market segmentation. Our customers are manufacturers, who care much more about small price differences than end users. People buy every part number we have, so clearly the demand is out there. 
:-) UPDATE: Jeremy asked how the requirements of big customers affect the design process, and whether we make custom MCUs. I've seen several TMS470/570 MCUs that were made for a single large automotive customer. That group also had a couple MCUs whose architectures were designed by and for one customer. In at least one of those, the customer wrote most of the RTL. Those are under heavy NDA restrictions, so I can't give details. General market products usually have at least one big customer in mind. Sometimes big customers get a special part number. Sometimes we'll add a peripheral just to win a big socket. But in general, I think big customers are more of a floor than a ceiling when it comes to features. An extreme example of custom parts is our high-reliability group. I've only heard stories about these guys, but apparently they take existing products and remake them to work in extreme conditions -- high temperatures, radiation, people shooting at you, etc. I know someone who buys HiRel TMS470s for down-hole drilling, where the temperature can reach 200C. (Maybe this one -- in stock at Arrow for only $400/chip!) They have a bunch of standard products listed on the web site, but from what I've heard, they can build to order even in small quantities -- you can buy a dozen HiRel versions of any chip you want if you're willing to spend $50,000+ per chip. :-) As a rule of thumb, everything in business is negotiable if you're spending enough money. | {
"source": [
"https://electronics.stackexchange.com/questions/213445",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/98243/"
]
} |
214,139 | I am trying to find relays for my application and I read a data sheet which looks fine but specifies Minimum applicable load: 10mV 10µA In my circuit, it is expected that the relay closes but no voltage and current is applied. You can think of it as 2 relays in series where one is open and one is closed, so there is no current. This sounds like something I'd do since I was at school. Why would a relay require a minimum voltage and current on the load side? Is it allowed to operate that relay under my conditions or not? What could possibly break in a relay if I don't respect this requirement? What does "minimum applicable load" mean? When and how do I need to consider this value? | The primary reason that almost all relays have a minimum load requirement is that the mechanical action of closing, coupled with an actual current flow, is required to 'wet' the contact and break through a layer of oxidation that invariably builds up. That is one reason that small signal relays generally use expensive contact alloys which resist oxidation, but as the phone company found out decades ago, even pure gold contacts can have issues in a high humidity environment. While oxidation doesn't affect the gold contacts, repeated cycles of moist/dry air would deposit an insulating layer. | {
"source": [
"https://electronics.stackexchange.com/questions/214139",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/39271/"
]
} |
214,436 | I'm totally new to electronics and I wonder why we need to put a resistor in series with a photoresistor to measure the variation of light. I mean, a photoresistor is already a resistor, so why do we have to decrease the voltage in the circuit with an additional resistor? Thanks in advance for your answers. | EDIT: Added example for calculating voltages in a voltage divider Because if you want to measure the resistance of something, you need to apply voltage to it. And if you apply voltage, you need to somehow measure that voltage, and by simply measuring between the photoresistor's terminal which is on the \$+5\;V\;(V_{cc})\$ and the terminal which is on \$GND\$, you get exactly \$+5\;V\$, there is no changing voltage, no matter how small or how large the resistance of the photoresistor is. simulate this circuit – Schematic created using CircuitLab You measure 5V in the schematic above. You solve the problem by using a voltage divider: simulate this circuit Now you can measure the voltage drop on the resistor, and from that value you can estimate the amount of light the photoresistor receives. Example: In the second diagram you can see that the voltage is applied across a \$50\;\Omega\$ and a \$100\;\Omega\$ resistance. Because Ohm's law says that \$U=R\cdot I\$ and the current must be equal in a series circuit, the same amount of current flows through \$R_1\$ and \$R_2\$. In a series circuit, current stays the same, but voltage is shared between the components. We can write down the following equation: \$U_{R_1}\$ = \$R_1\cdot I\$ You could ask how we can calculate the voltage if we don't know the current. Well, we don't know the current, but we can calculate it using Ohm's law. We write down the original Ohm's law equation differently: \$U=R\cdot I\;\Rightarrow\;I=\frac UR\$ Because in this case the total resistance is \$R_1+R_2\$ (or \$150\;\Omega\$ in our example), the equation for the current will be \$I=\frac{U}{R_1+R_2}\$. 
We can use this equation to substitute the single \$I\$ variable in the above-mentioned equation. So the equation for each of the resistors will be: \$U_{R_1}\$ = \$R_1\cdot\frac{U}{R_1+R_2}\$ \$U_{R_2}\$ = \$R_2\cdot\frac{U}{R_1+R_2}\$. If we have \$50\;\Omega\$ on \$R_1\$ and \$100\;\Omega\$ on \$R_2\$, then the voltages on them will be \$U_{R_1}\$ = \$R_1\cdot\frac{U}{R_1+R_2}=50\;\Omega\cdot\frac{5\;V}{50\;\Omega+100\;\Omega}=50\;\Omega\cdot\frac{5\;V}{150\;\Omega}=50\;\Omega\cdot0,0\dot3\;A=1,\dot6\;V\$ \$U_{R_2}\$ = \$R_2\cdot\frac{U}{R_1+R_2}=100\;\Omega\cdot\frac{5\;V}{50\;\Omega+100\;\Omega}=100\;\Omega\cdot\frac{5\;V}{150\;\Omega}=100\;\Omega\cdot0,0\dot3\;A=3,\dot3\;V\$. If \$R_2\$ changes (for example, with less illumination) and its resistance rises to \$150\;\Omega\$, the voltages will be \$U_{R_1}\$ = \$R_1\cdot\frac{U}{R_1+R_2}=50\;\Omega\cdot\frac{5\;V}{50\;\Omega+150\;\Omega}=50\;\Omega\cdot\frac{5\;V}{200\;\Omega}=50\;\Omega\cdot0,025\;A=1,25\;V\$. \$U_{R_2}\$ = \$R_2\cdot\frac{U}{R_1+R_2}=150\;\Omega\cdot\frac{5\;V}{50\;\Omega+150\;\Omega}=150\;\Omega\cdot\frac{5\;V}{200\;\Omega}=150\;\Omega\cdot0,025\;A=3,75\;V\$. The more the resistance of the photoresistor rises, the more voltage will drop across it. If we give the photoresistor more illumination and its resistance falls to \$75\;\Omega\$, then the voltages will be \$U_{R_1}\$ = \$R_1\cdot\frac{U}{R_1+R_2}=50\;\Omega\cdot\frac{5\;V}{50\;\Omega+75\;\Omega}=50\;\Omega\cdot\frac{5\;V}{125\;\Omega}=50\;\Omega\cdot0,04\;A=2\;V\$ \$U_{R_2}\$ = \$R_2\cdot\frac{U}{R_1+R_2}=75\;\Omega\cdot\frac{5\;V}{50\;\Omega+75\;\Omega}=75\;\Omega\cdot\frac{5\;V}{125\;\Omega}=75\;\Omega\cdot0,04\;A=3\;V\$. The lesser the resistance of the photoresistor gets, the less voltage will drop across it (and more voltage will drop across the other resistor). As you can see, we moved from \$3,\dot3\;V\$ to \$3,75\;V\$ when the photoresistor's resistance rose, and then the voltage dropped to \$3\;V\$ when the resistance fell. | {
"source": [
"https://electronics.stackexchange.com/questions/214436",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/98823/"
]
} |
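The divider arithmetic worked through in the answer above can be reproduced numerically. A minimal sketch using the answer's own values (\$R_1 = 50\;\Omega\$, \$U = 5\;V\$, varying photoresistor resistance \$R_2\$):

```python
# Voltage divider: U_R2 = R2 * U / (R1 + R2).
# Reproduces the worked examples from the answer.
def divider_voltage(u, r1, r2):
    """Voltage across R2 when R1 and R2 are in series across u volts."""
    return r2 * u / (r1 + r2)

for r2 in (100, 150, 75):
    print(f"R2 = {r2:3d} ohm -> U_R2 = {divider_voltage(5, 50, r2):.2f} V")
# R2 = 100 ohm -> 3.33 V; R2 = 150 ohm -> 3.75 V; R2 = 75 ohm -> 3.00 V
```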
215,110 | What is the purpose of the resistor \$R_2\$ ? If I remove \$R_2\$ , the circuit will behave the same, won't it? simulate this circuit – Schematic created using CircuitLab | R2 is used to prevent a floating base. It gives it a defined state, in case the node labeled 2.8V isn't connected. It's a weak pull-down resistor . A floating pin, not pulled to a known state, will act like a mini-antenna, and can float high or low many times and turn the transistor on and off at random. If that node is driven all the time, either high or low, then R2 is superfluous and can be removed. If the node is connected to, for example, a microcontroller GPIO that can go High-Impedance/Input (likely at start-up), then R2 keeps the transistor off until the microcontroller goes into output mode. If the transistor is actually a MOSFET, then R2 serves to bleed charge off the gate. MOSFETs have gate capacitance that may keep them on if it is not drained. | {
"source": [
"https://electronics.stackexchange.com/questions/215110",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/96076/"
]
} |
215,639 | A diode has an exponential I-V curve. To a first approximation, it will pass whatever amount of current is required to keep the voltage across it constant. Is there a passive component that will (approximately) drop any amount of voltage to maintain a constant current, possibly with a logarithmic I-V curve? | Yes, they are called current regulator diodes . They are essentially a JFET with the gate joined internally to the source, so that you get approximately IDSS for voltages above the pinch-off voltage (and below the breakdown). There are much better circuits possible using IC technology, so I think the current regulator diodes are mostly a relic from the past. Compare this AL5809 LED regulator IC (two leads). Something similar is possible using a three-terminal regulator such as an LM317 and a resistor (resulting in a two-lead device). It's arguable whether the current regulator diode is actually passive or not, but I'll leave the ontological discussion to others. | {
"source": [
"https://electronics.stackexchange.com/questions/215639",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/38641/"
]
} |
215,886 | Consider that the hardware team will take 2 months to develop some hardware, but by that time I will need to have the software ready. My question is: how can I write the software and test it without having the hardware? Are there any standards to be followed? How do you do it? | Not having hardware during the initial stages of firmware development happens. Common strategies to deal with this are: Spend time up front architecting the system carefully before you write any code. Of course you should do this anyway, but in this case it's even more important than usual. It's much easier to debug well thought out software than a pasta-based mess. Properly modularize everything, minimizing the interfaces between modules. This will help contain bugs to individual modules, and allow easier testing of individual modules. Write code bottom-up, hardware-touching drivers go first, high level application logic last. This allows discovering inconveniences imposed by the architecture early on. Don't be afraid to change the architecture as hardware realities come to light, but make sure all the documentation is updated accordingly. Simulate. Most microcontroller companies provide software simulators of their microcontrollers. These can only go so far, but can still be very useful. Simulating the inputs and measuring the outputs of the hardware may be difficult, but checking higher level logic this way shouldn't be too hard. This is where the modular design helps again. If you can't reasonably simulate some low level hardware interactions, you use a different version of the module that touches that hardware but that passes its own simulated actions to the upper levels. The upper levels won't know this is happening. You won't be checking the low level module this way, but most everything else. In short, use good software design practices, which of course you should be doing anyway. | {
"source": [
"https://electronics.stackexchange.com/questions/215886",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/52628/"
]
} |
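The "different version of the module" idea from the answer above can be sketched like this: the high-level logic is exercised against a simulated low-level driver on the host PC, and the real register-level driver (exposing the same interface) is swapped in once hardware arrives. An illustrative sketch only; all names and ADC parameters are hypothetical:

```python
# Host-side simulation of a low-level driver, so high-level logic can be
# tested before hardware exists. A real driver would expose the same
# read_raw() method but talk to the actual ADC peripheral.

class SimulatedAdc:
    """Stand-in driver: replays canned samples instead of touching hardware."""
    def __init__(self, samples):
        self._samples = iter(samples)

    def read_raw(self) -> int:
        return next(self._samples)

def average_millivolts(adc, n, vref_mv=3300, bits=12):
    """High-level logic under test: average n ADC readings, scale to mV."""
    total = sum(adc.read_raw() for _ in range(n))
    return (total / n) * vref_mv / (2 ** bits - 1)

# With a hypothetical 12-bit ADC and 3300 mV reference, a constant
# mid-scale code of 2048 should come out near 1650 mV.
adc = SimulatedAdc([2048, 2048, 2048, 2048])
print(round(average_millivolts(adc, 4)))  # -> 1650
```

The upper-level code never knows it is talking to a simulation, which is exactly the point the answer makes.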
216,959 | It seems that a well-designed SMPS has a capacitor connecting the ground planes of the primary and secondary sides of the transformer, such as the C13 capacitor here . What is the purpose of this capacitor? I've let myself understand that it's for EMI suppression, but what kind of EMI does it suppress, and how? It seems to me to be the only leg of an open circuit and thus completely inert, but obviously I'm wrong about that. | Switched mode power supplies use what is known as a "flyback converter" to provide voltage conversion and galvanic isolation. A core component of this converter is a high frequency transformer. Practical transformers have some stray capacitance between primary and secondary windings. This capacitance interacts with the switching operation of the converter. If there is no other connection between input and output this will result in a high frequency voltage between the output and input. This is really bad from an EMC perspective. The cables from the power brick are now essentially acting as an antenna transmitting the high frequency generated by the switching process. To suppress the high frequency common mode it is necessary to put capacitors between the input and output side of the power supply with a capacitance substantially higher than the capacitance in the flyback transformer. This effectively shorts out the high frequency and prevents it escaping from the device. When designing a class 2 (unearthed) PSU we have no choice but to connect these capacitors to the input "live" and/or "neutral". Since most of the world doesn't enforce polarity on unearthed sockets we have to assume that either or both of the "live" and "neutral" terminals may be at a significant voltage relative to earth, and we usually end up with a symmetrical design as a "least bad option". That is why if you measure the output of a class 2 PSU relative to mains earth with a high impedance meter you will usually see around half the mains voltage. 
That means on a class 2 PSU we have a difficult tradeoff between safety and EMC. Making the capacitors bigger improves EMC but also results in higher "touch current" (the current that will flow through someone or something that touches the output of the PSU and mains earth). This tradeoff becomes more problematic as the PSU gets bigger (and hence the stray capacitance in the transformer gets bigger). On a class 1 (earthed) PSU we can use the mains earth as a barrier between input and output either by connecting the output to mains earth (as is common in desktop PC PSUs) or by using two capacitors, one from the output to mains earth and one from mains earth to the input (this is what most laptop power bricks do). This avoids the touch current problem while still providing a high frequency path to control EMC. Short circuit failure of these capacitors would be very bad. In a class 1 PSU failure of the capacitor between the mains supply and mains earth would mean a short to earth (equivalent to a failure of "basic" insulation). This is bad but if the earthing system is functional it shouldn't be a major direct hazard to users. In a class 2 PSU a failure of the capacitor is much worse: it would mean a direct and serious safety hazard to the user (equivalent to a failure of "double" or "reinforced" insulation). To prevent hazards to the user the capacitors must be designed so that short circuit failure is very unlikely. So special capacitors are used for this purpose. These capacitors are known as "Y capacitors" (X capacitors on the other hand are used between mains live and mains neutral). There are two main subtypes of "Y capacitor", "Y1" and "Y2" (with Y1 being the higher rated type). In general Y1 capacitors are used in class 2 equipment while Y2 capacitors are used in class 1 equipment. So does that capacitor between the primary and secondary sides of the SMPS mean that the output is not isolated? 
I've seen lab supplies that can be connected in series to make double the voltage. How do they do that if it isn't isolated? Some power supplies have their outputs hard-connected to earth. Obviously you can't take a pair of power supplies that have the same output terminal hard-connected to earth and put them in series. Other power supplies only have capacitive coupling from the output to either the input or to mains earth. These can be connected in series since capacitors block DC. | {
"source": [
"https://electronics.stackexchange.com/questions/216959",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/89980/"
]
} |
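The touch-current tradeoff described above can be put into rough numbers: the RMS current through a Y capacitor at mains frequency is approximately \$I = 2\pi f C V\$. A minimal sketch; the capacitor value, mains voltage, and the 0.25 mA figure are illustrative assumptions, not values from the answer:

```python
import math

def touch_current_ma(c_farads, v_rms, f_hz=50):
    """Approximate RMS current (mA) through a capacitor from mains to output."""
    return 2 * math.pi * f_hz * c_farads * v_rms * 1000  # A -> mA

# A single hypothetical 2.2 nF Y capacitor across 230 V / 50 Hz mains:
i = touch_current_ma(2.2e-9, 230)
print(f"{i:.3f} mA")  # ~0.16 mA, below the ~0.25 mA often cited for class 2 gear
```

Doubling the capacitance for better EMC doubles this current, which is the safety/EMC tradeoff the answer describes.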
217,423 | ICs are typically packaged (encapsulated) in a black "plastic". What is this packaging material made of? ( source ) | Transfer-molded epoxy (which is a thermoset material). Thermoset plastics, once cross-linked, will not melt; they only char at high temperatures. See, for example, this TI/NS document: As mentioned in the NS document above, epoxy cresol novolac (ECN) is the most common ingredient of these epoxies. ( source ) Link to the fire at Sumitomo in Japan in '93. | {
"source": [
"https://electronics.stackexchange.com/questions/217423",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/59472/"
]
} |
219,397 | Is there a way to tell if a circuit is high-pass or low-pass without memorizing different topologies? | Look at the extremes: DC and very high frequencies. For DC you can remove the capacitors and short the inductors. For high frequencies you can short the capacitors and remove the inductors. By looking at the resulting circuit it should be easy to tell whether low or high frequencies can pass. | {
"source": [
"https://electronics.stackexchange.com/questions/219397",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/94590/"
]
} |
219,527 | I have been reading about CPUs recently and came to know that all logical blocks and memory on a CPU can be made out of transistors. So are transistors the only electronic component on a CPU? Edit (Made after first two answers): But the making of a CPU only talks about projecting transistor diagrams (maybe that is the major part). But how are additional components like diodes, capacitors etc. added to a CPU? | The logical blocks and memories can be made out of only transistors. The important question is: are all of the circuits on CPUs logical blocks and memories, or is there anything else? The answer is, there are always some other circuits. Here are some examples:

- ESD protection circuits often use diodes and resistors
- Internal bypass capacitors: actually these can be made just from transistor gates, but they are often also made in the metal layers.
- Analog blocks like internal LDO regulators, bandgap references, power-on-reset comparators, etc. are usually best implemented with some resistors among the transistors. It may be possible to get rid of the resistors and use 100% transistors in some of these cases, but it's not necessarily optimal.
- Internal oscillators may use inductor-capacitor (LC) tank circuits (though inductors are so large that they are not cost-effective on modern general purpose CPUs).
- Etc., etc. | {
"source": [
"https://electronics.stackexchange.com/questions/219527",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/101664/"
]
} |
219,621 | I'm trying to replace a faulty DC power jack on a laptop, and having quite a difficult time with it. At this point, I'm not sure if the problem is with my equipment, or my technique. Equipment: Weller 50w temp controlled iron (max temp: 850f, ETA tip) Dremel gas powered iron / hot air gun ( this thing ) There are six pads on the underside of the board I need to desolder to remove the old jack. The four on the outside, far as I can tell, only provide mechanical support rather than electrical connectivity. This is what the board looks like from the top: And the bottom: (It's a bit of a mess due to my previous attempts. The black/brown crap is just flux, not char on the board) It took a lot of messing around to get it out as far as it currently is. The main problem is that removing the existing solder is proving to be nigh impossible - I've got both some desoldering wick, and a solder sucker. The sucker has proven all but useless, the moment I move the iron out of the way to get the sucker in place, the solder has already rehardened. The wick kinda works, but it seems to take a very long time to get very little solder absorbed. As in, you can see the faintest hint of silver color in the copper wick. My technique was to set my iron to max temp (850f), let it get up to temperature (verified on the digital display), add some flux, hold the wick in place on top of the pad and press the tip of the iron into it. My understanding is that this high temperature is required due to factory solder being trickier to deal with than the stuff you buy on a spool, and also likely to be the lead free kind, which requires a higher melting temperature. Now the other option I have is the torch/hot air gun, but I don't want to mess around with it too much for fear of scorching the board. Hence why I'm here, asking someone who's hopefully an expert. How do I tell when my work area is getting too hot?
Given what I've described here, am I doing anything obviously wrong?
Am I missing some crucial piece of equipment to make this job easier? | The jack is faulty, so... Cut all the plastic away with side cutters, leaving just the metal terminals. Now you can remove each terminal separately. Hold the circuit board vertically in a vise. Get your iron hot as usual, and apply more solder to the pad to improve heat transfer. Grab a terminal (on its edge to reduce heat transfer) with your pliers, then heat the pad with your iron. The solder should melt quickly, then you simply pull the terminal out of the board. Finally, use a solder sucker and/or desoldering braid to remove excess solder from the hole. | {
"source": [
"https://electronics.stackexchange.com/questions/219621",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/101712/"
]
} |
219,734 | LEDs are an old technology, why did the industries take so long to put them into light bulbs? Was there any technological gap missing? | It is not possible to produce white light without an efficient blue LED, either using RGB LEDs or a blue LED + yellow phosphor. The breakthrough was the invention of the high-brightness Gallium-Nitride blue LED by Shuji Nakamura at Nichia
in the early 1990s. It still took a while to get the overall efficiency up to the level of fluorescent bulbs, and it's only in the last decade that LEDs finally came out on top. | {
"source": [
"https://electronics.stackexchange.com/questions/219734",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/101286/"
]
} |
220,444 | I have many mobile (smart phone) chargers. The output of them is 5 volts but the battery needs 3.7v only. Why is it better to make a 5v charger then convert the 5v to 3.7v? Why don't we just make a 3.7v charger? | If you look at the charge profile of a lithium battery you'll see that at certain points it changes from constant current to constant voltage charging. This means that some form of "in series" charge control mechanism needs to be present to act initially as a constant current source then change to a constant voltage source. This charge control circuit comes with an overhead - it needs maybe 0.5 volts across it (minimum) to do its job. Given that the final part of the charge regime is constant voltage at usually 4.2 volts, it makes sense to use a wall wart with 5 volts output. Even if the wall wart dropped to 4.7 volts the charge circuit would still have enough overhead to deliver 4.2 volts. | {
"source": [
"https://electronics.stackexchange.com/questions/220444",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/50937/"
]
} |
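The answer's headroom argument is simple arithmetic: the charge-control circuit needs its roughly 0.5 V overhead on top of the 4.2 V constant-voltage setpoint, so a 5 V (even a sagging 4.7 V) adapter still works, while a 3.7 V adapter would not. A sketch using the numbers from the answer:

```c
#include <assert.h>

/* True if the adapter can still regulate: adapter voltage minus the
   constant-voltage charge setpoint must cover the charge controller's
   dropout. The small epsilon just guards against floating point edge
   cases in the comparison. */
int charger_has_headroom(double v_adapter, double v_charge, double v_dropout) {
    return (v_adapter - v_charge) >= (v_dropout - 1e-9);
}
```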
221,308 | In trying to understand brushed DC motors, this post has been very helpful but I still have some fundamental questions about the brush mechanism. For instance, what is the purpose of the spring? ( source ) And since a brush looks mostly like a spring, how did the name "brush" even come about? | The purpose of the brushes is to make electrical contact with a rotating conductor (the commutator). Originally, these were bundles of wire that would be dragged across the commutator. At any time, at least a few strands of the wire would be making contact. These bundles, of course, look like "brushes". Things have improved though, and now we use solid, low-friction, conductive materials for the brushes. It is common to use assorted types of graphite. These brushes must be held against the rotating commutator, and the material eventually wears away and must be replaced. The spring pushes the brush against the commutator, providing good electrical contact as the material slowly wears away. Here's a representative picture dredged from Google. The dark material is the actual "brush", made of conductive graphite. Please note that, unlike the brushes in your link, these don't have a wire connecting to the graphite. This is because the springs themselves are conductive! An additional wire can be used in higher-current applications. | {
"source": [
"https://electronics.stackexchange.com/questions/221308",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/102739/"
]
} |
221,312 | I need a cheap, somewhat accurate (~0.5%) voltage reference for some DACs. At first I was going to use a LDO voltage regulator (a TC1223 specifically) for this, it seems to fit the bill looking at the datasheet. Then I saw there is a separate category of ICs called voltage references rather than voltage regulators. But from what I can tell the voltage references with the same initial accuracy as the regulator I mentioned above, costs more, and also requires one or more external resistors (at least the shunt diode reference types). So I was wondering what the difference between regulators and references are, and whether or not I can make do with a regulator for my needs or if I should get a reference, regardless of the higher price for seemingly similar specs. Thanks. | A voltage regulator is designed to take a variable voltage in (say, 2-5v), and output a constant voltage (say, 3.3v). Now, voltage regulators are typically used to power a circuit, which means they will have a current output of a few hundred mA or more, generally speaking. In order to keep cost, size, etc down, the output tolerance on voltage regulators are (again, generally) a few 10s or 100s of mV. For example, the RG71055 voltage regulator has a minimum output voltage of 5.2v, and a maximum of 5.8v, with a target output voltage of 5.5v, and can source 30mA. That's about a 5% voltage tolerance, assuming I number crunched correctly. On the flip side, a voltage reference is designed to take a variable voltage, and deliver EXACTLY the rated output voltage. For example, the LT1790 can supply 5v with a tolerance of 0.1%, which is a 50x improvement over the RG71055. However, the LT1790 can only source 5mA max, which is 6x less than the RG71055. A voltage reference is used when you need to know that this line is exactly a certain voltage (in other words, really tight tolerances). On Digikey, you can get a voltage reference with 0.01% tolerance. 
With voltage regulators, you'd be lucky to get one with a 1% tolerance. | {
"source": [
"https://electronics.stackexchange.com/questions/221312",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/72797/"
]
} |
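The "about 5%" figure for the RG71055 can be number crunched as half the min-to-max spread relative to the nominal output. A small sketch using the values quoted above (5.2 V min, 5.5 V nominal, 5.8 V max):

```c
#include <assert.h>
#include <math.h>

/* Output tolerance in percent: half the (max - min) spread over nominal. */
double tolerance_pct(double v_min, double v_nom, double v_max) {
    return 100.0 * ((v_max - v_min) / 2.0) / v_nom;
}
```

This gives about ±5.5% for the regulator, versus the 0.1% quoted for the LT1790 reference.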
221,483 | For two DACs, one being sent D0-D7 and the other being sent D8-D15, with power supply being 5V, if 5V is added to output of 2nd DAC and then the two DAC outputs are summed, should result in a 16 bit DAC made up of two 8 bit DACs. The only problem is that if the second DAC has 0x00 input then the 5V addition needs to be cancelled out which I am not sure how to do. The summing can be done by summing amplifier. The circuit need only work upto few 10s of kHz. Is there something fundamentally wrong with this idea? | It's possible, but it won't work well. Firstly, there is the problem of combining the two outputs, with one scaled precisely 1/256 of the other. (Whether you attenuate one by 1/256, amplify the other by 256, or some other arrangement, *16 and /16 for example, doesn't matter). The big problem however is that an 8-bit DAC is likely to be accurate to something better than 8 bits : it may have a "DNL" specification of 1/4 LSB and an "INL" specification of 1/2LSB. These are the "Differential" and "Integral" nonlinearity specifications, and are a measure of how large each step between adjacent codes really is. (DNL provides a guarantee between any two adjacent codes, INL between any two codes across the full range of the DAC). Ideally, each step would be precisely 1/256 of the full scale value; but a 1/4LSB DNL specification indicates that adjacent steps may differ from that ideal by 25% - this is normally acceptable behaviour in a DAC. The trouble is that an 0.25 LSB error in your MSB DAC contributes a 64 LSB error (1/4 of the entire range) in your LSB DAC! In other words, your 16 bit DAC has the linearity and distortion of a 10 bit DAC, which for most applications of a 16 bit DAC, is unacceptable. Now if you can find an 8-bit DAC that guarantees 16-bit accuracy (INL and DNL better than 1/256 LSB) then go ahead : however they aren't economic to make, so the only way to get one is to start with a 16-bit DAC! 
Another answer suggests "software compensation" ... mapping out the exact errors in your MSB DAC and compensating for them by adding the inverse error to the LSB DAC : something long pondered by audio engineers in the days when 16-bit DACs were expensive... In short, it can be made to work to some extent, but if the 8-bit DAC drifts with temperature or age (it probably wasn't designed to be ultra-stable), the compensation is no longer accurate enough to be worth the complexity and expense. | {
"source": [
"https://electronics.stackexchange.com/questions/221483",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/20711/"
]
} |
221,531 | I've seen YouTube videos of supercapacitors replacing car batteries. Is this practical? And if so, why haven't they been offered in the automotive market place? https://www.youtube.com/watch?v=GPJao1xLe7w the type supercapacitor he is using has the following data sheet: http://www.nooelec.com/files/2600f.pdf Note that it contains 8,125 Joules of Stored Energy. Then if you go to http://www.rapidtables.com/calc/electric/Joule_to_Watt_Calculator.htm and enter 8125 in the box and let's say 5 seconds of starting (it shd start up in 1 second in actuality). You then get 1,625 Watts. Remember 1 HP = 750 Watts , so you have just over 2 HP of starting power. Remember he's using six of them.
6 x 8125= 48,750 J. @ 16.2V. (for a 2 sec. start it's over 24,000 Watts(32 HP) of instant power)
Easily enough to start your car.
Without a battery too. A good car battery would have 700 CCA. @ 14V = 9800 Watts(13 HP). Quite a difference. (The average starter is 1.9 to 2 HP) Here, as time goes by, I will post more videos to bolster my claim: https://www.youtube.com/watch?v=EoWMF3VkI6U http://www.extremetech.com/computing/183839-new-supercapacitor-technology-could-store-conduct-power-on-the-same-copper-wires http://scitechdaily.com/graphene-based-supercapacitors-may-significantly-boost-power-electric-vehicles/ http://phys.org/news/2015-09-micro-supercapacitor-unmatched-energy-storage.html https://www.tecategroup.com/ultracapacitors-supercapacitors/industry_news.php http://www.skeletontech.com/news/skeleton-technologies-launches-markets-highest-energy-density-ultracapacitor/ https://www.youtube.com/watch?v=ZgozrScGN8U Bottom line is, if you have enough Farads, you have energy density. And this really settles the matter once and for all... https://www.youtube.com/watch?v=JAT_8H23iGI | It is not practical. I do not know why people do this, there is no benefit whatsoever. It amounts to misuse of something useful. Simply put, those videos are by people who don't know what they are doing and are misusing supercapacitors for a bizarre and senseless application they are neither well-suited to nor even practical. And they are offered on the automotive market, just not as battery replacements, for the same reason headlights are offered on the automotive market, just not as car stereo replacements. Because that wouldn't make any sense. The sole reason supercapacitors exist is power density. They have terrible energy density, and that terrible energy density comes at many many times the cost. The entire point of a battery is bulk energy storage. Using supercapacitors to do the thing they are the worst at instead of something that is cheap, readily available, and proven for over 100 years is... the kindest but much too weak word I can use to describe that is "silly." 
Those videos exist, but just because there is a video of it doesn't make it a good idea. It isn't. What is a good idea is using supercapacitors for the reason they exist, which unsurprisingly is the exact way they are being used in automotive applications. Batteries have great energy density, but compared to supercapacitors (or any capacitor), batteries don't even come close in power density. Beyond that, forcing a battery to provide high amounts of power is hard on it and will reduce its long term life, and the quicker you drain a battery, the lower its apparent energy capacity will be. A battery will last much longer if drained at a 10 hour rate vs. a 1 hour rate. Meaning, at a rate that will discharge it in 10 hours vs. just 1 hour. Higher power means a higher discharge rate. This power density weakness is bidirectional: batteries are bad at delivering huge spikes of energy, and bad at accepting them. They like things nice and steady. That is where super capacitors come in. They have terrible energy density, but great power density. 99% of the time, the big power spikes demanded in automotive applications are also brief - things like braking, a burst of acceleration, the inrush current of the starter motor, that sort of thing. The only reasonable (and intended) way to use a supercapacitor is in addition to a battery, never in replacement of a battery. They perfectly complement each other. A battery deals with storing tons of energy, while capacitors deliver it at high power when needed. They permit things like capturing nearly all of the energy back from regenerative braking, because all that energy can simply be dumped right into them and they'll handle it like champs. It can then be siphoned back into the battery at a controlled rate that the battery can deal with. Supercapacitors can let even an extremely weak battery in extreme cold start the car, because the battery is relieved of power demands. 
But that weak battery will keep working and still slowly but surely recharge the capacitors and stay charged long after those video makers' cars are dead in the water. Long story short, they are used in the automotive industry, and the people in those videos are simply spending money to make their cars inferior in many important ways by misusing supercapacitors in a way that is only detrimental. They are not replacements for batteries because batteries store tons of energy, capacitors do not. Used in tandem, however, they are a very good match, and each picks up the slack in the areas where the other is weak. | {
"source": [
"https://electronics.stackexchange.com/questions/221531",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/100880/"
]
} |
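The energy and power figures in the question follow from E = ½CV² and P = E/t (2600 F at 2.5 V per the linked datasheet). A sketch to verify them:

```c
#include <assert.h>
#include <math.h>

/* Energy stored in a capacitor: E = C * V^2 / 2. */
double cap_energy_joules(double c_farads, double v_volts) {
    return 0.5 * c_farads * v_volts * v_volts;
}

/* Average power if that energy is drained over t_seconds. */
double avg_power_watts(double energy_joules, double t_seconds) {
    return energy_joules / t_seconds;
}
```

This confirms 8125 J and 1625 W over 5 s, but it also illustrates the answer's point: 6 × 8125 J is only about 13.5 Wh of stored energy, a tiny amount compared to a typical car battery (hundreds of Wh). High power, low energy.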
223,063 | I understand the traditional difference between the 7400 series and the 4000 series logic ICs, but since there are CMOS versions of the 7400 series, is there an advantage to use the 7400 CMOS version over the 4000 series chips? Please note I'm not talking about TTL vs CMOS, as that has been discussed thoroughly before. If there is no difference (read significant difference), I'm assuming then that they would be able to interface with each other? I would think the voltage levels for high and low for both CMOS versions would be near identical, but please correct me if I'm wrong. | The difference depends on your system definition. On the one hand, 74HC operates over a limited voltage range, with 6 volts specified as the maximum supply voltage. The CD4000 series, on the other hand, is rated to a maximum of 18 volts, so it may well be easier to use the CD4000 series in a battery-operated system. If the limited voltage range of the 74HC line is not a problem, the line is much faster. For instance, comparing the CD4011/74HC00 (Quad 2-input NAND gates) gives propagation delays at 5 volts of 90 nsec (typ) vs 7 nsec. For the CD4063 vs the 74HC85 (4-bit magnitude comparator) the numbers are 625 nsec (typ) vs. 63 nsec. The CD40192 4-bit up/down counter has a typical count frequency of 4 MHz, while the 74HC192 goes at 36 MHz. It's worth keeping in mind that both lines run faster at higher Vdd, so a CD4000 at 15 volts will do better than the numbers here, but the order of magnitude difference is not erased. And yes, the 74HC series is designed to roughly maintain the speeds of the 7400/74LS00 lines, and often allows drop-in replacement for much lower power. | {
"source": [
"https://electronics.stackexchange.com/questions/223063",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/90674/"
]
} |
223,929 | I am mostly doing development on devices that have ported Linux, so the standard C library provides lots of its functionality through implementing system calls which have a standardised behaviour. However for bare metal, there is no underlying OS. Is there a standard related to how a C library should be implemented, or do you have to relearn the peculiarities of a library implementation when you switch to a new board which provides a different BSP? | Yes, there is a standard, simply the C standard library . The library functions do not require a "full blown" OS, or any OS at all, and there are a number of implementations out there tailored to "bare metal" code, Newlib perhaps being the best known. Taking Newlib as an example, it requires you to write a small subset of core functions, mainly how files and memory allocation is handled in your system. If you're using a common target platform, chances are that someone already did this job for you. If you're using linux (probably also OSX and maybe even cygwin/msys?) and type man strlen , it should have a section called something like CONFORMING TO , which would tell you that the implementation conforms to a specific standard. This way you can figure out if something you've been using is a standard function or if it depends on a specific OS. | {
"source": [
"https://electronics.stackexchange.com/questions/223929",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/37061/"
]
} |
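For Newlib specifically, the "small subset of core functions" are its syscall stubs (named _sbrk, _write, _read, _close, etc. in a real port). Below is a self-contained sketch of the two most commonly needed ones; the functions are renamed sbrk_stub/write_stub here, and a fake heap array plus a log buffer stand in for the linker-provided end-of-bss symbol and a board UART, so this is illustrative rather than a drop-in port:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for what the linker script and board code normally provide. */
static char fake_heap[256];            /* real code: memory past end of .bss */
static char *heap_ptr = fake_heap;
static char uart_log[64];              /* captures what "the UART" received  */
static size_t uart_pos = 0;

static void uart_putc(char c) { uart_log[uart_pos++] = c; }

/* In a real Newlib port this is _sbrk(): malloc() grows the heap here. */
void *sbrk_stub(ptrdiff_t incr) {
    char *prev = heap_ptr;
    heap_ptr += incr;                  /* no bounds check: sketch only */
    return prev;
}

/* In a real Newlib port this is _write(): printf() output lands here. */
int write_stub(int fd, const char *buf, size_t len) {
    (void)fd;                          /* send everything to the UART */
    for (size_t i = 0; i < len; i++)
        uart_putc(buf[i]);
    return (int)len;
}
```

In a real port _sbrk backs malloc() and _write is where printf() output ends up, typically pointed at a UART exactly like this.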
224,156 | When C code is written, compiled and uploaded to a microcontroller, the microcontroller starts running. But if we take this uploading and startup process step by step in slow motion, I have some confusion about what is actually happening inside the MCU (memory, CPU, bootloader). Here is (most probably wrong) what I would answer if someone were to ask me:

1. Compiled binary code is written to flash ROM (or EEPROM) through USB
2. Bootloader copies some part of this code to RAM. If true, how does the boot-loader know what to copy (which part of the ROM to copy to the RAM)?
3. CPU starts fetching instructions and data of the code from ROM and RAM

Is this wrong? Is it possible to summarize this process of booting and startup with some information about how the memory, bootloader, and CPU interact in this phase? I have found many basic explanations of how a PC boots via BIOS. But I'm stuck with the microcontroller startup process. | 1) the compiled binary is written to prom/flash, yes. USB, serial, i2c, jtag, etc depends on the device as to what is supported by that device, irrelevant for understanding the boot process. 2) This is typically not true for a microcontroller; the primary use case is to have instructions in rom/flash and data in ram, no matter what the architecture. For a non-microcontroller, your pc, your laptop, your server, the program is copied from non-volatile storage (disk) to ram then run from there. Some microcontrollers let you use ram as well, even ones that claim harvard, even though it appears to violate the definition. There is nothing about harvard that prevents you from mapping ram into the instruction side; you just need to have a mechanism to get the instructions there after power is up (which violates the definition, but harvard systems would have to do that to be useful other than as microcontrollers). 3) sort of. Each cpu "boots" in a deterministic, as designed, way. The most common way is a vector table, where the address of the first instructions to run after powering up is in the reset vector, an address that the hardware reads, then uses that address to start running. The other general way is to have the processor start executing without a vector table at some well known address. Sometimes the chip will have "straps", some pins that you can tie high or low before releasing reset, that the logic uses to boot different ways. You have to separate the cpu itself, the processor core, from the rest of the system. 
Understand how the cpu operates, and then understand that the chip/system designers have set up address decoders around the outside of the cpu so that some part of the cpu's address space communicates with a flash, some with ram and some with peripherals (uart, i2c, spi, gpio, etc). You can take that same cpu core if you wish, and wrap it differently. This is what you get when you buy something arm or mips based. arm and mips make cpu cores, which chip people buy and wrap their own stuff around; for various reasons they don't make that stuff compatible from brand to brand, which is why you can rarely ask a generic arm question when it comes to anything outside the core. A microcontroller attempts to be a system on a chip, so its non-volatile memory (flash/rom), volatile memory (sram), and cpu are all on the same chip along with a mixture of peripherals. But the chip is designed internally such that the flash is mapped into the address space of the cpu in a way that matches the boot characteristics of that cpu. If for example the cpu has a reset vector at address 0xFFFC, then there needs to be flash/rom that responds to that address that we can program via 1), along with enough flash/rom in the address space for useful programs. A chip designer may choose to have 0x1000 bytes of flash starting at 0xF000 in order to satisfy those requirements. And perhaps they put some amount of ram at a lower address, maybe 0x0000, and the peripherals somewhere in the middle. Another cpu architecture might start executing at address zero, so they would need to do the opposite: place the flash so that it answers to an address range around zero, say 0x0000 to 0x0FFF for example, and then put some ram elsewhere. The chip designers know how the cpu boots and they have placed non-volatile storage there (flash/rom). It is then up to the software folks to write the boot code to match the well known behavior of that cpu. 
You have to place the reset vector address in the reset vector and your boot code at the address you defined in the reset vector. The toolchain can help you greatly here. Sometimes, especially with point-and-click ides or other sandboxes, they may do most of the work for you; all you do is call apis in a high level language (C). But however it is done, the program loaded into the flash/rom has to match the hardwired boot behavior of the cpu. Before the C portion of your program, main() and on if you use main as your entry point, some things have to be done. A C programmer assumes that when they declare a variable with an initial value, that initial value will actually be there. Well, variables, other than const ones, are in ram, but if one has an initial value, that initial value has to be stored in non-volatile memory (flash/rom). So this is the .data segment, and the C bootstrap needs to copy the .data stuff from flash to ram (where it goes is usually determined for you by the toolchain). Global variables that you declare without an initial value are assumed to be zero before your program starts, although you should really not assume that, and thankfully some compilers are starting to warn about uninitialized variables. This is the .bss segment, and the C bootstrap zeros that out in ram; the content, zeros, does not have to be stored in non-volatile memory, but the starting address and the size do. Again the toolchain helps you greatly here. And lastly, the bare minimum is that you need to set up a stack pointer, as C programs expect to be able to have local variables and call other functions. Then maybe some other chip specific stuff is done, or we let the rest of the chip specific stuff happen in C. The cortex-m series cores from arm will do some of this for you: the stack pointer is in the vector table, and there is a reset vector to point at the code to be run after reset, so that other than whatever you have to do to generate the vector table (which you usually use asm for anyway) you can go pure C without asm. 
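The .data copy and .bss zero described above can be sketched in C. This is illustrative only: in a real bootstrap the source and destination addresses come from linker-script symbols (names vary by toolchain), and here plain arrays stand in for the flash image and the ram so the sketch is self-contained:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for the linker-provided regions (a real port would use
   symbols from the linker script instead of these arrays). */
static const unsigned int data_in_flash[3] = { 1, 2, 3 }; /* .data initial values */
static unsigned int data_in_ram[3];                       /* .data at run time    */
static unsigned int bss_in_ram[4] = { 9, 9, 9, 9 };       /* pretend power-up garbage */

/* What the C bootstrap does before the C entry point is called. */
void bootstrap_data_bss(void) {
    for (size_t i = 0; i < 3; i++)   /* copy .data: flash -> ram */
        data_in_ram[i] = data_in_flash[i];
    for (size_t i = 0; i < 4; i++)   /* zero .bss */
        bss_in_ram[i] = 0;
}
```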
Now you don't get your .data copied over nor your .bss zeroed, so you have to do that yourself if you want to try to go without asm on something cortex-m based. The bigger feature is not the reset vector but the interrupt vectors, where the hardware follows the arm-recommended C calling convention and preserves registers for you, and uses the correct return for that vector, so that you don't have to wrap the right asm around each handler (or have toolchain specific directives for your target to have the toolchain wrap it for you). Chip specific stuff may include, for example: microcontrollers are often used in battery based systems, so low power, so some come out of reset with most of the peripherals turned off, and you have to turn each of these sub systems on so you can use them. Uarts, gpios, etc. Often a low-ish clock speed is used, straight from a crystal or internal oscillator, and your system design may show that you need a faster clock, so you initialize that. Your clock may be too fast for the flash or ram, so you may need to change the wait states before upping the clock. You might need to set up the uart, or usb or other interfaces. Then your application can do its thing. A computer desktop, laptop, server, and a microcontroller are no different in how they boot/work, except that they are not mostly on one chip. The bios program is often on a separate flash/rom chip from the cpu. Although recently x86 cpus are pulling more and more of what used to be support chips into the same package (pcie controllers, etc), you still have most of your ram and rom off chip, but it is still a system and it still works exactly the same at a high level. The cpu boot process is well known; the board designers place the flash/rom in the address space where the cpu boots. That program (part of the BIOS on an x86 pc) does all the things mentioned above: it starts up various peripherals, it initializes dram, enumerates the pcie buses, and so on. 
It is often quite configurable by the user based on bios settings, or what we used to call cmos settings because at the time that is the tech that was used. Doesn't matter; there are user settings that you can go and change to tell the bios boot code how to vary what it does. Different folks will use different terminology. A chip boots; that is the first code that runs, sometimes called the bootstrap. A bootloader, with the word loader, often means that if you don't do anything to interfere it is a bootstrap which takes you from generic booting into something larger, your application or operating system, but the loader part implies that you can interrupt the boot process and then maybe load other test programs. If you have ever used uboot for example on an embedded linux system, you can hit a key and stop the normal boot, then you can download a test kernel into ram and boot it instead of the one that is on flash, or you can download your own programs, or you can download the new kernel then have the bootloader write it to flash so that next time you boot it runs the new stuff. But bootloader as a term is often used for any kind of booting even if it doesn't have a loader portion to it. As for the cpu itself, the core processor, it doesn't know ram from flash from peripherals. There is no notion of bootloader, operating system, application; it is just a sequence of instructions that are fed into the cpu to be executed. Those are software terms to distinguish different programming tasks and concepts from one another. Some microcontrollers have a separate bootloader provided by the chip vendor in a separate flash or separate area of flash that you might not be able to modify. 
In this case there is often a pin or set of pins (I call them straps) such that if you tie them high or low before reset is released you are telling the logic and/or that bootloader what to do; for example one strap combination may tell the chip to run that bootloader and wait on the uart for data to be programmed into the flash. Set the straps the other way and your program boots, not the chip vendor's bootloader, allowing for field programming of the chip or recovering from your program crashing. Sometimes it is just pure logic that allows you to program the flash. This is quite common these days, but if you go way back you did need/want your own bootloader for the same reasons (of course go too far back and you pulled the eeprom/prom/rom chip out of the socket and replaced it with another or reprogrammed it in a fixture). And you can still have your own bootloader if you want, even if there are hardware ways to field program (avr/arduino). The reason why most microcontrollers have much more flash than ram is that the primary use case is to run the program directly from flash, and only have enough ram to cover stack and variables. Although in some cases you can run programs from ram, which you have to compile right and store in flash then copy before calling.

EDIT:

flash.s:

.cpu cortex-m0
.thumb
.thumb_func
.global _start
_start:
stacktop: .word 0x20001000
.word reset
.word hang
.word hang
.word hang
.thumb_func
reset:
bl notmain
b hang
.thumb_func
hang: b .

notmain.c

int notmain ( void )
{
unsigned int x=1;
unsigned int y;
y = x + 1;
return(0);
}

flash.ld

MEMORY
{
bob : ORIGIN = 0x00000000, LENGTH = 0x1000
ted : ORIGIN = 0x20000000, LENGTH = 0x1000
}
SECTIONS
{
.text : { *(.text*) } > bob
.rodata : { *(.rodata*) } > bob
.bss : { *(.bss*) } > ted
.data : { *(.data*) } > ted AT > bob
}

So this is an example for a cortex-m0, the cortex-ms all work the same as far as this example goes. The particular chip, for this example, has application flash at address 0x00000000 in the arm address space and ram at 0x20000000. The way a cortex-m boots is that the 32 bit word at address 0x0000 is the value used to initialize the stack pointer. I dont need much stack for this example so 0x20001000 will suffice, obviously there has to be ram below that address (the way the arm pushes is it subtracts first then pushes, so if you set 0x20001000 the first item on the stack is at address 0x20000FFC, you dont have to use 0x20001000 specifically). The 32 bit word at address 0x0004 is the address of the reset handler, basically the first code that runs after a reset. Then there are more interrupt and event handlers that are specific to that cortex m core and chip, possibly as many as 128 or 256. If you dont use them then you dont need to set up the table for them, I threw in a few for demonstration purposes. But you would have to make sure you have the right vector at the address hardcoded in the logic for a particular interrupt/event (for example a uart rx data interrupt, or a gpio pin state change interrupt, as well as the undefined instructions, data aborts and such).

I do not need to deal with .data nor .bss in this example because I know already there is nothing in those segments by looking at the code. If there were I would deal with it, and will in a second. So the stack is setup, check, .data taken care of, check, .bss, check, so the C bootstrap stuff is done, and we can branch to the entry function for C. Because some compilers will add extra junk if they see the function main() and on the way to main, I dont use that exact name, I used notmain() here as my C entry point. So the reset handler calls notmain() then if/when notmain() returns it goes to hang, which is just an infinite loop, possibly poorly named.
I firmly believe in mastering the tools, many folks dont, but what you will find is that each bare metal developer does his/her own thing, because of the near complete freedom, not remotely as constrained as you would be making apps or web pages. I prefer to have my own bootstrap code and linker script. Others rely on the toolchain, or play in the vendors sandbox where most of the work is done by someone else (and if something breaks you are in a world of hurt, and with bare metal things break often and in dramatic ways). So assembling, compiling and linking with gnu tools I get:

00000000 <_start>:
0: 20001000 andcs r1, r0, r0
4: 00000015 andeq r0, r0, r5, lsl r0
8: 0000001b andeq r0, r0, fp, lsl r0
c: 0000001b andeq r0, r0, fp, lsl r0
10: 0000001b andeq r0, r0, fp, lsl r0
00000014 <reset>:
14: f000 f802 bl 1c <notmain>
18: e7ff b.n 1a <hang>
0000001a <hang>:
1a: e7fe b.n 1a <hang>
0000001c <notmain>:
1c: 2000 movs r0, #0
1e: 4770 bx lr

So how does the bootloader know where stuff is? Because the compiler did the work. In the first case the assembler generated the code for flash.s, and by doing so knows where the labels are (labels are just addresses, like function names or variable names, etc), so I didnt have to count bytes and fill in the vector table manually, I used a label name and the assembler did it for me.

Now you ask, if reset is at address 0x14 why did the assembler put 0x15 in the vector table? Well this is a cortex-m and it boots and only runs in thumb mode. With ARM, when you branch to an address, if branching to thumb mode the lsbit needs to be set, if arm mode it needs to be clear. So here you always need that bit set. I know the tools: by putting .thumb_func before a label, the toolchain knows to set the lsbit wherever that label is used, in the vector table, as a branch target or whatever. So it has here 0x14|1 = 0x15. Likewise for hang. The disassembler doesnt show 0x1D for the call to notmain(), but dont worry, the tools have correctly built the instruction.

Now that code in notmain, those local variables are not used, they are dead code. The compiler even comments on that fact by saying y is set but not used. Note the address space, these things all start at address 0x0000 and go from there, so the vector table is properly placed and the .text or program space is also properly placed. How I got flash.s in front of notmain.c's code is by knowing the tools, a common mistake is to not get that right and crash and burn hard. IMO you have to disassemble to make sure things are placed right before you boot the first time, once you have things in the right place you dont necessarily have to check every time, just for new projects or if they hang.

Now something that surprises some folks is that there is no reason whatsoever to expect any two compilers to produce the same output from the same input. Or even the same compiler with different settings.
Using clang, the llvm compiler, I get these two outputs with and without optimization.

llvm/clang optimized:

00000000 <_start>:
0: 20001000 andcs r1, r0, r0
4: 00000015 andeq r0, r0, r5, lsl r0
8: 0000001b andeq r0, r0, fp, lsl r0
c: 0000001b andeq r0, r0, fp, lsl r0
10: 0000001b andeq r0, r0, fp, lsl r0
00000014 <reset>:
14: f000 f802 bl 1c <notmain>
18: e7ff b.n 1a <hang>
0000001a <hang>:
1a: e7fe b.n 1a <hang>
0000001c <notmain>:
1c: 2000 movs r0, #0
1e: 4770 bx lr

not optimized:

00000000 <_start>:
0: 20001000 andcs r1, r0, r0
4: 00000015 andeq r0, r0, r5, lsl r0
8: 0000001b andeq r0, r0, fp, lsl r0
c: 0000001b andeq r0, r0, fp, lsl r0
10: 0000001b andeq r0, r0, fp, lsl r0
00000014 <reset>:
14: f000 f802 bl 1c <notmain>
18: e7ff b.n 1a <hang>
0000001a <hang>:
1a: e7fe b.n 1a <hang>
0000001c <notmain>:
1c: b082 sub sp, #8
1e: 2001 movs r0, #1
20: 9001 str r0, [sp, #4]
22: 2002 movs r0, #2
24: 9000 str r0, [sp, #0]
26: 2000 movs r0, #0
28: b002 add sp, #8
2a: 4770 bx lr

so that is a lie, the compiler did optimize out the addition, but it did allocate two items on the stack for the variables. Since these are local variables they are in ram, but on the stack, not at fixed addresses; we will see with globals that that changes. The compiler realized that it could compute y at compile time and there was no reason to compute it at run time, so it simply placed a 1 in the stack space allocated for x and a 2 in the stack space allocated for y. The compiler "allocates" this space with internal tables: it declares stack plus 0 for variable y and stack plus 4 for variable x. The compiler can do whatever it wants so long as the code it implements conforms to the C standard or the expectations of a C programmer. There is no reason why the compiler has to leave x at stack + 4 for the duration of the function, it could move it around as much as it wants, but remember humans make compilers and humans have to debug compilers, and you have to balance maintenance and debugging with performance. Very often you will see that compiler generated code tends to set up a stack frame once and keep everything relative to the stack pointer throughout the function.

If I add a function dummy in assembler

.thumb_func
.globl dummy
dummy:
bx lr

and then call it:

void dummy ( unsigned int );
int notmain ( void )
{
unsigned int x=1;
unsigned int y;
y = x + 1;
dummy(y);
return(0);
}

the output changes:

00000000 <_start>:
0: 20001000 andcs r1, r0, r0
4: 00000015 andeq r0, r0, r5, lsl r0
8: 0000001b andeq r0, r0, fp, lsl r0
c: 0000001b andeq r0, r0, fp, lsl r0
10: 0000001b andeq r0, r0, fp, lsl r0
00000014 <reset>:
14: f000 f804 bl 20 <notmain>
18: e7ff b.n 1a <hang>
0000001a <hang>:
1a: e7fe b.n 1a <hang>
0000001c <dummy>:
1c: 4770 bx lr
...
00000020 <notmain>:
20: b510 push {r4, lr}
22: 2002 movs r0, #2
24: f7ff fffa bl 1c <dummy>
28: 2000 movs r0, #0
2a: bc10 pop {r4}
2c: bc02 pop {r1}
2e: 4708 bx r1

now that we have nested functions, the notmain function needs to preserve its return address, so that it can clobber the return address for the nested call. This is because the arm uses a register for returns; if it used the stack like say an x86 or some others, well... it would still use the stack, but differently. Now you ask why did it push r4? Well, the calling convention not long ago changed to keep the stack aligned on 64 bit (two word) boundaries instead of 32 bit, one word boundaries. So they need to push something to keep the stack aligned, and the compiler arbitrarily chose r4, doesnt matter why. Popping the return address into r4 and branching to it would be a bug though; as per the calling convention for this target we dont clobber r4 on a function call, we can clobber r0 through r3. r0 is the return value. Looks like it is doing a tail optimization maybe, I dont know why for some reason it didnt use lr to return. But we see that the x and y math is optimized to a hardcoded value of 2 being passed to the dummy function (dummy was specifically coded in a separate file, in this case asm, so that the compiler wouldnt optimize the function call out completely; if I had a dummy function that simply returned in C in notmain.c the optimizer would have removed the x, y, and dummy function call because they are all dead/useless code). Also note that because the flash.s code got larger, notmain is elsewhere, and the toolchain has taken care of patching up all the addresses for us so we dont have to do that manually.

unoptimized clang for reference:

00000020 <notmain>:
20: b580 push {r7, lr}
22: af00 add r7, sp, #0
24: b082 sub sp, #8
26: 2001 movs r0, #1
28: 9001 str r0, [sp, #4]
2a: 2002 movs r0, #2
2c: 9000 str r0, [sp, #0]
2e: f7ff fff5 bl 1c <dummy>
32: 2000 movs r0, #0
34: b002 add sp, #8
36: bd80 pop {r7, pc}

optimized clang:

00000020 <notmain>:
20: b580 push {r7, lr}
22: af00 add r7, sp, #0
24: 2002 movs r0, #2
26: f7ff fff9 bl 1c <dummy>
2a: 2000 movs r0, #0
2c: bd80 pop {r7, pc}

that compiler author chose to use r7 as the dummy variable to align the stack; also it is creating a frame pointer using r7 even though it doesnt have anything in the stack frame, basically that instruction could have been optimized out. But it used the pop to return, not three instructions; that was probably on me, I bet I could get gcc to do that with the right command line options (specifying the processor).

this should mostly answer the rest of your questions

void dummy ( unsigned int );
unsigned int x=1;
unsigned int y;
int notmain ( void )
{
y = x + 1;
dummy(y);
return(0);
}

I have globals now, so they go in either .data or .bss if they dont get optimized out. Before we look at the final output lets look at the intermediate object:

00000000 <notmain>:
0: b510 push {r4, lr}
2: 4b05 ldr r3, [pc, #20] ; (18 <notmain+0x18>)
4: 6818 ldr r0, [r3, #0]
6: 4b05 ldr r3, [pc, #20] ; (1c <notmain+0x1c>)
8: 3001 adds r0, #1
a: 6018 str r0, [r3, #0]
c: f7ff fffe bl 0 <dummy>
10: 2000 movs r0, #0
12: bc10 pop {r4}
14: bc02 pop {r1}
16: 4708 bx r1
...
Disassembly of section .data:
00000000 <x>:
0: 00000001 andeq r0, r0, r1

now there is info missing from this but it gives an idea of what is going on. The linker is the one that takes objects and links them together with information provided to it (in this case flash.ld) that tells it where .text and .data and such go. The compiler does not know such things, it can only focus on the code it is presented; for any external it has to leave a hole for the linker to fill in the connection, and for any data it has to leave a way to link those things together. So the addresses for everything are zero based here simply because the compiler and this disassembler dont know, there is other info not shown here that the linker uses to place things. The code here is position independent enough so the linker can do its job. We then see a disassembly of the linked output:

00000020 <notmain>:
20: b510 push {r4, lr}
22: 4b05 ldr r3, [pc, #20] ; (38 <notmain+0x18>)
24: 6818 ldr r0, [r3, #0]
26: 4b05 ldr r3, [pc, #20] ; (3c <notmain+0x1c>)
28: 3001 adds r0, #1
2a: 6018 str r0, [r3, #0]
2c: f7ff fff6 bl 1c <dummy>
30: 2000 movs r0, #0
32: bc10 pop {r4}
34: bc02 pop {r1}
36: 4708 bx r1
38: 20000004 andcs r0, r0, r4
3c: 20000000 andcs r0, r0, r0
Disassembly of section .bss:
20000000 <y>:
20000000: 00000000 andeq r0, r0, r0
Disassembly of section .data:
20000004 <x>:
20000004: 00000001 andeq r0, r0, r1

the compiler has basically asked for two 32 bit variables in ram. One is in .bss because I didnt initialize it, so it is assumed to init as zero. The other is in .data because I did initialize it on declaration. Now because these are global variables it is assumed that other functions can modify them. The compiler makes no assumptions as to when notmain can be called, so it cannot optimize the y = x + 1 math with what it can see, it has to do that at run time. It has to read x from ram, add one, and save the result back to y in ram. Now clearly this code wont work. Why? Because my bootstrap as shown here does not prepare the ram before calling notmain, so whatever garbage was in 0x20000000 and 0x20000004 when the chip woke up is what will be used for y and x. Not going to show that here; you can read my even longer winded rambling on .data and .bss and why I dont ever need them in my bare metal code, but if you feel you have to, and want to master the tools rather than hoping someone else did it right... https://github.com/dwelch67/raspberrypi/tree/master/bssdata

linker scripts and bootstraps are somewhat compiler specific, so everything you learn about one version of one compiler could get tossed on the next version or with some other compiler, yet another reason why I dont put a ton of effort into .data and .bss preparation just to be this lazy:

unsigned int x=1;

I would much rather do this

unsigned int x;
...
x = 1;

and let the compiler put it in .text for me. Sometimes it saves flash that way, sometimes it burns more. It is most definitely much easier to program and to port from one toolchain version or one compiler to another. Much more reliable, less error prone. Yep, it does not conform to the C standard.

now what if we make these static globals?

void dummy ( unsigned int );
static unsigned int x=1;
static unsigned int y;
int notmain ( void )
{
y = x + 1;
dummy(y);
return(0);
}

well:

00000020 <notmain>:
20: b510 push {r4, lr}
22: 2002 movs r0, #2
24: f7ff fffa bl 1c <dummy>
28: 2000 movs r0, #0
2a: bc10 pop {r4}
2c: bc02 pop {r1}
2e: 4708 bx r1

obviously those variables cannot be modified by other code, so the compiler can now at compile time optimize out the dead code, like it did before.

unoptimized:

00000020 <notmain>:
20: b580 push {r7, lr}
22: af00 add r7, sp, #0
24: 4804 ldr r0, [pc, #16] ; (38 <notmain+0x18>)
26: 6800 ldr r0, [r0, #0]
28: 1c40 adds r0, r0, #1
2a: 4904 ldr r1, [pc, #16] ; (3c <notmain+0x1c>)
2c: 6008 str r0, [r1, #0]
2e: f7ff fff5 bl 1c <dummy>
32: 2000 movs r0, #0
34: bd80 pop {r7, pc}
36: 46c0 nop ; (mov r8, r8)
38: 20000004 andcs r0, r0, r4
3c: 20000000 andcs r0, r0, r0

this compiler, which used the stack for locals, now uses ram for globals, and this code as written is broken because I didnt handle .data nor .bss properly. And one last thing that we cant see in the disassembly:

:1000000000100020150000001B0000001B00000075
:100010001B00000000F004F8FFE7FEE77047000057
:1000200080B500AF04480068401C04490860FFF731
:10003000F5FF002080BDC046040000200000002025
:08004000E0FFFF7F010000005A
:0400480078563412A0
:00000001FF

I changed x to be pre-init with 0x12345678. My linker script (this is for gnu ld) has this "> ted AT > bob" thing; that tells the linker I want the final place to be in the ted address space, but store it in the binary in the bob address space and someone will move it for you. And we can see that happened. This is intel hex format, and we can see the 0x12345678 (the record :0400480078563412A0, stored little-endian) is in the flash address space of the binary. readelf also shows this:

Program Headers:
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
EXIDX 0x010040 0x00000040 0x00000040 0x00008 0x00008 R 0x4
LOAD 0x010000 0x00000000 0x00000000 0x00048 0x00048 R E 0x10000
LOAD 0x020004 0x20000004 0x00000048 0x00004 0x00004 RW 0x10000
LOAD 0x030000 0x20000000 0x20000000 0x00000 0x00004 RW 0x10000
GNU_STACK 0x000000 0x00000000 0x00000000 0x00000 0x00000 RWE 0x10

the LOAD line where the virtual address is 0x20000004 and the physical is 0x48 is the .data payload, stored in the binary at the flash address but linked to run from ram. | {
"source": [
"https://electronics.stackexchange.com/questions/224156",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/16307/"
]
} |
224,201 | The two pins are next to each other and they must be connected together. How should I do that? Just connect them together with a simple trace? Or pour them together? | A large pour would allow more current (just like a wider trace), and would result in a larger solder blob. You probably need neither. A simple trace between the middles of the pads might look like an accidental solder bridge if you don't know the circuit. To avoid such a misunderstanding, it might be a better idea to route the trace outside, if you can afford the space: | {
"source": [
"https://electronics.stackexchange.com/questions/224201",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/104167/"
]
} |
224,256 | I just watched a video on Facebook by somebody who built a project to run an LED using a 9 volt DC motor scrounged from a battery operated toy. They simply wired the LED to the motor and then used a pulley system to spin the motor. Here's a link to the video: Creating a generator from a DC motor . Doesn't a typical DC motor like those used in toys act like an AC generator if you spin it? When you move a magnet past a coil of wire you get an AC pulse out of the coil. I would be surprised if a cheap DC toy motor contained a rectifier diode since it's designed to be a DC motor , not a DC generator. So my expectation is that a typical DC motor would act like an AC generator. Further, a 9 volt DC motor spun at a fairly high speed would probably emit around 9 volts AC with a fair amount of current behind it, so I would think you'd risk exceeding the reverse breakdown voltage on a small LED and burning it out. I think the project in the video in question would need a rectifier diode (ideally a full-wave rectifier) and a current limiting resistor or it would risk blowing out the LED. | I would be surprised if a cheap DC toy motor contained a rectifier
diode since it's designed to be a DC motor, not a DC generator

A cheap DC motor of the type that has a permanent magnet stator uses brushes and a rotor commutator to continually reverse the current into the rotor coil, thus the effect is like feeding AC into the coil: - If you didn't do this the rotor would spin maybe up to half a full turn and stop. Then it would take too much current and maybe burn out. When driven as a generator the commutator does indeed work as a rectifier to produce a DC output: - Maybe you are thinking that a cheap motor uses slip rings. This type of motor requires AC and will produce AC: - | {
"source": [
"https://electronics.stackexchange.com/questions/224256",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/46155/"
]
} |
224,355 | So I'm switching from PICs to ARM and I bought an STM32F4 discovery board.
So far I understand that to program it you can either access all registers directly in memory (obvious way) and also there are 3 main libraries you can use to make your life easier.
Now my question is, which one of those 3 (CMSIS, HAL, Std Peripherals Lib) is the most LOW level one? ie. the one with the least overhead. My goal is to learn the controller's inner workings and not make my life easier (only a little), so I would like to know which of these is closer to the core without resorting to use assembly. | Definitely the CMSIS. It is not exactly a library, it mostly contains definitions for the various registers. It is exactly what one needs to access the microcontroller's registers easily, so as to implement his/her own HAL. It has no overhead, since you just access the registers. Keep in mind that CMSIS, unlike the other two, is defined by ARM and not ST. This means that the various CMSIS libraries out there for the various microcontrollers are quite similar, which greatly aids in portability. Furthermore, CMSIS is the simpler one so it is (IMO) the most versatile, and most reliable, with possibly fewer (or no) bugs. Some hal libraries for the various mcu's that I've used are quite infamous for their bugs. On the other hand, CMSIS needs quite a bit more work from you. It is however my personal choice, since I prefer to invest my time creating quality libraries that suit my needs, and understanding how the chip works, than just spending time to learn another, new library. | {
"source": [
"https://electronics.stackexchange.com/questions/224355",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/98720/"
]
} |
224,618 | Which is the most efficient way/minimal code required to startup a STM32F4? The startup files that come from ST seem to have a lot of unnecessary code. | You may not want to use the vendor-provided start-up code. There are a few reasons people do this: Create more efficient, or less bloated code.
Have a special requirement that the vendor code does not meet.
You want to know how stuff works.
You want some kind of universal code, to use in many different MCUs.
You want total control over the process.
etc.

The following applies to C programs only (no C++, exceptions etc.), and to Cortex M microcontrollers (regardless of make/model). Also I assume that you use GCC, although there may be little or no difference with other compilers. Finally, I use newlib.

Linker Script

The first thing to do is to create a linker script. You have to tell your compiler how to arrange things in memory. I won't get into details about the linker script, as it is a topic on its own. /*
* Linker script.
*/
/*
* Set the output format. Currently set for Cortex M architectures,
* may need to be modified if the library has to support other MCUs,
* or completelly removed.
*/
OUTPUT_FORMAT ("elf32-littlearm", "elf32-bigarm", "elf32-littlearm")
/*
* Just refering a function included in the vector table, and that
* it is defined in the same file with it, so the vector table does
* not get optimized out.
*/
EXTERN(Reset_Handler)
/*
* ST32F103x8 memory setup.
*/
MEMORY
{
FLASH (rx) : ORIGIN = 0x00000000, LENGTH = 64k
RAM (xrw) : ORIGIN = 0x20000000, LENGTH = 20k
}
/*
* Necessary group so the newlib stubs provided in the library,
* will correctly be linked with the appropriate newlib functions,
* and not optimized out, giving errors for undefined symbols.
* This way the libraries can be fed to the linker in any order.
*/
GROUP(
libgcc.a
libg.a
libc.a
libm.a
libnosys.a
)
/*
* Stack start pointer. Here set to the end of the stack
* memory, as in most architectures (including all the
* new ARM ones), the stack starts from the maximum address
* and grows towards the bottom.
*/
__stack = ORIGIN(RAM) + LENGTH(RAM);
/*
* Programm entry function. Used by the debugger only.
*/
ENTRY(_start)
/*
* Memory Allocation Sections
*/
SECTIONS
{
/*
* For normal programs should evaluate to 0, for placing the vector
* table at the correct position.
*/
. = ORIGIN(FLASH);
/*
* First link the vector table.
*/
.vectors : ALIGN(4)
{
FILL(0xFF)
__vectors_start__ = ABSOLUTE(.);
KEEP(*(.vectors))
*(.after_vectors .after_vectors.*)
} > FLASH
/*
* Start of text.
*/
_text = .;
/*
* Text section
*/
.text : ALIGN(4)
{
*(.text)
*(.text.*)
*(.glue_7t)
*(.glue_7)
*(.gcc*)
} > FLASH
/*
* Arm section unwinding.
* If removed may cause random crashes.
*/
.ARM.extab :
{
*(.ARM.extab* .gnu.linkonce.armextab.*)
} > FLASH
/*
* Arm stack unwinding.
* If removed may cause random crashes.
*/
.ARM.exidx :
{
__exidx_start = .;
*(.ARM.exidx* .gnu.linkonce.armexidx.*)
__exidx_end = .;
} > FLASH
/*
* Section used by C++ to access eh_frame.
* Generaly not used, but it doesn't harm to be there.
*/
.eh_frame_hdr :
{
*(.eh_frame_hdr)
} > FLASH
/*
* Stack unwinding code.
* Generaly not used, but it doesn't harm to be there.
*/
.eh_frame : ONLY_IF_RO
{
*(.eh_frame)
} > FLASH
/*
* Read-only data. Consts should also be here.
*/
.rodata : ALIGN(4)
{
. = ALIGN(4);
__rodata_start__ = .;
*(.rodata)
*(.rodata.*)
. = ALIGN(4);
__rodata_end__ = .;
} > FLASH
/*
* End of text.
*/
_etext = .;
/*
* Data section.
*/
.data : ALIGN(4)
{
FILL(0xFF)
. = ALIGN(4);
PROVIDE(__textdata__ = LOADADDR(.data));
PROVIDE(__data_start__ = .);
*(.data)
*(.data.*)
*(.ramtext)
. = ALIGN(4);
PROVIDE(__data_end__ = .);
} > RAM AT > FLASH
/*
* BSS section.
*/
.bss (NOLOAD) : ALIGN(4)
{
. = ALIGN(4);
PROVIDE(_bss_start = .);
__bss_start__ = .;
*(.bss)
*(.bss.*)
*(COMMON)
. = ALIGN(4);
PROVIDE(_bss_end = .);
__bss_end__ = .;
PROVIDE(end = .);
} > RAM
/*
* Non-initialized variables section.
* A variable should be explicitly placed
* here, aiming in speeding-up boot time.
*/
.noinit (NOLOAD) : ALIGN(4)
{
__noinit_start__ = .;
*(.noinit .noinit.*)
. = ALIGN(4) ;
__noinit_end__ = .;
} > RAM
/*
* Heap section.
*/
.heap (NOLOAD) :
{
. = ALIGN(4);
__heap_start__ = .;
__heap_base__ = .;
. = ORIGIN(RAM) + LENGTH(RAM);
__heap_end__ = .;
} > RAM
}

You may directly use the provided linker script. Some things to note:

This is a simplified version of the linker script I use. During stripping-down I may have introduced bugs to the code, please double check it.
Since I use it for other MCUs than you do, you have to change the MEMORY layout to fit your own.
You may need to change the libraries linked below to link with your own. Here it links against newlib.

Vector Table

You have to include in your code a vector table. This is simply a look-up table of function pointers that the hardware will jump to automatically in case of an interrupt. This is fairly easy to do in C. Take a look at the following file. This is for the STM32F103C8 MCU, but it is very easy to change to your needs.

#include "stm32f10x.h"
#include "debug.h"
//Start-up code.
extern void __attribute__((noreturn, weak)) _start (void);
// Default interrupt handler
void __attribute__ ((section(".after_vectors"), noreturn)) __Default_Handler(void);
// Reset handler
void __attribute__ ((section(".after_vectors"), noreturn)) Reset_Handler (void);
/** Non-maskable interrupt (RCC clock security system) */
void NMI_Handler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** All class of fault */
void HardFault_Handler(void) __attribute__ ((interrupt, weak));
/** Memory management */
void MemManage_Handler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** Pre-fetch fault, memory access fault */
void BusFault_Handler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** Undefined instruction or illegal state */
void UsageFault_Handler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** System service call via SWI instruction */
void SVC_Handler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** Debug monitor */
void DebugMon_Handler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** Pendable request for system service */
void PendSV_Handler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** System tick timer */
void SysTick_Handler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** Window watchdog interrupt */
void WWDG_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** PVD through EXTI line detection interrupt */
void PVD_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** Tamper interrupt */
void TAMPER_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** RTC global interrupt */
void RTC_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** Flash global interrupt */
void FLASH_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** RCC global interrupt */
void RCC_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** EXTI Line0 interrupt */
void EXTI0_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** EXTI Line1 interrupt */
void EXTI1_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** EXTI Line2 interrupt */
void EXTI2_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** EXTI Line3 interrupt */
void EXTI3_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** EXTI Line4 interrupt */
void EXTI4_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** DMA1 Channel1 global interrupt */
void DMA1_Channel1_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** DMA1 Channel2 global interrupt */
void DMA1_Channel2_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** DMA1 Channel3 global interrupt */
void DMA1_Channel3_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** DMA1 Channel4 global interrupt */
void DMA1_Channel4_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** DMA1 Channel5 global interrupt */
void DMA1_Channel5_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** DMA1 Channel6 global interrupt */
void DMA1_Channel6_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** DMA1 Channel7 global interrupt */
void DMA1_Channel7_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** ADC1 and ADC2 global interrupt */
void ADC1_2_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** USB high priority or CAN TX interrupts */
void USB_HP_CAN_TX_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** USB low priority or CAN RX0 interrupts */
void USB_LP_CAN_RX0_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** CAN RX1 interrupt */
void CAN_RX1_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** CAN SCE interrupt */
void CAN_SCE_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** EXTI Line[9:5] interrupts */
void EXTI9_5_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM1 break interrupt */
void TIM1_BRK_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM1 update interrupt */
void TIM1_UP_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM1 trigger and commutation interrupts */
void TIM1_TRG_COM_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM1 capture compare interrupt */
void TIM1_CC_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM2 global interrupt */
void TIM2_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM3 global interrupt */
void TIM3_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM4 global interrupt */
void TIM4_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** I2C1 event interrupt */
void I2C1_EV_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** I2C1 error interrupt */
void I2C1_ER_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** I2C2 event interrupt */
void I2C2_EV_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** I2C2 error interrupt */
void I2C2_ER_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** SPI1 global interrupt */
void SPI1_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** SPI2 global interrupt */
void SPI2_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** USART1 global interrupt */
void USART1_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** USART2 global interrupt */
void USART2_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** USART3 global interrupt */
void USART3_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** EXTI Line[15:10] interrupts */
void EXTI15_10_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** RTC alarm through EXTI line interrupt */
void RTCAlarm_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** USB wakeup from suspend through EXTI line interrupt */
void USBWakeup_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM8 break interrupt */
void TIM8_BRK_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM8 update interrupt */
void TIM8_UP_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM8 trigger and commutation interrupts */
void TIM8_TRG_COM_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM8 capture compare interrupt */
void TIM8_CC_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** ADC3 global interrupt */
void ADC3_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** FSMC global interrupt */
void FSMC_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** SDIO global interrupt */
void SDIO_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM5 global interrupt */
void TIM5_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** SPI3 global interrupt */
void SPI3_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** UART4 global interrupt */
void UART4_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** UART5 global interrupt */
void UART5_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM6 global interrupt */
void TIM6_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM7 global interrupt */
void TIM7_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** DMA2 Channel1 global interrupt */
void DMA2_Channel1_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** DMA2 Channel2 global interrupt */
void DMA2_Channel2_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** DMA2 Channel3 global interrupt */
void DMA2_Channel3_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** DMA2 Channel4 and DMA2 Channel5 global interrupts */
void DMA2_Channel4_5_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
// Stack start variable, needed in the vector table.
extern unsigned int __stack;
// Typedef for the vector table entries.
typedef void (* const pHandler)(void);
/** STM32F103 Vector Table */
__attribute__ ((section(".vectors"), used)) pHandler vectors[] =
{
(pHandler) &__stack, // The initial stack pointer
Reset_Handler, // The reset handler
NMI_Handler, // The NMI handler
HardFault_Handler, // The hard fault handler
#if defined(__ARM_ARCH_7M__) || defined(__ARM_ARCH_7EM__)
MemManage_Handler, // The MPU fault handler
BusFault_Handler, // The bus fault handler
UsageFault_Handler, // The usage fault handler
#else
0, 0, 0, // Reserved
#endif
0, // Reserved
0, // Reserved
0, // Reserved
0, // Reserved
SVC_Handler, // SVCall handler
#if defined(__ARM_ARCH_7M__) || defined(__ARM_ARCH_7EM__)
DebugMon_Handler, // Debug monitor handler
#else
0, // Reserved
#endif
0, // Reserved
PendSV_Handler, // The PendSV handler
SysTick_Handler, // The SysTick handler
// ----------------------------------------------------------------------
WWDG_IRQHandler, // Window watchdog interrupt
PVD_IRQHandler, // PVD through EXTI line detection interrupt
TAMPER_IRQHandler, // Tamper interrupt
RTC_IRQHandler, // RTC global interrupt
FLASH_IRQHandler, // Flash global interrupt
RCC_IRQHandler, // RCC global interrupt
EXTI0_IRQHandler, // EXTI Line0 interrupt
EXTI1_IRQHandler, // EXTI Line1 interrupt
EXTI2_IRQHandler, // EXTI Line2 interrupt
EXTI3_IRQHandler, // EXTI Line3 interrupt
EXTI4_IRQHandler, // EXTI Line4 interrupt
DMA1_Channel1_IRQHandler, // DMA1 Channel1 global interrupt
DMA1_Channel2_IRQHandler, // DMA1 Channel2 global interrupt
DMA1_Channel3_IRQHandler, // DMA1 Channel3 global interrupt
DMA1_Channel4_IRQHandler, // DMA1 Channel4 global interrupt
DMA1_Channel5_IRQHandler, // DMA1 Channel5 global interrupt
DMA1_Channel6_IRQHandler, // DMA1 Channel6 global interrupt
DMA1_Channel7_IRQHandler, // DMA1 Channel7 global interrupt
ADC1_2_IRQHandler, // ADC1 and ADC2 global interrupt
USB_HP_CAN_TX_IRQHandler, // USB high priority or CAN TX interrupts
USB_LP_CAN_RX0_IRQHandler, // USB low priority or CAN RX0 interrupts
CAN_RX1_IRQHandler, // CAN RX1 interrupt
CAN_SCE_IRQHandler, // CAN SCE interrupt
EXTI9_5_IRQHandler, // EXTI Line[9:5] interrupts
TIM1_BRK_IRQHandler, // TIM1 break interrupt
TIM1_UP_IRQHandler, // TIM1 update interrupt
TIM1_TRG_COM_IRQHandler, // TIM1 trigger and commutation interrupts
TIM1_CC_IRQHandler, // TIM1 capture compare interrupt
TIM2_IRQHandler, // TIM2 global interrupt
TIM3_IRQHandler, // TIM3 global interrupt
TIM4_IRQHandler, // TIM4 global interrupt
I2C1_EV_IRQHandler, // I2C1 event interrupt
I2C1_ER_IRQHandler, // I2C1 error interrupt
I2C2_EV_IRQHandler, // I2C2 event interrupt
I2C2_ER_IRQHandler, // I2C2 error interrupt
SPI1_IRQHandler, // SPI1 global interrupt
SPI2_IRQHandler, // SPI2 global interrupt
USART1_IRQHandler, // USART1 global interrupt
USART2_IRQHandler, // USART2 global interrupt
USART3_IRQHandler, // USART3 global interrupt
EXTI15_10_IRQHandler, // EXTI Line[15:10] interrupts
RTCAlarm_IRQHandler, // RTC alarm through EXTI line interrupt
USBWakeup_IRQHandler, // USB wakeup from suspend through EXTI line interrupt
TIM8_BRK_IRQHandler, // TIM8 break interrupt
TIM8_UP_IRQHandler, // TIM8 update interrupt
TIM8_TRG_COM_IRQHandler, // TIM8 trigger and commutation interrupts
TIM8_CC_IRQHandler, // TIM8 capture compare interrupt
ADC3_IRQHandler, // ADC3 global interrupt
FSMC_IRQHandler, // FSMC global interrupt
SDIO_IRQHandler, // SDIO global interrupt
TIM5_IRQHandler, // TIM5 global interrupt
SPI3_IRQHandler, // SPI3 global interrupt
UART4_IRQHandler, // UART4 global interrupt
UART5_IRQHandler, // UART5 global interrupt
TIM6_IRQHandler, // TIM6 global interrupt
TIM7_IRQHandler, // TIM7 global interrupt
DMA2_Channel1_IRQHandler, // DMA2 Channel1 global interrupt
DMA2_Channel2_IRQHandler, // DMA2 Channel2 global interrupt
DMA2_Channel3_IRQHandler, // DMA2 Channel3 global interrupt
DMA2_Channel4_5_IRQHandler // DMA2 Channel4 and DMA2 Channel5 global interrupts
};
/** Default exception/interrupt handler */
void __attribute__ ((section(".after_vectors"), noreturn)) __Default_Handler(void)
{
#ifdef DEBUG
while (1);
#else
NVIC_SystemReset();
while(1);
#endif
}
/** Reset handler */
void __attribute__ ((section(".after_vectors"), noreturn)) Reset_Handler(void)
{
_start();
while(1);
}

What is happening here.
- First I declare my _start function so it can be used below.
- I declare a default handler for all the interrupts, and the reset handler
- I declare all the interrupt handlers needed for my MCU. Note that these functions are just aliases to the default handler, i.e. when any of them is called, the default handler will be called instead. They are also declared weak, so you can override them in your code. If you need any of the handlers, you simply redeclare it in your code, and your definition will be linked in. If you don't need any of them, the default one is used and you don't have to do anything. The default handler should be structured so that, if your application needs a handler you didn't implement, it helps you debug your code, or recovers the system if it is in the wild.
- I get the __stack symbol declared in the linker script. It is needed in the vector table.
- I define the table itself. Note the first entry is a pointer to the start of the stack, and the others are pointers to the handlers.
- Finally I provide a simple implementation for the default handler and the reset handler. Note that the reset handler is the one that is called after reset, and it is what calls the startup code. Keep in mind that the __attribute__ ((section())) on the vector table is absolutely needed, so that the linker places the table at the correct position (normally address 0x00000000).

What modifications are needed on the above file.
- Include the CMSIS file of your MCU
- If you modify the linker script, change the section names
- Change the vector table entries to match your MCU
- Change the handler prototypes to match your MCU

System Calls

Since I use newlib, it requires you to provide the implementations of some functions. You may implement printf, scanf etc., but they are not needed. Personally I provide only the following:

_sbrk, which is needed by malloc. (No modifications needed)

#include <sys/types.h>
#include <errno.h>
caddr_t __attribute__((used)) _sbrk(int incr)
{
extern char __heap_start__; // Defined by the linker.
extern char __heap_end__; // Defined by the linker.
static char* current_heap_end;
char* current_block_address;
if (current_heap_end == 0)
{
current_heap_end = &__heap_start__;
}
current_block_address = current_heap_end;
// Need to align heap to word boundary, else will get
// hard faults on Cortex-M0. So we assume that heap starts on
// word boundary, hence make sure we always add a multiple of
// 4 to it.
incr = (incr + 3) & (~3); // align value to 4
if (current_heap_end + incr > &__heap_end__)
{
// Heap has overflowed
errno = ENOMEM;
return (caddr_t) -1;
}
current_heap_end += incr;
return (caddr_t) current_block_address;
}

_exit, which is not needed, but I like the idea. (You may only need to modify the CMSIS include.)

#include <sys/types.h>
#include <errno.h>
#include "stm32f10x.h"
void __attribute__((noreturn, used)) _exit(int code)
{
(void) code;
NVIC_SystemReset();
while(1);
}

Start-up Code

Finally the start-up code!

#include <stdint.h>
#include "stm32f10x.h"
#include "gpio.h"
#include "flash.h"
/** Main program entry point. */
extern int main(void);
/** Exit system call. */
extern void _exit(int code);
/** Initializes the data section. */
static void __attribute__((always_inline)) __initialize_data (unsigned int* from, unsigned int* region_begin, unsigned int* region_end);
/** Initializes the BSS section. */
static void __attribute__((always_inline)) __initialize_bss (unsigned int* region_begin, unsigned int* region_end);
/** Start-up code. */
void __attribute__ ((section(".after_vectors"), noreturn, used)) _start(void);
void _start (void)
{
//Before switching on the main oscillator and the PLL,
//and getting to higher and dangerous frequencies,
//configuration of the flash controller is necessary.
//Enable the flash prefetch buffer. Can be achieved when CCLK
//is lower than 24MHz.
Flash_prefetchBuffer(1);
//Set latency to 2 clock cycles. Necessary for setting the clock
//to the maximum 72MHz.
Flash_setLatency(2);
// Initialize hardware right after configuring flash, to switch
//clock to higher frequency and have the rest of the
//initializations run faster.
SystemInit();
// Copy the DATA segment from Flash to RAM (inlined).
__initialize_data(&__textdata__, &__data_start__, &__data_end__);
// Zero fill the BSS section (inlined).
__initialize_bss(&__bss_start__, &__bss_end__);
//Core is running normally, RAM and FLASH are initialized
//properly, now the system must be fully functional.
//Update the SystemCoreClock variable.
SystemCoreClockUpdate();
// Call the main entry point, and save the exit code.
int code = main();
//Main should never return. If it does, let the system exit gracefully.
_exit (code);
// Should never reach this, _exit() should have already
// performed a reset.
while(1);
}
static inline void __initialize_data (unsigned int* from, unsigned int* region_begin, unsigned int* region_end)
{
// Iterate and copy word by word.
// It is assumed that the pointers are word aligned.
unsigned int *p = region_begin;
while (p < region_end)
*p++ = *from++;
}
static inline void __initialize_bss (unsigned int* region_begin, unsigned int* region_end)
{
// Iterate and clear word by word.
// It is assumed that the pointers are word aligned.
unsigned int *p = region_begin;
while (p < region_end)
*p++ = 0;
}

What is happening here.
- First I configure the Flash controller, as my MCU requires this before changing frequency. You may add any very basic code your hardware needs here. Note that the code placed here should not access any globals in RAM, as they are not initialized yet. Also note that the MCU still operates at a low frequency, so call only what is absolutely needed.
- Then I call the CMSIS function SystemInit(). This is somewhat portable, which is why I use it. It mostly handles the core, not the MCU itself; in my specific implementation it only enables the PLL and sets the MCU to its final high frequency. You may substitute it with your own more efficient code, but it is not a big deal.
- The next step, now that the MCU is fast, is to initialize the RAM. Pretty straightforward.
- The MCU is up and running normally now. I just call the CMSIS function SystemCoreClockUpdate(), as my code uses the SystemCoreClock variable; it is not required, just my preference.
- Finally I call the main function. Your application now executes normally. If main returns, a call to _exit() is good practice, to restart your system.

More or less this is it. | {
"source": [
"https://electronics.stackexchange.com/questions/224618",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/98720/"
]
} |
224,623 | I've created a project in CubeMX for my F7 Discovery. I followed the instructions from here . However, nothing appears in my Device Manager on Windows. When I used the precompiled code from the link, I confirmed that the drivers installed on Windows work properly. After checking this: if (hUsbDevice_0 == NULL), the MCU returns from the function. What's wrong? Maybe I should use another function to enable something in the USB module on the MCU? | You may not want to use the vendor-provided start-up code. There are a few reasons people do this:
- Create more efficient, or less bloated code.
- Have a special requirement that the vendor code does not meet.
- You want to know how stuff works.
- You want some kind of universal code, to use in many different MCUs.
- You want total control over the process.
- etc.

The following applies to C programs only (no C++, exceptions, etc.) and to Cortex-M microcontrollers (regardless of make/model). I also assume that you use GCC, although there may be little or no difference with other compilers. Finally, I use newlib.

Linker Script

The first thing to do is to create a linker script. You have to tell your compiler how to arrange things in memory. I won't get into details about the linker script, as it is a topic on its own.

/*
* Linker script.
*/
/*
* Set the output format. Currently set for Cortex M architectures,
* may need to be modified if the library has to support other MCUs,
 * or completely removed.
*/
OUTPUT_FORMAT ("elf32-littlearm", "elf32-bigarm", "elf32-littlearm")
/*
 * Just referring to a function included in the vector table, and that
* it is defined in the same file with it, so the vector table does
* not get optimized out.
*/
EXTERN(Reset_Handler)
/*
* ST32F103x8 memory setup.
*/
MEMORY
{
FLASH (rx) : ORIGIN = 0x00000000, LENGTH = 64k
RAM (xrw) : ORIGIN = 0x20000000, LENGTH = 20k
}
/*
* Necessary group so the newlib stubs provided in the library,
* will correctly be linked with the appropriate newlib functions,
* and not optimized out, giving errors for undefined symbols.
* This way the libraries can be fed to the linker in any order.
*/
GROUP(
libgcc.a
libg.a
libc.a
libm.a
libnosys.a
)
/*
* Stack start pointer. Here set to the end of the stack
* memory, as in most architectures (including all the
* new ARM ones), the stack starts from the maximum address
* and grows towards the bottom.
*/
__stack = ORIGIN(RAM) + LENGTH(RAM);
/*
 * Program entry function. Used by the debugger only.
*/
ENTRY(_start)
/*
* Memory Allocation Sections
*/
SECTIONS
{
/*
* For normal programs should evaluate to 0, for placing the vector
* table at the correct position.
*/
. = ORIGIN(FLASH);
/*
* First link the vector table.
*/
.vectors : ALIGN(4)
{
FILL(0xFF)
__vectors_start__ = ABSOLUTE(.);
KEEP(*(.vectors))
*(.after_vectors .after_vectors.*)
} > FLASH
/*
* Start of text.
*/
_text = .;
/*
* Text section
*/
.text : ALIGN(4)
{
*(.text)
*(.text.*)
*(.glue_7t)
*(.glue_7)
*(.gcc*)
} > FLASH
/*
* Arm section unwinding.
* If removed may cause random crashes.
*/
.ARM.extab :
{
*(.ARM.extab* .gnu.linkonce.armextab.*)
} > FLASH
/*
* Arm stack unwinding.
* If removed may cause random crashes.
*/
.ARM.exidx :
{
__exidx_start = .;
*(.ARM.exidx* .gnu.linkonce.armexidx.*)
__exidx_end = .;
} > FLASH
/*
* Section used by C++ to access eh_frame.
 * Generally not used, but it doesn't harm to be there.
*/
.eh_frame_hdr :
{
*(.eh_frame_hdr)
} > FLASH
/*
* Stack unwinding code.
 * Generally not used, but it doesn't harm to be there.
*/
.eh_frame : ONLY_IF_RO
{
*(.eh_frame)
} > FLASH
/*
* Read-only data. Consts should also be here.
*/
.rodata : ALIGN(4)
{
. = ALIGN(4);
__rodata_start__ = .;
*(.rodata)
*(.rodata.*)
. = ALIGN(4);
__rodata_end__ = .;
} > FLASH
/*
* End of text.
*/
_etext = .;
/*
* Data section.
*/
.data : ALIGN(4)
{
FILL(0xFF)
. = ALIGN(4);
PROVIDE(__textdata__ = LOADADDR(.data));
PROVIDE(__data_start__ = .);
*(.data)
*(.data.*)
*(.ramtext)
. = ALIGN(4);
PROVIDE(__data_end__ = .);
} > RAM AT > FLASH
/*
* BSS section.
*/
.bss (NOLOAD) : ALIGN(4)
{
. = ALIGN(4);
PROVIDE(_bss_start = .);
__bss_start__ = .;
*(.bss)
*(.bss.*)
*(COMMON)
. = ALIGN(4);
PROVIDE(_bss_end = .);
__bss_end__ = .;
PROVIDE(end = .);
} > RAM
/*
* Non-initialized variables section.
* A variable should be explicitly placed
 * here, aiming at speeding up boot time.
*/
.noinit (NOLOAD) : ALIGN(4)
{
__noinit_start__ = .;
*(.noinit .noinit.*)
. = ALIGN(4) ;
__noinit_end__ = .;
} > RAM
/*
* Heap section.
*/
.heap (NOLOAD) :
{
. = ALIGN(4);
__heap_start__ = .;
__heap_base__ = .;
. = ORIGIN(RAM) + LENGTH(RAM); /* The heap may grow up to the end of RAM; the stack grows down towards it. */
__heap_end__ = .;
} > RAM
}

You may directly use the provided linker script. Some things to note:
- This is a simplified version of the linker script I use. During stripping-down I may have introduced bugs to the code, please double check it.
- Since I use it for different MCUs than yours, you have to change the MEMORY layout to fit your own.
- You may need to change the libraries linked below to link with your own. Here it links against newlib.

Vector Table

You have to include a vector table in your code. This is simply a look-up table of function pointers that the hardware will jump to automatically in case of an interrupt. This is fairly easy to do in C. Take a look at the following file. This is for the STM32F103C8 MCU, but it is very easy to change to your needs.

#include "stm32f10x.h"
#include "debug.h"
//Start-up code.
extern void __attribute__((noreturn, weak)) _start (void);
// Default interrupt handler
void __attribute__ ((section(".after_vectors"), noreturn)) __Default_Handler(void);
// Reset handler
void __attribute__ ((section(".after_vectors"), noreturn)) Reset_Handler (void);
/** Non-maskable interrupt (RCC clock security system) */
void NMI_Handler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** All class of fault */
void HardFault_Handler(void) __attribute__ ((interrupt, weak));
/** Memory management */
void MemManage_Handler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** Pre-fetch fault, memory access fault */
void BusFault_Handler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** Undefined instruction or illegal state */
void UsageFault_Handler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** System service call via SWI instruction */
void SVC_Handler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** Debug monitor */
void DebugMon_Handler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** Pendable request for system service */
void PendSV_Handler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** System tick timer */
void SysTick_Handler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** Window watchdog interrupt */
void WWDG_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** PVD through EXTI line detection interrupt */
void PVD_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** Tamper interrupt */
void TAMPER_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** RTC global interrupt */
void RTC_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** Flash global interrupt */
void FLASH_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** RCC global interrupt */
void RCC_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** EXTI Line0 interrupt */
void EXTI0_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** EXTI Line1 interrupt */
void EXTI1_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** EXTI Line2 interrupt */
void EXTI2_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** EXTI Line3 interrupt */
void EXTI3_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** EXTI Line4 interrupt */
void EXTI4_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** DMA1 Channel1 global interrupt */
void DMA1_Channel1_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** DMA1 Channel2 global interrupt */
void DMA1_Channel2_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** DMA1 Channel3 global interrupt */
void DMA1_Channel3_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** DMA1 Channel4 global interrupt */
void DMA1_Channel4_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** DMA1 Channel5 global interrupt */
void DMA1_Channel5_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** DMA1 Channel6 global interrupt */
void DMA1_Channel6_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** DMA1 Channel7 global interrupt */
void DMA1_Channel7_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** ADC1 and ADC2 global interrupt */
void ADC1_2_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** USB high priority or CAN TX interrupts */
void USB_HP_CAN_TX_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** USB low priority or CAN RX0 interrupts */
void USB_LP_CAN_RX0_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** CAN RX1 interrupt */
void CAN_RX1_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** CAN SCE interrupt */
void CAN_SCE_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** EXTI Line[9:5] interrupts */
void EXTI9_5_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM1 break interrupt */
void TIM1_BRK_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM1 update interrupt */
void TIM1_UP_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM1 trigger and commutation interrupts */
void TIM1_TRG_COM_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM1 capture compare interrupt */
void TIM1_CC_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM2 global interrupt */
void TIM2_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM3 global interrupt */
void TIM3_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM4 global interrupt */
void TIM4_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** I2C1 event interrupt */
void I2C1_EV_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** I2C1 error interrupt */
void I2C1_ER_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** I2C2 event interrupt */
void I2C2_EV_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** I2C2 error interrupt */
void I2C2_ER_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** SPI1 global interrupt */
void SPI1_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** SPI2 global interrupt */
void SPI2_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** USART1 global interrupt */
void USART1_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** USART2 global interrupt */
void USART2_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** USART3 global interrupt */
void USART3_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** EXTI Line[15:10] interrupts */
void EXTI15_10_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** RTC alarm through EXTI line interrupt */
void RTCAlarm_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** USB wakeup from suspend through EXTI line interrupt */
void USBWakeup_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM8 break interrupt */
void TIM8_BRK_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM8 update interrupt */
void TIM8_UP_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM8 trigger and commutation interrupts */
void TIM8_TRG_COM_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM8 capture compare interrupt */
void TIM8_CC_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** ADC3 global interrupt */
void ADC3_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** FSMC global interrupt */
void FSMC_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** SDIO global interrupt */
void SDIO_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM5 global interrupt */
void TIM5_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** SPI3 global interrupt */
void SPI3_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** UART4 global interrupt */
void UART4_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** UART5 global interrupt */
void UART5_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM6 global interrupt */
void TIM6_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** TIM7 global interrupt */
void TIM7_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** DMA2 Channel1 global interrupt */
void DMA2_Channel1_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** DMA2 Channel2 global interrupt */
void DMA2_Channel2_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** DMA2 Channel3 global interrupt */
void DMA2_Channel3_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
/** DMA2 Channel4 and DMA2 Channel5 global interrupts */
void DMA2_Channel4_5_IRQHandler(void) __attribute__ ((interrupt, weak, alias("__Default_Handler")));
// Stack start variable, needed in the vector table.
extern unsigned int __stack;
// Typedef for the vector table entries.
typedef void (* const pHandler)(void);
/** STM32F103 Vector Table */
__attribute__ ((section(".vectors"), used)) pHandler vectors[] =
{
(pHandler) &__stack, // The initial stack pointer
Reset_Handler, // The reset handler
NMI_Handler, // The NMI handler
HardFault_Handler, // The hard fault handler
#if defined(__ARM_ARCH_7M__) || defined(__ARM_ARCH_7EM__)
MemManage_Handler, // The MPU fault handler
BusFault_Handler, // The bus fault handler
UsageFault_Handler, // The usage fault handler
#else
0, 0, 0, // Reserved
#endif
0, // Reserved
0, // Reserved
0, // Reserved
0, // Reserved
SVC_Handler, // SVCall handler
#if defined(__ARM_ARCH_7M__) || defined(__ARM_ARCH_7EM__)
DebugMon_Handler, // Debug monitor handler
#else
0, // Reserved
#endif
0, // Reserved
PendSV_Handler, // The PendSV handler
SysTick_Handler, // The SysTick handler
// ----------------------------------------------------------------------
WWDG_IRQHandler, // Window watchdog interrupt
PVD_IRQHandler, // PVD through EXTI line detection interrupt
TAMPER_IRQHandler, // Tamper interrupt
RTC_IRQHandler, // RTC global interrupt
FLASH_IRQHandler, // Flash global interrupt
RCC_IRQHandler, // RCC global interrupt
EXTI0_IRQHandler, // EXTI Line0 interrupt
EXTI1_IRQHandler, // EXTI Line1 interrupt
EXTI2_IRQHandler, // EXTI Line2 interrupt
EXTI3_IRQHandler, // EXTI Line3 interrupt
EXTI4_IRQHandler, // EXTI Line4 interrupt
DMA1_Channel1_IRQHandler, // DMA1 Channel1 global interrupt
DMA1_Channel2_IRQHandler, // DMA1 Channel2 global interrupt
DMA1_Channel3_IRQHandler, // DMA1 Channel3 global interrupt
DMA1_Channel4_IRQHandler, // DMA1 Channel4 global interrupt
DMA1_Channel5_IRQHandler, // DMA1 Channel5 global interrupt
DMA1_Channel6_IRQHandler, // DMA1 Channel6 global interrupt
DMA1_Channel7_IRQHandler, // DMA1 Channel7 global interrupt
ADC1_2_IRQHandler, // ADC1 and ADC2 global interrupt
USB_HP_CAN_TX_IRQHandler, // USB high priority or CAN TX interrupts
USB_LP_CAN_RX0_IRQHandler, // USB low priority or CAN RX0 interrupts
CAN_RX1_IRQHandler, // CAN RX1 interrupt
CAN_SCE_IRQHandler, // CAN SCE interrupt
EXTI9_5_IRQHandler, // EXTI Line[9:5] interrupts
TIM1_BRK_IRQHandler, // TIM1 break interrupt
TIM1_UP_IRQHandler, // TIM1 update interrupt
TIM1_TRG_COM_IRQHandler, // TIM1 trigger and commutation interrupts
TIM1_CC_IRQHandler, // TIM1 capture compare interrupt
TIM2_IRQHandler, // TIM2 global interrupt
TIM3_IRQHandler, // TIM3 global interrupt
TIM4_IRQHandler, // TIM4 global interrupt
I2C1_EV_IRQHandler, // I2C1 event interrupt
I2C1_ER_IRQHandler, // I2C1 error interrupt
I2C2_EV_IRQHandler, // I2C2 event interrupt
I2C2_ER_IRQHandler, // I2C2 error interrupt
SPI1_IRQHandler, // SPI1 global interrupt
SPI2_IRQHandler, // SPI2 global interrupt
USART1_IRQHandler, // USART1 global interrupt
USART2_IRQHandler, // USART2 global interrupt
USART3_IRQHandler, // USART3 global interrupt
EXTI15_10_IRQHandler, // EXTI Line[15:10] interrupts
RTCAlarm_IRQHandler, // RTC alarm through EXTI line interrupt
USBWakeup_IRQHandler, // USB wakeup from suspend through EXTI line interrupt
TIM8_BRK_IRQHandler, // TIM8 break interrupt
TIM8_UP_IRQHandler, // TIM8 update interrupt
TIM8_TRG_COM_IRQHandler, // TIM8 trigger and commutation interrupts
TIM8_CC_IRQHandler, // TIM8 capture compare interrupt
ADC3_IRQHandler, // ADC3 global interrupt
FSMC_IRQHandler, // FSMC global interrupt
SDIO_IRQHandler, // SDIO global interrupt
TIM5_IRQHandler, // TIM5 global interrupt
SPI3_IRQHandler, // SPI3 global interrupt
UART4_IRQHandler, // UART4 global interrupt
UART5_IRQHandler, // UART5 global interrupt
TIM6_IRQHandler, // TIM6 global interrupt
TIM7_IRQHandler, // TIM7 global interrupt
DMA2_Channel1_IRQHandler, // DMA2 Channel1 global interrupt
DMA2_Channel2_IRQHandler, // DMA2 Channel2 global interrupt
DMA2_Channel3_IRQHandler, // DMA2 Channel3 global interrupt
DMA2_Channel4_5_IRQHandler // DMA2 Channel4 and DMA2 Channel5 global interrupts
};
/** Default exception/interrupt handler */
void __attribute__ ((section(".after_vectors"), noreturn)) __Default_Handler(void)
{
#ifdef DEBUG
while (1);
#else
NVIC_SystemReset();
while(1);
#endif
}
/** Reset handler */
void __attribute__ ((section(".after_vectors"), noreturn)) Reset_Handler(void)
{
_start();
while(1);
}
What is happening here.
- First I declare my _start function so it can be used below.
- I declare a default handler for all the interrupts, and the reset handler
- I declare all the interrupt handlers needed for my MCU. Note that these functions are just aliases to the default handler, i.e. when any of them is called, the default handler will be called instead. They are also declared weak, so you can override them in your code: if you need any of the handlers, you redeclare it in your code and your implementation will be linked; if you don't need any of them, the default one is used and you don't have to do anything. The default handler should be structured so that, if your application needs a handler but you don't implement it, it helps you debug your code, or recovers the system if it is in the wild.
- I get the __stack symbol declared in the linker script. It is needed in the vector table.
- I define the table itself. Note the first entry is a pointer to the start of the stack, and the others are pointers to the handlers.
- Finally I provide a simple implementation for the default handler and the reset handler. Note that the reset handler is the one that is called after reset, and that calls the startup code. Keep in mind that the __attribute__((section())) on the vector table is absolutely needed, so that the linker will place the table at the correct position (normally address 0x00000000).
What modifications are needed on the above file:
- Include the CMSIS file of your MCU
- If you modify the linker script, change the section names
- Change the vector table entries to match your MCU
- Change the handler prototypes to match your MCU
System Calls
Since I use newlib, it requires you to provide the implementations of some functions. You may implement printf, scanf etc., but they are not needed. Personally I provide only the following:
_sbrk, which is needed by malloc. (No modifications needed.)
#include <sys/types.h>
#include <errno.h>
caddr_t __attribute__((used)) _sbrk(int incr)
{
extern char __heap_start__; // Defined by the linker.
extern char __heap_end__; // Defined by the linker.
static char* current_heap_end;
char* current_block_address;
if (current_heap_end == 0)
{
current_heap_end = &__heap_start__;
}
current_block_address = current_heap_end;
// Need to align heap to word boundary, else will get
// hard faults on Cortex-M0. So we assume that heap starts on
// word boundary, hence make sure we always add a multiple of
// 4 to it.
incr = (incr + 3) & (~3); // align value to 4
if (current_heap_end + incr > &__heap_end__)
{
// Heap has overflowed
errno = ENOMEM;
return (caddr_t) - 1;
}
current_heap_end += incr;
return (caddr_t) current_block_address;
}
_exit, which is not needed, but I like the idea. (You may only need to modify the CMSIS include.)
#include <sys/types.h>
#include <errno.h>
#include "stm32f10x.h"
void __attribute__((noreturn, used)) _exit(int code)
{
(void) code;
NVIC_SystemReset();
while(1);
}
Start-up Code
Finally the start-up code!
#include <stdint.h>
#include "stm32f10x.h"
#include "gpio.h"
#include "flash.h"
/** Main program entry point. */
extern int main(void);
/** Exit system call. */
extern void _exit(int code);
/** Initializes the data section. */
static void __attribute__((always_inline)) __initialize_data (unsigned int* from, unsigned int* region_begin, unsigned int* region_end);
/** Initializes the BSS section. */
static void __attribute__((always_inline)) __initialize_bss (unsigned int* region_begin, unsigned int* region_end);
/** Start-up code. */
void __attribute__ ((section(".after_vectors"), noreturn, used)) _start(void);
void _start (void)
{
//Before switching on the main oscillator and the PLL,
//and getting to higher and dangerous frequencies,
//configuration of the flash controller is necessary.
//Enable the flash prefetch buffer. Can be achieved when CCLK
//is lower than 24MHz.
Flash_prefetchBuffer(1);
//Set latency to 2 clock cycles. Necessary for setting the clock
//to the maximum 72MHz.
Flash_setLatency(2);
// Initialize hardware right after configuring flash, to switch
//clock to higher frequency and have the rest of the
//initializations run faster.
SystemInit();
// Copy the DATA segment from Flash to RAM (inlined).
__initialize_data(&__textdata__, &__data_start__, &__data_end__);
// Zero fill the BSS section (inlined).
__initialize_bss(&__bss_start__, &__bss_end__);
//Core is running normally, RAM and FLASH are initialized
//properly, now the system must be fully functional.
//Update the SystemCoreClock variable.
SystemCoreClockUpdate();
// Call the main entry point, and save the exit code.
int code = main();
//Main should never return. If it does, let the system exit gracefully.
_exit (code);
// Should never reach this, _exit() should have already
// performed a reset.
while(1);
}
static inline void __initialize_data (unsigned int* from, unsigned int* region_begin, unsigned int* region_end)
{
// Iterate and copy word by word.
// It is assumed that the pointers are word aligned.
unsigned int *p = region_begin;
while (p < region_end)
*p++ = *from++;
}
static inline void __initialize_bss (unsigned int* region_begin, unsigned int* region_end)
{
// Iterate and clear word by word.
// It is assumed that the pointers are word aligned.
unsigned int *p = region_begin;
while (p < region_end)
*p++ = 0;
}
What is happening here.
First I configure the Flash controller, as this is required by my MCU before changing frequency. You may add any very basic hardware-initialization code that you need here. Note that the code placed here should not access any globals in RAM, as they are not initialized yet. Also note that the MCU still operates at a low frequency, so only call what is absolutely needed.
Then I call the CMSIS function SystemInit(). This is somewhat portable, which is why I use it. It mostly handles the core, not the MCU itself; in my specific implementation it only enables the PLL and sets the MCU to its final high frequency. You may substitute it with your own more efficient code, but it is not a big deal.
The next step, now that the MCU is fast, is to initialize the RAM. Pretty straightforward.
The MCU is up and running normally now. I just call the CMSIS function SystemCoreClockUpdate(), as my code uses the SystemCoreClock variable, but it is not needed, just my preference.
Finally I call the main function. Your application now executes normally. If main returns, a call to _exit() is good practice, to restart your system.
More or less this is it.
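Since the RAM-initialization helpers in the startup code are plain word-copy loops, the same logic can be exercised off-target. The sketch below mirrors the two inlined helpers, with ordinary arrays standing in for the linker-provided regions (the names here are made up for illustration, not taken from any linker script):

```c
#include <assert.h>

/* Host-side model of the startup RAM initialization: copy .data from
   "flash" to "RAM" and zero-fill .bss, word by word, exactly as the
   inlined helpers in the startup code do.  Pointers are assumed to be
   word aligned, as in the real startup code. */
static void initialize_data(unsigned int *from, unsigned int *begin, unsigned int *end)
{
    while (begin < end)
        *begin++ = *from++;
}

static void initialize_bss(unsigned int *begin, unsigned int *end)
{
    while (begin < end)
        *begin++ = 0;
}
```

On target, the only difference is that the region boundaries come from linker symbols instead of array names.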
"source": [
"https://electronics.stackexchange.com/questions/224623",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/50371/"
]
} |
224,924 | I'm pretty new to embedded systems, and I keep seeing this term used all the time, but I can't quite understand what it is. A quick search online yields this Wikipedia page: https://en.wikipedia.org/wiki/Joint_Test_Action_Group which describes it as some kind of protocol for debugging. But in other contexts it is used like it's able to program a chip's memory like a programmer. What is it? | It is like USB, SPI, I2C, and other "busses", and it has a number of popular uses, not limited to: One in particular is in testing of silicon before too much is invested in each part; for example, while a chip is still on the wafer you can check most of the part. Granted, dicing the wafer can do damage, so you want to test again, but maybe you do that before packaging, maybe after. You can use it to do boundary scan on boards. You can take a board on a production line (granted the board has to be designed right and some percentage of the chips have to support this) and do low speed connectivity tests: stimulate the pin on one end of a trace and scan other parts to see that they are or are not connected per the design of the board. Since the chips already have these dedicated pins, why not, for processors, use that same interface as a way to talk to an on-chip debugger (OCD): design something into the processor and allow that to be talked to via JTAG. It is a generic-ish way to allow you to isolate things on the chain you want to send a series of bits to or get a series of bits from, in a way that each thing you want to talk to can be designed for various numbers of bits, from a small number to a large number. For a debugger you would naturally have it write to or read from a register-sized thing in your chip - maybe a 16 bit register or 32 bit. But for silicon or board testing your scan chain might be dozens of bits.
Each individual thing you address can vary in size, if you want, from any other thing, making this a very versatile bus with a relatively small number of pins that is attractive for these types of use cases. Perhaps because of the popularity of software debugging, they now have some two-pin solutions to save on pin count for microcontrollers, and perhaps others will adopt that, perhaps not.
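The "series of bits" idea can be sketched as a toy model of a scan chain. This is purely illustrative - real JTAG adds a TAP state machine, TMS sequencing, and instruction registers - but it shows the essence: on each clock, the oldest bit falls out on TDO while a new bit enters on TDI, so shifting n bits through an n-bit chain reads out the old contents while loading new ones.

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of shifting through an n-bit scan chain (n <= 32).
   Each iteration: the LSB of the chain leaves on TDO, the chain shifts
   down, and the next TDI bit enters at the top.  After n clocks the old
   chain contents have been collected and the new contents are loaded. */
static uint32_t scan_shift(uint32_t *chain, int nbits, uint32_t tdi_word)
{
    uint32_t tdo_word = 0;
    for (int i = 0; i < nbits; i++) {
        uint32_t tdo = *chain & 1u;                               /* bit leaving on TDO */
        *chain = (*chain >> 1) | (((tdi_word >> i) & 1u) << (nbits - 1));
        tdo_word |= tdo << i;
    }
    return tdo_word;
}
```

A debugger would use a short chain (a 16- or 32-bit register); a boundary-scan test would use the same operation on a chain dozens of bits long.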
"source": [
"https://electronics.stackexchange.com/questions/224924",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/51059/"
]
} |
226,576 | After you've designed your circuit and have the PCB layout completed, how do people in the EE industry go about designing the enclosure? What kind of considerations need to be made to accommodate the PCB in an enclosure? | First things first. You do not, repeat not, wait until the board is laid out to design the enclosure. Not if you want to be a professional. The circuit designer must work closely with the mechanical designer. For equipment incorporating pcbs, the following need to be considered: power supply location and dimensions, thermal limits (both from the board AND the power supply), cable routing, exterior connector choice and placement, EMI shielding (considering EMI in both directions), external access and mounting, and potentially other effects as well. All of these things can bite you on the butt if you ignore them, and any of them may well impact the board dimensions and layout. Typically this is an iterative, back-and-forth process. The mechanical designer (or designers) will say "This is what you've got to work with." The electrical says, "Nope. Won't work. I need more room for cables/waste heat/board area/etc". Mechanical says, "Well, how about if we do such-and such", and the conversation keeps up until either an accommodation is reached, or it becomes clear that something has to give in the system requirements, at which time the problem gets bounced to the managers. Only once the two sides are in agreement can the board layout begin, and details of board geometry are dealt with via CAD, as the board physical design is integrated with the enclosure. EDIT - It occurs to me that my above discussion may be a bit idealistic. It is certainly possible that, in some organizations, the layout requirements are presented as a fait accompli. For instance, marketing has decreed that the latest version of the circuit (which requires much more functionality) must fit into the existing boxes. 
And, of course, the new circuit must cost less than the old version. In this sort of situation, a good designer may well rise to the occasion - or maybe not. In either case, it's probably a good idea to update your resume. If you fail, you may well get canned. If you succeed, you know that management is going to do it again and again until you do fail. Either way, it's time to look elsewhere. | {
"source": [
"https://electronics.stackexchange.com/questions/226576",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/43263/"
]
} |
226,582 | Not a dup of which barely attends to the technical reasons for or against using 90 degree routes. As a hobbyist who's enrolling in an EE program in the fall, should I just get in the habit of avoiding right angles in PCB layout? Is this always a best practice, or something that only RF or high-speed scenarios require? Beyond mechanical or aesthetic reasons, are there reasons you would want to use 90 degree angles? | EDIT: I forgot about one case where 90 degree corners ARE bad: high voltage PCBs. This is nothing to do with reflection or radiation, but with how any sort of sharp shape will concentrate the high electric field and make a dielectric breakdown and arc-over more likely. This can be exploited for PCB spark gaps, but otherwise, one should avoid right angle corners on a high voltage PCB, 1kV+. And one should use rounded pads for everything, even SMD resistor/capacitor pads. Avoid sharp copper shapes as much as possible. No, there is no reason to prefer 45 degree angles over right angles. I will say this definitively: the other answers claiming that right angles cause more EMI are demonstrably false. This is not some sort of theoretical unknown that can be debated. We can measure EMI radiated from various trace shapes, and we have, and right angles do not radiate more than 45 degree angles. Bring up as many theoretical reasons why right angles should be bad for EMI as you like, but they don't matter. The simple empirical reality of the situation is that they don't, and what they 'should' do is not going to change that. In fact, this is true even at very high frequencies, which I will address further down in this post.
If I am wrong, by all means, show me measurements demonstrating that 90 degree corners are worse. Or better yet, if there is a measurable difference, surely it would be straightforward to build a meter that could determine whether a trace had a right angle or a 45 degree corner in it entirely by performing measurements on the input and output. Or by picking up the EMI. I will eat my words if anyone can build a meter to do that. I am quite sure no one will, because there is not any measurable difference in EMI or reflection at frequencies that even permit 45 (or 90) degree corners in the first place. There are of course other nonsense reasons being given. Etching and right angles was only ever a problem before anyone was even using 45 degree angle traces, when boards were instead hand-taped out using rubylith. Processes improved enough that such concerns haven't been concerns for at least 3 decades. If there were any etch-related problem, you had better tell all those boards with square 0.5 or 0.4mm pitch QFN pads that they can't keep using those parts, since apparently our PCB etch processes would mangle the shape of those pads completely. At least, if one is to believe some of the other answers in this thread. Of course, the etch argument is also obviously nonsense, and you need only look at all the tiny square pads etched all the time with perfectly sharp corners to understand that it is nonsense. What bothers me is that no one is asking the one question that we should be asking: why are 45 degree traces used? The answer is a bit anticlimactic: tradition. At least, when they are used without a valid, rational reason. You can route more traces in the same space using 45 degree angles, simply due to corners taking up more room (square root of two and all that). So using them is perfectly valid in many routing situations.
But there is no reason to preferentially use them over right angles, so you should get in the habit of using whatever seems like the best solution for that very specific trace. If you want to be good at routing boards, a great way to ensure this never happens is to enforce arbitrary design rules upon yourself that give no benefit. It's just choosing to limit your routing strategies for no reason. People may doubt my tradition argument, but I come bearing hard proof. I have a lot of circuit boards spanning times from the 60s to the present day, and it is clear that 45 degree angle traces are little more than an artifact of EDA software coming onto the scene and imposing arbitrary limitations (8 possible routing angles... computers struggled to do even simple vector graphics at the time, this made things easier). First, here is a board for a frequency filter for an old HP synthesizer. This produced a lot of RF frequencies and used bazillion-order filters, 24 of them, all carrying some multiple of 10MHz. This was a piece of HP test gear, mind you; this was the state of the art. It was made in the 70s, back when boards were still hand-taped out. Boards of this era, even RF ones, never used 45 degree angles because their use was an artificial constraint due to software that didn't exist yet. Let's flip it over... But those have lots of stuff rounded too... this was probably due to them being masked with rubylith cut by hand. Let's move forward to 1983, when EDA software was very much in use. Suddenly, all those curves and angles going any which way vanished, and only 8 directions were used. This is entirely because of the tools of the period; there was no design choice going on here. No one chose to only use 8 directions, they only HAD 8 directions to pick from in those early EDA tools. The following is a Western Digital disk controller from 1983. Oh my... it's as if they didn't care one way or the other about right-angled or 45 degree trace corners. (Hint: they didn't.)
They use both with wild abandon! More close-ups... As you can see, it appears that the only real correlation between when one is used is that when needing higher routing density, 45 degree cornering is used much more often (though not always). This is the only concern that really influences cornering choice. Otherwise, use whatever you like. Clearly, this designer didn't particularly like either of them more than the other. He probably used to tape boards out, but has moved on to using EDA tools. He wasn't using 90 degree angles OR 45 degree angles on his traces before, and has no preference when he or she designed this. If you are using FR4, then right angles don't matter. For the simple reason that if you can tolerate the dielectric loss caused by FR4 on your signal, it isn't fast enough for right angles to matter. Even 2.4GHz WiFi has a wavelength of about 5 inches. Of course it is not going to be meaningfully affected by a feature orders of magnitude smaller than its own wavelength, like the shape of a trace corner. Two 45 degree turns or one 90 degree one are going to be virtually identical in overall effect. And shape is not even the important factor in the instances where frequency matters. Impedance is. You must maintain a characteristic impedance, such that the instantaneous impedance the signal sees at any given step along the way is always the same. It is discontinuities that cause reflections and radiation. The easy way of maintaining impedance is simply to use curves, but they must have a radius of curvature at least 3 times the width of the trace. This is to keep the trace width at a nearly constant value, thus maintaining impedance. This is what determines the shape, but any strategy that maintains impedance is valid. One last picture, the inside of an old Tektronix oscilloscope: If you need to corner a trace more compactly, then using a 90 degree corner or two 45 degree angles are both incorrect.
A 90 degree corner causes an impedance discontinuity because the trace width increases by a factor of \$\sqrt{2}\$ at the corner, causing a sudden drop in impedance. This will indeed cause reflections and radiate. If you use a 45 degree angle, you cause not just one but two discontinuities, and while individually they are not as severe (each 45 degree angle widens the trace by a factor of \$\sqrt{4-\sqrt{2}}\$), that approximately 1.08 difference is still a significant impedance change, and causes reflections and radiation. Only, it happens twice, so you'll get multiple phase-shifted reflections and radiation. 45 degree angles are at best no better than right angles at the very issues that supposedly make them 'better'. The simple truth is that there is no actual reason to prefer one, and essentially no difference between them. So how do you correctly corner a trace when cornering strategy actually matters? Any way you want, as long as you maintain your impedance. Which is not possible with plain 90 or 45 degree bends. You can maintain impedance in any way you wish, and while it is difficult to increase impedance elsewhere to balance out the extra width (and loss of impedance) caused by 45 or 90 degree cornering, it is easy to reduce the capacitance - and so restore the impedance - by narrowing the trace at the corner. Let's back up for a second and examine the whole trace width vs. impedance thing. Why does trace width have such a significant effect on the impedance? It's not the tiny change in the already minuscule DC resistance, of course. It's the capacitance! Those traces form one plate of a capacitor with the signal return plane underneath it. So once you've added extra area using a right angle or 45 degree trace, there is nothing you can do; that extra capacitance is there to stay.
However, if you take a 90 degree corner and chop off part of the corner on one side, then based on the dielectric constant of your PCB substrate as well as its thickness between the signal trace and the return plane, you can calculate exactly how much you need to chop off to maintain the overall capacitance. And the result is, somewhat ironically, the juxtaposition of a 45 and a 90 degree bend: Fundamentally, it is a 90 degree corner with a bit of it chopped off (mitered) to maintain capacitance, and thus impedance. There is nothing wrong with sharp angles if you account for impedance. Curves are just easier, so I prefer them because I'm lazy. Sometimes they take up too much space though. Whether you corner 90 degrees or 45 degrees is irrelevant. You can route topologically and not follow any direction if you prefer. This all began with a software quirk and it has morphed into tradition, and even apparently superstition. I promise you that any engineer who claims this somehow matters will never back it up with hard data, and will not be able to if pressed to do so. It is easy to find evidence that it doesn't matter, because it doesn't. Which is why I just took some pictures to prove my point. In the high-end situations where it does matter, any rule of thumb about one corner over the other is completely worthless anyway, as both are just as wrong. If you can use 45 degree traces, you can use 90 degree ones with no measurable impact. Use whichever one you like, or whichever is suited for a specific trace density. Engineers never used to care, and there is no reason you should. Don't let the unsupported answers (upvoted as they unfortunately are, despite being false information) pull you in. Validate the rules of thumb you're told with data. Tradition and faith have no place in good engineering.
"source": [
"https://electronics.stackexchange.com/questions/226582",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/43263/"
]
} |
226,593 | Do seasoned designers tend to do a fair amount of calculations or are large parts of circuits designed intuitively? I'm asking because it seems like design engineers tend to have a sense of what value cap you want to have here, resistor there, for common parts of circuits. If that's the case is it because they're just recycling designs?
To the novice this is mind-blowing. Though, books like The Art of Electronics seem to encourage the approach of making approximate calculations on the fly. | I'm a professional electrical engineer who routinely designs new circuits for volume production, and have been for over 35 years. Yes, I frequently do calculations to determine the exact part specs. There are also many cases where experience and intuition are good enough and the requirements loose enough that I just pick a value. Don't confuse that with a random value, though. For example, for a pulldown resistor on the MISO line of a SPI bus, I'll just spec 100 kΩ and be done with it. 10 kΩ would work fine too, and someone else picking that wouldn't be wrong either. If I'm using a 20 kΩ resistor elsewhere, then I might spec another one on the MISO line to avoid adding another part to the BOM. The point is sometimes you have a lot of leeway, and intuition and experience are good enough. On the other hand, looking at the schematic of my latest design, which I'm in the middle of bringing up first boards of now, I see a case where I spent some time not only specifying the part value but calculating the result of variance on the rest of the system. There were three cases of two resistors used in the feedback to a switching power supply. Here is the problem worded like homework: A power supply chip's feedback input threshold is 800 mV ±2%. You are using three instances of this chip, to make the 12 V, 5 V, and 3.3 V power supplies. You have previously decided to use around 10 kΩ for the bottom resistor of each voltage divider. Determine the full resistor specs in each case, and determine the min/max resulting nominal supply voltage. Stick to readily available resistor values. Use 1% if suitable and spec accordingly. That's a genuine real-world problem that took a few minutes with a calculator. By the way, I determined that 1% resistors were good enough.
That's actually what I expected, but did the calculations anyway to make sure. I also noted the full nominal range for each supply right on the schematic. Not only might this be useful to refer to later, but it also shows that this issue was considered and the calculations done. I or someone else won't have to wonder a year later what the tolerance of the 3.3 V supply is, for example, and re-do the calculations. Here is a snippet from the schematic showing the case described above: I just picked R2, R4, and R6, but did the calculations to determine R1, R3, and R5, and the resulting power supply nominal ranges. Added about the SHx parts (response to comment) The SH parts are what I call "shorts". These are just copper on the board. Their purpose is to allow a single physical net to be broken into two logical nets in the software, which is Eagle in this case. In all three cases above, the SH parts connect the local ground of a switching power supply to the board-wide ground plane. Switching power supplies can have significant currents running across their grounds, and these currents can have high frequency components. Much of this current just circulates locally. By making the local ground a separate net connected to the main ground in only one place, these circulating currents stay in a small local net and do not cross the main ground plane. The small local ground net radiates far less, and the currents don't cause offsets in the main ground. Eventually power has to flow out of a power supply and return via the ground. However, that current can be filtered much more than the high frequency internal currents of a switching power supply. If done right, only the well behaved output current of the switcher makes it out of the immediate vicinity to other parts of the overall circuit. You really want to keep local high frequency currents off the main ground plane. 
Not only does that avoid the ground voltage offsets those currents can cause, but it prevents the main ground from becoming a patch antenna. Fortunately, many of the nasty ground currents are also local. That means they can be kept local by connecting the local ground net to the main ground in only one spot. Good examples of this include the path between the ground side of a bypass cap and the ground pin of the IC it is bypassing. That's exactly what you don't want running across the main ground. Don't just connect the ground side of a bypass cap to the main ground thru a via. Connect it back to the IC ground via its own track or local ground, then connect that to the main ground in one place. | {
"source": [
"https://electronics.stackexchange.com/questions/226593",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/43263/"
]
} |
226,820 | Assumptions:
- Computer architecture: describes how the different modules of a processor interact with each other. A computer architecture is defined using VHDL files.
- Computer organization: describes the physical layout of the processor modules on silicon. A computer organization is defined using a set of photo masks (and the manufacturing process, e.g. chemical, that goes with each step).
- Computer organization, therefore, requires that the fab process be taken into account.
- ARM is not in the fabrication business, therefore it does not sell photo masks.
My question(s): What exactly is ARM selling to a vendor (eg: Freescale)? For a SoC (System On Chip) (eg: i.MX6), which part is ARM and which is Freescale? Who did the integration? | You're using those terms wrong. "Computer organization" is a rarely-used term for the microarchitecture, and "computer architecture" is a superset of that. Integrated circuit IP blocks come in two basic forms: A soft macro is the RTL (VHDL or Verilog) that describes the functional implementation of the IP. This is compiled into a gate-level netlist, which is then turned into a physical layout to produce the mask set for manufacturing. Here's an example from Cadence -- an Ethernet MAC. When you buy it, you get Verilog files, documentation, and a Verilog testbench for verification. A hard macro is a physical layout of the IP suitable for a given process. It's added to the larger chip layout as a single block, which saves some steps in the design process. Here's another Cadence example -- an Ethernet PHY. It's offered in 180nm and 130nm processes at TSMC, UMC, and SMIC, and is delivered to the customer in the form of GDSII layout files. ARM sells both of these. The MCUs I've worked on usually use soft macros of ARM Cortex CPUs. We had some older product with ARM7 hard macros, but I don't know if they were hardened by ARM or us. Today, ARM has hard macro versions of the Cortex-A series listed on their web site.
Most of their products are synthesizable (soft macros), though. It looks like you can download the (soft) Cortex-M0 for free for non-commercial use on the ARM DesignStart site. In an SoC, the ARM part is just the CPU. (The designer can also buy peripheral IP from ARM, but it's not required.) The SoCs I've worked on have a mix of third-party and internal IP. | {
"source": [
"https://electronics.stackexchange.com/questions/226820",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/18887/"
]
} |
227,033 | I have designed and printed a 4-layer PCB that accommodates 91 infrared LEDs in a 7x13 rectangular layout. This will be used as a backlight for a machine vision project. I am having a problem where individual LEDs are burning out or perhaps becoming disconnected from the circuit in some way. I suspect the heat dissipation may be the cause of the problem. Image PCB Layout Each row of 7 LEDs (green LED text) is wired in series. The 12V supply (VCC powerplane) connects to the first LED. The next 6 are wired in series. Finally a current-limiting resistor (green R text) connects the last LED to the ground plane. Specifications: VCC plane: 12V, 2A supply LED: TSHG6200 . 100mA maximum rated
current. Current limiting resistor: 20 ohms Solder: Thermoflow
Sn60/Pb40 Total estimated power dissipation: 12V * 0.1A per row *
13 rows = 15.6W. Size of array: 13 rows of 7 LEDs, approximately 7cm
x 6cm Measurements With a 12V power supply, there is about 1.45V over each LED, and about 2.0V over the current limiting resistor, meaning a current of 100mA. Because this is right at the maximum allowable current, I put a big high-power potentiometer between the power supply and the VCC plane, and used this to regulate the input voltage to be slightly lower (11.5V or so). This gets the current safely below the maximum allowable amount. I am also using a Darlington pair to control the backlight with an Arduino. The backlight is on almost all the time, and occasionally is pulsed off for about 30ms. I don't think this is relevant to the problem but can provide more details if necessary. Problem After about 10-30 minutes of use, one or more of the rows of LEDs will go out. If I measure the voltage across each LED in the broken row, most LEDs are at about 0.8V and one has about 8.0V across it. No current is flowing. Sometimes resoldering the pins or tapping the LED fixes this. Sometimes it has to be replaced. In any case I only get another 10-30 minutes of use before another one goes out. Another observation is that the whole back side of the board is kind of sticky. You can see this in the picture above. I wonder if it is getting too hot and the solder is becoming compromised (perhaps exuding flux??). Question What should I try to improve the reliability? I've already tried running it at a lower voltage to get the current safely below the rated maximum. I wonder if I need to use a different kind of solder? Or some kind of heat sink? The LEDs get hot to the touch but not unbearably so. Edit, after trying suggestions Thanks everyone for the tips! I did something quite simple -- pointed a computer fan to blow air across the array -- and it worked fantastically! I guess this is really obvious to many of you but I was surprised at how enormous the difference was. 
Without fan: 25mA per row -> 39C 33mA per row -> 41C 40mA per row -> 48C 55mA per row -> 52C So we get into the "danger zone" of temperature well before reaching the maximum current per LED. With fan: 35mA per row -> 26C 60mA per row -> 30C 90mA per row -> 34C I ran it at 90mA per row and 34C for over an hour with no problems. Great! | You have already hit on the answer: your LEDs are getting hot. 15 watts may not sound like much, but it's building up and killing your LEDs. I suggest you get a thermistor and attach it to the center of the board, then monitor the temperature as the system operates. Even better, attach it to the body of one of the LEDs. Because you're using this as a backlight, don't use narrow-beam LEDs. Use relatively wide beam units, and space them apart so air can flow through. If you can find a source of, let's say, 35 degree LEDs, install only every other one in a checkerboard pattern, soldering jumper as necessary. You'll only get half the total brightness, but that's barely perceptible, and the improved airflow should be a big help. You may also need to provide a fan with some ducting to keep the air flow through the array adequate. And always include a temperature monitor. While not directly applicable, this YouTube video shows the principles of cooling. In your case, since you've got a forest of vertically standing LEDs, it is important not to let the LEDs touch each other, since this will block the flow of air. | {
"source": [
"https://electronics.stackexchange.com/questions/227033",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/65110/"
]
} |
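As a cross-check of the numbers in the question above (12 V supply, seven LEDs per row, 20 ohm resistor), here is a minimal sketch; the ~1.45 V forward drop is the questioner's measurement, not a datasheet figure:

```python
# Cross-check the LED-array numbers quoted in the question.
VCC = 12.0       # supply voltage (V)
N_SERIES = 7     # LEDs per row
N_ROWS = 13
VF = 1.45        # measured forward drop per LED (V)
R = 20.0         # current-limiting resistor (ohms)

v_resistor = VCC - N_SERIES * VF      # voltage left for the resistor
i_row = v_resistor / R                # row current (A)
p_total = VCC * i_row * N_ROWS        # total board dissipation (W)
print(f"row current: {i_row * 1000:.1f} mA, total: {p_total:.1f} W")
```

This lands at roughly 93 mA per row and about 14 W total, consistent with the ~100 mA / 15.6 W estimate in the question; either way it is enough heat in a 7cm x 6cm area to need the airflow described in the edit.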
227,581 | When I was reading the datasheet for these relays ( http://www.mouser.com/ds/2/357/105A_117SIP_107DIP_171DIP-6475.pdf ), it says to put 1/2" spacing between two adjacent relays. Why is this? As the relays are fully insulated, why would I have a problem by putting them right next to each other? | The magnetic field of one relay coil could affect the one next to it. For example, it could actuate the relay next to it, or affect the vibration resistance, or it could decrease the drop-out voltage. Since adjacent relays with the same polarity drive will have fields that oppose, it can also increase the pull-in voltage, if they are driven the same way. More in this application note from Coto (a major and old-line manufacturer), I've reproduced the relevant page below, but there is plenty of other good information in this document. | {
"source": [
"https://electronics.stackexchange.com/questions/227581",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/96120/"
]
} |
227,637 | So, first year EE student, and I just learned about op-amps. I understand the ideal model, and know how to analyze them, and understand the idea behind them/the circuit that we were shown that is inside them. Except, that's not the real circuit, it has a dependent source. My question is, what is actually inside an op-amp? If we were to replace the dependent source with real sources, what would we see? (I guess this is also more of a question about 'What are dependent sources, really?'). I have searched everywhere, and I always find the same answer 'Dependent sources are useful tools to model a circuit'. But what are they really? | Here is a $35 kit you can make, which ends up being the equivalent of a 741 op-amp built from 13 discrete 2N3904 and 7 discrete 2N3906 transistors. It has eight binding posts representing the eight pins of the device. Here is a link to the datasheet, which includes the schematic for the kit (shown below) and a BOM. Compare that to a "real" 741 out of the TI datasheet : They are virtually the same, even down to the resistor values. There is also an 11 page "Principles of Operation" which goes into quite a bit of detail on how it works. And finally, they have a Wiki . | {
"source": [
"https://electronics.stackexchange.com/questions/227637",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/71060/"
]
} |
227,645 | I'm planning on shifting the output of an AD8226 instrumentation amplifier by 'about' 0.5 volts using a 2.5 v precision reference and a voltage divider. According to the datasheet this is a bad idea and a buffer must be used: I understand that if I don't use a buffer I'll probably end up with a slightly different voltage shift and a slight increase in gain. I'm guessing these can be calibrated out by software without a problem. If yes, then why should I use a buffer? Also, there is a note at the last line about a degradation in CMRR. I'm using the device for reading a 0-10 v (very slowly changing) signal in an industrial environment (single ended). Will the CMRR degradation be significant? P.S. Voltage divider used has two resistors 20k and 4.7k. | {
"source": [
"https://electronics.stackexchange.com/questions/227645",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/24101/"
]
} |
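For concreteness, the Thevenin equivalent of the 20k/4.7k divider in the question can be computed as below; treating the 2.5 V reference as ideal is an assumption:

```python
# Thevenin equivalent of the reference divider driving the REF pin.
V_REF = 2.5      # precision reference (V), assumed ideal
R_TOP = 20e3     # upper divider resistor (ohms)
R_BOT = 4.7e3    # lower divider resistor (ohms)

v_shift = V_REF * R_BOT / (R_TOP + R_BOT)    # open-circuit shift voltage
r_src = R_TOP * R_BOT / (R_TOP + R_BOT)      # source resistance seen by REF
print(f"shift: {v_shift:.3f} V, source resistance: {r_src / 1e3:.2f} kOhm")
```

So the shift is about 0.476 V, but the REF pin sees roughly 3.8 kOhm of source resistance rather than the low impedance the datasheet asks for, which is exactly the condition its buffer warning is about.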
227,705 | With the advent of ICs in past decades, circuits have decreased in size exponentially over time. However, it appears RF components and connections, with coax SMA cable, connectors, and components, like the one below, are still hefty and large: Why have they not shrunk? Why can't coax, as you see on the side of this amplifier, be decreased in dimensions? | Why can't coax, as you see on the side of this amplifier, be
decreased in dimensions? It's all down to characteristic impedance of cable: - If you plug in the numbers, to obtain a centre conductor thickness (d) that is not unfeasibly small, dimension D cannot be too low. For instance if d = 1mm then for a relative permittivity of 2.2, D has to be about 3.4 mm to obtain a 50 ohm characteristic impedance. Then on top of this is the thickness of the screen and the plastic outer covering. These numbers scale down ratiometrically but imagine having a centre conductor of 0.1mm - how reliable will this be and just how much current could it carry? For 75 ohm systems and a 1mm centre conductor, dimension D needs to be 6.5mm (relative permittivity of 2.2). Characteristic impedance is important just in case you weren't aware of this. | {
"source": [
"https://electronics.stackexchange.com/questions/227705",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/103501/"
]
} |
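The figures in the answer above can be reproduced from the standard coax impedance formula Z0 = (138 / sqrt(eps_r)) * log10(D / d); the sketch below just inverts it for D, using the same eps_r = 2.2:

```python
import math

# Invert Z0 = (138 / sqrt(eps_r)) * log10(D / d) to get the shield
# diameter D needed for a target characteristic impedance.
def shield_diameter_mm(z0_ohms, d_mm, eps_r=2.2):
    return d_mm * 10 ** (z0_ohms * math.sqrt(eps_r) / 138.0)

d50 = shield_diameter_mm(50, 1.0)   # about 3.4 mm, as in the answer
d75 = shield_diameter_mm(75, 1.0)   # about 6.4 mm (answer rounds to 6.5)
print(f"50 ohm: D = {d50:.1f} mm, 75 ohm: D = {d75:.1f} mm")
```

The exponential dependence on Z0 is why coax dimensions can't shrink freely: halving d halves D, but the centre conductor quickly becomes too fragile and lossy, as the answer points out.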
227,706 | I'm so sorry posting such a horrible-looking picture, but I really cannot find pictures similar to it online and/or in English! If you find one, please let me know and I'll replace it immediately! The left graph is the i D -u GS profile of an nMOSFET, and the right one is its i D -u DS . As you can see, the parabola on the left is extremely similar to the dashed curve on the right graph (that is, the curve marked "预夹断轨迹", i.e. the "pre-pinch-off locus", which indicates the boundary between linear and saturation modes). I guess there might be some relationship between the two curves. However, I cannot figure it out on my own. I've tried to draw a 3D graph showing the relationship between i D , u GS and u DS , but failed, because I don't know any model of MOSFETs. So is there any relationship between the two curves? | {
"source": [
"https://electronics.stackexchange.com/questions/227706",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/103845/"
]
} |
227,796 | Given the same number of pipeline stages and the same manufacturing node (say, 65 nm) and the same voltage, simple devices should run faster than more complicated ones. Also, merging multiple pipeline stages into one should not slow down by a factor greater than the number of stages. Now take a five-year-old CPU, running 14 pipeline stages at 2.8 GHz. Suppose one merges the stages; that would slow down to below 200 MHz. Now increase voltage and reduce number of bits per word; that would actually speed things up. That's why I don't understand why many currently manufactured microcontrollers, such as AVR, run at abysmal speed (such as 20 MHz at 5 V), even though far more complicated CPUs manufactured years ago were capable of running 150x faster, or 10x faster if you roll all pipeline stages into one, at 1.2 V-ish. According to the most coarse back-of-the-envelope calculations, microcontrollers—even if manufactured using borderline obsolete technology—should run at least 10x faster at one quarter of the voltage they are supplied with. Thus the question: What are the reasons for slow microcontroller clock rates? | There are other factors that contribute to the speed. Memory: Actual performance is often limited by memory latency. Intel CPUs have large caches to make up for this. Microcontrollers usually don't. Flash memory is much slower than DRAM. Power consumption: This is often a big deal in embedded applications. Actual 200 MHz Intel CPUs consumed more than 10 watts (often much more), and needed a big heat-sink and a fan. That takes space and money, and it's not even counting the external logic and memory that went with it. A 20 MHz AVR takes about 0.2 watts, which includes everything you need. This is also related to the process -- faster transistors tend to be leakier. Operating conditions: As Dmitry points out in the comments, many microcontrollers can operate over a wide voltage and temperature range.
That ATMega I mentioned above works from -40C to 85C, and can be stored at anything from -65C to 150C. (Other MCUs work up to 125C or even 155C.) The VCC voltage can be anything from 2.7V to 5.5V (5V +/- 10% for peak performance). This Core i7 datasheet is hard to read since they trim the allowed VCC during manufacturing, but the voltage and temperature tolerances are certainly narrower -- ~3% voltage tolerance and 105C max junction temperature. (5C minimum, but when you're pulling >100 amps, minimum temperatures aren't really a problem.) Gate count: Simpler isn't always faster. If it were, Intel wouldn't need any CPU architects! It's not just pipelining; you also need things like a high-performance FPU. That jacks up the price. A lot of low-end MCUs have integer-only CPUs for that reason. Die area budget: Microcontrollers have to fit a lot of functionality into one die, which often includes all of the memory used for the application. (SRAM and reliable NOR flash are quite large.) PC CPUs talk to off-chip memory and peripherals. Process: Those 5V AVRs are made on an ancient low-cost process. Remember, they were designed from the ground up to be cheap. Intel sells consumer products at high margins using the best technology money can buy. Intel's also selling pure CMOS. MCU processes need to produce on-chip flash memory, which is more difficult. Many of the above factors are related. You can buy 200 MHz microcontrollers today ( here's an example ). Of course, they cost ten times as much as those 20 MHz ATMegas ... The short version is that speed is more complicated than simplicity, and cheap products are optimized for cheapness, not speed. | {
"source": [
"https://electronics.stackexchange.com/questions/227796",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/31398/"
]
} |
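The back-of-the-envelope step in the question above is just this division (it deliberately ignores latch overhead, which in practice makes a fully merged design somewhat faster than the bound):

```python
# The envelope arithmetic from the question: a 14-stage pipeline at
# 2.8 GHz, merged into a single stage, is bounded by f / n_stages.
f_pipelined = 2.8e9   # Hz
n_stages = 14
f_merged = f_pipelined / n_stages
print(f"merged single-cycle clock: {f_merged / 1e6:.0f} MHz")
```

That 200 MHz figure is the question's starting point; the answer's point is that memory, power, operating range, and process cost dominate long before this arithmetic does.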
228,345 | Let's say I have one 2 kΩ resistor with 5% tolerance. If I replace it with two 1 kΩ resistors with 5% tolerance, will resulting tolerance go up, down, or remain unchanged? I'm bad with probabilities, and I'm not sure what exactly tolerance says about resistance and its distribution. I am aware that in the 'worst case' it will be the same; I'm more interested in what will happen on average. Will the chance of getting a more precise value increase if I use a series of resistors (because deviations will cancel each other out)? On 'intuitive level' I think that it will, but I have no idea how to do the math with probabilities and find out if I'm actually right. | The worst case won't get any better. The result of your example is still 2 kΩ ±5%. The probability that the result is closer to the middle gets better with multiple resistors, but only if each resistor is random within its range , which includes that it is independent of the others. This is not the case if they are from the same reel, or possibly even from the same manufacturer within some time window. The manufacturer's selection process may also make the error non-random. For example, if they make resistors with a wide variance, then pick the ones that fall within 1% and sell them as 1% parts, then sell the remaining ones as 5% parts, the 5% parts will have a double-hump distribution with no values being within 1%. Because you can't know the error distribution within the worst case error window, and because even if you did, the worst case stays the same, doing what you are suggesting is not useful to electronic design. If you specify 5% resistors, then the design must work correctly with any resistance within the ±5% range. If not, then you need to specify the resistance requirement more tightly. | {
"source": [
"https://electronics.stackexchange.com/questions/228345",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/57095/"
]
} |
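The answer's caveat, that the distribution only tightens if each resistor's error is independent and random, can be illustrated with a quick Monte Carlo. The uniform error distribution used here is an illustrative assumption, not how real 5% parts are necessarily distributed:

```python
import random

# Monte Carlo: two independent 1 kOhm +/-5% resistors in series vs one
# 2 kOhm +/-5% part. Errors are modeled as uniform and independent,
# which is an assumption the answer explicitly warns may not hold.
random.seed(1)

def resistor(nominal, tol=0.05):
    return nominal * (1 + random.uniform(-tol, tol))

N = 100_000
pair = [resistor(1000) + resistor(1000) for _ in range(N)]
single = [resistor(2000) for _ in range(N)]

def frac_within_1pct(vals):
    return sum(abs(v - 2000) <= 20 for v in vals) / len(vals)

print(f"within 1%: pair {frac_within_1pct(pair):.2f}, "
      f"single {frac_within_1pct(single):.2f}")
print(f"pair worst case: {min(pair):.0f} .. {max(pair):.0f}")
```

Under these assumptions roughly 36% of series pairs land within 1% versus about 20% of single parts, while the worst case stays pinned between 1900 and 2100 ohms: better on average, identical in the extremes, exactly as the answer says.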
228,959 | I was watching my oscilloscope as I was holding a very short wire attached to the probe. It was showing about 0 voltage (small spikes) until I touched the metal. When I touched the metal, the scope immediately showed a 60.0Hz (period of 16.66ms), approximately sinusoidal (very noisy) waveform. Considering that the power in my area is a 60Hz AC, I think it is most likely that it was picking up coupling from the power lines. This sine wave was immediately replaced with a flat, -200mV line when I touched an electrical ground, but resumed when I let go. Was I acting as an antenna for electrical noise from the mains line? I was wearing rubber shoes on a concrete floor and touching nothing but the probe. | Antennas are usually thought of as converting a radio wave into an electrical signal. I say this because you wouldn't name the plates of a capacitor as antennas. Capacitive coupling (not EM antenna reception) is the phenomenon you witnessed when you touched the o-scope probe. You might have heard about fluorescent lamps lighting up when pointed at overhead wires: - The alternating electric field produced by the overhead lines causes a small current to flow through the lamps and light them - this is the same principle involved when touching the scope probe. Picture taken from here . If there were an earthed shield above the tops of the tubes and below the overhead wires, those lamps would not illuminate. Your body has a large surface area and this massively increased the capacitance between probe tip and the local conductors supplying over one hundred volts RMS around your building. When you also touched earth, this galvanic electrical connection would dominate the capacitive "connection" to your AC power wiring and the "picked up" signal reduced significantly. It's all about potential dividers formed by capacitors and resistors and not really about antennas (RF electromagnetic wave devices).
And the presence of the human body can reduce signals reaching (say) a sensor plate: - Here, the hand (and its capacitance to ground) diverts the electric field from hitting the "receive" plate and less current flows into that plate. | {
"source": [
"https://electronics.stackexchange.com/questions/228959",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/96120/"
]
} |
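To put rough numbers on the capacitive divider the answer describes, here is an order-of-magnitude sketch. The ~1 pF coupling capacitance and the 1 MOhm scope input are illustrative assumptions, not values measured in the question:

```python
import math

# Order-of-magnitude capacitive-divider estimate: mains couples into
# the body/probe through a tiny capacitance, loaded by the scope input.
F_MAINS = 60.0       # Hz
V_MAINS = 120.0      # V RMS (assumed)
C_COUPLE = 1e-12     # assumed body-to-wiring coupling capacitance (F)
R_SCOPE = 1e6        # typical 1x scope input resistance (ohms)

z_c = 1 / (2 * math.pi * F_MAINS * C_COUPLE)             # coupling reactance
v_pickup = V_MAINS * R_SCOPE / math.hypot(R_SCOPE, z_c)  # divider magnitude
print(f"|Zc| ~ {z_c / 1e9:.1f} GOhm, pickup ~ {v_pickup * 1000:.0f} mV")
```

Even a picofarad of coupling puts tens of millivolts onto a 1 MOhm input, easily visible on a scope; grounding yourself swamps this extremely high-impedance path, just as the answer describes.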
230,155 | I've been trying to design a charging system for a small robot powered by a 2S 20C lithium polymer (LiPo) battery. Were I to trust everything I read online, I would believe that the LiPo will kill me in my sleep and steal my life savings. The common advice I read, if you are brave enough to use LiPo batteries, is "never leave unattended", "never charge on top of a flammable or conductive surface", and "never charge at a rate faster than 1 C ". I understand why this is prudent, but what is the actual risk with LiPo batteries? Nearly every cell phone, both Android and iPhone alike, contains a LiPo battery, which most people, including myself, charge while unattended—often-times while left on a flammable or conductive surface. Yet you never hear about someone bursting into flames because their cell phone exploded. Yes, I know there are freak accidents, but how dangerous are modern LiPo batteries? Why do so many online commenters treat stand-alone LiPo batteries like bombs waiting to go off, but don't even think twice about the LiPo sitting in their pocket? | Every cell phone (as well as laptop and nearly everything with a rechargeable battery) uses LiIon /LiPo (essentially equivalent for the purposes of this discussion). And you're right: In terms of actual incidences, lithium-ion and lithium-polymer are the safest battery chemistry to be in wide use, bar none. And the only reason this now ubiquitous chemistry hasn't murdered you and/or your family several times over is that these cells aren't charged unattended. You may not be attending it personally, but every single one of those lithium-ion batteries has a significant amount of protection and monitoring circuitry that is permanently integrated into the pack. It acts as the gatekeeper. It monitors every cell in a battery. It disconnects the output terminals and prevents them from being overcharged. It disconnects the output if they are discharged at too high a current. 
It disconnects the output if it is CHARGED at too high a current. If any of the cells are going bad, the output is disconnected. If any cell gets too hot, it disconnects the output. If any one of the cells is over-discharged, it disconnects the output (and permanently - if you forget to charge a lithium-ion battery for too long, you will find that it will no longer charge. It is effectively destroyed, and the protection circuit will not permit you to charge the cells). Indeed, every single phone battery, laptop battery, whatever battery that is a rechargeable lithium chemistry is the most closely monitored, scrutinized, and actively managed, the diametric opposite of 'unattended' as one can get for a battery. And the reason so much extra trouble is done is because lithium-ion batteries are actually that dangerous . They need protection circuitry to be safe, and they are not even remotely safe without it. Other chemistries such as NiMH or NiCad can be used relatively safely as bare cells, without any monitoring. If they get too hot, they can vent (which has happened to me personally), and it can be pretty startling, but it isn't going to burn down your house or land you an extended stay in a burn unit. Lithium-ion batteries will do both, and that's pretty much the only outcome. Ironically, lithium-ion batteries have become the safest packaged battery by being the most dangerous battery chemistry. You might be wondering what actually makes them so dangerous. Other battery chemistries, such as lead-acid or NiMH or NiCad, are not pressurized at room temperature, though heat does generate some internal pressure. They also have aqueous, non-flammable electrolytes. They store energy in the form of a relatively slow oxidation/reduction reaction, one whose rate of energy release is too low to, say, cause them to eject 6-foot jets of flame. Or any flame, really. Lithium-ion batteries are fundamentally different. They store energy like a spring. That's not a metaphor.
Well, like two springs. Lithium ions are forced between the atoms of covalently-bonded anode material, pushing them apart and 'stretching' the bonds, storing energy. This process is called intercalation . Upon discharge, the lithium ions move out of the anode and into the cathode. This is very much electromechanical, and both the anode and cathode experience significant mechanical strain from this. In fact, both anode and cathode alternatively increase or decrease in physical volume depending on the battery's state of charge. This change in volume is uneven however, so a fully charged lithium-ion battery is actually exerting nontrivial amounts of pressure on its container or other parts of itself. Lithium-ion batteries are generally under a lot of internal pressure, unlike other chemistries. The other problem is their electrolyte is a volatile, extremely flammable solvent that will burn quite vigorously and easily. The complex chemistry of lithium-ion cells is not even completely understood, and there are a few different chemistries with different levels of reactivity and inherent danger, but the ones with high energy density all can undergo thermal runaway. Basically, if they get too hot, lithium ions will begin reacting with oxygen stored as metal oxides in the cathode and release even more heat, which accelerates the reaction further. What inevitably results is a battery that self-ignites, sprays its highly flammable solvent electrolyte out of itself, and promptly ignites that as well, now that a fresh supply of oxygen is available. That's just bonus fire however, there is still a ton of fire from the lithium metal oxidizing with the ample store of oxygen inside. If they get too hot that happens. If they are overcharged, they become unstable and mechanical shock can make them go off like a grenade. If they are over-discharged, some of the metal in the cathode undergoes an irreversible chemical reaction and will form metallic shunts. 
These shunts will be invisible, until charging expands part of the battery enough that the separating membrane is punctured by one of these shunts, creating a dead short, which of course results in fire, etc.: The lithium-ion failure mode we know and love. So, just to be clear, not only is overcharging dangerous, but so is over-discharging, and the battery will wait until you've pumped a ton of energy back into it before spectacularly failing on you, and without any warning or measurable signs. That covers consumer batteries. All this protection circuitry is less able to mitigate the danger of high drain applications, however. High drain generates no small amount of heat (which is bad) and more worrying, it causes huge amounts of mechanical stress on the anode and cathode. Fissures can form and widen, leading to instability if you're unlucky, or just a shorter useful life if it is not too severe. This is why you see LiPos rated in 'C', or how quickly they can be safely discharged. Please, take those ratings seriously and derate it, both for safety and because many manufacturers simply lie about the C rating of their batteries. Even with all that, sometimes an RC Lipo will just burst into flame for no reason. You absolutely need to heed the warnings to never charge them unattended, and everything else. You should buy a safety bag to charge them in because it might prevent your house from burning down (possibly with you or loved ones inside). Even if the risk is very low, the damage it can cause is vast, and the measures needed to mitigate most of that potential for damage are trivial. Don't ignore everything you're being told - it's all spot on. It comes from people who have learned to respect LiPos for what they are, and you should too. The thing you definitely want to avoid is having this lesson taught to you by a lithium-ion battery, instead of peers online and offline. The latter might flame you on a forum, but the former will literally flame you. 
Let's see some videos of stuff exploding! Let me go a little more into how they fail. I've discussed the mechanism, but what really happens? Lithium-ion batteries really only have one failure mode, which is kind of exploding then shooting out a stunningly huge amount of fire in a giant jet of flame for several seconds, and then continuing general burning-related activities for a bit after that. This is a chemical fire, so you cannot extinguish it (lithium-ion batteries will still shoot out huge jets of fire even in the vacuum of space. The oxidizer is contained inside, it doesn't need air or oxygen to burn). Oh, and throwing water on lithium does nothing good , at least in terms of fire reduction. Here is a 'greatest hits' list of some good examples of failure. Note that this does sometimes happen in high drain RC cases even with proper safety measures in place. Comparing high drain applications to the much safer and lower currents of phones is not at all a valid one. Hundreds of amperes ≠ a few hundred milliamperes. RC plane failure. Knife stabs smartphone-sized battery. Overcharged LiPo spontaneously explodes. Laptop battery in a thermal runaway is lightly pressed on, making it explode. | {
"source": [
"https://electronics.stackexchange.com/questions/230155",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/6852/"
]
} |
230,166 | I have a signal source with a very high output resistance. The output of this source is connected to the input of comparator. Actually the input capacitance of comparator is not large (about 20fF). However, the output resistance of the source is very large (10Mohm). So, is there a way to drive a capacitance load with large output resistance source here?
The voltage source is analog signal not digital one. | Every cell phone (as well as laptop and nearly everything with a rechargeable battery) uses LiIon /LiPo (essentially equivalent for the purposes of this discussion). And you're right: In terms of actual incidences, lithium-ion and lithium-polymer are the safest battery chemistry to be in wide use, bar none. And the only reason this now ubiquitous chemistry hasn't murdered you and/or your family several times over is that these cells aren't charged unattended. You may not be attending it personally, but every single one of those lithium-ion batteries has a significant amount of protection and monitoring circuitry that is permanently integrated into the pack. It acts as the gatekeeper. It monitors every cell in a battery. It disconnects the output terminals and prevents them from being overcharged. It disconnects the output if they are discharged at too high a current. It disconnects the output if it is CHARGED at too high a current. If any of the cells are going bad, the output is disconnected. If any cell gets too hot, it disconnects the output. If anyone of the cells is over-discharged, it disconnects the output (and permanently - if you forget to charge a lithium-ion battery for too long, you will find that it will no longer charge. It is effectively destroyed, and the protection circuit will not permit you to charge the cells). Indeed, every single phone battery, laptop battery, *whatever battery that is a rechargeable lithium chemistry is the most closely monitored, scrutinized, and actively managed, the diametric opposite of 'unattended' as one can get for a battery. And the reason so much extra trouble is done is because lithium-ion batteries are actually that dangerous . They need protection circuitry to be safe, and they are not even remotely safe without it. Other chemistries such is NiMH or NiCad can be used relatively safely as bare cells, without any monitoring. 
If they get too hot, they can vent (which has happened to me personally), and it can be pretty startling, but it isn't going to burn down your house or land you an extended stay in a burn unit. Lithium-ion batteries will do both, and that's pretty much the only outcome. Ironically, lithium-ion batteries have become the safest packaged battery by being the most dangerous battery chemistry. You might be wondering what actually makes them so dangerous. Other battery chemistries, such as lead-acid or NiMH or NiCad, are not pressurized at room temperature, though heat does generate some internal pressure. They also have aqueous, non-flammable electrolytes. They store energy in the form of a relatively slow oxidation/reduction reaction, one whose rate of energy release is too low to, say, cause them to eject 6-foot jets of flame. Or any flame, really. Lithium-ion batteries are fundamentally different. They store energy like a spring. That's not a metaphor. Well, like two springs. Lithium ions are forced between the atoms of covalently-bonded anode material, pushing them apart and 'stretching' the bonds, storing energy. This process is called intercalation . Upon discharge, the lithium ions move out of the anode and into the cathode. This is very much electromechanical, and both the anode and cathode experience significant mechanical strain from this. In fact, both anode and cathode alternatively increase or decrease in physical volume depending on the battery's state of charge. This change in volume is uneven however, so a fully charged lithium-ion battery is actually exerting nontrivial amounts of pressure on its container or other parts of itself. Lithium-ion batteries are generally under a lot of internal pressure, unlike other chemistries. The other problem is their electrolyte is a volatile, extremely flammable solvent that will burn quite vigorously and easily. 
The complex chemistry of lithium-ion cells is not even completely understood, and there are a few different chemistries with different levels of reactivity and inherent danger, but the ones with high energy density can all undergo thermal runaway. Basically, if they get too hot, lithium ions will begin reacting with oxygen stored as metal oxides in the cathode and release even more heat, which accelerates the reaction further. What inevitably results is a battery that self-ignites, sprays its highly flammable solvent electrolyte out of itself, and promptly ignites that as well, now that a fresh supply of oxygen is available. That's just bonus fire, however; there is still a ton of fire from the lithium metal oxidizing with the ample store of oxygen inside. If they get too hot, that happens. If they are overcharged, they become unstable, and mechanical shock can make them go off like a grenade. If they are over-discharged, some of the metal in the cathode undergoes an irreversible chemical reaction and will form metallic shunts. These shunts remain invisible until charging expands part of the battery enough that the separating membrane is punctured by one of them, creating a dead short, which of course results in fire, etc.: the lithium-ion failure mode we know and love. So, just to be clear, not only is overcharging dangerous, but so is over-discharging, and the battery will wait until you've pumped a ton of energy back into it before spectacularly failing on you, and without any warning or measurable signs. That covers consumer batteries. All this protection circuitry is less able to mitigate the danger of high-drain applications, however. High drain generates no small amount of heat (which is bad) and, more worryingly, it causes huge amounts of mechanical stress on the anode and cathode. Fissures can form and widen, leading to instability if you're unlucky, or just a shorter useful life if it is not too severe.
This is why you see LiPos rated in 'C', or how quickly they can be safely discharged. Please take those ratings seriously and derate them, both for safety and because many manufacturers simply lie about the C rating of their batteries. Even with all that, sometimes an RC LiPo will just burst into flame for no reason. You absolutely need to heed the warnings to never charge them unattended, and everything else. You should buy a safety bag to charge them in because it might prevent your house from burning down (possibly with you or loved ones inside). Even if the risk is very low, the damage it can cause is vast, and the measures needed to mitigate most of that potential for damage are trivial. Don't ignore anything you're being told - it's all spot on. It comes from people who have learned to respect LiPos for what they are, and you should too. The thing you definitely want to avoid is having this lesson taught to you by a lithium-ion battery, instead of by peers online and offline. The latter might flame you on a forum, but the former will literally flame you. Let's see some videos of stuff exploding! Let me go a little more into how they fail. I've discussed the mechanism, but what really happens? Lithium-ion batteries really only have one failure mode, which is kind of exploding, then shooting out a stunningly huge amount of fire in a giant jet of flame for several seconds, and then continuing general burning-related activities for a bit after that. This is a chemical fire, so you cannot extinguish it (lithium-ion batteries will still shoot out huge jets of fire even in the vacuum of space. The oxidizer is contained inside; it doesn't need air or oxygen to burn). Oh, and throwing water on lithium does nothing good, at least in terms of fire reduction. Here is a 'greatest hits' list of some good examples of failure. Note that this does sometimes happen in high-drain RC cases even with proper safety measures in place.
Comparing high-drain applications to the much safer, lower currents of phones is not a valid comparison. Hundreds of amperes ≠ a few hundred milliamperes. RC plane failure. Knife stabs smartphone-sized battery. Overcharged LiPo spontaneously explodes. Laptop battery in a thermal runaway is lightly pressed on, making it explode. | {
"source": [
"https://electronics.stackexchange.com/questions/230166",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/91853/"
]
} |
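The answer above describes the pack's protection circuitry as a gatekeeper that cuts the output on over-voltage, under-voltage, over-temperature, or excessive current, and later warns to derate the 'C' discharge rating. A minimal sketch of that gatekeeper logic in Python - the threshold values are typical single-cell figures assumed for illustration, not taken from the answer:

```python
# Sketch of the protection ("gatekeeper") checks a Li-ion pack performs
# before allowing current to flow. Thresholds are typical single-cell
# values assumed here for illustration only.

OVER_VOLTAGE = 4.25    # V, per-cell charge cutoff (assumed typical value)
UNDER_VOLTAGE = 2.5    # V, per-cell discharge cutoff (assumed typical value)
MAX_TEMP_C = 60.0      # degC, thermal cutoff (assumed typical value)

def max_discharge_amps(c_rating, capacity_ah, derate=0.8):
    """Max safe discharge current from the 'C' rating, derated because
    manufacturers often overstate it (as the answer warns)."""
    return c_rating * capacity_ah * derate

def output_enabled(cell_voltages, temp_c, current_a, c_rating, capacity_ah):
    """Return True only if every cell and the pack as a whole are within
    limits; any single violation disconnects the output."""
    if any(v >= OVER_VOLTAGE or v <= UNDER_VOLTAGE for v in cell_voltages):
        return False            # any over- or under-voltage cell trips it
    if temp_c >= MAX_TEMP_C:
        return False            # thermal cutoff
    if current_a > max_discharge_amps(c_rating, capacity_ah):
        return False            # over-current cutoff
    return True
```

For example, with a 25C 500 mAh pack, one over-discharged cell (2.4 V) disconnects the whole pack even when every other reading is healthy.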
230,175 | I have a 3 phase wound rotor induction motor which I would like to convert into a generator. I was wondering if it is possible to excite the rotor windings using a DC current. I attempted to do this, but I don't think the rotor windings were wired for that. The rotor windings are three phases with a common neutral inside the motor. I'm not sure if the approach I'm suggesting is possible. But I would think that applying a DC current through the rotor windings would produce a magnetic field. But it only worked as a "generator" when I applied an AC current. | {
"source": [
"https://electronics.stackexchange.com/questions/230175",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/107869/"
]
} |
230,196 | I need to implement a simple demux in HDL. The demultiplexer logic would be: {a, b} = {in, 0} if sel == 0; {a, b} = {0, in} if sel == 1. I started from a basic logic table: first of all I wrote all possible combinations of all the pins, and as a result I got: /**
* in | sel | a | b
* ----------------
* 0 | 0 | 0 | 0
* 1 | 0 | 0 | 0
* 0 | 1 | 0 | 0
* 1 | 1 | 0 | 0
* 0 | 0 | 1 | 0
* 1 | 0 | 1 | 0
* 0 | 1 | 1 | 0
* 1 | 1 | 1 | 0
* 0 | 0 | 0 | 1
* 1 | 0 | 0 | 1
* 0 | 1 | 0 | 1
* 1 | 1 | 0 | 1
* 0 | 0 | 1 | 1
* 1 | 0 | 1 | 1
* 0 | 1 | 1 | 1
* 1 | 1 | 1 | 1
*/ Working on a gate with a single output, I would need to build a logical expression for where the output is equal to 1.
Now I'm a bit confused about how to simplify this logic table, even though I know what the final result should look like: /**
* in | sel | a | b
* -----------------
* 0 | 0 | 0 | 0
* 1 | 0 | 1 | 0
* 0 | 1 | 0 | 0
* 1 | 1 | 0 | 1
*/ It would be great to get an idea how it is simplified, thanks. | {
"source": [
"https://electronics.stackexchange.com/questions/230196",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/107804/"
]
} |
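For the demux question above, the simplification falls out of the 4-row table directly: a is 1 only in the row where in = 1 and sel = 0, and b only where in = 1 and sel = 1, giving a = in AND NOT sel, b = in AND sel. A quick check of those expressions against the table (plain Python used here just for verification; it is not the HDL itself):

```python
# Verify a = in AND NOT sel, b = in AND sel against the reduced
# truth table from the question.

def demux(in_, sel):
    a = in_ & (sel ^ 1)   # a = in AND NOT sel
    b = in_ & sel         # b = in AND sel
    return a, b

# Rows of the question's final table: (in, sel) -> (a, b)
expected = {
    (0, 0): (0, 0),
    (1, 0): (1, 0),
    (0, 1): (0, 0),
    (1, 1): (0, 1),
}

# Every row of the table matches the two simplified expressions.
for (i, s), ab in expected.items():
    assert demux(i, s) == ab
```

In other words, the 16-row table enumerating a and b as free columns collapses once a and b are treated as outputs determined by in and sel; each output then needs only a single product term.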
230,224 | I have a bit of a specific question here: I am the proud owner of an STM32L476 microcontroller. According to the datasheet (page 41), this specific MCU not only has a built-in LCD controller, but it can also drive an LCD in one of the built-in low-power modes. Problem is, I have no idea how to communicate with the LCD controller! I have scoured this datasheet and managed only to find the controller's memory address. Does anyone know how to use this LCD controller? P.S. I have searched the programming manual, but the word "LCD" doesn't appear once in the document. I really am at a total loss here. | {
"source": [
"https://electronics.stackexchange.com/questions/230224",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/36229/"
]
} |
230,525 | I had a selection of 0805 LEDs in my last Mouser order and was a bit surprised to discover they arrived in heat-sealed (rather than folded and taped with sticker) bags containing a desiccant bag and some sort of litmus-esque paper indicator saying something about baking them if the dots on it went pink (instead of blue). A quantity of through-hole LEDs I had in the same order just came in the regular ESD-protective bag, folded over with a sticker. What's up with SMD LEDs that they work so hard to keep them dry? Surely their dryness can't be assured once they're installed. Does it have to do with their solderability or something? Why would that be different than other SMD parts? | Through-hole components are generally hand or wave soldered. These apply only localized heating to the pads, not to the component itself. SMD parts, on the other hand, are generally reflow soldered. This involves putting the entire part in a hot oven for an extended time. The components are generally made from porous materials which by their nature absorb moisture from the atmosphere. If they get water inside them, putting the parts in an oven can cause it to rapidly turn to steam, which in turn expands and can fracture or destroy the part. As such, the devices are packed for shipping in such a way as to reduce the risk of exposure to moisture. Parts that have been out of a sealed bag or which have been sitting for extended periods are generally baked at a lower temperature first to cook off any water before being reflowed, in order to prevent fracturing. In the case that the parts are being hand soldered, baking is probably not necessary, especially since it will likely not be a production run. But the distributor doesn't know this, so they do their duty and ensure the parts are properly packaged. The same applies to ICs as well as LEDs.
In fact there are also different classes of sensitivity - some parts are more moisture sensitive than others, and as a result require different levels of packaging and/or have different shelf lives. In your case it is a Level 2 device, which is one of the less sensitive classes - pre-baking isn't strictly required unless the parts are exposed to >60% humidity measured at room temperature, and they can be stored in the sealed bag for many months. | {
"source": [
"https://electronics.stackexchange.com/questions/230525",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/70923/"
]
} |
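The "Level 2" classification mentioned in the answer above comes from the JEDEC J-STD-033 moisture sensitivity levels. As a rough sketch (the floor-life figures below are the commonly published J-STD-033 values and are my addition, not something stated in the answer), the bake-before-reflow decision is just a lookup:

```python
# Illustrative floor life (hours a part may spend out of its sealed bag,
# at <=30 degC / 60% RH, before baking is required) per JEDEC moisture
# sensitivity level. Values follow the commonly cited J-STD-033 table.
FLOOR_LIFE_HOURS = {
    "MSL1": float("inf"),  # unlimited
    "MSL2": 365 * 24,      # 1 year
    "MSL2a": 4 * 7 * 24,   # 4 weeks
    "MSL3": 168,
    "MSL4": 72,
    "MSL5": 48,
    "MSL5a": 24,
    "MSL6": 0,             # mandatory bake before use
}

def needs_bake(level: str, hours_exposed: float) -> bool:
    """True if the part should be baked before being reflow soldered."""
    return hours_exposed >= FLOOR_LIFE_HOURS[level]
```

For the MSL2 LEDs in the question, a reel left on the bench for a day is still well within its floor life; an MSL6 part must always be baked.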
230,538 | First of all, I'm fairly new to electronics. I've been working on my own version of a miniature hovercraft just like this one http://www.thingiverse.com/thing:68639 . I'm using an arduino pro mini for the controller, a nrf24l01 transceiver, small brushed motors (hubsan motors) running at 3 volts and a 3.7v 500mah 25c lips. I'm also using MOSFET transistors for the speed control. Here's the problem. I wired everything according to this page http://www.circuitmagic.com/arduino/run-small-brushed-motor-for-mini-quadcopter/ , but when I turn the motors on, there's a huge voltage drop. When plugging one motor directly to the battery, it drops about one volt. When attaching two motors, the voltage drops down to 0.86 volts. I don't know how to provide a steady 3.7 volts to my motors (without it dropping significantly), but I know it's possible since quadcopters like cheerson cx-10 runs four similar motors using a 3.7v 100mah lipo while I'm only using two motors. For that matter, how are micro quadcopters running multiple brushed motors without the motors running slow. I can only get a gentle breeze from mine. Please tell me if there's a way to regulate voltage or any other step or component I'm missing in my setup. | Through-hole components are generally hand or wave soldered. These apply on local heating to the pads and not to the component itself. SMD parts on the other hand are generally reflow soldered. This involves putting the entire part in a hot oven for an extended time. The components are generally made from porous materials which by their nature absorb moisture from the atmosphere. If they get water inside them, putting the parts in an oven can cause it to rapidly turn to steam which in turn expands and can fracture or destroy the part. As such the devices are packed for shipping in such a way as to reduce the risk of exposure to moisture. 
Parts that have been out of a sealed bag or which have been sitting for extended periods are generally baked at a lower temperature first to cook off any water before being reflowed in order to stop fracturing. In the case that the parts are being hand soldered, baking is probably not necessary, especially since it will likely not be a production run. But the distributer doesn't know this, so they do their duty and ensure the parts are properly packaged. The same applies to ICs as well as LEDs. In fact there are also different classes of sensitivity - some parts are more moisture sensitive than others, and as a result require different levels of packaging and/or have different shelf lives. In your case it is a Level 2 device which is one of the lesser sensitive parts - pre-baking isn't strictly required unless they are exposed to >60% humidity measured at room temp and they can be stored in the packed bag for many months. | {
"source": [
"https://electronics.stackexchange.com/questions/230538",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/108084/"
]
} |
230,544 | A voltage divider follows a fet in an audio booster. The pot in this divider is 250K and I'm reading a range of 0 to 6vac in my circuit. Because I want to build this into a volume pedal I'd like to be able to raise the lowest possible voltage and reduce the maximum possible voltage of this arrangement. Both of these variations to be on their own pot. Could somebody please help me to work out a solution to this? Thanks. | Through-hole components are generally hand or wave soldered. These apply on local heating to the pads and not to the component itself. SMD parts on the other hand are generally reflow soldered. This involves putting the entire part in a hot oven for an extended time. The components are generally made from porous materials which by their nature absorb moisture from the atmosphere. If they get water inside them, putting the parts in an oven can cause it to rapidly turn to steam which in turn expands and can fracture or destroy the part. As such the devices are packed for shipping in such a way as to reduce the risk of exposure to moisture. Parts that have been out of a sealed bag or which have been sitting for extended periods are generally baked at a lower temperature first to cook off any water before being reflowed in order to stop fracturing. In the case that the parts are being hand soldered, baking is probably not necessary, especially since it will likely not be a production run. But the distributer doesn't know this, so they do their duty and ensure the parts are properly packaged. The same applies to ICs as well as LEDs. In fact there are also different classes of sensitivity - some parts are more moisture sensitive than others, and as a result require different levels of packaging and/or have different shelf lives. 
In your case it is a Level 2 device which is one of the lesser sensitive parts - pre-baking isn't strictly required unless they are exposed to >60% humidity measured at room temp and they can be stored in the packed bag for many months. | {
"source": [
"https://electronics.stackexchange.com/questions/230544",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/108087/"
]
} |
230,609 | To break off any circuit, the common way is to disconnect the negative and or ground terminal. I rarely see people disconnecting the positive and if so, they're not well educated in electrical engineering most of the times. Well, I've no electrical degree either and to me, breaking the circuit off at the negative side seems counter intuitive. In my intuition, breaking the input makes more sense than breaking the output, due this is the side where the energy is coming from. Why to disconnect the negative/ground terminal, not the positive? | There is no real inherent distinction between breaking one side of a loop or the other side- it's all in series so breaking the negative or the positive side of the supply keeps electrons from flowing. When you have an electronic switch and are turning off part of the circuitry with other circuitry it's easier to break the negative - called a low side switch (requires fewer parts), but if some other circuitry remains connected, that may cause problems. For example, if I break the ground connection on a module, the 'input' pins may start sourcing current back to the controller (because otherwise they would have to go negative with respect to 'ground' on the module)- it won't necessarily switch the current off completely and it may even damage the controller or module in some situations. Look at a few of the many answers on this site where people have tried this, failed and a high-side switch has been suggested. It comes up quite regularly. If it's something completely isolated like a relay coil, most designers will use a low side switch because it's simpler and there is no advantage the other way. In cars, the chassis is used as a return, so high side is preferred if the load is remote. Here is a useful document on automotive applications. 
Speaking of automotive, there is one particular situation worthy of mention where removing the negative connection is recommended for safety reasons - and that is when you are working on a car. Since the negative terminal is almost always connected very solidly to the chassis, if you try to remove the positive terminal with a (conductive) wrench/spanner and the tool touches the chassis, hundreds of amperes will flow, causing the wrench to get red hot. Some people have left their wedding or other rings on and received severe burns (to the point of possibly losing their finger) when the ring formed part of the circuit. So take the negative terminal off first and put it on last if you are working on a car. And remove jewellery. | {
"source": [
"https://electronics.stackexchange.com/questions/230609",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/56325/"
]
} |
230,776 | I've read a bit about the construction of a digital computer in Shocken/Nisan's: The Elements of Computing Systems . But this book says nothing about certain electrical aspects in computers, for example: It's often said that 0's and 1's are represented by voltage, if the voltage is in the interval [0, 0.9), then it is a 0. If the voltage is in the interval [0.9, 1.5), then it is a 1 (voltages may vary, I'm only giving an example). But I never read what keeps electrical voltages "well-behaved" in a way that a 0 could never accidentally become a 1 due to electrical volatility[1] inside the computer. Perhaps it's possible for the voltage to be very near 0.9, then what is done to avoid it passing the threshold? [1]: Supposing it exists. | It's often said that 0's and 1's are represented by voltage, if the voltage is in the interval [0, 0.9), then it is a 0. If the voltage is in the interval [0.9, 1.5), then it is a 1 (voltages may vary, I'm only giving an example). To some degree, you've mostly created this problem by using an unrealistic example. There is a much larger gap between logical low and high in real circuits. For example, 5V CMOS logic will output 0-0.2V for logic low and 4.7-5V for logic high, and will consistently accept anything under 1.3V as low or anything over 3.7V as high. That is, there are much tighter margins on the outputs than the inputs, and there's a huge gap left between voltages which might be used for logical low signals (<1.3V) and ones that might be used for logical high (>3.7V). This is all specifically laid out to make allowances for noise, and to prevent the very sort of accidental switching that you're describing. Here's a visual representation of the thresholds for a variety of logic standards, which I've borrowed from interfacebus.com : Each column represents one logic standard, and the vertical axis is voltage. 
Here's what each color represents: Orange: Voltages in this range are output for logical high, and will be accepted as logical high. Light green: Voltages in this range will be accepted as logical high. Pink/blue: Voltages in this range won't be interpreted consistently, but the ones in the pink area will usually end up being interpreted as high, and ones in the blue area will usually be low. Bluish green: Voltages in this range will be accepted as logical low. Yellow: Voltages in this range are output for logical low, and will be interpreted as logical low. | {
"source": [
"https://electronics.stackexchange.com/questions/230776",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/8699/"
]
} |
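The input-threshold behaviour described in the answer above can be sketched as a simple classifier. The numbers are the 5 V CMOS input limits quoted there (accept anything at or below 1.3 V as low, at or above 3.7 V as high); everything in between is the forbidden zone:

```python
def classify_5v_cmos_input(v: float) -> str:
    """Classify an input voltage against 5 V CMOS input thresholds."""
    if v <= 1.3:
        return "low"
    if v >= 3.7:
        return "high"
    return "undefined"  # forbidden zone: interpretation is not guaranteed

# The 0.9 V from the question's example sits comfortably in the 'low' band:
print(classify_5v_cmos_input(0.9))  # low
```

Note how the output specs (0-0.2 V and 4.7-5 V) are far inside these input bands: that gap is the noise margin the answer describes.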
231,034 | I am in first year of Engineering school and I was given an assignment containing this circuit, which drives pressure sensors in a pitot tube : I am struggling to understand the whole circuit, and more precisely the first op-amp, whose output (pin 1) and inverting input (pin 2, e-) are connected to ground. What is its use? How can such an op-amp have an influence on the overall circuit, if its output is not used? | The first op-amp is actually creating the circuit ground. The 7810 creates a stable 10 volts, which is then divided by the voltage divider R2 and R3 and filtered by C3 to make a stable 5 volt level relative to the most negative rail. The op-amp then buffers this, and the rest of the circuit uses its output as the reference ground. Remember that ground in a circuit like this is just a convenience, a node that is used when referring to other voltages. | {
"source": [
"https://electronics.stackexchange.com/questions/231034",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/108331/"
]
} |
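The arithmetic in the virtual-ground answer above is just an unloaded divider (the op-amp buffer means the divider sees essentially no load). The resistor values here are placeholders, since the actual R2/R3 come from the schematic; for an unloaded divider only their ratio matters:

```python
def divider_out(v_in: float, r_top: float, r_bottom: float) -> float:
    """Output of an unloaded resistive divider."""
    return v_in * r_bottom / (r_top + r_bottom)

# With the 7810's stable 10 V and an assumed equal pair of resistors,
# the buffered reference 'ground' sits 5 V above the most negative rail:
v_ref = divider_out(10.0, 10e3, 10e3)  # 5.0 V
```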
232,035 | Recently I did some work on my car electric circuits. I see many relays are used in car circuits. These relays are used for simple switching, and I wonder why these circuits are based on relays and not on transistors or other electronic components usable for switching purposes. I thought that transistors were cheaper, smaller and more reliable than classic el-mech relays for switching. Note: In car applications a 12 V car battery is used to power the coil of a relay and the same 12 V power is what's switched by the relay. Sometimes a relay switches just another signal line, i.e. without any high power load on it. And still, I can see no transistors in there. So I believe there is a solid reason why it is done this way, and I must be missing something here. :-) | Relays are much more stable temperature-wise: a sealed relay has essentially the same characteristics at -30°C and +70°C, both temperatures being common for cars. A transistor works quite differently at -30°C and +70°C, so the schematic has to be designed to account for those variations. I once worked on a product with temperature range starting at -55°C, which used both relays and semiconductor devices. The funny part about the design was that below -20°C only the relay part was powered, which activated air heaters and would only switch on the semiconductor part once the temperature reached 0°C. Relays also offer galvanic isolation, which effectively confines faults. Common failures like short circuits usually damage only one relay, whereas in transistor-based circuits several devices along the way would be affected. I bet people still want their car's motor running even when the air conditioner or a window lifter dies. | {
"source": [
"https://electronics.stackexchange.com/questions/232035",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/7575/"
]
} |
232,595 | As far as I understand, through holes in PCBs are often plated, hence the term PTH. Letting red denote copper, the first figure shows a through hole which is plated, and the second figure shows one that is not. The thick black line is the pin of a component, while the silver denotes solder applied. I can't figure out why the copper plating (otherwise known as the barrel) is needed - can someone explain why? With through plating: Without through plating (why isn't this the norm?) : | In order for your scheme to connect the top and bottom layer, TWO conditions must BOTH be met: The pad on the TOP must be accessible and must be soldered (separately). The pad on the BOTTOM must be accessible and must be soldered (separately). In very many cases the top pad of a thru-hole component is NOT accessible because the body of the component covers it. So that is not practical. In MOST cases there IS NO component lead at all where you need to via from one side to the other. Inserting short bits of wire and soldering BOTH SIDES is simply not practical even for manual assembly, not to mention the automated assembly that virtually all modern gear comes from. It doubles the effort to require soldering to BOTH sides of even a thru-hole component lead. That takes double the assembly time, and greatly increases the chances of assembly error. It is simply not reasonable at any level. | {
"source": [
"https://electronics.stackexchange.com/questions/232595",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/103501/"
]
} |
232,631 | A quick google around and all I seem to be able to find are people talking about the physics & the chemistry of the capacitors but not how this affects choosing which to use. Avoiding talking about the difference in their make-up, and the larger capacities found in electrolytic caps, what are the main thoughts that drive which type of capacitor to use for an application? For example, why do I see it suggested to use ceramic caps for power decoupling per microprocessor & a larger electrolytic capacitor per board? Why not use electrolytic all around? | 1. Capacitors There are a lot of misconceptions about capacitors, so I wanted to briefly clarify what capacitance is and what capacitors do. Capacitance measures how much energy will be stored in the electric field generated between two different points for a given difference of potential. This is why capacitance is often called the 'dual' of inductance. Inductance is how much energy a given current flow will store in a magnetic field, and capacitance is the same, but for the energy stored in an electric field (by a potential difference, rather than current). Capacitors do not store electric charge, which is the first big misconception. They store energy. For every charge carrier you force onto one plate, a charge carrier on the opposite plate leaves. The net charge remains the same (neglecting any possible much smaller unbalanced 'static' charge that might build up on asymmetrical exposed outer plates). Capacitors store energy in the dielectric, NOT in the conductive plates. Only two things determine a capacitor's effectiveness: its physical dimensions (plate area and distance separating them), and the dielectric constant of the insulating material between the plates. More area means a bigger field, closer plates mean a stronger field (since field strength is measured in Volts per meter, so the same difference of potential across a much smaller distance yields a stronger electric field).
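The 'plate area, separation, dielectric' relationship above is the standard parallel-plate formula C = eps0 * eps_r * A / d; a small illustrative calculation, with geometry numbers that are my own assumptions rather than anything from the text:

```python
EPS0 = 8.854e-12  # permittivity of free space, F/m

def parallel_plate_c(area_m2: float, gap_m: float, eps_r: float = 1.0) -> float:
    """C = eps0 * eps_r * A / d for an ideal parallel-plate capacitor."""
    return EPS0 * eps_r * area_m2 / gap_m

# 1 cm^2 plates 10 um apart give only ~89 pF in vacuum; the same geometry
# filled with a ferroelectric ceramic (eps_r ~ 7000) gives ~0.6 uF.
c_vacuum = parallel_plate_c(1e-4, 10e-6)           # ~8.85e-11 F
c_ceramic = parallel_plate_c(1e-4, 10e-6, 7000.0)  # ~6.2e-7 F
```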
The dielectric constant is how strong a field will be generated in a specific medium. The 'baseline' dielectric constant is \$\varepsilon\$, with a normalized value of 1. This is the dielectric constant of a perfect vacuum, or the field strength that occurs through spacetime itself. Matter has a very large impact on this, and can support the generation of much stronger fields. The best materials are materials with lots of electric dipoles that will enhance the strength of a field generated within the material. Plate area, dielectric, and plate separation. That's really all there is to capacitors. So why are they so complicated and varied? They aren't. Except the ones with much more than thousands of pF of capacitance. If you want such ludicrous amounts of capacitance as we mostly take for granted today, such amounts as in millions of picofarads (microfarads), and even orders of magnitude beyond, we are at the mercy of physics. Like any good engineer, in the face of limits imposed by the laws of nature, we cheat and get around those limits anyway. Electrolytic capacitors and high capacitance (0.1µF to 100µF+) ceramic capacitors are the dirty tricks we used. 2. Electrolytic capacitors Aluminum The first and most important distinction (which they're named for) is that electrolytic capacitors use an electrolyte. The electrolyte serves as the second plate. Being a liquid, this means it can be directly up against a dielectric, even one that is unevenly shaped. In aluminum electrolytic capacitors, this enables us to take advantage of aluminum's surface oxidation (the hard stuff, sometimes deliberately porous and dye impregnated for colours, on anodized aluminum which amounts to an insulating sapphire coating) for use as the dielectric. Without an electrolytic 'plate' however, the unevenness of the surface would prevent a rigid metallic plate from getting close enough to gain any advantage from using aluminum oxide in the first place.
Even better, by using a liquid, the surface of aluminum foil can be roughened, causing a large increase in effective surface area. Then it is anodized until a sufficiently thick layer of aluminum oxide has formed on its surface. A rough surface of which all will be directly adjacent to the other 'plate' – our liquid electrolyte. There are problems, however. The most familiar one is polarity. Anodization of aluminum, if you couldn't tell by its similarity to the word anode , is a polarity-dependent process. The capacitor must always be used in the polarity that anodizes the aluminum. The opposite polarity will allow the electrolyte to destroy the surface oxide, which leaves you with a shorted capacitor. Some electrolytes will slowly eat away this layer anyway, so many aluminum electrolytic capacitors have a shelf-life. They are designed to be used, and that use has the beneficial side effect of maintaining and even restoring the surface oxide. However, with long enough disuse, the oxide can be completely destroyed. If you must use an old dusty capacitor of unsure condition, it is best to 'reform' them by applying a very low current (hundreds of µA to mA) from a constant current power supply, and let the voltage rise slowly until it reaches its rated voltage. This prevents the very high leakage current (initially) from damaging the capacitor, and slowly rebuilds the surface oxides until the leakage is hopefully at acceptable levels. The other problem is that electrolytes are, due to chemistry, something ionic dissolved in a solvent. Non-polymer aluminum ones use water (with some other 'secret sauce' ingredients added to it). What does water do when current flows through it? It electrolyses! Great if you wanted oxygen and hydrogen gas, terrible if you didn't. In batteries, controlled recharging can reabsorb this gas, but capacitors do not have an electrochemical reaction that is reversed. They're just using the electrolyte as a thing that is conductive. 
So no matter what, they generate minute amounts of hydrogen gas (the oxygen is used to build up the aluminum oxide layer), and while very small, it prevents us from hermetically sealing these capacitors. So they dry out. The standard useful life at maximum temperature is 2,000 hours. That's not very long. Around 83 days. This is simply due to higher temperatures causing the water to evaporate more quickly. If you want something to have any longevity, it is important to keep them as cool as possible, and get the highest endurance models (I've seen ones as high as 15,000 hours). As the electrolyte dries out, it becomes less conductive, which increases ESR, which in turn increases heat, which compounds the problem. Tantalum Tantalum capacitors are the other variety of electrolytic capacitors. These use manganese dioxide as their electrolyte, which is solid in its finished form. During production, manganese dioxide is dissolved in an acid, then electrochemically deposited (similar to electroplating) onto the surface of tantalum powder which is then sintered. The exact details of the 'magic' part where they create an electrical connection between all the tiny pieces of tantalum powder and the dielectric are not known to me (edits or comments are appreciated!) but suffice it to say, tantalum capacitors are made from tantalum because of a chemistry that permits us to easily manufacture them from a powder (high surface area). This gives them terrific volumetric efficiency, but at a cost: the free tantalum and manganese dioxide can undergo a reaction similar to thermite, which is aluminum and iron oxide. Only, the tantalum reaction has much lower activation temperatures - temperatures that are easily and quickly achieved should opposite polarity or an overvoltage event punch a hole through the dielectric (tantalum pentoxide, much like aluminum oxide) and create a short. This is why you see tantalum capacitors' voltage and current derated by 50% or more.
For those unaware of thermite (which is a lot hotter but still not dissimilar to the tantalum and MnO2 reaction), there is a ton of fire and heat. It is used to weld railroad rails to each other, and it does this task in seconds. There are also polymer electrolytic capacitors, which use conductive polymer that, in its monomer form, is a liquid, but when exposed to the right catalyst, will polymerize into a solid material. This is just like super glue, which is a liquid monomer that polymerizes solid once it is exposed to moisture (either in/on the surfaces it is applied to, or from the air itself). In this way, polymer capacitors can be mostly a solid electrolyte, which results in reduced ESR, greater longevity, and generally better robustness. They still have some small amount of solvent in the polymer matrix however, and it is needed to be conductive. So they still dry out. No free lunch sadly. Now, what are the actual electrical properties of these types of capacitors? We already mentioned polarity, but the other is their ESR and ESL. Electrolytic capacitors, due to being constructed as a very long plate wound into a coil, have relatively high ESL (equivalent series inductance). So high in fact, that they are completely ineffective as capacitors above 100kHz, or 150kHz for polymer types. Above this frequency, they are basically just resistors that block DC. They won't do anything to your voltage ripple, and instead will make the ripple be equal to the ripple current multiplied by the capacitor's ESR, which can often make ripple even worse. Of course, this means any sort of high frequency noise or spike will just shoot right through an aluminum electrolytic capacitor like it wasn't even there. Tantalums are not quite as bad, but they still lose their effectiveness with medium frequencies (the best and smallest ones can almost hit 1MHz, most lose their capacitive characteristic around 300–600kHz).
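The '~100 kHz ceiling' follows from the self-resonant frequency set by ESL: f = 1 / (2*pi*sqrt(L*C)). A small sketch; the electrolytic's 20 nH ESL is an assumption for illustration (the text only says 'relatively high'), while the 3 nH figure comes from the 0805 ceramic example later in this answer:

```python
import math

def self_resonant_freq_hz(c_farads: float, esl_henries: float) -> float:
    """Frequency above which a real capacitor behaves inductively:
    f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(esl_henries * c_farads))

# A 100 uF electrolytic with an assumed 20 nH of ESL resonates near 112 kHz,
# consistent with the ~100 kHz ceiling quoted above; a 10 nF ceramic with
# ~3 nH of ESL resonates near 29 MHz.
f_elec = self_resonant_freq_hz(100e-6, 20e-9)
f_mlcc = self_resonant_freq_hz(10e-9, 3e-9)
```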
All in all, electrolytic capacitors are great for storing a ton of energy in a small space, but are really only useful for dealing with noise or ripple below 100kHz. If not for that critical weakness, there would be little reason to use anything else. 3. Ceramic Capacitors Ceramic capacitors use a ceramic as their dielectric, with metallization on either side as the plates. I will not be going into Class 1 (low capacitance) types, but only class II. Class II capacitors cheat using the ferroelectric effect. This is very much akin to ferromagnetism, only with electric fields instead. A ferroelectric material has a ton of electric dipoles that can, to some degree or another, be oriented in the presence of an external electric field. So the application of an electric field will pull the dipoles into alignment, which requires energy, and causes a massive amount of energy to ultimately be stored in the electric field. Remember how a vacuum was the baseline of 1? The ferroelectric ceramics used in modern MLCCs have a dielectric constant on the order of 7,000. Unfortunately, just like ferromagnetic materials, as a stronger and stronger field magnetizes (or polarizes in our case) a material, it begins running out of more dipoles to polarize. It saturates. This ultimately translates into the nasty property of X5R/X7R/etc type ceramic capacitors: their capacitance drops with bias voltage. The higher the voltage across their terminals, the lower their effective capacitance. The amount of energy stored is still always increasing with voltage, but it is not nearly so good as you would expect based on its unbiased capacitance. Voltage rating of a ceramic capacitor has very little effect on this. In fact, the actual withstanding voltage of most ceramics is much higher, 75 or 100V for the lower voltage ones. 
In fact, many ceramic capacitors I suspect are the exact same part but with different part numbers, the same 4.7µF capacitor being sold as both a 35V and 50V capacitor under different labels. The graph of some MLCCs' capacitance vs. bias voltage is identical, save for the lower voltage one having its graph truncated at its rated voltage. Suspicious, certainly, but I could be wrong. Anyway, buying higher rated ceramics will do nothing to combat this voltage-related capacitance falloff; the only factor that ultimately plays a role is the physical volume of the dielectric. More material means more dipoles. So physically larger capacitors will retain more of their capacitance under voltage. This is also not a trivial effect. A 1210 10µF 50V ceramic capacitor, a veritable beast of a capacitor, will lose 80% of its capacitance by 50V. Some are a little better, some are a little worse, but 80% is a reasonable figure. The best I have seen was a 1210 (inches) keeping about 3µF of capacitance by the time it hit 60V. A 10µF 1206 (inches) sized 50V ceramic will be lucky to have 500nF left by 50V. Class II ceramics are also piezoelectric and pyroelectric, though this doesn't really impact them electrically. They have been known to vibrate or sing due to ripple, and can act as microphones. Probably best to avoid using them as coupling capacitors in audio circuits. Otherwise, ceramics have the lowest ESL and ESR of any capacitor. They're the most 'capacitor-like' of the bunch. Their ESL is so low that the primary source is the height of the end terminations on the package itself. Yes, the height of an 0805 ceramic is the main source of its 3 nH of ESL. They still behave like capacitors into the many MHz, or even higher for specialized RF types. They also can decouple a lot of noise, and decouple very fast things like digital circuits, things electrolytics are useless for.
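A very rough way to model the bias falloff described above is a linear interpolation anchored on the ~80% loss-at-rated-voltage figure from the answer. Real derating curves are nonlinear and part-specific, so treat this purely as an illustration and check the manufacturer's plot:

```python
def effective_capacitance(c_nominal: float, v_bias: float, v_rated: float,
                          loss_at_rated: float = 0.8) -> float:
    """Crude linear estimate of Class II MLCC capacitance under DC bias.
    Assumes the fractional loss scales linearly up to the rated voltage."""
    frac = min(v_bias / v_rated, 1.0)
    return c_nominal * (1.0 - loss_at_rated * frac)

# The 1210 10 uF / 50 V example from the answer: ~2 uF left at 50 V bias.
c_eff = effective_capacitance(10e-6, 50.0, 50.0)
```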
In conclusion, electrolytics are: lots of bulk capacitance in a tiny package; terrible in every other way. They are slow, they wear out, they catch fire, they will turn into a short if you polarize them wrong. By every criterion capacitors are measured by, save for capacitance itself, electrolytics are absolutely terrible. You use them because you have to, never because you want to. Ceramics are: unstable, losing a lot of their capacitance under voltage bias; able to vibrate or act as microphones (or nanoactuators!); otherwise awesome. Ceramic capacitors are what you want to use, but aren't always able to. They actually behave like capacitors even at high frequencies, but can't match the volumetric efficiency of electrolytics, and only Class 1 types (which have very small amounts of capacitance) are going to have a stable capacitance. They vary quite a bit with temperature and voltage. Oh, they also can crack and are not as mechanically robust. Oh, one last note: you can use electrolytics just fine in AC/non-polarized applications, with all their other problems still in play of course. Just connect a pair of regular polarised electrolytic capacitors with their same-polarity terminals together, and now the opposite-polarity ends are the terminals of a brand new, non-polar electrolytic. As long as their capacitance values are fairly well-matched and there is a limited amount of steady-state DC bias, the capacitors seem to hold up in use. | {
"source": [
"https://electronics.stackexchange.com/questions/232631",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/68276/"
]
} |
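One footnote to the back-to-back non-polar trick at the end of the answer above: the two polarised parts end up in series, so the matched pair behaves as a single non-polar capacitor of half the value:

```python
def series_c(c1: float, c2: float) -> float:
    """Capacitance of two capacitors in series."""
    return c1 * c2 / (c1 + c2)

# Two matched 100 uF electrolytics back to back act as one ~50 uF non-polar cap:
c_nonpolar = series_c(100e-6, 100e-6)  # 5e-05 F
```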
232,642 | Although the adsorption of moisture by FR4 is quite low, what is the best way to minimize it or eliminate it completely? | 1. Capacitors There are a lot of misconceptions about capacitors, so I wanted to briefly clarify what capacitance is and what capacitors do. Capacitance measures how much energy will be stored in the electric field generated between two different points for a given difference of potential. This is why capacitance is often called the 'dual' of inductance. Inductance is how much energy a given current flow will store in a magnetic field, and capacitance is the same, but for the energy stored in an electric field (by a potential difference, rather than current). Capacitors do not store electric charge, which is the first big misconception. They store energy. For every charge carrier you force onto one plate, a charge carrier on the opposite plate leaves. The net charge remains the same (neglecting any possible much smaller unbalanced 'static' charge that might build up on asymetrical exposed outer plates). Capacitors store energy in the dielectric, NOT in the conductive plates. Only two things determine a capacitor's effectiveness: its physical dimensions (plate area and distance separating them), and the dielectric constant of the insulating between the plates. More area means a bigger field, closer plates mean a stronger field (since field strength is measured in Volts per meter, so the same difference of potential across a much smaller distance yields a stronger electric field). The dielectric constant is how strong a field will be generated in a specific medium. The 'baseline' dielectric constant is \$\varepsilon\$ , with a normalized value of 1. This is the dielectric constant of a perfect vacuum, or the field strength that occurs through spacetime itself. Matter has a very large impact on this, and can support the generation of much stronger fields. 
The best materials are materials with lots of electric dipoles that will enhance the strength of a field generated within the material. Plate area, dielectric, and plate separation. That's really all there is to capacitors. So why are they so complicated and varied? They aren't. Except the ones with much more than thousands of pF of capacitance. If you want such ludicrous amounts of capacitance as we mostly take for granted today, such amounts as in millions of picofarads (microfarads), and even orders of magnitude beyond, we are at the mercy of physics. Like any good engineer, in the face of limits imposed by the laws of nature, we cheat and get around those limits anyway. Electrolytic capacitors and high capacitance (0.1µF to 100µF+) ceramic capacitors are the dirty tricks we used. 2. Electrolytic capacitors Aluminum The first and most important distinction (for which they're named) is that electrolytic capacitors use an electrolyte. The electrolyte serves as the second plate. Being a liquid, this means it can be directly up against a dielectric, even one that is unevenly shaped. In aluminum electrolytic capacitors, this enables us to take advantage of aluminum's surface oxidation (the hard stuff, sometimes deliberately porous and dye impregnated for colours, on anodized aluminum which amounts to an insulating Sapphire coating) for use as the dielectric. Without an electrolytic 'plate' however, the unevenness of the surface would prevent a rigid metallic plate from getting close enough to gain any advantage from using aluminum oxide in the first place. Even better, by using a liquid, the surface of aluminum foil can be roughened, causing a large increase in effective surface area. Then it is anodized until a sufficiently thick layer of aluminum oxide has formed on its surface. A rough surface, all of which will be directly adjacent to the other 'plate' – our liquid electrolyte. There are problems, however. The most familiar one is polarity.
Anodization of aluminum, if you couldn't tell by its similarity to the word anode , is a polarity-dependent process. The capacitor must always be used in the polarity that anodizes the aluminum. The opposite polarity will allow the electrolyte to destroy the surface oxide, which leaves you with a shorted capacitor. Some electrolytes will slowly eat away this layer anyway, so many aluminum electrolytic capacitors have a shelf-life. They are designed to be used, and that use has the beneficial side effect of maintaining and even restoring the surface oxide. However, with long enough disuse, the oxide can be completely destroyed. If you must use an old dusty capacitor of unsure condition, it is best to 'reform' them by applying a very low current (hundreds of µA to mA) from a constant current power supply, and let the voltage rise slowly until it reaches its rated voltage. This prevents the very high leakage current (initially) from damaging the capacitor, and slowly rebuilds the surface oxides until the leakage is hopefully at acceptable levels. The other problem is that electrolytes are, due to chemistry, something ionic dissolved in a solvent. Non-polymer aluminum ones use water (with some other 'secret sauce' ingredients added to it). What does water do when current flows through it? It electrolyses! Great if you wanted oxygen and hydrogen gas, terrible if you didn't. In batteries, controlled recharging can reabsorb this gas, but capacitors do not have an electrochemical reaction that is reversed. They're just using the electrolyte as a thing that is conductive. So no matter what, they generate minute amounts of hydrogen gas (the oxygen is used to build up the aluminum oxide layer), and while very small, it prevents us from hermetically sealing these capacitors. So they dry out. The standard useful life at maximum temperature is 2,000 hours. That's not very long. Around 83 days. This is simply due to higher temperatures causing the water to evaporate more quickly. 
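The "2,000 hours at maximum temperature" figure is less alarming than it sounds because of a widely used industry rule of thumb (it is not stated in this answer, so treat it as an approximation): electrolytic life roughly doubles for every 10 °C the capacitor runs below its rated temperature.

```python
# Rough electrolytic lifetime estimate using the "10-degree doubling" rule:
#   L = L_rated * 2 ** ((T_rated - T_actual) / 10)
# Approximation only -- real datasheets give the manufacturer's own model.

def estimated_life_hours(rated_hours, rated_temp_c, actual_temp_c):
    return rated_hours * 2 ** ((rated_temp_c - actual_temp_c) / 10)

# A 2,000 h / 105 C part running at a cool 45 C:
life = estimated_life_hours(2000, 105, 45)
print(f"{life:.0f} h (~{life / 8760:.1f} years)")  # 128000 h, ~14.6 years
```

Which is why keeping electrolytics cool matters so much: the same part that dies in 83 days at its temperature limit can plausibly last over a decade well below it.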
If you want something to have any longevity, it is important to keep them as cool as possible, and get the highest endurance models (I've seen ones as high as 15,000 hours). As the electrolyte dries out, it becomes less conductive, which increases ESR, which in turn increases heat, which compounds the problem. Tantalum Tantalum capacitors are the other variety of electrolytic capacitors. These use manganese dioxide as their electrolyte, which is solid in its finished form. During production, manganese dioxide is dissolved in an acid, then electrochemically deposited (similar to electroplating) onto the surface of tantalum powder which is then sintered. The exact details of the 'magic' part where they create an electrical connection between all the tiny pieces of tantalum powder and the dielectric are not known to me (edits or comments are appreciated!) but suffice it to say, tantalum capacitors are made from tantalum because of a chemistry that permits us to easily manufacture them from a powder (high surface area). This gives them terrific volumetric efficiency, but at a cost: the free tantalum and manganese dioxide can undergo a reaction similar to thermite, which is aluminum and iron oxide. Only, the tantalum reaction has much lower activation temperatures - temperatures that are easily and quickly achieved should opposite polarity or an overvoltage event punch a hole through the dielectric (tantalum pentoxide, much like aluminum oxide) and create a short. This is why you see tantalum capacitors derated in voltage and current by 50% or more. For those unaware of thermite (which is a lot hotter but still not dissimilar to the tantalum and MnO2 reaction), there is a ton of fire and heat. It is used to weld railroad rails to each other, and it does this task in seconds. There are also polymer electrolytic capacitors, which use conductive polymer that, in its monomer form, is a liquid, but when exposed to the right catalyst, will polymerize into a solid material.
This is just like super glue, which is a liquid monomer that polymerizes solid once it is exposed to moisture (either in/on the surfaces it is applied to, or from the air itself). In this way, polymer capacitors can be mostly a solid electrolyte, which results in reduced ESR, greater longevity, and generally better robustness. They still have some small amount of solvent in the polymer matrix however, and it is needed to be conductive. So they still dry out. No free lunch sadly. Now, what are the actual electrical properties of these types of capacitors? We already mentioned polarity, but the other is their ESR and ESL. Electrolytic capacitors, due to being constructed as a very long plate wound into a coil, have relatively high ESL (equivalent series inductance). So high in fact, that they are completely ineffective as capacitors above 100kHz, or 150kHz for polymer types. Above this frequency, they are basically just resistors that block DC. They won't do anything to your voltage ripple, and instead will make the ripple be equal to the ripple current multiplied by the capacitor's ESR, which can often make ripple even worse . Of course, this means any sort of high frequency noise or spike will just shoot right through an aluminum electrolytic capacitor like it wasn't even there. Tantalums are not quite as bad, but they still lose their effectiveness with medium frequencies (the best and smallest ones can almost hit 1MHz, most lose their capacitive characteristic around 300–600kHz). All in all, electrolytic capacitors are great for storing a ton of energy in a small space, but are really only useful for dealing with noise or ripple below 100kHz. If not for that critical weakness, there would be little reason to use anything else. 3. Ceramic Capacitors Ceramic capacitors use a ceramic as their dielectric, with metallization on either side as the plates. I will not be going into Class 1 (low capacitance) types, but only class II. 
Class II capacitors cheat using the ferroelectric effect. This is very much akin to ferromagnetism, only with electric fields instead. A ferroelectric material has a ton of electric dipoles that can, to some degree or another, be oriented in the presence of an external electric field. So the application of an electric field will pull the dipoles into alignment, which requires energy, and causes a massive amount of energy to ultimately be stored in the electric field. Remember how a vacuum was the baseline of 1? The ferroelectric ceramics used in modern MLCCs have a dielectric constant on the order of 7,000. Unfortunately, just like ferromagnetic materials, as a stronger and stronger field magnetizes (or polarizes in our case) a material, it begins running out of more dipoles to polarize. It saturates. This ultimately translates into the nasty property of X5R/X7R/etc type ceramic capacitors: their capacitance drops with bias voltage. The higher the voltage across their terminals, the lower their effective capacitance. The amount of energy stored is still always increasing with voltage, but it is not nearly so good as you would expect based on its unbiased capacitance. Voltage rating of a ceramic capacitor has very little effect on this. In fact, the actual withstanding voltage of most ceramics is much higher, 75 or 100V for the lower voltage ones. Indeed, I suspect many ceramic capacitors are the exact same part but with different part numbers, the same 4.7µF capacitor being sold as both a 35V and 50V capacitor under different labels. The graph of some MLCCs' capacitance vs. bias voltage is identical, save for the lower voltage one having its graph truncated at its rated voltage. Suspicious, certainly, but I could be wrong. Anyway, buying higher rated ceramics will do nothing to combat this voltage related capacitance falloff; the only factor that ultimately plays a role is the physical volume of the dielectric. More material means more dipoles.
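The point that stored energy keeps rising with voltage even while capacitance collapses can be checked numerically. The C(V) curve below is a toy model I made up to roughly mimic an "80% loss at rated voltage" part; it is not any real device's datasheet curve:

```python
# Toy model: a 10 uF Class II ceramic whose differential capacitance C(V)
# falls with bias, losing ~80% by 50 V.
C0 = 10e-6   # unbiased capacitance, F
V0 = 25.0    # fitting constant for the made-up curve, V

def cap(v):
    """Differential capacitance dQ/dV at bias v (illustrative model only)."""
    return C0 / (1 + (v / V0) ** 2)

def energy(v_max, steps=100_000):
    """Stored energy E = integral of v * C(v) dv from 0 to v_max (midpoint rule)."""
    dv = v_max / steps
    return sum((i + 0.5) * dv * cap((i + 0.5) * dv) * dv for i in range(steps))

print(f"C at 50 V: {cap(50) * 1e6:.1f} uF")     # 2.0 uF -- 80% of C0 is gone
print(f"E at 25 V: {energy(25) * 1e6:.0f} uJ")
print(f"E at 50 V: {energy(50) * 1e6:.0f} uJ")  # still more than at 25 V
```

Note that an ideal 10 µF capacitor would hold ½CV² = 12.5 mJ at 50 V; this model holds only about 5 mJ — more energy than at lower bias, but far less than the unbiased capacitance would suggest, which is exactly the behavior described above.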
So physically larger capacitors will retain more of their capacitance under voltage. This is also not a trivial effect. A 1210 10µF 50V ceramic capacitor, a veritable beast of a capacitor, will lose 80% of its capacitance by 50V. Some are a little better, some are a little worse, but 80% is a reasonable figure. The best I have seen kept about 3µF of capacitance by the time it hit 60V, in a 1210 (inches) package anyway. A 10µF 1206 (inches) sized 50V ceramic will be lucky to have 500nF left by 50V. Class II ceramics are also piezoelectric and pyroelectric, though this doesn't really impact them electrically. They have been known to vibrate or sing due to ripple, and can act as microphones. Probably best to avoid using them as coupling capacitors in audio circuits. Otherwise, ceramics have the lowest ESL and ESR of any capacitor. They're the most 'capacitor-like' of the bunch. Their ESL is so low that the primary source is the height of the end terminations on the package itself. Yes, the height of an 0805 ceramic is the main source of its 3 nH of ESL. They still behave like capacitors into the many MHz, or even higher for specialized RF types. They also can decouple a lot of noise, and decouple very fast things like digital circuits, things electrolytics are useless for. In conclusion, electrolytics are: lots of bulk capacitance in a tiny package terrible in every other way They are slow, they wear out, they catch fire, they will turn into a short if you polarize them wrong. By every criterion capacitors are measured by, save for capacitance itself, electrolytics are absolutely terrible. You use them because you have to, never because you want to. Ceramics are: Unstable and lose a lot of their capacitance under voltage bias Can vibrate or act as microphones. Or nanoactuators! Are otherwise awesome. Ceramic capacitors are what you want to use, but aren't always able to.
They actually behave like capacitors, even at high frequencies, but can't match the volumetric efficiency of electrolytics, and only Class 1 types (which have very small amounts of capacitance) are going to have a stable capacitance. They vary quite a bit with temperature and voltage. Oh, they also can crack and are not as mechanically robust. Oh, one last note, you can use electrolytics just fine in AC/non-polarized applications, with all their other problems still in play of course. Just connect a pair of regular polarised electrolytic capacitors with same-polarity terminals together, and now the opposite-polarity ends are the terminals of a brand new, non-polar electrolytic. As long as their capacitance values are fairly well-matched and there is a limited amount of steady-state DC bias, the capacitors seem to hold out in use. | {
"source": [
"https://electronics.stackexchange.com/questions/232642",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/56642/"
]
} |
233,127 | Most modern touch screens in portable devices are made of glass. This glass often breaks if accidentally dropped. Also, it is very reflective, making it difficult to use in strong light. I know that touch screens without glass exist. For example, the multi-touch screen on my e-ink e-reader has a plastic front. I remember many other examples, such as the personal in-flight entertainment systems on many airplanes. What are the reasons that most modern portable touch devices come with a glass panel on their fronts, rather than plastic or something else? The cracking of glass seems to be a pretty big problem. Edit: I've seen a lot of cracked touch devices, and it's nearly always only the front panel that's cracked. The actual display is usually fine underneath. Even the digitizer usually works perfectly. | Title of question: Is there a technical reason why most touch screens use glass rather than plastic? Note the word "technical" and not "marketing" What are the reasons that most modern portable touch devices come with
a glass panel on their fronts, rather than plastic or something else? Glass (as a cheap and common material) has a good dielectric constant (more than most cheap plastics) and this makes the change in capacitance bigger for those devices using that technology. This makes life easier on the electronics that has to detect finger positions and movement. Taken from this article | {
"source": [
"https://electronics.stackexchange.com/questions/233127",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/108245/"
]
} |
233,230 | I googled and found out through some forum that: DC has a constant amplitude which overheats and destroys the voice coil of the speaker. Could someone clarify if this answer is complete and accurate? | ALL current will heat the voice coil of a speaker. But AC current is useful to reproduce sounds (which is what a speaker is made for). On the other hand, DC current will produce the equivalent amount of heating as an equivalent AC current, but it will produce nothing but a fixed offset (versus moving the cone in and out to produce sound). And while you can hear AC current, and you can hear when it is "too loud" and distorting the speaker, you cannot hear DC, so you don't know whether your speaker voice-coil is sitting there frying until you see the smoke. Also, DC current biases the cone off center, which could increase even harmonic distortion. For these reasons it is never a good idea to allow DC current to go into a speaker voice-coil. | {
"source": [
"https://electronics.stackexchange.com/questions/233230",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/108058/"
]
} |
233,600 | I have been looking at circuit protection using TVS and Zener diodes. I have seen the following symbols used to represent TVS diodes in circuit diagrams: simulate this circuit – Schematic created using CircuitLab I guess the first question is whether there is a meaningful difference between TVS and Zener diodes , and the answer seems to be, "Their characteristics are similar, but their design and test specs, and intended applications, are different: Zeners are for specific and potentially continuous voltage regulation. TVS diodes are less precise about voltage and designed rather to shunt (and survive) large power transients." My impression so far is that of the symbols above: Should be assumed to refer to a Zener diode (unless notes indicate otherwise). Unambiguously indicates a TVS diode. Unambiguously indicates a TVS diode. Probably refers to a pair of Zener diodes, but could refer to a single TVS diode. Are these reasonable assumptions? I imagine that the only time one would consistently run into trouble is when using a TVS diode instead of a pair of Zener diodes. E.g., using a TVS diode, with its imprecise breakdown voltage , when the circuit calls for a "waveform clipper" would produce terrible results. On the other hand, using a Zener when a TVS was intended one would likely either never notice the difference if large power transients aren't part of routine operation, or else one would probably notice the difference quite quickly as the Zener was fried? Or is the correct answer to this ambiguity simply, "Yes, they're ambiguous. And until you're sure which diode to use you're not ready to build the circuit." | The reason the same symbol is sometimes used for TVS diodes (Transorbs) and Zeners is that a Transorb has a lot in common with a Zener. An ideal Zener and an ideal TVS diode would be indistinguishable in their characteristics.
This leads to ... laziness in library management (or ignorance) and the same symbol is used. Regulator Zeners and TVS-Zener diodes differ in aspects of their construction to facilitate either higher continuous rating or high pulse capability. Zener TVS devices are constructed with large area silicon p-n junctions designed to operate in avalanche and handle much higher currents than their cousins, Zener voltage regulator diodes Only uni-directional TVS diodes are created at wafer level. The bidirectional TVS diodes you can buy are just two such dies packaged in series. Examples of symbols for some TVS devices: From your images Zener diode unless the part number calls up a TVS TVS TVS Back-to-back Zeners unless the part number calls up two unidirectional TVS http://www.onsemi.com/pub_link/Collateral/HBD854-D.PDF Using a TVS diode instead of, or in addition to a MOV for AC line protection? | {
"source": [
"https://electronics.stackexchange.com/questions/233600",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/39511/"
]
} |
233,769 | I've got a PC fan here (I've peeled off a sticker). And although this one doesn't come with the usual 3 or 4 wire connectors, it has enough electronics in it to only operate when power + is on red, and - is black but/and it won't operate when power is connected the other way round. I don't know anything about how these (brushless?) PC fans are controlled, but I assume one of the basic/first steps in the circuitry is to make sure polarity is right. Right? So, now, is it possible to reverse the motor with a simple hack, or is is not possible without heavy modification of the controller or stuff... Ideas how to trick it into spinning the other way round? (Notes: Sorry for not providing the exact model of the fan. Let's assume it's pretty generic. Also, I know the fan blades are designed to spin this way round, and reversing it would mean having a less optimal fan. I can't just flip the housing. It's a long story why, but that's not an option. I couldn't get the casing open or the fan blades off, it's all pretty sturdy) Update: Taking all the feedback I had so far into account, I first tried to get that darn thing open (no success) and then had a closer look from the outside: What we see here from the side is (4 legs) a hall sensor, right? That means/would mean: it's the "sensored type" of fan, meaning that even breaking it open and swapping motor cables would have no effect (no cables btw., the motor seems soldered to the controller board). As I have a hard time deciding on the accepted answer, I think I have to check Olin's as he was the first to point that out, although pericynthion was first with an answer. | Reversing the direction of rotation will be difficult. Since you only supply power, there is a controller in the fan that senses rotor position and commutates the motor accordingly. If you can get in there, you can probably reverse the direction by flipping two of the three sensors and two of the three windings. 
That won't be easy to do since these are all nicely integrated onto a small board. Supplying negative power won't work, just fry the electronics. However, you can still "reverse" the fan by simply installing it backwards. | {
"source": [
"https://electronics.stackexchange.com/questions/233769",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/109863/"
]
} |
233,851 | For a very long time I have been wondering, where does electricity go after being used? When I use my tablet it runs from a battery. Where does the power go? | Congratulations, you've uncovered the biggest scam of the 20th century. Read this quick before the men in black come and get you. Think about how AC power really works. "AC" stands for alternating current , meaning it goes back and forth. The power companies give you electricity on one half-cycle, then suck it all back on the next half cycle. What a racket! Seriously though, "electricity" is not a thing, more like a concept. "Amount of electricity" is nonsensical babble. You can have some specified amount of power, voltage, current, or other measurable properties, but not "electricity". For those that don't fully understand current, voltage, and power, it is best to just avoid using the word "electricity" at all, since they'll most likely use it incorrectly. To therefore answer your question, electricity doesn't "go" anywhere since it's not a thing or stuff that ever was anywhere in the first place. Current and voltage together can be used to move energy around. When a battery is powering your tablet, it is producing voltage and current, thereby transferring power from inside it to the outside. The tablet uses that power to operate the computer inside, light the screen, transmit data over radio waves, etc. Energy (and power, power is just energy per time) is not created or destroyed, just moved around. In the case of the battery powering the tablet, the energy starts out in chemical form inside the battery. It then takes on electrical form coming out of the battery. The tablet uses it in electrical form, but eventually it gets turned to heat. If you leave a tablet running just sitting there, you should be able to notice that it's a bit warmer than whatever it's sitting on. The energy that was in the battery in chemical form is now in the stuff the tablet is made of in thermal form. 
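The energy all ends up as heat, and a rough back-of-envelope sketch shows how undetectable that heat becomes once it spreads around a room. The battery size, room thermal mass, and specific heat below are my own illustrative assumptions, not figures from this answer:

```python
# Where does a tablet battery's energy end up? As a tiny temperature rise.
battery_wh = 30.0               # assumed tablet battery capacity, watt-hours
energy_j = battery_wh * 3600    # 1 Wh = 3600 J  ->  108 kJ of heat

# Assume the heat eventually spreads into ~2000 kg of room air, walls and
# furnishings, with an average specific heat of ~1000 J/(kg*K):
thermal_mass_j_per_k = 2000 * 1000

delta_t = energy_j / thermal_mass_j_per_k
print(f"{delta_t * 1000:.0f} millikelvin")  # ~54 mK of warming
```

A few hundredths of a degree spread over an entire room's contents: instrument territory, not something you would ever feel.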
Eventually that will heat the air in the room, which will heat something else, etc. By the time the relatively small amount of energy in a tablet battery is spread out over a whole room, you'd need sensitive scientific instruments to detect it. | {
"source": [
"https://electronics.stackexchange.com/questions/233851",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/109905/"
]
} |
234,671 | I wanted to give VFDs a try so I ordered a couple of 8x14 segment display modules. They arrived today, but I'm concerned because each has a big black stain in the corner. If it were on an LCD I'd suspect they're broken, but since I don't know this display technology at all, can someone tell me: Are those displays broken or is the stain a normal thing? If this thing has a name, what is it called? Here is a picture of the modules (the stain is on the top right corner): Note that after checking on the seller website and a few pictures on other online resources, this kind of stain is visible on some of them, so it may be a normal thing. | It's a getter; it should absorb any residual gases in the display. Vacuum tubes have getters too (the black part on top). If that spot turns white (though in some cases I have seen it become transparent/invisible or just fall off) it means that the glass is broken and the VFD or tube is full of air and is no longer functional. | {
"source": [
"https://electronics.stackexchange.com/questions/234671",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/91879/"
]
} |
234,849 | I had a power outage and I thought about the utility poles near the street. How do electricians know which power line is down considering that there are many miles of them? For example, if there's a car that crashed into the pole and brought the power line down, how does the electrician (or power utility) know where the break is located? | Customer service power lines are typically segmented into blocks or rings, so that damage to part of the grid does not disable a huge geographic region. A substation may feed several separate customer grids near it from a large regional power line. Each customer grid has its own circuit breakers at the substation, which are usually outfitted with automatic reclosers. During a power failure, you may notice the power go off, come back on, go off, come back on, and then go off a third time and stay off. This is due to automatic reclosing, which attempts to immediately restore power. Some line failures such as a falling dead tree branch are brief, and power can be restored immediately after the tree branch has fallen past the line. If the fault can not be cleared, the faulting line segment is usually immediately reported by the recloser or substation to a regional electric grid control center. It is often completely unnecessary to call in a widespread power failure during a storm, as the control center knows immediately when a line segment has lost power. For example, automatic reclosing on Youtube: Smart Grid - Self Healing - NOJA Power https://www.youtube.com/watch?v=rVTIwr1Rk2c There is no way to instantly pinpoint the location of a fault on a line segment that cannot be automatically restored. These faults are typically located by linemen getting into a utility truck and driving around the neighborhood inspecting every part of the line segment. They have detailed maps to guide them. Multiple utility trucks may be assigned to inspect the down segment. 
If you see an electric utility truck slowly driving down the street during an outage, it is because they are visually inspecting the lines. When there is an outage at night, utility trucks are equipped with high power lighting. The utility trucks will drive slowly along roads with one person aiming the spotlight, and another person looking at the illuminated lines and equipment to check for damage. This is part of the reason why an outage that can't be automatically reclosed may take several hours to be restored, due to the linemen needing to drive around and inspect all the lines to check for broken wires, damaged insulators, heavy ice weighing down lines, or objects contacting the lines. And this is also why the customer's local power lines are segmented into small areas, because the larger the customer line segment, the more time it takes to drive around during an outage to locate the fault. The driving to look for line problems takes even more time in adverse weather conditions such as a blizzard or freezing rain. For example, on Youtube: How Power Gets Restored - Puget Sound Energy https://www.youtube.com/watch?v=gLikTRjrmnc Once the power line fault is discovered, the linemen will take additional safety measures to ground and/or short-circuit the power lines, so that in the event power is restored before their work is completed, the safety grounds will immediately re-trip the line breakers and prevent injury to the linemen. Safety grounding also protects linemen from home power systems which could be miswired to backfeed power into the inactive line. A small home generator is capable of backfeeding into a down power line and energizing it at its 12,000 volt (or higher) operating voltage. Safety grounding and shorting of the down power line assures these miswired home generation systems will result in blown fuses or burned out home generators, rather than a dead lineman.
For example, on Youtube: Protective Grounding https://www.youtube.com/watch?v=T0vZDl2kFI8 For the very large transmission lines that feed a region (the poles are much taller than local grids), these are often cross-country and may not be near roads. Access to these lines for inspection and repair can be done by ground vehicles, but access is difficult in the winter or when there has been rain and the ground is soft and muddy. Inspection and repair work for these very large transmission lines is instead often done by helicopter, and due to the helicopter not being in contact with the ground, they can approach a live power line, make contact with it, and work on the line to do maintenance or repairs while it still is fully powered. For example, on Youtube: Helicopter Maintenance on energized 765,000 Volt Line https://www.youtube.com/watch?v=x94BH9TUiHM 'Hot-Washing' the Insulators of a 500,000 Volt Power Line! https://www.youtube.com/watch?v=lcjhjna9jZE | {
"source": [
"https://electronics.stackexchange.com/questions/234849",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/43145/"
]
} |
235,386 | Cited article In the article above, which deals with working with high voltage, there is the statement “Measure the voltage to ensure that the capacitor has safely discharged, and be prepared for a surprise as with some capacitors the voltage will return after discharge due to an unwanted property of large electrolytics. Repeat the discharge until the voltage is gone.” What is this unwanted property? | This phenomenon is called dielectric absorption. It is caused by hysteresis in the response of the polarized molecules in the dielectric to the applied electric field. | {
"source": [
"https://electronics.stackexchange.com/questions/235386",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/94057/"
]
} |
235,669 | I am designing a two layer board, the problem is I do not know how to select via diameter and drill size, as well as outer and inner diameters. In my circuit I use 56, 12 and 6 mil traces: I have asked the manufacturer, and they said they can make vias as small as 1 mil. So my question is what should I choose for Outer, Inner diameter and drill size? For example, is it OK to use a 10 mil drill for a 6 mil trace? And what should it be for 56 and 12 mil tracks? Also, what is the green cylinder going to look like when I get the board manufactured? I am really short on money and cannot afford to make mistakes. | The goal is to create a via with at least as much conductive area within the hole as the trace connecting to it (generally speaking, of course). My personal rule is to make the drill size diameter the same as the width of the trace, and the pad size roughly twice the diameter. This gives you a little bit of leeway in case your board is too dense to allow these sizes, and you need to adjust them. This is just a general rule that can be useful to beginners. It gives you a good size to shoot for. Here is what completed vias look like on the board: It is important to note that small vias will cost you quite a bit more than regular size ones. Generally I don't recommend going below an 8 mil drill. A microvia is a via that is less than 6 mil in diameter, and will cost you quite a bit more. Physical size (beyond the 6 mil "microvia" limit) really isn't that important unless you need to consider current-carrying capability or controlled-impedance. Once these come into play there are a lot of things you'll need to consider such as plating type, plating thickness, plating length (thickness of your board), via positioning, etc. In basic designs, however, where you just need to bring one trace to another layer, I would suggest using 8 mil for all traces smaller than 8 mil, and for thicker traces use the trace width for the drill diameter.
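The rule of thumb in this answer (drill ≈ trace width, pad ≈ 2× drill, with a practical 8 mil floor to stay out of microvia pricing) is easy to capture as a quick sizing helper. The function below is just a restatement of that rule, not an IPC-style calculation:

```python
def suggest_via(trace_mil, min_drill_mil=8):
    """Suggest (drill, pad) diameters in mils for a via joining trace_mil.

    Rule of thumb from the answer above: drill = trace width, but no
    smaller than ~8 mil to avoid microvia pricing; pad = 2x the drill.
    """
    drill = max(trace_mil, min_drill_mil)
    pad = 2 * drill
    return drill, pad

# The asker's three trace widths:
for trace in (56, 12, 6):
    drill, pad = suggest_via(trace)
    print(f"{trace:>2} mil trace -> {drill} mil drill, {pad} mil pad")
# 56 -> 56/112, 12 -> 12/24, 6 -> 8/16
# So yes: an 8-10 mil drill for the 6 mil trace is reasonable.
```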
It's just a good rule of thumb. | {
"source": [
"https://electronics.stackexchange.com/questions/235669",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/3757/"
]
} |
235,672 | The new processors are all 3.3V and dropping. So while I am used to 5V Arduinos being able to power the gate of a MOSFET, at least most of the way, this is not going to be true any more. And for an IRF630, someone pointed out that I really need to drive the gate to 10V to get the rated on-resistance. So what's the canonical way to do this? Do I have a 10V power supply and drop the voltage for the processor, or have a charge pump that generates the 10V from 3.3V? The current will be very small, because the gate has massive impedance. Finally, assuming the 10V power supply, what's a good way to switch that voltage to the gate? Do I have to use a small junction transistor because I don't have the voltage to switch a MOSFET? The junction transistor is shown in this solution: Multiplying the voltage of an output pin on an Arduino board I'm just asking if there is another way. | Three options: 1. Use a MOSFET with a VGS compatible with 3.3V, typically known as a logic-level MOSFET. 2. Use a simple NPN transistor as a switch to drive the MOSFET at a higher voltage; the logic would be inverted. 3. Use a dedicated MOSFET driver IC. | {
"source": [
"https://electronics.stackexchange.com/questions/235672",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/4054/"
]
} |
236,666 | As I understand it, a buffer gate is the opposite of a NOT gate and does not change the input: However, I sometimes see buffer gate ICs used in circuits, and to an inexperienced eye they seem to do nothing at all. For example, recently I've seen a non-inverting buffer gate used at the output of an emitter follower, roughly something like this: So when would one need to use a buffer IC in their circuit? What could be the purpose of the gate in the aforementioned schematic? | Buffers are used whenever you need... well... a buffer. As in the literal meaning of the word. They're used when you need to buffer the input from the output. There are countless ways to use a buffer. There are digital logic gate buffers, which are pass-throughs logic-wise, and there are analog buffers, which act as pass-throughs but for an analog voltage. The latter is kind of outside the scope of your question, but if you're curious, look up 'voltage follower'. So when or why would you use one? At least when the simplest and cheapest buffer of all, a copper wire/trace, is readily available? Here are a few reasons: 1. Logical Isolation. Most buffers have an ~OE pin or similar, an output enable pin. This allows you to turn any logic line into a tristate one. This is especially useful if you want to be able to connect or isolate two busses (with buffers both ways if needed), or maybe just a device. A buffer, being a buffer between those things, lets you do that. 2. Level Translation. Many buffers let the output side be powered from a different voltage than the input side. This has obvious uses for translating voltage levels. 3. Digitization/repeating/cleanup. Some buffers have hysteresis, so they can take a signal that is trying real hard to be digital, but just doesn't have very good rise times or isn't quite playing right with thresholds or whatever, and clean it up and turn it into a nice, sharp, clean-edged digital signal. 4.
Physical Isolation. You have to send a digital signal further than you like, things are noisy, and a buffer makes a great repeater. Instead of a GPIO pin on the receiving end having a foot of PCB trace connected to it, acting as an antenna, inductor, and capacitor and literally vomiting whatever the heck noise and awfulness it wants directly into that poor pin's gaping mouth, you use a buffer. Now the GPIO pin only sees the trace between it and the buffer, and the current loops are isolated. Heck, you can even properly terminate the signal now, like with a 50Ω resistor (or whatever), because you have a buffer on the transmit end too and can load them in ways you could never load a wimpy little µC pin. 5. Driving loads. Your digital input source is high impedance, too high to actually interface with the device you want to control. A common example might be an LED. So you use a buffer. You select one that can drive, say, a hefty 20mA easily, and you drive the LED with the buffer, instead of the logic signal directly. Example: You want status indication LEDs on something like an I2C bus, but adding LEDs directly to the I2C lines would cause signaling issues. So you use a buffer. 6. Sacrifice. Buffers often have various protection features, like ESD protection, etc. And often they do not. But either way, they act as a buffer between something and another thing. If you have something that might experience some sort of transient condition that could damage something, you put a buffer between that thing and the transient source. Put another way, chips love exploding almost as much as they love semiconducting. And most of the time, when something goes wrong, chips explode. Without buffers, often whatever transient is popping chips left and right will reach deep into your circuit and destroy a bunch of chips at once. Buffers can prevent that. I'm a big fan of the sacrificial buffer. If something is going to explode, I'd prefer it be a 50¢ buffer and not a $1000 FPGA.
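As a toy illustration of points 1 and 3 above (output enable and hysteresis), here is a little behavioral model -- the thresholds and names are invented for the example, not taken from any datasheet:

```c
#include <stdbool.h>

typedef enum { OUT_LOW, OUT_HIGH, OUT_HIZ } buf_out_t;

/* Behavioral model of a tri-state buffer with a Schmitt-trigger input.
   Example thresholds: 2.0 V rising, 0.8 V falling. */
static buf_out_t buffer_eval(double vin_volts, bool oe_n)
{
    static bool state_high = false; /* remembered between calls (hysteresis) */

    if (oe_n)
        return OUT_HIZ;             /* ~OE high: output tri-stated */
    if (vin_volts > 2.0)
        state_high = true;          /* crossed the rising threshold */
    else if (vin_volts < 0.8)
        state_high = false;         /* crossed the falling threshold */
    /* between thresholds: hold the last state, so noise doesn't toggle us */
    return state_high ? OUT_HIGH : OUT_LOW;
}
```

An input sagging back to 1.5 V after a clean high keeps the output high -- that's the cleanup behavior described in point 3.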
Those are some of the most common reasons I could think of off the top of my head. I'm sure there are other situations, maybe you'll get more answers with more uses. I think everyone will agree that buffers are terribly useful, even if at first glance, they seem rather pointless. | {
"source": [
"https://electronics.stackexchange.com/questions/236666",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/38958/"
]
} |
236,850 | From Wikipedia the common temperature range for electrical components is: Commercial: 0 to 70 °C Industrial: -40 to 85 °C Military: -55 to 125 °C I can understand the lower part (-40°C and -55°C), as these temperatures do exist in cold countries like Canada or Russia, or at high altitudes, but the higher part (85°C or 125°C) is a bit confusing for some parts. Heating in transistors, capacitors, and resistors is very understandable, but some ICs have approximately constant low heat generation (like logic gates). If I am considering a microcontroller operated in the Sahara desert at 50 °C ambient (I don't know if there is a higher temperature on earth), why would I need 125 °C or 85 °C? The heat built up from power loss inside shouldn't be 50 °C or 70 °C; otherwise the commercial part would fail immediately in, for example, a 25 °C environment. If I live in a moderate climate where the temperatures only fluctuate in the 0–35 °C range all year round, and I am designing industrial products for the same country only (no export), could I use commercial-grade components (assuming no certification, legislation, or accountability exists and only engineering ethics govern your actions)? | The maximum temperature the silicon experiences can be much more than ambient. 50 °C ambient certainly happens. That's only 122 °F. I've personally experienced that in the Kofa Wildlife refuge north of Yuma, Arizona. You need to design to worst case, not wishful case. So let's say ambient can be 60 °C (140 °F). That by itself isn't much of a problem, but you don't get that by itself. Take the same thermometer that reads 60 °C in open air and put it in a metal box sitting on the ground in the sun. It's going to get much hotter. I've seen someone fry an egg on the hood of a car in the sun in Phoenix AZ. Granted, this was a stunt deliberately set up for this purpose. The car was parked at the right angle, the piece of hood was tilted at the right angle, and painted flat black.
However, it still shows that just a piece of metal sitting in the sun can get really hot. I once left a car parked at the Las Vegas airport for a few days. I had left one of those cheap "stick" ballpoint pens on the dashboard, partly sticking out over the side. When I got back the pen was bent at 90° over the lip of the dashboard. I don't know what temperature such pens melt at, but clearly it gets a lot hotter than ambient under common enough conditions in an enclosed box. If you left some cheap piece of consumer electronics on the dashboard in the sun and it didn't work, you'd probably be a little irritated, then toss it and replace it. If the controller for your oil pump stopped working in the summer because it got too hot, you'd lose a lot of money, be pretty upset, and probably buy the replacement from a different company that takes quality more seriously. If your missile defense system stopped working because you deployed it in the desert of Iraq instead of some nice comfy test range in Massachusetts where it was developed, you'd be dead. The procurement officers that don't get fired will be extra careful to require all electronics to work at high temperature, and insist it get tested under those conditions. | {
"source": [
"https://electronics.stackexchange.com/questions/236850",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/81645/"
]
} |
236,983 | I know Google's answer and Wikipedia's answer for the above question. But I have a more specific question on hand. USB hubs have many incoming ports and just one outgoing port; I am able to use, say, n devices together because of this. But I do not understand: how can one USB port do data transfers with n USB ports? How can it send different data to all USB ports at the same time? | It's all to do with arbitration. Any system which requires multiple devices to be connected needs some way of determining who should talk when. There are different schemes as you would expect depending on the application. A common example - in networking we have many nodes all talking to each other. This is done by each node having an address (e.g. IP address), and when a node wants to talk to another node, it sends out a packet to that address. You then have devices such as routers which take packets coming in on multiple ports and forward them on to the correct port. The arbitration is done using memory to store packets until the destination port is free. Now on to USB. This is actually much simpler than networking because not all nodes are made equal. You have two sorts, a host, and an endpoint. There is only ever one host, but there can be many endpoints. In this case arbitration is much easier because only the host port is allowed to talk at will. Endpoints are only allowed to talk when asked to by the host, and the host only ever talks to one endpoint at a time. For host->endpoint packets, the USB hubs simply pass the request from the host to all of the endpoints. Because all endpoints have an address, only the one to which the request was addressed will do anything with it (e.g. respond), all others will ignore the packet.
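The broadcast-and-address-filter behavior just described can be sketched in a few lines of C (the data structures and names are invented for illustration; real USB packets obviously look nothing like this):

```c
#include <stdint.h>

/* A hub "broadcasts" the host's packet to every port; only the endpoint
   whose assigned address matches acts on it. Returns the index of the
   endpoint that responded, or -1 if every endpoint ignored the packet. */
static int hub_broadcast(const uint8_t *ep_addrs, int n_eps, uint8_t dest)
{
    for (int i = 0; i < n_eps; i++) {
        if (ep_addrs[i] == dest)
            return i;   /* this endpoint recognises its address and responds */
    }
    return -1;          /* nobody matched -- the packet is ignored */
}
```

With endpoints at addresses 1, 2 and 3, a packet addressed to 2 is "seen" by all three but handled by exactly one -- which is all the arbitration the downstream direction needs.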
For endpoint->host packets, the host first sends a packet to a specific endpoint by address to say "you can talk now", and then that endpoint must immediately send a response. Because only one endpoint is allowed to talk at any given time, the USB hub will simply route the packet from whichever port responds to a request from the host. In terms of how the host works out what devices are attached, and how endpoints get their address, this is achieved through enumeration. All host and hub ports have pull-down resistors (15kOhm) on the D+ and D- lines. These put the data lines of that port into a known state when there is no device attached, a state in which the port will not send any data over D+/D- lines at all. When a device is attached, it makes itself known by connecting either the D+ (full-speed) or D- (low speed) data line to VCC using a 1.5kOhm resistor. This triggers an enumeration event. The port will then begin the process of configuring the device and assigning an address. If you were to plug in two devices simultaneously, they will be enumerated one at a time. If there are no hubs, the host simply talks to the new device and sets it up. If there are hubs in the system, it is the hub which reports the new device is attached. If a hub reports a new device is connected, the host will instruct the hub to reset the new device and start up communications. During the reset, the endpoint is given a default address of 0 (*). The host can then talk to the endpoint using the default address, and configure it with a unique non-zero address that will allow it to know when it is being talked to. (*) Because only one device is ever enumerated at a time, the address 0 will always be unique to the newly attached device. You might then ask, "well how can I then have multiple devices all talking at the same time?". Say you have a mouse, a keyboard, and a flash drive all connected to the same USB hub.
We all know you can use your mouse and keyboard at the same time while also copying files to/from your flash drive, but if only one device can talk at a time, how can that be possible? Well, it all comes down to the fact that the few hundred milliseconds it takes for your brain to notice that you have pressed a key and expect the screen to update is an eternity to the computer. A USB 2.0 interface can run at up to 480Mbps (USB 3.1 can run at up to 10Gbps!), which means that even though the host is only ever talking to one endpoint at any given time, it cycles between them so fast that you can't tell it's doing it. USB Host: "Hey, mouse on port 1, tell me if you've moved. Ok now keyboard on port 2 have you got any key presses to report? Now you there on port 3, flash drive, store this data for me. Anyone else I need to talk to? nope, ok then, mouse on port 1, tell me if you've moved..." Human: "Oh look, the computer noticed I just moved my mouse, pressed a key on my keyboard, and copied a picture to the flash drive, all at the same time!" The host device keeps track of which endpoint addresses are used and will send packets to each one sequentially or as needed (i.e. when the OS requests access to a specific device). So while it is not all happening simultaneously, the arbitration is so fast that the computer's pet human can't tell the difference. | {
"source": [
"https://electronics.stackexchange.com/questions/236983",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/110382/"
]
} |
237,307 | I've been reading up about NASA's Juno mission, and came across the Wikipedia article about JunoCam, which is Juno's onboard visible-light camera. In the article, it's mentioned that the resolution of the sensor is 1200x1600 pixels, which comes out to just under 2MP. Obviously, sending any camera into deep space and establishing a stable orbit around Jupiter is no small feat -- but seeing as Juno launched in 2011, why is JunoCam's sensor's resolution so low? I'm assuming - maybe too optimistically - that design changes like sensor selection would be finalized 4-5 years before launch. In 2006-2007, entry-level consumer DSLRs often sported 10MP sensors. Basically: is it more difficult to harden a higher-resolution sensor against hazards in space? If not, what reasons could NASA have to avoid using higher-resolution sensors? | There is one overriding requirement for deep-space missions: reliability. In general NASA Preferred Parts are quite stodgy, because the overriding need is for a mature, well-understood technology. Cutting-edge technology that doesn't work is frowned upon under the circumstances. So 10-year-old image sensors are about what you expect. Additionally, if you read the JunoCam article you linked, you'll see (second paragraph, first sentence) that data transfer rates are quite slow, on the order of 40 MB per 11 days. Increasing image size cuts down the number of images which can be acquired, and I expect that a lot of effort went into determining the tradeoff between number of images and image resolution. For what it's worth, NASA has been pushing for better data rates for its programs, but the limited power and long ranges involved make this a non-trivial problem. The LADEE mission a couple of years ago incorporated the LLCD (Lunar Laser Communication Demonstrator) which worked quite well, and this holds great promise (optical communication limit of 1 bit/photon at the receiver), so future missions may be able to do a lot better. | {
"source": [
"https://electronics.stackexchange.com/questions/237307",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/15813/"
]
} |
237,740 | There are different memory segments into which various types of data are put from C code after compilation, i.e. .text, .data, .bss, stack and heap. I just want to know where each of these segments would reside in microcontroller memory. That is, which data goes into what type of memory, given the memory types are RAM, NVRAM, ROM, EEPROM, FLASH etc. I have found answers to similar questions here, but they failed to explain what would be the contents of each of the different memory types. Any sort of help is highly appreciated. Thanks in advance! | .text The .text segment contains the actual code, and is programmed into Flash memory for microcontrollers. There may be more than one text segment when there are multiple, non-contiguous blocks of Flash memory; e.g. a start vector and interrupt vectors located at the top of memory, and code starting at 0; or separate sections for a bootstrap and main program. .bss and .data There are three types of data that can be allocated external to a function or procedure; the first is uninitialized data (historically called .bss, which also includes the 0 initialized data), and the second is initialized (non-bss), or .data. The name "bss" historically comes from "Block Started by Symbol", used in an assembler some 60 years ago. Both of these areas are located in RAM. As a program is compiled, variables will be allocated to one of these two general areas. During the linking stage, all of the data items will be collected together. All variables which need to be initialized will have a portion of the program memory set aside to hold the initial values, and just before main() is called, the variables will be initialized, typically by a module called crt0. The bss section is initialized to all zeros by the same startup code. With a few microcontrollers, there are shorter instructions that allow access to the first page (first 256 locations, sometimes called page 0) of RAM.
The compiler for these processors may reserve a keyword like near to designate variables to be placed there. Similarly, there are also microcontrollers that can only reference certain areas via a pointer register (requiring extra instructions), and such variables are designated far. Finally, some processors can address a section of memory bit by bit and the compiler will have a way to specify that (such as the keyword bit). So there might be additional segments like .nearbss and .neardata, etc., where these variables are collected. .rodata The third type of data external to a function or procedure is like the initialized variables, except it is read-only and cannot be modified by the program. In the C language, these variables are denoted using the const keyword. They are usually stored as part of the program flash memory. Sometimes they are identified as part of a .rodata (read-only data) segment. On microcontrollers using the Harvard architecture, the compiler must use special instructions to access these variables. stack and heap The stack and heap are both placed in RAM. Depending on the architecture of the processor, the stack may grow up, or grow down. If it grows up, it will be placed at the bottom of RAM. If it grows down, it will be placed at the end of RAM. The heap will use the remaining RAM not allocated to variables, and grow the opposite direction of the stack. The maximum size of the stack and heap can usually be specified as linker parameters. Variables placed on the stack are any variables defined within a function or procedure without the keyword static. They were once called automatic variables (auto keyword), but that keyword is not needed. Historically, auto exists because it was part of the B language which preceded C, and there it was needed. Function parameters are also placed on the stack.
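A minimal C sketch of how typical declarations map onto the segments described above (the comments show the usual placement on a microcontroller; exact section names vary by toolchain):

```c
#include <stdint.h>

const uint16_t lookup[4] = {1, 2, 4, 8}; /* .rodata: const data, kept in flash */
uint32_t counter = 100;                  /* .data: RAM, initial value copied
                                            from flash by crt0 before main() */
uint32_t samples[8];                     /* .bss: RAM, zeroed by startup code */

uint32_t sum_lookup(void)
{
    uint32_t total = 0;                  /* automatic variable: lives on the stack */
    for (int i = 0; i < 4; i++)
        total += lookup[i];
    return total;                        /* 1 + 2 + 4 + 8 = 15 */
}
```

Anything allocated with malloc() at run time would come from the heap, growing toward the stack as described above.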
Here is a typical layout for RAM (assuming no special page 0 section): EEPROM, ROM, and NVRAM Before Flash memory came along, EEPROM (electrically erasable programmable read-only memory) was used to store the program and const data (.text and .rodata segments). Now there is just a small amount (e.g. 2 KB to 8 KB) of EEPROM available, if any at all, and it is typically used for storing configuration data or other small amounts of data that need to be retained over a power-down/power-up cycle. These are not declared as variables in the program, but instead are written to using special registers in the microcontroller. EEPROM may also be implemented in a separate chip and accessed via an SPI or I²C bus. ROM is essentially the same as Flash, except it is programmed at the factory (not programmable by the user). It is used only for very high volume devices. NVRAM (non-volatile RAM) is an alternative to EEPROM, and is usually implemented as an external IC. Regular RAM may be considered non-volatile if it is battery-backed up; in that case no special access methods are needed. Although data can be saved to Flash, Flash memory has a limited number of erase/program cycles (1000 to 10,000) so it's not really designed for that. It also requires blocks of memory to be erased at once, so it's inconvenient to update just a few bytes. It's intended for code and read-only variables. EEPROM has much higher limits on erase/program cycles (100,000 to 1,000,000) so it is much better for this purpose. If there is EEPROM available on the microcontroller and it's large enough, it's where you want to save non-volatile data. However, you will also have to erase in blocks first (typically 4KB) before writing. If there is no EEPROM or it's too small, then an external chip is needed. A 32KB EEPROM is only 66¢ and can be erased/written to 1,000,000 times.
An NVRAM with the same number of erase/program operations is much more expensive (about 10x). NVRAMs are typically faster for reading than EEPROMs, but slower for writing. They may be written to one byte at a time, or in blocks. A better alternative to both of these is FRAM (ferroelectric RAM), which has essentially infinite write cycles (100 trillion) and no write delays. It's about the same price as NVRAM, around $5 for 32KB. | {
"source": [
"https://electronics.stackexchange.com/questions/237740",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/111841/"
]
} |
237,777 | I am currently working on a design which includes the AIS3624DQ accelerometer from ST. In the datasheet, it says (section 4, page 17): "Power supply decoupling capacitors (100 nF ceramic, 10 μF aluminum) should
be placed as near as possible to the pin 14 of the device (common design practice)." Can I replace the 10μF aluminum (due to its large size) with a tantalum capacitor instead? | You can replace the aluminum electrolytic with a tantalum, but using neither is a much better choice. Nowadays, ceramics can easily cover the 10 µF at 10s of volts range. There is no point using either an electrolytic or a tantalum. You also don't need a separate 100 nF (that value is so 1980s anyway) capacitor if you use a ceramic for the larger value. Think about what is going on here and what the datasheet is trying to say. These devices are notorious for being quite sensitive to power supply noise. I've actually seen a similar part amplify power ripple from the power supply to the output. The datasheet therefore wants you to put a "large" amount of capacitance on the power line to the device. That's where the 10 µF came from. Back when this datasheet was written, or whoever wrote it stopped keeping up with developments, 10 µF was an unreasonably large request for any capacitor technology that was good at high frequencies. So they suggest an electrolytic for the 10 µF "bulk" capacitance, but to then place a 100 nF ceramic across that. That ceramic will have lower impedance at high frequencies than the electrolytic, despite the fact that it has 100 times less capacitance. Even in the last 15-20 years or so, that 100 nF could have been 1 µF without being burdensome. The common value of 100 nF comes from the ancient thru-hole days. That was the largest size cheap ceramic capacitor that still worked like a capacitor at the high frequencies required by digital chips. Look at computer boards from the 1970s and you will see a 100 nF disk capacitor next to every one of the digital ICs. Unfortunately, using 100 nF for high frequency bypass has become a legend on its own.
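To put rough numbers on that comparison, here is the ideal-capacitor impedance |Z| = 1/(2*pi*f*C) -- a sketch only, since it deliberately ignores the ESR and ESL that produce the resonant dips of real parts:

```c
/* Ideal capacitive reactance in ohms; real capacitors add ESR and ESL,
   so this only models the curve below self-resonance. */
static double cap_impedance_ohms(double farads, double hertz)
{
    const double pi = 3.141592653589793;
    return 1.0 / (2.0 * pi * hertz * farads);
}
```

At 10 MHz this gives about 0.16 ohm for 100 nF versus about 0.016 ohm for 1 µF -- below resonance, the larger ceramic simply wins by the capacitance ratio.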
However, the 1 µF multi-layer ceramic capacitors of today are cheap and actually have better characteristics than the old leaded 100 nF caps of the Pleistocene. Take a look at an impedance versus frequency graph of a family of ceramic caps, and you'll see the 1 µF has lower impedance just about everywhere compared to the 100 nF. There may be a small dip in the 100 nF near its resonant point where it has lower impedance than the 1 µF, but that will be small and not very relevant. So, the answer to your question is to use a single 10 µF ceramic. Make sure whatever you use still is actually 10 µF or more at the power voltage you are using. Some types of ceramics go down in capacitance with applied voltage. Actually today you can use a 15 or 20 µF ceramic and have better characteristics across the board compared to the 100 nF ceramic and 10 µF electrolytic recommended by the datasheet. | {
"source": [
"https://electronics.stackexchange.com/questions/237777",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/104309/"
]
} |
237,779 | I am using the inverter circuit I posted before Inverter in this previous thread to drive a BLDC motor. However, the current waveforms have some strange shapes that I couldn't understand. Below is the Hall sensor signals and current waveforms. I have confirmed that the six-step switching sequence is correct with respect to Hall signals. So my question is, why could the current peaks marked in yellow circles happen? How to eliminate them? [Edit] The current is measured using Tektronix TCPA300 current sensor and clamped at individual motor terminal wire. I also captured the Vgs for upper arm and lower arm and I couldn't see there is wrong ON status for either one. Upper arm Vgs vs current Lower arm Vgs vs current I also tested another driver board which has only 6 MOSFETs in the inverter, the phenomenon is the same. The following picture is the motor terminal voltage Va vs. Ia. From this picture, it seems the lower side freewheeling diode was conducting when both upper and lower MOSFET was OFF . How could that happen? [Edited again to answer @John Birckhead] Here is the results showing the ground at MOSFET current path side and the gate driver side. However, from the results, I couldn't conclude that lower side MOSFETs were wrongly turned-on because of noise spikes on grounds because their amplitudes are only less than 1V. | | {
"source": [
"https://electronics.stackexchange.com/questions/237779",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/51222/"
]
} |
239,257 | There has been a lot of research around GaN transistors, proving that they have a very low on-resistance, low gate-charge and are very effective at high temperatures. So why are we still mostly producing Si transistors? Even if the GaN transistor is more expensive in production, it surely must compensate if it's used in ICs? | I've been using GaN extensively since 2013 or so, primarily for a niche application that can easily benefit from one huge advantage GaN has over Si -- radiation tolerance. There's no gate-oxide to puncture and suffer from SEGR, and public research has shown the parts living past 1MRad with minimal degradation. The small size is amazing as well -- in the size of maybe a quarter or two (the coin), you can implement a 10A+ DC/DC converter with ease. Coupled with the ability to purchase them with leaded-solder bars, and some third parties packaging them in hermetically sealed packages, they are the future. It's more expensive, and "trickier" to work with. There is no gate-oxide, just a metal-semiconductor junction, so the gate drive voltage is highly restrictive (for enhancement mode as built by EPC) -- any excess voltage will destroy the part. There are only a handful of publicly available gate drivers right now -- folks are just now starting to build more drivers and give us more options than the National LM5113. The 'canonical' implementation you'll see around is the BGA LM5113 + LGA GaN FETs, because even the bond-wires in other packages add too much inductance. As a reminder, here's where that ringing comes from: EPC's eGaN devices utilize a 2DEG and can be classed as a HEMT in our applications. This is where a lot of their stupidly low RDS(on) comes from -- it's usually in the single-digit milliohms. They have incredibly fast speeds, which means you have to be very aware of Miller-effect induced turn-on.
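For a quick feel of why layout matters so much at these edge rates, here is a back-of-the-envelope V = L*di/dt helper -- the numbers in the example are invented for illustration, not a model of any real part:

```c
/* Voltage developed across a parasitic loop inductance while the current
   slews. With nanohenries and nanoseconds the units cancel to volts. */
static double ldidt_volts(double loop_nh, double delta_i_amps, double dt_ns)
{
    return loop_nh * delta_i_amps / dt_ns;
}
```

Slewing 10 A in 1 ns through a 3 nH loop drops about 30 V across the parasitic -- plenty to ring the switch node hard, and a hint of why sub-3nH layouts and the roughly 5-6 V absolute-max gates make GaN boards so unforgiving.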
Additionally, as mentioned above, parasitic inductances in the switching loop become much more critical at these speeds -- you actually have to think about your dielectric thicknesses and component placement to keep that loop inductance low (<3nH is doing alright, IIRC, but as discussed below, it can/should be much lower), as also seen below: For EPC, they are also built at a conventional foundry, lowering costs. Other folks include GaN systems, Triquint, Cree, etc -- some of those are specifically for RF purposes, whereas EPC primarily targets power conversion / related applications (LIDAR, etc.). GaN is natively depletion-mode as well, so folks have different solutions for making them enhancement, including simply stacking a small P-channel MOSFET on the gate to invert its behavior. Another interesting behavior is the "lack" of reverse recovery charge, at the expense of a higher-than-silicon diode drop when in that state. It's kind of a marketing thing -- they tell you that "because there are no minority carriers involved in conduction in an enhancement-mode GaN HEMT, there are no reverse recovery losses". What they kind of gloss over is that V_{SD} is generally up in the 2-3V+ range compared to 0.8V in a Si FET -- just something to be aware of as a system designer. I'll touch on the gate again as well -- your drivers basically have to keep a ~5.2V bootstrap diode internally to prevent cracking the gates on the parts. Any excess inductance on the gate trace can lead to ringing that will destroy the part, whereas your average Si MOSFET usually has a Vgs around +/-20V or so. I've had to spend many an hour with a hot-air gun replacing an LGA part because I messed this up. Overall, I'm a fan of the parts for my application. I don't think the cost is down there with Si yet, but if you're doing niche work or want the highest possible performance, GaN is the way to go -- the winners of the Google Little Box Challenge used a GaN-based power stage in their converter.
Silicon is still cheap, easy to use, and people understand it, especially from a reliability POV. GaN vendors are going to great lengths to prove their device reliability figures, but MOSFETs have many decades of lessons-learned and reliability engineering data at the device physics level to convince folks that the part isn't going to burn out over time. | {
"source": [
"https://electronics.stackexchange.com/questions/239257",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/113093/"
]
} |
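The ringing described in the answer above is just the power loop's stray inductance resonating with the device's effective output capacitance, so its frequency can be estimated with the LC resonance formula. A back-of-envelope sketch in Python (the 3 nH loop figure is taken from the answer; the ~500 pF capacitance is an assumed illustrative value, not a datasheet number):

```python
import math

def ringing_freq_hz(loop_l_h, out_c_f):
    """Resonant frequency of the parasitic tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(loop_l_h * out_c_f))

# 3 nH power loop (the "doing alright" figure above) against an assumed ~500 pF
# of effective output capacitance -- not a datasheet number.
f_ring = ringing_freq_hz(3e-9, 500e-12)
print(f"{f_ring / 1e6:.0f} MHz")  # ~130 MHz
```

Halving the loop inductance pushes the ringing frequency up by a factor of sqrt(2) and, more importantly, halves the energy stored in the parasitic tank for a given switched current.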
239,259 | I'm fairly new to this forum and the entire field of electrical engineering. I have a microcontroller (Atmega328P) project that includes a current sensor and several radios (900Mhz-2.4ghz). It is battery powered from a lithium 36V nominal battery. I was looking at using the LM46002 simple switcher from texas instruments to power my project. However, as I delve deeper into the design I'm not sure what switching frequency to use, and also what supporting circuitry I might need such as bypass capacitors and the such to help smooth the output. I will be making ADC measurements and the same question applies to the ADC circuitry - what considerations will I have to make with the design to best power my project and get the most accurate measurements. Thanks for your time :) | | {
"source": [
"https://electronics.stackexchange.com/questions/239259",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/107683/"
]
} |
239,513 | How do I memorize a diode's polarity, i.e. which terminal in the symbol is the anode and which is the cathode? | I'll mention two mnemonic techniques, both of which I learnt from someone else many years ago. The anode is conventionally abbreviated A and the cathode K; that is standard and easy to remember. First technique: write a K, then fill in the blanks to turn it into the diode symbol. The side of the symbol where the K was drawn is the cathode (K); by elimination, the opposite side is the anode (A). Once you learn to recognize the K, you can identify the anode and cathode even when the diode is drawn in a different orientation in the diagram. Second technique: the triangle inside the diode symbol forms an arrow, which shows the direction of allowed (conventional) current flow, so this method also lets you remember the current direction. More generally, for whichever portion of the circuit you are interested in (here the diode), its anode is the electrode that draws positive charge in, and its cathode is the electrode that pushes positive charge out. (Table: anode/cathode vs. plus/minus disambiguation.) This applies not only to diodes but to any component: an electrochemical (battery) cell, an electrolytic cell, a cathode-ray tube (CRT), and so on. The anode of the portion of interest (here the diode) is attached to the cathode of its counterpart in the circuit (here the battery), and its cathode to the counterpart's anode. Within the portion of interest, current flows from its anode to its cathode; in the rest of the loop, current flows from that portion's cathode back around to its anode. | {
"source": [
"https://electronics.stackexchange.com/questions/239513",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/107801/"
]
} |
239,530 | Following is a power supply with dual +-15V output: http://docs-europe.electrocomponents.com/webdocs/13f8/0900766b813f8a87.pdf I'm confused to understand two points: 1-) What is the difference between Output A and Output C ? Both looks variable. One is 1A the other is 0.2 A. I cannot figure out what is the difference. For dual op-Amps which one should be used? 2-) It also says 0 to 30V. How can it be configured for that? Output A with +15 plus and -15 GND? | | {
"source": [
"https://electronics.stackexchange.com/questions/239530",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/16307/"
]
} |
239,641 | Why is a capacitor a linear device? One property for linearity is that the capacitance or some such parameter must not change with voltage or current. Is this enough to make a device linear? A few sources say that the \$Q=CU\$ has a linear characteristic with voltage and so it is a linear device but wouldn't there be at least one such parameter in a MOSFET/diode that does change with respect to voltage or current in a linear manner - for example the voltage of a diode decreases linearly with the temperature. So what should I exactly consider for linearity? | First of all, an I-V curve does not make any sense for a capacitor. This is because a capacitor follows the following equation:
$$i = C \frac{dV}{dt}$$ Note that the current depends on the rate of change of voltage. So you can have the same current at two different voltages, if the rate of change is the same. The reason a capacitor is a linear device is because differentiation is linear. Superposition becomes:
$$i_1 + i_2 = C\frac{d}{dt}(v_1 + v_2) = C\frac{dv_1}{dt} + C\frac{dv_2}{dt}$$ | {
"source": [
"https://electronics.stackexchange.com/questions/239641",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/95317/"
]
} |
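The superposition property claimed in the answer above can be checked numerically; this sketch approximates i = C dv/dt with a central difference (the component value and waveforms are arbitrary illustrations):

```python
import math

C = 1e-6   # 1 uF capacitor (arbitrary illustrative value)
DT = 1e-7  # finite-difference step

def cap_current(v, t):
    """i = C * dv/dt, approximated with a central difference."""
    return C * (v(t + DT) - v(t - DT)) / (2.0 * DT)

v1 = lambda t: math.sin(2.0 * math.pi * 1e3 * t)  # 1 kHz sine
v2 = lambda t: 500.0 * t                          # slow ramp

t0 = 1.23e-4
i_sum = cap_current(lambda t: v1(t) + v2(t), t0)          # response to v1 + v2
i_superposed = cap_current(v1, t0) + cap_current(v2, t0)  # sum of responses
assert math.isclose(i_sum, i_superposed, rel_tol=1e-9)    # superposition holds
```

Note that the two inputs produce the same current contribution regardless of the instantaneous voltage, which is exactly why an I-V curve is meaningless for a capacitor.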
239,653 | I am working on a system similar to a 3D printer, which works with stepper motors, heated bed and an hot air fan. This system will work inside a chamber, with a temperature around 4 degrees and around 50% of relative humidity - These variables are concerning to me. I am wondering what measures should I take to avoid the condensation of droplets and effects related to ESD due to the humidity of the environment, which affects to the PCB integrity and can cause short-circuits. Thank you very much for your answers,
Antonio. | | {
"source": [
"https://electronics.stackexchange.com/questions/239653",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/102080/"
]
} |
240,604 | In a digital multimeter there is a transistor-check function, and there is a term hFE. I do not know how to use it; however, different websites say it is a measurement of a transistor's gain ("the ratio Ic/Ib"), such as: https://www.quora.com/What-does-hFE-mean-on-a-multimeter etc. But I want to know: what is the full form (full name) of hFE? | It is a mouthful to say as a full name. I found this description here. hFE is an abbreviation, and it stands for "Hybrid parameter forward current gain, common emitter", and is a measure of the DC gain of a
junction transistor. So on a multimeter, it indicates a mode where
the meter can measure (probably crudely) the hFE of a transistor. EDIT: When I talk about transistor gain with other engineers, we often use the term 'beta', yet in a datasheet 'hFE' is normally what is used by the manufacturer based on calibrated equipment for a standalone transistor. For some transistors hFE readings may be done at several crucial frequencies as well as DC. 'Beta' is a better term for common-base designs, or just a general statement about DC and/or AC current gain in a known circuit . As a refinement of the original answer, @carloc mentioned that a 'hFE' spelling refers to a DC signal of relatively large amplitude, while 'hfe' refers to a small signal measured differentially around some common bias point. No specific thresholds were given, though my original answer refers to 'hFE', the DC gain of the transistor. | {
"source": [
"https://electronics.stackexchange.com/questions/240604",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/107801/"
]
} |
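Since hFE is just the DC ratio Ic/Ib described above, it reduces to a one-line calculation (the example currents here are illustrative, not from any datasheet):

```python
def hfe(collector_current_a, base_current_a):
    """DC current gain ('hFE', a.k.a. beta): ratio of collector to base current."""
    return collector_current_a / base_current_a

# Illustrative figures: 10 mA of collector current from 50 uA of base drive
print(round(hfe(10e-3, 50e-6)))  # 200
```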
242,163 | Why are the Edison screws allowed to be used? They seem unsafe. Why are they designed that way? They require a complete grounding system to ground devices for the rare situation where the hot wires somehow touches the metal case of the device While they let you use a lamp fixture which has an exposed hot contact when you remove the lamp, and nothing is there to protect your finger from touching it. I would expect this socket to have a different design which will cover the contact from accidental touch. | Here's your opportunity. The market's looking for that right now so build a better mousetrap. The USDOE and California CEC want to murder the Edison base to finally stop people from using incandescent bulbs, and enable fixture designs that don't have to worry so much about dissipating heat. They mandated GU24 in 2008, which solves some of your concerns. Take a look at how that's going 8 years later. LOL. There are several flaws in the GU24 that you should address in your new design. Ease of installing "blind" when you just can't see the socket or it's deep in a recess. Equipment Grounding Conductor. 3-way lamp support. Or since dinosaurs called and want their dual-filament bulbs back... how about a standard for a signal pin and protocol to command the bulb to "dim". In track lighting, the signal line could be bussed to each outlet and controlled by a single dimmer. Multi-voltage, either standardize that all bulbs must be multi-voltage, or have different keying for 120V, 220-240V and 277V. Good luck! | {
"source": [
"https://electronics.stackexchange.com/questions/242163",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/69341/"
]
} |
242,293 | I'll be using a microcontroller to create a PWM signal for motor control. I understand how PWM and duty cycle works, however I am unsure about an ideal frequency. I do not have my motor in yet, so I can't just test it and find out. The picture depicts a graph of RPM vs Voltage. It's linear from 50 RPM @ 8 V to 150 RPM @ 24 V. I will not be varying voltage, just the time it receives a given voltage. So can I assume a linear response? At a 10% duty and 24 V supply it would run at a speed of 15 RPM? If it makes a difference, I'll include the setup. I am running 24 V directly to an H-bridge that controls the motor. Obviously I have two PWM pins going from the MCU to the gates of the two enable MOSFETS. | In short: You have linear control of the 'speed' by applying a PWM signal, now the frequency of that signal has to be high enough so that your DC Motor only passes the DC component of the PWM signal, which is just the average. Think of the motor as a low pass filter. If you look the transfer function or relationship angular speed to voltage, this is what you have: $$\frac{\omega(s)}{V(s)}=\frac{K}{\tau s+1} $$ This is the first order model of a DC motor or simply a low pass filter with cutoff frequency $$f_c=\frac{1}{2\pi\tau}$$ Where \$\tau\$ is the motor's time constant. So as long as your frequency is beyond the cutoff, your motor will only see the DC part or the average of the PWM signal and you will have a speed in concordance with the PWM duty cycle. Of course, there are some tradeoffs you should consider if you go with a high frequency. Long story: Theoretically, you would need to know the motor's time constant in order to choose the 'right' PWM frequency. As you probably know, the time it takes the motor to reach almost 100 % its final value is $$ t_{\text{final}}\approx 5\tau$$ Your PWM frequency has to be high enough so that the motor (essentially a low pass filter) averages out your input voltage, which is a square wave. 
Example, let's say you have a motor with a time constant \$\tau=10\text{ ms}\$ . I am going to use a first order model to simulate its response to several PWM periods. This is the DC motor model: $$\frac{\omega(s)}{V(s)}=\frac{K}{10^{-2} s+1} $$ Let's let \$K=1\$ for simplicity. But more importantly here are the responses we're looking at. For this first example, PWM period is \$ 3\tau\$ and the duty cycle is 50 %. Here is the response from the motor: The yellow graph is the PWM signal (50 % duty cycle and period \$ 3\tau=30 ms\$ ) and the purple one is the speed of the motor. As you can see, the speed of the motor swings widely because the frequency of the PWM is not high enough. Now let's increase the PWM frequency. The PWM period is now \$ 0.1\tau=1\text{
ms}\$ and duty cycle is still 50 %. As you can see, now the speed is pretty much constant because the high frequencies components of the PWM signal are being filtered out.
In conclusion, I would pick a frequency that is at least $$f_s\geq \frac{5}{2\pi\tau}$$ This is just a very theoretical explanation on how to choose the PWM frequency. Hope it helps! | {
"source": [
"https://electronics.stackexchange.com/questions/242293",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/111917/"
]
} |
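The rule of thumb derived in the answer above, f_s >= 5/(2*pi*tau), is easy to wrap in a helper (a sketch; the 10 ms time constant matches the worked example):

```python
import math

def min_pwm_freq_hz(tau_s, margin=5.0):
    """Rule-of-thumb lower bound from above: f_s >= margin / (2*pi*tau)."""
    return margin / (2.0 * math.pi * tau_s)

tau = 10e-3  # 10 ms motor time constant, matching the worked example
f_c = 1.0 / (2.0 * math.pi * tau)  # motor's low-pass cutoff, ~15.9 Hz
print(f"cutoff {f_c:.1f} Hz, choose PWM >= {min_pwm_freq_hz(tau):.1f} Hz")
```

In practice you would usually go well above this lower bound, often above roughly 20 kHz, so the motor does not whine audibly, subject to the switching-loss tradeoffs the answer mentions.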
242,641 | I'm doing a project with some tight requirements, in which I need to measure the distance between two points, which might not be in direct view of each other, and on parts that are too small to mount anything on them. An elegant solution would be to tie some kind of conductive rubber band between them, which would change resistance as it stretches, and then measuring the resistance, but all I've got from Google is stuff about physical resistance of exercise bands. | There ABSOLUTELY are such devices, made of conductive rubber of some sort. For example, HERE , from RobotShop.com I suspect there is a bunch of hysteresis, and that the resistive properties will change over time and use. Perhaps one of the other suggestions would work better | {
"source": [
"https://electronics.stackexchange.com/questions/242641",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/114901/"
]
} |
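To actually read such a conductive-rubber stretch sensor, the usual approach is a voltage divider into an ADC. A sketch (the resistance values are assumed illustrative figures, not from the linked product's datasheet):

```python
def divider_out_v(v_ref, r_fixed, r_sensor):
    """ADC voltage with the sensor as the bottom leg of a divider."""
    return v_ref * r_sensor / (r_fixed + r_sensor)

def sensor_resistance(v_ref, r_fixed, v_out):
    """Invert the divider to recover the sensor resistance from an ADC reading."""
    return r_fixed * v_out / (v_ref - v_out)

# Assumed behaviour: ~1 kOhm relaxed, rising toward ~2 kOhm when stretched
v = divider_out_v(3.3, 1000.0, 1500.0)
r = sensor_resistance(3.3, 1000.0, v)
assert abs(r - 1500.0) < 1e-6
```

Given the hysteresis and drift the answer warns about, the resistance-to-length mapping would need frequent recalibration rather than a fixed lookup.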
242,648 | There is a part, 1N4148, or whatever from manufacturer A, but the datasheet from manufacturer B is more detailed, has nice graphs, etc. In general, is data for the exact part consistent between manufacturers? | Generally, the primary specifications will be the same, but beware as the devil is in the details as noted by Olin. As an example, take the LM1117 . Parts with the same base number are also made by On Semiconductor and AMS . Looking at the datasheets, TI has this to say on stability: The output capacitor is critical in maintaining regulator stability,
and must meet the required conditions for both minimum amount of
capacitance and equivalent series resistance (ESR). The minimum output
capacitance required by the LM1117 is 10 μF, if a tantalum capacitor
is used. Any increase of the output capacitance will merely improve
the loop stability and transient response. The ESR of the output
capacitor should range between 0.3 Ω to 22 Ω. In the case of the
adjustable regulator, when the CADJ is used, a larger output
capacitance (22-μF tantalum) is required. AMS simply states: Stability The circuit design used in the AMS1117 series requires the use of an
output capacitor as part of the device frequency compensation. The
addition of 22μF solid tantalum on the output will ensure stability
for all operating conditions. When the adjustment terminal is bypassed
with a capacitor to improve the ripple rejection, the requirement for
an output capacitor increases. The value of 22μF tantalum covers all
cases of bypassing the adjustment terminal. Without bypassing the
adjustment terminal smaller capacitors can be used with equally good
results. To further improve stability and transient response of these
devices larger values of output capacitor can be used. On Semiconductor has this: Frequency compensation for the regulator is provided by capacitor Cout
and its use is mandatory to ensure output stability. A minimum
capacitance value of 4.7 μF with an equivalent series resistance
(ESR) that is within the limits of 33 mΩ (typ) to 2.2Ω is required.
See Figures 12 and 13. The capacitor type can be ceramic, tantalum,
or aluminum electrolytic as long as it meets the minimum capacitance
value and ESR limits over the circuit’s entire operating temperature
range. Higher values of output capacitance can be used to enhance loop
stability and transient response with the additional benefit of
reducing output noise. You should note that all these statements have subtle differences for a part that is designed for the same task; other parameters in the datasheets vary as well. This is but one type of part from the millions out there. Even the humble resistor and capacitor from various manufacturers can have differences (even though they are apparently the same type of device) that you may care about (in high reliability designs, this is definitely true). Update. Dim makes an excellent point on schematic notation where the generic number may not be sufficient. In what I currently do (avionics including flight controls) we have internal part numbers which are used in the schematic; these numbers map to a single part from a single manufacturer to deal with this precise issue. If you are using a specific manufacturers part, use that manufacturers datasheet. | {
"source": [
"https://electronics.stackexchange.com/questions/242648",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/114889/"
]
} |
242,883 | I mean the simple analog headset pluggable into the jack of a phone. Not USB, not bluetooth, not fancy proprietary plugs with extra connectors - just a generic stereo+mic jack. The four "bands" on the jack plug are GND, right earphone, left earphone and microphone. And there's nothing to cover the buttons - usually "Volume up/down" + "Media key" for receiving the call. How do these buttons communicate being pressed to the phone? | Each switch bridges the high-impedance microphone with a low resistance, allowing internal circuitry to sense the buttons. Here's a helpful image: The MIC+ line has a bias voltage (to supply the mic), and by adding some additional circuitry to the mic preamp, it's easy to differentiate those resistor values. This is the most common scheme for "on-headphone" controls. Additionally, it's very easy to implement in the headphones, allowing for cheap headphones and requires only a little bit more circuitry in the phone. | {
"source": [
"https://electronics.stackexchange.com/questions/242883",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/27495/"
]
} |
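A sketch of the phone-side decoding implied by the answer: the internal bias resistor and each button's shunt resistance form a divider, and an ADC reading is binned to the nearest expected voltage. All component values here are assumptions for illustration (the 0/240/470 ohm set is common on 3.5 mm headset remotes, but check your target's specification):

```python
# Phone-side decoding sketch for in-line headset buttons: the mic line is
# biased through a pull-up, each button shunts the line with a different
# resistance, and an ADC reading is binned to the nearest expected voltage.
# All values below are illustrative assumptions, not a particular phone's spec.
BIAS_V = 2.0         # mic bias voltage (assumed)
PULLUP_OHM = 2200.0  # internal bias resistor (assumed)
MIC_OHM = 10000.0    # effective DC resistance of the electret mic (assumed)

BUTTONS = {"play": 0.0, "vol_up": 240.0, "vol_down": 470.0}

def mic_line_voltage(button_ohm=None):
    """Voltage at the mic pin with a given button pressed (None = idle)."""
    if button_ohm is None:
        load = MIC_OHM
    elif button_ohm == 0.0:
        load = 0.0  # the button shorts the line
    else:
        load = 1.0 / (1.0 / MIC_OHM + 1.0 / button_ohm)  # mic || button
    return BIAS_V * load / (PULLUP_OHM + load)

def decode(adc_volts):
    """Map an ADC voltage to the nearest expected state (button name or None)."""
    expected = {None: mic_line_voltage()}
    expected.update({name: mic_line_voltage(r) for name, r in BUTTONS.items()})
    return min(expected, key=lambda state: abs(expected[state] - adc_volts))

print(decode(mic_line_voltage(240.0)))  # vol_up
```

In a real phone the bins would also include hysteresis and debouncing, and the mic's AC signal would be filtered out before the comparison.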
243,402 | I have tried to create a circuit to switch a large 7-segment LED display ( LDS-CD16RI ) using a pair of MOSFETs, as follows: simulate this circuit – Schematic created using CircuitLab Here I am trying to use a 3.3V logic signal (illustrated as the circled 1) to switch the 24V to drive the LEDs. This circuit is repeated for each of the segments of the display. The typical forward voltage of each of the LEDs (which are in series inside each segment of the display) is 6.8V, and their max steady forward current is 20mA, so I aimed for 10mA current through the LEDs. Since my supply voltage is only 24V I planned to actually drop about 5.75V across the LEDs to give me some headroom for the voltage dropped across M2 and R2. I arrived at the value for current-limiting resistor R2 at 100Ω using: $$ R = \frac{V_s - V_f}{I} = \frac{24 - (5.75*4)}{0.01} = 100Ω $$ Before building this circuit I calculated the power dissipated by R2 as follows: $$ P = \frac{V^2}{R} = \frac{1^2}{100} = 0.01\mathrm{W} $$ 0.01W seemed safely below the 0.25W limit of the through-hole resistors I used, so I proceeded with constructing and testing this circuit. To cut a long story short: R2 burned up shortly after a segment was illuminated. This occurred for each of the separate instances of this circuit driving the various display segments, suggesting that it was a design error rather than a single component failure. From my calculations and further analysis, I cannot yet understand why this occurred. To check my work, I re-constructed the circuit in a simulator which suggested that power from R2 would in fact be 6.84mW, which is a result I cannot explain but in any case one smaller than what I had calculated above. I expect I have made an error somewhere in my calculations or my assumptions, but I have been unable to locate it. Assuming the problem is that the resistor is indeed dissipating too much power, can my circuit be adjusted to address this? 
Is R2 a red herring here and the problem exists elsewhere in my circuit? Is my approach itself flawed? | 6.8 volts seems awfully high for a single LED. Are you sure that 6.8 is not the number for all four LEDs? That would make it 1.7 volts per LED, which is more reasonable for a red LED. And that would mean that you are currently pushing 172 milliamps, or almost 3 watts through your resistor. If that is the case, you should lower your power supply to less than 20 volts (maybe 12 volts) to keep from destroying the gate of your MosFET (M2). | {
"source": [
"https://electronics.stackexchange.com/questions/243402",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/15863/"
]
} |
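The disagreement between the question's math and this answer comes down to one assumption: whether 6.8 V is the drop per LED or per four-LED string. A few lines make both cases explicit (a sketch of the same series-resistor arithmetic used in the question):

```python
def resistor_current_and_power(v_supply, v_forward_total, r_ohm):
    """Series-resistor current and the power the resistor must dissipate."""
    i = (v_supply - v_forward_total) / r_ohm
    return i, i * i * r_ohm

# Question's assumption: ~5.75 V dropped per LED, four LEDs in series
i1, p1 = resistor_current_and_power(24.0, 4 * 5.75, 100.0)
# This answer's reading: 6.8 V is the drop of the whole four-LED string
i2, p2 = resistor_current_and_power(24.0, 6.8, 100.0)
print(f"{i1*1e3:.0f} mA, {p1*1e3:.0f} mW")  # 10 mA, 10 mW: fine for a 1/4 W part
print(f"{i2*1e3:.0f} mA, {p2:.2f} W")       # 172 mA, 2.96 W: burns a 1/4 W part
```

The second case reproduces the burned resistors, which supports the answer's interpretation of the datasheet figure.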
243,712 | Electrically Erasable Programmable Read-Only Memory ( EEPROM ): If it's using Read-Only Memory ( ROM ) then how am I able to write to it? | The EEPROM acronym has some history which follows the development of the technology. ROM : Read-Only Memory. Written at the factory. PROM : Programmable Read-Only Memory but programmable (once) by the user. Really a one-time programmable, forever readable memory. Get it wrong and you dump the chip. EPROM : Erasable Programmable Read-Only Memory. Usually erased using UV light through a quartz window above the chip. A bit of trouble but very useful. EEPROM : Electrically Erasable Programmable Read-Only Memory. Can be erased or re-written under program control. Figure 1. An Intel 1702A EPROM, one of the earliest EPROM types, 256 by 8 bit. The small quartz window admits UV light for erasure. Source: Wikipedia EPROM . So, I hear you say, why do they call it eepROm when it is writeable? The answer to this is, I suspect, that, unlike RAM (random access memory), it holds its contents across a power cycle and, therefore, behaves more like a ROM . | {
"source": [
"https://electronics.stackexchange.com/questions/243712",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/26377/"
]
} |
243,880 | For me there is no doubt that one of the most time consuming tasks when it comes to producing a new board is going from a ratsnest to the final layout.
I must admit I am not an expert, but to me it takes days, and I would never ever be able to do it without the aid of Kicad, even for circuits of modest complexity.
It would be very interesting for me to know how it was at the beginning, when EDA (electronic design automation) software didn't exist at all. What were the techniques, what were the tools? I'm convinced one should learn how to do math with paper and pencil before using a calculator, this is why I'm asking. | Before computers were cheap and available enough to be used for such things, a "layout person" (a specialty of draftsman) would manually design the board layout. This was done on a drafting table at larger size than the real board. The engineer provided a D-size schematic to generate the board from. The layout guy would lightly pencil in tracks, then use special tape over the rough sketches. This tape was black and similar to masking tape. It came in rolls for pre-determined trace widths at specific enlargement ratios. For example, you'd have a roll of "20 mil" tape to be used at 4x enlargement, so the tape was actually 80 mils wide. There were also adhesive sheets to be cut with an X-Acto knife for arbitrarily shaped copper areas. As WhatRoughBeast mentioned in a comment, there were also various pre-made adhesive patterns you could buy for various enlargement sizes. Examples were the footprint of a 14 pin DIP, a TO-92 package, and the like. These made some of the grunt work easier and less error-prone. The finished taped drafting sheet was then used photographically to make the transparencies that were used to manufacture the board. Actually there was a finished taped sheet for each PCB layer. It might take two weeks for the layout to be finished for maybe a 40 square inch board, depending on complexity, of course. After that the layout guy and engineer would spend a day "roadmapping". The layout guy would start on one pin of one part, then follow the traces and call out all other part pins encountered, marking the traces as checked. The engineer would follow along on the schematic, marking connections as checked.
This is how missing and erroneous connections were found. After roadmapping, usually a day or two of more layout work would be required to fix problems found, then more roadmapping, etc. However, all that is ancient history. While interesting as history, it really isn't relevant today. It's so much nicer to use an integrated schematic and board design package where the software guarantees that the final layout matches the schematic. | {
"source": [
"https://electronics.stackexchange.com/questions/243880",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/86603/"
]
} |
243,977 | A NOT gate gives a 1 (on) output for a 0 (off) input, and a 0 output for a 1 input. Now, what happens if I feed the output back into the input of the same NOT gate? If the gate receives a 1, it outputs a 0; on receiving that 0, it outputs a 1; and so on. The situation sounds like a physical model of a self-contradiction (like the story of young Bertrand Russell who, ill with fever, was waiting to be April-fooled by his brother and prepared against every possible trick; his brother then fooled him by playing no trick at all, because if the brother used any trick Bertrand would not be fooled, and if he used none, Bertrand had been fooled after all). So what actually happens in the real hardware called a NOT gate? I assume the possibilities are: 1. the output will always remain 0 (off); 2. the output will always remain 1 (on); 3. the gate will be "pulsating": it gives a 1 output, then at the next moment, after receiving that 1 (on) signal, it gives out a 0 (off) signal, and the cycle runs on and on, with the frequency of this oscillation depending on the physical characteristics of the circuit components; 4. the circuit will get damaged (due to some anomalous current, overheating, etc.) and soon permanently stop working. Will it be one of these? PS. I have been thinking about this problem since my schooldays, but since I do not yet know how to assemble a NOT gate in a circuit, or where one could be bought, I have not yet been able to test it experimentally. | What happens is usually case 3 or 5. You have not defined case 5 :-) The joined input-output will sit at some voltage near the middle of the supply. 74HC14: When a Schmitt triggered gate is used, oscillation will almost certainly occur. Assume Vin-out initially = low = 0. When input = 0 output will transition to 1.
Time to do this is the propagation delay of the gate (usually ns to µs, depending on type). When the output starts to go high the rate of change will be affected by the load. Here the load is the gate input capacitance + any stray wiring capacitance driven via the gate output resistance and any wiring resistance. Cin_gate is in the data sheet and may be in the order of 10 pF (varies with family). On a PCB wiring capacitance will be low. In this situation series inductance may also have a small effect but usually so small as to be ignorable. Output resistance varies widely with gate type. Very approximately Rout_effective = V/I = Vout/Iout_max. E.g. if Vdd = 5V, Iout_max = 20 mA then Rout ~~= 5/0.020 = 250 Ohms. This is very dynamic but gives an idea. When Vout = 1 has driven Cin to a high level via Rseries + Rout then the gate will see Vin = 1 and start to switch to Vo = 0. After a propagation delay the output starts to fall. And so it continues. 74HC04 : When a non Schmitt triggered gate is used oscillation MAY occur by the mechanism above but it is more likely that the gate will settle into a linear mode with Vin-Vout at about half supply. Internal transistor-switch-pairs which are intended to be either high or low output most of the time may be held in an intermediate state. This may lead to high current draw and may lead to IC destruction, but also may not. As a guide: 74HC04 inverter datasheet Propagation delay ~~= 20 ns 74HC14 inverter datasheet Propagation delay ~~= 35 ns 74HC14 propagation delay is about 50% more than for 74HC04, but the hysteresis of the Schmitt trigger input means Vin takes slightly longer to rise, so the overall delay is probably about double for the Schmitt triggered gate. If Cin = 10 pF and Rout = 250 Ohms then the time constant of Vout driving Cin = t = RC = 250 x 10E-12 = 2.5E-9 ~~= 3 ns. Pairs of numbers below separated by "/" are for 74HC04 / 74HC14
As the propagation delay ~= 20 / 40 ns ('04/'14) (see fig 6 in 74HC04 datasheet) then the total low to high and high to low time for 1 oscillation cycle is perhaps 50 / 100 ns so oscillation around 20 / 10 MHz is suggested. In practice this feels perhaps "a bit high" for the 74HC14 but oscillation in the MHz range is likely with no other loads at 5V. The 74HC04 probably will not oscillate but if it does will probably do so at a higher frequency. Note: The Schmitt gate will oscillate at a lower frequency both due to longer propagation delay and because the hi-lo thresholds are defined and separated by the hysteresis voltage - so Cin takes very slightly longer to charge. The non Schmitt gate will probably oscillate higher if it does oscillate but is more likely to go into a linear mode - possibly with low amplitude oscillation superimposed. _____________________________________________ What's inside?: Mario has shown the conceptual diagram of a simple inverter such as a 74C04. These were amongst the first CMOS gates - but the low output drive was 'annoying' and buffered gates with more drive soon arrived. To obtain the extra current drive they have a high current output stage separate from the input stage. As they both invert, the overall result is NOT an inverter, so they add a 3rd inverting stage to get overall inversion. The end result is "an inverter" externally and a black box of unknown happenstance when driven semi analog-ly. For the 74HC04 the diagram below is as shown in the Fairchild, TI and NXP datasheets BUT ON-Semi, just to be different, make the 2nd stage a buffer with an inverting input. The result is the same, logic wise. So, overall, no guarantee what will happen when allowed to function in a semi-analog fashion. One inverter of 6 in 74HC04: Note that this is just for ONE CMOS based version - there are many other CMOS versions. CMOS is the most commonly used, but there were also the original TTL, LSTTL, STTL, ECL and more. | {
"source": [
"https://electronics.stackexchange.com/questions/243977",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/107801/"
]
} |
243,989 | In one of my current projects I'm using an MC7805 in a D2PAK package to generate my logic supply of 5 V from an available 24 VDC supply. The current required by the circuit is 250 mA. This results in a dissipated power of the MC7805 of: \$P=(24\ V-5\ V)*230\ mA=4.37\ W \$ The PCB has to be assembled into a small plastic housing with the MC7805 inside. The arrangement is like this: So heatsinks like for example these are not possible. Also the housing itself has a quite small volume and would heat up. My first try to solve this thermal issue was to add vias to the pad and make an exposed pad on the other side of the PCB. This way I want to dissipate the heat on the outside of the housing. Apparently this was not good enough, as the thermal overload protection of the MC7805 kicked in after about a minute. So I added a small heatsink to the exposed pad at the backside of the PCB and now it seems to be working (the heat sink is still getting pretty hot!). Besides my trial-and-error approach I would like to understand this thermal design a bit better and optimize it (as of now I cannot say what the temperature of the junction would be, and therefore I don't know how reliable this would be). I already read a couple of other questions, but so far I'm still not completely clear (even thinking of power as current, temperature as voltage and resistors as thermal resistance, thermal design has always puzzled me...). So regarding this design I have a couple of questions: When using vias, the plating of the via conducts the heat, while the air in the via hole is more or less insulating. So if not filled with solder, you want to maximize the copper area of the vias in order to minimize the thermal resistance from top to bottom layer. As I kept the solder stop mask open, the vias should be covered with solder paste and get filled during re-flow soldering.
To minimize the thermal resistance between top and bottom layer I assume it would be best to have as much 'hole' area as possible. Is this assumption right? Is there a 'not incredibly complicated' way to calculate the thermal resistance between junction and bottom pad? If not, can I somehow measure this thermal resistance (with a temperature sensor?) As the top pad and the D2PAK housing will also dissipate some heat, can I (following the resistor analogy) put these in parallel? What would the thermal resistor network for this system look like? I would like to further optimize this thermal design. I cannot increase the size of the housing and PCB. I cannot add a fan. I cannot increase the size of the top layer pad. I have already increased the size of the bottom pad to the maximum possible of 20 mm x 20 mm (the above picture mentions both pads as 15 mm x 15 mm). Do you see any further things I could optimize? | Ok, first I am going to try to give a nice little primer on thermal engineering, since you say you want to get a better handle on it. It sounds like you're at that point where you understand the terms, have seen some of the math, but a true intuitive understanding has yet to develop; that 'Ah hah!' moment with the light bulb going off hasn't happened yet. It's a very frustrating point to be at! Don't worry, you'll get it if you keep at it. The single most important part about thermal stuff: 1. It's exactly like one-way electricity. So let's use ohm's law. Heat flow is just like current flow, only there is no 'return': heat always always always flows from higher potential to lower potential. Potential being heat energy, in this case. Power is our current. And, conveniently, thermal resistance is...resistance. Otherwise, it is exactly the same. Watts are your amps, your current. And indeed, this makes sense, as more watts means more heat flow, right? And just like voltage, the temperature here is relative.
We are not talking about absolute temperature at any point, but only the temperature difference, or potential difference, between two things. So when we say that there is, say, a 10°C temperature potential, that simply means one thing is 10°C hotter than the other thing we're talking about. Ambient temperature is our 'ground'. So to translate all this into real absolute temperatures, you simply add it on top of whatever the ambient temperature is. Things like your LM7805 that produce heat are perfectly modeled as constant current sources. Because power is current, and it is acting like a constant power device, constantly generating 4.4W of heat, it's like a constant current source generating 4.4A. Just like constant current sources, a constant power source will increase temperature (like the voltage of a constant current source) as high as it needs to maintain the current/power. And what determines the current that will flow? Thermal resistance! 1 ohm is really saying that you will need 1 volt of potential difference to push 1A through it. Likewise, while the units are funky (°C/W), thermal resistance is saying the same. 1 °C/W is just like one Ω. You will need 1°C of temperature difference to push 1 watt of thermal 'current' through that resistance. Better still, things like voltage drops, parallel or series thermal circuits, it is all the same. If a thermal resistance is just one part of a larger total thermal resistance along your thermal path ('circuit'), then you can find the 'voltage drop' (temperature increase) across any thermal resistance in exactly the same way you would find the voltage drop across a resistor. You can add them for series, 1/(1/R1....1/Rn) just like you would for parallel resistances. It all works and without exception. 2. But it takes time for things to get hot! Ohm's law is not really a law, but was originally an empirical model, and later realized was just the DC limit of Kirchhoff's law.
In other words, ohm's law only works for steady state circuits. This is likewise true for thermals. All that I wrote above is only valid once a system has reached equilibrium. That means you've let everything that is dissipating power (our constant 'current' power sources) do that for a while and so everything has reached a fixed temperature, and only by increasing or decreasing the power will anything's relative temperatures change. This usually doesn't take too long, but it also isn't instantaneous. We can see this quite clearly simply because things take time to heat up. This can be modeled as thermal capacitance. Basically, they will take time to 'charge', and you'll see a large temperature difference between a hot object and a cool one, until they reach equilibrium. You can think of most objects as at least two series resistors (for one point of thermal contact and the other. The top and bottom of your pad, for example) with a capacitor in between. This is not particularly relevant or useful in this situation, where all we care about is steady state, but I thought I'd mention it for completeness. 3. Practicalities If we are equating heat to electrical current flow, where is it all flowing to? It is flowing into the environment. For all intents and purposes, we can usually think of the environment as a giant, infinite heatsink that will maintain a fixed temperature no matter how many watts we push into it. Of course, this isn't quite the case, rooms can get hot, a computer can certainly heat up a room. But in the case of 5W, it is fine. The thermal resistance of the junction to case, then case to pad, pad to the pad on the other side of the pcb, bottom pad to heatsink, and finally, heatsink to air, form our total thermal circuit and all of those thermal resistances added up is our true thermal resistance. Those graphs you're looking at, those are looking at the resistances of just one piece of the system, NOT the total system. 
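(As a quick aside, the 'add them all up' bookkeeping described above is easy to sketch in a few lines. The resistance values below are purely illustrative, invented for the example — they are not taken from the MC7805 datasheet or from any measurement of this board:)

```python
# Thermal circuit as one-way "Ohm's law": temperature rise = power * resistance.
# All resistance values here are hypothetical, for illustration only.

def series(*resistances):
    """Series thermal resistances simply add, like resistors."""
    return sum(resistances)

def parallel(*resistances):
    """Parallel thermal paths combine as 1/(1/R1 + ... + 1/Rn)."""
    return 1.0 / sum(1.0 / r for r in resistances)

power_w = 4.4                # dissipated power, acting like a current source
r_junction_case = 2.0        # °C/W (hypothetical)
r_case_pad = 0.5             # °C/W (hypothetical: solder joint)
r_pad_heatsink = 1.0         # °C/W (hypothetical: vias through the board)
r_heatsink_ambient = 12.0    # °C/W (hypothetical: natural convection)

r_total = series(r_junction_case, r_case_pad, r_pad_heatsink, r_heatsink_ambient)
ambient_c = 25.0
junction_c = ambient_c + power_w * r_total   # "voltage" = ground + I * R

print(f"total resistance: {r_total} °C/W")
print(f"junction temperature: {junction_c:.1f} °C")
```

With these made-up numbers the whole chain is 15.5 °C/W, so 4.4 W lifts the junction well above ambient — and as the answer stresses, it is the total chain that matters, not any single link.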
From those graphs, you'd think a square of copper could dissipate a watt and only rise 50°C. This is only true if the circuit board is magical and infinitely large and will never warm up. The junction in question will be 50° hotter than the circuit board, but that's not very useful if you've heated the circuit board to 200°C. You've exceeded the operating temperature either way. The unfortunate reality is that natural convection is pretty terrible at cooling stuff. Heatsinks have lots of surface area to increase convection cooling, and are often anodized black to increase their radiative cooling (black objects radiate the most heat, while shiny/reflective objects radiate almost none. Just like an antenna, being good at transmitting makes it good at receiving, and that is why darker to black things get so hot in the sun, and shiny things hardly get hot at all. It works both ways). But you'll find that most heatsinks have a pretty high thermal resistance for natural convection. Check the datasheet: often the thermal resistances of heatsinks are ones for a certain minimum CFM of air flow over the heatsink. In other words, when there is a fan blowing air. Natural convection will be much poorer in thermal performance. Keeping the thermal resistances between the junction and heatsink low is relatively easy. Solder joints have negligible thermal resistance (though solder itself is not a very good conductor of heat, at least compared to copper), and copper is second only to silver (among normal, non-exotic materials at least. Diamond, graphene etc. are more thermally conductive but also not available on Digikey). Even the fiberglass substrate of a circuit board isn't totally terrible at conducting heat. It's not good, but it's not terrible either. The hard part is actually dissipating the heat out into the environment. That is always the choke point. And why engineering is hard. Personally, I design high power DC/DC converters (amongst other things).
Efficiency stops being something you want, and becomes something you NEED. You NEED <x>% efficiency to make a DC/DC converter as small as it needs to be, because it simply will not be able to shed any additional waste heat. At this point, the thermal resistances of individual components are meaningless, and they are all tightly coupled on a slab of copper anyway. The entire module will heat up until it reaches equilibrium. No individual component will actually have enough thermal resistance to overheat theoretically, but the entire board as a bulk object can heat up until it desolders itself if it can't shed the watts quickly enough into the environment. And, as I said earlier, natural convection is really really terrible at cooling things. It's also primarily a function of surface area. So a plate of copper and a circuit board with the same circuit area will have very similar thermal resistances to the environment. The copper will make the heat more uniform throughout it, but it won't be able to shed any more watts than fiberglass. It comes down to surface area. And the numbers are not good. 1 cm^2 represents about 1000°C/W of thermal resistance. So a relatively large circuit board that is 100mm x 50 mm will be 50 squares, each a square centimeter, and each a parallel thermal resistance of 1000°C/W. So this board has a resistance to ambient of 20°C/W. So, in your case of 4.4W, it won't matter what you do on the board, pad size, thermal vias, any of that. 4.4W is going to heat up that board to about 88°C above ambient. And there is no getting around it. What heatsinks do is fold a lot of surface area into a small volume, and so using one will lower the overall thermal resistance and everything gets less hot. But all of it will warm up. Good thermal design is as much about directing where heat flows as it is removing it from your widget. You've done a pretty good job with your heatsink and enclosure setup. But, you are concerned about the wrong things. 
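(A side note on the back-of-envelope numbers a few sentences up: taking each square centimetre of exposed board as roughly 1000 °C/W to ambient, all in parallel, is easy to check. The rule-of-thumb constant comes from the answer's own estimate, not from a standard:)

```python
# Natural-convection rule of thumb from the text above: each cm² of board
# surface is roughly 1000 °C/W to ambient, and the patches act in parallel.
R_PER_CM2 = 1000.0  # °C/W per square centimetre (rough estimate from the answer)

def board_resistance(width_mm, height_mm):
    """N equal parallel resistances of R each combine to R/N."""
    area_cm2 = (width_mm / 10.0) * (height_mm / 10.0)
    return R_PER_CM2 / area_cm2

r_board = board_resistance(100, 50)  # the 100 mm x 50 mm example board
rise = 4.4 * r_board                 # ΔT = P * R for the 4.4 W regulator

print(f"board-to-ambient: {r_board:.0f} °C/W, rise: {rise:.0f} °C above ambient")
```

This reproduces the figures in the answer: about 20 °C/W for the example board, and roughly an 88 °C rise for 4.4 W.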
There isn't a simple way to calculate the thermal resistance of the pad through the pcb, but it only takes around 17% of a pad's area dedicated to vias before you hit diminishing returns hard. Usually using 0.3mm vias with 1mm spacing and filling the thermal pad like that will give you as good as you will get. Just do that, and you'll have no reason to ever worry about the actual value. You care about the system as a whole, not one junction. You did have a problem where the thermal resistance from the junction specifically to the larger circuit board and surfaces that would shed the heat into the environment was too high, so the component overheated. Either the heat couldn't spread out to the rest of the dissipating surface fast enough, or it could, but there wasn't enough surface to dissipate it into the environment quickly enough. You've addressed both possibilities by giving a low impedance thermal path from the LM7805 to the heatsink, which itself provides more surface area and lots of extra places for heat to escape. The enclosure, circuit board, etc. will of course still get warm eventually. Just like electrical current, it follows all paths proportional to the resistance. By providing less total resistance, the LM7805 as a thermal 'current' source need not get quite so hot, and the other paths are splitting the wattage ('current') between them, and the lowest resistance path (the heatsink) will get proportionally hotter. You're keeping everything else at a lower temperature by providing a preferential thermal path through the heatsink. But everything else is still going to help, and still going to warm up, to a greater or lesser degree. So, to answer your specific bullet point questions: You don't need to measure the thermal resistance of the junction to bottom pad, and knowing it is not useful information. It is not going to change anything, and you can't really improve it beyond what you have anyway. | {
"source": [
"https://electronics.stackexchange.com/questions/243989",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/54368/"
]
} |
244,190 | I would like to use an accelerometer-equipped device to record a motion trajectory, as high-resolution and low-noise as possible. For instance, let's say I run a jogging route of 6 kilometres, returning exactly back to the location I started from. So I have the idea that I could possibly do without a GPS module and just record, at a constant rate, the data from an accelerometer, say an ADXL345. My questions thus: (1) Can I use a 3-axis accelerometer to integrate twice, from acceleration to velocity to distance? (2) If the constraint is that I return to the exact location from where I started, can I apply an error-correction to the trajectory that compensates for drift, so that the last (x,y,z) coordinate of the recorded and integrated data becomes identical to the first one? (3) Say I run from location A to B and back to A again. If I apply the mentioned drift-correction, do I still have a meaningful/correct position of spot B? If not, how would I achieve this? Do I have to combine the accelerometer with a GPS? | No, this won't work in theory or practice because you do not have sensors to capture rotational motion. When you rotate an accelerometer, it is unable to detect that its coordinate system has rotated with respect to the desired coordinate system. What you are trying to do is called inertial navigation. In principle, to do inertial navigation, you need a three-axis accelerometer as well as a three-axis gyro (or angular rate sensor) to capture rotational motion. Then the acceleration data can be converted to displacements in the frame of reference you are using. In practice, even if you add a gyro, doing this accurately is very difficult because small constant errors in acceleration become very large position errors during the process of integration. The only saving grace in your case is that if you add the assumption that you start and stop in the same place, you may be able to leverage that to calibrate out the drift (again, assuming you add a 3-axis gyro). Although user CortAmmon expressed skepticism that this extra information would be sufficient for calibrating out any drift in the acceleration measurement. CortAmmon points out that the Northrop Grumman LN200 inertial measurement unit costs US$90,000, and could be expected to have a position error measured in km after the time it takes to do a run. Items like this are not only very expensive, but likely "export controlled" if made in the US. The reason is that inertial nav units are used in missiles. This gives them the ability to hit a target even when GPS is being jammed. | {
"source": [
"https://electronics.stackexchange.com/questions/244190",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/115730/"
]
} |
244,941 | Plenty has been said about why birds are safe to land on power lines. But condors, apparently, are not . The linked article explains that condors are electrocuted often enough and endangered enough that they are doing special "power pole aversion training" to keep them away even after release. Why are condors electrocuted while other birds (pigeons, crows, etc.) are perfectly safe? | Wingspan. Condors are large birds and can easily bridge the gap between the divided power lines. Pigeons, Crows, etc, are tiny. The only way to get shocked is by completing a circuit after all. Which is why that article has a spread its wings pun in the title California Condor Recovery Program Spreads Its Wings . From another article : The California condor is big. In fact, it's the largest flying bird in North America with a wingspan of 9 1/2 feet. Michael Mace, curator of birds for the San Diego Zoo Safari Park, tells NPR's Arun Rath that the condor "is like the 747 compared to a Cessna if you look at it proportionally with other species like eagles and turkey vultures." Mace works in a condor power line aversion training program at the zoo. It was developed to address the condors' unfortunate run-ins with power lines. "When they're flying, there's no reason to look forward because they're scanning the earth looking for carrion," Mace explains. Because the birds have no reason to look forward, they fly into power lines and risk electrocution. On top of that, when the condors are looking for a place to sleep, they land on power poles and structures, and get electrocuted there too. Their large size makes them more vulnerable to electrocution than smaller birds, because they're more likely to touch two lines at once. (Touching just one wire is safe, which is why many birds land on power lines without consequence). 
Since replacing power lines with a larger spacing between wires is completely impractical and unfeasible, a system of training animals with aversion therapy was devised. Humans receive similar training: constant warnings not to touch live power cables, and dedicated training for power line workers. | {
"source": [
"https://electronics.stackexchange.com/questions/244941",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/116139/"
]
} |
245,201 | I think I understand more or less how an ordinary semiconductor diode works: Crystal doped differently in different regions, carrier depletion where they meet, bla bla bla. However, actual diodes that one builds circuits with do not end with bits of n-doped and p-doped silicon. They're little ceramic/plastic packages with metal leads coming out of the ends. Somehow the current needs to pass between those metal leads and the semiconductor inside. And there's a problem. If I understand things correctly, a metal ought to be the ultimate n-carrier material -- every atom in the lattice contributes at least one electron to a conduction band. When we stick a metal lead onto the p-doped end of the semiconductor, we ought to get another pn-junction, one that goes in the wrong direction for forward current to flow. How come the entire component can conduct in the forward direction anyway? Is it just a matter of making the area of the silicon-metal interface so big that the total reverse leakage current of the p/metal junction is greater than the forward current we want the entire diode to carry? (I'm imagining large volumes of finely interdigitated metal and silicon for multi-ampere rectifiers). Or is there something else going on? | There is a type of diode called a Schottky diode, which is basically a metal-semiconductor junction, so it raises the question, how do you form a metal contact with any semiconductor device, not just a diode. The answer lies in why a metal-semi junction exhibits diode behaviour in some circumstances. First we need to look quickly at the difference between metal and n-type and p-type semiconductors. Metals are a continuous band of electron states. Electrons prefer to be in the lower states, so this is show with the shaded brown region. The red line indicates the average energy level (Fermi level) which in the metal is basically how "full" it is with electrons. 
There is then an escape energy where electrons are no longer bound to the structure - they become free. This is shown as the work function \$\phi_m\$ . For semiconductors, the bands are a little different. There is a gap in the middle where electrons don't like to be. The structure is split into the valence band which is typically full of electrons, and the conduction band which is typically empty. Depending on how much the semiconductor gets doped, the average energy will change. In n-type, additional electrons are added to the conduction band which moves the average energy up. In p-type electrons are removed from the valence band, moving the average energy down. When you have a discrete junction between the metal and semiconductor regions, in simplistic terms it causes bending of the band structure. The energy bands in the semiconductor curve to match those of the metal at the junction. The rules are simply that the Fermi energies must match across the structure, and that the escape energy level must match at the junction. Depending on how the bands bend will determine whether and an inbuilt energy barrier forms (a diode). Ohmic Contact using Work Function If the metal has a higher work function than an n-type semiconductor, the bands of the semiconductor bend upwards to meet it. This causes the lower edge of the conduction band to rise up causing a potential barrier (diode) which must be overcome in order for electrons to flow from the conduction band of the semiconductor into the metal. Conversely if the metal has a lower work function than the n-type semiconductor, the bands of the semiconductor bend down to meet it. This results in no barrier because electrons don't need to gain energy to get into the metal. For a p-type semiconductor, the opposite is true. 
The metal must have a higher work function than the semiconductor because in a p-type material the majority carriers are holes in the valence band, so electrons need to flow from the metal out into the semiconductor. However, this type of contact is rarely used. As you point out in the comments, the optimal current flow is the opposite from what we need in the diode. I chose to include it for completeness, and to look at the difference between the structure of a pure Ohmic contact and a Schottky diode contact. Ohmic contact using Tunnelling The more common method is to use the Schottky format (which forms a barrier), but to make the barrier larger - sounds odd, but it's true. When you make the barrier larger, it gets thinner. When the barrier is thin enough, quantum effects take over. The electrons can basically tunnel through the barrier and the junction loses its diode behaviour. As a result, we now form an Ohmic contact. Once electrons are able to tunnel in large numbers, the barrier basically becomes nothing more than a resistive path. Electrons can tunnel both ways through the barrier, i.e., from metal to semi, or from semi to metal. The barrier is made higher by more heavily doping the semiconductor in the region around the contact, which forces the bend in the bands to be larger because the difference in Fermi level between the metal and semiconductor gets larger. This in turn results in a narrowing of the barrier. The same can be done with a P-type. The tunnelling occurs through the barrier in the valence band. Once you have an Ohmic connection with the semiconductor, you can simply deposit a metal bond pad onto the connection point, and then wire bond those to the diode's metal pads (SMD) or legs (through-hole). | {
"source": [
"https://electronics.stackexchange.com/questions/245201",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/18339/"
]
} |
245,400 | I am looking at this MCU and was wondering if it makes sense to use an external crystal. Extracted from the datasheet pg1, *Clock management – 4 to 32 MHz crystal oscillator – 32 kHz oscillator for RTC with calibration
– Internal 8 MHz RC with x6 PLL option
– Internal 40 kHz RC oscillator – Internal 48 MHz oscillator with automatic trimming based on ext. synchronization* The internal oscillator can be up to 48 MHz. The external crystal is between 4 and 32 MHz. Why would one use an external crystal when the internal one is faster, at 48 MHz, given that an external crystal costs money and occupies space? When should one use an external crystal? | The internal oscillator is much less stable than an external crystal oscillator. If I'm reading the datasheet correctly, the internal 48 MHz oscillator is only factory calibrated to within 2.9% of the specified frequency - not even good enough for RS-232. There are ways to synchronize it to an external clock; I think it's designed to be used in a USB device situation where you can lock the PLL to the USB bitstream. An external crystal is typically accurate to around 20 ppm, parts-per-million. That's 0.002% from the specified frequency. If you need even better, there are even temperature compensated, ovenized crystal oscillators. Additionally, you may want an exact clock speed at a different frequency, typically for communication with a device or master over an asynchronous communications channel. For this you might need an oscillator at for example 29491200 Hz (115200*256). | {
"source": [
"https://electronics.stackexchange.com/questions/245400",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/76826/"
]
} |
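The baud-rate argument in the answer above can be checked with a few lines of arithmetic. The ~2% total error budget for an asynchronous UART link is a commonly quoted rule of thumb, assumed here for illustration:

```python
# Compare each clock source's error against a rough UART error budget.
INTERNAL_RC_ERROR = 0.029   # 2.9% factory calibration (from the datasheet quote)
CRYSTAL_ERROR = 20e-6       # 20 ppm, typical for an external crystal
UART_BUDGET = 0.02          # ~2% total mismatch is a common rule-of-thumb limit

baud = 115200
print(f"Internal RC: off by up to {INTERNAL_RC_ERROR * baud:7.1f} baud "
      f"({'FAILS' if INTERNAL_RC_ERROR > UART_BUDGET else 'ok'})")
print(f"Crystal    : off by up to {CRYSTAL_ERROR * baud:7.1f} baud "
      f"({'FAILS' if CRYSTAL_ERROR > UART_BUDGET else 'ok'})")

# The 29491200 Hz figure from the answer is an exact multiple of 115200:
assert 29491200 == 115200 * 256
```

The internal RC's 2.9% error alone exceeds the whole link budget, while the crystal's 20 ppm uses a negligible fraction of it - which is why RS-232 work calls for a crystal.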
245,610 | Current is the amount of electrons passing through a wire. Can we say that voltage is the speed of those electrons? | Is voltage the speed of electrons? No, it's not the speed of the electrons moving within the conductor. The volt is a unit of potential energy per charge. An example: imagine we have a ball of mass M = 10 kg. This ball exists in a conservative gravitational field (the Earth's gravitational field). If we want to raise it by a height of 1 meter, we must - somehow - supply an amount of energy X that gives the ball enough speed to reach 1 m above the surface. We give the ball this energy in the form of kinetic energy (speed). So we throw the ball upwards with some speed; as the ball moves upward, its speed decreases and its potential energy increases, until it stops and all the kinetic energy has been converted to potential energy. The following picture shows the amount of potential energy for a ball of mass M = 10 kg at different heights above sea level. But what if we want to make a generic scale? For any ball of arbitrary mass, at any height, we can state the amount of energy for every 1 kg of it (energy per mass). Now we can say that, at a height of 3 meters above sea level, any object of mass X has 29.4 joules of energy for every 1 kg of mass. This is due to the Earth's gravitational field. Voltage, or electric potential, is the amount of potential energy (joules) that any charged body within an electric field has, for every 1 coulomb of electric charge in it.
"source": [
"https://electronics.stackexchange.com/questions/245610",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/111665/"
]
} |
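The gravitational analogy above maps directly onto a tiny computation. The value g = 9.8 m/s² is implied by the answer's 29.4 J-per-kg figure at 3 m; the mapping of J/kg to J/C (volts) follows the analogy, not any new physics:

```python
g = 9.8  # m/s^2, the value implied by the answer's 29.4 J at 3 m

def gravitational_potential(height_m):
    """Potential energy per kilogram (J/kg) at a given height - analogous to
    volts, which are potential energy per coulomb (J/C)."""
    return g * height_m

def electric_potential(energy_joules, charge_coulombs):
    """Volts = joules per coulomb."""
    return energy_joules / charge_coulombs

# At 3 m, every kilogram carries 29.4 J of potential energy, matching the answer:
print(gravitational_potential(3.0))   # ~29.4 J/kg
# Likewise, 29.4 J carried by 1 C of charge corresponds to 29.4 V:
print(electric_potential(29.4, 1.0))  # 29.4 V
```

In both cases the quantity is "energy per unit of stuff in the field" - per kilogram for gravity, per coulomb for electricity - and neither says anything about how fast the kilogram or the coulomb is moving.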
246,699 | Why are there 3.15A fuses? Did someone decide that \$\pi\$A was a good rating?
Or is it \$\sqrt{10}\$A they're aiming for? Is it even possible to make fuses with better than +/-5% tolerance? | Each fuse rating is about 1.26x higher than the previous value. Having said that, preferred values do tend to be located at slightly easier-to-remember numbers: -

100 mA to 125 mA has a ratio of 1.25
125 mA to 160 mA has a ratio of 1.28
160 mA to 200 mA has a ratio of 1.25
200 mA to 250 mA has a ratio of 1.25
250 mA to 315 mA has a ratio of 1.26
315 mA to 400 mA has a ratio of 1.27
400 mA to 500 mA has a ratio of 1.25
500 mA to 630 mA has a ratio of 1.26
630 mA to 800 mA has a ratio of 1.27
800 mA to 1000 mA has a ratio of 1.25

315 mA just happens to span quite a large gap between 250 mA and 400 mA, so I suppose the ratio-halfway point should really be \$\sqrt{250\times 400}\$ = 316.2 mA. Near enough! But the bottom line is that consecutive fuses (in the standard range shown above) are "spaced" \$10^{1/10}\$ apart in ratio, or 1.2589:1. See the picture below, taken from this wiki page on preferred numbers: -

These numbers are not unheard-of in audio circles either. The 3rd-octave graphic equalizer: -

See also this question about why the number "47" is popular for resistors and capacitors.

Is it even possible to make fuses with better than +/-5% tolerance? I expect it is, but fuses don't dictate performance, only functionality, so tight tolerances are not really needed. Resistors, on the other hand, totally dictate performance in some analogue circuits, so tight tolerances (down to 0.01%) are definitely needed.
"source": [
"https://electronics.stackexchange.com/questions/246699",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/13590/"
]
} |
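The R10 preferred-number spacing described in the answer above is easy to verify numerically. The rounding of the exact geometric values to "preferred" figures like 315 rather than 316 is assumed to follow the published R10 table:

```python
import math

# The R10 preferred-number series: ten steps per decade, ratio 10^(1/10).
step = 10 ** (1 / 10)
print(f"step ratio = {step:.4f}")  # 1.2589

# Exact R10 values over one decade, next to the rounded fuse ratings (mA):
fuse_ratings = [100, 125, 160, 200, 250, 315, 400, 500, 630, 800]
for k, rating in enumerate(fuse_ratings):
    exact = 100 * step ** k
    print(f"{rating:4d} mA  (exact R10 value: {exact:6.1f} mA)")

# 315 mA sits at the geometric mean of its neighbours, 250 mA and 400 mA:
print(math.sqrt(250 * 400))  # 316.2...
```

Every consecutive pair of ratings comes out within a rounding error of the 1.2589:1 ratio, and the "odd" 315 mA value is just the rounded \$100\times10^{5/10} = \sqrt{10}\times 100 \approx 316\$ mA point of the series.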