https://evopipes.lv/en/products/electrical-installations/evoel-fl-0h-smart

Low resistance, flexible, halogen-free conduits
Compression strength: 320 N/5 cm, Classification: 22433
EVOEL FL-0H-SMART is a flexible, halogen-free electrical installation conduit made of a special light grey (RAL 7035) plastic material, with an orange inner gliding layer.
The conduit features low mechanical resistance, high thermal resistance, and high flexibility at constant cross-section parameters. The special structure of the inner surface, with its outstanding gliding properties, allows cable pulling distances to be extended and installation time to be reduced.
- Available sizes: 16, 20, 25, 32, 40, 50 [mm]
- Available in rolls of 100, 50 or 25 [m], depending on the diameter of the conduit
- Internal layer offers outstanding gliding properties
- Extended cable pulling distances
- Reduced installation time
- Conduit is made from halogen-free material
Due to the use of the halogen-free, thermally resistant material, the conduits are suitable for simple concealed installations as well as for installations in hollow walls, partitions, and suspended ceilings in public buildings: schools, kindergartens, hospitals, hotels, theatres, cinemas, museums, stadiums, arenas, shopping centres, airports, railway terminals, and office buildings.
- Installations in hollow walls
- Concealed installations
- Power distribution rooms and substations
- Private buildings
- Public buildings
- Multi-apartment buildings (with more than 5 floors)
- Material: a special halogen-free plastic
- Compression strength: low (320 N/5 cm)
- Impact strength: low
- Temperature resistance: from -25°C to +105°C
- Non-flame-propagating, self-extinguishing
Pipe dimensions

| Dimensions | DN16 | DN20 | DN25 | DN32 | DN40 | DN50 |
|---|---|---|---|---|---|---|
| Outer Ø [mm] | 16 | 20 | 25 | 32 | 40 | 50 |
| Inner Ø [mm] | 11.6 | 14.7 | 19.1 | 24.6 | 31.5 | 40.2 |
| Bend radius ≥ [mm] | 60 | 80 | 100 | 130 | 170 | 220 |
R- Bend radius
D- Bend diameter
The SMART conduits are manufactured in compliance with standards:
- EN 61386-1:2018
- EN 61386-22:2004+AC/A11:2011
| Code | DN/OD | Length [m] | On pallet [m] |
* Available with a metal wire for pulling of cables
Product pictures are provided for informative purposes only.
Proportions and colours of the original production may differ from the pictures.
http://earlylearningcomm-otter.squarespace.com/otterblog/2016/5/28/optometry-school-visit

On Monday, we went on a field trip to Pacific University's Optometry school to learn more about light and how our eyes work. Our trip was filled with learning about eye anatomy, experiencing optical illusions, and even seeing a real brain! We learned that there is a hole in our eye (the pupil!) that light enters our eye through, allowing us to see.
We felt like real optometry students in the lecture hall, anatomy classroom, and the clinic lab. Taking a closer look at an eye was fascinating, we even saw a contact lens sitting on the eye! Many of our questions were answered on the trip; we had been wondering why it is painful and bad to look directly at the sun. It is because ultraviolet radiation from the sun can burn our eyes, just like it does our skin when you get a sunburn. Thank you Dr. Horn for teaching us more about light, vision, and eyes!
http://laser-led-lamp-safety.seibersdorf-laboratories.at/fuer-hersteller/pruefstelle-leds-lampen?fsize=2

We test LEDs and lamps (luminaires) and assign them to risk groups according to the standard IEC 62471 (in Europe: EN 62471). The risk group provides basic information as to how long a person can be exposed to radiation at the given reference distance before the biological exposure limits are exceeded. To determine the risk group correctly, the spectral irradiance or the spectral radiance (depending on the type of limit) has to be determined at the reference distance.
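At the core of that determination is a weighted spectral integral: the measured spectral irradiance is multiplied by a biological hazard weighting function and summed over wavelength, and the result is compared against an exposure limit. The sketch below illustrates only the arithmetic; the weighting values, limit, and spectrum are made-up placeholders, not the actual IEC 62471 tables.

```python
import numpy as np

# Illustrative weighting step behind IEC 62471 risk grouping:
# effective irradiance = sum over wavelength of E(lambda) * S(lambda) * d_lambda.
wavelengths_nm = np.arange(300, 401, 10)            # measurement grid, nm
E_spectral = np.full(wavelengths_nm.size, 1e-3)     # W/(m^2*nm), dummy spectrum
S_weight = np.exp(-(wavelengths_nm - 300) / 40.0)   # dummy hazard weighting

delta_lambda = 10.0                                 # nm, grid spacing
E_eff = np.sum(E_spectral * S_weight) * delta_lambda  # W/m^2, effective irradiance

exposure_limit_J_per_m2 = 30.0                      # illustrative limit, not the standard's
t_max_s = exposure_limit_J_per_m2 / E_eff           # permissible exposure time
print(f"effective irradiance: {E_eff:.4e} W/m^2, t_max: {t_max_s:.0f} s")
```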
Especially the measurement of the biologically effective radiance by means of imaging methods requires considerable metrological know-how (e.g. consideration of the time dependence of the angle of acceptance). Despite the complexity of the subject, we have developed our own evaluation procedure in order to offer our clients accurate measurements while keeping effort and cost to a minimum.
For radiation in the UV range it is necessary to use double monochromators, as diode arrays do not offer sufficient dynamic range and stray light suppression. Our test centre is equipped with a portable double monochromator optimized for testing solar simulators, as well as a larger double monochromator with computer-controlled grating stages for measurement over a large spectral range in one run.
The standard IEC 62471 is identical to the standard CIE S009 and as EN 62471 it is listed as a harmonised standard under the low voltage directive.
>> testing standards
https://www.bajajauto.com/bajajqute/other-commuters-safety.aspx

We believe safety is for all. Hence, when it comes to vehicles in an urban setting, safety of the people around is equally important for us. While most 'so-called safe vehicles' may not be able to provide it, here is Qute starting a trend.
You probably know that the momentum of an object is the product of its mass and velocity. Qute is lightweight with a restricted top speed of 70 km/h. This keeps its momentum low compared to that of a bigger, heavier vehicle and minimises damage on impact, ensuring the safety of other commuters.
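A quick back-of-envelope comparison makes the point. The masses below are illustrative guesses, not official specifications:

```python
# Rough momentum comparison between a light quadricycle and a heavier car.
def momentum_kg_m_s(mass_kg: float, speed_kmh: float) -> float:
    return mass_kg * (speed_kmh / 3.6)   # convert km/h to m/s

light = momentum_kg_m_s(mass_kg=450, speed_kmh=70)     # ~450 kg assumed
sedan = momentum_kg_m_s(mass_kg=1500, speed_kmh=100)   # typical mid-size car

print(f"light vehicle: {light:,.0f} kg*m/s")
print(f"heavier car:   {sedan:,.0f} kg*m/s")  # several times the momentum
```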
https://www.jcgced.com/news/p/item/34742/kstate-receives-nrc-funding-for-nuclear-engineering-fellowship-program

K-State receives NRC funding for Nuclear Engineering Fellowship Program
23 Apr 2021
MANHATTAN — The U.S. Nuclear Regulatory Commission, or NRC, has funded a new Nuclear Engineering Fellowship Program to provide financial support and mentoring to at least three Kansas State University nuclear engineering doctoral students.
Students selected for the four-year, $400,000 program — under the direction of Amir Bahadori, associate professor and Steve Hsu Keystone research scholar in the Alan Levin Department of Mechanical and Nuclear Engineering — will perform research in areas of interest to the NRC.
Collaborators on the project — all K-State associate professors of mechanical and nuclear engineering and Steve Hsu Keystone research scholars — include Walter McNeil, Jeremy Roberts and Hitesh Bindra.
http://cubajournal.co/elevated-seismic-activity-continues-in-cuba/

Elevated seismic activity is continuing in Cuba, with earthquakes of over 4.0 magnitude on consecutive days this week, according to the United States Geological Survey.
There were two quakes on Monday, with one at 4.8 on the Richter scale; that quake, whose epicenter was south of Santiago de Cuba, caused weak shaking in the Guantanamo area.
An earlier quake of 4.4-magnitude also struck near Santiago de Cuba in Cuba’s south. There were no reports of damage or injuries, however.
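Magnitude scales are logarithmic, so even a 0.4-magnitude gap is substantial. Using the standard Gutenberg-Richter energy approximation, log10 E ≈ 1.5M + 4.8 (E in joules), the two quakes above can be compared directly:

```python
def radiated_energy_joules(magnitude: float) -> float:
    # Gutenberg-Richter approximation: log10(E) = 1.5*M + 4.8 (E in joules)
    return 10 ** (1.5 * magnitude + 4.8)

ratio = radiated_energy_joules(4.8) / radiated_energy_joules(4.4)
print(f"M4.8 releases about {ratio:.1f}x the energy of M4.4")  # ~4x
```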
The shaking came after several similar quakes in Cuba last week.
While Cuba is part of a wider Caribbean region that is seismically active, such quake activity is generally rare on the island, particularly in this kind of time concentration.
— Cuba Journal Staff
https://www.icebookshop.com/product/structural-dynamics-for-engineers-2nd-edition

Structural Dynamics for Engineers, Second edition is the essential introduction to the dynamics of civil engineering structures for students of structural engineering and graduate engineers.
This book uses carefully-selected worked examples to instil an understanding of the theories underlying widely-used computer analysis systems and show readers how to carry out simple hand calculations in structural dynamics. The methods presented enable readers to check the validity of their results and eliminate errors in their calculations.
• Worked examples in every chapter demonstrate the use of the theories presented.
• Additional purpose-written problems allow you to practice your skills.
• Covers the implementation of damping in design and analysis and the use of dampers to reduce vibration in dynamically sensitive structures.
• Addresses the use of power spectra to predict responses to wind and earthquakes.
• Helps readers to understand and implement modern design codes, which increasingly require knowledge of vibration caused by man or the environment.
Structural Dynamics for Engineers, Second edition provides student and graduate engineers with a clear understanding of the evaluation of structural dynamics using simple methods.
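As a taste of the kind of simple hand calculation the book refers to, here is a generic single-degree-of-freedom example; the mass, stiffness, and damping values are assumed for illustration and are not taken from the text:

```python
import math

# SDOF structure: natural frequency and damped period.
m = 2.0e4        # mass, kg (assumed)
k = 8.0e6        # lateral stiffness, N/m (assumed)
zeta = 0.05      # damping ratio, typical for a steel frame

omega_n = math.sqrt(k / m)                    # rad/s, undamped
omega_d = omega_n * math.sqrt(1 - zeta**2)    # rad/s, damped
print(f"f_n = {omega_n / (2 * math.pi):.2f} Hz, "
      f"T_d = {2 * math.pi / omega_d:.3f} s")
```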
https://www.sbtautoaccessories.com/news/what-is-the-difference-between-car-polarized-glasses-and-car-anti-glare-glasses/

Car polarized glasses and car anti-glare glasses are two different types of eyewear designed to improve driving safety. While they may seem similar at first glance, there are key differences between the two.
Difference between car polarized glasses and car anti-glare glasses
Car polarized glasses use polarized lenses to reduce glare. These lenses are made from a special material that filters out horizontally polarized light, the component that dominates glare reflected from flat surfaces. The lens's transmission axis is oriented vertically, so only vertically polarized light passes through. This reduces the amount of glare and brightness from reflections off road surfaces or other vehicles, improving visibility and driving safety.
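The attenuation of polarized light by an ideal polarizer follows Malus's law, I = I0 cos²(θ), where θ is the angle between the light's polarization and the lens's transmission axis. A minimal sketch:

```python
import math

def transmitted_intensity(I0: float, theta_deg: float) -> float:
    """Malus's law: intensity of polarized light after an ideal polarizer."""
    return I0 * math.cos(math.radians(theta_deg)) ** 2

# Horizontally polarized road glare vs. a vertical transmission axis:
print(transmitted_intensity(1.0, 90))   # ~0 -> glare blocked
print(transmitted_intensity(1.0, 0))    # 1.0 -> aligned light passes
```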
Car anti-glare glasses use anti-glare coatings on the lenses to reduce glare. These coatings are designed to scatter and absorb light reflected off road surfaces or other vehicles, reducing the amount of glare that enters the driver's eyes. The coating is applied to the lens surface using special processes, absorbing light waves and redirecting them in random directions, reducing the amount of light entering the driver's eyes.
Car polarized glasses and car anti-glare glasses are designed to improve driving safety by reducing glare and brightness from reflections off road surfaces or other vehicles. Polarized lenses filter out horizontally polarized light using a special material, while anti-glare coatings scatter and absorb light reflected off the lens surface using special processes. Car polarized glasses provide better contrast and color distinction, while car anti-glare glasses may offer additional UV protection. It is important to choose the correct type of eyewear based on your driving needs and preferences.
Post time: Oct-08-2023
https://ledmirrorworld.com.au/blogs/led-mirrors/the-science-behind-led-mirrors-how-they-work

LED mirrors are not just stylish additions to bathrooms and dressing areas; they are marvels of modern technology. Understanding the science behind how they work can enhance our appreciation of these innovative fixtures. This article delves into the mechanics and technology of LED mirrors.
- The Basics of LED Technology
At the heart of LED mirrors are Light Emitting Diodes (LEDs). Unlike traditional incandescent bulbs that produce light by heating a filament, LEDs create light through electroluminescence. When electric current passes through a microchip, it illuminates the tiny light sources we call LEDs.
- Energy Efficiency of LEDs
One of the primary advantages of LED technology is its energy efficiency. LEDs require significantly less electricity to produce the same amount of light compared to traditional lighting. This efficiency is due to their ability to produce more lumens per watt.
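Luminous efficacy (lumens per watt) makes the comparison concrete. The figures below are typical ballpark values, not measurements from any specific product:

```python
# Ballpark efficacy comparison (illustrative values, not product specs).
fixtures = {
    "incandescent": 15,   # lm/W, typical
    "halogen": 20,
    "LED": 100,
}
target_lumens = 800  # roughly a 60 W incandescent's output
for name, lm_per_w in fixtures.items():
    watts = target_lumens / lm_per_w
    print(f"{name:>12}: {watts:5.1f} W for {target_lumens} lm")
```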
- Longevity and Durability
LEDs have a much longer lifespan than traditional bulbs. They can last tens of thousands of hours, reducing the need for frequent replacements. This longevity is partly due to the low heat production of LEDs, which minimizes wear and tear on the components.
- The Structure of LED Mirrors
An LED mirror typically consists of a glass surface with an integrated layer of LED strips. These strips can be positioned around the edges or behind the mirror to create different lighting effects. The LEDs are covered with a diffuser to spread the light evenly and reduce glare.
- Color Temperature and Brightness
LED mirrors often feature adjustable color temperatures and brightness. This is achieved through varying the current and the color of the LEDs. Warmer lights are produced by LEDs emitting a more yellow hue, while cooler lights are achieved with bluer tones.
- Touch Sensors and Dimming
Modern LED mirrors incorporate touch sensors and dimming capabilities. This is achieved through electronic circuits that control the LED operation. Touch sensors work by detecting the electrical capacity change when a person's finger is near, allowing for on/off and dimming control.
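Dimming in LED fixtures is commonly implemented with pulse-width modulation (PWM): the LEDs are switched rapidly and perceived brightness tracks the duty cycle. The sketch below shows the general control idea only; the mirror's actual firmware is not public, and the function and gamma value here are illustrative assumptions:

```python
def pwm_duty_from_touch(touch_level: float, gamma: float = 2.2) -> float:
    """Map a normalized touch/dim setting (0..1) to a PWM duty cycle.

    A gamma curve is often applied because perceived brightness is
    nonlinear in duty cycle; gamma = 2.2 is a common approximation.
    """
    touch_level = min(max(touch_level, 0.0), 1.0)  # clamp input
    return touch_level ** gamma

for setting in (0.0, 0.25, 0.5, 1.0):
    print(f"setting {setting:.2f} -> duty {pwm_duty_from_touch(setting):.3f}")
```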
- The Anti-Fog Function
Many LED mirrors have an anti-fog feature, which is essentially a demister pad – a heating element attached to the back of the mirror. It gently heats the mirror's surface to prevent condensation, keeping the mirror clear in humid conditions.
- Safety and LED Mirrors
Safety is a crucial aspect of LED mirror design. The LEDs in these mirrors operate at a low voltage, typically 12V or 24V, minimizing electrical hazards. Additionally, the construction materials are often water-resistant to ensure safe operation in wet bathroom environments.
- Installation and Power Supply
Installing an LED mirror involves connecting it to a power source. This can be direct wiring to the home's electrical system or a plug-in configuration. The mirror's LED system is designed to be energy-efficient, ensuring that even with regular use, power consumption remains low.
- Environmental Impact
The eco-friendly aspect of LED mirrors is significant. LEDs are mercury-free, unlike some traditional bulbs, and their long life reduces waste. Their low energy consumption also lessens the environmental impact.
LED mirrors are a perfect blend of technology and design, offering functionality, efficiency, and style. The science behind them, from the energy-efficient LEDs to the sophisticated electrical circuits for touch control, represents a remarkable advance in lighting and mirror technology. Beyond their aesthetic appeal, LED mirrors are a testament to the innovative use of technology to enhance our daily lives. To explore a range of LED mirrors that combine advanced technology with elegant design, visit our website at ledmirrorworld.com.au, where we offer an array of options to fit your style and technical needs.
https://coptool.com/using-welding-mask-watch-solar-eclipse/

This Monday the 21st, the moon will fully eclipse the sun for a narrow band across the US. During a total eclipse you can actually look at the sun without any protection while it is fully behind the moon and not be harmed, because the sun is completely covered. However, if even a tiny sliver of the sun is visible (which will be the case for the other 99.9+% of the country at any given time), you will need to wear serious eye protection. You can check out the American Astronomical Society (AAS.org) for legit vendors of "Eclipse Glasses", but it's not pretty; from their site: Note: It is now too late to buy solar viewers in time for August 21st. Virtually all vendors are sold out, whether or not they're listed as such below. See our pinhole projection page for other ways to enjoy the partial phases of the eclipse.
Good news is this is just another excuse to buy that Iron Man welding mask you've been thinking about! Unfortunately you probably can't order online and have it by Monday, but perhaps one of these masks at Home Depot can be ordered for local pickup this weekend. While there is a wide range of price points for welding masks, the important specification is that the mask must be shade 12 or 13 for sufficient protection from the sun. The darker the better, but at shade 14 you may not be able to see much. Welding glasses or dark green face shields rated IR3 or IR5 are not protective enough for staring at the sun and can cause serious eye damage. ***WARNING*** if the auto-darkening doesn't come on and stay on (with no flickering) you are not protected, so make sure it works correctly.
Masks like the Save Phace have adjustable shade controls (9-13), so just make sure to crank it up to at least 12 with the most sensitivity possible! If your mask does not have an adjustable knob, check with the manufacturer to get the rating.
We are not scientists but the team at NASA has a bunch of them and we believe them, which is where we are getting our info – (https://eclipse2017.nasa.gov/safety) “Viewing with Protection — Experts suggests that one widely available filter for safe solar viewing is welders glass of sufficiently high number. The only ones that are safe for direct viewing of the Sun with your eyes are those of Shade 12 or higher. These are much darker than the filters used for most kinds of welding. If you have an old welder’s helmet around the house and are thinking of using it to view the Sun, make sure you know the filter’s shade number. If it’s less than 12 (and it probably is), don’t even think about using it to look at the Sun. Many people find the Sun too bright even in a Shade 12 filter, and some find the Sun too dim in a Shade 14 filter — but Shade 13 filters are uncommon and can be hard to find.”
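For the curious, shade numbers map to light transmission on a logarithmic scale. Under the EN 169 convention (which the NASA text does not spell out, so treat this as background), scale number N relates to luminous transmittance τ roughly as N = 1 + (7/3)·log10(1/τ), meaning each step up cuts transmitted light by about a factor of 2.7:

```python
def transmittance(shade_number: float) -> float:
    # EN 169 convention: N = 1 + (7/3) * log10(1/tau)
    return 10 ** (-3 * (shade_number - 1) / 7)

for n in (5, 10, 12, 13, 14):
    print(f"shade {n:>2}: transmits {transmittance(n):.2e} of visible light")
```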
Enjoy the Eclipse Monday as it passes overhead but be Safe doing it!!! Check out Ohio Power Tool for all your Safety & PPE needs.
https://riequip.co.nz/collections/concrete-vibrators

When concrete is being poured it has the potential to form air bubbles or air pockets, meaning that when it hardens the concrete structure could end up being substantially weaker. When you have a construction project that involves concrete, inserting a concrete vibrator will vigorously shake the concrete and the aggregate particles around it, forcing the entrapped air out of the concrete, closing the air bubbles together, and settling the area of concrete. Vibrating concrete therefore increases the density of the concrete and makes it much stronger and more durable.

Not only is it important to use a high quality concrete vibrator, but the person using one also needs to be trained properly. If the concrete has been poorly vibrated then it will still have voids in it, a problem called 'honeycombing'. For the integrity of a concrete structure, it's important that the concrete is not only flat, but that all the air has been adequately vibrated out of it. Otherwise, the concrete will look unsightly, be porous, and lack strength.
Tips For Using A Concrete Vibrator
There are some techniques that can be applied when using a concrete vibrator to help you get the best result possible.
- Be ready with the concrete vibrator before the concrete is poured.
- Don’t turn on the concrete vibrator until the tip is fully submerged in the concrete.
- Concrete vibrators need to be inserted into wet concrete vertically for 5 to 16 seconds before it is removed slowly.
- The concrete vibrator isn’t to be used to push around or move the concrete.
- Concrete vibrators can consolidate an area of about 6 inches around the head, so the vibrator will need to be inserted at several points across the pour, depending on the size of the project (a spacing sketch follows this list).
- Overlap the previous radius of vibration to ensure all the concrete is adequately vibrated.
- You will see if consolidation has occurred when air bubbles rise out of the concrete.
- You may also notice that a thin film of water appears on the surface. This is normal as water is lighter than cement.
- When the bubbles stop you have vibrated that area correctly and can move on to the next spot. You will need to continue the process again and again until the entire area of concrete has been vibrated.
- Be careful not to over vibrate the concrete as this can cause a thick layer of water to appear on the concrete, and this could sink it which also lowers the strength of the concrete.
- Don’t bend the concrete vibrator excessively as this can damage the flexi shaft.
Concrete Vibrators From Leading Brands
No matter the size of the area, Riequip has a concrete vibrator which will help you to get the job done efficiently. Our available brands are Vibe Tech, Northrock, and Altrad Belle. If you need assistance in deciding the right concrete vibrator for your project or business needs, then call us today on 0800 378 478 and we'll get you sorted.
http://www.tuttlesvc.org/2012/04/science.html

This is an apology to Lorri Lee Lown. I was wrong about something and insisted that I was right. Doubts arose and I had to check my assumptions (based on experience): that the increased friction caused by heavier riders, plus the increased drag due to their size, would overcome the increased gravitational pull. I was wrong enough that I couldn't even push the numbers to fake it. I'm man enough to admit it. It doesn't happen all the time. This time I was wrong.
Simply put, given similar parameters, a heavier bike and rider will go faster coasting down a hill than a lighter rider.
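The physics: the gravitational driving force scales with mass, while aerodynamic drag scales with frontal area, which grows more slowly than mass. Balancing m·g·sin(θ) against ½·ρ·Cd·A·v² gives a terminal coasting speed v = sqrt(2·m·g·sin(θ) / (ρ·Cd·A)). A sketch with assumed rider numbers (rolling resistance ignored):

```python
import math

def terminal_speed_kmh(mass_kg, CdA_m2, grade=0.06, rho=1.225, g=9.81):
    """Coasting speed where gravity along the slope balances air drag.

    grade approximates sin(theta) for small slopes; rolling resistance
    is ignored for simplicity.
    """
    v = math.sqrt(2 * mass_kg * g * grade / (rho * CdA_m2))
    return v * 3.6

# Illustrative riders: the heavier one has more mass but only modestly
# more frontal area, so the heavier bike+rider coasts faster.
print(f"70 kg bike+rider:  {terminal_speed_kmh(70, 0.40):.1f} km/h")
print(f"105 kg bike+rider: {terminal_speed_kmh(105, 0.45):.1f} km/h")
```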
https://kstronics7.wordpress.com/2013/02/26/inside-the-741-op-amp/

The type 741 operational amplifier is the basic model for a wide range of commercial devices. Many different manufacturers produce similar or equivalent devices and they all have variations in their designations, but the digits "741" are part of the designation in most cases. We'll use the Signetics device, designated the µA741, as our example here.
The numbers in parentheses at the external connections for the above schematic diagram refer to the terminal pinouts for the 8-pin IC package. The pin numbers are the same for both the 8-pin mini-DIP package and the 8-pin round Type-T metal can. In both cases, pin 8 has no connection.
There are a number of interesting points about this circuit. First, the input transistors are connected as npn emitter followers, feeding their outputs directly to a pair of pnp transistors configured as common-base amplifiers. This configuration isolates the inputs, preventing signal feedback that might otherwise have some harmful frequency-dependent effects.
Note the two pairs of transistors shown in red. One transistor in each pair has its collector connected to its base, as well as to the base of the other transistor. In addition, the transistor emitters are connected together, in this case to the V+ power source. In some diagrams, the transistor with the collector and base shorted together is rendered as a diode; that rendering shows how the other transistor is biased, but it doesn't convey the full value of this configuration.
This arrangement is known as a current mirror. The two transistors are manufactured side by side on the same silicon die, at the same time. Thus, they have essentially identical characteristics. The controlling transistor (on the left in each pair) will necessarily set its emitter-base voltage to exactly that value that will sustain the collector current it is carrying, even down to fractions of a millivolt. In so doing, it also sets the emitter-base voltage of the second transistor to the same value. Since the transistors are essentially identical, the second transistor will carry exactly the same current as the first, even to an independent circuit.
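The defining property is that the output transistor copies whatever reference current is forced through the diode-connected transistor. A quick sketch of the reference-side arithmetic, with supply and resistor values assumed purely for illustration:

```python
# Current mirror reference arithmetic (illustrative values).
V_supply = 15.0   # V, assumed supply
V_BE = 0.625      # V, typical silicon base-emitter drop
R_ref = 39_000    # ohms, assumed reference resistor

I_ref = (V_supply - V_BE) / R_ref   # current through the diode-connected side
I_out = I_ref                       # mirrored to the second transistor
print(f"I_ref = I_out = {I_ref * 1e3:.3f} mA")
```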
The use of a current mirror on the input circuit allows the inputs to accommodate large common-mode voltage swings without exceeding the active range of any transistor in the circuit. The second current mirror in red provides a constant-current active load for the output circuitry, again without regard for the actual output voltage.
A third current mirror, shown in blue, is a bit different. That 5K resistor in series with the emitter of the mirrored transistor limits its collector current to virtually nothing. Thus, it serves as a high-impedance connection to the negative power supply, providing a reference without loading the input circuit. This particular circuit is therefore able to provide the slight base bias current needed for the PNP transistors in the differential input circuit, while allowing those transistors to operate correctly over a wide common-mode input voltage range.
The final odd circuit within the op amp is shown in green. Here, the two resistors bias the transistor in what would seem to be an unusual way, since there is no apparent signal input to the base of the transistor. To understand its purpose, assume zero base current for a moment, and a VBE of 0.625 volt. Ohm’s Law then requires a current of 0.625 ÷ 7.5K = 0.0833mA through the 7.5K resistor. The same current must also flow through the 4.5K resistor, which will therefore exhibit a voltage drop of 0.0833mA × 4.5K = 0.375V. The total voltage across the two resistors, then, and therefore across the transistor, is 0.625V + 0.375V = 1.0V. This, then, is a simple voltage reference, providing an internal 1-volt difference without a connection to either power supply, nor to ground. This circuit floats internally, and provides its 1-volt bias regardless of the actual dc output voltage of the overall circuit.
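This floating reference is the classic VBE multiplier: the voltage across the pair is VBE·(1 + R_collector-base / R_base-emitter). A short check of the arithmetic in the paragraph above:

```python
# V_BE multiplier check, using the values from the text above.
V_BE = 0.625    # V, assumed base-emitter drop
R_be = 7_500    # ohms, resistor across base-emitter
R_cb = 4_500    # ohms, resistor from collector to base

I = V_BE / R_be                 # 0.0833 mA through the 7.5K resistor
V_total = V_BE + I * R_cb       # adds 0.375 V across the 4.5K resistor
assert abs(V_total - V_BE * (1 + R_cb / R_be)) < 1e-9
print(f"I = {I * 1e3:.4f} mA, V_total = {V_total:.3f} V")  # -> 1.000 V
```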
The offset null connections (pins 1 and 5) provide a simple way to balance out the internal variations and zero out the output offset which might be apparent with zero input voltage. It is used simply by connecting a trimmer potentiometer between pins 1 and 5, with the slider taken to the negative power supply. To adjust for zero offset, ground the input resistor and use the offset null potentiometer to set the output voltage precisely to zero.
The offset null terminals are not available in packages such as the 5558 and 1458, which put two independent op amps in a single 8-pin mini-DIP package.
http://www.creativityland.ca/free-your-thinking-in-six-minutes/

Feeling stuck? Here's a six-minute clip guaranteed to shift your thinking. Maybe it'll inspire your next creation. Nobel prize, anyone?
Neil deGrasse Tyson, Ph.D. (born October 5, 1958) is an American astrophysicist and science communicator. He is the Frederick P. Rose Director of the Hayden Planetarium at the Rose Center for Earth and Space, and a Research Associate in the Department of Astrophysics at the American Museum of Natural History. Since 2006 he has hosted the educational science television show NOVA scienceNOW on PBS, and is a frequent guest on The Daily Show, The Colbert Report, Real Time with Bill Maher, and Jeopardy!. Tyson will be hosting a new sequel to Carl Sagan's Cosmos: A Personal Voyage TV series.
http://[email protected]/artsandsciences.aspx?id=12874 | 2013-05-21T18:41:32 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700438490/warc/CC-MAIN-20130516103358-00099-ip-10-60-113-184.ec2.internal.warc.gz | 0.895991 | 317 | CC-MAIN-2013-20 | webtext-fineweb__CC-MAIN-2013-20__0__199710422 | en | A B.S. degree in computational physics reflects the increasing use of computer methods in physics and engineering. The program consists of a core physics curriculum with added course work in mathematics, computer science, and computational physics.
Requirements for a Bachelor of Science in Computational Physics
The following table summarizes all requirements for a Bachelor of Science in computational physics. For an overview of a particular course, click on its title. Our Course Descriptions page lists all physics and pre-engineering courses. For one possible breakdown of the major by semester, see the Plan of Study.
* The School of Arts and Sciences requires all majors to complete a foreign language course at the 202 level or higher. Students not prepared to begin at this level will need to take additional courses in the language, which count as general electives.
** It is recommended that some of the following courses be included among the electives:
- MATH 241, MATH 431, MATH 453, MATH 487;
- CS 131, CS 132, CS 256;
- CHEM 401, CHEM 402;
- PHYS 304, PHYS 309, PHYS 404, PHYS 408, PHYS 409, PHYS 410.
† The Clare College curriculum also includes a three-credit quantitative reasoning requirement. Computational physics majors satisfy this requirement by passing any of the courses listed in the above table under Mathematics.
The comprehensive requirement for physics majors is the passing of an oral examination (PHYS 490. Physics Senior Comprehensive) in the second semester of the senior year. | physics |
https://teaching-point.net/product/7th-grade-science/

This course addresses earth space science, physics, and biology. Science skills such as the scientific method, collecting, evaluating and analyzing data, and explaining scientific phenomena are taught and reinforced. Students will also access and process information from a variety of texts, use scientific skills and processes to explain the interactions of matter and energy and the energy transformations that occur, and they will analyze and display data. The labs and activities guide students step by step through the scientific method from constructing hypotheses, identifying variables, following procedures, collecting and examining data, and analyzing results. Pre-assessments are given at the beginning of units for information on students' readiness and quizzes and tests are given at the end of topics to ensure learning.
https://sunbelzz.wordpress.com/2019/05/20/hummingbird-robot-using-ai-to-go-soon-where-drones-cant/
Artificial intelligence, combined with flexible flapping wings, also allows the robot to teach itself new tricks. Even though the robot can't see yet, for example, it senses by touching surfaces. Each touch alters an electrical current, which the researchers realized they could track. "The robot can essentially create a map without seeing its surroundings. This could be helpful in a situation when the robot might be searching for victims in a dark place – and it means one less sensor to add when we do give the robot the ability to see," said Xinyan Deng, an associate professor of mechanical engineering at Purdue. Drones can't be made infinitely smaller, due to the way conventional aerodynamics work. They wouldn't be able to generate enough lift to support their weight.
http://eikins.com.ng/2016/11/05/governors-mechanism/

Governors are automatic devices that regulate the speed of an engine by altering the rate of flow of its working fluid, that is, fuel. If the engine is loaded, or if there is an increase in load, the speed of the engine decreases. The governor reacts by operating the mechanism that controls the supply valve. This increases the fuel supply and thus the engine speed, bringing the engine back to its original speed. Hence, the function of a governor is to maintain the engine speed within certain limits irrespective of the load.
Governors are classified thus:
- Inertia and flywheel governors: this is the type of governor in which inertia predominates, meaning it works on the principle of resistance to changes in motion. These governors are fitted to the crankshaft or flywheel. Their design and appearance differ from those of centrifugal governors. They have balls arranged in such a way that the force of inertia alters their position when the shaft is accelerated or decelerated. Springs control the amount of displacement of the balls, and this in turn regulates the fuel supply. The inertia governor is more sensitive than the centrifugal governor; however, balancing the revolving parts is extremely difficult, which makes it less popular in engines.
- Centrifugal governors: these can be further classified as (i) gravity-controlled governors, in which gravity is the major force that balances the balls as they revolve, and (ii) spring-controlled governors, in which springs balance the centrifugal force.
Centrifugal governors are more popular in engines because it is easier to balance the weights. The following important types of centrifugal governors are discussed below: the Watt governor, Porter governor, Proell governor, and Hartnell governor.
(a) Watt Governor: This was invented by James Watt and used on the very first types of steam engines. It works in engines designed for slow speeds and is no longer in use.
It consists of two arms hinged at the top of the spindle, with a ball attached at the other end of each arm. Two links are hinged to the arms at one end and to the sleeve at the other. The sleeve slides over the spindle, which is rotated via bevel gears from the crankshaft. As the spindle revolves, centrifugal force causes the weights to spread out, which moves the sleeve upward. The movement of the sleeve is then transmitted by means of a lever which partly closes or opens the steam pipe, reducing or increasing the supply of steam to the engine. Thus the speed of the engine is adjusted.
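For a simple conical-pendulum governor like this, the equilibrium height of the balls below the pivot is h = g/ω², independent of ball mass; with speed N in rpm this works out to roughly 895/N² metres. The sketch below shows why the Watt governor only suits slow engines: at high speed, the change in height per rpm becomes too small to move the sleeve usefully.

```python
import math

def watt_governor_height_m(rpm: float) -> float:
    """Equilibrium height h = g / omega^2 for an ideal Watt governor."""
    omega = 2 * math.pi * rpm / 60.0   # rad/s
    return 9.81 / omega ** 2

for rpm in (30, 60, 120, 300):
    print(f"{rpm:>4} rpm -> h = {watt_governor_height_m(rpm) * 1000:.1f} mm")
```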
(b) Porter Governor
This comprises two or more governor balls rotating about the axis of the governor shaft, which is driven through suitable gears from the crankshaft. The balls are hinged to the upper and lower arms. The other end of each lower arm is attached to the sleeve, which acts as the central weight. If load is placed on the engine, its speed decreases, the balls fly inwards, and the sleeve moves downward, opening the fuel valve so that more fuel flows in and the engine speed rises again. On the other hand, if load is removed from the engine, its speed increases, the balls fly outwards, and the sleeve moves upward, reducing fuel flow until the speed of the engine returns to its designed speed. The designed speed is the speed at which the outward centrifugal force is just balanced by the inward controlling force.
(c) Proell Governor
This is similar to the Porter governor, except that the balls are placed on an extension of the lower arms. For a given weight of ball and sleeve and height of governor, a Proell governor runs at a lower speed.
(d) Hartnell Governor
This governor operates by the use of springs. It has a case attached to the spindle. Inside the casing is a compressed spring that presses against the top of the casing and adjustable collars. The sleeve moves up and down in response to the speed of the governor. The balls are placed on the bell crank lever which is hinged to the lower end of the casing. Increase or decrease in speed of the governor causes the ball to fly outward or inward respectively.
Uses and Application
Governors are used in petrol engines and diesel engines.
https://medicine.ekmd.huji.ac.il/schools/pharmacy/En/home/IDR_EC/IDR_NMRU/Pages/Safety.aspx | 2019-07-21T14:41:35 | s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527048.80/warc/CC-MAIN-20190721144008-20190721170008-00334.warc.gz | 0.859376 | 185 | CC-MAIN-2019-30 | webtext-fineweb__CC-MAIN-2019-30__0__16018538 | en | People with medical implants (such as cardiac implants pacemakers, aneurysm clips, surgical clips or prostheses) should check with us before entering the NMR room.
All magnetic objects should be kept outside the NMR magnet room.
Keep electronics, credit cards and magnetic storage media outside of the NMR magnet room.
In case a metallic object strikes the magnet, get NMR Facility staff immediately. Do NOT attempt to pull the object off yourself.
In the event of a magnet quench (a sudden and rapid boil-off of the magnet's cryogens, helium and nitrogen, detectable by a visible and/or audible emission of cryogenic gas from the magnet), the vaporized cryogens displace air and can cause asphyxiation. If a quench occurs, immediately evacuate the NMR room!
https://www.swisscluster.com/products/sc-1

The SC-1 is a ground-breaking and high-performing cluster equipment that combines Atomic Layer Deposition (ALD) with Physical Vapor Deposition (PVD) in an extremely compact, modular, and fully automated system for high-throughput production of multinanolayered coatings from the ALD and PVD materials library.
This innovative patent-pending cluster system revolutionizes the traditional cluster equipment, as it eliminates the need for transfer arms and multiple antechambers, which occupy a significant amount of lab space, with high acquisition, operating and maintenance costs.
・ Capable of performing both (PE)-ALD and PVD without breaking vacuum or moving the samples between chambers, fabricating hundreds of multinanolayered films and reducing fabrication time.
・ Scalable, modular and flexible system that allows to easily increase or decrease chamber dimensions, adapt new hardware and incorporate multiple in-situ metrology equipment.
Why ALD and PVD?
Breaking the Grain-Growth
The combination of ALD and PVD layers creates a unique microstructure by stabilising grain size and hindering grain growth, which translates to improved mechanical and thermal properties.
TEM image showing a 200-layer multinanolayered coating of 20 nm PVD Al and 1 nm ALD Al2O3.
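Back-of-envelope planning for a stack like the one pictured: ALD thickness is cycle count times growth-per-cycle (GPC), while PVD thickness is rate times time. The rates below are illustrative assumptions, not SC-1 specifications:

```python
# Plan a 200-layer ALD/PVD stack (all rates are illustrative assumptions).
ald_gpc_nm = 0.1          # ~1 Angstrom per ALD cycle, typical for Al2O3
pvd_rate_nm_per_s = 0.5   # assumed sputter rate for Al

ald_layer_nm, pvd_layer_nm = 1.0, 20.0
bilayers = 100            # 200 alternating layers total

ald_cycles = int(ald_layer_nm / ald_gpc_nm) * bilayers
pvd_seconds = (pvd_layer_nm / pvd_rate_nm_per_s) * bilayers
total_nm = bilayers * (ald_layer_nm + pvd_layer_nm)
print(f"{ald_cycles} ALD cycles, {pvd_seconds / 60:.0f} min PVD, "
      f"stack ~{total_nm:.0f} nm")
```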
Interchangeable Substrate Holders
・ 4 - 6 in. wafers
・ High temperature (~900 °C) rotational and z-stages
・ Temperature Gradient Stages (30 °C to 450 °C)
ALD Precursor Lines
・ Up to 12 precursors with individual inlets
・ Standard bottles for heated (150 °C) and non-heated sources
・ Bubblers for low vapor pressure precursors heated to 200 °C
・ Ozone option
・ Direct Microwave plasma sources for PE-ALD
・ Up to 8 magnetrons in different configurations and sizes (2 - 4 inch)
In-Situ Metrology Equipment
・ OES systems
・ In-situ wafer stress measurements
・ And more...
・ Al2O3, TiO2, ZnO, Y2O3, ZrO2, HfO2, Cu, Al, Ti, and more...
Electronics and Software
Mass Flow Controllers
・ 4 Analog MFC
・ 60 Digital MFC
Pneumatic (ALD) valves
・ 24 valves
・ 4 Analog
・ 3 gate valves with feedback
・ 4 Flow meters
・ 16 Channel PID regulation with K-Type sensors
・ 4 PT100/PT1000
・ 8 Interlock in
・ 12 Interlock out
・ 2 Ethernet
・ 2 RS485
・ Manual control
・ Recipe Creator
Materials Factory
With the benefits of both ALD and PVD
Hundreds of multinanolayers of multiple material systems from the PVD and ALD materials library can be fabricated. The combination of ALD and PVD layers creates unique microstructures and properties to fit in your desired application.
The ALD-PVD microstructure can be further tailored with different film thicknesses along the cross-section and with different deposition temperatures throughout your substrate and process using our temperature gradient stage (TGS).
The system is fully automated; all the devices and components are connected to our easy-to use software. Hundreds of multinanolayers with both ALD and PVD can be fabricated with the push of a button.
The temperature gradient stage (TGS) allows to screen a large temperature window to scan precursors, growth rates, microstuructures, chemical compositions, mechanical behaviour and more in a single deposition.
Modular and Scalable
Customizable and Upgradable
The ALD and PVD chambers in the SC-1 can be acquired and operated individually and then be upgraded to a cluster system. The chambers can be scaled to different dimensions to fit specific requirements. New components or in-situ metrology equipment can be added and incorporated to our software and recipe creator.
By reducing the need of antechambers and mechanical arms, we reduce the complexity and lab space required.
User-friendly
Plug-and-Play functionality for beginner and advanced users.
With our easy-to use recipe creator, complex recipes can be made easy in order to fabricate hundreds of multinanolayers with different parameters in both ALD and PVD process during the deposition.
The easily attachable/detachable panels make it extremely easy to replace, clean and service parts.
https://iqzb.com/post/shell-and-tube-heat-exchanger

Chapter One: Understanding Shell and Tube Heat Exchangers
A shell and tube heat exchanger (STHE) is a common device: a large cylindrical shell that holds bundles of tubes. A heat exchanger transfers heat between two fluids or mediums without mixing them. Shell and tube heat exchangers are notable for their simple design and efficient heat exchange rates.
Shell and Tube Heat Exchanger
Shell and tube heat exchangers are popular because they are simple and effective at moving heat. The basic process involves the flow of liquid or steam into the shell, and heating the tubes. For optimal heat transfer, a configuration of four passes through the tubes is considered the most effective method.
Chapter Two: Designing Shell and Tube Heat Exchangers
Shell and tube heat exchangers undergo sophisticated computer-aided design processes. Key components include the shell, shell cover, tubes, channel, channel cover, tube sheet, baffles, and nozzles. The Tubular Exchanger Manufacturers Association (TEMA) establishes specifications and standards for STHEs.
The shell is made from pipe or welded metal plates. It needs to endure extreme temperatures and not corrode. To reduce space, make sure the diameter is consistent and round, with no gaps between the edge and baffles.
Channels or Heads
The type of channel or head depends on the specific application. Bonnet-type heads are often used when they don’t need to be removed often. Removable cover channels are used for maintenance.
Tubes are made from different materials like steel, titanium, or copper. They are welded or extruded. We select tube sizes and thicknesses based on pressure, temperature, stress, and corrosion.
The tube sheet is a plate with holes. It supports tubes on both ends of the shell. It also extends beyond the tubes, creating a chamber covered by heads.
An expansion joint is important for temperature changes. It prevents stress-related problems in the heat exchanger’s parts.
The distance between tubes affects how easy it is to clean and how turbulent it is. Square pitch arrangements, allowing vapor to rise between tubes, are advantageous.
Baffles direct flow in the shell and tube sides, increasing fluid velocity and minimizing fouling. In horizontal heat exchangers, baffles support tubes and prevent sagging or vibration damage.
Tie Rods and Spacers
Tie rods and spacers support baffles and maintain spacing, preventing sagging. The number is determined by the shell’s diameter and the number of baffles.
Leading Shell and Tube Heat Exchanger Manufacturers and Companies
- Enerquip Thermal Solutions
- Mason Manufacturing LLC
- Delta T Heat Exchangers
- Exact Exchanger, Inc.
Chapter Three: Operation of Shell and Tube Heat Exchangers
A shell and tube heat exchanger facilitates the exchange of heat between two fluids. In this process, one fluid flows through the tubes, and the other flows through the shell. The decision on which fluid enters which side is termed fluid allocation. The decision is influenced by factors such as pressure differences; lower-pressure fluids enter the shell side.
The shell side, more expensive and harder to clean than the tubes, has baffles directing fluid flow across tube bundles. It is suitable for processing viscous fluids and those with high flow rates.
The tube side requires turbulent flow achieved by installing turbulators inside the tubes. Turbulence helps transfer heat and keeps the flow smooth with less pressure.
Shell and tube heat exchangers can have one to eight tube passes. Increasing the number of passes raises fluid velocity and turbulence, which in turn increases the heat transfer coefficient.
Operating Shell and Tube Heat Exchanger
During the heat exchange process, fluids in the shell and tubes come into thermal contact. As a result, one fluid becomes cooler while the other becomes warmer.
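The standard sizing relation for such an exchanger is Q = U·A·ΔT_lm, where ΔT_lm is the log-mean temperature difference between the two streams. A minimal counterflow example; all temperatures and coefficients below are assumed for illustration:

```python
import math

def lmtd(dT_in: float, dT_out: float) -> float:
    """Log-mean temperature difference between hot and cold streams."""
    if abs(dT_in - dT_out) < 1e-9:
        return dT_in
    return (dT_in - dT_out) / math.log(dT_in / dT_out)

# Counterflow example: hot stream 90 -> 50 C, cold stream 20 -> 40 C.
dT1 = 90 - 40   # hot inlet vs cold outlet end
dT2 = 50 - 20   # hot outlet vs cold inlet end
U = 500.0       # W/(m^2*K), assumed overall coefficient
A = 10.0        # m^2, assumed heat transfer area

Q = U * A * lmtd(dT1, dT2)
print(f"LMTD = {lmtd(dT1, dT2):.1f} K, duty Q = {Q / 1000:.0f} kW")
```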
Chapter Four: Varieties of Shell and Tube Heat Exchangers
Shell and tube heat exchangers are classified by TEMA into Class B, Class C, and Class R, based on construction and service type.
Flow types include parallel, counter, and cross. Fluids enter and exit at the same ends in parallel flow. Counterflow has opposite directions. Crossflow involves fluids flowing perpendicular to each other.
Fixed Tube Sheet TEMA Type M
This design has straight tubes secured at both ends to stationary tube sheets welded to the shell. It is cost-effective, easy to clean, and maintain.
U Tube Heat Exchanger
U-tube heat exchangers have tubes configured like a ‘U,’ with inlet and outlet valves located at one end. They can handle high-temperature variances.
Floating Head Heat Exchanger TEMA Type S
Similar to U U-tube design, the floating head can withstand high temperature variances. The floating end allows for easy cleaning and inspection.
TEMA Type T or Type AKT
This design allows pulling out the tube bundle for maintenance and has an abnormal clearance between the baffle and shell.
Scraped Surface Heat Exchanger
Scraped surface heat exchangers are made for thick substances. They have blades to remove buildup and transfer heat efficiently.
Chapter Five: Advantages of Shell and Tube Heat Exchangers
Shell and tube heat exchangers have many advantages, so they can be used in different industries and applications.
They are cost-effective compared to plate-type coolers.
Capable of handling a wide temperature range, ensuring consistent production.
Built to withstand high pressures, adhering to industry codes.
Designed to minimize pressure loss, enhancing overall performance.
Adaptable design allows for adjustments to fit specific production processes.
Multi-tube design accommodates thermal expansion, suitable for handling flammable and toxic fluids.
Chapter Six: Standards and Regulations for Shell and Tube Heat Exchangers
The food, drink, dairy, and medicine industries have rules to ensure their products are safe and consistent.
3-A Sanitary Standards (3-ASSI)
Focuses on keeping equipment clean in place (CIP) for dairy, food, and pharmaceutical industries.
American Petroleum Institute Standard 660 (API 660)
This standard governs the design of shell and tube heat exchangers in the petroleum and petrochemical industries.
Tubular Exchangers Manufacturers Association (TEMA)
Sets widely used standards for the design and categorization of shell and tube heat exchangers.
American Society of Mechanical Engineers (ASME)
ASME Code VIII applies to the pressurized parts of shell and tube heat exchangers.
Pressure Equipment Directive (PED)
International standard for products manufactured in the U.S. but used globally, ensuring safety.
Canadian Registration Number (CRN)
A provincial approval system based on size, fluids, pressure, and temperature range, varying by province.
Shell and tube heat exchangers are widely used in various industries. They are popular due to their straightforward design and efficient heat transfer. By following rules and guidelines, they can be used safely, reliably, and for various purposes.
http://www.cefc2018.org/

The Eighteenth Biennial IEEE Conference on Electromagnetic Field Computation CEFC 2018 will be held in Hangzhou, China, October 28-31, 2018. The official conference website will be used for all conference activities including digest submission and review, registration and accommodation. The website is http://www.cefc2018.org.
It is our great pleasure to announce the Eighteenth Biennial IEEE Conference on Electromagnetic Field Computation (CEFC 2018), which is cosponsored by IEEE Magnetics Society, China Electrotechnical Society and Chinese Society for Electrical Engineering. We welcome you to participate in one of the most important biennial scientific and technical events. The aims of the IEEE CEFC are to present the latest developments in modeling and simulation methodologies for the analysis of electromagnetic fields and wave interactions, with the application emphasis being on the computer-aided design of low and high frequency devices, components and systems. Scientists and engineers worldwide are invited to submit original contributions in the areas of Static and Quasi-static Fields, Wave Propagation, Material Modeling, Coupled Problems, Numerical Techniques, Optimization and Design, Software Methodology, Nanomagnetics, Nanophotonics, Bioelectric Field Computation as well as Devices and Applications.
We are looking forward to meeting you in Hangzhou.
Thank you very much.
Organizers, CEFC 2018
http://www.begneragenturer.se/lap-laser.aspx

LASER MEASUREMENT FOR ALL METALS
For more than 25 years, LAP systems have been measuring dimensions of long and flat products in production lines. We manufacture laser based systems for non-contact measurement of position, width, thickness, diameter, contour and shape for the entire process chain, from continuous casting to the finished product. Careful assessment of customer requirements, from management to the operator level, of plant conditions and operating procedures, is essential to engineer and implement well thought-out solutions. Any project begins with the thorough understanding of customer needs.
RELIABLE IN HARSH ENVIRONMENTS
In the metal industries, measurement systems have to prove reliable every day under difficult operating conditions. Through decades of experience, our engineers understand these conditions and know how to protect measurement systems to keep them working. Outstanding sensor technology, mechanical stability, shock and dirt resistance, thermal insulation, cooling systems, and easy maintenance contribute to the overall performance of the systems.
MORE THAN JUST SENSORS
Specific needs of production processes and related production lines often require individual adaptation of a measurement system. Because of this we offer single source systems, from sensors through mechanics, electronics to software, and finally integration into the process. You are dealing with a single contact and clear responsibilities during the entire project. Close cooperation between LAP and their customers guarantees short installation cycles and high operator acceptance.
LEADING IN NEW TECHNOLOGIES
Quality demands on flat and long products are rising continually. Consequently, LAP is permanently refining and developing its systems. The RDMS profile gauge, as an example, is accepted worldwide for measuring the true profile of long products. Its patented technology detects and measures all relevant shape defects precisely, namely three-lobed shapes and asymmetric fill. This patented technology provides reliable data for modern rolling technologies such as 3-roll stands.
WORLDWIDE, MORE THAN 200 LAP SYSTEMS ARE SUCCESSFULLY IN USE!
https://golfagogo.forumactif.org/t3715-tom-wishon-10-myths-about-shafts-factual-info-about-shafts-to-help-you-all

Myths About Shafts
1. The shaft is the engine of the golf club
2. The shaft is the most important component of the golf club
3. The letter flex code on the shaft tells me how stiff the shaft is
4. The shaft is a key element for the amount of backspin imparted on a shot
5. How a shaft plays and performs for one golfer or group of golfers is important for other golfers to know to be able to make a proper shaft selection decision
6. The more expensive a shaft, the better it is
7. The flex of the shaft has a very important effect on shot performance for all golfers
8. The higher the clubhead speed of the golfer, the stiffer the shaft should be
9. The right shaft adds distance by “kicking faster” through the ball
10. How a shaft performs for a golfer(s) is an indication of its quality
Shaft Myth #1 – The shaft is the engine of the golf club
If I had a dollar for every time I have heard this statement, I might not be rich, but I definitely would be able to go to a nice restaurant and enjoy a good dinner with a good bottle of wine! It is far more truthful to say that “the golfer is the engine,” while “the shaft is the transmission of the golf club.”
A shaft does not create energy during the swing. It is simply the component that takes the energy generated by the golfer and transmits it to the clubhead to hit the ball. It is true that if certain elements of the shaft are not properly fit to the golfer’s specific swing characteristics, the golfer can lose distance by experiencing a lower clubhead speed or more off-center hits than he could generate from using a correctly fit shaft. At the same token, if the shaft is accurately fit, the golfer has a much better chance of fully optimizing his/her potential to hit the ball to the best of their ability.
Performance-wise, the shaft 1) can affect the dynamic loft of the clubhead at impact within a narrow range of 2 to 3 degrees, but only for those golfers with a later to very late release; 2) will chiefly control the total weight of the club, which in turn can have an effect on the golfer's clubhead speed; and 3) can affect some (not all) golfers' confidence and swing consistency by displaying a "bending feel" during the swing that is either more preferred or less preferred by the golfer. That's it; that's the full list of what the shaft can do.
Shaft Myth #2 – The shaft is the most important component of the golf club
Sorry, but when you’re talking about ALL golfers, the shaft is not as important to the actual performance of the shot as is the clubhead. I’ll give you an example of when this was actually “tested and proven” in the golf industry by a huge number of golfers. Back in the early 1970s when PING golf company moved to the front of the golf industry through the introduction of their deep cavity back original Ping Eye model irons, the standard shaft installed in every set of Eye irons was a 125 gram X flex steel shaft.
Ping’s founder Karsten Solheim used these shafts in his irons because he believed a heavier and stiffer shaft would help all golfers hit the ball straighter. Literally millions of sets of PING irons with X flex heavier weight steel shafts were sold throughout the 1970s and you know what? Literally millions of golfers liked their new PING irons more than their previous irons. Why? Because the original PING Eye irons were the very first irons with a deep cavity back design AND lower lofts than what had been the norm for irons – this meant the moment of inertia (MOI) of the Eye irons was FAR higher than any previous iron model yet designed. This in turn gave golfers such a huge improvement in off center hit performance as well as on center hit distance over the irons they previously used that this big leap forward in head performance completely overshadowed the potentially bad effects to golfers using a shaft that was too heavy and too stiff for their swing.
Of course, we know today that playing with too heavy and too stiff of a shaft can rob the golfer of clubhead speed and shot consistency and make the feeling of impact become “dead and boardy.” But the point shown by the PING example of the 1970s is that if the clubhead’s improvement is great enough for the golfer over what they used to play, the shaft does not have to be accurately fit for the golfer to still realize significant game improvement.
Shaft Myth #3 – The letter flex code on the shaft tells me how stiff the shaft is
No, it doesn't, because there are absolutely no standards in the golf industry for how stiff any of the shaft flex codes are. Every golf company and shaft company is free to determine how stiff its various shaft flex letter codes are to be. As a result, it is very common for the R Flex from one company to be similar in stiffness to the S Flex from another company or the A Flex from a third company. Not only that, but it is very common for a flex in one model of shaft to be stiffer or more flexible than the same letter flex in a different shaft model from the same company!
There is no better proof than to offer a clear illustration. Following is a graph comparison of 7 different R-Flex shafts, from 6 different companies. These shafts were all measured using the same methodology to graph the comparative stiffness at 7 identical points along the length of each different shaft. The numerical measurements represent cycles per minute (CPM) of frequency measured with a 454 gram weight on the tip end of the shaft.
For comparison of the relative stiffness of all these R Flex shafts, focus on the CPM measurements for the 41 in and 36 in columns in the data chart. At these points on the grip end of the shaft, a difference of 7 CPM in the 41/36 measurements is equivalent to one full flex, based on averages from more than 2000 different shafts. (When the tip weight is reduced to 205 g, a 10 CPM difference is equivalent to one full flex level.) As you can see, among these 7 shafts there is a relative stiffness difference of 28 CPM, which is nearly four full flexes – and yet all of these shafts are labeled by their respective companies as being an R Flex shaft.
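To make the arithmetic concrete, here is a small illustrative Python sketch of the conversion just described. The CPM readings in the example are hypothetical; the conversion factors (7 CPM per full flex with a 454 g tip weight, 10 CPM with a 205 g tip weight) come from the text above.

```python
# Convert a difference in measured frequency (CPM) into "full flex levels",
# using the averages quoted above. The example readings are made up.

CPM_PER_FLEX = {454: 7.0, 205: 10.0}  # tip weight in grams -> CPM per full flex

def flex_gap(cpm_a, cpm_b, tip_weight_g=454):
    """How many full flex levels apart two shafts measure."""
    return abs(cpm_a - cpm_b) / CPM_PER_FLEX[tip_weight_g]

# The chart's stiffest and softest "R flex" shafts were 28 CPM apart:
print(flex_gap(252.0, 224.0))  # -> 4.0, i.e. nearly four full flexes
```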
Next, let's look at a graph comparison of a number of the R Flex shafts from different shaft models, all from the same company. Among these 6 different R Flex shafts, all from the same company, there is a range in basic stiffness of 19.5 CPM, which equates to a difference of nearly 3 full flex levels. Yet all are labeled as R Flex shafts.
It is VERY IMPORTANT to understand that such variations are by intent and DO NOT represent a mistake or lack of quality in any manner by these companies. Remember, each company is free to determine their own standards for the actual stiffness for what each flex of each shaft is to be. It is not wrong – it just is the way it is.
What’s wrong is when golfers do not know this and make buying decisions based only on a meaningless letter code imprinted on the shaft. So the next time you head out to buy a new club(s) or a new shaft, please remember that R does not equal R, S does not equal S, and none of the letter codes equal each other. If you want another good reason for why it is worth it to be professionally custom fit by an experienced custom Clubmaker, here is yet another one of many reasons to do so. Many of the experienced clubmakers are well aware of the variations among the flexes of all the shafts and can guide you into the very best shaft selection for YOUR swing characteristics.
Shaft Myth #4 – The shaft is a key element for the amount of backspin imparted on a shot
That can be true... but only if you are a golfer who unhinges your wrist-cock angle late in the downswing and you have a clubhead speed north of 100 mph with the driver. If you are a golfer with a late release and a clubhead speed in the area of 85 mph, the shaft is only going to have a small effect on backspin. And if you are a golfer with an early release, no matter what your clubhead speed, the amount of backspin you put on the shot is purely going to be determined by your clubhead speed, your angle of attack into the ball, the loft of your driver and where on the face you made contact with the ball.
It is very common for companies to market shafts as having spin characteristics – "low, medium or high spin" – in their design. The problem is that it takes a very specific type of swing characteristic to even allow the shaft to have any effect whatsoever on the amount of spin imparted on the shot. That swing move is when you unhinge your wrist-cock angle to release the club during the downswing. In short, the later you hold onto the wrist-cock angle on the downswing, and the higher your clubhead speed, the more the shaft could have an effect on the backspin of the shot.
Here's why, and here's how shafts may or may not have a bearing on the amount of spin on a shot. First of all, keep in mind that only three things determine the amount of backspin on a shot – clubhead speed, the dynamic loft on the clubface at the point of impact, and the point of impact in relation to the center of gravity of the clubhead. (Angle of attack is part of the dynamic loft.) For any given loft angle, the higher the clubhead speed, the higher the spin; and the higher the loft on the clubhead at the moment of impact, and the lower the point of impact in relation to the CG, the greater the amount of backspin. Vice versa applies to these things for less spin.
But let's talk about how the swing gets involved in all of this to be able to potentially interact with the club to have an effect on spin. Let's say we've all made our backswing and we have the club positioned at the top, ready to swing down to the ball. From the moment the club starts down, for as long as we retain and hold our wrist-cock angle between our arms and the shaft, the arms and the club are accelerating at the same rate and the arms and club are both moving at the same velocity.
The split-second we start to unhinge the wrist-cock angle, the arms begin to slow down while the club begins to accelerate to a higher velocity. Because the arms are slowing down while holding on to the club, the faster-moving clubhead starts to push against the shaft that is being held back by the hands, and the shaft begins to flex forward. The more flexible the overall design of the shaft, and/or the more tip-flexible its design, the more the shaft could flex forward at impact and, from that, have more of an effect on launch angle, trajectory and spin.
If the golfer happens to hold the wrist-cock angle until very late in the downswing, the forward flexing of the shaft happens right when the clubhead meets the ball. If the shaft comes to impact flexed forward, this forward curve of the shaft increases the loft on the clubhead at impact – which in turn increases the launch angle AND increases the amount of backspin put on the shot. When shaft companies say this or that shaft is a "low spin design", what they mean is that the shaft is designed to be either stiffer overall or stiffer in the tip section. A stiffer shaft means less forward bending before impact, which means less of a loft increase at impact on the clubhead... but ONLY for a player with a later to very late unhinging of the wrist-cock angle on the downswing.
On the other hand, if the golfer unhinges the wrist-cock angle early in the downswing, all this forward flexing of the shaft happens well before impact. Thus for the early-release golfer, by the time the clubhead gets to the ball, the shaft will have had time to flex back to a virtually straight position. That's why, for early-release golfers, the shaft cannot have any additional effect on the dynamic loft of the clubhead or the amount of spin on the shot.
Shaft Myth #5 – How a shaft plays and performs for one golfer or group of golfers is important for other golfers to know to be able to make a proper shaft selection
Only if the golfers involved all happen to have EXACTLY, and I mean exactly, the same swing characteristics is someone else’s experience with a particular shaft of any importance. And how often do two or more golfers swing exactly the same way?
I can’t tell you how many times I have scanned posts on golf equipment internet forums from golfers who ask a question such as, “has anyone tried the XYZ shaft and what do you think of it?” Invariably, almost every golfer’s response comes back citing this or that personal opinion or playing result without ever saying one thing about any of their specific swing characteristics.
In addition, numerous times I have heard a golfer comment about a shaft and say something like, "that XYZ shaft is really a bad shaft." If golfers knew that shaft performance is so tied to specific golf swing characteristics, they would say instead, "that shaft is probably a good shaft for some other golfer, but it is a bad shaft FOR ME AND MY SPECIFIC SWING CHARACTERISTICS."
There is no such thing as a good shaft or a bad shaft in this game. There are only shafts that fit their owners and shafts that do not fit their owners. More than any other component, the performance of the shaft is completely related to a series of finite, specific swing and playing characteristics – your clubhead speed, your transition move to start the downswing, your downswing aggressiveness/tempo, the point during the downswing when you unhinge your wrist-cock angle to release the club to impact, and whether you as a golfer do or do not have a specific, preferred sense for the bending feel of the shaft during the swing.
Shaft Myth #6 – The more expensive a shaft, the better its quality and the better it performs
There are few things in the golf industry that have become as much of a sore spot with me as this matter of shafts that cost $100, $200, $300 and even more. Shoot, I remember when we all thought a $40 shaft was expensive! What’s even worse are the uninformed golfers who see these $100 – $300 shafts and automatically form the opinion that if it costs that much, it has to be a really good shaft.
You want to know what the definition of a "good shaft" is? A good shaft is any shaft that has been very accurately matched for its weight, overall stiffness, bend profile, weight distribution and torque to a golfer's clubhead speed, transition force, downswing tempo, wrist-cock release, strength and sense of feel. That's the definition of a "good shaft" and it has absolutely nothing to do with brand, model or price.
There are 5 different specifications that determine the performance differences between shafts. 1) mass (weight); 2) overall stiffness (flex); 3) bend profile (distribution of the stiffness over the length of the shaft); 4) weight distribution (balance point); 5) torsional stiffness (torque). Two of these, the weight and the torque, are definitely related to the cost of the shaft. The lighter the weight and the lower the torque of a shaft, the more expensive the shaft will be to make. In other words, if you want to make a very stiff 45 gram shaft with less than 3? of torque, that shaft is going to cost a lot more money to make than a 65 gram softer flex shaft with 5? of torque. . . but not $100 to $300 by any means.
The other three shaft design elements, a shaft’s overall stiffness, bend profile and balance point, are not even close to being as price sensitive as the weight and torque. Standard modulus (low cost) graphite raw materials can be used to make any flex, bend profile or balance point from soft L to very stiff X.
Yes, many of the high dollar shafts are actually made with more expensive raw composite materials. But they don’t need to be made with such expensive materials to achieve their weight, flex, bend profile, balance point and torque. In my career I have measured the specifications of literally thousands of different shafts, and from my experience, I have yet to see a $100 to $300 shaft that could not be duplicated for weight, flex, bend profile, balance point and torque and sold at a normal profit in the industry for an aftermarket price of $25 to $50.
Shaft Myth #7 – The flex of the shaft has an important effect on shot performance for all golfers
For some golfers, very definitely this is true. But for many golfers, approaching even the majority of golfers, the flex of the shaft is one of the very least important of all the fitting specifications of a golf club.
To sum it up, the higher the clubhead speed, the more forceful the transition move, the more aggressive the downswing, the later the unhinging of the wrist-cock angle, and the more the golfer has a specific preference for the bending feel of the shaft, the more important the shaft flex will be to shot performance. For a slower-swinging, smooth-tempo, early-release golfer who does not have a refined sense of feel for the bending action of the shaft, the flex is virtually unimportant and the WEIGHT of the shaft becomes the only important fitting element related to the shaft.
The one swing characteristic that has the most influence on making the flex an important part of the performance of the shaft is the point of the wrist-cock release during the downswing. The later the wrist-cock release, the more the shaft can arrive at impact in a flexed-forward position – which is how the shaft flex can have a visible effect on the launch angle, height and spin rate of the shot.
Second after the release, in terms of the swing moves that dictate the importance of shaft flex, is the force the golfer applies during the transition move to start the downswing. The more forcefully, suddenly and aggressively the golfer starts the downswing, the more bending force is applied to the shaft. The more the golfer bends the shaft at the start of the downswing, the more the golfer could feel differences in shaft stiffness and, from that, develop a preference for a specific type of bending feel in a shaft that, if satisfied, can make a big difference in shot consistency and clubhead speed.
Shaft Myth #8 – The higher the clubhead speed of the golfer, the stiffer the shaft should be
There are two reasons this is frequently not true. First, as we said previously, with no standards in the golf industry for shaft flex, there are very definitely a lot of R flex shafts that are stiffer than a lot of S flex and even X flex shafts. So it can be very possible for a golfer with a certain clubhead speed to be properly fit with an S flex in one company’s shaft model, but to find that another company’s R flex may in fact be stiffer.
The second and main reason this statement is frequently not true is that clubhead speed is not the main element in the swing that determines how much a golfer actually bends a shaft during the swing. The swing element that applies the chief amount of bending force to a shaft is the golfer's transition move to start the downswing. Among two golfers with the same clubhead speed, it can be very common for one golfer to have a short backswing with a very forceful, abrupt and sudden acceleration to start the downswing, while the other golfer might start the downswing with a much smoother, more gradual acceleration of the club.
Among two golfers with the same clubhead speed, the one with the stronger, more forceful transition move will always put more bending force on the shaft, and from it, will typically need a stiffer shaft than the golfer with the same swing speed who has a smooth, gradual acceleration of the club during the downswing. It is also not uncommon to see a golfer with a slower swing speed and stronger transition as well as a golfer with a higher swing speed and smoother transition move. In such a case, the slower swinging golfer with stronger transition would need a stiffer shaft than the golfer with a higher clubhead speed but smoother, less forceful transition move.
The bottom line is that while clubhead speed definitely offers a starting point for flex selection, the most accurate shaft fitting involves a careful evaluation of the other swing movements that have a direct effect on how much the shaft is flexed during the swing.
Shaft Myth #9 – The right shaft adds distance by “kicking faster” through the ball
It’s easy to assume this is true when you see a golfer use a different shaft with the same clubhead and experience a higher clubhead speed and more distance. Also contributing to this thought is the fact that a few companies have actually used the term “tip velocity” in the marketing of a shaft. As a result, there are a lot of golfers who believe shafts can be designed to possess the ability to “kick faster” than other shafts.
When a golfer changes shafts in an existing club and achieves a higher clubhead speed or gains distance, the things that most typically explain the increase in distance are as follows:
1. When an existing clubhead is re-shafted, along with the specs of the new shaft itself there very definitely can be changes in the length, the total weight and the swingweight of the club that happen as a result of switching from one shaft to another. If the new length, new total weight, new swingweight or combination of any of these three happen to fit the golfer’s size, strength, athletic ability and swing characteristics better than these elements did before in the club with the former shaft, very definitely this can result in a higher clubhead speed and more distance.
Very experienced clubfitters have seen many times when a change of 10 to 20 grams in the total weight, a change of 2 to 3 swingweights and/or a change of ½” to 1″ in the length of a club can all of a sudden allow the club to fit the golfer so well that a marked increase of 3 to 5mph in clubhead speed can occur. Such changes in total weight and/or swingweight are not at all unusual when shafts are changed because of the wide range in weight and balance point among different models of shafts.
2. When a golfer switches to a shaft that fits his sense of feel or feel preference better than a previous shaft, the golfer can very definitely have the tendency to swing in a more free, more unrestricted, and more confident manner than before – which in turn can very definitely result in a higher clubhead speed from which more distance occurs.
Think about it this way. If you’ve played a lot of golf and hit a lot of different golf clubs, at one time or another you have probably hit or played with clubs in which the shafts are either much too stiff or too flexible for your sense of feel when you swing the club and hit the ball. When you have hit clubs with shafts that are too stiff, what is your first inclination? Probably to try to swing harder so as to elevate your swing speed/force to better match the stiffer shaft. And what happened to your swing consistency when you did this? That’s right, not the best results.
Perhaps at some point in your playing life you have tripped across a club with a shaft that when you swung the club, everything just felt perfect. The shaft didn’t feel too stiff or too flexible when you started the downswing and when you released the club to hit the ball, you felt the shaft kick at exactly the right time and with exactly the right amount of kick. In such a case, I bet your natural inclination was to forget about any type of swing manipulation and to just “let it fly” when you swung – a full, free, unrestricted swing with nothing getting in the way of “letting it go.”
For golfers who do have a preferred sense of feel for the way the shaft bends during the swing, even if they cannot clearly describe that feel in words, being able to find a shaft that bends, flexes and unloads in exactly the manner they prefer is a sure ticket to swinging with the highest natural clubhead speed their swing can generate. And definitely a higher speed than they can generate when the shaft either feels too stiff or too flexible.
Shafts cannot be designed to have a higher or lower flexing velocity. They simply are designed with differences in stiffness, weight, torque and weight distribution which either do or do not fit the swing characteristics and preference for feel of the golfer using the shaft. Again, there is no magic in this. There are thousands of combinations of shaft weight, flex, bend profile, weight distribution, and torque and thousands of combinations of golfer swing speed, transition force, downswing tempo, wrist-cock release and feel preferences. The perfect shaft is when these two sides get matched up to each other in a perfect shaft fitting.
Shaft Myth #10 – How a shaft performs for a golfer is an indication of its quality
No, how a shaft performs for a golfer is an indication of how well the shaft’s weight, flex, bend profile, balance point and torque were FIT to the golfer’s size, strength, athletic ability and swing characteristics. Remember what I said about “good shafts” and “bad shafts”? There are no such things. There are only well fit and poorly fit shafts.
Shaft quality is more a case of how consistently can the shaft maker hit each one of the production specifications for each shaft they make within a very narrow range of error tolerance, shaft after shaft after shaft. And believe me, by no means does the cost of a shaft guarantee this definition of shaft quality.
In my shaft research work, my shaft data base now includes nearly 2000 different shaft models. In doing this, I get a chance to measure all sorts of specifications on a lot of different shafts from most of the shaft manufacturers in the world. When you measure multiples of the same model and flex of shafts, you get the chance to see who maintains tight error tolerances and who doesn’t. And I can tell you, the price of a shaft is not always related to how consistent or how tight the tolerances are for a company’s shafts.
Some of the high dollar shafts do display very tight, consistent error tolerances. Some don’t. And some of the lower priced shafts show a very high level of shaft to shaft consistency while again, some do not.
- Posts: 10236
Joined: 13/09/2010
Age: 57
Location: RP/64
Index: 8.4
Club: Montgriffon
That confirms what I think: as long as the clubhead is forgiving, who cares about the rest!
- Posts: 3196
Joined: 08/03/2011
Age: 46
Location: Languedoc
Index: 4.9
Club: Pitch et Putt Narbonne, French P&P champion 2014
https://www.shopyvision.com/product/revel-performabe-c426be-centre-speaker/ | 2023-11-29T09:20:45 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100057.69/warc/CC-MAIN-20231129073519-20231129103519-00256.warc.gz | 0.829865 | 813 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__9296665 | en | Revel PerformaBe C426Be Centre Speaker Each
- The Revel C426Be joins our award-winning PerformaBe series as a high-performance center channel loudspeaker for use in a multichannel home theater environment. As part of the PerformaBe loudspeaker line, the C426Be includes a 1-inch (25mm) beryllium tweeter driven by massive 85mm dual ceramic magnets. The powerful tweeter and 5th-generation ceramic-coated, cast-aluminum Acoustic Lens waveguide seamlessly integrate with the directivity of the companion midrange driver resulting in greater efficiency, improved dynamic range, reduced distortion, and increased power handling compared to aluminum or titanium tweeters.
- Beryllium – Element 4 on the Periodic Table – is a lightweight alkaline earth metal that is renowned for its remarkable physical properties, which make it the ideal material for a high-frequency transducer. Compared to aluminum and titanium tweeter diaphragms, beryllium offers 4.5 times the stiffness and three times more damping, and does so at only half the weight. Beryllium tweeters are the centerpiece of the Revel PerformaBe loudspeaker series.
- The 5.25-inch (130mm) midrange and quadruple 6.5-inch (165mm) aluminum cone woofers utilize Deep Ceramic Composite (DCC) cones for improved performance. DCC is a plasma electrolytic oxidation process that uses a plasma discharge to create a coarse ceramic coating on both sides of the aluminum core. The deep ceramic layers sandwiching the aluminum core provide constrained layer damping that push cone breakup modes outside of the passband allowing the driver to maintain ideal pistonic motion throughout its range. DCC cones combine with optimized motor magnetics to deliver improved mid- and low-frequency performance.
- PerformaBe crossover networks utilize all film capacitors and air core inductors in the midrange and tweeter circuits. These premium components allow Revel’s world-class engineers to extract nuances and details from music that would otherwise be lost. Combined with traditional Revel high-order crossover slopes and proprietary Acoustic Lens waveguide geometry, transducer integration is seamless.
- Brand: Revel
- Model: C426Be
- Color: black
- Type: 3-way quadruple 6.5” center channel loudspeaker
- High-frequency transducer: 1″ (25mm) beryllium dome with Acoustic Lens waveguide
- Low-frequency transducers: Four 6.5” (165mm) Deep Ceramic Composite (DCC) aluminum cones with cast frames
- Mid-frequency transducer: 5-1/4″ (130mm) Deep Ceramic Composite (DCC) aluminum cone with cast frame
- Frequency response: 38 Hz – 40 kHz (-6 dB)
- Low-frequency Extension: 35 Hz (-10 dB); 38 Hz (-6 dB); 41 Hz (-3 dB)
- Recommended Amplifier Power: 50 – 350 Watts RMS
- Crossover Frequencies: 210 Hz; 2.1 kHz
- Nominal impedance: 8 ohms
- Enclosure type: Bass-reflex via 2 rear-mounted ports
- Inputs: Dual gold-plated binding posts with shorting straps
- Sensitivity (2.83V/1m): 90 dB
- Dimensions: 263 mm x 979.7 mm x 358 mm
- Weight: 27.7 kg
- Warranty Type: Manufacturers
- Warranty Period: 1 Year | physics |
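As a rough illustration of what the sensitivity and power-handling figures above imply, here is a hedged back-of-envelope sketch in Python. It assumes the usual idealizations (free field, no power compression, no room gain); note that 2.83 V into this speaker's 8-ohm load is 1 W, so the 90 dB (2.83 V/1 m) rating can be read as 90 dB (1 W/1 m).

```python
import math

def max_spl_db(sensitivity_db_1w_1m, power_w, distance_m=1.0):
    """Idealized free-field SPL; ignores compression and room effects."""
    return (sensitivity_db_1w_1m
            + 10 * math.log10(power_w)      # +10 dB per 10x power
            - 20 * math.log10(distance_m))  # -6 dB per doubling of distance

# 90 dB sensitivity, 350 W of amplifier power, listener at 3 m:
print(round(max_spl_db(90, 350, 3.0), 1))  # ~105.9 dB
```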
https://cestlafranz.com/balloon-animals-and-bouncy-castles-on-the-moon-condition-of-inflatable-habitat/ | 2023-12-02T11:43:02 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100399.81/warc/CC-MAIN-20231202105028-20231202135028-00753.warc.gz | 0.921437 | 1,030 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__170462161 | en | Balloon animals and bouncy castles on the moon. Condition of inflatable habitat
Each year, NASA’s Breakthrough, Innovate, and Game-Change (BIG) Idea Challenge invites innovative students to build and demonstrate concepts that could benefit future human missions to the Moon and beyond. This year’s theme is “Inflatable Systems for Lunar Operations,” which can significantly reduce the mass and volume of stored payloads sent to the Moon. This is critical to the Artemis program because it returns astronauts to the Moon for the first time since the Apollo era more than fifty years ago. It will also reduce the costs of sending payloads to the Moon, Mars and other destinations in deep space.
The BIG Idea Challenge is sponsored by NASA’s Space Technology Mission Directorate (STMD) as part of a collaborative effort between the Game Change Development (GCD) program and the agency’s Office of Science, Technology, Engineering, and Mathematics (STEM) Engagement. The competition is jointly administered by the National Institute of Space (NIA) and the Johns Hopkins Applied Physics Laboratory (JHUAPL) and is funded by a GCD and National Space Grant and Fellowship Project. As part of the challenge, teams of five to 25 students and faculty advisors will submit proposals, and five to eight finalists will be selected for further development.
Despite decades of growth and development, the greatest challenges to sending manned missions into space remain limited in size and mass. Like it or not, launches are still subject to the rocket equation, which creates a vicious cycle in which larger payloads require more propellant to break free from Earth’s gravity. This in turn means larger rockets with heavier fuel tanks, etc. As such, large structures cannot be placed on the surface of the Moon or Mars without complex deployment mechanisms and assembly on site.
NASA has explored multiple solutions to this problem, which include using local resources to create building materials and provide for astronauts’ needs. In-Site Resource Utilization (ISRU). This has the advantage of reducing the amount of supplies astronauts will need to bring with them while reducing reliance on resupply missions. Another solution is to send large inflatable systems, which are low in mass and can be packed tightly into payload covers. Once they reach their destination and are inflated, they expand to many times their stored size.
Combined with advanced fabrics and internal pressure reinforcement, inflatable systems can provide robust habitats and environmental protection against harsh extraterrestrial conditions. That’s the purpose of the 2024 Big Idea Challenge, in which university-level teams are tasked with designing habitats that include inflatable components. These range from towers, bridges and antennas to soft robots, actuators, connectors, deployment mechanisms, airlocks and temporary shelters. Nicky Werkheiser, technology maturation manager for NASA’s STMD, said in a recent NASA press release:
“This challenge is particularly exciting because it applies outside-the-box thinking to the design and engineering processes that will be required to integrate inflatable components into space missions. Harnessing the impressive creativity demonstrated by this collective could provide truly new solutions for future space exploration.”
Finalists will be selected by a panel of NASA and industry experts who will evaluate the proposal and video package for mission scenarios involving inflatable systems. The five to eight selected classes will receive a stipend of between $50,000 and $150,000, including expenses for hardware, materials, testing equipment, software, etc. The teams will spend the next nine months developing, refining, testing their proposals and preparing a business plan. A technical writing of 15 to 20 pages detailing their findings. This will be followed by the annual BIG Idea Forum next fall, where they will be invited to submit their concepts for technical design review.
This will include proof-of-concept demonstrations in analog test environments that simulate lunar conditions. Tomas Gonzalez Torres, space grants project manager for NASA’s Office of STEM Engagement, said:
“When it comes to mission-critical technology for upcoming space exploration efforts, academia is an important partner. University-level teams are pushing the boundaries in terms of creativity and also in demonstrating technological readiness for innovative ideas. These ideas can be incorporated into technology development at the micro and macro level.”
This year’s competition complements the 2023 Lunar Forge Challenge, in which undergraduate and graduate students were awarded up to $180,000 to design, develop and demonstrate technologies that will enable the production of lunar infrastructure through ISRU-derived minerals. These and other technologies will be critical to the Artemis missions and the long-term goals that NASA, fellow agencies, and commercial partners have to create permanent infrastructure on the Moon. In addition to enhancing lunar exploration, research, and perhaps settlement, these efforts will enable future missions to Mars and beyond.
To learn more about the 2024 Big Idea Challenge and how to enter, visit the BIG Idea website at bigidea.nianet.org.
Further reading: NASA | physics |
https://www.edntaiwan.com/tools/microstrip/ | 2023-06-01T12:50:46 | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647810.28/warc/CC-MAIN-20230601110845-20230601140845-00315.warc.gz | 0.930021 | 191 | CC-MAIN-2023-23 | webtext-fineweb__CC-MAIN-2023-23__0__105822136 | en | The microstrip is a very simple yet useful way to create a transmission line with a PCB. There are some advantages to using a microstrip transmission line over other alternatives. Modeling approximation can be used to design the microstrip trace. By understanding the microstrip transmission line, designers can properly build these structures to meet their needs.
A microstrip is constructed with a flat conductor suspended over a ground plane. The conductor and ground plane are separated by a dielectric. The suface microstrip transmission line also has free space (air) as the dielectric above the conductor. This structure can be built in materials other than printed circuit boards, but will always consist of a conductor separated from a ground plane by some dielectric material.
Models have been created to approximate the characteristics of the microstrip transmission line.
The source of this formula is based on Wheeler’s equation.
Related Products: RF Transceiver | physics |
https://developmentco.com/engineering-apprentices-team-development/ | 2023-12-06T00:46:55 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100575.30/warc/CC-MAIN-20231206000253-20231206030253-00298.warc.gz | 0.964804 | 382 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__214240840 | en | Learning how to build great project teams is an essential skill in the engineering world. And a motivated, confident and well-bonded team is the Holy Grail of any workplace. So it was truly inspirational watching our group of 16 to 18 year old engineering apprentices take on the rollercoaster challenge on Broad Haven beach last week.
Using limited equipment, the team had to work within a tight timeframe to construct a structure which would allow a ball to travel its length by gravity alone. The ball also had to complete the course within a specified time range and the rising tide and gusting winds gave an added (and very real) edge to the task!
After electing a team-leader, the team sub-divided into three separate teams, each constructing a different part of the ‘Roller Coaster’. Working against the clock, the team leader was tasked with motivating and guiding each team, and then bringing the three teams together to assemble the complete structure. Collaboration amongst the three teams, and sharing of resources and information, was vital for the success of the overall operation. Coming at the end of a week-long residential programme, Roller Coaster was a great finale, with lots of energy, high tension and finally euphoria as the ball completed its course in a perfect 45 seconds.
As an outside observer, I was truly impressed by how well the group (who had met for the first time on this residential course, and was a first time away from home for many) worked together to complete the task. Apart from open communication and team cooperation, the task involved planning, innovation, time management, maximising efficiency of resources, leadership and trust. It was wonderful to witness the emerging confidence of the apprentices as they recognised the value of their individual contributions, and the huge shared sense of achievement when the “moment of truth” arrived and their project delivered success. | physics |
https://johnsonforgovernor.org/2023/02/07/the-wonders-of-air-to-water-heat-pumps-everything-you-wanted-to-know/ | 2023-09-23T12:10:56 | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506480.7/warc/CC-MAIN-20230923094750-20230923124750-00109.warc.gz | 0.940956 | 933 | CC-MAIN-2023-40 | webtext-fineweb__CC-MAIN-2023-40__0__132253491 | en | Air water heat pumps are becoming increasingly popular and are now being used in more homes than ever before. This innovative technology is revolutionizing the way we heat our homes and offices, offering a clean and efficient energy solution that has a minimal impact on the environment. In this article, we’ll look at what õhk vesi soojuspumbad are, how they work, their advantages and disadvantages, and whether or not they’re suitable for your home.
An air-to-water heat pump is a device that extracts thermal energy from the outside air and transfers it to domestic hot water. It works by drawing in outside air using a built-in fan system and then passing it over an evaporator coil containing refrigerant gas. The gas absorbs the energy from the air, which is then compressed to a higher temperature before passing through another condenser coil inside the unit. This produces hot water for use in showers, baths, and sinks, as well as heating radiators throughout the home.
How do they work?
Air-to-water heat pumps work by transferring thermal energy from one source to another through a refrigeration cycle. First, cold outside air is drawn in through an external grille, where it passes over an evaporator coil containing refrigerant gas – typically R-134a or R-407C – which absorbs energy from the air. The heated gas is then compressed to a higher temperature by an internal compressor before passing through a condenser coil inside the unit, where it releases its thermal energy into domestic hot water stored in a tank or cylinder. Any remaining energy can be used to pre-heat incoming cold mains water if required, or stored for future use when temperatures drop below zero degrees Celsius (32°F).
What are the benefits?
One of the main advantages of air-to-water heat pumps is their ability to deliver large amounts of heat with minimal emissions; they produce 50% less CO2 than traditional fossil fuels such as oil and natural gas, while also having much lower running costs due to their high efficiency (a coefficient of performance of around 3, i.e. over 300%). They also require little maintenance once installed and can deliver up to four times more heat energy than the electricity they consume, making them ideal for larger households looking to make long-term savings on their heating bills.
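A quick back-of-envelope comparison makes the efficiency claim concrete. The sketch below uses made-up prices and demand figures; the only physics in it is that a heat pump delivers roughly COP times as much heat as the electricity it consumes.

```python
# Yearly heating cost: electricity bought = heat demand / COP.

def annual_cost(heat_demand_kwh, cop, price_per_kwh):
    return heat_demand_kwh / cop * price_per_kwh

demand = 10_000   # kWh of heat per year (illustrative)
price = 0.25      # cost per kWh of electricity (illustrative)

print(annual_cost(demand, 1.0, price))  # direct electric heating: 2500.0
print(annual_cost(demand, 3.0, price))  # COP-3 heat pump: ~833.3
```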
What are the disadvantages?
Although most people see great value in installing an air-to-water heat pump system, there are some drawbacks, such as potentially high upfront costs due to installation fees and the special components needed for operation; noise from both indoor units mounted on walls/ceilings (although this can be minimised) and outdoor fans; potential problems with condensation forming around windows during winter months; siting restrictions due to the need for adequate ventilation space around outdoor units; limited availability in certain areas not suited to pumping hot/cold air during winter/summer months; and the need for regular maintenance every two years or so depending on usage. While all of these drawbacks should be considered before purchasing or installing any type of HVAC equipment, many homeowners still find these systems to be a very worthwhile investment given their overall cost-effectiveness compared to other options available on the market today, including those focused solely on renewable energy, such as solar panels.
Are air/water heat pumps right for my home?
Ultimately, deciding whether or not to invest in an air/water heat pump will depend on your own individual needs, but generally speaking, if you’re looking for affordable heating solutions that have a low environmental impact while offering significant long-term savings, then these systems are likely to prove beneficial – particularly if you live in cooler climates or regions with shorter winters, where traditional methods may not perform optimally during the colder months. However, before you invest, make sure you research your local regulations regarding the permits/installations required for safe operation – failure to do so could result in costly fines down the line!
Is it worth the investment?
In conclusion, whilst there may be some initial financial outlay associated with purchasing and installing this type of HVAC equipment – especially if you don't already have an existing ductwork setup – the overall return on investment should outweigh any negatives after a two- to three-year period, thanks largely to the reduced running costs that come with the high efficiency of modern systems. So if you're looking to maximize your energy efficiency whilst minimizing your impact on the environment, it is worth investing some time in researching different models – but always remember to check local regulations first to ensure you comply with applicable laws!
https://www.bigshocks.com/oil-systems/moroso-oil-accumulator.html | 2021-09-22T17:20:21 | s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057371.69/warc/CC-MAIN-20210922163121-20210922193121-00376.warc.gz | 0.92009 | 123 | CC-MAIN-2021-39 | webtext-fineweb__CC-MAIN-2021-39__0__176227651 | en | Add To Cart
HOW THE ACCUMULATOR WORKS:
The Accumulator is tapped into the pressure side of the engine's oiling system. When the engine is running, oil pressure forces reserve oil into the accumulator and compresses the air ahead of it.
If oil pressure should suddenly drop because of hard acceleration, severe cornering or hard braking, the air pressure immediately sends oil to the main galleries. When the danger is over and the pump is once again primed with oil, the oil pressure forces oil back into the Accumulator where it is ready for the next emergency. | physics |
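The oil reserve available from an accumulator follows from Boyle's law: the trapped air is squeezed into a smaller volume in proportion to the pressure rise, and the space it gives up holds the reserve oil. A minimal sketch, using illustrative numbers and absolute pressures:

```python
# Isothermal Boyle's-law estimate: P1 * V1 = P2 * V2 for the trapped air.

def oil_stored(total_volume, precharge_abs, system_abs):
    air_volume = total_volume * precharge_abs / system_abs
    return total_volume - air_volume  # space vacated by the air holds oil

# 3-quart accumulator, air at ~1 atm (14.7 psia), oil system at 60 psig (74.7 psia):
print(round(oil_stored(3.0, 14.7, 74.7), 2))  # ~2.41 quarts held in reserve
```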
http://fabspaces.cc/magic-cubes | 2013-12-07T06:21:50 | s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163053608/warc/CC-MAIN-20131204131733-00074-ip-10-33-133-15.ec2.internal.warc.gz | 0.945307 | 224 | CC-MAIN-2013-48 | webtext-fineweb__CC-MAIN-2013-48__0__76788318 | en | Following on the success of our Wired Cubes workshop, we have created a more ambitious version that will have kids creating electronic circuits capable of simulating a simple computer… out of paper!
We use many of the same fabrication techniques, so this workshop could very well be a follow up. The “magic” added to the cubes comes from transistors and capacitors, which will help us control how information is stored within our cubes. Yes, just like the memory in any computer, our cubes play the role of one BIT of information and many opportunities to understand computing derive from that analogy. But the journey has its own surprises and the kids will learn a couple of neat tricks that allow them to visualize how transistors work.
Towards the end of the workshop the kids are capable of using their own Magic Cubes to simulate simple computing processes and to demonstrate their proficiency in this area they will run a simple algorithm with their newly created computer.
This workshop is planned for 3 hours and is recommended for kids 10 to 14. Take a look at what Girls Learning Code had to say about our workshop: | physics |
https://worldscoolestraingauge.com/ | 2022-05-24T20:40:02 | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662577259.70/warc/CC-MAIN-20220524203438-20220524233438-00451.warc.gz | 0.956588 | 1,006 | CC-MAIN-2022-21 | webtext-fineweb__CC-MAIN-2022-21__0__30311149 | en | We Make Rain Fun!
A modern twist on a classic device, the World's Coolest Rain Gauge® is the original, award-winning floating rain gauge®. Based on the Archimedean principle of water displacement, the measurement tube rises from the outer collection flute to show water accumulation. It’s practical, fun...and very cool!
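For the curious, the displacement principle can be sketched in a few lines of Python. The geometry below is invented for illustration: a float of constant weight keeps a constant draft, so its top rises exactly as much as the water surface does, and if the flute collects the rain falling on its own footprint the reading is 1:1 with rainfall.

```python
# Idealized Archimedes sketch with made-up dimensions (square inches, inches).

def tube_rise(rain_depth_in, catch_area_sq_in, flute_area_sq_in):
    added_volume = rain_depth_in * catch_area_sq_in  # water collected
    return added_volume / flute_area_sq_in           # surface (and float) rise

# When the flute catches its own footprint, 1 inch of rain reads as 1 inch:
print(tube_rise(1.0, catch_area_sq_in=12.0, flute_area_sq_in=12.0))  # 1.0
```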
Our rain gauges are made in the USA with up to 80% domestic components. Assisted by a collection of brilliantly quirky machinery and imaginatively repurposed gizmos, we fabricate, test, pack and ship the World's Coolest Rain Gauge® worldwide.
The World's Coolest Rain Gauge Company® is based in Gardiner NY. We've been making rain gauges in the beautiful Hudson Valley since 1999. Thank you for supporting another small but mighty American business!
Years ago we bought this rain gauge but then it was designed with a glass tube and copper rim. After a few years we managed to break the glass. When I found the current version, I was excited. Always loved this design. If positioned correctly it can be read a long distance from the house. We now have 3 spread out over our property. Got replacements for the tube to be used in our old one, got another new complete gauge with long pole for further out in the yard and one that sits on the deck. This week we’ve gotten lots of rain (4” today) and they all work great.
I've had The World's Coolest Rain Gauge for many many years, and love it!! Even though I put it in the garage for a few of the coldest months, the foam still gets crumbly with time. So I've just received the replacement tubes which are great! But just as great, if not more so, is this replacement tube that also has the metric measurements (like most of the world, outside the US, actually uses!!). When I write to family in Europe I can now immediately report the millimeters! Thanks for a great product, great service, fast shipping!
Wouldn't even think of using another rain gauge - this is the 1st replacement tube I've needed in ~10 years, not because it's broken but because the numbers have finally faded away!!!
Made of quality materials, solid and robust. Really like the way it looks and functions. The plastic scale is easy to read from my porch.
We are very pleased with this rain guage. Can easily see from our kitchen window. Easy to read from about 10 ft distance. Well built and constructed. Highly Recommend!
We run Springbrook Greens Golf Course in Sterling, NY and I was looking for something functional that also added to the aesthetic value of the place. This thing is perfectly quirky and cool looking. LOVE IT!
Indeed this is a very cool rain gauge. Love the fact that the actual measuring part is not glass. I've broken past glass rain gauges, or forgotten many times to bring them indoors before a freeze. So easy to stake evenly in the ground. Accurate too. Checked with the local weather department and their report of the precipitation amount matched what was in the gauge.
Received two days after ordering and packaged well.
A functional work of art. We enjoy seeing it in the garden. The copper is beautiful.
The rain gauge appears to be well constructed and looks good in the back yard . I appreciate the large print, too. I'm looking forward to seeing it in action and that can't happen soon enough as rain in Kansas is desperately needed.
Love the gauge, and it's easy to read, but over several years the foam baked in the sunlight. The top of the foam was deteriorating and needed renewal, and this kit works great to keep the gauge working like new again.
Cool rain gauge and fantastic quality. Most of all, made in the USA.
Got the gauge as a gift for my father. He has an electronic weather station and sometimes questions it's accuracy. We got a nice rain 2 days after setting up the rain gauge. The electronic gauge said we got .73" of rain, the world's coolest rain gauge read basically the same. The only complaint I have is the markings on the floating part of the gauge don't go all the way around it, so if it somehow gets spun around, you can't see the reading from afar.
This rain gauge is the best! I love it. It's nice looking, and does the job.
I have purchased the World's Coolest Rain Gauge in the past. To place on my deck it required building a stand. It was exciting to see you now have developed the two-way clamp for mounting on patios. This is a housewarming gift for a friend. Thanks
The replacement gauge works great in our copper rain gauge. | physics |
https://www.rejuva.net/insulating-creme/ | 2022-10-03T09:08:09 | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00372.warc.gz | 0.924561 | 965 | CC-MAIN-2022-40 | webtext-fineweb__CC-MAIN-2022-40__0__159308304 | en | Rejuva Insulating Creme
Rejuva® Insulation Creme has been developed with superior technology. It will create an invisible insulation barrier, penetrating deep into the substrate, up to 17 mm and protecting the masonry for over 20 years. It will protect your walls and its cavity from penetrating damp, improve thermal resistance and reduce energy bills. Rejuva Insulation Creme makes walls and bricks self cleaning and allows the walls to breathe naturally
The excellent hydrophobic properties will make your masonry dry and extremely resistant to biological hazards, mould, mildew, etc.
Self cleaning efficiency
Dirt particles are unable to obtain a hold on a masonry coated with Rejuva and will simply flow off with rainfall.
Your masonry will remain clean and attractive for many years.
The masonry is able to breathe and will be permeable to water vapour.
Highlights and properties
How to apply Rejuva® Insulating Creme
After suitable preparation and treatment with Rejuva cleaner and when the facade is dry apply one full coat of Rejuva Insulation creme to the facade . Application may be by low pressure pump or airless spray.
Properties and characteristics
Rejuva® Insulation Creme has been tested and certified to ISO standard. The product penetrates deeply into the substrate, creating an invisible insulation barrier that reduces water absorption by more than 95%. REJUVA ® Insulation Creme has been tested according to EN ISO 15148:2002, demonstrating its hygrothermal performance on concrete, mortar, brick, and sandstone.
Damp walls = Thermal bridges
If the pores of a wall collect moisture, more heat is transferred than it would be possible if the cavities were filled with air. In conclusion, the thermal conductivity of the wall decreases enormously with dampness. Tests have shown that a damp content of 5% in a plain brick wall, for example, can lower the insulation performance by up to 50%, especially where cavity wall insulation has been installed. Consider that porous building materials, such as natural stone, brick and solid brick, to name a few, normally have good insulating properties. They have air-filled cavities, which offer low thermal conductivity. BUT ONLY WHEN THEY ARE DRY!
The masonry is able to breathe and will be permeable to water vapour whilst allowing air and moisture to pass through from one side to the other. A breathing masonry will improve the thermal conductivity and insulation properties. Consider that just a 5% content of damp in cavity wall insulation or other building materials can lower the thermal resistance by up to 50%.
Self cleaning efficiency
Dirt particles are unable to obtain a hold on the Rejuva® treated brickwork and will therefore simply flow off with rainfall. The masonry remains clean and attractive. Even on sides which are particularly exposed to the weather like on the north-facing side or areas in shade, as well as in areas with high air humidity or sea salt penetrated air.
Masonry insulation barrier
Rejuva® Insulation creme keeps your brickwork dry
If humidity can access into the cavity wall insulation, its insulating properties will be damaged. A masonry treated with Rejuva® Creme will be colour stable for many years. In conclusion, the façade will keep its beautiful appearance as when it was new. A treated surface will have super hydrophobic capabilities. It will not get dirty thanks to its self-cleaning properties. Therefore, dirt particles and dust will easily be washed off with rain.
Invisible insulation barrier for energy savings
Rejuva Insulation creme penetrates deeply into brickwork and forms a reaction with the mineral groups behind the surface to inject invisible insulation into the masonry.This will help to create a dry building as moisture can easily escape and breathe out. Thermal conductivity will slow down as a result and less energy will be needed.
Clear treatment for most masonry substrates
Rejuva Insulation creme ® can be used on all absorbent mineral surfaces. Clean all areas with a high pressure washer and allow to dry. Apply Rejuva cleaner to all areas and allow to dry.
Rejuva Insulation creme will penetrate into the masonry and leave a super hydrophobic ,self cleaning , breathable surface , there will be no change to the colour or sheen of the treated masonry.
Do not hesitate to contact if you have any questions. We would be pleased to give you more detailed information about our unique and clear coating technology whit a guaranteed life span of 20 years. | physics |
http://www.picoblvd.org/ | 2013-12-06T16:02:21 | s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163052034/warc/CC-MAIN-20131204131732-00056-ip-10-33-133-15.ec2.internal.warc.gz | 0.941578 | 653 | CC-MAIN-2013-48 | webtext-fineweb__CC-MAIN-2013-48__0__214058614 | en | Light emitting diode, abbreviated as LED, is a semiconductor source of light with a more bright and steady light spectrum compared to other sources. LEDs were originally restricted to single light uses especially as indicator lights on everything from computers to automobiles.
Several electronic companies discovered through as series of researches that when LEDs are grouped together, they emitted a very intensive, bright white light that consumed less power and generated less heat. Since their introduction, LEDs have greatly gained popularity in general lighting and are now most preferred against the traditional incandescent light bulbs. Below are some LED facts to give you a head-start on the bright side of this little electronic invention.
Some facts about Light emitting diodes
LEDs have a relatively very long useful life. Most reports estimates 50, 000 to 100, 000 hours of useful life. Other bulb such fluorescent tubes are rated at about 15,000 to 18,000 hours while incandescent bulbs at 1, 000 to 2,500 hours depending on the condition of use.
LEDs are known to produce less heat and hence consume less power as compared to compact or incandescent bulbs. It is estimated that consumers can save up to 90% on energy costs by replacing existing bulbs with LED bulbs.
LEDs are ideal for use in situations where there is frequent on-off cycling, unlike HID bulbs that require a longer time before restarting or fluorescent lamps that easily fail when cycled often.
Generates minimum heat
As mentioned earlier, LEDs generates very little heat thereby concentrating most of the energy to light. This fact ensures that there is no extra load on conditioning systems, an extra saving on the energy cost. Typically, a 60-watt standard incandescent bulb will generate 175C, a13-watt fluorescent lamp about 140 and as much as 212C for the standard PAR 75-watt bulb. When the above bulbs are replaced with LEDs, heat generated can be reduced up to as little as 27C to 30C.
No electromagnetic interference
The electronic cabling and working mechanism of the LEDs reduces the harmonic noises that are common with fluorescent bulbs. Additionally, they are not prone to electromagnetic interferences as compared to other bulbs.
LEDs can be easily dimmed by either lowering the flowered current or using pulse-width-modulation. This is what makes LED lights ideal for viewing on camera and also used as headlights on cars.
LEDs have proved in many ways to be friendlier to the environment as compared to incandescent or fluorescent bulbs. They do not contain halogen gases, toxic materials or hazardous mercury. They also do not emit infrared or ultraviolet rays, which makes them safe to humans and 100% recyclable.
LEDs, being portable solid-state components, can not be easily damaged by external shock, unlike the fragile fluorescent and incandescent bulbs. They can also work just fine in both hot and cold environments while withstanding frequent on-off cycling.
Easy on the eyes
LEDs produce soft white light without an interfering glare. The light spectrum resembles the natural daylight with an ability to enhance your working area with a bright clear light.
Check out Petro LED Signs for more resource on LED Lighting | physics |
https://www.mysuperduperwebsite.com/how-much-%CF%80 | 2023-01-27T14:09:16 | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494986.94/warc/CC-MAIN-20230127132641-20230127162641-00051.warc.gz | 0.972917 | 201 | CC-MAIN-2023-06 | webtext-fineweb__CC-MAIN-2023-06__0__249388692 | en | PI, denoted by the Greek letter π, is a mathematical constant that represents the ratio of a circle's circumference (the distance around the circle) to its diameter (the distance across the circle through its center). The value of PI is approximately 3.14159, but it is an irrational number, meaning that it cannot be expressed as a simple fraction and its decimal representation goes on forever without repeating. It is also a transcendental number, meaning that it is not the root of any non-zero polynomial equation with integer coefficients.
PI is used in many areas of mathematics, including geometry, trigonometry, and calculus. It is also used in physics and engineering to calculate the area and volume of circles, spheres, and other circular objects, as well as to calculate the angles and distances in circular motion.
It's worth mentioning that the number PI has been known for almost 4,000 years, and it has been calculated to millions of digits with the help of computers. | physics |
https://rehumanize.us/2023/04/03/chapter-1-information-in-the-physical-universe/ | 2024-02-21T04:20:04 | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473370.18/warc/CC-MAIN-20240221034447-20240221064447-00603.warc.gz | 0.901776 | 642 | CC-MAIN-2024-10 | webtext-fineweb__CC-MAIN-2024-10__0__147593005 | en | Section 1.1: Introduction to Information Theory
Information is the lifeblood of the universe. At its most basic level, information is a measure of the amount of order or disorder in a system. It forms the foundation for all phenomena, from the simplest subatomic particles to the most complex biological and artificial systems. In this chapter, we will explore the concept of information in the context of the physical universe and lay the groundwork for understanding how information underlies everything in existence.
Information theory is a branch of applied mathematics that originated in the mid-20th century with the pioneering work of Claude Shannon. His groundbreaking paper, “A Mathematical Theory of Communication,” laid the foundation for the modern understanding of information and established the concept of entropy as a measure of information content. Entropy is a central concept in information theory, as it quantifies the amount of uncertainty or randomness in a given system. When applied to the physical universe, entropy is intimately linked with the laws of thermodynamics and the flow of energy through various systems.
Information theory has since been applied to numerous scientific disciplines, ranging from biology and chemistry to computer science and quantum mechanics. In each of these fields, the concept of information has proven invaluable for understanding the fundamental principles governing the behavior of various systems. As we delve deeper into the study of the universe as a whole, the importance of information becomes increasingly evident.
In this chapter, we will explore the idea that the universe itself can be viewed as a computational system, governed by the laws of physics and the exchange of information. The cellular automaton model, first introduced by John von Neumann and later popularized by Stephen Wolfram, provides a framework for understanding how the universe can be viewed as a vast array of simple rules that give rise to complex behavior. This perspective has profound implications for our understanding of the fundamental nature of reality, suggesting that information processing lies at the heart of the universe’s evolution and structure.
As we delve deeper into the fabric of the universe, we will examine the role that information plays in the behavior of fundamental particles such as quarks, leptons, and bosons. These building blocks of matter interact and exchange information through quantum states, which encode the properties of each particle. The fundamental forces of the universe—gravity, electromagnetism, the strong nuclear force, and the weak nuclear force—govern these interactions and shape the flow of information within and between particles.
Moving up in scale, we will explore the emergence of atoms and molecules, which are formed through the intricate dance of atomic nuclei and electron orbitals. Chemical bonds link atoms together into molecular structures, with information encoded in the arrangement of these atomic and molecular systems. The formation of these structures is a testament to the organizing power of information and its ability to shape the physical universe.
In the broader context of the cosmos, we will examine the role of information in shaping the large-scale structure of the universe. From the cosmic microwave background radiation as the earliest observable information, to the influence of dark matter and dark energy on the arrangement of galaxies and galaxy clusters, information plays a crucial role in the formation and evolution of celestial bodies. | physics |
https://drfriedemann.com/captivate-podcast/the-scientific-proof-that-consciousness-creates-reality-with-mark-gober/ | 2023-12-03T21:28:43 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100508.53/warc/CC-MAIN-20231203193127-20231203223127-00355.warc.gz | 0.942965 | 207 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__139877823 | en | The Scientific Proof That Consciousness Creates Reality with Mark Gober
Have you ever wondered why when you are thinking about somebody, moments later that person calls you? And did you ever watch your dog get excited about your spouse or kids coming home minutes before the car pulls up on the driveway? Or have you found that when you had strong intentions, somehow the Universe seemed to align to support them? How do you explain these phenomenons?
My special guest on Empowerment Radio is Mark Gober, author of the fascinating book, An End to Upside Down Thinking. In his book, Mark presents clear and compelling scientific evidence that consciousness is the basis of all reality and can create and transform matter. Tune into Empowerment Radio this Wednesday, January 16th at 11AM PT / 2PM ET and learn more about how a new, Quantum Physics based understanding of consciousness can shed light on seeming wizard-like gifts of telepathy, remote viewing, precognition, psychokinesis, near-death and manifestation. | physics |
https://www.ipcirwin.com/plancks-constant-led-threshold-apparatus | 2023-12-04T12:14:14 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100529.8/warc/CC-MAIN-20231204115419-20231204145419-00707.warc.gz | 0.897069 | 161 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__206415911 | en | Planck's Constant LED Apparatus
A multipurpose unit that allows the user to easily calculate Planck's constant, to measure the wavelength of coloured light, demonstrate colour light transmission through colour filters and show the diffraction patterns for various wavelengths of coloured light using a range of 4 LEDs (Red, Yellow, Green and Blue) covering the light range from deep blue at 470nm to near IR at 940nm.
Box mounted with 5 x 4mm sockets for attachment to an ammeter or voltmeter to display measurements. Monitoring the voltage of each LED a graph of energy input as a function of light frequency emitted can be measured with an approximate value of Plank's constant calculated.
- Supplied with 500 lines/mm diffraction grating
- Designed and manufactured in the UK | physics |
https://chemeng.adelaide.edu.au/programs/sustainable/why/ | 2018-10-23T00:03:57 | s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583515555.58/warc/CC-MAIN-20181022222133-20181023003633-00340.warc.gz | 0.918261 | 350 | CC-MAIN-2018-43 | webtext-fineweb__CC-MAIN-2018-43__0__197393774 | en | Why Study Sustainable Energy Engineering?
Sustainable Energy Engineering will play an important, if not dominant, role in future technological developments. With advances in sustainable technologies and the complex requirement to solve issues of global warming, Sustainable Energy Engineering now enables the development of engineering systems that are compatible with current trends of reduced emissions, fuel efficiency and the use of environmentally sustainable materials.
What will you learn?
Students will have a broad knowledge of the principles and technologies used for the generation, storage and transmission of energy from sustainable sources. The Sustainable Energy program includes three subplans in Chemical, Electrical and Mechanical Engineering. The three streams will have a large common content, sharing courses in fundamental principles related to sustainable energy from the existing offerings in Chemical, Electrical and Electronic, and Mechanical Engineering Programs, but with greater specialisation in one of the three disciplines in each stream. In addition courses on technologies specifically related to sustainable energy which will be shared between the three streams.
Students will gain specialised knowledge in chemical, electrical or mechanical engineering such as to allow them to design and optimise the related components of sustainable energy systems.
The first two years of the Sustainable Energy Engineering program are devoted to building the engineering, mathematics and physics foundations that are followed up with specialist engineering subjects devoted to the chosen stream of specialisation in the final two years. The program emphasises engineering problem solving, analysis and design, computer-based methods, and research, communication and management skills.
Graduates of this program will have an appreciation and knowledge of social, environmental and technological issues related to the sustainable supply of energy combined with an in depth technical knowledge in one of the disciplines of chemical, electrical or mechanical engineering and the ability to design and optimise engineering systems in that discipline. | physics |
http://www.warnerbabcock.com/people/david-wolf/ | 2018-06-21T00:32:31 | s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863980.55/warc/CC-MAIN-20180621001211-20180621021211-00036.warc.gz | 0.942361 | 300 | CC-MAIN-2018-26 | webtext-fineweb__CC-MAIN-2018-26__0__139347577 | en | Dr. David Wolf joined WBI in 2016 as the Vice President of Technology Development, leading WBI’s efforts in developing grant support for its green innovation initiatives. Dr. Wolf holds a Bachelor’s degree with Honors in Physics from Brooklyn College of the CUNY and a Masters and Doctorate degree in Physics from Cornell University. After his graduate training, Dr. Wolf served as a National Cancer Institute Postdoctoral Fellow at the Johns Hopkins University, which led to a twenty-five year career as a Professor of Physiology at the University of Massachusetts Medical School and Senior Scientist at the Worcester Foundation for Experimental Biology. From there, Dr. Wolf led scientific teams in the life sciences industry developing non-invasive physiological monitoring devices and research instrumentation to support drug discovery in his roles as Vice President for Research and Development at Sensor and Biohybrid Technologies, Director of Optics and Photonics at Radiation Monitoring Devices, and Director of Diagnostic Applications at Pendar Technologies. Dr. Wolf has extensive experience in optics, photonics, spectroscopy, cell biology, immunology, data modeling and numerical algorithm development. He currently serves on the Editorial Board of the Biophysical Journal and previously on the Modeling Board of the American Journal of Physiology. He was Director of the Analytical and Quantitative Light Microscopy course at the Marine Biological Laboratory at Woods Hole, Massachusetts and co-editor of the book “Digital Microscopy,” now in its fourth edition. | physics |
http://hydnoraceous.duckdns.org/page/processes-that-shape-the-earth-physics-in-action | 2022-10-01T04:35:05 | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00153.warc.gz | 0.665499 | 117 | CC-MAIN-2022-40 | webtext-fineweb__CC-MAIN-2022-40__0__192215945 | en | Series: Physics in Action
Hardcover: 120 pages
Publisher: Chelsea House Pub; 1 edition (August 1, 2007)
Product Dimensions: 7.2 x 0.5 x 9.8 inches
Amazon Rank: 5582896
Format: PDF ePub djvu ebook
Through real-life examples, this informational series explains the quantification and measurement of physical things in order to describe relationships or laws between matter and energy.... | physics |
https://180degreesnews.com/2022/07/12/webb-space-telescope-new-photographs/ | 2023-03-29T16:35:31 | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949009.11/warc/CC-MAIN-20230329151629-20230329181629-00786.warc.gz | 0.928512 | 961 | CC-MAIN-2023-14 | webtext-fineweb__CC-MAIN-2023-14__0__83519094 | en | Webb Space Telescope – New Photographs
Some new cosmic photographs released on Tuesday include a stellar nursery where stars are born, galaxies’ interactions, and an exoplanet’s unique glimpse from Webb Space Telescope.
After decades of anticipation, the world finally sees the first photographs captured by the James Webb Space Observatory, the most powerful space telescope ever built.
The world’s premier space observatory began construction in 2004, and after years of delays, the telescope and its giant gold mirror were launched on December 25.
The photos from Webb Space Telescope are worth the wait and will forever alter our perception of the universe.
On Monday, President Joe Biden revealed one of Webb’s first photographs, which NASA describes as “the deepest and brightest infrared image of the distant universe to date.” The remaining high-resolution color photographs will be released on Tuesday.
Several events will take place during Tuesday’s image release and stream live on NASA’s website.
The opening remarks by NASA leadership and the Webb team begin at 9:45 a.m. ET on Tuesday, followed by an image release aired at 10:30 a.m. ET. The images will be revealed one at a time, with more information provided at a press conference at 12:30 p.m. ET.
The space observatory can probe the universe’s mysteries using infrared light, undetectable to the human eye.
Webb will see into the atmospheres of exoplanets, some of which may be habitable, and unearth clues in the ongoing quest for life beyond Earth.
The telescope will also investigate every stage of cosmic history, from the first glows after the great bang that created our universe to the birth of the galaxies, stars, and planets that populate it now.
Now, the Webb telescope is ready to help us understand the universe’s origins and address fundamental issues about our existence and place in the universe, such as where we came from and whether we are alone in the universe.
The first image, released on Monday, depicts SMACS 0723, in which a large cluster of galaxy clusters acts as a magnifying glass, for the objects behind them. This process, known as gravitational lensing, resulted in Webb’s first deep field vision, which contains ancient and dim galaxies.
Some of these far-off galaxies and stellar clusters have never before been observed. The galaxy cluster is depicted as it was 4.6 billion years ago.
The image, captured by Webb’s Near-Infrared Camera, comprises photographs captured at various wavelengths of light over 12.5 hours. Deep field observations are long-term observations of sky regions that can reveal faint objects.
The Carina Nebula, WASP-96b, Southern Ring Nebula, and Stephan’s Quintet are among Webb’s other vital targets for the first image release.
The Carina Nebula, located about 7,600 light-years away, is a stellar nursery where stars are born. It is one of the sky’s largest and brightest nebulae and contains several stars far more massive than our sun.
Webb’s investigation of the enormous gas planet WASP-96b will result in the first full-color spectrum of an exoplanet. The spectrum will contain various light wavelengths that could disclose new information about the planet, such as whether it has an atmosphere or not. WASP-96b was discovered in 2014 and is 1,150 light-years away from Earth, half the mass of the planet Jupiter and orbits its star once every 3.4 days.
The Southern Ring Nebula, popularly known as the “Eight-Burst,” is located 2,000 light-years from Earth. A growing cloud of gas surrounds a dead star in this enormous planetary nebula.
The view of Stephan’s Quintet from space will reveal how galaxies interact with one another. This compact galaxy group, discovered in 1787, is located in the constellation Pegasus, 290 million light-years away. According to NASA, four of the five galaxies in the group are “engaged in a cosmic dance of frequent close encounters.”
An international committee comprised of representatives from NASA, European Space Agency, Canadian Space Agency, and Space Telescope Science Institute in Baltimore chose the targets.
According to NASA Deputy Administrator Pam Melroy, the mission, originally scheduled to last ten years, now has enough fuel to endure 20 years.
These are only the first of many photos from Webb that promise to dramatically transform our understanding of the universe over the next two decades.
Read More on Technology News. | physics |
http://goto-observatory.org/job-opportunities/ | 2021-09-21T19:30:29 | s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057227.73/warc/CC-MAIN-20210921191451-20210921221451-00614.warc.gz | 0.931154 | 644 | CC-MAIN-2021-39 | webtext-fineweb__CC-MAIN-2021-39__0__284285 | en | Multiple positions at University of Warwick in extragalactic transients and multi-messenger astronomy
(Closing Dates from 14th March 2021)
The Astronomy and Astrophysics Group at The University of Warwick invites applications for at least two permanent academic, and one fixed-term Research Fellow in the field of extragalactic transients and multi-messenger astronomy.
Warwick has a strong track record in time-domain astrophysics and explosive transients which we wish to build upon in the era of multi-messenger astronomy. Warwick leads the international Gravitational wave Optical Transient Observer (GOTO) project, which consists of an array of wide-field telescopes aimed at fast transients and GW follow-up from both hemispheres. It has been awarded substantial funding to expand over the next few years. Members of staff are also active in transient follow-up programs, such as the ENGRAVE consortium for the follow-up of gravitational wave events with the facilities of the European Southern Observatory, and are interested in the host environments and progenitor populations. It is expected that the successful candidates will take advantage of the opportunities presented by these experiments and others on the multi-messenger roadmap.
The academic positions are available at both senior (Professor) and junior level (Assistant Professor). Expansion in this area is a strategic priority. Applicants will have a strong research track record and be ready to build their own research team with the support of colleagues at Warwick. Informal enquiries on these positions are welcomed to Danny Steeghs, [email protected].
The Research Fellow will work with with Dr Joe Lyman as part of a UKRI Future Leaders Fellowship. They will work on both software development and science exploitation of the GOTO project. Applicants must hold, or be about to attain, a PhD in a related field. Experience in observational astronomy, automated data analysis, or machine learning is desired. Informal enquiries on this position are welcomed at [email protected].
Full details of the positions and application procedure are available at the respective job pages:
– Professor (closing date 17th March 2021) https://bit.ly/3q8d1bv
– Assistant Professor (closing date 14th March 2021)
– Research fellow (closing date 21st March 2021)
All qualified applicants are encouraged to apply, especially those from under-represented groups. The Physics Department and the University of Warwick are proud of their diverse community of staff, students, and visitors, and are committed to maintaining an excellent record in teaching and research by ensuring that there is equality of opportunity for all, fostered in an environment of mutual respect and dignity. Both the Physics Department and the University of Warwick hold Athena SWAN Silver awards, a national initiative to promote gender equality for all staff and students. The Physics Department is also a Juno Champion, which is an award from the Institute of Physics to recognise our efforts to address the under-representation of women in university physics and to encourage better practice for both women and men. | physics |
http://archaeologynewsnetwork.blogspot.com/2016/07/stellar-outburst-brings-water-snowline.html | 2017-04-30T03:15:49 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917124297.82/warc/CC-MAIN-20170423031204-00186-ip-10-145-167-34.ec2.internal.warc.gz | 0.940295 | 821 | CC-MAIN-2017-17 | webtext-fineweb__CC-MAIN-2017-17__0__57431586 | en | Stellar outburst brings water snowline into view
A violent outburst by the young star V883 Orionis has given astronomers their first view of a water "snowline" in a protoplanetary disk -- the transition point around the star where the temperature and pressure are low enough for water ice to form.
|Artist impression of the water snowline around the young star V883 Orionis, as detected with ALMA |
[Credit: A. Angelich (NRAO/AUI/NSF)]
Typically, heat from a young Sun-like star prevents water molecules from freezing within a radius of about three astronomical units, around 450 million kilometers, from the star. (An astronomical unit -- AU -- is the average distance from the Earth to the Sun). Beyond that point, known as the snowline, water condenses to form a layer of ice on dust grains and other particles.
An abrupt and powerful increase in the brightness of V883 Orionis, however, has pushed the water snowline out to approximately 40 AU (about 6 billion kilometers), a distance that corresponds roughly to the orbit of Pluto in our solar system.
|ALMA image of V883 Orionis. The dark ring midway through the disk is the water snowline, the point from|
the star where the temperature and pressure dip low enough for water ice to form
[Credit: L. Cieza et al.; ALMA (ESO/NAOJ/NRAO)]
"The ALMA observations came as a surprise to us," said Lucas Cieza, an astronomer at Diego Portales University, Santiago, Chile, and lead author of a paper describing these results published in the journal Nature.
"Our observations were designed to image disk fragmentation, which is one of the proposed mechanisms for the formation of giant planets. We saw none of that, as the disk is probably too warm to fragment despite its very large mass. Instead, we found what looks like a ring at 40 AU. This illustrates well the transformational power of ALMA, which delivers exciting results even if they are not the ones we were looking for."
|This illustration shows how the outburst of the young star V883 Orionis has displaced the water snowline |
much further out from the star, and rendered it detectable with ALMA
[Credit: ALMA (ESO/NAOJ/NRAO)/L. Cieza]
Water ice helps regulate the agglomeration of dust grains into larger and larger particles. Astronomers believe that within the snowline, where water is vaporized, conditions favor the formation of smaller, rocky planets like Mars and Earth. Outside the water snowline, the presence of ice allows for the rapid formation of snowballs and cometary bodies, which facilitate the formation of massive gaseous planets such as Jupiter.
"Since water ice is more abundant than dust itself beyond the snowline, planets can aggregate more solid material and form bigger and faster there. In this way, giant planets like Jupiter and Saturn can form before the protoplanetary disk is gone," noted Zhu.
The discovery that these outbursts may blast the water snow line to about 10 times its typical radius is very significant to the development of reliable planetary formation models. Such outbursts are believed to be a stage in the evolution of most planetary systems, so this may be the first observation of a common occurrence. In that case, this direct observation from ALMA could contribute substantially to an improved understanding of how planets throughout the Universe form and evolve. It also sheds light on how water ice may have been distributed in our own protoplanetary disk.
The star V883 Orionis is located approximately 1,350 light-years from Earth in the Orion Nebula Cluster. At this distance, ALMA was able to achieve a resolution of about 12 AU -- enough to resolve the water snowline in this system but insufficient to do so around a typical young star.
Source: National Radio Astronomy Observatory [July 13, 2016] | physics |
http://www.niskayunaschools.org/VanAntwerp/schoolnews/VA-National-Science-Bowl.cfm | 2017-09-26T16:32:14 | s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818696653.69/warc/CC-MAIN-20170926160416-20170926180416-00600.warc.gz | 0.959134 | 282 | CC-MAIN-2017-39 | webtext-fineweb__CC-MAIN-2017-39__0__162951467 | en | At the National World War II Memorial in Washington D.C., from left to right are VA students Rasya Bollapragada, Tatiana Malcevic, Evan Schnell, Aditya Kanakasabapathy, and Arelson Rapisur. The group was visiting the capital city to compete at the National Science Bowl.
Congratulations to the Van Antwerp Science Bowl Team that did a great job competing at Nationals May 2-3 in Washington D.C., where they faced some very tough competition among the 48 teams from all across the U.S.
The National Science Bowl (NSB) is a highly competitive science education and academic event among teams of high school and middle school students who compete solving technical problems and answering questions in science and math. Competition for middle school students includes two types of competitions - an academic math and science competition and a model car race. The car race provides the students with a "hands-on" science and engineering experience where the teams design, build, and race their model cars.
The VA team placed 19th in the academic rounds and 7th in the electric car race.
While in the nation's capital, the students also took advantage of some wonderful sightseeing opportunities. They were taken on a nighttime tour of the monuments and spent a day at the National Mall where they enjoyed the National Zoo and the Air and Space Museum. | physics |
https://www.gallery263.com/event/the-physics-of-color/ | 2023-06-06T22:06:02 | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224653183.5/warc/CC-MAIN-20230606214755-20230607004755-00403.warc.gz | 0.941152 | 261 | CC-MAIN-2023-23 | webtext-fineweb__CC-MAIN-2023-23__0__247038191 | en | Visual art moves us, but what physical phenomena make it possible to communicate this emotion in the first place? How is light generated? How does it interact with the pigments to generate color? How do hue, saturation, and brightness of color emerge from the physical properties of light? Why is mixing pigments different from mixing lights? We will explore the physics of color and its use in the visual arts through hands-on activities and demonstrations to gain an understanding of light as an electromagnetic wave, the interaction of light and matter by quantum-mechanical processes, the relation between physical principles and the fundamentals of color theory, and its application in painting, color film, etc.
Participants will get a chance to play with colored LED sticks, colored filters, and hand-held spectrometers to gain a first-hand experience of the principles of color mixing, and understanding of the physics behind various types of illuminants, and an overview of how these principles govern a variety of artistic media.
The workshop will be led by a Kaća Bradonjić, a physics faculty at Hampshire College and a visual artist, with the assistance of a Boston-based painter Mirela Kulović. Learn more by visiting their websites:
This event is proudly hosted in partnership with Artweek | physics |
https://linereview.uk/the-cosmic-microwave-background-echoes-big-bang/ | 2024-04-21T19:00:02 | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817790.98/warc/CC-MAIN-20240421163736-20240421193736-00769.warc.gz | 0.929805 | 1,328 | CC-MAIN-2024-18 | webtext-fineweb__CC-MAIN-2024-18__0__44171344 | en | The Cosmic Microwave Background: Echoes of the Big Bang
In the vast cosmic tapestry of our universe, there is a subtle but profound whisper from the distant past – the Cosmic Microwave Background (CMB). Discovered by accident in 1965, this celestial phenomenon became one of the most convincing pieces of evidence supporting the Big Bang theory. In this article, we delve into the CMB, exploring its origins, significance, and the remarkable insights it has provided into the early moments of our universe.
The discovery of the CMB
The story of Cosmic Microwave Background begins with the accidental discovery of Arno Penzias and Robert Wilson, two engineers at Bell Telephone Laboratories. In 1965, while trying to eliminate a stubborn source of radio noise plaguing their microwave antenna, they encountered an unexplained persistent background noise. It was a weak, almost uniform radio signal coming from all directions in the sky. Confused by its source, they ruled out a myriad of earthly explanations, including pigeon droppings and radio interference. Little did they know that they had stumbled upon the remnants of the universe’s creation.
The origins of the CMB
The Cosmic Microwave Background is the afterglow of the Big Bang, the explosive event that marked the birth of our universe approximately 13.8 billion years ago. During the early moments of the universe’s existence, it was incredibly hot and dense, making it impossible for atoms to form. Instead, it consisted of a hot ionized plasma composed mostly of protons and electrons. Photons, particles of light, constantly interacted with this plasma and scattered in all directions.
However, as the universe expanded and cooled, a key event occurred approximately 380,000 years after the Big Bang. The temperature dropped to about 3,000 degrees Celsius (5,400 degrees Fahrenheit), allowing electrons and protons to combine to form neutral hydrogen atoms. This cosmic phase transition, known as recombination, marked the moment when the universe became transparent to radiation. Photons were no longer continuously scattered and began to move freely through space.
These primordial photons, released during recombination, are what we observe today as the cosmic microwave background. Over billions of years, the Microwave Background of the universe expanded, causing these photons to redshift. As they stretched with space, their wavelengths increased and their energies decreased. Today, they have cooled to a freezing temperature of just 2.7 Kelvin (-454.81 degrees Fahrenheit), placing them squarely in the microwave part of the electromagnetic spectrum.
CMB as a Time Capsule
The cosmic microwave background is often compared to a cosmic time capsule. It holds a wealth of information about the early universe, frozen in time. By studying the CMB, scientists can examine the conditions and properties of the universe when it was just a baby. Here are some key insights CMB provided:
- Age of the Universe: One of the most fundamental pieces of information gleaned from the CMB is the age of the universe. By analyzing temperature fluctuations in the CMB, the scientists determined that the universe is approximately 13.8 billion years old, a remarkable agreement with other cosmological measurements.
- Cosmic expansion: The CMB offers valuable data on the rate of cosmic expansion characterized by the Hubble constant. By combining the CMB observations with other measurements, scientists were able to refine their estimates of the Hubble constant and shed light on the current rate of expansion of the universe.
- Cosmic Ingredients: The composition of the universe is another crucial aspect revealed by the CMB. Through this analysis, scientists have determined the basic composition of the universe, which contains roughly 5% ordinary matter, 27% dark matter, and 68% dark energy.
- Density Fluctuations: Tiny temperature fluctuations in the CMB map provide insight into initial density fluctuations in the early universe. These fluctuations are the seeds from which later galaxies and clusters of galaxies were formed by gravitational attraction.
- Cosmic Geometry: The geometry of the universe is closely related to its total mass and energy content. The CMB helped confirm that our universe is flat, indicating that its density of matter and energy is precisely balanced, a critical clue about its ultimate fate.
CMB and cosmic anisotropy
While the cosmic microwave background appears almost uniform, it is not completely so. If you look closely at the CMB sky, you will find subtle temperature variations or anisotropies. These fluctuations are incredibly small, with temperature differences on the order of microkelvins. Yet they hold the key to understanding the formation of large-scale cosmic structures such as galaxies and galaxy clusters.
The seeds of these anisotropies can be traced back to the density fluctuations imprinted in the early universe. Areas of slightly higher density attracted more matter over cosmic time and became the birthplace of galaxies. Conversely, Microwave Background regions of lower density evolved into cosmic voids. The CMB anisotropy serves as a snapshot of these primordial fluctuations and allows scientists to study the initial conditions that led to the formation of the cosmic web we observe today.
Over the decades, numerous experiments have been conducted to probe the cosmic microwave background in more detail. One of the most significant breakthroughs came with the Wilkinson Microwave Anisotropy Probe (WMAP), launched in 2001. WMAP provided a detailed map of the temperature fluctuations of the CMB, greatly improving our understanding of the early universe.
Subsequently, the European Space Agency’s Planck satellite, launched in 2009, took CMB observations to the next level. It has produced an excellent map of the CMB anisotropies, revealing their fine details with unprecedented precision. The Planck data confirmed the standard cosmological model, known as the Lambda Cold Dark Matter (ΛCDM) model, which describes the universe’s evolution and structure formation based on the CMB findings.
The cosmic microwave background, an accidental discovery, has changed our understanding of the origin and evolution of the universe. It’s a testament to the power of scientific inquiry and chance that allows us to peer back in time to the moments just after the big bang. By studying its faint whisper, scientists have revealed the universe’s age, composition, geometry, and the seeds of cosmic structure. The CMB continues to be a rich source of cosmological knowledge, and future missions and experiments promise to reveal even more about the nature of our vast and mysterious universe. | physics |
https://uplifter.com/products/vacuum-lifter/hand-suction-cups/27/hp10-hand-vacuum | 2023-11-30T03:30:06 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100164.87/warc/CC-MAIN-20231130031610-20231130061610-00062.warc.gz | 0.893297 | 160 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__166738126 | en | - Max. load 100 kg
- with 2-fold safety
- For smooth, curved surfaces
- Suction plate ø 254 mm, curved
- Redline indicator warns of possible vacuum loss
- Metal handle
The HP10 hand vacuum has a maximum load capacity of 100 kg. Its deep, concave suction plate (Ø 254 mm) guarantees easy suction of curved or irregular, non-porous surfaces. The red-line indicator warns the user in case of vacuum loss. The non-return valve allows re-pumping without loss of residual vacuum. Loads can be released quickly and completely at any time with the valve release lever. A practical carrying case is included for safe transport.
Note: this unit is not suitable for thin, fragile materials. | physics |
http://lidongsheng.net.cn/paper_detail_2.html | 2023-05-31T23:31:20 | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647459.8/warc/CC-MAIN-20230531214247-20230601004247-00550.warc.gz | 0.929758 | 230 | CC-MAIN-2023-23 | webtext-fineweb__CC-MAIN-2023-23__0__303607706 | en | |Wind energy is one of the most important renewable energy sources and many countries are predicted to increase wind energy portion of their whole national energy supply to about twenty percent in the next decade. One potential obstacle in the use of wind turbines to harvest wind energy is the maintenance of the wind turbine blades. The blades are a crucial and costly part of a wind turbine and over their service life can suffer from factors such as material degradation and fatigue, which can limit their effectiveness and safety. Thus, the ability to detect damage in wind turbine blades is of great significance for planning maintenance and continued operation of the wind turbine. This paper presents a review of recent research and development in the field of damage detection for wind turbine blades. Specifically, this paper reviews frequently employed sensors including fiber optic and piezoelectric sensors, and four promising damage detection methods, namely, transmittance function, wave propagation, impedance and vibration based methods. As a note towards the future development trend for wind turbine sensing systems, the necessity for wireless sensing and energy harvesting is briefly presented. Finally, existing problems and promising research efforts for online damage detection of turbine blades are discussed. | physics |
https://windy.app/blog/big-data-weather-forecasting.html | 2023-03-28T05:42:47 | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948765.13/warc/CC-MAIN-20230328042424-20230328072424-00143.warc.gz | 0.941448 | 2,663 | CC-MAIN-2023-14 | webtext-fineweb__CC-MAIN-2023-14__0__30160566 | en | Surely you've heard of such a concept as Big Data.
In short, it's just a lot of data about anything that can only be stored and analyzed by supercomputers. That is to say, it is very complicated.
But Big Data has one major advantage, which follows from the name of the concept — it gives you a big picture, which you can't get in any other way. That's why, with the development of computers into super machines, Big Data is used in many different areas of life, including meteorology, where it is one of the most important concepts. Weather forecasting is actually the oldest area to use Big Data. At the same time, it is the future of meteorology.
In this article, two Windy.app experts talk about the relations between Big Data and weather. In other words, you will find out what is behind a simple-looking forecast table or weather map you use in your weather app.
But we'll start at the very beginning — with the collection of Big Data using weather stations and other weather instruments.
Ilya Drigo, professional meteorologist, developer, and researcher
Pavel Konstantinov, assistant professor of the Department of Meteorology and Climatology at Lomonosov Moscow State University (MSU), Ph. D.
Ilya: A necessary side-step: one of the most well-known models used to describe Big Data is the so-called 5V model. Based on it, Big Data has the following properties: Volume, Velocity, Variety, Veracity, and Value. Big Data in weather forecasting was one of the first to be fully aligned with the 5V model. Indeed, these are data of enormous size (Volume) about a rapidly changing environment (Velocity) obtained from completely different sources (Variety) for which you need to verify and assess their accuracy (Veracity) and which are important for countries' economies and people's lives.
So the first step to a quality weather forecast is collecting data about atmospheric conditions. The more data you have, the better. Every single day meteorologists all around the world gather, process, and analyze terabytes of information about the condition of the atmosphere and the oceans from all kinds of sources: weather stations, weather satellites, weather buoys, weather balloons, and weather radars — these are the five main weather instruments.
However, the stations are the most numerous: there are about 40 thousand of them around the world, counting only the official ones. Put simply, these are weather observation points. They register the data in their designated location and send those over to data processing centers. At Windy.app, we don't just show ready-made forecasts. We also continuously receive and display, in real-time, data from tens of thousands of these stations all around the world, information about the state of the ocean, and even information about precipitation, also in real-time.
Pavel: Weather stations are supervised by the countries in which they are located. There is also the World Meteorological Organization (WMO), a special agency of the UN whose purpose is to ensure that the number of these stations does not decrease and the system remains operational.
Yes, incidentally, weather forecasting is actually the oldest area to use Big Data. Transmission of these data is in fact the first example of free-to-flow and free-of-charge distribution of such information in the world. This is why meteorologists are rightly considered citizens of the world.
There is a notion that land meteorology is more simple than marine meteorology because there are more land stations. I would not agree. Although it is true that there are more land stations, environmental conditions on land are more diverse. It is more difficult to produce a forecast here than for a more-or-less uniform water area of a sea or bay. At the current stage of weather observation development, we just cannot qualify one as more simple and the other more difficult.
Weather radars are a relatively new invention. Nevertheless, we have high hopes for them. Radars allow us to see the bigger picture of the weather. For instance, when we see in a forecast that a certain region has a big rain cloud going over it, most of the time, this Big Data is received from radars.
Timur Garifov / Unsplash
Ilya: The weather data collected is sent to data centers and then used for weather forecasting calculations. Nowadays, forecasts are calculated using complex algorithms called forecasting computational models. The operating principle of these is solving hydrophysical equations describing atmosphere behavior. As input about the condition of the atmosphere all around the world, these models use the data obtained through meteorological measurements.
The task of weather forecasting is so computationally demanding that it uses the most powerful supercomputers present. Calculating weather for the whole world is so complicated and expensive that only a few hydrometeorological centers in the world can afford it.
At the same time, the accuracy of a modern weather forecast is significantly high: for example, we can predict tomorrow's weather with an accuracy of about 90–92%.
But weather forecasting can be done not only for the whole world but locally. At Windy.app, we also use our model, WRF8. With this model, we calculate the forecast for the whole of Europe and East Asia (Japan and South Korea) every day. We use cloud supercomputer processing power and the WRF-ARM model effective code to provide our users with some of the most accurate everyday weather forecasts available on the market today.
This WRF model is developed and supported by the worldwide community of meteorologists and developers. Due to the high demand for calculation parallelization effectiveness and code performance, the WRF-ARM model is implemented in Fortran, a general-purpose, compiled imperative programming language that is especially suited to numeric computation and scientific computing. Even though WRF-ARM is an open-source model, to adjust it effectively requires a great amount of knowledge and effort from the experts involved: meteorologists, developers of highly effective parallel code, and DevOps specialists.
Timur Garifov / Unsplash
So every night we download fresh weather data from the National Oceanic and Atmospheric Administration (NOAA) servers, process it, and use it as our initial and boundary conditions to run our WRF8 model. We calculate the forecast for 3 days for the whole of Europe with a resolution of 8 km (4.9 mi) and for East Asia with a resolution of 3 km (1.8 mi).
For the system to work with maximum effectiveness possible, we use cloud supercomputer processing power provided by Oracle. Grid calculations are effectively parallelized using the paradigms of MPI and OpenMP parallel programming, so, for the most effective calculations, we need a cluster with a low latency of the inner-cluster network and a big number of CPU cores. Apart from that, to optimize at the compiler level, we need direct access to the cores. By running a large number of tests in the Oracle cloud infrastructure, we managed to find the optimal configuration of a bare-metal cluster using low-latency RDMA networks to minimize the calculation time, on the one hand, increase the calculation accuracy on the other, and, as a result, optimize the financial costs.
Read more about the hardware side of our calculations in the official Oracle blog.
In general, the "raw" forecasts the forecasting model generates are very difficult to comprehend. These are just huge binary files of varying formats with hundreds of various variables. Weather applications like the Windy.app are exactly what presents these weather forecast data in a format convenient and understandable for the users: kitesurfers, sailors, fishermen, paragliders, and simply everyone who is interested in meteorology. That is, multiple times a day, we download a huge amount of data from all kinds of sources or weather models (both free-of-charge and paid ones), process it, automatically check their credibility, and then put those into specialized storage. This way, our users get the most up-to-date and accurate forecast for any world location as quickly and effectively as possible from tens of various sources.
Pavel: Why so many weather models? In different areas of the Earth's surface, models also differ in accuracy. So for the territory and type of sports you are interested in, you can end up choosing both a successful model and an unsuccessful one. This can make experiences of using the same weather app different for the same area.
At Windy.app, we compare several forecasts provided by different models. Basically, the Windy.app is a hub aggregating various observation data and data from different models. We then structure those and offer them to our users in an easy-to-understand format while also giving them a chance to make their own decision and act based on these data.
Timur Garifov / Unsplash
Ilya: Forecasting methods are being constantly improved, and their accuracy increases over time. However, due to the stochastic (that is, chaotic, random) nature of weather processes, uncertainty is still very high, and that is what makes weather forecasting such a difficult task. This uncertainty can be accessed and decreased by using methods of post-processing of model weather forecasts.
There is a whole set of analytical operations which scientists use on the resulting enormous amount of data to extract exactly the valuable information they need. One example of such post-processing is converting a forecast's data about water vapor concentration to the commonly known notions of "fog" or "mist".
Also, to assess the probability and veracity of a forecast, the method of assembly modeling is used. By running a whole set (or assembly) of models using slightly different initial conditions, we obtain a set of possible scenarios for a certain meteorological situation. These are terabytes of data that need to be statistically processed to obtain the resulting probabilities of these scenarios. For example, this is how forecasts of probable movement are made for tropical hurricanes.
By later comparing forecasts with the actual measurements made by meteorological stations, we can assess the forecast error and find the patterns leading to these errors. Then, by correcting these errors using methods of statistical processing and machine learning, we can further increase our forecasts' accuracy.
Also, methods of machine learning and deep learning are already actively used in modern meteorology. There are many examples of successfully applying neural networks to locally improve the accuracy of forecasts, as well as for nowcasting (short-term weather forecasting) and setting computational model parameters. Methods of machine learning allow us to effectively find and use non-linear relationships among sets of meteorological variables; however, a complete replacement of computational models with neural networks is not yet possible.
The more powerful the computer, the more resolution (area) of forecasting we can calculate in a given time. Moreover, a computer's computational power directly affects the whole system's speed, and in some cases, it might be critical to update the forecast as quickly as possible: for example, in the case of a developing tropical hurricane where literally every second of delay counts as the damage may be prohibitive.
Finally, more powerful computers enable us to use more computationally complex methods of data assimilation and processing, which ultimately increases our forecasts' accuracy.
Timur Garifov / Unsplash
Pavel: Machine learning has helped make our forecasts more accurate. Forecasts will become more and more accurate, concurrently with our progress in developing more powerful computers.
In meteorology, we also use distributed computing as an alternative to using supercomputers. This is employed in projects where one can give a part of their computing power to help with climate calculations. However, this is not as effective for weather forecasting as it is in other areas. The reason is that roughly speaking, a weather forecast for tomorrow must be completed today; the sooner, the better. So, one cannot use distributed computing to the full extent. It might be beneficial for the overall power, but the resulting computation speed is too low.
The atmosphere is so diverse that every resulting weather situation is just one of millions possible.
The future of meteorology is not about predicting the weather within the accuracy of fractions of a percent and doing it better than the day before (the forecasts are already quite accurate); it is about better predicting hurricanes and typhoons, squalls, heat waves, extreme precipitation (rain, snow...) — everything that harms people and damages economy of cities, regions, and countries.
With that, it is the one who learns to better predict such events and provide the data about them in a convenient way who will be the leader both in meteorology and in weather applications.
Text: Ilya Drido and Pavel Konstantinov of the Windy.app team
Cover photo: Alex Kotliarskyi / Unsplash
What is a weather forecast and how it works
The guide to the world's major weather forecast models | physics |
https://quadbeam.com/products/t30-sww-suspended-solids-turbidity-sensor | 2020-11-27T03:29:29 | s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141189038.24/warc/CC-MAIN-20201127015426-20201127045426-00365.warc.gz | 0.834422 | 384 | CC-MAIN-2020-50 | webtext-fineweb__CC-MAIN-2020-50__0__68763716 | en | T30-SWW Suspended Solids Sensor / Turbidity Sensor
- IMMERSION: T30 SWW
HOW IT WORKS:
- Ratio-metric four beam signal processing compensates for changes in optical properties of emitters and detectors due to ageing and surface fouling or coating.
- Effects of colour and temperature are virtually eliminated.
The T30 turbidity sensor has two emitters and two detectors, set at exactly 90 degrees to each other. As each emitter is pulsed in sequence it produces two detector currents, one from the detector opposite the emitter (attenuation) and the other from the detector at 90 degrees to the emitter (scattered light).
- Remote sediment and turbidity monitoring in waterways
- Raw water intake monitoring and control
- Final effluent release monitoring
- Dosing control
- Monitoring of clarifier overflow weirs
- Four Beam Ratio-Metric Self-Compensating system - reliable, repeatable, accurate signal, better control
- Does not require a sensor-specific controller
- Has an economical power use whilst using Modbus
- Competitively priced
- From 0 - 50NTU through to 0 - 1000NTU
- 0 - 750 mg/l SiO2
- the measuring range will vary according to media and particle characteristics
TEMPERATURE & PRESSURE OPERATING RANGE:
- 0 - 50°C
- 5 bar
A cleaning nozzle is included for applications where contamination may be so high it full masks the light transmission. A water jet or air jet can be fitted to remove offending contamination.
For more information: Download Data Sheet
Interested in T30-SWW Suspended Solids Sensor / Turbidity Sensor?
Complete the form below and we'll get back to you as soon as possible
(Please specify model type if applicable) | physics |
https://divers-and-sundry.blogspot.com/2015/12/interstellar.html | 2023-11-29T22:03:22 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100146.5/warc/CC-MAIN-20231129204528-20231129234528-00820.warc.gz | 0.915374 | 268 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__238417193 | en | Moria gives it 5 out of 5 stars and says
In an era where box-office science-fiction is represented almost exclusively by superheroes and mass destruction spectacle, it is a genuine pleasure seeing a film that is rooted in solid science and high-concept science-fiction. We have a film that features such challenging concepts from physics as relativity, gravity and spacetime, time dilation effects, black holes and wormholes, which are a little more than the usual stuff that get served up to the popcorn bucket multiplex crowd.Empire Online gives it 5 out of 5 stars. Slash Film says, "As Interstellar ends, there’s no doubt you’ve been on a ride. A thoroughly enjoyable and memorable cinematic experience that’s well-made and acted." Rolling Stone gives it 3 1/2 out of 4 stars and praises "how enthralling it is, how gracefully it blends the cosmic and the intimate, how deftly it explores the infinite in the smallest human details."
Roger Ebert's site gives it 3 1/2 out of 4 stars and says the film is "an impressive, at times astonishing movie that overwhelmed me to the point where my usual objections to Nolan's work melted away." Rotten Tomatoes has a critics rating of 71% and an audience rating of 85%. | physics |
http://www.saukprairie.com/events/details/smarty-pants-mousetrap-machine-show-11708 | 2018-06-18T19:30:45 | s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267860776.63/warc/CC-MAIN-20180618183714-20180618203714-00575.warc.gz | 0.933797 | 191 | CC-MAIN-2018-26 | webtext-fineweb__CC-MAIN-2018-26__0__39935415 | en | There’s a mouse on the loose at the public library!
Award-winning balloon artist Smarty Pants presents the Mousetrap Machine Show, a hilariously enjoyable science show that demonstrates how simple machines work. These machines aren’t made from steel or wood or plastic – they’re all constructed out of giant balloons! Throughout the show, audience volunteers help Smarty Pants construct the world’s biggest balloon mousetrap to catch a runaway mouse. It’s entertaining, it’s educational and best of all – fun!
Smarty Pants is a professional entertainer specializing in the rapidly growing field of balloon art. Using his unique oversized balloon props, he has been educating and entertaining audiences since 1998. Together with his wife and partner, the Lovely Miss Dena, he has brought his balloon shows to community centers, schools and festivals across the state of Illinois, and even to WGN's morning news.
https://www.sheriffadelfahmy.org/2023/03/why-do-we-study-physics/ | 2024-04-15T22:31:54 | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817033.56/warc/CC-MAIN-20240415205332-20240415235332-00316.warc.gz | 0.953322 | 2,087 | CC-MAIN-2024-18 | webtext-fineweb__CC-MAIN-2024-18__0__87988855 | en | This is the second post in the series “Why do we study x?” — from the perspective of computer engineers. The first post in the series laid down the foundation for why it is important for people not to forget foundational knowledge; knowledge about how things work, that can be used to improve on current technology and to allow them to maintain existing infrastructure when this is required.
In this post, I will specifically address the question of why it is important to study physics from the perspective of a computer engineer. In order to make the post manageable, I will divide physics into the following fields that I will talk about separately
- Newtonian Mechanics
- Electricity and Electromagnetism
- Quantum Mechanics
- Relativity
For each of these aspects of physics, I will explain why they are important for computer engineers. Let us begin our discussion with Newtonian mechanics.
The mechanics of Isaac Newton are one of the first aspects of college level physics that a computer engineer encounters — at least in the institution where I teach. What is Newtonian Mechanics? It’s the mechanical and physical laws that can be derived from Newton’s laws of motion.
These laws essentially describe how things move and act in the real world. By adding things like friction, air drag, and fluid mechanics, they can be used to describe, to a great degree of accuracy, how most objects behave in the real world.
Treatment of these topics at the college level is typically given using calculus — calculus-based physics. The use of calculus rather than algebra allows these laws to describe the instantaneous behavior of objects, rather than their behavior "on average", which is probably one of the factors that causes students to ask why we need to study them — calculus is not a favorite with many people, apparently.
So, how do we, as computer engineers, use these laws? Well, first there is the obvious use case of writing physics simulation programs. Scientific programs that can be used to experiment or to simulate physical objects have to have these laws baked into their code. For example, if you are going to write the software for a wind tunnel simulator, or a flight simulator, you have to have a working knowledge of Newtonian physics — note that I use this term to mean all non-relativistic physical laws of nature.
But scientific programs are not the only programs that require programmers to have knowledge of the laws of physics — games and computer-generated imagery (CGI) are two other areas of programming that need a working knowledge of physics.
For games, it is important to model how the real world works in order to make sure that when, for example, a car collides with an obstacle, it responds in a reasonable way. Such a model is also needed to simulate the trajectory of a bullet or the flow of water in a game. This is typically implemented in a physics engine that developers of games can use. It is impossible to write such an engine without a working knowledge of physics.
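To make that concrete, here is a minimal sketch of the kind of per-frame update a physics engine performs, applying gravity and a crude linear drag term with semi-implicit Euler integration (the constants and names are mine, purely for illustration):

```python
GRAVITY = -9.81  # m/s^2
DRAG = 0.1       # crude linear air-drag coefficient, 1/s

def step(pos, vel, dt):
    """Advance one body by one frame using semi-implicit Euler."""
    ax = -DRAG * vel[0]           # drag opposes horizontal motion
    ay = GRAVITY - DRAG * vel[1]  # gravity plus vertical drag
    vel = (vel[0] + ax * dt, vel[1] + ay * dt)          # update velocity first...
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)  # ...then position
    return pos, vel

# Fire a projectile at 60 frames per second until it lands:
pos, vel = (0.0, 0.0), (30.0, 30.0)
while True:
    pos, vel = step(pos, vel, dt=1 / 60)
    if pos[1] < 0.0:
        break
print(f"landed at x = {pos[0]:.1f} m")
```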
It is interesting to note that the physics engine for games may not always model exactly how the real-world works. It may be an approximation, or a deliberate divergence from the physics of the real-world to, for example, model how objects would move in space or in low gravity environments.
Of course, such engines can be written in collaboration with physicists, but a programmer with a working knowledge of Newtonian physics would greatly speed things up and ensure that the code is correctly written. Not to mention that he/she can go into the code and change anything when the game requires it. Like when adding a new level to a game that is hosted on a space station, or on a different planet, for example.
Similarly, in CGI, there must be a model of real-world physics so that the generated models can behave in a way that looks realistic to the audience. They need to respond to physical events in the same way — or at least in a similar way — to how objects in the real world would respond to them. Again, this necessitates a physics engine that models the behavior of real-world objects. The Jurassic Park movies, for example, would have been terrible had it not been possible to use a physics engine to model the interactions of the CGI dinosaurs with the real world.
Electricity and Electromagnetism
We can now move on to the next category of physics, electricity and electromagnetism. In addition to the previous example of writing scientific programs that can be used to model electrical circuits — think ORCAD or PSPICE — knowledge of this area of physics is essential for designing the circuits that interface the physical world to embedded systems.
You need to be able to design these circuits using the principles of electricity. For example, suppose you want to design a circuit that filters the signal from a noisy source before it is input to a processor. One possible way to do this is to design a filter using resistors, capacitors and inductors; this is called a passive filter. As you can see, without a working knowledge of electricity and circuits, it would be impossible to do this.
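For instance, the simplest passive low-pass filter, a resistor feeding a capacitor to ground, has a cutoff frequency of f_c = 1/(2*pi*R*C). A quick sketch, with arbitrary example component values:

```python
import math

def rc_lowpass_cutoff(r_ohms, c_farads):
    """Cutoff (-3 dB) frequency of a first-order RC low-pass filter."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

def gain_db(f_hz, f_cutoff_hz):
    """Magnitude response of the filter at f_hz, in decibels."""
    h = 1.0 / math.sqrt(1.0 + (f_hz / f_cutoff_hz) ** 2)
    return 20.0 * math.log10(h)

fc = rc_lowpass_cutoff(r_ohms=1600, c_farads=100e-9)
print(f"cutoff ~ {fc:.0f} Hz")                        # ~995 Hz
print(f"10 kHz noise: {gain_db(10_000, fc):.1f} dB")  # ~-20 dB
```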
The entire data acquisition channel of an embedded system requires passive and active circuits — we will talk more about the latter when we get to the post on "why do we study electronics?".
Thus, this is an important field of physics that computer engineers need to be aware of. A working knowledge of this area allows us to design programs and systems that we would otherwise be incapable of doing.
The strange world of quantum mechanics is also very important to computer engineers. One of the hot trends in computers, other than AI, is quantum computing. We are getting close to being able to design commercial quantum computers at scale.
While quantum mechanics may seem a very dense subject — with its talk of wave functions, quantum entanglement, and other similarly weird topics — it is essential for a future-proof career in computer engineering.
Just like we need to know the basics of transistors in order to be able to construct the elementary logic gates that power today’s computers — we also need to know the basics of quantum mechanics to be able to construct the basic units of quantum computers.
Just as it is important to understand the limitations imposed on computers by their current binary logic in order to be able to program them efficiently, it is important to be able to understand how the basic units of quantum computing work in order to be able to program them.
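As a tiny taste of those basics, a single qubit can be simulated classically as a two-component complex vector, with gates as matrix multiplications. Here is a minimal, purely illustrative sketch that puts a qubit into an equal superposition with a Hadamard gate:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)        # the |0> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

state = H @ ket0              # (|0> + |1>) / sqrt(2)
probs = np.abs(state) ** 2    # Born rule: measurement probabilities
print(probs)                  # [0.5 0.5] -- a fifty-fifty outcome in superposition
```

Real quantum hardware gets its power precisely because such state vectors grow exponentially with the number of qubits and cannot be simulated classically at scale.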
A complete understanding of, say, Shor's algorithm is impossible without an understanding of qubits and how they work. You must understand at least the basics of quantum mechanics to be able to enter this new and promising field of quantum computing.
Relativity, together with quantum mechanics, is one of the more difficult fields of physics to grasp. I am sure many of you would immediately object to the notion that it could be of practical use to computer engineers — how, I can imagine you saying, can a theory with such weird concepts as time dilation and length contraction be of any practical use? Surely it is merely a theoretical framework used by physicists to find mathematical solutions to their abstract problems.
You would be wrong, my friend — there are many practical uses of the theory of relativity, from nuclear energy generation, to particle accelerator design, to (and this is the most important part as far as we are concerned) the design of GPS systems.
How on earth would the theory of relativity have anything to do with GPS systems? Well in order to explain this, we need to understand two concepts, one from the special theory of relativity and the other from the general theory of relativity.
The first states that a moving clock runs slower, and the second states that a clock under the influence of gravity runs slower. The corollary of the second statement is that a clock that is further away from a gravitational source runs faster with respect to a clock that is closer to the source of gravity.
The theories of general and special relativity allow us to calculate by exactly how much such clocks would be slower or faster. I won’t go into the details of the calculations here, but it should be noted that they depend on the speed and altitude of the object being considered.
GPS satellites, since they are in constant motion with respect to the Earth, moving at about 14,000 km/h in the case of the US system — there are Russian and European systems that move at different but similar speeds — have clocks that should run slower than clocks on Earth according to the theory of special relativity.
In addition, since the satellites are further away from the center of the Earth than the Earth's surface is, they have clocks that should run faster than those on Earth according to the theory of general relativity. Accounting for both of these factors (they work in opposite directions) gives you the exact correction that needs to be applied to the time of the GPS clocks if they are to be accurate.
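To first order, both effects are easy to estimate: special relativity slows the orbiting clock by roughly v^2/(2c^2), while general relativity speeds it up by roughly (GM/c^2)(1/R_E - 1/r). A back-of-the-envelope sketch, using approximate orbital values rather than official system constants:

```python
C = 299_792_458.0   # speed of light, m/s
GM = 3.986004e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6   # mean Earth radius, m
R_ORBIT = 2.6571e7  # GPS orbital radius (~20,200 km altitude), m
V = 3874.0          # orbital speed (~14,000 km/h), m/s
DAY = 86_400.0      # seconds per day

sr = -(V ** 2) / (2 * C ** 2)                     # moving clock runs slow
gr = (GM / C ** 2) * (1 / R_EARTH - 1 / R_ORBIT)  # higher clock runs fast
print(f"special relativity: {sr * DAY * 1e6:+.1f} us/day")         # about -7
print(f"general relativity: {gr * DAY * 1e6:+.1f} us/day")         # about +46
print(f"net correction:     {(sr + gr) * DAY * 1e6:+.1f} us/day")  # about +38
```

A net drift of about 38 microseconds per day sounds tiny, but light covers roughly 11 km in 38 microseconds, so the position error of an uncorrected system would grow by kilometres every day.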
Accuracy of the GPS clocks is essential for the functioning of the GPS system, and if these effects are not accounted for in the software of the satellites, the receivers, or both, the location determined by the system will be inaccurate.
So those writing the software for these systems need to take these factors into account — note that a change in the altitude or speed of the satellites would require that you do the calculations again and come up with a different correction factor. Without knowledge of special and general relativity you wouldn’t be able to perform such calculations.
If you would like a soft introduction to the concepts of relativity, I highly recommend the book mentioned in this blog post that I recently made. The author uses only high school algebra and a very engaging writing style to teach you the core concepts.
So there you have it ladies and gents, a knowledge of physics is essential for computer engineers. And while you can probably get along without this knowledge, you would be an inferior engineer surpassed by those who do. Remember the point made by Isaac Asimov articulated in my previous post — failure to maintain knowledge about fundamental science is the first step towards civilizational decline. So buckle up and study your physics! | physics |
https://spymonde.com/product-tag/magnetic-gps-trackers/ | 2020-10-22T01:16:38 | s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878662.15/warc/CC-MAIN-20201021235030-20201022025030-00534.warc.gz | 0.918659 | 174 | CC-MAIN-2020-45 | webtext-fineweb__CC-MAIN-2020-45__0__137981844 | en | Magnetic GPS Tracker for Audi, BMW, Ferrari, Maserati, Mercedes, Porsche & Lamborghini.
Adding more magnets to our new GPS trackers gives far greater strength when attaching the devices to metal objects. In fact, a 95 kg pulling force is ample evidence of the enormous strength the range demonstrates. Guaranteed to remain in place!
Magnetic GPS Trackers are the ideal solution to your vehicle tracking needs; with no cables or wiring to deal with you can simply remove the tracker and place it on another vehicle as it suits you.
Whether you need to keep track of cars, vans or even farming machinery, SpyMonde has a Magnetic GPS Tracker to suit you. Fleets of vehicles can be tracked to monitor their location during operating hours, distance travelled and the amount of time each vehicle spends on the road at any one time. | physics |
https://austinmoms.com/event/solar-eclipse-storytime/ | 2024-04-24T19:13:20 | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819847.83/warc/CC-MAIN-20240424174709-20240424204709-00638.warc.gz | 0.885662 | 138 | CC-MAIN-2024-18 | webtext-fineweb__CC-MAIN-2024-18__0__86236697 | en | Welcome to our online community events calendar! Here you will find all of the local Austin happenings. Want to add an event to our calendar? Just submit your event or email us at [email protected] and we will get the info up!
Join us in celebrating the second total solar eclipse in the U.S. in less than seven years at our Solar Eclipse-Themed Children’s Storytime on Monday, April 8, 2024, at 11:30 AM. We’ll read eclipse-themed children’s books, enjoy snacks, participate in a special letterpress-printed eclipse-themed activity, and give away FREE solar eclipse glasses while supplies last. | physics |
https://superimpulse.com/micro-arcade/atari-series-2/ | 2022-11-29T09:06:41 | s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710690.85/warc/CC-MAIN-20221129064123-20221129094123-00119.warc.gz | 0.911058 | 232 | CC-MAIN-2022-49 | webtext-fineweb__CC-MAIN-2022-49__0__233324593 | en | Micro Arcade Atari Series 2 including Breakout, Asteroids plus one surprise bonus game.
Play Atari Asteroids, the classic space-themed shooter game, in micro size! Travel into space and prevent your spaceship from being hit by flying saucers and asteroids. You must shoot and destroy the asteroids and saucers while not colliding with either or being hit by the saucers’ counter fire. The challenge increases as the number of asteroids increases! Your reflex and aiming skills will be put to the test!
Bring back nostalgic memories with Micro Arcade Atari Breakout and experience the same great gameplay and challenges as the original. Your mission is to destroy all the colored layers of bricks using a single ball. The ball moves around the screen, bouncing off the top and two sides of the screen. When a brick is hit, the ball bounces back and the brick is destroyed. You need to reflect it back again by using the wall and/or paddle. The higher you climb the more points you will earn. It takes skill and speed to keep this ball in play! | physics |
https://giantbubbles.co.nz/blogs/blog/why-do-my-giant-bubbles-pop | 2024-02-27T21:03:49 | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474686.54/warc/CC-MAIN-20240227184934-20240227214934-00269.warc.gz | 0.918242 | 724 | CC-MAIN-2024-10 | webtext-fineweb__CC-MAIN-2024-10__0__130100496 | en | Giant bubbles can be an enchanting and mesmerizing sight, but they often leave us puzzled when they suddenly pop. Understanding why giant bubbles pop is crucial for bubble enthusiasts, as it can help you create longer-lasting, more robust bubbles. This article will explore the various factors that contribute to giant bubble popping and suggest solutions to improve bubble longevity.
One of the most significant factors that affect bubble longevity is surface tension. Surface tension is the cohesive force that holds the liquid's surface together. When the surface tension is too high, it can cause bubbles to pop prematurely.
Solution: You can reduce surface tension by adding a small amount of detergent to your bubble solution. This decreases the forces working against the bubble's ability to maintain its shape.
External conditions play a significant role in bubble formation and stability. Wind, humidity, and temperature can all influence a bubble's lifetime.
Solution: To minimize the effects of environmental factors, choose a sheltered location on calm days, and consider bubble solutions specifically formulated for windy conditions. Adjusting your bubble recipe based on the weather can also help.
Impurities in the Water
Water quality can significantly impact your bubble solution. Minerals and impurities in tap water can weaken bubbles.
Solution: Use distilled or purified water to make your bubble solution, or consider using a water softener if you have hard water.
The concentration of soap or detergent in your bubble solution can affect its stability. An overly concentrated solution can result in thick, unstable bubbles.
Solution: Experiment with different soap-to-water ratios to find the right balance for your desired bubble size and stability.
Bubble Wand Design
The design of your bubble wand can also impact bubble stability. An inefficient wand may not distribute the solution evenly, leading to weak spots in the bubble.
Solution: Choose a high-quality, well-designed bubble wand. If you're using homemade wands, ensure they are symmetrical and designed for giant bubbles.
Handling and Technique
Your technique while creating and handling bubbles can be a crucial factor in their longevity. Sudden movements, uneven dipping, or overly aggressive blowing can lead to early bubble pops.
Solution: Practice and finesse your bubble-making technique. Slow, controlled movements and gentle blowing can help improve bubble stability.
Quality of Ingredients
The quality of the ingredients used in your bubble solution matters. Not all detergents and soaps are created equal.
Solution: Choose high-quality, gentle dishwashing detergents, as these are typically better for bubble-making.
Aging of the Solution
Bubble solutions tend to work better after they've had some time to age. Freshly made solutions may not perform as well.
Solution: Prepare your bubble solution in advance and let it sit for a few hours or overnight before using it for optimal results.
Use of Polymers
Some bubble solutions incorporate polymers or additives that improve bubble stability and longevity.
Solution: Consider using bubble solutions that include polymers, as they can enhance the durability of your bubbles.
Creating giant bubbles can be a delightful and awe-inspiring experience, but understanding why they pop is essential for maximizing your bubble fun. By addressing factors such as surface tension, environmental conditions, water quality, solution concentration, wand design, technique, ingredient quality, solution aging, and the use of polymers, you can significantly improve the longevity of your giant bubbles. With the right knowledge and adjustments, you can enjoy bubbles that float gracefully in the air, captivating both young and old. | physics |
https://hmmstudio.com/2020/02/29/stamina-and-no-strings-attached/ | 2023-04-01T02:08:51 | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949694.55/warc/CC-MAIN-20230401001704-20230401031704-00392.warc.gz | 0.936982 | 1,567 | CC-MAIN-2023-14 | webtext-fineweb__CC-MAIN-2023-14__0__102042977 | en | This blog post contains videos and is best viewed in Web View format at hmmstudio.com
You want this, you just don’t know it yet!
Everything is going wireless and I like it. All those messy wires and plugs are just so outdated and ugly.
iPhones are phasing out plug-in headphones and others are following suit, but the Bluetooth wireless connection is not always ideal. Wireless phone charging is catching on, but you often have to place the phone or gadget on the device at a specific spot, otherwise it won't charge.
EV’s – electric cars are heading in the same direction. The induction method is used and a company has been working since 2007, originating from the physics department of MIT on an industry standard for all EV’s. You know like those clean, efficient induction stove tops.
Contents of this blog post:
- Intro to Induction Method.
- Electric Vehicle – Wireless Induction Charging as an Industry standard
- Batteries are also changing. Graphene enhanced batteries.
- Graphene is finally scalable.
Intro to Induction Method.
Below is an excerpt from thespruceeats.com about induction in stove tops, to give you an existing example of the process and its specs, so you can better understand the induction implementation for charging Electric Vehicles.
What Is an Induction Cooktop?
Read more at thespruceeats.com
“With induction cooking, your pan is heated by a magnetic field instead of having its bottom sitting on a flame as with a gas cooktop or on an element as with an electric stove. With an induction stovetop, the entire bottom of the pan actually heats up, and there’s no need to fit your pan to the burner.”
” 60 % more efficient than with a gas stovetop.
Beats an electric stovetop for efficiency by about 40 %”
” Because the surface of an induction stove or cooktop doesn’t get hot, you can touch it with your fingers without getting burned.”
“Induction cookers respond immediately to temperature adjustments, so when you lower the heat, you’ll see the results right away.”
EV – Wireless Charging as an Industry standard
No remembering to plug in and no unplugging. It charges while parked in a standard carpark space.
This is particularly important for the coming robo-taxi era, and it will make recharging a task you won't even have to think about; plus, it will give you the capability to sell power back to the grid when the battery is full.
Below is a 20-minute video by Undecided with Matt Ferrell about the magnetic resonance charging method, which builds upon the induction method for electric vehicles, and about the company behind it, WiTricity.
Below is an excerpt from theverge.com about induction wireless charging stations being implemented in a capital city by 2023.
“Norway’s capital city of Oslo will be the world’s first metropolitan area to install wireless, induction-based charging stations for electric taxis, in a bid to make a zero-emission cab system by as early as 2023.
Here’s how Fortum describes the system working in its press announcement:
The project aims to install wireless charging using induction technology. Charging plates are installed in the ground where the taxi is parked and a receiver is installed in the taxi. This allows for charging up to 75 kilowatts”Read more at The Verge
You will even be able to charge while driving in the not-too-distant future. This was trialled in France and proven possible as a proof of concept, with a road inlaid with the tech. The car was charging while travelling at speed, so effectively you should never have range anxiety.
Below inset is an excerpt from Roadshow, about the trial performed in France in 2017.
“The dynamic electric vehicle charging (or DEVC)
DEVC technology is able to wirelessly send up to 20 kilowatts of inductive charging power to a compatible electric vehicle traveling across it at highway speeds.
In partnership with French research institution VEDECOM, Qualcomm has installed the tech in a 100 meter segment of test track that it calls FABRIC.
Qualcomm sees inductive roads and DEVC as a potential cure for range anxiety.
Topping up cars as they roll over the charging segments could mean exiting a highway with more power than you started with." Read more at CNET – Roadshow
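Some rough arithmetic puts these power levels in perspective (the figures below are my own illustrative assumptions, not numbers from Fortum or Qualcomm):

```python
battery_kwh = 60.0  # assumed EV battery capacity

# Stationary 75 kW induction pad, as in the Oslo taxi project:
print(f"full charge in ~{battery_kwh / 75.0:.1f} h at 75 kW")

# Dynamic 20 kW charging across a 100 m segment at 100 km/h:
speed_ms = 100 / 3.6                 # ~27.8 m/s
seconds_on_segment = 100 / speed_ms  # ~3.6 s
kwh_gained = 20.0 * seconds_on_segment / 3600
print(f"energy per segment: {kwh_gained * 1000:.0f} Wh "
      f"(~{kwh_gained / battery_kwh * 100:.3f}% of the battery)")
```

A single short segment adds very little charge on its own, which is why dynamic charging only pays off when long stretches of highway are electrified, and why it is pitched as a cure for range anxiety on highways rather than city streets.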
Of course with all tech, the first out of the blocks are always the priciest. But tech moves so rapidly that once started the cost decreases as advancements increase.
Batteries are also changing. Graphene enhanced batteries.
The first glimpse of a graphene-enhanced battery has just hit the market, from the company Real Graphene.
Graphene, at one atom thick, is over 100 times stronger than steel, and graphene-enhanced batteries promise to be safer than regular ones.
Exciting possibilities for mobile phones, electric cars and electricity storage for the grid.
Below is a great 9-minute video by ColdFusion (Feb 2020) about graphene-enhanced batteries, their composition and their applications.
I’m definitely no expert in batteries, but this sounds promising for the future of batteries.
Graphene is finally scalable.
Graphene is increasingly being used in mainstream products. The cost is coming down and its scalability appears to be coming into focus.
Clothing is one such product: it is lightweight, conductive and temperature-controlling, and graphene's extreme strength makes it perfect for sportswear and safety clothing. Graphene first came to my attention a long time ago, and its progression has been exciting to watch. Its scope is wide-ranging and deserves a blog post of its own in the near future.
Below inset is an excerpt from graphene-info.com with extra information, if your interest in graphene has been sparked.
Graphene Applications – Dec 09, 2019
“Graphene is considered to be the world’s thinnest, strongest and most conductive material – to both electricity and heat.
Graphene has the potential to revolutionize entire industries in the fields of electricity, conductivity, energy generation, batteries, sensors and more.
Graphene is the world's strongest material, and so can be used to enhance the strength of other materials, e.g. plastics and metals.
Graphene is the world's most conductive material to heat. This could be useful in microelectronics, e.g. thermal foils for mobile devices.
With the highest surface-area-to-volume ratio, it is a promising material for batteries and supercapacitors, as well as coatings, sensors, electronics and more.
Other promising applications: anti-corrosion coatings and paints, efficient and precise sensors, faster and efficient electronics, flexible displays, efficient solar panels, faster DNA sequencing, drug delivery, and more.”
The article goes on to discuss the latest graphene application news. Read more at graphene-info.com
This is not a paid promotion by any individual, business or organisation. | physics |
https://exclusivesblog.com/ancient-underground-temple-resonates-at-brain-affecting-frequency-mysterious | 2023-03-28T15:19:48 | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948867.32/warc/CC-MAIN-20230328135732-20230328165732-00427.warc.gz | 0.95031 | 828 | CC-MAIN-2023-14 | webtext-fineweb__CC-MAIN-2023-14__0__241293970 | en | The Ħal Saflieni Hypogeum in Malta, the only known prehistoric underground temple in the world, holds a unique mystery: it resonates at a frequency that has an unusual effect on the brain.
One of the best preserved examples of Maltese temple building culture is the underground Hypogeum structure, which dates back almost 5,000 years. However, the peculiar acoustic characteristics discovered in its underground spaces surprised scientists more than any other characteristic.
The three levels of the Hypogeum are divided into a number of elliptical chambers that can be reached via various corridors. The domed vaults in the main rooms and the intricate network of false compartments set them apart.
According to researchers, the Hypogeum was once a sanctuary, possibly housing an oracle. A special chamber inside the building, carved out of solid limestone and displaying amazing acoustic qualities, was thus named the Oracle Chamber.
A word spoken in the Oracle Chamber is heard throughout the entire building, magnified up to a hundred times. Some claim that this gives the impression of being inside a huge bell. At certain pitches, the listener feels the vibration in his bones and tissues as much as in his ears.
The structure’s remarkable acoustic characteristics have already been investigated. A 110 Hz resonance frequency was discovered in the Oracle Chamber by Maltese composer Ruben Zara and an Italian research team. This resonance frequency matches observations made in numerous other Neolithic chambers around the world, including Newgrange in Ireland.
According to Princeton University researcher Robert Jahn, the size of the room or the quality of the stone affects how high the echo rises.
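As a rough illustration of that size dependence, the fundamental standing-wave frequency between two hard parallel surfaces is f = c/(2L), so a resonance near 110 Hz corresponds to a reflecting dimension of roughly a metre and a half. This is a crude one-dimensional estimate for intuition, not a model of the actual chamber:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 C

def fundamental_hz(length_m):
    """Lowest axial room-mode frequency between two hard parallel walls."""
    return SPEED_OF_SOUND / (2 * length_m)

def length_for(freq_hz):
    """Wall spacing whose fundamental mode sits at the given frequency."""
    return SPEED_OF_SOUND / (2 * freq_hz)

print(f"{length_for(110):.2f} m")       # ~1.56 m spacing for a 110 Hz fundamental
print(f"{fundamental_hz(3.1):.0f} Hz")  # a 3.1 m span resonates near 55 Hz
```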
But the question still stands: Was this outcome deliberate? Was the Hypogeum really designed to produce it? Maybe our ancestors knew something that we are only now rediscovering.
Paolo Debertolis and Niccolò Bisconti, of the Universities of Trieste and Siena respectively, have put forth the theory that the chamber was designed to produce acoustics that affect people's psyches, possibly to enhance mystical experiences during rituals. This idea has a lot of support in academic circles.
In a 2008 study, Ian Cook of the University of California, Los Angeles, and his associates used EEG to monitor the brain activity of several volunteers while exposing them to different resonance frequencies.
They discovered that at 110/111 Hz, language centers’ brain activity is significantly reduced, allowing other processes to take center stage.
This kind of brain activity is connected to a hypnotic state of drowsiness, including vivid mental images and auditory hallucinations, claims Paul Devereaux, a professor of archeoacoustics at Cambridge. This kind of shift most likely doesn’t happen at other frequencies.
Accordingly, those who participated in ritual singing, such as in the Hypogeum room, may have been subjected to vibrations that had an impact on their brain activity. Scientists studying biological behavior claim that this effect activates a region linked to mood, empathy, and social behavior.
The chamber's response to different voices and to simple musical instruments that may have been present during the Hypogeum's period of use (4000-2500 BC) was tested using microphones installed in the Oracle Chamber and digital recorders.
The study’s findings revealed that two frequencies of structure resonance can be stimulated by the male voice (114 Hz and 68-70 Hz). The friction drum produces low resonance, while the horn and shell produce none at all.
A male voice singing “oh-oh-oh-oh” elicited a similar response as did the shamanic drum, which was made of natural leather and produced a strong resonant stimulation at a frequency of 114 Hz (while the female voice did not).
There are still questions despite research into the peculiar characteristics of the Oracle Chamber. But the mysteries of this ancient and enigmatic location are beginning to be revealed by scientists. | physics |
http://globalinnovationcommons.org/discover/special-report-section/vane-tails | 2017-04-26T02:06:33 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121121.5/warc/CC-MAIN-20170423031201-00234-ip-10-145-167-34.ec2.internal.warc.gz | 0.90169 | 238 | CC-MAIN-2017-17 | webtext-fineweb__CC-MAIN-2017-17__0__74903256 | en | Released under the GIC Framework
A control vane for a wind turbine is a mechanism that orients the turbine for the most efficient energy production. The vane enables the turbine to move with corresponding changes in the wind's direction.
It resembles a pipe- or shaft-like device with rudders or fins attached to it, and a gearbox that enables rotation about the pipe or shaft.
Without a control vane, a wind turbine and the generator it is connected to run the risk of being damaged during shifts in wind patterns, and the windmill would not be able to adequately harness wind energy. That is, as wind changes direction or velocity, turbines are placed under a tremendous amount of stress; without the ability to turn and adjust in relation to the corresponding changes in wind patterns, the turbines may break and won't collect the amount of energy needed to operate the windmill's reciprocating pumping system.
The control vane is the mechanism that enables this adjustment to be made. It functions as an axle or axis that can swivel, enabling the windmill to receive wind gusts from all directions (360°).
https://www.qianggroup.com/en/research/ | 2024-02-22T02:12:14 | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473598.4/warc/CC-MAIN-20240221234056-20240222024056-00838.warc.gz | 0.892393 | 1,049 | CC-MAIN-2024-10 | webtext-fineweb__CC-MAIN-2024-10__0__82446282 | en | Energy storage technology is a key supporting pillar of the energy revolution and of achieving carbon peak and carbon neutrality. The development of energy storage technology is an important means to promote the integration and consumption of clean energy such as wind and solar, which account for a large proportion of total renewable energy. It is also essential for improving the independent controllability of key equipment and systems, so as to realize energy security and the transformation of our country's energy structure.
With the goal of sustainable development, our research group is engaged in innovative research in the fields of energy materials and energy chemistry, especially involving lithium metal anodes, lithium–sulfur batteries, and electrocatalysis. The specific research work includes:
(1) Lithium metal batteries and energy chemistry: Lithium bond chemistry, ion–solvent complex, and chemistry about lithium metal and electrolytes
The energy chemistry of lithium is the core foundation of the energy conversion and storage processes of lithium batteries. By developing theoretical and experimental tools, we seek to understand and explore the energy chemistry of lithium at the electronic, atomic, molecular, and material scales, including its existence state, its interactions with other atoms and molecules, its transport mechanisms, the thermodynamic and kinetic features of its (electro)chemical reactions, and the temporal and spatial behavior of the lithium atom or ion in lithium batteries. We focus on the theory of the lithium bond, solvation chemistry in electrolytes, the formation process of the electrode/electrolyte interphase, and the charge transfer mechanism, aiming to provide guidance for the design of electrodes, electrolytes, and electrode/electrolyte interphases.
(2) Lithium metal anodes: Highly safe, high-energy density, and long-cycling composite lithium metal anodes and solid-state batteries
Lithium metal anode is the basic material of high-energy-density rechargeable batteries. Stable lithium metal anode is the key to promote practical applications of high-safety and high-energy-density batteries. We seek to disclose the key issues hindering the stability of lithium metal anodes under practical conditions. We develop the design of the structure of composite anodes, lithiophilic materials, novel liquid electrolytes, and artificial interphases to construct long-cycling and high-specific-capacity composite lithium anodes. We explore strategies to achieve high-energy-density lithium pouch cells based on liquid, solid-state, and all-solid-state electrolytes to promote the applications of lithium metal batteries.
(3) Lithium–sulfur batteries: Long-cycling, low-cost, and high-energy-density lithium–sulfur batteries
Lithium–sulfur batteries are widely considered a promising next-generation energy storage technology due to their ultrahigh theoretical energy density of 2600 Wh kg−1. Focusing on the key electrochemical processes, we conduct a series of studies on reaction mechanisms and regulation strategies, including constructing cathode skeletons to promote electron/ion conductivity, designing homogeneous/heterogeneous kinetic promoters to boost sulfur redox kinetics, regulating the solvation structure of polysulfides to improve reaction reversibility, and protecting lithium metal anodes by inhibiting side reactions. We expect to realize practical lithium–sulfur pouch cells with high energy density and long lifespan, and to provide new understanding of the reaction mechanisms, regulation strategies, and macroscopic applications of lithium–sulfur batteries.
(4) Energy and electrocatalysis: Zinc–air batteries, oxygen reduction and oxidation catalysts, three-phase electrocatalysis
Rechargeable zinc–air batteries have attracted intensive attention because of their high energy density, low cost, environmental friendliness, and safety, but their practical performance is severely limited by the sluggish kinetics of the cathodic oxygen reduction and evolution reactions at the three-phase interfaces. To address this issue, we propose an anionic regulation strategy and develop a series of precise synthesis methods to promote the intrinsic activity of noble-metal-free electrocatalysts. We also design strongly coupled interfaces to promote interfacial electron transfer, and hierarchical cathode structures to construct high-performance noble-metal-free bifunctional oxygen electrocatalysts, which enable zinc–air batteries to cycle stably at high rate and high capacity.
(5) Machine-learning assisted functional material design
The rapid development of computer technology has greatly promoted a new paradigm of energy material design based on big data, which is expected to greatly reduce the time and expense of material design. On one hand, we aim to build a multi-scale simulation framework based on density functional theory, molecular dynamics simulations, and phase-field theory to establish a large database of energy material systems at the molecular level and to quantitatively understand the structure–function relationships of functional materials, so as to realize the rational design and high-throughput screening of energy materials. On the other hand, building on our understanding of the micro-mechanisms of lithium-battery energy systems, the combination of theory and experiment is expected to accurately predict battery cycle life and battery performance at high and low temperatures.
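As a purely illustrative sketch of that paradigm (the descriptors, trend, and data below are invented for the example and are not from the group's work), a surrogate model can be fitted to simulated or experimental records and then used to screen new candidate designs cheaply:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical descriptors: [salt concentration (M), areal capacity, C-rate]
X = rng.uniform([0.5, 1.0, 0.2], [4.0, 6.0, 2.0], size=(200, 3))
# Hypothetical target: cycle life following a made-up trend plus noise
y = 2000 + 50 * X[:, 0] - 150 * X[:, 1] - 300 * X[:, 2] + rng.normal(0, 40, 200)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(model.predict([[2.0, 3.0, 0.5]]))  # screen one candidate cell design
```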
https://wordpress.fau.edu/lifelongexchange/2019/01/15/a-walk-in-space/ | 2019-08-21T01:34:47 | s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315695.36/warc/CC-MAIN-20190821001802-20190821023802-00493.warc.gz | 0.958745 | 705 | CC-MAIN-2019-35 | webtext-fineweb__CC-MAIN-2019-35__0__200148407 | en | On Tuesday, January 22, 2019 at 11:15 a.m., astronaut John Grunsfeld, Ph.D., will present a one-time lecture, “A Hubble Story.” In May 2009, a team of astronauts flew to the Hubble Space Telescope on space shuttle Atlantis. On their 13-day mission and over the course of five spacewalks, they completed an extreme makeover of the orbiting observatory. Scientific results from the new and repaired instruments hint at a bright scientific future for Hubble and will be presented in the talk, as well as a narrative of the adventures on orbit. Pictures and video will be utilized during the lecture.
As a child, John Grunsfeld dreamed of becoming an astronaut. He studied science and his dream came true. A veteran of five space flights, STS-67 (1995), STS-81 (1997), STS-103 (1999), STS-109 (2002) and STS-125 (2009), John has logged more than 58 days in space, including 58 hours and 30 minutes of extravehicular activities (EVA) in eight spacewalks. He visited the Hubble Space Telescope three times as an astronaut to service and upgrade the observatory.
He earned his bachelor’s degree in physics from the Massachusetts Institute of Technology in 1980 and then returned to his native Chicago to earn a master’s degree and a doctorate in physics from the University of Chicago. After he earned his doctorate, he joined the faculty of the California Institute of Technology as a senior research fellow in physics, mathematics and astronomy.
In 1992, he joined NASA’s astronaut corps, and qualified for flight selection as a mission specialist. He was assigned as the lead for the development of portable computers for use in space. He first flew to space aboard Endeavour in March 1995. His second flight was aboard Atlantis in January 1997. This mission docked with the Russian space station Mir, exchanged U.S. astronauts living aboard the outpost and performed scientific research. John then flew on three more shuttle missions — Discovery in December 1999, Columbia in March 2002 and Atlantis in May 2009. He was the lead spacewalker in charge of Hubble activities. During this mission, he successfully serviced and upgraded the Hubble Space Telescope.
After the 1999 mission, he served as NASA’s chief of extravehicular activity. John also was an instructor in the Extravehicular Activity Branch and Robotics Branch of the astronaut program and worked on the exploration concepts and technologies for use beyond low Earth orbit in the Advanced Programs Branch. In 2004 and 2005, John was the commander and science officer on the backup crew for Expedition 13 to the International Space Station (ISS). He also served as the NASA Chief Scientist detailed to NASA headquarters from 2003 to 2004. In this position, he helped develop President George W. Bush’s “Vision for Space Exploration.”
He retired from NASA in December 2009 and served as deputy director for the Space Telescope Science Institute, in Baltimore, managing the science program for the Hubble Space Telescope and its partner in the forthcoming James Webb Space Telescope. He returned to NASA in January 2012 as the associate administrator of the Science Mission Directorate at NASA HQ in Washington. One facet of John’s duties as associate administrator is representing NASA’s current and future space science programs and projects to Congress, the media and the public.
To register for the one-time lecture, click here. | physics |
https://modm.io/reference/module/modm-driver-mmc5603/ | 2023-12-05T19:13:48 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100555.27/warc/CC-MAIN-20231205172745-20231205202745-00420.warc.gz | 0.827191 | 214 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__35813447 | en | MMC5603 3-Axis Digital Magnetometer
The MMC5603NJ is a monolithic, complete 3-axis AMR magnetic sensor with on-chip signal processing and an integrated I2C bus.
It can measure magnetic fields within the full-scale range of ±30 gauss (G), with up to 0.0625 mG per LSB resolution in 20-bit operation mode and a 2 mG total RMS noise level, enabling heading accuracy of ±1° in electronic compass applications.
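As an illustration of how such readings become a compass heading (this is not modm driver code; the scale factor follows the resolution figure above, while the axis orientation and calibration are deliberately simplified):

```python
import math

LSB_TO_MG = 0.0625  # mG per LSB in 20-bit operation mode

def heading_degrees(raw_x, raw_y):
    """Compass heading from signed X/Y counts, sensor assumed level.

    A real application must also remove hard- and soft-iron offsets
    and compensate for tilt before the result is trustworthy.
    """
    bx = raw_x * LSB_TO_MG  # horizontal field components in mG
    by = raw_y * LSB_TO_MG
    return math.degrees(math.atan2(by, bx)) % 360.0

print(f"{heading_degrees(3200, 3200):.0f} deg")  # equal X and Y -> 45 deg
```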
An integrated SET/RESET function eliminates the error due to null-field output change with temperature. In addition, it clears the sensor of any residual magnetic polarization resulting from exposure to strong external magnets. The SET/RESET function can be performed for each measurement, or periodically, as the specific application requires.
The MMC5603NJ comes in a wafer-level package with an ultra-small size of 0.8 x 0.8 x 0.4 mm and an operating temperature range from -40°C to +85°C.
https://facilitaauto.com/2020/11/01/for-just-about-any-one-radioactive-nucleus-it-is/ | 2021-08-04T12:04:18 | s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154805.72/warc/CC-MAIN-20210804111738-20210804141738-00222.warc.gz | 0.962274 | 904 | CC-MAIN-2021-31 | webtext-fineweb__CC-MAIN-2021-31__0__37355070 | en | For any one radioactive nucleus, it is not possible to predict when the decay process will happen.
Such decay is random in nature, like the throw of dice: as gamblers find all too often, it is impossible to say just when the dice will come up 7 or 11. But for a very large number of tosses, we can calculate the odds that 7 or 11 will come up. Likewise, if we have a very large number of radioactive atoms of one type (say, uranium), there is a specific time period, called its half-life, during which the chances are fifty-fifty that decay will occur for any given nucleus.
If you had 1 gram of pure radioactive nuclei with a half-life of 100 years, then after 100 years you would have 1/2 gram; after 200 years, 1/4 gram; after 300 years, only 1/8 gram; and so forth. The material does not disappear, however. Instead, the radioactive atoms are replaced with their decay products. Sometimes the radioactive atoms are called parents and the decay products are called daughter elements.
In this way, radioactive elements whose half-lives we have determined can provide accurate nuclear clocks. By comparing how much of a radioactive parent element is left in a rock with how much of its daughter products have accumulated, we can learn how long the decay process has been going on and hence how long ago the rock formed. (A table in the original text summarizes the decay reactions used most often to date lunar and terrestrial rocks.)
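The arithmetic behind such a nuclear clock is simple exponential decay: the fraction of parent atoms remaining after a time t is (1/2)^(t/T), where T is the half-life, so the age follows directly from the measured parent-to-daughter ratio. A short sketch:

```python
import math

def remaining(n0_grams, t_years, half_life_years):
    """Parent material left after time t."""
    return n0_grams * 0.5 ** (t_years / half_life_years)

def age_years(parent, daughter, half_life_years):
    """Age of a rock from its parent/daughter ratio (closed system assumed)."""
    return half_life_years * math.log2(1 + daughter / parent)

print(remaining(1.0, 300, 100))  # 0.125 g left after three half-lives
print(age_years(1.0, 3.0, 100))  # parent:daughter of 1:3 -> 200 years
```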
PBS offers an evolution series excerpt that explains how we use radioactive elements to date Earth.
This Science Channel video clip features Bill Nye the Science Guy showing how scientists have used radioactive dating to determine the age of Earth.
When astronauts first flew to the Moon, one of their most important tasks was to bring back lunar rocks for radioactive age-dating. Until then, astronomers and geologists had no reliable way to measure the age of the lunar surface. Counting craters had let us determine relative ages (for instance, the heavily cratered lunar highlands were older than the dark lava plains), but scientists could not measure the actual age in years. Some thought the ages were as young as those of Earth's surface, which has been resurfaced by many geological events. For the Moon's surface to be so young would imply active geology on our satellite. Only in 1969, when the first Apollo samples were dated, did we learn that the Moon is an ancient, geologically dead world. Using such dating techniques, we have been able to determine the ages of both Earth and the Moon: each was formed about 4.5 billion years ago (though, as we will see, Earth probably formed earlier).
We should also note that the decay of radioactive nuclei generally releases energy in the form of heat. Although the energy from a single nucleus is not very large (in human terms), the enormous number of radioactive nuclei in a planet or moon (especially early in its existence) can be a significant source of internal energy for that world.
Geologists estimate that about half of Earth's current internal heat comes from the decay of radioactive isotopes in its interior.
Key Concepts and Summary
The ages of the surfaces of objects in the solar system can be estimated by counting craters: on a given world, a more heavily cratered region will generally be older than one that is less cratered. We can also use samples of rocks containing radioactive elements to find the time since the layer in which the rock formed last solidified. The half-life of a radioactive element is the time it takes for half the sample to decay; we determine how many half-lives have passed from how much of a sample remains the radioactive element and how much has become the decay product. In this way, we have estimated the ages of the Moon and Earth to be roughly 4.5 billion years.
https://modaspot.shop/products/ceramabond-569-alumina-adhesive-pint | 2021-06-14T11:36:18 | s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487612154.24/warc/CC-MAIN-20210614105241-20210614135241-00232.warc.gz | 0.837717 | 260 | CC-MAIN-2021-25 | webtext-fineweb__CC-MAIN-2021-25__0__16318764 | en | Ceramabond 569 Alumina Adhesive, Pint
Unique inorganic formulations for bonding and sealing materials
High thermal and electrical resistance
Water-based and do not outgas after curing
Contains no volatile organic compounds (VOCs)
Shelf Life: 6 months
Ceramabond 569 is a single part, ceramic-filled paste that bonds tenaciously to ceramic, metal and quartz substrates. Primary applications for Ceramabond 569 include the assembly of platinum resistance heaters, igniters, thermocouples, probes and sensors such as oxygen analyzers, gas chromatographs, mass spectrometers, and high vacuum components.
This compound is rated for operating temperatures to 3000F (1650C) and exhibits a dielectric strength of 138 volts per mil, torque strength of 38 ft-lbs, and coefficient of thermal expansion of 4.2 in/in/F.
Ceramabond 569 is applied easily using a brush, syringe or automatic dispensing equipment. Once applied, curing is accomplished by heating at 200F for 2 hours or drying at room temperature for 24 hours. Cured product exhibits minimal shrinkage and offers exceptional mechanical strength, and moisture and thermal shock resistance. | physics |
http://www.davidzwirner.com/artists/richard-serra | 2017-05-24T23:45:50 | s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607871.54/warc/CC-MAIN-20170524230905-20170525010905-00492.warc.gz | 0.96932 | 140 | CC-MAIN-2017-22 | webtext-fineweb__CC-MAIN-2017-22__0__66536042 | en | May 20 - October 15
Richard Serra: Films and Videotapes presents 16 works made between 1968 and 1979. The exhibition is the first comprehensive survey of the artist's films, and screens the works in their original 16mm format.
As Ken Johnson wrote in The New York Times, Serra's early films "insist on material conditions." Created in the same period as his initial experiments with materials such as vulcanized rubber and lead, Serra's films anticipate his ongoing focus on the spatial and temporal properties of sculpture. Included in this exhibition is the artist's first film, Hand Catching Lead (1968), which reflects his distinctive interest in the interplay of gravity and material. | physics |
http://courses.ece.ubc.ca/373/ | 2013-06-19T03:36:05 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707440693/warc/CC-MAIN-20130516123040-00019-ip-10-60-113-184.ec2.internal.warc.gz | 0.869519 | 1,583 | CC-MAIN-2013-20 | webtext-fineweb__CC-MAIN-2013-20__0__182442756 | en | My Webpage: www.ece.ubc.ca/~jurij
Course Webpage: http://courses.ece.ubc.ca/373/
Lectures: Tuesdays, Thursdays: 11:00-12:30 AM,
West Mall Swing Space (SWNG), Room 121.
Class Notes: The students should print their class notes and bring them to each class.
Office Hours: Fridays - 2:00-4:00 PM
For other times, send email to arrange a meeting (always include “EECE373” in the subject line)
5 Lab Experiments (will be determined during the course)
Laboratory experiments are an integral part of this course. In order to pass this course, all students are required to attend (attendance will be recorded), comply with the Laboratory Rules, perform adequately, and write reports for all experiments.
Lab Reports are due one week after the completion of each experiment. Late submission will result in reduced marks (10% per day for up to 3 days, and a zero mark thereafter).
We have three Lab Sections: “^” (Alternate week, starts Sep. 17)
L1A – Tue. 8:00 – 11:00 AM
L1B – Tue. 13:00 – 16:00 PM
L1D – Thu. 16:00 – 19:00 PM
Lab-0: Lab Safety Rules Sep. 18, 20
Lab-1: AC/DC Circuits and Measurements Oct. 2, 4
Lab-2: AC Transformers Oct. 16, 18
Lab-3: Brushed DC Machines Oct. 30, Nov. 1
Lab-4: Induction Motors & VFDs Nov. 13, 15
Lab-5: Synchronous Machines Nov. 27, 29
The reports for the last Lab-5 are due on Dec. 4 and Dec. 6, respectively. You can also submit your reports earlier directly to one of the TAs. You must submit the reports for all 5 labs in order to be permitted to write the Final Exam.
Exams: There will be one Midterm Exam (in class, on Oct. 25)
Missed exams cannot be made up. In the case of illness (with advance notice and a doctor's note), the weight of the Midterm Exam may be added to the Final Exam.
Quizzes: There will be several (5 to 6) short (10-20 min.) quizzes given at the beginning of some lectures. The quizzes will be collected and marked. Missed quizzes cannot be made up and will result in a zero mark. In case of illness or other legitimate reason (with advance notice and a doctor's note), the weight of a missed quiz may be added to the Final Exam.
Next Quiz: Thursday Nov. 29.
Assignments: There will be about 6 assignments throughout the course. Assignments will normally be collected at the specified due date. The assignments will be collected and counted, but will not be marked. The solutions will be posted on the web after the due date. The assignments may be reviewed at the end of the term only if the student's grade is on the pass/fail borderline.
Submission Policy: Assignments and Lab Reports will normally be collected at the specified due date at 4:00 PM from the drop-in box labeled EECE373 Lab Reports on the first floor of the MCLD building, close to room 112B. All late Reports and Assignments must be given directly to the TA, who will record the time and date of submission. Late submission of Lab Reports will result in reduced marks (10% per day for up to 3 days, and a zero mark thereafter). Late submission of Assignments will not be accepted after the solutions have been posted on the web.
Grades: The final grade will be based on the following:
Assignments – 5%
Quizzes – 20%
Lab Reports – 15%
Midterm Exam – 20%
Final Exam – 40%
Each student must pass the Final Exam (get 50% or more) in order to pass this course. If the mark for the Final Exam is less than 50%, then this failing mark becomes the mark for the whole course.
Cheating Policy: Students are encouraged to discuss among themselves the problems in each assignment and lab experiment. However, the turned-in assignments and lab reports must show individual work and reflect each student's own understanding of the material. Reports suspected of cheating will not be graded (zero mark). Cheating on exams will result in a zero mark and may qualify for withdrawal from the course and/or suspension from the University. Please see the UBC Regulation on Cheating and Plagiarism. All instances of cheating will be reported according to this policy.
Teaching Assistants (TAs): We will have three TAs for this course:
Hamid Atighechi, Email: hamida at ece.ubc.ca
Mehrdad Chapariha, Email: mehrdadc at ece.ubc.ca
Milad Fekri, Email: miladf at ece.ubc.ca
Their offices are in the Power Lab in Kaiser 3085.
For questions regarding assignments/quizzes and marks, please contact TAs via their email and set up an appointment (always include “EECE373” in the subject line).
1. Review of AC circuits and phasors, real and reactive power. [Appendix B]. Basic principles of electromagnetism, magnetic circuits, and properties of magnetic materials [Chap.1].
2. Coupled magnetic circuits; ideal transformer dynamics and steady-state, equivalent circuits, three-phase transformers and connections, steady-state phasor relations. Per-unit system. [Chap. 2].
3. Principle of electromechanical energy conversion, basic actuators, force and torque production [Chap. 3].
4. DC machines: basic principles, voltage and torque equations, basic types of DC machines, steady-state equations and characteristics, performance of DC machines, elementary control, basic DC drives [Chap. 4, Chap. 10].
5. Rotating magnetic field, windings, mmf, two-phase device, three-phase device, p-pole devices, etc. Induction machines: basic construction and principle of operation, types, rotor slip, steady-state equivalent circuit analysis, derived torque and power, determination of parameters, principle of speed control [Chap. 5].
6. Synchronous machines: principle of operation, rotor types, stator & rotor inductances, basic equivalent circuit, steady-state power-angle characteristics, steady-state operating characteristics, synchronous motors, effect of salient pole on power-angle characteristics [Chap. 6].
7. Permanent magnet synchronous machines, brushless DC motors, reluctance motors, stepper motors, single-phase motors, basics of operation [Chap. 7 & 8].
Recommended Textbook: (the assignments and lectures will be based on this book)
P.C. Sen, PRINCIPLES OF ELECTRIC MACHINES AND POWER ELECTRONICS, Second Edition (1997),
A.E. Fitzgerald, C. Kingsley, S.D. Umans, “Electric Machinery, 6th Edition,” McGraw-Hill 2002,
S.J. Chapman, “Electric Machinery Fundamentals, 3rd Edition,” McGraw-Hill 1999, | physics |
https://www.lynda.com/After-Effects-tutorials/Shock-wave-displacement/123545/132970-4.html | 2021-03-03T16:01:59 | s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178366969.45/warc/CC-MAIN-20210303134756-20210303164756-00079.warc.gz | 0.949599 | 828 | CC-MAIN-2021-10 | webtext-fineweb__CC-MAIN-2021-10__0__96470761 | en | It's about time to composite all the elements that we've prepared and to create the final composition. I'm inside videotreatment_04, which you can find in the third folder of the exercise file. And in the center of this movie, we will create the simulation of a shock wave displacement effect using the Caustic Effect. But for now lets start to assemble the final video composition.
I will start by opening my Video Start Composition, which already have the water ball. And here between those two layers, we will start adding those final comps that we've created earlier. So, I'm going to open up my Final Comps folder and let's start by placing the explosion comp here. And actually, we need to time it to where we actually want it to start.
So, I'm going to drag my play head and try to find where exactly the referee is releasing the ball. I think it is over here at 21 seconds. let's just drag the explosion to begin here. You can either drag it or use the open bracket key and this will slide it to where the cursor is. Since, I want to be as effective as possible, I'm also going to trim my water ball layer using another keyboard shortcut, Alt close then Open Right Bracket.
I'm also going to set the resolution to auto, in order to speed things up while we are working. And now in order to create the, the displacement effect itself, we need to take the Wave Final Composition and just drag it underneath. Now, this composition doesn't need to be visible, it only needs to act as a source for the displacement effect. So, I'm going to turn off the eye for it, and then from this solate folder here I'm going to drag one of the adjustment layers and release it on top of this one.
Let's switch it to be an adjustment layer and let's apply the caustics effect. The caustics effect can be found under the simulation category. Here it is, so double-click in order to apply it. Now, just to let you know, the caustics effect actually renders water. This is what it does. But I really like to use it as a displacement effect, because I think that it is generating a very nice result.
However, we don't need to use the pattern or the sky here, so basically we can collapse both of them down and only concentrate here on the water section. The water surface for the displacement will be of course the wave final layer that we've just created. We also don't want these blue cast to be visible. So, I'm going to change the surface opacity all the way down to zero, and then let's just play with the wave height.
Of course, let's just place our cursor here, so we can see actually what we're doing. And I think, that I want to lower it quite dramatically, so it won't have this severe effect, so maybe 0.04. Now, I'm also going to raise the smoothing, so we will get a better result. I think that ten is a good number. Now, I also want to dismiss the inner lighting of the effect. The easiest way to do it is to open up the Lighting section and just change the Light Type from Distant Source to Point Source.
And that will have a very gentle, but visible effect. I'm going to go to the beginning of the composition and press zero to create a grand preview, so you can see the intermediate result. Now, although this is a very subtle effect, I do think that it adds a lot of power and definition and it does look very impressive, giving the impression that there is some kind of a shock wave to the video making this very lively and powerful.
- Designing a pack shot
- Creating a 3D basketball
- Matching color and texture
- Creating digital explosions
- Using the Proxy rendering method
- Adding focus and vignette
- Compositing and animation
- Creating 3D titles | physics |
http://topsoftballdrills.com/long-toss-for-softball-throwing-and-pitching-drill/ | 2018-11-14T00:10:14 | s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741569.29/warc/CC-MAIN-20181114000002-20181114022002-00336.warc.gz | 0.958805 | 326 | CC-MAIN-2018-47 | webtext-fineweb__CC-MAIN-2018-47__0__204708192 | en | Benefits of the Long Toss for Softball Throwing and Pitching
If you are a softball player and you’re not long tossing, you may be stunting your softball throwing and pitching development. When done properly, the long toss does 3 things:
Legs and Lower Body Strengthening
It forces you to use your legs and torso to throw the ball
Not all players are pitchers, and throwing velocity is not situation-specific. The long toss teaches players to use their legs and core more effectively to put the ball as far away from them as they can. Maximum distance throwing forces you to use your legs and core more than you would on short throws.
Speed and Power
It allows for maximum speed of movement
When trying to build power in any sports movement, you need to perform those movements at maximum speed to make improvements. Softball throwing is no different. By consciously making an effort to throw the ball as far as possible, you’re forcing yourself to work at or near 100% of your body’s maximum level of force production.
Feedback for Mechanical Adjustments
It provides instant feedback on mechanical adjustments
The long toss is probably the best method for evaluating the effect of mechanical adjustments and cleaning up your throwing mechanics. Throwing the ball as far as you can is going to require you to find a throwing motion that is both powerful and efficient. The long toss teaches players to get on top of the ball better and use the entire body in the throwing motion. The mechanics that allow for maximum distance are the same mechanics that allow for maximum velocity. | physics |
https://repo.mel.cgiar.org/handle/20.500.11766/12796 | 2023-05-30T09:01:56 | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224645417.33/warc/CC-MAIN-20230530063958-20230530093958-00144.warc.gz | 0.873644 | 424 | CC-MAIN-2023-23 | webtext-fineweb__CC-MAIN-2023-23__0__67685995 | en | Estimation of transpiration by single trees: comparison of sap flow measurements with a combination equation
Impact factor: 4.651 (Year: 1998)
MetadataShow full item record
Timeless limited access
Heping Zhang, Lester P. Simmonds, James Morison, Donald Payne. (14/6/1998). Estimation of transpiration by single trees: comparison of sap flow measurements with a combination equation. Agricultural and Forest Meteorology, 87 (2-3), pp. 155-169.
Sap flow estimates for whole trees (scaled from measurements on selected branches using the heat balance method) were compared with estimates of transpiration based on porometry in a study of poplar trees in an agroforestry system in the south of the UK. Sap flow showed good agreement with the transpiration rate estimated using the Penman-Monteith equation with measured stomatal conductance (R-2 = 0.886) on six selected days during the season. The dominant environmental variable influencing transpiration was the vapour pressure deficit, as the aerodynamic term in the Penman-Monteith equation accounted for more than 70% of daily total transpiration, with the rest due to the radiation component. Stomatal conductance, estimated by inverting the Penman-Monteith equation from continuous measurements of sap flow over 55 days, was used to determine the parameters for a multiplicative stomatal conductance model. For an independent data set there was better agreement between measured sap flow and transpiration predicted from the stomatal conductance (R-2 = 0.90) than for calculated and predicted stomatal conductance (R-2 = 0.51). (C) 1997 Elsevier Science B.V.
- Agricultural Research Knowledge | physics |
https://www.ippo-engineering.eu/en/marine-energy-how-it-works/ | 2024-02-27T23:52:50 | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474688.78/warc/CC-MAIN-20240227220707-20240228010707-00642.warc.gz | 0.946369 | 1,882 | CC-MAIN-2024-10 | webtext-fineweb__CC-MAIN-2024-10__0__133575159 | en | It is one of the most available renewables on the planet, yet marine energy is rarely spoken when clean sources are mentioned. The reason is simple: the technologies are still being developed and not commercialised. Investment is slowed down by a number of factors, not least the absence of major public intervention in creating widespread support mechanisms.
The growth of solar and wind power in recent years has shown that the commitment of institutions in defining policy and funding frameworks can generate adequate incentives for their development.
But before exploring the sector’s growth potential in Europe and Italy, let’s see what exactly is meant by marine energy and how its latest innovations work.
MARINE ENERGY TECHNOLOGIES
Research into most renewables has long defined the best way to harness different types of energy source. Solar energy is captured with panels. Wind is converted into electricity through wind turbines. For marine and ocean energy, this is not yet the case. Scientists around the world are still trying to find out which technology is best suited to converting energy from the seas and oceans.
The answer may change depending on the marine environment where the plant is built. In Canada’s Bay of Fundy, where 160 billion tonnes of water are moved twice a day by the tide, a tidal power plant should be the answer.
The high power of the waves hitting Portugal’s ocean coast has encouraged local governments to fund wave research since the late 1970s, while in Italy, the University of Rome is developing more suitable systems to capture the energy produced by the small waves of the Mediterranean Sea. In the Netherlands, where more than 3,300 cubic metres of freshwater flow into the sea every second, it is salinity gradient energy that has the greatest potential.
There are many sources of marine energy and the most promising are listed below. Energy from offshore wind farms is also occasionally referred to as marine, but we have decided to exclude it from this list.
Wave energy is the most promising in terms of capacity (the earth’s surface is over 70% water) and the second most promising in terms of technology development. Installations can be placed both along the coast and in the open sea. Estimates of the potential for wave energy indicate a minimum of 4000 TWh/y, when Europe consumes about 3000 TWh/y.
TIDAL STREAM ENERGY
Tidal currents are caused by the gravitational forces of the sun and moon, and therefore the energy harvested this way is not affected by weather conditions. This is why the technologies for generating energy from tidal currents are incredibly reliable and predictable. These operate through turbines very similar to wind turbines. Since water is 832 times thicker than air, underwater turbines capture more energy than their wind-powered counterparts and have smaller blades.
The analogy with wind turbines has made it possible to make very rapid progress in terms of technological development. In Europe, there are already several projects that are laying the foundations for the start of commercial production.
TEMPERATURE DIFFERENCE (OTEC)
OTEC (Ocean Thermal Energy Conversion) plants use temperature differences to produce a constant flow of energy. These technologies are particularly suitable for the decarbonisation of tropical islands that currently depend on expensive (and polluting) fossil fuels. Through desalination, OTEC can also produce drinking water, the scarcity of which on remote islands causes severe hardship to those populations.
SALT GRADIENT ENERGY GENERATION
Near deltas and fjords, a type of power generation can be implemented that exploits the difference in salinity gradient between fresh and sea water. It can be used 24 hours a day and is therefore an ideal complementary source of energy to other renewables.
The most advanced technology of its kind is Reverse ElectroDialysis (RED), in which a saline solution and fresh water are passed through exchange membranes that alternate anions and cations to generate electricity. The potential is very high, because, in addition to being widely predictable, the energy produced by one cubic metre of fresh water is comparable to that generated by the same cubic metre falling from a height of 260 metres.
THE POTENTIAL OF MARINE ENERGY IN EUROPE
The European Union sees a great opportunity in marine energy: at the end of 2020 the Commission published the European strategy for development of offshore renewable energy sources aimed at increasing the capacity of offshore installations fivefold by 2030 and 25fold by 2050 in order to meet the Green Deal targets.
According to estimates published in the “EU Blue Economy 2020 Report”, by 2050 European production will reach a capacity of 100 GW, equivalent to 10% of the Union’s consumption or 75 million households. With almost 45% of Europeans living in coastal territories, ocean energy can therefore be readily supplied to millions of people.
For Europe, the highest potential for developing marine energy is off the Atlantic coastline, but there are also many opportunities for use in various parts of the Mediterranean Sea and the Baltic Sea. The ultra-peripheral regions of the EU, as there is a large temperature difference between deep and surface waters in the tropics, are an excellent site for OTEC technology.
Exploiting domestic resources could help to:
This last point is especially true for islands, where electricity generated through diesel is expensive and marine energy can contribute to energy self-sufficiency.
European investment in blue energy can also generate an economic boost for coastal regions, with the creation of around 400,000 jobs by 2050. The marine energy production sector can become a very important part of the blue economy, triggering growth on the coast and inland.
The need to develop new technologies for the production chain may involve innovative SMEs and large companies for the construction of ships and the design of new mechanical, maritime and electrical engineering solutions. The development of the sector also involves the inclusion of expertise on environmental impact, safety and health management.
Unlike other renewable sources, marine energy is highly predictable and can provide a stable output of electricity. Another advantage is that most plants are underwater and therefore may face less public resistance due to their low visual impact.
At present, the plants installed in Europe are mainly non-commercial and are only intended to measure their reliability.
MARINE ENERGY IN ITALY
Italy, with its 8,000 km of coastline, is a particularly suitable country for the use of marine energy converters, provided they are developed in relation to the specific characteristics of the territory. Neither the height of the waves nor the depth of the sea are comparable to those of the ocean, but the particular conditions of the Italian coastline offer interesting opportunities for the development of such technologies.
There are currently only two wave energy plants in Italy, both in Tuscany. The first one has been active since 2013 in Punta Righini, near Castiglioncello, while the second was built in 2015 in Marina di Pisa by the start-up 40South Energy. The latter is very small: it works 24 hours a day to cover the needs of 40 households.
Although the installation of the systems is still rather limited, for years research bodies such as ENEA, CNR and RSE Spa and several universities have been working on prototypes suited to the opportunities of the Italian sea. The most suitable sites are the Sardinian sea and the Strait of Messina. It is estimated that the strong currents that flow through the strait could power a city of 200,000 inhabitants such as Messina itself.
According to the first OceanSET report, a three-year European project for the implementation of ocean energy in Europe, Italy proudly ranks first among Mediterranean countries and second in the EU for public funding of marine energy development.
MARINE ENERGY: ADVANTAGES AND DISADVANTAGES
By 2040, estimates indicate that electricity generated using fossil fuels will still account for more than 35% of the total. This makes it increasingly important to invest in renewable energy sources such as marine energy, a very young technology that still needs to prove its worth.
Although the oceans and seas play a key role in global socio-economic activities, their share in energy production is still too small. Humans learned centuries ago how to harness the fluid-dynamic energy harvested from tidal mills. Since then, however, little progress has been made.
To summarise, this is caused by three main factors: the high cost of constructing and maintaining the plants; doubts about the best way to harness marine energy; and an inadequately calculated environmental impact. So, although the benefits of marine energy are clear, some issues still need to be explored.
For this reason, it is essential to develop not only the conversion devices, but also management softwares capable of adapting their operation to weather conditions and measuring their performance. IPPO Engineering is the software house specialised in environmental technologies which, thanks to its credibility, has attracted European funding over the years, such as that of the IEE – Intelligent Energy Europe Programme and the European Regional Development Fund. This is why it is an ideal partner in sustainable development projects under the Green Deal and Structural Funds.
Looking for a partner for a sustainability project? Get in touch with IPPO-Engineering! Our team get back to you in a short time. | physics |
https://mynoblesolar.com/solar-panel-installation/colorado-solar-panels/ | 2023-10-03T03:04:25 | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511053.67/warc/CC-MAIN-20231003024646-20231003054646-00207.warc.gz | 0.933314 | 993 | CC-MAIN-2023-40 | webtext-fineweb__CC-MAIN-2023-40__0__191756344 | en | Colorado Solar Panels
How solar works
Solar panels generate electricity by harnessing the photons produced by the sun. Solar panels are made up of solar cells; multiple panels can be wired together to create a solar array. When sunlight hits a solar cell, the photons knock electrons loose from their atoms. These free electrons are converted into a DC current, which then passes through an inverter and becomes an AC current, capable of powering your home. Any excess energy captured during peak sunlight hours is stored in a battery – which gives your home continued access to electricity, even at night. The more panels you have, the more electricity you can generate and store.
Cost of solar
Solar panels are an investment that can pay off exponentially in the long run – but they’re not inexpensive, and costs will vary significantly from one state to another. It’s important to do plenty of research to determine your savings potential and the overall value of the investment before taking the plunge. Thankfully, solar is becoming more affordable each year. Advancements in manufacturing techniques have also made PV panels more efficient, capable of generating more electricity per unit. As clean, renewable energy gets more time in the national spotlight, we hope to see more states implement incentive programs that make solar energy a reality for an even broader population.
There’s more to installing solar than mounting the panels to your roof. The entire process typically takes a few months, beginning with an initial consultation and wrapping up with a city inspection once everything’s complete. Once we’ve performed an in-home inspection and determined the number of panels you need, we’ll get to work obtaining building permits from the city. Then, the installation itself takes only a few days. Don’t let the timeline scare you: We’ve been helping homeowners set up solar panels in Colorado for years. We’ll take care of all the details and paperwork so you can focus on enjoying your soon-to-be-solar-powered home.
START YOUR FREE SOLAR QUOTE TODAYClick Here
Why upgrade your home to solar in Colorado?
Colorado ranked 11th in the nation for installed solar capacity: As of 2016, the state had 925.8 MW of solar energy installed. It’s easy to see why, with excellent sun exposure across most parts of the state and numerous local programs that incentivize the adoption of renewable energy. Colorado’s net metering program, for example, is one of the best in the country. The state mandates that utility companies pay homeowners directly for the net excess energy their solar energy systems produce. This means that some individuals can not only eliminate their electricity spending, but even turn a profit off their solar panels.
The cost of solar in Colorado
Many different variables will influence the exact cost of materials, installation, and labor for your solar panel system. The size of your home, type and size of your roof, number of panels, permitting costs, and state and local incentives will all have an impact on the final total. The only way to know exactly how much solar will cost for your unique home is by contacting us to schedule an inspection. We know it can be helpful to get an idea of the regional data, though: According to EnergySage, as of 2020, the average cost to install residential solar panels in Colorado ranged from $13,388 to $18,112 after solar tax credits. When you work with Noble Solar, we’ll include the cost of labor in our initial quote to make the breakdown as simple as possible.
The solar installation process in Colorado
Installing solar panels in Colorado isn’t all that different from the installation process in other states. However, Colorado’s weather does limit the time frame in which installation can be safely performed: If you’re considering switching to solar, we recommend scheduling your initial consultation in the spring (or ever earlier) so that permits can be obtained in enough time for a summer installation date. In general, Colorado’s climate, open land, and sun exposure makes most homes in the state excellent candidates for solar power.
Schedule your Colorado solar panel consultation
Getting started with solar in Colorado is as simple as scheduling a free, 5-minute initial phone consultation with our team of specialists. We’ll ask basic questions about your current electricity usage, the type and size of your home, why you’re interested in solar, and address any concerns you may have. We’ll also walk you through your state incentive programs so you understand your savings potential and have a rough idea of the total cost. Next, we’ll schedule an in-home or virtual (Zoom or Skype) appointment to perform a more in-depth evaluation of your home and go over equipment, costs, logistics, and next steps. | physics |
https://www.climateactionforassociations.org/post/transparent-solar-panels-could-replace-windows-in-the-future-here-s-how | 2023-12-08T19:34:07 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100769.54/warc/CC-MAIN-20231208180539-20231208210539-00540.warc.gz | 0.922675 | 331 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__49785832 | en | As the world strives toward a low-carbon future to curb the worst effects of climate change, solar energy should unquestionably be one of our strongest allies. But how viable are transparent solar technologies? Could we really generate electricity from windows in offices, homes, car’s sunroof, or even smartphones?
What is 'transparent solar energy'?
Transparent solar is a cutting-edge technology that gathers and uses light energy through windows or any glass surface, regardless of the angle. It has the potential to be a game-changer in terms of broadening the scope of solar.
In terms of engineering, researchers have created several means of transparent solar technology. Most generally though, the majority of them function more as a transparent solar concentrator, which means they are made to absorb specific UV and infrared light wavelengths that aren't visible to the naked eye and transform them into energy capable of powering electronics.
This technology is also called photovoltaic glass, and it's manufactured to provide a ranging level of transparency. In 2020, scientists in the US and Europe have achieved 100 percent transparency for solar glass, bringing us one step closer to the goal of a sustainable future that does not rely on the grid of the fossil fuel industry.
The future of high-tech windows
Transparent solar technologies are already popping up around the world. In Copenhagen, the international school's design utilizes 12,000 hued but clear solar panels all over the building, producing 200 MWh of energy annually -- that's apparently more than half of the energy the building consumes. To find out more on the topic click here
Source: Interesting Engineering | physics |
https://www.srisaradacollege.org/view_department.php?did=NQ== | 2023-06-05T07:46:30 | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224651325.38/warc/CC-MAIN-20230605053432-20230605083432-00265.warc.gz | 0.927555 | 410 | CC-MAIN-2023-23 | webtext-fineweb__CC-MAIN-2023-23__0__111428288 | en | DEPARTMENT OF Physics
Started in 1987, the Department of Physics offers B.Sc. Physics with a sanctioned strength of 48. The Allied courses offered by the Department for Physics students are Mathematics and Chemistry, studied during the first and second years of the course. The Department offers Physics as an Allied subject for B.Sc. Mathematics and Chemistry. It also offers Basic Physics I and II as Non-Major Elective papers for second year students and C++ language as a component of major paper for B.Sc. Physics. It also opts certificate course “Physics in Everyday Life" from the year 2012.
Further the Department gives special attention to motivate the students to pass National Graduate Physics Examination (NGPE) conducted by Indian Association of Physics Teachers (IAPT) which in turn offer them research pursuits (Ph.D.) along with fellowship. E-Learning courses in Physics by means of NPTEL in collaboration with IIT Chennai and IISC Bangalore is also opted for students to develop their knowledge.
The PG course, M.Sc. Physics was started in the year 2015-2016. A spacious and well-equipped laboratory for UG and PG Physics meets the requirements of the Major and Allied syllabi. Currently to encourage and enlighten the students for excellence in the field of research, Nano lab in Physics was started on November 2017.
FACULTY MEMBERS :
Designation : Assistant Professor
Qualification : M.Sc., M.Phil., Ph.D
Research Area : DEVELOPMENT OF TRANSITION METAL CHALCOGENIDES DECORATED POLYANILINE COMPOSITE NANOFIBERS BASED COUNTER ELECTRODES FOR DYE SENSITIZED SOLAR CELL.
Email : sowmiyaelindjeane@gmail. com
Contact Us : [email protected] | physics |
http://www.ggpenergy.com/solutions/silverbullet/silverbullet-methodology/ | 2013-05-21T14:51:00 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700107557/warc/CC-MAIN-20130516102827-00087-ip-10-60-113-184.ec2.internal.warc.gz | 0.946108 | 3,362 | CC-MAIN-2013-20 | webtext-fineweb__CC-MAIN-2013-20__0__31915204 | en | An Overview of Water Tower Cooling using the GGP Silver Bullet Water Treatment System System
The cooling process involves the removal of heat, expressed in British Thermal Units or BTUs, from water. This study involves the removal of heat from the cooling tower.
The removal of heat from a building can be accomplished in a number of ways. The most common way is to use water as a transfer media because it is a more efficient carrier than any other media such as air.
The cooling tower water, also called condenser water, goes to a cooling tower where it eliminates the heat contained within it through a process called evaporation. When exposed to air, water releases its heat to air through evaporation and converts it to water vapor. We have a similar process occur when we sweat. The liquid sweat evaporates from our skin, pulling heat from our body surface it lowers the surface temperature and we feel cooler. The cooling tower lowers its water temperature through the same process.
When the tower evaporates and cools the remaining water, the minerals (dissolved salts) contained in that water stay behind. In other words, when water evaporates, it does not also carry off the minerals dissolved in it. We have the same thing occur when we sweat. The salts contained within our sweat do not evaporate and our skin feels “sticky” as these salt layers build up. So, one of the problems involved in the process of using cooling towers is how to deal with the minerals that are left behind.
In order to understand the role that minerals play in cooling towers, we must first understand the nature of salts and water. Salts have two names with the first name referring to the part of the salt that when dissolved in water has a positive charge and the last name to the part that has a negative charge. So when we dissolve ordinary table salt, sodium chloride, in water, the sodium has a positive charge and the chloride, the negative charge. When the sodium and chlorides dissolve in water, we now call them ions because they have either a positive or negative charge.
In cooling tower water we will be dealing with a different salt called calcium carbonate. This salt is the primary ingredient in limestone. As a dissolved salt, calcium carbonate has one important characteristic. It changes from its dissolved ion state back to its solid state when in a very warm environment like a large chiller. We see examples of calcium forming deposits in other warm environments like home hot water heaters, heated tea pots and on the spray nozzles of shower heads. Just as the calcium deposits in our home hot water heater make them less efficient as it builds up calcium layers at the bottom of the hot water heater, so too do the calcium deposits make chillers less efficient because they produce an insulating layer that limits the heat being transferred to the outside cooling tower.
How does the amount of calcium increase in a cooling tower? It will build up in the same way that occurs on our skin, that is by the continuous evaporation of water. We have two ways of measuring this increase in the amount of calcium in tower water. One way is by comparing the amount of calcium in the makeup water that goes into the tower to the amount of calcium in the tower water itself. The ratio of calcium in the tower water to calcium in the makeup water is called cycles of concentration (*COC*). If the calcium in the makeup water is 100 parts per million and in the tower water, 500 parts per million, then we have five cycles of concentration.
We also have another method of calculating cycles of concentration. If we compare the gallons of makeup water going into the tower to the amount of water going down the drain (called bleed-off or blow-down water), we will also get cycles of concentration. The formula for this calculation is:
Makeup Water divided by Bleed-Off Water = Cycles of Concentration
For example, if we have 15 gallons going into the tower and 3 gallons going to the drain, we also have 5 cycles of concentration. If 15 gallons went into the tower and 3 gallons went down the drain, where did the 12 gallons of makeup water go? Those 12 gallons evaporated taking with them lots of heat and cooling the tower in the process.
Why send those 3 gallons down the drain? If we just let all the water evaporate, then the concentration of calcium ions would get so high that we would have massive calcium deposits in the chillers. By sending the amount of water going down the drain, we control the amount of calcium we have in the tower water. For example, if we let the evaporation rate stay constant at 12 gallons per minute but increased (doubled) the bleed-off rate from 3 to 6 gallons, we would increase the makeup water rate from 15 gallons to 18 gallons because the makeup water rate is simple the addition of evaporated water plus bleed-off water. We have, however, dropped the cycles of concentration (COC) by increasing the bleed-off rate from 5 COC [15/3 = 5] to 3 COC [18/6 = 3].
One of the primary problems with cooling towers is how to use the least amount of water for cooling purposes and thus conserve water usage. Simply stated, we need to reduce water consumption by reducing tower water bleed-off. We will discuss this process later when we discuss SCALE CONTROL.
The second biggest item after SCALE CONTROL is controlling bacteria in cooling towers. We call this process MICROBIOLOGICAL CONTROL. Cooling towers are great breeding grounds for bacteria. Inside the tower you have a warm, moist, sunlit environment that bacteria of all sorts love to thrive it. You also have a great environment inside chillers for other types of bacteria to grow. These bacteria can produce a material that can act as an insulating slime barrier to reduce the efficiency of the chillers in much the same way that calcium provides a solid barrier. In addition, certain types of bacteria called anaerobic bacteria because they thrive in the absence of oxygen, can grow and produce acids as part of their metabolism that can attack the steel components of the chillers.
Just as we measure calcium minerals in parts per million, we measure quantities of bacteria in Colony Forming Units per milliliter, abbreviated CFU/ml. The higher the count, the more bacteria you have. It is generally accepted that the maximum acceptable count in a cooling tower is 1,000,000 CFU/ml.
The simplest way to control bacteria is to rupture the cell walls that surround their cellular bodies. We use what are called oxidizing biocides to do this job. Examples of oxidizing biocides are chlorine and bromine which are frequently found in drinking water to control bacteria. We will discuss MICROBIOLOGICAL CONTROL and the GGP Silver Bullet Water Treatment System system later in this report.
The third item we will discuss is CORROSION CONTROL. CORROSION CONTROL is keeping corrosion rates in a cooling tower or chiller to the lowest rate possible. We measure corrosion rates in mils per year, abbreviated mpy. A mil is 1/1000 of an inch so a pipe that is 1 inch thick with a corrosion rate of 2 mpy will last 500 years before it finally is destroyed. Since the 2 primary metals found in chillers or towers are steel and copper, we use a method that will protect these metal surfaces by coating them with a protective film of calcium. We will discuss this mechanism later in our report.
Mechanisms of the GGP Silver Bullet Water Treatment System system
The GGP Silver Bullet Water Treatment System system has 2 key components, production of monatomic oxygen and filtration using special glass media.
The production of the monatomic oxygen is accomplished by modifying the oxygen molecule found in air. By exposing the oxygen to an ultraviolet light source, the oxygen structure is changed, giving it a negative charge. This modified oxygen is then continuously drawn into the tower water by means of a vacuum. The oxygen stays dissolved in the tower water where it continues to circulate. Once in the water, some of the monatomic oxygen combines with the water to form hydrogen peroxide. The remaining monatomic oxygen stays dissolved in the water in much the same way that regular oxygen can stay dissolved but it carries with it a negative charge.
The monatomic oxygen has an ability to stay dissolved in water to a much greater degree, however, than ordinary oxygen because it is attracted to the positively charged part of the water molecule.
The second part of the process involves filtration. Filtration plays a key role in the removal of dirt particles. Dirt particles generally enter in a cooling tower because it is suspended in the air, gets washed out as air flows through a cooling tower and then becomes suspended in the tower water itself. We call this dirt a suspended solid because it can be removed by filtration. Dirt can play a harmful role in cooling tower water by not only acting as a harbor for bacteria to grow, but also by allowing anaerobic bacteria to grow in cooling towers where those bacteria produce acids that destroy the tower metallurgy. Dirt particles also produce deposits in chillers to rob them of their efficiency. Dirt can also affect the efficiency of plate and frame heat exchangers that are used in arid climates to significantly reduce energy costs in colder weather. If dirt is allowed to build up in the narrow spaces between the stainless steel plates, water flow is reduced and energy costs go up.
Comparison of the GGP Silver Bullet Water Treatment System system with conventional chemical treatment
Conventional chemical treatment depends upon the continuous feed of assorted chemicals to prevent the formation of scaling crystals by altering their scaling tendencies. Chemicals are fed continuously because a portion of the tower water is regularly sent to the drain (bleedoff) to keep the calcium levels in a certain range. The bleedoff process brings in fresh makeup water with lower calcium levels which dilutes the concentration of calcium in the tower water. The bleedoff process was previously described on page 2 with the main point being that reducing the bleedoff rate saves water but also increases the level of calcium in the water.
The GGP Silver Bullet Water Treatment System system uses the strong negative charge of the monatomic oxygen to bond with the positive charge of the calcium ion. The net result is that the calcium ions stay dissolved in water in much the same way as with conventional chemical treatment.
Keeping the calcium in a dissolved state prevents calcium from forming harmful deposits in chillers. It is important that we have a means of confirming that the calcium in the tower water is dissolved. We use field or laboratory tests as our means of verifying the existence of the dissolved calcium by comparing the ratio of dissolved calcium in the tower water to the amount of calcium in the makeup water. We are back again to measuring Cycles of Concentration (COC). We need to have another dissolved ion that remains in a dissolved regardless of temperature to determine what the real cycles of concentration are. That is the chloride ion and our field test also included testing makeup and tower water for chlorides as well.
We can then compare chloride to calcium COC and see if they are the same. If they are the same, then the calcium in the tower is remaining dissolved. For example, if makeup water had 25 parts per million of chlorides and tower water had 125 parts per million (hereafter abbreviated as ppm) of chlorides, we would have a ratio of 5 to 1 or 5 chloride cycles of concentration (COC). If that same makeup water had 100 ppm of calcium and the tower water had 500 ppm of calcium, then we would also have 5 calcium COC.
What if the field test showed, however, 300 ppm of calcium instead of 500 ppm of calcium on the same tower water described in the previous paragraph? Where did the missing 200 ppm of calcium go? Why did we have 5 chloride COCand only 3 calcium COC? The key to the answer is in the fact that we are only measuring DISSOLVED calcium with our field test. The missing 200 ppm of calcium are in an non-dissolved state, otherwise called a crystalline state. This crystalline state is the form calcium takes when it is forming the damaging deposits in the chiller condenser tubes.
The GGP Silver Bullet Water Treatment System system at this test site not only matched the best conventional chemical treatment program by having equal calcium and chloride COC, it exceeded the standard by having greater calcium COC than chloride COC. If we have greater calcium than chloride COC, we are actually DISSOLVING old calcium deposits making for a cleaner surface and better heat transfer. It would be like removing deposits from an old house water heater so it would take less energy to heat up the water. Only in this case we would be getting rid of more heat at the cooling tower so the chillers would operate more efficiently and use less energy.
Filtration also plays a key role in removing old calcium deposits as well as filtering out bacteria. Removing these physical contaminants through continuous filtration not only makes the water the most efficient heat transfer media possible but also extends the life of the cooling tower and chiller.
What is even more remarkable is that in the past, the chillers with conventional chemical treatment had to be acid cleaned to remove the calcium deposits. We eliminated that expense of chemicals, wasted energy and labor associated with acid cleaning.
As previously mentioned, microbiological control is generally achieved with conventional chemical treatment using oxidizing biocides like chlorine or bromine. With the GGP Silver Bullet Water Treatment System program, we are electrochemically producing another type of oxidizing agent, hydrogen peroxide, to control bacteria. Using commercially available dip slides to measure Colony Forming Units per milliliter, we consistently registered bacteria counts in the low 1,000 CFU/ml. which is even lower than the limit considered acceptable with conventional chemical treatment. Another contributing factor to low bacteria counts was the continuous use of a bypass filter that removed food sources for bacteria as well as taking out bacteria through filtration.
Corrosion control is achieved by using a small amount of calcium on the metal surfaces as a type of coating against the corrosive action of the water. The coating action is continuous with small amounts of calcium depositing on the metal surface only to be swept away and replaced by a new coat. This sweeping and depositing action is necessary to prevent excessive layers of calcium from building up and affecting water flow and heat transfer.
Corrosion control was monitored using a conventional corrosion coupon rack. This rack had weighed metal “coupons” that when exposed to water over a period of time had some minor weight loss that was translated into mils per year. The coupon rack had a measured flow rate of approximately 4 gallons per minute. It also had a strainer upstream of the coupons to remove particles that could interfere with the test like dislodged calcium deposits.
We were so effective in removing vast quantities of harmful calcium deposits that had built up over several years that the strainer frequently became clogged with these deposits and the flow rate dropped from 4 gallons per minute to zero. Since coupons must be continuously bathed with tower water, a drop in the flow rate resulted in stagnant water that gave false higher corrosion rates. We had corrosion rates that varied from 1 to 2 mils per year with mild steel and less than 0.2 mpy for copper when the flow rate was correct. These corrosion rates are considered very good by standard industry tests.
A more accurate measure of the success of the corrosion control was the appearance of the chiller metallurgy that showed virtually no evidence of corrosion.
We will continue to monitor corrosion rates bearing in mind that the coupon rack strainer has to be cleaned weekly to maintain consistent flow.
In summary, the data shows that the GGP Silver Bullet Water Treatment System system not only equaled parameters established over decades for quality water management for conventional chemical treatment but also exceeded those standards when it came to scale and microbiological control. These results were achieved without the addition of any chemical treatment. The cost of the program equaled that of conventional chemical treatment. The GGP Silver Bullet Water Treatment System system provided the additional benefit of removing airborne dirt that would reduce the efficiency of the plate and frame heat exchanger that saves so much energy in the winter months. The addition of the filter did not result in any increase in capital costs. | physics |
http://synthstuff.com/mt/archives/2017/03/in-other-astron.html | 2017-03-30T20:36:27 | s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218203515.32/warc/CC-MAIN-20170322213003-00433-ip-10-233-31-227.ec2.internal.warc.gz | 0.953016 | 446 | CC-MAIN-2017-13 | webtext-fineweb__CC-MAIN-2017-13__0__58016571 | en | Our sun is in a marked quiet phase - no sunspots. From Watts Up With That:
Solar Slump: The Sun has been blank for two weeks straight
Over the weekend, we reviewed the state of the solar data for March 2017. Now, there’s a two week straight lack of sunspots, the longest stretch since 2010.
The sun is currently blank with no visible sunspots and this is the 14th straight day with a blank look which is the longest such stretch since April 2010 according to spaceweather.com. Historically weak solar cycle 24 continues to transition away from its solar maximum phase and towards the next solar minimum. In April 2010 – the last time there was a two week stretch with no visible sunspots – the sun was emerging from the last solar minimum which was historically long and deep. There have already been 26 spotless days in 2017 (34% of the entire year) and this follows 32 spotless days last year which occurred primarily during the latter part of the year. The blank look to the sun will increase in frequency over the next couple of years leading up to the next solar minimum – probably to be reached in late 2019 or 2020. By one measure, the current solar cycle is the third weakest since record keeping began in 1755 and it continues a weakening trend since solar cycle 21 peaked in 1980. One of the impacts of low solar activity is the increase of cosmic rays that can penetrate into the Earth’s upper atmosphere and this has some important consequences.
The sun is a key driver in our climate and sunspots are an excellent proxy for solar output. Fewer sunspots = less warmth. This also means more clouds with a higher albedo (reflectivity) of the Earths atmosphere and correspondingly more cooling. The solar wind is comprised of charged particles streaming against the Earth's magnetic field. This creates a barrier which deflects incoming cosmic rays. When this barrier is weaker, more cosmic rays reach the atmosphere, collide with atoms of oxygen and water releasing charged particles. These particles form nucleation sites for water vapor - hence, more clouds.
Time to lay in a stock of firewood and bundle up. | physics |
https://lifamasks.shop/en/blogs/ajankohtaista/miksi-jokaisen-kotivarana-olisi-hyva-olla-ffp3-hengityssuojaimia | 2023-12-08T09:27:50 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100739.50/warc/CC-MAIN-20231208081124-20231208111124-00451.warc.gz | 0.920093 | 613 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__291526198 | en | Why would it be good to have FFP3 respirators in everyone's home supply?
Right now, we might be horrified by the thought of aerosol particles from nuclear fallout. There are other harmful particles coming from far away, and you should protect yourself from all of them.
The size of the aerosol particles affects how far they travel in the air. The size also affects how deep the particles end up in the respiratory system. In addition, the size determines how easy or difficult it is to filter the aerosol out of the air.
In general, we could say that the smaller the particle, the further and longer it flies in the air, whether we are looking at the atmosphere, our lungs or the particle filter.
Why are particles from nuclear fallout or forest fires, for example, a concern? They are exactly the size category that travels in the atmosphere for days, even weeks, gets deep into the respiratory system and requires a proper filter to filter them.
A well-fitting FFP3 respirator effectively filters out nuclear fallout aerosols
Previously, after the Chernobyl accident and most recently the Fukushima accident, scientists found out what size particles were transported to different parts of the globe and in which size classes radioactive isotopes were observed. The findings were very similar in both.
One scientific publication reports on Fukushima fallout studies from France, Austria, the Czech Republic, Poland, Germany, and Greece. According to the study, most of the particles were in the size range 0.1-1 μm and the median size class Cesium (137) was 0.25 - 0.71 μm, Cesium (134) was 0.17 - 0.69 μm and Iodine (131) was 0.3 - 0.53 μm.
A well-fitting FFP3 respirator filters over 97% of all these size categories. Respirators are classified for particles of 0.6 μm on average, and for them the FFP3 class filtration efficiency is 99%.
Many have become familiar with respirators during the corona pandemic. Less noticed is the fact that with the FFP3 Respirator you can protect yourself comprehensively from airborne particles, not only from viruses.
FFP3 respirators are easy to take with you and use when needed. Lifa Air produces two sizes S/M and L/XL. The protector's model is designed to be as comfortable as possible to use, as there is plenty of breathing space in this model.
Size Distributions of Airborne Radionuclides from the Fukushima Nuclear Accident at Several Places in Europe. Masson et al. Environmental Science & Technology 2013, DOI: 10.1021/es401973c
Particle size distribution of radioactive aerosols after the Fukushima and the Chernobyl accidents. Mala et al. Journal of Environmental Radioactivity, 2013: https://doi.org/10.1016/j.jenvrad.2013.07.016 | physics |
http://jabbour.org/19980223.html | 2021-12-05T22:55:03 | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363226.68/warc/CC-MAIN-20211205221915-20211206011915-00090.warc.gz | 0.956243 | 890 | CC-MAIN-2021-49 | webtext-fineweb__CC-MAIN-2021-49__0__113835624 | en | By Dr Kamal Jabbour, Contributing Writer
On the morning of the 1997 Marine Corps Marathon, Bob Thurston rode the course ahead of the runners looking for problems. Thurston is the regional USATF certifier for the District of Columbia, responsible for certifying that the course is at least 26 miles 385 yards. He found a few mistakes: cones out of place in the north Pentagon parking lot, timing clocks at the wrong place near Mile 8 and the halfway mark, and mile 14 about 20 yards off.
Then, disaster struck. The course marshals shortcut an entire block near mile 15. Thurston was too late to fix it. By stopping to fix the earlier problems, some 15 runners were already ahead of him. It was too late to divert the flow of runners back on the right course. Quickly, he measured the damaged. The course was 75 meters short.
In recent years, USATF has setup an elaborate process for course measurement and certification, to insure that runners' efforts do not go to waste on a short course. First, a race director designs a course, using a map and a wheel to approximate its distance. Second, an approved measurer measures the course, adjusts its length, and draws a detailed map of its layout. Third, a regional certifier approves the layout, and assigns a course certification number.
The official length of a course is the shortest distance that a runner can follow from the start to the finish. This includes cutting the tangent on the turns and running a straight line on the straight-aways. The need to measure close to the curb eliminates the car from consideration. Conversely, the wobbliness of a surveying wheel over long distances eliminates its use. Thus, the bicycle became the tool of choice for measuring courses. It can ride close to the curb and travel straight between two points.
Dr. Alan Jones, a former engineer at IBM in Endicott, NY, transformed course measurement from an art into a science by developing the Jones counter. Essentially a precise mechanical odometer, the Jones counter attaches to the front tire of a bicycle and increments a count every 3 or 4 inches.
Before measuring a course, a measurer lays out a calibration course using a steel tape. The tape must be strung with a specified tension, and a temperature correction may be necessary. Then, the measurer rides the bicycle over the calibration course several times, notes the number of counts, and computes a conversion factor from counts into distance. The measurer also notes the pressure of the tires and the ambient temperature.
With a calibrated bicycle, the measurer rides the course at least once in each direction. Adjustments to the course are usually necessary. Eventually, after numerous measurements over several days, the measurer sets the start and finish lines, and the mile and kilometer marks. An extra meter is added for every kilometer, providing a 1/1000 short course protection factor. The measurer drives nails into the pavement to mark key points, and accurately measures their location relative to permanent fixtures.
Since course measurement takes several hours, the pressure and temperature of the tires change due to friction and weather, affecting the conversion factor. Therefore, the measurer must re-calibrate his bicycle over the calibration course, at the end of each measurement, and compute a new conversion factor from counts to distance.
Fortunately for those of us who ran the 1997 Marine Corps Marathon, Bob Thurston could think as fast as he could ride. He had less than 15 minutes to find a place to stretch the course by the right amount (mile 24 in the South Pentagon Parking Lot), get there before the lead runners (that's 10 miles on the bike in the rain), measure the required correction, move the cones, and incidentally, relocate a water station with 15,000 cups.
On the morning after, careful re-measurement on a dry pavement showed that the new course was about 7.4 meters shorter than the certified course. Since the original course was 1/1000 or 42.2 meters longer than a marathon, the new course was certified as a valid marathon course.
Kamal Jabbour runs and writes on the hills of Pompey, New York. His RUNNING Column appears in The Post-Standard on Mondays. He maintains The Syracuse Running Page and receives email at [email protected]. | physics |
https://www.gaureshkapoor.com/projects-atl | 2021-09-18T02:20:46 | s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056120.36/warc/CC-MAIN-20210918002951-20210918032951-00366.warc.gz | 0.907059 | 198 | CC-MAIN-2021-39 | webtext-fineweb__CC-MAIN-2021-39__0__98250743 | en | Regularly participated in programs and workshops at School's Atal Tinkering Lab and built the following projects with technologies of Arduino Uno microcontroller and IoT :-
• Waste-water Management Project
• Oceanic Temperature Probe
The projects were sent for model-making and innovation competitions such as at St. Columba's School.
Built the prototype of a water and waste conservation project in the school’s Atal Tinkering Lab to preserve water resources through a series of steps involving aeration, ultrafiltration, sedimentation reverse osmosis, activated carbon filtration, and more.
Oceanic Temperature Probe
Built the Oceanic Temperature Probe in the school’s Atal Tinkering Club, under the supervision of the physics professor.
It is a research-based prototype model used to determine the temperature of ocean water with varying depths. This data can be used to make weather predictions and survey the occurrence of natural calamities like floods. | physics |
https://smp.leeds.ac.uk/facilities/testing-and-characterisation/ | 2023-12-10T11:13:23 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679101779.95/warc/CC-MAIN-20231210092457-20231210122457-00436.warc.gz | 0.90482 | 480 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__302811196 | en | Ultrasonic Velocity Apparatus
The main advantage of the technique is that it allows a full set of elastic constants of a material (Young's Modulus, Poissons Ratio and Shear Modulus) to be quickly and accurately measured, and by this virtue it has been used extensively at Leeds for the development of modelling schemes for composites. The sample to be tested is placed in a water bath between an ultrasonic transmitter and receiver (2.25MHz), enabling the velocity of sound in the sample to be measured. The unique aspect of the technique is that at non-zero incidence, mode conversion at the front face of the sample causes the incident longitudinal wave to be split into a longitudinal wave and a shear wave inside the sample. Measuring the velocities of these two waves allows the stiffness constants in the plane of propagation to be determined. By propagating sound in different planes it is possible to obtain all the elastic constants of a material (up to nine elastic constants if the material is anisotropic).
Image Analysis System
An in-house designed facility, developed as a collaboration between the IRC and The Instrumentation Group in the physics department. It is used to characterise the microstructure of fibre reinforced polymer composites, in particular their three dimensional fibre orientation distribution. The system works by looking at transverse sections taken from a composite, and analyses the elliptical footprint that each fibre makes with the section plane. The method is very fast, able to analyse up to 40,000 fibre images/hour, and can automatically scan large areas, up to 20mm x 20mm square. In addition to determining fibre orientation, the system can also be used to determine fibre volume fraction, and using different samples, the fibre length distribution.
Rosand Instrumented Falling Weight Impact Machine
This machine can be used for a range of impact tests. Tests that be carried out include Charpy and Izod impact geometries (ASTM D256) and clamped plate. The maximum impact velocity is 6m/s and for a maximum weight of 25kg, equates to a maximum impact energy of 450J. Tests can be carried out in an environmental chamber, allowing a temperature range of -100 to +200C to be studied.
If more information is required about use of the composites lab then please contact Dr Peter Hine for details. | physics |
http://lightningsafetyalliance.org/news_Public_Urged_to_Assist_with_Lightning_Research_Project.html | 2017-03-24T12:07:28 | s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187945.85/warc/CC-MAIN-20170322212947-00551-ip-10-233-31-227.ec2.internal.warc.gz | 0.898887 | 1,449 | CC-MAIN-2017-13 | webtext-fineweb__CC-MAIN-2017-13__0__44152524 | en | Public Urged to Assist with Lightning Research Project
In Support of Fire Prevention Week 2009, “Stay Fire Smart! Don’t Get Burned!”
To participate, please click the Lightning Damage Survey.
Hartford, Conn., October 4, 2009 – In conjunction with Fire Prevention Week, the Lightning Safety Alliance (LSA) is sponsoring a research project to learn more about the ways that lightning enters and damages homes and buildings. Packing up to 100 million volts of electricity and a force comparable to that of a small nuclear reactor, lightning has the power to rip through roofs, explode walls of brick and concrete and ignite deadly fires. The LSA is initiating the research project in an attempt to collect and analyze lightning data and is urging property owners, firefighters and insurance professionals to visit its web site at www.lightningsafetyalliance.org to submit information about lightning incidents, fires and damage to their homes. The LSA plans to present its findings to the National Fire Protection Association (NFPA) and its Technical Committee on Lightning Protection, which reviews information pertaining to the NFPA 780 Standard for the Installation of Lightning Protection Systems.
Lightning strikes can be direct or indirect. A direct strike to a structure typically results in resistive heating, arcing and burning, which can cause catastrophic damage to the structure and its contents. An indirect strike near a structure typically damages sensitive electronics and vulnerable building systems. In these instances, the lightning current can enter a building from a tree, fence, light pole or other nearby object. In addition, lightning can travel on underground power cables, telephone lines or metallic piping into a building.
Property owners should also be aware of lightning concerns surrounding a relatively new gas piping used to transmit fuel gas in homes, known as corrugated stainless steel tubing (CSST), which has been found to be susceptible to damage from arcing by direct or nearby lightning strikes. In some situations, lightning has created holes in the CSST, allowing gas to leak, which has resulted in home fires. On September 15, 2009, the NFPA announced the appointment of a task group to review the lightning-related technical issues affecting CSST in gas piping systems. The task group will provide the NFPA Council with a review and analysis of the jurisdictional and technical issues relating to lightning and CSST in gas piping systems and identify the need for research, data and further committee action with regard to bonding, grounding and lightning protection.
“The LSA’s research initiative will be helpful in identifying specific lightning related damage patterns that could lead to enhancements in lightning protection methods,” said John Kennelly, spokesman for the Lightning Safety Alliance (LSA), a nonprofit, non-stock, national league of lightning protection professionals and consumers dedicated to the promotion of lightning protection and safety. “Lightning protection systems are critical in protecting our national infrastructure and various governmental agencies rely heavily on nationally recognized specifications for lightning protection.”
This sentiment is echoed by Mitchell Guthrie, former chair of the NFPA Technical Committee on Lightning Protection and current chair of the International Electrotechnical Commission Committee on Lightning Protection (IEC TC81). “There is no doubt that implementing a properly designed lightning protection system significantly reduces the probability of damage from lightning to a tolerable level for any application,” added Guthrie.
MARKETWIRE (MARYVILLE, MO)
Hurricanes and tornadoes receive the news coverage, but lightning is
the second leading cause of storm-related deaths, killing more people
than tornadoes or hurricanes, topped only by flooding. In addition,
thousands of properties are damaged or destroyed each year by
lightning. A single bolt of lightning can generate heat in excess of
50,000 degrees F which can spark fires or cause surging through
electrical circuitry. The average cost of a homeowner insurance claim
from a lightning strike has more than doubled since 2004, rising to
$5,321 in 2007, according to statistics from the Insurance Information Institute (III).
Packing up to 100 million volts of electricity, a lightning strike to an unprotected home or
business can be disastrous, with lightning most often igniting roofs, sidewalls, framing and
"The good news is most personal injury and property damage caused by lightning can be
prevented," says Leslie Chapman-Henderson, CEO and president of the Federal Alliance for Safe
Homes, Inc. -- FLASH.
"Home and business owners needn't take their chances with lightning," explains Bud VanSickle,
executive director of the Lightning Protection Institute (LPI). "A professionally-installed lightning
protection system which meets U.S. Safety Standards (LPI, NFPA and UL) will prevent lightning
damage by providing a safe electrical path into the earth for lightning's destructive energy."
Lightning protection technology is a specialty discipline and expertise is required for system
design and installation. Systems for homes and businesses should be installed by trained and
experienced LPI-certified and UL-listed specialists. FLASH and LPI offer these safeguards for
property owners seeking a qualified lightning protection specialist:
-- Make sure materials and methods comply with nationally-recognized
-- Only an experienced and reputable UL-listed, LPI-certified lightning
protection contractors are qualified to install lightning protection
-- Check references. A qualified specialist should provide a list of
references and affiliation with industry groups such as NFPA, ULPA, LSA and
-- Ask about surge protection. Lightning-induced surges can damage
electronics and appliances. A qualified lightning protection contractor
can provide options for service entrance arresters and surge protection
-- Experience counts. Be wary of start-up companies or contractors
offering a "price deal" to install, fix or repair your lightning
-- When in doubt, contact www.bbb.org to locate your local Better
Business Bureau to obtain reliability report information on a contractor
before you hire.
The nonprofit Federal Alliance for Safe Homes-FLASH®, Inc. is a 501(c)(3) collaboration of
organizations dedicated to strengthening homes and safeguarding families from disaster. Based
in Tallahassee, FLASH, is the nation's fastest-growing disaster safety education organization with
more than 100 partners including FEMA, FL Division of Emergency Management, Georgia Pacific,
The Home Depot, International Code Council, National Weather Service, Renaissance
Reinsurance, Simpson Strong-Tie, State Farm, USAA and WeatherPredict Consulting, Inc. To
learn more about FLASH and access free resources, visit www.flash.org call (877) 221-SAFE
The LPI is not-for-profit, nationwide group founded in 1955 to promote lightning safety,
awareness and protection education. The organization provides a certification program to qualify
competence in lightning protection installation, design and inspection. The LPI offers a list of
certified contractors across the U.S. Visit the LPI website at www.lightning.org for more
information about lightning protection. | physics |
https://holdings.panasonic/global/olympic/nagano/support/ramsa.html | 2023-05-29T19:06:36 | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644907.31/warc/CC-MAIN-20230529173312-20230529203312-00577.warc.gz | 0.946023 | 523 | CC-MAIN-2023-23 | webtext-fineweb__CC-MAIN-2023-23__0__280544370 | en | The Olympic Winter Games Nagano 1998 adopted as an initiative, “the utilization of high technology that coexists with the beautiful and lush nature.” Panasonic’s RAMSA, the sound system of Nagano 1998, was tasked with how to realize this concept. After all, what exactly is an “environmentally-friendly” sound system? First and foremost, it was necessary to minimize sound spillage into the surrounding countryside. At the same time optimum sound had to be delivered to every corner of the venue. It was decided that directional speakers, highly lauded a year earlier at the pre-Olympic Games, would be used.
Optimal Installation for the Venue
Close attention was paid also to outdoor speaker installation. In heavier snowfall areas, the temperature dropped to -20 degrees Celsius overnight and daily snowfall occasionally reached 2 meters. To maximize outdoor durability, special water-repellent nets and resin coating, resistant to ultraviolet rays and temperature changes were used. At the snowboarding course where rhythmic music and live announcements were indispensable to the competition, the challenge became how to achieve clear, even sound on such a bumpy, uneven terrain.
Mac Takeuchi proposed to the team members, “Let's try doing it digitally. If we transmit using optical cables, there won't be any drop-off in signal even over long distances. We will be able to produce the highest quality sound production ever.”
This was the first-ever attempt during the Olympic Winter Games at using a sound system with optical cables for digital transmission. Digital transmission can maximize RAMSA’s elaborate sound design. The goal then, became the realization of “an indoor-quality of sound, outdoors.”
Achieved a Lag-Free Audio Environment Using Digital Transmission
For high-altitude, distant speakers, signals are first transmitted via optical cables to the unmanned sound room mid-way along the course. There the signals are amplified and applied an optimal delay according to speaker, so that the sound is homogenous across the entire course. Snowboarders take no more than one minute to hurtle down the slope. However, in reality it is a large task to transmit clear sounds that are uniform and lag-free across the uneven 1km terrain.
“Pleasant, congruous sound” is something taken for granted by the athletes and spectators who gather at the Olympic venues. However, behind this natural sound creation was RAMSA's advanced technology and the passion of Panasonic engineers seeking to shape the future of sporting events. | physics |
http://jamesao1.miniserver.com/en/flexiburn | 2018-11-13T19:03:10 | s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741340.11/warc/CC-MAIN-20181113173927-20181113195333-00029.warc.gz | 0.86103 | 357 | CC-MAIN-2018-47 | webtext-fineweb__CC-MAIN-2018-47__0__174901492 | en | FlexiBurn is our vertical flammability tester. You can use FlexiBurn for testing ease of ignition and the flame spread properties of apparel, curtains, drapes, nightwear, toys, protective clothing, technical fabrics, building and other materials.
For each material, there are applicable British, European or ISO standards, which set out the precise conditions for these very critical tests.
The position of the gas burner changes according to the standard, so we designed a robotic arm for precise positioning of the burner.
Virtually every method requires a different test frame - regardless of size and pin configuration, our frames are easily interchangeable.
To guide you through the complex world of flammability standards and test methods, we supply a pre-programmed Control Module for our flammability tester.
FlexiBurn meets the requirements of BS EN 13772 for curtains and drapes, which evaluates flame spread.
Flammability standards are constantly evolving, so we have future-proofed the instrument.
FlexiBurn is used to test flame retardancy and the anti-flame properties of nightwear.
FlexiBurn is specified to assure the safety of small toys and children's playthings.
FlexiBurn is employed to evaluate the flame resistance of curtains and drapes.
We understand that you are looking for a full package so we offer complete support for the full life of your FlexiBurn:
Find out more about our full support services.
We recommend using genuine James Heal test materials to ensure accuracy and reliability when testing. The following consumables complement FlexiBurn:
Find out more about our Test Materials.
Our corporate brochure contains details of our key instruments and other services. | physics |
http://www.oegv.or.at/?page_id=176 | 2023-12-11T06:52:43 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679103558.93/warc/CC-MAIN-20231211045204-20231211075204-00429.warc.gz | 0.930207 | 1,465 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__24693289 | en | Geschichte der ÖGV
Die Österreichische Gesellschaft für Vakuumtechnik (ÖGV) wurde am 23.10.1969 gegründet.
AUSTRIAN VACUUM SOCIETYOesterreichische Gesellschaft für Vakuumtechnik (OEGV)
The „Oesterreichische Gesellschaft für Vakuumtechnik“ (OEGV) – the Austrian Vacuum Society – was founded in 1969 during an informative meeting on the „Atominstitut“ of the University of Vienna by a group of scientists and industrial managers, headed by F. Viehboeck.
The original formation of the OEGV was on October 23, 1969 with acceptance of the statutes. On March 19, 1982 new objectives like Surface Science, Thin Films and Plasma Science were entered into the statutes. On November 26, 1991 the possibility of a second re-election of members of the board to ensure continuity was entered into the statutes.
The main purpose and objectives of the OEGV are to bring together and to support all those interested in production, measurement and application of vacuum, to distribute information, and to organise courses, meetings and seminars. The support is especially focused to young society members who can apply for grants given for travel to improve international contacts and experience.
As a society, the OEGV is represented by the President. The board consists of the President, Vice-President, Aktuar (Secretary), Quaestor (Treasurer) and several co-opted members (at least two members from industry), each serving a two-year term.
The current number of members is presently about 60. The members are individuals (90%), organisations, companies and distributors (10%).
The proportion of industrial involvement was very important during the formation of the OEGV. In the beginning about 50 percent of the members were coming from industry. The proportion has changed continuously. Presently most of the members are coming from the universities who dominate now the scientific structure of the society.
The scientific and technical areas covered by the OEGV have changed in parallel to the working fields of the individual members. In the beginning of the OEGV, vacuum technology in general was the main field of interaction. Nowadays surface science, nanometer structures, thin films and plasma science are the fields of main activity in the OEGV.
The OEGV has sponsored a considerable number of activities. Courses for technicians and engineers have been organised at university institutes and industrial plants. Examples are: „Production and measurement of vacuum“ (Vienna, 1970), „Vacuum technique in electricaltechnical industry“ (Vienna, 1971), „Fundamentals of vacuum technique and production of thin films“ (Vienna, 1973) and „Fundamentals and application of vacuum technique“ (Kapfenberg, 1980).
In 1977 the OEGV, together with the Oesterreichisches Forschungszentrum Seibersdorf Ges.m.b.H. and the Technische Universitaet Wien, was responsible for the organisation of the 7th International Vacuum Congress (IVC) and the 3rd International Conference on Solid Surfaces (ICSS) in Vienna at the Congress Centre Hofburg on September 12 -16, 1977. This congress, attended by more than 1300 participants, was a main activity in the first decade of the OEGV.
The 3rd European Vacuum Conference was organised in September 1991 in Vienna. In 1993 the OEGV was responsible for the 9th International Conference on Thin Films (ICTF-9) in Vienna. The 1999 European Conference on Surface Science (ECOSS-18) was held in Vienna under sponsorship of the OEGV as well as the 11th European Conference on Applications of Surface and Interface Analysis (ECASIA’05) September 2005 on TU Vienna. September 2007 the „European Conference on Surface Crystallography and Dynamics“ (ECSCD-9) was organized by P. Varga.
In addition the OEGV was active in the organisation of several IUVSTA-workshops (2nd in Obertraun 1990, 7th and 13th in Kitzsteinhorn 1993 and 1996, 25th in Leibnitz 1999, 35th in Trofaiach, 56th 2008 at Schlaining Castle, 60th 2009 in Vienna, 71th 2013 in Hernstein Castle, 72nd at Seggau Castle 2014 and the 73rd in Eisenerez 2014).
In 1979 the first joint meeting with the Hungarian Vacuum Society was organised in Gyoer, Hungary, which was the beginning of a successful series of conferences, the „Joint Vacuum Conference“ running under sponsorship of IUVSTA. The 3rd Joint Vacuum Conference was held together with EVC-3 in Vienna 1991. In spring 2002 the OEGV has organized the 9th Joint Vacuum Conference in Graz, Austria. The 15th Joint Vacuum Conference has been organized by OEGV June 2014 in Vienna.
The OEGV has sponsored and co-sponsored several other international conferences including the „Symposium on Sputtering“ (Vienna 1980) and the „Symposium on Surface Science“ (Obertraun 1983 and 1985). Joint Meetings were held with the German Vacuum Society (DGV) and the Swiss Vacuum Society (SGV) within the annual spring meetings.
Since 1980, when the Max Auwaerter Preis (Award) was first announced, the OEGV has had the honour to present this prize to the recipient in a special ceremony at conferences which it has organised or co-organised.
A main future objective of the Austrian Vacuum Society will be the re-enforcement of the collaboration between university research and industrial applications. The new technologies (nanometer structures, biological surfaces, photonic, etc.) offer a new challenge for the traditional vacuum and surface scientists, as organised in our vacuum society. The organisation of workshops and conferences, as well as the continuation of international scientific collaborations shall be our goal for the future.
An important part represents our contribution to IUVSTA by several members of our society. M. Higatsberger served as secretary general 1983-86, R. Dobrozemsky as treasurer 1992-98, Ch. Eisenmenger Sittner as scientific secretary 2010-13 and as secretary general since 2013. As divison chairs H. Stoeri (PSD), P. Varga (SSD) and M. Leisch (VSTD) has been active.
OEGV organized also several IUVSTA specific meetings: GM 6 (1977) in Vienna, ECM 32 and 33 (1977), ECM 65 (1991) and 98 (2005) in Vienna, ECM 52 (1986) in Spitz and ECM 120 (2015) in Graz. | physics |
https://www.battery-energy-storage-system.com/news/What-Battery-Storage.html | 2024-04-22T16:18:18 | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818312.80/warc/CC-MAIN-20240422144517-20240422174517-00516.warc.gz | 0.926959 | 585 | CC-MAIN-2024-18 | webtext-fineweb__CC-MAIN-2024-18__0__177188936 | en | What is a battery energy storage system?
A Battery Energy Storage System (BESS) is a technology that stores electrical energy in the form of chemical energy within batteries for later use. BESSs play a crucial role in modern energy systems by addressing the intermittent nature of renewable energy sources, providing grid stability, and enhancing overall energy reliability. Here are the key components and functions of a Battery Energy Storage System:
Battery Cells: The fundamental building blocks of a BESS are the individual battery cells. These cells contain electrochemical components that facilitate the conversion between electrical energy and chemical energy. Common types of batteries used in energy storage systems include lithium-ion, lead-acid, and flow batteries.
Battery Management System (BMS): The BMS is a crucial component that monitors and manages the operation of the battery cells. It ensures that the cells operate within safe voltage and temperature ranges, optimizes charging and discharging, and prevents issues such as overcharging or overheating.
Inverter System: BESSs typically include an inverter system, which converts the direct current (DC) stored in the batteries into alternating current (AC) for use in the electrical grid or for powering connected loads. In some cases, BESSs may also include a bidirectional inverter, allowing energy to flow in both directions between the batteries and the grid.
Energy Management System (EMS): The EMS is responsible for optimizing the operation of the BESS. It considers factors such as electricity prices, demand patterns, and grid conditions to determine when to charge or discharge the batteries. This optimization helps maximize the economic and operational benefits of the energy storage system.
Cooling and Thermal Management: Maintaining an optimal temperature is critical for the performance and longevity of batteries. BESSs often incorporate cooling and thermal management systems to regulate the temperature within the battery system and prevent overheating.
Enclosure and Safety Systems: BESSs are housed in enclosures designed to protect the equipment and ensure safety. Safety systems may include features such as fire suppression systems, ventilation, and other measures to mitigate potential risks associated with battery storage.
Grid Connection: BESSs are connected to the electrical grid, allowing them to interact with the grid in response to changing demand or supply conditions. They can provide services such as frequency regulation, peak shaving, and grid support, contributing to grid stability and reliability.
Battery Energy Storage Systems are deployed in a variety of applications, including supporting renewable energy integration, providing backup power during outages, and offering grid services to enhance the overall efficiency and resilience of the electrical grid. The technology continues to evolve, with ongoing research and development focused on improving performance, reducing costs, and expanding the range of applications for energy storage systems.
下一篇:Introduction to Battery Energy Storage Systems
上一篇:How are we supporting battery storage technology? | physics |
https://www.arbetslivsinstitutet.se/jobb/spectral-dynamics-of-decoheering-fluctuators-in-superconducting-circuits-2/ | 2021-02-27T15:26:56 | s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358976.37/warc/CC-MAIN-20210227144626-20210227174626-00489.warc.gz | 0.89518 | 968 | CC-MAIN-2021-10 | webtext-fineweb__CC-MAIN-2021-10__0__31608728 | en | OBS! Ansökningsperioden för denna annonsen har
Come to the cutting edge of quantum technology – join our multi-talented, collaborative team working toward the common goal of building a quantum computer. You will be part of the most exciting things happening in this field, such as the EU Flagship on Quantum Technology and the Wallenberg Centre for Quantum Technology.
Our ambitious goals at Chalmers are to build a quantum computer, and to apply it to real computational problems that cannot be efficiently solved on a conventional computer. Such computationally hard problems are found, e.g., in optimization, machine learning, quantum chemistry, materials science, etc.
These efforts are supported by the Wallenberg Centre for Quantum Technology (WACQT) and the OpenSuperQ project of the EU Flagship on Quantum Technology. Several industrial companies interested in applications of quantum computing are members of WACQT and OpenSuperQ, and they are developing relevant use cases in collaboration with us.
Building a quantum computer requires a multi-disciplinary effort between experimental and theoretical physicists, electrical and microwave engineers, computer scientists, software engineers, and researchers within materials science and nanotechnology. We are developing the superconducting quantum devices and control circuits, materials, firmware, and methods required to make the quantum computer reality. We work in close collaboration between the experimentalists in the Quantum Technology Laboratory (QTL), Quantum Device Physics Laboratory (QDP) and the theorists at the Applied Quantum Physics Laboratory (AQPL). The theorists help model the hardware as well as developing application use cases for the quantum processor. Our team currently has about 50 members – faculty, permanent research staff, post-doctoral researchers, PhD students, and master’s / undergraduate students – and is expanding.
Our department is host to the state-of-the-art MC2 Nanotechnology Laboratory cleanroom, and our measurement lab at QTL is well equipped with cryogenic and microwave electronic equipment. Our qubits are state of the art. We are in a position to build and operate a quantum processor!
We are looking for expertise in nanotechnology and fabrication of thin films and micro/nanoscale electronic devices, superconductor technology, and characterization techniques. Further desired skills include the design, modeling, and characterization of solid-state devices, electrical modeling and characterization techniques on microwaves and dc, numerical simulations, low-temperature techniques, noise processes, electromagnetism, solid-state and quantum physics, and cryogenic techniques. Knowledge of materials and surfaces is desirable.
• You have a PhD in Physics, Applied Physics, Nanotechnology, or equivalent
• Your verbal and written communication skills in English are very good
• You are motivated for a career in quantum technology, be it in academia or at an institute or company
• You have a collaborative attitude and an interest in working both independently and collaboratively in a team environment, sharing best practices and assuming responsibility. You are self-motivated, pay attention to detail, and possess a problem-solving analytical ability. You are willing to help supervise PhD students
Full-time temporary employment. The position is limited to a maximum of two years (1+1).
Chalmers continuously strives to be an attractive employer. Equality and diversity are substantial foundations in all activities at Chalmers.
Our offer to you
Chalmers offers a cultivating and inspiring working environment in the dynamic city of Gothenburg.
Read more about working at Chalmers and our benefits for employees.
The application should be marked with Ref 20200612 and written in English. The application should be sent electronically and be attached as pdf-files, as below:
CV: (Please name the document as: CV, Surname, Ref. number) including:
• CV, include complete list of publications
• Previous teaching and pedagogical experiences
• Two references that we can contact.
Personal letter: (Please name the document as: Personal letter, Family name, Ref. number)
1-3 pages where you:
• Introduce yourself
• Describe your previous research fields and main research results
• Describe your future goals and future research focus
• Attested copies of completed education, grades and other certificates.
Read more about the project and apply here.
Application deadline: 22 February, 2021
For questions, please contact:
Associate Professor Jonas Bylander, MC2/QT
Phone: +46 31 7725132
Professor Sergey Kubatkin, MC2/QDP
Phone: +46 31 7725475
*** Chalmers declines to consider all offers of further announcement publishing or other types of support for the recruiting process in connection with this position. *** | physics |
https://www.eurostyle-systems.fr/expertise-innovation/our-expertise/innovations-design/?lang=en | 2023-10-05T03:02:40 | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511717.69/warc/CC-MAIN-20231005012006-20231005042006-00479.warc.gz | 0.93871 | 153 | CC-MAIN-2023-40 | webtext-fineweb__CC-MAIN-2023-40__0__326632771 | en | Innovations & Design
Research and development
Creativity is nurtured in our company.
Our design department knows how to propose innovative solutions adapted to each of our customers.
Structured methods are applied to track progress on R&D projects and guarantee that new products and concepts become concrete realities.
Our products embody technical advances and design improvements that enhance comfort and perceived quality in new vehicles.
At our Tech Centre, we simulate the behaviour of parts and manufacturing processes and create virtual models before giving physical form to our products.
Since 1999, our design engineers have studied how our products respond to static and dynamic strain and crashes. In addition, our design partners assist us with any needs in the fields of acoustics and aerodynamics. | physics |
https://www.islington.gov.uk/energy-and-pollution/energy/bunhill-heat-network | 2020-01-19T12:56:15 | s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594603.8/warc/CC-MAIN-20200119122744-20200119150744-00352.warc.gz | 0.95045 | 439 | CC-MAIN-2020-05 | webtext-fineweb__CC-MAIN-2020-05__0__122103951 | en | The Bunhill heat network supplies cheaper, greener heat to over 800 homes in Bunhill ward, as well as Finsbury Leisure Centre, Ironmonger Row Baths and offices on Old Street. The network will grow organically as new buildings built in the area are connected.
Launched in November 2012, the heat network is fed by the local energy centre on Central Street which produces both electricity and heat in a combined heat and power plant. The energy centre uses the heat created from producing electricity to create hot water that is piped into people’s homes, making it more efficient than a normal power station, for which the heat is a waste product.
Phase 2 of the Bunhill Heat and Power network involves building a new energy centre at the top of Central Street, connecting the King’s Square Estate to the network and adding capacity to supply a further 1,000 homes.
The core of the new energy centre is a 1MW heat pump that will recycle the otherwise wasted heat from a ventilation shaft on the Northern Line of the London Underground network, and will transfer that heat into the hot water network. During the summer months, the system will be reversed to inject cool air into the tube tunnels.
Works and road closures
Colloide Engineering Systems Ltd were appointed by the council as its contractor for these works. They have installed pipework under Central Street, Moreland Street, Lever Street and President Street to allow the connection of the network to President House, Rahere House, Barnabas House and Macclesfield House.
Energy Centre 2 site
Part of the works around the Energy Centre was the completion of the new ventilation shaft by TfL. Colloide are now carrying out work on the new building adjacent to Kestrel House.
Access from Central Street on to City Road will remain closed while these works take place and the eastbound lane of Moreland Street between Pickard Street and Central Street will be closed until Spring 2019.
King’s Square Estate
Works will also be taking place on the King’s Square Estate until December 2019 to finalise the connection between the heat network and the buildings. | physics |
http://marcopolie.blogspot.com/2011/04/polar-twilights.html | 2019-08-23T23:24:52 | s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319082.81/warc/CC-MAIN-20190823214536-20190824000536-00409.warc.gz | 0.956894 | 779 | CC-MAIN-2019-35 | webtext-fineweb__CC-MAIN-2019-35__0__205810034 | en | Before we can really call it night we get to experience and enjoy civil twilight, then nautical twilight, and finally astronomical twilight.
Civil twilight starts at sunset and ends when the sun is 6 degrees below the horizon. At medium latitudes it lasts about a half hour. Here at the pole it lasts 13 days, until April 5. During civil twilight there is still plenty of light to move around outside. The brightest planets and stars will appear by the end of civil twilight. It has now been 10 days since sunset, and I have still been able to enjoy running and skiing outside without any impairment, except for the wind.
Nautical twilight is the period when the sun is 6 to 12 degrees below the horizon. At the end of nautical twilight the horizon is still visible, and most of the stars are visible, too. It is during this period that we turn on our cameras to observe and record the auroras. This period at the pole lasts 17 days, until April 22.
The last one, astronomical twilight, is defined by the sun being between 12 and 18 degrees below the horizon. The horizon is no longer visible. It does not get any darker after the end of astronomical twilight, so that is when the night really starts. Astronomical twilight will last 20 days, until May 12. At that point there will only be 6 weeks left to the winter solstice, so the polar night should really only last about 3 months.
I look forward to observing the slow evolution of lights, from twilight to the night sky, with the appearance of the stars and of the auroras. I look forward to the next moon rise, and to the lunar eclipse, which we will experience around mid-June. Meanwhile, I am enjoying civil twilight. On March 30 I took my camera with me on a 6.7-mile ski tour to document what the South Pole looks like at this very special time.
View towards the sun on March 30: 7 days after sunset. The sun is about 3 degrees below the horizon.
View of the station on March 30. There is still plenty of light to see the buildings and the features on the snow. On this day the wind was very light, less than 5 knots, and the smoke out of our power plant created a slender and compact plume slowly drifting away.
This is a view of our beautiful 10-mt telescope against the sky opposite the sun. While the sky was pinkish in the direction of the sun, it was of an intense blue in this direction. Note the crust of ice on the sides of the building, produced by ice fog and wind in the preceding days.
I skied all the way out to one of the three wind turbines that were installed last summer and are being tested to support remote science experiments. This one is located 1.5 miles away from the station, in the direction opposite to the skiway. The path to the wind turbine is well marked with flags.
On the way back from the wind turbine I met my friend Robert, who was walking back to the station from the telescope. He took this photo of me. With a wind of less than 5 knots (6 mph) the -63 C (-81 F) temperature felt very comfortable.
Loving it here at the South Pole!
Robert is an astronomer, or, I should rather say, a polar astronomer. He has already spent 6 winters here at the South Pole operating a variety of different microwave telescopes. When not on the ice, he lives in the lovely town of Garmisch, in the Bavarian Alps (the site of one of the most famous downhill courses in the world cup ski circuit), or teaches nature classes on board cruise ships around the world. He is also teaching an 11-hour college-level astronomy class here at the South Pole, every Monday after dinner. | physics |
https://equationtraining.com/increasing-power/ | 2023-01-30T12:15:22 | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499816.79/warc/CC-MAIN-20230130101912-20230130131912-00786.warc.gz | 0.967429 | 280 | CC-MAIN-2023-06 | webtext-fineweb__CC-MAIN-2023-06__0__118071353 | en | The ability to exert as much force as you can
Being More Powerful
Power is the ability to exert as much force as you can, in as short a time as possible. Strength determines the maximum force you can apply, but power is proportional to the speed at which you can exert this force. Fast, explosive movements (such as those used in lifting or playing sports) are based on how much power an individual can generate and at what speed.
Power is a very important facet of our fitness programmes; it is a crucial component in CrossFit, weightlifting and aids the retainment of strength and stronger joints in our later years! Power is a huge component in successful lifting; having the strength to lift the weight is key of course, but having power will increase the ability to express this strength. The key behind moves such as the ‘snatch’ and ‘clean and jerk’ is speed, so being powerful is of huge importance in weightlift training.
Power is also important in everyday life – being more powerful will make almost any everyday task easier. If you are a sportsperson or athlete, then power is a crucial attribute to help you excel in your chosen discipline. Having power will help you produce the maximum amount of force (either in running, throwing or lifting etc), so you’ll be able to outshine your competitors and smash your sporting goals. | physics |
https://intrestingspacefacts.neocities.org/saturn | 2023-02-01T12:42:39 | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499934.48/warc/CC-MAIN-20230201112816-20230201142816-00721.warc.gz | 0.934161 | 210 | CC-MAIN-2023-06 | webtext-fineweb__CC-MAIN-2023-06__0__238439921 | en | Saturn is the sixth planet from the sun and the second largest in the solar system. It is a gas giant made mostly of hydrogen and helium. Saturn has a diameter of almost 120,000 kilometers (75,000 miles), almost nine times that of Earth. It has a mass 95 times that of Earth.
Saturn is a very cloudy planet, with a deep atmosphere that is mostly hydrogen and helium. The temperature at the top of the clouds is about -270 degrees Celsius (-454 degrees Fahrenheit). The temperature at the bottom of the clouds is about -180 degrees Celsius (-292 degrees Fahrenheit).
Saturn has a very strong magnetic field. The field is about 10 times stronger than Earth's. It has a magnetic north and south pole, just like Earth. The field traps particles from the sun and creates a huge, beautiful ring around the planet.
Saturn has more than 60 moons. The largest moon is Titan. It is bigger than the planet Mercury.
Saturn was first discovered by Galileo Galilei in 1610. | physics |
https://customwoodshiftknobs.com/viva.htm | 2024-02-22T14:42:12 | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473819.62/warc/CC-MAIN-20240222125841-20240222155841-00488.warc.gz | 0.94559 | 1,443 | CC-MAIN-2024-10 | webtext-fineweb__CC-MAIN-2024-10__0__288429 | en | HOME : CUSTOM GEARSHIFT KNOBS SHOP
Shown in the diagram are the internal components of one of the most highly stressed units in the automotive power train. The job of the differential in a rear-drive automobile is threefold:
Rotary motion from the transmission turns the input shaft and pinion gear of the differential. This meshes with the larger ring gear multiplying torque and converting the motion at right angles to the input shaft. The one-sided contact between ring and pinion creates forces that tend to separate them. These forces are taken by the pinion bearings and the output bearings which support the ring gear carrier. The older long-neck differential had a long pinion shaft with the bearings more widely spaced. This made the unit more durable and was popular with racers.
The axis of rotation of the pinion shaft and the ring gear do not intersect; the gears mesh somewhat below the axis of rotation of the ring gear. This design allows cars to be designed lower, without using smaller wheels, and goes back to the 1930s. Because the gear surfaces actually travel in different directions as they mesh, they slide under load. These are known as hypoid gears and require special lubricants with high pressure additives. The heat generated by this action can raise differential temperature to around 200 degrees at highway speeds, much higher under racing conditions.
The ring gear is bolted to the gear carrier, which conveys motion to the two rear half axles by two spider gears that mount within the carrier on a large pin. Captive between the spider gears are the side gears, which are connected to the output flanges through the sides of the ring gear carrier.
When the car is going in a straight line, the spider gears do not rotate on the supporting pin. In a turn, they revolve on the pin to accommodate different rates of rotation of the rear wheels. The reaction forces on these four gears pushes them outward against the gear carrier and the spacer discs. Tremendous frictional forces are generated, as these gears are not supported by bearings. Excessive wear of the spacers and gear surfaces causes increased clearances between gears. The gears mesh less completely, and the diminished gear contact surfaces are subjected to more stress. Gear material can chip off and entire teeth can break. These large pieces of metal can become jammed between other gear surfaces causing CATASTROPHIC FAILURE. This is one way ring gears break.
It is unusual for a differential to simply wear out from use. The problem is damaging bits of metal loose inside. The ring and pinion gear and the bearings are large pieces of machinery, reminiscent of a locomotive. The weak points in a standard differential are the spider and side gears. The little flakes of metal that come from gear surfaces wind up in the lube oil and get into the bearings. Regular oil changes are necessary, but are not enough. Some abrasive material may always be present. My tip is as follows: cement a large magnet onto the outside of the lower part of the rear cover of the differential. A magnet from an old speaker will do; use RTV silicone and hold it in place with tape while the cement sets. Large and small metal pieces will be held out of circulation, prolonging the life of the unit and preventing catastrophic failure. The rear cover is aluminum and will not inhibit the effects of the magnet.
Differentials are made with the assumption that the differences in rates of rotation of the inner and outer wheels in a turn will not be very great. This is not the case in autocrossing, where lifting and spinning of the inside drive wheel is common. Spider gears are under the greatest stress during these maneuvers. The spinning wheel gaining sudden traction and jolting the small gears can be damaging, especially if there is wear and excessive clearance between the gears. Limited slip differentials minimize the relative speed differences of inner and outer wheels by transferring torque more equally by use of friction clutches. The clutches themselves, however, are subject to wear under these same conditions.
The bottom line is that a differential with excessive wear can deteriorate more rapidly than one in good adjustment to start with.
So what can you do? Change the fluid regularly. Try a synthetic lube, which is more heat resistant. Try the magnet trick. If there are unusual noises, investigate. Grab the output flanges and check for side play. Visible movement is related to spider gear wear. Raise a rear wheel and spin it. A regular clunking sound could be damaged gear teeth.
At this point, the unit should be removed from the car. At least take off the rear cover. Inspection of the gears can tell you if the ring and pinion are o.k.; if they are damaged, special tools will be needed to properly adjust spacing of the new gears. Time to call in a pro or go with a rebuilt unit. Damaged spider gears can be replaced as a set with no special tools. They cost about $150., and are the same for all 1600, 2002 and 320i. Careful spinning of each input flange and feeling for roughness can assess bearing condition. Unless the unit was very low on lube, overheated or submerged in water, the bearings might be fine. Remove the gear carrier, and keep the spacers in the same places when reinstalling. Removal of the ring gear is necessary to get at the spider gears. Reinstallation requires heating the ring gear to expand it slightly to allow the bolt holes to line up. Boiling water in an old pan is usually adequate. Use loctite on the bolts. I have looked into several differentials. Some old units were assembled with no locking compound on these bolts; there were always a few loose ones. One backed out and jammed between the ring gear and case causing gear breakage. Once I heard of a case actually blowing open. In the units with loctite, no bolts came loose.
The factory manual provides adequate guidance for installation of new spider gears. The spacers come in 0.1mm size increments. With an assortment in hand, it is not hard to get minimum clearance; if you put on too many, the gears won't go in. Back off one size, and you will be correct.
The magnet works. I have found broken gear teeth stuck to the inside of the rear cover, along with a large quantity of metal flakes. Be sure and clean out the inside of the case with solvent to remove all metallic particles.
The Torsen limited slip design is a big improvement over spider gears. Worm gears mesh and slide, depending on load, providing variable limited slip action, and are relatively indestructible. Made for BMWs by Quaife engineering, the differential insert costs about $1300. It replaces the gear carrier; you supply the differential. This is about what a complete rebuilt limited slip might cost; however, for the enthusiast who is determined to cause stress to his drivetrain on a regular basis, the increased durability could pay off.
Last Modified December 25, 2000 | physics |
https://global.etsracingfuels.com/blogs/blog/guide_to_racing_fuels_the_octane_factor | 2024-04-17T02:36:27 | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817128.7/warc/CC-MAIN-20240417013540-20240417043540-00269.warc.gz | 0.921108 | 1,533 | CC-MAIN-2024-18 | webtext-fineweb__CC-MAIN-2024-18__0__62394263 | en | Octane is one of the most commonly discussed, highly marketed properties of performance fuels. Fuel manufacturers tout their high-octane products. Octane boosters promise added horsepower. And Hollywood promotes “high-octane” thrills for the next installment of their latest street racing franchise. Yet for all the hype, octane remains one of the least understood aspects of gasoline. A fuel’s octane rating, so the marketers would have us believe, is directly related to its power. The higher the octane number, the more powerful the fuel. Simple, right?
As you’ll soon see, this is actually incorrect and it is this misconception that has fueled (sorry) the following myths about octane.
(Please note: The following assumes a basic understanding of how 4 stroke internal combustion engines work. If you need to brush up on your strokes, our upcoming primer — Performance in the Mist: the importance of fuel atomization in combustion engines —will help. Check back soon!)
Octane Myth #1:Octane is a measure of a fuel’s power
Oddly, the answer is no. Octane is not a measure of a fuel’s power. It is a measure of its burning behavior. More specifically, it’s likelihood to combust predictably when pressurized during an engine’s compression stroke. The *higher* the octane rating, the more predictably it will combust under pressure. The lowerthe octane of the fuel, the lesspredictably it will combust. “Wait, what?” I can hear you saying. We want explosive power for our engines, we are not looking just for ‘predictability’?” Yes and no. While we do indeed want explosive power from our fuel, power comes from many different aspects of fuel chemistry. For example: combustion engines need a strong explosion to produce strong power, but that explosion must occur at precisely the right time and uniformly to extract the most energy from it . That “right time” ideally should be when the spark plug fires. Low octane fuels are more likely to combust spontaneously during compression before the spark plug fires.This is called pre-ignitionor detonation and it produces a distinctive audible “knocking” sound from the engine. This is why you may have heard the term “engine knock” or “knock sensors”.
“OK. Fine. High octane is more predictable, but surely it also has to be more powerful than low-octane fuel?” Good question. On to myth #2!
Octane Myth #2:High octane fuels are more energetic than low octane fuels
As discussed in Myth #1, high octane fuels are actually more predictable than low octane fuels. And, dispelling Myth #2, high octane fuels are actually less “powerful” (less chemically energetic) than their lower octane counterparts. The octane rating is actually a measure of the ratio between a fuel’s octane (a compound callediso-octane) and another compound called heptane. So an octane rating of 93 indicates the fuels contains 93% iso-octane and 7% heptane. Heptane is more energetic but tends have less stable combustion under compression. Iso-octane is more stable and, therefor, less energetic.
“OK.” I hear you saying.”Then why does everyone tell us high octane fuels will give your engine more power then??” Myth #3, coming right up!
Octane Myth #3:high octane fuels will make my engine produce more power than low octane fuels
Since high octane fuels are actually lesschemically powerful than low-octane fuels, they do not directlyproduce more power when combusting during the power stroke. But...since they are less susceptible to spontaneously combusting during the compression phase, — and here’s the important part — an engine can take advantage of this and more highly compress the fuel/air mixture without the risk of the fuel detonating before the spark plug fires.It’s the higher compression inside the cylinder that produces more power in the engine, not the fuel itself. So octane doesn’t produce more power, but provides the conditions for an engine to take advantage of more aggressive tuning like higher compression ratios and advanced ignition timing. It is this tuning that produces more engine power. “OK. Fine” I hear you say. “So all I have to do to get more power out of my engine is use high-octane fuel.” Well, not necessarily. Myth #4 please!
Octane Myth #4:All engines will produce more power with higher octane fuel
This myth is where much of the confusion around octane is focused. Octane is marketed by fuel companies as giving all engines more power. But as we learned in Myth #3, only engines that have been tuned to take advantage of high-octane fuel will benefit from it. The good news is that manufacturers will indicate the recommended octane fuel to be used. If 91 octane is recommended, then the engine has been tuned to take advantage of the higher octane. “OK.” I hear you say yet again “So knocking will tell me if I should be using higher octane fuel.” I bet you know what’s coming next right? Yep! Myth #5!
Octane Myth #5:If my engine isn’t knocking on low-octane fuel, it won’t benefit from high octane fuel.
You might think so given what you’ve just read and you are absolutely correct if the engine has been optimized by the manufacturer for low-octane fuel . But this is where it gets a bit tricky, many modern, high-performance engine management systems employ “knock sensors” (see Myth 1). These sensors are used to detect pre-ignition (knock) and will adjust the engine (usually the ignition timing) to accommodate whatever octane fuel is in the tank. So if the manufacturer recommends 91 octane fuel, but you’ve just filled up with 87 octane it’s likely that the engine management system will retard the ignition timing down to where the engine won’t knock. These systems can also work the other way — advancing the ignition timing to take advantage of higher octane fuel - thus boosting performance. It’s important to note, however, that the performance gains from higher spark advance is not infinite. With all other parameters being equal, an engine that can adapt to various octane levels will probably find performance gains switching from 93 Octane to 100 Octane, but not from say 100 Octane to 110 Octane.
Hopefully, this helps clarify some of the confusion around octane. And while octane is an important element to reduce detonation in high performance fuel, it is not the only element.
There are many more properties that contribute to a fuel’s resistance to detonation—as well as overall performance for racing—such as vaporization speed, combustion speed, and cooling effect - many of which we’ll be covering in detail in later blog posts. Because at ETS Racing Fuels, we believe an educated customer is our best customer. Happy racing! | physics |
https://centervospi.ru/en/articles/distortion-of-signals-in-analog-fiber-optic-links-with-direct-and-external-intensity-modulation/ | 2023-11-28T23:08:03 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100016.39/warc/CC-MAIN-20231128214805-20231129004805-00004.warc.gz | 0.892845 | 2,576 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__82587807 | en | Distortion of Signals in Analog Fiber-Optic Links with Direct and External Intensity Modulation
V.V. Shcherbakov1, A.F. Solodkov1, A.A. Zadernovsky2
1) JSC “Center VOSPI” Moscow, Russia [email protected]
2) Department of physics, Moscow Technological University, MIREA, Moscow, Russia [email protected]
This work was partially supported by the Ministry of Education and Science of the Russian Federation.
Fiber-optic links are widely used in various areas of science and technology. In the field of telecommunications, digital data transmission systems are mostly employed. However, analog systems based on intensity modulation (IM) of the light produced by a semiconductor laser, transportation of the optical signal through an optical fiber and, finally, direct detection (DD) of the optical signal by a photodiode at the fiber output are in demand for a variety of applications.
There are two basic schemes for light intensity modulation, which compete with regard to simplicity and performance. The first scheme uses direct intensity modulation, which can be achieved by varying the drive current of the semiconductor laser. Unfortunately, this current variation not only modulates the laser light intensity, but also induces a parasitic optical frequency modulation, an effect referred to as laser chirp. The frequency chirp of the laser light can cause degradation and harmonic distortions of the transmitted signals after propagation in a dispersive optical fiber.
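In the standard small-signal description, the instantaneous optical frequency deviation of a directly modulated laser follows the output power P(t) through a transient and an adiabatic term,

$$ \Delta\nu(t) = \frac{\alpha}{4\pi}\left[\frac{d}{dt}\ln P(t) + k\,P(t)\right], $$

where α is the chirp parameter (Henry factor) and k is the adiabatic chirp coefficient; both parameters enter the analysis of the measured responses below.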
The second scheme was proposed as a solution to the chirp problem. It uses external intensity modulation of the laser light by a chirp-free electro-optic Mach-Zehnder modulator (MZM). However, the intensity modulation response of the MZM is highly nonlinear and exhibits intrinsic harmonic distortions of the output optical signal. After propagation in a dispersive fiber this signal can acquire additional harmonic distortions.
In this paper we examine performance characteristics of the IM/DD fiber-optic links with direct and external intensity modulation. We present experimental results on signal transmission in these systems and compare the harmonic distortions of signals.
It is worth noting that in the majority of studies of IM/DD fiber-optic links, either numerical analysis [1, 2] or a rather complicated mathematical apparatus is usually employed. This makes it difficult to obtain simple analytical expressions suitable for the engineering design of fiber-optic links. This paper is intended to fill this gap. We present a simple analytical expression for the frequencies of the signals with minimum or maximum power at the output of a fiber, as well as a simple analytical expression for the frequencies at which one can expect the minimum dispersive harmonic distortions.
A. Direct intensity modulation
The experimental setup is shown in Fig. 1. The measurement procedure is as follows. The single-frequency 1550 nm DFB InGaAsP laser NLKC5EBKA (2) is connected to the input of the u2t XPDV2150R photodetector (3) by a short length of standard single-mode optical fiber (4), about 1 m long, and the frequency range in which the modulation response exhibits no significant noise or sharp breaks (in our case 10 MHz – 35 GHz) is determined visually on the screen of the Agilent N5244A analyzer (1). In this frequency range the signal power is normalized to unity. Then the short fiber (4) is replaced by a coil of long optical fiber (5) and the output signal is displayed on the screen of the analyzer (1). Acting in a similar manner, we can obtain the intensity modulation response for several lengths of a single-mode fiber. The launched optical power, however, should not exceed 5–7 dBm because of the risk of stimulated Brillouin scattering and nonlinear distortions.
Fig. 2 shows the experimental curves of the output-to-input signal power ratio (expressed in dB) versus the modulation frequency fm for several coils of fiber of different length L, with dispersion coefficient D = 17 ps/(nm·km).
Fig. 1. Experimental setup for measuring the intensity modulation response.
Fig. 2. Output-to-input signal power ratio (expressed in dB) versus the modulation frequency for several lengths of a single-mode fiber.
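The shape of the curves in Fig. 2 can be reproduced qualitatively with the standard small-signal transfer function of a transient-chirp-dominated link, H(fm) = (1 + α²)^(1/2) cos(θ + arctan α) with θ = πλ²DLfm²/c, normalized so that H → 1 as fm → 0. The following is a sketch under that assumption; adiabatic chirp is neglected, and the fiber lengths are placeholders, since the actual coil lengths are not listed here:

```python
import numpy as np
import matplotlib.pyplot as plt

c, lam = 3e8, 1550e-9        # speed of light (m/s), carrier wavelength (m)
D = 17e-6                    # 17 ps/(nm km) expressed in s/m^2
alpha = 2.8                  # Henry (chirp) factor, value cited in the text

f = np.linspace(10e6, 35e9, 4000)            # modulation frequency, Hz
for L in (5e3, 10e3, 25e3):                  # fiber lengths, m (placeholders)
    theta = np.pi * lam**2 * D * L * f**2 / c
    H = np.sqrt(1 + alpha**2) * np.cos(theta + np.arctan(alpha))
    plt.plot(f / 1e9, 20 * np.log10(np.abs(H) + 1e-12),
             label=f"L = {L/1e3:.0f} km")

plt.xlabel("Modulation frequency, GHz")
plt.ylabel("Output/input signal power, dB")
plt.legend()
plt.show()
```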
B. Harmonic distortions
Nonlinear distortions manifest themselves in the generation of harmonics that are not present in the original signal. Figs. 3 and 4 show the observed relative power of the 2nd and 3rd harmonics, respectively (harmonic-to-carrier power ratio), versus the signal power. We use two different light sources. The first is a directly modulated single-frequency 1550 nm DFB InGaAsP laser (referred to in the figures as N43), built into the Agilent N4373A analyzer. The second is a laser with an external chirp-free MZM that is not stabilized at the quadrature operating point (referred to in the figures as L340). The 2.2 GHz modulation signal is generated by an Agilent E5071B, with a microwave filter used to suppress its intrinsic harmonic distortions.
The optical signal is fed either directly to the Agilent N4373A photodetector or through a coil of Corning fiber of 25266 m length (referred to in the figures as AD-1). The power of the harmonics in the output signal is measured by an Agilent E4404B spectrum analyzer in the segmented scanning mode.
A. Direct intensity modulation
Theoretical interpretation of the experimental results takes into account the frequency chirp of the directly modulated laser and the group velocity dispersion of electromagnetic waves in the optical fiber. The frequencies of the power extrema in Fig. 2 are found to be equal to

f_l = [c(lπ - 2 arctan α) / (2πλ²DL)]^(1/2), l = 1, 2, 3, ...,   (1)

where odd integers l give the minima, whereas even integers l give the maxima, λ is the carrier wavelength, c is the speed of light, α is the chirp parameter, also known as the Henry factor, and D is the dispersion coefficient of the fiber. The values of the signal power extrema are related to the laser-specific parameter k, which is referred to as the adiabatic chirp coefficient. Unfortunately, laser vendors do not specify the chirp parameters. Applying the data presented in [4] (α = 2.8 ± 0.2, k = (11.4 ± 0.5) s⁻¹·mW⁻¹), we come to a good agreement between our experimental and theoretical results.
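A worked evaluation of the extremum frequencies from Eq. (1) as reconstructed above, with the cited chirp parameter; the 25 km fiber length is an assumption chosen only for illustration:

```python
import numpy as np

c, lam, D, alpha = 3e8, 1550e-9, 17e-6, 2.8   # D = 17 ps/(nm km) in s/m^2
L = 25e3                                      # fiber length, m (assumed)

for l in range(1, 6):
    f_l = np.sqrt(c * (l * np.pi - 2 * np.arctan(alpha))
                  / (2 * np.pi * lam**2 * D * L))
    kind = "minimum" if l % 2 else "maximum"
    print(f"l = {l} ({kind}): f_l = {f_l / 1e9:.1f} GHz")
# first null near 5.7 GHz and first maximum near 13.4 GHz for these numbers
```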
B. Harmonic distortions of directly modulated signals
The relative powers (Figs. 3, 4) of the 2nd and 3rd harmonics of the directly modulated laser N43 (with chirp) are extremely small, below -55 dBc and -70 dBc, respectively. In practice, the laser operates as a linear electrical-to-optical converter. After transportation of the optical signal through the 25266 m fiber, the powers of the 2nd and 3rd harmonics increase significantly, by 34 dBc and 36 dBc, respectively, and this increase is independent of the signal power. The latter demonstrates the nature of dispersive harmonic distortions, which are not associated with the optical power. The primary reason for the harmonic distortions is the frequency chirp of the light produced by a directly modulated laser combined with the group velocity dispersion of electromagnetic waves in the fiber. In linear propagation through a fiber, the different spectral components of the chirped optical signal acquire different phase changes due to dispersion. This leads to the appearance of higher-order harmonics that are not present in the original signal. We have obtained a simple analytical expression for the particular frequencies

f_l = [lc / (λ²DL)]^(1/2), l = 1, 2, 3, ...,   (2)

i.e. the frequencies at which θ = lπ (see Eq. (3) below), at which there are no dispersive harmonic distortions for small modulation signals.
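The mechanism described above, chirp plus group velocity dispersion generating harmonics on detection, can be demonstrated with a short time-domain simulation. This is a sketch, not the measurement: the modulation index and mean power are assumed, and only transient chirp is modeled:

```python
import numpy as np

c, lam = 3e8, 1550e-9
D, L = 17e-6, 25266.0            # dispersion (s/m^2) and fiber length (m)
fm, alpha = 2.2e9, 2.8           # modulation frequency and Henry factor
m = 0.3                          # modulation index (assumed)

fs, n = 256 * fm, 4096
t = np.arange(n) / fs

# directly modulated power and the transient chirp it drags along
P = 1 + m * np.cos(2 * np.pi * fm * t)
dnu = (alpha / (4 * np.pi)) * np.gradient(P, t) / P     # instantaneous chirp, Hz
phi = 2 * np.pi * np.cumsum(dnu) / fs
E = np.sqrt(P) * np.exp(1j * phi)

# linear propagation: quadratic spectral phase from group velocity dispersion
beta2 = -D * lam**2 / (2 * np.pi * c)                   # ~ -21.7 ps^2/km
w = 2 * np.pi * np.fft.fftfreq(n, 1 / fs)
E_out = np.fft.ifft(np.fft.fft(E) * np.exp(0.5j * beta2 * w**2 * L))

# direct detection and harmonic-to-carrier ratios
S = np.abs(np.fft.rfft(np.abs(E_out)**2 * np.hanning(n)))**2
k = int(round(fm / (fs / n)))                           # bin of the fundamental
for h in (2, 3):
    print(f"H{h}: {10 * np.log10(S[h * k] / S[k]):.1f} dBc")
```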
C. Harmonic distortions of externally modulated signals
When the L340 laser source with an external chirp-free MZM is used, inserting the 25266 m fiber coil into the optical circuit leaves the relative powers of the higher-order harmonics shown in Figs. 3 and 4 practically unchanged. Theoretical interpretation of this result takes into account the intrinsic nonlinearity of the MZM.
The original modulation signal applied to the MZM is a single-tone electrical voltage. It is subjected to nonlinear distortion already at the stage of electrical-to-optical conversion. Although the choice of the quadrature bias eliminates the contribution of all even harmonics, and restricting the modulation voltage amplitude significantly reduces the contribution of higher-order odd harmonics, the nonlinear distortions caused by the operation of the MZM are fundamentally unavoidable.
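A small numerical sketch of the standard MZM power transfer, P proportional to [1 + cos(πV/Vπ + bias)]/2, makes both points: at the quadrature bias the even harmonics vanish, while off quadrature (as for the unstabilized L340 source) a second harmonic appears. The drive amplitude, expressed as a fraction of Vπ, is an assumption:

```python
import numpy as np

fm = 2.2e9
fs, n = 256 * fm, 4096
t = np.arange(n) / fs

def mzm_harmonics(bias_rad, m=0.2):
    """Harmonic-to-carrier ratios (dBc) of the MZM output intensity,
    for a single-tone drive of amplitude m*Vpi."""
    P = 0.5 * (1 + np.cos(np.pi * m * np.cos(2 * np.pi * fm * t) + bias_rad))
    S = np.abs(np.fft.rfft(P * np.hanning(n)))**2
    k = int(round(fm / (fs / n)))
    return [round(10 * np.log10(S[h * k] / S[k]), 1) for h in (2, 3)]

print("quadrature bias    :", mzm_harmonics(np.pi / 2))        # even harmonics vanish
print("off-quadrature bias:", mzm_harmonics(np.pi / 2 + 0.3))  # 2nd harmonic appears
```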
After transportation through a dispersive fiber, the optical signal can acquire additional harmonic distortions. The photocurrent detected at the fiber output contains both even and odd harmonics of the modulation frequency despite the fact that at the quadrature operating point of MZM the fiber input optical signal contains no even harmonics. The signal output-to-input power ratio is determined by the fiber transportation parameter
θ = πc(fm/f0)²DL,   (3)
where f0 is the carrier frequency. At the modulation frequencies determined from the condition |cos θ| = 1, which yields θ = lπ with l = 0, 1, 2, 3, ..., the modulus of the transfer function reaches its maximum value of unity; thus, the optical powers of the fundamental and higher-order harmonics in the input and output signals become equal to each other (neglecting attenuation in the fiber).
In our experiment we used a chirp-free MZM that is not stabilized at the quadrature operating point, and therefore we observe both even and odd harmonics at the input to the fiber. At the 2.2 GHz modulation frequency we have |cos θ| ≈ 1, and thus the powers of the second and third harmonics after propagation through the fiber are close to their powers at the fiber input. The harmonic distortions of the output signal in this case are entirely determined by the nonlinear properties of the MZM.
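A quick arithmetic check of Eq. (3) at the experimental operating point confirms this claim:

```python
import numpy as np

c, lam = 3e8, 1550e-9
f0 = c / lam                          # carrier frequency, Hz
D, L, fm = 17e-6, 25266.0, 2.2e9      # values from the text

theta = np.pi * c * (fm / f0)**2 * D * L          # Eq. (3)
print(f"theta = {theta:.3f} rad, |cos theta| = {abs(np.cos(theta)):.4f}")
# theta ~ 0.052 rad, |cos theta| ~ 0.9986: at 2.2 GHz the fiber transfers
# the harmonics almost unchanged, as stated above.
```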
We have presented experimental and theoretical results on the transmission of signals in analog IM/DD fiber-optic links with direct or external intensity modulation. We have experimentally obtained the intensity modulation response of directly modulated IM/DD fiber-optic links for several fiber lengths. We have obtained a simple analytical expression for the frequencies of the signals with minimum or maximum power at the fiber output. We have also specified the operating conditions of directly or externally modulated IM/DD fiber-optic links that ensure minimum harmonic distortion of the transmitted signals. The results are useful for the engineering design of fiber-optic links.
Fig. 3. Relative power of the 2nd harmonic versus the signal power.
Fig. 4. Relative power of the 3rd harmonic versus the signal power.
[1] G.J. Meslener, "Chromatic dispersion induced distortion of modulated monochromatic light employing direct detection," IEEE J. Quantum Electron., vol. 20, pp. 1208-1216, 1984.
[2] A. Hilt, E. Udvary, T. Berceli, "Harmonic distortion in dispersive fiber-optical transmission of microwave signals," Proceedings of the International Topical Meeting on Microwave Photonics, pp. 151-154, 2003.
[3] E. Peral, A. Yariv, "Large-signal theory of the effect of dispersive propagation on the intensity modulation response of semiconductor lasers," Journal of Lightwave Technology, vol. 18, pp. 84-89, 2000.
[4] A. Villafranca, J. Lasobras, I. Garcés, "Precise characterization of the frequency chirp in directly modulated DFB lasers," Proceedings of the 6th Spanish Conference on Electronic Devices, pp. 173-176, 2007.
http://cdw.bathroomshowercurtains.info/building-surveying-dissertation-subjects.html | 2018-01-17T12:57:13 | s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886939.10/warc/CC-MAIN-20180117122304-20180117142304-00129.warc.gz | 0.955319 | 124 | CC-MAIN-2018-05 | webtext-fineweb__CC-MAIN-2018-05__0__102106386 | en | The simplest method for measuring height is with an altimeter, which uses air pressure to find height. When more precise measurements are needed, methods such as precise levels (also known as differential leveling) are used. In precise leveling, a series of measurements between two points is taken using an instrument and a measuring rod. Differences in height between the measurements are added and subtracted in a series to get the net difference in elevation between the two endpoints. With the Global Positioning System (GPS), elevation can be measured with satellite receivers. GPS is usually somewhat less accurate than traditional precise leveling, but may be comparable over long distances. | physics
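A minimal sketch of the differential-leveling bookkeeping described above (the rod readings are made-up numbers):

```python
def net_elevation_change(backsights, foresights):
    """Differential leveling: the elevation change between two endpoints is
    the sum of backsight rod readings minus the sum of foresights (meters)."""
    return sum(backsights) - sum(foresights)

# three instrument setups carried between two benchmarks
print(net_elevation_change([1.52, 1.37, 1.88], [0.94, 1.61, 1.05]))  # +1.17 m
```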
https://speedandsmarts.com/toolbox/articles2/smallboat-sailing/sailing-downwind | 2023-12-02T01:52:11 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100309.57/warc/CC-MAIN-20231202010506-20231202040506-00190.warc.gz | 0.958545 | 3,799 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__40654377 | en | Upwind sailing requires a bit of precision. You have to keep your sails and boat on edge in order to make your way to windward. When you turn downwind, however, you can really cut loose!
Sails are eased as the boat heads off and the wind and waves send you off on what can be one of the most exhilarating rides of your life. Your local amusement park has nothing as thrilling as planing across the water and surfing down the face of a big wave. Or as relaxing as the thought of being lazily pushed along by the wind on a sunny light-air day.
"Downwind" sailing is a broad and inclusive term. Generally, any point of sail not close-hauled is considered to be "downwind". This includes close reaching, beam reaching, broad reaching and running. Reaching is going in a direction across the wind, while running is truly going with the wind.
A general rule of thumb in sailing downwind is that the more you head off away from the wind, the more you let your sails out. On a run, your sails should be eased as far as possible so their maximum area is exposed to the wind. Notable exceptions to this are catamarans and iceboats. When these boats bear off onto a reach, they accelerate quickly and build up their apparent wind. This makes them go even faster, which further increases their apparent wind and moves it forward. Even though the true wind is coming from behind, the boat is moving so quickly that it feels like the wind is coming from the bow. Therefore, the sail(s) must be trimmed in tightly. Like any boat, however, an iceboat begins to slow down when it heads too far away from the wind. This is because the apparent wind and true wind begin to work against each other, which reduces the apparent wind.
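The apparent-wind effect described here is just vector addition of the true wind and the headwind created by the boat's own motion. A rough numerical sketch, with made-up speeds and angles:

```python
import numpy as np

def apparent_wind(true_kt, true_angle_deg, boat_kt):
    """Apparent wind speed and angle off the bow (0 deg = dead ahead),
    treating both winds as 'coming-from' vectors in the boat's frame."""
    a = np.radians(true_angle_deg)
    x = true_kt * np.cos(a) + boat_kt     # component along the heading
    y = true_kt * np.sin(a)               # component across the heading
    return np.hypot(x, y), np.degrees(np.arctan2(y, x))

# 10 kt of true wind from 120 deg off the bow, a catamaran doing 15 kt:
print(apparent_wind(10, 120, 15))   # ~13 kt apparent, swung forward to ~41 deg
```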
One of the major differences between upwind and downwind sailing involves how you steer the boat and trim your sails. Going upwind you generally pull the sails all the way in and then use them as a guide to steer the boat. When sailing downwind, however, you usually aim the boat straight for where you want to go, and then use this heading as a guide for how to trim your sails.
One of the easiest ways to steer downwind is to aim for an object such as a buoy or a point on shore. Be sure it is fixed, though. I can remember sailing in one overnight race where I was steering for what I thought was a light on shore. After a while, I discovered we were way off course. It turns out we had been following a slow-moving barge!
A more accurate means to steer is by using a compass. Aim your boat in the direction you want to go and note the compass heading. Then hold this course on the compass. In the situation described above, I could have known that my "fixed" object was moving if I had been watching the compass.
Another way to determine a compass course is to use a navigational chart. In fact, this is the only way to do it when you can't see your destination. The only thing I don't like about using a compass is that you have to stare at numbers. I'd much rather look around all the time.
Trimming Your Sails
Once you're steering a course you like, you must trim your sails accordingly. Ease the sails out as far as they will go until they just begin to luff along the forward edge; then trim them in slightly.
Since it is very difficult to steer a perfectly straight course, and the wind is usually shifting in direction and/or velocity, you must constantly adjust the sails in order to keep your boat performing optimally. Keep the sheets in your hands so you can trim or ease when necessary. Telltales on your shrouds or a wind pennant on the top of your mast will let you know the apparent wind direction and can tip you off to any changes that would require an adjustment in sail trim.
Another good sail trim guide are the telltales at the forward part (luff) of your sails. You can use the flow of these telltales to figure out how far to ease your sheets. For example, if the windward telltales are dancing (moving around), the sail is probably luffing slightly and should be trimmed. If the leeward telltales are dancing, the sail is stalled and needs to be eased. Ideally both telltales should flow straight back (or the windward telltales should be lifting slightly).
There are several other things you should consider when sailing downwind:
Centerboard -- Upwind the centerboard (or daggerboard) keeps the boat from going sideways and develops lift, which helps the boat move forward. As you head downwind, however, the board is much less critical because you are heading more in the direction that the wind is trying to push you. Therefore you can gradually raise the centerboard as you head off away from the wind. This reduces the drag caused by pushing the board through the water, which allows you to sail faster.
In general, you want to raise the centerboard just a bit on a tight reach, one-third of the way up on a beam reach, one-half on a broad reach, and three-quarters on a run. If your boat gets tippy, however, lower the board a bit for increased lateral stability. Be sure to lower the centerboard all the way before you turn back upwind.
Weight Placement -- The ideal fore-and-aft position of the skipper and crew varies according to wind and wave conditions. In light air, move your weight forward to keep the stern from dragging in the water (which slows you down). As it gets choppier, move back far enough to keep the bow from plowing into waves.
You also want to move aft as the wind increases. This will give you more stability because the aft sections of most hulls are flatter and therefore less tippy than the forward sections. In planing conditions, move your weight even farther toward the stern so the bow will lift up.
Your athwartships weight placement should also vary with the conditions. When sailing downwind, you want to have the helm balanced so the boat is steering straight. This minimizes drag on the rudder. It is best to let the crew sit in a comfortable position and then have the skipper move so the boat has a neutral, or balanced, helm. (The helm is said to be neutral or balanced when you can let go of the tiller and the boat continues in a straight line). Then by leaning in or out slightly, the skipper can steer the boat and at the same time feel the helm changes in the tiller. In general, it's good to keep the boat flat when going downwind.
When you have a windy tight reach, all crew weight will need to be hiking out on the windward side. On a broad reach, the crew may be to leeward and the skipper to windward. When on a run, you may even have to heel the boat a bit to windward to achieve a balanced helm. Athwartship weight placement also affects the boat's lateral stability. That is, the closer your weight is grouped near the centerline of the boat, the easier it is for the boat to roll. So spread as far outboard as possible whenever you need stability.
Spinnaker -- This is a large, full, colorful sail that you can set to improve your speed on a beam reach, broad reach or run. Handling a spinnaker takes a bit of practice.
Sailing by-the-lee -- Pretend you are sailing on a run and you head off even further so the wind is coming over your leeward stern quarter. This is called sailing "by-the-lee." It can be a dangerous situation because the wind may fill on the back side of the mainsail and cause an unexpected jibe, sending the boom flying across the boat with extreme force. This has not only caused many headaches, but has even killed some big boat sailors over the years.
There are several ways to tell when you are sailing by-the-lee: 1) You'll feel the wind coming from your leeward side (and you'll see it coming from this direction on the telltales and/or masthead fly); 2) the leech on your mainsail will start to flop back and forth; and 3) there will be very little pressure on your mainsheet. If you find yourself by-the-lee, head the boat up toward the wind, or jibe.
Wing and wing -- When you are sailing on a very broad reach or a run and you don't have a spinnaker, you can gain speed by "winging" your jib or genoa to the windward side. When the jib starts to collapse in the main's wind shadow, try pulling it over to the windward side to catch the wind.
On smaller boats, you can usually use your arm to hold the jib sheet out far enough to windward to fill the sail. But on bigger boats, you'll need something longer to hold the jib out far enough. Try using your spinnaker pole or a specially designed whisker pole (standard on racing boats like the Snipe or Star). Attach the end of the pole to the clew of your jib or genoa, and attach the other end of the pole to the mast. Then use the windward jib sheet to trim the sail.
You'll find there is a relatively small apparent wind angle where a winged jib will work effectively. If you head off too far, you'll go by the lee. If you head up too far, the leech of the jib will fold back on itself. But when you can make winging work, your boat will fly.
When it's windy, your boat can heel going downwind as well as upwind. Close and beam reaching are the most overpowering because the wind is blowing directly across your boat. There are several ways to control heel downwind. The most exhilarating of these is to use all of your body weight as leverage by hiking out to windward. Sometimes this will be enough by itself to keep the boat flat.
If you are already hiking and the boat is still heeling too much (i.e. you're "overpowered"), ease the sails so they luff slightly. This spills some of the wind and depowers the boat. Even though you are wasting wind power, the most important thing is to keep the boat generally flat. In puffy conditions, be sure to keep the mainsheet in your hand so you can ease the sail quickly whenever you get a puff.
There is one other good way to stop a boat from heeling. You can flatten a boat upwind by heading up toward the wind (called pinching or feathering). When you're going downwind, the way to reduce heeling is to head away from the wind. This lessens the sideways forces on the boat. Note that this is the exact opposite of sailing upwind.
Sometimes, when you are on a breezy run, your boat will start to roll back and forth until it seems a little out of control. There are several ways to minimize this. One that we already mentioned is to put the skipper and crew on opposite sides of the boat and use your weight to counter the rolling. At the same time, steer in the direction of the rolls so you keep the boat under the sails as much as possible. When the boat heels to windward, head up; when it heels to leeward, head off. Another way to prevent rolling is to overtrim your sails slightly and put more tension on the boom vang. You can also head up onto more of a broad reach. All of these should steady the boat.
Jibing (or gybing) is to downwind sailing what tacking is to sailing upwind. You still change from one tack to the other, but now your stern passes through the wind rather than the bow. The sails also change sides but since they are eased out for a broad reach or a run, they come across very quickly and with a great deal of force as the wind fills on their back side. Watch your heads when the sail crosses the boat!
The reason to jibe is usually that the other tack offers a faster course to your destination. Sometimes it may simply be that the other tack offers more sunshine. In racing you may have to jibe around a buoy as part of the course. Some boats, like iceboats and catamarans, or most boats in very light wind, go very slowly when they sail on a dead run. In order to get downwind, they jibe back and forth, maintaining their speed from reach to reach. This is called "tacking downwind" -- it's a lot like the zig-zagging you have to do to get to an upwind destination.
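The logic of tacking downwind is easy to check with a velocity-made-good (VMG) calculation; the boat speeds below are invented for illustration:

```python
import math

def downwind_vmg(boat_speed_kt, degrees_off_dead_downwind):
    """Speed made good toward a mark that lies dead downwind."""
    return boat_speed_kt * math.cos(math.radians(degrees_off_dead_downwind))

print(downwind_vmg(4, 0))    # 4.0 kt slogging straight downwind
print(downwind_vmg(7, 40))   # ~5.4 kt toward the mark while reaching and jibing
```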
Before you jibe, check the position of the centerboard. It should be lowered most of the way to help stabilize the boat during the jibe. A contradiction occurs in windy conditions when you might think that lowering the centerboard all the way would provide more stability. In fact, the boat can "trip" over the extended board and cause a capsize. In this case, keep the board up about one-third of the way.
Begin your jibe by heading off away from the wind. The skipper should grab the mainsheet between the ratchet block and the boom. Right at the moment in your turn when you feel the pressure in the mainsail get soft, pull the sail in quickly toward the center of the boat; then let it go across the boat as the wind fills on the other side. In light air you can easily pull the sail across at any time; in heavy wind, however, you may need your crew to help throw the boom over.
The steering during a medium to heavy wind jibe is crucial. When the sail comes flying across, all the force of the wind is now trying to push the boat over to the new leeward side. This effect is compounded by the centrifugal force of the turn, and the result is often a capsize. To avoid this fate we must steer an "S" course. Begin the "S" by turning into your jibe. Just as the boom crosses the boat, turn slightly the other way so the boat is now aiming back under the sails. This keeps the boat from heeling too much right after the jibe. (Make sure the boom is crossing before you steer the other way, or the boom may not come across.) Once you are stable on the new jibe, head up to whatever course you choose.
During a jibe, the skipper crosses the boat facing forward, exchanging the mainsheet and hiking stick behind his or her back. The crew also faces forward and is responsible for getting the jib onto the new tack as well as making any gross weight adjustments needed to keep the boat level.
Since jibing is one of the most likely times for a capsize, it is smart to avoid jibing in heavy air until you gain more experience. In puffy winds, time your jibe so it takes place in a lull. If you are planing or surfing, however, the opposite is true. It is best to jibe when the boat is going as fast as possible so there is less pressure on the sails. This way they will come across most easily.
Planing and Surfing
For many people, planing and surfing is the most fun part of sailing. Only lighter displacement boats with relatively flat hulls will plane, but you can get most boats to surf.
Planing occurs when a boat is going fast enough to lift up out of her own bow wave and skim across the water. If you think there's enough wind to get on a plane, bear off onto a beam or broad reach. Move your weight aft and hike out so the boat is flat with a balanced helm. If the boat doesn't take off on its own, try a few quick and vigorous pumps on the mainsheet.
Surfing is just like riding a surfboard; instead of paddling with your arms to "catch a wave", however, you use your sails and weight to catch a ride. The idea is, like a surfer, to ride down the wave faces. Head up toward the wind to build speed and when you see a wave trough right in front of your bow, bear off into it. Once you get on the wave, ride it for all it's worth. If you start catching up to the next wave, turn slightly so you avoid plowing into it. When you start to slow down and feel like you'll lose the ride, head back up and accelerate again.
Pumping and ooching will often help break your boat onto a plane or surf. (Besides that, they're fun and good exercise!) Pumping is rapid trimming and releasing of a sail. It effectively increases the apparent wind on the sail during the pump and can give you the burst of speed necessary to plane or catch a wave. Ooching is sudden forward and aft body movement and is very effective in initiating a surf. Just as your boat is starting to go down a wave, ooch forward sharply. At the same time give the sail a sharp pump or two (I like to grab the mainsheet straight from the boom to make pulling easier), and off you go. When racing, there are rules that limit when and how often you can pump or ooch, but when you're not racing, you can pump and ooch to your heart's content.
When you first plane or surf, the extra speed may seem a bit scary. You should remember, however, that the faster a boat goes the more stable it will be. So enjoy yourself and your new-found speed. When you get a puff, remember to bear off under the sails and keep the boat flat. This will get you onto an even faster plane. To stop planing or surfing, slowly head up and luff your sails until your boatspeed drops. Put your centerboard down, trim in the sails and you are ready to go back upwind. | physics |
https://www.hubjub.co.uk/gear-chart-21-w.asp | 2023-12-03T13:29:54 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100508.23/warc/CC-MAIN-20231203125921-20231203155921-00726.warc.gz | 0.751931 | 269 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__30868725 | en | This simple gearchart is intended to help you figure your way around Hubjub's component range.
It covers all of our stock chainring and sprocket sizes. The figures are presented in the form of gear inches, which is the measurement that you will most often see quoted in newsgroup discussions.
If you're unfamiliar with the concept, it refers to the size of wheel you would be riding if you were on a penny farthing or high wheeler. Honest!
If you think this is a bit archaic, check out Sheldon Brown.
Figuring gears is not an exact science. This is because the diameter of your rear wheel is affected by your tyre, an inherently variable component. For what it's worth, we reckoned that a 700c road wheel is 27" in diameter, a 29-inch offroad wheel is 28" (measure one if you don't believe us) and a 26" MTB wheel really is 26"--which it is, if you ride two point something tyres inflated fairly hard.
If you want to do your own calculations, divide the number of teeth on your chainwheel by the number on your sprocket, and multiply it by the actual diameter of your wheel. If you habitually run low pressures, you might want to knock a bit off that diameter to compensate. | physics |
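The calculation in the paragraph above, as a one-liner (using the 27" effective diameter suggested for a 700c road wheel; the 46x17 gearing is just an example):

```python
def gear_inches(chainring_teeth, sprocket_teeth, wheel_diameter_in):
    """Gear inches: the drive-wheel diameter of the equivalent high-wheeler."""
    return chainring_teeth / sprocket_teeth * wheel_diameter_in

print(round(gear_inches(46, 17, 27), 1))   # a 46x17 fixed gear on 700c: ~73.1"
```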
https://sahanaengineering.com/magnetic-level-indicator.html | 2021-06-14T06:33:26 | s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487611445.13/warc/CC-MAIN-20210614043833-20210614073833-00073.warc.gz | 0.830719 | 353 | CC-MAIN-2021-25 | webtext-fineweb__CC-MAIN-2021-25__0__28114279 | en | Magnetic Level Indicator provides clear, high clarity indication of liquid level. Typically, Magnetic Level Gauge consists of three major components: Float Chamber, Float and Indicator System. Magnetic Level Indicator is principally designed as an alternative to glass level gauges. We offer Magnetic Level Gauge having top-bottom, top and side mounted construction with two types of indicator system i.e. Capsule and Bi-colour Rollers. Magnetic level indicators are tailor made to various types of construction and sizes.
A Magnetic Level Gauge operates on the principle of magnetic field coupling to provide fluid level information. The float chamber is typically constructed from non-magnetic pipe with process connections that match the vessel connections. Float size and weight are determined by the process fluid, pressure, temperature and the specific gravity of the process fluid. The float contains magnets that provide a 360° magnetic flux field.
Mounting Type: Top / Side
Indicator System: Bi-colour Roller / Follower Capsule
Housing for Indicator: Bi-colour Roller - SS housing; Follower Capsule - glass tube in Aluminium / SS housing
MOC of Indicator: SS 304
Float Chamber: 50NB non-magnetic standard pipe
MOC of Float Chamber: SS 304, SS 316, SS 316L, PP, PVDF, others on request
Scale: Aluminium / SS / Acrylic, engraved in mm
Process Connection: Flanged, in various sizes
Vent: ½" Plugged / ½" Ball Valve
Drain: ½" Plugged / ½" Ball Valve
https://tanglab.bg.ic.ac.uk/flow-and-wall-shear-stress-mapping/ | 2023-09-23T16:54:35 | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506528.19/warc/CC-MAIN-20230923162848-20230923192848-00181.warc.gz | 0.911376 | 467 | CC-MAIN-2023-40 | webtext-fineweb__CC-MAIN-2023-40__0__58671591 | en | Chee Hau Leow, Meng-Xing Tang, “Spatio-Temporal Flow and Wall Shear Stress Mapping Based on Incoherent Ensemble-Correlation of Ultrafast Contrast Enhanced Ultrasound Images”
Ultrasound in Medicine & Biology
Open Access funded by Engineering and Physical Sciences Research Council
In this study, a technique for high-frame-rate ultrasound imaging velocimetry (UIV) is extended, first, to provide more robust quantitative flow velocity mapping using ensemble correlation of images without coherent compounding and, second, to generate spatio-temporal wall shear stress (WSS) distributions. A simulation model, which couples an ultrasound simulator with an analytical flow solution, was implemented to evaluate its accuracy. It is shown that the proposed approach can reduce errors in velocity estimation by up to 10-fold in comparison with the coherent correlation approach. Mean errors (ME) of 3.2% and 8.6% were estimated under a steady flow condition, while 3.0% and 10.6% were found under a pulsatile condition for the velocity and wall shear rate (WSR) measurements, respectively. Appropriate filter parameters were selected to constrain the velocity profiles before WSR estimation, and the effects of incorrect wall tracking were quantified under a controlled environment. Although accurate wall tracking is found to be critical in WSR measurement (a 200 µm deviation from the wall may yield up to a 60% error), this can be mitigated by HFR imaging (of up to 10 kHz) with contrast agents, which allows improved differentiation of the wall-fluid boundaries. In vitro investigations of two carotid bifurcation phantoms, normal and diseased, were conducted, and their relative differences in flow patterns and WSR distribution were demonstrated. It is shown that the high-frame-rate UIV technique can be a non-invasive tool for quantitative measurement of the spatio-temporal velocity and WSS distributions.
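The WSR estimate rests on the near-wall velocity gradient. This is a generic sketch of that idea, not the authors' algorithm: fit the first few velocity samples off the wall and take the slope; the viscosity value and the synthetic profile are assumptions:

```python
import numpy as np

MU = 3.5e-3   # dynamic viscosity of blood, Pa*s (assumed)

def wall_shear(y_mm, v_mms, n_fit=4):
    """Wall shear rate (1/s) and wall shear stress (Pa) from a linear fit
    of the first n_fit velocity samples next to the wall."""
    y = np.asarray(y_mm[:n_fit]) * 1e-3     # mm -> m
    v = np.asarray(v_mms[:n_fit]) * 1e-3    # mm/s -> m/s
    wsr = np.polyfit(y, v, 1)[0]            # dv/dy at the wall
    return wsr, MU * wsr

# synthetic parabolic profile: 3 mm vessel radius, 500 mm/s centerline velocity
y = np.linspace(0.1, 3.0, 15)                    # distance from wall, mm
v = 500 * (1 - ((3 - y) / 3) ** 2)               # mm/s
print(wall_shear(y, v))   # ~310 1/s and ~1.1 Pa (exact wall value is 333 1/s)
```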
Flow measurement; Wall shear rate; Ultrafast ultrasound imaging; Motion effect; Microbubble contrast agents; Contrast enhanced ultrasound; Image tracking | physics |