https://modular.sebsongs.com/st-mixer-xl-build-instructions/
2023-12-02T18:39:15
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100448.65/warc/CC-MAIN-20231202172159-20231202202159-00552.warc.gz
0.814835
1,171
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__115526622
en
ST MIXER XL | BUILD INSTRUCTIONS

This is an intermediate-level build with quite a few surface-mount components. Don't do this as your first surface-mount project, as it is quite large and time consuming. However, if you have done some SMD work before, this build should be very straightforward.

Do this before building this module: check that you have all components, and gather all the tools needed (see lists below).

The tools needed for this build are:
- Soldering station or soldering iron.
- High-quality solder (lead-free recommended).
- Angled tweezers for surface mounting.
- PCB holder (makes life much easier).
- Knurled nut driver tool (for tightening jack socket nuts).
- 10 mm hex socket covered in masking tape (for tightening potentiometer nuts).

Got everything? Let's get on with it!

1. ICs
Solder all TL074 operational amplifiers (U1, U2, U3). Make sure to check the orientation twice before soldering! See image for reference.
Solder the two NE5532 operational amplifiers (U4, U5). Make sure to check the orientation twice before soldering! See image for reference.

2. Diodes
Solder the 1N5819 Schottky diodes (D1, D2). Make sure to check the polarity twice before soldering! See image for reference.

3. Resistors
Solder the 270R resistors (R53, R54).
Solder the 1K resistors (R35, R38, R55, R56).
Solder the 10K resistors (R36, R43, R44, R51, R52).
Solder the 20K resistors (R9, R10, R11, R12, R13, R14, R15, R16, R17, R18, R19, R20, R21, R22, R23, R24, R45, R46).
Solder the 39K resistors (R39, R40).
Solder the 51K resistors (R25, R26, R27, R28, R30, R31, R32, R33, R37).
Solder the 100K resistors (R1, R2, R3, R4, R5, R6, R7, R8, R29, R34, R47, R48, R49, R50).

4. Capacitors
Solder the 22pF ceramic capacitors (C9, C11, C12, C21, C22, C23, C24).
Solder the first eight 100nF ceramic capacitors (C1, C2, C3, C4, C5, C6, C7, C8).
Flip the board over and solder the rest of the 100nF ceramic capacitors (C15, C16, C17, C18, C19, C20, C25, C26, C27, C28).
Solder the 10uF electrolytic capacitors (C13, C14). Make sure to check the polarity twice before soldering! See image for reference.
Solder the 33uF electrolytic capacitors (C10, C29). Make sure to check the polarity twice before soldering! See image for reference.
Solder the large 220uF electrolytic capacitor (C30). Make sure to check the polarity twice before soldering! See image for reference.

5. Power header
Solder the 1×5 pin power header.

6. Jack sockets and potentiometers
Place the A100K potentiometers (VR1, VR2, VR3, VR4, VR11, VR12, VR13, VR14), the dual A100K potentiometers (VR5, VR6, VR15, VR16) and the B50K potentiometers (VR7, VR8, VR9, VR10). Also place the black mono jack sockets (J1, J2, J3, J4, J5, J6, J7, J8, J9, J10) and the green stereo jack sockets (J11, J12). Do not solder them yet!
Attach the front panel, place the washers for the potentiometers and all the nuts for both potentiometers and jack sockets. Hand-tighten all the nuts. Make sure everything is straight and correctly seated.
Solder all the joints for the potentiometers and jack sockets. Take care not to damage any components when soldering! Take extra care between the bottom electrolytic capacitors, as these are quite a tight fit. If you have a thinner solder tip, use that in the tight spots, or angle your soldering tip to make it as narrow as possible.

7. Final touches
Tighten all the nuts with appropriate tools, taking care not to scratch the front panel. Mount the knobs on the potentiometer shafts. Now you are done!

Connect the module to power in a Eurorack case, making sure you orient the power connector correctly (white line on PCB aligns with red stripe on power cable) and power it up. Connect a pair of headphones to the PHONES output and connect an oscillator to INPUT 1. Increase the level for channel 1 and the headphone output, and test the panning left and right. Test all inputs and outputs to see that everything works. You're now done!
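Before heating the iron, it can help to sanity-check the bill of materials against the steps above. This is a minimal, hypothetical sketch (not part of the kit documentation): the resistor designators are transcribed from the build steps, and the script simply verifies that no designator appears under two values and tallies how many resistors of each value you should have on hand.

```python
# Resistor designators transcribed from the build steps above, grouped by value.
resistors = {
    "270R": ["R53", "R54"],
    "1K":   ["R35", "R38", "R55", "R56"],
    "10K":  ["R36", "R43", "R44", "R51", "R52"],
    "20K":  [f"R{n}" for n in range(9, 25)] + ["R45", "R46"],
    "39K":  ["R39", "R40"],
    "51K":  ["R25", "R26", "R27", "R28", "R30", "R31", "R32", "R33", "R37"],
    "100K": [f"R{n}" for n in range(1, 9)] + ["R29", "R34", "R47", "R48", "R49", "R50"],
}

# Flatten and check that no designator is listed under two different values.
all_refs = [ref for refs in resistors.values() for ref in refs]
assert len(all_refs) == len(set(all_refs)), "a designator appears under two values"

# Print a per-value count to check against the bags in the kit.
for value, refs in resistors.items():
    print(f"{value}: {len(refs)} pcs")
print(f"total resistors: {len(all_refs)}")
```

The same pattern extends naturally to the capacitors, pots, and jacks if you want a complete pre-flight checklist.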
physics
https://www.nanolever.com/en/nanolever-sigma-low-innovation-vibrations-load-cells-measuring-tools/
2021-04-21T05:29:36
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039508673.81/warc/CC-MAIN-20210421035139-20210421065139-00340.warc.gz
0.950224
271
CC-MAIN-2021-17
webtext-fineweb__CC-MAIN-2021-17__0__213217137
en
Nanolever and Sigma Low: a history of innovation

We were born to measure weight accurately in the automation industry sector. Nanolever’s response was the introduction of Sigma Low, a weighing technology that allows you to measure inside vibrating machines and environments. Sigma Low is the answer that the market of automatic machine manufacturers has long been waiting for: it is the answer of those who are ready to take up even the most difficult technological challenges. As Nanolever, we find solutions for problems that seem impossible to solve - just like when we invented Sigma Low. Everyone thought it was impossible to dose gravimetrically when the measurement has to be made on a vibrating station used for the descent of the product. But we have created a weight measurement system with an accuracy of 10 milligrams: the Sigma Low system. Nanolever’s proposed products and solutions allow you to:
- eliminate, in the measurement of weight, errors due to very-low-frequency vibrations (< 5 Hz);
- perform gravimetric dosing in place of the more imprecise volumetric dosing;
- condition electronic signals with very low noise;
- develop induction hobs that can weigh ingredients directly on the ceramic glass;
- build innovative measuring tools to determine the stiffness of a load cell.
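The article does not describe how Sigma Low works internally, but the basic challenge it addresses can be illustrated generically: a load cell on a vibrating station reads the true weight plus an oscillating disturbance, and averaging the samples over whole vibration periods cancels the sinusoidal term. The sketch below is purely illustrative (all names and numbers are hypothetical, and this is not Nanolever's algorithm).

```python
import math

def average_reading(true_weight_g, vib_amp_g, vib_freq_hz, sample_rate_hz, duration_s):
    """Average simulated load-cell samples. A window spanning an integer
    number of vibration periods cancels the sinusoidal disturbance."""
    n = int(sample_rate_hz * duration_s)
    samples = [
        true_weight_g + vib_amp_g * math.sin(2 * math.pi * vib_freq_hz * i / sample_rate_hz)
        for i in range(n)
    ]
    return sum(samples) / n

# A 2 Hz vibration averaged over exactly 2 s (four full periods):
# the oscillating term sums to zero and the true weight is recovered.
est = average_reading(true_weight_g=500.0, vib_amp_g=5.0, vib_freq_hz=2.0,
                      sample_rate_hz=1000.0, duration_s=2.0)
print(round(est, 3))
```

The catch, of course, is that real disturbances below 5 Hz imply long averaging windows, which is exactly why fast dosing on vibrating stations is hard.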
physics
https://techydr.com/physicists-hitch-in-proton-structure/
2024-03-04T02:00:36
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476409.38/warc/CC-MAIN-20240304002142-20240304032142-00393.warc.gz
0.947027
753
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__28917527
en
According to recent findings, the proton is more elastic than previously believed. While this discrepancy has been seen, physicists are split on whether it will be replicated in future experiments or if our current model of the proton’s structure needs to be revised. Quarks are the subatomic particles that make up protons; they are bound together by gluons and other particles, including “virtual” particles that exist very briefly. Protons deform or stretch when subjected to external electric and magnetic forces because their components, which are charged, move around as needed. The electric and magnetic polarisabilities of the proton govern how far it may be stretched in this manner. The multiple measurements of these two values provide insight into the composition of the proton. One of the earliest observations of this kind was made in 2000, and it revealed that the proton becomes stretchier in reaction to magnetic and electric fields for a short period of time before becoming stiffer, or harder to deform. Later tests, however, contradicted those findings by showing that the proton simply becomes stiffer as you zoom in on smaller areas, as predicted by the proton’s conventional model. Now, researchers led by Nikolaos Sparveris of Temple University in Pennsylvania have measured the stretchability of protons with more accuracy and confirmed that, similar to the conclusion from 2000, the proton gets stretchier to electric and magnetic fields at specific length scales. After compiling additional information, Sparveris claims, “We perceive it with a higher precision. Now the [standard model] hypothesis has the upper hand.” By directing a stream of low-energy electrons at a liquid-hydrogen target, Sparveris and his team were able to determine the proton’s stretch. In this setup, the proton is deformed as an electron goes past it within the hydrogen, creating a photon, or effectively an electromagnetic field.
Researchers can determine the degree to which individual protons are deformed by individual photons by measuring the extent to which electrons and protons scatter away from each other. Although the anomalous finding looks similar to the study done in 2000, Judith McGovern of the University of Manchester, UK, notes that the extent of the effect has decreased by more than half. It is difficult to quantify proton polarisabilities at low energies with great accuracy, she adds, and there is no clear explanation from existing theories for why the value would rise as it does in Sparveris’s result. “I don’t believe most people took [the 2000 result] seriously. I think they expected that it would go away, and if I’m being completely honest, I think everyone will still assume that it will go away,” she says. McGovern suggests that future tests utilising positrons (the antimatter equivalent of the electron) might help determine whether or not this anomaly exists. There will be more testing conducted by Sparveris and company. “We need to rule out the idea that this is the result of some sort of experimental parameter or artefact, so we do want to repeat the experiment and take further data,” he says. However, if the anomaly persists, our current knowledge of the proton’s structure will need to be revised. According to Juan Rojo of the Vrije Universiteit Amsterdam in the Netherlands: “Other measurements will illuminate whether or not this has an experimental cause, but there looks to be a true mismatch between theory and experiment. What does this disparity tell us? And, more specifically, what do these phenomena tell us about the proton structure?”
physics
https://upperhandatlanta.com/spa-services/alternative-therapies/infrared-sauna/
2023-12-06T13:11:55
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100599.20/warc/CC-MAIN-20231206130723-20231206160723-00231.warc.gz
0.889439
1,083
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__299536069
en
Clearlight Full Spectrum Infrared Sauna

Our sauna sessions are completely safe. They produce the same far infrared heat produced by the sun. Sunlight is a combination of visible and invisible light. The seven colors of the rainbow are visible light, and infrared rays and ultraviolet rays are invisible light. It is the far infrared energy that is most beneficial, penetrating the skin and increasing circulation to help rid the body of harmful toxins.

Full Spectrum Infrared Heaters

The Clearlight Sanctuary Sauna is a True Full Spectrum infrared sauna, offering advanced near (NIR), mid (MIR) and far (FIR) infrared technologies. Our sauna utilizes high-powered, 500 watt halogen full spectrum heaters. Our True Wave Full Spectrum heating system provides all three wavelengths 100% of the time to optimize your sauna session. Our full spectrum infrared heaters emit about 1/3 near, 1/3 mid and 1/3 far infrared. By offering all three wavelengths, the infrared heat penetrates deeper, past the epidermis and dermis into the subcutaneous layer, for maximum effectiveness.

The Many Benefits of Infrared Sauna Sessions

Detoxification, relaxation, weight loss, pain relief, strengthening of the immune system and skin cell rejuvenation.

NIR promotes skin renewal, cell health, wound healing, and tissue growth. MIR helps expand blood vessels and increases circulation, so more oxygen can reach injured areas of the body. This reduces pain and speeds the healing process. FIR stimulates the sweat glands, resulting in a deep, detoxifying sweat that helps with recovery from fatigue and leaves you feeling revitalized. Plus, since sweating increases heart rate, cardiac output, and metabolic rate, you’re also burning calories. Far infrared rays have a wavelength of 8-14 microns. Far infrared heat is needed for optimum health by all living things.
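The 8-14 micron figure quoted above can be cross-checked against standard blackbody physics (this check is ours, not the manufacturer's): Wien's displacement law gives the peak emission wavelength of a body at a given temperature, and skin near 33 °C (about 306 K, an assumed typical value) peaks inside that band.

```python
WIEN_B_UM_K = 2898.0  # Wien's displacement constant, in micrometre-kelvins

def peak_wavelength_um(temp_k: float) -> float:
    """Peak emission wavelength (micrometres) of a blackbody at temp_k kelvins."""
    return WIEN_B_UM_K / temp_k

# Skin at roughly 33 degrees C (306 K) radiates most strongly near 9.5 um,
# comfortably inside the 8-14 micron far-infrared band quoted above.
print(round(peak_wavelength_um(306.0), 2))
```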
As soon as you take a seat in our IR sauna, the radiant heat surrounds you, and far infrared waves begin to penetrate 2 to 3 inches into your body. These waves create a deep heating action that penetrates into your joints, muscles and tissues. IR waves also activate the sweat glands, whose primary function is to naturally eliminate and expel waste materials and toxins. Since skin is the largest organ in the human body, the deep heating of the sweat glands allows for a deep detoxifying sweat. The human body is composed of about 70% water. Far infrared waves cause these water molecules to vibrate. This vibration reduces ionic bonds and breaks down the water molecules, which often encapsulate gases and other toxic materials. Using our IR sauna helps to remove these impurities from your cells, specifically the cells inside your fat where your body stores waste and harmful toxins such as cholesterol and heavy metals. In our Full Spectrum IR sauna, the average person sweats out approximately 20% toxins and 80% water, whereas in a conventional sauna, the average person sweats out only about 3% toxins and 97% water. This is a significant difference.

How To Achieve Maximum Sweat

The temperature inside an infrared sauna is adjustable and averages a comfortable 100 °F to 149 °F. For maximum detoxification benefits, our Clearlight Infrared Sauna is best used at temperatures between 100 °F and 125 °F. This allows a person to sweat faster and to tolerate a longer period of time inside the sauna, allowing the therapeutic effects to occur. Typical sessions last 30 minutes and can be repeated once or twice during the day to maximize the benefits. Infrared saunas leave you feeling invigorated, not depleted like conventional saunas, which have temperatures that range from 180 °F to 220 °F.

Benefits of Chromotherapy

Chromotherapy is a method of treatment that uses the visible spectrum (colors) of electromagnetic light to cure diseases.
It works on various energy points to help balance your body. It is a centuries-old concept that possibly has roots in Ayurveda, ancient Egyptian culture and traditional Chinese healing. Chromotherapy benefits are always a part of your IR sauna session at no additional charge. The color chart below explains the health benefits associated with each color.

RED: Activates the circulatory and nervous systems.
STRONG GREEN: Provides anti-infectious, anti-septic and regenerative stimulation.
STRONG BLUE: Lubricates joints; helps address infections, stress and nervous tension.
ORANGE: Energizes and eliminates localized fat. Helps address asthma and bronchitis.
GREEN: Acts as a relaxant.
BLUE: Stimulates muscle and skin cells, nerves and the circulatory system.
STRONG YELLOW: Strengthens the body and acts on internal tissue.
STRONG INDIGO: Helps address eye inflammation, cataracts, glaucoma and ocular fatigue.
STRONG PINK: Acts as a cleanser, strengthening the veins and arteries.
YELLOW: Reactivates and purifies the skin. Helps with indigestion and bodily stress.
INDIGO: Activates and eliminates impurities from the bloodstream.
VIOLET: Relaxes the nerves and lymphatic system. Addresses inflammation and urinary illness.
physics
https://circle.myactivesg.com/learn/bowling/how-to-make-a-bowling-ball-curl
2023-02-08T19:23:41
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500904.44/warc/CC-MAIN-20230208191211-20230208221211-00245.warc.gz
0.94615
917
CC-MAIN-2023-06
webtext-fineweb__CC-MAIN-2023-06__0__242755731
en
How do I make a bowling ball curl (hook)?

File photo credit: SportSG

MASTER YOUR BOWLING TECHNIQUE (2): THE HOOK

Curling (more commonly known as hooking) a bowling ball can be one of the most spectacular shots in the game. Although it is generally more difficult to hook the ball than to spin it or throw it straight, it is the most common style used by professionals due to the power and spin action it generates. Depending on your strength and bowling ball, a hook can produce varying results. Some hook balls achieve very minimal curl (2-5 boards), while a more dramatic hook can appear to be thrown all the way from one end of the lane before curling back to the centre. Regardless, the basic technique remains the same.

The Bowling Ball

The type of hook you’re able to achieve depends largely on the type of bowling ball used. While some bowlers are able to achieve some form of hook with a house ball (typically made of polyester or plastic), this requires an over-exertion of power and is difficult to replicate on a consistent basis. If you want to get on the fast track to throwing a hook, you’re going to need a reactive-resin or particle ball. Reactive-resin balls start around $100 and go up from there, though some, as well as particle balls, may cost several hundred dollars. Reactive balls have a porous surface that is meant to grip the lane and provide the traction through the lane oil that you need for your ball to hook. Plastic balls are much harder and smoother than reactive balls, so they won’t be able to get the same traction. Once you’ve gotten a reactive ball, the next step is to drill holes according to your finger size. Preferably, you should use the fingertip grip. Take your ball to a pro shop and have an expert measure your hand and drill your ball. Most stores include free drilling with the purchase of a ball.
File photo credit: http://uarkpeac.pbworks.com/

Mastering The Technique

If you’ve been playing straight ball at a decent level all your life, your approach and backswing should be near identical to before. Just remember to time your footwork and swing well, keep your shoulders facing forward, have your arms completely straight during the swing and then focus on the release. In order to create revolutions, there are a few things that must happen at the point of release:
1. Exiting of the thumb from the bowling ball
2. Lifting with the fingers
3. Counter-clockwise rotation of the hand and wrist (clockwise for lefties)

File photo credit: http://www.tweedtenpin.com.au/

It is important to learn to relax your thumb at the point of release. Your thumb should exit the ball first, leaving your two bowling fingers to control the hook of the ball. This promotes lift, rotation and, most important, accuracy. The next step is really where the revolutions are created. When releasing the ball, you should naturally flick your fingers as you let go. At the point where you feel your thumb exiting the ball is when you start to lift with your fingers. Try to feel the ball on the tips of your fingers and lift as you’re getting ready to release the ball. The final step goes hand in hand with the previous step. As you’re lifting with your fingers, you should rotate your hand and wrist counter-clockwise to the handshake position. Follow through by directing your ball and make sure your hand remains in the same position to avoid injuries. A good way to practice this technique is with a tennis ball. You want to create an underhand spiral with your throw; if you get it right, it’ll go straight and then bounce drastically to the side. The hook ball requires lots of practice. Visualise the motion in your head and try it out numerous times without the ball. Most bowlers start with small curves before advancing to the more dramatic hooks.
The more severe the curve, the more power is generated - but the risks are also greater. Ultimately, achieving consistency is much better than looking good.
physics
https://wt-obk.wearable-technologies.com/2019/08/human-skin-to-power-future-wearable-sensors/
2024-03-05T14:14:47
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707948235171.95/warc/CC-MAIN-20240305124045-20240305154045-00356.warc.gz
0.960596
502
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__63734713
en
Although sensors and wearable devices are getting smaller with each passing year, they still lack comfort for the wearer. These devices need bulky batteries and other power sources to run. Now, researchers at the University of Massachusetts Amherst have developed sensors that can be powered by the wearer’s skin. “We’re using the human skin, which is composed of mostly water, as a conductor,” said Sunghoon Ivan Lee, a UMass Amherst researcher and assistant professor of computer science. “But human skin is one big chunk of conductive material, so there’s no distinction between the signal wire and the ground wire. So, we’re using the skin as a signal wire, and air as the ground.” The self-powered sensors can be ultra-miniaturized and ergonomically designed for placement on small areas of the body, like a finger, an ear or even a tooth, reports UMass. It’s a technological innovation unreachable with conventional in-device batteries, which is why Lee and his team believe their research can lay the groundwork to transform existing architectures and spawn a new generation of on-body sensors. “We’re working on a process that shrinks the size of devices so they can be placed on small parts of the body,” Lee said. “And because you don’t have to change batteries, there’s a variety of ways in which wearable sensors can be improved and expanded.” Originally, the team started their research when they decided to understand how stroke survivors use their limbs. “If we could put a sensor on the finger, we could obtain clinically relevant information on impairment level,” Lee said. However, there was a small problem. The sensors were too bulky, and the batteries took up too much space. So, they started looking for ways to make the sensors smaller, lighter, more pliable and more energy-efficient. They found their answer in the natural conductive properties of skin.
While this new sensor is limited to use on the wrist and finger, other applications of the technology could lead to small wearable sensors placed within a person’s tooth or ear, Lee said. Dentists could get a better understanding of the pressure or moisture levels of lost teeth. The ear sensor, on the other hand, could carry signals relating to muscle, eye and brain activities.
physics
https://yourmicrocast.com/2022/02/04/want-to-know-what-hubble-telescope-saw-on-your-birthday-this-nasa-tool-will-help-you-find-out/
2024-04-20T22:52:01
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817688.24/warc/CC-MAIN-20240420214757-20240421004757-00763.warc.gz
0.945039
387
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__157216953
en
Since its launch three decades ago, NASA’s Hubble Space Telescope has captured millions of stunning images of the cosmos. On the first anniversary of its April 24 launch, in 1991, it observed something amazing: the Cygnus Loop supernova remnant, located 2,600 light-years away from Earth. Sharing the image on Twitter, NASA asked its followers to find out what the observatory saw on their birthdays using a web tool it has created, and to reply to its tweet to let others know what they come across. “Hubble saw something amazing on the anniversary of its April 24 launch — the Cygnus Loop supernova remnant. What did the telescope see on your special day? Find out and reply with your NASA birthday picture,” NASA tweeted. When you use the free online tool, you will be prompted to submit your birth date. Once you do that, the tool will bring up the details of what Hubble saw on that particular day. For example, if your birthday falls on February 4, the tool will tell you that Hubble saw the galaxy cluster MACS J0717.5+3745 on this day in 2005. While you can choose your specific date and month, you can’t select a specific year, as the tool shows results from over several years. When a user replied that Hubble saw the Cartwheel galaxy on her birthday and shared its image, NASA responded with the details of the galaxy and said excitedly, “We’re flipping out”. NASA regularly creates such tools to engage with its audience and educate them in a fun way. It separately runs an Ask The Expert series on its social media channels, through which NASA officials explain the mysteries of the universe in videos. For instance, NASA experts recently explained why there are no rainbows on Mars.
physics
https://theinformer.co.nz/blog/1960-when-the-chile-tsunami-reached-whitianga/
2024-02-28T15:59:34
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474737.17/warc/CC-MAIN-20240228143955-20240228173955-00323.warc.gz
0.988459
464
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__114689947
en
The Chile earthquake was the largest earthquake recorded in the 20th century. It caused huge devastation and loss of life in Chile and across the Pacific. The tsunami it created travelled at 200 km per hour. However, by the time it reached Whitianga, it was something different. At the time of the wave’s arrival, eight-year-old Mark Alloway was on his dad’s boat, the 40 ft launch Manuroa, tied up at the Whitianga wharf. Neither Mark nor his dad had any knowledge of the Chile tsunami. Suddenly, the sea started draining out until their boat was sitting on the sea bottom. Mark’s dad looked on incredulously. He had never seen anything like this. Then the sea started coming back in, and it lifted the boat until their deck was level with the deck of the wharf - this was a metre higher than the highest tide they had ever seen. The sea had begun to lap onto the deck of the wharf. At this point Mark’s mum arrived. She had been grocery shopping in town. She stepped easily onto the deck of the launch. Mark’s dad knew that something very threatening was happening, but he wasn’t sure what. “We’ll go to sea,” he said. “We’ll be safe at sea.” With that he cast off from the wharf. Leaving the harbour, things all seemed normal. The waves were nothing exceptional. They were about half a metre high with no crests. They settled down to what they knew. Fishing! They caught fish and then something bizarre happened. A couple of metres from the stern of the boat, a large, pointed rock suddenly thrust up out of the water. It was as though it had been pushed up from the bottom of the sea. In fact, the sea level had suddenly dropped, and a previously submerged rock broke the surface of the water. The appearance of this rock was indisputable evidence that something huge was happening. Only in the following days did they learn of the Chile tsunami. Caption: View of the wharf at the bottom of the tidal range, taken on the morning of May 24 1960 (photo courtesy of Ted Ramsbotham).
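A tsunami's speed depends on water depth: in the open ocean it behaves as a shallow-water wave with speed v = sqrt(g·d). This standard relation (not from the article) puts figures like the 200 km/h quoted above in context; the depths below are illustrative assumptions, not measurements from the event.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tsunami_speed_kmh(depth_m: float) -> float:
    """Shallow-water wave speed v = sqrt(g * d), converted from m/s to km/h."""
    return math.sqrt(G * depth_m) * 3.6

# Over a ~4 km deep ocean basin the wave moves at roughly 700 km/h,
# while 200 km/h corresponds to water a few hundred metres deep.
print(round(tsunami_speed_kmh(4000)))  # deep open ocean
print(round(tsunami_speed_kmh(315)))   # shallower continental shelf
```

This is also why a tsunami is barely noticeable at sea (as the Alloways found) yet grows dramatically as it slows and piles up in shallow coastal water.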
physics
https://armedia.am/eng/print/82877/
2021-09-28T08:10:19
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060538.11/warc/CC-MAIN-20210928062408-20210928092408-00364.warc.gz
0.953156
180
CC-MAIN-2021-39
webtext-fineweb__CC-MAIN-2021-39__0__225664778
en
A moderate earthquake with a preliminary magnitude of 5.5 has struck off Luzon island in the northern Philippines, with shaking felt as far away as Manila and Quezon City, seismologists and residents say, Bnonews reports. The earthquake, which struck at 3:18 a.m. local time on Sunday, was centered in the sea, about 31 kilometers (20 miles) northeast of Lubang Island, or 89 km (55 mi) west of Calamba and 65 km (41 mi) southwest of Balanga. The Philippine Institute of Volcanology and Seismology said the quake measured 5.5 and struck at a depth of 85 kilometers (53 miles), which is relatively deep. The U.S. Geological Survey (USGS) put the magnitude slightly lower, at 5.3. There is no threat of a tsunami but aftershocks are likely.
physics
https://solar-center.stanford.edu/observe/
2024-04-14T20:00:26
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816893.9/warc/CC-MAIN-20240414192536-20240414222536-00476.warc.gz
0.921086
2,325
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__202779665
en
Observing the Sun for Yourself

Don't ever look directly at the Sun through a telescope or in any other way, unless you have the proper filters. Can one damage their eyes by looking directly at the Sun?

There are many ways you can observe the Sun, and hopefully sunspots, for yourself. The easiest and safest is to project the Sun by building your own pinhole camera. Or, if you have your own telescope, you will need to obtain a solar filter. There are even solar telescopes online, which you can access via the web to observe the Sun.

Projecting the Sun

You can easily, cheaply, and safely observe the Sun by projecting it through a tiny hole onto a white sheet of paper. This simple device is called a "pinhole camera". You will need:
- 2 sheets of stiff white paper
- A pin
- A sunny day
- Perhaps a friend to help

With the pin, punch a hole in the center of one of your pieces of paper. Go outside, hold the paper up and aim the hole at the Sun. (Don't look at the Sun either through the hole or in any other way!) Now, find the image of the Sun which comes through the hole. Move your other piece of paper back and forth until the image looks best. What you are seeing is not just a dot of light coming through the hole, but an actual image of the Sun!

Experiment by making your holes larger or smaller. What happens to the image? What do you think would happen if you punched a thousand holes in your paper, and put little lenses in front of each hole to refract (i.e. bend) the solar images so they all fall on top of each other? What do you think you'd see? In fact, optical telescopes can be thought of as a collection of millions of "pinhole" images all focused together in one place!

If you want, you can make your pinhole camera fancier by adding devices to hold up your piece of paper, or a screen to project your Sun image onto, or you can even adapt your pinhole camera into a "real" camera by adding film. Google "pinhole camera" for lots of ideas!
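The geometry of the pinhole image is simple enough to predict: the Sun subtends about half a degree in the sky, so the projected image diameter is just the hole-to-screen distance times that angle. A quick sketch (the 0.53° angular diameter is the standard average value):

```python
import math

SUN_ANGULAR_DIAMETER_DEG = 0.53  # the Sun subtends about half a degree

def image_diameter_mm(hole_to_screen_m: float) -> float:
    """Diameter of the projected Sun image for a given hole-to-screen distance."""
    theta = math.radians(SUN_ANGULAR_DIAMETER_DEG)
    return hole_to_screen_m * math.tan(theta) * 1000.0  # metres -> millimetres

# With the screen 1 m behind the hole, the Sun's image is roughly 9 mm across;
# doubling the distance doubles the image size (but also dims it).
print(round(image_diameter_mm(1.0), 1))
```

This is why moving your second sheet of paper back and forth changes the image: farther away gives a bigger but fainter Sun.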
You can also project an image of the Sun using a pair of binoculars or a small telescope:
Make a Safe Sun Projector with Binoculars
How to Look at the Sun Safely
Observing the Sun for Yourself

If you want to learn more about how light works, you can join artist Bob Miller's web-based "Light Walk" at the Exploratorium. It's always an eye-opening experience for students and teachers alike. His unique discoveries will change the way you look at light, shadow, and images!
Bob Miller's Light Walk

Projecting the Sun by Sun Funnel

There is an easy-to-make cone device you can attach to a telescope so that multiple people can easily view your projection. Really awesome, and your friends will love it! Instructions at Make a Sun Funnel

Using the little Sunspotter Telescope

A safe and inexpensive solar telescope of your own! Thanks to the efforts of Learning Technologies, Inc., who brought you the Starlab portable planetariums and the Project Star kits, schools can now own a solar telescope of their own. This wooden, folded-path, Keplerian telescope provides a much safer and more convenient way to view the brilliant light of the Sun than other more common methods. By using a series of mirrors, the device projects a bright 3.25-inch solar image onto a 5-inch white viewing screen through a powerful 62 mm diameter objective lens. In its perfectly curved cradle, the Sunspotter is easily aligned to the Sun in seconds, without the complication of telescopes, solar filters, and tripods. The Sunspotters run approximately $430-$500. They are available on the web.

Using Your Own Telescope

The safest way to look at the Sun through your own telescope is NOT to! Not only could you damage your eye, but you can also damage the lenses in the telescope. The safest practical way to see the Sun through a night-time telescope is to use a solar filter. Baader filters are the best, but others are available as well.
How to Choose a Solar Filter

There is a particular color of red (called H-alpha, coming from hydrogen atoms) that is good for viewing the Sun's chromosphere, the part of the Sun directly above the surface, and that shows the best solar activity. You can purchase a Coronado PST (Personal Solar Telescope) to observe in H-alpha! These show prominences, filaments, sunspots, and plages (white areas around sunspots). You can learn more about these at: Observer's Guide to the H-alpha Sun. The PSTs run about $550, and are available on the web. Plus you'll need to add a tripod. Or, you could purchase an H-alpha filter for your own telescope. This site will help educate you on the basics of front- and rear-mounted filters: Hydrogen-Alpha Solar Filters

View the Sun through NASA's Solar Dynamics Observatory

If you can't afford your own NASA solar telescope, you can at least view the glorious imagery that NASA's Solar Dynamics Observatory (SDO) produces. There is even a special tool, JHelioviewer, that allows you to access this imagery and generate your own videos. If you would REALLY like to get into JHelioviewer, or you end up having to teach a community college course in astronomy, you can learn how to use this tool for yourself or for student laboratories: SDO Data in the Classroom

Eclipse Glasses - for anytime!

For very little money you can purchase a pair of paper eclipse glasses. They are great for both total and partial eclipses, and they work anywhere, anytime you can see the Sun! Available on the web.

Observing and Drawing Sunspots

Do you know what a sunspot is? And did you say you liked to draw? Before the advent of exotic cameras and other technological wonders, astronomers had to rely on drawings or sketches to document what they had seen. Humans have been sketching sunspots for hundreds of years; see the Mt Wilson Historical Sunspot Drawing Resource. Sunspot observations were first recorded in China during the Shang Dynasty (~1700 BC to ~1027 BC).
In the I Ching (an ancient Chinese divination text and the oldest of the Chinese classics, c. 800 BC), a very early observation of sunspots was recorded as "three suddenly bursting fires eating a chunk of the sun" -- the first instance in recorded history of someone observing sunspots. However, large sunspots are occasionally visible with the naked eye, so it is very likely humans have been observing sunspots for thousands of years. An English monk named John of Worcester made the first drawing of sunspots in December 1128. Later, around 1611, Galileo's drawings touched off a huge controversy about whether the blotches were on the Sun or small planets orbiting it. Historic drawings are still very important. And even today, drawings are still most accurate at recording exactly what the eye sees, unaltered by the processing of fancy electronics: Do your own sunspot drawings: You can make your own sunspot drawings by observing sunspots using any of the above techniques. Then you can compare your sketches to those at Mt. Wilson (in Pasadena, California), an observatory that has been collecting sunspot drawings since 1917. This tradition still continues. Daily Sunspot Drawings at Mt. Wilson Ranking Sunspots - Zooniverse: This is a citizen-science Zooniverse project to rank the complexity of current sunspots. You too can participate! What is a sunspot? In 1843 an amateur German astronomer named Samuel Schwabe discovered the rise and fall of yearly sunspot counts. We now call this the sunspot cycle. Daily counts have been done since 1849, and still continue. You can do your own, although counting sunspots is not as straightforward as it sounds. You have to figure out how many spots there are, as well as how many groups. And it's hard to determine what qualifies as a sunspot group! 
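The standard way observers combine the two counts is the relative (Wolf) sunspot number, R = k(10g + s), where g is the number of groups, s the total number of individual spots, and k a correction factor for the observer and instrument. A minimal sketch, with k left at 1.0 for a notional reference observer:

```python
def wolf_number(groups: int, spots: int, k: float = 1.0) -> float:
    """Relative (Wolf) sunspot number: R = k * (10*g + s).

    groups: number of sunspot groups
    spots:  total number of individual spots
    k:      observer/instrument correction factor (1.0 for a reference observer)
    """
    return k * (10 * groups + spots)

# Example: a day with 3 groups containing 11 spots in total
print(wolf_number(3, 11))  # 41.0
```

The factor of 10 on groups is what makes group identification matter so much: misjudging one group changes R far more than miscounting one spot.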
How to follow this procedure and count your sunspots is explained at: Sunspots - Count Them, Draw Them, Rank Them A transit is somewhat like an eclipse, only there is a large disparity between the sizes of the objects. Occasionally the planets Mercury and Venus line up with our view of the Sun and appear to transit across its disc. (Thought for the day: why can't you see other planets transit the Sun?) You can view transits with any of the techniques above. Mercury transited the Sun on 9 May 2016 and 11 November 2019; see above. For viewing the next Mercury transits, see The image below shows Venus transiting the Sun in June of 2012. The next occurrence will be in December 2117. Encourage your grandchildren to witness it! What Color is the Sun? Having observed the Sun now, you might have a pretty good idea of what color it is. But you may be wrong. Many images from solar telescopes artificially color the Sun to make details more prominent (i.e., it's hard to see details when a white Sun is placed on a white background). This is a similar problem to using crayons to color the Sun on white paper. Hence many young artists choose yellow, orange, or red for the Sun. If you view the Sun at sunrise or sunset, or through eclipse glasses or filtered telescopes, the Sun may appear yellow, orange, or red. But the Sun is actually white! To explore the various colors of the Sun, and find ways to determine what they are, see What Color is the Sun? Where Does the Sun Rise and Set? When asked, most people say the Sun "rises in the east" and "sets in the west". However, this is only partially true. In fact, it is only true at two times during the year - the equinoxes. During the other times of the year, the sunrise and sunset positions make a track from north to south, and back again. If you would like to better understand how this works, see Where Does the Sun Rise and Set?
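The drift of the sunrise point along the horizon can be estimated with spherical geometry: ignoring refraction and the Sun's finite disc, the sunrise azimuth A satisfies cos(A) = sin(declination) / cos(latitude). A rough sketch of that relation (the 45-degree latitude is just an example value):

```python
import math

def sunrise_azimuth_deg(latitude_deg: float, declination_deg: float) -> float:
    """Approximate sunrise azimuth, measured clockwise from true north.

    Uses cos(A) = sin(dec) / cos(lat); ignores refraction and the Sun's
    finite disc, and is only valid where |sin(dec)/cos(lat)| <= 1
    (i.e., outside polar day/night). A geometric sketch, not an ephemeris.
    """
    ratio = math.sin(math.radians(declination_deg)) / math.cos(math.radians(latitude_deg))
    return math.degrees(math.acos(ratio))

# At the equinoxes (declination = 0) the Sun rises due east (azimuth 90):
print(round(sunrise_azimuth_deg(45.0, 0.0)))   # 90
# At the June solstice (declination ~ +23.44) it rises well north of east:
print(round(sunrise_azimuth_deg(45.0, 23.44)))  # 56
```

Running the declination through its yearly range of roughly -23.44 to +23.44 degrees traces out exactly the north-south track along the horizon described above.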
physics
https://metrication.uk/what-is-metrication/
2023-06-09T00:56:02
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224655244.74/warc/CC-MAIN-20230609000217-20230609030217-00036.warc.gz
0.931852
613
CC-MAIN-2023-23
webtext-fineweb__CC-MAIN-2023-23__0__271667650
en
What is metrication? Metrication is the process of switching from the use of an incoherent collection of countless historical and parochial weights and measures to the use of a single, coherent, standard system of weights and measures known as the metric system. More recently, it has come to refer more specifically to the process of converting to the use of the modern iteration of the metric system – the International System of Units (SI). Since its initial development in the 1790s, the metric system has become the world’s standard system of weights and measures. Every country in the world has adopted the metric system for some or all official purposes. However, while most countries have now completed metrication, a few, including the UK, have yet to adopt the metric system for all official purposes. In those few countries where old measurement units continue to be used, such as yards and miles on British road signs, the old units are now legally defined in terms of metric units. Thus, for most purposes, the metre is not permitted to be used on UK road signs to indicate distance, unless it is shown in multiples of 0.9144 m, and labelled as “yards”. (One yard is defined as 0.9144 m). What is the metric system? Officially known as the International System of Units (SI), the metric system is the international standard system of measurement. It is based on the standard decimal number system, and is designed to be easy to learn, and simple to use. In everyday use, it is used to measure road distances and speeds, floor areas, storage volumes, energy use, and the mass and volumes of food and drink. It is also used to measure temperature, electricity and the brightness of light bulbs. It is the standard system of measurement for all trade. Units of measurement in the metric system relate to each other in a logical and coherent manner. Each quantity has one unit to measure it. 
Standard metric prefixes can be combined with any metric unit to form subunits which are multiples or submultiples of 10. All calculations using metric units are as straightforward as any other calculation using decimal numbers. The metric system is based on properties of nature: - The distance from the North Pole to the Equator is 10 million metres, or 10 000 kilometres. - 1 metre can be divided into 10 decimetres, 100 centimetres or 1000 millimetres. - 1 m = 10 dm = 100 cm = 1000 mm. - 1 litre is the volume of a cube with sides of length 1 dm, or 10 cm. - 1 kilogram is the mass of 1 litre of water. - Water freezes at 0 degrees Celsius, and boils at 100 degrees Celsius. Since its original inception, the metric system has evolved to become a single coherent system used for measurement in all fields of human endeavour, including science, medicine, technology, industry, commerce and sport.
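Because every metric prefix is just a power of ten, converting between prefixed forms of the same unit reduces to multiplying by a ratio of two powers. A minimal sketch (only a handful of prefixes are shown; the empty string stands for the bare base unit):

```python
# SI prefixes as powers of ten (a partial table, for illustration)
PREFIXES = {"k": 1e3, "h": 1e2, "da": 1e1, "": 1.0,
            "d": 1e-1, "c": 1e-2, "m": 1e-3}

def convert(value: float, from_prefix: str, to_prefix: str) -> float:
    """Convert a value between two prefixed forms of the same unit."""
    return value * PREFIXES[from_prefix] / PREFIXES[to_prefix]

print(convert(1, "", "m"))    # 1 m   -> 1000.0 (mm)
print(convert(100, "c", ""))  # 100 cm -> 1.0 (m)
```

The same function works unchanged for grams, litres or watts, which is exactly the coherence the text describes: one conversion rule for every unit.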
physics
https://medizeninstitute.com/services/hair-removal-hair-growth/laser-hair-removal/
2023-12-10T13:35:09
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679102469.83/warc/CC-MAIN-20231210123756-20231210153756-00587.warc.gz
0.938674
207
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__306597472
en
How does Lumenis SPLENDOR X Work? The Lumenis Splendor X uses a combination of laser technology and an advanced cooling system to permanently remove hair from the body and face. The device is designed to deliver fast repetitions of laser energy onto the unwanted hair, using large spot sizes and a square shape to ensure complete coverage without missed areas or overlaps. It uses two laser wavelengths to allow flexibility in adjusting the energy delivered for different skin types and hair colors. During treatment, the laser energy is absorbed by the melanin in the hair shafts, where it instantly converts to heat. This heat travels down the hair shaft and into the hair follicle, destroying or damaging the follicle and preventing it from growing new hair. The device's dual cooling system helps to keep the skin comfortable during the laser hair removal process. This cooling system combines Cryo-touch and Cryo-air technology in a tip that emits cold air while the laser is in use, for a safer and more comfortable experience.
physics
http://pyrolance.com/?page_id=75
2013-05-22T22:55:21
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702454815/warc/CC-MAIN-20130516110734-00019-ip-10-60-113-184.ec2.internal.warc.gz
0.898352
453
CC-MAIN-2013-20
webtext-fineweb__CC-MAIN-2013-20__0__24534719
en
PyroLance features a patented, ultra-high pressure (UHP) nozzle that shoots water at 1,500 psi (100 Bar). It's powerful, patented technology that provides crews with a unique ability to access, cool and extinguish the source of any fire. PyroLance is able to penetrate outer structures by integrating granite abrasive with a high-pressure stream of water. See the chart below for how quickly PyroLance is able to pierce through common barriers that stand between your crew and a fire. By enabling an exterior attack through walls, roofs, fuselages, bulkheads or any other type of barrier, PyroLance provides your crew with direct access to the source of any fire. PyroLance sprays an ultra-high pressure stream made up of billions of tiny micro-droplets of water (see diagram below). Because these micro-droplets present far more surface area than normal water droplets, they are far more effective in fighting fire than water delivered from traditional attack lines. The chart below shows PyroLance's ability to cool interior temperatures from 1380ºF (750ºC) to 212ºF (100ºC) in under a minute. PyroLance makes it far safer for a second attack line to enter the structure. The high-pressure mist that enters an enclosure quickly cools the fire's thermal layer, preventing dangerous back draft conditions in the process. It also cools the flames and combustible solid fuel surfaces as it reduces the oxygen partial pressure to knock down the fire. By reducing the droplet size of water, UHP increases the surface area for any given volume of water by 16 to 20 times. This allows PyroLance to absorb heat and extinguish fires in record time. The charts below compare water effectiveness of traditional water flow and ultra-high pressure. In this example, UHP flows at a rate of 20 gpm (80 L/min) compared to normal water pressure at 100 gpm (400 L/min). With UHP, however, a full 18 gallons (68 L) of water are effective in extinguishing the fire compared to just 10 gallons (40 L) with normal water.
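The surface-area claim follows from simple geometry: for spherical droplets, surface area per unit volume scales as 6/d, so for a fixed volume of water the total surface area grows in inverse proportion to droplet diameter. A sketch using hypothetical droplet diameters chosen to reproduce the 16-20x figure; the actual droplet sizes are not given in the text:

```python
def surface_area_ratio(d_conventional_um: float, d_uhp_um: float) -> float:
    """Ratio of total droplet surface area for a fixed volume of water.

    For spheres, surface area per unit volume is 6/d, so the ratio
    reduces to the ratio of the two droplet diameters.
    """
    return d_conventional_um / d_uhp_um

# Hypothetical diameters (1000 um vs 55 um) chosen to match the quoted range:
print(round(surface_area_ratio(1000, 55), 1))  # 18.2, i.e. ~18x
```

More exposed water surface means faster evaporation, and it is the evaporation that does most of the heat absorption, which is why a 20 gpm UHP stream can compete with a 100 gpm conventional line.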
physics
https://gulfpress.net/gulf/uae/uae-announces-first-women-in-nuclear-middle-east-chapter-news/
2024-03-03T06:57:29
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476205.65/warc/CC-MAIN-20240303043351-20240303073351-00060.warc.gz
0.936072
575
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__71088618
en
The UAE announced today the launch of the Women in Nuclear (WiN) Middle East Chapter. The Chapter was announced at COP28 by Mohamed Al Hammadi, Managing Director and Chief Executive Officer of the Emirates Nuclear Energy Corporation (ENEC), Dominique Mouillot, President of WiN Global, and Sama Bilbao y Leon, Director-General of World Nuclear Association, as well as leaders from the wider nuclear energy industry. Egypt was the first country in the region to join the WiN Middle East Chapter, which is open for other regional countries to be a part of. WiN is a non-profit organization for women working professionally in the nuclear energy and technology fields and those interested in the nuclear sector. The launched Middle East Chapter is part of WiN Global, whose members are focused on a common goal of providing information and raising awareness about the benefits of nuclear energy while promoting gender balance. WiN Global has approximately 4,800 members in more than 107 countries. Women represent around 20 percent of the UAE Peaceful Nuclear Energy Programme, playing a key role across a range of areas including engineering, reactor operations, nuclear safety, and other technical specialities. Since its inception, ENEC has continued to support the advancement of Emirati women, who today are at the forefront of the UAE’s path to net zero by 2050. This has been a clear commitment from the UAE leadership who have supported gender equality and empowered the role of women across the clean energy sector and wider industries. Today at the Barakah Nuclear Energy Plant, women perform critical jobs, including senior reactor operators, fuel load specialists, and more. Mohamed Al Hammadi said, “Emirati women, working alongside international experts, are at the heart of our achievements at the Barakah Nuclear Energy Plant, helping to sustainably power the UAE. 
We are proud of our talented Emiratis for helping develop the nuclear energy sector and being a source of inspiration for our youth. "As a result of the UAE Peaceful Nuclear Energy Programme, young women across the UAE are seeing the possibilities for highly challenging and rewarding careers in one of the most important industries for clean electricity generation and tackling climate change. The establishment of the Women in Nuclear Middle East Chapter will further increase the opportunities for women to drive innovation and R&D as we continue to progress towards achieving Net Zero by 2050." Creating professional development and networking opportunities to facilitate know-how, greater collaboration with international peers, professional growth, and career advancement, WiN enhances awareness of the value of nuclear energy and advanced technology. At COP28, the role of women and gender balance in tackling climate change is a key topic, with the UAE Peaceful Nuclear Energy Programme providing a clear case study for female empowerment, diversity, and inclusion.
physics
http://kitv.web.franklyinc.com/story/40749365/the-latest-71-magnitude-earthquake-hits-southern-california
2019-11-18T16:51:55
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669809.82/warc/CC-MAIN-20191118154801-20191118182801-00305.warc.gz
0.97429
546
CC-MAIN-2019-47
webtext-fineweb__CC-MAIN-2019-47__0__117247585
en
The Latest: 7.1 magnitude earthquake hits Southern California AP-12:15 a.m. (California time) Small communities in the Mojave Desert are reeling from a magnitude 7.1 earthquake - the second major temblor in as many days to rock Southern California. Authorities say Friday night's shaker was centered near the town of Ridgecrest - the same area where a 6.4-magnitude quake hit on Independence Day. Mark Ghillarducci, director of the California Office of Emergency Services, says there are "significant reports of structure fires, mostly as a result of gas leaks or gas line breaks throughout the city." He also says there's a report of a building collapse in tiny Trona. He says there could be even more serious damage to the region that won't be known until first light on Saturday. The quake at 8:19 p.m. was felt as far north as Sacramento and even in Las Vegas. It's been followed by a series of sizeable aftershocks. AP- 10:30 p.m. (California time) Authorities say a magnitude 7.1 earthquake that jolted California has caused injuries, sparked fires, shut roads and shaken ball games and theme parks. However, authorities say there are no deaths or major building damage reported from the quake, which struck at 8:19 p.m. Friday. It was centered about 150 miles from Los Angeles in the Mojave Desert near the town of Ridgecrest, which was still recovering from a 6.4-magnitude foreshock that hit the region on Thursday. There were reports of trailers burning at a mobile home park, and State Route 178 in Kern County was closed by a rockslide and roadway damage. But Kern County Fire Chief David Witt says it appears no buildings collapsed. He also says there have been a lot of ambulance calls but no reported fatalities. At 8:30 p.m. California time, 5:30 p.m. Hawaii time, an earthquake with a preliminary magnitude of 6.9 hit Southern California. The U.S. Geological Survey updated their report saying that the earthquake had a magnitude of 7.1. 
At this time there are no immediate reports of damage or injuries. If the preliminary magnitude is correct, it would be the largest Southern California quake in 20 years. According to the Associated Press, the quake was felt downtown as a rolling motion that seemed to last at least a half-minute. It was felt as far away as Las Vegas, and the USGS says it also was felt in Mexico. This story will be updated as details become available.
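The gap between a 6.4 and a 7.1 is larger than the numbers suggest, because radiated seismic energy scales roughly as 10^(1.5 M): each whole magnitude step releases about 31.6 times more energy. A quick sketch of the comparison:

```python
def energy_ratio(m1: float, m2: float) -> float:
    """Approximate ratio of radiated seismic energy between two magnitudes.

    Energy scales roughly as 10**(1.5 * M), so a one-unit magnitude
    difference corresponds to about a 31.6x difference in energy.
    """
    return 10 ** (1.5 * (m1 - m2))

# The 7.1 mainshock versus the 6.4 quake the day before:
print(round(energy_ratio(7.1, 6.4), 1))  # 11.2
```

So Friday night's quake released roughly eleven times the energy of Thursday's, which is why the damage reports were so much more serious despite the seemingly small difference in magnitude.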
physics
http://motorsich.com.ua/AI-450M
2021-05-06T19:46:20
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988759.29/warc/CC-MAIN-20210506175146-20210506205146-00476.warc.gz
0.920157
145
CC-MAIN-2021-21
webtext-fineweb__CC-MAIN-2021-21__0__79988601
en
The AI-450M engine is designed to power multi-purpose helicopters and has a two-rotor design consisting of a gas generator rotor and a free turbine rotor. The free turbine transmits power to the reduction gear, arranged in front of the engine, through a shaft passing inside the gas generator rotor shaft. The engine consists of a reduction gear with an accessory gearbox built into the same casing; a gas generator containing the inlet section, compressor, combustion chamber and compressor turbine; and a free turbine with its shaft. Each rotor is mounted on two bearing supports built into the engine stator. To obtain the required engine vibration characteristics, the gas generator rotor front bearing support and the free turbine rotor rear bearing support are carried on elastic-oil dampers.
physics
https://uvsolutionsmag.com/articles/2022/2022-iuva-research-innovation-symposium-boulder-colorado-may-23-25/
2024-04-25T10:23:32
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712297292879.97/warc/CC-MAIN-20240425094819-20240425124819-00841.warc.gz
0.910227
569
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__53945686
en
The International Ultraviolet Association has announced the 2022 IUVA Research Innovation Symposium. This will be an in-person conference taking place at the University of Colorado in Boulder, Colorado, USA, from May 23-25, 2022. UV radiation has been applied in engineering systems for roughly 100 years; today, UV-based systems provide critical functions in a wide range of applications. New and emerging discoveries related to photochemistry, photobiology, UV-based applications and UV sources may provide opportunities to improve existing applications and to expand into new applications. This symposium has been organized to provide opportunities to explore new and emerging aspects of UV radiation and its applications. The conference will be conducted as a single-track event to promote engagement in presentations. In addition, the schedule for the event has been designed to facilitate social interactions and informal discussion. The individual sessions have been organized around theme areas, ranging from common applications (e.g., disinfection and AOPs), to health care, biomedical applications and new UV sources. Speakers with expertise in these areas have been invited to present new discoveries and information related to emerging applications. Session topics and focuses will include the following:
Session 1: UV-Based AOPs
- New UV-AOPs or ARPs that are more efficient and less problematic (byproducts)
- New applications of UV-AOPs/ARPs to address emerging environmental issues (PFAS and ARG/ARB)
- Latest issues in UV-AOP/ARP research and application, technology trends, and future research needs
Session 2: UV in Healthcare and COVID-19
- UV disinfection of human norovirus, evaluating infectivity using a genome-wide PCR-based approach
- Expert perspectives on differences in experimental design and the role of quality assurance of UV disinfection in the frame of water safety plans and risk assessment
- Far-UV for disinfection and related health concerns
Session 3: UV Integration in Everyday Life
- Non-traditional UV design and technologies
- UV disinfection and reuse of facial masks
- UVC and UVC-plus-ozone robots
Session 4: Innovations in Harvesting/Delivering UV
- Novel methods of generating ultraviolet radiation
- Novel methods of delivering and distributing ultraviolet radiation
- Light engines: diode/lensing design and its effect on radiation profile
Session 5: New UV Sources, Far UV, LEDs
- New UV radiation source-based process design and development
- New UV radiation source-based applications (e.g., disinfection, water/air purification)
- Protocols/methods for characterizing the new radiation sources/reactors
- Latest issues in new UV source-based research and application, and future research needs
For more information and to register, visit www.iuva.org/2022-RIS.
physics
https://en.bookimed.com/doctor/raimon-miralbell/
2020-11-23T19:29:11
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141164142.1/warc/CC-MAIN-20201123182720-20201123212720-00417.warc.gz
0.808614
290
CC-MAIN-2020-50
webtext-fineweb__CC-MAIN-2020-50__0__174936544
en
35 years of experience Doctor speaks English, German, Italian, Spanish, French Professor Raimon Miralbell is a world-renowned expert in radiation oncology, prostate cancer in particular. Cooperates with CERN (The European Organization for Nuclear Research). Professor Raimon Miralbell specializes in focused radiosurgery (Gammaknife, Cyberknife, SIRT, IMRT), prostate cancer treatment with radiotherapy. - Since 1999 — Head of the Radiation Oncology Department at Centro Médico Teknon - 1979-1984 — The University Hospital of Sant Pau (Spain) — Degree in general medicine, training in radiation oncology - 1983 — The Radiation Therapy Department in the Royal Marsden Hospital (UK) — Training - 1985 — The Radiation Oncology Department at the City of Hope National Medical Center (USA) — Internship - 1987-1989 — The Harvard Cyclotron Laboratory (USA) — Research fellow. - 1997 — The Faculty of Medicine at the University of Geneva (Switzerland) — Doctor in medicine and surgery. - Varian Award for Radiation Oncology of the Swiss Society of Radiobiology and Medical Physics Has written over 100 scientific publications. Provides studies of image guided radiotherapy, advanced prostate cancer treatment in the elderly, proton therapy. Membership in organizations - The Swiss Society of Radiation Oncology.
physics
https://royfeatherstone.org/SMA/FCtestbed.html
2022-08-10T23:17:02
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571222.74/warc/CC-MAIN-20220810222056-20220811012056-00154.warc.gz
0.914492
325
CC-MAIN-2022-33
webtext-fineweb__CC-MAIN-2022-33__0__180712407
en
This picture shows our new testbed for performing force and motion control experiments on SMA wires. At the top there is a precision linear motion stage that moves a pulley up and down. This pulley has a short cord attached, and this cord ends in two small eyelets for the SMA wires to pull on. For single-wire experiments, the pulley can be locked in place. For antagonistic-pair experiments, the pulley is unlocked, and an optical shaft encoder measures the rotation. It is also possible to attach an inertial load to the pulley, like the pendulum visible in this picture. At the bottom, there is a pair of very sensitive load cells and a strain gauge amplifier. These load cells measure the tensions on the SMA wires. The wires themselves are 80cm long, but are doubled up so that the two ends are down at the load cell and the middle passes through an eyelet. The distance from the load cells to the pulley is therefore a bit more than 40cm. The linear stage can generate motions with an accuracy of 1 micron and a bandwidth of 30Hz; the optical shaft encoder measures pulley rotation with a resolution of 0.044 degrees; and the load cells can measure forces with a resolution of 0.3mN and a bandwidth of 140Hz. The testbed also has a precision transconductance amplifier (voltage in, current out) to deliver precise heating currents to the wires; and the whole thing is controlled by a DS1104 real-time control board from dSpace.
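The quoted 0.044-degree encoder resolution implies roughly 8182 counts per revolution, which would be consistent with, for example, a 2048-line encoder read in 4x quadrature (8192 counts); that interpretation is an assumption, not stated in the text. A minimal conversion sketch:

```python
DEG_PER_COUNT = 0.044  # encoder resolution quoted for the testbed

def counts_to_degrees(counts: int) -> float:
    """Convert raw encoder counts to pulley rotation in degrees."""
    return counts * DEG_PER_COUNT

def counts_per_revolution() -> float:
    """Implied number of counts in one full 360-degree revolution."""
    return 360.0 / DEG_PER_COUNT

print(round(counts_per_revolution()))  # 8182
print(counts_to_degrees(100))          # rotation after 100 counts, in degrees
```

A conversion like this is typically the first step in the control loop: raw counts from the DS1104 board become an angle, which in turn feeds the force and motion controllers.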
physics
https://www.annapoliscountyspectator.ca/lifestyles/kingstons-sisters-of-science-seeking-financial-support-as-team-prepares-for-international-robotics-competition-290882/
2019-09-19T20:56:11
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573735.34/warc/CC-MAIN-20190919204548-20190919230548-00413.warc.gz
0.961661
1,012
CC-MAIN-2019-39
webtext-fineweb__CC-MAIN-2019-39__0__25982229
en
The members of the all-girls robotics team the Sisters of Science are “over the moon” to have an opportunity to compete internationally but they need financial support to make it happen. At the recent Acadia Robot Programming Competitions, the seven members of the Sisters of Science (S.O.S.) placed third in the provincial FIRST LEGO League (FLL) event. FLL is open to competitors between the ages of nine and 14. The team won the Project Innovative Solution Award and was nominated for the Global Innovation Award. Coach Sara Chisholm said being nominated for the Global Innovation Award means that the girls’ project was recognized as “a patentable and practical solution to a problem.” As part of the competition, teams are challenged to identify a real-world problem relating to a given topic and propose a solution. This year, FLL teams were tasked with researching challenges people must overcome to travel throughout the solar system for extended periods of time. The “INTO ORBIT” challenge was designed to spark interest in the science of space travel. The girls researched women in space, recognizing that there still aren’t many female astronauts. “The girls identified that radiation exposure is one of the reasons that there aren’t as many women in space and why there aren’t as many women in space for long periods of time, because women’s bodies absorb that radiation a little differently,” Chisholm said. “So, they imagined a solution to that.” Because of the pending patent process, the team isn’t at liberty to reveal much more information at this point. S.O.S. is in the process of submitting for the Global Innovation Award. They’ll learn in April if they are one of 20 teams that qualify to travel to San Jose, California, in June to further develop and refine ideas. The team that wins the California competition will receive $20,000 to develop its invention. With the third-place finish at Acadia, S.O.S. 
qualified for the FIRST LEGO League Mountain State Invitational in West Virginia in July. The team has registered for this as well, although Chisholm said they realize that they can’t attend both. “If in April we find that we are one of the 20 teams selected, we’ll pass our bid to the Mountain State Invitational to the next Nova Scotia team that would have qualified,” Chisholm said. An online fundraiser for S.O.S. was launched on Feb. 21 on the website www.gofundme.com. The online goal is $18,000 and $1,610 had been raised as of March 9. Chisholm said they greatly appreciate the generosity being shown. The girls are very excited and are ready to meet the fundraising challenge head-on. Chisholm said it’s expected to cost approximately $20,000 for the seven members and three coaches to make the trip to California. The girls are planning local fundraisers, including a school bake sale, and they’ll be making a presentation to the Kingston Lions to see if the service club can help them stage a fundraising event. Chisholm said it probably wouldn’t cost as much to make the trip to West Virginia, so if they don’t make the cut to go to California, they’ll reduce the overall fundraising goal. If they are fortunate enough to raise $20,000 before the announcement is made in April and it turns out that they won’t be going to California, S.O.S. will use the excess funds to help other robotics teams with travel expenses. “We help each other, that’s one of the core values of FIRST LEGO League, we’re all in this together,” Chisholm said. In the beginning, the team received financial start-up support from Michelin, which Chisholm said was great. The goal for the team was full participation on the part of members and providing learning opportunities, not necessarily winning competitions. Chisholm said it’s amazing to see how it has blossomed into something wonderful for the S.O.S. members. “Kids don’t go into solving problems with the same kind of blinders as adults do,” Chisholm said. 
“They don’t have the same limitation in ideas of what is possible and practical and what isn’t. Sometimes that makes them all the more innovative in their solutions.” - For more information on the Sisters of Science, visit https://sites.google.com/view/sistersofscience. - Click here to donate to the Sisters of Science GoFundMe fundraiser.
physics
https://www.hanser-kundencenter.de/en/fachbuch/kunststofftechnik/verarbeitung-und-maschinen/verarbeitung-allgemein/6235/solid-phase-processing-of-polymers
2021-02-28T01:29:54
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178359624.36/warc/CC-MAIN-20210227234501-20210228024501-00234.warc.gz
0.920683
157
CC-MAIN-2021-10
webtext-fineweb__CC-MAIN-2021-10__0__173674642
en
Solid Phase Processing of Polymers provides a comprehensive, up-to-date account of the solid phase processing of polymers, with particular emphasis on the production of oriented polymers in the form of fibers, films, and solid sections, including rods, sheets, and tubes. It explains how such oriented materials can be produced by a wide range of techniques, including tensile drawing, die drawing, ram extrusion, and hydrostatic extrusion. It also discusses how the development of molecular orientation and structural changes lead to improvements in properties, especially in mechanical properties. This book should form a useful bridge between polymer engineering and polymer physics. It will serve as a valuable aide-mémoire for the experienced polymer scientist/technologist and a useful introduction for those entering a very challenging and exciting area.
physics
http://www.anglik.net/alexgrahambell.htm
2019-05-27T11:57:20
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232262369.94/warc/CC-MAIN-20190527105804-20190527131804-00470.warc.gz
0.977231
862
CC-MAIN-2019-22
webtext-fineweb__CC-MAIN-2019-22__0__175681152
en
Notable Britons in History Alexander Graham Bell (1847-1922) Scottish Inventor of the Telephone. Alexander Graham Bell was born in Edinburgh, Scotland. His father, Melville Bell, was a teacher of speech and the author of Standard Elocutionist, reprinted countless times, whose other textbooks on speech and phonetics were widely used in schools and colleges throughout the English-speaking world. In 1862, Melville Bell authored "Visible Speech," intended for pronouncing words in all languages, but it was found that the symbols it employed could be used to teach the deaf. His wife, Eliza, had begun to lose her hearing at age 12. After the death of two sons, the Bell family moved to Canada to escape the tuberculosis then rampant in their native Edinburgh. In 1871, Alexander Graham Bell went to Boston to teach at Sarah Fuller's School for the Deaf (later the world-famous Horace Mann School). In 1872, Bell opened his own School of Vocal Physiology and Mechanics of Speech to utilize the "oral" method of teaching the deaf, rather than the more popular sign language. After accepting a position at Boston University, he began his experiments with electricity to send sound across the wires, taking on as his assistant an expert in electricity, Mr. Thomas A. Watson. Bell's success came through his novel ideas that electricity could be generated to "undulate" or vary in intensity as sound waves do, and that current could somehow be "shaped" by a practical transmitter. Bell also conceived of the idea that a single membrane or diaphragm could act like the human ear to gather the complexities of sound or speech in the air and, through its vibration, bring about the corresponding variations in the current flowing in the wire. In the summer of 1874, visiting his father at Brantford, Bell conceived the idea that telegraphing speech was theoretically possible by means of the induced currents in the coil of an electromagnet. 
He was encouraged by Joseph Henry, considered the dean of American electrical scientists for his work with electromagnetic induction, whom Bell visited at the Smithsonian Institution. The big breakthrough came on June 2, 1875. When Bell and Watson were testing their harmonic telegraph, one of Watson's reeds, screwed down too tightly, froze to the electromagnet. Watson plucked it to free it. Bell, at the other end of the line, had a receiver reed pressed to his ear and heard the twang of the plucked reed. Instead of the expected usual whine of the intermittent battery current, he heard a tone with some overtones. Running to the other room, he shouted "Watson, what did you do then? Don't change anything. Let me see." It became apparent that the reed, too tight to send intermittent current, had sent an induced, undulating current over the line, one that would vary in intensity as the air varies in density when sound passes through it. The receiving reed had acted as a diaphragm enabling Bell to detect the sound. The current had proved strong enough to be of practical use. One day later, Bell was able to transmit his own voice to Watson. Bell filed his application for his telephone patent on February 14, 1876. In 1877, Bell formed the Bell Telephone Company, and in the same year married Mabel Hubbard, ten years his junior, and embarked on a yearlong honeymoon in Europe. Bell might easily have been content with the success of his invention. Alexander Graham Bell's many laboratory notebooks demonstrate, however, that he was driven by a genuine and rare intellectual curiosity that kept him regularly searching, striving, and wanting always to learn and to create. He would continue to test out new ideas through a long and productive life. 
He would explore the realm of communications as well as engage in a great variety of scientific activities involving kites, airplanes, tetrahedral structures, sheep-breeding, artificial respiration, desalinization and water distillation, and hydrofoils.
physics
https://www.themountainpress.com/news/community/local-student-wins-east-tennessee-regional-science-fair-awards/article_6c862842-bb03-58e1-a768-85f97699a321.html
2021-04-23T11:06:26
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039617701.99/warc/CC-MAIN-20210423101141-20210423131141-00639.warc.gz
0.963147
674
CC-MAIN-2021-17
webtext-fineweb__CC-MAIN-2021-17__0__128469998
en
KODAK — Mason Payne, 13, of Kodak, recently competed at the 2021 Southern Appalachian Science & Engineering Fair (SASEF), held at the end of March. Payne currently attends Lakeway Christian Academy in White Pine, Tennessee, as a 7th grader and a middle school golf team member. His project “Which Golf Ball Flies the Longest Distance?” was part of the Physics and Astronomy category and won multiple prestigious awards. Payne received three awards in the junior division for middle school students, grades 6-8. He was the 2021 SASEF Overall 3rd Place Junior Division Award Winner; received a National Special Award from the Office of Naval Research for Excellence in Science, and he received a Junior Division Certificate of Excellence Award for exceptional merit in the Physics and Astronomy Category. Payne’s efforts garnered him monetary prizes, certificates and medallions. Payne said he initially chose this particular project because of his interest in golf. “I wanted to choose a science project with results that might answer my own personal question,” Payne said. “Every golfer wants to know which ball is the best for driving the longest distance. It helps lower your score. So I built a mechanical club swinger in the garage, and hit the balls on the football field for the experiment.” “I used 4 different ball types to strike with the club swinger, from 2-piece to 5-piece,” Payne said. “Four rounds of 20 flights each gave me the data to show the Taylormade ball was the best performer and the Titleist ball the worst performer. I was surprised to learn that the most expensive ball was not the best in action.” “It was a really fun project and the news of my awards is exciting,” Payne said. “I am very thankful for my science teacher and all of my previous science teachers who have encouraged me along the way. I never expected my project to place as well as it did. 
I am just really blessed.” His science teacher is Kendall Bryant, science fair sponsor is Kasey Harkness, and golf coach is David Reed. SASEF is the premier science and engineering competition for students in middle and high school for the 23-county regional area of East Tennessee. SASEF has promoted teaching intellectual inquiry through the scientific method in science, engineering and math since 1952, according to the organization’s website. This eastern region science fair is sponsored by the University of Tennessee, Knoxville and many local companies and agencies. It is held annually at the Thompson Boling Arena on the University of Tennessee, Knoxville campus, but has been held virtually for the second year in a row. The regional competition is part of the Pre-College Research Excellence Program. His winning project also qualifies to compete in the Broadcom Mathematics, Applied Science, Technology and Engineering for Rising Stars (Broadcom MASTERS). This national program grants top projects the opportunity to compete nationwide for prizes and awards. Entries will be judged during the summer of 2021. Payne is the son of Darryl and Lisa Payne of Kodak and the grandson of Melba Kelly and Cecil and Marilynne Payne of Kodak.
physics
http://www.aboveinspect.com/web/index.php?siteid=819&pageid=6851
2019-04-22T10:36:15
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578551739.43/warc/CC-MAIN-20190422095521-20190422121123-00027.warc.gz
0.947852
1,178
CC-MAIN-2019-18
webtext-fineweb__CC-MAIN-2019-18__0__19686572
en
Radon is a radioactive, colorless, odorless, and tasteless gas. It is formed as natural deposits of uranium throughout the earth’s crust decay. As radon decay products are inhaled, they can alter the cells in the lungs. These alterations can increase the potential for getting lung cancer. Radon is the second leading cause of lung cancer behind smoking. According to the Surgeon General, an estimated 21,000 people die of radon-related lung cancer each year. Radon is produced by the radioactive decay of the element radium. Radioactive decay is a natural, spontaneous process in which an atom of one element decays or breaks down to form another element by losing atomic particles (protons, neutrons, or electrons). When solid radium decays to form radon gas, it loses two protons and two neutrons. These two protons and two neutrons are called an alpha particle, which is a type of radiation. The elements that produce radiation are called radioactive. Radon itself is radioactive because it also decays, losing an alpha particle and forming the element polonium. Elements that are naturally radioactive include uranium, thorium, carbon, and potassium, as well as radon and radium. Uranium is the first element in a long series of decays that produces radium and radon. Uranium is referred to as the parent element, and radium and radon are called daughters. Radium and radon also form daughter elements as they decay. The decay of each radioactive element occurs at a very specific rate. How fast an element decays is measured in terms of the element's "half-life," or the amount of time for one half of a given amount of the element to decay. Uranium has a half-life of 4.4 billion years, so a 4.4-billion-year-old rock has only half of the uranium with which it started. The half-life of radon is only 3.8 days. If a jar were filled with radon, in 3.8 days only half of the radon would be left.
But the newly made daughter products of radon would also be in the jar, including polonium, bismuth, and lead. Polonium is also radioactive - it is this element, produced by radon in the air and in people's lungs, that can hurt lung tissue and cause lung cancer. Radioactivity is commonly measured in picocuries (pCi). This unit of measure is named for the Polish-French physicist Marie Curie, who was a pioneer in the research on radioactive elements and their decay. One pCi is equal to the decay of about two radioactive atoms per minute. Because the level of radioactivity is directly related to the number and type of radioactive atoms present, radon and all other radioactive elements are measured in picocuries. For instance, a house having 4 picocuries of radon per liter of air (4 pCi/L) has about 8 or 9 atoms of radon decaying every minute in every liter of air inside the house. A 1,000-square-foot house with 4 pCi/L of radon has nearly 2 million radon atoms decaying in it every minute. Radon levels in outdoor air, indoor air, soil air, and ground water can be very different. Outdoor air ranges from less than 0.1 pCi/L to about 30 pCi/L, but it probably averages about 0.2 pCi/L. Radon in indoor air ranges from less than 1 pCi/L to about 3,000 pCi/L, but it probably averages between 1 and 2 pCi/L. Radon in soil air (the air that occupies the pores in soil) ranges from 20 or 30 pCi/L to more than 100,000 pCi/L; most soils in the United States contain between 200 and 2,000 pCi of radon per liter of soil air. The amount of radon dissolved in ground water ranges from about 100 to nearly 3 million pCi/L.
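The half-life rule and the picocurie arithmetic above can be checked with a short Python sketch. The 8-foot ceiling height used to size the 1,000-square-foot house is an assumption made here, not a figure from the article:

```python
def remaining_fraction(elapsed, half_life):
    """Fraction of a radioactive sample left after `elapsed` time.

    N(t)/N0 = (1/2) ** (t / T_half); both arguments in the same units.
    """
    return 0.5 ** (elapsed / half_life)

def decays_per_minute(pci_per_liter, liters):
    """Approximate decays per minute in a volume of air.

    1 pCi corresponds to 0.037 decays per second, i.e. about 2.22 per minute.
    """
    return pci_per_liter * 2.22 * liters

# After one 3.8-day half-life, half of the radon in the jar remains.
print(remaining_fraction(3.8, 3.8))      # 0.5

# 4 pCi/L gives roughly 8-9 decays per minute in each liter of air.
print(decays_per_minute(4, 1))           # ~8.9

# A 1,000-square-foot house with an assumed 8-foot ceiling holds about
# 8,000 cubic feet of air (1 cubic foot is roughly 28.3 liters):
house_liters = 1000 * 8 * 28.317
print(round(decays_per_minute(4, house_liters)))   # ~2 million per minute
```

Under those assumptions the house figure lands close to the "nearly 2 million" decays per minute quoted above.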
Why do radon levels vary so much between indoor air, outdoor air, soil air, and ground water? Why do some houses have high levels of indoor radon while nearby houses do not? The reasons lie primarily in the geology of radon - the factors that govern the occurrence of uranium, the formation of radon, and the movement of radon, soil gas, and ground water. Why is Radon Bad? The Surgeon General has warned that radon is the second leading cause of lung cancer in the United States today, causing over 21,000 deaths per year. At 21,000, this makes it a bigger killer than drunk driving, which accounts for approximately 17,000 deaths per year. You would not let your family ride in a car with a drunk driver, so why would you let them live in a home with high levels of radon? If you smoke, you are at an even higher risk for developing lung cancer. The cumulative effects of smoking and exposure to radon will greatly increase your risk. Unlike many other types of cancer, lung cancer has a very low survival rate. Even with today’s medical technologies, only 7% of the people diagnosed with lung cancer will survive. Many scientific studies of radon exposure indicate that children may be more sensitive to radon. This may be due to their higher respiration rate and their rapidly dividing cells, which may be more vulnerable to radiation damage.
physics
https://www.dv8studio.com/blogs/news/10099725-cool-facts-about-led-lighting
2018-11-17T00:07:16
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743247.22/warc/CC-MAIN-20181116235534-20181117021534-00491.warc.gz
0.93647
180
CC-MAIN-2018-47
webtext-fineweb__CC-MAIN-2018-47__0__36651156
en
Cool Facts About LED Lighting
- An LED lamp is a light emitting diode (LED) that is assembled into a lamp (or bulb) for use in lighting fixtures
- Unlike most fluorescent lamps (e.g. tubes and CFL lights), LED lights come to full brightness without need for any warm-up time
- LED lamps have a lifespan and electrical efficiency that is several times better than incandescent lights, and significantly better than most fluorescent lights
- LED chips need controlled direct current of only 12 volts (DC) electrical power, and are far safer than their 110V AC counterparts
- LED lights keep cool during operation, and are safe to touch, unlike common light bulbs
- RGB (red, green, blue) LED lights like the ones we use on our artwork combine the 3 primary colors in differing amounts to create almost any color imaginable!
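The additive mixing in the last fact can be illustrated with a tiny sketch. The 0-255 channel range and the hex-code output are common conventions assumed here, not details from the post:

```python
def rgb_to_hex(red, green, blue):
    """Mix the three LED primaries (each 0-255) into a single color code."""
    for channel in (red, green, blue):
        if not 0 <= channel <= 255:
            raise ValueError("each channel must be in the range 0-255")
    return "#{:02X}{:02X}{:02X}".format(red, green, blue)

# Full red + full green with no blue mixes to yellow.
print(rgb_to_hex(255, 255, 0))     # #FFFF00
# Equal full amounts of all three primaries mix to white.
print(rgb_to_hex(255, 255, 255))   # #FFFFFF
```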
physics
https://sanjuansre.com/2021/01/19/barometric-pressure
2024-04-24T14:56:12
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819668.74/warc/CC-MAIN-20240424143432-20240424173432-00465.warc.gz
0.960674
348
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__167791272
en
Barometric pressure is the measure of the weight of the atmosphere above us. Barometric pressure varies with altitude; a higher elevation has less atmosphere above it, which exerts less pressure. To keep readings standard across the world, barometric pressure is reported as reduced to sea level. The barometric pressure changes as the weather systems over us change. The pressure differences have a huge effect on the weather. If you know the current air pressure at your home, as well as the pressure trend, you are able to predict certain things about the weather. As a very loose rule, a high-pressure area will be clear, and a low-pressure area will be cloudy and rainy. Many still opt to have barometers in their homes and monitor them with great regularity. There is no need to understand the complexities of all this, since most barometers are marked Stormy, Rain, Change, Fair, and Very Dry, but essentially a falling barometer typically means clouds and rain and a rising barometer typically means clear and sunny. Many have learned that a falling barometer, for whatever reason, means a shift in their mood. Yes, this could be due to weather, or perhaps they are one and the same. But let’s go at this another way. Maybe the weather has nothing to do with anything. Grey, cold, and rainy days can be just as susceptible to the warming influence of enthusiasm as are sunny days. Even lousy days possess hidden wonder. Days that are expected to be wonderful before they begin turn out to be so much more frequently than days greeted with grumbling. Sometimes you just need an attitude adjustment to shift your perception of an entire afternoon and move forward into a pleasant evening.
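The altitude dependence and sea-level reduction described at the top of the piece can be sketched numerically. The exponential (isothermal) model and the roughly 8,400 m scale height are textbook assumptions, not figures from the article:

```python
import math

SEA_LEVEL_HPA = 1013.25    # standard sea-level pressure, hPa
SCALE_HEIGHT_M = 8400.0    # rough atmospheric scale height, metres

def pressure_at_altitude(altitude_m):
    """Isothermal estimate of the pressure (hPa) at a given altitude."""
    return SEA_LEVEL_HPA * math.exp(-altitude_m / SCALE_HEIGHT_M)

def reduce_to_sea_level(station_hpa, altitude_m):
    """Convert a station reading to the sea-level value reported worldwide."""
    return station_hpa * math.exp(altitude_m / SCALE_HEIGHT_M)

# A station at 1,600 m reads noticeably less than a sea-level one...
station = pressure_at_altitude(1600)
print(round(station, 1))
# ...but reducing the reading recovers the sea-level figure (~1013.25 hPa),
# which is why stations at different elevations can be compared directly.
print(reduce_to_sea_level(station, 1600))
```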
physics
https://blueboatcoffee.com/does-liquid-laundry-detergent-freeze/
2024-02-29T02:56:14
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474775.80/warc/CC-MAIN-20240229003536-20240229033536-00448.warc.gz
0.913994
1,563
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__97695926
en
Knowing the characteristics of liquid detergent helps us in many ways. It gives us an idea of how to store it. So, does liquid laundry detergent freeze in the first place? Well, liquid detergents can easily freeze at lower temperatures. However, the freezing point is different for every type of liquid detergent, and the degree of freezing depends on several factors. Does Liquid Detergent Freeze? We already know that liquid laundry detergent can freeze at lower temperatures. Different liquid laundry detergents freeze to different degrees, and some don’t freeze at all. An example will make this easier to understand. Tide is a famous brand, and many people are aware of it. It has come up with different laundry detergents. Tide liquid laundry detergent can freeze, but only to a certain degree, while Tide Pods don’t freeze at all. Even though both are liquid laundry detergents, their properties are different: the proportion of water in the Tide liquid detergent is much higher than in the Tide Pods. So the ability of any liquid laundry detergent to freeze depends on the quantity of water present in it. In simpler words, any liquid laundry detergent with a lot of water will freeze, as water can be turned into ice. In more concentrated detergents, the soap and other components make up a larger share, and their presence prevents freezing. You should always store liquid laundry detergents in a warm place so they don’t get exposed to cold temperatures. Cold temperatures can ruin the effectiveness of your liquid laundry detergent. Can You Use Frozen Liquid Laundry Detergent? There will be cases where your liquid laundry detergent freezes even after you make sure it is kept in a warm place. Due to the surrounding temperature and weather, it might freeze. Now the main question is: can you still use it if the liquid laundry detergent gets frozen? The answer is yes, you can.
And it is safe to use; you don’t have to worry about your clothes getting ruined. Before using it, though, there is one thing you will have to do: you need to thaw your liquid laundry detergent. The term “thawing” means the process of a frozen substance turning back into liquid as it warms up. Now you must be wondering how to thaw liquid detergent. Well, it is pretty simple. All you have to do is take it out of your storage place into your living room or any other room and let it sit for some time. The room temperature will turn it into a liquid again. Don’t even think of using heat on liquid detergent. Some people assume that thawing frozen liquid detergent means applying heat to it, but there is a big reason you shouldn’t: when you add warmth to frozen liquid detergent, there will be a sudden change in its temperature, and due to that sudden temperature change, it might explode. Components That Can Freeze and Components That Can’t Freeze Every liquid detergent has some components in common: for instance, water; perfume and fragrance to give your clothes a pleasant smell; and stain-removing agents. Some of these components can freeze, whereas others can’t. Here is which part is capable of freezing and which isn’t. Water The main component of every liquid laundry detergent is water, and it is the water that makes liquid detergent freeze. Typically, water freezes at 32 degrees Fahrenheit. However, due to the soap in the liquid detergent, the water tends to freeze at a much lower temperature, around 12 degrees Fahrenheit. In most liquid detergents, the proportion of water is at least 10%. When the water in liquid detergent freezes, it gives the detergent a mushy, jelly-like consistency. It will look like the detergent has become more concentrated. Perfume and Fragrances We all want a detergent that will give our clothes a pleasant smell.
The perfume and fragrances present in the detergent can be of two types: one is organic, and the other is chemical-based. Both types are resistant to water, so they can’t freeze in cold temperatures. The higher the proportion of perfume and fragrance in the detergent, the better the smell you will experience from your clothes after washing. Moreover, the primary purpose of perfumes and fragrances is to cancel out the strong, pungent smell of surfactants. Surfactants and Enzymes Surfactants don’t freeze in cold temperatures, because surfactants are hydrophobic, which means they repel water. As surfactants don’t freeze, they remain active in cold temperatures. The purpose of surfactants is to remove tough stains from your clothes without damaging their quality. However, if surfactants are exposed to cold temperatures for too long, they tend to clump and separate, and in this way they start to lose their effectiveness. Enzymes are similar to surfactants: they are protein-based stain removers and don’t freeze in cold temperatures. Enzymes make up around 2% of every detergent. How To Store Liquid Laundry Detergent To keep your liquid laundry detergent from freezing, you have to make sure that you are storing it properly. Here are the steps you can follow. Store in containers with airtight caps Unlike powdered detergents, liquid laundry detergents are not strongly affected by moisture. But that doesn’t mean you can allow moisture to come in contact with your liquid laundry detergent. To store your liquid laundry detergent, you will need containers with airtight caps, because if your liquid laundry detergent is left exposed, it will get contaminated. Air contains many microbes, and this microbial contamination can get mixed into the liquid laundry detergent and from there be transferred to your clothes.
Another reason we need airtight containers is to ensure that children can’t get their hands on the detergent, as it can cause serious health issues. Store the airtight containers at the ideal laundry detergent storage temperature You must already know that liquid laundry detergent is significantly affected by temperature. This is one of the reasons you need to keep the detergent at a constant temperature; for example, you can store the liquid laundry detergent containers in a drawer. No type of liquid laundry detergent can handle either very high or very low temperatures, although it can handle moisture, so you don’t have to worry about that. The ideal storage temperature is between 50 and 77 degrees Fahrenheit, or 10 to 25 degrees Celsius. Many things can go wrong if you keep liquid laundry detergent at hot or cold temperatures: if you store it at a high temperature, the active components in it will destabilize and separate. Within the ideal range, however, the active components won’t get destabilized, so they won’t clump together or separate. In simpler words, your liquid laundry detergent will work perfectly and will be able to remove all the stains from your clothes.
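The temperature figures quoted above are easy to cross-check; a minimal sketch of the Fahrenheit/Celsius conversions:

```python
def fahrenheit_to_celsius(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

def celsius_to_fahrenheit(c):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return c * 9 / 5 + 32

# The recommended storage band: 50-77 °F is exactly 10-25 °C.
print(fahrenheit_to_celsius(50))   # 10.0
print(fahrenheit_to_celsius(77))   # 25.0

# Plain water freezes at 32 °F (0 °C); the soap content pushes the
# detergent's freezing point down to about 12 °F, roughly -11 °C.
print(round(fahrenheit_to_celsius(12), 1))   # -11.1
```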
physics
https://kingfaisalprize.org/professor-rashid-a-sunyaev/
2020-01-20T13:59:00
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250598800.30/warc/CC-MAIN-20200120135447-20200120164447-00264.warc.gz
0.932816
906
CC-MAIN-2020-05
webtext-fineweb__CC-MAIN-2020-05__0__174275225
en
Professor Rashid Alievich Sunyaev is a prominent Russian physicist whose outstanding contributions to high energy astrophysics and cosmology profoundly impacted both fields and placed him at the forefront of contemporary astrophysicists. Born in Tashkent, Uzbekistan on March 1, 1943, Professor Sunyaev graduated from Moscow Institute of Physics and Technology in 1966 and received his Candidate of Sciences (PhD equivalent) and Doctor of Sciences degrees from Moscow University in 1968 and 1973, respectively. Between 1968-1974, he served as a scientific researcher at the Institute of Applied Mathematics and subsequently as Head of the Laboratory of Theoretical Astrophysics at the Space Research Institute of the USSR Academy of Sciences in Moscow. He became full professor at Moscow Institute of Physics and Technology from 1975-2001 and Head of the High Energy Astrophysics Department of the Space Research Institute in Moscow from 1982-2002. He is currently Director of the Max Planck Institute for Astrophysics and Chief Scientist at the Russian Space Research Institute. He is also Russia’s principal scientific investigator of the International Gamma Ray Astrophysics Laboratory (INTEGRAL) of the European Space Agency. For three years beginning 2010-2011, Sunyaev will also hold the position of the Maureen and John Hendricks Visiting Professor in the School of Natural Sciences at the Institute for Advanced Study in Princeton. Professor Sunyaev’s fundamental contributions to the advancement of cosmology and astrophysics during the past thirty years cannot be over-emphasized. Among his most distinguished contributions are: his predictions of acoustic peaks in the cosmic microwave background angular distribution, and the development of both the Sunyaev-Zeldovich effect (S-Z effect) on clusters of galaxies and the theory of disk accretion (the standard Shakura-Sunyaev disk) and the observational appearance of black holes in binary systems and active galactic nuclei.
These and several other achievements by Sunyaev drove theoretical developments to new frontiers and led to the generation of powerful and widely used tools to study structures in the universe. Sunyaev also made significant contributions to space science. He led the team that built the X-ray observatory on the Mir space station and the GRANAT orbiting X-ray observatory and is currently working with his team in preparing the world’s first astronomical X-ray satellite and on other projects related to the Planck Mission of the European Space Agency. Professor Sunyaev’s outstanding accomplishments were recognized by numerous honors and awards. He is a Fellow of the US National Academy of Sciences, the Russian Academy of Sciences and the Royal Netherlands Academy of Arts and Sciences and an honorary member of the Bashkortostan and Tatarstan Academies of Sciences. He is also a member of the International Astronomical Union and former vice-president of its space research committee, member and former vice-president of the European Astronomical Society, member of the American Physical Society, international member of the American Philosophical Society and foreign fellow of the Royal Astronomical Society. In addition, Professor Sunyaev held numerous visiting and honorary professorships, lectureships and visiting scientist/scholar positions at leading universities including Johns Hopkins University, Columbia University, University of California at Berkeley, University of Virginia, Harvard-Smithsonian Center for Astrophysics, the Institute for Advanced Study in Princeton, California Institute of Technology, Cambridge University, Massachusetts Institute of Technology, Ludwig-Maximilian University, Leiden University, Toronto University and the Bose National Centre for Basic Sciences in Calcutta.
Apart from the King Faisal International Prize for Science, Professor Sunyaev was recognized by several prestigious awards including the Bruno Rossi Prize, the Crafoord Prize of the Royal Swedish Academy of Sciences, the Heineman Prize for Astrophysics of the American Physical Society, the Gruber Prize, the Alexander Friedmann Prize of the Russian Academy of Sciences, the Bruce Medal, the Karl Schwarzschild Medal of the German Astronomical Society and the Gold Medal of the Royal Astronomical Society of the UK in cosmology. He published over 300 papers, some of which stand out among the most highly cited publications in astrophysics. Professor Sunyaev has been awarded the prize in recognition of his pioneering and fundamental contributions to astrophysics and cosmology. His theoretical work on the cosmic background radiation laid the foundation for the observational exploration of the structure of galaxies and the universe. His work on black holes and binary stars was critical in advancing the field of X-ray astronomy.
physics
https://www.delta-foundation.org.tw/en/newsdetail/20
2022-05-17T08:17:35
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662517018.29/warc/CC-MAIN-20220517063528-20220517093528-00390.warc.gz
0.961458
382
CC-MAIN-2022-21
webtext-fineweb__CC-MAIN-2022-21__0__257631053
en
Mr. Chao-Lin Kuo, Professor at Stanford University, who devotes himself to the "Big Bang" theory and who has come close to confirming the theory of "cosmic inflation", has won the laurel of "DELTA Young Astronomer". On June 11, Mr. Kuo received the award presented by Mr. Bruce Cheng, founder and honorary chairman of Delta Group, and gave a speech with the theme of "Big Bang and You" at the Sunshine Conference Hall of the Delta Sunshine Building. Nearly 100 Delta colleagues listened to his speech. The research team at the Harvard-Smithsonian Center for Astrophysics of which Mr. Kuo is a member published a paper in 2014. The evidence of the early Big Bang - "primordial gravitational waves" - was found by making use of data from the Antarctic observatory; thus Kuo immediately became a well-known physicist in Taiwan. However, the same team published a paper in 2015 arguing that the existing data still do not rule out the effect of interstellar dust, and they will continue to look for other evidence. Kuo is now working with the astrophysics team of Taiwan University to set up another northern hemisphere observation base next to the Tibetan Plateau Shiquanhe Observatory, which is also an astronomical observation base established with the assistance of Mr. Bruce Cheng. It has become an important astronomical observation base in the world. Mr. Kuo is the 12th astronomer to win the honor of "DELTA Young Astronomer". Mr. Wing Huen Ip, an academician of Academia Sinica, who is in charge of the award selection, points out that these scholars are all stars of tomorrow in the field of astrophysics. He is confident that one of the current winners will have a great opportunity of winning the Nobel Prize in Physics in a few years, which shows the standing of the award in the field of global astrophysics.
physics
http://ishevents.org.uk/descargar/B00K7YGZ6W-wave-optics.html
2019-10-21T17:48:10
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987781397.63/warc/CC-MAIN-20191021171509-20191021195009-00125.warc.gz
0.938194
130
CC-MAIN-2019-43
webtext-fineweb__CC-MAIN-2019-43__0__16436350
en
Suresh Gupta, Sanjay Ghosh, and C.K. Garg, Wave Optics. The study of phenomena that arise when light interacts with matter and the wave features of light constitute the subject of wave optics. This book assists learners in applying the acquired theoretical knowledge to real-life applications. It is written in simple language, the presentation is lucid, and detailed mathematical steps have been worked out. The book focuses on the wave nature of light, wave phenomena and modern optics. Intended for the undergraduate students of physics—both honours and general courses—it will also be useful to those appearing in competitive examinations.
physics
http://www.jach.hawaii.edu/JCMT/surveys/sassy/
2015-01-30T00:23:14
s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122192267.50/warc/CC-MAIN-20150124175632-00098-ip-10-180-212-252.ec2.internal.warc.gz
0.885942
3,113
CC-MAIN-2015-06
webtext-fineweb__CC-MAIN-2015-06__0__35193666
en
Of all the wavebands in the electromagnetic spectrum, the most poorly surveyed to date remains the sub-millimetre, despite it being one of the highest-impact areas in observational astronomy (as evidenced by the high citation counts for JCMT papers and the development of the ALMA project). Only a tiny fraction of the sky has been surveyed at high angular resolution in the sub-mm; while COBE produced a full-sky map in the sub-mm, it did so with only 7° angular resolution. The SCUBA-2 "All-Sky" Survey, or SASSy, is a JCMT Legacy Survey project designed to redress this balance and exploit the rapid mapping capability of SCUBA-2 to ultimately map the entire sky visible from the JCMT to an angular resolution of 14" at 850 µm. The benefits of such a wide-field survey are many, ranging from a complete census of infrared dark clouds (IRDCs) to the potential discovery of some of the most luminous high-redshift galaxies in the Universe. SASSy comes in two phases: a pilot phase in which approximately a quarter of the sky visible from the JCMT will be mapped in the form of two 10° wide strips; and an extended phase in which the mapping strategy to complete the remaining sky is defined by the discoveries made in the pilot phase. The two strips that will be mapped in the pilot phase are shown in the figure below. The first is centred upon the galactic plane and is known as GP-Wide. The second, known as Pole-to-Pole or P2P, is oriented perpendicular to the galactic plane and passes through the North & South Galactic Poles and the North Ecliptic Pole. Each of these strips will be mapped down to a sensitivity of 30 mJy/beam. Following the completion of this phase, our aim is to survey the remaining sky to the same depth, although the precise details will be determined from the results of the pilot phase. SASSy has been awarded 500 hours of time on the JCMT to undertake the pilot phase and will commence in the 2007B semester.
Figure 1: The regions that will be mapped during the pilot phase of SASSy overlaid on an all-sky projection of the IRAS 100 µm sky. GP-Wide is shown by a red outline and P2P is outlined in black. SASSy is optimised to fully exploit poorer weather conditions in order to minimise the impact of such a large survey on the rest of the JCMT Legacy Survey programme. The central role of SASSy is to act as a detection experiment and identify infrared dark clouds, star-forming cores & dusty high-redshift galaxies in as unbiased a manner as possible. By only working at 850 µm we will use the new capabilities of SCUBA-2 (improved sky noise cancellation, sensitivity and water vapour calibration) to work in less favourable weather bands than traditionally used for SCUBA operations. SASSy will operate in the grade 4 weather band, i.e. 0.12 < τ(225 GHz) < 0.2. By running SASSy in this weather band we also provide a fallback project at essentially all Right Ascensions for those surveys that require more demanding weather conditions. Galactic Science Goals SASSy aims to answer the following specific Galactic science questions: How many IRDCs are there in our Galaxy and how are they distributed? The distribution of IRDCs mirrors that of the galactic mid-IR background, thus it is not possible to determine the latitude dependence of IRDCs, their numbers in the outer Galaxy where the mid-IR background is low, nor their numbers in the inner Galaxy where more distant clouds are lost against the bright emission of the Galactic Plane. As the emission level of the clouds is at or below the MSX noise floor, with MSX data alone it is only possible to determine a lower limit to the opacity and hence column density of the clouds. The GP-Wide strip of SASSy is perfectly matched to the MSX and UKIDSS Galactic Plane Survey regions (|b| ≤ 5°).
We will detect over 3000 IRDCs by their sub-mm emission in this region, as well as those in the outer Galaxy missed by MSX, and will determine whether the fall-off in galactic latitude is a real effect or background induced.

What is the relation of IRDCs to star formation and Galactic structure? With 850 µm fluxes from SASSy and MSX constraints we will initially determine an upper limit to the temperature of these clouds, which will be refined by later ASTRO-F far-IR detections and/or Herschel follow-up from either pointed observations or the proposed HiGAL survey (Molinari et al. 2005). In combination with distances determined either from ancillary UKIDSS or 2MASS photometry (Maheswar et al. 2004) or via follow-up millimetre-wave spectroscopy, we will measure the mass, column density and galactic position of each IRDC. The GP-Wide portion of SASSy will comprise a catalogue of over 3000 IRDCs with well-determined basic properties, allowing us to investigate whether these objects as a whole are indeed associated with the early stages of high-mass star formation. As IRDCs occupy a much smaller region of phase space (i.e. possess much simpler kinematics) than either stars or GMCs, it is possible to use the IRDCs as "test particles" to infer the Galactic potential well. Such studies will be a valuable tool to reveal the underlying structure of our Galaxy, especially in the outer Galaxy where our knowledge of the Galactic potential is poor.

Is there an underlying unknown population of star formation? The P2P section of SASSy will allow a first attempt at answering this question. By searching for dense cores over a range of galactic latitudes that cut through a number of known cirrus clouds, we will be able to determine the number of star-forming cores in this region that are similar to those identified by Heithausen et al. (2002) in high-latitude cirrus clouds. We will cross-correlate the positions of the detected cores with those of known clouds from the Dobashi et al.
(2005), Dame et al. (2001) and Magnani et al. (1996) catalogues to infer whether these cores indicate the presence of as yet unknown molecular clouds. We will also establish the temperature, mass, column density and distance to each detected core (using photometric distances where possible for high-latitude objects).

What is the fraction of clustered vs isolated star formation? The contrast between the local star-forming cores identified in the GP-Wide survey region and the isolated cores associated with cirrus clouds in the P2P region will allow us to begin to investigate their differences with respect to clustered star formation in the well-known molecular clouds. For example, do the physical properties of isolated (possibly high-latitude) protostellar cores vary from those found in more clustered regions like Orion? However, only by extending the GP-Wide and P2P regions to encompass a larger fraction of the sky will we be able to quantify the fraction of star formation found in each environment.

What is the answer to the distributed T-Tauri problem? An unbiased volume-limited sample of protostellar cores will enable us to locate the early stages of star formation at their birthplaces, rather than some 3 Myr after they have formed (Feigelson 1996). Only by identifying the birthplaces of the field T-Tauri stars can we solve the distributed T-Tauri problem and set tight constraints upon molecular cloud lifetimes and sizes.

Extragalactic Science Goals

Is there an undiscovered population of extreme-luminosity objects? The favourable K-corrections in the sub-mm mean that SASSy is sensitive to galaxies across the Hubble volume, and the 2000 sq. degrees of the P2P strip at z > 0.5 comprise a co-moving volume some 6x that of the entire z < 0.5 Universe.
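The quoted factor of ~6 can be checked with a back-of-the-envelope comoving-volume integral. The sketch below assumes a flat ΛCDM cosmology (H0 = 70 km/s/Mpc, Ωm = 0.3, ΩΛ = 0.7) and an arbitrary outer cut at z = 10; none of these values are stated in the text, so treat the result as an order-of-magnitude check only:

```python
import math

C_KM_S = 299792.458            # speed of light, km/s
H0 = 70.0                      # Hubble constant, km/s/Mpc (assumed)
OMEGA_M, OMEGA_L = 0.3, 0.7    # flat Lambda-CDM densities (assumed)
FULL_SKY_DEG2 = 41253.0        # square degrees on the full sky

def comoving_distance(z, steps=2000):
    """Line-of-sight comoving distance in Mpc (flat cosmology, midpoint rule)."""
    h = z / steps
    total = 0.0
    for i in range(steps):
        zm = (i + 0.5) * h
        total += h / math.sqrt(OMEGA_M * (1.0 + zm) ** 3 + OMEGA_L)
    return (C_KM_S / H0) * total

def comoving_volume(z, area_deg2=FULL_SKY_DEG2):
    """Comoving volume out to redshift z over a given sky area, in Mpc^3."""
    d = comoving_distance(z)
    return (area_deg2 / FULL_SKY_DEG2) * (4.0 / 3.0) * math.pi * d ** 3

v_near = comoving_volume(0.5)                                     # whole z < 0.5 Universe
v_p2p = comoving_volume(10.0, 2000.0) - comoving_volume(0.5, 2000.0)
print(v_p2p / v_near)   # roughly 6, consistent with the text
```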
With SASSy we therefore have the potential to find galaxies that are too rare to have any local counterparts. The depth of SASSy is carefully optimised to detect extragalactic populations that are inaccessible to deeper and narrower cosmological surveys, and the bright (~150 mJy) galaxies that SASSy is sensitive to are expected to represent the most luminous and briefest phases in the star formation history of galaxies (e.g. Blain et al. 2004).

What are the number counts of bright sub-mm galaxies? Historically the existence and number densities of the most luminous galaxies have provided some of the most demanding challenges to semi-analytic models of galaxy evolution. Accounting for the mJy-level SCUBA starburst galaxy population has been a major theme of semi-analytic descriptions of galaxy evolution in the past few years. The number counts of brighter galaxies (in the > 100 mJy regime) are completely unknown; current models (e.g. Pearson 2005, Granato 2001, Rowan-Robinson et al. 2001) predict counts that differ by up to several orders of magnitude. The depth and area of SASSy are selected to resolve this issue.

What is the fraction of lensed sub-mm sources? Gravitational lensing statistics are a potentially powerful probe of the geometry of the Universe. The extremely steep source counts of sub-millimetre galaxies may make them strongly susceptible to gravitational magnification bias, and some models imply that up to ~40% of galaxies at the ~100 mJy level may be strongly lensed (e.g. Perrotta et al. 2003). If so, bright sub-millimetre galaxy surveys will be an extremely efficient and unprecedented selection method for strong lenses, with the added advantages of immunity to dust extinction effects and well-understood SEDs in this wavelength range. SASSy is ideally placed to find populations of bright, lensed sub-mm sources and to use them to place new constraints on cosmological parameters such as ΩΛ.

The Planck Connection
The Planck satellite will provide an all-sky survey (deeper at the Ecliptic Poles), with channels at 857, 545, 353, 217, 143 and 100 GHz in the mm/sub-mm, as well as 70, 44 and 30 GHz at longer wavelengths. The eventual end-of-mission sensitivity that Planck is capable of reaching in the 850 µm band (353 GHz) is similar to that planned by SASSy. The production of the final Planck point source catalogue, as well as the Planck data on the CMB, requires that each separate emission component (be it galactic dust, cirrus emission, foreground galaxies, S-Z clusters or the microwave background itself) be separated from all the others. By comparing the Planck component-separated maps for the P2P region to the 850 µm map of SASSy we will be able to test the separation process and provide an informed measurement of the small-scale galactic foreground contribution to the Planck CMB team. SASSy is uniquely powerful in this regard since it will be able to sample the sky over the entire range of galactic latitudes at resolutions 20 times higher than the Planck maps.

Is there an undiscovered population of cold local galaxies? Sub-mm surveys of the local Universe have so far mainly been based upon the IRAS point source catalogue (e.g. SLUGS; Dunne et al. 2000) or on HI catalogues (e.g. SINGS). Recently, searches for Type Ia supernova hosts have uncovered galaxies dominated by cold (~20 K) dust with high L850 at z = 0.5 (Farrah et al. 2004). Similar galaxies at z = 0.1 would have F60µm ~ 70 mJy, and so would not appear in any of the IRAS galaxy catalogues, but they would have 850 µm fluxes detectable by SASSy. Are there populations of cold, ultraluminous galaxies in the local (i.e. z ~ 0.1) Universe? The only way to determine this is with a large-area, shallow survey such as SASSy.

The SASSy Consortium

Membership of the SASSy Consortium is still open.
Until survey operations have begun, researchers from JCMT partner countries (UK, Canada, the Netherlands) can apply for membership by contacting the survey coordinators. Membership is also open to potential members from non-partner countries on a case-by-case basis. Non-partner country members will normally need to demonstrate an “added-value” to the consortium for membership to be granted. The decision to admit new members will be made by the coordinating team.

Mark Thompson (University of Hertfordshire), Steve Serjeant (Open University), Tim Jenness (Joint Astronomy Centre), Douglas Scott (University of British Columbia)

George J. Bendo (Imperial College London), Chris Brunt (University of Exeter), Harold Butner (Joint Astronomy Centre), Antonio Chrysostomou (University of Hertfordshire), Dave Clements (Imperial College London), Jim Collett (University of Hertfordshire), Kristen Coppin (University of British Columbia), Iain Coulson (Joint Astronomy Centre), Bill Dent (UKATC), Frossie Economou (Joint Astronomy Centre), Nye Evans (Keele University), Per Friberg (Joint Astronomy Centre), Andy Gibb (University of British Columbia), Jane Greaves (University of St Andrews), Jennifer Hatchell (University of Exeter), Wayne Holland (UKATC), Mike Hudson (University of Waterloo), Andrew Jaffe (Imperial College London), Hugh Jones (University of Hertfordshire), Johan Knapen (University of Hertfordshire), Jamie Leech (Joint Astronomy Centre), Bob Mann (University of Edinburgh), Henry Matthews (HIA/NRC Canada), Toby Moore (Liverpool John Moores University), Ange Mortier (University of Kent), Dave Nutter (Cardiff University), Chris Pearson (ESA/JAXA), Michele Pestalozzi (University of Hertfordshire), Alexandra Pope (University of British Columbia), John Richer (University of Cambridge), Russell Shipman (SRON/Kapteyn Institute), Mattia Vaccari (Imperial College London), Ludovic Van Waerbeke (University of British Columbia), Serena Viti (University College London), Bernd
Weferling (Joint Astronomy Centre), Glenn White (University of Kent/Open University/RAL), Jan Wouterloot (Joint Astronomy Centre), Ming Zhu (Joint Astronomy Centre)
Force curves were made available in early rowing analysis systems because they are the most basic representation of the actual analog data from the force sensor, with no analysis applied. As a result, they're familiar, but that doesn't mean they're ideal. Even after years of looking at and coaching with force curves, there is not much agreement in the sport about what makes a "good" force curve. (Lots of opinions, but not much agreement!) Also, it is challenging for athletes to use force curves as a target for changing their rowing, particularly without a clear target also represented on screen. We have chosen to present primarily numerical data that breaks the stroke down into clearly understood elements because we have demonstrated to ourselves, and to many other coaches and athletes, that simple numbers are much easier to focus on when attempting to make changes or assess progress. Because the EmPower Oarlock logs six key elements on each and every stroke (catch angle, slip, peak force, peak force location, wash and finish angle), we are also able to present a clear "picture" of the stroke very similar to a force curve, which we call a "Stroke Profile". When charted using LiNK Logbook or a simple Excel template, the Stroke Profile illustrates these key characteristics for one or more rowers. In the example below, four scullers were evaluated during a 20' session. We can see that Rower #2 is probably rigged too far into the bow (see their shallow catch angle and deep finish angle). We can also see that Rower #2 has their peak force occurring later in the drive than the other three. The coach can now provide Rower #2 with a target Peak Force Angle of -10 to -20° in order to better match the other rowers in the boat. Rower #2 can monitor himself as he rows future sessions. To create your own Stroke Profiles please use NK LiNK Logbook or see the Excel templates and instructions on how to use them below.
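The coaching workflow described above (log a handful of per-stroke numbers, then flag a stroke whose peak-force angle falls outside a target window) can be sketched in a few lines. The class and function names below are hypothetical illustrations, not part of any NK software:

```python
from dataclasses import dataclass

@dataclass
class Stroke:
    """One stroke's key metrics, mirroring the six elements in the article."""
    catch_angle: float        # degrees
    slip: float               # degrees
    peak_force: float         # e.g. newtons
    peak_force_angle: float   # degrees; where in the drive the force peaks
    wash: float               # degrees
    finish_angle: float       # degrees

def peak_angle_on_target(stroke, lo=-20.0, hi=-10.0):
    """True if the peak-force angle sits inside the -20° to -10° window
    suggested for Rower #2 in the example above."""
    return lo <= stroke.peak_force_angle <= hi

late_peak = Stroke(-55.0, 5.0, 520.0, -2.0, 4.0, 33.0)
print(peak_angle_on_target(late_peak))  # False: peak occurs too late in the drive
```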
Stephen Hawking’s Children Stephen Hawking was an English theoretical physicist, cosmologist, and author who, at the time of his death, was director of research at the Centre for Theoretical Cosmology at the University of Cambridge. Between 1979 and 2009, he was the Lucasian Professor of Mathematics at the University of Cambridge. Stephen Hawking was born on 8 January 1942. He was 76 years old when he died on 14 March 2018. He began his university education at University College, Oxford, where he received a first-class BA degree in physics. In October 1962, he began his graduate work at Trinity Hall, Cambridge, where in March 1966, he obtained his Ph.D. degree in applied mathematics and theoretical physics, specializing in general relativity and cosmology. Hawking’s scientific works included a collaboration with Roger Penrose on gravitational singularity theorems in the framework of general relativity and the theoretical prediction that black holes emit radiation, often called Hawking radiation. Initially, Hawking radiation was controversial. By the late 1970s and following the publication of further research, the discovery was widely accepted as a major breakthrough in theoretical physics. Hawking was the first to set out a theory of cosmology explained by a union of the general theory of relativity and quantum mechanics. He was a vigorous supporter of the many-worlds interpretation of quantum mechanics. Hawking achieved commercial success with several works of popular science in which he discussed his theories and cosmology in general. His book A Brief History of Time appeared on the Sunday Times bestseller list for a record-breaking 237 weeks. His net worth was $20 Million In 1963, Hawking was diagnosed with an early-onset slow-progressing form of motor neuron disease (amyotrophic lateral sclerosis – ALS, for short) that gradually, over the decades, paralyzed him. 
After the loss of his speech, he communicated through a speech-generating device, initially through the use of a handheld switch, and eventually by using a single cheek muscle. Hawking met his future wife, Jane Wilde, at a party in 1962. The couple had three children: Robert, born May 1967, Lucy, born November 1970, and Timothy, born April 1979. In 2006, Hawking and his second wife, Elaine Mason, quietly divorced, and Hawking resumed closer relationships with Jane, his children, and his grandchildren.
Red Herring :: A compound interest in fiber optics :: A handful of firms exploring the opportunities of indium phosphide, a material that is likely to revolutionize the communications industry. Though discovered in 1863, InP was pretty much ignored by the scientific community until the late ’80s, when the aerospace industry took notice. Not long after, InP-based components started appearing in avionics equipment. Indium phosphide isn’t without shortcomings, however. For one, the yields of InP wafers in semiconductor manufacturing are notoriously low, less than 40 percent, because the wafers are fragile and break. That means InP-based materials are expensive, at least for now.
Electrostatic disinfection is an innovative method that saves time, energy and costs across the board because it presents a more efficient alternative to traditional cleaning techniques and cleaning solution applications. The solution contains positively charged particles, which “stick” to the negatively charged surfaces and objects they are sprayed upon. Because the particles in the spray are positively charged, they cling to and coat any surface they are aimed at. They also repel away from other positively charged particles in the spray, dispersing across the entire surface to coat it evenly and effectively. This cleaning method atomizes cleaning solutions to produce an electrically charged spray able to wrap around surfaces of all types for an even coat. As a chemical exits the electrostatic sprayer, it's given a positive charge that is attracted to available negative surfaces. The spray attaches to and collects negatively charged unwanted particles, which are then removed from the environment with a specially designed apparatus. Surfaces that are already covered in the cleaning solution will repel the spray, making the method extremely efficient.
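The attraction and repulsion described above are Coulomb's law at work: opposite charges attract, like charges repel, and the force falls off as the square of the distance. A toy sketch with purely illustrative charge values (the magnitudes are made up for demonstration, not measured spray-droplet charges):

```python
K = 8.9875e9  # Coulomb constant, N*m^2/C^2

def coulomb_force(q1, q2, r):
    """Signed Coulomb force in newtons: a negative result means attraction,
    a positive result means repulsion."""
    return K * q1 * q2 / r ** 2

droplet = 1.0e-12    # positively charged spray droplet (illustrative, coulombs)
surface = -1.0e-12   # negatively charged surface patch (illustrative, coulombs)

print(coulomb_force(droplet, surface, 0.01) < 0)  # True: the droplet is pulled onto the surface
print(coulomb_force(droplet, droplet, 0.01) > 0)  # True: like-charged droplets push apart, spreading the coat
```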
Thorium is a chemical element with the atomic number 90 in the periodic table. The average concentration of this substance in the upper crust of Earth amounts to 10.5 ppm, while the thorium concentration in the middle layer (the mantle) is around 6.5 ppm. The core (the lowest layer) contains 1.2 ppm on average. Being a member of the actinide family of periodic table elements, this naturally occurring radioactive metal has four valence electrons. Its most important use is in nuclear power plants as a nuclear fuel. Due to its radioactivity, thorium is classified as a carcinogenic substance.

Chemical and Physical Properties of Thorium
|Atomic weight (mass)||232.04 g.mol-1|
|Color||A silvery-white lustrous metal|
|Physical state||Solid at 20°C|
|Half-life||From 1.7(+1.7-0.6) microseconds to 1.405(6)×10^10 years|
|Electronegativity according to Pauling||1.3|
|Melting point||1750°C, 3182°F, 2023 K|
|Boiling point||4785°C, 8645°F, 5058 K|
|Van der Waals radius||0.182 nm|
|Ionic radius||0.110 nm (+4)|
|Most characteristic isotopes||230Th, 232Th|
|Electronic shell||[Rn] 6d^2 7s^2|
|The energy of the first ionization||1107.6 kJ.mol-1|
|The energy of the second ionization||1962.4 kJ.mol-1|
|The energy of the third ionization||2774 kJ.mol-1|
|Discovery date||In 1829 by Jöns Jacob Berzelius|

Classified in the periodic table under the symbol Th, atomic number 90, atomic mass of 232.04 g.mol-1, and electron configuration [Rn] 6d^2 7s^2, thorium is a silvery-white, soft and ductile, naturally occurring radioactive metal. It reaches its boiling point at 4785°C, 8645°F, 5058 K, while the melting point is achieved at 1750°C, 3182°F, 2023 K. This member of the actinide family of elements in the periodic table has an electronegativity of 1.3 according to Pauling, whereas the atomic radius according to van der Waals is 0.182 nm. Thorium has a dimorphic structure: it changes from face-centered cubic to body-centered cubic above 1360°C.
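The melting- and boiling-point entries in the table are simple unit conversions of one another; a quick sketch confirms they are mutually consistent:

```python
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9.0 / 5.0 + 32.0

def c_to_k(celsius):
    """Convert degrees Celsius to kelvin."""
    return celsius + 273.15

# Thorium's melting point from the table: 1750 °C
print(c_to_f(1750))         # 3182.0 (matches the 3182 °F in the table)
print(round(c_to_k(1750)))  # 2023   (matches the 2023 K in the table)
```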
With a melting point of 3300°C, thorium oxide melts at the highest temperature among all oxides. Only tungsten, tantalum carbide, and a few other compounds have a higher melting point than element 90.

How Was Thorium Discovered?
After receiving a mineral sample from Hans Esmark, obtained from an island close to Brevik, Norway, the Swedish chemist Jöns Jacob Berzelius (1779–1848) embarked on studying the unusual black substance, which he labeled thorite. After successfully identifying iron, manganese, lead, tin, and uranium in the mineral, Berzelius observed another unfamiliar substance. In 1828, the scientist concluded that the black thorite mineral contained 57.91% of an oxide of the assumed new element. Berzelius named this reactive substance thorium. Attempting to isolate the elemental form of the new element, Berzelius triggered a chemical reaction. First, he produced thorium chloride by mixing thorium oxide with carbon and heating the mixture in a stream of chlorine gas. He then reacted thorium chloride with potassium, obtaining thorium and potassium chloride as a result. In 1898, the German chemist Gerhard Carl Schmidt (1865–1949) and Marie Curie (1867–1934), the first woman to win a Nobel Prize, independently discovered the radioactive properties of thorium.

How Did Thorium Get Its Name?
Thorium is named after Thor, the Scandinavian god of thunder, lightning, and war. According to legend, the Norse god was known for his quick and volatile outbursts of anger, always ready to fight. This relates to the high chemical reactivity and volatile reactions of element 90.

Where Can You Find Thorium?
Element 90 is a naturally occurring radioactive metal. Natural thorium can be found in soil, rocks, fossil fuels, water, plants, and animals. Thorite and thorianite are the minerals in which this radioactive metal mostly occurs.
It can also be found in thorium silicate, monazite, etc. The metal allotrope of this chemical element is also found in minerals such as titanite, betafite, gadolinite, and zircon. Thorium occurs so frequently in nature that mining locations rich in this substance can be found all over the globe, on all continents. However, the largest thorium reserves originate from the mines in Australia, the United States, Russia, Canada, and India. Extraction of thorium as a byproduct of rare-earth elements (REE), as well as isolation of this chemical element from the monazite ore, is the most feasible source of thorium production. For commercial purposes, thorium is also obtained by the methods of electrolysis, extraction, and decomposition with sodium hydroxide. The world's first thorium molten salt reactor (TMSR) experiment since the initial experiment at the Oak Ridge National Laboratory (ORNL) during the 1960s was conducted by scientists at the Nuclear Research and Consultancy Group (NRG) in Petten, Netherlands. The Salt Irradiation Experiment, or SALIENT, has been prepared in collaboration with the European Commission's Joint Research Centre-ITU, Karlsruhe. Currently, only China, India and Indonesia are included in this project.
List of Thorium Minerals
The list of minerals from which thorium can be isolated also contains the items:

Thorium in Everyday Life
Thorium's physical and chemical properties can be used in a variety of ways which are beneficial for our everyday life:
- As more quantities of thorium are being made available, element 90 is researched as a uranium substitute in nuclear reactors for the production of fuel that generates nuclear energy;
- Thorium used to be applied in the manufacturing of carbon arc lamps, as well as in gas-light mantles that emit intense white light;
- Thorium dioxide (ThO2) is used as a control mechanism for the small amounts of plutonium and tungsten applied in the production process of the metal spirals in electric lamps;
- When added to glass, thorium improves its refractive index and decreases dispersion. The thorium-enriched glass is used in the manufacturing of camera lenses and scientific equipment. Over time, this type of glass gets a slightly yellow tint, but can be cleared again by exposure to high levels of UV light.
The health risks of using lenses made with thorium dioxide are minimal;
- Element 90 is often used in radiometric dating of fossils, seabeds and mountain ranges;
- Thorium dioxide (thoria) is added to tungsten electrodes for arc welding to improve their strength and stability;
- In Mg-Th alloys, thorium is combined with magnesium metal to increase both the creep resistance and the strength of parts used in aircraft engines and rockets;
- Crucibles, scientific instruments, and heat-resistant ceramics owe their resistance to high temperatures to thorium dioxide as one of their main components;
- This chemical element is also one of the catalysts that participate in the production of sulphuric acid, in the conversion of ammonia to nitric acid, and in petroleum cracking;
- Uranium-233 isotopes bred from thorium can be used in nuclear weapons;
- The filaments of magnetron tubes, which are used to generate microwave frequencies, contain traces of thorium;
- Thorium is also used in gas mantles that produce light in gas lamps;
- Until the 1950s, thorium dioxide was used in medical radiology as a contrast agent (under the label Thorotrast) for making diagnostic X-ray images. It was discontinued after studies related patients' thorium exposure to an increased risk of liver tumors;
- Thorium fluoride provides the antireflection properties of optical coatings.

How Dangerous Is Thorium?
Thorium metal dust is highly pyrophoric, which increases the risk of fire and explosion. It is able to ignite spontaneously when exposed to air and burns brilliantly with a white light. Exposure to thorium may occur in several ways:
- Intravenous injection;
- Ingestion (via contaminated water or food);
- Absorption through the skin.

Initially, the affected individual may experience symptoms such as:
- Eye irritation;
- Skin irritation;
- Nausea and vomiting;
- Severe bouts of cough;
- ARTI (Acute Respiratory Tract Infection);
- Blood disorders.
Prolonged exposure to high levels of thorium can be lethal. In the human body, element 90 typically accumulates in the bones, as well as in soft tissues and organs. As a consequence, cancers of the liver, bone, and pancreas may develop in individuals exposed to high levels of this carcinogenic substance.

Environmental Effects of Thorium
Despite being one of the more frequently occurring chemical elements in nature, this carcinogenic substance is not hazardous to the health of the biological, geological, and aquatic systems in the environment. This is mostly because exposure to high levels of this radioactive substance may occur only near thorium-mining areas and factories that work with thorium or nuclear waste. Also, thorium-bearing radioactive waste takes more than 500 years to decay to safe levels. During this period, it poses an environmental threat due to its radioactivity.

Isotopes of Thorium
Element 90 has 31 observed isotopes, seven of them naturally occurring. Since thorium is a radioactive substance, all its isotopes are unstable. While most of the thorium radioisotopes have half-lives of several microseconds to several minutes, the 232Th isotope has a half-life of 1.405(6)×10^10 years.

The Thorium Cycle
Thorium reactors are based on the thorium fuel cycle, which uses the thorium-232 isotope as a fertile material. In the thorium cycle of reactions, the thorium-232 isotope is transformed by thermal neutrons into the fissionable uranium-233 isotope. The uranium-233 isotope is fissile on its own, i.e. its fission can provide neutrons for a new thorium cycle. A Th-232 nucleus that captures a neutron becomes Th-233, which undergoes beta decay (with a half-life of 22 minutes) to protactinium-233. The protactinium isotope further decays to uranium-233 by undergoing another beta decay. This parallels the uranium fuel cycle in fast breeder reactors.
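The 22-minute β-decay step just described can be put on a timescale with the standard decay law N(t)/N0 = 2^(−t/T½). A minimal sketch (it tracks only a single parent nuclide, not the full decay chain):

```python
def remaining_fraction(t, half_life):
    """Fraction of a radioactive sample still undecayed after time t,
    following N(t)/N0 = 2 ** (-t / half_life)."""
    return 2.0 ** (-t / half_life)

# Th-233 (half-life 22 minutes, per the text) decaying to Pa-233:
print(remaining_fraction(22.0, 22.0))   # 0.5     -> half has decayed after one half-life
print(remaining_fraction(110.0, 22.0))  # 0.03125 -> about 3% left after five half-lives
```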
The described chain-reaction sequence can be summarised by the following nuclear reactions:

232Th + n → 233Th
233Th → 233Pa + β−
233Pa → 233U + β−

It has been observed that when the uranium-235 content burns down to nearly 0.3%, the highly radioactive residue of the fuel contains radioactive isotopes of iodine, plutonium, americium, and technetium. During the Cold War, the United States reportedly produced about 2 tonnes of uranium-233 from thorium in plutonium production reactors.

|Nuclide||Historic name||Z||N||Isotopic mass (Da)||Half-life||Decay mode||Daughter||Spin||Natural abundance (mole fraction)|
|216Th|| ||90||126||216.011062(14)||26.8(3) ms||α (99.99%)||212Ra||0+|| |
|225Th|| ||90||135||225.023951(5)||8.72(4) min||α (90%)||221Ra||(3/2)+|| |
|227Th||Radioactinium||90||137||227.0277041(27)||18.68(9) d||α||223Ra||1/2+||Trace|
|228Th||Radiothorium||90||138||228.0287411(24)||1.9116(16) y||α||224Ra||0+||Trace|
|229Th|| ||90||139||229.031762(3)||7.34(16)×10^3 y||α||225Ra||5/2+||Trace|
|230Th||Ionium||90||140||230.0331338(19)||7.538(30)×10^4 y||α||226Ra||0+||0.0002(2)|
|231Th||Uranium Y||90||141||231.0363043(19)||25.52(1) h||β−||231Pa||5/2+||Trace|
|232Th||Thorium||90||142||232.0380553(21)||1.405(6)×10^10 y||α||228Ra||0+||0.9998(2)|
|234Th||Uranium X1||90||144||234.043601(4)||24.10(3) d||β−||234mPa||0+||Trace|

List of Thorium Compounds
Thorium has four outer-shell electrons. All valence electrons of this highly reactive and electropositive element are able to participate in chemical bonding. It readily reacts with oxygen, hydrogen, nitrogen, the halogen elements, and sulfur when exposed to high temperatures. With phosphorus and carbon, thorium forms binary compounds.
Out of the many compounds prepared with thorium, the following are most common:
- Thorium dioxide
- Thorium monoxide
- Thorium oxalate
- Thorium tetrafluoride
- Thorium(IV) carbide
- Thorium(IV) chloride
- Thorium(IV) hydroxide
- Thorium(IV) iodide
- Thorium(IV) nitrate
- Thorium(IV) orthosilicate
- Thorium(IV) sulfide

5 Interesting Facts and Explanations
- Thorium and uranium are not only the most stable actinides but also the only members of the actinide group of the periodic table that can be safely studied in a regular laboratory.
- Substances that spontaneously ignite upon exposure to air at or below 54°C (129°F), or shortly after being exposed to air, are referred to as pyrophoric substances (from the Greek word 'πυρφόρος / pyrophorus', meaning 'fire-bearing').
- In 2011, the Chinese Academy of Sciences launched an R&D program on LFTRs. Liquid fluoride thorium reactors (LFTRs) produce less waste during energy production than reactors powered by uranium. As a comparison, a traditional pressurised water reactor (PWR) would need to burn 250 tonnes of uranium to produce the same amount of energy. Also, no solid fuel rods (or chemical reprocessing) are needed, because LFTRs use thorium in its natural state.
- The process of isolating an element from its ore used by the discoverer of thorium was a very familiar one to Berzelius' fellow chemists: Ørsted isolated aluminum by the same method in 1825, while in 1828 Wöhler and Bussy succeeded in isolating beryllium in the same way.
- Thorium is the second naturally occurring element to have been identified as radioactive, after uranium.
Space Explorers applets are an exceptional tool to not only illustrate complex concepts, but also allow students to make and test predictions. The inquiry-based applets complement the lesson plans and simulations while allowing students to learn at their own pace. These are ideal for visual and hands-on learners. The Interactive Applets were created to allow students to test out various scenarios to better understand the forces involved in asteroid impacts, trajectory correction techniques, launch windows, and aerobraking calculations. These allow for a fun, fast-paced learning experience related to NASA missions. The Applets are designed to be completed online, and include corresponding lesson plans linking contextual and activity-based exercises on key science concepts with fun, interactive activities.
physics
http://www.educationalmurals.com/Murals/Chelmsford/Chelmsford.html
2013-12-13T22:22:03
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386165000886/warc/CC-MAIN-20131204135000-00088-ip-10-33-133-15.ec2.internal.warc.gz
0.968719
113
CC-MAIN-2013-48
webtext-fineweb__CC-MAIN-2013-48__0__82209497
en
Murals by Yetti Frenkel Chelmsford Public Library, MA Weather Ballon Mural in Memory of Steve Maloney Steve Maloney was a trustee of the Chelmsford Public Library. He was a meteorologist, and is shown in the mural releasing a weather balloon while his daughters look on. The weather balloon appears at different places in the mural as it moves higher in the atmosphere, traveling through different seasons and weather, eventually disappearing into the sky. Link to the Chelmsford Library's page about the mural and Steve Maloney
physics
http://www.ecprogress.com/index.php?tier=1&article_id=7360
2017-02-27T13:43:51
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172831.37/warc/CC-MAIN-20170219104612-00107-ip-10-171-10-108.ec2.internal.warc.gz
0.936077
399
CC-MAIN-2017-09
webtext-fineweb__CC-MAIN-2017-09__0__113628476
en
Celebrate the New Year with a meteor shower Stormy skies and the glare of the nearly full moon may have spoiled our view of December's Geminids meteor shower but Utahns will get another chance to view a meteor shower when January's Quadrantids reaches its predicted peak during the early morning hours of Jan. 3. According to NASA Solar System Ambassador to Utah Patrick Wiggins, "During the time of the peak, observers located away from city light pollution may see more than 100 meteors per hour." And as a bonus, the peak occurs when the moon is not in the sky, making rural skies even darker, increasing chances of a good show. Now if only the weather will cooperate. The Quadrantids and most other showers are best observed between midnight and sunrise as it's at that time the observer's place on Earth is more directly facing the oncoming meteoroid swarm. Meteor showers get their names from the constellation from which the meteors appear. For example August's Perseids seem to pour out of the constellation Perseus and December's Geminids from Gemini. The same applies to the Quadrantids however their ancient constellation, Quadrans Muralis is no longer recognized. "Many people call meteors falling stars or shooting stars," says Wiggins. "They're actually tiny specks of rock that burn up and turn to ash when they slam into the Earth's extreme upper atmosphere." Telescopes and binoculars should not be used to view this or any meteor shower because they so severely restrict the observer's field of view. Wiggins notes that his favorite winter meteor observing equipment consists of nothing more than a lawn chair, sleeping bag and something hot to drink. Some "Quads" may also be seen the morning before and the morning after the peak but their numbers will almost certainly be far fewer. For additional astronomical information log on to Wiggins' Solar System Ambassador web site at http://utahastro.info.
physics
http://euroargo-edu.org/floatdata.php?float=6900701
2018-11-14T05:41:42
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741628.8/warc/CC-MAIN-20181114041344-20181114063344-00380.warc.gz
0.938144
1,282
CC-MAIN-2018-47
webtext-fineweb__CC-MAIN-2018-47__0__24848856
en
Page loading ... Please wait. This page accesses data from several sites, so loading may take some time. This ARVOR float was launched by French scientists west of Portugal at 40.5°N, 11.1°W on 1 November 2011. It has recorded a total of 121 profiles, and made its last report on 6 February 2015 from 42.4°N, 11.9°W. If the strait of Gibraltar were closed Mediterranean sea level would fall by about 1m each year! This is because evaporation is far greater than the input of fresh water from rain and river flow. To compensate, Atlantic water flows in through the Strait of Gibraltar, and continues eastwards in a surface layer about 150m thick. Along its way the surface water becomes more and more salty, reaching over 38 psu south of Turkey. During winter this warm, salty water cools and sinks to become Mediterranean Intermediate Water (MIW), which is found at a depth of 150-600 m. MIW is still quite warm (about 14-15°C), and very salty (about 37-38 psu, compared to the 34-36 psu typical of Atlantic water). It flows slowly back towards the west, and eventually leaves the Mediterranean through the Strait of Gibraltar, underneath the Atlantic inflow. Profiles of temperature (left) and salinity (right) from Argo float 6900701. The profiles show how temperature (T) and salinity (S) change with depth from the surface to 2000m. Early profiles are dark blue, the latest profiles are deep red or brown. Click on the images for larger plots. Source of plots: IFREMER/Coriolis. The temperature and particularly the salinity profiles give us some interesting clues about how water moves below the sea surface in the North Atlantic west of Spain. Look at the T-profiles above. At what depth range do you find the greatest variability? The greatest temperature variability is found between 600m and 1600m depth. This is also the depth with the greatest variation in salinity. 
Surprisingly the temperature variability is greater at 600-1600m than at the surface, where heating in the summer and cooling in winter has the greatest impact. Such seasonal effects decrease with depth, as you can see if you look at other floats from similar latitudes. The variation in salinity is even more marked than the variation in temperature. Why is this? The Mediterranean Sea is a warm, dry, area. Each year more water is lost from the sea surface by evaporation than is gained by rainfall or the flow into the Mediterranean from rivers like the Nile. As a result Mediterranean water is saltier and denser than the water from the Atlantic. In the Strait of Gibraltar the Atlantic water flows into the Mediterranean on the surface, while the denser Mediterranean water flows out into the Atlantic below this. CLOSE A is Atlantic water near the surface (T ≈ 10-15 °C, S ≈ 35.3-36.0 psu). Although the Mediterranean outflow is warmer than most of the Atlantic surface water, its high salinity makes it denser, even after some mixing with Atlantic water. You can see by following the dotted red lines of equal density on the diagram (right). As a result the Mediterranean water flows downhill from the 300m deep sill at Gibraltar. Ocean salinity at 1000m. Subsurface water in the Mediterranean is around 12° C and 38 psu. As it flows out through Gibraltar and down the slope of the sill, it mixes with the Atlantic water above until it reaches a depth of about 1000m or more, where its density equals that of the water around it. We can see the spread of the Mediterranean water from an analysis of all the Argo data carried out by Mercator Ocean (left). This shows that the saltiest water is found mostly northwest of Gibraltar. The rotation of the earth makes the current veer to their right, so the deep salty outflow is concentrated in an underwater river running along the sea floor parallel with the south coast of Spain. 
When it reaches Cape St Vincent it turns to the right and heads northwards. Time series of temperature (left) and salinity (right) from Argo float 6900701. The sections show all the temperature (T) and salinity (S) profiles measured by the float during its life-time side by side. Each profile is represented by a very thin column where deep red is the highest values and deep blue the lowest. The colour bars on the right relate the colours to actual data values. Profile numbers are given along the top of the plot, with corresponding measurement dates along the bottom. Click on the images for larger plots. Source of plots: IFREMER/Coriolis. Ocean salinity at 1000m. Source: Mercator ocean analyses The Mercator analysis map shows that the Mediterranean water does not spread evenly out across the Atlantic, but is patchy with changes in salinity even over a short distance. This patchiness is also evident in the salinity section above (right). As the warm, salty Mediterranean water flows out across the Atlantic, eddies pinch off and drift southwestward. These lenses of warm, solty water rotate in a clockwise (=anticyclonic) direction and are called 'Meddies'. Typical Meddies are around 800 metres thick and 100 kilometers in diameter. Their salinity is about 0.8 psu higher than the surrounding ocean water. They are quite long lived because they rotate rapidly as they move slowly through the calm waters around them. Look at the float trajectory in Google Earth to see where the float has been. (If in doubt about how to reveal the float tracks, see our Google Earth screenshot for help.) Compare this to the maps of temperature and salinity for different depths available for example from Mercator ocean analyses. The Argo Information Centre has more information about this float. You can also download the data from one of the Data Centres - just select Data > Data Downloads. There are many different formats available. ASCII data can be viewed in spreadsheets such as Excel. 
The other data types may require more specialist software.
physics
https://www.ukca.ac.uk/wiki/index.php/Brief_summary_of_GLOMAP-mode:_global_size-resolved_aerosol_microphysics
2020-06-05T06:13:55
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348493151.92/warc/CC-MAIN-20200605045722-20200605075722-00071.warc.gz
0.716023
597
CC-MAIN-2020-24
webtext-fineweb__CC-MAIN-2020-24__0__42945032
en
Brief summary of GLOMAP-mode: global size-resolved aerosol microphysics The UKCA aerosol scheme is a simplified version of the GLobal Model of Aerosol Processes (GLOMAP) (e.g. Spracklen et al., 2005). GLOMAP is designed to simulate the aerosol particle size distribution and represents the key microphysical processes (nucleation, coagulation, condensation and cloud processing) which control it. The simpler GLOMAP version used in UKCA, called GLOMAP-mode, simulates these processes within a two-moment modal aerosol dynamics approach (number and mass in each mode), which requires much fewer transported tracers (typically 15 to 30) than the original bin-resolved GLOMAP scheme which requires 100-200 transported tracers for a multi-component simulation (e.g. Spracklen et al, 2008). The dynamically evolving particle size distribution in UKCA leads to improved simulation of the number concentration of the sub-set of particles that interact with the clouds (cloud condensation nuclei, or CCN). The incorporation of the UKCA aerosol in HadGEM3 will represent a step-change in the capability of the UK climate model to simulated aerosol indirect effects on climate. A recent publication in Geoscientific Model Development (Mann et al, 2010) describes GLOMAP-mode in detail and presents evaluation of the model in the chemical transport model against a wide range of observations. Spracklen, D. V., Pringle, K. J., Carslaw, K. S., et al.: A global off-line model of size-resolved aerosol microphysics: I. Model development and prediction of aerosol properties, Atmos. Chem. Phys., 5, 2227--2252, doi:10.5194/acp-5-2227-2005, 2005. Spracklen, D. V., Carslaw, K. S., Kulmala, M., et al.: Contribution of particle formation to global cloud condensation nuclei concentrations, Geophys. Res. Lett., 35, L06808, doi:10.1029/2007GL033038, 2008. Mann, G. W., Carslaw, K. S., Spracklen, D. 
V., et al.: Description and evaluation of GLOMAP-mode: a modal global aerosol microphysics model for the UKCA composition-climate model. Geosci. Model Dev., 3, 519-551, 2010. For more details about the GLOMAP-mode aerosol scheme and the interface to the rest of UKCA and other parts of the UM, please contact Graham Mann (NCAS, University of Leeds) [email protected].
physics
http://www.trenalasnubes.com.ar/turismo_salta/en_tren_a_las_nubes.html
2014-04-25T00:32:59
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00183-ip-10-147-4-33.ec2.internal.warc.gz
0.954042
282
CC-MAIN-2014-15
webtext-fineweb__CC-MAIN-2014-15__0__130434470
en
The most amazing train in the world, the one that takes you to the clouds, reaches a height of 4,200 meters in its 217 km trip. It is one of the highest railways in the world, taking its way across the high picks of the Cordillera de los Andes, surrounded by striking sceneries. The train departs from the city of Salta, passes through the Valle de Lerma, enters the Quebrada del Toro and finally reaches La Puna. It takes its name Tren a las Nubes, from the clouds that are often seen under bridges and around slopes. The number of spirals, viaducts, tunnels and other twists and turns that the train passes through arises from a decision made by the designer of this project, the US engineer Richard Maury. He took into account the principle of adhesion of train wheels to the railways and the laws of physics, ruling out the funicular system commonly used, so that the train may safely reach the expected heights. The train has no cogwheels, not even for the steepest slopes, since the railways are peculiarly arranged, running through a system of zigzags and spirals. The train leaves from the General Belgrano station in the city of Salta, 1187 meters above the sea level, and ends its journey at the viaduct.
physics
https://www.chulilla.net/en/snowfalls-in-chulilla/
2024-02-28T05:14:45
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474697.2/warc/CC-MAIN-20240228044414-20240228074414-00772.warc.gz
0.956881
461
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__89092583
en
5 de Enero de1997 It was the great snowfall of the end of the millennium, an early gift from the wise men. As almost everyone will know, it rarely snows in Chulilla. For this to happen, a number of conditions must be met: 1. There must be a mass of cold air in the medium-high layers of the atmosphere, with approximate temperatures of -5º to about 1500m high (about 850 mb of atmospheric pressure). 2.We must have a slight entry of winds of maritime origin. In our area the northeast is more favorable. Analyzing these situations we have to say that for the first to occur, a mass of continental air generated in Eastern Europe needs to move westward, affecting us at least a little. These air masses cannot be produced in small regions such as the Iberian Peninsula, they need large continental regions in which the earth can cool quite quickly (compared to water). Taking a look at the world map we can see that only the northern part of the American continent and the Eurasian continent can generate such cold air mass. In our case, things are even more complicated because they are located in the surroundings of a warm sea. This causes that if the incidence of the east wind is notorious, the temperatures will rise and with this the risk of snowfall will decrease at least at low levels. Analyzing the case at hand, that is, the snowfall of January 5, 1997, we see that by reviewing the 850 mb map, which indicates the level from which the snow begins to set, the snow level was more or less at 400 m. Simply reading the newspaper the next day, we found that there were isolated snowfalls below that level. 
This can be easily explained, the wind, although of maritime component, was negligible, which contributed to produce a horizontal thermal stratification, which, in a valley sufficiently closed like ours (the Turia valley) if it snowed in the high part, little by little the cold air was descending altitude taking advantage of its greater weight and snowing in the lower levels of the valley. That explains how it could snow in the garden about 250 m and yet it did not curdle in Casinos at 350 m.
physics
https://golfexperiments.com/best-golf-balls/how-many-dimples-are-on-a-golf-ball/
2024-04-16T03:57:43
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817043.36/warc/CC-MAIN-20240416031446-20240416061446-00663.warc.gz
0.929186
1,438
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__17858535
en
Purpose of golf ball dimples Golf ball dimples play a crucial role in the performance of the ball during a game. Their primary purpose is to reduce air resistance or drag while the ball is in flight. Dimples create turbulence in the layer of air surrounding the ball, which reduces drag and allows the ball to travel further and more accurately. A smooth golf ball would experience much higher air resistance, resulting in a shorter and less accurate flight. The dimples create a thin layer of turbulent air that clings to the surface of the ball, reducing the pressure drag and allowing the ball to cut through the air more efficiently. This results in longer, more stable flights and increased lift, essential factors for any golfer aiming to improve their game. The number of dimples on a golf ball can vary, typically ranging from 300 to 500. Dimple count, size, shape, and depth can all impact the ball’s flight characteristics. Manufacturers continually experiment with different dimple designs to optimize performance, balancing factors such as distance, control, and consistency. History of golf ball dimples Early golf balls (featherie, gutta-percha) Before the invention of dimpled golf balls, early golfers used balls made from various materials. The featherie, used from the 15th to the mid-19th century, was a hand-sewn leather pouch stuffed with goose feathers. The gutta-percha ball, introduced in the mid-19th century, was made from the sap of the gutta-percha tree and was the first mass-produced golf ball. The Haskell ball, invented by Coburn Haskell in 1898, was a groundbreaking innovation in golf ball design. Haskell discovered that wrapping rubber threads around a solid core created a more resilient and longer-flying ball. While experimenting with various surface textures, he noticed that balls with nicks and cuts seemed to fly further. This led to the introduction of dimples, which significantly improved the ball’s aerodynamic performance. 
Since the introduction of the Haskell ball, golf ball manufacturers have continually refined dimple designs to improve performance. Technological advancements have allowed for various dimple shapes, depths, and patterns to be tested and optimized. These innovations have led to significant improvements in distance, accuracy, and overall playability. Today’s golf balls are the result of over a century of experimentation and progress, all aimed at helping golfers achieve their best possible game. Factors that influence dimple count One of the primary factors influencing dimple count is the aerodynamic performance of the golf ball. When a golf ball is struck, it generates lift as it moves through the air. The lift force, combined with the ball’s backspin, helps keep the ball airborne. The optimal number of dimples on a golf ball is determined by balancing lift and drag, ensuring that the ball remains stable and experiences minimal resistance during flight. Another factor that affects dimple count is the reduction of drag. Dimples create turbulence in the boundary layer of air surrounding the golf ball, which reduces the pressure drag acting on the ball. The ideal dimple count minimizes drag while maximizing lift, allowing the ball to travel the furthest distance possible. Manufacturers must find the right balance between too few and too many dimples, as both extremes can negatively impact the ball’s flight characteristics. Manufacturers also consider the trade-off between distance and control when determining the optimal dimple count. A golf ball with more dimples may generate more lift, resulting in longer shots, but it could also be harder to control, especially in windy conditions. Conversely, a ball with fewer dimples may offer better control but sacrifice distance. Balancing these factors is crucial for designing a golf ball that appeals to a wide range of golfers. 
Common dimple counts and patterns Standard dimple counts The standard dimple count on most golf balls ranges between 300 and 500. This range has been found to provide a good balance between aerodynamic performance, distance, and control. However, there is no universally agreed-upon “ideal” dimple count, as the optimal number depends on various factors, including the golfer’s skill level and preferences. Dimple size, shape, and depth also play a significant role in golf ball performance. Dimples can be circular, elliptical, or polygonal, with each shape offering unique aerodynamic properties. The depth of a dimple can also influence the ball’s flight characteristics. Manufacturers often experiment with different dimple shapes and depths to optimize performance for specific player types or conditions. Dimple patterns on golf balls can take various forms, including hexagonal, icosahedral, and custom designs. Hexagonal patterns, where dimples are arranged in a honeycomb-like structure, promote uniform coverage and efficient air flow around the ball. Icosahedral patterns use a series of interconnected triangles to distribute dimples evenly across the ball’s surface. Custom patterns may combine elements of different designs or feature unique dimple arrangements tailored to specific performance goals. Manufacturers continue to innovate and explore new dimple patterns to improve golf ball aerodynamics and overall performance. The impact of dimple count on a golfer’s game How dimples affect ball flight The dimple count on a golf ball directly affects its flight characteristics. The optimal number of dimples can vary depending on the golfer’s swing speed, skill level, and playing conditions. Golf balls with a higher dimple count typically generate more lift, leading to longer shots. However, they may also be more challenging to control in windy conditions. On the other hand, balls with a lower dimple count can offer better control, but may sacrifice distance. 
Understanding how dimple count impacts ball flight is essential for golfers looking to improve their game. Selecting the appropriate golf ball for a golfer’s skill level and preferences is crucial for maximizing performance. Beginner golfers may benefit from balls with a lower dimple count, as these can offer better control and help build confidence. More advanced golfers may prefer balls with a higher dimple count to maximize distance and take advantage of their refined swing technique. Golfers should experiment with various dimple counts and patterns to determine which combination best suits their unique playing style and skill level. In professional golf, the dimple count on a golf ball plays a critical role in determining the outcome of a match. Professional golfers are continually seeking ways to optimize their performance, and selecting the right golf ball is an essential component of this process. Golf ball manufacturers often collaborate with professional golfers to develop custom dimple patterns that cater to their specific needs and preferences. This collaboration ensures that the golf balls used in professional tournaments are designed to deliver the highest possible level of performance. By understanding the relationship between dimple count and golf ball performance, professional golfers can make informed decisions when choosing their equipment and gain a competitive edge on the course.
physics
http://www.musculographics.com/html/products/SIMM.html
2017-10-21T02:48:50
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824543.20/warc/CC-MAIN-20171021024136-20171021044136-00335.warc.gz
0.894172
1,576
CC-MAIN-2017-43
webtext-fineweb__CC-MAIN-2017-43__0__88726418
en
SIMM (Software for Interactive Musculoskeletal Modeling) is a powerful tool kit that facilitates the modeling, animation, and analysis of 3D musculoskeletal systems. In SIMM, a musculoskeletal model consists of representations of bones, muscles, ligaments, and other structures. Muscles span the joints and develop force, thus generating moments about the joints. SIMM enables an analysis of a musculoskeletal model by calculating the joint moments that each muscle can generate at any body position. By manipulating a model using the graphical interface, the user can quickly explore the effects of changing musculoskeletal geometry and other model parameters on muscle forces and joint moments. SIMM is used by hundreds of biomechanics researchers to create computer models of musculoskeletal structures and to simulate movements such as walking, cycling, running, jumping, weight lifting, reaching, and throwing. Using SIMM, researchers have created models of the upper and lower extremities to examine the biomechanical consequences of surgical procedures including tendon transfers, osteotomies, and total joint replacements. A lower-extremity model was used to estimate musculotendon lengths, velocities, moment arms, and induced accelerations during normal and pathologic gait. Studies have been conducted to investigate the treatment of individuals with spinal cord injury, to analyze joint mechanics in subjects with patellofemoral pain, to calculate forces at the knee during running and cutting, and to investigate causes of abnormal gait. SIMM has also helped bring simulation to biologists who have created computational models of the frog, tyrannosaur, cockroach, cheetah, and other animals. Leading sports performance and clinical gait analysis centers use SIMM to visualize the relationships between external forces, muscle activity, and the resulting body motion. 
Using SIMM, our movement analysis customers display 3D animations and help determine ways to improve an individual’s performance. SIMM is also being used to analyze and improve the performance of various internal and external prosthetics, as well as exoskeletons and other robotic assistive devices. The dynamics module in SIMM allows users to perform forward and inverse dynamic simulations on musculoskeletal models. A forward simulation can calculate the motion and contact forces resulting from the specified muscle excitations, and an inverse simulation can calculate the muscle activations and forces required to generate the specified motion. These simulations can lend considerable insight into the causes of movement abnormalities, the expected effects of possible treatments, and the design of corrective devices. Industrial designers have used dynamic simulations in SIMM to improve the design of knee implants and to analyze muscle fatigue during repetitive tasks. Motion Capture Importer. SIMM can import motion capture files (C3D, TRB, TRC) for playback and detailed musculoskeletal analysis. It can also import data in real-time from a Motion Analysis system and animate a 3D model while the data is being captured. Gait Reporting. The Motion Reporter tool creates reports of sets of motions. The reports contain averages, standard deviations, and comparison to normal data, and include formatted Excel graphs for easy analysis. For gait reports, the tool calculates gait events and automatically divides recorded motions into left and right strides. Scripting. The Scripting tool executes scripts with commands to load models and motion data, perform dynamic simulations, and create plots and reports. Scripts can also be used to save various SIMM tool settings so that they are restored the next time you start SIMM or load a particular model. Model Scaling. 
The scaling utility automatically scales a generic model to match any size individual, based on measurements it makes from a static motion capture trial. All model components, including muscle paths, are scaled with the body segments (scaling of muscle force-generating parameters is optional). Dynamic Simulation. With the dynamics module, you can perform forward and inverse dynamic simulations on any SIMM model. Simulations can be controlled from within the SIMM GUI, or as stand-alone programs. The dynamics module contains many sophisticated algorithms for detecting contact between bodies and for optimizing muscle activations, but it is also extensible. All of the source code for dynamic simulations is accessible to the user, so it can be modified or enhanced as needed. Muscle Wrapping. You can interactively define spheres, ellipsoids, cylinders, and torii for muscle-tendon actuators to wrap over. SIMM automatically calculates muscle paths over these wrapping objects. Muscle lengths, forces, and moment arms are all calculated correctly for the wrapped muscle. Live Plots. A live plot curve is a plot of a muscle property that is updated automatically whenever any property of the muscle changes. Live plot curves are very useful when creating and modifying muscles because you can instantly observe the effects of moving an attachment point or a wrap object (or changing any other property) on the length, moment arm, and force of the muscle. Bone Deformations. A deformation tool allows you to warp bones into new shapes to model various bony deformities. Deformations such as tibial torsion and femoral anteversion are straightforward to model and can be implemented with a range of severity of deformation. Movie import/export. You can import videos associated with motion data and play them on a virtual screen in the model window during the motion animation. This makes it easy to do a side-by-side comparison of the model animation and the live video. 
You can also export movies of the model window to an AVI file. Skins. In SIMM, a skin is any 3D polygonal surface that is linked to one or more body segments. By linking different regions of a skin to different segments, the skin can be made to deform when the joints move. Skins can be used to represent anatomical skin, muscle surfaces, fascia, ligaments, or any other deformable surface. They can also be rendered with texture maps to enhance the realism of the display. GUI tools. Many new user interface elements make it easy to interact with a model and to change the display properties of the bones, muscles, and other model components. Full support for “drag and drop” has been added for all SIMM file types, so it is easier than ever to load models and motion data into SIMM, as well as perform functions such as adding a new bone or running a script. Increased understanding of motion capture data through the intuitive integration of kinematics, forces, EMG, and gait events Improved surgical outcomes because multiple procedures can be simulated in advance Easy creation of comprehensive gait reports, including graphs of muscle lengths and forces Provides immediate feedback to, and helps educate, patients and colleagues Real-time visualization of muscle lengths and forces helps with analysis and improvement of athletic activities SIMM has powerful tools for creating musculoskeletal models and performing isometric and dynamic analyses. OpenSim extends these capabilities by providing additional dynamics features such as residual reduction and computed muscle control. Together, SIMM and OpenSim offer biomechanics researchers unsurpassed capabilities for modeling and simulation of the musculoskeletal system. OpenSim is an open-source software system that lets users create and analyze dynamic simulations of movement. It is being developed at Simbios, a NIH center at Stanford University for physics-based simulation of biological structures. 
Because OpenSim can import and export SIMM models, users can easily take advantage of features in each package. They can import their SIMM models and motion data into OpenSim, perform residual reduction and computed muscle control analyses, and export the results back to SIMM for further analysis and model development.
physics
http://marshall.org/climate-change/new-evidence-our-record-warm-march-was-not-from-global-warming/
2017-03-31T00:31:14
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218205046.28/warc/CC-MAIN-20170322213005-00360-ip-10-233-31-227.ec2.internal.warc.gz
0.947227
570
CC-MAIN-2017-13
webtext-fineweb__CC-MAIN-2017-13__0__267027971
en
As part of my exploration of different surface temperature datasets, I’m examining the relationship between average U.S. temperatures and other weather variables in NOAA’s Integrated Surface Hourly (ISH) dataset. (I think I might have mistakenly called it “International” before, instead of “Integrated” Surface Hourly.) Anyway, one of the things that popped out of my analysis is related to our record warm March this year (2012).

Connecting such an event to “global warming” would require either lazy thinking, jumping to conclusions, or evidence that the warmth was not caused by persistent southerly flow over an unusually large area for that time of year. The U.S. is a pretty small place (about 2% of the Earth), and so a single high or low pressure area can cover most of the country. For example, if unusually persistent southerly flow sets up all month over most of the country, there will be unusual warmth. In that case we are talking about “weather”, not “climate change”.

Why do I say that? Because one of the basic concepts you learn in meteorology is “mass continuity”. If there is persistent and widespread southerly flow over the U.S., there must be (by mass continuity) the same amount of northerly flow elsewhere at the same latitude. That means that our unusual warmth is matched by unusual coolness someplace else.

Well, guess what? It turns out that our record warm March was ALSO a record for southerly flow, averaged over the U.S. This is shown in the next plot, which comes from about 250 weather stations distributed across the Lower 48 (click for large version; heavy line is trailing 12-month average):

Weather records are broken on occasion, even without global warming. And here we see evidence that our March warmth was simply a chance fluctuation in weather patterns.
If you claim, “Well, maybe global warming caused the extra southerly flow!”, you then are also claiming (through mass continuity) that global warming ALSO caused extra northerly flow (with below normal temperatures) somewhere else. And no matter what anyone has told you, global warming cannot cause colder than normal weather. It’s not in the physics. The fact that warming has been greatest in the Arctic means that the equator-to-pole temperature contrast has been reduced, which would mean less storminess and less North-South exchange of air masses — not more. Originally published at http://www.drroyspencer.com/2012/04/new-evidence-our-record-warm-march-was-not-from-global-warming/.
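The mass-continuity argument above can be written compactly. As a sketch (the notation is mine, not the author's): integrating the northward mass flux around a closed latitude circle, there is nowhere for net mass to go, so

$$\oint \left( \int_0^{\infty} \rho \, v \; dz \right) dx \;\approx\; 0,$$

where $\rho$ is air density, $v$ is the northward wind component, the inner integral runs through the depth of the atmosphere, and the outer integral runs around the latitude circle. Anomalous southerly flow ($v > 0$) over one sector therefore requires compensating northerly flow ($v < 0$) somewhere else at the same latitude.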
Source: http://launchar1.com/blog/monica-jacinto-coinventor-mondaloytm-%E2%80%93-breakthrough-technology-being-incorporated-ar1-engine (retrieved 2018-04-23)
The AR1 engine will provide America’s launch providers with an advanced all-U.S. engine that leapfrogs the Russian RD-180, thanks to the dedicated work of AR1 team members such as Monica Jacinto, a Technical Fellow of Structural Alloys at Aerojet Rocketdyne’s Los Angeles facility. Not only did Jacinto co-invent Mondaloy™, a nickel-based superalloy that’s critical to AR1 replacing the Russian RD-180 rocket engine, but she’s also guiding the Aerojet Rocketdyne teams that are fabricating components for AR1, including turbomachinery, combustion devices, preburners, heat exchangers and hot-gas manifolds.

From its first formulation in the mid-1990s, Jacinto has watched Aerojet Rocketdyne support development of Mondaloy™ through internal company funding and public-private partnerships with the U.S. Air Force. Jacinto has celebrated each milestone, most recently the first use of Mondaloy 200™ in an advanced rocket engine environment. It was a significant achievement under the Hydrocarbon Boost program, which is advancing domestic rocket engine technologies in support of next-generation launch systems. In those tests, which were conducted earlier this year, Aerojet Rocketdyne and the U.S. Air Force Research Laboratory (AFRL) successfully ran the oxygen-rich staged combustion sub-scale preburner at full power and full duration.

Demonstration of Mondaloy 200™, which was co-developed by Aerojet Rocketdyne and AFRL, was a critical step to proving the unique combination of high strength and burn resistance necessary for hardware survival in the harsh environment of an advanced oxygen-rich staged combustion engine, according to the AFRL. It also removes the need to use exotic coatings inside the engine, such as those used in the Russian RD-180 engine. Jacinto is also leading all Aerojet Rocketdyne large liquid rocket engine oxygen-compatibility efforts, such as combustion testing, particle impact testing, and friction ignition testing.
She’s working closely with AR1 engine development teams to understand where there could be oxygen-compatibility concerns in their designs. Having materials that are able to handle the harsh environment created by an oxygen-rich staged combustion engine cycle is a key part of the overall AR1 design effort.

“It’s very rewarding to see Mondaloy™ performing to design parameters in these tests,” said Jacinto. “Each success puts us closer to our ultimate goal of using Mondaloy in an advanced, affordable all-U.S. rocket engine.”

Jacinto’s career with Aerojet Rocketdyne spans 28 years, with her efforts focused on research & development to find affordable materials and solutions for world-class rocket engines. She has an extensive background leading diverse, integrated teams. Jacinto holds two bachelor’s degrees from Columbia University in New York and a master’s degree in Materials Science from the University of California, Los Angeles.

She mentors the next generation of engineers in Science, Technology, Engineering and Mathematics (STEM); speaks at high schools, universities and career conferences for young students; has volunteered for Great Minds in STEM, a non-profit that promotes STEM careers in underserved communities; is a member of the Dean’s Advisory Board for the College of Engineering, Computer Science and Technology at California State University, Los Angeles; and co-chairs the Joint Army Navy NASA Air Force (JANNAF) Advanced Materials Panel.

Jacinto enjoys spending time with family and friends, hiking, snowboarding and rollerblading. She also has a passion for cooking, creatively mixing ingredients to develop innovative new cuisine, much like her development of Mondaloy™.
Source: https://darknet-online.com/index.php/2020/11/16/pilot-of-the-earth-result-of-minibeansjam6/ (retrieved 2020-11-29)
From the 13th to the 15th of November, RocketbeansTV, a German online TV channel, ran its 6th mini game jam, called minibeansjam 6. The jam’s topic was “Du bewegst die Welt” (“You move the world”). I took part in this event, and the result was the game Pilot of the Earth.

In this game, god left the world after 2020 because he found out the universe he created was very fucked up. With him he took some important rules of physics, and the earth stopped looping around the sun. Now it is your job to move the world. While you move it, it must not come too close to the sun, or the earth gets too hot; and yes, the opposite is also not good. You also get extra points if you complete one loop in as close to 365 in-game years as possible. Ah, I forgot: in his anger, god destroyed some planets. Please don’t crash into the asteroids.

At the end of the jam I could not finish the game. There is a lot to do, but I will try to bring out a playable beta by Friday the 20th. At the moment the Earth can be moved, the audio works, there is a GUI and all the assets are included. There is one known bug with the parallax scrolling that needs to be fixed too. The graphics are all done by myself. The music was found at opengameart.org; the track is named “Easy walk” by Arthur. For the graphics I used GIMP, and the game engine is Godot.
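The scoring rule described above can be sketched in a few lines. This is a hypothetical illustration; the function name, the safe distance band, and the scoring formula are my own, not taken from the actual Godot project:

```python
def loop_score(period_years: float, closest_au: float, farthest_au: float,
               safe_band: tuple = (0.8, 1.2)) -> float:
    """Score one completed orbit of the player-steered Earth.

    Returns 0 if the Earth left the assumed safe distance band (too hot
    near the sun, too cold far away); otherwise the score grows as the
    orbital period approaches the target of 365 in-game years.
    """
    if closest_au < safe_band[0] or farthest_au > safe_band[1]:
        return 0.0
    return max(0.0, 100.0 - abs(period_years - 365.0))

# A well-timed loop inside the band scores highest; straying toward the
# sun (or too far out) scores zero regardless of timing.
print(loop_score(365.0, 0.9, 1.1))   # 100.0
print(loop_score(365.0, 0.5, 1.1))   # 0.0
```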
Source: https://venicioglass.com/laminated (retrieved 2023-10-02)
Installation in Las Vegas, Nevada

Custom - any fabric you choose - translucent or opaque
Mesh - screen - perforated stainless steel, aluminum, copper & tin

Venicio laminated glass for architectural applications includes exterior storefronts, curtain walls/dividers, floors and windows. Laminated glass is a type of safety glass that holds together when shattered. If broken, it is held in place by an interlayer, typically of polyvinyl butyral (PVB) or ethylene-vinyl acetate (EVA), between its two or more layers of glass. The interlayer keeps the layers of glass bonded even when broken, and its high strength prevents the glass from breaking up into large sharp pieces.

Laminated glass is also used to increase sound insulation; for this purpose a special "acoustic PVB" or EVA compound is used for the interlayer. An additional property of laminated glass is that a PVB or thermoset EVA interlayer can block essentially 99.9% of all ultraviolet radiation.

These colors can be added to any laminated glass in any combination of these basic colors to achieve any custom color desired. Click the link below to find out more.
Source: http://ijmsr.org/index.php/ijmsr/article/view/262 (retrieved 2023-05-28)
For use in C, X, and Ku band applications, a design of a super-compact, ultrathin, wide-angle, polarization-independent perfect metamaterial absorber was developed.

Abstract: An ultrathin triple-band metamaterial absorber that is exceedingly compact, polarization insensitive, and has wide angular stability is the subject of this publication. Because the unit cell consisting of patches achieves virtually perfect absorption (approaching 100%), the term "perfect metamaterial absorber" is appropriate in this context. The absorption efficiency is 99.96% at 4.75 GHz (C band), 99.98% at 9.47 GHz (X band), and 99.62% at 14.40 GHz (Ku band). The volume of the structure is only 7.20.80 mm3, so its dimensions are quite small, and the thickness of the unit cell is 0.012λ at the lowest cut-off frequency. A ring is located on the absorber's exterior, a split ring in the center, another ring on the interior, and a square line with triangular patches on the sides. Absorption is insensitive to the electric or magnetic polarization of the incoming wave. To verify that the proposed design is stable, a variety of incidence angles (for both TE and TM modes) and polarization angle combinations were tested; the absorption values remain quite close to 1. In the final validation step, the simulated and measured results were found to be in very good agreement.
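For context (standard practice for metamaterial absorbers, not stated explicitly in this abstract), absorptivity is computed from the scattering parameters as

$$A(\omega) \;=\; 1 - |S_{11}(\omega)|^2 - |S_{21}(\omega)|^2,$$

where $S_{11}$ is the reflection coefficient and $S_{21}$ the transmission coefficient. With a continuous metallic ground plane, transmission vanishes ($S_{21} \approx 0$), so near-unity absorption reduces to minimizing the reflection $|S_{11}|$ at the resonant frequencies.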
Source: https://fortheloveofnaturalliving.com/blogs/natural-living-tips-and-more/what-is-a-himalayan-salt-lamp-1 (retrieved 2024-04-12)
We are starting to see them more and more, but what exactly is a Himalayan Salt Lamp? And what and how is it made?

Himalayan Salt Lamps are made from Himalayan salt mined from the Himalayan Mountains. Our Himalayan Salt Lamps in particular are mined from the foothills deep within the base of the Himalayan mountains in Pakistan.

How is this salt formed? Himalayan Crystal Salt comes from The Primal Sea, the original body of the sea, believed by scientists to be the place where all life originated. The salt was created when The Primal Sea was evaporated by the sun's energy. Over millions of years, land masses formed over the top of the salty deposits. These deposits were then compressed by the tremendous force created by the developing land masses, creating Himalayan Crystal Salt.

How does it work? What makes Himalayan Salt Lamps so wonderful is their ability to naturally ionize and purify the air. Ionization is great for stress and anxiety, headaches and helping with a better night's sleep. So why does it help? Because it balances the ions in the air and neutralizes the electromagnetic radiation known as Electrosmog. Electromagnetic radiation comes from electronics such as computers, smartphones and TVs, and even our heating and cooling units emit it. The result is an increase in positive ions versus negative ions, and it is this imbalance of ions in the air that causes headaches, tension and anxiety, trouble sleeping and lack of focus.

Himalayan Salt Lamps emit negative ions, balancing out the positive ions and neutralizing the environment, helping to improve sleep, headaches and focus while decreasing anxiety and stress. The negative ions are able to be released due to the Himalayan Salt Lamp's hygroscopic nature. That just means that moisture is attracted to the Himalayan Salt Lamp.
The moisture is then heated by the bulb in the lamp, evaporating the moisture and releasing the negative ions from the lamp. Those ions then cancel out the positive ions, balancing the environment, promoting better sleep and relaxation, and reducing anxiety.

This ionization process creates another wonderful benefit of having a Himalayan Salt Lamp: its ability to purify the air. While the lamp is attracting water vapor from the air, any mold, pollen or allergens floating around will be drawn towards the lamp and will stick to it. The allergens and dust collected will not be re-released. This is beneficial to individuals suffering from allergies and respiratory issues, such as asthma.

Visually speaking, the natural patterns and striations of the Himalayan Salt Lamp, along with the warm glow it emits, are very meditative and relaxing. These lamps are wonderful for color or mood therapy, creating a wonderfully zenned environment and promoting a more relaxed mindset. Beautiful and functional, a Himalayan Salt Lamp is a truly gorgeous and unique addition to any home or office!
Source: http://kenya.rcbowen.com/talk/viewtopic.php?p=63562 (retrieved 2013-06-19)
Yes, you glow in the dark!

Potassium-40 (40K) is the primary source of radiation from the human body, for two reasons. First, the 40K concentration in the body is fairly high (about 2 pCi per gram of soft tissue). As a ballpark estimate, there are 200,000 disintegrations of 40K per minute in a typical human. Second, 40K emits gamma rays in a little over 10% of its decays, and most of these gamma rays escape the body. In other words, the body emits close to 20,000 gamma rays per minute from 40K. The vast majority of the beta particles that 40K emits do not escape the body.

There are many other radionuclides in the human body, but these are either present at lower levels than 40K (for example, 238U, 226Ra, 210Pb, 210Bi, 210Po, etc.) and/or they do not emit gamma rays (for example, 3H and 14C). Radon (and its decay products) is not a significant source of radiation because it is present at very low levels in the body.

There is one other (very minor) mechanism by which the human body acts as a source of radiation: some of the gamma rays emitted by the radionuclides in the environment interact with the atoms in our bodies by what is known as the photoelectric effect. The result is the emission of x rays by these atoms.

Source: Paul Frame, CHP, PhD
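The ballpark figures above are easy to check. In this sketch, the 2 pCi/g concentration and the ~10% gamma branching come from the text; the soft-tissue mass is my own assumed round figure:

```python
# Back-of-the-envelope check of the gamma-ray numbers quoted above.
PCI_TO_BQ = 0.037          # 1 pCi = 0.037 disintegrations per second
K40_CONC_PCI_PER_G = 2.0   # ~2 pCi of 40K per gram of soft tissue (from the text)
SOFT_TISSUE_G = 45_000     # assumed ~45 kg of soft tissue in a typical adult
GAMMA_FRACTION = 0.107     # ~10.7% of 40K decays emit a gamma ray

dps = K40_CONC_PCI_PER_G * SOFT_TISSUE_G * PCI_TO_BQ   # decays per second
dpm = dps * 60                                          # decays per minute
gammas_per_min = dpm * GAMMA_FRACTION

print(round(dpm), round(gammas_per_min))  # roughly 200,000 and 21,000
```

which lands right on the "200,000 disintegrations per minute" and "close to 20,000 gamma rays per minute" quoted in the text.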
Source: https://dominikjarco.com/2023/04/23/eclipse/ (retrieved 2023-06-09)
On the 20th of April 2023, I saw my first total solar eclipse. Immediately after the end of totality, I started writing down my memories and feelings of the event. I’m posting an excerpt of what I wrote, both to record it for my own future reference, and in an attempt to convince whoever’s reading this to go see a total solar eclipse if you ever get a chance. Here it is:

In the lead-up, the edges of shadows became hazy, like they were motion blurred even when still. Light became dimmer and redder, as if there was smoke in the air. Like wearing sunglasses without sunglasses. A distinct coolness, and even though you’re not supposed to look at the sun, in the tiny glimpses when taking off my filter it was somehow less harsh.

During totality, the sky darkened past twilight, a deep purple-navy like the sky at the beach after sunset. Stars became visible. The moon was absolute black and the sun’s corona was absolute white, a hazy effervescence that drifted out like ink through water, but the absolute essence of brightness. Beads of diamond brilliance marked a few points around the corona, piercing and blooming through the star’s haze. Transcendental. Cosmic beauty and brilliance. The total assertion in my mind was not of insignificance or of distance, but of beauty, the glory and wonder of the universe.

The end of totality was glorious as a point of pure light bloomed outwards into a crescent – and then its brightness was too much and a filter was required once more. If the beginning of totality was twilight, then the end was a second dawn – the horizon’s light shifted to the 5am blue of a misty beach before sunrise, and it is as if the sun rose again over the next few minutes.

This is probably the closest I’m going to get to the overview effect – why do people spend time fighting each other and working against each other when the universe contains such beauty? All of existence is raw and quivering. I have not felt a glory like this since before I renounced religion.
I hope those words convey the magnitude of the experience. It can’t be communicated in pictures. The thing that struck me the most was that this was the opposite of the classic cosmic horror experience – it was cosmic wonder. The distances and scales involved didn’t matter in the slightest. All I could think about was how beautiful the universe is, and how lucky we are to be alive and able to witness it. After this, it is of no surprise to me that people chase eclipses. Kiara and I are already looking at seeing another in a few years’ time, hopefully 2026. We spoke to one person who had seen 25(!) eclipses, though half of those were annular, not total. 100% totality is essential to the experience: without it, you don’t get the sky going dark, stars coming out and being able to look at the sun with the naked eye. I’ve written before about my trip to NZ last year – there was a particular moment there that had this sort of incredible natural beauty to it. I consider that the absolute highlight of my 2022. This eclipse is probably going to be the equivalent for this year. Whenever I experience something like this, for a brief time the purpose of travel becomes clear. It is to fully experience the natural wonders of the universe. Human ingenuity is a marvel in itself, but those things that we did not create have for me always dwarfed the work of our hands. Perhaps one day I will see something that upends this. That prospect is exciting too.
Source: http://alpgroup.in/alp-products-specification.html?dataTag=NDg=&cid=3&scid=15&ascid=15&class=prd (retrieved 2018-11-18)
AEROCELL – CLASS O WITH RELEASE PAPER (SELF ADHESIVE)

Class O nitrile rubber insulation suited for the following applications:
A) Duct Insulation
B) Clean Room Application
D) Cold Storage Application
E) Raised Floor Insulation
F) Under Slab, Transport Segments
G) AC Unit Insulation (internal insulation of units, evaporators etc.)
H) General Thermal Insulation

1. Excellent insulation properties due to low K-value, which remain unchanged during the service lifetime of hot/cold insulation and are unaffected by humidity.
2. The closed-cell structure will not absorb or allow the spread of water, thus preventing condensation or frost formation on cooling systems, chilled water & refrigerant lines. No additional water vapor barriers are required.
3. Supplied with reinforced aluminum foil for increased performance and quicker installation. Adhesive backing is also available.
4. Wide working temperature range (-40 °C to 115 °C).
5. Thinner gauges can be used to achieve effective insulation due to the low K-value and high water vapor diffusion resistance (µ). This also allows use of the material in areas with space limitations.
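To see why a low K-value (thermal conductivity) allows thinner gauges, here is a sketch using Fourier's law of steady-state conduction. The conductivity and geometry figures below are illustrative assumptions, not values from the AEROCELL datasheet:

```python
def heat_gain_w(k_w_mk: float, area_m2: float, delta_t_k: float, thickness_m: float) -> float:
    """Steady-state conductive heat gain through a flat layer: Q = k * A * dT / d."""
    return k_w_mk * area_m2 * delta_t_k / thickness_m

# Assumed values for illustration: k = 0.034 W/(m*K) for nitrile rubber foam,
# 10 m^2 of chilled duct surface, 20 K between ambient and duct.
print(heat_gain_w(0.034, 10.0, 20.0, 0.019))  # 19 mm wall: ~358 W
print(heat_gain_w(0.034, 10.0, 20.0, 0.013))  # 13 mm wall: ~523 W
```

Halving the conductivity halves the heat gain at a given thickness, which is exactly the trade-off that lets a lower-K material meet the same insulation target with a thinner sheet.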
Source: https://www.dentaladvisor.com/research-development/biomaterials-lab/glossary/ (retrieved 2023-03-31)
Biocompatibility
Biocompatibility is the measure of the response of living tissue or cells to an interfacing substance or force. Measures of biocompatibility are cellular health and behavior. Material biocompatibility can be described by the material’s chemical, mechanical, and thermal properties, surface characteristics, density and size. The application will determine which of these characteristics come into play and how critical each is to a successful biological interface.

Color Stability
Color stability refers to the ability of a material to maintain its reflectance, reflected wavelength (color), and excitation purity over time. There are instruments that measure color and can detect changes over time. Materials which are used as dental restoratives are typically color matched to the existing dentition at the time of placement. Ideally, the match should be stable over time and use. Factors that influence the change in color are chemical degradation and staining of the restorative material.

Compressive Strength
Compressive strength describes how a material responds to a load applied in compression. As the material is loaded, it deforms. Some materials deform differently than others. Brittle materials deform elastically until they break. Rubber-like materials deform elastically until the polymeric bonds start breaking, at which point the material deforms both elastically and viscoelastically. Metals deform elastically and then plastically until they fail. The load divided by the area of the material under load determines the stress applied to the material. Stress is always, therefore, in units of force/area (e.g., Newtons/mm² and pounds/in²). The stress at rupture or failure is the compressive strength.

Diametral Tensile Strength
Diametral tensile strength describes how a material responds to a tensile load.
Also referred to as the Diametral Compression Test for Tension, it is an alternative method of placing a brittle material in tension without gripping it on opposite sides and pulling (the typical tensile test). When a disk of brittle material is compressively loaded through opposite sides of its edge, tensile stresses are generated inside the material, perpendicular to the direction of applied load, which will eventually cause a tensile failure.

Dimensional Stability
Dimensional stability refers to a material’s ability to maintain its size and shape when subjected to environmental variables such as thermocycles of hot and cold or humidity changes, and as the result of polymerization or setting. It is an important characteristic of root canal fillers, impression materials and restorative materials. A root canal filler that expands too much in the presence of water could split a tooth root. Restorative materials that contract too much when they polymerize can be the cause of marginal leakage. Impression materials that change dimensions with time or exposure to humidity may not produce accurate impressions.

Elastic Modulus
Elastic modulus is a material property which describes its spring-like characteristics. The modulus is a ratio of the stress and the strain or, more generally, the load and the resultant deformation of the material caused by the load, which could be tensile, compressive, torsional, or bending (flexure). It is measured by calculating the slope of the plot of an increasing load versus the resultant increasing deformation. Stiffness is a synonym for elastic modulus. The higher the elastic modulus of a material, the stiffer it is. Steel and rubber are examples of elastic materials; steel has the higher elastic modulus.

Elastic Recovery
Elastic recovery is a material property which describes how completely a material returns to its original, unloaded shape and size after a load (e.g. compressive, tensile or torsional) is removed.
In dentistry, it is used to evaluate the ability of an impression material to be removed from the teeth in the mouth and tolerate being pulled past undercuts and away from inter-proximal spaces without being permanently distorted.

Fatigue
Fatigue is a condition that can cause a structure subjected to cyclic loading to fail at a stress below the material’s single-cycle ultimate strength. It is caused by additive damage generated by loads below the yield load (in the elastic range of the material). Corrosion, cracks, scratches in the surface, and inclusions in the material can all contribute to decreased fatigue strength. Fatigue testing is performed by subjecting the material to cyclic loading at various maximum load levels until failure occurs. A plot of the number of cycles to failure versus the maximum stress is a tool which can help predict failure in structures. The fatigue limit or endurance limit is the level of stress that theoretically yields infinite cyclic life.

Film Thickness
Film thickness is the thickness of a material such as cement or root canal sealer when a specified quantity is compressed between two flat plates with a specified load. The more viscous the material, the thicker the resultant film. The variables that can affect film thickness are the particle size of the solid component, the mix ratio of powder to liquid, the viscosity of the liquid, the incorporation of air into the mix, and the setting time of the material. Film thickness affects the successful cementation of restorations onto prepared teeth. If too high, it may prevent the restoration from seating properly.

Flexural Strength and Flexural Modulus
Flexural strength and flexural modulus describe the capacity of a material to resist bending. There are several methods used to perform a flexural strength test. The three-point and four-point methods are similar in that they support a bar of the material to be tested at each end of a specified span.
The four-point test generates a uniform stress field between the two load application points and is useful in flaw-sensitive materials when the object is to determine the effect of the flaws on the ultimate strength. The three-point test is useful in determining the strength of a specimen at the point of load application. Another type of bending test is the cantilever bending test, in which the specimen is rigidly held at one end and the load is applied at the other end. The flexural modulus of an elastic material relates the applied bending load to the elastic deflection it produces: the higher the modulus, the less the material deflects under a given load. Flow is a measure of the movement of a specific amount of material under a compressive load, typically before the material has set. It is useful for determining how well a material like impression material will spread out over a surface. Other materials that utilize this characteristic to describe their behavior are root canal sealers and fillers. Friction is a resistance to relative movement between two bodies such as a tooth and a restorative material. Friction is proportional to the force pushing the two bodies together and is a result of mechanical impingement and molecular interaction between the materials of the two bodies. At a macroscopic level, it is affected by the interfacing materials' density, roughness, hardness, and molecular interaction. Two highly polished surfaces typically exhibit low friction unless there is a molecular interaction such as magnetism, cohesion or adhesion. Friction is important in dentistry because it is one of the factors that affects the wear and durability of restorations. Gloss is a surface property which is related to specular reflection from that surface. It is dependent on the material, the surface roughness, and the angle of illumination. Gloss measurements can be used to compare surface preparation techniques such as finishing and polishing.
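For the three-point bend geometry described above, the standard beam-theory relations are sigma = 3FL/(2bd^2) for strength and E = L^3 m/(4bd^3) for the flexural modulus, where m is the slope of the initial linear portion of the load-deflection curve. A small Python sketch with hypothetical specimen dimensions:

```python
def flexural_strength(load_n, span_mm, width_mm, depth_mm):
    """Three-point bend strength in MPa: sigma = 3FL / (2 b d^2)."""
    return 3 * load_n * span_mm / (2 * width_mm * depth_mm ** 2)

def flexural_modulus(slope_n_per_mm, span_mm, width_mm, depth_mm):
    """Flexural modulus in MPa from the slope m of the linear part of
    the load-deflection curve: E = L^3 m / (4 b d^3)."""
    return span_mm ** 3 * slope_n_per_mm / (4 * width_mm * depth_mm ** 3)

# A 2 x 2 mm bar on a 20 mm span failing at 100 N (invented numbers)
strength = flexural_strength(100, 20, 2, 2)   # 375.0 MPa
modulus = flexural_modulus(50, 20, 2, 2)      # 6250.0 MPa
```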
Hardness is a measurement of a material's resistance to indentation or penetration and is a function of the applied load, the strength of the material and the surface area being indented. There are several hardness techniques, including Barcol, Brinell, Knoop, Rockwell, Shore and Vickers hardness. Marginal leakage is a measurement of how well a restorative system adapts to a prepared tooth. The typical methodology is to thermocycle the prepared tooth specimen and then submerge it in a dye that will show where leakage has occurred when the tooth is sectioned. Percent elongation is a measurement of the amount of stretch that a specific size of specimen (e.g., dog-bone shape) will tolerate before failure. It is useful for helping to understand the maximum load-carrying capabilities of rubber-like and metallic materials. Polymerization shrinkage is a measure of the change in dimension as a monomer is linked and/or cross-linked during polymerization. Polymerization can be caused by chemical interaction, x-ray radiation, light, and thermal energy, and is composition dependent. Polymerization shrinkage can be an issue with restorative composites and bonding agents and can be a source of marginal leakage and low bond strengths. Radiopacity is a measure of the attenuation of x-rays passing through a subject material. It is often measured in millimeters of aluminum. A typical method of determining radiopacity is to position a step wedge of pure aluminum, with steps increasing by 0.5 mm per step, in the x-ray view alongside the subject material. The thickness of the material is measured, and the photographic densities of the x-ray of the specimen and of the step wedge are measured and compared. The comparison yields a radiopacity in terms of aluminum thickness. It is an important feature for dental materials that need to be identified during an x-ray, or if broken pieces are inhaled or swallowed (e.g. denture materials, restoratives and endodontic materials).
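The step-wedge comparison just described is, numerically, an interpolation: the specimen's photographic density is located between two wedge steps and the equivalent aluminum thickness is read off. A sketch of that lookup in Python (the wedge readings below are invented for illustration):

```python
def radiopacity_mm_al(specimen_density, step_thicknesses_mm, step_densities):
    """Linearly interpolate the specimen's film density against the
    aluminum step wedge to express radiopacity in mm of aluminum."""
    # Film density falls as wedge thickness rises; sorting by density
    # lets us interpolate regardless of input order.
    pairs = sorted(zip(step_densities, step_thicknesses_mm))
    for (d0, t0), (d1, t1) in zip(pairs, pairs[1:]):
        if d0 <= specimen_density <= d1:
            frac = (specimen_density - d0) / (d1 - d0)
            return t0 + frac * (t1 - t0)
    raise ValueError("specimen density lies outside the wedge's range")

# 0.5 mm aluminum steps and their (hypothetical) measured film densities
equiv = radiopacity_mm_al(1.25, [0.5, 1.0, 1.5], [2.0, 1.5, 1.0])  # 1.25 mm Al
```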
Setting time is a measurement of the elapsed time from the initiation of mixing to the setting of a material. Shark Fin Flowability The Shark Fin Flowability test measures the maximum height that an unset impression material will extrude into a wedge-shaped space between two stainless steel plates while under a specific load. The test is a method of comparing the capability of impression materials to penetrate narrow crevices like interproximal spaces. Shear Bond Strength Shear bond strength is a measurement of how well one material bonds to another. It places the bond interface in shear. There are many versions of the test that utilize an anvil to load the side of a cylinder of material that is bonded at one of its ends to a substrate material. A commonly used version of this test is the Ultradent Shear Bond Test. Typical measurement units are megapascals (MPa). Strain in Compression Strain in compression is a measure of the flexibility of a material. A standardized load is applied to a cylindrical specimen. The pre-load height and loaded height of the specimen are compared, and the percent change is recorded as the strain in compression. This test is often used to characterize the behavior of impression materials. Surface roughness is a measure of a surface's deviation from perfectly smooth. The variables used to describe surface roughness are height, frequency and wavelength. The precision of the measurement is dependent on the resolution capabilities of the measuring instrument. Typical instruments used to measure surface roughness are surface-contacting profilometers, which drag a stylus across the surface, and non-contacting instruments such as interferometers, confocal microscopes, electron microscopes, and electrical capacitance systems. It is an important parameter as it provides information about how effectively materials can be finished and polished, and it is a variable that affects friction and wear.
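Both the shear bond test and the strain-in-compression test reduce to simple ratios. A Python sketch follows; the specimen dimensions are hypothetical, not those specified by any particular standard:

```python
import math

def shear_bond_strength_mpa(failure_load_n, cylinder_diameter_mm):
    """Shear bond strength: failure load over the bonded cross-sectional
    area of the cylinder, in MPa (N/mm^2)."""
    area_mm2 = math.pi * (cylinder_diameter_mm / 2) ** 2
    return failure_load_n / area_mm2

def strain_in_compression_pct(preload_height_mm, loaded_height_mm):
    """Percent height change of the cylindrical specimen under the
    standardized load."""
    return 100 * (preload_height_mm - loaded_height_mm) / preload_height_mm

# A 4 mm diameter bonded cylinder failing at 200 N (illustrative)
sbs = shear_bond_strength_mpa(200, 4)           # about 15.9 MPa
strain = strain_in_compression_pct(10.0, 9.7)   # about 3.0 %
```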
Tear energy is a measure of the amount of energy required to tear a material. It is a calculation based on the force required to tear a uniform thickness of the material a specified distance. Tear strength is a measure of a material's ability to resist tearing. Testing is performed on a material of uniform thickness that has been partially cut with a sharp blade to create a tear initiation site. It provides information about the behavior of materials that may be required to resist tearing, e.g. impression materials that must be pulled out of interproximal spaces intact. The units of tear strength are force divided by thickness. Tensile Bond Strength Tensile bond strength is a measure of the ability of a bond between two materials to withstand a tensile load. It is determined by applying a tensile load to a material molded into the shape of an inverted cone that has been bonded to a specific substrate. The inverted cone shape facilitates pulling the cone from the substrate. The fracture load is divided by the area of the base of the cone to determine the bond strength. Tensile strength is a material property that describes its ability to resist deformation under tensile or pulling loads. It is calculated by taking the maximum load along the load/deformation plot and dividing it by the nominal cross-sectional area of the specimen measured in a plane perpendicular to the load. Thermal cycling is a technique used to produce thermal stress by causing the test material to expand and contract as it is cycled back and forth between a cold medium and a hot medium. In dentistry, this form of testing is used to evaluate how well bonded materials maintain their bond strength when subjected to hot and cold cycling. The intent is to simulate the thermal variance the oral cavity might experience during the eating of hot food and the drinking of cold fluids.
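The tensile bond and tear strength calculations above are again load-over-geometry ratios; a brief Python sketch with made-up specimen sizes:

```python
import math

def tensile_bond_strength_mpa(fracture_load_n, cone_base_diameter_mm):
    """Fracture load divided by the area of the inverted cone's base."""
    base_area_mm2 = math.pi * (cone_base_diameter_mm / 2) ** 2
    return fracture_load_n / base_area_mm2

def tear_strength_n_per_mm(tear_force_n, thickness_mm):
    """Tear strength: tearing force divided by specimen thickness (N/mm)."""
    return tear_force_n / thickness_mm

bond = tensile_bond_strength_mpa(100, 4)   # about 7.96 MPa
tear = tear_strength_n_per_mm(30, 1.5)     # 20.0 N/mm
```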
Thermal degradation is a condition that results as materials and chemical components separate, cross-link, inter-react, or lose structural integrity as a result of temperature-sensitive reactions. It is important that dental materials remain stable as ambient environmental temperatures vary. Toughness is a measure of the energy required to plastically deform a material until it fails (rupture or fracture). Brittle materials typically have low toughness, whereas ductile materials have higher toughness. Ultimate strength is a material property determined during load/deformation testing of a material or component. It is calculated by taking the maximum load along the load/deformation plot and dividing it by the nominal cross-sectional area of the specimen measured in a plane perpendicular to the load. It can be determined for various types of loads (compressive, tensile, torsional, or shear). It is useful for determining the maximum load a product can sustain during a single load cycle. Working time is the elapsed time from the initiation of mixing a settable material until it reaches a consistency where it is no longer appropriate for use. It is helpful in the successful use of impression materials and some restorative and endodontic materials. Yield strength is a material property which describes the point during loading where a material starts to behave plastically (it deviates from the linear change in deformation with added load which is typical of elastic behavior). As with all strength properties, yield strength implies that the cross-sectional area has been taken into consideration by dividing the yield load by the initial area that lies in a plane perpendicular to the applied load. It is a parameter critical to the design of successful components and materials. If a material has too low a yield strength for a particular application, it will fail prematurely. A rough estimate for the fatigue strength of a material is the yield strength divided by two.
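The ultimate-strength calculation and the closing rule of thumb for fatigue strength can be captured in a couple of lines of Python (the numbers are illustrative):

```python
def ultimate_strength_mpa(max_load_n, cross_section_mm2):
    """Ultimate strength: maximum load on the load-deformation plot
    divided by the nominal cross-sectional area."""
    return max_load_n / cross_section_mm2

def estimated_fatigue_strength_mpa(yield_strength_mpa):
    """Rough estimate from the text: fatigue strength is about half
    the yield strength."""
    return yield_strength_mpa / 2

uts = ultimate_strength_mpa(500, 10)            # 50.0 MPa
fatigue = estimated_fatigue_strength_mpa(400)   # 200.0 MPa
```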
physics
http://www.stuehff-gmbh.de/eng/research---development-optatec.html
2021-08-02T17:53:56
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154356.39/warc/CC-MAIN-20210802172339-20210802202339-00414.warc.gz
0.862976
146
CC-MAIN-2021-31
webtext-fineweb__CC-MAIN-2021-31__0__82686721
en
Research & Development Fair Participation Optatec 2020 in Frankfurt 17.11.2020 to 19.11.2020 Regional development programme The fair participation at Optatec, Frankfurt/Main, 17th to 19th November 2020, is funded by the European Union – European Regional Development Fund (ERDF). Presentation of a plant for etching glass components (glass fibres, flat glass) based on molten salts. This innovative procedure offers many advantages compared to etching processes that use hydrofluoric acid. Worldwide customers of this new etching technology are manufacturers of glass fibres and components used in optics, medicine, industry and science. This new plant technology will be presented to prospective customers.
physics
https://vastubuilding.com/en/thermal-and-acoustic-insulation-energy-performance-standard-and-thermal-inertia.html
2023-02-05T08:12:33
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500250.51/warc/CC-MAIN-20230205063441-20230205093441-00175.warc.gz
0.938881
1,171
CC-MAIN-2023-06
webtext-fineweb__CC-MAIN-2023-06__0__256334556
en
Thermal and acoustic insulation; energy performance standard and thermal inertia The insulation volume for the house alone is around 100 m³. The vertical and horizontal timber frame construction consists of FJI composite I-beams combined with LVL (laminated veneer lumber) horizontal beams. The outer walls are 33 cm thick, 27 cm of which is wood wool insulation, plus a lightly ventilated outside air cavity of 3 cm. Particular attention has been paid to avoiding thermal bridges and air leaks. The roof consists of solid Douglas pine trusses with a height of 29 cm, which are insulated at the top with 35 mm wood wool plates so that thermal bridges are avoided here too (although insulating, the insulation value of wood is only 30% of that of the wood wool insulation). The total roof insulation is therefore 32.5 cm plus an inside air cavity of 3 cm. All windows and outside doors are fitted with triple glazing, and the reinforced PVC profiles, of German manufacture, are also highly insulated. The Fakro roof windows also have triple glazing and are surrounded by an additional insulation layer. The 30 cm thick, double-reinforced, floating concrete slab is fully insulated. At the bottom, it rests on a 64 cm thick insulation layer of foamed glass aggregate from the Technopor brand. The sides of the concrete slab are surrounded by a 14 cm PIR insulation board. The garage, which is separated from the main house by a double-door entrance vestibule, has been insulated with 12 cm of Rockwool in the walls and 16 cm in the roof. The same applies to the garden work centre. Given the wooden frame structure, special attention has been paid to the acoustic insulation so that there is minimal sound transmission between the different rooms on the one hand and with the outside of the house on the other. This is essential for pleasant living comfort. The outer walls and the roof, with their thick wood wool insulation (fixed plates combined with wood wool flakes), sufficiently dampen outside noise.
This is also the case with the triple glazing. It is therefore especially important to provide good acoustic insulation between the different floors and between the different rooms. Effective acoustic insulation dampens both vibration and contact noise. This requires both soft damping material and mass. The floors between the different storeys are 33 cm thick in total and consist of 2 parts: - The bearing part, with I-beams of 24 cm whose hollow spaces were filled with wood wool (density of 30 kg/m³), closed at the bottom by a 10 mm MGO (magnesium cement) plate and at the top with a 20 mm thick MGO plate. Since MGO plates are non-flammable (class A), the floors are protected against fire. On top of the support structure there is a sandwich floor consisting of: - A wood wool board of 30 mm - A floating MGO plate of 20 mm glued at the tongue-and-groove joints - An acoustically damping wood wool board of 7 mm - A floating layered composite parquet of 15 mm This layered structure, with its combination of damping and solid materials, ensures a very good separation of sound between the floors. The floors of the bathrooms, the dressing room and the laundry room consist of a reinforced (iron mesh and glass fibres) screed that rests on a damping wood wool plate of 7 mm. The stone floor is laid with a flexible adhesive. For the wooden partition walls on the first and second floors between the different rooms, damping materials were also combined with solid plates. They consist of: - An MGO plate of 10 mm - A wooden frame made of pine SLS beams - A 70 mm Rockwool SONO plate that fills the compartments between the vertical beams - A double MGO plate of 2 x 10 mm This construction contributes to minimal noise transmission. The walls on the ground floor consist of 10 cm thick solid Silca glue blocks of the Xella brand. They were plastered on both sides with a 1 cm thick natural plaster layer.
This composition contributes not only to limited acoustic transfer between the walls, but also to greater thermal inertia on the ground floor. K and Ew value (PEB: Building Energy Performance) The construction of the outer shell results in a K value of 22. The Ew value takes into account all the different aspects related to insulation, ventilation, energy consumption, and energy-saving and energy-generating devices. The official calculated value is 14. Compared to the current standard, this is 3 times better. So this is a house that is far ahead of its time and will stand the test of time with brio. A major potential disadvantage of a highly insulated timber frame construction is overheating during the summer. This can be all the more the case during increasingly frequent heat waves. To limit this overheating, the construction partly consists of a hybrid structure: on the ground floor, the concrete slab, insulated from the outside, forms part of the living space together with the screed and the floor tiles. This creates a volume of 40 m³ of thermally inert material. This is combined with the massive silicate stone inside walls on the ground floor, which create an additional inert volume of 9 m³. Furthermore, many windows of limited surface area were built in. This provides abundant light while limiting overheating of the rooms. However, all this will not prevent the house from heating up during a prolonged heat wave. To prevent this, a reversible air-to-air heat pump is provided, which guarantees that the house can be cooled during several continuous hot days.
physics
http://www.in-akustik.de/en/for-dealers/reference-selection/
2017-03-22T22:17:08
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186353.38/warc/CC-MAIN-20170322212946-00048-ip-10-233-31-227.ec2.internal.warc.gz
0.925825
468
CC-MAIN-2017-13
webtext-fineweb__CC-MAIN-2017-13__0__54116887
en
Only cables that have been perfected down to the last detail are able to transport this truly sensuous pleasure without interference. We at in-akustik are pioneers in perfect signal transmission, helping to convey the finest nuances that appeal to all senses. We put passion, ambition and love into the development and production of our cables, which are known throughout the world for outstanding quality. The ultimate proof of this is Reference Selection. Dynamics, power and precision cannot be more clearly sensed with any other product range. “Without a doubt, in-akustik's LS-2404 is one of the best speaker cables that STEREO has ever tested…” (Stereo 01-2015) We have long set the bar very high in regard to quality, because cables and connections are extremely sensitive. Physical phenomena that arise during the transmission of signals can only be controlled with technical finesse and the best materials. For this reason all cables are manufactured in a German cable mill and finished by us in Ballrechten-Dottingen in elaborate manual work. “We don't deal in voodoo. We deal in physics. Anybody who has a good command of the laws of physics and knows what can be achieved with the right materials and outstanding cable architecture will attain results that are measurably better. We have implemented all this in our cables.” (Holger Wachsmann, Product Developer) Our Referenz Selection cables are only available in selected and authorized shops. If you are a specialized dealer or own an acknowledged HiFi shop and are interested in the commercialization of high end audio cables, please contact us. You can find the responsible sales representative in our contacts section. Physics rather than voodoo. Our high end speaker and interconnect cables are lavishly hand-made in our on-site manufactory at our company's headquarters in Ballrechten-Dottingen.
Stereo 09-2015: “The authentically innovative structure of in-akustik's new interconnect Referenz NF-2404 has propelled it right to the top of the class.” We gladly leave the evaluation of our products to independent journalists, magazines and laboratories. Countless test results endorse our work. Praise that keeps us going.
physics
https://centercourtstringing.com/technology
2024-04-14T10:54:11
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816879.25/warc/CC-MAIN-20240414095752-20240414125752-00161.warc.gz
0.902172
415
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__26065175
en
When you pay to have your racquet restrung, you should expect professional, consistent results. You should also expect your stringer to utilize industry-proven, best-in-class technology. All racquets are restrung on a state-of-the-art 2022 Tecnifibre Ergo One professional-grade stringing machine, the same machine that the Tecnifibre Official Stringing Team uses at ATP/WTA tournaments in Miami, Washington DC, and throughout Europe. It has the sturdiest 6-point mounting system of its class, a cast iron 360-degree turntable, and a linear electronic constant-pull tensioning system. Its glide rails and clamps are among the smoothest and most reliable across the industry. We verify the calibration of the Ergo One on a weekly basis to ensure consistent stringbed tension on all racquets. We measure the finished stringbed tension of every racquet using an ERT 300 Tennis Computer before it leaves the shop. Unlike spring-loaded testers or downloadable apps, which offer variable results at best, the ERT 300 uses harmonics to simulate ball impact at the center of the racquet's stringbed. The result is a Dynamic Tension (DT) measurement that has been considered the industry standard for decades. Your racquet's DT is tracked in our stringing database to further ensure consistent results every time you visit. The Head 3-in-1 Racquet Tuning Station is the latest precision technology added to the CCS workshop. This device measures a racquet's static weight, balance, and most importantly swingweight, a measure of how a racquet feels when a player takes a swing. Downloadable swingweight apps require measuring the length of the racquet, determining the pivot point, intricate setup, specific camera placement, and lots of trial and error. The Head 3-in-1 eliminates all of those variables to provide the most accurate swingweight measurement possible while customizing racquets and matching multiple racquets. You cannot improve what you don't measure!
physics
http://www.kritiindia.com/laboratory-2/
2019-06-20T00:52:55
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999130.50/warc/CC-MAIN-20190620004625-20190620030625-00099.warc.gz
0.892766
371
CC-MAIN-2019-26
webtext-fineweb__CC-MAIN-2019-26__0__125356094
en
THE LITMUS TESTS: Every Kriti product undergoes a series of stringent tests to emerge as a quality statement befitting our consumers' expectations. The scientific procedures and latest equipment are a solid assurance of the product conforming to the most exacting standards. A bevy of modern gadgets are lined up to test the strength of our products before they see the light of day, such as: · Tensile Tester: tests the tensile strength of the product to determine whether it is strong enough to withstand the most rigorous service conditions. · Oxidation Induction Meter: this piece of equipment is one of the most sophisticated in our armoury. It tests the products for their ability to stand the test of time and resist degradation by oxidation. · Coefficient of Friction Tester: this test measures the force of friction between the inner lubricated layer of PLB HDPE duct and optical fiber cables, to obviate any chance of friction damage to the delicate optical fiber. · Carbon Black Dispersion Tester: this test determines the dispersion of the carbon black in the polymer compound, which is responsible for the stability of the product under constant exposure to UV light. · Hydrostatic Pressure Testing Machine: a fully automated, computerized hydrostatic pressure testing machine tests each product for its strength in bearing the water pressure that flows through it. · Discharge Uniformity Measurement Machine: a hydrostatic testing machine to test the dripper discharge for every batch of production. · Impact Resistance Tester: to measure the brittleness of the pipe at 0 °C. · ESCR Tester: to test the long-term properties of the product and understand its weather resistance.
physics
https://bioxas-imaging.lightsource.ca/about/macro-mode/
2023-09-23T04:21:11
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506479.32/warc/CC-MAIN-20230923030601-20230923060601-00154.warc.gz
0.883216
421
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__181445782
en
The macro resolution mode is capable of rapid XRF imaging of relatively large samples. In addition, subregions of interest can be scanned with higher spatial resolution without changing the setup. A series of pinholes, which determines the beam size, is mounted on a motorized stage, and a beam size of interest (ranging between 20 and 150 μm) can be quickly selected during the same experiment. There are two stage configurations available depending on the sample thickness. The 45-degree stage configuration with respect to the incident beam is generally used for cross-sections varying from a few μm to a couple of hundred μm. For thicker samples, with thicknesses in the mm range, the 90-degree stage configuration is used. - Four beam sizes are available between 20 and 150 μm, defined by Pt or W apertures - Bi-directional fly scanning up to 20 ms dwell time - Two stage configurations, 90 and 45 degrees, with respect to the incoming beam - Camera for scan setup/sample visualization - Samples under ambient air The following fluxes are available with the Pt aperture at 10 keV (5th harmonic). A similar flux can be obtained with the W aperture as well. - 20 μm: 0.6 x 10^11 ph/s/100 mA - 50 μm: 3.5 x 10^11 ph/s/100 mA - 100 μm: 0.9 x 10^12 ph/s/100 mA - 150 μm: 1.4 x 10^12 ph/s/100 mA The stage configuration can be changed depending on the thickness of the sample. Stage configuration at 90 degrees - The macro stage at 90 degrees orientation and the 4E Vortex detector at 45 degrees to the incident beam. Stage configuration at 45 degrees - The macro stage at 45 degrees orientation and the 4E Vortex detector at 90 degrees to the incident beam.
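Since the tabulated fluxes are quoted per 100 mA of stored ring current, an estimate at a different current follows by linear scaling. The helper below, and the linearity assumption itself, are illustrative sketches rather than part of the beamline documentation:

```python
# Tabulated 10 keV fluxes (ph/s per 100 mA), keyed by aperture size in um
FLUX_PER_100MA = {
    20: 0.6e11,
    50: 3.5e11,
    100: 0.9e12,
    150: 1.4e12,
}

def estimated_flux(aperture_um, ring_current_ma):
    """Scale the per-100 mA tabulated flux linearly to the actual
    stored ring current (assumed linear with current)."""
    return FLUX_PER_100MA[aperture_um] * ring_current_ma / 100.0
```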
physics
http://davidakin.com/onthehill/main-page/big-computers-tackle-the-problem-of-climate-change-and-in-doing-so-cause-climate-change/
2024-03-04T02:24:12
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476409.38/warc/CC-MAIN-20240304002142-20240304032142-00750.warc.gz
0.960848
656
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__73267436
en
The new TOP500 list is out. Geeks, like me, get excited about this. The semi-annual TOP500 ranks the world's biggest, fastest supercomputers. At the top right now is Jaguar, a Cray XT5-HE Opteron Six Core 2.6 GHz, at Oak Ridge National Laboratory in Tennessee. Jaguar was upgraded this year at a cost of US$20 million and given the task of basically modelling the entire planet so that scientists can run various climate change problems on it. Jaguar went 'live' earlier this year. It knocked an IBM system out of the top spot and the Jaguar folks were happy to brag about that. The folks who operate that IBM system, though, bragged right back. The (U.S.) National Nuclear Security Administration may not have the number one slot anymore, but it's got three of the top 10, all of which are involved in modelling nuclear explosions: The three computers in the top 10 were Roadrunner (#2, Los Alamos National Laboratory); BlueGene/L (#7, Lawrence Livermore National Laboratory); and Red Sky (#10, Sandia National Laboratories/National Renewable Energy Laboratory). In addition, the Dawn platform at Livermore was ranked as the 11th fastest in the world. “The work done on these complex machines enables us to maintain the safety, security and effectiveness of our nuclear stockpile without nuclear testing,” said NNSA Administrator Thomas D'Agostino. “The supercomputing systems are a critical example of our investment in nuclear security making contributions to broader science and discovery. I am very pleased to see our laboratories and highly skilled personnel recognized for their groundbreaking contributions to the advancement of our national security programs and the field of supercomputing.” All this supercomputing, though, may not be great news for the environment, as Bill St. Arnaud wrote on his blog this week: … the UK Meteorological Office's new supercomputer is one of the single biggest sources of CO2 emissions (Scope 2) in the UK.
Paradoxically, this is the same computer that is being used for climate modeling in that country. Thanks to a pointer from Steve Goldstein we learn that even America's spy agency, the NSA, is also running into energy issues and as such is building huge new data centers in Utah and Texas, both of which will probably use dirty coal-based electricity as well. There are also rumors that NCAR is building a new cyber-infrastructure center in Wyoming (presumably also using coal-based electricity), which sort of undermines its own credibility as America's leading climate research institute. I suspect very shortly, with all the new announcements of grids and supercomputers from OSG to Jaguar, that cyber-infrastructure collectively in the US will be one of the top sources of CO2 emissions, as it is now in the UK. Bill's blog, incidentally, is all about greening up the ICT sector. In one recent post, he noted that an Australian ISP with about 170,000 subscribers had gone carbon-neutral and was making its power and equipment purchasing decisions with an eye to lowering its carbon footprint.
physics
https://radiocontrolledworld.com/embrace-e-skating-boon-of-regenerative-braking/
2024-04-13T10:07:56
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816587.89/warc/CC-MAIN-20240413083102-20240413113102-00109.warc.gz
0.913415
3,005
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__70992717
en
Imagine extending the range of your electric skateboard while enhancing its overall efficiency – that’s the power of regenerative braking. Moving beyond the rudimentary grasp of skateboarding, the modern e-skating enthusiast needs to understand the potential of harnessing kinetic energy through this technology. This breakthrough transforms the very dynamics of braking in electric skateboards, offering far more than just stopping power. Through the course of this exploration, you’ll delve into the intricate workings of regenerative braking, contrasting it with traditional braking and delving into its commendable benefits. Understanding Regenerative Braking Unleashing the Power of Regenerative Braking In the realm of automobile technology, regenerative braking has made waves, proving to be more than just a passing trend. This innovative system turns your vehicle into an energy-harvesting dynamo every time you hit the brakes. But what exactly is regenerative braking and how does it work? Buckle up for a thrilling journey through the inner workings of this cutting-edge technology. Regenerative braking, in the simplest terms, is a system that allows your vehicle to recover and utilize the energy normally wasted during braking. In a traditional braking system, when you press on the brake pedal, the car slows down because of friction and this process generates heat. Essentially, the kinetic energy (the energy of movement) gets lost as thermal energy. All that potential power simply dissipates into the atmosphere. What a waste, right? This is where regenerative braking enters the picture. Contrary to conventional braking systems, regenerative brakes provide a way to capture that escaping kinetic energy, convert it into electrical energy, and store it for later use. It’s recycling energy, which promotes both efficiency and sustainability. Quite a game-changer! But how does this happen? 
Regenerative braking relies on something quite innovative – an electric motor that doubles as a generator. When the vehicle is moving and the brakes are not engaged, the motor uses energy from the battery to spin the wheels and propel the vehicle forward. In this mode, it functions as a standard motor. The magic happens when you hit the brakes. The motor switches roles and begins acting as a generator. The wheels, instead of being driven by the motor, now turn the motor, converting kinetic energy into electrical energy. It does this using the principle of electromagnetic induction, wherein moving a magnet through a wire coil generates a flow of electrons, or electricity. This electricity flows back into the vehicle’s battery or a capacitor (energy storage device), recharging it and readying it for the next go-round. When you press the accelerator again, the stored energy becomes usable, improving overall efficiency and reducing the need for other power sources. One crucial point worth noting is that regenerative brakes can only convert a portion of the available kinetic energy back into usable electricity. The rest continues to be lost as heat, making traditional friction brakes still necessary for safely stopping the vehicle in all conditions. Regenerative braking has become particularly popular in hybrid and electric vehicles where the goal is to conserve energy and extend the vehicle’s range. However, the potential of this technology, with its ethos of reuse and sustainability, could stretch well beyond these applications in the future. Delving into the world of regenerative braking reveals both its genius and complexity. Whether on highways, urban streets, or countryside roads, without a doubt, this technology is taking driving into a more sustainable future. We hope that this understanding makes your next journey even more awe-inspiring as you literally turn every stop into another starting point. 
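To put rough numbers on the energy being recycled, the recoverable portion of a braking event is the drop in kinetic energy times a round-trip conversion efficiency. A back-of-the-envelope Python sketch (the mass, speeds and efficiency are invented for illustration):

```python
def kinetic_energy_j(mass_kg, speed_m_s):
    """Kinetic energy of the moving vehicle: KE = 1/2 m v^2."""
    return 0.5 * mass_kg * speed_m_s ** 2

def recovered_energy_j(mass_kg, v_start_m_s, v_end_m_s, efficiency):
    """Energy returned to the battery during braking: the drop in
    kinetic energy times the conversion efficiency (the remainder is
    still lost, mostly as heat in the friction brakes)."""
    delta_ke = kinetic_energy_j(mass_kg, v_start_m_s) - kinetic_energy_j(mass_kg, v_end_m_s)
    return efficiency * delta_ke

# A 1500 kg car braking from 20 m/s to rest, recovering 60% of the energy
energy = recovered_energy_j(1500, 20, 0, 0.6)   # about 180 kJ
```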
So now you know: the next time you hit the brakes with regenerative braking, you’re not just slowing down; you’re powering up for the next adventure!

Regenerative Braking vs. Traditional Braking

Steering deeper into the realm of regenerative braking, let’s unlock some further complexities of this unique technology. Unveiling additional layers of understanding, we’ll delve into its components and the essence of its functionality, without any overcomplicated chatter or repetition. Hang on for the rollercoaster ride of friction and energy as we round the corner into the heart of this innovative braking system. Twist the conventional brake model on its head, and you’ll find regenerative brakes providing a sustainable substitute. But what nuances separate this green alternative from a traditional anchor? Beginning with the vehicle’s kinetic energy, think of it like a bouncing ball. When it’s moving, it’s got a ton of energy, right? But the moment it hits the ground, it comes to a rather quick halt. Upon stopping, its motion energy is converted into heat, dispersing into the surrounding environment—essentially wasted. Traditional brake discs and pads play the part of the ground here. The moment they engage, it’s adios, kinetic energy! Now, let’s get into gear with regenerative brakes. Instead of that energy fleeing the scene, it’s put to work, transforming back into electrical energy. But how? Well, the secret lies in the machinery of the motor-generator unit. The magic unfolds when our vehicle needs to slow down. Instead of applying the clamps and heating things up, the motor’s function is reversed. Its spinning action, initially fed by electrical energy, is now creating just that. The vehicle’s forward momentum keeps the motor’s rotor spinning, turning it into a generator and replenishing lost electrical energy. This fabulously orchestrated role reversal, known as electromagnetic induction, doesn’t quite end there.
Once the energy has been generated in this alternate fashion, it gets diverted to our trusty onboard battery or capacitor. Growing in number among hybrids and electric vehicles, ‘regen’ systems effectively extend battery life and range, a profitable investment indeed. But as they say, even roses have thorns. Regen’s kryptonite? Extreme decelerations and static scenarios, situations where its efficiency wavers. Consequently, the traditional friction brake system isn’t quite ready for early retirement and remains a necessary co-star for complete stopping power. Regenerative braking has been generating waves in the automobile industry, pushing the boundaries of potential energy capture. Currently riding in the luxury lane of hybrids and electric vehicles, traces of regen can even be found twirling on bike trails. Tiptoeing towards the future, ‘regen’ technology is bound to bloom beyond motorized mobility. Trains, roller coasters, elevators, anyone? The possibilities are as vast as the open road itself. Much like crafting a hobby, harnessing regenerative braking is a relentless pursuit of efficiency and sustainability, taking it slow and steady, always eager for the next bend in the road. A progressive technology that turns the concept of braking on its head, regenerative braking confirms an age-old adage – waste not, want not.

Advantages of Regenerative Braking

The exceptional benefits of regenerative braking in electric skateboards become clearer when you apply these concepts to the real-world fun and challenges of skateboarding. Primarily, regenerative braking increases the board’s overall energy efficiency. This concept, though popular in hybrid and electric cars, is just as awe-inspiring in electric skateboards. Long rides are part of the charm and thrill of skateboarding. Here, the system plays a key role. When you brake, instead of wasting the generated kinetic energy, it captures and reuses it.
This process extends the battery life and allows for longer, uninterrupted rides. Additionally, regenerative braking makes your ride smoother. An electric skateboard running downhill can easily pick up more speed than you’d prefer. This is where our ingenious system jumps in to maintain a manageable speed, ensuring your downhill ride stays thrilling but safe. Furthermore, regenerative braking introduces a nuanced control mechanism. It affords skateboarders the ability to take command of the board’s speed with precision. This handling may seem alien at first, but seasoned skaters acknowledge the enhanced control as a game-changer, especially during more artful maneuvers. Ultimately, the economic value of regenerative braking cannot be overlooked. Given its advantage in extending battery life, the need for frequent recharging lessens, effectively reducing your electricity bill. Moreover, the prolonged battery lifespan negates the need for immediate replacements, saving you more money in the long run. To wrap things up neatly, regenerative braking revolutionizes the electric skateboard. Its energy efficiency, rider safety, precise control, and economic benefits all provide an unparalleled experience. Just as skateboarding has evolved from simple wooden cruisers to sophisticated electric machines, so too has the braking technology enriched the sport. Embrace the change, hop on an e-skateboard, and let the regenerative braking system provide an adventure that’s worth every ride.

Real-world Application and User Experience

As the popularity of electric skateboarding continues to surge, one standout feature that enhances the riding experience is regenerative braking. So, buckle up, skateboard enthusiasts! Let’s dive into learning how regenerative braking adds value to our rides and redefines skateboarding as we know it. Firstly, consider the implications of a system that harvests energy when braking.
This energy would usually be lost to friction in traditional braking systems. Yet, thanks to regenerative braking, not a single bit of that power goes to waste. The energy harvested from braking then pumps back into the skateboard’s battery, significantly extending the duration of rides. You’d be surprised just how much this increases a board’s range, granting skaters more freedom to conquer greater distances. Zooming in on steep slopes and downhill terrain, regenerative braking brings a whole new dimension to the game. Traditionally, speeding downhill was a thrill – but also posed a risk. With regenerative braking, downhill rides remain exhilarating but are less threatening. At higher speeds, this feature offers more control, reducing speed without the skater having to step off the board. It’s a clever way to reach a smooth, manageable ride that marries safety with adrenaline. For those with an advanced skill set, the added control from regenerative braking can step up the skateboarding experience. Precision becomes second nature as the skateboard responds much more readily to the rider’s commands. It’s all about skater-friendly control: allowing for on-the-fly adjustments and tight turns that are the embodiment of freedom on wheels! It’s not just about the ride though. Regenerative braking also counts when it comes down to economics. The power recovery not only prolongs the skateboard battery’s life but also, in the bigger picture, reduces overall charging time. Consequently, the skater enjoys a lighter burden on their electricity bill, topping the thrill of the ride with a sense of smart saving. Unquestionably, the impact of regenerative braking on electric skateboarding has been nothing short of revolutionary. It has turned the entire experience on its head, offering safer, more controlled rides while ensuring cost efficiency.
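The range gains described above can be sketched in a few lines of Python. The battery size, consumption figure, and 10% recovery fraction below are illustrative assumptions, not specifications of any particular board.

```python
# Sketch: effect of regenerative braking on an e-skateboard's range.
# Battery size, consumption, and recovery fraction are assumptions.

def range_km(battery_wh, wh_per_km, regen_fraction=0.0):
    """Effective range when a fraction of consumed energy is
    recovered through braking and fed back into the battery."""
    effective_wh_per_km = wh_per_km * (1 - regen_fraction)
    return battery_wh / effective_wh_per_km

base = range_km(360, 15)                          # no regen: 24 km
with_regen = range_km(360, 15, regen_fraction=0.10)
print(f"{base:.1f} km -> {with_regen:.1f} km")    # 24.0 km -> 26.7 km
```

Even a modest recovery fraction translates into a noticeable range bump, which is the effect riders report on hilly, stop-and-go routes.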
That’s the beauty of technology merged with passion – creating a path to progress in a sport or hobby we love! So, next time you strap on your helmet and ride off into the sunset, know that regenerative braking is working in the shadows, enhancing your ride in ways you may not even realize! Safe and fun skateboarding to all!

Choosing an Electric Skateboard with Regenerative Braking

It’s critical to mention that when selecting an electric skateboard with regenerative braking, several factors need to be considered for an optimized skateboarding experience. One of the primary factors to consider is the terrain the skateboard will be operating on. Smooth, well-paved asphalt roads allow maximum efficiency of regenerative brakes due to minimized vibration and friction. Thus, if the skateboard will frequently be used on uneven terrain, other efficient braking systems should be considered. The weight of the rider is another consideration. Heavier riders may generate more energy during braking that can be recaptured. However, this also places additional strain on the braking system. It’s essential to choose a skateboard designed to handle the rider’s weight without compromising safety. The design and quality of the electric motor also play a critical role in the effectiveness of regenerative braking. Opt for a motor that efficiently converts kinetic energy back into electrical energy. Meticulously assess the skateboard’s motor specifications, including torque and power rating, before making a purchase. A skateboard’s battery type and capacity can also influence the efficiency of regenerative braking. Lithium-ion batteries are popular due to their efficiency, longevity, and ability to sustain high power outputs. A battery with a larger capacity, however, offers more energy storage, which means more energy can be recovered and skating sessions can last longer.
A crucial factor not to be overlooked is the skateboard’s controller and its ability to moderate the energy flow. A suitable skateboard controller should be able to seamlessly transition between motor and generator functions, thereby maximizing the electric skateboard’s overall energy efficiency. Finally, an electric skateboard’s cost should be considered. Regenerative braking technology can lead to a higher upfront cost. However, these costs may be offset in the long run due to the extended battery life and less frequent charging that regenerative braking provides. Choosing the perfect electric skateboard goes beyond the lure of smart features like regenerative braking. Adequate consideration of the factors highlighted in this piece ensures a well-rounded purchase decision, and ultimately a more enchanting skateboarding adventure. Having unveiled the transformative essence of regenerative braking, it’s clear that this technology stands as a beacon for the future of electric skateboarding. Beyond just efficiency, it’s an eco-friendly method offering a budget-friendly solution in the long run. Furthermore, it adds a new dimension to the e-skating experience, and this piece has explored the considerations when selecting an e-skateboard equipped with such technology. Hence, adopting regenerative braking isn’t just a step towards advanced technology, but also a stride towards responsible and thoughtful skateboarding. The vibrant world of e-skating awaits. Are you ready to embrace the revolution?
Membrane Exceeds Power Density Target Late last week, Nathan Hancock, the research director for Massachusetts-based Oasys Water, told WDR, “We have consistently achieved a power density greater than 7 W/m2 using a 28 g/L sodium chloride draw solution and a ‘clean water’ feed at 20˚C – conditions that replicate PRO in a seawater/river water application.” Most importantly, the company was able to do so with its standard, commercially available, 4-inch diameter, 40-inch long spiral-wound element. “Our elements have been tested by third parties to confirm the results, and we’re encouraged that additional improvements currently underway will yield performance even greater than 10 W/m2. And, by switching to our proprietary thermolytic draw solution and osmotic heat engine design, we could go much higher,” he added. Hancock noted that a paper published online in Environmental Science & Technology this month by the Colorado School of Mines’ Tzahi Cath and his colleagues, showed that Oasys’ membrane outperformed the others that were tested and was able to operate at a flux that was almost three times higher than other commercially available thin-film composite and cellulose acetate forward osmosis (FO) membranes. Oasys CEO Jim Matheson told WDR that the company is exploring partnerships with leading membrane producers and PRO developers who can make use of its PRO membrane technology while the company sticks to its core focus and continues to develop FO technology-based systems. This is an excerpt from a longer article, which can be found at GWIdesalination.com.
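As a rough cross-check on the test conditions quoted above, the ideal (van 't Hoff) osmotic pressure of a 28 g/L NaCl draw solution at 20˚C can be estimated in a few lines of Python. This is a back-of-the-envelope calculation; real solutions deviate somewhat from the ideal value.

```python
# Sketch: ideal (van 't Hoff) osmotic pressure of a 28 g/L NaCl
# draw solution at 20 degrees C -- an illustrative estimate only.

R = 8.314          # gas constant, J/(mol*K)
M_NACL = 58.44     # molar mass of NaCl, g/mol

def osmotic_pressure_bar(conc_g_per_l, temp_c, ions_per_formula=2):
    molarity = conc_g_per_l / M_NACL       # mol/L
    conc_mol_m3 = molarity * 1000          # mol/m^3
    pi_pa = ions_per_formula * conc_mol_m3 * R * (temp_c + 273.15)
    return pi_pa / 1e5                     # Pa -> bar

print(f"{osmotic_pressure_bar(28, 20):.1f} bar")  # ~23 bar
```

The result, roughly 23 bar, is in the neighborhood of seawater's osmotic pressure, which is consistent with the article's statement that these conditions replicate a seawater/river water PRO application.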
This is your chance to own a unique 4x6 inch postcard - featuring Cian's image of the full Moon over Blackrock Castle Observatory - which flew to space in 2021.

Did the postcards really go to space?

Yes, they really did! In 2021, 100 postcards featuring Cian's image launched from the west Texas desert on an 11-minute flight aboard Blue Origin's New Shepard crew capsule. During this time, the postcards traveled at speeds of over 3x the speed of sound and floated weightless for several minutes, while flying high above the Kármán Line - 100 km above the Earth - otherwise known as the internationally recognised boundary of space!

Postcard size: 4x6 inches (10x15 cm). Your postcard will be sealed in a clear protective plastic toploader so it arrives in your postbox in pristine condition.

What can I do with my flown-in-space postcard?

That's completely up to you! However, because you'll want to enjoy looking at both sides of the postcard, I recommend getting your hands on a see-through acrylic frame so you can gaze at the photo of Blackrock Castle on one side, and the "Flown to Space On New Shepard" stamp alongside information about the photo on the other side. Each postcard will be stamped with Blue Origin's official "Flown to Space On New Shepard - Blue Origin" postmark as shown in the preview images. In addition, your postcard will be signed by Cian and numbered out of 100 (the remaining 90 postcards will be used for outreach and education purposes at Blackrock Castle Observatory). You will also be issued with a certificate of authenticity from Cian.
Convergence of the meridians

The angular drawing together of the geographic meridians in passing from the Equator to the poles. At the Equator, all meridians are mutually parallel; passing from the Equator, they converge until they meet at the poles, intersecting at angles that are equal to their difference in longitude. The term ‘convergence of meridians’ is also used to designate the relative difference of direction of meridians at specific points on the meridians. Thus, for a geodetic line, the azimuth at one end differs from the azimuth at the other end by 180 degrees plus or minus the amount of the convergence of the meridians at the end points.
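On a sphere, the convergence between two nearby points can be approximated as the longitude difference times the sine of the mean latitude. The Python sketch below illustrates the definition; geodetic practice uses more elaborate ellipsoidal formulas.

```python
import math

# Sketch: spherical approximation of the convergence of the
# meridians between two points, gamma ~= dlon * sin(mean latitude).
# Illustrative only; geodesy uses ellipsoidal formulas.

def meridian_convergence_deg(lon1, lat1, lon2, lat2):
    dlon = lon2 - lon1
    mean_lat = math.radians((lat1 + lat2) / 2)
    return dlon * math.sin(mean_lat)

# Two points one degree of longitude apart at ~45 N:
print(f"{meridian_convergence_deg(16.0, 45.0, 17.0, 45.0):.3f} deg")
# At the Equator the meridians are parallel (zero convergence):
print(f"{meridian_convergence_deg(16.0, 0.0, 17.0, 0.0):.3f} deg")
```

The two printed cases match the glossary entry: zero convergence at the Equator, growing toward the full longitude difference as the points approach a pole.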
Google DeepMind researchers have introduced a cutting-edge deep learning tool, Graph Networks for Materials Exploration (GNoME), uncovering a whopping 2.2 million new crystals. Among these discoveries, 380,000 exhibit promising stability, potentially revolutionizing future technological advancements. GNoME's unprecedented discovery, detailed in Nature, marks a monumental stride in crystal prediction, exploration, and synthesis. With an incredible acceleration in the process, this AI-powered tool has identified a multitude of potential materials that could redefine technology applications, including superconductors and advanced batteries for electric vehicles. Notably, the tool's predictions have been confirmed by real-world validations. Researchers worldwide independently synthesized 736 of these newly discovered crystal structures, underlining GNoME's accuracy and groundbreaking potential. Prior to GNoME's innovation, material science faced hurdles in identifying viable crystals, often resorting to lengthy experimental methods. However, this revolutionary tool's predictions, equivalent to 800 years' worth of knowledge, have changed the game by showcasing unparalleled scale and accuracy in material stability. GNoME's discoveries are promising and expansive. It has surfaced over 52,000 new layered compounds akin to graphene, with transformative implications for electronics and potential superconductor development. Furthermore, it has revealed 528 potential lithium-ion conductors, indicating a substantial leap in enhancing rechargeable battery efficiency. The tool's predictions align with strict stability criteria, yielding 380,000 materials that meet established standards. GNoME's innovative graph neural network (GNN) models leverage structural and compositional pipelines, hastening the discovery process by generating candidate materials resembling known crystals and employing randomized approaches based on chemical formulas.
With GNoME's extensive database now available to the research community, the potential for developing sustainable and transformative technologies appears closer than ever. These discoveries are expected to inspire new experiments and potential synthesis endeavors, potentially driving the next wave of technological innovation. The remarkable findings from GNoME's AI-driven material exploration underscore the significant impact of advanced AI tools in reshaping material science, offering a glimpse into a future brimming with innovative possibilities.
- Tires are a critical component of e-scooter safety and stability
- Most shared scooters use either solid/semi-solid tires with shocks or pneumatic tires
- Recent testing demonstrates that pneumatic tires perform better across a wider variety of street surfaces and temperatures than solid tires with shocks, reducing vibrations by 33%

Shocks or no shocks? The answer may surprise you. Most shared scooters rely on one of two tire models: solid/semi-solid tires with “shocks” or pneumatic tires without them. As a reminder, pneumatic tires consist of a central wheel hub surrounded by a rubber, typically air-filled exterior. Given that a scooter’s tires are one of the most critical components impacting traction and stability, it’s important for shared micromobility providers to invest in designs that improve vehicle safety. At Bird, we intentionally worked with a tire manufacturer to develop a unique, automotive-grade scooter tire specifically designed for the rigors of the shared micromobility industry. That means our in-house designed and engineered vehicles are equipped with specially-designed 6-ply tubeless pneumatic tires that contain a special sealant to prevent flats. We did this for two reasons:
- First, pneumatic tires tend to offer better shock absorption over a wider variety of common street surfaces including gravel and cobblestone
- Second, while solid tires become rigid with dropping temperatures, decreasing their traction when it’s needed most, the air in pneumatic tires contracts in colder temperatures, keeping them softer and more compliant

Recent testing at our state-of-the-art R&D facility in Southern California has borne out these initial assumptions, demonstrating that pneumatic tires perform better than solid tires with shocks across a wide variety of street surfaces.

How We Tested Pneumatic Tires vs. Solid Tires With Shocks

To perform the evaluations, our engineers compared a Bird Three scooter equipped with our custom pneumatic tires and a mass-produced scooter model used by many operators worldwide with semi-solid tires and shocks. Both vehicles were outfitted with testing instruments on the handlebars to detect the vertical acceleration that would be experienced by riders as jolts and vibrations. We then ran each vehicle multiple times down test tracks located in our SoCal test facility made up of gravel, cobblestone and a wide variety of other simulated street surfaces that riders encounter across the globe. In each test, Bird Three’s unique tubeless tire design performed better than the solid tire with shocks, experiencing on average 33% less vertical acceleration than the other commonly used model. “This reduction in vibrations is significant because it means 33% more stability and control for riders when experiencing everyday bumps and uneven road conditions” said Scott Rushforth, Chief Vehicle Officer at Bird. “Pneumatics are demonstrably better at damping vibration and low-frequency bumps than solid or semi-solid tires with suspension, and this most recent testing clearly validates that.” In addition to the safety benefits that come with fewer vibrations and better handling and stability on bumpy streets, Bird’s pneumatic tires also have a hidden environmental benefit. By limiting the amount of shaking our scooters experience, we cut down on premature wear and tear of vehicle components, increasing sustainability. Pneumatic tires can also easily be changed when damaged, salvaging the wheel hub, while solids are typically serviced by replacing the entire wheel or motor assembly. Given that manufacturing represents the highest amount of a scooter’s lifetime CO2 output, that additional protection spread across a fleet of tens of thousands of vehicles quickly adds up.
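The data reduction behind a comparison like this can be sketched as follows: reduce each handlebar accelerometer trace to an RMS vibration figure and compare the two setups. The sample values below are made up for illustration and are not Bird's test data.

```python
import math

# Sketch: comparing two tire setups by the RMS of handlebar
# vertical-acceleration samples. All sample data is made up.

def rms(samples):
    """Root-mean-square of a list of acceleration samples."""
    return math.sqrt(sum(a * a for a in samples) / len(samples))

# Hypothetical vertical-acceleration traces (m/s^2) over a track:
solid_with_shocks = [1.2, -0.9, 1.5, -1.1, 0.8, -1.4]
pneumatic = [0.8, -0.6, 1.0, -0.7, 0.5, -0.9]

reduction = 1 - rms(pneumatic) / rms(solid_with_shocks)
print(f"vibration reduced by {reduction:.0%}")
```

RMS is a standard choice for vibration comparisons because it weights sustained shaking and isolated jolts into a single energy-like figure, which is what riders actually feel.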
To learn more about the safety and sustainability of our vehicles, including how the industry’s longest footboard helps make our scooters even more stable and accessible for all riders, subscribe to the Bird Cities Blog.
We are widely recognized as one of the major Ceramic Hybrid Ball Bearings Exporters based in India. Our Ceramic Ball Bearing is comprised of an outer and inner ring manufactured from high quality steel. Providing excellent performance, our Full Ceramic Ball Bearing is anti-abrasive and non-corrosive in nature.

Ceramic Ball (Product Introduction)

Ceramic ball specialties compared with steel ball:
- Material: silicon nitride (Si3N4), zirconia (ZrO2), and alumina (Al2O3)
- Specification: 1/64" ~ 3/2", with different gauges
- Precision: G3 ~ G20
- Usage: for bearings, inspection, and other rolling conditions
- Capacity: 3 million pcs/month for 5/32" silicon nitride balls
- Lighter than steel
- Larger elastic modulus
- Lower friction coefficient, rolls more freely
- Lower coefficient of thermal expansion
- Superior surface finish
- Higher high-temperature hardness
- Never rusts, and can roll without oil or grease
- More corrosion resistant than steel

We produce FULL CERAMIC BEARINGS and CERAMIC BALL BEARINGS. A full ceramic bearing means the inner ring, outer ring, and balls are composed of ceramic materials including zirconia and silicon nitride. A ceramic ball bearing, also named a hybrid bearing, means the balls are ceramic, while the outer ring and inner ring are steel. A ceramic ball bearing can have an identical structure to a steel bearing.

Property advantages compared with metal bearings:
- Higher limit speed: ceramic is lighter than steel, and can effectively restrain the centrifugal force, therefore enhancing the limit speed.
- Higher precision usage: ceramic has higher hardness and elastic modulus than steel, which means a ceramic bearing is stiffer and more rigid than a steel bearing, and thus can be used in higher precision conditions.
- Longer life: lighter ceramic leads to lower centrifugal force, thus extending bearing life. Furthermore, the friction coefficient of ceramic is lower than steel, which also extends bearing life.
- Higher temperature usage: ceramic is more mechanically stable at elevated temperatures, and thus can be used at higher temperatures.
- Temperature-variation usage: ceramic has a lower thermal expansion coefficient, so the clearance and tolerance variation are lower than those of a steel bearing, allowing a ceramic bearing to be used over a larger temperature-variation range.
- Better seizure resistance: ceramic has a smaller thermal expansion coefficient, indicating less thermal deformation, thus enhancing seizure resistance.
- Can run without oil or grease: ceramic never rusts and is self-lubricating, so it can be used in situations that require no oil or grease.
- Resistant to acid, alkali and salt: the chemical industry is the largest potential application industry of ceramic bearings to be exploited.
- More suitable for magnetic applications: our ceramic bearing is non-magnetic, which means it is difficult for magnetic particles to adhere to the race, thus reducing particle abrasion.

Ceramic ball bearings are especially outstanding in the first three items above, while full ceramic bearings are outstanding in the other items. Full silicon nitride bearings are superior to full zirconia bearings: they can endure higher temperatures and are more resistant to acid, alkali and salt, suiting rough service environments in the chemical, metallurgy, food, electric, and medical industries. Our bearing products have been used as high-speed motor spindle bearings, high-precision machine spindle bearings, dental drill bearings, high-speed wheelhead bearings, fishing bearings, etc.

- Retainer material: PTFE, nylon, PEEK, stainless steel, or without retainer
- Precision: P0 for full ceramic bearings and P0~P4 for ceramic ball bearings
- Major bearing series: deep groove ball bearings, angular contact ball bearings, and thrust ball bearings
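The "lower centrifugal force" advantage above is easy to quantify: the load on each rolling ball scales directly with its density. The sketch below uses typical illustrative densities and geometry, not figures for any specific bearing.

```python
import math

# Sketch: centrifugal load on a rolling bearing ball at high speed.
# Densities (steel ~7800, Si3N4 ~3200 kg/m^3) and geometry are
# typical illustrative values, not a specific bearing's data.

def centrifugal_force_n(density_kg_m3, ball_dia_m, pitch_radius_m, rpm):
    volume = (4 / 3) * math.pi * (ball_dia_m / 2) ** 3
    mass = density_kg_m3 * volume
    omega = rpm * 2 * math.pi / 60      # rad/s
    return mass * omega**2 * pitch_radius_m

args = (0.008, 0.025, 30000)            # 8 mm ball, 25 mm pitch radius
steel = centrifugal_force_n(7800, *args)
si3n4 = centrifugal_force_n(3200, *args)
print(f"steel {steel:.1f} N vs Si3N4 {si3n4:.1f} N")
# The ceramic ball carries ~59% less centrifugal load at this speed.
```

Because the force scales linearly with density, silicon nitride's roughly 60% weight saving translates directly into the higher limit speeds and longer life claimed above.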
Universal, digital pressure measurement for connection of one pressure sensor. This device allows the conversion from analogue measurement to digital measurement. The measurement results can be transferred to a customer's diagnostic system. The measurement device is able to provide analogue proportional signals corresponding to the measurement results. This device is available with a calibration certificate (as an option). Also available with a USB interface and software to visualize, capture and save the measured results. An analogue, proportional output signal is available on request.

Range of application (examples):

| Part no. | Pressure measurement range | Application |
|---|---|---|
| 010 805_1 | -1 to 16 bar | Vacuum, oil pressure, fuel pressure |
| 010 806_1 | 0 to 60 bar | Compression |
| 010 807_1 | 0 to 600 bar | Hydraulic |
| 010 808_1 | 0 to 2500 bar | Common rail, hydraulic |

Scope of delivery:
- 1 pressure gauge UDD 02_LR_S
- 1 CD with software
- 1 USB cable

Delivery in a solid plastic case with CNC-milled foam insert.
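Converting the device's analogue proportional output into a pressure reading is a linear mapping from signal span to sensor range. The sketch below assumes a hypothetical 0-10 V output span for illustration; the actual signal range is device-specific and must be taken from the datasheet.

```python
# Sketch: mapping an analogue proportional output to pressure.
# The 0-10 V span is an assumption for illustration only.

def to_pressure(volts, p_min, p_max, v_min=0.0, v_max=10.0):
    """Linearly map a sensor voltage onto its pressure range."""
    frac = (volts - v_min) / (v_max - v_min)
    return p_min + frac * (p_max - p_min)

# A mid-scale 5 V reading from the -1..16 bar sensor (010 805_1):
print(f"{to_pressure(5.0, -1, 16):.1f} bar")  # 7.5 bar
```

The same mapping works for each part number in the table; only `p_min` and `p_max` change.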
Batteries often need water added to them over time. However, there seems to be some confusion about what to put in them when it's time. Never use tap water to fill battery cells; I can't stress that enough. Tap water is full of natural salts and minerals that can't be seen with the naked eye. These salts and minerals attach themselves to the lead plates in the battery, and reduce the ability of the battery to produce and move electrons. Manufacturers recommend using distilled water for any filling of the battery cells that might be necessary. Distilled water has had all impurities removed. How do you know when to fill a battery or battery cells? When you take the cell cap or caps off the top of the battery and you see the lead plates above the water level. The water level inside the battery should just cover the top of the lead plates inside. If the lead plates are visible, pour distilled water into the cell or cells until the lead plates are covered. Do not overfill the battery cells; the batteries were designed for the electrolyte level to just cover the lead plates, to allow for expansion inside the battery if necessary. During battery activity, some of the water, through chemical action, breaks down into hydrogen and oxygen gases that may evaporate from the battery. The result of this process is a low electrolyte level. The battery must then be filled as described above. Maintenance-free batteries do not provide access to the electrolyte. Manufacturers build maintenance-free batteries with higher electrolyte levels to allow for evaporation and expansion of gases. No maintenance is required on these types of batteries. Always remember to wear safety goggles and rubber gloves when conducting maintenance on batteries.
Metal detectors are designed to find metals, and gold is a metal. However, there are factors to consider when finding gold nuggets with metal detectors. You must first know the features of the metal detector you will use and the kinds of objects it can find. It has long been proven that metal detectors can find gold nuggets, using detectors that have ground-balancing functions and features designed to locate gold. So, metal detectors are good equipment for finding gold. Therefore it is very important to know all the information about the metal detector device that you will use and its features.

Do All Types Of Metal Detectors Detect Gold?

All types of metal detectors work the same way, by producing a magnetic field that responds to objects buried underground. However, the response that you might receive from your metal detector will vary with several factors:
- The frequency level of the metal detector
- The soil type of the ground area where you are searching
- The conductivity of other metals hidden in the area
- The orientation, shape, and size of the object
- How deep the object is buried

So, not all metal detectors are accurate or consistent enough to detect gold nuggets or gold grains. Also, metal detectors which run at lower frequencies are not really effective in locating gold, due to gold's small size and lower conductivity. But metal detectors with higher frequencies are better at locating gold because they have the ability to recognize small-sized materials more accurately. This is the topic for a separate article; more on gold metal detectors can be learned here. Additionally, gold compounds respond better to high-frequency waves compared to low ones. According to many gold miners, the best type of metal detector they have used is the Pulse Induction metal detector. It is costlier than the lower-frequency ones, but it has the best features for gold searching.
Also, gold is normally found in areas with high mineralization, and low-frequency metal detectors have difficulty handling the ground conductivity in areas with lots of minerals like salt. The pulse induction metal detector, however, has a gold optimization function that enables it to work over highly mineralized ground and still locate gold. How deep can your gold prospecting metal detector reach? More advanced metal detectors can uncover gold even as tiny as half a grain, while larger gold nuggets are typically located at greater depths. The search coils of a metal detector play a big role in detecting gold objects underground. VLF metal detectors are very responsive to gold, but they are also very sensitive to soil minerals, and these minerals are present even in the best-producing gold prospecting parts of the earth. However, it is good to be aware that there is a way to tune a metal detector for mineral discrimination. This is the great strength of the Pulse Induction metal detector: it can ignore the harshest ground mineral surroundings, so you can successfully find gold nuggets even at its deepest setting. In gold prospecting, it is very important to know about ground balance, and the metal detector must have a feature to control it. The main purpose of this feature is to let the machine adjust for, and filter out, the iron content in the ground. Pulse Induction metal detectors can automatically track the ground with little adjustment while still getting a clear signal response from gold. When you go gold hunting, you first have to decide whether you are looking for smaller objects like gold grains or are more interested in larger ones like gold nuggets. 
As a rule, metal detectors that can locate smaller gold grains are less costly than machines that can reach larger gold nuggets at depth. A tiny grain of gold is often found near the surface, from about 2 inches (5.08 cm) underground in areas rich in gold, while bigger gold nuggets are best found at depths of about five inches or more. The pulse induction metal detector is a machine that can locate gold at a depth of 12 inches (30.48 cm), or one foot. So, do metal detectors detect gold nuggets? The answer is yes, they can. You just need to take note of some functional limitations of the metal detector you will use. Gold nuggets and gold grains are normally traced using better equipment or a high-grade machine, and metal detectors with high frequency have more chance of detecting gold. So, whatever type of gold you would like to search for, the Pulse Induction metal detector is the best device for you. Also, before you purchase a metal detector to find gold, you should learn more about what you are venturing into. Do your research when choosing your search areas, and make sure there is documented success in gold hunting there; most of the gold-producing parts of the world are well documented. Make sure that you operate legally and have permission to search the area. And, most importantly, enjoy the adventure!
physics
https://www.adityabirlahospital.com/best-cancer-hospital-in-pune/specialities-radiation-therapy/radiation-therapy-infra/
2020-07-10T23:08:00
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655912255.54/warc/CC-MAIN-20200710210528-20200711000528-00048.warc.gz
0.892726
636
CC-MAIN-2020-29
webtext-fineweb__CC-MAIN-2020-29__0__145512994
en
Aditya Birla Memorial Hospital offers highly conformal radiation treatments that destroy the reproductive ability of cancer cells, targeting the cancer so precisely that side effects are minimal, and there is little or no damage to the surrounding tissues. The techniques used at our cancer wing are Volumetric Modulated Arc Therapy (VMAT), Image Guided Radiation Therapy (IGRT), Intensity Modulated Radiation Therapy (IMRT), 3D Conformal Radiotherapy (3DCRT) and, above all, Stereotactic Body Radiation Therapy (SBRT) with Active Breathing Coordinator (ABC), taking optimum care to spare normal tissues and lessen the damage to other organs surrounding the area of radiation. Additionally, our Flattening Filter Free (FFF) Radiation Therapy – Pune’s first – makes the process 3 times faster than the normal procedure for patient comfort and accuracy. This is further combined with the ultra-fast leaf speed of the Agility MLC for precision treatment. The hospital is equipped with a state-of-the-art Linear Accelerator that offers the highest accuracy, and flexible radiation delivery methods that can be programmed to match the type, size and exact shape of the tumour. We are the first hospital in Maharashtra and only the second hospital in India to commission the latest model of Linear Accelerator, Versa HD with 6D couch, from Elekta. Integrated with Elekta’s recently launched Agility 160-leaf multileaf collimator (MLC), Versa HD provides highly conformal beam shaping – a critical requirement for maximizing the dose to the precise target. Importantly, this high targeting accuracy is available over a large field-of-view, permitting delivery of high-definition (HD) beams to a wide spectrum of complex targets. With a choice of multiple energies of electron beams in the same machine, superficial tumours can be targeted easily. 
With this ground-breaking combination, our clinicians can now, for the first time, take full advantage of higher dose rate delivery, enabling greater capabilities for sophisticated therapies, including Intensity Modulated Radiotherapy (IMRT), Image Guided Radiotherapy (IGRT), Volumetric Modulated Arc Therapy (VMAT), Stereotactic Radiotherapy (SRT) and Stereotactic Body Radiotherapy (SBRT), along with the widest Cone Beam CT (CBCT). The hospital boasts the most advanced treatment planning system, which is key to mapping the radiation dose to the target tissue prior to treatment. - Monte Carlo Biological Algorithm: Treatment plans are generated by the MONACO TPS, which uses a Monte Carlo biologically based algorithm - Oncentra Master Plan: Generates brachytherapy plans to deliver internal radiation using a radioactive source - For the first time in Pune – Octavius 4D: Treatment plan verification by a 4-dimensional patient phantom before executing the plan on the patient. The intention is to make sure that the radiation dose we have planned in the computer is actually being delivered to the patient, thereby offering assurance of accurate radiation delivery.
physics
http://na.support.keysight.com/pna/help/latest/S2_Opt/Fast_Swp.htm
2020-10-24T01:18:36
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107881551.11/warc/CC-MAIN-20201023234043-20201024024043-00077.warc.gz
0.859039
479
CC-MAIN-2020-45
webtext-fineweb__CC-MAIN-2020-45__0__88041168
en
You can achieve the fastest measurement sweep by adjusting the following: Other topics about Optimizing Measurements Consider changing each of the following settings as suggested. Frequency Span - Measure only the frequencies that are necessary for your device. Segment Sweep - Use segments to focus test data only where you need it. Switch Off Stepped Sweep - Use linear swept mode to minimize sweep time when possible. Auto Sweep Time - Use this default to sweep as quickly as possible for the current settings. Number of Points - Use the minimum number of points required for the measurement. For more information on how number of points and other settings affect sweep cycle time, see Technical Specifications. Noise Reduction Settings Using a combination of these settings, you can decrease the sweep time while still achieving an acceptable measurement. Average. Reduce the average factor, or switch Average off. Measurement Calibration Choice Choose the appropriate type of calibration for the required level of accuracy. When full 2-port error correction is applied, the analyzer takes both forward and reverse sweeps to gather all 12 error correction terms. This occurs even with a single S11 measurement displayed. All displayed measurements are updated as the second sweep is performed. Both sweeps are performed using the specified sweep time. When calibrating greater than 2 ports, the following formula is used to determine the number of sweeps required: N * (N-1) where N = the number of ports. When full 3-port calibration is applied, 6 sweeps are required; forward and reverse for each port pair. With full 4-port correction, 12 sweeps are required, and so forth. To limit the measurement time, perform ONLY the level of calibration that your measurements require. For example, if making only an S11 measurement, perform a 1-port calibration on that port. 
Sweep speed is about the same for uncorrected measurements and measurements done using a response calibration, or one-port calibration. For more information see Select a Calibration. The analyzer must update information for all active functions. To achieve an additional increase in sweep speed, switch off all of the analyzer functions that are not necessary for your measurement application. Analyzer sweep speed is dependent on various measurement settings. Experiment with the settings to get the fastest sweep and the measurement results that you need.
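The N * (N-1) sweep count for full N-port error correction described above can be captured in a small helper; a minimal sketch (the function name is my own, not from the instrument's API):

```python
def calibration_sweeps(num_ports: int) -> int:
    """Sweeps needed for full N-port error correction: a forward and a
    reverse sweep for each port pair, i.e. N * (N - 1)."""
    if num_ports < 2:
        raise ValueError("full error correction involves at least 2 ports")
    return num_ports * (num_ports - 1)

# Matches the examples in the text: 2 ports -> 2 sweeps,
# full 3-port cal -> 6 sweeps, full 4-port cal -> 12 sweeps.
for n in (2, 3, 4):
    print(f"full {n}-port calibration: {calibration_sweeps(n)} sweeps")
```

This makes the measurement-time cost of higher-port calibrations explicit: it grows roughly quadratically with port count, which is why the text advises performing only the level of calibration your measurement requires.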
physics
https://bilderreich.de/1375/fact-sheet-full-moon.html
2023-12-03T00:03:21
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100476.94/warc/CC-MAIN-20231202235258-20231203025258-00145.warc.gz
0.932277
537
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__142415563
en
Facts & Profile The full moon is the lunar phase when the Moon appears fully illuminated from Earth's perspective. This occurs when Earth is located between the Sun and the Moon (more exactly, when the ecliptic longitudes of the Sun and Moon differ by 180°). This means that the lunar hemisphere facing Earth – the near side – is completely sunlit and appears as a circular disk. The full moon occurs roughly once a month. The time interval between a full moon and the next repetition of the same phase, a synodic month, averages about 29.53 days. Therefore, in those lunar calendars in which each month begins on the day of the new moon, the full moon falls on either the 14th or 15th day of the lunar month. Because a calendar month consists of a whole number of days, a month in a lunar calendar may be either 29 or 30 days long. A full moon is often thought of as an event of a full night's duration, although its phase seen from Earth continuously waxes or wanes, and is full only at the instant when waxing ends and waning begins. For any given location, about half of these maximum full moons may be visible, while the other half occurs during the day, when the full moon is below the horizon. Many almanacs list full moons not only by date, but also by their exact time, usually in Coordinated Universal Time (UTC). Typical monthly calendars that include lunar phases may be offset by one day when prepared for a different time zone. The full moon is generally a suboptimal time for astronomical observation of the Moon because shadows vanish. It is a poor time for other observations because the bright sunlight reflected by the Moon, amplified by the opposition surge, then outshines many stars. On 12 December 2008, the full moon was closer to the Earth than it had been at any time in the previous 15 years. This was referred to in popular media as a supermoon. 
On 19 March 2011, there was another full "supermoon", closer to the Earth than at any time in the previous 18 years. On 14 November 2016, there was another full "supermoon"; this time it was closer to the Earth than at any time in the previous 68 years. This text is based on an article from the free encyclopedia Wikipedia and is licensed under the Creative Commons CC-BY-SA 3.0 Unported license. A list of the authors is available on Wikipedia.
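Using the mean synodic month of about 29.53 days quoted above, a rough full-moon predictor can be sketched. The reference epoch below (the 14 November 2016 supermoon, taken as 13:52 UTC) is an assumed value for illustration, and because real lunar months vary around the mean, this estimate can drift from true full-moon instants by many hours:

```python
import math
from datetime import datetime, timedelta

SYNODIC_MONTH_DAYS = 29.530588  # mean synodic month (~29.53 days, as in the text)

# Assumed reference full-moon instant in UTC (14 Nov 2016 supermoon).
REFERENCE_FULL_MOON = datetime(2016, 11, 14, 13, 52)

def next_full_moon(after: datetime) -> datetime:
    """Approximate the first full moon after `after` by stepping whole
    mean synodic months from the reference epoch. Mean-value estimate
    only; not an ephemeris-grade calculation."""
    elapsed_days = (after - REFERENCE_FULL_MOON).total_seconds() / 86400.0
    cycles = math.ceil(elapsed_days / SYNODIC_MONTH_DAYS)
    return REFERENCE_FULL_MOON + timedelta(days=cycles * SYNODIC_MONTH_DAYS)

print(next_full_moon(datetime(2016, 11, 20)))  # roughly mid-December 2016
```

For almanac-quality times, a proper ephemeris is needed; this sketch only illustrates why the full moon recurs "roughly once a month" and why lunar-calendar months alternate between 29 and 30 days.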
physics
https://www.emf-health-cluster.eu/tag/latest-news/page/2/
2024-02-23T09:56:58
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474377.60/warc/CC-MAIN-20240223085439-20240223115439-00802.warc.gz
0.882596
1,728
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__138419346
en
BioEM2023 (Oxford, UK, https://www.bioem2023.org/) has been completed. SEAWave participated with several oral and poster presentations. The stage opened with the work “Traceable Absorbed Power Density Assessment System in the 28 GHz Band” (authors: Ninad Chitnis, Fariba Karimi, Arya Fallahi, Sven Kühn, and Niels Kuster) presented by Ninad Chitnis, from WP4 of SEAWave. Dr Myles Capstick from the IT’IS Foundation (CH) presented the design of the animal exposure setup, which will be installed at ENEA (Rome, IT) in the near future (“5G mm-wave mouse exposure system based on reverberation chambers”, Myles Capstick, Beyhan Kochali, Isaac Alonso Marin, Niels Kuster). A flash but very interesting presentation of the first results within WP10 (Risk Communication) of SEAWave was given by Sarah Link from IU Internationale Hochschule (Erfurt, DE). Sarah presented her poster entitled “How much am I exposed? Exploring public perceptions of EMF exposure from mobile Communication technology and 5G” (authors: Sarah Link, Marie Eggeling, Ferdinand Abacioglu, Christoph Böhmert). This poster earned Sarah Link the 1st Student Poster Award. Well done, Sarah! During the poster session Dr Serge Bories from CEA (Grenoble, FR) presented the “Performance Evaluation of DEVIN at a Low Sampling Frequency” (authors: Taghrid Mazloum, Serge Bories, David Dassonville). A joint publication from IT’IS, SPEAG, ZMT and ETH was presented by Dr Myles Capstick, titled “A 5G FR2 Human Skin Exposure System Implementing Subject Self Control Paradigms” (authors: Dr Myles Capstick, Dr Cosimo Fortunato, Dr Erdem Ofli, Mr Beyhan Kochali, Mr Isaac Alonso Marin, Prof Niels Kuster), which detailed the development of a 5G millimeter wave exposure system at 27.5 GHz for human provocation experiments. 
Another SEAWave contribution to the BioEM2023 conference came from the leader of WP1, Prof Joe Wiart (Telecom Paris – IP Paris, FR), who presented the results of the first interlaboratory measurement campaign that took place in Thessaloniki (GR) in December 2022 in the poster “Comparison of Measurement Systems on Conducting RF-EMF Drive Test Campaign in Greek Urban and Suburban environments” (authors: S. Wang, W. Ben Chikha, Y. Zhang, J. Liu, M. Christopoulou, E. Karabetsos, A. Manassas, S. Iakovidis, C. Apostolidis, D. Babas, T. Samaras, J. Wiart). The final SEAWave contribution to BioEM2023 came from a joint team effort by AUTH and ENEA presented by Prof. Theodoros Samaras with the title “Electromagnetic macro-dosimetry of murine skin at millimeter waves”, (authors: Serafeim Iakovidis, Dr. Emiliano Fratini, Dr. Simona Leonardi, Dr. Caterina Merla, Dr. Rosanna Pinto, Dr. Simonetta Pazzaglia, Dr. Mariateresa Mancuso, Prof Theodoros Samaras). In a striking display of commitment to advancing the field of bioelectromagnetics, Project NextGEM made waves at BioEM2023, the world’s premier conference in this domain. Held from June 19th to June 23rd in the historic city of Oxford, UK, this year’s event witnessed Project NextGEM’s active participation and impactful contributions. European Commission Funding and EMF Research The conference began on a high note with Rita Araujo from the DG Research & Innovation of the European Commission taking the stage. She presented the EC-funded research on electromagnetic fields (EMF) and health, a critical area where NextGEM plays a pivotal role as part of the Research Cluster on EMF CLUE-H. This presentation set the stage for highlighting the importance and relevance of our project on the global stage. Leadership in Key Sessions Two key members of the NextGEM team, Mats-Olof Mattsson and Myrtill Simkó, assumed the role of session chairs, underlining their expertise in the “in-vivo” session. 
Their involvement showcased NextGEM’s leadership within the bioelectromagnetics community. Genotoxicity Research Insights Maria Rosaria Scarfi delivered a thought-provoking presentation during the conference, sharing a systematic review of the evidence regarding the genotoxicity of radiofrequency (RF) electromagnetic fields from in vitro studies on mammalian cells. Chaired by Olga Zeni, this presentation contributed significantly to ongoing discussions on the effects of RF electromagnetic fields on biological systems. Poster Presentations: A Visual Impact The NextGEM project team made a strong visual impact with the presentation of four posters. These visually engaging displays served as a platform to communicate their groundbreaking research findings and initiate stimulating discussions among fellow conference attendees. Shaping the WHO RF EMF Health Risk Assessment Maria Rosaria Scarfi also assumed the role of session chair for the “Reviewing for the WHO RF EMF Health Risk Assessment” session, with a focus on the requirements for studies to be informative. This underscores NextGEM’s integral role in informing health risk assessments related to RF EMF exposure, particularly in vitro studies. Numerical Dosimetry Expertise Nikolaos Petroulakis showcased his expertise by co-chairing a session on numerical dosimetry. This highlighted NextGEM’s dedication to understanding and modeling electromagnetic field exposure, a vital aspect of bioelectromagnetics research. Supporting Emerging Talent Olga Zeni’s participation in the closing ceremony and her involvement in student awards demonstrated NextGEM’s commitment to nurturing the next generation of bioelectromagnetics researchers. Announcing BioEM2024 in Chania, Crete, Greece As the coordinator of Project NextGEM and chair of the local organizing committee for the next BioEM conference, Nikolaos Petroulakis made an exciting announcement. 
BioEM2024 is scheduled to take place from June 16th to June 21st, 2024, in the picturesque city of Chania, Crete, Greece. This forthcoming event promises to be a significant gathering for the bioelectromagnetics community. In conclusion, BioEM2023 was a resounding success for Project NextGEM, with its substantial contributions leaving a lasting impact on the field of bioelectromagnetics. The team’s dedication to advancing research and fostering collaboration was palpable throughout the conference, setting the stage for even greater achievements in the future. How much are we exposed to radiofrequency electromagnetic fields? How is our electromagnetic environment changing with the introduction of new wireless technologies, in particular 5G? Is there any impact on our health and the environment? These questions will be answered over the next five years by the European Research Cluster on EMF and Health (CLUE-H), which was officially launched on 22nd September 2022, with a kick-off meeting in Thessaloniki, Greece. The CLUE-H network involves more than 70 European research organisations in four research consortia (ETAIN, GOLIAT, NextGEM, SEAWave), with additional contributions from scientists in the USA, Korea and Japan. The total funding will amount to more than €29 million from the Horizon Europe programme (2021-2027). The results are expected to fill the knowledge gaps that exist regarding the impact of wireless technologies on health and the environment. They will be essential in ensuring a safe deployment and use of future wireless networks, which will benefit citizens and society, for example in health, transport, e-government and smart cities.
physics
http://www.nsradiology.com/locations/north-valley-advanced-imaging/
2013-05-20T18:23:12
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699186520/warc/CC-MAIN-20130516101306-00026-ip-10-60-113-184.ec2.internal.warc.gz
0.854839
549
CC-MAIN-2013-20
webtext-fineweb__CC-MAIN-2013-20__0__92449833
en
North Valley Advanced Imaging Chico, CA 95926 (530) 345-6067 (billing) (530) 894-0174 (fax) Mon-Fri: 6:30am – 7:00pm Sat: From 8:00am by Appointment Located on the corner of Esplanade and E. 7th Avenue, our advanced imaging center is accredited by the American College of Radiology (ACR) and offers conventional high-resolution Magnetic Resonance Imaging (MRI), including MR-Angiography (MRA), Breast MRI and MR-guided breast biopsy, diagnostic and screening Computed Tomography (CT), and Positron Emission Tomography (PET). Magnetic Resonance Imaging (MRI) As the name implies, MRI images are created using magnetic fields, not radiation. The stronger the magnet, the better the image. It’s that simple. Conventional MRIs use magnets six times stronger than can be used in any “open” MRI system. Thus, physicians who want the best possible images prefer conventional MRI when at all feasible. We have two conventional MRI scanners. The newest, a GE LX Horizon Short Bore, provides the most spacious patient area possible without compromising magnet strength. Computed Tomography (CT) CT (or CAT) scanners create cross-section pictures of the body, allowing physicians to see organs, tissue and bones in much greater detail than with conventional x-rays. CTs are painless and have traditionally been used to diagnose disease or trauma in patients with specific symptoms. We currently have two helical (or spiral) multi-slice scanners, the newest being a 64-detector system which provides a faster scan, unprecedented image detail, and reduced radiation dose to the patient. Positron Emission Tomography (PET) Positron Emission Tomography (PET) is a powerful imaging technique that holds great promise in the diagnosis and treatment of many diseases, particularly cancer. A non-invasive test, PET scans accurately image the cellular function of the human body. A CT scan provides anatomical information such as location, size, and shape. 
Our scanner, a Philips Gemini GXL PET/CT, allows us to combine these two scanning technologies into a single PET/CT scan, which enables physicians to more accurately diagnose and identify cancer, heart disease, and brain disorders, often earlier than other studies. Click here for more information on PET imaging. Please fill out the form below to contact North Valley Advanced Imaging
physics
https://ittools.tech/pressure-converter
2023-03-31T07:11:14
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949573.84/warc/CC-MAIN-20230331051439-20230331081439-00131.warc.gz
0.853745
564
CC-MAIN-2023-14
webtext-fineweb__CC-MAIN-2023-14__0__91924579
en
Based on the chosen units, this converter's dynamic conversion scale will determine the equivalent pressure value from one pressure measurement unit to another and produce a conversion table. - In the upper input box, enter the value you wish to convert. - From the upper pull-down menu, choose the correct units. - From the lower pull-down menu, choose the units to convert to. - In the lower text box, the conversion will be visible. You may also use this converter to copy and paste values. To convert a value, click or touch one of the text boxes or choose a unit. Use the pressure unit converting tool to convert pressure units, look up a conversion in the tables, recognise a pressure unit and its relationship to other pressure units, or find the number of pascals needed for a conversion. Use of Pressure Converter Calculators Pressure is a scalar property that describes how a force acts on a surface: P = F/A where F is the normal force, A is the area, and P is the pressure. Many pressure units directly connect force to area, since pressure is derived from force and area. Some are explicit, such as pounds per square inch. Even the SI standard pascal is essentially an expression of one newton per square metre. How to Convert Pressure Units? A conversion factor is used to perform conversions. Converting between units can be as easy as multiplication if you know the conversion factor: S * C = E where S is the initial value, C is the conversion factor, and E is the final value after conversion. To convert any unit into pascals, such as 5 bar, multiply by the conversion value in the right column of the table below: 5 bar * 100000 [(Pa)/(bar)] = 500000 Pa To convert from Pa into the units in the left column, divide by the value in the right column, or multiply by the reciprocal, 1/x: 
500000 Pa / 100000 [(Pa)/(bar)] = 5 bar To convert from any unit A in the left column to any other unit B, multiply by the factor for A to convert the value to pascals, then divide by the factor for B. Alternatively, you can calculate the required single factor by dividing the A factor by the B factor. For instance, to convert from bar to pounds per square inch you would multiply by 100000 and divide by 6894.757, or equivalently multiply by 100000/6894.757 = 14.503774. So you multiply by 14.503774 to convert directly from bar to pounds per square inch.
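The multiply-through-pascals approach described above can be sketched in a few lines of Python. The factor table uses the pascal values named in the text for bar and psi, plus atm as an extra illustrative entry:

```python
# Pascals per unit: the "right column" conversion factors described above.
PASCALS_PER_UNIT = {
    "Pa":  1.0,
    "bar": 100000.0,
    "psi": 6894.757,
    "atm": 101325.0,  # illustrative extra entry, not from the text's example
}

def convert_pressure(value: float, from_unit: str, to_unit: str) -> float:
    """Convert by going through pascals: multiply by the factor for the
    source unit, then divide by the factor for the target unit."""
    return value * PASCALS_PER_UNIT[from_unit] / PASCALS_PER_UNIT[to_unit]

print(convert_pressure(5, "bar", "Pa"))   # 500000.0
print(convert_pressure(1, "bar", "psi"))  # ~14.503774
```

Pivoting through a single base unit keeps the table linear in the number of units: you store one factor per unit rather than one factor for every unit pair.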
physics
https://holowriting.com/2014/10/25/physics-of-the-impossible-book-review/
2024-04-14T10:25:00
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816879.25/warc/CC-MAIN-20240414095752-20240414125752-00603.warc.gz
0.925365
652
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__11861553
en
IN SHORT: A wonderfully optimistic glimpse of the future, viewed through the prism of advanced modern sciences and the creations of many beloved sci-fi franchises. WHAT IT IS: A delightful thought experiment. Take a sci-fi trope and ask the question “Is there anything in our current understanding of physics that says we can’t do this?” If the answer is no, then explore how we could theoretically do that. Chapter 1 covers building force fields. WHAT IT IS NOT: Don’t expect complex equations, mathematical proofs, graphs of hard data, or engineering diagrams, but also don’t let that dissuade you. Michio Kaku is a thoroughly talented communicator of Big Ideas. Throughout the book, he skillfully keeps the discussion understandable (most of the time, anyway), even when talking about string theory or positrons traveling backwards through time. WHAT I THOUGHT: This book is awesome! Seriously, I’m struggling to remember the last time I enjoyed a book this much. Here’s a snippet of a recent conversation I had while reading Physics of the Impossible. Jacob Holo: I think I just figured out a way to destroy a planet with a flashlight. H.P. Holo: How? Jacob Holo: First I need a sum of negative matter the size of Jupiter. H.P. Holo: I don’t have that in my purse. Yes, I thoroughly enjoyed every last chapter, even when it started to get really out there with time travel craziness. Michio Kaku takes a lot of familiar ideas from science fiction, such as parallel universes and cloaking devices, then guides the reader on a journey of what may be in the centuries and millennia to come. Along the way, he draws upon a wide sampling of science fiction tales both popular and obscure, old and new. Together with well written stories about famous scientists and their theories, he weaves a memorable base from which to explore Big Questions. Is that impossible? Could we someday do that? Take teleportation, for example. 
Most of us are familiar with beaming from the Star Trek franchise, but did you know that physicists are already tackling real teleportation experiments? Granted, we’re talking about an atom of cesium here or a clump of rubidium there. But still. Teleportation. It could happen, probably not like we see it in Star Trek, but it could happen because nothing in our current understanding of physics says we can’t. Plus any book that references Xenosaga and Invader Zim in the same breath gets bonus points from me. From metamaterials for invisibility cloaks to negative matter throats for stable wormholes, Michio Kaku tackles a wide range of topics. Seriously people, Chapter 1 explains how to build a force field. A force field! Chapter 12 covers time travel. For anyone who loves both science fiction and the science of what may be, this book comes with my strongest possible recommendations. Go ahead. Dive in. I loved it, and I think you will too. VERDICT: Strongly recommended.
physics
https://bagsupplies.ca/wholesale-fibc-bulk-bags-canada/
2022-10-01T01:57:10
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335514.65/warc/CC-MAIN-20221001003954-20221001033954-00456.warc.gz
0.877485
237
CC-MAIN-2022-40
webtext-fineweb__CC-MAIN-2022-40__0__233712036
en
Type A – Normal Bag For regular use, suitable for the majority of applications. All bags are UV stabilized, and manufactured to independently verifiable standards. Type B – Normal Bag (Static Resistant) As above, with some anti-static properties. Reduces static build-up resulting from product flow. NOT SUITABLE FOR USE IN HAZARDOUS ENVIRONMENTS. If any risk of explosion exists, one of the following MUST be used. Type C – Grounded Conductive Bag Conducts all static electricity immediately and safely to earth. This bag requires a manual earth connection. Type D – Anti-Static Bag Dissipates static electricity safely into the atmosphere. This bag does not require any earth connection. Maximizes usable volume Internal baffles retain the inherent cuboid shape of the bag, reducing wasted space between bags caused by "rounding" by up to 30%. For containment of fine powders. Coated fabrics and special stitching techniques combine to produce bags that can completely contain powders down to 10 microns.
physics
http://www.imagwiki.nibib.nih.gov/?title=Main_Page
2015-08-04T21:55:18
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042992201.62/warc/CC-MAIN-20150728002312-00009-ip-10-236-191-2.ec2.internal.warc.gz
0.832021
188
CC-MAIN-2015-32
webtext-fineweb__CC-MAIN-2015-32__0__107679144
en
The IMAG 2015 Multiscale Modeling Consortium Meeting will occur on Tuesday and Wednesday September 8-9, 2015 at the NIH (Lister Hill). Please also save September 10, 2015 for potential satellite meetings as well. The registration site is now open. Please register and submit your abstracts at: 2015 IMAG Multiscale Modeling (MSM) Consortium Meeting Registration. All updates on the 2015 MSM meeting are posted on the IMAG wiki, http://www.imagwiki.nibib.nih.gov/imag-events/2015-msm-consortium-meeting. IMAG encourages all MSM Working Groups to contribute to the planning! Theme Survey Ends: Friday July 24, 2015 (link address in Registration page) Poster Abstract Submission: Friday July 24, 2015 (information in Registration page) Hotel Registration: August 10, 2015 Meeting Registration: August 28, 2015
physics
https://dailynaturefacts.com/why-earth-is-round/
2024-02-21T00:35:53
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473360.9/warc/CC-MAIN-20240221002544-20240221032544-00791.warc.gz
0.909819
1,348
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__101562222
en
Why Earth is Round and the Facts That Prove It The shape of the Earth has been debated throughout history, with some believing it to be flat and others arguing it is spherical. Overwhelming scientific evidence has definitively shown that the Earth is in fact round or spherical in shape. Here we will explore the history of this debate, the evidence that confirms the Earth’s roundness, and explain why some people still refuse to accept this despite the clear facts. A Brief History of the Flat Earth Hypothesis The idea that the Earth is flat rather than a sphere dates back thousands of years. Ancient cultures including the Egyptians, Indians, Chinese and Islamic world often depicted the Earth as a flat disc floating on water. This was the predominant cosmological model in many ancient civilizations. In ancient Greece, philosophers such as Pythagoras and Aristotle provided some of the earliest evidence that the Earth is round rather than flat. They observed ships disappearing over the horizon and saw Earth’s shadow on the moon during lunar eclipses, among other clues. But the flat Earth view persisted for centuries. When Columbus set sail in 1492, his crew was nervous about potentially sailing off the edge of the world. It wasn’t until the early 16th century that the spherical model was solidly established as the standard view, thanks to increasing evidence compiled by explorers and scientists. Still, some continued advocating the flat Earth view over the following centuries. In the 19th century, Samuel Rowbotham promoted a flat Earth ideology and argued against the accepted science. Modern flat Earth societies formed in the mid-20th century and continue arguing for a flat geography using pseudo-scientific claims, despite overwhelming evidence to the contrary. Physical Evidence The Earth is Spherical A variety of scientific disciplines provide proof that the Earth is round. 
Here are some of the key physical and observational pieces of evidence:

- Horizon: When standing on the seashore, ships can be observed disappearing bottom-first over the horizon due to the curvature of the Earth. If the Earth were flat, the entire ship would remain visible as it shrank into the distance. Similar effects can be seen from atop a high mountain or building.
- Lunar eclipses: During lunar eclipses, the Earth's circular shadow can be seen on the moon's surface. This shadow would not always be round if the Earth were a flat disc.
- Circumnavigation: Many people have successfully circumnavigated the globe, following a continuous path around it. This would be impossible on a flat Earth.
- Seeing farther from higher: When standing in a high place such as a tower, objects that were previously obscured become visible, demonstrating the curvature of the Earth. The higher the vantage point, the farther one can see.
- Time zones: The existence of different time zones requires the Earth to be a rotating sphere. As the sun illuminates different parts of the globe, regions continuously experience different times of day.
- Seasonal changes: As the Earth orbits the sun while tilted on its axis, the northern and southern hemispheres experience opposite seasons. On a flat Earth this would not occur.
- Gravity: The direction of Earth's gravitational pull always points toward the planet's center of mass, exactly as expected for a spherical planet rather than a flat disc.
- Photos from space: Photographs taken from Earth orbit, the moon, and other planets all show Earth to be unambiguously spheroidal in shape. No flat surfaces can be discerned.

Scientific Methods Confirming a Round Earth

In addition to physical evidence, modern science confirms Earth's spherical nature through numerous methods:

- Geodesy: This science precisely measures the Earth's size and shape by surveying points on the surface. Measurements definitively show the globe is an oblate spheroid, bulging slightly at the equator.
- Gyroscopes: High-precision gyrocompasses rely on the spin of the Earth to operate properly, and gyroscopes register measurable forces consistent with a spinning, curved planet.
- Pendulum motion: The swing plane of a long pendulum (a Foucault pendulum) slowly rotates relative to the ground over the course of a day, an effect only possible on a rotating, curved Earth.
- Shadows from sticks: Measuring the shadows cast by vertical sticks shows they have different lengths at different locations at the same time. This discrepancy proves the Earth's surface is curved.
- Transit of Venus: Precisely timing the 2012 transit of Venus across the sun from widely separated locations yielded the distance to Venus, a parallax measurement that only works with observers spread across a curved Earth.
- Visual confirmation from space: The "blue marble" photo and abundant other satellite images show a clearly spherical Earth surrounded by the blackness of space.

The wide range of scientific disciplines providing evidence makes the conclusion unassailable: all measurable data confirm the Earth is ellipsoidal in shape. No credible scientific observation points to a flat Earth model.

Explanations for the Persistence of Flat Earth Beliefs

Despite definitive scientific proof, flat Earth theories continue to be promoted by some. Psychological and social factors help explain the persistence of these fringe beliefs:

- Distrust of authority: Some prefer "alternative" explanations as a way to reject ideas endorsed by scientific authorities and governments.
- Appeal to intuition: A flat planet matches early intuitive perceptions and is simpler to picture than spherical geometry.
- Social cohesion: Joining a fringe community provides a sense of belonging and purpose missing elsewhere.
- Confirmation bias: People interpret evidence to confirm pre-existing beliefs rather than changing their views to match the evidence.
- Misunderstanding science: A lack of scientific literacy can lead to mistakenly believing that conspiracies disprove accepted science.
- Attention-seeking: Promoting controversial views is an easy way to gain notoriety and followers.

While these and other biases help flat Earth notions persist, the core issue is a lack of skepticism and scientific understanding. Testing claims against objective reality ultimately leads to indisputable proof that the Earth is round.

The Earth being a sphere rather than a flat plane or disc has been definitively established for centuries through overwhelming physical evidence and scientific measurement. A variety of geographic observations and methods from multiple fields confirm the world is round. While fringe flat Earth beliefs persist, their claims do not withstand rigorous scrutiny. The durability of the flat Earth hypothesis despite the evidence ultimately points to the need for improved scientific education, skepticism, and understanding of bias. But the facts are clear: all objective indicators show the Earth is assuredly not flat.
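Two of the observations above, the shadow-stick measurement and ships vanishing over the horizon, can be checked with a few lines of arithmetic. This is an illustrative sketch; the 7.2-degree angle and 800 km Syene-Alexandria distance are the classic textbook values for Eratosthenes' experiment, not figures taken from the article.

```python
import math

def circumference_from_shadows(angle_a_deg, angle_b_deg, distance_km):
    """Eratosthenes' method: the difference in shadow angles at two sites
    on the same meridian is the fraction of a full circle spanned by the
    known north-south distance between them."""
    delta = abs(angle_a_deg - angle_b_deg)
    return 360.0 / delta * distance_km

def horizon_distance_km(eye_height_m, radius_km=6371.0):
    """Approximate distance to the horizon for an observer at a given
    height, from the geometry of a sphere: d = sqrt(2 * R * h)."""
    return math.sqrt(2.0 * radius_km * eye_height_m / 1000.0)

# Classic values: sun overhead at Syene (0 deg), a 7.2 deg shadow at
# Alexandria, roughly 800 km to the north.
print(round(circumference_from_shadows(0.0, 7.2, 800.0)))  # 40000 (km)

# From a 100 m cliff the horizon is about 36 km away; from eye level
# (1.7 m) only about 4.7 km, which is why ships vanish hull-first.
print(round(horizon_distance_km(100.0), 1))
print(round(horizon_distance_km(1.7), 1))
```

The 40,000 km result lands remarkably close to the modern equatorial circumference of about 40,075 km.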
physics
https://www.kuprioninc.com/use-cases/integrated-circuit-packaging
2019-10-19T09:15:23
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986692723.54/warc/CC-MAIN-20191019090937-20191019114437-00346.warc.gz
0.824454
308
CC-MAIN-2019-43
webtext-fineweb__CC-MAIN-2019-43__0__71433908
en
Devices are getting smaller and yet more powerful, making thermal dissipation more important than ever. This trend calls for new packaging technologies and interconnect materials. Kuprion's nanocopper pastes and inks are key enablers of this trend, providing a thermal conductivity of 240 - 280 W/m⋅K. For example, one use case of Kuprion's nanocopper pastes is in power devices. Read more about the use of Kuprion's nanocopper paste for large-area power devices in a paper published in the Journal of Electronic Materials, co-authored by Dr. Alfred Zinn, Kuprion's CTO, with IBM.

Examples of Use Case

Case Study: Copper Pillars

The copper pillar is emerging as an excellent replacement for the traditional solder bump. Among other benefits, copper pillar technology allows a higher I/O density than traditional solder bumps. Kuprion's CuantumFuse™ shows excellent compatibility with flip chip technology. CuantumFuse™ paste has the appropriate viscosity and can be applied to copper pillars using the stamp transfer method. In our test, a layer of CuantumFuse™ paste (as thin as 10 microns) is applied onto a flat glass surface. We then stamped the component into this layer, transferring the paste precisely onto just the tips of the pillars, as pictured.

Find out more today.
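As a rough illustration of why the quoted conductivity matters, Fourier's law gives the temperature drop across a thin bond line. The sketch below uses the 10-micron layer thickness mentioned above; the 100 W die power, 1 cm² pad area, and 260 W/m⋅K value are assumed example numbers, not figures from the paper.

```python
def bondline_delta_t(power_w, area_m2, thickness_m, k_w_per_mk):
    """Fourier's law for a thin planar layer: the temperature drop is the
    heat flux (W/m^2) times the layer's thermal resistance per unit area
    (thickness / conductivity, in m^2*K/W)."""
    heat_flux = power_w / area_m2
    thermal_resistance = thickness_m / k_w_per_mk
    return heat_flux * thermal_resistance

# A 100 W die on a 1 cm^2 pad through a 10-micron layer at k = 260 W/m*K:
dt = bondline_delta_t(100.0, 1e-4, 10e-6, 260.0)
print(f"{dt:.3f} K")  # ~0.038 K across the joint
```

Even at a very high heat flux of 100 W/cm², the interconnect itself contributes only hundredths of a kelvin of temperature rise, which is the practical benefit of a highly conductive joint material.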
physics
http://teddington.se/en/products/flexible-pipe-equipment/stalkompensatorer/
2020-07-05T16:33:12
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655887377.70/warc/CC-MAIN-20200705152852-20200705182852-00114.warc.gz
0.914936
258
CC-MAIN-2020-29
webtext-fineweb__CC-MAIN-2020-29__0__88470669
en
Metal Expansion Joints

Metal Expansion Joints (Metal Bellows / Metal Compensators) handle movements in pipe systems that result from, for example, thermal expansion, vibration or pressure shocks. Metal Expansion Joints exist in many different designs depending on their function, but at the core there is always a parallel-folded, stainless steel bellows body which, through its construction, can move axially, laterally and/or angularly. In many cases it is desirable to limit movement to, for example, only one direction. However, steel compensators should never be subjected to torsion (torsional stresses). Teddington has more than 50 years of technical experience in helping clients with know-how around Metal Expansion Joints. Applications are found in all types of fluid systems, from district heating to the process industry to exhaust gas systems. For a better overview we have arranged the different types into the following subgroups:

- Axial Compensators: Handle axial movements and should operate only along the longitudinal axis.
- Link Compensators: Handle lateral and/or angular movements.
- Pressure-balanced Compensators: Handle axial and/or lateral movements and absorb reaction forces from internal pressure.
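The axial movement a compensator must absorb follows directly from linear thermal expansion, ΔL = α·L·ΔT. A minimal sketch, where the pipe length, temperature swing, and expansion coefficient are example values rather than figures from this page:

```python
def thermal_expansion_mm(length_m, delta_t_k, alpha_per_k=16e-6):
    """Linear thermal expansion of a pipe run, returned in millimetres.
    The default alpha is a typical value for austenitic stainless steel
    (~16e-6 per kelvin)."""
    return length_m * delta_t_k * alpha_per_k * 1000.0

# A 100 m district-heating run heated through 100 K grows about 160 mm,
# which must be taken up by axial compensators distributed along the line.
print(round(thermal_expansion_mm(100.0, 100.0)))  # 160
```

This is why even modest temperature swings in long pipe runs demand compensators: the growth scales linearly with both length and temperature difference.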
physics
http://www.dutchmanseb.com/rigorous-test-and-development-regime-for-kia-stinger/
2022-12-05T08:45:27
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711013.11/warc/CC-MAIN-20221205064509-20221205094509-00153.warc.gz
0.933556
3,155
CC-MAIN-2022-49
webtext-fineweb__CC-MAIN-2022-49__0__152005893
en
- Each prototype subjected to 480-lap, 10,000 km Nürburgring validation program
- Designers and engineers collaborate to refine aerodynamic efficiency
- Testing and tuning work sees development of high-performance braking system, powerful engines and eight-speed transmission
- Stinger goes on sale globally during Q4 2017

Video link: https://youtu.be/P2-UCy8BVdM

(NÜRBURGRING) June 26, 2017 – The KIA Stinger is entering the final stages of its rigorous test and development regime before going on sale later this year. The test program – carried out worldwide on road and on track – will ensure the car has the performance, reliability and dynamic sophistication to match its striking fastback design.

Aerodynamics: realizing the Stinger's gran turismo design

Unveiled at the 2017 North American International Auto Show, the Stinger made a bold first impression. "The new KIA GT is a true gran turismo, a car for spirited long-distance driving," explains Gregory Guillaume, KIA Motors Europe's Chief Designer. "It's not about outright power, hard-edged dynamics and brutal styling all at the expense of luxury, comfort and grace. The new GT has nothing to do with being the first to arrive at the destination – this car is all about the journey. It's about passion."

In realising the Stinger's production design – a fastback shape embodying grace, flair and dynamism – KIA's designers were also guided by the company's aerodynamics experts. In general, fastback bodies can pose more challenges during aerodynamic optimisation compared to conventional designs. Therefore, close and intense collaboration is required between designers and aerodynamicists to realise the desired shape. KIA's Frankfurt R&D centre used computational fluid dynamics (CFD) software to quickly test and validate different ideas to enhance the car's aerodynamic profile, while retaining the fastback silhouette.
After four weeks of collaboration between designers and engineers, subtle design changes were introduced to improve air flow over the car. The bodywork was tapered slightly towards the rear of the car and new 'gills' were introduced behind the front wheel arches, each serving to reduce wake turbulence as air passed over the car's flanks. A partially-flat underfloor cover, flowing into the rear diffuser, was deployed to reduce drag under the car, while the rear spoiler was remodeled with a slight 'ducktail' shape, reducing lift and increasing high-speed stability. At the front, larger horizontal cooling ducts were introduced to optimise brake cooling, and air inlets were shaped to reduce front-end lift. Finally, KIA's aerodynamicists found that, by reducing the height of the rear of the roof, they could enhance the fastback's 'aerofoil' shape and improve the Stinger's aerodynamic efficiency at the same time. Design finalised, the Stinger was ready for its first on-road tests. The Stinger's dynamics presented KIA engineers with a new challenge. Because the car has no predecessor, KIA's chassis engineers were given a blank canvas for its suspension and steering characteristics. Their brief: to create a true gran turismo, with driving dynamics to match the car's fastback design. The shape of the car inspired efforts to imbue the Stinger with agile handling and high levels of body control that reward the more enthusiastic driver. Meanwhile, the ride needed to deliver a balance of everyday compliance and high-speed cruising comfort. To meet this brief, KIA's engineers developed two different types of suspension. Every Stinger is suspended by MacPherson struts at the front, and fully-independent multi-link suspension at the rear. However, the 'clean sheet' approach to the car's development has allowed engineers to create both a traditional passive setup, and a new adaptive system – Dynamic Stability Damping Control (DSDC).
DSDC adapts the stroke length of the shock absorbers on the move, and is controlled by acceleration, braking and steering sensors. Drivers can change the characteristics of the Stinger’s shock absorbers. Using the Stinger’s Drive Mode Selector system, drivers have a choice of two damping force levels: ‘Normal’ and ‘Sport’. In ‘Normal’ mode, low levels of damping force enable maximum cruising comfort. While the suspension continues to firm up slightly under heavy cornering in ‘Normal’, the effect is less pronounced than in ‘Sport’ mode. The driver’s choice, ‘Sport’ provides more powerful damping force under all conditions, shortening the stroke of the shock absorbers to provide greater body control and handling agility during more spirited driving. DSDC will be fitted as standard to 3.3-litre V6 Stinger models. The passive suspension – fitted as standard to 2.0-litre turbo petrol and all 2.2-litre turbodiesel models – was designed to the same brief as the DSDC system. A single-mode passive setup, the standard suspension has been verified alongside DSDC at the Nürburgring Nordschleife and on the road, offering confidence at a cruise and on winding roads. Based on KIA’s most refined multi-link suspension concept, the Stinger’s suspension has been redesigned with stiffer springs and stabiliser bars for more immediate handling responses. Much of the suspension tuning has focused on creating a uniform character across the Stinger range – regardless of engine weight and the car’s rear- or all-wheel drive drivetrain configuration. While this means that all-wheel drive models offer a similar dynamic character to rear-wheel drive models, the all-wheel drive cars offer increased damping force and revised shock absorber settings for the rear axle, better planting the rear wheels to the road and enabling the car’s rear-drive character to shine through. 
The Stinger’s rack-mounted motor-driven power steering system (R-MDPS) provided chassis engineers with greater flexibility for tuning. Fitted as standard to every Stinger, R-MDPS lets drivers choose between two steering modes with the Drive Mode Selector – ‘Normal’ and ‘Sport’. These modes change the level of steering effort required, as well as the system’s variable steering ratios. In ‘Sport’ mode, the Stinger requires increased on-centre steering effort, with shorter gearing providing more immediate response by reducing the need for larger steering inputs. ‘Normal’ mode reduces steering effort from on-centre, for more measured steering responses at a cruise. ‘Normal’ mode also requires more effort as the steering wheel turns, with a linear build-up of resistance giving drivers greater confidence at the wheel. The result is a steering system that enables the same duality as the suspension – one that’s as relaxing and confidence-inspiring to use in a straight line, as it is immediate and engaging on more enjoyable roads. Right-hand-drive versions of the Stinger have also undergone a further level of dynamic testing in the UK, to optimize steering and suspension components for certain markets – such as the UK and Australia – before the car’s release. Producing 370 ps (approximately 272 kW), the KIA Stinger’s 3.3-litre twin-turbo V6 enables the car to accelerate from 0 to 100 km/h in just 4.9 seconds, making it the fastest-accelerating production KIA ever. Its high-performance brakes needed to be equal to the task. Targeting the highest braking performance of any KIA to date, engineers subjected the Stinger to a variety of high-speed braking tests. A rigorous range of braking challenges was devised, taking brake testers to the famous Großglockner High Alpine Road in the Austrian Alps for constant downhill brake testing. Private test facilities in Northern Germany and Eastern Spain, as well as the Nürburgring, were also used. 
Not only did the Stinger’s brakes have to offer objectively strong and consistent braking power, KIA’s R&D teams wanted to maintain a reassuring, responsive feel to the pedal, even after repeated heavy braking, for maximum driver confidence. More development work has been carried out on the Stinger’s brakes than any previous KIA car. High-powered 3.3-litre Stinger models feature a new braking system developed in collaboration with Brembo. The 18-inch Brembo disc brakes are designed specifically to meet the demands of the engine’s higher power output. Holed and grooved, the brakes offer high heat capacity and reduced fade levels under heavy use. They’re paired with the most powerful calipers ever found on a KIA. Very early in the Stinger’s development, engineers had considered carbon ceramic brakes to maximise the grand tourer’s braking power. However, as a KIA, the Stinger needed to remain affordable, both to buy and maintain, for customers around the world. Brembo’s high-carbon 18-inch steel brakes proved more than up to the job of bringing the Stinger to a halt, however. KIA’s internal tests are designed to validate brakes at temperatures of up to 700°C (1,292°F). Engineers went even further for the Stinger’s Brembo brake system, with temperatures rising above 800°C (over 1,472°F). Even at these temperatures, the Stinger’s brakes continue to offer consistent braking power and pedal feel. Like every KIA, the Stinger is undergoing a full, rigorous testing regime to ensure it is as reliable for owners as it is entertaining for drivers. While the Nürburgring Nordschleife has played a key role in establishing the Stinger’s dynamic character, KIA’s testing facility at the ‘Green Hell’ also sees every car tested for quality and reliability. Each development car is being put through a minimum of 10,000 km – 480 laps – of high-stress driving around the Nordschleife. 
Widely regarded as the ultimate proving ground, the circuit has 73 corners, a 300-metre difference in height between the highest and lowest points of the circuit, and gradients of up to 17%. The constant combination of hard acceleration, rapid deceleration, heavy cornering, and changing surfaces and camber offers an unrivalled test of dynamic prowess. The distance covered during the Stinger's development is equivalent to over 160,000 km of on-road testing. Every Stinger prototype tested at the Nürburgring Nordschleife is treated to the same punishing regime, testing the suspension, body and powertrain to the full. KIA's testing procedures are designed to identify powertrain wear and fluid leaks in particular, as well as gearbox heat management characteristics. Temperatures of the car's brakes, exhaust and gearbox are constantly monitored, to make sure they consistently offer optimal performance. The brakes, for instance, have to be changed halfway through a typical daily session, such is the harsh nature of the tests to which development prototypes are subjected. The 3.3-litre Stinger is currently in its final testing phase at the Nürburgring Nordschleife, but much of the development work for the 2.0- and 2.2-litre powertrains – rear- and all-wheel drive – has been completed. One diesel prototype completed 20,000 km of testing around the Nordschleife. The engine had already completed the full 10,000 km testing distance, but further development work to the chassis meant engineers needed to test a series of new components. New parts fitted, the same powertrain – gearbox and engine – remained, going on to complete a second 10,000 km testing run. Fully run-in, the engine worked as well as it had at the start of the test. The Stinger's eight-speed automatic transmission, available with each of the three engines, was a key focus for powertrain testing.
Nordschleife testing identified a need to more efficiently manage heat in the transmission – early tests revealed the oil temperature was rising higher than preferred by the engineers. To counteract this, KIA engineers fitted the transmission with an oil cooler with a larger surface area to enable more efficient cooling. Beyond the Nürburgring, testing for the Stinger was carried out globally, with over 1.1 million km of durability testing carried out around the world – equivalent to approximately 27 trips around the Earth's equator. The car's development took place across Europe, the Middle East, Asia and North and South America, for extreme climate testing and quality verification for all components used in the Stinger. Tested for a global audience, the Stinger was subjected to extreme cold and heat and high altitude, and faced up to the unique demands of the desert, congested city centres, mountain passes and permafrost regions. For a consummate gran turismo like the KIA Stinger, on-road refinement is particularly important. Equally important, however, is the desire of customers to enjoy the sound of the engine at work. The Stinger is the first KIA to be fitted with a new active sound system, enhancing the engine note of the car via the car's sound system rather than via an actuator which 'channels' noise into the cabin. Engineered in Europe, the sound is designed to be consistent with the layout of the engine, providing drivers with additional aural inputs from the powertrain to enhance the driving experience. The 3.3-litre engine authentically enhances the distinctive V6 engine note in the cabin, while the 2.0-litre petrol lets drivers enjoy the sportier character of the four-cylinder engine under acceleration. The active sound system also refines the sound of the 2.2-litre turbodiesel engine, masking certain elements of the engine's sound and enhancing others for a more refined engine note.
Sound engineers have paired the system with the Stinger's Drive Mode Selector, enabling drivers to change the level of engine noise in the cabin. The sound becomes slightly louder and more aggressive in tone as drivers switch from the system's 'Eco' mode, through 'Normal', 'Sport' and 'Sport+' in Europe, and 'Custom' in the US and other countries.

Production and on-sale

The KIA Stinger enters production in the second half of 2017, and goes on sale globally during the fourth quarter of the year. Pricing and final specifications are market-dependent.
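The headline figures quoted in the release can be cross-checked with standard conversions. This is a back-of-the-envelope sketch: the constant 0.73549875 kW per metric PS is the standard definition, and the mean-acceleration calculation deliberately ignores the real torque curve and gear changes.

```python
def ps_to_kw(ps):
    """Metric horsepower (PS) to kilowatts: 1 PS = 0.73549875 kW."""
    return ps * 0.73549875

def avg_accel_g(v_kmh, seconds):
    """Mean acceleration over a standing sprint to v_kmh, in g."""
    return (v_kmh / 3.6) / seconds / 9.81

# 370 PS converts to about 272 kW, matching the quoted figure, and
# 0-100 km/h in 4.9 s works out to roughly 0.58 g of average acceleration.
print(round(ps_to_kw(370)))
print(round(avg_accel_g(100, 4.9), 2))
```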
physics
https://sunstreamcorp.com/battery-test/
2022-05-22T08:39:05
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545090.44/warc/CC-MAIN-20220522063657-20220522093657-00103.warc.gz
0.895925
200
CC-MAIN-2022-21
webtext-fineweb__CC-MAIN-2022-21__0__200198812
en
12V batteries generally last about 3 years in this application. Proper maintenance, such as booster charging and maintaining the fluid level, can extend battery life. Batteries lose over 1% of their charge per month if not charged. Battery fluid should be checked every 3 months and topped up to within 1 inch of the top of the battery. A solar charger cannot recover a full charge if the battery is drained beyond 50%. If the battery seems low, perform the following tests:
- Check the specifications of the battery: the lift requires at least 500 CCA with a reserve capacity of 160 amp-hours.
- Capacity test the battery using (a) a voltmeter and hydrometer or (b) a capacity tester.
- For a charged battery the voltmeter should read over 12.5 V and the hydrometer should read over 75%.
- For a charged battery the capacity tester should read "OK" and over 12.5 V.
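The pass/fail thresholds above can be folded into a small checklist function. This is a sketch of the guide's numbers only, not Sunstream code; the function name and structure are invented for illustration.

```python
def battery_check(voltage, hydrometer_pct=None, cca=None, reserve_ah=None):
    """Apply the thresholds from the maintenance guide: over 12.5 V and
    over 75% on the hydrometer for a healthy charge, and at least
    500 CCA / 160 Ah reserve capacity for this lift application.
    Returns a list of issues; an empty list means the battery passes."""
    issues = []
    if voltage <= 12.5:
        issues.append("voltage at or below 12.5 V - recharge or replace")
    if hydrometer_pct is not None and hydrometer_pct <= 75:
        issues.append("hydrometer at or below 75% - not fully charged")
    if cca is not None and cca < 500:
        issues.append("under 500 CCA - battery too small for the lift")
    if reserve_ah is not None and reserve_ah < 160:
        issues.append("reserve capacity under 160 Ah")
    return issues

print(battery_check(12.7, hydrometer_pct=80, cca=550, reserve_ah=180))  # []
```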
physics
https://aquabarrier.com/blog/uncategorized/determine-accurate-water-depth-for-your-worksite-with-a-sonar-fish-finder/
2022-08-15T16:38:55
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572192.79/warc/CC-MAIN-20220815145459-20220815175459-00460.warc.gz
0.888213
535
CC-MAIN-2022-33
webtext-fineweb__CC-MAIN-2022-33__0__184410190
en
When dewatering your worksite it is important to know the depth of the water you will displace, because you need to match the height of your Aqua-Barrier® temporary cofferdam to the water depth. If your cofferdam is too short, it will not be adequate to dewater your worksite. Judging water depth can be challenging, though. Use this guide to learn how a sonar fish finder can make measuring the water depth of your next worksite easier.

A Better Way to Find an Accurate Water Depth for Your Worksite

Fishermen use sonar fish finders to locate fish under the water and find ideal fishing spots. They also use them to avoid hazards such as shallow water, logs, and rocks. Designed to display everything beneath the surface of the water, sonar fish finders are ideal for accurately measuring the water depth of your worksites.

How a Sonar Fish Finder Works

A sonar fish finder is composed of two parts: a transducer and a display. The transducer sends sound waves into the water, which bounce off objects and return to the transducer. The transducer interprets the data, estimating the size and depth of the objects, and sends this information to the display. The display converts this information into a graphic representation of everything below the surface of the water.

Using a Sonar Fish Finder at Your Worksites

It is extremely important to know the depth of your worksite when determining how tall your Aqua-Barrier® cofferdam needs to be. The Aqua-Barrier® is placed around your worksite and the water inside the barrier is used to inflate it. Any excess water is then displaced to a holding area. If the Aqua-Barrier® is too short, it will not effectively keep the water out of your worksite. Use a sonar fish finder to easily measure the depth of your worksite and determine the height of the cofferdam you will need. With this height, the length, and other information, use our Aqua-Barrier® Cost Calculator to determine the cost of the cofferdam you will need. There is no easier way to determine the depth of the water.

Make Measuring Water Depth Much Easier

It can be a challenge to determine the accurate water depth for your worksite, yet it is crucial when dewatering with an Aqua-Barrier® cofferdam. Contact us for more information on how to use a sonar fish finder to measure the water depth for your next Aqua-Barrier® installation.
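The transducer's depth estimate comes from a simple time-of-flight relation: sound travels to the bottom and back, so depth = (speed of sound in water × echo time) / 2. A minimal sketch; 1,480 m/s is a typical fresh-water value, and the actual speed varies with temperature and salinity.

```python
def depth_from_echo(echo_time_s, sound_speed_mps=1480.0):
    """Convert a sonar round-trip echo time into water depth in metres.
    The factor of two accounts for the sound travelling down and back."""
    return sound_speed_mps * echo_time_s / 2.0

# A 20 ms round trip corresponds to roughly 14.8 m of water.
print(round(depth_from_echo(0.020), 1))  # 14.8
```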
physics
https://inovinox.com/ultrasonic-cleaning-system/
2021-03-06T01:15:57
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178374217.78/warc/CC-MAIN-20210306004859-20210306034859-00295.warc.gz
0.867107
286
CC-MAIN-2021-10
webtext-fineweb__CC-MAIN-2021-10__0__21396275
en
Using extremely fine mesh, the ultrasonic system allows the separation of very light dust types that are typically very hard to screen. Some types of dust, due to their particle size and special physical characteristics, cannot be separated effectively with the mechanical movement of the vibrating screen alone, and thus require an ultrasonic screening procedure. The ultrasonic system solves these problems by applying an ultrasonic frequency directly to the sieving wire. The added ultrasonic vibration allows the mechanical movement of the separator to sieve even with extremely fine mesh and avoids clogging.

- Increased productivity: the combined use of the ultrasonic system together with the vibrating screen achieves the best sieving results.
- Better performance of the vibrating screen: finer screens can be used to improve the separation of the finest dust.
- Lower maintenance costs and longer mesh life: unlike mechanical mesh-cleaning systems, the ultrasonic system exerts no wearing action on the mesh. The mesh lasts significantly longer and downtime for mesh replacement is reduced.
- Ease of adjustment: the automatic frequency control chooses the most suitable frequency by itself, so there is no need to change the machine set-up when the products to be sieved change.

Fill out this form to contact us today!
T. +1 800 780 1017 - F. +1 305 300 0614
physics
http://robo-server.uacj.mx/en/welcome.html
2024-03-02T14:27:01
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475825.14/warc/CC-MAIN-20240302120344-20240302150344-00875.warc.gz
0.893995
378
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__27943206
en
The Robotics Laboratory conducts both fundamental and practical research in the realm of computational mobile robotics. Our research endeavors encompass various aspects, including the design of robotic structures, the modeling of physical systems, simulation techniques, and the applied control of a diverse range of robotic systems, from actuated and under-actuated mechanisms to bionic robots. A significant focus of our group lies in exploring mechanical design, conducting mathematical analyses, and developing numerical models for physical robotic systems. Our research spans four distinct modalities: rolling, walking, swimming, and flying robots. This multifaceted approach allows us to delve deeply into the intricacies of each modality, contributing to a comprehensive understanding and advancement of robotic technologies.

About Us

The Robotics Laboratory, led by the Mechatronics research group and established in August 2007 by Dr. Edgar A. Martínez-García, is dedicated to fostering theoretical analysis, applied mathematical methodologies, computational simulation in robotics, and the practical implementation of experimental projects. Our research approach is centered on cultivating individual student projects, employing this as a shared strategy to facilitate effective skill development. Within the laboratory, we equip students with the tools for engaging in multidisciplinary engineering activities, empowering them to hone both theoretical understanding and practical skills in the field of robotics. Our research is focused on the development of innovative models for autonomous robots, and our specific pursuits within this domain include:

- Simulating and mathematically modeling physical robotic systems.
- Applying dynamic control to mobile robotic functions, including navigation, planning, and sensing.
- Designing, analyzing and modeling under-actuated robotic mechanisms.
- Analyzing and modeling bionic robots, including biomechanical models and their motion.
- Researching deterministic sensor fusion models through physics-based integro-differential equations.
- Applying scientific computing and mathematical methods to the modeling of robots.
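As a flavour of the "simulating and mathematically modeling physical robotic systems" work listed above, here is a minimal kinematic model of a rolling robot. It is a generic unicycle/differential-drive sketch with Euler integration, not the laboratory's own code.

```python
import math

def step_unicycle(x, y, theta, v, omega, dt):
    """One Euler step of the unicycle kinematics, the standard model for
    differential-drive rolling robots:
    x' = v*cos(theta), y' = v*sin(theta), theta' = omega."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

# Drive straight at 1 m/s for 1 s, integrated in 100 small steps:
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = step_unicycle(*pose, v=1.0, omega=0.0, dt=0.01)
print(tuple(round(p, 3) for p in pose))  # (1.0, 0.0, 0.0)
```

Swapping in a nonzero angular velocity `omega` traces out arcs, and the same stepping pattern extends naturally to the under-actuated and bionic models mentioned above, just with richer state equations.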
physics
https://www.build-stuff.com/books/vacuum-forming-for-the-hobbyist/
2020-07-11T23:11:21
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657129257.81/warc/CC-MAIN-20200711224142-20200712014142-00347.warc.gz
0.919749
926
CC-MAIN-2020-29
webtext-fineweb__CC-MAIN-2020-29__0__146007852
en
Vacuum Forming for the Hobbyist
PDF eBook digital download, 12.95

… shapes. Build your own low cost equipment using hardware store items and your kitchen oven as a heat source. … to its simplest form, so it can be done right in the home. It's a mystery to me why vacuum forming is so largely ignored in the hobby and craft fields. It's a fast and easy way to mold high quality plastic parts. Best of all, it requires no special skills and very little equipment. … you how to get 5 times more forming power. Chapter 2 tells why heat lamps and heat guns won't work well, and shows how to use your kitchen oven and alternate heat sources. … uses heat to soften a plastic sheet, and then vacuum to suck it down tightly against a pattern or mold. The plastic quickly cools and retains … them in your kitchen oven. The mold or pattern can be made from wood, plaster, epoxy resin, aluminum, plastic and many other materials, or built up from a combination of materials. Many times, you can form over an … uses. Some examples are: signs, holiday decorations, soap and candy molds, containers and packaging. We are not trying to melt the plastic, just make it soft like a sheet of rubber. Your kitchen oven was designed to heat food at these temperatures, so it's a safe and convenient way to heat plastic as well. This chapter shows the differences between gas and electric ovens and how to use them. … sources are discussed, such as electric frying pans and griddles, toaster ovens, hot plates etc., with advice on using each one. … and conversion tables. Vacuum is commonly rated in "Inches of Mercury" (IN. HG.). Most commercial vacuum forming is done with 25-27 IN.HG., with a maximum of about 30 inches possible. … 30 IN.HG. Don't be fooled by the commercials that show them picking up bowling balls. It doesn't matter how many horsepower, or how loud it is, or how much it dims the lights. Even the best "Shop Vacs" don't pull very hard, they just flow a lot of air!
Learn how to increase that 50% by coupling two vacuum cleaners together. Seven other low-cost sources of higher vacuum are discussed, such as intake manifold vacuum (from your car), modified bicycle pumps, and air-powered and electric pumps. Learn how to modify a bicycle pump to pull 27 IN.HG. and use stored vacuum from tanks. Combine a vacuum cleaner with another higher vacuum source to create a "two stage" system. You won't find this information available anywhere else! the hardware store, and use this frame with inexpensive spring clips to hold a plastic sheet for heating. See ideas for simple vacuum boxes made from cake pans, and more sophisticated two-stage vacuum boxes. Learn how to combine a vacuum cleaner with a second higher vacuum source. This method uses the speed of a vacuum cleaner, but finishes off with a more powerful vacuum source. The valve is easy to make and works automatically. you are likely to come across. This chapter discusses the common types and gives you practical advice on choosing a plastic for your application. Properties such as impact resistance, forming characteristics, pre-drying, and cost are considered. Useful tips on where to buy plastic sheets and how to deal with plastic distributors. mainly on wood and plaster molds. Learn six "common sense" rules of moldmaking, such as avoiding undercuts, surface preparation and the use of release agents. Read important information on using hollow molds and forming over existing objects. An example shows how to cast a plaster mold to reproduce an existing model car body. advice on the actual forming process. Learn how to tell when the plastic is ready to form and learn about common problems and how to solve them. Photographs show sample parts made with different plastics. model car body over the plaster pattern created in the last chapter. Read about glues and paints for different plastics.
with more forming tips and solutions to common problems, as well as information on plans that are available for building your own machines with built-in ovens. Please visit the book index on this website for other books and plans on vacuum forming equipment.
physics
https://2023.oceanoise.com/conclusions/shipping
2024-04-13T13:45:53
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816734.69/warc/CC-MAIN-20240413114018-20240413144018-00806.warc.gz
0.926947
1,730
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__164728351
en
Report of the Round Table Session Audoly, C.1*, de Jong, C.2*, Baudin, E.3, Brooker, A.4, Coomber, F.5, Gervaise, C.6, MacGillivray, A.7, Salinas, R.8, Širović A.9, and Wittekind, D.10 1 DCNS, France 2 TNO, The Netherlands 3 Bureau Veritas 4 Institute of Sound and Vibration Research, University of Southampton, UK 5 CIMA research foundation, Italy 6 GIPSA-lab, France 7 JASCO Applied Sciences, Canada 8 TSI, Spain 9 Scripps Institution of Oceanography, UCSD, USA 10 DW-ShipConsult, Germany This report can be referenced as: Audoly, C., de Jong, C., Baudin, E., Brooker, A., Coomber, F., Gervaise, C., MacGillivray, A., Salinas, R., Širović A., and Wittekind, D. (2015). Report of the Shipping Session, oceanoise2017, Vilanova i la Geltrú, Barcelona, Spain, 10-15 May. (Editors Michel André & Peter Sigray). Retrieved from https://2023.oceanoise.com Sound radiated by ships provides a relevant contribution to the underwater sound in oceans and seas. This ‘shipping noise’ is part of the soundscape in which marine species live. It may mask other sounds relevant to these species, like the communication sounds from their conspecifics or sounds that help them to orientate and to avoid predators, but to what extent this masking occurs is largely unknown. Current research towards assessment of the risks that shipping noise poses to marine life aims at quantifying the sound levels to which marine species are exposed as well as establishing threshold levels above which the noise has a significant impact on marine species. In the ‘shipping’ session of OCEANOISE 2015 the focus was on the exposure assessment. How well do we know how much underwater sound is produced by ships and how it is distributed in the underwater environment? Researchers from various institutions presented the state-of-the art in measuring and modelling radiated noise from ships. 
Several presentations were from two current EU research projects on shipping noise modelling and measurement, 'AQUO' and 'SONIC', that will be finalized at the end of 2015. In a panel discussion the main conclusions of the session were summarized and discussed, revealing what remains to be done. Ship noise measurements The first part of the session was devoted to ship noise measurements. Various approaches are being used to quantify the radiated noise output of ships, encompassing dedicated trials on a cooperating ship as well as measurements of opportunity of ships passing acoustic sensors installed close to a shipping lane. Considering that it is unlikely that all vessels in the world would be subjected to dedicated radiated noise trials, these two approaches are complementary. However, comparison of the results of these approaches is generally difficult, due to the lack of standardization of measurement and analysis procedures. Though international ship noise measurement standards have been available in the military domain for many years, their translation to the civil domain is very recent. In 2009, the American organization ANSI issued a standard for ship noise measurements in deep water. Starting from this, the International Organization for Standardization (ISO) is now developing ship noise measurement standards, the first of which became available in 2011 as a temporary 'publicly available specification'. These are for dedicated ship trials in which the ship cooperates by carrying out stable runs at a fixed distance from the recording system. The result is presented as a 'radiated noise level', which quantifies the measured mean square sound pressure when the ship is around its closest point of approach to the measurement system, normalized with the square of the distance between the hydrophone and a fixed reference position on the ship.
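The distance normalization described above (mean square pressure scaled by distance squared) corresponds, in decibel terms, to adding 20·log10 of the distance. A minimal sketch of that convention, using a made-up measurement as input:

```python
import math

def radiated_noise_level(measured_spl_db: float, distance_m: float,
                         ref_distance_m: float = 1.0) -> float:
    """Normalize a measured sound pressure level (dB re 1 uPa) with the
    square of the ship-to-hydrophone distance, i.e. the spherical-spreading
    assumption behind the 'radiated noise level' convention."""
    return measured_spl_db + 20.0 * math.log10(distance_m / ref_distance_m)

# Hypothetical measurement: 120 dB re 1 uPa recorded at 150 m
print(radiated_noise_level(120.0, 150.0))  # about 163.5 dB re 1 uPa at 1 m
```

Note that, as the report stresses, this convention deliberately ignores the actual propagation loss; correcting for it would yield a source level rather than a radiated noise level.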
It does not take into account the actual sound propagation between ship and hydrophone during the measurements. Correction for the actual propagation loss would result in a ship 'source level' for the specified ship reference position, of which in particular the depth below the water surface has a large influence. This source level and source depth are the input parameters required for calculation of the distribution of sound in the environment, e.g. to generate sound maps. Procedures to determine source level from measurements have been proposed, and the development of a standard procedure (for dedicated ship trials) has been undertaken by ISO, but will take some time. A particular topic of interest during the session was the uncertainty in the reported radiated noise levels. Variations in the operational parameters (speed, engine settings) lead to variations in radiated noise, but even at fixed nominal settings the measurement results show a significant spread. This uncertainty can be partly understood from variations in measurement geometry and properties of the environment, but shows a substantial random component as well. Only proper control of the measurement parameters and statistical information from multiple runs and multiple hydrophones can reduce the uncertainty in the reported levels to within a few decibels. Ship noise modelling Ship noise modelling was discussed in the second part of the session. Various institutions propose production of underwater sound maps of oceans and seas in relation to shipping, on the basis of appropriate underwater sound propagation models and the estimated source levels of vessels sailing in the area. Regarding the environment, some databases provide the parameters that are relevant for the propagation (bathymetry, sound speed, sediment properties).
Regarding the description of the traffic, many sea areas are nowadays covered by vessel tracking systems (Automatic Identification System; AIS) that provide the identification, position, course and speed of ships that can be used as input for calculating shipping sound maps. Regarding the ship acoustic source levels, an ideal situation would be to have an extended database of radiated noise levels of all existing ships, taking into account their operational parameters (speed, load condition,…), which is not realistic. As a consequence, there is need for modelling the underwater radiated noise of vessels, according to their type, size, speed and other parameters, in order to feed the models that produce underwater sound maps of oceans and seas. There are, however, several problems that need to be solved. Not all vessels carry AIS equipment and not all parts of the seas are covered by the AIS receivers. The AIS messages contain a list of parameters that characterize the ship and its operation, but not all parameters relevant for radiated sound are included and the reliability is not always sufficient for all parameters. Besides, the radiated noise from a particular ship depends generally more on its specific architecture (type of propulsion, hull, and propeller) than on its size or type. The development of models that predict the acoustic source level of ships on the basis of the AIS parameters is still in its infancy. In particular the lack of data of standardized radiated noise measurements in combination with the relevant AIS parameters hinders the development of reliable ship noise models. Remaining open issues Although significant progress has been made in recent years in the understanding of radiated noise from individual ships and shipping noise in general, assessment of the risks for the environment and of the need for mitigation needs further development. 
Open issues are: – Standardized procedures for radiated noise measurements – Standardized procedures for determining ship source levels from measurements – Reliable AIS information, including the parameters relevant for radiated noise – Reliable and validated models for the radiated noise of ships on the basis of AIS parameters, and how to get inputs related to ship architecture or technology – Dose-response studies: What are relevant indicators to quantify the effects of ship noise on marine life? At what value of these indicators does ship noise cause a risk? - ANSI/ASA S12.64-2009/Part 1: Quantities and Procedures for Description and Measurement of Underwater Sound from Ships – Part 1: General Requirements. - PAS/17208-1: Acoustics — Quantities and procedures for description and measurement of underwater sound from ships — Part 1: General requirements for measurements in deep water, 2011.
physics
http://eia.met.gov.na/web/projects/2673
2024-02-27T23:43:20
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474688.78/warc/CC-MAIN-20240227220707-20240228010707-00398.warc.gz
0.916198
711
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__19304411
en
The amended ECC No. APP 3664 is required by the Proponent (MEL Oil and Gas Exploration (Namibia) (Pty) Ltd) with respect to the addition of 1174.32 km of new seismic survey lines to the already approved 330.84 km, bringing the total proposed survey lines for PEL 93 to 1505.16 km. During the seismic survey, the generated seismic wave travels into the earth, is reflected by the various rock layers of the subsurface formations, and returns to the surface, where it is recorded by receivers called geophones, which are like microphones. The proposed 2D seismic survey will use a Vibrator as the energy source and will utilise wireless receivers that allow for greater line offsets. The centred vibrating metal plates of the Vibroseis truck will generate acoustic / sound waves that penetrate deep into the ground below the survey line and bounce off the various subsurface rock layers. The wireless receivers, installed along the survey lines at 5 – 10 m station intervals, will measure the returning sound / acoustic wave. The resultant product, following complex processing, is a vertical sonic cross-section of the subsurface beneath the survey line showing the geological materials (de-risked geological sub-model). The interpreted 2D seismic survey data sets are used to find specific drilling locations within the AOI where oil or gas may be trapped in potential reservoirs in sufficient commercial quantities. Vibroseis trucks, especially the recently designed broadband units, have a great advantage in energy spectrum control, as this can be done with much ease. The force applied to the ground can be monitored and adjusted in real time, hence the effective use of Vibroseis in urban areas.
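As a rough illustration (my arithmetic, not a figure from the permit application), the stated 5 – 10 m station intervals imply the following receiver station counts over the full 1505.16 km programme:

```python
# Back-of-the-envelope check: number of wireless receiver stations needed
# for the full survey programme at the stated station intervals.
TOTAL_LINE_KM = 1505.16  # total proposed survey line length for PEL 93

def station_count(interval_m: float) -> int:
    """Stations along the full line length at a given spacing, in metres."""
    return round(TOTAL_LINE_KM * 1000 / interval_m)

print(station_count(10.0))  # 150516 stations at 10 m spacing
print(station_count(5.0))   # 301032 stations at 5 m spacing
```

In practice only a subset of stations is live at a time, but the totals give a sense of the logistics behind the five-stage plan below.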
With enhanced mechanical and hydraulic components and a shaker redesign, the latest Vibroseis units such as the Nomad 65, with similar specs to the Explorer 360, deliver superior performance through optimised broadband acquisition by bringing the sweep start frequency at full drive down from 7 to 5.4 Hz. Therefore, the time spent emitting the very low frequencies from 1 Hz can be significantly reduced, with a positive impact on crew production and cost. New technologies in Vibroseis units such as the Nomad 65 will facilitate the recording of an extra low frequency bandwidth that has proved very beneficial for vertical resolution and seismic inversion. The implementation of the proposed 2D ground survey programme can be divided into the following five (5) stages: 1. Planning and mobilisation (pre-survey preparation, field scouting and mapping of buffers and offsets along proposed survey lines). 2. Base camp and fly-camp site setups and operations. 3. Widening of tracks by pruning vegetation overgrowth and levelling tracks as may be applicable. 4. Actual survey operation (data acquisition). 5. Demobilisation and closure (survey completion). The proposed survey will be undertaken along already existing roads, farm fences and tracks and already disturbed areas. In consultation with, and with the approval of, the landowners / local community, very limited new cutlines may be created to either straighten a survey line or create new access as may be required. A typical survey track needs a space opening along the survey line (track) of about three metres (3 m) wide. Project status REVIEW IN PROGRESS Ministry of Environment and Tourism Department of Environmental Affairs (+264 -61) 284 2701 (T) (+264-61) 240 339 (F)
physics
http://metall-online.net/602/brix-refractometer-calculator-features/
2024-04-18T03:59:44
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817187.10/warc/CC-MAIN-20240418030928-20240418060928-00404.warc.gz
0.903062
4,815
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__135646610
en
Brix Refractometers: Measure Sugar Content Effectively Brix refractometers are essential tools for accurately measuring the sugar content of a variety of liquids. Whether you work in the food and beverage industry, in agriculture, or in home brewing, these devices provide precise readings that help ensure quality and consistency in your products. By understanding how to use a refractometer and interpret the results, you can manage sugar content and make informed decisions throughout the production process. This article will guide you through the basics of Brix meters, their importance in various industries, and the different types available on the market. Get ready to explore the world of Brix refractometers and unlock the full potential of sugar content measurement. - Brix refractometers are essential for accurately measuring sugar content. - They are used in various industries, including food and beverages, agriculture, and brewing. - Understanding the basics of Brix refractometers is important for accurate measurements. - Choosing the right Brix testing equipment ensures reliable results. - Interpreting Brix test results helps in making informed decisions during production. Understanding the Basics of a Brix Refractometer To use a Brix refractometer effectively, it's important to understand the fundamental concepts behind these devices. A Brix meter, also known as a refractometer, is a handheld instrument that measures the refractive index of a liquid. The refractive index provides valuable information about the concentration, or sugar content, of the liquid. By shining light through a sample and measuring how it bends, the refractometer calculates the Brix value, which represents the sugar content as a percentage.
The science behind refractometry is based on the principle that light bends when it travels through different mediums, and this bending is directly related to the density and concentration of the substances in the liquid. A clear understanding of these concepts will let you make accurate sugar content measurements with a Brix refractometer. Choosing the Right Brix Testing Equipment Selecting the right Brix testing equipment is essential to ensure accurate and reliable measurements. When selecting a refractometer, consider the following factors: - Application: Identify the specific industry or purpose for which you need the refractometer, such as food and beverages, agriculture, or laboratory testing. - Range of Measurement: Determine the range of sugar content you will be measuring and ensure that the refractometer you choose can measure accurately within that range. - Intended Use: Decide whether you need a refractometer for lab testing or field use. Portable and handheld refractometers are suitable for on-the-go testing, while digital refractometers offer enhanced accuracy and automatic temperature compensation. - Construction: Look for a refractometer with durable construction that can withstand frequent use. A lightweight yet sturdy design will make it easier to handle. - Clear Scale: Ensure that the scale on the refractometer is easy to read and gives clear measurements. This helps prevent errors in interpretation. - Features and Functionalities: Consider additional features that may be advantageous for your specific needs, such as automatic temperature compensation, data logging capabilities, and connectivity options. By carefully considering these factors, you can choose Brix testing equipment that aligns with your requirements, improving measurement accuracy and efficiency and ultimately leading to improved product quality and decision-making.
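The light-bending principle described above can be sketched numerically: in a typical refractometer design, the light/dark boundary on the scale sits at the critical angle for total internal reflection between the prism and the sample, which shifts as the sample's refractive index (and hence sugar content) rises. The prism index below is an assumed typical value, not a spec from this article:

```python
import math

# Assumed refractive index of the measuring prism (typical dense glass value)
N_PRISM = 1.72

def critical_angle_deg(n_sample: float) -> float:
    """Critical angle (degrees) for total internal reflection at the
    prism/sample interface; the light/dark boundary sits at this angle."""
    return math.degrees(math.asin(n_sample / N_PRISM))

# Pure water (n ~ 1.333) vs a sugary sample (n ~ 1.364):
print(critical_angle_deg(1.333))  # roughly 50.8 degrees
print(critical_angle_deg(1.364))  # roughly 52.5 degrees
```

The instrument's Brix scale is just this angle shift relabelled in sugar-percentage terms, which is why a higher-index (denser) prism material extends the measurable range.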
The Importance of Brix Measurement in Various Industries Brix measurement is a critical factor in the success and quality of products across several industries. By accurately measuring the sugar content of liquids, Brix measurement ensures consistency, optimal ripeness, and precise formulation. Sectors such as food and beverages, agriculture, winemaking, brewing, and pharmaceuticals rely heavily on Brix measurement in their operations. - In the food industry, Brix measurement plays a key role in maintaining consistent product quality and sweetness levels. It allows producers of fruit juices, soft drinks, jams, syrups, and similar items to ensure their products meet desired taste profiles. - In agriculture, Brix measurement helps farmers determine the best harvesting time for fruits and vegetables. By gauging the sugar content, they can ensure optimal ripeness and flavor, resulting in better produce for consumers. - Winemaking and brewing depend heavily on Brix measurement to determine the sugar content of grapes and malt. This data is crucial for achieving the desired fermentation process, resulting in wines, beers, and spirits with the intended flavors and alcohol content. - In the pharmaceutical industry, Brix measurement is critical for formulating syrups and oral suspensions accurately. By precisely measuring the sugar content, pharmaceutical companies can ensure safe and effective medication administration. The value of Brix measurement in these industries underscores the need for accurate, reliable devices that can measure sugar content effectively. By using high-quality Brix refractometers and other sugar content measuring devices, businesses can maintain quality control, achieve consistency, and make informed decisions throughout their production processes.
How to Use a Brix Refractometer To obtain accurate sugar content measurements with a Brix refractometer, it's important to follow the proper steps. Here's how to use a Brix refractometer effectively: Preparing Your Sample - Make sure your sample is well-mixed to get representative readings. - Remove any debris or air bubbles from the sample, as they can interfere with the measurements. - Use a clean dropper or pipette to transfer a few drops of the sample onto the prism of the refractometer. - Close the cover of the refractometer to spread the liquid evenly over the prism. Conducting the Refractometer Test Once your sample is ready, follow these steps to conduct the refractometer test: - Look through the eyepiece of the refractometer and make sure the scale is clear and visible. - Adjust the focus of the refractometer until the scale is sharp and easy to read. - Locate the boundary between light and dark on the scale. - Read the Brix value at this boundary, which corresponds to the sugar content of your sample. - Note the Brix value for further analysis or comparison. It's important to account for temperature when using a refractometer, as it can affect the accuracy of the readings. Make sure to compensate for temperature, or use a refractometer with automatic temperature compensation, to get precise measurements. By following these steps, you can confidently use a Brix refractometer to measure sugar content and make informed decisions in your industry. Interpreting Your Brix Test Results Once you have obtained your Brix test result, you must interpret the reading and understand its implications. The interpretation of Brix test results depends on the specific application and industry requirements.
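As a sketch of the kind of Brix calculator the article mentions, here is a widely used brewing/winemaking approximation relating degrees Brix to specific gravity and sugar concentration. The formula is a common published fit, not something taken from this article, and it is only valid for sucrose solutions up to roughly 40 Brix:

```python
def brix_to_sg(brix: float) -> float:
    """Approximate specific gravity of a sucrose solution from degrees Brix
    (a widely used brewing/winemaking fit, roughly valid for 0-40 Brix)."""
    return 1.0 + brix / (258.6 - (brix / 258.2) * 227.1)

def sugar_g_per_l(brix: float) -> float:
    """Grams of sucrose per litre: Brix is g sugar per 100 g of solution,
    so multiply by the solution density (g/mL) and by 10."""
    return brix * brix_to_sg(brix) * 10.0

sg = brix_to_sg(12.0)
print(f"12 Brix: SG about {sg:.3f}, about {sugar_g_per_l(12.0):.0f} g sugar/L")
```

This is the sort of conversion a "Brix refractometer calculator" or reference chart encodes; for fermented samples (where alcohol is present) a plain sucrose fit like this no longer applies.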
For example, in winemaking, higher Brix values indicate riper grapes and potentially higher alcohol content, while in fruit juice production, lower Brix values might be desired for a more balanced sweetness. To facilitate interpretation, use a Brix refractometer calculator or reference charts that correlate Brix values to sugar content percentages. Additionally, understanding the Brix scales and their specific applications in different industries will help you interpret your Brix test results effectively. Key Features of Quality Refractometer Instruments When selecting a refractometer instrument, there are key features to consider that can greatly enhance its functionality and usability. These features ensure accurate and reliable measurements, making your testing process more efficient and effective. Whether you're in the food and beverage industry, agriculture, or another field that requires sugar content measurement, understanding the key features of refractometer instruments will help you choose the most suitable device for your specific needs. 1. Handheld Refractometers: These devices offer convenience and portability for on-the-go testing. They are compact and lightweight, making them ideal for field applications. With a handheld refractometer, you can easily measure sugar content wherever you are, ensuring quality control even in remote locations. 2. Digital Refractometers: Digital refractometers provide enhanced accuracy and precision. They have digital displays that give instant readings, eliminating the need for manual interpretation. With automatic temperature compensation, these devices correct for temperature variations, ensuring accurate measurements regardless of environmental conditions. 3. Portable Refractometers: Portable refractometers are specifically designed for field use.
They are constructed from rugged materials, ensuring durability and resistance to environmental conditions. These devices can withstand harsh weather, allowing reliable measurements in challenging outdoor environments. 4. Prism Material: The prism material of the refractometer is another important aspect to consider. Common prism materials include glass and sapphire. Glass prisms are suitable for most applications and offer accurate measurements. Sapphire prisms, on the other hand, are more durable and resistant to scratches, making them suitable for demanding testing environments. 5. Type of Light Source: The type of light source used in a refractometer can affect the clarity and accuracy of the readings. LED light sources are commonly used because they provide consistent, bright illumination, ensuring accurate measurements. Other light sources, such as tungsten bulbs, may produce varying levels of illumination and affect the reliability of the results. 6. Durability and Reliability: It's important to choose a refractometer instrument that is built to last. Look for devices that offer durability and reliability, considering factors such as the quality of the materials used, the reputation of the manufacturer, and customer reviews. Choosing a high-quality refractometer will ensure accurate measurements and long-term usability. By understanding and considering these key features, you can select a refractometer instrument that best suits your requirements. Whether it's a handheld refractometer for on-the-go testing, a digital refractometer for enhanced accuracy, or a portable refractometer for field use, selecting the right device will let you measure sugar content accurately and make informed decisions in your industry.
Types of Refractometers: Handheld, Digital, and Portable Devices Refractometers come in different types to suit various testing scenarios and user preferences. Understanding these types will help you choose the most suitable device for your specific testing needs. Handheld Refractometers for On-the-Go Testing Handheld refractometers are compact, lightweight, and easy to carry, making them ideal for on-the-go testing. These devices are typically analog and rely on manual adjustments for focus and readings. They are perfect for situations where portability is key, letting you measure sugar content conveniently wherever you are. Digital Refractometers for Enhanced Accuracy Digital refractometers offer enhanced accuracy and precision in sugar content measurement. They feature digital displays that provide instant readings and automatic temperature compensation, ensuring accurate measurements even in fluctuating temperature conditions. With additional features such as data logging and connectivity options for data transfer, digital refractometers offer advanced functionality for more efficient testing. Portable Refractometers for Field Use Portable refractometers are specifically designed for field use, with rugged construction and resistance to environmental elements. These devices are often water-resistant and can withstand harsh conditions, making them ideal for outdoor testing. Portable refractometers are reliable tools that deliver accurate measurements in challenging environments, helping you maintain quality control in field applications. Maintenance and Calibration of Brix Meters Regular maintenance and calibration of Brix meters are crucial to ensure accurate and reliable measurements. To maintain your refractometer, clean the prism and other parts after each use to remove any residue or contaminants.
Follow the manufacturer's instructions for cleaning, as some refractometers may require specific cleaning solutions or methods. Cleaning Refractometer Parts When cleaning your refractometer, begin by gently wiping the prism surface with a soft, lint-free cloth or cotton swab. Take care not to scratch or damage the prism. For stubborn residue, use a mild detergent or a cleaning solution recommended by the manufacturer. Avoid harsh chemicals or abrasive materials that could harm the refractometer. After cleaning, rinse the prism thoroughly with clean water and dry it with a soft cloth to prevent water spots or streaks. Calibrating for Precise Measurements Calibration is essential to maintaining measurement precision. It involves comparing the refractometer readings to known standards or solutions and adjusting the device accordingly. Some refractometers have built-in calibration features, while others may require manual adjustment using calibration fluids. Follow the manufacturer's instructions to ensure proper calibration procedures. To calibrate your refractometer, begin by preparing a calibration solution with a known Brix value. Apply a few drops of the solution onto the prism and close the cover. Look through the eyepiece and adjust the calibration dial or screw until the refractometer reads the correct Brix value of the calibration solution. Repeat the calibration process periodically to ensure accurate measurements. Keep in mind that calibration requirements may vary depending on the refractometer model and the manufacturer's recommendations. By regularly cleaning and calibrating your refractometer, you can maintain its accuracy and reliability, ensuring precise measurements in your sugar content analysis. Always refer to the manufacturer's instructions and guidelines for proper maintenance and calibration procedures for your specific refractometer model.
Brix refractometers are highly specialized instruments designed specifically for measuring the sugar content of liquids. They are commonly used in industries such as food and beverages, agriculture, brewing, and more. These instruments provide quick, accurate measurements of sugar content, allowing users to make informed decisions and maintain consistent product quality. Different models and configurations of Brix refractometers are available, each tailored to specific applications and user requirements. These instruments incorporate various features and functionalities to enhance measurement accuracy and ease of use. By using a Brix refractometer, you can simplify the process of sugar content measurement in your industry. These instruments provide reliable results, letting you make adjustments and ensure the desired sweetness levels or ripeness in your products. With their quick and accurate measurements, Brix refractometers are essential tools for maintaining product quality and making informed decisions during production. Comparing Brix Refractometer Cost and Value When evaluating the purchase of a Brix refractometer, it's important to consider the cost and value of the different options on the market. While cost is a factor, it should not be the sole determinant of your decision. Instead, focus on the value that a refractometer brings to your testing processes. Consider factors such as accuracy, durability, ease of use, and additional features when assessing the value of a refractometer. A high-quality refractometer that offers reliable and accurate measurements may come at a slightly higher price, but it will provide long-term benefits and a better return on investment. Comparing the prices of different models is essential, but it's equally important to weigh these costs against the value they offer.
Choosing a cheaper refractometer that sacrifices accuracy or durability may end up costing you more in the long run through faulty measurements or the need for frequent replacements. By carefully evaluating the cost and value of different Brix refractometers, you can make an informed decision that aligns with your budget and testing needs. It's worth investing in a high-quality refractometer that offers accurate and reliable measurements, ensuring the efficiency and effectiveness of your testing processes. Advancements in Sugar Content Measuring Devices Advancements in technology have led to the development of more advanced and sophisticated sugar content measuring devices, particularly refractometer instruments. These advancements have changed the way sugar content is measured, providing users with more accurate and efficient solutions. Digital refractometers are one such advancement, offering enhanced accuracy in measuring sugar content. These devices use digital displays and automatic temperature compensation, ensuring precise readings even in varying temperature conditions. Another significant advancement is the use of improved materials for the prisms of refractometers. These materials provide better clarity and durability, allowing for greater measurement accuracy and long-term usability of the devices. Connectivity options have also been introduced in modern refractometers, allowing easy data transfer and analysis. Users can now connect their devices to computer systems or other devices to streamline data management and decision-making. These advancements in sugar content measuring devices have fostered greater productivity and quality control in a variety of industries. With the ability to achieve more precise and efficient measurements, businesses can make informed decisions and ensure consistent product quality.
Overall, advances in sugar content measuring devices such as refractometer instruments have transformed how sugar content is measured. With enhanced accuracy, automatic temperature compensation, improved materials, and connectivity options, these instruments have greatly benefited industries that rely on precise sugar content measurements. Brix refractometers are powerful tools for accurately measuring sugar content across industries including food and beverage and agriculture. They provide valuable insights and help maintain consistent quality in products. By understanding the basics of Brix refractometers, choosing the right equipment, and using and maintaining the instrument properly, you can obtain accurate measurements that support informed decision-making. Embracing these advances allows businesses to raise their processes to new levels of accuracy and efficiency. With a Brix refractometer as a trusted companion, users can confidently measure sugar content and unlock the full potential of their products. By using Brix refractometers, businesses can ensure that their goods meet the desired sugar content specifications, improving quality control and customer satisfaction. Whether monitoring sweetness levels in beverages or judging the ripeness of fruit, Brix refractometers provide the reliable, accurate measurements that many industries depend on. With their precise readings, Brix refractometers play a crucial role in maintaining product consistency, optimizing production processes, and supporting data-driven decisions. Buying a high-quality refractometer and following proper maintenance and calibration procedures are key to obtaining accurate, reliable sugar content measurements.
By harnessing the power of Brix refractometers, businesses can stay ahead in their respective industries and deliver products that meet the highest quality standards. What is a Brix meter? A Brix meter, also called a refractometer, is a handheld instrument that measures the refractive index of a liquid. It provides information about the concentration or sugar content of the liquid by shining light through a sample and measuring how the light bends. The refractometer calculates the Brix value, which expresses the sugar content as a percentage. What is the science behind refractometry? Refractometry is based on the principle that light bends when it passes between different media. The amount of bending is directly related to the density and concentration of the substances in the liquid. By measuring how light bends through a sample, a refractometer determines the refractive index and calculates the Brix value, which represents the sugar content of the liquid. How do I choose the right Brix testing equipment? When choosing Brix testing equipment, consider the application, the measurement range, and the intended use. It is important to choose a high-quality refractometer with durable construction and a clear, easy-to-read scale. Features such as automatic temperature compensation and data logging capabilities can further improve measurement accuracy and efficiency. Why is Brix measurement important in various industries? Brix measurement plays a crucial role in industries such as food and beverage, agriculture, winemaking, brewing, and pharmaceuticals. It helps ensure consistent product quality, gauges fruit and vegetable ripeness, determines sugar content for fermentation, and aids the formulation of syrups and oral suspensions. How do I use a Brix refractometer? To use a Brix refractometer, prepare your sample by making sure it is well mixed and free of debris and air bubbles.
Apply a few drops of the sample to the prism of the refractometer and close the cover so the liquid spreads evenly. Look through the eyepiece, adjust the focus until the scale is clear, and read the Brix value where the boundary between the light and dark fields crosses the scale. How do I interpret my Brix test results? Interpretation depends on the application and industry requirements. Higher Brix values may indicate riper grapes for winemaking, while lower Brix values may be preferred for balanced sweetness in juice production. A Brix refractometer calculator or reference charts can help correlate Brix values with sugar content percentages. What are the key features of quality refractometer instruments? Quality refractometer instruments offer durability, a clearly readable scale, automatic temperature compensation, data logging capabilities, and the ability to withstand demanding environmental conditions. Consider the prism material, the light source, and overall reliability when choosing a refractometer. What types of refractometers are available? Refractometers come in three main types: handheld, digital, and portable. Handheld refractometers are compact and ideal for on-the-go testing. Digital refractometers offer enhanced accuracy with digital displays. Portable refractometers are built for field use, with rugged construction and resistance to environmental conditions. How do I maintain and calibrate a Brix meter? Regular maintenance involves cleaning the refractometer parts after each use to remove residue or contaminants; follow the manufacturer's instructions for cleaning solutions and methods. Calibration ensures measurement precision and involves comparing the refractometer's readings against known standards or solutions. Some refractometers have built-in calibration features, while others require manual adjustment using calibration fluids.
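The correlation between refractive index and Brix mentioned above can be sketched in code. This is a rough illustration, not an instrument-grade conversion: the table values are approximate refractive indices of sucrose solutions at 20 °C drawn from commonly published reference data, and a real ICUMSA table (or the instrument itself) should be used for actual work.

```python
# Approximate refractive index (nD at 20 °C) -> degrees Brix via linear
# interpolation over published reference points for sucrose solutions.
# Table values are approximate illustrative data, not ICUMSA-grade.
REFERENCE = [
    (1.3330, 0.0),   # pure water
    (1.3478, 10.0),
    (1.3639, 20.0),
    (1.3812, 30.0),
    (1.3997, 40.0),
    (1.4200, 50.0),
]

def brix_from_nd(nd: float) -> float:
    """Interpolate degrees Brix from refractive index at 20 °C."""
    if not REFERENCE[0][0] <= nd <= REFERENCE[-1][0]:
        raise ValueError("refractive index outside table range")
    for (nd_lo, bx_lo), (nd_hi, bx_hi) in zip(REFERENCE, REFERENCE[1:]):
        if nd <= nd_hi:
            frac = (nd - nd_lo) / (nd_hi - nd_lo)
            return bx_lo + frac * (bx_hi - bx_lo)
    return REFERENCE[-1][1]

print(round(brix_from_nd(1.3330), 1))  # 0.0 (pure water reads 0 °Bx)
```

A digital refractometer does essentially this lookup internally, with the addition of automatic temperature compensation before the conversion.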
What should I consider when comparing Brix refractometer cost and value? When comparing cost and value, consider accuracy, durability, ease of use, and additional features. Compare prices while keeping long-term benefits and return on investment in mind. Investing in a high-quality refractometer that delivers reliable, accurate measurements is usually worth the extra cost. What advancements have been made in sugar content measuring devices? Advances in technology have produced more capable and sophisticated sugar content measuring devices. These include digital refractometers with enhanced accuracy, automatic temperature compensation, data logging capabilities, and connectivity options for data transfer and analysis. Why are Brix refractometers vital for sugar content measurement? Brix refractometers are specialized instruments designed specifically for measuring the sugar content of liquids. They are essential tools across industries, providing quick, accurate measurements that allow users to make informed decisions, maintain consistent quality, and improve productivity.
physics
https://delta-electronics.com.br/en/faq/como-usar-a-funcao-de-frenagem-cc-dc-brake-do-c2000-e-ms300/
2022-07-03T15:42:02
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104244535.68/warc/CC-MAIN-20220703134535-20220703164535-00400.warc.gz
0.81454
647
CC-MAIN-2022-27
webtext-fineweb__CC-MAIN-2022-27__0__6609448
en
The DC brake function injects a direct voltage through the inverter in order to lock the poles of the induction motor and thus brake the motor's movement. The brake function is generally used at deceleration, but it can also be used at acceleration in lifting applications. To configure the braking function, you need three pieces of information: 1 - Percentage of current to be injected 2 - Injection time 3 - Injection start frequency See the following parameter settings for DC braking. DC braking current level: this parameter defines the level of the DC braking current output (in percent) to the motor during starting and stopping; when you define the percentage of DC braking current, the rated motor current is considered 100%. Start with a low DC braking current level and slowly increase it until the proper braking torque is achieved. However, to avoid burning the motor, the DC braking current must NOT exceed the rated current. Therefore, DO NOT use the DC brake to replace a mechanical brake; otherwise, injury or an accident may result. DC brake time at start: during acceleration, if the inertia or the load weight is large, the motor may lack torque at the start of acceleration because of external forces or its own inertia. This parameter defines how long the DC current is injected at motor start (acceleration); the injected DC current generates torque to obtain a stable start before the motor begins to run. This parameter determines the duration of the DC braking current output to the motor, and setting it to 0.0 disables the DC brake at startup. Starting the inverter while the motor is still turning may damage the motor or trip the inverter protection due to excess current. DC brake time at stop: the motor can keep turning after the deceleration ramp ends, even when the drive has stopped its output, because of external forces or its own inertia. This parameter defines how long the DC current is injected at stop.
The injected DC current generates torque that forces the motor to stop during deceleration. This parameter defines how long the current is injected after the drive stops its output, ensuring the motor comes to a complete stop. Setting this parameter to 0.0 disables the DC brake at stop. Stopping the drive while the motor is still turning may damage the motor or trip the inverter protection due to excess current. DC brake start frequency: this parameter determines the output frequency at which the DC injection begins. When this setting is less than Pr.01-09 (start frequency), the DC brake starts at the minimum frequency instead. The DC brake sequence diagram is as follows. Use the DC brake before starting the motor when the load may already be moving at standstill, as with lifting applications, fans, and pumps: the motor may be in free-run status, with an unknown direction of rotation, before the drive starts, so perform DC braking before starting the motor. Use the DC brake at stop when you need to brake the motor quickly, when you need to control positioning, or when the deceleration force is not enough to overcome the motor's inertia, such as with cranes or cutting machines.
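As an illustration of the stop sequence described above, here is a small Python sketch of the stop-side DC brake logic. The parameter names (`dc_start_freq`, `dc_current_pct`, `dc_time_at_stop`) are hypothetical stand-ins, not Delta's actual parameter numbers; consult the C2000/MS300 manual for the real settings.

```python
def decelerate_with_dc_brake(freq_hz, decel_hz_per_step,
                             dc_start_freq, dc_current_pct, dc_time_at_stop):
    """Sketch of a VFD stop sequence: ramp the output frequency down,
    then inject DC current once the DC brake start frequency is reached.
    Returns a log of (phase, value) events. Parameter names here are
    hypothetical; real drives use numbered parameters (e.g. Pr.01-09)."""
    log = []
    while freq_hz > dc_start_freq:
        freq_hz = max(freq_hz - decel_hz_per_step, 0.0)
        log.append(("ramp", round(freq_hz, 1)))
    if dc_time_at_stop > 0.0:
        # Drive stops AC output and injects DC to hold the rotor still.
        log.append(("dc_inject", dc_current_pct))
        log.append(("dc_hold_seconds", dc_time_at_stop))
    # Setting the stop-side injection time to 0.0 skips the DC brake,
    # matching the "disables DC brake at stop" behaviour in the text.
    log.append(("stopped", 0.0))
    return log

events = decelerate_with_dc_brake(
    freq_hz=50.0, decel_hz_per_step=10.0,
    dc_start_freq=1.5, dc_current_pct=30.0, dc_time_at_stop=0.5)
```

The start-side sequence is the mirror image: inject DC for the configured time first, then begin ramping the output frequency up.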
physics
https://awards.concurrences.com/en/auteurs/alexander-middleton-ropesgray-com
2023-09-28T13:55:44
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510412.43/warc/CC-MAIN-20230928130936-20230928160936-00878.warc.gz
0.924592
168
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__276518971
en
Alexander (Alex) Middleton is a counsel at Ropes & Gray. Alex focuses primarily on patent litigation. His patent litigation experience includes drafting motions and briefs, assisting in the preparation of expert reports, taking and defending fact and expert depositions, and preparing witnesses for deposition. Alex has participated in patent litigation involving wireless communications security protocols, telecommunications, optical fiber networking, solid state memory design, and smartphone notification systems. His pro bono practice includes renewing trademarks for a historical society, advising a housing association on zoning enforcement mechanisms, and obtaining asylum for a Moldovan man. Prior to joining the firm, Alex worked on the development of simulation software for, and the construction of, the ATLAS detector at the CERN Large Hadron Collider. Prior to attending law school, Alex worked at a law firm in South Korea.
physics
http://udqac.czechian.net/3-earthquake-story-essay.php
2018-12-17T02:49:04
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828056.99/warc/CC-MAIN-20181217020710-20181217042710-00363.warc.gz
0.971571
4,026
CC-MAIN-2018-51
webtext-fineweb__CC-MAIN-2018-51__0__200890991
en
We have provided below essays on the earthquake under two categories: short essays and long essays. We are here to help students complete the tasks they are given in their classrooms, or in any essay-writing competition organized during national or international celebrations at schools and colleges. All the essays are written by professional content writers using simple, easy words and up-to-date information, especially for students of classes 1 through 12. Students can select any of the essays given below, under various word limits, according to their needs and requirements: Essay on Earthquake Short Essay on Earthquake Following are short essays on earthquake for students, under word limits of 100, 200, and 300 words. Students can use any of these according to their needs in school. Earthquake Essay 1 (100 words) An earthquake is the trembling or shaking movement of the surface of the earth. It is a sudden, violent shaking of the earth's surface that occurs naturally and causes great destruction because of strong movements within the earth's crust or volcanic action. It is a natural disturbance that can be characterized as a convulsion or concussion of the ground. It originates at a point within the crust and pushes a mass of rock to slip suddenly. A huge amount of energy is released and travels through the rocks as waves, which cause vibration and shaking of the earth's surface. The word earthquake reveals its meaning very clearly (earth means ground or soil, and quake means shake or tremble). Earthquake Essay 2 (200 words) An earthquake is a dangerous, life-threatening natural disaster that can strike anytime and anywhere on earth. Most earthquakes bring only minor tremors; larger earthquakes with strong tremors generally begin with slight shaking that soon changes into more violent shocks.
Stronger earthquakes generally end with huge, forced vibrations felt far from the main point of origin, gradually diminishing into reduced aftershocks. The focus of an earthquake is the subterranean point where it originates. The magnitude and intensity of an earthquake can be measured with the help of a variety of scales, such as the Richter scale, the moment magnitude scale, and the modified Mercalli scale. An earthquake is a life-threatening event responsible for huge damage to living and non-living things. Earlier, it was quite hard to gauge the intensity of an earthquake before its occurrence; nowadays, estimating the magnitude and intensity of an earthquake has become easier because of instrumental advances around the world. In ancient times, people believed that earthquakes occurred because mother earth was angry with them. It was Aristotle, the great Greek philosopher, who related the occurrence of earthquakes to physical factors. According to him, the compression of air escaping from within the earth shakes parts of the surface; this is called volcanic activity. Earthquake Essay 3 (300 words) An earthquake is a natural calamity that can occur anytime and anywhere on the earth's surface, causing great disturbance to living beings and useful natural resources. When we think about earthquakes, we realize that nothing is more destructive than this natural calamity. The earthquake has a long, devastating history from ancient times all over the world, and its monotonous regularity makes us all the more fearful. The earth's crust consists of several unfixed, solid rock faces that move slowly below the surface, over distances ranging from millimeters to kilometers. The rate of movement increases with the thickness of the plates. Such huge moving plates separate from other plates and move out of their boundaries. Earthquakes occur when such moving plates clash with each other or pull apart.
Sometimes volcanoes (located around the edges of the Pacific Ocean, in the region known as the Ring of Fire) erupt, releasing large amounts of lava, gas, and ash; this creates pressure and imbalance within the earth's surface and produces earthquake waves in the surrounding areas. Thus, volcanic activity within the earth's surface is one cause of earthquakes. Faults created by volcanic activity are filled by strong movements of the earth's surface, which cause tremors. Everyone should take care when an earthquake occurs by following some precautions: - People should stay calm and stay inside or outside, but away from windows, buildings, and power lines. - They should stand against a wall near the center of the building or in a doorway, or crawl under heavy furniture such as a desk or table. - Never use flammable things such as matches, candles, or any other flame, as broken gas lines can catch fire. - Never use elevators, as they may get stuck. - If someone is in a moving car, he or she must stop the car and stay inside until the earthquake stops. Long Essay on Earthquake Earthquake Essay 4 (400 words) An earthquake is a natural calamity that has the power to destroy human lives in a few seconds. It alone is responsible for huge damage to living and non-living things. Earlier, people were unaware of the causes of earthquakes and the extent of the damage they could do. They believed that earthquakes occurred whenever mother earth became angry with them. It was Aristotle, the great Greek philosopher, who made people aware that earthquakes occur because of physical factors. He said that parts of the land move whenever compressed air within the earth escapes, which he called volcanic activity. Earthquake waves cause movement in the surrounding areas because of air pressure and imbalance. Another cause of earthquakes is isostatic adjustment.
The earth's surface contains raised and depressed blocks that keep the surface in balance; the balance is disturbed when the blocks move, revolving on an axis. Raised blocks sink and cause imbalance on the earth's surface, which in turn causes earthquakes. Generally, earthquakes occur in volcanic regions and under the feet of mountains and hills; however, it is not certain that earthquakes will not occur elsewhere. An earthquake may occur at any time in any part of the world. Some earthquakes are weak, while others are very strong, with huge force that may shake the earth's surface far from the epicenter. Earthquakes of high intensity are truly dangerous and cause severe damage. According to scientific studies using the seismograph, earthquakes produce secondary and tertiary waves. Various scales can accurately measure the intensity of an earthquake, such as the Mercalli scale, the Richter scale, and the moment magnitude scale. The Himalayan zone, the Ganga and Brahmaputra valleys, and the Deccan Plateau are earthquake-prone areas in India. The Kutch (Gujarat, India) earthquake of 1819 was massive (calculated at 8 on the Richter scale) and affected a huge area (around 5,000 square kilometers depressed by 15 feet and 1,500 square kilometers raised by 50 feet). More than 10,000 people were killed in the earthquake that struck the Latur and Osmanabad districts of Maharashtra on 30 September 1993. An earthquake is the result of the release of elastic energy after forceful tectonic plate movements. The elastic energy is released in the form of seismic or shock waves, which travel long distances outward in all directions from the center point (the place of maximum destruction). High-rise buildings and the ancient structures of cities like Delhi can be badly affected by the seismic force of an earthquake.
Earthquake Essay 5 (600 words) An earthquake is a very dangerous natural disaster that occurs as a sudden shaking movement of rocks in the earth's crust. Earthquakes of low intensity are less dangerous, while earthquakes of high intensity can be extremely violent, especially in the areas where they occur. There is no fixed duration for an earthquake; it may occur anytime, anywhere, for any length of time. It may be brief but repeat many times a day. An earthquake is the result of a sudden release of energy within the earth's surface. This released energy under the earth's crust creates powerful seismic waves that travel through the earth's surface. The frequency of the waves and the type and size of an earthquake are measured with the help of seismology, which involves the use of a seismometer. Large earthquakes may destroy things on a great scale, bringing down huge buildings and causing injury and death. Various scales are used to measure the intensity of shaking and the magnitude of an earthquake. A magnitude of 3 or less indicates a less harmful earthquake, whereas a magnitude of 7 or more causes huge damage over a wide area. An earthquake that occurs under the ocean can take the form of a tsunami, a giant wave that brings death and destruction to living and non-living things. High-intensity earthquakes also give rise to landslides in the surrounding areas. In ancient China, people used a device to detect the occurrence of an earthquake. The device resembled a jar with dragons on top, surrounded by frogs with open mouths. It indicated an earthquake when a ball fitted in a dragon's mouth dropped into a frog's mouth, and the position of the frog receiving the ball indicated the direction of the earthquake. This device was an effective tool in ancient times for determining the origin of an earthquake.
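The gap between a magnitude-3 and a magnitude-7 earthquake mentioned above can be made concrete with the standard Gutenberg-Richter energy relation, log10(E) = 1.5·M + 4.8 (E in joules). This is a supplementary sketch, not part of the original essay:

```python
def seismic_energy_joules(magnitude: float) -> float:
    """Approximate radiated seismic energy from magnitude,
    using the Gutenberg-Richter relation log10(E) = 1.5*M + 4.8."""
    return 10 ** (1.5 * magnitude + 4.8)

# Each whole step in magnitude multiplies the energy by about 31.6
# (10**1.5), so a magnitude-7 quake releases roughly a million times
# the energy of a magnitude-3 quake.
ratio = seismic_energy_joules(7.0) / seismic_energy_joules(3.0)
print(f"{ratio:.0f}")  # 1000000
```

This is why the scales in the essay are logarithmic: the numbers stay small even though the underlying energies differ enormously.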
Causes of Earthquake - One of the main causes of earthquakes is plate tectonics, which produces tectonic movements in the earth's surface. Tectonic plates beneath the surface collide with each other and ride over one another, which is the reason for mountain formation, earthquakes, and volcanoes. This process releases a huge amount of energy, which creates a force and thus surface movement. - Geological faults are also a cause of earthquakes. There are various forms of fault, but the three main types are the normal fault, the reverse fault (also called a thrust fault), and the strike-slip fault. Normal faults generally occur in areas where the crust is extended, reverse faults in areas where the crust is shortened, and strike-slip faults in areas where the two sides of the fault slip horizontally past each other. - Most earthquakes form part of a sequence of earthquake clusters, which can recur in a regular pattern and are related to each other in location and time. Such earthquakes cause little damage; however, a larger earthquake (the mainshock) may be preceded by a foreshock (an earthquake of smaller magnitude) and cause much damage. A series of earthquakes can occur as an earthquake storm, in which earthquakes hit a fault in clusters. A tsunami is a dangerous consequence of an earthquake: a chain of fast-moving ocean waves generated by a powerful undersea quake. It is a very serious challenge to people's lives and safety, as well as to earthquake engineering, since it can engulf coastal areas, sweep away whole cities, destroy homes, and much more. It is sad to say that tsunamis cannot be prevented; however, people can be warned through various warning systems so that they can flee and save their lives. It had been an average day in the office: conference calls, report writing, fighting off the mosquitoes that plague us here in Haiti. My clock showed just 10 minutes until it was time to leave for the day when, without any warning, the ground made slight movements, which rapidly became violent.
The earth shook harder than I have ever felt before; I ran to the door but could not get out. I hid under my desk, my hand pressed up against the surface protecting my head, hoping it would hold up to the pressure of 2 stories falling on it. If I were buried under a ton of debris, would I ever get rescued? Was this the end for me? As quickly as the earthquake started, the violent tremor stopped, and everything became still again. Covered in dust, I scrambled shaking over the rubble by the office and made it out to the safety of the street outside. People were coming out stunned, some crying, some injured, some silent. A count of heads to check everyone was present showed one member of the team was missing, stuck under the rubble. Companions brought him out and carried him unconscious on a piece of the gate on their shoulders to the nearest hospital, where he later died. Several of the hospitals had already collapsed. Homes, schools, offices -- the buildings we spend our lives in become our greatest danger. Cars were left abandoned in the street; roads were impassable, covered by collapsed walls, buildings, telegraph poles and crushed vehicles. We walked the long way home not saying much, amongst people praying, crying, hysterical. It was surreal. We made a large detour around the petrol station that had exploded but was still making uncomfortable noises. A couple of people were wailing outside a collapsed building; the broken sign on the wall showed it had been a university. Communication in emergency situations is often not easy. The phone networks were either down or overloaded, so it was impossible to find out if our friends were okay. I had no way of letting my family know that I had survived and just hoped that they wouldn't hear about the earthquake until tomorrow. We have no idea which areas were worst hit or how the rest of the country is doing.
Last night we walked home in the dark, slept, or tried to sleep in the space in the garden least likely to have a wall or building fall on it should the aftershocks cause more damage. I lay feeling the aftershocks through the night under a beautiful sky heavy with stars, kept awake by the loud singing, clapping and shouting at what must be a local church and by our local confused cockerel, who spent the night letting us know he was still alive! Today we walk back to the office in the stark light of day. We pass the collapsed hospital at the end of our street. We pass a man carrying his dead child, repeating out loud that he has his beloved dead child in his arms, not knowing where to go. We pass people being carried on all kinds of makeshift stretchers, doors, blankets or whatever they can get their hands on to carry their loved ones to medical facilities for help. We went up and down the main road 6 times today; each time more corpses appeared, some covered in sheets, some just lying contorted and stiff and coated in the dust that covers the city. I wonder if their families know where they are? It is impossible to make even a wild estimate of the number of people who have died, are missing or are affected by this earthquake, which measured 7.3 on the Richter scale. In Canape Vert Park, hundreds of people are sitting on the street, in the small open space. The smell of urine and excreta is strong. As the days pass, the corpses and waste will become increasingly pungent. Supermarkets have either collapsed, been looted or are closed for fear of trapping people in collapses from the aftershocks. The only food we find for sale is some unappetizing fruit that a group of women are selling on the side of the road. The cost of water has already gone up. Food and drinking water are scarce. I wonder how long we can last on the food we have at home, maybe two or three days. People are currently searching for family members or are in shock.
I am concerned about the possibility of unrest related to the lack of food available in the coming days. Haiti is not exactly the breadbasket of the region. We attend an Oxfam staff meeting; we are a small organization, yet 7 people had their homes destroyed and several other homes were damaged. Haitians are heeding the advice that it is dangerous to sleep in their beds because of the aftershocks. Most people are sleeping on the street. Teams are organized to go to different coordination meetings and collect information about the situation here in Haiti. We suggest that those members who are not coming into work this week help dig out people still alive and trapped in the rubble. We go to the WASH cluster meeting with a group of organizations who work in Water, Sanitation and Hygiene to coordinate the WASH response. In an emergency many organizations come to help, so we need to work together and organize who does what and where. Streams of people with suitcases are leaving the city to stay with friends and family in other parts of Haiti and the Dominican Republic. The earth moves almost constantly this evening. I feel queasy. This evening is colder. Not like England in January cold, but chilly. Just before midnight a large, noisy trail of people passes our house; they are worried by the rumor that a tsunami is coming and are seeking refuge higher up in the hills. It's raining a little. Tonight Port au Prince is spending its second night sleeping under the stars. Haiti is not known for having a good security record. We hear that all the inmates from the huge local penitentiary who were not killed by the earthquake have escaped. Today we do a rapid appraisal of the communes where we have recently trained teams in emergency WASH (Water, Sanitation and Hygiene) response. Visiting the open areas where displaced people are sleeping, the main needs, we are told, are not surprisingly drinking water, food, medicines and latrines.
The WASH coordination meeting does not go as planned, but in a good way. Several private water companies are offering their services to provide water to key locations in the city, which is wonderful news. These organizations will provide 80 trucks full of water. The international organizations, including Oxfam, need to organize storage and management of the water, which is an enormous task. Unfortunately we also find out that our emergency stock (the materials we keep stored so that we can respond quickly when there's an emergency) has become inaccessible following the quake. This is a huge setback, as tomorrow we want to start distributing water. People are hungry and people are thirsty. The most disturbing sights of today were not the piles of debris that just 2 days ago were homes and local schools. The sights that made me draw breath were the bodies. A neat row of 16 bodies carefully wrapped in sheets; the group of 20 at the Canape Vert roundabout, some identified with ripped cardboard name tags; a pile with no sheet covering them, just thrown one on top of the other; and the two bodies on the corner of a street, an adult motionless under a small dead child. Today lots of people are covering their faces with scarves and face masks; they believe this will protect them from diseases spread by dead bodies. Red scarves are particularly popular. Red is believed to be the strongest color and helps ward off disease. It is true that dead bodies from cholera victims can spread disease, but just walking past the bodies of the ordinary healthy people that the quake has taken does not. But it is a link that people often make, probably because of how traumatic it is to see the bodies. We are told that a plane is sending emergency materials for us and it should arrive tomorrow! This is great news. Today was spent preparing to distribute water. I visited a golf course, currently home to about 10,000 people. There are a lot of sick and injured people sleeping out here.
I was looking for an appropriate place to mount a portable water storage container (a bladder). I am really curious about why people are walking around with thick white cream smeared under their noses. I imagine it must be something sweet smelling to counter the bad smells here. In fact it is toothpaste put there supposedly to stop them getting ill! We are still sleeping outside and will continue to do so for a few days to come. I am not sure what I miss more, sleeping in my bed or eating cooked meals! Tomorrow we have a long but hopefully really productive day ahead of us. We will start installing the water points and distributing drinking water.
physics
http://helenx1oo4.tripod.com/physicsofmushing/id2.html
2020-12-01T02:31:42
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141542358.71/warc/CC-MAIN-20201201013119-20201201043119-00162.warc.gz
0.947684
225
CC-MAIN-2020-50
webtext-fineweb__CC-MAIN-2020-50__0__198257033
en
Force equals mass times acceleration. Thus, the amount of force of a dog sled team depends on the mass of the musher, the dogs, and the sled, and the acceleration of the entire unit. A team of ten dogs of equal mass traveling at twenty miles an hour will have more force than a team of ten dogs of equal mass traveling at five miles an hour. A musher who is sledding for fun may not want to have as great a force as a musher entered in a dog sled race. A musher in a race is focused on winning. Thus, she will use more dogs because with the added mass she will have more force to overcome obstacles, such as snowy passes and hills. Another force that comes into play when racing sled dogs is frictional force. When entering dog sled races, a musher must consider the friction between the sled and the snow and ice. A musher looks for a dog with tough pads that can handle the ice and snow. The frictional force can be found by multiplying the normal force by the coefficient of friction.
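The two relations above (F = ma and friction = mu * normal force) can be put into a short sketch. All masses, the acceleration, and the friction coefficient below are illustrative assumptions, not figures from the article:

```python
# Rough sketch (assumed numbers): driving force needed to accelerate a sled
# team, and the opposing kinetic friction on the runners over level snow.

MU_SNOW = 0.05          # assumed coefficient of kinetic friction, runner on snow
G = 9.81                # gravitational acceleration, m/s^2

def driving_force(total_mass_kg, acceleration_ms2):
    """Newton's second law: F = m * a."""
    return total_mass_kg * acceleration_ms2

def friction_force(total_mass_kg, mu=MU_SNOW):
    """On level snow the normal force is m*g, so F_friction = mu * m * g."""
    return mu * total_mass_kg * G

# Musher (70 kg) + sled (40 kg) + ten 25 kg dogs, accelerating at 0.5 m/s^2
mass = 70 + 40 + 10 * 25            # 360 kg in total
print(driving_force(mass, 0.5))     # 180.0 N just to accelerate the team
print(friction_force(mass))         # ~176.6 N lost to friction
```

Adding dogs raises both the driving force available and the frictional loss, since each scales with the total mass; the win comes from the dogs supplying pull out of proportion to the friction they add.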
physics
https://www.sussexcancerfund.co.uk/simple-storage-solution-to-aid-electron-treatment/
2023-12-05T09:32:06
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100550.40/warc/CC-MAIN-20231205073336-20231205103336-00738.warc.gz
0.947224
816
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__156357533
en
We recently had a request from the Radiotherapy Team, specifically from the mould room, for some heavy-duty plastic boxes. Intrigued to understand more about why the team needed so many boxes, we popped along to see Dosimetrists Marissa & Rica to find out more. In simple terms, the boxes are to be used to keep safe the various moulds and shields patients need while having their treatment. They are often bespoke pieces of wax and lead that are made for patients receiving treatment close to the surface of the skin. The paraffin wax (bolus) helps the electron beams reach the correct depth, and the lead pieces (applicators) stop healthy tissue from being affected by the treatment. The boxes will keep the moulds together safely in a clean and protected environment. Marissa gave us a more detailed overview of how the boxes will be used, together with an insight into electron treatment: “Radiation damages cells which are fast-dividing and therefore spread quickly. Targeting those specific cells reduces the radiation healthy cells receive. Electron treatment is a form of radiotherapy for skin-surface lesions. The electron beams can only treat to a depth of 0-3cm, compared to the high-energy x-rays which are used for deeper tissue treatments, such as radiotherapy to the lung. The applicator is attached to the machine and will then rest on the patient’s skin. It is a lead-cadmium alloy which has a melting point of around 100°C. Lead is used as a shielding material in order to make sure healthy cells receive minimal doses of radiation. At the Sussex Cancer Centre our applicators come in a variety of sizes: 6x6cm, 10x10cm, 14x14cm and 20x20cm. Inside the applicator is a cut-out which we call an insert. These can be standard sizes varying from 3cm to 20cm circles, or rectangles/squares of different widths. Due to the metal’s low melting point, we are also able to pour applicators with custom inserts.
The shape is decided at the mark-up appointment by the radiographers and doctor. Bolus, a skin-equivalent material, is also produced; it is used for the following reasons: it increases the dose to the surface of the skin to really target the lesion, it degrades the energy of the beam which reduces the penetration, and it can also be used to fill in irregularities on the surface of the skin. Bolus at the Sussex Cancer Centre is made of the following: strips of pink wax layered on top of each other to create the required thickness on Head & Neck masks, or paraffin wax layered to the required thickness, both of which only need creating once and can be used for the duration of the patient’s treatment. There is also bolus made from wet gauze, which is a single-use bolus for every treatment. In some instances, patients will also require a lead eye shield to protect from the radiation if the treatment area is close to the eye, or sometimes a lead and aluminium nose plug, both of which are small components of the treatment. The boxes provided by the Sussex Cancer Fund have made it possible for us to keep each patient’s individual applicator, bolus and any other element of the set-up secure and within infection control standards. This will go a long way towards assisting us when transporting individual items securely to our satellite sites. We are very thankful to the Fund for providing us with the means to deliver a more individualised service to our patients having electron treatment.” A huge thank you to Marissa & Rica for taking the time to explain to us how this simple solution will aid such a key treatment process. We would also like to thank our supporters who have helped us provide this equipment. If you would like to help the Sussex Cancer Fund provide even more for cancer services throughout Sussex, please use the button below.
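The limited treatment depth and the role of bolus can be illustrated with common clinical rules of thumb. These rules (practical range in water ~ E/2 cm, useful treatment depth ~ E/3 cm, bolus of thickness t shifting the dose shallower by roughly t) are textbook approximations, not figures quoted by the Sussex Cancer Centre:

```python
# Hedged sketch using standard electron-beam rules of thumb in water/tissue.
# These are approximations for illustration only, not clinical planning values.

def practical_range_cm(energy_mev):
    """Practical range R_p ~ E/2 cm: depth beyond which almost no dose is delivered."""
    return energy_mev / 2.0

def treatment_depth_cm(energy_mev, bolus_cm=0.0):
    """Useful treatment depth (~80% isodose) ~ E/3 cm, shifted shallower by bolus."""
    return max(energy_mev / 3.0 - bolus_cm, 0.0)

# A 6 MeV beam usefully treats to about 2 cm; 0.5 cm of bolus pulls that to ~1.5 cm,
# which is how bolus "degrades the energy from the beam" to protect deeper tissue.
print(treatment_depth_cm(6))        # 2.0
print(treatment_depth_cm(6, 0.5))   # 1.5
```

This matches the article's 0-3 cm figure: clinical electron energies of roughly 4-9 MeV give useful depths of about 1-3 cm.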
physics
http://www.danze.com/blogs/cool-down-your-home-heating-bill/
2018-07-19T11:24:24
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590866.65/warc/CC-MAIN-20180719105750-20180719125750-00076.warc.gz
0.899956
332
CC-MAIN-2018-30
webtext-fineweb__CC-MAIN-2018-30__0__49527283
en
Cool Down Your Home Heating Bill
The colder the climate in your area, the hotter you get about home heating bills. Here is a list of ways to cut those costs that require a minimum of time, expense or sweating:
SUNSHINE: The easiest way to cut heating costs is also the cheapest. Raise window shades and blinds when the sun is out and lower them when the sun goes down.
LEAKS: From windows to door frames to electrical outlets, heat can leak from a room in many ways. Before you can eliminate them, perform this simple test to find them: place a piece of toilet paper in front of the suspected area and see if it moves.
WINDOWS: Insulating windows against heat loss (the biggest heat-loss culprit) is fast, easy and cheap using bubble wrap. Just spray the window with a thin coating of water and then press a windowpane-sized sheet of bubble wrap onto the glass.
FURNACE: Getting your furnace serviced so it's running at maximum efficiency can cut costs. A programmable thermostat can also save money during sleeping hours or when no one is home.
CEILING FAN: Because heat rises, it accumulates at the ceiling. Reversing the rotation of your ceiling fan (find the switch at the fan base) can push that warm and valuable air back down into the room.
RADIATOR: To make the radiators in your home work more effectively, fit aluminum foil behind each unit so that it reflects more of the heat into the room rather than up the wall.
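Why windows are "the biggest heat loss culprit" comes down to the steady-state conduction relation Q = U * A * dT. The U-values and window size below are illustrative assumptions, not figures from the article:

```python
# Toy comparison (assumed numbers): heat lost through one window, with and
# without a bubble-wrap layer. Q = U * A * dT, in watts.

def heat_loss_watts(u_value_w_m2k, area_m2, delta_t_c):
    """Steady-state conduction: Q = U * A * dT."""
    return u_value_w_m2k * area_m2 * delta_t_c

AREA = 1.5        # m^2, a typical window (assumption)
DELTA_T = 20.0    # 20 C indoors, 0 C outdoors

bare = heat_loss_watts(5.8, AREA, DELTA_T)      # single glazing, U ~ 5.8 (assumed)
wrapped = heat_loss_watts(2.9, AREA, DELTA_T)   # bubble wrap roughly halving U (assumed)
print(bare, wrapped)   # roughly 174 W vs 87 W per window
```

Even with rough numbers, halving the U-value halves the loss, which is why a cheap air-pocket layer like bubble wrap pays off on single-glazed windows.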
physics
https://www.ses-energiesysteme.com/en/products/solar-thermal-plants
2024-02-21T01:55:58
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473360.9/warc/CC-MAIN-20240221002544-20240221032544-00251.warc.gz
0.903149
296
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__192383463
en
Solar thermal energy converts solar energy into heat
Local and district heating networks are an important key to the increasing integration of renewable energies in the heating sector. In solar thermal energy, liquid-filled collectors capture the sun's light and convert it into heat. This can then be used to heat water or for space heating. Solar heating networks are eligible for subsidies and are currently being expanded. Since no fuel costs are incurred in their operation, solar thermal plants offer a high degree of long-term cost security (source: Bundesverband Solarwirtschaft e.V.).
In 2022, we added solar thermal plants to our product portfolio
After 25 years of experience in the CHP sector, increasingly in complex plant construction, in 2022 we decided to expand our product range to include solar thermal plants. We offer them individually or in combination with other technologies such as combined heat and power plants, heat pumps, buffer storage tanks, etc. In the course of a project we take care of the complete project coordination and turnkey plant construction. The use of solar thermal plants in connection with innovative combined heat and power (CHP) is promising and future-oriented.
Your project requirements are our benchmark
We work independently of manufacturers and are guided by your project specifications when using the various collector types, such as evacuated tube collectors, flat-plate collectors or parabolic trough collectors. Would you like to know more? We look forward to hearing from you.
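The conversion from sunlight to useful heat can be estimated with Q = eta * A * G (efficiency times aperture area times irradiance). The efficiency, field size and irradiance below are illustrative assumptions, not SES plant data:

```python
# Minimal sketch (assumed values): useful thermal output of a collector field.

def collector_heat_kw(efficiency, area_m2, irradiance_w_m2):
    """Q = eta * A * G, returned in kilowatts."""
    return efficiency * area_m2 * irradiance_w_m2 / 1000.0

# 200 m^2 of flat-plate collectors at 50% efficiency under 800 W/m^2 of sun
print(collector_heat_kw(0.5, 200, 800))   # 80.0 kW of heat for the network
```

In practice collector efficiency falls as the gap between fluid and ambient temperature grows, which is one reason evacuated tube, flat-plate and parabolic trough collectors suit different network temperatures.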
physics