Source: http://www.superpit.com.au/Environment/NoiseandVibration/BlastMonitoring/tabid/131/Default.aspx (retrieved 2013-05-18)

Permanent blast monitoring for Fimiston open pit surface blasts was established in 1993 as part of the noise and vibration monitoring program established by KCGM. Ground vibration and airblast overpressure are monitored using Texcel ATM Blast Monitors. There are six monitors permanently installed at sites between the Open Pit and the City of Kalgoorlie-Boulder.
The trigger levels for the Fimiston blast monitors are set at 1 mm/sec for vibration and 114 dB(L) for overpressure. If either of these levels is reached, a result is recorded for the blast event. Ministerial conditions for blast overpressure are set at 125 dB(L) for any one blast, and 120 dB(L) for not more than one in any ten consecutive blasts.
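To make the trigger and compliance rules concrete, here is a minimal sketch (in Python) of how a chronological list of recorded overpressure results could be checked against them. The thresholds are taken from the text above; the function and the sample readings are invented for illustration.

```python
# Illustrative check of blast-monitoring results against the quoted levels.
# Thresholds come from the text; the readings and function are made up.

VIBRATION_TRIGGER_MM_S = 1.0        # a result is recorded at or above this
OVERPRESSURE_TRIGGER_DB = 114.0     # dB(L)
OVERPRESSURE_LIMIT_DB = 125.0       # ministerial limit for any single blast
OVERPRESSURE_SOFT_LIMIT_DB = 120.0  # at most 1 in any 10 consecutive blasts may exceed this

def check_compliance(overpressures_db):
    """Check a chronological list of blast overpressure results (dB(L))."""
    if any(p > OVERPRESSURE_LIMIT_DB for p in overpressures_db):
        return False  # a single blast exceeded 125 dB(L)
    # sliding window of 10 consecutive blasts
    for start in range(len(overpressures_db) - 9):
        window = overpressures_db[start:start + 10]
        if sum(p > OVERPRESSURE_SOFT_LIMIT_DB for p in window) > 1:
            return False  # more than one blast above 120 dB(L) in 10 consecutive blasts
    return True

print(check_compliance([112.0, 118.5, 121.0, 115.2, 119.9,
                        113.0, 116.4, 117.8, 114.9, 118.0]))  # True
```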
The permanent blast monitoring network was extended by KCGM in 1998 under the direction of experts who assisted with the planning, installation and commissioning of the sites.
Surface ground vibration generated by Mt Charlotte operations is also monitored using Texcel ATM Blast Monitors. There are eight monitors permanently installed at locations around Mt Charlotte. Data from these monitors are used to manage and minimise potential impacts.
KCGM uses this information to re-design the blast and to ensure that the design which produces the lowest vibrations is selected. Special explosives and detonators have been manufactured for Mt Charlotte to help minimise blast vibration.
Source: http://superconducting.blogspot.com/2005/10/workshop-on-quantum-and-classical.html (retrieved 2017-07-28)

Workshop on Quantum and Classical Information Security ARDA/NSA/NSF/Caltech 15-18 December 2005 – "The workshop will bring together researchers from a variety of backgrounds who work on different aspects of classical and quantum information security. Participants will strive to identify issues and problems of common interest that can be effectively addressed by pooling their expertise."
Flux Qubits as Trapped Ions (RIKEN). In quant-ph/0509236, Liu, Wei, Tsai and Nori propose a scalable superconducting circuit in which the qubits act as 'trapped ions.' The qubits are coupled to a 'vibrating' mode provided by a superconducting inductor-capacitor circuit, and interqubit couplings are selectively controlled by modulating the frequencies of the applied time-dependent magnetic flux.
Parametric Coupling for Flux Qubits (Delft). Pashkin and McDermott have independently demonstrated entanglement between superconducting qubits using a fixed linear coupling scheme. In cond-mat/0509799, Bertet, Harmans and Mooij propose a scalable architecture for two superconducting charge or flux qubits biased at symmetry points with unequal energy splittings. "The fixed-coupling strategy would be difficult to scale to a large number of qubits, and it is desirable to investigate more sophisticated schemes. Modulating the coupling constant between two qubits at the sum or difference of their two frequencies allows to bring them into resonance in the rotating frame. Switching on and off the modulation amounts to switching on and off the coupling which can be realized at nanosecond speed. We discuss various physical implementations of this idea, and find that our scheme can lead to rapid operation of a two-qubit gate."
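As a rough sketch of why modulating at the sum or difference frequency switches the coupling on (a generic two-qubit model, not the specific circuit proposed in the paper), take

$$H = \tfrac{1}{2}\hbar\omega_1\,\sigma_z^{(1)} + \tfrac{1}{2}\hbar\omega_2\,\sigma_z^{(2)} + \hbar g\cos(\omega_m t)\,\sigma_x^{(1)}\sigma_x^{(2)}.$$

In the frame rotating at each qubit's own frequency, the coupling term contains components oscillating at $\omega_m \pm (\omega_1-\omega_2)$ and $\omega_m \pm (\omega_1+\omega_2)$. Choosing $\omega_m = \omega_1-\omega_2$ (or $\omega_1+\omega_2$) makes one component stationary, leaving an effective resonant exchange coupling of order $g/2$; with the modulation off, all coupling terms oscillate rapidly and average away, which is the sense in which the coupling can be switched on and off at nanosecond speed.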
Source: https://textfirearms.xyz/2022/12/09/steel-nitrogen-paintball-tank-consider-this-tank-for-higher-firing-velocities/ (retrieved 2023-02-03)

To make a paintball game more exciting, you need to arm yourself with the right equipment. Paintball tanks are among the essential equipment you should have when engaging in battle. In essence, these tanks store compressed gas while being connected to paintball guns so paintballs can be fired at high velocities. They can store either carbon dioxide or nitrogen, with the steel nitrogen paintball tank being one of the most durable options.
Going for steel
A steel nitrogen paintball tank is made of steel that is durable and filled with compressed nitrogen gas. Although heavier compared to aluminum and fiberglass paintball tanks, steel tanks can withstand harder blows from high-velocity paintballs without getting damaged, dented, or punctured. You can buy them in screw-in 14-ounce to 20-ounce varieties.
Steel is also beneficial in maintaining the air inside, making sure that it doesn’t seep through. Since the steel tank is sturdy and less likely to be punctured by plain impact, you can be sure that the nitrogen inside can stay intact and avoid explosions.
Nitrogen is a better choice
You actually need a steel nitrogen paintball tank so you can propel your paintballs more effectively with higher velocities in order to hit your target more accurately and quickly. However, the amount of gas within the tank is the determining factor in how many shots you can fire before refilling it.
Aside from making sure that the tank is adequately sized and that it fits your gun, you need to check on the quality of the gas inside your tank. For steel tanks, nitrogen is always preferred. Nitrogen is also useful for more consistent paintball velocity when firing. Compared to carbon dioxide, nitrogen produces more constant speeds since it is far less affected by differences in temperature and altitude.
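As a rough illustration of why compressed nitrogen holds its pressure (and therefore velocity) more steadily than CO2, here is a simplified ideal-gas estimate; the figures are approximate handbook values, not manufacturer data.

```python
# Rough comparison of how tank pressure responds to temperature.
# Compressed nitrogen behaves approximately like an ideal gas, so pressure scales
# with absolute temperature; liquid CO2 follows its vapour-pressure curve, which
# rises far more steeply. All numbers are approximate and for illustration only.

def nitrogen_pressure(p_initial_bar, t_initial_c, t_final_c):
    """Ideal-gas estimate: P2 = P1 * T2 / T1 (temperatures in kelvin)."""
    return p_initial_bar * (t_final_c + 273.15) / (t_initial_c + 273.15)

print(round(nitrogen_pressure(200.0, 15.0, 35.0), 1))  # ~213.9 bar, about a 7% rise

# Approximate CO2 vapour pressure (bar) from handbook data over the same range:
co2_vapour_pressure = {15: 51, 25: 64, 31: 74}  # roughly a 45% rise
```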
Source: https://racheldiazbastinart.com/blogs/news/crouching-predator-hidden-scarab (retrieved 2024-04-17)

Take a good look at this golden scarab beetle, Chrysina resplendens…
Probably the most ridiculously golden beetle ever, right? Now what if I told you that this beetle (and several other species of beetle in the family Scarabaeidae), actually shine brighter than they appear, the result of a light trick that only a few animals on the planet can accomplish? You’d be like “what!? Tell me more, especially if physics is involved, because I looooove long-winded physics explanations.” Ok great!
So what is this light trick? Hidden within the microstructure of the beetle's exoskeleton there are helical twists and turns that give certain species of scarab the rare ability to create and reflect CIRCULARLY polarized light! Say what now? While many animals can create and even see LINEARLY polarized light, there are very few examples of the creation of circularly polarized light in nature, and Chrysina gloriosa, a particularly adorable species of scarab, is one of those special few:
Quite a derpy little chap! Hard to believe it can create one of nature’s rarest optical tricks. How do they DOOO itt??? Well, a light wave behaves like a beam of energy that can oscillate in many directions. Polarization is a property of waves that describes the direction of these oscillations (left, right, up, down), perpendicular to the direction of the wave. Pure white light is unpolarized, meaning that this light has oscillations in many directions. Light is said to be polarized when one of these directions of oscillation is removed or blocked, such as when a polaroid filter is used (thus blocking off either the up/down or left/right oscillations), or in the natural world, when light reflects off of surfaces, such as Chrysina gloriosa, whose helical exoskeleton microstructure creates the circular oscillation of light.
Circularly polarized light can travel in a left-handed or right-handed corkscrew. Looking at Chrysina gloriosa under a left-handed filter (so that only left handed oscillations pass through), it pops with luminescence! But under a right-handed filter (where only right-handed oscillations can pass), it turns totally dark! From this we can conclude that Chrysina gloriosa reflects only left-handed circularly polarized light.
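In textbook terms (this is the generic physics, not anything specific to the beetle), circularly polarized light is a superposition of two perpendicular linear polarizations a quarter of a wave out of step:

$$\mathbf{E}(z,t) = E_0\left[\hat{x}\cos(kz-\omega t) \pm \hat{y}\sin(kz-\omega t)\right]$$

The plus and minus signs give the two handednesses (which sign is called "left" depends on the convention used), and the tip of the electric field vector traces a corkscrew around the direction of travel rather than oscillating in a single plane. A helically twisted reflector like the beetle's exoskeleton preferentially reflects one handedness, which is why the beetle lights up under one circular filter and goes dark under the other.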
So far only one animal on the planet has been proven to actually SEE circularly polarized light, the technocolor dream explosion that is the mantis shrimp:
But there aren’t a lot of mantis shrimps around in the juniper scrub habitat of Arizona where Chrysina gloriosa lives, so what is this extra shininess for? Perhaps, as scientists have hypothesiezed, Chrysina gloriosa can see circularly polarized light too, in which case these beetles would look even brighter and luminescent to each other than they do to us. This would give them a tremendous survival advantage by allowing the beetles to easily see and communicate with each other while simultaneously hiding from predators that cannot see circularly polarized light. While the jury is still out, scientists have gathered enough data that we can tentatively place Chrysina gloriosa (and perhaps other scarabs too), among the ranks of the circularly polarized light perception club, a truly elite club with a secret language all their own.
Further sciencey reading:
(Repost): Original Post: https://thebefuddledloris.wordpress.com/2013/05/15/crouching-predator-hidden-scarab/
Source: https://johnefarley.com/wx30710.htm (retrieved 2022-06-30)

- Graupel (snow pellets)
- Dense fog, with visibility down to about 100 feet (in the mountains, as cloud bases lowered)
- Three brief hailstorms (all pea or smaller)
The first six weather phenomena were all observed while I was skiing at the Pajarito Ski Area (altitude from about 8,900 to 10,300 feet above sea level) in the Jemez Mountains above Los Alamos. The last two were in Santa Fe through the course of the same evening.
When I arrived at Pajarito around 10:40 a.m., it was snowing. By the time I bought my lift ticket and got my boots and skis on and headed out to ski (about 11 a.m.), about an inch of new snow and graupel had fallen (which definitely helped the ski conditions). When that stopped, the cloud base was around 10,000, near the top of the mountain. But within 15 minutes or so, it quickly lowered, and enshrouded the entire ski area, including the base area, in dense fog. At times the fog was so thick you could only see two chairs ahead on the chairlift, meaning the visibility was down to 100 feet or less. Around 1 p.m., it began to snow again, with snow and graupel (snow pellets) coming down quite heavily for a while - but by an hour later, this band of snow showers had moved north and the sun was brightly shining. Here is a picture of the band of mountain snow showers and valley rain showers as it continued to the north:
As this band continued north, I observed the rare phenomenon of a "snowbow" - a rainbow in an area of falling snow. This occurs as the snow mixes with rain, producing a rainbow - except that there is also snow occurring in the same area. You can see it just below the cloud base, just left of center in this picture:
Soon after this, it clouded over again and more snow showers moved in. Just as I was leaving the ski area, however, the snow mixed with and changed to rain at the bottom of the ski area, as warm air advection raised the snow level to around 9,000 feet.
The weather remained unsettled and stormy through the evening, after I returned to Santa Fe. There I observed two thunderstorms and multiple brief bursts of hail, as well as a lot of rain. The first thunderstorm passed just to the east of Santa Fe around 6 p.m. as it moved from the south-southwest up into the Sangre de Cristo Mountains. Here is a picture of this thunderstorm as it was located a few miles southeast of Santa Fe:
At this time, thunder was frequent and I observed quite a bit of lightning, including some CG (cloud-to-ground). Several heavy convective showers, a couple with brief bursts of hail, and another thunderstorm with frequent lightning and another brief burst of hail (this one around 9:30 p.m.), followed through the course of the evening. This second thunderstorm hit Santa Fe more directly than the first one, but the heaviest part of this storm also passed just southeast of the city. These storms were interesting in that they produced a wide variety of precipitation - rain and hail in the lower elevations, along with very heavy snow in the higher elevations. Both the Santa Fe ski area and the Santa Fe snow telemetry site in the mountains above Santa Fe received nine inches of snow through the evening and overnight, much of which was associated with the two strong thunderstorms. It's not too often you see this wide a variety of weather in one period of about 12 hours!
Also noteworthy - elsewhere in New Mexico, the thunderstorms were actually severe. In Carlsbad, a storm produced golfball-size hail, and a number of severe thunderstorm (SVR) warnings were issued by both the Albuquerque National Weather Service office and the Midland-Odessa, TX office which serves southeast New Mexico.
Source: https://www.investorschronicle.co.uk/news/2021/01/06/is-now-the-time-to-invest-in-hydrogen/ (retrieved 2021-03-09)

The idea that we should replace hydrocarbons with hydrogen is not new, but it has been enjoying a renaissance over the past year or so. As countries get serious about tackling climate change, hydrogen is being billed as the missing piece to the decarbonisation puzzle.
On the face of it, hydrogen seems like the holy grail of energy. It’s easy to make as you simply pass an electrical current through water to split it into hydrogen and oxygen – a technique known as electrolysis. If the electricity comes from renewable sources, then this process is essentially emissions-free. When the hydrogen is burned or passed through a fuel cell, the only by-product is water. It also offers a solution to the intermittency of wind and solar power as it can be used as an energy store.
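For reference, the chemistry being described is just the splitting and recombination of water (standard textbook reactions, not specific to any particular electrolyser or fuel-cell design):

$$2\,\mathrm{H_2O(l)} \;\rightarrow\; 2\,\mathrm{H_2(g)} + \mathrm{O_2(g)} \quad\text{(electrolysis, driven by electrical energy)}$$
$$2\,\mathrm{H_2(g)} + \mathrm{O_2(g)} \;\rightarrow\; 2\,\mathrm{H_2O(l)} + \text{energy} \quad\text{(fuel cell or combustion)}$$

so the only chemical by-product at the point of use is water, which is the basis of the "essentially emissions-free" claim when the input electricity is renewable.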
Source: https://www.goonsandgalaxies.com/iwxsr/55d72b-year-6-stem-electricity (retrieved 2021-07-27)

Year 6 'Electricity' unit resources:
- Science: Electricity: It's Electrifying, Year 6 Lesson Pack 1. Students may get a "charge" as they discover the difference between a conductor and an insulator. Children will have the opportunity to explore and learn more about the world around them with the lesson overviews included.
- These STEM activities will help your 6 and 7-year-old pupils to recognise the components of simple circuits involving cells and know how a switch can be used to break a circuit.
- This PlanIt Planning Overview provides a basic outline of the lessons, resources and learning intentions provided in the PlanIt Year 6 Science 'Electricity' Unit packs (Year 6 Electricity Medium Term Plan).
- Lesson plan and presentation covering the science objectives for the Year 6 electricity unit, designed to build on knowledge, misunderstandings or gaps in knowledge from Year 4. Students begin with revision of simple circuits before gaining lots of hands-on experience with symbols, diagrams and incomplete circuits. They investigate the effect of the length of wire in a circuit, compare series and parallel circuits and finally face some circuit challenges. Possible misconceptions are highlighted so that teachers may plan lessons to facilitate correct conceptual understanding. The lessons are: cells, wires, circuit diagrams, circuit repairs and circuit challenges.
- Children will learn how to construct a simple series circuit, identify some conductors and insulators, and learn about electrical safety. Children learn that mains electricity is more dangerous than the electricity used in Primary Science lessons. Using the template and illustration provided, children create their own wire loop game.
- Cover the Year 5 and Year 6 science objectives of the National Curriculum with Hamilton's science scheme; the sessions are available to all. Each block contains six sessions and can be completed within a half-term. For mixed Year 5/6 classes a rolling two-year programme is provided; follow Set A in one year and Set B in the next.
- There are five Quick Quizzes for Year 6 that link to the curriculum areas (Living things and their habitats; Animals, including humans; Evolution and inheritance; Electricity; and Light). These can be used in the classroom or for homework tasks to provide an additional opportunity for children to demonstrate their understanding of the Year 6 science topics.
- Year 6 Science achievement standard: the parts of the achievement standard targeted in the assessment task are highlighted. They analyse requirements for the transfer of electricity and describe how energy can be transformed from one …
- Year 6 STEM Batteries: Year 6 worked with Kirsty to learn about batteries.
- Dear 4-H Electricity Project Judge, thank you for agreeing to judge your county's 4-H electricity projects. Your expertise will be a great resource for the 4-H electricity members in your county.
- Home teaching resources: to support teachers to continue educating young people while they are at home, we have developed a range of materials, including free resources, tips from our subject experts and professional development opportunities. Activities for families: explore resources, activities and guidance to support parents and carers of primary-aged children with home learning.

Background ideas covered in the unit:
- Electricity is the flow of electrons. Electricity is created by generators, which can be powered by gas, coal, oil, wind or solar. Electricity can also be stored in batteries (sometimes called cells); we normally get electricity from the mains or batteries. Before electricity, if you wanted to light your house at night, you had to use candles, or lamps filled with oil or paraffin. A circuit is a complete route or course.
- Children begin to identify items that run on electricity such as laptops, mobile phones, televisions, etc. Even young children will enjoy learning about electrical power with adult guidance.
- An electric-steam locomotive is a steam locomotive that uses electricity to heat the water in the boiler to create steam instead of burning fuel in a firebox. This is a highly unusual type of locomotive that only makes economic sense under specific conditions; normally, it would be much more efficient to build and use an electric locomotive.
- Electricity generated from geothermal plants is projected to increase from 16 billion kWh in 2019 to 52.2 billion kWh in 2050.
- Women in STEM: Ada Lovelace. Ada Byron, Countess of Lovelace (1815-1852), was the world's first computer programmer. Ada corresponded with …

Activities, challenges and kits:
- STEM Design Challenge: Energy Conductors. Design a test of conductivity using knowledge of circuits, conductors and insulators.
- A renewable energy STEM challenge where pupils learn about life without electricity before designing and making a simple wind turbine, suitable for pupils aged 7-19. Recently updated, this popular challenge introduces pupils to the Sustainable Development Goals (SDGs) and includes a starter activity where pupils simulate how the National Grid supplies electricity to most parts […]
- Experiments with electrical circuits, lighting a bulb: the simplest, most straightforward way to illustrate a circuit is to use a lightbulb on wires and hook it up to a battery. Build a circuit, an electromagnet, a simple motor, and more.
- In this section, you will find STEM activities to make circuits with different components and investigate the conductivity of different materials.
- In this project, you will build a super-sensitive charge detector to investigate the electric fields created by static electricity. The detector can sense invisible electric fields before you touch something and get zapped, so …
- Kick your static electricity experiments up a notch by mixing a batch of cornstarch "goo," then making it "jump" towards a balloon. Try these hands-on experiments and projects to (safely) learn about the science of electricity, which is the movement of electrons between atoms. These simple science projects will allow kids to learn about electricity in a hands-on way!
- Kids will love investigating electricity with this STEM electricity science kit! Far more than an educational toy, this electricity and circuits kit allows middle and high school kids to explore electricity with 9 step-by-step experiments, learn how to read a circuit schematic, use the Scientific Method, and more!
- Here are tons of ideas for STEM at home with your child: print out free STEM worksheets or a science process pack to go along with your next STEM project or science experiment. 24 Days of Christmas STEM Activities: countdown to Christmas with a list of holiday STEM projects for hands-on learning.

Cross-curricular Year 6 project resources:
- This project allows children to set up a themed restaurant business and design, make and market a range of dishes that feature a British food as their main ingredient. Children will be placed into business groups and think of a name and design a memorable business logo.
- This project provides a range of opportunities to enhance the Year 6 National Curriculum for Science (focusing on the circulatory system, lifestyle choices and their impact on health, working scientifically, electricity), Design Technology (allowing pupils the opportunity to design and make packaging and produce food using food technology skills) and Maths (survey design, calculating with money, applying calculation methods to multi-step problems, measurement and fractions).
- This resource introduces Year 6 pupils to ideas of marketing, encouraging them to think of examples in their everyday lives as well as the importance of eating seasonal produce. A companion resource introduces the concept of market research, giving pupils the opportunity to design and conduct a survey and present their data in a pie chart.
- This resource focuses on the Year 6 science curriculum, supporting pupils to identify and name the main parts of the human circulatory system, describe the functions of the heart, blood vessels and blood, and investigate the effect of exercise on heart rate.

Please be aware that resources have been published on the website in the form that they were originally supplied. This means that procedures reflect general practice and standards applicable at the time resources were produced and cannot be assumed to be acceptable today.
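For teachers who want a quick numerical companion to the series-versus-parallel and wire-length investigations mentioned above, here is a minimal sketch using Ohm's law; the component values are invented and real bulbs are not ideal resistors.

```python
# Idealised comparison of series vs parallel circuits using Ohm's law (V = I * R).
# Values are invented for illustration only.

def series_resistance(resistances):
    return sum(resistances)

def parallel_resistance(resistances):
    return 1.0 / sum(1.0 / r for r in resistances)

battery_voltage = 3.0   # two 1.5 V cells
bulb = 10.0             # ohms, nominal resistance of one bulb

for name, r_total in [("series", series_resistance([bulb, bulb])),
                      ("parallel", parallel_resistance([bulb, bulb]))]:
    current = battery_voltage / r_total
    print(f"{name}: total resistance {r_total:.1f} ohm, battery current {current:.2f} A")

# A longer wire adds resistance in series, so the current (and bulb brightness) drops:
wire_resistance_per_metre = 0.5
for length in (0.1, 1.0, 5.0):
    r_total = series_resistance([bulb, wire_resistance_per_metre * length])
    print(f"{length} m of wire: current {battery_voltage / r_total:.2f} A")
```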
Source: https://coachrickswimming.com/2014/11/22/open-or-closed-fingers-a-review/ (retrieved 2023-06-10)

Recently the question of closed versus open fingers came up on the Facebook Swim Coaches group (what an incredible collection of coaching minds!), and I was amazed at the diversity of opinion on what I thought was more or less a closed question. So I went looking to not only find out the present state of the research, but also to see what coaches and athletes were thinking. Here's just a sampling.
Most said keep the fingers and thumb relaxed and in a natural position. The next theme involved variations on fingers tightly together so no water would ‘slip’ through. And many specified exact finger spacings or a small range of spacings. Here are some other comments from coaches.
- thumb should be at 90° with four tight fingers
- thumb anywhere but 90°
- cup your hand so that the water doesn’t ‘spill’ over the sides
- finger spacing should be the width of your fingers
- swimmers should wear finger spacing gloves to train the fingers
- (and my favourite) open fingers mean the palm of the hand will move through the water more quickly, so attention has to be paid to moving the fingers faster to catch up
The general idea is that a large number of swim sites discuss the issue but simplify the scientific study results horribly, resulting in incorrect generalizations and bad explanations. As a result, far too many professional coaches and swimmers are still confused about optimal finger spacing.
So let’s start with a basic understanding of the problem, and then we’ll get to the studies that have been done.
Most of the problems come from a misunderstanding of how fluid dynamics work. At the simplest level, there are 3 factors affecting the drag coefficient of the hand (roughly equating to the effective surface area of the hand).
- the larger the surface area of the hand, the higher the drag coefficient. The only time this will change is a) when the fingers are tightly closed, essentially squishing the fingers and reducing the surface area, and b) when the fingers and thumb are spread, the webbing between the digits will spread apart increasing the surface area a very tiny and probably insignificant amount.
- there is a thin boundary layer of water surrounding the fingers. This layer resists movement, and acts as an extension of each finger, increasing the effective surface area of the hand. The thickness of the boundary layer depends upon a lot of factors, including speed of the fingers through the water, shape of the fingers, proximity to other fingers, etc.
- any water flowing around or between the fingers will create vortices on the other side of the hand, and this will impact the drag coefficient. However, I believe we can ignore this completely, as the effect will be negligible in a real world scenario of a turbulent swimming pool
What this means is that the larger the total surface area, the greater the pulling force. And if we leave a slight separation between fingers equal to at least twice the thickness of the boundary layer, then we can increase the effective surface area of the hand; the short sketch below makes this concrete.
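Here is a minimal toy model of that argument. All dimensions are invented for illustration, and the model deliberately exaggerates the effect; it is not taken from any of the studies discussed below.

```python
# Toy model of the "effective hand width" argument above.
# Dimensions are invented; real hands and real flows are far messier, and the
# CFD studies below find a much smaller (~9%) gain. This only shows the direction
# of the effect: closed fingers and widely spread fingers are both worse than a
# small gap that the boundary layers can "seal".

finger_width_mm = 18.0
boundary_layer_mm = 4.0   # assumed thickness of the water layer dragged by each finger
n_fingers = 4

def effective_width(gap_mm):
    """Total effective width of four fingers separated by equal gaps."""
    sealed = gap_mm <= 2 * boundary_layer_mm   # gap filled by the two adjacent boundary layers
    gaps = (n_fingers - 1) * (gap_mm if sealed else 0.0)
    return n_fingers * finger_width_mm + gaps

for gap in (0.0, 3.0, 8.0, 20.0):
    print(f"gap {gap:4.1f} mm -> effective width {effective_width(gap):5.1f} mm")

# Quadratic drag: pulling force scales with effective area, F = 0.5 * rho * Cd * A * v^2,
# so a larger effective area gives proportionally more propulsive force at the same hand speed.
```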
There have been dozens of hand positioning studies done over the past few decades, some rigorous and precise, and some far, far less so. Of the literature I could find, four studies stand out, all using computational fluid dynamic [CFD] simulations. I should point out that all of these simulations use highly idealized situations (lack of turbulent water or cross currents, simplified modelling of digits, etc).
The Optimum Finger Spacing in Human Swimming, by Alberto Minetti and others (here).
This was the most interesting study, as the CFD simulations were the most complex and sophisticated, and involved the whole hand. Eight different finger spacings and multiple water flow scenarios were modeled.
The primary finding was that the best finger spacing was at 8 mm, with the second best at 3 mm. Interestingly, finger spacings between 3 and 8 mm produced far worse results, but I chalk this up to the highly idealized nature of the digital simulations. Spacing of 8 mm produced an effective surface area 8.8% above wide open and fully closed. This spacing turns out to be the equivalent of a relaxed hand.
The Constructal-Law Physics of Why Swimmers Must Spread Their Fingers and Toes, by Lorente and others, (here)
The CFD simulations used very simplified models (only 2 fingers represented as ideal cylinders). While far less sophisticated than the Minetti paper, this study is important as it confirms that the optimal finger spacing is between 20% and 40% of the width of the finger. This turns out to be roughly 4 to 8 mm.
Swimming Propulsion Forces are Enhanced by a Small Finger Spread, by Marinho and others (here).
These CFD simulations also used far less complex models, but included 7 different angles of attack for the hand and only three finger spacings. Not surprisingly, the best angle of attack was exactly 90°, while the best finger spacing was at 3.2 mm.
Hydrodynamic analysis of different thumb positions in swimming, Marinho and others, 2009 (here)
This CFD study is interesting in that it is one of the few to analyze the role of the thumb in the pull. The simulations showed slightly better results with the thumb adducted (towards the hand) than abducted (away from the hand). Unfortunately, only 3 thumb positions were analyzed, and so an optimal angle wasn't investigated.
Summary and Remaining Questions
The four studies above represent the most rigorous ones I could get hold of. All of them support optimal finger spacings roughly between 3 and 8 mm, which corresponds to a relaxed hand position. The study on thumbs supports a relaxed position closer to the hand. The optimal hand positions are estimated to increase the effective surface area of the hand by roughly 9%, although it should be noted that all studies use highly idealized conditions.
Surprisingly, there are still some remaining questions that have not been adequately answered in studies. Here are two of them:
What is the optimal angle for the thumb? The most rigorous study merely found that a thumb closer to the hand is better. But I couldn’t find any thumb studies that searched for an optimal angle or range of angles.
What is the optimal hand and finger configuration for hand entry? Video analysis shows elite swimmers with a very relaxed hand position on entry, even though intuitively we might think that a more compact hand upon entry might be better.
16 thoughts on “Open or Closed Fingers? A Review”
Interesting info. The article didn’t say, but I’m inferring that the reason for optimal finger spacing and thumb placement is to generate more force and thus achieve more power for propulsion.
Swim teachers and coaches often fall into the trap of thinking that generating more propulsion will result in more speed and lower times. This is very similar to coaching a golfer that swinging the golf club harder will result in achieving greater distance on each shot. True, but hugely oversimplified.
The “more power” argument is oversimplified because humans are land creatures, not fish. Increasing force for sports that mostly involve running, jumping, etc. works fine because that is aligned with our design. Apply this same logic to a fish trying to become more proficient at land sports. Think of coaching a dolphin to move faster over land instead of water. He would instinctively do what he knows which would be to wiggle faster. This would not produce a successful result.
Achieving better performance in swimming is mostly accomplished by increasing the skill level to make the body more precisely act like a fish, which is to actively decrease the forces of friction and wave drag on the body.
The most successful swimmers excel at doing these three things almost automatically:
1) Aligning and balancing their bodies in the water so that they equalize the forces surrounding them as they swim. This also is a principal law of fluid dynamics.
2) Using the feeling of “catching” the water with their hands as a stimulus for engaging their core and using the most powerful muscles in the body vs just arm and leg muscles. When this is done correctly, the hands act much more like anchors than paddles. Hand and finger placement is key in this and likely varies with each swimmers “feel” for the water.
3) Recruiting the correct muscles / muscle groups accurately in terms of timing and force vs body alignment and drag reduction. This, of course, becomes increasingly more difficult as aerobic stress and muscle fatigue increase.
This is why training swimmers becomes more complex and difficult as they get older and plan to achieve higher goals.
Thanks for the excellent comment, Mike. You're absolutely right about the holistic approach to swimming; there are many, many things that need to align in order to get the most out of a swimmer. This article, like most of mine, deals with specific aspects of our sport. It more or less assumes that you have a decent stroke, and are looking to maximize that propulsion. When talking to younger swimmers I refer to canoeing. You want the oar to push as much water as you can, and this might mean you row more slowly than if you turned the oar 90 degrees and pushed nothing very quickly. I was amazed at how many coaches contacted me about this post saying they'd always told their swimmers to squeeze their hands to not let any water through (wasting energy and decreasing effective hand size), or have fingers splayed as wide as possible (again, to do that requires extra energy without any benefit).
So yes, you’re right that this is just one piece of the puzzle, and certainly not the most important one. But eventually I’ll get around to talking about all the pieces of the puzzle.
I’m inclined to think the optimal thumb position would be between 3 and 8 mm from the rest of the hand, just like optimal spacing between fingers. Same principle would suggest that there would be an effective boundary layer between the thumb and the hand. I’ve stopped teaching my students to make “ice cream scoops” with their hands. Even if they aren’t concerned about cutting .0001 seconds off their times, I find a relaxed hand position makes my swimming feel a bit more. . . relaxed. Also keeps me from getting hand cramps. I’ve also recently read that the paddle style stroke is more efficient than the sculling stroke, and I’ve never seen a boat paddle shaped like an ice cream scoop. Just my take.
Hi Mike. I’d imagine that holding the thumb that close to the fingers would be quite awkward for any appreciable amount of time. I think that’s why the thumb is treated separately. As for the hand, you’re correct in that cupping the hand drastically reduces the pulling surface (which I’m guessing is worth far more than .0001 seconds). The relaxed hand increases the surface area. And lastly, we went to flat hand sculling strokes quite a while ago, for the same reason.
I use the .0001 second as a hyperbolic with my students just to de-emphasize the speed aspect of any particular stroke detail. Most of my students have no aspirations of competing at all, let alone in an environment where fractions of a second would be a concern. With them, my focus is on preventing them from getting tired rather than going faster. So, that’s just a number I throw around for effect.
I keep my thumb relatively parallel to my hand with a gap of somewhere between 3 and 8 mm, and I don’t find it awkward or uncomfortable at all. It’s certainly more comfortable than squeezing it against the side of my hand in the old ice cream scoop style. I was quite surprised at how sore and cramped my hands got doing the old scoops! Even if it weren’t faster, I’d stick with the flat hand style now that I’ve tried it.
I like that approach of de-emphasizing the smaller details for those who don’t want to compete and just want to have fun and get fit. As far as the thumb, the research really just says that you position is wherever it’s comfortable, with the optimal spacing being twice the boundary layer. And yes, squeezing the hand does cramp things up. Try the relaxed hand with sculling and you’ll notice the same thing. More movement of water, and a more relaxed hand.
So, I have been coaching relaxed hands for as long as I can remember, but find the actual entry point a trickier question. On my recent Senior Coach programme through Swim England, I was advised that a 40-degree angle was best, but I notice that you say a 90-degree angle is more efficient. At 90 degrees, wouldn't you need to ensure fingers were together at entry, relaxing during catch and pull? My swimmers "look" better with a 30-45 degree entry as they maintain the relaxed position, but I would be interested to hear your thoughts on this.
Hi Sandra, I think there’s some confusion here. The 90 degree angle of attack I described in the article really refers to the pull phase and not the entry phase. 90 degrees just means the hand is perpendicular to the line of pull. That makes sense. The studies didn’t look at the angle of entry at all. In fact, that issue would be fascinating. Most experts feel the entry phase should definitely involve a relaxed hand, and result in as few air bubbles as possible. In other words, slip the hand in with little splash. A study on the impact of fingers together or slightly apart during entry would be interesting.
Thanks, that does make sense. Traditionally teachers were instructed by Swim England to advocate a thumb-first entry, sliding the hand in, but that just makes the catch difficult and makes swimmers "snake" through the water, over-reaching across the centre line. I agree with the 90 degree angle on the catch entirely. It would be interesting to see studies on the desired hand entry point/angle. I am forever trying to minimise splash at entry point so this is a big question for my swimmers. Keep me posted if you have any thoughts or observations.
I envision a 90-degree entry as being much like using a pair of ice axes to pull yourself through the water – your hands being the business end of the axes and your forearms being the handles. Or, like the windmilling style arm movement but with the wrists bent at 90-degree angles. Would definitely be interesting to study! (But, the stroke I’m picturing is painfully awkward to imagine.)
Your response had me recall something I was looking at last week. First, in a conventional bent-elbow FR recovery, the hand enters the water before full extension, and then extends. You obviously want this small movement to have minimal resistance, and therefore the hand should be in line with the arm movement. The alternative would be to fully extend above the water and then drop down, but this puts huge loads on the shoulder for no advantage. (I’ve had swimmers try this and they always end up complaining of tight shoulders). BUT, a straight-arm recovery basically allows entry with an extended arm. Last week I was looking at some underwater videos of straight-arm FR hand entries, and the hand was slightly tilted down, I’m guessing at 20-30 degrees. I realized this enables an immediate and useful pull sequence, as opposed to a flat hand entry where time would be required to get into a useful pulling position. I can also imagine this is more efficient than a 90-degree hand entry as you’re giving up a tiny part of the initial pull sequence (moving hand from 30 degrees to 90 degrees). I have no idea what the ideal angle would be, but 0 degrees and 90 degrees intuitively seem wrong.
Yes, I can see where the differing angles would be useful on straight and bent arm recovery. I will do some work with my club and see what works best (I suspect that, as with all things, it will be slightly different for each swimmer). It's good to think laterally about these things and I enjoy focusing on small details with my swimmers so I can really get to the bottom of issues; it also helps the swimmers to understand their own styles and take ownership of their progress with an understanding of how the slightest adjustment can make a difference (good or bad). Good to talk with you and thanks for the input.
Source: https://www.foryouedu.com.hk/post/ib-physics-internal-assessment-tips-strategies-for-success (retrieved 2023-12-08)

The IB Physics Internal Assessment is a key component of the IB Physics program, and it can be a challenging and time-consuming process. However, with the right strategies and techniques, students can succeed in this task and achieve their best results.
In this blog post, we'll explore some effective IB Physics Internal Assessment tips that can help students achieve success.
Choose a Relevant and Manageable Topic
The first step in the IB Physics Internal Assessment is choosing a topic that is relevant and manageable. Students should choose a topic that is related to the IB Physics syllabus and that they have a genuine interest in. It's also important to choose a topic that can be completed within the time and resource constraints of the assessment.
Develop a Clear and Specific Research Question
The research question is the key focus of the IB Physics Internal Assessment. Students should develop a clear and specific research question that can be answered through experimental data. The research question should also be related to the topic chosen and demonstrate an understanding of the relevant concepts and theories.
Plan and Conduct the Experiment Carefully
The experiment is a crucial part of the IB Physics Internal Assessment. Students should plan the experiment carefully and ensure that it is conducted in a controlled and systematic manner. The data collected should be accurate and precise, and the experimental setup should be well-documented to support the analysis and evaluation.
Analyze and Evaluate the Data Effectively
The data analysis and evaluation are essential parts of the IB Physics Internal Assessment. Students should use appropriate statistical methods to analyze the data and evaluate the results against the research question. It's important to show a clear understanding of the relevant concepts and theories, and to provide detailed explanations and justifications for the conclusions drawn.
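As a concrete (and entirely hypothetical) illustration of what "appropriate statistical methods" can look like, here is a short sketch of fitting period-squared against length for a simple pendulum and reading off g with an uncertainty; the data values are invented.

```python
# Hypothetical example: estimate g from simple-pendulum data by fitting T^2 = (4*pi^2/g) * L.
# The measurements below are invented for illustration only.
import numpy as np

lengths_m = np.array([0.20, 0.40, 0.60, 0.80, 1.00])
periods_s = np.array([0.90, 1.27, 1.55, 1.80, 2.01])   # mean of several timed trials

x = lengths_m
y = periods_s ** 2

# Linear fit y = m*x + c, with a covariance matrix for the fit parameters
(m, c), cov = np.polyfit(x, y, 1, cov=True)
m_err = np.sqrt(cov[0, 0])

g = 4 * np.pi ** 2 / m
g_err = g * m_err / m          # propagate the slope uncertainty (g is proportional to 1/m)

print(f"slope = {m:.3f} +/- {m_err:.3f} s^2/m")
print(f"g = {g:.2f} +/- {g_err:.2f} m/s^2")
```

Presenting the fitted line alongside the raw data, and comparing the extracted value with the accepted one in the evaluation, is the kind of justification examiners look for.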
Communicate the Results and Findings Effectively
The final step in the IB Physics Internal Assessment is communicating the results and findings effectively. Students should use appropriate scientific language and terminology to describe the experiment, the data collected, and the conclusions drawn. The report should be well-structured and include relevant diagrams, graphs, and tables to support the analysis and evaluation.
The IB Physics Internal Assessment is a challenging and important component of the IB Physics program, but with the right strategies and techniques, students can achieve their best results. By choosing a relevant and manageable topic, developing a clear and specific research question, planning and conducting the experiment carefully, analyzing and evaluating the data effectively, and communicating the results and findings clearly, students can succeed in this task and achieve academic success.
At For You Education, we offer high-quality IB Physics tutoring and guidance on the IB Physics Internal Assessment. Our experienced tutors can provide students with the support and guidance they need to excel in this challenging course and achieve their academic goals.
Source: https://www.summitgolfcenter.com/flightscope (retrieved 2023-01-30)

Summit Golf Center Brings You
3D Doppler Ball Tracking Monitor, Golf Radar & Launch Monitor
What is FlightScope?
FlightScope is a 3D motion-tracking device that uses radar technology and advanced industrial electronics.
The patented phased tracking technology that is used in FlightScope radar units is the most advanced in the industry today.
FlightScope uses this technology to measure the 27 variables related to your Ball, Club or Swing.
The ball's entire trajectory after it has been launched is also tracked, which means you have exceptionally comprehensive ball measurement information instantly available, helping you to improve your golf swing.
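For context, the basic physics such radar launch monitors rely on is the Doppler shift (this is the generic relation, not a description of FlightScope's proprietary processing): a target moving with radial speed $v$ shifts the reflected frequency by approximately

$$\Delta f \approx \frac{2 v f_0}{c}, \qquad\text{so}\qquad v \approx \frac{c\,\Delta f}{2 f_0},$$

where $f_0$ is the transmitted frequency and $c$ is the speed of light. Tracking that shift over time, and across multiple receive antennas in a phased-array system, is what allows club and ball speeds and the full 3D trajectory to be reconstructed.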
FlightScope is a Registered Trademark of FlightScope and informational data is a Copyright of FlightScope 2009
Vertical Launch Angle
Horizontal Launch Angle
Vertical Descent Angle
Club Speed Profile
Club Acceleration Profile
Face to Path
Angle of Attack
Vertical Swing Plane
Horizontal Swing Plane
To learn more about FlightScope, click here.
https://maclifeonline.com/reviews/apple-airpower-breaks-usual-charging-standards/ | 2021-07-31T11:16:56 | s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154089.6/warc/CC-MAIN-20210731105716-20210731135716-00062.warc.gz | 0.944275 | 652 | CC-MAIN-2021-31 | webtext-fineweb__CC-MAIN-2021-31__0__131106437 | en | Are you tired of cleaning up the mess of charging cables for your gadgets? Then this technical novelty from Apple is the right thing for you. So, what exactly is AirPower? It is a charging pad that lets you charge your iPhone wirelessly. As you can see, Apple is doing everything possible to give its users a wireless future. You no longer have to plug in your iPhone: just put it on the AirPower pad to start charging. The only thing you still have to plug into a power outlet is the AirPower pad itself. Everything is pretty simple, and the main advantage is that you can charge all your devices anywhere.
The principle of AirPower's work
AirPower works by induction: an electromagnetic field transfers power from the charging pad to your device. Keep in mind that only three iPhone models can be charged with AirPower: the iPhone 8, iPhone 8 Plus and iPhone X, since these are the models with the hardware support required for wireless charging. While charging on AirPower, your devices display a dedicated charging interface.
Other devices compatible with AirPower
AirPower can work with the Apple Watch Series 2, the new Apple Watch Series 3, the Apple Watch Edition, the original Apple Watch, the Apple Watch Nike+ and the Apple Watch Hermes. AirPods are also compatible with this wireless charger but they require the new AirPod case that was introduced by Apple at the iPhone X Event. However, current iPads and MacBook laptops are not compatible with AirPower.
Wireless charging capabilities that will make life easier
Let's take a look at AirPower's build and design. It is a large white mat that can charge three devices at the same time, and it doesn't matter where on the pad you place them. A single power lead connects the pad to a wall outlet. What else is impressive? First of all, the AirPower charger is built on the Qi wireless charging standard. Another important feature is its ability to manage charging intelligently: if you charge three devices at once, they can "communicate and decide" how much power each one requires. The new charging pad also supports fast charging.
So, as you can see, you will be able to charge everything you need without any cables. AirPower solves the main inconveniences of current wireless charging, such as support for only one device at a time and the need for precise placement. It is a new and different way to power your gadgets: just take advantage of this multi-device charging station and forget about outlets and cords.
AirPower is more than just a wireless charging solution
AirPower is definitely much better than all of Apple's previous attempts. It is a well-suited technological answer to changing wireless charging needs: no other wireless charging system can manage power consumption across the devices it charges. It is high time to trade old-fashioned Qi chargers for something new. Just wait until the AirPower wireless charging pad is released and try it out.
https://egel.kaust.edu.sa/publications/detail/conf---characterization-of-collapsible-soils-with-combined-geophysical-and-penetration-testing | 2021-06-21T06:58:18 | s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488268274.66/warc/CC-MAIN-20210621055537-20210621085537-00357.warc.gz | 0.846934 | 251 | CC-MAIN-2021-25 | webtext-fineweb__CC-MAIN-2021-25__0__122256982 | en | Characterization of Collapsible Soils with Combined Geophysical and Penetration Testing
by Victor A. Rinaldi, J. Carlos Santamarina and Emilio R. Redolfi
Rinaldi, V., Santamarina, J. C., and Redolfi, E. (1998). "Characterization of Collapsible Soils with Combined Geophysical and Penetration Testing." First International Conference on Site Characterization ISC '98, pp. 581-588.
Loess is characterized by an open structure of fine volcanic sand and silt particles with connecting clay bridges and buttresses at contacts. At low moisture content, the formation presents high stiffness and shear strength. When the moisture content increases, the soil structure undergoes a sudden volume collapse. This experimental study of Argentinean loess includes laboratory tests (index properties, shear wave velocity, permittivity and conductivity) and field tests (CPT, SPT and down-hole seismic). Micro-level analytical models of electrical forces, suction and cementation are developed to gain insight into the observed behavior. It is shown that geophysical methods present significant advantages that are complementary to standard field measurements for the characterization of loess deposits. | physics |
https://qmsci.wordpress.com/2012/12/07/science-its-certainly-a-kind-of-magic/ | 2020-05-24T22:15:44 | s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347385193.5/warc/CC-MAIN-20200524210325-20200525000325-00253.warc.gz | 0.971206 | 1,014 | CC-MAIN-2020-24 | webtext-fineweb__CC-MAIN-2020-24__0__183248312 | en | Scientists solve mysteries, magicians and conjurers create them. Perhaps surprisingly, these two seemingly different groups also have much in common. In ancient Greece when the high priests of the temple wanted to prove the powers of the Gods, they would wheel out the statue of a bronze horse. Offered a bowl of water, the horse statue would drink and the water would be seen to flow through the horse and then out at the back, where it was expected. Miraculous enough to have a metal horse that drank, but then dramatically the priest would pass a sword right through the neck of the drinking horse. Not only did the horse still drink, but the head remained attached to the body – the power of the ancient Gods was truly great. Except of course it wasn't; the horse statue was an amazing bit of magical engineering created by mathematician and engineer Hero (AKA Heron) of Alexandria. The horse contained clever pronged cog wheels that held the head in place but also allowed the sword to pass through. As the sword passed through the series of cogs, the neck would click around, allowing the sword to pass but ensuring that there were always some in place able to hold the head on. As for the drinking, a clever ratchet mechanism ensured that as the sword passed, it pulled a tube away and then pushed it back when the sword had passed to allow the drinking to continue.
Fast forward to Jean Eugène Robert-Houdin, a famous French magician in the nineteenth century. Robert-Houdin was what we could call today an ‘early adopter’; he loved playing with the new technology of the time and was one of the first to have electric lights in his home in France. He also used clever science to create some of his magic. One of his most famous tricks was the light and heavy chest, in which a wooden box was first easy to lift and then, at the snap of a finger, too heavy to lift. The trick worked using electromagnetism which, in his day, was a new and largely unknown technology. A metal plate was hidden in the chest and an electromagnet hidden in the floor below. When the magnet was on, it attracted the metal plate and so the box couldn’t be moved. Napoleon III even convinced Robert-Houdin to use his ‘magical powers’ to help quell an uprising in Algeria. Local tribesmen were being led by shamans professing magical powers. Robert-Houdin used his light and heavy chest trick to prove that French magic was stronger and so helped discredit the local leaders.
Magic is always being reinvented as new bits of science and engineering come along. There is a classic trick called grandmother’s necklace where two ropes are used to tie beads into a seemingly inescapable loop, yet the ropes seem to pass right through them. This trick has its origins in the first ever book to describe how magic could be faked using science.
In 1584, Reginald Scott published his infamous book 'The Discoverie of Witchcraft, wherein the Lewde dealing of Witches and Witchmongers is notablie detected', exposing the tricks of medieval witchcraft. The trick involves making the audience believe that two ropes both pass through the beads, while in fact the ropes are loops joined together at the center then fed through the beads. This means that when a simple knot is tied in the front, and the ends of the ropes pulled tight, the central link breaks, releasing the beads and making it look like the ropes have passed through. It works by cleverly hidden maths; the topology is not what it seems. The modern version of this classic can be done using rare earth magnets which are small and incredibly powerful – they're hidden in the ropes, which can be examined and then secretly linked into their two connected loops for the trick to work. New forms of magic have even been performed in space; a number of astronauts have been amateur magicians, but in 2008 video game pioneer and space explorer Richard Garriott, a lifelong collector of magical props, performed the first full magic show on board the International Space Station. Working with experts including us at Queen Mary, he created new effects that would work in the microgravity of low Earth orbit.
Throughout history, new types of technology and hidden science have been used to make the impossible possible. There are careers for scientists and engineers in creating new professional magic effects, and loads of fun in using science and engineering to entertain and fool your friends. If you're interested in exploring more, illusioneering.org (created here at Queen Mary) has a range of clever magic effects, including links to the space magic, that are all based on science like chemistry, physics, mathematics and engineering. Enjoy, but remember it's a secret, and don't break the magician's code.
Originally published January 2012
Article author: Peter McOwan | physics |
http://www.riederalp.ch/sites/en/bettmeralp/destination/grosser_aletschgletscher.html | 2013-06-19T10:03:14 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708664942/warc/CC-MAIN-20130516125104-00043-ip-10-60-113-184.ec2.internal.warc.gz | 0.940461 | 1,563 | CC-MAIN-2013-20 | webtext-fineweb__CC-MAIN-2013-20__0__63049652 | en | Great Aletsch Glacier
A PHENOMENON WITHOUT COMPARISON
No matter whether you are at Moosfluh, Bettmerhorn or Eggishorn, the view of the Aletsch Glacier is unique: you can admire the glacier from up above with a gorgeous view of the enormous stream of ice. This is quite unusual because anywhere else in the Alps you usually have to look up to a glacier. Another impressive fact about the Aletsch Glacier is its length: at about 23 kilometers (14¼ miles), the Aletsch Glacier is the longest stream of ice in the Alps. The catchment area in the Jungfrau region lies at about 4,000 m (13,124 ft) above sea level; the glacier cave in the Massa gorge is about 2,500 m (8,202 ft) lower.
The surface of the entire ice flow amounts to 86 square kilometers (about 33 square miles); the Konkordiaplatz itself would be large enough to build a medium-sized Swiss town like Chur, Bellinzona or Frauenfeld. Just as impressive as the length is the depth of the ice. Scientists from ETH Zurich (the Swiss Federal Institute of Technology at Zurich) have measured a depth of 900 meters (2,952 ft) at the Konkordiaplatz. The weight of the ice is calculated at 27 billion tons, which is equal to the weight of 72.5 million jumbo jets! If we were to melt the glacier, the meltwater would suffice to provide every human being on earth with one liter of water a day for six years. The Aletsch Glacier is indeed a phenomenon without comparison!
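Two of these figures can be sanity-checked with a few lines of arithmetic. The snippet below is a hypothetical check that assumes a fully loaded jumbo jet weighs roughly 400 tonnes, a figure not given on this page.

```python
# Rough sanity check of two of the glacier figures quoted above (assumptions noted inline).
area_km2 = 86
print(f"Area: {area_km2 * 0.3861:.0f} square miles")     # 1 km^2 is about 0.3861 sq mi, so about 33 sq mi

ice_mass_tonnes = 27e9                                     # 27 billion tons of ice
jumbo_jets = 72.5e6                                        # 72.5 million jumbo jets
print(f"Per jet: {ice_mass_tonnes / jumbo_jets:.0f} tonnes")  # about 372 t, close to an assumed 400 t per loaded 747
```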
The ice of the Great Aletsch Glacier comes mainly from three large firn fields in the Jungfrau region: the Aletschfirn, the Jungfraufirn and the Ewigschneefeldfirn. This entire firn region is also called the accumulation area, because it is where the glacier gets nourished with new ice. At this altitude, precipitation falls almost all year round in the form of snow. Pressure and temperature fluctuation gradually convert this into firn snow, firn ice and eventually into dense glacial ice. Under the weight of the continuously newly formed ice mass, gravity forces the glacier to flow slowly downhill. "Slowly" is a rather relative term to describe this phenomenon. In fact, the glacier is moving continuously, and at the altitude of the Konkordia hut its velocity reaches about 200 meters (656 ft) per year, which amounts to half a meter (20 inches) per day. At the Aletsch forest its velocity still amounts to about 80-90 meters (262-295 ft) per year.
MEDIAL MORAINES – THE GLACIER'S TYPICAL APPEARANCE
Two dark stripes mark the glacier's surface. They stretch almost along the entire length of the glacier and capture the attention of passing hikers. These stripes are called medial moraines and are formed when two glaciers merge. The lateral moraines of each of the converging ice streams flow together and find themselves in the middle of the glacier, forming a medial moraine. Two large medial moraines are formed at the Konkordiaplatz where three firn fields meet. These two dark lines give the Aletsch Glacier its typical appearance. Medial moraines mainly consist of till and boulders, which gradually rise to the surface as the glacier melts. At the terminus of the glacier where the melting is strongest due to the higher temperatures, the medial moraines are at their most distinctive. This area is also called the ablation zone of the glacier. Here, you will also find typical phenomena caused by the melting of the ice, for example the impressive glacier tables or the equally fascinating dirt cones.
HISTORY OF THE GLACIER
The difference between the ice that forms in the accumulation area and the ice that melts in the ablation area determines whether the glacier is expanding or receding. Various surveys have shown that during the last Ice Age the Aletsch Glacier was much larger than it is today. Then, 18,000 years ago, even the mountain ridge between Bettmerhorn and Riederhorn was covered with ice. Only the peaks of the Bettmerhorn and Eggishorn or Sparrhorn on the opposite side and the Fusshörner rose above the giant expanse of ice. At close quarters, these bizarre, jagged mountains stand out against the other mountains in the remaining areas, which were smoothed by the glacier's movement and therefore appear much more rounded. After retreating strongly for some time, the glaciers expanded immensely at the end of the last Ice Age (about 11,000 years ago). At that time the snout of the Aletsch Glacier lay in the Rhone valley and its edge reached almost up to Riederfurka. Then, the Aletsch Glacier developed a mighty lateral moraine which is still visible today at the "Moränenweg" (moraine trail) in the Aletsch forest.
Since the last Ice Age, the Aletsch Glacier has not retreated continuously. In fact, due to minor climate changes, it expanded several times and advanced to its maximum extension most recently around 1860. At that time the glacier was about three kilometers (1.86 miles) longer and its edge lay close to the Aletsch forest, about 200 meters (656 ft) higher than today. The glacier's last maximum extension is still visible in the landscape today: at both sides, a distinctive broad line of lighter material along the glacier can clearly be distinguished from the vegetation above. Those light lines house some fairly young vegetation, which has formed in the past several decades.
THE ALETSCH GLACIER – A VICTIM OF GLOBAL WARMING
Retreating by up to 50 meters (164 ft) a year, the Great Aletsch Glacier has been dramatically affected by ablation in recent years. Nevertheless, it is still the longest stream of ice in the Alps, at 23 kilometers (14¼ miles). However, the rapid ablation is preoccupying above all the people who study the giant ice mass every day. The employees of the Pro Natura Center Aletsch at Riederalp, for example, have observed in recent years not only a tremendous retreat in terms of length but also at the glacier's lateral edges. Local mountain guides confirm these observations. In the past few years, they have had to find a new access path to the glacier, as the old path was not accessible anymore due to ablation.
IMPRESSIVE GLACIER TOURS
Yet, the receding of a glacier is nothing unusual. In the course of time, glaciers have always retreated and then extended later on. However, the rapid retreat presently being observed gives cause for concern. Here, the much-cited climate change is leaving its mark in quite significant proportions. Nevertheless, the Aletsch Glacier remains an important attraction in the local area. And certainly one of the most impressive experiences is a guided tour on the glacier.
INFORMATION ON THE GREAT ALETSCH GLACIER
LITERATURE ON THIS TOPIC
Laudo Albrecht: Aletsch – eine Landschaft erzählt (Aletsch – a landscape tells its story). Fourth book in the series "Die Reichtümer der Natur im Wallis" (The wealth of nature in Valais). Rotten Verlags AG Visp, 1997 | physics |
http://www.wonderfulwaterloo.com/archive/index.php/t-251.html | 2013-05-19T15:02:12 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697745221/warc/CC-MAIN-20130516094905-00069-ip-10-60-113-184.ec2.internal.warc.gz | 0.954391 | 13,843 | CC-MAIN-2013-20 | webtext-fineweb__CC-MAIN-2013-20__0__184707271 | en | View Full Version : Stephen Hawking @ Perimeter Institute | Gone Until Spring 2011
02-09-2010, 05:27 PM
Stephen Hawking coming to Perimeter Institute
World renowned physicist will be coming to the Perimeter Institute in Waterloo in June and July 2010
02-09-2010, 05:28 PM
Hawking here in June, July
February 05, 2010
WATERLOO – The Perimeter Institute is getting ready for the summer of Stephen Hawking.
Hawking, perhaps the world’s most renowned physicist, has confirmed he will visit the local institute for theoretical physics in June and July. While here, he’ll focus on scientific research and collaborations with other physicists.
The retired professor is considered a hall-of-famer by his scientific peers.
“He really is an absolutely first-rate physicist. He’s made really big contributions understanding how the universe works at the most fundamental level,” said Damian Pope, who studies quantum computing and helps explain theoretical physics to the public.
“For him to come along and collaborate, get some back and forth in person, is a very exciting thing.”
Pope points to Hawking’s celebrated prediction that black holes in space should emit radiation. This theoretical insight helped bridge the gap between competing theories about how the universe works, for objects big and small.
“It was a huge achievement,” said Pope. Like an artist or a playwright, Hawking was able to “take two very different things and kind of combine them in a very clever way.”
Hawking will take part in a televised lecture as part of an outreach series broadcast across Canada on TVO. That presentation will air Sunday, June 20.
The physicist, 68, recently retired from England’s Cambridge University. He had planned an earlier visit to the Waterloo institute, but the trip was postponed due to illness.
Perimeter director Neil Turok hopes Hawking’s summer visit will be the first of many. Hawking holds a distinguished research chair at the institute, which is currently building a Stephen Hawking wing.
“He is an exceptional communicator, whether to other scientists or to the wider public,” Turok said in a statement.
05-09-2010, 10:02 AM
I Love Science Video Contest
About the Contest
Perimeter Institute wants to hear from Canadian high school (CEGEP in Quebec) students who are passionate about science. Produce a 30 second or less video on why you love science. Then, submit your video and register online for a chance to win an all expenses paid round-trip to Hawking at the Perimeter (http://www.perimeterinstitute.ca/News/In_The_Media/Perimeter_Institute_Launches_Youth_Video_Contest_for_Chance_to_Attend_Hawking_at_the_Perimeter/) on June 20, 2010.
For full entry, eligibility, rules, and other details, see the Official Contest Rules (http://www.perimeterinstitute.ca/Outreach/Students/Official_contest_rules/).
Check back here for contest updates, featured videos, and other news.
How To Enter (http://www.perimeterinstitute.ca/Outreach/Students/how_to_enter/)
Who: The contest is open to all Canadian students in grades 9 through 12 (CEGEP in Quebec), who are between the ages of 14 and 19, and are enrolled full-time in a Canadian public, private or home school. Students must be Canadian citizens or legal residents living within Canada or its territories.
When: Submit your entry for the contest between Saturday, April 10 and Monday, May 10, 2010.
05-09-2010, 10:05 AM
Hawking at the Perimeter – Special Broadcast for Canadians
Prof. Hawking’s upcoming research visit and numerous connections to PI were highlighted here (http://www.perimeterinstitute.ca/News/In_The_Media/Prof._Stephen_Hawking_to_Visit_Perimeter_Institute_this_Summer) this past February. During his two month stay, Prof. Hawking will also provide a special outreach broadcast for Canadians in which he will share numerous insights on space, time, matter, and information – plus his views on how science might shape our world.
This televised address is made possible through partnership with TVO, which will air “Hawking at the Perimeter” on Sunday, June 20, 2010 at 8:00 pm EDT. TVO is accessible across Canada on Bell TV channel 265 or Shaw Direct/Star Choice channel 353. You will also find TVO on channel 2 via cable or over-the-air in most areas of Ontario (check local listings).
05-09-2010, 10:46 AM
PI, as you probably know, holds monthly public lectures at Waterloo Collegiate. While these lectures are free, demand is so great that the tickets "sell out" within minutes of becoming available on the PI website and are always packed. I had expected Prof Hawking to be the featured speaker at the June lecture, however, as announced at May's lecture last week and confirmed in the link above, the public won't be invited to his June 20th lecture. I guess there will be too many VIPs, Youth Video Contest winners and such to fill the studio audience. Oh well...
BTW Prof Hawking has just completed a TV series called "Into The Universe (http://www.discoverychannel.ca/article.aspx?aid=26058)" that airs in Canada at the end of May. It's aired in the UK already and I couldn't wait ;) In episode 2, which is on time travel, watch for the invitation to attend a reception at "The University of Cambridge, Perimeter Institute for Theoretical Physics, Location 52° 12' 29" N, 0° 7' 21" E." The coordinates are in Cambridge. I wonder if anyone has informed Mike Lazaridis ;)
05-13-2010, 08:35 AM
Plans on track for Stephen Hawking visit
May 12, 2010
By Brian Caldwell, Record staff
WATERLOO — The countdown is on for the man who wrote A Brief History of Time.
Stephen Hawking — bestselling author and renowned British physicist — is due to arrive in less than a month for his first visit to the Perimeter Institute in Waterloo.
“I know people are thrilled he’s going to be here and about the possibility of collaborating with him,” said John Matlock, spokesperson for the research centre. “It should be a fruitful period for everyone involved.”
Hawking, 68, is scheduled to spend June and July at the institute doing research, working with other scientists, attending a top-level conference and broadcasting a public lecture.
Officials hope it’ll be the first of many stays for Hawking, who holds a distinguished research chair at the institute and will have a new $30 million wing named after him.
Illness derailed an expected visit last summer and there has been speculation — all ruled out by the institute — that he might move to Waterloo permanently.
Hawking will be on hand when leading scientists gather for four days in mid-June for a private conference called Cosmological Frontiers in Fundamental Physics.
“There’ll be a lot of great minds from around the world here that same week,” Matlock said.
A few days later, on June 20, Hawking will broaden the audience base in a lecture on TVO about time, space and matter, and stressing the importance of pure science for the future.
The 90-minute presentation will be taped at the institute in the afternoon and shown on television starting at 8 p.m.
Hawking is esteemed for contributions in areas of physics including the properties of black holes and quantum gravity theories of the origin of the universe.
Diagnosed with a degenerative disorder called ALS when he was 21, he is almost entirely paralyzed, but communicates using an electronic voice synthesizer.
Hawking made news recently by saying he believes there is intelligent extraterrestrial life and that aliens could be hostile if they ever came to Earth.
06-05-2010, 05:10 AM
Prof. Stephen Hawking to be Officially Welcomed at Perimeter Institute on June 20th, 2010
WATERLOO, ON | June 5, 2010 | CNW
Professor Stephen Hawking is a Distinguished Research Chair at Canada's Perimeter Institute for Theoretical Physics (PI) and, on Sunday June 20th, 2010, he will be officially welcomed to Canada by the Honourable Tony Clement, Industry Canada Minister, and to the province by the Honourable Dalton McGuinty, Premier of Ontario. The greetings will be followed with a special presentation by Prof. Hawking. The activities will be broadcast on TVO that same evening.
Dr. Neil Turok, PI Director, said "We are very happy to have Stephen here doing science with other researchers at Perimeter Institute. On June 20th he will take time out to be welcomed by our many public and private partners, including the governments of Ontario and Canada, and to give a special broadcast lecture. Stephen is an exceptional communicator, and we are delighted to be able to share his talk on television. We are also looking forward to his impressions of the 'Stephen Hawking Centre at Perimeter Institute' now under construction."
This past October, when the expansion to PI's facility was named in his honour, Professor Hawking said, "Our field of theoretical physics has been the most successful and cost-effective in all of science. Where would we be today without Newton, Maxwell and Einstein? Many great challenges lie ahead. Where this new understanding will lead, is impossible to say for sure. What we can say with confidence is that expanding the perimeter of our knowledge will be the key to our future."
About June 20th Events
Official greetings will take place on June 20, at 4:00pm at the Institute. Prof. Stephen Hawking will be met by the Honourable Tony Clement, Industry Canada Minister, and by the Honourable Dalton McGuinty, Premier of Ontario.
Also taking part will be Mike Lazaridis, PI Board Chair, and Dr. Neil Turok, PI Director, who will share news on the Expanding the Perimeter initiative and the Stephen Hawking Centre at Perimeter Institute. The formal greetings and information about PI will be followed by a special lecture from Prof. Hawking on topics involving space, time, matter and his life in science.
As Prof. Hawking will be conducting private research activities during his visit, the June 20th activity is his only scheduled appearance. Media members wishing to attend or seeking images of Prof. Hawking at Perimeter should contact Lisa Lambert, PI Communication's Coordinator (contact info below).
For further information: Lisa Lambert | [email protected]
06-05-2010, 09:07 AM
Mr. Universe: Stephen Hawking has arrived in Waterloo Region (http://news.therecord.com/News/Local/article/723119)
Most of us bet on such things as hockey or horses. Stephen Hawking bets on black holes, the Big Bang and the world's largest atom-smasher.
But Hawking of course, isn't like most of us. His 68-year-old mind is focused on infinitely bigger things, like where the universe came from, what time is and if the universe will ever come to an end.
That's why Hawking is the kind of person that physics institutes around the world would love to have criticizing the equations on their blackboards.
So you can wager his arrival in here by way of private jet Friday is a major coup for the Perimeter Institute for Theoretical Physics, an institution that less than 10 years ago was based out of an old post office in downtown Waterloo...
06-05-2010, 05:17 PM
Hawking Night in Canada
Sunday June 20, 8pm • Bell TV channel 265 • Shaw Direct channel 353 • Channel 2 in most areas of Ontario (http://www.perimeterinstitute.ca/News/In_The_Media/Special_Televised_Event_with_Prof._Stephen_Hawking_to_be_Seen_Across_Canada)
06-07-2010, 09:30 AM
Hawking’s first order of business in Waterloo: check his email
June 06, 2010
WATERLOO – It was straight to business for legendary British physicist Stephen Hawking as he started work alongside researchers studying in Canada.
“Stephen works all the time,” chuckled Neil Turok, the director of the Perimeter Institute for Theoretical Physics, where Hawking will spend the next six weeks explaining and critiquing ideas.
Hawking arrived on Friday, but despite a long flight, Turok said the renowned physicist was itching to get to work.
“Just after he arrived, we were trying to get the wireless to work at home, he really wanted to check his email,” laughed Turok.
“He’s completely determined to work at all costs,” he added.
Hawking will be sharing ideas with the researchers at the institute for several weeks. This process will help refine understanding of important concepts.
While Hawking is known for his great discoveries and new insights, Turok was reluctant to put that kind pressure on his famous friend and colleague.
“He sets a standard of achieving breakthroughs that we want to live by,” said Turok.
Turok said he hopes Hawking’s presence will inspire the young researchers in Waterloo.
The institute says it will hold an official welcoming ceremony for Hawking on June 20, when he will be greeted by Premier Dalton McGuinty and others.
The ceremony will be followed by a special presentation by Hawking.
Hawking accepted a research post with the institute in 2008.
He was to have visited the southwestern Ontario institute last summer, but illness forced him to cancel.
Last October, the institute announced the establishment of the Stephen Hawking Centre at Perimeter Institute in his honour.
Hawking is best known for his work explaining the physics of black holes. In the 1960s, he developed a degenerative disease that has impaired his ability to move and speak. He retired from Cambridge University in England last year.
The Perimeter Institute is a research centre devoted to theoretical physics that was founded in 1999 by Research In Motion co-CEO Mike Lazaridis.
The Canadian Press
Great to see the Canadian Press following the story, should give some good press to Waterloo. Too bad not the Region as a whole though. Oh well.
06-11-2010, 02:13 AM
More positive press about the new man about town...
A brief history of Stephen Hawking’s time in Waterloo (http://www.theglobeandmail.com/news/national/a-brief-history-of-stephen-hawkings-time-in-waterloo/article1600096/)
Stephen Hawking, the world’s most recognizable scientist, is at Waterloo’s Perimeter Institute for Theoretical Physics for the next six weeks to collaborate on research, but he’s also been spotted about town since his arrival on the weekend.
“Stephen is not a shrinking violet and he likes to lead a normal life,” said Neil Turok, director of the Perimeter Institute, who worked closely with the famed physicist at Cambridge University...
The pair went out to dinner on Saturday, Dr. Hawking’s first full day in Canada, and when it came time to pay, discovered their tab had been picked up by a patron who did not leave his name. “That was really a wonderful welcome for Stephen,” Dr. Turok said. “This would not happen everywhere.”
When the two scientists went to a local park to look at the new wing of the Perimeter Institute named after the famous cosmologist, Dr. Hawking’s motorized wheelchair attracted the attention of children playing there. One boy, about six or seven, shouted out: “That’s Stephen Hawking,” Dr. Turok said. “They ran over and I think he really enjoyed that.”
Dr. Hawking, he said, has a “joke line,” preprogrammed into his computer that he uses sometimes when people ask if he is the famous scientist.
“I am frequently mistaken for him,” is the favourite comeback of the man who spends most of his time contemplating the beginning of the universe.
Since his arrival, Dr. Hawking also has made a trip to the bustling St. Jacob’s Market, a must-see attraction just outside town that is known for its Mennonite vendors and local crafts and produce.
“He has a van and he is running around and he will be seen all over the place,” Dr. Turok said. “He’s got a team of five people to look after him and they drive him around.”...
06-12-2010, 08:20 AM
We’re trying to figure out how the world works
June 12, 2010
By Greg Mercer, Record staff
WATERLOO — Rob Myers is sitting on a leather chair, talking about a holographic dimension where gravity doesn’t exist.
This could be a strange conversation, in any other setting. But this is the Perimeter Institute for Theoretical Physics, a place where black boards are covered in the mathematical language of the cosmos and people talk about time travel and alternate realities as if it were last night’s hockey game.
Down the hallway, you can hear the din of construction crews building the Stephen Hawking Centre, a 55,000-square foot addition that will make room for up to 250 researchers, tripling the institute’s size. As Myers goes on, the conversation turns to black holes and the search for one simple equation just might help unlock the secrets of the universe, like a key to a book that explains everything.
Soon you begin to think: what is this place? This, of course, is what they call Shangri-La for physicists. And Myers, one of the world’s most renowned experts in the work-in-progress science of string theory, has pretty much been here since the institution’s Big Bang back in 2000.
Back then, there wasn’t even a chair for him to sit in yet.
“When I got here, they said ‘we really don’t have an office for you. Could you go work at home?’” he said. “To go from an unknown entity in five or ten years to a place people talk about, it’s pretty exciting. It’s been quite a ride.”
Quite a ride indeed. When the Hawking wing is finished in the fall of 2011, there will be no other building in the world with so many theoretical physicists under one roof. Perimeter has put Waterloo on the map as a place for boundary-pushing physics, competing head-on with giants like Harvard, Stanford and Cambridge for some of the world’s most sought-after scientists.
It’s become the kind of place theoretical physicists love to be at and hate to leave — people like cosmologist Mark Wyman, 30, who knew from the moment he read Hawking’s seminal book, A Brief History of Time, that he wanted to study the universe. The Lake Charles, Louisiana native left Cornell University to pursue his post-doctoral research here, and was floored to find an institute that was built from the ground up to inspire deep-thinking physicists.
“I was like ‘is this place for real’? I was incredulous,” Wyman said. “It’s like a resort spa for physicists. But instead of a nice beach, there’s all these beautiful equations.”
Now he’s headed to the University of Chicago to continue his work in a few months, and will have to go “back to reality,” as he puts it. No more bistros and fireplace black boards and string quartets outside your door.
“I’m told this is a bad place to start your career because everything else just seems lame by comparison,” Wyman said.
The people working at Perimeter are a peculiar breed of scientist. Lights blink on at all hours of the night here as sleepless theorists plug away at their latest mathematical puzzle. Myers has had ‘Eureka’ moments while driving home on a sunny day after dropping his kids off at the pool. Wyman said it’s not uncommon to wake up at 2 a.m. with equations running through his head.
“Your mind gets obsessed with these questions,” Wyman said. “We all want to be a part those breakthroughs that change our understanding of everything. We want that very badly.”
What makes Perimeter unique is that it brings together specialists from different schools of thought in theoretical physics, and gets them sharing ideas. That’s led researchers to apply new approaches to their work, opening up new ways to solve theoretical problems, Myers said.
“In this building, you hear at the lunch table next to you words like ‘quantum entanglement.’ These ideas are always percolating around in your head, and there’s a real opportunity for a fertilization of ideas between fields,” Myers said. “Everyday you’re exposed to something foreign, you’re learning something new.”
There’s still a lot more questions than answers. No surprise, considering some 96 per cent of our universe is of unknown material — so-called dark matter, said Damian Pope, senior manager of scientific outreach for Perimeter.
“To a physicist, that’s like a drug. It’s exciting. It’s why they get up in the morning,” Pope said. “Part of what it is to be human is to look up at the night’s sky and wonder and have curiosity.”
So the 90 researchers and graduate students here keep plugging away, quietly looking for physics’ Holy Grail hidden somewhere in the equations scrawled across their black boards. Another 1,000 visiting researchers pass through the institute every year, theorizing, testing, looking for answers.
They’ve been at it since 2000, but time is relative at Perimeter. That’s why some have wondered if the institute’s 100-year land lease from the City of Waterloo is long enough to make the big discovery they’re after.
They spend weeks on a single equation that can stretch across two blackboards like hieroglyphics on an Egyptian tomb. It’s a language all its own, and it’s incredibly complicated, head-spinning stuff. But theorists like Wyman say it’s also beautifully simple.
“We’re trying to figure out how the world works,” he said.
Some people driving by the concrete and glass institute perched next to Silver Lake, a building so strange it looks like it fell right out of the sky, must ask: what exactly are they doing in there? And why do they need to spend so many millions doing it?
To answer those questions, the man who dreamt up the place, Mike Lazaridis, likes to look back at another era.
In the early 1900s, the father of modern physics, Albert Einstein, was trying to convince universities to hire him on and provide funding for his research. But they couldn’t see how Einstein’s weird ideas on space and time and light mattered to the bustling planet around them.
“At that time, people just like us couldn’t see how life could get any better. Things were going like crazy . . . they think they understand everything. And yet, there are these little cracks in the firmament,” Lazaridis, the RIM founder and co-CEO, told the crowd at the Quantum to Cosmos Festival last fall.
Einstein had to find a job in a patent office, and on his own time did his theoretical work that would change our perception of reality — and our world — dramatically. He’d eventually unlock the secrets of the atom bomb and the stars, of space warps, the Big Bang and black holes. He gave us a better understanding of the laws of nature and how the universe works, which set off a generation of scientists in new, incredible directions.
The point, Lazaridis says, is we can’t fathom today how theoretical physics can form the foundation for new technologies that benefit society tomorrow. It’s already been at the root of inventions from lasers to transistors to magnetic resonance imaging and countless others, all things that no one thought possible 100 years ago.
It’s this deep faith in the power of theoretical physics that led Lazaridis to an Italian restaurant in a Toronto strip mall in 1999, where he met a young physicist named Howard Burton. At that meeting, he told Burton, who would become Perimeter’s founding director, about this wild idea he had: he wanted to create an institute where really smart people would gather and ponder the universe. And he wanted them to do it in Waterloo, of all places.
“The more I listened, the more confused I became. Some of what he said struck me as naïve, some of it perhaps just plain crazy. But other parts were eminently reasonable, even insightful,” Burton recalled in his memoir, First Principles: The Crazy Business of Doing Serious Science.
Burton didn’t know what to think. Lazaridis was talking feverishly about building a place where physicists, cosmologists, string theorists and other sages could be free to pursue pure research into the minute mysteries of the laws of nature. A place where they’d spend all their time thinking about the fundamental underpinnings of physics — things like time, gravity, space, light and matter.
That may sound pretty esoteric, but Lazaridis wasn’t planning to hand over $100-million from his personal wealth so a bunch of geeks could play with blackboards all day. He sincerely believes the research going on inside the strange building on Caroline Street can lead to revolutionary discoveries that can change our world. That’s why his contribution to the institute he created has swollen to $170 million in the past decade.
And though taxpayers have kicked in another $175 million through their provincial and federal governments, Perimeter officials like to stress that their kind of research is relatively low-cost. Most physicists here just need some chalk, a computer and space to think.
Still, Lazaridis wants real, mind-bending breakthroughs. Discoveries like those in the 1870s, which found that combining magnetism and electricity could produce radio waves and power generation. Or Einstein’s combination of mass and energy to unlock the secrets of nuclear power. Or discovering that light can be both a wave and a particle, leading to the semiconductor revolution, which is the foundation for everything computerized we use today.
“All of these practical technologies, where did they come from? They came from a bunch of impractical people coming up with all these crazy new theories,” Pope said.
What many of the physicists at Perimeter are really after is the same thing Einstein tried to find but couldn’t: the so-called Holy Grail, a single “theory of everything” that can explain how and why the universe began. Right now, much of their time is spent trying to unify the two dominant theories in physics of the past century — relativity, which predicts how big objects like planets interact, and quantum theory, which explains the bizarre behaviour of particles at the atomic and subatomic level.
A single theory of all physical things would allow us to “read the mind of God,” says cosmologist Stephen Hawking, who just happens to be working at the institute this summer. He’s already on record predicting “exciting discoveries will be made” at Perimeter. That’s pretty heady stuff, especially for an institute that only existed in Lazaridis’ mind until little over 10 years ago.
The only thing Perimeter may need is time. It has the money. It has the brains, lured from some of the most prestigious institutes in the world. Thanks to Hawking, it has the star power. But for now, that magical, big breakthrough remains elusive.
06-21-2010, 02:42 AM
The Record: Hawking predicts ‘grand things’ from Perimeter (http://news.therecord.com/News/Local/article/732373)
British cosmologist Stephen Hawking thinks the Perimeter Institute is on a crash-course with history, as its scientists work on some of the greatest mysteries of our universe.
Hawking, a man Premier Dalton McGuinty said was “drawing a picture of God,” gave Perimeter Institute a ringing endorsement in the race to better understand the laws of nature, as part of a Sunday broadcast that reached Canadians coast to coast.
The visiting scientist said he believes the ten-year-old institute is on the cusp of revolutionary discoveries in theoretical physics that could change the world as we know it.
“Perimeter is a great experiment in theoretical physics,” Hawking said Sunday during his first public speech in Canada. “I am hoping and expecting that grand things will happen here.”
Hawking’s talk, taped for a broadcast later that night on TVOntario, spanned his professional career and work in things like black holes, the Big Bang and far-out concepts such as “imaginary time.”
He talked about thinking he’d never live long enough to finish his PhD, and about the idea our universe and everything in it is as spontaneous as air bubbles forming in boiling water. Sitting in his wheelchair with his hands folded on his lap, Hawking received a standing ovation from a crowd that included business leaders, cabinet ministers and municipal politicians.
The world’s “most famous living scientist” is in the middle of a six-week stay at Perimeter, where he’s working closely with other cosmologists and theoretical physicists. He’s spent his days working out equations and ideas in front of black boards, and is said to be enjoying the institute’s relaxed atmosphere.
He thinks Perimeter has assembled a network of scientists that just might make that elusive breakthrough. The great eras of theoretical physics, like the 1920s in Germany or the 1960s at Cambridge, had all the parts that Perimeter seems to have in place — great minds given time to think and explore, he said.
“The recipe is simple,” he told the high-powered crowd. “It seems to me the same ingredients are being assembled here at the Perimeter Institute in Waterloo.”
Those kind of statements put heavy expectations on Perimeter’s researchers, and that’s a good thing, said institute director Neil Turok.
“It’ll make our researchers realize the spotlight is on them. It puts the pressure on,” he said. “It’s an endorsement that we’re doing the right things.”
Hawking’s talk gave the audience a glimpse into the challenges caused by his advanced motor neuron disease, which makes communication very difficult. Hawking uses a sensor that reads twitches in his cheek to choose words from a screen in front of him.
That process can cause long pauses in his delivery, and at one point during his talk the software program that he uses to operate his voice synthesizer crashed and had to be rebooted on stage. But his aide said Hawking refuses to do pre-recorded lectures because “it would be a mime, he would be cheating.”
“He doesn’t see himself as a disabled person. He sees himself as a cosmologist,” explained his graduate assistant, Sam Blackburn. “Stephen does a lot of things because they’re not easy.”
RIM boss Mike Lazaridis sat in the front row during the talk, beaming. He said Hawking’s stay at the institute he created and funded to the tune of $170 million is an endorsement of Perimeter’s original vision.
“This is just the beginning,” Lazaridis declared after the lecture. “The goal from the very start was to create a world class institution that would attract world class people, and we’ve done that.”
The theorists, he said, need steady investment and patience. Perimeter, meanwhile, continues to grow. It’s doubling the size of its masters program and working on an addition that will triple the number of researchers.
Finance Minister Jim Flaherty said it’s important Canadians support this kind of expansion. Though they may work in very abstract concepts, theoretical physicists have helped the foundation of many modern technologies, from medical imaging scanners to microwaves.
“People like microwaves. And we wouldn’t have them in our homes today if it weren’t for the big theoretical thinkers,” Flaherty said. “For Perimeter to attract someone like (Stephen Hawking) here is a great honour for Canada, and it wouldn’t have happened if the people of Canada didn’t invest in this.”
Waterloo Mayor Brenda Halloran said Hawking’s predictions only help the belief that Waterloo has become a global centre for knowledge. And that’s an exciting thing, she said.
“You’re standing here in Waterloo, on what used to be an old arena, and there’s a feeling they’re on the cusp of profound and far reaching things that are going to change people’s lives,” she said. “There’s a feeling of breathless anticipation of what’s going to come from here.”
In case you missed it, the Hawking lecture will be rebroadcast on TVO later this summer. Visit www.tvo.org for details.
06-21-2010, 02:43 AM
G&M: Stephen Hawking gets rock-star welcome at Canadian think-tank (http://www.theglobeandmail.com/news/national/stephen-hawking-gets-rock-star-welcome-at-canadian-think-tank/article1611058/)
They arrived in limos, luxury cars and even a helicopter – a line of cabinet ministers, a premier, local politicians and several millionaires who packed Waterloo’s Perimeter Institute for Theoretical Physics Sunday to hear firsthand what the world’s best-known living scientist had to say.
What they witnessed was an hour-long display of determination as much as it was a lecture on science as Stephen Hawking painstakingly worked the computer that gives voice to his thoughts in recounting his research, life and times.
“We don’t know why we can do it, but we know how to do it,” Prof. Hawking said of efforts to understand the facts that explain the universe. It was a comment that also summed up his own drive to continue his work in the face of a debilitating disease.
Prof. Hawking was first diagnosed with a motor-neuron condition commonly known as Lou Gehrig's disease, or ALS, as a graduate student at Cambridge, and he told his Waterloo audience at first he did not expect to have enough time to finish his PhD.
More than four decades later, he uses a wheelchair and must rely on his cheek muscles to send commands to the computer that works a speech synthesizer.
His talk, before an invitation-only crowd of about 200 and televised later Sunday evening, is his only public appearance during his six-week stay in Waterloo.
The event also was a chance for the Perimeter Institute to show off its accomplishments to a high-powered crowd. Perimeter was created more than a decade ago by BlackBerry founder Michael Lazaridis as an independent centre devoted to the study of fundamental questions in science involving space and time, as well as quantum physics. Prof. Hawking, a close colleague of the centre’s director Neil Turok, used his talk to endorse its work.
“I am hoping and expecting great things will happen here,” he said. The combination of brilliant people and a free intellectual environment is creating a special place and time where “magical progress can happen.”
A beaming Mr. Lazaridis, who has given $150-million to Perimeter over the years, said such progress always takes more time and money than people expect. Still, he said, the accomplishments at Perimeter, which now has the world’s largest postdoctoral program in theoretical physics and a growing community of leading scholars, shows the importance of investing in science, even during difficult economic times. “It is so tempting to cut back on things you don’t understand,” he said. “This shows not only what we can do in Canada, but in Waterloo.”
Prof. Turok said Perimeter is going for one goal only – a major breakthrough in human understanding of the laws of physics. “I believe we will be judged on the success of one individual and we don’t know who that will be,” he said. Such a person, he said, has the potential to be another Stephen Hawking.
In his lecture, Prof. Hawking recounted how he by chance ended up studying under cosmologist Dennis Sciama and began to consider a question that has occupied much of his time since – the beginning of the universe.
He also spoke of his important work on black holes and his formula that proved they have emissions – “I would like [the formula] to be on my tombstone,” he said.
“He is my hero,” said Allison Carter, one of two Grade 11 students that won a seat at the lecture in a science video contest. “He is discovering the answers to the questions we all ask.”
06-21-2010, 02:45 AM
TorStar: Stephen Hawking expecting 'great things' at Waterloo (http://www.thestar.com/news/ontario/article/826253--stephen-hawking-expecting-great-things-at-waterloo?bn=1)
“Can you hear me?” asked the now famous electronic voice with its flat delivery, its incongruous American accent, and its inexplicable lisp.
And it seemed that everyone could hear, although sometimes with difficulty.
The man behind the computerized voice was renowned British cosmologist Stephen Hawking, who made his long-awaited and much delayed Ontario debut on Sunday, putting theoretical physics squarely on the Canadian map.
An audience sprinkled with luminaries from politics, academia, and business – including Premier Dalton McGuinty, federal Transport Minister Tony Clement, and federal Finance Minister James Flaherty — listened intently as the 68-year-old theoretician chronicled his life in science, using a computerized voice system he controls by slight movement of his right cheek.
The voice was difficult to make out at times, and there were several glitches with the machinery, but those problems only seemed to add to the intensity of Hawking’s address, delivered from the wheelchair where he has long spent most of his waking hours.
“The first thing he brings is his endorsement,” said Neil Turok, director of the Perimeter Institute for Theoretical Physics in Waterloo, where Hawking now holds a distinguished research chair. “That means a lot to us. He is an example of what a person can achieve in science.”
Best known for his groundbreaking work on the nature of black holes, Hawking has also made important advances in ongoing but so far unsuccessful efforts to combine the behaviour of large objects like stars and small objects like subatomic particles into a single unified theory.
Hawking has done as much as any living scientist to explain the secrets of the universe in a way that non-scientists can understand.
More or less.
Hawking tried to do more of the same Sunday, cracking jokes about the cosmos, including a jibe at his own discovery that black holes eventually evaporate.
Or, as he put it: “Black holes are out of sight.”
But his speech — largely prepared in advance although delivered according to his physical commands — also proved something of a challenge in places, dotted as it was by such phrases as these:
“A static remnant has a naked singularity unless it is exactly spherical.”
“The singularities of gravity collapse are not visible to outside observers.”
Generally recognized as one of the leading theoretical physicists of recent decades, Hawking has suffered since his twenties from Amyotrophic Lateral Sclerosis or ALS, a degenerative condition commonly known as Lou Gehrig’s disease.
The affliction has left him almost completely paralyzed, but has not affected his mental powers.
“New ideas are needed if we are to secure our future,” said Mike Lazaridis, co-founder of Research in Motion, the Waterloo-based company that produces BlackBerry smart phones. “I’m convinced great ideas will lead to solutions.”
Lazaridis established the Perimeter Institute in 1999 and has lured Hawking here, along with a host of other internationally recognized researchers, in hopes of making further breakthroughs in our understanding of the universe.
The institute is now being expanded — doubling its physical facilities and trebling its researchers — with the addition of a new wing, named for Hawking.
Although scientists tend to do their best work while young, Turok insisted in an interview that Hawking is still doing cutting edge work.
“I think Stephen is constantly looking for new ideas,” said Turok. “He’s a real, live scientist. He’s still productive.”
Hawking was supposed to journey to Canada last summer to take up his part-time research post in Waterloo, but was forced to postpone the trip for medical reasons.
“Perimeter is a grand experiment in theoretical physics,” Hawking told his audience on Sunday. “I am hoping, and expecting, great things will happen here.”
06-21-2010, 05:43 AM
Stephen Hawking on Perimeter Institute and Special Places & Times for Scientific Progress
PI's Public-Private Partners Welcome Prof. Hawking
Waterloo, Ontario, Canada | June 20, 2010 | http://www.perimeterinstitute.ca/News/In_The_Media/Stephen_Hawking_on_Perimeter_Institute_and_Special_Places_&_Times_for_Scientific_Progress/
Prof. Hawking addresses a packed theatre.
Minister of Industry Tony Clement welcomes Prof. Hawking to Canada.
Premier Dalton McGuinty welcomes Prof. Hawking to Ontario.
In a public address before a packed audience at Perimeter Institute for Theoretical Physics (PI), Prof. Stephen Hawking, PI Distinguished Research Chair, recounted his research, life and times, saying that it has been a glorious period to contribute to our picture of the universe. Prof. Hawking is conducting private research activities at PI this summer, in what is expected to be the first of many visits.
Hawking at the Perimeter
The celebration of science, being televised across Canada on TVO, kicked off with Mike Lazaridis, PI founder and Board Chair, sharing reasons why basic research, particularly theoretical physics, is not only crucial to understanding how the universe behaves at a fundamental level, but also drives the building of knowledge-based nations, paves the way for new and transformative technologies and creates long term value throughout society.
PI Director Neil Turok elaborated on how theoretical physics “is one of the lowest-cost, highest-impact scientific disciplines,” contributing key concepts to fields ranging from astronomy to neuroscience, pure mathematics to computer science and beyond. Dr. Turok also provided an exciting update on the institute and shared, “Stephen joins us at a particularly special moment for PI, as the research centre expansion named in his honour progresses rapidly toward completion. The Stephen Hawking Centre at Perimeter Institute will increase research capacity and provide an exceptional environment for physicists to conceive, visualize and gain an improved understanding of the nature of physical reality.”
Expanding the Perimeter
The new construction is part of an overall ‘Expanding the Perimeter’ advancement initiative, in which public and private partners come together to invest in cutting-edge research, training of next generation scientists and educational outreach activities. The Honourable Tony Clement, Canada’s Minister of Industry, spoke about the great value Canadians place on scientific achievement and how “the federal government has had a strong and successful relationship with Perimeter Institute” which he further described as “the formidable science community that Mike Lazaridis, Neil Turok, the faculty, researchers, and staff have built here in the past ten years.”
In his official welcome to Prof. Hawking, the Honourable Dalton McGuinty, Premier of Ontario, said, "Stephen Hawking is passionate about helping humanity understand the complexity of the universe. We're honoured to welcome him to Ontario and Perimeter Institute, where we are pushing the boundaries of our shared knowledge even further."
In communicating his excitement and enthusiasm for scientific progress, Prof. Hawking said, “The recipe is simple: Bring brilliant people together, in an inspiring and free intellectual environment, where they are encouraged to pursue ambitious and timely research. The importance of special places and special times, where magical progress can happen, cannot be overstated… It seems to me, the same ingredients are being assembled here, at Perimeter Institute. Perimeter's chosen scientific focus, connecting quantum theory and spacetime, is central to new insights, which are emerging, concerning not only black holes and the beginning of the universe, but also nuclear and particle physics, quantum computers, and the science of new materials. Perimeter is a grand experiment in theoretical physics. I am hoping, and expecting, great things will happen here.”
Hawking at the Perimeter will air on TVO on:
Sunday, June 20 at 8:00 pm and 12:30 am EDT
Saturday, June 26 at 6:00 pm EDT
Sunday, June 27 at 5:00 pm EDT
Tuesday, July 6 at 10:00 pm EDT
Minister of Industry and Minister of State for Science and Technology Welcome Stephen Hawking to Canada
OTTAWA | June 20, 2010 | http://www.ic.gc.ca/eic/site/ic1.nsf/eng/05663.html
The Honourable Tony Clement, Minister of Industry, and the Honourable Gary Goodyear, Minister of State (Science and Technology), today issued the following statement:
“We are honoured to welcome to Canada one of the world’s greatest scientists, Dr. Stephen Hawking, as he begins his six-week residency at Perimeter Institute for Theoretical Physics in Waterloo, Ontario.
“Dr. Hawking has made extraordinary contributions to theoretical physics during his long and remarkable career, and his presence here in Canada is a testament to the formidable science community that Perimeter Institute has built in the past ten years. During that time, the Government of Canada has been a proud partner of the Institute.
“We wish Dr. Hawking and his colleagues well as they begin a summer of collaboration.”
Through Industry Canada, the Natural Sciences and Engineering Research Council of Canada and the Canada Foundation for Innovation, the federal government has been a strong supporter of Perimeter Institute. This support includes $50 million announced in Budget 2007 that supports the Institute’s leading research, education and public outreach activities.
The attraction and retention of the world’s top research talent is a major thrust of Canada’s Science and Technology Strategy.
Minister Clement and Minister of State Goodyear met privately with Dr. Hawking before his presentation at Perimeter Institute. Minister of State Goodyear will also attend a dinner with the professor tonight.
06-21-2010, 04:51 PM
Anyone catch the TVO show? Clement's speech went way too long and really bogged things down, unfortunately.
But once the big man himself got going you could see the crowd hanging on his every word, even if his theory got very dense in spots and likely lost a lot of the non-science types. In the end, it was just the thrill of seeing a man so challenged by circumstance, challenging the universe in return by doing his best to find out how it ticks. Great stuff.
06-21-2010, 07:00 PM
Anyone catch the TVO show?
TVO is doing an Encore Presentation of the show on June 26th at 6pm. I'm going to set my PVR to record, it looks interesting! Here are a few of the trailers.
Stephen Hawking at Perimeter Institute TVO Promos
06-25-2010, 05:42 PM
The smartest guy in the room - Paul Wells - Macleans.ca (http://www2.macleans.ca/2010/06/25/the-smartest-guy-in-the-room/?om_rid=Ay6DOX&utm_source=_BMJMh7B8MikRec&utm_content=ml46&utm_medium=email)
Last Sunday an array of VIPs—Ontario Premier Dalton McGuinty, federal Finance Minister Jim Flaherty, Kevin O’Leary, the angry guy on the CBC reality show Dragons’ Den—convened in a theatre at the Perimeter Institute in Waterloo to pay tribute to Stephen Hawking. The British astrophysicist sat in his wheelchair while the politicians buttered him up. Then he delivered a lecture through his speech synthesizer about his early years in physics.
The next day a bunch of physicists took a lunch break from a conference where they were discussing what happens when black holes of various sizes orbit each other. A caregiver pushed Hawking to a place at one of the cafeteria tables, where he ate some lunch and listened to the chatter and gossip among his colleagues.
There were no cameras or dignitaries at the lunch. I was there only by chance. But in some ways this was more significant than the previous day’s pomp. Hawking didn’t become the world’s most famous physicist by giving lectures, after all, but by thinking and working, and he is at Perimeter to think and work.
He is one of 10 Distinguished Research Chairs, leading international scholars who will camp out periodically at Perimeter and work with its faculty and students. He’s about halfway through his first six-week visit. On evenings and weekends he gets out to sight-see. So far he’s gone to African Lion Safari and enjoyed the ribs at Ethel’s Lounge.
Days are for discussion and calculation. Motor neuron disease slows him but doesn’t stop him. He controls his computer by twitching his cheek to move the cursor on the screen. It works best if you frame questions to him as a yes or a no. Neil Turok, Perimeter’s director, is an old Cambridge colleague of Hawking’s. He admitted after Sunday’s big televised show that he was impatient for the fancy business to be done “so we can get back to work.”
Of course in its own way, Sunday’s glamour was work too. A lot of taxpayer money has gone into Perimeter, about $90 million from the feds and as much from the Ontario government since 1999. That’s on top of $170 million from Research in Motion founder Mike Lazaridis. The physics that goes on there is so hard to explain (quantum foundations, anyone? Superstring theory?) that constant effort goes into underlining its importance. The man in the wheelchair is handy to that effort. After his speech, Hawking joined Turok and Lazaridis at dinner with two federal ministers, Flaherty and Gary Goodyear, both plainly starstruck.
O’Leary also became part of the sales pitch. “Imagine in 1905,” Lazaridis told the audience, “if Albert Einstein had stood in the Dragons’ Den.” Would the business geniuses have funded his crazy ideas? Not likely. But that’s what’s needed today, Lazaridis argued.
That’s the point Hawking wanted to make too, as it turned out. Sort of. Mostly he used his own life to show that you can never know what you’ll need to know. Governments spend a lot of time trying to pick winners in science. Hawking, the greatest winner of his lifetime, has never even bothered to try. He just followed his heart.
He showed up at Cambridge in 1962 hoping to study the nature of the universe with Fred Hoyle. “Cosmology was at that time hardly recognized as a legitimate field. Yet that was where I wanted to do my research.” Hoyle was too busy so Hawking fetched up with a lesser-known prof, which came in handy when Hoyle’s defence of a steady-state universe fell into dispute soon after.
All the action was in elementary particle physics, where you could design experiments to peck away at electrons and nuclei and eke out their secrets. Cosmology was mere guesswork. Hawking quoted a colleague who considered attendees at a 1962 Warsaw conference on general relativity to be “hosts of dopes.”
Hawking’s instincts ran all the other way. Elementary particles? “Too like botany.” Hushed admiration for odd species of quarks and gluons. “Cosmology and gravitation, on the other hand, were neglected fields that were ripe for development.”
By the late 1960s, data from radio telescopes had driven a stake through Hoyle’s steady-state hypothesis. With Roger Penrose and other colleagues, Hawking was hot on the trail of proof that the universe began with a big bang. “It was a glorious feeling, having a whole field virtually to ourselves. How unlike particle physics, where people were falling over themselves to latch onto the latest idea. They still are.”
A single-minded focus on pursuing the latest trends would never have got him where he wound up. “The importance of special places and special times cannot be overstated,” Hawking said. “That happened in Berlin, Germany, in the 1920s when quantum mechanics was born, and again in Cambridge in the 1960s. It seems to me that the same ingredients are being assembled here,” at Perimeter. “I am hoping and expecting great things will happen here.”
What’s important is not that Hawking said these nice things but that he was in Waterloo to say them. And with that, it was back to work.
07-05-2010, 08:23 PM
Public events for Prime Minister Stephen Harper for Tuesday July 6, 2010
3:00 p.m. – Prime Minister Stephen Harper will make an announcement. He will be joined by Professor Stephen Hawking and Gary Goodyear, Minister of State (Science & Technology).
Perimeter Institute for Theoretical Physics Atrium
31 Caroline Street North, Waterloo
* Open to media
07-06-2010, 04:48 PM
PM announces Banting Postdoctoral Fellowships, support for Next Einstein Initiative
Fellowships will establish Canada as a global leader in research; Next Einstein Initiative to help best young minds in Africa
6 July 2010 | Waterloo, Ontario | http://pm.gc.ca/eng/media.asp?category=1&featureId=6&pageId=26&id=3529
Prime Minister Stephen Harper today announced the establishment of the Government of Canada’s Banting Postdoctoral Fellowships, a prestigious new program to attract and develop the world’s best and brightest postdoctoral researchers in Canada. The Prime Minister also announced support for the Next Einstein Initiative to encourage and develop the best young minds in Africa.
“To remain at the forefront of the global economy, we must invest in the people and ideas that will produce tomorrow’s breakthroughs," said Prime Minister Harper. “The Banting Postdoctoral Fellowships will give scholars in research institutions across the country the support they need to explore and develop their ideas to their fullest potential.”
The Banting Postdoctoral Fellowships are the latest initiative under the Government of Canada’s comprehensive, long-term National Science and Technology Strategy. The new program will establish Canada as a global leader in higher learning, research and science and technology development. Under the program, 70 new fellowships will be awarded each year, with funding provided through the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council, and the Social Sciences and Humanities Research Council.
The Prime Minister also announced the Government’s support for the Next Einstein Initiative, which will create a network of 15 centres of academic excellence across Africa in fields related to science and technology.
“Canada will make a substantial contribution to scientific and technological development in Africa by supporting the unique public-private partnership known as the Next Einstein Initiative,” the Prime Minister said. “This is a revolutionary approach to development. It aims to nurture the brightest minds in Africa so they can take a leading role in solving the complex challenges the continent faces in areas such as agriculture, health and finance.”
Canada’s contribution to the Next Einstein Initiative will help build long-term capacity in research in Africa, and encourage talented students to reach and fulfill their potential in math, science and technology.
Backgrounder: Banting Postdoctoral Fellowships
6 July 2010 | Ottawa, Ontario | http://pm.gc.ca/eng/media.asp?id=3531
The Banting Postdoctoral Fellowships program is a prestigious new initiative designed to attract and retain in Canada the best researchers in the world. The program will award 70 new fellowships a year valued at $70,000 annually for two years, totalling $45 million over five years. The value of these awards is competitive internationally and represents the same international calibre and prestige offered by the Vanier Canada Graduate Scholarships ($50,000 annually for three years).
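Those figures can be cross-checked with a quick back-of-envelope calculation. The sketch below assumes a clean five-year window, 70 new awards per year and no administrative costs; those simplifications are mine, not the program's:

```python
# Back-of-envelope check of the Banting program figures quoted above.
# Assumptions: 70 new fellows per year, $70,000 per fellow per year,
# two-year awards, a five-year budget window, no overhead included.
AWARD_PER_YEAR = 70_000
NEW_FELLOWS_PER_YEAR = 70
TERM_YEARS = 2
WINDOW_YEARS = 5

total = 0
for year in range(WINDOW_YEARS):
    # Cohorts still drawing funding this year: each intake pays out for two years.
    active_cohorts = sum(1 for start in range(WINDOW_YEARS)
                         if start <= year < start + TERM_YEARS)
    total += active_cohorts * NEW_FELLOWS_PER_YEAR * AWARD_PER_YEAR

print(f"Approximate {WINDOW_YEARS}-year cost: ${total / 1e6:.1f} million")
# -> about $44 million, consistent with the quoted $45 million once
#    rounding and program administration are taken into account.
```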
Fellowships under the program will be provided through the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council, and the Social Sciences and Humanities Research Council. Fellowships will be open to both domestic and international applicants to support universities and research institutions in attracting and retaining top talent from within Canada and around the world. Up to 25 per cent of Canadian awardees will be eligible to go to a foreign research institution for their postdoctoral placements, helping them establish worldwide networks, and raising awareness of Canadian research excellence.
The new postdoctoral fellowships will advance one of the main goals of the federal Science and Technology Strategy, which is to build an economic and competitive advantage for Canada by attracting and training highly qualified, innovative people. This new program is part of a full suite of Canadian funding programs to support top-tier researchers at every stage of their careers. The new program will help establish Canada as a global leader in higher learning, research, and science and technology development. Canada’s universities and all Canadians will benefit from greater international partnerships, and Canadian university students will be given enhanced learning opportunities.
The fellowships will be known as the Banting Postdoctoral Fellowships, in memory of Sir Frederick Banting, the Canadian physician, researcher, Nobel laureate and war hero who, together with his assistant Dr. Charles Best, is credited with the discovery of insulin.
Backgrounder: The Next Einstein Initiative
6 July 2010 | Ottawa, Ontario | http://pm.gc.ca/eng/media.asp?id=3530
The Next Einstein Initiative (NEI) aims to create a Pan-African network of 15 centres of excellence in mathematics, technology and science over the next decade. The initiative seeks to build upon the success of the African Institute for Mathematical Sciences (AIMS) established in 2003 in Cape Town, South Africa. Canada’s $20 million contribution to NEI will support the establishment of five AIMS centres across Africa by 2015. These centres will graduate at least 500 students each year in fields related to science, math and technology.
AIMS attracts leading scholars to train young bright African graduates to use mathematical thinking to address complex challenges in agriculture, health, finance, and other areas of development. A second centre opened in Abuja, Nigeria, in 2008 and plans are underway to establish centres in Senegal, Ethiopia and Ghana, which are all Government of Canada countries of focus.
Supporting talented individuals in math, technology and science through the Next Einstein Initiative will help build self-sufficiency and strengthen the ability of African countries to seek local solutions to local development challenges.
The NEI will help to deliver on Canada’s Aid Effectiveness Agenda and international assistance priorities. Improving the quality of higher education and fostering a more productive and innovative workforce will also contribute to two of Canada’s five international assistance priorities: Sustainable Economic Growth and Children and Youth.
I noticed a bunch of black vehicles parked on the Perimeter lawn and cops standing around today, and remembered that the PM was supposed to be in town.
70k is extremely well-paid for a postdoc; the standard NSERC postdoc is 40k. (Under some circumstances, it used to be tax-free, but I hear that they changed that). However, I don't think that allocating funding reserved for postdocs is the way to improve Canadian research. The problem with postdocs is that they mean that it takes longer to get a "real job", and if everyone else has a postdoc, then it's hard to be competitive for a job without a postdoc. Some fields today, like Computer Science and many engineering fields, do not require postdocs to get a job, and I hope that the unwritten requirement for a postdoc doesn't continue to propagate.
07-06-2010, 11:32 PM
Some fields today, like Computer Science and many engineering fields, do not require postdocs to get a job, and I hope that the unwritten requirement for a postdoc doesn't continue to propagate.
I can't speak for other fields, but it's been my experience that if you have/are eligible to get your engineering license (Bachelors required), you'll be qualified for the educational requirements of about 90% of the engineering jobs out there. The relatively few jobs that I've noticed that required a Masters or Doctorate tend to be more R & D type jobs.
I can't speak for other fields, but it's been my experience that if you have/are eligible to get your engineering license (Bachelors required), you'll be qualified for the educational requirements of about 90% of the engineering jobs out there. The relatively few jobs that I've noticed that required a Masters or Doctorate tend to be more R & D type jobs.
I was talking about research jobs (PhD+).
Mind you, I was just chatting with a CS student yesterday who wanted to do a master's because he figured it would lead to more interesting work. Also, I understand that a lot of Google employees have higher degrees.
In terms of getting jobs, though, yes, bachelor's degrees in engineering tend to be sufficient, from my understanding, to get a lot of jobs, but perhaps not the super interesting ones.
11-28-2010, 05:29 PM
Not really on-topic as such, but this is the closest thread that fit.
The BBC has a long-running science program called Horizon (its version of PBS' Nova or CBC's Nature of Things), now in its 45th season. An episode that aired in early October discusses big questions about the Big Bang's role in cosmology, whether anything happened before that, and if so, what.
A good quarter of the documentary footage was shot in and around Perimeter, and interviews feature Neil Turok, Param Singh and Lee Smolin. They even have some scenes of Singh playing cricket at the Waterloo Park pitch (as elements of his theories involve a 'bounce' on a cosmic scale). Hawking only appears in a brief shot in a montage of Perimeter goings-on, taking in a lecture.
I have no idea if this will ever air in Canada or even North America (it was brought to my attention by less-than-kosher methods), but the beautiful BBC camera work really helps sell the place to an unfamiliar audience, I think. Very cool stuff.
11-28-2010, 08:12 PM
An episode that aired in early October discusses big questions about the Big Bang's role in cosmology, whether anything happened before that, and if so, what... (it was brought to my attention by less-than-kosher methods)...
Yup. I also saw it a month or so ago, probably by the same less-than-kosher means ;)
I was miffed, however, when they introduced the Perimeter Institute as "near Toronto" rather than in Waterloo. That's like saying Cambridge University is "near London."
For those who prefer to watch by more kosher means. Well worth watching!
11-28-2010, 09:06 PM
Well, YouTube's better than nothing! Thanks for embedding those.
https://www.celiac.com/articles.html/did-the-japanese-just-nail-the-secret-to-great-gluten-free-bread-r4068/ | 2020-01-18T14:41:04 | s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250592636.25/warc/CC-MAIN-20200118135205-20200118163205-00183.warc.gz | 0.941161 | 433 | CC-MAIN-2020-05 | webtext-fineweb__CC-MAIN-2020-05__0__133473187 | en | Celiac.com 04/12/2017 - Researchers at Hiroshima University say they have perfected the science behind a new bread-baking recipe. Developed by Japan's National Agriculture and Food Research Organization, NARO, the method uses rice-flour to produce gluten-free bread with a similar consistency and volume to traditional wheat-flour loaves.
Now, rice-flour based gluten-free breads are old hat, but they've long had a reputation for being dry, crumbly, soulless creations that pale in comparison to even the cheapest traditional breads.
Since standard rice flour contains no gluten, the researchers needed to develop a new method that would bring these vital bread characteristics to their gluten-free bread. NARO solved the problem by using a specific type of wet milling process to produce their rice flour. The wet-mill process to make flour for gluten-free bread permits the formation of a microstructure of the fermenting batter, and in the resulting loaf, creating tiny bubbles coated in uniform undamaged starch particles in suitably supportive matrix.
The research team found that this process created properties previously unseen in rice-flour; properties arising from the undamaged starch particles created by the milling technique
They dub this supportive matrix "stone walls,” and they apparently form due to the surface activity of the undamaged starch granules. It appears these granules are able to lower the surface tension of water, and reduce the likelihood of collapse in the formed bubble walls. The result is spongier, chewier bread.
Some of the researchers suspect that the stability of the undamaged starch bubble is due to the uniform hydrophobicity of the similar sized granules, and that these cause an interface between damp gaseous air pockets and the liquid batter. Whatever the exact reason, this "stone wall" matrix allows bubbles to grow and expand as interior CO2 levels increase, which leads to superior bread loaves.
This technique has the potential to revolutionize the gluten-free bread industry. Stay tuned to see how the story evolves. | physics |
https://www.trailerlife.com/tech/diy/electrical-independence/ | 2020-10-31T16:24:49 | s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107919459.92/warc/CC-MAIN-20201031151830-20201031181830-00142.warc.gz | 0.934022 | 3,032 | CC-MAIN-2020-45 | webtext-fineweb__CC-MAIN-2020-45__0__35012379 | en | Installing a powerful solar system provides freedom from the utility grid and the opportunity to stay in primitive locations without giving up conveniences
The sun is a gigantic mass in the solar system that everyone expects to come up in the morning and to go down in the evening. It’s the Earth’s temperature regulator, and it is worshipped by many who enjoy basking in its warm glow for recreation. For most, the sun’s power is generally accepted as just part of daily life, but for RVers who relish getting off the grid, the sun is also nature’s power generator.
Solar systems that harness the sun’s rays and turn its energy into electrical power have been around for a long time, and RVers who appreciate the seclusion and economics of primitive campgrounds have embraced this silent power for many years. New, and continuing, technology has leapfrogged solar power to new levels, and RVers can now build systems that make living off the grid more practical than ever. We assembled and installed a robust system using the latest equipment available at the time (this technology changes rapidly) with the help of the experts from AM Solar in Springfield, Oregon, that transformed the fifth-wheel trailer into a mini power station.
While the attributes of a solar system, including electrical independence, are well-established, a primary benefit is to properly condition batteries. Solar power, through a suitably designed system (which includes a good charge controller) offsets continual deep discharges because the batteries are constantly being conditioned in response to actual usage. In the end, lead-acid batteries, for example, can last twice as long. At today’s prices for batteries, that’s a big savings.
Preplanning is crucial to building a good solar system; you just can’t slap a bunch of components together and expect positive results. The first step is to figure out your needs based on how you use the RV. In our case we determined that we wanted enough power to run the microwave, induction cooktop, hair dryer, fireplace flame (for visual ambience), entertainment systems and all the other 12-volt DC accessories in the rig and, of course, condition the batteries properly.
Our goal was to build a system big enough to allow complete independence from the grid, unless we wanted to run the air conditioning. It’s not practical to set up a solar system to continuously power the air conditioner(s), and in our case we rely on LP-gas to run the refrigerator. Systems can be designed to handle a residential refrigerator, but the battery bank and number of panels must be increased.
Our original calculations had us settling on three 160-watt solar panels, two AGM batteries (300 amp-hours), a 2,000-watt inverter/charger and a controller with a boost feature. After discussing our needs with AM Solar owner Greg Holder, we made a number of changes and upgrades. It kind of reminded me of remodeling a stationary home; changes are inevitable.
In the end we upgraded to four 160-watt panels after learning that the extra wattage eliminated the need to tilt the panels to follow the sun. That was a big selling point, since we would rather not spend too much time on the roof. The biggest upgrade was to lithium batteries, which upped the price tag considerably. Then to condition the lithium batteries properly, we upgraded to a Magnum Energy MagnaSine Hybrid inverter/charger.
When all was said and done, we had assembled a very powerful system with all the bells and whistles, banking on optimum performance and long-term reliability. It also satisfied our secret desire to have the ultimate system for our needs.
Lithium batteries are no longer science fiction; use in electric cars has made lithium batteries very popular, and for good reason. They last a really long time and can handle many more discharging/charging cycles than their lead-acid/AGM counterparts. These batteries maintain rated performance when taken down to the maximum depth of discharge, which is an amazing 80 percent. Lead-acid and AGM batteries should not be discharged beyond the 50 percent threshold.
To put the performance numbers in perspective, the lithium batteries used in the test system will provide 240 amp-hours before recharging versus 150 for lead-acid or AGM batteries. An even bigger consideration is voltage. As the charge level in lead-acid and AGM batteries decreases, so does voltage, which impacts appliances and accessories. Lithium batteries maintain full voltage until fully discharged, and then voltage drops precipitously.
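The usable-capacity comparison above reduces to a single multiplication. A minimal sketch, using only the 300 amp-hour rating and the depth-of-discharge limits quoted in this article:

```python
# Usable amp-hours = rated capacity x allowable depth of discharge (DoD).
RATED_AH = 300  # amp-hour rating of the battery bank described in the article

for chemistry, max_dod in [("lithium (LiFePO4)", 0.80), ("lead-acid / AGM", 0.50)]:
    usable = RATED_AH * max_dod
    print(f"{chemistry:18s}: {usable:.0f} Ah usable before recharging")
# lithium (LiFePO4) : 240 Ah usable before recharging
# lead-acid / AGM   : 150 Ah usable before recharging
```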
Because the performance characteristics of lithium batteries are so much different, a battery management system (BMS) is critical to prevent damage from over-discharging or excessive voltage. Mini BMS circuit boards are wired between cells, and these boards are tied into a master BMS control box. Red lights on each BMS cell-level board flash when everything is OK. Four mini BMS boards were used on the battery bank built for this system.
When the BMS recognizes that the high- or low-voltage threshold has been breached, it automatically shuts down the battery bank, well before any damage can occur. When that happens, the light around the reset button mounted inside the RV illuminates to inform the user there's an issue with voltage. If any of the mini boards discovers a change in the threshold voltage — high or low — in any cell, the entire bank is shut down.
Building a battery bank from lithium cells is not designed for the do-it-yourselfer. There’s a lot of science behind assembling the bank, and that should be left to the professionals. The batteries are assembled using individual super cells that are rated at 3.2 volts and 100 amp-hours. These super cells can be configured to offer greater flexibility when looking for space to house the battery bank, unlike conventional deep-cycle batteries that have established dimensions.
For our system we paralleled three smaller cells into a super cell and then put four super cells into series using copper plates to make a 12.8-volt, 300-amp-hour battery bank. Once the batteries were configured and banded, they were initially electrically balanced so the voltage is consistent and at a full charge. This step requires the use of a sophisticated charger that can be controlled accurately.
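The arithmetic behind that arrangement is simple: parallel connections add capacity, series connections add voltage. A short sketch using the cell ratings given above:

```python
# Building a bank from 3.2 V / 100 Ah LiFePO4 cells, as described above:
# 3 cells in parallel form a "super cell", 4 super cells in series form the bank.
CELL_VOLTS = 3.2
CELL_AH = 100
PARALLEL = 3   # cells per super cell -> adds capacity
SERIES = 4     # super cells in series -> adds voltage

bank_volts = CELL_VOLTS * SERIES          # 12.8 V
bank_ah = CELL_AH * PARALLEL              # 300 Ah
bank_kwh = bank_volts * bank_ah / 1000    # roughly 3.8 kWh of stored energy

print(f"{bank_volts:.1f} V, {bank_ah} Ah bank (~{bank_kwh:.1f} kWh) from {PARALLEL * SERIES} cells")
```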
Normally, lithium batteries are rated for around 2,000 charge/discharge cycles, which in itself is much better than the 500 to 1,000 cycles expected of a lead-acid or AGM battery. AM Solar tunes its proprietary BMS so that it operates in a narrower window than the maximum and minimum voltages established by the battery manufacturer, which increases the expected charge/discharge cycles to 3,000 to 5,000. If the user discharges the lithium batteries 80 percent 365 days a year (which almost no one will do), the batteries should last eight to 13 years. Given a more practical use of the batteries in normal living circumstances, the batteries should last at least 15 years, which makes the $2,599 price tag a lot easier to amortize.
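The eight-to-13-year estimate follows directly from those cycle ratings if you assume one deep cycle per day, which is the worst case described above:

```python
# Service life in years = rated cycles / cycles used per year.
CYCLES_PER_YEAR = 365  # one deep discharge/charge cycle every day, as assumed above

for rated_cycles in (3000, 5000):
    years = rated_cycles / CYCLES_PER_YEAR
    print(f"{rated_cycles} cycles -> about {years:.1f} years of daily deep cycling")
# 3000 cycles -> about 8.2 years; 5000 cycles -> about 13.7 years
```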
Lithium batteries will not discharge much when in storage, and after testing for five months with no external charging support, the voltage barely changed. Another welcome feature is that lithium batteries do not have to be fully charged each time. That means you can charge them to a certain point (if there’s little sun or electrical power is not available) without negatively affecting conditioning. Lithium batteries can be charged very quickly.
When compared to batteries of equal capacity, the lithium counterparts are smaller and lighter. Each cell weighs only 7 pounds, which means the entire battery bank for this system weighed only 84 pounds, less than the weight of one 6-volt AGM battery.
Undoubtedly, bad press that surfaced a while back created some discomfort when considering mounting these batteries inside an RV storage compartment. Fires, created by battery overheating, were once a problem. The batteries under scrutiny were lithium cobalt oxide formulations and were subject to thermal runaway hazards that led to fires. The newer crop of batteries is lithium iron phosphate, which is basically noncombustible. Combine the latest-generation lithium batteries with a solid BMS, and the system becomes very safe.
Panels and Charge Controller
Solar-panel technology has moved very fast in the past few years. AM Solar specializes in the most up-to-date products and for this installation used its SF160, 36-cell mono-crystalline panels. All the panels are custom-built for AM Solar, and Greg Holder specifies at least 36 cells, so they are large enough to capture the most energy. The panels operate at 18 volts and are rated to have an 8.8-amp output. They measure 26-3/4 by 58-1/4 by 1-3/8 inches, which is very compact, considering the output.
Higher voltage boosts the charging amperage, especially when routed through a Blue Sky Energy Solar Boost 3024iL controller, which is designed to lift the charging amperage to the highest possible level. The controller is a critical component in any solar system. Its main function is to regulate the charging current and prevent overcharging the batteries. The unit used here is rated at 40 amps, so there’s a little room for expansion on the system, which will likely not be needed.
This is a very sophisticated controller, and it features a relatively new feature called maximum power point tracking (MPPT). This gives the controller the ability to boost the charging current (amperage) by converting some of the excess voltage coming from the panels — thus, the reason for panels that operate at a higher voltage. The biggest boost can be realized when the panels are cold and the battery voltage is low.
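A simplified comparison shows what that boost is worth for a single 160-watt panel. The battery voltage and conversion efficiency below are assumed values for illustration, not measurements from this installation:

```python
# Simplified PWM vs MPPT charging-current comparison for one SF160-class panel.
PANEL_VOLTS = 18.0      # panel operating voltage (from the article)
PANEL_AMPS = 8.8        # panel rated output current (from the article)
BATTERY_VOLTS = 13.0    # assumed charging voltage at the battery
MPPT_EFFICIENCY = 0.95  # assumed conversion efficiency of the controller

panel_watts = PANEL_VOLTS * PANEL_AMPS               # about 158 W available

pwm_charge_amps = PANEL_AMPS                          # a PWM-style controller passes current as-is
mppt_charge_amps = panel_watts * MPPT_EFFICIENCY / BATTERY_VOLTS

print(f"Panel power: {panel_watts:.0f} W")
print(f"PWM-style charging current : {pwm_charge_amps:.1f} A")
print(f"MPPT charging current      : {mppt_charge_amps:.1f} A")
# MPPT recovers the "excess" panel voltage as extra charging current
# (roughly 11.6 A vs 8.8 A in this example).
```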
The controller was tied into a Blue Sky Energy IPN-ProRemote panel that has a tremendous programming capability. Five levels of information, deciphered by the various algorithms in the controller, can be read on the remote screen. The information is extensive, including the ability to equalize the batteries, which is not needed for the lithium batteries. It’s important to allow the installers to set the controller and provide users with the do’s and don’ts to keep from getting in trouble with lithium batteries.
Beyond voltage, the information shows how long since the batteries were fully charged, amperage from the solar array, usage in amp-hours and much more. If you’re a power watcher, you’ll be in heaven here.
An integral part of any complete solar system is the power inverter/charger. This component provides the power from the batteries to run the targeted 120-volt AC appliances and accessories, and charge the batteries when hooked up to RV park power. We chose the aforementioned MagnaSine for its established reliability in the industry to provide pure sine-wave power for all of our sensitive electronics and, most importantly, its compatibility for use with lithium batteries.
Model MSH3012 is the only inverter in the Magnum Energy line that has the hybrid feature, which provides a relatively new twist on inverting power by working in concert with 120-volt AC power when connected to some type of shorepower. Without getting too deep into the electronic wizardry, the MagnaSine inverter provides load support when there’s not enough current to operate the desired systems. For example, if you find yourself visiting relatives and can plug into only 15- or 20-amp household power, it’s not possible to run the microwave and hair dryer at the same time (depending on the demand from other appliances). The hybrid feature will provide the extra called-for current to operate the other appliances, up to the rating of the inverter, which in this case is 3,000 watts. This will prevent breaker tripping and an abrupt loss of power.
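The load-support idea is easiest to see as a power budget: whatever the shorepower circuit cannot supply, the inverter makes up from the battery bank, up to its rating. The appliance wattages below are typical figures assumed for illustration rather than numbers from this article:

```python
# Hybrid inverter load-support budget on a small shorepower connection.
SHORE_AMPS = 15            # household outlet breaker
SHORE_VOLTS = 120
INVERTER_LIMIT_W = 3000    # MSH3012 rating quoted above
BATTERY_VOLTS = 12.8

loads_w = {"microwave": 1200, "hair dryer": 1500, "converter/misc": 300}  # assumed wattages

demand = sum(loads_w.values())                      # 3000 W requested
shore_available = SHORE_AMPS * SHORE_VOLTS          # 1800 W from the pedestal
from_batteries = max(0, min(demand, INVERTER_LIMIT_W) - shore_available)

print(f"Demand {demand} W, shorepower {shore_available} W, "
      f"inverter supplies {from_batteries} W (~{from_batteries / BATTERY_VOLTS:.0f} A from the bank)")
```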
Normally, other inverters operate on only one source of power to run the appliances and accessories, and use a transfer switch, which isolates the inverter when plugged into an external source of 120-volt AC power. The hybrid inverter uses energy from the battery bank and an external 120-volt AC source to power the loads; any surplus power can be used to charge the batteries or handle higher loads than the AC input alone can provide.
Controlling the inverter is done through a remote with an LED display that we installed next to the IPN-ProRemote for the solar panels. The panel is loaded with features, and again, takes some initiation and practice to run through the steps. It really allows the user to fine-tune the system to take full advantage of the lithium batteries and other power sources like a portable generator while boondocking.
The output of the generator can be dialed in using the remote panel, which adds greater flexibility when charging batteries and running appliances. Since the MSH3012 inverter can add up to 25 amps to the output of the portable generator for a period of time, it’s possible to run the air conditioner while using a 2,000-watt generator long enough to cool down the interior and remove excess humidity. Once the heavy load is eliminated, the generator will recharge the batteries through the inverter.
Installing a system of this caliber is not for the faint of heart. I highly recommend leaving it to the experts, like AM Solar, because of the many intricate pieces that need to be assembled. For example, it took the better part of a day just to locate a suitable runway for the 4-gauge cables used to connect the panels to the charge controller. It took four and a half days to complete the installation to satisfy all the required codes, ensure that all the components were secured properly and that the wiring was meticulously routed and wrapped.
As one might expect, such a robust solar system is not inexpensive. The complete package described above with all the ancillary pieces like the circuit breaker, fuse, BMS and cables was just shy of $12,000.
Obviously, the results from any solar array will be subject to the time of year and personal usage. On an average day, we consume about 100 amp-hours, which is less than half the capacity of the lithium-battery bank, and we usually have the batteries fully charged by noon when in good sun. The fact that the lithium batteries do not require a finish charge provides great versatility on days when the sun is not as strong.
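Those daily numbers can be tied together with a rough recharge estimate. The derating factor below is an assumption; actual harvest depends on weather, panel temperature and sun angle:

```python
# Rough daily energy budget for the system described in this article.
DAILY_USE_AH = 100        # typical daily consumption (from the article)
USABLE_AH = 240           # 300 Ah bank at 80% depth of discharge
ARRAY_WATTS = 4 * 160     # four 160 W panels
DERATING = 0.75           # assumed real-world losses (temperature, wiring, angle)
BATTERY_VOLTS = 12.8

charge_amps = ARRAY_WATTS * DERATING / BATTERY_VOLTS     # about 37 A into the bank
hours_to_replace = DAILY_USE_AH / charge_amps            # under 3 h of good sun

print(f"Daily use is {DAILY_USE_AH / USABLE_AH:.0%} of usable capacity")
print(f"~{charge_amps:.0f} A charging -> about {hours_to_replace:.1f} h of good sun to replace {DAILY_USE_AH} Ah")
```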
Except to run the air conditioning, there’s really no reason to hook up, which gives us exceptional freedom to travel at will. We jokingly tell our neighbors that we can sell energy back to the grid, which always initiates a conversation and tour of our system. | physics |
https://www.ifixyourmix.com/post/resonating-harmony-exploring-the-profound-link-between-schumann-resonance-and-music | 2023-09-23T00:01:53 | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506429.78/warc/CC-MAIN-20230922234442-20230923024442-00800.warc.gz | 0.910241 | 715 | CC-MAIN-2023-40 | webtext-fineweb__CC-MAIN-2023-40__0__275705027 | en | In the ethereal tapestry of our planet's electromagnetic symphony, a captivating phenomenon known as Schumann Resonance pulses with majestic resonance. Discovered by the visionary physicist Dr. Winfried Otto Schumann in 1952, this natural electromagnetic phenomenon has since fascinated scientists and artists alike. As we embark on an exploratory journey into the depths of Schumann Resonance, we unravel the intricacies of this cosmic connection, revealing its profound impact on our planet and our own human experiences.
The Rhythmic Pulse of Earth
At the core of Earth's atmosphere, an enchanting symphony unfolds, resonating with a characteristic frequency referred to as the Schumann Resonance. This fundamental frequency pulsates at approximately 7.83 Hz, often likened to the planet's heartbeat. It is generated by the interactions between Earth's surface and the lower ionosphere, forming a resonant cavity where electromagnetic waves circulate harmoniously.
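For readers curious where a number like 7.83 Hz comes from, the idealized physics can be sketched in a few lines. Treating the gap between the Earth's surface and the ionosphere as a lossless spherical cavity gives resonant frequencies f_n = (c / 2*pi*a) * sqrt(n(n+1)); real ionospheric losses pull the observed fundamental down from the ideal value toward the measured 7.83 Hz. This is a textbook-style approximation, not a calculation from the article:

```python
import math

# Idealized Schumann resonance frequencies for a lossless Earth-ionosphere cavity.
C = 299_792_458.0        # speed of light, m/s
EARTH_RADIUS = 6.371e6   # mean Earth radius, m

def ideal_schumann_mode(n: int) -> float:
    """Resonant frequency (Hz) of mode n for a lossless spherical shell cavity."""
    return (C / (2 * math.pi * EARTH_RADIUS)) * math.sqrt(n * (n + 1))

for n in range(1, 4):
    print(f"mode {n}: {ideal_schumann_mode(n):.1f} Hz (ideal, lossless)")
# mode 1 comes out near 10.6 Hz in this idealization; losses in the real
# cavity lower the observed fundamental to approximately 7.83 Hz.
```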
The Interplay of Frequencies
Intriguingly, the Schumann Resonance harmonizes with the human brain's alpha and theta brainwaves, which are associated with relaxation, creativity, and meditation. When we immerse ourselves in the world of music, our brainwaves align with these frequencies, leading to a state of heightened focus, creativity, and emotional well-being. This synchrony between the Earth's resonance and our neural activity offers a profound connection between the natural world and our own conscious experiences.
A Healing Symphony
Scientific research has unveiled the therapeutic potential of music composed in harmony with the Schumann Resonance. Such music has been found to reduce stress, enhance cognitive function, and promote overall mental and emotional balance. By crafting compositions that resonate with these frequencies, musicians and composers harness the power of Schumann Resonance to create transformative sonic experiences capable of soothing, healing, and inspiring listeners.
Influences and Oscillations
While the Schumann Resonance provides a foundational frequency for our planet, it is not static. Various factors influence its oscillations, including solar activity, lightning discharges, and the collective consciousness of humanity. Researchers have observed correlations between shifts in the Schumann Resonance and significant global events, suggesting a complex interplay between Earth's energetic state and human consciousness. This dynamic nature invites further exploration and highlights the interconnectedness of our planet and its inhabitants.
Beyond its earthly manifestations, the Schumann Resonance holds a cosmic significance. It serves as a window into the harmonies of the universe, resonating with celestial rhythms and connecting us to the broader cosmic symphony. This profound interconnection invites contemplation of our place in the vast cosmic order and instills a sense of wonder and awe.
Schumann Resonance, the enigmatic cosmic pulse that permeates our planet, reveals itself as a gateway to understanding the intricate relationship between Earth and humanity. Its harmonious frequencies entwine with our neural activity, offering a path to heightened creativity, relaxation, and emotional well-being. As musicians harness its power, they create compositions that heal, inspire, and transcend boundaries. This natural phenomenon not only reflects the state of our planet but also mirrors the influences of cosmic forces and human consciousness.
As we delve deeper into the mysteries of Schumann Resonance, we unveil a cosmic symphony of interconnections and realize our profound relationship with the universe. Let us embrace this harmony, allowing the pulsations of Schumann Resonance to guide our creative endeavors, foster healing, and inspire a deeper appreciation of the boundless wonders of our world. | physics |
https://audiorealignment.com/ | 2024-03-05T14:37:55 | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707948235171.95/warc/CC-MAIN-20240305124045-20240305154045-00258.warc.gz | 0.854879 | 371 | CC-MAIN-2024-10 | webtext-fineweb__CC-MAIN-2024-10__0__162260218 | en | A.R.T. X-SERIES ELECTROMAGNETIC TREATMENTS…WHAT ARE THEY ABOUT?
When it comes to audio playback, everything from dirty AC and stray-field energy to minimally shielded circuits, power supplies and transformers can and will alter the sonic integrity of the audio signal.
EMF (Electromagnetic Fields) arise from voltage on conductor(s), while magnetic fields are generated by current flowing through said conductor(s). The higher the voltage and current, the stronger the fields, which can be passed down the signal chain from component to component. When you combine stray and mechanical noise, the result is audible coloration within music playback. The noise-floor is raised, dynamics are reduced, soundstage decreases in all dimensions, and a layer of graininess and/or harshness is present.
While many audio equipment manufacturers attempt to minimize or filter out these issues, any present noise negatively impacts overall fidelity. Eliminating electromagnetic background noise and reducing mechanical noise are undoubtedly key factors in unleashing the full potential of your system. So, how can we effectively combat these anomalies?
Enter Audio Realignment Technologies.
The A.R.T. X-Series of Electromagnetic Treatments tackles these issues head-on, at the source. By utilizing its proprietary formulation, this powerful technology is designed to absorb and minimize the negative effects of these electronic-induced noises. The X-Series provides multiple levels of control, refinement and realignment.
WHERE DO OUR PRODUCTS EXCEL?
Interior and Exterior Electrical Panels
Integrated A/V Receivers
Pre-Amps (Solid-State and Tube)
Amplifiers (Solid-State and Tube)
Active Speakers and Subwoofers
Speaker Passive Networks
AND SO MUCH MORE… | physics |
http://www.ledgrowlightsoutlet.com/par-and-led-grow-lights.html | 2017-04-27T05:02:47 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121869.65/warc/CC-MAIN-20170423031201-00644-ip-10-145-167-34.ec2.internal.warc.gz | 0.878264 | 909 | CC-MAIN-2017-17 | webtext-fineweb__CC-MAIN-2017-17__0__179503562 | en | Photosynthetically active radiation
PAR stands for "Photosynthetically Active Radiation". PAR light levels are measured in microeinsteins, that is, micromoles of photons per square meter, per second.
PAR is used to determine the capability of a light to drive photosynthesis. In the past, we have seen light measured solely in lumens. Lumens measure the power of light perceived by the human eye. Our eyes peak somewhere in the green-yellow spectrum, but a plant has far less sensitivity there. PAR is the range within the light spectrum between 400 nm and 700 nm that plants absorb and are able to use for the integral stages of photosynthesis. Micromoles per square meter, per second are the units used to express a PAR reading; this quantity is the photosynthetic photon flux density (PPFD) of the light.
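The link between energy-based units and the micromole counts used for PAR can be made concrete with a short conversion: each photon carries E = hc/wavelength of energy, so a watt of red light delivers more photons, and therefore more micromoles, than a watt of blue. The sketch below is illustrative only and is not how commercial meters are calibrated:

```python
# Convert an irradiance in W/m^2 at a single wavelength to photon flux in umol/(m^2*s).
H = 6.626e-34        # Planck constant, J*s
C = 2.998e8          # speed of light, m/s
AVOGADRO = 6.022e23  # photons per mole

def umol_flux(irradiance_w_m2: float, wavelength_nm: float) -> float:
    photon_energy = H * C / (wavelength_nm * 1e-9)    # joules per photon
    photons_per_s = irradiance_w_m2 / photon_energy   # photons per m^2 per second
    return photons_per_s / AVOGADRO * 1e6             # micromoles per m^2 per second

for nm in (450, 660):  # typical blue and red LED peaks
    print(f"1 W/m^2 at {nm} nm ~= {umol_flux(1.0, nm):.1f} umol m^-2 s^-1")
# 1 W/m^2 at 450 nm is about 3.8 umol; at 660 nm it is about 5.5 umol.
```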
Most LED light manufacturers use Lumens as the unit to measure bulb brightness. This means very little for the growth of your plants. Micromoles give a more accurate measurement of what your plants are absorbing because the plant requires different wavelengths.
Here are our product PAR data (tested by Li-Cor 250A Quantum sensor)
| LED light model | 6 in | 12 in | 18 in | 24 in |
| 50w AIBC RB81-630 | 360 | 115 | | |
| 90w UFO AIBC RBO-660 | 870 | 298 | 127 | 76 |
| AQ-120W2P (aquarium) | | | | |
Note: If you give a plant too much light, or “saturate” it, growth may be stunted and the plant may even die from shock.
About Par meter:
An ideal quantum sensor would give equal emphasis to all photons between 400 and 700 nm and would exclude photons above and below these wavelengths.
There are two popular quantum PAR meters available on the market, Apogee and Li-Cor.
Below are reviews of the quantum meters from two different manufacturers:
Apogee Quantum meter:
Quantum Sensors and Quantum Meters measure Photosynthetic Photon Flux (PPF) in μmol m-2 s-1.
The spectral response of the Apogee Sensor used in Quantum Meters and the Quantum Sensor is shown at right. As the figure indicates, the sensor underestimates the 400 to 500 nm wavelengths (blue light), overestimates the 550-650 wavelengths (yellow and orange light), and has little sensitivity above 650 nm (red light).
Fig. Apogee quantum sensor/meter response (blue line) compared to defined quantum response (black line) of equal sensitivity at all wavelengths between 400 nm and 700 nm
The Good: Portable, easy to use, cheaper
The Bad: Not accurate for LEDs
The bottom line: If you are not a professional, a $300 meter is affordable if you are looking for a rough estimate, but you need to understand that 400-500nm is underestimated and 550-650nm is overestimated.
Li-Cor Light meter with sensor Li-Cor 190
Accurate measurements are obtained under all natural and artificial lighting conditions because of the computer-tailored spectral response of the LI-190. Colored glass filters are used to tailor the silicon photodiode response to the desired quantum response. An interference filter provides a sharp cutoff at 700 nm, which is critical for measurements under vegetation where the ratio of infrared to visible light may be high. A small response in the infrared region can cause an appreciable measurement error. This sensor, developed from earlier work (1), was pioneered by LI-COR and has become the standard for PPFD measurement in most photosynthesis-related studies.
Li-Cor Light Meter
Typical spectral response of LI-COR Quantum Sensors vs. Wavelength and the Ideal Quantum Response (equal response to all photons in the 400-700 nm waveband).
The Good: Accurate, multiple use meter
The Bad: Expensive
The bottom line: This equipment is essential if you are a professional grower looking to maximize the use of your lights.
LED Grow light and Par meter:
Since two main wavelength (440-470nm and 630-660nm) are used in LED grow lights, the quantum sensor must be sensitive to their wavelengths (refer to the manufacturer spectral diagram). Li-Cor is the best for LED grow lights par testing. All of our data is tested using a Li-Cor Quantum Sensor. | physics |
https://thebutterflymother.com/2016/07/31/book-review-professor-astro-cats-atomic-adventure/ | 2024-02-21T17:28:17 | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473524.88/warc/CC-MAIN-20240221170215-20240221200215-00168.warc.gz | 0.940612 | 503 | CC-MAIN-2024-10 | webtext-fineweb__CC-MAIN-2024-10__0__4126184 | en | On first glance I assumed this was a large picture book but in actual fact it’s closer to a physics reference book in content…except it’s fun! Using bold, bright & modern illustrations, this book explains in simple terms a whole host of scientific topics from atoms & molecules to Newton’s Laws & nuclear physics.
Packed to the brim with facts, stats and info this will capture any child’s imagination and kickstart an interest in physics. The illustrations are so lively and gorgeous, especially Prof. Astro Cat himself. And I love the simple and fun way different theories and reactions are explained in a way that’s relatable and exciting for children. Also, each explanation is linked to an every day object, for example: “Sounds are produced when things vibrate! A vibration is a back and forth movement that happens very quickly. Let’s look at my guitar! When I hit the strings, they vibrate and hit the air molecules next to them so that they vibrate too.”
The content is broken up, not only by the great illustrations, but by a layout of chunked text, speech bubbles and info boxes that is really easy on the eye. As with all the Flying Eye books I’ve encountered so far, it is beautifully made, with lovely paper and a textured, hardback cover that’s made to last. I picture this as a brilliant gift from a science-loving relative to a primary school-age child.
I wasn’t a big science fan as a child but in recent years I’ve come to find it much more interesting and I’m currently obsessed with various science-based You Tube channels so this book has been a pleasure for me to read personally and, somewhat embarrassingly, it has taught me quite a lot. I love teaching Caterpillar about the science of the world while we’re out in it and, although this is a little too old for him at the moment, I’ll definitely be keeping it close at hand to try to ignite a love of physics in the future.
Author Dr Dominic Walliman (who, incidentally, has his own You Tube channel) has written several Dr Astro Cat books with amazing illustrator Ben Newman, and plans more for the future too. I’d be interested to see the other topics they cover.
You can by your own copy from Flying Eye books here. | physics |
http://www.geekmaxreview.com/2011/08/new-cosmos-for-new-generation.html | 2019-07-24T07:21:51 | s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195531106.93/warc/CC-MAIN-20190724061728-20190724083728-00492.warc.gz | 0.969338 | 341 | CC-MAIN-2019-30 | webtext-fineweb__CC-MAIN-2019-30__0__200695589 | en | Cosmos from the early '80s. It was revolutionary for it's time and still holds up even now thirty years later in spite of countless new discoveries about our universe. The late Carl Sagan is one of the most beloved figures in modern science due to his passion about the universe and his talent for educating the world in such a way that everyone could easily understand. It's no secret that physics and math are hard, most people's eyes start to cross when explaining how a black hole works or what the 4th dimension looks like. Carl Sagan changed all that.
Well in the past 30 years we have learned lots of new things about the universe. It's time for a refresher course, some updated textbooks, a NEW Cosmos. But Carl Sagan is no longer with us, so who is up for such a daunting task? Neil deGrasse Tyson that's who! This man is one of my favorite people on the planet. He reminds me of Carl Sagan in almost every way but with twice the amount of passion. As the head of the Hayden Planetarium in New York, Neil Tyson not only makes it easy to learn about science, he even makes it damn entertaining as well. He's funny, charming, and one of the most intelligent people living on our Earth today.
It was announced a few weeks ago that the Fox network has plans to air a sequel to Cosmos, hosted by astrophysicist Neil deGrasse Tyson and produced by Sagan's widow Ann Druyan and Seth MacFarlane. Yes that Seth MacFarlane. If there was ever a time to get excited about science we are living in it now. The new Cosmos series is set to air sometime in 2013. | physics |
https://overheadproducts.com/return-policy/ | 2023-09-21T07:57:47 | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233505362.29/warc/CC-MAIN-20230921073711-20230921103711-00108.warc.gz | 0.958377 | 139 | CC-MAIN-2023-40 | webtext-fineweb__CC-MAIN-2023-40__0__276131364 | en | Each IR Mobile heater from Overhead Products has been tested and should arrive to you in perfect working order. If you are not completely satisfied with your order, we want to know about it! These heaters come with a Three-Year warranty from the factory on the heating element, and a one-year warranty on all other parts. Please remember that infrared heat works like the sun, it does not heat air, but it does heat people and the objects in the space, which will heat the air as is moves across the warmer surfaces. This heater will not blow or move the air. Please call us with any questions or concerns about your heater. (916) 944-3526. | physics |
https://www.legendsecurities.net/ebook/aluminium-alloys/ | 2023-03-28T03:06:59 | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948756.99/warc/CC-MAIN-20230328011555-20230328041555-00612.warc.gz | 0.914187 | 392 | CC-MAIN-2023-14 | webtext-fineweb__CC-MAIN-2023-14__0__14615870 | en | |Author||: Jürgen Hirsch|
|Publisher||: John Wiley & Sons|
|Total Pages||: 2580|
|Rating||: 4/5 (678 Downloads)|
Download or read book Aluminium Alloys written by Jürgen Hirsch and published by John Wiley & Sons. This book was released on 2008-11-17 with total page 2580 pages. Available in PDF, EPUB and Kindle. Book excerpt: Aluminium is a well established modern lightweight engineering and functional material with a unique combination of specific properties like strengh, formability, durability, conductivity, corrosion resistance, etc. It is present in many intelligent solutions in established markets like building, transport, packaging, printing, and many others, in our fast moving modern society. The various aluminium alloys can be processed quite efficiently in large quantities by conventional fabrication routes, as well as in special sophisticated forms and material combinations for highly innovative high-tec solutions and applications. This book contains latest information about all these aspects in form of the refereed papers of the II th International Conference on Aluminium Alloys "ICAA", where world-wide experts from academia and engineers from industry present latest results and new ideas in fundamental as well as applied research. Since 22 years the ICAA series provides scientists and engineers with a complete overview over the latest scientific and technological developments, featuring profound technology-based overviews and new innovative perspectives. This book is a reference for the scientific community as well as for the aluminium industry working on aluminium alloy development, processing and application issues. It gives a global perspective on the current focus of international research with emphasis on in-depth understanding of specific properties and applications of conventional and advanced aluminium alloys. | physics |
http://www.laumeiersculpturepark.org/calendar/2017/8/13/free-family-day | 2019-08-17T15:28:17 | s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313428.28/warc/CC-MAIN-20190817143039-20190817165039-00544.warc.gz | 0.919665 | 192 | CC-MAIN-2019-35 | webtext-fineweb__CC-MAIN-2019-35__0__239053723 | en | Join Laumeier Sculpture Park for a "Total Eclipse of the ART"! Enjoy activities that explore art through light, darkness and optical illusion. Customize your own pair of solar glasses in preparation for the total solar eclipse on Monday, August 21, and create prints using only sunlight. Participate in a scavenger hunt around the Park to observe how light and shadow affect the way you see the artworks. Laumeier’s Free Family Days provide families with a chance to bond while encouraging observation, imagination, curiosity and creativity. Activities are designed to be simple enough for ages 4 and up to enjoy, yet complex enough that more experienced young artists can take their projects to another level. Families have fun exploring new media and concepts while finding inspiration in Laumeier’s artworks and the natural environment. Event is located in the Kranzberg Education Lab. Free, all ages. Supported by a grant from the Windgate Foundation. | physics |
http://investors.quantum.com/phoenix.zhtml?c=69905&p=irol-newsArticle_Print&ID=1480395 | 2019-05-25T01:58:30 | s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257845.26/warc/CC-MAIN-20190525004721-20190525030721-00542.warc.gz | 0.901823 | 717 | CC-MAIN-2019-22 | webtext-fineweb__CC-MAIN-2019-22__0__184368557 | en | Presentation to Detail High-Speed Shared Workflows and Large-Scale, Multi-Tier Online Archiving in Advanced Data Center Environment
SAN JOSE, CA, Oct 07, 2010 (MARKETWIRE via COMTEX) --
Quantum Corp. (NYSE: QTM), the leading global specialist in backup, recovery and archive, announced today that CERN, the European organization for nuclear research, will present at Storage Networking World (SNW) Fall 2010 on how it accelerates research and discovery using Quantum's StorNext(R) data management software.
Collecting data from the collision between billions of particles requires not only a reliable and scalable IT infrastructure, but also an efficient and effective way to record and analyze the billions of bits of data generated every second. The ALICE project at CERN, one of the largest experiments in the world devoted to recounting the birth of matter, involves 1,000 physicists, engineers and technicians from 30 countries, resulting in unprecedented demands for data acquisition, selection, transfer, storage and handling.
At SNW, Pierre Vande Vyvre, project leader for ALICE data acquisition, will discuss steps his team took to implement the right storage solution, the specific needs of high-performance computing environments, and how CERN was able to architect its storage system to fully understand and exploit massive amounts of data for new scientific discoveries.
Session Title: Beyond High Performance Computing: What Matters to CERN
Date: Wednesday, Oct. 13, 2010
Time: 3:05 p.m. to 3:50 p.m. CDT
Location: The Gaylord Texan in Dallas, TX
For more information about Quantum backup, recovery and archive products, visit Quantum's booth #118 at SNW Fall 2010.
About CERN CERN, the European Organization for Nuclear Research, is one of the world's largest and most respected centres for scientific research. Its business is fundamental physics, finding out what the Universe is made of and how it works. At CERN, the world's largest and most complex scientific instruments are used to study the basic constituents of matter -- the fundamental particles. By studying what happens when these particles collide, physicists learn about the laws of Nature. Founded in 1954, the CERN Laboratory sits astride the Franco-Swiss border near Geneva. It was one of Europe's first joint ventures and now has 20 Member States. For more information, please visit cern.ch.
About Quantum Quantum Corp. (NYSE: QTM) is the leading global storage company specializing in backup, recovery and archive. Combining focused expertise, customer-driven innovation, and platform independence, Quantum provides a comprehensive, integrated range of disk, tape, and software solutions supported by a world-class sales and service organization. This includes the DXi(TM)-Series, the first disk backup solutions to extend the power of data deduplication and replication across the distributed enterprise. As a long-standing and trusted partner, the company works closely with a broad network of resellers, OEMs and other suppliers to meet customers' evolving data protection needs. Quantum Corp., 1650 Technology Drive, Suite 800, San Jose, CA 95110, (408) 944-4000, www.quantum.com.
Quantum, the Quantum logo and StorNext are registered trademarks of Quantum Corporation and its affiliates. DXi is a trademark of Quantum Corporation. All other trademarks are the property of their respective owners.
SOURCE: Quantum Corporation | physics |
http://lists.bgu.ac.il/pipermail/phys-seminars/2018/002881.html | 2018-03-17T19:56:34 | s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645310.34/warc/CC-MAIN-20180317194447-20180317214447-00222.warc.gz | 0.832218 | 199 | CC-MAIN-2018-13 | webtext-fineweb__CC-MAIN-2018-13__0__201069281 | en | [Phys-seminars] 2018-03-06 Physics Colloquium
okrichev at bgu.ac.il
Wed Feb 28 21:23:30 IST 2018
PLACE: Nanotechnology institute building (#51) room 15
Science with Gravitational Lensing
Adi Zitrin, BGU
Gravitational lensing (GL) is becoming a standard tool in Astronomy. Different regimes of GL, for example weak-lensing, strong-lensing or micro-lensing, address different astrophysical problems, ranging from probing the underlying cosmology to the detection of extrasolar planets. In this talk I will review (some of) the more prominent science with GL - its history, present, and future efforts, as well as key results, mainly with respect to insights on dark matter and on the first galaxies in the universe. I will highlight our contribution to this field.
http://davishardy.com/Harbor_blog/week5_class2.html | 2024-04-25T14:14:45 | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712297295329.99/warc/CC-MAIN-20240425130216-20240425160216-00655.warc.gz | 0.972095 | 494 | CC-MAIN-2024-18 | webtext-fineweb__CC-MAIN-2024-18__0__133071260 | en | After review, the team created a new plan of action that will hopefully put us back on track.
Wednesday - Feb 7
During review, we noticed that there was a lot more feedback on the ground-based version of the shot. Due to that observation, we've decided to pursue that direction. We've also re-assessed priorities in order to account for dependencies. During the afternoon, I was able to create a new version of the camera that followed Kyle's framing notes.
Thursday - Feb 8
Today, we checked in with each other in class and reviewed everyone's contributions. We also made a long term plan on how to finish our advertisement. I was tasked with improving the portal effect. I also talked with our animator and made sure to establish our animation timeline. We also established that we needed to switch to a shading-based version of the nanotech effect due to our time constraints.
Friday - Feb 9
For Friday, I created an iteration of the portal effect with a larger quantity of significantly smaller cubes. I then continued that work by reducing the influence of the POP Curve Force node that I was previously using. I paired this with a new POP Advect by Volume node that adds disturbance in a select area of the simulation. I wasn't able to use the POP Wind node since it is more difficult to localize the effect of the wind. In addition to the simulation adjustments, I halved the intensity of the incandescent shader that I used for the pixel cubes. I made this change so that the portal contributes less lighting to the scene.
After the adjustments were made, I cached them out and included them in our main rendering scene so the effect could be rendered in context.
At 8pm, our group had a meeting to discuss progress. After the meeting, Josephine, Chaithanya, and I worked at Monty. During this session, I created the shading-based version of the effect.
To create the pixel effect, I used various ramps to control where various parts of the shader should be present. I also used a point attribute that allows for animation of the effect. Currently, the point attribute is animated globally. In order to create the desired animation, a point attribute has to be animated per instanced point.
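As a rough sketch of what animating the attribute per instanced point could look like, here is a small Python SOP snippet. It is only an illustration: the attribute name, fade length, and random start offsets are placeholder assumptions, not the values from our actual file.

```python
# Hypothetical Python SOP sketch: give every point its own start frame so the
# "pixelate" attribute ramps up per instanced point instead of globally.
import random
import hou

geo = hou.pwd().geometry()

# Create the float point attribute the shader would read, if it doesn't exist.
if geo.findPointAttrib("pixelate") is None:
    geo.addAttrib(hou.attribType.Point, "pixelate", 0.0)

frame = hou.frame()
fade_length = 24.0  # frames for one point to go from 0 to 1 (assumed)

for point in geo.points():
    random.seed(point.number())          # deterministic offset per point
    start_frame = random.uniform(0, 48)  # stagger the start frames (assumed range)
    t = (frame - start_frame) / fade_length
    point.setAttribValue("pixelate", max(0.0, min(1.0, t)))
```

The shader's ramps would then look up this per-point value instead of the single global parameter.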
It was recommended that I add breakup to the effect. I elected to postpone this change so I could fix issues in our file that were blocking others in the group.
https://www.deutscher-sachbuchpreis.de/en/the-jury | 2021-07-26T04:28:48 | s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152000.25/warc/CC-MAIN-20210726031942-20210726061942-00081.warc.gz | 0.767798 | 226 | CC-MAIN-2021-31 | webtext-fineweb__CC-MAIN-2021-31__0__73630487 | en | Dr Jeanne Rubner
Jeanne Rubner, born in 1961, is head of the “Wissen und Bildung aktuell” (Current Knowledge and Education) department at Bayrischer Rundfunk. Previously, she worked at Süddeutsche Zeitung on the topics of science and domestic and foreign policy. She studied physics and history of science in Regensburg, Strasbourg and Seattle and did her doctorate in theoretical biophysics at the Technical University of Munich, where she now has a teaching position. She is the author of numerous books about science, energy and educational policy. She has been awarded the Universitas-Preis für Wissenschaftsjournalismus der Hanns-Martin-Schleyer-Stiftung (Universitas Prize for Science Journalism of the Hanns Martin Schleyer Foundation) and the Medaille für Naturwissenschaftliche Publizistik der Deutschen Physikalischen Gesellschaft (Medal for Natural Science Journalism of the German Physical Society). | physics |
https://technologyguiders.com/vacuum-double-glazing-manufacturers-in-china/ | 2024-04-23T03:42:23 | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818464.67/warc/CC-MAIN-20240423033153-20240423063153-00125.warc.gz | 0.950292 | 1,055 | CC-MAIN-2024-18 | webtext-fineweb__CC-MAIN-2024-18__0__50750787 | en | Vacuum double glazing manufacturers in China have started to gain international recognition in recent years. Using the newest technology in the industry, these companies can now offer a wide range of products that can satisfy the needs of consumers. The Chinese market for the technology has also seen an uptick, primarily due to the increasing demand for energy efficiency and cost-effective buildings. Moreover, the vacuum technology can be incorporated into a wide range of home improvement projects, including exterior walls, roofs and interior walls.
The Pilkington Spacia vacuum glazing system provides an attractive solution to modern comfort requirements. It is suitable for original and secondary glazing. This innovative technology blends seamlessly into traditional properties. Its thin profile and high performance make it ideal for retrofit applications. The 0.2-mm vacuum cavity between the two panes of glass creates a space that prevents thermal transfer.
Its low emissivity outer layer and clear float glass inner pane combine to provide excellent thermal performance. The acoustic benefits of the Pilkington Spacia unit are as good as conventional double glazing. These features mean that the system can be installed into existing window frames without the need for significant replacement. It is also an ideal option for historical preservation projects.
A key challenge in developing a good edge seal is to develop a product that is resistant to the vacuum collapse force. The LandVac sealing system has achieved an award-winning design that is lightweight and looks great when it is installed in a window frame.
HaanGlas is a new generation of vacuum insulated glass. Its patented technology offers superior performance in thermal and sound insulation. It also has a long life span. This makes it ideal for retrofit applications.
HaanGlas boasts a dazzling array of features, including a weighted sound reduction index exceeding 39 dB, an ift thermal transmittance test that exceeded the usual UL/ENV quotas, and a dew point that drops below -20degC. Other notable performance tidbits include its sonic performance, an acoustic certification, and its patented acoustic foam.
The seal in HaanGlas is an intelligent system that maintains a consistent level of high-vacuum between layers, while at the same time allowing the glass to be lightweight. It also reduces thermal contraction while preserving strength and durability. Moreover, it looks good when installed in window frames.
Although it has been around for a while, the HaanGlas name has been associated with a number of innovative products and technologies. These include a series of authoritative certifications, a new design that’s lighter and thinner than insulated glass, and a world-class manufacturing facility.
Vacuum double glazing from Chinese manufacturers is a lightweight glazing that prevents heat loss and reduces energy costs. It can be used to improve the thermal performance of older buildings and as a lightweight secondary glazing for new construction. The benefits of vacuum glass include improved acoustic insulation, better light transmission, better sound insulation, and the ability to keep the original window frame. The technology is also becoming more widespread in the market.
The insulating property of vacuum glass is due to the space created between the two glass panes by the vacuum. This space is 0.3mm and is a major feature of the new generation of insulating glass. Compared to conventional double glazing, FINEO offers a better aesthetic finish and thermal performance. In addition, it is more durable. FINEO glass is also easier to install. The other major advantage of vacuum glass is its safety. Since the glass is made from toughened glass, it will not break. As a result, it is more reliable than conventional gas-filled double glazing.
Challenges Of Commercially Available Vacuum-Insulated Glazing
Tempered vacuum double glazing has a number of advantages over conventional double glazing. The most obvious is its thermal performance. Since it has a lower U-factor, it can offer an R-value that is much higher than that of traditional double glazing. However, the benefits are not enough to overcome the problems that vacuum insulating glazing has to face. First, there is the problem of bowing. This can lead to visual distortions and a weak seal. It also reduces the life of the product.
Another challenge is leaky seals at the perimeter of the glazing. These allow moisture and insulating gases to escape. Fortunately, manufacturers have found a solution. They use tiny spacers, which are placed 1 to 2 inches apart. When viewed closely, they form a faint matrix. Besides improving heat transfer, this technology has the added benefit of reducing conduction heat losses. The gap between the panes of glass can be reduced to 0.2 mm. That is just half the thickness of conventional double glazing.
In fact, China Vacuum Insulated Glass is a type of glass used in buildings. It is a lightweight glazing with excellent energy performance. This is the next generation of energy efficient glazing. Using vacuum insulated glass will lower the self-weight of buildings, reduce heat transfer, and insulate sound. In addition, it will reduce energy bills.
https://resourcesforteaching.com.au/all-resources/the-physical-properties-of-materials-year-4-chemical-sciences/ | 2023-12-10T00:57:39 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100989.75/warc/CC-MAIN-20231209233632-20231210023632-00150.warc.gz | 0.81885 | 328 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__200864132 | en | Students will learn about:
- What the term ‘properties’ in science means
- A range of physical properties that materials can be grouped and classified
- Electrical, thermal and acoustic conductors and insulators
- Transparent, translucent and opaque properties
- Hardness, flexibility, absorbent and waterproof properties
Information covered in the informational slides:
- What are properties?
- How can we find the properties of materials?
- Observable physical properties
- Recognising electrical, thermal and acoustic conductors
- Electrical conductors and insulators
- Electrical conductor or insulator (classifying task)
- Did you know: Seawater is a conductor of electricity
- Thermal conductors and insulators
- Thermal conductor or insulator (classifying task)
- Which materials are absorbent? (classifying task)
- Which materials are waterproof? (classifying task)
- Transparent, translucent and opaqueness
- Transparent, translucent or opaque (classifying task)
- Which materials are hard? (classifying task)
- Which materials are flexible? (classifying task)
This resource includes 5 worksheets that improve knowledge about the physical properties of a range of common materials.
An answer key is included for easy marking!
Worksheet 1: Close Passage – The Properties of Materials
Worksheet 2: Multiple Choice & Short Answer Questions
Worksheet 3: Transparent, Translucent and Opaqueness
Worksheet 4: Electrical Conductors and Insulators
Worksheet 5: Grouping and Classifying Materials According to Their Properties | physics |
http://www.makulilo.org/2011/09/astromundus-international-masters.html | 2018-02-21T07:20:43 | s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813571.24/warc/CC-MAIN-20180221063956-20180221083956-00641.warc.gz | 0.87366 | 583 | CC-MAIN-2018-09 | webtext-fineweb__CC-MAIN-2018-09__0__207996107 | en | Open to: Students of all nationalities who hold a Bachelor degree in Physics, Astronomy, Astrophysics, or Mathematics and have a good and certified knowledge of the English language.
Scholarship: The fellowships cover tuition fees and student housing at the different sites for the whole duration of 2 years.
AstroMundus is a 2-year Erasmus Mundus Masters Course (120 ECTS) in Astronomy and Astrophysics offered by a consortium of 5 partner universities in 4 different countries: Austria, Italy, Germany, and Serbia (University of Innsbruck, Austria; University of Padova, and University of Rome Tor Vergata, Italy; University of Göttingen, Germany; University of Belgrade, Serbia).
The main objective is to provide top-ranked students with an excellent background in Astrophysics, to introduce them to the world of modern astrophysical research, and foster their future career in this field. At the same time, in the spirit of the Erasmus Mundus programmes, they promote cultural exchanges between Third Country and European students and academics. AstroMundus students carry out their master studies in at least 2 and up to four of these countries, in a stimulating and scientifically excellent international environment.
AstroMundus offers an excellent educational level in all branches of Astrophysics, as ensured by the wide variety of expertise in the field covered by this international partnership. The main topics covered by the Master programme are:
- Galactic Astrophysics (the Sun and the Solar system, the Milky Way, stellar evolution, the interstellar medium)
- Extrasolar planets
- Extragalactic Astrophysics (galaxies, galaxy evolution, galaxy clusters, intra-cluster medium, star formation)
- Active Galactic Nuclei (including accretion theory, relativistic jets, modelling)
- Cosmology (including observational cosmology, galaxy surveys, gravitational lensing, very early universe)
- Particle Cosmology
- Astroparticle Physics
- Gravitational waves
- Observational astrophysics from the ground and from space
- Computational astrophysics (N-body simulations, magneto-hydrodynamic simulations)
AstroMundus Masters Course edition starting in September 2012:
The call for applications is open from August 1st to November 30th, 2011 for the course starting in September 2012.
Read carefully the section "Who" and find out whether you are an eligible applicant. If you are an eligible applicant, prepare all the application documents according to the information given in the section "How" and upload your final application file (section "Forms").
University of Innsbruck Institute of Astro- and Particle Physics Technikerstrasse 25 A-6020 Innsbruck
Please send your inquiries via e-mail! E-mail: astromundus (at) uibk.ac.at | physics |
https://discountmoldremovalofhenderson.com/thermal-imaging/ | 2024-04-25T05:06:46 | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712297284704.94/warc/CC-MAIN-20240425032156-20240425062156-00739.warc.gz | 0.875466 | 407 | CC-MAIN-2024-18 | webtext-fineweb__CC-MAIN-2024-18__0__11446101 | en | Thermal imaging is a valuable technology used for detecting and assessing various issues related to water damage, moisture intrusion, and insulation deficiencies. Here's how it works and how it's applied:
Detection of Water Damage and Leaks:
Thermal imaging cameras capture images based on heat rather than visible light. This allows professionals to detect temperature variations caused by water damage or moisture intrusion, even in areas not readily visible to the naked eye.
By identifying these temperature anomalies, thermal imaging helps pinpoint potential sources of leaks or areas affected by water damage, facilitating prompt mitigation efforts.
Identification of Insulation Deficiencies:
Thermal imaging can reveal areas in buildings where insulation is lacking or improperly installed. Discrepancies in temperature patterns indicate areas where heat loss or gain occurs due to insufficient insulation, helping to improve energy efficiency and comfort.
By identifying insulation deficiencies, thermal imaging guides insulation installation or retrofitting projects, ensuring optimal thermal performance and reducing energy consumption.
Detection of Cold Spots and Mold Risk:
Cold spots detected by thermal imaging behind walls or within building structures may indicate areas prone to moisture accumulation. These damp environments create favorable conditions for mold growth.
By identifying cold spots early, thermal imaging helps mitigate mold risks by enabling proactive measures such as moisture remediation, improved ventilation, or targeted insulation installation to prevent condensation buildup.
One of the key advantages of thermal imaging is its non-destructive nature. Unlike invasive inspection methods that may require structural interventions, thermal imaging allows for comprehensive assessment without causing damage to building materials or finishes.
This non-destructive approach minimizes disruption to occupants and reduces the need for costly repairs associated with traditional inspection techniques.
In summary, thermal imaging is a powerful tool for detecting water damage, locating leaks, assessing insulation effectiveness, and mitigating mold risks in buildings. By providing visual insights into temperature variations, thermal imaging enables proactive maintenance and remediation strategies, ultimately enhancing building performance, occupant comfort, and indoor air quality. | physics |
https://www.artisanclinic.sg/treatments/dermav-laser | 2024-02-22T17:12:33 | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473824.13/warc/CC-MAIN-20240222161802-20240222191802-00623.warc.gz | 0.923696 | 839 | CC-MAIN-2024-10 | webtext-fineweb__CC-MAIN-2024-10__0__204335434 | en | The DermaV laser is the most potent and cutting-edge vascular laser available. It is used to target and get rid of undesirable pigmentation and blood vessels.
The versatility of DermaV Laser's wavelength and laser delivery method makes it one of the most effective and least painful lasers we have today. Using cryogen cooling, there is maximum protection for the skin while providing a numbing effect. No topical numbing or gel is needed for treatments. We consider this Green Laser the next generation V-Beam and Excel V.
The Lutronic DermaV® laser is a very versatile solid-state potassium-titanyl-phosphate (KTP) laser that emits green light to target and remove unwanted blood vessels and pigmentation while rejuvenating photodamaged skin.
The Lutronic DermaV® laser is able to accomplish this because hemoglobin in our blood vessels absorbs green light, as does melanin pigment. The DermaV® emits a wavelength that is strongly absorbed by melanin, but also protects the surface of the skin using a highly efficient cryogen spray cooling device.
In addition, the Lutronic DermaV® laser emits 3 types of pulses. What this means for our patients is that we can treat a large variety of vascular conditions as well as photoaging. The DermaV® almost always treats facial redness and photoaging with no bruising.
First double wavelength 532nm & 1064nm platform
DermaV™ is a long-pulsed 532nm KTP & 1064nm Nd:YAG Laser with the latest technology such as Cryogen Cooling, Variable Sequential Pulsing Technology, and Real-Time Temperature Sensing, which incorporates 25 years of R&D know-how from Lutronic.
Difference in depth of penetration
DermaV laser system is intended for use in the medical specialty of dermatology requiring selective photothermolysis of target chromophores in soft tissue.
The 532 nm wavelength is intended to be used by a trained physician as treatment of the following:
- Benign vascular lesions, including Spider veins of the lower extremities
- Benign cutaneous lesions, including Erythematous scars
The 1064 nm wavelength is intended to be used by a trained physician as treatment of the following:
- Benign vascular lesions, including Port wine stains
Long lasting results, up to 3 years.
At The Artisan Clinic, all our doctors are well-trained professionals.
We keep ourselves up to date with the latest trends and updates in the aesthetic medicine field by constantly being involved in and participating in seminars and conferences around the world.
Our Medical Director, Dr. Isaac Wong is an international trainer, and is often invited to speak in international conferences.
All treatments by The Artisan Clinic doctors are carefully curated by our Medical Director, Dr. Isaac Wong.
One of the unique features of the DermaV® laser is that treatment of facial redness/rosacea or photodamage doesn’t result in bruising. People still experience swelling and some pinkness following treatment as with other lasers.
Difference from other vascular lasers
There are a few things that set the DermaV® laser apart from other vascular lasers. It emits green laser light and utilizes cryogen spray cooling to protect the surface of the skin. In addition, the DermaV® can emit many types of pulses from a single pulse to pulses composed of many ‘pulselets’, all with the touch of a button.
Solid-state laser that uses 2 crystals to create green laser light
The first crystal is a neodymium:yttrium-aluminum-garnet (Nd:YAG) crystal that creates infrared energy that is then converted to green light by a KTP crystal. The energy is delivered in pulses long enough to target unwanted blood vessels and to stimulate skin remodeling, but not long enough to hurt the skin.
You can resume your daily activities immediately after your procedure. | physics |
https://techmountainman.blog/2017/10/11/insulation-and-energy-usage-understanding-the-practical-significance-of-r-value/ | 2023-06-01T16:05:13 | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647895.20/warc/CC-MAIN-20230601143134-20230601173134-00452.warc.gz | 0.924848 | 1,208 | CC-MAIN-2023-23 | webtext-fineweb__CC-MAIN-2023-23__0__258564609 | en | I’ve been quite focused on thermal gain/loss the last couple of years, as I started grappling with the reality of keeping a fiberglass travel trailer livable in a ‘four-season’ context, i.e. when it gets as cold as -10F at night, and more recently as I’ve been gradually outfitting a 8×12 cedar shed as an office and personal refuge.
Understanding what R-value actually means, in a practical sense, was difficult until I found the following formula, and started applying it to a known ability to generate heat:
R-value = (total surface area * (inside temperature – outside temperature)) / BTUs consumed
Flipping it around, the heat required to keep a given structure at temperature (using an R-value averaged across the total surface area) can be estimated by slightly rearranging this formula:
BTUs required = (total surface area * (inside temperature – outside temperature)) / R-value
Here’s a good example:
total surface area = imagine the inside of the structure you’re heating as a 6-sided box. Add up the square footage of all six sides of this box, since heat generated will radiate out all sides: ceiling, floor, and walls + windows + doors. In the case of my 17′ fiberglass travel trailer, this is a minimum of 423 sq. ft.
temperature delta = I want to keep the average temperature at the walls at 55F; actual air temperature ends up being 5-10F warmer than this, so reasonably comfortable with an extra layer of clothing. The outside temperature for this example will be 32F. This gives us the temperature delta of 55F – 32F = 23F
BTUs consumed = through observation and experiment, I’ve learned that I need to run a 1000w mica panel heater at near maximum thermostat setting, so let’s say it’s going to be on 90% of the night at the coldest point, to maintain a 23F temperature delta. Multiply watts by 3.4 to get BTUs, then add approximately 200 BTUs for a sleeping adult, since I am adding a small amount to heating the total space when I’m in it. So, we have ((1000w * 90%) * 3.4) + 200 = 3260 BTUs
Now, we’ll plug these numbers into the formula to find out what the average R-value of the inside of my trailer is:
(423 sq. ft. * 23F) / 3260 BTUs = R 2.98
Chances are I have a better R-value than that, overall, because the inside of the trailer is a fairly complex surface, with cabinetry, benches, and so forth presenting a much greater total surface area than just flat surfaces. The 423 sq. ft. is a conservative estimate generated by treating the inside of the trailer as if it was an empty box. The entire floor and area underneath the bed platform is covered with a close-fitted 1-inch thick layer of XPS hard foam (R5), and then a half inch of neoprene mat on top of that, for a total of around R6.5 on the floors. The walls are rated R5, and any area covered by cabinetry might get a slight bump from the added bulk. On the other hand, experiments with a infrared thermometer to guage heat loss winter before last showed that the wheel wells and area around the fridge’s external access and venting is close to R1, and thus prime candidates for some fine-tuning of insulation in nooks and crannies.
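If you want to poke at the numbers yourself, the two formulas drop straight into a few lines of Python. This is just a sanity-check sketch using the same units as above (square feet, °F, BTUs per hour), reproducing the trailer example:

```python
def estimate_r_value(area_sqft, delta_t_f, btu_per_hr):
    """Average R-value implied by a measured heating load."""
    return (area_sqft * delta_t_f) / btu_per_hr

def btu_required(area_sqft, delta_t_f, r_value):
    """Heat needed to hold a given temperature difference."""
    return (area_sqft * delta_t_f) / r_value

# The trailer example: 423 sq. ft., 55F walls vs 32F outside, 3260 BTU/hr consumed.
print(estimate_r_value(423, 55 - 32, 3260))  # -> ~2.98
print(btu_required(423, 55 - 32, 2.98))      # -> ~3265 BTU/hr, same ballpark
```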
When I’m out and about in off-grid mode, I have a maximum of 9400 BTUs of heating available, 9200 from the 12000 BTU propane furnace, and 200 from me. The other 2800 BTUs from the furnace end up as waste heat going out the furnace exhaust, which is kind of a shame. So, if I wanted, I could even use the formula to figure out the coldest possible outside temperature at which this amount of heat generation could keep me comfortable. Let’s also pick a somewhat more reasonable number for surface area, since I clearly have an average R-value closer to R5 than R3. So, we’ll estimate 700 sq. ft. of total surface area to be heated, with the corresponding adjustment to R-value.
9400 BTUs / (700 sq. ft. / R 4.93) = 66F
So, I know that the maximum cold temperature I can handle with this combo of R-value and heat generation capability is 55F – 66F = -11F
Just for fun, and because I’ve actually used the technique, we can add in a small gas-powered generator (a Champion 73536i, rated at 1700w running), running the mica panel heater at its low 500w setting. This allows the generator to be run in ‘eco’ mode, and lets me stretch a gallon of gas 7-8 hours. The bonus is that this configuration, with the generator 120v output run into the trailer’s 30A electrical system, provides a healthy supplemental charge to the two deep-cycle batteries in my setup.
500w * 3.4 = 1700 BTUs
1700 BTUs + 9400 BTUs = 11,100 BTUs
11,100 BTUs / (700 sq. ft. / R 4.93) = 78F
Now, I have the ability to deal with temperatures down to -23F. The main drawback is that the supplemental 1700 BTUs come at a very high price, as the gallon of gas will effectively double my heating costs for that night, making gas about 5.5x more expensive than propane for this application. | physics |
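Rearranged for temperature, the same math gives a quick "how cold can I go" helper. The sketch below just reproduces the two numbers worked out above, using my estimated 700 sq. ft. and R 4.93:

```python
def coldest_outside_temp(btu_per_hr, area_sqft, r_value, wall_temp_f=55):
    """Lowest outside temperature at which btu_per_hr can hold the walls at wall_temp_f."""
    max_delta_f = (btu_per_hr * r_value) / area_sqft
    return wall_temp_f - max_delta_f

print(coldest_outside_temp(9400, 700, 4.93))   # -> about -11F (furnace plus me)
print(coldest_outside_temp(11100, 700, 4.93))  # -> about -23F (adding the 500w heater)
```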
https://emma-lewis.com/about/ | 2023-06-04T13:05:59 | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649986.95/warc/CC-MAIN-20230604125132-20230604155132-00503.warc.gz | 0.94821 | 392 | CC-MAIN-2023-23 | webtext-fineweb__CC-MAIN-2023-23__0__180674680 | en | About this site
Welcome to my website. If you were wondering what all the "theme" nonsense is, or why your browser asked you to share your location, it's because I discovered Cosine Kitty's Astronomy Engine and had to do something with it. The engine does a huge amount, but I'm using it to get a position for the sun and moon and put them on your screen.
If you're visiting the site in "Sky mode" you'll hopefully see a sun or a moon or both on the page (you might also be visiting when they've both set, in which case sorry about that — please come back later). The site will be dark when the sun is set and light when the sun has risen. The sun and moon are mapped fairly crudely onto the screen using Azimuth and Altitude values from the astronomy engine. I map altitude to the vertical direction of the screen (the sun sets at the bottom of the screen, whereas the top of the screen is reserved for when the sun is directly overhead). The azimuth is a little stranger to map because it's a 360 degree value which I'm mapping on a horizontal plane, so I decided to map the horizontal screen as West to East — for this reason a sun with an azimuth of 85 degrees would appear in the same place as a sun at the same altitude with an azimuth of 105 degrees.
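If you're curious what that boils down to in code, here's a rough sketch of the mapping. It's written in Python for readability rather than the JavaScript the site actually runs, and the sine-based east-west projection is just my shorthand for the folding described above, not the site's exact code:

```python
import math

def sky_to_screen(azimuth_deg, altitude_deg, width_px, height_px):
    """Map an (azimuth, altitude) pair onto rough screen coordinates.

    Horizontal: the east-west component of the azimuth, so due west lands at
    the left edge and due east at the right edge; directions mirrored about
    the north-south line share an x position. Vertical: the horizon sits at
    the bottom of the screen and straight overhead (90 degrees) at the top.
    """
    east_west = math.sin(math.radians(azimuth_deg))  # -1 = due west, +1 = due east
    x = (east_west + 1.0) / 2.0 * width_px
    y = height_px - (max(0.0, min(90.0, altitude_deg)) / 90.0) * height_px
    return x, y
```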
I also use moon phase calculations from the Astronomy Engine to draw the moon in the correct phase so it should map pretty closely to what you see in the sky.
Of course if that is all annoying you can use the "Site Options" button at the top of the screen to select "Day theme" or "Night theme" instead. In that case you won't see any celestial objects on screen. I'll save your choice on your device so shouldn't need to be changing themes often (unless you want to). | physics |
http://edufive.com/seminartopics/electrical/EE23.html | 2017-04-26T05:58:31 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121165.73/warc/CC-MAIN-20170423031201-00172-ip-10-145-167-34.ec2.internal.warc.gz | 0.930264 | 166 | CC-MAIN-2017-17 | webtext-fineweb__CC-MAIN-2017-17__0__312870571 | en | The recent growth of power circuit capacities has caused fault currents to increase. Since the protection of power systems from the fault currents is very important, it is needed to develop a fault current limiter.
A fault current limiter is required to assure (1) rapid reaction to fault currents, (2) low impedance in normal operation and (3) large impedance during fault conditions. A superconducting fault current limiter (SCFCL) can meet these requirements: superconductors, because of their sharp transition from zero resistance at normal current to finite resistance at higher current densities, are tailor-made for use in FCLs.
The purpose of this paper was the study of surge current protection using superconductors. The SCFCL offers significant advantages to power systems and opens up a major application for superconducting materials.
http://www.unido.org/index.php?id=4835&ucg_no64=1/data/DATA1/doc/index.php&id=1000419 | 2015-10-05T01:36:21 | s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736676547.12/warc/CC-MAIN-20151001215756-00174-ip-10-137-6-227.ec2.internal.warc.gz | 0.891499 | 118 | CC-MAIN-2015-40 | webtext-fineweb__CC-MAIN-2015-40__0__26531367 | en | UNIDO International Centre for Hydrogen Energy Technology, Turkey
The International Centre for Hydrogen Energy Technologies (ICHET) is a UNIDO project with the mission of demonstrating viable applications of hydrogen energy technologies.
It aims at facilitating their widespread use in the context of developing countries. The Centre pursues its objectives by providing a comprehensive set of services that include:
- Technical and financial support to the development and implementation of hydrogen energy systems demonstration projects;
- Applied R&D for developing countries;
- Training and education programmes;
- Conferences and workshops. | physics |
https://www.blanchardgold.com/market-news/the-three-ways-gold-is-driving-clean-technologies/ | 2023-12-07T06:02:32 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100650.21/warc/CC-MAIN-20231207054219-20231207084219-00358.warc.gz | 0.931072 | 585 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__52356692 | en | When we think of clean tech we often envision natural elements like water powering a generator, or sunlight charging solar cells. However, another natural element, gold, is also driving changes that allow industries to thrive with less of an impact on the environment. Here, we look at three examples where gold is playing a pivotal role in creating a greener future according to the World Gold Council.
Today, the third most widely produced synthetic polymer is poly vinyl chloride (PVC) which is used to make pipes and insulation for electrical cables. However, making PVC is a labor-intensive process which first requires the manufacturing of vinyl chloride monomer. Doing so requires a catalyst. The problem: the most commonly used process relies on a mercury-based catalyst. The prevalence of PVC in construction and plumbing today means mercury is needed in high quantities. This demand presents environmental problems given the problems mercury causes wildlife like fish. Moreover, mercury is toxic to the brain and spinal cord making its disposal particularly dangerous. However, recently, gold catalyst processes have emerged offering a solution to this problem. As the World Gold Council reports, “This breakthrough provides an opportunity for VCM producers to remove a highly toxic material from their process in a cost-effective manner. Depending on uptake, this application could generate total demand in the region of 1-5 tonnes of gold.”
Clean Fuel Cells
Gold is also helping bolster the adoption of electricity-producing fuel cells. This innovation limits toxins in the environment because the only byproduct is water. Like the manufacturing of PVC, a catalyst is needed. However, in this case a special catalyst is required, one that can function at low temperatures. Gold is the solution. We’re likely to see a significant ramping up in the manufacturing of these cells as more industries and countries take ownership of the future of the planet. The result could be a further increase in demand for gold.
More Efficient Solar Power
In short: gold offers improved efficiency in solar panels. For solar cells to effectively harness the sun’s power they require gold nanoparticles. Why? Traditional designs require a web of wiring which sits on top of the solar cell. These wires, however, can block up to 10% of the sun’s light. Researchers have discovered a way to redesign the panels without the wires. Instead, scientist can place a thin film of gold on a silicone sheet. This film allows more light to penetrate and some estimate near-term designs to offer up to a 20% boost in efficiency. Therefore, solar-powered technologies will likely expand their reliance on gold. Moreover, in newer designs gold is also needed for the electrodes.
These three emerging technologies illustrate the versatility of gold. While the element hasn’t changed we are constantly changing the way we can use it and finding new opportunities to expand its value in various industries.
Can bitcoin do any of that? | physics |
https://greentechme.com/product/solar-panels/ | 2023-12-06T15:11:22 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100599.20/warc/CC-MAIN-20231206130723-20231206160723-00073.warc.gz | 0.932887 | 300 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__83184196 | en | Photon Absorption: When sunlight hits the PV panels, it consists of tiny particles of light called photons. These photons are absorbed by the solar cells, which are typically made of semiconductor materials, most commonly silicon.
Electron Excitation: The energy from the absorbed photons “excites” the electrons in the semiconductor material, causing them to break free from their usual positions in the atoms of the material.
Electric Current Generation: The movement of these excited, or “free,” electrons creates an electric current. This current can be harnessed and directed through electrical circuits within the solar panels.
Direct Current (DC) to Alternating Current (AC): The electric current generated by the solar panels is in the form of direct current (DC). In most cases, homes and businesses use alternating current (AC). An inverter is employed to convert the DC into AC, making it compatible with the electrical grid and your appliances.
Electricity Distribution: The generated AC electricity can now be used to power your home, business, or any other electrical devices. Any excess electricity can be fed back into the grid or stored in batteries for later use.
The efficiency and performance of solar PV panels are influenced by factors such as the angle of the panels, the amount of sunlight they receive, and their overall quality. Solar panels have become increasingly efficient over the years, and they continue to play a vital role in the transition to cleaner and more sustainable energy sources. | physics |
http://www.chrishattery.com/projects/ohiometalspray/machining.php | 2022-08-15T12:44:54 | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572174.8/warc/CC-MAIN-20220815115129-20220815145129-00225.warc.gz | 0.927254 | 696 | CC-MAIN-2022-33 | webtext-fineweb__CC-MAIN-2022-33__0__192575618 | en | Metalizing or “thermal spraying” involves projecting small molten particles onto a prepared surface where they adhere and form a continuous coating. Upon contact they mechanically bond to the substrate and then onto each other. The heat energy in the molten particles is relatively small compared to the size of the sprayed component, so the process imparts very little heat to the substrate. As the temperature increase of the coated parts is minimal, heat distortion is not normally experienced. This is a major advantage over welding or hot-dipped galvanizing. We utilize three different Metalizing processes depending on your specific needs. The following are further explanations of the processes.
HVOF (High Velocity Oxygen Fuel)
High velocity oxygen fuel (HVOF) systems use a high velocity jet into which the spray powder particles are injected, the jet being formed by the combustion of oxygen and fuel, which is heated and accelerated towards the component being thermal spray coated.
HVOF coatings are very dense, strong and show low residual tensile stress or in some cases compressive stresses, which enable thick coatings to be applied in comparison with other metalspray processes. The very high kinetic energy of particles striking the substrate surface does not require the particles to be fully molten to form high quality HVOF coatings. This is certainly an advantage for the carbide cermet type of HVOF coatings.
HVOF coatings are used in applications requiring the highest density and strength not found in most other thermal spray processes. New applications are being discovered, including as a Chrome Plating replacement.
View Info Graphic of HVOF
Plasma is a term to describe gas, which has been raised to such a high temperature that it ionises and becomes electrically conductive. During atmospheric plasma arc spraying, the plasma is created by an electric arc burning within the nozzle of a plasma spray gun. The arc gas is formed into a plasma jet as it emerges from the nozzle. Powder particles are injected into the jet where they melt and are then transferred to the substrate at high velocity, producing a strongly adherent coating.
Almost any material can be plasma sprayed including metals, ceramics and plastics. The very high plasma temperature (about 12000 °C) tends to produce higher oxide contents in the coating than are typical of HVOF coatings. The plasma spray process is also sometimes referred to as low-pressure plasma spray. Because of the high temperature and high thermal energy of the plasma jet, materials with high melting points can be sprayed, with plasma spraying widely applied in the production of high quality sprayed coatings.
View Info Graphic of Plasma Spray
Wire Arc Spray
The Wire Arc Spray process, sometimes referred to as "twin wire arc spraying", uses two wires, which are melted by an electric arc. The molten material is atomized by compressed air and propelled towards the substrate to be coated.
View Info Graphic of Arc Spray
Each of the processes is unique and offers something different depending on the application. Here at Ohio Metal Spray we draw on our years of experience and our “No Nonsense” approach to provide you with the best coating for your needs, and never something that you don’t.
https://www.paudamiariera.com/portfolio/ra%C2%B2/ | 2021-09-26T13:36:05 | s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057861.0/warc/CC-MAIN-20210926114012-20210926144012-00126.warc.gz | 0.906844 | 165 | CC-MAIN-2021-39 | webtext-fineweb__CC-MAIN-2021-39__0__155936096 | en | Platforms: Steam, Android, iOS
Ra² is a physics based skill game with puzzle elements, inspired by the smallest elementary particles of our universe.
The core of the game is to navigate a ball through the different levels, controlling and influencing it by two tractor beams.
Just as in quantum physics, the game mechanics are limited to a few basic laws and objects that have been combined meticulously.
The polarization change, one of the main mechanics in the game, is simple but challenging.
Through contact with a specific object, the gravitation is reversed, the interface turns yellow and the tractor beams push the ball away, instead of attracting it.
- 130 Levels
- Zooming, scrolling and rotating level sections
- Leaderboards for each level
- Speedrun mode
- Original soundtrack with 13 tracks | physics |
https://www.excel.london/visitor/news/marvel-s-avengers-s-t-a-t-i-o-n-london-show-extended-to-april-28th | 2022-12-08T14:03:27 | s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711336.41/warc/CC-MAIN-20221208114402-20221208144402-00845.warc.gz | 0.924062 | 358 | CC-MAIN-2022-49 | webtext-fineweb__CC-MAIN-2022-49__0__159291620 | en | 26 March 2019
The news follows a hugely successful run for S.T.A.T.I.O.N. in London, New York, Seoul, Paris, Singapore, Beijing, Taipei and Las Vegas - where it has set up a permanent exhibition.
This highly anticipated multi-room exhibition offers fans of all ages the opportunity to delve into the super-workings and backstory of each member of The Avengers as they train to become an agent of the S.T.A.T.I.O.N.
S.T.A.T.I.O.N. is an acronym for Science Training and Tactical Intelligence Operative Network, and is where guests can step inside the popular films and become part of the Marvel Cinematic Universe, whether trying to lift Thor’s Hammer or taking a sneak peek at Bruce Banner’s Lab
As part of their training, fans will get the opportunity to interact with props and costumes straight from the big screen.
The exhibition has provided a super-powered dose of science and technology by NASA to enhance the authenticity of the experience and pique visitors' interest in real-world science and technology.
With comprehensive educational materials available for teachers, plus supporting materials created by Quantum Victoria, it is a thrilling learning experience for high school children to follow STEM pathways by amplifying the scientific themes and characters that are core to Marvel’s storytelling.
Marvel’s Avengers S.T.A.T.I.O.N. debuts a huge collection of Avengers movie-based props and interactive technology at ExCeL London.
There will also be the opportunity to purchase Marvel’s Avengers merchandise available for those with and without a ticket. | physics |
https://ablogaboutnothinginparticular.com/pluto-and-the-dwarf-planets/ | 2024-04-22T16:41:42 | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818312.80/warc/CC-MAIN-20240422144517-20240422174517-00645.warc.gz | 0.96497 | 1,301 | CC-MAIN-2024-18 | webtext-fineweb__CC-MAIN-2024-18__0__29634981 | en | Pluto was discovered by Clyde Tombaugh on February 18, 1930 at Lowell Observatory in Flagstaff, AZ. The observatory’s founder, Percival Lowell, had predicted a planet with a mass seven times that of Earth beyond Neptune’s orbit. However, at an apparent magnitude of 15, Pluto was 2.5 magnitudes too dim to be the expected planet. Pluto was originally classified as a planet but, with the discovery of similar objects, was reclassified as a dwarf planet in 2006. This is still a controversial issue, especially in the United States, because only four percent of the International Astronomical Union voted on the issue and Pluto was the first planet to be discovered in the United States.
Why Did Pluto Get Demoted?
Pluto and Plutinos
Pluto gets its name from the Roman god of the underworld. Because of its distance from Earth, studying it is difficult and astronomers can only estimate its size and appearance until New Horizons reaches it in 2015. Estimates of its diameter vary between 2274 and 2390 kilometers and Pluto has an estimated mass of 1.31 X 10^22 kg.
In 1978, astronomers discovered a satellite they named “Charon,” the boatsman of the underworld who ferries spirits across the river Styx. Charon is the largest of Pluto’s moons, estimated to be 1207 km in diameter. Pluto and Charon are tidally locked and always present the same face to each other. This is similar to Earth’s Moon, which orbits at the same rate it revolves around its axis. In 2005, two smaller moons were discovered and named Nix and Hydra. A tiny moon known as P4 was discovered in 2011 and is estimated to be no more than 34 km in diameter. On July 11, 2012, a new moon called P5 was announced. It has an orbit 29,000 miles from Pluto, slightly farther than Charon’s orbit. In a February 2013 poll, astronomy fans picked the names “Vulcan” and “Cerberus” for the new moons.
Pluto orbits the sun in a 3:2 resonance with Neptune. This means that it makes two orbits at the same time it takes Neptune to make three orbits. Pluto shares this resonance with dozens of Kuiper Belt objects known as plutinos. One theory suggests that Uranus and Neptune actually formed closer to the sun than their current orbits. Gravitational interactions between Jupiter, Saturn and Neptune pushed Saturn and Neptune further out and Neptune pushed the plutinos out into the Kuiper Belt zone.
Pluto has a highly elliptical orbit that sometimes brings it closer to the sun than Neptune, as it was from 1979 to 1999. It is not at risk of colliding with Neptune at any time, however, because its orbit has a 17-degree inclination from the plane at which the planets orbit. At Pluto’s closest approach to the sun, methane and other frozen compounds on its surface evaporate and form a thin atmosphere. One Pluto year is equal to 248 Earth years.
Because of its distance from Earth, it is difficult to make observations about Pluto and its satellites. When New Horizons reached Pluto in 2015, astronomers obtained more accurate measurements of Pluto’s composition, mass and diameter along with some better images.
New Horizons Comes Out Of Hibernation
Some Images From New Horizons
The Dwarf Planets
The International Astronomical Union defines a dwarf planet as an object large enough to become spherical, but not massive enough to absorb all the other objects in its path. Famous dwarf planets include Pluto and its moon Charon, the larger Eris, and the asteroid belt object Ceres.
Eris is the largest dwarf planet discovered so far. Discovered in 2003, its diameter is 4% larger than Pluto and it is 27% more massive. Its name comes from the Greek goddess of war and strife and its discovery led to the debate over Pluto’s status. It has one satellite named Dysnomia, from Eris’ daughter and the demon spirit of lawlessness. Eris and Dysnomia were originally given the temporary names of “Xena” and “Gabrielle” from the TV show “Xena: Warrior Princess.”
Sedna is the farthest dwarf planet from the sun discovered so far. It is three times as distant from the sun as Pluto is and many astronomers believe that it is very similar to objects predicted to be in the theoretical Oort cloud. It is red-tinged, implying some iron content, and approximately 75% the mass of Pluto. There is some indirect evidence that it may have a satellite.
Ceres is the only dwarf planet located in the asteroid belt. It is the largest asteroid belt object with a diameter of about 975 by 909 kilometers and contains one third of the mass in the asteroid belt. It was the first asteroid belt object to be discovered and was first observed by an astronomer-monk named Giuseppe Piazzi. He announced its discovery on January 1st, 1801. It had been hypothesized that a planet existed in the gap between Mars and Jupiter and some astronomers still believed that the material in the asteroid belt could have formed a planet if not for Jupiter’s influence. It was named for the Roman goddess of the harvest and, like Pluto, was initially classed as a planet and then demoted.
The Dawn Probe
Leonard Nimoy narrates a short video about the Dawn Probe’s mission to the asteroid belt.
The Kuiper Belt
Pluto is located in the Kuiper Belt, a zone beyond Neptune that was predicted by astronomer Gerard Kuiper in the 1950s. It is estimated to contain more than 100 million objects at least 1 kilometer in diameter. Some, like Pluto, are more than 1000 kilometers across. The majority of objects in the Kuiper Belt are very similar to comets and are made up of frozen water, ammonia and hydrocarbons.
The Kuiper Belt is shaped like an elliptical, or oval-shaped, disk and located approximately 30 to 50 AU from the sun. It has never been studied up close, but the New Horizons probe is due to study other Kuiper Belt objects now that it's swung by Pluto.
http://quest.ph.utexas.edu/sudarshan_spin.html | 2017-04-30T05:04:27 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917124299.47/warc/CC-MAIN-20170423031204-00086-ip-10-145-167-34.ec2.internal.warc.gz | 0.656862 | 402 | CC-MAIN-2017-17 | webtext-fineweb__CC-MAIN-2017-17__0__33957426 | en | The Fundamental Theorem on the Connection between Spin and Statistics, in Proc. Nobel Symposium 8; Elementary Particle Theory, Relativistic Groups and Analyticity, Nils Svartholm (ed.), Almqvist and Wiksell, Stockholm (1968), pp.379-386.
The Fundamental Theorem on the Relation between Spin and Statistics, Proc. Indian Acad. Sci. LXVII, 284 (1968).
A World of Bose Particles, in Science Today, Bombay, India (January 1974); Am. J. Phys. 43(1), 69 (1975).
Topological and Algebraic Aspects of Quantization: Symmetries and Statistics; with Tom Imbo and Chandni Imbo, Ann. Inst. Henri Poincare 49, 387-396 (1988).
Inequivalent Quantization in Multiply-Connected Spaces. II; with P. A. Horvathy and G. Morandi, Nuovo Cim. 20, 201 (1989).
The Dynamical Basis of the Spin-Statistics Relation. Invited lecture at the Lochlainn O' Raifearthaigh and the Sixth Irish Conference on Quantum Field Theory, Dublin (1998).
Toward an Understanding of the Spin-Statistics Theorem; with Ian Duck, Am. J. Phys. 66(4), 284 (1998).
Pauli and the Spin-Statistics Theorem; with Ian Duck, World Scientific, Singapore (1998). [BOOK].
What Price the Spin-Statistics Theorem; with Ian Duck. Submitted to Proc. Roy. Soc. London.
Non Relativistic Proof Of Spin-Statistics Theorem; with A. Shaji, http://arxiv.org/pdf/quant-ph/0306033 (2003).
Spin Statistics Relation In Arbitrary Number Of Space Dimensions; with L. J. Boya (in preparation). | physics |
https://www.sietronics.net/product-page/heatable-long-pathlength-gas-cells-ftir-gas-testing | 2023-09-27T21:18:35 | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510326.82/warc/CC-MAIN-20230927203115-20230927233115-00784.warc.gz | 0.73811 | 1,132 | CC-MAIN-2023-40 | webtext-fineweb__CC-MAIN-2023-40__0__224707370 | en | Heatable Long Pathlength Gas Cells | FTIR Gas Testing
- Wide pathlength range (1m - 20m)
- Vacuum to 15 p.s.i. operation
- Ambient temperature operation
- Borosilicate glass body
- Anodised components
- Gold mirrors (protected)
- Viton or Kalrez 'O' ring seals
- KBr, CaF2 or ZnSe windows
- Purgeable transfer optics box
- Benchmark series baseplate mounting
- Additional mirror carriage assemblies
- Vacuum / gas inlet & outlet taps
- Pressure gauge
- Desiccant storage caps
- Purge bellows
Specac’s standard long pathlength gas cell range is now the new Atmos™ series of metal-bodied fixed-pathlength gas cells. They have been optimized using the latest optical modelling tools to give maximum signal-to-noise.
Model | A2.5 | A5 | A10 | A20
Pathlength (m) | 2.5 | 5.0 | 10 | 20
Base pathlength (mm) | 104 | 139 | 250 | 455
Number of passes | 24 | 36 | 40 | 44
Volume (L) | 0.27 | 0.63 | 2.12 | 3.68
The Atmos™ range of cells are heatable to 200 °C with the addition of a heating jacket and temperature controller.
- Optimized optical path for unmatched signal-to-noise
- Low cell volume for fast gas exchange
- Ni-coated Al cell bodies for improved thermal transfer from heating jackets. Also holds higher gas pressure than glass.
- Inert stainless-steel mirror substrates and avoidance of glues to prevent outgassing
- 125 psi (8.6 bar) maximum cell pressure
- Retrofittable heating jacket option for up to 200 °C
Flexible specification long pathlength gas cells | Cyclone™ & Tornado™
These cells are available with glass or metal cell bodies and can be set to a range of pathlengths.
Model | C2 | C5 or … | … | …
Pathlength settings (m) | 0.5—2.5 in 0.5 m steps | … in 1.0 m steps | … in 1.06 m steps | … in 2.0 m steps
Base pathlength (mm) | 125 | 250 | 264 | 500
Number of passes | 4—20 | 4—32 | 8—40 | 4—40
Volume (L) | 0.19 | 1.33 | 2.60 | 4.30
Cyclone™ versus Tornado™
- Cyclone™ cells can be supplied with a fitted heating jacket for studies of heated gases up to 200 °C (not available for model C20)
- An adjustable mirror carriage can be fitted to Cyclone™ cells to enable the user to vary the pathlength throughout the specified range.
- Tornado™ cells are a stripped back offering, supplied as standard without inlet or outlet tap valves. They are not heatable.
Which pathlength should I choose?
The absorbance of a gas depends on the distance travelled by the IR light beam through the gas sample. The relationship between absorbance, A, concentration, C, and pathlength, L, is given by Beer’s Law:
A = -log10(I/I0) = a·C·L
Atmospheric concentrations of gases are usually expressed in C.L units of ppm.m – the number of molecules that would be encountered by the infrared beam across a 1.0 m path.
A gas at 0.1 ppm atmospheric concentration will absorb as much IR light over a pathlength of 100 m as would the same gas at 1 ppm over 10 meters or at 10 ppm over 1 meter.
Accordingly, the pathlength of cell should be chosen to give absorbance values within the spectrometer’s linear range for a given concentration. The following table may be taken as a guide:
Gas name | Formula | Wavenumber (cm-1) | ppm.m | Absorbance
Carbon dioxide | CO2 | 2360 | 100 | 0.40
Carbon monoxide | CO | 2170 | 100 | 0.04
Methane | CH4 | 3020 | 100 | 0.10
C2 to C6 n-alkanes | - | 2960 | 100 | 0.10
Nitrogen dioxide | NO2 | 1630 | 100 | 0.15
Nitric oxide | NO | 1900 | 100 | 0.015
Sulfur dioxide | SO2 | 1370 | 100 | 0.09
Hydrogen sulfide | H2S | 1300 | 1000 | 0.002
Ammonia | NH3 | 960 | 100 | 0.12
Hydrogen chloride | HCl | 2940 | 100 | 0.04
Water | H2O | 1650 | 1000 | 0.20
Vinyl chloride | CH2CHCl | 950, 900 | 100 | 0.06
Acetaldehyde | CH3CHO | 2750 | 100 | 0.015
Benzene | C6H6 | 670 | 10 | 0.09
Toluene | C6H5CH3 | 730, 690 | 100 | 0.10
Methanol | CH3OH | 1040 | 100 | 0.10
Ethanol | CH3CH2OH | 1050 | 100 | 0.05
Carbonyl sulfide | COS | 2070 | 100 | 0.40
Nitrous oxide | N2O | 2235 | 100 | 0.15
Sulfur hexafluoride | SF6 | 950 | 10 | 0.40
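As a rough illustration of how Beer's law and the guide table combine when choosing a cell, the sketch below scales the tabulated absorbance values (given per stated ppm.m) to other concentrations and pathlengths. The coefficients are simply read off the table above and assume the response stays linear; they are illustrative, not Specac-certified calibration data.

```python
# Rough pathlength selection using Beer's law (A = a * C * L).
# Absorbance-per-(ppm.m) factors are taken from the guide table above
# (tabulated absorbance divided by the listed ppm.m value).

ABS_PER_PPM_M = {
    "CO2": 0.40 / 100,   # at 2360 cm-1
    "CO": 0.04 / 100,    # at 2170 cm-1
    "CH4": 0.10 / 100,   # at 3020 cm-1
    "SF6": 0.40 / 10,    # at 950 cm-1
}

def expected_absorbance(gas: str, concentration_ppm: float, pathlength_m: float) -> float:
    """Estimate peak absorbance assuming Beer's law holds (no saturation)."""
    return ABS_PER_PPM_M[gas] * concentration_ppm * pathlength_m

def pathlength_for_target(gas: str, concentration_ppm: float, target_absorbance: float = 0.1) -> float:
    """Pathlength (m) needed to reach a target absorbance at a given concentration."""
    return target_absorbance / (ABS_PER_PPM_M[gas] * concentration_ppm)

if __name__ == "__main__":
    print(expected_absorbance("CO", 5, 10))    # 5 ppm CO in a 10 m cell -> ~0.02
    print(pathlength_for_target("CH4", 2))     # ~50 m needed for A ~ 0.1 at 2 ppm methane
```

In practice the spectrometer's linear range and possible band saturation also need to be considered when interpreting such estimates.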
Request a free quote for a Long Pathlength Gas Cell
Gas Cell Range | physics |
https://cmtlabs.co.uk/laboratory-testing/concrete-testing/ | 2024-04-15T16:51:24 | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817002.2/warc/CC-MAIN-20240415142720-20240415172720-00547.warc.gz | 0.875542 | 932 | CC-MAIN-2024-18 | webtext-fineweb__CC-MAIN-2024-18__0__118276087 | en | At CMTL we provide comprehensive testing to assess the quality of both fresh and hardened concrete. We deliver concrete testing for every stage of your project in our UKAS and INAB accredited laboratories. From on-site slump testing as you pour your concrete mix, to testing hardened samples for indicative characteristics.
Concrete Strength Test
Concrete is the most widely used construction material in the world. Concrete strength tests are an integral part of assessing the quality and durability of concrete structures. These tests provide valuable information about the compressive strength of concrete, which is crucial for ensuring the structural integrity of buildings, bridges, and other infrastructure projects. In the UK, concrete strength tests follow specific standards and methods to ensure accurate and reliable results. All tests performed by CMTL are compliant with industry regulations and relevant UK standards.
Compressive Strength Test
A compressive strength test is one of the most widely used methods at CMTL to determine the strength of concrete. It measures the ability of concrete to resist axial compressive loads. The test involves casting concrete into moulds and subjecting them to compressive forces until failure occurs. The resulting maximum load is divided by the cross-sectional area of the specimen to calculate the compressive strength. CMTL is UKAS accredited to the standard method for conducting compressive strength tests outlined in BS EN 12390-3:2019 "Testing hardened concrete - Part 3: Compressive strength of test specimens."
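As a simple illustration of the calculation described above (maximum load at failure divided by the loaded cross-sectional area), the sketch below converts a cube's failure load into a compressive strength in MPa. The 150 mm cube size and 900 kN load are assumed example figures, not CMTL results.

```python
# Compressive strength = maximum load at failure / loaded cross-sectional area.

def compressive_strength_mpa(max_load_kn: float, width_mm: float, depth_mm: float) -> float:
    """Compressive strength in MPa (N/mm^2) from failure load and cross-section."""
    area_mm2 = width_mm * depth_mm
    return max_load_kn * 1000.0 / area_mm2   # kN -> N, and N/mm^2 == MPa

# Example: a 150 mm cube failing at 900 kN
print(compressive_strength_mpa(900, 150, 150))   # 40.0 MPa
```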
Flexural Strength Test
A flexural strength test evaluates the bending or tensile strength of concrete. It measures the ability of concrete to resist bending or cracking under applied loads. The test involves casting beams of concrete and subjecting them to a bending force until failure occurs. The maximum applied force is used to calculate the flexural strength. At CMTL the flexural strength test is performed according to BS EN 12390-5:2019 "Testing hardened concrete - Part 5: Flexural strength of test specimens."
Tensile Strength Test
The tensile strength test determines the tensile strength of concrete indirectly by subjecting specimens to diametrical compressive forces. The test measures the tensile strength perpendicular to the direction of the applied load. It is particularly useful for assessing the behaviour of concrete in tension, which is essential in structures subjected to bending or direct tensile loads. CMTL offer tensile strength testing to BS EN 12390-6:2019 "Testing hardened concrete - Part 6: Tensile splitting strength of test specimens."
Benefits of Concrete Testing
- Early removal of formwork, saving time and money on projects
- Reduced road or track possession times for emergency repairs
- Highly accurate determination of strength in low and high temperature environments
- Optimisation of concrete mix design or for use with specialist applications
UKAS & INAB Accreditation
Our on-site pop-up laboratories are UKAS & INAB accredited and enable fast analysis for a more cost-effective, time saving testing solution.
Need more information on our concrete testing services? Whether you need site testing or laboratory testing, or a complete list of the tests we provide, please fill in your details and we'll be in touch.
Below is a list of the concrete tests we can perform from our UKAS and INAB accredited labs. Please note, this list is not exhaustive, and we can arrange speciality testing to suit your requirements.
- Making cubic specimens for strength tests – including curing
- Making cylinder specimens – including curing
- Making beam / prism specimens – including curing
- Sampling fresh concrete on site – composite sampling - spot sample
- Sampling fresh concrete on site - Slump
- Sampling fresh concrete on site - Flow
- Sampling fresh concrete on site - Density
- Sampling fresh concrete on site – Air content – pressure gauge method
- Measuring the fibre content in fresh and hardened concrete
- Water absorption
- Depth of carbonation
- Compressive strength of cubes
- Compressive strength of cubes – including curing
- Compressive strength of cubes – shape and dimension
- Flexural strength of test specimens
- Flexural strength of test specimens - including curing
- Flexural strength of test specimens – shape and dimension
- Tensile splitting strength
- Cored specimens - sampling
- Cored specimens – examining and testing in compression
- Flexural tensile strength (Limit Of Proportionality (LOP), residual) of metallic fibre concrete
- Chloride ion determination in concrete and mortar
- Location of reinforcement | physics |
https://bs-waermetauscher.de/en/products/product-portfolio | 2022-08-10T04:55:24 | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571147.84/warc/CC-MAIN-20220810040253-20220810070253-00077.warc.gz | 0.794523 | 687 | CC-MAIN-2022-33 | webtext-fineweb__CC-MAIN-2022-33__0__100005249 | en | OUR PRODUCT PORTFOLIO
The tube bundle heat exchanger
In a tube bundle heat exchanger, two media separated by the tube walls flow alongside each other. Heat is exchanged between the two media whenever a temperature difference exists between them.
A tube bundle heat exchanger normally consists of three components: the front head, the shell and the tube bundle. One medium flows through the tubes, while the second medium flows through the shell side. Depending on the design, so-called baffle plates are used in the shell. The medium that flows through the shell is guided by the baffle plates so that it flows as transversely as possible across the bundle tubes. This increases the quality of the heat transfer. The better the heat transfer, the more compact the apparatus can be.
Both the tube side and the shell side of a tube bundle heat exchanger can be designed as single-pass or multi-pass. The flow velocity and the pressure loss of the media are decisive for this choice. The number of passes affects the length of the apparatus. The tube bundle heat exchanger can be used as a cooler or as a heater for liquid and gaseous media.
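To illustrate why better heat transfer yields a more compact apparatus, here is a generic, textbook-style sizing sketch based on the relation Q = U · A · LMTD for counter-flow operation. It is not the manufacturer's design procedure, and the overall heat-transfer coefficients used are rough assumptions for illustration only.

```python
# Generic sizing relation for a counter-flow shell-and-tube exchanger: Q = U * A * LMTD.
import math

def lmtd(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Log-mean temperature difference for counter-flow operation."""
    dt1 = t_hot_in - t_cold_out
    dt2 = t_hot_out - t_cold_in
    if abs(dt1 - dt2) < 1e-9:
        return dt1
    return (dt1 - dt2) / math.log(dt1 / dt2)

def required_area_m2(duty_kw, u_w_per_m2k, t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Heat-transfer area needed for a given duty and overall coefficient U."""
    return duty_kw * 1000.0 / (u_w_per_m2k * lmtd(t_hot_in, t_hot_out, t_cold_in, t_cold_out))

# 200 kW duty, hot side 90 -> 60 degC, cold side 20 -> 50 degC:
for u in (300, 800):   # poor vs. good overall heat transfer (W/m2.K, assumed)
    print(u, round(required_area_m2(200, u, 90, 60, 20, 50), 1), "m2")
```

With the better (higher) overall coefficient, the required heat-transfer surface, and hence the size of the apparatus, drops substantially.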
The following types of heat exchanger belong to our product range:
- Tube bundle heat exchanger with removable u-tube bundle
- Straight tube heat exchanger
- Inline (straight tube) heat exchanger
- Steam generator / Steam converter
- Electrical heater
- Exhaust gas heat exchanger
- Plug in bundle for the installation in containers
- Replacement bundle also for products of other manufacturers
[Comparison table columns: Type | Passes on tube side | Passes on shell side | Heating surface | Design | Bundle removable | Condensation | Undercooling]
http://electrochemsolutions.com/energy/nonrechargeable.aspx | 2017-03-29T07:23:24 | s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190234.0/warc/CC-MAIN-20170322212950-00413-ip-10-233-31-227.ec2.internal.warc.gz | 0.87194 | 229 | CC-MAIN-2017-13 | webtext-fineweb__CC-MAIN-2017-13__0__62614748 | en | Utilized in highly demanding applications, Electrochem’s High, Moderate and Low rate non-rechargeable cell solutions are in a class all their own. Our lithium-powered cells perform in demanding conditions when downtime could mean the loss of precious time and profitability.
Our lithium cells feature the highest energy density in the industry – up to 915 watt-hours per liter, or nearly three times the energy density of alkaline cells – as well as exceptional operating temperature ranges. Likewise, our protective circuitry, glass-to-metal hermetic seals, fuses and diodes are capable of withstanding high and low temperatures, sterilization, and vibration to ensure safe, strong, and reliable power even when subjected to harsh environmental conditions.
Click below to learn more about our non-rechargeable lithium power solutions:
High Rate Cells
Moderate Rate Cells
Low Rate Cells
http://vinesh.passle.net/post/102bw8f/nobel-prize-in-physics-so-many-deserving-women-so-few-winners | 2018-07-21T17:35:14 | s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592650.53/warc/CC-MAIN-20180721164755-20180721184755-00410.warc.gz | 0.978938 | 554 | CC-MAIN-2018-30 | webtext-fineweb__CC-MAIN-2018-30__0__172749157 | en | The winners of the Nobel Prize in Physics for 2014 were announced today: Isamu Akasaki, Hiroshi Amano and Shuji Nakamura, three physicists who managed to produce bright blue light beams from their semi-conductors in the early 1990s, thus inventing blue light-emitting diodes (LEDs).
In case you're wondering why that's a big deal: red and green LEDs have been around for a very long time, but it was only with the advent of blue LEDs in the 1990s that white-light LEDs became possible. LEDs are long-lasting and extremely energy efficient - since a quarter of the world's energy is used on lighting, they will contribute greatly to saving the planet's resources. And because of their lower power requirements, they can be powered by cheap local solar power, and thus hold great promise for increasing the quality of life for over 1.5 billion people who lack access to electricity grids. The LED is truly the light source of the future.
This is great, but perhaps it's worth reflecting on the fact that today's winners were male...as they were last year, and the year before that, and before that...for the past fifty years. In fact, only two women have ever won the Nobel Prize in Physics. Marie Curie was one of them - impressively, she was the first person ever to win two Nobel Prizes!
But - as the linked article points out - it's not as though there aren't many, many deserving women. For example, Vera Rubin, whose pioneering work led to the discovery of dark matter (the mysterious stuff that makes up most of the matter in the universe!). Or Dame Jocelyn Bell Burnell, who discovered radio pulsars, for which her male supervisor received the Nobel ("No-Bell"?) prize. And many more.
What's the problem? Of course, physics as a whole is still horribly unequal when it comes to gender. In the USA, only about 10% of physics professors are women; in the UK, the number is about half that. But given all the deserving candidates, maybe the lack of female Nobel Prize winners in physics also reflects a more sinister gender bias still present in the field.
Whatever the case, I hope that women in physics will start to be given their due soon!
No Nobel Prize has come close to being equitably distributed by gender, but physics has the worst record of them all. Zero women have won it in the past 50 years. Exactly two women have won it ever. One of them was Marie Curie, who won with her husband, Pierre, and Henri Becquerel in 1903, at a time when women were almost entirely excluded from science. | physics |
https://www.solwindenergy.com/how-does-solar-work/ | 2021-09-28T04:23:45 | s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060201.9/warc/CC-MAIN-20210928032425-20210928062425-00398.warc.gz | 0.931933 | 886 | CC-MAIN-2021-39 | webtext-fineweb__CC-MAIN-2021-39__0__163336292 | en | How Does Solar Work?
Our planet Earth has clean resources that we can harness safely and sustainably, such as the sunlight that shines down on us every day. Advances in technology have made it possible to convert that solar energy into electricity. Solar is one of the best sources of clean, renewable energy since it lessens the carbon footprint of consumers. It is a good way to reduce our dependence on fossil fuels while saving money at the same time!
What are Solar Panels?
The first components in order to collect light from the sun that is used on our loads are solar cells. When these solar cells are connected to form a larger component called “solar panels”. Then solar panels are connected to form another larger system called a “solar panel array” that is usually connected in series for more power output.
Each solar cell is composed of layers of silicon, phosphorus, and boron. When the photons are absorbed, they knock out the electrons from their orbit and drives directly to the current called the photovoltaic effect.
The most common type of solar panels:
Polycrystalline solar panels – these solar panels are commonly addressed to be slightly less efficient than monocrystalline panels. But it is cheaper for its less cost in producing its components. Usually recommended for facilities with a lot of roof space and after for its budget-wise pricing.
Monocrystalline solar panels – when it comes to efficiency, these panels are the right choice. It consumes less space on your roofing or its efficiency per square meter is an advantage. But they cost more than polycrystalline panels in exchange for its power production. This type of panel usually comes with a black color compared to poly panels with a blue color.
Solar panels today are designed to provide more power with less consumption on roof spacing or per square meter.
How does Solar Inverter work?
Wondering how we can use the energy collected by a solar panel array? A solar inverter is one of the most important components of a solar system. It converts the variable direct current (DC) output of the solar panels into steady alternating current (AC). Most home appliances in the Philippines use 240 V alternating current and cannot operate on the direct current from solar panels without an inverter.
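To give a feel for the numbers involved, the sketch below estimates the usable AC energy a small array might deliver per day once inverter and other system losses are applied. The panel wattage, peak sun hours and efficiency figures are illustrative assumptions, not SOLWIND specifications.

```python
# Back-of-the-envelope estimate of usable AC energy from a small array.

def daily_ac_energy_kwh(panel_watts: float, n_panels: int,
                        peak_sun_hours: float = 4.5,
                        inverter_efficiency: float = 0.96,
                        system_losses: float = 0.85) -> float:
    """DC array output converted to AC energy delivered per day (kWh)."""
    dc_kwh = panel_watts * n_panels * peak_sun_hours / 1000.0
    return dc_kwh * inverter_efficiency * system_losses

# Ten 450 W panels:
print(round(daily_ac_energy_kwh(450, 10), 1), "kWh/day")   # ~16.5 kWh/day
```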
Solar inverters can be classified according to the type of system to be installed:
Grid-Tied System – a system connected to the electric utility grid that cuts down a large electric bill. Its first priority is to supply solar power for home consumption; through an automatic transition, the grid then either covers any shortfall from the solar array or receives surplus energy exported back into it. If you intend to save on electricity, this system is the best option because it can be used for "Net Metering", the process of selling the excess power produced by the panels back to the utility, which credits you for it. However, this system shuts down when the grid shuts down, to avoid exporting power and harming any linesmen repairing the commercial supply lines.
Check out this link to know more about our Grid Tie System – https://www.solwindenergy.com/
Off-Grid System – a standalone power supply that can energize selected loads, recommended for facilities that experience frequent power interruptions. It can provide power when the grid shuts down because it includes solar batteries that are charged up during the day.
Check out this link to know more about our Off-Grid System – https://www.solwindenergy.com/
Hybrid System – a combination of the grid-tied and off-grid systems. The system is connected to the grid and also has a battery backup for power interruptions. Since the hybrid system has a grid connection, it can also be used for "Net Metering": the excess power can either charge the battery or be exported to earn credits. These systems tend to be higher priced, but you do not have to worry about typhoon power outages!
Check out this link to know more about our Hybrid System – https://www.solwindenergy.com/
Now you know how solar works and the impact of modern solar photovoltaic systems in our world. SOLWIND can help you out with our solar and wind solutions for your houses or businesses. | physics |
http://www.find-a-part.com/porsche-parts/993/turbocharger | 2013-12-06T21:08:38 | s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163052593/warc/CC-MAIN-20131204131732-00045-ip-10-33-133-15.ec2.internal.warc.gz | 0.917846 | 231 | CC-MAIN-2013-48 | webtext-fineweb__CC-MAIN-2013-48__0__57253428 | en | Porsche 993 Turbocharger's are a call or a click away with Find a Part. Fill in the free car parts request form, or give us a call. We will then contact selected Porsche 993 turbocharger suppliers throughout the UK, who will contact you if they have the Porsche 993 Turbocharger you need.
A turbocharger is a small radial fan pump, which is driven by energy created by the exhaust gases of an engine. A turbocharger consists of a turbine and a compressor on a shared shaft. The turbine section of a turbocharger is a heat engine which converts the heat energy from the exhaust to power. The power created then drives the compressor, which compresses ambient air and delivers it to the air intake manifold of the engine at a higher pressure, resulting in a greater mass of air entering each cylinder. In some instances, compressed air is routed through an intercooler before introduction to the intake manifold. Because a turbocharger is a heat engine, and is converting otherwise wasted exhaust heat to power, it compresses the inlet air to the engine more efficiently than a supercharger. | physics |
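As a rough illustration of why forcing denser air into the cylinders matters, the sketch below uses the ideal gas law to compare intake air density with and without boost. The boost pressure and temperatures are assumed example values, not figures for any particular turbocharger.

```python
# Illustrative only: how much extra air mass a given boost pressure packs
# into each intake stroke, via the ideal gas law (rho = p / (R * T)).

R_AIR = 287.05   # specific gas constant for dry air, J/(kg.K)

def air_density(pressure_kpa: float, temperature_c: float) -> float:
    """Air density in kg/m^3 from absolute pressure and temperature."""
    return pressure_kpa * 1000.0 / (R_AIR * (temperature_c + 273.15))

ambient = air_density(101.3, 25)            # naturally aspirated intake
boosted = air_density(101.3 + 70, 50)       # ~0.7 bar boost, after an intercooler
print(round(boosted / ambient, 2), "x more air per intake stroke")   # ~1.56x
```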
https://anahouse.ru/current-research/ | 2021-01-19T11:46:10 | s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703518240.40/warc/CC-MAIN-20210119103923-20210119133923-00614.warc.gz | 0.923282 | 2,333 | CC-MAIN-2021-04 | webtext-fineweb__CC-MAIN-2021-04__0__204782840 | en | This figure is directly based on the proportion of radiocarbon found in the sample. It is calculated on the assumption that the atmospheric radiocarbon concentration has always been the same as it was in 1950 and that the half-life of radiocarbon is 5,568 years. To give an example, if a sample is found to have a radiocarbon concentration exactly half of that for material which was modern in 1950, the radiocarbon measurement would be reported as 5,568 BP. In order to see what a radiocarbon determination means in terms of a true age we need to know how the atmospheric concentration has changed with time. Many types of tree reliably lay down one tree ring every year. The wood in these rings, once laid down, remains unchanged during the life of the tree. This is very useful as a record of the radiocarbon concentration in the past. If we have a tree of known age we can measure the radiocarbon in the rings and see what radiocarbon concentration corresponds to each calendar year.
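A minimal sketch of the arithmetic behind that convention is shown below: it converts a measured fraction of the 1950 ("modern") radiocarbon level into a conventional radiocarbon age using the Libby half-life. It is for illustration only and is not a substitute for laboratory calibration software.

```python
# Conventional radiocarbon age from the measured fraction of the 1950 ("modern")
# radiocarbon level, using the Libby half-life reporting convention.
import math

LIBBY_HALF_LIFE = 5568.0                          # years (reporting convention)
LIBBY_MEAN_LIFE = LIBBY_HALF_LIFE / math.log(2)   # ~8033 years

def conventional_age_bp(fraction_modern: float) -> float:
    """Radiocarbon age in years BP for a given fraction of the 1950 activity."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

print(round(conventional_age_bp(0.5)))    # 5568 BP  (exactly half the modern level)
print(round(conventional_age_bp(0.25)))   # 11136 BP (one quarter of the modern level)
```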
Fine-tuning radiocarbon dating could ‘rewrite’ ancient events
“A single Northern Hemisphere calibration curve has formed the basis of radiocarbon dating in Europe and the Mediterranean for five decades.
Tools for Constructing Chronologies pp Cite as. This chapter focuses on recently developed models for the analysis and interpretation of archaeomagnetic dating evidence. Archaeomagnetic data from archaeological structures such as hearths, kilns or sets of bricks and tiles, exhibit considerable experimental errors and are typically also associated with date estimates from other sources such as stratigraphic sequences, historical records or chronometric methods.
This chapter summarizes the technical aspects of recent Bayesian statistical modelling work, describing a hierarchical model for the archaeomagnetic data and its uncertainties and combining this with models of the other dating evidence, based on those described by Buck Chapter 1 , to create a calibration curve for future archaeomagnetic dating work in a locality. With this new posterior estimate of the curve available, it is then possible to use the Bayesian statistical framework to estimate the calendar dates of undated archaeological features.
Calibration curve carbon dating
Radiocarbon Calibration curve and example input and output age distributions. Of practical importance to a wide range of scientific disciplines is radiocarbon calibration, which is used for converting radiocarbon years to calendar years; essential for measuring time and rates of change for numerous scientific fields. Arguably, few research topics engage so many different fields of science and have such a profound impact on our understanding of Earth and Solar science as the history of 14C in the Earth’s atmosphere and the surface and deep oceans.
Over the past 20 years we have witnessed remarkable improvements in both the development and proliferation of accelerator mass spectrometers.
Extension of the radiocarbon calibration curve by AMS dating of laminated sediments of Lake Soppensee and Lake Holzmaar.
One of the most important dating tools used in archaeology may sometimes give misleading data, new study shows – and it could change whole historical timelines as a result. The discrepancy is due to significant fluctuations in the amount of carbon in the atmosphere, and it could force scientists to rethink how they use ancient organic remains to measure the passing of time. A comparison of radiocarbon ages across the Northern Hemisphere suggests we might have been a little too hasty in assuming how the isotope – also known as radiocarbon – diffuses, potentially shaking up controversial conversations on the timing of events in history.
By measuring the amount of carbon in the annual growth rings of trees grown in southern Jordan, researchers have found some dating calculations on events in the Middle East — or, more accurately, the Levant — could be out by nearly 20 years. That may not seem like a huge deal, but in situations where a decade or two of discrepancy counts, radiocarbon dating could be misrepresenting important details. This carbon — which has an atomic mass of 14 — has a chance of losing that neutron to turn into a garden variety carbon isotope over a predictable amount of time.
By comparing the two categories of carbon in organic remains, archaeologists can judge how recently the organism that left them last absorbed carbon out of its environment.
The Suess Calibration Curve and Archaeological Dating
Statistical research undertaken at Sheffield has resulted in the provision of internationally-agreed calibration curves for radiocarbon dating that offer greater accuracy and higher resolution, and which for the first time span the full range of timelines over which radiocarbon dating is feasible. Non-academic users of these curves include staff in commercial radiocarbon laboratories, those working in commercial archaeology units, freelance archaeological consultants, palaeoenvironmental scientists working in governmental and intergovernmental bodies, private and public sector staff charged with the care of ancient buildings and environments, and freelance consultants who undertake radiocarbon dating in order to advise private customers, public sector companies and government agencies.
Radiocarbon dating is crucial to the establishment of archaeological chronologies and of timelines for many Holocene and late Pleistocene palaeoclimate studies, and palaeoenvironmental reconstructions.
Obtaining a calibration curve for the entire age range spanned by radiocarbon-dating methods requires the combination of several sources of calibration, and.
Radiocarbon dating measurements produce ages in "radiocarbon years", which must be converted to calendar ages by a process called calibration. Willard Libby, the inventor of radiocarbon dating, pointed out early on the possibility that the ratio might have varied over time. Discrepancies began to be noted between measured ages and known historical dates for artefacts, and it became clear that a correction would need to be applied to radiocarbon ages to obtain calendar dates.
The term Before Present (BP) is established for reporting dates derived from radiocarbon analysis, where "present" is 1950. Uncorrected dates are stated as "uncal BP", and calibrated (corrected) dates as "cal BP". Used alone, the term BP is ambiguous. To produce a curve that can be used to relate calendar years to radiocarbon years, a sequence of securely dated samples is needed, which can be tested to determine their radiocarbon age.
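The lookup step itself can be illustrated with a toy example: given a curve of (calendar age, radiocarbon age) pairs from securely dated samples, a measured radiocarbon age is mapped back to calendar years by interpolation. The five-point "curve" below is invented purely for illustration; real work uses the IntCal datasets and dedicated software such as OxCal or CALIB, which also propagate the measurement uncertainty.

```python
# Toy illustration of calibration: mapping a conventional radiocarbon age
# onto calendar years using a curve built from securely dated samples.
import numpy as np

# (calendar age BP, radiocarbon age BP) pairs -- invented for illustration
calendar_bp    = np.array([3000.0, 3100.0, 3200.0, 3300.0, 3400.0])
radiocarbon_bp = np.array([2870.0, 2950.0, 3060.0, 3120.0, 3210.0])

def calibrate(radiocarbon_age: float) -> float:
    """Linear interpolation of calendar age for a measured radiocarbon age."""
    return float(np.interp(radiocarbon_age, radiocarbon_bp, calendar_bp))

print(calibrate(3000.0))   # ~3145 cal BP on this made-up curve
```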
Carbon dating, the archaeological workhorse, is getting a major reboot
Scientific research often depends on a degree of certainty in the data while allowing for the likelihood of change — new findings overriding old theories and creating new ones. Change is a given, especially true when taking weather and climate into account. Archaeologist Sturt Manning and colleagues have revealed variations in the radiocarbon cycle at certain periods of time, affecting frequently cited standards used in archaeological and historical research relevant to the southern Levant region (Israel, southern Jordan and Egypt).
[Figure caption: red is the C14 date, grey is its probability distribution on the C14 axis, green is the IntCal04 calibration curve; a vertical dashed blue line marks each calendar year.]
The short-term difference between radiocarbon years and calendar years is caused by fluctuations in the heliomagnetic modulation of the galactic cosmic radiation and, more recently, by the large-scale burning of fossil fuels and nuclear device testing. Geomagnetic variations are the probable cause of longer-term differences. The parameters used for the corrections have been obtained through precise radiocarbon dating of hundreds of samples taken from known-age tree rings of oak, sequoia, and fir, extending back more than ten thousand years.
Beyond the tree-ring record, back to roughly 45,000 BP, correlation is made using multiple lines of evidence. This information is compiled into internationally accepted databases which are updated on occasion. The present databases are IntCal13 (northern hemisphere), SHCal13 (southern hemisphere) and Marine13 (marine environments). Beta Analytic will continue to use the IntCal13 and Marine13 calibration curves until updated IntCal and Marine curves become available.
These likelihoods are graphically represented by a shaded grey area on the plot (higher peaks indicating higher probability) and by percentage values reported next to each range. The method is called the high-probability density (HPD) range method.
Reevaluation of dating results for some 14 C – AMS applications on the basis of the new calibration curves available. In this paper we describe briefly some characteristics of the Accelerator Mass Spectrometry AMS technique and the need of corrections in the radiocarbon ages by specific calibration curves. Then we discuss previous results of some Brazilian projects where radiocarbon AMS had been applied in order to reevaluate the dates obtained on the basis of the new calibration curves available.
Keywords: Radiocarbon; Dating; Accelerator; Mass spectrometry. In recent years new databases for radiocarbon calibration have been published, including the one for samples collected in the Southern Hemisphere .
The most recent radiocarbon calibration curve, lNTCAL98 (Stuiver et al., ), is based principally on the dendrochronological records described above.
New radiocarbon calibration curves for a better dating method
This much anticipated new calibration curve, a set of data points used to convert radiocarbon-dating results into calendar years, is highlighted in a special issue.
[Figure captions (data from Reimer et al.): compiled atmospheric bomb radiocarbon curves for 4 different zones (Northern Hemisphere zones and a Southern Hemisphere zone) for age calibration (Hua and Barbetti); a world map showing the areas covered by the 4 zones (Hua and Barbetti); an example of bomb-pulse radiocarbon dating of a terrestrial sample from Northern Hemisphere zone 1, in which a radiocarbon value measured in a sample S (Fs) yields two possible calendar dates, T1 and T2, indicated by grey boxes (Hua).]
Natural radiocarbon or 14 C is produced in the atmosphere by the interaction of the secondary neutron flux from cosmic rays with atmospheric 14 N. Following its production, 14 C is oxidised to produce 14 CO 2 , which is then transferred to other carbon reservoirs, such as the biosphere and oceans, via photosynthesis and air-sea exchange of CO 2 , respectively. Living organisms take up radiocarbon through the food chain and via metabolic processes. When an organism dies, the original 14 C concentration of the organism starts to decrease by radioactive decay. | physics |
https://alpha.wittenstein.de/en-en/expertise/applications/mazak/ | 2023-03-24T04:05:55 | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945242.64/warc/CC-MAIN-20230324020038-20230324050038-00253.warc.gz | 0.929555 | 260 | CC-MAIN-2023-14 | webtext-fineweb__CC-MAIN-2023-14__0__35595792 | en | Very high dynamics and maximum positioning accuracy
The new OPTIPLEX 3015 FIBER II laser cutting machine has a fibre laser oscillator available with a continuous rated output of 2, 3, 4 and 6 kW. High laser power is fed directly to the cutting head. The machine is consequently ideal for cutting thin metal sheets at very high speed, as well as thicker material at slower feed rates. It offers more stable cut performance and greater throughput for applications utilizing mild steel, stainless steel, copper, brass, bronze or aluminium, for example.
The OPTIPLEX 3015 FIBER II's high rapid travel speeds of 120 m/min on the X and Y axes and 60 m/min on the Z axis are particularly impressive. The laser cutting machine also achieves excellent positioning accuracy, namely ±0.05/500 mm on the X and Y axes and ±0.01/100 mm on the Z axis. Its repeatability sets more benchmarks at ±0.03 mm.
Sustainability and ease of use were key development priorities for the OPTIPLEX 3015 FIBER II: the machine's electrical power consumption during operation is approximately 35% lower than that of comparable machines with CO2 resonators. Laser gas has been dispensed with completely.
https://pseudo.com/5-delicious-video-recipes/ | 2023-12-03T11:02:54 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100499.43/warc/CC-MAIN-20231203094028-20231203124028-00722.warc.gz | 0.955121 | 2,004 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__39365762 | en | As several respondents noted, we constantly travel through time–just forward, and all at the same rate. But seriously, time travel is more than mere fantasy, as noted by Gary T. Horowitz, a professor of physics at the University of California at Santa Barbara:
“Perhaps surprisingly, this turns out to be a subtle question. It is not obviously ruled out by our current laws of nature. Recent investigations into this question have provided some evidence that the answer is no, but it has not yet been proven to be impossible.”
Even the slight possibility of time travel exerts such fascination that many physicists continue to study not only whether it may be possible but also how one might do it.
One of the leading researchers in this area is William A. Hiscock, a professor of physics at Montana State University. Here are his thoughts on the matter:
“Is it possible to travel through time? To answer this question, we must be a bit more specific about what we mean by traveling through time. Discounting the everyday progression of time, the question can be divided into two parts: Is it possible, within a short time (less than a human life span), to travel into the distant future? And is it possible to travel into the past?
“Our current understanding of fundamental physics tells us that the answer to the first question is a definite yes, and to the second, maybe.
“The mechanism for traveling into the distant future is to use the time-dilation effect of Special Relativity, which states that a moving clock appears to tick more slowly the closer it approaches the speed of light. This effect, which has been overwhelmingly supported by experimental tests, applies to all types of clocks, including biological aging.
“If one were to depart from the earth in a spaceship that could accelerate continuously at a comfortable one g (an acceleration that would produce a force equal to the gravity at the earth’s surface), one would begin to approach the speed of light relative to the earth within about a year. As the ship continued to accelerate, it would come ever closer to the speed of light, and its clocks would appear to run at an ever slower rate relative to the earth. Under such circumstances, a round trip to the center of our galaxy and back to the earth–a distance of some 60,000 light-years–could be completed in only a little more than 40 years of ship time. Upon arriving back at the earth, the astronaut would be only 40 years older, while 60,000 years would have passed on the earth. (Note that there is no ‘twin paradox,’ because it is unambiguous that the space traveler has felt the constant acceleration for 40 years, while a hypothetical twin left behind on a spaceship circling the earth has not.)
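The figures quoted in that paragraph can be reproduced with the standard constant-proper-acceleration ("relativistic rocket") formulas. The sketch below is not part of Hiscock's answer; it assumes a one-way distance of roughly 30,000 light-years to the galactic centre and a comfortable 1 g of proper acceleration, with acceleration for the first half of each leg and deceleration for the second.

```python
# Numbers behind the "40 years of ship time vs. 60,000 Earth years" claim,
# using the standard relativistic-rocket formulas with units where c = 1.
import math

A = 1.03   # 1 g expressed in light-years per year^2

def leg_times(distance_ly: float):
    """Proper (ship) time and Earth time for one accelerate/decelerate leg."""
    half = distance_ly / 2.0
    tau_half = (1.0 / A) * math.acosh(1.0 + A * half)   # ship time, half leg
    t_half = (1.0 / A) * math.sinh(A * tau_half)        # Earth time, half leg
    return 2.0 * tau_half, 2.0 * t_half

tau_leg, t_leg = leg_times(30_000)                 # one way to the galactic centre
print(round(2 * tau_leg, 1), "years of ship time")  # round trip: ~40 years
print(round(2 * t_leg), "years on Earth")           # round trip: ~60,000 years
```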
“Such a trip would pose formidable engineering problems: the amount of energy required, even assuming a perfect conversion of mass into energy, is greater than a planetary mass. But nothing in the known laws of physics would prevent such a trip from occurring.
“Time travel into the past, which is what people usually mean by time travel, is a much more uncertain proposition. There are many solutions to Einstein’s equations of General Relativity that allow a person to follow a timeline that would result in her (or him) encountering herself–or her grandmother–at an earlier time. The problem is deciding whether these solutions represent situations that could occur in the real universe, or whether they are mere mathematical oddities incompatible with known physics. No experiment or observation has ever indicated that time travel is occurring in our universe. Much work has been done by theoretical physicists in the past decade to try to determine whether, in a universe that is initially without time travel, one can build a time machine–in other words, if it is possible to manipulate matter and the geometry of space-time in such a way as to create new paths that circle back in time.
“How could one build a time machine? The simplest way currently being discussed is to take a wormhole (a tunnel connecting spatially separated regions of space-time) and give one mouth of the wormhole a substantial velocity with respect to the other. Passage through the wormhole would then allow travel to the past.
“Easily said–but where does one obtain a wormhole? Although the theoretical properties of wormholes have been extensively studied over the past decade, little is known about how to form a macroscopic wormhole, large enough for a human or a spaceship to pass through. Some speculative theories of quantum gravity tell us that space-time has a complicated, foamlike structure of wormholes on the smallest scales–10^-33 centimeter, or a billion billion times smaller than an electron. Some physicists believe it may be possible to grab one of these truly microscopic wormholes and enlarge it to usable size, but at present these ideas are all very hypothetical.
“Even if we had a wormhole, would nature allow us to convert it into a time machine? Stephen Hawking has formulated a “Chronology Protection Conjecture,” which states that the laws of nature prevent the creation of a time machine. At the moment, however, this is just a conjecture, not proven.
“Theoretical physicists have studied various aspects of physics to determine whether this law or that might protect chronology and forbid the building of a time machine. In all the searching, however, only one bit of physics has been found that might prohibit using a wormhole to travel through time. In 1982, Deborah A. Konkowski of the U.S. Naval Academy and I showed that the energy in the vacuum state of a massless quantized field (such as the photon) would grow without bound as a time machine is being turned on, effectively preventing it from being used. Later studies by Hawking and Kip S. Thorne of Caltech have shown that it is unclear whether the growing energy would change the geometry of space-time rapidly enough to stop the operation of the time machine. Recent work by Tsunefumi Tanaka of Montana State University and myself, along with independent research by David Boulware of the University of Washington, has shown that the energy in the vacuum state of a field having mass (such as the electron) does not grow to unbounded levels; this finding indicates there may be a way to engineer the particle physics to allow a time machine to work.
“Perhaps the biggest surprise of the work of the past decade is that it is not obvious that the laws of physics forbid time travel. It is increasingly clear that the question may not be settled until scientists develop an adequate theory of quantum gravity.”
John L. Friedman of the physics department at the University of Wisconsin at Milwaukee has also given this subject a great deal of consideration:
“Special relativity implies that people or clocks at rest (or not accelerating) age more quickly than partners traveling on round-trips in which one changes direction to return to one’s partner. In the world’s particle accelerators, this prediction is tested daily: Particles traveling in circles at nearly the speed of light decay more slowly than those at rest, and the decay time agrees with theory to the high precision of the measurements.
“Within the framework of Special Relativity, the fact that particles cannot move faster than light prevents one from returning after a high-speed trip to a time earlier than the time of departure. Once gravity is included, however, spacetime is curved, so there are solutions to the equations of General Relativity in which particles can travel in paths that take them back to earlier times. Other features of the geometries that solve the equations of General Relativity include gravitational lenses, gravitational waves and black holes; the dramatic explosion of discoveries in radio and X-ray astronomy during the past two decades has led to the observation of gravitational lenses and gravitational waves, as well as to compelling evidence for giant black holes in the centers of galaxies and stellar-sized black holes that arise from the collapse of dying stars. But there do not appear to be regions of spacetime that allow time travel, raising the fundamental question of what forbids them–or if they really are forbidden.
“A recent surprise is that one can circumvent the ‘grandfather paradox,’ the idea that it is logically inconsistent for particle paths to loop back to earlier times, because, for example, a granddaughter could go back in time to do away with her grandfather. For several simple physical systems, solutions to the equations of physics exist for any starting condition. In these model systems, something always intervenes to prevent inconsistency analogous to murdering one’s grandfather.
“Then why do there seem to be no time machines? Two different answers are consistent with our knowledge. The first is simply that the classical theory has a much broader set of solutions than the correct theory of quantum gravity. It is not implausible that causal structure enters in a fundamental way in quantum gravity and that classical spacetimes with time loops are spurious–in other words, that they do not approximate any states of the complete theory. A second possible answer is provided by recent results that go by the name chronology protection: One supposes that quantum gravity allows microscopic structures that violate causality, and one shows that the character of macroscopic matter forbids the existence of regions with macroscopically large time loops. To create a time machine would require negative energy, and quantum mechanics appears to allow only extremely small regions of negative energy. And the forces needed to create an ordinary-sized region with time loops appear to be extremely large.
“To summarize: It is very likely that the laws of physics rule out macroscopic time machines, but possible that spacetime is filled with microscopic time loops. | physics |
https://nano.gtri.gatech.edu/sensors/ | 2023-05-30T18:00:55 | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224646076.50/warc/CC-MAIN-20230530163210-20230530193210-00237.warc.gz | 0.916952 | 147 | CC-MAIN-2023-23 | webtext-fineweb__CC-MAIN-2023-23__0__164062555 | en | NOx sensors are able to detect the presence of nitrogen monoxide and nitrogen dioxide in exhaust streams, and are used mainly in diesel engine vehicles. Mitigation of NOx is critical since NOx is a respiratory hazard, a greenhouse gas, and a contributor to smog and acid rain. An effective NOx sensor, therefore, can greatly improve air quality for people and the environment.
Our testing setup is able to simulate high temperatures and gas flows that will be experienced in actual driving conditions. Impedance spectroscopy is used to determine how sensitive the NOx sensor is to gas species of interest.
Areas of research interest include testing different sensor materials, optimizing operating conditions, and controlling the sintered microstructure | physics |
https://www.qa4eo.org/training/ | 2024-04-19T03:15:18 | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817253.5/warc/CC-MAIN-20240419013002-20240419043002-00091.warc.gz | 0.878418 | 698 | CC-MAIN-2024-18 | webtext-fineweb__CC-MAIN-2024-18__0__204553700 | en | It is critical that data and derived products are easily accessible in an open manner and have associated with them an indicator of quality traceable to reference standards (preferably SI) so users can assess suitability for their applications, i.e. the ‘fitness for purpose’.
A QI shall be based on documented and quantifiable assessments of evidence demonstrating the level of traceability to internationally agreed (where possible SI) reference standards.
A Quality Indicator (QI) shall provide sufficient information to allow all users to readily evaluate the fitness for purpose of Earth observation data or derived products.
This page describes training materials provided to help you to learn basic principles of metrology and how these apply to Earth observation. It provides a training pathway to work through, although we strongly recommend ‘learning by doing’ and working, in parallel, on real examples.
Basic metrological principles
Basic principles of metrology are described in material produced by the International Bureau of Weights and Measures (BIPM) and the world’s metrology institutes.
See, for example:
- The BIPM website for information about world metrology and access to the GUM and its supplements.
- NPL provides free introductory uncertainty material with 'Measurement uncertainty explained', along with paid-for introductory courses at: e-Learning courses - NPL Training.
- NIST online materials on principles of metrology and uncertainty and the NIST uncertainty machine.
- UKAS (United Kingdom Accreditation Service) provides an introductory guide to uncertainty as the M3003 document. This is written for people doing laboratory-based metrology; but covers the basic ideas (except for error covariance) very well.
- The NPL coordinated EMPIR EMUE project produced a compendium of examples of Good Practice in measurement uncertainty
- A good introductory textbook to uncertainty analysis that follows the GUM terminology in a very accessible manner, is 'An introduction to uncertainty in measurement', by Les Kirkup and Bob Frenkel.
Introductory Uncertainty Analysis material with a (radiometric) Earth Observation focus
The MetEOC website provides introductory material on uncertainty analysis. The examples relate mostly to radiometric FRMs (in situ observations with visible and shortwave infrared optical detectors).
There is a downloadable PDF of a textbook and videos of the course as it was taught live in February 2015. The video on the law of propagation of uncertainties is particularly helpful as an introduction to both the GUM approach to uncertainty analysis and the basics of error correlation structures.
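For readers who want a concrete feel for the law of propagation of uncertainty mentioned above, here is a minimal numerical sketch, including an error-correlation term. The measurement model and the numbers are invented for illustration; they are not taken from the MetEOC or NPL course material.

```python
# Minimal illustration of the GUM law of propagation of uncertainty,
# u_c^2 = c^T (R * u u^T) c, with c = sensitivity coefficients,
# u = standard uncertainties and R = error-correlation matrix.
import numpy as np

def combined_uncertainty(sensitivities, uncertainties, correlation):
    """Combined standard uncertainty of the output quantity."""
    c = np.asarray(sensitivities, dtype=float)
    u = np.asarray(uncertainties, dtype=float)
    covariance = correlation * np.outer(u, u)
    return float(np.sqrt(c @ covariance @ c))

# Model: y = x1 * x2 (e.g. a radiance = gain * signal), evaluated at x1 = 2, x2 = 5
c = [5.0, 2.0]              # dy/dx1 = x2, dy/dx2 = x1
u = [0.1, 0.2]              # standard uncertainties of x1 and x2
independent = np.eye(2)     # no error correlation
correlated  = np.array([[1.0, 0.5], [0.5, 1.0]])

print(combined_uncertainty(c, u, independent))   # ~0.64
print(combined_uncertainty(c, u, correlated))    # ~0.78 (correlation matters)
```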
The same material has also been collated into an NPL eLearning course on Uncertainty analysis for Earth Observation.
A metrological approach to FRMs, FDRs and TDPs
The Executive Summary developed for this QA4EO website introduces application of metrology to FRMs, FDRs and TDPs.
See the QA4EO Documents page for more.
This website and the metrological principles documents it hosts were developed in the frame of the Instrument Data Quality Evaluation and Assessment Service - Quality Assurance for Earth Observation (IDEAS-QA4EO) contract funded by ESA-ESRIN (n. 4000128960/19/I-NS), and builds on the work of previous projects, see Acknowledgements. | physics |
https://www2.simplehuman.com/ca/trash/step-cans | 2017-05-24T04:02:25 | s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607786.59/warc/CC-MAIN-20170524035700-20170524055700-00325.warc.gz | 0.924794 | 204 | CC-MAIN-2017-22 | webtext-fineweb__CC-MAIN-2017-22__0__203006320 | en | The pedal is often the first thing to wear out on a step trash can. That’s why we make our pedals extra strong. They run all the way to the back of the can and pivot near the center — like a see-saw. That spreads the stress across the entire steel platform for a smooth, steady action that will last for years. In fact, we engineered the pedal to last over 150,000 steps — that's more than 20 steps a day for 20 years.
Our patented lid shox air technology controls the motion of the trash can's lid. Airflow in and out of a specially designed damper allows the lid to open easily but provides resistance as the lid closes, easing it down gently and quietly.
On the outside, our cans stay neat and clean with a fingerprint-proof finish. On the inside, our extra-thick, double-seamed liners are designed to perfectly fit our cans so they don’t slip, and they stay neatly hidden. | physics |
https://teckyenergy.com/the-benefits-of-a-wind-powered-water-pump/ | 2023-10-04T09:31:59 | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511364.23/warc/CC-MAIN-20231004084230-20231004114230-00656.warc.gz | 0.939181 | 2,393 | CC-MAIN-2023-40 | webtext-fineweb__CC-MAIN-2023-40__0__141266713 | en | Clean and sustainable sources of energy are essential for the preservation of our planet. In this context, wind power has become increasingly popular due to its low carbon footprint and the abundance of wind resources available. Harnessing wind power to pump water is an excellent way to meet our water needs without harming the environment. Wind-powered water pumps work by converting the kinetic energy of the wind into mechanical energy, which in turn powers a pump to move water from one location to another. In this blog post, we will explore the benefits of using a wind-powered water pump and provide insights on how to choose the right pump for your needs.
Understanding Wind-Powered Water Pumps
Wind-powered water pumps work by harnessing the energy of the wind to move water from a source to a destination. There are two main types of wind-powered water pumps: mechanical and electrical.
Mechanical wind-powered water pumps are usually simple and have been used for centuries. They consist of a windmill that rotates a shaft, which in turn powers a piston or a diaphragm pump. These types of pumps are typically used for irrigation and can pump water from shallow wells or other sources.
Electrical wind-powered water pumps, on the other hand, use wind turbines to generate electricity, which is then used to power an electric water pump. These pumps can be used to pump water from deeper wells and are more efficient than mechanical pumps. Electrical wind-powered water pumps can also be used to generate electricity for other purposes.
Both types of wind-powered water pumps have their advantages and disadvantages. Mechanical pumps are generally more affordable and easy to maintain, but they require a certain level of wind speed to operate effectively. Electrical pumps are more efficient and can pump water from deeper sources, but they require a higher initial investment and more maintenance.
It is important to consider the wind conditions and the water demand when choosing the right type of wind-powered water pump. In areas with low wind speeds, a mechanical pump may be more suitable, while in areas with strong winds, an electrical pump may be the better option. Additionally, the water demand should be taken into account to ensure that the pump can provide enough water to meet the needs of the users.
In summary, wind-powered water pumps offer a sustainable and environmentally friendly way to pump water. Understanding the different types of pumps available and their respective advantages and disadvantages can help individuals and communities make informed decisions about choosing the right wind-powered water pump for their needs.
READ ALSO: Is a Renewable Energy Tech Degree Worth It?
The Benefits of Using Wind-Powered Water Pumps
Using wind power to pump water offers a range of benefits, including environmental and economic advantages. Here are some of the key benefits:
- Clean and Renewable Energy: Wind power is a clean and renewable energy source that produces no greenhouse gas emissions or air pollution. By using wind power to pump water, we can reduce our reliance on fossil fuels and protect the environment.
- Water Conservation: Wind-powered water pumps can help conserve water by enabling the efficient use of water resources. With the ability to pump water from deeper wells, wind-powered water pumps can access water that would otherwise be unavailable or difficult to reach.
- Reduced Carbon Footprint: Wind-powered water pumps have a significantly lower carbon footprint compared to conventional pumps that rely on fossil fuels. By reducing the amount of greenhouse gases produced, we can contribute to slowing down the effects of climate change.
- Lower Operating Costs: Wind power is free, and once a wind-powered water pump is installed, the operating costs are minimal. This can save money in the long run and make it a more cost-effective option compared to conventional pumps.
- Increased Reliability: Wind-powered water pumps are generally more reliable than conventional pumps, as they do not rely on fuel or electricity from the grid. This makes them a good option for remote or off-grid areas.
- Potential for Income Generation: In some cases, wind-powered water pumps can also generate electricity that can be sold back to the grid, providing an additional source of income for farmers or communities.
Wind-powered water pumps have been successfully implemented in various parts of the world. For example, in the Netherlands, a wind-powered water pumping station called De Nieuwe Afsluitdijk has been in operation since 2018. This station uses a combination of wind turbines and hydraulic pumps to move water from the Wadden Sea to a freshwater lake, providing irrigation for nearby farmland.
Wind-powered water pumps offer numerous benefits, including environmental and economic advantages. Choosing the right type of wind-powered water pump can help conserve water resources and reduce our carbon footprint. The real-life examples demonstrate the potential for wind-powered water pumps to contribute to sustainable development and the preservation of our planet.
Choosing the Right Wind-Powered Water Pump
When choosing a wind-powered water pump, there are several factors to consider, including the wind conditions, the water demand, and the location of the pump. Here are some important considerations:
- Wind Speed: The wind speed is a crucial factor to consider when selecting a wind-powered water pump. The pump should be able to operate at the average wind speed of the location to ensure optimal performance.
- Wind Turbine: The wind turbine should be selected based on the wind conditions of the location. A larger turbine may be needed in areas with low wind speeds, while a smaller turbine may be sufficient in areas with high wind speeds.
- Volume of Water: The volume of water needed will determine the size and capacity of the pump. It is important to select a pump that can meet the water demand of the users.
- Water Depth: The depth of the water source will also affect the choice of pump. Electrical pumps are generally more suitable for deeper wells, while mechanical pumps are better for shallower sources.
- Accessibility: The location of the pump should be easily accessible for maintenance and repairs.
- Noise: Some wind-powered water pumps can be noisy, so it is important to consider the noise levels and any potential impact on the local community.
In addition to these factors, it is important to consider the cost of the pump and any additional equipment needed, such as batteries for storing electricity. It may also be helpful to consult with a professional to ensure that the chosen pump is suitable for the specific needs and conditions of the location.
Choosing the right wind-powered water pump is essential for ensuring efficient and effective water pumping. Factors such as wind conditions, water demand, and location should be carefully considered to select the most appropriate pump. With the right pump, wind power can provide a sustainable and environmentally friendly way to pump water, contributing to the conservation of our precious water resources and the protection of our planet.
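To make the sizing factors above concrete, here is a rough, back-of-the-envelope sketch in Python (all numbers are illustrative assumptions of mine, not figures from this article): it compares the average hydraulic power needed to lift a hypothetical 5 m3 of water per day against a 30 m total head with the power a 3 m rotor might deliver at a 4 m/s average wind speed, assuming a conservative 15% overall efficiency for the rotor and pump together.

import math

RHO_WATER = 1000.0   # kg/m^3
RHO_AIR = 1.225      # kg/m^3, air density near sea level
G = 9.81             # m/s^2

def hydraulic_power_needed(volume_m3_per_day, total_head_m):
    # Average power (W) to lift the daily volume against the total head,
    # spread evenly over 24 hours.
    energy_joules = RHO_WATER * G * volume_m3_per_day * total_head_m
    return energy_joules / 86400.0

def wind_power_available(rotor_diameter_m, wind_speed_m_s, overall_efficiency=0.15):
    # Mechanical power (W) the rotor can deliver to the pump; the 15%
    # overall efficiency is an assumed, conservative figure.
    swept_area = math.pi * (rotor_diameter_m / 2.0) ** 2
    raw_power = 0.5 * RHO_AIR * swept_area * wind_speed_m_s ** 3
    return raw_power * overall_efficiency

need = hydraulic_power_needed(volume_m3_per_day=5.0, total_head_m=30.0)  # about 17 W
have = wind_power_available(rotor_diameter_m=3.0, wind_speed_m_s=4.0)    # about 40 W
print(f"need ~{need:.0f} W, available ~{have:.0f} W, margin x{have / need:.1f}")

Even this crude estimate shows why average wind speed and rotor size dominate the choice of pump: the available power grows with the cube of the wind speed and the square of the rotor diameter.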
Maintenance and Upkeep of Wind-Powered Water Pumps
Like any mechanical device, wind-powered water pumps require regular maintenance and upkeep to ensure optimal performance and longevity. Here are some key maintenance tasks that should be performed on a regular basis:
- Check the Wind Turbine: Inspect the wind turbine to ensure that the blades are free of debris and that the bearings are properly lubricated. If there is any damage to the blades or bearings, they should be repaired or replaced immediately.
- Inspect the Pumping Mechanism: Check the pumping mechanism, including the pump rod, cylinder, and piston, for any signs of wear or damage. If any components are worn or damaged, they should be replaced.
- Check the Tower and Guy Wires: Inspect the tower and guy wires for any signs of damage or corrosion. If any issues are found, they should be addressed promptly to ensure the safety and stability of the pump.
- Maintain Batteries: If the pump is equipped with batteries for storing electricity, they should be maintained regularly to ensure that they are operating efficiently.
- Monitor Water Flow: Regularly monitor the flow of water from the pump to ensure that it is meeting the water demand of the users. If the flow rate decreases, there may be an issue with the pump or the water source.
- Schedule Professional Inspections: It is important to schedule regular professional inspections of the pump to identify any potential issues before they become major problems. A professional can also help with any repairs or replacements that may be needed.
Regular maintenance and upkeep of wind-powered water pumps are crucial for ensuring optimal performance and longevity. By performing regular checks and repairs, the pump can operate efficiently and effectively for many years, providing a sustainable and environmentally friendly way to pump water. Proper maintenance and upkeep can also help prevent costly repairs or replacements in the future, making it a cost-effective option in the long run.
Wind-powered water pumps offer a sustainable and environmentally friendly way to pump water, using the power of the wind to meet the water needs of communities and individuals. By harnessing the energy of the wind, these pumps can reduce dependence on fossil fuels and contribute to the conservation of our precious water resources.
When choosing a wind-powered water pump, it is important to consider factors such as wind conditions, water demand, and location to ensure optimal performance. Regular maintenance and upkeep are also essential to keep the pump operating efficiently and effectively for many years.
While wind-powered water pumps may require a larger initial investment than traditional electric pumps, the long-term benefits in terms of cost savings and environmental impact make them a worthwhile investment. With proper care and maintenance, wind-powered water pumps can provide a sustainable and reliable source of water for generations to come.
- What is a wind-powered water pump? A wind-powered water pump is a mechanical device that uses the power of the wind to pump water from a source, such as a well or a reservoir. It typically consists of a wind turbine, which generates electricity, and a pumping mechanism, which uses the electricity to power the pump.
- How do wind-powered water pumps work? Wind-powered water pumps work by harnessing the energy of the wind to generate electricity. The wind turns the blades of the wind turbine, which then rotate a generator to produce electricity. This electricity is then used to power the pumping mechanism, which pumps water from a source to a destination.
- What are the benefits of using a wind-powered water pump? There are several benefits of using a wind-powered water pump, including:
- Reduced dependence on fossil fuels
- Lower operating costs over the long term
- Sustainable and environmentally friendly way to pump water
- Provides a reliable source of water, even in remote areas without access to electricity
- What factors should I consider when choosing a wind-powered water pump? When choosing a wind-powered water pump, it is important to consider factors such as wind conditions, water demand, and location. Other factors to consider include accessibility, noise levels, and any additional equipment needed, such as batteries for storing electricity.
- How often do wind-powered water pumps need to be maintained? Wind-powered water pumps require regular maintenance and upkeep to ensure optimal performance and longevity. Maintenance tasks may include checking the wind turbine and pumping mechanism for wear or damage, monitoring water flow, and scheduling professional inspections on a regular basis.
- Are wind-powered water pumps expensive? Wind-powered water pumps may require a larger initial investment than traditional electric pumps. However, the long-term benefits in terms of cost savings and environmental impact make them a worthwhile investment for those looking for a sustainable and reliable way to pump water. | physics |
https://rocketeddy.livejournal.com/?skip=10 | 2021-11-30T03:22:52 | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358903.73/warc/CC-MAIN-20211130015517-20211130045517-00453.warc.gz | 0.970193 | 935 | CC-MAIN-2021-49 | webtext-fineweb__CC-MAIN-2021-49__0__67021659 | en | Comets have been known to man since ancient times (at least since Aristotle, around 350BC) and we are currently aware of the existence of some three and a half thousand of them, yet they remain the most elusive and least understood bodies in our solar system.
Almost half of the comets we know about today are what we call Kreutz Sungrazers, believed to be fragments of one large comet that broke up at least 2000 years ago. As of Friday last week (March 12th, 2010), there was one less, as the NASA spacecraft SOHO watched a newly-discovered sungrazer get just a little too close to the Sun. You can see the final moments of this ill-fated comet at the SOHO movie theater - I recommend selecting image type "LASCO C3" with the start date 2010-03-12 ... you can see the comet move from the bottom left of the image, getting increasingly brighter and more elongated as it approaches the Sun for probably the last time.
As far as I can tell, though I confess I haven't done much research on the subject, the comet didn't impact the Sun as it might at first appear. There were certainly no Hollywood-style explosions on impact. As the comet got too close, the energy from the Sun will have vapourized it. According to spaceweather.com, "several of these fragments pass by the sun and disintegrate every day. Most are too small to see but occasionally a big fragment--like this one--attracts attention."
This year seems to be turning out to be a very good one for astronomers interested in comets.
In January this year, an object (named P/2010 A2) was discovered which was initially believed to be a rare asteroid-comet hybrid known as a main belt comet - and by rare, I mean super-rare. There are some 3,600 known comets, and there are an estimated 1.2 million asteroids 1km across or larger in the main belt, but there are just 4 known main belt comets. But as astroengine explained nicely, it could also have been something even more exciting - the first ever sighting of a collision between two main belt asteroids.
The main asteroid belt is not what you might expect after watching SciFi like Star Trek, where the crew have to navigate between densely packed asteroids regularly bumping off one another. Despite the millions of asteroids in the belt, the average distance between them is still immense, and collisions between main belt bodies with a mean radius of 10 km are expected to occur just once every 10 million years.
When impacts do occur, the relative speeds of the asteroids can be quite significantly different. In the case of P/2010 A2 it's possible that a hypervelocity impact occurred - that is to say, one in which the relative speeds differ by many km per second, which leads to some very interesting effects on the structures involved.
After photographs were taken with Hubble, this assessment was revised, and the event is indeed now considered to be the first sighting of a collision between two main belt asteroids.
Hubble's lead scientist David Jewitt said "The thing that we want to understand is how the asteroids smash into each other and destroy each other. It might help us understand even how to destroy an asteroid and prevent one from hitting us."
And some time today, the ESA Rosetta spacecraft will be taking photographs of P/2010 A2 with its OSIRIS cameras. While OSIRIS is not as powerful as Hubble, Rosetta is currently on its way towards the main belt where it will perform a flyby of the asteroid Lutetia on July 10th later this year, before continuing further out and ultimately rendezvousing with the comet 67P Churyumov-Gerasimenko in 2014. Rosetta has just passed the orbit of Mars, and is about 150 million km away from P/2010 A2 right now - a very long distance, the same distance as the Sun is from the Earth, but also much closer than the Earth (and Hubble) - 80 million km closer, to be precise. In fact, this distance is some 6 million km less than the closest distance the Earth and P/2010 A2 will ever be. Combining this relative closeness with the benefits of being able to take images from a different angle should hopefully make these images extremely useful to scientists in putting together the pieces of this fascinating event (sorry, bad pun).
https://legacy.gl.ciw.edu/content/2016/4/27/postdoctoral-associate-metastable-materials-synthesis | 2017-04-27T14:59:21 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122174.32/warc/CC-MAIN-20170423031202-00524-ip-10-145-167-34.ec2.internal.warc.gz | 0.901919 | 365 | CC-MAIN-2017-17 | webtext-fineweb__CC-MAIN-2017-17__0__235139215 | en | The Geophysical Laboratory, Carnegie Institution of Washington, seeks applications for postdoctoral positions in the field of materials prediction, synthesis and characterization. The positions will focus on using solid-state methodologies and computational prediction tools, including high-pressure and high-temperature conditions, to synthesize/predict metastable carbon-rich compounds via precursor pathways. Successful candidates will work closely in a team comprised of experimental chemists, physicists and materials scientists as well as theorists.
Minimum qualifications: A PhD in physics, chemistry, materials science or a related field is the requirement for these positions.
Desired qualifications: Candidates should be familiar with solid-state synthesis methodologies and/or computational prediction methods. Experience with high-pressure techniques including laser-heated diamond anvil cells and/or large-volume press techniques, as well characterization tools such as Raman / infrared spectroscopies and powder x-ray / single crystal diffraction is desirable. Successful candidates are expected to be able to work in both independent and collaborative group environments.
The appointments are for one year, with possibility for a second year pending progress and availability of funds. The positions are available immediately, and will remain open until filled. Interested parties should send a cover letter, resumé or curriculum vitae (including publications), statement of research interests, and contact information for three references.
To submit an application, click here. Only complete applications submitted via the Carnegie website will be considered.
Prospective researchers will work at the Geophysical Laboratory, Carnegie Institution, Washington, DC. The Carnegie Institution of Washington is an equal opportunity employer. All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of gender, race/ethnicity, protected veteran status, disability, or other protected group status. | physics |
https://www.agilelibre.com/content/the-idea-of-seismic-attributes | 2019-09-23T03:37:27 | s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575860.37/warc/CC-MAIN-20190923022706-20190923044706-00484.warc.gz | 0.889314 | 905 | CC-MAIN-2019-39 | webtext-fineweb__CC-MAIN-2019-39__0__55987618 | en | Seismic attributes are tools for inferring geology from seismic reflection data. Seismic attributes aid seismic interpretation by revealing subtle features, by identifying similar patterns, and by quantifying specific properties. Attribute analysis is a vital facet of reflection seismology for petroleum exploration and finds application from anomaly identification to feature extraction to lithologic prediction.
Seismic attributes are quantifiable properties of seismic data. They are subsets of the information in the data, and in this way simplify data interpretation. Attribute computations resemble data processing methods, but there is a distinction. In data processing, the goal is to enhance the signal by removing noise. In attribute computation, the goal is to enhance features of interest by removing some portion of the signal.
Attribute analysis decomposes data into attributes. The decomposition is informal; no rules govern how to compute attributes or what they must represent. In effect, attribute computations are filters that remove some portion of the data to reveal hidden features of interest, such as bright spots, faults, and channels. It is often argued that seismic attributes are never as good as the original seismic data because they have less information. This criticism misses the mark entirely — attributes are useful precisely because they have less information.
Seismic attributes are applied to pre-stack data gathers or post-stack data volumes. Pre-stack attributes measure amplitude changes and derived rock properties such as compressional and shear velocities or impedances. Post-stack attributes measure amplitude, frequency, discontinuity, dip, parallelism, and waveform, among others. Pre-stack attributes treat seismic data as recordings of seismic reflections. Post-stack attributes treat seismic data as images of the earth. Pre-stack attributes are derived through involved methods of geophysical inversion. They provide valuable clues about lithology and fluid content, but they are relatively expensive, demand careful interpretation, and require sophisticated data preparation. Post-stack attributes are derived through filters, transforms, and statistics. They quantify stratigraphic and structural properties and are easy to compute and apply, but they lack the direct ties to lithology and fluids that are of paramount interest.
Seismic data has many properties, and each property can be quantified in various ways. Hundreds of seismic attributes have been invented and more appear each year. Their great number and diversity is confusing and inhibits their application. But most seismic attributes are duplicates or unstable or lack useful meaning; they can be discarded. Discarding unneeded attributes leaves a much smaller and more manageable set of attributes that are relatively unique, stable, and meaningful. Above all, attributes should be meaningful, and preferably measure a property that is clearly related to geology or geophysics.
The two most important post-stack seismic attributes are reflection strength and discontinuity. Other useful attributes include maximum amplitude, instantaneous phase, average frequency, most positive and most negative curvature, spectral decomposition, waveform, relative acoustic impedance, and relative amplitude change. The two most important pre-stack attributes are compressional and shear impedances. Their information is often recast as Lamé’s parameters, lambda-rho and mu-rho.
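As a rough illustration of how the simplest of these attributes are computed (this sketch and its synthetic trace are my own, not taken from the article), reflection strength and instantaneous phase both fall out of the analytic signal of a trace, obtained with a Hilbert transform:

import numpy as np
from scipy.signal import hilbert

dt = 0.002                              # 2 ms sample interval
t = np.arange(0.0, 1.0, dt)             # a 1 s synthetic trace with two wavelet-like events
trace = (np.sin(2 * np.pi * 30 * t) * np.exp(-((t - 0.3) / 0.05) ** 2)
         + 0.5 * np.sin(2 * np.pi * 20 * t) * np.exp(-((t - 0.7) / 0.08) ** 2))

analytic = hilbert(trace)                     # trace + i * Hilbert transform of trace
reflection_strength = np.abs(analytic)        # envelope / instantaneous amplitude
instantaneous_phase = np.angle(analytic)      # radians, independent of amplitude
instantaneous_freq = np.diff(np.unwrap(instantaneous_phase)) / (2.0 * np.pi * dt)

print(reflection_strength.max(), instantaneous_freq.mean())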
Here’s my list of the seismic attributes that are suitable for application to key objectives in seismic data analysis.
- Reconnaissance: reflection strength, discontinuity, relative acoustic impedance, shaded relief.
- Amplitude anomalies: reflection strength, relative acoustic impedance, acoustic impedance, shear impedance, lambda-rho, mu-rho.
- Frequency shadows: average frequency, bandwidth, quality factor.
- Faults: discontinuity, most positive curvature, most negative curvature, dip, relative amplitude change, shaded relief.
- Channels: reflection strength, discontinuity, spectral decomposition, tuning frequency, waveform, acoustic impedance.
- Stratigraphy: instantaneous phase, reflection strength, parallelism, average frequency, waveform.
Seismic attributes are invaluable for mapping faults and channels, and for identifying bright spots and frequency anomalies. Further, they provide a basis for geobody detection, and aid data reconnaissance and presentation. Attribute interpretation remains largely a matter of qualitative investigations with individual attributes, but quantitative multi-attribute analysis promises greater rewards. However, current methods of multi-attribute analysis remain inadequate and must be improved greatly if we are to further automate aspects of data analysis. Therein lies the challenge for the future of seismic attributes. | physics |
https://www.daosgroup.it/en/casehistory/discovery-pico/ | 2023-01-30T17:42:44 | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499826.71/warc/CC-MAIN-20230130165437-20230130195437-00249.warc.gz | 0.910798 | 330 | CC-MAIN-2023-06 | webtext-fineweb__CC-MAIN-2023-06__0__95951685 | en | Customer: Quanta System
Category: Medical device
Summary description: laser device for ablation of tattoos.
Type of activity: production and full supply of the simmer module and control board with total traceability of the product
A technologically advanced laser with an Italian design that has opened the path to a new era. Developed by Quanta System, a company with a solid history and reputation on ultrashort laser pulses, the device is dedicated to all professionals wishing to establish themselves as points of reference in the medical-aesthetic landscape, through the most advanced laser technology that offers patients excellent results.
Quanta Pico-Boost (patent pending) is the first second-generation picosecond laser that, thanks to its unique technology, is the most innovative and powerful tool on the market today for the treatment of tattoos and pigmented lesions. It is an Nd:YAG laser system with dual wavelengths, 1064 and 532 nm, able to operate not only in picosecond mode, with emitted energy up to 800 mJ and a peak power of 1.8 GW, but also in Q-Switched mode with single and double pulses and in "Photo-Thermal" (Free Running) mode, for maximum efficiency and flexibility of use.
Specification and technological peculiarities
The absolute innovation introduced by Discovery PICO is the emission of high-energy, high-power ultrashort pulses in the picosecond range. These characteristics have increased the effectiveness of the treatments, generally reducing the number of sessions needed compared to a traditional Q-Switched laser and providing greater comfort for the patient.
https://hifihunt.com/2019/12/20/mcintosh-announces-mc901-dual-mono-amplifier/ | 2024-04-23T22:48:50 | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818835.29/warc/CC-MAIN-20240423223805-20240424013805-00726.warc.gz | 0.907293 | 1,181 | CC-MAIN-2024-18 | webtext-fineweb__CC-MAIN-2024-18__0__177320418 | en | A one-of-a-kind, ultimate solution for bi-amping loudspeakers: a 300 Watt vacuum tube amplifier and a 600 Watt solid state amplifier on one chassis!
Legendary home audio company McIntosh has been perfecting vacuum tube amplifiers since the 1940s and solid state amplifiers since the 1960s. Their amplifiers have been used at many historic events including presidential inaugurations and Woodstock, as well as in landmark sound systems like the Grateful Dead’s Wall of Sound and Despacio. From that heritage comes a truly one-of-a-kind home audio amplifier only they could dream of, design and build as the ultimate solution for bi-amping loudspeakers: the new McIntosh MC901 Dual Mono Amplifier.
The MC901 is a monoblock amplifier that drives a single speaker. But it is unlike any other monoblock amplifier. What makes the MC901 so unique is that it’s two amplifiers combined into one. Adding to the uniqueness is that each amplifier is of differing design philosophies: the MC901 consists of a 300 Watt vacuum tube amplifier attached to a 600 Watt solid state amplifier on one unified chassis.
The 300 Watt vacuum tube portion powers the speaker’s mid and upper drivers via eight KT88 output tubes, plus four 12AT7 and two 12AX7A signal tubes; the solid state section delivers 600 Watts of dedicated power to the power hungry woofers. The 300 Watt vacuum tube amp in the MC901 uses McIntosh’s patented Unity Coupled Circuit output transformer – the same technology McIntosh was founded on in 1949 – to deliver its full 300 Watts into almost any speaker regardless if it has 2, 4 or 8 Ohm impedance. Similarly, the 600 Watt solid state amp delivers its full 600 Watts into a 2, 4 or 8 Ohm impedance speaker via McIntosh’s Autoformer™ technology.
Vacuum tubes do not perform at their best when they are amplifying lower frequencies that are not being used by the loudspeaker. With the MC901, the vacuum tube amp will not be burdened with low end reproduction as the solid state section will drive these frequencies. The two amplifier sections of the MC901 are designed to work together in a synergistic relationship and are specifically engineered to assure that each section only amplifies its intended frequencies. Each amplifier section has its own discrete power supply so neither siphons power or performance away from the other. All of this results in easy bi-amping with unparalleled performance and sound reproduction from the speakers.
Prior to the MC901, bi-amping a speaker required two separate amplifiers. It also required an external crossover, along with a lot of trial and error, to properly configure the two detached amplifiers so they worked together as best as possible. The MC901 solves this issue thanks to its internal, adjustable crossover with the controls easily accessible on the top of the unit. These adjustable crossover filters allow the user to optimize the performance of both amplifier sections to their listening preferences. Relative gain levels for each amplifier section can be adjusted from -6dB to + 3dB. A direct feed can also be connected to each amplifier section, thus bypassing the filters.
Like virtually all McIntosh amplifiers, the MC901 features an iconic “McIntosh Blue” Watt meter, which has come to be recognized around the world as a symbol of quality audio. And just like the MC901 itself is a new design, so too is its meter with the introduction of McIntosh’s new DualView™ Power Output Meter. The DualView meter features two of McIntosh’s traditional, fast responding mechanical meters stacked above and below each other in a single meter window; one meter is dedicated to the 300 Watt vacuum tube amp and the other to the 600 Watt solid state amp. Each meter operates independently of the other and displays the real time power reading of each amplifier section.
While an entirely new design, the MC901 comes with the technology one would expect from a McIntosh amplifier: Power Guard® in the 600 Watt solid state amplifier section, Sentry Monitor™, Quad Balanced design, Power Control, Solid Cinch™ speaker binding posts, McIntosh Monogrammed Heatsinks™, and McIntosh’s eco-friendly Power Management System.
A brand new technology in the 300 Watt vacuum tube amplifier section of the MC901 is Power Guard Screen Grid Sensor™ (SGS). Power Guard SGS™ helps prevent premature vacuum tube failure by monitoring the screen grid current in the KT88 output vacuum tubes. If the current becomes too high, a circuit in Power Guard SGS is activated which then dynamically attenuates the input signal in real time to keep the vacuum tubes operating at safe levels.
Although the MC901 may look different than any McIntosh product before it, a quick glance reveals signature McIntosh design cues. The top is highlighted by a series of diagrams printed on glass that outline the basic circuitry and specifications of the unit, along with a glass nameplate trimmed with a circular aluminum ring that’s finished to match the front panel endcaps. The sides are adorned with vintage McIntosh-styled die cast aluminum name badges. Two classic McIntosh knobs are located on the front of the polished stainless steel chassis, and direct LED backlighting illuminates the DualView meter, McIntosh logo and the lettering on the black glass front panel. The uncompromising level of fit and finish that is expected of a McIntosh product completes the MC901.
Pricing and Availability
Orders for the MC901 can now be placed with Authorized McIntosh dealers with shipping expected to begin in December 2019.
Suggested retail price (VAT, shipping and any customs duties related to current standards of individual countries are excluded): $17,500 USD | physics |
http://nisc.nu.edu.eg/dr-ahmed-radwan-published-a-paper-entitled-bifurcation-behaviour-and-control-on-chaotic-convection-of-nanofluids-with-fractional-orders-at-recent-advances-in-mathematical-methods-and-computational/ | 2021-01-23T08:47:39 | s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703536556.58/warc/CC-MAIN-20210123063713-20210123093713-00683.warc.gz | 0.879041 | 166 | CC-MAIN-2021-04 | webtext-fineweb__CC-MAIN-2021-04__0__82220173 | en | Abstract: In this paper, we study the effect of fractional orders on the chaotic convection behaviour of nanofluids in a fluid layer heated from below. The Adams-Bashforth-Moulton predictor-corrector method was adopted to solve the fractional nonlinear system. The synchronization, based on active control theory and Lyapunov stability theory, and the effective chaotic range of the fractional-order chaotic system under variation of a single control parameter have been determined. The transition to chaos occurs by a subcritical Hopf bifurcation in this fractional-order system. The results show that inhibition of chaotic convection with fractional orders can be observed when using nanofluids. Numerical simulations are provided to illustrate the effectiveness of the synchronization results derived in this paper.
https://www.scrigit-scraper.com/hints/microwave-oven-safety-tips/ | 2021-03-05T18:38:49 | s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178373241.51/warc/CC-MAIN-20210305183324-20210305213324-00063.warc.gz | 0.915057 | 123 | CC-MAIN-2021-10 | webtext-fineweb__CC-MAIN-2021-10__0__170271601 | en | As explained in the following post, we are so used to using microwave ovens that we forget the dangers they could pose. They can overheat and burn foods if left on too long. Any bits of metal, even decorative edges on dishes or remnants of a metallic seal on a bottle, can cause sparks in the microwave that can burn the object and the inside surfaces of the microwave, and can start a fire. And of course, don't stand too close to the operating microwave since some radiation escapes from those nice big windows.
See the following article for more information.
https://code-dev.fb.com/2013/12/10/android/under-the-hood-building-and-open-sourcing-the-rebound-animation-library-for-android/ | 2023-12-04T15:30:34 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100531.77/warc/CC-MAIN-20231204151108-20231204181108-00364.warc.gz | 0.935195 | 1,807 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__119209072 | en | About a month ago, Facebook hosted Mobile@Scale, the second in our series of small technical conferences, with speakers from Facebook, LinkedIn, Pinterest, Dropbox, and Twitter. During the conference, we announced a new open-source spring dynamics animation library for Android called Rebound. We’ve seen a lot of interest in this project on GitHub, so I’d like to take this opportunity share some of the motivations and concepts behind it, as well as some tips on how you can integrate it into your own applications to create physics-based animations.
Mocking up animations with Quartz Composer
Designers at Facebook work with a tool called Quartz Composer to build rich interactive prototypes before engineering work begins. Quartz Composer is a node-based visual programming language provided as part of the Xcode development environment in Mac OS X, and is used for processing and rendering graphical data. Quartz Composer includes a simple physics system that allows you to create physics-driven animation prototypes. For example, the Chat Heads feature of Facebook Messenger allows the user to drag friends’ profile photos, representing ongoing conversations, around the screen. Flinging a stack of Chat Heads causes the stack to attract toward the edge of the screen and eventually come to rest after expending the energy of the fling. This is achieved by applying a spring force to the Chat Head stack, which pulls it to the target point on the edge of the screen and integrates the velocity imparted by the user’s initial fling gesture.
Other examples of physics-driven animations can be seen in Facebook Home, including flinging through pages in Cover Feed, the bouncing “double-tap-to-Like” indicator, or swiping your profile picture to unlock your phone. All these interactions were initially prototyped in Quartz Composer based on its simple physics engine.
Understanding spring forces
Most of the physics animations our designers have created can be modeled using simple spring forces. A spring force is defined by Hooke’s law, which states:
“…The force F needed to extend or compress a spring by some distance X is proportional to that distance.
That is, F = kx
where k is a constant factor characteristic of the spring, its stiffness.”
Spring forces based on Hooke’s law combined with a damping or friction force can be integrated to determine the net force acting on an object in a physics system, which according to Newton’s second law can be used to determine acceleration. Integrating these forces over time can be used to solve for an equilibrium point where the spring returns to its resting position. Modifications to the tension and friction of the spring can yield different types of animation curves. For example, low friction and high tension will create an animation that moves rapidly and oscillates many times before coming to a rest or equilibrium. Low tension and high friction will create an animation that moves slowly and does not oscillate when coming to a rest.
Evaluating existing animations frameworks
Implementing gestural physics-based animations required the engineering team to do an investigation of what tools were at our disposal. We began by looking into what was available in the Android SDK. Android includes three powerful frameworks for doing animation: property animation, view animation, and drawable animation. Property animation, introduced in API level 11, allows animations to be performed on arbitrary properties of any object. View animation is the older API for doing UI animation and provides many of the same features for timing and controlling animations as property animation; however, it works only with certain transformation properties of views. Drawable animation allows frame-by-frame presentation of drawable resources in sequence to generate an animation. These frameworks all give you the ability to create animations that will execute within a specific time duration. Interpolators can be used to modify the animation timing curve to achieve effects like acceleration, deceleration, and overshooting; but they are all based on a predetermined duration for the animation whose completion ratio is applied to the interpolator function curve.
Many of the interactions we wanted for projects like Home did not seem like a good fit for this sort of time-based animation. For example, flinging a page of Cover Feed at high velocity should cause it to overshoot the target page more than a fling with less velocity. Incorporating variable velocities and travel distances to create a predetermined time-based animation felt awkward, was more difficult to program, and didn’t yield the desired smooth transition from free scrolling to bouncing overshoot that we were looking for.
Since the animations mocked up in Quartz Composer were developed on a physics simulation, we decided that we should explore using a similar technique to implement these animations in Android. We believed that a physics simulation would make it much simpler and cleaner to integrate velocity, friction, and spring forces into the movement of Chat Heads or Cover Feed pages.
Initially we considered pulling in an open source physics library such as Box2d; however, we recognized that the animations the designers were creating were almost entirely based on simple spring forces. Adding a full physics engine to achieve these spring animations would add unnecessary bloat to our project, and the API might not be ideally suited for the problem we were trying to solve.
We wanted something simple, lightweight, and well-suited to the task of animating user interface elements based on physics rules and spring forces. Initially we thought of building a new type of SpringAnimator on top of the built-in ObjectAnimator framework. Since the animator framework on Android is inherently time-based, we found it wasn’t the right abstraction for doing physics-driven animation. Our SpringAnimator used a trick to tell the animator system that the animation’s duration was essentially infinite until the physics simulation resolved, at which point the animation was immediately finished. Although this did allow us to get realistic models of springs with input velocity and configurable tension and friction running on the animator framework, the abstraction felt wrong. We really just wanted springs or sets of springs and the ability to listen to events on those springs such as start, stop, and update, as well as the progress of the spring system as a whole (such as notifications before and after the springs in a system have been updated). These SpringListeners would allow us to perform arbitrary transformations on object properties based on the state of the spring or some mapping thereof.
Rebound is our solution to this set of requirements. Rebound provides a SpringSystem object that can manage a set of spring objects and the ability to listen to various events indicating the state of both the SpringSystem and the springs it manages.
A simple example of using rebound is presented at facebook.github.io/rebound. In this example, a mouse or touch down on the photo sets a spring value to 1 and releasing sets the spring value back to 0. The graph to the right shows the curve of the updates to the spring value as it seeks the end state of 0 or 1. The photo is scaled based on a mapping of the 0 to 1 states of the spring. One important thing to notice here is that the existing momentum and position of the spring are accounted for when the end state of the spring moves between 0 and 1. This allows code that controls Rebound animations to be more declarative. With a time-based animation, the programmer needs to create the animation, determine what the duration of it should be, start the animation, and potentially cancel and re-create another animation if the user interrupts the ongoing animation with a touch. With Rebound, you merely change the target end state and/or friction and tension configuration of the spring and let the physics engine do the work of determining how the spring will get from where it is now to the new target state. Along the way, any listeners attached to the spring will ensure that the UI is updated to represent the current state of the physics model.
These are some of the simplest examples of working with springs. However, the power of simple building blocks is that they can be combined into more-complex abstractions. A set of springs can be used to govern the motion of satellite buttons in a radial menu or the movement of pages in Pager View, or turning pages in a book, or throwing panels or dialogs off of the screen. A spring with friction but no tension can be used to model inertia so that an object just eventually slides to a stop when the friction force reduces the velocity of the object to 0. At this point, another spring could take over to cause the object to settle into a particular predetermined slot. And I’m sure you can imagine many other uses for springs. The library is intentionally simple to allow you to build the abstractions you want on top of it without incurring the weight of a full physics engine.
Will Bailey is a software engineer on Android. | physics |
https://chatgptai.cc/supermassive-black-hole/ | 2023-06-03T01:59:27 | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648911.0/warc/CC-MAIN-20230603000901-20230603030901-00457.warc.gz | 0.933663 | 1,002 | CC-MAIN-2023-23 | webtext-fineweb__CC-MAIN-2023-23__0__231349076 | en | Supermassive Black Hole An Exploration of the Most Mysterious Object in the Universe
The universe is a vast and mysterious place, full of wonder and awe-inspiring phenomena. Among the most fascinating and enigmatic objects in the cosmos are supermassive black holes, which have captured the imaginations of astronomers and the public alike. In this article, we will delve into the mysteries of these incredible objects, exploring their properties, origins, and potential implications for our understanding of the universe.
What is a Black Hole?
A black hole is a region of space where the gravitational pull is so strong that nothing, not even light, can escape. This occurs when a massive object collapses in on itself, creating a singularity, or a point of infinite density and gravity. Anything that gets too close to a black hole will be pulled in and disappear, forever trapped within the event horizon, the point of no return.
Types of Black Holes
There are three types of black holes: stellar, intermediate, and supermassive. Stellar black holes are formed from the collapse of a single massive star, while intermediate black holes are thought to be created from the merging of several smaller black holes. Supermassive black holes, on the other hand, are much larger and can contain billions of times the mass of our sun. They are found at the centers of galaxies and are thought to be formed from the merging of smaller black holes and the accretion of matter from surrounding stars and gas.
Properties of Supermassive Black Holes
Supermassive black holes are some of the most massive and powerful objects in the universe, and their properties are truly astounding. They can have masses billions of times greater than that of the sun, and their event horizons can extend for billions of kilometers. The matter falling toward them can become incredibly hot, emitting intense radiation in the form of X-rays and gamma rays. Supermassive black holes can also influence the surrounding space and time, warping and distorting the fabric of the universe.
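A quick back-of-the-envelope check of that event-horizon figure (standard formula and constants, my own arithmetic rather than the article's) uses the Schwarzschild radius r = 2GM/c^2:

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius_km(mass_in_solar_masses):
    mass_kg = mass_in_solar_masses * M_SUN
    return 2.0 * G * mass_kg / C ** 2 / 1000.0

print(schwarzschild_radius_km(1))     # ~3 km for a Sun-mass black hole
print(schwarzschild_radius_km(1e9))   # ~3 billion km, roughly the scale of Uranus's orbit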
Origins of Supermassive Black Holes
The origins of supermassive black holes are still not fully understood, but there are several theories. One theory suggests that they form from the merging of intermediate black holes and the accretion of matter from surrounding stars and gas. Another theory proposes that they are the result of the collapse of a massive cloud of gas in the early universe. Regardless of their origin, supermassive black holes are integral to the formation and evolution of galaxies.
The Role of Supermassive Black Holes in Galactic Evolution
Supermassive black holes play a crucial role in the evolution of galaxies. They can influence the motion and behavior of stars and gas in their vicinity, regulating the growth and development of galaxies over time. They also play a role in the formation of quasars, some of the most luminous and distant objects in the universe, by accreting matter and emitting intense radiation. Merging black holes are also a source of gravitational waves, ripples in the fabric of spacetime that were first detected in 2015, and merging supermassive black holes are expected to be among the strongest emitters of such waves.
Studying Supermassive Black Holes
Studying supermassive black holes is a challenging task, as they are located at the centers of galaxies and are surrounded by immense amounts of gas and dust. However, astronomers have developed several techniques to study these fascinating objects, including observations of their radiation and the motion of nearby stars and gas. In recent years, new technologies such as gravitational wave detectors and space-based observatories have provided unprecedented insights into the properties and behavior of supermassive black holes.
The Future of Supermassive Black Hole Research
The study of supermassive black holes is a rapidly evolving field, and new discoveries are being made all the time. With the development of new technologies and observational techniques, astronomers are poised to make even more groundbreaking discoveries in the years to come. Some of the most exciting areas of research include the study of the relationship between supermassive black holes and their host galaxies, the detection of even more distant and massive black holes, and the exploration of the properties of black holes in the early universe.
Implications for Our Understanding of the Universe
The study of supermassive black holes has profound implications for our understanding of the universe. These objects represent some of the most extreme and enigmatic phenomena in the cosmos, and their study can shed light on fundamental questions such as the nature of space and time, the evolution of galaxies, and the origins of the universe itself. As we continue to unravel the mysteries of supermassive black holes, we may be one step closer to unlocking the secrets of the universe itself.
In conclusion, supermassive black holes are among the most fascinating and mysterious objects in the universe. Their incredible properties, origins, and implications for our understanding of the cosmos make them a subject of intense study and speculation. From their role in the formation and evolution of galaxies to their potential as sources of gravitational waves, the study of supermassive black holes is a rapidly evolving and exciting field of research. | physics |
http://wesisland.blogspot.com/2010_05_01_archive.html | 2017-04-29T13:19:44 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123491.79/warc/CC-MAIN-20170423031203-00184-ip-10-145-167-34.ec2.internal.warc.gz | 0.942518 | 554 | CC-MAIN-2017-17 | webtext-fineweb__CC-MAIN-2017-17__0__255263118 | en | A search of the literature reveals a complicated, and even somewhat controversial, explanation of why and how tides occur. And some parts of the world do not even have two tides per day! So I will keep it general about why tides occur, but be more specific about what happens.
First, a key definition: Mean Lower Low Water (MLLW): the average of the lowest tide recorded at a tide station each day during the recording period, usually nineteen years. It is the “0” in tide charts, but more about them later.
Simplifying, and ignoring inertia, tides are created because the Earth and the moon attract each other, like magnets. The moon tries to pull at anything on the Earth to bring it closer. However, the Earth is able to hold onto everything except the water. (Source: www.hiwaay.net)
The gravitational attraction is strongest on the side of Earth that happens to be facing the Moon, simply because it is closer. This attraction causes the water on this “near side” of Earth to be pulled toward the moon (see below).
On the opposite side of Earth (the "far side"), the gravitational attraction of the Moon is less because it is farther away. Thus, the moon's gravity creates two bulges of water. One forms where Earth and Moon are closest, and the other forms where they are farthest apart. That then means in most of the country, each day there are two high tides and two low tides. The ocean is constantly moving from high tide to low tide, and then back to high tide.
A high tide is as high as the water will reach before it starts to fall again. It is highest when the Earth and Moon are closest, and the other daily high tide is somewhat less than the highest tide (shouldn’t these tides have different names (?)). A low tide is as low as the water goes before it starts to rise again. And the same with the two daily low tides; one is lower than the other.
A common misconception is the thought that since there are four tides daily they must be on a six-hour schedule. It takes the Earth about 24 hours to rotate once, relative to the Sun. But, because the Moon is moving with respect to Earth and the Earth is spinning, it takes the Earth a little longer to complete a rotation relative to the Moon—24 hours and 50 minutes. Thus, two daily tides occur separated by 12 hours and 25 minutes. (Source: www.Woods_Hole.edu)
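To put a number on that differential pull (standard textbook values, my own arithmetic rather than the original post's), the Moon's gravity on the near side of Earth exceeds its gravity at Earth's centre by only about a millionth of a metre per second squared, and that tiny difference is what raises the bulges:

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_MOON = 7.35e22         # kg
R_EARTH = 6.371e6        # m
D_EARTH_MOON = 3.844e8   # m, mean centre-to-centre distance

g_near_side = G * M_MOON / (D_EARTH_MOON - R_EARTH) ** 2
g_at_centre = G * M_MOON / D_EARTH_MOON ** 2
tidal_acceleration = g_near_side - g_at_centre                  # about 1.1e-6 m/s^2
approximation = 2.0 * G * M_MOON * R_EARTH / D_EARTH_MOON ** 3  # the usual 2GMR/d^3 estimate, same order
print(tidal_acceleration, approximation)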
The amount of rise or fall in the tide is directly related to the relative location of the earth, moon and sun, but we'll address that next time.
http://viventi.pratt.duke.edu/publications/conductively-coupled-flexible-silicon-electronic-systems-chronic-neural | 2023-06-01T06:02:24 | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647614.56/warc/CC-MAIN-20230601042457-20230601072457-00380.warc.gz | 0.833194 | 474 | CC-MAIN-2023-23 | webtext-fineweb__CC-MAIN-2023-23__0__111195378 | en | Title: Conductively coupled flexible silicon electronic systems for chronic neural electrophysiology.
Publication Type: Journal Article
Year of Publication: 2018
Authors: J Li, E Song, C-H Chiang, KJ Yu, J Koo, H Du, Y Zhong, M Hill, C Wang, J Zhang, Y Chen, L Tian, Y Zhong, G Fang, J Viventi, and JA Rogers
Journal: Proceedings of the National Academy of Sciences of the United States of America
Pagination: E9542 - E9549
Abstract: Materials and structures that enable long-term, intimate coupling of flexible electronic devices to biological systems are critically important to the development of advanced biomedical implants for biological research and for clinical medicine. By comparison with simple interfaces based on arrays of passive electrodes, the active electronics in such systems provide powerful and sometimes essential levels of functionality; they also demand long-lived, perfect biofluid barriers to prevent corrosive degradation of the active materials and electrical damage to the adjacent tissues. Recent reports describe strategies that enable relevant capabilities in flexible electronic systems, but only for capacitively coupled interfaces. Here, we introduce schemes that exploit patterns of highly doped silicon nanomembranes chemically bonded to thin, thermally grown layers of SiO2 as leakage-free, chronically stable, conductively coupled interfaces. The results can naturally support high-performance, flexible silicon electronic systems capable of amplified sensing and active matrix multiplexing in biopotential recording and in stimulation via Faradaic charge injection. Systematic in vitro studies highlight key considerations in the materials science and the electrical designs for high-fidelity, chronic operation. The results provide a versatile route to biointegrated forms of flexible electronics that can incorporate the most advanced silicon device technologies with broad applications in electrical interfaces to the brain and to other organ systems.
Short Title: Proceedings of the National Academy of Sciences of the United States of America
https://www.symtoys.com/toy1.html | 2023-09-28T21:03:28 | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510454.60/warc/CC-MAIN-20230928194838-20230928224838-00747.warc.gz | 0.935216 | 1,048 | CC-MAIN-2023-40 | webtext-fineweb__CC-MAIN-2023-40__0__168304358 | en | This toy is a simple radio-controlled vibrating egg. It works in a straightforward manner: The user wears the vibrating egg; a radio control can switch the egg on and off from a distance of about 25 feet.
The first thing to do is get the parts. I used an "egg" type vibrator -- this is just an egg-shaped vibrator attached by two wires to a control and battery pack. It should run on two "AA" batteries. For the radio, go to Radio Shack and find one of their radio-controlled toy cars. The type you'll need is the bottom-of-the-line cheap kind. The transmitter should be a simple push button or trigger (not the kind that uses a steering wheel to control the car's direction!). The car will be one of the ones that runs all the time when the power switch is on--it spins around in a circle in reverse when you push the transmitter button, and goes straight forward when you don't push the button. (The drive wheels spin all the time, and pressing the transmitter button just changes the direction they spin.) The car should also run on two "AA" batteries. These cars usually cost about twelve bucks. While you're there, you'll also need a couple of power diodes--1N4001, 1N4009, or similar (it doesn't really matter too much what kind as long as they're power diodes), and a small "SPDT" (single pole, double throw) toggle switch--it will have three connectors on it.
To build it: First, cut the wires leading to the vibrator at a place close to the battery/control pack. Put the control pack aside and remove the plastic shell from the car. When you have the shell off, remove the wheels, and locate the motor that drives the rear wheels. Carefully cut the two wires that lead from the small circuit board inside the car to the drive motor, as close to the motor as possible. Remove the motor and drive axle. If you have a small saw, you can cut the plastic chassis so all you have left is a circuit board sitting on top of a battery pack. Solder one of the wires that used to go to the car's drive motor to one of the wires that leads to the vibrator. (It doesn't matter which one goes to which one.) Solder the other wire that used to go to the drive motor to the CENTER terminal on the SPDT switch. Now examine the power diodes--there will be a band or stripe painted on the diode close to one end. This end is called the "cathode." Solder the cathode end of one of the diodes to one of the remaining two terminals on the switch. Solder the other end (the "anode") of the other diode to the last terminal on the switch. Take the two ends of the diodes that are free and solder them together, and solder the place where they join to the remaining wire leading to the vibrator.
Testing the unit: Turn on the power to the radio receiver from the car. The vibrator may start running immediately, even without pressing the transmitter button. If it does, flip the SPDT switch to its other position. It should stop, and run only when the transmitter button is pressed.
The switch toggles the way the vibrator runs. In one position, it should run only when the transmitter is activated; in the other position, it will run all the time and stop when the transmitter button is pressed. (This has all kinds of potential uses of its own; use your imagination!)
If it doesn't work, check to make sure: Does the radio control car run on the same voltage as the vibrator (i.e., does it use the same number of batteries)? Are the connections clean and secure? Is the transmitter in range? (Most cars have a "whip" antenna; if you remove this antenna when you disassemble the car, replace it with a piece of wire of the same length.) Check the wires coming from the transmitter: one should go straight to the vibrator; the other should go to the "pole" of the SPDT switch. Are the diodes connected properly? The cathode end of one should be attached to the anode of the other; the place where they join should go to the vibrator; one end of each diode should be attached to an outer terminal of the switch. Diodes are damaged by excessive heat; use care when soldering them, and if you have one, attach a clip heatsink to the wires going to the diodes while soldering. (If you solder them carefully, they'll be okay.)
If you want to get fancy, you can use a small plug (I used a phono plug) to connect the vibrator to the receiver, instead of soldering them together. That way, you can attach the proper plug to the battery pack that used to be connected to the vibrator, and use it both ways. | physics |
https://www.glennklockwood.com/materials-science/overview-glasses.html | 2024-04-13T02:13:44 | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816535.76/warc/CC-MAIN-20240413021024-20240413051024-00881.warc.gz | 0.932727 | 1,678 | CC-MAIN-2024-18 | webtext-fineweb__CC-MAIN-2024-18__0__102043320 | en | This document is somewhat of a continuation of my tutorial on glass compositions and assumes a basic understanding of glasses from a structural and compositional standpoint.
Soda Lime Silica Glasses
One of the oldest glass compositions known, soda-lime silica glasses account for roughly 90% of the glass produced today. They owe their ability to stand the test of time to their ease of production, satisfactory level of mechanical and chemical durability, and their relatively cheap raw materials.
Most soda lime silica glasses are composed of soda (Na2O), lime (CaO), and silica (SiO2) in a 16-10-74 proportion. Although this ratio was discovered empirically in ancient times, it happens to coincide with a very important region on the Na2O-SiO2 binary phase diagram (and its ternary complement, the Na2O-CaO-SiO2 phase diagram) near the lowest eutectic temperature at 788°C. While 16-10-74 melts somewhere around 800°C, it's worth noting that, as with all glasses, a melt at this temperature would be impractically viscous; the gob temperature (which is a fair indication of the temperatures around which a melt can be worked with) is around 1185°C.
Although its mechanical properties (namely, strength and durability) are by no means exceptional, soda lime silica glasses are widely used for most common glass applications such as in windows, glass bottles, tableware, and light bulbs. The raw materials used to create soda lime glasses are also inexpensive; soda ash, limestone, and silica sand can be purchased by the tonnage at significantly less expense than the raw materials required for borosilicate or specialty glasses.
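As a small aside (my own arithmetic, and it assumes the 16-10-74 figures are weight percent), converting the classic batch to mole percent shows the molar proportions land close to the weight proportions, since the three oxides have similar molar masses:

MOLAR_MASS = {"Na2O": 61.98, "CaO": 56.08, "SiO2": 60.08}   # g/mol
WEIGHT_PCT = {"Na2O": 16.0, "CaO": 10.0, "SiO2": 74.0}

moles = {oxide: wt / MOLAR_MASS[oxide] for oxide, wt in WEIGHT_PCT.items()}
total_moles = sum(moles.values())
mole_pct = {oxide: 100.0 * n / total_moles for oxide, n in moles.items()}
print(mole_pct)   # roughly 15.5% Na2O, 10.7% CaO, 73.8% SiO2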
The addition of B2O3 to silica serves many purposes because it not only contributes desirable properties to glasses, but acts as a glass former much like silica does. The addition of borates allows one to use less alkali (such as soda and potash) in the glass which is often desirable, as alkali fluxes significantly decrease mechanical strength and dielectric breakdown fields of their glasses. The addition of borates also contributes significantly to the chemical durability (as in the case of sodium vapor lamp encasements) and reduced thermal expansion (in the case of Pyrex glassware).
A common borosilicate composition is
- 80% silica
- 12.9% B2O3
- 3.9% Na2O
- 2.2% alumina
- 0.4% K2O
This has a total alkali content of 4.3% as opposed to the 16% found in the standard soda-lime silica system. While this composition (which happens to be that of Corning 7740 or Pyrex) has superior thermal shock resistance, this low alkali also makes borosilicates harder to melt. Furthermore, raw materials that contribute borates to glasses are very expensive, but borosilicate glasses are still manufactured widely into cookware, labware, and fiberglass insulation.
Lead Glasses / Lead-alkali Silicate
Lead oxide (PbO) acts as a flux in silica glass, lowering the melting points and therefore making processing and forming steps easier. Unlike alkali fluxes, though, the addition of lead oxide to silica does not degrade the dielectric loss of glasses, and its density gives lead glasses enhanced brilliance due to lead's high refractive index. Lead oxide (PbO) is commonly added at anywhere from 18% to 65% in addition to around 11% alkali. Lead glass is most commonly used for lead "crystal" decorations and tableware (which, unlike lead-based paints, do not leach lead in any appreciable amount) and optical glasses, often being used as IR-transmittive glass (such as those needed in heat-seeking missiles) and x-ray absorptive glasses and radiation shielding.
The addition of alumina to a silica network imparts notable strength and temperature resistances to the glass by binding up the glassy network. Aluminosilicate glasses vary widely in composition but are typically characterized as having between 20% and 40% alumina. A typical composition of an aluminosilicate glass is 57% silica, 20.5% alumina, 12% magnesia, 5.5% lime, 4.0% B2O3, and 1% soda.
Due to this very low flux content, such aluminosilicate glasses are very hard to melt and form (moreso than borosilicates), but possess superior thermal expansion (0.5 ppm in aluminosilicates versus 3.3 ppm in borosilicates), high resistance to chemical attack due to very low alkali content, good strength, and very good refractory properties. Aluminosilicate glasses are most often found in the form of cookware, glass ceramics, and fiberglass.
High Silica Glasses
As would be expected, high silica glasses, due to their lack of fluxing agents, are very hard to melt and possess working temperatures well over 2000°C. Their properties are generally superior to most other types of glasses, with the very high processing temperatures being the limiting factor in the production and application of these high-silica glasses on a larger scale. For example, these glasses possess very low thermal expansion, good chemical durability, optical properties, mechanical properties, and very good high-temperature behavior. Their primary detractor is that they have a relatively low density due to their open structure, and (combined with impurities) this makes high silica glasses good ionic conductors.
High-silica glasses have traditionally been difficult to create due to the high temperatures required to cause melting of silica. As technology has improved, though, there have been six generations of high-silica glass production technology, each generation yielding higher purities and better quality than the last.
- Type I pure silica glass was made by the electric melting of natural quartz crystals. Because the raw material (quartz) was natural, it contained geological impurities, and this combined with low-quality melting to produce a porous, translucent silica. These glasses are still used in bunsen burner ceramic triangles and some older heating element protectors.
- Type II silica is produced by running natural quartz powder through a hydrogen/oxygen flame to melt it, resulting in a less porous, optically transparent glass. However, the hydrogen flame produces water which acts as an IR-absorptive material in the glass, and the geological impurities inherent in natural quartz are still present.
- Type III silica glass is synthetically produced by the gas-phase hydrolysis of silicon tetrachloride in a hydrogen flame. While this eliminates the geological/color impurities from natural quartz sources, the hydrogen flame still results in an IR-absorptive glass.
- Type IV silica glass is produced via the gas-phase hydrolysis of SiCl4 in plasma. This results in a water-free synthetic glass that is pure and significantly more IR-transmissive than Type III silica glass.
- Type V high silica glass is produced via solgel synthesis from triethylorthosilicate [Si(OC2H5OH)4, or TEOS) followed by full densification. Because it is synthesized at room temperature, Type V glass maintains all the purity of Type IV glass without the thermal defects.
- Type VI silica is nanoporous "thirsty" glass, such as Vycor, also synthesized from solgel. It can be densified into a solid pure silica glass.
Applications of these pure and high-silica glasses include vast use in the semiconductor industry since silica doesn't contaminate silicon wafers, fiber optics, UV-transmissive lamp tubes, precision optics, refractory tubes, and as a fiber reinforcer in composites. | physics |
http://www.physiotherm.com/en/daten/produktkatalog/sensocare/ | 2018-04-25T12:19:32 | s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947803.66/warc/CC-MAIN-20180425115743-20180425135743-00086.warc.gz | 0.809712 | 159 | CC-MAIN-2018-17 | webtext-fineweb__CC-MAIN-2018-17__0__209058149 | en | Contactless skin temperature measurement using heat sensors.
The SENSOCare® technology developed and patented by Physiotherm allows the skin temperature to be measured via special heat sensors in the back area without any contact with the skin. using these values, the fully-electronic system control automatically and continuously regulates the infrared intensity and emits it according to individual adjustments. For the first time, this allows infrared heat to be used in a reclining position and for persons with limited sensitivity to heat.
Optimal heat application:
Intelligent measurement sensors allow the optimal heat application to be achieved.
Highest possible comfort:
The fully-electronic system control automatically regulates the infrared intensity.
The individual adjustment of the infrared intensity to the user avoids any excessive thermal stress to the skin. | physics |
https://bec.iastate.edu/research/completed/development-of-a-novel-aerodynamic-solution-to-mitigate-large-vibrations-in-traffic-signal-structures/ | 2024-03-03T13:21:49 | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476374.40/warc/CC-MAIN-20240303111005-20240303141005-00339.warc.gz | 0.94516 | 577 | CC-MAIN-2024-10 | webtext-fineweb__CC-MAIN-2024-10__0__415365 | en | About the research
The cantilevered-arm traffic structures (e.g., structural supports of signs, luminaires, and traffic signals) are an integral part of the transportation systems. The cantilevered form of these structures are prone to fatigue failures at the base of the cantilevered elements as there is no provisions for fatigue in that specification. This is attributable to the lack of consideration of fatigue in the design of these structures and to the wind environment that can induce galloping, vortex shedding, wind gusts, and truck-induced gusts. These structures’ low mechanical damping (0.1–0.4%) is known to contribute to this type of behavior. While much research in recent years has been focused on development of vibration mitigation strategies or design of connections that are fatigue-rated, less attention has been given to the natural performance of these structures when exposed to the natural wind environment.
This project considers the “aerodynamic damping” as an active means to mitigate the large amplitude vibrations of these structures. The proposed method is superior to the other common approaches, as it uses the inherent characteristics of the traffic light (specific dimension ratios) to ensure that the aerodynamic damping is maximized during the gust events. It’s unique in the sense that it will not require specific tuning (like those required for non-aerodynamic vibration mitigation systems), or implementation of the heavier fatigue-rated connections. The tests that were conducted in the Wind Simulation and Testing (WiST) Laboratory have shown that the proposed approach will help improve the performance of traffic signal structures. In these studies, tests on two different traffic light configurations were conducted. Then, physical characteristics of traffic lights was changed to reach the maximum possible aerodynamic damping at each instance. This in turn has resulted in rapid damping of the large amplitude motions. A traffic signal structure was then monitored in the field. The information collected from the field were then used to assess the impact of the traffic light modification on the response of the traffic light structure in in-plane and out-of-plane directions.
The implementation of the proposed dimensional characteristics in design of traffic lights and traffic signal structures is an excellent opportunity to address the longstanding issue of fatigue-related failures in these structures. The economic implications of this approach are huge considering the millions of these structures that are being maintained by cities and state DOTs. The implementation of the proposed strategy in design of the traffic lights and traffic signal structures will ensure longer life time for these structures while eliminating the costs associated with possible failures, the user costs imposed due to failures, and costs associated with the replacement. The proposed strategy is expected not to increase the fabrication costs of these structures. The proposed approach is expected to have a larger impact when the concept is extended to other traffic structures such as structural supports for signs and luminaires. | physics |
https://www.sentircristiano.com/php-cgi/2579osl-dating-labs.html | 2021-10-19T16:15:29 | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585270.40/warc/CC-MAIN-20211019140046-20211019170046-00490.warc.gz | 0.897184 | 341 | CC-MAIN-2021-43 | webtext-fineweb__CC-MAIN-2021-43__0__290976936 | en | Osl dating labs
Common silicate minerals like quartz and potassium feldspar contain lattice-charge defects formed during crystallization and from subsequent exposure to ionizing radiation.
These charge defects are potential sites of electron storage with a variety of trap-depth energies.
The radioactive decay of 40K releases beta and gamma radiation, whereas the decay in the U and Th series generates mostly alpha particles and some beta and gamma radiation.
Free electrons are generated within the mineral matrix by exposure to ionizing radiation from the radioactive decay of daughter isotopes in the 235U, 238U and 232Th decay series, and a radioactive isotope of potassium, 40K, with lesser contributions from the decay of 85Rb and cosmic sources.
The OSL signal of potassium feldspar is usually more resista nt to solar resetting than most quartz.
There is significant variability in the luminescence properties of quartz and potassium feldspar grains related to crystalline structure, minor and rare-earth impurities, solid-solution relations, number of luminescence cycles (Fig. Thus, because of this inherent variability in dose sensitivity of quartz and feldspar, analytical procedures for dating often need to be tailored for a specific geologic provenance.
This technique, as thermoluminescence, was originally developed in the 1950s and 1960s to date fired archaeological materials, like ceramics (Aitken, 1985).
Ensuing research in the 1970s documented that marine and other sediments with a prior sunlight exposure of hours to days were suitable for thermoluminescence dating (Wintle and Huntley, 1980). | physics |
https://www.chrisklaxton.com/klaxto/scinerds-chinese-physicists-smash-quantum | 2019-08-25T03:07:29 | s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027322170.99/warc/CC-MAIN-20190825021120-20190825043120-00179.warc.gz | 0.942074 | 341 | CC-MAIN-2019-35 | webtext-fineweb__CC-MAIN-2019-35__0__221837543 | en | Chinese Physicists Smash Quantum Teleportation Record
A group of Chinese engineers have smashed the records for quantum teleportation, by creating a pair of entangled photons over a distance of almost 100 kilometers.
Quantum entanglement is the mysterious phenomenon where two particles become tightly intertwined and behave as one system — whether they are next to each other on a laboratory bench, or either sides of a galaxy.
If you examine one particle and measure a certain property — say, vertical polarization — then the other will instantly adopt the opposite property — in this case, horizontal polarization.
It’s crazy stuff. Albert Einstein described it as “spooky action at a distance,” when he was still struggling to get his brain around the ideas proposed by quantum theory. But it’s a powerful phenomenon, and one that physicists have long attempted to harness in the lab.
Trouble is, creating a pair of particles with any distance between them has always been a difficult hurdle to overcome. Imperfections in optic fiber glass, or air turbulence, means that the qubits become unentangled. Plus as the distance gets farther your beam gets wider, so photons simply miss their target.
Juan Yin at the University of Science and Technology of China in Shanghai claims to have cracked it. His team sent photons between two stations, separated by 97 km. Over a Chinese lake, to be precise. To pull off this feat, Yun and friends used a 1.3 Watt laser, and a clever optic steering technique to keep the beam precisely on target. With this setup, they were able to teleport more than 1,100 photons in four hours, over a distance of 97 kilometers. | physics |
http://cpiaero.com/almds.html | 2021-05-08T18:35:18 | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988923.22/warc/CC-MAIN-20210508181551-20210508211551-00344.warc.gz | 0.8215 | 112 | CC-MAIN-2021-21 | webtext-fineweb__CC-MAIN-2021-21__0__175632754 | en | The Airborne Laser Mine Detection System (ALMDS) pod structure is an approximately 7.5ft long cylindrical housing for a Mine Detection Laser. It consists of machined, formed and composite details and undergoes acceptance tests such as an air pressure test and weight and center of gravity testing.
© Copyright 2016 CPI Aerostructures, Inc.
Designed by Latitude
91 Heartland Boulevard Edgewood, NY 11717
Phone: 631-586-5200 | Fax: 631-586-5840 | physics |
http://reckersau.blogspot.com/2012/03/magnitude-69-quake-hits-japan-march-14.html | 2018-08-21T23:05:03 | s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221219197.89/warc/CC-MAIN-20180821230258-20180822010258-00215.warc.gz | 0.974836 | 99 | CC-MAIN-2018-34 | webtext-fineweb__CC-MAIN-2018-34__0__89531131 | en | A magnitude-6.9 earthquake shook the northeastern coast of Japan on March 14, 2012.
According to the United States Geological Survey (USGS), the earthquake was recorded at exactly 6:08 PM local time. The epicenter was traced 235 km (146 miles) south of Kushiro, Hokkaido, Japan.
Tsunami alert has been issued urging people to evacuate. The public has been warned of a wave as high as 50 centimeters (20 inches).Read more » | physics |
https://www3.carleton.ca/calendars/archives/grad/0405/programs/electronics.html | 2021-09-23T09:38:47 | s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057417.92/warc/CC-MAIN-20210923074537-20210923104537-00356.warc.gz | 0.809123 | 5,437 | CC-MAIN-2021-39 | webtext-fineweb__CC-MAIN-2021-39__0__170827787 | en | Mackenzie Building 5170
Chair of the Department: L. Roy
Associate Chair, Graduate Studies: B. Syrett
In addition to University and Graduate Faculty regulations, all
Engineering departments share common procedures that are described in
Section 18 of the General Regulations section of this Calendar.
The Department of Electronics offers programs of study and research
leading to M.A.Sc., M.Eng. and Ph.D. degrees in Electrical Engineering.
These degrees are offered through the Ottawa-Carleton Institute for
Electrical and Computer Engineering (OCIECE), which is jointly administered
by the Departments of Electronics and of Systems and Computer Engineering
at Carleton University, and the School of Information Technology and
Engineering (SITE) at the University of Ottawa. For further information,
including admission and program requirements, see the Institute's section
of this Calendar.
The Department of Electronics is concerned with the fields of applied
and physical electronics. Effort is strongest in four broad areas:
computer-aided design for electronic circuits; physics and fabrication
technology for solid-state electronic and photonic devices; VLSI and
high-speed analog integrated circuits; and microwave and photonic
subsystems and circuits. Specific areas of specialization include:
Computer-Aided Circuit Design
Development of hierarchical simulators for mixed analog/digital
circuits; analysis and design of switched-capacitor networks; analysis and
design of high speed circuits; optimization techniques; synthesis of VLSI
circuits using both algorithmic and knowledge-based approaches; analysis
and simulations of communications systems links; layout synthesis and
Waveguides and holographic optical elements for optical interconnects;
electro-opti c modulators and switches; waveguides for sensing
Solid State Devices
Fundamental semiconductor device physics; device design and novel device
structures; device modeling for CAD; new fabrication processes; submicron
and quantum effect devices; photovoltaics; semiconductor sensors and
Integrated Circuit Engineering
Design and development of linear and digital integrated circuits;
fabrication processes and test techniques; MOS, bipolar and BiCMOS ICs;
VLSI; computer-aided circuit design; MEMS.
Analog Signal Processing
Switched-capacitor filters, transversal filters, operational amplifiers
and radio frequency functions in analog signal processing applications,
particularly for integrated circuit realization.
Active filters; linear and nonlinear circuit design; computer-aided
circuit design; phase-locked circuits, carriers and clock synchronizers;
mixers, modulators and demodulators.
Microwave amplifiers, oscillators, modulators, frequency converters,
phase-shifters; use of FET and bipolar transistors, Schottky barrier,
varactor, step recovery and PIN diodes; design using finline, microstrip,
stripline, coax, and waveguide; monolithic microwave ICs in GaAs; miniature
hybrid microwave ICs. High-performance microwave packaging including low
temperature co-fired ceramics.
Communications and Radar Electronics
Circuits for terrestrial and satellite communications; circuit
implementation of digital modulation techniques; antenna and array design;
communication channel characterization; optical communications circuits;
radar transmitter and receiver design.
The Department is part of the CITO (Communications and Information
Technology of Ontario) Centre of Excellence. Current research areas of the
Centre with major participation from the Department are: integrated
services digital networks, mobile and portable wireless networks, VLSI in
communications, and millimetre wave/optical antennas and circuits for
The Department is a member, along with seven other Canadian universities
and several major industrial organizations, of Micronet, the federally
sponsored network on Microelectronic Devices, Circuits and Systems for ULSI
(ultra-large scale integration). Within the Department Micronet supports
research on: device structures, modeling and fabrication processes for
submicron CMOS and BiCMOS ICs; high-speed filters, phase detectors, A-to-D
converters, frequency synthesizers and other circuit elements for silicon
ICs operating at radio frequencies; analysis and optimization of
interconnects for high-speed ICs; and automated generation of custom cells
for VLSI design.
The structure of the courses offered allows a well-integrated master's
or Ph.D. program of study to be chosen that is appropriately related to the
field of thesis research. Device- and integrated-circuit-oriented courses
cover: fabrication, semiconductor device theory, semiconductor device
design, integrated circuit design, and integrated circuit reliability.
Circuit-oriented courses include: signal-processing electronics,
microprocessor electronics, computer-aided circuit design, phase-locked
circuits, filter circuits, RF and microwave circuits, antenna and array
design. Systems-oriented courses cover: optical fibre communications and
Housed in a Class 100 cleanroom, this laboratory offers a complete set
of equipment for the fabrication of solid state devices and small-scale
integrated circuits for research purposes. There is a strong emphasis on
silicon devices and process technology, including MEMS and silicon
photonics. Photomasks can be generated in house. An e-beam direct-write
system supports deep submicron lithography. Modern diffusion furnaces can
grow industrial qua lity gate oxide. LPCVD of silicon nitride, glasses, and
polysilicon is available. RIE and ECR plasma etchers can pattern deep
submicron features. Magnetron and RF sputtering and e-beam and thermal
evaporation are available for metal deposition. A rapid thermal annealer
and a variety of diagnostic tools including a SEM, ellipsometer and thin
film profilometer complete the equipment set. A well-equipped semiconductor
device characterization laboratory complements the facility.
Computing and Circuit Design Facilities
The Department has excellent computing facilities for software
development, circuit design and layout for integrated circuits and
microwave circuits. IC designs using synthesis, standard cells and layout
are supported for fabrication through the Canadian Microelectronics
Corporation or in-house.
The graduate computer network consists of 70 SUN workstations and has
access to the Internet. Industry standard software includes CADENCE, Mentor
Graphics, SYNOPSYS, HSpice, SUPREM, Xilinx, MEDICI, Agilent ADS, Agilent
Momentum, Agilent HFSS, MATLAB, MATHEMATICA, FRAMEMAKER, and others.
Advanced instrumentation is available supporting automated testing of
both analog and digital integrated circuits at frequencies up to 20 GHz.
Low noise test facilities include a phase noise measurement system, dynamic
signal analyzers, spectrum analyzers, network analyzers, arbitrary waveform
generators, digital sampling oscilloscopes, digital data analyzers and
generators, and RF frequency synthesizers, all of which may be controlled
using the IEEE 488 interface.
The Department has up-to-date facilities for circuit development and
measurement including wafer probing at microwave frequencies ranging up to
40 GHz . There are also facilities for work at optical frequencies.
Thin-film microwave integrated circuits can be fabricated in-house; there
is provision for the fabrication of GaAs MMICs through foundry services.
Special purpose microwave equip ment includes automated network analyzers,
spectrum analyzers and frequency synthesizers, and a complete microwave
link analyzer. Data generators and error-detection equipment is available
for work on digital communications. The Department also has an anechoic
chamber with an automated measurement system for the characterization of
antennas up to 20GHz. The research laboratories maintain extensive
collaboration with government and industrial research and development
agencies in the Ottawa area.
Not all of the following courses are offered in a given year.For an
up-to-date statement of course offerings for 2004-2005 and to determine the
term of offering, consult the Registration Instructions and Class Schedule
booklet, published in the summer and also available online at
Course Designation System
Carleton's course designation system has been restructured. The first
entry of each course description below is the new alphanumeric Carleton
course code, followed by its credit value in brackets. The old Carleton
course number (in parentheses) is included for reference, where
Courses offered by the Department of Electronics are as follows:
- ELEC 5200 [0.5 credit] (ELG 6320)
- Advanced Topics in Integrated Circuits and Devices
- Topics vary from year to year.
- ELEC 5404 [0.5 credit] (ELG 6344)
- Neural Networks for High-Speed/High-Frequency Circuit
- Introduction to neural network methodologies for computer-aided
design of high-speed/high-frequency circuits, including modeling of
passive and active devices/circuits, and their applications in
high-level design and optimization in wired and wireless electronic
- ELEC 5409 [0.5 credit] (ELG 6349)
- Microwave and Millimeterwave Integrated Circuits
- Design of communications electronics components with emphasis on
GaAs MMIC implementation. Overview of MESFET, HEMT, HBT device
modeling. Integrated lumped/ distributed passive element modeling.
Broadband impedance matching. Design of direct-coupled amplifiers,
distributed amplifiers, power devices and amplifiers, phase shifters,
switches, attenuators, mixers, oscillators.
- ELEC 5501 [0.5 credit] (formerly 97.551)(ELG 6351)
- Passive Microwave Circuits
- Characteristics of homogeneous and inhomogeneous transmission lines
and waveguides. Planar transmission lines: stripline, microstrip,
coplanar line, slotline. Coupled transmission lines. Modeling of
discontinuities. Ferrite components. Microwave network analysis:
s-parameters, CAD models. Design of impedance-matching networks,
directional couplers, power splitters, filters. Applications in MICs
- ELEC 5502 [0.5 credit] (formerly 97.552)(ELG 6352)
- Analog Integrated Filters
- The fundamentals and details of analog continuous-time and SAW
filters. Comparison to switched-capacitor filters. Review of filter
concepts, types of filters, approximations, transformations. Building
blocks such as op amps, transconductance amplifiers, and gyrators.
Design using cascaded second-order sections, multiple loop feedback and
LC ladder simulations.
- ELEC 5503 [0.5 credit] (formerly 97.553) (ELG 6353)
- Radio Frequency Integrated Circuit Design
- Integrated radio front-end component design. Overview of radio
systems, frequency response, gain, noise, linearity, intermodulation,
image rejection, impedance matching, stability, and power dissipation.
Detailed design of low-noise amplifiers, mixers, oscillators and power
amplifiers. Use of on-chip inductors and baluns. Process variations,
parasitics, and packaging.
- ELEC 5504 [0.5 credit] (formerly 97.554) (ELG 6354)
- Analysis of High-Speed Electronic Packages and
- Introduction to modeling, simulation and optimization of high-speed
VLSI packages; models for packages, interconnects and ground/power
planes; lumped, distributed and EM models for interconnects; delay,
crosstalk and switching noise; moment matching techniques; concurrent
thermal/electrical analysis of IC packages and boards.
- ELEC 5506 [0.5 credit] (formerly 97.556) (ELG 6356)
- Simulation and Optimization of Electronic Circuits
- Introduction to computer simulation and optimization of electrical
circuits. Time- and frequency-domain formulations for sensitivity
analysis and optimization. Optimization techniques for performance-,
cost- and yield-driven design of electronic circuits. Optimization
approaches to modeling and parameter extraction of active and passive
- ELEC 5508 [0.5 credit] (formerly 97.558) (ELG 6358)
- Computer Methods for Analysis and Design of VLSI
- Formulation of circuit equations. Sparse matrix techniques.
Frequency and time-domain solutions. Relaxation techniques and timing
analysis. Noise and distortion analysis. Transmission line effects.
Interconnect analysis and crosstalk simulation. Numerical inversion
techniques. Asymptotic waveform estimation. Mixed frequency/time domain
techniques. Sensitivity analysis.
- ELEC 5509 [0.5 credit] (formerly 97.559) (ELG 6359)
- Integrated Circuit Technology
- Survey of technology used in silicon VLSI integrated circuit
fabrication. Crystal growth and crystal defects, oxidation, diffusion,
ion implantation and annealing, gettering, CVD, etching, materials for
metallization and contacting, and photolithography. Structures and
fabrication techniques required for submicron MOSFETs. Applications in
advanced CMOS processes.
- ELEC 5600 [0.5 credit] (formerly 97.560) (ELG 6360)
- Digital Integrated Circuit Testing
- Production testing of digital integrated circuits. Outline of
methods of testing used in production. Testing schemes and design for
testability. Faults and fault models, yield estimates, testability
measures, fault simulation, test generation methods, sequential
testing, scan design, boundary scan, built-in self test, CMOS
- ELEC 5602 [0.5 credit] (formerly 97.562) (ELG 6362)
- Microwave Semiconductor Devices and Applications
- Theory of operation for microwave diodes (varactor, p-i-n, Gunn,
IMPATT) and transistors (BJT, MESFET, HBT, HEMT). Small-signal,
large-signal, and noise models for CAD. Diode oscillators and
reflection amplifiers. Design of transistor oscillators and amplifiers.
Discussion of technology/fabrication issues and MMIC applications.
- ELEC 5604 [0.5 credit] (formerly 97.564) (ELG 6364)
- Radar Systems
- Fundamentals; range equation, minimum detectable signal, radar
cross-section, puls e repetition frequency, range ambiguities. Radar
classes: CW, FM-CW, MTI, tracking, air surveillance, SSR, PAR, MLS,
SAR, SLAR, OTH, 3D and bistatic radars. Radar subsystems; transmitters,
antennas, receivers, processors, displays, detection criteria; CFAR
receivers, noise, clutter precipitation.
- ELEC 5605 [0.5 credit] (formerly 97.565) (ELG 6365)
- Optical Fibre Communications
- Transmission characteristics of and design considerations for
multi-mode and single-mode optical fibre waveguides; materials,
structures, and device properties of laser light sources; properties
and performance of p-i-n and avalanche photodiodes; types of optical
fibre signal formats, preamplifier topologies, noise, receiver
sensitivity, transmitter design, link design.
- ELEC 5606 [0.5 credit] (formerly 97.566) (ELG 6366)
- Phase-Locked Loops and Receiver Synchronizers
- Phase-locked loops; components, fundamentals, stability, transient
response, sinusoidal operation, noise performance, tracking,
acquisition and optimization. Receiver synchronizers: carrier
synchronizers including squaring loop, Costas loop, and remodulator for
BPSK, QPSK BER performance; clock synchronizers including early-late
gate, in-phase/midphase, and delay line multiplier.
- ELEC 5607 [0.5 credit] (formerly 97.567) (ELG 6367)
- Antennas and Arrays
- Design projects are interspersed with live and video lectures.
Lectures cover definitions, wire structures, mutual coupling,
method-of-moments, array theory, photonic devices, frequency
independent structures, reflectors, horns, feeds, slotted waveguide and
microstrip arrays. Design projects include a printed dipole, yagi and
series-fed microstrip patch array.
- ELEC 5608 [0.5 credit] (formerly 97.568) (ELG 6368)
- Fourier Optics
- The theory and applications of diffractive and non-diffractive
coherent optics, with emphasis on holograms, tomography and high-speed
optical computing. Mathematical basis: generalized 2-D Fourier
transforms, transfer function of an optical system, 2-D sampling
theory, Helmholtz equation, Green's theorem, and the classical
- ELEC 5609 [0.5 credit] (formerly 97.569) (ELG 6369)
- Nonlinear Microwave Devices and Effects
- The physical basis and mathematical modeling of a variety of
microwave/millimeter-wave devices, (some of which exhibit the most
extreme nonlinear behaviour known), how they can be exploited in
practical circuits and systems, and how the resulting device/circuit
interactions can be analyzed.
- ELEC 5702 [0.5 credit] (formerly 97.572) (ELG 6372)
- Optical Electronics
- Electromagnetic wave propagation in crystals; review of geometric
optics; Gaussian beam propagation; optical fibres; dielectric
waveguides for optical integrated circuits; optical resonators; optical
properties of materials; theory of laser oscillation; specific laser
systems; electro-optic modulators; photorefractive materials and
applications; holography; optical interconnects.
- ELEC 5703 [0.5 credit] (formerly 97.573) (ELG 6373)
- Advanced Topics in Solid State Devices and IC
- Recent and advanced topics in semiconductor device physics,
modeling, and integrated circuit fabrication technology. Topic varies
from year to year according to departmental research interests.
Students may be expected to contribute lectures or seminars on selected
- ELEC 5704 [0.5 credit] (formerly 97.574) (ELG 6374)
- Advanced Topics in CAD
- Recent and advanced topics in computer-aided techniques for the
design of VLSI and telecommunications circuits. Topics will vary from
year to year according to the departmental research interests. Students
may be expected to contribute lectures or seminars on selected
- ELEC 5705 [0.5 credit] (formerly 97.575) (ELG 6375)
- Advanced Topics in VLSI
- Recent and advanced topics in the design of very large scale
integrated circuits, with emphasis on mixed analog/digital circuits for
telecommunications applications. Topic varies from year to year
according to departmental research interests. Students may be expected
to contribute lectures or seminars on selected topics.
- ELEC 5706 [0.5 credit] (formerly 97.576) (ELG 6376)
- Submicron CMOS and BiCMOS Circuits for Sampled Data
- The analog aspects of digital CMOS and BiCMOS circuit design in
submicron technologies including reliability; sampled analog circuits,
including amplifier non-ideal characteristics and switch charge
injection; CMOS/BiCMOS amplifier design considerations, leading up to
standard folded-cascode and two-stage circuits.
- ELEC 5707 [0.5 credit] (formerly 97.577) (ELG 6377)
- Microelectronic Sensors
- Fabrication and physical principles of operation of microelectronic
sensors. A large variety of sensors will be studied and the basic
fabrication methods used in their production reviewed. The devices
discussed will include optical sensors, fibre optic sensors, magnetic
sensors, temperature sensors and, briefly, chemical sensors.
- ELEC 5708 [0.5 credit] (formerly 97.578) (ELG 6378)
- ASICs in Telecommunications
- Modern ASIC technologies for Telecom will be introduced. Circuit
level building blocks for typical wireline and wireless applications
will be overviewed. Both analog and digital circuits will be
considered. A topical literature study, circuit level design exercises
and take home final exam will be required.
- ELEC 5709 [0.5 credit] (formerly 97.579) (ELG 6379)
- Advanced Topics in Electromagnetics
- Recent and advanced topics in electro-magnetics, antennas, radar
systems, microwave devices and circuits, or optoelectronics. The
subject material will vary from year to yea r according to research
interests in the department and/or expertise provided by visiting
scholars or sessional lecturers.
- ELEC 5800 [0.5 credit] (formerly 97.580) (ELG 6380)
- Theory of Semiconductor Devices
- Equilibrium and non-equilibrium conditions in a semiconductor.
Carrier transport theory. Physical theory of basic semiconductor device
structures and aspects of design: PN junctions and bipolar transistors,
field effect devices. Current transport relationships for transistors.
Charge control theory. Modeling of device mechanisms. Performance
limitations of transistors.
- ELEC 5802 [0.5 credit] (formerly 97.582) (ELG 6382)
- Surface-Controlled Semiconductor Devices
- Fundamentals of the MOS system; MOS capacitors. Long channel
behaviour: theory, limitations and performance of the SPICE level 1 and
2 models. Small geometry effects. Subthreshold operation and modeling.
Hot electron effects and reliability.
- ELEC 5803 [0.5 credit] (formerly 97.583) (ELG 6383)
- Behavioural Synthesis of ICs
- Various topics related to computer analysis and synthesis of VLSI
circuits including: logic synthesis, finite state machine synthesis,
design methodologies, design for reuse, testing, common VLSI functions,
a review of Verilog.
- Prerequisite: Some IC design knowledge such as given in ELEC
- ELEC 5804 [0.5 credit] (formerly 97.584) (ELG 6384)
- VLSI Design
- An IC design course with a strong emphasis on design methodology,
to be followed by 97.585 in the second term. The design philosophies
considered will include Full Custom design, standard cells, gate-arrays
and sea-of-gates using CMOS and BiCMOS technology. State-of-the-art
computer-aided design tools are used.
- ELEC 5805 [0.5 credit] (formerly 97.585) (ELG 6385)
- VLSI Design Project
- Using state-of-the-art CMOS and BiCMOS technologies, students will
initiate their own design of an integrated circuit using tools in the
CAD lab and submit it for fabrication where the design warrants.
- ELEC 5808 [0.5 credit] (formerly 97.588) (ELG 6388)
- Signal Processing Electronics
- CCDs, transveral filters, recursive filters, switched capacitor
filters, with particular emphasis on integration of analog signal
processing techniques in monolithic MOS ICs. Detailed op amp design in
CMOS technology. Implications of nonideal op amp behaviour in filter
performance. Basic sampled data concepts.
- ELEC 5900 [0.5 credit] (formerly 97.590)
- Engineering Project I
- A one-term course, carrying 0.5 credit, for students pursuing the
course work M.Eng. program. An engineering study, analysis and/or
design project under the supervision of a faculty member. Written and
oral reports are required. This course may be repeated for credit.
- ELEC 5901 [1.0 credit] (formerly 97.591)
- Engineering Project II
- A one-term course, carrying full-course credit, for students
pursuing the course work or co-op M.Eng. program. An engineering study,
analysis and/or design project under the supervision of a faculty
member. Written and oral reports are required. This course may be
repeated for credit.
- ELEC 5906 [0.5 credit] (formerly 97.596)
- Directed Studies
- Various possibilities exist for pursuing directed studies on topics
approved by a course supervisor, including the above listed course
topics where they are not offered on a formal basis.
- ELEC 5909 [2.0 credits] (formerly 97.599)
- M.A.Sc. Thesis
- ELEC 6909 [8.5 credits] (formerly 97.699)
- Ph.D. Thesis | physics |
https://podcasts.ox.ac.uk/23rd-ockham-lecture-twisting-neutron-wavefunction | 2024-02-28T06:04:23 | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474697.2/warc/CC-MAIN-20240228044414-20240228074414-00301.warc.gz | 0.951045 | 245 | CC-MAIN-2024-10 | webtext-fineweb__CC-MAIN-2024-10__0__140707494 | en | Given by Professor Charles Clark, Fellow of the Physical Measurement Laboratory at the National Institute of Standards and Technology, and Fellow and Adjunct Professor at the Joint Quantum Institute, University of Maryland, USA.
Wave motions in nature were known to the ancients, and their mathematical expression in physics today is essentially the same as that first provided by d'Alembert and Euler in the mid-18th century. Yet it was only in the early 1990s that physicists managed to control a basic property of light waves: their capability of swirling around their own axis of propagation. During the past decade such techniques of control have also been developed for quantum particles: atoms, electrons and neutrons. I will present a simple description of these phenomena, emphasising the most basic aspects of wave and quantum particle motion. Neutron interferometry offers a poignant perspective on wave-particle duality: at the time one neutron is detected, the next neutron has not yet even been born. Here, indeed, each neutron "then only interferes with itself." Yet, using macroscopically-machined objects, we are able to twist neutron deBroglie waves with sub-nanometer wavelengths. | physics |
https://socal.swagelok.com/en/training-and-events/techtalks/techtalks-materials-science-basics-pt2 | 2023-05-31T17:26:04 | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224646937.1/warc/CC-MAIN-20230531150014-20230531180014-00442.warc.gz | 0.923688 | 393 | CC-MAIN-2023-23 | webtext-fineweb__CC-MAIN-2023-23__0__27783901 | en | Part 2 in this TechTalk webinar series presented by Swagelok's senior material scientist, Dr. Bob Bianco, on Thursday, February 11th at 11 am continues on his introduction to materials science from Part 1 that was held on January 14th (a recording of that session will be available here soon). The properties of the materials that comprise your fluid system components have significant impact on their compatibility and performance. By understanding the basics of their materials of construction, you will be better equipped to choose components that help achieve your operational goals.
A short Q&A will available the end of this 30-minute webinar.
Below is a short summary of topics that will be addressed during the webinar. Did you miss Part 1? View the recorded session here.
Alloy Performance in Corrosive Environments
- Types of Corrosion and What to Know
- Localized Corrosion (Pitting and Crevice Corrosion)
- Stress Corrosion Cracking (SCC)
- Hydrogen Embrittlement and Interaction with Microstructure
- Microbiologically Influenced Corrosion
Material Selection for Fluid System Components
- Metals and Alloys Used by Swagelok
- Stainless Steels – 300 Series
- Super Austenitic Alloys – including 6 Moly
- High Nickel Alloys
- Duplex and Super Duplex Stainless Steels, 2507
What: Materials Science Basics, Part 2
When: Thursday, February 11th @ 11 am
Cost for attendance: FREE
Length: 30 minutes plus Q&A
This webinar is part 2 in a series that gives an introduction to the basics of materials science and explains how the materials of construction of your components affects the performance in your operations.
This event has already occurred, but you can view the recording below. View our TechTalk calendar here. | physics |
http://boggsequipment.com/product/weldotron-7112-heat-tunnel/ | 2019-01-24T00:04:04 | s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584431529.98/warc/CC-MAIN-20190123234228-20190124020228-00498.warc.gz | 0.719709 | 176 | CC-MAIN-2019-04 | webtext-fineweb__CC-MAIN-2019-04__0__120647229 | en | Weldotron 7112 Heat Tunnel
Heat Tunnel Features
Heavy duty construction.
Silicone covered conveyor rollers
Variable speed gear reducer conveyor drive.
Thermostatically controlled temperature from 0 to 450 Deg. F.
Blower with thermal overload protection.
Heavily insulated for energy efficiency and cool exterior.
Adjustable air velocity control for optimim shrink. Capable of retail quality shrink.
Adjustable heat pattern to maximize shrinkage.
Double layered curtains for added energy efficiency.
Adjustable speed live roller conveyor to optimize shrinkage.
7,000 Watt heater bank.
VOLTAGE: 230 Volts AC / 1 Phase / 34 amp
CHAMBER OPENING DIMENSIONS:
MACHINE DIMENSIONS: 36” long x 24” wide x 64” high | physics |
https://www.francescverdugo.com/research/ | 2022-08-08T11:11:10 | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570793.14/warc/CC-MAIN-20220808092125-20220808122125-00514.warc.gz | 0.910604 | 9,182 | CC-MAIN-2022-33 | webtext-fineweb__CC-MAIN-2022-33__0__213093713 | en | Figure 1: Body-fitted vs. unfitted meshes
However, embedded methods have known drawbacks. One of the most notorious, still an open question today, is the so-called small cut cell problem. Some cells of the unfitted mesh can be intersected by a very small portion of the domain, which can lead to arbitrarily large condition numbers in the underlying discrete FE operators. This is a major issue when dealing with large-scale simulations that require Krylov sub-space iterative linear solvers, because these solvers are very sensitive to the conditioning and spectral properties of the underlying operators. The result is that embedded methods are mainly used in small- to medium-size problems, which can be handled with direct linear solvers (i.e., Gaussian elimination) instead of iterative procedures. One of my main research goals at CIMNE is to address the conditioning problems of embedded methods in order to allow their usage in large-scale real-world computations.
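The following minimal sketch (my own illustration in Julia, not taken from the cited works) shows the problem in its simplest 1D form: a Poisson stiffness matrix is assembled on a background mesh whose last cell is covered only by a fraction η of the physical domain, and the condition number grows like 1/η as the cut becomes smaller, no matter how fine the mesh is:

    using LinearAlgebra

    function cut_stiffness(n, η)
        h = 1.0 / n
        K = zeros(n + 1, n + 1)
        for cell in 1:n
            frac = cell == n ? η : 1.0            # only the last cell is cut
            K[cell:cell+1, cell:cell+1] .+= (frac / h) * [1.0 -1.0; -1.0 1.0]
        end
        K[2:end, 2:end]                           # strong Dirichlet condition at x = 0
    end

    for η in (1e-1, 1e-3, 1e-6)
        println("η = ", η, "  cond(K) = ", cond(cut_stiffness(10, η)))
    end

The techniques described below remove this dependence on the cut size either at the preconditioner level or at the level of the FE formulation itself.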
This research line has several funding sources. On the one hand, I was awarded a Beatriu Pinós fellowship (a competitive post-doctoral grant issued by the Catalan autonomous government) to develop embedded FE methods for the simulation of additive manufacturing processes. With the help of this grant I was able to develop new numerical techniques and to mentor a PhD student who adopted these new tools in his additive manufacturing simulations. This work is also framed in the H2020 project "EMUSIC", which focuses on the same application. This research line on embedded methods has also been supported by the H2020 project "ExaQUte". In particular, I am a contributor in the work package "WP1 Embedded methods", where I have led the research associated with "Task 1.6: Development of robust (and scalable) linear solvers for the embedded case". Another funding source is the Spanish project "SOFAST", where I am involved in the work packages "WP1 Embedded methods" and "WP2 Adaptive mesh refinement". In the frame of these projects, I am currently supervising a PhD student at UPC, who is developing novel techniques to couple CAD models and embedded simulations.
This research line has led to several major research results, which are detailed as follows.
In FE analysis, the current way to solve linear systems at large scales is to use iterative Krylov sub-space methods in combination with parallel and scalable preconditioners. Unfortunately, existing preconditioners like algebraic multigrid (AMG) [briggs_2000] or multi-level domain decomposition [toselli_2005] are mainly designed for body-fitted meshes and cannot readily deal with the ill-conditioning associated with small cut cells.
Specific preconditioners for unfitted methods have been proposed in the literature in different contexts, but they are mainly serial, non-scalable algorithms. This has motivated me to develop novel techniques in order to enable the usage of embedded methods in much larger computations. As a first approach, I have considered BDDC preconditioners to solve such problems at large scales. This choice is motivated by the fact that BDDC methods are among the most scalable preconditioners for FE analysis. In particular, the BDDC methods implemented in my research team have shown excellent weak scaling up to 458,752 cores and 30 billion unknowns [badia_2016]. Unfortunately, conventional BDDC preconditioners lose their optimal properties when the underlying problem is discretized with an unfitted grid. Thus, available BDDC solvers cannot be directly used in combination with embedded FE methods.
In order to revert this unfavorable situation, I have introduced a novel BDDC method specifically tailored to deal with embedded grids. The proposed method is based on a modification of the coarse space of the BDDC solver, which makes it robust with respect to badly cut cells. The method has been shown to be algorithmically weakly scalable for a wide range of complex 3D geometries. That is, the number of iterations needed to reach convergence in the preconditioned linear solver is asymptotically independent of the problem size (see Figure 2). This work is published in [badia_2017] and has been presented in several conferences at both national and international level.
Figure 2: BDDC for embedded methods in a weak scaling test for a Poisson problem on different complex 3D geometries. A perfect weak scaling of the linear solver iterations is obtained.
Most of the works enabling the usage of iterative linear solvers in combination with embedded FE methods consider tailored preconditioners in order to deal with matrices affected by the small cut cell problem (e.g., the BDDC method for embedded grids previously presented). The main drawback of this approach is that one relies on highly customized solvers and, therefore, it is not possible to take advantage of well-known and established linear solvers for FE analysis available in renowned scientific computing packages such as Trilinos or PETSc. In order to address this issue, I have considered a second approach: I have developed an enhanced FE formulation that leads to linear systems whose condition number is not affected by small cuts (see Figure 3). Therefore, they can be solved with standard linear solvers such as the AMG methods available in Trilinos or PETSc.
The enhanced FE formulation is called the aggregated unfitted finite element method (AgFEM), which is based on the removal of the shape functions associated with badly cut cells by introducing carefully designed constraints. The formulation of AgFEM shares the good properties of body-fitted FE methods such as stability, condition number bounds, optimal convergence, and continuity with respect to data. In contrast to previous works like CutFEM [burman_2015], it is easy to implement and to use in different problem types, since it does not require computing high-order derivatives of the shape functions and it does not introduce extra dissipation terms in the weak form. AgFEM has already been successfully applied to the solution of fluid [badia_2018b] and heat transfer problems [badia_2018a].
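In essence, each DOF whose support is contained in badly cut cells is not solved for; it is constrained to the DOFs of a well-posed interior cell of its aggregate by extrapolating that cell's polynomial. The following generic sketch (a simplification of the method in the cited papers, with the construction of the constraint matrix omitted) shows how such constraints enter the linear system:

    using SparseArrays, LinearAlgebra

    # A: stiffness matrix on all DOFs, b: right-hand side vector.
    # C: sparse constraint matrix mapping the free (well-posed) DOFs to all DOFs;
    #    the rows associated with problematic DOFs contain the extrapolation
    #    coefficients, i.e., the shape functions of the interior cell of their
    #    aggregate evaluated at the corresponding nodes.
    function solve_aggregated(A, b, C)
        Ac = C' * A * C       # constrained operator, well conditioned
        bc = C' * b
        x_free = Ac \ bc
        C * x_free            # recover the values of all DOFs
    end

The key point is that building C only involves cell-local information (the aggregates and the shape functions of their interior cells), which is why the approach fits naturally in standard FE assembly loops and in distributed-memory implementations.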
Figure 3: Poisson problem defined on a moving domain. Note that the standard embedded formulation leads to very high condition numbers, which are very sensitive to the relative position between the computational domain and the background mesh, especially for second-order interpolations. In contrast, the condition number is much lower for the enhanced formulation (the AgFEM method) and it is nearly independent of the position of the domain.
Figure 4: Numerical solution of a (dimensionless) Stokes problem using the AgFEM method (streamlines colored by velocity magnitude and pressure). This computation is carried out without constructing an unstructured body-fitted mesh thanks to the application of the AgFEM method.
I have implemented a distributed-memory version of AgFEM in the FEMPAR software project, making use of MPI for inter-processor communication. The obtained results confirmed my expectations: when using AgFEM, the resulting systems of linear algebraic equations can be effectively solved using standard AMG preconditioners. In this case, I have considered the AMG methods available in the GAMG module of the PETSc library. With the parallel AgFEM, I was able to run weak scaling tests up to 300M degrees of freedom (DOFs) and 16K processors on the Marenostrum-IV platform. The results show that the optimal behavior of AMG solvers for body-fitted meshes is also recovered for AgFEM, i.e., a number of linear solver iterations asymptotically independent of the problem size. To the best of my knowledge, this is the first time that embedded methods have been successfully applied at such large scales. These results have been published in [verdugo_2019].
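To give an idea of how standard the resulting solver setup is, a typical PETSc command-line configuration for such runs — a plausible example, not necessarily the exact options used in [verdugo_2019] — is a conjugate gradient solver preconditioned with smoothed-aggregation AMG:

    -ksp_type cg
    -ksp_rtol 1.0e-8
    -ksp_converged_reason
    -pc_type gamg
    -pc_gamg_type agg
    -pc_gamg_agg_nsmooths 1

The fact that AgFEM works with such off-the-shelf settings, rather than with a custom preconditioner, is precisely what makes it attractive for large-scale practitioners.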
Parallel implementations and scalable solvers are essential to dramatically reduce the computation times of challenging simulations, but not sufficient in many contexts. Several problems of interest are multi-scale in nature and, thus, require different mesh resolution in different spatial locations. In this context, trying to capture the finest scales with uniform meshes is overkill and the usage of adaptive mesh refinement becomes mandatory.
Parallel mesh adaptation is a challenging operation since mesh partition and load balance need to be carefully realized to achieve performance. In large parallel computations, Cartesian grids locally adapted with forests of trees (also known as octree meshes) are among the few ways to locally adapt in a scalable way. They leverage so-called space-filling curves to reduce the mesh partition and load balance to a 1D problem that can be solved very efficiently. However, the main drawback of this approach is that the resulting computational grid contains so-called hanging nodes, which require extra work when defining conforming FE interpolations. If certain conditions are not satisfied, the constraints introduced by hanging nodes may expand beyond a single layer of ghost cells, thus leading to an incorrect parallel FE solver. Unfortunately, the current literature fails to explain under which conditions this happens for generic conforming FE discretizations (i.e., not only limited to Lagrangian elements). To address this situation, I have studied the correctness of a number of algorithms and parallel data structures needed to build conforming FE discretizations on octree meshes partitioned via space-filling curves. The proposed algorithms and data structures have been implemented within the FEMPAR scientific software library, using p4est as the forest-of-trees back-end. A strong scaling study reveals remarkable scalability up to 32.2K CPU cores and 482.2M DOFs (see reference [badia_2019a]).
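As an illustration of why space-filling curves make partitioning cheap, the sketch below (my own illustration, assuming 32-bit integer cell coordinates in 2D) computes the Morton (Z-order) key of a cell; sorting the cells by this key and splitting the sorted list into equal chunks yields spatially compact, load-balanced subdomains with a purely one-dimensional algorithm:

    function morton_index(i::UInt32, j::UInt32)
        key = UInt64(0)
        for b in 0:31
            key |= (UInt64(i >> b) & 1) << (2b)      # bits of i go to even positions
            key |= (UInt64(j >> b) & 1) << (2b + 1)  # bits of j go to odd positions
        end
        key
    end

The hanging-node constraints mentioned above are then imposed on top of this partition, which is where the correctness conditions studied in [badia_2019a] come into play.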
In a second step, I have combined the parallel AgFEM implementation previously developed with this adaptive FE framework. The result is a novel scalable distributed-memory version of AgFEM on locally-adapted Cartesian meshes. The main novelty of the method is a two-step algorithm that carefully mixes AgFEM constraints, which get rid of the small cut cell problem, and standard hanging node constraints, which ensure trace continuity. This method requires minimum parallelization effort since it can leverage standard functionality available in existing large-scale FE codes. Numerical experiments demonstrate its optimal mesh adaptation capability, robustness to cut location, and parallel efficiency on classical Poisson hp-adaptivity benchmarks. This work opens the path to functional and geometrical error-driven dynamic mesh adaptation with the AgFEM method in large-scale realistic scenarios. A research paper detailing this work is published in [badia_2021].
One of the main driving applications of my research on embedded FE methods has been the simulation of additive manufacturing processes using advanced large-scale FE methods. This is the main topic of my Beatriu Pinós fellowship and of the EU H2020 project "EMUSIC", where I have participated as a researcher. Additive manufacturing (also known as 3D printing) is an advanced manufacturing method used to build physical objects directly from 3D computer designs by the superposition of thin layers of materials such as metals, polymers or composites. 3D printers have been described on numerous occasions as a revolutionary technology with the potential to radically transform the manufacturing industry. However, the high-energy sources used to melt metal powders during the printing process induce thermo-mechanical shape distortions and residual stresses that deteriorate the geometrical quality and the requested material properties. Currently, the standard industrial practice to optimize the printing process in order to obtain quality components is through experimental testing, which is expensive and time consuming since it requires the production of hundreds of prototypes before reaching the final piece. Fortunately, predictive computer simulations can potentially be used to validate the process parameters before printing the component. However, current simulation software still has important limitations when it comes to predicting shape distortions and residual stresses in complex settings with an acceptable level of accuracy and within an acceptable time frame.
Figure 5: Snapshots of a thermal 3D printing simulation using the AgFEM method in order to effectively represent the geometry of the growing piece.
I have contributed to developing novel computational tools able to provide much faster and more accurate results. In particular, generating computational meshes for additive manufacturing simulations is especially challenging and time consuming. The shape of the 3D printed object grows in time, layer by layer, as it is produced. Capturing the growing shape requires a different mesh for each time step to represent the portion of the piece that has been produced so far. It is obvious that generating thousands of independent meshes with conventional methods is virtually impossible. Thus, embedded methods are well suited in this context. I have applied the AgFEM method to additive manufacturing simulations in collaboration with a PhD student of my research team (see Figure 5).
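A simple way to see how embedded methods avoid remeshing in this setting is to describe the printed part implicitly. In the hypothetical sketch below (illustrative geometry and process values, not the EMUSIC parameters), the domain at time t is the region where φ(x, t) ≤ 0, i.e., the intersection of the final part with the half-space below the current layer height; the background mesh never changes, only the level set does:

    # height printed so far, growing one layer at a time
    H(t; layer_thickness = 30e-6, time_per_layer = 12.0) =
        layer_thickness * floor(t / time_per_layer)

    φ_part(x) = sqrt(x[1]^2 + x[2]^2) - 5e-3     # final part: a cylinder of radius 5 mm
    φ(x, t) = max(φ_part(x), x[3] - H(t))        # intersection of the two level sets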
I plan to continue working on my current research line on large-scale FE solvers and embedded methods and its application to challenging engineering problems. I will extend these techniques to more challenging problem types and collaborate with application experts in order to simulate relevant real-world cases.
I have recently started a new research line whose goal is to develop a new generation of open-source FE codes, since the development of high-performance scientific software for the numerical approximation of PDEs is a key research area with a broad impact on advanced scientific and engineering applications. Existing partial differential equation (PDE) solvers like FE codes are usually written in compiled programming languages introduced several decades ago, mainly C/C++ and Fortran 95/03/08. These languages are chosen for performance reasons, but they are also associated with poor code productivity. In contrast, interpreted languages like Python or MATLAB allow one to write scripts and applications in many fewer lines of code, boosting code productivity, but they lead to much slower programs. A trade-off between performance and productivity is usually achieved in scientific software libraries like FEniCS by combining an efficient C/C++ computational back-end with a user-friendly high-level Python front-end. However, this approach is not satisfactory when researchers need to extend these libraries with new features, since they are forced to learn and modify a complex C/C++ back-end instead of benefiting from the productivity expected from the Python front-end. This problem is referred to as the two-language problem.
Fortunately, recent advances in compiler technology are starting to revert this situation. In the field of scientific computing, Julia is a new computer language that combines the performance of compiled languages with the productivity of interpreted ones by using recent advances in compiler technology like type inference and just-in-time compilation. As a result, the same language can be used both for the back-end and the front-end, thus eliminating the two-language problem. Based on this novel paradigm, I have started the Gridap.jl project [badia_2020], a new-generation, open-source FE framework completely written in the Julia programming language. Gridap.jl allows users to write FE applications in a notation almost one-to-one with the mathematical notation used to define the PDE weak form. For instance, see in Figure 6 how a Poisson equation on a complex 3D domain can be solved in Gridap.jl with a few lines of code. To the best of my knowledge, only libraries like FEniCS are able to achieve such compact user interfaces, but they are based on sophisticated compilers of variational forms, which generate, compile and link a specialized C++ back-end for the problem at hand. One of the limitations of this approach is that form compilers are rigid systems, not designed to be extended by average users. In contrast, Gridap.jl is based on a much simpler approach: it leverages the Julia just-in-time compiler to generate efficient problem-specific code without the need to maintain a custom compiler of variational forms.
Figure 6: User code to solve a 3D Poisson equation in Gridap.jl and view of the corresponding numerical solution. Note that the bilinear and linear forms of the problem,
l, are specified with a syntax closely related to their mathematical notation thanks to an advanced software design based on the Julia computer language.
Gridap.jl appears as one of the two PDE codes selected to implement new novel FE methods in a "Discovery Project" of the Australian Research Council (ref: DP210103092, $475000) co-lead by my collaborator at Monash University Prof. Santiago Badia. In the framework of this project, new collaborators are expected to joint and contribute in the expansion of the project. In addition, Gridap.jl has been accepted as a NumFOCUS affiliated project. The mission of NumFOCUS is to promote open practices in research, data, and scientific computing by serving as a fiscal sponsor for open source projects and organizing community-driven educational programs. Being an affiliated project, Gridap.jl is eligible to participate in NumFOCUS funding schemes and other events like the participation in the Google Summer of Code under the NumFOCUS umbrella. In 2021, we participated with the projects "Visualizing PDE approximations in Julia with Gridap.jl and Makie.jl" and "A fast finite element interpolator in Gridap.jl".
The first immediate result in this research line is the initial release of the project, whose source code is freely available at github under a MIT software license. In addition, a related article has been published in the "Journal of Open Scientific Software" (see reference [badia_2020]). Gridap.jl is already a fully functional general-purpose FE library ready to solve linear, non-linear, steady-state, and time-dependent PDEs. It already provides different types of conforming FE methods like nodal Lagrangian elements for grad-conforming approximations (e.g., for linear elasticity or thermal analysis) or non-nodal interpolations like Raviart-Thomas for div-conforming approximations (e.g., for flow in porous media). Discontinuous Galerkin schemes are also supported. Gridap.jl is currently used by several research groups worldwide in institutions such as Monash University, MIT, TU Delft, UPC, and CIMNE. A set of introductory tutorials to the library is available and a Gitter chat to ask questions and interact with the Gridap.jl community.
The Gridap.jl project is designed to be an extensible package ecosystem with several plugins that extend the functionality of the core repository. In particular, GridapEmbedded.jl is an extension that implements embedded FE methods. At this moment, it provides embedded methods based on classical ghost-penalty or methods based on AgFEM, being the only known open source package to provide both types of methods. GridapEmmbedded.jl will be the basis to implement the new space-time methods since it is implemented in a dimension-implemented fashion allowing to consider 4D meshes in the future.
At this moment, Gridap.jl provides mainly serial algorithms and needs to be extended to parallel computations in order to cope with large-scale realistic simulations. To this end, I have started the plugin GridapDistributed.jl, whose goal is to provide the parallel functionality needed in parallel distributed-memory computations. I am designing a set of extensible distributed data structures that allow one to implement parallel algorithms in a generic way independent on the parallel computing environment (e.g., MPI or the Julia build-in distributed mode) used to run the computations. This is specially handy for developing new code since it allows one to debug parallel algorithms by running an emulated distributed computation in a standard (sequential) Julia session and using standard serial debuggers. Once the code works with the emulated parallel mode, it can be automatically deployed in a supercomputer via an MPI back-end implemented for this purpose. Thus, the package provides a very convenient way of developing parallel algorithms, while achieving the performance of MPI-based applications. This will help to develop new parallel FE methods in a very effective way. In particular, PhD students not necessarily experts in the MPI library can benefit from this framework to develop their parallel algorithms. At this moment, I have been able to solve the Poisson equation with up to 1B cells on 16K processors with GridapDistributed.jl on Gadi (an Australian super-computer) and I expect to further mature and extend the capabilities of this library.
Several researchers have already realized the potential benefits of using the Gridap.jl framework for their applications and have asked to me for collaboration. My plan is to keep growing a network of scientific collaborators that use the library in several contexts, in order to increase my changes of publishing more research papers and to find relevant topics for preparing project proposals.
In this post-doc stay, I have developed parallel AMG solvers for the solution of large systems of linear algebraic equations associated with the FE discretization of complex multi-physics problems. The main goal of this research was to provide a general framework able to be applied to several problem types. In particular, this work was applied to fluid-structure interaction (FSI), thermo-mechanical coupling, and human respiratory mechanics among others.
The numerical simulation of coupled problems via discretization techniques such as the FE method requires special solution strategies for solving the coupling between the underlying physical fields. The so-called partitioned methods [felippa_2001] are often the preferred choice in industrial applications because they allow to reuse existing (black-box) solvers for the individual fields. However, this approach is unstable for many challenging strongly coupled problems [forster_2007] and, therefore, another family of methods called monolithic are required in a variety of complex settings. It has also been shown that monolithic schemes are often preferable in terms of efficiency as compared to partitioned ones. For that reason, this approach has been the preferred option for solving many strongly coupled problems in the literature. However, the price to be paid for the extra robustness of monolithic methods is a more challenging system of linear equations. The system matrix is a big sparse matrix with a special block structure representing each of the underlying physical fields and frequently has a very bad condition number. In real-world applications, iterative methods such as GMRES are used to attack this linear system, which requires efficient preconditioners for addressing the bad conditioning of the problem. Selecting a suitable preconditioner is the key point in the solution process.
The standard approach to design preconditioners for coupled problems is to use approximated block inverses in order to untangle the coupling between the physical fields, and then, to use efficient AMG solvers for the resulting uncoupled problems. The drawback of these methods is that the coupling is resolved only at the finest grid level. Thus, using efficient AMG solvers for the underlying problems does not necessarily imply a good treatment of the coupling and a fast global solution. This drawback is overcome by Gee et al. [gee_2011], who propose an enhanced block preconditioner (referred to as monolithic AMG) for FSI applications, which enforces the coupling at all grid levels, and often results in a better solver performance. Monolithic AMG preconditioners are a promising approach for other coupled problems as well but, this strategy was only applied to FSI. In order to allow the usage of this enhanced AMG methods for a wide range of problem types, I have developed an general computational framework based on monolithic AMG techniques. For comparison purposes, conventional AMG solvers were also included. The method was implemented in the high performance multi-physics code BACI and it was published in a Q1 journal (see reference [verdugo_2016a]). This framework has been used at TUM since then, and it has been considered by several papers in top international journals (see, e.g., [kremheller_2018][fang_2018]), with a wide range of applications including simulation of vascular tumor growth and simulations of lithium ion cells.
In the framework of the German research project "Fundamental Technologies for the Development of Future Space-Transport-System Components under High Thermal and Mechanical Loads", I have applied the general linear solver framework to the simulation of the thermo-mechanical coupling in rocket nozzles. I exemplary show here the performance of the developed preconditioners. The nozzle geometry (see Figures 7 and 8) and other problem parameters are inspired by the "Vulcain" rocket engine installed in the Ariane space launcher, see [verdugo_2016a] for further details.
Figure 7: Rocket nozzle example: Full geometry of the nozzle (left), computational domain including one cooling channel (center), and generic cross section of the computational domain with the applied thermo-mechanical loads (right)
Figure 8: Rocket nozzle example: Deformation of the nozzle (left), temperature distribution (center) and original and deformed longitudinal section (right). The results are given at time s and the deformation is magnified 20 times.
After the usual space and time discretization, the simulation results into a non-linear problem to be solved at each time step, which is handled with a monolithic Newton scheme. At each Newton iteration, the associated monolithic linear system of equations is solved with a GMRES method preconditioned with four different solvers available in our computational framework. The first method, namely BGS(AMG), considers an outer block Gauss-Seidel (BGS) scheme for uncoupling the fields and then independent AMG solvers are used to handle the thermal and mechanical problems separately. The second method, namely SIMPLE(AMG), is a similar method that considers an idea based on the SIMPLE method [elman_2008] for uncoupling the fields instead of BGS. The third method, namely AMG(BGS), is an extension of the monolithic AMG solver to a generic coupled problem. Finally, the fourth method, namely BGS(DD), is an outer BGS for uncoupling the fields and then standard single-level additive Schwartz preconditioners are used to attack the uncoupled problems. The fourth method is one of the most simple parallel preconditioners that can be considered in such a problem, and therefore, it is considered here as a reference. All this solvers could be built easily by means of parameter lists using the general preconditioning framework.
Figure 9: Rocket nozzle example: Results of the weak scalability study. The CPU times include the setup costs of the preconditioner.
The performance of the preconditioners is studied with a weak scalability test, see Figure 9. In this experiment, the ratio between processors and DOFs is kept constant with a value about 12500 DOF/processor. The multi-grid methods AMG(BGS), BGS(AMG) and SIMPLE(AMG) have a very good scalability as the iteration count and CPU time of the linear solver increase only mildly with the problem size. On the other hand, the single-level method BGS(DD) is not scalable since the solver time strongly grows as the problem size increases. The performance of the single-level method BGS(DD) is particularly poor in this complex example which demonstrates that multi-grid preconditioners such as AMG(BGS), BGS(AMG) and SIMPLE(AMG) are required in this challenging setting. In conclusion, the AMG solvers implemented in our generic computational framework were able to solve this complex example efficiently.
Another application of the general linear solver framework has been the simulation of human respiratory mechanics. In recent years, advances have been made towards more protective ventilation strategies [tobin_2001] trying to minimize negative side effects of the treatment, but it is still not fully clear what is the best ventilation strategy for a specific patient. Advanced modeling and simulation offer the possibility of predicting the mechanical response of the respiratory system under different ventilation scenarios, and give an opportunity to design better patient-tailored treatments. However, simulating the human lung poses several computational challenges including large and multiply coupled systems of linear equations. Thus, the underlying motivation of this work is to enable the efficient simulation of virtual lung models on high-performance computing platforms in order to assess mechanical ventilation strategies and contributing to design more protective patient-specific ventilation treatments.
Figure 10: Patient-specific lung example: Numerical solution of the lung model consisting of the structural displacement (left), fluid pressure (center) and fluid velocity (right) at time s.
The system of linear equations to be solved in this application is essentially the monolithic system arising in FSI extended by additional algebraic constraints. The introduction of these constraints leads to a saddle point problem that cannot be solved with usual FSI preconditioners available in the literature. The key ingredient in this work is to use the idea of the semi-implicit method for pressure-linked equations (SIMPLE) for getting rid of the saddle point structure, resulting in a standard FSI problem that can be treated with available techniques. Even though the lung model is a complex multi-physics problem (see Figure 10), the numerical examples show that the resulting preconditioners approaches the optimal performance (see Figure 11). Moreover, the preconditioners are robust enough to deal with physiologically relevant simulations involving complex real-world patient-specific lung geometries. The same approach is applicable to other challenging biomedical applications where coupling between flow and tissue deformation is modeled with additional algebraic constraints. This work has lead to 1 paper in a Q1 journal (see reference [verdugo_2016b]). In the framework of this research, I have been junior co-PI in the Bavarian regional project "Efficient solvers for coupled problems in respiratory mechanics".
Figure 11: Patient-specific lung example: Results of the strong scalability test. The figure shows the dependence of the linear solver iterations with respect to the number of processors (left), and the parallel speed up (right).
My PhD thesis, entitled "Error assessment and adaptivity for structural transient dynamics", is devoted to the development of new automatic adaptive mesh refinement tools and goal-oriented error assessment techniques in the context of structural dynamics. Any FE based simulation has an intrinsic amount of error with respect to the exact solution of the selected physical model. Being aware of this error is of notorious importance if sensitive engineering decisions are taken on the basis of the numerical results. Assessing the error in elliptic problems (as structural statics) was a well known problem at the time of this PhD thesis. However, assessing the error in other more challenging problem types such as structural transient dynamics was an open research topic. In this context, most of the works provided a posteriori error estimates of the energy norm of the discretization error. The challenge was to develop so-called goal-oriented error estimates for this application. That is, a posteriori approximations of the error in a given quantity of interest of the underlying physical problem. This goal-oriented error estimation is specially well suited for real-world industrial applications since it provides information of the quality of the computed solution in the targeted quantities instead of a global function norm.
The main contributions of the PhD thesis are 1) the introduction of a novel technique to compute bounds of the (unknown) discretization error in a given quantity of interest (see reference [verdugo_2012]), 2) a goal-oriented space-time adaptive mesh refinement method specifically designed for efficiency in transient problems (see references [casadei_2013][verdugo_2014a]), and 3) a novel paradigm for error estimation in transient problems based on a new type of quantities of interest (see reference [verdugo_2013]). I published a review paper with my main thesis results (see reference [verdugo_2014b]) in the "Archives of Computational Methods in Engineering", which is the 1st ranked journal in the category "mathematics, interdisciplinary applications" of the Journal Citation Reports (JCR) in the year of publication. In the following, I briefly introduce items 2) and 3), which are the major thesis novelties.
Goal-oriented error estimation and adaptivity is particularly challenging in time-dependent problems. Assessing the error in the quantity of interest requires introducing an auxiliary problem, referred to as the adjoint or dual problem [becker_2001]. The main difficulty is associated with the fact that the adjoint solution has to be solved backwards in time. This means that, in order to assess the error by weighting the residual of the direct (or forward) solution with the adjoint (or backward) solution, at least one of the two solutions have to be stored in the full time-space domain. In non-linear problems, the situation is even worse because the full forward problem has to be computed and stored to define the backwards adjoint. This means that assessing the error of each iteration of the forward problem potentially requires computing a different backward adjoint. The conventional approach to alleviate the storage requirements is to use checkpointing, where the forward solution is stored only at a small number of pre-selected time points. These stored snapshots are used as initial conditions on each sub-interval for recomputing the forward solution during the backwards adjoint computation. This approach mitigates the storage requirements, but for some applications, recomputing the forward solution repeatedly can be still too expensive.
Figure 12: Simulation of elastic waves propagating in a perforated plate. The underlying space-time discretization has been automatically adapted using the goal-oriented error estimators in [verdugo_2013]. A smaller number of mesh elements are required with the adapted meshes than with uniform discretizations in order to achieve the same level of accuracy.
In the thesis, I have proposed an alternative approach based on a well known technique for structural dynamics: modal analysis. The modal-based strategy is particularly well suited for computing the adjoint problem associated with some particular quantities of interest. Following this approach, the adjoint solution is computed and stored for each vibration mode instead of for each time step, which reduces the storage requirements enormously. Using this novel approach, one can compute efficiently local error indicators that for each element and time step in the chosen discretization. This information can be used to automatically increase the resolution of the space and time discretizations only in the regions, where it is actually needed, leading to efficient computations (see Figure 12).
Virtually all the literature on goal-oriented a posteriori error assessment is based on scalar quantities of interest. While this approach is well suited for steady-state problems, a single scalar value does not give enough pieces of information about a complex space-time solution. For this reason, the preferred quantities of interest in time-dependent problems are typically the history (or evolution) of the space average of the solution in a sub-region of the domain, which are referred to as time-line dependent quantities of interest [verdugo_2013]. At the time of this PhD thesis, there was no error estimation method for this kind of quantities in the literature. One of the main thesis contributions is a new paradigm for a posteriori error estimation using this new type of quantities of interest.
As already announced, in conventional goal-oriented error assessment in a given scalar quality, one needs to introduce and solve an adjoint problem. Dealing with time-line dependent quantities is much more challenging. A time-line dependent quantity can be understood as a family of infinite scalar quantities (one for each time point in the selected computation time interval). Thus, one needs the solution of a family of infinite adjoint problems. This is computationally un-affordable in practice. However, for a number of meaningful cases, I have mathematically proven in reference [verdugo_2013] that all this adjoint problems are the equivalent after a translation of the time variable. This fundamental result is the crucial observation that allows one to assess the error in the time-line dependent quantities with an affordable cost. In addition, using a modal-based approximation of the adjoint problem allows one to accurately estimate the error in this new type of quantities in an efficient way (see Figure 13).
Figure 13: Error estimation of a time-line dependent quantity of interest (in this case the time-evolution of the averaged displacement at region ). Note that by approximating the family of adjoint problems associated with this quantity using only 60 vibration modes, an accurate estimation of the numerical error is achieved.
[badia_2017] S. Badia and F. Verdugo. Robust and scalable domain decomposition solvers for unfitted finite element methods. Journal of Computational and Applied Mathematics, 344: 740–759, 2018. DOI: 10.1016/j.cam.2017.09.034.
[badia_2018a] S. Badia, F. Verdugo, and A.F. Martín. The aggregated unfitted finite element method for elliptic problems. Computer Methods in Applied Mechanics and Engineering, 336: 533–553, 2018. DOI: 10.1016/j.cma.2018.03.022.
[badia_2018b] S. Badia, A.F. Martín, and F. Verdugo. Mixed Aggregated Finite Element Methods for the Unfitted Discretization of the Stokes Problem. SIAM Journal on Scientific Computing, 40(6): B1541–B1576, 2018. DOI: 10.1137/18M1185624.
[badia_2019a] S. Badia, A.F. Martín, E. Neiva, and F. Verdugo. A generic finite element framework on parallel tree-based adaptive meshes. SIAM Journal on Scientific Computing, 42(6): C436–C468, 2020. DOI: 10.1137/20M1328786.
[badia_2021] S. Badia, A.F. Martín, E. Neiva, and F. Verdugo. The aggregated unfitted finite element method on parallel tree-based adaptive meshes. SIAM Journal on Scientific Computing, 43: C203–C234, 2021. DOI: 10.1137/20M1344512.
[burman_2015] E. Burman, S. Claus, P. Hansbo, M.G. Larson, and A. Massing. CutFEM: Discretizing Geometry and Partial Differential Equations. International Journal for Numerical Methods in Engineering, 104(7): 472–501, 2015. DOI: 10.1002/nme.4823.
[casadei_2013] F. Casadei, P. Díez, and F. Verdugo. An algorithm for mesh refinement and un-refinement in fast transient dynamics. International Journal of Computational Methods, 10(04): 1350018, 2013. DOI: 10.1142/S0219876213500187.
[elman_2008] H. Elman, V.E. Howle, J. Shadid, R. Shuttleworth, and R. Tuminaro. A taxonomy and comparison of parallel block multi-level preconditioners for the incompressible Navier-Stokes equations. Journal of Computational Physics, 227(3): 1790–1808, 2008. DOI: 10.1016/j.jcp.2007.09.026.
[fang_2018] R. Fang, P. Farah, A. Popp, and W.A. Wall. A monolithic, mortar-based interface coupling and solution scheme for finite element simulations of lithium-ion cells. International Journal for Numerical Methods in Engineering, 114(13): 1411–1437, 2018. DOI: 10.1002/nme.5792.
[felippa_2001] C.A. Felippa, K.C. Park, and C. Farhat. Partitioned analysis of coupled mechanical systems. Computer Methods in Applied Mechanics and Engineering, 190(24-25): 3247–3270, 2001. DOI: 10.1016/S0045-7825(00)00391-1.
[forster_2007] C. Förster, W.A. Wall, and E. Ramm. Artificial added mass instabilities in sequential staggered coupling of nonlinear structures and incompressible viscous flows. Computer Methods in Applied Mechanics and Engineering, 196(7): 1278–1293, 2007. DOI: 10.1016/j.cma.2006.09.002.
[gee_2011] M.W. Gee, U. Küttler, and W.A. Wall. Truly monolithic algebraic multigrid for fluid-structure interaction. International Journal for Numerical Methods in Engineering, 85(8): 987–1016, 2011. DOI: 10.1002/nme.3001.
[kremheller_2018] J. Kremheller, A.T. Vuong, L. Yoshihara, W.A. Wall, and B.A. Schrefler. A monolithic multiphase porous medium framework for (a-)vascular tumor growth. Computer Methods in Applied Mechanics and Engineering, 340: 657–683, 2018. DOI: 10.1016/j.cma.2018.06.009.
[toselli_2005] A. Toselli and O. B. Widlund. Domain Decomposition Methods — Algorithms and Theory, volume 34 of Springer Series in Computational Mathematics. Springer Berlin Heidelberg, Berlin, Heidelberg, 2005. DOI: 10.1007/b137868.
[verdugo_2012] F.~Verdugo and P. Díez. Computable bounds of functional outputs in linear visco-elastodynamics. Computer Methods in Applied Mechanics and Engineering, 245–246: 313–330, 2012. DOI: 10.1016/j.cma.2012.06.016.
[verdugo_2013] F. Verdugo, N. Parés, and P. Díez. Modal-based goal-oriented error assessment for timeline-dependent quantities in transient dynamics. Int. J. Numer. Meth. Engng., 95(8): 685–720, 2013. DOI: 10.1002/nme.4538.
[verdugo_2014a] F. Verdugo, N. Parés, and P. Díez. Goal-oriented space-time adaptivity for transient dynamics using a modal description of the adjoint solution. Comput. Mech., 54(2): 331–352, 2014. DOI: 10.1007/s00466-014-0988-2.
[verdugo_2014b] F. Verdugo, N. Parés, and P. Díez. Error Assessment in Structural Transient Dynamics. Archives of Computational Methods in Engineering, 21(1): 59–90, 2014. DOI: 10.1007/s11831-014-9096-x.
[verdugo_2016a] F. Verdugo and W.A. Wall. Unified computational framework for the efficient solution of n-field coupled problems with monolithic schemes. Computer Methods in Applied Mechanics and Engineering, 310: 335–366, 2016. DOI: 10.1016/j.cma.2016.07.016.
[verdugo_2016b] F. Verdugo, C.J. Roth, L. Yoshihara, and W.A. Wall. Efficient solvers for coupled models in respiratory mechanics. International Journal for Numerical Methods in Biomedical Engineering, 2016. DOI: 10.1002/cnm.2795.
[verdugo_2019] F. Verdugo, A.F. Martín, and S. Badia. Distributed-memory parallelization of the aggregated unfitted finite element method. Computer Methods in Applied Mechanics and Engineering, 357, 2019. DOI: 10.1016/j.cma.2019.112583. | physics |
http://astronomiebm.de/en/16-17-07-2019-partielle-mondfinsternis/ | 2019-07-16T15:24:52 | s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524568.14/warc/CC-MAIN-20190716135748-20190716161748-00336.warc.gz | 0.932811 | 261 | CC-MAIN-2019-30 | webtext-fineweb__CC-MAIN-2019-30__0__10380306 | en | General information about the partial lunar eclipse on the 16th/17th July 2019
After the very beautiful total lunar eclipse on 21st January 2019 we will have another lunar eclipse now in year 2019, but this time a partial lunar eclipse. The conditions for the observation shouldn’t be difficult if the weather is good because the maximum of eclipse with about 65.8 % would be also easily visible even with less height. The partial eclipse will start while the moon is just 2.5° above the horizon. At the end of the partial eclipse the moon will have an height of 15.8° above the horizon. At the point of the maximal eclipse the moon will have an height of 11°.
Tipp: With the beginning of the eclipse and while the eclipse continues at not very much height there could be nice pictures possibe because of its low height and the mood of dawn. Same as with the last eclipse, it’s maybe not a bad idea to try to make some pictures with the moon and trees, buildings or other objects.
Additional information: The partial lunar eclipse will have a duration of about 3 hours. The planet Saturn will be about 7° away from the moon but because of this quiet far distance, it shouldn’t be very interesting as motive. | physics |
https://www.abcsonido.es/articulo/400/accustic-arts-tubepreamp-ii-mkii.html | 2024-04-22T19:14:29 | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818337.62/warc/CC-MAIN-20240422175900-20240422205900-00160.warc.gz | 0.898311 | 3,064 | CC-MAIN-2024-18 | webtext-fineweb__CC-MAIN-2024-18__0__178883013 | en | Accustic Arts Tube-Preamp II MKII
The new TUBE PREAMP II – MK2
The TUBE PREAMP II – MK2 is the revised version of our hybrid tube preamplifier which enjoys success all around the world. The revised version includes a number of features requested by customers.
The changes of the MK2 version are as follows:
1. Four analogue preamplifier outputs
The original TUBE PREAMP II was equipped with two fully symmetrical outputs and one unsymmetrical output. We have integrated a further unsymmetrical output in the new TUBE PREAMP II – MK2 to take into account the wish for bi-amping configurations in the case of an unsymmetrical connection. In addition, the outputs are available with AC‑coupling and DC-coupling.
2. Option of AC-coupling or DC-coupling for the preamplifier outputs
In recent years our customers have continually made requests with regard to AC-coupling and DC-coupling and a number of customers wished to have their devices either completely AC-coupled or DC-coupled. Some customers also asked for both possibilities. As already mentioned above, the new device integrates both variations so that each customer now has the total freedom to make his own choice.
The question of which variation sounds "better” mainly depends on the design of the connected power amplifier. There are some power amplifiers which are better connected via AC‑coupled preamplifier outputs and other power amplifiers which achieve their full sound spectrum via DC-coupled preamplifier outputs. The question is ultimately also one of personal taste. We do not intend to enter into such "philosophical” discussions, but would prefer to let the listening impressions of the customer make the decision.
a. What does AC-coupling of the preamplifier outputs mean?
The AC-coupling of a preamplifier output is made using a capacitor and a resistor. The selection and the size of the capacitor are important factors for a perfect result. The TUBE PREAMP II – MK2 is equipped with high quality, very rare 5% MKH capacitors, i.e. no wound film capacitors. This ensures the best possible low inductance.
b. Advantages of AC-coupling from our point of view
1. Avoids operating point adjustments caused by undesired, but often unavoidable DC components in the signal.
2. Reduction of high frequencies which can be efficiently filtered out by the integrated capacitor.
3. Greater security (protection of DC components) in the case of defects, in particular with the connection to third party equipment or connection of tube power amplifiers.
c. What sounds better?
There is no general answer to this question. With an unbalanced device connection, i.e. via the outputs OUT 3 and OUT 4, the result with AC-coupling and the used types of capacitors is usually a more delicate, spatial, slightly softer and more musical acoustic pattern. The DC-coupling, on the other hand, sounds slightly more "direct”, "more overt” and perhaps slightly "more analytical”.
For a symmetrical device connection there is no general answer, whereby the result depends largely on the connected power amplifier.
Case 1: Power amplifier connection via balanced single output (= "usual case”, i.e. NO bi-amping).
1.) Balanced connection of the TUBE PREAMP II – MK2 to ACCUSTIC ARTS AMP II: use OUT 1 (AC)
2.) Balanced connection of the TUBE PREAMP II – MK2 to a third-party power amplifier: use OUT 1 (AC).
Case 2: Power amplifier connection via unbalanced single output (= "usual case”, i.e. NO bi-amping).
1.) Unbalanced connection of the TUBE PREAMP II – MK2 to ACCUSTIC ARTS AMP II: use OUT 4 (DC)
2.) Unbalanced connection of the TUBE PREAMP II – MK2 to a third-party power amplifier: use OUT 3 (AC).
Case 3: Connection of power amplifier via two outputs (bi-amping).
If two power amplifiers are connected to the TUBE PREAMP II – MK2 (bi-amping), the recommendation is to control the bass range of the connected loudspeakers via the DC‑coupled outputs OUT 2 and OUT 4 and to connect the mid and high range via the AC‑coupled outputs OUT 1 and OUT 3.
3. Integrated headphone amplifier
High quality headphones are becoming more popular and as a result there have been a number of customer requests for an integrated headphone output. The headphone output is protected against dust under the chrome-plated brass cover cap labelled "PHONES” and is switched via the "PHONES ON” button on the left. We believe this switching ability to be absolutely necessary for the sound quality as the music signal is then only sent to the used connection. All possible interference factors are thus removed.
4. Uncontrolled output (FIXED OUT) for connection of an external headphone amplifier
A number of audiophile customers already possess a high quality headphone amplifier and wish to continue using it. To enable the best connection to the TUBE PREAMP II - MK2, we have integrated a switchable, uncontrolled output designed especially for this purpose. This output is also switched via the "PHONES ON” button.
5. Analogue input "Surround bypass”
The analogue input SURROUND from ACCUSTIC ARTS® is a configuration possibility which allows the "loop through” of a signal of a surround processor through the TUBE PREAMP II – MK2 without further amplification. This means the volume of this signal is controlled by the external home cinema processor or amplifier and not from this preamplifier. This enables connection of a high-end audio system with a home cinema system without quality loss.
6. Phase switch for 0° and 180°
Some customers and importers have requested this phase reversal. We have responded with the integration of such a switch.
The general concept of the TUBE PREAMP II – MK2
Clever: The circuitry
The ACCUSTIC ARTS® TUBE PREAMP II - MK2 is a preamplifier with an exceptional and uncompromising design along the lines of the so-called tube-hybrid concept. This concept, also an integral part of our TUBE-DAC II, combines the advantages of transistor technology with the advantages of the tube principle. Tubes are excellent voltage amplifiers, but can only supply a limited amount of current. As a result, in the TUBE PREAMP II - MK2 we place the tubes exactly there where these clear advantages can influence the acoustic pattern, i.e. for the voltage amplification. In places where current has to be supplied, e.g. for impedance conversion, we use the proven premium IC OPA627® from Burr Brown / Texas Instruments. This combination allows us to achieve very low-resistance outputs which are also characterised by a high current capability. We use solid state technology and tubes to take advantage of their physical characteristics to realise an exceptional, analogue sound experience with an extremely low harmonic distortion and excellent harmonic distortion spectrum which sets standards.
As opposed to most tube preamplifiers, the TUBE PREAMP II - MK2 is in fact fully balanced, i.e. with 4 completely separate amplification stages from the signal input to the signal output.
The 4 amplification stages are divided into one inverting and one non-inverting signal path per channel. Each of these amplification stages contains a high precision tube manufactured according to military specifications. This principle enables perfect channel separation.
The loudness is adjusted via a high-end selective, high precision 4 channel potentiometer. It is natural that the TUBE PREAMP II - MK2 functions completely according to the proven Class A principle.
Elaborate: The power supply
An ideal power supply is a precondition for a high-end preamplifier. The requirements for the power supply with a tube-hybrid preamplifier are much higher than for a purely transistor-based device. This is because the tubes require different low voltages and also a high voltage of approx. 300 V for the anodes. This high voltage has to be precisely controlled and absolutely free of interference so that the music signal can be perfectly amplified in the tubes. For this reason the TUBE PREAMP II - MK2 is equipped with a number of separately functioning power supply units and 2 separate high-end transformers, with one transformer exclusively reserved for the voltage required for the tubes. Both the toroidal core transformers used in the TUBE PREAMP II - MK2 are of an exceptionally high quality and have the best possible core material from Switzerland in order to prevent any negative interference inside the unit. In order to ensure that the power supply units work perfectly, the filter capacity for all voltage circuits was selected generously (e.g. 20,000 µF alone for the voltage supply to the semiconductors).
Perfect: The tubes
There is no dispute that the quality of the tubes is decisive for the quality of a unit based on tube technology. Tubes are sensitive components and thus have to be carefully selected and tested. We manually select the tubes and make pairs according to strict testing and measuring criteria to ensure that only perfect tubes are used in the TUBE PREAMP II - MK2. Before the tubes are fitted to a TUBE PREAMP II, they have already passed a two-stage inspection process: one inspection carried out by our supplier and a second inspection made by our own specialists.
Before and after the first continuous test of 100 hours, all functions and parameters of the whole unit are inspected and recorded in a protocol. The values are compared and if the deviations are within a defined low tolerance range the unit is subjected to a further second continuous test of 100 hours. After this test further measurements are made and here the parameters must fit perfectly with the output values. In total, a TUBE PREAMP II - MK2 is measured and tested three times and the tubes are even tested four times. The TUBE PREAMP II - MK2 only leaves the premises of ACCUSTIC ARTS® after the requirements have been 100% met. This high level of scrutiny guarantees an extremely low failure rate of the tubes and ensures many years of uninterrupted musical pleasure.
The type of tube used is a so-called dual triode tube type E83CC and belongs to the tube group 12AX7. The tubes meet the high requirements of the military and come from the current production of a European manufacturer. The sound can be described as: pleasantly warm with a very balanced sound pattern, a sophisticated bass range, clear high frequencies, good dynamics and low harmonic distortion.
We prefer tubes from the current production and do not use NOS tubes, as these are not available in sufficient quantity and not available in consistent quality.
Quality: The rest of the components
The TUBE PREAMP II - MK2 guarantees absolute perfection through the exclusive use of selected components with low tolerances and highest quality grade. A number of the individual components are re-measured by hand and sorted into matching pairs.
The housing is also uncompromising: solid, carefully crafted aluminium plates combined with chrome-plated rotary parts in brass allow for a high quality look and surface feel and excellent stability. Stability is important to ensure that the tubes can work undisturbed and are not compromised by any vibration present in the housing.
As with all ACCUSTIC ARTS® products, the TUBE PREAMP II - MK2 is also easy to operate. This is a welcome change for many music lovers today when most electronic devices are overloaded with partly unnecessary functions.
The basic functions of the TUBE PREAMP II - MK2 are operated by two chrome-plated rotary controls. Both controls are more or less built to last for ever. The rotary controls, for example, are solid and are equipped with gold-plated contacts, which are corrosion-resistant and also enable many thousands of switching cycles.
Phenomenal: The sound experience
The result of top quality individual components, detailed development work, innumerable sound tests, careful production and involved measurement tests is a sound experience of the highest perfection.
In combination with the other reference series products of ACCUSTIC ARTS®, music is transformed into an incomparable sound experience "Handmade in Germany”.
TUBE PREAMP II – MK2 highlights
Audiophile reference preamplifier with a so called "tube hybrid” concept and 4 military tubes (2 tubes per channel)
Fully balanced circuit design from input to output
Advantages of this "tube hybrid” technology:
- very high impedance
- very high bandwidth
- very low distortion factors and a "good-natured” distortion spectrum
- "analog” and very precise sound performance
- 4 separated amplification paths, which are not influencing each other
Easy change of tubes without any adjustments just "plug and play”
Professional Class A output stage using technology derived from studio engineering
All used components are of outstanding quality (e.g. Burr Brown® OPA 627) and additionally selected; all relays have high quality gold-plated contacts
4 high precision military tubes; 4-times selected
4-channel volume potentiometer for best crosstalk
3 x fully balanced high level inputs (XLR) and 2 x unbalanced high level inputs (RCA)
1 x unbalanced input (RCA) configured as "SURROUND-BYPASS”
2 x fully balanced outputs (XLR) – 1 x AC coupled, 1 x DC coupled
2 x unbalanced outputs (RCA) – 1 x AC coupled, 1 x DC coupled
1 x headphone output, switchable (1/4" stereo female jack)
1 x unregulated, switchable output for the connection of an external headphone amplifier (RCA)
Phase switch for 0° and 180°
2 magnetically shielded, encapsulated 75 VA toroidal core transformer ("Made in Germany”) of premium quality for high output reserves
Front panel, cover and remote control are made of massive and solid aluminium; turning knobs made of massive and chromed brassACCUSTIC ARTS® TUBE PREAMP II – MK 2 is "Handmade in Germany” | physics |
https://hyundai-nas.ru/travel/hcpl-7850.php | 2021-08-04T05:07:13 | s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154796.71/warc/CC-MAIN-20210804045226-20210804075226-00602.warc.gz | 0.888725 | 2,247 | CC-MAIN-2021-31 | webtext-fineweb__CC-MAIN-2021-31__0__254321674 | en | Hermetically Sealed Analog Isolation Amplifier. Data Sheet. When used with a shunt resistor to monitor the motor. These devices consist of a sigma-delta analog-to-. The products.
|Published (Last):||11 February 2017|
|PDF File Size:||11.51 Mb|
|ePub File Size:||16.14 Mb|
|Price:||Free* [*Free Registration Required]|
Data sheet summary: when used with a shunt resistor to monitor the motor phase current in a high-speed motor drive, the device offers superior reliability compared with traditional solutions such as current transformers and Hall-effect sensors. The products are capable of operation and storage over the full military temperature range and can be purchased either as commercial product, with full MIL-PRF Class H testing, or from the appropriate DSCC drawing.
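For orientation, the shunt-sensing arrangement described above is just Ohm's law: the phase current develops a small differential voltage across the shunt, and that voltage is what the isolation amplifier measures. The symbols below are generic and are not values taken from this datasheet.

```latex
% Voltage presented to the isolation amplifier by the current shunt:
V_{\mathrm{sense}} = I_{\mathrm{phase}} \cdot R_{\mathrm{shunt}}
```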
Hermetic Optocoupler Options Option Description Surface mountable hermetic optocoupler with leads trimmed for butt joint assembly. This option is available on commercial and hi-rel product in 8 pin DIP see drawings below for details. This option is available on commercial and hi-rel product in 8 pin DIP. DSCC Drawing part numbers contain provisions for lead finish.
This option has solder dipped leads. VOS 1,2,3 —1. Units Test Conditions Fig. Symbol Group A Subgroups Min. Units 8 Test Conditions Fig. Notes: 1. This test mode is not intended for customer use. Exact offset value is dependent on layout of external bypass capacitors. Nonlinearity is defined as half of the peak-to-peak output deviation from the best-fit gain line, expressed as a percentage of the full-scale differential output voltage.
CMRRIN is defined as the ratio of the gain for differential inputs applied between pins 2 and 3 to the gain for both common mode inputs applied to both pins 2 and 3 with respect to pin 4. When the differential input signal exceeds approximately mV, the outputs will limit at the typical values shown. Short-circuit current is the amount of output current generated when either output is shorted to VDD2 or ground. Agilent does not recommend operations under these conditions.
CMR also known as IMR or Isolation Mode Rejection specifies the minimum rate of rise of a common mode signal applied across the isolation boundary at which small output perturbations begin to occur. These output perturbations can occur with both the rising and falling edges of the common mode waveform and may be of either polarity. A CMR failure is defined as a perturbation exceeding mV at the output of the recommended application circuit Figure See Applications section for more information on CMR.
Output noise comes from two primary sources: chopper noise and sigma-delta quantization noise. Chopper noise results from chopper stabilization of the output op-amps. It occurs at a specific frequency typically kHz and is not attenuated by the on-chip output filter. The on-chip filter does eliminate most, but not all, of the sigma-delta quantization noise. An external filter circuit may be easily added to the external post-amplifier to reduce the total RMS output noise.
See Applications section for more information. Device considered a two-terminal device: Pins 1, 2, 3, and 4 are shorted together and pins 5, 6, 7, and 8 are shorted together. Parameters are tested as part of device initial characterization and after design and process changes only.
Parameters are guaranteed to the limits specified for all lots not specifically tested. [Characterization figures referenced here but not reproduced: Input Offset Change vs. …, Input Offset Voltage Test Circuit, Output Voltages vs. Input Voltage.]
[Further figure titles from the datasheet, not reproduced: Gain and Nonlinearity Test Circuit; Gain Change vs. …; Nonlinearity Error Plot; Nonlinearity vs. Full-Scale Input Voltage; Input Current vs. …; Input and Output Supply Current vs. …; Common Mode Rejection Test Circuit; Amplitude Response vs. …; Recommended Application Circuit Bandwidth; Recommended Application Circuit.]
And finally, the differential output of the isolation amplifier is converted to a ground-referenced Applications Information Functional Description Figure 23 shows the primary functional blocks of the HCPL In operation, the sigmadelta modulator converts the analog input signal into a highspeed serial bit stream.
The time average of this bit stream is directly proportional to the input signal. This stream of digital data is encoded and optically transmitted to the detector circuit. The detected signal is decoded and converted back into an analog signal, which is filtered to obtain the final output signal.
Although the application circuit is relatively simple, a few recommendations should be followed to ensure optimal performance. Supplies and Bypassing As mentioned above, an inexpensive three-terminal regulator can be used to reduce the gate-drive power supply voltage to 5 V.
VOUT Single-Supply Post-Amplifier Circuit. VDD 5. As shown in Figure 24, a 0. The bypass capacitors are required because of the highspeed digital nature of the signals inside the isolation amplifier.
The input bypass capacitor should be at least pF to maintain gain accuracy of the isolation amplifier. Inductive coupling between the input power-supply capacitor and the input circuit, including the input bypass capacitor and the input leads of the HCPL, can introduce additional DC offset in the circuit. Several steps can be taken to minimize the mutual coupling between the two parts of the circuit, thereby improving the offset performance of the design.
Separate the two bypass capacitors C2 and C3 as much as possible even putting them on opposite sides of the PC board , while keeping the total lead lengths, including traces, of each bypass capacitor less than 20 mm. PC board traces should be made as short as possible and placed close together or over ground plane to minimize loop area and pickup of stray magnetic fields.
Avoid using sockets, as they will typically increase both loop area and inductance. And finally, using capacitors with small body size and orienting them perpendicular to each other on the PC board can also help. The value of the shunt should be chosen as a compromise between minimizing power dissipation by making the shunt resistance smaller and improving circuit accuracy by making it larger and utilizing the full input range of the HCPL Agilent Technologies recommends four different shunts which can be used to sense average currents in motor drives up to 35 A and 35 hp.
Table 1 shows the maximum current and horsepower range for each of the LVR-series shunts from Dale. Even higher currents can be sensed with lower value shunts available from vendors such as Dale, IRC, and Isotek Isabellenhuette. When sensing currents large enough to cause significant heating of the shunt, the temperature coefficient of the shunt can introduce nonlinearity due to the signal dependent temperature rise of the shunt. Using a heat sink for the shunt or using a shunt with a lower tempco can help minimize this effect.
The Application Note , Designing with Agilent Technologies Isolation Amplifiers, contains additional information on designing with current shunts. The recommended method for connecting the isolation amplifier to the shunt resistor is shown in Figure This allows a single pair of wires or PC board traces to connect the isolation amplifier circuit to the shunt resistor. In some applications, however, supply currents flowing through the power-supply return path may cause offset or noise problems.
In this case, better performance may be obtained by connecting pin 3 to the negative terminal of the shunt resistor separate from the power supply return path. When connected this way, both input pins should be bypassed.
Whether two or three wires are used, it is recommended that twisted-pair wire or very close PC board traces be used to connect the current shunt to the isolation amplifier circuit to minimize electromagnetic interference to the sense signal. The resistor performs another important function as well; it dampens any ringing which might be present in the circuit formed by the shunt, the input bypass capacitor, and the wires or traces connecting the two.
Undampened ringing of the input circuit near the input sampling frequency can alias into the baseband producing what might appear to be noise at the output of the device.
PC Board Layout In addition to affecting offset, the layout of the PC board can also affect the common mode rejection CMR performance of the isolation amplifier, due primarily to stray capacitive coupling between the input and the output circuits.
To obtain optimal CMR performance, the layout of the printed circuit board PCB should minimize any stray coupling by maintaining the maximum possible distance between the input and output sides of the circuit and ensuring that any ground plane on the PCB does not pass directly below the HCPL Using surface mount components can help achieve many of the PCB objectives discussed in the preceding paragraphs.
An example throughhole PCB layout illustrating some of the more important layout recommendations is shown in Figures 26 and To maintain overall circuit bandwidth, the post-amplifier circuit should have a bandwidth at least twice the minimum bandwidth of the isolation amplifier, or about kHz. To obtain a bandwidth of kHz with a gain of 5, the op-amp should have a gain-bandwidth greater than 1 mHz.
The postamplifier circuit includes a pair of capacitors C5 and C6 that form a single-pole low-pass filter. These capacitors allow the bandwidth of the post-amp to be adjusted independently of the gain and are useful for reducing the output noise from the isolation amplifier doubling the capacitor values halves the circuit bandwidth. The component values shown in Figure 24 form a differential amplifier with a gain of 5 and a cutoff frequency of approximately kHz, and were chosen as a compromise between low noise and fast response times.
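The remark that doubling the capacitor values halves the circuit bandwidth follows from the single-pole RC relation. Here R and C stand for the relevant resistor and capacitor pair in the post-amplifier; the actual component values are not reproduced from the datasheet.

```latex
f_{-3\,\mathrm{dB}} = \frac{1}{2\pi R C}
\quad\Rightarrow\quad
C \rightarrow 2C \;\;\text{gives}\;\; f_{-3\,\mathrm{dB}} \rightarrow \tfrac{1}{2}\, f_{-3\,\mathrm{dB}}
```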
The overall recommended application circuit has a bandwidth of 66 kHz, a rise time of 5. Post-Amplifier Circuit The recommended application circuit Figure 24 includes a post-amplifier circuit that serves three functions: to reference the output signal to the desired level usually ground , to amplify the signal to appropriate levels, and to help filter output noise.
The particular op-amp used in the post-amp is not critical; however, it should have low enough offset and high enough bandwidth and slew rate so that it does not adversely affect circuit performance.
HCPL 7850 PDF
Something went wrong... | physics |
https://ir-sensor.polymerizedbayanan.pw/ | 2021-04-14T10:42:29 | s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038077810.20/warc/CC-MAIN-20210414095300-20210414125300-00336.warc.gz | 0.933787 | 4,272 | CC-MAIN-2021-17 | webtext-fineweb__CC-MAIN-2021-17__0__184500082 | en | IR LEDs are usually made of gallium arsenide or aluminium gallium arsenide. In complement with IR receivers, these are commonly used as sensors. Electric current is allowed to flow in only one direction in diodes. As the current flows, electrons fall from one part of the diode into holes on another part. In order to fall into these holes, the electrons must shed energy in the form of photons, which produce light.
It is necessary to modulate the emission from an IR diode when using it in an electronic application, to prevent spurious triggering from ambient infrared. Infrared diodes have a package that is opaque to visible light but transparent to infrared. An IR sensor is a device that detects IR radiation falling on it, and it consists of two parts: the emitter circuit and the receiver circuit.
The emitter is an IR LED and the receiver is typically an IR photodiode sensitive to the wavelength emitted by the LED; when infrared light falls on the photodiode, its output voltage or resistance changes in proportion to the received intensity. This is the underlying working principle of the IR sensor. The incidence can be direct (the emitter pointing straight at the receiver) or indirect (radiation reflected off an object back to the receiver). IR sensors find a wide variety of applications in various fields. Proximity sensors employ the reflective, indirect-incidence principle: the closer the object, the higher the intensity of the radiation incident on the photodiode.
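As a rough illustration of the "closer object, stronger reflection" idea, here is a minimal Arduino-style sketch. It assumes the receiver's analog output is wired to pin A0 and that the IR LED is driven continuously; the pin choice and threshold are assumptions for illustration and would need calibrating against a real sensor.

```cpp
// Minimal reflective-proximity check: stronger reflection -> larger reading.
// Wiring and threshold are assumptions, not values from the article.
const int SENSOR_PIN = A0;   // analog output of the IR receiver/photodiode stage
const int LED_PIN    = 13;   // on-board LED used as an "object near" indicator
const int THRESHOLD  = 500;  // 0..1023; calibrate for your own sensor

void setup() {
  pinMode(LED_PIN, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  int level = analogRead(SENSOR_PIN);        // reflected IR intensity
  digitalWrite(LED_PIN, level > THRESHOLD);  // light the LED when an object is close
  Serial.println(level);                     // watch the values to pick a threshold
  delay(100);
}
```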
Proximity sensors find use in touchscreen phones, among other devices. In line-following robots, IR sensors detect the color of the surface underneath them and send a signal to the microcontroller or main circuit, which then makes decisions according to the algorithm set by the creator of the bot.
Line followers employ the reflective, indirect-incidence principle. Over a black surface there is little or no reflection of the IR radiation back to the sensor module, whereas a white surface reflects it strongly; comparing the two lets the robot tell the line from the background, as in the sketch below.
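Here is a sketch of the decision logic a simple two-sensor line follower might use. It assumes each reflectance module gives a digital output that reads LOW over a black line (many boards invert this, so check your module), and the pin numbers and motor helper functions are placeholders rather than anything specified in this article.

```cpp
// Two-sensor line-follower logic sketch. Pins, sensor polarity and the motor
// helpers are assumptions for illustration only.
const int LEFT_SENSOR  = 2;   // digital output of the left IR reflectance module
const int RIGHT_SENSOR = 3;   // digital output of the right IR reflectance module

void driveForward() { /* drive both motors */ }
void turnLeft()     { /* slow or stop the left motor */ }
void turnRight()    { /* slow or stop the right motor */ }

void setup() {
  pinMode(LEFT_SENSOR, INPUT);
  pinMode(RIGHT_SENSOR, INPUT);
}

void loop() {
  // Assumption: LOW = black line under the sensor (little IR reflected back).
  bool leftOnLine  = (digitalRead(LEFT_SENSOR)  == LOW);
  bool rightOnLine = (digitalRead(RIGHT_SENSOR) == LOW);

  if (leftOnLine && rightOnLine)        driveForward(); // centred on the line
  else if (leftOnLine && !rightOnLine)  turnLeft();     // drifted right, steer back left
  else if (!leftOnLine && rightOnLine)  turnRight();    // drifted left, steer back right
  else                                  driveForward(); // lost the line; keep going or search
}
```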
The project is available at: line follower robot. IR sensors are also used as object counters: monitoring systems in large factories use such counters for counting products on conveyor belts. In a burglar alarm, as soon as a person obstructs the IR path the alarm goes off. This mechanism is used extensively in security systems and is replicated on a smaller scale for individual objects, such as exhibits in an exhibition.
There are various applications of IR sensors, such as TV remote controls, burglar alarms and object counters. Here we have used IR sensors (infrared LEDs) to make an object-detection circuit, and also a proximity sensor for path-tracking robots. In this case the circuit checks that the sensor is detecting IR light: if the light stream is interrupted, the normally high signal from the sensor goes low. This is an extremely common application for these components, and a minimal version of it is sketched below.
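The "normally high, goes low when the beam is broken" behaviour described above maps onto a few lines of Arduino code. The pin number and the active-low polarity are assumptions about the particular sensor module, not details given in the article.

```cpp
// Break-beam style object detection: the output is normally HIGH and drops LOW
// when something interrupts the IR light path. Pin and polarity are assumed.
const int BEAM_PIN = 7;
int objectCount = 0;
int lastState = HIGH;

void setup() {
  pinMode(BEAM_PIN, INPUT_PULLUP);  // many receiver modules are open-collector
  Serial.begin(9600);
}

void loop() {
  int state = digitalRead(BEAM_PIN);
  if (lastState == HIGH && state == LOW) {  // falling edge: beam just interrupted
    objectCount++;
    Serial.print("Object detected, count = ");
    Serial.println(objectCount);
  }
  lastState = state;
  delay(5);                                 // crude debounce
}
```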
Since you cannot see any light, how do you know the IR LED is working? I had the same question, and found a very simple way to check that costs absolutely nothing if your PC has a built-in camera. Turn off any extra lights in the room to make it easier to see, and turn on the camera in Settings or the Control Panel, depending on your computer type. Point the powered IR LED at the camera: most camera sensors pick up near-infrared, so a working LED shows up on screen as a faint purple glow even though it is invisible to the naked eye. To confirm this, turn off your circuit and look at the LEDs again; there will be no purple glow.
How bright the glow appears also depends on the camera, in particular its shutter speed and how aggressively it filters infrared.
IR LED | Infrared LED | Infrared Sensor
Dahlgren (Beaumont, CA): Great product. I moved into a new home with a built-in entertainment center. All of my electronics are behind cabinet doors, and as a result I was unable to remote-control any of them. This IR repeater not only enables remote control of all of my electronics behind closed doors, it does it better than before, when I had the doors open and had to point the remote control directly at each item.
Now I can point the remote control in almost any direction and it will control the electronics.
In other words, it does not have to be pointed directly at the item to be controlled. Another buyer notes that they thought the emitters had a variable voltage output, but they don't.
I used the emitting component to repair a Samsung French-door refrigerator's icemaker. Samsung does not sell this part separately, so they want you to buy the whole auger. It's been two months since I repaired it and it's still running fine! This is an excellent purchase if you have your cable box hidden.
As far as length goes, there is more than enough, as I even have this connected to another 3. The reason for the four-star rating is that this new version has a blue light on the sensor to indicate a button press.
All previous models that I've purchased from this seller did not have this. The blue light to me is an annoyance more than a plus.
Other than that, this is a great product. Works as it should. NOTE: if you solder the jumper pad for the re-trigger, then you need to cut the trace on the other pad or it won't work at all.
The instructions don't mention this, and aren't really clear about what "re-trigger" means. I've uploaded images of these at night, along with the regular daytime image. The beam is fairly focused (see images). A picture speaks a thousand words, so check out the images to help decide if this is right for you. AT&T U-verse boxes work like a charm!

IR Sensor with Arduino Tutorial - Beginner's Guide

An IR sensor is an electronic instrument that scans IR signals in specific frequency ranges defined by standards and converts them to electric signals on its output pin, typically called the signal pin.
Each signal represents a specific code. When you press a button on your TV remote control, it generates a signal corresponding to that button's code (e.g. a particular hexadecimal value). Both sender and receiver agree on a set of codes, so the receiver knows what to do based on each code. The way a code is modulated (modelled as a signal) is defined in different standards, and each sensor manufacturer normally tries to produce a product compatible with them so that it can be used in different devices.
One of the best-known standard protocols is from NEC. IR sensors are available in different packages. Here you can see some typical packaging for an IR receiver. Such modules normally combine one of the sensors mentioned above in a nice breadboard-friendly package, together with an LED that flashes when the sensor detects a signal.
Using Infrared Sensor With Arduino
Thanks to that LED you can immediately see whether any data is being received, so I highly suggest starting with one of these modules. Setting up the IR sensor connection to the Arduino is very simple. I have demonstrated both the IR sensor module and the raw IR sensor setup. As can be seen in the pictures, the position of the VCC and GND pins on the sensor module is the opposite of the raw sensor. However, this may not be the case for your sensor, so as mentioned in the previous step, when using a raw sensor check the datasheet first.
In order to program the Arduino to do something when you press a key on the remote, you first need the code corresponding to that key. The key code is a number, normally presented in hexadecimal. Having different key codes, along with using different frequency ranges, ensures that the remote controllers of two different devices do not interfere with each other.
To detect the codes for your IR remote, you first have to run a simple sketch that tries to read the code from the sensor when you press a key and sends it via the serial port to your computer, where you can view it using the Serial Monitor tool of the Arduino IDE. This is what the sketch attached to this section does. It is a good idea to press every button, note the code, and write the list of codes down somewhere so that you do not need to run this sketch again in the future.
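The attached sketch is not reproduced in this copy of the article, but a minimal equivalent using the classic 2.x API of the popular IRremote library looks roughly like this (the receiver pin is an assumption, and newer versions of the library use a different API, so check the examples bundled with your installed version):

#include <IRremote.h>

const int RECV_PIN = 11;           // signal pin of the IR receiver
IRrecv irrecv(RECV_PIN);
decode_results results;

void setup() {
  Serial.begin(9600);
  irrecv.enableIRIn();             // start listening for IR signals
}

void loop() {
  if (irrecv.decode(&results)) {
    Serial.println(results.value, HEX);   // print the received key code in hexadecimal
    irrecv.resume();               // get ready for the next code
  }
}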
The list of key codes you see as a table in the picture are the codes I received when pressing buttons on my cheap IR remote. You can also access the actual source code shared on my Arduino web editor at ir-key-code-logger. Some entries read FFFFFFFF: this means you pressed and held a button for a while. We'll get back to it later on.
For now just ignore them and focus on other codes. Now that we have a code for each button, it's time to focus on the way we can use them.
Here we demonstrate the process using a simple circuit consisting of 4 LEDs in different colors. We want to turn each one of them on or off with a dedicated button of the IR remote. Connect the LEDs and the sensor to the Arduino as shown on the schematic. You can find the code corresponding to this circuit in the attached file or on my Arduino web editor at ir-led-control; a sketch in the same spirit is outlined below.
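The attached file is not included in this copy, but a sketch in the same spirit could look like the following. The LED pins and the four hexadecimal key codes are placeholders for this example; replace them with your own wiring and with the codes you logged from your remote.

#include <IRremote.h>

const int RECV_PIN = 11;
const int LED_PINS[4] = {2, 3, 4, 5};                    // example wiring for the four LEDs
const unsigned long KEY_CODES[4] = {0xFFA25D, 0xFF629D,  // placeholder codes: use the ones
                                    0xFFE21D, 0xFF22DD}; // logged from your own remote

IRrecv irrecv(RECV_PIN);
decode_results results;
bool ledState[4] = {false, false, false, false};

void setup() {
  for (int i = 0; i < 4; i++) pinMode(LED_PINS[i], OUTPUT);
  irrecv.enableIRIn();
}

void loop() {
  if (irrecv.decode(&results)) {
    for (int i = 0; i < 4; i++) {
      if (results.value == KEY_CODES[i]) {               // the repeat code FFFFFFFF matches nothing here
        ledState[i] = !ledState[i];                      // toggle the matching LED
        digitalWrite(LED_PINS[i], ledState[i] ? HIGH : LOW);
      }
    }
    irrecv.resume();
  }
}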
While setting up your project and following the steps you may encounter some odd behaviour. Here is a list of common issues you may run into when working with an IR sensor. One of them is seeing FFFFFFFF instead of a key code: this happens when you press a button and hold it, even for a short period of time. When you press the button, the IR remote first sends the button code and then, for as long as you hold the button, it repeatedly sends FFFFFFFF, which means that the user is still pressing the most recently reported button.
The main areas are sensing and remote controls. In the electromagnetic spectrum, the infrared portion is divided into three regions: near infrared region, mid infrared region and far infrared region. The wavelengths of these regions and their applications are shown below. The frequency range of infrared is higher than microwave and lesser than visible light. For optical sensing and optical communication, photo optics technologies are used in the near infrared region as the light is less complex than RF when implemented as a source of signal.
Optical wireless communication is done with IR data transmission for short range applications. The basic concept of an Infrared Sensor which is used as Obstacle detector is to transmit an infrared signal, this infrared signal bounces from the surface of an object and the signal is received at the infrared receiver. There are five basic elements used in a typical infrared detection system: an infrared source, a transmission medium, optical component, infrared detectors or receivers and signal processing.
The three main types of media used for infrared transmission are vacuum, atmosphere and optical fibers. Optical components are used to focus the infrared radiation or to limit the spectral response. Optical lenses made of Quartz, Germanium and Silicon are used to focus the infrared radiation. Infrared receivers can be photodiodes, phototransistors etc. Signal processing is done by amplifiers as the output of infrared detector is very small.
Infrared sensors can be passive or active. Passive infrared sensors are basically Infrared detectors. Passive infrared sensors do not use any infrared source and detects energy emitted by obstacles in the field of view. They are of two types: quantum and thermal. Thermal infrared sensors use infrared energy as the source of heat and are independent of wavelength. Thermocouples, pyroelectric detectors and bolometers are the common types of thermal infrared detectors.
Quantum type infrared detectors offer higher detection performance and are faster than thermal type infrared detectors. The photosensitivity of quantum type detectors is wavelength dependent. Quantum type detectors are further classified into two types: intrinsic and extrinsic. Intrinsic type quantum detectors are photoconductive cells and photovoltaic cells. Active infrared sensors consist of two elements: an infrared source and an infrared detector.

Infrared (IR), sometimes called infrared light, is electromagnetic radiation (EMR) with wavelengths longer than those of visible light.
As with all EMR, IR carries radiant energy and behaves both like a wave and like its quantum particle, the photon. Infrared radiation was discovered in 1800 by astronomer Sir William Herschel, who found a type of invisible radiation in the spectrum lower in energy than red light by means of its effect on a thermometer.
The balance between absorbed and emitted infrared radiation has a critical effect on Earth's climate. Infrared radiation is emitted or absorbed by molecules when they change their rotational-vibrational movements.
It excites vibrational modes in a molecule through a change in the dipole moment, making it a useful frequency range for the study of these energy states in molecules of the proper symmetry. Infrared spectroscopy examines absorption and transmission of photons in the infrared range.
Infrared radiation is used in industrial, scientific, military, law enforcement, and medical applications. Night-vision devices using active near-infrared illumination allow people or animals to be observed without the observer being detected.
Infrared astronomy uses sensor-equipped telescopes to penetrate dusty regions of space such as molecular clouds, detect objects such as planets, and view highly red-shifted objects from the early days of the universe. Extensive military and civilian applications include target acquisition, surveillance, night vision, homing, and tracking. Non-military uses include thermal efficiency analysis, environmental monitoring, industrial facility inspections, detection of grow-ops, remote temperature sensing, short-range wireless communication, spectroscopy, and weather forecasting.
Infrared radiation extends from the nominal red edge of the visible spectrum, at about 700 nanometers (nm), to 1 millimeter (mm). Below infrared in frequency is the microwave portion of the electromagnetic spectrum. Of the roughly one kilowatt per square metre of sunlight arriving at sea level, about 527 watts is infrared radiation, 445 watts is visible light, and 32 watts is ultraviolet radiation. On the surface of Earth, at far lower temperatures than the surface of the Sun, some thermal radiation consists of infrared in the mid-infrared region, much longer in wavelength than in sunlight.
However, black-body, or thermal, radiation is continuous: it gives off radiation at all wavelengths. Of these natural thermal radiation processes, only lightning and natural fires are hot enough to produce much visible energy, and fires produce far more infrared than visible-light energy. In general, objects emit infrared radiation across a spectrum of wavelengths, but sometimes only a limited region of the spectrum is of interest because sensors usually collect radiation only within a specific bandwidth.
Thermal infrared radiation also has a maximum emission wavelength, which is inversely proportional to the absolute temperature of object, in accordance with Wien's displacement law.
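As a worked illustration of Wien's law: the peak emission wavelength is lambda_max = b / T, where b is approximately 2.898 x 10^-3 m·K. For a human body at roughly 310 K this gives lambda_max of about 9.3 micrometres, well inside the thermal (long-wavelength) infrared, which is why people show up so clearly on thermal cameras.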
A commonly used sub-division scheme splits the band into near-infrared (NIR), short-wavelength (SWIR), mid-wavelength (MWIR), long-wavelength (LWIR) and far infrared (FIR). Due to the nature of the blackbody radiation curves, typical "hot" objects, such as exhaust pipes, often appear brighter in the MW band than the same object viewed in the LW band. The International Commission on Illumination (CIE) recommended dividing infrared radiation into three bands, designated IR-A, IR-B and IR-C.
ISO specifies a similar scheme of near-, mid- and far-infrared. Astronomers also typically divide the infrared spectrum into near-, mid- and far-infrared regions, although these divisions are not precise and can vary depending on the publication. The three regions are used for observation of different temperature ranges, and hence different environments in space. The most common photometric system used in astronomy allocates capital letters to different spectral regions according to the filters used; I, J, H, and K cover the near-infrared wavelengths, while L, M, N, and Q refer to the mid-infrared region.
These letters are commonly understood in reference to atmospheric windows and appear, for instance, in the titles of many papers. A third scheme divides up the band based on the response of the various detector technologies.
Other definitions follow different physical mechanisms emission peaks, vs. No international standards for these specifications are currently available. However, particularly intense near-IR light e. Leaves are particularly bright in the near IR, and if all visible light leaks from around an IR-filter are blocked, and the eye is given a moment to adjust to the extremely dim image coming through a visually opaque IR-passing photographic filter, it is possible to see the Wood effect that consists of IR-glowing foliage.
The C-band is the dominant band for long-distance telecommunication networks.This post will discuss about what is Infrared Sensor, its working principle, how it works, types, applications, advantages and disadvantages.
Some of its features are heat and motion sensing. IR sensors use infrared radiation of wavelength between 0. IR region is not visible to human eyes. Infrared spectrum is categorized into three regions based on its wavelength i. IR Transmitter acts as source for IR radiation. Vacuum, atmosphere and optical fibers are used as medium.
Generally IR receivers are photo diode and photo transistors. They are capable of detecting infrared radiation. Hence IR receiver is also called as IR detector.
Passive infrared sensor
Variety of receivers are available based on wavelength, voltage and package. IR Transmitter and Receivers are selected with matching parameters. Some of deciding specifications of receivers are photosensitivity or responsivity, noise equivalent power and detectivity. Incidence in an IR Detection System may be direct or indirect. In case of Direct Incidence, there is no hindrance in between transmitter and receiver. Active Infrared Sensor contains both transmitter and receiver. Most of the cases LED or laser diode is used as source.
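For reference, these figures of merit are linked by a standard relation: the specific detectivity is D* = sqrt(A x Δf) / NEP, where A is the detector's active area, Δf the measurement bandwidth, and NEP the noise equivalent power, so a receiver with a lower NEP is the more sensitive one.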
Active IR Sensor works by radiating energy, received and detected by detector and further processed by signal processor in order to fetch information required. Object radiates energy and it is detected by IR receivers. A Signal processor is then used to interpret the signal to fetch information required. Quantum Infrared Sensor are dependent on wavelengths.
They have high detection time and response time. These type of IR sensors require frequent cooling for precise measurement. Following are the list of sensors which are named after its usage. These are used in smart phones to find distance of object. They use principle called Reflective Indirect Incidence.
Radiation transmitted by transmitter is received by receiver after being reflected from object. Distance is calculated based on the intensity of radiation received. This use direct incidence method to count the items. Constant radiation is maintained in between transmitter and receiver. As soon as object cuts the radiation, item is detected and count is increased.
The same count is shown on display system. This is one of widely and commonly used sensor application. It is another example for direct incidence method.
It works similarly to the item counter: the transmitter and receiver are kept on either side of the door frame, constant radiation is maintained between them, and whenever an object crosses the path the alarm goes off.

A passive infrared sensor (PIR sensor) is an electronic sensor that measures infrared (IR) light radiating from objects in its field of view.
They are most often used in PIR-based motion detectors. PIR sensors are commonly used in security alarms and automatic lighting applications. PIR sensors detect general movement, but do not give information on who or what moved. For that purpose, an active IR sensor is required. The term passive refers to the fact that PIR devices do not radiate energy for detection purposes.
They work entirely by detecting infrared radiation radiant heat emitted by or reflected from objects. All objects with a temperature above absolute zero emit heat energy in the form of radiation. Usually this radiation isn't visible to the human eye because it radiates at infrared wavelengths, but it can be detected by electronic devices designed for such a purpose.
The PIR sensor consists of a pyroelectric sensor and a Fresnel lens. The sensor output is inverted by a transistor; the collector of the transistor is connected to the input pin of a latch circuit, which is set when the PIR output goes high to indicate the presence of a warm body. The output of the latch pin operates the relay-driving circuit, formed by transistors arranged in emitter-follower mode.
A PIR-based motion detector is used to sense movement of people, animals, or other objects. They are commonly used in burglar alarms and automatically-activated lighting systems. A PIR sensor can detect changes in the amount of infrared radiation impinging upon it, which varies depending on the temperature and surface characteristics of the objects in front of the sensor. The sensor converts the resulting change in the incoming infrared radiation into a change in the output voltage, and this triggers the detection.
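As an illustration of how such a module is typically read by a microcontroller, a minimal Arduino-style sketch might look like the one below. The pin numbers are assumptions; most hobby PIR modules simply drive their output HIGH while motion is being detected.

const int PIR_PIN = 7;      // output of the PIR module
const int LAMP_PIN = 8;     // LED, or the input of a relay driver

void setup() {
  pinMode(PIR_PIN, INPUT);
  pinMode(LAMP_PIN, OUTPUT);
}

void loop() {
  if (digitalRead(PIR_PIN) == HIGH) {   // warm body moving in the field of view
    digitalWrite(LAMP_PIN, HIGH);
  } else {
    digitalWrite(LAMP_PIN, LOW);
  }
}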
Objects of similar temperature but different surface characteristics may also have a different infrared emission pattern, and thus moving them with respect to the background may trigger the detector as well.
PIRs come in many configurations for a wide variety of applications. The most common models have numerous Fresnel lenses or mirror segments, an effective range of about 10 meters (30 feet), and a field of view of less than 180 degrees. Some larger PIRs are made with single-segment mirrors and can sense changes in infrared energy over 30 meters (100 feet) from the PIR.
Pairs of sensor elements may be wired as opposite inputs to a differential amplifier. In such a configuration, the PIR measurements cancel each other so that the average temperature of the field of view is removed from the electrical signal; an increase of IR energy across the entire sensor is self-cancelling and will not trigger the device. This allows the device to resist false indications of change in the event of being exposed to brief flashes of light or field-wide illumination.
Continuous high energy exposure may still be able to saturate the sensor materials and render the sensor unable to register further information. | physics |
http://7ww.org/listing/aurora-borealis/ | 2020-12-06T01:44:25 | s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141753148.92/warc/CC-MAIN-20201206002041-20201206032041-00395.warc.gz | 0.963744 | 817 | CC-MAIN-2020-50 | webtext-fineweb__CC-MAIN-2020-50__0__31841411 | en | The seven wonders of the natural world do not all have to be on the ground, or even be physical structures that can be touched and the Aurora Borealis is the perfect example of that.
These lights that occur mainly in the northern hemisphere of planet Earth (hence their alternative name ‘the northern lights’) but also occasionally in the southern hemisphere too are such an unusual display of nature that they remained a scientific phenomenon until very recently. They can appear as flowing, moving waves of light (known as Quity Arces), thin strips of light (Diffuse Patches) or sheets of glowing light (Raide Arces) and they glow green, blue, red and yellow.
The aurora occurs solely in the sky and mainly in areas closest to the northern closest to the northern magnetic pole (currently located in Canada). They appear most during September and October, and then again in March and April although they do not appear every night and many people’s expeditions to see them prove unfruitful. Perhaps it is their elusiveness and unpredictability that makes them such an appealing one of the seven wonders of the natural world, but their natural beauty is, of course, the main attraction.
The northern hemisphere is a hostile, arctic environment. Despite this, many keen photographers take trips out there in an effort to capture the lights on film. Many also take video cameras to capture a time lapse recording of the lights, as they move slowly but in beautiful patterns.
The aurora has most likely been around for many thousands of millions of years, even before the most basic life forms began, yet we still have the knowledge of how they are created. It wasn’t until around 1741 that the link between the magnetic poles and the northern lights was discovered, and up until 2008 the mysteries behind their formation have been slowly unraveling.
Aurora Borealis and Aurora Australis (southern lights) are formed when particles from the sun – also known as ‘solar wind’ – hit the earth’s atmosphere. They are charged highly with energy which is lost when they collide with other atoms, or when they emit a photon of light; a process which results in the lights that we see. Solar wind particles always occur in parallel with the earth’s magnetic field, which is why we often see them as ribbons or waves of light moving in a certain direction.
Their individual color depends entirely on the make up of the emission that the solar wind molecule gives off. Green and maroon colored aurora occur when the molecules emit oxygen, whereas blue lights occur when nitrogen is given off. Red lights can indicate either oxygen or nitrogen.
How to Get There:
The Aurora Borealis is not just found in one city or even country in the world, so it is up to you to choose where you would like to see them. Reykjavik in Iceland is a wonderful place to see the northern lights, as are many areas of Alaska, Canada, Greenland, Finland, Norway and Sweden. Each of these countries have main airports which are easily flown to, although if you are aiming for the magnetic pole then you will need to consult an arctic expert for advice on the necessary hiking gear, tents and/or vehicles to get you there.
Where to Stay:
For anyone who isn’t a professional arctic explorer or at least vaguely familiar with what’s required of this type of trip, it’s best to stay in one of the hotels in a more populated and interesting place in the northern hemisphere such as Reykjavik in Iceland, Lulea in Sweden or Rovaniemi in Finland.
The advantage with staying in a place that’s more populated and metropolitan than somewhere nearer the north pole is that there are other things to visit during the daytime when the Aurora cannot be seen. Also, you will be able to stay comfortably in these places for days on end, making your chance of seeing the northern lights higher. | physics |
https://blog.weicon.de/finding-the-right-lubricant/?lang=en | 2020-08-11T10:15:38 | s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738746.41/warc/CC-MAIN-20200811090050-20200811120050-00536.warc.gz | 0.976099 | 122 | CC-MAIN-2020-34 | webtext-fineweb__CC-MAIN-2020-34__0__143366961 | en | Even in early times, lubricants were used on the axle bearings of Egyptian chariots. While in the following milleniums, not much happened in terms of the lubrication technology, it startet to develop at a rapid pace with the beginning of the industrial revolution. Steam engines, which were invented back then, had a power output unknown before, yet they needed high-quality lubricants because of the high stresses they were exposed to. In modern times, steam engines have been replaced by high-performance engines for a long time. These engines withstand increasing stresses thanks to increaslingly efficient lubricants. | physics |
http://eeportal.minnesotaee.org/resource/earthquake-monitor/ | 2023-09-21T10:08:07 | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233505362.29/warc/CC-MAIN-20230921073711-20230921103711-00180.warc.gz | 0.776538 | 436 | CC-MAIN-2023-40 | webtext-fineweb__CC-MAIN-2023-40__0__126392052 | en | |Post Date:||September 11, 2019|
This website Seismic Monitor allows you to monitor global earthquakes in near real-time, visit seismic stations around the world, and search the web for earthquake or region-related information. You can also view seismograms and make dataset requests via its WILBER interface.
Earthquakes are shown as colored circles on a world map, where the size of the circle tells you the magnitude of the quake, using the legend at the top left of the map. Only earthquakes of magnitude 4.0 or greater are displayed.
Seismic Monitor is updated every 20 minutes.
The Incorporated Research Institutions for Seismology (IRIS) Education & Outreach (E&O) program, in collaboration with the seismological and educational communities, develops and implements programs designed to enhance seismology and Earth Science education in K-12 schools, colleges and universities, and in adult education.
In addition to the real-time Seismic Monitor website, IRIS offers:
Please visit http://www.iris.washington.edu/about/ENO/ for more information about IRIS Education and Outreach programs.
|Author:||Incorporated Research Institutions for Seismology (IRIS) Education & Outreach (E&O)|
|Length in pages or time:|
|Is Training required?:|
|Language other than English:||
|Order information or contact:||Incorporated Research Institutions for Seismology (IRIS) Education & Outreach Program 1200 New York Ave. NW, Suite 400 Washington, DC 20005 Telephone (202) 682-2220 [email protected]|
|MAEE Partner||Sharing Environmental Education Knowledge (SEEK)| | physics |
https://www.thinkdigitalacademy.org/reading-room/a-series-of-impossible-questions/ | 2024-02-21T08:42:23 | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473401.5/warc/CC-MAIN-20240221070402-20240221100402-00193.warc.gz | 0.93003 | 253 | CC-MAIN-2024-10 | webtext-fineweb__CC-MAIN-2024-10__0__47759748 | en | Science isn’t about knowing lots of facts or getting the right answer all the time. It’s not even about wearing a lab coat. Science is about asking sensible questions. They can be silly questions. They can even be impossible questions!
The history of science is paved with impossible questions. Each one is a stepping stone on the path to understanding the universe and everything in it. But this path is far from finished.
Every answer leads to new impossible questions that are still bamboozling biologists, confusing chemists and making physicists feel perplexed. In this series we will delve into a treasure trove of curious conundrums like:
Can I sleep with my eyes open?
Why can’t I tickle myself?
Are cats liquid or solid? And
What’s the world’s worst smell?
Isabel Thomas’s fantastically funny and informative answers are matched with bold, inviting answers to create a glorious compendium of weird and wonderful facts that you will want to read again and again.
The impossible questions in this series will help you explore life, the the universe and everything in it – and the best time to do this (as every scientist knows) is at bedtime! | physics |
https://dollzofflavor.com/solar-energy-units-honors-9-reasons-that-they-do-not-work-what-you-can-possibly-do-about-it/ | 2024-04-25T14:18:50 | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712297295329.99/warc/CC-MAIN-20240425130216-20240425160216-00268.warc.gz | 0.949413 | 1,367 | CC-MAIN-2024-18 | webtext-fineweb__CC-MAIN-2024-18__0__109135391 | en | A photo voltaic power device is actually a lasting financial investment that produces tidy energy for your household. Relying on the measurements, it can easily lower or maybe eliminate your power expenses.
Photovoltaic panel take in sunlight and switch it in to electricity with no moving parts. These bodies have reduced maintenance demands and can easily last 25 years or more.
Photovoltaic Or Pv (PV).
PV systems convert sunlight right into electrical energy by utilizing the photovoltaic or pv result, which develops when semiconducting components create current and present when subjected to light. These devices are powered by solar batteries, which are actually private gadgets that vary in size as well as shape relying on the kind of semiconductor material utilized. Some instances include silicon, copper indium gallium diselenide (CIGARETTES), cadmium telluride (CdTe) and perovskites. rural solar
Each solar battery has two layers of semiconductor material along with one layer being actually favorably billed and the other negatively demanded, with steel connections on either edge. When sunshine strikes the sunlight cell, an electricity field is actually generated around the silicon junction, forcing loose electrons to spurt of the tissue, producing straight present (DC). These electrons are at that point drawn off by the steel contacts and also nourished into a photo voltaic inverter, which converts DC in to varying existing (HVAC), which may energy your appliances and home items.
The outcome of PV units is frequently defined as a kilowatt top (kWp) value, which works with the maximum theoretical power they may create. Having said that, the real energy they supply to your home will certainly depend upon many factors including the location and siting of the body, the top quality of the installment, shielding as well as electricity reductions in the unit parts. To maximise your monetary come back, it is vital to recognize just how these aspects influence the functionality of your unit.
Concentrating solar-thermal (CSP).
A fabled innovation going back to old Greece, CSP uses direct sunlight to heat up a liquid as well as create electrical energy. It needs a solar area, a device to save the thermic power and an electrical power block to turn it right into electrical energy. The thermal energy can easily be actually saved in liquified salt or even in a vapor engine, depending upon the form of vegetation.
The best usual CSP modern technologies are allegorical canal and also renewable energy high rise devices. Both depend on sophisticated control systems to deal with the photovoltaic industry, storage as well as energy obstruct processes. One of the most efficient vegetations run in regions of high direct typical intensity (DNI), which is actually specified as sun that is actually powerful as well as not diffused.
Today’s CSP innovation possesses effectiveness worths of in between 20 and also 40 per-cent– equivalent to charcoal and atomic power plants and considerably more than photo-voltaic photo voltaic electricity. Solar thermal energy can likewise be actually made use of for commercial reasons, including water desalination as well as improved oil healing.
The key challenge for CSP is job lending. Unlike PV, which is reasonably economical to develop and also run, sizable utility-scale ventures need notable capital expense. In the past times, this has suppressed development of the innovation. Having said that, latest breakthroughs in the industry are motivating. The Department of Power and also private ventures supported by capitalists like Costs Gates are actually focused on improving and also promoting CSP to produce the innovation extra budget friendly.
A grid-tie renewable energy unit is actually linked to the energy framework, enabling it to ship or even import energy depending on the circumstance. This is actually a preferred alternative for residents given that it enables them to lessen their electrical expenses. These units carry out certainly not have electric batteries to save power, which streamlines setup as well as cuts down on system cost.
These planetary systems are actually powered by photovoltaic doors that change direct sunlight in to Direct Existing (DC). They make use of an inverter to completely transform the DC into Alternating Stream (AIR CONDITIONING), which is after that fed into the power grid. Your local area electrical company at that point bills you for the energy you have made use of or even generated.
The key variation between this kind of body and also other possibilities is that a grid-tie planetary system has the ability to take advantage of internet metering, which allows you to earn costs credit scores for any sort of excess electricity that your photo voltaic body generates. This can significantly lower your electric power costs, also eliminating all of them completely.
Nevertheless, there are some limitations to this choice. As an example, your device will certainly be switched off in the course of network failures to guard electrical lineworkers. In addition, if you stay in a location along with hefty rain or snowfall, the energy from your solar energy panels may certainly not be actually ample to fulfill your needs. Consequently, it is essential to acquire your system adequately sized by a Grape Solar representative.
The main conveniences of off-grid solar energy units is that they allow individuals to be actually totally private coming from the energy company. House owners that choose to put in off-grid systems may count on their very own electric battery back-up to offer every one of their electric power needs to have, also when the sunshine isn’t beaming or it’s evening opportunity.
Off-grid renewable energy systems may be set up in the houses or industrial buildings. They are actually usually used in country as well as distant locations where the network does not meet, however the cost of putting in a network link would certainly be actually expensive.
Choosing the ideal size off-grid system relies on your power needs to have, area as well as budget plan. The 1st step is to compute your regular kilowatt-hour (kWh) consumption. This may be done by building up the wattage of all the appliances at home. You may at that point utilize this details to determine how much electricity your off-grid unit will definitely need to have.
Off-grid photovoltaic devices need additional equipment than grid-tied systems and may have a greater rate factor. Nevertheless, they deliver numerous perks consisting of electricity liberty, defense against blackouts and removing electrical energy costs. It is necessary to review your inspiration and goals when deciding whether an off-grid solar power device corrects for you. | physics |
http://public.web.cern.ch/PUBLIC/en/People/Steinberger-en.html | 2014-03-08T23:24:34 | s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999668190/warc/CC-MAIN-20140305060748-00013-ip-10-183-142-35.ec2.internal.warc.gz | 0.970107 | 592 | CC-MAIN-2014-10 | webtext-fineweb__CC-MAIN-2014-10__0__145434067 | en | An excellent tool
Jack Steinberger joined CERN in 1968 and has worked on many neutrino experiments. Together with Leon Lederman and Melvin Schwartz he received the Physics Nobel Prize in 1988 for the neutrino beam method that they developed at the Columbia University and the discovery that there are at least two kinds of neutrino.
“Neutrinos, which are not encumbered with the complex strong interaction, provide an excellent tool to study the nucleon structure. While the SPS was still under construction, CERN therefore decided to construct a neutrino beam in which four experiments would be lined up, one behind the other. Our experiment, called CDHS for its collaborating institutions, CERN, Dortmund, Heidelberg and Saclay, was third in line.
The detector had a mass of 1250 tonnes and consisted of circular iron plates in modules of 5 or 15 plates each, interspersed with detectors. Our original plan for the detector had been substantially different. It consisted of two parts, the front serving as neutrino target and particle shower measuring instrument, followed by a magnet with interspersed tracking detectors for the only penetrating particles, the muons. However, the management did not like our front part and proposed that we do this experiment combining our magnet with a part provided by another team. This was not exactly to our liking; we were not convinced of the other proposed method, and we preferred to be independent. Only then did we notice that the magnet could be transformed to do all functions simultaneously, with an overall improvement in the detector capability. I remember this as an illustration of how limited our vision is, why we did not see this in the first place, and of how bad luck can turn to one's advantage.
The CDHS detector was used at CERN from 1977 to 1983 and with the help of neutrinos a good deal could be learned about the Standard Model as well as about the structure of the nucleon. Probably the most important results of the SPS neutrino experiments was the observation of so-called ‘scaling violations’, providing a first quantitative confirmation of the new theory of the strong interaction. Although the theory was very attractive, there had been no quantitative prediction of the theory that had been confirmed. The theoretical predictions were measured by the CDHS experiment for the first time, and thus gave experimental support to the new theory, a milestone in the establishment of the Standard Model.
The beautiful present understanding of the physics of particles and their interactions is, in my opinion, one of the cultural achievements of the century just gone by, and my opportunity to participate in this was a privilege. Despite the great advance in our understanding, there are still the great mysteries, like why are the masses and interaction strengths what they are? There is no lack of unanswered, fundamental questions. But the remaining questions are more difficult than those clarified by my generation.” | physics |
https://www.southasiatime.com/2024/01/24/indian-scientists-shine-in-2024-blavatnik-awards-for-young-scientists-in-the-uk/ | 2024-03-04T02:40:14 | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476409.38/warc/CC-MAIN-20240304002142-20240304032142-00644.warc.gz | 0.914462 | 432 | CC-MAIN-2024-10 | webtext-fineweb__CC-MAIN-2024-10__0__186652349 | en | Indian Scientists Shine in 2024 Blavatnik Awards for Young Scientists in the UK
London — Three Indian scientists have emerged as winners of the prestigious 2024 Blavatnik awards for young scientists in the UK. Prof Rahul R Nair, Prof Mehul Malik, and Dr Tanmay Bharat are among the nine recipients of the award, jointly instituted by the Blavatnik Family Foundation and The New York Academy of Sciences.
The awards, totaling £480,000, recognize outstanding research efforts that are transforming fields such as medicine, technology, and our understanding of the world in chemical sciences, physical sciences and engineering, and life sciences.
Prof Rahul R Nair, a materials physicist at The University of Manchester, has been named Laureate in Physical Sciences and Engineering. He will receive £100,000 for his groundbreaking work in developing novel membranes based on two-dimensional (2D) materials, enabling energy-efficient separation and filtration technologies. His research utilizes graphene and other 2D materials to explore applications addressing societal challenges like water filtration.
Prof Mehul Malik, a quantum physicist and professor of physics at Heriot-Watt University in Scotland, has been honored for advancing quantum communications. His techniques harness high-dimensional entanglement, a complex quantum physics phenomenon, enabling robust and high-capacity quantum networks that securely transmit large amounts of information over long distances.
Dr Tanmay Bharat, a structural microbiologist at the MRC Laboratory of Molecular Biology in Cambridge, was awarded for his work in tackling human health. Using cutting-edge electron cryotomography (cryo-ET) techniques, he studies the mechanisms of biofilm and microbiome formation. His research, providing atomic-level pictures of cell surface molecules on microorganisms, has significant biomedical implications for understanding antibiotic-resistant biofilm communities.
The three Indian scientists will be honored at a Gala ceremony at Banqueting House in London on February 27 and will share their insights at a public symposium at the RSA House on February 28. These awards showcase the remarkable contributions of Indian scientists on the global stage, pushing the boundaries of scientific knowledge and innovation. | physics |
https://newcenturycomponents.com/nsn/5905-00-001-2905 | 2022-05-27T00:28:18 | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662627464.60/warc/CC-MAIN-20220526224902-20220527014902-00089.warc.gz | 0.800065 | 1,168 | CC-MAIN-2022-21 | webtext-fineweb__CC-MAIN-2022-21__0__175977043 | en | Inc 37404 reinstated on 010303~a resistor whose ohmic value cannot be adjusted or varied. the resistance element consists of high resistance wire (or ribbon) either wound on an insulated form or constructed so as to be self-supporting. the items must have inherent characteristics of inducing little or no self-inductance. opposition to current flow is an inherent property of the resistance wire and is manifested by the heat dissipation in the resistor. excludes suppressor, ignition interference. see also resistor (1), fixed, wire wound, inductive.
5905-00-001-2905
Supply Class (FSC)
Item Name Code (INC)
Sep 22, 1972
37404 (Resistor, Fixed, Wire Wound, Nonind)
Trade Control Compliance
Schedule B Export Code: A ten position numeric code that identifies an item exported overseas. this unique identification code is assigned by the U.S. Census Bureau.
Sch. B Description
Wirewound variable resistors, nesoi
Non-usml/non-ccli - no demil or dod tsc required. department of commerce may impose licensing requirements to certain destinations. (note 9).
Not ITAR Controlled
MOE / End Users
Represents the subdivisions of a US governmental organization or an agency of the North Atlantic Treaty Organization (NATO), and other friendly governments and international organizations participating in the federal catalog program.
Country / Entity Name
Department Of The Navy
Material & Special Handling
Item material and handling information
Indicates there is no data in the hmirs and the nsn is in a fsc not generally suspected of containing hazardous materials.
Precious Metals Indicator
Precious metal content is unknown
No known electrostatic discharge (esd) or electromagnetic interference (emi) sensitivity.
The item does not have a nuclear hardened feature or any other critical feature such as tolerance, fit restriction or application.
National Motor Freight
NMFC: A six position numeric code, which divides articles into groups or classes according to physical characteristics. The classification is based on truckloads vs. less than truckload.
Less than a truck load rating
Communications/electronics, other than sigint/ew or comsec repair parts and components
Type of Cargo
Electrostatic sensitive device (esd)
Other or no special handling required (sh)
Air Dimension Code
Shipment is not a consolidation and does not exceed 84 inches in any dimension.
Instruments/equipment/supplies for radio, communications, electrical, laboratory, etc. (includes signal corps)
Air Special Handling
No special handling required.
Fixed Resistor Wire Wounds
Cross Reference
Cage Code 81349
Cage Code 81349
9B – Naval inventory control point, mechanicsburg
Special Material Content
Navy Management Code Definition
Identifies the inventory manager (IM) and inventory control point (ICP).
Special Material (SMIC)
Categorizes material on the basis of requirements for source or quality control, technical design, or configuration control, procurement, stocking and issue control, special receipt, inspection, testing, storage, or handling
Req. Restriction (IRRC)
Item requisition instructions.
Special Material (SMCC)
Indicated that an item represents or contains peculiar material requiring special treatment, precautions, or management. | physics |
https://udllab.web.nycu.edu.tw/research/ | 2023-12-01T13:40:22 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100287.49/warc/CC-MAIN-20231201120231-20231201150231-00665.warc.gz | 0.859861 | 334 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__269112555 | en | We are “Ultrafast dynamics Lab” (Figure 1) of the Electrophysics Department, NYCU. Our primary research interests include the following three major parts: (a) Ultrafast dynamics in quantum matter: We have developed all-around pump-probe techniques and tools to investigate different quantum matter. For example, the ultrafast dynamics of monolayer MoS2 has been studied using the OPA pump and supercontinuum white light probe helicity-resolved spectroscopy, which would play a crucial role in various applications including spintronics, valleytronics, and semiconductor devices (Figure 2). (b) THz spectroscopy: The time-domain measurements of THz radiation generated from topological insulators (TIs) can be performed by ultrafast optical pulse excitation. The present study demonstrates that time-domain THz spectroscopy provides rich information of the optical coupling and the electronic structure of TIs (Figure 3). (c) Femtosecond (fs) laser annealing and machining: Due to the high peak power of femtosecond laser pulses, we have established a laser micromachining and annealing system. The properties of materials could be modified easily in very short period, such as the surface morphology, carrier concentration, carrier mobility, etc. (Figure 4). Key Facilities: Pump-probe spectrometer, Low-temperature cryostat, Fourier transform infrared (FTIR) spectrometer, ultrashort-pulse optical parametric amplifier (OPA), and several other critical equipment necessary in the studies of ultrafast optics. | physics |
http://m.redpowerdesign.com/en/chanpin/liangzihuan.html | 2023-12-05T11:11:17 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100551.17/warc/CC-MAIN-20231205105136-20231205135136-00209.warc.gz | 0.912446 | 1,669 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__177284845 | en | Within a few years after the advent of Luodaoluo quantum tube through ring, it has achieved success in fighting against scale, hard water and oxidation in different markets and applications at home and abroad. It is now possible to treat calcium, rust and other deposits in (water) pipes based on physical principles. Luodaoluo products can meet the primary requirements of customers, be safe, sustainable, and reusable, and at the same time have the characteristics of mildness, environmental protection, economy, and high efficiency. Based on our many years of careful work, we can now offer a treatment system "Made in Germany". This system has passed the test of demanding industrial application (complex water circulation system, heat exchanger, etc.) and is installed and used all over the world. The system uses the physical properties of the product to transform dissolved substances in the water. Simply put, every substance, every object, and correspondingly every element and every molecule has a unique molecular eigenvibration (comparable to a fingerprint). Correspondingly, scales formed by calcium and rust also have their special fingerprints. Based on these very unique fingerprints, the Rodoro program of Rodoro Quantum Ring Manufacturing Company successfully generates new effective vibrations. These vibrations will be partially enhanced, or personalized processing, and adjusted on the matrix material (such as the Luodaoluo quantum ring). These tuned vibrations in application lead to the precipitation of calcium and rust through what we call the Rodoro Quantum Loop (the base of the newly generated vibrations) with water as the medium (in the flow direction). In this way, in many different cases, under certain conditions, substances soluble in water and substances in contact with water can directly act on the problematic places. The Luodaoluo program here not only utilizes the excellent storage performance of water, (used to absorb, store, and transmit the newly generated vibrations), but also utilizes the environmental heat to stabilize the medium in use through the Luodaoluo quantum ring ( Such as water) emits adjusted, newly generated vibrations... When the newly generated effective vibrations in the Luo Daoluo program meet the basic vibrations of the substances to be treated (such as calcium and rust) in water (also in other liquids), the so-called interference of different vibrations can be produced.
The overall material of the quantum tube ring is composed of pure aluminum together with a proportion of silicon.
Silicon, as we all know, is commonly used in the production of computer chips. In our quantum ring production it serves as a storage medium for storing and transmitting information.
In this case, such a combination enables the storage and stable transmission of countless messages over many years. In addition to this, this metal alloy is very cost-effective.
The natural water cycle involves water flowing through different geology. There are situations such as when water containing carbon dioxide flows through calcium-containing geology, dissolving calcium. This happens because carbon dioxide (CO2) and water (H2O) together form carbonic acid (H2CO3), which ultimately leads to the dissolution of calcium. These calcium bicarbonate will precipitate or crystallize in the form of calcium carbonate on the pipe wall, bottom layer, and branch pipes, especially in the hot water area, only under certain conditions. Simply put, calcium is dissolved or precipitated depending on the presence or absence of large amounts of carbon dioxide. This process also depends on the so-called calcium-carbonic acid balance. Among other things, the process also depends on parameters such as temperature changes and pressure changes (for example, through a turbulent flow in water through a valve). This creates more scale deposits on the bottom layer, valves, hot water areas.
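The balance described here can be summarised by the familiar lime-carbonic acid equilibrium, CaCO3 + CO2 + H2O <=> Ca(HCO3)2: dissolved carbon dioxide pushes the reaction to the right and keeps calcium in solution, while a loss of carbon dioxide (for example when the water is heated or the pressure drops) pushes it back to the left and deposits calcium carbonate as scale.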
◆ The changes we get are diverse:
In the case of dealing with rust (Fe2O3), a considerable majority of users observe the formation of a favorable rust protective layer (Fe3O4 magnet) on low-alloy steel surfaces, as long as the water parameters of these effective substances are believed. Likewise, the Luodaoluo system in many cases reduces the formation of new rust, strips the rust that has already formed and allows it to be washed away by the water flow. This is easily seen from the obvious red water flow. This process regresses more or less rapidly depending on the flow and velocity of the water. Calcium, on the contrary, can remain dissolved in water for a long time, and crystallizes in a small amount on the pipe wall or existing scale, which is also stripped under certain conditions and gradually washed out of the system by water flow.
The Luodaoluo Quantum Ring Manufacturing Co., Ltd. starts with exactly this process. In terms of preventing calcium deposition, Luodaoluo Quantum Loop can create conditions to allow calcium hydride in water to form crystals instead of forming calcium carbonate to segregate and attach to faucets, pipe walls, surfaces or other equipment parts. In short: Calcium dissolved in water cannot crystallize and attach to the pipe wall because of its own structure. They have first formed corresponding crystals in water, thus avoiding new attached crystals. In order to effectively remove the calcium and precipitates that have formed, Luodaoluo Quantum Ring needs to play an active role in the so-called calcium-carbonic acid balance, strictly speaking, in the process of dissolving the existing precipitates. We use a principle based on so-called frequency modulation electricity to initiate this process with the help of our physical water treatment equipment.
◆ Application range
●Metallurgy: continuous casting water, high-pressure phosphorus removal water, net ring water, turbid ring water filtration, full cooling water filtration, side filter, nozzle protection
●Electric power: steam turbine cooling water filtration, gray water recovery filtration, dust suppression nozzle protection, cooling tower water full filtration and side filtration
●Raw water: filtration of drinking water, lake water, urban fountains, swimming pools, river water, reservoir water, well water, rainwater, sand, algae, and organic matter filtration during extraction of groundwater
●Agriculture: sprinkler irrigation, drip irrigation water treatment
●Others: construction, steel, petroleum, chemical industry, electronics, power generation, textile, paper, food, sugar, pharmaceuticals, plastics, automotive industry, heat treatment plants, cleaning and spraying of metal parts, seawater desalination
●Widely used in drinking water treatment, urban water treatment, building circulating water treatment, industrial circulating water treatment, hospital water treatment, electroplating water treatment, printing and dyeing water treatment, wool textile water treatment, papermaking water treatment, food and beverage processing water treatment, petrochemical industry Water treatment, mining industry water treatment, golf course water treatment and other fields.
◆How to choose the right filter solution
In order to achieve the best effect of dissolving calcium and at the same time meet the requirements of users to the greatest extent, please consult the filter dealer of Luolun (LUO DAO LUO). Luo Lun (LUO DAO LUO) Filtration Authorized Agents and Distributors have received professional technical training from Luo Lun (LUO DAO LUO) filtration experts. They will rely on their professional knowledge, rich market experience, and rigorous scientific attitude to make the best choice for you.
◆When selecting the type of quantum tube through ring, please consider the following factors
▲Treated water volume
▲System pipe diameter
▲Concentration of suspended solids in dissolved calcium impurities
▲Physical and chemical properties of dissolved calcium media
The Rodoro quantum ring (here is the adjusted vibration carrier) has various sizes. For easy installation and replacement, the product is designed as two separate half-rings, even in an unfavorable construction environment. It can also be quickly installed in minutes. The installation process does not require professionals, it is simple and easy to operate, and there is no need to open the pipe or install other parts. Therefore, Luodaoluo quantum tube through rings can be installed during pipeline use without stopping production. | physics |
https://www.ditaiplastic.com/vacuum-forming-for-outdoor-applications-uv-resistance-and-durability/ | 2023-12-05T09:24:41 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100550.40/warc/CC-MAIN-20231205073336-20231205103336-00106.warc.gz | 0.918904 | 1,363 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__260766208 | en | The manufacturing industry continues to embrace vacuum forming as an efficient and versatile technique for creating plastic parts. This is especially pertinent for outdoor applications where durability and resistance to environmental stressors like ultraviolet (UV) rays are of paramount importance. This blog will discuss how vacuum forming can meet the challenging specifications of outdoor applications, exploring factors such as material choices, process considerations, and practical applications.
The Importance of UV Resistance and Durability in Outdoor Applications
Exposure to ultraviolet (UV) rays can lead to the degradation of plastics, affecting their color, structural stability, and overall aesthetic appeal. This makes UV resistance a critical factor when considering materials for outdoor elements like signs, playground fixtures, or vehicle components.
Durability is another key concern for outdoor applications that are subjected to various environmental conditions such as precipitation, wind, and temperature fluctuations. Vacuum-formed parts should be capable of enduring these stressors without cracking, deforming, or wearing down.
Material Selection for UV Resistance and Durability
The correct material selection is crucial when vacuum forming items designed for outdoor use.
Polycarbonate is renowned for its impact resistance and can be manufactured in UV-stabilized versions suitable for outdoor use.
High-Density Polyethylene (HDPE)
HDPE offers strong durability and chemical resistance. UV-stabilized HDPE is available for outdoor applications requiring increased sunlight resistance.
This material naturally possesses UV-resistant characteristics and retains its color and transparency when exposed to sunlight for extended periods.
ABS is another material that can be engineered for enhanced UV resistance, and it boasts impressive impact resistance, making it ideal for outdoor use.
ASA is specifically designed for outdoor applications due to its exceptional UV resistance and high impact strength. It maintains its color and mechanical properties even after long-term exposure to weather and UV light.
Material Properties and Behaviors in Depth
ASA is an excellent choice for outdoor applications, not just for its UV resistance but also for its excellent weatherability. It’s able to withstand adverse conditions like high levels of humidity, temperature extremes, and chemical exposures. ASA parts can be color-matched to specific requirements and are less prone to yellowing over time.
When compared to ASA, polycarbonate offers superior impact resistance but may require additional coatings for UV protection. Polycarbonate is often used in applications that require a high degree of visibility as it also offers excellent optical properties.
HDPE offers excellent resistance to water absorption and is often used in damp or aquatic environments. However, it may lack the structural rigidity of polycarbonate and ASA and might be more suitable for applications that do not require high mechanical strength.
While inherently UV-resistant, acrylic is less impact-resistant compared to ASA and polycarbonate. However, it offers excellent optical clarity and is often used in applications that require transparent parts, such as outdoor displays or windows.
Although ABS is not inherently UV-resistant, UV-resistant grades are available. ABS offers a balance between impact resistance and rigidity but may be less suited for extremely harsh outdoor conditions unless treated with special coatings.
Process Adjustments for Enhancing Durability
Increasing the wall thickness boosts the rigidity and durability of the vacuum-formed part, enabling it to better withstand environmental stressors.
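A rough way to see why wall thickness matters so much: for a flat panel, bending stiffness scales approximately with the cube of the thickness. The short Python sketch below is a back-of-the-envelope illustration with assumed dimensions, not a substitute for proper structural analysis.

# Back-of-the-envelope: panel bending stiffness D ~ E * t**3 / (12 * (1 - v**2)),
# so the stiffness ratio after a thickness change is simply (t_new / t_old) ** 3.
def relative_stiffness(t_new_mm, t_old_mm):
    """Approximate ratio of panel bending stiffness after a wall-thickness change."""
    return (t_new_mm / t_old_mm) ** 3

# Example: thickening a formed wall from 3 mm to 4 mm (assumed values).
print(round(relative_stiffness(4.0, 3.0), 2))  # ~2.37x stiffer, at the cost of more material and longer cycle time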
Incorporating textural elements into the mold can improve scratch and wear resistance, further enhancing the part’s durability.
Additional post-processing steps, such as the application of UV-resistant coatings or sealants, can augment both UV resistance and durability.
Advanced Processing Techniques
Thermoforming vs. Pressure Forming
Pressure forming is an advanced technique that can be more suitable for parts requiring intricate details or severe draw ratios. This process uses additional pressure to push the plastic into the mold, allowing for better feature definition, which can be critical for functional aspects of outdoor applications.
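A common way to quantify how severe a part is to form is the areal draw ratio: the surface area of the finished part divided by the footprint of sheet that covers it. The sketch below uses a simple open-top box as an assumed example geometry; the numbers are illustrative, and real wall thinning is never perfectly uniform.

# Areal draw ratio for an assumed open-top rectangular box (illustrative geometry only).
# Higher ratios mean more sheet stretch, thinner walls, and a stronger case for pressure forming.
def box_draw_ratio(length_mm, width_mm, depth_mm):
    footprint = length_mm * width_mm
    formed_area = footprint + 2 * depth_mm * (length_mm + width_mm)  # bottom + four walls
    return formed_area / footprint

ratio = box_draw_ratio(400, 300, 150)
print(round(ratio, 2))  # ~2.75 for this geometry
# On average the starting sheet thins by roughly the same factor:
# a 4 mm sheet would end up averaging about 4 / 2.75 ≈ 1.5 mm after forming.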
Twin-sheet forming involves forming two sheets of plastic simultaneously, then joining them together. This allows for hollow parts with high structural integrity, making them ideal for outdoor applications requiring lightweight yet robust components.
Case Study 1: Outdoor Signage
Material: UV-Stabilized ASA
Process Adjustments: Increased wall thickness and UV-resistant coatings
Result: Outdoor signage that has remained vibrant and intact for several years.
Case Study 2: Playground Equipment
Material: UV-Stabilized HDPE
Process Adjustments: Mold texturing for improved wear resistance
Result: Resilient playground structures that have stood the test of time.
Case Study 3: Automotive Exteriors
Material: UV-Resistant ABS
Process Adjustments: Optimized wall thickness for impact resistance
Result: Automotive exterior parts demonstrating minimal wear after prolonged outdoor exposure.
Case Study 4: Marine Equipment
Material: UV-Stabilized ASA
Process Adjustments: Twin-sheet forming for lightweight yet robust parts
Result: High-quality marine components that are both durable and UV-resistant, providing long-lasting performance in harsh oceanic conditions.
Case Study 5: Outdoor Furniture
Material: UV-Stabilized HDPE
Process Adjustments: Texturing for enhanced durability
Result: Outdoor furniture sets that have not only withstood years of direct sunlight but also resisted the wear and tear from usage and varying weather conditions.
Outdoor products often have to meet specific regulatory standards, especially if they are used in public spaces or critical applications like transportation. This may include fire resistance, structural integrity tests, and environmental impact assessments. Vacuum forming materials like ASA and polycarbonate often come in fire-resistant grades that meet these stringent standards.
Emerging materials and technological advancements promise even better UV resistance and durability for vacuum-formed parts. Sustainable materials and coatings are also becoming increasingly prevalent, offering eco-friendly alternatives for outdoor applications.
Vacuum forming is well suited to crafting durable, UV-resistant parts for outdoor applications. It offers a wide range of material options, each with its own advantages and disadvantages, and through thoughtful material selection (including specialized materials like ASA), targeted process adjustments, and post-processing treatments, manufacturers can meet or exceed typical outdoor durability and UV-resistance requirements. As materials and vacuum forming technology continue to advance, vacuum-formed parts can be expected to suit an even wider range of challenging outdoor applications.
https://gmraviationacademy.org/easadgca-147-licensing-course | 2022-07-05T00:24:39 | s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104506762.79/warc/CC-MAIN-20220704232527-20220705022527-00347.warc.gz | 0.80814 | 171 | CC-MAIN-2022-27 | webtext-fineweb__CC-MAIN-2022-27__0__45624081 | en | GMR Aviation Academy is soon launching a 4-year licensed programme in 147 Aircraft Maintenance Engineering.
Admissions will commence in August 2022.
Special Features:
On-the-Job Training (OJT) at GMR MRO, Hyderabad, India.
Aims to develop capable, world-class Aircraft Maintenance Engineers
DGCA and EASA Syllabus
Classroom and Practical Training.
10+2 with Maths, Physics and Chemistry, with 60% aggregate marks
Should have a keen interest in aviation and related fields
Should not have Colour/Night Blindness
Should be medically and physically fit
Rs. 30 Lakhs (Inclusive of Taxes)
4 Years (fully integrated: 2 years theory and 2 years practical)
https://hvacvn.com/en/my-xay-dung-thap-dien-mat-troi-cao-nhat-the-gioi/ | 2021-09-19T02:51:54 | s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056656.6/warc/CC-MAIN-20210919005057-20210919035057-00447.warc.gz | 0.928166 | 269 | CC-MAIN-2021-39 | webtext-fineweb__CC-MAIN-2021-39__0__92062787 | en | According to the Vietnam News Agency reporter in the US, the country has started construction of twin towers 800 meters high in the Arizona desert to generate electricity from solar energy.
This will be the world's tallest twin-tower facility for producing solar thermal power, and it does not use the current panel-based solar generation technology.
The collector at the base of each tower is the size of a football field. Sunlight heats the air beneath it; the warm air is drawn into the tower, where it rises, displacing the cooler air above, and spins turbines that generate electricity.
Electricity from the solar towers will provide a clean, safe energy source sufficient to supply 150,000 households and will create jobs for 1,500 people.
The tower generates power from the temperature difference between the heated air at ground level and the cooler air higher up. Heat collected during the day is sufficient to keep producing electricity at night, so the tower can operate in all weather conditions with low maintenance costs, while avoiding about 1 million tons of greenhouse gas emissions and saving more than 4.5 billion liters of water per year compared with current unsustainable generation.
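For readers curious about the underlying physics, solar updraft ("chimney") towers are usually analyzed with a simple ideal-efficiency relation in which the chimney's driving efficiency is roughly g times the tower height divided by (cp times the ambient temperature). The Python sketch below is a back-of-the-envelope estimate: the 800 m height comes from the article, while the ambient temperature, solar flux, collector area, and collector/turbine efficiencies are assumed values.

# Back-of-the-envelope solar updraft tower estimate.
# Ideal chimney driving efficiency: eta_tower ≈ g * H / (cp * T_ambient).
g = 9.81            # gravitational acceleration, m/s^2
H = 800.0           # tower height, m (from the article)
cp = 1005.0         # specific heat of air, J/(kg*K)
T_ambient = 300.0   # assumed desert ambient temperature, K

eta_tower = g * H / (cp * T_ambient)      # ≈ 0.026, i.e. only about 2.6 %
eta_collector = 0.5                       # assumed collector efficiency
eta_turbine = 0.8                         # assumed turbine efficiency
solar_flux = 1000.0                       # W/m^2, rough midday value (assumed)
collector_area = 1.0e7                    # m^2, assumed collector size

power_mw = solar_flux * collector_area * eta_collector * eta_tower * eta_turbine / 1e6
print(f"tower efficiency ≈ {eta_tower:.3f}, electrical output ≈ {power_mw:.0f} MW")  # ≈ 104 MW here

The very low chimney efficiency is why such plants need both extreme height and very large collectors, and why heat stored under the collector during the day allows generation to continue after sunset.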
The construction of the twin towers for solar power is a collaboration between Australia's EnviroMission company and three US companies. The project plan calls for at least 20 similar towers to be built in the US over the next 20 years.