| Column | Type | Range / Values |
| --- | --- | --- |
| url | stringlengths | 15 to 1.48k |
| date | timestamp[s] | |
| file_path | stringlengths | 125 to 155 |
| language_score | float64 | 0.65 to 1 |
| token_count | int64 | 75 to 32.8k |
| dump | stringclasses | 96 values |
| global_id | stringlengths | 41 to 46 |
| lang | stringclasses | 1 value |
| text | stringlengths | 295 to 153k |
| domain | stringclasses | 67 values |
https://www.mlsnextpro.com/huntsvillecityfc/news/wffajds-to-house-j-2x-nasa-rocket-engine
2023-12-07T17:35:44
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100677.45/warc/CC-MAIN-20231207153748-20231207183748-00696.warc.gz
0.908546
512
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__10554245
en
Huntsville, Ala. (Sept. 20, 2023) – The Wicks Family Field at Joe Davis Stadium is making history by housing a J-2X NASA Rocket Engine, making it the only NASA rocket engine test article of its kind currently on long-term display at a professional sports venue. The engine will be unveiled ahead of Huntsville City Football Club’s final home match of the regular season on Sunday, Sept. 24 at 12 p.m. CT. City of Huntsville Mayor Tommy Battle, officials from NASA’s Marshall Space Flight Center, and officials from Huntsville City FC will all help reveal the engine before the team hosts Chicago Fire FC II an hour later on MLS NEXT Pro’s Decision Day. Tickets for the match and this special event can be purchased here. "Being known as the Rocket City, nothing encapsulates Huntsville like Space exploration, and we are thrilled to partner with NASA's Marshall Space Flight Center to bring a uniquely Huntsville piece of history to Wicks Family Field at Joe Davis Stadium,” says Chad Emerson, Managing Director of Business Operations for Huntsville City FC. “No other stadium in the United States can say they own something quite like the J-2X NASA Rocket Engine. The engine will be a focal point for anyone who comes to a Huntsville City FC match, high school football game, or any other kind of event at the Wicks Family Field at Joe Davis Stadium.” The engine takes its name from its predecessor, the J-2 engine of the Apollo era, used on the Saturn V rockets that carried the first humans to the moon. The engine design leveraged 50 years of experience in human spaceflight with state-of-the-art technology in design processes, materials, and manufacturing to enable further human exploration of space. The development and testing of the J-2X helped usher in major manufacturing improvements, including 3-D printing of complex rocket engine components and the development of new materials. The J-2X engine is a liquid-oxygen/liquid-hydrogen fueled rocket engine that produces nearly 300,000 pounds of thrust in a vacuum. It is a highly efficient and versatile advanced rocket engine built using the knowledge and successes of nearly a half-century of NASA spaceflight experience. The J-2X was designed and built by Pratt & Whitney Rocketdyne of Canoga Park, Calif., for NASA’s Marshall Space Flight Center at Redstone Arsenal in North Alabama.
physics
https://www.oyc.space/research-interests
2023-12-04T19:16:15
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100534.18/warc/CC-MAIN-20231204182901-20231204212901-00164.warc.gz
0.950545
2,392
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__119922067
en
"I have yet to see any problem, however complicated, which, when you looked at it in the right way, did not become still more complicated." -- Poul Anderson Theories of Gravitation: Does General Relativity Need Modification? Einstein's general relativity celebrated its centenary in 2015. Described by S. Chandrasekhar as "probably" the most beautiful theory, general relativity has passed all observational and experimental tests thus far with flying colors. To quote from the book The Perfect Theory by Pedro Ferreira published in 2014, "One thing about general relativity that has always puzzled me is how, despite being around for almost a century, it continues to yield new results." Indeed, general relativity has a firm mathematical foundation based on Lorentzian geometry. The same cannot be said for many modified theories of gravity, which aim to explain the accelerated expansion of the Universe without including a cosmological constant (some also try to explain "dark matter" this way -- which is far more challenging). It is not difficult to propose a new theory of gravity, what is difficult is to propose a healthy theory. More often than not, attempts to modify general relativity resulted in pathological features such as energy not bounded from below (thus resulting in severe instability), wrong sign of kinetic energy (so-called "ghost"), ill-posed Cauchy problem (namely, given an initial condition, we cannot uniquely solve for the future evolution of the system -- which means that we cannot do physics) or even the unwelcomed existence of arbitrarily small closed timelike curves (thus violating causality). I am interested in further understanding general relativity and appreciating its subtleties; I am also interested in the mathematical structures of modified gravity, and had spent some efforts in uncovering the problems that seem to plagued some teleparallel theories like f(T) gravity, as well as massive gravity. I am especially interested in theories with torsion. General relativity is, by construct, torsion-free, but a given connection has in general, in addition to curvature, also torsion and non-metricity. Geometries with these quantities are rich and interesting, and might offer some insights into gravitational physics. If indeed gravity is only the effect of spacetime curvature, then it would also be interesting to understand why Nature chooses not to make use of torsion and non-metricity. How rigid is general relativity? Ultimately, what is gravity? This brings us to the next topic... Black Hole Thermodynamics, Singularities, Cosmic Censorship, and Gravitational Waves A black hole is a region of spacetime with curvature behaving in such a way that nothing, not even light, can escape from within. The boundary of no return is called an "event horizon". (Note that gravity as measured by tidal deformation is not necessarily strong at the event horizon!). There are a lot we don't understand about black holes, both at the astrophysical level and the theoretical level. The former includes questions like: when did supermassive black holes first form in the Universe, and did they play any role in the reionization of the Universe? No doubt with the recent discoveries of gravitational waves by advanced LIGO, a new era of astrophysics has begun. I am, however, more interested in the theoretical aspects. Black holes are thermodynamical objects -- they have temperature and entropy. 
The nature of black hole entropy, nevertheless, has remained mysterious -- what are the underlying degrees of freedom of a black hole? Is it some kind of "spacetime atom" or other microstructure? Why does adding electrical charge or increasing angular momentum make the entropy go down? In addition, can quantum gravity "cure" the singularity inside black holes? If so, was the same mechanism at work during the Big Bang? If so, how do we explain the arrow of time -- the fact that the very early Universe had a vastly lower entropy? How can we quantify gravitational entropy? All these questions are inherently related to the nature of curvature singularities and quantum gravity. In fact, we see evidence that cosmic censorship (Penrose's proposal that no naked singularity can form from a generic initial condition) remains relevant at the semi-classical level (including the low-energy limit of string theory), so it is not quite obvious that singularities can be resolved in the full quantum gravity theories. In fact -- if singularities are really resolved, then why is there such a bound at the classical level, one that we observe to hold in astrophysical black holes? As I mentioned in an interview with Scientific American in August 2021, the importance of cosmic censorship (at least recently) is not so much in proving or disproving it; rather, it is what we can learn along the way, what insights we can gain, and what tools we can develop. The journey will be important, not just the destination. In the coming years and decades, we will have more data on gravitational waves, which would hopefully provide more hints on new physics beyond general relativity, or further constrain quantum gravitational models that give rise to corrections at horizon scales. Black Holes: Information Paradox, Quantum Information, and Holography A notoriously difficult problem in theoretical physics is the so-called "information paradox": what happens to the information about the stuff that falls into a black hole? Since Hawking radiation makes a black hole smaller and smaller, until it (possibly) eventually disappears, the fear is that information is lost, which seems to contradict a central tenet of quantum information ("unitarity"). There are many proposed resolutions to this paradox in the literature, but none seems convincing. The problem was made worse when it was claimed in 2012 that if information leaked out from a black hole by being entangled in the Hawking radiation, then the event horizon of a black hole at late times becomes a very high energy curtain, a "firewall", completely contradicting our prior knowledge about black holes. [Image: A cartoon illustration of gravitational waves as ripples of spacetime, produced by two black holes that are about to collide and merge. Take a moment to appreciate that this is produced in a vacuum spacetime -- all there is in spacetime devoid of matter, yet energy propagates. (Credit: Swinburne Astronomy Productions)] [Image: An old black hole might be surrounded by a blazing firewall. (Credit: Equinox Graphics/SPL)] I am interested in the properties of Hawking radiation for different black holes, and in the information paradox. I have investigated a particular proposal by Harlow and Hayden, concerning the enormously long time required to decode Hawking radiation, and how this might evade firewalls. In addition, I have also written a comprehensive review on the remnant scenario -- the possibility that black holes eventually stop evaporating. 
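For reference, the temperature and entropy mentioned above can be written down explicitly. These are the standard semi-classical textbook formulas for a Schwarzschild black hole, quoted here only to fix notation; they are not results specific to this page.

```latex
% Standard semi-classical results for a Schwarzschild black hole of mass M
% (textbook formulas, quoted for context only):
\begin{align}
  T_H &= \frac{\hbar c^{3}}{8\pi G M k_B}
  &&\text{(Hawking temperature)},\\
  S_{\mathrm{BH}} &= \frac{k_B c^{3} A}{4 G \hbar},
  \qquad A = 16\pi\left(\frac{G M}{c^{2}}\right)^{2}
  &&\text{(Bekenstein--Hawking entropy)}.
\end{align}
% Since T_H grows as M shrinks, evaporation accelerates toward the end,
% which is part of why the endpoint (complete disappearance vs. a remnant)
% is a nontrivial question.
```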
As part of the effort to understand black hole evaporation, I have also been involved in research on "moving mirrors" in (1+1)-dimensional flat spacetime (just quantum field theory), which serves as a toy model for an evaporating black hole. Another important theoretical aspect of black hole physics is in the context of holography, which is also known as the AdS/CFT correspondence (though the term "holography" is arguably more general and thus more appropriate -- most applications are not strictly about CFT anyway). According to holography, the physics of a gravitating system in anti-de Sitter (AdS) spacetime is completely equivalent to another physical system -- a quantum field theory without gravity -- that lives on the boundary of the AdS spacetime. This opens a door to understand gravity using ordinary quantum field theory (even systems one can study in the lab, such as superconductors, cold atoms, and the quark-gluon plasma) and vice versa. [Image: Holography. (Credit: Tom Brown)] I am interested in a few aspects of holography. Notably, being a nontrivial correspondence between two completely different physical systems, there must be some underlying consistency conditions to holography. Uncovering these is important, and might help us understand how general holography is -- does it work for any mathematically/physically consistent theory of gravity in asymptotically AdS spacetimes, or does it require one to be able to embed the theory into string theory? I presented as co-chair of the parallel session on the black hole information loss paradox during the 2nd LeCosPA International Symposium “Everything about Gravity”, held at National Taiwan University, Taipei, in December 2015. Cosmology: Physics at the Largest Scale I have been fascinated by the cosmos ever since I was a child. I still vividly remember my excitement when my father gave me a pair of binoculars as a birthday present when I was nine years old. I especially enjoyed looking at the Pleiades cluster from the window of my bedroom then. The Universe we live in is a remarkable place. For one thing, it is not only expanding but accelerating, and no one is quite sure why. Perhaps it is just a cosmological constant. Maybe it is the result of our over-simplification of cosmological models, and one really has to take inhomogeneities into account. Maybe it is something else entirely, a mysterious form of "dark energy". Maybe, after all, general relativity has to be modified, and the new theory will be able to explain the accelerated expansion in a "natural" manner. It is frustrating, but at the same time exciting, that about 95% of the matter-energy content (the so-called "dark sector") of the Universe remains unknown: all those remarkable things like planets and stars are nothing but 5% (of which a lot we still do not understand)! What rich physics awaits us in the dark sector? To quote Carl Sagan, “Somewhere, something incredible is waiting to be known.” Also, as Einstein once remarked, "The most incomprehensible thing about the world is that it is comprehensible." Indeed, it is very impressive that the human species managed to figure out the big picture of the history of the Universe, in only a couple of centuries since the beginning of modern science. Of course, there is a lot more to understand, but what we do know is already quite impressive! We may be a young species in the grand scheme of things, but make no mistake, we are ambitious, we are curious, and we will be out there among the stars figuring the Universe out. 
One day! [Image credit: NASA, ESA, and A. Field (STScI)] From high-precision observations, we know that the Universe is incredibly flat, and that its temperature distribution (of the Cosmic Microwave Background) is extremely uniform. These imply that the Universe started off in a very special initial condition. Not so surprising -- of course it makes sense for the initial condition to be relatively special, since entropy increases with time (the 2nd law of thermodynamics). Understanding why entropy is low in the beginning is a difficult but -- at least to me -- important problem. Many would agree that cosmic inflation -- an epoch during which the Universe increased its size exponentially -- played a crucial role in this (though maybe only a partial role), but the underlying mechanism for inflation remains unknown. In fact, if we trace the history of the Universe far back enough, we would eventually reach a time when quantum effects can no longer be ignored, and we have to take the Big Bang singularity seriously. In the realm of theoretical cosmology, where my current interest lies, much work remains to be done.
physics
https://www.mixdirect.co.uk/laserworld-el-400rgb-show-laser.html
2022-07-06T22:51:01
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104678225.97/warc/CC-MAIN-20220706212428-20220707002428-00049.warc.gz
0.898742
427
CC-MAIN-2022-27
webtext-fineweb__CC-MAIN-2022-27__0__204236678
en
The Laserworld EL-400RGB MK2 is a powerful laser system with a total output of 400mW, a super-fast step motor with 3-8kpps, three powerful laser diodes, and a variety of settings and controls. 0% Finance: Spend £79.00 more to qualify. The Laserworld EL-400RGB MK2 is a powerful laser system with a total output of 400mW, a super-fast step motor with 3-8kpps, three powerful laser diodes, and a variety of settings and controls. The EL-400RGB laser is suitable for both novice and experienced laser users, and it will complement any lighting engineer's toolkit. Mobile artists, small to medium-sized clubs and pubs, music venues, festivals, home parties, theatres, and live events may all benefit from it. Three super-powerful laser diodes operate in tandem to generate vivid, breathtaking colours and ultra-bright light beams in this show laser. The wavelength of the 110mW red laser diode is 650nm; the wavelength of the 50mW green laser diode is 532nm, and the wavelength of the 160mW royal-blue laser diode is 445nm. The laser beams have a diameter of about 3mm and a divergence of around 2mrad, and you get superb projections thanks to a super-fast step motor system. The EL-400RGB has three modes of operation: DMX control, which allows you to create your own custom light shows, sound-activated mode, which uses an integrated microphone to make the light dance to your music, and an automatic mode, which cycles through 50 pre-programmed patterns like waves, tunnels, fences, and even layers. A switch on the rear of the unit may be used to alter the sensitivity of the sound-activated mode. On the rear of the laser is a crucial safety switch as well as a 3-pin DMX in/output.
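As a rough sketch of what the quoted optics specs mean in practice, the snippet below estimates how wide the beam gets with distance. It assumes the 2 mrad figure is a full-angle divergence and that the beam grows linearly with distance; both are simplifying assumptions for illustration, not manufacturer data.

```python
# Rough beam-size estimate from the quoted specs.
# Assumptions: 2 mrad is the full-angle divergence and growth is linear with distance.
def beam_diameter_mm(distance_m, d0_mm=3.0, divergence_mrad=2.0):
    """Approximate beam diameter in mm at a given throw distance in metres."""
    return d0_mm + divergence_mrad * distance_m  # 1 mrad spreads about 1 mm per metre

for d in (5, 10, 20, 50):
    print(f"{d:>3} m: ~{beam_diameter_mm(d):.0f} mm beam diameter")
# At 20 m the beam is roughly 43 mm across, which is why a low divergence
# matters for long-throw beam effects.
```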
physics
https://penship.azurewebsites.net/impeller-propeller/
2021-04-12T14:49:59
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038067870.12/warc/CC-MAIN-20210412144351-20210412174351-00165.warc.gz
0.885124
821
CC-MAIN-2021-17
webtext-fineweb__CC-MAIN-2021-17__0__218444564
en
Impeller vs. Propeller Both an impeller and a propeller are vital for a boat’s operation. However, their purpose and function are very different. A propeller is a fan which propels a fluid by pushing against it: it converts rotational motion into linear motion. An impeller is a rotor that produces a sucking force and is part of a pump. A propeller is always open, and an impeller is always closed, as it draws fluid. Let’s take a closer look at an impeller and a propeller to identify similarities and differences. What Are They? A boat impeller is a series of rubber vanes molded around a hub. The hub rotates the flexible vanes on an eccentric within a stainless-steel liner in the pump housing. An impeller pumps cold water into the boat’s engine to keep it cool while the engine is running. Marine engines contain an impeller in the water pump. The pump draws water from outside the boat through the plumbing system. The impeller has an inlet that allows water inside and vanes that push the fluid forward. As a rotating component of a centrifugal pump, the impeller accelerates fluid outward from the center of rotation. This action transfers energy from the pump motor to the water being pumped. The sudden increase in water velocity creates pressure within the pump casing and causes outward movement. A boat propeller is a fan that generates boat power by converting rotational motion into thrust. Propellers typically consist of three to four rotating blades that spin around a shaft to create the needed dynamic for forward movement. When the blades rotate, they accelerate water pressure behind each blade, causing the vessel to move in the water. The blades extend outward from the prop hub at an angle. This allows water to pass through the prop from front to rear instead of pushing the water from side to side. The more blades attached to the prop, the more power the prop can create. More blades also reduce drag that occurs simply because the prop is in the water. Manufacturers design props to run closer to the water surface, giving them additional bite and stability at higher speeds. How Are Impellers and Propellers Similar? There are several ways that impellers and propellers are similar. - Both use water to create energy (pressure or propulsion). - Both use rotation to move water. - Both rely on a motor to operate. - Both require ongoing care to maintain peak performance. - Both require lubrication to keep all parts from wearing out quickly. And, of course, both impellers and propellers are necessary for a boat to operate. Without an impeller, the boat engine cannot stay cool. Without propellers, the boat will not move. How Are Impellers and Propellers Different? The primary difference between an impeller and a propeller is their function or purpose. The impeller is the main component in the boat’s cooling system. It feeds water into the engine to keep it at optimal operating temperature. The propeller is the main component in the boat’s movement. When the blades turn, the boat accelerates to the desired speed. Despite this primary difference, there are other ways that impellers and propellers are different.
| | Impeller | Propeller |
| --- | --- | --- |
| Motion | Forward rotation | Clockwise rotation |
| Primary function | Create water pressure | Create water propulsion |
| Containment | Pump housing | No containment |
Professional Impeller/Propeller Maintenance in Pensacola Pensacola Shipyard offers propeller and impeller maintenance services for all types of larger vessels. 
Our divers clean the bottom of your vessel and inspect the props and any other underwater appendages. Contact us today to schedule. Whether you need ongoing boat maintenance, resources or facilities, we have everything you need for a complete boating experience. To find out more about our marina and boating services, call 850.780.8441.
physics
http://hugeuniverse.ethereal.org/
2017-04-29T15:21:17
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123530.18/warc/CC-MAIN-20170423031203-00213-ip-10-145-167-34.ec2.internal.warc.gz
0.937977
853
CC-MAIN-2017-17
webtext-fineweb__CC-MAIN-2017-17__0__73999607
en
August 11th, 2008 Based on the design of a lilypad, they could be used as a permanent refuge for those whose homes have been covered in water. Major cities including London, New York and Tokyo are seen as being at huge risk from oceans which could rise by as much as 3ft by the end of this century. This solution, by the award-winning Belgian architect Vincent Callebaut, is designed to be a new place to live for those whose homelands have been wiped out. The ‘Lilypad City’ would float around the world as an independent and fully self-sustainable home. With a lake at its centre to collect and purify rainwater, it would be accessed by three separate marinas and feature artificial mountains to offer the inhabitants a change of scenery from the seascape. Power for the central accommodation hub is provided through a series of renewable energy sources including solar panels on the mountain sides, wind turbines and a power station to harness the energy of the waves. –>Link to Vincent Callebaut Architecte LILYPAD –>Link to article @ dailymail.co.uk August 3rd, 2008 The solar eclipse of August 1, 2008 was a total eclipse of the Sun with a magnitude of 1.039 that was visible from a narrow corridor through northern Canada (Nunavut), Greenland, central Russia, eastern Kazakhstan, western Mongolia and China. It belonged to the so-called midnight sun eclipses, as it was visible from regions experiencing midnight sun. In Siberia, the total eclipse zone passed through populated places, including the “capital of Siberia” Novosibirsk, and the cities of Nizhnevartovsk, Barnaul and Biysk. The greatest eclipse duration was reached near the town of Nadym in Yamalo-Nenets Autonomous Okrug in Northern Siberia. A partial eclipse could be seen from the much broader path of the Moon’s penumbra, including eastern North America and most of Europe and Asia. –>Link to NASA’s website for this event –>Link to SpaceWeather.com photo gallery July 31st, 2008 As mentioned on the About page, this track, from the album “Shpongle Remixed“, provides the inspiration for the name of this blog. While integrating themes from many musical genres, Shpongle’s music could be classified as electronic/psychedelic. Ott has done a phenomenal job remixing this track. After the oompa-loompa interlude, the track breaks through to about 4 minutes of some of the most epic, yet ethereal music you’ve heard. –>Link to twisted.co.uk July 31st, 2008 The Lick Observatory is an astronomical observatory, owned and operated by the University of California. It is situated on the summit of Mount Hamilton, just east of San Jose, California, USA. The observatory is managed from the University of California, Santa Cruz, where its scientific staff moved in the mid-1960s. Significant discoveries made at Lick Observatory include several moons of Jupiter and several extrasolar planets. –>Link to Lick Observatory –>Link to Lick Observatory “HamCam” July 29th, 2008 A nutritious beverage, rich in detoxifying chlorophyll and antioxidants, and suited for meditative practice. –>Link to Health Benefits of Matcha –>Link to Wikipedia: Matcha –>Link to Wikipedia: Japanese Tea Ceremony July 28th, 2008 Lorenzo of the Psychedelic Salon produces a fantastic and invaluable podcast featuring interviews and discussions with the great thinkers and personalities of the psychedelic movement. Podcast 149 is especially relevant for this blog. This episode features a fascinating discussion (or trialogue) between Rupert Sheldrake, Ralph Abraham, and Terence McKenna about “The Heavens”. 
–>Link to Psychedelic Salon: Podcast 149 - “The Heavens”
physics
https://www.explosionproof.net/tag/hvac/
2024-04-25T10:59:59
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712297292879.97/warc/CC-MAIN-20240425094819-20240425124819-00168.warc.gz
0.92367
669
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__83463154
en
When you need to enclose electrical or other components in a cabinet to protect them from dust and debris, you run the risk of them overheating. When your components overheat, they can lose efficacy, part of their lifetimes, or even fail, costing you money in repair costs, lost revenue, or even emergency evacuations. To solve the problem of overheating electrical components, you need a cabinet cooler, which can alleviate cabinet heat without requiring a full air conditioning solution. How Cabinet Cooling Fans Work All cooling fans work on the principle of forced convection, which enhances natural convection – the process of heat transferring from one area to another through air movement – with fans or blowers. As cooler outside air is pushed through the system, the heat generated by the electrical equipment within the cabinet is pushed out through the exhaust vent. A cabinet fan is an economical answer for any industrial enclosure you may have, as long as the environment is not excessively harsh, like an indoor cabinet without exposure to heavy dust or sprayed liquid. More intense environments or cooling needs may necessitate a closed-loop cooling system, like an air to air heat exchanger or enclosure air conditioner, to properly cool the cabinet. Is Extra Cooling Required? Not every cabinet on your property will need additional cooling. Natural convection and heat transfer through the cabinet walls might be enough to maintain the equipment at or below its specified maximum operating temperature. To figure out if you need a cabinet cooler, you’ll need to calculate the total waste heat the equipment in your cabinet produces while operating, or its heat load. Your equipment’s specifications should list its heat load, so check that. An example of a calculation is this: say you have a VFD with an operating efficiency of 92% at design load. That means that you’ll lose 8% of the total energy used by the drive as heat, since 100% – 92% = 8%. For, say, a 100 horsepower VFD, that works out to 8 hp of waste heat, or 5,965.6 watts. If your cabinet loses less heat than that via convection and conduction, you’ll need a cabinet cooling fan. Explosion-Proof Cabinet Cooling Fans for Sale in Baton Rouge At Safe Air Technology, we’ve performed thorough research and in-house engineering to bring our customers the finest line of explosion proof cabinet coolers available. These unique cooling systems use air to air heat exchangers to maintain the desired temperature in any closed cabinet environment. Whenever you need to enclose electrical or other components to protect them from dust and debris, you run the risk of overheating. Our explosion proof cabinet coolers alleviate that problem with cost effective thermal management that doesn’t require air conditioning. The Safe Air Technology CB-XPC series Cabinet Coolers are fully tested in-house to ensure outstanding operational performance and compliance every time. Designed to meet Division 2 codes, these units are the ideal way to provide cooling for industrial severe duty applications. Our quality of engineering and manufacturing and personalized attention to every aspect of the manufacturing process, from design and engineering to delivery, will ensure many years of reliable and safe service. Give us a call or visit us online for a quote today!
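The VFD example above can be turned into a quick check. The sketch below simply replays the article's arithmetic, assuming the usual conversion of 1 hp = 745.7 W.

```python
# Worked version of the article's example: waste heat from a drive's rating and efficiency.
# Assumption: 1 hp = 745.7 W (mechanical horsepower).
HP_TO_WATTS = 745.7

def waste_heat_watts(rated_hp, efficiency):
    """Heat in watts rejected into the cabinet by a drive of the given rating and efficiency."""
    return rated_hp * (1.0 - efficiency) * HP_TO_WATTS

q = waste_heat_watts(rated_hp=100, efficiency=0.92)
print(f"Waste heat: {q:.1f} W")  # 5965.6 W, matching the 8 hp figure in the text
# If the cabinet sheds less heat than this by convection and conduction,
# a cabinet cooling fan (or a closed-loop cooler) is needed.
```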
physics
http://bsixu.your-ideal-body.ru/radiographic-dating-definition-99.html
2019-03-25T12:39:22
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203947.59/warc/CC-MAIN-20190325112917-20190325134917-00329.warc.gz
0.936787
340
CC-MAIN-2019-13
webtext-fineweb__CC-MAIN-2019-13__0__135906936
en
Radiometric dating definition Radioactive decay occurs at a constant rate, specific to each radioactive isotope. Since the 1950s, geologists have used radioactive elements as natural "clocks" for determining numerical ages of certain types of rocks. The radiometric clock starts when a rock forms; "forms" means the moment an igneous rock solidifies from magma, a sedimentary rock layer is deposited, or a rock heated by metamorphism cools off. Not only does a radioactive isotope decay by giving off energy and matter, but it also decays at a rate that is characteristic of that isotope. Let's look closely at how the half-life affects an isotope. Barium-139 decays into Lanthanum-139 with a half-life of about 86 minutes; suppose we start with 10 grams of Barium-139. Therefore, after one half-life, you would have 5 grams of Barium-139 and 5 grams of Lanthanum-139. After another 86 minutes, half of the 5 grams of Barium-139 would decay into Lanthanum-139; you would now have 2.5 grams of Barium-139 and 7.5 grams of Lanthanum-139. After one half-life of a given radioisotope, only one-half of the original number of atoms remains active. Another way to look at this is that if the radiation intensity is cut in half, the source will have only half as many curies as it originally had.
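The Barium-139 example follows the general half-life rule N(t) = N0 · (1/2)^(t/T). A small sketch of that arithmetic, using the 86-minute half-life and 10-gram starting amount from the text:

```python
# Exponential decay by half-lives: N(t) = N0 * (1/2) ** (t / half_life).
def remaining_grams(n0_g, half_life_min, elapsed_min):
    """Mass of the parent isotope left after elapsed_min minutes."""
    return n0_g * 0.5 ** (elapsed_min / half_life_min)

n0, t_half = 10.0, 86.0  # 10 g of Barium-139, half-life of roughly 86 minutes
for t in (0, 86, 172):
    left = remaining_grams(n0, t_half, t)
    print(f"t = {t:>3} min: {left:.1f} g Ba-139, {n0 - left:.1f} g La-139")
# 86 min: 5.0 g / 5.0 g; 172 min: 2.5 g / 7.5 g, matching the worked example above.
```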
physics
http://www.niras.co.uk/radiation-protection-advice
2017-02-26T19:50:29
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172050.87/warc/CC-MAIN-20170219104612-00540-ip-10-171-10-108.ec2.internal.warc.gz
0.878662
182
CC-MAIN-2017-09
webtext-fineweb__CC-MAIN-2017-09__0__126332187
en
Radiation Protection Advice AMEC is an HSE-approved RPA corporate body, providing Health Physics and Radiation Protection advice to a range of clients worldwide. Our team of Radiation Protection Advisers and Health Physicists provide a range of services including: - Provision of Radiation Protection Training - RPA advice and support to a variety of businesses - Assessment of radiological hazards and advice on control of workforce exposure - Incident support and advice on remedial programmes - Health Physics monitoring - Radiation Protection Supervisor support for radiological surveys - Advice to the analytical laboratories - Advice on the transport of radioactive materials, supported by the AMEC Dangerous Goods Safety Advisers (DGSA) - Radiological decontamination and recovery from accidents and other incidents - A 24-hour call-out service for our clients, allowing access to specialist RPAs when needed.
physics
http://www.daltonbearing.com/torrington/torrington-thrust-roller-bearings.aspx
2015-06-30T23:58:28
s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375094629.80/warc/CC-MAIN-20150627031814-00201-ip-10-179-60-89.ec2.internal.warc.gz
0.888207
202
CC-MAIN-2015-27
webtext-fineweb__CC-MAIN-2015-27__0__42825434
en
6mm - 140 mm (Typical size range 25mm - 75mm) Needle roller and cage thrust assemblies are complements of small diameter needle rollers arranged in a spoke-like configuration. Needle rollers are equally spaced by means of a cage whose web section separates the rollers and provides guidance to keep them tracking in an orbital path. The purpose of these assemblies is to transmit a thrust load between two relatively rotating objects while greatly reducing friction. Needle roller and cage thrust assemblies can also be unitized with lipped washers which serve as raceway surfaces for the needle rollers. Washers can be supplied separately or can be mechanically unitized to the needle roller thrust assemblies for ease of handling. - Automotive automatic and manual transmissions - Automotive accessories (compressors, steering gears, etc.) - Agricultural equipment - Construction equipment - One-way fool-proof assembly features - Anti-rotation locking features - Lubrication flow enhancements
physics
https://www.advancedvsr.com/services
2024-04-12T15:16:09
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816024.45/warc/CC-MAIN-20240412132154-20240412162154-00294.warc.gz
0.923015
404
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__31438534
en
WE INVENTED THE VSR PROCESS Advanced VSR performs both on-site and in-house stress-relieving services. On-Site VSR Treatment: Advanced VSR can either bring or ship a complete VSR system to your facility and stress relieve your critical components right on your shop floor. We do require information about the work-pieces, so that we can both qualify them as good candidates for the VSR Process and arrive fully prepared. A VSR report (see VSR reports page in library) is normally included with such service, so that all VSR setups, vibration data and procedures are fully documented and provided to the customer. [Image: a 40-ton, 4-part, bimetallic hydroturbine discharge ring receiving the 1st of 2 treatments. Typically these rings machine to within 0.010" - 0.020" round over a ~30 foot diameter.] In-House Stress Relieving: For modest-sized components, it is sometimes easier to bring the work-pieces to our shop than bring the equipment to yours. We are equipped to handle components up to ~1/2 ton. [Image: a 2 m long decanter centrifuge rotor set up on a fixture receiving VSR Treatment.] These stainless steel rotors must maintain very accurate dynamic balance, which is possible only if VSR processed. Vibration Analysis / Simulation Often industrial facilities that produce vibration, whether intentional or not, require controlled vibration simulation together with analysis, to know how to modify structures or equipment. VSR systems are well-suited to generate such controlled vibration, and produce spectra detailing vibrational response. [Image: a VSR vibrator mounted on a structural member of a coal-crusher building at a coal-fired power plant.] Spectra generated by the VSR system powering it were used to plan a corrective course of action for the structure, which had been suffering excessive vibration amplitude while in operation.
physics
https://blog.nitecorestore.com/the-history-of-flashlights.html
2024-03-02T23:05:19
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476137.72/warc/CC-MAIN-20240302215752-20240303005752-00431.warc.gz
0.944644
1,074
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__23637264
en
- Before electricity was discovered, humans were using fire and fuel sources to have prolonged light - Battery-operated lights were invented in the 19th century, but their size was too big for everyday use - Advancements in technology and materials have helped the flashlights grow into what they are today In today's world, we often take the convenience of a flashlight for granted, easily turning it on to illuminate our path during dark hours. However, the history of flashlights is an intriguing tale of ingenuity, innovation, and perseverance. From humble beginnings to the modern marvels we have today, the evolution of flashlights has been marked by technological advancements and significant cultural impacts. Join us on this illuminating journey as we delve into the fascinating history of flashlights. Early Origins: The Ancient Beacons Long before the invention of electricity, humans sought ways to conquer the darkness. Historical records show that ancient civilizations used various methods to create light sources. Early lamps, made of clay, stone, or metal, were filled with animal fats, oils, or plant-based materials, providing a crude form of illumination. The Greeks, for example, used ceramic lamps with a wick soaked in oil to create their own form of portable lighting. The invention of candles helped people light their way, but most early-day candles could be burned up within only an hour. The First Battery-Powered Light: The Arc Lamp The true transformation of flashlights began in the early 19th century with the discovery of electricity. The groundwork for battery-powered portable lights was laid by Sir Humphry Davy in 1802 when he invented the first electric arc lamp. These arc lamps operated with two carbon rods that would be placed close to one another in holders made of non-conductive material. A battery would then send a high voltage to said rods; because of the space in between the carbon rods, a discharge of electricity would be created which would be an intense bright light. However, these early arc lamps were impractical for everyday use due to their large size and high power consumption which only allowed them to be used for outside public lighting. The Breakthrough: The Incandescent Flashlight The pivotal moment in flashlight history arrived in 1899 when the British inventor David Misell patented the first practical flashlight. He used a carbon-filament bulb, powered by three D zinc batteries, encased in a cylindrical metal tube. They were called flashlights because the batteries couldn’t hold a steady current for long periods which caused the flashlight to flash at times. This invention marked the transition from the era of oil lamps to the age of electric flashlights. In 1902, the first commercial flashlight hit the market courtesy of Conrad Hubert and the Ever-Ready Company, which later became Eveready (and is still around today), and began mass-producing flashlights for public consumption. These early flashlights were basic by today's standards, with simple on-off switches and limited battery life. However, they proved invaluable in various fields, including mining, military, and public safety. As time went on engineers created flashlights with sturdier designs, water-resistant cases, and more efficient bulbs. These developments accelerated the flashlight's popularity in civilian life as well, as people recognized its multifaceted utility. 
Leaping into Modern Technology: LED Flashlights The 1960s marked a significant turning point in flashlight history with the development of light-emitting diodes (LEDs). Initially, LEDs were expensive and inefficient, but rapid progress in semiconductor technology made them a viable alternative to incandescent bulbs. By the late 1990s, LED flashlights began to dominate the market due to their enhanced durability, longer battery life, and lower energy consumption. In recent years, flashlights have continued to evolve with cutting-edge innovations. The incorporation of rechargeable batteries, adjustable focus, tactical features, and miniaturization have revolutionized flashlight technology. Furthermore, advancements in materials and manufacturing have led to robust and lightweight designs suitable for extreme outdoor activities and emergencies. If you’re looking for a flashlight for yourself, be sure to check out our best sellers here! Going Beyond: LEP Flashlights One of the most prominent new technologies today is LEP flashlights. An LEP (Laser Excited Phosphor) flashlight emits a blue laser beam that excites a material called phosphor. This phosphor absorbs the laser energy and subsequently emits visible light. The flashlight then utilizes lenses or reflectors to focus this light, resulting in the distinctive bright white and long-reaching LEP beam. Want to know more about LEP flashlights? Check out our blog about it here. From ancient clay lamps to the sophisticated LED flashlights of today, the history of these portable illuminators is a testament to human inventiveness and adaptability. The flashlight's evolution, driven by the necessity to conquer darkness and explore the unknown, showcases the remarkable progress of human civilization. As we move forward, it's exciting to imagine the future of flashlights and how they will continue to light our path in the years to come. Looking for more flashlight facts and history? Make sure to check out some of our other blogs like 10 Interesting Things You Didn’t Know About Flashlights!
physics
http://csdtechpd.wordpress.com/tag/bending/
2013-05-25T15:14:56
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705956734/warc/CC-MAIN-20130516120556-00063-ip-10-60-113-184.ec2.internal.warc.gz
0.93859
404
CC-MAIN-2013-20
webtext-fineweb__CC-MAIN-2013-20__0__15158886
en
Tag Archives: bending A new RoundTrips videoconference has been announced! Here are the details: Date: January 30, 2008 Times: 10 to 10:45 a.m. and 11:15 a.m. to Noon Grade Levels: 4-8 Cost: No Fee When you travel, you notice there are all sorts of different shapes to bridges that span rivers, gorges, and highways. Have you ever wondered “Why did they build that kind of bridge here?” This interactive videoconference is designed to help you and your students answer that question. We’ll explore basic bridge shapes such as arch, beam, suspension, and cable-stayed. We’ll look at the forces of tension, compression, torsion, bending and shear that act on those bridge shapes. We’ll investigate how the purpose of the bridge, its geographic location, and materials used in its construction also help determine its final design. This is the second of our ten-part series developed with the Missouri Department of Transportation as it builds a new bridge across the Missouri River at Glasgow, Missouri. Students will see examples of different types of bridges and engage in interactive discussion and activities with engineers who design and build bridges. We’ll look at examples of bridges from around the world and the specifics of the new bridge being built at Glasgow. To participate in this videoconference, contact RoundTrips at [email protected]. More details on the series of programs and an archive of the first program in the series can be found at MOREnet’s website, http://www.more.net. Tags: arch, beam, bending, bridge, bridge construction, cable-stayed, compression, Glasgow Missouri, Missouri Department of Transportation, Missouri River, MOREnet, RoundTrips, shear, suspension, tension, torsion, videoconference
physics
https://doresearchforme.com/northern-arizona-university-p/
2022-10-01T00:46:02
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335514.65/warc/CC-MAIN-20221001003954-20221001033954-00049.warc.gz
0.852623
964
CC-MAIN-2022-40
webtext-fineweb__CC-MAIN-2022-40__0__25339656
en
I’m working on a science multi-part question and need a sample draft to help me learn. - In your own words, explain what a wave is. Why are waves important? What are three properties of every wave? Describe those properties. Answer in ~1 paragraph. - One fascinating thing about light is that it exhibits both particle behavior and wave behavior. What is one specific piece of evidence that it behaves as a particle? What is one piece of evidence that it behaves as a wave? Answer in your own words in 1-2 paragraphs. - What is reflection? Explain it in your own words, and provide an example. What is refraction? Explain it in your own words, and provide an example. Then explain why these two concepts are important. Answer in 1-2 paragraphs. - Explain Coulomb’s law as if you were teaching this concept to a 6th grader. Be specific, but make sure your explanation is age-appropriate. Why is Coulomb’s law important? Answer in ~1 paragraph. (A short numeric sketch follows the reading list below.) - Historically, some of the first power plants distributed direct current to customers. But today, all household electricity comes in alternating current form. Why is alternating current used for power distribution, rather than direct current? Be sure to include a description of alternating current and direct current in your response. Answer in ~1 paragraph. - What is the main difference between permanent magnets and electromagnets? What are some uses/applications of permanent magnets and electromagnets? Provide at least three examples. Answer in ~1 paragraph. - Compare and contrast the elements carbon (C) and sodium (Na) in terms of (a) number of protons and electrons (assume the atoms are neutral), (b) mass number, and (c) whether they are metals, nonmetals, or metalloids. Then compare and contrast metals and nonmetals: how do they differ? Answer in ~1 paragraph. I have attached links to the course’s readings below that may be a valuable resource. 
An Introduction to Motion, Force, and Newton’s Laws - Chapter 2: Kinematics – Read Introduction, Sections 2.1 – 2.4, and Section 2.7 - Chapter 4: Dynamics: Force and Newton’s Laws of Motion – Read Introduction and Sections 4.1 – 4.5 - Chapter 5: Further Applications of Newton’s Laws – Read Introduction and Sections 5.1 and 5.2 - Chapter 6: Uniform Circular Motion and Gravitation – Read Introduction and Sections 6.1 – 6.5 Elements, Atoms, Ions, and Matter - Chapter 1: Essential Ideas – Read Sections 1.1 – 1.5 - Chapter 2: Atoms, Molecules, and Ions – Read Sections 2.1 – 2.6 An Introduction to Waves and Sound - Section 16.9: Waves - Section 16.10: Superposition and Interference - Chapter 17: Physics of Hearing – Read Introduction and Sections 17.1 – 17.7 Light and the Electromagnetic Spectrum - This reading, from OpenStax Astronomy, provides a brief but comprehensive introduction to light and the electromagnetic spectrum: The Behavior of Light - Chapter 25: Introduction to Geometric Optics (OpenStax College Physics) – Read Introduction and Sections 25.1 – 25.7 - Chapter 27: Wave Optics (OpenStax College Physics) – Read Introduction and Sections 27.1 – 27.5 and Section 27.8 - Section 6.1: Electromagnetic Energy (OpenStax Chemistry) - Chapter 18: Electric Charge and Electric Field – Read Introduction and Sections 18.1 – 18.4 - Section 19.1: Electric Potential Energy: Potential Difference - Chapter 20: Electric Current, Resistance, and Ohm’s Law – Read Introduction and Sections 20.1 – 20.6 - Circuit Symbols and Circuit Diagrams - Two Types of Connections - Series Circuits - Parallel Circuits - Combination Circuits Magnetism and Electromagnetism - Magnets and Magnetism: - All About Circuits. (n.d.) Electromagnetism. - OpenStax College Physics Chapter 23: Introduction to Electromagnetic Induction – Read Introduction and Sections 23.1 and 23.2 - Williams, M. (2016). What are the uses of electromagnets? Universe Today.
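For the Coulomb's law question above, a minimal numeric sketch may help anchor the explanation. It assumes point charges in vacuum and SI units; the charge values are made up purely for illustration.

```python
# Coulomb's law for two point charges: F = k * |q1 * q2| / r**2.
# Assumptions: point charges in vacuum, SI units; example values are illustrative only.
K = 8.99e9  # Coulomb constant in N*m^2/C^2

def coulomb_force_newtons(q1_c, q2_c, r_m):
    """Magnitude of the electrostatic force between two point charges."""
    return K * abs(q1_c * q2_c) / r_m ** 2

# Two 1-microcoulomb charges 10 cm apart:
print(coulomb_force_newtons(1e-6, 1e-6, 0.10))  # about 0.9 N; halving r quadruples the force
```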
physics
https://www.syhn.com.cn/products/relay-voltage-regulators/relay-ac-voltage-regulator.html
2024-02-26T05:08:21
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474650.85/warc/CC-MAIN-20240226030734-20240226060734-00115.warc.gz
0.877478
189
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__86796952
en
Relay AC Voltage Regulator This relay AC voltage regulator is widely used in various industries, small office equipment, refrigerators, electric fans, TV sets, computers and other household appliances. A voltage controller, also called an AC voltage controller or AC regulator, is an electronic module based on either thyristors, TRIACs, SCRs or IGBTs, which converts a fixed-voltage, fixed-frequency alternating current (AC) electrical input supply into a variable output voltage delivered to a resistive load. This varied voltage output is used for dimming street lights, varying heating temperatures in homes or industry, speed control of fans and winding machines and many other applications, in a similar fashion to an autotransformer. Voltage controller modules come under the purview of power electronics. Because they are low-maintenance and very efficient, voltage controllers have largely replaced such modules as magnetic amplifiers and saturable reactors in industrial use.
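One common way such AC regulators vary their output is phase-angle control, which delays the thyristor or TRIAC firing point within each half-cycle. The sketch below uses the textbook result for a full-wave controller with a purely resistive load; it illustrates the principle and is not a specification of this particular module.

```python
import math

# RMS output of a full-wave, phase-angle-controlled AC regulator with a resistive load:
# V_out = V_supply * sqrt((pi - alpha + sin(2*alpha)/2) / pi), alpha = firing angle in radians.
def output_rms(v_supply_rms, alpha_deg):
    """RMS output voltage for a given firing angle (0 degrees = full output)."""
    a = math.radians(alpha_deg)
    return v_supply_rms * math.sqrt((math.pi - a + math.sin(2 * a) / 2) / math.pi)

for alpha in (0, 45, 90, 135):
    print(f"alpha = {alpha:>3} deg -> {output_rms(230, alpha):.0f} V rms")
# 0 deg passes the full 230 V; larger firing angles dim a lamp or slow a fan.
```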
physics
https://urbanremedy.com.au/we-must-keep-exploring-space-to-answer-the-big-questions-humanity-faces/
2024-03-03T13:01:47
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476374.40/warc/CC-MAIN-20240303111005-20240303141005-00410.warc.gz
0.897252
699
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__31576942
en
Exploration of space has always captivated the imagination of humanity, serving as a conduit for answering some of the most profound questions that have lingered in our collective consciousness. As we gaze beyond our terrestrial confines, the pursuit of exploring space is not merely a quest for scientific discovery but also a relentless pursuit to unravel the mysteries that have perplexed us for centuries. In the grand cosmic theater, our endeavors in space exploration offer a unique platform to address existential queries, scientific enigmas, and philosophical conundrums that humanity grapples with. One of the fundamental questions that propel our exploration of space is the quest for our origins. By scrutinizing celestial bodies, such as planets, asteroids, and comets, we seek clues about the beginnings of our solar system and the very building blocks of life. These cosmic artifacts serve as time capsules, preserving information about the conditions and materials present during the formation of our celestial neighborhood. Delving into the origins of life on Earth and its potential existence elsewhere in the universe propels our search for habitable environments and the exploration of exoplanets. Unraveling the mystery of whether life exists beyond our planet could redefine our understanding of life itself and our place in the cosmos. Moreover, the exploration of space is intrinsically tied to understanding the fate of humanity. Earth, our cradle, faces numerous existential threats ranging from natural disasters to the consequences of human-induced climate change. By extending our reach beyond our home planet, we aim to ensure the survival and resilience of humanity. Establishing colonies on other celestial bodies or space habitats could serve as a crucial backup plan, safeguarding our species against potential catastrophic events on Earth. The study of space also offers a unique vantage point to contemplate the universe’s vastness and our significance within it. Observing distant galaxies, nebulae, and cosmic phenomena allows us to ponder existential questions about the nature of the cosmos, our place in it, and the possibility of intelligent life elsewhere. The pursuit of these answers ignites philosophical discussions about our existence, consciousness, and the ultimate purpose of humanity in a universe of infinite possibilities. Furthermore, space exploration is inexorably intertwined with technological advancements that reverberate beyond the realm of astronomy and astrophysics. Innovations stemming from space missions have consistently contributed to various fields, including medicine, telecommunications, materials science, and robotics. The quest to explore space pushes the boundaries of human ingenuity, fostering technological breakthroughs that improve our daily lives and drive economic growth. However, the pursuit of these answers amidst the cosmos is not without challenges. The vastness of space, coupled with the limitations of current technology and resources, presents formidable obstacles. Overcoming these hurdles necessitates international collaboration, pooling together the collective knowledge, resources, and expertise of diverse nations and organizations. Moreover, ethical considerations regarding the exploration of space, such as the responsible use of resources and the implications of potential discoveries, require careful deliberation and global consensus. 
In conclusion, the imperative to explore space persists as a beacon guiding humanity toward answering the profound questions that have tantalized our curiosity for millennia. It represents not only a scientific endeavor but a pursuit ingrained in our quest for understanding our origins, ensuring our survival, contemplating our place in the cosmos, and fostering technological progress. As we gaze at the stars, the exploration of space stands as a testament to human resilience, curiosity, and the unrelenting pursuit of knowledge in our eternal quest to unravel the mysteries of the universe.
physics
https://icnqt.com/nano-technology-and-you-benefits-and-applications/
2023-09-29T11:12:40
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510501.83/warc/CC-MAIN-20230929090526-20230929120526-00146.warc.gz
0.913811
2,871
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__220469211
en
After more than 20 years of basic nanoscience research and more than a decade of focused R&D, applications of nanotechnology are delivering in both expected and unexpected ways on nanotechnology’s promise to benefit society. Nanotechnology is helping to considerably improve, even revolutionize, many technology and industry sectors: information technology, energy, environmental science, medicine, homeland security, food safety, and transportation, among many others. Described below is a sampling of the rapidly growing list of benefits and applications of nanotechnology. Everyday Materials and Processes Most benefits of nanotechnology depend on the fact that it is possible to tailor the essential structures of materials at the nanoscale to achieve specific properties, thus greatly extending the well-used toolkits of materials science. Using nanotechnology, materials can effectively be made to be stronger, lighter, more durable, more reactive, more sieve-like, or better electrical conductors, among many other traits. There already exist over 800 everyday commercial products that rely on nanoscale materials and processes: - Nanoscale additives in polymer composite materials for baseball bats, tennis rackets, motorcycle helmets, automobile bumpers, luggage, and power tool housings can make them simultaneously lightweight, stiff, durable, and resilient. - Nanoscale additives to or surface treatments of fabrics help them resist wrinkling, staining, and bacterial growth, and provide lightweight ballistic energy deflection in personal body armor. - Nanoscale thin films on eyeglasses, computer and camera displays, windows, and other surfaces can make them water-repellent, antireflective, self-cleaning, resistant to ultraviolet or infrared light, antifog, antimicrobial, scratch-resistant, or electrically conductive. - Nanoscale materials in cosmetic products provide greater clarity or coverage; cleansing; absorption; personalization; and antioxidant, anti-microbial, and other health properties in sunscreens, cleansers, complexion treatments, creams and lotions, shampoos, and specialized makeup. - Nano-engineered materials in the food industry include nanocomposites in food containers to minimize carbon dioxide leakage out of carbonated beverages, or reduce oxygen inflow, moisture outflow, or the growth of bacteria in order to keep food fresher and safer, longer. Nanosensors built into plastic packaging can warn against spoiled food. Nanosensors are being developed to detect salmonella, pesticides, and other contaminants on food before packaging and distribution. - Nano-engineered materials in automotive products include high-power rechargeable battery systems; thermoelectric materials for temperature control; lower-rolling-resistance tires; high-efficiency/low-cost sensors and electronics; thin-film smart solar panels; and fuel additives and improved catalytic converters for cleaner exhaust and extended range. - Nano-engineered materials make superior household products such as degreasers and stain removers; environmental sensors, alert systems, air purifiers and filters; antibacterial cleansers; and specialized paints and sealing products. - Nanostructured ceramic coatings exhibit much greater toughness than conventional wear-resistant coatings for machine parts. In 2000, the U.S. Navy qualified such a coating for use on gears of air-conditioning units for its ships, saving $20 million in maintenance costs over 10 years. 
Such coatings can extend the lifetimes of moving parts in everything from power tools to industrial machinery. - Nanoparticles are used increasingly in catalysis to boost chemical reactions. This reduces the quantity of catalytic materials necessary to produce desired results, saving money and reducing pollutants. Two big applications are in petroleum refining and in automotive catalytic converters. Electronics and Information Technology Applications Nanotechnology is already in use in many computing, communications, and other electronics applications to provide faster, smaller, and more portable systems that can manage and store larger and larger amounts of information. These continuously evolving applications include: - Nanoscale transistors that are faster, more powerful, and increasingly energy-efficient; soon your computer’s entire memory may be stored on a single tiny chip. - Magnetic random access memory (MRAM) enabled by nanometer‐scale magnetic tunnel junctions that can quickly and effectively save even encrypted data during a system shutdown or crash, enable resume‐play features, and gather vehicle accident data. - Displays for many new TVs, laptop computers, cell phones, digital cameras, and other devices incorporate nanostructured polymer films known as organic light-emitting diodes, or OLEDs. OLED screens offer brighter images in a flat format, as well as wider viewing angles, lighter weight, better picture density, lower power consumption, and longer lifetimes. - Other computing and electronic products include Flash memory chips for iPod nanos; ultraresponsive hearing aids; antimicrobial/antibacterial coatings on mouse/keyboard/cell phone casings; conductive inks for printed electronics for RFID/smart cards/smart packaging; more life-like video games; and flexible displays for e-book readers. Sustainable Energy Applications The difficulty of meeting the world’s energy demand is compounded by the growing need to protect our environment. Many scientists are looking into ways to develop clean, affordable, and renewable energy sources, along with means to reduce energy consumption and lessen toxicity burdens on the environment. - Prototype solar panels incorporating nanotechnology are more efficient than standard designs in converting sunlight to electricity, promising inexpensive solar power in the future. Nanostructured solar cells already are cheaper to manufacture and easier to install, since they can use print-like manufacturing processes and can be made in flexible rolls rather than discrete panels. Newer research suggests that future solar converters might even be “paintable.” - Nanotechnology is improving the efficiency of fuel production from normal and low-grade raw petroleum materials through better catalysis, as well as fuel consumption efficiency in vehicles and power plants through higher-efficiency combustion and decreased friction. - Nano-bioengineering of enzymes is aiming to enable conversion of cellulose into ethanol for fuel, from wood chips, corn stalks (not just the kernels, as today), unfertilized perennial grasses, etc. - Nanotechnology is already being used in numerous new kinds of batteries that are less flammable, quicker-charging, more efficient, lighter weight, and that have a higher power density and hold electrical charge longer. One new lithium-ion battery type uses a common, nontoxic virus in an environmentally benign production process. 
- Nanostructured materials are being pursued to greatly improve hydrogen membrane and storage materials and the catalysts needed to realize fuel cells for alternative transportation technologies at reduced cost. Researchers are also working to develop a safe, lightweight hydrogen fuel tank. - Various nanoscience-based options are being pursued to convert waste heat in computers, automobiles, homes, power plants, etc., to usable electrical power. - An epoxy containing carbon nanotubes is being used to make windmill blades that are longer, stronger, and lighter-weight than other blades to increase the amount of electricity that windmills can generate. - Researchers are developing wires containing carbon nanotubes to have much lower resistance than the high-tension wires currently used in the electric grid and thus reduce transmission power loss. - To power mobile electronic devices, researchers are developing thin-film solar electric panels that can be fitted onto computer cases and flexible piezoelectric nanowires woven into clothing to generate usable energy on-the-go from light, friction, and/or body heat. - Energy efficiency products are increasing in number and kinds of application. In addition to those noted above, they include more efficient lighting systems for vastly reduced energy consumption for illumination; lighter and stronger vehicle chassis materials for the transportation sector; lower energy consumption in advanced electronics; low-friction nano-engineered lubricants for all kinds of higher-efficiency machine gears, pumps, and fans; light-responsive smart coatings for glass to complement alternative heating/cooling schemes; and high-light-intensity, fast-recharging lanterns for emergency crews. Environmental Remediation Applications Besides lighter cars and machinery that requires less fuel, and alternative fuel and energy sources, there are many eco-friendly applications for nanotechnology, such as materials that provide clean water from polluted water sources in both large-scale and portable applications, and ones that detect and clean up environmental contaminants. - Nanotechnology could help meet the need for affordable, clean drinking water through rapid, low-cost detection of impurities in and filtration and purification of water. For example, researchers have discovered unexpected magnetic interactions between ultrasmall specks of rust, which can help remove arsenic or carbon tetrachloride from water ; they are developing nanostructured filters that can remove virus cells from water; and they are investigating a deionization method using nano-sized fiber electrodes to reduce the cost and energy requirements of removing salts from water. - Nanoparticles will someday be used to clean industrial water pollutants in ground water through chemical reactions that render them harmless, at much lower cost than methods that require pumping the water out of the ground for treatment. - Researchers have developed a nanofabric “paper towel,” woven from tiny wires of potassium manganese oxide, that can absorb 20 times its weight in oil for cleanup applications. - Many airplane cabin and other types of air filters are nanotechnology-based filters that allow “mechanical filtration,” in which the fiber material creates nanoscale pores that trap particles larger than the size of the pores. They also may contain charcoal layers that remove odors. Almost 80% of the cars sold in the U.S. include built-in nanotechnology-based filters. 
- New nanotechnology-enabled sensors and solutions may one day be able to detect, identify, filter out, and/or neutralize harmful chemical or biological agents in the air and soil with much higher sensitivity than is possible today. Researchers around the world are investigating carbon nanotube “scrubbers” and membranes to separate carbon dioxide from power plant exhaust. And researchers are investigating particles such as self-assembled monolayers on mesoporous supports (SAMMS™), dendrimers, carbon nanotubes, and metalloporphyrinogens to determine how to apply their unique chemical and physical properties for various kinds of toxic site remediation. Nanobiosystems, Medical, and Health Applications Nanotechnology has the real potential to revolutionize a wide array of medical and biotechnology tools and procedures so that they are more personalized, portable, cheaper, safer, and easier to administer. Below are some examples of important advances in these areas. - Quantum dots are semiconducting nanocrystals that can enhance biological imaging for medical diagnostics. When illuminated with ultraviolet light, they emit a wide spectrum of bright colors that can be used to locate and identify specific kinds of cells and biological activities. These crystals offer optical detection up to 1,000 times better than conventional dyes used in many biological tests, such as MRIs, and render significantly more information. - Nanotechnology has been used in the early diagnosis of atherosclerosis, or the buildup of plaque in arteries. Researchers have developed an imaging technology to measure the amount of an antibody-nanoparticle complex that accumulates specifically in plaque. Clinical scientists are able to monitor the development of plaque as well as its disappearance following treatment. - Gold nanoparticles can be used to detect early-stage Alzheimer’s disease. - Molecular imaging for early detection, in which sensitive biosensors constructed of nanoscale components (e.g., nanocantilevers, nanowires, and nanochannels) can recognize genetic and molecular events and have reporting capabilities, thereby offering the potential to detect rare molecular signals associated with malignancy. - Multifunctional therapeutics, in which a nanoparticle serves as a platform to facilitate its specific targeting to cancer cells and delivery of a potent treatment, minimizing the risk to normal tissues. - Research enablers such as microfluidic chip-based nanolabs capable of monitoring and manipulating individual cells, and nanoscale probes to track the movements of cells and individual molecules as they move about in their environments. - Research is underway to use nanotechnology to spur the growth of nerve cells, e.g., in damaged spinal cord or brain cells. In one method, a nanostructured gel fills the space between existing cells and encourages new cells to grow. There is early work on this in the optic nerves of hamsters. Another method explores the use of nanofibers to regenerate damaged spinal nerves in mice. 
Future Transportation Applications In addition to contributing to building and maintaining lighter, smarter, more efficient, and “greener” vehicles, aircraft, and ships, nanotechnology offers various means to improve the transportation infrastructure: - Nano-engineering of steel, concrete, asphalt, and other cementitious materials, and their recycled forms, offers great promise in terms of improving the performance, resiliency, and longevity of highway and transportation infrastructure components while reducing their cost. New systems may incorporate innovative capabilities into traditional infrastructure materials, such as the ability to generate or transmit energy. - Nanoscale sensors and devices may provide cost-effective continuous structural monitoring of the condition and performance of bridges, tunnels, rails, parking structures, and pavements over time. Nanoscale sensors and devices may also support an enhanced transportation infrastructure that can communicate with vehicle-based systems to help drivers maintain lane position, avoid collisions, adjust travel routes to circumnavigate congestion, and other such activities. Future sensor systems will be able to use multiple physical phenomena to sense many analytes simultaneously for a variety of applications, some of which are noted above.
physics
http://viralnews.harshvardhanart.com/x-ray-technology-is-speeding-up-the-search-for-hidden-gold/
2018-09-22T16:57:43
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158609.70/warc/CC-MAIN-20180922162437-20180922182837-00458.warc.gz
0.922289
285
CC-MAIN-2018-39
webtext-fineweb__CC-MAIN-2018-39__0__196710584
en
A breakthrough x-ray technology that can detect and analyze unseen gold is now up and running in Australia, the world’s second-biggest producer, with plans to take it to Africa. The new technology, developed by Australia’s national science agency, uses high powered X-rays to bombard rock samples and activate atoms of gold and other metals. A highly sensitive detector picks up unique signatures to determine their concentrations. The system is now operational at Ausdrill Ltd.’s analytical facility in Perth, with two more to be established in the Kalgoorlie goldfields in coming months, the Commonwealth Scientific and Industrial Research Organisation, or CSIRO, said Thursday in a statement. Ausdrill has longer-term plans to take the innovation to Africa. The introduction of the new technology couldn’t be better timed. The producer-funded World Gold Council has estimated that world supply may have peaked, while Frank Holmes, chief executive officer of U.S. Global Investors Inc., said last month that mine supply topped out in 2017 or will do so this year. The photon assay system will analyze at least 50,000 samples a month, at a similar cost to conventional methods, and can also be applied to a range of other minerals, including silver and copper, the statement showed. It’s a faster, safer and more environmentally-friendly alternative, it said.
physics
https://sunxtender.com/battery_sizing.php
2024-04-15T19:39:20
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817014.15/warc/CC-MAIN-20240415174104-20240415204104-00066.warc.gz
0.914032
1,362
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__111079343
en
Sun Xtender® Battery Sizing Follow these guidelines for sizing a battery system that should provide a reliable energy storage system for stand alone Renewable Energy systems. The primary emphasis is for photovoltaic (PV), solar battery systems but other renewable energy source systems would have similar requirements. To calculate the DC Ampere Hours per Day required to power the system: DC Load Amps = 1000 x kW ÷ DC System Voltage Total Daily Load [AH] = DC Load Amps x No. of Operating Hours per Day For a 0.12 kW DC load at 48 VDC, DC Load Amps = 1000 x 0.12kW ÷ 48VDC = 2.5A. Total Daily Load = 2.5A x 24 Hours/Day = 60 AH/Day. For variable DC Loads, establish the duty cycle based on percentages of the daily operations. (P1% of day at xx Amps) + (P2% of day at yy Amps) + Etc = Total AH Consumed/Day A system operates at 5A for 70% of the day and 10A for 30% of the day: Total Daily Load = (70% X 5A X 24 Hrs) + (30% X 10A X 24 Hrs) Total Daily Load = 84 AH + 72 AH = 156 AH/Day. When an inverter is used to power 120 or 240 VAC appliances, such as pumps, refrigerators, lighting, etc., the AC voltage must be converted to the Battery's DC voltage and the efficiency of the inverter must be considered. If the inverter AC voltage is 120 VAC and the battery DC voltage is 24 VDC, then the conversion factor is 5.0. For every AC amp drawn there will be 5 times as many DC amps required. Also, the inverter's conversion efficiency from DC to AC is not 100%. There is an internal loss in the inverter which is normally about 10% to 15%. See inverter/charger manufacturer's data for efficiency specifications. For a 2.4 kW AC Load at 120VAC with a 48VDC battery and Inverter operating at 90% efficiency, AC Load = 1000 x 2.4 kW ÷ 120 VAC = 20 Amps @ 120 VAC DC Load = 20 Amps AC X 120/48 ÷ 0.90 = 55.6 Amps DC Total Daily Load = 55.6 A x 24 Hours/Day = 1,334 AH/Day Note: When sizing the battery for non continuous loads, or for larger loads for short periods of time per day, it may not be possible to use the 20, 24 or 120 hr. rate of discharge for the battery's capacity. When discharged at different rates, a battery's capacity will vary. The higher the rate of discharge, the lower the capacity of the battery will be. More detailed calculations are required in these cases. Days of Autonomy As everybody knows, the sun does not shine with equal intensity every day, nor does it shine at night and during inclement weather. Cloud cover, rain, snow, etc. diminish the daily insolation (Insolation is the amount of solar energy delivered to the earth's surface, measured in W/m2 or kWh/m2/day. A storage factor must be employed to allow the photovoltaic battery system to operate reliably throughout these periods. In addition, it is desired to obtain the best service life of the battery by limiting its average daily depth of discharge. This storage factor is commonly referred to as "Number of Days of Battery Autonomy". The number of days is established by evaluating the peak hours of sun per day for the lowest insolation month of the year with the solar array oriented for maximum output during that month. The minimum number of days that should be considered is 5 days of storage for even the sunniest locations on earth. In these high sun locations there will be days when the sun is obscured and the battery's average depth of discharge should not be more than 20% per day. The recommended days of autonomous storage are shown in the following table: The temperature of the battery is a major factor in sizing a PV system. 
Battery capacity is reduced in cold temperatures and the battery life is shortened in high temperatures. It should be realized that the temperature of the battery itself and ambient temperature can be vastly different. While ambient temperatures can change very quickly, battery temperature change is much slower. This is due to the large thermal mass of the battery. It takes time for the battery to absorb temperature and it takes time for the battery to relinquish temperature. The battery's temperature is normally the average temperature for the past 24 hours plus or minus a few degrees. In many systems it can be difficult or impossible to heat or cool the battery and we must take ambient temperature into consideration. A battery that is required to operate continuously at -18°C (0° F.) will provide about 60% of its capacity. This same battery operated continuously in a 35°C (95°F.) environment will see its life expectancy cut in half. The earth is a great heat sink which provides enormous insulation in high or low temperatures. By burying the battery in the ground we can increase its capacity at cold ambient temperatures and increase the life of the battery at high ambient temperatures. The battery with only 60% of its capacity at -18°C (0°F) can be brought up to 85% to 90% capacity by burying it. With life cut in half at 35°C (95°F), burying the battery can bring it back to near normal life expectancy. The battery capacity for a PV system can be calculated using the following formula: Capacity (AH) = Total Daily Load x Days of Autonomy x Design Factor The Design Factor depends on the battery's average temperature during the coldest time of the year, as discussed above. The following table provides recommended Design Factors at various temperatures. For a 48VDC system, Total Daily Load of 30AH, 5 Days of Autonomy, and -8°C is the lowest average temperature, the required battery capacity is as follows: Battery Capacity = 30 x 5 x 1.84 = 276AH. This requirement could be satisfied with a PVX-2580L, which has a C/120 rating of 305AH. Four of these batteries in series gives 4 x 12VDC = 48VDC.
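The sizing arithmetic above is easy to wrap in a small script. The following Python sketch simply restates the formulas given in this guide (DC load amps, daily ampere-hours, inverter conversion, and the capacity formula); the function names are illustrative choices, not part of any standard tool, and the worked numbers are the ones from the examples above.

def dc_load_amps(load_kw, dc_system_voltage):
    # DC Load Amps = 1000 x kW / DC System Voltage
    return 1000 * load_kw / dc_system_voltage

def daily_load_ah(load_amps, hours_per_day):
    # Total Daily Load [AH] = DC Load Amps x hours of operation per day
    return load_amps * hours_per_day

def inverter_dc_amps(ac_load_amps, ac_voltage, dc_voltage, efficiency):
    # Convert an AC load to the DC amps drawn from the battery through an inverter
    return ac_load_amps * (ac_voltage / dc_voltage) / efficiency

def battery_capacity_ah(total_daily_load_ah, days_of_autonomy, design_factor):
    # Capacity (AH) = Total Daily Load x Days of Autonomy x Design Factor
    return total_daily_load_ah * days_of_autonomy * design_factor

amps = dc_load_amps(0.12, 48)                       # 2.5 A
print(daily_load_ah(amps, 24))                      # 60 AH/day
print(inverter_dc_amps(20, 120, 48, 0.90) * 24)     # about 1,334 AH/day
print(battery_capacity_ah(30, 5, 1.84))             # 276 AH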
physics
https://thereviewmaster.com/quantum-computing/
2024-04-22T23:27:32
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818374.84/warc/CC-MAIN-20240422211055-20240423001055-00658.warc.gz
0.935549
590
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__156091531
en
The computer has been becoming smaller and smaller since its invention. It has downsized from enormous room-sized giants to pocket-sized mobile phones, with each new model smaller and faster than the one before. Now, computer and software engineering is about to enter a new era: quantum computing. Quantum computing is not as simple as its name suggests. For a regular person, it may not make any sense at all. However, some basics of this strange technique can be explained. What are Quantum Computers? Quantum computers are computers that use quantum mechanics, a branch of physics. Quantum computing employs the phenomena of superposition and entanglement to operate. We know that classical computers process data in the form of ‘zeros’ and ‘ones’, otherwise known as ‘bits’. Quantum computers, however, use ‘qubits’ to process data. New data processor: Qubits A bit used in a classical computer can have a value of either 1 or 0. A qubit, however, can hold both of these values (0 and 1) at the same time, and in general a blend of many possible values. This is called superposition. The interesting part is what happens when a qubit is observed. While unobserved, a qubit exists in a superposition of all the values assigned to it; when it is measured, it settles on just one of those values. Hence, a collection of qubits can represent far more information at once than the same number of classical bits. This is the fundamental property and advantage of quantum computing. Entanglement is another strange and important part of quantum computers. Entanglement happens when two particles separated by any distance influence one another; in other words, a change in the state of one particle affects the state of the other. In quantum computing, the same phenomenon is applied to qubits. Due to entanglement, qubits form a kind of communication network within a quantum system, which helps make a quantum computer much faster than a classical computer for certain tasks. Hurdles and Difficulties Quantum computers remain hard to build because of many technical difficulties. It is very difficult to increase the number of qubits physically. Assigning arbitrary values to qubits is a difficult task. Reading qubit information is not easy. Moreover, because measurement collapses a superposition, a large amount of information can be lost when qubits are observed. Large-scale quantum computers are yet to be realized. However, if manufactured and perfected over time, they will provide a revolutionary technological advancement in data processing.
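The statistics of superposition and measurement described above can be illustrated with a few lines of ordinary (classical) Python. This is only a toy model of a single qubit using the standard textbook state-vector description; it is not how real quantum hardware works, and the names used are purely illustrative.

import numpy as np

rng = np.random.default_rng(0)

ket0 = np.array([1.0, 0.0])                             # the qubit prepared in the value 0
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # gate that creates an equal superposition

state = hadamard @ ket0                                 # amplitudes for 0 and 1
probs = np.abs(state) ** 2                              # probability of observing 0 or 1

# Observing the qubit forces it to pick one value; repeating the experiment
# many times reveals the underlying probabilities.
outcomes = rng.choice([0, 1], size=1000, p=probs)
print(probs)                  # [0.5 0.5]
print(np.bincount(outcomes))  # roughly 500 zeros and 500 ones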
physics
https://pjhomeimprovements.co.uk/energy-rating/
2021-10-28T09:31:30
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588282.80/warc/CC-MAIN-20211028065732-20211028095732-00060.warc.gz
0.921088
484
CC-MAIN-2021-43
webtext-fineweb__CC-MAIN-2021-43__0__293366447
en
Benefits of energy-efficient windows - Smaller energy bills. - Smaller carbon footprint. - More comfortable home: energy-efficient glazing reduces heat loss through windows and means fewer draughts and cold spots. - Peace and quiet: as well as keeping the heat in, energy-efficient windows insulate your home against external noise. - Reduced condensation: energy-efficient glazing reduces condensation build-up on the inside of windows. The costs and savings for energy-efficient glazing will be different for each home and each window, depending on its size, material and the installer you choose. Double glazing should last for 20 years or more. To get a better idea of how much you could save by replacing your windows, use the Energy Saving Calculator on the Glass and Glazing Federation’s website, developed in conjunction with the Energy Saving Trust. How energy-efficient glazing works Double-glazed windows have two sheets of glass with a gap in between, usually about 16mm, to create an insulating barrier that keeps heat in. The gap is sometimes filled with gas. Triple-glazed windows have three sheets of glass, but aren’t always better than double-glazed windows. To choose the most energy-efficient window, look for the BFRC rating. Energy-efficient windows come in a range of frame materials and styles. Performance criteria vary according to the following: - How well they stop heat from passing through the window. - How much sunlight travels through the glass. - How little air can leak in or out around the window. What to look for - Glass – The most energy-efficient type for double glazing is low-emissivity (Low-E) glass. This often has an invisible coating of metal oxide, normally on one of the internal panes. This lets in light and heat but cuts the amount of heat that can get out. - Gaps between the glass – Very efficient windows might use gases such as argon, xenon or krypton in the gap between the sheets of glass. - Pane spacers – These are set around the inside edges to keep the two panes of glass apart. For maximum efficiency, look for pane spacers containing little or no metal – often known as ‘warm edge’ spacers.
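As a rough illustration of why glazing performance matters, the steady-state heat flow through a window can be estimated as Q = U x A x ΔT, where U describes how readily the window conducts heat. The Python sketch below uses representative U-values and an assumed window area and temperature difference purely for illustration; none of these figures come from this article.

u_values = {                     # W/(m^2*K), assumed representative figures
    "single glazing": 4.8,
    "older air-filled double glazing": 2.8,
    "modern low-E, argon-filled double glazing": 1.4,
    "triple glazing": 0.8,
}
window_area_m2 = 15.0            # total glazed area of a small home (assumed)
temperature_difference_k = 15.0  # indoor minus outdoor temperature (assumed)

for glazing, u in u_values.items():
    heat_loss_watts = u * window_area_m2 * temperature_difference_k
    print(f"{glazing}: about {heat_loss_watts:.0f} W lost through the windows")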
physics
https://electroferretera.com/quantum-cryptography-redefining-secure-communication-in-the-future/
2024-04-25T06:01:11
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712297284704.94/warc/CC-MAIN-20240425032156-20240425062156-00316.warc.gz
0.875145
693
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__71007575
en
In an age where digital security is paramount, the realm of cryptography is undergoing a paradigm shift with the advent of quantum cryptography. This article delves into the future and its profound implications in ensuring secure communication, exploring its transformative potential and the promise it holds in safeguarding sensitive data. Quantum Cryptography: A New Frontier in Security Quantum cryptography harnesses the principles of quantum mechanics to create cryptographic protocols that are inherently secure. Unlike classical cryptography, which relies on mathematical complexity, this utilizes quantum properties like superposition and entanglement to ensure unbreakable encryption keys and secure transmission of data. Unhackable Communication Channels The hallmark of quantum cryptography lies in its ability to offer unhackable communication channels. Through quantum key distribution (QKD), it enables the creation of cryptographic keys using quantum states, guaranteeing the security of information by detecting any interception attempts, a feature rooted in the fundamental principles of quantum mechanics. Quantum Key Distribution in Practice The deployment of QKD systems is steadily progressing, with advancements paving the way for real-world applications. These systems facilitate secure communication between entities by exchanging quantum-encoded keys, ensuring that any eavesdropping attempts would disrupt the quantum state, alerting the parties involved to potential security breaches. Challenges and the Road Ahead While quantum cryptography holds immense promise, challenges persist in terms of scalability, reliability, and practical implementation. Overcoming these challenges requires continued research and technological advancements to make quantum cryptographic systems more accessible, cost-effective, and compatible with existing infrastructure. Transformative Potential and Future Applications The future of quantum cryptography holds transformative potential across various sectors, including finance, government communications, healthcare, and beyond. Its integration into sensitive communication networks, confidential data transmission, and secure information exchange will redefine the landscape of digital security, ushering in an era of unparalleled protection against cyber threats. Conclusion: Securing Tomorrow’s Communication This represents a seismic shift in ensuring secure communication in an increasingly interconnected world. As research and development progress, quantum cryptographic solutions are poised to become integral in fortifying digital security against evolving cyber threats. Embracing the transformative power of quantum cryptography promises a future where communication remains impervious to interception, safeguarding sensitive data and securing the foundations of trust in the digital age. Quantum cryptography stands poised at the vanguard of a revolutionary era in secure communication. At its core lies an intricate fusion of quantum mechanics and cryptographic principles, offering an unprecedented level of impregnability in data transmission. This technology, unlike conventional cryptography, leverages the inherent properties of quantum mechanics, such as superposition and entanglement, to fortify encryption keys and shield sensitive information. The future of quantum cryptography promises a transformative paradigm, anchored in the unbreakable security it offers. 
Quantum key distribution (QKD), a cornerstone of this technology, facilitates the exchange of cryptographic keys through quantum states. Any attempt to intercept or eavesdrop disrupts the delicate quantum state, instantly alerting the parties involved and thwarting potential security breaches. The deployment of quantum cryptographic systems is gradually materializing, marking a significant stride towards real-world application. These systems, through quantum-encrypted keys, establish secure communication channels resistant to malicious interceptions. However, challenges persist, ranging from scalability and compatibility with existing infrastructure to the reliability and cost-effectiveness required for widespread adoption.
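To make the QKD idea above more concrete, here is a heavily simplified, classical Python sketch of a BB84-style exchange. It only models the statistics: bits are encoded in randomly chosen bases, and an eavesdropper who guesses the basis wrong disturbs the result, which shows up as an elevated error rate when the two parties compare a sample of their key. All names and parameters are illustrative assumptions, not a real implementation.

import random

random.seed(1)
N = 2000
EAVESDROPPER_PRESENT = True

alice_bits  = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.randint(0, 1) for _ in range(N)]   # 0 = rectilinear, 1 = diagonal
bob_bases   = [random.randint(0, 1) for _ in range(N)]

def measure(bit, prep_basis, meas_basis):
    # Same basis: the result is faithful; different basis: the result is random.
    return bit if prep_basis == meas_basis else random.randint(0, 1)

bob_bits = []
for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
    if EAVESDROPPER_PRESENT:
        eve_basis = random.randint(0, 1)
        bit = measure(bit, a_basis, eve_basis)   # Eve's measurement re-prepares the state
        a_basis = eve_basis
    bob_bits.append(measure(bit, a_basis, b_basis))

# Keep only positions where Alice and Bob used the same basis (the sifted key),
# then estimate the error rate from it.
sifted = [(a, b) for a, b, x, y in zip(alice_bits, bob_bits, alice_bases, bob_bases) if x == y]
error_rate = sum(a != b for a, b in sifted) / len(sifted)
print(f"sifted key length: {len(sifted)}, error rate: {error_rate:.1%}")
# Close to 0% with no eavesdropper, close to 25% with one; that elevated error
# rate is the interception alarm described above.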
physics
https://corlisskinisio.com/standards/
2023-06-05T17:56:29
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652149.61/warc/CC-MAIN-20230605153700-20230605183700-00328.warc.gz
0.894627
484
CC-MAIN-2023-23
webtext-fineweb__CC-MAIN-2023-23__0__219359637
en
In-situ chemical and isotopic analyses can suffer from matrix effects – analytical artifacts created by using a calibration standard with a different composition to the sample. Best practice is therefore to employ standards with a matrix composition that closely matches that of the sample. Synthetic standards for in-situ analyses are often made by melting mixtures of the required starting materials for an extended period of time, in order to aid homogenization, followed by rapid quenching to form a glass. However, for platinum-group-elements (PGE) in a silicate matrix, this procedure induces formation of metallic nuggets, making the quenched samples heterogeneous and unsuitable for use as calibration standards. As a result, there is a pressing need for novel methods that produce homogeneous PGE micro-analytical standards with a silicate matrix. Working with Dr. Tashi Parsons-Davis, we used a modified Stöber reaction to grow micron-sized particles from a solution doped with PGE. These PGE-doped particles were formed into appropriately sized blocks using additive manufacturing techniques. Upon calcining, the matrix of these particles is SiO2. Subsequently, the samples are sintered to yield a coherent sample for use as micro-analytical standards. To suppress the formation of metallic nuggets, sintering is performed below the melting temperature of the sample, at controlled atmospheric conditions, and for shortest duration possible. Laser ablation inductively coupled mass spectrometry (LA-ICPMS) analyses confirm the homogeneity of PGE in these samples. The same method can be used to synthesize standards for other elements, and the amount of synthesized material is only limited by the size of the container used for the Stöber reaction. This method therefore allows production of standards in quantities large enough for widespread distribution. - Sio, C. K., Parsons‐Davis, T., Lee, E., Wimpenny, J., Pascall, A. J., Kuntz, J. D., Goodell, J. J., Roberts, K. E., Bandong, B. B. & Bennett, N. R. (2020). Additive manufacturing of platinum group element (PGE) reference materials with a silica matrix. Rapid Communications in Mass Spectrometry, 34(7), e8627.
physics
http://www.shareyourrepair.com/2012/08/how-to-replace-fuses-in-fluke-177-true.html
2018-02-18T21:55:34
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812259.30/warc/CC-MAIN-20180218212626-20180218232626-00720.warc.gz
0.925502
881
CC-MAIN-2018-09
webtext-fineweb__CC-MAIN-2018-09__0__9097624
en
The other day I wanted to measure the current being drawn from a DC to AC converter and so I hooked up my trusty Fluke 177 and nothing happened, open circuit through the voltmeter. BTW: you must know that you have to place the leads in the correct terminals on the multimeter to measure current. Improperly connecting the terminals will blow the fuses. Somehow I did that some time ago. After talking to a Fluke Tech on the phone I learned that the Multimeter will not notify you that the fuse has been blown, it just won’t work and that’s right where I was at yesterday. How to Replace the Fuses in a Fluke 177 Multimeter Update 4/2/13: After writing a review on these replacement fuses on amazon.com someone recently commented on my review and shared some insightful information: You should be able to test the fuses without opening the case. On my Fluke 23-III, I plug the red lead into the standard Resistance-Voltage-Continuity port, and touch the lead tip into one of the fused Amperage ports. Set the Fluke to measure resistance. No need to use the black lead since the common port is already connected to the other side of the internal fuse. I haven’t tried that yet, but that makes sense. So if you are just trying to figure out if your fuse is blown or not you can try that. Step 1: Remove the 4 phillips screws on the back of the multimeter. Note: if there is a stand on the back you’ll have to lift it up to expose the two bottom screws that hold the battery compartment door AND hold the bottom half of the case together. Step 2: Remove the screws and turn over the multimeter and take the top of the case off. You don’t have to take the battery door off but you can. You may need to use your fingernails in the seam to get the face to separate but it came apart pretty easy for me. Step 3: Locate the 2 fuses. They are both in the lower left corner. Step 4: Pry out the fuses. Take note of which one goes where–this is important if you don’t want to immediately have to replace the 400mA fuse again. |Removing the 11 A fuse| |Removing the 440 mA fuse| Step 5: Test the fuses. You can test the fuses with a different multimeter (although this multimeter will still work to test resistance even with the fuses out, because essentially they are out now). The “blown” fuses should measure an open circuit (or infinite resistance): |The blown fuse measures an open circuit| Note how dark in color the bad fuse is versus the bright white new one: |The new fuse is on the left and the darker one is on the right.| The good fuse will measure like this (here on the 200 ohm setting): |The good fuse measures .8 ohms| Step 6: Reinstall the fuses. They will snap in there easily. Make sure the fuse with blue writing (440 mA) is in the top left and the green (11 A) fuse is in the bottom right. One thing you should watch out for is that the fuse does not touch the terminal. It is possible to have the fuse too low and it will touch the top terminal in the picture below. I don’t know what it would do if that was the case. |Don’t let the fuse touch the terminal.| Step 7: Put the front case back on and reinstall the 4 screws on the back. Piece of cake. Just for fun I decided to test my new fuses by measuring the current that my cheap multimeter uses to test resistance. 1.86 mA flows through whatever circuit you are testing for resistance and the resistance of the input of the Fluke 177 is 2.4 ohms when it’s in the 400 mA terminal and on the DC Amp setting: |Measuring the current used in testing resistance.| Other Useful Information:
physics
https://yuko.eu/catalog/greases/special/goi-54p/
2021-01-17T02:37:09
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703509104.12/warc/CC-MAIN-20210117020341-20210117050341-00620.warc.gz
0.936822
115
CC-MAIN-2021-04
webtext-fineweb__CC-MAIN-2021-04__0__166667873
en
GOI-54p is used in low-loaded friction units, including the mechanisms of artillery guns. Used in the conservation of mechanisms and devices. It consists of low viscosity oil, thickened with ceresin. Contains antioxidant additive. It has high protective properties. Retains its properties during storage for 10 years; Protects metal products from corrosion up to 5 years; Effective at temperature range from -40 °C to + 50 °C; Exceeds other low-temperature greases in water resistance, colloid and chemical stability
physics
https://charlemagnews.xyz/2023/03/14/nasa-tracking-asteroid-that-could-impact-earth-in-2046/
2023-12-07T17:27:18
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100677.45/warc/CC-MAIN-20231207153748-20231207183748-00520.warc.gz
0.937066
751
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__135229184
en
Asteroid impact is a possibility, albeit a remote one, as per NASA’s warning of a “very small chance” of a newly detected asteroid colliding with Earth in 23 years. The asteroid in question has a small chance of hitting our planet on February 14, 2046, potentially ruining Valentine’s Day for future generations of romantics. What are the chances of asteroid 2023 DW hitting Earth? The European Space Agency has estimated the likelihood of impact to be 1 in 625, while the Sentry system at NASA’s Jet Propulsion Laboratory (JPL) places it at 1 in 560. By way of comparison, your chances of getting hit by lightning are about 1 in 15,000, the odds of dying in a plane crash are over 1 in 10,000,000, and death by shark is somewhere between 1 in 3.7 million and 1 in 7 million, depending on your proximity to the sea. The solitary item featured on NASA’s risk list is a rock named 2023 DW, which scores a mere 1 out of 10 on the Torino Impact Hazard Scale. This scale gauges the likelihood of an object striking Earth. Despite 2023 DW being the only entry, a score of 1 implies that the chances of a collision are “extremely unlikely” and require no public attention or alarm. ‘A routine discovery in which a pass near the Earth is predicted that poses no unusual level of danger. Current calculations show the chance of collision is extremely unlikely with no cause for public attention or public concern. New telescopic observations very likely will lead to re-assignment to Level 0,’ reads the official assessment in a comfort-giving green shade. Speaking to CNN, Davide Farnocchia, a navigation engineer working on JPL’s Sentry system, remarked, “This object is not particularly concerning.” Nevertheless, NASA experts caution that the probability of impact may shift over time. What would the ‘impact’ between Earth and 2023 DW be like? “We’ve been tracking a new asteroid named 2023 DW that has a very small chance of impacting Earth in 2046,” announced NASA Asteroid Watch on Twitter. “Often when new objects are first discovered, it takes several weeks of data to reduce the uncertainties and adequately predict their orbits years into the future.” According to NASA’s Eyes on Asteroids system (check out the cool interactive graphic on that page), the asteroid has a diameter of around 160 feet (50 metres) — which is the height of the Arc de Triomphe in Paris, France — and it is currently travelling at around 15.5 miles per second (25 kilometers per second). What can we do if an asteroid is heading towards Earth? DART, that’s what. On 26 September 2022, a spacecraft weighing 570 kg was fired into the asteroid moon Dimorphos. The aim was to shift the orbit of the moon to prove it would be possible to alter the direction of space debris that could threaten Earth in the future. It succeeded in this aim, shortening the moon’s orbit by 32 minutes. Both the impact of the spacecraft and the aftermath were caught on camera by NASA’s two space telescopes, Hubble and James Webb. “That’s the very reason why we flew that mission,” the aforementioned Farnocchia said about DART in relation to the latest mild threat. “And that mission was a spectacular success.” So breathe easy, my fellow humans. There are much greater problems being created by ourselves to worry about first.
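For readers who find odds like “1 in 560” hard to picture, the figures quoted in this article can be converted to percentages with a few lines of Python; the numbers below are simply the ones given above.

odds = {
    "2023 DW impact (ESA estimate)": 625,
    "2023 DW impact (JPL Sentry estimate)": 560,
    "getting hit by lightning": 15_000,
    "death by shark (higher-risk estimate)": 3_700_000,
    "dying in a plane crash": 10_000_000,
}
for event, one_in in sorted(odds.items(), key=lambda kv: kv[1]):
    print(f"{event}: 1 in {one_in:,} = {100 / one_in:.4f}%")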
physics
https://raport2016.pse.pl/en/our-company
2023-12-02T05:13:20
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100327.70/warc/CC-MAIN-20231202042052-20231202072052-00806.warc.gz
0.95397
193
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__168249391
en
Ensuring common access to electricity requires an efficiently operating system for its generation, transmission and distribution. All equipment connected to it, including consumer facilities, form the Polish Power System. Electric energy supplied to our homes is generated mainly in power plants and combined heat and power (CHP) plants. In Poland, the basic energy generating sources are thermal power plants in which energy is generated as a result of combustion – usually by burning hard coal or lignite. The largest cluster of those plants is situated in the southern part of the country. In large cities, CHP plants operate which are mostly fired with coal, but also natural gas. Renewable energy sources (RES) are also developing: wind, hydro, biomass and photovoltaic. Energy transmission from power plants to consumers is possible over an extensive network of power lines and electrical substations. Different voltage levels are used to optimise costs, depending on the distance over which electricity is transmitted.
physics
https://www.cines.fr/en/supercomputing-2/
2022-07-07T07:50:16
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683708.93/warc/CC-MAIN-20220707063442-20220707093442-00527.warc.gz
0.92455
620
CC-MAIN-2022-27
webtext-fineweb__CC-MAIN-2022-27__0__145006817
en
CINES hosts advanced equipment, including the supercomputer owned by GENCI (Grand National Equipment for Supercomputing), which to date ranks among the leading European machines. The computational power made available to the research community gives researchers the opportunity to address big scientific challenges. Extreme simulations of complex physical situations, which were not feasible until recently, are now possible in domains as varied as fluid mechanics, physics, chemistry, biology, climatology, astrophysics and environmental science. Numerical simulation has become a research method on the same level as analysis and experiment, and the community of users of supercomputing capabilities therefore grows and is renewed every year. A supercomputer ranked at world level with a peak performance of 3.5 Pflops - The supercomputer Occigen The result of a competitive dialogue procedure conducted by GENCI, this parallel bullx supercomputer, designed by the French company Bull, has been hosted at CINES since December 2014. The system has a peak performance of 2.1 Pflops (2 million billion operations per second). Built from shared-memory compute nodes interconnected via an InfiniBand network, it comprises 85,824 Intel Xeon™ cores, each with 2.5 GB of memory. A local storage capacity of 5 PB provides rapid access (106 GB/s) to data (/scratch) managed by the Lustre parallel file system. A second space (/home), of Panasas type, offers a rate of 5 GB/s. The clusters hosted at CINES are equipped with basic software: compilers, mathematical libraries, parallel tools, etc., as well as with software specific to particular scientific domains, depending on the needs of the scientific community. All these tools are chosen and installed to get the best performance. Quality services for scientific users The department of Intensive Computing aims at training, advising and helping researchers to use the computing environments, at managing and optimizing the usage of the clusters, and at participating in national and European projects. It offers: - training in the parallelization of codes - advice and help for users - seminars about the contribution of parallel clusters - documentation which is made available online - technology watch, through collaborations with university laboratories (hosting interns, PhD students and postdoctoral researchers) and with the R&D teams of the suppliers - the on-site hosting of experienced researchers from France or Europe for better interaction with CINES experts, with CINES contributing to the accommodation costs CINES has strong connections with the ‘Maison de la Simulation’ (House of Simulation) in Paris as part of its participation in the PRACE Advanced Training Centre of the European project PRACE, and offers international training courses in parallel computing.
physics
https://www.traworld.com/en-GB/service/2375-windlab-indoor-skydiving-experience
2023-10-01T13:03:03
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510888.64/warc/CC-MAIN-20231001105617-20231001135617-00701.warc.gz
0.913368
456
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__309466124
en
Windlab Indoor Skydiving is the simulation of an actual outdoor skydive at 14,000 feet. One of the hottest thrill rides and sport-flying experiences in Asia, Windlab is the first indoor skydiving facility in Kuala Lumpur, Malaysia. Everyone who has dreamed of flying can visit one of the world's largest shopping malls, 1 Utama, and experience the thrill of flight. The all-glass-paneled circular wind tunnel creates a cushion of air in a 12-foot-diameter, 32-foot-high flight chamber; the flyer steps onto this cushion of air and becomes airborne, all within reach of a qualified instructor. The base of the chamber is a trampoline floor of aircraft-quality stainless steel, and wind speeds reach up to 250 km/h. We run hourly sessions from 11am until 9pm daily. The entire process for one session can take up to 1 hour and 30 minutes, depending on how full the session is. Our maximum is 15 flyers per session. Our standard is 2 flights per person. Each flight is 50 seconds long. All flyers are to arrive at our facility 1 hour prior to their booked session for check-in, waiver signing, a brief training session, and gearing up in our standard flight equipment, followed by the flight. Classes start ON TIME. If you are late, you will not be able to fly. No refunds are entertained. Each flyer enters the wind tunnel for a first flight of 50 seconds, then exits and rests while the next flyer enters. All flyers in the same session rotate until it is back to the first flyer for the 2nd flight. Flight height is up to 8-9 feet depending on the flyer’s ability. We only fly one person in the wind at a time. Each flyer will be accompanied by one of our qualified instructors to assist them. We offer the optional HIGHRIDE* (the High Ride is where one of our more experienced instructors flies the flyer towards the top of our 10 m tunnel). The HIGHRIDE is executed at the tail end of the 2nd flight. Once all flyers have completed their flights, the session ends and everyone exits to de-gear. The HIGHRIDE is only available in selected sessions and must be prebooked.
physics
http://alegremath.com/thats-the-way-the-ball-bounce.html
2024-03-02T20:08:51
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475897.53/warc/CC-MAIN-20240302184020-20240302214020-00573.warc.gz
0.819002
1,461
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__63164973
en
That's the way the Ball Bounce As the ball bounces up and down, the maximum height it reaches continually decreases from one bounce to the next. For any particular bounce, if the ball's height is plotted as a function of time, the resulting graph has a parabolic shape. The relationship between height and time for a single bounce of a ball, then, is quadratic. Expressed mathematically: y = ax^2 + bx + c, where y represents the ball's height at any given time, x. It is possible to mathematically model a ball's bouncing behavior using a series of quadratic functions. In this activity, you will record the motion of a bouncing ball using a Calculator Based Ranger (CBR). You will then analyze the collected data and attempt to model the variations in a bouncing ball's height as a function of time for one particular bounce. 1 CBR unit 1 Ball (racquetballs or basketballs work well) PRIOR TO THE ACTIVITY, MAKE THE FOLLOWING PREDICTIONS: 1. Make a prediction of the height from the floor as a function of time. 2. Make a prediction of the distance from the CBR as a function of time. Be sure the ball is bounced on a smooth surface. Do not allow anything to obstruct the path between the CBR and the ball while data is being collected. Run RANGER on your calculator by selecting the CBR/CBL App from the Applications menu. From the main menu of RANGER, choose meters as the units. Select 3:Ball Bounce. Be sure to hold the CBR at least 1.5 meters from the ball when collecting data. The resulting plot of distance versus time should appear to be a series of parabolic sections with decreasing maximum heights. If you are dissatisfied with your results, press ENTER and choose Repeat Sample to recollect the data. Once you are satisfied with the results, make a sketch of the height versus time plot. Include the units and values along each axis. SELECTING THE DATA: We will analyze the data for one parabola. From the LIST menu, arrow right to OPS and choose 8:Select(. The Select feature allows you to choose a portion of the graph and place those data points into another set of lists. You type the lists where you want to place the new data, separated by commas. For this activity, place the data in L3 and L4. Your home screen should show Select(L3,L4). Press ENTER and you will be taken to the graph. Move the cursor to the beginning of the parabola that you want to choose. Avoid the sharp point unless you are sure it is part of the parabola that you want. Press ENTER to mark this point. Move the cursor to the end of the parabola and press ENTER. The plot will now show the selected data, which is located in L3 and L4. Sketch the selected data in the space to the right. 1. In this activity, the ball bounced straight up and down beneath the CBR, yet the data plot seems to depict a ball that is bouncing sideways. Explain why this is so. 2. Press TRACE. Move along the height versus time plot and estimate the x- and y-coordinates of the vertex of the parabola (in this case, the maximum point on the curve). Record these values in the table below. 3. What do the x and y coordinates represent physically? 4. The theoretical model for the height vs time data is quadratic. We will attempt to fit our data with a quadratic function of the form y = a(x - b)^2 + c, where b is the x-coordinate of the vertex, c is the y-coordinate of the vertex, and a determines the parabola's dilation (stretch or spread). This model is sometimes called vertex form. 5. You will use an application, Transformation Graphing, to help you fit the equation to this data. Press the [APPS] key and select Transfrm. 
Press Y= and move the cursor to Y1. Enter the equation Y = A(X - B)^2 + C and then press GRAPH. Move the cursor to the B on the screen and enter the value for the x-coordinate of the vertex. Move the cursor to the C and enter the value of the y-coordinate of the vertex. 6. To obtain a good fit, you will need to adjust the value of A. Use the method described above to store different numbers to the variable A. Record the A-value that works best in the space below: A = ______________ 7. It is also possible to express any quadratic function in the general form described earlier, y = ax^2 + bx + c, where the coefficient a is identical to that found in question (6) above, but b and c are different. To determine these coefficients, substitute the proper values for A, B, and C found above into the vertex-form expression, expand it, and collect like terms. Record the corresponding values of a, b, and c in the table below: 8. The calculator has a built-in feature that allows it to compute the best-fitting quadratic equation through a set of data. To perform a quadratic regression on the data that you selected, press STAT and arrow right to CALC. Select QuadReg to place the quadratic regression command on the home screen. Then press VARS, arrow right to Y-VARS, select Function, choose Y2, and press ENTER. 9. Copy the values that appear on your calculator screen into the matching table to the right. Are the values of a, b, and c in the quadratic regression equation above consistent with your table values from question (7)? 10. Press Y=, move the cursor onto the equal sign for Y2 and press ENTER to turn on this equation. How well does it fit with your data? 11. In your own words, describe how the constant a affects the graph of y = a(x - b)^2 + c. Specifically, how does the sign of a change the graph? 12. Suppose you had chosen the parabolic section for the bounce just to the right of the one you actually used in this activity. Describe how each of the constants A, B and C would have to change, if at all, in order to fit this parabolic section with the equation Y = A(X - B)^2 + C. 13. From your physics lessons, you learned that for constant acceleration, the equation for motion is y = (1/2)ax^2 + v0x + y0, where y is the height at time x, a is the acceleration, v0 is the initial velocity, and y0 is the initial position. How does this equation relate to the data that you collected? Relate each part of the equation and each variable. 14. From your data, what is the value of the acceleration due to gravity?________________ 15. How close is this to the theoretical value? Find the percentage of error. Show your calculation. 16. Do the values of b and c from your regression equation represent the initial velocity and initial position of the ball? Explain your answer.
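For anyone who wants to repeat the analysis away from the calculator, the same fit can be done in a few lines of Python. The data below are synthetic stand-ins for the selected lists L3 (time) and L4 (height); with real CBR data you would paste in your own measurements. The variable names are illustrative.

import numpy as np

g = 9.8                                       # m/s^2, used only to generate fake data
t = np.linspace(0.05, 0.75, 30)               # time for one bounce, in seconds (stand-in for L3)
y = 0.02 + 3.9 * t - 0.5 * g * t**2           # height in meters (stand-in for L4)
y += np.random.default_rng(3).normal(0, 0.003, t.size)   # a little sensor noise

a, b, c = np.polyfit(t, y, 2)                 # general form y = a*x^2 + b*x + c
print(f"a = {a:.3f}, b = {b:.3f}, c = {c:.3f}")

# Comparing with y = (1/2)*accel*x^2 + v0*x + y0, the x^2 coefficient is half the
# acceleration, so the fitted acceleration (ideally about -9.8 m/s^2) is:
print(f"fitted acceleration = {2 * a:.2f} m/s^2")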
physics
https://fashiononstore.com/bruce-weber-photographer-shares-insight-into-using-reflectors-for-photography/
2024-04-20T17:38:43
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817670.11/warc/CC-MAIN-20240420153103-20240420183103-00392.warc.gz
0.930645
614
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__82206175
en
Photography is way more than simply knowing how to work a camera. Understanding light, and the manner in which it bounces and spreads, is also pretty important. Some objects absorb light, while others bounce it back in another direction. A reflector is an important tool that allows photographers to manipulate the light by offering another surface for the light to bounce off. Reflectors are simple, cost-effective tools that can have a huge impact on images. Rather than extensive lighting set-ups, Bruce Weber Photographer prefers to use just a reflector for many of his shoots. Bruce Weber Photographer talks about clicking amazing photos with the help of a reflector In photography, a reflector is basically a tool that reflects light. It does not create light like a flash. A reflector just redirects the existing light. It can redirect light from the sun or even from a flash or studio strobe. The quality of light from a reflector will be the same as the light of the scene. For example, if a photographer is shooting at sunset, the light bouncing off their reflector is likely to have a bit of an orange hue. There can, however, be exceptions to this. After all, reflectors do come in multiple colors and types. The color of the reflective surface might change the tone of the light that is bounced back. A typical white reflector bounces the light back as it is, and adds a nice, soft touch to it. Silver reflectors do not change the color of the light as such, but do provide a bit brighter light than the one reflected off a white one. Gold reflectors are designed to change the color of the light by warming it with an orange tone. As reflectors do not create light, their key purpose is mostly to fix shadows. They can be used to fix odd shadows on the face when shooting a portrait outdoors during daytime. Reflectors can also be helpful in preventing a backlit subject from becoming a silhouette. If the light is directly behind the subject, a reflector should be placed directly in front of them to help prevent a silhouette. Conversely, if the light is coming from one side, a reflector should be used on the opposite side to help fill in the shadows. Placing a reflector close to objects blocking the light can also be ideal for clicking stunning images. Reflectors are highly versatile tools, on the whole. Most experienced photographers know how to use them in a myriad of ways. For instance, reflectors can be used to add more drama or interest to the shot in flat lighting. A few photographers also use reflectors as hair lights outdoors. Certain reflectors have a black side that can be used for blocking out light rather than reflecting it. Reflectors are also great for bouncing a flash when there is nothing around to bounce it off of. Industry professionals like Bruce Weber Photographer do have a good understanding of how to use reflectors proficiently. Amateur photographers, on the other hand, must pay heed to important tips and tutorials available online to hone their capabilities.
physics
http://blog.sheffercorp.com/things-you-never-knew-cylinders-could-teach-you-about-star-wars-or-maybe-just-the-force
2019-07-17T15:18:49
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525312.3/warc/CC-MAIN-20190717141631-20190717163631-00515.warc.gz
0.846455
568
CC-MAIN-2019-30
webtext-fineweb__CC-MAIN-2019-30__0__61156563
en
Cylinders convert a fluid pressure into a mechanical force. The input fluid pressure acts on the surface area of the cylinder's piston, displacing it to create a linear movement of the piston rod. The output force of the cylinder is based on the pressure of the input fluid and the surface area of the piston (which is driven by the cylinder's bore diameter). First, let's take a look at how pressure affects a cylinder's force. For this example we will compare a 2” bore pneumatic cylinder running on 90 psi (common shop air pressure) and a 2” bore hydraulic cylinder running on 3000 psi (common hydraulic pump pressure). Since both cylinders have the same bore diameter, their piston surface area will be the same. A quick calculation for determining the surface area is as follows: For extend area: .7854 x (bore diameter)^2 For retract area: .7854 x ((bore diameter)^2 - (rod diameter)^2) So for our 2” bore cylinders the extend area would be .7854 x (2”)^2 = 3.14 sq. in. Now that we have the piston surface area we can multiply it by our cylinders' input pressures to get our output forces. For the pneumatic cylinder, 90 psi x 3.14 sq. in. = 282 lbf. For the hydraulic cylinder, 3000 psi x 3.14 sq. in. = 9420 lbf. As shown on the graph, with a fixed bore size the input pressure has a linear relationship with the output force. For the next example, let's look at how changing the bore size affects the output force. Using the same 2” bore hydraulic cylinder running on 3000 psi from the example above, let's compare it to a 4” bore cylinder with the same pressure. From the previous example, the 2” bore cylinder has an extend area of 3.14 sq. in., giving 3.14 sq. in. x 3000 psi = 9420 lbf. The 4” bore cylinder has an extend area of .7854 x (4”)^2 = 12.56 sq. in. This gives us an output force of 12.56 sq. in. x 3000 psi = 37680 lbf. An easy way to remember this relationship when sizing a hydraulic system: when you double the bore size of a cylinder, you get four times the output force at the same input pressure. Here is a graph of the most common NFPA hydraulic cylinder bore sizes and their output force based on a 3000 psi pressure input. Without "force," hydraulic systems just wouldn't work. A little late, we know, but may the force (or May the Fourth) be with you. Image credit: Wikimedia Commons
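The same arithmetic is easy to script. The following Python sketch reuses the formulas and example numbers from the post; the function names are illustrative, and the small differences from the figures above come only from the text rounding the areas to 3.14 and 12.56 sq. in.

def extend_area(bore_in):
    return 0.7854 * bore_in ** 2                  # sq. in.

def retract_area(bore_in, rod_in):
    return 0.7854 * (bore_in ** 2 - rod_in ** 2)  # sq. in.

def output_force(pressure_psi, area_sq_in):
    return pressure_psi * area_sq_in              # lbf

print(output_force(90,   extend_area(2)))   # ~283 lbf    (2" bore pneumatic at 90 psi)
print(output_force(3000, extend_area(2)))   # ~9,425 lbf  (2" bore hydraulic at 3000 psi)
print(output_force(3000, extend_area(4)))   # ~37,699 lbf (4" bore hydraulic at 3000 psi)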
physics
https://www.littletoncoin.com/shop/2020-ghana-2-cedis-titanium-nikola-tesla-ghna2-wc
2024-04-16T05:04:17
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817043.36/warc/CC-MAIN-20240416031446-20240416061446-00865.warc.gz
0.904357
241
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__71630547
en
2020 Ghana Titanium 2 Cedis Titans of American Innovation - Nikola Tesla Select a Grade: Uncirculated Discover a new world of collecting with this dramatic profile of the brilliant Nikola Tesla on the reverse of an Uncirculated bi-colored 2020 titanium coin. This impressive coin is the second in a new Titans of American Innovation series. It honors the Serbian-American whose invention of a transformer that could easily convert different voltages – alternating currents – revolutionized the electrical system we use today. Tesla immigrated to the U.S. in 1884. He sold the patent rights to his system of AC dynamos, transformers, and motors to George Westinghouse. In 1893, Tesla’s AC current was used to light the World’s Columbian Exposition in Chicago which led to a winning contract to make the first water-powered machinery at Niagara Falls. Another invention, the Tesla coil, allowed for the wireless transfer of electricity for such applications as radio, telegraphy and TV. Order this unique 2 Cedis commemorative from Ghana honoring a great innovator and pioneer in the field of radar, X-ray and remote-control technology.
physics
http://www.the-esa.org/blog/article/-/iot-growth-requires-energy-efficiency
2020-01-29T05:12:30
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251788528.85/warc/CC-MAIN-20200129041149-20200129071149-00520.warc.gz
0.916028
806
CC-MAIN-2020-05
webtext-fineweb__CC-MAIN-2020-05__0__34292700
en
IoT Growth Requires Energy EfficiencyWednesday 22nd February 2017 The “internet of things” is growing and affecting all our lives, whether we like it or not! What is it? As Larry Hardesty of MIT news explains, it is the idea that vehicles, appliances, civil structures, manufacturing equipment, and even livestock will soon have sensors that report information directly to networked servers, aiding with maintenance and the coordination of tasks. Those sensors will have to operate at very low powers, in order to extend battery life for months or make do with energy harvested from the environment. But that means that they’ll need to draw a wide range of electrical currents. A sensor might, for instance, wake up every so often, take a measurement, and perform a small calculation to see whether that measurement crosses some threshold. Those operations require relatively little current, but occasionally, the sensor might need to transmit an alert to a distant radio receiver. That requires much larger currents. Generally, power converters, which take an input voltage and convert it to a steady output voltage, are efficient only within a narrow range of currents. But at the International Solid-State Circuits Conference held this month, researchers from MIT’s Microsystems Technologies Laboratories (MTL) presented a new power converter that maintains its efficiency at currents ranging from 500 picoamps to 1 milliamp, a span that encompasses a 2,000,000-fold increase. The researchers’ converter is a step-down converter, meaning that its output voltage is lower than its input voltage. In particular, it takes input voltages ranging from 1.2 to 3.3 volts and reduces them to between 0.7 and 0.9 volts. The control circuitry for the switches includes a circuit that measures the output voltage of the converter. If the output voltage is below some threshold — in this case, 0.9 volts — the controllers throw a switch and release a packet of energy. Then they perform another measurement and, if necessary, release another packet. If no device is drawing current from the converter, or if the current is going only to a simple, local circuit, the controllers might release between 1 and a couple hundred packets per second. But if the converter is feeding power to a radio, it might need to release a million packets a second. To accommodate that range of outputs, a typical converter — even a low-power one — will simply perform 1 million voltage measurements a second; on that basis, it will release anywhere from 1 to 1 million packets. Each measurement consumes energy, but for most existing applications, the power drain is negligible. For the internet of things, however, it’s intolerable. The developed converter thus features a variable clock, which can run the switch controllers at a wide range of rates. That, however, requires more complex control circuits. The circuit that monitors the converter’s output voltage, for instance, contains an element called a voltage divider, which siphons off a little current from the output for measurement. In a typical converter, the voltage divider is just another element in the circuit path; it is, in effect, always on. But siphoning current lowers the converter’s efficiency, so in the MIT researchers’ chip, the divider is surrounded by a block of additional circuit elements, which grant access to the divider only for the fraction of a second that a measurement requires. 
The result is a 50 percent reduction in quiescent power over even the best previously reported experimental low-power step-down converter, and a tenfold expansion of the current-handling range. On its own this is energy efficiency at a small scale, but multiplied across the ever-increasing number of devices in the Internet of Things, the combined energy savings are potentially huge, quite apart from the improved efficiency of the individual devices and gadgets. Source and picture from MIT.
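As a rough illustration of the packet-based control scheme just described (sample the output voltage, and release a packet of charge only when it sags below the threshold), here is a toy Python simulation. The capacitor size, packet charge, and load currents are invented for the sketch and are not taken from the MIT design; the point is only that the packet rate tracks the load instead of running at a fixed clock.

```python
# Toy model of on-demand "packet" regulation: the controller checks the output
# voltage and transfers a fixed packet of charge only when it drops below the
# threshold, so light loads trigger few measurements and transfers.
THRESHOLD_V = 0.9        # target output voltage, as in the article
C_OUT = 1e-6             # output capacitor in farads (invented value)
PACKET_CHARGE = 1e-9     # charge per packet in coulombs (invented value)
DT = 1e-6                # simulation time step in seconds

def packets_per_second(load_current_a, sim_time_s=1.0):
    v_out, packets = THRESHOLD_V, 0
    steps = int(sim_time_s / DT)
    for _ in range(steps):
        v_out -= load_current_a * DT / C_OUT   # the load slowly discharges the output
        if v_out < THRESHOLD_V:                # controller's voltage check
            v_out += PACKET_CHARGE / C_OUT     # release one packet of charge
            packets += 1
    return packets / sim_time_s

# A light load needs only a trickle of packets; a heavy (radio-like) load needs many.
for load in (1e-6, 1e-3):
    print(f"load {load:.0e} A -> ~{packets_per_second(load):,.0f} packets/s")
# The same decision logic scales down to picoamp loads; they simply need a much
# longer simulated interval before a single packet is required.
```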
physics
https://www.tg-quotidiano.net/archives/introduction-to-infrared-thermometer-features-functions-and-common-problems
2023-05-30T11:49:12
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224645595.10/warc/CC-MAIN-20230530095645-20230530125645-00772.warc.gz
0.937144
454
CC-MAIN-2023-23
webtext-fineweb__CC-MAIN-2023-23__0__164310497
en
Introduction to Infrared Thermometer: Features, Functions and Common Problems. Infrared thermometers, also known as non-contact thermometers, are widely used in various fields like industry, medicine, science, and research. In this article, we will discuss the features, functions, and common problems of infrared thermometers. Appearance and Design Infrared thermometers are designed with a pistol-shaped grip, which is both comfortable and easy to hold. The front end of the thermometer has an infrared sensor, which helps to detect the temperature of any object that comes into its field of view. The display panel is attached to the back of the device, allowing the user to view the temperature readings, and the menu screen. Temperature Range and Accuracy The temperature range of infrared thermometers can vary from model to model, but most of them can measure temperatures within the range of -50°C to 1000°C. Generally, infrared thermometers are very accurate, with a typical accuracy of ±2%, but this can vary depending on the type and quality of the device. Uses and Applications Infrared thermometers are mainly used to measure surface temperatures of objects without the need for direct contact. This makes them ideal for use in situations where the target is moving, hard to reach, or potentially dangerous, such as measuring the temperature of tires, machinery, and electrical components. Infrared thermometers are also used in the medical and scientific fields to measure body temperature, food temperatures, and laboratory samples. The most common issue with infrared thermometers is that the readings may be affected by temperature or humidity variations. Irregular temperature patterns can also cause errors, such as reflections, cold spots, or hot spots. To minimize these problems, infrared thermometers must be calibrated regularly to ensure consistent results. Infrared thermometers are an essential tool in many industries and fields. Their design and construction make them easy to use for measuring temperature without direct contact, making them ideal for a wide range of applications. However, it is important to understand and address the common problems that can affect their performance to ensure accurate and reliable results.
physics
https://bochnerip.com/attorneys/eric-kleinertz/
2022-12-10T09:03:21
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710421.14/warc/CC-MAIN-20221210074242-20221210104242-00019.warc.gz
0.944117
221
CC-MAIN-2022-49
webtext-fineweb__CC-MAIN-2022-49__0__3350210
en
Eric Kleinertz is a law clerk with experience in patent and trademark prosecution. Eric has aided in the prosecution of patents pertaining to medical devices, consumer appliances, agriculture, green energy, computer software, and more. Eric’s experience in Intellectual Property began in the Brooklyn Law Incubator and Policy Clinic, which provides pro bono legal services to startup technology companies. Also during Eric’s career at Brooklyn Law School, Eric excelled in a wide array of Intellectual Property coursework, earning him the Intellectual Property, Media & Information Law Certificate. Before embracing the world of Patent Law, Eric was already immersed in science and technology. During Eric’s time at New York University, he completed courses ranging from Quantum Mechanics to Computational Physics and held an executive position in the Society of Physics Students. Eric has also worked with manufacturing and aerospace industry leaders during his time with an industrial machinery broker. - Brooklyn Law School, J.D., 2021 - New York University, B.S. Physics, 2016
physics
https://www.nighthunter.com.au/kentli-batteries-chargers
2020-08-07T16:08:35
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737204.32/warc/CC-MAIN-20200807143225-20200807173225-00198.warc.gz
0.889874
173
CC-MAIN-2020-34
webtext-fineweb__CC-MAIN-2020-34__0__134551837
en
This battery uses lithium-ion cells and converts their 3.7 V cell voltage to a constant 1.5 V output through voltage conversion technology. It completely replaces disposable alkaline batteries and nickel-metal hydride batteries. The stable 1.5 V output of these universal lithium batteries also solves the problem of some electric appliances being unable to work because Ni-MH rechargeable batteries' voltage is too low (1.2 V), and it greatly improves battery life. The battery has two terminals: one end outputs 1.5 V for normal use as the electrodes; the other end is 3.7 V and is used to charge the battery. These rechargeable batteries will give you up to 3 times the life of standard alkaline cells and are perfect for all AA and AAA electronics that require 1.5 V.
physics
https://www.willowdentalcare.com/dental-glossary/laser-cavity-preparation/
2022-05-20T19:45:06
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662534669.47/warc/CC-MAIN-20220520191810-20220520221810-00479.warc.gz
0.873966
149
CC-MAIN-2022-21
webtext-fineweb__CC-MAIN-2022-21__0__142578755
en
Laser Cavity Preparation The pulse duration of the KaVo KEY Laser 3 is so short that the reaction threshold of the nerves is not reached. There is often no need for anaesthetics. There are no vibrations and no shrill drilling noise. KaVo KEY Laser 3 kills all germs. Owing to the high energy absorption in water of this laser wavelength, the moisture content of the germ cells is evaporated. The cell membrane bursts and the cells thus die. The KaVo KEY Laser 3 can be used to prepare teeth for a filling when new caries is detected. The conservative preparation minimizes the destruction of natural tooth structure and conditions the enamel and dentine to enhance the bonding of restorative material.
physics
http://www.hmirrorwall.com/more-time-in-air-for-drones-with-wireless-charging/
2017-04-23T21:42:53
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118831.16/warc/CC-MAIN-20170423031158-00611-ip-10-145-167-34.ec2.internal.warc.gz
0.948915
621
CC-MAIN-2017-17
webtext-fineweb__CC-MAIN-2017-17__0__119887619
en
The drone industry is all set to expand its wings beyond warfare. There are many startups which have taken up the noble mission to provide interesting twists to UAVs so that these can be of real help to mankind. A small startup based at Newfoundland, Canada is working on a major issue that could surely have tremendous impact on the future of drone industry. When it comes to practicality and efficacy of drone system, battery life remains a key issue. At present the drone needs to be landed at frequent intervals for battery swapping before it takes the next flight. Otherwise it needs physical contact charge. This process comes in the way of efficient drone operation, more so if it is on an emergency mission. The Canadian startup is working on this particular problem in partnership with Boeing. They aim at recharging unmanned aerial vehicles wirelessly with the help of energy transmitters. These transmitters will be capable of communicating across distances with receivers fitted on drones thereby assisting in wireless charging. This would help the gadget remain in air for longer periods of time. Solace power has developed this technology based on “RC2 resonant capacitive coupling“. Nikola Tesla first used resonances in his experiments related to inductive power transmission more than hundred years ago. Now deriving inspiration from this type of wireless charging technology, Solace Power has come up with drone recharging system which allows greater flexibility in deciding size and shape of the receiver that draws power. Then the system guarantees greater freedom when it comes to aligning the transmitter and receiver for adequate power transmission from point A to point B. The drone recharging technology developed by Solace Power can help bring down overall inactive time for fleets. It could prove to be particularly helpful for industrial, commercial and agricultural applications, since the fleet can include brief runs over charging surfaces into their flight plans so as to keep them airborne for longer period. Another alternative would be to develop charging elements directly into surfaces over which drones work regularly. Warehouses or fixed factory sites can fit in this scheme. A recently released video by the company shows the drone charging pad in action. It is fitted with a green LED which informs when the drone starts charging its battery. No special orientation is needed. The drone just has to remain above the panel. Further announcement has come up from the company’s end that the venture will receive investment support from Industry Canada. It is an official government investing body which provides funding assistance through Boeing. They do have an ultimate motive that this technology will help fulfill Canada’s military procurement needs. Solace Power has already grabbed licenses for this technology to be used across various industries related to powering electrical vehicles and battery-powered equipments that are worn as part of soldier’s kit. It can be also used to power ring motors or dynamos usually used to construct robots, security cameras and helicopters. Till now battery life remains the only hurdle in the way of successful autonomous drone deployment. Now it is to be seen if Solace Power in conjunction with Boeing can indeed bring a revolutionary change to the drone recharging scenario.
physics
https://www.blinzinger-elektronik.de/en/custom-built-ferrites-ferrite-machining/
2022-10-07T16:16:15
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00223.warc.gz
0.889872
193
CC-MAIN-2022-40
webtext-fineweb__CC-MAIN-2022-40__0__77616741
en
Are you looking for a matching air gap or AL-value for your Ferrite or toroidal core or a customized special ferrite core? We are machining ferrite cores according to your requirements or design and manufacture totally new shapes and sizes. Blinzinger - your reliable partner in ferrite cores. Our company has been working with and matching ferrite cores to customers' specifications and requirements for more than 30 years. We are a reliable supplier for our customers and fulfil assigned tasks fast and flexible. We can modify virtually any shape of core. - Manufacturing of new shapes and sizes, ferrite machining, ferrite reworking, ferrite cores with air gap, mechanical air gap or according to AL-value, air gap altering, special ferrites, electrical measuring of ferrites, bonding ferrites, separating ferrites (segmenting toroidal cores), Slotting toroidal cores and a lot more in relation to ferrite cores
physics
https://store.simplifaster.com/product/iron-neck-resistance-band/
2020-07-13T14:12:47
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657145436.64/warc/CC-MAIN-20200713131310-20200713161310-00439.warc.gz
0.854591
136
CC-MAIN-2020-29
webtext-fineweb__CC-MAIN-2020-29__0__235531394
en
No cable machine? No problem. With our resistance bands, you can use the Iron Neck almost anywhere! Our cinch anchor securely loops around any stable vertical post (i.e. squat racks, structural columns, goal posts, basketball hoops, etc). Clip one end of the resistance band into the cinch anchor and the other into the Iron Neck and you’re good to go! Select Resistance Weight: There are three different max resistance weights: Starter (0-25 lbs resistance), Intermediate (0-35 lbs resistance) and Advanced (0-50 lbs resistance). Resistance bands ship with a chart illustrating resistance weights, per band, at incremental distances.
physics
https://shewalkssoftly.com/2016/07/28/robert-hodgin-dodecahedral-variations/
2017-03-27T18:22:24
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189495.77/warc/CC-MAIN-20170322212949-00480-ip-10-233-31-227.ec2.internal.warc.gz
0.957911
229
CC-MAIN-2017-13
webtext-fineweb__CC-MAIN-2017-13__0__116393593
en
Robert Hodgin: Dodecahedral Variations I found this while looking for Ernst Haeckel pieces to feature in various craft projects…and I love this sculpture so much I can barely stand it. Robert Hodgin created a series of magnetic solids inspired by magnetism (of course) and mathematical concepts in a two person show at the Gray Area Foundation for the Arts (GAFFTA). In his own words: The largest structure required a temporary scaffold of sorts. Without the scaffold, I would be unable to complete the form without it succumbing to gravity. This structure took me a few hours to create and on the day of the show, it collapsed on the way to the gallery. I ended up rebuilding it the day of the show. These forms are created with cylinder magnets, spherical magnets, and ball bearings. Magnetism is the only thing holding the forms together. They are fairly fragile and picking them up will likely crush them. All of the forms I created were variations of the 12 sided dodecahedron. This particular platonic solid seems to be the form the magnets are happiest with.
physics
https://mbnusa.biz/detail/annual-award-supports-research-education-in-quantum-optics-photonics-at-hbcus?show_comment=1
2024-04-12T18:34:56
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816045.47/warc/CC-MAIN-20240412163227-20240412193227-00040.warc.gz
0.913099
1,187
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__1008817
en
Annual award supports research, education in quantum optics, photonics at HBCUs BELLINGHAM, Washington, USA — SPIE, the international society for optics and photonics, and IBM Quantum have selected Wesley Sims, an assistant professor of physics at Morehouse College, as this year’s recipient of the IBM-SPIE HBCU Faculty Accelerator Award in Quantum Optics and Photonics. Sims is also the director of the college’s Micro/Nano Optics Research & Engineering Laboratory. The $100,000 annual award, presented jointly by the IBM-HBCU Quantum Center and SPIE, the international society for optics and photonics, supports and promotes research and education in quantum optics and photonics within IBM-HBCU Quantum Center member institutions, currently 24 historically Black colleges and universities (HBCUs). The IBM-SPIE joint annual award is expected to provide a shared total of $500,000 over five years. The inaugural recipient was Renu Tripathi, a professor of physics and engineering at Delaware State University. “We are particularly proud of this partnership with IBM and our shared contribution in quantum research programs at HBCUs,” said SPIE CEO Kent Rochford. “Quantum science is going to have an increasingly large impact on society, and a diverse population of skilled and knowledgeable students will greatly enhance the critical science and technology of the future. We are grateful for leaders like Dr. Sims and look forward to seeing the results from his group at Morehouse.” The technical goal of Sims’ proposal is “to study integrated photon-photon correlation architectures that can provide ultrafast sensitivity and thus examine both coherent and incoherent excitation mechanisms in a plethora of quantum materials, from solid-state to soft condensed matter systems.” In addition, he will be leveraging an established collaboration between Morehouse, the University of California, Los Angeles (UCLA), and Stanford University’s SLAC National Accelerator Laboratory: the California institutions will provide additional mentorship and training, as well as hands-on summer research opportunities for Morehouse physics undergraduates. “I’m extremely excited about our continued partnership with SPIE,” said Academic Alliance Lead, Partner Ecosystem at IBM Quantum Kayla Lee. “The work that Dr. Sims is doing at Morehouse College is a perfect example of how we define impact within the IBM-HBCU Quantum Center. His collaborative research across institutions creates opportunities for students to enter and thrive in this emerging discipline of quantum information science and engineering.” “I am extremely humbled and excited to receive the IBM-SPIE HBCU Faculty Accelerator Award,” said Sims. “Not only does it grant me the opportunity to increase research capacity at Morehouse College, it enables me to involve undergraduate students in our optics and photonics research. Morehouse has a long history of producing underrepresented minorities in STEM, but there is still the need to develop and build upon existing research programs and infrastructure, particularly in optics and photonics. In addition, the collaborative approach with other institutions will expose and prepare our students for graduate-level rigor as well as building a quantum-focused network that will provide many opportunities for them. The training they receive within this program will help close the gap between workforce needs and available talent in the quantum field. 
If we can create a pipeline from Morehouse to UCLA — and ultimately to the quantum workforce — we can consider the program a success." Sims holds a PhD in applied physics from Alabama A&M University, an MEng from the University of Alabama at Birmingham, and a BS in physics from Morehouse. His research interests encompass cross-phase optics, micro/nano optics fabrication, optical quadrature microscopy, extreme ultraviolet lithography, terahertz imaging, and nanostructure characterization. In addition, Sims' background work includes laser applications such as interferometric lithography of diffractive materials and fabrication of plasmonic nanostructures. He has also worked with phase change materials (PCM) for switchable photonic devices which involved characterization of asymmetric split ring resonators in the THz region. For more on the IBM-SPIE HBCU Faculty Accelerator Award in Quantum Optics and Photonics — including the next award cycle — please visit our dedicated webpage. SPIE, the international society for optics and photonics, brings engineers, scientists, students, and business professionals together to advance light-based science and technology. The Society, founded in 1955, connects and engages with our global constituency through industry-leading conferences and exhibitions; publications of conference proceedings, books, and journals in the SPIE Digital Library; and career-building opportunities. Over the past five years, SPIE has contributed more than $22 million to the international optics community through our advocacy and support, including scholarships, educational resources, travel grants, endowed gifts, and public-policy development. www.spie.org.
physics
http://ma-salafiyah.sch.id/index.php/2022/06/29/brand-new-mole-try-an-expense-product-similar-to/
2022-08-08T09:34:20
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570793.14/warc/CC-MAIN-20220808092125-20220808122125-00693.warc.gz
0.934419
873
CC-MAIN-2022-33
webtext-fineweb__CC-MAIN-2022-33__0__51804773
en
The identity of a substance is defined not only by the types of atoms or ions it contains, but by the quantity of each type of atom or ion. For example, water, H2O, and hydrogen peroxide, H2O2, are alike in that their respective molecules are composed of hydrogen and oxygen atoms. However, because a hydrogen peroxide molecule contains two oxygen atoms, as opposed to the water molecule, which has only one, the two substances exhibit very different properties. Today, sophisticated instruments allow the direct measurement of these defining microscopic traits; however, the same traits were originally derived from the measurement of macroscopic properties (the masses and volumes of bulk quantities of matter) using relatively simple tools (balances and volumetric glassware). This experimental approach required the introduction of a new unit for amount of substance, the mole, which remains indispensable in modern chemical science. It provides a specific measure of the number of atoms or molecules in a sample of matter. One Latin connotation for the word "mole" is "large mass" or "bulk," which is consistent with its use as the name for this unit. The mole provides a link between an easily measured macroscopic property, bulk mass, and an extremely important fundamental property, number of atoms, molecules, and so forth. A mole of substance is that amount in which there are 6.02214076 × 10²³ discrete entities (atoms or molecules). This large number is a fundamental constant known as Avogadro's number (NA) or the Avogadro constant, in honor of Italian scientist Amedeo Avogadro. This constant is properly reported with an explicit unit of "per mole," a conveniently rounded version being 6.022 × 10²³/mol. Consistent with its definition as an amount unit, 1 mole of any element contains the same number of atoms as 1 mole of any other element. The masses of 1 mole of different elements, however, are different, since the masses of the individual atoms are drastically different. The molar mass of an element (or compound) is the mass in grams of 1 mole of that substance, a property expressed in units of grams per mole (g/mol) (see Figure 3.5). The molar mass of any substance is numerically equivalent to its atomic or formula weight in amu. Per the amu definition, a single ¹²C atom weighs 12 amu (its atomic mass is 12 amu). A mole of ¹²C weighs 12 grams (its molar mass is 12 g/mol). This relationship holds for all elements, since their atomic masses are measured relative to that of the amu reference substance, ¹²C. Extending this principle, the molar mass of a compound in grams is likewise numerically equivalent to its formula mass in amu (Figure 3.6). While atomic mass and molar mass are numerically equivalent, keep in mind that they are vastly different in terms of scale, as represented by the enormous difference in the magnitudes of their respective units (amu versus g). To appreciate the enormity of the mole, consider a small drop of water weighing about 0.03 g (see Figure 3.7). Although this represents just a tiny fraction of 1 mole of water (about 18 g), it contains more water molecules than can be clearly imagined.
If those molecules were distributed equally among the roughly 7 billion people on Earth, each person would receive more than 100 billion molecules. Link to Learning: The mole is used in chemistry to represent 6.022 × 10²³ of something, but it can be difficult to conceptualize such a large number. Watch the video and then complete the "Think" questions that follow. Explore more about the mole by reviewing the information under "Dig Deeper."
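A quick back-of-the-envelope check of the drop-of-water figures above, written as a small Python sketch. Avogadro's number comes from the text; the molar mass of water (about 18.02 g/mol) is the standard approximate value, and everything else is straightforward arithmetic.

```python
AVOGADRO = 6.02214076e23      # entities per mole
MOLAR_MASS_WATER = 18.02      # g/mol, approximate molar mass of H2O

drop_mass_g = 0.03            # the small drop of water from the text
moles = drop_mass_g / MOLAR_MASS_WATER
molecules = moles * AVOGADRO

people = 7e9                  # roughly seven billion people
per_person = molecules / people

print(f"{moles:.2e} mol of water in the drop")      # ~1.7e-03 mol
print(f"{molecules:.2e} molecules in the drop")     # ~1.0e+21 molecules
print(f"{per_person:.2e} molecules per person")     # ~1.4e+11, i.e. more than 100 billion
```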
physics
https://horse.co.za/electromagnetic-therapy/
2018-12-11T23:19:39
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823705.4/warc/CC-MAIN-20181211215732-20181212001232-00490.warc.gz
0.920401
494
CC-MAIN-2018-51
webtext-fineweb__CC-MAIN-2018-51__0__18155701
en
With a range of therapies available, ranging from standard veterinary treatment all the way to less mainstream methods such as acupuncture and kinesiology, it is useful to form a better understanding of what is involved. This month, HQ takes a look at electromagnetic therapy. Pulsed electromagnetic field therapy is a form of treatment that deals with the electric currents that flow through the cells of a horse and other mammals. We spoke to Evette Gemmill at Equiproducts, which distributes ActivoMed products. Equine orthopaedic specialists have been using electromagnetic therapy on horses with leg injuries with great success. Electrical potential in horses Horses, like us, are electrically charged. All of the cells in a horse have a natural electrical current flowing through them. The blood acts as an electrical conductor, and electrolytes are the minerals in the blood that carry electric currents within the horse's body. All cells in a horse have a resting electrical current flowing through them; this is referred to as electrical potential. Electrical potential is measured in millivolts (mV). 90 mV is a normal measurement for resting cells. When a horse has been injured or hurt, the cells suffer and the electrical potential in these cells can reach measurements in the 120 mV region. Electromagnetic therapy helps with restoring cells back to an ordinary electrical potential. How does electromagnetic therapy work? Electromagnetic blankets, neck wraps and leg wraps are commonly used for targeting the specific areas that need to be treated. The leg wraps concentrate on areas such as the hock, ligaments, knees and tendons. There are coils of wire sealed inside the blanket and the leg wraps. The coils are attached to a power supply or battery which, when switched on, creates an electromagnetic field around the coils. Electromagnetic therapy results in blood vessels widening and allowing more blood to enter the targeted area, thereby promoting healthier oxygenated blood circulation. Electromagnetic treatment influences cell behaviour by inducing electrical charge in and around the cells. Beneficial for treatment of: - Tendon and ligament injury - Speeding up healing time - Pain relief - Soothing and relaxing of sore muscles - Relieving spasms - Reducing inflammation - Promoting blood circulation - Reducing post-exercise recovery time - General relaxation and stress relief Text: Charlotte Bastiaanse The full article appears in the June issue of HQ.
physics
http://www.davelane.ca/about.html
2014-09-01T18:29:52
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535919886.18/warc/CC-MAIN-20140909053634-00395-ip-10-180-136-8.ec2.internal.warc.gz
0.921432
380
CC-MAIN-2014-35
webtext-fineweb__CC-MAIN-2014-35__0__99341432
en
Me, in my dome. I have been an active amateur astronomer since the early 1980s, am a life member of the Royal Astronomical Society of Canada (RASC), and am a past-president of the Halifax Chapter. I am presently the national past-president of the Society. Since 1992, I have been employed as the Astronomy Technician (and Observatory Director) and Systems Administrator at the Department of Astronomy and Physics at Saint Mary's University (Halifax, Nova Scotia, Canada), where I have been responsible for the Burke-Gaffney Observatory and the department's computing infrastructure. I'm also the High-Performance Computing administrator for the Institute for Computational Astrophysics and a founding technical architect for ACEnet. Prior to working at SMU, I developed instrumentation systems for oceanography and meteorology and led the "hardware group" at Seimac Limited (now Cobham Tracking & Locating). My primary astronomical interests are deep sky observing, observatory automation, CCD imaging, contributing to amateur astronomical science projects (such as variable star observing), astrocomputing, and all the related gadgetry. I also operate the part-time business, Nova Astronomics, whose primary work involves the development and distribution of The Earth Centered Universe (ECU), a Planetarium and Telescope Control Program for Microsoft Windows. For developing ECU, I was awarded the Chant Medal of the RASC, the highest award for amateur contributions to astronomy in Canada. I also hold the distinction, along with Paul Gray and Beverly Miskolczi, of being the first and second Canadians to discover any supernovae (1995F in NGC 2726 and 2005B in UGC 11066) from within Canada. In July 1995, Paul and I received the RASC's Ken Chilton Prize for the discovery of 1995F.
physics
https://help.reece.com.au/hc/en-us/articles/360000650216-Will-a-solar-hot-water-unit-still-work-even-if-it-s-not-sunny-
2021-01-20T16:48:39
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703521139.30/warc/CC-MAIN-20210120151257-20210120181257-00276.warc.gz
0.959497
293
CC-MAIN-2021-04
webtext-fineweb__CC-MAIN-2021-04__0__13329724
en
Solar hot water systems harness the sun's energy to heat your water. There are generally two types of solar collectors used - a flat panel system or an evacuated tube system. Your solar system will still produce hot water on overcast days, as it is either boosted with an electric element inside the tank, or a separate gas continuous flow system that kicks in if the temperature of the water in the tank is too low. There are advantages to evacuated tube systems such as the Thermann Evacuated Tube Solar system, which offers excellent efficiency and long life. Each tube is like a little greenhouse that traps sunlight inside, which is used to heat water. Because the tubes are round, they can efficiently collect heat no matter where the sun is in the sky. This is called 'active tracking' and ensures you have hot water all day long. In times of high sun exposure, 100% of the water within the system is heated by solar energy, meaning you are using almost no mains power. A common misconception about evacuated tube solar is that it only works in warmer environments, but this is not the case. Evacuated tube collectors can warm water at temperatures around zero and continue to work well even in overcast conditions. This makes them just as suitable for use in the southern Australian states as in the northern parts of Australia. For additional information on how Thermann evacuated solar systems work, head to www.thermann.com.au
physics
https://www.peaksensors.co.uk/component-store/calibration-equipment/microcal-1-thermocouple-simulator/
2019-11-18T17:38:26
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669809.82/warc/CC-MAIN-20191118154801-20191118182801-00010.warc.gz
0.751236
335
CC-MAIN-2019-47
webtext-fineweb__CC-MAIN-2019-47__0__19107906
en
The MicroCal 1 thermocouple simulator helps ensure that the frequent checking of thermometer accuracies is a routine operation. The calibrator is designed to simulate a chosen temperature to test thermocouple type K, J, T, R, N, S and E thermometers without the need for specialised equipment or conversion tables. Selectable parameters include: °C/°F, auto power off – enable/disable, Cold Junction Compensation – internal/external and display contrast adjustment. Optional leads are available: you can choose from seven leads, one for each thermocouple type K, J, T, R, N, S and E, or buy all seven as a set. Each PVC lead is one metre long and incorporates two miniature thermocouple plugs. Each MicroCal is supplied with a one metre PVC type K thermocouple lead with miniature connectors and a five point UKAS Certificate of Calibration which indicates deviations from standards at the various points. Thermocouple type K: range -200 to 1372°C Thermocouple type J: range -200 to 1200°C Thermocouple type T: range -270 to 400°C Thermocouple type R: range 0 to 1768°C Thermocouple type N: range -200 to 1300°C Thermocouple type S: range 0 to 1768°C Thermocouple type E: range -140 to 1000°C 2 x 1.5 volt AAA 35 x 73 x 141mm Peak Sensors Ltd, The Bridge, Beresford Way, Chesterfield, S41 9FG, United Kingdom
physics
http://nakkeran.com/index.php/2020/12/29/what-is-sidereal-time/
2023-12-09T14:23:59
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100912.91/warc/CC-MAIN-20231209134916-20231209164916-00536.warc.gz
0.910354
1,849
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__183022454
en
What is sidereal time? By Christopher Crockett in ASTRONOMY ESSENTIALS June 10, 2012 Explanation of sidereal time, or star time. New Year’s celebrations on Mercury. How Venus spins backward and is the slowest rotating planet in the solar system. When Earth is at position 1, it is noon at the red dot. One sidereal day later, Earth has rotated once and moved along its orbit (position 2). For it to be noon again, Earth has to continue rotating for another four minutes (position 3). Credit: Wikipedia A sidereal day measures the rotation of Earth relative to the stars rather than the sun. It helps astronomers keep time and know where to point their telescopes without worrying about where Earth is in its orbit. Every 24 hours, the Earth spins once around its axis and the sun loops around the sky. From noon to noon – or the time it takes the sun to return to its highest point in the sky – is how we define the days of the week. Astronomers call this a solar day. But the time it takes for the sun to make one circuit around the sky and the time it takes our planet to complete one rotation is not the same thing. If you’ve spent your life thinking that 24 hours is how long it takes Earth to rotate, you might be in for a surprise. Earth turns to face the sun in Cambodia. Credit: chauromano via Fotopedia In the time it takes the Earth to spin once about its axis, it also moves along its orbit by over 2.5 million kilometers. Because Earth has moved, the sun will not appear in the same part of the sky at the end of that rotation. To end up facing the sun again, the Earth has to rotate for another four minutes. In other words, a solar day is how long it takes Earth to rotate once – and then some. A sidereal day – 23 hours 56 minutes and 4.1 seconds – is the amount of time needed to complete one rotation. In this system, the stars always appear at the same place in the sky at the same time each sidereal day. Sidereal noon is when the vernal equinox – where the sun sits in the sky at the first moment of northern hemisphere spring – passes directly overhead. The four-minute difference between sidereal and solar days can be seen by watching the stars rise four minutes earlier every night. If Vega is rising at 9 P.M. tonight, then it will rise at 8:56 P.M. tomorrow, and 8:52 P.M. the following night, and so on. As Earth travels about the sun, we see each star earlier and earlier. Mercury has to orbit the sun twice from one Mercurian noon to the next. Credit: Wikipedia Sidereal days are also how astronomers define the rotation periods of other planets. It helps isolate how quickly the planet is actually spinning from how fast it’s traveling about the sun. In most cases, like Earth, the difference between a solar day and a sidereal one is pretty small. But our solar system does have some notable exceptions. Mercury’s rotation rate is two-thirds of its orbital period: a Mercurian sidereal day is 58 Earth-days while its year is 88. Because the sidereal day is a considerable fraction of the planet’s orbital period, an inhabitant of Mercury has to wait for about 170 Earth-days from one noon to the next. But this means that a solar day on Mercury is longer than its year! One Mercury year is about one-half of a Mercury solar day. Imagine ringing in the year 2012 at midnight, and then gearing up for the next New Year’s celebration at noon! Venus over the Pacific Ocean. Our closest neighbor in space and about the same size and density as Earth. But the sun only rises twice in a Venusian year. 
Plus Venus spins backward, relative to other worlds in our solar system. Credit: Brocken Inaglory via Wikipedia Venus is a particularly odd case. She goes around the sun faster than she spins on her axis: a 225 Earth-day orbit versus 243 to complete one rotation. This is why Venus is the slowest spinning planet in the solar system. At Venus’ equator, the planet is spinning at about 6 km/hr while Earth’s equator is hurtling along at nearly 1700 km/hr. What’s more, Venus does this while spinning backward. If there were ever to be a break in Venus’ stifling cloud layer, the native Venusians would watch the sunrise in the west and set in the east. The backward rotation makes Venus the only planet in the solar system where the sidereal day is actually longer than the solar one. The sun returns to its highest point in the sky before the planet has completed one rotation. Combining all this together leaves Venus with a solar day that takes 117 Earth-days. Put another way, the sun only rises twice in a Venusian year. Sidereal time measures the rotation of our planet relative to the stars. It allows astronomers to keep time without worrying about the motion of Earth around the sun. And it reveals some of the quirky motions of our planetary brothers and sisters. Next time your clock strikes noon, try and imagine what life might be like in a world where the sun moves backward or doesn’t get a chance to set before the year is over. Turns out, such alien environments are right next door! Christopher has a Ph.D. in astronomy from the University of California, Los Angeles. After eight years of searching for exoplanets, probing distant galaxies and exploring comets, Chris realized he enjoyed talking about astronomy a lot more than actually doing it. After being awarded a 2013 AAAS Mass Media Fellowship to write for Scientific American, he left a research career at the U.S. Naval Observatory to pursue a new life writing about anything and everything within the local cosmological horizon. Since 2014, he’s been working with Science News. How Long Is a Tropical Year / Solar Year? The length of a tropical year is the time it takes the Earth to complete a full orbit around the Sun, but it varies from year to year. A Year Is Never 365 Days Long A tropical year, also known as a solar year, an astronomical year, or an equinoctial year, is, on average, approximately 365 days, 5 hours, 48 minutes and 45 seconds long (365.24219 days). On timeanddate.com, we calculate a tropical year from the March equinox to the next March equinox (see table below). Leap Day Synchronizes Calendar Without the correct amount of leap years, our calendar would quickly become out of sync. This happened to the Julian calendar, which had too many leap years. Eventually, it was replaced with the Gregorian calendar. Can Vary by 30 Minutes The exact length of a tropical year can vary by up to around half an hour. For instance, the tropical year 2032 will last longer than 365 days and 6 hours. 2027, however, will only last 365 days, 5 hours, and 39 minutes. 
Length of Tropical Year 2010–2030
|Period (March equinox to March equinox)||Days||Hours||Minutes||Seconds|
|March 2010 – March 2011||365||5||48||23|
|March 2011 – March 2012||365||5||53||56|
|March 2012 – March 2013||365||5||47||22|
|March 2013 – March 2014||365||5||55||14|
|March 2014 – March 2015||365||5||48||2|
|March 2015 – March 2016||365||5||44||56|
|March 2016 – March 2017||365||5||58||36|
|March 2017 – March 2018||365||5||46||41|
|March 2018 – March 2019||365||5||43||12|
|March 2019 – March 2020||365||5||51||4|
|March 2020 – March 2021||365||5||47||55|
|March 2021 – March 2022||365||5||55||54|
|March 2022 – March 2023||365||5||50||55|
|March 2023 – March 2024||365||5||42||8|
|March 2024 – March 2025||365||5||54||53|
|March 2025 – March 2026||365||5||44||39|
|March 2026 – March 2027||365||5||38||39|
|March 2027 – March 2028||365||5||52||27|
|March 2028 – March 2029||365||5||44||57|
|March 2029 – March 2030||365||5||49||56|
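The relation between the sidereal day, the solar day, and the orbital period that this article walks through can be checked with a few lines of Python. For a planet that spins in the same direction as it orbits, 1/T_solar = 1/T_sidereal - 1/T_orbit; Venus spins backward, so the terms add. The Earth and Mercury periods below are the round figures quoted in the text, and the Venus values (243-day rotation, 225-day year) are standard approximations rather than numbers from the article.

```python
def solar_day(sidereal_day, orbital_period, retrograde=False):
    """Solar day from the sidereal day and orbital period (both in the same units).
    Prograde rotation: 1/T_solar = 1/T_sidereal - 1/T_orbit; retrograde: the terms add."""
    sign = 1.0 if retrograde else -1.0
    return 1.0 / (1.0 / sidereal_day + sign / orbital_period)

# Earth: sidereal day of 23 h 56 m 4.1 s, year of ~365.256 days (worked in hours).
earth_sidereal_h = 23 + 56 / 60 + 4.1 / 3600
print(f"Earth:   {solar_day(earth_sidereal_h, 365.256 * 24):.2f} hours")  # ~24.00 h

# Mercury: ~58.6-day rotation, 88-day year -> ~175 Earth-days (the article rounds to about 170).
print(f"Mercury: {solar_day(58.6, 88.0):.0f} Earth-days")

# Venus: 243-day retrograde rotation, 225-day year -> ~117 Earth-days, as in the article.
print(f"Venus:   {solar_day(243.0, 225.0, retrograde=True):.0f} Earth-days")
```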
physics
https://ruger.com/news/2015-10-07a.html
2022-11-28T02:36:47
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710462.59/warc/CC-MAIN-20221128002256-20221128032256-00307.warc.gz
0.930668
640
CC-MAIN-2022-49
webtext-fineweb__CC-MAIN-2022-49__0__154555595
en
Information in news articles is current as of the date of publication. Product specifications and other details are subject to change over time. Sturm, Ruger & Company, Inc. (NYSE: RGR) and PolyCase Ammunition® are pleased to introduce the Ruger® ARX® line of ammunition. This new ammunition is designed and produced by PolyCase under license from Ruger and incorporates PolyCase's revolutionary ARX bullet technology. "We were impressed by the innovations PolyCase has developed and incorporated into the ARX bullet technology," explained Mike Fifer, Ruger CEO. "Ruger prides itself on being an industry leader in innovation, so matching up the Ruger brand with cutting-edge PolyCase bullet technology seemed a perfect fit," he concluded. From the research and development laboratory of PolyCase Ammunition, through Ruger's extensive testing, the flagship ARX projectile has established itself as the next generation of highly effective self-defense ammunition. Achieved through advanced design and materials science, the unique bullet profile transfers maximum energy to the target from a fluid dynamic effect. By design, the non-expanding Ruger ARX exploits the bullet's velocity to redirect energy laterally via flutes in the bullet ogive. This effect results in stopping power and terminal performance that rivals that of many expanding handgun bullets. The design of the Ruger ARX allows it to feed like a round nose yet still transfer energy to targets effectively over a wide range of bullet velocities. The ARX penetrates many barriers without deformation, and penetrates through clothing without clogging and degrading terminal performance. The Ruger ARX ammunition utilizes injected molded copper/polymer matrix projectiles. Unlike traditional bullets, this unique material can be molded into complex shapes like the ARX bullet configuration. These lightweight bullets are launched at high velocities and achieve very high energy levels, but at nominal or even reduced recoil levels and reduce the loaded weight of firearms and spare magazines. The copper/polymer bullets fragment upon striking solid backstops, making them ideal for use in indoor ranges. "PolyCase Ammunition is honored to be selected by Ruger as a licensee and is pleased to introduce Ruger-branded ammunition to the commercial sporting market," said Paul Lemke, CEO and Founder of PolyCase Ammunition. "Ruger is a forward-thinking company that has been a model of corporate responsibility for over 60 years. These traits, combined with Ruger's strength as the leading American manufacturer of firearms, is why PolyCase decided to pursue a licensing arrangement with Ruger. We are excited that Ruger shares our vision for this technology and look forward to providing highly effective, innovative ammunition technologies to the defensive and commercial sporting markets for years to come," Lemke concluded. For more information about Ruger-branded ARX Ammunition, visit Ruger.com/Ammo. To learn more about the extensive line of award-winning Ruger firearms, visit Ruger.com or Facebook.com/Ruger To find accessories for Ruger firearms, visit ShopRuger.com or your local independent retailer of Ruger firearms.
physics
http://fallisphoto.deviantart.com/
2015-03-28T14:15:56
s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131297587.67/warc/CC-MAIN-20150323172137-00242-ip-10-168-14-71.ec2.internal.warc.gz
0.958778
448
CC-MAIN-2015-14
webtext-fineweb__CC-MAIN-2015-14__0__158368360
en
Apertures and f/stops are not quite the same thing. An aperture is the size of the hole that the light passes through in order to expose the film. It is not complicated. f/stops are a set of standardized numbers used in mathematical formulas that are used to determine the size of the aperture. The "f" in f/stop is the focal length of the lens. Thus, if you have a 22mm lens, and you are shooting at f/2, then 22/2 = 11mm, which is the size of the aperture. If you are shooting at f/22, then 22/22 = 1mm, which will be the size of the aperture. Each progressive aperture, going from the largest number (let's say 22) to the smallest (let's say 2) admits twice the light into the camera that the preceding number does. Shutter speeds are set up the same way. Starting from the fastest shutter speed (let's say 1/1000th second) and going to the slowest (let's say 30 seconds; but there really isn't a limit to how slow you can go -- some people with film cameras have used shutter speeds of a year or more), each progressive shutter speed admits twice the light into the camera as the preceding one. Because they are set up this way, it allows you to use what are called "equivalent exposures." Let's say that your camera's meter is recommending an exposure of 1/125th second at f/11. Well, you decide that you want a little less depth of field than this. Depth of field is what the aperture controls and it is the amount of space in front of and behind the subject of the photo that will be in focus. To get less depth of field, you'd switch from f/11 to f/8; this admits twice as much light into the camera and without making an adjustment, your photo will be overexposed. To compensate for this, you switch from 1/125th second to 1/250th second, reducing the light by half (back to its previous level). Now you have an equivalent exposure that has the depth of field that you want. Simple.
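A compact Python sketch of the arithmetic in this piece: the aperture diameter from the focal length and f-number, and trading a one-stop aperture change against a one-stop shutter change for an equivalent exposure. The 22 mm lens and the f/11 at 1/125 s starting point are the examples used in the text; note that marked f-stops like f/8 are rounded nominal values, so the computed number comes out slightly different.

```python
def aperture_diameter_mm(focal_length_mm, f_number):
    """Physical aperture diameter: f-number = focal length / diameter."""
    return focal_length_mm / f_number

print(aperture_diameter_mm(22, 2))    # 11.0 mm, as in the text
print(aperture_diameter_mm(22, 22))   # 1.0 mm

def equivalent_exposure(f_number, shutter_s, stops):
    """Open the aperture by `stops` full stops and shorten the shutter to compensate.
    Each full stop changes the f-number by a factor of sqrt(2) and the light by 2x."""
    new_f = f_number / (2 ** (stops / 2))   # e.g. f/11 -> about f/8 for one stop
    new_shutter = shutter_s / (2 ** stops)  # e.g. 1/125 s -> 1/250 s
    return new_f, new_shutter

f, t = equivalent_exposure(11, 1 / 125, stops=1)
print(f"f/{f:.1f} at 1/{round(1 / t)} s")   # f/7.8 (nominally f/8) at 1/250 s
```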
physics
https://hkrfid.blogspot.com/2012/07/tempcorder-hkrat-tt02x-with-external.html
2020-07-02T08:35:14
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655878639.9/warc/CC-MAIN-20200702080623-20200702110623-00475.warc.gz
0.843997
207
CC-MAIN-2020-29
webtext-fineweb__CC-MAIN-2020-29__0__176395116
en
Tempcorder TT02X can perform stably around metal, liquid and other RF unfriendly environment. With the external temperature probe, wider temperature data could be collected while RF transmission is not affected as the small and slim probe with temperature sensor can reach deep inside a metallic or water-filled box but the active RFID tag can remain uncovered by the metal or water content. Tempcorder TT02X has the following features: Waterproof, dust proof, shock proof, IP68 - External probe (length 50cm) - Replaceable battery, long battery life - Convenient mounting design - Stable performance - Durable with special plastic encapsulation - Wireless temperature sensor - Accurate digital temperature sensing probe - Probe temperature range (-20°C – 120°C) - Operating temperature range (-20°C – 70°C) Thermocouple with customized mounting options is now also available per volume order request. Each system can use one type of sensor only.
physics
https://leannekroll.wordpress.com/2012/06/16/national-geographic-illustrations/
2019-04-26T12:47:50
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578770163.94/warc/CC-MAIN-20190426113513-20190426135513-00098.warc.gz
0.973702
125
CC-MAIN-2019-18
webtext-fineweb__CC-MAIN-2019-18__0__78568182
en
The above illustration represents how fog is created in geographic locations such as Peru, which is covered in fog half of the year. In Peru, scientists designed a concept to gather the fog and use it to produce water for the villagers. They built special nets through which the fog would travel and capture tiny water droplets that would combine and drip into a gutter. The water was then designed to flow through pipes into tanks. What an amazing design solution! It was great working on these illustrations and creating realistic looking fog and desert scenes. Thanks so much to Jen & Hope for making the illustration process so easy and effective!
physics
https://adhinoegroho.wordpress.com/2012/12/
2022-09-24T22:02:43
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00724.warc.gz
0.922959
1,195
CC-MAIN-2022-40
webtext-fineweb__CC-MAIN-2022-40__0__3876274
en
The solar interior is separated into four regions by the different processes that occur there. Energy is generated in the core, the innermost 25%. This energy diffuses outward by radiation (mostly gamma-rays and x-rays) through the radiative zone and by convective fluid flows (boiling motion) through the convection zone, the outermost 30%. The thin interface layer (the “tachocline”) between the radiative zone and the convection zone is where the Sun’s magnetic field is thought to be generated. The Sun’s core is the central region where nuclear reactions consume hydrogen to form helium. These reactions release the energy that ultimately leaves the surface as visible light. These reactions are highly sensitive to temperature and density. The individual hydrogen nuclei must collide with enough energy to give a reasonable probability of overcoming the repulsive electrical force between these two positively charged particles. The temperature at the very center of the Sun is about 15,000,000°C (27,000,000°F) and the density is about 150 g/cm³ (about 10 times the density of gold or lead). Both the temperature and the density decrease as one moves outward from the center of the Sun. The nuclear burning is almost completely shut off beyond the outer edge of the core (about 25% of the distance to the surface or 175,000 km from the center). At that point the temperature is only half its central value and the density drops to about 20 g/cm³. In stars like the Sun the nuclear burning takes place through a three-step process called the proton-proton or pp chain. In the first step two protons collide to produce deuterium, a positron, and a neutrino. In the second step a proton collides with the deuterium to produce a helium-3 nucleus and a gamma ray. In the third step two helium-3s collide to produce a normal helium-4 nucleus with the release of two protons. In this process of fusing hydrogen to form helium, the nuclear reactions produce elementary particles called neutrinos. These elusive particles pass right through the overlying layers of the Sun and, with some effort, can be detected here on Earth. The number of neutrinos we detect is but a fraction of the number we expected. This problem of the missing neutrinos was one of the great mysteries of solar astronomy but now appears to be solved by the discovery of neutrino masses. The Radiative Zone The radiative zone extends outward from the outer edge of the core to the interface layer or tachocline at the base of the convection zone (from 25% of the distance to the surface to 70% of that distance). The radiative zone is characterized by the method of energy transport – radiation. The energy generated in the core is carried by light (photons) that bounces from particle to particle through the radiative zone. Although the photons travel at the speed of light, they bounce so many times through this dense material that an individual photon takes about a million years to finally reach the interface layer. The density drops from 20 g/cm³ (about the density of gold) down to only 0.2 g/cm³ (less than the density of water) from the bottom to the top of the radiative zone. The temperature falls from 7,000,000°C to about 2,000,000°C over the same distance. The Interface Layer (Tachocline) The interface layer lies between the radiative zone and the convective zone. The fluid motions found in the convection zone slowly disappear from the top of this layer to its bottom where the conditions match those of the calm radiative zone.
This thin layer has become more interesting in recent years as more details have been discovered about it. It is now believed that the Sun’s magnetic field is generated by a magnetic dynamo in this layer. The changes in fluid flow velocities across the layer (shear flows) can stretch magnetic field lines of force and make them stronger. This change in flow velocity gives this layer its alternative name – the tachocline. There also appear to be sudden changes in chemical composition across this layer. The Convection Zone The convection zone is the outermost layer of the solar interior. It extends from a depth of about 200,000 km right up to the visible surface. At the base of the convection zone the temperature is about 2,000,000°C. This is “cool” enough for the heavier ions (such as carbon, nitrogen, oxygen, calcium, and iron) to hold onto some of their electrons. This makes the material more opaque so that it is harder for radiation to get through. This traps heat that ultimately makes the fluid unstable and it starts to “boil” or convect. Convection occurs when the temperature gradient (the rate at which the temperature falls with height or radius) gets larger than the adiabatic gradient (the rate at which the temperature would fall if a volume of material were moved higher without adding heat). Where this occurs, a volume of material moved upward will be warmer than its surroundings and will continue to rise further. These convective motions carry heat quite rapidly to the surface. The fluid expands and cools as it rises. At the visible surface the temperature has dropped to 5,700 K and the density is only 0.0000002 g/cm³ (about 1/10,000th the density of air at sea level). The convective motions themselves are visible at the surface as granules and supergranules. Courtesy of http://solarscience.msfc.nasa.gov/
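The onset condition for convection described in the previous paragraph is often written compactly as the inequality below; this is the standard textbook (Schwarzschild-type) statement of the criterion rather than an equation quoted from this page.

```latex
\left|\frac{dT}{dr}\right|_{\mathrm{actual}} \;>\; \left|\frac{dT}{dr}\right|_{\mathrm{adiabatic}}
\quad\Longrightarrow\quad \text{a displaced parcel stays warmer (less dense) than its surroundings and keeps rising.}
```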
physics
http://airdalecompressors.com/learn.php
2019-06-26T22:33:24
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000575.75/warc/CC-MAIN-20190626214837-20190627000837-00241.warc.gz
0.928386
1,291
CC-MAIN-2019-26
webtext-fineweb__CC-MAIN-2019-26__0__44420361
en
How do Compressors Work? Years ago, it was common for shops to have a central power source that drove all the tools through a system of belts, wheels and driveshafts. The power was routed around the work space by mechanical means. While the belts and shafts may be gone, many shops still use a mechanical system to move power around the shop. It's based on the energy stored in air that's under pressure, and the heart of the system is the air compressor. You'll find air compressors used in a wide range of situations — from corner gas stations and auto shops to major manufacturing plants. They are much easier to install and operate than the bulky, totally mechanical systems of the past, and dramatically decrease costs associated with the operation and maintenance of these systems. The biggest advantage of air power is that each tool doesn't need its own bulky motor. Instead, a single motor on the compressor converts the electrical energy into kinetic energy. This makes for light, compact, easy-to-handle tools that run quietly and have fewer parts that wear out. Air Compressors Types A conventional piston compressor has a crankshaft, a connecting rod and piston, a cylinder and a valve head. The crankshaft is driven by either an electric motor or a gas engine. While there are small models that are comprised of just the pump and motor, the compressors we distribute and service have an air tank to hold a quantity of air within a preset pressure range. The compressed air in the tank drives the air tools, and the motor cycles on and off to automatically maintain pressure in the tank. At the top of the cylinder, you'll find a valve head that holds the inlet and discharge valves. Both are simply metal flaps–one mounted underneath and one mounted on top of the valve plate. As the piston moves down, a vacuum is created above it. This allows outside air at atmospheric pressure to push open the inlet valve and fill the area above the piston. As the piston moves up, the air above it compresses, holds the inlet valve shut and pushes the discharge valve open. The air moves from the discharge port to the tank. With each stroke, more air enters the tank and the pressure rises. Typical compressors come in 1- or 2-cylinder versions to suit the requirements of the tools they power. On the homeowner/contractor level, most of the 2-cylinder models operate just like single-cylinder versions, except that there are two strokes per revolution instead of one. Some commercial 2-cylinder compressors are 2-stage compressors–one piston pumps air into a second cylinder that further increases pressure. Compressors use a pressure switch to stop the motor when tank pressure reaches a preset limit. Most of the time, though, you don't need that much pressure. Therefore, the air line will include a regulator that you set to match the pressure requirements of the tool you're using. A gauge before the regulator monitors tank pressure and a gauge after the regulator monitors air-line pressure. In addition, the tank has a safety valve that opens if the pressure switch malfunctions. The pressure switch may also incorporate an unloader valve that reduces tank pressure when the compressor is turned off. Many articulated-piston compressors are oil lubricated. That is, they have an oil bath that splash-lubricates the bearings and cylinder walls as the crank rotates. The pistons have rings that help keep the compressed air on top of the piston and keep the lubricating oil away from the air. 
Rings, though, are not completely effective, so some oil will enter the compressed air in aerosol form. Having oil in the air isn't necessarily a problem. Many air tools require oiling, and inline oilers are often added to ensure a uniform supply to the tool. On the down side, these models require regular oil checks, periodic oil changes and they must be operated on a level surface. Most of all, there are some tools and situations that require oil-free air, such as hospitals and food industry applications. While solutions to the airborne oil problem include using an oil separator or filter in the air line, a better idea is to use an oil-free compressor that uses permanently lubricated bearings in place of the oil bath.

One of the factors used to designate compressor power is motor horsepower. However, this isn't the best indicator. You really need to know the amount of air a compressor can deliver at a specific pressure. The rate at which a compressor can deliver a volume of air is noted in cubic feet per minute (cfm). Because atmospheric pressure plays a role in how fast air moves into the cylinder, cfm will vary with atmospheric pressure. It also varies with the temperature and humidity of the air. To set an even playing field, makers calculate standard cubic feet per minute (scfm) as cfm at sea level with 68 degrees F air at 36% relative humidity. Scfm ratings are given at a specific pressure – 3.0 scfm at 90 psi, for example. If you reduce pressure, scfm goes up, and vice versa.

|Air Tool Description||Average CFM @ 90 PSI|
|Angle Disc Grinder - 7"||5-8|
|Drill, Reversible or Straight-Line||3-6|
|Impact Wrench - 3/8"||2.5-3.5|
|Impact Wrench - 1/2"||4-5|
|Impact Wrench - 1"||10|
|Mini Die Grinder||4-6|
|Ratchet - 1/4"||2.5-3.5|
|Ratchet - 3/8"||4.5-5|

The cfm and psi ratings are important because they indicate the tools and equipment that a particular compressor can drive. When choosing a compressor, make sure it can supply the amount of air and the pressure that your tools and equipment need. The web has many useful resources for finding more information than is listed here. Additionally, please do not hesitate to contact us with any further questions or service you may need. We are here to help you!
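The scfm figures in the table above are what you size a compressor against. Below is a minimal sizing sketch; the tools chosen, the duty factor, and the 1.5 safety margin are illustrative assumptions, not figures from this article:

```python
# Rough compressor sizing: add up the air demand of the tools you expect to run
# at the same time, then add headroom. All values are illustrative.
tools_cfm_at_90_psi = {
    "1/2 in impact wrench": 4.5,   # midpoint of the 4-5 cfm range in the table
    "3/8 in ratchet": 4.75,        # midpoint of 4.5-5
    "mini die grinder": 5.0,       # midpoint of 4-6
}

duty_factor = 0.5    # assume each tool actually runs about half the time
safety_margin = 1.5  # headroom so the pump is not running flat out

required_scfm = sum(tools_cfm_at_90_psi.values()) * duty_factor * safety_margin
print(f"look for a compressor rated at least {required_scfm:.1f} scfm @ 90 psi")
```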
physics
https://gzhotlink.com/article/Overhead-Power-Line.html
2021-10-27T13:01:11
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588153.7/warc/CC-MAIN-20211027115745-20211027145745-00565.warc.gz
0.946172
903
CC-MAIN-2021-43
webtext-fineweb__CC-MAIN-2021-43__0__10414467
en
An overhead power line, also known as a "pylon" in some areas, is a structure used in electric power transmission and distribution to transmit electrical energy along large distances. It consists of one or more conductors (most often three or four) suspended by towers or utility poles. Since most of the insulation is provided by air, overhead power lines are generally the lowest-cost method of power transmission for large quantities of electric energy. Towers for support of the lines are made of wood (as-grown or laminated), steel (either lattice structures or tubular poles), concrete, aluminum, and occasionally reinforced plastics. The bare wire conductors on the line are generally made of aluminum (either plain or reinforced with steel, or composite materials such as carbon and glass fiber), though some copper wires are used in medium-voltage distribution and low-voltage connections to customer premises. A major goal of overhead power line design is to maintain adequate clearance between energized conductors and the ground so as to prevent dangerous contact with the line, and to provide reliable support for the conductors, resilient to storms, ice load, earthquakes and other potential causes of damage. Today overhead lines are routinely operated at voltages exceeding 765,000 volts between conductors, with even higher voltages possible in some cases. Structures for overhead lines take a variety of shapes depending on the type of line. Structures may be as simple as wood poles directly set in the earth, carrying one or more cross-arm beams to support conductors, or "armless" construction with conductors supported on insulators attached to the side of the pole. Tubular steel poles are typically used in urban areas. High-voltage lines are often carried on lattice-type steel towers or pylons. For remote areas, aluminum towers may be placed by helicopters. Concrete poles have also been used. Poles made of reinforced plastics are also available, but their high cost restricts application. Each structure must be designed for the loads imposed on it by the conductors. The weight of the conductor must be supported, as well as dynamic loads due to wind and ice accumulation, and effects of vibration. Where conductors are in a straight line, towers need only resist the weight since the tension in the conductors approximately balances with no resultant force on the structure. Flexible conductors supported at their ends approximate the form of a catenary, and much of the analysis for construction of transmission lines relies on the properties of this form. A large transmission line project may have several types of towers, with "tangent" ("suspension" or "line" towers, UK) towers intended for most positions and more heavily constructed towers used for turning the line through an angle, dead-ending (terminating) a line, or for important river or road crossings. Depending on the design criteria for a particular line, semi-flexible type structures may rely on the weight of the conductors to be balanced on both sides of each tower. More rigid structures may be intended to remain standing even if one or more conductors is broken. Such structures may be installed at intervals in power lines to limit the scale of cascading tower failures. Foundations for tower structures may be large and costly, particularly if the ground conditions are poor, such as in wetlands. Each structure may be stabilized considerably by the use of guy wires to counteract some of the forces applied by the conductors. 
Power lines and supporting structures can be a form of visual pollution. In some cases the lines are buried to avoid this, but this "undergrounding" is more expensive and therefore not common. For a single wood utility pole structure, a pole is placed in the ground, then three crossarms extend from this, either staggered or all to one side. The insulators are attached to the crossarms. For an "H"-type wood pole structure, two poles are placed in the ground, then a crossbar is placed on top of these, extending to both sides. The insulators are attached at the ends and in the middle. Lattice tower structures have two common forms. One has a pyramidal base, then a vertical section, where three crossarms extend out, typically staggered. The strain insulators are attached to the crossarms. Another has a pyramidal base, which extends to four support points. On top of this a horizontal truss-like structure is placed.
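As noted above, a flexible conductor hanging between two supports takes the shape of a catenary, and sag and tension calculations follow from that curve. A minimal sketch is below; the span, horizontal tension, and conductor weight are illustrative numbers, not values from this text:

```python
import math

# Mid-span sag of a conductor hanging as a catenary between level supports.
# y(x) = a*(cosh(x/a) - 1), with catenary parameter a = H / w.
span = 300.0        # m, distance between towers (illustrative)
H = 20_000.0        # N, horizontal tension in the conductor (illustrative)
w = 15.0            # N/m, conductor weight per unit length (illustrative)

a = H / w
sag = a * (math.cosh(span / (2 * a)) - 1)          # exact catenary sag at mid-span
sag_parabolic = w * span**2 / (8 * H)              # common small-sag approximation

print(f"catenary sag:  {sag:.2f} m")
print(f"parabolic sag: {sag_parabolic:.2f} m")
```

For typical spans the simple parabolic approximation agrees with the exact catenary to within a few centimeters, which is why it is often used for quick hand calculations.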
physics
http://astro.sci.yamaguchi-u.ac.jp/jvn/eng/index_e.html
2022-12-07T03:48:31
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711126.30/warc/CC-MAIN-20221207021130-20221207051130-00637.warc.gz
0.911588
212
CC-MAIN-2022-49
webtext-fineweb__CC-MAIN-2022-49__0__261293760
en
The Japanese VLBI Network (JVN) is an astronomical VLBI network in East Asia, which was established by the university VLBI collaborative observation project that started in 2005. NAOJ and multiple universities cooperate to operate the radio telescopes and the VLBI network. The participating universities are Ibaraki University, Tsukuba University, Gifu University, Osaka Prefecture University, Yamaguchi University, and Kagoshima University, with the support of JAXA. The observation frequencies are 6, 8 and 22 GHz, the maximum baseline length is 2300 km, the angular resolution is about 3 milliarcseconds at 8 GHz, and the baseline sensitivity (Ibaraki-Yamaguchi baseline) at 8 GHz is about 3 mJy. JVN has good angular resolution, sensitivity, and imaging capabilities. JVN is available to researchers and students belonging to the participating universities and research institutes. Researchers who are not affiliated with JVN may be able to use it when conducting joint research with JVN members.
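The quoted resolution follows from the usual interferometer diffraction limit, roughly the observing wavelength divided by the longest baseline. A quick check (the constants are standard values, not taken from this page):

```python
import math

C = 299_792_458.0          # m/s, speed of light
freq_hz = 8e9              # 8 GHz observing frequency
baseline_m = 2_300e3       # 2300 km maximum baseline

wavelength = C / freq_hz                   # ~0.0375 m
theta_rad = wavelength / baseline_m        # diffraction-limited angular scale
mas_per_rad = 180 / math.pi * 3600 * 1000  # radians -> milliarcseconds
print(f"angular resolution ~ {theta_rad * mas_per_rad:.1f} mas")  # ~3.4 mas
```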
physics
https://stephenschneider.stanford.edu/Climate/Climate_Science/EarthsEnergyBalance.html
2022-09-25T16:45:10
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334591.19/warc/CC-MAIN-20220925162915-20220925192915-00271.warc.gz
0.926682
149
CC-MAIN-2022-40
webtext-fineweb__CC-MAIN-2022-40__0__281776919
en
Figure: Details of Earth's energy balance (source: Kiehl and Trenberth, 1997). Numbers are in watts per square meter of Earth's surface, and some may be uncertain by as much as 20%. The greenhouse effect is associated with the absorption and reradiation of energy by atmospheric greenhouse gases and particles, resulting in a downward flux of infrared radiation from the atmosphere to the surface (back radiation) and therefore in a higher surface temperature. Note that the total rate at which energy leaves Earth (107 W/m² of reflected sunlight plus 235 W/m² of infrared [long-wave] radiation) is equal to the 342 W/m² of incident sunlight. Thus Earth is in approximate energy balance in this analysis.
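A quick sanity check of the numbers in the caption, plus the blackbody (effective) temperature implied by 235 W/m² of outgoing long-wave radiation; the Stefan-Boltzmann step is an added illustration, not part of the original caption:

```python
incoming = 342.0          # W/m², incident sunlight averaged over the globe
reflected = 107.0         # W/m², reflected shortwave
outgoing_lw = 235.0       # W/m², emitted long-wave (infrared)

print("balanced?", abs(incoming - (reflected + outgoing_lw)) < 1e-9)  # True
print(f"planetary albedo ~ {reflected / incoming:.2f}")               # ~0.31

# Effective (blackbody) temperature needed to radiate 235 W/m²:
SIGMA = 5.670e-8          # W m^-2 K^-4, Stefan-Boltzmann constant
T_eff = (outgoing_lw / SIGMA) ** 0.25
print(f"effective temperature ~ {T_eff:.0f} K")                       # ~254 K
```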
physics
https://www.pai.com/tags/401k-plan-features
2018-04-25T06:47:45
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947705.94/warc/CC-MAIN-20180425061347-20180425081347-00085.warc.gz
0.937126
134
CC-MAIN-2018-17
webtext-fineweb__CC-MAIN-2018-17__0__51831524
en
The Law of Inertia: A body in motion will stay in motion and a body at rest will stay at rest unless an external force acts upon it. The Law of Inertia, while originating in the world of physics, is amazingly applicable to the world of retirement planning. As plan sponsors and financial advisors know, getting people who are not saving to start saving requires a force nearly equal to the gravitational field of a planet. And getting people to save more than they are already saving is an accomplishment of galactic proportions.
physics
https://www.mightybrisk.com/magnetic-particle-testing/
2023-03-24T16:05:30
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945287.43/warc/CC-MAIN-20230324144746-20230324174746-00689.warc.gz
0.917668
558
CC-MAIN-2023-14
webtext-fineweb__CC-MAIN-2023-14__0__202261552
en
Magnetic Particle Testing (MPT)
Magnetic particle testing or magnetic particle inspection – called MPI inspection or magnetic crack detection, MPT / MT in short – is one of the most widely used nondestructive testing methods for inspection of ferromagnetic materials, components and structures made of iron, nickel and cobalt alloys. As the majority of engineering materials are predominantly made of steel alloys, this method is considered important in testing of materials at the raw material, manufacturing and in-service inspection stages. MPI testing is one of the predominantly used NDE methods considering the operating cost of equipment, consumables and certified Level I / II inspectors. The magnetic particle testing method detects surface and subsurface discontinuities. The depth of detection varies considerably from material to material based on magnetic permeability, strength of the magnetic flux generated, magnetic particle properties, sensitivity desired and orientation of the discontinuity with respect to the direction of the magnetic field induced. Although the test method can prove sensitivity up to approximately 14 mm under ideal conditions, in actual practice the detectability or sensitivity is much lower. The MPI inspection method is used for testing automobile components, railway components, precision engineering components, shafts, gears, forgings, castings, plates, bars, rods, rolled products, cylinders, pistons, hydraulic components, pressure vessel and boiler components, welded joints, structures, aerospace components, components for ships, mining equipment, oil and gas pipeline construction, storage tanks, bridges and other engineering components. This method also detects fatigue cracks and other in-service discontinuities effectively. MPI equipment such as permanent magnets, electromagnetic yokes, bench-type MPI equipment with the head shot technique, central conductor and coil techniques, and the prod technique is commonly used. This method can also be automated to test mass-production components for detecting cracks and other anomalies. The test results depend largely on the skill and proficiency of the magnetic particle testing MT Level I / II technician, and it is essential to engage only qualified and certified professionals to carry out MPI testing. Employer-based certification programs such as ASNT SNT-TC-1A Level 1, Level 2 and Level 3 in magnetic particle testing, EN479 or ISO 9712 Level 1 and Level 2, or IS 13805 are some of the important training and certification courses available for individuals to get certified in MPI testing. The quality and course outline for each NDE training course is precisely designed and monitored by an NDT Level III and delivered by experienced expert trainers. Hands-on practical sessions are part of the course. At the end of the training course, examinations are held to assess each candidate's performance. Examination results are evaluated by an NDT Level III in magnetic particle testing and the results are announced.
physics
http://www.3m.com/Product/information/NFC-RFID-Materials.html
2015-03-30T14:13:48
s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131299360.90/warc/CC-MAIN-20150323172139-00056-ip-10-168-14-71.ec2.internal.warc.gz
0.864907
134
CC-MAIN-2015-14
webtext-fineweb__CC-MAIN-2015-14__0__79353454
en
3M™ NFC & RFID Materials Improve Read Range
3M™ Flux Field Directional Materials (FFDM) are useful in the 13.56 MHz frequency range for RFID and NFC applications to isolate the NFC or RFID antenna from metal surfaces by directing the antenna flux fields away from the metal object or surface. 3M FFDM solutions are highly permeable, low-loss near-field communications materials with enhanced performance compared to NFC absorbers and RFID absorbers. FFDMs decouple the NFC or RFID antenna by directing the flux field away from metal surfaces, reducing the eddy current loss and improving read range.
physics
http://www.tompass.co.uk/?p=303&replytocom=597
2022-08-12T05:00:00
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571584.72/warc/CC-MAIN-20220812045352-20220812075352-00516.warc.gz
0.950676
1,061
CC-MAIN-2022-33
webtext-fineweb__CC-MAIN-2022-33__0__102671780
en
Let’s go on a journey, squeaked the hamster from his tiny pocket in the woman’s bag. “But where to?” Asked the woman to her tiny rodent friend. “Up.” He answered. And so, she did, she opened up her umbrella and started floating upwards. She saw the ground fall further and further away from her feet, and the awestruck people below pointing upwards ‘Is it a bird? Is it a plane?’ – No, and it wasn’t Mary Poppins either. The people slowly became specks below, worth nothing. In fact, as she got further up, through the clouds, she couldn’t see any people at all. Only giant masses of land, industry and black oil filling up the space where the ocean had once been. The hamster giggled at her. “We are silly aren’t we?” “Yes, but we’re also amazing.” And they rose further and further up, through the Troposphere, the Stratosphere, the Mesosphere and all the other spheres that made this impossible, incredible, astronomically miraculous thing called life, possible. The woman and the hamster could see tiny specks of light in the distance, likely swallowed by the black void by now, the remnant of their life only just reaching the woman and the hamster on a relatively low speed highway called light. Much closer they could see their distance ancestor’s relative – the sun. The woman waved with her arm made of stardust, and the Hamsters little paw just reached out the top of the bag. The umbrella kept lifting the two higher and higher, faster and faster, until they passed Mars, Jupiter, Saturn, and Uranus. “You were saying how humans are amazing? I can’t even see Earth anymore; the solar system has already forgotten about you and we haven’t even reached the galaxy yet.” They carried on travelling, zoomed past alpha centauri burst through a nebula that filled their souls with vibrancy. “Yes, but think about this; In the trillion of lightyears that form space, in all of observable universe that we can see, in all of the tiny amount of time that has existed – this crazy thing called life happened. Atoms, by chance, had mass. Mass created stars that exploded in an inconceivable bound of energy that scattered life giving materials across barren rocks. Barren rocks transformed from hellholes into tranquil oceans of carbon, hydrogen and oxygen. And on one of those rocks, by a mishmash of a thousand million chances we got the one that made life. And further still, past instinct, life became sentient. Despite the absolutely negligible chance, and the uncaring of the universe, an atom figured out that it existed. It got the chance to be aware that YES, the universe is beautiful! The universe itself woke up and said wow, look at me!” “I guess you’re right I-“ “And then the universe thought! HAH! You’re not in control of me! I am you! And so it started building, it started building and making and sharing and discovering and it began to wonder how far can I go? How much of myself can I see before my spark of light fizzles out and the universe explodes and begins again? How many more cycles of big bangs and freeze-deaths will happen before I get another chance to try again and discover myself!? For all know this might be my only chance for trillions and trillions of years! 
So we build rockets – we build spaceships – we’ll create more life that can outlast us and then it will travel the stars for us with the same materials that were used to build us – the universe will continue to explore itself for as long as it knows that it exists!” The woman and the hamster burst out of the milky way, smashed through comets and crashed through solar winds on their way, and finally, they stopped accelerating, slowed down, and looked around at the universe. “It’s… Quiet, up here.” Said the Hamster. “Yeah. A little lonely too.” She replied. “What say we go back down, and tell everyone what we saw?” “No. What say we let them discover for themselves.” “That sounds good.” Spoke the hamster. Like Mary Poppins, she floated back through the milky way, past the stars and the moons, waved at the sun, and tapped her heels on dry land. “Say,” Said the hamster. “Shall we go on an adventure?” “Sure.” Said the woman, and walked back home. Inspired by Exurb1a:
physics
http://denator.com/products
2017-04-25T20:16:51
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120878.96/warc/CC-MAIN-20170423031200-00308-ip-10-145-167-34.ec2.internal.warc.gz
0.819751
187
CC-MAIN-2017-17
webtext-fineweb__CC-MAIN-2017-17__0__68942955
en
Heat stabilization is carried out using the Stabilizor™ system. The fully automated procedure ensures consistent and reproducible treatment of samples. Algorithms use details of the physical state (fresh or frozen) and sample size, measured automatically by lasers, to determine the exact amount of energy required for complete denaturation.

Heat stabilization and sample storage - the essentials
Stabilizor™ system: Performs rapid controlled high-temperature heating.
Maintainor® Tissue cards: Heat stabilize, transport and store tissue samples. Inert materials minimize the risk of contaminants interfering with analysis.
Maintainor® DBS cards: Designed specifically to ensure effective heat stabilization of dried blood spots (DBS).
Maintainor® Liquid cards: Designed specifically to ensure effective heat stabilization of liquid samples.
Maintainor® storage box: The Denator cryo storage box is specially designed to hold up to 12 Maintainor Tissue cards.
physics
https://hs770.com/product/conmed-beamer-argon-probe-10-cs/
2023-10-02T03:56:53
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510967.73/warc/CC-MAIN-20231002033129-20231002063129-00295.warc.gz
0.650225
289
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__223835322
en
The ConMed Beamer Argon Probe is designed to quickly and accurately coagulate bleeding tissue with minimal to no mucosal contact.

ConMed Beamer Argon Probe Features
- Non-contact coagulation with controlled argon beam.
- Ceramic tip effectively guides the argon beam for precise and controlled coagulation at a low argon flow rate.
- Ceramic tip provides visual contrast at the probe against the mucosa in a bloody field.
- Offers a long, broad beam.

ConMed Beamer Argon Probe Configurations
|CNM-A-Beam-1||0.07″ x 63″ (1.8 mm x 160 cm)||10 / Case|
|CNM-A-Beam-2||0.07″ x 126″ (1.8 mm x 320 cm)||10 / Case|
|CNM-A-Beam-3||0.09″ x 126″ (2.3 mm x 320 cm)||10 / Case|
|CNM-A-Beam-4||0.09″ x 126″ (2.3 mm x 320 cm)||10 / Case|
|CNM-A-Beam-5||0.12″ x 91″ (3.2 mm x 230 cm)||10 / Case|
physics
https://salascom.wordpress.com/2004/02/26/2004226amazing-little-rover-calculates-its-html/
2020-02-26T17:40:53
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146414.42/warc/CC-MAIN-20200226150200-20200226180200-00196.warc.gz
0.918776
190
CC-MAIN-2020-10
webtext-fineweb__CC-MAIN-2020-10__0__28856877
en
Amazing little rover “calculates its own location in the universe…..on Mars”: I just can’t get enough of this amazing bit of engineering called the Mars Rover. Get this: Opportunity also updated its “attitude knowledge,” which fine-tunes the rover’s information about its exact location and position on Mars….To adjust the attitude knowledge, engineers have the rover turn the panoramic camera to the Sun and watch the Sun travel across the sky for 15 minutes. The rover is then smart enough to take the Sun movement data collected from the panoramic camera to calculate its own location in the universe…..on Mars. This is such an amazing little machine. That’s not even getting into the astronomy, orbital mechanics, geology, physics, and who knows what other basic science that determines the core algorithms. Hats off to the team who built it.
physics
https://electricianu2.com/potential-difference-the-driving-force-behind-electricity/
2024-04-22T13:20:23
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818293.64/warc/CC-MAIN-20240422113340-20240422143340-00530.warc.gz
0.912065
674
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__185505629
en
We’ve journeyed through atoms, electric charges, electric fields, and magnetic fields in our exploration of electrical theory so far. Now, we turn our attention to a pivotal concept that causes these charges to move, giving rise to what we commonly call electricity. Welcome to the world of potential difference.

Understanding Potential Difference
To understand potential difference, imagine a ball at the top of a hill. The ball has the potential to roll down due to the difference in height levels. Similar to the ball on a hill, electric charges can also possess potential energy when they’re in an electric field. Potential difference, also known as voltage, is a measure of the work done to move a unit positive charge from one point to another in an electric field. It is the “electrical pressure” that pushes electric charges through a circuit. Potential difference is measured in volts (V), and it is the source of energy that drives the flow of current.

The Role of Potential Difference in Electricity
So how does this relate to electricity? The simplest answer is that potential difference causes electricity. When it is applied across a conductor, like a metal wire, the free electrons move from the area of low potential (the negative terminal) towards the area of high potential (the positive terminal). This movement of electrons constitutes an electric current.

Importance for Electricians
The potential difference is fundamental in an electrician’s work. Whether installing a light fixture, setting up a new circuit, or repairing an electrical appliance, understanding this concept is key to ensuring that electricity flows as required. For instance, when wiring a home, electricians need to know the potential difference of the power source to install compatible devices and fixtures. If an appliance designed for a lower potential difference is connected to a high-voltage supply, it can result in overheating and could even cause a fire. Conversely, if it is too low, the appliance won’t receive enough power to operate.

Part of Daily Life
Potential difference, or voltage, is a part of our daily lives. Every device that uses electricity, from a refrigerator to a smartphone, operates at a certain potential difference. For example, the standard potential difference for households in the U.S. is 120 volts, while it’s typically 220-240 volts in many European countries. Even the seemingly simple task of charging a phone or laptop requires understanding this concept. The charger must convert the high voltage from the outlet to a lower voltage compatible with the device, typically around 5 volts for a smartphone and 12-20 volts for a laptop.

Potential difference, or voltage, is the driving force behind the flow of electricity. It is the push that gets electrons moving, bringing life to electronic devices and lighting up our world. Understanding this deepens our understanding of how electricity works, as it is an essential stepping stone to understanding more complex electrical phenomena. As we move forward in our journey through electrical theory, we’ll explore how the movement of charges gives rise to an electric current, alternating current (AC), direct current (DC), and finally, how electricity and magnetism come together in the form of electromagnetism. Stay tuned for our next article where we’ll be discussing the introduction to electric current.
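A small numerical illustration of the definition above (voltage as work per unit charge) and of the household 120-volt example; the specific numbers are made up for illustration:

```python
# Potential difference defined as work per unit charge: V = W / q.
work_joules = 120.0      # J of work done moving the charge (illustrative)
charge_coulombs = 1.0    # C of charge moved (illustrative)
voltage = work_joules / charge_coulombs
print(f"potential difference: {voltage:.0f} V")

# The same idea in reverse: energy delivered when charge flows through a device.
# A 60 W bulb on a 120 V household circuit draws I = P / V = 0.5 A,
# so every coulomb passing through it gives up 120 J of energy.
power_watts = 60.0
current_amps = power_watts / voltage
print(f"current through a 60 W bulb at 120 V: {current_amps:.1f} A")
```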
physics
https://www.volagi.com/faq/how-does-longbow-flex%E2%84%A2-stay-work-and-will-it-affect-handling-bike-0
2021-12-02T03:08:58
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361064.69/warc/CC-MAIN-20211202024322-20211202054322-00272.warc.gz
0.916032
471
CC-MAIN-2021-49
webtext-fineweb__CC-MAIN-2021-49__0__39105318
en
How does the LongBow Flex™ Stay work and will it affect the handling of the bike?
When our design engineers, who also happen to be accomplished ultra-endurance riders, set out to create the Liscio, they knew from experience what long distance riders were looking for in a bike. This resulted in a ground-breaking frame design we call the Long Bow Flex Stay. We designed the LongBow Flex™ Stay to optimize the carbon fiber's natural properties to absorb road vibrations and maximize the ability to move up and down. To put this in non-layman's terms: Our patent-pending design gives maximum vertical compliance and suppleness while maintaining a high degree of lateral stiffness or STW (stiffness to weight ratio) for power transfer. The LongBow Flex™ stay is a revolutionary design that utilizes high-yield strength carbon fiber and lengthens the overall stay to maximize the frame’s ability to give vertical compliance to absorb road vibrations. In fact, the frame’s vertical deflection rate at 6.0+mm/kN is one of the highest in the industry. Even the orientation of the oval shape allows the stay to flex in one direction, while maintaining stiffness side to side for maximum control and power transfer. Another benefit to the LongBow Flex™ stay is it actually maintains the great handling characteristics of a short chain stay design while enhancing the ride quality. Furthermore, the stay is connected all the way to the midpoint of the frame providing a strut-like design for even better lateral stiffness. As a result, Volagi™ bikes promote better handling and more efficient power transfer. Most bicycles feature seat stays that connect directly to the seat tube. With LBFS, the seat stays bypass the seat tube and instead connect directly to the top tube. This design, with the help of intentionally shaped stays, creates just enough flex to achieve improved compliance that results in reduced rider fatigue and improved rear wheel contact with the pavement. While the LBFS serve to take the sting out of the road, the rest of the bike is still able to capture every bit of power, propelling you forward with every pedal stroke. While the seat stays are thin and flexible, the down tube, seat tube, and oversized chainstays remain stiff for maximum power transfer to the rear wheel.
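Since 6.0 mm/kN is a compliance (deflection per unit load), it converts directly into how far the frame moves under a given vertical load. A tiny illustration; the load values are assumed for the example, not Volagi figures:

```python
# Vertical deflection implied by a compliance of 6.0 mm per kN.
compliance_mm_per_kn = 6.0

for load_n in (300, 500, 800):            # illustrative vertical loads in newtons
    deflection_mm = compliance_mm_per_kn * (load_n / 1000.0)
    print(f"{load_n} N load -> {deflection_mm:.1f} mm of vertical deflection")
```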
physics
https://www.truteam.com/insulation-installation/best-soundproof-insulation/
2023-10-03T21:21:43
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511220.71/warc/CC-MAIN-20231003192425-20231003222425-00729.warc.gz
0.923392
1,659
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__166415622
en
As the #1 installer of insulation in the United States, TruTeam installers are often tasked with recommending and installing soundproof insulation for homes and businesses. We help homeowners, home builders, remodelers, and commercial builders with all their sound insulation needs. There are many different types of insulation available, and each offers different benefits in terms of sound reduction. TruTeam’s installers will help you choose the best insulation to fit your needs and budget. Contact TruTeam today for a free on-site soundproof insulation consultation by one of our local insulation experts. In homes and businesses, quiet spaces are always desired. Whether you seek a peaceful area for relaxation or a quiet room for concentrating on work, it’s important to design areas that are removed from both interior and exterior noises. In your home, you don’t want to hear people snoring down the hall, music from your teen’s bedroom, or noise from traffic passing outside. Choosing and installing the right type of insulation absorbs echo in a room and reduces or blocks sound transmission, making interior spaces more enjoyable in residential and commercial structures. Sound waves travel through open spaces. Insulation in walls, floors, and ceilings impedes sound travel through those areas. There are many different types of insulation available, offering different thermal and acoustic benefits. The insulation experts at TruTeam take your project needs, budget, and soundproofing concerns into account and will recommend and install the right insulation materials in your home or business. We may even recommend a combination of several types of insulation within a single structure. Fiberglass batt insulation is a common, cost-effective material used as insulation in homes and businesses. Fiberglass batt is a blanket insulation that often comes in rolls. It is available in various thicknesses – leading to different levels of thermal and soundproofing benefits. Homeowners and installers often prefer fiberglass batt because it is easy to install. TruTeam installs both faced and unfaced batt fiberglass insulation. Fiberglass batt is easily installed between studs in walls, floor joists, and ceiling beams when they are exposed. It is a popular choice for soundproofing in residential and commercial construction and remodeling. Unfortunately, it is not easily installed in existing walls without some demolition work. A similar product to fiberglass batt insulation is fiberglass blown-in insulation. It is a loose fill insulation material applied with a machine. At TruTeam, our insulation experts are trained in safely and effectively installing blow-in blanket insulation systems (BIBS). Our installation methods create a tight fill around pipes, wires, and other obstructions, providing the best results for thermal protection as well as soundproofing. Blown-in fiberglass is commonly used in sidewalls, attics, and crawl spaces. It has excellent soundproofing properties because the density of the insulation can be varied to fill open spaces. It can be installed in new construction or existing homes with exposed cavities. Homes and commercial buildings can be retrofitted with BIBS insulation, although that process may require wall renovation and insulation removal. Reflective insulation is a popular insulation choice in warmer climates because it reflects radiant heat away from structures. It has a reflective surface, usually made from aluminum, and is commonly used in roofs and attics. 
TruTeam installs roll, blanket, and board reflective insulation. While offering superior thermal protection, reflective insulation alone does not provide effective sound insulation. However, reflective insulation can be layered with other types of insulation like spray foam or fiberglass batt as part of a complete thermal and acoustical protection system. Spray foam insulation is one of the best performing thermal insulation materials available because it fills gaps in walls. It is made from polyurethane foam and comes in open and closed cell varieties. It prevents air loss and does provide some protection from noise. While it can cost more to install, spray foam stays in place (unlike fiberglass that can compress) and provides long-term results, making it a cost-effective option over time. TruTeam often recommends using spray foam insulation for existing homes and commercial buildings looking to be retrofitted with sound insulation because spray foam can be installed into walls that are already built. Rigid board insulation is a thin, lightweight, easy-to-install insulation material. It is easy to cut and can be used to cover large spaces in a time efficient way. TruTeam installs three different types of rigid board insulation: expanded polystyrene (EPS), extruded polystyrene (XPS), and polyisocyanurate (ISO). Foam board is a popular insulation choice for basements. Rigid board is a common choice for homeowners and builders because of its ease of installation, its cost benefits, and its ability to be used above and below grade in commercial and residential buildings. By itself, foam board insulation does not have enough mass to absorb sound. To get better soundproofing performance from your rigid board insulation, TruTeam can install it with other materials to increase sound absorption. Mineral wool is one of the best choices for soundproof insulation in homes and businesses. Mineral wool looks similar to fiberglass but it is made from recycled content, and it provides higher thermal capacity and lower air permeability. As a denser material than fiberglass, it provides superior soundproofing results. TruTeam installs batt, board, and blown-in mineral wool insulation. Mineral wool can help reduce noise pollution from the outdoors and stop noise flow between rooms. It can be used for noise reduction in floors and ceilings as well as walls. Homeowners and contractors frequently choose mineral wool for sound insulation in new construction or when upgrading existing home insulation. Cellulose insulation is an environmentally-friendly insulation option made from plant fibers or recycled newspapers. TruTeam installs cellulose insulation in three options: loose fill, BIBS, and spray-applied. Cellulose, especially when compared to other insulation materials like fiberglass, provides better soundproofing results. It is a denser insulation and blocks air and noise more effectively. While cellulose can be used in new construction, TruTeam frequently recommends cellulose for soundproofing existing structures because it can be installed through small holes cut in drywall without the need to take down entire walls. Homes, multifamily buildings, offices, industrial buildings, and other structures each have unique needs when it comes to soundproofing and choosing the right insulation material. For example, a single family home will have different sound reduction requirements than a school auditorium. 
The insulation experts at TruTeam are available to help with soundproof insulation selection and installation. As a budget-friendly yet effective sound reduction material, fiberglass batt insulation is a popular choice, particularly in residential applications. However, fiberglass batt does not completely seal spaces, making it less effective at sound attenuation than gap-filling insulation like blow-in and spray foam. For existing homes and businesses looking to add additional soundproof insulation, the best choices include blown-in insulation such as fiberglass and cellulose. Blown-in insulation can be installed without having walls reconstructed. Choosing the right insulation for soundproofing can be challenging. You need to pick the right material that provides the energy efficiency you want with additional sound absorption and reduction properties, all while fitting your budget. That’s where the insulation experts at TruTeam come in. We can explain the advantages of each type of insulation, provide specific insulation recommendations, and complete your soundproof insulation installation. Additionally, proper installation of soundproofing insulation is key to achieving the best results. All of our installers are trained, licensed, and insured. Our installation contractors will help ensure that your insulation meets your soundproofing expectations. When you need to install soundproof insulation, trust the experts at TruTeam. Our hand-picked and proven installers keep soundproofing projects on schedule and on budget. We serve homeowners, home builders, remodeling contractors, and commercial builders across the country. Contact us for a free on-site soundproofing quote today.
physics
https://sabkuchh.com/why-does-evaporation-cause-cooling/
2023-12-02T11:26:15
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100399.81/warc/CC-MAIN-20231202105028-20231202135028-00053.warc.gz
0.960308
1,390
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__131296324
en
Overview on Why does evaporation cause cooling: Evaporation is a natural cooling process that occurs in our everyday lives. When we sweat, the sweat on our skin evaporates and takes heat away from our body, making us feel cooler. This is why it is important to stay hydrated in hot weather, as our body needs water to produce sweat and regulate our body temperature. The cooling effect of evaporation is also used in various industrial applications. For example, air conditioning systems use evaporation to remove heat from the air, which is then transferred to the outside environment using a heat exchanger. This is why air conditioning units have a condenser unit located outside the building, as this is where the heat is released. The cooling effect of evaporation is also used in refrigeration units. In these systems, a liquid is evaporated to remove heat from the surrounding environment, which is then transferred to another location using a heat exchanger. This is how refrigerators and freezers work, as they use evaporation to remove heat from the inside of the unit and transfer it to the outside environment. The cooling effect of evaporation is also used in the food industry. For example, some foods are dried using evaporation to remove moisture and extend their shelf life. This is done by exposing the food to a stream of hot air, which causes the moisture to evaporate and leave the food. This process is used to make products such as dried fruits, jerky, and instant noodles. Evaporation is also used in the production of ethanol, a type of alcohol used as a fuel. Ethanol is produced by fermenting crops such as corn and sugarcane, which produces a liquid containing ethanol and water. The ethanol is then separated from the water using a process called distillation, which involves evaporating the liquid and condensing the vapor back into a liquid. This process is repeated several times to produce a high concentration of ethanol. The cooling effect of evaporation is also used in the medical industry. For example, some medications are delivered using a nebulizer, which is a device that converts liquid medication into a mist that can be inhaled. The nebulizer uses evaporation to create the mist, which is then inhaled into the lungs. This method of delivery is used for medications that need to be delivered directly to the lungs, such as those used to treat asthma and other respiratory conditions. Evaporation is also used in the production of paper. In the paper-making process, wood chips are boiled in water to create a pulp, which is then spread out on a conveyor belt and dried using a series of rollers. The rollers use evaporation to remove the water from the pulp, leaving behind a sheet of paper. This process is repeated several times to create a stack of paper sheets. The cooling effect of evaporation is also used in the production of electricity. Some power plants use a process called a cooling tower to remove heat from the water used to generate steam. The cooling tower uses evaporation to remove the heat, as water is sprayed into the tower and allowed to evaporate. This process removes heat from the water, which is then reused to generate more steam and produce electricity. Evaporation is also used in the production of salt. In some parts of the world, salt is produced by evaporating seawater in large ponds. The seawater is pumped into the ponds and allowed to evaporate in the sun, leaving behind a layer of salt. 
This process is repeated several times to produce a large amount of salt, which is then harvested and processed for use in various industries. Evaporation is also used in the production of cosmetics. Some cosmetics, such as perfumes and colognes, contain alcohol, which evaporates quickly when applied to the skin. This evaporation creates a cooling effect, which can be refreshing on a hot day. Additionally, some cosmetics, such as facial toners, contain ingredients that promote evaporation to help remove excess oil and dirt from the skin. The cooling effect of evaporation is also used in the textile industry. Some fabrics, such as linen and cotton, are known for their cooling properties, as they allow air to circulate and promote evaporation. This makes them popular choices for summer clothing and bedding. Additionally, some athletic clothing is designed to wick away sweat and promote evaporation, which can help keep athletes cool and comfortable during exercise. Evaporation is also used in the production of biofuels. Some biofuels, such as biodiesel and ethanol, are produced using a process called fermentation, which produces a liquid containing the biofuel and water. The biofuel is then separated from the water using a process called distillation, which involves evaporating the liquid and condensing the vapor back into a liquid. This process is repeated several times to produce a high concentration of biofuel. Evaporation is also used in the production of essential oils. Some essential oils, such as lavender and peppermint, are produced using a process called steam distillation. This process involves heating plant material with water to create steam, which is then condensed to produce a liquid containing the essential oil and water. The essential oil is then separated from the water using a process called decantation, which involves pouring off the top layer of liquid. The cooling effect of evaporation is also used in the production of beer. During the brewing process, a liquid called wort is boiled with hops to create a bitter flavor. The wort is then cooled rapidly using a heat exchanger, which uses evaporation to remove the heat. This rapid cooling helps prevent the growth of bacteria and other microorganisms, which can spoil the beer. Evaporation is also used in the production of ice cream. In the ice cream-making process, a mixture of cream, sugar, and flavorings is heated and then cooled rapidly using a heat exchanger. This rapid cooling causes some of the water in the mixture to freeze, creating ice crystals. The mixture is then churned to break up the ice crystals and create a smooth texture. In summary, the cooling effect of evaporation has a wide range of applications in various industries, from essential oil production to beer brewing to ice cream-making. Understanding the science behind evaporation can help us appreciate the many ways in which it benefits our lives and the world around us. Whether we are enjoying a cold beer on a hot day or using essential oils for aromatherapy, evaporation plays a crucial role in creating the products we use and enjoy. From natural processes like sweating to industrial processes like air conditioning, evaporation is a fascinating and important phenomenon that affects our daily lives in countless ways.
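The "heat taken away" by evaporating sweat can be put into numbers using the latent heat of vaporization of water. A short sketch; the latent heat and body specific heat are standard physical values and the amount of sweat is an illustrative assumption, none of which come from this article:

```python
# Cooling from evaporating sweat: heat removed = mass * latent heat of vaporization.
LATENT_HEAT_J_PER_KG = 2.4e6   # ~2,400 kJ/kg for water near skin temperature
sweat_mass_kg = 0.5            # evaporate half a litre of sweat (illustrative)

heat_removed_j = sweat_mass_kg * LATENT_HEAT_J_PER_KG
print(f"heat removed: {heat_removed_j / 1e3:.0f} kJ")   # ~1,200 kJ

# For scale: without the body's metabolism constantly re-warming it,
# that much heat loss would cool a 70 kg person by several degrees.
body_mass_kg = 70.0
SPECIFIC_HEAT_BODY = 3500.0    # J/(kg·K), rough average for human tissue
delta_t = heat_removed_j / (body_mass_kg * SPECIFIC_HEAT_BODY)
print(f"equivalent temperature drop: ~{delta_t:.1f} °C")  # ~5 °C
```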
physics
https://www.arcflashanswers.com/topic/what-are-potentials-for-human-injury-during-an-arc-flash/
2019-07-19T06:01:38
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526064.11/warc/CC-MAIN-20190719053856-20190719075856-00318.warc.gz
0.904353
222
CC-MAIN-2019-30
webtext-fineweb__CC-MAIN-2019-30__0__148806884
en
The event of an arc flash can result in devastating injuries or even death to workers in the area. Although the duration of an arc flash is usually less than 30 seconds, it can be truly catastrophic. Temperatures are hotter than that of the sun, the sound can be deafening, and the heat can melt even metal. The following are five different potential injuries an arc flash can cause to a worker:
→ Arc blast damage: An intense force that is thousands of pounds per inch can be created from an arc blast. This force can knock people through the air, cause broken bones, collapse lungs, and cause concussions.
→ Burns: Arc flashes often cause second- and third-degree burns in a fraction of a second.
→ Electrocution: Workers can be electrocuted if the arc flash travels through a person; it can also be fatal.
→ Eyesight Damage: Arc flashes produce an intense amount of light which can cause temporary or long-term damage to the eyes.
→ Auditory Damage: Permanent hearing damage can be a result of the extremely loud noises caused by an arc flash.
physics
https://astrophotography.com.au/videos/
2021-04-19T02:50:13
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038863420.65/warc/CC-MAIN-20210419015157-20210419045157-00297.warc.gz
0.918925
326
CC-MAIN-2021-17
webtext-fineweb__CC-MAIN-2021-17__0__239769231
en
Astrophotography Videos shown here are a combination of time lapse and still photograph sequences, captured by myself. I enjoy the use of video to add an extra element to astrophotography, combining music and images in a well choreographed manner to produce a specific atmosphere. I hope you enjoy the collection. All these videos have a soundtrack, so I suggest ensuring you have good audio available.

I had an awesome few months of astronomy & astrophotography – so what better way to sum it up than a short video designed to inspire? This is a short video I made for a training night at the Perth Observatory. It’s 3.5 minutes & has sound. Enjoy!

Jupiter and Moon Occultation 18th Feb 2013
Watch the Moon move in front of Jupiter, as its moons disappear and then re-appear. You can see some surface detail on Jupiter with the main bands quite clear, as well as nice lunar detail.

Photographed at the Perth Observatory this video shows time lapse and still photographs of the Total Lunar Eclipse December 2012.

Observatory One (automation)
I enjoy participating in astronomical research activities, and this is aided by a reasonable level of automation of my home observatory. Now and then it’s fun to sit back and watch this automation work nicely.

May 2013 Solar Eclipse
This video is a nice atmospheric representation of the May 2013 partial Solar Eclipse as viewed from the hills of Perth, Western Australia.

A fun video of time lapse photography showing Orion rising mixed with watching my telescope perform supernova searching.
physics
https://www.research-webinars.com/network.php?cat=Energy
2021-12-04T05:23:18
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362930.53/warc/CC-MAIN-20211204033320-20211204063320-00510.warc.gz
0.951973
190
CC-MAIN-2021-49
webtext-fineweb__CC-MAIN-2021-49__0__121252827
en
Energy Researchers Network
Energy Researchers Network (ERN), managed by TechnoBiz, is a global network of researchers whose specialization focuses on industrial and public energy needs with reference to energy technology, alternate fuels, renewable energy, biomass & biogas energies, waste to energy, fuel cells, hydrogen energy, nuclear energy, hydro-thermal energy, etc. Through this Network, TechnoBiz aims to promote the research work of members to industry and municipalities and to create a platform for researchers and industries to exchange information and explore opportunities. The membership is FREE for all interested researchers. Their profile and specialization will be publicized through TechnoBiz. Network members will also have an opportunity to present their research work as a Research Webinar and also through various social media platforms managed by TechnoBiz. Interested researchers are invited to join as a member by filling in their details in the membership form. Here is a list of current members of the Energy Researchers Network.
physics
http://www.jmcc.org/news.aspx?id=3376
2020-05-28T07:56:51
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347398233.32/warc/CC-MAIN-20200528061845-20200528091845-00505.warc.gz
0.978656
409
CC-MAIN-2020-24
webtext-fineweb__CC-MAIN-2020-24__0__99317848
en
RAMALLAH, March 21 (JMCC) - A Palestinian man in Gaza has rigged his home to be fully dependent on solar energy and beat the electricity blackouts that plague Gaza as much as 18 hours a day, reports al-Arabiya. Mahmoud Shaheen, a 55-year-old chemistry professor, started investigating the possibility of generating electricity at home a long time before the bombing of the generators. “I started thinking of this experiment 21 years ago and at the time things were not really better than they are now because Gaza was under Israeli occupation and power outages used to happen a lot,” he told Al Arabiya His idea was made possible when a Palestinian vendor brought electrical cells from Israel and did not know what to do with them. He took the cells from him and started experimenting with them until he managed to generate electricity. “Since then my house has been fully lit and during blackouts it is the only house from which light comes in the middle of Jabalia.” Shaheen, now known as the Conqueror of Darkness, explained that he did not use chemical reactions to generate electricity, but rather depended on solar energy. “It is pretty simply. The sunlight that falls on the cells is converted to electrons that are transmitted through wires to batteries that keep the generated energy in them until it is used then charged again and so on.” Shaheen pointed out that all the electric devices in his house are working at full capacity with a rate of energy conversion that amounts to 3,000 watts. “This means that they are working properly all day and night.” Gaza's electricity company has not run at full capacity since it was bombed by Israel five years ago. An Israeli-imposed blockade hinders the import of fuel to run the plant and a recent dispute between Egypt, where Palestinians had been purchasing fuel, and the Hamas-led government in Gaza is causing long periods without electricity every day.
physics
http://www.earthyreport.com/site/summers-next-weather-event/
2013-06-19T13:27:29
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708783242/warc/CC-MAIN-20130516125303-00000-ip-10-60-113-184.ec2.internal.warc.gz
0.957306
284
CC-MAIN-2013-20
webtext-fineweb__CC-MAIN-2013-20__0__110087344
en
This summer has been quite a season for weather extremes. Various degrees of drought and rainstorms have afflicted much of these United States. This coming week another storm is getting ready to hit planet Earth that could cause some serious issues for telecommunications and our electric grid. According to the U.S. National Oceanic and Atmospheric Administration (NOAA) a magnetic storm will develop into a moderate to strong level event. Three large explosions from the Sun over the past few days have prompted the NOAA warnings. These solar storms will affect communications and global positioning satellites and may even produce an aurora visible as far south as Minnesota and Wisconsin. Communications disruptions from solar activity are fairly rare, but have caused serious impacts in the past. In 1989 a solar storm took down the power grid in Quebec, Canada, leaving six million people without power for several hours. The largest storm ever recorded was in 1859, when communication was limited to telegraphs. The 1859 solar storm hit telegraph offices around the world and caused a giant aurora visible as far south as the Caribbean Islands. If a solar storm of a similar magnitude were to hit today, the NOAA estimates that it could cause up to $2 trillion in damage globally. The NOAA is not expecting this storm to carry anything close to this past event. But don't be surprised if you have difficulty texting and calling this upcoming week.

Photo Credit: NASA
physics
https://plumbinglab.com/do-gutter-guards-cause-ice-dams/
2024-02-22T08:24:53
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473735.7/warc/CC-MAIN-20240222061937-20240222091937-00174.warc.gz
0.923811
1,001
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__165984019
en
Icicles might look pretty on your roof, but they can lead to ice dams - ridges of solid, dense ice - which can cause major damage to your gutters. It’s in your best interest to deal with ice dams quickly. So we put together this article on how to prevent ice dams and also answer the question: do gutter guards cause ice dams or not? What Causes Ice Dams? As you might expect, ice dams are caused by frozen standing water in your gutters. Outside temperature, snow cover, and heat from your house can all contribute to the formation of ice dams in your gutters. As snow on top of the gutters melts, it falls down and reaches a spot that is below freezing. The water then forms an ice dam. Do Gutter Guards Cause Ice Dams? Short answer: no. Gutter guards are a common solution to keep leaves out of gutters, but they do not cause ice dams. The main thing that causes ice dams is a temperature differential around your roof. Temperature differentials create pockets of cold space. Water seeps into these cold spaces and freezes. Since the gutter line is usually below the roofline, the temperature there is lower than around the other parts of the roof. The result is that gutters are a prime place for ice dams to form. However, clogged gutters can contribute to ice dams: the material in gutters retains water, so it does not naturally drain out. Ice dams can damage gutter guards by pressing on the leaf protection brackets. See Related Article: Are Gutter Guards Worth The Investment? How Can You Prevent Ice Dams? Improve ventilation and insulation. Poor ventilation can cause ice dams: it makes the attic get too hot and creates temperature differentials. Fixing ventilation ensures your attic has cool air circulating through it, so the snow doesn't melt and then refreeze in your gutters. Similarly, proper insulation can keep heat from escaping your attic and melting snow on your roof. Ventilation keeps your attic at a constant temperature and does not create heat differences on your roof. Eliminating attic heat sources. Heat sources in your attic like a heater or gas stove can also contribute to ice dams by melting snow on the roof. Removing these potential heat sources can reduce the formation of ice dams. Electric heat cables. You can also install electric heat cables to melt ice in channels. Heat cables, also called heat coils or deicing cables, require an installation process, so they may not be the best choice for every homeowner. How to Remove Ice Dams in Your Gutters We do not recommend hacking away at ice dams with a shovel or pickaxe, as this can damage your gutters or roofing tiles. Instead, try to melt the ice. Here is a simple method to get rid of ice dams. You will need pantyhose or long socks, calcium chloride, and some string or wire. 1. First, fill the pantyhose or long socks with calcium chloride. Calcium chloride (CaCl2) is the material used to melt ice on driveways. 2. Position the socks over the dam. 3. Arrange the socks so they lie evenly against the thickest parts of the ice dams. 4. Attach a string or a wire to the pantyhose to secure them to the roof. 5. Remove the pantyhose once the ice dams have sufficiently melted. Make sure you do not use rock salt instead of calcium chloride, as it can damage your shingles and the vegetation in your yard. People Also Ask (FAQs) Can ice dams damage your gutter guards? Yes, ice dams can put mechanical stress on gutter guards and damage them. Is ice dam damage covered by insurance? Some homeowners' policies have coverage for water damage or bursts due to ice dams.
What can you put on your roof to melt ice? The best material to melt ice on your roof is calcium chloride. You can buy calcium chloride (CaCl2) at your local hardware store. Do ice dams cause permanent damage? Yes, ice dams can cause permanent damage to your roof or your walls if left uncared for. Ice dams can be a nuisance, so you should take care of them as quickly as possible when they arise. Make sure your home is ventilated and insulated to protect against the formation of ice dams. Holly Curell is the editor extraordinaire for Plumbing Lab. Having grown up in Michigan, Holly has spent time living in New York, Virginia, & currently North Carolina, where she lives with her husband & family. Holly loves DIY & has years of experience with at-home plumbing problems that arise from having 3 kids & living in colder climates. When she’s not writing about her plumbing knowledge, Holly enjoys reading, hiking & relaxing with family.
physics
https://erwanleroy.com/vector-tools-for-nuke-tutorials-and-math/
2023-03-31T23:19:07
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949689.58/warc/CC-MAIN-20230331210803-20230401000803-00301.warc.gz
0.92383
1,388
CC-MAIN-2023-14
webtext-fineweb__CC-MAIN-2023-14__0__231475557
en
I just published a suite of Vector tools on Nukepedia (Download and Quick description here). In this post I will run the more nerdy of you through the math involved; in part 2 I will include mini-tutorials of example usage of these tools. The Tools Math Basic understanding of vectors is necessary to follow the math involved, review it here: mathsisfun.com Luma_to_Normals: Converts an image to normals (based on Luma). This one was one I really cared about for a long time. Back in 2014 I got really close to achieving good results, but I wasn’t able to obtain the Blue channel of a normal map properly. While there are tools on Nukepedia already that attempt to achieve this, I wasn’t entirely convinced by them. I didn’t want to publish a half-baked tool, so I got back in there a few days ago and I am pretty satisfied with the result. After some research, it seemed that the first step of converting luma to normals would be to apply some sort of filter. I decided to use a Sobel filter. https://en.wikipedia.org/wiki/Sobel_operator In Nuke, that meant adding 2 matrix nodes, one for a horizontal filter and one for a vertical filter. I would then shuffle these into R and G. That was already sufficient for running in an iDistort, for example to fake a refraction or heat haze, etc. That’s what I had been using for a long time. However, for a complete tool, I needed to get 3D vectors, not 2D. As normal maps are vectors representing a direction, the magnitude of their vectors is normalized so that each magnitude is 1. The math to calculate a vector magnitude is magnitude = sqrt(x^2 + y^2 + z^2). Since I knew X and Y, and wanted a final magnitude of 1, I had to write the expression sqrt(x^2 + y^2 + z^2) = 1, which solves Z like this: z = sqrt(1 - x^2 - y^2). I wrote the expression in Nuke. Now the problem is that if the magnitude of the 2D vector was already longer than 1, the expression would error and add a lot of “nan”. So I added an expression in order to return 0 in blue if the 2D vector was already 1 or longer. By adding a grade node before the Sobel operation, it’s possible to control the overall magnitude of the 2D vectors, which helps reduce the errors happening when calculating the 3D vector. I also added some extra controls, packed it in a group, and the tool was done. UV_Map_Generator: This one was pretty straightforward. UV maps are ramps that go from 0 to 1 horizontally in R, and vertically in G. The one little trick was that Nuke calculates the X and Y position of pixels from the bottom left corner of the pixel, while STMaps and other UV based tools calculate it from the center of the pixel. I used to generate them with the expression r = x/width, g = y/height. To be correct I had to add 0.5 to x and y to center the value on the pixel. So the final expression is r = (x+.5)/width, g = (y+.5)/height. UV_to_Vectors: By subtracting an original UV map from a distorted UV map, we obtain some sort of faint vectors corresponding to the distortion; however, to truly get the right magnitude, it’s necessary to multiply r by the width and g by the height. So for the expression, onto a distorted map (in r and g), I need to re-generate an original UV to subtract from the distorted UV: r = (r-((x+.5)/width))*width g = (g-((y+.5)/height))*height Which can be simplified to: r = -x+r*width-0.5 g = -y+g*height-0.5 (I’m using http://www.wolframalpha.com to simplify my expressions, because I’m lazy) Vectors_to_UV: Basically the opposite of above: r = (r+x+0.5)/width g = (g+y+0.5)/height Vectors_Direction: I wanted a way to rotate 2D vectors.
No need to re-invent the wheel: I googled the formula and wrote it into a Nuke expression. Sadly I can’t cite my source anymore since I found it years ago and can’t recall where I found it. The thing to be aware of is that the formula expected angles in radians, so the first thing I did is define a variable, then use it in the expression. angleRad = radians(parent.rotation) r = r * cos(angleRad) - g * sin(angleRad) g = r * sin(angleRad) + g * cos(angleRad) Vector_Transform: Sometimes, you may want to transform some vectors in space, using Nuke’s Transform node. However, since the direction and magnitude of the vectors are built into the color and not spatially, the default transform wouldn’t be able to rotate or scale vectors properly. Using the previous node (Vectors_Direction), a Multiply node (to multiply values according to scale) and a default transform, I was able to transform vectors properly. Other Tools included: vector3DMathExpression: A NoOp node with expressions to calculate a 3D vector between 2 3D points, as well as its magnitude. Vectors_Magnitude: A utility to see the magnitude of motion vectors, usually simply as information for the artist. Vectors_Normalize: Will scale every vector in a vector pass so that each vector’s magnitude is 1, while keeping the direction. Works on 2D and 3D vectors. In Part 2 of this article, I’ll be publishing a few mini-tutorials on how to use these tools in context.
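For readers who want to experiment outside of Nuke, here is a rough NumPy sketch of the same math (my own illustration, not part of the published gizmos; the array shapes, function names and clamping behaviour are assumptions):

import numpy as np

def normals_from_gradients(gx, gy):
    # Build normalized 3D normals from 2D Sobel-style gradients.
    # The 2D magnitude is clamped to 1 so z = sqrt(1 - x^2 - y^2) never goes NaN,
    # mirroring the "return 0 in blue" fix described above.
    mag2 = gx ** 2 + gy ** 2
    scale = np.where(mag2 > 1.0, 1.0 / np.sqrt(np.maximum(mag2, 1e-12)), 1.0)
    x, y = gx * scale, gy * scale
    z = np.sqrt(np.clip(1.0 - x ** 2 - y ** 2, 0.0, None))
    return np.stack([x, y, z], axis=-1)

def uv_to_vectors(u, v, width, height):
    # Distorted UV map -> pixel-space displacement vectors (r, g).
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    return u * width - (xs + 0.5), v * height - (ys + 0.5)

def vectors_to_uv(r, g, width, height):
    # Displacement vectors -> UV map (the inverse of the above).
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    return (r + xs + 0.5) / width, (g + ys + 0.5) / height

def rotate_vectors(r, g, degrees):
    # Rotate 2D vectors; both outputs are computed from the original r and g,
    # just as Nuke evaluates the two channel expressions from the same input.
    a = np.radians(degrees)
    return r * np.cos(a) - g * np.sin(a), r * np.sin(a) + g * np.cos(a)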
physics
https://www.rmhcene.org/environmental-implications-of-using-sonar-technology
2024-04-18T14:16:30
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817206.54/warc/CC-MAIN-20240418124808-20240418154808-00013.warc.gz
0.935137
1,507
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__75106825
en
Table of contents The power of sonar technology is undeniable, with its applications spanning from naval navigation to detecting fish and submarines under the ocean’s surface. However, the environmental implications associated with its use are a topic of growing concern in today's society. This article delves into the potential environmental impacts of sonar technology, shedding light on the effects on marine life and ecosystems. It's essential for us to address these issues, as the balance of our planet depends on our awareness and actions. Let's start this journey into the deep blue, exploring the unseen consequences of a technology we rely on so heavily. The Science behind Sonar Technology At its most fundamental level, sonar technology is a system that uses sound propagation to navigate, communicate with or detect objects underwater. Sonar, an acronym for Sound Navigation and Ranging, relies on the principle of transmitting acoustic waves and interpreting the echoes returned after they bounce off an object. This method is primarily employed for underwater navigation and detection. There are two main types of sonar: active sonar and passive sonar. Active sonar involves emitting an acoustic signal, or pulse of sound, into the water. If an object is in the path of this pulse, part or all of the pulse will be reflected back to the sonar transducer. The sonar equipment then measures the strength and the time delay of the return signal to calculate the distance and orientation of the object. On the other hand, passive sonar listens without transmitting. It is often used in military applications to detect submarines without revealing the listener's presence. The way these acoustic waves travel underwater and interact with objects, the seabed, and the water's surface is referred to as sonar propagation. The understanding of sonar propagation plays a pivotal role in the development and effective use of sonar technology. It involves complex physics calculations and an in-depth understanding of marine environments. Therefore, individuals with backgrounds in physics or marine technology are best equipped to delve into the intricacies of this fascinating field. Impacts on Marine Life The implementation of sonar technology has had far-reaching consequences on marine life. A particular subject of concern is the sonar noise pollution. The noises produced by active sonar are incredibly loud, and they have been linked to a range of negative effects on marine creatures, particularly mammals such as whales and dolphins. These creatures are noted for their sophisticated auditory systems, which they use for communication, location of food, and navigation. With the introduction of active sonar technology, these creatures are exposed to high-intensity sounds that are far beyond their normal hearing range. This overstimulation often results in a condition known as "acoustic trauma". Acoustic trauma is a bodily harm caused by a sudden change in the atmospheric pressure around an animal, often leading to devastating outcomes. Marine biologists and environmental scientists have also observed behavioral changes in dolphins and whales due to sonar noise pollution. These changes range from disruption in feeding patterns to unusual diving behavior. In extreme cases, exposure to sonar noises has been linked to mass strandings, where groups of these creatures beach themselves, which often leads to their demise. Indeed, the interplay between sonar technology and marine life remains a pressing concern. 
It underscores the need for a more considerate and sustainable approach to our use of technology in marine environments. Regulations and Mitigation Efforts The use of sonar technology has been subject to various "sonar regulations" due to its potential impacts on the environment. Governments and international bodies have implemented policies and guidelines to control the application of this technology, particularly in sensitive marine habitats. These rules govern the intensity, frequency, and timing of sonar activity to minimize disruptions to marine ecosystems. Alongside these regulations, numerous "mitigation efforts" have been undertaken to lessen the environmental harm caused by sonar. These include the development of alternative technologies and the implementation of "environmental impact assessments" before any sonar activity. Environmental Impact Assessment (EIA) is a vital tool to identify and evaluate the probable environmental effects of proposed activities, including the use of sonar. Despite these steps, there is a growing need for more comprehensive "policy changes". It is necessary to strike a balance between "technological advancement" and "environmental preservation". This balance is not merely about limiting the use of technology, but also about harnessing it in ways that can contribute positively to environmental protection. For instance, advancements in sonar technology itself could be directed towards minimising its environmental footprint. In conclusion, while current regulations and mitigation efforts play a significant role in managing the environmental implications of sonar use, more proactive measures are needed. It is important to ensure that the pursuit of technological progress does not come at the cost of environmental sustainability. Exploring Alternatives to Sonar In the quest for ensuring environmental sustainability, researchers are investigating potential alternatives to sonar technology. The use of sonar has proven to be a useful tool for underwater communication and navigational tasks, but its impact on marine life and ecosystems has raised significant concerns. It's become increasingly crucial to consider other viable options that can replicate the functionality of sonar without causing harm to the underwater environment. Research and innovation have led to the development of emerging technologies such as non-acoustic detection methods. These methods serve as a promising alternative to sonar, harnessing different principles for underwater communication that do not rely on sound waves. These technologies come with the potential benefit of reducing the adverse environmental effects associated with sonar. Still, they also present their own set of challenges - they might not provide the same range or accuracy as sound-based methods, for instance. The ongoing development and refinement of these alternatives to sonar are a testament to the importance of balancing technological advancement with environmental sustainability. As studies continue, the hope is that these emerging technologies will provide effective and eco-friendly solutions for underwater detection and communication. Long-Term Ecological Effects There are substantial concerns associated with the long-term effects of sonar technology on marine ecosystems. Sonar technology, primarily used for navigation and detection, has the potential to bring about significant changes in migration patterns. 
The high-intensity sound waves it produces can disrupt the behavioral patterns of various marine species, leading to altered migration paths. Furthermore, the breeding behavior of aquatic life can also be drastically affected. The noise pollution caused by sonar technology can cause stress and physical harm to marine animals, particularly those that rely on sound for communication and mating. This could potentially lead to a decrease in population over time due to reduced successful breeding. In addition, species distribution within marine ecosystems could also be affected. Species may be forced to move from their natural habitats to avoid the areas with high sonar activity, leading to an ecological disruption of the existing balance within the ecosystem. This displacement could potentially lead to loss of biodiversity and even extinction of certain species. By understanding these effects, it is clear that the utilization of sonar technology must be managed with care to minimize its negative impacts on marine ecosystems. An individual such as an ecologist or environmental scientist would be equipped with the knowledge and expertise to further explore these concerns and implement effective strategies to mitigate them. For instance, the use of an Email Address Validator could ensure that any communication related to marine conservation efforts and sonar regulation discussions is sent to the correct recipients, emphasizing the importance of accuracy in both technological and environmental fields.
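To make the active-sonar principle from the earlier section concrete, here is a minimal sketch of the range calculation from the round-trip echo delay (a generic textbook relationship, not code from any actual sonar system; the sound-speed figure is a typical assumed value):

SOUND_SPEED_SEAWATER = 1500.0  # metres per second; varies with temperature, salinity and depth

def echo_range_metres(delay_seconds, sound_speed=SOUND_SPEED_SEAWATER):
    # The pulse travels out and back, so the one-way distance is half the round trip.
    return sound_speed * delay_seconds / 2.0

print(echo_range_metres(0.4))  # a 0.4 s echo delay puts the target roughly 300 m away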
physics
https://www.mumc.nl/en/actueel/nieuws/brain-atlas-leads-fewer-side-effects-radiotherapy
2020-06-01T17:36:08
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347419056.73/warc/CC-MAIN-20200601145025-20200601175025-00508.warc.gz
0.941677
727
CC-MAIN-2020-24
webtext-fineweb__CC-MAIN-2020-24__0__92034346
en
With the irradiation of tumours in the brain and in the head and neck area, the radiation dose can now be reduced considerably, without the desired treatment result being compromised. This drastically reduces the chances of side effects. Daniëlle Eekers, radiation oncologist at MAASTRO clinic and ZON-PTC was awarded her PhD on this subject last Friday, 7 December, at Maastricht University. She compared various treatment methods for her research. At her initiative, a brain atlas was developed at MAASTRO—together with the European Proton Centres and in close collaboration with Maastricht UMC+. Before a tumour can be irradiated, scans (CT) are made to determine precisely where the correct dose should be given in relation to the location of the tumour. But it is often difficult to see on the scans where exactly the tumour is and where it is adjacent to the healthy tissue, for example with the optic nerve or the memory areas. These are also called organs at risk. That is why it is digitally contoured on very detailed scans (MRI) of the head of the patient. "In practice, there appeared to be important differences in how this was done by the various European Centres for Radiotherapy." That is not really remarkable, according to Daniëlle Eekers, because this involves very complicated anatomy. She proposed the development of a brain atlas. Through a solid consensus, MAASTRO has now produced a definitive brain atlas that will be used by the European Proton Centres. The other European radiotherapy centres will also use this brain atlas. "It has become a digital atlas with images of the brain that is available online", says Daniëlle Eekers. "You can view it from multiple perspectives, both on MRI and CT scans. The intention is that all radiotherapists will contour the many important organs at risk in the same way. This not only allows for good comparisons of the various treatments, such as the current irradiation techniques with radiation with charged particles (protons). But this also reduces the side effects of the radiation because it is now possible to take these organs at risk into account when determining the parameters of the radiation. The brain atlas thus makes an important contribution to the further optimisation of the radiotherapeutic treatment of tumours in the brain and the head and neck area." Daniëlle Eekers also established that the cerebellum can play an important role in the recovery of the patient after radiation treatment. "It was already previously known that the cerebellum is primarily responsible for our coordination and balance. Now it appears that the posterior part of the cerebellum also affects the process of acquiring knowledge through perception and processing it through thinking. Sometimes patients have problems with this after radiation. It is therefore essential that the posterior part of the brain is carefully contoured to prevent unnecessary side effects." With two international studies, Danielle Eekers showed that the aforementioned organs at risk will receive lower doses by irradiating charged particles, such as protons and carbon. "The special thing about these particles is that they stop in the patient and therefore do not release a dose beyond the tumour." This radiation with protons is now possible in Groningen and Delft and in a few weeks will also be in Maastricht. It is anticipated that approximately 3% of all patients who receive radiation therapy in the Netherlands are eligible for proton radiotherapy. Source: MAASTRO Clinic
physics
https://motohut.ca/products/revit-thermic-shirt
2024-04-25T03:58:00
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712297284704.94/warc/CC-MAIN-20240425032156-20240425062156-00606.warc.gz
0.912235
221
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__105173892
en
REV'IT! Thermic Shirt Free Standard Shipping The laws of thermodynamics dictate that heat will always move to colder areas. Well, not anymore. With the Thermic LS long sleeve base layer, we’re fighting physics with physics. Engineered with hypoallergenic Dryarn® fabric in very specific, strategically placed knitting patterns, our R&D managed to create a long sleeve base layer shirt that actively insulates body heat. Meanwhile perspiration is wicked away, absorbed by the fabric, and allowed to evaporate away from the skin, preventing the moisture from conducting between the cold outside and the heat inside. Even the raised elasticated collar is designed so the base layer comfortably wraps around the neck, preventing rising heat from leaving your core. Much more than just underwear By effectively trapping body heat, your body is less strained to keep your body temperature up by burning calories and consuming energy. Energy you could be putting to good use to stay sharp and focused, proving the Thermic LS base layer shirt is much more than just simple underwear.
physics
https://www.shopqamolu.com/products/inshape-exercise-bands
2023-10-03T13:19:48
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511106.1/warc/CC-MAIN-20231003124522-20231003154522-00589.warc.gz
0.901325
167
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__6784712
en
InShape Portable Gym Push-up aids increase resistance, prevent chest pulling, and increase the training effect by 3 times. Quickly adjust the length of the belt to change the resistance. The resistance belt is 85 cm long and can be pulled up to 230 cm. It can withstand greater tension. There are three levels of resistance to adjust, and you can easily adjust the resistance by adjusting the buckle. It uses a comfortable, non-slip soft rubber handle, and the camouflage belt is sturdy, durable, and oxidation-resistant, which helps prevent accidental injury from snap-back. The maximum tensile force that the red band can withstand is 36 kg, which is suitable for women’s fitness, and the maximum tensile force that the blue band can withstand is 48 kg, which is suitable for men’s fitness.
physics
https://sites.wustl.edu/clouddetection/
2022-09-27T20:37:49
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00531.warc.gz
0.881537
697
CC-MAIN-2022-40
webtext-fineweb__CC-MAIN-2022-40__0__202793559
en
Motivation for the Project Solar energy production is set to become a cheaper way to generate power than coal by 2021, a shift fueled by massive investments in large solar farms by quick-growing economies such as China and India [1]. The integration of large scale photo-voltaic arrays into the electricity grid poses a unique challenge to grid operators: solar power is an inherently variable power source which can fluctuate rapidly over small spatial and temporal scales. Events as commonplace as a cloud passing between the sun and a solar panel can cause a rapid change in the power output of the panel, an event known as ramping. Being able to predict the output of a solar array is necessary for system operators to smooth ramping events, and to “procure energy and ancillary services in the intra-hour to day-ahead time frame, thereby minimizing costs and improving services” for consumers [2]. While techniques such as numerical weather prediction and satellite cloud tracking do provide some reliable predictions of global horizontal irradiance (a measure of the direct and diffuse solar radiation reaching the earth’s surface), these techniques fall short for predictions at high spatial and temporal resolutions [3]. It is this gap in prediction techniques that ground-based sky imaging systems seek to fill. Within the intra-hour time frame, clouds represent the single most important influence on GHI. By collecting images from a camera located at the site, cloud cover which will directly impact the solar panels at that location can be identified, categorized, and tracked using a variety of computer vision techniques, allowing the solar panel operators to predict ramping events that will affect the power output. Overview of the Project Over the course of the semester, I developed a method for extracting cloud cover percentage and cloud density from a ground-based sky image. I utilized the OpenCV Library in Java and the Image Processing Module in MATLAB to create a simple, yet effective, algorithm for extracting these details from an image. The images I used to train and test the techniques are from two databases, SWIMCAT and SWIMSEG, which are explained in more detail here. A visual overview of the process is shown in Figure 1 below. Details about segmenting the images into cloud and sky regions can be found here, and more information about density categorization of the images is here. Figure 1: Overview of Process References: 1. Shankleman, Jess, and Hayley Warren. “Solar Power Will Kill Coal Faster Than You Think.” Bloomberg, 15 June 2017, www.bloomberg.com/news/articles/2017-06-15/solar-power-will-kill-coal-sooner-than-you-think. 2. Richardson, Walter, et al. “A Low Cost, Edge Computing, All-Sky Imager for Cloud Tracking and Intra-Hour Irradiance Forecasting.” Sustainability, vol. 9, no. 4, 2017, p. 482, doi:10.3390/su9040482. 3. Chow, Chin Wai, et al. “Intra-Hour Forecasting with a Total Sky Imager at the UC San Diego Solar Energy Testbed.” Solar Energy, Pergamon, 13 Sept. 2011, www.sciencedirect.com/science/article/pii/S0038092X11002982.
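One common baseline for the cloud/sky segmentation step is a red-to-blue ratio threshold, since clear sky scatters far more blue light than clouds do. The sketch below is only an illustration of that generic idea in Python with NumPy; it is not necessarily the algorithm used in this project (those details are behind the linked pages), and the threshold value is an assumption:

import numpy as np

def cloud_cover_fraction(rgb, threshold=0.75):
    # rgb: float array of shape (H, W, 3) with values in [0, 1].
    # A high red/blue ratio suggests cloud; clear sky is strongly blue.
    red = rgb[..., 0]
    blue = np.clip(rgb[..., 2], 1e-6, None)  # avoid division by zero
    cloud_mask = (red / blue) > threshold
    return cloud_mask.mean()  # fraction of pixels classified as cloud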
physics
http://www.yjcsoe.live/tungsten-heavy-metal/tungsten-heavy-metal-rod.html
2020-03-29T15:19:38
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370494349.3/warc/CC-MAIN-20200329140021-20200329170021-00244.warc.gz
0.865247
208
CC-MAIN-2020-16
webtext-fineweb__CC-MAIN-2020-16__0__170693277
en
Tungsten Heavy Metal Rod Tungsten has the highest melting point (3410°C or 6170°F) of all metals. The extremely high melting point of pure tungsten makes all the common manufacturing techniques used for metals such as iron impractical. Specialized methods make possible the processing of pure tungsten into rod, sheet, and wire for a wide variety of high temperature applications including incandescent lamp wire, TIG welding electrodes, high temperature heat shielding, etc. Tungsten heavy metal rod is manufactured from high-purity tungsten powder by powder metallurgy. Compared to rods made from other materials, tungsten heavy metal rod has a higher melting point, higher density and better electrical conductivity. Tungsten heavy metal rod is now widely used in applications such as tungsten heavy metal wire, tungsten heaters, printer pins, various tungsten heavy metal electrodes, heating devices for quartz furnaces, welding rods, automotive products, sputtering targets, etc.
physics
https://www.brembostoreusa.com/ventilated-brake-rotors-explained-standard-vane-vs-pvt-pillar-design/
2023-03-26T00:01:09
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945376.29/warc/CC-MAIN-20230325222822-20230326012822-00014.warc.gz
0.944198
916
CC-MAIN-2023-14
webtext-fineweb__CC-MAIN-2023-14__0__78849537
en
Braking is everything in driving. Whether you’re commuting to work or chasing personal records around a track, your ability to slow down is important. Modern brake systems use brake rotors and brake pads to slow a vehicle down – a system that’s both simple and efficient. When you press the brake pedal, you’re sending hydraulic pressure to the calipers, which in turn press the brake pads against the rotor. The resulting friction reduces speed, ultimately bringing the vehicle to a halt. However, where there’s friction, there’s also heat. Brake rotors, as well as brake pads, are subjected to immense temperature buildups during use. Excess heat is the primary enemy of any braking system as it greatly reduces its effectiveness. Even worse, the buildup of excess heat can warp rotors, causing permanent damage. Brake rotors come in all kinds of flavors, with solid rotors still being extremely popular. As cars became heavier and faster, solid brake rotors have proven to be inefficient at managing excess temperatures that were becoming the norm. With up to 80% of this excess heat being channeled through the rotor, it was obvious that something had to be done. The solution came in the form of passive ventilation. Instead of having a solid cast iron rotor, the idea was to build one that had vanes running between the interior and exterior braking surfaces. That way, as the rotor spun, it would create constant airflow through those vanes, resulting in a much cooler rotor. Brembo Pillar Venting Technology The use of passive ventilation has significantly impacted the overall performance of an average brake rotor. However, as performance demands increased, it became apparent that standard directed vanes weren’t going to cut it for long. Brembo was one of the first companies to offer an innovative, proprietary evolution of the ventilated rotor design. Named Pillar Venting Technology, the new ventilation design delivered numerous improvements compared to standard directed ventilation vanes. Engineers at Brembo recognized the limits of standard vanes, as well as the shortfalls of this original design. Instead of running contoured, directional vanes through the middle of the rotor, Brembo went with a more intricate system of air ducts whose main purpose is to optimize airflow and increase the cooling surface. By implementing this unique ventilation pattern, Brembo has achieved: - Better Wear Resistance – The ability to offer consistent cooling even when the vehicle isn’t moving at higher speeds allows Brembo PVT rotors to stay well below critical operating temperatures. This, in turn, results in less rotor wear over time. - Resistance to Thermal Cracks – One of the consequences of extreme heat is thermal cracks that form all over the braking surface of the rotor. By introducing a more efficient airflow pattern using PVT, Brembo has managed to increase the resistance to thermal cracks by over 40% - Longer Brake Pad Life – Lastly, lower maximum average rotor temperature results in a longer brake pad life. - Reduction in Debris Ingress – PVT and PVT Plus rotors feature a vane design that prevents the ingress of debris inside the rotor. Pushing Pillar Technology Further with PVT Plus Once the initial results came in showing the advantage of PVT technology, Brembo started working on optimizing this system even further. First came the T pillars and Star Pillars, both of which were designed to improve the performance and durability of rotors used on trucks as well as light commercial vehicles. 
However, the next big improvement came with the development of PVT Plus technology. Engineers at Brembo managed to improve the thermal performance of PVT rotors by 30% using better-optimized pillars and a more effective airway design. Aside from boosting the overall cooling performance of the rotor, one of the side effects of this optimization was a 10% reduction in brake rotor mass. Customized PVT Solutions for Specific Vehicle Models The original PVT layout had proven to be a good universal solution. Pushing the performance of PVT technology to the next level meant optimizing individual rotors to fit the mass and driving characteristics of the car models where this solution is adopted as original equipment (OE). Each PVT Plus rotor has its own PVT geometry, thus delivering a custom performance profile. That being said, our PVT and PVT Plus rotors fit on both ends of the axle.
physics
https://lhy.com/us/products/motors/
2024-04-19T12:26:31
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817398.21/warc/CC-MAIN-20240419110125-20240419140125-00704.warc.gz
0.911059
302
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__197068550
en
Swash plate and bent axis hydraulic motors We produce axial piston motors for high pressure hydraulics. Our hydraulic motors are differentiated by their design features, as we offer our constant and variable displacement motors in both swash plate and bent axis designs. What are hydraulic motors? What does a hydraulic motor do? A hydraulic pump converts mechanical energy into hydraulic energy. The hydraulic motor then converts this hydraulic energy (pressure and volume flow) proportionally back into mechanical energy (torque and speed). The difference between swash plate and bent axis motors Swash plate and bent axis motors differ in their design characteristics. Whereas in the swash plate motor the cylinder block is always parallel to the drive shaft, its position in the bent axis motor is diagonal to the drive shaft – to name just one difference. Why hydraulic motors from Linde Hydraulics? We offer a wide range of high-pressure motors – hence, you will always find the right motor for your requirement! If you need higher speeds, the bent-axis motor is perfect due to its smaller cylinder block. It also offers a very slim insertion contour. However, if you need greater robustness against rotational acceleration, you should have a look at our swash plate motors. Our starting torque is outstanding. This applies to both motor types, regardless of whether they are fixed or variable displacement motors. Do you have questions about high-pressure motors? We will be happy to advise your company: Contact
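The proportionality mentioned above can be made concrete with the standard ideal-motor relationships: shaft speed follows from flow divided by displacement, and theoretical torque from pressure drop times displacement over 2*pi. The sketch below is a generic textbook illustration, not Linde Hydraulics data, and the example figures are assumptions:

import math

def motor_speed_rpm(flow_l_per_min, displacement_cc_per_rev):
    # Each revolution swallows one displacement volume of fluid.
    return flow_l_per_min * 1000.0 / displacement_cc_per_rev

def motor_torque_nm(pressure_drop_bar, displacement_cc_per_rev):
    # Theoretical (lossless) shaft torque from the pressure drop across the motor.
    return pressure_drop_bar * displacement_cc_per_rev / (20.0 * math.pi)

# e.g. a 100 cc/rev motor fed with 150 L/min at a 300 bar pressure drop:
print(motor_speed_rpm(150, 100))  # about 1500 rpm
print(motor_torque_nm(300, 100))  # about 477 N*m before mechanical losses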
physics
https://www.ronwilkinjewellers.com/services/laboratory-grown-diamonds/
2023-09-29T11:08:19
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510501.83/warc/CC-MAIN-20230929090526-20230929120526-00074.warc.gz
0.922468
648
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__261364886
en
Laboratory Grown Diamonds “Are lab grown diamonds real diamonds?” The answer is simply “YES!” The only thing that makes a laboratory grown diamond different from a natural diamond is its origin. They are grown in a lab, instead of being grown underneath the Earth's surface. A lab diamond is “grown” inside a laboratory using cutting-edge technology replicating the formation of a natural mined diamond. These lab created diamonds have the same physical, chemical and optical properties as a natural diamond. Laboratory grown diamonds are graded the same way as natural diamonds using the 4-C’s (colour, clarity, carat weight and cut). Laboratory grown diamonds are not the same thing as diamond simulants like Cubic Zirconia or Moissanite. Frequently Asked Questions About Lab Diamonds How are laboratory grown diamonds created? High Pressure-High Temperature (HPHT) Using one of three manufacturing processes, the belt press, cubic press, and the split-sphere press, an environment of extremely high pressure and temperature is created that is conducive to diamond growth. A HPHT diamond begins as a small diamond seed that is placed into carbon. The seed is exposed to a temperature of about 1500 degrees Celsius and pressurized to approximately 1.5 million pounds per square inch. The pure carbon melts and starts to form a diamond around the starter seed. Then it is cooled carefully to form a pure carbon diamond. Chemical Vapor Deposition (CVD) Starting as a thin slice of a diamond seed, the diamond slice is placed in a sealed chamber and heated to around 800 degrees celsius. This chamber is filled with a carbon-rich gas such as Methane and other gases. These gases are ionized into plasma. The ionization breaks the molecular bonds in the gases, and the pure carbon adheres to the diamond seed and slowly crystalizes. How long does it take to create a lab diamond? It takes approximately 7-10 days to grow a 1 carat lab diamond. How can you determine if a diamond is naturally mined or laboratory grown? Laboratory grown diamonds are almost impossible to differentiate from natural diamonds. Lab diamonds may exhibit different trace elements than natural diamonds but those elements do not affect the appearance of the diamond. All laboratory grown diamonds are required by law to have “lab-grown" or "lab-created'' and a serial number inscribed on the girdle of the diamond. This is visible with a loupe. All laboratory created diamonds should come with a gem certification identifying them as laboratory grown. Benefits of a laboratory grown diamond Competitively priced - the price of a laboratory grown diamond can be anywhere from 35-45% less expensive than mined diamonds. This is due to a shorter supply chain and far less cost involved with maintaining the equipment needed to create diamonds in a lab vs. running a large scale mining operation. Environmentally Sustainable - the way in which a laboratory mined diamond is created allows for much less land disturbance and mineral waste compared to a naturally mined diamond. Conflict Free - a laboratory diamond is guaranteed 100% conflict free.
physics
http://lauscha.heatherkellyglass.co.uk/content/annealing-schedule
2017-04-29T09:19:42
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123484.45/warc/CC-MAIN-20170423031203-00123-ip-10-145-167-34.ec2.internal.warc.gz
0.888534
549
CC-MAIN-2017-17
webtext-fineweb__CC-MAIN-2017-17__0__286696904
en
Carol Anne Beckman - Annealing Schedule 2009 This schedule was provided by Carol Anne Beckman. I have added conversions to °C. In Carol's words: "1. Ramp up as fast or as slow as you feel necessary to 980 degrees fahrenheit (527°C). 2. I do not have a separate temperature for soaking and annealing. I use 980 Degrees Fahrenheit (527°C) for soaking and annealing. 3. Your program should be as long as your working time, plus a minimum of 2 hours. The 2 hour time must be increased if your beads are bigger than 4 centimeters. The guideline that I learned is 1/2 hour annealing time for each centimeter of bead size with a minimum of 2 hours. 4. So, a bead that had a largest measurement of 6 centimeters would need to be annealed for 3 hours. 5. After the annealing segment of your annealing program, you want to take the kiln down to 800 degrees Fahrenheit (427°C) at a rate of between 60 degrees Fahrenheit to 100 degrees Fahrenheit per hour (33 to 56°C/hr). The range from 60 degrees to 100 degrees per hour (33 to 56°C/hr) is in case you included a bunch of strange stuff in your bead and you want to baby your bead during this part of the annealing cycle... 6. Then, your hold time at 800 degrees Fahrenheit (427°C) (step 5) is going to be the SAME as the amount of time you annealed your beads in step 3. 7. After holding, take your beads down to room temperature at a rate of 60 to 100 degrees fahrenheit per hour (33 to 56°C/hr), depending again on how much strange stuff you mixed in with your bead. For more strange stuff, slower." Note: your kiln may run hotter than the temperature it displays, and may have a temperature gradient across it, with some areas warmer than others. If you are getting unexpected results, try checking the temperature inside the kiln with a separate thermocouple. Alternatively, do some runs with strategically-placed test beads, adjusting the temperature by 10 degrees each time until you get the desired result. If you use transparent glass you can check the beads for stress with a home-made polariscope. Converting temperatures and ramps Google will convert Fahrenheit to Celsius, but converting degrees per hour is a bit trickier. There is a handy tool by Brad Walker: Temperature and Rate Conversion at the Warm Glass tip archive that will also convert ramps in F to C for you.
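For anyone scripting a controller program from this schedule, here is a small arithmetic sketch of the conversions and the soak-time rule quoted above (my own illustration, not Carol's schedule or Brad Walker's tool; note that a temperature difference, and therefore a ramp rate, converts with the factor 5/9 only, without the +32 offset):

def f_to_c(temp_f):
    # Spot temperature conversion.
    return (temp_f - 32.0) * 5.0 / 9.0

def ramp_f_to_c(rate_f_per_hour):
    # Ramp rates are differences per hour, so only the 5/9 scale factor applies.
    return rate_f_per_hour * 5.0 / 9.0

def anneal_hours(largest_bead_cm):
    # Half an hour per centimetre of bead size, with a two-hour minimum.
    return max(2.0, 0.5 * largest_bead_cm)

print(f_to_c(980))                        # about 527 C soak/anneal temperature
print(ramp_f_to_c(60), ramp_f_to_c(100))  # about 33 to 56 C per hour
print(anneal_hours(6))                    # 3 hours for a 6 cm bead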
physics
https://www.new1.ncbj.gov.pl/en/reaktor-maria/fuel-maria-reactor
2024-03-01T07:25:48
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475203.41/warc/CC-MAIN-20240301062009-20240301092009-00146.warc.gz
0.922292
332
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__54880249
en
Fuel of the MARIA reactor The fuel element of the MARIA reactor consists of 6 (MR type) or 5 (MC type) concentric pipes 2 mm thick, separated by a 2.5 mm water gap. Water in the gaps between the fuel pipes plays a double role. First, it is a neutron moderator, needed to slow down the fast neutrons generated during fission to thermal energies, at which subsequent fission of 235U nuclei takes place efficiently. Water also acts as a coolant, i.e. it is used to collect the heat generated in the fuel. A single fuel element holds up to 485 grams of 235U. The fuel material used in the MARIA reactor is a dispersion of uranium oxide (Russian MR fuel) or uranium silicide (French MC fuel) in aluminum. The 0.8 mm thick fuel layer is placed between two cladding layers, the so-called fuel jacket, which prevents the release of fission products into the coolant. The fuel element of the MARIA reactor is shown in Figure 1; the arrows indicate the direction of coolant flow. The core of the MARIA reactor contains 6 to 7.5 kg of uranium-235 at any given time. This amount depends on the core configuration, fuel burnup and the current work program. The fuel elements are removed from the core after the burnup of 235U reaches 40–60%. Further burning of the fuel is economically disadvantageous. After the end of its work in the core, the fuel is transferred to the storage basin, from where it is collected by the manufacturer after several years of cooling.
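As a rough back-of-the-envelope illustration of what those burnup figures mean in energy terms (my own estimate, not from the reactor documentation; neutron capture, which consumes 235U without fission, is ignored, so this slightly overestimates):

AVOGADRO = 6.022e23
ENERGY_PER_FISSION_J = 200e6 * 1.602e-19   # roughly 200 MeV released per fission, in joules
MWD_IN_JOULES = 8.64e10                    # one megawatt-day of thermal energy

def fission_energy_joules(grams_u235):
    atoms = grams_u235 / 235.0 * AVOGADRO
    return atoms * ENERGY_PER_FISSION_J

burned_grams = 0.5 * 485.0                 # 50% burnup of one 485 g fuel element
print(fission_energy_joules(burned_grams) / MWD_IN_JOULES)  # about 230 megawatt-days (thermal)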
physics
http://www.fearofflying.com/library/jets-move-up-and-down-less-than-an-inch-during-turbulence-why/
2023-11-28T17:15:50
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099892.46/warc/CC-MAIN-20231128151412-20231128181412-00154.warc.gz
0.943654
1,132
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__36139935
en
In a jet, you are going about 800 feet per second. From goal line to goal line, a football field is one-hundred yards, or three-hundred feet. So every second, a jet moves the length – almost – of three football fields. Imagine three football fields lined up end to end. Let’s figure the air over the first one is going up, the air over the next one is going down, and the next one up. On a jet, you go up, then down, then up, spending just about one-third of a second over each field, about as quick as you can read that. Then you are over the second field with its downward-moving air and you move down for about one-third of a second. Then you are over the third field and move up for one-third of a second. There really isn’t much time to go up (or down) before you are into an area where the direction of the air is the opposite. It all averages out and the plane stays at its assigned altitude. That’s it. How much do you go down? Less than an inch in a jet because you go from down-moving air to up-moving air so quickly that neither down nor up moving air has time to do much, and you just feel it as a jolt because you are hitting the bump so fast (think speed bump in your car at ten mph versus 30 mph). In the slower plane you are in down-moving air longer and go down maybe a foot or two before the up-moving air moves you up a foot or two. But ‘what if’? What if the plane did ‘free-fall’? What if it did go into a zero-gravity condition for one-third of a second? How far would it ‘free-fall’? In free fall you accelerate downward at thirty-two feet per second per second. So, in one-third of a second of free-fall, the plane would pick up a downward speed of only about ten feet per second and descend less than two feet. So the next time someone tells you they were on a plane that fell ten thousand feet, tell them to check this web site: http://www.nogravity.com/ That is the web site for Zero Gravity Corporation. The terror fearful fliers experience is based only on imagination. They imagine falling weightlessly from 35,000 feet to the ground. In a dive an airplane can produce weightlessness for only a moment. Contrary to what fearful fliers expect, weightlessness throughout a dive is impossible. For weightlessness to be produced for more than a moment, an airplane must carefully fly a parabolic profile. Weightlessness is achieved only while the plane’s nose is being CONTINUOUSLY lowered at a prescribed rate. Even with a carefully flown parabola, weightlessness can be achieved for only about thirty seconds. For a mere $3750, Zero Gravity Corporation provides a zero gravity experience using a Boeing 727, the interior of which is padded on all sides. The longest zero-gravity experience they can produce is thirty seconds. But for your $3750, you get fifteen experiences of weightlessness of about thirty seconds each. Zero Gravity Corp starts the parabolic profile at 32,000 feet with the nose of the plane raised approximately 45 degrees above the horizon. (About three times higher than on a normal takeoff.) The nose is then continuously lowered at a prescribed rate to produce a zero-gravity condition for 25 to 30 seconds during which everything in the plane is weightless. The lowering of the nose is continued until the plane is diving with the nose approximately 30 degrees below the horizon. (About ten times steeper than a normal descent.) At that point the pilots start a gentle pull up to allow the participants to perch themselves on the padded floor. Then, to recover from the dive, the nose is brought aggressively back to level.
As this is done, the g-force increases to about 1.8 g’s until the aircraft returns to level flight. (1.8 g’s is more than you will ever experience in turbulence.) Zero Gravity Corporation does this with a forty-some-year-old 727, and yes, the wings don’t fall off. The weightlessness experienced inside the airplane is actually equivalent to the type of “free fall” experienced for the first six seconds when sky diving or bungee jumping. After six seconds, a person’s downward speed stabilizes at a fixed speed of around 120 to 150 m.p.h. and one descends at that speed in a one g condition. This brings to mind the difference between ‘descending’ and ‘falling’. We expect the feeling of falling to be a zero-gravity (weightless) experience. Descending can be a zero-gravity experience or not. An airplane can descend, but it cannot fall (zero g) for more than a moment (except when intentionally flying the parabola). The parabolic flight profile is shown at this web page: http://www.nogravity.com/how.htm For more information see: http://www.nogravity.com/ The problem really comes because you visually see nothing holding the plane up and expect it to fall. You need to think about the jello exercise . . . possibly even buy some jello and do it for real. FOR A SCHOLARLY EXPOSITION ON WEIGHTLESSNESS, SEE:
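For readers who want to check the arithmetic above, here is a minimal sketch of the constant-acceleration free-fall numbers (a generic physics illustration, not material from fearofflying.com or Zero Gravity Corporation):

G_FT_PER_S2 = 32.0  # gravitational acceleration in feet per second squared

def free_fall(seconds):
    # Returns (distance dropped in feet, downward speed gained in ft/s)
    # for an object in free fall, using d = 1/2 * g * t^2 and v = g * t.
    return 0.5 * G_FT_PER_S2 * seconds ** 2, G_FT_PER_S2 * seconds

print(free_fall(1.0 / 3.0))  # about 1.8 ft dropped and about 10.7 ft/s gained in one-third of a second
print(free_fall(6.0))        # the first ~6 s of a skydive: ~576 ft dropped, ~192 ft/s (~130 mph)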
physics
https://agapewater.com/what-is-inside-cedi/
2024-02-29T04:08:58
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474784.33/warc/CC-MAIN-20240229035411-20240229065411-00014.warc.gz
0.917725
1,092
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__78882365
en
Water purification is an essential process in many industries. The Electrodeionization (EDI) or Continuous Electrodeionization (CEDI) process is one of the most efficient and cost-effective methods. CEDI is an ion exchange technology that uses electrical current to remove ions from water, resulting in high-purity water. This article will explore what is inside CEDI and how it works. The components inside CEDI are: - Positively charged anode - Negatively charged cathode - Diluting chambers with mixed cation and anion resin (D-chambers) - Concentrating chambers that remove the ions from the CEDI device (C-chambers) - Cation exchange membranes between dilute and concentrate chambers, on the cathode side of the dilute chambers - Anion exchange membranes between concentrate and dilute chambers, on the anode side of the dilute chambers What Is CEDI? CEDI is an ion exchange water purification technology that uses ion exchange resin and membranes in a DC electrical current to remove ions from water. This impressive technology passes an electric current through a series of cell pairs which transport ions from the feed water to a concentrate waste stream. This process results in high-purity water with low levels of dissolved solids, making it ideal for industrial applications such as semiconductor manufacturing and pharmaceutical production. Although original patents for this technology can be traced back to the early 50s, it was not until the mid-90s that the technology advanced enough to provide consistent results. How Does CEDI Work? This highly technical process begins with a pre-treatment step such as reverse osmosis to remove suspended solids and bulk dissolved impurities from the feedwater. The CEDI feed water enters the CEDI device and is split between multiple dilute chambers. Water is purified as it passes through the ion exchange resin, and contaminants are trapped on the resin. The positively charged cations are attracted to the negatively charged cathode and migrate through the resin in the D-chamber until they pass through a cation exchange membrane into the C-chambers. Likewise, the negatively charged anions are attracted to the positively charged anode and migrate through the resin in the D-chamber until they pass through an anion exchange membrane into the C-chambers. The contaminants are swept away into the concentrate stream and exit the EDI device as a reject stream. The reject stream is still high quality, so the water can either be sent to a local drain or recovered to the pretreatment vented storage tank. Only about 5-10% of the feed water becomes concentrate reject in most cases. Finally, the DC electrical current splits water molecules (H2O) into hydrogen (H+) and hydroxyl (OH-) ions. The hydrogen ions continuously regenerate the cation resin, and the hydroxyl ions continuously regenerate the anion resin. Note that this is a continuous and chemical-free process, so there is no acid or caustic regeneration necessary as with traditional mixed beds. Advantages of CEDI CEDI offers several advantages over traditional ion exchange processes, including: - High efficiency: The CEDI process can achieve up to 99% removal efficiency for most ions, making it one of the most efficient methods of ion removal available today. - Low operating costs: CEDI requires very little energy, resulting in lower operating costs for users. - Low maintenance requirements: The membranes used in CEDI require minimal maintenance and can last up to 10 years or even longer with proper care and maintenance. - Flexibility: The modular design of CEDI systems allows them to be easily scaled up or down depending on user needs. This makes them ideal for applications where flow rates may vary over time or where space is limited.
- Flexibility: The modular design of CEDI systems allows them to be easily scaled up or down depending on user needs. This makes them ideal for applications where flow rates may vary over time or where space is limited. - Chemical free. No acid or sodium hydroxide is used to regenerate the resin, therefore no chemical regeneration systems, acid or caustic bulk storage, pH neutralization is necessary. - Small footprints. CEDI systems have small space requirements and do not need all the ancillary equipment required with chemically regenerated systems. CEDI is an efficient and cost-effective method for producing high-purity water with low levels of dissolved solids. It offers several advantages over traditional ion exchange processes, including high efficiency, low operating costs, low maintenance requirements, and flexibility in terms of scalability and space requirements. For these reasons, it has become increasingly popular among industrial users looking for reliable and cost-effective solutions for their water purification needs. Suppose you’re looking for solutions for your industrial water purification needs. Or if you have questions about your current water purification solutions Agape Water Solutions is here to help. Our trained professionals can help with field audits, troubleshooting, and more. At Agape, we provide the best industrial water solutions. Our trained personnel can check your plant’s specific operating conditions and requirements and recommend a proven solution. Our highly specialized process engineers can custom-design your system and select particular components and membranes to ensure the most reliable and consistent performance for years to come. We believe in excellence, and that’s the kind of products and services you’ll receive when you work with us. Contact us today to get started.
physics
https://africanpostonline.com/nasas-parker-solar-probe-to-launch-on-historic-mission-to-touch-the-sun/
2019-08-23T15:11:12
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318894.83/warc/CC-MAIN-20190823150804-20190823172804-00551.warc.gz
0.916163
640
CC-MAIN-2019-35
webtext-fineweb__CC-MAIN-2019-35__0__158473281
en
NASA’s Parker Solar Probe will lift off from the Cape Canaveral Air Force Station in Florida on Saturday, Aug. 11 at 3:33 a.m. EDT. The $1.5-billion, car-sized spacecraft is designed to provide a close look at the sun’s atmosphere — what astronomers call the corona — to answer enduring questions about this ultra-hot region of our nearest star. Over the course of its seven-year mission, the probe will orbit the sun 24 times, each time sweeping through the corona, where the temperature is a blistering 2,500 degrees Fahrenheit (almost 1,400 degrees Celsius). The spacecraft and its suite of delicate instruments will be protected from the sun’s extreme heat by a carbon fiber heat shield. At its closest approach, the probe will be just 3.8 million miles above the sun’s surface. And as it draws near, the spacecraft will be accelerated by our star’s intense gravity to a stupendous speed — estimated to be 430,000 miles per hour. That will make the probe the fastest human-made object, eclipsing the twin Helios probes that zoomed along at 157,000 miles per hour on their sun-circling trajectories. Space scientists have spent decades trying to understand how energy moves through the corona and what drives the flow of charged particles that the sun continuously casts off. Solar physicist Eugene Parker first predicted the existence of this stream of high-energy particles, known as the solar wind, 60 years ago. NASA’s probe is named for Parker, making it the first time the agency has named a mission for a living person. It’s important to understand the corona because it’s the breeding ground of vast and potentially destructive blasts known as solar flares and coronal mass ejections. When these streams of plasma and energetic particles strike Earth, they interact with our planet’s magnetic field, generating beautiful northern and southern lights. But they can also jeopardize the safety of astronauts aboard the International Space Station and the integrity of electrical grids on Earth. The probe should give scientists a front row seat to all this action. “All our data on the corona so far have been remote,” said Nicholeen Viall, a solar physicist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “We have been very creative to get as much as we can out of our data, but there is nothing like actually sticking a probe in the corona to see what’s happening there.” Eventually, when the spacecraft runs out of fuel, it will disintegrate as it gets pulled lower and lower in its orbit around the sun. NASA’s live broadcast of the Parker Solar Probe launch will begin at 3 a.m. EDT on Saturday. © 2018, African Post Magazine. All rights reserved. This material, and other digital contents on this website may not be reproduced, published, broadcast, rewritten or redistributed in whole or in part without prior express written permission to AFRICANPOST MAGAZINE
physics
http://wepta.org/stem-presentation-dr-marty-cooper/
2018-03-19T14:16:12
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646952.38/warc/CC-MAIN-20180319140246-20180319160246-00534.warc.gz
0.952827
144
CC-MAIN-2018-13
webtext-fineweb__CC-MAIN-2018-13__0__172049057
en
Marty Cooper – STEM Discussion (2 spots) – Friday 9/15, 5:30 pm to 6:30 pm
The inventor of the cell phone and winner of the Marconi Prize, Marty Cooper, has agreed to be a White Eagle STEM volunteer and get involved in inspiring our kids to understand science. He experimented with soda bottles when he was young and is happy to discuss that with your kids. And oh, he will also show the world’s first cell phone, which is the size of a boot and had to charge for 10 hours to deliver 8 minutes of talk time! Bring your curious minds, the young and the old, and come listen to a legend. To invite your friends to sign up, click here.
physics
https://kmyali.github.io/
2022-11-29T21:45:51
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710711.7/warc/CC-MAIN-20221129200438-20221129230438-00693.warc.gz
0.922231
134
CC-MAIN-2022-49
webtext-fineweb__CC-MAIN-2022-49__0__109217476
en
Research and Design Co-op at Memorial University St. John's | Jun 2019 - Aug 2019
(Photo caption: Inspecting the water sprinkling system inside the walk-in refrigerator in the Thermal Lab)
- During my time at Memorial University's Industrial Outreach Office, I was responsible for delivering two research and design projects on my own.
- The first one involved coming up with experiments to qualitatively describe the de-icing properties of a cleaning product. I enjoyed designing the experiments and solving problems on my feet.
- The second project involved learning and conducting heat transfer analysis on a prototype for design verification. I enjoyed independently applying my calculus knowledge to optimize the model.
physics
https://divacontemporary.org.uk/2015/09/28/associate-artist-mandy-rathbone-lunar-eclipse-2015/
2020-07-05T23:56:54
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655889877.72/warc/CC-MAIN-20200705215728-20200706005728-00591.warc.gz
0.922799
205
CC-MAIN-2020-29
webtext-fineweb__CC-MAIN-2020-29__0__221717726
en
Associate artist Mandy Rathbone was up and about very early again (3am), this time to record the ‘super moon’ lunar eclipse.
A lunar eclipse occurs when the Moon passes within Earth’s umbra (shadow). As the eclipse begins, Earth’s shadow first darkens the Moon slightly. Then, the shadow begins to “cover” part of the Moon, turning it a dark red-brown color (typically – the color can vary based on atmospheric conditions). The Moon appears to be reddish because of Rayleigh scattering (the same effect that causes sunsets to appear reddish) and the refraction of that light by Earth’s atmosphere into its umbra.
The following simulation shows the approximate appearance of the Moon passing through Earth’s shadow. The Moon’s brightness is exaggerated within the umbral shadow. The northern portion of the Moon was closest to the center of the shadow, making it the darkest and most red in appearance.
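The geometry of the umbra can be checked with a short back-of-the-envelope calculation. The sketch below uses mean textbook values for the radii and distances involved and ignores orbital eccentricity and the slight enlargement of the shadow by Earth’s atmosphere, so the numbers are approximate.

```python
# Approximate size of Earth's umbra at the Moon's distance, using mean values.
# Simplified similar-triangles model; ignores eccentricity and atmospheric effects.
R_SUN = 695_700.0      # km, solar radius
R_EARTH = 6_371.0      # km, Earth radius
D_SUN = 149_600_000.0  # km, mean Earth-Sun distance
D_MOON = 384_400.0     # km, mean Earth-Moon distance
R_MOON = 1_737.4       # km, lunar radius

# Length of the umbral cone behind Earth (similar triangles):
L_umbra = D_SUN * R_EARTH / (R_SUN - R_EARTH)

# Radius of the umbra at the Moon's distance:
r_umbra = R_EARTH * (L_umbra - D_MOON) / L_umbra

print(f"umbra length ~ {L_umbra:,.0f} km")
print(f"umbra radius at the Moon ~ {r_umbra:,.0f} km "
      f"(~ {r_umbra / R_MOON:.1f} lunar radii)")
# Roughly 1.38 million km and ~4,600 km: the Moon fits comfortably inside
# the umbra, which is why totality can last for over an hour.
```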
physics
https://wby12p2ks.homepage.t-online.de/wrdprs/?p=497
2022-05-19T17:31:59
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662529658.48/warc/CC-MAIN-20220519172853-20220519202853-00614.warc.gz
0.847946
293
CC-MAIN-2022-21
webtext-fineweb__CC-MAIN-2022-21__0__175527917
en
This is the noise module for my Next Generation Formant project. It combines two original Elektor Formant modules: the Noise module from Elektor Formant book one and the Coloured Noise (CNC) module from book two. It provides a white noise output, a fixed coloured noise output, a variable (“red”/“blue”) coloured noise output and a random voltage output. The noise is derived from the reverse-biased base-emitter junction of an NPN transistor: the noise source is the reverse-biased BE diode of NPN transistor Q1. The following operational amplifiers IC1A and IC1B amplify the noise to 10 Vpp. IC1C is the buffer for the white noise output. The high-pass filter formed by C5/R23 and R13/R19 in the feedback loop of IC1D provides a bass boost for the fixed coloured noise output. IC2B is configured as a 12 dB low-pass filter, so you get a low-frequency random voltage. The rate of change is set with P1A/P1B, which set the corner frequency of the low-pass filter. IC2A / LED1 makes the fluctuation visible; Tr1 adjusts the brightness of LED1. In the feedback loop of IC3B is an adjustable filter combination that gives a wide range of adjustable coloured noise via P1 and P2. The output is buffered by IC3A.
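For anyone who wants to estimate the filter behaviour described above, the corner frequency of a simple first-order RC stage is f_c = 1/(2πRC). The component values in the sketch below are purely illustrative placeholders, not the values from the actual Formant or CNC schematics.

```python
# First-order RC corner frequency: f_c = 1 / (2 * pi * R * C).
# The R and C values here are illustrative only, not the real schematic values.
import math

def corner_frequency_hz(r_ohms: float, c_farads: float) -> float:
    """Return the -3 dB corner frequency of a first-order RC filter."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

examples = {
    "bass-boost high-pass (hypothetical 10 kOhm, 100 nF)": (10e3, 100e-9),
    "random-voltage low-pass (hypothetical 1 MOhm, 2.2 uF)": (1e6, 2.2e-6),
}

for name, (r, c) in examples.items():
    print(f"{name}: f_c ~ {corner_frequency_hz(r, c):.3f} Hz")
# e.g. 10 kOhm / 100 nF gives ~159 Hz, while 1 MOhm / 2.2 uF gives ~0.07 Hz;
# the latter is slow enough to behave as a wandering random control voltage.
```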
physics
http://www.bakerssigns.com/news/a-3d-display-that-speaks-directly-to-you-about-time.html
2019-07-24T03:01:48
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195530250.98/warc/CC-MAIN-20190724020454-20190724042454-00198.warc.gz
0.916397
347
CC-MAIN-2019-30
webtext-fineweb__CC-MAIN-2019-30__0__203301008
en
Huge ultra-realistic outdoor 3D displays without glasses planned for next year
The boundaries of reality are about to dissolve. - January 19, 2015
“Vienna University of Technology (TU Vienna) physicists have designed a radical autostereoscopic (“glasses-free”) laser display that will send different ultrathin laser beams directly to individual viewers’ eyes, with full sunlight readability. The objective: create a realistic 3D illusion that changes as viewers walk or fly around the virtual object, with up to several thousand 3D viewing zones — each zone displaying a different view.”
“Current 3D movies only show two different pictures — one slightly different for each eye. The new display can create hundreds or thousands of pictures — one for each viewing location (or viewer).”
“So if you were walking, driving, or flying by the hypothetical display shown above, you would be seeing the leopard from constantly shifting different angles — even different sides. The display is designed to be amazingly bright, so it can be used outdoors, even in bright sunlight. The researchers expect the second prototype to be finished by mid-2015, with commercial launch scheduled for 2016. And yes, existing 3D movies can be converted into the new format, the researchers say.”
Well, there you have it. It looks like receiving pleas from galactic princesses and time-share salesmen in holographic-like form is about to become as ordinary as video chatting through our cell phones. I have a feeling that messaging while driving is about to take on a whole new dimension.
Article Credit: KurzweilAI.net
Photo credit: TriLite Technologies
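As a rough addendum to the quoted specs above: for each eye to receive a different picture, a beam’s footprint at the viewer must stay smaller than the spacing between the eyes. The sketch below assumes a typical interpupillary distance of about 63 mm; the viewing distances are arbitrary examples, not figures from TriLite.

```python
# How narrow must each beam be so the left and right eye see different images?
# Assumes a typical interpupillary distance of ~63 mm; distances are examples.
IPD_M = 0.063  # metres, typical adult interpupillary distance (assumption)

for viewing_distance_m in (10, 50, 200):
    # Full angle the beam may subtend before it spills onto both eyes at once.
    max_divergence_rad = IPD_M / viewing_distance_m
    print(f"at {viewing_distance_m:>3} m: beam divergence must stay below "
          f"{max_divergence_rad * 1e3:.2f} mrad")
# At 200 m the beams must be narrower than ~0.3 mrad, which helps explain why
# tightly collimated laser sources are attractive for this kind of display.
```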
physics
http://qscitech.ca/en/events/seminaires-industriels-qscitech-en/
2023-06-05T02:17:52
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224650620.66/warc/CC-MAIN-20230605021141-20230605051141-00281.warc.gz
0.928283
469
CC-MAIN-2023-23
webtext-fineweb__CC-MAIN-2023-23__0__33506121
en
The QSciTech industrial seminars series presents diverse enterprises to inspire and expand participant's horizons about new quantum technologies and the canadian quantum industry situation. Each trimester, we welcome a key actor from a canadian quantum technology company. Presentations are in hybrid mode. Contact us at [email protected] to receive an invitation and participate! QSciTech members will have the opportunity to meet Dominic Marchand, PhD, Head of Research and Partnerships, and Jessica Lemieux, Scientist. They will talk about 1QBit's projects and internship opportunities. Date: Wednesday March 24th 2021, 10:30 AM Eastern Time Title: Remote sensing in the quantum era Guest speaker: Jérôme Bourassa, Qubic president (Sherbrooke), quantum remote sensing start-up. Abstract: Microwave remote sensing tools, such as radars, are often used to monitor and secure spaces. Their reliability and precision are crucial in the analysis of situations and in decision making. It is known that the limits of measurement accuracy are dictated by the rules of quantum mechanics and it is known that the use of quantum electromagnetic signals allows substantial gains in sensitivity. At Qubic, we are working to develop a new generation of microwave remote sensing systems, based on the fundamental laws of quantum mechanics and built with superconducting circuits. In this presentation, I will introduce you to the principles behind our innovative technology as well as our most recent efforts to demonstrate the quantum advantage in remote sensing. I will conclude by presenting the long-term vision of our company, as well as the various future opportunities at Qubic. Date: Friday November 27th, 2020, 2:00 PM Eastern Time Title: Internships opportunies at Xanadu Guest speaker: David Asgeirsson, PhD, Manager of Partnerships and IP at Xanadu (Toronto), a company developping an integrated photonic platform and algorithms and softwares for quantum computing. Abstract: This virtual seminar aims to present the main research projects and internship opportunities at Xanadu, as well as the process to apply for an internship. In particular, Xanadu's new residency program will be presented: https://residency.xanadu.ai/.
physics
https://www.stjohnspriory.co.uk/2021/03/30/steam-week/
2024-04-22T17:32:45
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818312.80/warc/CC-MAIN-20240422144517-20240422174517-00376.warc.gz
0.934617
310
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__36672457
en
STEAM Week is an annual celebration of five valuable subjects: Science, Technology, Engineering, Arts and Mathematics. These subjects are integrated across the curriculum, with a view to inspiring and encouraging our pupils to develop their enthusiasm and skill in these areas. This year, we dedicated the week to the wonders of Space, the Mars landing and the planets. Through STEAM, pupils develop their teamwork, independence, reasoning and critical and creative thinking, and we have seen wonderful examples of this across our weeklong learning programme of STEAM tasks.
Little Conkers – Early Years Children – Nursery and Reception
Little Conkers created paper constellations and marble painted planets; participated in Space themed Cosmic Kids yoga sessions; and built space station dens in the Little Conkers garden. Their continuous provision included a small world space station, starry night water play and rockets and astronauts role-play.
Prep School – Years 1 – 6
Children in our prep school held Space and balloon debates; explored a deep space projector looking at constellations, planets and the Moon; made biome Mars habitats out of recyclable materials; painted Space scenes; and worked in teams to design and build Moon buggies out of Lego, testing their speed and distance and recording their results. The week ended with the grand finale of a whole school ‘eggnauts’ drop, which involved the children working in teams to design and make egg capsules and parachutes for their eggs to travel in when dropped from a height.
physics
https://www.diontario.org/single-post/2018/09/10/ed-and-betty-got-married
2024-02-28T13:25:57
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474715.58/warc/CC-MAIN-20240228112121-20240228142121-00253.warc.gz
0.862271
244
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__76182562
en
Ed and Betty Got Married!
A special bit of news: Ed (Treasurer, Scientific Challenge Master and Alumni) and Betty (Instant Challenge Appraiser and Volunteer) got married this summer! And in true DI spirit, they challenged their wedding guests to solve this Instant Challenge at their wedding.
Pillar of Love Instant Challenge
Betty and Ed require you to build a pillar that will support their love. You and your tablemates will construct a structure using the materials provided that will support an object representing their love.
Challenge: Build a structure that will support an object above table level.
Time: You will have 3.5 minutes to build your structure.
Materials: 3 sheets of paper, 4 mailing labels, 3 mini pipe cleaners
One team member will bring the structure and place it on top of the table at the front. Team members will place the object on top of the structure. The height at the top of the object will be measured. 2 points are awarded for each centimetre the object is above the table. The structure must remain standing for the duration of the measurement.
Notes: You may not use the envelope, this sheet of paper, or any materials not listed above.
physics
https://safethaw.com/the-art-and-science-behind-snow-melt-mats-for-driveways/
2023-11-29T15:10:45
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100112.41/warc/CC-MAIN-20231129141108-20231129171108-00843.warc.gz
0.875938
936
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__252498945
en
The Art And Science Behind Snow Melt Mats For Driveways
When the first snow of the season blankets our homes and communities, it brings with it a mix of enchantment and trepidation. While the world appears serene under a layer of snow, homeowners know the challenges that come with clearing and maintaining safe driveways. Enter snow melt mats for driveways: a fusion of art and science that offers a modern, efficient solution to an age-old problem.
Safe Thaw was created as the ice management solution for tough winter environments. It is ideal for commercial and industrial properties, shops, government agencies, bridges, and construction.
A Symphony Of Elements: How Snow Melt Mats Work
At first glance, snow melt mats might seem like simple constructs. But underneath their seemingly modest surface lies a symphony of engineering marvels that harmoniously work together to melt away snow.
- Heat Transfer Principles: Snow melt mats operate on the premise of transferring heat to the snow, effectively turning it into water. The evenly distributed heating coils embedded within ensure that every inch of the mat’s surface is warm, melting snow upon contact.
- Smart Circuitry: The internal wiring is not just randomly placed. It is methodically organized in a grid pattern, ensuring the entire mat heats up uniformly. This intricate web of heating wires is what guarantees the consistent melting capability of these mats.
The Artistic Facet: Design And Aesthetics
Beyond the pure science and engineering of these mats lies an artistic approach to their design.
- Customizable Designs: Modern snow melt driveway mats aren’t just about function; they can be tailored to fit the aesthetic of any home or commercial space, seamlessly blending with surroundings.
- Flexible Formats: Whether you have a narrow pathway or a broad driveway, there’s a design to fit every need. The mats can be interconnected for larger spaces, ensuring every corner is snow-free.
Safety First: Temperature Regulation
Safety is paramount, and snow melt mats excel in this department. Equipped with thermostats and temperature sensors, these mats monitor their heat levels. The goal? To maintain an optimal temperature that’s just right for melting snow but safe for human touch.
Material Matters: Durability Meets Efficiency
The external layer of the mats is crafted to withstand the weight of vehicles and resist the harshest of winter conditions. But there’s more:
- Reflective and Insulative Layers: These internal layers are the unsung heroes. They ensure that the heat is directed upwards to melt the snow, while also insulating the mat from the cold ground below.
- Weather-Resistant and Long-Lasting: The choice of materials guarantees that the mats will serve homeowners for several winters, making them a worthy investment.
The Safe Thaw Advantage
Amidst the marvel of snow melt driveway mats, there will be instances when ice forms around the edges or in areas the mats do not cover. Safe Thaw, a chemical- and toxin-free industrial-strength ice melt, comes to the rescue.
Why Safe Thaw Stands Out:
- Eco-Friendly: Made without harmful chemicals, it’s a green solution for icy problems.
- Versatile: Whether it’s concrete, pavers, or other surfaces, Safe Thaw doesn’t discriminate. It works efficiently without causing any damage.
- Power-Packed Performance: Its industrial strength ensures rapid ice melting, making driveways safer faster.
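Before wrapping up, here is a rough back-of-the-envelope estimate that puts the heat-transfer idea above in perspective: how much energy it takes to melt a fresh snowfall, and how long a mat might need to deliver it. The snow depth, snow density, snow temperature, and mat power density below are assumptions chosen only for illustration (real mats and real snow vary widely), and heat losses to the air and ground are ignored.

```python
# Back-of-the-envelope: energy to melt fresh snow on 1 m^2 of driveway.
# All input values are illustrative assumptions; heat losses are ignored.
SNOW_DEPTH_M = 0.05          # 5 cm of fresh snow (assumption)
SNOW_DENSITY = 100.0         # kg/m^3, light fresh snow (assumption)
SNOW_TEMP_C = -5.0           # snow temperature (assumption)
MAT_POWER_W_PER_M2 = 400.0   # heating power density of the mat (assumption)

C_ICE = 2100.0               # J/(kg*K), specific heat of ice
L_FUSION = 334_000.0         # J/kg, latent heat of fusion of water

mass = SNOW_DEPTH_M * SNOW_DENSITY         # kg of snow per square metre
warm = mass * C_ICE * (0.0 - SNOW_TEMP_C)  # energy to warm the snow to 0 C
melt = mass * L_FUSION                     # energy to melt it
energy = warm + melt                       # J per square metre

hours = energy / MAT_POWER_W_PER_M2 / 3600.0
print(f"energy ~ {energy / 1e6:.2f} MJ/m^2 (~ {energy / 3.6e6:.2f} kWh/m^2)")
print(f"melt time at {MAT_POWER_W_PER_M2:.0f} W/m^2 ~ {hours:.1f} h (no losses)")
# With these numbers: about 1.7 MJ (~0.48 kWh) per square metre and a bit over
# an hour of heating, before accounting for losses to wind and the ground.
```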
100% salt- and chloride-free, fast-acting ice management solution.
Wrapping It Up: Melding Art With Science
The harmonious integration of artistic design and scientific innovation in snow melt driveway mats epitomizes modern ingenuity. When paired with the unparalleled power and eco-friendliness of Safe Thaw, homeowners have a winning combination for wintertime driveway maintenance. As the snow continues to fall, take a moment to appreciate the art and science at your feet, keeping your path clear and safe.
Try Also Our Other Winter Safety Products:
- Safe Paw: The original and #1 selling pet- and child-safe ice melt for over 20 years. Guaranteed environmentally safe – it won’t harm animals or children, and it won’t damage your property. Safe Paw can change how winter affects our planet.
- Walk On Ice: The handy disposable canister can be taken everywhere, with the same 100% naturally occurring minerals that provide instant traction on ice or snow. Use it on sidewalks, steps, or as an instant traction agent for your car.
physics
http://www.minster-ramsgate.kent.sch.uk/science-2/
2019-03-22T17:23:56
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202672.57/warc/CC-MAIN-20190322155929-20190322181929-00308.warc.gz
0.980726
473
CC-MAIN-2019-13
webtext-fineweb__CC-MAIN-2019-13__0__112563699
en
A great first day for Science Week in Year 2. We were finding out about the inventor and engineer, Isambard Kingdom Brunel. We used his ideas about bridges to investigate which biscuit could take the most weight between two blocks. We made predictions, tested weights, recorded estimates and results in a table and wrote a conclusion. There were lots of different ideas and surprises with the pink wafer biscuit! Purple Class have been wonderful Scientists today on their first day of Science and Technology week. This afternoon they investigated which shapes would be effected the least by water resistance. Using this knowledge, they designed a boat that had to be streamlined. We had a few design problems in that our boats didn't float as well as we would have liked but like true scientists, the children were able to explain why by referring to the forces of buoyancy and the effect of weight and mass. Purple Class were wonderful. We can't wait for the rest of the week now! Year 2 children have been asking the question in Science, 'How are animals in Antarctica able to survive in ice and snow?' We then went on to look at insulation and which materials helped the ice cube to last the longest. Year 5 visited the Dreamland Education Centre to learn about plastic from real scientists! Discovery Planet hosted a workshop to help us work scientifically, investigating the different properties of plastic and testing unknown materials. We will be visiting again to learn about the future of plastics and the role of chemistry later this month. Year 5 have been studying materials and the different changes that can happen to them. We have been investigating what happens when bicarbonate of soda mixes with vinegar. Does it react? What chemical changes take place? Is the change reversible or irreversible? Year 4 are learning about 'Living Things' in Science this term. We conducted a survey to find out what living things live in our school grounds. We will be using this data to create graphs in Maths. Science Week certainly went off with a bang, as we were joined by two scientists from Pfizers. The whole school gathered in the hall with excitement, wearing scientific goggles to really get us into the mood. Here we were all wowed by fire, explosions and some really fantastic technology. These photos show some of the highlights of the show. We hope you enjoy it!
physics
https://almostinfamous.wordpress.com/2008/10/28/nuclear-fusion-brought-to-you-by-3m/
2022-12-01T19:21:59
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710869.86/warc/CC-MAIN-20221201185801-20221201215801-00604.warc.gz
0.956191
282
CC-MAIN-2022-49
webtext-fineweb__CC-MAIN-2022-49__0__206571601
en
what the hell are they putting in their tape? In the current issue of the journal Nature, Dr. Putterman and his colleagues report that surprisingly fierce flows of electrons were unleashed as the tape was unpeeled and its gooey adhesive snapped free of the surface. The electrical currents, in turn, generated strong, short bursts of X-rays — each burst, about a billionth of a second long, contained about 300,000 X-ray photons. “Some kind of microscopic lightning effect,” Dr. Putterman said. The scientists even demonstrated that the X-rays were bright enough to take an X-ray of a finger. That does not mean that tape dispensers on office desks are mini X-ray machines. The phenomenon has been observed only when tape is unpeeled in a vacuum. Something about air, moisture perhaps, short-circuits the X-rays. The tape phenomenon could also lead to simple medical devices using bursts of electrons to destroy tumors. The scientists are looking to patent their ideas. Finally, there is the possibility of nuclear fusion. If energy from the breaking adhesive could be directed away from the electrons to heavy hydrogen ions implanted in modified tape, the ions would accelerate so that when they collided, they could fuse and give off energy — the process that lights the sun.
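For a sense of scale, the reported 300,000 photons per roughly nanosecond-long burst can be converted into energy and peak power. The photon energy used below (~15 keV, about the scale needed to radiograph a finger) is an assumed illustrative value; the excerpt itself does not state it.

```python
# Scale of one X-ray burst: 300,000 photons in roughly 1 ns.
# The ~15 keV photon energy is an assumed illustrative value, not from the article.
PHOTONS_PER_BURST = 300_000
PHOTON_ENERGY_EV = 15_000        # eV (assumption)
BURST_DURATION_S = 1e-9          # about a billionth of a second
EV_TO_J = 1.602_176_634e-19      # joules per electronvolt

energy_j = PHOTONS_PER_BURST * PHOTON_ENERGY_EV * EV_TO_J
peak_power_w = energy_j / BURST_DURATION_S

print(f"energy per burst ~ {energy_j:.2e} J")
print(f"peak X-ray power ~ {peak_power_w:.2f} W")
# Under these assumptions each burst carries well under a nanojoule, yet the
# nanosecond timescale pushes the instantaneous X-ray power close to a watt.
```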
physics