url (string, 15 to 1.48k chars) | date (timestamp[s]) | file_path (string, 125 to 155 chars) | language_score (float64, 0.65 to 1) | token_count (int64, 75 to 32.8k) | dump (string, 96 classes) | global_id (string, 41 to 46 chars) | lang (string, 1 class) | text (string, 295 to 153k chars) | domain (string, 67 classes) |
---|---|---|---|---|---|---|---|---|---|
https://anthonyarms.com/detectors/magnet-fishing/ | 2023-09-23T11:33:17 | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506480.7/warc/CC-MAIN-20230923094750-20230923124750-00342.warc.gz | 0.94746 | 5,897 | CC-MAIN-2023-40 | webtext-fineweb__CC-MAIN-2023-40__0__47521504 | en | Magnet fishing is the new hobby taking the UK and US by storm! So I decided to write a beginner's guide to magnet fishing while I start out on my magnet fishing adventures. I have called it the beginner's guide because that is exactly what I am: a beginner.
I have recently started a wonderful hobby of magnet fishing, alongside my other wonderful hobby of metal detecting. So as I learn all about magnet fishing, and start this journey, I wanted to teach you my findings and research and give you a few magnet fishing tips along the way. As I learn more I will of course update this guide.
If you have any great tips to share or recommendations then please feel free to add them in the comments at the bottom of this page. This guide is good for those who want to magnet fish in the UK but also those from Europe and the US too. Obviously, though laws may be different depending on where you are so please be sure to check first.
My main mission for this beginner's guide to magnet fishing is to help you get started too!
- What is magnet fishing?
- How do I get started magnet fishing?
- Where can I go magnet fishing?
- What are the magnet fishing laws?
- How to tie a knot for magnet fishing
- How to set up for magnet fishing
- Magnet fishing techniques
- How do I clean my magnet fishing finds?
- Best Types of Magnets For Magnet Fishing
- Neodymium magnets vs Ferrite magnets for magnet fishing
- Magnet Fishing Safety
- FAQ about Magnet Fishing
What is magnet fishing?
Magnet Fishing is the act of fishing with a very powerful magnet attached to a rope. Not trying to catch fish though, we are trying to attract metal objects to the magnet.
Over the years many metal items have been disposed of or dropped into many different bodies of water like rivers, lakes or streams. These metal items include historic relics, modern-day gadgets, and many more interesting metal items.
In recent years Magnet fishing has become increasingly popular and more and more people are getting started in this wonderful hobby every day.
How do I get started magnet fishing?
To get started in the magnet fishing hobby you’re not going to need a lot. It’s an inexpensive hobby. There are of course accessories you can add to your arsenal. Listed below are the things you need for magnet fishing, some are mandatory and some are optional.
What you will need for magnet fishing
Strong Magnet (mandatory)
There are many options when choosing a strong magnet for magnet fishing.
You will need a magnet that has a strong eyebolt to attach the rope to (don't worry, we recommend some magnets further on in this guide).
After extensive research as a beginner, I would suggest a Neodymium Magnet with an Eyebolt.
Most Neodymium magnets will be fine, but it all depends on what you want to pull out. Different strengths and sizes are available, with some people even going for dual magnets!
Strong Rope (mandatory)
There are two types of rope that are highly recommended: nylon and paracord. These are considered the best ropes for magnet fishing.
The more powerful the magnet, the stronger the rope you will need. The rope that I have linked to above is probably the highest rated and strongest available.
Gloves (optional)
These will add protection to your hands; metal can be sharp and rusty. You don't want to slice your hands open and cut your day short. It's also a good way of keeping your hands dry and clean.
Magnet fishing can be a dirty hobby. You can get some great cut-resistant waterproof gloves!
A bag or bucket (optional)
You don’t want to lose your precious magnet fishing finds. A heavy duty bucket or bag will allow you to store your items without the worry of losing them from pockets etc
A cloth (optional) & Hand Sanitizer (optional)
Magnet fishing can be very messy; a cloth and some hand sanitizer will allow you to dry and clean your hands.
Threadlock (optional)
Threadlock is an extra safety measure to prevent you from losing your magnet. It is possible for the eyebolt that is threaded into the magnet to work loose. A heavy-duty threadlock will prevent this from happening.
Penknife or Scissors (optional)
Obviously, you will need to check the laws on carrying such items, but a small penknife or pair of scissors should be fine. These can be very handy if your rope is tangled up with fishing wire or other rope.
Where can I go magnet fishing?
Generally speaking, the same rules apply as they do in metal detecting, all land is owned by somebody and so are all lakes, rivers and streams. Which ultimately means if you want to ensure you aren’t breaking any laws or trespassing you will need to get permission.
Try to look for a busy area, somewhere a lot of people may have come and gone in the past, where people could have potentially dropped or lost things.
Magnet Fishing Rivers
Rivers are a great place to go Magnet fishing. You may still need permission from whoever owns the part of the river you want to go magnet fishing on.
Rivers can be a very good place to go because over time many things will have been disposed of into the river and there have been many fishermen by the river. Try to find a good bridge to magnet fish off.
Magnet Fishing Lakes
Lakes are another good choice, same principles apply, make sure you have permission.
Again things may have been intentionally thrown into the lake over the years, people fish lakes, and perhaps even swam the lake.
Magnet Fishing at the Beach
The beach is perfect if you are struggling to get permission. The same laws apply as they do to metal detecting on a beach. Most beaches belong to the Crown Estate; not so long back you could apply for a simple permit to give you permission, however that has now been discontinued and you no longer require a permit. Just check that the beach you want to go to is Crown owned.
Over the years there have been a lot of people on the beach: holidaymakers, fishermen, dog walkers and so on, so there are potentially some lost metallic items on the seabed. The only downside is that many could have sunk too far into the sand. There is definitely still potential to find things, especially if you can find a good spot off the pier.
Magnet Fishing Canals
Canals are a great place to magnet fish largely due to the traffic up and down over the years, many things have been dropped or thrown into the canals.
Magnet Fishing Streams
Magnet fishing streams are perhaps one of the most simple places to enjoy this hobby, there’s also a lot of history in many streams.
Your finds may not be as regular but nevertheless it's a nice place to go; grab your wellies and off you go. Because streams are often so shallow, they're a great place to take the kids so they can safely enjoy the hobby too!
What are the magnet fishing laws?
At the moment there are no specific magnet fishing laws. That being said, I believe they would follow the same principles as the metal detecting laws and rules, which in a nutshell means that if you want to follow the proper legal channels then permission from the owner would be required. Yes, all rivers, canals and streams have owners.
Although there have been no recorded cases of breaking the law while magnet fishing I would still recommend you research the area you would like to magnet fish, and contact the owner for permission.
The one thing I could not find anywhere and I spent a lot of time looking was somewhere that would offer a public liability insurance for magnet fishing. If anyone knows the answer to this or can offer some advice as to where one could obtain such a thing then please let me know in the comments below.
Is magnet fishing legal?
Yes, magnet fishing is legal and there are no laws against magnet fishing as of 2023.
I have recently seen some text from the canal river trust that explains they “don’t allow” magnet fishing.
In a nutshell, they state that it is very dangerous, with people pulling out WW2 hand grenades and live ammunition. So they do not allow magnet fishing in the waterways.
That being said it mentions nothing of legalities, and it hasn’t stopped anyone from continuing this strange but wonderful hobby. At present, there have been no prosecutions nor have there been any reported circumstances of anyone being told to stop by any authorities.
So is magnet fishing legal? Yes, magnet fishing is legal and there are no laws against magnet fishing as of 2023.
Magnet fishing code of conduct
I couldn’t actually find this anywhere so I decided to write up what I believe would be a good guideline.
- No Trespassing, Always ensure you have permission from the landowner to magnet fish there.
- Report all finds of historical importance and those defined under the treasure act.
- Report anything suspicious to the police or local authorities.
- Leave all areas tidy; do not leave unwanted finds or items lying around. Dispose of them correctly and do not throw them back into the water.
- Respect the country code.
- Record findspots as accurately as possible to assist with archaeological research etc
How to tie a knot for magnet fishing
There are, believe it or "knot", a variety of different types of knots that serve different purposes. You are going to want to choose the right one for the task at hand. When magnet fishing, items that are attracted to the magnet can be very heavy or large, so we will want to choose a knot that is reliable and strong.
What Knot Should I use when Magnet Fishing?
There are literally hundreds of ways you can tie a knot. Again after extensive research, the most popular of all Magnet fishing knots is called the Palomar Knot. You can, of course, use other knots and some you may find simpler and more effective. It’s all about preference. I personally use the Palomar Knot and recommend it.
How to tie a Palomar Knot
A highly recommended knot type when magnet fishing is the Palomar Knot. The following video gives you a step by step guide on how to attach the rope to the eyebolt of the magnet. The best thing about the Palomar knot is that you have 2 loops that go through the eyebolt so if one wears thin and snaps, you won’t lose the magnet as you will still be attached.
How to set up for magnet fishing
Once you have mastered how to do a knot you are comfortable with and you are sure it will be secure. It’s time to get set up!
- First, you need to cut a long length of your rope/cord, ensuring that you have more than enough to throw into the water and to be able to hold it securely.
- Tie the knot you are most comfortable with through the inside of the eyebolt of the magnet.
- If you have threadlock, apply it to the thread of the eyebolt and screw the eyebolt into the magnet.
- Test by throwing the magnet into an open area, checking that the magnet is secure.
- Once all is secure, you can throw into the body of water.
An extra tip to protect your magnet is to put an old pop (soda) bottle over it.
Magnet fishing techniques
To be honest with you there is no real skill involved. I personally feel doing what works best for you is the key. As you get out more and practice you will find your way.
For a complete beginner though you probably wonder how it actually works so here’s just a couple of simple methods I use. The best technique will also change depending on the area you are magnet fishing in and the obstacles in your way.
- Throw in and pull back:
A very basic and simple method and technique. Simply throw the magnet into the body of water, let the magnet sink and slowly pull back.
- Throw in, walk and pull back
For this method, you will throw in the magnet and wait for it to hit the bed at the bottom of the water, as you are pulling it in walk parallel to the magnet. A good tip to cover more ground is to try to drag the magnet from side to side as you are pulling it in.
How do I clean my magnet fishing finds?
The majority of your magnet fishing finds are going to be covered in rust. Sometimes they are so covered that they are unidentifiable.
So you’re going to need a safe way to clean and clear some of that rust away. Here are a few good techniques that may help without doing irreversible damage to the find.
Please note there are many other ways of cleaning rust off your finds. These are, in my opinion, the safer methods. If you believe that you have found something of importance and are concerned you may damage the item, do not clean it. Seek expert advice.
Once you have removed your metal finds from the water there is a very good chance that the item's deterioration rate will increase due to oxidation. If you want to slow this process down, the item should be stored in a very dry place with no oxygen (an airtight container of some sort).
Lemon Juice & Salt
The natural acid in lemon juice works very well as a natural cleaner. You will find that some well known popular cleaning products contain lemon juice.
To use this method you will need to rub salt all over the areas of rust that you want to clean, once you have coated the area, take half of a lemon and squeeze the juice all over the salt. Leave to soak for a couple of hours. Then scrub it off with a scouring pad or steel wool. If you want to cause fewer scratches I would recommend using a cloth or soft bristled brush.
Baking Soda
To use baking soda to clean rust you will need to make a paste from the baking soda and water. Simply apply the paste to the rusted area and leave to sit for a couple of hours.
Use either a scouring pad or steel wool to rub off the paste. If you are concerned again about scratches use a soft bristled brush or cloth.
Vinegar
Vinegar is probably the best natural cleaner available. To remove rust using vinegar you will need to let your magnet fishing find soak in a bowl of vinegar overnight. You then scrub the item with a scourer or wire wool. Again, use a cloth or soft brush if you are concerned you might damage the item.
You won't be able to soak larger magnet fishing finds in a bowl of vinegar; a good method is to soak a rag in vinegar and wrap it around the item.
Best Types of Magnets For Magnet Fishing
Neodymium Magnets for Magnet Fishing
If you’re a beginner to magnet fishing you’ve probably by now heard of Neodymium magnets, these are also known as “rare earth magnets” and are the most popular magnets for magnet fishing.
What is a Neodymium Magnet?
Sometimes known as rare earth magnets or Super magnets, Neodymium magnets are the most powerful permanent magnet in the world.
The smallest of neodymium magnets can lift over 1000 x their own weight.
Neodymium magnets transmit magnetic fields that attract ferrous metal items from a great distance.
The rare earth magnets are used for many things, by many people including magicians with some magic tricks. They can attract ferrous metal even through a human finger.
They were created in the 1980’s and were used to help develop things like speakers, motors and power tools ultimately allowing the manufacturers to create much smaller products.
Neodymium magnets are made from an alloy of neodymium, iron and boron.
Due to the sheer power of Neodymium magnets, they are the best magnets for magnet fishing.
Neodymium Magnet Grades
Neodymium magnet grades are determined by the material they are made from.
The easiest way for me to explain grading would be: the higher the number that follows the N in the grading, the stronger the magnet.
N52 is currently the highest grade you are likely to see on sale. The number that follows the N refers to the magnet's maximum energy product (BHmax, quoted in MGOe), which is a measure of how much magnetic energy the material stores.
Where does temperature come in? Some grades carry an extra letter suffix after the number (N42SH, for example); this suffix indicates the maximum operating temperature the magnet can withstand before it starts to lose its magnetic properties.
Neodymium magnets size & pull power
There are many Neodymium magnets available in a variety of strengths. These strengths are measured in "pull power". Typically, the bigger the magnet, the bigger the pull power. Before you choose your magnet and its pull strength you will have to decide on what items you are prepared to recover from the water.
If you want to be able to pull out bikes, safes and other large heavy metal items you will need a bigger Neodymium magnet with higher pull power, you’ll also want a stronger rope.
If you are only wanting to pull out smaller items then you should go for less pull power.
Pull-power is measured in lbs or kgs.
So if you see 1000lbs or 453 kgs pull power on a magnet then I believe this is the largest and strongest of magnets for magnet fishing.
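As a quick sanity check on those units (this is just my own arithmetic on the figures quoted above, not a manufacturer specification), the pound and kilogram ratings are the same force expressed two ways:

```latex
1000\ \text{lb} \times 0.4536\ \tfrac{\text{kg}}{\text{lb}} \approx 453.6\ \text{kg}
```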
Where can I buy Neodymium magnets?
You can buy neodymium magnets online from both Amazon and eBay and have them delivered straight to your door.
Both of these marketplaces are safe to purchase from and have buyer protection so you can rest assured your Neodymium magnets will be the best quality and will be delivered safely.
Ferrite Magnets for Magnet Fishing
The second type of magnet I would recommend is a Ferrite magnet. These are not as powerful as the Neodymium magnets but are a cheaper alternative. You can see the differences between the two further down this post.
They are a popular choice and are also one of the best magnets for magnet fishing.
What is a ferrite magnet?
A Ferrite magnet is also referred to as a ceramic magnet or hard ferrite magnet. They are electrically insulating and exist in two different forms: strontium ferrite and barium ferrite. The more common of the two types is strontium.
They are usually dark grey in colour and resemble pencil lead. You’ve probably seen this type of magnet at school or on the back of a fridge magnet.
Ferrite magnets are a popular choice of magnets because they are free from corrosion and last a very long time. They also have a high-temperature tolerance and do not lose strength in temperatures up to 250 degrees Celsius, sometimes higher.
The Ferrite magnets are typically cheaper than other types of magnets as they do not include any rare earth materials. Usually, they are made from Iron oxide and barium or strontium carbonate.
Ferrite magnet grades
Originally the grades of ferrite started with the letter C, and this is still sometimes used in the USA and UK. They are more commonly graded with the letter Y today.
The Y is an identifier for a ceramic magnet or hard ferrite magnet. The letter Y is followed by a number representing the BHmax energy product. There are currently 27 grades of ferrite magnet.
Confused yet? I know I am.
The most popular grade of ferrite magnet is Y30.
Ferrite magnets size & pull power
Again, as with Neodymium magnets, Ferrite magnets are available in different strengths, and these strengths are also measured in pull power. Unlike Neodymium, ferrite magnets are not nearly as strong, so you will need a bigger ferrite magnet to achieve higher pull powers.
If you are magnet fishing and you don’t have a big budget and you’re not interested in pulling out really large items then ferrite magnets are a good choice.
To achieve a higher pull power custom rigs can be made by combining two or more ferrite magnets.
Pull-power is measured in lbs or kgs.
I believe from my research approximately 130kg pull power is the maximum size ferrite magnet you can get.
Where can I buy ferrite magnets?
You can buy ferrite magnets online from Amazon and eBay and have them delivered straight to your door. You can also purchase magnets from some reputable magnet retailers like first4magnets.
Neodymium magnets vs Ferrite magnets for magnet fishing
The first difference between Neodymium and Ferrite magnets is the price. Ferrite magnets are much cheaper than Neodymium magnets because of the material they are created from. Ferrite magnets do not contain any rare earth materials like Neodymium magnets do.
The second is working temperature: ferrite magnets work fantastically between -40 degrees and 250 degrees Celsius. Most neodymium magnets would lose their magnetic properties at the higher end of that range. However, temperatures on the lower end will not affect neodymium magnets; low temperatures can actually increase their performance and power.
Appearance and material is the third difference, a ferrite magnet is dark grey in colour and can break into pieces when repeatedly put under strain. Neodymium magnets are more attractive, they are coated in nickel-copper and are silver in colour, they are also rather brittle and can crack.
Both are good magnets for magnet fishing, if you are looking at pulling large items from bodies of water then I personally would choose a neodymium magnet. However, if you’re looking at pulling just smaller items from the water then ferrite magnets will suffice.
Magnet Fishing Safety
In light of the recent tragedy and deaths of the magnet fishing father and son, I have decided to add a small section on the importance of safety while partaking in the hobby.
Generally speaking, providing you use common sense and some extra caution magnet fishing is a fun and safe hobby, however, there are of course a few small things you should be aware of.
There are of course dangerous objects in the deep waters, these could include unexploded bombs from the world wars.
There are weeds and other things in the water that could potentially tangle your magnet or if you fell in YOU!
The banks and edges can be slippery and dangerous.
Muddy riverbanks are prone to Weil’s disease (leptospirosis)
Wellington boots are a danger in deep water!
Magnet fishing safety tips:
- Never tie a magnet to yourself, the magnets are powerful and will attract heavy large objects.
- If your magnet gets stuck or tangled do not jump in to free it. Is losing a magnet worth risking your life? Many magnet fishermen will tell you that they have done this without any kind of danger, however, once you are in the water, no matter how good a swimmer you are or how shallow the water is, the risks and dangers increase. There are other ways of retrieving a magnet safely, for example using a "come-along" cable puller.
- Wear gloves, these won’t just protect you from cuts, and damage from the metal you are finding, these will also protect you from rope burn and Weil’s disease. If you’re going to have a sandwich for lunch wash your hands thoroughly with a hand sanitizer beforehand.
- Assess the area you are magnet fishing in first, look out for signs of danger, for example, areas where you could fall in, mud sliding, traffic behind you if you are fishing off a bridge.
- I’d also recommend carrying a small portable first aid kit with you.
- You could also go as far as wearing a life jacket/belt. The only downside would be the limited mobility while wearing one, so you could even look at buoyancy jackets. These are not “life-saving” aids, but they are designed to help you float should you fall in and could potentially save your life.
FAQ about Magnet Fishing
What is a neodymium magnet?
A Neodymium magnet is the most powerful type of permanent magnet available per unit volume. These magnets are sometimes also called "rare earth magnets". The manufacturing process is very complex, involving vacuum melting, milling, pressing and sintering. The magnets are then broken down by slicing into smaller sized magnets or ground down using grinding tools.
Is Gold Magnetic?
Simply put pure gold is not magnetic. So don’t expect hoards of treasure to attach itself to your magnet. That doesn’t mean though that the container holding the gold isn’t magnetic….you catch my drift.
What will I find Magnet Fishing?
There is potential to find just about anything metallic. People have found many interesting items including coins, knives, safes, bikes, bullets, tools, cannon balls, guns & pistols and swords; someone even found a WWII Enigma cipher machine! The list is endless and this is why magnet fishing is becoming increasingly popular!
A great magnet fisher to follow on YouTube is Gareth – He records his magnet fishing finds and has discovered everything from Crossbows to Guns!
How much rope do I need for magnet fishing?
You will need in my opinion approximately 20 – 30 meters of rope. Anything more is overkill and could increase the chance of you getting tangled up. A good way to test if you need more rope is to throw your magnet and rope as far as you can if you can throw further than the amount of rope you have then you may want to get a longer rope.
What do I do if my magnet gets stuck?
I saw a post somewhere on the internet forums not so long ago about a magnet fisherman who had got his mega powerful magnet stuck to the metal bridge. He was looking for tips on how to get it unstuck. So I thought I would just include this information here.
There are 3 suggestions that were made that seem to be the most popular techniques for getting your magnet unstuck from an unwanted location.
Get a wooden wedge and use it to pry between the magnet and unwanted metal.
A rubber mallet, which is a more aggressive approach. Please be aware some people would believe that the impact can help demagnetise the magnet, whereas others believe that it would take a hell of a lot more force to have any effect on the magnetism. Use this approach at your own risk.
A large flat-head screwdriver or pry bar: put it through the eyelet and then push/pull upwards.
How do I clean my magnet?
To ensure your magnet lasts for a long time and to decrease the chance of it losing power it’s probably a good idea that you give it a clean every now and then.
You can use an old toothbrush to give it a going over every few throws; then a quick dry off with an old rag will be more than sufficient. This will get rid of any of the iron sand or crap that builds up.
Generally speaking, this hobby is relatively simple. You will find your own way after you get out there and practice. There is no right or wrong way to do things. I suggest you be aware of safety precautions and act responsibly when near to the water.
When removing metal items from the magnet as well you should always be careful, remember the magnet will try to pull the metal back to it, you do not want to get your fingers in the way.
Get permission if it is needed for the place you wish to magnet fish on.
Take your time and have fun. Magnet fishing is very exciting. As I am learning more and getting out more, I’m really starting to find the hobby intriguing. Some of the things you find really are exciting.
Metal detecting will always be my number one hobby, however, Magnet fishing is fantastic for those who can’t afford a metal detector. It’s a very cheap hobby to start and very simple.
If I were you I would also visit YouTube there are many magnet fishers out there that have YouTube channels and record their findings, adventures and best tips! | physics |
http://qcsi.fi/instrument-helsinki/ | 2023-06-01T19:58:52 | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648000.54/warc/CC-MAIN-20230601175345-20230601205345-00537.warc.gz | 0.836175 | 764 | CC-MAIN-2023-23 | webtext-fineweb__CC-MAIN-2023-23__0__230456121 | en | Contribution of the University of Helsinki to the infrastructure includes construction of a fast multiplex coherent Raman imaging instrument at the Department of Chemistry. At present such instrument is not yet commercially available but has been demonstrated in leading analytical laboratories (e.g. at Harvard University, Purdue University) to have enormous potential in the material and life sciences to provide chemical and supramolecular insights not revealed using established state-of-the-art analytical methods. The instrument will be made openly accessible to users through an online reservation system.
Fast multiplex coherent Raman imaging is a unique technology enabling label-free, global, quantitative, and non-destructive submicron 3D imaging of diverse and dynamic samples, in both the material and life sciences. The technique is based on vibrational spectroscopy via inelastic Raman scattering. Compared to the classical spontaneous Raman effect, the coherently driven transitions result in an enhancement of up to 10⁵ in signal intensity. This overcomes the biggest analytical barrier with conventional state-of-the-art Raman mapping: painfully slow image acquisition times of up to hours or days. Furthermore, the spatial resolution is improved and one-photon fluorescence interference is also avoided (especially important for life sciences). Standard coherent Raman microscopes are very fast (video-rate imaging), but operate in narrowband spectral mode corresponding to a single vibrational transition. This severely limits the ability to resolve different chemical species and supra-molecular structures. The multiplex coherent Raman imaging instrument will instead allow a broad spectral coverage (currently 1100-3200 cm⁻¹), enabling fast quantitative imaging of chemically and structurally complex systems. Inherently confocal non-contact sampling and the lack of water signal make the technique ideal for 3D imaging in situ/in vitro/ex vivo of e.g. cells, tissues, microfluidics.
The setup will combine multiple complementary modalities. The main focus is on stimulated Raman scattering (SRS) with the spectral focusing technique, which allows fast frame-by-frame measurement of vibrational spectral images. This will be complemented by coherent anti-Stokes Raman scattering (CARS), sum frequency or second harmonic generation (SFG/SHG), two-photon excited fluorescence (TPEF), and normal confocal fluorescence imaging to further increase chemical and structural specificity. The non-destructive analyses also enable complementary correlative coupling with, e.g., mass spectrometry for metabolomics, proteomics, and reaction monitoring, and transmission and scanning electron microscopy for further structural analysis.
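For orientation, the basic frequency bookkeeping behind SRS and CARS can be written down in a couple of lines; the pump wavelength used in the comment is purely illustrative and is not a specification of the planned instrument:

```latex
\Omega_{\text{vib}} = \omega_{\text{pump}} - \omega_{\text{Stokes}}, \qquad
\omega_{\text{CARS}} = 2\,\omega_{\text{pump}} - \omega_{\text{Stokes}}
% Illustrative numbers only: an 800 nm pump (12,500 cm^{-1}) probing a
% 3000 cm^{-1} C-H stretch needs a Stokes beam at about
% 12,500 - 3000 = 9500 cm^{-1}, i.e. a wavelength of roughly 10^{7}/9500 ≈ 1053 nm.
```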
Further reading on coherent Raman imaging:
Evans, C. L., & Xie, X. S. (2008). Coherent anti-Stokes Raman scattering microscopy: chemical imaging for biology and medicine. Annual Review of Analytical Chemistry, 1, 883-909.
Cheng, J. X., & Xie, X. S. (2015). Vibrational spectroscopic imaging of living systems: An emerging platform for biology and medicine. Science, 350(6264).
Tipping, W. J., Lee, M., Serrels, A., Brunton, V. G., & Hulme, A. N. (2016). Stimulated Raman scattering microscopy: an emerging tool for drug discovery. Chemical Society Reviews, 45(8), 2075-2089.
Fu, D., Holtom, G., Freudiger, C., Zhang, X., & Xie, X. S. (2013). Hyperspectral imaging with stimulated Raman scattering by chirped femtosecond lasers. The Journal of Physical Chemistry B, 117(16), 4634-4640. | physics |
https://topazfiresystems.com/water-mist-system.html | 2023-02-08T04:18:22 | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500671.13/warc/CC-MAIN-20230208024856-20230208054856-00700.warc.gz | 0.883325 | 196 | CC-MAIN-2023-06 | webtext-fineweb__CC-MAIN-2023-06__0__113052171 | en | Water mist system optimises the quantity of water used, maximising effective water volume by means of distribution of droplets of very tiny size, which produce a cooling effect. Thus, damage is reduced by reduction of volume of water used.
Water mist systems offer safe, effective and environmentally conscious fire suppression. Low and high pressure systems operate at working pressures from 8 bar to 200 bar, with specially designed discharge nozzles that spray water in tiny droplets. This results in a fog that provides a highly effective combination of extinguishing actions.
Water mist owes its extinguishing efficacy to the joint action of three main effects:
Smothering: the vapour generated displaces an equivalent volume of oxygen, thus producing a smothering effect.
Cooling: water spray in droplets of micrometric size produces a very large heat-absorbing surface.
Attenuation: Mist generated in the enclosure absorbs a great amount of radiant heat, thus protecting adjacent objects. | physics |
http://michaeljacksonontrial.blogspot.com/2009/11/what-galileo-and-newton-were-listening.html | 2017-04-29T05:27:06 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123276.44/warc/CC-MAIN-20170423031203-00595-ip-10-145-167-34.ec2.internal.warc.gz | 0.982169 | 430 | CC-MAIN-2017-17 | webtext-fineweb__CC-MAIN-2017-17__0__278937266 | en | Sunday, November 15, 2009
What Galileo and Newton were listening to
This is of course what Galileo and Newton (and also Tycho Brahe and all the rest of the astronomers) listened to when they worked out how it was that the Solar system worked. No, really, they were, even if they hadn't been invented yet: they were listening to the music of the spheres rather than these specific music of the spheres wind chimes of course. For that's what the whole phrase refers to: the spheres being the planets circling in their orbits and the music being the cosmic soundtrack, the notes that are played out as God's creation goes through its turns.
That's the theory anyway, and then we take a further step, which is that the music of the spheres chimes bring you somehow closer to realising the vision of that creation by listening to them. That's a part of it that has never really quite seemed to hold up to logical analysis I have to admit but that is the story and we'll stick to it. So the thought is that by listening to the soundtrack of the universe we'll be able to better understand that universe and even possibly our part in it.
True, the music of the spheres wind chimes don't have quite the beat or insistency of something like Limp Bizkit and certainly don't approach the sublime expression of God's will for mankind that the very best Motown (or Gospel, from which of course Motown took much) can encapsulate. But the sound is indeed pleasant and pleasing and adds nicely to the background track of one's life. I have only one real concern over them: we have at present an entirely mischievous puppy, of breed unknown but we're pretty sure there's a lot of terrier in there, who would near immediately work out how to jump up and make them chime. So the soundtrack would not be that blissful evocation of the heavens, rather more a Yip, Yip, Clang and repeat. Which might not be quite so relaxing to listen to.
Posted by Tim Worstall at 9:41 AM | physics |
https://www.wesleybaker.com/a-palette-of-ocean-hues/ | 2023-12-09T09:06:56 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100873.6/warc/CC-MAIN-20231209071722-20231209101722-00490.warc.gz | 0.917446 | 631 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__96625070 | en | The ocean is a wondrous and mysterious expanse, covering more than 70% of the Earth’s surface. From the vibrant turquoise waters of the Caribbean to the deep blues of the Pacific and the murky browns of coastal estuaries, the colours of the oceans are as varied as the marine life they support. This article delves into the science behind the ocean’s shifting hues to uncover why its colours span such a fascinating spectrum.
Light absorption and scattering
The colour of the ocean is primarily determined by how sunlight interacts with the water molecules and other particles present in the water. Sunlight, or white light, is a mixture of all colours in the visible spectrum. When sunlight enters the ocean, it is absorbed and scattered by water molecules and particles. The longer wavelengths, such as red, orange, and yellow, are absorbed more quickly, while shorter wavelengths, such as blue and green, penetrate deeper and are scattered more efficiently.
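To see how strongly this wavelength dependence plays out with depth, light in water falls off roughly exponentially; the attenuation coefficients below are rough, illustrative values for clear water rather than figures taken from this article:

```latex
I(z) = I_0\, e^{-K z}
% Red light (~660 nm), K ≈ 0.4 m^{-1}:  I(10 m) ≈ e^{-4} I_0, about 2% of the surface intensity
% Blue light (~450 nm), K ≈ 0.01 m^{-1}: I(10 m) ≈ e^{-0.1} I_0, about 90% of the surface intensity
```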
Water depth and clarity
Water depth and clarity also play a significant role in determining the colour of the ocean. In shallow waters, sunlight can penetrate all the way to the ocean floor, allowing the colour of the seabed to influence the overall hue. For example, the water often appears turquoise in sandy or coral reef environments due to the combination of scattered blue and green light with the colours reflected from the seabed. In deeper waters, where sunlight cannot reach the bottom, the water appears dark blue, as the shorter blue wavelengths are the only ones that remain after absorbing other colours.
Dissolved and suspended particles
The presence of dissolved and suspended particles in the water, such as algae, minerals, and organic matter, can also influence the colour of the ocean. High concentrations of phytoplankton or microscopic algae can give the water a greenish tint, as these organisms contain chlorophyll, which absorbs blue and red light while reflecting green. Similarly, suspended sediment can result in brown or murky waters, particularly in coastal areas. These particles absorb and scatter light differently than water molecules and reflect the colours of the substances they contain.
The angle of the sun and atmospheric conditions
The angle of the sun and atmospheric conditions also play a part in shaping the ocean’s colours. As the angle of the sun changes throughout the day, it affects the amount of sunlight that penetrates the water and the way light is scattered. Additionally, atmospheric conditions, such as the presence of clouds or haze, can influence the colour of the ocean by altering the amount and quality of light that reaches the surface.
The diverse colours of the world’s oceans result from a complex interplay of factors, including light absorption and scattering, water depth and clarity, dissolved and suspended particles, and the sun’s angle and atmospheric conditions. This rich palette of hues is not only visually stunning but also serves as an essential indicator of the health and composition of marine ecosystems. By understanding the science behind the ocean’s colours, we can better appreciate and protect the vital role these vast bodies of water play in sustaining life on Earth. | physics |
https://umarc.eecs.umich.edu/index.php/club-projects/2-kw-hf-amplifier-rebuild/ | 2023-12-09T12:50:54 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100909.82/warc/CC-MAIN-20231209103523-20231209133523-00513.warc.gz | 0.951639 | 164 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__55946934 | en | The club’s Heathkit SB-220 amplifier has been repaired and is in use at W8UM. This amplifier uses a pair of 3-500Z triodes to generate 600 W (average) to 1200 W (PEP) of RF power on the 3.5 to 28 MHz amateur radio bands. The amplifier was donated to the club in 2003 by the original builder, Joe Firlit WA8RTL, a U-M EE alumnus. The amplifier was in excellent condition but benefitted from the replacement of several power supply components and some other minor modifications to improve its stability and reliability. For more information on this unit, refer to the W8UM Station Documentation file.
Here are some photos of the SB-220 after the electronic work was completed:
Updated: January 2007 | physics |
http://www.primaxengineers.net/rubber-rings.html | 2023-12-08T01:54:18 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100710.22/warc/CC-MAIN-20231208013411-20231208043411-00630.warc.gz | 0.944477 | 138 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__118576283 | en | Our company is offering a wide range of optimum quality rubber rings. These are used to guide the pistons as they reciprocate inside the cylinder. By acting as a seal between the piston and the cylinder wall, the ring also aids in sealing the combustion chamber. Additionally, the rings aid in heat conduction from the piston to the cylinder wall. Pneumatic and hydraulic cylinders' rods and pistons are guided by rubber rings, which also serve to keep metal from coming into direct contact with the cylinder housing. Strong transverse stresses, especially across a short guide distance, must be absorbed by them while minimising frictional losses. | physics
http://www.zaphu.com/2008/05/22/energy-crisis-what-energy-crisis/ | 2014-04-23T22:05:38 | s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00499-ip-10-147-4-33.ec2.internal.warc.gz | 0.938397 | 935 | CC-MAIN-2014-15 | webtext-fineweb__CC-MAIN-2014-15__0__17330546 | en | With oil approaching 140 dollars per barrel, there is a lot of talk of peak oil production and the end of civilization as we know it. Well, maybe civilization will come crashing down, but I don’t find this very likely. Read on to see why.
Basically, regardless of what you’ve heard, there is no shortage of energy on planet Earth. Those who say there is have forgotten about that little brilliant yellow ball overhead. That’s right, Mr. Sun. Consider that the chemical energy in one barrel of oil is about equal to the amount of solar energy that directly strikes a 100 square meter section of the Earth in 12.4 hours (see the illustration above). That’s a lot of energy. Isn’t the Sun cool (actually it is quite hot). This means that the total amount of solar energy that strikes the Earth in a given year (taking changes in the angle of incidence due to varying latitude into account) is equal to the chemical energy contained in about 895 trillion barrels of oil (that’s 895 followed by 12 zeros).
To put this number in context, the world uses about 30 billion barrels of oil a year or about 0.003 percent of the energy given to us by the Sun. Even if only 1 percent of solar energy is recoverable, that is still about 300 times the energy contained in the oil the world consumes every year. It is worth emphasizing at this point that this analysis doesn’t even include energy derived from geothermal, tidal, and nuclear sources which do not originate from solar fusion.
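Here is the arithmetic behind those two comparisons, using only the rounded numbers quoted above:

```latex
\frac{3.0\times10^{10}\ \text{barrels burned per year}}{8.95\times10^{14}\ \text{barrel-equivalents of sunlight per year}}
  \approx 3.4\times10^{-5} \approx 0.003\%
\qquad
\frac{0.01 \times 8.95\times10^{14}}{3.0\times10^{10}} \approx 300
```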
So, do you still think there is an energy crisis? Well, maybe there is. While the Sun’s energy is way more abundant than the energy we could ever hope to recover from fossil fuels, it is in a diffuse form that is not readily usable for many applications. In other words, the energy density of oil is much greater than that of sunlight, and the energy in oil is easily retrieved (by burning the stuff). This is why oil derivatives are so great for use in automobiles, planes, and trains. Thus, the crisis (if one exists at all) isn’t so much a lack of energy but a lack of means to collect and store the Sun’s energy. But, we have the technology to engage in “Sun harvesting” activities today. We can collect solar energy directly using photovoltaic cells (i.e., solar panels). We can harvest the Sun’s heat energy (which is the source of all of the Earth’s weather) using hydroelectric power plants and wind farms. And, we live on a planet that abounds with organisms that conveniently turn solar energy into stored chemical energy through photosynthesis. Turning this biomass into ethanol, biodiesel, and other biofuels is just starting to emerge as an economically viable alternative to drilling for black gold. Having said that, hopefully we (i.e., our governments) will learn soon that we shouldn’t encourage turning our food (like corn) into fuel. After all, we still need Fritos to eat.
This brings me to my final point. The important thing to remember is that it is economics, not lack of technological know-how, that is the reason alternative energy has not yet come online en masse. Until recently, fossil fuels where so cheap that they priced alternative energy sources out of the market. But with sky high energy prices, all of this will change (and change in a hurry). While the incredible price hiccup we are currently experiencing may not last, even oil priced at 70-90 dollars a barrel makes a host of alternative energy sources economically viable. So in the short term, don’t get mad at market speculators, they are just leveraging the power of the free market price system to ensure a more speedy and smooth transition to alternative energy. In the long term, get ready for all the wonderful things that the end of cheap oil will bring, like carbon neutral energy, reduced pollution, a more decentralized power grid, and electric cars (see this post). Perhaps the most enjoyable thing the coming revolution in “Sun harvesting” will bring is the ability for the free countries of the world to stick it to the nationalistic governments who control the major portion of the world’s remaining easily retrievable oil reserves. They don’t want to share, so they will be left in the proverbial dust.
If you would like to check the calculations in my figures, most of my data came from here. | physics |
https://progamingcorp.com/best-gpu-for-ray-tracing/ | 2023-12-03T03:57:21 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100484.76/warc/CC-MAIN-20231203030948-20231203060948-00464.warc.gz | 0.871477 | 1,451 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__164200795 | en | Ray tracing is an advanced graphics rendering technique that simulates real-world lighting and reflections to create ultra-realistic visuals in games and applications. To enjoy the benefits of ray tracing, you need a powerful graphics card or GPU built for this demanding task.
In this comprehensive guide, we will discuss everything you need to know about getting the best GPU for ray tracing.
What is Ray Tracing and Why it Matters
Ray tracing simulates how light rays interact with objects in a scene. It accurately traces the path of light as pixels on the screen. This gives ray tracing the ability to deliver true-to-life shadows, reflections, refractions and global illumination in real-time graphics.
Without ray tracing, traditional rendering uses “tricks” to approximate lighting effects. This leads to unrealistic and fake-looking visuals with muted colors and lighting. Ray tracing completely changes this by calculating proper light physics and material interactions.
In games, ray tracing can transform the visual experience. Environments look vibrant, vivid and three-dimensional. Lighting becomes dynamic and nuanced. You see life-like shadows and reflections based on light source positions. Rain, smoke, fire and other particle effects look amazingly realistic.
Ray tracing also has important use-cases in 3D modeling, CAD applications, content creation, machine learning and more. Overall, it takes graphics, simulation and immersion to the next level.
How Ray Tracing Works
The basic principle behind ray tracing is that for each pixel rendered on the screen, one or more light rays are traced back to their origin. This accurately simulates how light interacts with surfaces in the virtual environment.
There are three main components of a ray tracing system:
- The ray generation shader programs shoot out light rays for each screen pixel.
- The intersection shader checks collisions of rays with scene geometry and materials.
- The shading shader calculates lighting and color values after ray-surface interactions.
Additional shader programs add shadows, reflections, ambient occlusion and other effects. Advanced ray tracing also utilizes AI and machine learning techniques.
The entire pipeline runs in parallel on the GPU for real-time performance. But it requires tremendous computing power and VRAM for acceptable frame rates.
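To make those three stages concrete, here is a deliberately tiny CPU-side sketch in Python. It is not how an RT core or any real API (DirectX Raytracing, Vulkan, OptiX) is programmed; it simply mirrors the ray-generation, intersection, and shading steps for a single sphere so the pipeline above is easier to picture. All names and values are invented for illustration.

```python
# Minimal CPU sketch of the three ray tracing stages described above
# (ray generation -> intersection -> shading). Illustrative only.
import math

WIDTH, HEIGHT = 32, 16                       # tiny "screen"
SPHERE_C, SPHERE_R = (0.0, 0.0, -3.0), 1.0   # one sphere in front of the camera
LIGHT_DIR = (-0.5, 1.0, 0.5)                 # direction towards the light

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def generate_ray(px, py):
    """Ray generation: shoot one ray from the camera through pixel (px, py)."""
    x = (2 * (px + 0.5) / WIDTH - 1) * (WIDTH / HEIGHT)  # aspect-corrected
    y = 1 - 2 * (py + 0.5) / HEIGHT
    return (0.0, 0.0, 0.0), normalize((x, y, -1.0))      # origin, direction

def intersect_sphere(origin, direction):
    """Intersection: return the distance t of the nearest hit, or None."""
    oc = tuple(o - c for o, c in zip(origin, SPHERE_C))
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - SPHERE_R ** 2
    disc = b * b - 4 * c          # direction is normalized, so a = 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def shade(origin, direction, t):
    """Shading: simple Lambertian (diffuse) lighting at the hit point."""
    hit = tuple(o + t * d for o, d in zip(origin, direction))
    normal = normalize(tuple(h - c for h, c in zip(hit, SPHERE_C)))
    light = normalize(LIGHT_DIR)
    return max(0.0, sum(n * l for n, l in zip(normal, light)))  # 0..1 brightness

# Render: one ray per pixel, printed as ASCII shades.
palette = " .:-=+*#%@"
for py in range(HEIGHT):
    row = ""
    for px in range(WIDTH):
        origin, direction = generate_ray(px, py)
        t = intersect_sphere(origin, direction)
        brightness = shade(origin, direction, t) if t is not None else 0.0
        row += palette[int(brightness * (len(palette) - 1))]
    print(row)
```

Real engines replace the brute-force intersection test with traversal of bounding volume hierarchies, which is exactly the work that dedicated RT cores accelerate in hardware.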
Factors to Consider for the Best Ray Tracing GPU
Ray tracing places heavy demands on the graphics card. To enjoy smooth performance, you need to choose a GPU tailored for this workload. Here are the key factors to consider:
Dedicated Ray Tracing (RT) Cores
Specialized RT cores accelerate ray tracing and light calculations on the GPU. They are present on Nvidia RTX cards and AMD RDNA2/3 RX cards. More RT cores deliver better ray tracing performance.
High Clock Speeds
Higher GPU clock speeds increase shader program performance for ray tracing. A boost clock over 2 GHz is recommended for smooth frame rates.
Plenty of VRAM
Ray tracing uses a lot of video memory for storing scene data and light information. Aim for at least 8 GB of VRAM, with 10 GB or 12 GB being ideal.
Latest GPU Architecture
Newer GPU designs like Nvidia Ada Lovelace and AMD RDNA3 have hardware and software enhancements for ray tracing. They are much faster than previous generations.
DLSS 3 and FSR Support
DLSS 3 and FSR boost frame rates through advanced upscaling and anti-aliasing, with DLSS adding AI-accelerated frame generation. This compensates for the performance hit of enabling ray tracing.
The Best GPUs for Ray Tracing
Based on the above factors, these are the top graphics card recommendations right now for ray tracing:
Nvidia GeForce RTX 4090 – The Ultimate Ray Tracing GPU
The freshly launched RTX 4090 is in a league of its own for ray tracing performance. It has a whopping 128 RT cores and can hit over 130 FPS with ray tracing enabled at 4K resolution.
With up to 24 GB VRAM, Ada Lovelace architecture, DLSS 3 and insane compute power, the RTX 4090 is unmatched for ray tracing fidelity and speed.
AMD Radeon RX 7900 XTX – Powerful RDNA3 Ray Tracing
AMD’s new flagship RX 7900 XTX packs 96 improved ray accelerators and goes toe-to-toe with the RTX 4090. It delivers buttery smooth ray traced visuals at 4K. The card also has plenty of VRAM at 20 GB.
RDNA3 ray tracing enhancements, excellent 4K gaming performance and lower price make the 7900 XTX a great choice.
Nvidia RTX 3080 Ti – Previous Gen Flagship RTX
Although superseded by RTX 4000 models, the RTX 3080 Ti still impresses with its ray tracing capabilities. It has 80 second-gen RT cores and 12 GB GDDR6X VRAM.
With support for DLSS 2, PCIe Gen 4 and a 384-bit memory bus, the 3080 Ti runs ray traced games at 4K smoothly.
AMD Radeon RX 6900 XT – Great RDNA2 Ray Tracing Value
The RX 6900 XT deserves a mention for bringing ray tracing to AMD GPUs. It has 80 ray accelerators and 16 GB VRAM for solid ray tracing performance.
While not as fast as Nvidia RTX cards, the 6900 XT is great value today for entry-level 4K ray tracing.
Nvidia RTX 3060 Ti – Budget Ray Tracing Pick
In Nvidia’s mainstream RTX line-up, the RTX 3060 Ti stands out. It comes with 38 RT cores and 8 GB GDDR6 VRAM.
For 1080p or 1440p gaming, the 3060 Ti can run ray traced titles at 60+ FPS when paired with a good CPU.
People Also Ask
Does ray tracing reduce FPS?
Yes, enabling ray tracing can significantly reduce FPS. The performance hit can be 30-50% in some games. This is because ray tracing involves complex lighting and graphical calculations. Using DLSS or FSR helps regain lost frame rates.
Can GTX cards do ray tracing?
No. Practical real-time ray tracing requires dedicated hardware, which means Nvidia RTX cards or AMD RX 6000-series and newer graphics cards.
Is ray tracing worth it?
Absolutely – ray tracing makes lighting, reflections and shadows incredibly realistic. More games are releasing with ray tracing support. Powerful GPUs like the RTX 4090 can now run ray tracing smoothly.
Does AMD or Nvidia do ray tracing better?
Nvidia currently has superior ray tracing performance compared to AMD. However, AMD’s RDNA3 cards have narrowed the gap significantly. Overall, both vendors will continue improving ray tracing quality and speeds.
Ray tracing is transforming modern gaming visuals with true-to-life lighting. To enjoy ray traced games, you need a powerful GPU like Nvidia’s RTX 4090 or AMD’s RX 7900 XTX. Consider the RT cores, VRAM, clock speeds and architecture while choosing the ideal ray tracing graphics card for your budget.
With rapid advances from Nvidia, AMD and Intel, ray tracing performance and adoption will only grow and beyond. The realistic visual payoff is worth the investment into a good ray tracing GPU | physics |
http://www.nakedeyeplanets.com/neptune-2016.htm | 2018-04-23T13:03:16 | s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946011.29/warc/CC-MAIN-20180423125457-20180423145457-00405.warc.gz | 0.939161 | 248 | CC-MAIN-2018-17 | webtext-fineweb__CC-MAIN-2018-17__0__43565837 | en | Finder Chart for Neptune for 2016, with positions marked on the first day of each month (a Southern hemisphere view can be found here). Where the planet was too close to the Sun to be observable, the path is shown as a dashed line.
In 2016, Neptune reached opposition to the Sun - when it was brightest in the sky for the year and closest to the Earth - on September 2nd (indicated on the chart) when its apparent magnitude was +7.8 and its apparent diameter was 2".4 (2.4 arcseconds). The planet was then 28.945 Astronomical Units (4,330 million kms or 2,690 million miles) from Earth.
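That 2.4 arcsecond figure can be checked with a one-line calculation; the equatorial diameter assumed here (about 49,500 km) is a standard textbook value rather than something stated on this page:

```latex
\theta \approx \frac{D}{d} = \frac{4.95\times10^{4}\ \text{km}}{4.33\times10^{9}\ \text{km}}
\approx 1.14\times10^{-5}\ \text{rad} \times 206{,}265\ \tfrac{\text{arcsec}}{\text{rad}} \approx 2.4''
```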
Much of the star field in the chart should be easily contained within a binocular field of view (which typically ranges from 5° to 9°). Stars are shown down to magnitude +8.5. Right Ascension and Declination co-ordinates are marked around the border, for cross-referencing with a star atlas. Printer-friendly (greyscale) versions of the chart are available for Northern and Southern hemisphere views.
^ Back to Top of Page
Copyright Martin J Powell December 2016 | physics |
http://www.ahs-hrc.org/virginia-air--space-center.html | 2018-05-27T03:18:33 | s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867995.55/warc/CC-MAIN-20180527024953-20180527044953-00210.warc.gz | 0.913215 | 410 | CC-MAIN-2018-22 | webtext-fineweb__CC-MAIN-2018-22__0__61688360 | en | Virginia Air and Space Center
The Virginia Air and Space Center is a museum and educational facility in Hampton, Virginia that also serves as the visitors center for NASA's Langley Research Center. The museum also offers summer aeronautic and space themed camps for children. The museum's permanent collection is housed in a three-story glass atrium accessible from two exhibit floors with an additional catwalk level available for viewing suspended aircraft from above. Volunteers maintain an amateur radio exhibit displaying modern and historic radio equipment. The exhibit also participates in the Space Amateur Radio Experiment where visitors can periodically talk to astronauts aboard the International Space Station. The Adventures in Flight gallery emphasizes hands-on and immersive experiments on flight concepts such as control surfaces and propeller design and experiences such as flight simulators. The gallery also features numerous aircraft suspended from the roof in the main gallery. Most are restored and have close ties to flight research performed at area NASA, Air Force and Naval installations. In the Space Gallery, visitors enter through a room which simulates a manned launch to Mars, telling the story of rendezvous with a Mars Transit Vehicle and arrival at the planet, where doors open up into the gallery. Visitors can experience the hands-on space gallery, "Space Quest: Exploring the Moon, Mars & Beyond." This gallery includes four different exhibits: Our Solar System, Living and Working in Space, Mars and the Moon, and Visions of Space Exploration.
The Riverside IMAX 3D Theater is the first institutional theater in the world to go IMAX Digital! The state-of-the-art enhancements include IMAX’s powerful digital projection system and latest digital surround sound, a new IMAX screen, new flooring, and new seating with cup holders. The audience experience is better than ever! While still presenting the traditional IMAX films such as Hubble 3D and Under the Sea 3D, the digital upgrades will allow for more of the Hollywood blockbusters.
Click here to open the Virginia Air and Space Museum in a new page.
(Information derived from Wikipedia) | physics |
https://e3sparkplugs.com/blog/whats-happening-under-your-hood/ | 2022-01-19T22:35:28 | s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301592.29/warc/CC-MAIN-20220119215632-20220120005632-00432.warc.gz | 0.948075 | 426 | CC-MAIN-2022-05 | webtext-fineweb__CC-MAIN-2022-05__0__137787038 | en | What's Happening Under Your Hood?
For many of us, we know enough about our cars to keep them clean, fill them up, get the oil changed, and maybe check tire pressure. Yet if you own any type of car it's important to know what is happening under the hood.
All gasoline engines on the market are known as "internal combustion engines." As the name implies, combustion (or explosions) are taking place internally. These are also known as reciprocating engines because the repeat a pattern of movement and turn a crank. A simple explosion in a standard cylinder is powerful enough to launch a potato about 500 feet, to put it in perspective. Internal combustion engines put this power to more useful purposes by creating a cycle which sets off hundreds of explosions per minute, transferring that power to your tires and propelling your vehicle forward.
Nearly all gas engines use what's called the four-stroke cycle, or the Otto cycle, named after Nikolaus Otto, who built the first working four-stroke engine in 1876. These four strokes are performed within the cylinder: the intake stroke, the compression stroke, the combustion stroke, and the exhaust stroke. This is fairly easy to understand if we break it down. On the intake stroke the cylinder fills with a mixture of air and gasoline fumes. The compression stroke reduces the space within the cylinder, compressing the air and gasoline to create a more powerful explosion. During the combustion stroke, the spark plug (usually E3 for environmentally-conscious auto owners) ignites the mixture, creating an explosion. Finally, on the exhaust stroke the unburned hydrocarbons and other chemicals found in exhaust exit out of the tailpipe.
As mentioned, hundreds of these explosions happen every minute. To propel your vehicle forward, the pistons doing the work within the cylinders are attached to a crank, the first part of your drive train. This rotates, transferring power down the drive shaft to your tires which turn.
So although we may not all know how to take an engine apart and put it back together again, hopefully we now know a little more about how your engine operates. | physics |
https://www.geology.bas.bg/en/departments-112/depEarthquake-139 | 2023-12-04T07:21:46 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100525.55/warc/CC-MAIN-20231204052342-20231204082342-00416.warc.gz | 0.949632 | 299 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__148556912 | en | The department was established in 1987 as an independent unit at the Geological Institute to provide research related to the earthquake geology in Bulgaria. The topics and activities of the department cover the study of seismic sources and recent deformations of the earth's surface on the territory of Bulgaria. The object of research are seismogenic faults and earthquakes that happened on them. Faults are identified, traced and mapped, their behavior and the history of earthquakes are established - when they occurred, what is their frequency, how large they are. A set of approaches is applied, which are characteristic of structural geology, morphotectonics, sedimentology, quaternary geology, geomorphology, paleoseismology, applied geophysics, tectonics.
Modern methods are combined, including analysis of satellite data, through field collection of various analytical information about the structure and the substance, and laboratory tests. The department has established a tradition in conducting highly informative specific research on traces of past earthquakes in karst caves.
As a result of the activities, the entire necessary set of fault parameters is collected, which in turn serves to assess the seismic hazard. The use of empirical data is a relatively new practice in our country and it significantly increases the reliability of the calculated expected ground shake during future earthquakes.
The scientists from the department have extensive experience gained in studying earthquake faults in Bulgaria and in working in international teams in Belgium, Spain, Romania, the Czech Republic and India. | physics
https://www.annikamccann.com/blog/2018/9/30/red-light-therapy | 2019-09-15T06:18:19 | s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514570740.10/warc/CC-MAIN-20190915052433-20190915074433-00266.warc.gz | 0.951327 | 1,644 | CC-MAIN-2019-39 | webtext-fineweb__CC-MAIN-2019-39__0__35735249 | en | It was my 21-year-old daughter who got me to explore red light therapy. She has some significant chronic health issues, and is an amateur biohacker and a diligent and thoughtful researcher. She was excited about the potential of red light therapy and sent me a bunch of studies.
I had heard of red light therapy, but had never thought to take a serious look at it. There are so many health modalities out there, and it’s impossible to research all of them. Some are grounded in solid science, while many others have anecdotal evidence behind them, but lack scientific data. I assumed that red light therapy was in the latter category. After all, it’s just light. How wrong I was!
What is red light therapy? First of all, let’s take a look at the light spectrum. Visible light is only a tiny portion of the electromagnetic spectrum, which ranges from the short-wavelength (0.0001 nanometer) gamma rays on one extreme to the long-wavelength (100 meter) radio waves on the other. Humans can see only a small part of the spectrum, from about 390 to 700 nm. Below 390 is ultraviolet light; above 700 is near-infrared (NIR) light.
I’ve previously written about blue light and how it can be detrimental to our sleep when we are exposed to it too close to bedtime. In addition to affecting our circadian rhythms, light can have an enormous impact on health in many other ways. Red and near-infrared light in particular has been shown to be therapeutic for a wide range of conditions.
Red light therapy, also known as photobiomodulation or low level laser (light) therapy (LLLT), has been studied extensively for decades. This page meticulously details the photobiomodulation research and cites over 3000 studies, including over 900 animal studies and 140 randomized clinical trials in humans. Positive results have been found in a truly remarkable range of conditions. The ones which caught my eye included:
Traumatic brain injury
Wound and fracture healing
The list goes on and on and on! I’m particularly interested in the positive effects on joint and muscle pain, since this seems to be my Achilles heel (ha!) - I have had tendonitis in my knee, shoulder, and both wrists, and I frequently have lower back or neck pain.
How can simply shining light on the body result in such dramatic and diverse health effects? Every cell in our bodies, with the exception of red blood cells, contain mitochondria, commonly referred to as the energy powerhouses of the cell. Mitochondria keep very busy manufacturing adenosine triphosphate (ATP), the energy currency of the body. All activity within the cell is fueled by ATP; without it, the cell would quickly die. Mitochondria contain tiny light-sensitive receptors called chromophores. Red and near-infrared (NIR) light can penetrate deep into the body, when it is absorbed by chromophores within all kinds of tissues: muscle, bone, skin, and organs. When red light and NIR light reach the mitochondria, the photons are absorbed by a light receptor enzyme called cytochrome-c oxidase, which stimulates the production of ATP in the cell. The take-away: red and near-infrared light act to increase production of ATP, which increases the energy with which the body performs all its essential functions!
The vast majority of the photobiomodulation studies have been conducted using light from lasers. However, it’s the wavelength of the light which matters, not the delivery system. One of the leading experts on light therapy, Dr. Michael Hamblin, who has published hundreds of papers on the subject, had this to say:
Most of the early work in this field was carried out with various kinds of lasers, and it was thought that laser light had some special characteristics not possessed by light from other light sources such as sunlight, fluorescent or incandescent lamps and now LEDs. However all the studies that have been done comparing lasers to equivalent light sources with similar wavelength and power density of their emission, have found essentially no difference between them.
Wavelengths of red light (600-700 nm) and near-infrared light (NIR, 770-1200 nm) have shown positive results in hundreds of studies.
There are many companies making products for red light therapy, but two in particular are making a big splash in the ancestral health community: Joovv and SaunaSpace. Both make highly-regarded light therapy devices for use in the home, with one big difference: Joovv uses LED lights, while SaunaSpace uses incandescent heat lamps.
The Joovv consists of one or more panels with many LED lights. It emits red light at 660 nm and near infrared light at 850 nm; all the energy output is concentrated at these therapeutic wavelengths.
Incandescent bulbs put out a lot of heat, and SaunaSpace, as you might have guessed from the name, doubles as a sauna. The bulbs emit full-spectrum light, with the majority of the light in the 600-1400 nm range. As illustrated in the image below, 14% of the power is in the therapeutic range, with a much larger part producing heat.
Joovv and SaunaSpace both make compelling arguments asserting theirs is the superior product. Users seem to love both products. Both are highly recommended by experts I trust; Chris Kresser and Terry Wahls use the SaunaSpace, while Robb Wolf and Sarah Ballantyne use the Joovv.
After doing her research, my daughter decided she wanted to give the Joovv a try. She has used it for three weeks now, and remains cautiously optimistic. It hasn’t been the life-changing device she was hoping for, but she can feel a difference, especially immediately after using it. Joovv states that it could take 8-12 weeks of consistent use to see significant results, so the jury is still out.
The SaunaSpace and the incandescent heat lamps appealed to me over the Joovv for a couple of reasons. I really dislike LED lights; I’m very light-sensitive, and bright lights, especially LEDs, can be a migraine trigger for me. The full-spectrum, “natural” light of the incandescent bulb appeals to my hippie instincts. Plus, with the SaunaSpace, you get a two-fer: red/NIR light therapy, plus the heat of a sauna, which comes with a wide range of additional benefits.
My daughter’s health is one thing; for myself, I was not willing to invest a lot of money on a light therapy device. Even the smallest and most basic versions of both the Joovv and the SaunaSpace cost hundreds of dollars (and go up to several thousand dollars for the bigger versions). However, the same exact bulb used in the SaunaSpace is available for less than ten bucks! That sounded like an investment I was willing to make.
I bought a Philips infrared bulb and set it up with a workshop clamp light we had in our basement (incidentally, this is the exact same setup we used years ago for raising chicks). I clamped it to a chair and aimed it at my cranky lower back. Ta-da! Instant red/NIR light therapy setup for less than the cost of going out to lunch. I’ve been using the light on my back when I meditate each morning. It feels great, but I can’t say that it’s actually made a difference in my back pain - but, it’s only been a couple of weeks. Of course, one light does not a sauna make, but stay tuned.... my next post will be about creating my own DIY SaunaSpace knock-off! (Spoiler alert - it’s awesome!) | physics |
https://evgenii.com/blog/how-to-reduce-your-energy-consumption/ | 2024-04-17T18:45:35 | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817171.53/warc/CC-MAIN-20240417173445-20240417203445-00129.warc.gz | 0.882742 | 984 | CC-MAIN-2024-18 | webtext-fineweb__CC-MAIN-2024-18__0__119831974 | en | How to reduce your energy consumption
The following is a list of things each of us can do to reduce our energy consumption. It’s based mostly on excellent lectures The Science of Energy: Resources and Power Explained by Professor Michael E. Wysession and The Great Courses.
- Recycle. It takes less energy to reuse materials than to make them from scratch.
- Avoid disposable items and low quality products that don’t last long.
- Purchase local food/products. Local products require less energy for transportation.
- Eat less packaged food. It takes energy to make packaging.
- Reduce amount of food you throw away. 40% of food is lost in the US during different stages from production to consumption.
- Turn off electronic devices completely when they are not in use instead of leaving them in standby or sleep mode. 75% of electricity from electronics is consumed when they are not on.
- Buy energy efficient house appliances (fridge, washing machine).
Heating water requires an enormous amount of energy.
- Heat just enough water to fill your cup when making a hot drink. Heating extra water is wasteful.
- Use lower temperature setting in your washing machine. Your clothes will also last longer.
- Reduce time spent in the shower.
- Use electricity provider that gets energy from renewable sources.
- In summer, stick reflective film to your windows. Blinds and curtains don’t prevent heat from getting into your house.
- Use thermally efficient windows with vacuum between multiple glass panes.
- Generate your own energy with solar panels and store extra energy in batteries.
- Improve insulation of your house: windows, doors, walls.
- Plant trees around your house. Trees cool your house during summer and prevent winds from stealing away the heat in winter.
- Use natural light whenever possible.
- Turn off the lights when you leave the room.
- Use LED light bulbs.
- Install lighting dimmers.
- Install solar powered street lights that point down.
- Install motion sensors that turn lights off in offices and other public places.
- Use heat pump technology for both heating and cooling instead of electrical/gas heating.
- Set thermostat to lower temperature in winter and higher temperature in summer.
- Consider putting on warmer clothes instead of heating entire room.
- Keep doors/windows shut when cooling/heating is on.
- Change air filters of cooling/heating equipment regularly.
- Reduce amount of flying.
- Work remotely and teleconference for meetings instead of commuting.
- Bicycle whenever possible.
- Use public transportation. Electric public transport is most efficient.
- Transport goods by water or train instead of truck/air.
- Use electric car.
- Use smaller/lighter car.
- Reduce unnecessary acceleration and braking.
- Avoid driving alone. Having more passengers is more efficient.
- Keep your car tuned up: replace air filters, keep tires fully inflated etc.
I did not know that…
- Flying in an airplane is one of the most energy intensive things you can do. A Boeing 747 consumes 140 megawatts, which is equal to the power input of one hundred thousand (100 000) US homes. During a 12-hour flight you will consume as much energy as one US household uses over one year (in electricity); a rough check of this figure appears after this list.
- The total distance humans drive in two years is about the same as the distance to the nearest star, Proxima Centauri.
- Electric cars are four to six times more efficient than gas-powered cars.
- Bicycling is one of the most energy efficient ways of traveling.
- Energy efficiency of walking depends on the food you eat. For example, beef requires a lot of energy to produce (to feed the cows). If you eat a lot of beef then walking is as energy inefficient as driving alone in the car.
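A rough order-of-magnitude check of the flight figure above. The 140 MW and 12-hour numbers are from the list; the roughly 400 passengers and roughly 10,000 kWh of electricity per US household per year are assumed values, not from the original post.

#include <cstdio>

int main() {
    // From the post: a Boeing 747 draws about 140 MW in flight, for a 12-hour flight.
    const double power_mw = 140.0;
    const double flight_hours = 12.0;
    const double energy_mwh = power_mw * flight_hours;            // total energy for the flight

    // Assumptions (not from the post): ~400 passengers, ~10,000 kWh per US household per year.
    const double passengers = 400.0;
    const double per_passenger_kwh = energy_mwh * 1000.0 / passengers;
    const double household_kwh_per_year = 10000.0;

    std::printf("Flight energy: %.0f MWh; per passenger: %.0f kWh; household per year: %.0f kWh\n",
                energy_mwh, per_passenger_kwh, household_kwh_per_year);
    // Roughly 4,200 kWh per passenger versus roughly 10,000 kWh per household-year: the same
    // order of magnitude, so the claim holds as a rough comparison.
    return 0;
}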
Ideas for the future
- Install electricity meter that displays the cost of your current energy use and place the meter somewhere in your house where everyone can see it.
- Show the cost of current fuel consumption in your car.
- Using virtual reality equipment for teleconferencing can make business meeting more realistic and can make commute/flying unnecessary.
- Design new homes with energy savings in mind: insulation, air flow, orientation, tilt of windows, solar panels on rooftops.
Why should I save energy in the first place?
- It reduces pollution.
- It reduces greenhouse gas emission.
- It reduces human impact on ecosystems, maintains biodiversity and prevents extinction of species.
- It leaves the planet in better condition for future generations.
- It saves you money. | physics |
https://www.classicparker.com/threads/radar-question.961/ | 2023-09-29T20:30:53 | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510528.86/warc/CC-MAIN-20230929190403-20230929220403-00176.warc.gz | 0.955281 | 179 | CC-MAIN-2023-40 | webtext-fineweb__CC-MAIN-2023-40__0__206682526 | en | Mine is mounted flat to the roof and does OK. The Raymarine manual states it has an operating range of 15 deg both up and down from the center of the beam. I've made some wedges and one of these days when I take a break from fishing, I'll install them.
I recently added radar to my 2320...I used a 4 degree wedge along with a 5" Powermount...Both were purchased from BOE Marine. From the picture you can see the wedge keeps the center of the radar antenna horizontal to the surface...The radars vertical beam width is 15 degrees so I don't think the 4 degrees affects performance it just looks better when the radar antenna is horizontal to the surface when the boat is at rest. I added the 5 inch mount to raise the radar antenna above my GPS antenna to prevent interference and damage to the GPS receiver. | physics |
http://www.clevelandwater.com/blog/freezing-weather-doesnt-always-mean-frozen-lake-erie | 2018-10-19T08:40:38 | s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512382.62/warc/CC-MAIN-20181019082959-20181019104459-00474.warc.gz | 0.951663 | 717 | CC-MAIN-2018-43 | webtext-fineweb__CC-MAIN-2018-43__0__110658290 | en | We constantly monitor the temperature of the 250 million gallons of raw water we draw from Lake Erie into our treatment plants. In fact, Cleveland’s official water temperature, the one meteorologists report on the news, comes from the gauges inside the intake at Cleveland Water’s Garrett A. Morgan Treatment Plant near Whiskey Island.
For the first week in January 2018, water temperatures in Lake Erie hovered around 32°F. Yet the water running through our treatment plants has little chance of freezing. The reasons why start with the location of our water intakes.
Each Cleveland Water intake crib is located about 3 to 5 miles offshore. The cribs protect the top portion of the 7- to 9-foot diameter pipes which go down into the lakebed then towards the shore through giant tunnels. The tunnels are about 50-feet below the bed of Lake Erie, which makes them about 100 feet below the surface of the water. Water is drawn into the intake pipes from the middle and bottom of the lake – not the surface where water is more likely to freeze and pieces of ice float.
Their location far offshore is important because when a winter is cold enough very shallow areas along the lake can freeze solid. Even a few hundred feet offshore needle-shaped ice crystals called frazil ice can develop and clog intake pipes. Our intakes are located in 50-foot deep water where the water cannot freeze solid. Additionally, the temperature 50-feet below the bed of the Lake Erie where our intake tunnels are located is actually warmer than the lake water.
Water at the bottom of Lake Erie is also in motion. Flowing water can have enough internal energy to resist crystallization. While most people will never experience underwater currents, people who scuba dive in Lake Erie will attest to the fact that water at the bottom of the lake moves. The National Oceanic and Atmospheric Administration’s (NOAA) Great Lakes Environmental Research Lab has a real-time map that shows currents. Once water is in our intake pipes, pumps constantly move water into and through the plants and distribution system. This continual movement help prevents freezing. This is also why it’s a good idea to let a small stream of water flow from your faucet to prevent your pipes from freezing during extremely cold temperatures.
Additionally, water under pressure has a lower freezing point. Lake water drawn into our plants is under pressure from the many feet of water above it. When water is in our intake pipes and distribution mains, the pressure comes from the pumps that push water from place to place. Pressure’s impact on the freezing point effects packaged drinks, too. If you’ve left a sealed beverage outside over the last few weeks, it may still look liquid. If you crack the lid and some of the liquid instantly freezes, it’s because opening the bottle released the pressure and increased the freezing point.
Finally, when water has stuff in it temperatures must be colder than 32°F for water to freeze. The stuff can be dissolved oxygen, minerals, or the ingredients in your favorite drink. Raw water in Lake Erie has extra stuff in it, including more oxygen. The impact of minerals on water’s freezing point is why spreading salt on sidewalks melts ice. Salt lowers the freezing point. This is also why temperatures need to drop to around 28.4°F for seawater to freeze.
All of these reasons are why, even when much of Lake Erie looks frozen, we're still able to provide our customers with quality drinking water. | physics |
https://www.mcmaster-electric.com/learn-more-about-dc-gear-motor.html | 2024-02-27T17:30:42 | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474676.79/warc/CC-MAIN-20240227153053-20240227183053-00774.warc.gz | 0.915226 | 544 | CC-MAIN-2024-10 | webtext-fineweb__CC-MAIN-2024-10__0__108630765 | en | Views: 16 Author: Belmont Publish Time: 2023-04-20 Origin: https://www.mcmaster-electric.com
A DC gear motor is an electric motor that converts electrical energy into mechanical energy using direct current. Its structure consists of a motor body and a gearbox. The motor body converts electrical energy into rotational force, while the gearbox reduces the high-speed rotation of the motor to the appropriate speed to drive the desired load.
DC gear motors are typically used in applications that require high torque and low speed, such as automated production lines, conveyor machinery, wind turbines, car window lifters, electric doors, and more.
Compared to other types of motors, DC gear motors have the following advantages:
Higher starting torque, suitable for equipment that requires frequent starting and stopping;
Good speed regulation performance, can control the speed by adjusting the power supply voltage or current;
Reliable operation and easy maintenance;
Suitable for continuous operation applications.
However, DC gear motors also have some disadvantages, such as brush wear and noise. Therefore, when choosing a motor, various factors need to be considered comprehensively according to the specific application situation.
A DC gear motor consists of two main parts: a DC motor and a gearbox. The DC motor is typically equipped with a permanent magnet or a commutator to convert electrical energy into mechanical energy, rotating the motor shaft. The gearbox is usually designed with a gear train structure to transmit the high-speed rotation of the motor to the output shaft, while reducing the speed to a level suitable for the application. The larger the gear ratio, the higher the output torque and the lower the output speed.
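The speed/torque trade-off can be written as a pair of simple relations. The sketch below is illustrative only; the 30:1 ratio, motor figures and 90% gearbox efficiency are assumed example values, not specifications from this article.

#include <cstdio>

int main() {
    // Assumed example values (not from the article).
    const double motor_speed_rpm    = 3000.0;  // speed at the motor shaft
    const double motor_torque_nm    = 0.05;    // torque at the motor shaft
    const double gear_ratio         = 30.0;    // 30:1 reduction
    const double gearbox_efficiency = 0.9;     // gears lose some power to friction

    // Output speed drops by the gear ratio; output torque rises by the same factor,
    // less the gearbox losses.
    const double output_speed_rpm  = motor_speed_rpm / gear_ratio;
    const double output_torque_nm  = motor_torque_nm * gear_ratio * gearbox_efficiency;

    std::printf("Output: %.0f rpm, %.2f Nm\n", output_speed_rpm, output_torque_nm);
    return 0;
}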
DC gear motors have a wide range of applications in industrial automation, transportation, medical equipment, ships, and other fields. Some typical applications include:
Industrial automation equipment: DC gear motors are widely used in various equipment such as conveyor belts, mixers, cutting machines, welding machines, machine tools, and injection molding machines in production lines and assembly lines.
Transportation: DC gear motors are commonly used in automobile window lifters, safety belt retractors, fans, pumps, and other components in cars, trains, ships, and other transportation vehicles.
Medical equipment: DC gear motors can be used in medical care beds, medical instruments, surgical equipment, and other medical devices in hospitals.
Household appliances: DC gear motors can be used in washing machines, vacuum cleaners, food processing equipment, and other household appliances.
Overall, DC gear motors have a wide range of applications due to their efficiency, reliability, low noise, and easy controllability. | physics |
http://www.thewarpath.net/parking-lot/18409-team-finds-28-planets-faraway-solar.html | 2017-04-26T02:20:59 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121121.5/warc/CC-MAIN-20170423031201-00277-ip-10-145-167-34.ec2.internal.warc.gz | 0.859986 | 134 | CC-MAIN-2017-17 | webtext-fineweb__CC-MAIN-2017-17__0__241362933 | en | |05-29-2007, 02:30 PM||#1|
Team finds 28 planets in faraway solar systems
A University of California-led research team has discovered 28 new planets deep in the Milky Way, circling stars not unlike our own - leading them to conclude that our solar system may not be so special after all.
"The sun and Earth is not a rarity," said Geoffrey Marcy, professor of astronomy at UC-Berkeley, estimating that there may be at least 20 million to 30 million solar systems within the Milky Way galaxy. "A family of planets orbiting a single star is a very common occurrence." | physics |
https://timedocumentstorage.com/x-ray-film-storage-solutions-for-healthcare-providers/ | 2024-03-03T02:05:15 | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476180.67/warc/CC-MAIN-20240303011622-20240303041622-00680.warc.gz | 0.948445 | 913 | CC-MAIN-2024-10 | webtext-fineweb__CC-MAIN-2024-10__0__51719291 | en | Healthcare providers are responsible to their patients and the law to store x-rays safely and securely. X-rays are often essential for diagnosing medical conditions, so they must be appropriately archived and accessible when needed. You need a storage company that understands the unique needs of healthcare providers and offers customized solutions for x-ray storage. It’s vital to use a HIPAA-compliant records storage provider with extensive experience in securely storing x-rays. Let’s look at x-ray storage solutions to understand better what’s required and why you should use a professional company to keep your x-ray films.
What is X-Ray Storage?
X-ray storage is keeping x-rays safely and securely stored so they can be accessed when needed. X-rays are medical images that are used to diagnose various conditions. They need to be stored properly to maintain their quality and prevent them from being damaged.
Why is X-Ray Storage Important?
There are several reasons why x-ray storage is so important.
- X-rays can be beneficial in diagnosing medical conditions. They may be damaged and become unusable if they are not stored properly. Damage can delay or prevent the diagnosis and treatment of a medical condition.
- X-rays are considered to be medical records. As such, they are subject to HIPAA regulations. These regulations mean that x-rays must be stored to protect their confidentiality.
- X-rays can take up a lot of space. Storing them can save you valuable storage space.
What are the Requirements for X-Ray Storage?
Several requirements must be met to store x-rays properly.
- X-rays must be stored in a cool, dry place. They should not be exposed to excessive heat or humidity, which can damage the x-rays.
- X-rays must be stored in a way that protects their confidentiality.
- X-rays must be labeled appropriately and kept organized to ensure that the x-rays can be identified and located when needed.
What are the Best X-Ray Storage Solutions?
The best x-ray storage solutions will meet all of the requirements listed above. They will also offer additional features that can help to improve the x-ray storage process. For example, some x-ray storage solutions allow x-rays to be stored electronically. Storing electronically can save space and make it easier to access x-rays when needed. Other x-ray storage solutions offer cloud-based storage. Cloud-based storage allows x-rays to be accessed from anywhere, which can be helpful if you need to share x-rays with other healthcare providers.
When choosing an x-ray storage solution, it’s essential to choose one that meets all your needs. Be sure to consider the size of your x-ray collection, the confidentiality requirements, and the ease of access when choosing a storage solution.
How Long Do X-Rays Need to be Stored?
The length of time that x-rays need to be stored depends on several factors, including the state in which you practice and the type of x-ray. It’s always best to check with your state’s regulations to be sure.
Shredding of X-Rays
Shredding X-rays is the best way to protect patient confidentiality. A professional shredding company should do x-ray shredding. Using a certified and secure shredding company will ensure that your X-rays are correctly destroyed and that you receive a certificate of destruction.
When the films reach their retention date, and we receive proper authorization, Time Document Storage shreds the X-ray films. X-ray films destroyed will be sent to a refinery. Once received at the processing plant, the silver is removed from the x-ray film through the washing process.
As part of the recycling process, 99% of the X-ray film is recycled into polyester, and 1% is recycled into silver. As a result of this recycling process, both the silver and x-ray film can be recycled in an environmentally friendly manner.
X-Ray Film Storage Solutions
At Time Document Storage, we offer x-ray storage solutions tailored to healthcare providers’ needs. We understand the importance of X-ray storage and take pride in providing a safe, secure, and confidential solution. Contact us today to learn more about our x-ray storage solutions and how we can help you meet your needs. | physics |
https://roboticdreams.wordpress.com/2015/05/10/building-an-arduino-based-self-balancing-robot-part-2/ | 2020-02-20T11:41:15 | s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144722.77/warc/CC-MAIN-20200220100914-20200220130914-00139.warc.gz | 0.928432 | 1,844 | CC-MAIN-2020-10 | webtext-fineweb__CC-MAIN-2020-10__0__66769779 | en | In my previous post, I explained how to design an inertial measurement unit or IMU for use with a self-balancing robot. The IMU is used to measure the current tilt angle of the robot. The idea is to use the tilt angle as a way to determine the direction the robot should move to bring it back to a near vertical position. This problem is similar to the situation most of us have intuitively solved when balancing something like a broom on the palm of our hand. We move our hand so that it is under the center of mass – if the broom tilts left, we move the bottom to the left to catch up with the rest of the broom before it falls. We make many small adjustments like this to keep the broom balanced. These adjustments are based on our observation that the broom is beginning to tilt too much in one direction and we need to move our hand fast enough so that the broom doesn’t fall. In this case, we may need to move our hand forward and backward as well as left and right.
The PID and the Pendulum
[Apologizes to Edgar Allen Poe]. The self-balancing robot balancing act is similar and somewhat simpler than balancing the broom in that we only need to move the robot forward and backward – but how to do this? This problem happens to be a classic control theory problem called an inverted pendulum. Most descriptions of this problem go to the trouble of mathematically describing the motion equations of the inverted pendulum before describing how to control it. This is mostly for the benefit of those with system simulation software and ideal parameters to create the perfect solution. The math is frighteningly complex involving signal processing techniques like transforming the time domain problem into the frequency domain through Laplace transforms and bringing stability to the system by moving the roots into the negative s-plane. Do you need to understand that? I don’t think so – but if you do, that could help later. I haven’t touched Laplace transforms in a very very long time and I didn’t need to learn them again. So what do you need to know to control the robot? A little bit of control theory.
You have likely encountered control theory without even realizing it while filling a bath tub. You may start with both hot and cold water running while you monitor the water temperature in the tub. If the temperature is too cold, you might turn the cold water down or turn the hot water up. You monitor the temperature and make adjustments until the right water temperature and level is reached. What you’ve done is used the difference between the desired and actual water temperature as a guide to repeatedly adjust the incoming water temperature mix. Your hand is the sensor that provides feedback of the current temperature. The difference between the desired (or reference) and actual temperatures is called the error. You use the error signal to control adjustments to bring the temperature closer to the desired output – you are the controller.
You can see from the diagram above that “Your Program” is the controller and it uses the IMU output to decide how to move the robot. This type of feedback control has been extensively used for over one hundred years to control all sorts of machines. A typical approach to a controller is to use the Proportional, Integral, Derivative or PID design. This design combines the use of two or three of:
- A proportional adjustment (e.g. multiplier) of the current tilt angle. This adjustment by itself cannot result in a stable system and must be combined with one or both of the other adjustments.
- An adjustment based on the total accumulated error. Multiple adjustments to the robot should make the tilt angle approach zero degrees (e.g. vertical). The actual measurements will show that the adjustments may have fallen short or over shot the desired angle resulting in an error. This error is accumulated and should be accounted for by the controller. Because this is a sum over time, it is mathematically an integration and thus the name integral.
- An adjustment that considers the rate at which the error is changing so that the system can become stable more quickly. Because it is related to the rate of change of the error over time, it is mathematically a derivative of the error.
Controllers that use only the Proportional and Integral adjustments are referred to as PI controllers. Those that only use the Proportional and Derivative adjustments are called PD controllers. A very informative and comically dated MIT lecture on YouTube provides a very good detailed mathematical foundation for control feedback of an inverted pendulum. The lecture only uses a PD design, while I understand most systems rely on a PI design. This series uses all three adjustments just for fun. The PID controller output can be expressed mathematically as:
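output(t) = Kp·θ(t) + Ki·∫θ dt + Kd·(dθ/dt)    (standard PID form, written out here to match the description of the gains below)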
Where θ is the tilt angle. The tilt angle is considered the “error” since the reference is zero degrees so anything other than zero is considered an error. Each adjustment has a multiplier called a gain. The Kp, Ki, and Kd gains are for the proportional, integral and derivative adjustments respectively. All of these gains must be positive to achieve stability. There are other equivalent representations of this equation that we’ll discuss when it comes to determining the values of each of the gains. As you can see there is a need to do integration and differentiation. We need to find ways to approximate these functions.
Integration is the calculation of the area under a function. Fans of calculus also know that integration is the sum of infinitesimally wide rectangles as the width approaches zero. Your program can accomplish a close approximation of integration through numerical integration. This example (and all that I have seen elsewhere) specifically uses the rectangle method. Imagine a graph of the accumulated tilt angle on the y-axis versus time on the x-axis like the one to the right. The rectangle method sums the rectangles formed by a series of equal sub-intervals to calculate the area providing a nice approximation to integration. There is some error in the calculation as shown with the black triangles but is minimized through the use of equal sub-intervals. This approach led me to use a timer interrupt to ensure the time intervals were as equal as possible. Many other examples you might find tend to use the main Arduino loop for this evaluation and I suspect additional error may be introduced in the numeric integration as a result.
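A minimal Arduino-style sketch of that accumulation step, assuming a timer interrupt that fires every 10 ms; the interval, variable names and interrupt hookup are illustrative, not taken from this series.

// Called from a timer interrupt every DT seconds so the sub-intervals stay equal.
const float DT = 0.010f;              // width of one equal sub-interval, in seconds (assumed)

volatile float tiltAngle = 0.0f;      // latest tilt angle from the IMU, in degrees
volatile float angleIntegral = 0.0f;  // running sum of rectangle areas

void controlTick() {                  // attach this to your timer interrupt
  // Rectangle method: add one rectangle of height tiltAngle and width DT.
  angleIntegral += tiltAngle * DT;
}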
A derivative is the slope of a function and is accomplished through differentiation. Your program can implement a close approximation of a derivative through the use of numeric differentiation. This example uses the finite difference approach where the slope of a secant line between two adjacent tilt angle measurements is used. This is sufficiently accurate for our use provided the sub-interval time is very short.
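Adding the finite-difference slope to the same timer routine gives the whole PID output. Again this is only a sketch; the gains and the 10 ms interval are placeholders to be tuned later in the series.

const float DT = 0.010f;                  // same 10 ms sub-interval as above (assumed)
float Kp = 0.0f, Ki = 0.0f, Kd = 0.0f;    // gains, covered in the next part

float previousAngle = 0.0f;
float angleIntegral = 0.0f;

float pidOutput(float tiltAngle) {
  angleIntegral += tiltAngle * DT;                        // rectangle-method integral
  float derivative = (tiltAngle - previousAngle) / DT;    // slope of the secant between readings
  previousAngle = tiltAngle;
  return Kp * tiltAngle + Ki * angleIntegral + Kd * derivative;
}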
So now that we have the controller output foundations established, how should that be used? Don’t worry about the gain values yet as we’ll cover them in detail when we finally get to some code.
May The Torque Be With You
Ultimately we need to drive the motors of our robot and the details are quite implementation specific. Your program needs to determine a direction and speed to keep the robot balanced. The way to do that depends on the weight of the robot, the torque of the motors you’ve selected and how you’ve attached the motors to your Arduino. I use the Adafruit Motor/Stepper/Servo Shield because it’s convenient, powerful and relatively inexpensive at US$20. You might be able to put together a cheaper alternative with some extra work. A few things I like about this shield is that is has a dedicated PWM chip, connects over just two I2C pins and has a nice library to access it. The direction is set with a dedicated function call using FORWARD or BACKWARD constants. The speed is set by using another function call passing a value ranging from 0 to 255 – the realized speed depends on the motor, the current power supply level and the load.
As you may recall from my previous post, the IMU arrangement makes a positive rotation about the x-axis correspond to tilting forward. Your design may use a different convention for the axis or direction. As the tilt angle becomes positive, the robot needs to move forward to stay balanced. Looking at the control equation, we can see that a positive and increasing tilt angle θ produces a positive control output. The sign of the control value can be used to determine the required motor direction.
The absolute value of the control should not be directly used as the speed or throttle of the motor because that value must be within the range of 0 to 255. With some care of data types and ranges, the control value can be mapped into the proper throttle value through the use of the Arduino ‘constrain’ and ‘map’ functions.
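A sketch of that last step. setMotorDirectionAndSpeed() is a stand-in for whatever your motor shield library provides (for the Adafruit shield, its run() and setSpeed() calls), and the 400 control limit is an assumed tuning value, not a number from this series.

void setMotorDirectionAndSpeed(bool forward, int throttle);   // provided elsewhere

const long CONTROL_LIMIT = 400;          // PID outputs beyond this are treated as full throttle

void driveMotors(float control) {
  bool forward = (control > 0);                            // positive tilt/control -> move forward
  long magnitude = (long)(forward ? control : -control);   // absolute value of the control
  magnitude = constrain(magnitude, 0, CONTROL_LIMIT);      // keep it inside the range we map from
  int throttle = (int)map(magnitude, 0, CONTROL_LIMIT, 0, 255);
  setMotorDirectionAndSpeed(forward, throttle);
}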
This completes the background and foundational concepts to design a self-balancing robot. Next we will finally get to some code to make these concepts real and then determine the PID gain values to make it work. | physics |
https://www.psaafrica.co.za/product/fl3100h-h2-uv-ir-flame-detector-for-hydrogen-applications/ | 2020-11-28T10:56:12 | s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195417.37/warc/CC-MAIN-20201128095617-20201128125617-00513.warc.gz | 0.84372 | 142 | CC-MAIN-2020-50 | webtext-fineweb__CC-MAIN-2020-50__0__41685457 | en | FL3100H-H2 UV/IR Flame Detector for Hydrogen Applications
The FL3100H-Hydrogen is an Ultraviolet / Infrared flame detector designed specifically to detect hydrogen (H2) fires and provide alarm outputs directly from the detector while maintaining false alarm immunity. It detects H2 fires by monitoring in both the ultraviolet (UV) and infrared (IR) spectral ranges, making it highly immune to false alarms. Configurations with dual Modbus and HART are available. Modbus and HART data can be used for predictive maintenance. The flame detector’s electronics are integral within its explosion-proof housing, allowing detector information to be processed at the point of detection. | physics |
https://www.indiashopps.com/news/iqoo-smartphone-flies-to-space-survives-following-a-free-fall-from-31000-meters-altitude/ | 2022-01-19T14:15:59 | s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301341.12/warc/CC-MAIN-20220119125003-20220119155003-00148.warc.gz | 0.918494 | 289 | CC-MAIN-2022-05 | webtext-fineweb__CC-MAIN-2022-05__0__140421184 | en | Smartphones are going some unusual tests these days to test their durability under harsh conditions. Xiaomi sent the Redmi Note 7 to space which captured several images at 31,000m altitude before falling to the ground. In the latest test, Vivo’s sub-brand iQOO used industrial hydrogen balloon to send iQOO smartphone to space. The phone was brought to the altitude of 31,000 meters under the extremely cold weather of minus 56 °C. The phone continuously ran for five and a half hours without shutting down. Then from an altitude of 31,000 meters, the iQOO phone was allowed to freely fall and surprisingly, the phone survived with its screen still intact.
To recall, iQOO phone was launched on March 1 in China. Equipped with Qualcomm Snapdragon 855 processor, the phone uses a 6.41-inch AMOLED display with a waterdrop notch. With 44W ultra-fast flash charging technology, the 4000mAh battery present in the phone can be charged in just 45 minutes.
The iQOO smartphone uses a multi-layer heat dissipation structure that consists of a 10,000-degree thermal conductivity heat pipe, a high thermal conductivity aluminum alloy frame, a multi-layer composite graphite heat-dissipating film and a curable thermally conductive gel that help achieve better temperature regulation while playing high-end games for a longer time. | physics |
https://cheops.unibe.ch/de/mission/executive-summary | 2023-06-10T15:29:58 | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224657720.82/warc/CC-MAIN-20230610131939-20230610161939-00499.warc.gz | 0.889108 | 1,180 | CC-MAIN-2023-23 | webtext-fineweb__CC-MAIN-2023-23__0__125921013 | en | The CHaracterizing ExOPlanet Satellite (CHEOPS) will be the first mission dedicated to search for transits by means of ultrahigh precision photometry on bright stars already known to host planets. By being able to point at nearly any location on the sky, it will provide the unique capability of determining accurate radii for a subset of those planets for which the mass has already been estimated from ground-based spectroscopic surveys. It will also provide precision radii for new planets discovered by the next generation ground-based transits surveys (Neptune-size and smaller).
Large ground-based high-precision Doppler spectroscopic surveys carried out during the last years have identified hundreds of stars hosting planets in the super-Earth to Neptune mass range (1<Mplanet/MEarth<20) and will continue to do so into the foreseeable future. The characteristics of these stars (brightness, low activity levels, etc.) and the knowledge of the planet ephemerids make them ideal targets for precision photometric measurements from space. CHEOPS will be the only facility able to follow-up all these targets for precise radius measurements.
The new generation of ground-based transit surveys (e.g. NGTS), capable of reaching 1 mmag precision on V < 13 magnitude stars, provide yet another source of targets. By the end of 2017, NGTS will provide of order 50 targets with R < 6 REarth for which CHEOPS will be able to measure radii to a precision of 10%. These stars are also bright enough for precise radial velocity follow-up measurements to be practical. While unbiased ground-based searches are well-suited to detect the transits and fix the ephemerids, CHEOPS is crucial to obtain precise measurements of planet radii.
Knowing where to look and at what time to observe makes CHEOPS the most efficient instrument to search for shallow transits and to determine accurate radii for planets in the super-Earth to Neptune mass range.
The main science goals of the CHEOPS mission will be to study the structure of exoplanets with radii typically ranging from 1-6 REarth orbiting bright stars. With an accurate knowledge of masses and radii for an unprecedented sample of planets, CHEOPS will set new constraints on the structure and hence on the formation and evolution of planets in this mass range. In particular, CHEOPS will:
To reach its goals, CHEOPS will measure photometric signals with a precision limited by stellar photon noise of 150 ppm/min for a 9th magnitude star. This corresponds to the transit of an Earth-sized planet orbiting a star of 0.9 Rsun in 60 days detected with a S/Ntransit >10 (100 ppm transit depth). This precision will be achieved by using a single frame-transfer back-side illuminated CCD detector located in the focal plane assembly (FPA) of an F/8 ~32 cm diameter on-axis telescope. The optical design is based on a Ritchey-Chretien style telescope to provide a de-focussed image of the target star. An industrial study has led to a suitable optical design, which also minimizes stray light onto the detector utilizing a dedicated field stop and a baffling system. This design meets the requirement of < 10 ppm stray light onto the detector even in the worst case observing geometry on the baseline orbit. Thermal control of the detector (stable within ~10 mK to minimize noise) will be obtained by coupling the detector to a radiator always exposed to deep space.
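Those numbers are mutually consistent, as a quick back-of-envelope check shows. The transit duration used below (about 6.5 hours for a central transit on a 60-day orbit) is an assumption for illustration, not a figure from this summary.

#include <cstdio>
#include <cmath>

int main() {
    const double r_earth_km = 6371.0;
    const double r_sun_km   = 696000.0;

    // Transit depth = (planet radius / star radius)^2 for a 1 R_Earth planet and a 0.9 R_Sun star.
    const double depth = std::pow(r_earth_km / (0.9 * r_sun_km), 2.0);   // ~1.0e-4, i.e. ~100 ppm

    // Photon-noise limit quoted above: 150 ppm in one minute; averaging over the transit
    // reduces it by sqrt(number of minutes). Assume a ~6.5 h (~390 min) central transit.
    const double transit_minutes = 390.0;
    const double noise_ppm = 150.0 / std::sqrt(transit_minutes);

    std::printf("depth = %.0f ppm, noise = %.1f ppm, S/N = %.0f\n",
                depth * 1e6, noise_ppm, depth * 1e6 / noise_ppm);   // ~100 ppm, ~7.6 ppm, S/N ~13
    return 0;
}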
The telescope will reside on a spacecraft (S/C) platform providing pointing stability of < 4 arcsec rms over a typical 48 hour observing period. The S/C will be 3-axis stabilized but nadir locked with the thermal interface between the spacecraft bus and instrument payload remaining stable to within one degree. The S/C will provide 64W continuous power for instrument operations and allow for at least 1.2 GBit/day downlink. The S/C will be provided by EADS CASA Espacio based on the SEOSAT platform.
The baseline orbit satisfying the science requirements is a sun-synchronous 650 to 800 km altitude orbit (SSO) with a mean local time of the ascending node of 6 a.m.. This choice optimizes uninterrupted observations and keeps thermal variations of the S/C and stray light on the satellite to a minimum as the orbital plane follows as close as possible the day/night terminator. A shared launch is envisioned which, given the mass of the S/C (< 300 kg), will be possible using a number of existing launchers (VEGA, Dnepr, Rockot, Soyuz).
The CHEOPS mission baseline relies completely on components with flight heritage. This is valid for the platform as well as for the payload components. For the latter, the team can exploit significant heritage from the CoRoT mission minimizing both cost and risk.
The baseline CHEOPS mission fits within both the technical readiness requirements and the cost envelope defined by the ESA call for S-missions, yet represents a break-through opportunity in furthering our understanding of the formation and evolution of planetary systems. A number of options have originally been identified that could potentially significantly enhance the scientific return. These old options are presented in the table at the end of this page. However, in the meantime the mission baseline is frozen:
The baseline mission profile is:
|Orbit:||Low Earth, Sun Synchronous Orbit (LEO, SSO) 6am/pm at 700 km altitude|
|Detector:||CCD detector, one wide wavelength band from 0.4 to 1.1 micron| | physics
http://suschem.uva.nl/shared/subsites/van-t-hoff-institute-for-molecular-sciences/en/news/2017/06/innovative-bench-top-x-ray-spectrometer-competes-for-amsterdam-science--innovation-award-2017.html?origin=7tANsNTXTTW%2BsqhBpcUw%2FQ | 2017-12-11T03:39:43 | s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948512121.15/warc/CC-MAIN-20171211033436-20171211053436-00785.warc.gz | 0.924129 | 555 | CC-MAIN-2017-51 | webtext-fineweb__CC-MAIN-2017-51__0__1859417 | en | Benchtop X-ray spectrometer competes for Amsterdam Science & Innovation Award 2017
Dr Moniek Tromp and Monalisa Goswami MSc of the Van 't Hoff Institute for Molecular Sciences have reached the finals of the 2017 Amsterdam Science & Innovation Award. Their benchtop X‐Ray Absorption Spectrometer is one of ten projects selected by the jury. On 21 June the winner will be announced during a festive award ceremony. Online voting for the Audience Award is open until 20 June.
The Amsterdam Science & Innovation Award is the annual Amsterdam competition for innovative ideas with a societal and/or commercial impact. The award is open to all researchers, staff and students in all disciplines from all Amsterdam universities, universities of applied sciences, academic hospitals and public research centres. The winner will be awarded with € 10.000.
Earlier this month the jury presented their selection of ten finalists. Moniek Tromp and Monalisa Goswami defend the honour of the University of Amsterdam's Science faculty with their project XASPect: A benchtop X‐Ray Absorption Spectrometer.
High energy X-rays
XASPect was developed as part of Moniek Tromp's research at the UvA's research priority area Sustainable Chemistry, where she focuses on the characterization of catalysts that enable chemical conversions. To obtain crucial details on the way these catalyst work, Tromp needs to make use of high-energy X-ray beams. Since these are only available at a limited number of specialized European synchrotron research centres, this is quite a hassle. Beamtime is expensive and often researchers have to wait several months for a time slot (typically a few hours to a day) to perform their analysis.
To circumvent these drawbacks Tromp decided to develop her own laboratory X-ray spectrometer based on a commercial high power X-ray source. It has been up and running since Christmas 2016 and although not as powerful as the synchrotron X-ray analysis it offers a broad range of options and substantially speeds up research. Only very fast chemical reactions still require the trip to synchrotrons such as Diamond (UK), Soleil or ESRF (both in France).
Now Moniek Tromp, together with chemistry colleague Monalisa Goswami and business strategist Pokon Ganguli MBA are exploring the opportunities of turning XASPect into a business opportunity, offering benchtop X-ray spectrometry to other chemical and materials research laboratories.
An audience award of € 2.500,- is part of the competition. Until 20 June at 18:00 everybody can vote for their favourite project via the website. | physics |
http://qovelu.mihama.ru/hafnium-tungsten-dating-682.html | 2018-02-21T01:05:42 | s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813187.88/warc/CC-MAIN-20180221004620-20180221024620-00253.warc.gz | 0.931365 | 401 | CC-MAIN-2018-09 | webtext-fineweb__CC-MAIN-2018-09__0__28875594 | en | Hafnium tungsten dating dating site introduction line
Learn about the science behind the current exploration of the solar system in this free class.
Use principles from physics, chemistry, biology, and geology to understand the latest from Mars, comprehend the outer solar system, ponder planets outside our solar system, and search for habitability in our neighborhood and beyond.
A fountain of lava erupts from Hawaii's Kilauea Iki crater on Dec. Two rock samples from this eruption contain geochemical anomalies that could date back 4.5 billion years, shortly after the Earth first formed. Some geologists assume that this slow circulation would have wiped away any geochemical traces of Earth's early history long ago.
Earth's mantle is made of solid rock that nonetheless circulates slowly over millions of years.
In the case of tungsten, which has many isotopes, the important ratio is tungsten-182 to tungsten-184.
The heavier isotope, tungsten-184, is stable and has existed since the planet first formed. Tungsten-182, by contrast, was produced by the decay of hafnium-182, a now-extinct radioactive isotope with a half-life of roughly 9 million years, which is why the tungsten-182 to tungsten-184 ratio records events from only the first tens of millions of years of Earth's history.
Tungsten tends to associate with metals, so most of it migrated to Earth's core, while hafnium, which tends to associate with silicate minerals, stayed in Earth's mantle and crust.
Alternatively, the rocky outer surface of the Earth might have formed in patches, with vast magma oceans in between.
Parts of these magma oceans may have crystallized and sunk to the boundary between the mantle and the core, preserving the ancient tungsten and helium signatures.
But a new study led by University of Maryland geologists has found new evidence that could date back more than 4.5 billion years.
The authors of the research paper, published April 7 in the journal Science, studied volcanic rocks that recently erupted from volcanoes in Hawaii and Samoa.
"Nearly all of these anomalies formed within the first 50 million years after the solar system formed," Mundl said. | physics |
http://ontheroadschool.blogspot.com/2015/02/hatching-dinosaur-eggs.html | 2018-05-28T07:48:17 | s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794872114.89/warc/CC-MAIN-20180528072218-20180528092218-00315.warc.gz | 0.973767 | 161 | CC-MAIN-2018-22 | webtext-fineweb__CC-MAIN-2018-22__0__133268755 | en | He's learning that dinosaurs were reptiles, and reptiles hatch from eggs. He acts this out by putting himself inside a cardboard box, making squeaky sounds then "hatching out" of the box.
We also did an activity at a children's museum where kids "hatch" dinosaurs out of eggs.
Really it's just water frozen inside a balloon with a miniature plastic dinosaur stuffed inside. The balloon helps keep an egg-like shape in the freezer, then I just cut open the balloon and put the ball of ice in a bowl.
He uses salt and warm water to melt the ice away and collect the miniature dinosaur. Obviously this isn't really how an animal would hatch but it's fun for him. He's done it about a dozen times in the last few weeks at home. | physics |
https://www.atlanticeyeconsultants.com/services-and-patient-education/blog/2020/4/1/what-you-need-to-know-about-blue-light/ | 2021-10-19T03:17:46 | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585231.62/warc/CC-MAIN-20211019012407-20211019042407-00335.warc.gz | 0.94037 | 777 | CC-MAIN-2021-43 | webtext-fineweb__CC-MAIN-2021-43__0__204146162 | en | The light spectrum stretches far beyond what we can see – or what most people notice daily.
For example, ultraviolet radiation is an invisible form of light emitted by the sun. It has long been associated with cellular damage that can lead to certain kinds of cancer. Some things, like the sun, never change. But changes in technology can lead to differences in the type of light we’re exposed to every day. For example, consider the relatively recent phase-out of incandescent light bulbs around the world in favor of LED lights.
At the beginning of this change, many people noted that LED light was visibly different from the incandescent light they knew well. One crucial difference was the shift from mostly yellow light to a “purer” white light. Luckily, LED light bulbs are not known to be associated with health problems. But there is one modern form of light that has many medical researchers working overtime to understand its effects: It is blue light, the kind of light emitted by most computer and smartphone displays.
Understanding Blue Light: What It Is and Where It Comes From
Over the last two decades, blue light in the environment has skyrocketed.
When scientists talk about blue light, they're referring to high-energy light that sits right beside the invisible ultraviolet spectrum. This light has a wavelength just above 380 nanometers, at the short-wavelength end of the visible spectrum. In a focused form, it can be potent – and it appears in virtually all digital technologies. Small LEDs light modern digital displays. Unlike an LED light bulb, which allows light to diffuse over a large area, these lights create a focused field of light. Many people spend upwards of five hours a day with their eyes focused on this artificial "energy field."
Vision care specialists have long known that blue light can be toxic to eye structures in certain situations. The risk is unusually high for children and teens, whose eye lenses can absorb more light than those of adults. Reduced light transmittance with age provides some protection. However, blue light has never been as abundant in the environment as it is today. With that in mind, ophthalmologists and other eye experts are striving to inform patients about possible risks.
Blue Light Considerations for Health
Research shows two main issues arising from blue light exposure: Eye strain and sleep disorders.
Chronic blue light exposure raises the risk of eye strain and retinal damage. Over time, this can cause reductions in vision, even among younger patients. Several products can help, including screen covers that filter blue light and apps that turn blue light down during the day. Patients who wear glasses may opt for specialized lenses that block some light in the blue part of the spectrum.
The other issue involves sleep disturbance. The human body wakes up and falls asleep based on precise hormonal changes controlled by an internal clock, the circadian rhythm. The brain uses changes in light levels to remain situated in time. Because of its similarity to the dispersed light of the daytime – think of blue skies – blue light can disrupt this 24-hour cycle.
In non-medical parlance, blue light exposure makes the body “later.” This delay can enable people to stay up later without feeling an ordinary level of fatigue. Upon going to bed, it may take longer to fall asleep and enter the deeper, more restorative phases of the sleep cycle. Hormonal cycles and neurochemical processes associated with sleep diminish or disappear.
No one can avoid blue light exposure in the modern world. Still, it is essential to be mindful concerning exposure and your options. Dr. Delianides can help you take steps to reduce blue light exposure. Lifestyle changes are only one part of the equation. Suitable eyeglasses and other aids can also be helpful. Contact us today at Atlantic Eye Consultants for personalized advice from the eye care experts. | physics |
http://arabic.heavyduty-equipments.com/sale-8421904-improved-classification-efficiency-hydrocyclone-with-long-service-life.html | 2019-11-22T07:26:25 | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671245.92/warc/CC-MAIN-20191122065327-20191122093327-00216.warc.gz | 0.720435 | 335 | CC-MAIN-2019-47 | webtext-fineweb__CC-MAIN-2019-47__0__116092184 | en | شروط الدفع والشحن:
Processing capacity (m3/h): 128~300
Partition size (μm): 50~115
Diameter of overflow port (mm): 130~220
Diameter of dust-setting nozzle (mm): 35~100
Improved classification efficiency Hydrocyclone with long service life
Partition size (μm): 50~115
Diameter of overflow port (mm): 130~220
Diameter of dust-setting nozzle (mm): 35~100
The working principle of the SINOMTP hydrocyclone is centrifugal sedimentation: when a two-phase (or three-phase) mixture is fed into the hydrocyclone under pressure, it develops a strong three-dimensional elliptical rotational flow.
Because the particles differ in density, the centrifugal force, centripetal buoyancy and drag force acting on them also differ. As a result, most coarse particles (the heavy phase) are discharged through the cyclone's underflow outlet, while the fine particles (the light phase) leave through the overflow tube, achieving separation.
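To make the density argument concrete, here is a small Python sketch (my own illustration, not the manufacturer's sizing method) that applies Stokes' law with gravity replaced by the centrifugal acceleration inside the cyclone. All operating values are assumptions, and real hydrocyclones are sized with empirical correlations rather than this idealized model.

```python
# Illustrative only: Stokes settling in a centrifugal field.
# Assumed values; real hydrocyclones are sized with empirical correlations.

def radial_settling_velocity(d_particle, rho_particle, rho_fluid, mu, accel):
    """Stokes-law settling velocity (m/s) under acceleration `accel` (m/s^2)."""
    return (rho_particle - rho_fluid) * d_particle**2 * accel / (18.0 * mu)

# Hypothetical operating point
tangential_velocity = 5.0    # m/s, swirl speed inside the cyclone (assumed)
radius = 0.10                # m, radial position (assumed)
centrifugal_accel = tangential_velocity**2 / radius   # about 250 m/s^2, roughly 25 g

rho_water, mu_water = 1000.0, 1.0e-3   # kg/m^3, Pa*s

coarse = radial_settling_velocity(50e-6, 2700.0, rho_water, mu_water, centrifugal_accel)
fine   = radial_settling_velocity(5e-6, 2700.0, rho_water, mu_water, centrifugal_accel)

print(f"centrifugal acceleration: {centrifugal_accel:.0f} m/s^2")
print(f"50 um particle drifts outward at {coarse*1000:.1f} mm/s")
print(f"5 um particle drifts outward at {fine*1000:.2f} mm/s")
# Because the velocity scales with diameter squared, the coarse particle moves
# outward about 100x faster, reaches the wall and reports to the underflow,
# while the fine particle is carried inward to the overflow.
```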
1. Reduced turbulence of the bursting, divergent flow at the feed inlet gives the liquid a smooth movement inside the cyclone.
2. A rational length ratio between the cylindrical and conical sections and a reasonable insertion depth of the vortex finder.
3. Wear-resistant rubber liners prolong the service life by 2-4 times.
4. A unique involute feed eliminates disturbance, reduces wear and improves the classification efficiency | physics |
http://www.esmc2012.tugraz.at/ | 2017-04-27T05:09:53 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121869.65/warc/CC-MAIN-20170423031201-00344-ip-10-145-167-34.ec2.internal.warc.gz | 0.90229 | 341 | CC-MAIN-2017-17 | webtext-fineweb__CC-MAIN-2017-17__0__145541562 | en | Photos from the Conference
Download all photos in Photos_ESMC2012.zip
Young Researcher Awards
for the best oral presentations go to
Katia Bertoldi (abstract)
Francisco López Jiménez (abstract)
The first European Solid Mechanics Conference (ESMC) was held in Munich in 1991. This very successful conference initiated a triennial series with subsequent conferences held in Genova, Stockholm, Metz, Thessaloniki, Budapest and Lisbon. The 8th European Solid Mechanics Conference will take place at the Grazer Congress, under the auspices of EUROMECH, during July 9-13, 2012. The aim of the ESMC is to provide a forum for scientists and engineers to exchange ideas on the current state-of-the-art in the mechanics of solids, on new concepts and ideas and to identify important new directions for research.
We invite you to participate in this conference and to contribute to any topic of your scientific interest. The General (contributed) Sessions for this conference have been organized into seven main areas:
- Continuum Mechanics
- Material Mechanics
- Computational Mechanics
- Multifield Problems
- Structural Mechanics
- Experimental Mechanics
In addition, Mini-Symposia will be organized in a range of specialized topics.
Authors wishing to contribute to the conference are invited to submit a two-page abstract by November 30, 2011 (extended deadline is now January 8, 2012), and notification of acceptance will be given by January 15, 2012.
Gerhard A. Holzapfel
Ray W. Ogden
ESMC Committee Chairman | physics |
https://www.amazingwarehouseinc.com/collections/best-fabric-for-sound-panels | 2024-02-29T22:43:25 | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474853.43/warc/CC-MAIN-20240229202522-20240229232522-00533.warc.gz | 0.922812 | 1,724 | CC-MAIN-2024-10 | webtext-fineweb__CC-MAIN-2024-10__0__85808913 | en |
Unveiling the Sonic Elegance: Cotton Fabric for Optimal Sound Panels
Enter the fascinating field of acoustic design, where artistic accuracy and sound reverberation collide. Here at Amazing Warehouse Inc., your number one supplier of wholesale fabrics, we go deep into the nuances of sound panel materials and highlight why cotton stands out as the best option. The fabric that is wrapped over sound panels is more than just a covering in the orchestration of acoustic design; it is an essential conductor that affects the complex symphony of utility and aesthetic appeal.
Every note counts in this symphony, and every fabric choice crafts a special synthesis of sound dispersion and absorption. Cotton, a fabric that goes beyond the norm and offers a pleasing fusion of acoustic brilliance and timeless elegance, is at the center of this quest for audio excellence. Learn why cotton is the unquestionable material of choice for designing rooms where every sound is not simply heard but also experienced as we peel back the layers of this aural tapestry. Welcome to Amazing Warehouse Inc.'s curated world, where rich fabric and exceptional acoustics collide.
Importance of Fabric in Sound Panels
Within the complex field of sound engineering, cloth becomes more than a covering; it becomes a crucial and subtle element that significantly impacts sound panel performance. The way to appreciate fabric in this context is like realizing the conductor's function in an orchestra: it arranges the soundscape in a certain area.
Within the realm of sound panel construction, fabric plays an active role in influencing acoustic experiences rather than being a passive component. Its choice can influence how sound waves interact with the panels, which can have a profound effect on absorption, diffusion, and ultimately the room's overall acoustic performance. Each fiber and each texture acts as a brushstroke on the acoustic canvas, affecting how sound is perceived as well as heard.
We at Amazing Warehouse Inc. go beyond the idea that fabric is only a covering. We explore the art and science of acoustic design, realizing the significant influence that the appropriate fabric selection can have on a room's atmosphere and use. By promoting a deliberate approach, we hope to provide our clients the ability to create acoustic experiences that are in line with the distinct personality of their spaces in addition to selecting fabrics. It is a symphony in which cloth is the principal player, and we are here to assist you in crafting the ideal arrangement for your audio composition.
Cotton Fabric: The Ideal Choice
Highlighting Acoustic Properties:
Cotton's exceptional acoustic qualities have earned it a renowned position in the world of sound panel fabrics, making it the best option for individuals looking for a pleasing soundscape. Cotton, known for its innate capacity to absorb sound waves, serves more purposes than just being practical; it also makes a substantial contribution to the development of a refined and well-balanced acoustic environment.
Cotton's natural qualities distinguish it from synthetic substitutes, especially in its harmonious relationship with sound absorption. Cotton functions as a silent conductor, improving the overall efficacy of sound panels, in contrast to certain synthetic materials that could disturb the complex dance of sound waves. The capacity to enable unhindered sound absorption is essential for attaining a refined and engrossing auditory experience in various environments.
Because of its special acoustic qualities, cotton is a material of choice in settings where sound purity and precision are crucial. The inherent ability of cotton to absorb sound means that every note, whisper, and spoken word is captured with remarkable fidelity, whether in recording studios, home theaters, or auditoriums. Because of this, cotton is no longer only a fabric option; rather, it plays a significant role in the artistry of acoustics, which raises the caliber of sound panel performance.
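As a rough illustration of what added absorption does to a room, the sketch below applies the classic Sabine reverberation-time formula, RT60 = 0.161 V / A. The room dimensions and the absorption coefficient assumed for fabric-wrapped panels are placeholders, not measured values for any particular product.

```python
# Sabine estimate of reverberation time: RT60 = 0.161 * V / A  (V in m^3, A in m^2 sabins).
# All room dimensions and absorption coefficients below are assumptions for illustration.

def rt60(volume_m3, total_absorption_sabins):
    return 0.161 * volume_m3 / total_absorption_sabins

room_volume = 5.0 * 4.0 * 2.7      # m^3, small listening room (assumed)
base_absorption = 8.0              # m^2 sabins from walls, floor and furniture (assumed)

panel_area = 6.0                   # m^2 of fabric-wrapped panels (assumed)
panel_alpha = 0.8                  # assumed absorption coefficient of the panels

before = rt60(room_volume, base_absorption)
after = rt60(room_volume, base_absorption + panel_area * panel_alpha)

print(f"RT60 without panels: {before:.2f} s")
print(f"RT60 with panels:    {after:.2f} s")
# Adding absorptive area shortens the reverberation time, which is heard as a
# tighter, less echoey room.
```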
Emphasizing Natural Texture for Immersive Acoustic Harmony:
Beyond its amazing ability to absorb sound, cotton is important in acoustic design because of the intrinsic and priceless character of its natural texture. This characteristic is essential to sound dispersion and improves the overall acoustic experience in a variety of settings, from intimate home theaters to professional recording studios.
Cotton's Unique Textural Brilliance:
One notable characteristic that distinguishes cotton from other materials used to make sound panel fabrics is its inherent texture. Cotton, in contrast to synthetic textiles, has a distinct tactile texture that enhances the acoustic functionality of the material. Cotton fibers function as a diffuser in sound panels, softly dispersing sound waves that come into touch with the material.
Crucial Role in Sound Diffusion:
Keeping sound waves from reflecting straight back off hard surfaces is the fundamental idea behind sound diffusion. The inherent texture of cotton works well in this capacity because the fibers produce a soft, slightly uneven surface that breaks up the sound waves' straight path. Because of this intrinsic irregularity, sound waves are not only absorbed but also scattered in different directions, creating an acoustically balanced and immersive atmosphere.
Versatility Across Spaces:
Cotton's natural texture lends itself to adaptability, making it a dynamic solution for a variety of rooms with different acoustic requirements. Cotton's ability to diffuse sound aids in reducing echoes and creating an ideal recording atmosphere in recording studios, where accuracy and clarity are crucial. At the same time, cotton's diffusion helps create a more immersive and real soundstage in home theaters, where the aim is to create a cinematic experience.
Enhancing Immersion in Home Theaters:
The organic texture of cotton transforms into a quiet conductor arranging an aural symphony for fans of home theater systems. Cotton fabric ensures that every note, conversation, and sound effect is experienced as well as heard by dispersing sound waves. Richness of texture balances depth of sound to provide a harmonic blend of acoustics and aesthetics, transforming the listening experience into a holistic adventure.
Balancing Acoustic Elements:
To achieve a balanced acoustic profile, natural texture plays a critical role in sound diffusion. Cotton fibers evenly distribute sound waves, preventing sound waves from concentrating in one place. As a result of this fair distribution, the listener experiences the desired auditory landscape and the acoustic environment is more consistent and well-balanced, with no frequency taking center stage.
Durability and Longevity
Cotton's remarkable resilience in the context of sound panels is evidence of the natural fabric's strength. In contrast to several synthetic substitutes that may break down due to deterioration, cotton remains resilient in preserving its structural integrity throughout time. For sound panels, which are frequently used in settings where they are subject to constant use and sporadic physical touch, this innate robustness is essential.
Sound panels are often subjected to a variety of components in high-traffic venues like auditoriums, studios, or public spaces. Cotton's robust construction allows it to endure the demands of frequent use, guaranteeing the sound panels' continued efficacy as an acoustic medium. The vibrant ambiance of a conference room or the busy bustle of a recording studio—cotton fabric on sound panels stands strong against everyday adversity.
Furthermore, cotton's resilience adds to the sustainability of good management. The choice of fabric is crucial to the long-term efficacy of sound panels, which represent an investment in the creation of ideal acoustic circumstances. The sound panels' long-term worth to people who depend on them for excellent sound management is ensured by cotton's capacity to withstand repeated use without losing its acoustic qualities.
A meaningful tale is echoed by the melodic crescendo driven by cotton fabric in the acoustic design grand finale. Choosing the appropriate fabric for sound panels goes well beyond simple aesthetics and opens the door to a world in which sound becomes more than just an aural element; it becomes an immersive experience. With its remarkable acoustic properties, resilience, eye-catching aesthetic appeal, and eco-friendliness, cotton is the undisputed material of choice for individuals seeking the best possible acoustic balance.
As we come to a close of our investigation, think about how cotton's natural acoustic qualities combine with its long-lasting durability. It is about establishing an aural sanctuary where every sound, every note, is not just heard but deeply felt, not just about designing a visually appealing setting. We at Amazing Warehouse Inc. cordially encourage you to peruse our well chosen selection of wholesale cotton fabrics, where the organic beauty of cotton blends harmoniously with superior acoustics. The ageless elegance of cotton will enhance your soundscape since every note, every whisper, and every melody will be heard and integrated into the rich tapestry of your aural experience. | physics |
http://lehmanengineering.com/quiz/quiz2sol.html | 2018-09-25T03:12:43 | s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160923.61/warc/CC-MAIN-20180925024239-20180925044639-00545.warc.gz | 0.939853 | 695 | CC-MAIN-2018-39 | webtext-fineweb__CC-MAIN-2018-39__0__151017368 | en | Generator #2 will continue to rotate in the same direction as before. However, the generator now acts as a motor, and its current reverses.
Current now flows into Generator #2 to keep the rotor spinning in the same direction as before. Generator #2 becomes a user of power (i.e. a load to the power system albeit a motor operating at no load since nothing is now attached to the rotor) instead of a supplier of power.
Generator #1 remains unchanged, still supplying power to the bus. However, this generator must carry the full system load and its current increases and more torque must be provided by its prime mover.
It is interesting to note that if the prime mover were disconnected suddenly, all the load would immediately shift to Generator #1. This shifted load would normally cause Generator #1 to slow down, and if it is too large a load, the generator would fall completely out of synchronization.
What one normally wishes is for both generators to share the load according to their MVA ratings and for both generators to maintain the same synchronous 60 Hz speed. If it is necessary to remove Generator #2 from service, you would want to shift the load from one generator to the other in small increments.
Shifting of load is done by reducing the mechanical torque on the generator to be removed from service, and increasing the torque on the generator that is expected to pick up the load. This must be done while still regulating the synchronous speed of both generators.
When the load is shifting from Generator #2 to Generator #1, Generator #2 would want to speed up. By lowering the mechanical torque to Generator #2's rotor, the synchronous speed is maintained at 60 Hz and the load is thus decreased.
Meantime, the load that is shifted to Generator #1 would try to slow Generator #1 down. More mechanical torque has to be supplied to Generator #1's rotor to maintain the synchronous speed.
Remember: It is the mechanical torque of the prime mover that determines how much wattage each generator supplies to the load. When load increases, more torque must be supplied to the generator by the prime mover in order to maintain the 60 Hz speed of the generator.
(Note: The amount of vars the generator produces is determined by the strength of the excitation field that is supplied to the rotor's windings!).
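The quiz describes shifting load by manually adjusting prime-mover torque; in practice, governors automate the same idea through speed droop. The sketch below is not part of the original quiz and uses invented machine ratings; it simply shows how two paralleled generators with equal per-unit droop settle at one common frequency while sharing the load in proportion to their ratings.

```python
# Speed-droop load sharing between two paralleled generators (illustrative only;
# ratings, set points and droop values are assumptions, not from the quiz).

def share_load(units, total_load_mw):
    """Each unit obeys f = f_no_load - droop_hz_per_mw * P.
    At steady state every unit runs at the same frequency f, so
    P_i = (f_no_load_i - f) / droop_i and the outputs must sum to the load."""
    inv_r = [1.0 / u["droop_hz_per_mw"] for u in units]
    f = (sum(u["f_no_load"] * ir for u, ir in zip(units, inv_r)) - total_load_mw) / sum(inv_r)
    outputs = [(u["f_no_load"] - f) / u["droop_hz_per_mw"] for u in units]
    return f, outputs

# 5% droop on each machine's own rating, identical no-load set points (assumed)
gen1 = {"rating_mw": 100.0, "f_no_load": 61.5, "droop_hz_per_mw": 0.05 * 60 / 100.0}
gen2 = {"rating_mw": 50.0,  "f_no_load": 61.5, "droop_hz_per_mw": 0.05 * 60 / 50.0}

freq, (p1, p2) = share_load([gen1, gen2], total_load_mw=90.0)
print(f"system frequency: {freq:.3f} Hz, Gen1: {p1:.1f} MW, Gen2: {p2:.1f} MW")
# With equal per-unit droop, the machines pick up load in proportion to their
# ratings (here 2:1). Raising one governor's no-load set point shifts load onto
# that machine, which is the automated version of "adding torque" to it.
```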
The speed of the generator's rotor determines the frequency of the voltage sine wave (which in the USA is of course 60 Hz or 60 cycles per second). The rotor will actually spin at either 3600 RPM or 1800 RPM (depending on the number of poles with which it is designed). This rotational speed must be carefully monitored and the generator's governor (or whatever control system is used) must adjust the mechanical torque to ensure that these speeds are strictly adhered to.
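For reference, the 3600 and 1800 RPM figures follow directly from the synchronous-speed relation N = 120 f / p; the short sketch below is just that arithmetic.

```python
# Synchronous speed: N (RPM) = 120 * f / p, with f the line frequency and p the number of poles.
def synchronous_rpm(frequency_hz, poles):
    return 120.0 * frequency_hz / poles

for poles in (2, 4):
    print(f"{poles}-pole machine at 60 Hz: {synchronous_rpm(60, poles):.0f} RPM")
# 2 poles -> 3600 RPM, 4 poles -> 1800 RPM, matching the speeds quoted above.
```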
In fact, most clocks use the frequency of the power system to regulate their timing. If the power frequency decreases (i.e. the generator slows down), the clocks will also slow down. Speeding up the generator speeds up the clocks.
In practice, the generator's speed is constantly changing ever so slightly, going above and below the 60 Hz benchmark. The cumulative effect of the over speeds and the under speeds must average out to 60 Hz over time. This way a clock's time is maintained over a long period. | physics |
https://hollistongirlscout.wordpress.com/2017/06/14/free-girl-scout-workshop-from-acoustical-society-of-america/ | 2019-07-20T16:03:53 | s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526536.46/warc/CC-MAIN-20190720153215-20190720175215-00476.warc.gz | 0.910349 | 162 | CC-MAIN-2019-30 | webtext-fineweb__CC-MAIN-2019-30__0__11819079 | en | The Acoustical Society of America is pleased to offer a special one-time workshop.
Girl Scouts ages 12-18 are invited to explore the basic principles of acoustics and the wide variety of exciting careers that involve the science of sound! Enjoy hands-on experiments and learn about career opportunities with women scientists, engineers and professors from all over the country. Each Girl Scout will earn a patch and pizza will be served.
This event is free; however, space is limited.
When: 5:30 – 7:30 pm
Where: Hynes Convention Center
900 Boylston Street, Boston, MA 02115
E-mail [email protected] and register now! Please include your name, troop number, the number of girls attending, and contact information. | physics |
http://tyerwind.com/biomimicry.html | 2017-03-29T05:13:20 | s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190183.80/warc/CC-MAIN-20170322212950-00492-ip-10-233-31-227.ec2.internal.warc.gz | 0.948092 | 295 | CC-MAIN-2017-13 | webtext-fineweb__CC-MAIN-2017-13__0__91499298 | en | TYER wind is the result of a deep and a different observation of Nature. It is a remarkable innovation that effectively mimics the wings' motion of one of the most energetically-efficient flyers: The Hummingbird.
The TYER wind converter is a perfect illustration of Biomimicry, which is the practice of looking deeply into nature for solutions to engineering, design and other challenges. It is inspired by nature in order to better satisfy nature. The TYER design and its underlying kinematics reconcile outstanding technical performance with a high potential for integration into nature.
Scientists have tried to master nature by using mechanical engineering principles to analyze complex biological systems. Bird wing morphology has been studied over centuries in an attempt to mimic the flapping motion, but with no notable success. Designing the magical kinematics that perfectly mimics flapping wings has always been a human dream (Da Vinci's designs are a good illustration in this regard). We've just made it a reality.
Millions of years of natural selection have turned hummingbirds into some of the world's most energetically efficient flyers. They have a unique morphology and kinematics that allow them to flap their wings between 50 and 200 times a second in flight. Hummingbirds are the only group of birds with the ability to hover and to fly backwards. The figure-eight ("infinity in 3D") motion of hummingbird wings has intrigued many researchers who have struggled to mimic it. | physics |
http://a-sol.si/solar-edge-en/combining-pv-modules-from-different-manufacturers/ | 2023-09-23T21:34:22 | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506528.3/warc/CC-MAIN-20230923194908-20230923224908-00832.warc.gz | 0.882285 | 228 | CC-MAIN-2023-40 | webtext-fineweb__CC-MAIN-2023-40__0__6244157 | en | Installing power optimizers makes it possible for, photovoltaic modules with different technologies, powers and manufacturers to operate completely independently, each producing its own maximum power on the same inverter.
Power optimizers in the A-SOL solar system enable each photovoltaic module in the series string to operate at its own maximum power at any given moment. Traditional inverters track the maximum power point collectively for an array of modules. By taking a "one-size-fits-all" approach, traditional inverters settle for an averaged system output in which weaker modules hamper the output of stronger modules in the array.
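To see why module-level tracking matters, here is a simplified comparison (my own illustration, not A-SOL data). It assumes that without optimizers the series string is dragged down to the weakest module's current; bypass diodes, temperature effects and converter losses are ignored, and all module figures are invented.

```python
# Simplified comparison of string-level vs module-level MPPT.
# Assumed module data; ignores bypass diodes and conversion losses.

modules = [
    {"name": "A (unshaded)", "mpp_current_a": 9.0, "mpp_voltage_v": 33.0},
    {"name": "B (unshaded)", "mpp_current_a": 9.0, "mpp_voltage_v": 33.0},
    {"name": "C (shaded)",   "mpp_current_a": 4.5, "mpp_voltage_v": 31.0},
]

# Traditional string inverter: one operating current for the whole series string,
# limited here (crudely) by the weakest module.
string_current = min(m["mpp_current_a"] for m in modules)
string_power = string_current * sum(m["mpp_voltage_v"] for m in modules)

# Power optimizers: every module runs at its own maximum power point.
optimized_power = sum(m["mpp_current_a"] * m["mpp_voltage_v"] for m in modules)

print(f"string-level MPPT:  ~{string_power:.0f} W")
print(f"module-level MPPT:  ~{optimized_power:.0f} W")
# With one shaded module, the whole string sags toward the weak module's output;
# per-module optimization recovers the difference.
```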
The potential to add different modules is especially important when a particular type is no longer on the market and the array can no longer operate as intended, or when the capacity of an existing PV power plant needs to be increased.
With an A-SOL system, individual modules can be changed and the configuration can be added to and changed as necessary. The only limits are the minimum and maximum number of optimizers in a single string, the maximum power of each string and the maximum power of the inverter. | physics |
https://www.curiosityshift.org/news/2018/6/28/science-club-electricity | 2019-07-19T03:50:27 | s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525974.74/warc/CC-MAIN-20190719032721-20190719054721-00308.warc.gz | 0.964082 | 639 | CC-MAIN-2019-30 | webtext-fineweb__CC-MAIN-2019-30__0__177631776 | en | This month for Science Club we tackled electricity. Everyone was a bit nervous about what activity to bring. They said things like, "I don't understand electricity myself." "I'm afraid I won't be able to explain it well." and "I need to ask my husband for ideas." After much hard work and research, we came up with some exciting activities. And we all, even the adults, learned more about electricity through this challenge.
First, here's a link to the basic format for Science Club. It explains how we generally plan and run things.
Described below are the activities families brought for our electricity theme.
One family brought a homopolar motor. It's where a piece of copper metal spins because the current flowing through it experiences a force (the Lorentz force) in the magnetic field of the neodymium magnet. Watch this youtube video to see it in action. It's mesmerizing.
Here's an electrostatic demonstration where static electricity from the television moves the pop tab between the pop cans, making a clinking noise. The kids try to figure out how it starts and stops by grounding a wire and turning the television on and off. Here's a youtube video that shows it in action with a good written description as well.
Here, a family brought a Potato Clock Kit to show how a potato or lemon can power a clock. It's a great introduction to how batteries work.
Here's an activity where kids learn about conductors and insulators. There is a circuit with a light bulb in one box. The kids take items from the other box and use alligator clips to clip them to the circuit. If the bulb lights, they have a conductor. If it doesn't, they have an insulator.
This is a battery testing station. Many kids have used a battery tester before, but they enjoyed the challenge of sorting. We also discussed the positive and negative sides of a battery. Inside the binder is a diagram and explanation of how batteries work.
For this flashlight activity, kids could investigate the flashlight that is taken apart and see if they could get the bulb to light. We talked about completing the circuit and how the switch works. The wires and battery are taped and rubber banded to keep everything except the bulb together.
Next, there was a magnetism activity, which I only got a worthless blurry picture for. Kids could determine which items were magnetic, and they learned how magnets work. Magnetism & electricity go hand-in-hand with electromagnetism and motors, so this was a great foundational activity.
After everyone made their rounds with the electricity activities, I sat down with each family individually to let them try our Snap Circuits set. I didn't have them follow a set of directions. I just asked the kids what they wanted to power (a fan, a light, or a buzzer) and then how they wanted to power it (a rechargeable battery or a hand crank). After walking them through creating their circuit, I asked if they'd like to add a switch. So, it was kind of basic, but all the kids got a chance to make a circuit work. | physics |
https://www.bcbusiness.ca/industries/tech-science/Hot-Property-Terrella-Energy-Systems/ | 2024-02-23T16:30:24 | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474440.42/warc/CC-MAIN-20240223153350-20240223183350-00298.warc.gz | 0.940073 | 496 | CC-MAIN-2024-10 | webtext-fineweb__CC-MAIN-2024-10__0__60985474 | en | Terrella's innovative graphite manufacturing process arose from work that founder John Kenna did at fuel cell pioneer Ballard Power Systems
John Kenna began his career designing and making cool high-tech stuff, only to shift to the less sexy but equally vital world of manufacturing. In 1992, the American mechanical engineer joined Ballard Power Systems, which was putting Vancouver on the map as the global centre for hydrogen fuel cell technology. The pace of innovation was furious, he recalls. Kenna eventually headed a team looking into the use of corrosion-resistant graphite for bipolar plates; the business end of fuel cells, these components collect heat and manage water, among other functions. But making graphite-coated plates was slow and expensive. Tackling this problem became the mother of invention, prompting Kenna to leave Ballard in 2012 and launch Terrella Energy Systems Ltd. out of a small shop in Mission.
To Kenna, graphite is a miracle material—malleable, conductive and impervious to corrosion. For the past four years, he’s been fine-tuning roll-embossing technology that can bulk produce graphite-embossed bipolar plates at a much lower cost than traditional stamping presses. (According to Kenna, one fuel cell has roughly 900 plates, and roll-embossing churns out a plate every three seconds, compared to one every 20 seconds for a stamp.) But most important, this method has applications elsewhere in the cleantech realm, in particular for heat exchangers and heat sinks (devices that absorb excessive or unwanted heat). “I knew that I’d have a tough time getting people excited about investing in fuel cell technology,” Kenna says.
Terrella is a lean outfit, with three full-time employees, but Kenna is leveraging a research partnership through SFU’s Laboratory for Alternative Energy Conversion. Headed by engineering professor Majid Bahrami, the lab secured $700,000 in Natural Sciences and Engineering Research Council of Canada (NSERC) funding and united Terrella with Burnaby-based telecom and electronics manufacturer Alpha Technologies Ltd. and Vancouver’s Westport Innovations, a specialist in natural-gas engines and vehicles, to explore the estimated US$40-billion market for graphite thermal management products. Although Terrella has orders for graphite bipolar plates, Kenna believes thermal will be the company’s hot ticket. | physics |
http://howimetyourmotherboard.com/what-to-know-about-laser-diffraction-for-particle-size-analysis/ | 2017-04-25T18:31:35 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120844.10/warc/CC-MAIN-20170423031200-00025-ip-10-145-167-34.ec2.internal.warc.gz | 0.915901 | 479 | CC-MAIN-2017-17 | webtext-fineweb__CC-MAIN-2017-17__0__156382740 | en | What to Know About Laser Diffraction for Particle Size Analysis
When it comes to particle size analysis, laser diffraction is one of the most common methods. Here is what you should know about this technique.
Why Particle Size Matters
The size of the particles making up a material impacts how a product behaves under certain conditions. Measuring particle size can help predict certain chemical reactions, determine the material's density, avoid sedimentation, and gauge the product's ability to dissolve. Especially for pharmaceuticals, knowing these properties will help determine how drugs will interact with the body.
How Laser Diffraction Works
This technique measures particle size by shining a laser at a sample. When the light beam is scattered by a group of particles, the angle of the scattered light indicates the particle size. This is because the light's diffraction angle is inversely proportional to the particle size, so a larger angle indicates a smaller particle. This light diffraction is also called "edge diffraction" because it occurs at the edge of a particle.
How Particle Size is Calculated
Again, smaller particles will diffract the laser at higher angles and vice versa. The intensity of the light is also taken into account, as higher-intensity diffraction indicates larger particles. Once these measurements are collected, the particle size distribution is calculated using an algorithm based on Mie scattering theory.
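As a back-of-the-envelope illustration of that inverse relationship (my own sketch, not the algorithm used in any particular analyzer), the Fraunhofer approximation places the first diffraction minimum for a particle of diameter d where sin(theta) = 1.22 * lambda / d. Commercial instruments fit the full scattering pattern with Mie theory, which also accounts for the particle's optical properties.

```python
import math

# Fraunhofer approximation: first diffraction minimum at sin(theta) = 1.22 * lambda / d.
# Illustrative only; commercial analyzers invert the full Mie scattering pattern.

wavelength = 633e-9  # m, red HeNe laser line often used in these instruments

for diameter in (100e-6, 10e-6, 1e-6):  # 100 um, 10 um and 1 um particles
    s = 1.22 * wavelength / diameter
    theta = math.degrees(math.asin(min(s, 1.0)))
    print(f"d = {diameter*1e6:5.1f} um  ->  first minimum near {theta:5.2f} degrees")
# Smaller particles push the diffraction pattern out to larger angles,
# which is exactly the inverse relationship described above.
```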
How to Ensure Accuracy
The accuracy of laser diffraction testing is determined by the laser diffraction particle size analyzer itself. Quality engineering and meticulous upkeep of the machine will ensure the best possible results. Of course, the best analysis will also come from the person, or nanoparticle measuring service, making the calculations.
Other Methods for Particle Size Analysis
While laser diffraction is one of the most common methods for measuring particle size, there are also other possible techniques such as Taylor dispersion analysis, nanoparticle tracking analysis, and spatial filter velocimetry. Laser diffraction is beneficial because it encompasses a wider range of possible particle sizes than other methods.
Laser diffraction is an accurate, dependable method for measuring the size of a particle. This measurement, acting as a determinant of many material properties, is an essential indicator of the chemical and physical properties of a product. As meticulous methodology is essential, this form of testing is best performed in an analytical testing laboratory by industry professionals. | physics |
https://wearablecomputingreport.wordpress.com/2015/02/27/aerographite-six-times-lighter-than-air-conductive-and-super-strong/ | 2018-08-20T23:50:50 | s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221217901.91/warc/CC-MAIN-20180820234831-20180821014831-00578.warc.gz | 0.92517 | 192 | CC-MAIN-2018-34 | webtext-fineweb__CC-MAIN-2018-34__0__76479011 | en | German material scientists from Kiel University and the Hamburg University of Technology have created the world’s lightest material, dubbed aerographite. One cubic centimeter of aerographite weighs just 0.2 milligrams, which is four times lighter than the previous record holder, 5,000 times less dense than water, and six times lighter than air.
Aerographite is a mesh of carbon tubes, each around 15 nm in diameter, interwoven at the micro- and nano-scale level. It is electrically conductive, ductile, jet black (non-transparent), and can withstand high compression and tensile loads. Aerographite can be compressed to a 30th of its original size, gaining extra strength and conductivity in the process, and spring back without any damage to its structure — or it can carry up to 40,000 times its own weight. | physics |
http://www.krtv.com/news/small-earthquake-rumbles-near-lincoln/ | 2015-01-29T06:16:10 | s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115855845.27/warc/CC-MAIN-20150124161055-00206-ip-10-180-212-252.ec2.internal.warc.gz | 0.959126 | 361 | CC-MAIN-2015-06 | webtext-fineweb__CC-MAIN-2015-06__0__29851353 | en | Apr 23, 2014 2:21 PM by Dennis Bragg - Missoula
MISSOULA -- The U.S. Geological Survey says a small earthquake near Lincoln on Wednesday morning registered a magnitude of 3.5.
The Montana Bureau of Mines & Geology in Butte recorded the quake at 8:41 this morning about four miles east-southeast of Lincoln.
That puts the epicenter about 35 miles northwest of Helena.
The quake was at a moderate depth of 9.9 miles beneath the surface.
Geologists are still checking into the quake but say it may have been felt around Helena and as far west as Seeley Lake.
There have been no reports of damage or injuries.
Montana is one of the most seismically-active states; the vast majority of earthquakes are minor, primarily of interest only to geologists and researchers.
But occasionally, Montana earthquakes can be larger - and deadly. In 1935 there was a series of earthquakes between Yellowstone and Helena that killed four people and caused more than $4 million worth of damage. The 1959 "Hebgen Lake" earthquake in Montana killed 28 people and caused more than $11 million worth of damage.
According to Michael Stickney, director of the Earthquake Studies Office at the Montana Bureau of Mines & Geology, four to five small earthquakes happen in the Treasure State every day, but most can't be felt.
Stickney said, "There is a seismic belt, known as the Intermountain Seismic Belt, that passes through the western one third of the state. Small earthquakes are very common within this zone, which runs more or less from Yellowstone Park up to about Flathead Lake."
Click here for more information at the USGS website. | physics |
https://furrytoystours.com/ultrasonic-welding-near-me/ | 2024-04-18T00:54:53 | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817184.35/warc/CC-MAIN-20240417235906-20240418025906-00636.warc.gz | 0.934539 | 325 | CC-MAIN-2024-18 | webtext-fineweb__CC-MAIN-2024-18__0__20583940 | en | Ultrasonic welding near me is a plastic fabrication process that produces strong, reliable bonds. It’s a clean, automated alternative to adhesive and mechanical joining methods, which require curing times and messy cleanups. It works with a variety of materials, including acrylic, acrylonitrile butadiene styrene (ABS), high impact polystyrene (HIPS), nylon, polycarbonate and polypropylene. It also has the ability to assemble and weld thermoplastic to thermoset prepreg composites and metals.
A typical ultrasonic welder consists of three core components: the converter, booster and horn. The converter takes standard electrical power at 60 Hz and converts it to an operating frequency of up to 20 kHz, which is then fed through the booster to the horn. The horn then transmits the mechanical vibration to the parts that are to be welded. The vibration and mechanical pressure cause the matrix system of the part to melt and flow together, forming a molecular bond.
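A rough way to reason about the stack is that the booster and horn each scale the vibration amplitude by a gain ratio, and the energy put into the joint is roughly average power multiplied by weld time. The sketch below uses invented figures purely to illustrate that bookkeeping; it is not data for any particular welder.

```python
# Illustrative ultrasonic stack arithmetic; all numbers are assumptions.

converter_amplitude_um = 10.0   # vibration amplitude at the converter output (assumed)
booster_gain = 1.5              # assumed booster ratio
horn_gain = 2.0                 # assumed horn ratio

face_amplitude_um = converter_amplitude_um * booster_gain * horn_gain
print(f"amplitude at the horn face: ~{face_amplitude_um:.0f} um")

# Energy delivered to the joint is roughly average power times weld time.
weld_power_w = 800.0            # assumed average power during the weld
weld_time_s = 0.35              # assumed weld time
print(f"approximate weld energy: {weld_power_w * weld_time_s:.0f} J")
# Reducing weld time or force (as suggested for "marking" below) reduces this
# energy and the heating that spreads beyond the joint.
```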
There are a number of common troubleshooting issues that can arise during an ultrasonic welding project. Some may be solved by adjusting the settings, while others will require the services of an experienced repair technician.
Marking is a common problem that occurs when the horn of an ultrasonic welder heats up too much, dispersing energy beyond the area being welded. It can be resolved by reducing the weld time, trigger force and down speed to reduce the amount of energy being applied. In some cases, a redesign of the part to reduce localized high spots may be needed as well. | physics |
https://www.bigislandvideonews.com/2012/10/19/unprecedented-changes-on-jupiter-observed/ | 2023-12-06T21:38:08 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100603.33/warc/CC-MAIN-20231206194439-20231206224439-00217.warc.gz | 0.914217 | 1,021 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__290486944 | en | Media release from National Astronomical Observatory of Japan
Dr. Glenn Orton (Jet Propulsion Laboratory, National Aeronautics and Space Administration [NASA], USA), lead of an international team of astronomers observing changes in Jupiter (Note), presented some of the team’s new research results at the American Astronomical Society’s Division for Planetary Sciences Meeting in Reno, Nevada on October 17, 2012. Their data show a planet undergoing dramatic changes: continual peppering with small space rocks, wide belts of color change in its atmosphere, vanishing and reappearing hotspots, and clouds gathering over one area while dissipating over another.
These researchers have been making infrared observations of Jupiter since 2009 and comparing them with high-quality optical images taken by amateur astronomers. From 2009 to 2011, they studied the fading and darkening of the South Equatorial Belt (SEB), a prominent brown-colored belt just south of the equator, as well as similar fading and darkening of the North Equatorial Belt (NEB), a band just north of the equator. In 2011 the northern belt grew white over a wider extent than it had been in over a century; in 2012, it began to darken again.
Using the Cooled Mid-Infrared Camera and Spectrograph (COMICS) on the 8.2 meter Subaru Telescope and mid- and near-infrared instruments on NASA’s 3 meter Infrared Telescope Facility (IRTF), both atop Mauna Kea on the Island of Hawaii, the researchers matched up their infrared observations of changes in cloud thickness of several of Jupiter’s bands with the changes in their brightness and color. Their data revealed more detailed information about the features they had previously observed. Although the deeper cloud decks of the North Equatorial Belt thickened simultaneously, they did not thicken and clear up like those in the South Equatorial Belt; there was only a partial thickening of the upper cloud deck of the northern belt. The recent infrared data also resolved distinct features called “brown barges”, brown, elongated features in the newly whitened area of the usually dark North Equatorial Belt, which are more cloud-free regions characterized by downwelling, dry air. Finally, a series of blue-gray features, which appear to be the clearest and driest regions on Jupiter, show up as apparent hotspots in the infrared view. The infrared observations reveal radiation emerging from a very deep layer of Jupiter’s atmosphere. Although the hotspots disappeared from 2010 to 2011, they reappeared in June of 2012 and coincided with the whitening and re-darkening of the North Equatorial Belt. As Jupiter’s atmosphere has been churning through changes, amateur Jupiter-watchers on Earth have observed fireballs created by objects hurtling into Jupiter’s atmosphere. However, the researchers’ infrared investigations showed that the most recent object, probably less than 15 meters (45 feet) in diameter, did not cause lasting changes in the atmosphere.
Orton commented on the significance of the observed changes: "The changes we're seeing in Jupiter are global in scale. We've seen some of these before, but never with modern instrumentation to clue us in on what's going on. Other changes haven't been seen in decades, and some regions have never been in the state they're appearing in now. At the same time, we've never seen so many things striking Jupiter. Right now we're trying to figure out why this is all happening." In any case, Jupiter, a mythical Roman sky god associated with thunder and lightning, would certainly be pleased with all of the changes afoot on his namesake planet.
Astronomers on the current team are:
- Glenn Orton, Jet Propulsion Laboratory, NASA, California, USA
- Leigh Fletcher, University of Oxford, England
- Padma Yanamandra-Fisher, Space Science Institute, Colorado, USA
- Thomas Greathouse, Southwest Research Institute, Texas, USA
- Takuya Fujiyoshi, Subaru Telescope (National Astronomical Observatory of Japan), Hawaii, USA
Figure (above): Global Upheaval on Jupiter
Comparison of the images in the visible-light (left) and infrared (middle and right) parts of the spectrum from 2009 to 2012 highlights the massive changes affecting the atmosphere of Jupiter.
Credit for individual images in the composite:
- Left: Visible light images from A. Wesley (2009 and 2010), A. Kazemoto (2011) and C. Go (2012)
- Middle: Infrared images from IRTF (JPL-Caltech, NASA)
- Right: 8.7 micron wavelength images from IRTF (JPL-Caltech, NASA) (2009, 2010, 2012) and Subaru Telescope (National Astronomical Observatory of Japan) (2011)
Credit for composite creation: JPL-Caltech, NASA | physics |
http://www.airtreatment.net/faqs/ | 2018-03-20T11:25:22 | s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647406.46/warc/CC-MAIN-20180320111412-20180320131412-00424.warc.gz | 0.930921 | 3,134 | CC-MAIN-2018-13 | webtext-fineweb__CC-MAIN-2018-13__0__41056329 | en | What is Radon?
Radon is a radioactive gas found naturally in the environment. It is produced by the decay of uranium found in soil, rock or water. Radon is invisible, odourless and tasteless and emits ionizing radiation. As a gas, radon can move freely through the soil enabling it to escape into the atmosphere or seep into buildings. When radon escapes from bedrock into outdoor air, it is diluted to such low concentrations that it poses a negligible threat to health. However, if a building is built over bedrock or soil that contains uranium, radon gas can be released into the building through cracks in foundation walls or, floors, or gaps around pipes and cables.
When radon is confined to enclosed or poorly ventilated spaces, it can accumulate to high levels. Radon levels are generally highest in basements and crawl spaces because these areas are nearest to the source and are usually poorly ventilated.
How Often Should I Change the Air Filter in my System?
Check it at least every month during peak use, and replace it when it looks dirty enough to significantly impair the air flow through it. Some filters, such as media filters or electronic air cleaners, are washable; others are disposable and must be replaced.
What is the Best Type of System to Meet all Indoor Comfort Needs?
The best system depends on many variables, including family size, house location and design, and utility cost and availability. The optimum indoor comfort system might include high efficiency central air conditioning and heating, a high-efficiency air cleaner, and a central humidifier.
Condensing Gas Boilers
Condensing gas boilers employ either an aspirating burner with an induced draft fan, or a power burner, similar to the units described previously. However, they have an additional heat exchanger made of corrosion resistant materials (usually stainless steel) that extracts latent heat remaining in the combustion by-products by condensing the combustion products before they are exhausted. A chimney is not needed, reducing the cost of installation. Because the flue gas temperature is low, the gases are vented through a plastic pipe out the side wall of the house.
A condensing boiler can have an AFUE rating of 90% or higher. But in practice, condensing boilers in hydronic (hot water) heating systems can have difficulty achieving this efficiency. For the condensing boiler’s heat exchanger to extract all the potential latent heat effectively, the system has to run with the lowest possible return water temperatures, preferably not exceeding 45–50°C (113–122°F). Unfortunately, most radiator systems are designed to operate at significantly higher return water temperatures, which makes it difficult for the flue gas to condense. If the return water temperature is too high, actual operating efficiency may be only slightly higher than that of the better models of non-condensing boilers.
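A rough way to compare ratings: for the same heat delivered to the house, fuel use scales inversely with AFUE. The sketch below uses a made-up annual fuel bill to show the arithmetic; actual savings depend on climate, fuel prices and how the system is operated.

```python
# Rough AFUE comparison: fuel needed scales as (delivered heat) / AFUE,
# so an upgrade changes fuel use by the ratio AFUE_old / AFUE_new.
# The $1,500 annual fuel cost is an assumed example, not a typical bill.

old_afue = 0.80
new_afue = 0.95
annual_fuel_cost_old = 1500.0  # dollars per year with the old appliance (assumed)

annual_fuel_cost_new = annual_fuel_cost_old * (old_afue / new_afue)
print(f"estimated cost with the new appliance: ${annual_fuel_cost_new:.0f}")
print(f"estimated yearly saving: ${annual_fuel_cost_old - annual_fuel_cost_new:.0f}")
```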
Non-condensing Gas Boilers
Residential gas boilers sold in Canada today are required to have an AFUE rating of at least 80%. ENERGY STAR qualified boilers must have an AFUE rating of at least 85%. The following are some ways manufacturers have improved efficiency levels:
- Elimination of continuous pilot lights. Most boilers on the market today use some form of intermittent ignition device, usually electronic ignition.
- Improved insulation levels. Because boilers store more heat internally than warm air furnaces do, they are subject to greater heat losses, both out through their casing (sides) and up the chimney when they are not being fired. To reduce heat lost from casings, new boilers have much better insulation to keep the boiler water hot.
- Better draft control methods to reduce flue losses. Many boilers use draft hoods. The draft hood is located downstream of the boiler proper. It draws household air into the gas vent along with the flue gases. This stabilizes the airflow through the appliance, isolating the burner from outside pressure fluctuations. But it also continuously draws heat from the boiler and warm household air up the chimney. A vent damper is now usually installed downstream of the draft hood to close off the exhaust when the burner is not operating. When the gas burner turns off, the damper is closed automatically after a short period; before the burner lights again, the damper opens.
Other boilers that use aspirating gas burners have eliminated the need for a draft hood entirely by using a powered exhaust system, usually incorporating an induced draft fan. With no dilution air, high resistance to spillage during the on cycle, and minimal flow up the stack during the off cycle, these units tend to give superior performance to those using draft hoods and vent dampers.
Today, many gas boilers have replaced the naturally aspirating gas burner with a power burner. These use a fan on the burner to improve the combustion process and ensure the development and maintenance of an adequate draft. These burners, similar to ones used in advanced oil-fired equipment, tend to have a high-pressure restriction or even close off the combustion air passage when the burner is not operating. This minimizes off-cycle heat losses without requiring a flue damper. Such units minimize dilution air, or have sealed combustion, and have performance characteristics similar to or better than the aspirating burner with a powered exhaust system.
On What Heating and Cooling Products Would I Find the EnerGuide Rating?
The EnerGuide Rating label on heating and cooling products sold in Canada can be found at the back of product brochures for: gas and propane furnaces residential air conditioning systems air-to-air heat pumps.
Can Homeowners Repair their Own Air Conditioners?
In most cases, definitely not. Cooling systems today are more complicated to service and usually require expert attention in order to comply with federal regulations, such as the Clean Air Act which prohibits releasing refrigerants into the atmosphere. An EPA-certified air conditioning contractor or service technician should be called at the first sign of trouble.
What is the Average Life of a Central Air Conditioning System?
It can vary, depending on how much the system is used and how regularly it is checked or serviced. Generally, the average life of cooling units built in the 1970s and 1980s is about 15 years, but individual units may vary and last much longer, depending on use and how well they are maintained. Heat pumps have about the same life span– an ARI survey showed average heat pump life to be about 14 years when recommended maintenance procedures were followed. Newer units are expected to last even longer.
Higher Energy Efficient Furnaces and Air Conditioning Systems Often Cost More to Buy
Why Would I Want to Buy One?
Buying a high-efficiency furnace, heat pump or air conditioner is an economically and environmentally responsible decision. Equipment with high energy efficiency ratings:
- uses less energy, which helps conserve non-renewable resources and contributes to reducing greenhouse gas emissions;
- accumulates savings over its lifetime from lower energy use;
- has other advantages: it can cost less to operate and has more efficient motors and fans than standard-efficiency systems;
- sometimes has a longer and more comprehensive warranty.
How are EnerGuide Ratings Determined and who decides what number goes on the rating label?
Like the EnerGuide appliance ratings, the numbers are the results of product testing using energy standards specified by Natural Resources Canada in the Regulations of Canada’s Energy Efficiency Act, and then verified by agencies such as the Canadian Standards Association (CSA).
Why is the EnerGuide Rating for Heating and Cooling Products on the Back of Manufacturer’s Brochures?
Unlike major household appliances, which are usually purchased after the customer has personally examined the various models on the retail floor, furnaces, heat pumps and air conditioning systems are usually sold from brochures or product literature. Therefore, the brochure is the most suitable place to help consumers looking for energy efficiency ratings.
What Does the ENERGY STAR® Logo Mean?
The ENERGY STAR® logo, found on packaging, literature, product advertising and in some cases, products themselves, means that the products are significantly more energy efficient than required under current federal standards. For example, central air conditioning systems with ENERGY STAR® endorsed logos exceed existing federal standards by a minimum of 20 percent, and furnaces with the logo exceed minimum standards by 15 percent. This means that the products have a higher level of energy efficiency than standard products found on the market today.
By choosing ENERGY STAR® qualifying products, homeowners can use energy more efficiently, save money on utility bills, help make their homes more comfortable and reduce air pollution without sacrificing the features, versatility or style that they expect from high-performing products.
How Much Humidity Should You Have in Your Home?
Humidity levels above 20 percent help prevent dry, sore throats and make the air feel warmer and more comfortable. Moist air also eliminates static electricity in the house and helps to protect plants and preserve your furniture.
On the other hand, humidity levels over 40 percent can cause frosting and fogging of windows, staining of walls and ceilings, peeling paint, mould growth and odors. When relative humidity is over 50 percent, airborne diseases become more difficult to control. Condensation on your windows can provide a good indication of the relative humidity. You may, however, want to install a humidity sensor or humidistat to keep more accurate measurements of humidity levels.
How Do I Keep My Housing Structure Dry?
Use four strategies to keep the structure dry:
- Provide exterior weather and moisture protection. Use building paper, siding, flashing, gutters and other construction techniques to shed water and repel wind-driven rain. Pay attention as well to below-grade measures. Proper drainage, grade slope and damp-proofing can protect the foundation from ground-water leaks or from moisture movement by capillary action.
- Reduce moisture at the source. This means producing less moisture in the first place and exhausting moist air and bringing in drier air.
- Prevent moist indoor air from getting into the envelope. A vapour barrier will reduce moisture movement by diffusion, and an air barrier can prevent moisture movement by air leakage. Although less moisture can be moved into the envelope by vapour diffusion than by air leakage, it is still important to provide a vapour barrier. An effective vapour barrier must be:
- resistant to vapour diffusion
- installed on the warm side of the insulation
A number of building materials resist vapour diffusion well enough to be used as vapour barriers. These include polyethylene, oil-based paints and special vapour-barrier paints, some insulation materials and exterior-grade plywood. Different materials may act as the vapour barrier in different parts of the house.
The same material may work as both an air barrier and a vapour barrier, provided it meets both requirements and is properly installed. Polyethylene sheets and foil-backed gypsum drywall can both combine these functions. To avoid confusion of terms, we refer to a material doing both jobs as an air and vapour barrier.
As a general rule, the vapour barrier should be on the warm side of the insulation. In some cases, however, the vapour barrier can be located within the wall or ceiling assembly, provided that at least two thirds of the insulation value of the wall is on the cold side of the vapour barrier. Because this ratio should be adjusted for houses with high interior humidity or for homes in extremely cold climates, it is recommended that you consult a professional builder-renovator, who will apply the specifications outlined in the National Building Code of Canada.
- Let the envelope “breathe” to the outside.This will allow the house to deal with seasonal fluctuations in humidity and to release any moisture that does penetrate the envelope from the interior or exterior. The materials of the envelope are layered, with those most resistant to vapour diffusion located on the warm side of the envelope and the least resistant (such as building paper) located on the outside. In this way, any vapour that penetrates the envelope can escape to the outside.Some wall systems work well with a relatively impermeable insulated sheathing because the interior wall-cavity temperatures are kept high. As a precaution, when retrofitting a wall, always ensure that the interior surfaces are vapour-resistant.Some siding applications have an air space immediately behind the exterior finish to promote drying out of materials that have been soaked by rain or dampness. This air space also provides an escape route for any moisture that has penetrated the wall cavity from the indoors. This type of installation should not be used with insulated siding, as convection in the air space will negate the effect of the insulated backer board on the siding.
What are Sources of Moisture in the Home?
Even if your house has no leaks in the basement or roof and is apparently dry, it can have moisture problems. Where does all the moisture come from? There are a number of major sources that are not always obvious:
- Occupants and their activities: An average family of four will generate about 63 litres (about 14 gallons) of water a week through normal household activities.
- Wind-blown rain in walls
- Damp basements: Where basement damp-proofing is inadequate, ground water in the soil can migrate through the foundation by capillary action and evaporate on the surface of the wall or floor.
- Moisture stored in building materials and furnishings: Building materials and furnishings absorb moisture from the air during damp, humid weather and then expel it during the heating season.
Despite all this water produced each day, most older houses have “dry” air in winter to the point where they have to have humidifiers installed. Why?
Cold outdoor air cannot carry much water vapour. In older homes, uncontrolled airflow brings colder, drier air indoors and forces the warm, moist household air out through openings in the upper walls and attic. The air quickly escapes through the un-insulated envelope without cooling down enough to cause condensation.
When insulation is added, the building exterior becomes much colder. Unless additional protection is provided, water can condense in the building structure.
How? Remember that cold air is able to hold much less moisture than warm air. As the warm, moist air cools in the cold outer layers of the building, the water vapour it holds may condense as liquid or, if it is cold enough, as frost. This can reduce the effectiveness of insulation and even cause rot, peeling paint, buckled siding, mould growth and other problems.
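To put numbers on "cold air holds less moisture", the sketch below uses a Magnus-type approximation for saturation vapour pressure. The coefficients vary slightly between references and the surface temperatures are assumed, so treat it as an illustration of the principle, not a design calculation.

```python
import math

# Magnus-type approximation for saturation vapour pressure over water (hPa).
# Coefficients vary slightly between references; this is illustrative only.
def saturation_vapour_pressure_hpa(temp_c):
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

indoor = saturation_vapour_pressure_hpa(21.0)    # warm room air
surface = saturation_vapour_pressure_hpa(5.0)    # assumed cold surface inside the wall or on glass
outdoor = saturation_vapour_pressure_hpa(-10.0)  # winter outdoor air

print(f"saturation pressure at 21 C:  {indoor:.1f} hPa")
print(f"saturation pressure at 5 C:   {surface:.1f} hPa")
print(f"saturation pressure at -10 C: {outdoor:.1f} hPa")

# Room air at 21 C and 40% relative humidity carries a vapour pressure of:
room_vapour_pressure = 0.40 * indoor
print(f"room air vapour pressure:     {room_vapour_pressure:.1f} hPa")
# That exceeds the saturation pressure of the 5 C surface, so moisture condenses
# there, which is the same reason warm, moist household air condenses inside a
# cold wall or attic assembly.
```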
Why You Should Control Moisture Flow?
We must control moisture in all its forms to keep our homes durable and comfortable. Building components and practices such as flashing, roofing and basement damp-proofing successfully protect the home from liquid water.
It is equally important to control the movement of water vapour, providing added protection for the house structure and helping to maintain indoor humidity at a comfortable level.
Controlling moisture involves three strategies:
- using construction techniques that keep moisture away from the structure
- producing less moisture
- exhausting excess moisture | physics |
https://www.prgasket.com/waterjet-cutting | 2019-08-25T16:06:58 | s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330750.45/warc/CC-MAIN-20190825151521-20190825173521-00439.warc.gz | 0.942108 | 179 | CC-MAIN-2019-35 | webtext-fineweb__CC-MAIN-2019-35__0__112442512 | en | Cutting material with ultra-high pressure water is an involved process that starts with the same water which comes from our faucet. We soften and purify the water before it even reaches the pump. The FLOW® intensifier shifts back and forth to intake the water which is again filtered by a 1 micron and a .45 micron filter. As the intensifier shifts, it converts 3000 PSI of hydraulic pressure to 60,000 PSI water pressure. That water moves through high pressure tubing to the cutting head. The cutting head contains a valve that shuts the flow off or on to allow the water to be forced thru a jewel orifice to the material. Waterjet cutting isn't actually cutting; it's technically eroding. The result is an extremely effective process acclaimed for speed, accuracy, and cost effectiveness. We invite you to experience the benefits of this remarkable system. | physics |
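The 3,000 PSI to 60,000 PSI step quoted above is the intensifier's area ratio at work: the hydraulic piston has roughly 20 times the area of the water plunger, so the pressure is multiplied by about 20. The sketch below shows that force balance with assumed areas; it is not a specification of the FLOW intensifier.

```python
# Pressure intensification by area ratio (illustrative; areas are assumed).
hydraulic_pressure_psi = 3000.0
piston_area_in2 = 20.0   # hydraulic piston area (assumed)
plunger_area_in2 = 1.0   # water plunger area (assumed)

water_pressure_psi = hydraulic_pressure_psi * (piston_area_in2 / plunger_area_in2)
print(f"output water pressure: ~{water_pressure_psi:.0f} psi")
# Force balance: the same force acts on both ends of the piston-plunger assembly,
# so the pressure scales inversely with the area it acts on.
```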
https://news.bostonscientific.com/DirectSense-Inspiration | 2023-09-24T09:56:04 | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506632.31/warc/CC-MAIN-20230924091344-20230924121344-00855.warc.gz | 0.930753 | 508 | CC-MAIN-2023-40 | webtext-fineweb__CC-MAIN-2023-40__0__279246045 | en | Inspiration from the ocean depths
The path to innovation is rarely a straight line. It’s interspersed with stops and starts, failures and frustrations. And it takes a spark of inspiration to find the way to invention.
For the team of scientists and engineers who developed the DIRECTSENSE™ Technology, a tool for monitoring the effect of radiofrequency (RF) energy delivery during cardiac ablation procedures, inspiration struck from an extraordinary place – the electric fish.
Found deep in the ocean where light is scarce, these fish navigate the waters by creating an electric field around their bodies. As the fish approach prey or obstacles, the field distorts and gives them a sense of the object’s presence.
This was a breakthrough realization for our scientists, who saw a parallel between these fish and how a catheter could provide electrophysiologists more information during ablation procedures.
Ablation is a treatment option for patients with cardiac arrhythmias, a condition which causes the heart to beat irregularly, triggered by problems in the electrical system of the heart. During ablation procedures, physicians use a catheter to create lesions and destroy heart tissue that causes the abnormal rhythms, relying on several proxy measurements to estimate the relationship between the catheter and the tissue, and the tissue’s response to the energy delivery.
An electric fish has specialized skin cells that create the electric field to sense a rock or prey. Our scientists leveraged the proprietary mini-electrodes in the tip of the INTELLANAV™ MiFi OI ablation catheter to create a similar electric field localized around the catheter tip. This allows physicians to sense what tissue they are touching (a cardiac wall or valve, for instance) before, during, and after they perform an ablation.
For electrophysiologists, this technology is transformative. Existing contact force-sensing catheters measure how hard the instrument is pushing up against tissue, which tells only part of the story. Data from the DIRECTSENSE Technology, along with other measurements, offers physicians a distinct understanding of tissue characteristics and how the RF energy is affecting that tissue during ablation. This understanding helps guide minimal, predictable ablation during a procedure, which benefits the patient by helping to reduce the chances of over-ablation and avoid complications.
As we continue to innovate at Boston Scientific to better equip physicians and transform lives, stories like this remind us that inspiration can come from anywhere, even the ocean depths. | physics |
https://www.dixieguitarking.com/guitar-construction-explained.htm | 2024-04-23T02:02:14 | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818452.78/warc/CC-MAIN-20240423002028-20240423032028-00401.warc.gz | 0.976729 | 1,114 | CC-MAIN-2024-18 | webtext-fineweb__CC-MAIN-2024-18__0__71851972 | en | Guitar Construction Explained
Today, I'm going to explain the differences between different types of guitar construction. There are three main types of acoustic guitar construction that pretty much every instrument is going to fall into the category of. Those three categories are all solid wood, solid top, and all laminate. Today I'm going to be referring to the body of the instrument, which is the largest portion of the instrument. Acoustic guitar bodies are composed of a soundboard, or top of the instrument, sides and a back. Solid wood refers to a single layer of wood thick. Laminate wood refers to multiple layers of wood glued together. The difference in these makes a huge difference in how your instrument is going to sound - in the beginning and down the road.
Musical instruments, as they're played, and as they're vibrated- if they're made out of wood, are going to improve and "open up." When an instrument opens up it basically vibrates easier and easier depending on how much it's been played. So, some people say "instruments improve with time," which is partially true. With time they will dry out and improve in some ways and sometimes guitar companies use old wood because it does sound better if it has been "cured," but really an instrument is improved when it vibrates. It starts off with potential and then its potential is amplified based on what its construction is.
So, let me explain the difference between these three different types of construction. All solid wood means a single layer wood on the top, on the back and on the sides. These are the most sought-after instruments for professional use- the reason for this is that they are going to vibrate the easiest, they're going to transfer the vibrations to the strings the easiest, they're going to sound the best right off the bat, and they're going to improve the most down the road. These are the preferred types of instruments typically for professionals- or anybody who just really wants a better sounding instrument.
Solid top is the second category. The top of the instrument, as I mentioned before, is called the soundboard. The soundboard of the instrument is the most important part of the instrument to be a single layer of wood thick. The reason for this is because it transfers the vibration directly from the saddle and the strings which are playing into the instrument. The top is going to vibrate the most out of the entire instrument and it's going to make the biggest difference in tone. So, a solid top guitar is going to have a single layer of wood thick on the top and then laminate-in other words multiple layers of wood thick- on the sides and back. The advantage of this instrument over an all laminate guitar is that it's going to sound significantly better because the soundboard, or top, is super important. Compared to an all solid instrument though it is not going to have as much depth, it will not sound as good typically, and will not improve as much.
The third type of guitar construction is all laminate. All laminate instruments are made from multiple layers of wood glued together, or an epoxy sawdust composite, all the way around the body. The advantage of this is that they are going to be much, much more durable as laminate wood is more durable; however, it does not sound as good typically as solid wood because it does not vibrate as easily. Some companies use a very thin laminate, and those thin laminates can sound pretty good, but I have yet to find in all laminate guitar that sounds anywhere close to an all solid wood guitar. I have found old laminate guitars that have opened up- because they do open up- that sound as good as solid top guitars and even solid wood guitars that are new.
If you take an instrument that's been played, and it's made out of solid wood, it's going to typically sound better than a solid top. A solid top is typically going to sound better than all laminate. So, those are the main differences in acoustic guitar construction.
I'm also going to explain the difference between saddle and nut. The saddle and nut are going to be the direct contact points with the strings and the rest of the instrument, and because of this the material that's used is going to make a huge difference in tone. Bone has traditionally been used in high quality instruments because of its ability to transfer vibration extremely well. Plastic and other composite materials have been used in more modern instruments and typically don't sound as good. There are some exceptions to that- micarta and corian and graphite and a couple other materials such as tusq can sound comparable to bone and they do have more consistent transfer of vibration than bone; however, they typically don't sound as clear and the reason for that is that bone transfers vibration easily because it's a very very hard material- whereas plastic is usually a softer material and because of that it's going to dampen the sound a little bit.
So, today I'm going to put a focus on the ROSG9M Recording King Guitar, which is all solid wood- meaning it's got solid single layer of wood thick on the top, solid single layers of wood thick on the on the sides and solid single layers wood thick on the back- and a bone nut and saddle for under $350. You would be hard-pressed to find an instrument with those quality materials anywhere near that price range anywhere. | physics |
https://www.anamarzablog.com/how-to-extend-your-wi-fi-and-make-it-faster/ | 2023-12-02T16:10:29 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100427.59/warc/CC-MAIN-20231202140407-20231202170407-00648.warc.gz | 0.937135 | 1,058 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__234687396 | en | Crawling-WiFi-speeds, one can feel frustrated even by reading these three words. With all the outflow of technology and internet-dependent devices in our homes, making sure that our home internet can cope up with all of them is crucial. Most of the internet packages offer seamless roaming and connectivity, but for those that don’t, we have spectrum internet plans for an affordable switch and a list of hacks and fixes to both, extending the range and boosting the speeds of your Wi-Fi.
Achieving full Wi-Fi coverage in our homes, even in those sneaky little corners like the basement, can be challenging, but it isn't impossible. Wireless signals travel through the air as radio waves, so they need a clear, unobstructed path, unlike wired connections, which aren't disturbed by such hurdles. These sources of interference weaken wireless signals and hence are an important concern to look into before installing your Wi-Fi connection.
Types of Interferences
Following is a list of obstructions that a wireless connection might face, each of which can weaken its signal:
- Physical Objects: Physical objects pose a low to high degree of interference with Wi-Fi signal transmission. The denser the material of an object, the more difficult it is for radio signals to pass through. For example, concrete or steel walls make it difficult for the signals to maintain their strength.
- Electrical Interference: Devices like computers, microwaves, fans, and refrigerators cause electrical interference with the signals. The degree of interference depends upon the distance of wireless equipment from these electrical devices.
- Environmental Factors: Being radio technology-dependent, wireless signals depend highly on environmental variations. For example, the degree of moisture in the air or fog can interfere with the signal strength. Lightning can also cause electrical interference, hence weaken the signals.
- Router Positioning: Most of us fail to realize the importance of a ‘good spot’ for positioning our router. Positioning it low on the ground will result in greater hurdles and weaker signals. Similarly, the further you are from the device the weaker the signals get.
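The distance effect mentioned in the last point can be put into rough numbers. As a sketch (idealised free-space figures that ignore walls, furniture and interference), the standard free-space path loss formula shows how quickly a 2.4 GHz Wi-Fi signal weakens with distance:

```python
import math

def free_space_path_loss_db(distance_m, frequency_hz):
    # FSPL(dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi / c)
    c = 299_792_458.0  # speed of light in m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(frequency_hz)
            + 20 * math.log10(4 * math.pi / c))

for d in (1, 5, 10, 20):
    loss = free_space_path_loss_db(d, 2.4e9)
    print(f"{d:>3} m from the router: about {loss:.0f} dB of path loss")

# Every doubling of distance costs about 6 dB even with nothing in the way;
# dense walls and appliances add further losses on top of that.
```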
Ways to Boost Your Wi-Fi Signal
Now that we have listed some of the factors that can cause obstruction, let us talk about ways to mitigate these issues:
1. Optimal Placement of Wireless Equipment
When it comes to wired internet connections, it is always easy to hide the cables away from sight. However, when you're talking about wireless equipment, you must consider placing it in an area (away from walls, etc.) with the fewest obstructions and enough room for its omnidirectional signals to flow.
2. Better Antennas
Most ISP-rented routers have weak antennas. Nevertheless, there are several options to choose from if you are planning to purchase your own internet modem and router. Although wireless routers are hideous-looking, they do have powerful antennas, which are capable of giving a significant boost to the Wi-Fi speed and coverage.
3. Update the Software
Many tech experts suggest updating your software as the first thing to do. It comes with two main advantages: first, your system is updated with all the new features and upgrades, and second, the system is updated with the latest security measures.
4. Get a Wireless Range Extender
If your home has thick walls or places like basements, then investing in a range extender will be a sane idea, as it will help to eliminate Wi-Fi dead zones. You can easily get one from Amazon.com at an economical price.
5. Reboot Your Modem
Sometimes the solution lies in simpler things we've been ignoring all this time, such as boosting your Wi-Fi signal by simply rebooting your modem and router. This can help get your network back on track. The suggested way to go about it is by unplugging both modem and router and waiting for 15 seconds. After this, turn on the modem first and wait for it to get online, followed by turning your router on.
6. Use Aluminum Foil
Only folks from the '90s will remember wrapping aluminum foil around the rabbit ears of their TV sets to boost or catch signals. Well, the same hack goes for Wi-Fi signals as well. The easiest way to go about it is by taking an empty aluminum can, cutting it vertically in half, and placing it over the router's antenna.
7. Keep a check on Bandwidth Hungry Apps
Just one bandwidth-hungry app can suck all the good juice out of your wireless device leaving the rest of the users frustrated. We all are aware of the internet lag that happens as soon as someone starts gaming in the house. Make sure to manage timings as to when such activities should ideally happen, in order to avoid inconveniences.
In short, I have listed the tried and tested ways to boost your Wi-Fi speed and increase its range. Try them at home and let us know in the comments, which one works the best for you. However, if all else fails, it might as well be the time to change your router. | physics |
https://badexplanations.com/2017/04/16/a-tiny-history-of-the-telescope/ | 2019-09-20T12:13:56 | s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574018.53/warc/CC-MAIN-20190920113425-20190920135425-00042.warc.gz | 0.972151 | 1,323 | CC-MAIN-2019-39 | webtext-fineweb__CC-MAIN-2019-39__0__19698321 | en | Once upon a time, in 1610, Galileo pointed a telescope at Jupiter and discovered that Jupiter had four moons. This event is commonly regarded as the first real triumph for the Copernican theory—the theory that the Sun, and not the Earth, sat at the central point of the Universe. The discovery of Jupiter’s moons was revolutionary, unexpected and mysterious. It turned thousands of years of cosmology on its head.
Indeed, the discovery was so unexpected that we have reason to wonder: why on earth were Galileo’s observations taken seriously at all? After all, almost no-one else apart from Galileo himself could see the moons, and it seemed as though you had to use a newfangled and prohibitively expensive contraption called a “spyglass” to see the moons anyway. Galileo might as well have said that he had an invisible dragon in his kitchen, but that you could only see it with the aid of diamond spectacles or through a perfectly circular peep-hole.
Just think about it: no-one else could see these moons, and Galileo’s telescope was the only detection tool that yielded the observation. Galileo’s contemporaries had only one alleged method of collecting positive evidence for the existence of Jupiter’s moons—his bloody telescope. Could anyone else cross-check Galileo’s observations? If the telescopic observations couldn’t be cross-checked, then why were his results taken seriously?
Well, the fact is, they weren’t. When Galileo published Sidereus nuncius in 1610, most other scientists were skeptical that Galileo’s observations were reliable, since they were obtained by way of a mysterious spyglass contraption which was regarded as error-prone for celestial observations. Part of the reason for this widespread skepticism undoubtedly lay in the fact that the deliverances of telescopic observation flatly contradicted naked eye observation. “Just go look at Jupiter with your damned eyes, there ain’t no moons there!” Furthermore, the telescopic observations were very often, shall we say, “anti-Ptolemaic”. Each new discovery made with the telescope undermined geocentrism. For these two reasons, telescopes were considered to be giving misleading information, or as one of Galileo’s contemporaries, Martin Horky, put it: ‘On Earth, it works miracles; in the heavens it deceives.’
Galileo was apparently either deceived or a deceiver.
This "telescope-skepticism" was almost entirely abandoned within a year or two of the publication of Sidereus nuncius, after the growing corroboration of Galileo's claims by independent observers, including the mathematicians at the official Collegio Romano. In August of 1610, Johannes Kepler was gifted one of Galileo's telescopes and he made observations of Jupiter's moons in the presence of a young astronomer friend, Benjamin Ursinus. Together, the pair attempted to show that the telescope's images were reliable by means of the following procedure: 'what one observed he secretly drew on the wall with chalk, without its being seen by the other. Afterwards, we passed together from one picture to the other to see if they agreed.'
Following this method, Kepler and Ursinus agreed on the relative positions of three of Jupiter’s moons, yet disagreed over a fourth. This surprising, nay miraculous, intersubjective agreement about the positions of the moons was one of the first experimental verifications of the reliability of telescopic observations. Kepler’s aim was to demonstrate that the agreement generated between himself and Ursinus was not due to any suspicious interference, preconceived plan or fraudulent tricks. To that end, Kepler went so far as to withhold all contact from Galileo until publishing his observations, so that none could allege that Kepler was merely a Galilean stooge, acting under his instructions.
Given the rapid pace of telescopic advance in the 17th century, it was in the 1630s, only twenty years or so after Galileo’s initial claim, that his observations could be corroborated by telescopes that worked according to different physical principles. A new type of “astronomical telescope” that made use of a convex rather than a concave ocular lens was first described by Kepler in 1611. Astronomical telescopes became widespread around the middle part of the 17th century, and observations made with these “next generation spyglasses” corroborated what Galileo had said.
One further 17th century advance in telescope technology deserves mention, since the technology is similar to that used in most large telescopes today. Although the Jesuit astronomer Nicolas Zucchi experimented with replacing one of his lenses with a mirror in his telescope of 1616, it was not until 1668 that Newton developed a functioning example of a telescope that worked on the principle of reflection rather than refraction—the telescope used a curved mirror to reflect the incoming image towards the observer. Reflecting, or “Newtonian”, telescopes once again corroborated all that Galileo had said. Their chief improvement on earlier telescopes lay in the elimination of the chromatic aberrations that had plagued earlier observations made with lens-only telescopes. Diffuse halos—artifacts of the lenses—had surrounded the objects of telescopic observations up until Newton came along. This was an immense leap forward that took much technical, theoretical and practical skill to develop. Newtonian telescopes were welcomed for yet other reasons too: they were far shorter than the best refracting telescopes, some of which had grown to inconvenient lengths of 10 metres or more!
Thus, within about a year or two of their discovery, Galileo’s moons had been independently corroborated by other investigators, notably Kepler. Around 20 years afterwards, the observations were corroborated by investigators using “astronomical” telescopes, with entirely different lenses. Within 60 years of Galileo’s first observations, the moons could be observed by telescopes that were totally unlike any that had been used before. All these different telescopes, all operating according to different physical principles, all observed the very same thing: moons. The moons could no longer be argued to be trickery or an artifact of Galileo’s telescope. They were not the result of a deceiving spyglass. Galileo was neither deceived nor deceiver. He was simply, and quite exceptionally, correct. | physics |
https://iono2x.com/technology.php | 2024-03-03T22:50:57 | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476399.55/warc/CC-MAIN-20240303210414-20240304000414-00185.warc.gz | 0.910603 | 513 | CC-MAIN-2024-10 | webtext-fineweb__CC-MAIN-2024-10__0__78318472 | en | Ionox Non-Thermal Plasma (NTP) technology utilizes high voltage electrodes connected to a rapidly pulsing, high voltage power supply. The configuration of the electrodes creates a rapidly reversing, high intensity electrical flux within the electrode air gap. As the contaminated air stream passes through the electrode gap, the high intensity electric flux density fractures the volatile organic compound (VOC) molecules within the air stream, and strips and heats very low mass electrons to extremely high temperatures. Large volumes of micro discharge streamers, essentially tiny lightning bolts which produce a violet glow, are created. These streamers collide with diatomic oxygen and water vapor molecules in the contaminated air stream, forming highly reactive oxidative radicals, known as Reactive Oxygen Species (ROS) and Hydroxyl Radicals (•OH). This extremely reactive mixture of ionized gas and very high temperature electrons is known as Non-Thermal Plasma (NTP).
Within the non-thermal plasma inside the Ionox System, the ROS, •OH, and free electrons instantaneously react with the fractured VOC molecules to produce mainly water vapor and carbon dioxide. The key advantage of utilizing VOC oxidation to eliminate odor is that it allows oxidation reactions that would otherwise only occur at high temperatures of >1,350° Fahrenheit (>730° Celsius) to occur rapidly, with little measurable heat rise. NTP oxidation technology is highly energy efficient, as there is little heat rise of the treated air stream.
Ionox Odor Elimination Systems do not use plasma injection. On the contrary, Ionox Systems are “true” implementations of Non-Thermal Plasma technology. Free high temperature electrons, ROS, and •OH have a very short half-life and cannot be injected in reactively significant volumes, even over very short distances. First used in the late 1800s, ozone injection is a very old approach to odor control and has been shown to achieve odor reduction of 25-35% at best. So-called “NTP Injectors” or “Plasma Injectors” can only inject long-lived lower oxidation potential radicals, such as ozone. These systems can best be described as “ambient air fed ozone injectors” and are designed to produce large volumes of concentrated ozone. Quite the opposite, Ionox Systems minimize the production of residual ozone. We believe that any system that injects ozone into an air stream and that does not pass 100% of the contaminated air stream through the electrode gap is not NTP technology. | physics |
https://www.themedievalacademyblog.org/cara-news-duke-center-for-medieval-renaissance-studies/ | 2023-12-10T05:42:38 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679101195.85/warc/CC-MAIN-20231210025335-20231210055335-00689.warc.gz | 0.943886 | 437 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__130503231 | en | Duke’s Center for Medieval & Renaissance Studies has completed another outstanding year of endeavor!
One of Leonardo Da Vinci’s flying machine models: a human powered glider capable of flapping.
Highlight – “The Stymphalian Project.” One of our undergraduate students, a double major in Mechanical Engineering and Medieval & Renaissance Studies, pursued honors work that stemmed from a senior ME project that was originally conceived as a MEDREN topic—on Da Vinci’s concepts and notes for a human flying machine. The student’s ME senior project team designed and constructed an “ornithopter,” a glider drone based on aerodynamic properties that would allow it to glide and then flap to keep its motion going. The MEDREN major on the team wrote up an honors thesis that discussed the project’s aims, historical context, aerodynamic concepts, and design process. The project team took its inspiration from an initial attempt to replicate Da Vinci’s design concept based on his observations of nature (large sea birds as gliders and bats’ flapping wings), but they quickly discovered that Da Vinci had no concepts of aerodynamics with which to avail himself, and so they shifted to design a unique glider (like Da Vinci envisioned) using modern aerodynamic engineering concepts, which could make use of flapping motion to extend the glider’s flight and keep it aloft. This was a highly complex design project that, In the end, failed to function as a flapping machine. No one in fact has ever successfully designed such a glider! But the intellectual process and bold attempt produced insights into Da Vinci’s imaginary conception, the complexity of natural bird wings, and the limitations of mechanical engineering.
To read about what went on in the 2017-18 academic year, see our recent online newsletters (with plenty of images):
https://mailchi.mp/duke/fall-2017-newsletter (fall 2017)
https://mailchi.mp/duke/fdwzna2lp4 (winter/spring 2018) | physics |
http://www.astrin.uz/samobs/main.html | 2020-04-06T18:18:20 | s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371656216.67/warc/CC-MAIN-20200406164846-20200406195346-00283.warc.gz | 0.870987 | 174 | CC-MAIN-2020-16 | webtext-fineweb__CC-MAIN-2020-16__0__26659243 | en | Ulugh Beg Astronomical Institute and Samarkand State University have installed the first telescope of the university's observatory in Samarkand, the native city of Ulugh Beg, Uzbekistan. A Grubb Parsons 48-cm reflecting telescope was dismounted from Maidanak Astronomical Observatory and re-installed at the Samarkand State University observatory. First light at the telescope coincided with the solar eclipse on 29th March 2006.
The Samarkand observatory is the first educational scientific observatory in Uzbekistan, and appears to be the first in all of Central Asia as well. Now not only students of Samarkand State University, but students and teachers of universities throughout Uzbekistan will have hands-on access to observations of celestial bodies using a real astronomical instrument.
https://www.eurostellar.com/vi/post/innovation-explosion-2-full-functionality-without-preheating-even-in-frosts | 2023-09-26T06:10:41 | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510149.21/warc/CC-MAIN-20230926043538-20230926073538-00886.warc.gz | 0.904181 | 725 | CC-MAIN-2023-40 | webtext-fineweb__CC-MAIN-2023-40__0__321713028 | en | Every ventilation unit with heat recovery needs frost protection. Thanks to its unique design, Jablotron’s ventilation unit Futura can operate in temperatures as low as -19 °C (-2 °F) without preheating.
Winter operation of ventilation units has its caveats: heat from the extracted (waste) air is transferred to the fresh (supply) air in the heat exchanger. The waste air cools off in this process and, once it reaches a certain temperature called the dew point, moisture condenses on the walls of the heat exchanger. When the temperature drops below zero, the condensate freezes and the resulting ice can damage the heat exchanger.
How do you prevent damage to a ventilation unit in frosts? There are a few ways to do it, but they do not come without setbacks:
1. Turn it off.
Why did you buy it then?
2. Turn the air extraction up and air supply down (fan disbalancing). In this case the exchanger is heated up and defrosted by the extracted (waste) air. The resulting underpressure negatively affects your house's structures.
3. You can preheat air in a sub-soil exchanger.
It is an expensive and technically complicated solution.
4. Or the most common solution – use electrical preheating.
That cancels out savings from the heat recovery. How much sense does purchasing a high-efficiency unit make, when preheating uses up all of the saved energy?
5. Or you can get Jablotron’s Futura.
Unique defrosting without preheating
The heart (or more precisely the lungs) of our ventilation unit is the controlled enthalpy exchanger. Inside the heat exchanger there are nano layers with the ability to retain molecules of water, which are later evaporated into the fresh (supplied) air, improving its humidity.
This patented technology of moisture evaporation works even in frosts. To start, we have to say that the condensate in Futura actually freezes! However, this happens in a precisely controlled layer and in a given time interval (a few minutes). In the next part of the cycle, the ice layer melts and dries again without interrupting or affecting Futura’s performance. Periodic drying also maintains sanitary conditions inside the heat exchanger at all times – unlike other types of heat exchangers that are continuously damp. No additional electricity is required for defrosting, everything happens spontaneously.
Does it matter?
That depends. The electricity bill for preheating won't break the bank; you can think of it as having an extra iron plugged into your mains. On the other hand, imagine 10,000 houses, each with a ventilation unit that requires 2 kW of preheating. That is 20 MW in total – a fairly large photovoltaic power plant, or some 4% of one nuclear power block's installed capacity – definitely an impact on our environment.
Our Futura has the ability to reduce this environmental impact – and that is… heartwarming!
http://www.asclme.org/index.php?option=com_content&view=article&id=121%3Acartesian-diver&catid=63%3Amodels&Itemid=130&lang=en | 2013-12-12T14:13:46 | s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164607702/warc/CC-MAIN-20131204134327-00067-ip-10-33-133-15.ec2.internal.warc.gz | 0.938972 | 1,093 | CC-MAIN-2013-48 | webtext-fineweb__CC-MAIN-2013-48__0__195669899 | en | If you'd like to make your own Cartesian Diver, read on.
But first, some background science!
Density is the mass of an object per unit volume.
Buoyancy is the upward force applied by the water displaced by the object;
- positively buoyant objects float - they displace as much water as they weigh when floating at the surface (or on their way there);
- negatively buoyant objects sink - they displace less water than they weigh;
- neutrally buoyant objects can “hover” when they have the same density as the fluid surrounding them.
An object which can vary its density (which a cartesian diver does) is able to rise, fall and even hover at a particular depth in water. Argo floats use variable density to travel through the water column, as do submarines. Most fish and even SCUBA divers find variable density helpful to get around under water too!
Squeezing the bottle will result in the straws sinking, once enough water enters through the bottom of the straw to make the straw more dense than the surrounding water. Essentially, the increasing pressure inside the bottle as you squeeze it compresses the air inside the straw, allowing water to enter. The straw displaces less water, becomes more dense, and sinks.
As you probably know, gases (like air) are very compressible (can be squeezed into a smaller space), whereas liquids (like water) are relatively incompressible. In the cartesian diver experiment, squeezing the bottle can't really compress the water, but the air spaces inside the straws are compressible, resulting in the air spaces obeying Boyle's Law and becoming smaller through the increased pressure you apply on the bottle. (Remember from Boyle's Law that increasing pressure will reduce the volume of a gas?). If your straw is fairly clear, you'll probably be able to see the bubble inside decreasing in volume as you squeeze.
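Here is what that looks like in rough numbers (a small sketch; the masses and volumes below are made-up illustrative values, not measurements of any particular straw). Boyle's Law shrinks the trapped air pocket as the pressure rises, and the diver sinks once the water it displaces weighs less than the diver itself:

```python
WATER_DENSITY = 1.0    # g per cm3
ATMOSPHERIC = 101.3    # kPa, pressure before you squeeze

# Illustrative diver: straw + plasticine weigh 2.0 g and occupy 1.2 cm3;
# the trapped air pocket starts at 1.0 cm3.
solid_mass_g = 2.0
solid_vol_cm3 = 1.2
air_vol_cm3 = 1.0

def diver_state(squeeze_kpa):
    pressure = ATMOSPHERIC + squeeze_kpa
    # Boyle's Law: P1 * V1 = P2 * V2 for the trapped air
    squeezed_air = ATMOSPHERIC * air_vol_cm3 / pressure
    displaced = solid_vol_cm3 + squeezed_air   # cm3 of water pushed aside
    buoyant_mass = WATER_DENSITY * displaced   # grams of water displaced
    return "floats" if buoyant_mass > solid_mass_g else "sinks"

for squeeze in (0, 10, 20, 30, 40):
    print(f"squeeze adds {squeeze:>2} kPa -> diver {diver_state(squeeze)}")

# With these example numbers the diver floats until the squeeze adds roughly
# 25-30 kPa, at which point the shrunken air pocket no longer displaces enough water.
```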
Stop for a moment and think how a very very heavy object can be made to float on water - something like an enormous ship.
It's quite simple - if the object is able to displace at least as much water as it weighs, then it will float. A solid block of metal (like steel) will sink, but if you make it less dense (by making it hollow, for instance, and full of air, which is much less dense than water or steel!), you can make the object effectively less dense. This is why boats have a fairly thin hull with a lot of air inside (boats are mainly empty space) and tall sides - they push as much water out of the way as they weigh before water is able to come over the side - but, of course, if you get a hole in the boat or enough water comes over the side, it will eventually sink! So, even though boats can weigh many thousands of tonnes, they push just as many thousands of tonnes of water out of the way to ensure they float.
How to make a Cartesian Diver
- Squeezable bottle (2l fizzy drinks bottles are excellent)
- Plasticine, Prestik or any modelling clay
- Sticky tape
- Drinking straws
- Take away Ketchup packets (optional)
- Fill the bottle right to the top with water
- Cut the straw to about a third of its normal length
- Bend the top over a bit and secure it with sticky tape (to stop the air leaking out of the top)
- Put some Plasticine or Prestik around the bottom of the straw, taking care not to cover the hole
- Test the completed straw in a bowl of water; if it sinks to the bottom, remove plasticine/prestik until it floats with the top just above water level; if it floats too high in the water, add more plasticine/prestik
Other things to try:
- Try different lengths of straw. Which ones sink and rise the fastest?
- Try half filling the staw with water while you're building it (at the 3rd step above, just before you bend the top over and tape it - make sure the straw is dry before you try to tape it!
- Try to get your straw to hover somewhere in the middle of the water, motionless. You'll need to vary the pressure: if it sinks to the bottom, squeeze a bit less hard (slowly release the pressure and watch the straw rise); squeeze slightly harder to get it to stop where you want. If you achieve this, your straw is neutrally buoyant at the depth at which it is hovering.
- Bend a straw in half and connect the two halves with a paperclip with some plasticine/prestik as a weight hanging off the paperclip.
- Many ketchup or other condiments in small plastic packets act as cartesian divers too - you'll have to experiment with some until you find ones that float just right and can sink. Try different sauces like vinegar and barbeque or peri peri if you can find them. Not all will work because they don't all have the right density and air content to work - a fun investigation!
Simply stuff one through the mouth of a bottle, put the cap on and squeeze!
- Try using an eye dropper as a Cartesian Diver | physics |
https://www.menicon.com/pro/our-products/gp-lens/menicon-z/ | 2020-02-19T22:40:32 | s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144429.5/warc/CC-MAIN-20200219214816-20200220004816-00532.warc.gz | 0.90601 | 467 | CC-MAIN-2020-10 | webtext-fineweb__CC-MAIN-2020-10__0__176108574 | en | Innovative material and design
As the result of many years of research and development, Menicon has succeeded in developing new materials and designs for gas permeable lenses. Menicon Z doesn't only have hyper gas permeability but also excellent wettability, thereby offering all wearers the benefits of long-term corneal health, vision and comfort.
Lenses made in Menicon Z material can provide superior visual correction and are excellent for patients with astigmatism. The material combines novel polymer components that increase its strength and stability.
This chemical structure results in excellent mechanical properties, allowing the lens to be made significantly thinner than a typical rigid gas permeable lens. Historically, the strength of gas permeable materials would decrease as oxygen permeability increased. However, Menicon has worked to create a new material with both enhanced strength and hyper oxygen permeability, which allows lenses to be made thinner and improves comfort.
This new polymer is composed of a novel siloxanylstyrene, fluoromethacrylate, and benzotriazol UV absorber.
Oxygen permeability (Dk 163)
The oxygen permeability (Dk 163 ISO /189 Fatt) of Menicon Z exceeds all other GP materials.
Menicon Z is the first material to be classified in the "hyper-oxygen transmissibility" category by a leading expert in the field of oxygen permeability research*.
*Benjamin WJ. EOP and Dk/L: the quest for hyper transmissibility. Journal of the American Optometric Association.
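To see why being able to make a lens thinner matters, here is a small illustrative calculation (the centre thicknesses below are assumed example values, not Menicon specifications). Oxygen transmissibility is normally quoted as Dk/t, the material's permeability divided by the lens thickness, so the same material delivers more oxygen to the cornea when it can be cut thinner:

```python
DK_MENICON_Z = 163  # ISO Dk value quoted above, in units of 1e-11 (cm2/s)(mL O2/(mL x mmHg))

def dk_over_t(dk, centre_thickness_mm):
    # Dk/t is conventionally reported with thickness in cm, giving units of 1e-9
    thickness_cm = centre_thickness_mm / 10.0
    return dk / (thickness_cm * 100.0)

for t_mm in (0.20, 0.15, 0.10):
    print(f"centre thickness {t_mm:.2f} mm -> Dk/t of about {dk_over_t(DK_MENICON_Z, t_mm):.0f}")

# Halving the centre thickness doubles the oxygen transmissibility of the finished lens.
```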
Hyper-Dk, combined with superior strength and wettability!
Discover Menicon Z material's hyper-Dk safety and excellent physical properties (due to the siloxanylstyrene component). Menicon Z has higher resistance to breakage and scratches thanks to its physical strength and hardness, and has excellent deposit resistance combined with incredible wettability compared to other high- and super-Dk materials.
Menicon Z material impact strength
It allows the manufacture of significantly thinner lenses in a material which combines superior strength, stability and wettability.
For new wearers, it offers them contact lenses with clear vision and enhanced comfort from the first day!
* Products and availability may vary by country. | physics |
https://liquidcrystalimages.com/products.html | 2023-06-03T00:43:04 | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648911.0/warc/CC-MAIN-20230603000901-20230603030901-00315.warc.gz | 0.884726 | 499 | CC-MAIN-2023-23 | webtext-fineweb__CC-MAIN-2023-23__0__102272326 | en | We design liquid crystal displays for a wide range of product applications. Our displays are designed, developed & manufactured for use in HVAC controls, power meters, security system controls, flow meter controls & other applications.
Liquid crystal displays are passive electronic components. As most people know, the familiar states of matter are gases, liquids and solids (plus plasmas); liquid crystals form an additional, intermediate phase. A liquid crystal consists of molecules with a special structure that exists only within a specific temperature range; below that range the material turns into a normal crystal. It has the unusual physical property of elasticity and, most importantly, special opto-electronic properties: distinctive optical behaviour combined with high sensitivity to an electromagnetic field.
Therefore, liquid crystals do not emit light but instead manipulate it. The twisted structure rotates polarized light as it passes through the liquid crystal material between the polarizers. Typically, a front polarizer is oriented 90 degrees to a rear polarizer, sandwiching the liquid crystal; with no field applied, the twisted molecules rotate the light so that it passes through. When an electric field is applied, the molecules align with the field, the light is no longer rotated and is blocked, resulting in a positive image of dark segments on a lighter background. Alternately, the two polarizing layers can be aligned in parallel, which produces a negative image (light segments on a dark background) when an electric field is applied.
There are numerous industrial and consumer applications for graphical displays made from liquid crystals. For example, the common applications include the display panels of a digital watch or a calculator. Other uses include liquid crystal displays for HVAC controls, power meters, security system controls and flow meters.
The specifications of a liquid crystal display include the size of the visual area, the viewing angle/direction, the display mode (reflective, transflective or transmissive), pixel pitch, color and contrast ratio. In addition, the design of an LCD includes backlighting options, connector types, operating temperature range and storage temperature range. For many applications, the LCD response time is a critical design specification that affects how quickly the module responds to changes in the electrical field.
For most applications, the most common types of liquid crystal displays from lowest cost with lower contrast to higher cost with higher contrast are Twisted Nematic (TN) LCD, Super Twisted Nematic (STN) LCD and Film-compensated Super Twisted Nematic (FSTN) LCD. Both the STN and FSTN LCDs have higher viewing angles and more color options. | physics |
https://northeastmaglev.com/2022/12/20/maglev-in-the-classroom/ | 2023-12-03T20:43:47 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100508.53/warc/CC-MAIN-20231203193127-20231203223127-00293.warc.gz | 0.966517 | 999 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__114592701 | en | Bringing Maglev to the classroom: STEAM enrichment program
As part of our November celebration of National Education Month (and more specifically, November being National Science, Technology, Engineering, Arts, and Math, or STEAM, Day), our team recently launched our first STEAM afterschool program with RICH (Restoring Inner-City Hope). The RICH Program, a nonprofit dedicated to empowering and educating Baltimore residents through community programming, hosted the event with us at their headquarters in the South Baltimore neighborhood of Cherry Hill. It was a huge success, and we are already looking forward to the next one!
Our goal for this STEAM program is to show students that cutting-edge technology, like that of the Superconducting Maglev, isn’t a far-off, future concept; in fact, it’s happening here and now. But we know that science and technology can be intimidating, especially when it comes to complex concepts like physics and superconductivity. Fortunately, with two former educators on our marketing team, we were able to bring complicated technological principles down to an introductory level, through interactive experiments and lessons!
The first activity we explored introduced the concept of magnets and magnetic fields. To help students best understand how exactly magnets could possibly push and pull a train, we had to introduce the concept of magnetic polarity. We demonstrated how iron filings reacted to magnets and how switching the polarity of the magnets impacted how the filings behaved.
To accompany the hands-on experiments, we developed a booklet for students to explore additional reading material and to continue learning at home with at-home activities. Students then referenced their booklet to see how the magnets on the sides of the SCMAGLEV train interact with the guideway to work in a similar way.
Our second hands-on activity introduced the concept of friction. We used two wooden racing ramps and a miniature maglev on wheels to demonstrate the impact of friction on speed. We placed different materials, like felt or smooth paper, on the ramps and students predicted which materials would result in faster racing times. Then we explored how removing a source of ground friction altogether would impact racing time, seeing as the real-life maglev will levitate!
We also discussed the differences between traditional trains and the SCMAGLEV by completing a Venn Diagram as a group.
There were a few trick questions here and there, but students were quick to catch them as they called upon what they had recently learned at other stations; for example, the SCMAGLEV still deals with elements of friction even though it levitates, seeing as friction is all around us!
As exciting as these hands-on activities were, arguably the most enthralling part of the afterschool event was watching our marketing director, Craig, demonstrate characteristics of superconductivity using liquid nitrogen. He showed that a traditional electromagnet, which can be made to be considerably stronger than a traditional magnet, is limited by electrical resistance. Yes, it could pick up more metal fragments than a traditional magnet, but it quickly became very warm (as shown to students via a thermometer). Craig explained that the heat produced is a sign of decreased efficiency, meaning that much of the energy is being lost as heat instead of creating a magnetic force. Then, using liquid nitrogen, Craig introduced the concept of superconductivity, a phenomenon of certain materials that can occur at very low temperatures. When in a superconducting state, electrical resistance approaches zero, meaning that all the energy added is used to create strong magnetic forces, and none is lost as heat. Students watched in awe as Craig then placed a metal puck (that had been brought to a superconducting state in the liquid nitrogen) above a magnetic track. Not only did the puck float, but it traveled in circles around the track all on its own!
With everything that students had learned fresh in their minds, they had one final task ahead of them: constructing their own miniature maglev to race down a magnetic track. This final activity helped tie in everything we had learned about magnetic fields, poles, and levitation – and was, of course, an exciting and close competition!
We are so grateful that we were able to put on this STEAM event with RICH, especially because some of these topics can be hard to understand in a classroom setting without hands-on demonstrations! Moreover, part of what made our first event so exciting was that it truly felt aligned with RICH’s mission – empowering the residents of Baltimore. We believe empowerment through education can happen at many different levels and ages, and it is never too early to start learning about the technology and science that will be changing the future of transportation. We look forward to hosting more of these lessons in various educational settings, as these activities and experiments can be scaled to many different ages and grades. Many thanks to RICH for helping us host this great event – we can’t wait to be back! | physics |
https://www.saharaexpo.com/en/home/sectors/agricultural-materials-applications.html | 2021-05-06T18:39:56 | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988759.29/warc/CC-MAIN-20210506175146-20210506205146-00320.warc.gz | 0.899214 | 320 | CC-MAIN-2021-21 | webtext-fineweb__CC-MAIN-2021-21__0__80619400 | en | The greenhouse effect is a natural process that warms the Earth’s surface. When the Sun’s energy reaches the Earth’s atmosphere, some of it is reflected back to space and the rest is absorbed and re-radiated by greenhouse gases.
Greenhouse gases include water vapour, carbon dioxide, methane, nitrous oxide, ozone and some artificial chemicals such as chlorofluorocarbons (CFCs).
The absorbed energy warms the atmosphere and the surface of the Earth. This process maintains the Earth’s temperature at around 33 degrees Celsius warmer than it would otherwise be, allowing life on Earth to exist.
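That 33-degree figure can be checked with a standard back-of-the-envelope energy balance (a sketch using textbook values for sunlight and reflectivity, not part of the original explainer). Without greenhouse gases, the surface would settle near the "effective" temperature set by absorbed sunlight alone:

```python
SOLAR_CONSTANT = 1361.0  # W/m2 of sunlight arriving at Earth
ALBEDO = 0.30            # fraction of sunlight reflected straight back to space
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/(m2 K4)

# Energy balance: absorbed sunlight = emitted infrared
#   S * (1 - albedo) / 4 = sigma * T^4
effective_temp_k = ((SOLAR_CONSTANT * (1 - ALBEDO)) / (4 * SIGMA)) ** 0.25
observed_mean_k = 288.0  # roughly +15 C

print(f"Earth without a greenhouse effect: about {effective_temp_k:.0f} K "
      f"({effective_temp_k - 273.15:.0f} C)")
print(f"Observed mean surface temperature: about {observed_mean_k:.0f} K "
      f"({observed_mean_k - 273.15:.0f} C)")
print(f"Difference maintained by greenhouse gases: about "
      f"{observed_mean_k - effective_temp_k:.0f} degrees")
```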
Enhanced greenhouse effect
The problem we now face is that human activities – particularly burning fossil fuels (coal, oil and natural gas), agriculture and land clearing – are increasing the concentrations of greenhouse gases. This is the enhanced greenhouse effect, which is contributing to warming of the Earth.
Step 1: Solar radiation reaches the Earth's atmosphere - some of this is reflected back into space.
Step 2: The rest of the sun's energy is absorbed by the land and the oceans, heating the Earth.
Step 3: Heat radiates from Earth towards space.
Step 4: Some of this heat is trapped by greenhouse gases in the atmosphere, keeping the Earth warm enough to sustain life.
Step 5: Human activities such as burning fossil fuels, agriculture and land clearing are increasing the amount of greenhouse gases released into the atmosphere.
Step 6: This is trapping extra heat, and causing the Earth's temperature to rise. | physics |
https://journal.iscast.org/past-issues/from-physics-to-metaphysics-a-new-way | 2024-02-26T20:28:43 | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474663.47/warc/CC-MAIN-20240226194006-20240226224006-00059.warc.gz | 0.934007 | 12,816 | CC-MAIN-2024-10 | webtext-fineweb__CC-MAIN-2024-10__0__69795204 | en | Christian Perspectives on Science and Technology, New Series, Vol. 1 (2022), 46–71
Abstract: Brian Cox, at the end of his fifth episode in the 2021 BBC series Universe, says that big questions like, “why is there anything at all?” are scientific questions about nature. The paper challenges this form of naturalism by drawing on the work of V. J. Stenger, who derived virtually all the great laws of physics L from some physical knowledge and from a principle of point-of-view-invariance used by physicists in their enquiries. We will call this result R. The move from R to metaphysics is motivated by R having the oddity that L, operating from the Big Bang, are derivable from premises that include something that appears billions of years later, namely physicists using the above principle. The move is only justified if it can overcome two blockers: #1 that R is explicable wholly within the resources of the natural sciences; #2 that R is a brute fact. Either way, seeking a further explanation is not justified. I show these blockers logically cannot hold. Seeking a metaphysical explanation of R is therefore justified. It is shown that it is not unreasonable to conclude the universe is structured according to the laws of physics by God, the creator of the universe ex nihilo, in order that the universe be knowable through empirical enquiry, by embodied rational agents, using the principle of point-of-view-invariance. [p. 48]
Throughout my lecturing career, I have encountered several matters that make it difficult for many students to even grasp a Christian account of the scientific view of the universe. One is the sense that the Christian Bible is out of date for anyone with a scientific view of the world. Another is the problem of natural evil, that is, all the pain and death brought about by natural processes such as tsunamis, genetic disorders, the evolution of life on the planet, where such processes are supposedly created by a loving God. Another is the pervasive naturalism of modern culture. Naturalism is the doctrine that nature is all there is. Scientific naturalism says that nature answers to all the objects, relationships, and processes that are identified in the well-established natural sciences. Finally, students would like, if not a proof of God, then, a sense that there are rational grounds for belief in God, especially given pervasive naturalism and the exciting and relentless expansion of the natural sciences, especially physics and cosmology. Our culture is saturated by the natural sciences, technology, and the free market economy. Many people absorb from this milieu the view that there is no purpose or moral order written into the universe, and nothing beyond the universe. Here I draw on what Charles Taylor calls the “immanent frame,” meaning that many people envisage living a good life without any reference to anything transcendent, and get on living it.
In this paper I address two of these issues; pervasive naturalism, and the sought-after rational grounds for belief in God. Naturalism doesn’t necessarily present itself in philosophical terms. An example [p. 48] is the conclusion by Brian Cox in the last episode of his excellent BBC series, Universe. The first episode explores our cosmic origins examining how stars bring meaning to the universe. The second explores whether we are alone in the universe. The third tells how a new space mission has uncovered the history of the Milky Way. The fourth is about the super massive black hole at the centre of our galaxy. The fifth asks why we are here. This episode journeys back 13.8 billion years to the origin of the universe.
At the end of the fifth episode, Cox tells us four things. First, at some length he tells us that scientific enquiry is amazing, given the breadth, depth, and detail of its discoveries about our universe. As a crucial example, he highlights the cosmic microwave background radiation—the most ancient light in the universe. He also notes how much we have learned, though we are located on the tiny speck of our planet in this vast universe. Second, he identifies big questions like “why does anything exist?” and “why do we exist?” Cox grants that to many people these don’t sound like questions for science. They are more like questions for philosophy and perhaps even theology. But, third, Cox thinks they are scientific questions because they are questions about nature, which we can only answer by looking outwards, beyond the stars, not by looking within ourselves. Fourth, as we engage the universe, we not only ask questions, but we also begin to find answers, by which he means scientific answers.
Cox’s assurance that science can provide an answer to the big questions such as “why does anything exist?” is surprising. A couple of years ago, my atheist colleague Dr Kristian Camilleri and I were saying to a class in “God and the Natural Sciences” that if your question is “why is there anything at all?” science won’t help you with an answer. [p. 49] Straightaway a young man shot up his hand and said, “you mean science hasn’t yet provided an answer.” This second-year student was deeply into mathematics and physics. We affirmed the distinction he was making, but not its application in this case. Our claim was not based on a gap in scientific understanding, to be closed by further research. Our claim was based on the fact that any scientific answer necessarily draws on what already exists to do the explaining. Logically, it is unable to explain why there is anything at all. The student accepted this answer and even laughed. It is not a deep or complex point. Of course, we acknowledged that in making this point we were neither claiming nor denying that there is an answer to the question. Everyone knew that Kristian and I have different answers to that question. We left the question open for students to consider. Our point was simple, and it struck me that this student had reached second year university without this having been pointed out before. Doubtless he was not alone.
In what follows I accept Cox’s views about where to start to seek answers to the big questions, namely the amazing breadth and success of scientific enquiry. This will lead to a critique of the pervasive naturalism of contemporary culture, but not by rehearsing the familiar discussions about physicalism, which shows the need of an ontology richer than that assumed by scientific naturalism. Instead, a new way to make the journey from physics to a richer metaphysics is presented, using the work of physicist and atheist Victor Stenger. In daily talk, people do not make recourse to metaphysics, they rather tell stories. But every story told (or play performed, or movie made) is set within some world and will carry indications of the kind of world it is in which the story unfolds. For the story, this is reality. Here, metaphysics is a worldview. It is an account of reality and perhaps some idea of how we know it. [p. 50]
In summary, my approach starts from the relentless expansion of the natural sciences and voices a disciplined speculation based on this very successful form of human enquiry. I will show that the proposition entails two unavoidable questions: “why is there anything at all?” and “why is what there is structured—and structured the way it is?” The evidence for this proposition comes from finding answers to these two questions, which support each other and survive strong challenges.
The proposition is based on three observations about human enquiry. First, any particular research in the natural sciences presupposes that what is being enquired into is intelligible and open to rational explanation, though without prejudice to the forms of intelligibility and the forms of rationality that may be called for. This presupposition is what gets enquiry going and keeps it going. Second, history shows the incessant character of human enquiry, especially the last 450 years of scientific research that continues providing explanations of more and more of the universe in completely natural terms. Third, human enquiry conducts itself and envisages itself as continuing. It does not envisage itself as coming to an end. Human enquiry begins from wonder and proceeds through the continuing eruption of questions on a quest for a true understanding of whatever it researches. The natural sciences powerfully exemplify this dynamic process. Even if institutions (secular or religious) suppress enquiry, questions continue to erupt!
Let us recognise these aspects of human enquiry by the speculation that “all there is, is fully intelligible.” Of course, the proposition may lead nowhere—it might prove to be nonsense, or lack any interesting consequences, or there may be no evidence for it beyond the above motivation, and much against it.
Some clarifications are called for and some challenges are noted. Our speculation does not entail that everything is fully intelligible to us now. It does entail, however, that human enquiry will never be faced with a brute fact for which there is no explanation. Furthermore, enquiry is not faced with an infinite [p. 51] regress of explanations of the way things are, for then the fully intelligible becomes unintelligible. There are at least three ways the proposition can be challenged. First, a direct challenge is the open ontological question, “Is all there is fully intelligible? After all, the universe may be a brute fact.” But do we not risk falling into a gaps argument if we assert that something is a brute fact, when without a larger argument all we can mean is that we have not yet filled the gap in our explanation?
While this proposition does not entail that everything is fully intelligible to us now, it does lead us to expect there ought to be answers for at least the two big questions mentioned above: “why is there anything at all?” and “why is the universe structured—and structured the way it is?” The speculation that all there is is fully intelligible cannot be fulfilled if there is only an infinite regress of explanations. It can only be fulfilled if there is something that explains the existence of everything else, the very nature of which explains its existence, which is to say its existence does not depend on anything else, but rather it exists necessarily. This is the idea of God, the creator of all there is ex nihilo—that is to say, not from preexisting stuff. Such a God would have complete understanding, including self-understanding and being self-explanatory. As Ward comments, “being self-explanatory, after all, does not entail that anyone else can understand the explanation, only [p. 52] that it exists.” Nor, I would add, does it entail that no one can ever come to understand the explanation. Lawrence Krauss concedes that if God is understood as the cause of all causes, then there is no regress of explanations. Our argument understands God as the cause of all causes and will go on to address Krauss’ further claim that there is no evidence for the idea of God.
Here is the beginning of an answer to the first question: “why is there anything at all?” It is a beginning of an answer given that, for example, the claim that God exists necessarily has been criticised on the grounds that a God existing necessarily cannot but act necessarily, including creating necessarily. This necessity excludes freedom from the act of creation and from what is created. This would contradict the freedom manifest in human living, including human enquiry. It would also contradict any idea of God creating freely. This well-known difficulty is noted by Ward and Paul Davies. The latter sees this as a fatal difficulty for the idea of God, citing Ward, but without considering Ward’s extensive answer to this difficulty in the last chapter of his Rational Theology.
Help with this difficulty is also given by Peter Laughlin, who discusses divine necessity and created contingence in Aquinas. A key point for Laughlin is what kind of necessity is meant when God is said to be necessary. For example, did Aquinas intend “logical necessity” when he spoke of God being necessary? Laughlin shows that this is not the case. The problem we are discussing comes from assuming “that if God is the first and necessary cause then there can be no contingent [p. 53] proximate causes and ipso facto there are no contingencies.” The assumption is that whatever comes from, or is brought about by a necessary being, proceeds necessarily (so Neoplatonism). Laughlin argues this assumption is not a problem for Aquinas, for whom creation “is not logically necessary since the proposition ‘God does not create’ does not by itself entail a contradiction. Indeed, creation is not required by some ineluctable logic or by the nature of deity so that God could not have willed not to create.” Rather, if it is open to God to choose between creating and not creating, once having created, it is no longer open to God not to create. “Whatever God wills, then, in the act of willing cannot be changed but God’s will remains free to choose what it is that God will in fact will. The acts of God’s will are thereby only conditionally necessary in this sense, they are not absolutely necessary for God.” Laughlin concludes by quoting Aquinas’s point that no absolute necessity can be inferred from the divine will.
Based on our speculation, an answer is also to be expected to the second question, “Why is the universe structured—and structured the way it is?” A reasoned answer is possible only when some idea of how the universe is structured is identified. Many will think of the laws of physics as at least part of the answer and so, in part, our question becomes, “Why is the universe structured according to the laws of physics?” An answer may be reached starting from the work of Victor J. Stenger.
Physics according to Stenger
Victor J. Stenger, especially in his 2006 book, The Comprehensible Cosmos, derives the laws of physics for classical physics, relativistic physics (special and general), quantum mechanics, the standard theory of particle physics, and statistical mechanics. The laws are well known. What is of interest for us here is how he pursues the derivations. [p. 54]
Stenger starts by considering the kind of objectivity physicists seek in making models of reality. He illustrates this by contrasting the observations physicists make to observations from a subjective point of view, such as taking a photograph. “Instead, physicists seek universality, formulating their laws so that they apply widely and do not depend on the point of view of any particular observer. In that way, they can at least hope to approach an accurate representation of the objective reality that they assume lies beyond the perceptions of any single individual.” This claim is supported by a brief sketch of science’s history of increasing objectivity from Galileo to Einstein. Here, objectivity means that what is observed is not dependent on the position or reference frame of the observer. “This does not mean that the Universe looks the same at every point of space and time.” Rather, “while all phenomena may not look the same in detail, they can be modelled in terms of the same underlying principles.” Stenger’s key idea is this: “Physics is formulated in such a way to assure, as best as possible, that it does not depend on any particular point of view or reference frame. This helps make possible, but does not guarantee, that physical models faithfully describe an objective reality, whatever that may be.” He claims that when our models are the same for all points of view, “then the most important laws of physics, as we know them, appear naturally.” A model “should be able to successfully describe in a repeatable, testable fashion a whole class of observations of the same general type; enable the predictions of other unexpected observations; and provide a framework for further applications, such as in technology or medicine.”
The key idea amounts to the principle of point-of-view invariance (hereafter, PPOVI): “Point-of-view invariance: The models of physics cannot depend on any particular point of view.” Stenger readily shows that this principle requires the description of reality as invariant to the translation of the origin of the spatial coordinate system (space-translation), the rotation of a spatial coordinate-system (space-rotation), [p. 55] and the translation of the origin of the time variable (time-translation). He also designates invariance as symmetry, for example, a sphere is invariant under rotation about any axis. Stenger shows that conservation of energy follows from time-translation invariance, conservation of linear momentum follows from space-translation invariance, and conservation of angular momentum follows from space-rotation invariance. The conservation laws “are simple consequences of the symmetries of space and time,” or, equivalently, “from point-of-view-invariance” using space and time as a framework for constructing models that have invariance under time-translation, space-translation, and space-rotation. Stenger asks:
where does point-of-view invariance come from? It comes simply from the apparent existence of an objective reality—independent of its detailed structure. Indeed, the success of point-of-view invariance can be said to provide evidence for the existence of an objective reality … If we did not have an underlying objective reality, then we would not expect to be able to describe observations in a way that is independent of a reference frame.
If symmetry is the star performer of twentieth century physics, “broken symmetries” are no less important. Stenger discusses symmetry violations, arguing broken symmetry is a fundamental fact about the universe. He counts broken symmetries as a good thing, “at least from a human perspective. Without this complexity and diversity, the Universe would be a dull place indeed, and furthermore we would not be here to be bored by it.”
From PPOVI and other assumptions and principles (e.g., Noether’s Theorem), Stenger elegantly derives all the laws of classical, relativistic and quantum physics (Mathematical supplements A to G). [p. 56] This is an impressive tour de force. Stenger is clear: “The principle of point-of-view-invariance … is an eminently testable, falsifiable principle. So far, it has not been falsified.” Nothing guarantees the agreement. The universe might have turned out to be otherwise.
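To give a flavour of the symmetry-to-conservation link these derivations trade on, here is a standard textbook illustration (an editorial example, not a reproduction of Stenger's supplements). For a system described by a Lagrangian L(q, q̇, t), Noether's theorem ties each continuous symmetry to a conserved quantity. If L has no explicit dependence on t (time-translation invariance), then along any solution of the Euler–Lagrange equation the energy

E = \dot{q}\,\frac{\partial L}{\partial \dot{q}} - L \qquad \text{satisfies} \qquad \frac{dE}{dt} = -\frac{\partial L}{\partial t} = 0,

so energy is conserved. If L does not depend on the coordinate q itself (space-translation invariance), the Euler–Lagrange equation gives d/dt(∂L/∂q̇) = ∂L/∂q = 0, so the momentum p = ∂L/∂q̇ is conserved.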
Significantly, Stenger does not claim to derive all the laws of physics, such as the second law of thermodynamics, which says that the entropy of an isolated system must remain constant or increase with time. He points out that a broken vase does not reassemble itself. It is not a universal law of physics. It holds at the macroscopic level, describing the average behaviour of systems of many particles, but not at the molecular level and below (atomic, nuclear, subnuclear).
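A toy calculation (an editorial illustration, not Stenger's) shows why the second law is statistical rather than fundamental. The probability that all N gas molecules in a box happen to sit in the left half at a given instant is (1/2)^N, which is appreciable for a handful of molecules but vanishes for macroscopic N:

import math

# chance that all n molecules are in the left half of the box
for n in (4, 10, 100):
    print(n, 0.5 ** n)            # 0.0625, about 0.001, about 8e-31

# for a macroscopic sample (about 6e23 molecules) work in logarithms to avoid underflow
print(-6e23 * math.log10(2))      # about -1.8e23: a fluctuation never seen in practice

At the scale of a few molecules, entropy-lowering fluctuations happen all the time; averaged over astronomical numbers of particles they are unobservable, which is the sense in which the second law holds only at the macroscopic level.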
This PPOVI concerns the models of reality physicists produce and is consistent with the kind of objectivity they seek. These models cannot depend on any particular point of view. The models are then to be tested empirically. This is a principle about model construction and testing. It is an epistemic principle, guiding physicists’ enquiries into the universe. Physicists and their construction and testing of models are an essential presupposition of this principle. The principle does not specify any model, but rather governs the production of any model. Thus, this principle is not reducible to some actual model of reality that meets the requirement stated by the principle, for example a model possessing certain kinds of symmetry.
I accept Stenger’s derivation of the laws of physics shown in his supplements A to G, and now want to draw conclusions from this part of his work. The derivations (not just the conclusions) may be gathered and represented as R: PPOVI, AOA => L. AOA stands for “all other assumptions” (e.g., about time, space, and matter), which Stenger uses in his arguments to derive the laws of fundamental physics L. The L are the conclusion to Stenger’s argument, but R is needed to represent the whole argument. After all, these derivations are what are distinctive about Stenger’s work. The derivations show that the fundamental laws [p. 57] of physics appear to conform to PPOVI. As noted, nothing guarantees the agreement. The universe might have turned out to be otherwise.
The subtitle of Stenger’s book asks, Where do the laws of physics come from? The derivations already discussed do not answer this question, for they do not explain how the universe appears to have been operating according to these laws from the earliest moments after the Big Bang. To seek help on this subtitle, we turn to his account of the origin of the universe. Stenger’s account of the universe’s origin sums up physics with the view that the known symmetries are the low energy consequences of the breaking of high energy symmetries. The breaking of symmetries “could be dynamical, that is, the result of some ‘lawful’ higher process lying still undiscovered.” More simply, symmetries could be broken spontaneously, “by a phase transition analogous to the breaking of symmetry when a magnet cools below the Curie point.” Symmetry breaking is a violation of PPOVI. It corresponds to a particular viewpoint being singled out. In the spontaneous symmetry breaking, the underlying model remains symmetric. Symmetry breaking does not contradict the idea of PPOVI.
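A textbook toy model (an editorial illustration, not Stenger's wording) makes the last point concrete. Take a potential that is symmetric under φ → −φ:

V(\varphi) = -\tfrac{1}{2}\mu^{2}\varphi^{2} + \tfrac{1}{4}\lambda\varphi^{4}, \qquad \mu^{2}, \lambda > 0.

Its minima lie at φ = ±μ/√λ, so the system must settle into one of two asymmetric ground states even though V itself is perfectly symmetric. The realised state singles out a particular “viewpoint”, but the underlying model does not, which is the sense in which spontaneous symmetry breaking leaves PPOVI intact.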
Exactly what that higher symmetry is still has to be discovered. PPOVI simply requires symmetry without specifying any particular symmetry group. Stenger’s view is that empirical and theoretical indicators show that supersymmetry (invariance under transformations between bosons and fermions) will likely be part of any future unified model.
Stenger rejects the suggestion that the fine tuning of physical constants for life is the result of an external natural causal agent or “some agency beyond nature” designating a particular set of constants. Nor does he follow physicists who believe that the parameters currently determined by experiment will eventually be derived from some set of basic principles. “It seems highly improbable, however, that any purely natural set of principles would be so intimately connected to the biological structures that happened to evolve on our particular planet.” In his view it is more likely that life evolved in response [p. 58] to the physical parameters characterising our universe. Spontaneous symmetry breaking would mean the values of the constants arose by accident. “If we had an ensemble of universes, then the parameter values in our Universe arose from a random distribution—with no external, causal agent designating one particular set.” Stenger’s view is that the “observable universe, in fact, looks just as it would be expected to look in the absence of any such agent. The laws of physics are … ‘lawless laws’ that do not arise from any plan but from the very lack of a plan. They are the laws of the void.”
By void, Stenger means a vacuum that has zero vacuum energy. Various possible ways of thinking about zero energy are considered, viz., super-symmetric vacuum: negative energy solutions for the energy field. The issue is “how to get matter from a symmetric void.” Stenger appears to offer two answers, which I will not discuss here, in terms of quantum tunnelling and of the collapse of the symmetric void. While I have questions about these answers, I will show that my larger argument has no need to resolve these and other possibilities, including a multiverse. I can happily wait upon these matters to be resolved scientifically.
Moving from Physics to Metaphysics: Can the Move Be Justified?
The theme of this paper is the move from physics to metaphysics and so the motivation for this move is sought from within physics. Previously, the motivation for espousing scientific naturalism was the expanding success of scientific explanations, the basis for a positive induction that every question about our universe will be similarly answered. Here it [p. 59] is found in Stenger’s derivations of the form of the laws of physics L, which may be summarised as R: PPOVI, AOA => L.
There is an apparent oddity in R. The L, operating since very soon after the Big Bang, is explained in terms of PPOVI, which refers to a principle used by enquirers who only show up billions of years later. This seems odd and leads to the question: is R true of the L and so true of the L operating from the earliest moments after the Big Bang? PPOVI yields laws that hold for all viewpoints and reference frames, including those located soon after the Big Bang. If we answer affirmatively, then we may wonder how it comes about that the L operating from the earliest moments after the Big Bang are derivable from premises that nontrivially include PPOVI, which refers to physicists conducting their enquiries billions of years later?
From a different angle, anyone working from a strongly naturalistic standpoint may be skeptical about this question, not giving it much weight and certainly not allowing anything to be built on a mere question. This skepticism would aim to show how R can be explained wholly within the resources of the natural sciences and physics in particular. After all, R has been obtained using these resources. If the oddity of R is only apparent, explicable after all in terms of the resources of the natural sciences, there would then be no justification for seeking a metaphysical explanation of R. Call this, blocker #1. Also, if it were reasonable to interpret R as a brute fact and therefore without further explanation, there would be no justification for seeking a metaphysical explanation of R. Call this, blocker #2. It can be shown that the resources of the natural sciences are logically unable to explain R. Blocker #1 is defeated. It can be shown that, logically, it is unreasonable to treat R as a brute fact. Blocker #2 also is defeated. [p. 60]
How Blockers #1 and #2 Are Defeated
Blocker #1 seeks a physical theory Tphys that explains R. In brief, a physical theory Tphys is:
- a “blind” causal explanation of physical events and processes; “blind” means no final causes, goals, purposes built in;
- the causal explanation is described mathematically and aims to derive a mathematical description of what is to be explained;
- open to empirical testing.
Blocker #1 would be Tphys => R. A series of problems are foreseeable:
R is the wrong kind of explanandum for any Tphys
- R is a rational inference. It stands in the logical space of reasons, not in the very different logical space of subsumption under natural laws.
- Logically, R can never be obtained from any Tphys (as defined).
Tphys has to provide PPOVI for the derivation of R to succeed.
- If Tphys includes PPOVI, then Tphys is not “blind.” PPOVI is about physicists pursuing valued epistemic ends guided by PPOVI in some universe, which Tphys at least in this way presupposes.
Can Tphys lead to PPOVI?
- Physics alone cannot do this; it took the evolving processes of the 13.7-billion-year-old universe (physical, chemical, biological, and cultural) to bring about the existence of enquirers guided by PPOVI.
Conclusion: Any physical theory (so construed) logically cannot explain R. Blocker #1 fails. [p. 61]
Blocker #2 claims it is reasonable to treat R as a brute fact about the universe. Consider the following argument concerning R:
- If no scientific or nonscientific explanation of R is possible, R is a brute fact.
- No scientific theory can explain R.
- No nonscientific explanation of R is possible.
- Therefore, R is a brute fact.
The argument is valid. But if we reject the conclusion, as stated in the final dot point, which of the three preceding premises will we reject?
- R established above.
- Says what is meant by a brute fact.
- This is the failure of blocker #1.
- Says that there is nothing outside or beyond what the natural sciences can tell us, that can explain R.
How shall we assess this last point? An initial question is how do we know that no non-scientific theory can explain R? That would be the case only if we assumed scientific naturalism with its methodological, epistemic, and metaphysical theses. The latter says that all there is is what physics says there is, or complex configurations of the same. But with R we are concerned with something that scientific theories logically cannot explain, something beyond the scope of scientific theories.
PPOVI is obtained initially quite independently of knowing the evolutionary cosmology of the 13.7-billion-year-old universe. It is obtained by rational enquirers, with certain aims and some general beliefs about rationality and about how the world operates deciding what standards rationally ought to be met by actions directed to achieve valued epistemic ends. Analogous considerations have their place in practical actions like shooting an arrow from a bow to hit a target. We know about rationality because human beings instantiate rationality, [p. 62] whereby they think and act for various reasons, but this is known independently of how the origins of that instantiation might be explained.
This is one argument for thinking of PPOVI as something beyond the theories of natural science, yet PPOVI is nontrivially involved in explaining the form of the laws of fundamental physics L, as shown in R. This provides rational grounds for wondering if something beyond the natural sciences might explain R. But the penultimate dot point would lead us to expect any such explanation to be impossible. Hence the penultimate dot point should be set aside as unreasonable. Therefore, the conclusion stated in the last dot point does not follow, and we reasonably set aside the claim that R is a brute fact. Note that this result is not based on Leibniz’ principle of sufficient reason. Blockers #1 and #2 fail. We are therefore justified in seeking further—beyond the resources of the natural sciences and physics in particular—a metaphysical explanation of R, including the oddity in R.
A Metaphysical Explanation of R
Seeking such an explanation is guided by the question, “What must minimally be assumed to hold to explain R?”
Any explanation of R must provide PPOVI. Whatever provides PPOVI is something that has language, that has access to the logical space of reasons, and thereby logic and mathematics, and it knows about intentionality—PPOVI assumes embodied rational agents (humans or aliens) in a universe (whether our universe only or within a multiverse) pursuing valued epistemic ends concerning that universe.
These are very good grounds for saying that only something capable of rational thought can provide PPOVI. This “something” should be thought of as some kind of rational agent, “RA.” A rational agent must be assumed because thought alone is not enough to explain the existence of any universe or multiverse however conceived. To explain how R holds for our universe, we must assume that RA envisages a universe at least for which R holds, as in the preceding paragraph. That is, we must minimally think of RA envisaging a universe at least operating [p. 63] according to L and for which AOA holds, for which PPOVI also holds, and that the universe so envisaged eventually produces embodied rational agents capable of pursuing valued epistemic ends guided by PPOVI.
We may properly treat this as the end/purpose RA envisages for this universe. This purposive explanation arises from within the argument rather than being imposed. (This purposive explanation at least invites the question of whether this end may be included in any larger end RA possibly envisages for this universe.) For R to be true of an existing universe, RA must also be understood as somehow bringing about this envisaged, but so far in this argument, not existing universe. Meeting this requirement would allow the developing explanation to be an answer to the question: Why is the universe structured and structured according to the laws of physics?
If the argument from Stenger’s work to this point was all we had to go on, a Kantian note would be that the most we could claim would be that RA is the architect of the envisaged universe, to be produced from some pre-existing stuff. We began the argument, however, from a speculation starting from the observation that human enquiry presupposes that what is being enquired into is intelligible and open to rational explanation, but without prejudice to the forms of intelligibility and rationality that may be called for. Based on the relentless expansion of human enquiry that is apparently unending, the speculation generalises that presupposition by assuming that all there is, is fully intelligible. That generalised presupposition blocked the idea of an infinite regress of explanations of the universe and the idea of the universe being a brute fact. The generalised presupposition entailed the expectation of answers to two unavoidable questions: “why is there anything at all?” and “why is the universe structured—and structured the way it is?” Based on Stenger’s work, we have the beginning of an answer to the second question. This supports the generalised presupposition and therewith the first question. Earlier we found the beginning of an answer to the first question by arguing to the idea of God, the creator of all there is ex nihilo—that is to say, not from preexisting stuff. Should we think that God creates RA or identify God as RA? If the [p. 64] first, then God must at least already have all the characteristics of RA, allowing us to identify God as RA. This is the simplest explanation of Stenger’s result R.
We may conclude that God, the creator of the universe ex nihilo, has structured the universe (at least) in terms of the laws of physics in order that the universe be knowable by embodied rational agents (human or alien) through empirical enquiry guided by PPOVI.
The paper presents a new way of proceeding from physics to metaphysics, largely drawing on a speculation about the universe, based on: the relentlessly expanding success of the natural sciences; the observation that any scientific enquiry presupposes that what is enquired into is intelligible and open to rational explanation; and Stenger’s derivation of the laws of physics from premises that include PPOVI. Stenger’s result has an oddity that the laws of physics operating in the universe including from the earliest moments after the Big Bang are derived from premises that include PPOVI, an assumption about what only shows up billions of years later. The oddity could be tested and refuted by showing it can be explained entirely within the resources of physics. It is shown that this testing fails in principle. This critique of scientific naturalism is independent of other criticisms in circulation (see n. 3), and so contributes something new to the literature on scientific naturalism and physicalism in particular.
Generalising the presupposition of human enquiry led to having to face the questions “why is there anything at all?” and “why is the universe structured—and structured the way it is?” Answering the second question began by noting that the laws of physics must surely count as partly identifying how the universe is structured. Drawing on Stenger’s work, the argument led to the conclusion that the laws of physics are the way they are in order that the universe be knowable by embodied rational agents conducting empirical enquiries in the light of PPOVI. This leads to the expectation of other laws or other ways the universe [p. 65] is structured to bring such embodied agents into existence, and this may be pursued for example together with Daniel Dennett and Paul Davies. This line of thought leads to the expectation of a solution to the hard problem of consciousness, which may be pursued, for example, in conversation with Robert Spitzer and Daniel A. Helminiak, concerning proposed solutions to this problem.
Challenges, strengths, and limitations of this argument
Two important challenges have been raised in discussions. The first claims that my use of PPOVI represents a category mistake, because PPOVI is a methodological principle guiding research not an ontological principle, making ontological proposals. This claim is correct and concurs with Stenger’s thought that if “the models of physicists can be used to successfully describe previous observations and predict future ones, then we can use them without getting into metaphysical questions.” It turns out, however, that PPOVI can lead to ontological consequences for anyone embracing scientific naturalism. This is shown in my discussion of blockers #1 and #2. The challenge does not attend to this argument justifying the move from physics to metaphysics. In my opinion, there is also a hint of metaphysics in Stenger’s view of physicists as seeking “universality,” or an “accurate representation of the objective reality that they assume lies beyond the perceptions of any single individual.”
A second challenge is that there may be alternative approaches aiming to explain why the laws of physics are the way they are. If so, would Stenger’s result be all that significant, when there may be other premises X, such that X => L? If this were the case, why build [p. 66] anything based on R? I accept this as a proper concern. The search for contenders for such an X is evident, for example, in the work of P. Davies and Roberto M. Unger and Lee Smolin, though with derivations only as promissory notes. On the other hand, B. Roy Frieden has actually derived many of the laws of physics starting from Fisher information. This is the form of information introduced by R. A. Fisher at Cambridge, in the 1920s, who showed that Darwin’s theory of evolution by natural selection and Mendel’s genetics made sense statistically. Later, the mathematical form of what came to be called “Fisher information,” in honour of Fisher’s earlier research, showed up independently in the work of Harald L. Cramer and C. Radhakrishna Rao. They were theorising about how to measure a quantity that is subject to “noise” and so is fluctuating around some mean value θ. It is known as “classical measurement theory.” Their celebrated result is the Cramer-Rao Inequality (CRI): I e2 ≥ 1, where e2 is “the mean square [p. 67] error in the measurement-estimates of the fluctuating parameter” and I is the “Fisher information.”
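For readers unfamiliar with the quantity, here is a compact statement (standard estimation theory, not specific to Frieden). If data x are drawn from a distribution p(x|θ) depending on an unknown parameter θ, the Fisher information is

I(\theta) = E\left[\left(\frac{\partial \ln p(x \mid \theta)}{\partial \theta}\right)^{2}\right],

and the Cramer-Rao Inequality says that the mean square error of any unbiased estimate of θ obeys e² ≥ 1/I(θ), that is, I e² ≥ 1: the more information the data carry about θ, the smaller the error that can in principle be achieved.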
Of interest is that the approaches of Stenger and Frieden make human enquiry central to the derivation of the laws of physics. Stenger assumes reality exists independently of what human beings know about it and draws the conclusion that physicists’ view of the universe cannot be dependent on a particular viewpoint. This is the basis of his PPOVI, central to his derivations of L. Frieden starts from classical measurement theory to determine the mean value of a fluctuating parameter. This argument is set within the space and time of classical physics. Frieden shows how this leads to “Fisher information” I, and the derivation of the Lorentz transformation, with the result that I is shown to be invariant and covariant under the Lorentz transformation. This provides a different basis for arriving at point of view invariance. Further comparison of the two approaches would highlight the role of Noether’s theorem in Stenger’s approach (see n. 17) and “Fisher information” which has the mathematical form of what is called an “action integral.” Stenger’s result is summarised, R: PPOVI, AOA => L, whereas [p. 68] Frieden’s result may be summarised RF: EF, AOAF => L, where EF represents idealised parameter measurement, AOAF stands for “all other assumptions,” and the subscript F indicates Frieden’s approach. That comparison will be for another time, as will comparing any other approaches to deriving the laws of physics, especially as they take account of dark matter and dark energy. A third challenge is based on studies examining whether physical constants vary over time. Stenger’s argument has basic physical constants invariant over time, which is still the standard view.
A limitation of the argument at its present stage is that its theology is undeveloped in several ways. Philosophically, the idea of God entered the argument as an answer to the question “why is there anything at all?”, which is a thread in a larger canvas of natural theology for which I would especially commend Spitzer’s The Soul’s Upward Yearning. It is what allowed me to draw on Aquinas via Laughlin’s “Divine Necessity.” Spitzer’s argument would reframe the idea of God used here, just as it reframes the idea of God as the architect of the universe. This still larger idea of God would call us to engage questions such as what kind of world we should expect God to create. Another limitation (and strength) is that the argument leaves open how the universe came to be structured the way it is. Part of that answer will be given by physicists working on the physics of this question, and I wonder what theology might contribute, for example, to colleagues wanting to understand how God supposedly creates all there is ex nihilo. Another limitation is that no appeal has been made to the Christian understanding of the vulnerable yet invincible triune God. This is a methodological limitation because this is where I want to begin to engage those who do not share this or any understanding of [p. 69] God, who happily live and work within a naturalistic view of the world and its accompanying narrative.
A strength of the argument is that the conclusion is independent of whatever physicists finally conclude about a multiverse. A consequence of the multiverse idea in its various forms (though not its motivation) is a “Darwinian” style objection to any purposive account of why the universe is structured the way it is. That objection does not apply here since my argument does not depend on rejecting the multiverse idea. A purposive answer to why the universe is structured, and structured the way it is, is arrived at from within the argument, rather than being imposed. This purposive answer does not trouble nor is it troubled by Darwinism. It provides a purposive account of natural laws that undergird the operation of the universe including Darwinian evolution. It means the “Watchmaker” is not blind, though the full purpose of God in creation is not thereby revealed. Allow me to illustrate. The room where I am working is filled with “blind” processes that have been set in place for a range of purposes. This is also true of the blind processes in our universe. (We need to be careful about the inference from blind to purposeless.) The designers of my workspace had their immediate purpose and their ultimate purpose. Even if we could infer the former from the blind processes (back engineering), in order to know the latter we would need the designers to disclose or reveal their ultimate purpose. We have not yet considered any argument for the idea of God having any ultimate purpose, nor for God disclosing or revealing such a purpose for the created universe.
Another strength is that the argument allows an answer to why empirical enquiry by embodied rational agents is so important that it is included within (part of) the purpose for which the universe is created by God. The question returns us to the earlier discussion. While God exists necessarily, but not with logical necessity, God freely creates all there is ex nihilo. The created world reflects this freedom. Therefore, pure thought alone will not be able to deduce the correct understanding [p. 70] of the God-given, contingent processes of this universe. To approach that understanding, enquirers will have to investigate the particular processes with their senses. The above argument also leads us to think the created world will reflect the rationality of God, but without prejudice on the part of enquirers to the forms of intelligibility and rationality that might be called for in understanding the world; and, I would add, even more so to do with attempts to understand God. Therefore, enquiry into the universe must be sensory, intelligent, and rational. This goes some way towards characterising empirical inquiry. This argument leaves for another time an account of why God would be interested in such empirical inquiry taking place in this created universe.
This overall argument brings to light an account of divine purpose as immanent in the operation of the universe according to blind natural laws. This argument has nothing to do with Intelligent Design, Anthropic principles, Fine Tuning, nor the old argument from design. It is not a “gaps” argument, nor does it entail deism, and makes no use of Leibniz’ principle of sufficient reason. It is unaffected by whatever turns out to be physicists’ conclusion about the multiverse proposal. This is an argument from physics to metaphysics. It is metaphysics because it goes beyond physics to what physics does not enquire into. It is not a physical explanation, but an explanation of the physical in terms of the purpose for which the laws of physics are the way they are.
It is however a metaphysics of enquiry sustaining the principle of point-of-view invariance. Given its key result, it logically cannot conflict with empirical enquiry. This argument is certainly not a science stopper! It logically cannot inhibit either empirical or theoretical enquiry in physics or any other science. On the contrary, it strongly encourages the continuing exploration of both physics and metaphysics as deeply in accord with why the universe is the way it is.
Brian Cox rightly praises the scope and detail of our scientific knowledge of the vast universe, though we are located on this speck of [p. 71] a planet. While he acknowledges this contrast, the contrast does not itself lead to any wondering about how this is possible. Presumably, this is because the scope of scientific methods of enquiry and the empirical vindication they offer is well known. The contrast between the speck and its vast context does lead to big questions, such as “why is there anything at all?” and “why are we here?” Cox takes these as questions about nature and as scientific questions, as if there are no other kinds of questions about nature. This paper offers an answer to these big questions, not a scientific answer, but a metaphysical one entirely friendly to the sciences.
Victor Stenger derived a great many of the laws of physics, and the derivation entailed an oddity. This paper identifies and explains the oddity, after showing that the natural sciences logically could not explain it. Another way of stating the oddity is that the people telling the scientific story of the universe cannot be properly located within the story. Stenger also cited the famous statement of Einstein, that the most incomprehensible thing about the universe is that it is so comprehensible. This paper begins to indicate how we might make the stunning comprehensibility of the universe comprehensible.
The author reports there are no competing interests to declare.
Received: 06/06/22 Accepted: 08/09/22 Published: 15/09/22
E. B. Davis and R. Collins, “Scientific Naturalism,” in G. B. Ferngren, Science and Religion: A Historical Introduction (Baltimore: Johns Hopkins University Press, 2002), 322.
C. Taylor, A Secular Age (Cambridge, MA: Belknap Press of Harvard University Press, 2007), 589. See also ibid., 548, 566.
The most philosophically developed form of scientific naturalism is physicalism. David Papineau, “The Rise of Physicalism,” in The Proper Ambition of Science, ed. M. W. Stone and J. Wolfe (London: Routledge, 2000); David Stoljar, ‘Physicalism’, Stanford Encyclopaedia of Philosophy at http://plato.stanford.edu/entries/physicalism/ (2001); James Ladyman and Don Ross, Everything Must Go: Metaphysics Naturalised (Oxford University Press, 2007). As well as defenders of physicalism, there are its critics. C. Hempel, “Reduction: Ontological and Linguistic Facets,” in Essays in Honour of Ernest Nagel, ed. S. Morgenbesser et al. (New York: St Martin’s Press, 1970). See Papineau, “The Rise of Physicalism,” 183 for his response to Hempel. See also J. Haught, Is Nature Enough? Meaning and Truth in the Age of Science (Cambridge University Press, 2006); C. Cunningham, Darwin’s Pious Idea: Why the Ultra-Darwinists and Creationists both Get It Wrong (Grand Rapids, MI: Eerdmans, 2010); S. Ames, “The Rise and Consequences of Scientific Naturalism,” in Anthropos in the Antipodes, ed. R. Horner, P. McArdle, and D. Kirchhoffer (Melbourne: Mosaic Books, 2013); S. Ames, “Critique of Daniel Dennett’s From Bacteria to Bach and Back: The Evolution of Minds,” Journal of Bioscience & Bio Engineering 3:1 (2022): 1–7.
For a technical account of the meaning of metaphysics, see Neil Ormerod, “Bernard Lonergan and the Recovery of a Metaphysical Frame,” Theological Studies 74 (2013): 960–982. Ormerod (ibid., 963) returns to Aristotle’s distinction between metaphysics as first philosophy and other “sciences” such as mathematics and physics. Cf. Aristotle, Metaphysics 4.1, 1003a24. See also J. Loux, Metaphysics: A Contemporary Introduction (Oxford: Clarendon, 1998).
With some differences, here I am very much influenced by B. Lonergan, Insight: A Study of Human Understanding, ed. F. E. Crowe and R. M. Doran (Toronto: Lonergan Research Institute of Regis College and University of Toronto Press, 2000), chs 19–20; B. Lonergan, “The General Character of the Natural Theology of Insight,” in Philosophical and Theological Papers 1965–1980: Collected Works of Bernard Lonergan, vol. 17, ed. R. C. Croken and R. M. Doran (Toronto: Lonergan Research Institute of Regis College and University of Toronto Press, 2004),1–10; B. Lonergan, Method in Theology (New York: Herder and Herder, 1972), 101–103; R. Spitzer SJ, The Soul’s Upward Yearning: Clues to Our Transcendent Nature from Experience and Reason (San Francisco: Ignatius Press, 2015), ch. 3 and Appendix 2; K. Ward, Rational Theology and the Creativity of God (Oxford: Basil Blackwell, 1982); K. Ward, “God as a Principle of Cosmological Explanation,” in Quantum Cosmology and The Laws of Nature, ed. R. J. Russell, N. Murphy, and C. J. Isham (Vatican City State and Berkeley, CA: Vatican Observatory Publications and the Centre for Theology and the Natural Sciences, 1996), 247–262; K. Ward, “God as the Ultimate Information Principle,” in Information and the Nature of Reality: From Physics to Metaphysics, ed. P. Davies and N. H. Gregersen (Cambridge University Press, 2010), 282–300.
Ward, Rational Theology, 8.
L. M. Krauss, A Universe from Nothing: Why There is Something Rather Than Nothing (New York: Free Press, 2012), 167, 170. Here, Krauss concedes that if God is understood as the cause of all causes, then there is no regress of explanations.
Ward, Rational Theology, 7–8.
P. Davies, The Goldilocks Enigma: Why is the Universe Just Right for Life? (London: Allen Lane, 2006), 231; P. Davies, “Universe from Bit,” in Information and the Nature of Reality, 66.
P. Laughlin, “Divine Necessity and Created Contingence in Aquinas,” The Heythrop Journal (2009): 648–657. Laughlin’s article is also highly influenced by Lonergan’s work Grace and Freedom as a reading of Aquinas on these issues.
Laughlin, “Divine Necessity,” 654.
Laughlin, “Divine Necessity,” 655.
V. J. Stenger, The Comprehensible Cosmos: Where Do the Laws of Physics Come From? (New York: Prometheus Books, 2006).
See the table of the basic laws of physics in Stenger, The Comprehensible Cosmos, 113–114.
Stenger, The Comprehensible Cosmos, 15, 55, 65.
Stenger, The Comprehensible Cosmos, 56, 157–159.
Stenger, The Comprehensible Cosmos, 9, 10, 15.
Stenger, The Comprehensible Cosmos, 57.
Stenger, The Comprehensible Cosmos, 57.
Stenger, The Comprehensible Cosmos, 187. In my opinion, this is a hint of metaphysical realism underlying PPOVI.
Stenger, The Comprehensible Cosmos, 97–106.
Stenger, The Comprehensible Cosmos, 102.
Stenger, The Comprehensible Cosmos, 58.
Stenger, The Comprehensible Cosmos, 161.
Stenger, The Comprehensible Cosmos, 21–22, 117.
Stenger, The Comprehensible Cosmos, 166.
Stenger, The Comprehensible Cosmos, 168.
Stenger, The Comprehensible Cosmos, 169.
Stenger, The Comprehensible Cosmos, 148.
Stenger, The Comprehensible Cosmos, 150, 170.
E. Carlson and E. J. Olsson, “Is Our Existence in Need of Further Explanation?” Enquiry 41:3 (1998): 255–275.
W. Sellars, “Empiricism and the Philosophy of Mind,” in The Foundations of Science and the Concepts of Psychology and Psychoanalysis, ed. H. Feigl and M. Scriven (University of Minnesota Press, 1956), 253–329; J. McDowell, “Naturalism in the Philosophy of Mind,” in Naturalism in Question, ed. M. De Caro and D. Macarthur (Harvard University Press, 2004), 91–105.
D. Dennett, From Bacteria to Bach and Back: The Evolution of Minds (Allen Lane, 2017).
P. Davies, The Demon in the Machine: How Hidden Webs of Information Are Solving the Mystery of Life (Allen Lane, 2019).
Spitzer, The Soul’s Upward Yearning, ch. 6.
D. A. Helminiak, Brains, Consciousness and God: A Lonerganian Integration (Albany: Suny Press, 2015), chs 4 and 5.
Stenger, The Comprehensible Cosmos, 8.
Davies, “Universe from Bit.”
R. Unger and L. Smolin, The Singular Universe and The Reality of Time (Cambridge University Press, 2015).
B. R. Frieden, Science from Fisher Information: A Unification (Cambridge University Press, 2004); B. R. Frieden and R. A. Gatenby, eds, Exploratory Data Analysis Using Fisher Information (London: Springer Verlag, 2007). Frieden’s work has been criticised by D. Lavis and R. Streater, “Physics from Fisher Information,” Studies in the History and Philosophy of Modern Physics 33B:2 (2002): 327–343; for example, that his earlier derivation of quantum mechanics in effect assumed the De Broglie hypothesis. Frieden subsequently showed how the hypothesis can be derived from his “Fisher information” approach to physics. See B. R. Frieden and B. H. Soffer, “De Broglie’s Wave Hypothesis from Fisher Information,” Physica A—Statistical Mechanics and Its Applications 338:7 (2009). A senior physicist, T. Kibble, once required me to provide evidence, independent of Frieden, for thinking there was any fundamental connection between Fisher information and physics. I sent him the following paper which he had not known, but which he conceded did indeed provide that evidence. S. L. Braunstein and C. M. Caves, “Statistical Distance and the Geometry of Quantum States,” Phys. Rev. Lett. 72:22 (1994): 3439–3443. These brief comments on Frieden’s work are drawn from my (unpublished) PhD thesis at the University of Melbourne, 2005, “Cosmology and the Metaphysics of Enquiry: Towards a Non-Materialist Metaphysical Research Programme that Explains and Derives the Fundamental Laws of Nature.”
H. L. Cramer, Mathematical Methods of Statistics (Princeton University Press, 1946).
C. R. Rao, “Information and Accuracy Attainable in the Estimation of Statistical Parameters,” Bull. Calcutta Math. Soc. 37 (1945): 81–91.
The mathematical form of Fisher information I is called an “action integral.” It is natural in the sense that it follows logically from the assumptions from which the Cramer-Rao inequality (I e2 ≥ 1) is derived. These assumptions concern the measurement of a parameter of a system undergoing fluctuations. The measurement proceeds by a probe particle fired at and interacting with the system to be measured. This happens under ideal epistemic conditions (e.g. no noise from the measurement system; see Frieden, Science from Fisher Information, 98). In this context and from other properties of Fisher information I, Frieden forms another action integral K characterising the measurement interaction. Frieden postulates that K has the property that an infinitesimal variation of K, denoted by δK, is zero, i.e. δK = 0. To put the matter briefly, δK = 0 allows Frieden to use the rich mathematical resources of Lagrangian Mechanics (so named after famous French mathematician Joseph-Louis Lagrange, 1736–1813). The use of these resources leads to second order differential equations of the kind we see in the laws of physics. This is the basis for Frieden’s derivations of many of the laws of physics. The extremum principle δK = 0 is also a symmetry principle and so makes connections to Noether’s Theorem mentioned earlier. See Frieden, Science from Fisher Information, 3 for an important comment on the use of Noether’s Theorem. For standard texts on the physics and mathematics, see J. B. Marion and S. T. Thornton, Classical Dynamics of Particles and Systems (Fort Worth: Saunders College Publications, 1995), 214–217; H. Goldstein, Classical Mechanics (Massachusetts: Addison-Wesley, 1959), 37–38.
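To spell out the step from the extremum principle to second order differential equations (a standard result of the calculus of variations, added here for clarity): for an action integral K = ∫ f(q, q′, x) dx, the requirement δK = 0 yields the Euler–Lagrange equation

\frac{d}{dx}\frac{\partial f}{\partial q'} - \frac{\partial f}{\partial q} = 0,

which is in general a second order differential equation in q, the kind of equation Frieden’s derivations arrive at for the laws of physics.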
M. R. Wilczynska et al., “Four Direct Measurements of the Fine-Structure Constant 13 Billion Years Ago,” Science Advances 6:17 (2020), DOI: 10.1126/sciadv.
S. Ames, “Why would God use evolution?” in Darwin and Evolution in Interfaith Perspectives, ed. J. Arnould (Adelaide: ATF Press, 2009), 105–126.
Among many works, see E. M. Conradie, The Earth in God’s Economy: Creation, Salvation and Consummation in Ecological Perspective (Zurich: Lit Verlag, 2015).
For an indication of such an argument see Ames, “Why would God use evolution?” 112, 116–122. | physics |
https://www.expando.se/expando-participates-in-deflecting-asteroids-with-project-hera/ | 2023-09-26T01:32:47 | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510130.53/warc/CC-MAIN-20230926011608-20230926041608-00750.warc.gz | 0.935881 | 897 | CC-MAIN-2023-40 | webtext-fineweb__CC-MAIN-2023-40__0__278283239 | en | It’s been nearly half a century since humans left footprints on the moon and during that time, human space exploration has largely centered on manned low-Earth orbit missions and unmanned scientific exploration. But now, high levels of private funding, advances in technology and growing public-sector interest is renewing the call to look toward the stars.
We’re proud to highlight Expando’s participation in the world-renowned project Hera, a mission set on detecting near-Earth objects (NEOs), such as asteroids and meteors, that pose a threat to Earth. Here’s everything you need to know about planetary defense and the role we’re playing in it.
Planetary Defense and asteroid deflection
Planetary Defense involves detecting, monitoring, understanding, and mitigating NEOs. Whether or not an object is classified as a NEO depends on its orbit, size, and composition. Take the very first discovered NEO, Eros, for example. Eros, the first asteroid to be studied from orbit, has a length of about 16.8 kilometers.
For the NEOs that may impact Earth, Planetary Defense involves preventing or mitigating their impact. Prevention involves deflecting or disrupting the NEO’s orbit, and mitigation involves taking measures to protect people, such as evacuation, in cases where a NEO cannot be prevented from impacting Earth.
Discovering asteroids that may hit the Earth
There have been a number of academic and some technical studies that illustrate just how a devastating asteroid impact might be avoided. Asteroids outnumber comets 100 to 1 in the inner solar system, so asteroids pose more of a nearer-term threat to our planet.
To successfully mitigate a threatening asteroid we need to discover and physically characterize it in a timely manner that allows for the appropriate response.
If the object could be found far enough ahead of time and our space technology used to deflect it from its Earth threatening trajectory, it would be a tremendous demonstration of our space-faring capabilities!
Image: The European Space Agency
NEOs are a real risk
NEOs do hit Earth. A more recent impact occurred in 2013, when an asteroid exploded over Chelyabinsk, Russia. The asteroid was about 20 meters across, and the blast is known as the third largest impact event in recorded history. Its explosion released 6 to 33 times as much energy as the atomic bomb detonated over Hiroshima during World War II.
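To put rough numbers on that comparison (a back-of-envelope estimate of ours, using the commonly cited figure of about 15 kilotons of TNT for the Hiroshima bomb): 6 × 15 kt ≈ 90 kt and 33 × 15 kt ≈ 500 kt, which brackets the published estimates of roughly 400–500 kilotons for the Chelyabinsk airburst.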
In the rare event that a NEO does hit Earth, we need to be prepared to prevent it from happening or mitigate its impact. This is where project Hera comes in.
Understanding the risk is vital – which is why the European Space Agency (ESA) is building state-of-the-art Flyeye telescopes that will scan the sky for new asteroids. In the case that we discover one on a collision course with Earth, even if it has a small chance of impact, mitigation will matter. The best way to mitigate an asteroid threat is to deflect its orbit.
ESA’s upcoming Hera Mission
ESA’s upcoming planetary defence mission Hera is set to launch in October 2024. Its objective is to follow up on NASA’s asteroid impactor, DART, after it has impacted the 160-meter Dimorphos asteroid (the smaller body, or moon, of the Didymos binary asteroid system).
Altogether, DART’s impact and Hera’s data will help us understand whether this technique could be used in future to deflect an asteroid on a collision course with Earth. This is extremely valuable information for future asteroid deflection missions, and it increases our understanding of asteroid geophysics as well as solar system formation and evolutionary processes.
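How much can a single spacecraft actually move an asteroid? A back-of-envelope momentum-transfer estimate gives a feel for it. The figures below are rounded public estimates, not official mission numbers:

# kinetic impactor sketch: delta_v = beta * m * v / M (illustrative values only)
m_impactor = 580        # kg, roughly DART's mass at impact
v_impact = 6000         # m/s, roughly DART's impact speed
m_asteroid = 5e9        # kg, rough estimate of Dimorphos' mass
beta = 1.0              # momentum enhancement factor; greater than 1 when ejecta adds extra recoil

delta_v = beta * m_impactor * v_impact / m_asteroid
print(round(delta_v * 1000, 2), "mm/s")   # about 0.7 mm/s for beta = 1

A change of a fraction of a millimetre per second sounds tiny, but applied years before a predicted impact it shifts an asteroid's arrival time enough to turn a hit into a miss. Part of Hera's job is to measure Dimorphos' mass and the impact crater so that beta can be determined rather than assumed.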
Expando is a proud contributor of space components for the Hera mission. We work together with a network of organizations and projects contributing to the prevention efforts of this mission.
About Expando’s space technology
Expando provides advanced systems and services for commercial, military and government customers worldwide. With 20 years of experience, we have a broad portfolio that allows us to deliver high quality specialized solutions that push the boundaries in different environments, such as space, aerospace, defense, industrial and commercial avionics.
Expando AB is a regional expert offering planetary defence components and integration projects. We act as representative or distributor for first class suppliers and our aim is to be your long term reliable partner. Contact us to discuss your future projects. | physics |
https://chesapeakebay.noaa.gov/environmental-stem-education/student-built-buoys | 2017-04-27T16:47:58 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122619.60/warc/CC-MAIN-20170423031202-00510-ip-10-145-167-34.ec2.internal.warc.gz | 0.938193 | 253 | CC-MAIN-2017-17 | webtext-fineweb__CC-MAIN-2017-17__0__105117654 | en | The NOAA Chesapeake Bay Office helps a range of students—from elementary school through high school—build buoys to introduce them to concepts behind observational platforms and to help connect them with their local ecosystem—and to help track measurements in that ecosystem.
Build-a-Buoy (BABs): Elementary and Middle School Students
Students as young as kindergarteners can learn basic principles of science, technology, engineering, and math, as well as marine navigation and observation, through Build-a-Buoy projects, where budding Bay stewards design and build the basic structure of a buoy using PVC pipe. The buoys must float in shallow water and incorporate a platform to hold golf balls or other similar objects. The students build the buoy, float it, and add golf balls until it tips over. Through this part of the exercise, the students learn concepts of buoyancy, symmetry, and balance.
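The physics behind the golf-ball test is Archimedes' principle: a buoy can carry weight only up to the weight of the water its submerged parts displace, and it tips even sooner if the load sits off-centre. A rough worked example with made-up classroom numbers (not part of the official activity):

# simple buoyancy estimate for a PVC classroom buoy (illustrative numbers only)
rho_water = 1000           # kg per cubic metre of fresh water
displaced_volume = 0.002   # cubic metres of water displaced when fully loaded (about 2 liters)
frame_mass = 1.2           # kg for the PVC frame and platform

max_payload = rho_water * displaced_volume - frame_mass
print(round(max_payload, 2), "kg")   # about 0.8 kg, roughly 17 golf balls at 46 g each

Students can compare a prediction like this with how many golf balls their buoy actually holds before it tips or swamps.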
When they are successful, students install an indoor/outdoor thermometer on the buoy and drop the "outdoor" sensor into the water. The buoy can then measure air and water temperature and thus becomes a simple observation buoy. Students learn that buoys have different functions: marking the boundaries of an underwater road, marking obstructions hidden underwater, and taking observations. | physics |
https://www.ipevo.com/wishpool/story/414 | 2017-02-20T06:16:40 | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170425.26/warc/CC-MAIN-20170219104610-00261-ip-10-171-10-108.ec2.internal.warc.gz | 0.947052 | 166 | CC-MAIN-2017-09 | webtext-fineweb__CC-MAIN-2017-09__0__256887346 | en | How Teacher Craig uses Ziggi USB Document Camera in the classroom
Tiverton, United Kingdom
May 7, 2014
Awesome!! I project students’ work and they can all celebrate a successful, well-presented example of written work. It gives the students something to aspire to. I have used the Ziggi Document Camera to demonstrate how to draw graphs and how to set up apparatus to investigate the laws of reflection and refraction. I have even used it to show the dispersion of light by a prism. The Ziggi Document Camera has meant I have saved time and avoided the disruption to learning caused by students having to gather round the desk to see a teacher demonstration. I use my Ziggi Document Camera every day and it has now become an essential teaching tool I could not be without. Thank you Wishpool!
https://gardensofthesun.com/blogs/news/blue-diamonds | 2024-04-14T21:08:29 | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816893.9/warc/CC-MAIN-20240414192536-20240414222536-00296.warc.gz | 0.940336 | 2,186 | CC-MAIN-2024-18 | webtext-fineweb__CC-MAIN-2024-18__0__99460717 | en | Diamonds are seriously quirky. They're nothing like the shining thing that dots Uncle Scrooge's vast treasure vaults. They're not always diamond shaped, they're not always crystal clear (hello salt and pepper diamonds), and they're not even always the same color!
Have you met your first blue diamond yet? If you can't remember, you probably haven't. Because these babies are super memorable.
But first! Let's be clear. Blue diamonds are real.
Since the supply is so limited and the demand has skyrocketed, most blue diamonds on the market have been color enhanced. Does it make them fake? Absolutely not! They're as real as can be, they've just gone through a quick makeover.
Color treatments are deeply connected with diamonds, with the first treatment going as far back as 1904. A curious Sir William Crookes wanted to find out what radiation does to the color of a diamond. He used radium salts to slowly turn the classic clear diamond into a dark green. As it was only an experiment, it wasn't perfect. The green only showed as blotchy patches, and the color didn't penetrate beyond the surface of the stone.
Today’s technical advancements make it easy and affordable to enhance or even completely change color in diamonds. They're the preferred alternative as the prices for natural blue diamonds are off the charts for most folks (and hard to come by).
If you’re skeptical about the authenticity of these mermaid colored stones, or you simply want to know more about how they get to have such a wicked color, we have the answers to your burning blue diamond questions.
HOW do diamonds turn blue?
The traditional white (or technically colorless) diamonds that we’re most familiar with, are created by nature’s longer-term wizardry process, involving super-heated, highly pressurized carbon molecules close to the Earth’s core. Nature also makes green and blue colored diamonds by exposing them to radiation deep down under the Earth’s surface.
This radiation is what changes the position of the atoms within a diamond’s crystal structure, which affects the color.
Irradiation is the fancy word for exposing a stone to radiation.
Modern-day irradiation processes have revolutionized color treatments for gemstones.
There are four processes to change a diamond’s color through irradiation; cyclotron, gamma rays, electron, or neutron bombardment.
In 1942, scientists at the University of Michigan put some diamonds in a cyclotron (a type of particle accelerator) and bombarded them with heavy radiation of protons and deuterons to turn regular diamonds into vivid green stones.
After a short quarantine period to get rid of any leftover radioactiveness, the world had its first safe to wear artificially colored diamonds!
The color from cyclotron-treated diamonds was uneven and depended largely on the direction of the treatment. This method is rare nowadays.
Gamma ray treatment through exposure to cobalt-60 produces a blue to blue-green color that penetrates the entire stone.
Even though it's the cheapest and safest method of irradiating diamonds, it's also the longest, and can last for several months. This method is quite uncommon these days.
Scientists have refined irradiation into the processes that are more common today, blasting diamonds with high-energy neutrons or electrons. These modern processes are safer and bring out the most vivid and even colors.
A bombardment of neutrons from a reactor is 'fired' at the diamond. It gives a deep color, as the beam penetrates the entire stone.
The diamond is penetrated about 1 millimeter deep, while it’s exposed to tiny high-energy electrons.
Scientists and diamond nerds have tried to copy this process from nature for over a century. Color enhanced diamonds are real diamonds exposed to similar radiation, but over a shorter period of time, in a lab. The radiation can enhance, change, or brighten stones to all sorts of colors like pink, blue, green, yellow, red, purple and orange. So color treated diamonds are not grown in a lab, just treated. Think of it like dyeing your hair pink or blue. It's still your real hair.
Color treated diamonds tend to start their life as diamonds with 'undesirable' colors, like pale yellows or browns. They are either dramatically enhanced (e.g. from a pale yellow to vivid yellow) or changed completely.
Irradiated diamonds are not lab created, they're natural and real diamonds
After a purely experimental phase of changing diamond colors in the early 1940s, diamonds colored using irradiation flooded the market in the 1950s. As there was no simple test to distinguish hues created in nature from those changed in a lab, the market for colored diamonds crashed. Nowadays, a simple test using spectroscopes can tell natural from irradiated diamonds (each shows different spectra, or light absorption characteristics).
Diamonds are forever. Is a blue diamond also forever blue?
Irradiation is non-nuclear and leaves no kind of residual radiation behind. The color change is permanent, stable and irreversible under normal wear and tear.
The color is not affected by chemicals, ultrasonic cleaners, steam cleaning or polishing. Only when exposed to extremely high temperatures - like the 500 to 900 degrees Celsius fire blowing out of a jeweler's torch - blue and green colored diamonds may fade or even turn yellow.
Under normal circumstances, your diamond wouldn’t come in touch with such high temperatures. So this is only relevant when you’re taking your ring in for its annual checkup like prong repairs, resizing, cleaning or any other service.
Please be sure to tell your jeweler if your diamond has been irradiated. Your jeweler will know to take the appropriate precautions. Natural colored diamonds on the other hand are unaffected by heat.
Because this process is permanent, the GIA will grade and certify irradiated diamonds and can also laser inscribe the diamond to notify any potential buyer the diamond has been irradiated.
Are irradiated diamonds safe?
Out of all of the gemstone treatments currently on the market, irradiation is the one treatment that always raises questions. When the word 'irradiation' is whispered, the first thing that springs to mind is: 'is it safe?' The good news is: yes it is!
All diamonds have been exposed to natural radiation over the millennia before man unearthed them, so technically all colored diamonds have been irradiated. And this exposure doesn't make them radioactive.
Irradiation changes a diamond’s color and it's the only diamond treatment that exists in nature as well as in laboratory conditions.
Radiation is measured in millirem or Radiation Absorbed Dose. The United States Nuclear Regulatory Commission (NRC) did a comparison of a large 6 carat blue topaz stone (blue topaz is another gemstone that is commonly irradiated to obtain its blue color) against other common forms of exposure. Activities like an intercontinental flight or watching TV also expose you to certain levels of radiation.
A dose of wearing a blue topaz for one year = 0.03 millirem
Wearing a porcelain crown or false teeth for one year = 0.07 millirem
Chest X-ray = 60 millirem (2000x that of topaz)
Blue diamonds are safer to wear than a porcelain crown
Full disclosure: The truth about blue diamonds
Most irradiated diamonds have a very particular coloring to them that can be easy to spot once you've seen enough of them. You can look for certain terms used to describe them; 'irradiated diamonds' will always be used on official trade documentation and diamond grading reports. The terms 'color enhanced' and 'treated for color' are more widely used in the market.
With regard to irradiated stones, your jeweler should tell you whether the gemstone you’re looking at has been treated. This is really important, as treated gems may need special care. It may also significantly affect the value of the gemstone. All irradiated diamonds should have a full disclosure and must be presented as being color enhanced.
What else do you need to know about blue diamonds?
Blue, black, green and yellow are the most popular colors produced using the irradiation process. Orange, red, purple and pink colors are also possible, but more difficult to produce.
The overall clarity or imperfections of irradiated diamonds won't change with irradiation, but it can hide or disguise certain imperfections. Pretty much the same way a blue dress won’t look stained as easily as a virgin white dress. Irradiation may be followed by a high pressure, high temperature treatment to improve the stone’s clarity.
Advanced technology has enabled the jewelry industry to improve the visual appearance of lower grade diamonds by the process of laser drilling or fracture filling. This practice is referred to as 'clarity enhanced'.
Laser drilling, and fracture filling treatments in diamonds are considered to be huge alterations. So much so, they're no longer considered natural diamonds.
Will the value of the diamond change after enhancements? The value of a treated blue diamond is much lower (several 0's less) when measured against a comparable untreated stone, just because it's a lot faster and more certain to get this blue color in a lab than it is to wait for nature to perform one of its rare tricks.
Of course diamonds wouldn't be color enhanced if it didn't make them more appealing and desirable, and thus sellable. However, they are priced lower than a naturally occurring colored diamond or a less included diamond.
Color enhanced blue diamonds are probably the only blue diamonds us common folks can afford. Yet, these diamonds shouldn't be considered an investment in the same way a natural blue diamond would. So buy the diamond because you love it, not because you think you can sell it for a profit later.
Buy the diamond because you love it, not as an investment
So to sum it up, irradiated diamonds are not lab created. They're natural and real diamonds. They've been treated using radioactivity, and are safer to wear than a porcelain crown.
We don't have a lot of blue diamonds in stock, so grab one before it goes! | physics |
https://cicl.stanford.edu/publication/gerstenberg2021omission/ | 2022-05-23T05:47:32 | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662555558.23/warc/CC-MAIN-20220523041156-20220523071156-00529.warc.gz | 0.950096 | 274 | CC-MAIN-2022-21 | webtext-fineweb__CC-MAIN-2022-21__0__123083273 | en | When do people say that an event that didn’t happen was a cause? We extend the counterfactual simulation model (CSM) of causal judgment (Gerstenberg, Goodman, Lagnado, & Tenenbaum, 2021) and test it in a series of three experiments that look at people’s causal judgments about omissions in dynamic physical interactions. The problem of omissive causation highlights a series of questions that need to be answered in order to give an adequate causal explanation of why something happened: what are the relevant variables, what are their possible values, how are putative causal relationships evaluated, and how is the causal responsibility for an outcome attributed to multiple causes? The CSM predicts that people make causal judgments about omissions in physical interactions by using their intuitive understanding of physics to mentally simulate what would have happened in relevant counterfactual situations. Prior work has argued that normative expectations affect judgments of omissive causation. Here we suggest a concrete mechanism of how this happens: expectations affect what counterfactuals people consider, and the more certain people are that the counterfactual outcome would have been different from what actually happened, the more causal they judge the omission to be. Our experiments show that both the structure of the physical situation as well as expectations about what will happen affect people’s judgments.
https://www.euroscience.org/news/rammal-award-2012-honours-prof-abdeslam-hoummada/ | 2023-02-05T15:09:49 | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500255.78/warc/CC-MAIN-20230205130241-20230205160241-00784.warc.gz | 0.95859 | 313 | CC-MAIN-2023-06 | webtext-fineweb__CC-MAIN-2023-06__0__201506770 | en | Rammal Award 2012
EuroScience is pleased to announce that the Rammal Award 2012 will honour Professor Abdeslam Hoummada.
Prof. Hoummada has an outstanding career in Neutrino physics and High Energy particle physics. He is involved in the ATLAS experiment at the Large Hadron Collider (LHC) at CERN and his contribution to the discovery of the Higgs boson has been widely recognised.
Throughout his scientific career he has aimed at building bridges, not only between CERN and the scientific community of Morocco and other Mediterranean countries, but also between Morocco and the Middle East as an active member of SESAME (the synchrotron radiation facility for the Middle East). Prof. Hoummada sees High Energy Physics as a concrete bridge between the two sides of the Mediterranean. This idea is implemented in an “International Laboratory for Collider Physics” including Morocco, Algeria, Tunisia and France, to be launched in 2014. The project is complemented by an exchange program of doctoral schools from France, the Maghreb, and Egypt.
The conferences of the “Sharing Knowledge” Foundation are an essential forum for relationships across the Mediterranean basin. As a founding member and council member, Prof. Hoummada has contributed to their success since 2004, the most recent taking place in Rabat in May 2013.
Prof. Hoummada is still involved in academic leadership in Morocco at Hassan II University of Casablanca and as Scientific Director at the Hassan II Academy of Science and Technology. | physics |
http://smcinstruments.com/product.php?id=High%20Temperature%20Tube%20Furnace | 2021-12-07T15:57:47 | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363400.19/warc/CC-MAIN-20211207140255-20211207170255-00501.warc.gz | 0.894492 | 215 | CC-MAIN-2021-49 | webtext-fineweb__CC-MAIN-2021-49__0__68867022 | en | High Temperature Tube Furnace
The High Temperature Tube Furnace is offered in a sturdy construction and has high tensile strength. It functions efficiently over long periods of time and can operate at temperatures up to 1600 °C. The furnace is made from quality-grade raw material and components and is also known as a double- and single-ended bogie hearth indirect-fired tube annealing furnace.
SMC Instruments manufactures the High Temperature Tube Furnace. SMC Instruments offers a wide range of instrumentation & automation products and laboratory and electronic equipment, comprising digital flow meters, digital manometers, hydraulic dead weight testers and magnetic level switches.
Key Features of High Temperature Tube Furnace
- Offered in a sturdy construction
- High tensile strength
- Efficient functionality
- Longer functional life
- Can operate at temperatures up to 1600 °C
- Made from quality grade raw material and components
- Also known as double and single ended bogie hearth indirect fired tube annealing furnace | physics |
http://jackrabbitcheese.com/2010/04/24/introducing-the-solar-hot-water-system/ | 2015-01-31T10:00:36 | s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122080417.25/warc/CC-MAIN-20150124175440-00021-ip-10-180-212-252.ec2.internal.warc.gz | 0.941683 | 1095 | CC-MAIN-2015-06 | webtext-fineweb__CC-MAIN-2015-06__0__166317963 | en | Our new facility houses a 150-gallon vat pasteurizer with which we have begun to make cheese. Fresh cheeses must be pasteurized, which means raising the temperature of the milk to 145 degrees and holding it there for 30 minutes. Heating 150 gallons of milk by 110 degrees requires a lot of energy.
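(A rough back-of-the-envelope figure, assuming a specific heat for milk of about 3.9 kJ/kg·K: 150 gallons is roughly 570 liters, or about 585 kg, so raising it by 110 °F, about 61 °C, takes on the order of 140 MJ, or close to 39 kWh, per batch before any losses.)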
In order to cut our heating costs, as well as our environmental impact, we installed a solar hot water system. The system incorporated used solar panels that someone gave us for free. Instead of spending thousands on a commercial heat exchanger and storage tank, we elected to build our own using the principles (if not exactly the plans) we found on BuildItSolar.com.
We constructed an EPDM-lined plywood tank on a 4 x 8 foot plywood base. We used 60 feet of 3/4″ copper pipe as a heat exchanger to preheat the water we use for washing. Then we built a system to recirculate the solar-heated water through our vat pasteurizer, with two tankless hot water heaters as a booster.
Because our goal is to heat the tank to 170 degrees for pasteurizing milk– hotter than a typical solar water heating system– we couldn’t use pex for any line under pressure. We used pex for the plumbing to the collectors and back. But for the heat exchanger, which carries up to 40 psi of tap water, we chose type M copper pipe. And for the pasteurization system, we used 3/4″ copper pipe insulated with polyethylene pipe insulation, and covered with 2″ PVC pipe. The PVC pipe isn’t necessary for structural purposes, but in a food processing area there can be no exposed insulation.
By the way, we used “shark bite” type fittings inside the PVC pipe. They’re not cheap, but they were the only way we could find to completely encase the hot water plumbing.
The vat heating system is not entirely closed– overflow comes from the vat during operations. So we also installed a float valve to keep the thermal storage tank full, and a drain to keep it from overflowing.
The result: a system that will heat and store over 300 gallons of water. It heats our washing water, and our vat. And it cost less than $2,700.
But we had a challenge: how to drain back the system so it wouldn't freeze on cold winter nights. The system is pretty simple: a pump, which is situated outside the tank but below the water line, pumps water up through the roof to the collectors. Gravity brings the water back down. But when the pump stopped, there was no vent to allow the water to drain back.
The obvious solution was to install a vacuum break at the collector, which we did. That let air into the tubing to allow the system to drain back. But siphoning action drained it back all the way into the tank, leaving the pump dry. In order to get the water flowing again, we had to re-prime the pump. Every time. That’s not very automated.
If we put in a check valve, the water wouldn’t drain back at all. I spent hours bugging the folks at my local plumbing supply store trying to figure out how to let the water drain back so far and no farther.
It took weeks of experimentation, but we finally found a solution: two check valves. Actually we chose swing valves because they were cheaper and gravity worked in our favor. The first I installed at the bottom on the intake line, preventing water from draining back that way. Without a second valve, water wouldn’t have drained back at all.
The second, I installed on a tee on the intake side of the pump, but above the water level– and I installed it “backwards.” It prevents water from being sucked in that leg of the tee by the pump, and when the pump stops, the pressure of gravity on the water allows draining back through this valve– but only down to the level of the check valve. When water reaches that level, gravity closes the valve and flow stops. The pump stays wet and primed for the next use.
Here you can see the pump sitting alongside the tank. At the top left is the drainback swing valve, with a length of 1/2″ pex that returns water to the tank. The valve you see in the middle is a standard hose bibb, which allows priming the pump the first time it is used (or any time air has gotten into the pipe). The valve on the right is another hose bibb on the outlet side of the pump. We use that one to drain the tank when needed.
The system looks more complex than it is. Here, a series of ball valves allows the vat operator to choose hot water from the solar system to heat the milk, or cold water from a storage tank to cool the milk– and return the water to the location from which it came.
It has taken some doing, but we’re now using the system to make cheese. In cold weather, we use the tankless hot water heaters to boost the temperature. But in summer, we expect to have plenty of hot water. | physics |
http://www.teenreads.com/reviews/bomb-the-race-to-build-and-steal-the-worlds-most-dangerous-weapon | 2017-04-27T18:52:56 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122619.71/warc/CC-MAIN-20170423031202-00620-ip-10-145-167-34.ec2.internal.warc.gz | 0.949907 | 1,177 | CC-MAIN-2017-17 | webtext-fineweb__CC-MAIN-2017-17__0__267918954 | en | Bomb: The Race to Build--and Steal--the World's Most Dangerous Weapon
At the end of Steve Sheinkin's book BOMB: The Race to Build --- and Steal --- The World’s Most Dangerous Weapon is a copy of a letter to President Roosevelt from Albert Einstein. Dated August 2nd, 1939, the letter outlines the potential for Germany to weaponize uranium and the need for a coordinated response from the scientific community. This letter provides the theoretical framework for the Manhattan Project, an initiative involving some of the greatest minds of the century to create the world's first atom bomb.
Steve Sheinkin’s BOMB: The Race to Build --- and Steal --- The World’s Most Dangerous Weapon uses the lives of scientists, saboteurs, and spies to tell the story of the first atomic bomb. Drawing on archival materials and declassified government reports, Sheinkin includes tales of Norwegian saboteurs who helped destroy Hitler’s capability to create nuclear materials, as well as the stories of numerous spies who ferried scientific discoveries --- and sometimes the scientists themselves --- across international borders in a world at war. At first thrilling tales of adventure and intrigue, these seemingly separate threads eventually narrow down to the atomic bomb and the terrible consequences of creating a weapon of such overwhelmingly destructive powers.
"But perhaps the most striking aspect of BOMB is the way it subtly begs one central question: why?...Sheinkin does not answer these questions directly, but gives us many explanations, sometimes from the people who had to make the decisions for themselves."
Sheinkin includes key personalities and first-person accounts of the creation, deployment and aftermath of the atomic bomb. Among these, J. Robert Oppenheimer, one of the movement’s most charismatic figures, emerges as a tragic one. A physicist at the University of California, Berkeley, he was handpicked by General Leslie Groves to recruit scientists and coordinate scientific efforts. Oppenheimer goes from absent-minded professor to genius mastermind before he is broken by the weight of his conscience and a vote of no-confidence by his country. Oppenheimer’s outspoken opposition to using the powerful weapon he was instrumental in creating eventually led to the government stripping his security clearance and removing him from the role of researcher and advisor to the U.S. government forever.
BOMB is similarly filled with striking moments and images. Even readers familiar with the topic will be impressed by the breadth and depth of Sheinkin’s research and how much information he manages to condense into this small book. Simple diagrams help explain the nuclear fission process and the different methods the Manhattan Project came up with to weaponize the energy potential of the split atom. For the first time, with the aid of pictures and simple explanations, I was able to understand that they developed not one bomb, but two bombs utilizing different trigger methods and different core materials. Likewise, the outrageous communist hunts of the era seem saner in the context of how the Soviets --- then allies in the war --- were able to access nuclear secrets.
But perhaps the most striking aspect of BOMB is the way it subtly begs one central question: why? Knowing the risks they took to create it, and the devastating destruction that would result, why did scientists do this research? Why did spies give away the secrets? How could someone choose to use such a terrible weapon? What is to be done now that the secret is unlocked and we must live with the consequences?
Sheinkin does not answer these questions directly, but gives us many explanations, sometimes from the people who had to make the decisions for themselves. Many of the scientists involved in the project were excited by the challenge of nuclear research, at the same time believing Hitler to be an imminent threat. Perhaps the most difficult decision was made by President Truman when he decided to use the bombs on Japan, effectively ending the war. Since that time, there has been a great deal of controversy about this decision; it is the only time when nuclear bombs have actually been used in war.
I believe the best non-fiction books are not merely educational, but instill something extra in their reader: the ability to start asking questions about what they’re reading and whether it has resonance in their own lives. Although a timeline and a ‘rogues gallery’ of key players would be helpful additions to BOMB, Sheinkin’s book achieves what other nonfiction titles do not. In writing a book about the development of the atomic bomb, Sheinkin takes on some of the biggest moral quandaries of our time.
In the epilogue, Sheinkin acknowledges the difficulties of addressing such a big topic. “In the end, this is a difficult story to sum up,” he writes. “The making of the atomic bomb is one of history’s most amazing examples of teamwork and genius and poise under pressure. But it’s also the story of how humans created a weapon capable of wiping our species off the planet. It’s a story with no end in sight… And like it or not you’re in it” (p236).
What awes me most in reading BOMB: The Race to Build --- and Steal --- The World's Most Powerful Weapon is not the terrible destruction wrought by the world’s most brilliant minds, but the fact that it has not been used in warfare again. Despite the conflict our world has seen since 1945, humanity has been united in its decision not to deploy nuclear weapons. It is my hope that the sights Sheinkin describes in his book --- whether the brilliant light and colors of a nuclear explosion or the horror and devastation that follows --- will never be seen again. | physics |
https://sites.up.edu/cas/science-and-religions/ | 2021-03-02T17:30:10 | s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178364027.59/warc/CC-MAIN-20210302160319-20210302190319-00448.warc.gz | 0.936304 | 97 | CC-MAIN-2021-10 | webtext-fineweb__CC-MAIN-2021-10__0__210671073 | en | Shannon Mayer is a professor of physics and Fr. Thomas Hosinski, C.S.C., Ph.D., is a professor of theology specializing in the science and religion dialogue. Both are members of the UP faculty. This lecture directly follows the awarding of the annual Garaventa High School essay contest awards. The 2013 contest theme is Behold, I am doing a new thing; now it springs forth, do you not perceive it? (Isaiah 43:19). | physics |
http://aymikjm.tumblr.com/ | 2013-12-09T06:44:46 | s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163930735/warc/CC-MAIN-20131204133210-00095-ip-10-33-133-15.ec2.internal.warc.gz | 0.941085 | 473 | CC-MAIN-2013-48 | webtext-fineweb__CC-MAIN-2013-48__0__205279078 | en | After nearly a decade of careful observations an international team of astronomers has measured the distance to our neighbouring galaxy, the Large Magellanic Cloud, more accurately than ever before. This new measurement also improves our knowledge of the rate of expansion of the Universe — the Hubble Constant — and is a crucial step towards understanding the nature of the mysterious dark energy that is causing the expansion to accelerate.
Astronomers survey the scale of the Universe by first measuring the distances to close-by objects and then using them as standard candles to pin down distances further and further out into the cosmos. But this chain is only as accurate as its weakest link. Up to now finding an accurate distance to the Large Magellanic Cloud (LMC), one of the nearest galaxies to the Milky Way, has proved elusive. As stars in this galaxy are used to fix the distance scale for more remote galaxies, it is crucially important.
But careful observations of a rare class of double star have now allowed a team of astronomers to deduce a much more precise value for the LMC distance: 163 000 light-years.
The improvement in the measurement of the distance to the Large Magellanic Cloud also gives better distances for many Cepheid variable stars. These bright pulsating stars are used as standard candles to measure distances out to more remote galaxies and to determine the expansion rate of the Universe — the Hubble Constant. This in turn is the basis for surveying the Universe out to the most distant galaxies that can be seen with current telescopes. So the more accurate distance to the Large Magellanic Cloud immediately reduces the inaccuracy in current measurements of cosmological distances.
The astronomers worked out the distance to the Large Magellanic Cloud by observing rare close pairs of stars, known as eclipsing binaries. As these stars orbit each other they pass in front of each other. When this happens, as seen from Earth, the total brightness drops, both when one star passes in front of the other and, by a different amount, when it passes behind.
By tracking these changes in brightness very carefully, and also measuring the stars’ orbital speeds, it is possible to work out how big the stars are, their masses and other information about their orbits. When this is combined with careful measurements of the total brightness and colours of the stars remarkably accurate distances can be found. | physics |
https://www.celnav.de/muzzleloaders/internal_ballistics.htm | 2019-01-23T22:58:07 | s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584415432.83/warc/CC-MAIN-20190123213748-20190123235748-00243.warc.gz | 0.91882 | 7,154 | CC-MAIN-2019-04 | webtext-fineweb__CC-MAIN-2019-04__0__2092841 | en | Internal Ballistics of Muzzleloaders
This chapter is about how the chemical energy stored in the powder charge is converted into the kinetic energy of
the projectile (muzzle energy) and how the efficiency of this process is influenced by powder weight, powder granulation
(burning rate), caliber, barrel length, and projectile weight.
Like any firearm, a muzzleloader can be considered as a piston engine with internal combustion. The processes running inside the barrel of a gun are very similar to the power stroke of a 2-stroke engine where the pressurized hot combustion gases produced by the burning fuel-air mixture push the piston downward until the exhaust port opens. In the same manner as a car driver wants the engine to get the maximum mileage out of a given quantity of fuel, the economically-minded shooter is interested in extracting the desired kinetic energy from the smallest possible quantity of (expensive) powder.
One grain of black powder produces about 1.0–1.1 in³ of combustion gases (T = 273 K, p = 1013 hPa), depending on the exact composition. These gases temporarily expand to a much greater volume because of the intense heat released by the chemical reaction (redox reaction). If the gases are hindered from expanding, for example by the inertia of the bullet seated in front of the powder charge, they build up a high pressure. The gas force resulting from this pressure and the cross-sectional area of the projectile accelerates the latter and expels it out of the barrel. After exiting the muzzle, the projectile gets decelerated by air drag.
After ignition of the powder charge, the gas pressure inside the barrel rises rapidly, passes through a maximum, and declines asymptotically as the projectile moves forward. The acceleration of the projectile begins as soon as the gas force acting on its base surpasses the (relatively small) static friction force. In all practical cases, the projectile leaves the barrel long before the pressure has dropped back to atmospheric level. The pressurized gas escaping from the barrel bore at this instant produces the muzzle blast.
By way of example, Fig. 1 shows the gas pressure, p, inside a barrel as a function of the instantaneous volume measured between breech plug and projectile, V (blue curve). Note that this is not the same as pressure vs. time!
The orange curve in Fig. 1 indicates the mechanical work done by the expanding combustion gases, E. The latter is represented by the definite integral
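(The equation itself is not preserved in this extract; written out in plain text, using the symbols defined below, it is the integral of the gas pressure over the swept volume:)

E = ∫ p dV, integrated from V = VP to V = VB, with V = (π/4) · d² · L (and correspondingly VP = (π/4) · d² · LP, VB = (π/4) · d² · LB)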
E : Total mechanical work
V : Volume measured between breech plug and base of projectile at a given instant
VP : Volume of powder charge
VB : Total bore volume
d : Bore diameter (caliber)
L : Distance measured between breech plug and base of projectile at a given instant
LP : Length of powder column
LB : Total bore length
Geometrically, E is measured by the area between the pressure curve and the abscissa in the interval [VP, VB].
The greater part of E remains preserved as the kinetic energy (muzzle energy) of the projectile, E0.
A certain portion of the total mechanical work is used up by the acceleration of gaseous and solid combustion products. This is particularly the case with black powder which produces a substantial percentage of solid residues (approx. 60%) which do not contribute to the acceleration of the projectile since they do not expand. A further, smaller portion of the mechanical work is lost by friction.
Fig. 1 reveals that after a more or less steep rise, E asymptotically approaches a constant value as the gas pressure decreases (ignoring friction). The same applies to E0. To reach this condition, we would need a very light powder charge or an extremely long barrel which is not practicable.
Fig. 2 displays two more realistic scenarios. With barrels of common length, there is still a considerable gas pressure inside the bore when the ball exits the muzzle. The energy still contained in the powder gases at this instant is inevitably lost, decreasing the energy efficiency of the process. The pressure curve for a fast-burning powder is shown in blue, the curve for a slow-burning powder in red.
Obviously, the slow powder requires a greater bore volume to achieve a sufficiently low residual pressure and, as a result, a satisfactory energy efficiency. The latter depends on several factors, among them the quotient of the total bore volume, VB, and the volume of the powder charge, VP. This quotient is called expansion ratio, R (eq. 2).
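(Eq. 2 has not survived in this extract; from the definitions in the surrounding text it reads:)

R = VB / VP = (ρP · VB) / mP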
ρP is the bulk density (tapped) of black powder (approx. 270 grains/in³), mP
is the powder weight measured in grains.
In general, the energy efficiency of a combustion engine, η, is the quotient of the mechanical energy supplied by the engine, E0, and the chemical energy released by the combustion of the fuel, Echem.*
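(In symbols, this is presumably the missing eq. 3: η = E0 / Echem.)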
*It is a well-known fact that internal combustion engines with a high compression ratio (≈ expansion ratio) are more energy-efficient than engines with a low compression ratio. This is one reason why diesel-powered cars (compression ratio ≈ 16) have a better gas mileage than cars with gasoline engines (compression ratio ≈ 9).
For the shooter, it may be more convenient to define energy efficiency as the quotient of muzzle energy and powder weight:
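(The formula itself did not survive extraction; as described, it is simply η = E0 / mP, e.g. in fpe per grain of powder, presumably the original eq. 4.)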
The curve obtained by plotting η versus R would have the same shape as the orange curve shown in Fig. 1, i. e., after a
more or less steep rise, η approaches a constant value.
Energy efficiency is mainly enhanced by:
1. High expansion ratio:
Assuming a constant powder weight, the expansion ratio rises in proportion with the bore length and the square of the bore diameter (caliber). Thus, doubling the caliber has the same effect as quadrupling the bore length. As a result, a big-bore pistol may be able to extract the same muzzle energy (or even more) from a given quantity of powder as a small-bore rifle. Increasing the expansion ratio beyond a reasonable level does not lead to a further improvement of energy efficiency.
2. High burning rate of powder:
Generally, a fast-burning powder is more energy-efficient than a slow powder because it burns more completely during the residence time of the projectile in the barrel. This results in a lower residual gas pressure at the instant the projectile exits the muzzle. So, why should anyone want to use a slow powder at all? Because of the lower peak pressure! A heavy charge of fast-burning powder behind a heavy long projectile might develop a peak pressure exceeding the upper limit set by the gun manufacturer. A too high peak pressure may further show itself by torn patches and reduced precision. While generally 3FG powder is a good choice for handguns and small- or medium-caliber (≤ .50) rifles firing round balls, the slower 2FG powder is better suited for rifles firing heavier long projectiles (except the smallest calibers). It is also better for big-bore (> .50) rifles firing heavy round balls (see below). Use good judgement.
3. High projectile weight:
A suitably high projectile weight is another factor improving energy-efficiency. If we use a heavier elongated projectile instead of a round ball, the acceleration of the projectile inside the barrel will be smaller due to its greater inert mass. As a result, the residence time of the projectile in the barrel will increase. Thus, the red pressure curve shown in Fig. 2 will be compressed along the abscissa and assume a shape similar to the blue one. The powder will burn more completely, resulting in a higher peak pressure and a lower residual pressure. In this way, a heavier projectile has a similar effect on energy efficiency as a powder with a higher burning rate. Increasing the caliber works in a similar way, even with round balls. While the gas force acting on a round ball during the shot rises with the square of the bore diameter, the inert mass of the ball rises with the cube of the diameter. As a result, the acceleration of the ball will decrease with increasing caliber at a given gas pressure. This, again, results in a longer residence time of the ball and a more complete burning of the powder. Conversely, a very low projectile weight (low inertia) will prevent the build-up of a sufficiently high peak pressure unless the powder has a very high burning rate. This is particularly observed with small-caliber round balls (see farther below).
4. High ratio of projectile weight to powder weight:
The mechanical work done by the expanding powder gases is not only converted into the kinetic energy of the projectile but also into the kinetic energy of the reaction products (gases and solids) of the powder itself. Thus, a high ratio of projectile weight to powder weight improves energy efficiency because a greater portion of the total mechanical work is available for the acceleration of the projectile. In contrast, an unreasonably heavy powder charge will not improve the muzzle energy of the projectile to the desired extent but may result in an uncomfortable recoil of the gun.
There is another influencing factor: heat loss. The combustion gases give off heat to the cooler barrel walls. This reduces gas temperature and pressure, and thus has a negative impact on energy efficiency. Heat exchange is promoted by a long residence time of the projectile and by a high specific surface area of the bore (the quotient of bore surface and bore volume). The surface area of the bore wall rises in direct proportion with the caliber while the bore volume rises with the square of the caliber. Thus, large-caliber barrels have a lower specific surface area, a less pronounced cooling effect, and a higher thermal efficiency.* This is partially counteracted by the longer residence time of the slower large-caliber projectiles.
*For the same reason, ship (diesel) engines with their huge cylinders have a higher thermal efficiency than the much
smaller diesel engines used in trucks and passenger cars.
The complicated interactions of expansion ratio, caliber, projectile weight, powder weight, burning rate, and heat loss make it
difficult to predict the exact muzzle energy we can extract from a given powder charge. Therefore, the average shooter mostly
has to rely on empirical data. In any event, powder weight and granulation, projectile weight, caliber, and barrel length
should be carefully adjusted to each other for best results.
The muzzle energy of a projectile is a function of its muzzle velocity, v0, and its mass, m. Thus, we can determine E0 through velocity measurements, for example by means of a gun chronograph or a ballistic pendulum:
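(Eq. 5 is the familiar kinetic-energy formula: E0 = ½ · m · v0². With m in kg and v0 in m/s, E0 comes out in joules; see the footnote for the conversion to fpe.)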
*I prefer calculating with metric units (kg, m/s) and convert the result to foot-pounds of energy (1 fpe = 1.3558 J).
The equation indicates that increasing energy-efficiency by raising the projectile weight may come at the cost of a
lower muzzle velocity.
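For readers who would rather let a script do the unit juggling, here is a minimal Python sketch of eq. 5. It follows the footnote's approach of computing in metric units and converting the result to foot-pounds of energy; the conversion factors are standard, and the example load matches a figure quoted later in the article.

GRAIN_TO_KG = 0.06479891e-3   # 1 grain = 64.79891 mg
FPS_TO_MS = 0.3048            # 1 ft/s = 0.3048 m/s
J_PER_FPE = 1.3558            # 1 fpe = 1.3558 J (see the footnote above)

def muzzle_energy_fpe(ball_weight_grains, muzzle_velocity_fps):
    """Kinetic energy E0 = 1/2 * m * v0^2 (eq. 5), returned in foot-pounds."""
    m = ball_weight_grains * GRAIN_TO_KG    # projectile mass in kg
    v0 = muzzle_velocity_fps * FPS_TO_MS    # muzzle velocity in m/s
    return 0.5 * m * v0 ** 2 / J_PER_FPE

# A 177 gr round ball clocked at 1834 fps works out to roughly 1322 fpe,
# matching the .50 cal example further below.
print(round(muzzle_energy_fpe(177, 1834)))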
Lacking the resources for extensive experiments, I did a statistical evaluation of ballistic data from the literature to get more insights. The results proved to be quite informative. The following scatter plot demonstrates the general relationship between muzzle energy and powder weight (Fig. 3). The plot is based on two groups of caplock rifles of various calibers and barrel lengths. The blue data points represent the group of small- and medium-caliber rifles (≤ .45 cal), the red data points represent rifles with larger calibers (> .45 cal). This study includes only ballistic data measured with round balls.
What we see is a roughly (!) linear correlation between muzzle energy and powder weight. The diagram further indicates that
big-bore rifles tend to be more energy-efficient than small-bores. Due to the low coefficient of determination, a reliable
calculation of the muzzle energy resulting from a given charge weight by means of the trendline equation is not possible here.
Ignoring a few other minor factors, the muzzle energy depends on three variables: powder weight, ball weight (which is proportional to d³ in case of round balls), and barrel length (eq. 6).
In a first, admittedly crude attempt, I plotted E0 versus the product mp · mb · LB but the results were not usable. Next, I tried a more refined approach and plotted E0 versus the term mp^u · mb^v · LB^w (denoted by X in the diagram). Now, the results look more plausible, as shown in Fig. 4.
The exponents u, v, and w have been optimized so as to obtain the highest possible coefficient of determination, R², which
was 0.976 in this case (linear regression). The optimization was done with the solver tool (nlp-solver, DEPS evolutionary algorithm)
of the spreadsheet software (LibreOffice Calc). The data set used here comprises muzzle energies of round balls (calibers ranging from
.32 to .54) fired from caplock rifles, carbines, and pistols with varying charges of 3FG powder (130 shots in total). Revolvers are not
included due to the considerable and hard-to-predict portion of gas escaping through the air gap between cylinder and barrel.
Here is the resulting trendline equation:
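(The equation image has not survived in this extract. Its structure can be read off the worked examples below; with b denoting the scale factor and a the intercept of the regression, it has the form:)

E0 ≈ b · mp^u · mb^v · LB^w + a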
The constants thus found with 3FG powder are:
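(The table of constants is likewise missing; the values can be read off Example 1 below: b = 3.112, u = 0.738, v = 0.282, w = 0.417, a = −92.665, with E0 in fpe, mp and mb in grains, and LB in inches.)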
Equation 7 yields an average value of the muzzle energy resulting from a given combination of powder weight (3FG), ball
weight (round balls only), and bore length. The equation is only valid for powder charges in the usual range and should
not be used for exotic charges, for example 100 grains of powder in combination with a .32 cal (45 grains) round ball.
Further, the equation will lead to inaccurate results when used for pocket pistols with extremely short barrels since
these are outside the observed range (LB = 6.0"- 41.5").
We have a .50 cal rifle with a bore length of 30 inches. We fire a 177-grain round ball with a charge of 60 grains of 3FG powder. What is the approximate muzzle energy we can expect?
E0 ≈ 3.112 · 60^0.738 · 177^0.282 · 30^0.417 − 92.665 ≈ 1043 fpe
As already said, this is only an estimate, and there is considerable data scattering apparent from Fig. 4 (standard error of E0: ±49 fpe). This is not too surprising because there are other factors influencing the combustion process, as for example powder quality, moisture content, and the seating force applied to projectile and powder charge during the loading procedure. Further, pressure loss caused by gases escaping around the projectile (blow-by gas) contributes to data scattering. Pressure loss is hard to predict. With round balls, it depends on ball diameter and patch thickness in relation to bore diameter and groove depth. A patch is never a perfect gas seal but a tight-fitting ball/patch combination can reduce pressure loss to a tolerable level. Pressure loss is further influenced by the diameter of the ignition channel. Last but not least, the combustion process itself is random controlled to a certain degree and even under the same conditions, no powder charge burns exactly like the other. Six shots (177 gr round ball, 70 gr 2FG powder) fired from various .50 cal rifles with the same barrel length (28") delivered muzzle energies between approx. 880 and 1120 fpe (mean: 1010 fpe). This gives us an impression of the scattering of muzzle energies we can expect.
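The same estimate is easy to script. The following Python sketch simply evaluates the trendline with the 3FG caplock constants quoted above; it inherits all the limitations and the standard error just mentioned.

def estimate_muzzle_energy_fpe(powder_grains, ball_grains, bore_length_in,
                               b=3.112, u=0.738, v=0.282, w=0.417, a=-92.665):
    """Average muzzle energy (fpe) per the eq. 7 trendline.

    Default constants: round balls, caplock guns, 3FG powder.
    Only valid for typical charges and barrel lengths (about 6" to 41.5").
    """
    return b * powder_grains ** u * ball_grains ** v * bore_length_in ** w + a

# Example 1 from the text: .50 cal rifle, 30" barrel, 177 gr ball, 60 gr of 3FG
print(round(estimate_muzzle_energy_fpe(60, 177, 30)))   # about 1043 fpe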
Since I was not sure about the portion of gas escaping through the vent of flintlock guns, I did a separate study comprising 20 shots (round balls, 3FG powder) fired from flintlock rifles (.32 - .50 cal) and pistols (.44 - .50 cal). After linearization, the results within this group are surprisingly consistent (R² = 0.996, standard error of E0: ±24 fpe) as shown in Fig. 5.
The conspicuously small standard error may be coincidental. The constants found here are:
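(Again, the table itself is missing; the values implied by the calculation below are b = 0.846, u = 0.824, v = 0.346, w = 0.576, a = −35.543.)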
Repeating the last calculation with the constants for flintlocks, we get:
E0 ≈ 0.846 · 60^0.824 · 177^0.346 · 30^0.576 − 35.543 ≈ 1014 fpe
One might see this as an indication for an inherently greater pressure loss with flintlocks (vent). However, considering the standard errors, the small difference between caplocks and flintlocks is not really significant here.
I did a similar evaluation of ballistic data obtained from 119 shots fired with the slower 2FG powder (Fig. 6). This study included caplock rifles and carbines with calibers ranging from .36 to .54 (round balls only) but not handguns (no data available).
The constants found with 2FG powder are:
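(The table itself is missing; the values implied by the calculation below are b = 19.225, u = 0.733, v = 0.146, w = 0.078, a = −205.153.)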
To my surprise, I found more data scattering here (R² = 0.931, standard error of E0: ±79 fpe). I
do not have an explanation for this, and I do not have enough data for a more comprehensive evaluation. Thus, muzzle energies
calculated from this data set are less accurate.
Calculating the muzzle energy for mp = 60 grains, mb = 177 grains, and LB = 30", we now get
E0 ≈ 19.225 · 60^0.733 · 177^0.146 · 30^0.078 − 205.153 ≈ 868 fpe
Thus, 2FG powder shows a clear tendency to yield lower muzzle energies than 3FG powder (see above).
I did a similar study on ballistic data produced with rifles firing long projectiles. However, data scattering was even worse (R² = 0.88, standard error of E0: ±136 fpe) and the results I obtained were not really usable.
To obtain more accurate results, we have to look at a specific combination of barrel, ball, patch (if any), and powder grade. In such a case, powder weight is the only variable while all other influencing quantities are constant. Fig. 7, for example, shows muzzle energies observed with a .54 cal caplock rifle (32" barrel, 225 gr round ball, .01" patch, 2FG powder).
Within the observed range (!), the muzzle energy is a nearly linear function of the powder weight, mp (eq. 8).
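(Written out, eq. 8 is simply the straight line E0 = a + b · mp.)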
Again, the constants a (intercept) and b (slope) are found by linear regression. This can conveniently be done by plotting
muzzle energy versus powder weight with a spreadsheet program (scatter plot with linear trendline).
The next diagram (Fig. 8) shows a comparison between a caplock rifle (24" barrel) and a pistol (9" barrel). Both guns have the same caliber (.45) and fire a 133 gr round ball with a .013" patch. 2FG powder was used for the rifle, the faster 3FG powder for the pistol.
As expected, a long barrel is more energy-efficient than a short one. With a longer barrel we have a higher expansion ratio
and extract more energy from a given quantity of powder. Only if the residual pressure has dropped to a very low level, a
further increase of the expansion ratio will be without much effect (see Fig. 1).
If we go in the other direction and decrease the expansion ratio too much, for example by increasing the powder weight beyond a reasonable level, the projectile will leave the barrel long before the combustion process is complete and the gas pressure has become small enough. In such a case, energy efficiency will drop rapidly. 70 grains of powder in a .45 cal pistol barrel would probably yield a much lower muzzle energy than suggested by linear extrapolation in Fig. 8. Thus, the linear correlation between muzzle energy and powder weight applies only within a limited range. This is what I would call the "economical range".
Fig. 9 demonstrates how the efficiency of the combustion process is affected by caliber and projectile weight. Here, we have a comparison between two rifle barrels (caplock) with equal length (30" each) but different calibers: .50 cal (177 gr round ball, 0.015" patch, 2FG powder) and .54 cal (225 gr round ball, 0.015" patch, 2FG powder), respectively.
The .54 cal barrel is clearly superior to the .50 cal barrel in terms of energy efficiency. According to this diagram, the .54 cal barrel loaded with 80 grains of powder would yield about the same muzzle energy as the .50 cal barrel with 90 grains of powder. With a powder charge of 100 grains each, the .54 cal ball yields an energy of 1467 fpe (v0 = 1714 fps) while the .50 cal ball yields only 1322 fpe (v0 = 1834 fps). This confirms the tendency we have seen in Fig. 3. Why is this so? If we increase the caliber of a barrel, the bore volume will grow as well, increasing the expansion ratio and thus the energy efficiency. Doubling the cross-sectional area of a barrel bore (= multiplying the caliber by √2), for example, has the same effect as doubling the bore length. In addition, the heavier (+26%) .54 cal ball improves the efficiency of the combustion process through its greater inertia, as explained farther above.*
*These correlations may have played a role when the British began equipping many of their merchant ships and a number of naval vessels with carronades in the late 18th century although the laws of thermodynamics were not fully understood at that time. Carronades were light-weight, short-barreled cannon firing large-caliber iron balls (68-pounders were not uncommon) with comparatively small powder charges. Although they had a low muzzle velocity and a limited range, carronades had a devastating effect in close combat. Why were they so successful in their time? A possible explanation may be that they had a high expansion ratio and fired heavy projectiles, both factors ensuring that the small powder charge burnt completely and efficiently. What they lacked in muzzle velocity, they made up for in projectile weight. Moreover, the low gas pressure did not require thick-walled barrels, reducing overall weight and manufacturing costs. If used with the right tactics, these guns were able to inflict maximum damage in a most economical manner. Obviously, British merchants knew to calculate!
Fig. 10 shows that while the muzzle energy increases with increasing powder weight, energy efficiency, measured as the quotient of muzzle energy and powder weight, actually (slightly) decreases. This is due to the decreasing expansion ratio which results in a higher residual gas pressure (= energy loss). Moreover, if we increase the ratio of powder weight to projectile weight, a growing portion of the energy released by the powder will be used to accelerate the combustion residues of the latter. This portion is not available for the acceleration of the projectile. Last but not least, a larger quantity of powder takes more time to burn which is an additional factor increasing the residual pressure (see Fig. 2).
According to Fig. 10, the .54 cal rifle barrel with a powder charge of 120 grains displays about the same energy efficiency as the
.50 cal barrel with a charge of 75 grains. In other words, in large-caliber barrels we can use much heavier charges without wasting
expensive powder! Conversely, using heavy powder charges in small caliber guns should result in poor energy efficiency, at least with round balls.
Fig. 11 shows a comparison between two small-caliber round balls (.32 cal / 45 grains and .36 cal / 65 grains) fired from caplock rifles with similar barrel lengths (.32 cal: 26", .36 cal: 25"). Powder granulation and patch thickness are identical in both cases (3FG, 0.013").
The .36 cal ball shows the same behavior as seen in Fig. 7 through Fig. 9, i. e., a more or less linear rise of E0
with increasing powder weight. As expected, the energy gained from 50 grains of powder is smaller in this case than with big-bore
rifles (compare with Fig. 9). The real surprise is the behavior of the .32 cal ball. With a light powder charge (20 grains), it
delivers almost the same muzzle energy as the bigger .36 cal ball. However, the curve is not linear here but flattens out rapidly
as the powder weight increases (u<1). As a result, the .32 cal ball needs about 50 grains of powder to produce the same energy as a .36
cal ball with a powder charge of only 35 grains. How can we explain this? Apart from the smaller expansion ratio, the unfavorable
ratio of powder weight to ball weight probably becomes a dominant factor here. Above 45 grains, the powder weight surpasses the ball
weight in this case. This means that a high portion of the mechanical energy released by the powder is consumed to accelerate the
inert mass of the combustion products (gases and solids) and is not available to accelerate the ball. Accordingly, energy efficiency
decreases considerably. Further, the shape of the curve may also be an indication that even 3FG powder is too slow for a .32 cal
round ball. In other words, the inertia of this ball is so small that the latter gets blown out of the barrel while the combustion
process is still in full progress, possibly not far from the pressure maximum. Remember, a heavy powder charge takes more time to
burn than a light one. There must be a reason why the old .32 cal squirrel rifles sometimes had barrels as long as 40" or even longer.
At first, I suspected the shape of the curve was an artefact caused by data scattering. However, I found a very similar curve when looking at the ballistic data of a .32 cal ball fired from a different rifle with a 24" barrel (Fig. 12).
These results suggest that using the faster 4FG powder or Swiss Powder #1 might be the better choice for .32 cal rifles when
firing round balls.
Apart from the relatively low projectile weight, there is another reason why small-caliber guns work more efficiently with a fast-burning powder: the geometry of the powder charge. A charge of 50 grains, for example, has a length of 2.30" in a .32 cal barrel while the length of the same charge in a .54 cal barrel is only 0.81" (assuming a bulk density of 270 grains/in³). It is easy to see that a thin, elongated powder charge needs more time to burn than a charge of a more compact shape. Another factor slowing down combustion is the more pronounced cooling effect of the bore surface in a small-caliber barrel (higher specific surface area). In an extremely thin bore, the powder charge would become a slow-burning fuse.
Now let us see what happens when we replace the inefficient .32 cal round ball (45 grains) with a heavier elongated projectile of the same caliber (Maxi ball, 103 grains). Again, a rifle with a 24" barrel was used. The powder grade was 3FG, as before (Fig. 13).
As expected, the Maxi Ball delivers a much higher muzzle energy than the round ball. This is because the heavier elongated
projectile takes more time to leave the barrel due to its greater inertia which causes the powder to burn more completely.
Moreover, we have a much more favorable projectile-to-powder weight ratio here.
Even with medium-caliber round balls, a faster powder is more efficient than a slower one. By way of example, Fig. 14 shows a comparison between 3FG and 2FG powder (.45 cal caplock rifle, 30" barrel, 133 gr round ball, .015" patch).
It is quite surprising that in spite of the long (30") barrel, 2FG powder is relatively inefficient when compared with 3FG powder.
In this example, a 49-grain charge of 3FG powder would yield about the same muzzle energy as a 60-grain charge of 2FG
powder. A similar effect has been observed with .50 and .54 cal round balls. The only conclusion we can draw is that 3FG
powder is the more economical choice as long as we use moderate powder charges and round balls. However, we should not forget
the peak pressure! If we are using long projectiles (except small calibers), very large calibers (= heavy round balls),
or heavy powder charges, we should rather play it safe and use 2FG powder.
So, what have we learned? In terms of energy-efficiency, long barrels are better than short ones, large calibers are better than small ones, and long projectiles are better than round balls. In general, a slow powder is good for long barrels, large calibers, and heavy elongated projectiles while a fast powder is particularly good for small or medium calibers and round balls. Further, a fast-burning powder and/or a heavy projectile produces a high peak pressure (which the ordinary shooter can neither measure nor calculate).
The powder weight recommended for a given caliber, projectile type, and barrel length depends on the muzzle energy required for the intended purpose, e. g., hunting (big game, small game) or target shooting. Knowing the desired muzzle energy, we can solve equation 7 for the powder weight, mp, to get a first guess of the latter (eq. 9):
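(Solved for mp, and consistent with the worked examples below, eq. 9 reads: mp ≈ [(E0 − a) / (b · mb^v · LB^w)]^(1/u), with the constants a, b, u, v, w belonging to the powder grade in question.)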
Remember, this is a low-precision formula obtained by statistical analysis.
We have a .54 cal rifle with a barrel length of 32 inches. The gun fires a patched round ball of 225 grains. We need a muzzle energy of 1000 fpe. How much 2FG powder would we need approximately? With the constants for 2FG powder (see farther above), we get:
mp ≈ [(1000 + 205.153) / (19.225 · 225^0.146 · 32^0.078)]^(1/0.733) ≈ 67 grains
How much 3FG powder would we need to obtain the same muzzle energy? We use the constants for 3FG powder and get:
mp ≈ [(1000 + 92.665) / (3.112 · 225^0.282 · 32^0.417)]^(1/0.738) ≈ 50 grains
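A small helper, sketched here for illustration only, reproduces both worked examples; the constants are copied from the calculations above:

```python
# Grade-specific constants of the fitted power law E0 = k*mp^a * mb^b * LB^c + C,
# taken from the two worked examples above (C is negative for both grades).
GRADES = {
    "2FG": dict(k=19.225, a=0.733, b=0.146, c=0.078, C=-205.153),
    "3FG": dict(k=3.112, a=0.738, b=0.282, c=0.417, C=-92.665),
}

def powder_charge(e0_fpe, ball_grains, barrel_in, grade):
    """First guess of the powder weight (grains) needed for a required muzzle energy."""
    g = GRADES[grade]
    denom = g["k"] * ball_grains ** g["b"] * barrel_in ** g["c"]
    return ((e0_fpe - g["C"]) / denom) ** (1.0 / g["a"])

print(round(powder_charge(1000, 225, 32, "2FG")))  # -> 67 grains
print(round(powder_charge(1000, 225, 32, "3FG")))  # -> 50 grains
```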
Particularly hunters have a keen interest to know how much powder they need to achieve the required kinetic energy with their individual combination of gun, projectile, patch (if any), and powder grade. The research-minded shooter may wish to set up his own calibration curve for this purpose, as demonstrated in Fig. 7. One should start with a small powder charge and increase the latter in increments of 5 or 10 grains from shot to shot. The muzzle velocities measured with a gun chronograph have to be converted to energies (eq. 5) before plotting muzzle energy vs. powder weight with a spreadsheet program (scatter plot with trendline). It is essential to fire each shot under exactly the same conditions (except the powder weight). The first shot has to be discarded because the bore surface is not dressed with powder residues yet and may still be coated with oil. Further, a constant quantity of patch lubricant (if applicable) should be used. The time elapsed between loading the gun and firing the shot should also be fairly constant because otherwise the portion of lubricant migrating into the powder charge would vary, changing the burning rate in an uncontrollable manner (putting a layer of semolina or a felt disk on top of the powder charge may help as well). Last but not least, we have to apply the same seating force to each projectile. If these rules are observed, a fairly linear relationship between muzzle energy and powder weight should be found (similar to Fig. 7). Probably, some degree of data scattering will be observed. Obvious outliers should be discarded and the respective shot repeated. The upper limit of the economical range of powder weights is where the curve flattens out visibly (see Fig. 11). This procedure may be repeated with different powder grades, if necessary.
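The procedure boils down to a linear fit of energy against powder weight. A minimal sketch, assuming chronograph readings in feet per second and using made-up placeholder velocities (not measurements); the energy conversion uses the common E = m·v²/450,437 with g = 32.174 ft/s², which may differ slightly from the article's eq. 5:

```python
import numpy as np

ball_grains = 225.0
charges = np.array([40, 50, 60, 70, 80, 90])                 # powder weights tried, grains
velocities = np.array([1190, 1320, 1430, 1520, 1600, 1660])  # fps, placeholder values only

# Convert each muzzle velocity to energy in foot-pounds.
energies = ball_grains * velocities**2 / 450437.0

# Linear trendline of muzzle energy vs. powder weight, as in Fig. 7.
slope, intercept = np.polyfit(charges, energies, 1)
print(f"fit: E0 ≈ {slope:.1f} fpe per grain of powder + {intercept:.0f} fpe")
```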
As mentioned farther above, I was not quite satisfied with the results obtained with long projectiles. Therefore, I did another statistical evaluation including round balls and long projectiles fired from various caplock rifles with 2FG powder (total number of shots: 238). Since I assumed the muzzle energy to be a function of four variables now (including the bore diameter, d), I plotted E0 versus the term d^t · mp^u · mb^v · LB^w (denoted by X in the diagram) this time. Fig. 15 shows the scatter plot after modifying the four exponents so as to obtain the highest possible coefficient of determination. The red data points represent round balls, the blue data points long projectiles.
In spite of the optimization, data scattering is still tremendous (R2=0.89, standard error of E0: ±129 fpe).
The optimum constants found with the solver tool of the spreadsheet software (this time I used Gnumeric) are:
t = 8·10^-16 ≈ 0
The most interesting result of this statistical study is that the exponent t is virtually zero, i. e., the muzzle energy is practically independent of the bore diameter (d^0 = 1) but is almost exclusively determined by powder weight, projectile weight*, and barrel length (apart from the powder grade). Thus, we still can use equation 7 (see above) to obtain approximate muzzle energies. Statistically, a long projectile exhibits about the same muzzle energy as a round ball of the same weight (but larger caliber), provided all other parameters are kept constant.
* Of course, the weight of a projectile is a function of its caliber and shape.
Calculating the muzzle energy for mp = 60 grains, mb = 177 grains, and LB = 30" again, we now get
E0 ≈ 0.6187 · 60^0.9791 · 177^0.3611 · 30^0.3593 + 122.51 ≈ 872 fpe
This result is very similar to the one obtained with round balls alone farther above (see Fig.6). However, we should expect much greater errors when using this method to predict muzzle energies of long projectiles.
Fadala, Sam. Blackpowder Loading Manual, 3rd Edition, DBI Books, Inc.; 1995: pp 169-317
This book is a must for every black-powder shooter and cannot be praised enough. The author took the trouble to fire hundreds of test shots from a greater number of guns, measure the muzzle velocity, calculate the muzzle energy, and tabulate the results. Further, the book contains a wealth of additional information. | physics |
https://apotheosis.neocities.org/demons | 2023-02-01T16:46:22 | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499946.80/warc/CC-MAIN-20230201144459-20230201174459-00527.warc.gz | 0.930795 | 160 | CC-MAIN-2023-06 | webtext-fineweb__CC-MAIN-2023-06__0__277971104 | en | The evil demon is a thought experiment pervasive in physics, mathematics and philosophy. Here we will examine some denizens of the academic underworld.
Maxwell's Demon
Maxwell's Demon is a thought experiment designed to challenge whether the second law of thermodynamics would always hold. This demon has the ability to open a slot or a divider that divides two gas chambers, quickly enough so that only the fastest molecules of the gas pass the divider in a certain direction. This would theoretically create a difference in temperature of the gas in each chamber, which challenges the second law's requirement that entropy never decrease in an isolated system. In this case, the difference in temperature raises the potential energy of the system, which shouldn't be the case if there is no transfer of heat.
https://www.carpediemday.com/microadventure-49-hot-air-balloon-ride-over-san-miguel-de-allende/ | 2023-11-30T21:34:19 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00716.warc.gz | 0.972168 | 566 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__81352765 | en | On our final morning in SMA, Chris and I enjoyed a sunrise Hot Air Balloon ride with Vole en Globo. For about $150 US dollars, we were picked up and dropped off right at our casita, enjoyed an hour long balloon ride, a sweet toast after we landed, and even a full breakfast at a local restaurant.
When we arrived, the crew was firing up two balloons, one that held 6 passengers, and ours, that would hold 8 of us, along with the pilot. Each couple got our own little corner compartment that we were able to climb into using footholds, with the pilot in the middle.
Interestingly, even though heights can make me super-nervous, I wasn’t the slightest bit anxious going up. I guess the basket was high enough that I felt totally secure.
It’s even more odd that I felt secure since it really didn’t seem like the pilot has much control other than by how much propane gas he shoots up into the balloon.
We actually went up soon after we were all in, with no warning! No 3-2-1 countdown.. just up we went, with no time to change your mind! Someone said it reminded them of the scene from Wizard of Oz when Dorothy missed the balloon!
It did make me curious about the science involved in navigating a hot air balloon. I thought the pilot would be able to “steer” in some way, but it appeared he mostly could just adjust vertically and we were dependent on the wind for where we’d end up.
The scene from above was peaceful and serene.. well, except for the dogs. (Yeah… Definitely barking dogs… Something to be aware of before you buy a house in the neighborhood.)
Ignoring the barks, the birds-eye view, rising sun, and seeing other balloons in the sky as we slowly floated over the city provided a much more tranquil experience than I’d expected. It wasn’t windy or cold, just a perfect picturesque scene.
When it came time to land, the pilot had to get low enough to throw down cables that the crew used to help to reel us in to the landing spot. Even once we got to the ground, we’d bounce up and down a bit before we were firmly settled on the ground.
The crew expertly packed up the balloon like a giant sleeping bag and loaded up the balloon and basket into the trailer ready to be be transported ‘home.’
Meanwhile we were treated to our celebratory toast and breakfast with our fellow passengers.
Absolutely, one of my favorite experiences of the trip!
Full set of Microadventures in Mexico City / San Miguel de Allende: | physics |
https://ecoview-wds.com/why-does-window-glass-crack-3-reasons/ | 2022-11-30T16:27:36 | s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710765.76/warc/CC-MAIN-20221130160457-20221130190457-00185.warc.gz | 0.941045 | 1,190 | CC-MAIN-2022-49 | webtext-fineweb__CC-MAIN-2022-49__0__166017189 | en | Windows are an essential part of the infrastructure of your house. They provide air ventilation in summers and help maintain the thermal stability of your house during winters. Without any windows, the condensation inside your house will increase, and there will be no proper air ventilation. Your house will be home to germs.
Sunlight is another important factor that comes along with having windows in your house. There’s a study that suggests windows help homeowners overcome the seasonal affective disorder that many of us are susceptible to. Direct contact with sunlight helps brighten our mood, increasing our serotonin levels.
While having windows in your house has benefits of its own, you have to maintain them regularly. It’s common for you to find chipped and cracked window glass now and then since it withstands harsh external impacts and protects your house all year long.
Here are 3 reasons why window glass can crack.
1. Stress Cracks
Humans are not the only beings that crack because of stress. Windows can too. Stress cracks start at the edges of your window frames and grow into bigger cracks over time. They usually happen because of a sudden change in temperature, either outside or inside.
If it’s an extremely cold day, and you decide to suddenly turn the heat up, your window may not be able to withstand that change in temperature and crack. Likewise, if the temperature drops outside by a lot, it will affect the glass of your windows, much like how ice cubes crackle when you place them under warm water or a hot dish cracks when you place them under cold water.
You cannot completely avoid stress cracks as they occur naturally. But you can decrease their rate by installing thicker windows that have features based on the climate of your surroundings. Since stress cracks occur because of expanding and contracting of glass due to extreme fluctuation in the temperature, thicker glass may help substantially.
2. Impact Breaks
If you live in a neighborhood with kids who love sports, your house has a high chance of being the victim of an impact break, since impact breaks normally occur when a ball hits a window at high velocity. Impact breaks have spiderweb-like patterns that vary according to the size and mass of the ball, and the velocity of the impact also factors in.
Impact glass has an internal layer of polyvinyl butyral (PVB), which helps absorb the blunt force of whatever hits the glass, whether a ball, a tree branch, or dust and debris during a storm. Because of the PVB, impact glass doesn't normally break. But there are rare occasions when the glass cannot withstand the impact and breaks with a circular puncture and cracks extending out to the sides.
Cleaning out the shards can be dangerous, so you don’t want to go for that. Most homeowners tape up the whole glass or use cardboard to cover it completely. Then take the whole panel out. Or better yet, call a professional to replace your window.
3. Pressure Cracks
Pressure cracks are the least common. They occur due to significant changes in atmospheric pressure, which is itself a rare event, so you are unlikely to run into them.
Florida residents must have windows with a design pressure (DP) rating of 50, which can withstand winds of at least 150 mph. You need to ask your window contractors to provide you with a proper design pressure rating before installing new windows.
You need to keep your DP rating in mind, especially if you live in a high-velocity-hurricane zone (HVHZ). These areas have building codes of their own that are set in place to protect your house against high-speed winds that threaten to shatter the structural integrity of your house.
How to Fix Cracked Windows?
Cracked windows are an inconvenience for homeowners as they bring safety hazards. Once cracks have started to appear in your window, there’s no way for you to fix them. Replacing is the only option. But sometimes, you need to manage it temporarily until it can be replaced by a professional.
If you notice a crack in your window or a punctured web because a ball hit it right in the middle, you can use tape to cover the impact. You can also use a plywood board to cover the complete window frame. Plywood will waterproof your windows.
Cracked window glass can make it easier for burglars and thieves to enter your house. It can also cause leakage problems and mess with the HVAC system of your house, costing you high energy bills.
You need to secure the glass in place before it causes any further damage to your house. The shards of broken glass can be fatal so avoid cleaning them. If you find any tiny shards, use a vacuum around the edges of the window frame to clean that.
Hire a Professional
Undoubtedly, the windows of your house have an importance of their own. But a minor installation flaw can lead to bigger problems, and windows often crack because they're not installed correctly. To avoid such problems, you need to let a professional do the job. They're aware of the mechanics that windows require, and they make the job easier with the right tools and equipment.
EcoView-WDS is a professional windows and doors replacement company. If you wish to install quality impact windows at your house, contact us today. We provide their services in Daphne, AL, and Pensacola, FL. We install single-hung and double-hung windows that are hurricane resistant and energy-efficient. We also provide Low-E insulated glass that prevents up to 72% of the sun’s energy from warming up your house. Take a look at the testimonials of our customers for a better look at our work. | physics |
https://www.environment911.org/Green_Gadgets_-_The_Bizzare_to_the_Important | 2023-09-21T22:09:12 | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506045.12/warc/CC-MAIN-20230921210007-20230922000007-00830.warc.gz | 0.96156 | 613 | CC-MAIN-2023-40 | webtext-fineweb__CC-MAIN-2023-40__0__133868660 | en | Green gadgets to replace items that are damaging to our environment will be flooding into the market in upcoming years. Some of these green gadgets will make it, some won't; some are pretty much useless, and some have the potential to have a major social impact. Whether you are someone who is environmentally conscious or not, it's hard to say that some of these ideas are not fascinating:
Rocco the Energy Pal
When you think of green gadgets, a rocking horse probably doesn't come to mind. Meet Rocco, a rocking horse that uses the kinetic energy of being rocked back and forth to charge his handles which double as nightlights or flashlights.
Rocco is manufactured from recycled plastic, another green plus. A magnet slides up and down each rocker through a copper coil conductor to produce a voltage which is used to charge a battery in the flashlight/nightlight handle. All those kids who are scared of the dark will now have to work for their night light! Rocco is only a concept for now. If it becomes a reality it would make a great gift for a toddler.
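For the curious, the charging principle is ordinary electromagnetic induction; the numbers in this rough sketch are assumptions chosen only for illustration, not Rocco's actual specifications:

```python
# Faraday's law: induced EMF ≈ N * dΦ/dt for a magnet passing through a coil.
# All values below are assumed for illustration; they are not product specs.
turns = 3000          # number of coil turns (assumed)
flux_change = 5e-4    # change in magnetic flux per pass, in webers (assumed)
stroke_time = 0.3     # seconds per rocking stroke (assumed)

emf = turns * flux_change / stroke_time
print(f"rough induced EMF per stroke: {emf:.1f} V")  # ~5 V with these assumptions
```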
Cola Powered Cell Phone
We all should know that drinking cola is not healthy by now. Thankfully Nokia may have come up with an alternate use for our favorite corn-syrup caffeine beverages. A cell phone that runs on cola could be on the horizon. It would get its energy from carbohydrates using enzymes for the catalyst. This sort of energy would drastically reduce chemical battery pollution if it were to catch on.
We all have a solar powered calculator that sits around collecting dust until tax time. If we can power a calculator with indoor light, why not cell phone and other devices? The Illumicharger will charge devices through a USB port using available indoor light. This would include MP3 players, cell phones, and digital cameras. This means a lot when one considers that 6 billion USB portable devices were purchased in 2009. Coupled with the use of LED lighting in the home, the green advantages pile up.
A Cancer Killing Machine
Arnold is not the cancer killing machine, although it would make a great movie title. A cancer killing machine may well be one of the most intriguing and important inventions coming down the pipe. John Kanzius invented the machine in his home garage in Sanibel, Florida. John has a heavy background in radio and physics. He was inspired to do this work while fighting cancer himself and seeing children and adults go through the grueling treatments of present day medicine.
The science involves using nano-particles to attach to cancer cells. These nano-particles act as antennae to receive the potent radio waves to destroy the cancer cells they are attached to. The neighboring cells are left unaffected. Thus far, rabbits with liver tumors have been treated with this technology resulting in the removal of the tumors. What has this to do with green technology? This machine can also make saltwater burn and could possibly lead to saltwater as a fuel for internal combustion engines.
https://jobs.lererhippeau.com/companies/zipline/jobs/25117251-test-engineering-manager-avionics | 2024-04-23T09:20:18 | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818468.34/warc/CC-MAIN-20240423064231-20240423094231-00677.warc.gz | 0.911487 | 609 | CC-MAIN-2024-18 | webtext-fineweb__CC-MAIN-2024-18__0__175183680 | en | Hardware Validation Manager - Avionics
About You and The Role
The Avionics team is a group of hardware and software engineers responsible for designing robust systems that power our life-saving drone delivery service in some of the toughest environments on Earth. Our team designs aircraft avionics that must excel in a wide range of harsh, real-world climates and electromagnetic interference (EMI) environments. Our work also plays a vital role in system architecture, where we strive to achieve optimal performance and reliability while minimizing mass and cost.
As the Validation Manager on this team, you will play a pivotal role in the validation of Avionics flight computer assemblies and vehicle-level harnessing through a broad range of environments. You will be responsible for testing environmental performance, including thermal, UV, rain, and humidity, as well as load cases such as vibration, shock, and acceleration. Additionally, you will oversee the electromagnetic environment validation, including EMI, EMC, and self-compatibility.
Your leadership will be critical as you would be the first hire for this discipline and be responsible for growing and guiding a team in the validation of the assemblies, developing test infrastructure, and partnering with design engineers to ensure that the design units meet engineering and production validation standards. This is an exciting opportunity to build a team that is crucial in the development of life-saving technology and pushes the boundaries of what's possible.
What You'll Do
- Lead, mentor, and grow a small but mighty team of test engineers
- Coordinate team resources, project scopes, and timelines to complete deliverables aligned with program needs
- Plan and execute electrical design validation in prototypes, engineering validation, manufacturing validation, and production ramp
- Design test systems for avionics hardware manufacturing tests
- Partner with contract manufacturers to design manufacturing test systems for external volume manufacturing
- Develop test infrastructure for Avionics-specific challenges, such as high-speed network or camera interfaces
- Lead validation of environmental performance (thermal, UV, rain, humidity), load cases (vibration, shock, acceleration), and electromagnetic environments (EMI, EMC, including self-compatibility)
What You'll Bring
- Built and leveraged a team of engineers to achieve ambitious goals, with proven success
- 8+ years of experience across design, integration, or test engineering roles
- A degree in engineering, physics, or equivalent practical experience
- Experience in validating the performance of electronics in fields such as consumer electronics, automotive, aerospace, transportation, or medical devices
- Test experience validating computer function, networking interfaces, or radio interfaces as part of a formal test or qualification program
- Enthusiasm for collaborating in a fast paced, cross-functional environment
- Clear communication skills and the ability to explain technical challenges and solutions to fellow engineers and non-engineers alike
- Must be eligible to work in the US
- Must be able to work in person at Zipline’s South San Francisco office | physics |
https://www.rustonfurrow.com/products/dripping-glass | 2022-01-25T08:38:20 | s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304798.1/warc/CC-MAIN-20220125070039-20220125100039-00342.warc.gz | 0.958738 | 111 | CC-MAIN-2022-05 | webtext-fineweb__CC-MAIN-2022-05__0__104437436 | en | This striking sculpture is entirely hand-made in a lengthy process where many layers of glass are brought up to a temperature high enough that the glass slowly begins to drip. The drips are watched continuously until they reach just the right place, and the piece is then quickly cooled to suspend the drips in place. The result in this case is a wire bowl-like structure suspended by the dripping glass, creating a dynamic effect.
- Overall height: approx 8in
- Overall width: approx 12in
- Materials: stainless steel wire basket, glass | physics |
https://bahistoryofscience.wordpress.com/2010/04/11/discover-the-astrolabe/ | 2017-03-28T06:09:06 | s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189680.78/warc/CC-MAIN-20170322212949-00355-ip-10-233-31-227.ec2.internal.warc.gz | 0.889306 | 270 | CC-MAIN-2017-13 | webtext-fineweb__CC-MAIN-2017-13__0__275844195 | en | Chris Parkin, Museum of the History of Science, Oxford
Thursday 16th & Friday 17th September
Times: 10-10.45, 11-11.45,12noon-12.45 and 1.45-2.30pm each day
This session offers a unique opportunity to celebrate the contribution made by Islamic cultures to science during the middle ages by discovering the astrolabe, one of the most extraordinary early astronomical instruments.
During the workshop we will trace the development of ideas about the universe that go back to the ancient Greeks. Students will make a model of an astrolabe and find out how it can be used to show the movements of the heavens and make astronomical calculations such as measuring the time from the positions of the stars, and calculating the times of sunrise and sunset.
The session will be illustrated by examples from the world-class collection of astrolabes at the Museum of the History of Science.
This is a practical session involving modelling and problem-solving. It is well suited to challenging more able students and introducing cross-curricular links.
Duration of session: 1.5 hours
Maximum numbers at each session: 30
Curriculum Links: KS3/4 Science/History: understanding astronomy/models of the universe/historical influence of Islamic culture, science and mathematics | physics |
https://torcuil.wordpress.com/2008/01/12/concentrated-solar-gets-salty/ | 2018-07-18T19:55:12 | s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590329.25/warc/CC-MAIN-20180718193656-20180718213656-00465.warc.gz | 0.928417 | 411 | CC-MAIN-2018-30 | webtext-fineweb__CC-MAIN-2018-30__0__7427317 | en | Cleantech: Windsor Locks, Conn.-based Hamilton Sundstrand is bringing salt to the desert. The company announced plans to work with Santa Monica, Calif., private equity firm US Renewables Group to commercialize a concentrated solar power system that uses molten salt for energy storage.
The new venture, called SolarReserve, will operate the utility-scale solar thermal projects using technology and equipment developed and manufactured by Hamilton Sundstrand's Rocketdyne unit. "The molten salt holds its heat very efficiently and for long periods of time," Dan Coulom, spokesman at Hamilton Sundstrand, told Cleantech.com.
Coulom said the company, a subsidiary of United Technologies (NYSE: UTX), plans to build as many as 10 plants over the next 10 to 15 years, pulling in revenues of $1 billion over that time period. With concentrated solar, a large number of motor-controlled mirrors track the sun and reflect the solar energy onto a tower receiver, which in turn heats a liquid that can be used to make steam. A steam turbine can then produce electricity.
“The molten salt, which is in a storage tank at the bottom of the tower, is run up through the receiver and heated to about 1,000 degrees Fahrenheit,” said Coulom. The company said using molten salt, a mixture of sodium and potassium nitrate, instead of water or oil, allows the heat to be stored for use on cloudy days or at night.
Hamilton Sundstrand, which is a major subcontractor on the International Space Station’s photovoltaic solar system, got involved in the more down-to-earth concentrated solar when its parent company grabbed Rocketdyne in 2005 from Boeing (NYSE: BA). Rocketdyne had been involved for more than 20 years in the U.S. Department of Energy’s Solar Two project in Barstow, Calif., which uses Rocketdyne’s power tower and molten salt technology…. | physics |
https://www.citizenscience.uzh.ch/en/about/team/managingoffice/rmondardini.html | 2024-02-23T12:57:29 | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474412.46/warc/CC-MAIN-20240223121413-20240223151413-00517.warc.gz | 0.941387 | 251 | CC-MAIN-2024-10 | webtext-fineweb__CC-MAIN-2024-10__0__31890148 | en | - Leitung Forschung und Entwicklung
- Director Research and Development
Rosy Mondardini has a background in physics and started her research career at FERMILAB (USA) and CERN (Switzerland), where she worked for almost 10 years in research before focusing her energy and passion on science for social change. From 2008 to 2015 she was Associate Director at the World Economic Forum, where she established and managed the Young Global Leaders Alumni community. In 2015 Rosy took up the role of Co-Director at the Citizen Cyberlab in Geneva, a partnership between CERN, UNITAR, and University of Geneva, working with international organizations to shape the contribution of Citizen Science to sustainable development. Rosy joined the Competence Center – Citizen Science in Zurich at its creation in 2017 and managed its establishment and growth until 2022. As of 2023, she's responsible for Research and Development, focusing on current and future projects to raise the impact, scientific reputation, and national and international visibility of Citizen Science Zurich and of its funder institutions. Rosy is active in national and international Citizen Science task forces and networks, and she has been a driving force behind the establishment of the Citizen Science Global Partnership.
https://ketainovel.com/hail-the-king-chapter-160/ | 2020-04-05T19:39:04 | s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371609067.62/warc/CC-MAIN-20200405181743-20200405212243-00175.warc.gz | 0.98568 | 2,561 | CC-MAIN-2020-16 | webtext-fineweb__CC-MAIN-2020-16__0__56269834 | en | Chapter 160: Coming Out
In front of his eyes, a tornado that was about a hundred meters in diameter had formed some time ago. Like a pillar connected to the sky, it slowly moved towards the mountain of bones as it twirled its vicious body. The white fog continuously rushed into the pillar-like tornado, and the tornado's diameter gradually grew bigger and bigger. Soon, it had enveloped the entire mountain of bones.
An irresistible suction force was created by the tornado that seemed connected to the ceiling of this mysterious space.
Huge blocks of ice were also being thrown from the chilling abyss on all three sides. These cold gales were like tangible objects; although Fei had already backed away to the very center of this mysterious space, he was still able to feel the cold air. He felt like the cold wind was like numerous needles poking his face. This degree of freezing sensation was far beyond anything that Fei had experienced before. If he didn't use all of his magical power and spread a layer of fire magic elements around him, he would have been frozen into a corpse already. At the same time, a ton of hot air was being injected into the mysterious space from the dense, spiderweb-like cracks on the ground. 【Flame of Earth Core】also appeared sometimes and emitted terrifying heat; Fei felt like the stone-like ground was getting soft to the point that it was going to melt.
These two completely opposite forces started a battle to the death in this mysterious space.
When the hot air met the cold air, a series of magnificent meteorological wonders appeared.
Fei should immediately open up the teleport portal and leave this extreme and dangerous place. However, he felt like a voice out of nowhere was calling him deep down in his ears, “Stay, if you stay, you will get the stuff that is beyond your imagination……”
It was a very strange feeling; it was very vague, and it felt like the calling and the sensation of a close relative who he shared blood with.
Fei wasn’t someone who was so careless and would risk his life for some benefits; instead, he was very careful most of the time. However, at this moment, he decided firmly to stay for some reason. He took out the rope from his storage space and tied himself tightly onto the thickest and the strongest stone pillar that was not too far from the mountains of bones. At the same time, he held onto the stone pillar tightly with his hands to avoid getting pulled into the tornado that was getting bigger and more terrifying. He also used all of the magic power in his body to protect himself against the coldness from the abyss and the hot heat from the cracks that took turns to invade him.
It was an extremely painful process.
Although he had 【Arcanna's Trick】, a level 7 Green Items Set, to protect him, the coldness from the abyss and the heat from the cracks weren't simple energies. They were able to penetrate through this item set easily and invade Fei's body. This pain wasn't tolerable by ordinary humans; it felt a little similar to how the 【Hulk Potion】transformed and strengthened his body. These energies froze Fei's body into an icicle, then roasted his body till it was half-cooked for a while. However, Fei was still able to clearly feel that his body's strength was slowly increasing as these two extreme energies took turns destroying his body repeatedly. Different from killing monsters in the Diablo World and leveling up, every single cell in his body was experiencing destruction and regeneration; during the process, the impurities and the hidden toxins in his body were completely wiped out……
It was quite hard, but Fei had his consciousness completely clear during the process. He stared at the mountain of bones that was enveloped by the tornado closely; the mysterious call was from the mountain of bones.
At this time, the cold air and ice blocks were coming out of the dark abyss more and more, and the heat energy from the cracks on the ground, as if it didn't want to show weakness, boosted its heat output significantly as well. The two different forces continued to run into each other and formed a lot more white steam and violent air flow. In the mysterious space, there were thunder, lightning and numerous other meteorological phenomena; all these together formed a bad-weather microcirculation.
The diameter of the tornado was getting so big that the entire mountain of bones was covered by it. As the suction force from the tornado was getting stronger and stronger. Cracking noises sounded from the mountain of bones, and some of the big bones were getting pulled into the air and swirled and cycled around in the tornado crazily.
Gradually, more and more bones were getting pulled into the tornado and rotated aggressively in the air with the tornado.
Fei suddenly noticed something very strange. Not sure when but since the tornado started, didn’t matter which meteorological phenomena occurred, it had always been around the center of the mysterious space, right on top of the mountain of bones to be more exact, and didn’t diverge away from it at all. As the tornado got stronger, all of the bones that formed the mountain in the first place got sucked into the tornado and flew around like straws as if they were locked in some kind of a fish tank.
This was the source of the suction force that Fei experienced when he just went into the corridor behind the huge iron gates in the underground cave.
For the past twenty days or so, Fei had been thinking about where the suction force came from, and now it was all in the open. The reaction of extremely cold air and hot air created the tornado phenomenon. This tornado was stronger than all the tornados that Fei had heard about; it was like it had broken the wind scale that people in his previous life created, and it wasn't surprising to see this level of suction force.
As the entire mountain of bones was pulled apart and pulled into the air, Fei had finally seen what was behind the mountain of bones.
It seemed like it was a pile of bone powder. Since this powder was extremely light, they were sucked into air immediately after the bones were up in the air.
At this moment, Fei’s pupil instantly contracted as an unbelieving expression appeared on his face.
He saw a complete magic array under the piles of bone powder. It was a magic array that was still functioning! With all kinds of colors shining on it, it broke the space as these lights combined together and formed a dark blue teleport portal. This portal was very similar to the teleport portals used in the Diablo World; it was in a blue oval shape and was about two meters tall. Water-like blue light would flash through it infrequently, and it looked like condensed amber from afar.
At this moment, that kinship like calling became stronger and stronger in Fei’s mind.
Fei was completely sure that this intimate calling was from that dark blue teleport portal.
“What is behind this portal that is attracting me this much?”
Fei had an irresistible impulse to untie the rope on him and rush into that portal to check it out. But at that very critical moment, the last tiny bit of consciousness that he had stopped him from doing so. Without question, if he broke away from the rope, he would definitely be sucked into the terrifying tornado and turned into meat paste by the [Remains of Demons] that were as hard and sharp as god-tier weapons and were rotating at an insane speed. There was no chance for him to actually reach the center of the tornado and get into that mysterious teleport portal.
He had to come up with better ways.
Fei forced himself to calm down.
After two three hours, the speed of ice blocks shooting up at the mysterious space and chilling air coming up from the abyss slowed, and the heat energy coming from the underground was getting weaker and weaker as well. The result of the two forces slowing down was that the pouring rain toned down, and the thunders and lightings soon disappeared as well. Even the tornado in Fei’s eyes was getting weaker.
Then, as Fei expected, all the changes started to calm down.
As the wind slowed down, the huge bones that were flying in the air started to fall down back onto the ground. The bigger and heavier bones landed on the ground first. They fell back down on the magic array and the teleport portal; although they were heavy, the magic array wasn’t damaged at all. Slowly, more and more bones dropped and they stacked up onto each other; soon, a new mountain was formed. The tornado was like an agile huge hand and placed each bone to the most suitable position. The higher up on the mountain, the smaller the diameter. Soon, the final piece of bone was placed on the top of the mountain, and the mountain was reformed.
Then, the flying bone powder slowly fell down from the sky and sprinkled onto the mountain of white bones.
Although the wind was decreasing, it was still shaking and pressuring this new mountain of white bones. All the bone powders that were sprinkled on top of the mountain gradually “sunk” to the bottom of the mountain through the gaps between the bones……. Till the end, the tornado disappeared, all the bones stacked onto each other firmly and tightly, and a brand new mountain of white bones was reformed.
Fei was stunned as he witnessed the whole process.
Extremely fine craftsmanship!
It was the real craftsmanship that nature had!
“So this is how the mountain of white bones was formed. So it looks like the chilling abyss and the 【Flame of Earth Core】would “erupt” every twenty days. It will create all the meteorological phenomena, and it will also reform the mountain of white bones!” This explained why this mountain of white bones didn’t had any dust on it when Fei just came here and looked brand new. It looked like it was newly constructed and made Fei think that there was a hidden creature in here who just build this dark and horrifying bone mountain.
After thinking about the mysterious magic array and the teleport portal under the mountain of white bones, Fei tried to “dig” up a path in this bone mountain and get to them. However, it was very hard and almost impossible. These [Remains of Demons] were so tightly stacked onto each other that Fei was only able to pull out a few bones on the surface. These bones were also very tough, and Fei wasn’t able to use his sharp sword to break open a path.
“Looks like I have to come up with some better plans later!”
After trying for a while, Fei decided to give up on this mission. Maybe after he got strong enough, found some way to withstand the strong wind, or found a way to directly break through the mountain of white bones and get to the teleport portal, he could give it a shot. However, that wasn't the case now!
Fei had got a ton of benefits from witnessing these meteorological phenomena. His body was strengthened under the two completely opposite energy of the abyss’s chill and the crack’s heat; his body was strengthened at least one time over. Since Fei’s body was already very special and was immune to the【Hulk Potion】at this point, it was quite a surprise that his body had improved again.
To Fei, this improvement meant a lot to him.
This meant that the strength and power he had from Diablo World could be unleashed more; his body was too weak for him to use all of his powers.
After this incident, Fei returned to Diablo World and continued training and having some moments of peace.
In the blink of an eye, another ten days passed. In the ten days, Fei repeated what he did before: kill monsters to level up, use the Zen Power of his Assassin Mode to revise training scrolls by adding more energy connection channels, help Charsi to get a design, forge all kinds of items, follow Priestess Akara and old man Cain to learn the knowledge about magic…… the month long time limit had arrived, and he had to come out and return back to Chambord city! | physics |
https://clickitgolf.com/how-much-farther-would-a-completely-illegal-driver-go/ | 2022-10-03T07:26:24 | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00443.warc.gz | 0.964334 | 862 | CC-MAIN-2022-40 | webtext-fineweb__CC-MAIN-2022-40__0__224766561 | en | As heavily reported in the golf equipment space of late, golf’s governing bodies draw a firm line in the sand on driver technology advancements. For example, under the current rules of golf, a driver must not exceed a CT (characteristic time) reading of 257 (the limit is actually 239, but there’s a tolerance of 18).
There are also extensive rules in place that limit size, shape, texture, adjustability, and all sorts of restrictions you can read more about here. That means the rules keep golf manufacturers from making drivers that are too fast, thus preventing golfers from hitting the golf ball as far as possible.
But let’s suspend reality for a second.
Ignoring all restrictions, how much farther would an average golfer really hit the ball with a driver that’s completely illegal?
Fortunately, for those of us curious about the answer, this isn’t just a hypothetical question anymore. Gene Parente, the foremost expert on golf robot testing and the founder of Golf Laboratories, recently got his hands on a driver that was built for the most speed possible, disregarding the rulebook altogether.
Parente did what he always does when he gets his hands on a product: he tested it on his golf robot. While he couldn’t reveal the company who designed and developed the driver, he assured GOLF’s Fully Equipped podcast that it was made without constraints.
“I can’t name the company because it’s confidential, but they came out with a full-blown non-conforming club [a few years ago],” Parente told GOLF’s Fully Equipped podcast. “Full blown, like everything about it was non-conforming. And it was wild. It was really, really wild.”
For a guy whose job it is to test basically every hard goods product that hits the market every year, hearing him call a driver “really, really wild” isn’t to be taken lightly.
“It was wild because it was testing a lot of my preconceived notions of the physics of what would happen when you started like getting real high ball velocities off of a club face, but it was also wild because it showed, at least with this design, that even if the governing bodies took the regulator off, there’s kind of limits to what a golf ball can do,” he explained.
According to Parente, he tested the nonconforming driver at 80-85 mph, which is near the reported average male swing speed of 93 mph. Even at slightly below average swing speeds, Parente says the driver produced 15-20 yards of extra distance compared to its conforming counterparts. Of course, at faster swing speeds, the distance increases would be even more pronounced.
That brought Parente to another interesting point.
Thanks to physics, the slower you swing a golf club, the less impact some technological advancements will have on improving speed. For example, if a certain driver design increases speed by, say, 1 percent, that means someone who swings the club 130 mph will gain more speed (a 1.3 mph increase) than someone who swings it 70 mph (a 0.7 mph increase).
“The sick joke of this industry is that the people who benefit the most from any performance advantage in a golf club are those who need it the least: those who swing it 130-140 mph,” Parente said. “Whereas someone who swings 70-80 mph, and can use every single yard, they don’t get much advantage unfortunately. That’s just due to physics; they’re not generating enough club head speed to see big differentials between equipment.”
Regardless, the best way to ensure you’re optimizing distance under the rules of golf, at your current swing speed, is to get with a trusted fitter to dial in your equipment. If you want to increase swing speed, you can also try long-driver Kyle Berkshire’s recent tips, or even try using a golf tee that reduces friction.
This article originally appeared on Golf.com. | physics |
https://lasertech.vn/may-han-laser-cam-tay-al-arm-450f/ | 2023-03-31T09:47:44 | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949598.87/warc/CC-MAIN-20230331082653-20230331112653-00544.warc.gz | 0.786951 | 589 | CC-MAIN-2023-14 | webtext-fineweb__CC-MAIN-2023-14__0__106338378 | en | AL-ARM 450F handheld laser welding machine
The AL-ARM welding system enables very fast and flexible fabrication and repair welding, with little preparation time required.
Applications: sheet metal fabrication welding, automotive body repair, heavy industrial equipment
The AL-ARM is different:
With this welding laser, the welding process is monitored through 3D visualization instead of customary microscope attachments. This is realized via (pass-through) 3D laser protection goggles, which allow you to keep an eye on the environment and the welding task at the same time. The welding area is augmented and the process-relevant data, such as the cross-hairs, are displayed directly.
This laser system has no resonator, but instead a handset with automated wire feeding for wire thicknesses up to 0.6 mm. The wire is fed automatically. The handset weighs only 1.5 kg and is connected to the supply unit via a 3.5 m long energy chain.
It is easy to handle and very flexible during operation. The welding seam width can be variably adjusted during the welding process.
A lot of emphasis was placed on safety: Integrated workpiece detection ensures that welding is only allowed when the handset touches the workpiece. This prevents uncontrolled laser radiation emissions. The safety concept verified by TÜV for compliance with the high safety requirements for performance level d is included.
Laser type / wavelength: Fiber laser, 1070 nm
Average power: 450 W
CW output: 450 W
Peak pulse power: 4500 W
Pulse energy: 45 J
Pulse duration: 0.2 ms – CW
Pulse frequency: Single pulse – 100 Hz
Electrical connection: 3-phase, 16 A
Operating modes: Pulsed / CW
Welding spot Ø: 0.3 – 4 mm
Focal distance: 120 mm
Pulse shaping: Adjustability of power curve within a laser pulse
Energy chain length: 3.5 m
http://m.mos-chemical.com/sale-12725545-li2o-lithium-oxide-powder-cas-12057-24-8-with-29-8814-molecular-weight.html | 2020-08-12T17:15:07 | s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738913.60/warc/CC-MAIN-20200812171125-20200812201125-00359.warc.gz | 0.800078 | 166 | CC-MAIN-2020-34 | webtext-fineweb__CC-MAIN-2020-34__0__173995499 | en | White powder or hard crystalline solid; an ionic compound with a relative density of 2.013 g/cm³, a melting point of 1567 ℃ (1840 K), and a boiling point of 2600 ℃; it sublimes at 1000 ℃ or above and has the highest melting point of the oxides of the Group IA (alkali metal) elements. It deliquesces easily and dissolves in water to form the strong base LiOH.
2.013 g/mL at 25 °C(lit.)
1. Used as a spectral reagent;
2. Used, as battery-grade lithium oxide, in the preparation of battery materials;
3. It is also used in special glass, ceramics, medicine and other fields.
We can customize according to customer's requirement. | physics |
https://danielofarabica.wordpress.com/2010/10/10/heat-loss-what-heat-loss/ | 2018-04-19T13:57:04 | s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125936969.10/warc/CC-MAIN-20180419130550-20180419150550-00091.warc.gz | 0.972134 | 1,231 | CC-MAIN-2018-17 | webtext-fineweb__CC-MAIN-2018-17__0__175641981 | en | So, here’s a confession: I have never measured the temperature of the water I use to brew coffee. Or at least I can’t recall doing so. Yep, it’s true.
I’ll give you a minute to soak that up, throw up your hands, maybe give out a guffaw, delete me from your bookmarks, whatever … … There. All done? Ok, moving on …
It’s not that I don’t have a standard. :30 – :45 off the boil. I think that’s adequate. The emphasis is on repeatability around here, not pinpoint precision accuracy. It’s worked out quite well but this time curiosity got the better of me. Or maybe it was frustration.
In “the lab” this morning I decided to go that “extra mile” and use a thermometer to test the temperature of my brew water in the Buono. This was brought on by a particularly temperamental coffee or, to be more precise, a particularly temperamental flavor I was getting from a particular coffee, a flavor that is obstinate and tenacious in the face of my attempts to avoid it: “burnt”. My first thought was that the water was too hot so I decided to get all scientific on its ass. I don’t call it “the lab” for nothin’
This little experiment was also brought on by an idea that has been floating around, that the Buono is poor at maintaining a consistent water temperature over a brew cycle.
By heating the water in the Buono on the stove, as opposed to heating it in another vessel and then transferring it to the Buono, the kettle's material is raised in temperature such that the heat of the metal sustains the temperature of the water inside over an amount of time sufficient to brew a cup of coffee in a Hario V60 or, for that matter, any number of pour-over methods.
My rig? A small kitchen thermometer – the kind you see in the pockets of chefs all the time – held by a pair of kitchen tongs.
The thermometer was checked for accuracy the way I used to do it back when I worked at Peet’s: fill a cup with ice (really pack it in there), gently fill with water, give it a bit of a stir, place the thermometer in the cup and wait. Give it a few minutes. It should read right around 32º F. If not, it’s time to calibrate. Luckily for me, it was spot on because I have no idea where the plastic calibration wrench that came with the thermometer went.
It was crude setup, I admit but by pure chance, it had at least a couple advantages over simply placing the probe end of a corded remote thermometer into the pot and definitely one over holding it with my bare hand. I’ll let you figure out that last one but as for the other two: because I held it with tongs and because the ends of the tongs were plastic, I was able to keep the thermometer off the sides of the kettle and have some measure of assurance that there was little to no heat transfer from the ends of the tongs to the thermometer itself.
I measured the water temperature at various points along the way toward boiling and also, just for another accuracy check, after the water was at a full boil. Once again, spot on: right around 212º F.
Just as I did for the brew immediately preceding this one, I let the water sit for a full 1:00 off the boil (I had been attempting to cool the water down beyond what it would normally be at :30 – :45 off the boil to get rid of the burnt flavor). The temp? Right around 190º – 195º F.
I started the pre-infusion (AKA, at least around here as “makin’ the bed). It went a little over (≈1:00) because I was busy taking temperature measurements. The temp this time? Again, right around 190º – 195º F.
After the brew was done I took another measurement and again it was right around 190º – 195º F.
This was a full 4:00 off the boil and the water temperature inside the Buono was amazingly stable. Far more stable, in fact, than I have been led to believe I could expect it to be.
The take-away, then? I heat my water on the stove in the Buono – i.e. I’m not simply using it as a transfer pitcher – and am getting an incredibly stable temperature during the brew process and I believe it is because I am heating my water in the kettle on the stove and not using it simply as a transfer pitcher.
A couple questions/issues/blah, blah, blah this brings to my mind:
- Why is it that in 1:00, right off the boil, the temperature of the water in the kettle came down by about 17º – 22º F but came down no further in the succeeding 3:00? I suspect that holding onto a temperature sufficient to boil water is difficult but that 190º – 195º F is easier to maintain with little to no additional energy input and that temperature plateau would probably be even lower without the insulating effects of the heated kettle.
- How long does the heated kettle preserve the temperature of the water at the proper brewing temperature i.e. if I wanted a lower water temp for brewing a particular coffee how long would I need to wait? Another experiment, perhaps.
That was fun. I need to do more of these.
Please feel free to add your two cents in the comments.
P.S. I really gotta get on it – I’ve been meaning to create a “My Brewing Parameters” page for some time now. | physics |
https://iqmsglobal.com/ansiesd-s20-20/ | 2020-10-29T07:59:11 | s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107903419.77/warc/CC-MAIN-20201029065424-20201029095424-00256.warc.gz | 0.917006 | 448 | CC-MAIN-2020-45 | webtext-fineweb__CC-MAIN-2020-45__0__4601927 | en | What Is Electrostatic discharge (ESD)
Electrostatic discharge (ESD) is the release of static electricity when two objects come into contact. Familiar examples of ESD include the shock we receive when we walk across a carpet and touch a metal doorknob and the static electricity we feel after drying clothes in a clothes dryer. A more extreme example of ESD is a lightning bolt. While most ESD events are harmless, it can be an expensive problem in many industrial environments.
ESD first requires a build-up of an electrostatic charge. This occurs when two different materials rub together. One of the materials becomes positively charged; the other becomes negatively charged. The positively-charged material now has an electrostatic charge. When that charge comes into contact with the right material, it is transferred and we have an ESD event.
Manufacturers of electronic devices incorporate measures to prevent ESD events throughout the manufacturing, testing, shipping, and handling processes. For example, an employee may wear a wrist strap when working with devices or may wear ESD control footwear and work on an ESD floor mat that causes the electrostatic charge to go into the ground instead of into the device. Sensitive devices can be packaged with materials that shield the product from a charge.
ANSI/ESD S20.20 is the multi-industry standard for the development of ESD control programs that protect today’s increasingly sensitive electronic components, assemblies, and equipment from costly ESD damage and reduce down-time. Using the standard’s control methods and guidance, an organization can develop an ESD control program that protects devices down to 100v (class 1a) or less.
ANSI/ESD S20.20 is based on these fundamental static control principles:
- All conductors in a given facility must be bonded or electrically connected and attached to a known ground or contrived ground;
- All necessary non-conductors in a facility cannot lose their electrostatic charge by attachment to or connection with ground; and
- Moving or transporting ESD-sensitive items outside of an electrostatic discharge protected area requires the use of static protective materials. | physics |
http://www.powerguru.org/permanent-magnet-synchronous-generators-in-wind-energy-systems/ | 2017-03-29T22:48:57 | s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191405.12/warc/CC-MAIN-20170322212951-00025-ip-10-233-31-227.ec2.internal.warc.gz | 0.918211 | 1,282 | CC-MAIN-2017-13 | webtext-fineweb__CC-MAIN-2017-13__0__28905452 | en | Reducing the losses by one-third for this type of generator
Renewable energy sources already account for around 11.81% of today's energy mix. The most successful of these is wind energy, occupying top place and delivering 5.7% of power consumption with an output of over 30 billion kilowatt hours in 2006. The technology has advanced in leaps and bounds since the first systems for exploiting wind energy – windmills – were introduced.
By Dr. Roland Zoller, Product Marketing DM-PM, VACUUMSCHMELZE GmbH & Co KG
The first wind energy systems of the early 80s notched up a mere 55 kilowatts in nominal output. By comparison, the largest of today’s modern systems have an output of 6 megawatts, although these “powerhouses“ are still the exception. The majority of wind energy systems today are based on a double feed asynchronous geared generator, and reach outputs between 500 kilowatts and one megawatt.
The system has a number of advantages which have contributed to its widespread popularity. Its modular structure of gear, generator and inverter is a widely accepted standard involving relatively low start-up investments. In addition, double-feed asynchronous generators are low in weight, and transformers are unnecessary since the stator windings can be connected directly to the power grid while the rotor windings are connected to an inverter. As a result, the inverter need only be dimensioned for one-third of the generator’s nominal output, a further factor that lowers costs.
The gears are an essential part of this system, which cannot be designed as a multi-pole system and requires high speeds to operate. However, the costs of the gear system and the complex maintenance they require – for example, oil must be regularly checked and changed – represent disadvantages of this type of generator. Slip rings are also high-maintenance components. The output of these systems is lower than, say, electrically excited synchronous generators.
Electrically excited synchronous generators – geared and gearless
Today, only a few of the wind energy systems in operation replace the induction generator with an electrically excited synchronous generator.
These generators deliver higher efficiency than asynchronous generators, owing to the lack of remagnetization loss in the rotor. An exciter coil in the rotor produces the magnetic field. High-speed variants do not require sliding contacts, since they include a rotating rectifier with a pilot exciter; however, this type of system still requires a gearbox. The frequency generated depends on the generator's rotation speed. Since this is not constant in wind energy systems, the power generated cannot be fed directly into the grid, and an inverter and transformer are generally required.
A similar principle applies to direct-driven electrically excited synchronous generators. While they do not require gears, they require a transformer and inverter. Because this type of generator can support a high number of poles, a gear system is unnecessary or a single-stage planetary gear is sufficient. Direct drive involves a number of disadvantages. For example, the structure is not modular, since all components must be designed as integral parts of the wind energy system. The diameter, and thus the weight, of the synchronous generator increases with the pole number. In addition, electrically excited synchronous generators require slip rings, involving expensive maintenance procedures.
Permanently excited synchronous generators – the model of the future?
Supported by statutory innovation incentives, the technological development of wind energy systems will continue to advance. Given the annually falling remuneration scales in Germany’s Renewable Energy Act ( Erneuerbare-Energien-Gesetz, EEG ), an increasingly efficient method of converting wind energy into electric power is required. “Repowering“, the term used for the replacement of wind energy systems which may be considerably over 20 years old, is also a major concern for wind farm operators. In addition, these old, comparatively inefficient systems often occupy prime sites in coastal regions or mountainous areas.
Wind energy system prototypes using permanently excited synchronous generators are currently in operation at an array of test sites. In this design, the electrical excitation is replaced by permanent magnets, reducing the losses by one-third for this type of generator in comparison to electrically excited synchronous generators, and thus increasing output. The generator can be designed as a multi-pole system for smooth adjustment to lower rotation speeds. Here too, a gear system is unnecessary or a single-stage planetary gear is sufficient, although an inverter and transformer for grid feed-in are required.
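The link between pole count, rotor speed and the gearbox follows from the basic synchronous-machine relation f = p · n / 60, where f is the electrical frequency in Hz, p the number of pole pairs and n the rotor speed in rpm. The sketch below only illustrates this relation — the rotor speeds are assumed, typical ballpark figures rather than data from this article, and in a real turbine the full converter means the generator does not have to produce grid frequency exactly; the point is simply that lower speeds call for many more poles.

```python
# Illustrative sketch of f = p * n / 60 for a synchronous generator.
# Rotor speeds are assumed ballpark values (not data from the article); the
# converter in a real turbine relaxes the exact-grid-frequency requirement.

def pole_pairs_for(grid_freq_hz: float, rotor_speed_rpm: float) -> float:
    """Pole pairs needed to generate grid_freq_hz directly at rotor_speed_rpm."""
    return 60.0 * grid_freq_hz / rotor_speed_rpm

for rpm in (1500.0, 375.0, 15.0):  # geared high-speed, medium-speed, direct drive
    p = pole_pairs_for(50.0, rpm)
    print(f"{rpm:7.0f} rpm -> {p:6.1f} pole pairs ({2 * p:.0f} poles)")
```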
Permanently excited direct-drive systems are lower in weight than electrically excited synchronous generators. The permanent magnets are maintenance-free and no slip rings are required, so it can be assumed that this generator type incurs significantly lower maintenance and operating costs throughout its lifespan.
Land sites favourable to wind generation are scarce, and are artificially rendered scarcer by the imposition of regional regulations restricting maximum height and distance. The solution to this problem is advanced development of offshore wind farms, for instance off Germany’s North Sea coast – a vision for which it is particularly important that the systems combine minimum maintenance with maximum efficiency. Here, the advantages of wind energy systems based on permanently excited synchronous generators come to the fore.
Permanent magnets from Vacuumschmelze GmbH & Co KG are designed for outstanding corrosion stability, an essential property in an environment of salt air and damp. The permanent magnets are also demagnetization-proof in specific constructions. Vacuumschmelze is primarily known as a leading supplier of magnetic sub-systems. These simplify assembly and operation for the customers since the magnetic poles are attached to metal pole pieces during manufacture by Vacuumschmelze, enabling customers to mount the magnetic poles directly onto the rotor. Vacuumschmelze GmbH & Co KG has extensive expertise in system assembly and magnetic design, and is thus a reliable partner in the development of permanently magnetically excited wind energy systems.
1) Information for 2006 from German Ministry for the Environment, Conservation and Reactor Safety.
2) Information from German association Bundesverband WindEnergie e.V. | physics |
https://www.ezurio.com/resources/newsroom/laird-technologies-launches-first-series-product-demonstration-videos | 2024-04-24T09:04:38 | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819089.82/warc/CC-MAIN-20240424080812-20240424110812-00171.warc.gz | 0.906395 | 836 | CC-MAIN-2024-18 | webtext-fineweb__CC-MAIN-2024-18__0__44490718 | en | Laird Technologies Launches the First in a Series of Product Demonstration Videos
Published on May 5, 2010
Video Highlights Advanced Laser Marking and Coloring Process for Handset Metals
St. Louis, Missouri, USA – May 5, 2010 – Laird Technologies, Inc., a global leader in the design and supply of customized performance-critical components and systems for advanced electronics and wireless products, today announced the launch of its product demonstration video series. This first video discusses in detail the advanced laser marking and metal coloring processes that are involved in handset metal manufacturing. The video is available for viewing and downloading on the Laird Technologies Website.
The video examines how laser marking is used to create custom markings, patterns, logos, part numbers, or any other information desired on handset metal surfaces. This is accomplished by focusing a laser beam directly onto the metal surface, applying a precise amount of power balanced with proper movement of the beam, so that the metal is not damaged. Aluminum and stainless steel are the most common metals used in this marking process.
Laser marking offers several benefits: the marks are permanent and not easily removed, the process can be easily integrated into the existing manufacturing flow, and it can replace some adhesive labels on handset metals, thus reducing overall cost. As a leading supplier of metals using laser marking as a key technology, Laird Technologies can perform this process in all of its major metal facilities throughout the world.
“Laird Technologies has been using lasers for many different processes for many years now. Laird Technologies’ handset metals utilize processing technologies to provide custom products for our customers,” said Steve Ulm, Laird Technologies Vice President and General Manager of Handset Metals. “This video series we are working on will help explain to our customers how we use advanced technologies to create many new options in our visual metals, as well as structural metals products. If pictures are worth a thousand words, videos can tell the story even better of how Laird Technologies uses lasers to create unique customer products.”
The video also explains how color is created in a logo by using a laser on stainless steel. Color is produced during marking by growing a thin oxide film that causes interference of the light reflected from the metal surface. A variety of colors can be produced, including silver, white, gold, violet, orange, red, brown, black, blue, green, and yellow.
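The color effect described here is ordinary thin-film interference: light reflected from the top of the transparent oxide layer interferes with light reflected from the metal beneath it, so the perceived color depends on the oxide thickness the laser produces. The sketch below evaluates only the simplest interference condition, 2·n·d = m·λ, ignoring phase shifts at the interfaces; the refractive index and thickness values are generic assumptions for illustration, not Laird's process parameters.

```python
# Illustrative only: simplest thin-film interference condition 2 * n * d = m * lambda,
# ignoring phase shifts at the interfaces. Refractive index and film thicknesses
# are generic assumptions, not Laird process parameters.

def reinforced_wavelengths_nm(n_oxide: float, thickness_nm: float) -> list:
    """Visible wavelengths (nm) strongly reflected by a film of given thickness."""
    wavelengths = []
    for m in range(1, 10):
        lam = 2.0 * n_oxide * thickness_nm / m
        if 380.0 <= lam <= 750.0:  # visible range
            wavelengths.append(round(lam))
    return wavelengths

for d in (90, 120, 180, 250):  # assumed oxide thicknesses in nm
    print(f"{d:>4} nm film -> reinforced near {reinforced_wavelengths_nm(2.3, d)} nm")
```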
As a leading supplier of metals to the handset industry with expertise in the application of laser technology for multiple processes, Laird Technologies is focused on the reliability, repeatability, low-cost, and high yield of handset metals through the automation of processes.
Later handset metal tutorial videos will cover the capabilities of Laird Technologies in laser welding, cutting, and engraving of handset metals. For more information, please log on to www.lairdtech.com.
About Laird Technologies, Inc.
Laird Technologies designs and manufactures customized, performance-critical products for wireless and other advanced electronics applications.
The company is a global market leader in the design and supply of electromagnetic interference (EMI) shielding, thermal management products, mechanical actuation systems, signal integrity components, and wireless antenna solutions, as well as radio frequency (RF) modules and systems.
Custom products are supplied to all sectors of the electronics industry including the handset, telecommunications, data transfer and information technology, automotive, aerospace, defense, consumer, medical, and industrial markets.
Laird Technologies, a unit of Laird PLC, employs over 10,000 employees in more than 39 facilities located in 13 countries.
For additional information, visit http://www.lairdtech.com or contact us at:
Translated versions of this press release are available in Simplified and Traditional Chinese, Japanese, Korean, and German languages.
© 2010 All rights reserved. Laird Technologies and its logo are trademarks of Laird Technologies, Inc. Other products, logos, and company names mentioned herein, may be trademarks of their respective owners. | physics |
http://ir.rubicontechnology.com/corporateprofile.aspx?iid=4384786 | 2017-05-24T00:32:15 | s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607726.24/warc/CC-MAIN-20170524001106-20170524021106-00585.warc.gz | 0.833438 | 203 | CC-MAIN-2017-22 | webtext-fineweb__CC-MAIN-2017-22__0__38743283 | en | Rubicon Technology is an advanced electronic materials
provider that develops and manufactures high quality mono crystalline sapphire products in large volume. Our products enable multiple, high growth end markets including light-emitting diodes (LEDs), radio frequency integrated circuits (RFICs), and blue
We apply our proprietary crystal growth technology to produce very high-quality sapphire in a form that allows for volume production of various sizes and orientations of substrates and windows. We are a vertically-integrated manufacturer with strong capabilities in converting bulk crystal into large diameter, polished windows and lenses at very tight tolerances specific to our customers' specifications.
|5/04/2017||Rubicon Technology Announces Reverse Stock Split|
|4/05/2017||Rubicon Technology to Display Large Area Optical Sapphire Products at the 2017 SPIE Defense and Commercial Sensing Expo|
|3/16/2017||Rubicon Technology Names Timothy E. Brog as Chief Executive Officer| | physics |
https://zameer36.com/icy-fortress-of-solitude-snapped-by-antarctic-survey/ | 2024-03-02T06:10:30 | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475757.50/warc/CC-MAIN-20240302052634-20240302082634-00665.warc.gz | 0.951623 | 254 | CC-MAIN-2024-10 | webtext-fineweb__CC-MAIN-2024-10__0__144741092 | en | Icy fortress of solitude snapped by Antarctic survey
Actually, it is completely natural, and nothing to do with supervillain antics. It is a lenticular cloud, which can be produced when air near the surface gets pushed upwards as it flows over peaks in the landscape, creating pressure waves. The clouds form at the top of the wave, where the air is coolest.
That’s not all. The jagged outcrops of the fortress are bulges of sea ice caused by two ice floes crashing into each other, similar to the way colliding tectonic plates form mountain ranges.
This picture was snapped by a scientist on a flight from McMurdo station in Antarctica as part of NASA's Operation IceBridge. The mission aims to combine radar data taken from aircraft with satellite images, like the one below, in an effort to track changing ice conditions in Antarctica and the Arctic.
(And yes, we know the Man of Steel’s home-away-from-home is normally found at the other end of the planet in the Arctic, but he’s also a fictional alien who wears his underwear on the outside, so how about allowing us some poetic licence?)
(Image: NASA Earth Observatory) | physics |
https://endongneng.nwpu.edu.cn/info/1173/8953.htm | 2024-02-23T08:25:32 | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474361.75/warc/CC-MAIN-20240223053503-20240223083503-00037.warc.gz | 0.942334 | 371 | CC-MAIN-2024-10 | webtext-fineweb__CC-MAIN-2024-10__0__77559800 | en | Short Course on Turbomachinery Aeroelasticity
From July 16th to July 19th 2019, the Computational Physics and Energy Science Research Center of the Taicang Yangtze River Delta Research Institute of Northwestern Polytechnical University invited Dr. Petrie-Repar of the Royal Institute of Technology (KTH) in Sweden to deliver a summer short course on turbomachinery aeroelasticity for aero-engine blades. The course attracted more than 20 graduate students and engineers from Shanghai Jiaotong University, the Institute of Engineering Thermophysics of the Chinese Academy of Sciences, Aero Engine Corporation of China and Shanghai Electric. The three-day course covered flutter and forced-response analysis methods for turbomachinery blades.
Dr. Petrie-Repar is a director and co-founder of RPMTurbo, an Australia based engineering consultancy specializing in aeroacoustic and flutter analysis for the manufacturers of gas turbines, steam turbines and aircraft engines. He was an associate professor in KTH (Royal Institute of Technology), Sweden from Feb 2015 to May 2019, and the coordinator of a large European Union funded (budget 7.5 million Euros) research project called ARIAS. The aim of the project was to investigate blade vibrations in aircraft engines. The members of the project included all major European aircraft engine manufacturers such as Rolls-Royce, Safran, Siemens and GE Avio. He worked in German Aerospace Center (DLR) as a research engineer from Nov 1999 to Nov 2004, where he developed an efficient and robust linearized method for the engineering analysis of blade vibrations in aircraft engines, gas turbines and steam turbines that was subsequently used by Siemens Power Generation. | physics
https://yayoibrandt.blogspot.com/2011/12/earthquake.html | 2018-07-21T00:16:31 | s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592001.81/warc/CC-MAIN-20180720232914-20180721012914-00157.warc.gz | 0.990162 | 105 | CC-MAIN-2018-30 | webtext-fineweb__CC-MAIN-2018-30__0__23575013 | en | Our adventure living, working and traveling in Japan.
Friday, 23 December 2011
Experienced my first earthquake this morning at about 11:25am, December 24, 2011. It felt as though a giant had grabbed the house and shook it. Almost immediately NHK television announced that it was a 3.9 magnitude earthquake with the epicentre quite close to where we are. My first reaction was that snow was sliding off of the slippery metal roof again, but our relatives here knew immediately that it was an earthquake. | physics |
https://naranglab.ucla.edu/publication/imaging-phonon-mediated-hydrodynamic-flow-in-wte2/ | 2024-02-28T02:34:26 | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474690.22/warc/CC-MAIN-20240228012542-20240228042542-00297.warc.gz | 0.865374 | 352 | CC-MAIN-2024-10 | webtext-fineweb__CC-MAIN-2024-10__0__201341540 | en | In the presence of strong interactions, electrons in condensed matter systems can behave hydrodynamically thereby exhibiting classical fluid phenomena such as vortices and Poiseuille flow. While in most conductors large screening effects minimize electron-electron interactions, hindering the search for possible hydrodynamic candidate materials, a new class of semimetals has recently been reported to exhibit strong interactions. In this work, we study the current flow in the layered semimetal tungsten ditelluride (WTe2) by imaging the local magnetic field above it using a nitrogen-vacancy (NV) defect in diamond. Our cryogenic scanning magnetometry system allows for temperature-resolved measurement with high sensitivity enabled by the long defect spin coherence. We directly measure the spatial current profile within WTe2 and find it differs substantially from the uniform profile of a Fermi liquid, indicating hydrodynamic flow. Furthermore, our temperature-resolved current profile measurements reveal an unexpected non-monotonic temperature dependence, with hydrodynamic effects strongest at ~20 K. We further elucidate this behavior via ab initio calculations of electron scattering mechanisms, which are used to extract a current profile using the electronic Boltzmann transport equation. These calculations show quantitative agreement with our measurements, capturing the non-monotonic temperature dependence. The combination of experimental and theoretical observations allows us to quantitatively infer the strength of electron-electron interactions in WTe2. We show these strong electron interactions cannot be explained by Coulomb repulsion alone and are predominantly phonon-mediated. This provides a promising avenue in the search for hydrodynamic flow and strong interactions in high carrier density materials.
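For context on what "differs substantially from the uniform profile" means: in the ohmic (Fermi-liquid) limit the current density is essentially flat across the channel, while in the hydrodynamic limit with no-slip boundaries it approaches a parabolic, Poiseuille-like shape. The sketch below only evaluates these two textbook limiting cases — it is not the Boltzmann-transport calculation used in the paper, and the channel width is arbitrary.

```python
# Textbook limiting cases only (not the paper's Boltzmann-transport model):
# uniform ohmic flow vs. a parabolic Poiseuille-like profile with no-slip walls,
# normalised to carry the same total current across a channel of width W.
import numpy as np

W = 1.0                                   # channel width, arbitrary units
x = np.linspace(-W / 2, W / 2, 9)         # positions across the channel

uniform = np.ones_like(x)                      # ohmic / Fermi-liquid limit
poiseuille = 1.5 * (1.0 - (2.0 * x / W) ** 2)  # hydrodynamic limit

for xi, ju, jp in zip(x, uniform, poiseuille):
    print(f"x/W = {xi:+.2f} : ohmic {ju:.2f}, hydrodynamic {jp:.2f}")
```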
Last updated on 10/04/2021 | physics |
https://www.bennettnbennett.com/category/static-control-solutions/ | 2018-11-21T13:35:07 | s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039748901.87/warc/CC-MAIN-20181121133036-20181121155036-00480.warc.gz | 0.907639 | 159 | CC-MAIN-2018-47 | webtext-fineweb__CC-MAIN-2018-47__0__187389305 | en | Did you know Ionization is more effective than using air blowers for reducing particulate matter in gown rooms?
Grounding is the most common way to combat static electricity. "The single most important concept in the field of static control is grounding. Attaching all electrically conductive and dissipative items in the workplace to ground allows built-up electrostatic charges to equalize with ground potential. A grounded conductor cannot hold a static charge." – ANSI/ESD S6.1, ESD Association Standard for the Protection of Electrostatic Discharge Susceptible Items – Grounding.
Ionization is an alternative to grounding. It is the process of charging air molecules positively and negatively to balance the charge on the surface of ungrounded objects. | physics |
http://ganshalomchai.blogspot.com/2012/03/february-surprises.html | 2018-07-17T18:52:56 | s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589892.87/warc/CC-MAIN-20180717183929-20180717203929-00582.warc.gz | 0.982852 | 243 | CC-MAIN-2018-30 | webtext-fineweb__CC-MAIN-2018-30__0__146282748 | en | Some of the leaves collected water drops. This cabbage leaf looks like a swimming pool for a lady bug.
When the rain stopped, we found a big puddle in the parking lot. I challenged the children to "think like a scientist" and make observations about the puddle. Several children noticed that the surface of the water acted like a mirror. They commented that they could see the trees and the building in the water. One child noticed that the water was moving. I asked her what was making the water move. She watched for a moment and then answered, "the wind".
Finally, some noticed that leaves were floating on the surface. I asked them if they could create an experiment using the puddle. They gathered items found on the ground and conducted a sink and float experiment. At one point a pansy was floating. The wind caught it and moved it forward. One child picked it up, then placed it back on the water. But, it was turned upside down. The children noticed that instead of floating, it partially sank. After a period of discussion, they decided that it sank because it had water on top of the petals. And water made it too heavy to float. | physics |
https://bas.com.bh/learning-and-development/aircraft-weight-balance/ | 2024-02-29T09:14:09 | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474795.48/warc/CC-MAIN-20240229071243-20240229101243-00872.warc.gz | 0.810937 | 324 | CC-MAIN-2024-10 | webtext-fineweb__CC-MAIN-2024-10__0__33278756 | en | Aircraft Weight & Balance
Dispatchers, Load Controller and Ramp Handler staff
- To understand the basic principles of aircraft Weight and Balance
- Become familiar with the various terms used in the field
- Understand and identify the various weights used in the compilation of a load sheet
- Construct indexes and understand their use
- Become familiar with the design of trim charts
- Be able to fully prepare various load documents
- List the two most important factors to consider when planning the loading of an aircraft.
- Be able to understand aircraft weight limits when preparing the load sheet.
- Be able to define the center of gravity of the aircraft (a worked example is sketched below, after this list).
- Be familiar with the aircraft types – narrow-bodied and wide-bodied.
- Be able to apply last minute change (LMC) procedures on the load sheet.
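A minimal worked example of the center-of-gravity idea referred to in the objectives above: the loaded CG is the total moment (weight × balance arm) divided by the total weight, and it is normally quoted as a percentage of the Mean Aerodynamic Chord (MAC). Every number below is invented purely for illustration — it does not correspond to any real aircraft type or to the course material.

```python
# Hypothetical example only: loaded center of gravity and %MAC from a list of
# load items. All weights, arms, LEMAC and MAC values are invented; real load
# sheets use the aircraft's certified weight-and-balance data.

items = [                        # (description, weight in kg, arm in m from datum)
    ("Dry operating weight", 42000, 17.8),
    ("Passengers",           14000, 18.6),
    ("Cargo hold 1",          2500, 12.0),
    ("Cargo hold 4",          1800, 25.5),
    ("Fuel",                 12000, 17.2),
]

total_weight = sum(weight for _, weight, _ in items)
total_moment = sum(weight * arm for _, weight, arm in items)
cg = total_moment / total_weight          # metres aft of the reference datum

LEMAC, MAC = 16.4, 4.2                    # assumed leading edge of MAC and MAC length (m)
cg_percent_mac = (cg - LEMAC) / MAC * 100.0

print(f"Take-off weight  : {total_weight} kg")
print(f"Center of gravity: {cg:.2f} m aft of datum ({cg_percent_mac:.1f} % MAC)")
```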
- Course objective
- Course introduction
- Definition and Responsibilities of LC
- Principle of Flight
- Trim Determination
- Principle Center of Gravity
- MAC safe range
- Weight Control & Quick weight combinations reference table
- Traffic Load
- Regulated Take-off Weight
- Standard Weight
- Loading Instruction Report
- A320 – Aircraft Weight and CG Data
- Loading Instruction / Report A321
- Load Categories
- Load Documentation
- Definition Of Load Sheet Terms
- Description of EDP Load Sheet
- Acars Load Sheet and Term Sheet
- Provisional Load Sheet
- Description Of Acars Load Sheet
- Last Minute Change (LMC) Procedure
- Load Message
- A330 – Aircraft Weight and CG Data | physics |
http://www.ict-pof-plus.eu/ | 2016-07-01T11:33:55 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402699.36/warc/CC-MAIN-20160624155002-00192-ip-10-164-35-72.ec2.internal.warc.gz | 0.907752 | 401 | CC-MAIN-2016-26 | webtext-fineweb__CC-MAIN-2016-26__0__37527825 | en | Welcome to the POF-PLUS EU Project Website
Plastic Optical Fibre for Pervasive Low-cost Ultra-high capacity Systems
The POF-PLUS project, which officially ended on 31 May 2011, focused on developing new photonic components and transmission technologies for large core plastic optical fibre (POF) systems, aiming at the unprecedented implementation of tens of Gbps transmission over this medium. The different flavours of large core POF, with core diameters in the range of 1 millimetre, allow us to envision an extremely simple installation technology, significantly more user-friendly than traditional glass optical fibre (GOF) or even standard copper solutions (UTP, coaxial, etc.). The extreme simplicity of POF has to date come at the expense of lower transmission capacity with respect to GOF. Strategies to overcome these limitations based on novel transmitter and receiver components were the core goal of POF-PLUS. The project achieved the following results:
- realization of fully engineered systems working at 1Gbps over at least 50 meters;
- proof of concept demonstration of multi-Gbps over 50 meters (from 4 to 10 Gbps);
- proof of concept demonstration of 10Gbps transmission over tens of meters using parallel optic solutions (multi core and ribbon cables);
- reliable transmission of selected radio-over-fibre systems (such as transporting wireless UWB, Ultra Wide Bandwidth, radio signals).
An abstract from the POF-PLUS Description of Work, describing the basic concepts and objectives, can be found in the documents area.
The Final Report of the project, describing all the results of the 3 years of activity, is available as well, while a general summary can be found in the Results folder. In addition, we have also made available:
- a video with the POF-PLUS overview;
- a video describing the results of the 1Gbps effort;
- the POF-PLUS Handbook. | physics |
https://support.xumm.app/hc/en-us/articles/4499411625618-Will-Xumm-operate-after-a-Natural-Disaster- | 2022-09-29T09:05:36 | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00599.warc.gz | 0.946146 | 906 | CC-MAIN-2022-40 | webtext-fineweb__CC-MAIN-2022-40__0__246494994 | en | We receive a large number of requests from people who want to know how well Xumm would survive a natural disaster, an EMP, a Solar Flare or other catastrophic events. Although such events are rare or hypothetical, they do present a possible danger when it comes to your funds, so let's consider the possibilities...
What is an EMP?
EMP stands for electromagnetic pulse. It is a brief burst of electromagnetic energy that can disrupt communications, damage electronic equipment and, in some cases, physically damage objects such as buildings and aircraft. An EMP can be generated naturally or artificially (for example, by a nuclear weapon), but in either case it could cause severe damage to electrical grids and/or communication lines.
What is a Solar Flare?
A solar flare is an intense eruption of electromagnetic radiation in the Sun's atmosphere. While a solar flare is not directly dangerous to humans, there is a possibility that the effects of a large solar flare could be similar to those of an EMP.
What is the concern here?
The main concern is that any event that affects the electrical grid or the communication infrastructure of a country, continent, or even the world would impact the XRP Ledger and therefore put your XRP in danger of being destroyed or becoming inaccessible.
How the XRP Ledger works
The XRP Ledger is run on network validators and full history nodes, which are distributed all over the world. With so many machines operating in different geographical locations, with different levels of redundancy per machine and per location, and on a mixture of hardware, it is highly unlikely that the same disaster could affect a critical number of them. Since your XRP exists on the XRPL, as long as even one validator or full history node survives, your XRP still exists. EVERY validator and full history node would need to be destroyed in order for your funds to be lost.
How does this affect Xumm?
Since your XRP exists on the XRP Ledger, your funds exist as long as the XRP Ledger is still operational; however, there are other issues to consider.
- Was your phone affected by the event? Does it still function? Can it still be charged?
Xumm only works on cellular devices, so you still need a fully operational phone in order for Xumm to work. Let's assume that your phone has survived the event, is still fully functional, and that you have a way to charge it.
- Was the internet or cellular network affected by the event?
Xumm needs a way to connect to the internet/cellular network in order to access the XRP Ledger. If you cannot connect, you cannot access your funds.
- What does a post EMP/Solar Flare/Natural Disaster world look like?
Is your plan to use your XRP to barter/purchase items and services? Is the plan to replace the current system of finance with the XRPL after the event?
- Has anyone else been as diligent as you?
We certainly appreciate all of your preparations, but a network is only as useful as the people who use and maintain it. If you are the only one who has an operational validator, an operational phone and an operational cellular tower, then a network of one person is not going to be super helpful.
What about my Xumm Tangem cards?
Tangem cards pass rigorous testing and can withstand environmental extremes, occasional mechanical deformation, electromagnetic pulse (EMP), electrostatic discharge (ESD) and X-rays within the limits defined in the ISO 7810 standard.
You can learn more about technical specifications of the Tangem cards by visiting their website here:
Xumm is a non-custodial (un-hosted) XRP Ledger wallet that allows you to create and manage XRP Ledger accounts in a safe, secure and user-friendly way. It is not affected by EMPs, solar flares or natural disasters. What could be affected by these events is the infrastructure required to access the XRP Ledger. If you are unable to access the XRPL, Xumm is not going to be much use to you.
We understand that you might have additional questions regarding this topic so you are welcome to contact us any time via the Xumm Support xApp in Xumm or you can simply scan this QR code with Xumm and be directed there automatically. | physics |
https://hbradiology.com/ultrasound-non-invasive-imaging/ | 2023-12-05T04:47:59 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100545.7/warc/CC-MAIN-20231205041842-20231205071842-00187.warc.gz | 0.915249 | 778 | CC-MAIN-2023-50 | webtext-fineweb__CC-MAIN-2023-50__0__23148077 | en | Ultrasound is a medical imaging technique. It uses sound waves to create images of the body’s internal structures. It is a safe and painless procedure that can diagnose a wide range of conditions, including:
- Heart problems
- Breast cancer
- Kidney stones
- Gallbladder disease
- Liver disease
- Joint problems
- Muscle injuries
- Vascular problems
It is also used to guide procedures such as needle biopsies and fluid drainage.
How does ultrasound work?
An ultrasound machine sends out sound waves that bounce off the body's tissues. The returning echoes are converted into electrical signals, which are displayed on a monitor as an image. Different tissues in the body reflect sound waves differently, so the images created by ultrasound can be used to distinguish between different tissues.
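In practice the machine turns echo timing into depth by assuming a fixed speed of sound in soft tissue (about 1,540 m/s) and halving the round-trip time: depth = speed × time / 2. The small sketch below illustrates the arithmetic; the echo times are made-up example values.

```python
# Illustrative only: converting echo return time to reflector depth using the
# conventional average speed of sound in soft tissue. Echo times are made up.

SPEED_OF_SOUND_TISSUE = 1540.0   # m/s

def depth_cm(echo_time_us: float) -> float:
    """Reflector depth in cm; the factor of 2 accounts for the round trip."""
    return SPEED_OF_SOUND_TISSUE * (echo_time_us * 1e-6) / 2.0 * 100.0

for t_us in (13, 65, 130):       # microseconds
    print(f"echo after {t_us:>3} us -> reflector at about {depth_cm(t_us):4.1f} cm")
```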
What are the benefits of ultrasound?
- Ultrasound is a non-invasive procedure, which means that it does not involve any needles or surgery.
- Ultrasound is a safe procedure, with no radiation exposure and no other known side effects.
- Ultrasound is a painless procedure.
- Ultrasound is a relatively inexpensive procedure.
- Ultrasound can diagnose a wide range of conditions.
What are the risks of ultrasound?
The risks of ultrasound are very low. There is no known risk of radiation exposure or other side effects. However, there is a small risk of infection if the probe is not properly cleaned.
How is an ultrasound performed?
An ultrasound examination is performed by a trained technician (sonographer). The technician applies a gel to the skin, then moves a handheld device called a transducer over the area of the body. The transducer sends out sound waves and receives the echoes, which are then converted into images on a monitor.
The entire procedure usually takes only a few minutes.
What should I expect after an ultrasound?
There is no special care required after an ultrasound. You may notice a small amount of gel on your skin, which can be wiped away with a damp cloth.
Here is some additional information about ultrasound:
- Ultrasound can be used to see the baby’s heart beating. To measure the baby’s growth during pregnancy.
- Ultrasound can be used to guide needle biopsies. Which are procedures in which a needle is inserted into the body to remove tissue for testing.
- Ultrasound can be used to guide fluid drainage. Which is a procedure in which fluid is removed from the body through a needle.
Why Choose Ultrasound Imaging?
– Safe and Non-Invasive: Ultrasound imaging involves no radiation exposure, making it a safe option for patients of all ages, including pregnant women and children.
– Versatile and Wide-Ranging: Ultrasound can be used to examine various body parts, including the abdomen, pelvis, heart, and blood vessels, and even superficial structures like tendons and muscles.
– Real-Time Imaging: Ultrasound provides real-time images, allowing healthcare professionals to visualize the movement of organs and assess their function.
– Cost-Effective: Ultrasound is generally more affordable than many other imaging modalities, making it a cost-effective option for patients.
Ultrasound imaging is a versatile and valuable diagnostic tool that offers a range of benefits for patients.
Its non-invasive nature and cost-effectiveness make it an attractive choice for healthcare providers.
Whether it’s monitoring a pregnancy, evaluating abdominal pain, or assessing the heart’s function. Ultrasound plays a significant role in diagnosing medical conditions.
By harnessing the power of sound waves, ultrasound continues to provide safe and reliable imaging. Contributing to better patient care. | physics |
https://www.eyeflare.com/article/meteor-crater-winslow-arizona/ | 2024-04-21T09:00:58 | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817729.87/warc/CC-MAIN-20240421071342-20240421101342-00305.warc.gz | 0.952442 | 428 | CC-MAIN-2024-18 | webtext-fineweb__CC-MAIN-2024-18__0__15673346 | en | Standing at the edge of the world's best preserved meteor crater in the Arizona desert gives you a feeling for both the vastness of the universe and the age of the Earth at the same time. It is not the crater of the meteor which killed off the dinosaurs, but the impact must have been almost equally violent - even if it was an isolated event.
50,000 years ago, an asteroid crashed into the Earth at a speed of 26,000 miles per hour. The impact was more powerful than almost any atomic bomb ever tested - it would take 20 million tons of TNT to make a similar explosion. The scar from the violent explosion is a mile across and, spectacularly situated in the desert, makes for an amazing view.
Today, the crater is open to tourists. You can walk around the 550-foot-deep crater, 2.4 miles in circumference, on specially prepared observation trails. The well-planned visitor center, with air conditioning and a movie theater showing recreations of the impact, is open between 8 AM and 5 PM, longer on Memorial Day but closing earlier on Thanksgiving.
Seeing the crater makes you want to take a piece of it home, but that is not allowed. However, in the gift shop adjacent to the visitor center, you can buy rocks and unique gifts from this natural wonder.
While the crater is usually known as Meteor Crater, the formal name is the Barringer Crater, and it's commonly referred to as the Winslow Crater as well.
Despite the vastness (and emptiness) of the Arizona desert, this natural wonder is unusually accessible, situated just off I-40, only 35 miles from Flagstaff and 20 miles from Winslow, the nearest city.
Meteor Crater location and hours
Meteor Crater is located off I-40 at exit 233, then 6 miles south on the paved road. 35 miles east of Flagstaff, 20 miles west of Winslow, in Arizona, USA
Visitor center 8:00 am to 5:00 pm daily
Photo by j.bautista on flickr
You should follow me on twitter here. | physics |
https://theskincentre.co.za/products/picosecond-q-switched-nd-yag-laser-for-tattoo-removal | 2021-03-06T16:53:05 | s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375274.88/warc/CC-MAIN-20210306162308-20210306192308-00573.warc.gz | 0.818179 | 438 | CC-MAIN-2021-10 | webtext-fineweb__CC-MAIN-2021-10__0__217193209 | en | The picosecond Q-switched Nd:YAG laser is the latest advancement in laser tattoo removal technology. This laser delivers a large amount of energy in an extremely short time, making it a safer option for tattoo removal because there is less chance of burning the skin or altering its pigmentation. The energy of the picosecond Q-switched Nd:YAG laser shatters the tattoo ink into significantly smaller particles. At Aesthetica Skin Centre, our aesthetic professionals rely on this advanced technology to eliminate tattoo inks safely and effectively, with fewer complications than older laser technologies. We take advantage of different Q-switched wavelengths with our advanced system for tattoo removal. The good news is that picosecond Q-switched Nd:YAG lasers are effective at clearing both blue and green pigments, which are difficult to eliminate using other lasers, as well as tattoos that are refractory to treatment with traditional Q-switched lasers.
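The physical reason the extremely short pulse matters is thermal confinement: if the pulse is much shorter than the time it takes heat to diffuse out of an ink particle, the energy stays in the particle and fragments it rather than heating the surrounding skin. The sketch below compares typical pulse durations with a rough spherical-particle thermal relaxation time, τ ≈ d²/(27α); the particle sizes and the water-like thermal diffusivity are generic illustrative assumptions, not clinical parameters.

```python
# Rough illustration of thermal confinement: spherical-particle thermal relaxation
# time tau ~ d^2 / (27 * alpha) versus laser pulse duration. Particle sizes and
# the water-like thermal diffusivity are generic assumptions, not clinical data.

ALPHA = 1.4e-7   # m^2/s, approximate thermal diffusivity of water/soft tissue

def relaxation_time_ns(diameter_nm: float) -> float:
    d = diameter_nm * 1e-9
    return d * d / (27.0 * ALPHA) * 1e9

for diameter in (50, 100, 300):   # assumed ink fragment diameters, nm
    print(f"{diameter:>4} nm particle: thermal relaxation ~ {relaxation_time_ns(diameter):5.1f} ns")

print("picosecond pulse : ~0.3-0.75 ns")   # typical picosecond-laser range
print("Q-switched pulse : ~5-20 ns")       # typical nanosecond Q-switched range
```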
Ali, F.R., & Al-Niami, F. (2018). Picosecond Laser. Dermnet NZ. Retrieved from: https://dermnetnz.org/topics/picosecond-laser/
Saad, A.M., & Abdullah, A.A. (2017). Tattoo Removal using (1064 nm and 532 nm) Q-Switched Nd: YAG Laser. J Fac Med Baghdad, 59(3) 217-220. https://www.iasj.net/iasj/download/fa42576e685482c1
Torbeck, R., Bankowski, R., Henize, S., Saedi, N. (2016). Lasers in tattoo and pigmentation control: role of the PicoSure® laser system. Department of Dermatology and Cutaneous Biology, 9, 63-67. https://www.dovepress.com/lasers-in-tattoo-and-pigmentation-control-role-of-the-picosureacircreg-peer-reviewed-fulltext-article-MDER | physics |