303,804
This site says that water vapor isn't visible. However, take a look at this picture: Isn't that water vapor?
Water vapour is a clear and colourless gas, so it can't be seen by the naked eye. What you see in the photo in your second link is (partially) condensed water vapour, i.e. fog (or mist). Fog contains tiny, discrete water droplets, and light bounces off their surfaces in random directions, which is what makes it visible. Water vapour, by contrast, contains only free molecules, too small for light to bounce off, so pure water vapour (without any condensate) is invisible, like most gases (some gases are clear but coloured, like chlorine gas).
{ "source": [ "https://physics.stackexchange.com/questions/303804", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/140807/" ] }
303,815
What is the reason that the windows of ships' bridges are always inclined as shown in the above picture?
It's about reflections seen from inside. No one cares where reflections go outside. Here is a window that isn't quite vertical. It's tilted top-in, rather than top-out. Notice that the girl's eyes are exactly at the horizon, yet the eyes of her reflection are above the horizon. The tilt has moved her reflection up. Tilt the window the other way, like on a ship's bridge or an air traffic control tower, and her reflection moves down. Tilt far enough and she can look at the horizon without seeing herself. Instead, at the horizon, she will see a reflection of the dark ceiling. Dark enough, and she'll see no reflection at all. Just a clear view of the horizon. We tend to ignore our own reflection because we can focus our eyes on the distant horizon, which blurs the reflection, but that doesn't mean the reflection isn't still causing visual noise, making it hard to see. As if that weren't reason enough: pick up a pair of binoculars with curved lenses that glint in the sun from almost any angle (ask a sniper) and look out of a vertical window while the sun shines in, and you'll see why they insist on tilting the windows. The tilt also keeps the lights inside from showing up in the reflection, as long as you don't install lights in the ceiling. You'll notice that this ship's bridge has a nice dark ceiling with no lights. That's no accident. The regulation, "To help avoid reflections, the bridge front windows shall be inclined from the vertical plane top out, at an angle of not less than 10° and not more than 25°", says nothing about why doing this avoids reflections. It just says that it does. The fairly wide 10° to 25° allowance can be explained as simply dictating how close you can stand to the window and not see your forehead reflecting at the horizon. When spotting a craft miles away, stepping a few feet back isn't going to matter.
{ "source": [ "https://physics.stackexchange.com/questions/303815", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/110669/" ] }
303,977
If I have an ice cube of, say, (15$\times$15$\times$15 cm), and it is put inside a container of internal dimensions (15$\times$15$\times$15 cm), and the container is strong enough not to allow compression, will the cube melt when put at room temperature? Or will it only melt partially? What will happen exactly?
Sure it can melt. Ice is less dense than liquid water. The difference in volume (the molten water will not fill the cube completely) is filled with water vapor. In case you are wondering why water vapor is created: You might have seen the experiment where water starts to boil if you put it in vacuum. If there is no vacuum pump to remove the vapor, the boiling will stop once the vapor has sufficiently increased the pressure. (The equilibrium pressure is called the vapor pressure , and depends on temperature.) The same thing happens in your example: if the melting ice were to create a vacuum, the water would boil until the vapor had filled it.
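As a quick sanity check of the vapour gap described above, here is a short calculation using assumed textbook densities (ice about 0.917 g/cm³, liquid water about 1.000 g/cm³):

```python
# Volume change when a 15x15x15 cm ice cube melts in a rigid, sealed container.
# Densities are assumed textbook values: ice ~0.917 g/cm^3, water ~1.000 g/cm^3.
side_cm = 15.0
v_ice = side_cm ** 3                # 3375 cm^3 of ice
rho_ice, rho_water = 0.917, 1.000   # g/cm^3

mass_g = v_ice * rho_ice            # mass is conserved on melting
v_water = mass_g / rho_water        # volume of the melt

gap = v_ice - v_water               # volume left over, filled by water vapor
print(f"melt volume: {v_water:.0f} cm^3, vapor gap: {gap:.0f} cm^3 "
      f"({100 * gap / v_ice:.1f}% of the container)")
```

So roughly 8% of the container ends up occupied by vapour rather than liquid, which is the space in which the vapour pressure establishes itself.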
{ "source": [ "https://physics.stackexchange.com/questions/303977", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/141545/" ] }
304,212
I was watching this video by Veritasium (note: I don't have much physics knowledge). As I understand it, at LIGO they detect the gravitational waves that were generated by the collision of two black holes. How can they still measure these waves if the energy that they measure was only released in the last tenth of a second of the merger (as I understand from the video)? As far as I understand, that would mean that there is only one peak that they can measure, in that tenth of a second, but their experiment seems to have been going on for many years and they have made many measurements. How is this possible if the final collision was so short? What do they really measure then? Edit Basically my question comes down to: was that a "once in a lifetime chance" of measuring the waves? Have they been sitting there waiting for the exact moment and then done a measurement? Isn't it something they can measure every day?
This is the data recorded from the first black hole merger: The figure is from this paper by the LIGO collaboration . A PDF of the paper is available here . The detectable signal lasted around 0.1 of a second, but the black holes were orbiting each other so fast that they completed about ten orbits during that time. Basically, each oscillation in the data is one orbit. The data immediately gives the rate of decay of the orbit as the black holes merge and the amplitude with which the gravitational waves are emitted, plus lots of other information hidden away in the detail. This is easily enough to confirm that this was a black hole merger and to measure the masses of the black holes involved. Each pair of black holes only merges once, so this was the first and last signal detected from that particular pair of black holes. However, the universe is a big place and there are lots of black hole binaries in it, so we expect black hole mergers to take place regularly. LIGO has already detected three mergers: the first (shown above) on 14th September 2015, then a second possible detection (at low confidence) in October 2015, and then a third firm detection on 26th December 2015. LIGO took a pause to upgrade its sensitivity, but is now working again. As a rough estimate we expect it to detect around one merger a month, that is, roughly once a month a black hole binary will merge somewhere in the region of the universe that lies within LIGO's detection limits. We don't know in advance where and when a merger will occur, so it's just a matter of waiting until one happens near enough to be detected.
{ "source": [ "https://physics.stackexchange.com/questions/304212", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/121763/" ] }
304,245
I am confused. Some sources say it is possible, at least theoretically ( http://www.wiskit.com/marilyn/battleship.jpeg ), and some say it is not true ( http://blog.knowinghumans.net/2012/09/a-battleship-would-not-float-in-bathtub.html ). Is it necessary or not that there exists an amount of water around the ship that weighs at least the same as the weight of the ship?
Yes, a ship can float in a big bath tub with very little water. No, you do not need as much water as the weight of the ship. In theory, you can use less than a cupful! Image from B5-2. How much water is needed to float a wood block? Explanation Suppose the ship is floating in the ocean, far from the sea bed or any shore. The layer of water in contact with the ship (let's say a layer of thickness 1mm all round the submerged surface) is providing enough force ("upthrust") to support the whole weight of the ship. This layer of water is in equilibrium vertically: it is pulled down by gravity (its own weight), pushed down by the weight of the ship above it, and pushed up by the water below it. It is in equilibrium horizontally: it is pushed outwards by the ship, and pushed inwards by the surrounding water. Suppose the surrounding water beyond the contact layer is replaced by a thick concrete wall like a dam, many meters thick. The concrete wall is pushing both horizontally and vertically on the contact layer of water next to the ship. It makes no difference to the contact layer whether it is being pushed by other water or by the concrete wall. In both cases the contact layer remains in equilibrium and does not move. It does not, for example, get squeezed upwards and outwards between the concrete wall and the ship. That process of adjustment has already taken place, when the ship was launched or loaded. The pressure in the contact layer of water varies only with its depth, not with its thickness. If there is more or less water between the ship and the wall, the pressure at that depth does not change. The concrete wall or dam can be replaced by an enormous bath tub, provided that it is strong enough to exert the same force which the wall exerted. Comment on Explanations by "Marilyn" and Brian Holtz My explanation is essentially the same as Marilyn's ( wiskit.com ) - without the benefit of her excellent diagrams.
The only difference is that Marilyn starts with water in the dock and replaces most of it with the battleship. I start with the ship floating in the ocean and replace most of the ocean water with dry dock. Brian Holtz ( Knowing Humans blog) is incorrect. His reasoning (accessed 12 January 2017) is not very clear to me, so I apologise to him if I have misunderstood it. His arguments are as follows: 1. The bathtub must initially contain sufficient water to be displaced by the ship when it floats, e.g. if the battleship (USS Missouri, for example) weighs 45,000 tonnes then the bathtub must initially contain at least 45,000 tonnes of water. The volume of the bath tub does need to be at least equal to the volume of 45,000 tonnes of water. However, it is not necessary for water to be actually displaced from the bathtub and to overflow from it. A small amount of water displaced upwards is just as good as an ocean displaced sideways. If the battleship is lowered gradually into the bathtub and fits snugly into it, the cupful of water at the bottom will be squeezed up into the gap, increasing the depth of water. With only a cupful of water this will happen very quickly when the ship is almost in place. As this water moves upwards the upthrust which it provides increases. Eventually the upthrust is sufficient to support the whole weight of the ship. As Deep says in his answer, and Jim in his comment, the volume of water displaced in Archimedes' Principle refers to the volume of the ship which is below the final waterline, not below the initial waterline. We cannot, of course, float a battleship in a bathtub which is only 1m deep, however wide it is. The draught of the USS Missouri is 8.8m, so our bathtub must be at least this deep. It must also be at least as wide and long as the ship at this height above the keel. 2. The battleship could not be floated by pouring an arbitrarily small amount of water into the gap between the ship and the bathtub.
"You can't do the enormous work of lifting a massive ship merely by balancing it against a small mass of water." Not correct. Hardly any work needs to be done to float the ship. Floating it is just a question of redistributing the load from direct contact with the bathtub to indirect contact via the layer of water. The ship does not need to be lifted up any further than say 1 micron - just enough to ensure that it is not pressing directly on the bath tub at any point. To create the narrow gap the ship could rest with its whole weight on the bottom of the bathtub (both the ship and the tub must be enormously strong to do this) while narrow supports, perhaps 1 mm thick, at the sides prevent it from making contact with the sides of the bathtub. Water could easily be poured into this gap under its own gravity - there is nothing to prevent it falling down, until it reaches the level of the water already poured in. As the water gets deeper it gradually exerts more pressure on the ship, so there is less pressure on the keel of the ship (directly from the bathtub) and more on the rest of the underneath surface (from the water, which is in turn pushing on the rest of the bathtub). When the water is deep enough (at least 8.8m for USS Missouri) the pressure from it will be enough to support the whole weight of the ship and it will no longer exert any contact force directly on the bathtub. To float the battleship any finite distance higher up in the bathtub (say 5mm higher) it is only necessary to add more water to the tub. However, the amount of water required to do this could be very large because raising the ship will increase enormously the volume of the gap which needs to be filled. A gap which is initially "snug" does not remain "snug" as the ship rises vertically. The work done will of course also be enormous: the weight of the battleship times the distance moved upwards. However, this work is done by gravity, acting on the extra water.
If the extra water is already in a reservoir above the current water level, gravity will carry it down into the tub. But if it is necessary to pump this extra water up to the current water level from below the level of the keel, the energy required to do so will be at least equal to the work done in raising the 45,000 tonne battleship by 5mm. So as Brian Holtz says, There is no free lunch. Lifting the battleship even by 5mm requires enormous energy. But this is not the same as getting it to float, which is just a question of transferring the weight from the keel to the contact layer of water. 3. In canal locks there is sufficient clearance all round (behind and in front as well as at the sides) to hold a volume of water equal to the weight of the ship above the original water level. Not necessarily true. There is no reason why a rectangular ship (e.g. a barge) cannot "dock" with clearance of say only 6" on either side and below. The water it displaced has been pushed aside and behind it. A dam can then be erected behind it, again with only 6" clearance. When the dam is strong enough, the water behind it can be pumped away, leaving the barge afloat in an isolated "dock" which contains much less than its own weight of water. The same thing happens in canal locks when the ship occupies almost the total volume of the lock. After the lower gate is closed, water is allowed to fall in from the lock above, raising the height of the ship. This requires enormous energy, but it is all done by gravity, courtesy of rainwater and reservoirs. 4. In the no-overflow scenario, there is enough room at the top of the bath tub to hold the mass of water which balances the floating object. It is not clear what this means. If it means that the volume of the empty bath tub must be big enough to hold a volume of water equal to the ship's weight, then this is not in dispute, e.g. see Marilyn's diagrams. The bath tub needs to be at least as deep as the draught of the battleship.
But the final amount of water in it can be very small. It is the weight of the missing ("displaced") water below the waterline which is crucial, not the weight of the water which remains. If it means that the weight of water in the gap must be at least as much as the weight of the ship, this is false, as argued in #1 and #3. 5. When large machines like telescopes "float" on a thin film of lubricating oil, the oil is kept pressurized in a sealed system, and the pool of oil is not open to the atmosphere. True, but this is a matter of convenience, not necessity. The telescope could float just as well on a much "deeper" film of oil which is open to the atmosphere. Sealing the vessel enables high pressure to be achieved fairly uniformly and with the minimum amount of oil. When open to the atmosphere, the balancing depth (not mass) of oil provides the required pressure.
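The claim that the contact layer can be far lighter than the ship is easy to make quantitative. The following is only a rough sketch: it idealizes the hull as a rectangular box (the dimensions are approximate USS Missouri figures from the answer above; the 1 cm clearance is an arbitrary assumption):

```python
# Rough estimate of the water needed for a 1 cm layer around a battleship hull,
# idealized as a rectangular box (assumed: 270 m long, 33 m beam, 8.8 m draught).
length, beam, draught = 270.0, 33.0, 8.8   # m, approximate USS Missouri figures
gap = 0.01                                  # m, assumed clearance all round

bottom_area = length * beam                 # underside of the hull
side_area = 2 * (length + beam) * draught   # the four submerged sides

water_volume = (bottom_area + side_area) * gap   # m^3
water_mass_t = water_volume * 1.0                # tonnes (1 m^3 of water ~ 1 t)
print(f"~{water_mass_t:.0f} tonnes of water, vs a 45,000 tonne ship")
```

Even with a generous 1 cm layer all round, the water weighs on the order of 150 tonnes, some 300 times less than the ship it supports.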
{ "source": [ "https://physics.stackexchange.com/questions/304245", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/141673/" ] }
304,370
So the kaon particle (K) and the sigma particle (Σ) are created very quickly through the strong interaction and decay slowly through the weak interaction. How is this so? Is this not some kind of discrepancy? What is the explanation for this? An internet search has led me to determine that it could be something to do with CP violation. Could it also be to do with the fact that they tend to decay into pions and protons? I don't want a fully complex mathematical explanation, just a short qualitative one would be great. It's from a past exam paper, and is just a short 2 mark question.
Kaons and sigmas contain strange quarks, so the "ground state" particles must decay by changing quark flavour. The strong and electromagnetic interactions cannot change flavour, but the weak interaction can, hence they can only decay weakly. The strong interaction can easily produce $s\overline{s}$ pairs, which can subsequently pair up with lighter quarks to produce strange hadrons (including kaons and sigmas) without the need for any flavour-changing processes.
{ "source": [ "https://physics.stackexchange.com/questions/304370", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/8082/" ] }
304,651
According to this answer , energy has some (minimal) mass associated with it. Therefore, when lots of energy hits the earth (such as solar radiation in a 24 hour period) shouldn't the earth gain some small additional mass? And if so, how much?
There's an answer to your question, but it's not all that meaningful. The sun strikes the Earth with $1.5\cdot10^{22}J$ of energy every day. Using $m=\frac{E}{c^2}$ we find this has a mass equivalent of about 166,897 kg. However, the Earth does not actually gain mass this way. The Earth is also radiating energy into space, continuously. If we assume the Earth's average temperature is stable, the amount of energy coming into the system equals the amount of energy leaving the system. As a result, the Earth is not gaining mass by this at all (or if any, it's a small amount attributable to global warming). We also gain about 40,000 kg of space dust every day, and lose about 95,000 kg of hydrogen from the atmosphere. You win some, you lose some.
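The arithmetic in the first step can be sketched in a couple of lines (the daily energy figure is the one quoted above):

```python
# Mass equivalent of one day of sunlight striking Earth, via m = E / c^2.
E = 1.5e22            # J, assumed daily solar energy hitting Earth (from above)
c = 299_792_458.0     # m/s, speed of light

m = E / c**2          # kg
print(f"mass equivalent: {m:.0f} kg")
```

The result is on the order of 170 tonnes per day, which is negligible against the Earth's mass of about 6e24 kg.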
{ "source": [ "https://physics.stackexchange.com/questions/304651", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/46728/" ] }
304,837
For a few days, I was thinking of this question. Let's assume we have a simple circuit that is 100 meters long. And let's say that we have bulbs A, B and C connected to the circuit's 30th, 60th and 90th meter respectively (from the + side). When we switch the system on, would all the bulbs light up at the same time? Or would A light up first and C last (or the opposite)?
I'm assuming that you're imagining a long, skinny, series circuit with three simple resistive lamps, like this: switch A B C __/ _____________^v^v^v_________________^v^v^v_________________^v^v^v________ | | = battery short | |_____________________________________________________________________________| (Sorry for the terrible ASCII diagram.) The story we tell children about electric currents --- that energy in electric circuits is carried by moving electric charges --- is somewhere between an oversimplification and a fiction. This is a transmission line problem. The bulbs illuminate in the order $A\to B\to C$, but reflections of the signal in the transmission line complicate the issue. The speed of a signal in a transmission line is governed by the inductance and capacitance $L,C$ between the conductors, which depend in turn on their geometry and the materials in their vicinity. For a transmission line made from coaxial cables or adjacent parallel wires, typical signal speeds are $c/2$, where $c=30\rm\,cm/ns=1\rm\,foot/nanosecond$ is the vacuum speed of light. So let's imagine that, instead of closing the switch at $x=0$ and leaving it closed, we close the switch for ten nanoseconds and open it again. (This is not hard to do with switching transistors, and not hard to measure using a good oscilloscope.) We've created a pulse on the transmission line which is about 1.5 meters long, or 5% of the distance between the switch and $A$. The pulse reaches $A$ about $200\rm\,ns$ after the switch is closed and illuminates $A$ for $10\rm\,ns$; it reaches $B$ about $400\rm\,ns$ after the switch is closed, and $C$ at $600\rm\,ns$. When the pulse reaches the short at the $100\rm\,m$ mark, about $670\rm\,ns$ after the switch was closed, you get a constraint that's missing from the rest of the transmission line: the potential difference between the two conductors at the short must be zero. 
The electromagnetic field conspires to obey this boundary condition by creating a leftward-moving pulse with the same sign and the opposite polarity: a reflection. Assuming your lamps are bidirectional (unlike, say, LEDs which conduct only in one direction) they'll light up again as the reflected pulse passes them: $C$ at $730\rm\,ns$, $B$ at $930\rm\,ns$, $A$ at $1130\rm\,ns$. You get an additional reflection from the open switch, where the current must be zero; I'll let you figure out the polarity of the second rightward-moving pulse, but the lamps will light again at $A, 1530\,\mathrm{ns}; B, 1730\,\mathrm{ns}; C, 1930\,\mathrm{ns}$. (Unless you take care to change your cable geometry at the lamps, you'll also get reflections from the impedance changes every time a pulse passes through $A$, $B$, or $C$; those reflections will interfere with each other in a complicated way.) How do we extend this analysis to your question, where we close the switch and leave it closed? By extending the duration of the pulse. If the pulse is more than $1330\rm\,ns$ long, reflections approaching the switch see a constant-voltage boundary condition rather than a zero-current condition; adapting the current output to maintain constant voltage is how the battery eventually fills the circuit with steady-state direct current. Note that if your circuit is not long and skinny but has some other geometry, then transmission-line approximation of constant $L,C$ per unit length doesn't hold and one of your other answers may occur.
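The timeline above can be reproduced with a short calculation, assuming the c/2 signal speed quoted earlier and the lamp positions from the question:

```python
# Pulse arrival times on a 100 m transmission line, assuming a signal speed of c/2.
v = 0.15e9                      # m/s, c/2 (~15 cm per nanosecond)
ns = 1e-9                       # one nanosecond, for unit conversion
lamps = {"A": 30.0, "B": 60.0, "C": 90.0}   # positions in metres from the switch
line_length = 100.0             # m, position of the short at the far end

# Outgoing pulse: straight-line travel time to each lamp
for name, x in lamps.items():
    print(f"{name} first lights at {x / v / ns:.0f} ns")

# Reflection from the short: path is down to the end and back to the lamp
for name, x in sorted(lamps.items(), reverse=True):
    path = line_length + (line_length - x)
    print(f"{name} lights again at {path / v / ns:.0f} ns")
```

This reproduces the ~200/400/600 ns first flashes and the ~730/930/1130 ns reflected flashes described above.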
{ "source": [ "https://physics.stackexchange.com/questions/304837", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/141926/" ] }
304,992
I guess it has something to do with there being both high horizontal and vertical velocity components during re-entry. But then, wouldn't that mean there is a better re-entry maneuver than the one in use?
Re-entry velocity from LEO is $~7,800 \frac m s$ , from lunar space it is as high as $~11,000 \frac m s$ [1]. Different books give the terminal velocity of a skydiver as about $56 \frac m s$ or $75 \frac m s$ [2, 3]. The exact value isn't material, but the fact that it is two powers of ten smaller than re-entry velocity is. The difference between skydiving and re-entry is that in order to orbit, you need to go very fast sideways. You essentially fall so fast sideways that you miss the ground when falling towards it ( see related xkcd ). A skydiver, whether they dive from a plane or balloon, has only marginal horizontal velocity and almost zero vertical velocity to start with. The skydiver then accelerates to terminal velocity, which is quite slow compared to re-entry velocity (see above). In comparison, the Apollo capsule had a terminal velocity of $150 \frac m s$ at $7,300 \ m$ altitude. It is from that moment on that Apollo behaves like a skydiver. Drogue chutes are pulled that slow down the craft to $80 \frac m s$ , and then finally the main chutes that slow down the craft to $8.5\frac m s$ [4]. But that is only the very last phase of the flight. You need to somehow slow down from $7,800 \frac m s$ to $150 \frac m s$ first and descend from space deep into the atmosphere. Creating chutes that can both withstand that and are big enough to slow the craft down enough that high up in the atmosphere is simply not feasible from an engineering point of view, and even if it were, it would probably be prohibitive from a weight/delta-v point of view. The Falcon 9 first stage does not have problems with re-entry heating, although it also reaches space. But that stage does not achieve orbital velocity. It only goes about $2,000\frac m s$ at separation, which is slow enough that heating is not an issue when it comes down (see this question on Space StackExchange ). 1: Atmospheric Entry. Wikipedia, the free encyclopedia. 2: Tipler, Paul A. College Physics.
New York: Worth, 1987: 105. 3: Bueche, Fredrick. Principles of Physics. New York: McGraw Hill, 1977: 64. 4: W. David Woods. How Apollo Flew to the Moon. Springer, 2008: 371.
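To see how different the two regimes really are, compare the kinetic energy per kilogram that must be dissipated in each case (speeds taken from the figures above):

```python
# Specific kinetic energy (J per kg) at re-entry speed vs skydiving terminal velocity.
v_leo = 7800.0      # m/s, re-entry velocity from low Earth orbit
v_skydiver = 56.0   # m/s, a typical skydiver terminal velocity

e_leo = 0.5 * v_leo**2        # J/kg to shed during re-entry
e_sky = 0.5 * v_skydiver**2   # J/kg a parachute must handle
print(f"LEO re-entry: {e_leo/1e6:.1f} MJ/kg, skydiver: {e_sky:.0f} J/kg, "
      f"ratio ~{e_leo / e_sky:.0f}x")
```

Because kinetic energy scales with the square of speed, the factor of ~100 in velocity becomes a factor of ~20,000 in energy per kilogram, which is why re-entry is a heat-shield problem rather than a parachute problem.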
{ "source": [ "https://physics.stackexchange.com/questions/304992", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/132055/" ] }
304,993
A 25,000 gallon (95,000 litre) bulk chemical storage railcar can store products with vapor pressures in excess of 200 psia (1.38 MPa). The same railcar cannot withstand a vacuum when being unloaded. I want to understand why. A bulk chemical storage car in this example (assume refrigerant on a warm day) is unique in its design in the sense that it is a pressure vessel that contains well over 200 psig (1.38 MPa) internal pressure, whereas many metal tanks are rated for far lower pressures, so the pressure differential between atmosphere and the inside of the metal tank may typically be close to the difference found in this example, 14.7 psi (101 kPa) or less. The definition of a metal tank can be open to interpretation, not only with regard to pressure ratings but also wall thicknesses. One of the answers posed refers to plastic or aluminum soda containers. The material properties are far different from those typically found in the types of rail cars I have presented here.
A pressure vessel consists mainly of a thin metal plate. If you experiment with a thin sheet of material (for example a piece of paper) you will find that it is very easy to bend it, but much harder to stretch it. If there is internal pressure in a spherical or cylindrical vessel, the only way the pressure can do mechanical work (i.e. force $\times$ distance) is to increase the internal volume of the vessel, and the only way to do that is to stretch the material, which is difficult. However, external pressure can do work by reducing the internal volume, and that is easy to do without stretching the vessel wall, by breaking the original cylindrical or spherical symmetry of the vessel and "crumpling" the wall. Since the structure will never be a perfectly uniform shape, there will always be a "weak point" somewhere from which the crumpling can start. This page http://publish.ucc.ie/boolean/2010/00/dePaor/11/en has some nice pictures showing what happens. The basic principle works exactly the same way with simpler geometry, for example Euler buckling of a column. If you apply a tension load to a column, the only possibility is to stretch the material and make the column longer. But if you apply a compressive load, you can make the column shorter without changing the length of the material, when it buckles into a curved shape.
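The asymmetry can be put in rough numbers. This is only an illustrative sketch: the tank radius, wall thickness, and steel properties below are assumed, the internal limit uses the thin-wall hoop-stress formula, and the external limit uses the classical elastic-buckling formula for a long thin cylinder:

```python
# Why a tank that holds high internal pressure can fail under a modest vacuum.
# Illustrative numbers only (assumed: mild-steel tank, R = 1.5 m, wall t = 12 mm).
E = 200e9        # Pa, Young's modulus of steel
nu = 0.3         # Poisson's ratio
sigma_y = 250e6  # Pa, assumed yield strength
R, t = 1.5, 0.012   # m, tank radius and wall thickness

# Internal pressure limit: hoop stress sigma = p*R/t reaches yield
p_internal = sigma_y * t / R

# External pressure limit: elastic buckling of a long thin cylinder,
# p_cr = E*t^3 / (4*(1 - nu^2)*R^3)
p_buckle = E * t**3 / (4 * (1 - nu**2) * R**3)

print(f"internal (yield) limit: {p_internal/1e6:.2f} MPa, "
      f"external (buckling) limit: {p_buckle/1e3:.0f} kPa")
```

With these assumed numbers the tank resists about 2 MPa (~290 psi) of internal pressure but buckles under only ~28 kPa of external overpressure, roughly a quarter of an atmosphere, because buckling bends the wall instead of stretching it.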
{ "source": [ "https://physics.stackexchange.com/questions/304993", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/133229/" ] }
305,563
One of the main reasons why we haven't switched to clean energy is the lack of efficient storage methods - But, why aren't we using dead weights to store energy and draw it back later when needed? As an example of what I mean:
You can use dead weights, but you need a huge amount of weight. For example the biggest pumped hydroelectric system in the world (the Gianelli Hydroelectric Plant in California, USA) uses water stored in a reservoir about 9 miles long by 5 miles wide, lifted through a height of about 300 feet. Even then, it can only supply about 5% of California's electricity usage for less than 2 weeks before running dry - and given the current long term droughts in California, it can't even do that, because there would be no water available to refill it. Trying to build devices like this for individual homes would be hopelessly uneconomical. One way to get "free" energy to pump the water is to use tidal barrages, but even in the most suitable locations, the amount of power you get from a given area of water behind the barrage is only the same order of magnitude as covering that entire area with solar panels. The biggest operating tidal barrage in Europe (which has been running for about 20 years) only supplies about 0.1% of France's total electricity consumption.
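To see why the reservoir has to be so huge, consider the energy stored per tonne of water lifted through the 300-foot head mentioned above:

```python
# Gravitational energy stored per tonne of water lifted through a 300 ft head.
g = 9.81           # m/s^2
h = 300 * 0.3048   # m, the ~300 ft lift quoted in the answer

e_per_kg = g * h                            # J per kg of water
e_per_tonne_kwh = e_per_kg * 1000 / 3.6e6   # kWh per tonne of water
print(f"{e_per_kg:.0f} J/kg = {e_per_tonne_kwh:.2f} kWh per tonne of water")
```

At roughly a quarter of a kWh per tonne, a household drawing a few tens of kWh per day would need on the order of a hundred tonnes of elevated water, which is why dead-weight storage only makes sense at grid scale.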
{ "source": [ "https://physics.stackexchange.com/questions/305563", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/142273/" ] }
305,573
So at school we have a semicircle made of stone, and if you stand in the middle and face the semicircle and speak, then you can hear every word echoed perfectly. There's also a second (big) semicircular arc made of stone such that if one person stands at each end and whispers into the stone, the other person can easily hear it. Is a semicircle really the optimal shape for echoing, though (in either the first case or the second)? I know that parabolas have nice reflective properties, but I'm not really sure what shape would be best.
{ "source": [ "https://physics.stackexchange.com/questions/305573", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/60080/" ] }
305,627
A moving magnet induces a current in a conductor, so shouldn't we be able to generate electricity through manual labour? I was thinking about building a gym that used magnets as weights. People would lift the magnets up and down, creating a change in flux and generating current. For example, the exercise bikes and the rowing machines would definitely be able to produce a current due to their rotating discs. Also, machines like a squat stand can be turned into a generator because the weight can be turned into a magnet. The key idea is that any machine that can move can be turned into a generator to produce electricity for homes. There should be many gyms spread out along the city like mini power stations. The electricity generated doesn't have to be used straight away but can also be stored in a battery for later use. I am wondering if there will be enough electricity generated to supply homes (if not all homes then a street or two).
The maximum continuous power that can be generated for an hour by a fairly fit person on an efficient machine like an exercise bike or rowing machine is $\sim 200$ W (Olympic-standard track cyclists might manage 400 W). Let's say that a gym is occupied at any time by 10 people who are doing this kind of intense exercise. Then you might just be producing enough electricity to boil a kettle (kettles are 2-3 kW) and keep the lights on in the gym. Unfortunately, there are another 10 people in the showers who have just consumed more electricity than they generated (a typical electric shower consumes 8 kW, so a 2 minute shower needs 960 kJ = 200 W $\times$ 80 minutes). Does that answer the question? However, it might be an interesting gimmick to allow people to charge up their phones or other personal devices using the electricity that they personally generate. That would probably be feasible with the right adaptors and transformers.
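The energy budget above can be tallied in a few lines (all figures are the ones quoted in the answer):

```python
# Net power budget for a hypothetical gym: generation vs one mundane load.
n_cyclists = 10
p_person = 200.0        # W, sustained output per fit person

generated = n_cyclists * p_person   # W, total gym output (~2 kW)
kettle = 2500.0                     # W, a typical kettle

# One 2-minute electric shower (8 kW) vs one hour of one person's output
shower_energy = 8000.0 * 120        # J
hour_of_cycling = p_person * 3600   # J

print(f"generated: {generated:.0f} W (one kettle is {kettle:.0f} W)")
print(f"one shower uses {shower_energy/1e3:.0f} kJ; "
      f"an hour of hard cycling yields {hour_of_cycling/1e3:.0f} kJ")
```

A single two-minute shower consumes more energy than a full hour of hard pedalling produces, which is the whole problem in miniature.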
{ "source": [ "https://physics.stackexchange.com/questions/305627", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/130713/" ] }
305,942
See the image first: Why are light rays able to cross each other? Air isn't able to.
Note: this answer was in response to the original question: My question is that Why the light rays able to cross each other weather water waves and air could not cross each other Other waves pass through each other just as with light. This is easy to test. Place four people at the corners of a large room. Have two of them, at adjacent corners, talk to the person at the diagonally opposite corner. Use a cone such as a cheerleader might use to somewhat channel the sound. You may be a bit distracted by the other voice but you will clearly hear the voice from the opposite corner. Here's a standard demo in a high school science class. Have two students hold each end of a moderately stretched slinky resting on a smooth floor. Have each student give the slinky a sharp snap to their right. Since the students are facing each other, the pulses will be opposite one another as they travel toward opposite ends. When the two pulses meet in the middle the slinky will appear relatively straight, but only for an instant. The two pulses will continue to travel past one another as if they had never met. Waves of the same kind traveling through one another maintain their original identity after the encounter. This is a basic property of waves; you can read about it in any introductory Physics text.
{ "source": [ "https://physics.stackexchange.com/questions/305942", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/139543/" ] }
306,013
This press release by NIST , titled "NIST Physicists ‘Squeeze’ Light to Cool Microscopic Drum Below Quantum Limit", makes the following claim: The new technique theoretically could be used to cool objects to absolute zero, the temperature at which matter is devoid of nearly all energy and motion, NIST scientists said. I'm not sure that's exactly what the NIST scientists said, nor what they meant. I'm very suspicious of anyone who claims to be even theoretically capable of reducing a mechanical system to absolute zero. The full publication itself is at Sideband cooling beyond the quantum backaction limit with squeezed light. J.B. Clark et al. Nature 541 , 191 (2017) , arXiv:1606.08795 . Can someone with access to the actual article in Nature clarify whether the news article at NIST was accurately reporting on the content of the Nature article regarding achieving absolute zero? The abstract instead references cooling "arbitrarily close to the motional ground state". I'm not looking for a debate about whether or not reaching absolute zero is possible, nor a discussion about the discrepancies in definitions of absolute zero. I just want someone to clear up the probable discrepancy between what NIST wrote and what Nature published.
Let's go through the article's abstract (emphasis added by me): Quantum fluctuations of the electromagnetic vacuum produce measurable physical effects such as Casimir forces and the Lamb shift. They also impose an observable limit—known as the quantum backaction limit—on the lowest temperatures that can be reached using conventional laser cooling techniques. As laser cooling experiments continue to bring massive mechanical systems to unprecedentedly low temperatures, this seemingly fundamental limit is increasingly important in the laboratory. Right, conventional laser cooling cannot get a system below a certain minimum temperature. This is essentially because lasers (or microwave sources, or whatever coherent field generator you care about) output a so-called coherent state, which has a finite width in both of its quadratures. This limit is called, later in the abstract, the "quantum backaction limit", as we'll see in just a moment. Fortunately, vacuum fluctuations are not immutable and can be ‘squeezed’, reducing amplitude fluctuations at the expense of phase fluctuations. Right, coherent states aren't the only possible states of the electromagnetic field (or any other harmonic oscillator)! It is possible to generate so-called "squeezed states" where one of the quadratures is narrower than the other. These squeezed states do not violate the Heisenberg uncertainty relation: you get squeezing in one direction at the expense of broadening in the other. This is directly related to what the authors refer to as reducing amplitude fluctuations at the expense of phase fluctuations. I'm not getting into the details on this because it's out of bounds for what OP is asking. Here we propose and experimentally demonstrate that squeezed light can be used to cool the motion of a macroscopic mechanical object below the quantum backaction limit. Ok fine. They get beyond the "quantum backaction limit" because they're not using a normal coherent state. They use a squeezed state. 
We first cool a microwave cavity optomechanical system using a coherent state of light to within 15 per cent of this limit. We then cool the system to more than two decibels below the quantum backaction limit using a squeezed microwave field generated by a Josephson parametric amplifier. Yep, as we just said, using a squeezed state lets you get past the limit you have with normal coherent states. From heterodyne spectroscopy of the mechanical sidebands, we measure a minimum thermal occupancy of 0.19 ± 0.01 phonons. With our technique, even low-frequency mechanical oscillators can in principle be cooled arbitrarily close to the motional ground state, enabling the exploration of quantum physics in larger, more massive systems. Ok so there we clearly see that they did not get to absolute zero. They still had about 20% of a phonon (one quantum unit of vibrational excitation) in their oscillator, whereas absolute zero would be zero phonons. They say that in principle you can use squeezed states to get to arbitrarily low temperatures (i.e. arbitrarily low phonon numbers). That may technically be true, but to get arbitrarily low temperature you need arbitrarily much squeezing, which is very, very hard to actually do in the lab. They're making an "in-principle" statement where the conditions for the in-principle thing to be achieved are totally unrealistic, and what's more, it's not even known whether the theory accurately describes the physical system in the parameter range you would need to get, say, $10^{-10^6}$ phonons (see comments under Emilio's answer for more on this). Statements like that are still useful, because they tell the reader that there's no known hard limit to how far you can go with the squeezed state technique, i.e. the limits are entirely practical. This is important because other protocols could have in-principle limitations. 
For example, I might have some protocol which cools an oscillator but cannot possibly get below 0.3 phonons because of something baked into the physics. In that case, you know that if you need a phonon number lower than 0.3, don't even consider using my protocol.
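For the curious, the quadrature trade-off is easy to sketch numerically. This is a toy calculation, not the paper's model; it just uses the textbook convention that each vacuum quadrature has variance $1/4$:

```python
# Toy sketch of the quadrature-variance trade-off for a squeezed vacuum,
# in the convention where each vacuum quadrature has variance 1/4.
import math

def variances(r):
    """Quadrature variances of a squeezed vacuum with squeeze parameter r."""
    return math.exp(-2 * r) / 4, math.exp(2 * r) / 4

vx, vp = variances(0.5)

# The uncertainty product is unchanged: squeezing one quadrature
# broadens the other, so the Heisenberg bound stays saturated.
assert abs(math.sqrt(vx) * math.sqrt(vp) - 0.25) < 1e-12

# Squeezing is usually quoted in decibels relative to the vacuum level.
db = 10 * math.log10((1 / 4) / vx)
print(round(db, 2))   # ~4.34 dB of squeezing for r = 0.5
```

The point of the sketch is only that "2 dB below" statements like the one in the paper translate directly into a squeeze parameter, and that no amount of squeezing violates the uncertainty relation.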
{ "source": [ "https://physics.stackexchange.com/questions/306013", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/142386/" ] }
306,305
Is it an inherent part of defining something as a wave? Say I had something that was modeled as a wave. When this thing encounters something else, will it obey the principle of superposition? Will they pass through each other?
If a wave $f(x,t)$ is something that satisfies the wave equation $Lf=0$ where $L$ is the differential operator $\partial_t^2-c^2\nabla^2$ then, because $L$ is linear, any linear combination $\lambda f+\mu g$ of solutions $f$ and $g$ is again a solution: $L(\lambda f + \mu g)=\lambda Lf+\mu Lg=0$. In general, there might be things that propagate (not exactly waves, but since the question is for waves of any kind) determined by other differential equations. If the equation is of the form $Lf=0$ with $L$ a linear operator, the same argument applies and the superposition principle holds.
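If you want to check the linearity argument symbolically, here's a small sketch (sympy assumed available; the particular solutions $\sin$ and $\cos$ are just examples of right- and left-moving waves):

```python
# Symbolic check of the linearity argument above:
# if L f = 0 and L g = 0, then L(a f + b g) = 0.
import sympy as sp

x, t, c, a, b = sp.symbols('x t c a b')

def L(u):
    """The wave operator  d^2/dt^2 - c^2 d^2/dx^2  applied to u."""
    return sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)

f = sp.sin(x - c * t)   # a right-moving solution
g = sp.cos(x + c * t)   # a left-moving solution

assert sp.simplify(L(f)) == 0
assert sp.simplify(L(g)) == 0
# Any linear combination is again a solution: superposition holds.
assert sp.simplify(L(a * f + b * g)) == 0
```

Nothing here depends on the specific choice of $f$ and $g$; the same check passes for any pair of solutions, which is exactly the content of the linearity argument.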
{ "source": [ "https://physics.stackexchange.com/questions/306305", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/95956/" ] }
306,684
Today I saw a circle of light outside my plane window on the clouds, as if someone was shining a bright, tightly focused flashlight, or perhaps like the halo that sometimes appears around the sun. I think it was approximately where I would expect our shadow to be (at least the sun was shining on the opposite side of the plane). I took a video with my phone . I move the camera around a bit to show that it doesn't seem to be an artifact of the window.
What you're seeing is called a glory , an optical phenomenon related to rainbows, caused by reflections and refraction inside the water drops in the cloud (plus some additional physics, which are not fully agreed on; cf. this question ). If the cloud had been closer you would have been able to see the shadow of the plane embedded in the glory: Image source For more information see the page on glories at Atmospheric Optics , particularly regarding their formation . (Also, just go to Atmospheric Optics and have a browse! they've got all sorts of gorgeous stuff there.)
{ "source": [ "https://physics.stackexchange.com/questions/306684", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3751/" ] }
306,975
Atoms individually have no colors, but when there is a large collection of atoms we see colorful objects, which leads to a question: at minimum, how many atoms are required for us to see color?
There are a couple of issues here. A pink (#FF00FF) object appears pink not because each atom is pink (there is no wavelength of light that is perceived to be the same pink by the ordinary human eye). What is happening is that a pink object is emitting (or reflecting) light of multiple wavelengths that enter the eye and are detected and processed to allow us to perceive its colour as pink. One single atom, therefore, would not be able to appear to us as pink under ordinary conditions, because it will not emit photons of the appropriate wavelengths rapidly enough for us to perceive a steady pink. Even for colours that correspond to a single wavelength of light, we would need a significant number of atoms before they emit enough photons to form a stable statistical distribution of wavelengths (called an emission spectrum), which we can then perceive and compare to the colours that we have previously experienced. How many atoms are needed would of course depend on the rate of emission, which is proportional to the power output. For reflection it would depend largely on the intensity of light incident on the object. And of course, molecules, complexes and macromolecular structures can have very different spectra compared to their individual constituent atoms, because the energy levels for electrons change drastically when bonds are formed (or broken). For example aqueous $Fe^{3+}$ is yellow while aqueous $Fe^{2+}$ is green, while solid $Fe_2O_3$ is reddish-brown. Only about 10% of the light incident on the eye actually makes it through to the retina. Even those photons that strike the retina may not be detected. A human eye has receptors called cones and rods. Incidentally, a rod can actually respond to a single photon that strikes an active molecule in it, ultimately triggering an electrical pulse down the optic nerve. 
A cone is theoretically able to respond to a single photon as well, but for the below reason a single photon is never enough for us to see its 'colour'. Each cone absorbs incident photons of different frequencies with different probabilities. This is precisely how we can see many colours using only 3 types of cones, because light of different wavelengths can be distinguished by how much they are absorbed by each type of cone. ( https://en.wikipedia.org/wiki/File:1416_Color_Sensitivity.jpg ) But since a photon can only be absorbed by a single cone, it also implies that the retina plus brain needs many photons from the same source before it can get a statistical picture of absorption by the 3 types of cones, which it then interprets as a colour. This is the main reason we need thousands of photons from a point source before we can clearly distinguish its colour from that of other objects. The lower the intensity of light, the harder it is for us to distinguish colours. And note that we perceive the combination of pure red and pure green light (namely the combination of light of two different frequencies) the same way we perceive pure yellow light (of the appropriate single frequency), because they result in the same absorption profile for the three types of cones. Rods are much denser than cones, except in the fovea where there are nearly no rods, and hence one can see better around the central spot when in the dark. In the fovea, the 'Blue'-sensitive cones (S cones) are also rarer than the other two types at about 5%, whereas the 'Red'-sensitive cones (L cones) number about 50% to 75%. The net effect is that you need something like 100,000 photons from the same point incident on your eye before you can perceive its colour at the normal human accuracy , even more for blue light. And finally there is Rayleigh scattering in the Earth's atmosphere, which scatters 'violet' light (400nm wavelength) about $7$ times as strongly as red light (650nm wavelength).
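As a quick sanity check on the Rayleigh figure quoted at the end, this is just the textbook $1/\lambda^4$ law applied to the two wavelengths mentioned:

```python
# Check of the Rayleigh-scattering figure quoted above:
# scattered intensity goes as 1 / wavelength^4.
violet = 400e-9   # m, 'violet' light
red = 650e-9      # m, red light

ratio = (red / violet) ** 4
print(round(ratio, 2))   # ~6.97, i.e. roughly 7 times stronger for violet
```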
{ "source": [ "https://physics.stackexchange.com/questions/306975", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/96481/" ] }
307,113
I have looked in wikipedia: Hermitian matrix and Self-adjoint operator , but I still am confused about this. Is the equation: $$ \langle Ay | x \rangle = \langle y | A x \rangle \text{ for all } x \in \text{Domain of } A.$$ independent of basis?
The relation $$ \langle Ay | x \rangle = \langle y | A x \rangle \text{ for all } x \in \text{Domain of } A\tag1$$ makes no reference to any basis at all, so it is indeed basis-independent. In fact, this definition, which seems pretty strange when you first meet it, arises precisely out of a desire to make things basis-independent. The particular observation that sparks the definition is this: Let $V$ be a complex vector space with inner product $⟨·,·⟩$, and let $\beta=\{v_1,\ldots,v_n\}$ be an orthonormal basis for $V$ and $A:V\to V$ a linear operator with matrix representation $A_{ij}$ over $\beta.$ Then, if this matrix representation is hermitian, i.e. if $$A_{ji}^*=A_{ij}\tag2$$ when $A$ is represented on any single orthonormal basis, then $(2)$ holds for all such orthonormal bases. (Similarly, for a real vector space simply remove the complex conjugate.) Now this is a weird property: it makes an explicit mention of a basis, and yet it is basis independent. Surely there must be some invariant way to define this property without any reference to a basis at all? Well, yes: it's the original statement in $(1)$. To see how we build the invariant statement out of the matrix-based property, it's important to keep in mind what the matrix elements are: they are the coefficients over $\beta$ of the action of $A$ on that basis, i.e. they let us write $$ Av_j = \sum_i A_{ij}v_i.$$ Moreover, in an inner product space, the coefficients of a vector on any orthonormal basis are easily found to be the inner products of the vector with the basis: if $v=\sum_j c_j v_j$, then taking the inner product of $v$ with $v_i$ gives you $$\langle v_i,v\rangle = \sum_j c_j \langle v_i,v_j\rangle = \sum_j c_j \delta_{ij} = c_i,$$ which then means that you can always write $$v=\sum_i \langle v_i,v \rangle v_i.$$ (Note that if $V$ is a complex inner product space I'm taking $⟨·,·⟩$ to be linear in the second component and conjugate-linear in the first one.) 
If we then apply this to the action of $A$ on the basis, we arrive at $$ Av_j = \sum_i A_{ij}v_i = \sum_i \langle v_i, Av_j\rangle v_i, \quad\text{i.e.}\quad A_{ij} = \langle v_i, Av_j\rangle,$$ since the matrix coefficients are unique. We have, then, a direct relation between matrix elements and inner products, and this looks particularly striking when we use this language to rephrase our property $(2)$ above: the matrix for $A$ over $\beta$ is hermitian if and only if $$ A_{ji}^* = \langle v_j, Av_i\rangle^* = \langle v_i, Av_j\rangle = A_{ij}, $$ and if we use the conjugate symmetry $\langle u,v\rangle^* = \langle v,u\rangle$ of the inner product, this reduces to $$ \langle Av_i, v_j\rangle = \langle v_i, Av_j\rangle. \tag 3 $$ Now, here is where the magic happens: this expression is exactly the same as the invariant property $(1)$ that we wanted , only it is specialized for $x,y$ set to members of the given basis. This means, for one, that $(1)$ implies $(2)$, so that's one half of the equivalence done. In addition to this, there's a second bit of magic we need to use: the equation in $(3)$ is completely (bi)linear in both of the basis vectors involved, and this immediately means that it extends to any two vectors in the space. This is a bit of a heuristic statement, but it is easy to implement: if $x=\sum_j x_j v_j$ and $y=\sum_i y_i v_i$, then we have \begin{align} \langle A y, x\rangle & = \left\langle A \sum_i y_i v_i, \sum_j x_j v_j\right\rangle && \\ & = \sum_i \sum_j y_i^* x_j \langle A v_i, v_j\rangle &&\text{by linearity} \\ & = \sum_i \sum_j y_i^* x_j \langle v_i, Av_j\rangle &&\text{by }(3) \\ & = \left\langle \sum_i y_i v_i, A\sum_j x_j v_j\right\rangle &&\text{by linearity} \\ & = \langle y, A x\rangle,&& \end{align} and this shows that you can directly build the invariant statement $(1)$ out of its restricted-to-a-basis version, $(3)$, which is itself a direct rephrasing of the matrix hermiticity condition $(2)$. Pretty cool, right?
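If it helps, here's a small numerical illustration of both halves of the argument (numpy assumed; the matrix and the second basis are randomly generated, so nothing here is special to them):

```python
# Numerical illustration: the relation <Ay, x> = <y, Ax> holds for all
# vectors, and the matrix of A is Hermitian in *any* orthonormal basis.
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Build a Hermitian matrix (its representation in the standard basis).
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = M + M.conj().T

def inner(u, v):
    # np.vdot conjugates its first argument: conjugate-linear in the
    # first slot, linear in the second, matching the text's convention.
    return np.vdot(u, v)

# (1) The invariant relation, for arbitrary vectors x and y.
x = rng.normal(size=n) + 1j * rng.normal(size=n)
y = rng.normal(size=n) + 1j * rng.normal(size=n)
assert np.isclose(inner(A @ y, x), inner(y, A @ x))

# (2) Hermiticity of the matrix elements <v_i, A v_j> in a different
# orthonormal basis: the columns of a random unitary Q.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
A_new = Q.conj().T @ A @ Q   # (A_new)_ij = <v_i, A v_j> for v_i = Q[:, i]
assert np.allclose(A_new, A_new.conj().T)
```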
{ "source": [ "https://physics.stackexchange.com/questions/307113", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/134764/" ] }
307,129
I am a high school student in AP Physics C. We are currently on our gravitation unit, and one of my homework questions goes like this: Show that the escape speed, $v_e$, from a planet is related to the speed of a circular orbit just about the surface of the planet, $v_c$, according to the following law: $v_e = \sqrt 2 v_c$. I know that when moving from an orbital state to an unbounded state, no external work should be done on the object-planet system, so I should be able to use conservation of energy: $$\begin{align} E_1 &= E_2 \\ K_c + U_{Gc} &= K_e \end{align}$$ I decided to let the radius of the bounded orbit equal $r_c$, the mass of the object equal $m$, and the mass of the planet equal $M$. $$\frac{1}{2}mv_c^2 - \frac{GmM}{r_c} = \frac{1}{2}mv_e^2$$ As expected, the mass $m$ of the object itself is irrelevant: $$\frac{1}{2}v_c^2 - \frac{GM}{r_c} = \frac{1}{2}v_e^2$$ This is where I immediately get stuck, so to get $M$ and $r_c$ out of the equation, I add in a second equation that interprets the force of gravity while the object is in a bound circular orbit as a centripetal force: $$\begin{align} F_c = ma_c &= F_G \\ m\frac{v_c^2}{r_c} &= \frac{GmM}{r_c^2} \\ v_c^2 &= \frac{GM}{r_c} \end{align}$$ How wonderful! I think. I should be able to substitute this back in and solve for $v_e = \sqrt 2 v_c$. . . . $$\begin{align} \frac{1}{2}v_c^2 - v_c^2 &= \frac{1}{2}v_e^2 \\ -\frac{1}{2}v_c^2 &= \frac{1}{2}v_e^2 \\ -v_c^2 &= v_e^2 \\ \sqrt{-v_c^2} &= v_e \\ \sqrt{-1}v_c &= v_e \end{align}$$ Well, isn't that lovely? The classic non-real answer. My first suspicion is that I made an arithmetical error, but I can't find one. Now I'm thinking that my most probable error would involve the signs of the forces since I'm not using unit vectors to keep track of directions. (My formula chart reads “$a_c = v^2/r = \omega^2r$” and “$\left\vert \vec{F}_G \right\vert = Gm_1m_2/r^2$.”) Does anyone have any ideas of what error(s) I made or what step(s) I failed to produce? 
If you could, please provide a few lines of equation showing your thought process and the substitutions you make.
Your conservation-of-energy equation compares the wrong pair of states. Escape speed is defined as the speed an object must have at radius $r_c$ to just barely reach infinity, where both its kinetic and potential energy are zero. So the correct statement of energy conservation is between "moving at $v_e$ at radius $r_c$" and "at rest infinitely far away": $$\frac{1}{2}mv_e^2 - \frac{GmM}{r_c} = 0.$$ Your equation $K_c + U_{Gc} = K_e$ instead sets the total energy of the circular orbit equal to a kinetic energy $\frac{1}{2}mv_e^2$ with no accompanying potential term, which mixes quantities evaluated at two different states; that inconsistency is what produces the imaginary speed. 
From the correct equation, $$v_e^2 = \frac{2GM}{r_c},$$ and substituting your (correct) circular-orbit result $v_c^2 = GM/r_c$ gives $v_e^2 = 2v_c^2$, i.e. $v_e = \sqrt{2}\,v_c$.
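As a quick numerical check of the requested relation $v_e = \sqrt{2}\,v_c$, using standard textbook values for Earth (the exact constants don't matter, since the ratio is independent of $G$, $M$ and $r_c$):

```python
# Numerical check that v_e = sqrt(2) * v_c, using Earth values.
import math

G = 6.674e-11   # m^3 kg^-1 s^-2, gravitational constant
M = 5.972e24    # kg, mass of the Earth
r = 6.371e6     # m, radius of the Earth

v_c = math.sqrt(G * M / r)       # speed of a circular orbit at the surface
v_e = math.sqrt(2 * G * M / r)   # escape speed from the surface

print(v_c)         # ~7.9 km/s
print(v_e)         # ~11.2 km/s
print(v_e / v_c)   # sqrt(2), independent of G, M and r
```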
{ "source": [ "https://physics.stackexchange.com/questions/307129", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/127932/" ] }
307,541
Whenever I put a meal in the microwave which contains cheese, why does the cheese get hot before the rest of the meal is heated through?
It is because cheese has a nice combination of water and fat. The water is important since the microwave transfers energy to it by making the water molecules vibrate. On the other hand, oils, in general, have lower specific heat (compared to water). This means that given the same amount of heat, the temperature change is higher for fat than for water. You can see in this table that fatty foods normally have lower specific heat. Moreover, oils have higher boiling points so the cheese can reach a temperature above $100\ \mathrm{^\circ C}$. Edit Both vegetable and animal oils are made of nonpolar molecules . This means that oils cannot be effectively heated up by dielectric heating (microwave absorption). If we consider the limit case where oil does not absorb microwaves at all, then any combination of water and oil (mixture) outperforms pure oil at the rate of heating up under microwaves. The mixture, in this case, heats up because water is absorbing microwaves and is giving up heat to the oil by thermal conduction. On the other hand, to compare the performances of the mixture and pure water we need to take into account the specific heat of both substances. If the specific heat of the mixture is sufficiently smaller than the specific heat of the water, then the former will outperform the latter in heating up under microwaves. Can we heat up oil in a microwave? Oil molecules, in general, may have a non-zero dipole moment, but it is so small that oil's dielectric loss factor is about a hundredth of the water's. Recall that the dielectric loss factor roughly expresses the degree to which an externally applied electric field will be converted to heat. It is in general dependent on the frequency of the radiation; water absorbs strongly at $2.45\, \mathrm{GHz}$, the frequency of most microwave ovens. By a simple home experiment, one can easily check that conduction plays a major role. 
Try to get some containers that respond differently to microwaves, that is, test how the empty containers heat up. Then separate one that does not heat up and one that does heat up. Fill them with the same amount of oil and run them in the microwave oven for the same amount of time. The oil in the microwave-interacting container will be much warmer. The explanation is that the oil was mainly heated up by conduction. Note that in a homogeneous mixture of oil and water (like a cheese) this conduction is optimized.
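To put rough numbers on the specific-heat point (the values below are typical textbook figures, not properties of any particular cheese):

```python
# Illustration of the specific-heat point above: the same amount of
# heat raises the temperature of fat/oil roughly twice as much as water.
# Specific heats are typical textbook values in J / (kg K).
c_water = 4186
c_oil = 2000     # roughly, for vegetable oils and fats

heat = 10_000    # J, same energy delivered to each sample
mass = 0.1       # kg of each substance

dT_water = heat / (mass * c_water)
dT_oil = heat / (mass * c_oil)

print(round(dT_water, 1))  # ~23.9 K
print(round(dT_oil, 1))    # 50.0 K: same heat, about twice the rise
```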
{ "source": [ "https://physics.stackexchange.com/questions/307541", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/122325/" ] }
307,635
People said outside Earth is a vacuum. But the air does not get sucked from the Earth's surface. Some said it is due to gravity, and some said the speed of the air molecules is not high enough to escape. We know a vacuum will suck air, like your vacuum cleaner, and it has nothing to do with gravity. If outer space is really a vacuum, what prevents the air from escaping the Earth?
A vacuum doesn't suck air; rather, the surrounding air pushes air into any empty space. Air, like any other gas, will expand to fill the available volume. So you would expect the atmosphere to spread out to fill the rest of the universe - and without gravity holding it onto the Earth, it would. Edit: Yes, some air is continually lost. The molecules in the atmosphere are moving at a range of speeds, and some of the very fastest ones will be moving fast enough to have enough energy to overcome gravity and escape. This is especially true for the lightest elements, e.g. helium, which move fast and feel the effect of gravity least.
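To put numbers on the "fastest molecules escape" point, here's a rough comparison of typical thermal speeds with Earth's escape speed (room-temperature values, just for illustration):

```python
# Comparison of typical thermal speeds with Earth's escape speed,
# illustrating why heavy gases stay and light ones slowly leak away.
import math

k = 1.380649e-23    # J/K, Boltzmann constant
T = 300.0           # K, rough surface temperature
amu = 1.66054e-27   # kg, atomic mass unit
v_escape = 11.2e3   # m/s, Earth's escape speed

def v_rms(mass_amu):
    """Root-mean-square thermal speed for a molecule of the given mass."""
    return math.sqrt(3 * k * T / (mass_amu * amu))

print(round(v_rms(28)))   # N2: ~520 m/s, far below escape speed
print(round(v_rms(4)))    # He: ~1370 m/s; its fast tail can escape
```

Even helium's typical speed is well below $11.2$ km/s, which is why the leak is slow: only the rare molecules in the high-speed tail of the distribution get away.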
{ "source": [ "https://physics.stackexchange.com/questions/307635", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/143298/" ] }
307,654
In the framework of classical electrodynamics, at distances much greater than a conductor's dimension, the field ought to approach that of a point charge located at the conductor. But where, exactly? For a highly symmetric conductor, we ought to be able to deduce some information about the "point charge" location. Yet, consider an arbitrarily shaped conductive body. Is the point located at the centroid? Edit: I am not questioning why we approximate a source as a point charge at far distances. This question regards geometrical convergence in a physical scenario. If the field lines converge to those of a point charge, they emanate from a point. Are there known relations between the point and the body itself?
Based on some of the back-and-forth I see, I think you're asking the wrong question. I think the question you want to ask is "Given a charge distribution $\rho(\mathbf{r})$, where should I place a point source so that the exact potential $\phi(\mathbf{r}) = \int \rho(\mathbf{r}')/|\mathbf{r}-\mathbf{r}'| dv'$ is most closely approximated by the potential from the point source?" The answer is that you want to choose $\mathbf{r}_0$ such that $\int (\mathbf{r}'-\mathbf{r}_0) \rho(\mathbf{r}') dv' = 0$. If the charge distribution is uniform, then the answer is at the centroid. The reason this is the right point is it makes the dipole moment of the difference between exact and approximate solutions go to zero. So the error in the potential is $\mathcal{O}(1/r^3)$, whereas with any other choice the error would include the dipole term, and therefore be $\mathcal{O}(1/r^2)$. (Properly setting the magnitude of the point charge accounts for the monopole term of $\mathcal{O}(1/r)$.) Further clarification: The choice of $\mathbf{r}_0$ that satisfies the dipole constraint above is $\mathbf{r}_0 = \frac{\int \mathbf{r}' \rho(\mathbf{r}') dv'}{\int \rho(\mathbf{r}') dv'}$ and can be thought of as a "center-of-charge" similar to a center-of-mass. The multipole expansion of the potential $\phi(\mathbf{r})$ contains terms of increasing order in $1/r$. Monopole terms decay with $\mathcal{O}(1/r)$. Any charge distributions with the same total charge within a local region have the same monopole moment. That's why a point charge with the same total charge works as an approximation, and it doesn't matter where it is, as long as it's close to the same region. With this approximation, the error between the exact potential and the approximation will be $\mathcal{O}(1/r^2)$. If $r$ is big enough, then like everyone else says, it works fine and it doesn't matter where $\mathbf{r}_0$ is. 
However, if we want, we can be even more accurate with a judicious choice of the location of the point charge. Dipole terms decay with $\mathcal{O}(1/r^2)$. Since the point source clearly has no dipole moment, picking the point $\mathbf{r}_0$ so that the exact potential has no dipole moment about $\mathbf{r}_0$ removes $\mathcal{O}(1/r^2)$ dependence from the error. This leaves only $\mathcal{O}(1/r^3)$ and higher error terms.
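Here's a small numerical illustration of the claim, using a random cloud of positive charges (numpy assumed):

```python
# Numerical illustration: placing the point charge at the "center of
# charge" zeroes the dipole moment about it, and the monopole
# approximation centered there beats one centered anywhere else.
import numpy as np

rng = np.random.default_rng(1)
pos = rng.uniform(-1, 1, size=(50, 3))    # charge locations
q = rng.uniform(0.1, 1.0, size=50)        # all-positive charges

Q = q.sum()
r0 = (q[:, None] * pos).sum(axis=0) / Q   # center of charge

# The dipole moment about r0 vanishes by construction.
dipole = (q[:, None] * (pos - r0)).sum(axis=0)
assert np.allclose(dipole, 0)

def phi_exact(r):
    return np.sum(q / np.linalg.norm(r - pos, axis=1))

def phi_point(r, center):
    return Q / np.linalg.norm(r - center)

# Far-field error of the monopole approximation: centered at r0 only
# quadrupole and higher terms remain, so the error is much smaller
# than with an offset center, where the dipole term dominates.
far = np.array([100.0, 0.0, 0.0])
err_centered = abs(phi_exact(far) - phi_point(far, r0))
err_offset = abs(phi_exact(far) - phi_point(far, r0 + np.array([0.5, 0, 0])))
assert err_centered < err_offset
```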
{ "source": [ "https://physics.stackexchange.com/questions/307654", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/71773/" ] }
307,721
I came across this pic on the internet today. At first I thought it was just not possible, because the centre of mass is way off, so gravity would generate a torque making the stick and hammer fall. Later I thought that the heaviest part of the hammer could have balanced the centre of mass, and so it could be possible. Still I'm confused. Is it possible or not, assuming that it is performed on our planet or on a planet with similar g (acceleration due to gravity)? In other words: is the center of mass of the hammer usually in the metal part? (Because that would explain this picture.) And if it is possible, and we get a function representing this equilibrium, what is your rough inference? Is it dependent on the acceleration due to gravity?
The ruler is actually being supported by the handle of the hammer to provide two points of support, so the downward force from the string lies between the two and the system balances. Moment on the hammer in blue, forces on the ruler in red. Edit: To explain in a little more detail, the center of mass of the hammer lies to the right of the string, so the hammer would (if the ruler weren't there) rotate clockwise. The handle of the hammer can then be treated as a lever pushing up against the bottom of the ruler. The blue triangle represents the support from the string, the grey block our hammer head. For this problem we treat the handle as a weightless rod. As you can see, the left-hand side of the rod will attempt to turn. This is what provides the supporting force on the ruler.
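To make the two-points-of-support argument quantitative, here is a toy one-dimensional statics check. All positions and the hanging weight are made-up illustrative numbers, not measurements of the photo; the point is only that with the string's downward pull located between the two supports, both reaction forces come out positive, i.e. a consistent static equilibrium:

```python
# Toy statics for the ruler, positions in metres along the ruler.
# Assumed numbers: table edge supports the ruler at x = 0, the hammer handle
# pushes up on the ruler at x = 0.25, the string pulls down at x = 0.18.
W = 12.0                       # total weight hanging on the string, N (made up)
x_string, x_handle = 0.18, 0.25

# Torque balance about the table edge: N_handle * x_handle = W * x_string
N_handle = W * x_string / x_handle
# Vertical force balance: the table edge supplies the rest
N_table = W - N_handle

print(N_handle, N_table)       # both positive -> the equilibrium is consistent
```

If the string pulled down outside the interval between the two supports, one of the reactions would come out negative and the configuration could not be static.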
{ "source": [ "https://physics.stackexchange.com/questions/307721", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/129570/" ] }
307,794
In classical mechanics, we perform a Legendre transform to switch from $L(q, \dot{q})$ to $H(q, p)$. This has always been confusing to me, because we can always write $L$ in terms of $q$ and $p$ by just taking the expression for $\dot{q}(q, p)$ and stuffing it in. In thermodynamics, we say $U$ is a function of $S$, $V$, and $N$ because $$dU = T dS + p dV + \mu dN,$$ which is exceptionally simple. But for the Lagrangian, we instead generally have $$dL = (\text{horrible expression})\, dq + (\text{horrible expression})\, d\dot{q}$$ In this case, I see no loss in 'naturalness' to switch to $q$ and $p$, so what's the real difference between considering $L(q, \dot{q})$ and $L(q, p)$?
We should abandon the "naive" language of functions depending on coordinates and consider functions as maps between mathematical spaces, which are only expressed in local coordinates after their domains have been defined. The starting point for both the Lagrangian and the Hamiltonian formalism is a configuration space $Q$, whose coordinates are called $q^i$. It should be thought of as the space of positions of the system under consideration. The two formalisms now immediately take different paths: Lagrangian mechanics takes place on the tangent bundle $TQ$, Hamiltonian mechanics on the cotangent bundle $T^\ast Q$. The local coordinates on $TQ$ are denoted $(q^i,\dot{q}^i)$, the local coordinates on $T^\ast Q$ are $(q^i,p_i)$. Note that, since there is no metric on $Q$, you do not have a canonical identification of tangents and cotangents and therefore cannot switch between the descriptions freely as one might be used to from Riemannian geometry. Note furthermore that $\dot{q}$ is not the derivative of anything - it's simply a notation for a new coordinate. The Lagrangian is a function $L : TQ\to \mathbb{R}$. Given it, we may define a function $f : TQ\to T^\ast Q$ in local coordinates by $$ f(q,\dot{q}) = \left(q,\frac{\partial L}{\partial \dot{q}}(q,\dot{q})\right)$$ and the associated Hamiltonian $H : T^\ast Q \to \mathbb{R}$ in local coordinates as the Legendre transform $$ H(q,p) = \sup_{\dot{q}}\left(p_i \dot{q}^i - L(q,\dot{q})\right).$$ It should be clear here that neither $H(q,\dot{q})$ nor $L(q,p)$ are meaningful objects in this context - $H$ and $L$ act on different spaces, you cannot feed a $p$ into $L$ at all. Observe now that $f$ does permit us to do this in some sense, only rigorously: If $f$ is invertible, one may define a "co-Lagrangian" or "Hamiltonian Lagrangian" $L_H : T^\ast Q \to\mathbb{R}$ by $L_H(q,p) = L(f^{-1}(q,p))$. Crucially, $L$ and $L_H$ are different functions and should, for clarity's sake, never be denoted by the same symbol.
The expression in the definition of the Legendre transform attains its extremum at $$ p_i = \frac{\partial L}{\partial \dot{q}^i}(q,\dot{q}),$$ which means that $$ H(q,p) = p_i\dot{q}^i - L(q,\dot{q})\tag{0}$$ holds exactly for a triple $(q,\dot{q},p)$ such that $$f(q,\dot{q}) = (q,p).\tag{1}$$ Note that the fact that $H$ does not depend on $\dot{q}$ means that $\dot{q}$ in eq. (0) is implicitly a function $\dot{q}(q,p)$ defined by eq. (1). Only when we impose the relation eq. (1) is there a functional relation between the $q,\dot{q},p$; otherwise there is not. This is why, as abstract functions, the Lagrangian is not a function of $p$ and the Hamiltonian is not a function of $\dot{q}$ - these are coordinates on different spaces with no relation to each other. It is only when we impose eq. (1) in order to express the Hamiltonian without the extremisation procedure prescribed in the Legendre transform that they become related, and not necessarily uniquely so. If $f$ is not invertible, then the Lagrangian system is a gauge theory and the Hamiltonian system is constrained - both terms which essentially mean that the relation between the $p$ and the $\dot{q}$ is not uniquely defined. Finally, let me address a closely related confusion which nevertheless crops up because of the same reason, i.e. not respecting the actual domains functions are defined on. The $q,\dot{q}$ arguments of the Lagrangian are independent, and become dependent only when we consider a path $\gamma: I\to Q$, which induces a path $\tilde{\gamma} : I\to TQ, t\mapsto (\gamma(t),\dot{\gamma}(t))$ on the tangent bundle, where $\dot{\gamma}$ now denotes the actual time derivative, i.e. the tangent vector field to $\gamma$.
The action is a function $S : [I,Q]\to\mathbb{R}$, where $[I,Q]$ denotes the space of all maps $I\to Q$, and is defined as $$ S[\gamma] = \int_I L(\tilde{\gamma}).$$ When now considering this action, the physicist often writes the coordinates of $\tilde{\gamma}$ as $(q(t),\dot{q}(t))$, and it is only in this context that $\dot{q}(t)$ truly is a time-dependent function and the derivative of $q(t)$.
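For concreteness, the supremum definition of $H$ can be checked numerically. The sketch below is my own example: it takes the standard mechanical Lagrangian $L = \tfrac{1}{2}m\dot{q}^2 - V(q)$ (with a harmonic $V$ chosen arbitrarily), brute-forces the supremum over a grid of $\dot{q}$ values, and recovers the expected closed form $H = p^2/2m + V(q)$:

```python
import numpy as np

# Illustrative choices: m = 2 and a harmonic potential.
m = 2.0
def V(q):
    return 0.5 * q**2

def L(q, qdot):
    return 0.5 * m * qdot**2 - V(q)

def H(q, p, qdots=np.linspace(-50.0, 50.0, 200001)):
    # H(q,p) = sup_{qdot} [ p*qdot - L(q,qdot) ], taken over a fine grid.
    # The expression is concave in qdot, so the max is attained in the interior.
    return np.max(p * qdots - L(q, qdots))

q, p = 1.3, 4.0
h_numeric = H(q, p)
h_closed = p**2 / (2.0 * m) + V(q)   # expected result of the Legendre transform
print(h_numeric, h_closed)
```

The maximiser sits at $\dot{q} = p/m$, which is exactly the relation $p = \partial L/\partial\dot{q}$ from eq. (1).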
{ "source": [ "https://physics.stackexchange.com/questions/307794", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/83398/" ] }
307,797
So I was messing around with a pendulum, and was wondering how the force exerted by the pendulum changes at different points of its trajectory. So, I came up with the following setup: What I did to find out the effect of the position on the force exerted on the sensor was that I moved the force sensor back and adjusted the clamp such that the new position of the sensor would match the point in the trajectory of the pendulum I wanted to investigate. After graphing the results, I observed the obvious decreasing trend in force exerted as position along the trajectory (starting from the mean position) increased. Given that this would be an inelastic collision, could an equation be formulated for this scenario, which would calculate the force exerted by the pendulum on the sensor given the height from the mean position? Thank you!
{ "source": [ "https://physics.stackexchange.com/questions/307797", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/137994/" ] }
307,804
During my studies of QFT, a fundamental question occurred to me concerning canonical quantization. In our course, we mentioned that: "The canonical quantization of a field with values in the complex numbers can lead only to commutation relations, as opposed to anticommutation relations." How can I interpret this statement? Is there any justification?
{ "source": [ "https://physics.stackexchange.com/questions/307804", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/140236/" ] }
307,854
I found an article on the "Reference Frame" titled "Simple QM proof implies many worlds don't exist". I tried to read it, but being a complete layman, I did not understand a thing. Could somebody tell me if this proof is valid and the many worlds interpretation is no longer considered an option? Or does the proof have a critical flaw? Also, if possible, can somebody give a brief summary of the argument in layman's terms? Thank you.
Lubos Motl's argument isn't right; it's shooting down a strawman of the MWI, not the MWI itself. To recap, the argument goes like this: Many worlds claims that after a spin measurement, there are separate 'worlds' with different measurement results. For example, there could be one world where the electron is spin up, and one where the electron is spin down. We can identify whether a quantum state is spin up or spin down, and there are no quantum states that are both at once. Therefore, the electron can't be both spin up and spin down, so many worlds is false. The trick is that in step (3), Lubos has assumed that the state of the electron in the MWI is a standard quantum state (and that the "worlds where the electron is spin up/down" are simply a quantum superposition). However, this isn't what the MWI says at all! Instead, it says that after measurement, the electron is entangled with the measuring apparatus, so their joint quantum state is something like $$|\text{screen says +1, electron spin up}\rangle + |\text{screen says -1, electron spin down} \rangle$$ where I'm neglecting coefficients and phases. Because the electron is entangled with something else, it doesn't have a quantum state of its own, so step (3) doesn't work. The simplest thing we can do to extract a "state" for the electron is to ignore (' trace out ') the state of the apparatus. When we do this, we find that the electron is actually described by a mixed state , i.e. something like $$\text{50% chance of } |\text{spin up} \rangle + \text{50% chance of } |\text{spin down} \rangle.$$ This is a probabilistic, not quantum, mixture of states, and the $+$ sign is not quantum superposition. In accordance with step (2) above, there are no quantum states here that are both spin up and spin down at once -- just a mixture of two that are spin up and down separately. These two possibilities are what MWI people would call the two 'worlds'.
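The 'trace out' step is easy to make concrete with a few lines of NumPy (a minimal sketch; the two-level encoding and basis labels are my own choice). Starting from the entangled joint state above, tracing out the apparatus leaves exactly the 50/50 diagonal mixture, with zero off-diagonal (superposition) terms:

```python
import numpy as np

# Joint state (|+1, up> + |-1, down>)/sqrt(2), with the apparatus as the
# first two-level factor and the electron spin as the second.
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2.0)

rho = np.outer(psi, psi)                   # density matrix of the pure joint state
rho = rho.reshape(2, 2, 2, 2)              # axes: (apparatus, electron, apparatus', electron')
rho_electron = np.einsum('aiaj->ij', rho)  # partial trace over the apparatus

print(rho_electron)   # diag(0.5, 0.5): a 50/50 classical mixture, no coherences
```

The vanishing off-diagonal entries are the whole point: the electron alone behaves like a probabilistic mixture of spin up and spin down, not like a superposition.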
{ "source": [ "https://physics.stackexchange.com/questions/307854", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/21904/" ] }
307,861
While attending a lecture on antennas, we were given an example in which it was stated that a lossless antenna with a directive gain of +6 dB is going to radiate 4 mW of power if it is fed 1 mW of power. However, working on this example myself, I am of the view that the output power should be 1 mW instead, since the antenna is lossless, and so no ohmic power should be dissipated and the radiated power should equal the power fed. I tried browsing the net to find an explanation for this, but couldn't find such an example. Can anyone please clarify this doubt of mine, for I need to develop a project based on antennas, and I must have all my concepts crystal clear. Thanks!
{ "source": [ "https://physics.stackexchange.com/questions/307861", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/143065/" ] }
308,237
I have this problem from University Physics with Modern Physics (13th Edition): The inside of an oven is at a temperature of 200 °C (392 °F). You can put your hand in the oven without injury as long as you don't touch anything. But since the air inside the oven is also at 200 °C, why isn't your hand burned just the same? What I understood from this problem is that my hand won't get as hot as the air temperature, but then my first conjecture was: It's the nature of the air (i.e., a gas) that its molecules are more dispersed than those of a solid. Is my reasoning right? Or what thermodynamics concepts do I need to understand better to tackle this problem?
There are two points relevant for the discussion: air itself carries a very small amount of thermal energy, and it is a very poor thermal conductor. For the first point, I think it is interesting to consider the product $\text{density} \times \text{specific heat}$, that is, the amount of energy per unit volume that can be transferred for every $\text{K}$ of temperature difference. In order of magnitude, the specific heat is roughly comparable, but the density of air is $10^3$ times smaller than the density of a common metal; this means that in a given volume there are far fewer "molecules" of air that can store thermal energy than in a solid metal, and hence air has much less thermal energy - not enough to cause a dangerous rise in temperature. The second point is the rate at which energy is transferred to your hand, that is, the flow of heat from the other objects (air included) to your hand. For the same amount of time and exposed surface, touching air or a solid object transfers a very different amount of energy to you. The relevant quantity to consider is thermal conductivity, that is, the energy transferred per unit time, surface and temperature difference. I added this to give more visibility to the comments; my original answer follows. Air is a very poor conductor of heat, the reason being that its molecules are less concentrated and interact less with each other, as you conjectured (this is not very precise, but in most situations this way of thinking works). By contrast, solids are in general better conductors: this is the reason why you should not touch anything inside the oven. Considering orders of magnitude, according to Wikipedia, air has a thermal conductivity $ \lesssim 10^{-1} \ \text{W/(m K)} $, whereas for metals it is at least two orders of magnitude higher. I really thank Zephyr and Chemical Engineer for the insight that they brought to my original answer, which was much poorer but got unexpected fame.
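The two order-of-magnitude claims can be put side by side numerically. This is a rough sketch using textbook room-temperature values, with aluminium standing in for a generic metal:

```python
# Rough room-temperature values (SI units):
# rho = density [kg/m^3], c = specific heat [J/(kg K)], k = conductivity [W/(m K)]
air   = {"rho": 1.2,    "c": 1005.0, "k": 0.026}
metal = {"rho": 2700.0, "c": 900.0,  "k": 205.0}   # aluminium as a stand-in

# Point 1: thermal energy stored per cubic metre per kelvin of temperature difference.
e_air = air["rho"] * air["c"]        # ~1.2e3 J/(m^3 K)
e_metal = metal["rho"] * metal["c"]  # ~2.4e6 J/(m^3 K)

# Point 2: how much faster the metal delivers heat to your hand.
k_ratio = metal["k"] / air["k"]

print(e_metal / e_air, k_ratio)      # ~2000x more stored energy, ~8000x faster transfer
```

So the metal shelf both holds thousands of times more heat per unit volume and delivers it thousands of times faster - which is why touching it burns you while the air does not.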
{ "source": [ "https://physics.stackexchange.com/questions/308237", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/143600/" ] }
308,413
We were asked to explain the difference between the idea of an ether and the idea of quantum fields. When I really began to think about it, I concluded that the two are essentially the same idea. They both consist of the idea that 'something' permeates all of space and acts on everything. Why is this wrong?
A different point of view to Jamal's answer: I think what distinguishes quantum field theory, where each elementary particle in the particle table defines a field all over spacetime, from the luminiferous aether is Lorentz invariance. The luminiferous aether theory was falsified by the Michelson-Morley experiment because it was not Lorentz invariant. In quantum field theory an electron traversing spacetime is described by a quantum mechanical wave packet (which means that what "waves" is the probability of existing at (x, y, z, t)), manifested by creation and annihilation operators acting on the electron field, and the expectation value defines the location of the electron as a function of (x, y, z, t). The same for a photon, riding on the photon field. The quantum fields, though, are by construction Lorentz invariant and thus cannot be identified with the luminiferous aether.
{ "source": [ "https://physics.stackexchange.com/questions/308413", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/143696/" ] }
308,617
The mass of a carbon-14 atom is $14.003\,241\,988\,7\:\mathrm{u}$, nitrogen-14 has a mass of $14.003\,074\,004\,78\:\mathrm{u}$, and the rest mass of an electron is $0.000\,548\,579\,9\:\mathrm{u}$. In $\beta^-$ decay, $$_6^{14}\mathrm C \ \longrightarrow \ _7^{14}\mathrm N + e^- + \bar{\nu}_e. $$ The combined mass of the nitrogen atom and an electron is substantially larger than that of the carbon atom, so where did the extra mass-energy come from?
One needs to be very careful in doing mass-energy balances in nuclear decay reactions, especially in beta-decay (electron or positron emission). The reaction as written in the OP is correct, and is exactly as it is normally written, but is slightly misleading (not the fault of the OP!). Consider an individual carbon-14 atom on the LHS of the reaction. It consists of 6 protons, 8 neutrons, and six orbital electrons. The orbital electrons are not involved in the nuclear reaction, and are usually ignored. The major product of the decay of this carbon-14 atom is an atom of nitrogen-14. But this atom still has the same six orbital electrons as the parent carbon-14. However, the atomic masses are tabulated for the whole neutral atom. So the mass used for the carbon-14 atom is correct, but the mass used for the daughter nitrogen-14 atom is actually too large by the mass of the seventh orbital electron found in a neutral atom. To emphasize, the actual reaction product has six orbital electrons, while the mass used is for a seven-orbital-electron nitrogen-14 atom. So, one needs to subtract an electron from the tabulated nitrogen-14 atom, and then add back in the mass of the actual beta-particle produced in the reaction. This is the same as just using the tabulated atomic mass values, and not including the beta-particle mass.
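The accounting can be checked directly with the numbers from the question (a quick sketch; the conversion factor is the standard $1\,\mathrm{u} \approx 931.494\ \mathrm{MeV}/c^2$, rounded):

```python
# Atomic masses from the question, in unified atomic mass units (u)
m_C14 = 14.0032419887
m_N14 = 14.00307400478   # tabulated *neutral-atom* mass: includes 7 electrons
m_e   = 0.0005485799

u_to_MeV = 931.494       # 1 u in MeV/c^2, rounded

# Naive bookkeeping (neutral N-14 atom plus an extra beta particle):
# appears to require energy input, which is exactly the puzzle in the question.
q_naive = (m_C14 - (m_N14 + m_e)) * u_to_MeV

# Correct bookkeeping: subtract one electron from the neutral N-14 mass and
# add back the beta particle -- the electron masses cancel, so the atomic
# masses can be compared directly.
q_correct = (m_C14 - m_N14) * u_to_MeV

print(q_naive, q_correct)   # ~ -0.35 MeV vs ~ +0.156 MeV (the known C-14 decay energy)
```

The corrected value, about 156 keV, is the well-known maximum beta energy of carbon-14.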
{ "source": [ "https://physics.stackexchange.com/questions/308617", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/143790/" ] }
308,735
According to the Wikipedia article on night vision , Many animals have better night vision than humans do, the result of one or more differences in the morphology and anatomy of their eyes. These include having a larger eyeball, a larger lens, a larger optical aperture (the pupils may expand to the physical limit of the eyelids), more rods than cones (or rods exclusively) in the retina, and a tapetum lucidum. But a recent study has shown that the human eye is capable of detecting individual photons of visible light. It seems to me that this should be the highest physically possible sensitivity to light, since QED requires excitations of the E&M field to be quantized into integer numbers of photons. How is it possible for animals to have better night vision than humans, if humans can detect individual light quanta? Is it just that while the human eye can sometimes detect individual photons, other animals' eyes can do so more often?
That research shows that humans can detect single photons, not that we're particularly good at it. From the paper: "Averaging across subjects' responses and ratings from a total of 30,767 trials, 2,420 single-photon events passed post-selection and we found the averaged probability of correct response to be 0.516±0.010 (P=0.0545; Fig. 2a), suggesting that subjects could detect a single photon with a probability above chance." (emphasis mine) This study showed that we could do better than random chance, but not that we could do substantially better than random chance. "Based on the efficiency of the signal arm and the visual system, we estimate that in ∼6% of all post-selected events an actual light-induced signal was generated (Methods section)."
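For scale, here is a back-of-the-envelope significance check on the quoted numbers. This is a normal-approximation sketch of my own; the paper's exact analysis gives $P = 0.0545$, and the approximation should land in the same ballpark:

```python
import math

# Quoted figures: 2420 post-selected single-photon trials, success rate 0.516.
n, p_hat = 2420, 0.516

se = math.sqrt(0.25 / n)                   # binomial std. error under the 50/50 chance hypothesis
z = (p_hat - 0.5) / se                     # how many standard errors above chance
p_one_sided = 0.5 * math.erfc(z / math.sqrt(2.0))

print(z, p_one_sided)                      # z ~ 1.6, p ~ 0.06: barely above chance
```

A $z$ of about 1.6 is suggestive but far from a decisive detection threshold, which is exactly the answer's point: the result clears chance, but only just.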
{ "source": [ "https://physics.stackexchange.com/questions/308735", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/92058/" ] }
308,917
Is it possible to bend light so that it forms a circle and goes round and round indefinitely without losing energy?
How could one manipulate light? It does not have mass, it does not have electric charge. For that matter, it also does not have any color or weak charge. There seems no way to change its direction of motion. Black Hole General relativity describes how masses can create curvature in spacetime. If you have enough mass, spacetime will get curved significantly. Light will follow this curvature, because light goes "straight", and "straight" becomes curved in curved spacetime. Right at the Schwarzschild radius of a black hole, the escape velocity is the speed of light. That means that a photon there trying to go straight away from the black hole will not get any further, although it moves with the speed of light. That is not a closed orbit, of course. As Jerry Schirmer pointed out in the comments, a closed orbit happens at $r = 3M$ where $M$ is the mass of the black hole. The problem with this orbit is that it is unstable. Any perturbation will either send the photon away from the black hole or let it spiral into the singularity. Either way it breaks from the closed orbit. Since a photon has energy, it also creates spacetime curvature. A moving photon will therefore radiate gravitational waves, although they will be minuscule. However, they are sufficient perturbation to prevent the orbit from being closed forever. This could be prevented by using a solid ring of light such that the mass density along the orbit is constant. Then no gravitational waves would be emitted. If the Hawking temperature of the black hole does not exactly match the temperature of the ambient universe (think of the cosmic microwave background), the black hole will grow or shrink. This will change the radius of the orbit and also prevents the photon from orbiting for eternity. All in all this is very unstable and will not work out.
See also: https://en.wikipedia.org/wiki/Schwarzschild_radius https://en.wikipedia.org/wiki/Kruskal%E2%80%93Szekeres_coordinates Wave Optics Another possibility is to use refraction of light. If you have an optical medium with different optical densities (different index of refraction $n$), light will also bend. This is how a lens works. With the right setup of lenses one can refract light to go around a path. You could even set up three mirrors and let the light go round and round in a triangle! The optical fiber is a bit more sophisticated: it has a gradient of the optical density and can therefore smoothly direct the light around a curve. Quantum Electrodynamics With quantum electrodynamics, there is the tiny interaction of light rays with other light rays. Although light has no charge in itself, it can couple to virtual charged fermions and create a closed loop that couples four photons in total. If you have enough light around in a particular configuration, you could bend light rays with that. However, I fear that this is not realizable in any experiment. See also: https://en.wikipedia.org/wiki/Euler%E2%80%93Heisenberg_Lagrangian The Point? Another valid issue was raised in the comments: If you had this situation successfully set up, how would you know that it is working? If you try to observe the photon, you would change it. If it radiates something to the outside (scattered light, gravitational waves), it would lose energy over time and leave the orbit.
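For scale, the unstable circular photon orbit from the Black Hole section sits at $r = 3M$ in geometric units, i.e. $r = 3GM/c^2 = 1.5$ Schwarzschild radii in SI units. A quick sketch for a solar-mass black hole:

```python
# Physical constants (SI, rounded)
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

r_s = 2.0 * G * M_sun / c**2   # Schwarzschild radius, ~2.95 km
r_photon = 1.5 * r_s           # photon sphere, r = 3GM/c^2, ~4.43 km

print(r_s / 1e3, r_photon / 1e3)   # in kilometres
```

So for a stellar-mass black hole, the whole light-trapping region is only a few kilometres across - and, as argued above, the orbit there is unstable anyway.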
{ "source": [ "https://physics.stackexchange.com/questions/308917", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/140964/" ] }
309,019
Beta minus decay emits an electron with a range of energies. Within the nucleus, the following is happening: $n\rightarrow p+e^-+\bar{v}_e$. For this reaction to be possible, by lepton number conservation, the neutrino must be present. Since this neutrino accounts for the range of electron energies, can this not be used as a constraint on the mass of the neutrino? For the maximum electron energy, the neutrino will have no kinetic energy, only its mass energy $mc^2$. So how come this principle has not been utilised to put limits on neutrino mass-energies? There must be a problem somewhere.
This has been attempted; however, the energy released in a neutron decay is a shade under a MeV and the neutrino masses are probably below $0.1$ eV. The energy of the neutron decay simply cannot be measured accurately enough to determine the neutrino mass. The closest estimate I know of is reported in Neutrino mass limit from tritium beta decay by E. W. Otten, C. Weinheimer, but their estimate is $m(\nu_e)\lt 2$ eV, so it's a fair way from the expected mass.
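The scale of the difficulty is easy to see from the rest masses (standard table values, in MeV): the decay energy is the "shade under a MeV" mentioned above, while the neutrino mass one hopes to resolve is of order 0.1 eV, a part in ten million. A quick sketch:

```python
# Rough scale of the problem: beta-decay endpoint energy versus the
# neutrino mass one would like to resolve.  Masses in MeV, from
# standard particle-data tables.
m_n, m_p, m_e = 939.565, 938.272, 0.511

Q = m_n - m_p - m_e     # energy shared by the electron and antineutrino
m_nu = 0.1e-6           # a plausible neutrino mass scale, ~0.1 eV, in MeV

print(Q)                # ~0.78 MeV
print(m_nu / Q)         # ~1e-7: the relative precision one would need
```

So an endpoint measurement would have to pin down a ~MeV energy to roughly one part in $10^7$ to see a 0.1 eV neutrino, which is why tritium (with a much smaller endpoint energy) is used instead.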
{ "source": [ "https://physics.stackexchange.com/questions/309019", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/114841/" ] }
309,711
I'm writing a piece on the nuclear force, and I'm struggling with something. I always thought of the alpha particle as something with a tetrahedral disposition. If you search the internet on this there are plenty of hits. Ditto if you search for images: the alpha particle is usually depicted as a tetrahedral arrangement of two protons and two neutrons. And not just in popscience pictures. Here it is again in a Scholarpedia article, clusters in nuclei, by Professor Martin Freer. He says things like "the alpha+alpha cluster structure is found in the ground state of 8 Be", and gives this depiction showing the arrangement of four alpha particle clusters in the nucleus 16 O: However, I'm struggling to find any hard scientific evidence of the tetrahedral disposition or configuration of the alpha particle. So my question is this: is there any hard scientific evidence that the alpha particle is tetrahedral?
The alpha particle is a quantum mechanical system, and it is not clear what we might mean by drawing pictures of billiard balls arranged according to classical polyhedra. In particular, the alpha has quantum numbers $J^\pi=0^+$, so it has complete spherical symmetry. In a shell model picture, which provides a simple guide to the exact 4-body wave function, the alpha is a state in which all four particles (a neutron with spin up/down, and a proton with spin up/down) occupy the same 1s (spherically symmetric) orbital. This implies that the alpha should be drawn as a blob, with smeared-out protons and neutrons. The shell model wave function is not exact, and there are short-range correlations: if I detect a spin-up proton at the origin, then there is a slightly enhanced/reduced probability to find a spin-up neutron/proton nearby, but these correlations do not in any sense favor tetrahedral configurations. Larger nuclei (deformed nuclei, like plutonium) have (semi)classical shapes. The corresponding quantum mechanical wave function is a superposition of states with different orientations of the nucleus. The ground state is still isotropic, but excited states correspond to rotational bands. There is also a sense in which alpha-particle cluster nuclei (like oxygen and carbon) involve large wave function components that favor certain geometric arrangements. Postscript (experimental evidence): Entire textbooks (for example Bohr and Mottelson, Nuclear Structure) are devoted to explaining why the shell model provides an accurate guide to nuclear states. Modern variational (and exact numerical) wave functions can be found in http://journals.aps.org/rmp/abstract/10.1103/RevModPhys.70.743 . Empirically, the simplest piece of evidence is the spectrum of excited states. A deformed nucleus has low-lying rotational and vibrational states.
The alpha particle has a large gap (consistent with a closed shell), and the lowest excited state is $0^+$, consistent with a monopole vibration (see, for example, Fig. 3-2a in Bohr & Mottelson, vol. I).
{ "source": [ "https://physics.stackexchange.com/questions/309711", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/76162/" ] }
310,671
I never had a problem accepting that spacetime is curved as a result of matter, until I learned the LIGO experiments showed that evidently the curvature of spacetime can be measured. This, to me, is very strange. Suppose the entire universe is empty except for two people, floating in space 100 meters apart. Since their masses are so small, spacetime is almost entirely flat compared to the conditions on the surface of Earth. Then suddenly, some incredibly huge cosmic event happens, and unthinkably large gravitational waves pass through the area. Somehow this doesn't kill them. Spacetime goes all spaghetti, and these two dudes are just shaking all over like jellyfish. But if space itself is changing, how would they know? They should observe, at all times, that they are still 100 meters apart, since they are not moving in space, but space is moving with them in it... right? Or would each guy think that he is remaining still while the other one just goes totally nuts? If the changing curvature of spacetime can be measured, what if these two guys were each holding one end of a 99 meter pole? It seems to me that being measurable would imply that one (or both) would have to lose grip and then have to dodge the end as it comes back. This is probably a well-explored question and I just don't know the answer. I looked around on physics.se and saw a lot of questions like this very informative one and this one that assumes the effects are noticeable, but none of them seem to be quite what I'm looking for.
Analogy: how do we know that the surface of the Earth is curved? Well, we could e.g. draw a triangle on the surface of the Earth, and check the sum of the corner angles. If the Earth was flat, you'd always find that the sum of these angles was 180°, so it would be impossible to e.g. create a triangle with two 90° corners. However, since the Earth is curved, this is indeed possible ; you could e.g. draw a triangle where one edge follows the equator, and the two others follow meridians from the equator to the north pole. The same concept would apply to spacetime: simple geometric relationships such as e.g. the sum of corner angles in a triangle would be different in flat spacetime from curved spacetime, and these relationships should be possible to measure to figure out the curvature of spacetime itself.
{ "source": [ "https://physics.stackexchange.com/questions/310671", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/111067/" ] }
310,838
I just don't get it. Isn't snow just another form of water? Also are all ices transparent or do they go white after a certain temperature?
The same is true for clouds, fog, wave splashes and so on. Because of the tiny size (but large number) and irregular shapes of the particles, reflection and refraction occur in an irregular manner, so the light is scattered randomly and all images and colors get mixed together. As a result, it looks white. By the way, it looks dark under thick clouds because all the light from the top has been scattered away. Under a microscope, snowflakes (ice crystals aggregated in a fractal way) look crystal clear. Photo credit goes to Henry David Thoreau: snowcrystals.com photos' link. The method of illumination is mentioned at the end of this link.
{ "source": [ "https://physics.stackexchange.com/questions/310838", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/141926/" ] }
310,881
My question is in the title: Do black holes have a moment of inertia? I would say that it is: $$I ~\propto~ M R_S^2,$$ where $R_S$ is the Schwarzschild radius, but I cannot find anything in the literature.
The angular velocity of a Kerr black hole with mass $M$ and angular momentum $J$ is $$ \Omega = \frac{J/M}{2M^2 + 2M \sqrt{M^2 - J^2/M^2}} $$ The moment of inertia of an object can be thought of as a map from the object's angular velocity to its angular momentum. However, here we see that the relationship between these two quantities is non-linear. If we want to think of moment of inertia in the usual sense, we should linearise the above equation. When we do so, we find the relationship $$ J = 4 M^3 \Omega \qquad (\mathrm{to\ first\ order})$$ And so the moment of inertia is $$ I = 4 M^3 $$ In other words, the expression you guessed is correct, and the constant of proportionality is unity. Note that since the Schwarzschild radius of a black hole is merely twice its mass, and since the only two parameters that describe the black hole are its mass and angular momentum, any linear relationship between the angular velocity and angular momentum of our black hole must be of the form $J = k\, M R_S^2\, \Omega$ on dimensional grounds. Note that $G = c = 1$ throughout. EDIT. As pointed out in the comments, it's not obvious how one should define the angular velocity of a black hole. At the risk of being overly technical, we can do this as follows. First consider the Killing vector field $\xi = \partial_t + \Omega \partial_\phi$ (using Boyer-Lindquist coordinates), where $\Omega$ is defined as above. The orbits, or integral curves, of this vector field are the lines $\phi = \Omega t + \mathrm{const.}$, which correspond to rotation at angular velocity $\Omega$ with respect to a stationary observer at infinity. One can show that this vector field is tangent to the event horizon, and that its orbits lying on the event horizon are geodesics. These geodesics hence rotate at angular velocity $\Omega$ (with respect to an observer at infinity), and so it is natural to interpret the quantity $\Omega$ as the angular velocity of the black hole.
Whether it is possible to make a more definite statement than this I do not know.
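A quick numerical check of the linearisation (in units $G = c = 1$): dividing $J$ by the exact $\Omega$ above should approach $I = 4M^3$ as $J \to 0$.

```python
import math

def omega_horizon(M, J):
    """Horizon angular velocity of a Kerr black hole (G = c = 1)."""
    return (J / M) / (2 * M**2 + 2 * M * math.sqrt(M**2 - J**2 / M**2))

M = 1.0
# For small J the exact relation linearises to J = 4 M^3 * Omega,
# i.e. an effective moment of inertia I = J / Omega -> 4 M^3.
for J in (1e-2, 1e-4, 1e-6):
    print(J, J / omega_horizon(M, J))   # second column approaches 4.0
```

At the extremal limit $J = M^2$ the square root vanishes and $\Omega = 1/(2M)$, so the effective $J/\Omega$ there is $2M^3$ rather than $4M^3$, illustrating just how non-linear the full relation is.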
{ "source": [ "https://physics.stackexchange.com/questions/310881", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/46926/" ] }
310,897
I was going through a section on interference due to thin films. It says that a phase difference of $\pi$ occurs in the reflected system, due to reflection from the surface of a denser medium, but gives no reason for it. Any help is appreciated.
When light travelling in a medium of refractive index $n_1$ hits the boundary of an optically denser medium with $n_2 > n_1$, the Fresnel equations give a reflection amplitude coefficient at normal incidence of $$ r = \frac{n_1 - n_2}{n_1 + n_2} $$ which is negative, and flipping the sign of the amplitude is exactly a phase shift of $\pi$. This coefficient follows from requiring that the fields be continuous across the boundary. A useful mechanical analogy is a pulse on a light string reflecting off the junction with a heavier string (or off a fixed end): the reflected pulse comes back inverted. Reflection off a less dense medium ($n_2 < n_1$) gives a positive $r$ and no phase shift, which is why only one of the two reflections in a thin film picks up the extra $\pi$.
{ "source": [ "https://physics.stackexchange.com/questions/310897", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/124783/" ] }
311,069
Will a propeller work in a superfluid? Opinions differ.
No. I actually tried this in an undergraduate physics lab long ago. We put two fan blades right up against each other in a glass dewar. One was driven and the other was free to spin. We filled the dewar with liquid He and spun the fan. The other spun just fine. We pumped on the LHe until it transitioned to a superfluid. We spun the fan. The other just sat there and then slowly, slowly started to turn. Edit as requested: So yes, it worked a little. But so poorly that the best answer is no. As "Does every superfluid have a normal and a superfluid component?" explains, the fluid has two components. The normal component was responsible for the residual viscosity. If we had reduced the temperature, there would have been a smaller fraction of normal component, and less viscosity. It would have worked even more poorly. And I must include the obligatory XKCD.
{ "source": [ "https://physics.stackexchange.com/questions/311069", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/59208/" ] }
311,468
Assumptions: The universe is flat (currently supported) The universe is simply connected (the edges aren't glued together as in a torus) The universe contains finite mass and energy Conclusion: The universe must have an edge. Yes, there is a similar question here: How can the universe be flat and have no center if universal mass-energy content is finite? But my question is not answered. In fact, people are neatly dodging the notion of an "edge" by suggesting "unusual topologies". This is a purely hypothetical question, but since everyone says the universe has no edge and is flat, I am forced to ask the obvious: space, or some form of truly empty vacuum, might go on forever, but if matter/energy are finite in the universe, then eventually, if we travel far enough past the cosmic horizon, we'll find that there are no more stars, no more galaxies, no more photons... and no more anything. Unless the universe is actually a sphere, in which case eventually we'll end up back where we started. Is there a flaw in my reasoning? I must have read 100 articles today to get to the bottom of this.
Yes, if the universe:

1. is flat (zero spatial curvature),
2. has finite mass-energy (since we know it is uniform, this also means it is bounded; if you drop the boundedness because you don't want to admit uniformity, i.e., if it is unbounded, then the answer is clearly no),
3. is simply connected (has what is called a trivial topology),

then it does have to have an edge. See the zero-curvature and other sections of the Wikipedia article on the shape of the universe; it's fairly complete: https://en.wikipedia.org/wiki/Shape_of_the_universe

The simply connected condition is critical. If you allow other topologies, then both the torus and the Klein bottle topologies are bounded, flat and have no edges. There are a total of 17 possible topologies for multiply connected spaces that are flat in 3D Riemannian space (our spatial dimensions, which is what is referred to when one talks about the curvature of the universe). See fig. 4 in the arXiv paper at https://arxiv.org/abs/0802.2236 for all of them. There are others if the space is not flat.

As for space being unbounded but mass-energy finite: that would violate what we know of the homogeneity and isotropy of the universe. From the CMB we see large-scale homogeneity and isotropy. Now, we only see back to 380,000 years after the Big Bang, but there is no sign of large inhomogeneities. It could theoretically still be true that our inflation bubble is homogeneous while the part of the universe beyond our particle horizon is not, but there is no theoretical reason to think so. The more prevalent view is that it was all more or less equally uniform, and the same inflation that created our bubble might have created others. If we ever fully understand our inflation (which at this point looks pretty consistent with observations, though those don't rule out various versions, or other unknown mechanisms from an unknown theory of quantum gravity), we may know better.
But presently, large-scale homogeneity with possible bubbles is consistent with all observations.
{ "source": [ "https://physics.stackexchange.com/questions/311468", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/142050/" ] }
311,606
I found after searching that this question has been asked before, but all the answers were not convincing. Suppose I have a body which is free (not constrained). It always rotates about its center of mass (COM). Why is that so? A convincing answer that I found was that in most cases the moment of inertia about the center of mass is the least, and that's why the body rotates about the center of mass. But I ask it again in the hope that the question won't get closed and that I'll get a better, succinct answer. I was thinking that motion about the COM is the most stable one and that rotation about other points degenerates into it. I don't think that's right. Is it? —————————————————————————————————— Note: 1) This question has been wrongly closed. The other questions linked don't answer my question at all. The site asks me to ask a new question if my question is still not resolved, and I did make it clear that I am not satisfied with the answers in the linked questions. 2) The answer to this question is that a free body never rotates about its center of mass (the instantaneous axis of rotation never passes through the center of mass). In fact, we choose a point about which we want to decompose the motion into rotation and translation, and we could very well have chosen any point other than the center of mass and analysed rotation about it. Moreover, the instantaneous axis of rotation for a free body never passes through the center of mass. I would urge the moderators to give me the right to add my answer to this question. This is the correct answer, the one which satisfied me the most, and it is nowhere in the linked answers. So kindly reopen this question and let me add my answer to it.
You presumably already know that in the absence of external forces, the center of mass of any collection of particles moves at a constant velocity. This is true whether they are stuck together in a single body or are just a bunch of separate bodies, with or without interactions between them. We now move to a frame of reference moving at that velocity. In that frame the CofM is stationary. Now suppose that the particles are indeed stuck together to form a rigid body. We see that the body is moving so that: 1) the CofM remains fixed, and 2) all the distances between the particles are fixed. (This second condition is what is meant by a "rigid" body, after all.) A motion with these two properties, (1) and (2), is precisely what is meant by the phrase "a rotation about the CofM".
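The first step, that internal forces cannot budge the centre of mass, is easy to verify numerically. A toy sketch with two particles coupled only by an internal spring (made-up masses, velocities, and spring constant; not a rigid body, but the same CofM argument applies):

```python
# Two particles interacting only with each other through equal and
# opposite internal forces (a spring); no external forces act.
m1, m2 = 1.0, 3.0                      # made-up masses
x1, x2 = [0.0, 0.0], [1.0, 0.0]        # positions in 2-D
v1, v2 = [0.0, 1.0], [0.5, -0.2]       # arbitrary initial velocities
k, L0, dt = 10.0, 1.0, 1e-3            # spring constant, rest length, step

def com_velocity():
    return [(m1 * a + m2 * b) / (m1 + m2) for a, b in zip(v1, v2)]

v_start = com_velocity()
for _ in range(5000):
    d = [b - a for a, b in zip(x1, x2)]
    r = sum(c * c for c in d) ** 0.5
    f = [k * (r - L0) * c / r for c in d]    # force on particle 1
    for j in range(2):                       # symplectic Euler step
        v1[j] += f[j] / m1 * dt
        v2[j] -= f[j] / m2 * dt              # Newton's third law: -f on 2
        x1[j] += v1[j] * dt
        x2[j] += v2[j] * dt

print(v_start, com_velocity())  # the CofM velocity does not change
```

Because every internal force appears in an equal-and-opposite pair, the total momentum update cancels exactly at each step, so the CofM velocity is conserved to floating-point precision no matter how wildly the individual particles move.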
{ "source": [ "https://physics.stackexchange.com/questions/311606", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/113699/" ] }
311,637
Besides the obvious cases where I'm behind a "one-way" mirror or have goggles/glasses on: is there a case where I can see someone's eyes, and they can't see mine?
Fermat's principle implies that the direction of travel of any light ray can be reversed. Therefore a line of sight between a pair of eyes always works in both directions. If one person is in the dark, though, then only one person can see the eyes of the other: there needs to be enough light reflected from both people's eyes for this to work.
{ "source": [ "https://physics.stackexchange.com/questions/311637", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/98739/" ] }
312,119
Let's say I took a tiny metal sphere that, when put under water, has just enough surface area to be surrounded, at any point in time, by up to 1000 water molecules. Now let's say we put this sphere first in shallow water and then in the Mariana Trench. Obviously, the sphere would feel much more pressure in the deep water! But why is that? Let's look at the formulas. For the pressure from the 1000 water molecules: $$P=1000*\frac{F_{molecule(H2O)}}{A_{sphere}}$$ Assuming the collisions of the water molecules with the sphere are perfectly elastic, are always in the same direction and happen in the same period of time: $$P=1000*\frac{2m_{molecule(H2O)}*v_{molecule(H2O)}}{A_{sphere}*\Delta t}$$ So the only variable here is the velocity of the water molecules. But we know that deep water is colder than shallow water, so the kinetic energy, and thus velocity, of the water molecules in the Mariana Trench is lower, and so it doesn't make much sense that the pressure would be higher. P.S. To be explicit: my logic here is that the only things capable of EXERTING the force (to cause pressure) are the water molecules directly in contact with the sphere. That is where an exchange of energy would be happening.
The problem is that you're modeling the liquid like an ideal gas, whose molecules independently bounce off the ball, but liquids are characterized by strong interactions at short distances. A better (but still inaccurate) model would be to treat the liquid like a solid locally, i.e. imagine each of the liquid molecules connected in a chain by springs. An increase in pressure means that the springs are compressed more and more, so they push outward onto your object more and more. In terms of your variables, we should have $F \sim k \Delta x$, not $F \sim 2mv/\Delta t$. In this model, pressure can be transmitted from molecules far away, just like tension is transmitted through a rope.
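The spring picture can be made quantitative: in a static chain of springs, the applied force is transmitted undiminished from link to link, which is how pressure from far-away water reaches the ball. A toy relaxation sketch with hypothetical values of $k$ and $F$:

```python
k = 50.0    # spring constant of each link (made-up value)
F = 2.0     # pull applied to the far end of the chain (made-up value)
n = 10      # point masses; mass 0 is held fixed at the "ball" end

# Relax the chain to static equilibrium with Gauss-Seidel sweeps:
# each interior mass moves to where its two springs balance, and the
# last mass to where its single spring balances the external pull F.
x = [0.0] * n                  # displacement of each mass from rest
for _ in range(20000):
    for i in range(1, n - 1):
        x[i] = 0.5 * (x[i - 1] + x[i + 1])
    x[n - 1] = x[n - 2] + F / k

force_on_ball = k * (x[1] - x[0])  # force the first spring exerts
print(force_on_ball)               # converges to F: the pull arrives undiminished
```

The equilibrium solution is simply $x_i = iF/k$: every spring stretches by the same $F/k$ and therefore carries the same force $F$, independent of how far it is from where the force was applied.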
{ "source": [ "https://physics.stackexchange.com/questions/312119", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/72875/" ] }
312,289
Suppose the apparent diameters of the Sun and the Moon are exactly the same (which is in fact very close to the real situation). If the Moon had a perfectly mirrored surface, would the reflected visible light of a full moon (at night) illuminate the Earth with the same intensity as the visible light of the Sun does? Or would this only happen if we placed a giant flat perfect mirror that reflects the light of the Sun during the night, so that every person on the night side of the Earth could see the Sun?
No, because of the sizes of their surfaces. Let's make these simplified assumptions: The Earth and the Moon are both spheres 1 AU from the Sun. The total amount of sunlight an object receives is proportional to the solid angle it takes up from the Sun's point of view. The Sun and the Moon are each visible from a hemisphere of the Earth. Then the total amount of sunlight received by the sunlit hemisphere of Earth is proportional to the square of the Earth's radius, while the total amount of sunlight received by the sunlit hemisphere of the Moon is proportional to the square of the Moon's radius. Since the Moon is ≈1/3.67 the radius of Earth, it receives ~1/13.5 the total amount of sunlight. Certainly, even a perfectly reflective Moon can't reflect more sunlight than it receives, so even if all of the light bouncing off of the Moon reached the Earth it would only provide brightness comparable to a cloudy day. Of course, owing to the geometry, most of the light bouncing off of the Moon doesn't land on Earth; it goes off into space in directions that miss the Earth completely. Making another simplifying assumption, I think we can say that the fraction of it that reaches Earth is proportional to the fraction of the Moon's sky taken up by the Earth. The Earth has an apparent size of about 2 degrees as seen from the Moon, so its angular size is $2\pi\left(1 - \cos\frac{2^\circ}{2}\right) \approx 0.00096$ steradians. A hemisphere is $2\pi$ steradians, so the Earth occupies about 0.00015 hemispheres (about 0.015% of the Moon's sky). Now we have that geometrically, a perfectly reflective Moon should illuminate the Earth at about $\frac{0.00015}{13.5} \approx \frac{1}{90,000}$ the intensity of the Sun. In real life, the light from a full Moon is about 1/480,000 the brightness of the noon Sun. 
Given the Moon's albedo is somewhere between 0.1 and 0.2 depending on the angle of incidence, and given the huge simplifications made in the above math, I think this indicates that we're in the right ballpark.
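The arithmetic above is compact enough to re-run — a sketch using the same simplifying assumptions and numbers as the answer:

```python
import math

# Redo the answer's bookkeeping under its stated assumptions.
radius_ratio = 3.67            # Earth radius / Moon radius
earth_diam_deg = 2.0           # Earth's angular diameter seen from the Moon

sunlight_ratio = radius_ratio ** 2                     # Moon gets ~1/13.5 of Earth's sunlight
omega = 2 * math.pi * (1 - math.cos(math.radians(earth_diam_deg / 2)))
sky_fraction = omega / (2 * math.pi)                   # Earth's share of a lunar hemisphere
dimming = sky_fraction / sunlight_ratio

print(sunlight_ratio)          # ~13.5
print(omega)                   # ~0.00096 steradians
print(1 / dimming)             # ~90,000: mirror-Moon light vs direct sunlight
```

The last figure is the geometric 1/90,000 quoted above; multiplying in an albedo of 0.1 to 0.2 instead of a perfect mirror moves it toward the observed 1/480,000.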
{ "source": [ "https://physics.stackexchange.com/questions/312289", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/98822/" ] }
312,406
Why do nuclei like Oganesson (also known as Ununoctium; it is the 118th element on the periodic table) decay in about 5 milliseconds? It is weird that they decay at all. In comparison, why do elements like uranium take about 200,000 years, or even more, to decay? Why do atoms decay at all? Why does an element like Polonium (the 84th element) take only about 140 days to decay?
In a nutshell, atoms decay because they're unstable and radioactive. Ununoctium (or Oganesson ) has an atomic number of 118. That means that there are 118 protons in the nucleus of one atom of Oganesson, and that isn't including the number of neutrons in the nucleus. We'll look at the most stable isotope of Oganesson, $\mathrm{{}^{294}Og}$. The 294 means that there are 294 nucleons, or a total of 294 protons and neutrons in the nucleus. Now, the largest stable isotope of an element known is $\mathrm{{}^{208}Pb}$, or lead-208. Beyond that many nucleons, the strong nuclear force begins to have trouble holding all those nucleons together. See, normally, we'd think of the nucleus as impossible because the protons (all having a positive charge) would repel each other, because like charges repel. That's the electromagnetic force. But scientists discovered another force, called the strong nuclear force. The strong nuclear force is many times stronger than the electromagnetic force (there's a reason it's called the strong force) but it only operates over very, very small distances. Beyond those distances, the nucleus starts to fall apart. Oganesson and Uranium atoms are both large enough that the strong force can't hold them together anymore. So now we know why the atoms are unstable and decay (note that there are more complications to this, but this is the general overview of why). But why the difference in decay time? First, let me address one misconception. Quantum mechanics says that we don't know exactly when an atom will decay, or if it will at all, but for a collection of atoms, we can measure the speed of decay in what's called an element's half-life. It's the time required for the body of atoms to be cut in half. So, to go back to decay time, it's related (as you might expect) again to the size of the nucleus. Generally, isotopes with an atomic number above 101 have a half-life of under a day, and $\mathrm{{}^{294}Og}$ definitely fits that description. 
(The one exception here is dubnium-268.) No elements with atomic numbers above 82 have stable isotopes. Uranium's atomic number is 92, so it is radioactive, but it decays much more slowly than Oganesson for the simple reason that it is smaller. Interestingly enough, for reasons not yet completely understood, there may be a sort of "island" of increased stability around atomic numbers 110 to 114. Oganesson is somewhat close to this island, and its half-life is longer than some predicted values, lending some credibility to the concept. The idea is that elements with a number of nucleons such that they can be arranged into complete shells within the atomic nucleus have a higher average binding energy per nucleon and can therefore be more stable. You can read more about this here and here. Hope this helps!
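The half-life bookkeeping above is just $N(t)/N_0 = 2^{-t/T_{1/2}}$. A quick sketch contrasting the question's ~5 ms figure for $\mathrm{{}^{294}Og}$ with uranium-238's roughly 4.5-billion-year half-life (a standard table value, not from the answer):

```python
# Exponential decay: fraction of a sample surviving after time t is
# N(t) / N0 = 2 ** (-t / half_life).
def remaining(t_seconds, half_life_seconds):
    return 0.5 ** (t_seconds / half_life_seconds)

og = 5e-3                                # ~5 ms, the figure in the question
u238 = 4.5e9 * 365.25 * 24 * 3600        # ~4.5 billion years for U-238

print(remaining(1.0, og))     # essentially zero one second later
print(remaining(1.0, u238))   # essentially all of it remains
```

One second is 200 Oganesson half-lives, so the surviving fraction is $2^{-200}$, which is why such nuclei can only ever be observed one atom at a time.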
{ "source": [ "https://physics.stackexchange.com/questions/312406", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/145619/" ] }
312,478
When I pour hot water (near boiling) and cold water ($5 \unicode{x2103}$) from a height on a platform, there is a distinct difference in the sound that is generated. I feel that hot water splashing has a lower frequency than cold water splashing. What can be the possible reason behind this? Edit 1: I used a tea kettle to heat the water and dropped it on a marble platform. I did the same experiment with cold (refrigerated) water using the same kettle. Height would be around 1.5m. There's a distinct difference between the sound produced. Edit 2: I guess I won't need to do the experiment as @Deep suggested. Please view the link given by @Porges. Also, I was incorrect in relating the frequencies. Hot water makes higher frequency. Only thing is, how does bubbling make it more shrill?
This is a guess since I have never done the experiment, but the viscosity of water falls by a factor of 5 on heating from 5°C to 100°C . The viscosity is one of the two factors (the other being density) that control the water flow, so it is quite reasonable to suppose that water at 100°C splashes in a noticably different way to water at 5°C. I mentioned above that the density also affects the flow. However the density of water only changes by about 4% over this temperature range . So it seems likely that the change in the viscosity is the main factor.
{ "source": [ "https://physics.stackexchange.com/questions/312478", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/116557/" ] }
312,824
How much does thermal expansion affect neutron stars? Would the loss of temperature cause a neutron star to be more densely packed and thus collapse into a black hole?
No (or at least not much). One of the essential properties of stars that are largely supported by degeneracy pressure is that this pressure is independent of temperature. That is because, although a neutron star may be hot, it has such a small heat capacity that it contains very little thermal energy$^{*}$. When a neutron star forms, it cools extremely rapidly by the emission of neutrinos, on timescales of seconds. During this phase the neutron star does contract a little (by tens of per cent), but by the time its interior has cooled to a billion kelvin, the interior neutrons are degenerate and the contraction is basically halted. It is possible that a (massive) neutron star could make the transition to a black hole before this point. If it does not, then from there the neutron star continues to cool (while actually possessing very little thermal energy, despite its high temperature), but this makes almost no difference to its radius. $^{*}$ In a highly degenerate gas the occupation index of quantum states is unity up to the Fermi energy and zero beyond it. In this idealised case the heat capacity would be zero: no kinetic energy can be extracted from the fermions, since there are no free lower-energy states. In practice, at finite temperatures, there are fermions $\sim kT$ above the Fermi energy that can fall into the few free states $\sim kT$ below the Fermi energy. However, the fraction of fermions able to do so is only $\sim kT/E_F$, where $E_F$ is the kinetic energy of fermions at the Fermi energy. At typical neutron star densities this fraction is of order $T/10^{12}\ {\rm K}$, so it is very small once a neutron star cools (within seconds) below $10^{10}$ K. What this means is that the heat capacity is extremely small, and that while the neutrons in a neutron star contain an enormous reservoir of kinetic energy (thus providing a pressure), almost none of it can be extracted as heat.
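The $kT/E_F$ estimate can be put in numbers. Assuming a neutron Fermi kinetic energy of order 100 MeV (a rough, assumed scale for neutron-star densities, chosen to match the $T/10^{12}\ {\rm K}$ figure above):

```python
# Order-of-magnitude estimate of the fraction kT / E_F of neutrons
# able to participate thermally.  E_F is an assumed ~100 MeV scale.
k_B = 8.617e-11   # Boltzmann constant in MeV per kelvin
E_F = 100.0       # assumed neutron Fermi kinetic energy, in MeV

for T in (1e12, 1e10, 1e8):
    print(T, k_B * T / E_F)   # drops to ~1e-2 and then ~1e-4 as the star cools
```

So by the time the interior reaches $10^{10}$ K, only about one neutron in a hundred sits close enough to the Fermi surface to exchange heat, which is why the heat capacity is so tiny.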
{ "source": [ "https://physics.stackexchange.com/questions/312824", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/105751/" ] }
313,134
A neutron is a neutral particle that is only slightly more massive than a proton. What makes it so unstable outside the nucleus that it has a half-life of only about 12 minutes?
How long is long? So "half life only of about 12 min" is actually really a strange idea to most of your readers. 12 minutes is a very long time, atomically speaking! Like, the charged pions have a half-life of 18 nanoseconds, the uncharged one is 58 nano-nanoseconds (attoseconds). You might say "well those are mesons, not baryons like the proton and neutron," but actually the first new baryon ever discovered, the $\Lambda^0$, had a half-life of 0.18 ns and this was considered so strange (in the sense of being so much longer than expected!) that the newly discovered particle was thought to have a quality called strangeness and this eventually became the name of the relevant quark; it is still today called the "strange quark." The mass difference The neutron decays to the proton for a simple reason: a proton is made of two ups and a down, a neutron is made of two downs and an up, and the down quark is intrinsically more massive than the up quark. Now there is a subtlety: the vast majority of the proton's and neutron's masses comes from their strong-force binding energy via $E=mc^2,$ which is why they have basically the exact same mass when fully assembled, a little over 930 MeV. (An electron volt, or eV, is the amount of energy that an electron gains when it goes through one volt of potential difference; it corresponds to a certain mass after dividing by $c^2.$) But the up quarks in these particles are about 2 MeV lighter than down quarks are (we actually don't know the real masses 100%, but the story seems to be about right), and the point is that this ~2 MeV gap is big enough that even after creating an electron (0.5 MeV) and neutrino and accounting for the greater electromagnetic self-repulsion, the proton is still 1.3 MeV lighter overall. Lighter means lower-energy, which means the total energy is spread out more across the universe, and in some sense we're talking about entropy and statistics again. 
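The mass bookkeeping in this paragraph can be checked directly against the measured particle masses (a quick sketch; the mass values below are standard rounded figures that I have filled in, not numbers from the answer itself):

```python
# Measured rest masses in MeV/c^2 (standard rounded values)
m_neutron  = 939.565
m_proton   = 938.272
m_electron = 0.511

# The neutron-proton gap: the roughly 1.3 MeV quoted above
gap = m_neutron - m_proton

# Energy left over after paying for the electron (the antineutrino is ~massless):
# the Q-value of free-neutron beta decay, n -> p + e- + anti-nu
Q = gap - m_electron

print(f"m_n - m_p = {gap:.3f} MeV")
print(f"Q-value   = {Q:.3f} MeV  (> 0, so the decay is energetically allowed)")
```

A positive Q-value of about 0.78 MeV is shared between the electron and the antineutrino, which is why the decay can and does proceed for a free neutron.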
You might wonder why this argument doesn't go one step further, to a particle with three ups. This particle exists and is called the $\Delta^{++}.$ However, this fact that "most of the mass is binding energy" comes back to bite us, because some of that binding energy, it turns out, lives in the spin configuration of the quarks that make up the nucleon. This comes down to the "Pauli exclusion principle": a down and an up-quark, being different particles, can be in "the same state" but two up-quarks must be "in different states". In the details, this exclusion principle takes the form that the up/down "flavor" configuration and spin configuration must either both be symmetric or antisymmetric, since the color-charge state is antisymmetric and the overall state must be antisymmetric. Well the up-up-up state of the $\Delta^{++}$ and down-down-down state of the $\Delta^-$ can't help but be symmetric; so the spin-state must be symmetric too, and the spin-symmetric state has a higher energy than the spin-antisymmetric state by 200-300 MeV. By contrast there are two (1u,2d) and (2u, 1d) configurations, the ones that are flavor-antisymmetric and spin-antisymmetric have total spin 1/2 and are the proton and neutron; the ones that are flavor-symmetric and spin-symmetric have total spin 3/2 and are the $\Delta^+$ and $\Delta^0.$ Anyway the point is that the extra energy which needs to be bound in this state to keep the extra spin in the system is very high, so that's why you don't see these particles in nature. Quantum tunneling So neutrons are a higher energy-state than protons, and quantum mechanics says that if there ever is a lower-energy state, and there is any process which can transfer energy out, then eventually the system will come to be in that lower-energy state. 
But, this could take a while if the transfer-process requires more energy than the system has, in which case quantum mechanics has to "tunnel" through the higher-energy state which takes some time due to time-energy uncertainty. That's what makes this process take so long for neutrons; the only pathway involves creating a $W^-$ boson which eventually decays into an electron and an antineutrino, but the boson in the middle has a very large mass -- 80,000 MeV or so -- and there is therefore nowhere near enough mass to create one of these. QM has to tunnel through this $W$-boson state. How does the presence of other nucleons stabilize neutrons? On the flip side, when these baryons are within a nucleus, the attraction of the different baryons can create a force which "holds together" neutrons, in the sense that the decay of a neutron would increase the energy of the whole, formed nucleus. This actually occurs by the exact same mechanism that makes that $\Delta^{++}$ baryon cost energy, that Pauli exclusion. So if you have dealt with atoms you know that two uncharged atoms will still "stick" to each other by the van der Waals forces, which just have to do with "even though the total charge is 0, there is still some charge-distribution structure here, which matters a lot at short distances." The nucleons within atoms actually have a very similar property even though the color charge is more complicated than the electric charge. Basically, these protons and neutrons are being held internally together with these gluons into color-charge-neutral particles; but they can still "stick" to each other through the strong force, generally by exchanging virtual pions. The pions are mesons: combinations of a quark and an antiquark with opposite color charges, so they end up being color-neutral as well. 
In this case the up-antidown meson is called $\pi^+$ while the down-antiup meson is called $\pi^-$ and there are two very short-lived $\pi^0$ mesons between them, up-antiup and down-antidown. These were predicted by Yukawa a long time before we knew anything about quarks: they were, in fact, our first jump down the rabbit hole! But anyway, there are these short-lived pions that "stick" protons and neutrons together at short ranges. Now Pauli exclusion comes in and says "hey, these protons and neutrons are also identical spin-1/2 particles, so I demand that they be in different states." This picture is much more like the electron-shell model of the atom: there are some energy "shells" for the protons and an almost-identical set of shell levels for the neutrons: the proton levels are a little higher in energy because the electromagnetic force says that like charges repel. Imagine these are laid out side by side, left column is protons, right column is neutrons. If a neutron wants to become a proton by emitting an electron and antineutrino, it may need to pay an extra "cost" if there is no corresponding proton state to the left; and those levels also see a non-negligible splitting based on spin due to a strong spin-orbit interaction. In fact these effects are already enough to keep a neutron together in the case of deuterium, one proton bound to one neutron by these pions. Add one more neutron, and this becomes weakly unstable tritium with a half-life of 12 years; add one more neutron and the result is severely unstable. Actually there is a balance here where the energy gain from being able to "drop" down several energy shells can drive a nucleus with too few neutrons and too many protons to emit a positron (an anti-electron) in reverse-beta decay, turning a proton into a neutron in order to "drop" a few shells down in energy.
Those nuclei are very useful in medicine, because the positron then usually annihilates with an electron to produce two gamma rays going in opposite directions, and detecting these gamma rays is how the PET scanner works. So you say "drink this positron-emitting fluid!" and then you can map out with the PET scanner where all of these atoms have gone in the body.
{ "source": [ "https://physics.stackexchange.com/questions/313134", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/145764/" ] }
313,149
I have a basic knowledge of several fields of physics. I aspire to study further in electromagnetism and quantum mechanics. I have fundamental knowledge of calculus. Are Feynman's lectures appropriate for me? I mean, do they contain advanced mathematical concepts?
{ "source": [ "https://physics.stackexchange.com/questions/313149", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/145852/" ] }
313,422
Whenever we take a photograph of something moving at a considerably high speed, its image appears fuzzy/smudgy/distorted due to motion blur. Why doesn't this happen in the case of Earth photographs from space, taking into account the fact that the Earth rotates at a speed of $\approx 1675 \,\text{km/hour}$? EDIT: tfb's lovely answer provides useful insight into the problem and helps solve it using some simple trigonometry and mathematical manipulation. Diracology and macgyver_sc have also provided "Non-Mathematical" (or at least Non-Highly-Mathematical) explanations for the posed problem. Be sure to check these three answers! A BIG THANK YOU to all of you who up-voted, supported and answered my question. It is just my second question on this site. Hope you enjoy! ^_^
[Caveat for this answer: it (both parts) is almost literally a transcript of a back-of-the-envelope calculation: there may be mistakes.] The calculation for a distant camera not co-rotating with the Earth A 50mm lens on 35mm film has about a 40 degree angle of view. Let's assume we're pointing that lens at the Earth so the Earth fills this angle of view, we are looking down at the equator, and the camera is not co-rotating with the Earth. Rather than do any complicated sums we'll assume that the end points on a line drawn through the centre of the planet and ending at the surface subtend 40 degrees to the camera. If we assume the radius of the earth is $R$ and the angle of view of the lens is $2\theta$, this gives us $$B = \frac{R}{\tan\theta}$$ where $B$ is the distance from the camera to the centre of the earth. From this we get $$b = R\left(\frac{1}{\tan\theta} - 1\right)$$ where $b$ is the distance from the point on the surface of the Earth directly under the camera to the camera. Now we want to calculate the angular velocity of this point with respect to the camera, $\omega_C$, in terms of $\omega_E$, the angular velocity of the Earth. Well, we can do this by equating the distance it moves in terms of $\omega_C$ to that it moves in terms of $\omega_E$ in some short time $\delta t$: $$\omega_C \delta t b = \omega_E \delta t R$$ or $$\omega_C = \frac{\omega_E R}{b}$$ or $$\omega_C = \frac{\omega_E}{\frac{1}{\tan\theta} - 1}$$ So, we know $\omega_E$ and $\theta$, and so we know $\omega_C$. The next thing we want to know is the angular size of a pixel for the camera. If there are $N$ pixels across the field of view, then at the centre of the field of view a pixel subtends an angle of about $(2\tan\theta)/N$ (I might have this wrong). 
So, now, finally, the time for a point on the surface of the Earth directly under the camera to move across one pixel is $$\frac{\left(\frac{2\tan\theta}{N}\right)}{\left(\frac{\omega_E}{\frac{1}{\tan\theta} - 1}\right)} = \frac{2-2\tan\theta}{N\omega_E}$$ So, OK, plug in $\theta=\pi/9$, $N=5000$ and $\omega_E=2\pi/(3600\times 24)$, and we get about 3.5 seconds (note I previously had both the expression here wrong (I had $\omega_E = 2\pi/3600$) and also the result was hopelessly wrong for some reason). So, in other words, it takes a point on the equator of the Earth about 3.5 seconds to move a single pixel across the image for a camera with a 25M pixel sensor and with a normal lens, taking a picture such that the Earth fills the entire picture, if the camera is not co-rotating with the Earth. A typical exposure might be a couple of milliseconds. This is why the Earth does not seem to be blurred when viewed like this. It's worth noting, as pointed out by Jibb Smart in a comment, that the radius of the earth vanishes above: the parameters which control the motion blur are $\omega_E$, the angular velocity of the Earth, $\theta$, half the angle of view of the camera and $N$, the number of pixels, or equivalently, the resolution of the image if that is dominated by some other factor such as the lens. So this result applies to a photograph of any spherical, rotating object (it would need to be corrected for very wide angles of view as my assumption that you can see the ends of a line through the planet becomes seriously wrong in that case: fixing this is just a matter of doing slightly more correct trigonometry though, I was just lazy). The calculation for low Earth orbit Errol Hunt pointed out in a comment that a more plausible case is to consider a camera on a satellite in LEO, so let's do that. We know that satellites in LEO orbit the Earth in about 90 minutes.
This means that we can just ignore the Earth's rotation to a good first approximation, so we'll do that. For a light object in a circular orbit about a point mass at a distance $r$, the speed of the object is given by $$v = \sqrt{\frac{G M}{r}}$$ The Earth is well-approximated by a point mass because of Newton's shell theorem, so for a satellite a distance $h$ above the Earth we have $$v = \sqrt{\frac{G M}{R + h}}$$ Where $G$ is Newton's gravitational constant, $M$ is the mass of the Earth, & $R$ is its radius. If the satellite is looking down at the Earth directly below it, then in time $\delta t$ it sees the surface move by $v\delta t$. Assuming that $\delta t$ is sufficiently small, then the image will move by an angle $$\delta\theta \approx \frac{v\delta t}{h}$$ So again, we want to know how long $\delta t$ can be for this to be the same as a pixel at the centre of the image. From above this means that $$\frac{2 \tan\theta}{N} = \frac{v\delta t}{h}$$ (where now $\theta$ is half the angle of view again, sorry), and so $$\delta t = \frac{2 h\tan\theta}{Nv}$$ Or, plugging in $v$ in terms of $h$: $$\delta t = \frac{2h\sqrt{h + R}\tan\theta}{N\sqrt{GM}}$$ And, once more, we can plug in $\theta = \pi/9$, $N=5000$ and, say $h=200\,\mathrm{km}$ (this is a very low orbit: things only get better as we go up) as well as standard values for $G$, $M$ & $R$ and we get $\delta t \approx 4\times 10^{-3}\,\mathrm{s}$: about $1/250\,\mathrm{s}$ in other words. This is a completely reasonable exposure time for any reasonably modern sensor (or film!) looking down at the Earth. Again, this is why the Earth is not blurred when we take pictures of it from space.
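Both back-of-the-envelope results can be reproduced directly from the formulas above (a sketch; the numerical values for $G$, $M$ and $R$ are standard figures I have filled in):

```python
import math

# Shared camera parameters from the text
theta = math.pi / 9          # half the angle of view (20 degrees)
N = 5000                     # pixels across the field of view

# --- Distant camera, not co-rotating with the Earth ---
omega_E = 2 * math.pi / 86400          # Earth's angular velocity, rad/s
t_distant = (2 - 2 * math.tan(theta)) / (N * omega_E)

# --- Camera in low Earth orbit ---
G = 6.674e-11                # m^3 kg^-1 s^-2
M = 5.972e24                 # kg, mass of the Earth
R = 6.371e6                  # m, radius of the Earth
h = 200e3                    # m, a very low orbit
t_leo = 2 * h * math.sqrt(h + R) * math.tan(theta) / (N * math.sqrt(G * M))

print(f"distant camera: {t_distant:.2f} s per pixel")
print(f"LEO camera:     {t_leo * 1e3:.2f} ms per pixel")
```

This gives roughly 3.5 s per pixel for the distant camera and a few milliseconds per pixel in LEO, matching the two estimates in the answer.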
{ "source": [ "https://physics.stackexchange.com/questions/313422", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
313,673
I've been taught that in a simple pendulum, for small $x$, $\sin x \approx x$. We then derive the formula for the time period of the pendulum. But I still don't understand the physics behind it. Also, there's no angle $x$ involved in a spring-mass system, so why do we consider it an SHM only for small amplitudes?
A simple pendulum does not strictly show simple harmonic motion unless you allow some approximations and uncertainties. It approximately behaves as a harmonic oscillator for small amplitudes. An object is said to be executing simple harmonic motion (no damping; not a forced oscillation) if and only if it satisfies the following condition: $$\frac{d^2 \phi}{dt^2} = -\omega^2 \phi \tag{1}$$ where $\phi$ is a variable quantity such as displacement, angular displacement, etc. Does a pendulum execute simple harmonic motion? The equation of motion for the pendulum can be written as: $$\vec{F} = m\vec{g} + \vec{T}$$ We know that the pendulum bob will move in a circle (assume that the string does not stretch); therefore, there is no motion in the direction of the string. This means that the radial component of the net force provides the centripetal force: $$F_{radial} = T - mg\cos \theta = \frac{mv^2}{L}$$ The tangential component of the force can be written as: $$F_{tangential} = ma = mg \sin \theta$$ $$a_{tangential} = a = g \sin \theta \tag{2}$$ The tangential acceleration can be expressed in terms of the angle $\theta$ as follows: $$v = L \frac{d\theta}{dt}$$ $$\frac{dv}{dt} = a = -L\frac{d^2\theta}{dt^2} \tag{3}$$ We have a minus sign because the gravitational force (acceleration) always tries to decrease the angle $\theta$. Substituting $(3)$ in $(2)$, you get $$L\frac{d^2\theta}{dt^2} = -g \sin \theta \tag{4}$$ If you compare equation $(4)$ with equation $(1)$, you'll notice that it does not match. This would mean that the pendulum bob does not execute simple harmonic motion. However, if the amplitude is small, then the maximum value of $\theta$ is small. The small-angle approximation can be stated as follows: $$\sin \theta \approx \theta$$ Image Source: Wikipedia Using the approximation, you can rewrite equation $(4)$ as $$L\frac{d^2\theta}{dt^2} = -g\theta \tag{5}$$ The above equation looks quite similar to equation $(1)$.
It matches equation $(1)$ exactly, with $\omega^2 = g/L$. Therefore, for small amplitudes, the pendulum executes simple harmonic motion to a good approximation. Does a spring-mass system execute simple harmonic motion? If the spring obeys Hooke's law, then it always executes simple harmonic motion. Hooke's law states that: $$F_{restoring} = ma = -kx \tag{6}$$ It is clearly evident from the above equation that the acceleration is directly proportional to the displacement and acts in the direction opposite to the displacement. Why do we limit the amplitude of a spring-mass system? Under high strain, the spring does not obey Hooke's law. This is kinda obvious: if you stretch a spring too much, it deforms permanently. Therefore, equation $(6)$ no longer holds, and the mass won't execute simple harmonic motion.
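One can check numerically how good the small-angle approximation is by integrating equation $(4)$ and comparing the resulting period with the SHM prediction $T_0 = 2\pi\sqrt{L/g}$. The sketch below is my own (the RK4 integrator and the specific amplitudes are choices, not part of the answer above); it exploits the fact that, released from rest, the bob takes a quarter period to reach $\theta = 0$:

```python
import math

def pendulum_period(theta0, g=9.81, L=1.0, dt=1e-5):
    """Period of theta'' = -(g/L) sin(theta), released from rest at theta0,
    found by RK4-integrating a quarter swing down to theta = 0."""
    def f(state):
        th, om = state
        return (om, -(g / L) * math.sin(th))
    th, om, t = theta0, 0.0, 0.0
    while th > 0.0:
        th_prev, t_prev = th, t
        k1 = f((th, om))
        k2 = f((th + 0.5 * dt * k1[0], om + 0.5 * dt * k1[1]))
        k3 = f((th + 0.5 * dt * k2[0], om + 0.5 * dt * k2[1]))
        k4 = f((th + dt * k3[0], om + dt * k3[1]))
        th += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        om += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += dt
    # linear interpolation for the zero crossing, then T = 4 * (quarter period)
    t_zero = t_prev + dt * th_prev / (th_prev - th)
    return 4.0 * t_zero

T0 = 2 * math.pi * math.sqrt(1.0 / 9.81)     # SHM prediction for L = 1 m
for deg in (5, 20, 60):
    T = pendulum_period(math.radians(deg))
    print(f"amplitude {deg:3d} deg:  T/T0 = {T / T0:.4f}")
```

At a 5-degree amplitude the true period agrees with the SHM value to better than a tenth of a per cent, while at 60 degrees it is already about 7% longer, which is exactly why the SHM treatment is restricted to small amplitudes.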
{ "source": [ "https://physics.stackexchange.com/questions/313673", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/134959/" ] }
313,758
I always feel unsure about the definitions of phase and phase transition. First, let's discuss within Landau's paradigm. For example, some people say that a phase is classified by symmetry. Some people say that a phase is classified by an order parameter and that a phase transition is when there is some discontinuity in the free energy. Does this mean that gas and liquid are the same phase? Because in the phase diagram they are connected and they have the same symmetries (translations and rotations). If they are not the same phase, what should we call the state at large pressure and large temperature? Liquid or gas? Does this mean that above the critical point the transition from gas to liquid is not a phase transition, but below the critical point the transition from gas to liquid is a phase transition? If the answers to my first and second questions are "yes", does this mean that even in the same phase there can still be a phase transition? This conclusion is so weird! In Landau's paradigm, what's the symmetry breaking and order parameter in the gas-liquid phase transition? It seems that the symmetry is the same in gas and liquid. The gas-liquid phase transition must be explainable by Landau's paradigm, but Landau's paradigm says that there must be symmetry breaking in a phase transition. There is an answer. I admit that from the modern point of view a phase transition is not necessarily due to symmetry breaking, but I don't think that the gas-liquid transition is beyond Landau's paradigm. Up to now we have only talked about classical phase transitions. If we consider the general paradigm, we know that symmetry breaking must imply a phase transition, but a phase transition doesn't imply symmetry breaking. For example, in the $Z_2$ gauge Ising model, we can prove that there is no symmetry breaking and the local magnetization is always zero. But we can choose the Wilson loop as an order parameter and find that there are confined and deconfined phases.
So suppose that, given one phase, we first find that the symmetry is the same throughout the phase and then check that several other order parameters are also the same throughout. How do you prove that there is no weird order parameter that is zero in one part of this phase and nonzero in another part? For example, in a solid phase of water with a single crystal structure, how can one prove that no order parameter you could construct is zero in one part of the phase and nonzero in the other?
Does this mean that gas and liquid are the same phases? Because in the phase diagram they are connected and they have the same symmetry(translation and rotation). If they are not the same phase, how to call the state in large pressure and large temperature? Liquid or gas? Yes. From the modern point of view, liquid and gas are in the same phase. Because, as the asker has mentioned, they are continuously connected in the phase diagram through the "supercritical" regime. By definition, two states of matter are in the same phase if they can be smoothly deformed to each other without going through phase transitions . Historically, liquid and gas are named as different phases (by mistake) because people thought that "there must be one different phase at each side of the phase transition" (as argued in Diracology's answer). But this idea is wrong. We can not declare different phases just by observing phase transitions. Otherwise, for example, in the following phase diagram on the left, we could have declared states A and B to be in different phases, simply because they are separated by phase transitions, as we can first go out of the blue phase and then reenter it. This way of separating phases is clearly stupid. Any reasonable person would agree that A and B should belong to the same phase in this case. Now we just deform the left phase diagram to the right by squeezing the intermediate red phase to a first-order transition line, then why we suddenly get confused about whether A and B are in the same phase or not? Definitely, they should still remain in the same phase! The liquid-gas transition is indeed a situation like this. So a logically consistent definition will have to define liquid and gas as a single phase. Does this mean that above the critical point the transition from gas to liquid is not a phase transition, but below the critical point, the transition from gas to liquid is a phase transition? Yes. 
If answers to my first and second question are "yes", does this mean even in the same phase there still can have phase transition? This conclusion is so weird! Given the example of the above phase diagrams, one will not feel weird about the fact that there can be (first-order) phase transitions inside a single phase. In fact, first-order transitions often appear by merging two second-order transitions together (this can be explained by Landau's theory). So going across a first-order transition is like going out of the phase and back again immediately, which definitely can happen inside a single phase. However, I am not aware of any example that continuous phase transitions can also happen within a single phase. So I conjecture that if a phase transition happens inside a phase, it must be first-order . (The conjecture is falsified by the recent discovery of "unnecessary" quantum criticalities in Bi, Senthil 2018 , Jian, Xu 2019 , Verresen, Bibo, Pollmann 2021 . -- Edited 2021) The liquid-gas transition is one example of my conjecture. From the Landau's paradigm, what's the symmetry breaking and order parameter in the gas-liquid phase transition? It seems that the symmetry is same in gas and liquid. Gas-liquid phase transition must be able to be explained by Landau's paradigm but Landau's paradigm says that there must be symmetry breaking in phase transition. There is an answer. I admit that from a modern point of view phase transition is not necessary due to symmetry breaking, but I don't think that gas-liquid transition has been beyond Landau's paradigm. No symmetry breaking is associated with the liquid-gas transition. Landau's paradigm only says that there must be spontaneous symmetry breaking in second-order transitions, but not in first-order transitions. In fact, nothing can be said about first-order transitions, because first-order transitions can happen anywhere in the phase diagram without any reason. The liquid-gas transition is indeed a case like this. 
Even though the liquid-gas transition is not a symmetry-breaking transition, it can still be described within Landau's paradigm phenomenologically (who says that Landau's theory only applies to symmetry-breaking transitions?). We can introduce the density $\rho$ of the fluid as the order parameter. Because no symmetry acts on this order parameter, there is no symmetry reason to forbid odd-order terms like $\rho, \rho^3, \cdots$ from appearing in Landau's free energy. However, the first-order term can always be absorbed by redefining the order parameter with a shift $\rho\to\rho+\rho_0$, so the Landau free energy takes the general form $$F= F_0 + a \rho^2 + b \rho^3+ c \rho^4+\cdots.$$ A first-order transition happens by driving the parameter $a$ below $9b^2/(32c)$. From this example, we can see that (within Landau's paradigm) if a phase transition happens without breaking any symmetry, it must be first-order. Again, the liquid-gas transition is one such example. So, given one phase, we first find that the symmetry is the same throughout the phase and then check that several order parameters are also the same. However, how do you prove that one cannot construct some weird order parameter that is zero in one part of the phase and nonzero in another? For example, in a solid phase of water with a given crystal structure, how do you prove that no order parameter you can construct will be zero in one part of the phase and nonzero in another? Indeed, you can never rule out the possibility that some weird order parameter is hiding there to further divide the phase into more phases. That is actually why the solid phase of water is divided into so many different crystal phases. Each crystal structure is associated with a different symmetry-breaking pattern. Sometimes the symmetries are just so complicated that you may miss one or two of them if you are not careful enough.
In that case, you will also miss the order parameters associated with the missing symmetry, until you see a specific heat anomaly in the experiment where you did not expect one; then you start to realize that, oh, there is a missing order parameter that actually changes across this transition, and one needs to add some additional symmetry to explain it. This is actually the typical way that physicists work every day: they never figure out the full classification of phases until they see the evidence for new phases and phase transitions. I think this is also the fun part of condensed matter physics: there are always new phases of matter waiting for us to discover.
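The Landau free energy written above can be explored with a few lines of code. The following sketch is my illustration, not part of the original answer; the coefficients $b=-1$, $c=1$ are arbitrary choices. It minimizes $F(\rho)=a\rho^2+b\rho^3+c\rho^4$ on a grid and shows the minimizing density jumping discontinuously as $a$ is tuned, which is exactly a first-order transition with no symmetry breaking:

```python
import numpy as np

def landau_minimum(a, b=-1.0, c=1.0):
    """Density rho minimizing F = a*rho^2 + b*rho^3 + c*rho^4 (grid search)."""
    rho = np.linspace(-0.5, 1.5, 200001)
    F = a * rho**2 + b * rho**3 + c * rho**4
    return rho[np.argmin(F)]

# Sweep the tuning parameter a downward through the transition region.
order_param = {a: landau_minimum(a) for a in (0.4, 0.3, 0.2, 0.1)}
```

With these (arbitrary) coefficients the minimizer sits at $\rho=0$ for $a=0.4$ and $a=0.3$ but jumps to a finite value somewhere between $a=0.3$ and $a=0.2$, with no continuous growth from zero: the signature of a first-order transition.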
{ "source": [ "https://physics.stackexchange.com/questions/313758", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/34669/" ] }
313,923
I have a lot of solid objects that were exposed to the sun for many years and "obviously" they changed their color. I write obviously because I know it empirically and from other people, but what is actually happening in the material? Are the photons from the sun knocking electrons or what is going on at an atomic level? Or is the heat accelerating the oxidation process? Can anyone help me to understand and visualize this phenomenon?
The term "solid objects" is rather vague, so I will try to break it down into categories and answer each one. But right at the start I can say that the colour change observed in materials is mostly a result of chemical changes induced by UV light from the sun. By chemical changes I mean three main mechanisms: breaking of chemical bonds, formation of radicals, or light acting as a catalyst for certain reactions. The general field that can answer all questions of this sort is called photochemistry. Metals: Sunlight alone cannot do much to pure metals. Of course, if the metal is painted (like most of the metals we see in daily life), the pigments in the paint can degrade (change their chemical structure) under sunlight, which can cause a colour change. Humans: (assuming that humans are "solid objects") Most humans change colour in the sunlight as a protection mechanism of the body. UV light burns the skin, so human skin exposed to the sun produces pigments to absorb the light and protect the skin from burning. The produced pigments result in a colour change. Plastics: Plastics are among the materials most drastically affected by sunlight. They may not only change colour but also lose other material properties, such as elasticity, due to the breaking of chemical bonds in polymers by UV light. The effect of sunlight on plastics can be so drastic that it can even raise health concerns. Most beverages kept in plastic bottles carry a sticker saying "avoid direct sunlight", because the chemical reactions induced by UV light can cause toxic components (or photochemical reaction products) to be released into the liquid. Paper: Paper is made from wood, which mainly consists of cellulose. Cellulose is colourless in principle but, because of its opacity, looks white.
Apparently, cellulose in paper can degrade through different chemical reactions (oxidation and interactions with acids in the paper) and change colour. It has also been found that cellulose absorbs UV light and therefore degrades. However, the change in the chemical structure of cellulose is only one of the reasons for colour change in paper. Different types of paper contain different impurities (the result of less effort in purification), one of the most common being lignin, another constituent of wood. Lignin gives newspaper its brownish colour and is more prone to degradation than cellulose, which is why brown paper changes colour faster than white paper. A side note about fabric: Since cellulose is also the main component of cotton, which is used a lot in textile production, one can also expect cotton clothes to be affected by sunlight. EDITs: After seeing that my answer was appreciated by the community, I decided to do a little research about the reason for colour change in paper and added my findings to my answer. I found out that there is at least one scientist who devoted a significant amount of his time to understanding the colour change in paper. Here is one of his main publications. I also found out that there is an answer on Chemistry Stack Exchange to a similar question.
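A rough way to see why it is specifically the UV part of sunlight that breaks chemical bonds is to compare photon energies with a typical bond energy. The snippet below is a back-of-the-envelope sketch; the 3.6 eV carbon-carbon bond energy is an approximate textbook value I am adding for illustration, not something from the answer above:

```python
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    """Photon energy E = h*c/lambda, expressed in electronvolts."""
    return H * C / (wavelength_nm * 1e-9) / EV

uv = photon_energy_ev(300)      # near-UV component of sunlight
green = photon_energy_ev(550)   # middle of the visible range
CC_BOND_EV = 3.6                # rough C-C single-bond energy, eV
```

A 300 nm UV photon carries about 4.1 eV, above the roughly 3.6 eV needed to break a typical C-C bond, while a 550 nm visible photon (about 2.3 eV) falls short. This is consistent with UV, rather than visible light, driving photodegradation.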
{ "source": [ "https://physics.stackexchange.com/questions/313923", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/85515/" ] }
313,936
I know that gloves (or socks, or clothing, anything) doesn't really make you warmer—instead, it makes heat loss slower, by improving insulation. I was wondering, whether the converse is true. Let us assume I am outside and both of my hands get equally cold, and then I walk into a warm room indoors (uniform temperature in the room and outside, for simplicity's sake). Would my hands get warmer if I had gloves on, or gloves off? I'm thinking I should take the gloves off, to maximise the heat transfer between the warm air and cold hands, but I was hoping someone else would confirm/refute this.
{ "source": [ "https://physics.stackexchange.com/questions/313936", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/146331/" ] }
314,783
I mean, how does the co-frame field $e^I(x)$ differ from the coordinate basis for co-vectors, $dx^\mu$? How should one interpret them? Does it mean that for coordinate basis vectors we can find the integral curves?
Integral curves of non-coordinate (anholonomic) basis vectors also exist; they just don't form a coordinate system. This might be a bit difficult to swallow, but the heart of the issue is that in a coordinate system, the coordinates are independent. Here's a direct example: Consider polar coordinates in $\mathbb{R}^2$. These are given by $$ x=r\cos\varphi \\ y=r\sin\varphi. $$ The coordinate basis vectors are $$ \partial_r=\cos\varphi\partial_x+\sin\varphi\partial_y \\ \partial_\varphi=-r\sin\varphi\partial_x+r\cos\varphi\partial_y. $$ These are orthogonal, but not orthonormal. We can also normalize these vectors and get $$ \hat{e}_r=\partial_r=\cos\varphi\partial_x+\sin\varphi\partial_y \\ \hat{e}_\varphi=\frac{1}{r}\partial_\varphi=-\sin\varphi\partial_x+\cos\varphi\partial_y. $$ The first set is holonomic, the second isn't; you can calculate $[\hat{e}_r,\hat{e}_\varphi]$ to ascertain this. To interpret what it means for the integral curves of the anholonomic set not to form coordinates, consider that for the holonomic polar coordinate vectors, the $\partial_\varphi$ vector is longer the further you are away from the origin. This is expected: $\varphi$ is an angular coordinate, and the same angular displacement corresponds to larger and larger actual displacements the further you are away from the origin. Consider now the integral curves of the set $\hat{e}_r,\hat{e}_\varphi$ instead. It is visually clear that the "paths" corresponding to the integral curves are the same, but the parametrization of the $\varphi$-curves is different. The vector field $\hat{e}_\varphi$ has the same length everywhere, so all "$\hat{\varphi}$" curves have the same velocity. Imagine you are at the point $(r_0,\varphi_0)$. You move a parameter $\bar{\varphi}$ along the integral curves of $\hat{e}_\varphi$.
Since the integral curves are path-length parametrized (the length of $\hat{e}_\varphi$ is 1 after all), you move a distance of $\bar{\varphi}$, and now you are at $(r_0,\varphi_0+\frac{1}{r_0}\bar{\varphi})$ (remember that I am measuring points in the original "holonomic" coordinates and that the real displacement corresponding to a coordinate displacement $\varphi$ is $\bar{\varphi}=r_0\varphi$, since we are at radius $r_0$). Now we move radially to $2r_0$, so our coordinates are $(2r_0,\varphi_0+\frac{1}{r_0}\bar{\varphi})$. After this, we move along the integral curves of $\hat{e}_\varphi$ a parameter value of $-\bar{\varphi}$. This is, once again, "actual displacement", and its value in $\varphi$-coordinates is $-\frac{1}{2r_0}\bar{\varphi}$, since we are now at radius $2r_0$. Our new position is $(2r_0,\varphi_0+\frac{1}{r_0}\bar{\varphi}-\frac{1}{2r_0}\bar{\varphi})=(2r_0,\varphi_0+\frac{1}{2r_0}\bar{\varphi})$. Now we move back radially from $2r_0$ to $r_0$, and end up at $(r_0,\varphi_0+\frac{1}{2r_0}\bar{\varphi})$. What we notice is that we did not do a loop at all: we had a net displacement of $\frac{1}{2r_0}\bar{\varphi}$ in the angular direction. Now, if there actually did exist an $(r,\hat{\varphi})$ coordinate system attached to the basis $(\hat{e}_r,\hat{e}_\varphi)$, then in this coordinate system our path would have been $$ (r_0,\hat{\varphi}_0)\mapsto (r_0,\hat{\varphi}_0+\bar{\varphi})\mapsto (2r_0,\hat{\varphi}_0+\bar{\varphi}) \mapsto (2r_0,\hat{\varphi}_0+\bar{\varphi}-\bar{\varphi})=(2r_0,\hat{\varphi}_0)\mapsto (r_0,\hat{\varphi}_0), $$ so we would have made a "coordinate loop", yet we would have had a net displacement. The "anholonomic coordinates" are not unambiguous, because after doing a finite loop, we ended up at a different point, yet both points have the same coordinates. Clearly a coordinate system could never work this way.
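The loop in the argument above can be replayed numerically. The flows of $\hat{e}_r$ and $\hat{e}_\varphi$ are known in closed form (a radial translation and a constant-arclength rotation, respectively), so the following sketch simply composes them and checks the net angular displacement $\frac{1}{2r_0}\bar{\varphi}$ derived above (the numerical values are arbitrary):

```python
def flow_r(state, t):
    """Exact flow of the unit radial field e_r: move a distance t radially."""
    r, phi = state
    return (r + t, phi)

def flow_phi(state, t):
    """Exact flow of the unit angular field e_phi: arclength t along the circle of radius r."""
    r, phi = state
    return (r, phi + t / r)

r0, phi0, s = 1.0, 0.0, 0.3   # starting point and loop parameter (arbitrary values)
p = (r0, phi0)
p = flow_phi(p, s)      # forward along e_phi by parameter s
p = flow_r(p, r0)       # radially out to 2*r0
p = flow_phi(p, -s)     # back along e_phi by the same parameter
p = flow_r(p, -r0)      # radially back in to r0
r_end, phi_end = p
holonomy = phi_end - phi0   # the loop fails to close by s/(2*r0)
```

Running the same four steps with the holonomic fields $\partial_r,\partial_\varphi$ would close exactly, since their flows just add constants to the $r$ and $\varphi$ coordinates.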
{ "source": [ "https://physics.stackexchange.com/questions/314783", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
314,885
Consider the following statement: Hadron Epoch, from $10^{-6}$ seconds to $1$ second: The temperature of the universe cools to about a trillion degrees, cool enough to allow quarks to combine to form hadrons (like protons and neutrons). What does it mean to say "from $10^{-6}$ seconds to $1$ second" ? How is time being measured? One particle might feel just $10^{-20}\ \mathrm s$ having passed and another could feel $10^{-10}\ \mathrm s$ having passed. Is saying "1 second after the big bang" a meaningful statement?
We know that time passes differently for different observers, and the question is how can a time be given without telling which frame it is in. The answer is that there's a preferred reference frame in cosmology, the comoving frame , because of the fact that there's matter and radiation in it. Intuitively, the special frame is the one that's "static" with respect to this matter and radiation content. More precisely, it is the one in which all observers that see an isotropic universe are static. Time measured in this system is called comoving time. The time from the beginning of the universe is usually given in this way, as a comoving time. To get some intuition about the comoving frame one might consider the comoving observers, the ones that see isotropy and therefore have constant comoving coordinates. A comoving observer is such that when it looks around and adds the motion of the objects it sees zero net motion. For example, we can look at the cosmic microwave background and detect some variation in the redshift depending on the direction. It's caused by Doppler effect and it means that we have some velocity relative to the comoving frame. On the other hand, a comoving observer sees the same redshift in any direction. Another example: we can choose to measure the distances and velocities of galaxies. By Hubble's law , we expect the velocity to be proportional to the distance. If we find a deviation from this behavior, we know that the galaxy is moving with respect to the comoving frame, and thus has a peculiar velocity (we also have a peculiar velocity). If all galaxies had constant comoving coordinates, we would see perfect agreement with Hubble's law: the relative motions of galaxies would be due only to the expansion of the universe.
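The Hubble-law test described above can be sketched with toy numbers. In the snippet below the galaxies, distances, and the 500 km/s peculiar velocity are all invented for illustration, and $H_0 = 70$ km/s/Mpc is just a convenient round value:

```python
import numpy as np

H0 = 70.0  # Hubble constant, km/s per Mpc (illustrative round value)

distance = np.array([10.0, 50.0, 100.0, 200.0])   # Mpc
peculiar = np.array([0.0, 0.0, 500.0, 0.0])       # km/s, injected by hand
observed = H0 * distance + peculiar               # what we would measure

# Fit the Hubble slope from the data, then look at the residuals:
H_fit = np.sum(observed * distance) / np.sum(distance**2)
residual = observed - H_fit * distance
```

The galaxy with a peculiar velocity stands out as the largest residual from the fitted Hubble flow, while the comoving galaxies sit close to the line.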
{ "source": [ "https://physics.stackexchange.com/questions/314885", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/60891/" ] }
315,049
I'm very confused about why it is a consequence of special relativity.
Let's forget about anything quantitative at all. Special relativity gives you length contraction -- so, when you're moving at a certain speed, distances along your direction of motion are compressed. Amongst many other things, this means that volumes will shrink, which also means that densities will increase. Now, electromagnetism tells us that the electric force is proportional to the charge density. So, naïvely, we'd expect the electric force on a test particle external to the charge distribution to be higher in a boosted reference frame. This, however, contradicts the central assumption of special relativity that the net force on an object doesn't depend on the speed of the reference frame. So, you need some new force that isn't present in the stationary reference frame. Well, in the boosted frame, the compressed charges are moving, so there is a current, so you could perhaps cancel out your excess force with some force that depends on the current distribution. If you work this out, it turns out that magnetism exactly does the trick, and if you factor in both electricity and magnetism, then the net force on the particle does not depend on whether you are in a stationary or moving reference frame.
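The cancellation can be checked with numbers in the standard line-charge example. The setup below is my illustrative choice, sketching the argument above: an infinite line charge at rest with a test charge at rest a distance $d$ away, viewed from a frame moving along the line at speed $v$; all numerical values are arbitrary. In the boosted frame the contracted charge density makes the electric force larger by $\gamma$, but the line is now also a current, and the magnetic force on the (now moving) test charge brings the net transverse force back to the relativistically correct $F/\gamma$:

```python
import math

EPS0 = 8.854e-12             # vacuum permittivity
C = 2.998e8                  # speed of light
MU0 = 1.0 / (EPS0 * C**2)    # vacuum permeability, via c^2 = 1/(mu0*eps0)

lam, d, q = 1e-9, 0.05, 1e-6     # line charge density, distance, test charge (arbitrary)
v = 0.6 * C
gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)

F_rest = q * lam / (2 * math.pi * EPS0 * d)            # force in the rest frame

lam_boost = gamma * lam                                # length contraction: denser line
I = lam_boost * v                                      # the moving line is a current
F_electric = q * lam_boost / (2 * math.pi * EPS0 * d)  # larger by gamma: too big
F_magnetic = q * v * MU0 * I / (2 * math.pi * d)       # attractive, opposes F_electric
F_net = F_electric - F_magnetic
```

The electric force alone overshoots by $\gamma$; including magnetism gives $F_\text{net} = F_\text{rest}/\gamma$, exactly the transverse-force transformation special relativity demands.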
{ "source": [ "https://physics.stackexchange.com/questions/315049", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/143860/" ] }
315,101
We know that a soft iron bar placed inside a solenoid turns into an electromagnet when current passes through the solenoid. Now my question is how is electricity produced? Is the iron core moved by the turbine inside a solenoid? Because I know that a moving magnetic field creates electric field. Please help me on this.
{ "source": [ "https://physics.stackexchange.com/questions/315101", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/127822/" ] }
315,104
Entropy as it is explained on this site is a Lorentz invariant. But, we can define it as a measure of information hidden from an observer in a physical system. In that sense, is entropy a relative quantity depending on the computation, measurement and storage capacity of the observer?
E.T. Jaynes agrees with you, and luckily he is a good guy to have on your side: From this we see that entropy is an anthropomorphic concept, not only in the well-known statistical sense that it measures the extent of human ignorance as to the microstate. Even at the purely phenomenological level, entropy is an anthropomorphic concept. For it is a property, not of the physical system, but of the particular experiments you or I choose to perform on it. This is a quote from his short article "Gibbs vs Boltzmann Entropies" (1965), which is a great article on the concept of entropy in general, but for this discussion in particular you can turn to section VI, "The 'Anthropomorphic' Nature of Entropy". I will not try to paraphrase him here, because I believe he already expressed himself there as succinctly and clearly as possible. (Note it's only one page). I was trying to find another article of his, but I couldn't trace it at the moment. [EDIT: thanks to Nathaniel for finding it.] There he gave a nice example which I can try to paraphrase here: Imagine having a box which is partitioned into two equally large sections. Suppose each half has the same number of balls, and they all look a dull grey to you, all bouncing around at the same velocity. If you now remove the partition, you don't see much happen actually. Indeed: if you re-insert the partition, it pretty much looks like the same system you started with. You would say: there has been no entropy increase. However, imagine it turns out you were color blind, and a friend of yours could actually see that in the original situation, the left half of the box had only blue balls, and the right half only red balls. Upon removing the partition, he would see the colors mix irreversibly. Upon re-inserting the partition, the system is certainly not back in its original configuration. He would say the entropy has increased. (Indeed, he would count a $\log 2$ for every ball.) Who is right? Did entropy increase or not?
Both are right. As Jaynes nicely argues in the above reference, entropy is not a mechanical property, it is only a thermodynamic property. And a given mechanical system can have many different thermodynamic descriptions. These depend on what one can --or chooses to-- measure. Indeed: if you live in a universe where there are no people and/or machines that can distinguish red from blue, there would really be no sense in saying the entropy has increased in the above process. Moreover, suppose you were color blind, arrived at the conclusion that the entropy did not increase, and then someone came along with a machine that was able to tell apart red and blue. This person could then extract work from the initial configuration, which you thought had maximal entropy, and hence you would conclude that this machine can extract work from a maximal entropy system, violating the second law. The conclusion would just be that your assumption was wrong: in your calculation of the entropy, you presumed that whatever you did you could not tell apart red and blue on a macroscopic level. This machine then violated your assumption. Hence using the 'correct' entropy is a matter of context, and it depends on what kind of operations you can perform. There is nothing problematic with this. In fact, it is the only consistent approach.
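The two observers' entropy bookkeeping can be made concrete with a small counting model (a coarse-grained toy of my own, not Jaynes's: each half of the box is split into $M$ cells, and $S = N\log(\text{cells}) - \log N!$ counts configurations of $N$ identical balls). The color-seeing observer treats red and blue as two species; the color-blind observer sees one:

```python
import math

def S(n_balls, cells):
    """Entropy (k_B = 1) of n_balls identical balls spread over `cells` cells."""
    return n_balls * math.log(cells) - math.lgamma(n_balls + 1)

N, M = 500, 10**6   # balls per half and cells per half, both illustrative

# Color-seeing observer: two species, each doubling its accessible volume.
dS_color = 2 * (S(N, 2 * M) - S(N, M))     # exactly 2*N*log(2): "a log 2 per ball"

# Color-blind observer: one grey species before and after removing the partition.
dS_blind = S(2 * N, 2 * M) - 2 * S(N, M)
```

The color-seeing observer gets exactly $\log 2$ per ball, while the color-blind observer's change is only a sub-extensive Stirling remainder (of order $\log N$, not $N$), i.e. effectively zero at macroscopic scale. Both bookkeepings are internally consistent, as the answer argues.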
{ "source": [ "https://physics.stackexchange.com/questions/315104", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/110669/" ] }
315,109
For a simple fermionic system the formula for calculating the density of states (DOS) is $N(E) = \sum_{n}\delta(E-E_{n})$ where $\{E_{n}\}$ is the set of eigenvalues obtained after diagonalizing the Hamiltonian. Now, to diagonalize a Hamiltonian with pair correlation terms ( $\sum_{k}c_{k\uparrow}^{\dagger}c_{-k\downarrow}^{\dagger}$ ), the Bogoliubov transformation ( $c_{k\uparrow}=u_{k}\gamma_{k\uparrow}-v_{k}^{\ast}\gamma_{-k\downarrow}^{\dagger}; c_{-k\downarrow}^{\dagger}=v_k\gamma_{k\uparrow}+u_{k}^{\ast}\gamma_{-k\downarrow}^{\dagger}$ ) is used. After diagonalizing we get a set of eigenvalues of the form $\{E_n,-E_n\}\forall n$ . Now, to find the density of states I found a formula like this: $N(E)=\sum_{k}|u_k|^2\delta(E-E_k)+|v_k|^2\delta(E+E_k)$ where $\{E_k\}$ is the set of positive eigenvalues only. I don't understand this particular formula for the density of states of Bogoliubov quasiparticles. If anyone can explain it, that would be very helpful.
{ "source": [ "https://physics.stackexchange.com/questions/315109", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/67820/" ] }
315,765
My professor insists that weight is a scalar. I sent him an email explaining why it's a vector, I even sent him a source from NASA clearly labeling weight as a vector. Every other source also identifies weight as a vector. I said that weight is a force, with mass times the magnitude of gravitational acceleration as the scalar quantity and a downward direction. His response, "Weight has no direction, i.e., it is a scalar!!!" My thought process is that since weight is a force, and since force is a vector, weight has to be a vector. This is the basic transitive property of equality. Am I and all of these other sources wrong about weight being a vector? Is weight sometimes a vector and sometimes a scalar? After reading thoroughly through his lecture notes, I discovered his reasoning behind his claim: Similarly to how speed is the scalar quantity (or magnitude) of velocity, weight is the scalar quantity (or magnitude) of the gravitational force a celestial body exerts on mass. I'm still inclined to think of weight as a vector for convenience and to separate it from everyday language. However, like one of the comments stated, "Definitions serve us."
On Earth, the weight of a body is defined as the force with which the body is attracted by the Earth towards its center. Weight can thus be considered the same as the gravitational force exerted by the Earth on that body. Hence, weight can be deemed a vector since it is a force, irrespective of the planet you consider. $$\vec W=m\vec g=\frac{GMm}{r^2}\hat r$$ As mentioned in the comments, since $\vec g$ always has the same direction (towards the center of the planet concerned), it might be(?) considered a scalar. That's what your prof is doing. But strictly speaking, weight is a vector. Hope this helps you.
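The formula above is easy to evaluate as an actual vector. The sketch below uses standard values for $G$ and the Earth's mass and radius; here $\hat r$ points outward, so the weight carries a minus sign to point toward the centre. It recovers both the familiar magnitude $g \approx 9.8\ \text{m/s}^2$ and the downward direction:

```python
import numpy as np

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m, mean radius

def weight(mass, position):
    """Weight vector (N) of `mass` at Cartesian `position`, metres from Earth's centre."""
    r = np.asarray(position, dtype=float)
    r_mag = np.linalg.norm(r)
    return -G * M_EARTH * mass / r_mag**2 * (r / r_mag)  # points toward the centre

W = weight(70.0, [0.0, 0.0, R_EARTH])   # a 70 kg person on the surface, on the z-axis
g = np.linalg.norm(W) / 70.0            # magnitude per unit mass
```

The magnitude alone is the scalar the professor has in mind; the full vector, with its direction toward the centre, is the weight.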
{ "source": [ "https://physics.stackexchange.com/questions/315765", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/142751/" ] }
316,135
I've read that plane waves can be represented in various forms, like sine or cosine curves, etc. What is the role of the imaginary unit $i$ when plane waves are represented in the form $$f(x) = Ae^{i (kx - \omega t)},$$ using complex exponentials?
It doesn't really play a role (in a way), or at least not as far as physical results go. Whenever someone says we consider a plane wave of the form $f(x) = Ae^{i(kx-\omega t)}$ , what they are really saying is something like we consider an oscillatory function of the form $f_\mathrm{re}(x) = |A|\cos(kx-\omega t +\varphi)$ , but: we can represent that in the form $f_\mathrm{re}(x) = \mathrm{Re}(A e^{i(kx-\omega t)})=\frac12(A e^{i(kx-\omega t)}+A^* e^{-i(kx-\omega t)})$ , because of Euler's formula ; everything that follows in our analysis works equally well for the two components $A e^{i(kx-\omega t)}$ and $A^* e^{-i(kx-\omega t)}$ ; everything in our analysis is linear, so it will automatically work for sums like the sum of $A e^{i(kx-\omega t)}$ and its conjugate in $f_\mathrm{re}(x)$ ; plus, everything is just really, really damn convenient if we use complex exponentials, compared to the trigonometric hoop-jumping we'd need to do if we kept the explicit cosines; so, in fact, we're just going to pretend that the real quantity of interest is $f(x) = Ae^{i(kx-\omega t)}$ , in the understanding that you obtain the physical results by taking the real part (i.e. adding the conjugate and dividing by two) once everything is done; and, actually, we might even forget to take the real part at the end, because it's boring, but we'll trust you to keep it in the back of your mind that it's only the real part that physically matters. This looks a bit like the authors are trying to cheat you, or at least like they are abusing the notation, but in practice it works really well, and using exponentials really does save you a lot of pain. That said, if you are careful with your writing it's plenty possible to avoid implying that $f(x) = Ae^{i(kx-\omega t)}$ is a physical quantity, but many authors are pretty lazy and they are not as careful with those distinctions as they might. 
(As an important caveat, though: this answer applies to quantities which must be real to make physical sense. It does not apply to quantum-mechanical wavefunctions, which must be complex-valued, and where saying $\Psi(x,t) = e^{i(kx-\omega t)}$ really does specify a complex-valued wavefunction.)
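A quick numerical sketch of the equivalence in point 1 above, using arbitrary example values for the amplitude, wavenumber, frequency and time:

```python
import numpy as np

# Check that Re(A e^{i(kx - wt)}) equals |A| cos(kx - wt + phi),
# where phi is the phase of the complex amplitude A.
phi = 0.7
A = 2.0 * np.exp(1j * phi)      # complex amplitude: |A| = 2, phase 0.7
k, w, t = 3.0, 5.0, 0.4         # arbitrary wavenumber, frequency, time
x = np.linspace(0.0, 2.0, 201)

complex_form = np.real(A * np.exp(1j * (k * x - w * t)))
real_form = np.abs(A) * np.cos(k * x - w * t + phi)

print(np.allclose(complex_form, real_form))  # -> True
```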
{ "source": [ "https://physics.stackexchange.com/questions/316135", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/147381/" ] }
316,444
When I was going to my school with my ID card hanging around my neck, it started doing oscillations like a pendulum. I was moving forward and it was oscillating left to right and right to left. What forces are at play here?
As humans we oscillate left and right when we walk because we have two legs. You can get a resonance when the length of the cord is such that your pace matches the period of the swing. (Like pushing a child on a swing a little higher each time they approach you.) Whilst walking we also oscillate up and down - this can also contribute to driving the resonance.
{ "source": [ "https://physics.stackexchange.com/questions/316444", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/138221/" ] }
316,780
We had a little discussion in the physics class. We were talking about resistance, and she said that when a wire is heated up, the resistance also increases; but I think that the resistance decreases because when something is heated up the electrons also gain energy, enabling them to move with lower resistance. So what is the correct approach and solution to this problem?
Either one can be true depending on the material. In metals, the electrons don't need any additional energy to move, so the main effect of temperature is to cause the atoms to vibrate more, which interferes with the motion of the electrons, increasing the resistance. On the other hand, in a semiconductor, the electrons do need to gain some non-zero amount of energy before they can start moving at all. In this case, raising the temperature does decrease the resistance for the reason you state. On wikipedia it says : Near room temperature, the resistivity of metals typically increases as temperature is increased, while the resistivity of semiconductors typically decreases as temperature is increased. The resistivity of insulators and electrolytes may increase or decrease depending on the system. You can read more about these effects on wikipedia here and here .
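The two opposite trends can be sketched with toy models: resistivity rising roughly linearly with temperature for a metal, and falling exponentially for an intrinsic semiconductor as more carriers are excited. The coefficients below are made-up illustrative numbers, not measured material data:

```python
import math

# Toy models of resistivity vs temperature (arbitrary units).
def rho_metal(T, rho0=1.0, alpha=4e-3, T0=293.0):
    # Metal: resistivity grows roughly linearly with temperature.
    return rho0 * (1 + alpha * (T - T0))

def rho_semiconductor(T, rho0=1.0, Eg_over_2k=7000.0, T0=293.0):
    # Intrinsic semiconductor: resistivity falls exponentially as
    # thermal energy excites more carriers across the gap.
    return rho0 * math.exp(Eg_over_2k * (1.0 / T - 1.0 / T0))

for T in (293.0, 350.0):
    print(f"T = {T:.0f} K: metal {rho_metal(T):.3f}, "
          f"semiconductor {rho_semiconductor(T):.3f}")
```

Heating from 293 K to 350 K makes the "metal" more resistive and the "semiconductor" much less so, matching the two cases described above.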
{ "source": [ "https://physics.stackexchange.com/questions/316780", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
316,785
How much water is transported (volume) as a wave travels into a sea cave? The wave has a height of 1 m and a period of 12 seconds. The average water depth at the cave mouth is 5 m and the width is 15 m. How do I calculate this? This is a real world problem and if there is any other information needed for this problem I will happily collect it. This isn't a homework problem. This is a real cave and I assumed water is transported because it does "pile up" in the back of the cave until the wave is reflected and exits the cave in the opposite direction. Just as on beaches where waves push water up onto the beach and gravity pulls the water back to the ocean, creating the near-shore current. I understand that deep water waves have an almost circular orbit but due to Stokes drift a particle will be slightly displaced in the direction the swell is moving. In this cave we can assume the wave is well within the shallow water wave criteria and almost at the critical depth at which the wave deformation is too extreme and wave energy is about to be converted into turbulent kinetic energy as the wave breaks. The orbit therefore would be linear, as the wave moves in and out. The net change would be zero (0 m³), but how much water flows in and then flows back out of the cave?
{ "source": [ "https://physics.stackexchange.com/questions/316785", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/147675/" ] }
317,359
It is said that in a spaceship, you need to spend as much energy to brake as you spent for accelerating. An electric car, however, charges its batteries while braking, thus it actually recovers energy by braking. Both facts somehow seem intuitive to me, but aren't these two observations contradicting each other? Addendum Looking at the answers, I realize the question might not have been clear enough. So let me pose the question in a different way: Do you absolutely need an outside object moving at a different speed (the road for a car, slamming into an atmosphere as a space ship) to convert kinetic energy into another form? What is the fundamental principle?
The main point is that the space-ship is a closed system and the car is not Consider that to conserve momentum we need to give something else the momentum our decelerating object had before. In the case of the space-ship this requires ejecting something in the opposite direction to the direction of travel. We need to put energy in to do this. In the case of the car we have been connected to the road the whole time and because of this friction we need to continuously provide energy in order not to decelerate. So our wheels are turning and because of the connection with the road friction will decelerate us, what electric cars do is to add an extra resistive force to the turning of the wheels (which is needed to keep going) and make use of the energy gained from this. So because space-ships don't require any further thrust to maintain a constant speed we have no process to steal the energy from. If you could provide a resistive force on the space-ship you could regain some of the energy but it would have to be outside the space-ship (a magnetic field emitted from a series of space stations for example). You have to be moving relative to something else which you can impart energy to.
{ "source": [ "https://physics.stackexchange.com/questions/317359", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/62489/" ] }
317,624
I'm arguing with a friend of mine on whether the light emitted from the sun is of the same type of that emitted by a bulb. Her insistent ignorance is laughable, unless I'm wrong... She's talking about how light from the bulb is "artificial" ... I've tried explaining that that makes no sense, and the only difference between the light is the way they're produced, and the intensities across their spectra. The bulb will emit minuscule amounts of varying wave-lengths, but with the intensity focused around visible light, right? The Sun will produce higher intensities of different types of wave-lengths, right? Her counter-argument is that light bulbs don't inflict harm (via harmful radiation, like the sun).
She's right that there's a difference, and you are right that it's all just electromagnetic waves! The key to this is that there is no such thing as "white light" when you really get down to it. Each light emits a range of wavelengths of light. If they have a sufficiently even distribution of wavelengths, we tend to call that light "white," but we can only use that term informally. Both the sun and the light bulb emit so-called "Blackbody radiation." This is the particular spectrum of light that's associated with the random thermal emissions of a hot object. Cool objects tend to emit more of their energy in the longer wavelengths like reds and IRs, while hotter objects emit more energy in the shorter wavelengths like blues and UV. (Note, there are other possible emission spectra, but those are associated with different materials doing the emissions and, for the purposes of this discussion, they aren't too important. We can just claim the emissions are all blackbody) If you notice, as you get hotter, a larger portion of the energy is emitted in the blue, violet, and ultraviolet. That's how you get a sunburn from the sun. It's harder to get a sunburn from an artificial light, not because it's artificial, but because those lights are almost always cooler than the sun. They don't have as much UV content. Instead, they have more red and yellow, which incidentally is why pictures taken indoors look very yellow. If you use a strobe, however, all those yellow hues go away because a strobe light is very warm, with lots of blues. You can get a sunburn from artificial light, of course. Tanning beds are the obvious example, but there are other interesting ones. When you're a jeweler working in platinum, for instance, you need to wear UV protective gear (like glasses or even sunscreen). Platinum's melting point is so hot that it actually emits quite a lot of UV light and can give you a sunburn! 
Other than these spectra, there is nothing different between light from an artificial source and light from the sun. Photons are photons.
{ "source": [ "https://physics.stackexchange.com/questions/317624", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/98739/" ] }
318,286
The Large Hadron Collider, at low power, accelerates particles such that much of the total energy provided goes towards increasing their kinetic energy and their masses increase to some extent as well, and so Newton's equations are valid for this situation. However, when it's turned onto high power, as the particles tend to the speed of light, any additional power provided by the accelerator increases mostly the masses of the particles and their kinetic energy increases only slightly. That is why these high energy particles can pack quite a punch. It's sort of like taking a car and propelling it such that it transforms into a massive freight train. This is a very stupid question, but I bet I'll get some really remarkable answers. If the power output of the Large Hadron Collider were infinite (or at least a very big number), and notwithstanding a failure or limitation of the mechanical and engineering aspects of the machinery, would it eventually "explode" if the power is turned up too high?
In the case of the LHC, yes, the beam can do quite a lot of damage. At full power there is something like 350 MJ stored in the beam - close to a freight train, or roughly the kinetic energy of a full jumbo jet at take-off. There is a very complex safety system to dump the beam safely, eventually steering the beam energy into a large block of graphite inside a much larger cooled block of metal inside a very big block of concrete. Without this, any instability in the beam could allow it to hit the wall of the vacuum tube, where it would slice through it and then through the magnets like a hot knife through butter, or indeed like a high intensity beam of relativistic protons through superconducting magnets, which is more impressive and a lot more expensive. edit: detail of the beam dump below. Sorry, I was obviously half-remembering a talk on the quench protection heaters. http://lhc-machine-outreach.web.cern.ch/lhc-machine-outreach/components/beam-dump.htm
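The ~350 MJ figure can be checked on the back of an envelope from the nominal fill parameters (roughly 2808 bunches of about 1.15e11 protons per beam, each proton at 7 TeV; treat these as approximate):

```python
# Rough estimate of the stored energy in one LHC proton beam,
# assuming nominal design fill parameters.
eV = 1.602176634e-19             # joules per electronvolt
n_protons = 2808 * 1.15e11       # bunches * protons per bunch
E_beam = n_protons * 7e12 * eV   # total beam energy in joules
print(f"{E_beam / 1e6:.0f} MJ")  # -> 362 MJ, consistent with ~350 MJ
```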
{ "source": [ "https://physics.stackexchange.com/questions/318286", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/26071/" ] }
318,398
Gauss's law states that $\int_S \vec B\cdot d\vec S=0$. But law of induction states that $\xi=-\frac {d\phi}{dt}$, where $\phi=\int_S \vec B\cdot d\vec S$. So if Gauss's law was to be correct there should be no induction at all, because then $\phi$ would be zero through every loop.
The definition of magnetic flux is $$\Phi = \int_S d\vec{A}\cdot\vec{B},$$ where the integral is not over a closed surface in general. Gauss' Law requires that the integral is over a closed surface, and so there is no contradiction. In particular, look at any basic discussion of Faraday's Law. They always look at simple loops or coils of wire. There are clearly not closed surfaces, and so the definition of flux can't involve a closed surface in these cases. Without a closed surface it's easy to think of cases where the field gives nonzero flux.
{ "source": [ "https://physics.stackexchange.com/questions/318398", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/148447/" ] }
319,040
My mother came back from a market which bags the products in paper-bags with handles, and asked me to move the bags from the trunk of the car to the house. Being the lazy human I am, I hung a few bags on each arm so I could cut the number of trips back and forth. As I was walking to the front door, the handles of a bag tore, the bag plummeting to the concrete ground. A glass jar of peppers had been smashed into a zillion little pieces. As you might expect, my mother was furious. "You're so lazy! If you hadn't hung so many on your arm, the peppers and their jar would still be intact!" I disagree, here's why... Scenario: [diagram] Lazy Scenario: [diagram] Peppers: [diagram] Conclusion: Note that Bag A will have $N_A$ on it regardless of Bag B's existence. Sure, my arm had $N_A + N_B$ ($> N_A$), but it wasn't the thing that broke. So, I conclude, that the tearing of the bag was inevitable, and that the peppers' fates were written by someone other than me (e.g. the manufacturer didn't put enough glue to handle the expected weight, the cashier put in more weight than permitted, etc.). Is my reasoning correct? Or am I missing something that proves that I'm guilty?
I think you are guilty. The shop assistant (or your mother) was able to load the bags into the car without causing the handles to break. They probably did not try to carry many bags at the same time. If you hang the bags from a rod with sufficient spacing between them so that each handle hangs vertically, then the handles all bear only the weight of the bag's contents. However, I think you probably held the bags in each hand rather than hung them from your extended arm (which would require enormous effort) or from a pole (which is unlikely to have been handy, and you are too lazy to look for one). When the bags hang from the same point the tension $T$ in the handles of the outer bags is higher than the weight $W$ of the bag, because of the large angle $\theta$ which the handle makes with the vertical. The vertical force $T\cos\theta$ provided by the handle must equal the weight $W$ of the bag's contents; the horizontal force $T\sin\theta$ is balanced by contact forces $N$ between the bags. If the filled bags are wide, the handles of the outer bags will be at a large angle $\theta$ to the vertical, requiring a large force $T=\frac{W}{\cos\theta}$. This force tends towards infinity as the handle becomes horizontal $(\theta \to 90^{\circ})$. The outermost handles are much more likely to break than the innermost handle $(\theta = 0^{\circ})$. Edit 1 Scenario #1 in RowanC's answer can be analysed in the same way. Assuming that the upper part of the bags have a trapezoidal shape, they spread out in an arc, with the middle bag supporting some of the weight of the outer bags. Balancing forces on all 3 bags we get $T_2+2T_1\cos\theta=3W$. Balancing forces on the outer bags we get $2T_1(1-\cos^2\alpha)=W$ since $\theta=180^{\circ}-2\alpha$. Therefore $T_2(1-\cos^2\alpha)=(4-5\cos^2\alpha)W$. If $2(1-\cos^2\alpha) \lt 1$ then $T_1 \gt W$ - the outer bags bear more than their own weight. This happens for $\alpha \lt 45^{\circ}$. 
If $2(4-5\cos^2\alpha) \gt 1$ then $T_2 \gt T_1$ - the middle bag bears more weight than the outer bags. This happens when $\alpha \gt 33.2^{\circ}$. A more thorough analysis could balance the torque on each bag.
{ "source": [ "https://physics.stackexchange.com/questions/319040", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/113809/" ] }
319,123
I have wondered that in an octave in piano there are seven primary notes, and also we observe mostly seven primary colors of a rainbow. I know we perceive logarithmically, that means we only care about relative differences. Is there any relation between $7$ musical notes (in an octave) and $7$ colors of a rainbow? EDIT: I agree that the primary term for the $7$ notes in an octave is more or less the matter of taste. However, if we take the western musical taste as a guide, we can justify ourselves to use $12$ notes in an octave and place piano keys in the present way. Take a look at here .
On the most basic level, the answer is a flat no. The seven primary notes in an octave is specific to the western musical tradition. It's not entirely arbitrary as you say, but there are many other choices that could have been made, and there are other cultures who use fewer notes (e.g. pentatonic scales in blues music) or more (e.g. Indian classical music). The seven colours in the rainbow are also somewhat arbitrary. (Are indigo and violet really different colours? Why don't we count aquamarine, right between green and blue?) Having said that, it does happen to be the case that the range of frequencies we can see is just a little short of an octave, ranging from about 440-770 THz. This is really more or less a coincidence, but because of it, I can point out a relationship between light and colours, just for fun. The A above middle C is defined, for modern instruments, as 440Hz. The A an octave above is 880Hz, and in general if we go $n$ octaves up we get a frequency of $440\times 2^n$. If we go forty octaves up from A we get a note of 483 THz. This can't be played as a sound wave (air can't vibrate at frequencies that are too high) but as an electromagnetic wave it's a slightly reddish orange. If we go down a note to G we get $392\times 2^{40}$ Hz $= 431$ THz, which is just into the infra-red. (It might be possible to see it as a very deep red colour, but I'm not sure.) However, moving up from there we get the following colours: G - 431 THz - infra-red A - 483 THz - orange B - 543 THz - yellow-green C - 576 THz - green D - 646 THz - blue E - 724 THz - indigo F - 768 THz - violet (barely visible) G - 862 THz - ultra-violet (I leave the sharps and flats as an exercise to the reader.) So you can't see G (or F#), but the other notes do actually have colours. However, as I said this is just a bit of fun and does not in any way have any practical implications, since sounds at those frequencies can't be transmitted through air.
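The note-to-colour table above can be reproduced in a few lines, using equal-temperament pitch frequencies (A4 = 440 Hz) shifted up forty octaves:

```python
# Map each note used above to its fundamental frequency in Hz
# (equal temperament, A4 = 440 Hz), then multiply by 2**40 to land
# in the visible-light range, expressed in THz.
notes_hz = {"G": 392.00, "A": 440.00, "B": 493.88,
            "C": 523.25, "D": 587.33, "E": 659.25, "F": 698.46}

def as_light_thz(freq_hz, octaves=40):
    """Shift a pitch up by `octaves` octaves and report it in THz."""
    return freq_hz * 2**octaves / 1e12

for note, f in notes_hz.items():
    print(f"{note}: {as_light_thz(f):.0f} THz")
```

Running this reproduces the listed values: A comes out near 484 THz and G near 431 THz, matching the figures in the answer to rounding.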
{ "source": [ "https://physics.stackexchange.com/questions/319123", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/101968/" ] }
319,819
A while ago it was raining and I noticed that, on sloped pavement, water was flowing in very regular consistent periodic waves, as you see below. However, I realized I had no idea why this should be happening. There was nothing uphill actually creating these waves, and they continued down as far as the pavement went, despite the rain that was falling on them along the way. Why wasn't the water flowing down smoothly, or irregularly? What causes the noticeable wavelike patterns? Is there a name for this phenomenon?
These waves are called "roll waves." They are due to an instability in shallow shear flows. The analysis is much too complex for a short answer, but if you google "Roll Wave" you will find more images and links to technical articles. If you are not bothered by a little mathematics you will find a discussion of the cause of the instability starting on page 259 in these online lecture notes: https://courses.physics.illinois.edu/phys508/fa2016/amaster.pdf After the waves have formed due to the instability, the actual form -- a series of breaking waves -- is due to the non-linear propagation effect described by md2perpe -- the deeper the water the faster the wave.
{ "source": [ "https://physics.stackexchange.com/questions/319819", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/853/" ] }
319,831
I understand the problem, but I am unsure why they use the work done by the spring, instead of the work the glider does on the spring, in the work-energy theorem. The book also makes it sound like you cannot use the work the glider does on the spring in the work-energy theorem, and I am clueless as to why that is so.
{ "source": [ "https://physics.stackexchange.com/questions/319831", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/147609/" ] }
319,911
I know that helium balloons float because it is less dense than air. I'm not expecting my bike to float, although that would be pretty cool. I just wanna know if replacing normal air with helium in the tires will produce a noticeable effect on its weight. Will the helium 'lift'/reduce the weight force on the bike?
It will make it lighter, but the effect will be very small. The volume of the tube is probably less than a liter. One mol of an ideal gas is 23 liters at atmospheric pressure. So you have about 0.2 mol of gas in there at 4 bar pressure. Helium weighs 4 g/mol, nitrogen about 28 g/mol. So for 0.2 mol, the weights are 0.8 g and 5.6 g. Cleaning off the dirt from the frame will have a greater effect. Helium atoms are smaller than nitrogen molecules. Therefore there is a greater rate of diffusion through the bike tires. Your tires will become flat quicker than normal. Therefore it is not really a good idea to use helium.
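A rough sketch of the numbers above via the ideal gas law, assuming a 1-litre inner tube at 4 bar absolute pressure and room temperature:

```python
# Estimate the mass of gas in a bike inner tube and the saving from
# switching to helium (assumed tube volume and pressure as above).
R = 8.314    # J/(mol K), gas constant
P = 4e5      # Pa (4 bar absolute)
V = 1e-3     # m^3 (1 litre)
T = 293.0    # K (room temperature)

n = P * V / (R * T)   # moles of gas in the tube
m_air = n * 29.0      # g, using a mean molar mass of air ~29 g/mol
m_he = n * 4.0        # g, helium at 4 g/mol
print(f"{n:.2f} mol: air {m_air:.1f} g, helium {m_he:.1f} g, "
      f"saving {m_air - m_he:.1f} g")
```

The saving comes out around 4 g, of the same order as the 0.8 g vs 5.6 g comparison above (which used nitrogen rather than air).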
{ "source": [ "https://physics.stackexchange.com/questions/319911", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/130713/" ] }
319,917
I'm working through some past papers and came across the question below and I've shown my ray diagrams as the red and blue lines. For the next question I would think that the image is virtual because light has not actually come from the image. However the mark scheme says that it is because the two light rays don't join up in the plane mirror, but they do join up in my diagram. Is my diagram wrong and if so, why would the two rays of light not joining up indicate that the image is virtual?
{ "source": [ "https://physics.stackexchange.com/questions/319917", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/130268/" ] }
319,933
Wikipedia says - Penrose's theorem is more restricted and only holds when matter obeys a stronger energy condition, called the dominant energy condition, in which the "energy is larger than the pressure". (inverted commas mine) How can we compare two very different physical quantities, pressure and energy? I have thought of some possibilities: 1. The Wikipedia page has a mistake. 2. It's simply that energy is in large amount for the black hole, but pressure is low. 3. Some counterintuitive higher-level physics relation between those two quantities (it's very counterintuitive to me because my physics knowledge is very much limited to A-level physics). But which of the three possibilities is true? Any wise words?
{ "source": [ "https://physics.stackexchange.com/questions/319933", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/106713/" ] }
319,936
When an ideal gas is left alone for a while, the atoms or molecules collide with each other, each time transferring a tiny amount of energy. So, essentially, the particles keep transferring energy and eventually all the atoms should have the same amount of energy. But this is not the case. The atoms settle into the Maxwell-Boltzmann distribution. But why is this so? Isn't the equilibrium state one in which the particles have equal energy? What causes the particles to settle into the Maxwell-Boltzmann distribution?
{ "source": [ "https://physics.stackexchange.com/questions/319936", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/135464/" ] }
320,197
I knew that there were $365.25$ days in a year (ish) but we only have $365$ on calendars, that's why we have February 29. I then learned in class about the sidereal and solar day; sidereal being $23$ hours and $56$ minutes, and solar being $24$. When we say "$365.25$ days" which day are we talking about (sidereal or solar)? My teacher said that the $4$ minutes we gain from the solar day being longer than the sidereal day caused the $0.25$ (ish) more, which causes February 29. I do not see how being $4$ minutes ahead each day already means that we need to add even more time. Surely the $4$ minutes each day, that adds up to $24.3$ hours extra each year, means that we must remove a day every single year, not add one. What does being $4$ minutes ahead/behind mean for the year?
There seems to be some confusion. The number of solar days in a year differs from the number of sidereal days in a year by one - that difference of course being due to the one revolution around the sun per year influencing the solar day. Back to the number of days in a year: barring tidal resonances, there is no reason for the length of a day to be commensurate with the length of a year; it is what it is: 365.2425. I remember this as follows:
365      days in the year
+1/4     a leap year every 4 years
-1/100   except in years ending in "00"
+1/400   unless the year is divisible by 400 (e.g. Y2K)
= 365.2425
so that 2000 was a leap-leap-leap year.
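The mnemonic translates directly into the Gregorian leap-year rule, and averaging over one full 400-year cycle recovers 365.2425:

```python
# The Gregorian leap-year rule: divisible by 4, except centuries,
# unless divisible by 400.
def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Average calendar-year length over a complete 400-year cycle
# (97 leap years per cycle).
days = sum(366 if is_leap(y) else 365 for y in range(2000, 2400))
print(days / 400)  # -> 365.2425
```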
{ "source": [ "https://physics.stackexchange.com/questions/320197", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/143986/" ] }
320,784
I am starting to study physics in detail and as I read about physical quantities, I was puzzled why the mole (amount of substance) is taken as a physical quantity. A physical quantity is any quantity which we can measure and which has a unit associated with it. But a mole represents the amount of substance by telling us the number of particles (atoms, molecules, ions, etc.) present. So it is a pure number, and numbers are dimensionless. So the mole should not be considered a physical quantity. Also, fundamental physical quantities should be independent of each other. I am wondering whether mass and the mole are independent, since they surely affect each other, as we can see when we use the mass of a sample to calculate the number of moles. So how is the mole a fundamental physical quantity, independent of mass?
The mole definitely isn't a fundamental physical quantity. It's just a shorthand for Avogadro's number, to make really big numbers more tractable. It's purely there for convenience, there's nothing fundamentally physically significant about it.
{ "source": [ "https://physics.stackexchange.com/questions/320784", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/120609/" ] }
320,858
According to the Wikipedia article on atomic nucleus , captioned on an impression of helium atom, it states that This depiction shows the particles as separate, whereas in an actual helium atom, the protons are superimposed in space and most likely found at the very center of the nucleus, and the same is true of the two neutrons. Thus, all four particles are most likely found in exactly the same space, at the central point. How is this possible? Does this not violate Pauli's exclusion principle?
This does not violate the exclusion principle because the exclusion principle merely states that there cannot be more than one fermion in the same quantum mechanical state . In the case of two protons and two neutrons, the different particle species don't exclude each other to begin with (because a neutron state is different from a proton state). Furthermore, that they have the same expectation value for position doesn't mean that they are in the same state. States can coincide with their expectation values for some observables but not for others. In this specific case, the states likely differ by their spin (one proton/neutron has "spin up" and the other "spin down").
{ "source": [ "https://physics.stackexchange.com/questions/320858", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/123796/" ] }
320,863
I mean, for example, if Earth is the observer, then there might be entire galaxies travelling faster than the speed of light relative to Earth. According to Einstein's relativity this shouldn't be possible, so I want to know what I should consider as an observer to measure cosmic objects' speeds.
{ "source": [ "https://physics.stackexchange.com/questions/320863", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/149690/" ] }
321,013
I just want to point out that I'm still in highschool and I don't really have any advanced education within physics, however, this is something that has been on my mind. I may just be completely wrong with my knowledge regarding waves as a whole though. From my understanding of waves and objects, waves have a few defining features: being massless and having energy. Objects have mass and energy. So according to Einstein's equation $$E=mc^2$$ mass and energy go hand in hand (sort of). So, why do waves, which have energy, not have mass? I am aware of wave-particle duality, but (now I may be wrong with this as well), to my understanding, that is regarding particles giving off weak waves and not the other way around.
Equation $E=mc^2$ is incomplete. The proper form is (in units with $c=1$) $E=\sqrt{m^2+p^2}$. When an object is at rest then $E=m$ is recovered. But for massless objects $E=p$. So this means that even objects which have no mass can have energy because they have momentum, and waves carry momentum. Massless objects can never be at rest.
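As a quick numeric sanity check of this relation (a sketch only, not part of the original answer, with the factors of $c$ restored so that $E=\sqrt{(mc^2)^2+(pc)^2}$; the particle values are standard textbook numbers):

```python
import math

c = 299_792_458.0  # speed of light in m/s

def energy(m, p):
    """Full energy-momentum relation E = sqrt((m c^2)^2 + (p c)^2) in SI units."""
    return math.sqrt((m * c**2) ** 2 + (p * c) ** 2)

# Massless case: all of the energy comes from momentum, E = p c
p_photon = 1.0e-27  # kg m/s, an arbitrary photon momentum
assert math.isclose(energy(0.0, p_photon), p_photon * c)

# Massive object at rest: E = m c^2 is recovered
m_e = 9.109e-31  # electron mass in kg
assert math.isclose(energy(m_e, 0.0), m_e * c**2)
```

The two asserts mirror the two limits discussed above: a photon's energy is pure momentum, and a resting electron's energy is pure mass.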
{ "source": [ "https://physics.stackexchange.com/questions/321013", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/149755/" ] }
321,540
I have an intuition problem calculating torque using the cross product formula. As for example let the magnitude of the force be 50 lbs and length of the wrench be one foot and you are exerting force in a clockwise motion and the angle you apply the force 60 degrees. This is an example so I can ask my question. Using the right hand rule the torque points perpendicular to the force you are applying to the bolt. In this case since the sine of 60 degrees is about .86 it would be (.86)(50) foot lbs. How can the bolt turn clockwise if the force is concentrated perpendicular to where it needs to turn? The cross product formula demands the torque be perpendicular. Obviously my mistake but I don't see where.
How can the bolt turn clockwise if the force is concentrated perpendicular to where it needs to turn? Because that force is perpendicular to the direction towards the rotation-centre, not to the turning direction. The bolt does indeed turn in the same way as the force pulls it. When you define a torque vector direction, you have a problem. You can't define a vector direction as something that turns around. The direction must be along a straight line. So instead of choosing the torque "turn", we could choose the torque axis as the vector direction. Have a look at this picture: The axis is vertical through the bolt along the two upwards/downwards arrows. If you choose to define the torque vector direction along this axis, everything fits. We just have to remember that choice. Torque is: $$\vec \tau = \vec F \times \vec r$$ The force vector $\vec F$ crossed with the vector towards the rotation-centre $\vec r$ gives the torque vector. The result of this cross product is mathematically a vector pointing vertically upwards, so it fits perfectly with that choice. The torque vector $\vec \tau$ that you get from this calculation has the torque magnitude but the torque-axis direction. As long as you remember this choice - this definition - all is good. Every time you hear "the direction of the torque is horizontal", you know that this is only the axis of the torque; the torque (the turn) is then upright.
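A small numerical sketch of this convention, using the numbers from the question (50 lb, 1 ft wrench, 60°). The coordinate setup is my own assumption: wrench along the x-axis, bolt at the origin, force in the xy-plane:

```python
import numpy as np

# Wrench along +x, bolt at the origin; 50 lb force applied at 60 degrees
# to the handle, in the xy-plane.
F = 50.0 * np.array([np.cos(np.radians(60.0)), np.sin(np.radians(60.0)), 0.0])
r = np.array([-1.0, 0.0, 0.0])  # 1 ft from the hand *towards* the rotation centre

tau = np.cross(F, r)  # the convention used above: tau = F x r

# tau lies along +z -- the bolt's axis -- while its magnitude is the
# familiar r F sin(theta), the "(.86)(50)" from the question.
assert np.allclose(tau[:2], 0.0)
assert np.isclose(np.linalg.norm(tau), 50.0 * np.sin(np.radians(60.0)))
```

The torque vector comes out along the axis of rotation with magnitude about 43.3 ft lb, not in the turning direction itself.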
{ "source": [ "https://physics.stackexchange.com/questions/321540", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
321,551
I'm trying to understand light's refraction properties but I find issue with every explanation I come across. For instance, my book uses as an example a marching band which comes across a muddy terrain. Because they have to keep the same distance in each singular row, every row is going to rotate when it goes through the mud. But, if instead every person didn't have to keep the same distance with the others on his row, when they crossed the mud they'd all just slow down, without deviating their path. So, what's going on with light? Why should photons keep the same distance with one another in their "row", and as a consequence deviate when they change medium? Most certainly, it has something to do with light behaving as a wave, but I still don't understand it on an intuitive level.
{ "source": [ "https://physics.stackexchange.com/questions/321551", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/146629/" ] }
321,552
In almost every book that has a introductory notion on relativity, the author usually says the signature that he uses: $(+---)$ or $(-+++)$. The book I'm reading says: Note that the convention on the metric signature is not unique and in several textbooks it is used the other one; the physics, of course , is left unchanged. Why does the physics not change? They say that physics cannot depend on a special coordinate system and it is quite simple why but it is not completely obvious (to me) that changing signatures will not lead to change the physics of the problem. There is some full explanation on why the physics is left unchanged or there is some study that proved that this is true? In this Physics.SE question the two different conventions are explained. This question is more in the last part of this usual statement.
{ "source": [ "https://physics.stackexchange.com/questions/321552", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
321,658
Will a disc or cylinder (rigid body) executing pure rolling on a rough surface stop, neglecting air drag and other heat losses and rolling friction but not static and kinetic friction? If yes, due to which friction it will stop, static or kinetic and how? Assume surface has no rolling friction.
As Yashas Samaga said, it will not stop on a smooth but frictional surface. It will, however, stop on an actual rough surface (as it does in reality – e.g. a steel marble rolling on a rough stone surface will come to a halt quite quickly, although drag / rolling friction is as low as on a smooth glass plate, where the marble would indeed roll very far). The reason is that a rough surface can in general not be continually tangent to the rolling body. Instead, if the object has rolled over a peak, it will not smoothly traverse the following trough but slightly collide with the next peak. If there's no rolling friction, then the collision will (ideally) be perfectly elastic, i.e. the cylinder will bounce off. When it hits the surface again, the vertical kinetic energy will generally not be fully reclaimed as movement in the original direction. In fact, while it still has some velocity in that direction, it is statistically more likely to clash with yet another opposing front of the profile, thus losing yet more momentum. So, I reckon ideally this would eventually lead to a random-walk kind of motion. In reality, this doesn't happen because the collisions are scarcely sufficiently elastic – actually a good amount of kinetic energy is lost right when the roller hits the next peak.
{ "source": [ "https://physics.stackexchange.com/questions/321658", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/141464/" ] }
321,781
I know the values of the metric tensor is $$\eta =\begin{bmatrix} 1&0&0\\ 0&r^{2}&0\\ 0&0&r^{2}\sin^{2}\left ( \theta \right ) \end{bmatrix},$$ but how is this derived? Also, is the '(Non)Euclidean'-ness of the spacetime geometry of any relevance to this metric tensor value?
That is simply the metric of a Euclidean space, not spacetime, expressed in spherical coordinates. It can serve as the spatial part of a metric in relativity. We have this coordinate transformation: $$ x'^1= x= r\, \sin\theta \,\cos\phi =x^1 \sin(x^2)\cos(x^3) $$ $$x'^2= y= r\, \sin\theta \,\sin\phi =x^1 \sin(x^2)\sin(x^3)$$ $$x'^3= z= r\, \cos\theta = x^1\ \cos(x^2) $$ with $\, x^1=r, \quad x^2=\theta, \quad x^3=\phi \quad$ and $\quad x'^1=x, \quad x'^2=y, \quad x'^3=z$. Now you start from $$ \eta_{ij} = \frac{\partial {x'^1}}{\partial {x^i}} \frac{\partial {x'^1}}{\partial {x^j}} +\frac{\partial {x'^2}}{\partial {x^i}}\frac{\partial x'^2}{\partial x^j} + \frac{\partial {x'^3}}{\partial {x^i}}\frac{\partial x'^3}{\partial x^j} $$ and, doing this for each component, you obtain the result you're looking for. I'll illustrate the case of $\eta_{22}$: $$ \eta_{22}= \frac{\partial {x'^1}}{\partial {x^2}} \frac{\partial {x'^1}}{\partial {x^2}} +\frac{\partial {x'^2}}{\partial {x^2}}\frac{\partial x'^2}{\partial x^2} + \frac{\partial {x'^3}}{\partial {x^2}}\frac{\partial x'^3}{\partial x^2} = \\ \frac{\partial {x}}{\partial {\theta}} \frac{\partial {x}}{\partial {\theta}} +\frac{\partial {y}}{\partial {\theta}}\frac{\partial y}{\partial \theta} + \frac{\partial {z}}{\partial {\theta}}\frac{\partial z}{\partial \theta} = \\ r^2 \cos^2\theta \, \cos^2\phi + r^2 \cos^2\theta \sin^2\phi + r^2 \sin^2\theta = r^2 $$ where use has been made of the well-known relation $\sin^2 \alpha +\cos^2\alpha=1$.
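The same computation can be checked symbolically; here is a sketch with sympy that builds $\eta_{ij}$ from the Jacobian of the coordinate transformation given above:

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)

# Cartesian coordinates x'^k as functions of the spherical ones x^i
X = sp.Matrix([
    r * sp.sin(theta) * sp.cos(phi),
    r * sp.sin(theta) * sp.sin(phi),
    r * sp.cos(theta),
])

J = X.jacobian([r, theta, phi])  # J[k, i] = d x'^k / d x^i
eta = sp.simplify(J.T * J)       # eta_ij = sum_k (d x'^k/d x^i)(d x'^k/d x^j)

# Reproduces diag(1, r^2, r^2 sin^2(theta))
assert sp.simplify(eta - sp.diag(1, r**2, r**2 * sp.sin(theta)**2)) == sp.zeros(3)
```

The product $J^T J$ is exactly the sum over $k$ written out above, so the single assert covers all nine components at once.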
{ "source": [ "https://physics.stackexchange.com/questions/321781", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/143860/" ] }
321,811
It is common place (e.g. here ), for Lattice QCD calculations to be computed using reference masses (such as the pion mass) which are greater that the physical values of those quantities. Sometimes, multiple calculations are done at various heavier values so as to extrapolate down to the physical value. The problem with this is that QCD is not entirely scale independent, even thought the QCD coupling constant is dimensionless. For example, I've seen a credible claim that a bound dineutron state is stable at sufficiently higher quark masses than are measured experimentally (also here ), even though bound dineutron states are not stable at physical masses. I presume that Lattice QCD uses greater than physical masses because it is harder to do the calculations at the physical masses than at the heavier than physical masses, but I have trouble understanding why this should be so mathematically. Could someone please explain the reason that Lattice QCD calculations are routinely done at greater than physical masses, rather than at physical masses?
Lattice QCD calculations involve computing the inverse of the Dirac operator $\gamma\cdot D+m$. The difficulty of inverting an operator is controlled by its smallest eigenvalues, and computing the inverse of the Dirac operator becomes harder as $am\to 0$. The exact scaling of the computational cost depends on the algorithms. It was once feared that realistic simulations with physical quark masses would be prohibitively expensive, but after some algorithmic improvements things look much better. Currently $$ {\rm cost} \sim \left(\frac{1}{m}\right)^{(1-2)} \left(\frac{1}{a}\right)^{(4-6)} \left(L\right)^{(4-5)}, $$ and simulations with physical masses are expensive, but doable. There is an additional problem with $m\to 0$, which is related to the fact that finite volume effects are controlled by $m_\pi L$ and $m_\pi^2\sim m_q$. This problem is not severe for physical masses, because $m_\pi^{-1}\sim 1.4$ fm is not that large. Regarding the neutron mass: $m_n-m_p$ is controlled by the difference of the quark masses, compared to the electromagnetic self energy of the proton. If you let both masses (up and down) go to zero, then the neutron will eventually be lighter than the proton (and therefore stable). There is a magic range of quark masses for which $|m_n-m_p|<m_e$, so that both the neutron and proton are stable. And the di-neutron: In the real world the deuteron is a shallow bound state, and the dineutron is just barely unbound. I don't think that this is a theorem, but intuition and numerical evidence suggest that the dineutron would become bound for heavier quark masses. Finally, as noted by David, (multi) nucleon calculations suffer from a noise problem that is controlled by the quark mass. The signal-to-noise ratio in an $A$ nucleon correlator scales as $\exp(-A(m_N-3m_\pi/2)\tau)$, where $\tau$ is the time separation at which the correlator is computed. 
The typical $\tau$ we are interested in is a physical scale (something like the inverse of the separation between the ground state and the first excited state), and if anything it is larger for bigger $A$. This means that calculations for $A>1$ are typically done for unphysical quark masses.
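To get a feel for these scalings, here is a rough back-of-the-envelope sketch. The exponents are midpoints of the ranges quoted above, the quark-mass dependence uses $m_q\propto m_\pi^2$, and the hadron masses at the heavy-pion point are illustrative stand-ins, not values from any particular ensemble:

```python
import math

HBAR_C = 197.327  # MeV fm, to make the exponent dimensionless

def cost_ratio(m_pi_phys, m_pi_heavy, alpha=1.5):
    """Relative cost of the physical point vs a heavy-pion simulation.

    Uses cost ~ (1/m_q)^alpha with m_q ~ m_pi^2; alpha = 1.5 is the
    midpoint of the (1-2) range quoted above.
    """
    return (m_pi_heavy / m_pi_phys) ** (2.0 * alpha)

def signal_to_noise(A, m_N, m_pi, tau_fm):
    """exp(-A (m_N - 3 m_pi / 2) tau) with masses in MeV and tau in fm."""
    return math.exp(-A * (m_N - 1.5 * m_pi) * tau_fm / HBAR_C)

# Lowering m_pi from 400 MeV to the physical 135 MeV: roughly 25x the cost
print(cost_ratio(135.0, 400.0))

# Two-nucleon correlator at tau = 1 fm: the heavy-pion point (illustrative
# nucleon mass of 1200 MeV) has a noticeably better signal-to-noise ratio
print(signal_to_noise(2, 939.0, 135.0, 1.0))   # physical masses
print(signal_to_noise(2, 1200.0, 400.0, 1.0))  # heavy-pion stand-in
```

The gap in signal-to-noise widens exponentially with $\tau$ and with $A$, which is why multi-nucleon calculations in particular favour heavier-than-physical pions.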
{ "source": [ "https://physics.stackexchange.com/questions/321811", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/117725/" ] }
322,598
What is special about Maxwell's equations ? If I have read correctly, what Maxwell basically did is combine 4 equations that were already formulated by other physicists as a set of equations. Why are these 4 equations (out of large numbers of mathematical equations in electromagnetism) important? Or What is special about these 4 equations?
Maxwell's equations wholly define the evolution of the electromagnetic field. So, given a full specification of an electromagnetic system's boundary conditions and constitutive relationships (i.e. the data defining the materials within the system by specifying the relationships between the electric / magnetic field and electric displacement / magnetic induction), they let us calculate the electromagnetic field at all points within the system at any time. Experimentally, we observe that knowledge of the electromagnetic field together with the Lorentz force law is all one needs to know to fully understand how electric charges and magnetic dipoles ( e.g. precession of a neutron) will react to the World around them. That is, Maxwell's equations + boundary conditions + constitutive relations tell us everything that can be experimentally measured about electromagnetic effects (including quibbles about the Aharonov-Bohm effect , see 1). Furthermore, Maxwell's equations are pretty much a minimal set of equations that let us access this knowledge given boundary conditions and material data, although much of the content of the Gauss laws is already contained in the other two, given the continuity equation. For example, if one takes the divergence of both sides of the Ampère law and applies the charge continuity equation $\nabla\cdot\vec{J}+\partial_t\,\rho=0$ together with an assumption of $C^2$ (continuous second derivative) fields, one derives the time derivative of the Gauss electric law. Likewise, the divergence of the Faraday law yields the time derivative of the Gauss magnetic law. Maxwell's equations are also Lorentz invariant, and were the first physical laws that were noticed to be so. 
They're pretty much the simplest linear differential equations that possibly could define the electromagnetic field and be generally covariant; in the exterior calculus we can write them as $\mathrm{d}\,F = 0;\;\mathrm{d}^\star F = \mathcal{Z}_0\,^\star J$; the first simply asserts that the Faraday tensor (a covariant grouping of the $\vec{E}$ and $\vec{B}$ fields) can be represented as the exterior derivative $F=\mathrm{d} A$ of a potential one-form $A$, and the second simply says that the tensor depends in a first-order linear way on the sources of the field, namely the four-current $J$. This is simply a variation on Feynman's argument that the simplest differential equations are linear relationships between the curl, divergence and time derivatives of a field on the one hand and the sources on the other (I believe he makes this argument in volume 2 of his lecture series , but I can't quite find it at the moment). 1) Sometimes people quibble about what fields define experimental results and point out that the Aharonov-Bohm effect is defined by the closed path integral of the vector magnetic potential $\oint\vec{A}\cdot\mathrm{d}\vec{r}$ and thus ascribe an experimental reality to $\vec{A}$. However, this path integral of course is equal to the flux of $\vec{B}$ through the closed path, therefore knowledge of $\vec{B}$ everywhere will give us the correct Aharonov-Bohm phase needed to calculate the electron interference pattern, even if it is a little weird that $\vec{B}$ can be very small on the path itself.
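The step where the divergences of the Ampère and Faraday laws reproduce the time derivatives of the Gauss laws rests on the identity $\nabla\cdot(\nabla\times\vec F)=0$. Here is a small symbolic check (a sketch with sympy, assuming only that the field components are smooth):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# completely generic smooth field components
Fx, Fy, Fz = (sp.Function(name)(x, y, z) for name in ('Fx', 'Fy', 'Fz'))

# curl F, written out component by component
curl_F = (
    sp.diff(Fz, y) - sp.diff(Fy, z),
    sp.diff(Fx, z) - sp.diff(Fz, x),
    sp.diff(Fy, x) - sp.diff(Fx, y),
)

# div(curl F): every mixed partial cancels pairwise, so the sum vanishes
div_curl = sp.diff(curl_F[0], x) + sp.diff(curl_F[1], y) + sp.diff(curl_F[2], z)
assert sp.simplify(div_curl) == 0
```

Applied to the Ampère law, taking the divergence kills the curl term identically, leaving only the relation between $\partial_t\nabla\cdot\vec{E}$ and $\nabla\cdot\vec{J}$, which the continuity equation turns into the time derivative of the Gauss law.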
{ "source": [ "https://physics.stackexchange.com/questions/322598", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/21068/" ] }
322,602
Correct me if I'm wrong, I think the pressure in a fluid reduces when the speed increases(The airplane rises because the air above the airfoil moves faster than the air below it). Next, looking at the air above the surface of the water which is in a spinning glass, I'm wondering if this relative motion decreases the pressure in the air just above the water surface... If yes, does the boiling point of the water reduce due to the loweing of this air pressure over the surface ? (A layer of air just above the water surface spins with the water, so there will be a relative motion between this layer of air and the air above it.)
No, mostly

You mostly can't boil water by spinning the glass. "Mostly" because some weird stuff is possible under extreme conditions like in a rotary evaporator; in such cases, whether or not there's "boiling" starts to become an issue of semantics.

That explanation of aerodynamic lift is a common misconception

First, to correct a misconception:

Correct me if I'm wrong, I think the pressure in a fluid reduces when the speed increases (the airplane rises because the air above the airfoil moves faster than the air below it).

This statement is a common misconception supported even by some authoritative sources. However, the logic behind it presumes that equilibrium behavior applies in dynamic scenarios, which doesn't hold; more in the comments below. "What really allows airplanes to fly?" has some excellent discussion on airfoils.

Spinning water in a glass can decrease pressure

If you spin a glass of water fast enough, you can get a vortex going in the center. This vortex is sorta like a tornado, with lower pressure in the center and higher pressure at the boundaries, against the glass. In principle, if you do this fast enough, you could lower the inner pressure down to the static boiling point. Checking a phase diagram for water, it looks like water can boil at room temperature if we drop the pressure down to just a few percent of normal atmospheric pressure.

Would additional evaporation be due to "boiling"?

In a classical sense, boiling is when a material in the liquid state reaches the point where it'll start turning into its vapor state with appreciable stability throughout its volume. Non-boiling liquids can still turn into vapor, but they usually do so at their boundaries, in which case we call it "evaporation" instead of "boiling".
The distinction between rapid evaporation and boiling kinda breaks down under extreme conditions since the classical sense in which we defined those terms no longer applies, but I think that most people would find the term "boiling" to be misleading in this case.
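For the "few percent" figure: a quick estimate with the Antoine equation for water. The constants below are the commonly tabulated set for roughly 1–100 °C; treat them, and this whole calculation, as a sketch rather than part of the original answer:

```python
def water_vapour_pressure_mmHg(t_celsius):
    """Antoine equation log10(P/mmHg) = A - B / (C + T) for water."""
    A, B, C = 8.07131, 1730.63, 233.426
    return 10.0 ** (A - B / (C + t_celsius))

p25 = water_vapour_pressure_mmHg(25.0)
print(p25)          # about 23.7 mmHg
print(p25 / 760.0)  # about 0.03 -- water at 25 C boils near 3% of 1 atm
```

So at room temperature the vortex core would need to reach roughly 3% of atmospheric pressure before boiling (in the classical sense) could begin.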
{ "source": [ "https://physics.stackexchange.com/questions/322602", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/146502/" ] }
323,183
In every derivation of Kepler's Laws that I have seen, we assume that the sun is stationary. However, in other places I have read that celestial bodies move about their barycentre (center of mass). So are planets actually moving in elliptical orbits around the Sun or do they move in circular orbits around their center of mass?
In an ideal two body system (say a sun and a planet), both bodies would move around their barycenter. An ideal periodic orbit would be an ellipse or a circle. EDIT : See comment by @user11153 regarding the barycenter of the solar system and related links. In a more complex system like our solar system, to a good approximation the planets can be modeled by a two body system (i.e. the Sun being so massive it is the dominant effect) and for many practical purposes the motion of the Sun around the barycenter is not significant, as the barycenter is actually inside the Sun. More precise calculations the motion of a planet requires allowing for the gravitational perturbation of other planets as well as allowing for the center of mass and relativistic effects. The net effect is that no planets actually orbit in ideal elliptical orbits. So are they actually moving in elliptical orbits around the sun or do they move in circular orbits around their center of mass? I have the impression from this question that you think the elliptical orbits are a result of using the barycenter as a center of motion and that otherwise a circle would be the orbit's shape. This is not the case. The general shape for an orbit in an ideal two body system with a Newtonian gravitational force is an ellipse. A circle is a special case of an ellipse.
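To put numbers on where the two-body barycentre actually sits (a sketch; the masses and distances are standard rounded values):

```python
def barycentre_offset(m_sun, m_planet, separation):
    """Distance of the two-body barycentre from the Sun's centre."""
    return separation * m_planet / (m_sun + m_planet)

M_SUN = 1.989e30   # kg
R_SUN = 6.957e8    # m, solar radius

# Sun-Earth pair: the barycentre is deep inside the Sun
earth = barycentre_offset(M_SUN, 5.972e24, 1.496e11) / R_SUN
print(earth)    # ~0.0006 solar radii

# Sun-Jupiter pair: massive enough to pull it just outside the solar surface
jupiter = barycentre_offset(M_SUN, 1.898e27, 7.785e11) / R_SUN
print(jupiter)  # ~1.07 solar radii
```

For an individual Sun-planet pair the offset is tiny, though Jupiter is massive enough to pull the pairwise barycentre just outside the solar surface -- in line with the caveat in the edit above about the barycentre of the solar system as a whole.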
{ "source": [ "https://physics.stackexchange.com/questions/323183", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/97482/" ] }
324,273
When you have made some stupid mistakes on a blackboard, you quickly want to erase it with a wet sponge before anyone sees them. So you clean the blackboard and within a minute the blackboard is clean and dry again! I was wondering why the board is drying so quickly compared to other surfaces. Does it absorb the water or is it all due to "good" evaporation?
- [Already said] A blackboard is not porous, i.e. it actually never takes up much water from the sponge in the first place (and if you were to squeeze out more than a little, it would just run down to the bottom).
- [Already said] Yet the surface is hydrophilic, i.e. the water that does stay on the board forms a very thin film instead of droplets (as you'd get on a plastic or freshly wiped glass surface), and together with the slightly rough texture this makes for a large surface area to only a very small volume of water. This surface is where evaporation takes place; the larger, the better.
- The board is mounted vertically. That's the ideal configuration for convection: water vapour has a lower density than air, so close to the surface (which, because of the second point, quickly evaporates a lot of water into the air directly next to it) the air rises up, and because the entire surface is aligned in the same direction and air can efficiently stream along the surface from below (turbulence helps further), there's a steady supply of unsaturated air into which more water can evaporate unhindered.
- [Already said] The bulk of the board is usually metallic, i.e. it has good thermal conductivity. To the touch (which emits heat into the board), one therefore perceives it as cold, but to the evaporating water (which requires heat) the same property has a warming effect. That keeps the evaporation speed high, both directly and by preventing the reduced temperature from weakening the convection-driving density reduction.
{ "source": [ "https://physics.stackexchange.com/questions/324273", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/38873/" ] }
325,535
This is a question thats been bothering me a while. I don't even know if it makes sense or not (like if it is a physics question or becoming a philosophical one). But here it goes. The crux of my question basically is that we all know that we can't see light (like in its photon or electromagnetic wave form) directly when it is traveling past us. However, we also know that the way we see objects is by light reflecting off them. This then means that we are "seeing" the light reflecting from the object which then sends the signal to our brain saying that we are seeing a particular object. We know that both light traveling past us and light reflected from objects are made of photons (so they are the same kind)? So then my question is that what is happening to the photon of a light after it is reflected from the objects, that causes us to see it or the object, but on the other hand we can't see light as it is directly traveling past us.
The key is that light must enter the eye for you to see something. You cannot see a beam of light from a low powered laser which is not directed into your eye if the air through which the light is travelling is devoid of dust. Adding dust to the air and you can see the trajectory of the laser beam because of the light being reflected/scattered from the dust and enters your eye. Similarly no atmosphere on the Moon leads to a black sky even in daylight whilst on the Earth the sky is blue. To see something light must enter the eye and the rods (and cones) must be stimulated sufficiently for the signals to be produced for processing by the brain.
{ "source": [ "https://physics.stackexchange.com/questions/325535", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/135978/" ] }
325,602
Can someone please explain concept of spacetime in simple language? What is it and how it is important in the universe? Wherever I have tried searching this concept, I have come across most complicated explanations. A simple example will be appreciated.
The intuitive and traditional idea of space and time is that objects live in an infinite three-dimensional box, space, and that their motion in space happens in time in such a way that at each definite moment in time all objects have a position, and we can compare those positions because time flows the same for all objects. Physicists discovered that there is no such box, and there is no such flow of time. This traditional space/time framework somewhat holds, but only relative to a given object; it is not the same for all objects. So there is no universal spatial background, and no universal time flow. Spacetime is then the notion we use to still have a background after all. By forming a space (in the mathematical sense) combining traditional space and traditional time in an intricate way allowing space to rotate into time and the other way round, we can still get by with the idea that there is some smooth universal scene where everything happens. The price to pay for seeing spacetime as a background is that this scene is completely static, sometimes called the block universe. But since all space and all time are intrinsically part of it, it actually cannot be conceived from an external point of view, and indeed Einstein's equations are strictly local and relational: they describe how the distribution of energy defines its own playground and how time and space can be seen in the way we are used to only instant by instant for specific observers, whose mutual perspectives are always shifting and transforming. In that view spacetime is far from static; it is more like a sort of fluid.
{ "source": [ "https://physics.stackexchange.com/questions/325602", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/151016/" ] }
325,701
In order to find the effective Hamiltonian in a subspace which is energetically well separated from the rest of the Hilbert space people try to find a unitary transformation which makes the Hamiltonian block-diagonal in that subspace. Usually this procedure is done perturbatively and the corresponding formulae -usually to second order- is available. But I saw somewhere that the effective Hamiltonian satisfies the compact relation : $$ \frac{1}{E-H_{eff}}=P_s \frac{1}{E-H} P_s$$ Where $P_s$ is the projection operator into the subspace which we want its effective Hamiltonian. So where does the above relation come from? Also it will be very helpful if you mention some references about about different ways of obtaining effective Hamiltonian systematically. I saw the above formula in the book "Interacting Electrons and Quantum Magnetism" by Auerbach.
This approach is straightforward to understand if you realize that $1/(E-H)$ is nothing but the propagator (the Green's function) $G(E)=(E-H)^{-1}$. So this approach simply means that the effective propagator $G_\text{eff}(E)=(E-H_\text{eff})^{-1}$ is obtained by restricting the full propagator to the subspace of interest: $G_\text{eff}(E)=P_sG(E)P_s$. One may wonder why we project the propagator rather than projecting the Hamiltonian directly onto the subspace. The reason is that all physical observables are measured with respect to the density matrix $\rho(E)=-2\Im G(E+i0_+)$, which is the imaginary part of the propagator. For example, the expectation value of an operator $A$ evaluated on an eigenstate of the energy $E$ is given by $$\bar{A}(E)=\text{Tr}\hat{A}\rho(E)=-2\text{Tr}\hat{A}\Im G(E+i0_+).$$ Now suppose we are only interested in the physical observables in the Hilbert subspace $\mathcal{H}_s$; then the information of the propagator $G(E)$ in this subspace is sufficient to reproduce all measurement results, and hence gives an "effective" description of the subsystem. The Hamiltonian that produces the effective propagator is therefore considered the effective Hamiltonian for the subsystem. Of course, the effective Hamiltonian is typically only calculated perturbatively to some order, so approximations are introduced. But if we could find the effective Hamiltonian to all orders, then it would agree with the full Hamiltonian on any physical measurements that take place in the subsystem (or the subspace). Take a simple quantum mechanical problem for example. Consider a two-level system described by the Hamiltonian $$H=\left[\begin{matrix}0&t\\t&U\end{matrix}\right],$$ where $t\ll U$ is treated as a perturbation. In the limit of $t\to 0$, we get two levels of energies 0 and $U$ respectively. Now we are interested in the energy correction to the low-energy level (the level around energy 0).
So we first calculate the propagator of the system $$G=\frac{1}{E-H}=\left[\begin{matrix}\frac{E-U}{E^2-E U-t^2} & \frac{t}{E^2-E U-t^2} \\ \frac{t}{E^2-E U-t^2} & \frac{E}{E^2-E U-t^2} \end{matrix}\right].$$ The effective propagator for the low-energy level is obtained by restricting the propagator to the low-energy subspace, i.e. by taking the $\mathcal{P}_1 G(E) \mathcal{P}_1=G(E)_{11}$ component (first row, first column), $$G_\text{eff}(E)=\frac{E-U}{E^2-E U-t^2}.$$ Now we wish to construct an effective Hamiltonian $H_\text{eff}$ such that the effective propagator can be produced by $G_\text{eff}(E)=1/(E-H_\text{eff})$ . We find $$H_\text{eff}=\frac{t^2}{E-U}.$$ We note that $H_\text{eff}$ is also a function of $E$ , because the physics can change with respect to the energy scale. To find the eigenenergy, one may solve the Schrödinger equation $H_\text{eff}(E)|\psi\rangle=E|\psi\rangle$ . Because the subspace contains only a single state in this case, the eigenstate is simply fixed with respect to the basis of the subspace (but actually implicitly varies with $E$ in the original basis of the full space), and the eigenenergy is given by $t^2/(E-U)=E$ , whose solution is $$E=\frac{1}{2}\big(U\pm\sqrt{U^2+4t^2}\big).$$ One can see that the effective Hamiltonian, if calculated exactly to all orders, still contains the spectrum of the full system. But in general, we can only compute the effective Hamiltonian perturbatively. In that case, it only makes sense to evaluate the effective Hamiltonian around the unperturbed energy level. So we evaluate $H_\text{eff}(E)$ at $E=0$ and find the second-order perturbation result $H_\text{eff}(E=0)=-t^2/U$ . To obtain higher-order corrections, we feed the second-order energy back into the effective Hamiltonian and find the result that is accurate to fourth order in $t/U$ , i.e. $H_\text{eff}(E=-t^2/U)=-t^2/U+t^4/U^3+\mathcal{O}[t^6]$ .
In this way, we can obtain the perturbative corrections order by order recursively.
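As a sanity check, the whole construction above can be reproduced numerically: invert $E-H$, project onto the low-energy state, re-invert to read off $H_\text{eff}(E)$, and iterate $E=H_\text{eff}(E)$. A minimal sketch (NumPy; the values $t=0.1$, $U=1$ are arbitrary illustrative choices):

```python
import numpy as np

# Two-level Hamiltonian from the example above: H = [[0, t], [t, U]], t << U.
t, U = 0.1, 1.0
H = np.array([[0.0, t], [t, U]])

def H_eff(E):
    """Effective Hamiltonian of the low-energy (here 1x1) subspace,
    obtained by inverting the projected propagator P_s G(E) P_s."""
    G = np.linalg.inv(E * np.eye(2) - H)
    G_eff = G[0, 0]                # restrict G(E) to the low-energy state
    return E - 1.0 / G_eff         # solve G_eff = 1/(E - H_eff) for H_eff

# Agrees with the closed form derived above: H_eff(E) = t^2 / (E - U)
assert abs(H_eff(0.0) - t**2 / (0.0 - U)) < 1e-12

# Self-consistent (recursive) solution of E = H_eff(E), seeded at E = 0
E = 0.0
for _ in range(30):
    E = H_eff(E)

E_exact = 0.5 * (U - np.sqrt(U**2 + 4 * t**2))   # exact lower eigenvalue
print(E, E_exact)                                # both approach -0.009902...
```

The fixed-point iteration converges quickly here because $|\partial H_\text{eff}/\partial E| = t^2/(E-U)^2 \ll 1$ in this regime.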
{ "source": [ "https://physics.stackexchange.com/questions/325701", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/90744/" ] }
325,733
With help from XKCD , which says Miles are units of length, and gallons are volume — which is $\text{length}^3$. So $\text{gallons}/\text{mile}$ is $\frac{\text{length}^3}{\text{length}}$. That's just $\text{length}^2$. I recently realised that the units of fuel efficiency are $\text{length}^{-2}$ (the reciprocal of which would be $\text{length}^{2}$) and I can't work out why this would be, because $\mathrm{m}^2$ is the unit of area, but fuel efficiency is completely different to this. The only reason I could think of for these units is that they were meant to be used as a ratio; but then again, ratios are meant to be unitless (as far as I know, e.g. strain ). Could someone please explain why these units are used?
Imagine that you have a tube laid along some path and that the tube is completely filled with the fuel that you would spend to cover that path. The area of the cross-section of that tube is the area you're asking about. Now, if this area is bigger, the tube is thicker, which means more fuel. That is, more fuel to cover the same distance, which means lower efficiency. Therefore, efficiency is proportional to the inverse of the area of that tube, and that's why it can be measured in inverse square meters.
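To put a number on it, here is a quick back-of-the-envelope calculation (the 8 L/100 km figure is just an assumed typical car consumption):

```python
# Express a typical consumption of 8 L/100 km as a cross-sectional area.
litres_per_100km = 8.0
volume_m3 = litres_per_100km * 1e-3      # 1 L = 1e-3 m^3
distance_m = 100e3                       # 100 km in metres

area_m2 = volume_m3 / distance_m         # the "fuel tube" cross-section
area_mm2 = area_m2 * 1e6                 # 1 m^2 = 1e6 mm^2
print(area_mm2)                          # 0.08 mm^2
```

An area of 0.08 mm² corresponds to a thread of fuel roughly 0.3 mm in diameter stretched along the entire road.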
{ "source": [ "https://physics.stackexchange.com/questions/325733", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/117422/" ] }
326,114
I study maths at uni and we have a course about relativity. In the main principles I've read that the speed of light is invariant since we can calculate it from the Maxwell equations. My problem is that the Maxwell equations I know are not relativistic. What is the clear way to formulate the Maxwell equations with respect to the relativistic spacetime? Using that formulation, do we get the same value for $c$? How do we do that? Edit: After the answers, it is now clear what my problem was. The wrong concept I had was: from the classical Maxwell equations we can calculate the speed of light, and with that information we can build up the relativistic spacetime, where the Maxwell equations might look different. And that was weird for me. From the answers it became clear that the invariant speed of light is an observation, not a result. PS: I find it interesting that my maths-like approach did not consider the possibility of something just being an observation, not a result.
In the main principles I've read that the speed of light is constant since we can calculate it from the Maxwell equations. The fact that the speed of light could be deduced from Maxwell's equations does not , in and of itself, imply that the speed of light is constant in all reference frames. Certainly the equations don't make an obvious reference to a reference frame; but once you've made the connection between electric and magnetic fields and light, it seems pretty obvious what the "natural" rest frame is (bolding mine): We can scarcely avoid the inference that light consists in the transverse undulations of the same medium which is the cause of electric and magnetic phenomena. – James Clerk Maxwell, On the Physical Lines of Force In other words, one could easily imagine a world in which Maxwell's equations are only valid in the rest frame of the luminiferous aether — and from about 1860–1905 or so, this is precisely the universe that physicists thought we lived in. In such a universe, Maxwell's equations would in fact look different in different reference frames; a "full" version of these equations would include terms that depended on an observer's velocity $\vec{v}$ with respect to the aether. There is nothing mathematically inconsistent about the equations describing such a Universe. What these equations are inconsistent with, however, is two things: (1) experimental evidence, and (2) our sense of symmetry. The Michelson-Morley experiment was designed to detect Earth's motion relative to the aether — in other words, to indirectly verify the presence of these $\vec{v}$ -dependent terms in the hypothetical Maxwell's equations. Of course, they famously came up short. 
The other problem is that there seem to be a lot of convenient coincidences between what seem to be the same phenomena described in different reference frames: It is known that Maxwell's electrodynamics—as usually understood at the present time—when applied to moving bodies, leads to asymmetries which do not appear to be inherent in the phenomena. Take, for example, the reciprocal electrodynamic action of a magnet and a conductor. The observable phenomenon here depends only on the relative motion of the conductor and the magnet, whereas the customary view draws a sharp distinction between the two cases in which either the one or the other of these bodies is in motion. For if the magnet is in motion and the conductor at rest, there arises in the neighbourhood of the magnet an electric field with a certain definite energy, producing a current at the places where parts of the conductor are situated. But if the magnet is stationary and the conductor in motion, no electric field arises in the neighbourhood of the magnet. In the conductor, however, we find an electromotive force, to which in itself there is no corresponding energy, but which gives rise—assuming equality of relative motion in the two cases discussed—to electric currents of the same path and intensity as those produced by the electric forces in the former case. Examples of this sort, together with the unsuccessful attempts to discover any motion of the earth relatively to the “light medium,” suggest that the phenomena of electrodynamics as well as of mechanics possess no properties corresponding to the idea of absolute rest. — Albert Einstein, On the Electrodynamics of Moving Bodies Or, to summarize: if I move a coil near a magnet, the magnetic field causes the charges to flow. If I move a magnet near a coil, the changing magnetic field causes an electric field, which causes the charges to flow. 
These two descriptions seem very different, and yet somehow they give rise to exactly the same amount of current in the coil. Einstein's contention was that this couldn't be a coincidence, and that only relative velocity should matter. If you buy that, then you find (as Einstein did) that when you go into another reference frame, the electric and magnetic fields intermingle with each other. If you look at the above link to Einstein's original paper, §6 describes how the electric and magnetic fields transform into each other. His notation is a little antiquated — what he calls $(X, Y, Z)$ we would nowadays usually call $(E_x, E_y, E_z)$ , and what he calls $(L, M, N)$ we would usually call $(B_x, B_y, B_z)$ . In different reference frames moving relative to each other in the $x$ -direction, all of these components change, and the components $E_y$ , $E_z$ , $B_y$ , and $B_z$ get mixed up with each other. In other words, the electric and magnetic field strengths observed by Observer A and Observer B are not necessarily the same. These transformations between the fields are a necessary consequence of the postulate that the laws of physics are the same in all reference frames. But Maxwell's equations don't necessarily imply that the laws of physics are all the same in such reference frames; they are agnostic on the subject. Historically, physicists originally believed that there was in fact a privileged frame in which Maxwell's equations held exactly, and it was only after careful experimentation and careful thought that we figured out that Maxwell's equations were also consistent with the principle of relativity.
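The field transformations from §6 can be checked numerically. Below is a small sketch (NumPy; the $0.5c$ boost and 1 T field are arbitrary illustrative values) showing a pure magnetic field acquiring an electric component under a boost along $x$, while the Lorentz invariants $\vec{E}\cdot\vec{B}$ and $E^2-c^2B^2$ stay fixed:

```python
import numpy as np

c = 299_792_458.0  # speed of light, m/s

def boost_fields(E, B, v):
    """Transform E and B under a boost with speed v along x, using the
    standard field-transformation formulas."""
    g = 1.0 / np.sqrt(1.0 - (v / c) ** 2)      # Lorentz factor
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    Ep = np.array([Ex, g * (Ey - v * Bz), g * (Ez + v * By)])
    Bp = np.array([Bx, g * (By + v * Ez / c**2), g * (Bz - v * Ey / c**2)])
    return Ep, Bp

# A frame with a pure magnetic field along z ...
E = np.array([0.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])                  # 1 tesla

# ... seen from a frame moving at 0.5c along x: an electric field appears.
Ep, Bp = boost_fields(E, B, 0.5 * c)
print(Ep)                                      # nonzero E'_y

# The invariants E.B and E^2 - c^2 B^2 are unchanged (up to rounding).
inv1 = np.dot(Ep, Bp) - np.dot(E, B)
inv2 = (np.dot(Ep, Ep) - c**2 * np.dot(Bp, Bp)) \
     - (np.dot(E, E) - c**2 * np.dot(B, B))
print(abs(inv1), abs(inv2))
```

This is exactly the magnet-and-coil story in miniature: what one observer calls a purely magnetic effect, another attributes partly to an electric field, yet all frame-independent quantities agree.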
{ "source": [ "https://physics.stackexchange.com/questions/326114", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/152775/" ] }
327,076
For example, has anyone directly observed charges oscillating due to standing EM waves? I am particularly interested because it'd demonstrate that radiation has a transverse electric component to it. Anything else (historical or modern) that shows that light has a transverse electric component would also be welcome.
Yes, we have. As other answers have explained, this is easy to do in the radio regime, but over the past fifteen years or so we've been able to do it for light too. The landmark publication here is Direct measurement of light waves. E. Goulielmakis et al., Science 305 , 1267 (2004) ; author eprint . which broke new ground on a method called attosecond streaking that lets us see things like this: On the left you've got the (mildly processed) raw data, and on the right you've got the reconstruction of the electric field of an infrared pulse that lasts about four cycles. To measure this, you start with a gas of neon atoms, and you ionize them with a single ultrashort burst of UV radiation that lasts about a tenth of the period of the infrared. (For comparison, the pulse length, $250\:\mathrm{as}$, is to one second as one second is to $125$ million years.) This releases the electron out of the atom, and it does so at some precisely controlled point within the infrared pulse. The electric field of the infrared can then have a strong influence on the motion of the electron: it will be forced up and down as the field oscillates, but depending on when the electron is released this will accumulate to a different impulse, and therefore a different final energy. The final measurement of the electron's energy, as a function of the relative delay between the two pulses (top left) clearly shows the traces of the electric field of the infrared pulse.
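The mechanism can be sketched classically: an electron released at time $t_0$ into the infrared field ends up with its drift momentum shifted by the vector potential $A(t_0)$, so its final energy maps out the field as a function of release time. A minimal illustration (the values of $E_0$, $\omega$, and $p_0$ below are made-up illustrative numbers, not the experimental parameters):

```python
import numpy as np

# Toy model of attosecond streaking (illustrative units, e = m = 1):
# an electron released at time t0 into a field E(t) = E0*cos(w*t)
# picks up a drift-momentum shift -A(t0), with A(t) = -(E0/w)*sin(w*t).
E0, w = 0.05, 0.057          # field amplitude and IR frequency (assumed)
p0 = 1.0                     # initial momentum from the UV ionization step

def A(t):
    return -(E0 / w) * np.sin(w * t)

def final_energy(t0):
    p_final = p0 - A(t0)     # streaking momentum shift
    return 0.5 * p_final**2

# Scan the release time over one IR period: the final-energy modulation
# traces sin(w*t0), which is how the field is read off from the data.
delays = np.linspace(0.0, 2 * np.pi / w, 9)
energies = [final_energy(t) for t in delays]
print([round(e, 4) for e in energies])
```

Reading the energy modulation off as a function of delay is, in essence, the reconstruction shown in the paper's figures.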
{ "source": [ "https://physics.stackexchange.com/questions/327076", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/123113/" ] }