source_id | question | response | metadata
---|---|---|---|
589,074 | If gravitons mediate the gravitational force, couldn’t the detection of gravitons by an observer be used to distinguish whether they are experiencing gravitational acceleration vs. inertial acceleration, contradictory to general relativity? If this is not the case, and detection of gravitons can not be used to distinguish gravity from other acceleration, shouldn’t acceleration affect the way objects interact with the gravitational field? Obviously, this can not be correct, so what am I missing? | Gravitons do not mediate the gravitational force and you cannot detect gravitons flashing to and fro between objects interacting gravitationally. Since you cannot detect the gravitons you cannot use said gravitons to find out whether acceleration is inertial or gravitational. It is often said that forces are due to the exchange of virtual particles, for example the EM force is due to the exchange of virtual photons while the gravitational force is due to the exchange of virtual gravitons. But virtual particles are a computational device and do not actually exist. Those Feynman diagrams you have seen showing the exchange of a virtual particle are just a graphical representation of an integral called a propagator and do not show a physical process. I cannot emphasise this strongly enough: Virtual particles do not exist ! Real gravitons are the quanta of gravitational waves, just as real photons are the quanta of light waves, but real gravitons do not transmit the gravitational force any more than real photons transmit the EM force. When we write the four-acceleration of some observer we write it as a sum of the inertial and gravitational terms: $$ A^\alpha = \frac{\mathrm d^2x^\alpha}{\mathrm d\tau^2} + \Gamma^\alpha{}_{\mu\nu}U^\mu U^\nu $$ where the first term on the right hand side is the inertial part and the second term is the gravitational part. However neither of the terms on the right hand side are tensors so both are changed when we change the coordinate system. It is a fundamental principle in general relativity that we cannot distinguish between the two terms since either can be made zero just by choosing appropriate coordinates. In fact this is the equivalence principle stated mathematically. | {
"source": [
"https://physics.stackexchange.com/questions/589074",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/272989/"
]
} |
589,133 | The electroweak force separated into the weak force and electromagnetism. So, will the electromagnetic force eventually separate into electricity and magnetism? | In the case of the electroweak force and electromagnetism there is a Higgs mechanism, which makes the $W^{\pm}, Z$ bosons massive and leaves the photon $\gamma$ massless. But the symmetry relating the electric and magnetic fields is actually a Lorentz symmetry, which is global and remains unbroken. The difference between electric and magnetic fields emerges in the non-relativistic limit, but it is not tied to the breaking of a symmetry. | {
"source": [
"https://physics.stackexchange.com/questions/589133",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/50612/"
]
} |
589,142 | I am thinking of two circular gears with radii $R_1$ and $R_2=2R_1$ . Gear 1 initially spins at $\Omega_1=10$ $rpm$ while the Gear 2 is at rest. After the gears come into contact, Gear 2 angularly accelerates and eventually reaches the steady speed $\Omega_2=20$ $rpm$ . Gear 1 applies a torque $\tau_1=F_1 R_1$ on the second gear, correct? Why doesn't torque $\tau_1$ continue to increase the speed of the Gear 2? Why does $\Omega_2$ stop at 20 rpm? After all, an unbalanced torque should continue to provide angular acceleration... Angular speed is amplified from $10$ $rpm$ to $20$ $rpm$ but the torques are $\tau_2<\tau_1$ . Is torque $\tau_1=R_1F_1$ the torque acting on Gear 1 or on Gear 2? Is the force $F_1$ acting on Gear 1 to keep it spinning at $\Omega_1$ ? Or is $F_1$ the force acting on Gear 2 to make/keep it spinning at $\Omega_2$ ? I am confused. Is $\tau_2=R_2F_2$ the torque that Gear 2 could apply to a 3rd hypothetical gear, if it came in contact with it, or is it the torque acting on Gear 2 itself? Thank you | In the case of electroweak force and electromagnetism there is an Higgs mechanism, which makes the $W^{\pm}, Z$ bosons massive, and preserves the photon $\gamma$ massless. But the symmetry relating the electric and magnetic fields is actually a Lorentz symmetry, which is global and remains unbroken. The difference between electric and magnetic fields emerges in non-relativistic limit, but it is not tied with the breaking of symmetry. | {
"source": [
"https://physics.stackexchange.com/questions/589142",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/109670/"
]
} |
589,156 | Essentially, I'm using CMS Dimuon data, from the decay of a $J/\psi$ particle, to prove that momentum is 'conserved' in relativistic collisions. However, I'm unable to find how I can do this. I thought of using the Dispersion Relation formula which is $E^2 = p^2 + mc^2$ but I'm not sure how I'd apply it to the data. I have the relativistic 4-vector with energy, $p_x$ , $p_y$ , $p_z$ , and transverse momentum for both dimuons produced, along with their invariant masses. Here is where I obtained the data from: http://opendata.cern.ch/record/301 I'm using Octave to process this data, and I'm not sure what operations I should be performing, or if I should even be calculating the rest mass of the J/psi particle? I essentially need help with trying to understand how I can prove conservation of momentum with this data (and if finding the rest mass is one way), how I proceed to do that? Moreover, if there are other ways to show this? | In the case of electroweak force and electromagnetism there is an Higgs mechanism, which makes the $W^{\pm}, Z$ bosons massive, and preserves the photon $\gamma$ massless. But the symmetry relating the electric and magnetic fields is actually a Lorentz symmetry, which is global and remains unbroken. The difference between electric and magnetic fields emerges in non-relativistic limit, but it is not tied with the breaking of symmetry. | {
"source": [
"https://physics.stackexchange.com/questions/589156",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/277978/"
]
} |
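The question above describes a standard check: reconstruct the invariant mass of each dimuon pair from the summed four-momenta and verify that it clusters at the J/ψ mass. A minimal sketch (in Python rather than Octave; the variable names and the numbers are illustrative assumptions, not taken from the CERN dataset, and natural units with $c=1$ are used so that $E^2=|\vec p|^2+m^2$):

```python
import numpy as np

# Illustrative four-vectors (GeV, natural units c = 1) for the two muons of one event;
# in practice these columns come from the opendata CSV.
E1, px1, py1, pz1 = 15.485, 1.545, 0.0, 15.407
E2, px2, py2, pz2 = 15.485, -1.545, 0.0, 15.407

# Four-momentum conservation: the parent's four-momentum is the sum of the daughters'.
E, px, py, pz = E1 + E2, px1 + px2, py1 + py2, pz1 + pz2

# Invariant mass from E^2 = |p|^2 + m^2; if energy and momentum are conserved in the decay,
# this quantity clusters at the J/psi rest mass (~3.097 GeV) across events.
m_inv = np.sqrt(E**2 - (px**2 + py**2 + pz**2))
print(m_inv)  # ~3.10 GeV for these illustrative numbers
```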
589,389 | While studying black-body radiation , I saw that most of the textbooks and videos I watched mentioned that there was a contradiction between the classical model of black body radiation and experimental data, which created the ultraviolet catastrophe. They all said that the classical model predicted continous increase in spectral radiance as wavelength decreased. However, none of them describe how the classical model worked, and what assumption about it made it wrong. Could someone explain what this was? | Let me add something to anna v's answer.
The classical model of blackbody radiation is based on: an exact recasting of the Maxwell equations describing electromagnetic radiation in a cavity, which shows that this physical system can be described as an infinite set of classical harmonic oscillators (normal modes) whose frequencies start from zero and are not upper bounded; the hypothesis that there is an effective mechanism which allows thermal equilibrium of the radiation field; that statistical mechanics can be applied to such a system to calculate equilibrium properties. All together, these hypotheses have as a consequence that: due to the equipartition theorem, each normal mode should contribute to the internal energy with the same average energy: ( $k_BT$ ); in a given volume and interval of frequencies $d\nu$ , the number of normal modes grows like $\nu^2$ ; therefore the total energy per unit volume diverges, due to the unintegrable growth of normal modes at high frequencies (from here, the name ultraviolet catastrophe ). Up to here, these are the facts textbooks refer to. However, there are a few facts that could be useful to know to put things in the historical perspective and, more important, to learn a lesson which could be useful in other contexts even today. The actual weight of the ultraviolet catastrophe argument on the historical development of quantum physics is usually over-exagerated. It looks compelling for us, but for contemporaries of Rayleigh and Jeans, it was not a clear indication that something was wrong with classical mechanics. Statistical mechanics was in its infancy and not everybody was convinced of its general validity. Remember that Boltzmann had a hard time convincing the scientific community of the truth of his ideas. In particular, the general validity of equipartition theorem was not acknowledged by everybody. Somewhat connected with this observation, reading the first paper where Planck derived his distribution makes clear that he was not concerned by any ultraviolet catastrophe (which is not mentioned in any place in his two main contributions to the blackbody radiation). On the contrary, his main concern was the disagreement between new experiments by Pringsheim and Lummer and Wien's energy distribution at low frequencies (long wavelengths). A readable account of the real history of Planck's contribution to the blackbody problem can be found in the paper Klein, M. J. (1962). Max Planck and the beginnings of the quantum theory. Archive for History of Exact Sciences, 1(5), 459-479 . A final comment on hypothesis n.2 (see above). 20-th century research on dynamical systems has shown that when oscillators with very different frequencies are weakly coupled, equilibration times could easily exceed any reasonable experimental time. In a way, it was a lucky circumstance for the birth of Quantum Mechanics that such a result was not clearly known at the beginning of the century. | {
"source": [
"https://physics.stackexchange.com/questions/589389",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/260855/"
]
} |
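To make the divergence above concrete: "$k_BT$ per mode" times the $\nu^2$ mode density gives the Rayleigh–Jeans spectral energy density $u(\nu,T)=\frac{8\pi\nu^2}{c^3}k_BT$ (a standard result, not written out in the answer). A small numeric sketch showing that the integrated energy density grows as the cube of the ultraviolet cutoff:

```python
import numpy as np
from scipy.constants import k as k_B, c

T = 300.0  # an arbitrary illustrative temperature, in kelvin

def u_rj(nu):
    # Rayleigh-Jeans spectral energy density: (8 pi nu^2 / c^3) * k_B * T
    return 8 * np.pi * nu**2 * k_B * T / c**3

for nu_max in (1e13, 1e14, 1e15):                 # increasing UV cutoffs in Hz
    nu = np.linspace(0.0, nu_max, 200_000)
    total = np.sum(u_rj(nu)) * (nu[1] - nu[0])    # crude Riemann sum
    exact = 8 * np.pi * k_B * T * nu_max**3 / (3 * c**3)
    print(f"{nu_max:.0e} Hz: {total:.3e} J/m^3 (closed form {exact:.3e})")
# Each factor of 10 in the cutoff multiplies the energy density by ~1000: no finite total.
```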
589,425 | I am concerned about the accuracy of some information in a science textbook which I would like to clarify please. When white light is shone through a blue filter, only blue light will pass through. When the emergent blue light is passed through a red filter, no light gets through, because there is no blue light left. This makes perfect sense.
However, what would you expect to see, and why, if white light is shone through: a yellow filter, followed by a blue filter? a blue filter followed by a yellow filter? a yellow filter followed by a red filter? a red filter followed by a green filter? a green filter followed by a yellow filter? | Let me add something to anna v's answer.
The classical model of blackbody radiation is based on: an exact recasting of the Maxwell equations describing electromagnetic radiation in a cavity, which shows that this physical system can be described as an infinite set of classical harmonic oscillators (normal modes) whose frequencies start from zero and are not upper bounded; the hypothesis that there is an effective mechanism which allows thermal equilibrium of the radiation field; that statistical mechanics can be applied to such a system to calculate equilibrium properties. All together, these hypotheses have as a consequence that: due to the equipartition theorem, each normal mode should contribute to the internal energy with the same average energy: ( $k_BT$ ); in a given volume and interval of frequencies $d\nu$ , the number of normal modes grows like $\nu^2$ ; therefore the total energy per unit volume diverges, due to the unintegrable growth of normal modes at high frequencies (from here, the name ultraviolet catastrophe ). Up to here, these are the facts textbooks refer to. However, there are a few facts that could be useful to know to put things in the historical perspective and, more important, to learn a lesson which could be useful in other contexts even today. The actual weight of the ultraviolet catastrophe argument on the historical development of quantum physics is usually over-exagerated. It looks compelling for us, but for contemporaries of Rayleigh and Jeans, it was not a clear indication that something was wrong with classical mechanics. Statistical mechanics was in its infancy and not everybody was convinced of its general validity. Remember that Boltzmann had a hard time convincing the scientific community of the truth of his ideas. In particular, the general validity of equipartition theorem was not acknowledged by everybody. Somewhat connected with this observation, reading the first paper where Planck derived his distribution makes clear that he was not concerned by any ultraviolet catastrophe (which is not mentioned in any place in his two main contributions to the blackbody radiation). On the contrary, his main concern was the disagreement between new experiments by Pringsheim and Lummer and Wien's energy distribution at low frequencies (long wavelengths). A readable account of the real history of Planck's contribution to the blackbody problem can be found in the paper Klein, M. J. (1962). Max Planck and the beginnings of the quantum theory. Archive for History of Exact Sciences, 1(5), 459-479 . A final comment on hypothesis n.2 (see above). 20-th century research on dynamical systems has shown that when oscillators with very different frequencies are weakly coupled, equilibration times could easily exceed any reasonable experimental time. In a way, it was a lucky circumstance for the birth of Quantum Mechanics that such a result was not clearly known at the beginning of the century. | {
"source": [
"https://physics.stackexchange.com/questions/589425",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/270745/"
]
} |
589,812 | Newtonian mechanics seems to allow for both positive and negative gravitational mass as long as the inertial mass is always positive. The situation is analogous to electrostatics but with the opposite sign. Two positive masses or two negative masses are attracted to each other whereas one positive and one negative mass repel each other. General relativity says gravitational and inertial mass are the same thing through the equivalence principle. This has been confirmed experimentally to a very high degree of accuracy, though not for very small masses and only for normal matter. Antimatter is known to have positive inertial mass from observing the trajectories of particles in electric or magnetic fields. Presumably it is also known that the $m$ in the famous $E=mc^2$ is positive. The gravitational mass of elementary particles is currently too small to measure, but is it possible that antimatter could have negative gravitational mass - or is this absolutely precluded in general relativity? | A long comment: AEGIS is a collaboration of physicists from all over Europe. In the first phase of the experiment, the AEGIS team is using antiprotons from the Antiproton Decelerator to make a beam of antihydrogen atoms. They then pass the antihydrogen beam through an instrument called a Moire deflectometer coupled to a position-sensitive detector to measure the strength of the gravitational interaction between matter and antimatter to a precision of 1%. A system of gratings in the deflectometer splits the antihydrogen beam into parallel rays, forming a periodic pattern. From this pattern, the physicists can measure how much the antihydrogen beam drops during its horizontal flight. Combining this shift with the time each atom takes to fly and fall, the AEGIS team can then determine the strength of the gravitational force between the Earth and the antihydrogen atoms. Also new experiments are in the process. In total there are three experiments at CERN to measure the effect of the earth's gravitational field on antimatter. Patience. | {
"source": [
"https://physics.stackexchange.com/questions/589812",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/273145/"
]
} |
590,473 | I've been wondering: why is the electrical conductivity of a given material defined as the inverse of its electrical resistivity? In other words, why is $$ \sigma \equiv \frac{1}{\rho}~?$$ It indeed makes sense to define a number called conductivity such that, when the resistivity of the material decreases, the conductivity increases. However, there are a bunch of functions for which this property holds. So why aren't the following as convenient as the definition given above? $$ \sigma = \frac{1}{\rho^2} $$ $$ \sigma = - \rho $$ In fact, every decreasing function on $\rho$ could be used here. What is it that makes $\frac{1}{\rho}$ so special and unique? | In my experience this comes from resistance and conductance in electrical engineering and circuit theory. If you use the loop current analysis method on a circuit of resistors and sources then you get a matrix of linear equations whose coefficients are resistances. If you use the node voltage method on the same circuit you get a matrix whose coefficients are inverse resistances. So the inverse of resistance shows up very often quite naturally in circuit equations, rather than the negative of resistance or the inverse of resistance squared. Because it shows up naturally it makes sense to give the inverse of resistance a name. Usually when you run into some quantity that is defined and you are unsure why, that quantity first simply showed up in some important formula. So people needed a way to discuss that part of that formula, and so they gave it a name. But the quantity showed up on its own in the math first and was given a name later. | {
"source": [
"https://physics.stackexchange.com/questions/590473",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/269497/"
]
} |
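A minimal illustration of the point about node-voltage analysis (the component values are arbitrary, chosen only for the example): the coefficients that appear in the KCL equation are conductances $G=1/R$, which is why that particular function of $\rho$ earns its own name, rather than $-\rho$ or $1/\rho^2$.

```python
import numpy as np

# Voltage divider: source Vs -- R1 -- node 1 -- R2 -- ground
Vs, R1, R2 = 10.0, 100.0, 200.0        # volts and ohms, illustrative values
G1, G2 = 1.0 / R1, 1.0 / R2            # the "natural" coefficients are conductances

# KCL at node 1: G1*(V1 - Vs) + G2*(V1 - 0) = 0  =>  (G1 + G2) * V1 = G1 * Vs
A = np.array([[G1 + G2]])
b = np.array([G1 * Vs])
V1 = np.linalg.solve(A, b)[0]
print(V1)                               # 10 * R2 / (R1 + R2) = 6.666... V
```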
590,774 | So I keep hearing people talking about how physics break down at for example the center of a black hole. And maybe I am just to stupid but, why? How can we say that? For all we know a black hole could just be a very dense sphere. Kind of like a neutron star where are all the atoms sort of combine to become a single object. Just one step further. Now I am pretty sure I don't out think everyone ever right now. Thus I most likely don't understand something that everyone else gets. Can someone explain why we know that physics stops working? | "Physics breaks down" is a bad way of saying what people are trying to say. It's the sort of thing that sounds cool at first, but then it starts misleading people. What scientists mean is "our best theory produces non-sensical or contradictory results in this situation, so we know the theory doesn't make good predictions there." They do not mean that there can never be a theory that works, or that somehow there are no laws of physics whatsoever in the situation. It just means we don't know what the law is. Every physicist fully expects that there are laws of physics that predict what happens at the center of a black hole. Probably something perfectly sensible happens, though it's probably something weird and unlike anything else we know. | {
"source": [
"https://physics.stackexchange.com/questions/590774",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/276740/"
]
} |
590,983 | I recently watched this video by Veritasium where he talks about the One Way Speed of Light and talks about the limiting case where in one direction the speed of light is $c/2$ while it's instantaneous in the other. He also says this is perfectly fine according to our Physics theories. He also points at Einstein's assumption in his famous 1905 paper where he assumes that the speed of light is same in all directions. This made me ask this question is taking the speed of light same in all directions an axiom of some sort? As I've often read no information can be sent at more than the speed of light but here one-way taking the speed to infinite makes no difference. So are all of our physics theories based on the assumption and what would happen if light turns out to be moving at different speeds in different direction? Will that enable transfer of information faster than the speed of light and is there any way for us knowing that the transfer happens faster than the speed of light? The video takes a Earth Mars case where he says it isn't possible for us to every realize this discrepancy but is there a more general proof which says it isn't possible | This made me ask this question is taking the speed of light same in all directions an axiom of some sort? Yes, although it is called a postulate rather than an axiom. This is Einstein's famous second postulate: Any ray of light moves in the “stationary” system of co-ordinates with the determined velocity c, whether the ray be emitted by a stationary or by a moving body. Hence $${\rm velocity}=\frac{{\rm light\ path}}{{\rm time\ interval}} $$ where time interval is to be taken in the sense of the definition in § 1. A. Einstein, 1905, "On the Electrodynamics of Moving Bodies" https://www.fourmilab.ch/etexts/einstein/specrel/www/ This postulate is simply assumed to be true and the consequences are explored in his paper. The subsequent verification of many of the rather strange consequences is then taken to be strong empirical support justifying the postulate. This is the heart of the scientific method. So are all of our physics theories based on the assumption and what would happen if light turns out to be moving at different speeds in different direction? Will that enable transfer of information faster than the speed of light and is there any way for us knowing that the transfer happens faster than the speed of light? Yes, all of our physics theories are based on this assumption, but the assumption itself is simply a convention. The nice thing about conventions is that there is no "wrong" or "right" convention. This specific convention is known as the Einstein synchronization convention, and it is what the second postulate above referred to by "time interval is to be taken in the sense of the definition in § 1". From the same paper in section 1: Let a ray of light start at the “A time” $t_{\rm A}$ from A towards B, let it at the “B time” $t_{\rm B}$ be reflected at B in the direction of A, and arrive again at A at the “A time” $t'_{\rm A}$ . In accordance with definition the two clocks synchronize if $$t_{\rm B}-t_{\rm A}=t'_{\rm A}-t_{\rm B}$$ A. Einstein, 1905, "On the Electrodynamics of Moving Bodies" https://www.fourmilab.ch/etexts/einstein/specrel/www/ If we define $\Delta t_A= t'_A-t_A$ then with a little rearranging this becomes $t_B=\frac{1}{2}(t_A+t'_A)=t_A+\frac{1}{2}\Delta t_A$ . This is a convention about what it means to synchronize two clocks. But it is not the only possible convention. 
In fact, Reichenbach extensively studied an alternative convention where $t_B=t_A+ \epsilon \Delta t_A$ where $0 \le \epsilon \le 1$ . Einstein's convention is recovered for $\epsilon = \frac{1}{2}$ and the Veritasium video seemed oddly excited about $\epsilon = 1$ . Note that the choice of Reichenbach's $\epsilon$ directly determines the one way speed of light, without changing the two way speed of light. For Einstein's convention the one way speed of light is isotropic and equal to the two way speed of light, and for any other value the one way speed of light is anisotropic but in a very specific way that is sometimes called "conspiratorial anisotropy". It is anisotropic, but in a way that does not affect any physical measurement. Instead this synchronization convention causes other things like anisotropic time dilation and even anisotropic stress-free torsion which conspire to hide the anisotropic one way speed of light from having any experimental effects. This is important because it implies two things. First, there is no way to determine by experiment the true value, there simply is no true value, this is not a fact of nature but a description of our coordinate system's synchronization convention, nature doesn't care about it. Second, you are free to select any value of $\epsilon$ and no experiment will contradict you. This means that $\epsilon=\frac{1}{2}$ is a convention, just like the charge on an electron being negative is a convention and just like the right-hand rule is a convention. No physical prediction would change if we changed any of those conventions. However, in the case of $\epsilon=\frac{1}{2}$ a lot of calculations and formulas become very messy if you use a different convention. Since there is no point in making things unnecessarily messy, it is a pretty strong convention. Finally, regarding FTL information transfer. If we use $\epsilon \ne \frac{1}{2}$ then there is some direction where information can travel faster than $c$ . However, since in that direction light also travels faster than $c$ the information still does not travel faster than light. It is important to remember that under the $\epsilon \ne \frac{1}{2}$ convention the quantity $c$ is no longer the one way speed of light, so faster than light and faster than $c$ are no longer equivalent. | {
"source": [
"https://physics.stackexchange.com/questions/590983",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/248422/"
]
} |
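A small numeric sketch of the Reichenbach-$\epsilon$ point made above (the helper functions are mine, not from the answer): each choice of $\epsilon$ assigns different one-way speeds, yet the measurable round-trip speed always comes out to $c$.

```python
def one_way_speeds(c, eps):
    # With t_B = t_A + eps * dt_A, the outbound leg gets a fraction eps of the
    # round-trip time and the return leg gets the remaining fraction (1 - eps).
    return c / (2 * eps), c / (2 * (1 - eps))

def two_way_speed(c_out, c_back, L=1.0):
    # Total distance over total time for the round trip A -> B -> A.
    return 2 * L / (L / c_out + L / c_back)

for eps in (0.5, 0.75, 0.999):   # eps -> 1 approaches the "c/2 out, instantaneous back" case
    c_out, c_back = one_way_speeds(1.0, eps)
    print(eps, c_out, c_back, two_way_speed(c_out, c_back))   # last column is always 1.0
```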
590,991 | I understand that the modulation depth of a sinusoidally modulated signal can be defined as the modulation amplitude divided by the mean value, as explained here . But why would one wish for a high modulation depth in an experiment? What advantages does it bring? Some articles state that they have achieved a high modulation depth of 90%, but isn't what matter that amplitude of the signal or its "shape"/frequency? | This made me ask this question is taking the speed of light same in all directions an axiom of some sort? Yes, although it is called a postulate rather than an axiom. This is Einstein's famous second postulate: Any ray of light moves in the “stationary” system of co-ordinates with the determined velocity c, whether the ray be emitted by a stationary or by a moving body. Hence $${\rm velocity}=\frac{{\rm light\ path}}{{\rm time\ interval}} $$ where time interval is to be taken in the sense of the definition in § 1. A. Einstein, 1905, "On the Electrodynamics of Moving Bodies" https://www.fourmilab.ch/etexts/einstein/specrel/www/ This postulate is simply assumed to be true and the consequences are explored in his paper. The subsequent verification of many of the rather strange consequences is then taken to be strong empirical support justifying the postulate. This is the heart of the scientific method. So are all of our physics theories based on the assumption and what would happen if light turns out to be moving at different speeds in different direction? Will that enable transfer of information faster than the speed of light and is there any way for us knowing that the transfer happens faster than the speed of light? Yes, all of our physics theories are based on this assumption, but the assumption itself is simply a convention. The nice thing about conventions is that there is no "wrong" or "right" convention. This specific convention is known as the Einstein synchronization convention, and it is what the second postulate above referred to by "time interval is to be taken in the sense of the definition in § 1". From the same paper in section 1: Let a ray of light start at the “A time” $t_{\rm A}$ from A towards B, let it at the “B time” $t_{\rm B}$ be reflected at B in the direction of A, and arrive again at A at the “A time” $t'_{\rm A}$ . In accordance with definition the two clocks synchronize if $$t_{\rm B}-t_{\rm A}=t'_{\rm A}-t_{\rm B}$$ A. Einstein, 1905, "On the Electrodynamics of Moving Bodies" https://www.fourmilab.ch/etexts/einstein/specrel/www/ If we define $\Delta t_A= t'_A-t_A$ then with a little rearranging this becomes $t_B=\frac{1}{2}(t_A+t'_A)=t_A+\frac{1}{2}\Delta t_A$ . This is a convention about what it means to synchronize two clocks. But it is not the only possible convention. In fact, Reichenbach extensively studied an alternative convention where $t_B=t_A+ \epsilon \Delta t_A$ where $0 \le \epsilon \le 1$ . Einstein's convention is recovered for $\epsilon = \frac{1}{2}$ and the Veritasium video seemed oddly excited about $\epsilon = 1$ . Note that the choice of Reichenbach's $\epsilon$ directly determines the one way speed of light, without changing the two way speed of light. For Einstein's convention the one way speed of light is isotropic and equal to the two way speed of light, and for any other value the one way speed of light is anisotropic but in a very specific way that is sometimes called "conspiratorial anisotropy". It is anisotropic, but in a way that does not affect any physical measurement. 
Instead this synchronization convention causes other things like anisotropic time dilation and even anisotropic stress-free torsion which conspire to hide the anisotropic one way speed of light from having any experimental effects. This is important because it implies two things. First, there is no way to determine by experiment the true value, there simply is no true value, this is not a fact of nature but a description of our coordinate system's synchronization convention, nature doesn't care about it. Second, you are free to select any value of $\epsilon$ and no experiment will contradict you. This means that $\epsilon=\frac{1}{2}$ is a convention, just like the charge on an electron being negative is a convention and just like the right-hand rule is a convention. No physical prediction would change if we changed any of those conventions. However, in the case of $\epsilon=\frac{1}{2}$ a lot of calculations and formulas become very messy if you use a different convention. Since there is no point in making things unnecessarily messy, it is a pretty strong convention. Finally, regarding FTL information transfer. If we use $\epsilon \ne \frac{1}{2}$ then there is some direction where information can travel faster than $c$ . However, since in that direction light also travels faster than $c$ the information still does not travel faster than light. It is important to remember that under the $\epsilon \ne \frac{1}{2}$ convention the quantity $c$ is no longer the one way speed of light, so faster than light and faster than $c$ are no longer equivalent. | {
"source": [
"https://physics.stackexchange.com/questions/590991",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/172936/"
]
} |
591,001 | Merzbacher in his Quantum Mechanics says that for the "particle in a box" potential ( $V(x) = 0$ for $|x|\le L$ and $+\infty$ otherwise), Since the expectation value of the potential energy must be finite, the wavefunction must vanish within and on the walls of the box. However, I don't quite get this reasoning. Why must the potential energy's expectation value be finite? | This made me ask this question is taking the speed of light same in all directions an axiom of some sort? Yes, although it is called a postulate rather than an axiom. This is Einstein's famous second postulate: Any ray of light moves in the “stationary” system of co-ordinates with the determined velocity c, whether the ray be emitted by a stationary or by a moving body. Hence $${\rm velocity}=\frac{{\rm light\ path}}{{\rm time\ interval}} $$ where time interval is to be taken in the sense of the definition in § 1. A. Einstein, 1905, "On the Electrodynamics of Moving Bodies" https://www.fourmilab.ch/etexts/einstein/specrel/www/ This postulate is simply assumed to be true and the consequences are explored in his paper. The subsequent verification of many of the rather strange consequences is then taken to be strong empirical support justifying the postulate. This is the heart of the scientific method. So are all of our physics theories based on the assumption and what would happen if light turns out to be moving at different speeds in different direction? Will that enable transfer of information faster than the speed of light and is there any way for us knowing that the transfer happens faster than the speed of light? Yes, all of our physics theories are based on this assumption, but the assumption itself is simply a convention. The nice thing about conventions is that there is no "wrong" or "right" convention. This specific convention is known as the Einstein synchronization convention, and it is what the second postulate above referred to by "time interval is to be taken in the sense of the definition in § 1". From the same paper in section 1: Let a ray of light start at the “A time” $t_{\rm A}$ from A towards B, let it at the “B time” $t_{\rm B}$ be reflected at B in the direction of A, and arrive again at A at the “A time” $t'_{\rm A}$ . In accordance with definition the two clocks synchronize if $$t_{\rm B}-t_{\rm A}=t'_{\rm A}-t_{\rm B}$$ A. Einstein, 1905, "On the Electrodynamics of Moving Bodies" https://www.fourmilab.ch/etexts/einstein/specrel/www/ If we define $\Delta t_A= t'_A-t_A$ then with a little rearranging this becomes $t_B=\frac{1}{2}(t_A+t'_A)=t_A+\frac{1}{2}\Delta t_A$ . This is a convention about what it means to synchronize two clocks. But it is not the only possible convention. In fact, Reichenbach extensively studied an alternative convention where $t_B=t_A+ \epsilon \Delta t_A$ where $0 \le \epsilon \le 1$ . Einstein's convention is recovered for $\epsilon = \frac{1}{2}$ and the Veritasium video seemed oddly excited about $\epsilon = 1$ . Note that the choice of Reichenbach's $\epsilon$ directly determines the one way speed of light, without changing the two way speed of light. For Einstein's convention the one way speed of light is isotropic and equal to the two way speed of light, and for any other value the one way speed of light is anisotropic but in a very specific way that is sometimes called "conspiratorial anisotropy". It is anisotropic, but in a way that does not affect any physical measurement. 
Instead this synchronization convention causes other things like anisotropic time dilation and even anisotropic stress-free torsion which conspire to hide the anisotropic one way speed of light from having any experimental effects. This is important because it implies two things. First, there is no way to determine by experiment the true value, there simply is no true value, this is not a fact of nature but a description of our coordinate system's synchronization convention, nature doesn't care about it. Second, you are free to select any value of $\epsilon$ and no experiment will contradict you. This means that $\epsilon=\frac{1}{2}$ is a convention, just like the charge on an electron being negative is a convention and just like the right-hand rule is a convention. No physical prediction would change if we changed any of those conventions. However, in the case of $\epsilon=\frac{1}{2}$ a lot of calculations and formulas become very messy if you use a different convention. Since there is no point in making things unnecessarily messy, it is a pretty strong convention. Finally, regarding FTL information transfer. If we use $\epsilon \ne \frac{1}{2}$ then there is some direction where information can travel faster than $c$ . However, since in that direction light also travels faster than $c$ the information still does not travel faster than light. It is important to remember that under the $\epsilon \ne \frac{1}{2}$ convention the quantity $c$ is no longer the one way speed of light, so faster than light and faster than $c$ are no longer equivalent. | {
"source": [
"https://physics.stackexchange.com/questions/591001",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/231957/"
]
} |
591,584 | This might be a very trivial question, but in condensed matter or many body physics, often one is dealing with some Hamiltonian and main goal is to find, or describe the physics of, the ground state of this Hamiltonian. Why is everybody so interested in the ground state? | To add to Vadim's answer, the ground state is interesting because it tells us what the system will do at low temperature, where the quantum effects are usually strongest (which is why you're bothering with QM in the first place). OR it is interesting because the finite temperature behavior can be treated as a perturbation above the ground state. For example, in a metal, the dividing line between "low" and "high" temperature might be the Fermi temperature (essentially the temperature that is equivalent to the highest occupied electron state). For many metals the Fermi temperature is on the order of $10^4 K$ or more, so a metal at room temperature is nearly in its ground state, with a few excitations given by Fermi-Dirac statistics. As another example, if you consider a permanent magnet, the relevant temperature scale is the Curie temperature which might be hundreds of K, so a room temperature magnet could be considered to be in its ground state with some excitations (perturbations) on top of that. | {
"source": [
"https://physics.stackexchange.com/questions/591584",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/137908/"
]
} |
591,657 | As we all know, atomic clocks are used to measure time and in the GPS system.
But I was wondering based on what was the first atomic clock calibrated and how accurate this calibration was based on our standards nowadays? | More specifically, caesium atomic clocks realize the second (see this Q&A for the meaning of realization) or, said another way, they are a primary frequency standard. Generally, when a new primary standard is being developed—for whatever quantity, not only time—and has not yet become, by international agreement, a primary standard, it should be calibrated against the primary standards of the time. The first caesium atomic clocks were developed during the 1950s (the first prototype was that of Essen and Parry in 1955, at the National Physical Laboratory, UK). At the time, the second was defined as the fraction 1/86400 of the mean solar day , which is an astronomical unit of time, that is, based on the rotation of the Earth and its motion in the solar system. So the first atomic clock should have been calibrated against that definition of time, which was in operation up to the 1960. However, scientists already knew that due to the irregularities of the Earth's motion, the mean solar time was not a good time scale and had already started to devise a new time scale based on the ephemeris time . This was recognized as a more stable time scale, even before its implementation, and so the first accurate measurement of the frequency of a caesium atomic clock was made in the 1958 against the ephemeris second (whose definition would be ratified by the CGPM only in 1960), obtaining the value $$\nu_\mathrm{Cs} = (9\,192\,631\,770\pm 20)\,\mathrm{Hz}$$ Note that since there is no device generating the ephemeris time, which should be obtained from the analysis of the earth and moon motions, this determination took about three years! When the second was redefined as an atomic unit in 1967, the above value was used to define exactly the frequency associated to the hyperfine transition of the caesium ground level (see the 1967 resolution of the CGPM ). It's also worth noting that the relative uncertainty of that measurement is of about $2\times 10^{-9}$ ; nowadays, caesium atomic clocks can be compared with relative uncertainties, limited by the clock instability, of around $10^{-16}$ , and even better uncertainty, around $10^{-18}$ , can be achieved in the comparison of optical atomic clocks. Quite a remarkable improvement from those days! For more information about this history, I suggest you the following wonderful book (though not up to date with the current state-of-the-art): C. Audoin and B. Guinot, The measurement of time. Time, frequency and the atomic clock (Cambridge, 2001). The description of said experiment can be found in: W. Markowitz et al., "Frequency of Cesium in Terms of Ephemeris Time", Phys. Rev. Lett. , 1 , 105-107, 1958 . L. Essen et al., "Variation in the Speed of Rotation of the Earth since June 1955", Nature 181 , 1054, 1958 | {
"source": [
"https://physics.stackexchange.com/questions/591657",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/40678/"
]
} |
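A one-line check of the uncertainties quoted in the answer above:

```python
nu, d_nu = 9_192_631_770, 20                 # Hz: the 1958 measured value and its uncertainty
print(f"relative uncertainty: {d_nu / nu:.1e}")              # ~2.2e-9, the "about 2e-9" quoted above
print(f"improvement vs. ~1e-16 today: ~{d_nu / nu / 1e-16:.0e}x")  # roughly seven orders of magnitude
```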
591,825 | The Sunlight is an electromagnetic radiation. Is it known what is the origin of this radiation? Can it be adequately described by classical electrodynamics (Maxwell's equations) as a motion of electric charges in the Sun? Is it necessary to take into account quantum effects described by quantum electrodynamics? Or is it necessary to take into account other processes? | The light from the Sun comes from the photosphere; a relatively thin layer, a few hundred km thick. The photosphere of the Sun is in radiative equilibrium, getting neither hotter or colder on average.
What this means is that the emission processes that produce the radiation that escapes from the photosphere, are the inverse of the absorption processes that stop radiation from deeper, hotter layers reaching us. The dominant continuum process is bound-free photoionisation of H $^{-}$ ions that form when hydrogen atoms capture electrons released from the ionisation of potassium and sodium atoms in the atmosphere. There are some other bound-free photoionisation processes of other species that contribute continuum opacity, and bound-bound transitions between energy levels in a variety of atoms and ions that contribute opacities at discrete wavelengths. The principle of detailed balance means that these absorption processes are balanced by free-bound photorecombination of H $^-$ ions contributing light over a continuum of wavelengths and bound-bound downward transitions in atoms and ions at specific wavelengths. The understanding of these processes certainly requires quantum physics and cannot be described by classical electromagnetism. | {
"source": [
"https://physics.stackexchange.com/questions/591825",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/27463/"
]
} |
592,720 | Wikipedia gives the most general expression for the $n^{\rm th}$ moment $\mu_n$ of a physical quantity $\Lambda$ as: $$ \mu_n = \int {\bf x}^n \space \lambda({\bf x}) \space \rm d^3 x$$ provided that we know the spatial distribution density $\lambda({\bf x})$ of $\Lambda$ . Here are some examples (again taken from Wikipedia) concerning the mass $m$ , as a physical quantity: total mass ( $0^{\rm th}$ moment) $M = \int \varrho ({\bf r}) \space d\tau $ centre of mass ( $1^{\rm st}$ moment, normalized) $ {\bf R}_M = {1 \over M} \int {\bf r}\space \varrho({\bf r}) \space d\tau $ moment of inertia ( $2^{\rm nd}$ moment) $I =\int r^2 \varrho ({\bf r}) \space d\tau$ I'd really like to have an analogous list for the electric charge $q$ . Now, the $0^{\rm th}$ moment is obviously the total electric charge: $$Q = \int \rho({\bf r}) \space d\tau $$ As $1^{\rm st} $ moment the only example I can mention is the dipole moment : $$ {\bf p} = \int {\bf r} \space \rho({\bf r}) \space d\tau $$ but I can't give it a meaningful interpretation... Is it the position vector for the "centre of charge"? Well no, I think. In the simple case of a dipole, with +q and -q separated by a distance ${\bf d}$ going from -q to +q, this so-called 'centre of charge' should be zero and placed in between the two (using symmetry arguments), while ${\bf p} = \rm q \bf d $ , clearly not zero. So how should I think of it? For the $2^{\rm nd}$ moment I have no clues...
Is it even reasonable to have an equivalent "moment of inertia" for electric charge? Actually (see Griffiths), there is a general expression involving all the moments; it's called the multipole expansion of the potential $V$ in powers of $1 \over r$ ( Legendre polynomials ). Namely, $$V({\bf r}) = {1\over 4\pi\epsilon_0} \sum_{n=0}^{\infty} {1\over r^{n+1}} \int (r')^n \space {\rm P}_n({\rm cos \space \!}\alpha) \space \rho({\bf r}') \space d\tau', $$ with coordinates as in the image accompanying the question. What is the explanation for the dipole moment? | For a globally-charged system: if your system has a nonzero net charge $Q$ , then the quantity $$
\mathbf r_\mathrm{COC} = \frac{1}{Q} \int \mathbf r\; \rho(\mathbf r)\mathrm d\mathbf r
\tag 1
$$ defines the center of charge of the distribution. How is this concept useful? Well, in short, if your charge distribution is localized and you're fairly far away from it, then it's reasonable to try to approximate its electrostatic potential as a point-charge source, i.e., $$
V(\mathbf r) \approx \frac{Q}{4\pi\epsilon_0} \frac{1}{|\mathbf r-\mathbf r_0|},
\tag 2
$$ where $\mathbf r_0$ is the position of the putative point charge. The center of charge $\mathbf r_\mathrm{COC}$ in $(1)$ is the optimal position for this $\mathbf r_0$ , i.e., the approximation $(2)$ works best when $\mathbf r_0 = \mathbf r_\mathrm{COC}$ . For a neutral system, on the other hand, two things happen: the definition $(1)$ for $\mathbf r_\mathrm{COC}$ becomes undefined, since $Q=0$ ; and the approximation $(2)$ becomes meaningless, since it just returns $V(\mathbf r) \approx 0$ . And, while the potential is generally "small" in some sense, it's generally never exactly zero -- and the approximation $(2)$ , however true, is definitely not useful. So, what are we to do? The answer comes from understanding where the approximation $(2)$ comes from. In short, the full potential is known to be of the form $$
V(\mathbf r)
=
\frac{1}{4\pi\epsilon_0}
\int \frac{\rho(\mathbf r')}{|\mathbf r-\mathbf r'|}\mathrm d\mathbf r',
\tag 3
$$ and the approximation $(2)$ comes from a Taylor expansion of the Coulomb kernel, $\frac{1}{|\mathbf r-\mathbf r'|}$ , in powers of $(\mathbf r' - \mathbf r_0)/|\mathbf r-\mathbf r_0|$ (where $\mathbf r_0$ is an arbitrarily-chosen origin, and $\mathbf r$ should be "outside" the charge distribution, i.e., further away from $\mathbf r_0$ than any of the $\mathbf r'$ ). As it turns out, the first couple of terms of this expansion are reasonably simple to work out: $$
\frac{1}{|\mathbf r-\mathbf r'|}
=
\frac{1}{|\mathbf r-\mathbf r_0|}
+
\frac{(\mathbf r-\mathbf r_0)\cdot (\mathbf r'-\mathbf r_0)}{|\mathbf r-\mathbf r_0|^3}
+
\mathcal O\mathopen{}\left(\frac{|\mathbf r'-\mathbf r_0|^2}{|\mathbf r-\mathbf r_0|^3}\right)\mathclose{}.
\tag 4
$$ If you then plug this $(4)$ back into $(3)$ , you basically get back $(2)$ , with an additional correction: $$
V(\mathbf r)
\approx
\frac{Q}{4\pi\epsilon_0} \frac{1}{|\mathbf r-\mathbf r_0|}
+
\frac{1}{4\pi\epsilon_0}
\frac{\mathbf r-\mathbf r_0}{|\mathbf r-\mathbf r_0|^3}
\cdot
\int(\mathbf r'-\mathbf r_0)\rho(\mathbf r')\mathrm d\mathbf r'.
\tag{5}
$$ The coefficient in that last term, $$
\mathbf d = \int(\mathbf r'-\mathbf r_0)\rho(\mathbf r')\mathrm d\mathbf r',
\tag 6
$$ is the dipole moment (relative to the origin $\mathbf r_0$ ), so we can reexpress $(5)$ in a cleaner form: $$
V(\mathbf r)
\approx
\frac{Q}{4\pi\epsilon_0} \frac{1}{|\mathbf r-\mathbf r_0|}
+
\frac{1}{4\pi\epsilon_0}
\mathbf d\cdot
\frac{\mathbf r-\mathbf r_0}{|\mathbf r-\mathbf r_0|^3}.
\tag{7}
$$ For one thing, this provides a proof for the assertion I made earlier about globally-charged systems: since $\mathbf d = \int\mathbf r'\rho(\mathbf r')\mathrm d\mathbf r'-Q\mathbf r_0 $ , it's straightforward to show that $\mathbf d$ vanishes if $\mathbf r_0 = \mathbf r_\mathrm{COC}$ , and therefore so does the correction. On the other hand, for a neutral system, we know that $Q=0$ , which means that the first term in $(7)$ vanishes (as we already knew). However, we're now better equipped than before, because we now have a sub-leading term that can provide us a useful approximation: $$
V(\mathbf r)
\approx
\frac{1}{4\pi\epsilon_0}
\mathbf d\cdot
\frac{\mathbf r-\mathbf r_0}{|\mathbf r-\mathbf r_0|^3}.
\tag{8}
$$ Moreover, as a bonus, the relation $\mathbf d = \int\mathbf r'\rho(\mathbf r')\mathrm d\mathbf r'-Q\mathbf r_0 $ tells us that $\mathbf d$ is independent of the choice of origin $\mathbf r_0$ , since $Q=0$ . (Here, you might wonder about the $\mathbf r_0$ s remaining in $(8)$ $-$ can this position be optimized to make $(8)$ better? funny you should ask !) I won't go on for longer because this is getting quite long, but that's the general idea. As you've already noted, the full framework is that of multipolar expansions . In short, they basically allow us to write the electrostatic potential $(3)$ (which is a hassle $-$ a full new integral for every $\mathbf r$ !) as a series of terms of the form $$
V(\mathbf r)
=
\frac{1}{4\pi\epsilon_0}
\sum_{\ell=0}^\infty
\frac{\mathrm{poly}_\ell(\mathbf Q_\ell,\mathbf r)}{|\mathbf r|^{\ell+1}},
\tag 9
$$ where $\mathrm{poly}_\ell(\mathbf Q_\ell,\mathbf r)$ is a homogeneous polynomial in the coordinates of $\mathbf r$ of degree $\ell$ , and where the coefficients of that polynomial, the $\mathbf Q_\ell$ (known as the multipole moments), are integrals of the form $$
Q_{\ell} = \int \mathrm{poly}_{\ell}(\mathbf r'-\mathbf r_0)\rho(\mathbf r')\mathrm d\mathbf r'.
\tag{10}
$$ This expansion is useful on many counts, but to start with, it has the key property that the factor of $|\mathbf r|^{-(\ell+1)}$ ensures that the various terms become less and less relevant with increasing $\ell$ , so if we fix $|\mathbf r|$ and our tolerance for approximation, we can truncate the series in terms of just a few integrals to calculate. More intuitively, the multipole moments $(10)$ capture more and more detail about the angular shape of the charge distribution as $\ell$ grows, and this in turn provides more and more detail about the angular shape of $V(\mathbf r)$ $-$ and, moreover, our calculation shows that the higher- $\ell$ terms with higher detail will decay faster as you get away from the distribution. | {
"source": [
"https://physics.stackexchange.com/questions/592720",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/277655/"
]
} |
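A numerical sketch of equations $(6)$–$(8)$ above (the charge values and geometry are arbitrary illustrative choices): for a neutral pair of opposite charges, the dipole term reproduces the exact potential increasingly well as the field point moves away.

```python
import numpy as np

eps0 = 8.8541878128e-12
charges = [(+1e-9, np.array([0.0, 0.0, +0.01])),   # +1 nC at z = +1 cm
           (-1e-9, np.array([0.0, 0.0, -0.01]))]   # -1 nC at z = -1 cm  (net charge Q = 0)

# Dipole moment, eq. (6); with Q = 0 the choice of origin r0 does not matter.
d = sum(q * r for q, r in charges)

def V_exact(r):
    return sum(q / (4 * np.pi * eps0 * np.linalg.norm(r - rq)) for q, rq in charges)

def V_dipole(r):                       # eq. (8) with r0 at the origin
    return np.dot(d, r) / (4 * np.pi * eps0 * np.linalg.norm(r)**3)

for R in (0.05, 0.2, 1.0):             # distances in metres, along a diagonal direction
    r = R * np.array([1.0, 0.0, 1.0]) / np.sqrt(2)
    print(R, V_exact(r), V_dipole(r))  # the relative error shrinks roughly like (0.01/R)^2
```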
592,790 | How do scientists know that the minimum limit of temperature is -273 degrees Celsius? I wonder how scientists confirmed that there is no place in the universe colder than -273 degrees Celsius. Why is the scale exactly -273 degrees Celsius? | Historically, the value of -273.15 °C was extrapolated from the equation of state of an ideal gas. In fact, theoretically (without considering quantum mechanics), at 0 K gases would have zero volume: $ pV = nRT $ Another way to see it is that at -273.15 °C (or 0 K) atoms and molecules have zero kinetic energy; in other words they stop any movement, rotation or vibration, and since there is no way to go lower than "staying still" we can say that this is the lower limit of temperature. (In reality this is much more complicated and this is just a very simplified way to see this.)
Nowadays the value of -273.15 can also be retrieved by considering many physical processes. The value of 0 K is demonstrated to be the lower limit of temperature, and it is also demonstrated that it is impossible to reach this value. Some statistical-mechanics considerations allow for "negative temperatures", but just for short periods of time, in accordance with Heisenberg's uncertainty principle. On the other hand, from a physicist's point of view, your question is "wrong", since for a physicist 0 K is just 0 K and the "correct" question would be "why do we define 0 °C to be 273.15 K?". In fact the Kelvin temperature scale is based only on theoretical considerations and on "nature's behaviour". The Celsius scale instead is just a convention that we use to represent the temperature. Obviously the "width" of a kelvin is also a convention; in fact in most applications and equations you would always see the ratio of two temperatures (which is the same no matter the width of the degree you use) or you would just use the conversion of temperature to energy through Boltzmann's constant $k_B$ . | {
"source": [
"https://physics.stackexchange.com/questions/592790",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/145862/"
]
} |
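A sketch of the historical extrapolation described above, with synthetic ideal-gas data (the numbers are illustrative, not real measurements): fitting volume against Celsius temperature and extrapolating to zero volume lands at about −273.15 °C.

```python
import numpy as np

# Charles's-law-style extrapolation: V = V0 * (1 + t/273.15) at constant pressure,
# with V0 = 22.414 L for one mole of ideal gas (synthetic, noise-free "data").
t_C = np.array([0.0, 25.0, 50.0, 100.0])          # temperatures in Celsius
V   = 22.414 * (1.0 + t_C / 273.15)               # volumes in litres

slope, intercept = np.polyfit(t_C, V, 1)          # linear fit V = slope*t + intercept
print(-intercept / slope)                         # ~ -273.15: the temperature where V extrapolates to zero
```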
593,039 | Consider a metal stick, say iron or aluminum. From the experience, even if it's resilient, bend it forward and backward a couple of times, it would be broken. However, consider a thin iron foil or thin aluminum foil. From the experience, we know that it could be bend forward and backward for almost as many time as time was permitted. How to explain this in solid states? Why was it that the thin foil seemed to be much more deformable than stick?(Does it has anything to do with the fact that in the normal direction, the metallic bound was weak?) Why thin foil doesn't break? | Almost all solid metals are made up of individual small crystals called grains. A small stretching movement will simply stretch the crystal lattice of each grain a little, so the whole thing bends. When you flex thin foil, it is so thin that the stretching distance is small and the grains can deform to match. But with a thicker rod, the stretching is much bigger and the stress force it creates in the material is much higher. The outermost grain boundaries (the furthest stretched) will begin to pull apart, creating surface cracks in the metal. Each time you flex it, these cracks grow until they pass right through and the thing snaps in two. If you look closely at such a "fatigue" fracture with a magnifying glass, you can sometimes see the individual crystals forming a rough surface. Or, sometimes you can see the individual "waves" as the crack progressed at each stress peak. The formation and behaviour of these grains, and the factors which control them, is the principal phenomenon studied by metallurgists. | {
"source": [
"https://physics.stackexchange.com/questions/593039",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/209383/"
]
} |
593,122 | When I look at Snell's law $\frac{\sin\theta_2}{\sin\theta_1} = \frac{v_2}{v_1} = \frac{n_1}{n_2}$ I don't see any reference to wavelength. If red and blue have the same speed in the same medium, why do they refract differently?
What am I missing? | In general, red and blue light do not travel at the same speed in a non-vacuum medium, so they have different refractive indices and are refracted by different amounts. This phenomenon is known as dispersion . | {
"source": [
"https://physics.stackexchange.com/questions/593122",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/128457/"
]
} |
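A small worked example of the answer's point (the refractive-index values are typical of a crown glass and are my assumption, not from the answer): plugging slightly different indices for red and blue into Snell's law gives slightly different refraction angles.

```python
import numpy as np

theta1 = np.radians(45.0)      # common angle of incidence, from air
n_air = 1.000
for name, n_glass in [("red", 1.513), ("blue", 1.528)]:
    theta2 = np.arcsin(n_air * np.sin(theta1) / n_glass)   # Snell: n1 sin(t1) = n2 sin(t2)
    print(name, round(np.degrees(theta2), 2))
# Blue bends slightly more (~27.6 deg vs ~27.9 deg) because its index is higher: dispersion.
```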
593,185 | Let's consider a classic mercury thermometer. I do not understand why it does not behave like a "normal" thermometer, which exploits volume dilatation. In a normal thermometer, I'd say that the mercury length would be proportional to its temperature. Therefore, I should be able to measure, for instance, a body temperature of 37, even when starting with the thermometer at 38: there would be a contraction, but the measurement would be correct! Why does this not happen? And why, if I measure for instance 38 and I try to cool the thermometer by putting it inside cold water, does it not become cooler? Why should I cool it by shaking it? It seems a very non-ideal thermometer... but what are the causes of these non-idealities? | I think you are speaking of a clinical thermometer, which records the maximum temperature it reaches. The thermometer has a narrow kink in the bore near the bulb that causes the mercury thread to break at that point when the volume of mercury in the bulb shrinks (the image you've posted actually shows that). As a consequence the top of the thread does not retract from the high-point reading. (One might worry about the mercury above that break-point shrinking, but there is very little mercury in the thread; most of it is in the bulb. Consequently there is little effect from the volume of the thin thread getting smaller.) The reason that the thermometer is designed this way is so that the doctor or nurse can take their time in reading the thermometer --- which would otherwise begin to read lower temperatures as soon as it is removed from the patient's mouth, or wherever. Shaking the thermometer after it has cooled to room temperature causes the mercury in the broken thread to reconnect with the mercury in the bulb, allowing it to be used again. | {
"source": [
"https://physics.stackexchange.com/questions/593185",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/233022/"
]
} |
593,675 | Kerr metric has the following form: $$
ds^2 = -\left(1 - \frac{2GMr}{r^2+a^2\cos^2(\theta)}\right) dt^2 +
\left(\frac{r^2+a^2\cos^2(\theta)}{r^2-2GMr+a^2}\right) dr^2 +
\left(r^2+a^2\cos^2(\theta)\right) d\theta^2
+ \left(r^2+a^2+\frac{2GMra^2}{r^2+a^2\cos^2(\theta)}\right)\sin^2(\theta) d\phi^2 -
\left(\frac{4GMra\sin^2(\theta)}{r^2+a^2\cos^2(\theta)}\right) d\phi\, dt
$$ This metric describes a rotating black hole. If one considers $M=0$ : $$
ds^2 = - dt^2 +
\left(\frac{r^2+a^2\cos^2(\theta)}{r^2+a^2}\right) dr^2 +
\left(r^2+a^2\cos^2(\theta)\right) d\theta^2
+
\left(r^2+a^2\right)\sin^2(\theta) d\phi^2
$$ This metric is a solution of the Einstein equations in vacuum. What is the physical interpretation of such a solution? | It's simply flat space in Boyer-Lindquist coordinates . By writing $\begin{cases}
x=\sqrt{r^2+a^2}\sin\theta\cos\phi\\
y=\sqrt{r^2+a^2}\sin\theta\sin\phi\\
z=r\cos\theta
\end{cases}$ you'll get good ol' $\mathbb{M}^4$ . | {
"source": [
"https://physics.stackexchange.com/questions/593675",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/203557/"
]
} |
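A minimal symbolic check of the claim in the answer above: pulling the flat Euclidean metric back through the quoted coordinate change reproduces the spatial part of the $M=0$ line element. The sympy script below is only a sketch (the variable names are mine).

```python
import sympy as sp

r, theta, phi, a = sp.symbols('r theta phi a', positive=True)

# Cartesian coordinates in terms of (r, theta, phi), as quoted in the answer
x = sp.sqrt(r**2 + a**2) * sp.sin(theta) * sp.cos(phi)
y = sp.sqrt(r**2 + a**2) * sp.sin(theta) * sp.sin(phi)
z = r * sp.cos(theta)

J = sp.Matrix([x, y, z]).jacobian((r, theta, phi))   # Jacobian of the coordinate change
g = sp.simplify(J.T * J)                             # pullback of dx^2 + dy^2 + dz^2
print(g)
# Should come out diagonal:
#   g_rr     = (r^2 + a^2*cos(theta)^2) / (r^2 + a^2)
#   g_thth   =  r^2 + a^2*cos(theta)^2
#   g_phiphi = (r^2 + a^2) * sin(theta)^2
# i.e. exactly the spatial part of the M = 0 line element (the -dt^2 part is already flat).
```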
593,680 | You have two possible configurations for doing push-ups. In both configurations your arms are placed at 90 degrees from your body. The first position is a normal push-up, where your hands are placed on the ground. In the second push-up position, you have your legs on the ground and your hands on a cube. Which of these two configurations is the easiest, and why, according to force diagrams? I thought that maybe the first one is the easiest because your hands are directly on the ground and therefore the only force you have to fight against is gravity, while in the second position your hands are placed on a block and you have to keep that block in position while doing the push-up, so you have more force to fight against. Maybe I'm wrong... | It's simply flat space in Boyer-Lindquist coordinates . By writing $\begin{cases}
x=\sqrt{r^2+a^2}\sin\theta\cos\phi\\
y=\sqrt{r^2+a^2}\sin\theta\sin\phi\\
z=r\cos\theta
\end{cases}$ you'll get good ol' $\mathbb{M}^4$ . | {
"source": [
"https://physics.stackexchange.com/questions/593680",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/274957/"
]
} |
593,781 | As a scenario, let's take a box filled with air in empty space with no gravitational field around. As the box is opened, the air inside will rush outside and the box will move in the opposite direction because of Newton's third law of motion, but what exactly is pushing the box, or where does the force come from? | Before the box is opened, air molecules are bouncing off of every surface of the box. Let's focus on the surface with the door (call it the "front" wall), and the opposite surface (the "back" wall). Molecules bouncing off the front wall exert a force on the box in the forward direction, but these forces are balanced by molecules bouncing off of the back wall. When the door is opened, molecules simply escape through the door and no longer bounce off the front wall. Therefore the forces due to molecular collisions on the back wall are not compensated by collisions with the front wall, and there is a net backwards force on the box. | {
"source": [
"https://physics.stackexchange.com/questions/593781",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/279758/"
]
} |
593,874 | We know that at points A and B there is the same voltage. Then why do electrons move from one point to the other? | "We know that at points A and B there is the same voltage." There isn't. There will be enough of a voltage difference between A and B to drive a current through the wire. The resistance of the wire between A and B is, we assume, very low, so only a very small voltage difference will be needed. | {
"source": [
"https://physics.stackexchange.com/questions/593874",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/259367/"
]
} |
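As a rough illustration of "very low but not zero": assuming, purely for illustration, a 1 A current and a 0.01 Ω wire segment between A and B, Ohm's law gives the tiny voltage difference that actually drives the current.

```python
# Rough illustration that "same voltage" is only approximate: a real wire has a small
# resistance, and Ohm's law sets the small drop needed to drive the current.
I = 1.0     # A, assumed current through the wire (illustrative)
R = 0.01    # ohm, assumed resistance of the wire segment between A and B (illustrative)
print(f"voltage difference between A and B ~ {I * R * 1e3:.0f} mV")   # ~10 mV
```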
593,897 | From the wikipedia article In physics, a physical system is a portion of the physical universe chosen for analysis. Everything outside the system is known as the environment. The environment is ignored except for its effects on the system. Can we say this mathematically and very generally? This would first require a general definition of what a "physical universe" is, and then what a "portion of it" would be. It would have to track the state of the "portion" over time, dependent on the universe state, and also track the effects of the environment on that portion. | "We know that [at] point A and B there is same voltage." There isn't. There will be enough of a voltage difference between A and B to drive a current through the wire. The resistance of the wire between A and B is, we assume, very low, so only a very small voltage difference will be needed. | {
"source": [
"https://physics.stackexchange.com/questions/593897",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/279764/"
]
} |
594,164 | I'm watching Interstellar and as a huge gravity geek I'm loving it. I have some doubts about the accuracy of the wormhole visualizations, but I want to double check because I heard they had physicists advising them while making the film. Some of the shots while they are in the wormhole look kind of unrealistic like they were trying to make it feel like a tunnel, but that's not what I want to ask about. I want to talk about the shots looking at the wormhole from the outside before they went in. You can watch the whole scene here . My main problem is that I expected to see duplicate images of things near the horizon of the wormhole. You are supposed to be able to see infinitely many copies of things on both sides of the wormhole as your eyes approach the horizon, although the copies get smaller and smaller. Instead the horizon just looks black in the film. Some explanations could be that the copies are too small, or too dim, or the light is stretched into a frequency band that we can't see. Is there a reason the horizon is black, or is this just an inaccuracy in the film? Now that I think about it, shouldn't things look all red shifted and blue shifted near wormhole? I have less issue with that because if they were realistic with the colors it could look ugly. There's a video game about special relativity called A Slower Speed of Light that is accurate with the colors, but it doesn't look very good. | TL;DR: Wormholes are entirely speculative, so they allowed themselves a great deal of leeway. In particular, they devised a wormhole metric without any mechanism explaining how it would actually exist (beyond hand-wavy nods toward a fifth dimension); they put in some more-or-less imagined astronomical objects as sources of light; and they sometimes tweaked/fudged brightness and color to make things look more interesting. But given those conditions, the photon trajectories were modeled quite accurately. I heard they had physicists advising them while making the film Yes. In fact, Nobel laureate Kip Thorne was one half of the original team behind the film, and an executive producer and scientific consultant on the final result, as well as the author of the book The Science of Interstellar . He's also one of the world's leading authorities on the theory of wormholes (among other things), and worked with the visual effects team to ensure that the visuals were essentially consistent with currently known physics — with some artistic license granted to make things prettier. There's a peer-reviewed paper available here (and another that may be of interest here ). Thorne was one of my PhD supervisors, and he was in the early stages of development on the movie at the time, so I got a little insight into the process. I particularly recall him saying that the objective was to present phenomena that were fantastical, but not specifically forbidden by our current knowledge of physics. I expected to see duplicate images of things near the horizon of the wormhole. You are supposed to be able to see infinitely many copies of things on both sides of the wormhole as your eyes approach the horizon, although the copies get smaller and smaller Multiple images presumably are present, but there's so much going on that it's hard to notice, or we see only closeups that don't actually span more than one. If I look carefully at certain frames, I can mostly persuade myself that there are multiple images of certain objects — though they are naturally quite distorted. 
It's also easy to see the Einstein ring, with stars zipping around it, and presumably being multiply lensed — though it's hard to pick out duplicate points of light. Moreover, the paper about the visual-effects development shows and describes multiple images in several places. But the paper illustrates them by showing a scene dominated by Saturn, so it's easy to pick out the multiple images; there are so many nebulous elements being layered into the movie that it's harder to discern what's what. Also, in a different part of the movie, we see a black hole with an accretion disk, where the multiple images are clearly visible. Now that I think about it, shouldn't things look all red shifted and blue shifted near wormhole? I'm not sure what you mean, but not necessarily. There are certainly some effects here, but also note that just because a photon passes close to a horizon, that doesn't mean it ends up with significantly different energy (and thus a change to its wavelength). Just imagine a photon emitted by some distant star, passing close to an ordinary (Schwarzschild) black hole, and escaping to be observed by some distant observer. If the star, the black hole, and the observer are all basically at rest with respect to each other, the observed photon will have basically the same wavelength as when it was emitted — or if it hadn't passed near the black hole at all — because whatever energy it gained on moving towards the black hole it lost on moving away from it. It is possible for the photon to gain net energy if the black hole is spinning very rapidly, or moving very rapidly relative to the emitter or observer. As for an observer actually passing close to a horizon (or entering a wormhole), it's important to remember that the observer is also being affected. For example, when falling towards a black hole, an observer is accelerated, meaning that much of the blueshift a photon may be given will be canceled out by the redshift due to the observer's motion. if they were realistic with the colors it could look ugly. It is true that they weren't always scientifically precise with intensity and colors, because the completely accurate versions were less aesthetically pleasing or more confusing. This is discussed in section VI of the paper . There's also an informative blog post about that in the context of the black hole with the accretion disk here . | {
"source": [
"https://physics.stackexchange.com/questions/594164",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/260328/"
]
} |
594,468 | If we hold one end of a slinky and leave the other end free, the Earth's gravity applies a force on the slinky and it expands. If we do the same on the Moon with the same slinky, will the acquired height of the slinky be different? | Simple answer: yes. Think about two extreme cases: How much does a slinky extend in gravity-free space? None at all. How much would it extend if it were on, say, Jupiter, or even near a black hole? It should extend by a large amount. Gravity does play a role. | {
"source": [
"https://physics.stackexchange.com/questions/594468",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/277894/"
]
} |
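For a rough feel of the numbers: a uniform spring of total mass $m$ and overall spring constant $k$ hanging under its own weight stretches by about $mg/(2k)$, since each turn carries the weight of everything below it. This ignores the pre-tension of a real slinky, and the $m$ and $k$ values below are made up.

```python
# Rough estimate of how much a hanging slinky stretches under its own weight.
# For a uniform spring of total mass m and overall spring constant k, each turn carries
# the weight of everything below it; integrating gives  delta_L ~ m*g / (2*k).
# The m and k values are made-up, slinky-like numbers; real slinkies also have pre-tension.
m, k = 0.25, 1.0          # kg, N/m
for body, g in [("Earth", 9.81), ("Moon", 1.62)]:
    print(f"{body}: extension ~ {m * g / (2 * k):.2f} m")
# The Moon figure is about six times smaller -- gravity clearly matters.
```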
594,476 | In general relativity, if there is a line element of the form $$ds^2 = [f(u, v)]du^2 + [h(u, v)]dvdu + [w(u, v)]dv^2$$ which I believe corresponds to metric coefficients $$g_{00} = f(u, v)$$ $$g_{01} = \frac{1}{2}h(u, v)$$ $$g_{10} = \frac{1}{2}h(u, v)$$ $$g_{11} = w(u, v)$$ Does one have to 'guess' a coordinate transformation which diagonalizes this matrix and then rescale it to a Minkowski metric to show we are in a locally flat spacetime, around a given point $P$ ? Is there not a more systematic way than just guessing a transformation? Is it necessary to work through and find the eigenvalues and eigenvectors? I have also seen some answers which refer to Taylor-expanding the metric around a given point $P$ w.r.t some coordinate transformation such as $$g_{ij} = g_{ij}(P) + \frac{\partial g_{ij}(P)}{\partial x^k} + \frac{1}{2}\frac{\partial \partial g_{ij}}{\partial x^l \partial x^k} + ...$$ where I'm assuming $x^k$ is another coordinate, but again this seems to require guessing the correct transformation and hoping for the best, which seems like it could take a long time if you have nasty functions in your metric. Does the Taylor expansion need to be with respect to another coordinate by using some transformation or do we just expand each component in the metric around a given point? | Simple answer yes, Think about taking two extreme cases : How much does a slinky extend in a gravity-free space? None at all How much would it extend if it was on perhaps Jupiter or even a black hole ?It should extend by a large amount. Gravity does play a role. | {
"source": [
"https://physics.stackexchange.com/questions/594476",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/280025/"
]
} |
594,479 | For an Intro. Thermal Physics course i am taking this year, I had a simple problem which threw me off-guard, I would appreciate some input to see where i am lacking. The problem is as follows: Does the entropy of the substance decrease on cooling? If so, does the total entropy decrease in such a process? Explain. Here is how i started this: ->Firstly, for a body of mass m and specific heat, c (assuming it is constant) the heat absorbed by the body for an infinitesimal temperature change is $dQ=mcdT$ . ->Now if we raise the temperature of the body from $T_1$ to $T_2$ , the entropy change associated with this change in the system is $\int_{T_1}^{T_2}mc\frac{dT}{T}=mcln\frac{T_1}{T_2}$ . This means the entropy of my system has increased. Up to this was fine. I face difficulty in the folowing: <*> Is this process, the act of heating this solid, a reversible or an irreversible one? Now, I know that entropy is a state variable, so even if it was irreversible, so to calculate the entropy change for the system during this process we must find a reversible process connecting the same initial and final
states and calculate the system entropy change. We can do so if we imagine that we have at our disposal a
heat reservoir of large heat capacity whose temperature T is at our control. We first adjust the reservoir temperature to $T_1$ and put the object in contact with the reservoir. We then slowly (reversibly) raise the reservoir temperature from $T_1 to T_2$ . The body gains entropy in this process, the amount i have calculated above. According to the main problem, if i were to reverse this process and slowly lower the temperature of the body from $T_2$ to $T_1$ wouldn't the opposite were to happen? i.e. the body loses entropy to the reservoir, the same amount as calculated above, but different signs? <*> From above discussion, can i say that the net entropy of the system+surroundings is zero? Had it been a reversible process then from the second law i know it would've been zero, even if it is irreversible, as long as i connect the same two states with a reversible path, the net still comes out to be zero. Am i right to think of it as such? I had this problem of discerning which is reversible/irreversible for a while. | Simple answer yes, Think about taking two extreme cases : How much does a slinky extend in a gravity-free space? None at all How much would it extend if it was on perhaps Jupiter or even a black hole ?It should extend by a large amount. Gravity does play a role. | {
"source": [
"https://physics.stackexchange.com/questions/594479",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/257503/"
]
} |
594,482 | I understand that an RGB LED produces light in three constrained wavelength ranges and that any color beyond R, G, or B (orange, say) is due to the interaction of the cells in our eyes with the various wavelengths of light from the LED. I am interested in building a spectrophotometer. It does not have to be highly precise, but I do want to be able to trust my data. If I shine emulated orange light through a sample and onto a detector, will the effect be in any way similar to shining actual orange light, as my eye perceives, or will the sample essentially be responding only to the R, G, B light? | Simple answer yes, Think about taking two extreme cases : How much does a slinky extend in a gravity-free space? None at all How much would it extend if it was on perhaps Jupiter or even a black hole ?It should extend by a large amount. Gravity does play a role. | {
"source": [
"https://physics.stackexchange.com/questions/594482",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/120723/"
]
} |
594,496 | It is often stated that the property of spin is purely quantum mechanical and that there is no classical analog. To my mind, I would assume that this means that the classical $\hbar\rightarrow 0$ limit vanishes for any spin-observable. However, I have been learning about spin coherent states recently (quantum states with minimum uncertainty), which do have a classical limit for the spin. Schematically, you can write down an $SU(2)$ coherent state, use it to take the expectation value of some spin-operator $\mathcal{O}$ to find $$
\langle \mathcal{\hat{O}}\rangle = s\hbar*\mathcal{O},
$$ which has a well defined classical limit provided you take $s\rightarrow \infty$ as you take $\hbar\rightarrow 0$ , keeping $s\hbar$ fixed. This has many physical applications, the result usually being some classical angular momentum value. For example, one can consider a black hole as a particle with quantum spin $s$ whose classical limit is a Kerr black hole with angular momentum $s\hbar*\mathcal{O}$ . Why then do people say that spin has no classical analog? | You're probably overthinking this. "Spin has no classical analogue" is usually a statement uttered in introductory QM, where we discuss how a quantum state differs from the classical idea of a point particle. In this context, the statement simply means that a classical point particle as usually imagined in Newtonian mechanics has no intrinsic angular momentum - the only component to its total angular momentum is that of its motion, i.e. $r\times p$ for $r$ its position and $p$ its linear momentum. Angular momentum of a "body" in classical physics implies that the body has an extent and a quantifiable motion rotating around its c.o.m., but it does not in quantum mechanics. Of course there are many situations where you can construct an observable effect of "spin" on the angular momentum of something usually thought of as "classical". These are just demonstrations that spin really is a kind of angular momentum, not that spin can be classical or that the angular momentum you produced should also be called "spin". Likewise there are classical "objects" that have intrinsic angular momentum not directly connected to the motion of objects, like the electromagnetic field, i.e. it is also not the case that classical physics does not possess the notion of intrinsic angular momentum at all. "Spin is not classical" really is just supposed to mean "A classical Newtonian point particle possesses no comparable notion of intrinsic angular momentum". (Note that quantization is also not a particular property of spin, since ordinary angular momentum is also quantized, as seen in e.g. the azimuthal quantum number of atomic orbitals) | {
"source": [
"https://physics.stackexchange.com/questions/594496",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/15764/"
]
} |
594,849 | I just heard someone mention that photons take 40 thousand years to travel from the centre of the Sun to its
surface which is roughly 700,000 kilometres. How is that possible if the speed of light/photons is 300,000 km/second? | Well, photons always travel at the speed of light (in a vacuum and in this case between particle collisions - see below) about $3 \times 10^8 \ m/s$ and they are being slowed down in this
scenario, but not the way you think and not because of the Sun's gravitational field. You should also note that the photon emitted at the centre of the Sun and the one escaping
at the Sun's surface are not the "same" photon. Because the Sun is very dense, a photon emitted at the core will be absorbed by a nearby proton almost immediately, and the proton will vibrate and then re-emit another photon in a random direction. This happens over and over again trillions of trillions of times, so that by the time it reaches the Sun's surface, thousands of years have passed.
This process is described by what is called a random walk . The distance that a photon can travel before it is absorbed is given by what's called the mean free path and is given by the relation $$l = \frac{1}{\sigma n}$$ (from Wiki) "where $n$ is the number of target particles per unit volume, and $\sigma$ is the effective cross-sectional area for collision." As you can appreciate, the number of target particles (protons) will be very high, making this distance extremely small, so that effectively the photon travels a "vast distance" from within the Sun's core to its surface, taking thousands of years. Then it takes a measly 8 minutes or so to reach us! | {
"source": [
"https://physics.stackexchange.com/questions/594849",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
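An order-of-magnitude version of the random-walk argument in the answer above. A walk with step length $l$ needs roughly $(R/l)^2$ steps to cover a net distance $R$, so the escape time is about $R^2/(lc)$. The millimetre-scale mean free path used below is just an often-quoted rough figure, not a measured input.

```python
# Order-of-magnitude random-walk estimate for a photon escaping the Sun.
# A walk of step length l needs ~ (R/l)^2 steps to cover a net distance R,
# so the escape time is roughly t ~ R^2 / (l * c).
R_sun = 7.0e8      # m
c = 3.0e8          # m/s
l = 1.0e-3         # m, assumed ~1 mm mean free path (an often-quoted rough figure)
t_sec = R_sun**2 / (l * c)
print(f"escape time ~ {t_sec / 3.156e7:.0f} years")   # a few times 10^4 years
```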
594,964 | Many people with long hair tie their hair to ponytail-style: Closely observing the movement of their hair when they are running, I have noticed that the ponytail oscillates only horizontally, that is, in "left-right direction". Never I have seen movement in vertical "up-down" direction or the third direction (away-and-back from the jogger's back). Why is the horizontal direction the only oscillation? | The human gait has a natural bobbing motion, with the head moving slightly up-and-down and side-to-side. The side-to-side motion (swinging on an axis parallel to the nose) turns the ponytail into a natural pendulum which swings back and forth, since this plane of motion is gravitationally symmetric and has nothing to stop the swing. Small driving forces can build up over time, causing a noticeable swing, very similar to how one would use a swing on a swingset. The up-and-down motion (swinging on an axis parallel to the shoulders) does not turn the ponytail into a pendulum, because the hair cannot swing freely on this axis. The problem is, there no mechanism to conserve energy at the bottom of the up-down swing, since the ponytail hits the back of the runner's head and loses all its energy. For the side-to-side swing, there's a constant oscillation of gravitational potential and kinetic energy in the ponytail, which isn't so in an up-and-down swing - when the ponytail reaches the bottom of an up-down swing, it has lost all its potential and kinetic energy, so you can't keep imparting small forces which will grow over time and produce a repeating oscillation. The front-to-back oscillations described in the question are the same as the up-and-down oscillations described in the previous paragraph (swinging along an axis parallel to the shoulders). The third axis of oscillation would be swinging on an axis parallel to the spine, which I think does happen to an extent. But since this axis is parallel to gravity, the ponytail hangs down very close to the axis, and rotations at this small radius tend to be lost in the much larger side-to-side swing. I suspect that the ponytail doesn't swing perfectly in a flat plane along only one axis, but actually wraps "around" the head slightly as it swings side-to-side - there may be a major swing along the axis of the nose, and a minor one along the axis of the spine. In the end, the most noticeable swing is side-to-side along the axis of the nose. Up-and-down oscillations on the axis of the shoulders cannot build up over time with small driving forces. And since the ponytail hangs very close to the third axis of rotation (along the axis of the spine), these are of much smaller magnitude than the obvious swing along the axis of the nose. | {
"source": [
"https://physics.stackexchange.com/questions/594964",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/119008/"
]
} |
596,121 | I've watched The Truth About Gravity With Professor Jim Al-Khalili | Gravity And Me | Spark, where astronaut Chris Hadfield says at 3:55: "To come back to Earth is violent." Then, after several seconds of music and video of descent: "it can be five times the force of gravity... for quite a long time." I was immediately puzzled, since in free fall objects are not subjected to a "gravity force": the rest of the video itself explains that. Just to be sure, I've web-searched and found e.g. Return to Earth: An Astronaut's View of Coming Home, but there more gravity is mentioned when talking about ascent to space. I've tried to read the wiki article Gravity turn, where some mechanics of descent are explained, but found nothing about high acceleration on descent. I consider Jim Al-Khalili a respectable scientist. It looks to me like the issue is a video editor mistake overlooked by reviewers. What other explanation can there be? | Reentry speeds are fast. Astonishingly fast. The shuttle reentered at 7.8km/s. Now note the units. That's "per second." That's 28,158km/hr. And you have roughly 100 vertical kilometers to do that braking in. Yes, the braking gets to be done at a very shallow angle, which means you have more linear distance to brake in than the 100km would suggest, but it's still a very short time to lose a ton of speed! This requires a pretty substantial braking force. The force they are referring to is the aerodynamic force felt by the airframe as it starts to bite into the ever-thickening atmosphere. The steeper one's reentry, the larger the force has to be. This force causes a deceleration, of course. And it is not easy for humans to come to grips with 50m/s^2. It just isn't a concept we have a good intuitive sense of. So what we tend to do is phrase it in terms of gravitational accelerations, dividing out 9.8m/s^2. We can intuitively grasp the idea of feeling like you weigh five times as much as you do when you are standing upright. We endure these brutal reentry forces, of course, because there's a balance to be struck. It would be possible to reenter slower by taking a more shallow angle. However, it can be tricky to control in this environment, and you run into additional heating problems because you spend more time at temperature as you slowly drop your speed. | {
"source": [
"https://physics.stackexchange.com/questions/596121",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/278640/"
]
} |
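To get a feel for the figures quoted in the answer above, here is a quick estimate of how long it takes to shed 7.8 km/s at a steady 5 g and how much path that covers (constant deceleration is of course an idealisation).

```python
# Rough feel for the numbers: shedding 7.8 km/s at a steady 5 g (an idealisation).
v0 = 7800.0        # m/s
a = 5 * 9.81       # m/s^2, the "five times the force of gravity" figure
t = v0 / a
d = v0**2 / (2 * a)
print(f"time ~ {t:.0f} s (~{t / 60:.1f} min), path length ~ {d / 1e3:.0f} km")
# A couple of minutes and several hundred kilometres of shallow path -- "quite a long time".
```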
596,184 | I was carrying my friend around the other day. When I was spinning her she felt light, but when I was not spinning her and just carrying her around she felt heavy. Why is that? | Normally, lifting someone up while stationary requires significant activation of the anterior deltoid (a relatively small muscle). When spinning, that force translates to back muscles, which are significantly larger/stronger, and your deltoids have to do less work. An analogous situation is taking a weight and suspending it from a rope. Holding it stationary in front of you requires your bicep and anterior deltoid to activate, but if you start spinning yourself around, the tension in the rope keeps the weight elevated and now you just need to provide the centripetal force inward to prevent it from flying away from you, which means pulling in a more horizontal direction, and so your bicep can work with your back rather than your shoulder. | {
"source": [
"https://physics.stackexchange.com/questions/596184",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/275176/"
]
} |
596,400 | Some of the "smaller" black holes have a mass of 4-15 suns. But still, they are black holes.
Thus their gravity is so strong that even light cannot escape. Shouldn't this happen to some stars that are even more massive (mass of around 100 suns)?
If their mass is so much bigger, shouldn't their gravity be also bigger? (So they would behave like a black hole). Or does gravity depend on the density of the object as well? | The true answer lies in General Relativity, but we can make a simple Newtonian argument. From the outside, a uniform sphere attracts test masses exactly as if all of its mass was concentrated in the center (part of the famous Shell theorem ). Gravitational attraction also increases the closer you are to the source of gravitation, but if you go inside the sphere, some of the mass of the sphere will form a shell surrounding you, hence you will experience no gravitational attraction from it, again because of the Shell theorem. This is because while the near side of the shell is pulling you towards it, so is the far side, and the forces cancel out, and the only gravitational forces remaining are from the smaller sphere in front of you. Once you get near the center of the sphere, you will experience almost no gravitational pull at all, as pretty much all of the mass is pulling you radially away from the center. This means that if you can get very close to the center of the sphere without going inside the sphere, you will experience much stronger gravitational attraction, as there is no exterior shell of mass to compensate the center of mass attraction. Hence, density plays a role: a relatively small mass concentrated in a very small radius will allow you to get incredibly close to the center and experience incredible gravitational forces, while if the same mass occupies a larger space, to get very close to the center you will have to get inside the mass, and some of the attraction will cancel out. The conclusion is that a small mass can be a black hole if it is concentrated inside a small enough radius. The largest such radius is called the Schwarzschild radius . As a matter of fact our own Sun would be a black hole if it had a radius of less than $3$ km and the same mass, and the Earth would be a black hole if it had a radius of less than $9$ mm and the same mass. | {
"source": [
"https://physics.stackexchange.com/questions/596400",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/254719/"
]
} |
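The Schwarzschild radii quoted at the end of the answer are easy to check directly from $r_s = 2GM/c^2$.

```python
# Check of the Schwarzschild radii quoted at the end of the answer: r_s = 2*G*M/c^2.
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
for name, M in [("Sun", 1.989e30), ("Earth", 5.972e24)]:
    r_s = 2 * G * M / c**2
    print(f"{name}: r_s ~ {r_s:.3g} m")
# Sun   -> about 2.95e3 m  (~3 km)
# Earth -> about 8.9e-3 m  (~9 mm)
```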
597,734 | I know that there are Noether theorems in classical mechanics, electrodynamics, quantum mechanics and even quantum field theory and since this are theories with different underlying formalisms, if was wondering it is possible to find a repeating mathematical pattern. I know that a common "intuitive" explanation is that each symmetry has a corresponding constant quantity - but can we express this in a mathematical way? In other words: Can all Noether theorems be regarded as special cases of one recipe (in mathematical terms) that works for all formalisms? | The core of the Noether theorem in all contexts where it arises is surprisingly elementary! From a very general point of view, one considers the following structure. (i) A set of "states" $x\in \Omega$ , (ii) A one-parameter group of transformations of the states $\phi_u : \Omega\to \Omega$ , where $u\in \mathbb{R}$ . These transformations are requested to satisfy by definition $$\phi_t\circ \phi_u = \phi_{t+u}\:, \quad \phi_{-u}= (\phi_u)^{-1}\:, \quad \phi_0 = \text{id}\tag{0}\:.$$ (iii) A preferred special one-parameter group of transformations $$E_t : \Omega \to \Omega $$ representing the time evolution (the dynamics ) of the physical system whose states are in $\Omega$ . The general physical interpretation is clear. $\phi_u$ represents a continuous transformation of the states $x\in \Omega$ which is additive in the parameter $u$ and is always reversible . Think of the group of rotations of an angle $u$ around a given axis or the set of translations of a length along a given direction. A continuous dynamical symmetry is a one-parameter group of transformations that commutes with the time evolution, $$E_t \circ \phi_u = \phi_u \circ E_t \quad \forall u,t \in \mathbb{R}\:.\tag{1}$$ The meaning of $(1)$ is that if I consider the evolution of a state $$x_t = E_t(x)$$ and I perform a symmetry transformation at each time $$ \phi_u(x_t)\:,$$ then
the resulting time-parametrized curve of states is still a possible evolution with respect the said dynamics $$\phi_u(x_t) = E_t(\phi_u(x))\:.$$ These features are shared by the theory of dynamical systems , Lagrangian mechanics , Hamiltonian mechanics , Quantum Mechanics , general Quantum Theory including QFT. The difference is the mathematical nature of the space $\Omega$ and some continuity/differentiability properties of the map $\mathbb{R} \ni u \mapsto \phi_u$ , whose specific nature depends on the context. The crucial observation is that, once assumed these quite natural properties,
the one-parameter group structure $(0)$ provides a precise meaning of $$X := \frac{d}{du}|_{u=0} \phi_u$$ and, exactly as for the standard exponential maps which satisfies $(0)$ , one has (for us it is just a pictorical notation) $$\phi_u = e^{uX}\:.$$ $X$ is the generator of the continuous symmetry. In quantum theory, $X$ (more precisely $iX$ ) is a self adjoint operator and hence a quantum observable , in dynamical system theory and Lagrangian mechanics $X$ is a vector field, in Hamiltonian mechanics $X$ --written as $X_f$ -- is an Hamiltonian vector field associated to some function $f$ . $X$ (or $iX$ , or $f$ ) has another meaning, the one of observable .
However, it is worth stressing that this interpretation is delicate and strictly depends on the used formalism and on the mathematical nature of the space $\Omega$ (for instance, in real quantum mechanics the said interpretation of $X$ in terms of an associated quantum observable is not possible in general). Now notice that, for a fixed $t\in \mathbb{R}$ , $$u \mapsto E_t\circ e^{uX} \circ E^{-1}_t =: \phi^{(t)}_u$$ still satisfies $(0)$ as it immediately follows per direct inspection. Therefore it can be written as $$E_t\circ e^{uX} \circ E^{-1}_t = e^{uX_t}\tag{3}$$ for some time-depending generator $X_t$ . We therefore have a time-parametrized curve of generators $$\mathbb{R} \ni t \mapsto X_t\:.$$ The physical meaning of $X_t$ is the observable (associated to) $X$ temporally translated to the time $t$ . That interpretation can be grasped from the equivalent form of $(3)$ $$E_t \circ e^{uX} = e^{uX_t} \circ E_t \tag{4}.$$ The similar curve $$\mathbb{R} \ni t \mapsto X_{-t}$$ has the meaning of the time evolution of the observable (associated to) $X$ .
One can check that this is in fact the meaning of that curve in the various areas of mathematical physics I introduced above. In quantum mechanics $X_t$ is nothing but the Heisenberg evolution of $X$ . Noether Theorem . $\{e^{uX}\}_{u\in \mathbb R}$ is a dynamical symmetry for $\{E_t\}_{t\in \mathbb R}$ if and only if $X=X_t$ for all $t\in \mathbb R$ . PROOF.
The symmetry condition $(1)$ for $\phi_t = e^{tX}$ can be equivalently rewritten as $E_t \circ e^{uX} \circ E^{-1}_t = e^{uX}$ . That is, according to $(3)$ : $e^{uX_t} = e^{uX}$ . Taking the $u$ -derivative at $u=0$ we have $X_t=X$ for all $t\in \mathbb R$ . Proceeding backwardly $X_t=X$ for all $t\in \mathbb R$ implies $(1)$ for $\phi_t = e^{tX}$ . QED Since $E_t$ commutes with itself, we have an immediate corollary. Corollary . The generator $H$ of the dynamical evolution $$E_t = e^{tH}$$ is a constant of motion. That is mathematics. Existence of specific groups of symmetries is matter of physics. It is usually assumed that the dynamics of an isolated physical system is invariant under a Lie group of transformations . In classical mechanics (in its various formulations) that group is Galileo's one. In special relativity that group is Poincaré's one. The same happens in the corresponding quantum formulations. Every Lie group of dimension $n$ admits $n$ one-parameter subgroups. Associated to each of them there is a corresponding conserved quantity when these subgroups act on a physical system according to the above discussion. Time evolution is one of these subgroups. The two afore-mentioned groups have dimension $10$ and thus there are $10$ (scalar) conserved quantities. Actually $3$ quantities (associated to Galilean boosts and Lorentzian boosts) have a more complex nature and require a bit more sophisticated approach which I will not discuss here; the remaining ones are well known: energy (time evolution) , three components of the total momentum (translations along the three axes) , three components of the angular momentum (rotations around the three axes) . | {
"source": [
"https://physics.stackexchange.com/questions/597734",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/237923/"
]
} |
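A minimal concrete instance of the abstract statement above, using the free particle: spatial translations commute with the time-evolution flow, and their generator (the momentum) is conserved. This is only an illustrative sketch with my own notation.

```python
import sympy as sp

x, p, m, t, u = sp.symbols('x p m t u')

# Free-particle time evolution and spatial translations, written as maps on states (x, p).
E = lambda tt: (lambda s: (s[0] + s[1] * tt / m, s[1]))   # time evolution E_t
Phi = lambda uu: (lambda s: (s[0] + uu, s[1]))            # symmetry phi_u: translation by u

state = (x, p)
lhs = E(t)(Phi(u)(state))    # E_t composed with phi_u
rhs = Phi(u)(E(t)(state))    # phi_u composed with E_t
print([sp.simplify(a - b) for a, b in zip(lhs, rhs)])     # [0, 0] -> the flows commute

# The generator of the translations is the momentum p, and p is indeed unchanged by E_t:
print(sp.simplify(E(t)(state)[1] - p))                    # 0 -> conserved quantity
```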
597,958 | We all are well aware with the equation of continuity (I guess) which is given by: $$A_1V_1= A_2V_2$$ Where $A_1$ and $A_2$ are any two cross sections of a pipe and $V_1$ and $V_2$ are the speeds of the fluid passing through those cross sections. Suppose $A_1$ is $1\ m^2$ and $V_1$ is $2\ m/s$ and $A_2$ is $10^{-7}\ m^2$ . This will mean that $V_2$ will be $2×10^{7}\ m/s$ which is much closer to the speed of light. But my teacher just said it is not possible. He didn't give a reason. Is he saying this because density of the liquid will change (given mass increases and length contracts with higher speed)? Why can't we use this equation to push fluids to a higher speed? If there is a limit on the maximum speed we can get to with this equation, what is it? For simplicity of calculations, you may take water. | That simplified form of continuity equation assumes that the fluid is incompressible. That is only a valid assumption at low Mach numbers. I think a typical “rule of thumb” is that a Mach number less than 0.3 is required for the assumption to hold. So for the continuity equation to hold in that form requires a speed which is much less than the speed of sound which in turn is much less than the speed of light. You cannot use the continuity equation to achieve supersonic flow, let alone relativistic flow. Note, that is not to say that supersonic flow is impossible, but rather that it is not possible simply by application of that form of the continuity equation. You need a form that accounts for compressibility. Superluminal flow is fundamentally impossible as no massive particle can reach c with finite energy. | {
"source": [
"https://physics.stackexchange.com/questions/597958",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/271783/"
]
} |
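A quick check of where the question's numbers leave the answer's Mach < 0.3 rule of thumb, taking roughly 1480 m/s for the speed of sound in water (an approximate figure).

```python
# Where does the incompressible form A1*V1 = A2*V2 stop being trustworthy?
# Rule of thumb from the answer: Mach number below ~0.3.
A1, V1 = 1.0, 2.0        # m^2, m/s  (numbers from the question)
c_sound = 1480.0         # m/s, approximate speed of sound in water
V_max = 0.3 * c_sound    # largest speed for which "incompressible" is still reasonable
A2_min = A1 * V1 / V_max
print(f"V2 must stay below ~{V_max:.0f} m/s, i.e. A2 above ~{A2_min:.1e} m^2")
# The question's A2 = 1e-7 m^2 would demand V2 = 2e7 m/s -- far outside the model's validity.
```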
597,970 | In the context of solving the eigenvalue equation for an operator $C = A(1) + B(2)$ in terms of the eigenvectors of each of $A(1)$ and $B(2)$ , which are the extended operators from the Hilbert spaces $\scr E_1$ resp. $\scr E_2$ to $\scr E = \scr E_1\otimes\scr E_2$ , the author finds that $$
C|\varphi_n(1)\rangle|\chi_p(2)\rangle = (a_n + b_p)|\varphi_n(1)\rangle|\chi_p(2)\rangle = c_{np}|\varphi_n(1)\rangle|\chi_p(2)\rangle
$$ where $a_n$ and $b_n$ are the eigen values of $A(1)$ and $B(2)$ to $|\varphi_n(1)\rangle$ and $|\chi_p(2)\rangle$ resp., assuming no degeneracy in them . In the case of degeneracy of the eigenvalues of $C$ , the author comments that this may be the case if, e.g., there is two different pairs of indices such that $c_{mq} = c_{np}$ , and in this case the eigenvector of $C$ corresponding to this eigenvalues is of the form $$
\lambda|\varphi_n(1)\rangle|\chi_p(2)\rangle + \mu|\varphi_n(1)\rangle|\chi_p(2)\rangle
$$ which I think he meant (note the indices) $$
\lambda|\varphi_n(1)\rangle|\chi_p(2)\rangle + \mu|\varphi_m(1)\rangle|\chi_q(2)\rangle
$$ Question: ...right ? | That simplified form of continuity equation assumes that the fluid is incompressible. That is only a valid assumption at low Mach numbers. I think a typical “rule of thumb” is that a Mach number less than 0.3 is required for the assumption to hold. So for the continuity equation to hold in that form requires a speed which is much less than the speed of sound which in turn is much less than the speed of light. You cannot use the continuity equation to achieve supersonic flow, let alone relativistic flow. Note, that is not to say that supersonic flow is impossible, but rather that it is not possible simply by application of that form of the continuity equation. You need a form that accounts for compressibility. Superluminal flow is fundamentally impossible as no massive particle can reach c with finite energy. | {
"source": [
"https://physics.stackexchange.com/questions/597970",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/260283/"
]
} |
597,978 | $I(x) = I_0 \cdot e^{~-u \cdot x}$ Where u is the linear attenuation coefficient And how does this relate to the following $N(x) = N_0 \cdot e^{~-u \cdot x}$ Where N is the count rate of the beam typically measured by a GM counter. | That simplified form of continuity equation assumes that the fluid is incompressible. That is only a valid assumption at low Mach numbers. I think a typical “rule of thumb” is that a Mach number less than 0.3 is required for the assumption to hold. So for the continuity equation to hold in that form requires a speed which is much less than the speed of sound which in turn is much less than the speed of light. You cannot use the continuity equation to achieve supersonic flow, let alone relativistic flow. Note, that is not to say that supersonic flow is impossible, but rather that it is not possible simply by application of that form of the continuity equation. You need a form that accounts for compressibility. Superluminal flow is fundamentally impossible as no massive particle can reach c with finite energy. | {
"source": [
"https://physics.stackexchange.com/questions/597978",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/281379/"
]
} |
598,122 | Yesterday, a team of physicists from France announced a breakthrough in nailing down a "magic number" by adding three decimals to the the fine-structure constant ( news article ; technical paper ) $$\alpha^{-1}\approx 137.035\,999\,206(11)$$ To the layman's eyes, 3 more decimals does not seem so spectacular. Why is this such a big deal when it is about the fine-structure constant? | The fine structure constant tells us the strength of the electromagnetic interaction . There are some misleading statements in the news story. The big one is how to read the result, \begin{align}
\alpha_\text{new}^{-1} & = 137.035\,999\,206(11)
\\ &= 137.035\,999\,206\pm0.000\,000\,011
\end{align} The digits in parentheses give the uncertainty in the final digits; you can see that the traditional $\pm$ notation is both harder to write and harder to read for such a high-precision measurement. The new high-precision experiment is better than the average of all measurements as of 2018 , which was $$\alpha_\text{2018}^{-1} = 137.035\,999\,084(21)$$ You can see that the new uncertainty is smaller than the old uncertainty by a factor of about two. But even more interesting is that the two values do not agree : the new result $\cdots206\pm11$ is different from the previous average $\cdots084\pm 21$ by about five error bars. A "five sigma" effect is a big deal in physics, because it is overwhelmingly more likely to be a real physical difference (or a real mistake, ahem) than to be a random statistical fluctuation.
This kind of result suggests very strongly that there is physics we misunderstand in the chain of analysis. This is where discoveries come from. This level of detail becomes important as you try to decide whether the explanations for other puzzles in physics are mundane or exciting. The abstract of the technical paper refers to two puzzles which are impacted by this change: the possibility that a new interaction has been observed in beryllium decays , and the tension between predictions and measurements of the muon’s magnetic moment , which is sensitive to hypothetical new interactions in a sneakier way. | {
"source": [
"https://physics.stackexchange.com/questions/598122",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/664/"
]
} |
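The "about five error bars" statement is easy to reproduce from the two quoted values, combining the uncertainties in quadrature.

```python
import math

# How discrepant are the two alpha^{-1} values quoted in the answer?
new_val, d_new = 137.035999206, 0.000000011
old_val, d_old = 137.035999084, 0.000000021   # the 2018 average quoted above
sigma = abs(new_val - old_val) / math.hypot(d_new, d_old)  # uncertainties combined in quadrature
print(f"difference = {new_val - old_val:.3e}, tension ~ {sigma:.1f} sigma")
# ~5 sigma, matching the "about five error bars" statement.
```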
598,257 | Studying Quantum Mechanics I only thought about Spherical Harmonics $Y_{l,m}(\theta , \phi)$ : $$Y_{l,m}(\theta , \phi)=N_{l,m}P_{l,m}(\theta)e^{im\phi}$$ as the simultaneous eigenfunctions of $L_z$ and $L^2$ .
But then I stumbled on these two statements: The Spherical Harmonics are a complete orthonormal basis for the space of the functions defined on the sphere. $P_{l,0}$ , called Legendre's polynomials, are a complete orthonormal basis on the circle. These two statements were given to me without any further explanation.
Given my current physical and mathematical background as an undergraduate student, I struggle to understand these two, but they seem to me really crucial for comprehending the concept of angular momentum in QM. I have a feel for what these two statements are trying to tell me, but I don't understand them precisely at all. Are these two statements really as crucial as I think, or can I neglect them and continue thinking about $Y$ as eigenfunctions and nothing more, without any repercussions?
Is there a simple way to explain what these two statements mean? (By simple way I mean an undergraduate level explanation, without assuming knowledge about Group Theory and such) | How should we think about Spherical Harmonics? In short: In the same way that you think about plane waves. Spherical harmonics (just like plane waves) are basic, essential tools. As such, they are used over a very wide variety of contexts, and each of those contexts will end up using a different "way to think" about the tools: using different properties of the tool, putting it in different frameworks, and so on. In that regard, "how we should think" about the tool depends on what we want to do with it and what we already know about the context. This is ultimately a personal question that is up to you, and you haven't really given us much to go on to say anything useful. That said, you seem to be confused about the relationship between the statements spherical harmonics are eigenfunctions of the angular momentum and spherical harmonics form a basis for the functions over the sphere. As I said above, you should think of these in the same way that you think of plane waves, for which the two statements above have direct analogues: plane waves are eigenfunctions of the linear momentum and plane waves form a basis for the functions over the line, in the more specific senses that the plane wave $e^{ikx}$ is an eigenfunction of $\hat p = -i\hbar \frac{\partial}{\partial x}$ with eigenvalue $k$ and every function $f(x)$ over real $x$ can be expressed as a superposition of plane waves, by means of its Fourier transform: $f(x) = \int_{-\infty}^\infty \tilde f(k) e^{ikx} \mathrm dk$ . So: which of these two properties is more important? it depends! it depends on what you're doing and what you care about. Maybe you need to combine the two on equal footings, maybe you need to put more weight on one or the other. Can you forget about the second statement and just focus on the eigenfunction properties? maybe! it depends on what you're doing and what you care about. If you continue analyzing the problem, both properties will come into play eventually, but where and when depends on the path you take. (As a rule of thumb, if a statement is unclear and it is not being actively used in the context you're in, then: yes, it is probably safe to put a pin on it and move on, and to return to understanding what it means and why it holds only when you run into it being actively used. In many cases, actually seeing it used in practice is a massive help into understanding what it's for!) In any case, though, the completeness properties can indeed seem a little mysterious. They've been handled quite well by the other answers, so I won't belabour that point. Instead, I will fill you in on a secret that very few textbooks will tell you: the spherical harmonics are all polynomials! More specifically, they are polynomials of the Cartesian coordinates when the position $\mathbf r$ is restricted to the unit sphere. Once you take away all of the trigonometric complexity and so on, the $Y_l^m(\theta,\phi)$ become simple polynomials: \begin{align}
Y_0^0(\mathbf r) & =
1
\\
Y_1^{\pm 1}(\mathbf r) & =
x\pm i y
\\
Y_1^0(\mathbf r) & =
z
\\
Y_2^{\pm 2}(\mathbf r) & =
(x\pm i y)^2
\\
Y_2^{\pm 1}(\mathbf r) & =
z (x\pm i y)
\\
Y_2^0(\mathbf r) & =
x^2+y^2-2 z^2
\end{align} (where I've removed the pesky normalization constants). The basic concept of building this family of polynomials is quite simple: Work your way up in degree, from constant to linear to quadratic to higher powers. Keep the whole set linearly independent. Within each degree, keep things as invariant as possible under rotations about the $z$ axis. That last part might sound mysterious, but it should be relatively easy to see why it forces a preference for the combinations $x\pm iy$ : if you rotate by an angle $\alpha$ in the $x,y$ plane, $(x\pm iy)$ transforms simply, to $e^{\pm i\alpha}(x\pm iy)$ . If you're wondering, this is the feature that connects to the $Y_l^m$ being eigenfunctions of $\hat L_z$ . With that in place, it is fairly easy to see how the progression goes $$
1,x\pm iy, z, (x\pm iy)^2, z(x\pm iy),\ldots
$$ ... but then what's with the $x^2+y^2-2 z^2$ combination? The answer to that is that, in general, there are six separate quadratic monomials we need to include: $$
x^2,y^2,z^2,xy,xz,yz.
\tag{*}
$$ We have already covered some of these in $(x\pm i y)^2 = x^2-y^2 \pm 2ixy$ , and we've fully covered the mixed terms $xy,xz,yz$ , so now we need to include two more: to keep things symmetric, let's say $x^2+y^2$ and $z^2$ . Except, here's the thing: the pure-square monomials $x^2,y^2,z^2$ in $(*)$ are not linearly independent! Why is this? well, because we're on the unit sphere, which means that these terms satisfy the identity $$
x^2+y^2+z^2 = 1,
$$ and the constant term $1$ is already in our basis set. So, from the combinations $x^2+y^2$ and $z^2$ we can only form a single polynomial, and the correct choice turns out to be $x^2+y^2-2z^2$ , to make the set not only linearly independent but also orthogonal (with respect to a straightforward inner product). As for the rest of the spherical harmonics $-$ the ladder keeps climbing, but the steps are all basically the same as the ones I've outlined already. Anyways, I hope this is sufficient to explain why the spherical harmonics are a complete orthonormal basis for the space of the functions defined on the sphere. This is just a fancy way of saying that, if you have a function $f(\mathbf r)$ that's defined for $|\mathbf r|=1$ , then you can expand $f(\mathbf r)$ as a "Taylor series" of suitable polynomials in $x$ , $y$ and $z$ , with a slightly reduced set of polynomials to account for the restrictions on $|\mathbf r|$ . | {
"source": [
"https://physics.stackexchange.com/questions/598257",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/265836/"
]
} |
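The polynomial picture in the answer can be checked symbolically: restricted to the unit sphere, $x^2+y^2-2z^2$ agrees with the textbook $Y_2^0$ up to a constant, and it is orthogonal to the lower harmonics. A small sympy sketch (my own script, not from the answer).

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)
x, y, z = sp.sin(theta)*sp.cos(phi), sp.sin(theta)*sp.sin(phi), sp.cos(theta)  # unit sphere

# The degree-2, m = 0 polynomial from the answer, restricted to the sphere:
poly = x**2 + y**2 - 2*z**2

# Compare with the textbook Y_2^0 (sympy's Ynm); they agree up to a constant factor:
Y20 = sp.Ynm(2, 0, theta, phi).expand(func=True)
print(sp.simplify(Y20 + sp.sqrt(5)/(4*sp.sqrt(sp.pi)) * poly))   # 0

# Sample orthogonality on the sphere, <Y_1^0 | Y_2^0> ~ integral of z * poly * sin(theta):
print(sp.integrate(z * poly * sp.sin(theta), (theta, 0, sp.pi), (phi, 0, 2*sp.pi)))  # 0
```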
598,808 | If both bulbs are 50 watts, shouldn't they emit light of the same frequency, since they both convert 50 J of electrical energy into light energy per second? But this is the example I came across. I don't understand. | The infrared (IR) bulb will emit more photons per second than the ultraviolet (UV) bulb because each IR photon has less energy than a UV photon. After some follow-up questions, I've realized that there is an ambiguity in the phrase "50-watt bulb." Is this a bulb that emits 50 watts of light, or a bulb that consumes 50 watts of electrical power? If the former, then my first paragraph is still true. However, most light bulbs are labeled based on the latter. So, in the case of a 50-watt IR bulb and a 50-watt UV bulb, it matters how the light is generated. For example, if both bulbs are incandescents (the electrical power heats up a filament to thousands of degrees to emit light) with filters in front of them that only let through a desired spectrum, then the IR light power emitted will be much greater than the UV power emitted. This is because the filament is a black body radiator that emits much more power in the lower energy (IR) part of the spectrum. A UV blacklight works by taking a light bulb and putting a filter in front of it that absorbs visible and lower energy light, leaving UV to pass through. Even if the blacklight consumes 50 watts of power, the light emitted will have much less than 50 watts of power because most of the radiated power will be blocked by the filter. Different methods of light generation produce different amounts of light given an amount of input electrical power. | {
"source": [
"https://physics.stackexchange.com/questions/598808",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/279619/"
]
} |
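A quick count of photons per second for the same radiated power at representative IR and UV wavelengths. Here the 50 W is taken as emitted light power (one of the two readings discussed in the answer), and the wavelengths are my own typical choices.

```python
# Photons per second for the same radiated power at representative IR and UV wavelengths.
h, c = 6.626e-34, 3.0e8      # J*s, m/s
P = 50.0                     # W of *emitted* light (an assumption, as discussed in the answer)
for name, lam in [("IR, 1000 nm", 1000e-9), ("UV, 300 nm", 300e-9)]:
    E_photon = h * c / lam
    print(f"{name}: {P / E_photon:.2e} photons per second")
# The IR bulb emits roughly 3x more photons per second, since each IR photon carries less energy.
```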
598,816 | For simplest operator $\textit{M}$ , I could write it as $|k\rangle\langle m|$ . \begin{equation}
\mathit{M} = |k\rangle\langle m|
\end{equation} When operating on a state $|n\rangle$ , I could write as: \begin{equation}
\mathit{M}|n\rangle=|k\rangle\langle m|n\rangle=\langle m|n\rangle|k\rangle
\end{equation} From that, I could interpret the operator M as transforming the state $|n\rangle$ to $|k\rangle$ with $\langle m|n\rangle$ as coefficient. However, how does $|n\rangle$ relate to $|k\rangle$ by the inner product $\langle m|n\rangle$ in Hilbert space? For example, the identity operator I is just: \begin{equation}
\mathit{I}|n\rangle = \sum_{j=1}^{N} |j\rangle \langle j|n\rangle = \sum_{j=1}^{N} \langle j|n\rangle |j\rangle
\end{equation} It is just projection of $|n\rangle$ on the basis $|j\rangle$ . For $\langle m|n\rangle|k\rangle$ , I have no idea how they are related in Hilbert space. | The infrared (IR) bulb will emit more photons per second than the ultraviolet (UV) bulb because each IR photon has less energy than a UV photon. After some follow-up questions, I've realized that there is an ambibuity in the phrase "50-watt bulb." Is this a bulb that emits 50-watts of light, or a bulb that consumes 50 watts of electrical power? If the former, then my first paragraph is still true. However, most light bulbs are labeled based on the latter. So, in the case of a 50-watt IR bulb and a 50-watt UV bulb, it matters how the light is generated. For example, if both bulbs are incandescents (the electrical power heats up a filament to thousands of degrees to emit light) with filters in front of them that only let through a desired spectrum, then the IR light power emitted will be much greater than the UV power emitted. This is because the filament is a black body radiator that emits much more power in the lower energy (IR) part of the spectrum. A UV blacklight works by taking a light bulb and putting a filter in front of it that absorbs visible and lower energy light, leaving UV to pass through. Even if the blacklight consumes 50 watts of power, the light emitted will have much less than 50 watts of power because most of the radiated power will be blocked by the filter. Different methods of light generation produce different amounts of light given an amount of input electrical power. | {
"source": [
"https://physics.stackexchange.com/questions/598816",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/255102/"
]
} |
599,461 | I've recently learned about rotational kinetic energy and how an object can have both translational kinetic energy and rotational kinetic energy at the same time. However, I get confused when I try to apply this to, say, a uniform rod rotating downward: At first glance, it seems to me that there should only be Rotational Kinetic Energy, as the rod can be seen as simply rotating around one of its end points. However, I've learned that an object has translational kinetic energy when its center of mass is moving. Since the rod's center of mass is changing, does this mean that it also has translational kinetic energy? | It depends what you consider to be the "pivot" about which the rotational kinetic energy is calculated here: If you choose the pivot as the end of the rod that is physically held in place, then you only have to consider rotational kinetic energy, since if you think about what rotational kinetic energy represents, then it should be clear that the kinetic energy of every individual particle is accounted for here. However, if you choose as the pivot the center of mass of the rod, then since the center of mass also has translational kinetic energy, you need to account for that as well, so that your two terms will be the rotational kinetic energy about the center of mass and the translational kinetic energy of the center of mass. (Bob D's answer contains an excellent visualization of this case.) Try calculating the total kinetic energy both ways; you should get the same answer at the end regardless of how you did it. (Keep in mind the moment of inertia of the rod changes since you change what the pivot is. My above argument is also closely related to the parallel-axis theorem; can you see why?) Hope this helps. | {
"source": [
"https://physics.stackexchange.com/questions/599461",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/279947/"
]
} |
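As a worked check of the answer above, take a uniform rod of mass $M$ and length $L$ swinging with angular speed $\omega$ about one fixed end (generic symbols, not from the original thread). Taking the pivot at the fixed end, with $I_{\text{end}}=\tfrac13 ML^2$ : $$E = \tfrac12 I_{\text{end}}\,\omega^2 = \tfrac16 ML^2\omega^2 .$$ Taking the centre of mass instead, with $I_{\text{cm}}=\tfrac1{12}ML^2$ and $v_{\text{cm}}=\omega L/2$ : $$E = \tfrac12 I_{\text{cm}}\,\omega^2 + \tfrac12 M v_{\text{cm}}^2 = \tfrac1{24}ML^2\omega^2 + \tfrac18 ML^2\omega^2 = \tfrac16 ML^2\omega^2 ,$$ so both bookkeeping choices give the same total kinetic energy, as the answer states.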
599,495 | Stress is like pressure and it doesn't matter in which direction the force acts (given it is perpendicular to the surface). I read in my book that if we have a rope which is being pulled on both sides by a force $F$ , then the stress at any cross section of the rope is defined as $\sigma = \frac{F}{area}$ . But my question is that since the rope is pulled from both the ends, the molecules of the considered cross section is being pulled by other molecules from both sides of the cross sections. So the stress is due to both the forces. So shouldn't stress be defined as $$\sigma =\frac{2F}{area}$$ Edit : The answer of Bob D forces me to add an edit. By $\sigma = \frac{2F}{A}$ , I meant to say $\sigma =\frac{|F_{left}|}{A}+\frac{|F_{right}|}{A} = \frac{2F}{A}$ . Here $|F_{left}|$ and $|F_{right}|$ are the forces applied by molecules on the left and right of the considered cross section on the cross section . Hope it is clear now. | I think your perplexity is understandable, and it comes from the clash between the notion of stress, which belongs to continuum mechanics, and the molecular description. Let me arrive in a roundabout but hopefully insightful way at why the factor of 2 is unnecessary. The notion of stress (more precisely: internal pressure) was introduced by Euler around 1753 and then generalized by Cauchy around 1828. Euler's question, summarizing a little, was the following: if I have a body of matter, delimited by some boundary, how can I represent the total force exerted on it by the matter outside of it? His idea was to consider forces that act purely on the boundary (just like when we have something pressing on our skin). The total force on the body of matter could then be found integrating this field of surface forces over the whole surface. His revolutionary idea was that we could imagine to delimit an arbitrary inner portion of a body by an imaginary surface, and consider the forces acting on such surface. Euler considered only forces orthogonal to the surface, and Cauchy generalized them to forces with arbitrary directions – for example tangential to the surface: that's what viscosity is. Cauchy also showed that such a force $\pmb{t}$ could actually be expressed by the action of a linear operator – the stress tensor $\pmb{\tau}$ – on the normal to the surface: $\pmb{t} = \pmb{\tau}\pmb{n}$ . The invention of stress also suggests the best way to think about it, in my opinion. Do not imagine two sides of a surface. Instead, imagine a 3D portion of matter delimited by a closed surface. The stress is just a field of force which that portion of matter "feels" on its surface, caused by external agents. The stress is called "tensile" if the force is directed outwards and pulls on the surface; it's called "compressive" if the force is directed inwards and presses on the surface. In the case of the rope, imagine a portion of it, even if very short, delimited by two circular surfaces: a short cylinder. You want to know the total force exerted on this 3D portion of rope from the rest of the rope (or anything else outside). The lateral surface has no forces acting on it. Each circular surface does have a force acting on it – the stress – directed outwards with respect to our short piece of rope, which thus feels a pull at its extremities. In continuum mechanics stress stands in contrast to so-called "body forces" or "volume forces", which instead act on every small volume of a body of matter. Chief example is gravity. 
Thus body forces $\pmb{f}$ scale like a volume, while stresses $\pmb{t}$ scale like an area. The total force on a body of matter $B$ is then given by the contribution of both: $$\pmb{F}_B = \iiint_{\text{bulk of $B$}} \pmb{f}\ \mathrm{d}V + \iint_{\text{boundary of $B$}} \pmb{t} \ \mathrm{d}A \ .$$ The centre of mass of the body will move as if this total force is applied directly to it.
You see from these ideas and equation that there's no need for a factor $2$ . The question of the "two sides" of the surface appears when there is a body of matter $B_2$ adjacent to the first $B_1$ , so that they partly share a delimiting surface $S$ . By Newton's third law, if $B_2$ is pulling on $B_1$ at the surface $S$ , then $B_1$ is pulling $B_2$ at the surface $S$ , in the opposite direction. So if you consider the surface $S$ from $B_1$ 's perspective, the stress is directed outwards, towards $B_2$ . And if you consider the surface $S$ from $B_2$ 's perspective, the stress is also directed outwards, towards $B_1$ . The situation is no different from when we say that the Earth pulls on the Moon, and the Moon pulls on the Earth with an equal and opposite force. Only, in the case of surface forces this pulling is happening on the same spot. That's what's often confusing. But the two forces are acting on different bodies – keep this in mind. At a molecular level surface forces don't exist . All forces are body/volume forces. The notion of stress doesn't apply here in its original sense. What we consider as stress on an (imaginary) surface from a macroscopic point of view, turns out to be one of two things, or a combination of both, from a microscopic point of view. First: atomic/molecular body forces having short range: just few layers of molecules on one side of the imaginary surface act on just few layers of molecules on the other side. That's why, from a macroscopic perspective, we consider these forces as only existing on the surface itself. Second: motion of molecules across the surface. Since the molecules carry momentum, momentum is decreasing on one side of the surface and increasing on the other. And since a force causes a change of momentum, macroscopically we interpret the microscopic change of momentum as a force existing on the surface. The decrease of momentum on one side of the surface is equal and opposite to the increase on the other; so the macroscopic intepretation is that the material on one side is macroscopically experiencing a given surface force, and the one on the other side an equal and opposite surface force. Many viscous forces are of this kind. A curious final note. In contrast with molecular or particle dynamics, stress is the only kind of force that appears in general relativity instead, because action at a distance is forbidden there. In fact, when we write the Einstein equations in a "Newtonian" form, split into space and time, the Newtonian stress tensor $\pmb{\tau}$ (and the energy, but no momentum or energy flux) fully appears in the evolution equation for the metric. Euler's article is really cool to read: L. Euler: Principes généraux de l'état d'équilibre des fluides , English translation General Principles of the Motion of Fluids . A good book to get acquainted with the notion of stress and also its microscopic interpretation is Bird, Stewart, Lightfoot: Transport Phenomena (2nd ed. Wiley 2002); they have also written an Introductory Transport Phenomena . There's a beautiful lecture by Truesdell on the history of the concept of stress, which can be very useful for its understanding: Truesdell: The Creation and Unfolding
of the Concept of Stress , chap. IV in Essays in the History of Mechanics (Springer 1968). The microscopic interpretation of stress was approached in a rigorous manner I believe first by Irving & Kirkwood at the end of the 1940s, followed by many others. Recent reviews are given by Murdoch: A Critique of Atomistic Definitions of the Stress Tensor , J. Elast. 88 (2007) 113–140 Murdoch: On the identification of continuum concepts and fields with molecular variables , Contin. Mech. Thermodyn. 23 (2011) 1–26. even if the maths in these may be somewhat advanced, from the text and the equations you can get a glimpse of all sorts of different microscopic stuff that contribute to what we macroscopically call "stress". For the role of stress in general relativity see for example Gourgoulhon: 3+1 Formalism in General Relativity: Bases of Numerical Relativity (Springer 2012), also on arXiv . See eqn (5.69) there. | {
"source": [
"https://physics.stackexchange.com/questions/599495",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/271783/"
]
} |
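A small worked number for the rope in the question above (the values are illustrative, not from the thread): with $F = 100\ \mathrm{N}$ applied at each end and a cross-section $A = 1.0\ \mathrm{cm^2} = 1.0\times10^{-4}\ \mathrm{m^2}$ , the tensile stress on any cross section is $$\sigma = \frac{F}{A} = \frac{100\ \mathrm{N}}{1.0\times10^{-4}\ \mathrm{m^2}} = 1.0\ \mathrm{MPa},$$ not $2F/A$ : the pull from the left and the pull from the right act on the material on opposite sides of the imagined cut, so they are not added together on the same surface.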
599,701 | I teach physics to 16-year-old students who do not know calculus and the derivates. When I explain the formula for centripetal acceleration in circular uniform motion, I use this picture: Here, $$\vec{a}_{\text{av}}=\frac{\Delta \vec{v}}{\Delta t}=\frac{\vec{v}_2-\vec{v}_1}{\Delta t}$$ and $$\vec{v}_1=(v\cos\phi){\bf \hat x}+(v\sin\phi){\bf \hat y}, \quad \vec{v}_2=(v\cos\phi){\bf \hat x}+(-v\sin\phi){\bf \hat y}.$$ Combining these equations gives $$\vec{a}_{\text{av}}=\frac{\Delta \vec{v}}{\Delta t}=\frac{-2v\sin\phi}{\Delta t}{\bf \hat y}, \tag 1$$ which shows that the average acceleration is towards the center of the circle.
Using $\Delta t=d/v=2r\phi/v$ , where $d$ is the distance along the curve between points $1$ and $2$ , gives $$\vec{a}_{\text{av}}=-\frac{v^2}{r}\left(\frac{\sin \phi}{\phi}\right){\bf \hat y}.$$ As $\phi\to 0$ , $\sin \phi/\phi\to 1$ , so $$\vec{a}_{\text{cp}}=-\frac{v^2}{r}{\bf \hat y}, \tag 2$$ which shows that the centripetal acceleration is towards the center of the circle. Does there exist another simple proof of Equation $(2)$ , in particular, that the centripetal acceleration is towards the center of the circle? | With no calculus, and for only uniform circular motion: consider the figure below. On the left, we see the position vector $\vec{r}$ sweep out a circle of radius $r$ , and the velocity vector $\vec{v}$ moving around with it. The tip of the position vector travels the circumference of the left-hand circle, which is $2 \pi r$ , in one period $T$ . Thus, $v = 2 \pi r / T$ . Now, acceleration is the rate of change of velocity, just as velocity is the rate of change of position. If we take all the velocity vectors from the left-hand diagram and re-draw them at a common origin, we see that the velocity vector must also sweep out a circle of radius $v$ . The tip of the velocity vector travels the circumference of the right-hand circle, which is $2 \pi v$ , in one period $T$ . The acceleration vector, being "the velocity of the velocity", must by analogy have magnitude $a = 2 \pi v / T$ . Thus, $$
\frac{a}{v} = \frac{2 \pi}{T} = \frac{v}{r} \quad \Rightarrow \quad a = \frac{v^2}{r}.
$$ We can also see from the diagram that at any time, $\vec{a}$ is directly opposite the direction of $\vec{r}$ , i.e., directly towards the center of the circle. Credit goes to my Grade 11 physics teacher, Mr. Imhoff, who showed me this trick over 20 years ago. | {
"source": [
"https://physics.stackexchange.com/questions/599701",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/140404/"
]
} |
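A quick numerical check of the result in the answer above, with made-up values $r = 2.0\ \mathrm{m}$ and $T = 4.0\ \mathrm{s}$ : $$v = \frac{2\pi r}{T} = \pi\ \mathrm{m/s} \approx 3.14\ \mathrm{m/s}, \qquad a = \frac{2\pi v}{T} = \frac{\pi^2}{2}\ \mathrm{m/s^2} \approx 4.9\ \mathrm{m/s^2} = \frac{v^2}{r},$$ so the two expressions for the centripetal acceleration agree.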
599,980 | I'm aware of the uncertainty principle that doesn't allow $\Delta x$ and $\Delta p$ to be both arbitrarily close to zero. I understand this by looking at the wave function and seeing that if one is sharply peaked its Fourier transform will be wide. But how does this stop one from measuring both position and momentum at the same time?
I've googled this question, but all I found were explanations using the 'Observer effect'. I'm not sure, but I think this effect is very different from the intrinsic uncertainty principle. So what stops us from measuring both position and momentum with arbitrary precision?
Does a quantum system always have to change when observerd? Or does it have to do with the uncertainty principle? Thank you in advance. | When someone asks "Is it really impossible to simultaneously measure position and momentum with arbitrary precision in quantum theory?", the best preliminary answer one can give is another question: "what do you exactly mean by measurement , by precision , and by position and momentum ?". Those words have several meanings each in quantum theory, reflected in literature and experimental practice. There is a sense in which a simultaneous and arbitrarily precise measurement of position and momentum is not only possible, but also routinely made in many quantum labs, for example quantum-optics labs. Such measurement is indeed at the core of modern quantum applications such as quantum-key distribution. I think it's best first to make clear what the different meanings of measurement , position , momentum are in actual applications and in the literature, and then to give examples of the different experimental procedures that are called "measurement of position" etc. What's important is to understand what's being done; the rest is just semantics. Let me get there step by step. The answer below summarizes what you can find in current articles published in scientific journals and current textbooks, works and results which I have experienced myself as a researcher in quantum optics. All references are given throughout the answer, and some additional ones at the end. I strongly recommend that you go and read them . Also, this answer is meant to discuss the uncertainty principle and simultaneous measurement within quantum theory . Maybe in the future we'll all use an alternative theory in which the same experimental facts are given a different meaning; there are such alternative theories proposed at present, and many researchers indeed are working on alternatives. Finally, this answer tries to avoid terminological debates, explaining the experimental, laboratory side of the matter. Warnings about terminology will be given throughout. (I don't mean that terminology isn't important, though: different terminologies can inspire different research directions.) We must be careful, because our understanding of the uncertainty principle today is very different from how people saw it in the 1930–50s. The modern understanding is also borne out in modern experimental practice. There are two main points to clarify. 1. What do we exactly mean by "measurement" and by "precision" or " $\Delta x$ "? The general picture is this: We can prepare one copy of a physical system according to some specific protocol. We say that the system has been prepared in a specific state (generally represented by a density matrix $\pmb{\rho}$ ). Then we perform a specific operation that yields an outcome. We say that we have performed one instance of a measurement on the system (generally represented by a so-called positive-operator-valued measure $\{\pmb{O}_i\}$ , where $i$ labels the possible outcomes). We can repeat the procedure above anew – new copy of the system – as many times as we please, according to the same specific protocols. We are thus making many instances of the same kind of measurement, on copies of the system prepared in the same state. We thus obtain a collection of measurement results, from which we can build a frequency distribution and statistics. Throughout this answer, when I say "repetition of a measurement" I mean it in this specific sense. 
There's also the question of what happens when we make two or more measurements in succession, on the same system . But I'm not going to discuss that here; see the references at the end. This is why the general empirical statements of quantum theory have this form: "If we prepare the system in state $\pmb{\rho}$ , and perform the measurement $\{\pmb{O}_i\}$ , we have a probability $p_1$ of observing outcome $i=1$ , a probability $p_2$ of observing outcome $i=2$ , ..." and so on (with appropriate continuous limits for continuous outcomes). Now, there's a measurement precision/error associated with each single instance of the measurement, and also a variability of the outcomes across repetitions of the measurement. The first kind of error can be made as small as we please. The variability across repetitions, however, generally appears not to be reducible below some nonzero amount which depends on the specific state and the specific measurement. This latter variability is what the " $\Delta x$ " in the Heisenberg formula refers to . So when we say "cannot be measured with arbitrary precision", what we mean more exactly is that "its variability across measurement repetitions cannot be made arbitrarily low". The fundamental mystery of quantum mechanics is the lack – in a systematic way – of reproducibility across measurement instances. But the error in the outcome of each single instance has no theoretical lower bound. Of course this situation affects our predictive abilities, because whenever we repeat the same kind of measurement on a system prepared on the same kind of state, we don't really know what to expect, within $\Delta x$ . This important distinction between single and multiple measurement instances was first pointed out by Ballentine in 1970: Ballentine: The Statistical Interpretation of Quantum Mechanics , Rev. Mod. Phys. 42 (1970) 358 ( other copy ) see especially the very explanatory Fig. 2 there. And it's not a matter of "interpretation", as the title might today suggest. It's an experimental fact. Clear experimental examples of this distinction are given for example in Leonhardt: Measuring the Quantum State of Light (Cambridge 1997) see for example Fig. 2.1 there and its explanation. Also the more advanced Mandel, Wolf: Optical Coherence and Quantum Optics (Cambridge 2008). See also the textbooks given below. The distinction between error of one measurement instance and variability across measurement instances is also evident if you think about a Stern-Gerlach experiment. Suppose we prepare a spin in the state $x+$ and we measure it in the direction $y$ . The measurement yields only one of two clearly distinct spots, corresponding to either the outcome $+\hbar/2$ or $-\hbar/2$ in the $y$ direction. This outcome may have some error in practice, but we can in principle clearly distinguish whether it is $+\hbar/2$ or $-\hbar/2$ . However, if we prepare a new spin in the state $x+$ and measure $y$ again, we can very well find the opposite outcome – again very precisely measured. Over many measurements we observe these $+$ and $-$ outcomes roughly 50% each. The standard deviation is $\hbar/2$ , and that's indeed the " $\Delta S_y$ " given by the quantum formulae: they refer to measurement repetitions, not to one single instance in which you send a single electron through the apparatus. 
It must be stressed that some authors (for example Leonhardt above) use the term "measurement result" to mean, not the result of a single experiment, but the average value $\bar{x}$ found in several repetitions of an experiment. Of course this average value has uncertainty $\Delta x$ . There's no contradiction here, just a different terminology. You can call "measurement" what you please – just be precise in explaining what your experimental protocol is. Some authors use the term "one-shot measurement" to make the distinction clear; as an example, check these titles: Pyshkin et al: Ground-state cooling of quantum systems via a one-shot measurement , Phys. Rev. A 93 (2016) 032120 ( arXiv ) Yung et al: One-shot detection limits of quantum illumination with discrete signals , npj Quantum Inf. 6 (2020) 75 ( arXiv ). The fact that, even though the predictive uncertainty $\Delta x$ is finite, we can have infinite precision in a single (one-shot) measurement, is not worthless, but very important in applications such as quantum key distribution . In many key-distribution protocols the two key-sharing parties compare the precise values $x$ they obtained in single-instance measurements of their entangled states. These values will be correlated to within their single-instance measurement error, which is much smaller than the predictive uncertainty $\Delta x$ . The presence of an eavesdropper would destroy this correlation. The two parties can therefore know that there's an eavesdropper if they see that their measured values only agree to within $\Delta x$ , rather than to within the much smaller single-instance measurement error. This scheme wouldn't work if the single-instance measurement error were $\Delta x$ . See for example Reid: Quantum cryptography with a predetermined key, using continuous-variable Einstein-Podolsky-Rosen correlations , Phys. Rev. A 62 (2000) 062308 ( arXiv ) Grosshans et al: Quantum key distribution using gaussian-modulated coherent states , Nature 421 (2003) 238 ( arXiv ). In Figure 2 one can see very well the difference between single-instance measurement error and the variability $\Delta x$ across measurements. Madsen et al: Continuous variable quantum key distribution
with modulated entangled states (free access), Nat. Comm. 3 (2012) 1083. See especially Fig. 4 and its explanation. 2. What is exactly a "measurement of position" or of "momentum"? In classical mechanics there's only one measurement (even if it can be realized by different technological means) of any specific quantity $Q$ , such as position or spin or momentum. And classical mechanics says that the error in one measurement instance and the variability across instances can both be made as low as we please. In quantum theory there are many different experimental protocols that we can interpret, for different reasons, as "measurements" of that quantity $Q$ . Usually they all yield the same mean value across repetitions (for a given state), but differ in other statistical properties such as variance. Because of this, and of the variability explained above, Bell (of the famous Bell's theorem ) protested that we actually shouldn't call these experimental procedures "measurements": Bell: Against "measurement" ( other copy ), in Miller, ed.: Sixty-Two Years of Uncertainty: Historical, Philosophical, and Physical Inquiries into the Foundations of Quantum Mechanics (Plenum 1990). In particular, in classical physics there's one joint, simultaneous measurement of position and momentum. In quantum theory there are several measurement protocols that can be interpreted as joint, simultaneous measurements of position and momentum , in the sense that each instance of such measurement yields two values, the one is position, the other is momentum. In the classical limit they become the classical simultaneous measurement of $x$ and $p$ . This possibility was first pointed out by Arthurs & Kelly in 1965: Arthurs, Kelly: On the simultaneous measurement of a pair of conjugate observables , Bell Syst. Tech. J. 44 (1965) 725 ( other copy ). and further discussed, for example, in Stenholm: Simultaneous measurement of conjugate variables , Ann. Phys. (NY) 218 (1992) 233. This simultaneous measurement is not represented by $\hat{x}$ and $\hat{p}$ , but by a pair of commuting operators $(\hat{X}, \hat{P})$ satisfying $\hat{X}+\hat{x}=\hat{a}$ , $\hat{P}+\hat{p}=\hat{b}$ , for specially chosen $\hat{a}, \hat{b}$ . The point is that the joint operator $(\hat{X}, \hat{P})$ can rightfully be called a simultaneous measurement of position and momentum, because it reduces to that measurement in the classical limit (and obviously we have $\bar{X}=\bar{x}, \bar{P}=\bar{p}$ ). In fact, from the equations above we could very well say that $\hat{x},\hat{p}$ are defined in terms of $\hat{X},\hat{P}$ , rather than vice versa. This kind of simultaneous measurement – which is possible for any pairs of conjugate variables, not just position and momentum – is not a theoretical quirk, but is a daily routine measurement in quantum-optics labs for example. It is used to do quantum tomography , among other applications. As far as I know one of the first experimental realizations was made in 1984: Walker, Carroll: Simultaneous phase and amplitude measurements on optical signals using a multiport junction , Electron. Lett. 20 (1984) 981, 1075. You can find detailed theoretical and experimental descriptions of it in Leonhardt's book above, chapter 6, tellingly titled " Simultaneous measurement of position and momentum ". But as I said, there are several different protocols that may be said to be a simultaneous measurement of conjugate observables, corresponding to different choices of $\hat{a},\hat{b}$ . 
What's interesting is the way in which these measurements differ. They can be seen as forming a continuum between two extremes (see references above): – At one extreme, the variability across measurement repetitions of $X$ has a lower bound (which depends on the state of the system), while the variability of $P$ is infinite. Basically it's as if we were measuring $X$ without measuring $P$ . This corresponds to the traditional $\hat{x}$ . – At the other extreme, the variability across measurement repetitions of $P$ has a lower bound, while the variability for $X$ is infinite. So it's as if we were measuring $P$ without measuring $X$ . This corresponds to the traditional $\hat{p}$ . – In between, there are measurement protocols which have more and more variability for $X$ across measurement instances, and less and less variability for $P$ . This "continuum" of measurement protocols interpolates between the two extremes above. There is a "sweet spot" in between in which we have a simultaneous measurement of both quantities with a finite variability for each. The product of their variabilities, $\Delta X\ \Delta P$ , for this "sweet-spot measurement protocol" satisfies an inequality similar to the well-known one for conjugate variables, but with an upper bound slightly larger than the traditional $\hbar/2$ (just twice as much, see eqn (12) in Arthurs & Kelly). So there's a price to pay for the ability to measure them simultaneously. This kind of "continuum" of simultaneous measurements is also possible for the famous double-slit experiment. It's realized by using "noisy" detectors at the slits. There are setups in which we can observe a weak interference beyond the two-slit screen, and at the same time have some certainty about the slit at which a photon could be detected. See for example: Wootters, Zurek: Complementarity in the double-slit experiment: Quantum
nonseparability and a quantitative statement of Bohr's principle , Phys. Rev. D 192 (1979) 473 Banaszek et al: Quantum mechanical
which-way experiment with an internal degree of freedom , Nat. Comm. 4 (2013) 2594 ( arXiv ) Chiao et al: Quantum non-locality in two-photon
experiments at Berkeley , Quant. Semiclass. Opt. 73 (1995) 259 ( arXiv ), for variations of this experiment. We might be tempted to ask "OK but what's the real measurement of position an momentum, among all these?". But within quantum theory this is a meaningless question, similar to asking "In which frame of reference are these two events really simultaneous?" within relativity theory. The classical notions and quantities of position and momentum simply don't exist in quantum theory. We have several other notions and quantities that have some similarities to the classical ones. Which to consider? it depends, on the context and application. The situation indeed has some similarities with that for "simultaneity" in relativity: there are "different simultaneities" dependent on the frame of reference; which we choose depends on the problem and application. In quantum theory we can't really say "the system has these values", or "these are the actual values". All we can say is that when we do such-and-such to the system, then so-and-so happens. For this reason many quantum physicists (check eg Busch et al. below) prefer to speak of "intervention on a system" rather than "measurement of a system" (I personally avoid the term "measurement" too). Summing up: we can also say that a simultaneous and arbitrarily precise measurement of position and momentum is possible – and in fact a routine. So the answer to your question is that in a single measurement instance we actually can (and do!) measure position and momentum simultaneously and both with arbitrary precision . This fact is important in applications such as quantum-key distribution, mentioned above. But we also observe an unavoidable variability upon identical repetitions of such measurement. This variability makes the arbitrary single-measurement precision unimportant in other applications, where consistency through repetitions is required instead. Moreover, we must specify which of the simultaneous measurements of momentum and position we're performing: there isn't just one, as in classical physics. To form a picture of this, you can imagine two quantum scientists having this chat: – "Yesterday I made a simultaneous measurement of position and momentum using the experimental procedure $M$ and preparing the system in state $S$ ." – "Which values did you expect to find, before making the measurement?" – "The probability density of obtaining values $x,p$ was, according to quantum theory, $P(x,p)=\dotso$ . Its mean was $(\bar{x},\bar{p}) = (30\cdot 10^{-17}\ \mathrm{m},\ 893\cdot 10^{-17}\ \mathrm{kg\ m/s})$ and its standard deviations were $(\Delta x, \Delta p)=(1\cdot 10^{-17}\ \textrm{m},\ 1\cdot 10^{-17}\ \mathrm{kg\ m/s})$ , the quantum limit. So I was expecting the $x$ result to land somewhere between $29 \cdot 10^{-17}\ \mathrm{m}$ and $31 \cdot 10^{-17}\ \mathrm{m}$ ; and the $p$ result somewhere between $892 \cdot 10^{-17}\ \mathrm{kg\ m/s}$ and $894 \cdot 10^{-17}\ \mathrm{kg\ m/s}$ ." (Note how the product of the standard deviations is $\hbar\approx 10^{-34}\ \mathrm{J\ s}$ .) – "And which result did the measurement give?" – "I found $x=(31.029\pm 0.00001)\cdot 10^{-17}\ \textrm{m}$ and $p=(893.476 \pm 0.00005)\cdot 10^{-17}\ \mathrm{kg\ m/s}$ , to within the widths of the dials. They agree with the predictive ranges given by the theory." – "So are you going to use this setup in your application?" – "No. I need to be able to predict $x$ with some more precision, even if that means that my prediction of $p$ worsens a little. 
So I'll use a setup that has variances $(\Delta x, \Delta p)=(0.1\cdot 10^{-17}\ \textrm{m},\ 10\cdot 10^{-17}\ \mathrm{kg\ m/s})$ instead." Even if the answer to your question is positive, we must stress that:
(1) Heisenberg's principle is not violated , because it refers to the variability across measurement repetitions, not to the error in a single measurement. (2) It's still true that the operators $\hat{x}$ and $\hat{p}$ cannot be measured simultaneously . What we're measuring is a slightly different operator; but this operator can be rightfully called a joint measurement of position and momentum, because it reduces to that measurement in the classical limit. Old-fashioned statements about the uncertainty principle must therefore be taken with a grain of salt. When we make more precise what we mean by "uncertainty" and "measurement", they turn out to have new, unexpected, and very exciting faces. Here are several good books discussing these matters with clarity, precision, and experimental evidence: de Muynck: Foundations of Quantum Mechanics, an Empiricist Approach (Kluwer 2004) Peres: Quantum Theory: Concepts and Methods (Kluwer 2002) ( other copy ) Holevo: Probabilistic and Statistical Aspects of Quantum Theory (2nd ed. Edizioni della Normale, Pisa, 2011) Busch, Grabowski, Lahti: Operational Quantum Physics (Springer 1995) Nielsen, Chuang: Quantum Computation and Quantum Information (Cambridge 2010) ( other copy ) Bengtsson, Życzkowski: Geometry of Quantum States: An Introduction to Quantum
Entanglement (2nd ed. Cambridge 2017). | {
"source": [
"https://physics.stackexchange.com/questions/599980",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/263985/"
]
} |
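A toy numerical illustration (my own sketch, not a physical simulation) of the distinction the answer above draws between single-shot precision and the spread across repetitions; the mean, spread and readout error are assumed values in metres:

import numpy as np

rng = np.random.default_rng(0)
x_mean, dx = 30e-17, 1e-17      # predictive mean and quantum spread (the Heisenberg Delta-x)
readout_error = 1e-22           # assumed single-shot instrument error, far smaller than dx

# each repetition scatters by dx, but every individual outcome is read off to ~readout_error
outcomes = rng.normal(x_mean, dx, 10000) + rng.normal(0.0, readout_error, 10000)

print(np.std(outcomes))         # ~1e-17: the variability across repetitions
print(readout_error)            # 1e-22: the precision of any single reading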
599,984 | Imagine the following problem: Person 1 travels with velocity $v$ , and person 2 has a velocity of $u$ according to the rest frame. They both travel in straight lines, with an angle $\theta$ between their trajectories. Find the speed of Person 2 in Person 1's frame: The solution of this problem starts with realizing that one of the persons can be taken to be traveling on the x-axis of the rest frame, making $$\mathbf{v} = \langle v,0 \rangle$$ and $$\mathbf{u} = \langle u\cos\theta,u\sin\theta \rangle$$ Now by the relativistic addition of velocities we realize that the y-component of Person 2's velocity in Person 1's frame ( $u_y'$ ) and the x-component ( $u_x'$ ) are: $$u_x'=\frac{u\cos\theta-v}{1-\frac{uv\cos\theta}{c^2}}$$ $$u_y'=\frac{u\sin\theta}{\gamma \left( 1-\frac{uv\cos\theta}{c^2} \right)}$$ where: $$\gamma=\frac{1}{\sqrt{1-\frac{v^2}{c^2}}}$$ We can now find the speed ( $\| \mathbf{u'} \|$ ) as: $$\| \mathbf{u'} \|=\sqrt{u_x'^2+u_y'^2}$$ Now if you go through the process of simplifying it you will find out that: $$\| \mathbf{u'} \| = \frac{1}{1-\left (\frac{\mathbf{v}\cdot\mathbf{u}}{c^2}\right )}\sqrt{\left (\mathbf{u}-\mathbf{v}\right )^2 - \frac{(\mathbf{v}\times\mathbf{u})^2}{c^2}}$$ I am curious why this equation is true and why each of the products (dot and cross, especially the cross) appears in this equation. Thank you in advance! | {
"source": [
"https://physics.stackexchange.com/questions/599984",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/282334/"
]
} |
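The identity quoted at the end of the question above can be checked numerically. A minimal Python sketch (my own, in units where $c=1$ , with an arbitrarily chosen angle), comparing the component-wise velocity-addition formulas with the closed-form expression:

import numpy as np

c = 1.0
v = np.array([0.5, 0.0])                             # Person 1, along the x-axis
u = 0.6 * np.array([np.cos(0.7), np.sin(0.7)])       # Person 2, at angle 0.7 rad

gamma = 1.0 / np.sqrt(1.0 - v @ v / c**2)
denom = 1.0 - u @ v / c**2

ux = (u[0] - v[0]) / denom                           # component-wise addition formulas
uy = u[1] / (gamma * denom)
speed_components = np.hypot(ux, uy)

cross = v[0] * u[1] - v[1] * u[0]                    # z-component of v x u
speed_closed_form = np.sqrt((u - v) @ (u - v) - cross**2 / c**2) / denom

print(speed_components, speed_closed_form)           # the two values agree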
600,066 | If I try to balance a tumbler or cup on my fingertip (as shown in fig), it is quite hard to do so (and the cup falls most often). But when I did the same experiment with the cup upside down (as shown in fig), it was quite stable and I could handle it easily. In both cases, the normal force as well as the weight of the cup is the same, yet in the first case it falls down and in the other it is stable. I guess that it falls because of some torque, but why is there no such torque when it is upside down? What is the reason behind this? | Take a look at this picture of a cup slightly out of balance: in case (A) the torque that is generated is directed away from your reference axis, while in case (B) it is directed towards it. So in case (A) you need to compensate the out-of-balance movement with a counter-movement of your finger, but in case (B) the torque assists you and does the balancing for you, so that you need only a minuscule additional effort. | {
"source": [
"https://physics.stackexchange.com/questions/600066",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/271783/"
]
} |
600,790 | In my life I hear/read this statement a lot: A non-linear equation or theory leads to self-interactions. For example in GR , we say that gravity can interact with itself because it is non-linear.
For some reason I always assumed it was correct. However, now that I think about it, I cannot see a clear reason in the maths for why this statement holds. Can someone help me out? :D Edit $_1$ : As Vadim pointed out, the statement should be the other way around: a self-interacting physical system leads to non-linear equations. Edit $_2$ : The question is beautifully answered by @gandalf61 for a 2-variable system. However, I still do not really understand what is going on for a 1-variable system, e.g. in general relativity. Could someone maybe also give an example there? Thank you in advance. :D In the comments on the answer of @gandalf61, you will also find the answer to edit $_2$ . | If I go to a shop and buy $5$ apples and $10$ bananas then I can usually take the price of one apple $a$ and the price of one banana $b$ and add these together to get a total cost of $5a+10b$ . And I pay the same total amount if I buy apples and bananas at the same time or I buy apples, then go back to the shop later and buy bananas - my purchases do not interact with one another. This is a linear system. But if there is an offer of " $5$ apples for the price of $3$ " or "one free banana with every $5$ apples" or " $10\%$ off if you spend more than $\$5$ " then the cost of $5$ apples and $10$ bananas will no longer be $5a+10b$ . This is a non-linear system, and there is an interaction between my different purchases. | {
"source": [
"https://physics.stackexchange.com/questions/600790",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/233164/"
]
} |
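A minimal Python sketch of the shop analogy in the answer above, using only the hypothetical " $10\%$ off if you spend more than $\$5$ " offer (the prices are made up): the cost of buying everything in one trip is not the sum of the costs of the separate trips, which is exactly the failure of linearity.

def total_cost(apples, bananas, price_a=1.00, price_b=0.50):
    # hypothetical promotion: 10% off if the bill exceeds $5
    subtotal = apples * price_a + bananas * price_b
    return 0.9 * subtotal if subtotal > 5 else subtotal

print(total_cost(5, 10))                       # 9.0  (one combined trip)
print(total_cost(5, 0) + total_cost(0, 10))    # 10.0 (two separate trips)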
601,185 | My sister was doing a quiz and I tried to point her in the right direction by giving her scenarios to imagine. One of the questions in the quiz was: Which of the following objects do not reflect light: 1. Polished metal, 2. Mirror, 3. Undisturbed water, 4. Book. She suggested that the answer was "undisturbed water" and that made sense to me too. But the answer given was "book", which didn't make sense to me. How can you even see the book if it didn't reflect light in the first place? Is this terrible framing by her teacher or am I having a conceptual misunderstanding? | The question is asking "in which of the following objects will you not see a reflection?". A distinction (albeit poorly) is being made between specular reflection and diffuse reflection. The objects in options 1-3 will exhibit specular reflection, while option 4 "a book" will exhibit diffuse reflection. So the correct option will be "4 Book" since this object will not exhibit specular reflection, whereas "1. Polished metal, 2. Mirror" and "3. Undisturbed water" all exhibit specular reflection. You are correct and the question should probably have been worded similarly to this: "Which of the following objects would exhibit diffuse reflection, as opposed to specular reflection?" Now with the understanding that the question posed by the teacher was probably at an elementary school level, it should be noted that a more technical answer (and more accurate answer) should explain these two forms of reflection in detail, so see more in the links below. But to briefly summarize: Diffuse reflection : Diffuse reflection is the reflection of light or other waves or particles from a surface such that a ray incident on the surface is scattered at many angles rather than at just one angle as in the case of specular reflection . Specular reflection is described as: Specular reflection, or regular reflection, is the mirror-like reflection of waves, such as light, from a surface. | {
"source": [
"https://physics.stackexchange.com/questions/601185",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/277480/"
]
} |
601,215 | T1, T2, T3 and T4 are the names of the strings here.
Pulleys are also massless as well as the string Assumption is that B accelerates first. When B goes down x distance T4. A(T1) does up x distance. I am confused how will A(T2)will go up now. So I did by method by which I saw online. When T 1 and T2 does up x , then B goes down x. Then T3 goes x also.Therefore , the pulley moves down and T4 goes down again and T1 goes up again. Not sure if this is right.because this answer does not match with my sir. Please help Answer is that it should be 3x(A)string lost = XB strong lost which is not the answer in online case. I am getting confused because I think it is right Also V IMP POINT.We say that it is xA distance by A block and XB distance by B block.Why don’t we write them 3x = x since dispatch co feed is same.Why changing variables ? | The question is asking " which of the following objects will you not see a reflection ?". A distinction (albeit poorly) is being made between specular reflection and diffuse reflection. The objects in options 1-3 will exhibit specular reflection, while option 4 "a book" will exhibit diffuse reflection. So the correct option will be "4 Book" since this object will not exhibit specular reflection, whereas "1. Polished metal, 2. Mirror" and "3. Undisturbed water" all exhibit specular reflection. You are correct and the question should probably have been worded similar to this: " Which of the following objects would exhibit diffuse reflection, as oppose to specular reflection? " Now with the understanding that the question posed by the teacher was probably at an elementary school level, it should be noted that a more technical answer (and more accurate answer) should explain these two forms of reflection in detail, so see more in the links below. But to briefly summarize: Diffuse reflection : Diffuse reflection is the reflection of light or other waves or particles from a surface such that a ray incident on the surface is scattered at many angles rather than at just one angle as in the case of specular reflection . Specular reflection is described as: Specular reflection, or regular reflection, is the mirror-like reflection of waves, such as light, from a surface. | {
"source": [
"https://physics.stackexchange.com/questions/601215",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/284453/"
]
} |
601,223 | 1 K is defined as (1/273.15)th of the temperature of the triple point of water . At least, that's how it's defined in my book. But which scale is the triple point of water being measured in? Celsius? Fahrenheit? | To answer this question it may help to take an example from a more familiar area of physics, and then discuss temperature. For a long time the kilogram (the SI unit of mass) was defined as the mass of a certain object kept in a vault in Paris. Then the gram can be defined as one thousandth of the mass of that object, and so on. If you now ask, what units are being used to state the mass of the chosen object? then it does not matter as long as they are proportional to the scale of units you want to adopt. So if someone were to tell you the mass of the special object in pounds (e.g. 2.2 pounds) then you would still know that one gram is a thousandth of that. With temperature it goes similarly. There is a certain state of water, water vapour and ice all in mutual equilibrium. That state has a temperature independent of other details such as volume, as long as the substances are pure and they are not crushed up too small. So that state has a certain temperature. It has one unit of temperature in "triple point units" (a temperature scale that I just invented). When we say the Kelvin is a certain fraction of that temperature, we are saying that a thermometer whose indications are proportional to absolute temperature must be calibrated so as to register 273.16 when it is put into equilibrium with water at the triple point, if we wish the thermometer to read in kelvin. For example, if the thermometer is based on a constant-volume ideal gas then one should make the conversion factor from pressure in the gas to indicated temperature be a number which ensures the indicated temperature is 273.16 at the triple point. You then know that your gas thermometer is giving readings in kelvin, and you never needed to know any other units. (Note, such a thermometer is very accurate over a wide range of temperature, but it cannot be used below temperatures of a few kelvin. To get to the low temperature region you would need other types of thermometer. In principle they can all be calibrated to agree where their ranges overlap.) (Thanks to Pieter for a detail which is signaled in the comments and now corrected in the text, but I hope the comment will remain.) | {
"source": [
"https://physics.stackexchange.com/questions/601223",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/259747/"
]
} |
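The calibration step described in the answer above can be written out explicitly for a constant-volume ideal-gas thermometer. The pressure readings below are invented purely for illustration; only the 273.16 K value is fixed, by the (pre-2019) definition of the kelvin.

```python
# Constant-volume ideal-gas thermometer: T is proportional to p, and the scale is fixed
# by requiring a reading of 273.16 at the triple point of water. The pressures are made up.
P_TRIPLE = 50.0e3   # Pa, bulb pressure measured at the triple point (assumed)
T_TRIPLE = 273.16   # K, fixed by the (pre-2019) definition of the kelvin

def temperature_kelvin(p_measured):
    return T_TRIPLE * p_measured / P_TRIPLE

print(temperature_kelvin(50.0e3))   # 273.16  (the calibration point itself)
print(temperature_kelvin(68.6e3))   # ~374.8  (a hotter reading on the same, now-kelvin, scale)
```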
601,814 | Taylor's classical mechanics ,chapter 4, states: A force is conservative, if and only if it satisfies two conditions: $\vec{F}$ is a function of only the position. i.e $\vec{F}=\vec{F}(\vec{r})$ . The work done by the force is independent of the path between two points. Questions: Doesn't $1$ automatically imply $2$ ? : Since from 1, we can conclude that $\vec{F}=f(r)\hat{r}$ , for some function $f$ . Then, if $A$ is the antiderivative of $f$ , we can say that $\vec{F}=\nabla{A}$ , and therefore the work (line integral) will depend on the final and initial positions only. Or even simply put, $\vec{F}.d\vec{r}$ is a simple function of $r$ alone, so the integral will only depend on initial and final $r$ . I have seen in many places, only "2" is the definition of a conservative force. In light of this, I cant think of why 1 has to be true: i.e how is it necessary that path independence implies $\vec{F}=f(r)\hat{r}$ . It could be that my interpretation of 1 as $\vec{F}=f(r)\hat{r}$ is wrong, on which my entire question hinges. Taylor writes $\vec{F}=\vec{F(\vec{r})}$ , which I interpreted as : "since F is a function of position vector , F is a function of both the magnitude and direction, and hence $\vec{F}=f(r)\hat{r}$ ". | Your conclusions are not correct.
Here is a simple counter-example.
Consider this force $$\vec{F}=k(x\hat{y}-y\hat{x})$$ where $\hat{x}$ and $\hat{y}$ are the unit-vectors
in $x$ and $y$ -direction, and $k$ is some constant. From this definition we see, the magnitude
of the force is $F=k\sqrt{x^2+y^2}=kr$ ,
and its direction is at right angle to $\vec{r}=x\hat{x}+y\hat{y}$ .
So we can visualize this force field like this: The force circulates around the origin in a counter-clockwise sense. This force clearly satisfies your first condition $\vec{F}$ is a function of only the position,
i.e. $\vec{F}=\vec{F}(\vec{r})$ But it is not of the form $\vec{F}=f(r)\hat{r}$ . And this force violates your second condition The work done by the force is independent of the
path between the two points. To prove this, consider the following two paths: Path A (in green): beginning on the right
at $(x=R,y=0)$ , doing a half circle counterclockwise,
to the point on the left $(x=-R,y=0)$ . Path B (in red): beginning on the right
at $(x=R,y=0)$ , doing a half circle clockwise,
to the point on the left $(x=-R,y=0)$ . Then the work for path A is (because here $\vec{F}$ is always parallel to $d\vec{r}$ ) $$W_A=\int \vec{F}(\vec{r}) d\vec{r}=kR\cdot\pi R=\pi k R^2.$$ Then the work for path B is (because here $\vec{F}$ is always antiparallel to $d\vec{r}$ ) $$W_B=\int \vec{F}(\vec{r}) d\vec{r}=-kR\cdot\pi R=-\pi k R^2.$$ You see, the work is different for the two paths, although
the start and end point of the paths are the same. This is a simple example of a non- conservative force .
The non-conservativeness can easily be checked
by calculating its curl and finding it is non-zero. $$\vec{\nabla}\times\vec{F}
=\vec{\nabla}\times k(x\hat{y}-y\hat{x})
=2k\hat{z}
\ne \vec{0}$$ | {
"source": [
"https://physics.stackexchange.com/questions/601814",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/196626/"
]
} |
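A quick numerical check of the two line integrals computed in the answer above, taking k = 1 and R = 1 (assumed values; any positive choices would do).

```python
import numpy as np

k, R = 1.0, 1.0                                   # assumed values, not fixed by the answer

def force(x, y):
    return np.array([-k * y, k * x])              # F = k (x y_hat - y x_hat)

def work(thetas):
    # midpoint-rule approximation of the line integral of F . dr along r(t) = (R cos t, R sin t)
    W = 0.0
    for t0, t1 in zip(thetas[:-1], thetas[1:]):
        x0, y0 = R * np.cos(t0), R * np.sin(t0)
        x1, y1 = R * np.cos(t1), R * np.sin(t1)
        xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
        W += force(xm, ym) @ np.array([x1 - x0, y1 - y0])
    return W

print(work(np.linspace(0.0, np.pi, 2001)))        # ~ +3.14159 = +pi k R^2  (path A)
print(work(np.linspace(0.0, -np.pi, 2001)))       # ~ -3.14159 = -pi k R^2  (path B)
```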
601,945 | Chocolate Science! I melt 3 spoons of dark chocolate in microwave oven in low. It melts in 3 minutes and it's just mildly warm. I add half a spoon of milk which makes it a bit cold again. So I microwave it again and in 10 seconds it BOILS!! In 20 seconds the whole thing is burnt. WHY ??? (Same thing happens if I use almond milk instead of cow milk.) UPDATE : I already know that liquids heat better than solids in microwaves because the oven emits in the water resonance frequency causing water molecules to move rapidly. However I feel like the degree in which the chocolate-milk mixture gets hot, combined with the 10sec/3min figures is disproportionate to the quantity of the milk contained in the mixture. If I put the same quantity of milk alone in the microwave and I heat it for 10 seconds it does get pretty hot...But the chocolate-milk mixture gets even hotter than that. That is the part that I don't understand. | As already pointed out, microwaves in the oven have just the right frequency to heat water molecules.
But this alone does not explain why chocolate with a tiny bit of milk heats up so much quicker than a glass of milk.
The key is heat capacity. With no milk the chocolate is almost transparent to the microwaves.
Even a little bit of milk makes the mixture capable of absorbing a substantial portion of the energy the microwaves bring in.
This energy is then spread out through the mixture of chocolate and milk.
The heat capacity of the mixture is essentially that of chocolate if there is little milk, and the specific heat capacity of chocolate is much lower than that of water or milk.
Therefore it takes less energy to heat chocolate to such temperatures that it burns. If there is too little milk, you don't catch enough of the energy of the microwaves.
If there is too much milk, it increases the heat capacity and slows down the heating.
Somewhere in between there is a sweet spot where the milk acts as a microwave antenna for the chocolate but does not take up a substantial portion of the total heat. I am not commenting on what the burning of the chocolate actually means.
This answer only concerns the energy transfer process that leads to temperature change at different rates.
The temperature around the boiling point of milk (≈water) is not enough for combustion but is enough for other processes that change the color and flavor of the chocolate.
Also, sensitivity to microwave heating is not binary; plain chocolate heats up too but less so than milk, and it also depends on the variety of chocolate in use. | {
"source": [
"https://physics.stackexchange.com/questions/601945",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/283280/"
]
} |
601,998 | Humans can detect sounds in a frequency range from about 20 Hz to 20 kHz, so ultrasound is above 20 kHz. Occupational exposure to ultrasound in excess of 120 dB may lead to
hearing loss . Source How can ultrasound hurt human ears if it is above the audible range? | Ultrasound is used to break kidney stones and "burn" brain tumors. So no surprise that it can damage body parts, including inner ear. The effects of ultrasound on human body are multiple. One of them is heating due to absorption of the mechanical energy in the tissue and conversion to heat. This is used in physiotherapy as a desirable effect but it can be damaging if too much power is absorbed. There is also a mechanical effect. Parts reached by ultrasound vibrate so if the amplitude is too high they may break. The effect is used in ultrasound cleaners as well in medical field, to break apart stones and blood clots. Again, too much can do bad things. For this reason there are strict rules on maximum power allowed in any ultrasound imaging system. These effects have nothing to do with the fact that our ear can or cannot hear the sound. In similar way, your eyes can get burned by microwaves even though you don't see them. Edit Richard Tingle mentions an interesting aspect of this problem.
I focused my answer above on the possibility of physical damage to the tissue due to ultrasound. In order to prevent these effects, the limits on the maximum power used for ultrasound imaging are, for example, around 750 $mW/cm^2$ for sturdy organs like the heart, liver and kidneys. For more sensitive areas like baby ultrasound or the eyes the limits are much lower, but still of the order of a few (or tens of) $mW/cm^2$ . Now let's compare these with airborne ultrasound at the 120 dB level. In terms of intensity this means 1 $W/m^2$ or 0.1 $mW/cm^2$ . And this is the same no matter if the sound is audible or not. The sensation we get depends on the frequency range, but the physical intensity is the same. This shows that the damage to the ears is of a different nature, and usually is a long-term effect. One effect is an over-working of the sensitive hairs in the inner ear, which in time can result in the death of the cells in these hairs. The hairs are tuned to frequencies in the audible range. This does not mean that they do not vibrate when the frequency is off-resonance, just that the vibration amplitude is lower. Indeed, the limits established by the International Non-Ionizing Radiation Committee for exposure to ultrasound are 110 dB for frequencies between 20 and 100 kHz (ultrasound) and just 75 dB for audible sound. This indicates that the effect of airborne ultrasound is less pronounced than that of audible sound. The difference between 110 dB and 75 dB is significant in terms of intensity. However the effect is there, as proved by many studies used by the INRC to establish their limits. The effects are not just possible hearing loss but also nausea, dizziness, headache and other things. So indeed 120 dB can be damaging even for ultrasound. You don't need to hear it for it to hurt you. | {
"source": [
"https://physics.stackexchange.com/questions/601998",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/283297/"
]
} |
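The decibel-to-intensity arithmetic quoted in the answer above (120 dB giving 1 W/m², and the 110 dB and 75 dB limits) can be checked against the standard reference intensity of 10⁻¹² W/m².

```python
import math

I0 = 1e-12                      # W/m^2, standard reference intensity for sound level in dB

def intensity(db):
    return I0 * 10 ** (db / 10)

print(intensity(120))           # 1.0 W/m^2, i.e. 0.1 mW/cm^2 as quoted in the answer
print(intensity(110))           # 0.1 W/m^2  (the 110 dB ultrasound exposure limit)
print(intensity(75))            # ~3.2e-5 W/m^2 (the 75 dB audible-sound limit)
```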
602,020 | I am looking to find the reason: why air pressure decreases with altitude? Has it to do with the fact that gravitational force is less at higher altitude due to the greater distance between the masses? Does earth’s spin cause a centrifugal force? Are the molecules at higher altitude pushing onto the molecules of air at lower altitudes thus increasing their pressure? Is the earths air pressure higher at the poles than at the equator? | The air pressure at a given point is the weight of the column of air directly above that point, as explained here . As altitude increases, this column becomes smaller, so it has less weight. Thus, points at higher altitude have lower pressure. While gravitational force does decrease with altitude, for everyday purposes (staying near the surface of the Earth), the difference is not very large. Likewise, the centrifugal force also does not have significant impact . | {
"source": [
"https://physics.stackexchange.com/questions/602020",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/283273/"
]
} |
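As a rough illustration of the "weight of the column above" picture in the answer above, one can integrate dp/dh = -ρg for a deliberately simplified isothermal atmosphere; the uniform 250 K temperature and the use of the dry-air gas constant are assumptions of this sketch, not claims of the answer.

```python
import math

g, R_air, T = 9.81, 287.0, 250.0        # m/s^2, J/(kg K) for dry air, K (assumed constant)

def pressure(h, p0=101325.0):
    # integrating dp/dh = -rho*g with rho = p/(R_air*T) gives p(h) = p0*exp(-g*h/(R_air*T))
    return p0 * math.exp(-g * h / (R_air * T))

for h in (0, 1000, 5000, 10000):
    print(h, round(pressure(h)))        # the shorter the column above, the lower the pressure
```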
603,701 | What would be the effect of placing an object that cold in an environment that warm? Would the room just get a little colder? Would it kill everyone in the room like some kind of cold bomb? What would happen? Don't think about how the cube got there, or the air which it would displace. | Nothing overly dramatic, though it would be cool to look at. The cube would very quickly become covered by a layer of nitrogen/oxygen ice as the air which came into contact with it froze. Further away, you'd see condensation of water vapor into wispy clouds, which would swirl around the block due to the air currents generated by the sudden pressure drop. Other than that, as long as you aren't in immediate thermal contact with the block, you wouldn't notice much other than that the room cools down. Here's a video I took of a vacuum can that was just removed from a dewar of liquid helium at 4 kelvin. It's maybe 5 kg of copper, not 10 kg of lead, but I'd say that's close enough to get the idea. You can see one of my coworkers climbing down into a pit below it; he had to be careful not to bump his head on it, which would have really ruined his day, but there was no fatal cold bomb :) | {
"source": [
"https://physics.stackexchange.com/questions/603701",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/271884/"
]
} |
604,074 | I have seen that power is transmitted from power stations to households at high voltage and low current to minimize the power loss. That means the current in the transmission line is less than the current in the household wiring as there using a transformer we decrease the voltage and current is increased to keep the same power. As it's the current that is dangerous, as it means how much charge flows per time unit: why are transmission lines more dangerous than household lines, even though voltage is high, but current is less? | Current flowing in the wire is irrelevant to the danger. It's the current flowing through your body that will hurt you, and the amount of current that flows through your body will be proportional to the voltage between the wire and anything else that you happened to be touching (e.g., the ground upon which you are standing.) | {
"source": [
"https://physics.stackexchange.com/questions/604074",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/184613/"
]
} |
604,457 | This paper talks about finding theoretical correlations to experiential phenomena in quantum mechanical experiments using artificial intelligence (AI). If AI applications can be sufficiently well trained in theoretical simulations using Schrödinger's equation , is it possible to actually use it to predict position-momentum and the like to arbitrary levels of accuracy by virtue of some (yet unknown) pattern, thereby being able to possibly describe deterministic relationships between non-commuting variables? I am interested in knowing the possible limitations/lack of it, of an intelligent system in such tricky areas like the foundations of quantum mechanics. Edit : Now that I have read comments and answers like this , I am also interested to know about efforts to actually find problems with the uncertainty principle, using methods of computation like in the mentioned answer. | If it could be done it would be a violation of the uncertainty principle. This would mean one of two things: The AI cannot violate the uncertainty principle, or... The uncertainty principle is wrong So if we start from the assumption that the current model of QM is perfect in every way, then the AI could not beat the odds, because it would not have the physical tools needed to go about beating the odds. However, where AI tools like neural nets are powerful is in their ability to detect patterns that we did not see before. It is plausible that an AI could come across some more fundamental law of nature which yields more correct results than the uncertainty principle does. This would invite us to develop an entirely new formulation of microscopic physics! As a very trivial example, let me give you a series of numbers. 293732 114329 934700 172753 489332 85129 759100 61953 644932 335929 623500 671153 760532 866729 527900 353 836132 677529 472300 49553 871732 768329 456700 818753 867332 139129 481100 307953 822932 789929 545500 517153 738532 720729 649900 446353 614132 931529 794300 95553 449732 422329 978700 464753 245332 193129 203100 553953 932 243929 467500 363153 716532 574729 771900 892353 392132 185529 116300 141553 27732 76329 500700 110753 623332 247129 925100 799953 178932 697929 389500 209153 694532 428729 893900 338353 170132 439529 438300 187553 605732 730329 22700 756753 1332 301129 647100 45953 356932 151929 311500 55153 672532 282729 15900 784353 948132 693529 760300 233553 183732 384329 544700 402753 379332 355129 369100 291953 534932 605929 233500 901153 650532 136729 137900 230353 726132 947529 82300 279553 761732 38329 66700 48753 757332 409129 91100 537953 712932 59929 155500 747153 628532 990729 259900 676353 504132 201529 404300 325553 339732 692329 588700 694753 135332 463129 813100 783953 890932 513929 77500 593153 606532 844729 381900 122353 282132 455529 726300 371553 917732 346329 110700 340753 513332 517129 535100 29953 68932 967929 999500 439153 584532 698729 503900 568353 60132 709529 48300 417553 495732 329 632700 986753 891332 571129 257100 275953 246932 421929 921500 285153 562532 552729 625900 14353 838132 963529 370300 463553 These numbers appear highly random. Upon seeing it in a physical setting, one might assume these numbers actually are random, and invoke statistical laws like those at the heart of the uncertainty principle. But, if you were to throw an AI at this, you'd notice that it could predict the results with frustratingly high regularity. 
Once a neural network, like that described in the journal article, has shown that there is indeed a pattern, we can try to tease it apart. And, lo and behold, you would find that sequence was $\{X_1, X_2, X_3, ...\}$ where $X_i=2175143 * X_{i-1} + 10653\quad\text{(mod 1000000)}$ starting with $X_{0}=3553$ I used a linear congruential PRNG to generate those. If the universe actually used that sequence as its "source" for drawing the random values predicted in QM, then an AI could pick up on it, and start using this more-fundamental law of nature to do things that the uncertainty principle says are impossible. On the other hand, if the universe actually has randomness in it, the AI cannot do any better than the best statistical results it can come up with. In the middle is a fascinating case. Permit me to give you another series of numbers, this one in binary (because the tool I used outputs in binary) 1111101101100110111010101101010001000101111100101011111110000110100010010001110010010011101010000010101001111001100011100110001010011110100100010001000111110000010100101101111101011111000001011101011110110100000000000101010110100001101101001100111111000110000101000110000000110001100101001011000110101111011011101011011101110010111101111001111110010110011000000101110010010010111111001110101101111100110100111010010001011101101111110001111111011010111000101000001011001011010010011111000000110011100000001110000011000101110111100001100010111010111101010101000011010111010011011010101000111110110011100111000011101101110011111100011100101111101110100111001101011000000000110000111001010000001011100100100010111100101101101111011110000011110100010100011000011110010000001100011001110111011010001100010000011101011011011001011001100110100101001011001000101101000110010010010000110100110010111010001111001000111000100100100100111011001101011111001110011100100001001010001011110101001010000010100010111010 I will not tell you whether this series is random or pseudorandom. I will not tell you whether it was generated using the Blum Blum Shub algorithm. And I certainly wont tell you the key I used if I used the Blum Blum Shub algorithm. It is currently believed that, to tell the difference between a random stream and the output of Blum Blum Shub, one must solve a problem we do not currently believe is solvable in any practical amount of time. So, hypothetically, if the universe actually used the stream of numbers I Just provided as part of the underlying physics that appears to be random per, quantum mechanics, we would not be able to tell the difference. But an AI might be able to detect a pattern that we didn't even know we could detect. It could latch onto the pattern, and start predicting things that are "impossible" to predict. Or could it? Nobody is saying that that string of binary numbers is actually the result of an algorithm. It might truly be random... Neural networks like the one described in the paper can find patterns that we did not observe with our own two eyes and our own squishyware inside our skull. However, they cannot find a pattern if one does not exist (or worse: they can find a false pattern that leads one astray). | {
"source": [
"https://physics.stackexchange.com/questions/604457",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/227794/"
]
} |
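The pseudo-random sequence given in the answer above can be reproduced directly from the recurrence it states; this is only a re-implementation of that linear congruential generator.

```python
def lcg(x0=3553, a=2175143, c=10653, m=1_000_000):
    # X_i = (a * X_{i-1} + c) mod m, exactly as stated in the answer
    x = x0
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg()
print([next(gen) for _ in range(6)])
# [293732, 114329, 934700, 172753, 489332, 85129]  (the first entries of the listed sequence)
```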
604,980 | I am new to this topic and was just wondering about the use of instantaneous speed. I mean, we use to calculate the speed of car let us say at 5 sec. So we take the distance travelled in 4.9 to 5.0 seconds and divide it by time. We get instantaneous speed. We could simply as well have had taken distance travelled from 0 to 5 seconds and then divide it by time. So what is the use of instantaneous speed then? | Because instantaneous speed affects physics. Imagine a wall $10~\textrm m$ in front of you. You walk towards it smoothly over a timeframe of, say, $20~\textrm s$ , and without getting slower, you walk into the wall. You'll feel a slight bonk, but nothing serious is going to happen. Now imagine the same 20 seconds going differently: You wait for 17 seconds, then you sprint towards the wall at full speed. Both scenarios will give you the same average speed over the 20 seconds, but you better be wearing a helmet for the second one. The difference lies in the fact that the instantaneous speed at the end of the 20 second interval is different. It's a quantity that affects things. So it makes sense to talk about it. | {
"source": [
"https://physics.stackexchange.com/questions/604980",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/284453/"
]
} |
604,987 | I hope you can understand that I am talking about instantaneous speed here.You can notice I have done summation of all small ds/dt in my photo. The angle which I have got it theta.I want to know is whether this angle theta is going to always remain same for those ds or not. How would it affect the answers if it does or not be same value of theta for all the ds ? | Because instantaneous speed affects physics. Imagine a wall $10~\textrm m$ in front of you. You walk towards it smoothly over a timeframe of, say, $20~\textrm s$ , and without getting slower, you walk into the wall. You'll feel a slight bonk, but nothing serious is going to happen. Now imagine the same 20 seconds going differently: You wait for 17 seconds, then you sprint towards the wall at full speed. Both scenarios will give you the same average speed over the 20 seconds, but you better be wearing a helmet for the second one. The difference lies in the fact that the instantaneous speed at the end of the 20 second interval is different. It's a quantity that affects things. So it makes sense to talk about it. | {
"source": [
"https://physics.stackexchange.com/questions/604987",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/284453/"
]
} |
605,316 | I am not a physicist - just a curious person. We know from observation that Newton's formulation of physics is incomplete. We could suppose that, on a planet of perpetual fog, no-one would have thought to question it. Prior to discovering flight, we wouldn't have had much in the way of cosmology to point us in the right direction. Question Without physical experimentation, are there any inconsistencies in Newton's laws that would hint at them being incorrect? I'm thinking of calculations that would require dividing by zero or similar. For example, could we have discovered, say relativity, purely from mathematical inconsistencies in the Newtonian formulation? | No. Newtonian physics is self consistent. You do get into some non-trivial conceptual difficulties with point particles. But you can simply take those difficulties as an indication of the logical impossibility of classical point particles. There are also some interesting edge-cases where either determinism or time reversal symmetry seem to be in conflict (Norton’s dome). But pure logic does not require that the Newtonian universe must be both deterministic and time-reversible in all cases. | {
"source": [
"https://physics.stackexchange.com/questions/605316",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/85871/"
]
} |
606,390 | I wondered this since my teacher told us about half life of radioactive materials back in school. It seems intuitive to me to think this way, but I wonder if there's a deeper explanation which proves me wrong. When many atoms are involved, half life can statistically hold, but since decaying of an individual atom is completely random and stateless, can't all the atoms in a 1 kg of matter just decide to decay in the next minute, even if the probability of this event occurring is extremely small? | The short answer is yes . No matter how many atoms there are, there is always a (sometimes vanishingly small) chance that all of them decay in the next minute. The fun answer is actually seeing how small this probability gets for large numbers of atoms. Let's take iodine-131 , which I chose because it has the reasonable half-life of around $8$ days = $\text{691,200}$ seconds. Now $1$ kg of iodine-131 will have around $7.63 \times N_A$ atoms in it, where $N_A$ is Avogadro's constant. Using the formula for probability for the decay of an atom in time $t$ : $$
P(t) = 1-\exp(-\lambda t),
$$ and assuming that all decays are statistically independent $^\dagger$ , the probability that all the atoms will have decayed in one minute is: $$
(1-\exp(-\lambda \times 60\,\text{s}))^{7.63\times N_A}
$$ where $\lambda$ is the decay constant, equal to $\frac{\ln 2}{\text{half-life}}$ , in this case, almost exactly $10^{-6}\,\text{s}^{-1}$ . So $$
P = (1-\exp(-6\times10^{-5}))^{7.63\times N_A} \\
\approx(6\times10^{-5})^{7.63\times N_A} \\
\approx (10^{-4.22})^{7.63\times N_A} \\
= 10^{-4.22\times7.63\times N_A} \\
\approx 10^{-1.94\times10^{25}}
$$ (I chose iodine-131 as a concrete example, but pretty much any radioactive atom will result in a similar probability, no matter what the mass or the half-life is.) So if you played out this experiment on $10^{1.94\times10^{25}}$ such setups, you would expect all the atoms to decay in one of the setups, on average. To give you an idea of how incomprehensibly large this number is, there are "only" $10^{78}$ atoms in the universe - that's $1$ followed by $78$ zeroes. $10^{1.94\times10^{25}}$ is $1$ followed by over a million billion billion zeroes. I'd much rather bet on horses. $^\dagger$ This Poisson distribution model is a simplifying, but perhaps crude approximation in this scenario, since even small deviations from statistical independence can add up to large suppressing factors given the number of atoms, and so $10^{1.94\times10^{25}}$ is certainly an upper bound (of course, the approximation is fully justified if the atoms are separated to infinity at $0 \text{ K}$ , or their decay products do not have sufficient energy to make more than a $1/N_A$ -order change in the decay probability of other atoms). A more detailed analysis would have to be tailored specifically to the isotope under consideration - or a next-order approximation could be made by making the decay constant $\lambda$ a strictly increasing function of time. Rest assured that the true probability, while much more difficult to calculate than this back-of-the-envelope estimation, will still run into the mind-bogglingly large territory of $1$ in $1$ followed by several trillions of zeroes. | {
"source": [
"https://physics.stackexchange.com/questions/606390",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/285199/"
]
} |
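The headline estimate in the answer above can be re-derived numerically under the same independent-decay assumption.

```python
import math

N_A       = 6.02214076e23            # Avogadro's constant
half_life = 8 * 24 * 3600            # ~691,200 s for iodine-131
lam       = math.log(2) / half_life  # decay constant, ~1e-6 per second
n_atoms   = 1000 / 131 * N_A         # ~7.63 * N_A atoms in 1 kg

p_one  = 1 - math.exp(-lam * 60)     # probability that one atom decays within 60 s
log10P = n_atoms * math.log10(p_one) # log10 of p_one ** n_atoms (all atoms decay)

print(p_one)                         # ~6.0e-05
print(log10P)                        # ~-1.94e+25, i.e. P ~ 10^(-1.94e25)
```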
606,633 | My sense is that even though neutrons decay into a proton and an electron they are made up of quarks, it is not just some "merged" particle where, for example, the electron is orbiting the proton very closely or something (which would be basically a hydrogen atom). Anyway, is there some way to simply shoot a stream of electrons at hydrogen ions (which I think are easy to make and are just protons) and observe, if you do it fast enough and in a great enough volume, that some neutrons, besides a great number of new hydrogen atoms, get produced? Maybe this happens in fusion? New neutrons get produced since I recall an objection to cold fusion is that in fact no neutrons were found. | The one-word answer is yes. You are also correct that the neutron is not just a proton and electron living together. The process of merging a proton and electron proceeds via the weak force. Specifically, an up quark in the proton exchanges a W boson with the electron. The W boson carries a unit of positive charge from the quark to the electron. In that process the up quark (charge +2/3) is converted to a down quark (charge -1/3) so that the proton (uud) becomes a neutron (udd). The negatively charged electron is converted into a neutrino. This is one important point left out in your question. The full reaction is $p+e^-\to n+\nu_e$ . There is a general principle of quantum field theory called crossing symmetry that roughly states that for any process I can exchange what I call initial and final particles. So you are correct that neutron decay $n\to p+ e^- + \bar\nu_e$ implies that the process $p+e^-\to n+\nu_e$ can also happen. This process does also happen in nature. It is one mode of radioactive decay of nuclei. Some nuclei with a sufficiently large number of protons can become more stable by absorbing one of their electrons and converting one proton into a neutron. This can happen because electron orbitals have a small but non-zero overlap with the nucleus, so that they "sometimes come into contact with" the protons. This process can also happen artificially as you suggest. In fact, it seems that accelerators used in medical facilities produce neutrons as a by-product, exactly as you suggest, and this is apparently a difficulty that must be dealt with, see this paper . In general, because the mass difference between the proton and neutron is about an MeV, in any system including protons and electrons at a temperature of order an MeV or higher, there will necessarily be populations of both neutrons and protons connected to each other by such processes, with relative amounts determined by the relevant Boltzmann factors. This should include systems where thermal fusion is taking place. However the actual process of producing helium from hydrogen, as far as I understand, does not depend on capturing an electron on a proton to form a neutron. In stellar nucleosynthesis , two protons merge to form deuterium. That is, in the process of merging, one proton is converted into a neutron by the emission of a positron and a neutrino. Helium-2 (two protons) is highly unstable, so that this proton-to-neutron conversion producing stable deuterium is more important. | {
"source": [
"https://physics.stackexchange.com/questions/606633",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/208473/"
]
} |
606,634 | The tangential force exerted on a pendulum weight is $-mgsin(\theta)$ . If we say that the pendulum has length L than $sin\theta$ = $\frac{x}{l}$ . Then $$F_{tangential} = \frac{-mg}{l}x$$ Then why do we need the small angle approximation at all? This relation between the force and the displacement satisfies the condition of simple harmonic motion, which is $\frac{F}{x} = c$ ; $c<0$ . My textbook uses small angle approximation and derives the same force equation from there. But to me, it seems like the relation should be linear even if the angle is large. | The one-word answer is yes. You are also correct that the neutron is not just a proton and electron living together. The process of merging a proton and electron proceeds via the weak force. Specifically, an up quark in the proton exchanges a W boson with the electron. The W boson carries a unit of positive charge from the quark to the electron. In that process the up quark (charge +2/3) is converted to a down quark (charge -1/3) so that the proton (uud) becomes a neutron (udd). The negatively charged electron is converted into a neutrino. This is one important point left out in your question. The full reaction is $p+e^-\to n+\nu_e$ . There is a general principle of quantum field theory called crossing symmetry that roughly states that for any process I can exchange what I call initial and final particles. So you are correct that neutron decay $n\to p+ e^- + \bar\nu_e$ implies that the process $p+e^-\to n+\nu_e$ can also happen. This process does also happen in nature. It is one mode of radioactive decay of nuclei. Some nuclei with a sufficiently large number of protons can become more stable by absorbing one of their electrons and converting one proton into a neutron. This can happen because electron orbitals have a small but non-zero overlap with the nucleus, so that they "sometimes come into contact with" the protons. This process can also happen artificially as you suggest. In fact, it seems that accelerators used in medical facilities produce neutrons as a by-product, exactly as you suggest, and this is apparently a difficulty that must be dealt with, see this paper . In general, because the mass difference between the proton and neutron is about an MeV, in any system including protons and electrons at a temperature of order an MeV or higher, there will necessarily be populations of both neutrons and protons connected to each other by such processes, with relative amounts determined by the relevant Boltzmann factors. This should include systems where thermal fusion is taking place. However the actual process of producing helium from hydrogen, as far as I understand, does not depend on capturing an electron on a proton to form a neutron. In stellar nucleosynthesis , two protons merge to form deuterium. That is, in the process of merging, one proton is converted into a neutron by the emission of a positron and a neutrino. Helium-2 (two protons) is highly unstable, so that this proton-to-neutron conversion producing stable deuterium is more important. | {
"source": [
"https://physics.stackexchange.com/questions/606634",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/250129/"
]
} |
606,722 | In many places over the Internet, I have tried to understand entropy. Many definitions are presented, among which I can formulate three (please correct me if any definition is wrong): Entropy = disorder, and systems tend to the most possible disorder Entropy = energy distribution, and systems tend to the most possible energy distribution Entropy = information needed to describe the system, and systems tend to be described in less lines Entropy = statistical mode, and the system tends to go to a microscopic state that is one of the most abundant possible states it can possess. Now, I have these contrary examples in my mind: Disorder => how about a snowflake? What is disorder? How do we agree on what is ordered and what is disordered? Because to me a snowflake is a perfect example of order. Energy distribution => then why Big Bang happened at all? As they say, universe was one tiny point of energy equally distributed. Now the universe is a repetition of energy density and void. Information => we can describe the universe before the Big Bang in one simple sentence: an energy point, X degrees kelvin . But we need billions of billions of lines of descriptions to be able to describe the universe. Mode => Again, before the big bang, or even in early epochs of the universe we had uniform states that were the most abundant possible states. I'm stuck at this very fundamental philosophical definition. I can understand the "cup of coffee" example of course, or the "your room gets messy over time" example. Those are very clear examples. But I'm stuck at these examples. Can you clarify this for me please? | Your concern about the too many definitions of entropy is well-founded. Unfortunately, there is an embarrassing confusion, even in the scientific literature on such an issue. The answers you may find even in the SE sites just mirror this state of things. The short answer is that there is nothing like a unique concept of entropy . There are many different but correlated concepts, which could have been named differently. They have some direct or indirect relation with thermodynamic entropy, although they usually do not coincide with it without additional assumptions. Just a partial list of different concepts, all named entropy contains Thermodynamic entropy. Dynamical system entropy. Statistical mechanics entropy. Information theory entropy. Algorithmic entropy. Quantum mechanics (von Neumann) entropy. Gravitational (and Black Holes) entropy. Although all these quantities are named entropy , they are not entirely equivalent. A schematic list of the range of systems they can be applied and some mutual relation could help organize a mental map in such a confusing conceptual landscape. Let me add a preliminary disclaimer. I am not going to write a comprehensive treatise on each possible entropy. The list is intended as an approximate map. However, even if I may be missing some important relation (I do not claim to be an expert on every form of entropy!), the overall picture should be correct. It should give an idea about the generic non-equivalence between different entropies. 1. Thermodynamic entropy It can be applied to macroscopic systems at thermodynamic equilibrium or even non-equilibrium systems, provided some sort of a local thermodynamic equilibrium (LTE) can be justified for small regions of the system. 
LTE requires that each subregion is large enough to neglect the effect of relative fluctuations (local thermodynamic quantities have to be well defined), and the relaxation times are faster than typical dynamic evolution times. Usual thermodynamics requires the possibility of controlling the work and heat exchanged by the system and crucially depends on some underlying microscopic dynamics able to drive the system towards equilibrium. 2. Dynamical system entropy The present and other items should contain sublists. Under this name, one can find entropies for abstract dynamical systems (for example, the metric entropy introduced by Kolmogorov and Sinai ) and continuous chaotic dynamical systems. Here, the corresponding entropy does not require an equilibrium state, and recent proposals for non-equilibrium entropies ( an example is here ) can be classified under this title. 3. Statistical mechanics entropies Initially, they were introduced in each statistical mechanics ensemble to provide a connection to the thermodynamic concept. In principle, there is a different entropy for each ensemble. Such different expressions coincide for a broad class of Hamiltonians only at the so-called thermodynamic limit (TL), i.e., for systems with a macroscopically large number of degrees of freedom. Notice that Hamiltonians have to satisfy some conditions for the TL to exist. Apart from the coincidence of entropies in different ensembles, the TL is also required to ensure that the statistical mechanics' entropies satisfy some key properties of thermodynamic entropy, like convexity properties or extensiveness. Therefore, one could say that the statistical mechanics' entropy is a generalization of the thermodynamic entropy, more than being equivalent. 4. Information theory entropy This entropy is the well-known Shannon's formula $$
S_{info}= -\sum_i p_i \log p_i
$$ where $p_i$ are the probabilities of a complete set of events. It is clear that $S_{info}$ requires only a probabilistic description of the system. There is no requirement of any thermodynamic equilibrium or of the energy of a state, and no connection exists with work and heat, in general. $S_{info}$ could be considered a generalization of the statistical mechanics entropy, coinciding with it only in the case of an equilibrium probability distribution function of thermodynamic variables. However, $S_{info}$ can be defined even for systems without any intrinsic dynamics. 5. Algorithmic entropy In the present list, it is the only entropy that can be assigned to an individual (microscopic) configuration. Its definition does not require large systems, a probability distribution, intrinsic dynamics, or equilibrium. It is a measure of the complexity of a configuration , expressed by the length of its shortest description. The relation between algorithmic entropy and information entropy is that if there is an ensemble of configurations, the average value (on the ensemble) of the algorithmic entropy provides a good estimate of the information entropy. However, one has to take into account that the algorithmic entropy is a non-computable function. 6. Quantum mechanics (von Neumann) entropy Although formally different, it can be considered a generalization of Shannon's ideas to describe a quantum system.
However, concepts like thermal equilibrium or heat do not play any role in this case. 7. Gravitational (and Black Holes) entropies A set of stars in a galaxy can be thought of as a system, at least in LTE. However, their thermodynamics is quite peculiar. First of all, it is not extensive (energy grows faster than the volume). The ensembles' equivalence does not hold, and it is well known that the microcanonical specific heat is negative. A similar but not precisely equal behavior is found for the Black Hole entropy proposed by Bekenstein. In this case, the quantity that plays the role of entropy is the area of the event horizon of the Black Hole. Although it has been shown that this entropy shares many properties of thermodynamic entropy and can be evaluated within String Theory by counting the degeneracy of suitable states, its connection with thermodynamic entropy remains to be established. What about disorder? It remains to discuss the relation between entropies (plural) and disorder. It is possible to associate to each entropy a specific concept of disorder . But it is easy to guess that, in general, it won't be the same for all. The only disorder associated with thermodynamic entropy is the disorder connected to how extensive quantities are stored in different subsystems of the same macroscopic state. Within thermodynamics, a well-ordered macroscopic state is a state where extensive quantities are spatially concentrated. The maximum disorder coincides with a spread of the extensive state variables to ensure the same temperature, pressure, and chemical potential in each subvolume. Within classical statistical mechanics, one can associate disorder with the number of available microstates in the phase space. Notice, however, that this disorder , in general, has nothing to do with the usual definition of spatial order. The reason is connected with the non-intuitive role of inter-particle interactions and the fact that the statistical mechanics entropy is related to counting the number of microstates. Probably, the entropy with the closest connection to the usual meaning of disorder is the algorithmic entropy. But that is also the most difficult to evaluate and the farthest from the thermodynamic entropy. A small postscript A pedagogical illustration of the complete decoupling between configurational order and entropy comes from the Sackur-Tetrode formula for classical ideal gas entropy . It shows that the entropy depends explicitly on the atoms' mass, while the accessible configuration space and the probability of each spatial configuration are the same. | {
"source": [
"https://physics.stackexchange.com/questions/606722",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4499/"
]
} |
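Shannon's formula quoted in the answer above is easy to evaluate; the two example distributions below are arbitrary choices meant only to show how a peaked versus a uniform distribution changes S_info (natural logarithm assumed).

```python
import math

def shannon_entropy(probs):
    # S_info = -sum_i p_i ln p_i  (terms with p_i = 0 contribute nothing)
    return -sum(p * math.log(p) for p in probs if p > 0)

print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # ln 4 ~ 1.386: uniform, maximal for 4 events
print(shannon_entropy([0.97, 0.01, 0.01, 0.01]))  # ~0.168: sharply peaked, much less uncertain
```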
606,949 | As I understand it, the eardrum works like any other kind of speaker in that it has a diaphragm which vibrates to encode incoming motion into something the inner ear translate to sound. It's just a drum that moves back and forth, so it can only move at one rate or frequency at any given time. But humans have very discerning ears and can simultaneously tell what instruments are playing at the same time in a song, what the notes in the chord of one of those instruments is, even the background noise from the radiator. All of this we can pick apart at the same time despite that all of these things are making different frequencies. I know that all of these vibrations in the air get added up in a Fourier Series and that is what the ear receives, one wave that is a combination of all of these different waves. But that still means the ear is only moving at one frequency at any given time and, in my mind, that suggests that we should only be able to hear one sound at any given time, and most of the time it would sound like some garbled square wave of 30 different frequencies. How can we hear all these different frequencies when we can only sense one frequency? | But that still means the ear is only moving at one frequency at any given time No, it doesn't mean that at all. It means the eardrum is moving with a waveform that is a superposition of all the frequencies in the sound-wave it is receiving. Then, within the inner ear, hair cells detect the different frequencies separately. It is entirely possible for several hair cells to be stimulated simultaneously so that you hear several frequencies at the same time. | {
"source": [
"https://physics.stackexchange.com/questions/606949",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/75387/"
]
} |
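A small numerical illustration of the superposition point made in the answer above: a single waveform can carry several frequencies at once, and a frequency analysis recovers them separately. The sampling rate and the two component frequencies (440 Hz and 1000 Hz) are arbitrary choices for this sketch.

```python
import numpy as np

fs = 8000                                    # samples per second (assumed)
t = np.arange(0, 1.0, 1 / fs)
# one displacement-vs-time trace containing two tones at once
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
peaks = freqs[np.argsort(spectrum)[-2:]]     # the two strongest frequency components

print(np.sort(peaks))                        # [ 440. 1000.]  both tones are recovered
```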
607,329 | If we fold a paper and then apply pressure on the newly formed crease, it seems that the paper's surface gets a permanent deformation but what exactly has happened to the paper at a molecular scale? | Basically, a fold or crease in paper will remain because the structure of the fibers in the paper have become irreversibly damaged. This happens because the paper is bent/compressed beyond its elastic limit. Chemically, paper is mainly composed of cellulose from plant fibers. Cellulose is an organic polymer, which has D-glucose units connected through hydrogen bonds. These bonds form between the oxygen atom of the one-hydroxyl group belonging to the glucose and the hydrogen atom of the next glucose unit. These are microscopic properties of paper, but to understand what happens when we fold paper or do Origami, it is sufficient to learn what is happening macroscopically. All materials have what is called an elastic limit and a plastic region . The elastic limit is the point at which a material will bend but still return to its original position without any permanent change or damage to its structure. Further deforming the material beyond this limit takes it to its plastic region. At this point any structural or physical changes become permanent and the paper will not return to its original form. Every material has a different elastic limit or yield , and plastic region. Imagine holding a piece of paper slightly bent but not folding or creasing it. The plant fibers that make up the paper will not have exceeded their elastic limit. So as soon as you let go of the paper sheet it will quickly return to its noncreased original flat state. However, if you were to roll that piece of paper into a cylinder and hold it for a few minutes, some of these fibers will be pushed beyond the elastic limit which is evident since it will not lie flat anymore since slight deformations have occurred in this sheet. Now, when you properly fold a piece of paper as you would during Origami, the plant fibers along the crease are pushed into the plastic region of the paper, causing a fracture point at the actual line of the fold. A practical example of this is if you were to fold a piece of paper, you will note that if you stretch the paper evenly on both sides of the fold, the paper will tear right on the fold (a quick way to "cut" paper if you have no scissors). The fold then becomes an irreversible structural failure and the fibers in the paper will never regain their original state. Because of this damage to its structure, the paper will from then on have this fold. And no matter how hard you try to flatten out the fold it will never return to its original state. This is why Origami models continually retain their shape. | {
"source": [
"https://physics.stackexchange.com/questions/607329",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/236734/"
]
} |
607,544 | An igloo is not only used as shelter from snow but also to keep warm. Perhaps, a simple igloo is made of ice and nothing else still, why is its interior warmer than the exterior? | An igloo is not made from ice, but made from compressed snow. Snow is basically semi-frozen water or frozen crystalline water . Contrary to intuition, snow has actually got very good insulating properties. Solid ice on the other hand, is not a good insulator compared to compressed snow. This is because ice is actually solid but snow is filled with minute pockets of air. While snow on an igloo does indeed look solid, up to 95% of it is actually air trapped inside minute crystals. Because this air cannot circulate much inside these ice crystals, heat becomes trapped inside it. Some engineering also goes into the design of the inside of an igloo. The inside is divided into levels, where the upper level is for sleeping, the middle one is for fire and cooking (yes, fire! and a little hole is built into the top of the igloo to prevent smoke inhalation) and a lower level is used as a sink for cold air. As we know, heavier colder air naturally drops, and since the lowest level is where the door is placed, this cold air stays there. And warm air which rises, collects where it is mostly needed - in the eating and sleeping levels. Also, since the entrance to the igloo is at the bottom part - the tunnel to crawl through whilst entering or exiting the igloo - freezing air cannot blow directly into its interior. Temperatures can reach as low as -50°F (-45°C) outside the igloo but the temperature inside can be a “comfortable” 20°-70°F (-7° to 20°C) (when you’re exposed to those low temperatures outside, coming back in can be most pleasant!). All of this can be explained if we consider heat transfer and convection . This is a process whereby when a fluid moves, it transfers heat along with it. When this fluid is stationary, it will transfer heat by thermal conduction which is the transfer of heat from one body to another when they are in contact. You can see an example of this when you touch an ice cube and seeing it melt right where you fingertip is. But the more a fluid moves, the greater is it’s Reynolds number since the flow patterns become more unpredictable. The greater the Reynolds number, the more heat transferred via convection. And because snow has a low thermal conductivity, as mentioned above for air and how the snow contains air pockets, an igloo stops the heat transfer into its outside surroundings. The compressed snow and stationary air both act as surprisingly effective insulators. | {
"source": [
"https://physics.stackexchange.com/questions/607544",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/75502/"
]
} |
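A rough conduction estimate in the spirit of the answer above: steady heat flow through a wall scales with the material's thermal conductivity. The wall geometry, the 60 K temperature difference and the conductivities (about 0.2 W/(m·K) for compacted snow, about 2.2 W/(m·K) for solid ice) are assumed typical values, not numbers taken from the answer.

```python
def heat_flow(k, area=1.0, dT=60.0, thickness=0.5):
    # steady one-dimensional conduction: Q = k * A * dT / d   (watts)
    return k * area * dT / thickness

print(heat_flow(k=0.2))   # compressed snow, ~0.2 W/(m K):  24 W per m^2 of wall
print(heat_flow(k=2.2))   # solid ice,       ~2.2 W/(m K): 264 W, roughly ten times more
```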
607,574 | Unfortunately I broke my specs today which I used in this question . But I observed that the edges are completely different then the entire part of the lens. The middle portion of the lens was completely transparent but the edges appeared opaque (and I can't see through the edges). This image shows the same in case of a shattered glass. The edges in the above picture are green and not transparent as other portions appear. So why are the edges not transparent (in both the case of specs and the shattered glass)? Edit : I would like to add that the edges of my specs were not green. They were just silvery opaque. I couldn't take a pic of it during asking the question but take a look at it now. | Because you're looking through more of glass I'd like to just add to the other answers with some diagrams. We have an intuition that light beams travel in straight lines, so we tend to assume that the beam paths looking through glass might be as follows: However, the actual paths of the beam due to refraction and total internal reflection look more like this: Note that the beams that enter the face of the glass aren't significantly deflected, and exit the glass pretty quickly. However beams that enter the edge of the glass spend a lot more distance within the glass. As the beam spends more time within the glass, it has more of a path to be affected by impurities. | {
"source": [
"https://physics.stackexchange.com/questions/607574",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/271783/"
]
} |
607,585 | How does the spectrum of F type stars differ from our sun? I have tried to find the information on the internet but I haven't found anything that helps me. I need the information for a novel I am writing. | Because you're looking through more of glass I'd like to just add to the other answers with some diagrams. We have an intuition that light beams travel in straight lines, so we tend to assume that the beam paths looking through glass might be as follows: However, the actual paths of the beam due to refraction and total internal reflection look more like this: Note that the beams that enter the face of the glass aren't significantly deflected, and exit the glass pretty quickly. However beams that enter the edge of the glass spend a lot more distance within the glass. As the beam spends more time within the glass, it has more of a path to be affected by impurities. | {
"source": [
"https://physics.stackexchange.com/questions/607585",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/285719/"
]
} |
607,682 | Not sure if this is where I should be posting something like this, but here goes. I wrote a small program today that maps 512 evenly-spaced points on the edge of a circle and then, iterating over each point, draws a line from the center to that point on the edge. Also, this image is drawn on a canvas that is 500x500 pixels in size. The result is the image below, which is where my question arises. What's curious about this image to me is that there are these little diamonds appearing at various points in the circle. Is there an explanation as to why these shapes are being produced? | That is a radial Moiré pattern. The Wikipedia article on Moiré patterns doesn't currently mention the radial version, but it is linked below for reference: https://en.wikipedia.org/wiki/Moir%C3%A9_pattern As of this writing, the website http://thomasshahan.com/Radial/ provides an example: That is a snip from a portion of it. It's not as good as yours, I suspect because it widens the lines the further they are from the center.
You weren't sure where to post this; I recognize it as one of the general Moiré patterns encountered in computer science/graphics. Anti-aliasing should help reduce it: https://en.wikipedia.org/wiki/Anti-aliasing It usually requires multiple pixels per line width with varying shading. An example of such code is at: https://www.codeproject.com/Articles/13360/Antialiasing-Wu-Algorithm A simple image from that site: Notice that the anti-aliased lines appear thickened. Another image from that last site zooms in on some lines: | {
"source": [
"https://physics.stackexchange.com/questions/607682",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/285775/"
]
} |
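For anyone who wants to reproduce the effect discussed above, here is a minimal sketch (in Python with Pillow, purely as an assumption - the original poster's language is unknown) that draws the same construction: 512 evenly spaced radial lines on a 500x500 canvas. Pillow's ImageDraw.line is not anti-aliased, so the pixel rounding produces the radial Moiré "diamonds"; rendering at a higher resolution and downscaling (a crude form of anti-aliasing) makes them fade.
import math
from PIL import Image, ImageDraw

N, SIZE = 512, 500                       # 512 lines on a 500x500 canvas, as in the question
img = Image.new("L", (SIZE, SIZE), 255)  # grayscale image, white background
draw = ImageDraw.Draw(img)
cx = cy = SIZE / 2
r = SIZE / 2 - 2
for k in range(N):
    a = 2 * math.pi * k / N
    # non-anti-aliased lines: endpoint/pixel rounding creates the interference-like diamonds
    draw.line((cx, cy, cx + r * math.cos(a), cy + r * math.sin(a)), fill=0)
img.save("radial_moire.png")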
608,092 | I often notice small patches of snow that remain on the ground in seemingly random locations, many days or even weeks after all other snow in an area has melted, and even when temperatures have been well above freezing for some time. What makes these particular patches of snow so resistant to melting and how do they "choose" to remain in one spot and not another seemingly identical spot nearby? More detail about my question: I'm not asking why snow takes a long time to melt in general (I've seen this ). I'm asking how it's possible that the conditions would be sufficient to cause almost all snow to melt, but yet a few random patches (the "survivors") are able to escape the fate of the other snow for so long. These are some possibilities that come to mind: Maybe the snow has been melting at a steady pace since the last snowfall and I'm just witnessing the end of that process (i.e. yesterday there was more snow on the ground, now there only are a few patches that I'm seeing, and tomorrow there will be none). But this doesn't match my admittedly nonscientific observation of the phenomenon. I've seen it happen that most of the snow will melt in a short time, perhaps a day or two after the last snowfall, except for a few patches that get "stuck." These patches remain in the same state (not melting further) over a much longer timespan. I've been startled by these lingering snow patches even when the weather has gotten warm enough that I'm no longer wearing my winter jacket. Maybe the lingering snow is in a shady spot. But no, it's not always in a shady spot. Maybe the lingering snow is in a wind shadow. But often I can't see any obstacles or other features on the ground that would create a wind shadow in that one spot. Maybe there's something underneath the snow that makes that portion of the ground especially cold. Do ground temperatures really differ so much on a foot to foot basis? Maybe there was a rapid melting event several days or weeks ago, but temperatures dropped before the melting finished. With subsequent freezes, the few lingering patches of snow became denser and more resistant to melting? So the remaining snow now has a different consistency and it won't melt even in the same conditions that melted the rest of the snow? [Added in response to comment from rob.] Maybe the lingering snow is from a large pile (for example, a pile made by a snow plow). Quite possible, but I've definitely seen this phenomenon in the woods and other places where there are no plows or other reasons for a disproportionately large pile to form. And while I'd expect that a large snow pile would last longer than a thin layer of snow, it seems counter-intuitive that a three or four foot pile could last for weeks in conditions that have been sufficient to melt all the other snow (not saying it's impossible, just that I find it counter-intuitive). These are a few examples spotted in the Boston area on Jan 15, 2021. They are not representative of all cases, they're just the images I have available now. Boston had some light snow on Jan 5. There's been sun and rain since then, temps have reached the low 40s, and most of the ground has been clear for days. The question about these snow patches: why there ? And why not a foot to the left, or a foot to the right? Added Jan 18, 2021: Here is an example of lingering snow patches at the Agassiz Rock site in Manchester-by-the-Sea, MA. I hiked for several hours there today and saw no snow anywhere until I suddenly encountered these few patches. 
They are on the north side of a hill, on a leafy surface, and likely have good protection from sun and wind. But there's plenty of other leafy ground on north side of the hill where snow would presumably experience the same favorable conditions. So it strikes me as odd that there aren't patches of snow scattered around the area; instead, everything has melted except these few survivors in the picture. Why not them too? The ground is not wet in a way that would suggest that all the other snow had just recently melted. One could speculate that these patches are remnants of a large pile, but there are no plows here; if a large pile formed naturally in this spot, one might expect that the same conditions that formed it would have formed at least a few other piles at the same time, with their own lingering patches, but there are no other lingering patches to be seen. | A hidden assumption is that all of your remnant piles start with the same amount of snow. However your first photo seems to be adjacent to, and parallel to, a roadway. You mention you’re in Boston, which has an army of plows which remove fresh snow from the roads and pile it nearby. Snow which has been piled, or otherwise compactified, has less air in the space between the ice crystals and therefore a larger heat capacity than an equal volume of loose, powdery snow. A common example of this is that a snowman will generally take much longer to melt than the snow on the ground around him. | {
"source": [
"https://physics.stackexchange.com/questions/608092",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/285947/"
]
} |
608,230 | I read that good absorbers are good emitters - hence a blackbody, which absorbs all kinds of radiation, also emits all kinds of radiation? I'm not able to get my head around this. What does it mean to absorb all kinds of radiation? Radiation of all frequencies? Or are we saying that $100\%$ of the radiation incident on a blackbody is absorbed? None is reflected? Well then, how is it a good emitter? Am I confusing good emitter and good reflector? I'd appreciate some clarifications. Thank you! P.S. I'm coming back to Physics SE after years. I'm majoring in mathematics, but I decided to take a quantum physics course for fun anyway. Hence, I'm back here for the spring! | Yeah, good emitter and good reflector are definitely not the same property. Maybe the best way to visualize it is that emission is just the time inverse of absorption, under the mapping $t\mapsto-t$. This also gives something of a hint of why this relationship might hold, although I only know the derivation for the case of thermal radiation. For thermal radiation, you bring two bodies of the same temperature into radiative contact, one of one material, one of the other. If either one emits more radiation than it absorbs, then it spontaneously cools and the other one spontaneously heats up, violating the second law of thermodynamics. It is a very simple argument. It may require sealing the two in some idealized setup of mirrors to make the proof work, but aside from that it appears to have immediate physical relevance. | {
"source": [
"https://physics.stackexchange.com/questions/608230",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/131341/"
]
} |
608,372 | If a jet engine is bolted to the equator near ground level and run with the exhaust pointing west, does the earth speed up, albeit imperceptibly? Or does the Earth's atmosphere absorb the energy of the exhaust, and transfer it back to the ground, canceling any effect? | It's the latter. Look at the system earth + engine + atmosphere. Conservation of angular momentum must hold for the whole system (assume no gases leave the atmosphere due to the engine, which is a fair assumption). | {
"source": [
"https://physics.stackexchange.com/questions/608372",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/286076/"
]
} |
608,403 | I know the intuitive explanation of the stress-energy tensor and I have seen equations for stress-energy according to a specific situation, but I have not seen a general mathematical definition. What is the mathematical definition of the stress-energy tensor presented in Einstein's Field Equations? I would prefer something that says $T_{\mu\nu} = \cdots$ | It's the latter. Look at the system earth + engine + atmosphere. Conservation of angular momentum must hold for the whole system (assume no gases leave the atmosphere due to the engine, which is a fair assumption). | {
"source": [
"https://physics.stackexchange.com/questions/608403",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/203271/"
]
} |
608,581 | I bought 5.6 g of uranium ore.
The measured gamma radiation is 1 µSv/h; we didn't have the instruments to measure alpha/beta radiation. EDIT:
The gamma radiation was measured at 1cm distance. I also updated the unit of measurement as I had only written 1µSv instead of 1µSv/h. | With a half-life of 4 billion years, uranium is only very weakly radioactive. In fact, since uranium is a heavy metal, its chemical toxicity is actually more of a danger than its radioactivity. If you touch it directly with your hands, you should wash your hands afterwards. You should not eat it. Apart from that, it is not dangerous. Regarding the legality: Most countries have an exemption limit for activity below which the permit-free handling is possible. For example in the EU, Council Directive 2013/59/Euratom sets the limit for uranium to $10^4$ Bq (including the activity of its daughter nuclides). The exemption limits correspond to the limit values of the International Atomic Energy Agency (IAEA). | {
"source": [
"https://physics.stackexchange.com/questions/608581",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/286187/"
]
} |
609,550 | We already have a magnetic core, why can't we use it to recharge the batteries? The only problems I see with it are potentially wiping magnetic data, but doesn't the electromagnet have to be revolving around the damageable device? | This is basically what happens in the alternator. The car's engine, which turns the wheels, also turns the alternator's rotor. The magnetic rotor is surrounded by coils of wire, and induces a current that charges the battery. It is important to note, though, that this does take energy from the engine: nothing is for free. | {
"source": [
"https://physics.stackexchange.com/questions/609550",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/286669/"
]
} |
609,676 | I am very new to quantum field theory, so forgive me if this question is a bit silly. The Casimir force is usually explained by the zero point energy of the field. You assume that the frequencies of the field are quantized between the two plates, perform some regularization, and out pops, for the electromagnetic field, $$F=-\frac{\pi^2\hbar c}{240a^4}A$$ where $a$ is the separation and $A$ is the area of the plates. However, what if we have multiple fields? For an ordinary scalar field, I believe the Casimir force for that only differs by a factor of $2$ (due to the polarizations of light), so we have $$F=-\frac{\pi^2\hbar c}{480a^4}A$$ In a world with both of these fields, I'd assume the total Casimir force would be their sum of each contribution. In the real world, we have a bunch more fields than just the electromagnetic one (including a scalar Higgs field)! I would assume that each of these would produce a Casimir force in the same manner as the scalar field and the electromagnetic field, and that the total Casimir force is their sum. However, we only measure the Casimir force due to the electromagnetic field. Why is this? Is there a flaw in my reasoning? | The answer by G.Smith is correct and concise. For whatever it's worth, I'll give a longer answer. Sometimes authors use terms like zero point energy or vacuum energy for marketing purposes, because it sounds exotic. But sometimes authors use those terms for a different reason: they're describing a shortcut for doing what would otherwise be a more difficult calculation. Calculations of the Casimir effect typically use a shortcut in which material plates (which would be made of some complicated arrangement of electrons and nuclei) are replaced with idealized boundary conditions on space itself. In that shortcut, the "force" between the boundaries of space is defined in terms of $dE/dx$ , where $E$ is the energy of the ground state (with the given boundary conditions) and $x$ is the distance between the boundaries. This is a standard shortcut for calculating the force between two nearly-static objects: calculate the lowest-energy configuration as a function of the distance between them, and then take the derivative of that lowest energy with respect to the distance. When we idealize the material plates as boundaries of space, the lowest-energy configuration is called the vacuum , hence the vacuum energy language. The important point is that this is only a shortcut for the calculation that we wish we could do, namely one that explicitly includes the molecules that make up material plates, with all of the complicated time-dependent interactions between those molecules. The only known long-range interactions are the electromagnetic interaction and the gravitational interaction, and gravity is extremely weak, so that leaves electromagnetism. What about all of the other quantum fields in the standard model(s)? Why don't they also contribute to the Casimir effect? Well, they would if we really were dealing with the force between two movable boundaries of space itself, because then the same boundary conditions would apply to all of the fields. But again, the boundaries-of-space thing is just an idealization of plates made of matter, so the only relevant fields are the ones that mediate macroscopic interactions between matter. Okay, but isn't the usual formula for the Casimir effect independent of the strength of the interaction? Not really. That's another artifact of the idealization. 
The paper https://arxiv.org/abs/hep-th/0503158 says it like this: The Casimir force (per unit area) between parallel plates... the standard result [which I called the shortcut], which appears to be independent of [the fine structure constant] $\alpha$ , corresponds to the $\alpha\to\infty$ limit. ... The Casimir force is simply the (relativistic, retarded) van der Waals force between the metal plates. For perspective, the electromagnetic Casimir effect typically refers to an attractive interaction between closely-spaced plates, and van der Waals force typically refers to an attractive interaction between neutral molecules, but they're basically the same thing: interactions between objects, mediated by the (quantum) electromagnetic field. The related post Van der Waals and Casimir forces emphasizes the same point. | {
"source": [
"https://physics.stackexchange.com/questions/609676",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/115161/"
]
} |
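As a quick numeric check of the parallel-plate formula quoted in the question above (Python; the 1 micrometre separation is just an illustrative choice):
import math
hbar = 1.054571817e-34    # J*s
c    = 2.99792458e8       # m/s
a    = 1e-6               # plate separation in metres (illustrative)
pressure = math.pi**2 * hbar * c / (240 * a**4)   # |F|/A from the formula in the question
print(f"Casimir pressure at a = 1 um: {pressure:.2e} Pa")
This gives about 1.3e-3 Pa, i.e. the attraction is tiny unless the plates are extremely close together.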
609,681 | I don't mean to be cheeky with the question, but it reflects the myriad of often seemingly conflicting answers I've seen around this. And that's not surprising of course given the dual nature of light and the multiple possible interpretations of what's actually happening at the quantum scale. As a quick summary of what I understand of the particle perspective, glass is clear because the photons that aren't reflected based on the angle of incidence, aren't absorbed either and so just go straight through. A similar explanation I've heard is that the question itself is flawed, as the real question should be why things are opaque since most of matter is just empty space, leaving one having to explain why all photons don't just go through all matter. And this of course can be explained by electron energy levels and photons carrying enough energy to lift an electron to a higher energy level, absorbing the photon, and later emitting it back. And then there are some quantum mechanical explanations for how the photon maintains its direction and frequency between absorption and emission. That said, here's where I get stuck: I know that light travels "more slowly" through denser materials like glass and water (while of course traveling at c between molecules), and this slowdown causes refraction; yet glass is clear because photons don't get absorbed by glass. I'm not getting how both these statements can be true: how can photons both travel through glass because of not being absorbed, and be refracted which ostensibly requires photon-molecule interaction (absorption/emission). Funny thing is, as I wrote this long question, I feel like I answered my own question. Is it basically that whereas the molecules in opaque materials generally convert photons to heat after absorption, those in transparent materials such as glass/water are unable to do so and so must re-emit them? | The answer by G.Smith is correct and concise. For whatever it's worth, I'll give a longer answer. Sometimes authors use terms like zero point energy or vacuum energy for marketing purposes, because it sounds exotic. But sometimes authors use those terms for a different reason: they're describing a shortcut for doing what would otherwise be a more difficult calculation. Calculations of the Casimir effect typically use a shortcut in which material plates (which would be made of some complicated arrangement of electrons and nuclei) are replaced with idealized boundary conditions on space itself. In that shortcut, the "force" between the boundaries of space is defined in terms of $dE/dx$ , where $E$ is the energy of the ground state (with the given boundary conditions) and $x$ is the distance between the boundaries. This is a standard shortcut for calculating the force between two nearly-static objects: calculate the lowest-energy configuration as a function of the distance between them, and then take the derivative of that lowest energy with respect to the distance. When we idealize the material plates as boundaries of space, the lowest-energy configuration is called the vacuum , hence the vacuum energy language. The important point is that this is only a shortcut for the calculation that we wish we could do, namely one that explicitly includes the molecules that make up material plates, with all of the complicated time-dependent interactions between those molecules. 
The only known long-range interactions are the electromagnetic interaction and the gravitational interaction, and gravity is extremely weak, so that leaves electromagnetism. What about all of the other quantum fields in the standard model(s)? Why don't they also contribute to the Casimir effect? Well, they would if we really were dealing with the force between two movable boundaries of space itself, because then the same boundary conditions would apply to all of the fields. But again, the boundaries-of-space thing is just an idealization of plates made of matter, so the only relevant fields are the ones that mediate macroscopic interactions between matter. Okay, but isn't the usual formula for the Casimir effect independent of the strength of the interaction? Not really. That's another artifact of the idealization. The paper https://arxiv.org/abs/hep-th/0503158 says it like this: The Casimir force (per unit area) between parallel plates... the standard result [which I called the shortcut], which appears to be independent of [the fine structure constant] $\alpha$ , corresponds to the $\alpha\to\infty$ limit. ... The Casimir force is simply the (relativistic, retarded) van der Waals force between the metal plates. For perspective, the electromagnetic Casimir effect typically refers to an attractive interaction between closely-spaced plates, and van der Waals force typically refers to an attractive interaction between neutral molecules, but they're basically the same thing: interactions between objects, mediated by the (quantum) electromagnetic field. The related post Van der Waals and Casimir forces emphasizes the same point. | {
"source": [
"https://physics.stackexchange.com/questions/609681",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/286742/"
]
} |
610,064 | In my atomic physics notes they say In general, filled sub-shells are spherically symmetric and set up, to a good approximation, a central field. However sources such as here say that, even for the case of a single electron in Hydrogen excited to the 2p sub-shell, the electron is really in a spherically symmetric superposition $\frac{1}{\sqrt3}[2p_x+2p_y+2p_z]$ (which I thought made sense since there should be no preferred direction in space). My question there now is, why is the central field approximation only an approximation if all atoms are really perfectly spherically symmetric, and why are filled/half-filled sub-shells 'especially spherically symmetric? | In general, atoms need not be spherically symmetric . The source you've given is flat-out wrong. The wavefunction it mentions, $\varphi=\frac{1}{\sqrt3}[2p_x+2p_y+2p_z]$ , is in no way spherically symmetric. This is easy to check: the wavefunction for the $2p_z$ orbital is $\psi_{2p_z}(\mathbf r)=\frac {1}{\sqrt {32\pi a_0^5}}\:z \:e^{-r/2a_{0}}$ (and similarly for $2p_x$ and $2p_y$ ), so the wavefunction of the combination is $$\varphi(\mathbf r)=\frac {1}{\sqrt {32\pi a_0^5}}\:\frac{x+y+z}{\sqrt 3} \:e^{-r/2a_{0}},$$ i.e., a $2p$ orbital oriented along the $(\hat{x}+\hat y+\hat z)/\sqrt3$ axis. This is an elementary fact and it can be verified at the level of an undergraduate text in quantum mechanics (and it was also obviously wrong in the 1960s). It is extremely alarming to see it published in an otherwise-reputable journal. On the other hand, there are some states of the hydrogen atom in the $2p$ shell which are spherically symmetric, if you allow for mixed states , i.e., a classical probabilistic mixture $\rho$ of hydrogen atoms prepared in the $2p_x$ , $2p_y$ and $2p_z$ states with equal probabilities. It is important to emphasize that it is essential that the mixture be incoherent (i.e. classical and probabilistic, as opposed to a quantum superposition) for the state to be spherically symmetric. As a general rule, if all you know is that you have "hydrogen in the $2p$ shell", then you do not have sufficient information to know whether it is in a spherically-symmetric or an anisotropic state. If that's all the information available, the initial presumption is to take a mixed state, but the next step is to look at how the state was prepared: The $2p$ shell can be prepared through isotropic processes, such as by excitation through collisions with a non-directional beam of electrons of the correct kinetic energy. In this case, the atom will be in a spherically-symmetric mixed state. On the other hand, it can also be prepared via anisotropic processes, such as photo-excitation with polarized light. In that case, the atom will be in an anisotropic state, and the direction of this anisotropy will be dictated by the process that produced it. It is extremely tempting to think (as discussed previously e.g. here , here and here , and links therein) that the spherical symmetry of the dynamics (of the nucleus-electron interactions) must imply spherical symmetry of the solutions, but this is obviously wrong $-$ to start with, it would apply equally well to the classical problem! The spherical symmetry implies that, for any anisotropic solution, there exist other, equivalent solutions with complementary anisotropies, but that's it. The hydrogen case is a bit special because the $2p$ shell is an excited state, and the ground state is symmetric. 
So, in that regard, it is valid to ask: what about the ground states of, say, atomic boron? If all you know is that you have atomic boron in gas phase in its ground state, then indeed you expect a spherically-symmetric mixed state, but this can still be polarized to align all the atoms into the same orientation. As a short quip: atoms can have nontrivial shapes, but the fact that we don't know which way those shapes are oriented does not make them spherically symmetric . So, given an atom (perhaps in a fixed excited state), what determines its shape? In short: its term symbol , which tells us its angular momentum characteristics, or, in other words, how it interacts with rotations. The only states with spherical symmetry are those with vanishing total angular momentum, $J=0$ . If this is not the case, then there will be two or more states that are physically distinct and which can be related to each other by a rotation. It's important to note that this anisotropy could be in the spin state, such as with the $1s$ ground state of hydrogen. If you want to distinguish the states with isotropic vs anisotropic charge distributions, then you need to look at the total orbital angular momentum, $L$ . The charge distribution will be spherically symmetric if and only if $L=0$ . A good comprehensive source for term symbols of excited states is the Levels section of the NIST ASD . | {
"source": [
"https://physics.stackexchange.com/questions/610064",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/260493/"
]
} |
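A small numeric check of the claim in the answer above (a sketch in Python; atomic units with a0 = 1 and the dropped overall normalization are assumptions made purely for convenience): the equal superposition of 2p_x, 2p_y and 2p_z is just a 2p orbital pointing along (1,1,1)/sqrt(3), so its probability density is maximal along that axis and vanishes in the plane perpendicular to it - clearly not spherically symmetric.
import numpy as np

def phi(r_vec, a0=1.0):
    # (2p_x + 2p_y + 2p_z)/sqrt(3)  is proportional to  (x + y + z)/sqrt(3) * exp(-r/(2*a0))
    x, y, z = r_vec
    r = np.sqrt(x**2 + y**2 + z**2)
    return (x + y + z) / np.sqrt(3) * np.exp(-r / (2 * a0))

r = 2.0                                                # same distance from the nucleus in both cases
along = np.array([1.0, 1.0, 1.0]) / np.sqrt(3) * r     # point on the (1,1,1) axis
perp  = np.array([1.0, -1.0, 0.0]) / np.sqrt(2) * r    # point perpendicular to that axis
print(abs(phi(along))**2)   # nonzero
print(abs(phi(perp))**2)    # exactly zero, so the density depends on direction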
610,073 | I was learning basic quantum mechanics using Feynman Lectures on Physics. In Chapter 8 Feynman has described ammonia inversion due to tunnelling. Feynman has first used the base states as the two possible orientations of ammonia molecule in space. Due to tunnelling effects he has also explained how the opposite configurations of the molecule affect each other. Then he has constructed another set of base states with definite energies $E-A$ and $E+A$ using the superpositions of the base states. It is these base states that I am confused with. Even though I understood the mathematics what do these two states represent physically? How did the two base states interfere to give two other states with different energy levels? Is there an intuitive explanation or a better way to think about it? These are the chapters I'm referring to: Chapter 8 , Chapter 9 I realized that I have a problem in understanding energy splitting in two state systems in general. So I changed the title. | In general, atoms need not be spherically symmetric . The source you've given is flat-out wrong. The wavefunction it mentions, $\varphi=\frac{1}{\sqrt3}[2p_x+2p_y+2p_z]$ , is in no way spherically symmetric. This is easy to check: the wavefunction for the $2p_z$ orbital is $\psi_{2p_z}(\mathbf r)=\frac {1}{\sqrt {32\pi a_0^5}}\:z \:e^{-r/2a_{0}}$ (and similarly for $2p_x$ and $2p_y$ ), so the wavefunction of the combination is $$\varphi(\mathbf r)=\frac {1}{\sqrt {32\pi a_0^5}}\:\frac{x+y+z}{\sqrt 3} \:e^{-r/2a_{0}},$$ i.e., a $2p$ orbital oriented along the $(\hat{x}+\hat y+\hat z)/\sqrt3$ axis. This is an elementary fact and it can be verified at the level of an undergraduate text in quantum mechanics (and it was also obviously wrong in the 1960s). It is extremely alarming to see it published in an otherwise-reputable journal. On the other hand, there are some states of the hydrogen atom in the $2p$ shell which are spherically symmetric, if you allow for mixed states , i.e., a classical probabilistic mixture $\rho$ of hydrogen atoms prepared in the $2p_x$ , $2p_y$ and $2p_z$ states with equal probabilities. It is important to emphasize that it is essential that the mixture be incoherent (i.e. classical and probabilistic, as opposed to a quantum superposition) for the state to be spherically symmetric. As a general rule, if all you know is that you have "hydrogen in the $2p$ shell", then you do not have sufficient information to know whether it is in a spherically-symmetric or an anisotropic state. If that's all the information available, the initial presumption is to take a mixed state, but the next step is to look at how the state was prepared: The $2p$ shell can be prepared through isotropic processes, such as by excitation through collisions with a non-directional beam of electrons of the correct kinetic energy. In this case, the atom will be in a spherically-symmetric mixed state. On the other hand, it can also be prepared via anisotropic processes, such as photo-excitation with polarized light. In that case, the atom will be in an anisotropic state, and the direction of this anisotropy will be dictated by the process that produced it. It is extremely tempting to think (as discussed previously e.g. here , here and here , and links therein) that the spherical symmetry of the dynamics (of the nucleus-electron interactions) must imply spherical symmetry of the solutions, but this is obviously wrong $-$ to start with, it would apply equally well to the classical problem! 
The spherical symmetry implies that, for any anisotropic solution, there exist other, equivalent solutions with complementary anisotropies, but that's it. The hydrogen case is a bit special because the $2p$ shell is an excited state, and the ground state is symmetric. So, in that regard, it is valid to ask: what about the ground states of, say, atomic boron? If all you know is that you have atomic boron in gas phase in its ground state, then indeed you expect a spherically-symmetric mixed state, but this can still be polarized to align all the atoms into the same orientation. As a short quip: atoms can have nontrivial shapes, but the fact that we don't know which way those shapes are oriented does not make them spherically symmetric . So, given an atom (perhaps in a fixed excited state), what determines its shape? In short: its term symbol , which tells us its angular momentum characteristics, or, in other words, how it interacts with rotations. The only states with spherical symmetry are those with vanishing total angular momentum, $J=0$ . If this is not the case, then there will be two or more states that are physically distinct and which can be related to each other by a rotation. It's important to note that this anisotropy could be in the spin state, such as with the $1s$ ground state of hydrogen. If you want to distinguish the states with isotropic vs anisotropic charge distributions, then you need to look at the total orbital angular momentum, $L$ . The charge distribution will be spherically symmetric if and only if $L=0$ . A good comprehensive source for term symbols of excited states is the Levels section of the NIST ASD . | {
"source": [
"https://physics.stackexchange.com/questions/610073",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
610,259 | We tend to rub soap after applying it to the skin. I found it interesting that the mere act of sliding our hands on the wet skin surface produces millions of air bubbles in the liquid, that later becomes foam. I wonder how exactly we manage to do that? [ Image source ] This is the kind of foam I am talking about (foam/lather/froth... I find these words confusing). Talking of foam, I have an unexplainable feeling that the effectiveness of a wash/bath is directly proportional to the amount of foam produced. Coming to think about it, it seems like the opposite should be true. Soap without foam has a lesser amount of soap solution protruding out as bubbles; most of it is in contact with the skin surface, where actual cleaning takes place. I suspect that this is a misconception that got imprinted to our minds because soap does not clean or foam well in hard water (but that has an entirely different reason). So to sum up, How exactly does rubbing soap on the skin produce foam? Is there any plausible reason why a soap with foam can do better cleaning than the same soap without any foam? Simple and straightforward answers are welcome. | The soap bubbles are a side-effect of the cleaning process. It is the mixing of air with the soapy water, and the film stability of the resulting bubble walls, that generates and maintains the bubbles. (Note that soap bubble liquid contains glycerine, which is a powerful film stabilizer that makes the bubbles last as long as possible). Note also that it is possible to design molecules called surfactants that behave like soap but do not create a foam of bubbles when agitated (these are used in dishwashing detergent mixtures) and furthermore that it is also possible to design molecules which when added to foamy soaps inhibit the creation of bubbles. These are called defoaming agents and are added to soap or detergent solutions which have to be pumped mechanically through filters and pipes, so the pump impeller does not spin out of control and lose prime when it ingests a slug of foam. Defoaming agents are commonly used in things like rug shampooing machines and self-powered floor scrubbers. It is also possible to design detergents which foam up very strongly and persistently when mixed with air, by adding chemicals called film formers to them, as in the glycerine example above. Such detergents are used when processing things like crushed mineral ores, where the foam phase is used to carry off specific constituents of the crushed ore. | {
"source": [
"https://physics.stackexchange.com/questions/610259",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/155230/"
]
} |
610,538 | Please don't explain it mathematically. I have been searching for the reason for a long time. I have watched Walter Lewin's video giving an example of resonance, but I didn't get the reason behind the intuition. I am confused if the reason exists. | For intuition I find it easier to start with a regular pendulum. Imagine a steel ball on a string hanging down. If you give it a push, it will start to swing back and forth. Now if you, while the pendulum is swinging, give it another push in the same direction, it will matter where it is when you push it. If it is travelling in the opposite direction you are pushing it in (say, you push from left to right, then this would mean pushing when the pendulum is travelling right to left), you will slow the pendulum down. However, if you push it, when it is already travelling in that direction (so, push it from left to right when it is already travelling left to right), it will speed up. Now say you push it periodically, that is, in regular time intervals. If you just choose a random interval to push the pendulum, you will sometimes push it to make it go faster, and sometimes to slow it down. Depending on the exact frequency you push it at, this will mostly cancel out. If you, however, push it always when it is going left to right, it will speed up every single time you push it. But in order to push it at the same point in its period, the frequency you are pushing it in must match the frequency the pendulum is going in anyway. Conveniently, this frequency is independent of the amplitude (i.e. how high the pendulum is swinging) and only depends on properties of the system itself. 1 This is called the natural frequency or resonant frequency of the system (there are nuances between these terms that don't matter in this context). So pushing at that frequency will lead to resonance and (without damping) the pendulum will swing higher and higher (its amplitude will become arbitrarily large). The same is true for larger and more complicated systems, though they will ususally have multiple resonance frequencies. 1. This is strictly only true for harmonic oscillators, i.e. systems where the restoring force is proportional to the displacement; for a hanging pendulum this is only true for small angles, but for the sake of simplicity we'll ignore that in the context of this answer. | {
"source": [
"https://physics.stackexchange.com/questions/610538",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/283650/"
]
} |
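To see the build-up described above numerically, here is a minimal sketch (Python/NumPy; all parameter values are arbitrary illustrative choices) that integrates an undamped oscillator pushed sinusoidally at its natural frequency and at an off-resonance frequency, and compares the largest displacements reached.
import numpy as np

def max_amplitude(omega_drive, omega0=1.0, F0=0.1, T=200.0, dt=0.001):
    # semi-implicit Euler integration of  x'' = -omega0^2 * x + F0*cos(omega_drive * t)
    x, v, xmax = 0.0, 0.0, 0.0
    for i in range(int(T / dt)):
        t = i * dt
        v += (-omega0**2 * x + F0 * np.cos(omega_drive * t)) * dt
        x += v * dt
        xmax = max(xmax, abs(x))
    return xmax

print("driven at resonance  :", max_amplitude(1.0))   # grows roughly linearly with time
print("driven off resonance :", max_amplitude(1.3))   # stays small and bounded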
610,717 | In this video potassium is dropped into the bucket and it reacts with the water present in the bucket causing an explosion within the bucket itself. I don't see how an internal explosion makes the bucket launch up vertically in the air. If anything it should press harder on the ground. | Relevant XKCD . Unusually for an exploding-potassium demonstration, this one uses a potassium sample attached to a weight. This causes the potassium to sink to the bottom of the bucket, where it generates and ignites hydrogen gas. This creates a temporary bubble launching the water upwards and the bucket downwards. The bucket's motion gets blocked by the table, but the water is free to rise. The bubble quickly cools and collapses, and air pressure pushes the bucket upwards towards the rising water. | {
"source": [
"https://physics.stackexchange.com/questions/610717",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/277480/"
]
} |
611,300 | I'm trying to find a "classical" thought experiment to form a better intuition for special relativity. Bats are creatures that observe their world via echolocation. Their observations are bound by the speed of sound in air (~343 meters per second). Assume three bats, A, B and C, move relative to each other. Each bat observes the other's position and speed via echolocation. Bat C measures bat A's and B's position and velocity, and wants to determine bat B's position and velocity, as observed by bat A. Does bat C need to use the Lorentz transformation with $c$ equal to the speed of sound? | Nope, the " $c$ " in the Lorentz Transformations doesn't just apply to the speed of propagation of information in a medium, though I understand why you might think of it that way. Instead, the Lorentz Transformations are fundamental to the structure of space and time itself. In particular, they imply that the speed of light $c$ is frame independent : all inertial observers will agree on this quantity. Such an argument cannot hold for the speed of sound in a medium, which is not something fundamental, but which depends on properties of the medium itself like its density and pressure. Indeed, objects can (and frequently do) travel faster than the speed of sound in a medium. Furthermore, it should be clear that if you consider just the bats moving in air, they don't even satisfy the basic criterion for relativity: for example, the Doppler Effect for sound clearly distinguishes cases where the "source" is really moving and where the "observer" is really moving (with respect to the ambient medium). The big result of special relativity is that there is one -- and only one -- frame independent speed, and that is $c$ , the speed at which light happens to travel. It makes no physical sense to speak of a speed greater than this value. The speed limit of 343 m/s in your problem, however, is a biological limit. If the bats in your problem were replaced by fighter jets, they would certainly be able to move faster than the speed of sound. So I see no reason to draw a parallel. In other words, you don't need to teach the bats Special Relativity: good old-fashioned Newtonian mechanics should work fine. | {
"source": [
"https://physics.stackexchange.com/questions/611300",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/12150/"
]
} |
611,408 | Why does carbon dioxide not sink in air if other dense gases do? We evidently do not suffocate by carbon dioxide sinking to the bottom of the atmosphere and displacing oxygen and yet there are gases that do sink. This is commonly a problem in coal mines. Lower layers can fill up with gas that is unbreathable. Here is a demonstration showing a 'boat' floating on sulphur hexafluoride. Question Given a mixture of two mutually non-reactive gases, what property determines whether the denser gas sinks to the bottom? | Gases are all miscible . If initially separate and adjacent, they do not mix instantly, but once mixed (a process that occurs by molecular diffusion and is accelerated by macroscopic stirring or convection, just as for liquids), they do not spontaneously unmix. During the time before substantial mixing occurs, gases behave somewhat like you may be picturing for immiscible liquids, e.g., water settling below oil. If a heavy gas is introduced into an environment in a pure or somewhat pure form (from some kind of reservoir), it will initially sink and displace lighter gases. This is a real danger with suddenly introduced carbon dioxide, but not with the carbon dioxide that has been in the atmosphere a long time. In the mixed gas phase, the composition varies with height due to gravitational potential energy of different molecules, but all components are present at all heights, and on human scales the variation is small. In equilibrium, the vertical distance over which the density of a given gas changes substantially is termed the scale height , and is ~8 km for nitrogen, ~7 km for oxygen, and ~5 km for carbon dioxide (in Earth conditions). Even for sulfur hexafluoride it is ~1.5 km. As you go down, all components gradually become denser, the heaviest ones the fastest. Moreover, as noted in a comment, the outdoor atmosphere is not in equilibrium but has a lot of turbulent motion, so even these gradual variations in composition that might be seen in controlled conditions are washed out in natural conditions. | {
"source": [
"https://physics.stackexchange.com/questions/611408",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/85871/"
]
} |
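The scale heights quoted in the answer above follow from H = k_B*T/(m*g); here is a short check (Python; the 288 K temperature and Earth's surface gravity are assumptions not stated in the answer):
k_B = 1.380649e-23     # J/K
T   = 288.0            # K, assumed near-surface temperature
g   = 9.81             # m/s^2
u   = 1.66053907e-27   # kg, atomic mass unit
for name, mass_u in [("N2", 28.0), ("O2", 32.0), ("CO2", 44.0), ("SF6", 146.1)]:
    H = k_B * T / (mass_u * u * g)     # isothermal equilibrium scale height
    print(f"{name:4s}: {H/1000:4.1f} km")
This prints roughly 8.7, 7.6, 5.5 and 1.7 km, close to the ~8, ~7, ~5 and ~1.5 km quoted above.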
612,075 | If gravity was inversely proportional to distance, will the dynamics of celestial bodies be much different from our world? Will celestial bodies fall into each other? | Why not test it? The following Mathematica code numerically integrates the equations of motion for $F\propto 1/r$:
G = 1; M = 1;
T = 20;
r0 = 1;
dv = .1;
sols = NDSolve[{x''[t] == -((G M)/(x[t]^2 + y[t]^2)) x[t],
    y''[t] == -((G M)/(x[t]^2 + y[t]^2)) y[t], x[0] == r0, y[0] == 0,
    x'[0] == 0, y'[0] == Sqrt[G M] + dv}, {x, y}, {t, 0, T}];
ParametricPlot[
  Evaluate[{{Cos[t], Sin[t]}, {x[t], y[t]}} /. sols], {t, 0, T},
  AspectRatio -> 1]
Here $T$ is the integration time, $r0$ is the starting radius and $dv$ is the deviation from a circular trajectory. The circular trajectory has been calculated with the help of @joseph h’s answer. This code gives the following plots for different $dv$: The blue circle shows the reference circular trajectory. We notice two important things. Firstly, the orbits precess. They generally don't end up at their starting point. Non-precessing orbits are a special characteristic of Keplerian orbits. Secondly, the orbits are still bound. They don't spiral inward. To make sense of this we can look at the potential $V(r)=GM\log(r)$. If you plot this and compare it to $V(r)=-\frac{GM}{r}$ they actually have very similar shapes. But, because $\log(r)$ doesn't have an asymptote, there are no longer escape trajectories. Every orbit will eventually return, even though it will take very long to return for large velocities. Edit: for reference I will also include these plots for a $F\propto 1/r^2$ force to get a sense of how large $dv$ is. At $dv=+0.5$ we already have an escape trajectory. | {
"source": [
"https://physics.stackexchange.com/questions/612075",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/33960/"
]
} |
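For completeness, the circular-orbit speed used as the initial condition in the code above (the Sqrt[G M] term, attributed there to another answer) follows in one line from equating the 1/r force to the centripetal force:
$$\frac{mv^2}{r}=\frac{GMm}{r}\quad\Longrightarrow\quad v=\sqrt{GM},$$
which, unusually, is independent of the orbital radius $r$.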
612,968 | A friend of mine told me that the temperature rises when snow falls, and that this is because the condensation of water into snowflakes reduces entropy, so the temperature of the air rises to compensate. Is this explanation correct? | Yes. A simpler way to look at this is that freezing, as well as resublimation (turning a gas directly into a solid), emits heat. It may seem strange, but consider this: you need to put in heat to make water turn from solid into liquid, so the inverse process should transfer the heat in the opposite direction. And that's what it does. In cold air, water droplets and water vapor can turn into ice crystals, but in doing so they need to expel some heat into the air. So cold air and cold water vapor turn into slightly warmer air and ice. It can also be said in terms of entropy, as entropy and heat transfer are correlated. Turning water droplets and water vapor into ice decreases their entropy, and that requires an outward heat transfer into the surrounding air. | {
"source": [
"https://physics.stackexchange.com/questions/612968",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/288237/"
]
} |
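For a rough sense of scale (the numbers below are standard textbook values, not taken from the answer above): freezing releases water's latent heat of fusion, about 334 kJ per kilogram, while air's specific heat is roughly 1 kJ/(kg*K), so each kilogram of water that freezes into snow can warm a few hundred kilograms of air by about one degree.
L_fusion = 334e3    # J/kg, latent heat of fusion of water
c_air    = 1005.0   # J/(kg*K), specific heat of air at constant pressure
m_ice    = 1.0      # kg of water freezing (illustrative)
print(f"{m_ice * L_fusion / c_air:.0f} kg of air warmed by 1 K")   # ~330 kg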
613,321 | Sorry for brevity, but what is the exact physics explanation of why smaller quantities placed inside a microwave oven heat up faster than when you place a larger quantity of a similar material inside? | The magnetron injects microwave radiation at a certain rate. Ignoring losses, that radiation bounces around the walls until it's absorbed by the food. If you put two burritos in there instead of one, on average there will be fewer bounces before absorption. That means that with two burritos, the average intensity of the radiation impinging on any point is less: some of the photons, if you want to think of it that way, that would have been hitting that spot aren't there, because they've already been absorbed. This is quite different from a regular oven: as long as there is enough power to keep the air temperature at the desired setting, it doesn't much matter how many burritos you put in there, as long as there's air space between them. They are heated by conduction from the air, which is unaffected by neighbors, and by blackbody radiation from the surroundings, which is only affected a bit. | {
"source": [
"https://physics.stackexchange.com/questions/613321",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/8144/"
]
} |
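The same point can be made with a crude fixed-power-budget estimate (Python; the absorbed power, mass, specific heat and temperature rise are all assumed illustrative values): if the food absorbs essentially all of the magnetron's output, that power is shared between the items, so the heating time scales roughly linearly with the number of items.
P_absorbed = 800.0               # W, assumed microwave power absorbed by the food in total
m, c, dT   = 0.3, 3000.0, 60.0   # kg per burrito, assumed J/(kg*K), desired temperature rise in K
for n in (1, 2):
    t = n * m * c * dT / P_absorbed   # time to heat n burritos sharing the same power
    print(f"{n} burrito(s): about {t:.0f} s")
With these assumed numbers one burrito takes about 68 s and two take about 135 s.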
613,381 | we can see most of the science fiction stories, movies, and television serials use the concept of wormholes . Do wormholes really exist in this universe? I'm curious about it. And also, what are the types of wormholes? | General relativity asserts that certain configurations of spacetime, called wormholes, will satisfy the equations that govern spacetime and this is why they are part of mainstream physics. However, there are two qualifying observations: If at some time there are no wormholes anywhere, then it seems that none can form by any physical process of a type which could in principle be described by classical as opposed to quantum physics. I say "seems" here because this has not been proved in complete generality but it is certainly true in all configurations that one can reasonably expect to come about by natural processes. The mere existence of a wormhole does not necessarily imply that travel through the wormhole is possible, because of stability considerations (and some kinds of wormhole are not timelike anyway). The stability problem is of two kinds: the impact on the traveler of other stuff such as light passing into the wormhole, and the fact that their very presence (their mass) may cause the spacetime to distort in such a way as to seal the wormhole. At the moment it is hard, therefore, to assert with any confidence that there really could be a traversable wormhole somewhere. However one should add that when we bring in quantum physics then the range of possibilities becomes a lot richer. It seems now that one ought not to discount the possibility of wormholes, at least at microscopic scales, which is another way of saying that the structure of spacetime may be very complicated at the Planck scale. However this is not the kind of thing that ordinary people mean when they think about wormholes in science fiction scenarios. | {
"source": [
"https://physics.stackexchange.com/questions/613381",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/288476/"
]
} |
613,393 | Consider a solid sphere rotating with angular velocity $\omega$ is put on a rough surface so how will friction react and why does it convert the translational energy to rotational energy. | General relativity asserts that certain configurations of spacetime, called wormholes, will satisfy the equations that govern spacetime and this is why they are part of mainstream physics. However, there are two qualifying observations: If at some time there are no wormholes anywhere, then it seems that none can form by any physical process of a type which could in principle be described by classical as opposed to quantum physics. I say "seems" here because this has not been proved in complete generality but it is certainly true in all configurations that one can reasonably expect to come about by natural processes. The mere existence of a wormhole does not necessarily imply that travel through the wormhole is possible, because of stability considerations (and some kinds of wormhole are not timelike anyway). The stability problem is of two kinds: the impact on the traveler of other stuff such as light passing into the wormhole, and the fact that their very presence (their mass) may cause the spacetime to distort in such a way as to seal the wormhole. At the moment it is hard, therefore, to assert with any confidence that there really could be a traversable wormhole somewhere. However one should add that when we bring in quantum physics then the range of possibilities becomes a lot richer. It seems now that one ought not to discount the possibility of wormholes, at least at microscopic scales, which is another way of saying that the structure of spacetime may be very complicated at the Planck scale. However this is not the kind of thing that ordinary people mean when they think about wormholes in science fiction scenarios. | {
"source": [
"https://physics.stackexchange.com/questions/613393",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/254192/"
]
} |
613,395 | For the simple quantum harmonic oscillator we can solve Schrodinger's equation and derive the analytic form of the eigenstates of e.g. a non relativistic electron in a harmonic potential. We may then go on to define ladder operators which enable us to move between eigenstates of the potential. One interpretation of these ladder operators is that they 'create' or 'destroy' a photon of energy hw. One can go on to define the Number operator which enables us to determine how many times the oscillator has been excited or equivalently how many 'photons' there are in the system. My question is: is it correct to say the wavefunction we derived for the different energy levels still just describes the single non-relativistic electron and doesn't describe the newly created/destroyed photons. If this is correct then where does the interpretation of 'creation' of particles even come from because it seems like we are not describing the produced photons with any wavefunction we are merely postulating that they have been created? What is a good intuitive explanation for the interpretation in terms of photon creation? | General relativity asserts that certain configurations of spacetime, called wormholes, will satisfy the equations that govern spacetime and this is why they are part of mainstream physics. However, there are two qualifying observations: If at some time there are no wormholes anywhere, then it seems that none can form by any physical process of a type which could in principle be described by classical as opposed to quantum physics. I say "seems" here because this has not been proved in complete generality but it is certainly true in all configurations that one can reasonably expect to come about by natural processes. The mere existence of a wormhole does not necessarily imply that travel through the wormhole is possible, because of stability considerations (and some kinds of wormhole are not timelike anyway). The stability problem is of two kinds: the impact on the traveler of other stuff such as light passing into the wormhole, and the fact that their very presence (their mass) may cause the spacetime to distort in such a way as to seal the wormhole. At the moment it is hard, therefore, to assert with any confidence that there really could be a traversable wormhole somewhere. However one should add that when we bring in quantum physics then the range of possibilities becomes a lot richer. It seems now that one ought not to discount the possibility of wormholes, at least at microscopic scales, which is another way of saying that the structure of spacetime may be very complicated at the Planck scale. However this is not the kind of thing that ordinary people mean when they think about wormholes in science fiction scenarios. | {
"source": [
"https://physics.stackexchange.com/questions/613395",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/288496/"
]
} |
613,722 | Here are a few commonly heard sentences that will make my question clear: Statements #1
“The Michelson Morley experiment once and for all did away with the ether or the idea that light needs a medium in which to travel”
“Light can travel through empty space” Statements #2
“Light travels though the electromagnetic field”
“The photon field is only one of the fields for each particle in the standard model.”
“There must be dark matter in the universe for our current model of gravity to make sense.”
“Space is not empty. A quantum foam fills it” You can see the confusion that arises from the conflicting statements which are commonly heard in physics. The fact that light travels via an electromagnetic field is in clear contradiction to the statement that light travels through empty space. I suppose we could try to join empty space and the EM field into one thing by saying light is the field. Frankly, that sounds like doublespeak. What am I missing?
Light is a disturbance in the EM field. There is no empty space. Light travels through a field. Why do some physicists say light can travel through empty space? | I guess what a lot of people mean when they say that light travels through empty space is that it doesn't require a physical medium (matter) to travel through - like sound waves do. The fact of the matter is that light does travel in the electromagnetic field, but the electromagnetic field is just a mathematical tool - as are all the other fields in QFT. You can't touch the EM field, just like you can't touch the fields that give rise to physical matter. You might say that we can touch matter and so those fields are more real . However, the sensation of touching those fields is just a result of electromagnetic interactions between the particles in ourselves and those in the object we are touching, and the reason for that being true is no more real than the reason that light can move from A to B. To answer your specific points, the Michelson-Morley experiment proved that light travels at a constant speed for all observers. It proved that there was no physically-measurable medium in which light travels through (like sound). The electromagnetic field is how we describe light moving, however, it is not a physical object - it is just a mathematical tool and so only exists on paper. I think the confusion in terminology comes from people trying to attach physical meaning to fields. Maybe you can attach meaning to them, and one day we will be able to look at them in a different way to how we can now. However, it must be noted that the fields introduced in QFT are mathematical tools, and that is all they are. We should not imagine them as filling space like a fluid, they simply make useful and accurate predictions. In summary, they tell us how everything works - they will never tell us why . | {
"source": [
"https://physics.stackexchange.com/questions/613722",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/129433/"
]
} |
613,728 | Suppose we have two particles with spin $1/2$ . They have $S^{tot}=1$ and $S^{tot}_y=0$ . How can we write the state of the system in terms of the eigenstates of $S_{1z},S_{2z}$ ? My attempt: I would like to divide the problem in two: firstly we write the state of the system in terms of the states of $S_{1y},S_{2y}$ and then we re-write the state in terms of $S_{1z},S_{2z}$ : We know that it's always true that: $$S^{tot}_y=S_{1y}+S_{2y}$$ this is a fundamental rule of the spin's algebra, right? So: $$0=S_{1y}+S_{2y}$$ but of course these last two can only be $+1/2$ or $-1/2$ , so the possible states are: $$|+1/2,-1/2\rangle \ \ \ or \ \ \ |-1/2,+1/2\rangle$$ I then state that the more general solution is the linear combination of these two possibilities, and so (let me call $+1/2$ simply $+$ and $-1/2$ simply $-$ ): $$|S^{tot}=1,S^{tot}_y=0\rangle =a|+,-\rangle+b|-,+\rangle$$ For me this should be the solution. But: in my lecture notes it's stated that the true solution is: $$|S^{tot}=1,S^{tot}_y=0\rangle =\frac{1}{\sqrt{2}}\left[|+,-\rangle+|-,+\rangle\right] \tag{1}$$ I don't get why this must be true; remember that the particles are not identical. This is my first problem. But suppose that we do not have this problem, let's assume (1) to be correct. Then how can I translate (1) from $y$ to $z$ ? What is the link? How does the link work? This is my second problem. I would like to understand what I am getting right and what I am getting wrong. |
"source": [
"https://physics.stackexchange.com/questions/613728",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/265836/"
]
} |
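As an aside on the construction described in the question just above, the change from the $y$ description to the $S_{1z},S_{2z}$ product basis can be checked numerically. The following is only an illustrative sketch (not taken from the thread), assuming the standard Pauli-matrix representation with $\hbar=1$ and the kron basis ordering $|+z,+z\rangle, |+z,-z\rangle, |-z,+z\rangle, |-z,-z\rangle$; it picks out the state with $S^{tot}=1$ and $S^{tot}_y=0$ and prints its components in the $z$ product basis.

```python
import numpy as np

# Pauli matrices; hbar = 1, so each single-particle spin operator is sigma/2
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Total spin components on the 4-dimensional product space (particle 1 ⊗ particle 2)
Sx = 0.5 * (np.kron(sx, I2) + np.kron(I2, sx))
Sy = 0.5 * (np.kron(sy, I2) + np.kron(I2, sy))
Sz = 0.5 * (np.kron(sz, I2) + np.kron(I2, sz))
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz        # eigenvalues s(s+1): 0 (singlet), 2 (triplet)

# Triplet subspace: eigenvectors of S^2 with eigenvalue 2
vals, vecs = np.linalg.eigh(S2)
triplet = vecs[:, np.isclose(vals, 2.0)]            # 4x3 array of triplet basis vectors

# Restrict S_y^tot to the triplet and take its zero-eigenvalue state
w, u = np.linalg.eigh(triplet.conj().T @ Sy @ triplet)
state = triplet @ u[:, np.argmin(np.abs(w))]        # back in the z product basis
state *= np.exp(-1j * np.angle(state[np.argmax(np.abs(state))]))  # fix arbitrary global phase

labels = ["|+z,+z>", "|+z,-z>", "|-z,+z>", "|-z,-z>"]
for label, amp in zip(labels, np.round(state, 3)):
    print(label, amp)
# Up to an overall phase, only |+z,+z> and |-z,-z> survive, each with weight 1/sqrt(2).
```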
613,798 | When I google "visible light spectrum", I get essentially the same image. However, in each of them the "width" of any given color is different. What does the "true" visible light spectrum look like, then? It can't be that each and every image search result is correct. I could not find any information about this on the web, so I turn to the experts. | Most computer monitors aren't capable of displaying any spectral color. Some RGB monitors could display at most three of them: some red wavelength, some green and some blue. This is because the gamut of human vision is not triangular; instead it's curved and resembles a horseshoe: In the image above, the black curve represents the spectral colors, with the wavelengths in nm denoted by green numbers. The colored triangle is the sRGB gamut, the standard gamut that most "usual" computer monitors are supposed to have. As you can see, the black curve doesn't even touch the triangle, which means that sRGB monitors can't display any of the corresponding colors. This doesn't mean that you can't see any good representation of the visible spectrum. You can e.g. display what the spectrum would look like if you took a gray card and projected the spectrum onto it, thus getting a desaturated version. CIE 1931 color space, via its color matching functions, lets one find, for each spectral color, corresponding color coordinates $XYZ$, which can then be converted to the coordinates of other color spaces like the above-mentioned sRGB. The inability of sRGB monitors to display spectral colors manifests in the fact that, after you convert $XYZ$ coordinates to sRGB's $RGB$ ones, you'll get some negative components. Of course, a negative amount of light is not something a display device can emit, so it needs some workaround to display these colors (or something close to them). Displaying the spectrum as projected on a gray card is one of these workarounds. Here's how such a desaturated spectrum (with a scale) would look: To get this (or any other, actually) image to display "correctly", ideally you need to calibrate your monitor. Some consumer devices have better color rendering out of the box; others have quite poor color rendering and show visibly wrong colors. If you don't calibrate, then just be aware of this nuance. Also, if you happen to be a tetrachromat (virtually never happens in males, rare
in females), then the above image will look incorrect to you in any case. How can you see an actual spectrum, without the workarounds discussed above? For this you should not use a computer monitor; instead you need a spectroscope. These can be found quite cheaply in online stores like AliExpress, some using a diffraction grating, others a prism. The ones with a grating will give you an almost linear expansion in wavelength, while the ones with a prism will have a wider blue-violet part and a thinner orange-red part of the spectrum. | {
"source": [
"https://physics.stackexchange.com/questions/613798",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/279353/"
]
} |
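The negative-component effect described in the answer above is easy to demonstrate. Below is a rough sketch, not from the original post: it uses the commonly quoted XYZ-to-linear-sRGB (D65) matrix and approximate CIE 1931 color-matching-function values at 500 nm, both of which should be treated as illustrative assumptions rather than exact tabulated data. The red channel comes out negative, i.e. the 500 nm spectral color lies outside the sRGB gamut.

```python
import numpy as np

# Commonly quoted linear-sRGB (D65) conversion matrix from CIE XYZ (assumed values)
XYZ_TO_RGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

# Approximate CIE 1931 color matching function values at 500 nm (x̄, ȳ, z̄)
xyz_500nm = np.array([0.0049, 0.3230, 0.2720])

rgb_linear = XYZ_TO_RGB @ xyz_500nm
print(rgb_linear)   # R is negative: this spectral color cannot be shown on an sRGB display
```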
614,671 | I was cooking something in the microwave and opened the door early to check on it, and the microwave didn't stop. I didn't realize this for a few seconds, and when I did I shut the microwave off, but I'm concerned about what I could have been exposed to for the few seconds that it was open. As far as I can remember, I still heard the humming and the light was on, so I assume it was still running. I didn't think it was even possible for it to keep running with the door open. The microwave is a fairly new model as well. Is there radiation or anything else that could have caused some damage in those few seconds? | The very first thing you should do is stop using your oven and have it checked out by an authorized repair service. If in fact the oven was operating with the door open, there was a failure of the door interlocks to turn the oven off and a failure of the backup system intended to permanently shut the oven off in the event the interlocks failed, which, although extremely unlikely, is nonetheless possible. The next thing you need to know is that microwave "radiation" is not the same thing as x-radiation, gamma rays, or other forms of nuclear radiation, which are referred to as "ionizing radiation" and are associated with injury such as cancer if excessive. Microwave radiation is non-ionizing radiation which, to date, has only been conclusively associated with thermal injury, i.e., injury to tissue due to heating (that's how it cooks food). Since you were only potentially exposed for a few seconds, unless you are experiencing discomfort, you are probably all right, since the temperature of your tissue was probably not raised all that much. In the early days of commercial microwave ovens, before multiple safety features were required, there were incidents where workers received repeated exposures and suffered some loss of muscle function due to heating damage. But those were the cumulative effects of many exposures. A few seconds of exposure is unlikely to cause that effect on you. In any case, I urge you to stop using your oven now. Hope this helps. | {
"source": [
"https://physics.stackexchange.com/questions/614671",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/289062/"
]
} |
614,800 | Suppose I were to burn a pile of wood weighing a hundred kilograms, and I had a big sack hanging over the burning pile. In this sack I would catch all the smoke that came from the burning pile; once all the wood turned to ashes, I would put the ashes in the sack with the smoke. Would the sack weigh a hundred kilograms or would it weigh less? Is it the case that all the mass of the burning pile is converted into ash and smoke and therefore weighs the same as the unburnt pile of wood? Or is it the case that, because of $E=mc^2$, the burning pile emits energy and the mass of the pile is converted into the heat that a burning pile of wood gives off, which therefore takes away most of the mass? | You would have much more mass than 100 kg after the wood was burned. As it turns out, wood is made largely of cellulose and lignin; cellulose is a polymer of glucose units, so a good approximation of what you would get is given by the chemical reaction of burning glucose: $$\rm C_6H_{12}O_6 + 6O_2 \to 6CO_2 + 6 H_2O$$ This means that 6 oxygen molecules combine with one glucose molecule when it is burned. The molar mass of a glucose molecule is 180 and the combined molar mass of the six oxygen molecules is 192. This means that when you burn 180 kg of glucose, 192 kg of oxygen take part in the chemical reaction, producing an equal total mass (372 kg) of carbon dioxide and water vapor. At these ratios, when you burn the 100 kg of wood, you would collect about 207 kg of carbon dioxide and water vapor. | {
"source": [
"https://physics.stackexchange.com/questions/614800",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/288844/"
]
} |
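To make the arithmetic in the answer above explicit, here is a small sketch of the mass bookkeeping. It simply reproduces the glucose approximation used in the answer, with the same rounded molar masses.

```python
# Glucose combustion: C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O
M_glucose = 6 * 12 + 12 * 1 + 6 * 16      # 180 g/mol
M_oxygen = 6 * (2 * 16)                   # 192 g/mol for the six O2 molecules

wood_mass = 100.0                                      # kg, treated as glucose units
oxygen_mass = wood_mass * M_oxygen / M_glucose         # O2 drawn from the surrounding air
products_mass = wood_mass + oxygen_mass                # CO2 + H2O collected in the sack

print(f"oxygen used: {oxygen_mass:.0f} kg")            # ~107 kg
print(f"collected products: {products_mass:.0f} kg")   # ~207 kg
```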