How to make the Moon spiral into Earth? I recently watched a video of what would happen if the Moon spiraled into Earth. But the video is pretty sketchy on the physics of just what would have to happen for that to occur. At first I thought I understood (just slow the Moon down enough), but my rudimentary orbital mechanics isn't enough to convince me that's sufficient (e.g., wouldn't the Moon just settle into a lower orbit?). What forces would have to be applied to the Moon to get it to spiral into the Earth, at what times? What basic physics are involved? (And why should I have already known this if I could simply remember my freshman Physics?)
Another way: maintain a large electric charge on the moon so that it radiates away low-frequency radio waves and loses energy that way. This method is impractical; but so are all methods. This method is even more impractical than most, but I add it for the interest of the physics. The idea is that any accelerating charge radiates electromagnetic radiation, and the energy for the radiation has to come from somewhere; in this case it comes from kinetic energy of the moving charge. For a charge on an otherwise circular orbit around an attractive centre the net result is the inward spiraling motion mentioned in the question. The frequency of the radiation is equal to the orbital frequency, so very low here. The power is given by Larmor's formula: $$ P = \frac{q^2 a^2}{6 \pi \epsilon_0 c^3} $$ where $a$ is the acceleration, i.e. the centripetal acceleration of the moon in this case, which is currently about $2.725 \times 10^{-3}\,$m/s$^2$. Unfortunately, for any reasonable value of the electric charge $q$ this produces a power which is hopelessly too low. For example with $q$ of order $10^{10}$ Coulombs you could get a power of order $0.2$ watts and this would suffice to remove the moon's kinetic energy ($3.65 \times 10^{28}\,$joules) in about $10^{21}$ years. But it is expected that the Sun would engulf the Earth long before that. Any attempt to make the charge larger would involve electric fields strong enough to rip electrons off the rocks of the moon, so would presumably not work.
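A quick back-of-the-envelope check of the numbers quoted above, using the values given in the answer ($q = 10^{10}$ C, the Moon's centripetal acceleration, and its orbital kinetic energy); a minimal Python sketch:

```python
# Order-of-magnitude check of the Larmor-radiation estimate above.
import math

eps0 = 8.854e-12     # vacuum permittivity, F/m
c = 2.998e8          # speed of light, m/s
q = 1e10             # assumed charge on the Moon, C (value from the answer)
a = 2.725e-3         # Moon's centripetal acceleration, m/s^2
E_kin = 3.65e28      # Moon's orbital kinetic energy, J (value from the answer)

P = q**2 * a**2 / (6 * math.pi * eps0 * c**3)   # Larmor power, W
t_drain = E_kin / P                             # naive time to radiate away E_kin, s
print(f"radiated power   ~ {P:.2f} W")
print(f"time to drain KE ~ {t_drain / 3.15e7:.1e} years")
```

This reproduces the answer's order of magnitude: a fraction of a watt, and of order $10^{21}$–$10^{22}$ years to remove the orbital kinetic energy.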
{ "language": "en", "url": "https://physics.stackexchange.com/questions/694535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 8, "answer_id": 3 }
If force depends only on mass and acceleration, how come faster objects deal more damage? As we know from Newton's law, we have that $\mathbf{F} = m\cdot\mathbf{a}$. This means that as long as the mass stays constant, force depends solely on acceleration. But how does this agree with what we can observe in our day-to-day lives? If I drop a coin on someone's head with my hand standing just a couple centimeters above their hair, they won't be bothered too much; but if I drop the same coin from the rooftop of a skyscraper, then it could cause very serious damage or even split their head open. And yet acceleration is pretty much constant near the surface of the earth, right? And even if we don't consider it to be constant, it definitely has the same value at $\sim1.7\text{ m}$ from the ground (where it hits the person's head) regardless of whether the motion of the coin started from $\sim1.72\text{ m}$ or from $\sim1 \text{ km}$. So what gives? Am I somehow missing something about the true meaning of Newton's law?
The equation for your example (Newton's second law) is: $$m\,\ddot h=m\,g$$ thus the acceleration $\ddot h$ is the same, but the solution is $$h=\frac{g\,t^2}{2}\quad \Rightarrow\quad v=g\,t$$ Eliminating the time $t$, you obtain $$v(h)=\sqrt{2\,g\,h}$$ thus the impact momentum $m\,v$ depends on the height $h$ from which you drop the coin.
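To put numbers on this (a rough sketch; air resistance is ignored, which is a poor approximation for the skyscraper drop):

```python
# Impact speed v = sqrt(2 g h) for the two drop heights in the question.
import math

g = 9.81  # m/s^2
for h in (0.02, 1000.0):   # 2 cm and ~1 km
    v = math.sqrt(2 * g * h)
    print(f"h = {h:7.2f} m  ->  impact speed v = {v:6.1f} m/s")
```

The coin's mass drops out of $v(h)$, but the momentum delivered on impact, $m\,v$, grows with the drop height, which is why the two cases feel so different.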
{ "language": "en", "url": "https://physics.stackexchange.com/questions/694736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 14, "answer_id": 12 }
Time constant versus half-life — when to use which? In some systems we use half-life (like in radioactivity) which gives us time until a quantity changes by 50% — while in other instances (like in RC circuits) we use time constants. In both cases the rate of change of a variable over time is proportional to the instantaneous value of variable. What is a simple intuitive way to know the difference between the kind of systems where half-life is useful, versus systems where time constants are more meaningful? (Does it have anything to do with the shape of the curve representing the change in value over time, for example?)
It is just a matter of taste whether you prefer to write an exponential decay with the time constant $\tau$ and powers of $e$ $$N(t)=N_0\ e^{-t/\tau} \tag{1}$$ or with the half-life $t_{1/2}$ and powers of $2$ $$N(t)=N_0\ 2^{-t/t_{1/2}}. \tag{2}$$ Both ways are equivalent and you can switch between them by using $$t_{1/2}=\tau \ln(2).$$ (1) appears more natural from a mathematical point of view, because it directly appears as the solution of the differential equation $$\frac{dN}{dt}=-\frac{N}{\tau}.$$ And (2) is easier to grasp even for a mathematical layman, who doesn't know the meaning of $e$.
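A small numerical check that the two forms agree once $t_{1/2}=\tau\ln 2$:

```python
# Numerical check that the two forms of exponential decay agree
# when t_half = tau * ln(2).
import numpy as np

tau = 5.0                      # arbitrary time constant
t_half = tau * np.log(2)       # corresponding half-life
N0 = 1000.0
t = np.linspace(0, 4 * tau, 9)

N_tau = N0 * np.exp(-t / tau)        # form (1)
N_half = N0 * 2.0 ** (-t / t_half)   # form (2)
print(np.allclose(N_tau, N_half))    # True
```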
{ "language": "en", "url": "https://physics.stackexchange.com/questions/694850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 3 }
How to prove time dilatation from the Lorentz transform? How to prove time dilatation from the Lorentz transform formula: $$ t' = \gamma\left(t-\frac{Ux}{c^2}\right) $$ (U: the velocity of the referential R' relative to R) So far I've found this formula : $$ \Delta t' = \gamma\left(\Delta t-\frac{U\Delta x}{c^2}\right) $$ but I don't know how to handle the $ \Delta x $ from here. I have seen in the literature that $ \Delta t' = \frac{\Delta t}{\gamma} $ but I clearly don't know how to infer this from the Lorentz Transform. T.I.A.
The starting point is to be clear on what time dilat(at)ion means. Here it is... The time interval between two events as found in an inertial frame of reference where the events occur in different places is greater than the time interval ($T$, say) in the inertial frame where they occur in the same place. We assume that frames of reference are equipped with synchronised clocks everywhere, so that the time of an event can be registered at the place in the frame where it occurs. With the Lorentz transform equations that you've quoted, it's easiest to use the undashed (unprimed) frame as the one in which the events occur in the same place and at time $T$ apart, and to make substitutions accordingly, remembering that $\Delta x$ is the spatial separation of the events. The dilated value in the dashed frame of the time between the events follows almost immediately – remembering that $\gamma > 1$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/695031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What is the acceleration of a ramp on a table when a body slides on it? I found an Olympiad problem: Find the acceleration of a ramp on a table when a body slides on it. Assume there is no friction between the body and the ramp, and between the ramp and the table. I found the final solution to this problem but I do not understand it: (i) What is $m \vec{a}_1$, and why is $m \vec{a}$ parallel to the table in the free-body diagram? (ii) How do they come up with the equation in the solution?
TL;DR The free-body diagram should not include the resultant forces, at least not in the way to make you think those are the forces acting on the body in question. The following vectors should be removed from diagrams in your question: (i) $m \vec{a}$ and $m \vec{a}_1$ for the body, and (ii) $M \vec{a}$ for the ramp. Although it does not make any difference for the final solution, the free-body diagram should also include the normal force between the ramp and the table. Finally, the horizontal acceleration of the ramp is negative if the positive direction is to the right, which seems to be the case in your solution. Detailed solution of the problem Let the coordinate system be defined as follows: * *$\hat{\imath}$ is horizontal axis; positive direction points to the right *$\hat{\jmath}$ is vertical axis; positive direction points upwards Write equations of motion in vector form for the two bodies separately: $$m \vec{a} = \vec{w} + \vec{n} \qquad \text{and} \qquad M \vec{A} = \vec{W} + \vec{N} - \vec{n}$$ where * *$m$, $\vec{a}$ and $\vec{w}$ are mass, acceleration and weight of the sliding body, respectively, *$M$, $\vec{A}$ and $\vec{W}$ are mass, acceleration and weight of the ramp, respectively, *$\vec{n}$ is normal force between the body and the ramp, and *$\vec{N}$ is normal force between the ramp and the table. The equations of motion for the ramp are $$M A_x = -n \sin\alpha \qquad \text{and} \qquad M A_y = -Mg + N - n \cos\alpha$$ where $A_y = 0$ since the ramp does not move vertically, and $n$ and $N$ are magnitudes of normal force vectors $\vec{n}$ and $\vec{N}$, respectively. The equations of motion for the sliding body are $$m a_x = n \sin\alpha \qquad \text{and} \qquad m a_y = -mg + n \cos\alpha$$ From these equations it follows $$\boxed{a_x = -\frac{M}{m} A_x} \qquad \text{and} \qquad \boxed{m a_y = -m g - A_x \frac{M}{\tan\alpha}} \tag 1$$ With $\Delta y = -\Delta x \tan\alpha$, where $\Delta x$ and $\Delta y$ are horizontal and vertical displacement of the sliding body relative to the ramp, it follows $$\Delta\ddot{y} = -\Delta\ddot{x} \tan\alpha \qquad \text{and} \qquad a_x = \Delta\ddot{x} + A_x$$ where $a_y = \Delta\ddot{y}$ since the ramp does not move vertically. From this it follows $$\boxed{\tan\alpha = \frac{a_y}{A_x - a_x}} \tag 2$$ From identities in Eq. (1) and Eq. (2) it follows $$A_x \bigl(m + M \bigr) \tan\alpha = -mg - A_x \frac{M}{\tan\alpha}$$ and this finally leads to the acceleration at which the ramp slides on the table: $$\boxed{A_x = \frac{-mg}{(m+M)\tan\alpha + M/\tan\alpha}}$$ After some basic trigonometry, the above expression can be written as $$A_x = \frac{-mg \sin\alpha \cos\alpha}{M + m \sin^2\alpha}$$ which equals your solution. It is now trivial to find $a_x$ and $a_y$ acceleration components for the sliding body $$a_x = \frac{M g \sin\alpha \cos\alpha}{M + m \sin^2\alpha} \qquad \text{and} \qquad a_y = \frac{-(m+M)g \sin^2 \alpha}{M + m \sin^2\alpha}$$ The resultant acceleration for the sliding body is $$\vec{a} = a_x \hat{\imath} + a_y \hat{\jmath} = \frac{(m + M) g \sin\alpha}{M + m \sin^2 \alpha} \Bigl( \frac{M}{m + M} \cos\alpha \hat{\imath} - \sin\alpha \hat{\jmath} \Bigr)$$ However, this does not equal $\vec{a}_1$ from the solution (free-body diagram) in your question, which is defined relative to the ramp $$\vec{a}_1 = \Delta \ddot{x} \hat{\imath} + \Delta \ddot{y} \hat{\jmath} = \frac{(m + M) g \sin\alpha}{M + m \sin^2 \alpha} \bigl( \cos\alpha \hat{\imath} - \sin\alpha \hat{\jmath} \bigr)$$
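A quick numerical consistency check of the boxed results, with hypothetical sample values for $m$, $M$ and $\alpha$ (a minimal sketch, not part of the original solution):

```python
# Check the boxed results: horizontal momentum conservation and the constraint (2).
import numpy as np

m, M = 2.0, 5.0                  # sample masses, kg (made up for illustration)
alpha = np.radians(30.0)         # ramp angle
g = 9.81

Ax = -m * g * np.sin(alpha) * np.cos(alpha) / (M + m * np.sin(alpha)**2)
ax = M * g * np.sin(alpha) * np.cos(alpha) / (M + m * np.sin(alpha)**2)
ay = -(m + M) * g * np.sin(alpha)**2 / (M + m * np.sin(alpha)**2)

# No external horizontal force on the system, so m*ax + M*Ax = 0.
print(np.isclose(m * ax + M * Ax, 0.0))
# Constraint from Eq. (2): tan(alpha) = ay / (Ax - ax).
print(np.isclose(np.tan(alpha), ay / (Ax - ax)))
```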
{ "language": "en", "url": "https://physics.stackexchange.com/questions/695249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Subscript $Q$ in $U(1)_Q$ In most quantum field theory books, you read something like QED is a QFT from Abelian $U(1)_Q$ that describes the electromagnetic interaction ... But what does the subscript $Q$ stand for in the $U(1)_Q$ group?
It specifies that one is referring specifically to the symmetry associated with electromagnetism ($Q$ stands for electric charge, but some texts also write $U(1)_{\text{EM}}$, for example). This is meant to distinguish from other possible $U(1)$ symmetries present in the theory or in related theories that correspond to other physical aspects. For example, in the Standard Model of Particle Physics, one has the gauge group $SU(3)_{C} \times SU(2)_{L} \times U(1)_{Y}$. Notice there is a $U(1)$ factor in there. One of the interactions described by this theory is electromagnetism. However, the generator of $U(1)_Q$ (and hence the boson associated with it, which is the photon) is not the generator of $U(1)_Y$. In fact, due to a process of spontaneous symmetry breaking one ends up with a $U(1)$ symmetry whose generator is a linear combination of one generator of $SU(2)_L$ and one generator of $U(1)_Y$. In short, sometimes there are other interesting $U(1)$ transformations in your theory, with different interpretations. To be clear about which transformation we mean, it is common to add a sub-index to the gauge group. This is the same reason $SU(3)_{C} \times SU(2)_{L} \times U(1)_{Y}$ has all these indices ($C$ means color, since it is associated with QCD; $L$ means left, since the symmetry only transforms left-handed particles; $Y$ means (weak) hypercharge, which is another sort of $U(1)$ transformation relevant in the Standard Model). For an extra example, the neutrinos have weak hypercharge -1 (this relates to $U(1)_Y$), but they have zero electric charge (this relates to $U(1)_Q$). This is not a problem, since these are two different symmetries, both associated with a $U(1)$ group.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/695345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why should a clock be "accurate"? Having read that atomic clocks are more accurate than mechanical clocks as they lose a second only in millions of years, I wonder why it is necessary for a reference clock to worry about this, if the definition of the second itself is a function of the number of ticks the clock makes. Why don't we just use a single simple mechanical clock somewhere with a wound up spring that makes it tick, and whenever it makes a tick, treat it as a second having elapsed? (Assuming this clock was broadcasting its time via internet ntp servers to everyone in the world)
why it is necessary for a reference clock to worry about this, if the definition of the second itself is a function of the number of ticks the clock makes. The concern is that somebody else (say a scientist in France or China or Botswana) needs to be able to build a clock that measures seconds at the same rate mine does. If we both have atomic clocks, we can keep our clocks synchronized to within microseconds per year. If we have mechanical clocks they might be different from each other by a second (or anyway some milliseconds) by the end of a year. If we're doing very exact measurements (comparing the arrival times of gamma rays from astronomical events at different parts of the Earth, or just using a GPS navigation system) then a few milliseconds (or even microseconds) can make a difference in our results.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/695789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 19, "answer_id": 0 }
Does AdS/CFT help solve the singularity of a black hole? How does thinking of a black hole as encoding its information in its surface help solve what happens inside it, more specifically geodesic incompleteness? Doesn't it tell us that if we can see how light (or matter) behaves around the horizon, we can predict what it does inside the black hole?
It does not solve what happens inside. The information on the horizon, by the entanglement of virtual particles with the particles that passed, keeps track of the state of the particles inside. The momenta of the infalling particles are entangled with these virtual ones on the horizon. So in a sense the inside physics can be seen on the horizon. The entanglement can last because time has virtually stopped with respect to faraway observers. The Hawking radiation emerging from these virtual particles is seen to radiate over a long time far away. But inside the hole it takes very little time. Maybe the singularity isn't even formed before evaporation.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/695830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Schwarzschild radius and parametrizing the path Consider the metric $$ds^2=-\left(1-\frac{2m}{r}\right)dt^2+\left(1-\frac{2m}{r}\right)^{-1}dr^2 + r^2d\theta^2 + r^2\sin^2\theta d\phi^2.$$ Suppose a particle starts at the initial radius $R$ and then radially infalls in a Schwarzschild manifold (with $m$ very large). My text then states: It can be shown that the parameter defining the particle's trajectory (assuming $m$ is large enough) is expressed as $r(\lambda)=C(1+\cos\lambda)$ and $\tau(\lambda)=C\left(\frac{R}{m}\right)^{\frac{1}{2}}(\lambda + \sin\lambda)$ where $C$ is a constant and $\tau$ is the proper time along the geodesic. My questions are: How in the world does one show the equations above? How does one find $C$? And how does one show that the statement above is true? If an object radially infalls in the vicinity of a spherically symmetric object, then we can take $\dot{\theta}=\dot{\phi}=0$. Thus, assuming that the mass is large, we can get the equation: $$g_{\mu \nu}\frac{dx^{\mu}}{d\tau}\frac{dx^{\nu}}{d\tau}=-1$$ which implies $$g_{tt}\left(\frac{dt}{d\tau}\right)^2+g_{rr}\left(\frac{dr}{d\tau}\right)^2=-1.$$ However, I am unable to do anything beyond the above and plugging in the components of the metric tensor. Please can someone help?
Start with your equation, which reduces to: $$-1 = -\left(1 - \frac{2M}{r}\right)\left(\frac{dt}{d\tau}\right)^{2} + \frac{1}{1-\frac{2M}{r}}\left(\frac{dr}{d\tau}\right)^{2}$$ Now, you can leverage the fact that $\partial_t$ is a Killing vector to show that $E = \left(1-\frac{2M}{r}\right)\frac{d t}{d \tau}$ is a constant of the motion (easiest way to prove this: use the fact that the arc length of a geodesic is an extremum of the motion, and the fact that the arc length of a path is an integral of the line element to treat the geodesic as the same sort of maximization problem that the Lagrangian is). This makes the above reducible to: $$\left(\frac{dr}{d\tau}\right)^2 = E^{2}-\left(1-\frac{2M}{r}\right)$$ It is a bit of algebra, and you'll need to work out the relationship between $E$, $R$, and $C$, but you can work out that the parametric equations in your expression are a solution to this equation, knowing that $\frac{dr}{d\tau} = \frac{dr}{d\lambda}/\frac{d\tau}{d\lambda}$ by the chain rule. As far as deriving the parametric equations, I have generally only seen "one makes an inspired guess as to the form of $r(\lambda)$, uses that to replace the values for $r$ and $\frac{dr}{d \lambda}$ in the above equation, and then solves the remaining differential equation in $\tau$ and $\lambda$ for $\tau$", so most textbooks skip that and just say "this equation has this parametric solution." As for the constants, you need to set up initial conditions like "at $\tau =0$, ${\dot r} = 0$ and $r = R$", and you're going to need constants to enforce that.
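A numerical sanity check that a cycloid of this form really solves the radial equation, in geometric units $G=c=1$ and with the common textbook normalization $C=R/2$, $\tau=\frac{R}{2}\sqrt{R/(2M)}\,(\lambda+\sin\lambda)$ (the constants in your text may be normalized slightly differently). For a drop from rest at $r=R$ one has $E^2 = 1-2M/R$, so the equation to check is $(dr/d\tau)^2 = 2M/r - 2M/R$:

```python
# Check that the cycloid parametrization solves (dr/dtau)^2 = 2M/r - 2M/R,
# i.e. E^2 = 1 - 2M/R for a drop from rest at r = R.  Geometric units G = c = 1;
# normalization assumed: C = R/2, tau = (R/2) sqrt(R/(2M)) (lam + sin lam).
import numpy as np

M, R = 1.0, 20.0
lam = np.linspace(0.0, 2.5, 200)        # stay away from lam = pi, where r -> 0

r = 0.5 * R * (1 + np.cos(lam))
dr_dlam = -0.5 * R * np.sin(lam)
dtau_dlam = 0.5 * R * np.sqrt(R / (2 * M)) * (1 + np.cos(lam))
dr_dtau = dr_dlam / dtau_dlam           # chain rule, as in the answer

lhs = dr_dtau**2
rhs = 2 * M / r - 2 * M / R
print(np.allclose(lhs, rhs))            # True
```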
{ "language": "en", "url": "https://physics.stackexchange.com/questions/696019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Lorentz Four-force in General relativity? In special relativity, it's normal to define Lorentz four-force density as $$f_\mu = F_{\mu\nu}\nabla _\lambda F^{\nu \lambda},$$ having Maxwell EM tensor $F_{\mu\nu}$. Can we do it in General relativity? Does "force" even have a meaning?
It should be valid by applying the equivalence principle (minimal coupling): comma goes to semicolon, i.e. $\partial_\mu \rightarrow \nabla_\mu$. Meaning every regular derivative becomes a covariant derivative, so that $\nabla$ should be used with the corresponding Christoffel symbols.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/696321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is there any exact law in physics? Even for maybe the most fascinating computation in physics, about the anomalous magnetic moment of the electron, theoretical physics gives: $$a = 0.001 159 652 164 ± 0.000 000 000 108$$ which is not "exactly" true: $$a_{exp} = 0.001 159 652 188 4 ± 0.000 000 000 004 3$$ so even the laws of quantum field theory are not exact. (In the second comment below, Dvij D.C. said that by this example there is no violation, and he is right: the experimental interval is a subset of the theoretical interval. But my question is still there, because there are violations in other examples.) In the first page of "The Feynman Lectures on Physics" Feynman says: In fact, everything we know is only some kind of approximation, because we know that we do not know all the laws as yet. and in the next page Finally, and most interesting, philosophically we are completely wrong with the approximate law. But what about some laws such as the conservation laws of mass-energy, angular momentum, electric charge,... and the sameness of gravitational and inertial mass and so on? Are there any experiments which show even an extremely small violation of these laws, as we see in the $a$-coefficient? If these are exact, how do you interpret what Feynman says? Could you give a list of exact laws in physics?
Any theory or law in physics is only as good as its experimental validation. Every experimental measurement has a non-zero experimental error. Therefore no theory or law in physics can be validated exactly. The speed of light in a vacuum may vary in its $20$th digit. The law of conservation of energy may only be correct to one part in $10^{50}$. The second law of thermodynamics may be violated one time in every $10^{100}$. All physics can aspire to is to set down laws and principles that are correct up to the limits of experimental error - and to devise ever more ingenious experiments to reduce the size of this experimental error.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/696615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Are one-dimensional tensors of arbitrary rank just scalars? Consider a tensor of arbitrary rank (2 for this case) $A_{ij}$, and dimension one. Granted there are two indices to specify a component, but since each index can only take one value, there is only one component in this entire tensor: $A_{11}$. So, are all one-dimensional tensors scalars? Further, the transformation under a coordinate transform for this case: $$(A')^{11}={\left (\frac{\partial x'}{\partial x}\right )}^2A^{11}$$ suggests that since in general $(A')^{11}$ is not equal to $A^{11}$, it is not a scalar. So what exactly is this non-scalar one-component object?
Tensors of rank $k$ over a vector space $V$ form the space $\otimes^k V$. When the vector space is 1d, then we may as well take it to be the ground field $\mathbb{K}$. But $\otimes^k \mathbb{K} \simeq \mathbb{K}$. A tensor is an element of the left side, so equivalently it is also an element of the right side and so it is just a scalar. So, yes, it's true. Alternatively: Fix a 1d connected manifold without boundary $M$. Now, there are only two such manifolds: the circle and the infinite line. They are both orientated and flat. That they are orientated means that they have a volume form (this is a nowhere vanishing top form) and that they are flat means that they are isometric to the standard circle or Euclidean line. So we may as well choose this metric. Since the manifold $M$ is 1d, the forms of rank 1 are exactly the top forms. There are no higher rank forms, they vanish by antisymmetry. Now choosing a top form, say: $\omega \in \Omega^1M$ This is not a scalar form, however because of the flatness of 1d manifolds, there is a standard metric and we can use the Hodge star to convert to a scalar field: $*\omega \in \Omega^0 M \simeq C^{\infty}M$ In brief: $\Omega^1 M \simeq \Omega^0 M \simeq C^{\infty} M$. This resolves the paradox of a 1d charge density that is not a scalar field as referred by @QMechanic in another post. That is not the full picture because it is also correct that it is canonically a scalar field. However, differential forms are not the only tensors on a 1d manifold, they are only the antisymmetric covariant tensors. However, the argument in the first paragraph shows that they are also isomorphic to scalar fields.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/696747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Are black holes spinning balls of quark-gluon plasma? I had this idea a few days ago that the Higgs event might have been a naked singularity, i.e. the colliding protons (very briefly) fall into a state of infinite density and release two gamma-ray photons as decay products. One thing led to another, and I was led to extrapolate that perhaps atomic nuclei can be seen as something akin to quark-gluon plasmas; that is, we tend to think of them as bundles of protons and neutrons, but how often do we really observe nuclei directly (hydrogen nuclei don't count)? Wouldn't quantum mechanics imply that all the 'protons' and 'neutrons' are sort of smeared into one another? And, if so, would that not therefore be a quark-gluon plasma? Wouldn't these rigid categories of 'proton' and 'neutron' have somewhat limited applicability in the nuclear setting? Building on that, I thought perhaps it's possible to thereby imagine a black hole as a sort of giant nucleus, and that the difference between neutron stars and black holes is that one passes the Chandrasekhar limit, forcing this lattice of neutrons and electrons to form around the QGP, whereas in the black hole setting everything collapses into QGP and it forms an event horizon. Does this seem likely?
The answer is probably "not for long". When a star collapses the components get squeezed together and the temperature increases, plausibly turning into a quark-gluon plasma... but this does not stop the collapse. Very quickly (from the perspective of an observer falling with the matter) it reaches the singularity and stops being anything we know anything about. There seems to be an implicit assumption in the question that the plasma can resist the collapse. This is not true, due to Buchdahl's theorem: you cannot have a hydrostatic equilibrium for a radius below $(9/8)R_S$ with finite central pressure.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/697216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why can’t quantum randomness be understood as epistemic? I often hear people say that quantum randomness is “true randomness”, but I don’t really understand it. Please bear with my question. Before the development of quantum physics, randomness was understood as being “epistemic”. That is, things appear random because we could not (or have not yet) taken a measurement. This is also how probability theory was conceptualized by Kolmogorov. My understanding is that quantum physics can also be described using standard measure-theoretic probability theory, or, in other words, a theory with merely “epistemic” randomness. This leads to my question/confusion: in what sense is quantum randomness non-epistemic, given it can be described by standard probability theory? Is there any property of quantum randomness that shows it cannot be epistemic?
There is, in general, no joint probability distribution for the outcomes of quantum measurements, which means that QM, at least as it is usually formulated, is incompatible with Kolmogorov's framework.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/697733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Why is a relativistic calculation needed unless $pc$ is much smaller than the rest energy of a particle? After introducing the de Broglie wavelength equation, my textbook gives a rather simple example where it asks to find the kinetic energy of a proton whose de Broglie wavelength is 1 fm. In the solution to this problem, it states that "A relativistic calculation is needed unless $pc$ for the proton is much smaller than the proton rest energy." Could someone please explain why this is the necessary condition? I'm not sure what the quantity '$pc$' represents or means here. I know that for massless particles like photons, the total energy $E$ is equal to $pc$. I'm not sure what it means for particles having rest mass like protons.
In relativity, the energy $E$ of a particle is related to its mass $m$ and momentum $p$ by \begin{equation} E = \sqrt{m^2c^4 + p^2 c^2 } \end{equation} Now let's think about the non-relativistic limit, $c\rightarrow \infty$. To do this, we will expand the square root \begin{equation} E = m c^2 \sqrt{1 + \frac{p^2}{m^2 c^2}} = mc^2 + \frac{p^2}{2m} + \cdots \end{equation} where the $\cdots$ refer to terms that vanish in the limit $c\rightarrow\infty$. The first term is the famous equation $E=mc^2$. In non-relativistic physics, the mass is a constant, so this is just a constant term in the energy we can ignore. The second term $E=\frac{p^2}{2m}$ is the non-relativistic expression for kinetic energy. You can see that non-relativistic physics is a good approximation to relativistic physics when we can ignore the higher order terms. Thinking back to how we expanded the square root, this amounts to the condition $p c \ll m c^2$.
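To see why the proton in the textbook example needs the relativistic treatment, one can compare $pc$ with $mc^2$ directly (a small sketch using $hc \approx 1240\,$MeV·fm):

```python
# Is a relativistic treatment needed for a proton with de Broglie wavelength 1 fm?
# Compare pc with the proton rest energy m c^2.
h_c = 1239.84      # h*c in MeV*fm, so that pc = h*c / lambda
m_c2 = 938.27      # proton rest energy, MeV

lam = 1.0          # de Broglie wavelength, fm
pc = h_c / lam     # ~1240 MeV, comparable to 938 MeV -> relativistic calculation needed

E = (pc**2 + m_c2**2) ** 0.5     # total relativistic energy
KE = E - m_c2                    # relativistic kinetic energy
KE_nonrel = pc**2 / (2 * m_c2)   # non-relativistic p^2/(2m), for comparison
print(f"pc = {pc:.0f} MeV, KE(rel) = {KE:.0f} MeV, KE(non-rel) = {KE_nonrel:.0f} MeV")
```

Since $pc$ exceeds $mc^2$ here, the non-relativistic estimate overshoots the true kinetic energy by a large margin.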
{ "language": "en", "url": "https://physics.stackexchange.com/questions/697874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why does an inkjet printer use CMYK ink instead of RGB ink? Currently, I am studying the inkjet printer in detail, and I came across the question of the printer's ink. Why do they not use RGB ink? Why do they use CMYK? I have read somewhere that if we mix red and green ink together it will become dark, but not yellow, because of absorption. I need a scientific answer related to spectrum and frequency.
Short answer: Because subtractive and additive color mixing are not the same. Longer answer: With additive color mixing (most screens work like that) you get white when you mix all the colors. This means, the more colors you add, the brighter the result. With subtractive color mixing (like printers do) you start with a white piece of paper and make it darker by applying color. Mixing all colors together results in something resembling black (often a brownish mess, that's why we often use K, a real black, in print). So if you had red, green and blue in a printer you could never get yellow, magenta or cyan. On the other hand, by putting a bit of yellow into magenta, you get red... A nice article: What is the Difference Between Subtractive and Additive Color Mixing? We also have to keep in mind that the selection of the base colors used to mix the other colors together is always a compromise. We can't represent the whole visible color space in print or on a screen. Some screens and printers can cover a bigger area of the whole visible color space but none can display everything. That's why some printers use additional color inks in addition to the classic CMYK. So it might be a good idea to learn about Color Spaces too.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/698045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Energy conservation and time uniformity Reading Peter Atkin's book Four laws that drive the universe, I cannot see the link between time uniformity and conservation of energy the way it is tackled in this excerpt: " Noether’s theorem, proposed by the German mathematician Emmy Noether (1882–1935), which states that to every conservation law there corresponds a symmetry. Thus, conservation laws are based on various aspects of the shape of the universe we inhabit. In the particular case of the conservation of energy, the symmetry is that of the shape of time. Energy is conserved because time is uniform: time flows steadily, it does not bunch up and run faster then spread out and run slowly. Time is a uniformly structured coordinate. If time were to bunch up and spread out, energy would not be conserved."
Energy is conserved because time is uniform: time flows steadily What I understand from this is that some periodic movements keep the same ratio of their periods. And exactly for that reason they were chosen historically to measure what we call time. The ratios between days (earth rotations), weeks (lunar phases), months (moon phase to phase, approximately) and years (repetition of the solar position with respect to the stars) are fairly stable. Ratios in the oscillation of springs or pendulums are also stable when changing their parameters and comparing the periods. On the other hand, all those periodic movements can be modelled by equations where a quantity $E = E_p + E_k$ is constant. So, there is a relation between our capacity to measure time as an objective quantity and energy conservation.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/698300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How can a quasi-static process be reversible? As I understand it, a reversible process is required to be quasi-static because each infinitesimal step in a quasi-static process generates only infinitesimal amounts of entropy at a time, which can be reversed with only an infinitesimal amount of work. But my question is: even if only infinitesimal amounts of entropy are generated at each step, when you integrate this over a finite path, doesn’t the work required to reverse the process integrate to a finite value, rendering the process irreversible? Given this, how can any process be reversible?
No real process is reversible, for precisely the reason you mention: a gradient (e.g., in temperature, pressure, or chemical potential) is required to drive a process, but energy moving down that gradient produces entropy. By skilled engineering (to reduce friction, for instance) and by slow operation, we can reduce entropy generation to an arbitrarily low level, but we cannot make it zero. The idealization of zero entropy generation and reversibility is nonetheless sometimes useful.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/698734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why doesn't temperature decrease with an increase of volume in the syringe? I made an experiment using a closed syringe with volume marks. At the beginning, the piston is at $5\rm\, ml$, then I move it to $20\rm\, ml$. Since the change is approximately adiabatic, we can use the adiabatic expressions, one of them being $$TV^{\gamma-1}=\text{const.}$$ For $T_1=20°C=293.15\rm\, K$, $V_1=5\rm\, ml$, and $V_2=20\rm\, ml$, $$T_2=T_1\frac{V_1^{\gamma-1}}{V_2^{\gamma-1}}=168.4\rm\, K = -104.8°C$$ Such a drastic drop should be rather noticeable. However, I haven't felt anything. Was my setup wrong? Is the change not adiabatic? Have I set up the equations incorrectly? Explicit data for the sake of comments: (i) The gas used is just ordinary air; at the beginning it is at standard pressure. (ii) I didn't use any thermometer; according to the 0th law of thermodynamics, I expected that the (in theory) cooled air should take away heat from my skin. In fact, gas at $\approx -100°C$ should be destructive to my skin, but my finger is still intact, therefore my theory is wrong. (iii) The syringe can be thought of as an ideal conductor; its thickness is less than $1\rm\, mm$, hence it can be neglected. As said, such a low temperature should have been easy to feel. The sketch of the experiment:
In calculating your expectation, you are neglecting the heat capacity of your experiment. Once you take that into account, you will find that your gas will get cold, but there isn't enough energy to cool the syringe by any significant amount. Also, you are neglecting time, which has the effect that the syringe and gas are being heated by the environment while your cooler gas is supposed to cool the syringe and your finger. Taken together, I would not expect to feel anything. To make your experiment work, use a thermometer with a small heat capacity, directly inside the syringe, to measure the temperature of the gas. That will work just fine. Commercial suppliers do offer similar experiments to demonstrate the ideal gas law, see e.g. here or a simple search on youtube.
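For reference, the adiabatic estimate itself is easy to reproduce (a minimal sketch of the calculation in the question; the point of the answer is that the real experiment is nowhere near adiabatic from the skin's point of view):

```python
# Reproduce the adiabatic estimate from the question: T * V^(gamma-1) = const.
T1 = 293.15          # K
V1, V2 = 5.0, 20.0   # ml (only the ratio matters)
gamma = 1.4          # diatomic ideal gas (air)

T2 = T1 * (V1 / V2) ** (gamma - 1)
print(f"T2 = {T2:.1f} K = {T2 - 273.15:.1f} C")   # ~168 K, i.e. about -105 C
```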
{ "language": "en", "url": "https://physics.stackexchange.com/questions/698840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does the CPT theorem imply $CP=T$? Does the CPT theorem imply $CP=T$? That is, does it imply that the action of Charge Conjugation and Parity inversion on some representation of the Lorentz group is the same as doing a time reversal? Specifically, given explicit expressions for $P$ and $C$ (in terms of matrices and complex conjugation) in some basis, how does $CP$ relate to the expression for $T$ and $T^{-1}$?
The assertion of the CPT theorem is that, under natural hypotheses, the Hamiltonian operator $H$ of a theory is invariant under the simultaneous action of the symmetries (in Wigner's sense, i.e. unitary/antiunitary operators) C, P, and T. $$CPT H (CPT)^{-1} = H\:.\tag{1}$$ This action can also be implemented by a direct action on the quantum fields the Hamiltonian is made of. However, the fact that the Hamiltonian is CPT invariant does not imply a precise relation between CP and T, since their combination is equivalent to the identity when they act on the Hamiltonian, not in general. In particular, $CP=T$ or $T^{-1}$ do not make sense (also including phases), since the left hand side is linear and the right hand side is antilinear, when viewing them as operators in the Hilbert space as in Eq. (1). However, from the above reasoning it is evident that the action of T on the Hamiltonian is the same as the combined action of CP on the Hamiltonian.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/699014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Necessary and sufficient conditions for operator on $\mathbb C^2$ to be a density matrix Consider a one-qubit system with Hilbert space $\mathscr H\simeq \mathbb C^2$. Define the hermitian operator $$\rho := \alpha\, \sigma_0 + \sum\limits_{i=1}^3 \beta_i\, \sigma_i \quad , \tag{1}$$ where $\alpha,\beta_i \in \mathbb R$, $\sigma_0 = \mathbb I_{\mathbb C^2}$ and $\sigma_i$ are the usual Pauli matrices. What are the necessary and sufficient conditions for $\rho$ to be a density operator, that is a positive semi-definite operator with unit trace? Under which conditions is $\rho$ pure? Can these conditions be derived without using the explicit matrix representation of the Pauli matrices?
Let us first derive necessary conditions on the coefficients, so assume $\rho$ is a density matrix. From $\mathrm{Tr} \rho =1$ it trivially follows that $\displaystyle \alpha=\frac{1}{2}$. To proceed, let $\lambda$ and $1-\lambda$ denote the eigenvalues of $\rho$. As shown e.g. here, we find $$\det \sum\limits_{i=1}^3 \beta_i \,\sigma_i = -\sum\limits_{i=1}^3 \beta_i^2$$ and thus $$\det \left(\rho - \frac{\sigma_0}{2}\right) = -\sum\limits_{i=1}^3 \beta_i^2 \quad . $$ Further, since $[\rho,\sigma_0]=0$ trivially, we have that the eigenvalues of $\rho - \frac{\sigma_0}{2}$ are given by $\lambda-\frac{1}{2}$ and $1-\lambda - \frac{1}{2}$. Hence $$ \left(\lambda-\frac{1}{2}\right) \left(1-\lambda - \frac{1}{2}\right) = -\sum\limits_{i=1}^3\beta_i^2 \quad , $$ which eventually leads to $$\det \rho = \lambda \left(1-\lambda\right) = -\sum_{i=1}^3 \beta_i^2 +\frac{1}{4} \quad . $$ Because of $0 \leq \lambda\leq 1$, we require $\det \rho \geq 0$, so for $\rho$ in $(1)$ to be a density matrix the coefficients must fulfill: $$\alpha=\frac{1}{2} \quad \text{and} \quad \sum\limits_{i=1}^3 \beta_i^2 \leq \frac{1}{4} \quad . \tag{2} $$ Moreover, from $\det \rho = 0$ if and only if $\lambda=1$ or $\lambda=0$, we see that $\rho$ is pure if and only if the equality in $(2)$ holds. Finally, note that these conditions are also sufficient: If an operator of the form $(1)$ obeys equation $(2)$, then $\mathrm{Tr} \rho=1$ and $\det \rho \geq 0$. It remains to show that both eigenvalues are non-negative. But since $\det \rho \geq 0$, we know that both eigenvalues have the same sign and from the trace condition it follows that both must be non-negative.
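A quick numerical check of the conditions in $(2)$ (a minimal sketch; the two sample Bloch vectors below are made up to illustrate the mixed and pure cases):

```python
# Build rho from (alpha, beta) and test trace, positivity and purity directly.
import numpy as np

sigma = [np.eye(2),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def make_rho(alpha, beta):
    return alpha * sigma[0] + sum(b * s for b, s in zip(beta, sigma[1:]))

def is_density_matrix(rho, tol=1e-12):
    evals = np.linalg.eigvalsh(rho)               # rho is Hermitian for real beta
    return np.isclose(np.trace(rho).real, 1.0) and np.all(evals > -tol)

beta_mixed = np.array([0.1, 0.2, 0.1])   # sum of squares = 0.06 < 1/4 -> mixed state
beta_pure = np.array([0.3, 0.0, 0.4])    # sum of squares = 0.25 = 1/4 -> pure state
for beta in (beta_mixed, beta_pure):
    rho = make_rho(0.5, beta)
    purity = np.trace(rho @ rho).real
    print(is_density_matrix(rho), f"purity Tr(rho^2) = {purity:.3f}")
```

The pure case prints purity $1$, the mixed case prints a purity strictly between $1/2$ and $1$, in line with the derivation above.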
{ "language": "en", "url": "https://physics.stackexchange.com/questions/699132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Can we write the effective field theory for the toric code model? If not, then why not? If yes, then what is the effective field theory?
No. Effective field theory only describes systems near a critical point. The toric code model is far from a critical point. Thus "No". The toric code model realizes a $Z_2$-topological order. When a state with a $Z_2$-topological order is close to a continuous phase transition, the state will have an effective field theory description. But which effective field theory it is will depend on which continuous phase transition the state is close to.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/699283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Work of a spaceship in circular motion Say a spaceship is traveling though space in a uniform circular motion. It's not orbiting any planet, it just flies in circles in an empty space. The only force working on the spaceship would be the centripetal force caused by the ship's engine. Thus, the work would be $0$, as the force would always be perpendicular to the ship's path. But that sounds counterintuitive to me, it would seem that the spaceship must do some work, otherwise it would just float in a straight line. Can anyone point out the error in my reasoning?
You are right in saying that the centripetal force doesn't do any work; in fact the kinetic energy of the system doesn't increase, as the absolute value of the velocity $|\vec{v}|$ stays constant. I guess what you find counterintuitive is that the spaceship has to burn some fuel to keep rotating, so where does this energy go? Simply, it is in the fuel. To keep the rotation, the spaceship will need to keep ejecting mass; in particular, if the centripetal force you need is $\vec{F}$, then from Newton's law, every $dt$ you need the change in momentum $\vec{F}dt= md\vec{v}$, where I neglected the loss of mass of the spaceship for simplicity. Since the system is isolated the momentum has to be conserved, therefore for every $dt$ you need to eject some fuel carrying that much momentum (with the vector pointing outwards). This ejected fuel will also carry the energy we were looking for.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/699425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Can I see separation of variables as a tensor product? Can I see separation of variables as a tensor product? For example, in a radial potential, the separation of variables brings to the solution $R(r)\Theta(\theta) \Phi (\phi)$. This sounds like an element of the space spanned by $|r\rangle\otimes|\theta\rangle\otimes |\phi\rangle$ where $R(r)$ lives in the space spanned by $|R\rangle$ and so on.
Yes, that is exactly what separation of variables is in terms of the Hilbert space - generally, we have that $L^2(X\times Y) = L^2(X)\otimes L^2(Y)$, i.e. the space of square-integrable functions on a Cartesian product is the tensor product of the square-integrable functions on the factors of the Cartesian product. In particular, $L^2(\mathbb{R}^n) = \bigotimes_{i=1}^nL^2(\mathbb{R})$, which corresponds to writing a function $f(x,y,z)$ as linear combinations of functions $f_x(x)f_y(y)f_z(z)$. The ansatz of "separation of variables" in these terms is nothing but the assumption that the solution is a simple tensor, i.e. the linear combination has only a single non-zero summand. There is a slight subtlety for polar coordinates since they don't really cover all of $\mathbb{R}^3$, but the set of points where they are fishy has measure zero, so it doesn't really matter.
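A concrete way to see the "simple tensor" statement is to sample a function of two variables on a grid: a separated solution $f(x,y)=g(x)h(y)$ becomes a rank-1 matrix, while a generic function needs a sum of such products (a minimal illustration, not tied to any particular PDE):

```python
# A function of two variables sampled on a grid is a matrix; a separated
# solution f(x, y) = g(x) h(y) is exactly a rank-1 matrix (a simple tensor).
import numpy as np

x = np.linspace(0, 1, 50)
y = np.linspace(0, 1, 60)
X, Y = np.meshgrid(x, y, indexing="ij")

f_sep = np.exp(-X) * np.cos(3 * Y)          # g(x) h(y): a simple tensor
f_gen = np.exp(-X * Y) + np.sin(X + Y)      # not separable into a single product

print(np.linalg.matrix_rank(f_sep))         # 1
print(np.linalg.matrix_rank(f_gen))         # > 1: needs a linear combination
```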
{ "language": "en", "url": "https://physics.stackexchange.com/questions/699573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
How do you estimate the core temperature of a star? Given a star's mass, radius and average composition (e.g. 90% H, 10% He), is there a formula to estimate the core temperature of that star? I only found one for a lower bound but that wasn't very accurate.
Yes, the formula you quoted in comments is an application of the virial theorem, which says for a star in equilibrium that twice its internal kinetic energy plus its potential energy is zero. This can be written $$\Omega = - 3 \int P\ dV = -3\int P\ \frac{dm}{\rho}$$ where $\Omega$ is the gravitational potential energy, $P$ is pressure, $\rho$ is density and $dm$ is a mass element. A rough solution is to assume the star is a uniform sphere with an average pressure, density and temperature. In which case $$-\frac{3GM^2}{5R} = -3\frac{PM}{\rho}$$ where $M$ is the stellar mass and $R$ the radius. Writing $\rho = 3M/4\pi R^3$ and assuming an ideal, perfect gas with a mass per particle of $\mu$ and temperature $T$, so that $P = \rho k_B T/\mu$, then $$\frac{GM}{5R} = \frac{k_B T}{\mu}$$ $$T = \frac{G\mu}{5k_B}\frac{M}{R}$$ This gives the right proportionality but the numerical coefficient of 0.2 is not accurate because the star is not uniform; the gravitational potential is not that of a uniform sphere and the density, temperature and pressure vary with radius. For a 1 solar mass, 1 solar radius star, with the composition you mention ($\mu = 0.58\times 1.67\times 10^{-27}$ kg), we get $T = 0.2 \times 1.3\times 10^7$ K. This is not so bad for an average temperature, but not close to the core temperature. A more accurate approximation comes from assuming a polytropic equation of state with $P \propto \rho^{\alpha}$. For a star like the Sun (or of higher mass), where the energy is largely transported radiatively, it turns out $\alpha \simeq 4/3$. Solving the Lane-Emden equation and assuming an ideal gas then gives a new numerical coefficient of 1.17 (for a largely convective star with $\alpha =5/3$, the star is more centrally condensed and the coefficient would be 1.86). Thus your answer for a sun-like star is $$ T_c \simeq 1.17 \frac{GM\mu}{k_B R}\ .$$ and for a lower mass main sequence star, where convection is dominant, $$T_c \simeq 1.86 \frac{GM\mu}{k_B R}\ .$$ If you want something more accurate than this then a precise stellar evolution model is required to solve for energy transport at each radius, to account for the non-perfect nature of the gas, radiation pressure and accurately describe the pressure and density profile.
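Plugging in solar values to reproduce the two estimates above (a small sketch using the answer's coefficients and the composition given in the question):

```python
# Core-temperature estimate T_c ~ 1.17 * G * M * mu / (k_B * R) for a Sun-like star.
G = 6.674e-11          # m^3 kg^-1 s^-2
k_B = 1.381e-23        # J/K
M_sun = 1.989e30       # kg
R_sun = 6.957e8        # m
mu = 0.58 * 1.67e-27   # mean mass per particle, kg (90% H, 10% He, ionized)

T_avg = 0.2 * G * M_sun * mu / (k_B * R_sun)    # uniform-sphere estimate
T_core = 1.17 * G * M_sun * mu / (k_B * R_sun)  # radiative-star coefficient from the answer
print(f"T_avg  ~ {T_avg:.2e} K")
print(f"T_core ~ {T_core:.2e} K")   # ~1.6e7 K, close to the accepted solar core value
```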
{ "language": "en", "url": "https://physics.stackexchange.com/questions/700006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Partial trace of local operators applied to maximally entangled states I was looking at a problem where two invertible local operators were applied to a maximally entangled state, and didn't quite understand how some of it works out. We have local operators $A \otimes \mathbb{1}$ and $\mathbb{1} \otimes B$ with $Tr(A^{\dagger}A) = 1$ and $Tr(B^{\dagger}B) = 1$. We also have the maximally entangled state $\rho$. I do not really understand this step $$ Tr_B (A \otimes B \rho A^{\dagger} \otimes B^{\dagger}) = Tr_B (A^{\dagger}A \otimes B^{\dagger}B \rho) = Tr_B ((A^{\dagger}A \otimes \mathbb{1})\rho) $$ I assume this is some combination of the fact that $\rho$ is maximally entangled (so its partial trace is equal to the identity) and the fact that $Tr(B^{\dagger}B) = 1$, but I cannot really understand how to argue that. It might also be the case that I'm just completely misunderstanding something.
This is not true. A simple counterexample is $A\propto I$ and $B=\lvert0\rangle\langle0\rvert$.
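A small numpy check of this counterexample, taking $A=\mathbb{1}/\sqrt 2$ (so that $Tr(A^\dagger A)=1$), $B=\lvert0\rangle\langle0\rvert$ and $\rho=\lvert\Phi^+\rangle\langle\Phi^+\rvert$:

```python
# With A = I/sqrt(2), B = |0><0| and rho = |Phi+><Phi+|,
# Tr_B[(A(x)B) rho (A(x)B)^dag]  differs from  Tr_B[(A^dag A (x) I) rho].
import numpy as np

def partial_trace_B(X):
    # X acts on C^2 (x) C^2 (kron ordering); trace out the second factor
    return X.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

A = np.eye(2) / np.sqrt(2)                       # Tr(A^dag A) = 1
B = np.array([[1, 0], [0, 0]], dtype=float)      # |0><0|, Tr(B^dag B) = 1
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)        # |Phi+> = (|00> + |11>)/sqrt(2)
rho = np.outer(phi, phi)                         # maximally entangled state

AB = np.kron(A, B)
lhs = partial_trace_B(AB @ rho @ AB.conj().T)
rhs = partial_trace_B(np.kron(A.conj().T @ A, np.eye(2)) @ rho)
print(lhs)    # diag(1/4, 0)
print(rhs)    # I/4 -> the two sides are not equal
```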
{ "language": "en", "url": "https://physics.stackexchange.com/questions/700310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
What is the meaning of an object having an uncertainty of velocity of 2 $\rm m/s$? In several questions we are given the uncertainty in the velocity of an object and are asked to calculate the uncertainty in the position of the object. My doubt is this: when we say that the uncertainty in the position of, let's say, an electron is 2 nm, we mean that the electron could be found anywhere within that 2 nanometers; it could be present at 1.2 nm, or at 1.5 nm, or anywhere within that 2 nm distance. But when we say that the uncertainty in the velocity of the electron is, let's say, 2 m/s (just for the sake of convenience), then what does that literally mean, in the same way as it did in the case of position?
As Marko points out, this question is unclear, but I see you have the uncertainty principle as a tag. If you mean that the electron has a velocity with uncertainty $\Delta v=2\ ms^{-1}$ and you want to find its corresponding position uncertainty, then you need to solve the following relation $$\Delta x\Delta p\ge \frac \hbar2$$ and since $p=mv$ then $\Delta p=m\Delta v$ so $$\Delta x\ge\frac{\hbar}{2m\Delta v}$$ where $m$ is the mass of the electron, then plug in the uncertainty in $v$ and Planck's constant.
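Plugging in the numbers from the question (a quick sketch):

```python
# Position uncertainty implied by Delta v = 2 m/s for an electron.
hbar = 1.0546e-34   # J*s
m_e = 9.109e-31     # kg
dv = 2.0            # m/s

dx = hbar / (2 * m_e * dv)
print(f"Delta x >= {dx:.2e} m  (~{dx * 1e6:.0f} micrometres)")
```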
{ "language": "en", "url": "https://physics.stackexchange.com/questions/700414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Force between two protons Yesterday my teacher was teaching about the production of photons. He told us that photons are produced when an electron moves from a higher energy level to a lower energy level. Then suddenly an idea struck my mind: if electrons are responsible for photons and photons are responsible for the electromagnetic force, then how will the electromagnetic force come about between two individual protons? Are there more ways to generate photons?
Photons are the quantum elementary particles of the electromagnetic force. In the table of elementary particles there are particles with charge other than electrons, so photons can be produced at the basic level by other charged particles too. One way they are produced is the way you have been taught at present, by changes in the energy level in atoms, which are composed of electrons and a positively charged nucleus. Another way is by the scattering of charged particles off the field of other charged particles. This classically is described by the production of light from accelerating charged particles. Classical electricity and magnetism can be shown to emerge from the underlying quantum mechanical level. how will the electromagnetic force come about between two individual protons? Are there more ways to generate photons? Individual protons are composite charged particles and may generate photons when scattering off each other's electromagnetic field. The Coulomb force between two protons can be shown in quantum field theory to derive from the mathematical existence of virtual photons, but this needs graduate studies in quantum field theory to understand.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/700557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
Difference between the wave forms in the water and in the Young double slit experiment We can observe that when we cause a slight disturbance at two points on a water surface which is initially totally undisturbed, it will form water waves which would look as shown in the image below: we can observe that there is constructive interference at some places and destructive interference at others, and also everything that lies in between these types of interference (that is, between fully destructive and fully constructive). We notice that it doesn't need a screen to show the interference effects at all, so why in the YDSE do we need a screen to show the interference pattern? Is it because we can't observe the interference happening in air or any other medium with our naked eyes?
You need the screen because you do not want to stare at the sun with your eyes, even if you were doing it through a peep-hole. Imagine adjusting the size of the diaphragm (of the slits) to see if enough light has already fallen on your retina.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/700854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Why do we use different differential notation for heat and work? Just recently started studying Thermodynamics, and I am confused by something we were told. I understand we use the inexact differential notation because work and heat are not state functions, but we are told that the '$df$' notation is only for functions and that the infinitesimal heat and work are 'not changes in anything'. Surely they can be expressed as functions of something? And they are still changes, as they do change? What is the thermodynamic reason for describing them as not being changes in anything?
Notation Sometimes heat and work are marked by special signs to underscore that they are not real differentials, such as differentials with a stroke, as shown here, or something like $$\text dU = \delta Q + \delta W.$$ However, there is no single established notation here, and most of the time one simply does not bother to use any special symbols - the risk of misunderstanding is very low (after one has understood the basics of stat. mech.) Are work and heat functions? Work and heat are, of course, functions, but they depend not only on the state variables of the system but also on the path taken, and therefore they are not functions of the state variables alone. E.g., if we work in $p,V$ variables, then there are many paths connecting states $p_1,V_1$ and $p_2,V_2$ - each such path corresponds to a different combination of work and heat, although the internal energy at the end of the path is always the same. This means that heat and work are not differentials in a strictly mathematical sense, whereas the internal energy is (see here for the difference between a derivative and a differential).
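The path dependence is easy to see with a worked ideal gas example: two different paths between the same pair of states give the same $\Delta U$ but different $W$ and $Q$ (a minimal sketch; here $W$ denotes the work done by the gas, so the first law reads $\Delta U = Q - W$):

```python
# Two paths between the same states (p1, V1) -> (p2, V2) for an ideal gas:
# the change in internal energy is identical, but W (and hence Q) differs.
n, R_gas, Cv = 1.0, 8.314, 1.5 * 8.314     # 1 mol of a monatomic ideal gas

p1, V1 = 1.0e5, 0.01      # Pa, m^3
p2, V2 = 2.0e5, 0.03

T1, T2 = p1 * V1 / (n * R_gas), p2 * V2 / (n * R_gas)
dU = n * Cv * (T2 - T1)                    # state function: path independent

# Path A: isobaric expansion at p1, then isochoric pressure rise at V2
W_A = p1 * (V2 - V1)
# Path B: isochoric pressure rise at V1, then isobaric expansion at p2
W_B = p2 * (V2 - V1)

print(f"dU  = {dU:.0f} J on both paths")
print(f"W_A = {W_A:.0f} J, Q_A = {dU + W_A:.0f} J")
print(f"W_B = {W_B:.0f} J, Q_B = {dU + W_B:.0f} J")
```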
{ "language": "en", "url": "https://physics.stackexchange.com/questions/701026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
What does it mean mathematically that the position of the center of mass relative to other particles is the same? My understanding is that the center of mass of a system of particles is an imaginary point that travels in space and has position vector $$\vec r = \frac{\sum_{i=1}^{n}m_i\vec r_i}{\sum_{i=1}^nm_i}.$$ My textbook suggests that if you use one coordinate system to find the center of mass of an object, its position relative to the particles is the same if the object moves or the coordinate system changes. What does it mean mathematically that the position of the center of mass relative to the positions of the particles is the same? I found that if there is some rotation, its position vectors relative to all the other particles change, like the object below.
Look at this figure. The vector u is the position vector between the center of mass and the particle position. You can describe this vector in a body-fixed coordinate system that is located at the center of mass (the blue one). Now if you use another inertial system $I_2$, the center of mass vector and the particle position change, but the vector u doesn't change in the body-fixed coordinate system. If the center of mass coordinate system (the blue one) is moving, the vector u remains unchanged in the body coordinate system.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/701156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
When solving problems on linear momentum, when can external forces be neglected? I was recently solving a problem in which one end of a massless string (in vertical orientation) was tied to a block of mass $2m$ and the other end to a ring of mass $m$, which was free to move along a horizontal rod. The block is then given a velocity $v$ (consider that this velocity is not caused by application of an external force). To calculate the velocity of the ring, we would have to apply momentum conservation. The problem is, momentum conservation would require net external force on the system to be zero, but in the solution I saw, the normal force exerted by the rod on the ring was neglected and so was the force of gravity. So, when exactly can external forces be neglected in problem-solving?
Remember that momentum and force are both vectors, and when we write Newton's second law to relate force to the rate of change of momentum this is a vector equation: $$ (F_x, F_y, F_z) = \left( \frac{dp_x}{dt}, \frac{dp_y}{dt}, \frac{dp_z}{dt}\right) $$ which is a set of three equations: $$\begin{align} F_x &= \frac{dp_x}{dt} \\ F_y &= \frac{dp_y}{dt} \\ F_z &= \frac{dp_z}{dt} \end{align} $$ In the example you give the only external forces present are the normal force between the ring and rod and gravity acting on the mass, and both of these act in the vertical direction. Suppose we call the vertical axis $y$ and the horizontal axis $x$ (we don't need a $z$ axis in this example) then since no external forces act in the $x$ direction we have $F_x = 0$ and therefore $dp_x/dt = 0$ i.e. momentum is conserved in the $x$ direction. Since there are forces acting in the $y$ direction we cannot just assume that $p_y$ is conserved, though in fact it is since the vertical forces cancel each other out.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/701392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Different formal definitions of Lorentz transformations The formal definition of a Lorentz transformation is a matrix $\Lambda$ such that $$\Lambda^\mu_{\ \ \alpha}\Lambda^\nu_{\ \ \beta}\eta_{\mu\nu}=\eta_{\alpha\beta}.$$ In some books I have found a definition that uses the transposition: $$(\Lambda^T)\eta\Lambda=\eta.$$ My question is how to link them. My attempt, so far, is to multiply by the inverse, but I get stuck very soon and I don't know how to reach the second equation. Probably the passages are trivial. Thanks for any help.
This simply has to do with matrix multiplication. If you have a matrix $A$ that multiplies a vector $x$, this can be written as $$ A_{ij}x_j = A x$$ where summation over repeated indices is assumed. Of course you can flip the expression around, as in $$ A_{ij}x_j = x_jA_{ij}$$ Similarly, a vector can multiply a matrix, as in $$ x_i A_{ij} = xA $$ For the second step we have to recall that a matrix is transposed as $$ A_{ij}^T = A_{ji}$$ Combining this, you directly obtain the expression you give. Also note that since we are in the context of relativity, each pair of indices over which we sum consists of one upper and one lower index (never both up or both down).
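If it helps, here is a quick numerical sanity check (my own sketch) that the index expression and the matrix expression say the same thing, using a boost along $x$:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])     # metric with signature (-,+,+,+) (a convention choice)

# A boost along x with velocity beta (in units of c)
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[gamma,       -gamma*beta, 0.0, 0.0],
              [-gamma*beta,  gamma,      0.0, 0.0],
              [0.0,          0.0,        1.0, 0.0],
              [0.0,          0.0,        0.0, 1.0]])

# Matrix form of the defining condition: Lambda^T eta Lambda = eta
print(np.allclose(L.T @ eta @ L, eta))    # True

# Index form: Lambda^mu_alpha Lambda^nu_beta eta_{mu nu} = eta_{alpha beta}
lhs = np.einsum('ma,nb,mn->ab', L, L, eta)
print(np.allclose(lhs, eta))              # True - the same statement as the matrix form
```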
{ "language": "en", "url": "https://physics.stackexchange.com/questions/701547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Do the components of a force written for a purpose actually exist? If you put a box on an inclined plane, the force of gravity $mg$ is written as the sum of two forces, $mg\sin\theta$ and $mg\cos\theta$, where $\theta$ is the angle the incline makes with the earth's surface. Do these forces $mg\sin\theta$ and $mg\cos\theta$ actually act on the object?
If you put two force meters on the block, one in the direction of the incline and one orthogonal to it, the two meters show the two forces you calculated, and they keep showing them if you take away the inclined plane. So the two forces really exist, not just in theory.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/701644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 2 }
How can they estimate exoplanet radial velocities using Doppler considering spectrograph resolving power? I read that spectrograph resolving powers, the ratio of wavelength to the smallest resolvable wavelength difference, are like 1000 or 10000. Plugging this into the non-relativistic Doppler formula gives a velocity uncertainty like 30000 meters per second. So how can they claim one meter per second accuracy? And what about thermal broadening of spectral lines?
Resolution tells you how well the spectrometer can separate lines with wavelengths close together, but not how precisely it can measure the wavelength of a single line. Measurement precision can be much better than the resolution. Then, techniques like template correlation can effectively average measurements of many lines, improving precision even more.
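Here is a rough numerical illustration of the "precision is not resolution" point (my own toy model, not how real radial-velocity pipelines work - they cross-correlate against templates - and the line depth, sampling and signal-to-noise below are arbitrary choices). The scatter of the measured line centre comes out far smaller than the line width, and averaging many lines shrinks it further by roughly $\sqrt{N}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# A single absorption line of width FWHM (arbitrary wavelength units), sampled on a
# pixel grid, with Gaussian noise corresponding to a chosen signal-to-noise ratio.
FWHM = 1.0
sigma_line = FWHM / 2.355
snr = 100.0
pix = np.linspace(-2.0, 2.0, 17)               # a few pixels across the line

def measure_center(true_center=0.0):
    flux = 1.0 - 0.5 * np.exp(-(pix - true_center)**2 / (2 * sigma_line**2))
    flux += rng.normal(0.0, 1.0 / snr, size=pix.size)     # photon noise
    w = 1.0 - flux                                         # absorption depth as weight
    return np.sum(w * pix) / np.sum(w)                     # crude flux-weighted centroid

centers = np.array([measure_center() for _ in range(2000)])
print(f"line FWHM                : {FWHM}")
print(f"centroid scatter, 1 line : {centers.std():.4f}")      # a few percent of the FWHM
print(f"~ with 1000 such lines   : {centers.std()/np.sqrt(1000):.5f}")
```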
{ "language": "en", "url": "https://physics.stackexchange.com/questions/701868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to derive the Unruh effect (or the thermofield double state) from the path integral? I have been reading about the path integral approach to deriving the thermofield double state for the Minkowski vacuum in terms of the Rindler states: \begin{equation} \left|0_{M}(t=0)\right\rangle=\sum_{n} \frac{e^{-\frac{\beta}{2} E_{n}}}{\sqrt{Z(\beta)}}\left|n_{R}\right\rangle \otimes\left[\Theta\left|n_{L}\right\rangle\right]. \end{equation} According to https://arxiv.org/abs/2001.09869, this result can be derived by considering \begin{equation} \begin{aligned} \left\langle\phi_{M} \mid 0_{M}(t=0)\right\rangle & \propto \int_{\phi(\theta=-\pi)=\phi_{I}}^{\phi(\theta=0)=\phi_{D}} D \phi e^{-I_{E}} \\ & \propto\left\langle\phi_{R}\left|e^{-\pi H^{R}}\right| \phi_{L}\right\rangle \end{aligned}. \end{equation} But my other reference is https://arxiv.org/abs/1409.1231, which claims we should be studying \begin{equation} \left\langle\phi_{L} \phi_{R} \mid \Omega\right\rangle \propto\left\langle\phi_{R}\left|e^{-\pi K_{R}} \Theta\right| \phi_{L}\right\rangle_{L} \end{equation} Which is different because of the CPT operator $\Theta$. (As far as I can tell $K_R$ and $H^R$ are the same thing). Which of these is correct?
It seems I was confused by notation here. After all in the first expression the $| \phi_L \rangle $ can only be evolved by $H^R$ and projected onto $| \phi_R \rangle$ if it is in the right-Rindler wedge states. So $| \phi_L \rangle $ must live in the same space as $| \phi_R \rangle $ which can be achieved by applying $\Theta$ to a left-Rindler wedge state.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/702127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Hall sensor for electric(!) field? Is it (in principle) possible to measure the strength of an electric(!) field with a Hall sensor? I think so, for the following reasons: * *The Hall sensor is a conductor. If we place a conductor in an electric field, charges will rearrange so that there will be no electric field in the interior of the conductor (~Faraday's cage). *The field that is created by the new charge distribution in the Hall sensor is opposite to the surrounding field, but has the same strength. *Following the usual argumentation, we get a voltage across the Hall sensor that is proportional to the field inside the Hall sensor, which has the same strength as the surrounding field that we want to measure. Is this line of thought correct? If so, what are the technical difficulties that make the Hall effect not suitable for measuring electric fields?
The only suggestion that I have seen for measuring an electric field (like the one at the surface of the earth) is to look at the AC output from a small, rapidly rotating, dipole antenna. An un-powered DC circuit goes quickly to zero current, field, and voltage difference.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/702261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is it possible to derive the Navier-Stokes equations of fluid mechanics from the Standard Model? We know that the Standard Model is a theory of almost everything (except gravity). So it should be the basis of fluid mechanics, which is a macroscopic theory built from experience. So is it possible to derive the equations of fluid mechanics from the Standard Model? If the answer is yes, please give a simple example. If the answer is no, what is the reason that prevents such a derivation from being carried out?
From your comment: So is it possible to prove the consistency of fluid mechanics with the Standard Model? The Standard Model is consistent with special relativity and quantum theory. We know those explain everything our normal fluid equations deal with, because it's just atoms, ions and electrons, so it's a very safe bet that it's consistent with normal fluid mechanics. A direct proof, however, would be rather insane and would provide a different result from standard fluid mechanics, because it would include terms that model conditions and particles at extreme energies that are irrelevant for normal fluid mechanics. We'd end up with some insane equations that modelled everything, e.g. neutrino fluids or Higgs particle fluids and mixtures of all of these things. You'd end up discarding most of what you found (assuming someone could do such hideous math) to reduce it to a form related to normal fluids. We have separate physics for macroscopic objects precisely because that's the most sane way to work.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/702695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 0 }
Example for a physical distribution without a well-defined standard deviation Is there a physical example of a distribution that has a diverging standard deviation (like the Cauchy distribution) and is there an intuitive reason for the standard deviation diverging? Is there a physical context where I should expect my standard deviation to be not well-defined?
The standard deviation is one example of a metric to characterize the width of probability distributions. There is no particular physical meaning to the standard deviation diverging, in general; you just need to use a different measure of the width of the distribution. It is just a mathematical tool that is useful in some contexts and not others. The Cauchy distribution is also called a Lorentzian distribution in physics, and appears very frequently in characterizing modes of a system with dissipation in classical mechanics, or unstable particles that decay (also called resonances) in quantum mechanics. It arises naturally when you solve the equation for a damped, driven harmonic oscillator in Fourier space (compare Eq 120 in the notes I linked to with the first equation on Wikipedia). The fact that the standard deviation of the Lorentzian diverges poses no problem for using this distribution in those contexts. Often, the width of a Lorentzian is characterized in terms of the full width at half maximum (FWHM).
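A quick way to see the difference in practice is to draw samples and watch the sample standard deviation refuse to settle down, while a quantile-based width is perfectly stable. A small sketch (my own, using the standard Cauchy, for which the FWHM and the interquartile range are both exactly 2):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_cauchy(1_000_000)   # samples from a Cauchy / Lorentzian distribution

# The running sample standard deviation never converges - it keeps jumping
# whenever a far-tail outlier comes in:
for n in (10**3, 10**4, 10**5, 10**6):
    print(f"n = {n:>7}:  sample std = {x[:n].std():12.1f}")

# A width measure that is well defined and stable: the interquartile range
q25, q75 = np.percentile(x, [25, 75])
print(f"interquartile range = {q75 - q25:.3f}   (exact value for a unit Cauchy: 2)")
```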
{ "language": "en", "url": "https://physics.stackexchange.com/questions/703003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why can't I use vector addition in this way here? I know that this exact question has been asked here a number of times, but none of the answers sit right with me. The question says that the ends of the strings are pulled with a velocity of "u" units. We are to find the velocity of the block in terms of u. The answer is u/cos(θ) units. However, my approach was to add two vectors of magnitude "u" separated by an angle of 2θ. Evaluating that gives an answer of 2ucos(θ), which is wrong. Where am I going wrong?
The model you are assuming here is not correct for the following reason: Imagine you have a horse carriage that is pulled by one horse with a speed of $v$. Then (if the horse is strong enough) the carriage will travel at a speed of $v$. Now, if you put 10 horses in front of the carriage, the carriage will not travel at the speed of $10v$, but still at a speed of $v$. Otherwise it would overtake the horses and this is clearly not physical. Remember that these are velocities and not forces! This might be confusing since sometimes velocities are broken up in their coordinate components, one horizontal and one vertical component. However, this is a different setup here. The velocities are not components of a larger velocity, but the velocity of some points of the ropes. You can also think of the points of the ropes as the horses in front of the carriage! Also, a good check is always taking the limiting case. Here this would correspond to $\theta\rightarrow0$. The physical situation is then that the mass is very far below from the setup which is outlined here. Then, if you pull by some distance $x$, you will lift the package by the same distance $x$. This is also exactly what you find with the formula since $\cos(0)=1$!
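You can also check the correct result directly from the string-length constraint. A small numerical sketch (my own, assuming the usual geometry for this kind of problem: the block hangs a depth $y$ below two anchor points a horizontal distance $d$ away on either side, each string end is pulled in at speed $u$, and $\theta$ is the angle between a string and the block's velocity):

```python
import numpy as np

d = 1.0          # horizontal offset of each anchor point (arbitrary units)
u = 0.3          # speed at which each string end is pulled
y0 = 2.0         # initial depth of the block below the anchor level
dt = 1e-6        # small time step for a finite-difference check

L0 = np.hypot(d, y0)           # current length of one string segment
L1 = L0 - u * dt               # pulling the end shortens the segment at rate u
y1 = np.sqrt(L1**2 - d**2)     # new depth consistent with the shorter segment

v_numeric = (y0 - y1) / dt             # speed of the block
theta = np.arctan2(d, y0)              # angle between the string and the block's velocity
print(f"numerical block speed  : {v_numeric:.6f}")
print(f"u / cos(theta)         : {u / np.cos(theta):.6f}")     # matches
print(f"2 u cos(theta)         : {2 * u * np.cos(theta):.6f}") # does not
```

The constraint is that each string segment shortens at exactly $u$, and only the component of the block's velocity along the string can do that shortening - hence $u = v\cos\theta$, not $v = 2u\cos\theta$.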
{ "language": "en", "url": "https://physics.stackexchange.com/questions/703118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is angular momentum just a convenience? I'm wondering whether angular momentum is just a convenience - that is, whether I could hypothetically solve any mechanics problem without ever using the concept of angular momentum. I came up with this question when I saw a problem in my physics textbook today. In the problem, a puck with known velocity hits a stick lying on a surface. The puck continues without being deflected, and the stick starts both linear and angular motion. There are three unknowns: the velocity of the puck and of the stick after the collision, and the angular speed of the stick. So we need three equations: conservation of linear momentum, kinetic energy, and angular momentum. So, for instance, is it possible to solve this problem without using angular momentum? Also, how would a physics simulator approach this problem?
Angular momentum is a fundamentally conserved quantity. One does not need macroscopic, bound objects to see this. Consider two identical particles moving in opposite directions. In the center-of-momentum frame, their trajectories will be parallel. If they are not on a direct collision course, then their trajectories will be separated by some distance d, and the system will have non-zero angular momentum in the center-of-momentum frame. Given this setup, there is no inertial reference frame with zero angular momentum; it would require a rotating (non-inertial) frame. The system's angular momentum is preserved at all times and for all interactions, and this property is required to solve final states. For example, if the particles are an electron and positron, they can annihilate to 2 (or more) photons. But the final photons state must have the same angular momentum as the initial state, which forbids some final states that would be allowed following only conservation of energy and linear momentum.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/703268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 11, "answer_id": 3 }
Which assumption of the no-hair theorem does the Einstein-Maxwell-Dilaton-Axion (EMDA) black hole violate? I know that generally a black hole should have no hair. For example, as proved by Bekenstein in this paper, we cannot couple a massive scalar/vector etc. to a black hole which is stationary. However, I am aware that there is a solution called the Einstein-Maxwell-Dilaton-Axion (EMDA) black hole, as introduced in this paper. I wonder which assumption of the "no hair" theorem this EMDA solution violates, so that it can end up being a black hole with scalar field hair.
Specifically the cited Bekenstein's 1972 no scalar hair theorem is evaded because it is for free scalar fields, while in EMDA theory dilaton and axion fields are coupled to Maxwell field and to each other. Moreover, while the EMDA black hole has parameters called dilaton and axion charges, they do not represent a true hair, since variation of these parameters results in variation of asymptotic values of dilaton and axion fields at infinity. Hence these are not true independent degrees of freedom of the black hole itself but more like properties of the world around it. This type of behavior is referred to as “secondary hair” (while electric charge for example, would be primary hair, independent degree of freedom of black hole). For more details about various no scalar hair theorems and various ways those theorems could be evaded see the paper: * *Herdeiro, C. A., & Radu, E. (2015). Asymptotically flat black holes with scalar hair: a review. International Journal of Modern Physics D, 24(09), 1542014, doi:10.1142/S0218271815420146, arXiv:1504.08209.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/703410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How does air cool as it nears the poles? I understand air is heated by the equator causing it to rise towards the poles. But why does air cool and sink after nearing the poles. Shouldn't the air still possess heat, after being heated by the equator? Could adiabatic expansion cause the cooling of this air as it journeys towards the poles, or is this heat radiated back into space? Could someone please explain deeply how this air cools, and how convection currents work?
Everywhere on Earth, at all times, the air is cooling by radiating to space. That lost heat is replenished by solar energy. There's more solar energy per square meter at the equator than the poles. "Hot air rises", so, very crudely speaking, the air rises at the equator, with the matching downward flow happening at the poles because that's where there is the least solar heating.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/703526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can light produce electric and magnetic fields when there are no accelerating charged particles? If we see light as a wave, especially in vacuum, there is nothing there, no particles, yet light has an electric and magnetic field. How can this be possible?
Something, such as accelerating charge particles, caused the light to propagate. The particles don't have to accompany the light. Consider an analogy - something creates a sound wave that you hear some distance away. You could ask how could sound produce compression and rarefaction of air when there is nothing pushing on it where you are?
{ "language": "en", "url": "https://physics.stackexchange.com/questions/703668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Does amplitude really go to infinity in resonance? I was recapping the forced oscillations, and something troubled me. The equation concerning forced oscillation is: $$ x=\frac{F_0}{m(\omega_0^2-\omega^2)}\cos(\omega t) $$ I don't understand why this equation predicts that the amplitude will approach infinity as $\omega$ approaches $\omega_0$. One can come up with the argument that in the actual world, there are damping forces, friction etc. The trouble is, however, even in the ideal world, the amplitude wouldn't approach infinity as the spring's restoring force will catch the driving force at some point, and the system will stay in equilibrium. What I'm wondering is * *Is my suggestion in the last paragraph correct? *If it is correct, what assumption led us to the erroneous model of $x$? *If it is not correct, what am I missing?
Your solution $$x(t)=\frac{F_0}{m(\omega_0^2-\omega^2)}\cos(\omega t) \tag{1}$$ was derived from the differential equation $$m(\ddot{x}+\omega_0^2x)=F_0\cos(\omega t) \tag{2}$$ So the forced oscillation (1) is indeed a mathematically correct solution of (2). But for the resonance case ($\omega=\omega_0$) the solution (1) becomes ill-defined, and you need to solve (2) in a mathematically more careful way, as done in @Sal's and @Puk's answers. But equation (2) is actually a mathematical idealization of the physical situation because it neglects damping. In reality there will always be a damping term ($\propto\gamma\dot{x}$) with a small positive $\gamma$. So instead of (2) you will have the differential equation $$m(\ddot{x}+\gamma\dot{x}+\omega_0^2x)=F_0\cos(\omega t) \tag{3}$$ You should try to solve this differential equation. Hint: Use the approach $$x(t)=A\cos(\omega t)+B\sin(\omega t) \tag{4}$$ and find the amplitudes $A$ and $B$ as functions of $\omega$. Then you will see that for the resonance case (at $\omega=\omega_0$, and also in the range $[\omega_0-\gamma, \omega_0+\gamma]$) the amplitude will be very large, but not infinitely large.
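To see the finite resonance peak numerically, here is a short sketch (my own, with arbitrary parameters) evaluating the steady-state amplitude that follows from the ansatz (4), $|x| = (F_0/m)\,/\sqrt{(\omega_0^2-\omega^2)^2+\gamma^2\omega^2}$:

```python
import numpy as np

m, F0, w0, gamma = 1.0, 1.0, 2.0, 0.1     # arbitrary illustrative values

def amplitude(w):
    """Steady-state amplitude of m(x'' + gamma x' + w0^2 x) = F0 cos(w t)."""
    return (F0 / m) / np.sqrt((w0**2 - w**2)**2 + (gamma * w)**2)

for w in (0.5 * w0, 0.9 * w0, 0.99 * w0, w0, 1.01 * w0, 1.1 * w0):
    print(f"w/w0 = {w/w0:5.2f}   amplitude = {amplitude(w):8.3f}")

# The peak at w = w0 is large but finite, of order F0/(m*gamma*w0):
print(f"F0/(m*gamma*w0) = {F0 / (m * gamma * w0):.3f}")
```

As $\gamma \to 0$ the peak value $F_0/(m\gamma\omega_0)$ grows without bound, which is how the idealized undamped formula (1) ends up predicting an infinite amplitude.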
{ "language": "en", "url": "https://physics.stackexchange.com/questions/704112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31", "answer_count": 9, "answer_id": 5 }
Why can I write $\frac{d}{dt}=\frac{d}{dt'}\frac{dt'}{dt}+\frac{d}{dx'}\frac{dx'}{dt}$? I’m dealing with a Lorentz invariance problem, and in one of the solutions I’ve seen to prove the wave equation the term above was used. However I don’t really understand why it can be written that way. Could someone provide an explanation?
It's the chain rule for partial derivatives under the change of variables $$ x= x(x',t')\\ t= t' $$ You need to be careful to specify what is being fixed in each derivative though, so it should be $$ \left(\frac{\partial}{\partial t}\right)_x = \left(\frac {\partial}{ \partial t'}\right)_{x'}\left(\frac{\partial t'}{\partial t}\right)_x+ \left(\frac{\partial}{\partial x'}\right)_{t'} \left(\frac {\partial x'}{\partial t}\right)_{x}, $$ where $$ \left(\frac{\partial t'}{\partial t}\right)_x=1 $$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/704278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Lagrangian first integral I want to extremize $$\int dt \frac{\sqrt{\dot x ^2 + \dot y ^2}}{y}.$$ I thought that, since the Lagrangian $L(y, \dot y, \dot x)$ depends on $t$ only implicitly, I could use the fact that $$L(z,z') \implies L - z' \partial L / \partial z' = c.$$ So $$L - y' \partial L / \partial y' = c_1,$$ $$L - x' \partial L / \partial x' = c_2$$ But these two equations, when we substitute the values and rearrange, give us $$dy/dx = c_3 \implies y = c_3 x +b.$$ This is certainly wrong; the answer is supposed to be a circle equation. Even though we can solve it another way, I am still confused: why did we get the wrong answer using the above two equations? If, for example, the Lagrangian were $\int dt \sqrt{\dot x ^2 + \dot y ^2}$, we could use the above approach to get the answer (in this case, a line is the right answer).
$\newcommand{\bl}[1]{\boldsymbol{#1}} \newcommand{\e}{\bl=} \newcommand{\p}{\bl+} \newcommand{\m}{\bl-} \newcommand{\mb}[1]{\mathbf {#1}} \newcommand{\mc}[1]{\mathcal {#1}} \newcommand{\mr}[1]{\mathrm {#1}} \newcommand{\gr}{\bl>} \newcommand{\les}{\bl<} \newcommand{\greq}{\bl\ge} \newcommand{\leseq}{\bl\le} \newcommand{\plr}[1]{\left(#1\right)} \newcommand{\blr}[1]{\left[#1\right]} \newcommand{\vlr}[1]{\left\vert#1\right\vert} \newcommand{\Vlr}[1]{\left\Vert#1\right\Vert} \newcommand{\lara}[1]{\left\langle#1\right\rangle} \newcommand{\lav}[1]{\left\langle#1\right|} \newcommand{\vra}[1]{\left|#1\right\rangle} \newcommand{\lavra}[2]{\left\langle#1\right|\left#2\right\rangle} \newcommand{\lavvra}[3]{\left\langle#1\right|#2\left|#3\right\rangle} \newcommand{\vp}{\vphantom{\dfrac{a}{b}}} \newcommand{\Vp}[1]{\vphantom{#1}} \newcommand{\hp}[1]{\hphantom{#1}} \newcommand{\x}{\bl\times} \newcommand{\ox}{\bl\otimes} \newcommand{\ol}[1]{\overline{#1}} \newcommand{\qqlraqq}{\qquad\bl{-\!\!\!-\!\!\!-\!\!\!\longrightarrow}\qquad} \newcommand{\qqLraqq}{\qquad\boldsymbol{\e\!\e\!\e\!\e\!\Longrightarrow}\qquad} \newcommand{\tl}[1]{\tag{#1}\label{#1}} \newcommand{\hebl}{\bl{=\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!=}}$ $\hebl$ The Beltrami Identity: If the Lagrangian $\:L\plr{y,y',x}\:$ of a system does not depend explicitly on $\:x$, that is \begin{equation} \dfrac{\partial L}{\partial x}\e 0 \tl{01} \end{equation} then from the Euler-Lagrange equation \begin{equation} \dfrac{\mr d}{\mr dx}\plr{\dfrac{\partial L}{\partial y'}}\m\dfrac{\partial L}{\partial y}\e 0 \tl{02} \end{equation} we have \begin{equation} \dfrac{\mr d}{\mr dx}\plr{y'\dfrac{\partial L}{\partial y'}\m L}\e 0 \tl{03} \end{equation} so \begin{equation} \boxed{\:\:y'\dfrac{\partial L}{\partial y'}\m L\e \texttt{constant}\:\:}\quad \texttt{(Beltrami Identity)} \tl{04} \end{equation} $\hebl$ For your Lagrangian \begin{equation} \begin{split} \frac{\sqrt{\dot x ^2 \p \dot y ^2}}{y}\mr dt & \e \frac{\sqrt{1\p\plr{\dfrac{\dot y}{\dot x}}^2}}{y}\dot x\,\mr dt\e\frac{\sqrt{1\p\plr{\dfrac{\mr dy/\mr dt}{\mr dx/\mr dt}}^2}}{y}\dfrac{\mr dx}{\mr dt}\,\mr dt\\ &\e\frac{\sqrt{1\p\plr{\dfrac{\mr dy}{\mr dx}}^2}}{y}\mr dx\e\frac{\sqrt{1\p y'^{2}}}{y}\mr dx\\ \end{split} \tl{05} \end{equation} that is \begin{equation} L\plr{y,y',x}\e\frac{\sqrt{1\p y'^{2}}}{y} \tl{06} \end{equation} Using the Lagrangian \eqref{06} we could find the $\:x\m$parametric representation $\:\blr{x,y\plr{x}}\:$ of the curve directly bypassing its $\:t\m$parametric representation $\:\blr{x\plr{t},y\plr{t}}$, that is the equations of the motion. $\hebl$ Hint for the Solution Insert the Lagrangian \eqref{06} in the Beltrami Identity \eqref{04} to find \begin{equation} f\plr{y,y'\e\dfrac{\mr dy}{\mr dx}}\e a\e \texttt{positive constant} \tl{H-01} \end{equation} Solve equation \eqref{H-01} with respect to $\:\mr dx$ to find \begin{equation} \mr dx\e g\plr{y}\mr dy \tl{H-02} \end{equation} In equation \eqref{H-02} make a proper convenient change from the variable $\:y\:$ to an angle variable $\:\theta\:$ \begin{equation} y\e h\plr{\theta} \tl{H-03} \end{equation} Convert equation \eqref{H-02} to something like that \begin{equation} \mr dx\e q\plr{\theta}\mr d\theta \tl{H-04} \end{equation} Integrate equation \eqref{H-04} to have \begin{equation} x\e u\plr{\theta} \tl{H-05} \end{equation} Equations \eqref{H-03} and \eqref{H-05} give a $\:\theta\m$parametric representation of the motion orbit.
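As a quick numerical spot-check of where the hints lead (my own sketch, not a substitute for working through the hinted steps (H-01)-(H-05)): the Beltrami combination $y'\,\partial L/\partial y' - L$ for $L=\sqrt{1+y'^2}/y$ is indeed constant along a semicircle centred on the $x$-axis, consistent with the circle answer mentioned in the question.

```python
import numpy as np

R, c = 2.0, 0.5    # radius and centre of a test semicircle y = sqrt(R^2 - (x - c)^2)

def beltrami(xv):
    y = np.sqrt(R**2 - (xv - c)**2)
    yp = -(xv - c) / y                          # dy/dx on the semicircle
    L = np.sqrt(1 + yp**2) / y                  # the Lagrangian L(y, y')
    dL_dyp = yp / (y * np.sqrt(1 + yp**2))      # partial L / partial y'
    return yp * dL_dyp - L                      # the Beltrami combination

xs = np.linspace(c - 0.9 * R, c + 0.9 * R, 7)
print(beltrami(xs))    # the same constant (-1/R) at every x, so the identity holds
```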
{ "language": "en", "url": "https://physics.stackexchange.com/questions/704460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Quantum and classical physics are reversible, yet quantum gates have to be reversible, whereas classical gates need not. Why? I've read in many books and articles that because Schrödinger's equation is reversible, quantum gates have to be reversible. OK. But, classical physics is reversible, yet classical gates in classical computers are not all reversible ! So the reversibility of Schrödinger's equation doesn't seem to be the right reason for quantum gates to be reversible. Or, maybe it is because quantum computers compute "quantumly", whereas classical computers do not compute "Newtonly" but mathematically. Or if I try to put it in another way : the classical equivalent of a "quantum computer" would be an "analog computer". But our computers are not analog computers. In an analog computer, the gates would have to be reversible. So in a way a quantum computer is an "analog quantum computer" But maybe I'm wrong Thanks
Note that classical computing can also be made reversible. Take for example the AND gate. As pointed out in one of the answers, if the result is 0 you are unable to decide which of the inputs is 0. However, if you copy the inputs to the output, i.e. the gate has three outputs - copies of the two inputs and the actual AND output - then the gate is perfectly reversible. In quantum computing, any operation (measurement and qubit reset being exceptions) is described by a unitary matrix (this comes from the Schrodinger equation). Since $UU^\dagger = U^\dagger U = I$ holds for any unitary, $U$ is invertible ($U^{-1} = U^\dagger$) and hence you have an operation which is naturally reversible.
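Both points are easy to check in a few lines (a small sketch of my own; the reversible AND below is essentially a Toffoli gate with the target bit initialised to 0):

```python
import numpy as np

# Irreversible classical AND: two different inputs map to the same output,
# so the input cannot be recovered from the output alone.
AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
print(AND[(0, 1)], AND[(1, 0)])   # both 0: the output alone cannot tell these inputs apart

# Reversible version: keep copies of the inputs alongside the result.
def reversible_and(a, b):
    return (a, b, a & b)

outputs = {reversible_and(a, b) for a in (0, 1) for b in (0, 1)}
print(len(outputs) == 4)   # True: distinct inputs give distinct outputs, so it is invertible

# Quantum gates are unitary and therefore automatically reversible, e.g. CNOT:
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
print(np.allclose(CNOT @ CNOT.conj().T, np.eye(4)))   # U U^dagger = I
print(np.allclose(CNOT @ CNOT, np.eye(4)))            # CNOT is even its own inverse
```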
{ "language": "en", "url": "https://physics.stackexchange.com/questions/704625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 7, "answer_id": 1 }
What does it mean for two variables to be canonically conjugate? The word "canonical" has been used in many of my classes (canonical ensemble, canonical transformations, canonically conjugate variables) and I am not really sure what it means physically. More specifically, in the context of the Hamiltonian formulation of mechanics, what does it mean physically for variables to be canonically conjugate? Why is it that because $\{x,p_x\}= 1$, they are canonically conjugate variables? What does the Poisson bracket value really mean? And why is it that canonically conjugate variables, when we go to quantum mechanics, have operators that do not commute, $[\hat{x},\hat{p_x}] = i \hbar$, which leads to uncertainty relations? There seems to be a deeper connection here that I do not want to skip over; please recommend me readings, as my search efforts have not led me anywhere.
In Hamiltonian dynamics, a change of variables $(q,p) \longrightarrow (Q,P)$ that leaves the form of the equations of motion unchanged (the symplectic structure is conserved) is called a canonical transformation. $$\begin{cases} \frac{dq}{dt}= \frac{\partial H(p,q)}{\partial p} \\ \frac{dp}{dt}=- \frac{\partial H(p,q)}{\partial q}\end{cases} \longrightarrow \begin{cases} \frac{dQ}{dt}= \frac{\partial H(P,Q)}{\partial P} \\ \frac{dP}{dt}=- \frac{\partial H(P,Q)}{\partial Q}\end{cases}$$ The variables (q,p) or (Q,P) used in the Hamiltonian are called conjugate variables. If they are related by a canonical transformation, they are said to be canonically conjugate. Conjugate variables have the interesting property that their Poisson bracket is one. Poisson brackets encapsulate all the information about the dynamics of the system. Consider a dynamical function f(q,p,t) that can be expanded in a power series in time: $$ f(q,p,t)= \sum_{n=0}^ \infty b(q,p)^{(n)} \frac{t^{n}}{n!} $$ where for simplicity we did not write the argument t=0. Let's define a new operator $[H]$ by $b(q,p)^{(1)}=\{b,H\}=[H]b$. Then, using Poisson brackets, we can evaluate the time derivatives in the above series. $$ b(q,p)^{(1)}=\{b,H\}=[H]b$$ $$b(q,p)^{(2)}=\{b^{(1)},H\}=\{\{b,H\},H\}=[H]^{2}b$$ $$ b(q,p)^{(n)}=\{b^{(n-1)},H\}=[H]^{(n)}b$$ The time evolution of f(q,p,t) is given by: $$ f(q,p,t)= \sum_{n=0}^ \infty [H]^{n} b(q,p) \frac{t^{n}}{n!}= e^{t[H]} b(q,p)$$ Of course, it is very formal, but mathematicians use the symplectic structure of Hamiltonian dynamics to derive theorems like the Liouville theorem.
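Since the question asks what the bracket value "really means", it may help to just compute a few brackets explicitly. A small symbolic sketch (my own, using sympy; the scaling and phase-space-rotation maps below are standard textbook examples of canonical transformations):

```python
import sympy as sp

q, p, lam, a = sp.symbols('q p lambda alpha', real=True)

def pb(f, g):
    """Poisson bracket {f, g} with respect to the conjugate pair (q, p)."""
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

print(pb(q, p))                       # 1  ->  q and p are canonically conjugate

# Canonical transformations preserve the bracket:
Q1, P1 = lam * q, p / lam                                           # scaling
Q2, P2 = q*sp.cos(a) + p*sp.sin(a), -q*sp.sin(a) + p*sp.cos(a)      # phase-space rotation
print(sp.simplify(pb(Q1, P1)))        # 1
print(sp.simplify(pb(Q2, P2)))        # 1

# A non-canonical change of variables does not:
print(sp.simplify(pb(q**2, p)))       # 2*q, not 1
```

The bracket being 1 is exactly the statement that the new pair $(Q,P)$ can be fed into Hamilton's equations with no change of form.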
{ "language": "en", "url": "https://physics.stackexchange.com/questions/704922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 2 }
Is Polarization Inconsistent with Classical Electricity? I had read that the Bohr-Van Leeuwen theorem shows paramagnetism to be impossible with only classical magnetism. It has been explained to me that magnetism is ultimately a quantum effect. Is there an analogous result with electricity? The motivation for this question comes from the idea that in certain situations and from a certain frame of reference you can "turn" a purely magnetic effect into a purely electrical effect. If this is true, then would the results of the Bohr-Van Leeuwen theorem apply to electricity as well by default?
The Bohr-van Leeuwen theorem states that a system of charged particles that obeys Boltzmann's probability distribution won't get magnetized by an external magnetic field, diamagnetically or paramagnetically. The Boltzmann distribution of momenta of charged particles does not allow for a preferred direction of electric currents on the surface of the body (or magnetic domain), because the Boltzmannian distribution of momenta in thermal equilibrium is isotropic, even in an external magnetic field. In a classical model of magnetism (Ampere's model), the magnetized state of macroscopic bodies is due to molecular currents, which have non-zero density on the surface of the body (magnetic domain). The Boltzmann distribution is inconsistent with this model. In other words, the Bohr-van Leeuwen result arises because the Boltzmannian assumption prevents the presence of a macroscopic electric current, which is necessary for a magnetized state. For a macroscopic electrically polarized state, no electric current is necessary, and the use of the Boltzmann distribution does not prevent an external electric field from polarizing the body. So the theorem is not relevant for electric polarization in an external electric field. the idea that in certain situations and from a certain frame of reference you can "turn" a purely magnetic effect into a purely electrical effect. This is not the case here. In discussing the electric polarization effects of an external electric field on a material body, there is a preferred frame - the frame of the body.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/705092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why is torque defined as $\vec{r} \times \vec F$? I cannot convince myself that this is just a matter of units, because saying that torque is defined in units of newton-metres is merely a reiteration of the definition stated above. Why is it not $r^2 \times F$ or $r^3 \times F$ or $r^2 \times F^2$ etc.? The argument "in our experience how much something rotates depends on the lever length and the force applied" is really insufficient. Can someone outline a more rigorous proof or motivation?
I personally prefer a derivation using the principle of virtual work, where the formula for torque comes out directly. While angular momentum is a natural property to consider for a spherically symmetric problem, this alternative approach shows its relevance for the statics of rigid bodies even when this symmetry is not present. Take a set of points indexed by $i$ at positions $\vec r_i$, on which are applied respectively the forces $\vec F_i$. This gives the first formula of virtual work for a general displacement: $$ \delta W = \sum \vec F_i \cdot \delta\vec r_i $$ Furthermore, let's assume the points are rigidly constrained and can only rotate around the origin. Any allowed differential displacement can thus be written as $\delta\vec r_i =\delta\vec \phi \times \vec r_i$ where $\delta\vec \phi$ is the differential angular displacement. Injecting this into the work you get: $$ \delta W = \sum \vec F_i \cdot (\delta\vec \phi \times \vec r_i) $$ $$ \delta W = (\sum \vec r_i \times\vec F_i ) \cdot \delta\vec \phi $$ Static equilibrium is equivalent to a vanishing virtual work for any relevant virtual displacement, hence $\sum \vec r_i \times\vec F_i $, the torque, naturally pops out. It also explains the useful power formula for rotation (with angular velocity $\vec \omega$): $$ P = (\sum \vec r_i \times\vec F_i ) \cdot \vec \omega $$ Hope this helps, and tell me if you find any mistakes.
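The step from $\sum \vec F_i \cdot (\delta\vec\phi \times \vec r_i)$ to $(\sum \vec r_i \times \vec F_i)\cdot\delta\vec\phi$ is just the scalar triple product identity $\vec a\cdot(\vec b\times\vec c) = \vec b\cdot(\vec c\times\vec a)$; here is a quick numerical check of that step (my own sketch, with random positions and forces):

```python
import numpy as np

rng = np.random.default_rng(3)

r = rng.normal(size=(5, 3))                 # positions r_i of the rigidly held points
F = rng.normal(size=(5, 3))                 # forces F_i applied to them
dphi = 1e-4 * np.array([0.2, -0.5, 1.0])    # small rotation vector about the origin

# Virtual work computed directly from the displacements dr_i = dphi x r_i ...
dW_direct = sum(np.dot(F[i], np.cross(dphi, r[i])) for i in range(len(r)))

# ... and from the total torque dotted with dphi
torque = np.cross(r, F).sum(axis=0)
dW_torque = np.dot(torque, dphi)

print(np.isclose(dW_direct, dW_torque))     # True (exact identity, up to rounding)
```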
{ "language": "en", "url": "https://physics.stackexchange.com/questions/705214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 1 }
Ehrenfest theorem initial conditions The Ehrenfest theorem states that the expectation value of the position $x$ obeys the following equation: $$m\frac{\mathrm{d^2}}{\mathrm{d}t^2}\langle x\rangle=\langle F(x)\rangle$$ But there I only need 2 initial conditions, whereas for the wavefunction I need infinitely many initial conditions. So where do all the initial conditions come from? I think that they come from the higher moments of the position operator, but for the harmonic oscillator there are also infinitely many initial conditions and the centered moments vanish.
So if I know the initial position and momentum of the expectation value I know how the system behaves. My question is now why I need more initial conditions in quantum mechanics and where they come from. That's just it, you really don't know how the system behaves! While, for the quantum oscillator, the first moment has the magically classical property $$ \frac{\mathrm d^2\langle x\rangle}{\mathrm dt^2}=-\omega^2\langle x\rangle, $$ specified by two initial conditions, you can check that for the great majority of quantum oscillator states (barring coherent states) this dramatically fails for the higher moments $\langle x^n\rangle$. That is to say that many, many states described by the oscillator TDSE do "interesting things". This is actually easiest to "see" in deformation quantization, where the coordinate-space probability density profile is gotten by integrating out the momentum dependence of the rigidly rotating Wigner function. That is, whereas the x-distribution's mean oscillates classically, the rest of it morphs and wiggles in a dramatic way around that semiclassical mean. For instance, look at the simplest quantum flip-flop, that is, integrate over the p-axis. There is much, much more in the ICs you need to specify in QM, and they all come from the astounding $\hbar$-dependence of the Schroedinger equation. Choosing to focus on the semi-classical first moment is almost throwing out the baby with the ... ahh, stop that runaway metaphor!
{ "language": "en", "url": "https://physics.stackexchange.com/questions/705341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is the speed of signal transport via electricity as fast as light? Let us assume a time synchronization system that comprises a sender and a receiver. The sender periodically generates and sends an encoded signal representing the current time to the receiver, and the receiver calibrates its clock according to this signal. Is the speed of signal transport via electricity as fast as light? If it is, does it mean that no matter which medium we use, copper or fiber, or even air (WiFi), the time lag between the sender and the receiver is theoretically identical (ignoring interference)?
The propagation rate of an electrical signal along a wire depends on how much capacitance and inductance the wire exhibits on a per-foot basis. These parameters vary according to the diameter of the wire, its construction (parallel vs. coaxial), the proximity of other wires, and the type and thickness of insulation it is coated with (if any). The resulting propagation speed will always be significantly less than that of light in a vacuum and hence will make, for example, a wire antenna's electrical length different from its physical length. Antenna designers must measure the propagation speed of signals in samples of the wires they use to account for this effect - or else their antennas will not resonate properly at the design frequency.
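To put a number on it, the signal speed on a transmission line is $v = 1/\sqrt{L'C'}$, with $L'$ and $C'$ the inductance and capacitance per unit length. A small sketch (my own; the per-metre values are illustrative, roughly what you would expect for a common 50-ohm coax, and will differ from cable to cable):

```python
import math

c = 299_792_458.0            # speed of light in vacuum, m/s

L_per_m = 250e-9             # inductance per metre, H/m (illustrative)
C_per_m = 100e-12            # capacitance per metre, F/m (illustrative)

v = 1.0 / math.sqrt(L_per_m * C_per_m)
print(f"propagation speed : {v:.3e} m/s")
print(f"velocity factor   : {v / c:.2f}")                     # ~0.6-0.7 for typical coax
print(f"characteristic Z0 : {math.sqrt(L_per_m / C_per_m):.0f} ohm")
```

So the answer to the question is: not quite - the time lag depends on the medium through its velocity factor, even though it is always of the order of the speed of light.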
{ "language": "en", "url": "https://physics.stackexchange.com/questions/705462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Two questions about the Higgs decay mode graph I have two questions about reading this graph, which shows the Higgs decay modes: I know the mass of the Higgs boson is measured to be around 125 GeV, which is the solid line on the graph, so I wonder why the mass on the x-axis can take values other than the measured result, and why we are interested in different Higgs masses? Another question is why there's no data for the Higgs decay into 4 leptons on the graph? Thanks for the help!
Based on the date (2013), I assume this figure was made before the Higgs was discovered or its mass was published. The curves are based on a theoretical calculation of the various branching ratios as a function of the Higgs mass within the Standard Model. If the branching ratios are also measured then this would be a way to measure the Higgs, or at least one can check the consistency of the measured Higgs mass and branching ratios, with the Standard Model prediction. There is not a direct coupling between the Higgs and 4 leptons in the Standard Model. I believe the decay process you have in mind is $H\rightarrow ZZ \rightarrow 4\ell$ (Higgs decays to two $Z$ bosons which decay to a set of 4 leptons, either electrons or muons). (As @joseph h mentions in the comments, there are also other decay modes that lead to 4 leptons, like $H\rightarrow WW \rightarrow 4\ell$ and $H\rightarrow t\bar{t}\rightarrow 4\ell$). The four leptons are stable enough to be directly detected. However, I believe this plot is showing direct decays of the Higgs into other particles, ignoring the future decays that will occur before the final products are seen in a detector. I can't be 100% sure about what message this plot was intended to send without additional context, though (like, what paper was it published in).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/705674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What frequency of cord shaking maintains the same vertical motion for a point on the cord after increasing the wave speed on the cord? I'm studying for my upcoming AP Physics 1 exam but can't figure out this problem A student shakes a horizontally-stretched cord, creating waves. The graph above shows the vertical position $y$ as a function of time $t$ for a point on the cord. The student then tightens the cord so that waves on it will travel faster than before. How should the student now shake the cord to make the graph of $y$ versus $t$ for the point look the same as above? (A) With fewer shakes per second than before (B) With the same number of shakes per second as before (C) With more shakes per second than before (D) The answer cannot be determined without knowing the wavelength of the waves. My intuition would tell me that increasing the speed of the waves would cause the point to oscillate at a faster rate vertically, thus fewer shakes per second than before are needed to maintain the same frequency for the particle's oscillation. However, the correct answer is B, so I really need a thorough explanation as to why the answer is B.
Since this is a $y$ vs. $t$ graph, the frequency of the wave can easily be picked out as the (inverse) time between matching parts on the wave (e.g. peak to peak, $0$ to $0$, trough to trough, etc.). In this case our analysis does not depend on the wave speed, which relates to the rate at which the wave travels through space. If we want the same $y$ vs. $t$ graph, then we need the same frequency.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/705865", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 2 }
A mysterious phase difference Today my teacher was discussing the Poisson spot and gave a simple explanation for why there must be a bright spot on the axis of the disc when it is illuminated with parallel monochromatic light. What he said was: Say we instead have a circular aperture in an infinite plane; we know there must be a bright spot at the centre/axis (the Airy disk pattern). Now, instead of making a circular hole in the plane, we remove the plane itself to get an opaque disk. So we have 2 systems, one of an opaque disk and one of a circular aperture of the same dimensions in an infinite plane. If we superimpose both these systems, the light ceases to exist, as there is no longer an opening. And since only bright can cancel bright, the center of our first system (opaque disk) must be brightly illuminated to cancel the Airy disk pattern. Now my issue is, how did a phase difference of $\pi$ come about between the two systems to induce destructive interference? Is it just because of complementarity or is there something fundamental going on? There is a path difference between them for sure, but how are we certain that it is $(n+\frac{1}{2}) \lambda$? Also, how are the intensities the same?
Your teacher was referring to Babinet's principle. It is often a good idea to fix your ideas with an actual computation. You have an incident light field $\phi_i$ on the plane. As it crosses the plane, it either gets multiplied by $h_a=1_{[r\leq R]}$ in the case of the aperture of radius $R$ (origin at the center of the aperture), or multiplied by $h_d=1-h_a$ in the case of the disk, due to the complementary nature. The resulting diffraction pattern you are interested in is mathematically described by Fraunhofer diffraction, which amounts to a Fourier transform, which is crucially linear. This means that your final field will be $\phi_f=\mathcal F (h\phi_i)$, so you see that for the disk: $$ \phi_f=\mathcal F (\phi_i)-\mathcal F (h_a\phi_i) $$ The first term is what you would get without the plane, the overall forward beam, while the second term is the field diffracted by the aperture with an opposing phase, which came from the complementarity. For example, in the case of a monochromatic plane wave at normal incidence, $\phi_i$ is constant, the first term would be a Dirac peak at the origin and the second term your usual Airy diffraction pattern. Hope this helps and tell me if you need more details.
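Here is a 1D toy version of that computation (my own sketch; a slit and its complementary opaque strip instead of a circular aperture and disk, with the Fraunhofer pattern taken as a plain FFT):

```python
import numpy as np

N = 4096
x = np.linspace(-50, 50, N)                  # coordinate across the screen (arbitrary units)
phi_i = np.ones(N)                           # incident plane wave of unit amplitude

h_a = (np.abs(x) <= 1.0).astype(float)       # slit (aperture) transmission
h_d = 1.0 - h_a                              # complementary opaque strip

def far_field(h):
    """Fraunhofer pattern ~ Fourier transform of the field just after the screen."""
    return np.fft.fftshift(np.fft.fft(h * phi_i))

E_a = far_field(h_a)
E_d = far_field(h_d)
E_free = far_field(np.ones(N))               # no screen at all: everything in the axial peak

# Babinet: E_d = E_free - E_a exactly, by linearity of the Fourier transform
print(np.allclose(E_d, E_free - E_a))                    # True

# Off axis the free-space term is negligible, so the two intensity patterns agree
# there: the fields differ only by the pi phase and the axial beam.
k = np.arange(N) - N // 2
off_axis = np.abs(k) > 5
print(np.allclose(np.abs(E_d[off_axis])**2, np.abs(E_a[off_axis])**2))   # True
```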
{ "language": "en", "url": "https://physics.stackexchange.com/questions/706047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Speed of light and the equivalence principle In 1913 Albert Einstein wrote: "I arrived at the result that the speed of light is not to to be regarded as independent of the gravitational potential. Thus the principle of the constancy of the velocity of light is incompatible with the equivalence hypothesis." Is the first statement - referring to time dilation in a gravitational field - still considered valid in this form among physicists today? What did Einstein mean by the second statement? Is he talking about the weak or the strong equivalence principle?
In 1913 Einstein was still working on general relativity and it was not complete. Furthermore, it would be decades before the community, including Einstein, really began to understand the important concepts of spacetime geometry. This quote is very early and is not really correct by modern understanding, with a century of hindsight. The speed of light in an inertial frame is c, and the speed in a non-inertial frame need not be c. Those facts hold equally well both in the flat spacetime of SR as well as the curved spacetime of GR. The only difference is that in curved spacetime inertial frames are only local, which is the heart of the equivalence principle.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/706366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Confusion with the variational operator $\delta$ and finding variations I have recently started studying String Theory and this notion of variations has come up. Suppose that we have a Lagrangian $L$ such that the action of this Lagrangian is just $$S=\int dt L.$$ The variation of our action $\delta S$ is just $$\delta S=\int dt \delta L.$$ I have read on other posts that the variation is defined to be $$\delta f=\sum_i \frac{\partial f}{\partial x^i}\delta x^i,$$ which seems like an easy enough definition. Having this definition in mind, I proceeded to come across an action $$S=\int dt \frac12m \dot X^2-V(X(t))$$ which implies our Lagrangian is $L(t,X, \dot X)$, which makes our first variation follow as $$\delta S=m \int dt\frac12(2 \dot X)\delta \dot X-\int dt \frac{\partial V}{\partial X} \delta X$$ $$=-\int dt \left(m \ddot X+\frac{\partial V}{\partial X}\right)\delta X.$$ My question is, did that variation of the action follow the definition listed above? That is $$\delta S=\int dt\frac{\partial L}{\partial t} \delta t+\frac{\partial L}{\partial X} \delta X+\frac{\partial L}{\partial \dot X}\delta \dot X,$$ where the $$\frac{\partial L}{\partial t} \delta t$$ term vanishes because there is no explicit $t$ dependence.
Yes, that happened. I guess you meant $$ \delta f = \sum_i \frac{\partial f}{\partial x_i} \delta x_i $$ in your third equation. Also, you've implicitly fixed the initial time $t_0$ and final time $t_1$, so that your action integral really is $$ S = \int_{t_0}^{t_1} dt L $$ and therefore, since the limits are fixed, variation "commutes" with integration: $$\delta \int_{t_0}^{t_1} dt L = \int_{t_0}^{t_1} dt \delta L $$ (you can check out some variational problems where the endpoints are not fixed and some extra terms are needed - see Elsgolc). Also related: Is the principle of least action a boundary value or initial condition problem?
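If it is useful, the Euler-Lagrange equation implied by that variation can be checked symbolically (a small sketch of my own, using sympy):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t, m = sp.symbols('t m', positive=True)
X = sp.Function('X')(t)
V = sp.Function('V')

L = sp.Rational(1, 2) * m * sp.diff(X, t)**2 - V(X)     # the Lagrangian in the question

# delta S = 0 after the integration by parts gives the Euler-Lagrange equation
eq = euler_equations(L, [X], [t])[0]
print(eq)
# The printed equation is equivalent to  m*X'' + dV/dX = 0,
# i.e. exactly the combination multiplying delta X in the variation above.
```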
{ "language": "en", "url": "https://physics.stackexchange.com/questions/706688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Am I in a superposition? Someone looks at me. Now, they know my position and my momentum, with some uncertainty. Therefore, they haven't measured either my position or my momentum, since neither is known perfectly. They measured some other observable $O$, and found me in some eigenstate of $O$: $|\psi\rangle$. $|\psi\rangle$ is such that, if I express it in the position basis, I get a (very narrow) superposition of eigenstates, centered at some location. The same with the momentum basis. This means that right now, I'm in a superposition of position eigenstates. This means that right now, I don't have a well-defined position. Is this correct? (If it is, it means I have no such thing as a defined position. I'm nowhere?! I also don't have a momentum. In fact, unless my momentum becomes completely undefined, I can't ever be anywhere...)
TL;DR Your position is not well-defined... but for more mundane reasons. Quantum mechanical view You are not in a pure state - that is, you are not an object that can be described by a wave function, but rather a collection of zillions of particles in a state of thermal equilibrium. That is, you can be described by a density matrix, with rather well-defined probabilities of finding you in a certain place. Classical statistics matters In practice, ascribing a position to an extended dynamic object is by itself an endeavor plagued with uncertainties, and measuring it involves quite a bit of statistical error, which is more important (under everyday living conditions) than any quantum uncertainty involved.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/707721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32", "answer_count": 5, "answer_id": 2 }
Applications of Signal and System theory I recently heard a lecture about Signals and Systems and find the subject extremely exciting. I would like to do more in this direction, so I would be interested to know in which modern research area of physics one needs a lot of Signal and System theory, since this is not clear to me from the lecture.
It's very important in instrumentation and data analysis. This paper used a matched filter method that Bill Wheaton and I came up with to dig a spectrum out of some rather crappy data. This caused a colleague to accuse us of witchcraft (ツ), but she was able to confirm and extend our result using better data and more traditional methods. Similar ideas are behind the extreme sensitivity of the CCD detectors used in astronomy. You may also find control theory useful if you need to control something like the temperature of your sensor or the direction your spacecraft is pointing.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/708123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What happens when the universe runs out of fuel? After some X billion years, one would think the stars in the entire universe will run out of hydrogen. What would happen next? Is there any way to get hydrogen out of heavy metals (extreme fission)? Just curious.
Then star formation ceases and the universe goes dark. At this stage of the universe's evolution, there'll still be plenty of hydrogen; it just doesn't form stars. In theory you can create hydrogen out of heavy metals, but it's a process that requires energy. If you have the energy banked somewhere (and you'll need a LOT of energy to make enough hydrogen for a new star), then it's possible.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/708256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 2 }
Using Wick's Theorem in an example with the harmonic oscillator I understand Wick's theorem to be, $$T(x)=\mathcal{N}(x)=\sum:\textbf{all contractions}:$$ And I'm researching combinatorics and quantum theory in general. How would one connect Wicks theorem to the quantum HO, an example would be appreciated.
I'm not sure what you mean by this notation. I think most readers will be more comfortable with $$T\{\phi(x_1)\phi(x_2)...\phi(x_m)\}=N\{\phi(x_1)\phi(x_2)...\phi(x_m) + \text{all possible contractions}\}.$$ For general operators $\hat{A}, \hat{B}, \hat{C},...,\hat{Z}$, you could write $$T\{\hat{A}\hat{B}\hat{C}...\hat{Z}\}=N\{\hat{A}\hat{B}\hat{C}...\hat{Z} \text{ }+ \text{all possible contractions of} \text{ } \hat{A}\hat{B}\hat{C}...\hat{Z}\}.$$ The notation $N\{\}$ stands for normal ordering and it is the same as ": :" - you may use it if you like it. I have never heard of Wick's theorem in relation with the quantum harmonic oscillator. Usually, Wick's theorem is used for evaluating $S$-matrix elements, in which vacuum expectation values of time-ordered strings of operators appear: $$\langle 0|T\{\hat{A}\hat{B}\hat{C}...\hat{Z}\}|0 \rangle.$$ Using Wick's theorem, it becomes much easier to calculate such objects. I will let you find out why!
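Since the question mentions combinatorics and the harmonic oscillator: one concrete bridge (my own example, not part of the answer above) is that for a Gaussian variable - and the harmonic oscillator ground state has a Gaussian position distribution - every even moment reduces to the two-point function times the number of pairwise contractions, $(2n-1)!!$, which is exactly the counting that Wick's theorem formalises:

```python
import numpy as np
from math import factorial

def pairings(n):
    """(2n-1)!! = number of ways to pair up 2n objects = number of full contractions."""
    return factorial(2 * n) // (2**n * factorial(n))

rng = np.random.default_rng(7)
sigma = 1.3                               # width of the Gaussian (arbitrary)
x = rng.normal(0.0, sigma, size=2_000_000)

for n in (1, 2, 3, 4):
    wick = pairings(n) * sigma**(2 * n)   # <x^(2n)> = (2n-1)!! <x^2>^n
    mc = np.mean(x**(2 * n))
    print(f"<x^{2*n}>:  Wick = {wick:10.3f}   Monte Carlo = {mc:10.3f}")
```

The two columns agree to within Monte-Carlo error, which is the combinatorial content of Wick's theorem in its simplest setting.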
{ "language": "en", "url": "https://physics.stackexchange.com/questions/708475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How does a combination of lenses create a sharper image? There's a line in a book which states that a combination of lenses helps create a sharper image, but I don't understand how. Does more magnification mean a sharper image?
It's hard to answer without knowing the context of the statement. But generally, multiple lenses can reduce aberrations. Real lenses aren't perfect, and images suffer because of that. Rays originating at a single point hit the lens at different places and at different angles and they do not converge on a single point. The image of the point is blurred. It's impossible to design a single lens that does not suffer from these aberrations, although some of them can be greatly reduced by figuring the surface in a profile other than spherical. Additional lens elements can correct for these defects to a degree, often at the expense of something such as brightness of the image, size of the lens, weight of the lens, or larger-than-desired depth of field. The resulting compound lens will produce much sharper images. Photographic and cinemagraphic lenses are developed with much effort in design, with high-quality and carefully selected glass types, and with very tight manufacturing tolerances. So they are expensive. The Leica Noctilux lens, which mates to an "ordinary" (in the sense that you wear it around your neck with a strap while on vacation. That's the only sense in which a Leica camera is ordinary) photographic camera is expensive. Magnification generally makes things worse, not better.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/708598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Is it possible for black holes to be pure deformations in the fabric of spacetime and not an effect of super-dense matter? Is there any theory in the literature that supports the hypothesis that BHs do not have a super-dense matter singularity at their center but are pure deformations in the fabric of spacetime itself or of vacuum space, possibly caused by a supernova or other violent event, or that maybe preexisted as features or defects of spacetime or vacuum space long before any matter creation in the Universe?
Your explanation in your comment - "My definition is the absence of spacetime or vacuum space inside the event horizon" - will not work. If this were correct then the whole event horizon would be a single point in spacetime, i.e. it would in effect have zero radius. In this case photons (which follow geodesics in spacetime) would be reflected from the event horizon, and to an outside observer the event horizon would show some distorted image of the black hole's surroundings. But we know from the EHT images that the event horizon of a black hole is (as expected) black.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/708739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
AdS-CFT correspondence from 1D to 4D From what I understand the AdS-CFT correspondence states that the bulk dynamics of an $n$-dimensional gravitational theory are encoded in the degrees of freedom of its dual CFT on the $(n-1)$-dimensional boundary. The question is the following: Suppose we start with a $1D$ CFT theory. This will describe the dynamics of a dual gravitational theory in $2D$. Suppose also that the gravitational theory is conformally invariant. Now, knowing the dynamics of the 2D theory we could find the dynamics of the $3D$ one, and so on. Is this possible? And if not, what am I missing? Also an extra question: Suppose we have an $AdS_{1}$ gravitational theory. It seems that the correspondence saturates since we can't define a $0$-dimensional dual CFT?
A $d$-dimensional conformal field theory with the right properties is `holographic', meaning that it's dual to a $(d+1)$-dimensional gravitational theory in AdS. But that $(d+1)$-dimensional theory cannot itself be holographic in the same sense, for a few reasons: * *It has gravity! The original $d$-dimensional theory is a conventional QFT defined on a fixed spacetime, not on a dynamical spacetime as gravity demands. *Conformal invariance means (roughly speaking) that your theory has no intrinsic length scale. But the gravitational theory has several: the curvature length of AdS and the Planck length, for example. You could imagine something slightly different, which is a gravitational theory with conformally invariant, holographic matter. This has been considered recently in this paper, for example. Sometimes it's called a Karch-Randall model. On the second question: there is not even AdS$_2$/CFT$_1$ in the same sense as higher dimensions, since a one-dimensional theory (quantum mechanics) can't be conformally invariant in the sense you'd need.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/708864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Introduction to magnetohydrodynamics Does anybody have any reference books for an introduction to magnetohydrodynamics? I want to dive into this topic and I don't know of any good reference.
Goedbloed and Poedts' Principles of Magnetohydrodynamics (Amazon link) is a good primer on MHD.1 If I recall correctly, the book is primarily aimed at the physics of tokamaks, but many of the principles found therein can be applied to other aspects of MHD (e.g., astrophysics). It has been some time since I last looked at it (having sold my copy after leaving academia many years ago), but I recall it requiring mostly just a vector calculus background for much of it. 1. I have even referenced this book for answers here a couple of times
{ "language": "en", "url": "https://physics.stackexchange.com/questions/708969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Angular Momentum and Coefficient of Restitution If there was a situation where you had two rods pinned in the center and the left rod having an initial angular velocity $\omega_1$, and the right rod was at rest. I am wondering what the final angular velocities would be if there was a coefficient of restitution during the impact between the rods, $e = 0.8$, i.e. can you apply the traditional coefficient of restitution relation for angular velocities? Does $$e = \frac{\omega_2' - \omega_1'}{\omega_2 - \omega_1}$$ still apply? Then you could use the conservation of the total angular momentum equation to find the angular velocity of each rod where $$ H_1 + H_2 = H_1' + H_2' $$ $$ I_1\omega_1 = I_1\omega_1' + I_2\omega_2' $$
No, the coefficient of restitution does not apply directly to the rotational velocities. In fact, conservation of momentum will not apply either, because the bodies are pinned and can transfer momentum to, or receive momentum from, the earth as needed. In addition, the contact between the bodies is still made through a force applied over time, resulting in an impulse exchanged between the bodies. As a result the coefficient of restitution applies to the relative velocity between the bodies at the point of contact $$ v_{\rm rel}^{\rm after} = -\epsilon\; v_{\rm rel}^{\rm before} \tag{1}$$ If each rod has distance $r_1$ and $r_2$ between its pivot and the contact point, and by convention positive rotation is counterclockwise, the relative speed works out to $$ v_{\rm rel} = \omega_1 r_1 + \omega_2 r_2 $$ For the relative velocity to be zero (as if they were gears) they would need to move in opposite senses such that $\omega_1 r_1 = - \omega_2 r_2$. At the moment of impact, the center of mass of each body does not move (due to the pivot), and as a result an equal and opposite reaction impulse must act through each pivot on each body to cancel out the contact impulse (the original answer illustrated this with a diagram for each rod, not reproduced here). These pairs of impulses impart a moment of impulse on each rod, changing its rotational speed. According to the above conventions, each impulse pair acts in a clockwise (negative) sense on both bodies. $$ \begin{aligned} I_1 \omega_1 ^{\rm after} & = I_1 \omega_1^{\rm before} - r_1 J \\ I_2 \omega_2 ^{\rm after} & = I_2 \omega_2^{\rm before} - r_2 J \\ \end{aligned} \tag{2} $$ The before and after relative velocities are $$ \begin{aligned} v_{\rm rel}^{\rm before} & = \omega_1^{\rm before} r_1 + \omega_2^{\rm before} r_2 \\ v_{\rm rel}^{\rm after} & = \omega_1^{\rm after} r_1 + \omega_2^{\rm after} r_2 \end{aligned} \tag{3} $$ Use the after rotational velocities from (2) in (3) and then plug the relative velocities into (1) to get an equation in terms of the impulse $J$: $$ J = (1+\epsilon) \frac{ v_{\rm rel}^{\rm before} }{ \frac{r_1^2}{I_1} + \frac{r_2^2}{I_2} } \tag{4} $$ Then back-substitute into the expressions for the rotational speeds afterwards to get $$ \begin{aligned}\omega_{1}^{{\rm after}}=\left(\frac{\frac{r_{2}^{2}}{I_{2}}-\epsilon\frac{r_{1}^{2}}{I_{1}}}{\frac{r_{1}^{2}}{I_{1}}+\frac{r_{2}^{2}}{I_{2}}}\right)\omega_{1}^{{\rm before}}+\left(-\frac{(1+\epsilon)\frac{r_{1}r_{2}}{I_{1}}}{\frac{r_{1}^{2}}{I_{1}}+\frac{r_{2}^{2}}{I_{2}}}\right)\omega_{2}^{{\rm before}}\\ \omega_{2}^{{\rm after}}=\left(-\frac{(1+\epsilon)\frac{r_{1}r_{2}}{I_{2}}}{\frac{r_{1}^{2}}{I_{1}}+\frac{r_{2}^{2}}{I_{2}}}\right)\omega_{1}^{{\rm before}}+\left(\frac{\frac{r_{1}^{2}}{I_{1}}-\epsilon\frac{r_{2}^{2}}{I_{2}}}{\frac{r_{1}^{2}}{I_{1}}+\frac{r_{2}^{2}}{I_{2}}}\right)\omega_{2}^{{\rm before}} \end{aligned} \tag{5} $$ As you can see from the above, the relationship between the before and after rotational speeds is rather complex in the general case. 
Consider the simplified scenario of both rods having the same length and mass to arrive at $$ \begin{aligned}\omega_{1}^{{\rm after}}=\left(\frac{1-\epsilon}{2}\right)\omega_{1}^{{\rm before}}+\left(-\frac{1+\epsilon}{2}\right)\omega_{2}^{{\rm before}}\\ \omega_{2}^{{\rm after}}=\left(-\frac{1+\epsilon}{2}\right)\omega_{1}^{{\rm before}}+\left(\frac{1-\epsilon}{2}\right)\omega_{2}^{{\rm before}} \end{aligned} $$ For such a case the ratio $\lambda$ you are asking about is $$ \lambda = \frac{ \omega_{2}^{\rm after}-\omega_{1}^{\rm after}}{\omega_{2}^{\rm before} -\omega_{1}^{\rm before}} \equiv 1 $$ which is not equal to the COR.
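A minimal numerical sketch of the above (my own addition, following Eqs. (2), (4) and (5); the function and variable names are mine, and I assume the rods of the question are pinned at their centres so that $I = mL^2/12$ and $r = L/2$):

```python
# Post-impact angular velocities of two pinned rods, given the contact impulse J
# obtained from the restitution condition (Eqs. (2) and (4) above).
def post_impact(w1, w2, I1, I2, r1, r2, eps):
    """w1, w2: pre-impact angular velocities; I1, I2: moments of inertia about
    the pivots; r1, r2: pivot-to-contact distances; eps: coefficient of restitution."""
    v_rel_before = w1 * r1 + w2 * r2                              # contact-point relative speed
    J = (1 + eps) * v_rel_before / (r1**2 / I1 + r2**2 / I2)      # Eq. (4)
    w1_after = w1 - r1 * J / I1                                   # Eq. (2)
    w2_after = w2 - r2 * J / I2
    return w1_after, w2_after

# Equal rods of mass m and length L, pinned at the centre: I = m L^2 / 12, r = L / 2
m, L, eps = 1.0, 1.0, 0.8
I, r = m * L**2 / 12, L / 2
w1a, w2a = post_impact(1.0, 0.0, I, I, r, r, eps)
print(w1a, w2a)                          # 0.1 and -0.9, matching the simplified formulas
print((w2a - w1a) / (0.0 - 1.0))         # the ratio lambda is 1, not the COR
```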
{ "language": "en", "url": "https://physics.stackexchange.com/questions/709155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Are curvilinear coordinates inertial? At 1:46:34 of this lecture by Frederic Schuller, Inertial coordinates are defined as ones which satisfy the following equation: I am confused by the above equation because it would imply any curvilinear coordinate eg: polar is not inertial. I thought 'inertial' meant the frame is at rest w.r.t some absolute space/ absolute time. Could someone explain how to make this consistent with previous knowledge?
Reference frames can be inertial or non-inertial. Coordinate systems are not reference frames unless a frame is somehow being tied to the coordinate system. Does the book (or lecture) explain how the frame is attached? If not, it is making a non-standard definition.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/709506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Does gravitation really exist at the particle level? As I understand, we usually talk about gravity at a macro scale, with "objects" and their "centre(s) of mass". However, since gravity is a property of mass generally (at least under the classical interpretation), it should therefore apply to individual mass-carrying particles as well. Has this ever been shown experimentally? For example, isolating two particles in some manner and then observing an attraction between them not explained by other forces. To pose the question another way, let's say I have a hypothesis that gravitation is only an emergent property of larger systems, and our known equations only apply to systems above some lower bound in size. Is there any experiment that reasonably disproves this hypothesis?
FWIW, small particles react to the big ones: experiments have been done with neutrons in a gravity field. The phase of their wavefunction was shifted, as was shown by interference. If the neutron didn't have its own gravity field, would it react? Would an electron without charge accelerate in an electric field? Food for thought.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/709780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 5, "answer_id": 2 }
Confusion in showing EM waves exist from Maxwell's equations When deriving the mathematical description of an EM wave, we set the current density and charge to zero in Maxwell's equations. However, this condition is not exactly true anywhere on earth. Yet, we are able to apply EM waves to problems in communication, medicine etc. How is it possible that, even though the sources of the fields are ignored, the calculated fields nevertheless describe reality properly?
First of all, a general solution of a system of inhomogeneous linear equations (such as the Maxwell equations with sources) can always be decomposed into a particular solution of the inhomogeneous equations and the general solution of the homogeneous ones (i.e., the Maxwell equations without sources, of which EM waves are solutions). Secondly, one needs to include the sources (i.e., the currents and charges) to describe the generation/absorption of the EM waves, but not their propagation. This is widely studied, but it is more complicated (mathematically) than studying the free EM waves, which is why one usually starts with the latter.
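As a concrete, schematic illustration (my own addition): in Lorenz gauge the potentials obey $\Box A^\mu = \mu_0 J^\mu$. The retarded potentials built from the actual currents are a particular solution carrying all the source information, while any source-free plane wave $A^\mu \propto \varepsilon^\mu e^{ik\cdot x}$ with $k^2=0$ solves the homogeneous equation and can be added on top; that freely propagating piece is the EM wave, whose generation and absorption are then described by the source terms.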
{ "language": "en", "url": "https://physics.stackexchange.com/questions/709950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What is the instant velocity? The velocity is the rate of change of the position, correct? So does it make sense to talk about velocity without time?
Let's say your position is given by a function $x : I \to \mathbb{R}^3$, where $I \subset \mathbb{R}$ is an interval of time. Then the velocity is defined by $$x'(t) = \lim_{h \to 0}\frac{x(t+h) - x(t)}{h}.$$ If $I$ is just a single point, this limit does not really make sense, because there is only one $t$ we are allowed to put into $x$, we can't approach it using a limit. So, no, velocities do not make sense without time. (This doesn't mean that instantaneous velocities don't make sense. You just need time to exist in a neighbourhood around them for them to work.)
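A small numerical illustration of that limit (my own addition; the trajectory $x(t)=3t^2$ is just an arbitrary example):

```python
# The difference quotient (x(t+h) - x(t))/h approaches the instantaneous
# velocity x'(t) as h -> 0; here x(t) = 3 t^2, so x'(2) = 12.
def x(t):
    return 3.0 * t**2

t = 2.0
for h in (1.0, 0.1, 0.01, 0.001):
    print(h, (x(t + h) - x(t)) / h)   # 15.0, 12.3, 12.03, 12.003 -> tends to 12.0
```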
{ "language": "en", "url": "https://physics.stackexchange.com/questions/710296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 7, "answer_id": 4 }
What does the specific heat capacity of a material depend upon? I have read in many places that specific heat capacity is a property of the material, but I haven't really understood what it depends upon, as in what factors affect that specific heat. I have thought of different things but none of them are consistent; in particular, the argument based on counting degrees of freedom in each state fails to convince me, as the specific heats of liquids are higher than those of solids, while those of gases are much less than those of solids. I would certainly appreciate it if someone could explain where this specific heat comes from and what the "intrinsic factors" are that affect the specific heat capacity of solids, liquids and gases.
This is a complicated business which I will simplify a bit. Heating an object causes its constituent atoms to randomly vibrate more vigorously. In so doing, the atoms are continuously exchanging kinetic energy back and forth between each other and so the bonds that connect those atoms are continually being exercised as well. So at any given instant in time, the total vibrational energy in the system is shared between the kinetic energy of the vibrating atoms and the potential energy of the distorted bonds between them. This means that the system's capacity for absorbing heat energy per unit mass will depend on the atomic mass and on the exact nature of the bonds that hold the mass together. Those bonds are chemical in nature and can be ionic, covalent or metallic in a solid, and in a liquid they will be completely different, with the possibility of things like van der Waals forces coming into the picture. Then in a gas, all of those considerations disappear and instead the gas atoms bounce off each other like billiard balls because those chemical bonds, whatever they might be, are absent or nearly so. Note also that in a solid held together by metallic bonds, the outermost electrons are delocalized and hence capable of moving about through the atomic lattice as if they themselves were a sort of gas. This means that in a metal, the conduction electrons will make a significant contribution to the heat capacity, which you will not see in a solid that is held together by covalent bonds, where the bonding electrons are immobilized.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/710514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Do-It-Yourself physics experiment Is there any simple experiment in physics of the first half of the 20th century I could do at home? I tried to make a cloud chamber, but it didn't work at all... A spectrometer from a CD disk is already being implemented. I also thought about the double-slit experiment, but it was set up well before the given time frame (1900-1950). Requirements: the experiment should be conducted using readily available (not very expensive) materials; the experiment should somehow be related to the physics in the first half of the 20th century.
The first cyclotron was a tabletop device. Then again, a cyclotron requires a source of ions, I don't know whether that is doable as a home project. Also, I don't know how high of a vacuum is required. The difficulty, of course, is that the experiments that have gone down in history are the ones that in their time were cutting edge. The mechanics will be high precision and the electronics will be high precision. I suspect the level of precision that was achievable in a cutting edge university laboratory in the 1930's is still quite hard to achieve in a home workshop, if at all.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/710625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
The electron's charge/mass ratio In my physics laboratory class, we were discussing the following results after conducting experiments to determine the charge-mass ratio, $e/m$, of the electron. In the experiment, electrons were accelerated and executed circular motion perpendicular to a homogeneous magnetic field which was produced by a pair of Helmholtz coils. The voltage, radius, and strength of the magnetic field could be used to determine the ratio $e/m$. Many of us obtained a positive result as the outcome for this ratio, whilst a few other people returned almost exactly the same value, but negative. Their reasoning was that the ratio being negative makes sense since we're dividing the negative charge by a positive mass; hence a negative result. This sounded right at first; however, isn't it about the magnitude of the charge, which you have to divide by the mass (hence dividing a positive value by a positive mass, resulting in a positive outcome)? I'm trying to understand what's the correct way to view these outcomes. Should this ratio be positive or negative, and why? Is one of us correct in their reasoning? Replies are much appreciated!
It is most definitely the magnitude of the charge that matters for the dynamics; reversing the sign of the charge merely produces a mirror-image circle. Since in your experiment you are measuring inherently sign-less, directionless parameters like radius, time, magnetic field strength, speed, etc., you would get a positive value for $e/m$ out of the experiment. However, when you report the value of $e/m$, you must report it with a sign, i.e. negative. So the ratio itself should be negative, but because the experiment only deals with directionless quantities, the experimental output is a positive number; when someone asks for the specific charge, and not its magnitude, you report it with the sign.
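For reference (my addition, assuming the usual setup in which the electrons are accelerated from rest through a potential difference $V$ before entering the field): combining $eV = \tfrac{1}{2}mv^2$ with $r = mv/(eB)$ gives $$\frac{e}{m} = \frac{2V}{B^2 r^2},$$ which is built entirely from magnitudes and is therefore positive; the minus sign has to be supplied by hand when quoting the electron's specific charge.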
{ "language": "en", "url": "https://physics.stackexchange.com/questions/710890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there a mixed field-particle representation of QED? In QFT, we can write the amplitude for a field to be in configuration $A_{in}(x)$ at time $0$ and end in configuration $A_{out}(x)$ as $K_t[A_{in},A_{out}]$, for example. Alternatively we could expand this in terms of photons and calculate the amplitudes of photons to start in position $x$ and end in position $y$, $K_t(x,y)$. With electrons, it really only makes sense to consider them as particles, even though we can formally write them as Grassmann fields. I wonder if there is a description of QED which treats the photons as a field but the electrons as particles. Thus you might have amplitudes $K(A_{in},x_1,x_2,x_3,..;A_{out},y_1,y_2,...)$ for a configuration to start with electromagnetic field $A_{in}$ and electrons in positions $x_1$, $x_2$,... So you would sum over all electron paths including closed loops and over all fields $A$ consistent with those paths. So treating electrons as particles but keeping the electromagnetic field as a field. The Feynman graphs would still have electron paths but the photons would be treated differently. I wonder if such a description is known?
Sure; this amounts to a choice of a basis in the space of states in which your basis states are eigenstates of the electric field operator and have a definite electron number. This can be done for a free theory, or for an interacting theory you can at least do it approximately.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/711056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Showing $[\hat{A},\hat{B}] = i\mathbb{1} \Leftrightarrow [\hat{A},\hat{B}^{\textstyle n}] = i\,n\,B^{\textstyle n-1}$ Actually it is almost immediate if one has in mind the identity $[\hat{A},\hat{B}^{\textstyle n}] = \,n\,B^{\textstyle n-1}\,[\hat{A},\hat{B}]$, but I suppose one is not supposed to solve it this way. Instead I guess the primal relation $[\hat{A},\hat{B}] = i\mathbb{1}$ has to be exploited. Expanding is not getting me further: $(1) \quad[\hat{A},\hat{B}] = \hat{A}\,\hat{B}-\hat{B}\,\hat{A} = i\,\mathbb{1}$ $(2)\quad[\hat{A},\hat{B}^{\textstyle n}] = \hat{A}\,\hat{B}^{\textstyle n}- \hat{B}^{\textstyle n}\,\hat{A} = i\,n\,B^{\textstyle n-1}$ Briefly, how does the one follow from the other?
It's a quite straightforward operator manipulation. We have $[\hat{A}, \hat{B}]= \hat{A}\hat{B}-\hat{B}\hat{A}=i \mathbb{1}$, which implies $\hat{B}\hat{A}=\hat{A}\hat{B}-i\mathbb{1}$. Thus: $$\begin{aligned} [A,B^n] &= AB^n - B^n A = AB^{n}-B^{n-1}(BA) \\ &=AB^{n} -B^{n-1}(AB - i\mathbb{1})\\ &=AB^{n} -B^{n-1}AB + iB^{n-1} \\ &=AB^{n} - B^{n-2}(BA)B + iB^{n-1} \\ &=AB^{n} - B^{n-2}(AB-i\mathbb{1})B+iB^{n-1} \\ &=AB^{n}-B^{n-2}AB^2+2iB^{n-1} \\ &\;\;\vdots \\ &=AB^n - BAB^{n-1} +i(n-1)B^{n-1} \\ &=AB^n - (AB - i\mathbb{1})B^{n-1} +i(n-1)B^{n-1} \\ &=i\,n\,B^{n-1} \end{aligned}$$ hence proved: each time $A$ is commuted past one factor of $B$ it picks up a term $iB^{n-1}$, and after passing all $n$ factors these add up to $i\,n\,B^{n-1}$. Hope it helps.
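A quick check in a concrete representation (my own addition, not part of the original derivation): realise $\hat A$ as multiplication by $x$ and $\hat B$ as $-i\,d/dx$, so that $[\hat A,\hat B]=i$, and verify the identity acting on a generic function:

```python
# Verify [A, B^n] f = i n B^(n-1) f symbolically, with A = x and B = -i d/dx.
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

def B(expr, n=1):
    """Apply the operator (-i d/dx) n times to expr."""
    for _ in range(n):
        expr = -sp.I * sp.diff(expr, x)
    return expr

for n in range(1, 5):
    lhs = x * B(f, n) - B(x * f, n)      # [A, B^n] acting on f
    rhs = sp.I * n * B(f, n - 1)         # i n B^(n-1) acting on f
    print(n, sp.simplify(sp.expand(lhs - rhs)) == 0)   # True for every n
```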
{ "language": "en", "url": "https://physics.stackexchange.com/questions/711154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Eyes shut, can a passenger tell if they’re facing the front or rear of the train? Suppose you’re a passenger sitting in one of the carriages of a train which is travelling at a high, fairly steady speed. Your eyes are shut and you have no recollection of getting on the train or the direction of the train’s acceleration from stationary. Can you tell whether you’re facing the front or the back of the train? This isn’t a theoretically perfect environment - there are undulations, bends and bumps in the track. Not a trick question - you cannot ask a fellow passenger! Edit: This is intentionally lacks rigorous constraints. Do make additional assumptions if it enables a novel answer.
I have never seen a train overtaken by another train on a parallel adjacent track. Usually, trains on parallel tracks pass each other travelling in opposite directions. So it should be easy to tell with some confidence whether the sound of a passing train first comes from behind you or from in front of you. You would additionally be aided by the Doppler shift of the passing train's sound, which is higher in pitch while the train approaches and drops in pitch as the two trains separate.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/711352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 9, "answer_id": 3 }
The effective action in the linear sigma model I am reading section 11.4 of Peskin and Schroeder's book (page 373), and there is a step I could not follow. To calculate the effective action of the linear sigma model, one needs the determinant of $\frac{\delta^{2}\mathcal{L}}{\delta\phi^{i}\delta\phi^{j}}$, and in Peskin's book it is shown that $$\frac{\delta^2\mathcal{L}}{\delta\phi^i\delta\phi^{j}}=-\partial^2\delta^{ij}+\mu^2\delta^{ij} - \lambda[(\phi_{\mathrm{cl}}^k)^2\delta^{ij} + 2\phi_{\mathrm{cl}}^i\phi_{\mathrm{cl}}^j].\tag{11.67}$$ Then, orienting the coordinates so that $\phi^{i}_{cl}$ points in the $N$th direction, $$\phi^{i}_{cl}=(0,0,...,0,\phi_{cl}),\tag{11.68}$$ the operator on the RHS of the first equation is just a KG operator $(-\partial^{2}-m^{2}_i),$ where $$m^2_i=\begin{cases} \lambda\phi^2_{cl}-\mu^2\quad acting\; on\;\eta^1,...,\eta^{N-1};\\3\lambda\phi^2_{cl}-\mu^2\quad acting\; on\;\eta^N.\end{cases}\tag{11.69}$$ I think for all $\eta^i$ the value of $m^2_i$ should equal $3\lambda\phi^2_{cl}-\mu^2$; how is the $\lambda\phi^2_{cl}-\mu^2$ obtained?
Hint: Eqs. (11.67) & (11.68) yield $$ \frac{\delta^2\mathcal{L}}{\delta\phi^i\delta\phi^{j}}=-\partial^2\delta^{ij}+\mu^2\delta^{ij} - \lambda\phi_{\mathrm{cl}}^2 [\delta^{ij} + 2\delta_N^i\delta_N^j].$$
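Spelling the hint out one step further (my own addition): the matrix above is diagonal, and its diagonal entries are $$-\partial^2+\mu^2-\lambda\phi_{\rm cl}^2\bigl[1+2\,\delta^i_N\bigr] = \begin{cases}-\partial^2-\bigl(\lambda\phi_{\rm cl}^2-\mu^2\bigr), & i=j<N,\\ -\partial^2-\bigl(3\lambda\phi_{\rm cl}^2-\mu^2\bigr), & i=j=N,\end{cases}$$ which is precisely (11.69): only the fluctuation along $\phi_{\rm cl}$ (the $N$th direction) picks up the extra $2\lambda\phi_{\rm cl}^2$ from the $2\phi^i_{\rm cl}\phi^j_{\rm cl}$ term, while the $N-1$ transverse directions see the $(\phi_{\rm cl}^k)^2\delta^{ij}$ piece alone.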
{ "language": "en", "url": "https://physics.stackexchange.com/questions/711497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Could a glass cup containing a vacuum rise into the air? https://what-if.xkcd.com/6/ has been mentioned here before, but I'm questioning whether or not the glass cup with the bottom half as a vacuum would rise at all. To start with, a vacuum exerts no force. Any perceived "sucking" is actually external pressure pushing into the vacuum. So the only force that could be lifting the glass would be buoyancy of the air around the cup. Let's ballpark it with a drinking cup that can hold about 500ml. If we consider the cup as an open-top cylinder, an internal radius of 3.8 cm and height of 11 cm gets us a 499ml volume, and the internal surface area is 308 cm^2. Based on a quick Google search, glass is about 2000x more dense than air, so in order for the glass to rise it would need to displace 2000x as much air as there is glass. That suggests that if the glass was sealed (by a weightless forcefield at the top) and was completely empty instead of having some water in it, the total volume of glass would have to be less than 0.5ml, resulting in an average width of 0.016mm. That's thinner than a human hair. Given that glass cups are significantly thicker than human hair1, is there any truth to the conclusions of that "What If?" Is there some effect that I've misunderstood or underestimated that significantly changes the situation? Or should we conclude, like https://physics.stackexchange.com/a/33642/79374 did with the other vacuum cup, that Randall Munroe either miscalculated or was greatly exaggerating? 1 Citation needed
Only if the cup is big enough and resistant to breaking will it fly up. The enclosed volume, with essentially zero mass inside, grows faster than the glass volume as the cup is scaled up (assuming constant glass thickness), so at some size the upward buoyancy force will exceed the total weight. I only now saw the real problem in the question: if you pull a vacuum in a cylinder with a piston and then release the piston, the cylinder will accelerate in the opposite direction. Likewise, the glass with a vacuum in its lower half will jump upward.
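A rough scaling estimate (my own addition, ignoring whether the glass could withstand the pressure): for a thin closed shell of radius $R$ and wall thickness $t$, buoyancy beats weight when $\tfrac{4}{3}\pi R^3\rho_{\rm air} > 4\pi R^2 t\,\rho_{\rm glass}$, i.e. $R > 3t\,\rho_{\rm glass}/\rho_{\rm air} \approx 18\ \mathrm{m}$ for $t = 3\ \mathrm{mm}$ and the factor-of-2000 density ratio quoted in the question. So an ordinary-sized cup is nowhere near rising by buoyancy alone, but a large enough (and strong enough) evacuated vessel would.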
{ "language": "en", "url": "https://physics.stackexchange.com/questions/711709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
What does general relativity's metric tensor have to do with quantum electrodynamics? Sabine Hossenfelder recently posted a YouTube video titled, The Closest we have to a Theory of Everything. At 9:15, she shows the action $S$ for electrodynamics and, immediately after, the Einstein-Hilbert action for general relativity (with also the matter Lagrangian). Both equations show the square root of negative $g$, $g$ being the determinant of the metric tensor. I can't find the equation for the $S$ of electrodynamics anywhere else... It looks very similar to the regular $S$ used in QED, the glaring addition being the square root of $-g$... Is this $S$ that is shown in the video the same $S$ used in QED? Does this equation have a special name, as the Einstein–Hilbert one does?
The action for QED in flat spacetime is $$ S = \int d^4 x \left( - \frac{1}{4e^2} F_{\mu\nu} F^{\mu\nu} + i {\bar \psi} \gamma^\mu D_\mu \psi + m {\bar \psi} \psi\right) , \quad D_\mu = \partial_\mu + i A_\mu , \quad F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu . $$ Here $\gamma_\mu$ is a $4\times 4$ Dirac matrix satisfying $\gamma_\mu \gamma_\nu + \gamma_\nu \gamma_\mu = - 2 \eta_{\mu\nu}$ and $\psi$ is the fermion field (it is a column vector) and ${\bar \psi} = \psi^\dagger \gamma^0$ is its so-called Dirac conjugate (it is a row vector). The action of electrodynamics is just the first term above. In curved spacetimes with a general metric $g_{\mu\nu}$ it is a bit more complicated because we have a fermion field $\psi$. To construct such an action, we introduce a vielbein $e^\mu_a$ which satisfies $e^\mu_a e^\nu_b \eta^{ab} = g^{\mu\nu}$. We then have to introduce a spin connection $\omega$ which satisfies $$ \partial_\mu e_\nu^a - \Gamma^\lambda_{\mu\nu} e^a_\lambda + \omega_\mu{}^a{}_b e^b_\nu = 0 . $$ The action is then given by $$ S = \int d^4 x \sqrt{-g} \left( - \frac{1}{4e^2} F_{\mu\nu} F^{\mu\nu} + {\bar \psi} e_a^\mu \gamma^a D_\mu \psi + m {\bar \psi} \psi\right) , \qquad D_\mu = \partial_\mu + \frac{1}{4} \omega_\mu^{ab} \gamma_a \gamma_b + i A_\mu . $$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/711917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Electric potential generated by spherical symmetric charge density I know this question is pretty basic but I found a supposedly wrong formula in my notes and I'm trying to understand where this is coming from. Suppose we have a spherically symmetric charge density $\rho({\boldsymbol{r}})=\rho(r)$, then the formula I was given for the potential is $$\phi(r)=\frac{1}{r}\int_0^r4\pi\rho(r')r'^2dr'\tag{1}$$ But using Gauss law for electric field one gets $$\int\boldsymbol{E}\cdot d\boldsymbol{S}=4\pi\underbrace{\int\rho(\boldsymbol{r'})d^3\boldsymbol{r'}}_{Q(r)}\implies \boldsymbol{E}(\boldsymbol{r})=\frac{Q(r)}{r^2}\hat{\boldsymbol{r}}=\frac{\int_0^r4\pi\rho(r')r'^2dr'}{r^2}\hat{\boldsymbol{r}}\tag{2}$$ Taking the gradient of $(1)$ $$\boldsymbol{E}(\boldsymbol{r})=-\nabla\phi=\left[\frac{\int_0^r4\pi\rho(r')r'^2dr'}{r^2}-\frac{4\pi\rho(r)r^2}{r}\right]\hat{\boldsymbol{r}}=\left[\frac{Q(r)}{r^2}-\frac{dQ(r)/dr}{r}\right]\hat{\boldsymbol{r}}$$ That is off by a term from what I got from Gauss Law, so I concluded $(1)$ is wrong. Is this correct?
The formula $$\phi(r)= \frac{1}{r}\int_0^r 4\pi\rho(r')r'^2dr' \tag{1}$$ for the potential is indeed wrong, as you have already proven by checking $\mathbf{E}(\mathbf{r})=-\nabla\phi$. It misses the contribution of charges outside of radius $r$ to the potential $\phi(r)$. While these outside charges have no effect on the inside field strength $\mathbf{E}$, they do have an effect on the inside potential $\phi$. A correct formula would be $$\phi(r)= \frac{1}{r}\int_0^r 4\pi\rho(r')r'^2dr' +\int_r^\infty \frac{1}{r'}4\pi\rho(r')r'^2 dr' \tag{a}$$ Equivalently, you may write this as $$\phi(r)= \int_0^\infty \frac{1}{\text{max}(r,r')}4\pi\rho(r')r'^2 dr'$$ It is easy to verify that (a) satisfies $$\mathbf{E}(\mathbf{r})=-\nabla\phi(r) =\frac{Q(r)}{r^2}\hat{\mathbf{r}}$$
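A quick numerical sanity check of formula (a) (my own addition, in the same Gaussian-style units as above, for the special case of a uniformly charged ball where the closed-form interior potential $\phi(r)=Q(3R^2-r^2)/(2R^3)$ is known):

```python
# Check formula (a) for a uniform ball of total charge Q and radius R.
import numpy as np
from scipy.integrate import quad

Q, R = 1.0, 2.0
rho = 3 * Q / (4 * np.pi * R**3)          # uniform density inside, zero outside

def phi(r):
    inner, _ = quad(lambda rp: 4 * np.pi * rho * rp**2, 0, r)   # charge inside r
    outer, _ = quad(lambda rp: 4 * np.pi * rho * rp, r, R)      # shells beyond r
    return inner / r + outer

for r in (0.5, 1.0, 1.5):
    print(r, phi(r), Q * (3 * R**2 - r**2) / (2 * R**3))        # the two agree
```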
{ "language": "en", "url": "https://physics.stackexchange.com/questions/712081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Is sand in a vacuum a good thermal insulator? My reason for thinking that sand in a vacuum would be a good insulator is that heat cannot be conducted in a vacuum, and the area of contact between adjacent grains of sand is very small, which means heat would transfer between grains relatively slowly. Is this correct, or is there something I'm missing? Also, the sand is there instead of pure vacuum for structural support.
Comparing sand in air to sand in vacuum: The sand in vacuum is less conductive than sand in air in all reasonable conditions. The reason is that the air conducts some heat and vacuum doesn't, and air can even facilitate convection; on the other hand, air does not interfere much with radiative heat exchange between the sand grains. Comparing sand in vacuum to vacuum: Here, things get more complex. At high enough temperatures, the radiative heat exchange will dominate (it increases as T^4). Sand will absorb part of the radiation and will radiate some of it back to the source, i.e. the exchange will be slowed down. On the other hand, at low enough temperatures, the radiative heat exchange will be of much less importance and the heat conduction through the grains will dominate. I.e. at low temperatures, the sand will improve the heat exchange compared with pure vacuum. Comparing sand to a single, continuous blob (stone) of the same material: Again, it depends on temperature. Sand will exchange heat better when the radiative exchange dominates; otherwise the stone will.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/712248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 1 }
Table of integrals for dimensional regularization Is there any reference (book or paper) that contains a list of integrals useful for dimensional regularization? I would need it to approach integrals like these $$ \int d^dx \frac{x^\mu}{|x|^{2d-4}}, \qquad \int d^dx \frac{(x-y)^\mu}{|x-y|^d |x-z|^{d-2}}. $$ Any suggestion is deeply appreciated.
* *Appendices of QFT textbooks usually have tables of DR integrals used throughout the textbook, for example in Peskin and Schroeder Appendix A.4, and Schwartz Appendix B. *For general one-loop integrals, there are easy-to-use computer packages like Package-X for Mathematica that can deal with massive propagators. *For higher loop calculations, Mincer is a tool that does massless three-loop calculations (written in FORM). Nice lecture on Mincer here. If you want to understand the technology behind higher-loop calculations, Grozin has lecture notes on multiloop calculations that detail the integration-by-parts methods. *For a more in-depth look into Dimensional Regularization, Collins' book on renormalisation contains a lot of details. Chapter 4 in particular introduces an axiomatic approach, which (for me) clarified a lot of the mathematical details, and what manipulations are allowed.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/712519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Is the gluon also a repulsive force? In the picture of a proton we see 3 quarks, held together by gluons. But the two $u$ quarks repel each other, so the gluons act through the strong force, whereas the $u$ and $d$ quarks attract each other. If all this is correct, then what keeps them apart? Can a gluon also act as a repulsive force? EDIT: there is something that escapes me, if the answer is correct: "at a certain distance the force becomes repulsive". Presumably, the distance between u-u and u-d is the same, so why does it repel in one case and attract in the other?
Let's start from experimental facts. The proton exists and is stable. This means that at the quantum level there must be one stable wave function, a solution of the complicated potentials of both the electromagnetic and the strong force. The proton is even more complicated than your picture, as there are innumerable quark-antiquark and gluon pairs in its makeup (see this article). The mathematical complication is modeled with lattice quantum field theory, with which the higher bound states of the nucleon have been successfully modeled; for example, see the figures here. My basic point is that the proton is a quantum bound state of its constituents, as are the higher mass states of the baryon octet and decuplet. The proton is stable because it is the lowest energy state.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/712859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Approximation of Spherical Bessel function I am currently studying the CMB power spectrum from a numerical approach (easier than the analytical approach). In a Mathematica notebook that I am following, they work with spherical Bessel functions in order to free stream the multipole solution of the fluid equations in Fourier space. I understand the analytical implementation of the Bessel functions in the formula, but in the Mathematica code, they approximate these functions in a way which I have not been able to derive for myself or find online. The approximation of the Bessel function is \begin{equation} l^2 j_l^2(xl) = \frac{1}{2x\sqrt{x^2-1}}. \end{equation} I also have to use the derivative of the Bessel function which they approximate as \begin{equation} l^2 j_l'^2(xl) = \frac{1}{2x\sqrt{x^2-1}}\frac{x^2-1}{x^2}. \end{equation} These approximations are made in the limit where both $x$ and $xl$ are large. I would very much appreciate any insight into how to derive these approximations!
This is too long for a comment so I wrote this answer. I looked in the obvious place, G. N. Watson, "Treatise on the Theory of Bessel Functions" (Cambridge University Press, Cambridge, 1980), second edition; in section 8.12 he gives an expansion first derived by Meissel for large order and $x$ times the order large. Watson then discusses the stationary phase approximation in section 8.2. Watson gives the Meissel series, where he says this dominant term had been derived by L. Lorenz earlier, \begin{equation} J_\nu(x) \simeq \sqrt{\frac{2}{\pi\sqrt{x^2-\nu^2}}} \cos\left (Q_\nu(x)-\frac{1}{4}\pi\right) \end{equation} \begin{equation} Q_\nu(x) = \sqrt{x^2-\nu^2}-\frac{1}{2}\nu\pi+\nu\arcsin(\nu/x) \end{equation} If I substitute $j_\ell (x) = \sqrt{\frac{\pi}{2x}} J_{\ell+1/2}(x)$, \begin{equation} j_\ell(\ell x) \simeq \sqrt{\frac{\pi}{2x\ell}} \sqrt{\frac{2}{\pi\sqrt{\ell^2 x^2-(\ell+\frac{1}{2})^2}}} \cos\left (Q_{\ell+1/2}(\ell x)-\frac{1}{4}\pi\right) \,. \end{equation} Simplifying, squaring, and multiplying by $\ell^2$, \begin{equation} \ell^2 j_\ell^2(\ell x) \simeq \frac{1}{x\sqrt{x^2-\left (\frac{2\ell+1}{2\ell} \right)^2}} \cos^2\left (Q_{\ell+1/2}(\ell x)-\frac{1}{4}\pi\right ) \,. \end{equation} If, as suggested by Emilio Pisanty in the first comment, the cosine squared is approximated by its average, $\frac{1}{2}$, in the sums or integrals you are doing, and you approximate $\left (\frac{2\ell+1}{2\ell} \right)^2 \simeq 1$, you get your result. In the stationary phase approximation, it looks to me like the two stationary phase points give integrals that give the $\frac{1}{x\sqrt{x^2-\left (\frac{2\ell+1}{2\ell} \right)^2}}$ factor, and their phases give the cosine term, but as I said, I'm too lazy to spend the time to carefully calculate and check that term.
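If it helps, here is a quick numerical check of the averaged approximation (my own addition; agreement is at the few-per-cent level once $\ell$ is large and $x>1$):

```python
# Average l^2 j_l(l x)^2 over a small window in x (which averages the cos^2
# over several oscillations) and compare with 1 / (2 x sqrt(x^2 - 1)).
import numpy as np
from scipy.special import spherical_jn

l = 200
for x0 in (1.5, 2.0, 3.0):
    xs = np.linspace(x0 - 0.05, x0 + 0.05, 2001)
    lhs = np.mean(l**2 * spherical_jn(l, l * xs)**2)
    rhs = 1.0 / (2 * x0 * np.sqrt(x0**2 - 1))
    print(x0, lhs, rhs)        # the two columns agree to a few per cent
```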
{ "language": "en", "url": "https://physics.stackexchange.com/questions/713015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Pressure inside a gravitationally bound sphere of uniform density I've looked in many places to find an equation for the pressure inside a sphere of uniform density, but didn't find any, so I decided to take a stab at it. I first found the equation for gravitational acceleration inside a sphere, $$g(r)=\frac{GM\left(\frac{r}{R}\right)^3}{r^2}=\frac{GMr}{R^{3}}$$ where $M$ is the mass of the sphere, $R$ is the radius of the sphere, and $r$ is the distance from the center. I took this to get the integral of the density times the acceleration of gravity as the distance from the center changes, $$\rho\int_{r}^{R}g\left(x\right)dr,$$ where $\rho$ is the density $\frac{3M}{4\pi R^3}$, and this expanded to $$\frac{3M}{4\pi R^{3}}\int_{r}^{R}\frac{GMx}{R^{3}}dx=\frac{3GM^{2}}{4\pi R^{6}}\int_{r}^{R}xdx.$$ The resulting equation was $$\frac{3GM^{2}}{4\pi R^{6}}\left(\frac{1}{2}R^{2}-\frac{1}{2}r^{2}\right)=\frac{3GM^{2}}{8\pi R^{6}}\left(R^{2}-r^{2}\right).$$ Did I get it right?
The expression given in the question is correct. Maybe for some context: the relevant equation for the pressure balancing gravity inside a static, spherically symmetric body is given by Newton's hydrostatic equilibrium: $$ \frac{dP}{dr}=-\frac{GM(r)\rho(r)}{r^2}=-g(r)\,\rho(r)\longrightarrow dP = - \rho(h)\,g(h)\, dh. $$ In the present case the enclosed mass is just given by $M(r)=\frac{M r^3}{R^3}$ while the density is constant ($\rho(r)=\rho=\frac{3 M}{4 \pi R^3}$). Inserting those expressions and integrating from the surface of the sphere $R$ with ($P(R)=0$) to $r$ with $0\le r\le R$ yields the expression in question $$P(r)=\frac{3 \,GM^2}{8 \pi R^6}\left(R^2-r^2\right).$$ A similar computation is also possible in General Relativity where this scenario and the corresponding solution is known as interior Schwarzschild solution: $$ P(r)=\frac{3\, G Z }{4 \pi R^2 }\frac{\sqrt{1-2 Z r^2/R^2 }-\sqrt{1-2 Z}}{\sqrt{1-2 Z}-\sqrt{1-2 Zr^2/R^2 }}, $$ with the compactness $Z=\frac{G}{c^2}\frac{M}{R}$. Expanding this expression for small $Z$ -- non-compact objects -- yields the Newtonian result $$ P(r)=\frac{3 Z^2 }{8 \pi R^4}\left(R^2-r^2\right)+O(Z^3). $$ For reference the compactness $Z$ of the sun is $\sim 2\times10^{-6}$ while typical neutron stars have compactnesses of order $10^{-1}$.
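As a worked number (my addition, not in the original answer): plugging the Sun's mass and radius into the Newtonian central value $P(0)=\frac{3GM^2}{8\pi R^4}$ gives roughly $1.3\times10^{14}\,$Pa; this uniform-density estimate lies far below the central pressure of realistic solar models, since the Sun is strongly centrally condensed.

```python
# Central pressure of a uniform-density sphere with the Sun's mass and radius.
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M = 1.989e30         # kg  (solar mass)
R = 6.957e8          # m   (solar radius)

P0 = 3 * G * M**2 / (8 * math.pi * R**4)
print(f"P(0) = {P0:.2e} Pa")    # roughly 1.3e14 Pa
```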
{ "language": "en", "url": "https://physics.stackexchange.com/questions/713115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How does current flow in a purely inductive circuit if the net voltage is zero? Considering the equation, $$E=−L\frac{di}{dt}$$ The negative sign in the above equation indicates that the induced emf opposes the battery's emf. If we're talking about a purely inductive circuit, the induced emf is equal and opposite to applied emf. Isn't it just like two identical batteries in opposition? If that's the case, how does the current flow?
How does current flow in a purely inductive circuit if the net voltage is zero? The problem in this question is that it is based on a completely wrong assumption. This concept of “net voltage” isn’t really a thing. In fact, by Kirchoff’s voltage law your “net voltage” is guaranteed to be zero. So the net voltage being zero does not imply anything about the current. Isn't it just like two identical batteries in opposition? No, an inductor is not like a battery. A battery has a voltage that is independent of the current. An inductor has a voltage that is proportional to the change in the current. (A capacitor has a voltage that is proportional to the integral of the current) They are not the same, and having them with opposite voltages does not imply any cancellation of current.
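A concrete way to see this (my addition): for an ideal DC source $V_0$ connected across an ideal inductor, $$V_0 = L\frac{di}{dt}\;\Longrightarrow\; i(t)=\frac{V_0}{L}\,t\quad(i(0)=0),$$ so the current is nonzero and steadily growing even though, around the loop, the source voltage and the inductor voltage add to zero at every instant.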
{ "language": "en", "url": "https://physics.stackexchange.com/questions/713443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
How do you calculate the age of the observable Universe if the acceleration of the expansion is not constant? What makes us believe that the cosmological constant was the same in the past? And if there is no way to prove this, then could the age of our Universe be different from the currently calculated value, since the Universe could have expanded at a different rate in the past? Even if the value of the cosmological constant was different in the past, how are the error limits calculated that give a finite tolerance to the current prediction of the age of our observable Universe?
The time evolution of a universe in which the cosmological constant is actually a variable can indeed be modeled on a computer, and the results compared to observational data. If we imagine a finite-element expansion model in which the time variable is divided into discrete slices, then at each time slice the corresponding value of the CC is updated in the model and the corresponding difference equations are solved. In this manner the expansion process is cranked forward one time increment at a time, with a different value of the CC used at each increment.
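A minimal sketch of that kind of computation (my own, for a flat FRW model; the parameter values and the toy dark-energy law are assumptions chosen only for illustration): the age follows from $t_0=\int_0^1 da/(a\,H(a))$, and any scale-factor dependence of the dark-energy term can simply be slotted into $H(a)$.

```python
# Age of a flat FRW universe with matter plus a (possibly evolving) dark-energy term.
import numpy as np
from scipy.integrate import quad

H0 = 70.0 * 1.0e3 / 3.086e22      # 70 km/s/Mpc converted to 1/s
Om = 0.3                          # matter density parameter today

def age_gyr(omega_lambda_of_a):
    def H(a):
        return H0 * np.sqrt(Om / a**3 + omega_lambda_of_a(a))
    t0, _ = quad(lambda a: 1.0 / (a * H(a)), 1e-8, 1.0)
    return t0 / (3.156e7 * 1e9)   # seconds -> Gyr

print(age_gyr(lambda a: 0.7))               # constant Lambda: about 13.5 Gyr
print(age_gyr(lambda a: 0.7 * a**0.3))      # a toy evolving dark-energy term shifts the age
```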
{ "language": "en", "url": "https://physics.stackexchange.com/questions/713566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why is the net flux the same for both spheres? There are 2 spheres, of radius $r$ and $R$ respectively. Using Gauss' law to find the net flux through the surface, we use: $$\Phi_E = \frac{Q\,(\text{charge enclosed})}{\epsilon_0} = E \times \oint ds \times \cos(0°)$$ Here, when we use the first formula, we get the same answer for both of the spheres, but using the second formula, we get different values (which makes sense because the distance matters for the density of electric field lines passing through). So, why is the flux from the first equation the same for both spheres?
The net flux through any closed surface surrounding a point charge $q_{in}$ is equal to $\frac{q_{in}}{\epsilon_0}$, and is independent of the shape of that surface. (The original answer included an illustration of nested closed surfaces $S_1$, $S_2$, $S_3$ around the point charge, which should make things a bit clearer.) Clearly all the field lines passing through $S_1$ also pass through $S_2$ and $S_3$. The electric flux is proportional to the number of electric field lines penetrating the surface. As for the mathematical proof, as Triatticus has pointed out, the values of $E$ are not the same for both cases. $$\phi_{E, 1} = \frac{q_{in}}{\epsilon_0} = E_1 \times4\pi r^2$$ With $E_1 = \frac{k_e.q}{r^2}$, we get: $$\phi_{E, 1} = 4\pi k_e.q_{in}$$ Similarly for the second case, with $E_2 = \frac{k_e.q}{R^2}$, we get: $$\phi_{E, 2} = 4\pi k_e.q_{in}$$ which are both equal to $\frac{q_{in}}{\epsilon_0}$. Hope this helps.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/713722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to find expansion of slightly modified Coulomb's potential? From here I know that, ${{\frac {1}{|\mathbf {r}_1 -\mathbf {r}_2|}}=\sum _{\ell =0}^{\infty }{\frac {4\pi }{2\ell +1}}\sum _{m=-\ell }^{\ell }(-1)^{m}{\frac {r_1^{\ell }}{r_2^{\ell +1}}}Y_{\ell }^{-m}(\theta ,\varphi )Y_{\ell }^{m}(\theta ',\varphi ').}$ where $|r_1|<|r_2|.$ I was wondering if there exists a strategy to find expansions for general potentials, for instance $$\frac{1}{|r_1-r_2|^{1+\alpha}}, \text{ }\alpha>0$$ or more generally for functions $\phi(|r_1-r_2|)?$
The usual multipole expansion follows from the Legendre identity $$ {\displaystyle {\frac {1}{\sqrt {1-2xt+t^{2}}}}=\sum _{n=0}^{\infty }P_{n}(x)t^{n}} $$ The generalization to arbitrary powers requires the Gegenbauer identity $$ {\frac {1}{(1-2xt+t^{2})^{\alpha }}}=\sum _{{n=0}}^{\infty }C_{n}^{{(\alpha )}}(x)t^{n} $$ which yields $$ {\displaystyle {\frac {1}{|\mathbf {x} -\mathbf {y} |^\alpha}}=\sum _{k=0}^{\infty }{\frac {|\mathbf {x} |^{k}}{|\mathbf {y} |^{k+\alpha}}}C_{k}^{(\alpha/2)}\biggl({\frac {\mathbf {x} \cdot \mathbf {y} }{|\mathbf {x} ||\mathbf {y} |}}\biggr)} $$ You can write this in terms of spherical harmonics, if you wish. For arbitrary potentials, you can use the fact that the spherical harmonics are a basis, and therefore you can expand $$ \phi(\vec r_1,\vec r_2)=\sum_{\ell,m\\\ell',m'}c_{\ell,m;\ell',m'}(r_1,r_2) Y_\ell^m(\theta,\phi)Y_{\ell'}^{m'}(\theta',\phi') $$ where the coefficients are $$ c_{\ell,m;\ell',m'}(r_1,r_2)=\int \phi(\vec r_1,\vec r_2)(Y_\ell^m(\theta,\phi)Y_{\ell'}^{m'}(\theta',\phi'))^* $$ Unless you carefully choose $\phi$ to be a special function (e.g., with some interesting symmetries), this expression cannot in general be simplified much further.
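A quick numerical check of this expansion (my own addition; the vectors and the order $\alpha$ below are chosen arbitrarily):

```python
# Compare 1/|x - y|^alpha with its Gegenbauer series, truncated at 40 terms.
import numpy as np
from scipy.special import gegenbauer

alpha = 1.7
x = np.array([0.3, 0.1, 0.2])     # |x| < |y| so the series converges
y = np.array([1.0, -0.5, 2.0])
rx, ry = np.linalg.norm(x), np.linalg.norm(y)
u = np.dot(x, y) / (rx * ry)      # cosine of the angle between x and y

direct = 1.0 / np.linalg.norm(x - y)**alpha
series = sum((rx**k / ry**(k + alpha)) * gegenbauer(k, alpha / 2)(u)
             for k in range(40))
print(direct, series)             # the two values agree to high accuracy
```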
{ "language": "en", "url": "https://physics.stackexchange.com/questions/714083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Permanent magnets - why spontaneous symmetry breaking? I am not a physicist. I'm curious about the cause of permanent magnetism in ferromagnetic materials. So far, I have formed the impression that a macroscopic net magnetic dipole moment is formed from the collective alignment of electron magnetic dipole moments (in suitable metals). But why do the electron spins align? My first instinct would be that such an ordered state forming spontaneously would seem inconsistent with the second law of thermodynamics. I have learned that below the Curie temperature, the spherical symmetry that one might expect of the magnetization direction is spontaneously broken, resulting in a magnetic anisotropy - a magnetic dipole moment. But I still don't understand why this spontaneous symmetry breaking occurs in the first place. Is there a way to explain this to a non-physicist?
a macroscopic net magnetic dipole moment is formed from the collective alignment of electron magnetic dipole moments (in suitable metals). But why do the electron spins align? The magnetic dipole is something that the electron has permanently. It is even a constant. If you put the connection with spin in the background for a short time, it becomes clear what happens in ferromagnetic materials. Above the Curie temperature, the thermal motions of the subatomic particles destroy the self-organisation of the magnetic alignment of these particles. Below the Curie temperature, on the other hand, the mutual alignment of the magnetic dipoles prevails. I have already pointed out several times in this forum that emphasising the electron as an electric charge and not also as a magnet should actually be outdated. The electron, like the proton and the antiparticles, are charges as well as dipoles to the same extent. The consideration of the spin in ferro- and other magnetic alignments is not purposeful. The consideration of magnetic dipoles is sufficient and purposeful.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/714355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Why I cannot write the time evolution operator $e^{-i(T+V)t}$ as the product of operators $e^{-iTt}e^{-iVt}$ To calculate the time-evolved wave function for a time-independent Hamiltonian we use: $$ \Psi_{i}(r,t)=e^{-iH^{0}t}\psi_{i}(r,0). $$ We also know that the time-independent Hamiltonian $H^{0}=T+V$ is given by the sum of the kinetic and potential energies, e.g. in an isolated atom. However, we cannot write $e^{-iH^{0}t}$ as the product of the operators $e^{-iTt}e^{-iVt}$. Why is that?
This is because of the BCH formula $$\begin{align}e^Z~=~&e^Xe^Y \cr\Downarrow~&\cr Z~=~&X+Y+\frac{1}{2}[X,Y]+{\cal O}(X^2Y,XY^2),\end{align}$$ or equivalently, the Zassenhaus formula. But check out the Trotter product formula.
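A small numerical illustration (mine, not part of the original answer) of why the naive factorisation fails for non-commuting operators, using matrix exponentials:

```python
# For non-commuting X and Y, expm(X) @ expm(Y) differs from expm(X + Y),
# precisely because of the commutator terms in the BCH formula.
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, 1.0], [0.0, 0.0]])
Y = np.array([[0.0, 0.0], [1.0, 0.0]])

print(np.allclose(expm(X) @ expm(Y), expm(X + Y)))                      # False
print(np.allclose(expm(X) @ expm(Y),
                  expm(X + Y + 0.5 * (X @ Y - Y @ X))))
# still False in general: the higher-order BCH terms also contribute
```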
{ "language": "en", "url": "https://physics.stackexchange.com/questions/714531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Non-relativistic approximation of the retarded Lienard-Wiechert field I'm working on a revision of the absorber theory of radiation proposed by Wheeler and Feynman on their paper "Interaction with the Absorber as the Mechanism of Radiation". On page 161 they say the retarded field of the source reduces to \begin{equation} -\left(\frac{e\mathfrak{U}}{r_k c^2}\right) \sin(\mathfrak{U},r_k) \end{equation} together with a term of electrostatic origin. My problem is that I don't completely understand where this terms comes from. Using the Liénard-Wiechert potentials, I get that the field for a point charge with arbitrary motion is \begin{equation} \mathbf{E} = e \left[\left(\frac{\hat{\mathbf{n}}-\boldsymbol{\beta}}{\gamma^2 R^2 (1-\hat{\mathbf{n}} \cdot \boldsymbol{\beta})^3} \right) + \frac{1}{c} \frac{\hat{\mathbf{n}}\times\left((\hat{\mathbf{n}}-\boldsymbol{\beta})\times\dot{\boldsymbol{\beta}}\right)}{R(1-\hat{\mathbf{n}} \cdot \boldsymbol{\beta})^3 } \right] \end{equation} We're interested in the limit $\beta\ll 1$, so we are left with \begin{equation} \mathbf{E} = e \left[\frac{\hat{\mathbf{n}}}{\gamma^2 R^2 } + \frac{1}{c} \frac{\hat{\mathbf{n}}\times\left(\hat{\mathbf{n}}\times\dot{\boldsymbol{\beta}}\right)}{R } \right] = \frac{e \hat{\mathbf{n}}}{\gamma^2 R^2 } + \frac{e\hat{\mathbf{n}}\times\left(\hat{\mathbf{n}}\times\mathfrak{U}\right)}{R c^2} \end{equation} Does this mean that $\hat{\mathbf{n}}\times\left(\hat{\mathbf{n}}\times\mathfrak{U}\right)$ is equal to $-\mathfrak{U}\sin(\mathfrak{U},r_k)$ or am I doing something wrong?
$-\hat n \times (\hat n \times \vec{\mathfrak{U}})$ is the component of $\vec{\mathfrak{U}}$ perpendicular to $\hat n$ and has magnitude $\mathfrak{U}\sin(\vec{\mathfrak{U}},\hat n)$, i.e. $\mathfrak{U}$ times the sine of the angle between $\vec{\mathfrak{U}}$ and $\hat n$, so it appears they are just giving the value of this component.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/714945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
I can't seem to figure out a way to compute a gradient without reference coordinates I'm not sure if this question is better asked here or in Mathematics but here it goes: I'm studying electric dipoles, and this exercise I'm working on asks for the energy between 2 dipoles, given by $$U_{DD}=-\vec{p}_1\cdot\vec{E}_2\,\,.$$ The thing is that I can't advance from here since I don't really know how to calculate the gardient of the potential without using a coordinate system for reference, something the solutions simply say not to do, but don't explain how. What I've got so far is as follows: $$V_2=K_e\frac{\vec{p}_2\cdot\hat{R}_2}{R_2^2}=K_e\frac{p_2\cos(\theta)}{R_2^2}$$ $$\vec{E}_2=-\vec{\nabla}V_2=\frac{K_e\cdot\vec{p}_2}{R_2^3}(2\cos(\theta)\hat{i}+\sin(\theta)\hat{j})$$ When the electric field shown in the previously referenced solutions is $$\vec{E}=K_e\frac{3(\vec{p}\cdot\hat{r})\hat{r}-\vec{p}}{r^3}\,\,.$$ My question essentially boils down to: how do I go from the first expression for $\vec{E}$ to the second one, or in an even shorter form, how do I prove the following equality $$p(2\cos(\theta)\hat{i}+\sin(\theta)\hat{j})=3(\vec{p}\cdot\hat{r})\hat{r}-\vec{p}$$
We can use the identity $$\nabla(A\cdot B) = A \times (\nabla \times B) + B \times (\nabla \times A) + (A\cdot \nabla)B + (B\cdot \nabla) A$$ So, $$\nabla(-p_1 \cdot E_2) = \nabla[p_1 \cdot (\nabla V)]$$ $$=\nabla\left[p_1 \cdot \nabla \left[K_e \frac{p_2 \cdot R_2}{R_2^2}\right]\right]= K_e \nabla\left[p_1 \cdot \nabla \left[p_2 \cdot \frac{R_2}{R_2^2}\right]\right]$$ There are some ambiguities now in your original question which make it harder to proceed further. Is $\hat{r}$ the same as $\hat{R}_2 ?$ Is one of these dipoles fixed to the origin while the other is allowed to move around, so that the motion of this dipole is what determines the arguments of $U$?
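As a side note on the equality asked about at the end of the question (my own addition, assuming $\hat i,\hat j$ there stand for the spherical unit vectors $\hat r,\hat\theta$ and that $\vec p$ lies along the polar axis, so $\vec p = p(\cos\theta\,\hat r-\sin\theta\,\hat\theta)$): $$3(\vec p\cdot\hat r)\hat r-\vec p = 3p\cos\theta\,\hat r - p\cos\theta\,\hat r + p\sin\theta\,\hat\theta = p\,(2\cos\theta\,\hat r+\sin\theta\,\hat\theta),$$ which is exactly the claimed identity.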
{ "language": "en", "url": "https://physics.stackexchange.com/questions/715071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Will the potential energy be the same in both cases? Suppose there is a charge $Q$. Now bring in another charge $Q'$ from infinity to a position a distance $r$ from charge $Q$. Then the change in potential energy is equal to $kQQ'/r$. My question is: will the potential energy be the same if the same charge $Q'$ is brought from infinity to a distance $r$ from $Q$, but in small portions $dQ'$? I mean that the first $dQ'$ is brought to a distance $r$ from $Q$, and then additional incremental charges $dQ'$ are also brought to the separation $r$, and so on. Will the potential energy be the same in both cases?
It will be the same only if you ignore the electric field of the dQ's that you moved there first, that is, you only consider the electric field of the original charge Q at the origin. Otherwise you would be including in the calculation the self energy of the electric charge Q', which is infinite. the self energy is the energy required to put a charge Q' together from dQ's coming from infinity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/715225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How to find the electrostatic potential due to a uniformly charged rod on its axis? I was trying to find the potential at a point (at a distance $r$) on the axis of a rod (length $L$, charge $Q$) and ran into a problem. Let's assume the farther end of the rod(relative to the point) to be $a$ and the other to be $b$. I know I can use this equation : $$dV= \frac{Kdq}{x}$$ On integrating: $$\int_{Va}^{Vb} dV = \frac{KQ}{L} \int_{L+r}^{L}\frac{dx}{x}$$ Every website I have referred to completely ignores putting limits on $\int dV$ and writes it as $V$. They also put opposite limits on the RHS to what I am using. Is there some convention here to choosing the upper and lower limit?
Let me formulate your problem in my words so that we are sure what we understand each other: we have an infinitesimally thin rod of length $L$ and a total charge $Q$ which is uniformly distributed along the rod. You are interested in the value of the electrostatic potential $V$ in a point on the line on which the rod lies, in a distance $r$ from the nearer end of the rod. SHORT ANSWER to your question in the text: It's not a physics convention, it's just mathematics: if you calculate an integral over a domain of the form of an interval, the lower endpoint of the interval is the lower limit of the integral et vice versa. Hence, you should calculate: $$V = K \frac Q L \int_r^{r+L} \frac {\mathrm d x}{x}$$
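Carrying out that integral (my addition) gives $$V = K\frac{Q}{L}\int_r^{r+L}\frac{\mathrm dx}{x} = K\frac{Q}{L}\,\ln\!\frac{r+L}{r},$$ which is positive and reduces to the point-charge result $KQ/r$ when $L\ll r$, as it should.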
{ "language": "en", "url": "https://physics.stackexchange.com/questions/715628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Issue expanding $\sin \theta$ about $\theta_{eq}$ Quoting a textbook: $$(m_1 + 2m_2\sin^2\theta)\ddot\theta = m_1\Omega^2\sin\theta\cos\theta - \frac g L (m_1 + m_2)\sin\theta.\tag{10}$$ We can simplify this expression a bit by relating $\frac g L (m_1 + m_2)$ to the equilibrium angle $\theta_{eq}.$ $$(m_1 + 2m_2\bbox[yellow]{\sin^2\theta})\ddot\theta = m_1\Omega^2\sin\theta(\cos\theta - \cos\theta_{eq})\tag{11}.$$ Keeping only the first order term in the smallness, we can replace $$\sin\theta(\cos\theta - \cos\theta_{eq}) \rightarrow \sin\theta_{eq}(-\sin\theta_{eq}(\theta - \theta_{eq})).\tag{12}$$ This leads to the equation of motion for small oscillations $$(m_1 + 2m_2\bbox[yellow]{\sin^2\theta_{eq}})\ddot\theta = -m_1\Omega^2\sin^2\theta_{eq}(\theta - \theta_{eq}).\tag{13}$$ Here we are expanding about $\theta_{eq}$ and everything makes sense except the parts highlighted in yellow. How was the jump made for $$\sin^2\theta \approx \sin^2 \theta_{eq}~ ?$$ I know that expanding about $\theta_{eq}$ and keeping only linear terms we get $$\sin\theta \approx \sin\theta_{eq} + \cos\theta_{eq} (\theta - \theta_{eq})$$ logically then $$\sin^2\theta \approx \sin^2\theta_{eq} + 2\sin\theta_{eq} \cos\theta_{eq} (\theta - \theta_{eq}) $$ why is the $\sin\theta_{eq} \cos\theta_{eq} (\theta - \theta_{eq}) $ term dropped? it is linear in $\theta$
Write $$f(\theta)=m_1+2\,m_2\sin^2(\theta),$$ and substitute $\theta = \theta_0+\delta\theta$, so that $\ddot\theta = \delta\ddot\theta$. Then $$f(\theta_0+\delta\theta)=m_1+2\,m_2\sin^2(\theta_0+\delta\theta)= m_1+2\,m_2\,[\sin(\theta_0)\cos(\delta\theta)+\cos(\theta_0)\sin(\delta\theta)]^2 .$$ For $\delta\theta \ll 1$ we may use $$\cos^2(\delta\theta)\approx 1,\quad\cos(\delta\theta)\approx 1,\quad \sin^2(\delta\theta)\approx 0,\quad\sin(\delta\theta)\approx\delta\theta ,$$ which gives $$f(\theta_0+\delta\theta)\approx m_1+2\,m_2\sin^2(\theta_0)+4\,m_2\sin(\theta_0)\cos(\theta_0)\,\delta\theta .$$ Multiplying by $\delta\ddot\theta$ and dropping the product $\delta\theta\,\delta\ddot\theta$, which is second order because both factors are small, leaves $$f(\theta_0+\delta\theta)\,\delta\ddot\theta=[m_1+2\,m_2\,\sin^2(\theta_0)]\,\delta\ddot\theta .$$ That is why only $\sin^2\theta_{eq}$ survives in the highlighted factor.
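A small symbolic check of that bookkeeping (a sketch using sympy; the symbol names are mine, not the textbook's). The idea: write both small quantities as multiples of one parameter $\epsilon$ and expand to first order in $\epsilon$; the linear term of the prefactor drops out because it always multiplies the already-small acceleration.

```python
# Symbolic check (sympy) that the sin^2(theta) prefactor can be frozen at the
# equilibrium angle to first order in the smallness.  Symbol names are illustrative.
import sympy as sp

m1, m2, th0 = sp.symbols('m1 m2 theta0', positive=True)
eps, a, b = sp.symbols('epsilon a b')   # eps: small parameter;
                                        # delta_theta = a*eps, delta_theta_ddot = b*eps

# (m1 + 2 m2 sin^2(theta0 + delta_theta)) * delta_theta_ddot
expr = (m1 + 2*m2*sp.sin(th0 + a*eps)**2) * (b*eps)

# Keep everything up to first order in eps:
first_order = sp.series(expr, eps, 0, 2).removeO()

# Compare with the frozen-prefactor form (m1 + 2 m2 sin^2(theta0)) * delta_theta_ddot
frozen = (m1 + 2*m2*sp.sin(th0)**2) * (b*eps)
print(sp.simplify(first_order - frozen))   # prints 0
```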
{ "language": "en", "url": "https://physics.stackexchange.com/questions/715750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Why are fields described as force divided by mass or charge? I have read that the application of a force on a body from a distance, like the gravitational or electrostatic force, is a two-step process: first, a field is created by one body; then, the field applies a force on the second body. I want to know why the expression for the gravitational field is given as $F/m$, and why the expression for the electric field is given as $F/q$.
The definition of a field is there to tell us about the effect of the field on an object of unit value. Most force fields have the relevant parameter of the object they act on (mass, charge) as a multiplier, so setting that parameter to one is the same as dividing the force by that parameter.
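Spelled out as equations (my own rephrasing of the point above, using the standard force laws for a test charge $q$ and a test mass $m$):
$$\mathbf F = q\,\mathbf E(\mathbf r)\ \Rightarrow\ \mathbf E(\mathbf r)=\frac{\mathbf F}{q},\qquad \mathbf F = m\,\mathbf g(\mathbf r)\ \Rightarrow\ \mathbf g(\mathbf r)=\frac{\mathbf F}{m}.$$
Because the test parameter enters only as a multiplier, dividing it out leaves a quantity that characterizes the source alone: the force a hypothetical object of unit charge (or unit mass) would feel at that point.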
{ "language": "en", "url": "https://physics.stackexchange.com/questions/715867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Why, in this solution, is the acceleration taken as constant even though it depends on the distance between the two charges? I used integration of $a=dv/dt$ to solve this instead. Question: Two particles have equal masses of $5.0\ \mathrm{g}$ each and opposite charges of $+4.0 \times 10^{-5}\ \mathrm{C}$ and $-4.0 \times 10^{-5}\ \mathrm{C}$. They are released from rest with a separation of $1.0\ \mathrm{m}$ between them. Find the speeds of the particles when the separation is reduced to $50\ \mathrm{cm}$. This involves Coulomb's law, Newton's second law of motion and the kinematics of relative acceleration. Solution of the above question: $$q_1 = q_2 = 4 \times 10^{-5}\ \mathrm{C} \quad \text{and} \quad s=1\ \mathrm{m}, \quad m=5\ \mathrm{g}=0.005\ \mathrm{kg}$$ $$F=K \frac{q^2}{r^2} = \frac{9 \times 10^9 \times (4 \times 10^{-5})^2}{1^2} = 14.4\ \mathrm{N}$$ $$\text{Acceleration } a = \frac{F}{m} = \frac{14.4}{0.005}=2880\ \mathrm{m/s^2}$$ $$\text{Now } u = 0, \quad s = 50\ \mathrm{cm} = 0.5\ \mathrm{m}, \quad a = 2880\ \mathrm{m/s^2}, \quad v = \,?$$ $$v^2 = u^2 + 2as \ \rightarrow \ v^2 = 0 + 2 \times 2880 \times 0.5$$ $$v = \sqrt{2880} = 53.66\ \mathrm{m/s} \approx 54\ \mathrm{m/s} \quad \text{for each particle.}$$
Actually, the acceleration is not constant in this case: as the particles move, the separation changes, so the force and hence the acceleration change continuously. I think the given solution is wrong, but the final answer happens to be correct. If you use energy conservation, which does not rely on the acceleration being constant, you get the same answer: $$ K_{1} + U_{1} = K_{2} + U_{2}$$ Here $K_{1} = 0$, since the particles start at rest, and with $r = 1\ \mathrm{m}$: $$U_{1} = - \frac{k q^{2}}{r}$$ For the final state, $K_{2} = m v^{2}$ (note carefully that it is $m v^{2}$, not $\frac{1}{2} m v^{2}$, because you must take the kinetic energy of the whole system: there are two particles, each with $\frac{1}{2} m v^{2}$, so $$2 \cdot \frac{1}{2} m v^{2} = m v^{2}.$$ If the masses were different, the speeds would differ too, and you would need momentum conservation to relate them.) Since the separation is halved, $$U_{2} = - \frac{2k q^{2}}{r}$$ Substituting these in gives $v = 24\sqrt{5}\ \mathrm{m/s} \approx 54\ \mathrm{m/s}$, the same numerical answer as above.
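A quick numerical cross-check of the energy-conservation result (a minimal sketch; it just plugs in the numbers already quoted in the question):

```python
# Speed of each particle when the separation halves, via energy conservation.
import math

k = 9.0e9            # Coulomb constant used in the question (N m^2 / C^2)
q = 4.0e-5           # magnitude of each charge (C)
m = 0.005            # mass of each particle (kg)
r1, r2 = 1.0, 0.5    # initial and final separations (m)

# m*v^2 = U1 - U2 = (-k q^2 / r1) - (-k q^2 / r2)
v = math.sqrt(k * q * q * (1.0 / r2 - 1.0 / r1) / m)
print(v, 24 * math.sqrt(5))   # both ~53.67 m/s
```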
{ "language": "en", "url": "https://physics.stackexchange.com/questions/716100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Yang-Baxter equation for an $S$ matrix depending on total momentum I have a system where the two-particle scattering matrix $S_{12}(p_1,p_2)$ depends on the momentum difference $p_1-p_2$, and also on the total momentum $P=p_1+p_2$ in some non-trivial way. One can use the parametrization of energies and momenta $E=m\cosh(\alpha)$ and $p=m\sinh(\alpha)$ to express everything in terms of the rapidities $\alpha$. Then, I obtain $S_{12}(\alpha,\beta)$ as a function of $\alpha-\beta$ and $\alpha+\beta$. It is known that if $S_{12}$ satisfies the Yang-Baxter equation (YBE), then the $N$-body $S$ matrix factorizes as a product of 2-body $S$ matrices. But if $S$ depends on the combination $\alpha+\beta$, not only on $\alpha-\beta$, can we already rule out the possibility that such an $S$ matrix satisfies the YBE? Or is it possible that, even depending on $\alpha+\beta$ (as well as $\alpha-\beta$), the YBE can still be satisfied? The quantity $p_1+p_2$ can always be expressed in terms of the invariant: $$s^{2}=(E_1+E_2)^{2}-(p_1+p_2)^{2}$$ But I don't know if that is of any help in such cases.
It certainly cannot be ruled out that such scattering matrices exist. An example: the YBE (Yang-Baxter equation) is a matrix equation, so if you consider a scalar or diagonal scattering matrix (i.e. no exchange of charge, no change of particle type), then it is satisfied for any $S(p,q)$ due to commutativity: $$ S(p_1,p_2) S(p_1,p_3) S(p_2,p_3) = S(p_2,p_3) S(p_1,p_3) S(p_1,p_2). $$ If $S$ is non-diagonal (particle type can change in scattering), this is a non-trivial requirement, even if $S(p,q) = S(\alpha-\beta)$ depends only on differences of rapidities. For non-diagonal scattering matrices I see no reason why the YBE should be incompatible with a general $S(p,q)$, even though the constraint is stronger and it might be more difficult to find an example. WP states $$S(p,q) = S(\alpha-\beta)$$ as a "common Ansatz", but not as a requirement.
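To make the diagonal case concrete, here is a minimal numerical sketch (entirely my own construction: two internal states per particle and made-up phases that depend on both $\alpha-\beta$ and $\alpha+\beta$). It only illustrates that a diagonal $S(p,q)$ satisfies the matrix YBE trivially; it says nothing about the non-diagonal case.

```python
# Numerical illustration: a *diagonal* two-body S-matrix with arbitrary dependence
# on both the difference and the sum of the rapidities satisfies the YBE trivially.
# The specific phases below are made up for illustration only.
import numpy as np

def S(a, b):
    """Diagonal 4x4 S-matrix on C^2 (x) C^2 for rapidities a, b."""
    phases = [np.exp(1j * ((k + 1) * (a - b) + 0.3 * k * (a + b))) for k in range(4)]
    return np.diag(phases)

I2 = np.eye(2)

# Swap operator on C^2 (x) C^2: |i,j> -> |j,i>
P = np.zeros((4, 4))
for i in range(2):
    for j in range(2):
        P[2 * i + j, 2 * j + i] = 1.0

P23 = np.kron(I2, P)  # swaps tensor factors 2 and 3 of C^2 (x) C^2 (x) C^2

def S12(a, b): return np.kron(S(a, b), I2)               # acts on factors 1,2
def S23(a, b): return np.kron(I2, S(a, b))               # acts on factors 2,3
def S13(a, b): return P23 @ np.kron(S(a, b), I2) @ P23   # acts on factors 1,3

a1, a2, a3 = 0.7, -0.2, 1.9   # arbitrary rapidities
lhs = S12(a1, a2) @ S13(a1, a3) @ S23(a2, a3)
rhs = S23(a2, a3) @ S13(a1, a3) @ S12(a1, a2)
print(np.allclose(lhs, rhs))  # True: diagonal matrices commute
```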
{ "language": "en", "url": "https://physics.stackexchange.com/questions/716268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }