Manometer physics? How to keep nozzles from draining? Picture two liquid spray nozzles spaced vertically by some distance. Both nozzles are connected via flexible tubing to a single valve via a tee. Once the nozzle lines are full of fluid, how can I prevent the fluid from draining out of the bottom nozzle when the valve is shut? Would routing the lower nozzle's tubing up to the same height as the higher nozzle work? Is there some other physics magic I could employ? I'd prefer not to use check valves, two valves, or any additional hardware.
Add another valve onto one of the tees? Inject an air-gap into the lines (with an appropriate configuration of lines)? Once you have a continuous and unbroken line of liquid that is free to move, the entire system simply acts as a container in which the liquid will find the lowest level (and draw air in at the highest). So without additional control hardware of some kind, you cannot allow the nozzles to remain in a configuration where they are at differing levels once the pressure is turned off, if you want the supply lines to each nozzle to retain unvented liquid.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/384041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Magnetic force direction Good day all! While trying to solve this question I used the right-hand rule, and according to it the force should be directed outwards (pointing toward me), but the given answer puzzled me. I really don't get why it is down, and I would be very grateful if someone could explain the reason. Thanks in advance!
We note that from the symmetry of the problem, there can be no force directed out of the plane of the page (i.e. it cannot be towards/away from you, as the components from opposite ends of the coil cancel out in this direction). We should also note that to find the direction of the force, we use the Left Hand Rule, not the right. Since the field points at an angle to the normal of the coil's plane, we will get a net force downwards (just show this using the LHR). Alternative methods to determine the force would be as the second picture describes, i.e. using magnetic dipoles.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/384200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why is quantum mechanical momentum the derivative of the wave function with respect to the position? In classical mechanics the momentum is defined as mass times the time-derivative of position. In quantum mechanics, however, the time-derivative of the wave function gives the Hamiltonian, while the momentum operator is $-i\hbar \frac{\partial}{\partial x} \Psi(x) $, which is a space-derivative and not a time-derivative. Note that I understand why momentum is an operator on the wave function (it's a measurable quantity, so it's an operator as per a postulate of QM). I understand the derivation from spatial translation, but I don't understand why it's the equivalent of the classical momentum, as it's a space derivative and not a time derivative.
This interlinkage between the classical and the quantum mechanical momentum can be thought of as a consequence of de Broglie's relation, namely that the velocity of the wave depends on its wavelength. Though the Hamiltonian operator is obtained from the same relation, looking at things from this perspective could help in realizing that this relationship is fundamental in nature and is not derived from anywhere else.
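To make the space-derivative/classical-momentum link concrete, here is a small symbolic check (a sketch using sympy; the symbols are generic): applying $-i\hbar\,\partial_x$ to a plane wave $e^{ikx}$ returns $\hbar k$, the de Broglie momentum, and this equals $m$ times the group velocity $d\omega/dk$ of a non-relativistic wave packet, i.e. the classical $p = mv$.

```python
import sympy as sp

x, k, m, hbar = sp.symbols('x k m hbar', positive=True)

# Plane wave of definite wavelength lambda = 2*pi/k
psi = sp.exp(sp.I * k * x)

# Momentum operator: -i*hbar * d/dx
p_psi = -sp.I * hbar * sp.diff(psi, x)
eigenvalue = sp.simplify(p_psi / psi)   # de Broglie momentum hbar*k

# Non-relativistic dispersion omega(k) = hbar*k^2 / (2m);
# the group velocity d(omega)/dk is the particle's classical velocity
omega = hbar * k**2 / (2 * m)
v_group = sp.diff(omega, k)             # hbar*k/m

# Classical momentum m*v_group reproduces the operator eigenvalue
assert sp.simplify(m * v_group - eigenvalue) == 0
print(eigenvalue)
```

So the space derivative extracts exactly the quantity that, through the dispersion relation, plays the role of mass times velocity.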
{ "language": "en", "url": "https://physics.stackexchange.com/questions/384548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Magical equations in statistical mechanics Studying ensembles in statistical mechanics I have found some formulas such as $$ S = k\frac{\partial}{\partial T} (T\ln Z) $$ or $$ \langle E \rangle = -\frac{\partial}{\partial\beta} \ln Z $$ or $$ \langle \Delta E ^2 \rangle = - \frac{\partial \langle E \rangle}{\partial \beta}$$ among many others. These formulas are mostly derived in the scope of one particular ensemble and, at least in the bibliography I am reading, the authors give no argument other than "if we do this calculation it just works". They give no physical reasons for this to be so. Then I see that everyone uses these formulas in any other ensemble as if they were a general result, with no proof. Are these kinds of formulas valid in general for any distribution? If so, why? Or are they valid only for some special distributions?
The partition function is defined as a sum over microstates at a specific temperature (with $\beta = 1/kT$) $$Z(\beta) = \sum_s e^{-\beta E_s}$$ where $E_s$ is the energy of the microstate $s$. Recall that if our system is connected to a large heat bath at temperature $T$, the probability for our system to be in state $s$ is $$\text{Prob}(s) = \frac{e^{-\beta E_s}}{Z(\beta)}.$$ That means that the expected energy $\langle E \rangle$ is $$\langle E \rangle = \sum_s E_s \text{Prob}(s) = \frac{\sum_s E_s e^{-\beta E_s}}{Z(\beta)} = \frac{- \frac{\partial}{\partial \beta} Z(\beta)}{Z(\beta)} = - \frac{\partial}{\partial \beta} \ln(Z(\beta)).$$ The other identities can be derived using similar manipulations.
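A quick numerical check of this identity for a toy three-level system (the energies are assumed values, with $k_B = 1$): the Boltzmann average of $E$ agrees with a finite-difference derivative of $-\ln Z$.

```python
import numpy as np

# Toy system: three microstates with assumed energies E_s (k_B = 1)
E = np.array([0.0, 1.0, 3.0])

def Z(beta):
    return np.sum(np.exp(-beta * E))

beta = 0.7

# Direct Boltzmann average <E> = sum_s E_s exp(-beta E_s) / Z
probs = np.exp(-beta * E) / Z(beta)
E_avg = np.sum(E * probs)

# Identity from the answer: <E> = -d(ln Z)/d(beta), via central difference
h = 1e-6
dlnZ = (np.log(Z(beta + h)) - np.log(Z(beta - h))) / (2 * h)

assert abs(E_avg + dlnZ) < 1e-8
print(E_avg)
```

Differentiating $\ln Z$ once more with respect to $\beta$ gives the variance identity $\langle \Delta E^2 \rangle = -\partial\langle E\rangle/\partial\beta$ by the same kind of manipulation.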
{ "language": "en", "url": "https://physics.stackexchange.com/questions/384708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Running or walking up stairs = same work? I have a question about the picture below. It is claimed that whether you walk or run up the stairs, the same work is done. When work equals (force in the direction of motion) times distance, I don't understand why this should be correct. When I run up the stairs I definitely accelerate much faster, and I "gain" more kinetic energy because of my velocity. On the other hand, when I define my work by the negative difference of potential energy, the statement would be correct. Why is this not a contradiction? It seems like I am making errors in correctly separating physical systems, but I can't figure this out. Can you help me out? To make my question more precise: say we reduce the whole thing to a simple straight vertical movement. The upward force is given by acceleration times mass. Faster movement upwards requires more acceleration, hence more force, and that should give more work. Isn't that correct?
The confusion arises from the conflation of acceleration and work, and the intuition that higher acceleration means more energy. It is clearer if you examine the starting and ending states; they are equivalent. Examining the acceleration component, this is counterintuitive because there is more energy exerted per unit of time, but you must also consider that there are fewer units of time. The total energy is, ideally, equivalent; the additional energy per unit of time is precisely counterbalanced by the reduced amount of time. This can be unintuitive because running feels like more work, but this is an artifact of the way our bodies work (switching from aerobic to anaerobic, which is less efficient), rather than a characteristic of physics itself.
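A minimal numeric illustration of this (the mass, height, and climb times are assumed values): the work $mgh$ is the same either way; only the power differs.

```python
# Work to raise a mass m through height h is m*g*h, independent of how fast;
# only the power (work per unit time) differs. Illustrative assumed numbers:
m, g, h = 70.0, 9.81, 3.0          # kg, m/s^2, m (one flight of stairs)

t_walk, t_run = 10.0, 3.0          # seconds taken for the climb

W = m * g * h                      # same work either way
P_walk = W / t_walk
P_run = W / t_run

assert P_run > P_walk                                 # more energy per unit time...
assert abs(P_walk * t_walk - P_run * t_run) < 1e-9    # ...over fewer units of time
print(W)
```

If the runner arrives at the top still moving, the extra kinetic energy $\tfrac12 mv^2$ is indeed additional work beyond $mgh$; comparing like states (at rest at the bottom, at rest at the top) is what makes the two climbs equal.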
{ "language": "en", "url": "https://physics.stackexchange.com/questions/384839", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Reproducing Ramond's sunset diagram calculation for $\phi^4$ theory I am unable to reproduce the calculation of the sunset diagram for $\phi^4$ theory in Pierre Ramond's Field Theory: A Modern Primer. This is the second edition, chapter 4.4. He starts with eq. (4.4.19) \begin{equation} \Sigma(p) = \frac{\lambda^2 (\mu^2)^{4-2\omega}}{6} \int \frac{d^{2\omega}\ell}{(2\pi)^{2\omega}} \frac{d^{2\omega}q}{(2\pi)^{2\omega}} \frac{1}{\ell^2+m^2} \frac{1}{q^2+m^2} \frac{1}{(q+p-\ell)^2+m^2} \end{equation} He introduces 1 in the form \begin{equation} 1=\frac{1}{4\omega}\left[ \frac{\partial \ell_\mu}{ \partial \ell_\mu}+ \frac{\partial q_\mu}{ \partial q_\mu}\right] \end{equation} to get \begin{equation} \Sigma(p) = \frac{\lambda^2 (\mu^2)^{4-2\omega}}{6} \int \frac{d^{2\omega}\ell}{(2\pi)^{2\omega}} \frac{d^{2\omega}q}{(2\pi)^{2\omega}}\frac{1}{4\omega}\left[ \frac{\partial \ell_\mu}{ \partial \ell_\mu}+ \frac{\partial q_\mu}{ \partial q_\mu}\right] \frac{1}{\ell^2+m^2} \frac{1}{q^2+m^2} \frac{1}{(q+p-\ell)^2+m^2} \end{equation} then integrates by parts and discards the boundary terms to get \begin{equation} \Sigma(p) = -\frac{\lambda^2 (\mu^2)^{4-2\omega}}{6} \times\\ \int \frac{d^{2\omega}\ell}{(2\pi)^{2\omega}} \frac{d^{2\omega}q}{(2\pi)^{2\omega}}\frac{1}{4\omega}\left[ \ell_\mu\frac{\partial}{ \partial \ell_\mu}+ q_\mu \frac{\partial}{ \partial q_\mu}\right] \frac{1}{\ell^2+m^2} \frac{1}{q^2+m^2} \frac{1}{(q+p-\ell)^2+m^2}\qquad (1) \end{equation} All of that is fine, but then he says that explicit differentiation gives the result \begin{equation} \Sigma(p) = \frac{1}{2\omega-3}\frac{\lambda^2 (\mu^2)^{4-2\omega}}{6}\int \frac{d^{2\omega}\ell}{(2\pi)^{2\omega}} \frac{d^{2\omega}q}{(2\pi)^{2\omega}}\frac{3 m^2 + p\cdot(p+q-\ell)} {(\ell^2+m^2) (q^2+m^2) [(q+p-\ell)^2+m^2]^2} \end{equation} I cannot find how to reproduce this formula. In fact I do not understand how the coefficient $1/\omega$ disappears and the coefficient $1/(2\omega-3)$ can appear.
Indeed, for general momenta $\ell$ and $k$, \begin{equation} \ell_\mu\frac{\partial}{ \partial \ell_\mu} \frac{1}{(k-\ell)^2+ m^2} = \frac{2\ell\cdot (k-\ell)}{[(k-\ell)^2+ m^2]^2} \end{equation} When you use this in (1), you only get inner products of momenta. Without having to do the calculation in detail, how can the dimension $\omega$ (dis)appear in the computation?
Silly me, it is the 't Hooft Veltman regularisation scheme explained 5-6 pages earlier. I guess that's what happens when you start reading a section in the middle of the book.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/385031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Electric Motor Project Not Working Long time reader, first time poster :) I'm a first-year physics teacher in a high school, and I'm running a science fair where students research their own topics and create demonstrations. One group has chosen electric motors, and they are making a motor out of a battery, paperclips, a coil of wire, and a neodymium magnet. They have shaved off the top half of the wire's insulation. However, once we put the coil in the paperclips and try to spin it, it spins just a few times before being attracted to the magnet. I've done this project before for my degree, but for the life of me I can't figure out what's going on here. The wire is definitely conducting electricity (quite hot to the touch), and I've double- and triple-checked that only the top half of the wire touching the paperclips is sanded. We have also tried magnets of varying strength, all to no avail. Any advice? Thanks ahead of time for your time and help!!
You have to check, at the supporting ends of the wire, how the shaved-off (bare) side of the wire is oriented with respect to the coil plane. If this is not done right, you will not get any sustained rotation. It might be better to shave off the insulation completely and paint on a new half-insulation with a marker, as shown here, where possible reasons for a failure to work are also given.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/385120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does the potential $V(φ)$ of a scalar field decrease with the expansion of space? If a scalar field (eg. inflaton field) starts with a high potential. Does the potential $V(φ)$ of the scalar field decrease with the expansion of space? If it doesn’t decrease, would it mean that extra energy is created to fill in the additional space so that its potential $V(φ)$ remains the same throughout the space? I’m a layman so a non-mathematical answer would be appreciated.
You're probably used to a Minkowski-space Lagrangian density such as $\frac{1}{2}\eta^{\mu\nu}\partial_\mu\phi\partial_\nu\phi -V(\phi)$. In curved spacetime, this generalises to $\sqrt{|g|}(\frac{1}{2}g^{\mu\nu}\partial_\mu\phi\partial_\nu\phi -V(\phi))$ with $g:=\det g_{\mu\nu}$. We model the expansion of 3-dimensional space with $ds^2=g_{\mu\nu}dx^\mu dx^\nu=dt^2-a^2(t)d\mathbf{x}^2$ (I won't go into the form of $d\mathbf{x}$ for now), so $\sqrt{|g|}\propto a^3$.
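The answer stops at the measure factor, but varying this action for a spatially homogeneous field gives the equation of motion $\ddot\phi + 3H\dot\phi + V'(\phi) = 0$, where the $3H\dot\phi$ term acts as "Hubble friction" that drains the field's energy as space expands. A rough numerical sketch with $V = \frac12 m^2\phi^2$ and, as a simplifying assumption, constant $H$ (the numeric values are arbitrary):

```python
# Scalar field in an expanding background (V = m^2 phi^2 / 2):
#   phi'' + 3 H phi' + m^2 phi = 0
# Constant H and the parameter values are assumptions to keep the sketch minimal.
H, m = 0.1, 1.0
dt, steps = 1e-3, 100_000

phi, phidot = 1.0, 0.0
energy = lambda p, pd: 0.5 * pd**2 + 0.5 * m**2 * p**2

E0 = energy(phi, phidot)
for _ in range(steps):          # semi-implicit Euler integration
    phidot += (-3 * H * phidot - m**2 * phi) * dt
    phi += phidot * dt

# Hubble friction has drained almost all of the field's energy density
assert energy(phi, phidot) < 0.01 * E0
print(energy(phi, phidot))
```

So an oscillating scalar field does lose energy to the expansion; what a slowly rolling inflaton does instead depends on the shape of $V(\phi)$, which is where the full answer would have continued.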
{ "language": "en", "url": "https://physics.stackexchange.com/questions/385294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Question about calculating |Wavefunction|^2 In one of my homework problems in quantum mechanics, I was asked to find $|Ψ(x,t)|^2$, where \begin{align} Ψ(x,t)&=1/\sqrt{10}[3ψ_1(x)e^{-iE_1t/ħ}-ψ_3(x)e^{-iE_3t/ħ}]\, ,\\ &=1/\sqrt{10}[3\sqrt{2/a} \sin(\pi x/a)e^{-iE_1t/ħ}- \sqrt{2/a}\sin(3\pi x/a)e^{-iE_3t/ħ}] \end{align} So, these are my calculation steps: \begin{align}|Ψ(x,t)|^2&=Ψ^*(x,t)Ψ(x,t)\, ,\\ &=\frac{1}{10}[3ψ_1(x)e^{-iE_1t/ħ}-ψ_3(x)e^{-iE_3t/ħ}]^* [3ψ_1(x)e^{-iE_1t/ħ}-ψ_3(x)e^{-iE_3t/ħ}]\, ,\\ &=\frac{1}{10}[3ψ_1^*(x)e^{iE_1t/ħ}-ψ_3^*(x)e^{iE_3t/ħ}]^* [3ψ_1(x)e^{-iE_1t/ħ}-ψ_3(x)e^{-iE_3t/ħ}]\, ,\\ &=\frac{1}{10}[9ψ_1^*(x)ψ_1(x)-3ψ_3^*(x)ψ_1(x)e^{iE_3t/ħ}e^{iE_1t/ħ}-3ψ_1^*(x)ψ_3(x)e^{iE_1t/ħ}e^{iE_3t/ħ}-ψ_3^*(x)ψ_3(x)] \end{align} However the answer from the internet shows that the result, after tedious simplifications, is: $$\frac{1}{10}[9ψ_1^2(x)+ψ_3^2(x)-6ψ_1(x)ψ_3(x)\cos((E_3-E_1)/ħ)t] $$ So, the question is how $9ψ_1^*(x)ψ_1(x)$ becomes $9ψ_1^2(x)$, $ψ_3^*(x)ψ_3(x)$ becomes $ψ_3^2(x)$, and $-3ψ_3^*(x)ψ_1(x)e^{iE_3t/ħ}e^{iE_1t/ħ}-3ψ_1^*(x)ψ_3(x)e^{iE_1t/ħ}e^{iE_3t/ħ}$ becomes $-6ψ_1(x)ψ_3(x)\cos((E_3-E_1)/ħ)t$? Because the professor and tutors don't go very deeply into wavefunction conjugation and the rules of conjugation, I have no idea how to simplify the problem further into the form of the answer found on the internet. It would be great if someone could explain the rules or tricks for dealing with $ψ^*(x)$. Thanks
First note that in general the wavefunction is complex, not real. This answers the first two questions you asked, since for any complex number $z = a + ib$, $|z|^2 = z^*z$. Then, \begin{equation} \psi_1^*(x) \psi_1(x) = |\psi_1(x)|^2 \quad \text{and} \quad \psi_3^*(x) \psi_3(x) = |\psi_3(x)|^2. \end{equation} You can save time in the next part by recognizing that one term is simply the complex conjugate of the other. To illustrate, define $A$ as \begin{equation} A = 3\psi_1^* \psi_3 e^{i(E_1-E_3)t/\hbar}, \end{equation} then you are trying to do the following sum, \begin{equation} -A - A^*. \end{equation} Since $A$ is complex, we can say $A = a+ib$, and then the sum looks like \begin{align} -A - A^* &= -(a+ib) - (a-ib) \\ &= -2a\\ &= -2 \text{Re}[A]\\ &= - 2 \text{Re}[3\psi_1^* \psi_3 e^{i(E_1-E_3)t/\hbar}] \\ &=-6 \psi_1 \psi_3 \cos[(E_1-E_3)t/\hbar] \end{align} where I went from the second-to-last line to the last line by assuming $\psi_1$ was real, so $\psi_1^* = \psi_1$. Also note that in your calculations you have dropped signs in a few places when expanding the products of the complex conjugates, notably in the exponential factors.
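If you want to convince yourself numerically, here is a short check (taking $\hbar = m = 1$ and well width $a = 1$, which are assumptions for the check only) that the direct $\Psi^*\Psi$ product equals the cosine form:

```python
import numpy as np

# Infinite square well states (real) and energies, in units hbar = m = a = 1
a = 1.0
psi = lambda n, x: np.sqrt(2 / a) * np.sin(n * np.pi * x / a)
E = lambda n: n**2 * np.pi**2 / 2

x = np.linspace(0.01, 0.99, 50)
t = 0.37                         # arbitrary time

# Direct computation: Psi = (1/sqrt(10)) [3 psi_1 e^{-i E_1 t} - psi_3 e^{-i E_3 t}]
Psi = (3 * psi(1, x) * np.exp(-1j * E(1) * t)
       - psi(3, x) * np.exp(-1j * E(3) * t)) / np.sqrt(10)
direct = np.abs(Psi)**2

# Closed form after the simplification above
closed = (9 * psi(1, x)**2 + psi(3, x)**2
          - 6 * psi(1, x) * psi(3, x) * np.cos((E(3) - E(1)) * t)) / 10

assert np.allclose(direct, closed)
```

The agreement at every sampled $x$ and any $t$ you choose is exactly the $-A - A^* = -2\,\text{Re}[A]$ identity at work.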
{ "language": "en", "url": "https://physics.stackexchange.com/questions/385464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Feshbach resonance: why are hyperfine structures important? The hyperfine structure of the energy levels around the ground state seems to enable Feshbach resonances and be intrinsic to them. Why do we need hyperfine levels? I.e., why are Feshbach resonances specific to ultracold atoms in the ground state? Why would a magnetic field in any gas of atoms not achieve the similar effect of inducing atom-atom coupling? Thanks in advance,
I think what you are forgetting is that a given Feshbach resonance is between two atoms in a certain scattering channel. This means, in particular, that it only occurs for one partial wave. As far as I know, all observed Feshbach resonances have been either s-wave or p-wave, but presumably they also exist for higher partial waves. Anyway, the point is that in a room-temperature gas, atoms interact via many of these partial waves, and the impact of any particular Feshbach resonance would be small. The threshold energy for a partial wave with angular momentum $l$ goes as $E_{th}(l)\sim A (l(l+1))^{3/2}$, where $A$ depends on the inter-atomic potential and the masses and might range from 10 $\mu$K to 1 mK. So, at the kelvin scale and above, it would presumably be difficult to see the effects of any one collision channel. However, I do not know if anyone has tried this experiment, or if it would be feasible with some very careful measurement. There are probably other considerations too. For example, remember that a typical ultracold atomic gas has something like $10^{-6}$ the density of air, which is important for two-body collisions to dominate the physics.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/385783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How do photons interact with a very fine edge? Suppose you have some material which has a low reflectance and is opaque to an incoming photon when the angle of incidence is small. Now take that material and make a narrow (20:1 width:length ratio or narrower) wedge with a fine edge, 1/10th the wavelength of said photon or less, so the very edge itself should be transparent, and shoot said photon directly down the edge of the wedge like so. How does the photon react? Does the photon treat the edge as slicing a probability wave, resolving to having been deflected randomly to one side of the wedge or the other when observed... or does it treat the edge as if it were no finer than the wavelength, and so act as a perpendicular strike on a blunt edge and be absorbed? Is there other quantum weirdness involved, or other information required to answer fully?
For the reasons explained by @WetSavannaAnimal aka Rod Vance you're just asking a classical optics question, and I'll supplement that by trying to answer the classical optics question. A small fraction of light will get scattered or absorbed at the sharp tip, but as the tip gets sharper and sharper, that fraction gets lower and lower. The rest of the light will get absorbed along the two sides of the triangle. (If it was a triangle of glass, the light would reflect off the sides of the triangle, but you said in the question that the triangle is made of a very black material with negligible reflectance.) What is the exact probability distribution for where along the edges the photon is likeliest to be absorbed? It depends on the detailed wavefunction of the incoming photon (or in classical-speak, the incident light wave's phase and intensity profile). See also knife edge prisms.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/386073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Why do we use superposition instead of tensor product in interferometer? In the description of a neutron interferometer here, it says: In an interferometer the incident beam is split into two (or more) separate beams. The beams travel along different paths where they are exposed to different potentials (which results in different phases). At some point the beams are brought together again and allowed to interfere. The resulting beam is the superposition of the separated beams: $$ \psi = \psi_I + \psi_{II} $$ I am interested in why the total wavefunction is not written as $\psi=\psi_I\otimes \psi_{II}$. Because when they are separated, they should be considered as two physical systems, and we should use a tensor product to describe them, even if they are later combined, right?
The confusion comes from "and allowed to interfere". Superposition is not interaction, and they are describing a superposition of two beams of neutrons. It is similar to the superposition of two laser beams split from the same original, which show interference fringes due to the superposition of the two beams, and it is well known that photons do not interact except at high orders with very low probabilities. The whole neutron interferometer is considered one physical system where the neutrons are not interacting with each other; it is only the potentials that change, in too complicated a way to really solve for the total system wavefunction, but the addition of the two partial ones at the end is a good approximation. Tensor products would be used in the density matrix formalism, where individual neutrons are considered. The psis in your quote are the beam psis, not the individual neutron ones.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/386183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How is a particle in a ring in a bound state? A particle in a potential well forms a bound state, which gives rise to discrete energy levels. But the states of a particle on a ring also have discrete energy levels, even though there is no potential well. In what sense, then, is the system 'bound'? Or is it not necessary for a potential well to exist for a system to be bound?
Discrete energy levels usually appear due to the presence of boundary conditions in solving Schrödinger's equation. Boundary conditions can come from the shape of the potential, but also, for example, from symmetries of the problem. If we imagine a one-dimensional infinite potential well, the confined particle does not have access to the region where the potential assumes the value $+\infty$. So the wave describing the particle must vanish at the walls, because the squared modulus of the wave function represents the probability of finding the particle at a given point: you cannot find the particle there. Since not every wave vanishes at those points, only the waves with an appropriate number of periods are allowed, and from this come the discrete allowed values of energy. Describing a particle on a ring, instead, requires another condition. Suppose $\psi(\theta)$ is the wavefunction describing the particle, where $\theta$ is the angle that parametrizes the ring. $\psi(\theta)$ allows us to calculate the probability of finding the particle at every angle, but one can see that the following condition must be satisfied: $$ \psi(\theta)=\psi(\theta+2\pi) $$ After a complete lap, I must return to the same value of the wave function. As before, not every wave satisfies this condition, so only certain waves and energy values are allowed. Note: this condition can be weakened, because one could require only that the probability, not the wavefunction itself, be the same after a complete lap. This condition can be expressed as follows: $$ |\psi(\theta)|^2=|\psi(\theta+2\pi)|^2 $$ And its solutions can lead to strange results: $$ \psi(\theta)=e^{i\alpha}\psi(\theta+2\pi)\qquad \alpha\in \mathbb{R} $$ It turns out that different values of $\alpha$ describe different types of particles, in particular bosons ($\alpha =0$), fermions ($\alpha =\pi$), and anyons ($\alpha \neq 0,\pi$).
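The quantization forced by the periodicity condition can be illustrated in a few lines (taking $\hbar = m = R = 1$, an assumption of the sketch): $\psi_n(\theta) = e^{in\theta}$ is single-valued only for integer $n$, which makes the spectrum $E_n = n^2/2$ discrete.

```python
import numpy as np

# Periodic boundary condition psi(theta) = psi(theta + 2*pi) selects
# psi_n(theta) = exp(i n theta) with integer n; units hbar = m = R = 1.
theta = np.linspace(0, 2 * np.pi, 100)

def wraps(n):
    psi = np.exp(1j * n * theta)
    return np.allclose(np.exp(1j * n * (theta + 2 * np.pi)), psi)

assert wraps(2) and wraps(-3)    # integer n: single-valued wavefunction
assert not wraps(0.5)            # non-integer n fails the condition

# The resulting energy levels are discrete: E_n = n^2 / 2
energies = sorted({n**2 / 2 for n in range(-3, 4)})
print(energies)   # [0.0, 0.5, 2.0, 4.5]
```

So the discreteness comes from the closed topology of the ring rather than from a confining well.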
{ "language": "en", "url": "https://physics.stackexchange.com/questions/386299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why do charged particles deflect one way but not the other in a magnetic field? I am well aware that a charged particle moving in a magnetic field will experience a force perpendicular to that magnetic field. But why is it that positive and negative particles experience a force in opposite directions? What exactly determines the direction that a given charge will experience a force? I.e. why does a negative particle experience a force in one direction and not the other?
I'm not really sure I have understood your question; however, you have to consider the sign of the charge inside the formula: $$\mathbf{F_{Lor}}=q\mathbf{v}\times\mathbf{B}$$ So if $$q=|q|$$ then $$\mathbf{F_{Lor+}}=|q|(\mathbf{v}\times\mathbf{B})$$ whereas if $$q=-|q|$$ then $$\mathbf{F_{Lor-}}=-|q|(\mathbf{v}\times\mathbf{B})$$ As you can see: $$\mathbf{F_{Lor-}}=-\mathbf{F_{Lor+}}$$ The forces acting on a positive and on a negative charge are opposite vectors.
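A concrete numeric illustration (the velocity and field values are arbitrary, assumed numbers): flipping the sign of $q$ flips the force vector, and in both cases the force stays perpendicular to $\mathbf{v}$.

```python
import numpy as np

# F = q v x B: flipping the sign of the charge flips the force direction
q = 1.6e-19                      # C (magnitude of the elementary charge)
v = np.array([1e5, 0.0, 0.0])    # velocity along +x, m/s (assumed)
B = np.array([0.0, 0.0, 1.0])    # field along +z, tesla (assumed)

F_pos = q * np.cross(v, B)       # force on the positive charge
F_neg = -q * np.cross(v, B)      # force on the negative charge

assert np.allclose(F_neg, -F_pos)            # opposite vectors
assert np.allclose(np.dot(F_pos, v), 0.0)    # always perpendicular to v
print(F_pos)
```

Because the force is always perpendicular to $\mathbf{v}$, the two charges curve in opposite senses but neither gains kinetic energy from the magnetic field.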
{ "language": "en", "url": "https://physics.stackexchange.com/questions/386391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
What is "Symmetry of Infinity" in electricity and magnetism? I have this problem from my E&M textbook: Two infinitely long wires running parallel to the x axis carry uniform charge densities $+\lambda$ and $-\lambda$ (see photo). Find the potential at any point $(x,y,z)$, using the origin as your reference. The solution to this uses a random point and solves the problem there: It's stated that "due to the symmetry of infinity, we need only consider the z-y-plane. We plot an arbitrarily located point, without symmetry." Once here I could do the math of this just fine, but I don't understand what "due to the symmetry of infinity" means. I tried to look it up online (including stack exchange) and all I could find were journals that were related to this. I could not access them, and even if I could I probably wouldn't understand what was going on anyway. What is "the symmetry of infinity?" And how is it related to this problem?
To my knowledge, this is not a technical term you're expected to know, but merely a hand-wavy and brief way of pointing out that the charge distribution is independent of x, and so the potential must also be independent of x. "Infinity" is evocative of this fact because if the wires were not infinite in length, then the charge distribution (and the potential) would depend on x.
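To see this explicitly, here is a sketch of the standard two-line-charge potential, $V = \frac{\lambda}{2\pi\epsilon_0}\ln(s_-/s_+)$, with the distances to the negative and positive wires. The wire positions at $z=\pm d$ and the numeric values are assumptions, since the figure is not shown; the point is that $x$ simply never enters the formula, which is all the "symmetry of infinity" is claiming.

```python
import numpy as np

# Two infinite line charges +lambda and -lambda parallel to the x axis,
# assumed here to sit at z = +d and z = -d; V referenced to zero at the origin.
eps0, lam, d = 8.854e-12, 1e-9, 0.5

def V(y, z):
    s_plus = np.hypot(y, z - d)     # distance to the + wire
    s_minus = np.hypot(y, z + d)    # distance to the - wire
    return lam / (2 * np.pi * eps0) * np.log(s_minus / s_plus)

# No x anywhere: the potential is identical in every plane of constant x.
# On the plane equidistant from both wires (z = 0) the potential vanishes,
# consistent with the origin being the reference point:
assert abs(V(1.3, 0.0)) < 1e-9
assert V(0.2, 0.3) > 0              # positive closer to the + wire
print(V(0.2, 0.3))
```

For wires of finite length the distances $s_\pm$ would depend on where along x you sit, and this shortcut would fail.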
{ "language": "en", "url": "https://physics.stackexchange.com/questions/386670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Tension in the string of a pulley In the diagram above why is the tension of the string attached to the pulley at "A"(the string attached to roof) equal to 2T? Why is it not Mg+(M+m)g?(considering that the pulley is mass less) I have trouble understanding
It would be if the weights weren't accelerating, but they are, because they are not of equal mass. The tension in a string is not equal to the weight hanging from it when that weight is accelerating, so you cannot simply add Mg and (M+m)g. Apply Newton's second law to each mass separately. For a massless, frictionless pulley the tension T is the same on both sides, and both masses share a common acceleration magnitude A: the heavier side (M+m) falls while the lighter side M rises, so $$T - Mg = MA \qquad\text{and}\qquad (M+m)g - T = (M+m)A$$ Adding the two equations eliminates T: $$A=\frac{(M+m)g-Mg}{M+(M+m)}=\frac{mg}{2M+m}$$ Only the mass difference m provides a net driving force, while the whole mass 2M+m must be accelerated, so a small difference between large masses gives a small acceleration. Substituting back gives the tension: $$T=M(g+A)=\frac{2M(M+m)g}{2M+m}$$ The string at A must hold both sides of the string, so the force on it is $$2T=\frac{4M(M+m)g}{2M+m}=(2M+m)g-\frac{m^2g}{2M+m}$$ This is less than the total weight (2M+m)g of the two blocks: the centre of mass of the system accelerates downward, so the support does not have to carry the full weight, and the "missing" piece $m^2g/(2M+m)$ vanishes in the static limit m → 0, where 2T correctly reduces to the total weight 2Mg. Notice also that T lies between Mg and (M+m)g: the string pulls up on the heavier block with less than its weight (it accelerates down) and on the lighter block with more than its weight (it accelerates up).
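As a numeric sanity check, here is Newton's second law applied to each mass separately (the sample masses are assumed values):

```python
# Atwood machine check with assumed sample values
M, m, g = 3.0, 1.0, 9.81

A = m * g / (2 * M + m)                # common acceleration magnitude
T = M * (g + A)                        # tension, from T - M g = M A

# The same tension must satisfy Newton's law for the heavier mass too
assert abs((M + m) * g - T - (M + m) * A) < 1e-12

# Force on the pulley support = 2T, less than the total hanging weight
support = 2 * T
assert support < (2 * M + m) * g
assert abs(support - 4 * M * (M + m) * g / (2 * M + m)) < 1e-12
print(support)
```

With these numbers the support carries about 67.3 N while the two blocks weigh 68.7 N in total; the shortfall is the centre of mass accelerating downward.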
{ "language": "en", "url": "https://physics.stackexchange.com/questions/386777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
If the Bohr model is outdated and we know that there is no such thing as an "electron orbital circumference", then how is $2\pi r=n\lambda$ still valid? We know that the Bohr model is outdated and that there is no such thing as an "electron orbital circumference", so how is $2\pi r=n\lambda$ still valid? Edit: If the electrons in higher orbitals are not moving in a circular path, then how do we write $2\pi r=n\lambda$?
The Bohr model is a semi-classical model, treating the electrons like satellites of the proton, in the successful hydrogen atom solution. Its success relied on the fact that the Bohr assumptions reproduced the series that fitted the hydrogen emission spectra. The solution of the Schrödinger equation for the hydrogen atom reproduces the success of the Bohr model in fitting the spectra, and gives a theoretical basis for quantum mechanics, with the interpretation of $\psi^*\psi$ for the solutions $\psi$ as the probability of finding the electron at (x, y, z) around the proton. It has been shown that the most probable radius of the hydrogen ground state is the same as the Bohr radius, explaining the success of the Bohr model for this simple potential. Edit after edit of question: If the electrons in higher orbitals are not moving in a circular path, then how do we write $2\pi r=n\lambda$? It is a useful rule-of-thumb approximation, not an accurate statement. One would have to go through the calculations for each n. After all, it is only the most probable radius of the ground state that is identified with the lowest Bohr orbit radius; the expectation value of the radius (the average), even for the ground state, is 1.5 times the Bohr radius. It is, after all, a different mathematical model.
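The two radii mentioned (most probable versus average) are easy to verify numerically from the ground-state radial probability density $P(r)\propto r^2 e^{-2r/a_0}$, working in units where $a_0 = 1$:

```python
import numpy as np

# Hydrogen ground state R_10(r) ~ exp(-r/a0); radial probability P(r) ~ r^2 |R|^2
# Units: Bohr radius a0 = 1.
r = np.linspace(0.0, 30.0, 300_001)
P = r**2 * np.exp(-2 * r)              # unnormalised radial probability density

# Most probable radius: peak of P(r), which is the Bohr radius itself
r_peak = r[np.argmax(P)]
assert abs(r_peak - 1.0) < 1e-3

# Expectation value <r> = (integral of r P dr) / (integral of P dr) = 1.5 a0
r_mean = np.sum(r * P) / np.sum(P)     # simple Riemann-sum ratio
assert abs(r_mean - 1.5) < 1e-3
print(r_peak, r_mean)
```

The peak of the distribution sits at exactly one Bohr radius, which is why the Bohr picture works so well for the ground state, while the average is 1.5 times larger because the exponential tail extends far out.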
{ "language": "en", "url": "https://physics.stackexchange.com/questions/386927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Energy needed to overcome the coulomb barrier I am in my first year of college and doing an EPQ-type project on nuclear fusion, and am wondering about Coulomb's law. I am looking into the force experienced by a particle due to the Coulomb force and would like to find an equation that describes how much energy is needed to overcome this Coulomb force so the two hydrogen atoms can fuse. I have come up with an equation that makes sense to me and describes the amount of energy required for the two atoms to fuse. However, I'm uncertain if another equation I've found describes it better. This is my thinking: if Coulomb's law is given by the equation $$F = \frac{k \ q_1 q_2 \ e^2}{r^2}$$ then isn't the energy needed to overcome the repulsive force the integral of that equation between two distances $r_1$ and $r_2$, where $r_2$ would preferably be within the range of the strong nuclear force and $r_1$ is any distance that the particle starts from: $$E = \int_{r_2}^{r_1} \frac{q_1 q_2 \ e^2}{ 4 \pi \epsilon_0 r^2} dr$$ If it is, then what does this other equation describe? It is the Coulomb potential energy equation: $$ V_C=\frac{e^2}{4\pi\epsilon_0}\frac{Z_aZ_b}{R_a+R_b}$$
Your equation for $E$, the work done by an external force to bring the two charges from a separation of $r_2$ to a separation of $r_1$, should be $$E = \int_{r_2} ^{r_1} -\dfrac {q_1q_2 e^2}{4 \pi \epsilon_0 r^2 } dr = \dfrac {q_1q_2 e^2}{4 \pi \epsilon_0 } \left( \dfrac {1}{r_1}-\dfrac {1}{r_2}\right)$$ If you take the electric potential energy $E$ to be zero when the initial separation of the charges is $r_2 =\infty$, then this equation reduces to $$E = \dfrac {q_1q_2 e^2}{4 \pi \epsilon_0 r_1} $$ and this is to be compared with your equation $$V_{\rm c} = \dfrac {Z_{\rm a} Z_{\rm b} e^2}{4 \pi \epsilon_0 (R_{\rm a} +R_{\rm b})} $$ with $q_1 = Z_{\rm a}, \, q_2 = Z_{\rm b}$ and $r_1 = R_{\rm a}+R_{\rm b}$ the closest approach of the centres of the two nuclei, where $R_{\rm a}$ and $R_{\rm b}$ are the radii of the hydrogen nuclei.
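Plugging in numbers for two protons makes the comparison concrete (a sketch; the roughly 1 fm nuclear radii are assumed, order-of-magnitude values):

```python
import math

# Coulomb barrier estimate for two protons (Z_a = Z_b = 1) brought into contact.
e = 1.602176634e-19          # C (elementary charge, exact by definition)
eps0 = 8.8541878128e-12      # F/m (vacuum permittivity)
Ra = Rb = 1.0e-15            # m; assumed ~1 fm nuclear radius each

Za = Zb = 1
E_J = (Za * Zb * e**2) / (4 * math.pi * eps0 * (Ra + Rb))
E_MeV = E_J / e / 1e6        # convert J -> eV -> MeV

# Comes out below 1 MeV, the usual ballpark quoted for the p-p barrier
assert 0.1 < E_MeV < 1.5
print(E_MeV)
```

This barrier height (a bit under 1 MeV) is far above typical thermal energies in stellar cores, which is why quantum tunnelling, not classical barrier-hopping, dominates real fusion rates.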
{ "language": "en", "url": "https://physics.stackexchange.com/questions/387068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is there a SI unit for space-time? Space and time are routinely combined into space-time nowadays, which implies that the SI meter and second should be combined into a single SI unit such as [meter-second]. So far, I haven't come across such a SI unit.
The unit remains the meter. Since the speed of light $c$ is a constant for all inertial observers, there's no problem in multiplying time by this number to get meters, that is $$ ds^2 = c^2 dt^2 - dr^2 $$ Thus $ds$ has units of length.
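A one-line numerical illustration of the convention (my own toy numbers): multiplying $t$ by $c$ puts both terms of the interval in metres, and for a light ray the interval vanishes.

```python
c = 299_792_458.0  # speed of light, m/s
dt = 1.0           # one second of elapsed time
dr = c * dt        # distance a light ray covers in dt, in metres
ds2 = (c * dt) ** 2 - dr ** 2   # both terms are now in m^2
print(c * dt, ds2)  # one second "is" ~3e8 m; ds^2 = 0 for light
```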
{ "language": "en", "url": "https://physics.stackexchange.com/questions/387324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Electric potential energy and equipotential lines Consider an electric dipole as in the figure. There is a vertical equipotential line/surface. If I understand correctly, electric potential is the amount of work done per unit charge in bringing a charge from infinity to a distance r from a charge. But consider moving along the equipotential line from infinity to a point along the equipotential line. Surely the work done to move the charge must be zero, since we are moving along an equipotential line. However, the electrostatic field is conservative, so work done in moving from infinity to a given point should be the same along any path. Therefore, work done in moving a charge from infinity to a point close to a dipole is zero. This is clearly not the case, where have I gone wrong?? Thank you!
Equipotential lines are always at right angles with the electric field (most clearly shown in the centermost equipotential line). This implies that if a charge were to move along an equipotential line then throughout the entire journey $F_{electric} \perp dr $ and hence, $F_{electric} \cdot dr = 0$. To move a charge along an equipotential line, you'd need to supply two forces: one to cancel out the net force from the two charges, and the other to move it along the equipotential. So: $F = -F_{electric} + F_{tangential}$. Calculating the total work done: $$W = \int{F \cdot dr} = \int{(-F_{electric} \cdot dr)} + \int{F_{tangential} \cdot dr}$$ As we previously argued, $F_{electric} \perp dr$ so $$W = \int{F_{tangential} \cdot dr} $$ which is nonzero. EDIT: So just to clarify, the electric field's contribution to the total work is zero, but the electric field on its own will never be able to pull the particle down from infinity.
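A small numerical check of the $F_{electric} \perp dr$ claim (my own setup, unit charges at $(\pm 1, 0)$ in toy units, so the $y$-axis is the equipotential in question): integrating $\vec{E}\cdot d\vec{r}$ along that line from far away down toward the dipole gives zero.

```python
import math

d, q, k = 1.0, 1.0, 1.0  # charge separation, charge, Coulomb constant (toy units)

def E_field(x, y):
    """Field of +q at (d, 0) and -q at (-d, 0)."""
    ex = ey = 0.0
    for cx, cq in ((d, +q), (-d, -q)):
        rx, ry = x - cx, y
        r3 = (rx * rx + ry * ry) ** 1.5
        ex += k * cq * rx / r3
        ey += k * cq * ry / r3
    return ex, ey

# trapezoid rule for W = integral of E_y dy along the equipotential x = 0
n = 20_000
ys = [100.0 + (0.5 - 100.0) * i / n for i in range(n + 1)]
W = 0.0
for y0, y1 in zip(ys, ys[1:]):
    e0 = E_field(0.0, y0)[1]
    e1 = E_field(0.0, y1)[1]
    W += 0.5 * (e0 + e1) * (y1 - y0)
print(W)  # ~ 0: the field itself does no work along the equipotential
```

The field on the bisector points purely in $x$, so its dot product with a displacement along $y$ vanishes everywhere — consistent with the answer's point that only the tangential applied force does work.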
{ "language": "en", "url": "https://physics.stackexchange.com/questions/387445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Phase of a Wave and Phase Space What relation does the phase of a wave have with the phase space? Namely, how are they related historically and/or physically? P.S. if it helps, I came across this question while thinking about the phase-space formulation of QM and the pilot-wave theory.
There is no relation between the phase of a wave and the so-called phase space of a mechanical system which consists of the space of all possible generalized coordinate and conjugate generalized momentum variables.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/387568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Standing waves on string with different densities I am rather confused with the interference of waves that must occur in a string with different densities. Say for example we have a string of length 2L, where the first L part has mass per unit length u, while the second part has mass per unit length 9u. A wave is continuously propagated from the lighter string with the desired frequency. Now the wave comes to the junction and some of it gets transmitted and some of it is reflected (W1) with phase difference $\pi$. The transmitted wave hits the other end and comes back with phase difference $\pi$ and again crosses the junction (W2). * *For standing waves to be formed, does W1 need to be in phase with the initial wave or does W2 need to be in phase with the initial wave? *Let's assume that I observe standing waves at frequencies $ f_1, f_2, f_3 ... $ What would be the shape of the string? It can't be simply one loop, two loops, three loops respectively, as the wavelength of the wave changes when we go from one side to the other.
Thinking about standing waves in terms of reflections from the discontinuity in the middle is a recipe for confusion. Here’s an easier solution. Since the string tension will be uniform, but the mass/length ratio changes by a factor of 9 at the midpoint, you will have to match displacement and slope at the discontinuity. $$\begin{align} D(x)&=A\sin (kx) & \text{for }x&<L \\ D(x)&=B\sin (3k(2L-x))&\text{for }x&>L \\ \\ A\sin (kL)&=B\sin (3kL) \\ A\cos (kL)&=-3B\cos (3kL) \\ \end{align}$$ Divide the first equation by the second to eliminate A & B: $\tan (kL)=-\tfrac{1}{3}\tan (3kL)$. You could solve this nasty little transcendental equation graphically for allowable values of k, then plug in to get A/B.
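Here is a numerical sketch of that last step (my own root-finder, taking $L=1$): scan $\tan(kL)+\tfrac{1}{3}\tan(3kL)$ for sign changes, bisect each one, and discard the spurious sign changes caused by the tangent poles.

```python
import math

def f(k):
    # residual of tan(kL) = -(1/3) tan(3kL), with L = 1
    return math.tan(k) + math.tan(3.0 * k) / 3.0

def find_roots(a, b, n=20000):
    roots = []
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    for x0, x1 in zip(xs, xs[1:]):
        if f(x0) * f(x1) < 0:
            lo, hi = x0, x1
            for _ in range(80):              # plain bisection
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            r = 0.5 * (lo + hi)
            if abs(f(r)) < 1e-6:             # reject the tangent poles,
                roots.append(r)              # where |f| blows up instead
    return roots

ks = find_roots(0.05, 3.0)   # start above 0 to skip the trivial k = 0
print(ks)   # allowed wavenumbers k, in units of 1/L
```

Each accepted $k$ can then be substituted back into $A\sin(kx)$ and $B\sin(3k(2L-x))$ to draw the mode shape, with $A/B$ fixed by the matching conditions.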
{ "language": "en", "url": "https://physics.stackexchange.com/questions/387897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Why not quarter-life? The number of nuclei left after time $t$ in radioactive decay is given by: $$N(t) = N_0 e^{-t/ \tau}$$ Now if we put $N(t)$ as $\dfrac{N_0}2$, we get half-life. But, if we had put $\dfrac{N_0}4$, we would have quarter-life, which is also independent of $N_0$. Is there anything special about half-life as opposed to quarter-life?
The decay time $t_{1/2}$ for half of the given number $N_0$ of atoms is just convenient and visually appealing. Of the unit fractions it is also nearest to the decay time constant (mean lifetime) $\tau$: $t_{1/2}=0.6931\,\tau$. The decay time to a unit fraction $1/n$ given by the positive integer $n$ is $$t_{1/n}=\tau \cdot \ln(n)$$
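A quick evaluation of that formula (my own numbers) also shows that the "quarter-life" is exactly two half-lives, since $\ln 4 = 2\ln 2$ — so nothing beyond convention singles out $1/2$.

```python
import math

tau = 1.0  # mean lifetime, arbitrary units
for n in (2, 4, 10):
    # t_{1/n} = tau * ln(n)
    print(f"t_1/{n:<2d} = {tau * math.log(n):.4f} tau")
```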
{ "language": "en", "url": "https://physics.stackexchange.com/questions/388793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Jeans equation for a spherical equilibrium I am currently studying the Jeans equation for a system in spherical equilibrium. Since the distribution function $f(x,v)$ can be written as a function of energy and angular momentum, $f(E, L)$, it seems to follow that most of the velocity moments disappear: $$ \tag{1} \left<\upsilon_r\right> = \left<\upsilon_\vartheta\right> = \left<\upsilon_\varphi\right> = 0 $$ and $$ \tag{2} \left<\upsilon_r \upsilon_\vartheta\right> = \left<\upsilon_r \upsilon_\varphi\right> = \left<\upsilon_\vartheta \upsilon_\varphi\right> = 0. $$ Here $f(x,v)$ denotes the number density of particles in phase space and $\left<Q\right> = \frac{m}{\rho(x)} \int Q\,f(x,v) \text{d}^3v$,$\;\;$ $\rho(x) = m \int f(x,v)\text{d}^3 v$. I don't find it intuitive why these velocity moments are zero. What is the mathematical way to get these results?
Jeans equation is just an analog of the Euler equations. In the limit you describe, the results of your Equation 1 imply you are working in the center of momentum frame, i.e., the bulk flow rest frame. The results of Equation 2 state that the pressure tensor can be diagonalized, i.e., there is no viscosity. Viscosity arises from off-diagonal terms in the pressure tensor. I have not heard the term "spherical equilibrium" before but I assume you are implying spherical symmetry? Regardless, in the bulk flow rest frame there is no first velocity moment, i.e., it is zero. The lack of off-diagonal terms implies there are no stresses/strains in the flow, e.g., azimuthal flow will not be transported across a radial plane.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/388952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is the internal energy of a real gas a function of pressure and temperature only? While studying thermodynamics, I read that the internal energy of an ideal gas is a function of temperature only. On searching the internet, I found an article which stated that the internal energy of a real gas is a function of temperature and pressure only. I could not find a proper reason for this. So my question is: why is the internal energy of an ideal gas a function of temperature only, and that of a real gas a function of temperature and pressure only? Is this property of ideal gases and real gases derivable through any equation?
In short, the kinetic theory of gases states that the total energy of an ideal gas is purely kinetic: because there is no attraction between the gas molecules, the potential energy is zero. Since the kinetic energy depends only on temperature, so does the internal energy. The kinetic theory applies only to ideal gases; real gas molecules do attract one another, so there is a potential-energy contribution as well: total energy = KE + PE. That intermolecular potential energy depends on how far apart the molecules are on average, which is set by the pressure (and volume) — so the internal energy of a real gas depends on pressure as well as temperature. I hope this helps.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/389127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 5, "answer_id": 4 }
How does a receiving antenna work given that the electric field is 0 in conductors? The question of how a receiving antenna works has been asked on this site before, such as here How does a receiving antenna get an induced electric current? and here How does a receiving antenna work?. I understand the basic principle that the external EM field from the transmitting antenna causes electrons to move in the receiving antenna, creating a current. My question though is in the title. How can an external electric field, as in the form of a radio or other EM wave, induce a current in a wire when the $E$ field is always zero in there? In terms of physical laws and math how can I calculate the current as a function of time if I know the external fields as a function of time? I wanted to try to calculate the potential difference between two points of a wire from the external field using $$\varepsilon=\int_{\text{start point}}^{\text{end point}}\mathbf{E}\cdot\text{d}\mathbf{l}$$ but that assumes that the electric field in the wire is as it would be if the wire weren't there and the waves were propagating through vacuum...
When a metal antenna wire is put into the field of a propagating electromagnetic wave with time-varying fields, there will be an electric and magnetic field inside the wire and thus also a current but the penetration is exponentially damped. The penetration depth $\delta$ is called the skin depth. In treating boundary conditions with metals for electromagnetic waves often a "perfect conductor" is assumed with conductivity $\sigma \to \infty$. Then this penetration depth $\delta \to 0$ and the current can be assumed to be a surface current $J_s$. Thus, in this idealized situation, there will be no electrical or magnetic field inside the metal and the current is represented by a surface current $J_s= B_t/\mu_0$, which is normal to the tangential magnetic field at the surface (assuming the relative permeabilities are $1$). To get this current, you have to combine the incoming wave with the outgoing electromagnetic field of the wire and satisfy the boundary conditions at the wire surface for the total fields, similarly to a reflection at a plane perfect conductor metal surface.
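To put a number on the penetration depth mentioned above, here is the standard good-conductor skin-depth formula $\delta = \sqrt{2/(\omega\mu\sigma)}$ evaluated for copper — the formula and the material values are textbook standards, not taken from this answer:

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
SIGMA_CU = 5.8e7           # conductivity of copper, S/m (typical tabulated value)

def skin_depth(freq_hz, sigma=SIGMA_CU, mu=MU0):
    """delta = sqrt(2 / (omega * mu * sigma)) for a good conductor."""
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * mu * sigma))

for freq in (1e3, 1e6, 1e9):
    print(f"{freq:8.0e} Hz -> delta = {skin_depth(freq) * 1e6:9.2f} micrometres")
```

At radio frequencies the depth is tens of micrometres, which is why the "perfect conductor" surface-current idealization works so well for antenna wire.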
{ "language": "en", "url": "https://physics.stackexchange.com/questions/389214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Metric tensor in spherical coordinates using basis vector? I'm using these spherical basis vectors but it's not agreeing with other literature when I use the definition of the metric tensor to derive the metric tensor in spherical coordinates. \begin{align} {\mathbf e}_r &=\sin \theta \cos \phi \,\hat{\mathbf x} + \sin \theta \sin \phi \,\hat{\mathbf y} + \cos \theta \,\hat{\mathbf z} \\[5px] {\mathbf e}_\theta &=\cos \theta \cos \phi \,\hat{\mathbf x} + \cos \theta \sin \phi \,\hat{\mathbf y} -\sin \theta \,\hat{\mathbf z} \\[5px] {\mathbf e}_\phi &=-\sin \phi \,\hat{\mathbf x} + \cos \phi \,\hat{\mathbf y} \end{align} \begin{equation} \mathbf{\overline{g}} = \begin{pmatrix} g_\text{rr} & g_{r \theta} & g_{r \phi} \\ g_{\theta r} & g_{\theta \theta} & g_{\theta \phi} \\ g_{\phi r} & g_{\phi \theta} & g_{\phi \phi} \\ \end{pmatrix} = \begin{pmatrix} \mathbf{e}_\text{r}\cdot\mathbf{e}_\text{r} & \mathbf{e}_\text{r}\cdot\mathbf{e}_{\theta} & \mathbf{e}_\text{r}\cdot\mathbf{e}_{\phi} \\ \mathbf{e}_{\theta}\cdot\mathbf{e}_{r} & \mathbf{e}_{\theta}\cdot\mathbf{e}_{\theta} & \mathbf{e}_{\theta}\cdot\mathbf{e}_{\phi} \\ \mathbf{e}_{\phi}\cdot\mathbf{e}_{r} & \mathbf{e}_{\phi}\cdot\mathbf{e}_{\theta} & \mathbf{e}_{\phi}\cdot\mathbf{e}_{\phi} \\ \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix} = \delta_{ij} \\ \end{equation}
Remember that a basis of a vector space only needs to (1) span the vector space, and (2) be linearly independent. In particular, a basis does not have to be orthogonal, and it certainly doesn't have to be normalized. And one of the most common types of basis (a coordinate basis) is usually not normalized. You're confused because you usually see the metric tensor in spherical coordinates given as \begin{equation} \mathbf{g} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & r^2 & 0 \\ 0 & 0 & r^2 \sin^2\theta \end{pmatrix}. \end{equation} This is the metric with respect to the coordinate basis, whereas you've (correctly) written the metric with respect to the orthonormalized vector basis — and it's very important to remember the distinction between those types of bases. I'll explain. Let's write the coordinate basis vectors as \begin{equation} \mathbf{r}, \boldsymbol{\theta}, \boldsymbol{\phi}. \end{equation} (Note that I'm using a bold font to indicate that these are vectors, but I'm not putting hats on them, for reasons that will become clear soon.) These vectors represent the amount you would move through the space if you changed the corresponding coordinate by a certain amount. For example, if $\mathbf{p}(r, \theta, \phi)$ is the position vector to the point with spherical coordinates $r, \theta, \phi$, then those coordinate basis vectors are defined as \begin{align} \mathbf{r} &= \frac{\partial \mathbf{p}} {\partial r} \\ \boldsymbol{\theta} &= \frac{\partial \mathbf{p}} {\partial \theta} \\ \boldsymbol{\phi} &= \frac{\partial \mathbf{p}} {\partial \phi}. 
\end{align} To relate that back to your basis given in Cartesian components, remember that \begin{equation} \mathbf{p} = r\sin\theta\cos\phi\, \hat{\mathbf{x}} + r\sin\theta\sin\phi\, \hat{\mathbf{y}} + r\cos\theta\, \hat{\mathbf{z}}, \end{equation} which we can differentiate to find \begin{align} \mathbf{r} &= \sin\theta\cos\phi\, \hat{\mathbf{x}} + \sin\theta\sin\phi\, \hat{\mathbf{y}} + \cos\theta\, \hat{\mathbf{z}} \\ \boldsymbol{\theta} &= r\cos\theta\cos\phi\, \hat{\mathbf{x}} + r\cos\theta\sin\phi\, \hat{\mathbf{y}} - r\sin\theta\, \hat{\mathbf{z}} \\ \boldsymbol{\phi} &= -r\sin\theta\sin\phi\, \hat{\mathbf{x}} + r\sin\theta\cos\phi\, \hat{\mathbf{y}}. \end{align} Using these expressions, it's a simple exercise to see that we have \begin{align} \mathbf{r} \cdot \mathbf{r} &= 1 \\ \boldsymbol{\theta} \cdot \boldsymbol{\theta} &= r^2 \\ \boldsymbol{\phi} \cdot \boldsymbol{\phi} &= r^2 \sin^2 \theta. \end{align} So this basis is not orthonormal — and that's where the "usual" metric components come from, which is why the metric isn't just the identity as you expected. In fact, usually the only type of coordinates that lead to orthonormal basis vectors is a Cartesian coordinate systems (though even Cartesian coordinates are not orthonormal in nontrivial geometries). On the other hand, a nearly identical simple exercise shows that your basis $(\mathbf{e}_r, \mathbf{e}_\theta, \mathbf{e}_\phi)$ is orthonormal. In fact, comparing our expressions in the Cartesian basis, we see that \begin{align} \mathbf{r} &= \mathbf{e}_r \\ \boldsymbol{\theta} &= r\, \mathbf{e}_\theta \\ \boldsymbol{\phi} &= r\sin\theta\, \mathbf{e}_\phi. \end{align} In an orthonormal basis, the metric is — essentially by definition — just the identity matrix, which is what you found.
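Those dot products can also be verified mechanically — a small pure-Python sketch (my own, using the Cartesian components of the coordinate basis derived above):

```python
import math

def coordinate_basis(r, theta, phi):
    """Coordinate basis dp/dr, dp/dtheta, dp/dphi in Cartesian components."""
    st, ct = math.sin(theta), math.cos(theta)
    sp, cp = math.sin(phi), math.cos(phi)
    e_r     = (st * cp,      st * sp,      ct)
    e_theta = (r * ct * cp,  r * ct * sp, -r * st)
    e_phi   = (-r * st * sp, r * st * cp,  0.0)
    return e_r, e_theta, e_phi

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

r, th, ph = 2.0, 0.7, 1.3          # an arbitrary sample point
basis = coordinate_basis(r, th, ph)
g = [[dot(u, v) for v in basis] for u in basis]
print(g)   # ~ diag(1, r^2, r^2 sin^2(theta)); off-diagonals vanish
```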
{ "language": "en", "url": "https://physics.stackexchange.com/questions/389327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Distribution of zero-mean, independent, complex-valued, white noise terms In this paper (open access here), in equations (13) and (14) they state that $W(\mathbf{x})$ is a zero-mean, independent, complex-valued, white noise term such that $$\overline{W(\mathbf{x})W(\mathbf{x'})} = 0$$ $$\overline{W(\mathbf{x})W^*(\mathbf{x'})} = 2 \text{d}t \delta_{\mathbf{x},\mathbf{x}'}$$ What does this mean? What is the distribution from which $W$ is sampled? Basically they have some (time) differential equations that they solve numerically, and $W(\mathbf{x})$ is some noise term they add to each pixel located at $\mathbf{x}$ at each time step. I want to know explicitly how they implemented this.
$W(x)$ is a complex-valued Wiener process: $$ W(x)=X(x)+iY(x) $$ where $X,\,Y$ are real-valued, independent Wiener processes. The authors are asserting two conditions: * *The product of two Wiener processes is zero *The product of a Wiener process and its complex conjugate is of order ${\rm d}t$ It seems to me that (1) states the independence of processes in each cell while (2) is the typical stochastic calculus condition of ${\rm d}W^2={\rm d}t$.1 While it's not something I normally do, I suspect that this would be numerically modeled via something like

    for each cell in grid:
        X = draw_from_normal(mean=0, variance=1)
        Y = draw_from_normal(mean=0, variance=1)
        W = complex(X, Y)
        ... continue with code ...

but probably extending to 3D positions. 1. This is simply stated in the Wikipedia entry on Itô's lemma with the marker further explanation needed; most stochastic calculus books should cover this more fully.
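Following that suspicion, here is a runnable sketch (my own) that checks the two moment conditions empirically. The $\sqrt{{\rm d}t}$ scaling is my assumption, chosen so that the second moment matches $\overline{W W^*} = 2\,{\rm d}t$ — it is not stated explicitly in the paper:

```python
import random

random.seed(0)
dt = 1e-3
N = 200_000

# per cell per step: W = sqrt(dt) * (X + iY), with X, Y ~ N(0, 1) independent
samples = [complex(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)) * dt ** 0.5
           for _ in range(N)]

mean_WW = sum(w * w for w in samples) / N                # should be ~ 0
mean_WWc = sum(w * w.conjugate() for w in samples) / N   # should be ~ 2 dt
print(abs(mean_WW), mean_WWc.real)
```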
{ "language": "en", "url": "https://physics.stackexchange.com/questions/389449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is potential energy a type of energy at all? Is potential energy, whether it be that of a charge in an electric field or a mass in a gravitational field or anything like that, actually an energy that the particle itself contains, like kinetic energy? Or is it just a measure of its ability to do work? Is it the case that instead of integrating conservative forces over distances to find work done, we use the fact that the work done by a conservative force doesn't depend on path, and hence we just use the notion of 'potential energy' and its variation with distance, and just take the difference between the potential energy at two points to easily find the work done? And hence, is potential energy nothing but a tool to calculate work done by conservative forces?
A body's kinetic energy is the work it can do because of its motion (as it comes to rest). Calculate this amount of work and, in Newtonian physics, you find it to be equal to $\frac{1}{2}mv^2$. A body's potential energy at point P is the work it can do by changing its position (in a conservative field) from P to another point, O, which has been chosen by convention as the point of zero potential energy. So capacity to do work is common to both KE and PE. I'm not sure that PE is any more 'just a tool to calculate work' than KE is. I do agree, though, that we think of KE as residing in the moving body, whereas PE does not reside in a particular body; it's quite useful to think of it as residing in the field.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/389496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Why is the singlet state for two spin 1/2 particles anti-symmetric? For two spin 1/2 particles I understand that the triplet states ($S = 1$) are: $\newcommand\ket[1]{\left|{#1}\right>} \newcommand\up\uparrow \newcommand\dn\downarrow \newcommand\lf\leftarrow \newcommand\rt\rightarrow $ \begin{align} \ket{1,1} &= \ket{\up\up} \\ \ket{1,0} &= \frac{\ket{\up\dn} + \ket{\dn\up}}{\sqrt2} \\ \ket{1,-1} &= \ket{\dn\dn} \end{align} And that the singlet state ($S = 0$) is: $$ \ket{0,0} = \frac{\ket{\up\dn} - \ket{\dn\up}}{\sqrt2} $$ What I'm not too sure about is why the singlet state cannot be $\ket{0,0}=(\ket{↑↓} + \ket{↓↑})/\sqrt2$ while one of the triplet states can then be $(\ket{↑↓} - \ket{↓↑})/\sqrt2$. I know they must be orthogonal, but why are they defined the way they are?
If $\:\mathsf{H}_{\boldsymbol{\alpha}},\mathsf{H}_{\boldsymbol{\beta}}\: $ are the 2-dimensional Hilbert spaces of two particles $\:\boldsymbol{\alpha},\boldsymbol{\beta}\:$ with spins $\:1/2\:$ then the composite system lives in the product 4-dimensional Hilbert space \begin{equation} \mathsf{H}_{\boldsymbol{f}}\equiv \mathsf{H}_{\boldsymbol{\alpha}}\boldsymbol{\otimes}\mathsf{H}_{\boldsymbol{\beta}} \tag{01} \end{equation} which is the direct sum of two invariant orthogonal subspaces : the 1-dimensional subspace $\:\mathsf{H}_{\boldsymbol{1}}\:$ of angular momentum $\;j=0\;$ (the antisymmetric singlet) and the 3-dimensional subspace $\:\mathsf{H}_{\boldsymbol{2}}\:$ of angular momentum $\;j=1\;$ (the symmetric triplet): \begin{equation} \mathsf{H}_{\boldsymbol{\alpha}}\boldsymbol{\otimes}\mathsf{H}_{\boldsymbol{\beta}}=\mathsf{H}_{\boldsymbol{1}}\boldsymbol{\oplus}\mathsf{H}_{\boldsymbol{2}} \tag{02} \end{equation} expressed also as \begin{equation} \boldsymbol{2}\boldsymbol{\otimes}\boldsymbol{2}=\boldsymbol{1}\boldsymbol{\oplus}\boldsymbol{3} \tag{03} \end{equation} Invariance means that if we apply the same special unitary transformation \begin{equation} U_{\boldsymbol{\alpha}}=U=U_{\boldsymbol{\beta}} \in SU(2) \tag{04} \end{equation} in each one of the spaces $\:\mathsf{H}_{\boldsymbol{\alpha}},\mathsf{H}_{\boldsymbol{\beta}}\:$ then the subspaces $\:\mathsf{H}_{\boldsymbol{1}},\mathsf{H}_{\boldsymbol{2}}\:$ are invariant under the product special unitary transformation \begin{equation} U_{\boldsymbol{\alpha}}\boldsymbol{\otimes}U_{\boldsymbol{\beta}}=U^{\boldsymbol{\otimes}\boldsymbol{2}}\in SU(4) \tag{05} \end{equation} Note that application of the transformation (04) corresponds to a rotation in the 3-dimensional real space $\:\mathbb{R}^{3}$ wherein the two particles coexist.
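The invariance in equation (05) can be checked directly in the product basis (ordered uu, ud, du, dd) — a small numerical sketch of my own, applying an arbitrary real $SU(2)$ rotation to the singlet:

```python
import math

def kron2(U):
    """U (x) U on the 4-dim product space, basis order: uu, ud, du, dd."""
    return [[U[i // 2][j // 2] * U[i % 2][j % 2] for j in range(4)]
            for i in range(4)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

t = 0.7   # an arbitrary rotation angle about the y-axis
U = [[math.cos(t / 2), -math.sin(t / 2)],
     [math.sin(t / 2),  math.cos(t / 2)]]   # a real SU(2) element

s = 1.0 / math.sqrt(2.0)
singlet = [0.0, s, -s, 0.0]        # (|ud> - |du>)/sqrt(2)

out = matvec(kron2(U), singlet)
print(out)   # unchanged: the singlet spans a 1-d invariant subspace
```

The same rotation applied to the triplet states mixes them among themselves but never leaks into the singlet — which is exactly the decomposition $\boldsymbol{2}\boldsymbol{\otimes}\boldsymbol{2}=\boldsymbol{1}\boldsymbol{\oplus}\boldsymbol{3}$.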
{ "language": "en", "url": "https://physics.stackexchange.com/questions/389946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 2 }
Generalised representation of a 2x2 Positive Operator-Valued Measure Let {$E_{i}$} be a set of 2x2 POVM operators, satisfying $\sum_{i}E_{i}=\mathbb{I_{2x2}}$. We know that a general 2x2 Hermitian matrix (say, $H$) can be represented by $$ H = \left[{\begin{array}{cc} a_{0}+a_{3} & a_{1}-ia_{2} \\ a_{1}+ia_{2} & a_{0}-a_{3} \end{array}}\right]=a_{0}\mathbb{I_{2x2}}+a_{1}\sigma_{1}+a_{2}\sigma_{2}+a_{3}\sigma_{3} $$ where the quantities $a_{k}$ are real, and $\sigma_{i}$'s represent the Pauli matrices. Is there such a compact way to represent $E_{i}$ [basically satisfying (i) the Hermitian property and (ii) positive semi-definiteness], possibly with added constraints?
By applying extra constraints on the above generalized 2x2 Hermitian matrix for satisfying positive semi-definiteness criteria, we can arrive at a generalized representation for 2x2 POVM operator, say $E_{i}$. The constraint is that the Hermitian matrix should have only non-negative eigenvalues [1]. Let $E_{i}$ be $$ E_{i} = \left[{\begin{array}{cc} a_{i0}+a_{i3} & a_{i1}-ia_{i2} \\ a_{i1}+ia_{i2} & a_{i0}-a_{i3} \end{array}}\right]. $$ The characteristic equation for above the matrix would be $$ \left|{\begin{array}{cc} \lambda-(a_{i0}+a_{i3}) & a_{i1}-ia_{i2} \\ a_{i1}+ia_{i2} & \lambda-(a_{i0}-a_{i3}) \end{array}}\right|=0 $$ The roots of the characteristic polynomial should be non-negative, $$ \lambda^{2}-2a_{i0}\lambda+a_{i0}^{2}-a_{i3}^{2}-a_{i1}^{2}-a_{i2}^{2}=0\\ (\lambda-a_{i0})^{2}=a_{i3}^{2}+a_{i1}^{2}+a_{i2}^{2}\\ \lambda=\pm k+a_{i0} $$ where $k=|\sqrt[]{a_{i1}^{2}+a_{i2}^{2}+a_{i3}^{2}}|$. Hence, we have the condition: $$ a_{i0} \ge k $$ because $\lambda_{min}=-k+a_{i0}$. By definition, $\sum_{i}E_{i}=\mathbb{I_{2x2}}$. In total, we have (4+$i$) constraints: * *$\sum_{i}a_{i0}=1$ *$\sum_{i}a_{i1}=0$ *$\sum_{i}a_{i2}=0$ *$\sum_{i}a_{i3}=0$ *$\forall i, a_{i0} \ge |\sqrt[]{a_{i1}^{2}+a_{i2}^{2}+a_{i3}^{2}}|$ [1] Weisstein, Eric W. "Positive Semidefinite Matrix." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/PositiveSemidefiniteMatrix.html
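Here is a small sketch (my own, with made-up coefficients) exercising those constraints for a two-outcome POVM: condition 5 guarantees positive semi-definiteness via the eigenvalues $a_{i0}\pm k$, and mirroring the $a_{i1..3}$ coefficients enforces conditions 1–4.

```python
import math

def povm_element(a0, a1, a2, a3):
    """a0*I + a1*sigma1 + a2*sigma2 + a3*sigma3, as nested tuples."""
    return ((complex(a0 + a3), complex(a1, -a2)),
            (complex(a1, a2), complex(a0 - a3)))

def is_psd(a0, a1, a2, a3):
    # eigenvalues are a0 +/- |a|, so PSD iff a0 >= sqrt(a1^2 + a2^2 + a3^2)
    return a0 >= math.sqrt(a1 * a1 + a2 * a2 + a3 * a3)

E1 = (0.5, 0.2, 0.1, 0.3)     # hypothetical coefficients for one outcome
E2 = (0.5, -0.2, -0.1, -0.3)  # chosen so that E1 + E2 = I

assert is_psd(*E1) and is_psd(*E2)
M1, M2 = povm_element(*E1), povm_element(*E2)
S = [[M1[i][j] + M2[i][j] for j in range(2)] for i in range(2)]
print(S)   # the 2x2 identity
```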
{ "language": "en", "url": "https://physics.stackexchange.com/questions/390025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Potential Difference due to an infinite line of charge When a line of charge has a charge density $\lambda$, we know that the electric field points perpendicular to the vector pointing along the line of charge. When calculating the difference in electric potential, I use the following equations. $$\nabla V=-\vec{E}$$ Therefore $$\Delta V = -\int_{\vec{r_o}}^\vec{r_f}E\cdot \vec{dr}$$ knowing that $$\vec{E} = \frac{\lambda}{2\pi\epsilon_or}\hat{r}$$ and that $$\left\lVert\vec{r_f}\right\lVert < \left\lVert\vec{r_o}\right\lVert $$ Carrying out the integration (hopefully correctly) I got $$\Delta V = \frac{\lambda}{2\pi \epsilon_o} \ln(\frac{r_f}{r_o})$$ What confuses me is that the $\ln()$ is negative. I assume that the value should be positive, since moving closer towards the line of charge should give us a positive change in electric potential. My best guess for my problem is that I missed a negative somewhere, but looking at online solutions they've got the same answer that I got.
No, it's okay. The potential difference increases as you go farther. The less you move away, the more similar potential you have (little difference). By the way * *You can't integrate in three dimensions that way. You're using cylindrical coordinates (because of the symmetry of the problem), and you integrate along $r$, which is $|\vec{r}|$. *The limits of integration are thus scalars. However, $\vec{E}$ is a vector, and you do the scalar product inside the integral, but fortunately the angle is 0 degrees. *You missed the minus sign in front of the integral, so it appears outside the $\ln$. Was that your question? Because now $$\Delta V = -\dfrac{\lambda}{2\pi\varepsilon_0} \ln \left(\frac{r_F}{r_o}\right)$$ and the voltage difference increases when you go further, but in a negative sense, which means it becomes "more negative" as you move away.
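Putting numbers in (my own example values) makes the sign behaviour concrete: with the minus sign restored and $r_f < r_o$, the logarithm is negative, so moving inward toward a positively charged line gives a positive $\Delta V$.

```python
import math

lam = 1.0e-9                # line charge density, C/m (example value)
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
r_o, r_f = 2.0, 0.5         # start and end radii, m (moving inward: r_f < r_o)

dV = -lam / (2.0 * math.pi * eps0) * math.log(r_f / r_o)
print(f"Delta V = {dV:.2f} V")   # positive, as expected when moving inward
```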
{ "language": "en", "url": "https://physics.stackexchange.com/questions/390345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why does oil float on water? This might be a silly question but I want to know why oil actually floats on water. I tried to explain it to myself using Archimedes' principle but that didn't help. Archimedes' principle, the physical law of buoyancy, states that any body completely or partially submerged in a fluid (gas or liquid) at rest is acted upon by an upward, or buoyant, force the magnitude of which is equal to the weight of the fluid displaced by the body. I don't get how Archimedes' law is valid in the oil-water case, because oil and water don't even mix, so there's no displacement of water and hence no buoyant force is exerted. So what keeps substances like oil, which are less dense than water, floating atop it?
"...because oil and water don't even mix so there's no displacement of water hence no buoyant force is exerted." This is where you are misunderstanding. There is a displacement. Wood doesn't mix with water either, yet it displaces water and it floats. With oil, there is a slight depression of the lower surface, between the oil and water, where the displacement occurs.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/390547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Mass Dropped on Scale When a mass is dropped onto something like a bathroom scale, the reading on the scale temporarily exceeds the actual weight of the mass. How do I explain this using forces and a force body diagram? Also, let's say instead of a mass and a scale, its just a person, a ball, and a scale. The person is standing on the scale with the ball in hand and throws it up in the air. When the person catches the ball, should the scale also read a value greater than the weight of the human and the ball combined? Is the reasoning for this the same as the mass and scale example? Edit: Could the explanation be that at the instantaneous moment when the mass comes in contact with the scale, there is an instantaneous force caused by the impulse?
Forces must always balance. A force is required to support a stationary mass on a bathroom scale. An additional force is required to effect the deceleration of a mass if it has vertical downward motion as it makes contact with the bathroom scale surface. Dropping the mass onto the bathroom scale: $$F = mg + ma \tag1$$ where m is the kg mass of the mass dropped on the bathroom scale, g is gravitational acceleration, and $$a = \Delta v/\Delta t \tag2$$ where $v$ and $t$ are velocity and time. The maximum $a$ determines the maximum force indicated on the scale. The heavier the ball, the harder the scale surface and the stiffer the scale's spring, the higher $F$ will be (figure below). When the person is catching the ball, the person is as the stationary mass above, and the ball has a stationary and decelerating component. See figure below.
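As an illustration of equations (1)–(2), here is a toy model of my own (the numbers and the spring model are assumed, not taken from the answer) that treats the scale as a stiff spring and estimates the peak reading for a dropped mass from an energy balance:

```python
import math

m = 1.0       # kg, dropped mass
g = 9.81      # m/s^2
h = 0.10      # m, drop height
ks = 5.0e4    # N/m, assumed effective stiffness of the scale platform

v = math.sqrt(2.0 * g * h)   # impact speed at contact
# energy balance at maximum compression x:
#   0.5*ks*x^2 = 0.5*m*v^2 + m*g*x   ->   0.5*ks*x^2 - m*g*x - 0.5*m*v^2 = 0
x = (m * g + math.sqrt((m * g) ** 2 + ks * m * v * v)) / ks
F_peak = ks * x
print(f"peak reading = {F_peak / (m * g):.1f} x the static weight")
```

With these assumed numbers the peak is tens of times the static weight, which is why even a small drop makes the needle spike; a softer scale (smaller ks) spreads the deceleration over a longer time and lowers the peak.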
{ "language": "en", "url": "https://physics.stackexchange.com/questions/390644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why change in resistivity is proportional to the original resistivity? When there is a temperature change $\Delta T$, the change of resistivity is (1) proportional to $\Delta T$ (2) proportional to the original resistivity $\rho_0$. Hence we can define the temperature coefficient of resistivity $\alpha$ so that $$\Delta \rho = \rho_0 \alpha\Delta T$$ I searched on the internet about (2), but it is usually simply stated as a fact or "experiments show that", without explaining why. Length expansion has a similar property, but I can understand why intuitively: for the same temperature change, doubling the length will double the change in length as well, because every part of the length expands. But I don't understand why the change in resistivity should be proportional to the original resistivity.
Okay, so if I understand your question properly you're asking for the physical causes behind the change in resistivity being affected by the temperature and why such changes are linearly proportional to the original resistivity. Well, being just a physics student, I went and, uh, looked at the Wikipedia page on conductivity and resistivity which said this: "A metal consists of a lattice of atoms, each with an outer shell of electrons that freely dissociate from their parent atoms and travel through the lattice. This is also known as a positive ionic lattice. Most metals have resistance. In simpler models (non quantum mechanical models) this can be explained by replacing electrons and the crystal lattice by a wave-like structure each. When the electron wave travels through the lattice the waves interfere, which causes resistance. The more regular the lattice is the less disturbance happens and thus resistance lowers. The amount of resistance is thus caused by mainly two factors. Firstly it is caused by the temperature and thus speed of vibration of the crystal lattice. The temperature causes irregularities in the lattice." Which makes sense to me.
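For concreteness, here is the defining relation $\Delta\rho = \rho_0\,\alpha\,\Delta T$ from the question, evaluated with typical tabulated values for copper (example numbers of my own); note that doubling $\rho_0$ doubles $\Delta\rho$, exactly as with the thermal expansion of length:

```python
rho0 = 1.68e-8    # Ohm m, resistivity of copper near 20 C (typical value)
alpha = 3.93e-3   # 1/K, temperature coefficient of copper (typical value)

for dT in (10.0, 50.0, 100.0):
    d_rho = rho0 * alpha * dT   # Delta rho = rho0 * alpha * Delta T
    print(f"dT = {dT:5.1f} K -> rho = {rho0 + d_rho:.3e} Ohm m")
```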
{ "language": "en", "url": "https://physics.stackexchange.com/questions/390785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Why does the electric field strength for a dipole go as $1/r^3$? I've been given the following graphic to help wrap my head around this. If the potential can be shown to represent a $1/r^2$ relation, then I'm more than happy to accept that the electric field is hence a $1/r^3$ relation, but I need to accept the first part first: Since $V = \frac{kQ}{r}$, this basically implies that $$r = \frac{bc}{c-b}$$ given the geometry of this graphic, yet I simply do not see it.
The key point is that $r$ (the distance to the center of the dipole) is not the same thing as the distances $b$ and $c$ from your test charge to the positive and negative charges of the dipole. If the dipole separation isn't very large, then that isn't a big deal, as $1/r$ and $1/b$ will generally be quite similar, but they will have slight differences, and those are easy to calculate as a power series in $a$ when that is small: $$ \frac{1}{b} = \frac1r + \frac{a\cos(\theta)}{2r^2}+ O\left(\frac{a^2}{r^3}\right) \approx \frac1r + \frac{a\cos(\theta)}{2r^2}. $$ Now, here is the other important bit: because the charges are equal but opposite, the leading term in this series ($1/r)$ will be the same, so it will cancel out, but the corrections go in opposite directions, so that when you subtract the two potentials, the corrections add constructively: \begin{align} \frac{1}{b} &\approx \frac1r + \frac{a\cos(\theta)}{2r^2} \\ \frac{1}{c} &\approx \frac1r - \frac{a\cos(\theta)}{2r^2} \\ \implies \frac{1}{b} - \frac{1}{c} &\approx \frac{a\cos(\theta)}{r^2}. \\ \end{align} This is where the $\propto 1/r^2$ potential comes from - as a leading-order correction to the two distances, and it is most cleanly obtained through the Taylor series of $1/b$ when it is displaced. That means that, if you insist on seeing things on the specific geometry of your diagram, then the geometry is only exact in the limit where $r\gg a$, i.e. when the lines $(-Q)P$, $LM$ and $(+Q)P$ are parallel; if they're not parallel, then the identity $c-b=LM$ is false. That means that the geometric identity you've written down, $$ r= \frac{bc}{c-b}, $$ can never be right, because it makes no reference to $a$. If you want to build a version of that identity which does hold, then your best bet is to work from the cosine law of both triangles, \begin{align} b^2 & = r^2 + \frac{1}{4}a^2 - ra\cos(\theta),\\ c^2 & = r^2 + \frac{1}{4}a^2 + ra\cos(\theta). 
\end{align} Thus: * *If what you want is the potential, then the thing to do is to re-phrase these as $$ \frac{1}{b} = \frac{1}{r}\frac{1}{\sqrt{1-\frac ar\cos(\theta) + \frac{a^2}{4r^2}}}, $$ and expand the square root using Newton's binomial series. *If what you want is the length difference $c-b$, then that's best done via $$ c-b = r\left[\sqrt{1+\frac ar\cos(\theta) + \frac{a^2}{4r^2}} - \sqrt{1-\frac ar\cos(\theta) + \frac{a^2}{4r^2}}\right] $$ and again expanding using the binomial series. *If what you want is a correct version of $r=bc/(c-b)$, then you can put in those expressions to get $$ \frac{bc}{c-b} \approx \frac{r^2}{a\cos(\theta)}, $$ and since you're looking at the $r\gg a$ limit, that tells you just how wrong the $r=bc/(c-b)$ identification is in this geometry.
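The series manipulation at the heart of the argument can be checked symbolically. A SymPy sketch, using the cosine-law distances for charges placed at $\pm a/2$:

```python
import sympy as sp

a, r, theta = sp.symbols('a r theta', positive=True)

# Distances to the charges at +a/2 and -a/2 (cosine law, as in the answer):
b = sp.sqrt(r**2 + a**2/4 - r*a*sp.cos(theta))
c = sp.sqrt(r**2 + a**2/4 + r*a*sp.cos(theta))

# Expand 1/b - 1/c for small a: the 1/r pieces cancel and the leading
# surviving term is a*cos(theta)/r**2 -- hence V ~ 1/r^2 and E ~ 1/r^3.
leading = sp.simplify(sp.series(1/b - 1/c, a, 0, 2).removeO())
print(leading)
```

The surviving term is the dipole potential's $1/r^2$ dependence; differentiating once more in $r$ gives the $1/r^3$ field.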
{ "language": "en", "url": "https://physics.stackexchange.com/questions/390938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Are there any algorithms which conserve the energy for quantum harmonic oscillator simulations? Well, I am trying to simulate the quantum harmonic oscillator. I have tried a few algorithms, such as RK4, but the energy is not conserved in those. Are there any algorithms which will keep the energy constant for a harmonic oscillator simulation?
You want an algorithm in which every time step is unitary. One sketch of such an algorithm for the generic Schrödinger equation is given by approximating the time evolution operator for a time step as $$\mathrm{e}^{-\mathrm{i}H\Delta t} \approx \frac{1 - \frac{1}{2}\mathrm{i}\Delta t H}{1 + \frac{1}{2}\mathrm{i}\Delta t H},$$ where $H$ is of course the Hamiltonian and $\Delta t$ your time step. You can directly check that the r.h.s. is unitary.
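A minimal sketch of that Cayley (Crank-Nicolson) scheme for the 1D harmonic oscillator, on a finite-difference grid with $\hbar = m = \omega = 1$ assumed, using dense linear algebra for brevity:

```python
import numpy as np

# Grid and harmonic-oscillator Hamiltonian; hbar = m = omega = 1 assumed.
N, L = 200, 20.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]

# H = -(1/2) d^2/dx^2 + (1/2) x^2 with a second-order finite difference.
off = np.full(N - 1, 1.0)
lap = (np.diag(off, -1) - 2.0*np.eye(N) + np.diag(off, 1)) / dx**2
H = -0.5*lap + np.diag(0.5*x**2)

# Cayley step: psi <- (1 + i dt H/2)^{-1} (1 - i dt H/2) psi, exactly unitary.
dt = 0.01
M = np.linalg.solve(np.eye(N) + 0.5j*dt*H, np.eye(N) - 0.5j*dt*H)

# Displaced Gaussian initial state, normalised on the grid.
psi = np.exp(-(x - 2.0)**2 / 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

def energy(p):
    return np.real(np.vdot(p, H @ p)) * dx

norm0, E0 = np.sum(np.abs(psi)**2) * dx, energy(psi)
for _ in range(500):
    psi = M @ psi

# The propagator is a rational function of H, so both the norm and the
# expectation value of H are conserved up to round-off.
norm_drift = abs(np.sum(np.abs(psi)**2) * dx - norm0)
energy_drift = abs(energy(psi) - E0)
print(norm_drift, energy_drift)
```

Because the one-step map is unitary and commutes with $H$, neither the norm nor the energy drifts, in contrast to RK4 applied to the same problem.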
{ "language": "en", "url": "https://physics.stackexchange.com/questions/391072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Causal ordering on spacetimes On any spacetime $(M,g)$ we can form the causal ordering $\leq$, where for any two points $a,b \in M$ we have that $a \leq b$ iff there exists some future-directed, non-spacelike curve from $a$ to $b$. This causal ordering is transitive, irreflexive, antisymmetric and dense in general. However, causal oddities like CTCs manifest as violations of irreflexivity and antisymmetry. I'm interested in the directedness of the causal order (unfortunate name, I know). Recall that a relation $\leq$ is directed iff for any two elements $a,b \in M$ there is some third element $c \in M$ such that both $a \leq c$ and $b \leq c$. I can see that Minkowski space has a directed causal order. I have no proof, but the picture below is good enough to convince me. Clearly anything in the shaded area will serve as the witness for the condition. Note also that the condition is trivially met for timelike or lightlike-related points. I have heard that for non-flat spacetimes, directedness of the causal order is not guaranteed. Are there any concrete examples of this? And, moreover, is there some appropriate causality condition one can impose on the spacetime in order to guarantee directedness of the causal order?
While the example of de Sitter space is a classic one, there is an even simpler example that doesn't require computing geodesics. Take two-dimensional Minkowski space, and remove the ray $\{ (x,t) \mid x=0, t \geq 0 \}$. Any event with $t \geq 0$ will have its future lightcone restricted to either positive or negative $x$. Then picking two points, one on each side of the singularity, will produce future light cones that never intersect.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/391159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Basic cut-off regularization I've been reading these notes on regularization by Hitoshi Murayama here, and on page 3 there's a few lines of calculations on a quick method of regularizing an integral. But I can't follow the steps - where does the $z$ come from? Why is a second integration variable suddenly introduced, in the step before the original integration variable disappears? I'd really appreciate any help; I've looked absolutely everywhere to find another example like this, but in general other methods like Pauli-Villars regularization seem to be used. So it says the propagator is multiplied by a factor of $\frac{\Lambda^2}{\Lambda^2+p^2}$ so that it becomes (I think the initial integration limits should be infinity and zero for the first line, but it doesn't say): $$\int \frac{d^2 p}{(2\pi)^2} \frac{\Lambda^2}{(p^2+\Lambda^2)(p^2+m^2)}$$ $$=\int^1_0 dz \frac{d^2 p}{(2\pi)^2}\frac{\Lambda^2}{(p^2+z\Lambda^2+(1-z)m^2)^2}$$ $$=\int^1_0dz \frac{1}{4\pi}\frac{\Lambda^2}{z\Lambda^2+(1-z)m^2}$$ $$=\frac{1}{4\pi}\frac{\Lambda^2}{\Lambda^2-m^2}\ln\frac{\Lambda^2}{m^2}$$
I believe you should read up on Feynman's parametrization. $$ \frac1{AB} = \int_o^1dz\ \frac1{(Az+(1-z)B)^2} $$ Substitute $A= (p^2 + \Lambda^2)$ and $B= p^2+m^2$ and see what follows.
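The parametrization identity is easy to verify directly; a SymPy sketch (working via the explicit antiderivative to keep the result free of special-case branches):

```python
import sympy as sp

A, B, z = sp.symbols('A B z', positive=True)

# Feynman parametrization: 1/(A*B) = Int_0^1 dz / (A*z + (1-z)*B)^2.
integrand = 1/(A*z + (1 - z)*B)**2
F = -1/((A - B)*(A*z + (1 - z)*B))   # antiderivative in z, valid for A != B

deriv_check = sp.simplify(sp.diff(F, z) - integrand)   # 0: F is an antiderivative
result = sp.simplify(F.subs(z, 1) - F.subs(z, 0))      # the definite integral
print(deriv_check, result)  # deriv_check is 0; result simplifies to 1/(A*B)
```

Substituting $A = p^2+\Lambda^2$ and $B = p^2+m^2$ then reproduces the second line of the calculation in the question.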
{ "language": "en", "url": "https://physics.stackexchange.com/questions/391470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Notation of Maxwell relations The Maxwell relations are often given as for example $$\left(\frac{\partial T}{\partial V}\right)_S = -\left(\frac{\partial P}{\partial S}\right)_V.$$ What does the $S$ and the $V$ in the index of the parantheses mean? I guess that $S$ and $V$ should stay constant for the derivation, but is this not already in the definition of the partial derivative?
Basically, in thermodynamics different functions get the same name if they refer to the same quantity. So, for example, the internal energy $U(p,V,N)$ and $U(T,V,N)$ are both called $U$ although they are not the same function. To my understanding this is why you write the constant variables next to the brackets: it's used to further differentiate between the different functions.
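The subscripts really do matter, because the "same" quantity expressed in different variables has different partial derivatives. A SymPy check of the quoted Maxwell relation for one mole of an ideal gas (entropy constant dropped, heat capacity $C_v$ kept symbolic; this model choice is an illustrative assumption, not from the question):

```python
import sympy as sp

S, V, R, Cv = sp.symbols('S V R C_v', positive=True)

# One mole of ideal gas: S(T,V) = Cv*ln(T) + R*ln(V) + const,
# inverted to express T in the natural variables (S, V) of U:
T = sp.exp(S/Cv) * V**(-R/Cv)
P = R*T/V   # ideal-gas law, now also a function of (S, V)

lhs = sp.diff(T, V)        # (dT/dV)_S : S is held fixed automatically
rhs = -sp.diff(P, S)       # -(dP/dS)_V : V is held fixed automatically
print(sp.simplify(lhs - rhs))  # 0, confirming the Maxwell relation
```

Had we instead differentiated $T(P,V)$ with respect to $V$, we would get a different answer — which is exactly what the subscript notation guards against.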
{ "language": "en", "url": "https://physics.stackexchange.com/questions/391732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Difference, in relation to minus sign, between the processes: $gg \to q\bar q$ and $qg\to qg$ I have been computing some QCD cross-sections lately and I have followed the instructions given by Peskin and Schroeder. However I stumble upon this situation: Where does the minus sign in $qg \to qg$ come from? The crossing symmetry involved is pretty clear, but that minus sign is confusing to me... I attach the differential cross-sections from Peskin and Schroeder
Gluons are vector bosons, but quarks are fermions. By crossing symmetry, you flip an antiquark in the s-channel process $gg\to q\bar{q}$ into a quark in the t-channel process $qg\to qg$. By fermion statistics, you must change the sign. So there is a "-" sign in the result.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/391855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is this interpretation of quantum fluctuation in eternal inflation in Wikipedia correct? Wikipedia's article on inflation says Although new inflation is classically rolling down the potential, quantum fluctuations can sometimes lift it to previous levels. These regions in which the inflaton fluctuates upwards expand much faster than regions in which the inflaton has a lower potential energy, and tend to dominate in terms of physical volume. But from Sean Carroll’s article, Eternal inflation is a different story. The idea there is that the inflaton field slowly rolls down its potential during inflation, except that quantum fluctuations will occasionally poke the field to go higher rather than lower. When that happens, space expands faster and inflation continues forever. This story relies on the idea that the “fluctuations” are actual events happening in real time, even in the absence of measurement and decoherence. And we’re saying that none of that is true. The field is essentially in a pure state, and simply rolls down its potential So, I asked a friend of mine who knows QFT and he said I never liked the concept of quantum fluctuations, especially when it comes to cosmology. In QFT, the fields always roll down to the exact minimum of the potential. It doesn't fluctuate in any meaningful sense. But the potential is the quantum mechanical one, not the classical one. The quote in the wikipedia may be a vague way to say that the classical potential acquires quantum corrections. Whether that picture is useful or not is beyond me.” Is the interpretation of quantum fluctuation in Wikipedia correct from the point of view of QFT? Or does the inflaton field just simply roll down its potential without any effects from quantum fluctuation?
The classical inflaton potential does receive quantum corrections from the other fields in the theory, but quantum fluctuations of the inflaton field itself have much greater significance during inflation. As a quantum field, the inflaton exhibits fluctuations about its classical trajectory, $\phi(x,t) = \phi_0(t) + \delta \phi(x,t)$. The result is a sort of fuzziness to the trajectory: The fluctuations cause the inflaton to roll down the potential at effectively different rates at different places in the universe, with the result that inflation ends at different times, $\delta t$, in different places. Via the continuity equation, this gives rise to a density perturbation, $\delta \rho/\bar{\rho} \propto H \delta t$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/391946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Using a previously determined detective quantum efficiency for a detector I am doing a radiation damage survey on a few different materials and will need the Detective Quantum Efficiency (DQE) when calculating the dose. I will be using the same detector for each sample and was wondering if it would be sufficient to acquire the DQE once, or even use a previously determined DQE, rather than recalculating it for each sample. My thinking is that since the detector is the same for each sample the DQE shouldn't fluctuate appreciably.
Short version: It's better to measure it in situ and shortly before (or better, before and after) taking data. Explanation Quantum efficiency can be a function of various operating parameters, so you would need to know that the previous measurement was done using the same parameters you plan to use. Various classes of detectors can experience changes in their actual quantum efficiency for various reasons: radiation damage; current erosion; mechanical, chemical, or thermal damage at a minimum. It is also possible that the previous measurement intentionally or unintentionally folded some acceptance features into the reported QE. For these reasons it is better to measure the QE * *explicitly for the data taking you plan *in the configuration you will be using *under the operating parameters you will be using *in close temporal proximity to the data taking But, as always, you have to balance the risk against the costs. If you know that the device was previously tested in a configuration compatible with the one you intend, under operating conditions compatible with those you intend, and has been stored with due care to protect it from things that could damage it (which could include humidity, bright light, intense radiation fields, mechanical shocks, and others depending on the nature of the detector), and you don't have a lot of money/time, then you might decide to chance it.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/392044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Electron repulsion force vs gravitational pull I understand the electron repulsion force is 20 orders of magnitude stronger than the gravitational pull. But in Proton Earth, Electron Moon, where it is hypothesized to replace the Moon with 10^52 electrons, it states that The energy from all those electrons pushing on each other is so large that the gravitational pull wins Because it had stepped into the realm of String Theory! Could somebody please explain why to me in layman's terms, because I have NO knowledge of String Theory.
First of all, the statement in question is NOT a consequence of string theory. It could be, but we don't know the physics of such high energy systems. According to the article, "something stringy might happen, we simply don't know". So we will ignore string theory. The next thing that you have to consider is that Newton's law of gravitational attraction is NOT the best description of gravity that we have. Currently gravity is best understood through general relativity and the Einstein field equations. The important difference between these two theories is the fact that in Einstein's theory, energy and mass are effectively the same and both of them cause gravitational attraction. Saying that both energy and mass cause gravitational attraction is technically incorrect, but you have asked to keep the explanation simple. What Randall (the author of What-If) is saying is that the (electrostatic) energy of the electron moon will be so large that it will create a black hole and basically suck all the electrons into it.
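The claim can be sanity-checked with an order-of-magnitude estimate: treat the electrons as a uniformly charged sphere of lunar radius, convert the electrostatic self-energy to a mass, and compare its Schwarzschild radius to the Moon. All figures below are rough assumptions for illustration, not values from the article:

```python
# Order-of-magnitude check: electrostatic self-energy of 10^52 electrons
# packed into a Moon-sized sphere, vs. the Schwarzschild radius of the
# equivalent mass. All numbers are rough illustrative assumptions.

k = 8.99e9        # Coulomb constant, N m^2 / C^2
G = 6.674e-11     # gravitational constant, m^3 / (kg s^2)
c = 3.0e8         # speed of light, m/s
e = 1.602e-19     # elementary charge, C
R_moon = 1.74e6   # lunar radius, m

Q = 1e52 * e                      # total charge
U = 3 * k * Q**2 / (5 * R_moon)   # self-energy of a uniformly charged sphere
m_equiv = U / c**2                # mass equivalent of that energy
r_s = 2 * G * m_equiv / c**2      # Schwarzschild radius of that mass

# r_s utterly dwarfs the Moon: the energy alone gravitates enough to collapse.
print(r_s / R_moon)
```

The ratio comes out absurdly large, which is the layman's version of "the gravitational pull wins": the energy of the mutual repulsion itself gravitates.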
{ "language": "en", "url": "https://physics.stackexchange.com/questions/392176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
2D Ising Landau theory and the term $A_i( m, T)\nabla_i m$? In (Sethna, 2007; pg$\sim$206) it is said that in the 2D Ising model, terms of the form e.g. $$A_i( m, T)\nabla_i m$$ are not allowed in the free energy, since there is no vector $A_i(m,T)$ that is invariant under $\pi/2$ rotations. I am confused why we need this since: $$A_i(m, T)\nabla_i$$ is invariant under $\pi/2$ rotations, so why do we need $A_i( m, T)$ alone to be? I have a feeling the answer is to do with active and passive transforms but can't quite get my head around it.
In the 2D Ising model there are two symmetries we need to concern ourselves with: * *Symmetry of the Lattice: The lattice is symmetric under $\pi/2$ rotations, as stated in the question. *Symmetry of the Order Parameter: The order parameter has symmetry under $m\rightarrow -m$ It is the symmetry of the lattice that concerns us here. From this consider $A_i \partial_i$: $$ A_x \frac{\partial}{\partial x}+A_y \frac{\partial }{\partial y}\tag{1}$$ I will here look at a passive rotation rather than an active one, since I think it is easier to understand. So consider the change of basis: $$ x \mapsto y, \quad y \mapsto -x\tag{2}$$ under which (1) becomes: $$ A_x \frac{\partial}{\partial y}-A_y \frac{\partial }{\partial x}\tag{3}$$ But (2) simply corresponds to a rotation of our coordinate system by $\pi/2$, and as such we require (1) to be equivalent to (3). The only way this can work is if $A_i=0$, thus proving that such a term is not allowed. Summary The statement that $A_i \partial_i$ is not allowed is a statement about the required symmetry of the lattice and not of the order parameter. If the lattice were, say, a triangular lattice, this rotational invariance condition would not hold.
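The little linear-algebra step at the end can be made explicit: matching the coefficients of $\partial_x$ and $\partial_y$ between (1) and (3) gives two linear conditions on $(A_x, A_y)$, whose only solution is zero. A SymPy sketch:

```python
import sympy as sp

Ax, Ay = sp.symbols('A_x A_y')

# Under the pi/2 rotation x -> y, y -> -x, the operator Ax*dx + Ay*dy
# becomes Ax*dy - Ay*dx. Invariance means the dx and dy coefficients match:
sol = sp.solve([sp.Eq(Ax, -Ay),   # coefficient of d/dx
                sp.Eq(Ay, Ax)],   # coefficient of d/dy
               (Ax, Ay))
print(sol)  # the only invariant vector is A = 0
```

Running the same two-equation match for a rotation by $2\pi/3$ (a triangular lattice) would give different conditions, which is why the summary's caveat matters.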
{ "language": "en", "url": "https://physics.stackexchange.com/questions/392391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Separation distance of Michelson Interferometer Suppose I would like to use Michelson Interferometer to observe fringes of equal thickness by creating an angle between the mirrors. Why is it vital for the path difference between the mirrors to be small in order to observe the fringes?
"Fringes of equal thickness" probably means equal spacing. You only get fringes of equal spacing in a Michelson interferometer when the beams in the interferometer are collimated. The only reason it might be necessary for the paths in the two arms of the interferometer to be nearly equal is if the light is not temporally coherent. If the coherence length of the light is greater than the largest path difference between the two beams, and if the beams have the same intensity, then you should get high-contrast fringes. A typical laser pointer has quite a long coherence length, sometimes greater than a couple of meters.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/392511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why doesn't a new ball-point pen write as smoothly as one that has been written with for a little while? Why doesn't a new ball-point pen write as smoothly as one that has been written with for a little while? You will say that the friction is higher at first. Then why is that so?
Friction is higher at the beginning. Usually, a ballpoint pen comes with a tip to protect the ink inside. This causes the ink at the surface to harden after a while, hence, producing more friction. However, upon writing, this hardened layer slowly goes away, and liquid ink begins to flow out, producing less friction.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/392636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Why can we assert, in general, that physical processes behave like low-pass filters? Consequently, why is it not physically possible to produce controllers for processes that are described by a transfer function that is an improper function? A simple example is the driven harmonic oscillator. The equation in this case is $mx'' + Ax' + kx = f(t)$, where $x$ is the variable that stands for the position of the mass, $m$ is the mass, $A$ is a certain constant for the friction and $f$ is a force that pulls the mass. Omitting some passages we get a transfer function (in the frequency domain) that is: $$W(s)=\frac {1/m}{s^2+2\alpha s+w_0^2}$$ where $\alpha = \frac{A}{2m}$ and $w_0 = \sqrt\frac km$. So the frequency response is: $$W(jw) = \frac {1/m}{(w_0^2-w^2)+2\alpha jw}$$ Now the modulus of the function is $M=\frac {1/m}{\sqrt{(w_0^2-w^2)^2+4\alpha^2 w^2}}$ and $\lim_{w\to+\infty} M = 0$. So the process has a visible response only at "low" values of the frequency of the force ($w$). The professor added that in some control problems we could obtain a transfer function of the controller that is an improper transfer function, and that a controller of this type is impossible to produce physically. I'd just like some kind of intuitive explanation for this.
Physical systems are either proper or strictly proper. This means they either show low-pass behaviour or a 'constant' one (imagine the two end-points of a bar that can move in one direction and can't rotate: every movement you apply to one end will immediately be applied at the other end). If a system's transfer function were improper, the system could show non-continuous behavior even if your input is continuous. For example, if $x$ is your variable denoting some coordinate, it could change value instantaneously even though the input is continuous. This is a property physical systems usually don't have. Also, this behavior would require an infinite amount of energy. From a control theory point of view, an improper system contains a differentiator. So if your input is not smooth, your system would have points at which it changes state in zero time.
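The proper/strictly-proper distinction shows up directly in the high-frequency magnitude. A quick numeric sketch for the oscillator in the question, with illustrative parameter values ($m = k = 1$, $A = 0.2$, chosen here for the example):

```python
import numpy as np

# |W(jw)| for the damped oscillator W(s) = (1/m) / (s^2 + (A/m) s + k/m).
# Parameter values are illustrative, not from the question.
m, A, k = 1.0, 0.2, 1.0

def mag(w):
    s = 1j * w
    return abs((1/m) / (s**2 + (A/m)*s + k/m))

# Strictly proper: the denominator degree exceeds the numerator degree by 2,
# so |W| falls off like 1/w^2 at high frequency -- low-pass behaviour.
low, high = mag(0.1), mag(100.0)
print(low, high)
```

An improper transfer function would instead have a magnitude that *grows* without bound as $w \to \infty$, which is the frequency-domain face of the "infinite energy" objection above.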
{ "language": "en", "url": "https://physics.stackexchange.com/questions/392711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to interpret $\langle\pi|J|\pi\rangle$? The decay constant $f$ of a hadron with momentum $P$ or its in-hadron condensate $\kappa$ (See this paper, Eq. (8)) are given by terms like $ \begin{align*} f_\pi P^\mu &\sim \langle 0|J_\text{axial}^\mu (0) |\pi(P)\rangle,\tag{1}\\ \kappa_\pi &\sim \langle 0|J_\text{pseudoscalar} (0) |\pi(P)\rangle.\tag{2} \end{align*} $ Why do we use the vacuum $|0\rangle$ and the pion $|\pi\rangle$ together? How would I interpret these terms if we would replace the vacuum with the pion state?
Are you asking about the meaning of these expressions or their relevance to physics? I'll address the former and mention the latter in brief in closing. Injection of a particular axial current operator onto a pion line gives rise to its decay. This is represented by the matrix element $\langle 0|J^{\mu}_{\text{axial}}|\pi(P)\rangle,$ that is to say the operator-valued insertion acts on an initial pion state, leaving the vacuum in the final state with respect to the total number of initial-state hadrons. Replacing $\langle 0|$ with another pion state $\langle \pi(P’)|$, say, would imply the pion exists in both the initial and final state. Matrix elements of this kind are common in e.g. elastic scattering, where the operator insertion is not responsible for the destruction or decay of the initial state. See e.g. the upper electron line in the canonical DIS setup, where the familiar QED vector insertion accounts for the interaction of the electron with the photon, changing only its momentum. With regards to the utility of these terms, and perhaps to be viewed as an aside, the matrix element in question is useful for predicting the pion decay constant to some loop order using its convenient proportionality to it. The contributing diagram topologies are encoded in the axial current. This is all useful within chiral perturbation theory.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/394017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Would a magnet be able to attract an object through a sheet of stainless steel? For example, if there were a neodymium magnet at position x=0, a sheet of stainless steel at position x=1, and a magnetic object at x=5, would the magnet still attract the object? Is the attraction force less than if the stainless steel sheet were absent? Would a thicker sheet of stainless steel dampen the attraction force?
A simple physical argument should be whether the magnetic field will be able to polarize that medium in between. If that is possible then field intensity will be transported through polarization.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/394140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Confusion on quantum numbers So, I've known for a long time the famous quantum numbers $n, l, m, s$ and I thought these were all of the quantum numbers, and then when applying the Schrödinger equation to orbital angular momentum and getting the spherical harmonics, with their numbers $l$ and $m$, I thought, okay, here they are, that's all. But recently, I've been taught that angular momentum is not only composed of the orbital angular momentum, but also the intrinsic angular momentum, the spin, $\vec J = \vec L \otimes 1\!\!1 + 1\!\!1 \otimes \vec S$. And with this, I'm introduced to the quantum numbers $j$ and another $m$, which I think can be specified by $m_j$, and also $s$ and $m_s$. I'm confused by so many $m$'s. Are the quantum numbers that I initially wrote the only ones, and can the other ones be derived from these? Are all of $m, m_j, m_l, m_s$ different from each other, or is there one that encompasses them all? Is the $s$ that you get from $\vec S$ the same $s$ as my initial one? What's the physical meaning of all these quantum numbers? Are there any other quantum numbers that I haven't encountered yet?
Hydrogen atom with and without Coulomb potential For the H-atom with a Coulomb potential and no perturbations such as spin-orbit interaction, relativistic corrections etc., $n,l,m_l,m_s$ are good (conserved) quantum numbers because the operators $\textbf{L}^2,L_z,\textbf{S}^2,S_z$ commute with the Hamiltonian, and hence can be used to label the states. $n,l,m_l,s,m_s$ label the eigenvalues of $H,\textbf{L}^2,L_z,\textbf{S}^2, S_z$ respectively. Since $s=1/2$ for a spin-1/2 particle, $s$ is omitted from the set $n,l,m_l,s,m_s$. With spin-orbit interaction, it is the operators $H,\textbf{L}^2,\textbf{S}^2, \textbf{J}^2,J_z$ that commute with the Hamiltonian, where $\textbf{J}=\textbf{L}+\textbf{S}$. Hence, $n,l,s,j,m_j$ are the complete set of good quantum numbers ($s$ is redundant because $s=1/2$). So in a nutshell, $m_l$ is the eigenvalue of $L_z/\hbar$, $m_s$ is that of $S_z/\hbar$ and $m_j$ is that of $J_z/\hbar$. General theory of quantum angular momenta Any angular momentum $\textbf{J}$ is defined as a set of three hermitian operators $\textbf{J}\equiv (J_1,J_2,J_3)$ which satisfy the following commutation relations $$[J_i,J_j]=i\hslash\epsilon_{ijk}J_k\hspace{0.5cm}\text{i,j=1,2,3}.$$ The symbol $\textbf{J}$ is introduced here for a general angular momentum vector: it can stand for $\textbf{L}$, $\textbf{S}$, $\sum \textbf{L}$, $\sum \textbf{S}$ or $\textbf{L}+\textbf{S}$.
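The defining commutation relations are easy to check concretely for spin-1/2, where $J_i = \hbar\sigma_i/2$. A NumPy sketch with $\hbar = 1$:

```python
import numpy as np

# Spin-1/2 angular momentum operators J_i = sigma_i / 2 (hbar = 1).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
J = [sx/2, sy/2, sz/2]

def comm(a, b):
    return a @ b - b @ a

# Check [J_i, J_j] = i eps_ijk J_k on the cyclic triples:
ok = (np.allclose(comm(J[0], J[1]), 1j*J[2])
      and np.allclose(comm(J[1], J[2]), 1j*J[0])
      and np.allclose(comm(J[2], J[0]), 1j*J[1]))

# J^2 = j(j+1) * identity with j = 1/2, i.e. (3/4) * I -- this eigenvalue
# is exactly what the quantum number s (here j = s = 1/2) labels.
J2 = sum(j @ j for j in J)
print(ok, np.allclose(J2, 0.75*np.eye(2)))
```

The same check works for the $l=1$ orbital matrices or any other spin, which is the content of the remark that $\textbf{J}$ can stand for $\textbf{L}$, $\textbf{S}$, or their sum.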
{ "language": "en", "url": "https://physics.stackexchange.com/questions/394244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Does reversing time give parity reversed antimatter or just antimatter? Feynman's idea states that matter going backwards in time seems like antimatter. But, since nature is $CPT$ symmetric, reversing time ($T$) is equivalent to $CP$ operation. So, reversing time gives parity reversed antimatter, not just antimatter. What is happening here? Why does nobody mention this parity thing when talking about reversing time? What am I missing?
Positrons have equal and opposite charge and parity to electrons. Hence when combined, they can produce a neutral gamma ray with no parity. https://en.m.wikipedia.org/wiki/T-symmetry
{ "language": "en", "url": "https://physics.stackexchange.com/questions/394367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What is the difference between a deuterium nucleus and a sexaquark? Assume a sexaquark contains 3 up and 3 down quarks. What is the difference between this and a deuterium nucleus containing a proton bound to a neutron? Is there any difference?
My advisor posed me the same riddle ages ago, and it drove me stark raving mad. To expand on Ben Crowell's comment ... Deuterons have exceptionally weak binding energies, out of line with heavier nuclei, so they may be atypical. Alpha particles seem more representative of differences between nuclear matter and quark matter. A naive shell model says that the 1s shell can accommodate four nucleons or twelve light quarks, so no difference there. In nuclear matter at normal density, quarks somehow clump in color-singlet groups of three. A naive shell model in the Hartree-Fock tradition (described below) does not predict the observed correlations. You would have to include perturbative admixtures of two- and many-body excited states. Unfortunately, low-energy QCD is so poorly understood that you cannot expect good answers. (The Hartree-Fock approximation uses a Slater determinant of single-particle wave functions as a trial wavefunction. The actual wavefunction would be a sum of many such determinants.) At somewhat higher densities, possibly achieved in neutron stars, the nucleons would overlap in space and lose their distinct identities, so you could best picture the nuclear matter as quark matter. QCD forces would get weaker as well, thanks to higher Fermi momenta, so perturbation theory would be more accurate.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/394522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Would we be able to be consistent in our theory if we were to assume only the particle nature of matter together with the uncertainty principle? As far as I understand, the uncertainty principle is a direct consequence of the wave-particle duality of matter; however, would we be able to be consistent in our theory if we were to assume only the particle nature of matter together with the uncertainty principle? I mean, for example, in the Davisson-Germer experiment: if we say that the electron behaves like a particle and we cannot determine its position and velocity precisely at the same time, we can say that the electron collides with the lattice at different angles and hence scatters. (Of course, then we could not explain why the scattering intensity has different values at different angles, but that is not the point here. The point is whether we would be consistent in our theory or not.)
In the mainstream quantum mechanical formalism, the Heisenberg uncertainty principle comes from the commutation relations of operators. For non-relativistic energies there exists a theory called Bohmian mechanics, which reproduces the mathematics and the probabilistic values by having particles accompanied by "pilot waves", which generate the probabilities and uncertainties. As it is a matter of preference whether the mainstream formalism or the "pilot wave" formalism really holds, your question can be answered in the affirmative. For relativistic energies the pilot-wave theory is not able to keep up with observations, and thus is not useful for particle-physics studies. There are a minority of theoretical studies pursuing this path, trying to reconcile it with special relativity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/394629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
The path difference when a block covers one slit in Young's double slit experiment A modification of the simplest case of Young’s double slit experiment is when the path length for one of the slits is changed. I've been told that if a strip of material of thickness $ t $ and refractive index $ n $ is placed over one slit then it adds a path difference of $(n − 1)t$, which results in the fringes being shifted. However, I am not sure how $(n − 1)t$ is derived and why this gives the path difference?
Apply the definition of optical path length: inside the strip the light covers an optical distance $nt$ over the thickness $t$, whereas in air it would cover just $t$. The extra optical path introduced by the strip is therefore $nt - t = (n-1)t$, which shifts the whole fringe pattern.
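To make that concrete, a minimal numerical sketch (with assumed illustrative values for $n$, $t$ and $\lambda$): the extra optical path $(n-1)t$ divided by the wavelength gives the shift measured in fringe widths.

```python
def fringe_shift(n, t, wavelength):
    """Shift of the pattern, in units of the fringe spacing, when a
    strip of refractive index n and thickness t covers one slit."""
    extra_path = (n - 1) * t          # optical path n*t replaces t in air
    return extra_path / wavelength    # each wavelength of extra path = one fringe

# assumed example values: a thin glass strip over one slit, green light
shift = fringe_shift(n=1.5, t=2e-6, wavelength=500e-9)
```

With these numbers the extra path is one micron, i.e. the pattern shifts by two whole fringes.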
{ "language": "en", "url": "https://physics.stackexchange.com/questions/394791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Does the mass of a bicycle directly affect stopping distance? In this answer on the cycling SE, the claim is made that adding more mass to a bicycle increases the stopping distance. I was under the impression that mass should not affect the stopping distance so long as all the other factors remain the same (balance, coefficient of friction, etc.). What factors in this scenario contribute to increasing stopping distance on a bicycle? If the bicycle is balanced the same but weighs more, will the stopping distance be equal?
You have a great theoretical answer from CMS. The work required to stop is proportional to mass, and the distance to stop is proportional to that work divided by the braking force. There are two coefficients of friction: static and kinetic. Static is greater, and applies when you are not skidding. Max braking is to apply just enough pressure to take the tires just short of skidding. It is not a given that the brakes can apply enough force to make the tire skid, though most good bike brakes can. Friction follows $F_{max} = \mu m g$, where $\mu$ is the coefficient of friction; this assumes ideal rubber and road. In practice, if you double the mass the braking force may not fully double, because the properties of the rubber degrade under load. In the normal rider weight range, say 120 lb - 200 lb, a rubber tire is close to ideal. You cannot extend this to 2000 lb: the tire becomes highly deformed and may not even hold the weight. The other factor on a bicycle is that taking the front tire to maximum friction would typically send you over the handlebars. If you add the weight low, you can come closer to maximum friction on the front wheel. Braking itself creates a force: the front tire gets more downward force and the rear tire less. This would be linear on an ideal bike. I think max braking occurs with zero weight on the rear (negative would mean going over the handlebars), and this would need to happen right at maximum friction of the front tire. I think the theoretical answer from CMS is correct, but it assumes an ideal bike and ideal rubber/road. In practice a heavier rider will take longer to stop.
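A quick sketch of the ideal-friction argument above (assumed example values; real tires deviate as described): equating the braking work $\mu m g\, d$ to the kinetic energy $\tfrac12 m v^2$ makes the mass cancel.

```python
G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(v, mu, m):
    """Distance to stop from speed v with friction coefficient mu.
    Work-energy theorem: mu*m*G*d = 0.5*m*v^2, so the mass m cancels."""
    kinetic_energy = 0.5 * m * v**2
    braking_force = mu * m * G
    return kinetic_energy / braking_force  # equals v**2 / (2*mu*G)

d_light = stopping_distance(v=10.0, mu=0.8, m=80.0)   # light rider
d_heavy = stopping_distance(v=10.0, mu=0.8, m=160.0)  # twice the mass
```

Both come out to about 6.4 m; only when $\mu$ itself degrades with load, as argued above, does the heavier rider lose.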
{ "language": "en", "url": "https://physics.stackexchange.com/questions/394975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 8, "answer_id": 7 }
Randomness of radioactive decay I know there are similar questions, but I don't think they address exactly this question. Radioactive decay is a truly random process, according to physics. We believe that there is no hidden particle or cause that we have not discovered yet that governs particle decay. On the contrary, computer-generated "random" numbers follow a deterministic rule, and are not truly random. If one did not know the rules behind computer-generated "random" numbers, one may believe they are truly random, since it may be very hard to understand the rule. How can we be sure that there are no yet-undiscovered physical laws that would explain away the supposed randomness of radioactive decay?
The theoretically derived statistical distribution for radioactive decay, using the formalism of Quantum Mechanics (QM), which is intrinsically probabilistic, agrees well with empirical results. So, first we have to acknowledge that QM is able to explain the radioactive decay observations, just as it is able to explain every other phenomenon at the micro level. If you now seek a deterministic law underlying radioactive decay, then it suggests that there should also be a deterministic explanation for all other phenomena for which quantum mechanics has been successful. But attempts to construct such local "hidden variable" theories have not been successful in reproducing all the successes of QM, and a large class of them have already been ruled out by experiments (testing Bell-type inequalities). In summary, QM, with its intrinsic uncertainty, is still the best theory we have of the micro world. Is QM the last word? Maybe not. Maybe one day we will find a deeper underlying theory, but it would likely be even further removed from our classical intuition.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/395373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Why is it said that density of nucleons in a nucleus is constant? Question: Why is it said that the density of nucleons in a nucleus is constant? I am studying an introductory course in nuclear and subnuclear physics. Based on the context in which it is cited (I cannot cite the notes because they are private notes) I do not understand if it is a theoretical assumption or a hypothesis derived from experiment. I searched in the literature, especially in the book Krane, Introductory Nuclear Physics, but my doubt has not been clarified.
High-energy electron scattering gives a very direct measure of the density of the protons, and the density is found to be fairly constant. Another example of the type of evidence that supports this is that measurements of the Coulomb barrier for nuclear fusion are consistent with a nuclear radius that varies as $A^{1/3}$, which is what is expected if the density is constant.
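The $A^{1/3}$ scaling can be turned into a one-line check: with $R = r_0 A^{1/3}$ (taking the commonly quoted $r_0 \approx 1.2\,\mathrm{fm}$ as an assumed value), the density $A m_u / (\tfrac{4}{3}\pi R^3)$ is independent of $A$. A hedged sketch:

```python
import math

M_U = 1.66054e-27   # atomic mass unit, kg
R0 = 1.2e-15        # assumed radius parameter r0, m (a typical fitted value)

def nuclear_density(A):
    """Mass density of a nucleus with mass number A, assuming R = r0 * A^(1/3)."""
    radius = R0 * A**(1.0 / 3.0)
    volume = (4.0 / 3.0) * math.pi * radius**3
    return A * M_U / volume   # the A cancels: density = 3*m_u / (4*pi*r0^3)

rho_O16 = nuclear_density(16)     # oxygen-16
rho_Pb208 = nuclear_density(208)  # lead-208
```

Both give roughly 2.3 × 10¹⁷ kg/m³, the constant density that the scattering data point to.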
{ "language": "en", "url": "https://physics.stackexchange.com/questions/395646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Does Power Law Inflation Lead to Eternal Inflation? If we have an inflation model with potential $V(\phi) = V_0 e^{-\sqrt{\frac{2}{\lambda}} \frac{\phi}{M_p}}$, where $V_0$ and $\lambda$ are free parameters, does this lead to eternal inflation for $\lambda > 1$? The slow roll parameter $\epsilon_V(\phi) = \frac{M_p^2}{2} \left(\frac{V_{,\phi}}{V}\right)^2 = \frac{1}{\lambda}$ appears to be a constant, and so for all $\lambda > 1$, $\epsilon_V(\phi) < 1$. This seems to imply that inflation never breaks down.
Yes, inflation does not end for single field inflation driven by a purely exponential potential. Either one interprets this form of the potential as approximating a different potential when observational scales exit the horizon, or, if taken to be exact, one must introduce some mechanism to end inflation. Non-canonical kinetic terms have been investigated for this purpose. Another thought would be to introduce a second, auxiliary field similar to hybrid inflation.
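The constancy of $\epsilon_V$ claimed in the question is easy to verify numerically. A small sketch (units with $M_p = 1$ and assumed parameter values) differentiates the potential by central finite differences:

```python
import numpy as np

MP, V0, LAM = 1.0, 1.0, 4.0   # assumed illustrative values, lambda > 1

def V(phi):
    return V0 * np.exp(-np.sqrt(2.0 / LAM) * phi / MP)

def epsilon_V(phi, h=1e-6):
    """Slow-roll parameter (Mp^2/2)(V'/V)^2 via a central difference."""
    dV = (V(phi + h) - V(phi - h)) / (2.0 * h)
    return 0.5 * MP**2 * (dV / V(phi))**2

# the same value 1/lambda at every field value: inflation never ends here
eps_values = [epsilon_V(phi) for phi in (0.0, 1.0, 5.0, 20.0)]
```

All four values equal $1/\lambda = 0.25$ to numerical precision, confirming that slow roll never breaks down for this potential.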
{ "language": "en", "url": "https://physics.stackexchange.com/questions/395798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Entangling two quantum systems that are separated by a distance If two quantum systems are entangled their measurements become correlated, even if they are separated by a distance. But what if one has two different quantum systems at hand, initially un-entangled and separated, can one generate entanglement between them? Or is it necessary that they be "close enough"? If there is no such theoretical necessity has any experiment ever been performed to achieve this?
Usually it is impossible to entangle spatially separated systems $A$ and $B$, but you can evade this restriction by a trick called entanglement swapping. You entangle $A$ with a third system $C,$ and $B$ with a fourth system $D$. These operations must be performed in the usual way, by allowing $A$ $(B)$ to interact with $C$ $(D)$. Now $C$ and $D$ are brought close together and projectively measured in an entangled basis (i.e., the outcome of the measurement is always that $C$ and $D$ are in one of a set of entangled states). This measurement simultaneously projects $A$ and $B$ into an entangled state (which entangled state depends on the specific outcome of the $CD$ measurement). Note that $A$ and $B$ can remain arbitrarily far from each other throughout the entire procedure. The entanglement between $A$ and $C$, and between $B$ and $D$ is "swapped" to $A$ and $B$, hence the term. You can think of $C$ and $D$ as representatives for $A$ and $B$, taking their places at the meeting. Here's a news article citing the first experimental demonstration: https://phys.org/news/2007-10-entanglement-swapping-quantum.html To further explain the prohibition on entangling distant systems, the actual statement is that entanglement between $A$ and $B$ does not increase under local operations and classical communication (LOCC). In entanglement swapping we have nonclassical communication (transmission of quantum states $C$ and $D$), and also nonlocal operations (measurement of the joint state of $C$ and $D$), so that's how the trick works.
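The protocol is short enough to simulate directly. A sketch with numpy (plain state vectors, no quantum library assumed): start with $A$–$C$ and $B$–$D$ each in the Bell state $|\Phi^+\rangle$, project $C,D$ onto $|\Phi^+\rangle$, and check that the leftover $A$–$B$ state is maximally entangled.

```python
import numpy as np

# |Phi+> amplitudes as a 2x2 array: phi[i, j] = <ij|Phi+>
phi = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2.0)

# four-qubit state psi[a, b, c, d]: A entangled with C, B entangled with D
psi = np.einsum('ac,bd->abcd', phi, phi)

# Bell measurement on C, D: amplitude for the outcome |Phi+>_{CD}
chi = np.einsum('cd,abcd->ab', phi.conj(), psi)
prob = float(np.sum(np.abs(chi)**2))   # probability of this outcome
chi = chi / np.sqrt(prob)              # post-measurement A-B state

# reduced state of A and its purity: purity 1/2 means maximal entanglement
rho_A = chi @ chi.conj().T
purity = float(np.real(np.trace(rho_A @ rho_A)))
```

The outcome probability is 1/4 (one of four Bell outcomes), and the purity 1/2 shows that $A$ and $B$ end up maximally entangled even though they never interacted.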
{ "language": "en", "url": "https://physics.stackexchange.com/questions/396091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What does self-closing bra-ket mean in Robertson-Schrodinger Uncertainty Relation? I was reading: https://en.wikipedia.org/wiki/Heisenberg%27s_uncertainty_principle#Robertson–Schrödinger_uncertainty_relations Where an inequality is presented: $$ \sigma_A^2 \sigma_B^2 \geq \left| \frac{1}{2} \langle \lbrace \hat{A}, \hat{B} \rbrace \rangle - \langle{\hat{A}}\rangle \langle \hat{B} \rangle \right|^2 + \left| \frac{1}{2i} \langle [ \hat{A}, \hat{B} ] \rangle \right|^2 $$ I found the notation hard to understand: $$ \langle \lbrace \hat{A}, \hat{B} \rbrace \rangle$$ is a bra-ket expression, but it contains a single expression inside, $\lbrace \hat{A}, \hat{B} \rbrace$ (so it can't be a dot product). How do I interpret this? A similar problem arises here: $$ \langle{\hat{A}}\rangle \langle \hat{B} \rangle$$ where $A, B$ are single entities contained in a closed bra-ket expression.
In addition to the other good answers directly explaining the meaning, it would be constructive to see how it is derived: The inequality is the generalized uncertainty principle, which is the most complete form of the uncertainty principle. Given Hermitian operators $A,B$, we define vectors $\left|q\right>$ and $\left|p\right>$ with an arbitrary wavefunction $\left|\phi \right>$ as: $$\left|q\right>=(A-\left< A\right>)\left|\phi\right> \ ;\ \left|p\right>=(B-\left< B\right>)\left|\phi\right>\tag{1} $$ By the Schwarz inequality: $$\left<q|q\right>\left<p|p\right> \ge \left<q|p\right>\left<p|q\right>\tag{2} $$ $$(\left<A^2\right>-\left<A\right>^2)(\left<B^2\right>-\left<B\right>^2)\ge \Re^2[\left<p|q\right>]+ \Im^2[\left<p|q\right>]\tag{3} $$ After some work: $$(\Delta A)^2(\Delta B)^2\ge [\frac{1}{2}(\left<p|q\right>+\left<q|p\right>)]^2+[\frac{1}{2i}(\left<p|q\right>-\left<q|p\right>)]^2\tag{4} $$ $$(\Delta A)^2(\Delta B)^2\ge (\left<\phi\right|\frac{1}{2}[AB+BA]\left|\phi\right>-\left<A\right>\left<B\right>)^2+(\left<\phi\right|\frac{1}{2i}[AB-BA]\left|\phi\right>)^2\tag{5} $$ Simplifying it, we have: $$(\Delta A)^2(\Delta B)^2\ge (\left<\phi\right|\frac{1}{2}\{A^*,B^*\}\left|\phi\right>)^2+(\left<\phi\right|\frac{1}{2i}[A,B]\left|\phi\right>)^2\tag{6} $$ Where $\{A,B\}$ is the anticommutator $AB+BA$, and $A^*$ and $B^*$ are defined as $A-\left<A\right>$ and $B-\left<B\right>$ respectively. In general, the anticommutator part is too complicated to be useful. As an inequality, the generalized uncertainty principle always implies: $$(\Delta A)^2(\Delta B)^2\ge (\left<\phi\right|\frac{1}{2}\{A^*,B^*\}\left|\phi\right>)^2+(\left<\phi\right|\frac{1}{2i}[A,B]\left|\phi\right>)^2 \ge (\left<\phi\right|\frac{1}{2i}[A,B]\left|\phi\right>)^2 $$ This is the usual uncertainty principle.
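Since every step above is finite-dimensional linear algebra, the final inequality can be spot-checked numerically with random Hermitian operators and a random state. A sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_hermitian(n):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2.0   # symmetrize to make it Hermitian

dim = 5
A, B = random_hermitian(dim), random_hermitian(dim)
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi = psi / np.linalg.norm(psi)     # normalized random state

def expect(op):
    return float(np.real(psi.conj() @ op @ psi))

var_A = expect(A @ A) - expect(A)**2
var_B = expect(B @ B) - expect(B)**2
anti = 0.5 * expect(A @ B + B @ A) - expect(A) * expect(B)      # <{A*,B*}>/2
comm = float(np.real((psi.conj() @ (A @ B - B @ A) @ psi) / 2j))  # <[A,B]>/2i

lhs = var_A * var_B
rhs = anti**2 + comm**2
```

The relation lhs ≥ rhs holds for every random draw; dropping the anticommutator term recovers the usual, weaker uncertainty principle.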
{ "language": "en", "url": "https://physics.stackexchange.com/questions/396286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How does one calculate the stability of a ring cavity? When discussing whether or not a cavity is stable, i.e., if a beam stays inside the cavity, one can either employ the ABCD-matrix approach, in which case the cavity is stable if $$ 0 \leq \frac{A+D+2}{4} \leq 1 $$ In a cavity with two perfectly parallel planar mirrors, for example, $A=D=1$, and the relation holds. Another approach, when using curved mirrors, is to define $$ g_1 = 1 - \frac{L}{R_1}, \quad g_2 = 1- \frac{L}{R_2} $$ where $L$ is the optical path length in the cavity and $R_i$ the radii of the curved mirrors (negative if the mirror is concave). Stability is then given if $$ 0 \leq g_1 g_2 \leq 1 $$ Now, my question: which approach, if any of those two, would I use if I had a ring resonator, specifically a bow-tie type cavity? There are 4 mirrors here, 2 curved and 2 planar, and some other elements like Brewster windows or an etalon. Is there e.g. some numerical approach I could calculate with Mathematica or similar?
The general approach is that the magnitudes of the eigenvalues of the transfer matrix of the ring round trip must both be below unity. This translates to the physical statement that the beam width at any given point along the cavity shrinks with each round trip, i.e. that the whole beam stays within the cavity. If either of the eigenvalues exceeds unity, a few bounces put the beam outside the cavity, thus abruptly quenching the resonance. The first criterion expresses this idea. The second criterion you cite is a special case of the first for the case of two spherical mirrors of curvature radii $R_1,\,R_2$ spaced by an on-optical-axis distance of $L$ each way. The eigenvalues of the transfer matrix (or any $2\times2$ matrix) $M$ are: $$\lambda = \frac{1}{2}\left(\mathrm{tr}(M)\pm\sqrt{\mathrm{tr}(M)^2-4\det(M)}\right)$$ but transfer matrices are always symplectic so in particular must have unit determinant. This leads to the first inequality you cite. Now calculate the transfer matrix for the round trip for the spherical mirrors. You will find that the $g_1\,g_2$ quantity is equivalent to $\frac{1}{2}+\frac{1}{4}\mathrm{tr}(M)$. So, obviously, you will use the first criterion for the transfer matrix of the round trip in your setup.
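In practice one just multiplies the ABCD matrices of the round trip and tests $|\mathrm{tr}(M)| \le 2$ (equivalently, the first criterion in the question). A numpy sketch for a simplified ring with two curved mirrors and assumed, purely illustrative distances:

```python
import numpy as np

def free_space(d):
    return np.array([[1.0, d], [0.0, 1.0]])

def curved_mirror(R):
    # a mirror of radius R focuses like a lens of focal length R/2
    return np.array([[1.0, 0.0], [-2.0 / R, 1.0]])

def round_trip(R, L1, L2):
    """Mirror -> propagate L1 -> mirror -> propagate L2. Flat mirrors and
    Brewster plates act as identity/propagation elements, folded into L2."""
    return free_space(L2) @ curved_mirror(R) @ free_space(L1) @ curved_mirror(R)

def is_stable(M):
    return abs(np.trace(M)) <= 2.0   # same as 0 <= (A + D + 2)/4 <= 1

M_good = round_trip(R=0.100, L1=0.110, L2=0.600)  # assumed bow-tie-like numbers
M_bad = round_trip(R=0.100, L1=0.300, L2=0.600)   # curved mirrors too far apart
```

A useful sanity check on the matrix product is det M = 1 (symplecticity); once fold angles are included, the tangential and sagittal planes should each be tested separately.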
{ "language": "en", "url": "https://physics.stackexchange.com/questions/396432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do physicists integrate? I've always thought that the integration notation in physics is weird, but I understood it nevertheless for a single variable, until I started reading Zee's QFT in a nutshell, where 4 dimensions are used. So I was wondering how the integration notation works. For example, for a free theory with sources at $x_1$ and $x_2$ we have: $$ W(J)= -\frac{1}{2}\int\frac{d^4k}{(2\pi)^4}J_2(k)^*\frac{1}{k^2-m^2+i\epsilon}J_1(k) $$ Now we let: $$ J_a(x)=\delta^{(3)}(\vec{x}-\vec{x_a}) $$ where $a=1,2$. Using the Fourier transform into momentum space: $$ J(k)=\int d^4xe^{-ik\cdot x}J(x) $$ What I don't understand is that W(J) becomes: $$ W(J)=-\int\int dx^0dy^0\int \frac{dk^0}{2\pi}e^{ik^0(x^0-y^0)}\int\frac{d^3k}{(2\pi)^3}\frac{e^{ik\cdot(x_1-x_2)}}{k^2-m^2+i\epsilon} =\int dx^0\int\frac{d^3k}{(2\pi)^3}\frac{e^{ik\cdot(x_1-x_2)}}{k^2+m^2} $$ I know that the $2$ comes from the two terms being equal so don't mind that. It says integrating over $y^0$ will get a delta function setting $k^0$ to zero, and that I don't understand. The only thing I know (and I'm guessing but I'm not sure yet) is that: $$ J_a(k)=\int d^4xe^{-ik\cdot x}\delta^{(3)}(x-x_a)=\int dx^0e^{-ik^0\cdot x^0}\int d^3xe^{-ik\cdot x}\delta^{(3)}(x-x_a)= \int dx^0e^{-ik^0\cdot x^0}e^{-ik\cdot x_a} $$ and the integral over $x^0$ (or $y^0$) is independent of the spatial $k$ so it's outside the integral of spatial $k$, but inside the integral of temporal $k$ in the $W(J)$. And the one with the $x_a$ part is inside the spatial integral $k$. Am I correct about this? But yeah, my questions are: * *What's with the integral over $y^0$? *Why is there even a $y$? Is it equal to $x_2$? *How did the sign changed for the $m^2$ part? *How do you actually know what's inside the integral using this notation? Or actually how to read integrals like this?
$$ J_a(k) = \int dx^0 e^{-ik^0x^0} \int d^3x e^{i\vec{k}\cdot\vec{x}}\delta^3(\vec{x}-\vec{x}_a) = e^{i\vec{k}\cdot\vec{x}_a}\int dx^0 e^{-ik^0x^0} $$ $$ J^{*}_a(k) = \int dy^0 e^{ik^0y^0} \int d^3y e^{-i\vec{k}\cdot\vec{y}}\delta^3(\vec{y}-\vec{x}_a) = e^{-i\vec{k}\cdot\vec{x}_a}\int dy^0 e^{ik^0y^0} $$ So you have $$ J^*_2(k)J_1(k) = e^{i\vec{k}\cdot(\vec{x}_1-\vec{x_2})} \int dx^0 \int dy^0 e^{ik^0(y^0-x^0)} $$ Now let me work with the term $k^2-m^2$: $$k^2-m^2 = (k^0)^2-\vec{k}^2-m^2 = (k^0)^2-(\vec{k}^2+m^2) $$ Now let's talk about $\int \frac{dy^0}{2\pi} e^{ik^0(x^0-y^0)}$: by changing the integration variable $y^0-x^0 = \tilde{y}^0$ (so that $dy^0 = d\tilde{y}^0$), you get $\int \frac{d\tilde{y}^0}{2\pi} e^{i\tilde{y}^0k^0} $ which is the one dimensional Dirac Delta (you can quickly Google it): $\delta(k^0-0)$. Now taking the integral over $dk^0$ simply means setting $k^0=0$ since you have this Dirac Delta function. The sign change comes as a consequence since you have $-(\vec{k}^2+m^2)$ at the denominator and you finally get: $$ W[J] = \int dx^0 \int \frac{d^3k}{(2\pi)^3} \frac{e^{i\vec{k}\cdot(\vec{x}_1-\vec{x}_2)}}{\vec{k}^2+m^2}$$ I hope this helps!
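The remaining 3-momentum integral is the standard Yukawa form, $\int \frac{d^3k}{(2\pi)^3} \frac{e^{i\vec k\cdot\vec r}}{\vec k^2+m^2} = \frac{e^{-mr}}{4\pi r}$, and checking it numerically is a good way to build trust in this notation. A sketch (doing the angular integral by hand first, then the radial one with scipy's Fourier-weight routine):

```python
import numpy as np
from scipy.integrate import quad

def yukawa_numeric(r, m):
    """After the angular integration the integral reduces to
    (1 / (2*pi^2*r)) * Int_0^inf k*sin(k*r)/(k^2 + m^2) dk."""
    radial, _ = quad(lambda k: k / (k**2 + m**2), 0.0, np.inf,
                     weight='sin', wvar=r)   # QAWF handles the oscillatory tail
    return radial / (2.0 * np.pi**2 * r)

def yukawa_exact(r, m):
    return np.exp(-m * r) / (4.0 * np.pi * r)

num = yukawa_numeric(1.5, 2.0)    # assumed example values of r and m
exact = yukawa_exact(1.5, 2.0)
```

The two agree to many digits, which is the familiar statement that a massive propagator in position space is a screened (Yukawa) potential.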
{ "language": "en", "url": "https://physics.stackexchange.com/questions/396750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why runners lean forward? Why runners tend to lean forward prior to start running? How does it help run faster? What is the physics behind his leaning?
The weight applies at the center of mass (CM) of the person. When the person stands perpendicular to the ground, the weight points straight down, the normal force compensates it, and nothing happens. However, when the runner leans forward, the vector from the feet to the CM is no longer parallel to the weight. That produces a moment of force (torque) on the runner about the feet, which tends to rotate him/her forward toward the ground. In short: leaning forward means starting to fall forward. If only one point touched the ground, the runner would simply topple, but we have our whole feet to push with and avoid falling. The torque creates a small angular acceleration, because the gravitational force creates an acceleration: one component of it is compensated by the normal force, while the other component points forward, which makes it easier to start running. Gravity does part of the effort for us.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/396879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to Change Coordinate Systems in General Relativity Let me preface by stating that I have no experience with General Relativity. I am working on a project for school that requires a little knowledge of it, so I am hoping to find some help. I do have experience with Special Relativity. On to the question. I know that one can calculate the age of the Universe using the Lambda-CDM model. After making a few simplifying assumptions, one can find the relation $$H\left ( a \right )=\frac{\dot{a}}{a}=H_{0}\sqrt{\frac{\Omega_{m}}{a^{3}}+\frac{\Omega_{rad}}{a^{4}}+\Omega_{\Lambda }}.$$ One can numerically integrate to find $t_{0}$, the age of the Universe. $$t_{0}=\int_{0}^{1}\frac{da}{aH_{0}\sqrt{\frac{\Omega_{m}}{a^{3}}+\frac{\Omega_{rad}}{a^{4}}+\Omega_{\Lambda }}}$$ Now, if I am correct, when performing this calculation for the age, I was working in a comoving coordinate system (the system that expands with the universe, i.e. the rest frame of the CMB). For my project, I want to calculate the age of the Universe in a different coordinate system. More specifically, I would like to calculate the age in a coordinate system that is not expanding with the Universe. I know from other articles on here that I cannot use Special Relativity, but I am unsure how to go about this. If someone could show me how to go about this, keeping in mind my knowledge on this subject is very limited, I would be appreciative.
In short, the same way as before but assume $\Omega_{rad}$ is currently very small $$\frac {\dot a}{a}=H_{0}\sqrt{\Omega_{m}a^{-3} + \Omega_{\Lambda}}$$ which has the solution $$a(t)=(\Omega_{m}/\Omega_{\Lambda})^{1/3}\sinh^{2/3}(t/t_{\Lambda})$$ where $t_{\Lambda}=2/(3H_{0}\sqrt{\Omega_{\Lambda} })$ Set $a=1$ which gives you $t=t_{0}$ the current age of the Universe. See the Lambda-CDM model in Wikipedia.
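For the questioner's original integral, a direct numerical evaluation is a few lines. A sketch with assumed, roughly Planck-like parameter values:

```python
import numpy as np

H0 = 67.7e3 / 3.0857e22 * 3.156e16   # 67.7 km/s/Mpc converted to 1/Gyr
OMEGA_M, OMEGA_RAD, OMEGA_L = 0.31, 9.0e-5, 0.69   # assumed density parameters

a = np.linspace(1e-6, 1.0, 200_001)   # start just above a=0 (radiation era)
integrand = 1.0 / (a * np.sqrt(OMEGA_M / a**3 + OMEGA_RAD / a**4 + OMEGA_L))

# trapezoid rule for t0 = (1/H0) * Int_0^1 da / (a * sqrt(...)), in Gyr
t0 = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(a)) / H0)
```

This gives $t_0 \approx 13.8$ Gyr; the matter-$\Lambda$ analytic solution above reproduces it almost exactly, because $\Omega_{rad}$ only matters at very early times.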
{ "language": "en", "url": "https://physics.stackexchange.com/questions/397086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Non-integer $k$ value in Friedman-Robertson-Walker model? I understand that $k$ describes positive, negative, or no curvature. However, why can't there be, for example, +0.5 (semi-positive) curvature, etc?
The continuum of curvatures does exist, but we find it more convenient to put it elsewhere. The crucial part of the metric that encodes the curvature is a factor $1-K r^2$, where $r$ is the radial coordinate (using any point as the origin), and $K$ is any real number, positive or negative. By dimensional analysis, there is some length $L$ such that either $K=1/L^2$ or $K=-1/L^2$, so we can rewrite our formula as $1 - k (r/L)^2$, where $k$ is just the sign of $K$, or $0$ if $K=0$. The case $k=0$ corresponds to $L$ equal to infinity. This new variable $k$ can only have the values $\pm 1$ or $0$, but that's okay because $L$ still can be any length, so we have the whole range of curvatures. $L$ is known as the radius of curvature of the universe, and a larger $L$ implies a smaller curvature. $k$ determines whether this curvature is positive or negative. Now, and this is a bit of a technical point, we can make that $L$ go away if we measure our coordinate $r$ in units of $L$. In our formula, we can set $x = r/L$ to get just $1-kx^2$. The continuum of curvatures is still there but it is hidden inside of $x$, because the physical interpretation of $x$ depends on $L$: $x$ is how many times $L$ fits in your distance. So if for example $L = 1\ \text{light-year}$, $x=2$ is a distance of $2$ light-years, but if $L = 3$ light-years then $x=2$ is actually a distance of $6$ light-years. Mathematically, the price we pay is that $L$ now shows up elsewhere in the formulas, in the part we use to calculate lengths. To sum up, the curvature can indeed take any value: the closer it is to zero, the closer space is to being flat. The fact that $k$ can only be $\pm 1$ or zero is just a matter of convenience: we use $k$ to label the three qualitatively different scenarios of positive/negative/zero curvature. $L$ just sets the size scale for the universe.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/397279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Bogoliubov Transformation with Complex Hamiltonian Consider the following Hamiltonian: $$H=\sum_k \begin{pmatrix}a_k^\dagger & b_k \end{pmatrix} \begin{pmatrix}\omega_0 & \Omega f_k \\ \Omega f_k^* & \pm \omega_0\end{pmatrix} \begin{pmatrix}a_k \\\ b_k^\dagger\end{pmatrix}\tag{1}$$ for bosonic operators ($+$) or fermionic operators ($-$). The standard way to do Bogoliubov transformations is to use the transformations: $$M_{\text{boson}}=\begin{pmatrix} \cosh(\theta) & \sinh(\theta)\\ \sinh(\theta)&\cosh(\theta)\end{pmatrix},\quad M_{\text{fermion}}=\begin{pmatrix} \cos(\theta) & \sin(\theta)\\ -\sin(\theta)&\cos(\theta)\end{pmatrix}$$ However, in this case these won't work as they will give complex values of $\theta$, and to ensure that our (anti-)commutators remain intact we need $\theta$ to be real. Thus my question is: How do we generalize the Bogoliubov to solve problems of the form of (1)? This question is based of this one: Bogoliubov transformation with a slight twist
There is always a bottom-line answer to this question: write the complex boson/fermion in terms of real bosons/fermions ($a=a_R+i a_I$, etc.), plug it in, and then diagonalize by orthogonal matrices. This is probably the more natural way to do it for particle-non-conserving systems. If one insists on doing it in terms of complex bosons/fermions, it is still possible, but much of the time annoying. This is because one (generically) also needs to transform between the real and the imaginary parts of the variables, which forces one to double the size of the matrix to include $( a,a^{\dagger},b,b^{\dagger})^T$ all together, like the Nambu spinor one uses when solving a superconductor's mean-field Hamiltonian. The annoying part is that one then needs to take care of the redundancy in the matrix components.
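The spectrum of the $2\times2$ block in Eq. (1) can be sanity-checked numerically. For fermions one diagonalizes $H_k$ itself (unitarily); for bosons one diagonalizes $\sigma_3 H_k$, which is what enforces the bosonic commutation relations. A sketch with assumed numbers:

```python
import numpy as np

OMEGA0, DELTA = 2.0, 1.0 + 0.5j     # assumed omega_0 and Omega * f_k

H_fermion = np.array([[OMEGA0, DELTA], [np.conj(DELTA), -OMEGA0]])
H_boson = np.array([[OMEGA0, DELTA], [np.conj(DELTA), OMEGA0]])
sigma3 = np.diag([1.0, -1.0])

# fermions: ordinary unitary diagonalization of the Hermitian block
E_fermion = np.linalg.eigvalsh(H_fermion)

# bosons: eigenvalues of sigma3 @ H give the Bogoliubov spectrum
E_boson = np.sort(np.linalg.eigvals(sigma3 @ H_boson).real)
```

The fermionic energies are $\pm\sqrt{\omega_0^2+|\Omega f_k|^2}$ and the bosonic ones $\pm\sqrt{\omega_0^2-|\Omega f_k|^2}$, real only while $\omega_0 > |\Omega f_k|$, which is the usual stability condition for the bosonic problem.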
{ "language": "en", "url": "https://physics.stackexchange.com/questions/397615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why doesn't the universe collapse under its own gravity? Is the reason the universe doesn't collapse into itself due to gravity because there is an infinite amount of bodies in infinite space, therefore there is an infinite amount of gravitational pull on an infinite amount of objects so it all balances out?
No, the reason the universe doesn't collapse is because it's not dense enough. This can be seen from the Friedmann Equations, the main equations of cosmology. If you work through the derivation you'll find that there's a so-called critical density, $\rho_c = \frac{3H^2}{8\pi G}$ If the universe's average density is above this, then it will collapse under its own gravity into a big crunch. Adding up all the stars, galaxies, etc that we can see gives a density about ~5% of this. Adding dark matter still gives only about ~23%. There isn't enough matter to cause the universe to collapse.
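The critical density is tiny by everyday standards; a sketch of the arithmetic (assuming $H_0 \approx 67.7$ km/s/Mpc as an illustrative value):

```python
import math

G = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
H0 = 67.7e3 / 3.0857e22   # 67.7 km/s/Mpc converted to 1/s

rho_crit = 3.0 * H0**2 / (8.0 * math.pi * G)   # critical density, kg/m^3
hydrogen_per_m3 = rho_crit / 1.67e-27          # rough hydrogen-atom equivalent
```

This comes to roughly 9 × 10⁻²⁷ kg/m³, only about 5 hydrogen atoms per cubic metre, and as the answer notes, the observed density falls short of even that.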
{ "language": "en", "url": "https://physics.stackexchange.com/questions/398026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why does the Hamiltonian define symmetry/invariance? In Sakurai's Modern Quantum Mechanics, in Chapter 4, he effectively states that the operation of rotation or translation, represented by a unitary operator $U$, is customarily called a symmetry operator regardless of whether the physical system itself possesses the symmetry corresponding to $U$. It's a symmetry or invariance of the system only when $U^\dagger H U=H$. Why are symmetries defined with respect to invariance of the Hamiltonian?
In classical mechanics, a conserved quantity has vanishing Poisson bracket with the Hamiltonian. Such quantities become "good quantum numbers" in QM: they commute with $H$, so their simultaneous eigenstates form a complete basis. The evolution operator $e^{-iHt/\hbar}$ also commutes with good quantum numbers, so their probability distribution is unchanged in time. (Thus the connection from Noether's theorem between conservation laws and continuous symmetries survives in QM.) For unitary $U$ commuting with $H$, $U^\dagger=U^{-1}$ gives $U^\dagger H U = U^{-1} H U = H$, which is your equation.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/398144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Are materials which are bad at conducting heat always bad at conducting electricity also? When defining a material's conductivity, we usually consider its conductivity of heat and conductivity of electricity separately. However, I realize that materials like metal conduct both heat and electricity well. In contrast, materials like wood and glass conduct both heat and electricity poorly. Therefore can we conclude that if a material is bad at conducting one kind of "flow of energy", then it will also be bad at conducting another kind of "flow of energy"? Thanks a lot.
Water is an excellent thermal conductor but a poor electrical conductor.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/398266", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Is it possible to harvest the energy from the movements of a satellite in orbit? I was thinking about how energy is harvested on Earth from movements of certain forces like wind and ocean currents. Could similar principles be applied in space? Satellites are virtually in perpetual motion when orbiting the Earth. Is there kinetic energy that can be extracted from this orbital motion and harvested for use on Earth?
While I agree with the other answers as to the physics of the problem, there is at least one practical area where harvesting energy from the orbital motion of satellites could be of practical utility: a space debris removal system. As already mentioned, removing kinetic energy from an orbiting body would cause it to fall back to Earth, but this would be the desired outcome for pieces of space garbage. One could imagine a system that removes kinetic energy (for example, using tethers) from old, defunct satellites, causing their atmospheric reentry. Part of the energy harvested is used to maintain the system in orbit and to maneuver it towards the next piece of debris.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/398397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
If momentum and kinetic energy are related, how loss in energy doesn't cause loss in momentum? Kinetic energy and momentum are related to each other by the following equation: $$K.E.=\frac{1}{2}\frac{\textbf{P}^2}{m} $$ In inelastic collisions the momentum is conserved but the energy isn't. How can this be correct in view of the previous equation? Moreover, if I want to rewrite the previous equation in terms of the change of momentum and the change of kinetic energy, is the following true or not? $$\Delta K.E.=\frac{1}{2}\frac{(\Delta\textbf{P})^2}{m} $$ If that is wrong, what is the true form?
For simplicity's sake, let's restrict ourselves to collisions in 1 dimension, where object 1 collides with object 2. They have momenta $p_1$ and $p_2$ before collision and $p_1'$ and $p_2'$ after collision. We then have $$\begin{align} p_1' + p_2' &= p_1 + p_2,\tag{1}\\ K' = \frac{p_1'^2}{2m_1} + \frac{p_2'^2}{2m_2} &= \frac{p_1^2}{2m_1} + \frac{p_2^2}{2m_2} + \Delta K = K + \Delta K,\tag{2} \end{align} $$ where $\Delta K = K' -K$ is the change in kinetic energy. Note that Eq. (1) by itself does not have a unique solution: only the total momentum is conserved, not the individual momenta. Each solution $(p_1',p_2')$ will correspond with a different value $\Delta K$, and there will be only 1 specific solution for which the collision is elastic, i.e. $\Delta K=0$. It is instructive to write the solutions in terms of the so-called coefficient of restitution $$ C_R = \frac{v_2' - v_1'}{v_1 - v_2} = \frac{m_1p_2' - m_2p_1'}{m_2p_1 - m_1p_2}.\tag{3} $$ Evidently, $v_1 > v_2$, otherwise there would be no collision. Also, $v_2' > v_1'$, because object 1 cannot get past object 2. So $C_R\geqslant 0$. The value $C_R = 0$ corresponds with a perfectly inelastic collision, where both objects stick together after they collide. Using Eq. (1) we find $$\begin{align} 1 + C_R &= \frac{m_1(p_2'-p_2) + m_2(p_1-p_1')}{m_2p_1 - m_1p_2}\\ &=\frac{(m_1+m_2)(p_1-p_1')}{m_2p_1 - m_1p_2}\\ &=\frac{(m_1+m_2)(p_2'-p_2)}{m_2p_1 - m_1p_2}, \end{align} $$ so that the possible solutions are of the form $$\begin{align} p_1' &= p_1 - (1 + C_R)\frac{m_2p_1 - m_1p_2}{m_1+m_2},\\ p_2' &= p_2 + (1 + C_R)\frac{m_2p_1 - m_1p_2}{m_1+m_2}. \end{align} $$ Next we derive the relation between $C_R$ and $\Delta K$.
First, note that $$\begin{align} 2m_1m_2(m_1 + m_2)K &= (m_1+m_2)(m_2p_1^2 + m_1p_2^2)\\ &= m_1m_2(p_1 + p_2)^2 + (m_2p_1 - m_1p_2)^2, \end{align} $$ and $$\begin{align} 2m_1m_2(m_1 + m_2)K' &= m_1m_2(p_1' + p_2')^2 + (m_1p_2' - m_2p_1')^2\\ &=m_1m_2(p_1 + p_2)^2 + (m_1p_2' - m_2p_1')^2, \end{align} $$ so that $$ 2m_1m_2(m_1 + m_2)\Delta K = (m_1p_2' - m_2p_1')^2 - (m_2p_1 - m_1p_2)^2. $$ We plug this into Eq. (3), and obtain $$\begin{align} C_R &=\sqrt{1+\frac{2m_1m_2(m_1 + m_2)}{(m_2p_1 - m_1p_2)^2}\Delta K}, \end{align} $$ or alternatively, $$ \Delta K = (C_R^2 - 1)\frac{(m_2p_1 - m_1p_2)^2}{2m_1m_2(m_1 + m_2)}. $$ Since $\Delta K\leqslant 0$, we get $0\leqslant C_R\leqslant 1$. For elastic collisions, $\Delta K = 0$ and $C_R= 1$. To summarize, for each value of $C_R$ between $0$ and $1$ we get a possible solution, each corresponding with a different value of $\Delta K$.
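The formulas above are easy to check numerically. A short sketch (the masses, momenta, and restitution coefficient are arbitrary example values):

```python
def collide_1d(m1, m2, p1, p2, CR):
    """Post-collision momenta for a 1-D collision with restitution coefficient CR."""
    J = (1 + CR) * (m2 * p1 - m1 * p2) / (m1 + m2)  # momentum transferred
    return p1 - J, p2 + J

m1, m2, p1, p2, CR = 2.0, 3.0, 8.0, 3.0, 0.5
p1p, p2p = collide_1d(m1, m2, p1, p2, CR)

# total momentum is conserved for any CR ...
assert abs((p1p + p2p) - (p1 + p2)) < 1e-12

# ... while the kinetic-energy change matches the formula derived above
K  = p1**2 / (2 * m1) + p2**2 / (2 * m2)
Kp = p1p**2 / (2 * m1) + p2p**2 / (2 * m2)
dK = (CR**2 - 1) * (m2 * p1 - m1 * p2)**2 / (2 * m1 * m2 * (m1 + m2))
assert abs((Kp - K) - dK) < 1e-12
```

For $C_R = 1$ the same code gives $\Delta K = 0$, the elastic case.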
{ "language": "en", "url": "https://physics.stackexchange.com/questions/398685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is there a local source of Hawking radiation? Suppose a black hole is formed at time $t_0$, and after that even more energy falls in, in what we are calling mass shells. I'm inclined to believe that the initial black hole starts radiating before it continues growing, and that each shell of mass falling in changes that pre-Hawking radiation. I say pre-Hawking because we need not assume that the radiation coming out is thermal. However, it's not clear to me whether there is a way to locate the source of the radiation, i.e. how much energy is drawn from each mass shell. I may rephrase the question by asking this: Is there a way to know how much of the radiated energy comes from each of the different mass shells as sources of pre-Hawking radiation?
The Schwarzschild solution is a vacuum solution of Einstein's equations. So a black hole consists of vacuum; its mass M is in the singularity. Therefore there are no "mass shells". Some people talk about shell observers, who are stationary at a constant r-coordinate outside the event horizon, but such shells are definitely not a source of the Hawking radiation. The Hawking radiation is black-body radiation with a temperature proportional to 1/M, and it can be thought of as being emitted at or very close to the event horizon. I am not sure what you mean by "different mass shells as sources of pre-Hawking radiation." The Hawking radiation does not depend on the history of a black hole; it depends only on its current mass M.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/398839", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Population II star orbits OK, so Population II stars are usually found in globular clusters, and we can consider orbits in a static, spherically symmetric gravitational field. The orbits are randomly oriented. Would these be considered collisional or collisionless orbits?
They are pretty collisionless. As a rough estimate, the time to a close encounter within radius $r$ is $\tau \approx 1/(\pi r^2 v \rho)$ where $v$ is the average velocity ($\approx$ 20 km/s) and $\rho$ the average number density ($\approx$ 0.4 per cubic parsec on average, 100-1000 times more in the core). So for <1 AU encounters that gives $\tau\approx 10^{15}$ years, while the timescale is 1000 times less in the core - $10^{12}$ years, still really long. Still, 100 AU encounters happen much more often, and a lot of weak interactions sum up, so the cluster relaxes in a few hundred million years anyway. To complicate things slightly, three-body encounters can generate binaries, and "hard" binaries (binding energy bigger than the average kinetic energy in the cluster) tend to become harder when interacting randomly with other stars ("Heggie's law"). But actual stellar mergers likely mostly happen due to hard binaries evolving into giant stars and dissipating their orbital velocity, rather than actual random close encounters.
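The order-of-magnitude estimate above is easy to reproduce (using the approximate values quoted, not precise cluster data):

```python
import math

AU   = 1.496e11   # metres per astronomical unit
pc   = 3.086e16   # metres per parsec
year = 3.156e7    # seconds per year

v   = 20e3          # average stellar speed, m/s
rho = 0.4 / pc**3   # average number density, stars per m^3
r   = 1 * AU        # encounter radius of interest

tau = 1 / (math.pi * r**2 * v * rho)  # close-encounter timescale, seconds
tau_years = tau / year                # comes out near 1e15 yr, as quoted
```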
{ "language": "en", "url": "https://physics.stackexchange.com/questions/399193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Zero-level of combination of $1/r$ and $r^2$ potential I am solving a problem which involves a central big mass $M$ and around it a spherically symmetrically distributed mass of constant density $\rho$. The force on a mass a distance $r$ from the centre can be shown to be: $$ F = \frac{-GMm}{r^2} - \frac{4\pi\rho Gmr}{3}. $$ Hence the potential: $$ U = - \int F \cdot dr = Gm\left( \frac{2\pi\rho r^2}{3} - \frac{M}{r} + C \right)$$ where $C$ is an integration constant. My question is: what should $C$ be? For the $1/r$ type of potential it is customary to have $U(\infty) = 0$, whereas for the $r^2$ type we commonly have $U(0) = 0$. Is there any smart choice here? Maybe where the forces sum to zero? Leaving it in the form above I have the feeling that the two terms have different zero points, is that okay?
The choice of the zero point for the potential energy is entirely arbitrary. In practice we choose it in a way that makes our calculation simple. In this case your test mass is going to oscillate around the centre of the mass distribution, so it will start at rest at some distance $r_\text{max}$, fall through the centre and out to the same distance $r_\text{max}$ on the other side, where it will come to rest again. I would set the potential to be zero at the distance $r_\text{max}$. This makes the total energy zero, so as the particle falls in, the sum of the potential and kinetic energy remains zero.
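In the notation of the question, this choice fixes the integration constant as $C = M/r_\text{max} - 2\pi\rho r_\text{max}^2/3$. A quick numerical check (all parameter values are arbitrary placeholders in natural units):

```python
from math import pi

def U(r, G, m, M, rho, C):
    """Potential energy U(r) = G*m*(2*pi*rho*r**2/3 - M/r + C) from the question."""
    return G * m * (2 * pi * rho * r**2 / 3 - M / r + C)

G, m, M, rho, r_max = 1.0, 1.0, 5.0, 0.1, 4.0  # placeholder values
C = M / r_max - 2 * pi * rho * r_max**2 / 3    # chosen so that U(r_max) = 0

U_rmax = U(r_max, G, m, M, rho, C)   # ~0: total energy vanishes at the turning point
U_inner = U(1.0, G, m, M, rho, C)    # negative everywhere inside r_max
```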
{ "language": "en", "url": "https://physics.stackexchange.com/questions/399343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why is it easier to apply torque via short bursts There are two popular tools I use to apply torque to a fastener (bolt, screw, etc.): an impact driver and a drill. The drill is a motor hooked up to some gears and eventually a bit that fits over the fastener. If I want to apply 40 ft·lb of torque, I feel as though I have to brace myself for that amount of torque, like by using both hands and my body. The impact driver is a similar motor to the drill, but there is a spring-loaded mechanism that applies the same amount of energy, but in short bursts rather than continuously. I can easily apply 40 ft·lb of torque with my wrist barely moving; using two hands or bracing myself doesn't really make a difference. Why is this the case? Why is there no equivalent force on my wrist when using the impact driver? This may be similar to using a hammer to drive a fastener into the ground: if I generate force by swinging very fast with a hammer, why isn't there an equivalent force that lifts me off the ground?
To get something to turn you need to apply a minimum amount of torque to overcome friction. The impulse of a collision (e.g. swinging a hammer at the lever) enables you to apply a high torque for a short time, even when the maximum steady torque you can provide with a constant push is less than the required minimum torque. The same principle of expending additional energy to increase the maximum applied force is used in the pile driver and the jackhammer. After a few blows the friction force has reduced sufficiently to enable you to continue applying a constant force. This is more efficient than swinging a hammer. The mass of the hammer is concentrated at the head. The impact then occurs close to the centre of percussion of the hammer. Minimum reactive shock is felt at the pivot (your hand) when impact occurs at the centre of percussion.
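A rough numerical illustration of why an impulsive blow reaches forces far beyond a steady push. The hammer mass, swing speed, stopping distance, and push strength are made-up but plausible numbers:

```python
# Peak force of a hammer blow, estimated from the work-energy theorem:
# (1/2) m v^2 = F_avg * d   =>   F_avg = m v^2 / (2 d)
m = 1.0    # hammer head mass, kg (assumed)
v = 5.0    # head speed at impact, m/s (assumed)
d = 0.002  # stopping distance of the head, m (assumed)

F_impact = m * v**2 / (2 * d)  # force averaged over the blow, N
F_push   = 200.0               # a strong steady push, N (assumed)

ratio = F_impact / F_push      # the brief blow exceeds the steady push many times over
```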
{ "language": "en", "url": "https://physics.stackexchange.com/questions/399457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
According to Conservation of Momentum, a gun in a sealed box should not have recoil? According to the law of Conservation of Momentum, there is no way to increase the momentum of a system, except by momentum transfer from interactions with the external. If I fire a rifle while sitting on a go kart, the go kart is going to go backwards but the bullet goes forwards, conserving the momentum. Now let's say I construct a long 1 inch thick steel box (a few meters long), position the gun's butt against the back of it, and fire the gun electronically. Would we not get the box flying backwards still (at least until the bullet gets lodged in the front of the box)? Even if the bullet burying itself in the metal at the end of the box causes another force on the box in the opposite direction of the initial kick, haven't we momentarily broken the conservation of momentum?
It seems you completely understand this problem, except between firing and impact, when: $$ \vec p_{bullet} = -(\vec p_{gun} + \vec p_{box}) $$ so the sum of all three remains $\vec 0$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/399642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
How is the centripetal force of a car when turning distributed over the wheels? The centripetal force can easily be calculated as $F = Mv^2/R = Mv^2\sin(\delta)/L$. But how is this force distributed over the (front and rear) wheels? My initial thought was to just divide it by 4 for each wheel, but when you turn your front wheels 90 degrees, there will be no force on the rear wheels. So if simply dividing by 4 is wrong, what is the distribution in reality? Is it also safe to assume the forces on the front wheels are equal to each other, and likewise for the rear wheels?
You cannot simply divide by 4, no. If you set up that formula for each wheel, then you'll have to take into account that the speeds and distances are different. The one thing all four wheels have in common is the angular velocity $\omega$. Even at a 90 degree turn, the rear and front wheels spin equally fast (degrees per second) around the rotation point. Otherwise the car would be breaking apart. The angular velocity relation $$v=\omega r$$ helps you calculate the linear speed $v$ for each individual wheel, since you know the distances $r$. With this $v$ and $r$ per wheel you can calculate the centripetal acceleration for each wheel. I would then divide the mass by 4. That would be a necessary assumption, namely that each wheel "carries" an equal share of the mass. Then the centripetal force on each wheel can be calculated. I have not done the calculations, but I would expect all four forces to be different. And varying differently with the turning angle of the front wheels (some will be cosine and some sine to the angle). For a 90 degree turn you will have a rotation point located at the inner rear wheel. That wheel spins around, but doesn't move linearly. It has zero speed $v$ and distance $r$. According to your formula (if you plug in the relation $r=v/\omega$ in place of $r$), zero speed indeed gives zero centripetal force. Is it also safe to assume the forces on the front wheels are equal to each other, and also the same for the rear wheels? If the turn is big, then differences in the distance $r$ become negligible. The differences in speed become negligible as well. And then all forces are more or less equal, yes.
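The per-wheel bookkeeping described above can be sketched as follows, using one common simplified geometry in which the turn centre lies on the extension of the rear-axle line (the wheelbase, track width, steering angle, mass, and angular velocity are illustrative numbers, not real car data):

```python
import math

L, w  = 2.5, 1.5          # wheelbase and track width, m (assumed)
delta = math.radians(30)  # front-wheel steering angle (assumed)
M     = 1200.0            # car mass, kg, split equally over 4 wheels as assumed above
omega = 0.5               # angular velocity about the turn centre, rad/s (assumed)

# distance of the turn centre from the rear-axle midpoint
d = L / math.tan(delta)

r = {  # distance of each wheel from the turn centre
    "rear_inner":  d - w / 2,
    "rear_outer":  d + w / 2,
    "front_inner": math.hypot(d - w / 2, L),
    "front_outer": math.hypot(d + w / 2, L),
}

# v = omega * r and F = (M/4) * omega**2 * r for each wheel
F = {name: (M / 4) * omega**2 * ri for name, ri in r.items()}
```

As expected from the answer, all four forces come out different, smallest at the inner rear wheel and largest at the outer front wheel.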
{ "language": "en", "url": "https://physics.stackexchange.com/questions/399761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
How can we make a coherent laser? While studying fundamental physics, my book says that stimulated emission produces two coherent photons, so the whole family of such photons will be coherent, and a laser can be built from these coherent photons. But if there are two "first photons" that are not coherent with each other, then after some stimulated emissions there will be two families of photons inside the laser. My question is: how can we make two incoherent families of photons coherent, and so make the real lasers that are commonly used in our daily life? Thank you.
Laser action is due to stimulated emission. An atom in the ground state absorbs energy and jumps to a higher state. If that state is a metastable state, then the stimulating photon and the emitted photon always have the same phase. So the emitted light is coherent and has the same wavelength as well. If two photons are not coherent, the emitted light cannot have the same properties as laser light.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/399897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Why isn't the Fermi temperature zero? The Fermi temperature is defined by $$ k_BT_f = E_f$$ But the Fermi energy is the energy, at $T=0$, of the highest occupied energy level for electrons. So why is the Fermi temperature defined to be $\neq 0$? What temperature does $T_f$ measure? Of what is it the temperature?
In a semi-classical description, if we think of the temperature as being related to the kinetic energy and therefore the velocity of the electrons, the Pauli exclusion principle disallows the electrons from all being in the same state, meaning that the vast majority of electrons must have nonzero kinetic energy (since they can't all be in zero-kinetic-energy states, or more properly they cannot all have zero momentum). Therefore even at zero temperature (the lowest energy state of the entire system) the electrons are still moving; the energy of the system is not zero. The Fermi energy is the energy of the fastest electrons, and the Fermi temperature is the temperature that corresponds to this energy by the same formula you provided.
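As a concrete number, taking copper's Fermi energy, roughly 7 eV, as a representative value:

```python
k_B = 1.380649e-23     # Boltzmann constant, J/K
eV  = 1.602176634e-19  # joules per electronvolt

E_F = 7.0 * eV   # Fermi energy of copper, approximately
T_F = E_F / k_B  # Fermi temperature, on the order of 8e4 K

# Far above room temperature: the electron gas in a metal is effectively
# "cold" (degenerate) even at 300 K.
```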
{ "language": "en", "url": "https://physics.stackexchange.com/questions/400072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Finding translational acceleration Given $\alpha = 2.44\ \mathrm{rad/s^2}$, $\omega = (2.44t+8.35)\ \mathrm{rad/s}$, $\theta = (1.22t^2+8.35t)\ \mathrm{rad}$. Find an expression for the magnitude of the translational acceleration at $t=1.82\ \mathrm{s}$, given that the radius of the circle is $r$. What I did was use the fact that $a = r\omega^2 = r[(2.44)(1.82)+8.35]^2 = 164r$. This answer is correct; however, I was just wondering about something. Isn't the translational acceleration also defined as $a=r\alpha$, which follows from $v=r\omega$ by taking the time derivative of both sides? In this case, $a = 2.44r$, as $\alpha$ is a constant angular acceleration. So is the answer $a = 164r$ or $a=2.44r$? Am I seriously overlooking something here?
Neither result is correct, but both are part of the solution! The angular motion is given by ($r$ is the radius and $\theta$ is a function of time): \begin{align*} \vec r &= r \begin{pmatrix} \cos(\theta) \\ \sin(\theta) \end{pmatrix} \end{align*} From this we define $\omega = \partial_t \theta$, $\alpha = \partial_t \omega$. The translational acceleration on the other hand is $\vec a = \partial_t^2 \vec r$. By doing this derivative and inserting the definition of $\omega$ and $\alpha$ you will arrive at the general result (and see how it relates to the formulas you used).
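Carrying out that derivative yields a centripetal component $r\omega^2$ and a tangential component $r\alpha$, so the magnitude is $a = r\sqrt{\alpha^2 + \omega^4}$; both of the asker's expressions are components of the full answer. With the numbers from the question:

```python
import math

alpha = 2.44             # rad/s^2
t     = 1.82             # s
omega = 2.44 * t + 8.35  # rad/s at t = 1.82 s

a_centripetal = omega**2  # centripetal component, per unit radius r
a_tangential  = alpha     # tangential component, per unit radius r
a_total = math.hypot(a_centripetal, a_tangential)  # magnitude, per unit radius r
```

Here the centripetal term dominates, so the magnitude is close to the $164r$ of the question, but in general both components contribute.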
{ "language": "en", "url": "https://physics.stackexchange.com/questions/400163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why doesn't a polarization filter dim the light completely? In a circle there's an infinite number of degrees (e.g. 0 deg, 0.00000000000...1 deg, etc.). In grade school we are taught that there are 360 degrees in a circle. The landscape behind my window is an incoherent light source, so it randomly emits photons with all polarization directions. When I put a polarizer between the landscape and my eye... I can still see everything. But how is that possible if the polarizer transmits only $1/\infty$ of all photons (since there's an infinite number of polarization directions)? Even if we assume that there are just 360 degrees in a circle... the landscape behind my window is not 360 times darker when I observe it through the filter (e.g. polarizing glasses). Why doesn't the polarizer dim the light severely?
If you used a polarizing filter designed for ultraviolet light, you would see that visible light is dimmed more than with a polarizing filter designed for visible light. The ratio of reflected and absorbed light to the light which goes through the filter depends on the slit width. If 50% of monochromatic light goes through the filter, this means that for some orientation of the filter, the light with polarization directions from 0° to 90° and from 180° to 270° goes through. Behind the filter, all the light is polarized in the same direction. From this you can conclude that the light's electric and magnetic field components get rotated and aligned. To test the last conclusion, one can put two filters behind one another. If one filter is oriented at 90° to the other filter, no light goes through. But now place a third filter between the others and orient this filter at 45°. Will you see light going through? If yes, does this prove that light is rotated by a well-designed filter?
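The three-filter experiment described at the end can be worked out with Malus's law, $I = I_0\cos^2\Delta\theta$ applied at each successive filter. This is a standard textbook computation, assuming ideal polarizers and initially unpolarized light:

```python
import math

def transmitted(angles_deg, I0=1.0):
    """Intensity of initially unpolarized light after a chain of ideal polarizers."""
    I = I0 / 2  # the first polarizer passes half of unpolarized light
    for prev, cur in zip(angles_deg, angles_deg[1:]):
        I *= math.cos(math.radians(cur - prev))**2  # Malus's law per stage
    return I

crossed     = transmitted([0, 90])      # crossed filters: essentially nothing through
with_45_deg = transmitted([0, 45, 90])  # inserting a 45 deg filter lets 1/8 through
```

This also answers the original question: an ideal polarizer passes half of unpolarized light, not $1/360$ of it, because each photon's polarization is projected onto the filter axis rather than being accepted or rejected by exact angle matching.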
{ "language": "en", "url": "https://physics.stackexchange.com/questions/400387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Ginzburg-Landau theory for first-order phase transitions? In AlQuemist's answer to this PSE question:223892 and Thomas' recent answer to one of my questions. There is a mention of the application of the Ginzburg criterion and in general the Ginzburg-Landau theory restricted to second order-phase transitions. This appears to be a consistent theme throughout the literature. I cannot see why this restriction is in place - i.e. why we don't consider the Ginzburg-Landau theory and Ginzburg criterion to hold in the case of first-order phase transitions. As far as I can tell everything we do with these involves either taking a saddle point approximation or Gaussian approximation around the saddle point - either of which seem to not actually rely on the nature of the transition at all. Thus my question is; Why is this restriction to second order phase transitions in place in the context of Ginzburg-Landau theory and the Ginzburg criterion?
The usual Landau-Ginzburg potential can be slightly generalized to $$W(\phi) = t \phi^2 + a \phi^4 + \phi^6.$$ The phase transition is continuous (at $t = 0$) if $a > 0$, and discontinuous if $a<0$ (in which case it occurs at $t = a^2/4 > 0$, where the minimum at $\phi \neq 0$ becomes degenerate with the one at $\phi = 0$). The point $a = 0, t = 0$ is a multicritical (tricritical) point. This basic observation begins to explain why it's so hard in practice to distinguish between a continuous and a discontinuous phase transition, since it just comes down to the sign of one phenomenological parameter. On the other hand, if the phase transition is not continuous and there is no diverging correlation length, it's not so easy to justify something like mean-field theory (or even effective field theory), since lattice-scale fluctuations could be very important.
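The qualitative difference between the two signs of $a$ can be seen by numerically tracking the global minimum of $W$ as $t$ is lowered. This is a brute-force grid sketch, not a substitute for the analytic minimization:

```python
import numpy as np

def order_parameter(t, a, phis=np.linspace(0.0, 2.0, 4001)):
    """Location of the global minimum of W(phi) = t*phi^2 + a*phi^4 + phi^6, phi >= 0."""
    W = t * phis**2 + a * phis**4 + phis**6
    return phis[np.argmin(W)]

ts = np.linspace(0.5, -0.5, 201)
phi_cont = [order_parameter(t, a=+1.0) for t in ts]  # grows continuously from 0
phi_disc = [order_parameter(t, a=-1.0) for t in ts]  # jumps discontinuously

max_step_cont = np.max(np.abs(np.diff(phi_cont)))  # small: second-order behaviour
max_step_disc = np.max(np.abs(np.diff(phi_disc)))  # O(1): first-order jump
```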
{ "language": "en", "url": "https://physics.stackexchange.com/questions/400532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Is there something similar to Bernoulli effect with electricity? There are many parallels between fluid dynamics and electricity. Is there a thing similar to a Bernoulli effect with electricity? For example, would you see a decrease in voltage as the conductor narrows, and an increase as it widens?
No, I don't think so. Electrons move differently through conductors than fluids move through a given volume, you cannot apply Bernoulli's principle. In fact, a constriction in a conductor leads to an additional spreading resistance and a wider conductor of the same material will have a smaller resistance.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/400726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Acoustic, optical, ferromagnetic and antiferromagnetic spin-waves? In the context of spin waves I have seen the following words as descriptors*:

* Acoustic
* Optical
* Ferromagnetic
* Antiferromagnetic

which I have seen used together, e.g. "acoustic ferromagnetic spin waves", as well as individually, e.g. "antiferromagnetic spin wave". I am assuming that acoustic means the dispersion relation goes to zero (see my related question) as $k\rightarrow 0$, whilst optical means it does not. But I have yet to find any clear-cut definition of what the qualifiers ferromagnetic and antiferromagnetic mean. Please can someone explain this to me? *The source is not in the public domain, but a quick search on your favorite search engine should bring up sources with the individual terms.
A ferromagnetic spin wave belongs to a ferromagnet, where there is a net magnetic moment, while an antiferromagnetic spin wave belongs to an antiferromagnet, where there is no net magnetic moment. Does this answer your question?
{ "language": "en", "url": "https://physics.stackexchange.com/questions/400820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can there be general relativity without special relativity? Can General Relativity be correct if Special Relativity is incorrect?
Where gravity is weak, the metric of spacetime is flat and SR holds. For example, as $r \to \infty$, the Schwarzschild metric of GR approaches the flat metric of SR in spherical coordinates: $$ds^2 = c^2\,dt^2 - dr^2 - r^2\,d\theta^2 - r^2\sin^2\theta\,d\varphi^2.$$ So "GR without SR" would require that there is never no gravity or weak gravity anywhere, which is not the case.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/400910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 8, "answer_id": 6 }
Christoffel symbol derivation in book by Wald In chapter 3 of Wald's General Relativity he starts by defining a covariant derivative $\nabla$ as a map on a manifold M from tensor fields $\mathscr{T}(k,l) \to \mathscr{T}(k,l+1)$ plus some required properties (linearity, Leibniz rule, etc.). He then goes on to show that for any two derivatives $\nabla, \tilde{\nabla}$, their difference (applied to a one-form) can be expressed by a tensor as $$ \nabla_a \omega_b - \tilde{\nabla}_a \omega_b = C^c_{ab} \omega_c. $$ What I don't understand is that he says we choose $\tilde{\nabla}$ as the usual partial derivative $\partial$ and call the tensors $C^c_{ab} = \Gamma^c_{ab} $ the Christoffel symbols. I thought the partial derivative does not satisfy the required transformation properties of the covariant derivative hence I can't substitute it for $\tilde{\nabla}$. Another minor issue is that he calls $C^c_{ab}$ a tensor field while he also says it doesn't transform according to the tensor transformation law. What does he then mean by that? That it is a multilinear map?
Wald states, in eq. 3.1.14, that the difference between two distinct derivative operators is characterized exactly by the tensor field $C^c_{ab}$. Schematically, he is saying that $$ \nabla T = \tilde{\nabla}T + CT $$ Where $\nabla$ and $\tilde{\nabla}$ are distinct derivative operators. He now chooses that one of the derivative operators is the regular partial derivative, i.e. he demands that $\tilde{\nabla} =\partial$, in order to find out how the regular partial derivative differs from the covariant derivative. Note that locally in some coordinate patch, $\partial$ does fulfill all of his 5 requirements. As for the additional question on how Christoffel symbols transform, you can find many places on SE that have answered it, i.e. Under what representation do the Christoffel symbols transform?
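In a coordinate basis the two statements combine into the familiar formulas below. These are standard results, written here in one common index convention rather than Wald's abstract-index notation:

```latex
% covariant derivative of a one-form in terms of partials and Christoffel symbols
\nabla_a \omega_b = \partial_a \omega_b - \Gamma^{c}{}_{ab}\,\omega_c

% the inhomogeneous (non-tensorial) transformation law of the Christoffel symbols
\Gamma^{c'}{}_{a'b'}
  = \frac{\partial x^{c'}}{\partial x^{c}}
    \frac{\partial x^{a}}{\partial x^{a'}}
    \frac{\partial x^{b}}{\partial x^{b'}}\,
    \Gamma^{c}{}_{ab}
  + \frac{\partial x^{c'}}{\partial x^{c}}
    \frac{\partial^{2} x^{c}}{\partial x^{a'}\,\partial x^{b'}}
```

The second, inhomogeneous term is exactly why $\Gamma^c_{ab}$ fails the tensor transformation law, while the difference of two connections (Wald's $C^c_{ab}$) is a genuine tensor: the inhomogeneous terms cancel in the difference.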
{ "language": "en", "url": "https://physics.stackexchange.com/questions/401142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What is the difference between Quantum Dots and nanoparticles? As far as I understand, both quantum dots and nanoparticles are mainly characterised by the fact that all three dimensions are on the nanoscale. Quantum dots are always said to be made from a semiconductor, while nanoparticles can be anything (dielectric, metal, semiconductor). Is a semiconducting nanoparticle always called a quantum dot, or is there a stricter definition of both?
There is no strict definition, although, as you suspect, quantum dots are a subset of the more generic "nanoparticles". The reason "quantum dot" normally refers to a semiconducting system is that it evokes a certain group of physical properties used in a certain way: the band states being quantized, for example, which is useful for engineering the density of states for light emission/absorption. E.g. if you want to make a laser, you might start with a semiconducting system with an appropriate band gap and make quantum dots out of it to enhance the internal quantum efficiency. A metallic nanoparticle that's small enough to show quantum confinement effects on the conduction electrons could, I guess, be referred to as a quantum dot. But people might be momentarily confused.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/401393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why do objects float in liquids denser than themselves? Why do objects float in liquids denser than themselves? I know that a balloon floats on water because it has air in it, but why?
If an object is completely immersed in a liquid denser than itself, then the resulting buoyant force will exceed the weight of the object, because the weight of the liquid displaced by the object is greater than the weight of the object. As a result, the object cannot remain completely submerged, and this causes the object to float. Look up buoyant force to learn more about this.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/401502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How can a body have two axes of rotation at the same time? I'm not concerned with the rotation of a body about two simultaneous axes, but with how we choose the axis. While going through pure rolling, I have observed that there are two axes of rotation: one passing through the center of mass and the other through the point in contact with the ground. My concern is: how can there be any axis of rotation through the point of contact, whereas I very well find that the body does not rotate about that axis, that is, it very well rotates only through the center of mass.
... it very well rotates only through the center of mass. This is not the case. In some applications, it's most convenient to describe the motion of a rigid body as a combination of translation by the center of mass and rotation about an axis passing through the center of mass. You appear to have fallen into the trap of thinking that this is the only way to describe the motion of the rigid body. In other applications, it's even more convenient to describe the motion as a combination of translation of some other central point and rotation about an axis passing through that point. A rigid body can be viewed as having an infinite (uncountably infinite) number of axes of rotation. Suppose you know the velocity of some central point $c$ of the object and the object's angular velocity. The velocity of some other point of the rigid body $p$ is $\boldsymbol v_p = \boldsymbol v_c + \boldsymbol \omega \times (\boldsymbol r_p - \boldsymbol r_c)$. There's nothing special about the center of mass in this construction. You can pick any arbitrary point on, inside, or even outside the body as the central point. Another way to look at it: Angular velocity is a free vector. It's the same for every point inside or on the rigid body.
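For pure rolling, this formula immediately shows why the contact point is a legitimate instantaneous axis: its velocity is zero. A quick numerical check with the rolling-without-slipping condition $v_c = \omega R$ (wheel radius and spin rate are arbitrary example values; the wheel rolls in $+x$ with $y$ up):

```python
import numpy as np

R, w0 = 0.3, 10.0                    # wheel radius (m) and spin rate (rad/s)
omega = np.array([0.0, 0.0, -w0])    # clockwise spin for rolling in +x
v_c   = np.array([w0 * R, 0.0, 0.0]) # centre velocity from v_c = omega * R

def v_point(offset):
    """Velocity of a body point at r_p = r_c + offset:  v_p = v_c + omega x offset."""
    return v_c + np.cross(omega, offset)

v_contact = v_point(np.array([0.0, -R, 0.0]))  # point touching the ground: zero
v_top     = v_point(np.array([0.0,  R, 0.0]))  # top of the wheel: twice v_c
```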
{ "language": "en", "url": "https://physics.stackexchange.com/questions/401773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Stern-Gerlach experiment with a magnetic field in between An experiment is set up so that a beam of spin-1/2 particles is prepared with $S_{z} = \hbar/2$; it then passes through a constant magnetic field $\textbf{B} = B_{0}\textbf{e}_{x}$ with velocity $v_{0}$ for a distance $L$ before it passes an additional Stern-Gerlach apparatus in which only beams with $S_{z} = -\hbar/2$ can pass. I've made a quick sketch of the installation. Now I'm wondering if my thought process is correct. We're searching for the percentage of the initial beam that passes through the last apparatus. The first apparatus blocks 50% of the incoming beam. Inside the magnetic field, I get $$ \textbf{H} = -\gamma B_{0}\textbf{S}_{x} $$ Now through the Schrödinger equation I get $$ i\hbar \frac{\partial \chi}{\partial t} = \textbf{H} \chi $$ $$ \chi(t) = \begin{bmatrix} a e^{i\gamma B_{0}t/2} \\ b e^{i\gamma B_{0}t/2} \\ \end{bmatrix} $$ Intuitively $\chi(0) = \chi_{+}^{(z)}$ since that's what we get after we pass the first apparatus, but this becomes a problem since the probability of getting a spin-down beam after the magnetic field becomes 0. $$ \chi(t) =\begin{bmatrix} e^{i\gamma B_{0}t/2} \\ 0 \\ \end{bmatrix} $$ $$ c_{-}^{(z)} = \chi_{-}^{(z)}\chi(L/v_{0}) =[0 \:\: 1]\begin{bmatrix} e^{i\gamma B_{0}(L/v_{0})/2} \\ 0 \\ \end{bmatrix} = 0 \implies P = |c_{-}^{(z)}|^{2} = 0 $$ I'm quite certain that there are errors in my calculations since I'm unfamiliar with this field and would find it very helpful if you could point those out for me.
HINT: your solution to Schrödinger's equation is wrong. Try it out in terms of its components, remembering that $\textbf{H}$ is a matrix.
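Once the componentwise solution is done, the standard result is precession about $x$: starting from $\chi(0)=\chi_+^{(z)}$, the spin-down probability oscillates as $\sin^2(\gamma B_0 t/2)$. A small numerical check of that claim, in units with $\hbar=1$ and an arbitrary value for $\gamma B_0$:

```python
import numpy as np

gB0 = 1.3  # gamma * B0, arbitrary units (hbar = 1)
sx  = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli x; here H = -(gB0/2) * sx

def chi(t):
    """exp(-i H t) on spin-up-along-z, using exp(i a sx) = cos(a) I + i sin(a) sx."""
    a = gB0 * t / 2
    U = np.cos(a) * np.eye(2) + 1j * np.sin(a) * sx
    return U @ np.array([1.0, 0.0], dtype=complex)

t = 0.7                     # stand-in for the transit time L / v0
P_down = abs(chi(t)[1])**2  # matches sin^2(gB0 * t / 2), not zero
```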
{ "language": "en", "url": "https://physics.stackexchange.com/questions/402025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to properly calculate off-diagonal terms in covariance matrix for entangled Gaussian state? I would like to ask how to properly calculate the off-diagonal terms in the covariance matrix for an entangled Gaussian state. E.g. from https://arxiv.org/abs/0810.0534v1 we have an entangled Gaussian state in the following form $$ |\psi\rangle = \sum_{n=0}^{\infty}\sqrt{\frac{N^n}{(N+1)^{n+1}}}|n\rangle_A|n\rangle_B $$ and the covariance matrix $$ V=\frac{1}{4} \left( \begin{array}{cccc} S & 0 & C & 0 \\ 0 & S & 0 &-C \\ C & 0 & S & 0 \\ 0 &-C & 0 & S \\ \end{array} \right) $$ where $S=2N+1$ and $C=2\sqrt{N(N+1)}$. Using the definition of the covariance matrix $$ V_{ij}=\frac{1}{2}Tr[\hat \rho\{\hat q_i;\hat q_j\}] $$ (assuming zero displacement) where $\hat \rho$ is the appropriate density operator and the vector $\hat q=(\hat X_A, \hat P_A, \hat X_B, \hat P_B)$ collects the quadratures, i.e. $\hat X = \hat a + \hat a^\dagger$, $\hat P = \hat a^\dagger - \hat a$. It is clear to me how I get the diagonal terms, but not the off-diagonal. For example $$ V_{13}=\frac{1}{2}Tr[\hat \rho(\hat X_A\hat X_B + \hat X_B\hat X_A ) ] $$ since operators $A$ and $B$ commute $$ V_{13}=Tr[\hat \rho(\hat X_A\hat X_B ) ]\\ =Tr[\hat \rho(\hat a_A + \hat a_A^\dagger )(\hat a_B + \hat a_B^\dagger ) ] $$ but this combination of creation and annihilation operators changes the states but never "returns" them, and therefore the trace will be zero. I am probably doing something trivially wrong, but I'm blind. Thanks. 
Edit 1: in more detail: My density matrix reads $$ \hat \rho = \sum_{n=0}^{\infty}\frac{N^n}{(N+1)^{n+1}} |n\rangle_A\langle n| |n\rangle_B \langle n| $$ Then the term $V_{13}$ of the covariance matrix described above is $$ V_{13}=Tr\left[\sum_{n=0}^{\infty}\frac{N^n}{(N+1)^{n+1}} _B\langle n|_A\langle n| \hat a_A\hat a_B + \hat a_A^\dagger \hat a_B^\dagger + \hat a_A^\dagger\hat a_B + \hat a_A\hat a_B^\dagger |n\rangle_A|n\rangle_B \right]\\ =Tr\left[\sum_{n=0}^{\infty}\frac{N^n}{(N+1)^{n+1}} _B\langle n|_A\langle n| \left( n|n-1\rangle_A|n-1\rangle_B + (n+1)|n+1\rangle_A|n+1\rangle_B + \sqrt{n(n+1)}|n+1\rangle_A|n-1\rangle_B + \sqrt{n(n+1)}|n-1\rangle_A|n+1\rangle_B \right) \right] $$ then the trace gives zero. Where am I making a mistake? What is wrong?
You need to take two independent summation indices for the ket and the bra vector in the density matrix (and thus in the computation of expectation values).
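A quick numerical check makes this concrete. The sketch below (assuming NumPy is available; the mean photon number N = 0.5 and the Fock-space truncation dim = 25 are arbitrary illustrative choices) keeps two independent summation indices in $\hat\rho = \sum_{n,m} c_n c_m |n,n\rangle\langle m,m|$ and recovers both $S=2N+1$ and $C=2\sqrt{N(N+1)}$:

```python
import numpy as np

def check_covariance(N=0.5, dim=25):
    """Verify S = 2N+1 and C = 2*sqrt(N(N+1)) numerically, keeping
    two independent sum indices (n and m) in the density matrix."""
    # Annihilation operator on a truncated Fock space: a|n> = sqrt(n)|n-1>
    a = np.diag(np.sqrt(np.arange(1, dim)), k=1)
    I = np.eye(dim)
    aA, aB = np.kron(a, I), np.kron(I, a)
    XA = aA + aA.T          # quadrature X = a + a^dagger (real matrices)
    XB = aB + aB.T

    # Schmidt coefficients c_n of |psi> = sum_n c_n |n>_A |n>_B
    n = np.arange(dim)
    c = np.sqrt(N**n / (N + 1) ** (n + 1))
    psi = np.zeros(dim * dim)
    psi[n * dim + n] = c
    # rho = |psi><psi| = sum_{n,m} c_n c_m |n,n><m,m|  -- two indices!
    rho = np.outer(psi, psi)

    S_num = np.trace(rho @ XA @ XA)   # should equal 2N+1
    C_num = np.trace(rho @ XA @ XB)   # should equal 2*sqrt(N(N+1))
    return S_num, C_num
```

Dropping the n ≠ m terms of rho (i.e. using the diagonal density matrix from the question's Edit 1) makes the off-diagonal entry vanish, which is exactly the zero trace the question ran into.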
{ "language": "en", "url": "https://physics.stackexchange.com/questions/402125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is the wave model an approximation to the photon model at higher (or lower) frequencies? Certain models hold better in certain regimes. For example, Newtonian mechanics are more useful in the regime of speeds much slower than c. I was wondering, are there specific frequencies for light where the wave model breaks down and we need to think in terms of photons? For low frequencies, for example, maybe a detector would begin to detect pulses of light and no longer a constant stream of light (even though photons still exist at higher frequencies, they arrive so frequently that for example our eyes do not pick up on the arriving pulses as being distinct). I was wondering if my thinking was correct, and if the two models mathematically approximate each other, like how $\frac{1}{2}m_0v^2+m_0c^2$ approximates the more correct relativistic energy?
Rather than frequency, a better way to parametrize this is in terms of the quantum concentration. If you consider a radio wave of wavelength $\lambda$ and frequency $\nu$, the smallest volume to which such a wave can be localized is on the order of $\lambda^3$. If the energy density of the wave is $\rho$, then the number of photons per cubic wavelength is $n=\rho\lambda^3/h\nu$, which is called the quantum concentration. When $n\gg 1$, we can do things like sticking an antenna into the wave and sampling its electric field, and quantum-mechanical randomness is not important because the antenna is acted on by a large number of photons. In these situations, the quantum-mechanical description (wave-particle) is good, but the classical approximation (pure wave) is also OK. It is true that when $\nu$ is large, $n$ will tend to be small, and therefore the classical approximation will tend to be worse, for fixed values of all the other variables. This is a decent rough explanation of why the quantum nature of light is so much easier to see for, e.g., gamma rays. It is definitely not always true that the classical approximation is valid at low frequencies. For example, the hydrogen atom has absorption lines in the microwave spectrum (due to the Lamb shift), and there is no way you're going to explain those discrete lines using classical physics.
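To put numbers on the quantum concentration $n=\rho\lambda^3/h\nu$, here is a small sketch (the field energy density is an arbitrary illustrative value) comparing a ~100 MHz radio wave with a gamma ray of the same energy density:

```python
H_PLANCK = 6.626e-34   # Planck constant, J s
C_LIGHT = 2.998e8      # speed of light, m/s

def quantum_concentration(energy_density, wavelength):
    """n = rho * lambda^3 / (h * nu): photons per cubic wavelength."""
    nu = C_LIGHT / wavelength
    return energy_density * wavelength ** 3 / (H_PLANCK * nu)

rho = 1e-10  # J/m^3, an illustrative (weak) field energy density
n_radio = quantum_concentration(rho, 3.0)     # ~100 MHz, lambda ~ 3 m
n_gamma = quantum_concentration(rho, 1e-12)   # ~1 MeV gamma ray
# n_radio is astronomically large (classical wave picture is fine);
# n_gamma is far below 1 (the photon picture is unavoidable).
```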
{ "language": "en", "url": "https://physics.stackexchange.com/questions/402253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What would happen if Jupiter collided with the Sun? This question is inspired by a similar one asked on Quora. Let's say a wizard magicked Jupiter into the Sun, with or without high velocity. What happens? The Quora question has two completely opposed answers: one saying "nothing much happens" and the other saying "the Sun goes out for several hundred years". Both answers give reasons and calculations, and I know enough about physics to find both of them plausible. However ... it's plainly impossible that both answers are correct. Which one (or both?) is incorrect? Why is it incorrect?
Simply calculating the amount of heat generated, and comparing it to the heat capacity of Jupiter (even ignoring, as @rob says, that Jupiter's core temperature is hotter than the surface of the sun), is fallacious reasoning. What mechanism would cause the entirety of the sun's energy output to be directed towards heating up Jupiter? I guess if Jupiter were spread completely across the sun's surface, that would create a layer of mass that would have to be heated up before we would see any solar energy, but that would require Jupiter to somehow spread laterally but not radially. If Jupiter were mixed throughout the sun, the temperature of the sun would decrease slightly, and perhaps it would take a few hundred years for the sun's temperature to return to its previous level, and maybe we would get a few basis points less solar radiation, but it wouldn't go out.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/402383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "44", "answer_count": 4, "answer_id": 3 }
Difference between pure quantum states and coherent quantum states The post What is coherence in quantum mechanics? and the answer by udrv in this post seem to imply that a pure quantum state and a coherent quantum state are the same thing, since any pure state, written as a density operator, is a projector onto that state. Are they equivalent? If these two concepts are not equivalent, what is a simple counterexample to illustrate the difference? Then there is also the definition of a coherent state, which defines it as a quantum state of the harmonic oscillator. It is quite confusing how these concepts are related and distinct; could someone provide some clarity on these distinctions?
Coherence has a wide range of definitions, from the simpler ones in the other answers to more complex ones based on resource theory and Fisher information. Roughly, they all try to quantify the ability of a state to display interference in various properties. This ability depends on what's making changes (your Hamiltonian) and how you're making a measurement. Purity and coherence don't have to overlap and are in general different concepts. Depending on your experiment, a specific mixed state might display more interference than a specific pure state. But the cleanest interference will always be from a pure state. A coherent state is a specific pure state used for lasers and is the most coherent state for standard interferometers.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/402510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
How are real particles created? The textbooks about quantum field theory I have seen so far say that all talk in popular science literature about particles being created spontaneously out of vacuum is wrong. Instead, according to QFT those virtual particles are unobservable and are just a mathematical picture of the perturbation expansion of the propagator. What I have been wondering is, how did the real particles, which are observable, get created? How does QFT describe pair production, in particular starting with vacuum and ending with a real, on-shell particle-antiparticle pair? Can anybody explain this to me and point me to some textbooks or articles elaborating on this question (no popular science, please)?
''The textbooks about quantum field theory I have seen so far say that all talk in popular science literature about particles being created spontaneously out of vacuum is wrong.'' And they are right doing so. See also my essay https://www.physicsforums.com/insights/physics-virtual-particles/ ''How does QFT describe pair production, in particular starting with vacuum and ending with a real, on-shell particle-antiparticle pair?'' It doesn't. There are no such processes. Pair production is always from other particles, never from the vacuum or from a single stable particle. ''I cannot find a calculation for an amplitude <0|e+e-> or something like that.'' Because this amplitude always vanishes. All nonzero amplitudes must respect the conservation of 4-momentum, which is impossible for <0|e+e->. You can see this from the delta-function which appears in the S-matrix elements. It follows from this formula that the requested amplitude vanishes, since delta(q)=0 when q is nonzero.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/402612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
AP physics 1 rotation problem Could someone help me with this problem? The correct answers are A and D. One issue I have with it is that I just don't understand what the problem is asking. Like, what spool? What table? I tried making some sense of the question and the answers, but I can only see D moving the wheel clockwise. The answer explanation mentions, The key is knowing where the “fulcrum,” or the pivot for rotation, is. Here, that’s the contact point between the surface and the wheel. But why is that? Isn't the pivot where the axle meets the wheel?
A, B and C produce a counter-clockwise torque. You can see how that will prevent the body from accelerating to the right with no slipping. The only choice is D, which gives the proper torque sense in this case. It is also the only one different from the other three, so it has to be the correct answer by the principle that the problem must have a unique solution.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/402751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
Nuclear Physics Modeling Software I have a nuclear reactor design I would like to model. I would like to show the individual atoms and how they interact with each other in the reactor (specifically, I would like to model decay modes, interactions with photons). I was wondering if there was any software which would help me model this in 3D. This reactor design is one of my own, so I know all about what happens inside of the reactor. I have used Geant4 and similar software before, but I would just like a simple graphical interface in which I could model nuclear systems.
Two codes used by the neutrino community to generate predictions of the anti-neutrino flux expected for our detector systems are DRAGON and MURE. I'm not an expert on either code, but I know they produce slightly different levels of detail. DRAGON is a parameterized code while MURE is a full particle-level Monte Carlo. This means that DRAGON will generally be faster but MURE will generally capture more detailed behavior. Like Geant, these are frameworks with which the user builds a model of a particular device. So the user has to supply geometry, fuel composition and distributions, as well as other physical parameters. They are not for the faint of heart. To give you a sense of both the detail available from the codes and the work required to get there, consider these preprints: * *Simulation of Reactors for Antineutrino Experiments Using DRAGON *Reactor Simulation for Antineutrino Experiments using DRAGON and MURE (Preprints selected because I was part of those collaborations and knew where to look for them).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/402895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Spectral lines in Franck-Hertz experiment In a Franck-Hertz experiment in which the mercury vapor has been replaced by atomic hydrogen, pronounced maxima are observed in the current through the galvanometer at potential values of 10.2 V and 12.09 V; and if the emergent radiation is analyzed spectroscopically, three spectral lines are observed. Explain the results. I'm not sure about the three spectral lines if there are only two potential values; i.e., for me there are only two wavelengths using the equation $ E= h\frac{c}{\lambda}$. And the energy is obtained by multiplying the potential value with the charge of the electron, $E= eV$.
In the Franck-Hertz experiment, most of the vapor is in the ground state (quantum number n=1) to begin with. The 10.2 V signal corresponds to the 10.2 eV transition from n=1 to n=2 in hydrogen. The 12.09 V signal corresponds to the 12.09 eV transition from n=1 to n=3. The 3 spectroscopic lines in question are then for the 3 possible decays from these excited states: 3 to 2, 3 to 1 and 2 to 1. To find the level structure yourself, check out the NIST Atomic Spectra Database Levels Form, enter "H I" (for neutral hydrogen), and change the units to eV. For hydrogen, it's even easier to just use the Rydberg formula; the 2 to 1 and 3 to 1 lines are in the Lyman series, and the 3 to 2 line is in the Balmer series.
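These transition energies and the three lines follow directly from the hydrogen levels $E_n = -13.6\ \text{eV}/n^2$; a short sketch:

```python
RYDBERG_EV = 13.6057   # hydrogen ground-state binding energy, eV
HC_EV_NM = 1239.84     # h*c in eV*nm

def level(n):
    """Energy of hydrogen level n in eV."""
    return -RYDBERG_EV / n ** 2

def line_nm(n_hi, n_lo):
    """Wavelength (nm) of the n_hi -> n_lo emission line."""
    return HC_EV_NM / (level(n_hi) - level(n_lo))

e_12 = level(2) - level(1)  # ~10.2 eV: first Franck-Hertz maximum
e_13 = level(3) - level(1)  # ~12.09 eV: second maximum
lines = [line_nm(2, 1), line_nm(3, 1), line_nm(3, 2)]
# ~121.5 nm (Lyman-alpha), ~102.5 nm (Lyman-beta), ~656 nm (H-alpha)
```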
{ "language": "en", "url": "https://physics.stackexchange.com/questions/402985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Resistance and resistivity: which one is the intrinsic and which is the geometric property? Why? The electrical resistance $R$ and electrical resistivity $\rho$ of a metal wire are related by $$\rho=\frac{RA}{l}$$ where $l$ is the length and $A$ is the cross-sectional area of the wire. One could also have written $$R=\frac{\rho l}{A}.$$ The first relation seems to imply that resistivity is a geometric property of the conductor, while the second seems to imply that resistance is a geometric property. However, I know that resistance is a geometric property while resistivity is an intrinsic property. See here. But it's not clear to me why.
Resistivity is the resistance of a given material when the material is of unit length and unit area. So, resistivity is an intrinsic property. Resistance changes with the material's geometry: for example, the resistance of a sample is doubled when its length is doubled and halved when its cross-sectional area is doubled. In both cases, however, the resistivity of the material remains the same, because it is still referred to a unit length and a unit area of the material.
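The scaling described above is easy to verify; a minimal sketch (using the commonly quoted room-temperature resistivity of copper):

```python
def resistance(resistivity, length, area):
    """R = rho * L / A for a uniform conductor."""
    return resistivity * length / area

RHO_CU = 1.68e-8                              # ohm*m, copper near 20 C
base = resistance(RHO_CU, 1.0, 1e-6)          # 1 m of 1 mm^2 wire: ~0.0168 ohm
doubled_len = resistance(RHO_CU, 2.0, 1e-6)   # double length -> double R
doubled_area = resistance(RHO_CU, 1.0, 2e-6)  # double area   -> half R
# In every case the material constant RHO_CU is untouched: geometry
# changes R, while rho stays an intrinsic property of copper.
```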
{ "language": "en", "url": "https://physics.stackexchange.com/questions/403100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }