$\rho$ and $\omega$ mesons decays As far as I understand, typical decay for $\omega$ meson is into $3\pi$, while for $\rho^0$ is into $2\pi$. In fact they are quite similar particles (same spin, parity, similar masses). Why this difference in their decay?
| The $\rho$ forms an isospin triplet ($I=1$) and the $\omega$ is an isospin singlet ($I=0$). The $G$-parity
$$
G = Ce^{i\pi I_2},
$$ which determines whether the decay produces an even or odd number of pions, goes as $(-1)^I$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/707619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Does the 4-vector gradient commute with "itself"? If yes, why do they commute? Does $[\partial_{\mu},\partial_{\nu}] = 0$? If yes, why do they commute?
| For a smooth scalar function $f$ (which is the kind of object on which the vector fields $\partial_\mu$ are defined to act), we simply have that
$$[\partial_\mu,\partial_\nu]f=\partial_\mu\partial_\nu f -\partial_\nu\partial_\mu f= 0$$
because of the equality of mixed partial derivatives from elementary analysis. Because $f$ is arbitrary, $[\partial_\mu,\partial_\nu]$ (which is itself a vector field) is the vector field which eats a scalar function and spits out zero - i.e. it is the zero vector field.
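The equality of mixed partials can be checked symbolically in a few lines; a quick sketch (the concrete function $f$ below is an arbitrary illustrative choice, not from the answer):

```python
import sympy as sp

# Coordinates playing the role of x^mu on R^4 (labels are illustrative).
t, x, y, z = sp.symbols("t x y z")
coords = (t, x, y, z)

# An arbitrary (but concrete) smooth scalar function f.
f = sp.exp(t * x) * sp.sin(y * z**2) + t**3 * y

# Check [d_mu, d_nu] f = 0 for every pair of coordinates.
for mu in coords:
    for nu in coords:
        commutator = sp.diff(f, mu, nu) - sp.diff(f, nu, mu)
        assert sp.simplify(commutator) == 0
```

Because $f$ is arbitrary and smooth, the same cancellation happens term by term for any such test function.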
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/707901",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is there no temperature difference in the Joule expansion experiment? The whole system is adiabatic, and no heat exchange can take place. If the volume of the gas now doubles, it should actually cool down.
That's why I don't understand $dT=0$
| I like to think about these kinds of thermodynamic problems using kinetic theory and Newtonian mechanics, without worrying much about the ideal gas equation. Looked at this way, temperature changes are really easy to understand. For example, consider the classic example where you adiabatically compress a gas in a cylinder. Kinetic theory tells us that we can envision a bunch of tiny point masses bouncing around with random velocities. As the piston head moves downwards, it collides with some of the molecules, and by momentum conservation the gas molecules ricochet off the moving piston with a higher speed than they arrived with, so they move faster and the gas has a higher temperature. The same is true when you pull the piston upwards, allowing the gas to expand: the molecules strike the piston head as it moves away, and thus ricochet off with a smaller speed than they arrived with. In Joule expansion, there are no momentum changes or forces exerted on any individual molecule, so their speeds can't increase or decrease, and the temperature stays the same.
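The kinetic-theory picture above reduces to elastic bounces of a light molecule off a very heavy wall; a minimal sketch (the speeds are illustrative numbers, not from the answer):

```python
def speed_after_bounce(v, w):
    # A molecule (negligible mass) hits a very heavy piston head
    # elastically: reflect in the piston frame, transform back.
    # v: molecule velocity (+x is toward the piston),
    # w: piston velocity along the same axis (+x means receding).
    return abs(2 * w - v)

v = 300.0                             # m/s, typical thermal-speed scale
print(speed_after_bounce(v, -10.0))   # compression: piston approaching -> 320.0
print(speed_after_bounce(v, +10.0))   # expansion: piston receding      -> 280.0
print(speed_after_bounce(v, 0.0))     # Joule expansion: no moving wall -> 300.0
```

The last line is the whole point: with no moving wall (or any wall doing work), each bounce preserves speed, so the temperature is unchanged.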
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/708009",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 1
} |
Fake Perpetual Motion Device using an Electromagnet I was watching a video of one of those fake perpetual motion machines where a ball falls down a hole and then flies off a ramp back onto the starting platform.
As suspected, the large base is hiding an electromagnet. Studying frames of one cycle it seems that the ball seems to suddenly accelerate in an unexpected way around where the blue arrow is pointing.
Here the rail touches the ground and the electromagnet looks to be switched on at that point due to a pressure sensor. However, I am a bit confused how the magnet is working to accelerate the ball, can a magnet ''push'' a ball in this way? How is energy loss due to friction being overcome?
| Another way that this might work is that an electromagnet is turned on when the ball passes through the hole in the platform. This electromagnet would accelerate the ball faster than gravity toward the bottom of the ramp. Before the ball reaches the lowest point of the ramp the electromagnet is switched off, allowing the ball to continue around the rest of the ramp and be launched back to the platform with the extra momentum from the small amount of extra speed it gained while the electromagnet was on.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/708154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How does a combination of lenses create a sharper image? There's a line in a book which states that a combination of lenses helps create a sharper image, but I don't understand how. Does more magnification mean a sharper image?
| I'd like to try to explain this in the context of information. Images that have the highest resolution contain the most accurate information per unit area. In optics, this image information is carried by photons. The more photons collected from an image, the more accurately the image can be reproduced. In an ideal lens system, photons are concentrated from a larger "surface area" to a smaller "surface area" as they pass through the lens system without alteration. This increases the information density, which increases resolution. In a non-ideal lens system, aberrations distort some of the information carried by the original photons as they pass through the lens system. This decreases the amount of accurate information per unit area and decreases overall resolution. In a well-designed lens system constructed from good optical materials, these aberrations can be considered a deviation from the ideal lens system.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/708598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Kinetic energy of quarks and mass of proton I read that:
Most of a proton's mass comes from the energy of motion of its three quarks and from the energy carried by the gluons that keep them confined together.
Does kinetic energy of the three quarks in a proton contribute to the mass of a proton? I assume the energy carried by the gluons is referring to the binding energy of the three quarks by the strong force.
|
I assume the energy carried by the gluons is referring to the binding energy of the three quarks by the strong force.
It is more complicated than this. See how the strong interaction is figuratively modeled in terms of quantum field theory in this article.
The invariant mass of the hadron is the magnitude of the sum of the four-vectors of all those virtual particles.
As the actual QFT calculation cannot be carried out perturbatively because of the large coupling constant of the strong interaction, lattice QCD is used to model how the three valence quarks and an innumerable number of quark–antiquark pairs and gluons add up to the hadronic bound states.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/708867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
1D bound state for a real potential The prof says: "for 1Dimensional bound states with a real potential, the wave function is real, up to a phase".
The proof goes like this:
1D bound states are never degenerate. So $\Psi_{real}$ and $\Psi_{imaginary}$ are linearly dependent. So $\Psi \equiv \Psi_{real} +i\Psi_{imaginary}=\Psi_{real} (1+ic)=\sqrt{1+c^2}\,e^{i\,\mathrm{Arg}(1+ic)}\,\Psi_{real}$
Whatever the proof, I don't understand the statement since any complex number (the wavefunction is one complex number) is in some way real up to a phase. So I don't really understand what this theorem is trying to teach us.
PS: I cannot ask the professor directly because I study from a video recorded 6 years ago.
| No, the wavefunction $\psi(\vec{r})$ is not just 1 complex number: it is infinitely many complex numbers, 1 for each value of position $\vec{r}$. In contrast, the professor is making the non-trivial statement that there exists a global (i.e. $\vec{r}$-independent) complex constant $c$.
For more details, see also these related Phys.SE posts.
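The theorem can be illustrated numerically: diagonalize a discretized 1D Hamiltonian with a real potential, dress the bound state with a random phase (as a solver might), and check that one single, position-independent phase makes the whole function real (a sketch; the grid, well depth, and random phase are illustrative assumptions):

```python
import numpy as np

# Discretized 1D Hamiltonian with a real potential (finite square well),
# units hbar = m = 1; grid spacing and depth are illustrative choices.
N, L = 400, 20.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]
V = np.where(np.abs(x) < 2.0, -5.0, 0.0)
H = (np.diag(np.full(N, 1.0/dx**2) + V)
     - np.diag(np.full(N-1, 0.5/dx**2), 1)
     - np.diag(np.full(N-1, 0.5/dx**2), -1))

E, psi = np.linalg.eigh(H)
ground = psi[:, 0].astype(complex)

# Dress the ground state with a random global phase, as a solver might.
ground *= np.exp(1j * 1.234)

# The theorem: one r-independent phase makes the whole function real.
c = ground[np.argmax(np.abs(ground))]
real_version = ground * (np.conj(c) / np.abs(c))
assert np.max(np.abs(real_version.imag)) < 1e-10
```

If the wavefunction carried different phases at different points (as a generic "one complex number per position" object could), no single constant factor could achieve this.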
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/708995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Isometry between Minkowski space and Tangent space In the notes Geometric Wave Equations by Stefan Waldmann, at page 70, we have
Having a fixed Lorentz metric $g$ on a spacetime manifold $M$ we can
now transfer the notions of special relativity, see e.g. 50 , to $(M,
g)$. In fact, each tangent space $\left(T_{p} M, g_{p}\right)$ is
isometrically isomorphic to Minkowski spacetime $\left(\mathbb{R}^{n},
\eta\right)$ with $\eta=\operatorname{diag}(+1,-1, \ldots,-1)$, by
choosing a Lorentz frame: there exist tangent vectors $e_{i} \in T_{p}
M$ with $i=1, \ldots, n$ such that $$ g_{p}\left(e_{i},
e_{j}\right)=\eta_{i j}=\pm \delta_{i j} . $$
We say that two manifolds $M$ and $N$ are isometric if there is a map $\phi:M\rightarrow N$ such that for every $p$ and every vector $v \in T_pM$,
$g(v,v)=g'(\phi_*v,\phi_*v)$, where $g$ is the metric on $M$, $g'$ is the metric on $N$, and $\phi_*$ denotes the pushforward.
Now the definition of isometry refers to two manifolds, but in the notes they are claiming an isometry between a manifold and a tangent space.
How is this isometry constructed?
| Two (metric) manifolds $(M,g_M)$ and $(N,g_N)$ are isometric if there exists a diffeomorphism $\varphi:M\rightarrow N$ such that $g_M = \varphi^*g_N$.
On the other hand, two pre-Hilbert spaces $(V, \langle \cdot,\cdot\rangle_V)$ and $(S,\langle\cdot,\cdot\rangle_S)$ (that is, vector spaces equipped with inner products) are isometric if there exists an invertible linear map $A:V\rightarrow S$ such that $\langle A(X),A(Y)\rangle_S = \langle X,Y\rangle_V$.
What Waldmann is saying is that at each point $p\in M$, the vector spaces $(T_pM,g_p)$ and $(\mathbb R^n, \eta)$ are isometric to one another because we can choose a basis $\{\hat e_i\}$ for $T_p M$ such that $g_p(\hat e_i,\hat e_j) =\eta_{ij}$ (such a basis is called an orthonormal frame). From there, we can construct a linear isometry $A$ via
$$A: X\in T_pM \mapsto \pmatrix{-g_p(\hat e_1,X)\\g_p(\hat e_2,X)\\\vdots\\g_p(\hat e_n,X)}\in \mathbb R^n$$
Waldmann's wording is slightly confusing because he says that $(T_pM,g_p)$ is isometric to Minkowski spacetime $(\mathbb R^n,\eta)$; what he means is that it is isomorphic to the tangent space to Minkowski spacetime at any arbitrarily chosen point.
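The orthonormal frame and the linear isometry $A$ can be constructed numerically via an eigendecomposition of $g_p$; a sketch (the random metric and the use of `numpy.linalg.eigh` are illustrative choices, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# A "random" Lorentzian metric at p: conjugating eta by an invertible
# matrix guarantees the signature (+,-,-,-) (Sylvester's law of inertia).
B = rng.normal(size=(n, n))
g_p = B.T @ eta @ B

# Eigendecomposition g_p = Q diag(lam) Q^T; rescaling each eigenvector
# by 1/sqrt(|lam_i|) yields a frame with g_p(e_i, e_j) = +/- delta_ij.
lam, Q = np.linalg.eigh(g_p)
idx = np.argsort(-lam)                      # timelike (+) direction first
frame = (Q / np.sqrt(np.abs(lam)))[:, idx]  # columns are the e_i
assert np.allclose(frame.T @ g_p @ frame, eta)

# The induced linear isometry A : T_pM -> (R^4, eta), built from the
# components g_p(e_i, X) with signs fixed by eta.
A = eta @ frame.T @ g_p
X = rng.normal(size=n)
assert np.isclose(X @ g_p @ X, (A @ X) @ eta @ (A @ X))
```

The two assertions are exactly the two statements in the answer: the frame diagonalizes $g_p$ to $\eta$, and the induced linear map preserves the inner product.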
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/709181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Momentum probability density from Wigner distribution I want to prove that $|\hat{\psi}(p)|^2= \frac{1}{2\pi} \int W_\psi \mathrm{d}x $ where $W_\psi $ is the Wigner function.
Starting with the definition I get ($z=-y$ and $u=x+z/2$):
$$\frac{1}{2\pi}\iint \psi^*\left(x-\frac{y}{2}\right)\mathrm{e}^{\mathrm{i}py} \psi\left(x+\frac{y}{2}\right) \mathrm{d}y \mathrm{d}x= \frac{1}{2\pi}\iint \psi^*\left(u\right)\psi\left(u-z\right) \mathrm{d}u\: \mathrm{e}^{-\mathrm{i}pz}\mathrm{d}z$$
Next I want to use the Convolution Theorem to get the product of the Fourier Transforms:
$$\frac{1}{2\pi}\iint \psi^*\left(u\right)\phi\left(z-u\right) \mathrm{d}u\: \mathrm{e}^{-\mathrm{i}pz}\mathrm{d}z= \frac{1}{\sqrt{2\pi}}\mathcal{F}(\psi^{*}*\phi)(p)=\hat\psi^*(-p)\hat{\psi}(-p) $$
But with the definition $\phi(x):=\psi(-x)$ I get $|\hat{\psi}(-p)|^2$. Does somebody know where the problem is?
| Use
$$
\psi\left(x\pm\frac{y}{2}\right)=\frac{1}{\sqrt{2\pi}}\int dp\, e^{ip\left(x\pm\frac{y}{2}\right)}\hat{\psi}(p)
$$
and
$$
\int dx\, e^{i(p-p')x}=2\pi\delta(p-p')
$$
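Following these hints through for a concrete state, the marginal identity can be verified numerically; a sketch using a Gaussian test state (the state and grid are illustrative choices, not from the answer):

```python
import numpy as np

# Check |psi_hat(p)|^2 = (1/2pi) * Int W_psi dx for the Gaussian
# psi(x) = pi^{-1/4} exp(-x^2/2), whose momentum density in the
# symmetric FT convention is pi^{-1/2} exp(-p^2).
x = np.linspace(-10, 10, 801)
y = np.linspace(-10, 10, 801)
dx, dy = x[1] - x[0], y[1] - y[0]
X, Y = np.meshgrid(x, y, indexing="ij")

psi = lambda u: np.pi**-0.25 * np.exp(-u**2 / 2)

p = 0.7  # an arbitrary test momentum
integrand = np.conj(psi(X - Y/2)) * np.exp(1j * p * Y) * psi(X + Y/2)
val = (integrand.sum() * dx * dy).real / (2 * np.pi)

exact = np.pi**-0.5 * np.exp(-p**2)
assert abs(val - exact) < 1e-8
```

The double sum converges extremely fast here because the integrand is a smooth, rapidly decaying Gaussian.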
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/709350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Question about the kinetic energy operator The Kinetic Energy Operator is essentially self-adjoint. Under what circumstances does it have a unique extension?
| The "true" kinetic energy is the self-adjoint extension you are referring to. As you know, the domain of an operator is integral to its definition. The "formula" we want for the kinetic energy operator is (ignoring some constants) given by $-\nabla^2$, but we need to decide which domain it should act on so as to be self-adjoint.
A reasonable start is to define the operator $T_0:\psi\mapsto -\nabla^2\psi$ whose domain is $C^\infty_c(\mathbb R^n)$ - the compactly-supported smooth functions on $\mathbb R^n$. This is a very nice domain to work with, but one can show that $T_0$ is merely essentially self-adjoint. To get a self-adjoint operator, we must consider its closure $T:=T_0^{**}$; one can show that $T:\psi\mapsto -\nabla^2\psi$ is self-adjoint on the domain
$$H^2(\mathbb R^n)=\left\{ \psi \in L^2(\mathbb R^n)\ \bigg| \ \psi\text{ is twice weakly-differentiable and }\nabla^2\psi\in L^2(\mathbb R^n)\right\}.$$
More generally, it's much easier to determine whether an operator is essentially self-adjoint than it is to determine whether or not it is self-adjoint. Working in practice with an essentially self-adjoint operator is perfectly fine, as long as you bear in mind that its true self-adjoint extension has a larger domain.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/709526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
How to derive Shannon Entropy from Clausius Theorem? I am studying Quantum Information now, and I need to understand the entropy of a quantum system. But before I go there, I need to understand Shannon Entropy which is defined as :
$H(X) = -\sum_{i=1}^{n} {p(x_i) \log_2{p(x_i)}} $
where $X$ is a discrete random variable with possible outcomes $x_{1},...,x_{n}$ which occur with probability $p_{1},...,p_{n}$.
This is the entropy that works in information theory, but we know that entropy was already defined way back in thermodynamics by Clausius as:
$$dS = \frac{\delta Q}{T}$$
Then, in statistical physics, entropy is defined by Boltzmann as :
$S=k_B \ln{\Omega}$
where $\Omega$ is the number of microstates of a system. How can I derive the Shannon entropy from these thermodynamics and statistical physics entropy definitions?
| These are not the same.
Shannon entropy (Information entropy), $H_\alpha=-\sum_i p_i\log_\alpha p_i$ applies to any system with specified probabilities $p_i$.
Boltzmann entropy, defined via the famous $S=k\log\Omega$ implies that the system occupies all the accessible states with equal probability, $p_i=1/\Omega$ (this is a particular case of the Information entropy, as can be seen by plugging $p_i=1/\Omega$ into the Shannon formula, taking natural logarithm base, and discarding customary dimensional coefficient).
Gibbs entropy, introduced via the Clausius inequality $dS\geq \delta Q/T_{env}$, is defined empirically, as a quantity that always increases monotonically and thus makes thermodynamic processes irreversible.
Furthermore, Boltzmann entropy and Gibbs entropy can be shown to be equivalent, reflecting the equivalence between the microscopic statistical physics and the phenomenological thermodynamics.
Finally, let me point out that entropy may mean different things. As Jaynes claims in his article The minimum entropy production principle, there are six different types of entropy with somewhat different meanings.
Remark:
There is some disagreement about what is called Gibbs entropy, as Gibbs actually introduced two - one along the lines of Clausius, and another more similar to Boltzmann entropy. These are sometimes referred to as Gibbs I and Gibbs II. For more ways to introduce entropy see this answer to Is information entropy the same as thermodynamic entropy?.
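The equal-probability reduction described above can be illustrated in a few lines (the distribution sizes are arbitrary choices):

```python
import numpy as np

def shannon(p):
    # Shannon entropy in nats (natural logarithm base).
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

Omega = 1024
p_uniform = np.full(Omega, 1.0 / Omega)

# Equal probabilities p_i = 1/Omega: Shannon entropy equals ln(Omega),
# so k_B * H reproduces the Boltzmann entropy S = k_B ln(Omega).
assert np.isclose(shannon(p_uniform), np.log(Omega))

# Any non-uniform distribution over the same states has lower entropy,
# which is why Boltzmann's formula is the special (maximal) case.
p_biased = np.array([0.5] + [0.5 / (Omega - 1)] * (Omega - 1))
assert shannon(p_biased) < np.log(Omega)
```

Multiplying the result by the dimensional coefficient $k_B$ recovers the thermodynamic units.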
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/709644",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 0
} |
Does gravitation really exist at the particle level? As I understand, we usually talk about gravity at a macro scale, with "objects" and their "centre(s) of mass". However, since gravity is a property of mass generally (at least under the classical interpretation), it should therefore apply to individual mass-carrying particles as well.
Has this ever been shown experimentally? For example, isolating two particles in some manner and then observing an attraction between them not explained by other forces.
To pose the question another way, let's say I have a hypothesis that gravitation is only an emergent property of larger systems, and our known equations only apply to systems above some lower bound in size. Is there any experiment that reasonably disproves this hypothesis?
| Mass Spectrometer
I don't recall the formal name for the instrument but I was able to tour a lab at Ohio State University that had mass spectrometers that lofted ionized individual molecules a few feet up a column and waited for them to pass a detector as they fell due to gravity. Organic molecules aren't quite point masses but they are about as close as you can reliably get measurements for. The instrument was considered to be best in class at the time and this was only a few years ago.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/709780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27",
"answer_count": 5,
"answer_id": 4
} |
How much time will it take for two oppositely charged particles to collide under both gravitational and electrostatic forces? Suppose two point objects with opposite charges $q_1$ and $q_2$ are at a distance $r$ in a vacuum.
So, the net electrostatic force on both objects $= F_c = \frac {q_1q_2}{4π\epsilon_0r²}$ [$\epsilon_0$ is the vacuum permittivity]
There should also be a gravitational force acting on these objects. Suppose the masses of the two objects are $m_1$ and $m_2$.
Then, the gravitational force $= F_g = \frac {Gm_1m_2} {r²}$
So, the net force working on the objects $= F_{net} = \frac {4π\epsilon_0Gm_1m_2 + q_1q_2} {4π\epsilon_0r²}$
I tried to calculate the time taken by the two objects to collide with each other with the net force but failed. I want to find out the equation. So can anyone help me to find out the period of collision in such a situation mentioned above?
| In this video by Flammable Maths, the solution to a similar problem is given.
The only difference is that here we also need to include the electrostatic force; besides that, the process is exactly the same.
Let's say we have two objects $1$ and $2$ with masses $m_1,m_2$ and charges $q_1,q_2$ respectively, separated by distance $R$. Then
$$\textstyle\displaystyle{F=F_C+F_G=\frac{Gm_1m_2+kq_1q_2}{R^2}}$$
Where $G$ is the Newtonian constant of gravitation and $$\textstyle\displaystyle{k=\frac{1}{4\pi\epsilon_0}}$$
By Newton's third law we have $F_{12}=-F_{21}$, so
$$\textstyle\displaystyle{F_{12}=\frac{Gm_1m_2+kq_1q_2}{(r_2-r_1)^2}=m_1\frac{d^2r_1}{dt^2}}$$
$$\textstyle\displaystyle{F_{21}=-\frac{Gm_1m_2+kq_1q_2}{(r_2-r_1)^2}=m_2\frac{d^2r_2}{dt^2}}$$
Where $R=r_2-r_1$
$$\textstyle\displaystyle{\therefore\;\frac{d^2r_2}{dt^2}-\frac{d^2r_1}{dt^2}=-\frac{Gm_1m_2+kq_1q_2}{(r_2-r_1)^2}\bigg(\frac{1}{m_1}+\frac{1}{m_2}\bigg)}$$
$$\implies\textstyle\displaystyle{\frac{d^2R}{dt^2}=-\frac{\kappa}{R^2},\qquad\text{where }\kappa=(Gm_1m_2+kq_1q_2)\bigg(\frac{1}{m_1}+\frac{1}{m_2}\bigg)}$$
Now we just need to solve this differential equation-
$$\textstyle\displaystyle{\frac{dv}{dt}=-\frac{\kappa}{R^2}=\frac{dv}{dR}\frac{dR}{dt}}$$
$$\implies\textstyle\displaystyle{-\frac{\kappa}{R^2}=v\frac{dv}{dR}}$$
$$\implies\textstyle\displaystyle{-\kappa\int\frac{1}{R^2}dR=\int vdv}$$
At $t=0$, $R(0)=R_i$ [The initial radius]
$v(0)=0$ [velocity at the beginning]
$$\therefore\textstyle\displaystyle{\int_{0}^{v(t)}vdv=-\kappa\int_{R_i}^{R(t)}R^{-2}dR}$$
$$\implies\textstyle\displaystyle{\frac{v^2}{2}=\kappa\bigg(\frac{1}{R}-\frac{1}{R_i}\bigg)}$$
$$\implies\textstyle\displaystyle{v=\frac{dR}{dt}=\pm\sqrt{2\kappa\bigg(\frac{R_i-R}{R_iR}\bigg)}}$$
$$\implies\textstyle\displaystyle{\int_{0}^{T_c}dt=\pm\int_{R_i}^{0}\frac{1}{\sqrt{2\kappa\bigg(\frac{R_i-R}{R_iR}\bigg)}}dR}$$
$$\implies\textstyle\displaystyle{T_c=\pm\sqrt{\frac{R_i}{2\kappa}}\int_{R_i}^{0}\sqrt{\frac{R}{R_i-R}}dR}$$
Solving the integral is simple; if you would like to see the steps, see here. Noting that time can't be negative, we have
$$\textstyle\displaystyle{T_c=\frac{\pi}{2}\sqrt{\frac{R_i^3}{2\kappa}}}$$
Now simply substituting the values of $\kappa$ and $k$ gives us the less clean formula
$$\textstyle\displaystyle{T_c=\sqrt{\frac{\pi^3\epsilon_0m_1m_2R_i^3}{2(m_1+m_2)(4\pi\epsilon_0Gm_1m_2+q_1q_2)}}}$$
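As a sanity check of the closed form, the time integral can be evaluated numerically; a sketch (the substitution $R=R_i\sin^2\theta$, which removes the endpoint singularity, is my own device here, and the parameter values are illustrative):

```python
import numpy as np

# Numerical sanity check of T_c = (pi/2) sqrt(R_i^3 / (2 kappa)).
# The substitution R = R_i sin^2(theta) turns the singular integrand
# sqrt(R / (R_i - R)) dR into the smooth 2 R_i sin^2(theta) dtheta.
R_i, kappa = 1.0, 1.0          # illustrative values

theta = np.linspace(0.0, np.pi / 2, 100_001)
f = 2 * R_i * np.sin(theta)**2
h = theta[1] - theta[0]
integral = h * (f[0] / 2 + f[1:-1].sum() + f[-1] / 2)   # trapezoid rule

T_numeric = np.sqrt(R_i / (2 * kappa)) * integral
T_closed = (np.pi / 2) * np.sqrt(R_i**3 / (2 * kappa))
assert np.isclose(T_numeric, T_closed, rtol=1e-9)
```

The integral evaluates to $\pi R_i/2$, which is exactly what turns $\sqrt{R_i/(2\kappa)}$ into the quoted $\frac{\pi}{2}\sqrt{R_i^3/(2\kappa)}$.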
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/709966",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What happens to the potential energy when we connect 2 water tanks with different water levels?
Imagine combining 2 water tanks (with equal cross section areas) with different water levels.
I'll call them A (tank with the higher water level) & B.
When water is flowing from A to B, what happens to its potential energy? Does it decrease? If so, what happens to that energy?
I would also like to know what happens to the center of gravity of this whole water volume? Does it also lower when water is flowing?
Edit:
I'll tell you what I am thinking. In A, some amount of water goes down, so its potential energy decreases. And in B the same amount of water is pushed up the same height, so its potential energy increases. Since the mass and the change of height are equal, the decrease and increase of potential energy are also equal. Doesn't that mean the net change of potential energy is zero?
I have seen so many explanations similar to the answers below. And that seems correct. But still I can't get my head off from the above explanation. Can anyone show me what is wrong in my explanation?
| Looking at the following image, it's pretty obvious that the centre of mass will be lower, because the final state will be, just taking a portion of water (blue rectangle) and lowering it.
And obviously, the potential energy is also reduced (lowering the centre of mass lowers the total potential energy). In an ideal world the water would just oscillate, going up and down in both tanks, but in reality the kinetic energy of the flowing water disperses very quickly through friction and heat.
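The energy bookkeeping can be made concrete in a few lines (tank heights are illustrative; the common factor $\rho g A$ is divided out):

```python
# Potential energy of two connected water columns with equal
# cross-section A, in units of rho*g*A.
def column_pe(h):
    # A column of height h has mass ~ h and centre of mass at h/2,
    # so its PE is proportional to h**2 / 2.
    return h**2 / 2

h_A, h_B = 3.0, 1.0
pe_before = column_pe(h_A) + column_pe(h_B)   # 4.5 + 0.5 = 5.0
h_final = (h_A + h_B) / 2                     # levels equalise at 2.0
pe_after = 2 * column_pe(h_final)             # 2 * 2.0 = 4.0

# The transferred slab drops from the TOP of A (mean height 2.5) to the
# TOP of B (mean height 1.5), so it does NOT rise "the same height" it
# fell; the difference is what gets dissipated.
assert pe_after < pe_before
print(pe_before - pe_after)  # -> 1.0
```

This pinpoints the gap in the "net change is zero" reasoning: the water removed from A comes off its top, while the water added to B sits on B's (lower) top, so the moved mass ends up strictly lower.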
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/710147",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Showing that the integration measure is preserved under gauge transformation in the non-Abelian case I am trying to show that the integration measure we use in the Faddeev-Popov method of quantisation of non-Abelian gauge theory is invariant under a gauge transformation.
I am using Peskin & Schroeder chapter 16.2. The gauge transformation of the gauge field is given by
$$
(A^\alpha)^a_\mu=A^a_\mu+\frac{1}{g}D_\mu\alpha^a
$$
which is in the adjoint representation as shown by the transformation. Now the integration measure we use in the functional integral is given by
$$
\mathcal{D}A=\prod_x\prod_{a,\mu}dA^a_\mu
$$
So when we take the gauge transformed measure we have
$$
\mathcal{D}A^\alpha=\prod_x\prod_{a,\mu}d(A^\alpha)^a_\mu=\prod_x\prod_{a,\mu}\left( dA^a_\mu+\frac{1}{g}d(\partial_\mu\alpha^a)+f^{abc}d(A^b_\mu\alpha^c)\right)
$$
This looks like a more complicated shift in our integration variable, but I don't quite understand how it leaves the measure invariant. The authors mention that this is a shift followed by a rotation of the components of $A_\mu^a$, but how can we see this explicitly?
Some of my (maybe incorrect) reasoning
The second term in the transformed measure is just a shift and since we are integrating over fields $A^a_\mu(x)$ it indeed leaves the measure invariant. It's the third term that I really struggle to make sense of.
| What the authors mean is this: when you disregard the shift, all that's left is
$$ (\delta^{ab} + f^{abc}\alpha^c)\mathrm{d}A^b_\mu,$$
which is the infinitesimal version of a linear transformation generated by the matrix $M^{ab} = f^{abc}\alpha^c$. Since the structure constants are antisymmetric in their first two indices, $M^{ab}$ is antisymmetric too, and so it is the generator of a rotation.
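For a concrete check, take the SU(2) structure constants $f^{abc}=\epsilon^{abc}$: the generated matrix is antisymmetric, and its exponential is an orthogonal matrix with unit determinant, i.e. a measure-preserving rotation (a sketch; the gauge parameters $\alpha^c$ and the truncated series exponential are illustrative choices):

```python
import numpy as np

# SU(2) structure constants f^{abc} = epsilon_{abc}, totally antisymmetric.
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0

alpha = np.array([0.3, -0.7, 0.2])          # illustrative gauge parameters
M = np.einsum("abc,c->ab", eps, alpha)      # M^{ab} = f^{abc} alpha^c

# Antisymmetry of f in its first two indices makes M antisymmetric ...
assert np.allclose(M, -M.T)

# ... so exp(M) is a rotation: orthogonal with unit determinant,
# hence it preserves the measure prod dA^a_mu (|det| = 1).
def expm_series(A, terms=30):
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

R = expm_series(M)
assert np.allclose(R.T @ R, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
```

The unit determinant is exactly why the Jacobian of the rotation drops out of $\mathcal{D}A$.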
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/710633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Do supermassive black holes at galactic centers and the galaxies containing them spin with the same axis? If the galactic mass is rotating around a central supermassive black hole, should their spin axes not be the same, just as we would obtain for the rotation of a star and its planets?
| There is no particular reason they need to. A planet does not necessarily have its axis aligned with the solar system or the galaxy. A star does not necessarily have its axis aligned with its stellar system or the galaxy. Our own star's axis is about 7 degrees out of alignment with the plane of the ecliptic.
If a black hole were aligned with the galaxy, and a large mass (say a star) impacted the BH at some weird angle, the result would not still be aligned. There is no particular reason that accretion has to proceed symmetrically. So the evolution of the BH could pass through a phase where it is aligned, but is unlikely to stay there.
Probably it won't be massively far off, because the average of the accretion is probably going to be roughly aligned with the galaxy. But it is unlikely to be perfectly aligned.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/710767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 1
} |
How to see that the electromagnetic stress-energy tensor satisfies the null energy condition? I am trying to show that the Maxwell stress-energy tensor,
$$T_{\mu\nu} = \frac{1}{4\pi}\left( F_{\mu\rho} F^{\rho}{}_{\nu} - \frac{1}{4}\eta_{\mu\nu}F_{\rho \sigma} F^{\rho\sigma} \right),$$
satisfies the null energy condition, i.e., that
$$T_{\mu \nu}k^\mu k^\nu \geq 0$$
for all null vectors $k^\mu$. I see that the second term vanishes on contraction with $k^\mu k^\nu$, but I'm struggling to see how to manipulate the first term.
| While the answer provided by Nickolas Alves should suffice, here is an alternate proof of the NEC satisfied by the free electromagnetic field using the 2-spinor formalism (and hence this proof is particular to 3+1-dimensional spacetime; see [1]).
The idea is that a real null vector $k^a$ can be written as tensor product of two 2-spinors, one being the conjugate of other:
$k^a\leftrightarrow k^A\bar{k}^{A'}$
where $k^A$ is defined up to an overall phase factor. Note that $k^ak_a\leftrightarrow k^Ak_A\bar{k}^{A'}\bar{k}_{A'}=0$, which follows from the fact that $k^Ak_A=\epsilon_{AB}k^Ak^B= \epsilon_{[AB]}k^Ak^B=0$
The Maxwell tensor $F_{ab}$ in this formalism can be written in terms of a symmetric 2-spinor $\phi_{AB}$ as follows:
$F_{ab}\leftrightarrow \phi_{AB}\epsilon_{A'B'}+c.c.$
It turns out that the stress-energy tensor $T_{ab}$ of the EM field can be written simply as (see chapters 3 and 5 of [1])
$T_{ab}\leftrightarrow\phi_{AB}\bar{\phi}_{A'B'}$
The Null energy condition follows naturally: $T_{ab}k^ak^b\leftrightarrow |\phi_{AB}k^Ak^B|^2\geq 0$
[1] R. Penrose, W. Rindler, "Spinors and Space-Time. Volume-I: Two-Spinor Calculus and Relativistic Fields", Cambridge University Press (1984)
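The tensorial version of the argument can also be spot-checked numerically: with $w_\rho = F_{\mu\rho}k^\mu$ one has $T_{\mu\nu}k^\mu k^\nu = -\frac{1}{4\pi}w_\rho w^\rho$ (the trace term drops out on a null vector), and $w$ is orthogonal to the null $k$, hence spacelike or null. A sketch with random fields and null vectors (an assumption-laden illustration, not part of either answer):

```python
import numpy as np

rng = np.random.default_rng(1)
eta = np.diag([1.0, -1.0, -1.0, -1.0])   # mostly-minus signature

for _ in range(1000):
    # A random antisymmetric F_{mu nu} (some E and B configuration).
    A = rng.normal(size=(4, 4))
    F = A - A.T

    # A random null vector k^mu: unit spatial part, time component 1.
    n = rng.normal(size=3)
    k = np.concatenate(([1.0], n / np.linalg.norm(n)))
    assert abs(k @ eta @ k) < 1e-12

    # T_{mu nu} k^mu k^nu = -(1/4pi) w_rho w^rho, w_rho = F_{mu rho} k^mu.
    w = k @ F
    T_kk = -(w @ eta @ w) / (4 * np.pi)
    assert T_kk >= -1e-12
```

Every sampled configuration satisfies the bound, as the spinor argument guarantees.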
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/711065",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Eyes shut, can a passenger tell if they’re facing the front or rear of the train? Suppose you’re a passenger sitting in one of the carriages of a train which is travelling at a high, fairly steady speed. Your eyes are shut and you have no recollection of getting on the train or the direction of the train’s acceleration from stationary. Can you tell whether you’re facing the front or the back of the train?
This isn’t a theoretically perfect environment - there are undulations, bends and bumps in the track. Not a trick question - you cannot ask a fellow passenger!
Edit: This intentionally lacks rigorous constraints. Do make additional assumptions if that enables a novel answer.
| I think you might be able to distinguish the direction of motion by turning sideways and listening for the apparent motion of the clickety-clack sounds and vibrations from the carriage wheels (assuming old fashioned train tracks, on a modern high-speed rail line the sounds may not be perceptible).
At, say, 100 km/h and 20 m between wheels, the sounds would be separated by a bit less than a second.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/711352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 9,
"answer_id": 5
} |
What is the best way of evaluating time-ordered integrals numerically? In time-dependent perturbation theories, one encounters multi-dimensional time-ordered integrals
$$
(-i)^n\int_0^tdt_1\int_0^{t_1} dt_2 \cdots \int_0^{t_{n-1}}dt_n f(t_1,t_2,\cdots,t_n)
$$
What is the best way of numerically evaluating such multi-dimensional time-ordered integrals? Monte Carlo integration is the first method that comes to mind. How does Monte Carlo deal with the time-ordering?
| The time-ordered integral can be transformed into a normal multi-dimensional integral over a rectangular volume by a change of variables. Setting $t=\beta$, and using the following change of variables:
$$
\begin{cases}
t_1 = y_1\\
t_2 = y_2\frac{y_1}{\beta}\\
t_3 = y_3\frac{y_2y_1}{\beta^2}\\
\vdots\\
t_n = y_n\frac{y_{n-1}\cdots y_1}{\beta^{n-1}}
\end{cases}
$$
and the corresponding Jacobian
$$
\left(\frac{y_1}{\beta}\right)^{n-1}\left(\frac{y_2}{\beta}\right)^{n-2}\cdots \left(\frac{y_{n-1}}{\beta}\right)^{1},
$$
the integral $(-i)^n\int_0^tdt_1\int_0^{t_1} dt_2 \cdots \int_0^{t_{n-1}}dt_n f(t_1,t_2,\cdots,t_n)$
becomes
$$
(-i)^n\int_0^\beta dy_1 \int_0^{\beta} dy_2 \cdots \int_0^{\beta} dy_n \, f(y_1, \frac{y_2y_1}{\beta},\cdots,\frac{y_ny_{n-1}\cdots y_1}{\beta^{n-1}}) \left(\frac{y_1}{\beta}\right)^{n-1}\left(\frac{y_2}{\beta}\right)^{n-2}\cdots \left(\frac{y_{n-1}}{\beta}\right)^{1}.
$$
It is then easy to use a regular Monte Carlo integrator to complete the integral. For example, one may use the vegas algorithm https://vegas.readthedocs.io/en/latest/tutorial.html .
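A sketch of the $n=3$ case, testing the change of variables against the known simplex volume and a simple symmetric integrand (the test functions and sample size are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(42)
beta, N = 2.0, 200_000

# Uniform samples on the cube [0, beta]^3 ...
y1, y2, y3 = rng.uniform(0, beta, size=(3, N))
# ... mapped onto the ordered simplex 0 < t3 < t2 < t1 < beta.
t1 = y1
t2 = y2 * y1 / beta
t3 = y3 * y2 * y1 / beta**2
jac = (y1 / beta)**2 * (y2 / beta)   # Jacobian for n = 3

def estimate(f_vals):
    # cube volume * mean of (f * Jacobian)
    return beta**3 * np.mean(f_vals * jac)

# Test 1: f = 1; exact answer is the simplex volume beta^3 / 3!.
assert abs(estimate(np.ones(N)) - beta**3 / 6) < 0.02

# Test 2: f = t1 + t2 + t3; by symmetry the simplex integral is 1/3!
# of the cube integral of the same function, i.e. beta^4 / 4.
assert abs(estimate(t1 + t2 + t3) - beta**4 / 4) < 0.05
```

Because the sampling region is a plain hypercube, any off-the-shelf Monte Carlo integrator handles the time-ordering automatically through the mapped arguments and the Jacobian weight.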
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/711726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Wavefunction Amplitude Intuition Reading the responses to this question: Contradiction in my understanding of wavefunction in finite potential well
it seems people are pretty confident that, e.g., the wavefunction of a particle in a slanted potential well:
makes physical sense, since the system is non-dissipative, so you are more likely to find the particle in a region of higher potential "where its kinetic energy would be lower" in loose terms.
So the probability of finding the particle in some small region near the minimum of the potential is lowest, got it.
How does this reconcile with e.g. the ground state of the quantum simple harmonic oscillator ($\psi \propto e^{-x^2}$)?
In that case we have a situation where the greatest probability of finding the particle is indeed at the minimum of the potential, and so using the idea of classical turning points to determine the maxima of ψ breaks down.
I can't wrap my head around why sometimes the responses to the linked question are fine and dandy, and other times they are manifestly wrong. Is it something to do with my assumption that any state with a given energy would have a higher probability amplitude at higher potential?
| Having small or large quantum numbers makes the difference.
See also Correspondence principle:
In physics, the correspondence principle states that the behavior
of systems described by the theory of quantum mechanics
(or by the old quantum theory) reproduces classical physics
in the limit of large quantum numbers.
The semi-classical behavior (i.e. the wave function has large amplitude
and long wavelength where the classical particle would move slowly)
is valid only for large quantum numbers $n$.
But usually it is not true for small quantum numbers $n$
(i.e. for the few lowest energy levels).
Then the wave function typically behaves very different
from a classical particle.
You can see this trend both for the slanted potential well and
for the harmonic oscillator.
The particle in the slanted potential well
behaves very classical for $n=61$ and $n=35$,
but it does not for $n=1$.
The harmonic oscillator behaves quite classical for $n=10$,
but it does not for $n=0, 1, 2$.
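Both trends can be checked numerically for the harmonic oscillator. A sketch (assuming numpy is available; units $\hbar=m=\omega=1$) builds $|\psi_n|^2$ from physicists' Hermite polynomials and confirms that the density peaks at the potential minimum for $n=0$ but near the classical turning points for $n=10$:

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

def density(n, x):
    # |psi_n(x)|^2 for the 1D oscillator in units hbar = m = omega = 1
    Hn = hermval(x, [0.0] * n + [1.0])   # physicists' Hermite polynomial H_n
    norm = 1.0 / math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    return (norm * Hn * np.exp(-x**2 / 2.0))**2

x = np.linspace(-8.0, 8.0, 8001)
for n in (0, 10):
    d = density(n, x)
    # trapezoid-rule normalization check
    integral = float(((d[:-1] + d[1:]) / 2 * np.diff(x)).sum())
    assert abs(integral - 1.0) < 5e-4
assert abs(x[np.argmax(density(0, x))]) < 1e-6    # n = 0: peak at the minimum
assert abs(x[np.argmax(density(10, x))]) > 3.0    # n = 10: peak near x = sqrt(21) ~ 4.6
```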
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/711930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Electric potential generated by spherical symmetric charge density I know this question is pretty basic but I found a supposedly wrong formula in my notes and I'm trying to understand where this is coming from. Suppose we have a spherically symmetric charge density $\rho({\boldsymbol{r}})=\rho(r)$, then the formula I was given for the potential is
$$\phi(r)=\frac{1}{r}\int_0^r4\pi\rho(r')r'^2dr'\tag{1}$$
But using Gauss law for electric field one gets
$$\int\boldsymbol{E}\cdot d\boldsymbol{S}=4\pi\underbrace{\int\rho(\boldsymbol{r'})d^3\boldsymbol{r'}}_{Q(r)}\implies \boldsymbol{E}(\boldsymbol{r})=\frac{Q(r)}{r^2}\hat{\boldsymbol{r}}=\frac{\int_0^r4\pi\rho(r')r'^2dr'}{r^2}\hat{\boldsymbol{r}}\tag{2}$$
Taking the gradient of $(1)$
$$\boldsymbol{E}(\boldsymbol{r})=-\nabla\phi=\left[\frac{\int_0^r4\pi\rho(r')r'^2dr'}{r^2}-\frac{4\pi\rho(r)r^2}{r}\right]\hat{\boldsymbol{r}}=\left[\frac{Q(r)}{r^2}-\frac{dQ(r)/dr}{r}\right]\hat{\boldsymbol{r}}$$
That is off by a term from what I got from Gauss Law, so I concluded $(1)$ is wrong.
Is this correct?
| The first formula seems to assume that the potential at a point $r$ inside the sphere is given only by the charge included in the sphere of radius $r$. This is true for the electric field but not, in general, for the potential.
You can see this by taking a constant charge density. According to your formula (1) you will get $V(r) =\frac{4\pi \rho}{r} \frac{r^3}{3}$, i.e. $V(r) =\frac{4\pi \rho}{3} r^2$. The potential inside a uniformly charged sphere does go like $r^2$, as you can see in this link (http://www.phys.uri.edu/gerhard/PHY204/tsl94.pdf), but the formula for the potential inside the sphere has two terms. This is true for the convention assigning zero potential to infinity. You can see that the extra term is a constant.
So it may be that your formula (1) works for a different reference for the potential. What is the source of the formula? Does it have an expression for the field outside the sphere?
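For reference, with the convention $\phi(\infty)=0$ (Gaussian units), the standard textbook expression also keeps the contribution of the charge outside radius $r$ (quoted here for completeness, not derived in this answer):
$$\phi(r)=\frac{1}{r}\int_0^r 4\pi\rho(r')\,r'^2\,dr' \;+\; \int_r^\infty 4\pi\rho(r')\,r'\,dr'.$$
Taking $-\mathrm{d}\phi/\mathrm{d}r$, the derivative of the second term cancels the boundary term produced by the first, leaving exactly $E(r)=Q(r)/r^2$. For a uniform ball of radius $R$ this evaluates to $\phi(r)=2\pi\rho R^2-\frac{2\pi}{3}\rho r^2$, i.e. the familiar $\frac{Q}{2R}\left(3-\frac{r^2}{R^2}\right)$.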
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/712081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Is sand in a vacuum a good thermal insulator? My reason for thinking that sand in a vacuum would be a good insulator is that heat cannot be conducted in a vacuum, and the area of contact between adjacent grains of sand is very small, which means heat would transfer between grains relatively slowly. Is this correct, or is there something I'm missing?
Also, the sand is there instead of pure vacuum for structural support.
| Simply put: If sand in vacuum had a heat conductivity close to that of vacuum, i.e., at least much closer to zero than the heat conductivity of the silicon dioxide (aka glass) it consists of, something similar would have to be true for sand in air.
But: the heat conductivity (all numbers from the German Wikipedia) of dry sand (in air, I suppose) is 0.58 W/(m·K), while that of glass is 0.76 W/(m·K) and that of air is 0.026 W/(m·K). So, if (air-filled) sand is that far (i.e., 20 times!) away from pure non-convective air (even though the total contact area between grains is supposedly minute), replacing air with vacuum will most likely change nothing significant.
Don't ask me why this is so. Nevertheless it is an interesting question, especially because you probably thought of the sand providing the mechanical support for the vacuum against the outside pressure.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/712248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 3
} |
Why is skin depth quoted as when the amplitude has decayed by a factor of $\frac{1}{e}$ The definition of the skin depth is:
"Skin depth defines the distance a wave must travel before its amplitude has decayed by a factor of $1/e$."
My question is: why is the decay of 37% significant here? The EM wave will still have some penetration abilities after it has lost 37% of its initial amplitude, won't it? That is, it will still be able to penetrate the conductor after the skin depth is reached.
| The mathematics (exponential decay) would suggest that infinite distance is needed for the amplitude to decay to zero. This would not be helpful, so an arbitrary agreed value is used. The choice of 1/e times the original amplitude gives a simpler form to the decay equation than another value would.
Note that the amplitude has not decayed by 37% but to 37% of the original value, i.e. it has lost 63% of its amplitude.
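For concreteness, a small numerical sketch (the good-conductor formula $\delta=\sqrt{2/(\mu\sigma\omega)}$ and the copper conductivity are standard values supplied here, not part of the answer above):

```python
import math

# fraction of the amplitude remaining after traveling k skin depths: e^{-k}
for k, expected in [(1, 0.368), (3, 0.050), (5, 0.007)]:
    assert abs(math.exp(-k) - expected) < 5e-3

# skin depth of copper at 1 MHz: delta = sqrt(2 / (mu0 * sigma * omega))
mu0 = 4e-7 * math.pi          # vacuum permeability, H/m
sigma = 5.96e7                # copper conductivity, S/m (nominal)
omega = 2 * math.pi * 1e6     # angular frequency, rad/s
delta = math.sqrt(2.0 / (mu0 * sigma * omega))
assert 6e-5 < delta < 7e-5    # about 65 micrometers
```

After five skin depths less than 1% of the amplitude remains, which is why $\delta$ is a convenient single number even though the field never strictly vanishes.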
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/712525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
What is an intuitive explanation for $T = \mathrm{const}$ when $\Omega(E) = e^E$? Temperature is related to number of microstates as follows:
$$
\frac{1}{k_{\mathrm{B}}T} = \frac{\mathrm{d}\ln{\Omega(E)}}{\mathrm{d}E} \ .
$$
Hence, if $\Omega(E) = e^E$, then $T = \mathrm{const}$. This contradicts my intuition. I expect temperature of a system to increase as the number of microstates increases. Everything works fine for power functions. For example, if $\Omega(E) = E^x$, then
$$
T = \frac{E}{xk_\mathrm{B}} \ .
$$
I don't understand why exponential function yields such a counterintuitive result, considering it grows faster than power functions.
| The basic intuition here is that temperature is not about number of microstates as such. Rather, it is about how the number of microstates varies with the energy---the standard definition of temperature in terms of entropy and energy.
A hot system does not need to have a lot of microstates. The white-hot pieces of metal leaping off a firework 'sparkler' have less entropy than a pool of water freezing on a cold night.
The dependence $S \propto E$ is unusual because it has zero second derivative. Such an entropy function is neither strictly concave nor strictly convex, so it sits exactly on the boundary of the thermodynamic stability criterion; in other words, a system with this entropy function cannot be in a robustly stable equilibrium. In practice the dependence $S \propto E$ will not extend to arbitrarily high energies; it will curve over, and then the whole function is somewhat like the one describing a phase transition.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/712706",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Energy-momentum tensor for the k-essence theory could anyone please explain or show some simple steps how using matter action:
$S = \int d^4x \sqrt{-g} L(X, \phi)$, where $X = \frac{1}{2} g^{\mu \nu} \nabla_\mu \phi \nabla_\nu \phi$
We can derive energy-momentum tensor:
$T_{\mu \nu} = \frac{2}{\sqrt{-g}} \frac{\delta S}{\delta g^{\mu \nu}}$
I understand that we should take the variation $\delta S$ of the action with respect to $g^{\mu \nu}$, but I don't understand how to fit $X$ into $L(X, \phi)$.
Can I just substitute $X = \frac{1}{2} g^{\mu \nu} \nabla_\mu \phi \nabla_\nu \phi$ into the action and take the variation, or is there anything more to the process?
Am I wrong about my approach of just substituting $X$ into the action?
| When making the variations you need to use the following relation
$$\delta L (X, \Phi) = \delta X \frac{\partial L(X,\Phi)}{\partial X} + \delta \Phi \frac{\partial L(X,\Phi)}{\partial \Phi} \ ,$$
which is just a general fact for derivatives/variations of functions.
The variations with respect to the metric will only include the first term on the RHS - the other term will contribute to the $\Phi$ equations of motion.
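Carrying the variation through, with $\delta X = \tfrac{1}{2}\nabla_\mu\phi\,\nabla_\nu\phi\,\delta g^{\mu\nu}$ and $\delta\sqrt{-g} = -\tfrac{1}{2}\sqrt{-g}\,g_{\mu\nu}\,\delta g^{\mu\nu}$, gives the standard k-essence result (quoted here under the sign conventions of the question, not derived line by line):
$$T_{\mu\nu} = \frac{\partial L}{\partial X}\,\nabla_\mu\phi\,\nabla_\nu\phi \;-\; g_{\mu\nu}\,L(X,\phi)\,.$$
As a sanity check, the canonical case $L = X - V(\phi)$ reproduces the usual scalar-field energy-momentum tensor.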
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/712865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Invariants of inner product in pseudoreal representation of $SU(2)$ I am reading Peskin's and Schroeder (P&S), "An introduction to Quantum Field Theory", specifically the first paragraph on page 499 in section 15.4 "Basic Facts about Lie Algebras". At some point, the authors claim that the invariant combination of two spinors is:
$$\epsilon^{\alpha\beta}\eta_{\alpha}\xi_{\beta}$$
and I would like to ask what is meant by the above-mentioned inner product? Does the author (secretly) imply that one of the two spinors ($\eta_{\alpha}$ or $\xi_{\beta}$) is actually a complex conjugate of another spinor? Or is the complex conjugate form of one of the two spinors given by contracting the one spinor with the Levi-Civita tensor, i.e.:
$$\epsilon^{\alpha\beta}\xi_{\beta}=\xi^{*\alpha}$$
or something like that? And if so, why?
Any help will be appreciated!
| I would like add to Qmechanic's excellent answer a bit of context from the practical point of view.
Apparently the question arises in ordinary quantum mechanics, not accounting for relativity. So then we can assume that spinors in 3D transform under the effect of rotations like representations of SU(2), which is the universal covering group of SO(3), the 3D rotation group. The "S" (special) means that the determinant of the unitary matrices of SU(2) is 1.
A bilinear object like $\epsilon_{\alpha\beta}\eta^\alpha \xi^\beta$ with $\eta^\alpha$ and $\xi^\alpha$ as components of non-relativistic spinors in 3D space transforms like
$$\eta'^1 = a\eta^1+b\eta^2\quad\text{and} \quad \eta'^2 = c\eta^1 + d\eta^2$$
if $$U = \left(\begin{array}{cc} a & b\\ c & d\end{array}\right)$$
is a transformation matrix $\in SU(2)$.
If $\xi$ transforms in the same way we find that
$$\epsilon_{\alpha\beta}\eta'^\alpha\xi'^\beta=\eta'^1\xi'^2 - \eta'^2\xi'^1 = (ad-bc)( \eta^1\xi^2 - \eta^2\xi^1) = \eta^1\xi^2 - \eta^2\xi^1 = \epsilon_{\alpha\beta}\eta^\alpha\xi^\beta$$
due to the unimodularity of the matrices of SU(2) ($det U=1$).
This means that under SU(2) $$\epsilon_{\alpha\beta}\eta^\alpha\xi^\beta$$ and in particular $$\epsilon_{\alpha\beta}\eta^\alpha\eta^\beta$$ transforms as a scalar.
On the other hand we expect the bilinear
$$\eta^1 \eta^{\ast 1} + \eta^2 \eta^{\ast 2}$$
to be invariant under unitary transformations -- it transforms like a scalar too. We now can identify both bilinear products with each other which leads to the identification of $(\eta^{\ast 1}, \eta^{\ast 2})$ with $(\eta^2, -\eta^1)$.
In other words it means that in 3D spinors and complex-conjugated spinors transform in a very similar way, technically speaking both respresentations are equivalent. This is only true in 3D-space. In Minkowski space, however, the transformation group of spinors is SL(2,C) and the representations of spinors and their complex-conjugated counterpart are no longer equivalent.
It is useful to add that, apart from contravariant spinors $\eta^\alpha$, covariant spinors $\eta_\alpha$ can also be introduced and defined:
$$\eta_1 = \eta^2 \quad \text{and}\quad \eta_2 = -\eta^1$$
We can shortcut this as:
$$\epsilon_{\alpha \beta}\eta^\beta =\eta_\alpha \equiv \eta^{\ast \alpha}$$
and then we can write:
$$\sum_{\alpha=1,2} \eta^\alpha\eta^{\ast \alpha} = \sum_{\alpha=1,2} \eta^\alpha \eta_\alpha$$
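The invariance of the $\epsilon$-contraction under SU(2) is easy to verify numerically (a sketch assuming numpy; the parametrization $U=\left(\begin{smallmatrix}a & b\\ -\bar b & \bar a\end{smallmatrix}\right)$ with $|a|^2+|b|^2=1$ is the standard one):

```python
import numpy as np

rng = np.random.default_rng(0)
# random SU(2) matrix: U = [[a, b], [-conj(b), conj(a)]], |a|^2 + |b|^2 = 1
v = rng.normal(size=4)
v /= np.linalg.norm(v)
a, b = v[0] + 1j * v[1], v[2] + 1j * v[3]
U = np.array([[a, b], [-b.conjugate(), a.conjugate()]])
assert np.isclose(np.linalg.det(U), 1.0)

eps = np.array([[0.0, 1.0], [-1.0, 0.0]])   # epsilon_{12} = +1
eta = rng.normal(size=2) + 1j * rng.normal(size=2)
xi = rng.normal(size=2) + 1j * rng.normal(size=2)
# epsilon_{ab} eta^a xi^b before and after eta -> U eta, xi -> U xi
inv_before = eta @ eps @ xi
inv_after = (U @ eta) @ eps @ (U @ xi)
assert np.isclose(inv_before, inv_after)    # since U^T eps U = det(U) eps = eps
```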
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/713020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
In order to solve for the states of a spherically symmetric parabolic potential do we need to use cartesian and cylindrical coordinates? In the general case a spherically symmetric potential, the Time Independent Schrodinger Equation is separable in spherical coordinates but not in cartesian, or cylindrical coordinate as in general $V(r)\neq{V(x)+V(y)+V(z)}$, and $V(r)\neq{V(\rho)+V(z)}$. In the special case of a spherically symmetric parabolic potential, $V(r)=ar^2=a\sqrt{x^2+y^2+z^2}^2=a\left(x^2+y^2+z^2\right)$ and so $V(r)=V(x)+V(y)+V(z)$ meaning that the Time Independent Schrodinger Equation $\nabla^2\Psi+(E-V)\Psi=0$ is separable in cartesian coordinates. Also $\rho=\sqrt{x^2+y^2}$ meaning that $\rho^2=\sqrt{x^2+y^2}^2=x^2+y^2$, meaning that $a\left(x^2+y^2+z^2\right)=a\left(\rho^2+z^2\right)$, which means that $V(r)=V(\rho)+V(z)$, so the Time Independent Schrodinger Equation is also separable in cylindrical coordinates in the case of a spherically symmetric parabolic potential.
Does this mean that for a spherically symmetric parabolic potential, in addition to using separation of variables in spherical coordinates, we also need to use separation of variables for cartesian in order to find all the bound states? If not why don't we need to use separation of variables in cartesian and cylindrical coordinates in order to find all bound states for a spherically symmetric parabolic potential?
| You don't need to, but you can.
Essentially, what you're doing is choosing between expressing your energy eigenstates as products of functions of spherical, cylindrical, or Cartesian coordinates. Any one of these options is equally good, while in other problems the calculations will get quite cumbersome in, e.g., Cartesian coordinates.
At the end of the day, you need to specify each point in space with three numbers. Which separation of variables you use is just a matter of whether you prefer $(r, \theta, \phi)$, $(\rho, \phi, z)$, or $(x, y, z)$.
To be a bit more specific, if you use spherical coordinates, the solution will be written in terms of, e.g., spherical harmonics and some radial function. If you use Cartesian, no spherical harmonics will appear. Instead, you'll get products of Hermite polynomials in each direction.
For an example a bit more explicit, I'll notice that the solution in spherical coordinates is given on Wikipedia (e.g., on this article). In there it is mentioned that the energy levels are given by
$$E_n = \hbar \omega \left(n + \frac{3}{2}\right),$$
where $n$ is a non-negative integer and $\frac{1}{2} \mu \omega^2 = a$ ($\mu$ is the mass of the particle and I'm changing the choice of constants to that used in Wikipedia so the notation gets closer to the standard when dealing with quantum harmonic oscillators). Wikipedia also mentions that the degeneracy at energy level $n$ (i.e., the dimension of the eigenspace with energy $E_n$) is given by
$$\frac{(n+1)(n+2)}{2}.$$
Let us reproduce this result in Cartesian coordinates. I'll just sketch the calculations, but they are done in more detail in Sec. 2.5 of Weinberg's Lectures on Quantum Mechanics. In Cartesian, the Hamiltonian can be written as
\begin{align}
H &= - \frac{\hbar^2}{2 \mu} \nabla^2 + \frac{1}{2}\mu \omega^2 r^2, \\
&= \left(- \frac{\hbar^2}{2 \mu} \frac{\partial^2}{\partial x^2} + \frac{1}{2}\mu \omega^2 x^2\right) + \left(- \frac{\hbar^2}{2 \mu} \frac{\partial^2}{\partial y^2} + \frac{1}{2}\mu \omega^2 y^2\right) + \left(- \frac{\hbar^2}{2 \mu} \frac{\partial^2}{\partial z^2} + \frac{1}{2}\mu \omega^2 z^2\right),
\end{align}
which is just three times the Hamiltonian to a one-dimensional QHO. Hence, the system can be treated as three independent QHOs, and hence the allowed energy levels are
\begin{align}
E_{qrs} &= \hbar \omega \left(q + \frac{1}{2}\right) + \hbar \omega \left(r + \frac{1}{2}\right) + \hbar \omega \left(s + \frac{1}{2}\right), \\
&= \hbar \omega \left(q + r + s + \frac{3}{2}\right),
\end{align}
where the non-negative integers $q$, $r$, and $s$ are the quantum numbers for each of the independent 1D-QHOs. Notice then that the allowed energies are
$$E_n = \hbar \omega \left(n + \frac{3}{2}\right),$$
just as we found in the spherical separation of coordinates. However, is the degeneracy right? Fix $n$. How many combinations of $q$, $r$, and $s$ lead to the same $n$? Following Weinberg, we notice we have
$$q + r + s = n.$$
Since $n$ is fixed, this means that once we choose values for $q$ and $r$, $s$ will be given by $s = n - q - r$. The degeneracy then must be given by
\begin{align}
\sum_{q = 0}^{n} \sum_{r = 0}^{n - q} 1 &= \sum_{q = 0}^{n} (n - q + 1), \\
&= (n + 1)^2 - \frac{n (n+1)}{2}, \\
&= \frac{(n+1)(n+2)}{2},
\end{align}
which is exactly the same result.
To show the different separations of variables only lead to different choices of basis will likely take a bit more calculation, but I hope this helps to illustrate the result.
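The counting argument can be confirmed by brute force (a short sketch):

```python
def degeneracy(n):
    # number of triples (q, r, s) of non-negative integers with q + r + s = n
    return sum(1 for q in range(n + 1) for r in range(n + 1 - q))

assert [degeneracy(n) for n in range(4)] == [1, 3, 6, 10]
assert all(degeneracy(n) == (n + 1) * (n + 2) // 2 for n in range(50))
```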
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/713265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How does current flow in a purely inductive circuit if the net voltage is zero? Considering the equation,
$$E=−L\frac{di}{dt}$$
The negative sign in the above equation indicates that the induced emf opposes the battery's emf.
If we're talking about a purely inductive circuit, the induced emf is equal and opposite to applied emf. Isn't it just like two identical batteries in opposition?
If that's the case, how does the current flow?
|
If we're talking about a purely inductive circuit, the induced emf is equal and opposite to applied emf. Isn't it just like two identical batteries in opposition?
If that's the case, how does the current flow?
It isn't, because two real batteries of equal emf acting against each other produce opposing emfs whether the current is changing or not, and the fact that they are real means they have non-zero internal resistance $R$. Then the second circuital law from Mr. Kirchhoff states
$$
\mathscr{E} + (-\mathscr{E}) = 2RI
$$
which implies zero current, $I=0$.
When a real inductor is connected to a battery, current will flow through the circuit, because there is no static equilibrium like the one above; the current has to change in time in order for the induced EMF to exist, so the change of current that starts the moment the connection is made is maintained as the current increases. Because the real inductor also has some internal resistance $R_c$ (let's ignore the real inductor's capacitance for now), the second circuital law from Mr. Kirchhoff states
$$
\mathscr{E} - L\frac{dI}{dt} = (R + R_c)I
$$
which does not imply $I=0$. In this case, induced EMF does not completely cancel battery's emf, because there needs to be some emf remaining to push the increasing current against the resistance forces in the circuit.
In the case of a perfect inductor and perfect battery, both resistances would be zero and we would get
$$
\mathscr{E} - L\frac{dI}{dt} = 0.
$$
which still would not imply $I=0$; instead, current would keep increasing indefinitely.
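In the real-inductor case, the equation $\mathscr{E} - L\,dI/dt = (R+R_c)I$ integrates to the familiar exponential rise $I(t) = \frac{\mathscr{E}}{R+R_c}\left(1-e^{-(R+R_c)t/L}\right)$. A quick numerical sketch with illustrative values (lumping $R+R_c$ into a single $R_{\rm tot}$) checks this against direct Euler integration:

```python
import math

emf, R_tot, L = 10.0, 5.0, 2.0     # illustrative values: volts, ohms, henries
I_exact = lambda t: emf / R_tot * (1.0 - math.exp(-R_tot * t / L))

# forward-Euler integration of L dI/dt = emf - R_tot * I
I, t, dt = 0.0, 0.0, 1e-5
while t < 1.0:
    I += dt * (emf - R_tot * I) / L
    t += dt

assert abs(I - I_exact(t)) < 1e-3
assert abs(I_exact(100.0) - emf / R_tot) < 1e-9   # steady state: 2 A
```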
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/713443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
Why is quantizing the free electromagnetic field in the Lorenz gauge more subtle than in the Coulomb gauge? Quantizing the free electromagnetic field in the Lorenz gauge, $\partial_\mu A^\mu=0$, is subtle. We must add a gauge-fixing term to the action so that $\pi^0$ does not vanish identically. Also, we cannot impose $\partial_\mu A^\mu=0$ directly as an operator equation because again the commutator relations cannot be satisfied even with $\pi^0\neq 0$. So we have to implement it via Gupta-Bleuler's suggestion.
But in the Coulomb gauge $A^0=\vec{\nabla}\cdot{\vec A}=0$, the quantization proceeds in a rather straightforward manner.
What is the root cause of this problem? Does the Lorenz gauge condition fail to remove all the gauge redundancies? Is a simple way to understand this?
| The thing about the Coulomb gauge is that, in vacuum, you get both $A_0 = 0$ and $\nabla \cdot \vec A = 0$, so it eliminates the $(A_0,\pi^0)$ d.o.f. right out of the gate - we just have $A_0 = 0,\pi^0 = 0$ consistently and only need to care about the spatial parts, and due to $\nabla\cdot \vec A= 0$, the whole thing is just a bunch of oscillators in Fourier space, where the condition $\vec p \cdot \vec A(\vec p) = 0$ then eliminates one of the three remaining degrees of freedom. Note that because the Coulomb gauge has broken Lorentz covariance anyway, we don't care that this elimination of d.o.f. is not stable under Lorentz transformations.
The Lorenz gauge, in contrast, is a Lorentz invariant gauge condition and hence we're not allowed to just "eliminate $A_0$" or something because what's $A_0$ in one frame is a linear combination of all the $A_\mu$ in another - the point of picking the Lorenz gauge is precisely to preserve Lorentz covariance. So in a way the "difficulty" isn't as much down to specifics of the gauge condition as it is to us trying to do things covariantly vs. non-covariantly, but either way the core difference is that we're not allowed to drop the 0-th d.o.f., which really make the whole thing much simpler in the Coulomb gauge.
Additionally, the Lorenz gauge indeed does not completely remove gauge freedom since it remains invariant under harmonic gauge functions, see this question and its answers.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/713567",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Does electron-proton interaction and electron-electron interaction in an atom gives rise to a microscopic potential energy? When studying thermodynamics we come across a property of a system called internal energy, which is the sum of all energies possessed by the system at the microscopic level. Internal energy has two components. A kinetic energy component and a potential energy component.
I read that, potential energy at the microscopic level arises because of the interaction
*
*Between the molecules of a system
*Between the atoms of a molecule
*Between the nucleons - protons and neutrons
I wonder if some potential energy arises because of the interaction
*
*between electrons in an atom
*between electrons and protons in an atom
Is some potential energy associated with electron-electron and proton-electron interaction?
| It depends on your point of view. We say that a spring stores potential energy. In microscopic models, part of that is electrostatic, but part is due to "Pauli force" between electrons. However, in quantum field theory, Pauli force isn't a force and doesn't have a potential. Instead, it's associated with increased electron kinetic energy.
I'm a bit of a positivist here: things that register on force gauges should be understood as forces, so the energy stored in the spring should be understood as potential energy. I say yes to your question.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/713725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Understanding the definition of a path integral In the Book "Quantum Mechanics and Path Integrals" by Feynman & Hibbs the path integral is approximated (page 32 and following) by
$$
K(b,a)\approx\int...\int\int\phi[x(t)]dx_1dx_2...dx_{N-1}\tag{2.20}
$$
with $b=(x_b,t_b)$ and $a=(x_a,t_a)$ being the start and endpoints of the path and $$\phi[x(t)]=const\cdot e^{(i/\hbar)S[x(t)]}=const\cdot e^{(i/\hbar)\int_{t_a}^{t_b} L[x(t),v(t),t]dt}.\tag{2.15}$$
Now I don't quite get this approximation.
*
*First of all I assume that the $dx_1dx_2...dx_{N-1}$ integrals have to be executed first and only after that the $dt$ integral in $\phi[x(t)]$ (or rather in $S[x(t)]$) should be executed. Is that right?
*And the second thing is that I don't get the meaning behind the $dx_1dx_2...dx_{N-1}$ integrals themselves (each is integrated from $-\infty$ to $\infty$ according to Wikipedia). So in the book the path was divided into straight lines between $x_k$ and $x_{k+1}$ with equal length, with $x_0=x_a$ and $x_N=x_b$. That's why I would have thought the integration would not go from $-\infty$ to $\infty$ but rather from $x_k$ to $x_{k+1}$. So the integral would then look something like this
$$K(b,a)\approx\int_{x_{N-1}}^{x_N}...\int_{x_1}^{x_2}\int_{x_0}^{x_1}\phi[x(t)]dx_0dx_1...dx_{N-1} $$
Could someone explain to me in an easy way why that is not the case?
| For question 2, the bounds should be from $-\infty$ to $\infty$. That is because at each time, we integrate over every possible position through the use of the identity
$1 = \int _{-\infty}^{\infty} |x \rangle \langle x | dx$
In terms of interpretation, at each point in time $t$, the position on the path can be any real number. So consecutive positions are not bounded by their neighbors.
For question 1, the way I think of it is to regard the $dx_k$ integrals as Riemann sums. For each term in the result, you have a piecewise path $\bar{x}(t)$ with which the $dt$ integral can be evaluated. The limit as those Riemann sums tend to integrals is the result. In that sense I think of doing the $dt$ integral first.
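One way to make the slicing concrete is the Euclidean (imaginary-time) analogue, where each $dx_k$ integral is a convergent Gaussian convolution. In the sketch below (assuming numpy; units $\hbar=m=1$, free particle), composing $N$ short-time kernels, with each slice integral done as a Riemann sum on a grid, reproduces the exact kernel for the total time:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 801)
dx = x[1] - x[0]
eps, N = 0.1, 10   # N imaginary-time slices of width eps, total T = N * eps = 1

# short-time free-particle kernel K_eps(x, x') on the grid
K = np.exp(-(x[:, None] - x[None, :])**2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)

rho = K.copy()
for _ in range(N - 1):        # each matrix product is one dx_k integral (Riemann sum)
    rho = rho @ K * dx

T = N * eps
exact = np.exp(-(x[:, None] - x[None, :])**2 / (2 * T)) / np.sqrt(2 * np.pi * T)
i = len(x) // 2               # compare near the center, away from grid truncation
assert abs(rho[i, i] - exact[i, i]) < 1e-3
assert abs(rho[i, i + 100] - exact[i, i + 100]) < 1e-3
```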
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/713973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Unitary evolution and von Neumann entropy In chapter 5 of the book "Statistical Mechanics" by Pathria it says
Since the density matrix evolves in a unitary manner, the von Neumann entropy is time-independent
Where the von Neumann entropy is defined as the trace
$$S[\rho(t)]=-\mathrm{Tr}\left(\rho(t)\ln \rho(t)\right)$$
and the evolution of the density matrix is
$$\rho(t)=\exp(-iHt/\hbar)\rho(0)\exp(iHt/\hbar)$$
and $H$ is the Hamltonian operator of the system we are studying.
I couldn't prove this result, can anyone help?
| Hint: Use the spectral decomposition to write
$$\rho(0) := \sum\limits_k \lambda_k \,|k\rangle \langle k| \tag{1} ,$$
and then find an expression for $\rho(t)$ in terms of $\lambda_k$. Especially note that $\rho(t)$ has the same eigenvalues as $\rho(0)$.
Finally, again using the spectral theorem, derive that
$$ S[\rho(t)] = -\mathrm{Tr} \sum\limits_k \lambda_k \ln \lambda_k \, U(t)|k\rangle\langle k| U^\dagger(t) \tag{2} \quad .$$
The cyclic properties of the trace then yield the desired result, i.e. $S[\rho(t)]=S[\rho(0)]$.
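Filling in the hint numerically (a sketch assuming numpy; the $4\times 4$ example, the random density matrix, and the eigendecomposition route to $U=e^{-iHt}$ are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def vn_entropy(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]                 # 0 ln 0 = 0 convention
    return float(-(lam * np.log(lam)).sum())

# random density matrix: positive semi-definite with unit trace
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho0 = A @ A.conj().T
rho0 /= np.trace(rho0).real

# random Hermitian H; U = exp(-iHt) built from its eigendecomposition (hbar = 1)
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (B + B.conj().T) / 2
w, V = np.linalg.eigh(H)
t = 0.7
U = (V * np.exp(-1j * w * t)) @ V.conj().T

rho_t = U @ rho0 @ U.conj().T
assert abs(vn_entropy(rho_t) - vn_entropy(rho0)) < 1e-10
```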
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/714098",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
What actually are microscopic and macroscopic viewpoints in thermodynamics? The microscopic viewpoint of studying a system in thermodynamics is the one in which we consider the system on a molecular/atomic/sub-atomic level. (is that even right?)
The macroscopic viewpoint is the one in which we ignore the molecular nature of the system and treat it as an aggregation of differential volumes that have a limiting volume, so that the system acts as a continuum.
If the above statements are true, then why is temperature considered a macroscopic concept?
Temperature is the measure of the average KE of the molecules of a system. Clearly, we're talking about molecules when we talk about temperature, so why is it a macroscopic concept?
|
Temperature is the measure of the average KE of the molecules of a system. Clearly, we're talking about molecules when we talk about temperature, so why is it a macroscopic concept?
One should distinguish Thermodynamics and Statistical Physics.
*
*Thermodynamics is phenomenological macroscopic theory, describing complex systems in terms of parameters like temperature, volume, pressure, etc.
*Statistical physics is a microscopic theory that explains the same phenomena in terms of basic (quantum or classical) mechanics laws.
Thus, quantities like temperature, internal energy, entropy, etc. have different definitions in thermodynamics and statistical physics, which can be shown to be equivalent in terms of observable behavior. See here regarding different definitions of entropy.
Remark Although some textbooks (e.g., Huang, if I am not mistaken) choose to explain in parallel thermodynamics and statistical physics, many authors apparently find such an approach cumbersome and freely switch between the two, which produces confusion about the exact definitions of quantities (notably L&L, which is a very popular and authoritative text.)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/714220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 1
} |
Can a material be electrically polarized with electromagnetic radiation? Is charge separation possible by bombardnment of electromagnetic radiation?
As conventional dielectric materials can be polarized with a electric field, I am wondering if electromagnetic radiation, which is composed itself of electric fields; could produce this effect.
|
Can a material be electrically polarised with electromagnetic radiation?
Yes, this always happens when radio waves hit an electrical conductor.
Radio waves are a special case of EM radiation. By synchronously accelerating surface electrons back and forth on an antenna rod, they emit polarised photons.
If a receiving rod is now aligned in the same direction as the transmitting rod, then the jointly oscillating electric field of the photons that reach the rod moves surface electrons on the receiving rod.
Since there is also a jointly oscillating magnetic field of the photons, surface electrons can also be moved with a ring antenna (magnetic antenna).
Of course, such electric currents are generated in every piece of metal on which radio waves impinge. The antenna rod is just a part of the radio receiver with which the waves of a certain frequency are filtered out and electronically amplified.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/714378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How can magnetic field lines form non closed loops? I recently came across this paper
"Topology of Steady Current Magnetic Fields", Am.J.Phys (1953)
The author points out the erroneous implications of representing magnetic field lines as closed loops.
He has used the below example to illustrate the problem.
Let us examine the field generated by a fixed current $I_1$, flowing in a ring solenoid and an adjustable current $I_2$ flowing along its axis. A line of force, produced by the ring solenoid alone ($I_2=0$), which originates at a point $P$ will link the circuit of $I_1$, and return to $P$, the line always remaining in the plane through the axis and point $P$. Now if we insert the circuit $I_2$, we see, from the right-hand rule, that the field of $I_2$ produces the following effects, depending on the magnitude of $I_2$: The lines of force originating at $P$ will link both $I_1$ and $I_2$, and (a) return to $P$ after an integral number of linkages $n$ of $I_1$, and $m$ of $I_2$, or (b) will never return to $P$ (incommensurable case). Thus, if we start an assemblage of lines from a two-dimensional region $R$ and follow it continuously around the wire, we find that the tube of this assemblage does not return to the individual points where it originated.
From the above text can someone explain why the field lines do not reach the same point where we started from in the incommensurable case? what does the integral number of linkages mean? and what do they(n,m) depend on?
| All the author is saying here is that if we follow the magnetic field line starting at $P$, it will do one of two things:
*
*Eventually return to $P$ after making $n$ circuits of the ring and $m$ circuits of the wire. This is what is meant by "an integral number of linkages": the number of circuits (linkages) made is an integer (it's integral.)
*Never return to $P$. This means that the "period" of the loops around the ring and the corresponding "period" of the loops around the wire are incommensurable, i.e., their ratio is not a rational number. So by definition, the loops are not closed in the incommensurable case.
The values of $n$ and $m$ would depend in a complicated, non-continuous way on the magnitudes of the currents.
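A toy numerical sketch of the commensurable vs. incommensurable cases (my own illustration, not from the paper): model each circuit of $I_1$ as advancing the field line's angle around $I_2$ by a fixed fraction of a turn, and check whether the line ever lands back on its starting point.

```python
import math

def circuits_to_close(ratio, max_loops=1000, tol=1e-9):
    """Each circuit of I1 advances the line's angle around I2 by
    `ratio` turns. Return the number of circuits after which the
    line returns to its starting point, or None if it never does
    within max_loops (the incommensurable case)."""
    theta = 0.0
    for n in range(1, max_loops + 1):
        theta = (theta + ratio) % 1.0
        if theta < tol or 1.0 - theta < tol:
            return n
    return None

print(circuits_to_close(3 / 7))             # rational ratio: closes after 7 circuits
print(circuits_to_close(1 / math.sqrt(2)))  # irrational ratio: None, never closes
```

When the ratio of the two "periods" is rational ($n$ and $m$ are the smallest integers with $n\,\text{ratio}=m$), the line closes; when it is irrational, it winds forever without closing.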
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/714682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Why we use areal density to measure thickness in different practicals of physics? In many experiments like finding the gamma and beta absorption coefficients we use the thickness of aluminium foils in $gm/cm^{-2}$. Why we do that?, Shouldn't it be possible to only denote thickness in mm?
| First of all, it's not $\rm g/cm^{-2}$. It's $\rm g/cm^{2}$ or $\rm g\cdot cm^{-2}$.
The answer to your question is "Yes: you can specify thickness in millimeters". Call that thickness $d_{\rm rad}$, the radiation length in "length" units.
The thickness in $\rm g/cm^{2}$ is:
$$ L_{\rm rad} = \rho d_{\rm rad} $$
where $\rho$ is the density of the material
For electromagnetic interactions the approximate formula is used:
$$ L_{\rm rad} = 1433{\rm g\, cm^{-2}}\frac A{Z(Z+1)(11.319-\ln{Z})}
$$
Note that it depends only on $Z$ and $A$; the actual density of the material is irrelevant. Areal density is the most fundamental form of the radiation length. When experimentalists are designing experiments and instruments, they must then divide by the density to get a physical length.
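As a rough numerical check (using assumed values $Z=13$, $A=27$, $\rho \approx 2.70\ \rm g/cm^3$ for aluminium, which are not given above), the approximate formula can be evaluated and then divided by the density to get the physical length:

```python
import math

def radiation_length_g_cm2(Z, A):
    # approximate formula from the text, in g/cm^2
    return 1433.0 * A / (Z * (Z + 1) * (11.319 - math.log(Z)))

Z, A, rho = 13, 27.0, 2.70              # aluminium (assumed values)
L_rad = radiation_length_g_cm2(Z, A)    # areal density form, ~24.3 g/cm^2
d_rad = L_rad / rho                     # physical length, ~9.0 cm
print(L_rad, d_rad)
```

The result, about $24\ \rm g/cm^2$ or roughly $9\ \rm cm$, is close to tabulated radiation lengths for aluminium.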
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/714858",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Deflecting a belt under tension I have a belt setup around two pulleys and I want to measure the tension on the belt by depressing it at the middle with a force gauge.
My colleague approaches this problem by taking the perpendicular components of the tension, doing
I don’t think this is the right way to approach this as that equation is basically describing a situation where we are pulling that point of contact with a force T but that is not what we are doing. Also, the tension would increase as d does so that’s another thing to incorporate.
I think the problem should be treated as a spring problem but I can’t figure out an equation that describes deflecting a string under some tension T.
| I'm voting with your colleague.
Everything comes down to the force it takes to deflect the belt at the point of contact, and since this is a static situation, action equals reaction. The reaction is the belt pushing back against the deflection, and that's going to be $2T\sin\theta$. And this is all about the belt at the point of the reaction, so it doesn't matter whether the belt is being pushed or pulled or under a force field, or whether the tension is due to a spring, or pulley, or gravity, etc. The action is the force, $F$, and since action equals reaction, $F = 2T\sin\theta$.
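A small sketch of how one might back out the tension from a mid-span push test using $F = 2T\sin\theta$; the geometry and numbers here are hypothetical, purely to illustrate the formula:

```python
import math

def belt_tension(force, deflection, half_span):
    """Infer belt tension from a mid-span push test: F = 2*T*sin(theta),
    with sin(theta) = d / sqrt(d^2 + (L/2)^2) for a deflection d at the
    midpoint of a free span of length L. Any consistent units work."""
    sin_theta = deflection / math.hypot(deflection, half_span)
    return force / (2.0 * sin_theta)

# hypothetical numbers: 10 N gauge reading, 5 mm deflection, 200 mm span
print(belt_tension(10.0, 5.0, 100.0))  # ~100 N
```

Note that for a fixed gauge force the inferred tension depends strongly on how far you deflect the belt, which is why the deflection must be measured along with the force.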
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/715092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Will the potential energy is same in both the cases? Suppose there is a charge $Q$. Now bring in another charge $Q'$ from infinity to a position a distance $r$ from charge $Q$. Then the change in potential energy is equal to $kQQ'/r$.
My question is: will the potential energy will be same if the same charge $Q'$ is brought from infinity to a distance $r$ from $Q$, but in small portions $dQ'$. I mean that the first $dQ'$ is brought to a distance $r$ from $Q$, and then additional incremental charges $dQ'$ are also brought to the separation $r$, and so on.
Will the potential energy will be same in both cases?
| I would think no, it wouldn't be the same.
Having two charges already in position would alter the magnitude of the potential field for all the incoming charges, so it would require more or less work to move subsequent charges to the desired location.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/715225",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Difference between stable manifold and basin of attraction? In 'Nonlinear Dynamics and Chaos' by S. Strogatz, a distinction is made between a stable manifold and basin of attraction of a fixed point in phase space:
Here, the stable manifold of a saddle point is a line, and the basin of attraction of a stable node is a plane. However, the definitions of the two terms are the same, namely:
The set of initial conditions $\bf x_0$ such that $\bf{x} \rightarrow \bf{x^{*}}$ as $t\rightarrow\infty$ for a fixed point $\bf{x^{*}}$.
Why is a distinction being made between the two terms?
| Maybe the most straightforward distinction is:
*
*If the set in question is a true manifold, it’s a stable manifold. By true manifold, I mean that it has a dimension that is not equal to that of its embedding space.
*Otherwise it’s the basin of attraction of an attractor. This basin may also be or contain stable manifold(s), depending on the situation and exact definition.
The existence of a true (stable) manifold excludes the existence of an attractor since arbitrarily close to the attractor, you can find points on the manifold, next to which have to be points that do not converge to the attractor (otherwise the manifold would not be a true one).
While the above may present the two terms as a dichotomy, in the broader scope of dynamical systems, they are not. On the one hand, attractors also comprise things like limit cycles and chaotic attractors. On the other hand, stable manifolds also exist in Hamiltonian systems, which cannot have attractors at all.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/715629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How can we experimentally confirm that atoms/molecules in a solid actually "move"? The atoms in a solid are so attracted to each other that they "vibrate" and don't move past each other.
How do scientists "measure" that atomic vibration in a solid (let's say at room temperature)?
As a raw, uneducated person it is easy for me to conclude that the solid is completely at rest and no part of it is "moving". So, what is the experimental evidence which shows that my conclusion is totally wrong and that the tiny invisible atoms are actually "jiggling"?
In the case of the Brownian motion, it is somehow easier (more intuitive and common sense) to assume that the invisible atoms are "moving" and thus "hitting" the colloidal particles. However, regarding a solid... I can't even imagine how I can detect that atomic "vibrations" because I can't see them or feel them.
| Another way is to look at the quantum efficiency of photoelectric sensors using indirect band gap materials like silicon. For such materials, a long wavelength photon needs the assistance of a phonon (lattice vibration) to produce an electron-hole pair. A consequence is that the sensor's quantum efficiency varies with temperature.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/715752",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 4,
"answer_id": 1
} |
Why are fields described as force divided by mass or charge? I have read that application of force on a body from a distance, like gravitational or electrostatic force is a two-step process, first, the field is created by the body, then, the application of force on the second body by the field. I want to know why the expression for gravitational field is given as F/m or why the expression for electric field is given as F/q?
| Answering my question for anyone who benefits from this.
This is what I understood from everything I read about it. The electric field is simply like a constant of force applied by a particle at a particular point in space: a ratio made independent of the test charge by dividing the force formula by the mass/charge of the test particle, since the magnitude of the force contains the charge as a multiplicative factor and is directly proportional to it. It is nevertheless an entirely new physical entity that exists whether or not the test particle is present.
I am reading a paper which further goes into the history of electric fields and other fields.
It's "Introducing electric fields" by John Roche, 2016.
A lecture by Prof. H. C. Verma in Hindi also helped me understand it.(Available on YouTube in a playlist called Classical Electromagnetism)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/715867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
I used integration of $a=dv/dt$ to solve this. Why, in this solution, is the acceleration treated as constant, even though it depends on the distance between the two charges?
Question
Two particles have equal masses of $5.0 \ g$ each and opposite charges of $+4.0 \times 10^{-5} C$ and $-4.0 \times 10^{-5} C$. They are released from rest with a separation of $1.0 \ m$ between them. Find the speeds of the particles when the separation is reduced to $50 \ cm$.
This involves Coulomb's law, Newton's 2nd law of motion and kinematics of relative acceleration.
Solution of above question
$$q_1 = q_2 = 4 \times 10^{-5}C \ \ \ and \ \ \ s=1m, \ \ m=5g=0.005 kg$$
$$F=K \frac{q^2}{r^2} = \frac{9 \times 10^9 \times (4 \times 10^{-5})^2}{1^2} = 14.4 \ N$$
$$Acceleration \ \ a = \frac{F}{m} = \frac{14.4}{0.005}=2880 \ m/s^2$$
$$Now \ \ u = 0 \ \ \ \ s = 50 \ cm = 0.5 \ m, \ \ \ a = 2880 \ m/s^2, \ \ \ v = \ ?$$
$$v^2 = u^2 + 2as \ \ \ \rightarrow \ \ v^2 = 0 + 2 \times 2880 \times 0.5$$
$$v = \sqrt{2880} = 53.66 \ m/s \approx 54 \ m/s \ \ \ for \ each \ particle.$$
| The total change in field energy equals the negative of the total amount of work done on all charges.
For 2 point charges, the total change in field energy is just the change in potential energy between them
$$(U_{2}-U_{1}) = -(\Delta K_{q_{1}} + \Delta K_{q_{2}})$$
$$U_{2} - U_{1} = \frac{1}{4\pi\epsilon_0}\frac{ q_{1} q_{2}}{0.5} -\frac{1}{4\pi\epsilon_0}\frac{ q_{1} q_{2}}{1}
$$
Now, because we know the masses are equal and the situation is symmetric, the change in kinetic energy of both charges is the same, since they are both released from rest at the same time.
$$ \Delta K_{q_{1}} = \Delta K_{q_{2}}$$
Which allows us to write.
$$(U_{2}-U_{1}) = -(2\Delta K_{q})$$
Meaning:
$$-\frac{U_{2}-U_{1}}{2} = \Delta K_{q}$$
$$-\frac{U_{2}-U_{1}}{2} = \frac{1}{2}mv^2$$
$$\sqrt{-\frac{U_{2}-U_{1}}{m}} = v$$
Notice that if we set the change in kinetic energy of one of them to zero, this is the same as fixing one in place, which can be intuitively thought of as the change in potential energy equalling the negative of the total amount of work done on a particle, as is commonly taught. This relies on electrostatics where the other charge is fixed.
More handwavy, you can say that the potential energy is mutually shared so the change in potential of any one charge is halved
I haven't plugged in numbers, but if the comment above is correct, it yields the same answer as the quoted solution; I have no idea why, it shouldn't. It is probably the same as taking the average acceleration halved.
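For what it's worth, plugging the question's numbers into the energy balance above gives (a quick sketch, with the constants as quoted in the question):

```python
import math

k = 9.0e9        # Coulomb constant, N m^2 / C^2
q = 4.0e-5       # magnitude of each charge, C
m = 5.0e-3       # mass of each particle, kg

U = lambda r: -k * q * q / r   # opposite charges: attractive, U < 0
dU = U(0.5) - U(1.0)           # change in potential energy, -14.4 J
v = math.sqrt(-dU / m)         # speed of each particle
print(v)  # ~53.67 m/s
```

This is numerically the same $\approx 54\ \rm m/s$ as the constant-acceleration "solution" quoted in the question, which seems to be the coincidence referred to above.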
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/716100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Why do we hear frequencies in the basis of sine waves? When we talk about hearing frequencies and overtones, we almost always mean in the basis of sine waves, according to a standard Fourier decomposition. Couldn't we also decompose a signal into a different basis of orthogonal periodic functions, like square waves? According to a Fourier decomposition, a square wave has many frequencies. Indeed, when I hear a square wave, I can hear overtones. But in the basis of square waves, it has one frequency. Why is the Fourier decomposition more fundamental to human hearing?
| We can and do in some cases. Take a look at the Zernike Polynomials for decomposing 2-dimensional frequency distributions, for example.
The short answer is that sine waves are nice and clean, behave well when applying Fourier or other transforms, so why go make things difficult?
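As a quick numerical illustration of the point about square waves (my own sketch, not from the answer): the sine-basis coefficients of a unit square wave can be computed directly, and only the odd harmonics survive, with amplitudes $4/(n\pi)$.

```python
import math

def fourier_sine_amp(n, samples=10000):
    """Numerically estimate the amplitude b_n of sin(2*pi*n*t) in a
    unit-amplitude square wave over one period, via the midpoint rule."""
    s = 0.0
    for k in range(samples):
        t = (k + 0.5) / samples
        x = 1.0 if t < 0.5 else -1.0
        s += x * math.sin(2 * math.pi * n * t)
    return 2.0 * s / samples

print(fourier_sine_amp(1), 4 / math.pi)        # ~1.2732 both
print(fourier_sine_amp(2))                     # ~0
print(fourier_sine_amp(3), 4 / (3 * math.pi))  # ~0.4244 both
```

In the sine basis a square wave needs infinitely many odd harmonics, while in a square-wave basis it is a single term; the question is which basis the ear's physiology actually implements.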
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/716271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 6,
"answer_id": 1
} |
The value of $g$ in free fall motion on earth When we release a heavy body from a height to earth. We get the value of $g=9.8 \ ms^{-2}$. Now, I'm confused about what it means. For example, does it mean that the body's speed increases to $9.8$ every second? Or, does it mean that the speed of the body is $9.8 \ m/s$?
| $$F=-G\frac{Mm}{r^2}$$
$$\frac{F}{m}=-G\frac{M}{r^2}$$
$$a=-G\frac{M}{r^2}$$
Force divided by mass is, by definition, acceleration. This is denoted $g$: the acceleration due to gravity.
For the surface of the earth this value is around $-9.81\ \mathrm{m/s^2}$.
As such the standard definition of acceleration applies. Acceleration is the rate at which velocity changes, a constant acceleration of x, means that every second, the velocity of the body increases by x, or in a formula: $v=at$
Look at the units: $$\frac{m}{s}\cdot\frac{1}{s}$$
Velocity (metres per second) gained, per second.
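As a quick sketch with standard values for Earth's mass and radius (assumed, not given above), $g$ comes straight out of the formula, and $v = gt$ then gives the speed after each second of free fall:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2 (standard value, assumed)
M = 5.972e24    # mass of the Earth, kg (assumed)
R = 6.371e6     # mean radius of the Earth, m (assumed)

g = G * M / R**2        # surface acceleration, ~9.82 m/s^2
for t in (1, 2, 3):
    print(t, g * t)     # speed after t seconds of free fall, v = g*t
```

So after one second a dropped body moves at about $9.8\ \rm m/s$, after two seconds about $19.6\ \rm m/s$, and so on: the speed increases by $9.8\ \rm m/s$ every second, which is what $g=9.8\ \rm m\,s^{-2}$ means.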
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/716376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |
XKCD Focusing moonlight - Only the component of light perpendicular to a surface heats the surface? Is this XKCD https://what-if.xkcd.com/145/ saying that a surface is only heated by the component of the rays that are perpendicular to the surface?
Conservation of étendue:
| The claim is
... you can't smoosh light beams together without also making them less parallel, which means you can't aim them at a faraway spot.
I don't see where it says "a surface is only heated by the component of the rays that are perpendicular to the surface".
The diagram you show is a counter to the previous one, where the light all comes out in one direction
Conservation of étendue says you can't do this.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/716499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Confusion about the Spinning Top Consider the following symmetric spinning top with a fixed point on the horizontal surface.
I have two questions concerning its motion:
*
*let $\underline{\Omega}$ denote the precession angular velocity, that is, the angular velocity of the reference frame of the spinning top with respect to the inertial reference frame of the laboratory, then, applying the second cardinal equation and Poisson theorem yields:
$$ \underline{\Omega} \times \underline{P_c} = \underline{r_c} \times M\underline{g} $$
where $\underline{P_c}$ represents the angular momentum with respect to the center of mass and $\underline{r_c}$ represents the position of the center of mass with respect to the fixed point; my question is how do we find out that $\underline{\Omega}$ must be directed vertically as in the above figure, since actually infinitely many vectors satisfy the previous relation?
*assuming we have demonstrated that the spinning top follows a precession around the vertical axis, then, in that case, the center of mass is also rotating around such axis, therefore it should have a centripetal acceleration but what is the force responsible for it? (I thought about friction on the fixed point but the fact is that in any reference I have found friction is never talked about)
As always, any comment on answer is highly appreciated! Also, let me know if I can explain myself in a clearer way!
I think your equation $\Omega\wedge P_c=r_c\wedge Mg$ is not correct. The l.h.s. should be the derivative of the total angular momentum w.r.t. the point of contact with the floor, not w.r.t. the center of mass.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/716633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Did Huygens understand light to be a transverse wave or a longitudinal wave? We have this source that claims Huygens "assumed light to be longitudinal", which contradicts this source which claims "Huygens believed that light was made up of waves vibrating up and down perpendicular to the direction of the wave propagation".
I must admit, neither source is entirely accurate or reliable, so what exactly was Huygens formulation regarding the nature of light?
| Possibly interesting quote
from the "Note by the translator" section (page ix) of
"Treatise On Light" by Huygens, Christiaan
https://archive.org/details/treatiseonlight031310mbp/page/n10/mode/1up
(bolding mine)
The Treatise on Light of Huygens has, however, withstood the test of time: and even now the exquisite skill with which he applied his conception of the propagation of waves of light to unravel the intricacies of the phenomena of the double refraction of crystals, and of the refraction of the atmosphere, will excite the admiration of the student of Optics. It is true that his wave theory was far from the complete doctrine as subsequently developed by Thomas Young and Augustin Fresnel, and belonged rather to geometrical than to physical Optics. **If Huygens had no conception of transverse vibrations**, of the principle of interference, or of the existence of the ordered sequence of waves in trains, he nevertheless attained to a remarkably clear understanding of the principles of wave-propagation; and his exposition of the subject marks an epoch in the treatment of Optical problems.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/716905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why does the the dielectric constant of a ferroelectric increases with temperature, below $T_C$?
The above figure is taken from C. Kittel.
When a ferroelectric substance (say, BaTi${\rm O}_3$) at room temperature is gradually heated, the dielectric constant $\varepsilon_r$ first increases and then attains a peak at a temperature called the Curie temperature $T_C$, and above $T_C$, further increase in the temperature causes a rapid decrease in the dielectric constant $\varepsilon_r$. The decrease in $\varepsilon_r$ above $T_C$ can be understood from the ferroelectric to paraelectric transition in which there is a structural phase transition from the tetragonal unit cell structure (carrying a nonzero dipole moment) to the cubic unit cell structure (carrying no nonzero dipole moment).
But what is the reason for the initial growth in the dielectric constant when the temperature is raised from room temperature to $T_C$?
This reference may help: "In a crystalline solid, there are only certain orientations permitted by the lattice. To switch between these different orientations, a molecule must overcome a certain energy barrier ΔE", which requires enough thermal energy. With decreasing temperature, "the orientational mode becomes “frozen out” and can no longer contribute to overall polarisation, leading to a drop in the dielectric constant".
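A minimal toy sketch of that "freezing out" picture, assuming a simple Arrhenius/Boltzmann hopping factor and a hypothetical barrier height $\Delta E = 0.5\ \rm eV$ (not a value from the text):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def hop_fraction(barrier_eV, T):
    """Toy Arrhenius estimate of the fraction of dipoles with enough
    thermal energy to hop over an orientation barrier of height ΔE."""
    return math.exp(-barrier_eV / (K_B * T))

barrier = 0.5  # eV, hypothetical barrier height
for T in (200, 300, 390):
    print(T, hop_fraction(barrier, T))
```

The factor grows rapidly with temperature, so more dipoles can reorient and contribute to the polarisation as $T$ rises towards $T_C$, consistent with the initial growth of $\varepsilon_r$.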
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/716970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Photon-Atom Interaction: Atomic Spectrum vs Photoelectric Effect Apologize if the question is elementary or already asked (not aware of it).
Far as I understand:
*
*Ground state electrons in atom can only absorb photons of certain (discrete set of) energies to jump to higher energy levels;
*In photoelectric effect there is a threshold $E_0$ such that photons of energy greater than $E_0$ can eject electrons from metal atoms.
Is the following understanding correct?
*
*Low energy photons interact with atoms only if they have energy equal to difference of electron energy levels (to bring an electron to excitation);
*High energy photons can always interact with atoms, e.g. eject the electron(s).
| you state:
Ground state electrons in atom can only absorb photons of certain (discrete set of) energies to jump to higher energy levels;
The correct statement is "atoms can absorb photons of certain (discrete set of) energies to jump to higher energy levels;"
The electrons together with the nucleus are one quantum entity, the atom.
Your 1. is correct: the electrons change energy levels with the correct photon energy input.
Your 2. The energy levels close to ionisation are very dense, see for the hydrogen atom. It will depend on the particulars of the interaction. In general, a part of the photon energy (equal to the difference between the electron's energy level and the ionization level) goes into releasing the electron.
On a conducting solid surface the binding of the electrons is with the whole lattice, and this is the effect seen in the photoelectric effect, depending on the atoms that make the solid.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/717092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why doesn't $dU=nC_{v}\,dT$ hold for all substances? Consider the following proof for change in internal energy of real gases, liquids and solids(assuming Non-$PV$ work $=0$):
*
*Let X denote real gases, liquids, and solids
*The First law of thermodynamics is $dU=dQ-dW=dQ-PdV$, which also holds for X
*At constant volume, $dU_{v}=dQ_{v}-0$.
*Now, $dQ_{v}=nC_{v}\,dT$ is a trivial expression and thus, will also hold for X.
*So we have $dU=nC_{v}\,dT$.
*Since U is a state function(in terms of V and T), $dU_{v}=dU$ since the path is irrelevant.
*Thus, we get $dU=nC_{v}\,dT$ for all X.
However, some sources indicate that $dU=nC_{v}\,dT$ is applicable only for ideal gases. Are they correct? If so, what is the mistake in this proof?
Addendum:
It seems the issue is in point 6, in that $dU_{v}=dU$ cannot be used. This is because the internal energy change does not depend on the path, but if you are choosing an alternative path to calculate $dU$ (like isochoric), that path needs to exist between the two states. So $dU=nC_{v}\,dT$ is true for an isochoric process for all X, but not in general for any process. But why doesn't this issue arise for ideal gases?
| There is a difference between a constant volume (isochoric) process (step 3 in your "proof") and the infinite number of possible paths between two equilibrium states where the initial and final volume is the same.
For a gas where the initial and final volume is the same, it is true that $dq=nC_{v}dT$, even if the volume is not constant for the path between the two states (i.e., it is true for all paths between the two states).
When the initial and final volumes of a gas are not the same, it is only true that $du=nC_{v}dT$ in the case of an ideal gas. See my derivation here: How can internal energy be $\Delta{U} = nC_{v}\Delta{T}$?
Hope this helps.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/717259",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Gauge-invariant vertex structure for $h\to\gamma\gamma$ via fermion loop I am struggling (a bit) with the following diagram for scalar Higgs to two photons.
[Triangle diagram for $h\to\gamma\gamma$ via a fermion loop]
If I put $q_\mu$ on-shell (or at the very least if I put both $q_\mu$ and $q'_\nu$ on-shell),
the vertex function should have the following form:
$$
i\Gamma^{\mu\nu} \sim [\eta^{\mu\nu}qq' - q'^\mu q^\nu]
$$
However, from the explicit calculation of the amplitude I have:
$$
i\Gamma^{\mu\nu} \sim \int \frac{d^4k}{(2\pi)^4} \frac{Tr\{(\not k - \not q' + m_f)\gamma^\nu
(\not k + m_f)\gamma^\mu(\not k - \not q + m_f)\}}{[(k-q')^2-m_f^2][k^2-m_f^2][(k-q)^2-m_f^2]} \\
\sim \int_0^1 dx \int_0^{1-x} dy \frac{\eta^{\mu\nu}[m_f^2-x^2 q'^2 + (1-2xy)qq']
+q'^\mu[2x(2x-1)q'^\nu + (4xy-1)q^\nu]}{m_f^2-q'^2x(1-x)+2xyqq'}
$$
From this result I can get the gauge invariant part, which agrees for instance with
this article (New Barr-Zee contributions to $(g-2)_\mu$ in two-Higgs-doublet models), but I'm left with additional terms that shouldn't be there. Even if I put $q'^\nu$ on-shell, I still get an additional term:
$$
i\Gamma^{\mu\nu} \sim [\eta^{\mu\nu}qq' - q'^\mu q^\nu] + \eta^{\mu\nu} m_f^2
$$
Initially, I was hoping these additional terms might cancel against those from the diagram with the opposite fermion direction, but adding this diagram just results in an overall factor of 2.
The calculation was done just using standard Feynman parameters and then setting $q^2=0$ and ignoring terms $\sim q^\mu$ (and $q'^\nu$ for $q'$ on-shell). After shifting the integration momentum, the trace was evaluated using FeynCalc.
Please let me know if you need more details on the calculation.
OK, I made a pretty instructive mistake. Since the diagram is power-counting finite, I assumed it would be fine to take the momentum integral to be 4-dimensional from the start. Then at some point I used
$$
\int\frac{d^4k}{(2\pi)^4} \frac{4k^\mu k^\nu}{[k^2-\Delta]^3} = \int\frac{d^4k}{(2\pi)^4} \frac{k^2 \eta^{\mu\nu}}{[k^2-\Delta]^3}
$$
However, this is wrong, since the resulting integral is divergent. Instead I need to stay in $D$ dimensions, which means
$$
\int\frac{d^Dk}{(2\pi)^D} \frac{Dk^\mu k^\nu}{[k^2-\Delta]^3} = \int\frac{d^Dk}{(2\pi)^D} \frac{k^2 \eta^{\mu\nu}}{[k^2-\Delta]^3}
$$
Then all the gauge-variant terms magically drop out of the calculation.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/717431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Heat death of the Universe in LCDM I have often read that the heat deat of the Universe occurs in cosmologies where its age can be arbitarily large, even with a cosmological constant. However the standard LCDM cosmology's conformal age is bounded, even in the arbitarily far future. It seems to me that for the Universe to necessarily reach equilibrium then it must also conformally reach equilibrium, but I don't see how that is a given if the conformal age is bounded. My question is the LCDM model how can the Universe definitely reach heat death?
There are two possible LCDM assumptions about what the universe will be like towards the end of time.
*
*All matter gravitationally bound together in a galaxy, or a collection of galaxies, will ultimately form into a single black hole. All other matter will become so far away from this black hole that it will have no significant influence on it. This is one possible result. It depends on the assumption that Hawking radiation is only a false conjecture and is not a real phenomenon. The universe would then have many such black holes, each not having any relationship with any other black holes. There is no currently present firm experimental or observational evidence that Hawking radiation is a real phenomenon.
See https://en.wikipedia.org/wiki/Hawking_radiation.
*If Hawking radiation is a real phenomenon, then the black hole described in #1 will eventually radiate all of its contents away, and the radiation particles will not interact with anything. Thus the universe will eventually everywhere become a vacuum.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/717572",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How do resistors work? Why is the current before and after a resistor exactly the same? I understand the same amount of charge that enters the resistor leaves, but current is defined to be charge per time. The way I understand it, resistors slow down the speed of electrons, so even though the same amount of charge that enters, leaves, the speed is different, so the current must be different. What is going on here?
Actually the resistor is decreasing the drift velocity because of more collisions and interactions between electrons and the lattice. However, think about what happens if an electron suddenly slows down when entering the resistor. A still faster electron in the wire approaching the resistor will feel the negative charge of the slowed-down electron in front of it, which repels the faster electron and slows it down while still in the wire. Vice versa, the faster electron will push the electron in front of it. An equilibrium forms and, in summary, the resistor slows down the drift velocity in the entire circuit.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/717693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 11,
"answer_id": 0
} |
How can the Cosmic Neutrino Background (CνB) have a temperature? How can any neutrino have a 'temperature'? The word temperature usually refers to the average velocity of massive particles, correct?
And the Cosmic Microwave Background (CMB) has a 'temperature' based on the temperature of a 'black body' that would emit photons of energies corresponding to those seen in the CMB, correct?
But, how can a neutrino or neutrinos have a temperature? What does it correspond to?
| A single particle is not assigned a temperature (which is exhibited by a very large ensemble of particles), it is described by its kinetic energy. So a single neutrino can be described as having a certain amount of kinetic energy- but it in and of itself has no "temperature".
Now if we imagine instead a huge burst of neutrinos released during a supernova collapse inside a supermassive star, that burst will contain a range of kinetic energies that start out characteristic of the process which created them. Then, as the neutrinos interact with themselves and with the matter and radiation surrounding them in the core of that star, that distribution gets averaged into a blackbody distribution with a peak to which a temperature can be ascribed.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/717782",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Does dusk really remain for a shorter period of time at the equator? It is said that the dusk remains for shorter time at equator than the poles. Because, the equator rotates faster than poles. But it is also true that time is the same in every latitude, and if it's true, then the dusk should remain the same at equator as the poles. So, does dusk really remain for a shorter period of time at the equator?
| The line that separates day and night (illuminated vs dark side of the Earth) is called the shadow terminator. Now, because we don't experience a sudden lights-on/lights-off transition, but a gradual shift towards nighttime (or dawn, at the other end), you can imagine there's a transitional region - a band of sorts - attached to the shadow terminator, where we experience twilight as we pass through it. A "twilight zone", if you will.
But it is also true that time is the same in every latitude, and if it's true, then the dusk should remain the same as at the poles.
Think of the circle that a stationary person or a place describes as the Earth rotates. They complete the full circle in 24h. As this "twilight band" has basically the same width everywhere, it will take up a larger part of the circle at higher latitudes (because the circles get smaller), which means people there will spend more time in the band. The situation is further complicated by the fact that the Earth's axis of rotation is tilted.
The rotation axis of the Earth is at an angle with respect to the ecliptic plane (Earth's orbital plane), and it maintains this orientation in space as the Earth travels around the Sun (Milankovitch cycles aside).
That means that the day-night line (the shadow terminator line) does not pass through the poles throughout most of the year.
For example, when it's winter in the northern hemisphere, the north pole faces away from the Sun for months. So, at the poles, the Sun can dip below the horizon for 6 months continuously, but about half (or more) of that time is some degree of twilight (for details, see this).
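To make the latitude dependence concrete, here is a small back-of-the-envelope calculation (my own illustration, not from the answer above): at equinox the Sun's altitude obeys $\sin(\text{alt})=\cos(\text{lat})\cos H$ with $H$ the solar hour angle, so the duration of civil twilight is the time for the Sun to sink from the horizon to 6° below it.

```python
import numpy as np

def twilight_minutes(lat_deg, depression=6.0):
    # Equinox geometry (solar declination = 0): sin(altitude) = cos(lat)*cos(H),
    # with H the solar hour angle. Twilight lasts from altitude 0 down to
    # -depression degrees; 1 degree of hour angle corresponds to 4 minutes.
    lat = np.radians(lat_deg)
    H_set = 90.0                                            # altitude = 0
    H_end = np.degrees(np.arccos(-np.sin(np.radians(depression)) / np.cos(lat)))
    return (H_end - H_set) * 4.0

print(twilight_minutes(0))    # 24.0 min at the equator
print(twilight_minutes(60))   # ~48 min at 60 degrees latitude
```

The twilight band takes about twice as long to cross at 60° latitude as at the equator, matching the "smaller circles" argument above (and this simple model still ignores the axial-tilt effects described next).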
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/718044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 4,
"answer_id": 2
} |
Why doesn't the variation of resistivity with temperature go both ways? I've learnt that the variation of resistivity with temperature for a conductor is:
$\rho=\rho_0(1+\alpha (T−T_0))$
Let's consider resistivity at 0℃ and 100℃.
When heating the conductor from 0℃ to 100℃,
$\rho_{100}=\rho_0(1+\alpha (100-0))$
$\alpha=\dfrac{\rho_{100}-\rho_0}{100\,\rho_0}$
Now, when cooling the conductor from 100℃ to 0℃,
$\rho_0=\rho_{100}(1+\alpha (0-100))$
$\alpha=\dfrac{\rho_0-\rho_{100}}{-100\,\rho_{100}}=\dfrac{\rho_{100}-\rho_0}{100\,\rho_{100}}=\dfrac{\rho_{100}-\rho_0}{100\,\rho_0(1+\alpha(100-0))}=\dfrac{\alpha}{1+100\alpha}$
Why does this discrepancy exist? Even if the relation only holds for small temperature differences, the discrepancy persists, as the new value of $\alpha$ only seems to depend on the old one, as $\dfrac{\alpha}{1+T'\alpha}$.
|
I've learnt that the variation of resistivity with temperature for a conductor is:
$\rho=\rho_0(1+\alpha (T−T_0))$
This is not a real physical relationship. It is just a convenient first-order approximation. Suppose we have some arbitrary resistivity $\rho(T)$ as a function of temperature. Then, at any $T=T_0$ we can do a series expansion to get: $$\rho(T) = \rho(T_0) + \rho'(T_0) (T-T_0) + O(T-T_0)^2$$ $$ \rho(T) \approx \rho(T_0) \left( 1+\frac{\rho'(T_0)}{\rho(T_0)} (T-T_0) \right) $$ which is the same as your formula with $\rho_0=\rho(T_0)$ and $\alpha = \rho'(T_0)/\rho(T_0)$.
The big issue is that $\alpha$ is not some sort of actual constant of the material itself. It is just the ratio of $\rho'$ to $\rho$ at a specific temperature $T_0$. So you cannot assume that $\alpha$ at $T_0$ is the same as $\alpha$ at any other temperature. You can only use this formula with $T_0$ as the temperature at which $\alpha$ was measured.
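A quick numerical sketch of this point (illustrative numbers, not from the answer): if $\alpha$ was measured with $T_0=0\,°\mathrm{C}$ as the reference, then re-deriving a coefficient with $100\,°\mathrm{C}$ as the reference reproduces exactly the $\alpha/(1+100\alpha)$ value in the question — two different coefficients for the same material, because each is tied to its own $T_0$.

```python
import numpy as np

alpha_0 = 4.0e-3      # 1/°C, measured with T0 = 0 °C (illustrative value)
rho_0 = 1.7e-8        # ohm*m at 0 °C (illustrative value)

rho_100 = rho_0 * (1 + alpha_0 * (100 - 0))        # heat from 0 to 100 °C

# Coefficient inferred if 100 °C is (wrongly) used as the reference point:
alpha_100 = (rho_100 - rho_0) / (100 * rho_100)

assert np.isclose(alpha_100, alpha_0 / (1 + 100 * alpha_0))
print(alpha_0, alpha_100)      # two different coefficients, one material
```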
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/718340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 6,
"answer_id": 2
} |
How does an electron move in the $p$ orbital? This is my first time learning about orbitals and I am very confused about how electrons move around the nucleus in the
$p$ orbital.
Wouldn't it have to move through regions where the probability of finding the electron is low in order to complete its revolution? Maybe my understanding of orbitals is flawed.
Can somebody please help.
| While this isn't above criticism, I think that a good starting point to get used to quantum physics is to picture the situation in the following way:
*
*The electron doesn't have to behave like a point-like object that has a trajectory.
*Whether it behaves like a point-like object, a wave, or any hybrid of the two depends on the experimental setup, especially which measurements are made or aren't made.
*Inside the atom, it's safer to consider that electrons are mostly wave-like, described by their wavefunction.
*The probability density doesn't really describe where the electron has a chance to be, but more where it can manifest if you force it into a particle-like behavior (typically by subjecting it to an interaction that depends on position).
Of course, those are only words so their scientific value is limited, but it's a reasonable starting point until you can rely on more reliable tools (Schrödinger's equation and how to use its solutions).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/718486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
If water is nearly as incompressible as ground, why don't divers get injured when they plunge into it? I have read that water (or any other liquid) cannot be compressed like gases and that it is nearly as elastic as a solid. So why isn't the impact of diving into water equivalent to that of diving onto hard concrete?
| Adding another perspective to the existing answers:
In your usual diving scenario, water is not confined to the points in space it occupied before, while a slab of ground is – on account of water being liquid and ground being solid.
To construct a scenario where you primarily experience the compressibility when diving into water, you would have to exactly encase the body of water with a perfectly rigid wall with only an exactly diver-shaped hole in it – through which the diver needs to enter. (Also, your diver would have to have the same cross-section everywhere along the direction of diving.) In that case, the water cannot escape to the sides anymore and the diver would fully feel that water is incompressible: They would not be able to enter the water at all and crash into it like a wall.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/718786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "38",
"answer_count": 13,
"answer_id": 7
} |
Translating Ashcroft and Mermin's "Second Proof" of Bloch's Theorem to Dirac's Notation At the end of this post I attach Ashcroft and Mermin's proof of Bloch's theorem which is not essential per se (the proof using lattice symmetries is more general), but is key in being used later as a jumping off point for the nearly free electron model.
Now I am trying to translate it to Dirac's bra-ket notation, since that always helps me think in more general, coordinate-free terms (once I pull out all of the identity operators which have been tacitly inserted). Essentially, I am trying to arrive at (8.38) by beginning at the coordinate-free eigenvalue equation for $H$.
Thus I thought to write something like
$$H|\psi\rangle=\epsilon|\psi\rangle \implies \langle \mathbf{r}|H|\psi\rangle=\langle \mathbf{r}|\int \mathrm{d}{\mathbf{k}}\left(\frac{\mathbf{p}^2}{2m}+U\right)|\mathbf{k}\rangle\langle \mathbf{k}|\psi\rangle=\int \mathrm{d}{\mathbf{k}}\left(-\frac{\hbar^2\nabla^2}{2m}+U(\mathbf{r})\right)\langle \mathbf{r}|\mathbf{k}\rangle c_\mathbf{k}=\int \mathrm{d}{\mathbf{k}}\left(-\frac{\hbar^2\nabla^2}{2m}+U(\mathbf{r})\right)c_\mathbf{k}e^{i \mathbf{k} \cdot\mathbf{r}}$$
where the integration is over all of momentum (k-space up to a proportionality factor). The first term in the equation above recovers (8.36), but (8.37) I cannot seem to "find". Obviously I want something of the form (8.32), so I thought to insert completeness in coordinate space ($\mathbf{r}$), but then I have some extra $\mathbf{r}$ kets in the second term which aren't there in the first.
Any help in completing the steps which I cannot would be greatly appreciated.
| Here's how to do it by inserting an extra complete set of momentum states.
\begin{align*}
\langle \mathbf{r}|H|\psi\rangle
&=\langle \mathbf{r}|
\int \mathrm{d}{\mathbf{k}} \mathrm{d}{\mathbf{k'}}\,|\mathbf{k'}\rangle\langle \mathbf{k'}|
\left(\frac{\mathbf{p}^2}{2m}+U\right)|\mathbf{k}\rangle\langle \mathbf{k}|\psi\rangle\\
&=\int \mathrm{d}{\mathbf{k}} \mathrm{d}{\mathbf{k'}}\,
\langle \mathbf{r}|\mathbf{k'}\rangle
\left(
\langle \mathbf{k'}|\frac{\mathbf{p}^2}{2m}|\mathbf{k}\rangle
+
\langle \mathbf{k'}|U|\mathbf{k}\rangle
\right)
\langle \mathbf{k}|\psi\rangle\,.
\end{align*}
Now, because $U$ is periodic, in the lattice, it only connects momentum states if they differ by a reciprocal lattice vector $\mathbf{G}$, i.e.
$$
\langle \mathbf{k'}|U|\mathbf{k}\rangle = U_{\mathbf{G}}\delta^3(\mathbf{k}+\mathbf{G}-\mathbf{k'})\,,
$$
where $U_\mathbf{G}$ is a Fourier coefficient of $U$. Then
\begin{align*}
\langle \mathbf{r}|H|\psi\rangle
&=\int \mathrm{d}{\mathbf{k}} \mathrm{d}{\mathbf{k'}}\,
\langle \mathbf{r}|\mathbf{k'}\rangle
\left(
\frac{k^2}{2m}\delta^3(\mathbf{k}-\mathbf{k'})
+
U_{\mathbf{G}}\delta^3(\mathbf{k}+\mathbf{G}-\mathbf{k'})
\right)
\langle \mathbf{k}|\psi\rangle
\\
&=
\int \mathrm{d}{\mathbf{k}}\,
\langle \mathbf{r}|\mathbf{k}\rangle
\frac{k^2}{2m}
\langle \mathbf{k}|\psi\rangle
+
\int \mathrm{d}{\mathbf{k}}\,
\langle \mathbf{r}|\mathbf{k}+\mathbf{G}\rangle
U_\mathbf{G}
\langle \mathbf{k}|\psi\rangle
\\
&=
\int \mathrm{d}{\mathbf{k}}\,
\frac{e^{i\mathbf{k}\cdot\mathbf{r}}}{\sqrt{2\pi}}
\frac{k^2}{2m}
\tilde{\psi}(\mathbf{k})
+
\int \mathrm{d}{\mathbf{k}}\,
\frac{e^{i(\mathbf{k}+\mathbf{G})\cdot\mathbf{r}}}{\sqrt{2\pi}}
U_\mathbf{G}
\tilde{\psi}(\mathbf{k})
\\
&=
\int \mathrm{d}{\mathbf{k}}\,
\frac{e^{i\mathbf{k}\cdot\mathbf{r}}}{\sqrt{2\pi}}
\frac{k^2}{2m}
\tilde{\psi}(\mathbf{k})
+
\int \mathrm{d}{\mathbf{k}}\,
\frac{e^{i\mathbf{k}\cdot\mathbf{r}}}{\sqrt{2\pi}}
U_\mathbf{G}
\tilde{\psi}(\mathbf{k}-\mathbf{G})
\end{align*}
where in the last step we have used a change of variables $\mathbf{k}\to\mathbf{k}+\mathbf{G}$. We have also notated the Fourier transform of $\psi$ as $\tilde{\psi}$.
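The key step — that a lattice-periodic $U$ only connects plane waves whose wavevectors differ by a reciprocal-lattice vector $\mathbf{G}$ — can be checked numerically in a 1-D discretized toy model (my own illustration; the grid sizes and the cosine potential are arbitrary choices):

```python
import numpy as np

N_cell, n_cells = 8, 4            # grid points per unit cell, number of cells
n = N_cell * n_cells
x = np.arange(n)
U = np.cos(2 * np.pi * x / N_cell)          # period N_cell on the grid

# Plane-wave basis <x|k> = exp(2*pi*i*k*x/n)/sqrt(n), k = 0..n-1
pw = np.exp(2j * np.pi * np.outer(np.arange(n), x) / n) / np.sqrt(n)
M = pw.conj() @ np.diag(U) @ pw.T           # matrix elements <k'|U|k>

G = n // N_cell                             # reciprocal-lattice step in index units
kp, kk = np.nonzero(np.abs(M) > 1e-10)
assert np.all((kp - kk) % G == 0)           # only k' - k = multiple of G survives
```

Every nonvanishing matrix element has $k'-k$ equal to a multiple of the reciprocal-lattice step, which is exactly the delta-function structure used in the derivation above.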
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/718918",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Are we always allowed to treat an inductor as a battery with the same voltage? When there is an induced emf, Kirchhoff's Loop Rule no longer is true, because electric fields are nonconservative when there is an induced current, as stated by Faraday's Law:
However, I have seen explanations that incorporate inductors and induced emfs into circuit analysis by treating them like batteries. For example, for the following circuit, if V is the voltage of the battery, Vinduced is the induced emf from the inductor, R is the resistance of the resistor, and I is the current, then V - Vinduced = IR:
To me, this seems to be treating the inductor like a battery with voltage Vinduced. I see why this is justified; the only difference between the electric field created by a battery and by an inductor is that the inductor's field is nonconservative, while the battery's field is conservative due to the electric field inside the battery. However, are there any cases where an inductor acts differently than a battery with the same voltage, at least for circuit analysis purposes?
| It seems I was mistaken in my original answer.
You can treat (emphasis on the word treat) an inductor as a battery if you take the induced emf into account in the voltage equation and apply Kirchhoff's law. But strictly speaking, since the electromagnetic field is not conservative here, you can't in general apply Kirchhoff's law.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/719087",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Application of "real" Grassmann Gaussian integrals In Appendix 2B of the CFT yellow book by Francesco et al, the authors introduced two types of Grassmann Gaussian integrals (the $\theta$'s below are generators of a Grassmann algebra):
*
*The "real" one
$$
I = \int d\theta_1 \cdots d\theta_n \exp(-\frac{1}{2} \theta^T A \theta)
\tag{2.223}
$$
*
*The "complex" one
$$
I_2 = \int d\bar{\theta} d\theta \, \exp(-\bar{\theta} M \theta)
\tag{2.231}
$$
$$
d\bar{\theta} d\theta = \prod_{i=1}^n d\bar{\theta}_i d\theta_i
\tag{2.232}
$$
I know that the "complex" is useful because its generalization to functional integral is the coherent state path integral for free fermions. But I have not encountered applications of the "real" integral, and did not search the yellow book for it. Can anyone kindly tell me applications of the "real" Grassmann Gaussian integral? Is it related to Majorana fermions (which is "real") or some other coherent state path integral?
| The "real" integral evalautes to the Pfaffian of $A$ where for an $2n$-by$2n$ skew symmetric matrix $A$
$$
{\rm Pf}\,A= \frac{1}{2^n n!}\, \epsilon_{i_1 \ldots i_{2n}} A_{i_1 i_2}\cdots A_{i_{2n-1} i_{2n}}.
$$
The Pfaffian has the property that $({\rm Pf} A)^2= {\rm det}A$,
and has applications in many places in combinatorics. For Majorana-fermion path integrals it replaces the one-loop Matthews-Salam determinant.
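A small numerical sanity check of $({\rm Pf}\, A)^2=\det A$, using the standard recursive (Laplace-type) expansion of the Pfaffian along the first row — my own illustration, not from the book:

```python
import numpy as np

def pfaffian(A):
    # Laplace-type expansion of the Pfaffian along the first row.
    n = A.shape[0]
    if n == 0:
        return 1.0
    if n % 2 == 1:
        return 0.0          # the Pfaffian of an odd-dimensional matrix vanishes
    total = 0.0
    for j in range(1, n):
        keep = [k for k in range(n) if k not in (0, j)]
        total += (-1) ** (j - 1) * A[0, j] * pfaffian(A[np.ix_(keep, keep)])
    return total

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = B - B.T                      # a random skew-symmetric matrix
assert np.isclose(pfaffian(A) ** 2, np.linalg.det(A))
```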
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/719324",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
A cylinder rolling down an inclined plane A few questions popped into my mind while studying rotational motion.
Take a cylinder to the top of an inclined plane. Suppose there is friction. Let go of the cylinder. If it is rolling without slipping, is its acceleration constant over the time interval it is rolling down? If so, why? Why does the acceleration depend on the rotational inertia of the body in this case? And the final and most important question that had me struggling: why can't we simply apply $F = ma$ on these objects and get the same result on every object, regardless of their rotational inertia, since all the forces acting on the object in this system are proportional to the mass?
| The acceleration of the center of mass (CM) is the net force divided by the mass; the net force is the component of gravity down the incline minus the force of friction up the incline. You do just apply $\vec F = m \vec a_{CM}$ to determine the acceleration of the CM, $\vec a_{CM}$; however, you need to consider the rotational motion to evaluate the force of friction, which is not constant for rolling without slipping. Friction provides a torque that causes rotational motion, so the force of friction depends on the moment of inertia. The force of friction is not constant and equals $ma_{CM}/2$ for a cylinder of mass $m$. The acceleration of the CM is constant, equal to ${2 \over 3} g sin(\theta)$ for a cylinder, where $\theta$ is the angle of the incline. For rolling without slipping the force of friction does no work and the potential energy at the top of the incline is converted to kinetic energy of the CM plus rotational energy around the CM. This problem is evaluated in numerous physics textbooks, such as one of the many textbooks by Halliday and Resnick.
See Consistent Approach for Calculating Work By Friction for Rigid Body in Planar Motion for more details.
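The numbers quoted above follow from Newton's second law along the incline, the torque equation about the CM, and the rolling constraint; here is a short symbolic check (assuming a uniform solid cylinder, $I=\tfrac12 mR^2$):

```python
import sympy as sp

m, g, R, theta = sp.symbols('m g R theta', positive=True)
a, f = sp.symbols('a f')                 # CM acceleration, friction force
I = sp.Rational(1, 2) * m * R**2         # uniform solid cylinder about its axis

eqs = [sp.Eq(m * a, m * g * sp.sin(theta) - f),   # F = ma along the incline
       sp.Eq(f * R, I * a / R)]                   # torque = I*alpha, with a = R*alpha
sol = sp.solve(eqs, [a, f])

assert sp.simplify(sol[a] - sp.Rational(2, 3) * g * sp.sin(theta)) == 0
assert sp.simplify(sol[f] - m * sol[a] / 2) == 0  # friction = m*a_CM/2
```

This also makes the dependence on the moment of inertia explicit: changing $I$ in the second equation changes both the acceleration and the friction force.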
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/719767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Does the front of a light wave always propagate at $c$ in media Consider light moving along one dimension at the classical level. I am interested in the situation where a wave front impacts a material with some generic index of refraction $n(\omega)$, and propagates through. My calculations seems to suggest that the very front of the wave travels at exactly $c$, conditional only on $n(\omega)\to 1$ as $|\omega|\to \infty$ (which I believe must be the case). This seems to conflict with common intuition that light is slowed by media, but perhaps it is the case that the front amplitude is strongly suppressed relative to the main amplitude of the wave? Could someone clarify what is going on?
My calculation is quite simple. Suppose the light wave is moving along the positive $x$ direction, and the material is defined as starting at $x=0$. Let the vector potential be $\mathbf A(t,x)=\hat{\mathbf A}\psi(t,x) $ here be given as $\psi(t,0)=\Theta(t)u(t)$ for some smooth function $u(t)$. For simplicity let us take it to be $e^{-0^+t}$ for convergence purposes. Then:
$$\psi(\omega,0)=\int_0^\infty\mathrm{d} t\,\mathrm{e}^{i(\omega+i0^+) t}=\frac{i}{\omega+i0^+} \ .$$
Hence the wave front at any point in the material may be calculated to be:
$$\psi(t,x)=\int_\mathbb{R}\frac{\mathrm{d}\omega}{2\pi}\frac{i\mathrm{e}^{-i\omega(t-n(\omega)x/c)}}{\omega+i 0^+} \ . $$
As long as $n(\omega)\to 1$ as $|\omega|\to \infty$, for $x>ct$ we can add to the integral a semicircle contour, such that we enclose the upper complex $\omega$ plane. By the residue theorem we therefore have $\psi(x>ct)=0$ as expected. However, no matter the dispersion, when $x<ct$ we can add the semicircle contour in the lower half plane. Now in general, $n(\omega)$ will have poles in the lower half plane (but none in the upper), so we cannot perform the integral, but unless a miraculous cancellation occurs, surely we have $\psi(x=ct-0^+)\neq0$, independent of the material.
| The earliest appearance of the front of an electromagnetic disturbance (the precursor) travels at the front velocity, which is $c$, no matter what the medium.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/719908",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Question about gravitational waves Gravitational waves are measured by interferometers, in particular by the change in length of one of the arms, with respect to the other. In this scenario, the light that has always the same speed, measures a delay by traveling one of the arms. My question is: if an arm pass from length $L$ to length $L+dL$, and if I am inside the arm and measure it with my ruler, I will measure from my point of view always the same length $L$, because also my ruler will be distorted like the arm. This means that the delay of light is related to the different "speed" of time between the two arms?
| The ruler resists attempts to change its length, due to electrostatic forces between atoms in the ruler (eg see Young's modulus). This means the ruler will not change as length to the same extent as space, when a gravitational wave passes.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/720159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Expression of Klein-Gordon field in Heisenberg picture In Schrodinger picture, the scalar field is
$$
\phi(\vec{x}) = \int \frac{d^3 p}{2E(\vec{p})} \left( a(\vec{p}) e^{i\vec{p}\cdot\vec{x}} + a(\vec{p})^{\dagger} e^{-i\vec{p}\cdot\vec{x}} \right). \tag{1}
$$
We change to the Heisenberg picture, we have
$$
\phi(x) = e^{iHt} \phi(\vec{x}) e^{-iHt} \quad (2)
$$
where $x=(t,\vec{x})$. For $a(\vec{p})$ or $a(\vec{p})^{\dagger}$, for example, we have
$$
e^{iHt} a(\vec{p}) e^{-iHt} = a(\vec{p}) e^{-iE(\vec{p})t} \quad (3)
$$
In Peskin and Schoroeder's book (Page:25), it gives exact expression for $\phi(x)$, i.e.,
$$
\phi(x) = \int \frac{d^3 p}{2E(\vec{p})} \left( a(\vec{p}) e^{ip\cdot x} + a(\vec{p})^{\dagger} e^{-ip\cdot x} \right) \quad (4)
$$
where $p$ and $x$ is four momentum vector and four position vector, respectively.
From Eq. (1) and Eq. (2), we can see that $e^{\pm iHt}$ is outside of the integral. In addition, $E$ is a function of $\vec{p}$, namely $E=E(\vec{p})$. I think that in the book, Eq. (3) is applied inside the integral. I cannot understand why we can do this. From Eq. (2), I think $e^{\pm iE(\vec{p})t}$ should be outside of the integral. Since $E$ depends on $\vec{p}$, if we put it inside the integral, does it affect the result of the integral?
| One easy way to deal with these difficulties is to imagine the integral is discretized. Then the measure is clearly just a c-number multiplying the annihilation and creation operators; $H$ commutes with a c-number (you can insert the identity alongside it, to be more explicit), so you can move the factor inside the integral.
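The discretized picture can be made explicit for a single mode of energy $E(\vec p)$: with a truncated Fock space and $H=E\,a^\dagger a$, the conjugation in Eq. (3) is just an element-wise phase, so nothing stops the $\vec p$-dependent factor from sitting under the integral. A minimal numerical sketch (truncation size and numbers are illustrative):

```python
import numpy as np

N = 6                       # Fock-space truncation (illustrative)
E, t = 1.3, 0.7             # mode energy E(p) and time, arbitrary units

a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # annihilation operator
U = np.diag(np.exp(1j * E * np.arange(N) * t)) # e^{iHt} with H = E * a†a

lhs = U @ a @ U.conj().T                       # e^{iHt} a e^{-iHt}
rhs = a * np.exp(-1j * E * t)                  # a e^{-iE t}, i.e. Eq. (3)
assert np.allclose(lhs, rhs)
```

Because $H$ is diagonal in the number basis, the conjugation is exact even in the truncated space.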
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/720343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can plasmas be black bodies? I have recently heard the claim that the Sun cannot be composed of plasma because a plasma cannot be a black body.
I am an uneducated layman; I've seen a lot of people (laymen) deviate from accepted scientific consensus. I am skeptical, but I don't have enough knowledge of physics to argue against it.
| A necessary but not sufficient property that a volume of emitting atoms and molecules needs to have, in order to emit light as a blackbody, is that they are in local thermodynamic equilibrium (LTE) with a single well-defined temperature.
In many cases this is just an idealization and even in earth's stratosphere and beyond the molecules are too rarefied to be in local thermodynamic equilibrium (non-LTE).
In such a case the emission is determined by looking at how fully the different energy levels of the molecules and atoms are populated.
An important application of this are e.g. lasers.
The precise emission spectrum of the sun in terms of irradiance is the so-called Kurucz-spectrum.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/720940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Why do ceramics have a yield strength? From what I've learned so far, I look at yield strength as the beginning of plastic deformation in an object.
If ceramics don't (well usually don't) undergo plastic deformation, how can it be said that ceramics have a higher yield strength then metals?
| You can also think of yield strength as the end of the elastic region. For ceramics this is convenient because they do indeed have an elastic region. Alternatively, instead of saying they have "no" plastic region, say that ceramics have zero plastic region, which fits with the fact that the breaking point is at the point where yield strength is measured.
And remember that, in practice, all materials are... well... real. No real material follows the simple 2-part stress-strain curve. Most follow it pretty darn well, but there are always complications due to the real-life non-homogeneity of materials. So when there are funny corner cases, that's probably okay. The reality of physics will round them out!
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/721064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why is the slowest point of Earth's rotation in the middle of the year? The following image is taken from Wikipedia's article on the leap second.
Why is the slowest point each year in the middle of the year around July? Does being further from the Sun cause the Earth's rotation to slow down? What's the mechanism in play here?
| I don't have access to the paper but the abstract from Carter(1984) suggests that the major cause of rotational variations is due to "the exchange of angular momentum between the atmosphere and the mantle."
This is echoed by Earth Rotation Variations from Hours to Centuries:
Variations with periods of five years or less are driven primarily by exchanges of angular momentum with the atmosphere
One could hypothesize that the middle of the year corresponds with some weather pattern that maximizes atmospheric momentum.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/721256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
What would happen if you reduced the coupling of $SU(2)$ in the standard model to zero? Ultimately, my goal is to find a free parameter that you could change in order to significantly reduce the strength of, or eliminate, the weak interaction. Would such a modification leave other parts of the Standard Model unchanged?
| It is straightforward to see, even though your ultimate vision should be in trouble. I assume you mean decrease the coupling g of just SU(2), and leave the EM coupling e and the Higgs v.e.v. v alone, which cannot be done.
You then just look at the formulas:
$$\cos \theta_\text{W} = \frac{g}{\,\sqrt{g^2+g'^2\,}\,}, \qquad \sin\theta_\text{W} = \frac{g'}{\,\sqrt{g^2+g'^2\,}\,} \\
e=g\sin\theta_W= g'\cos \theta_W\\
m_\text{Z} = \frac{m_\text{W}}{\,\cos\theta_\text{W}\,}, \qquad m_W= {ev\over 2\sin\theta_W}, \qquad G_F= 1/(v^2\sqrt{2}). $$
As $g\to 0$, the Weinberg angle increases to π/2, its cosine vanishes, and its sine goes to 1.
*
*But note, the EM charge e cannot stay invariant, since $e\to g$.
The Fermi constant stays put; the mass of the W goes to zero (it stops coupling!), and the mass of the Z goes to $g'v/2$.
NB aside: If you are seeking decoupling, the opposite limit, $g\to \infty$, paradoxically is better behaved, and often taught in class: in that case, e can stay unchanged, $e=g'$, since $\theta_W=0$, the cos is 1, and $m_Z=m_W=\infty$, while the Fermi constant is what it always was: the old Fermi theory! So you may think of EW unification as a descent from infinite to a finite g...
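The limits quoted above are easy to confirm symbolically (a sketch; `gp` stands for $g'$ and the fixed quantities are $g'$ and $v$):

```python
import sympy as sp

g, gp, v = sp.symbols('g gp v', positive=True)

sinW = gp / sp.sqrt(g**2 + gp**2)       # sin(theta_W)
e = g * sinW                            # electric charge
mW = g * v / 2                          # m_W = e*v/(2 sin theta_W) = g*v/2
mZ = sp.sqrt(g**2 + gp**2) * v / 2      # m_Z = m_W / cos(theta_W)

assert sp.limit(sinW, g, 0) == 1        # theta_W -> pi/2
assert sp.limit(mW, g, 0) == 0          # the W decouples
assert sp.limit(mZ, g, 0) == gp * v / 2
assert sp.limit(e / g, g, 0) == 1       # e -> g, so e cannot stay fixed
```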
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/721373",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why does a flapping rudder produce net thrust if one half-stroke produces thrust and the other half-stroke drag? In a small sailing boat like an Optimist there is a well-known technique for when there is no wind: rudder pumping, which pushes the boat forward. You just need to push-pull the rudder stick left to right with a fast movement.
The rudder works completely under the hull, so there is no pressure interaction between the stern and the rudder.
The forward half-stroke is when the rudder rotates from the centerline to the left or right
(from 2 to 1 or from 2 to 3).
Why does a stiff rudder (not flexible like flippers) produce net thrust if the forward half-stroke produces drag?
(Or maybe the forward half-stroke produces thrust as well? I don't think so...)
Please explain your answer with pressures at the rudder sides for two conditions:
*
*boat speed zero
*boat is moving
Avoid Newton's third law.
| The key point may be that stern of the boat moves laterally. During the first half of the stroke, the force exerted on the rudder by the water is forward and to the side. The sideways component causes the boat to rotate, so that, if you were to release the tiller and allow to rudder to align itself with the flow (which means, in the inviscid approximation, no net force on the rudder) the rudder would naturally continue past center, allowing it to reach the starting position for the return stroke without generating a backward force. The rotation of the boat eventually stops—let’s say a skeg provides some lateral resistance—and now you’re ready for the return stroke having produced some forward thrust and no backward force.
Of course, you wouldn’t actually let go of the tiller for the second half of the stroke, but by mimicking the motion you’d get if you did, it seems logical that net thrust could be generated.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/721517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Does the force between two magnetic poles ever reach zero? If we hold two magnetic like-poles together and start to move them apart, would the repelling force reach absolute zero at a certain point?
In that scenario, as a layman, I think that there is something paradoxical :(
We can never reach absolute ZERO in Physics. Theoretically, it will always be bigger than zero... it just gets smaller and smaller... ad infinitum. And that reminds me of Zeno's paradox.
| The magnitude of the force between the two magnets will approach zero as they get further and further apart. It never (in theory) actually reaches zero because they are always a finite distance apart - we say the force approaches a limit of zero as the distance between them approaches (but never reaches) infinity. Of course, the force will eventually become too small to measure, so we might say it becomes zero for all practical purposes.
The term “absolute zero” is usually reserved in physics for a temperature that is so low that all molecular and atomic motion ceases. It is true that the laws of thermodynamics mean that this theoretical temperature can never be achieved in practice. However, many other physical attributes can (and do) become zero.
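As a concrete illustration (assuming the textbook on-axis force between two coaxial, aligned magnetic dipoles, $F = 3\mu_0 m_1 m_2 / 2\pi r^4$, as a stand-in for the two magnets): the force stays strictly positive at every finite separation but approaches zero as $r\to\infty$.

```python
import numpy as np

mu0 = 4e-7 * np.pi
m1 = m2 = 1.0                    # dipole moments in A*m^2 (illustrative)

def force(r):
    # On-axis repulsion between two aligned coaxial magnetic dipoles
    return 3 * mu0 * m1 * m2 / (2 * np.pi * r**4)

for r in [0.1, 1.0, 10.0, 100.0]:
    print(r, force(r))           # strictly positive, shrinking like 1/r^4

assert force(100.0) > 0 and force(100.0) < 1e-12
```

At 100 m the force is already far below anything measurable, which is the "zero for all practical purposes" in the answer above.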
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/721826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What is the relative acceleration composition law in General relativity? In Euclidean geometry we have the following relative acceleration composition law:
$$ \vec a_{DE} + \vec a_{EF} = \vec a_{DF} $$
Where the relative acceleration between $i$ and $j$ for any $i$ and $j$ is given by:
$$ a_{ij} = a_i - a_j$$
with $a_i$ being the acceleration of $i$ and $a_j$ being the acceleration of $j$.
Is there a nice geometric way to calculate the relative acceleration composition law for $3$ intersecting (at a point) geodesics? I know the separation vector $n$ between $2$ neighboring geodesics obey:
$$ \nabla_u^2 n = R (u,v) n $$
Where $ R(u,v) = (\nabla_u \nabla_v - \nabla_v \nabla_u)$ and $\nabla_u v$ is the derivative of $v$ along $u$.
| Precisely this question has been asked and answered in the following paper:
*
*Bini, D., Carini, P., & Jantzen, R. T. (1995). Relative observer kinematics in general relativity. Classical and Quantum Gravity, 12(10), 2549, doi:10.1088/0264-9381/12/10/013, free pdf at archive.org.
Abstract. The straightforward reformulation of special relativistic concepts about relative observer kinematics in the context of the flat affine geometry of Minkowski spacetime, so that they respect the manifold structure of that spacetime, allows one to derive the general relativistic ‘addition of acceleration law’. This transformation law describes the relationship between the relative accelerations of a single test particle as seen by two different families of test observers.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/721958",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
How do quantum probabilities transform under Lorentz transformations? I think I get how scattering probabilities transform under Lorentz transforms. Once the interaction phase is over, the final probabilities become time independent. Hence, every observer could describe the final state using the same probabilities.
But I don't understand how time-dependent probabilities would transform under a change of frame. Suppose there's a quantum system in a box whose probabilistic state at time $t$ is described by some wavefunction/wavefunctional $\psi (t)$. How would a moving observer describe the probabilistic state of the same system? I think the concept of "probability at a time" gets screwed up because of different planes of simultaneties for the two observers.
| Wavefunctions are not compatible with special relativity, where the number of particles can change over the course of an experiment; perhaps what you mean, then, is something like the electron field? If so, the problem simply does not arise, since all formulations of QFT are manifestly Lorentz invariant (just look at their Lagrangians). Observables like squared scattering amplitudes (which are the only thing you can observe about a system) are therefore automatically Lorentz invariant, and all observers agree on their values.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/722118",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Mass definition One definition of mass is 'a measure of the quantity of matter in an object at rest relative to the observer'. What do 'at rest' and 'relative to the observer' mean here? I know it has to do with mass resisting motion, but I cannot get what these mean.
| I would guess the confusion arises because of the (widespread but misguided) belief that in special relativity mass increases with velocity. For more on this see Why is there a controversy on whether mass increases with speed?
If we define the relativistic mass as:
$$ m_r = \gamma m = \frac{m}{\sqrt{1 - v^2/c^2}} $$
then $m_r = m$ only when the speed $v = 0$. That is, the object is at rest relative to the observer measuring the mass.
A better way of defining the mass is to use the expression:
$$ E^2 = p^2c^2 + m^2c^4 $$
where $E$ is the total energy of the mass and $p$ is the momentum. When you define the mass this way it does not change if the object is moving relative to the observer.
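A quick numerical illustration of why this definition is frame-independent (a sketch in natural units with $c=1$; the momentum and boost velocity are arbitrary choices):

```python
import math

def invariant_mass(E, p):
    """Invariant mass from E^2 = p^2 c^2 + m^2 c^4, in units with c = 1."""
    return math.sqrt(E**2 - p**2)

def boost(E, p, v):
    """Lorentz-boost (E, p) along the momentum axis by velocity v, |v| < 1."""
    gamma = 1.0 / math.sqrt(1.0 - v**2)
    return gamma * (E - v * p), gamma * (p - v * E)

# A particle of rest mass 1 with momentum p = 3, so E = sqrt(1 + 9):
E, p = math.sqrt(10.0), 3.0
print(invariant_mass(E, p))      # 1.0

# In a frame moving at v = 0.6, E and p both change but the mass does not:
E2, p2 = boost(E, p, 0.6)
print(invariant_mass(E2, p2))    # still 1.0
```

The relativistic mass $\gamma m$, by contrast, would come out different in the two frames.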
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/722272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 2
} |
Shape of fastest spinning rod A one-meter steel rod of variable thickness is attached at one end to a spinning hub. The cross-sectional area of the rod is a function $f(x)$ of the distance $x$ in meters from the hub, x ranging from 0 to 1. My question is: how can I choose the function $f(x)$ to maximize the speed at which the rod can spin without flying apart?
Additional constraints: the rod has a minimum cross-section of 1 cm$^2$ everywhere, and the rod weighs 10 kg. Density of steel = $\rho$ = 8 g/cm$^3$, and the ultimate tensile strength is $F_{tu}$ = 800 MPa.
What I have: assume that at each distance c from the hub, the rod's cross-section at that distance has just enough tensile strength to support the rest of the rod. By setting $F_{tu} f(c)$ equal to the sum of centripetal forces needed for the rest of the rod, with $\omega$ angular velocity, I get
$$F_{tu} f(c) = \int_c^1 \rho \cdot x \cdot f(x) \cdot \omega^2 dx$$
But I do not know how to solve for f.
| Assume that you know the primitive $F(x)$ of the integrand:
$$ \frac{dF(x)}{dx}= \rho\, x\, f(x)\, \omega ^{2}. $$
Then your equation reads
$$ F_{tu}\, f(c)=F(1)-F(c). $$
Differentiate both sides with respect to $c$ to get
$$ F_{tu}\, \frac{df(c)}{dc} =- \frac{dF(c)}{dc}=-\rho\, c\, f(c)\, \omega ^{2}, $$
from which you get
$$ f(c)=k\, e^{- \frac{ \rho c^{2} \omega ^{2} }{2 F_{tu} } }. $$
The negative exponent indicates the rod gets thinner as the distance from the axis increases. This makes sense because the centripetal force increases with $x$. Here $k$ is a constant to be found from the other constraints. The mass of the rod is given by
$$ m= \int_0^1 \rho\, f(x)\,dx $$
The integral involves the erf function; with all the calculus done, one finds:
$$ k= \frac{m\, \omega \sqrt{ \frac{2 \rho }{ F_{tu} } } }{ \rho \sqrt{ \pi }\,\operatorname{erf} \big( \omega \sqrt{ \frac{ \rho }{2 F_{tu} } } \big)} $$
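As a numerical sanity check (a sketch using the stated values $\rho = 8000\ \mathrm{kg/m^3}$, $F_{tu} = 800$ MPa, $m = 10$ kg, and an arbitrary trial spin rate $\omega = 700$ rad/s; the 1 cm² minimum-area constraint is only checked at the tip, not enforced), Python's `math.erf` lets one evaluate $k$ and verify that the profile integrates back to the prescribed mass:

```python
import math

rho, F_tu, m, L = 8000.0, 800e6, 10.0, 1.0   # SI: kg/m^3, Pa, kg, m
omega = 700.0                                 # rad/s (trial value)

a = rho * omega**2 / (2.0 * F_tu)             # exponent coefficient, 1/m^2
k = (m * omega * math.sqrt(2.0 * rho / F_tu)
     / (rho * math.sqrt(math.pi)
        * math.erf(omega * math.sqrt(rho / (2.0 * F_tu)))))

def f(x):
    """Cross-sectional area profile in m^2 at distance x from the hub."""
    return k * math.exp(-a * x**2)

# Midpoint-rule check: integral of rho*f over [0, L] should give back m = 10 kg.
n = 100_000
mass = sum(rho * f((i + 0.5) * L / n) * L / n for i in range(n))
print(mass)                    # ~10.0 kg
print(f(1.0) * 1e4)            # tip cross-section in cm^2 (~2 cm^2 here, above the floor)
```

At higher $\omega$ the tip area drops below the 1 cm² floor and the constrained problem changes shape, so this unconstrained profile is only valid up to some maximum spin rate.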
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/722862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Boundary conditions of the Casimir's effect on Sakurai On 3rd edition of Sakurai's modern relativistic quantum mechanics, section 7.8.3 when discussing the Casimir effect, we want to write down an expression for the vacuum energy for two metal plates separated by distance $d$:
$$\tag{7.183}E_0(d)=\hbar\sum_{k_x,k_y,n}\omega_k=\hbar c\sqrt{k_x^2+k_y^2+\bigg(\frac{n\pi}{d}\bigg)^2}$$
The book says this follows from the previous equation, where we impose the periodic boundary condition
$$\tag{7.172}\boldsymbol{k}=(k_x,k_y,k_z)=\frac{2\pi}{L}(n_x,n_y,n_z).$$
How did we get 7.183 from 7.172, why is it not $E_0(d)=\hbar\sum_{k_x,k_y,n}\omega_k=\hbar c\sqrt{k_x^2+k_y^2+\big(\frac{2n\pi}{d}\big)^2}$ ?
Later the book says
$$\tag{7.184} E_0(d)=\hbar c\bigg(\frac{L}{\pi}\bigg)^2\int_0^\infty \mathrm dk_x\int_0^\infty \,\mathrm dk_y \sqrt{k_x^2+k_y^2+\big(\frac{n\pi}{d}\big)^2} $$
Why is the integral from $0$ to $\infty$ not $-\infty$ to $\infty$? How did we get $(\frac{L}{\pi})^2$? Shouldn't it be $\big(\frac{2\pi}{L}\big)^2?$
$\textbf{Edit:}$ I see, so using equation 7.172, we have $$E_0(d)=\hbar\sum_{k_x,k_y,n}\omega_k=\sum_n\hbar c\bigg(\frac{L}{2\pi}\bigg)^2\int_{-\infty}^\infty dk_x\int_{-\infty}^\infty dk_y \sqrt{k_x^2+k_y^2+\big(\frac{2n\pi}{d}\big)^2}$$ this is equal to the integral in 7.184.
| Regarding the former issue, the book simply integrates twice from $0$ to $+\infty$ both in $k_x$ and $k_y$ and multiplies the result by a factor $2\times 2$, because the integrated function is symmetric under $k_j\to -k_j$.
Notice that, in fact, a factor $1/4$ which arises from $$dn_x dn_y = \frac{L}{2\pi}\frac{L}{2\pi} dk_xdk_y$$ has been cancelled out by the factor $2\times 2$.
Regarding the latter issue, the book is using Dirichlet boundary conditions: the modes vanish at $0$ and $d$ along $z$. Periodic boundary conditions are imposed only along the $x$ and $y$ direction.
I do not have the book but I expect that it uses vanishing boundary conditions on the two surfaces represented by the plates. This condition produces modes labelled by the positive integers only (think of a particle confined in an infinite square well). The boundary conditions in the $x$ and $y$ directions are not very important, since we are considering the limit of infinite plates.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/723051",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does the intermolecular forces change during phase transition? When water is heated but not yet boiling, I understand that the intermolecular attraction does not change, but the molecules vibrate more.
But when water boils to gas, does the forces of attraction between the molecules change, or are the intermolecular forces simply broken?
| Interactions between molecules of water are always the same, regardless of the temperature and the phase. What happens when you heat water is that the increase in temperature causes the water molecules to have higher average velocity, so they can overcome the attractive interactions and a gas forms.
The reason why this happens discontinuously, in a first-order phase transition, is actually quite complicated, but intuitively what happens is that the liquid state (of high density) becomes more energetically unfavourable than the gaseous one (of low density). As a consequence, the whole system "jumps" at once to the gaseous state and a sudden change in water's properties occurs. That is what makes boiling water different from merely heated water. Notice that no change in the interaction forces was involved in this sudden change.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/723161",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
If angle random walk (ARW) is the integration of white noise [°/s], why is its unit [°/sqrt(hr)] and not [°]?
How can ARW have the unit [°/sqrt(hr)], if it's the integration of white noise which has the unit [°/s]? Shouldn't ARW be given in [°]?
I don't understand the correlation between these two. Besides, how can I imagine [°/sqrt(hr)]?
| The angular random walk is a Wiener process in the angular dimensions, which means that the increments of the process are independent (uncorrelated) and that the differences follow a Normal Distribution (ND) with 0 mean and variance $t-s$ (with $t>s$),
$$W_{t}-W_{s}\sim\mathcal{N}(0,t-s)$$
Since every ND can be expressed as a scaled & shifted standard ND (mean of 0, unit variance), then the above follows
$$W_{t}-W_{s}\sim\sqrt{t-s}\mathcal{N}(0,1)$$
Taking this as an infinitesimal, we get $\mathrm{d}W\sim\sqrt{\mathrm{d}t}\,\mathcal{N}(0,1)$. Hence, there is a factor of $\sqrt{\text{time}}$ associated with the random variable, rather than simply time or a unitless factor.
Books on stochastic processes will almost surely provide a better answer than the fast-and-loose one here, so it would be worth your time investigating that branch of mathematics (for some pointers on a book, see this physics Q&A)
I think a good way to think of this square-root-time relationship is that the accumulated deviation is not constant and not linear, but somewhere between.
For instance, consider an ARW of $0.2^\circ/\sqrt{\text{hr}}$: then after 1 hour the standard deviation is $0.2^\circ$. After 2 hours, it's $\sqrt{2}\cdot0.2^\circ=0.28^\circ$. Keep on going and after 10 hours, the accumulated deviation is now $0.63^\circ$. Had this been a linear process, we'd have a deviation of $2^\circ$, as depicted in the figure I made with Python below.
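The numbers above are easy to reproduce with a quick Monte-Carlo sketch of the Wiener process (a toy simulation written for this answer, not any standard gyro library):

```python
import math
import random

random.seed(0)
ARW = 0.2            # deg / sqrt(hr)
dt = 0.1             # time step, hr
hours = 10.0
steps = int(hours / dt)
paths = 5000

# Each step adds an independent N(0, 1) * ARW * sqrt(dt) degrees of drift.
finals = []
for _ in range(paths):
    theta = 0.0
    for _ in range(steps):
        theta += random.gauss(0.0, 1.0) * ARW * math.sqrt(dt)
    finals.append(theta)

sim_std = math.sqrt(sum(t * t for t in finals) / paths)
print(sim_std, ARW * math.sqrt(hours))   # both ~0.63 deg after 10 hours
```

The simulated spread grows like $\sqrt{t}$, matching the $0.63^\circ$ figure rather than the $2^\circ$ a linear drift would give.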
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/723280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What percentage of sunlight isn't scattered by the atmosphere? What percentage of sunlight isn't scattered by the atmosphere and instead will arrive at your eyes directly from the sun.
It's been asked here before but a proper answer hasn't been given.
I was thinking about the effects looking directly at the sun would have for someone on the ground relative to someone in space.
| A good approach to the question is the Air Mass Coefficient, widely used in solar/photovoltaic context.
It deals with the scattering and extinction of the solar radiation in visible and near-visible spectrum.
This may or may not be a good measure for you, depending on what use you have for your sunlight. E.g. human eyes have different sensitivity at different wavelengths and the AM coefficient does not deal with the spectral distribution at all.
From the linked article:
Above the atmosphere, the Sun delivers some 1350 watts per square meter.
The best one can hope for at the surface is around 1100 W/m² (noon, summer, clear sky, high in the mountains).
Depending on the particular elevation, time of the day, weather conditions, etc... it can go as low as less than a single watt (e.g. under a violent thunderstorm where you see more scattered than direct light).
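For a rough feel of the direct-beam numbers, here is a sketch using the flat-atmosphere approximation $AM = 1/\cos z$ together with an empirical clear-sky fit often quoted alongside the air-mass coefficient (both the formula and the exponent 0.678 are assumptions taken from that context, not from the figures above; real attenuation depends on weather and wavelength):

```python
import math

SOLAR_CONSTANT = 1353.0  # W/m^2 above the atmosphere

def air_mass(zenith_deg):
    """Flat-atmosphere approximation: AM = 1 / cos(zenith angle)."""
    return 1.0 / math.cos(math.radians(zenith_deg))

def direct_intensity(zenith_deg):
    """Empirical clear-sky fit for the direct beam at sea level."""
    return SOLAR_CONSTANT * 0.7 ** (air_mass(zenith_deg) ** 0.678)

for z in (0.0, 48.2, 60.0, 75.0):
    print(f"zenith {z:5.1f} deg   AM {air_mass(z):5.2f}   "
          f"direct {direct_intensity(z):5.0f} W/m^2")
```

With the Sun overhead this gives roughly 950 W/m² of unscattered direct beam, i.e. around 70% of the extraterrestrial value, falling off quickly as the Sun approaches the horizon.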
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/724116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Brightness of bulbs in Parallel When adding bulbs in parallel, the brightness is brighter than that of series. But does that mean adding bulbs in parallel will increase the brightness of the other bulbs?
My intuition is as follows: When adding a bulb in parallel the current doubles, but that current splits between the two branches such that both bulbs receive the same current and the same voltage, so brightness doesn't increase, but it is still brighter relative to adding bulbs in series. Is this correct?
| You have it mostly correct, except that the parallel bulbs aren't in a branch and the current through each bulb doesn't double. The power generator produces a constant voltage. Both bulbs are part of the same system, so both sit across the full supply voltage and always will.
The difference with the other bulbs comes when you add a branch to a series circuit. Adding bulbs to that branch means there is an additional path for the current to take, so those bulbs will appear less bright, while the current through the bulbs in parallel never changes.
It's like a river. If you plant a tree by a river, it will always get the same amount of water. If the river has a branch veering off, some water will go in that direction but it won't be at the same speed or amount as the larger river. A tree planted along the branch isn't going to get the same amount of water as the original tree since the total amount of water in the river remains the same.
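To make the series-versus-parallel brightness comparison concrete, here is a minimal sketch (assuming an ideal constant-voltage source and identical ohmic bulbs; real filaments change resistance with temperature, so treat the numbers as illustrative):

```python
V = 12.0   # supply voltage of the ideal source, volts
R = 6.0    # resistance of each identical bulb, ohms

# Two bulbs in series: the same current flows through both, the voltage splits.
I_series = V / (2 * R)
P_series_each = I_series**2 * R          # 6 W per bulb

# Two bulbs in parallel: each bulb sees the full supply voltage.
P_parallel_each = V**2 / R               # 24 W per bulb

print(P_series_each, P_parallel_each)
```

Each parallel bulb dissipates four times the power of each series bulb, and adding more parallel bulbs leaves the existing ones unchanged (for an ideal source), which is the point made above.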
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/724239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Why does the opposing force differ in when falling on concrete vs on water in spite of Newton's third law? If a person jumps from the first floor of a building and lands on a concrete surface, they will suffer serious injury because of Newton's third law.
If the same person jumps the same distance and lands in a swimming pool filled with water, however, then there will not be any serious injury.
The person in both cases lands with same amount of force. Why doesn't water offer the same amount of force in return as concrete?
| We should calculate the force required to break the concrete. The force generated by falling from that height is not enough to break the concrete, since its mechanical properties are strong enough to withstand it; that is what we mean by the resistance a solid body offers when you apply a force over some particular area.
Compare this to water: its mechanical properties are very weak relative to a solid, so it is not able to resist the force of the falling body; it shears instead of resisting the way a solid does. If we think of it this way, we may get an answer.
Thank you
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/724438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 9,
"answer_id": 6
} |
Why does this fan with one blade missing rotates counterclockwise while running? Video: Fan with one blade missing rotates while running.
The fan worked just fine until my friend tried to stop the spinning blades with her finger and knocked one off. Now it always rotates counterclockwise when running. Can someone explain in details why? Does this have something to do with the shape of the blades?
|
The red points are the blades' centers of mass. The rotation about the y-axis causes a wind force $F_w$ along the y-axis. The torque about the z-axis, $\tau_z$, causes the fan to rotate.
with
\begin{align*}
\begin{bmatrix}
\tau_{xi} \\
\tau_{yi} \\
\tau_{zi} \\
\end{bmatrix}
=\begin{bmatrix}
r_{xi} \\
r_{yi} \\
r_{zi} \\
\end{bmatrix}\times
\begin{bmatrix}
0 \\
F_w \\
0 \\
\end{bmatrix}\quad\Rightarrow
\end{align*}
$$\tau_{zi}=r_{xi}\,F_w\quad\text{hence }\\
\tau_z=F_w\,\sum_{i=1}^n\,r_{xi}$$
The torque $\tau_z$ is zero only if $\sum_{i=1}^n\,r_{xi}=0$.
For this fan, that is obviously not the case.
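A quick check of the $\sum r_{xi}$ condition (a sketch with the blades idealized as point masses at unit radius, equally spaced):

```python
import math

def sum_rx(n_blades, missing=None, r=1.0, phase=0.0):
    """Sum of blade center-of-mass x-coordinates at a given rotation phase."""
    return sum(r * math.cos(phase + 2 * math.pi * i / n_blades)
               for i in range(n_blades) if i != missing)

print(sum_rx(4))             # complete symmetric set: 0, no net torque about z
print(sum_rx(4, missing=0))  # one blade gone: -1, a nonzero torque arm
```

With all four blades present the sum vanishes at every phase; with one missing it becomes $-\cos(\text{phase})$, a torque arm that oscillates as the fan spins and so keeps pushing the body around.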
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/724588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Why is it easier to raise AC current to high voltage than DC? In my country (and maybe all around the world I don't know) once electricity has been generated, it is then raised to 200k Volts for transportation.
I know this is to reduce the loss. Given $P=UI$ and $P=I^2R$, raising $U$ will lower $I$ and so limit the loss from the Joule effect.
From what I've read, one of the reason electricity is transported in AC is because this is easier/cheaper to raise AC to 200k Volts than if it was in DC.
Why?
| Because voltage is induced by the rate of change in the magnetic field.
If we tried to build a DC transformer, then to maintain a nonzero rate of change the magnetic field would have to increase without bound. This is clearly impossible, for two reasons.
*
*It would imply the input current increasing forever, which is clearly impossible.
*Ferromagnetic materials undergo a phenomenon known as saturation: if the magnetic field gets too strong, the relative permeability drops like a stone.
The result is we simply cannot build a DC voltage converter using static electromagnetic components alone. We need to resort to either moving parts or electronics.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/724715",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 5,
"answer_id": 3
} |
Nature of tangential friction force When a ball rolls down a rough slope, the frictional force acts tangent to the ball and causes the angular acceleration of the ball, but at the same time the frictional force acts to reduce the translational acceleration of the ball. How is this possible when the frictional force acts only tangentially and not through the centre of mass of the ball?
|
How is this possible when the frictional force is acting only
tangentially and not through the centre of mass of the ball?
It is possible because the static friction force that enables rolling (without slipping) gives the ball both rotational kinetic energy and translational kinetic energy of its center of mass (COM). The sum of the rotational kinetic energy and translational kinetic energy, given a ball that begins rolling from rest, equals the loss of gravitational potential energy.
If the slope were frictionless, the ball would slide down the surface without rolling and all of its KE would be the translational KE of its COM. Consequently, the acceleration of the COM would be greater if the ball slides down a frictionless slope without rolling than if it rolls down a slope without slipping due to static friction, simply because all of its KE is the KE of its COM, all other things being equal.
For the above reasons, the COM of a ball sliding down a frictionless slope will reach the bottom sooner than the COM of a ball rolling without slipping on a slope with friction.
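To make the comparison quantitative, here is a small sketch (assuming a solid uniform sphere with $I = \tfrac{2}{5}mr^2$, and an arbitrary 30° slope of length 2 m):

```python
import math

g = 9.81
theta = math.radians(30.0)
length = 2.0                                   # slope length, m

a_slide = g * math.sin(theta)                  # frictionless: pure sliding
a_roll = g * math.sin(theta) / (1 + 2 / 5)     # rolling without slipping

t_slide = math.sqrt(2 * length / a_slide)
t_roll = math.sqrt(2 * length / a_roll)
print(a_slide, a_roll)    # 4.905 vs ~3.504 m/s^2
print(t_slide, t_roll)    # the rolling ball arrives later
```

The sliding ball accelerates exactly $7/5$ times faster, because for the rolling ball $2/7$ of the released potential energy goes into rotation instead of translation.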
Hope this helps.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/724936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Question Regarding The Movement of Charges We know that two electrons repel each other since they have like charges, which means they move in opposite directions. But how can they move if they exert equal and opposite forces on each other? Aren't the forces balanced, which would mean there is no movement?
| Let $-q_1$ be placed at x = 0. Let $-q_2$ be placed at x = 1.
Charge 1 exerts a force on charge 2 via Coulomb's law. The only force exerted on charge 2 comes from the electric field of charge 1, i.e. the only force on charge 2 is a repelling force due to charge 1. The equal and opposite reaction force acts on charge 1, not on charge 2, so the pair never cancel on the same body. Because the repelling force is the only force exerted on charge 2 and no other force is present to balance it, charge 2 accelerates and moves due to the force.
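To make it concrete with numbers (standard constants; the one-metre separation is chosen purely for illustration):

```python
k_e = 8.9875e9        # Coulomb constant, N m^2 / C^2
q = -1.602e-19        # electron charge, C
m_e = 9.109e-31       # electron mass, kg
r = 1.0               # separation, m

F = k_e * q * q / r**2    # force on charge 2; positive product -> repulsive
a = F / m_e               # unbalanced force -> acceleration (Newton's 2nd law)
print(F, a)               # ~2.3e-28 N, ~2.5e2 m/s^2
```

The forces on the two electrons are equal and opposite, but each acts on a different electron, so each electron feels one unbalanced force and accelerates away.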
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/725108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Illuminance Formula This page says illuminance is
$$E=\frac{I}{L^2} \cos \alpha$$
This page does something similar, but it ignores the $\cos \alpha$ factor. Which is the correct formula?
Note: I don't have a physics background. I was looking at optimization problems in Calculus (which is why I came across the first page).
| I believe the first formula is the correct illuminance. It takes into account the angle of the table's surface. The second definition seems to be for luminance or something similar. See this page, for example.
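A small sketch of the first formula with made-up numbers (a lamp of intensity $I = 100$ cd at height 2 m above a horizontal table), showing why the $\cos\alpha$ factor matters for the optimization problems the question came from:

```python
import math

I = 100.0     # luminous intensity of the lamp, candela (assumed)
h = 2.0       # lamp height above the table, m

def illuminance(x):
    """Illuminance at horizontal distance x from the point below the lamp."""
    L = math.hypot(h, x)                 # distance from lamp to the point
    cos_alpha = h / L                    # angle between the ray and the normal
    return I / L**2 * cos_alpha

print(illuminance(0.0))   # directly below: I / h^2 = 25 lux
print(illuminance(2.0))   # off to the side: less, from both 1/L^2 and cos
```

Without the $\cos\alpha$ factor the formula would overstate the illuminance of light arriving at a grazing angle, which is exactly what the classic "how high should the lamp hang" optimization exploits.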
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/725265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
What is the analogy of cross section for particle decays? So if two particles are fired at each other the chance they interact is the cross section of the interaction. What is the equivalent term for the chance that a particle decays into certain particles? If it is the branching ratio, then what does the decay width mean?
| Here is how the cross section has been used in particle physics.
Early collision experiments were intended to measure the size of particles from their collision rate. Rutherford’s experiment, which collided alpha particles and gold nuclei in 1911, revealed that nuclei are much smaller than previously supposed. But soon, disparities arose
.....
Even though hard spheres is the wrong mental image, the term “cross section” stuck,
and the quantum probability of interaction is necessary to calculate quantum particle interaction cross sections.
There exists a calculable quantum probability distribution for the lifetime of a particular decay, as can be seen here. The decay width $\Gamma$ is the corresponding measure for decays: it is related to the mean lifetime by $\tau=\hbar/\Gamma$, while the branching ratio is the fraction of decays into a particular final state, i.e. that channel's partial width divided by the total width.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/725417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is turbulence more likely to form with the Euler equation as opposed to Navier-Stokes? The Euler equation models perfectly inviscid fluids. Under this assumption, with $\nu = 0$, the Reynolds number should be infinite. I would guess that this implies the Euler equation is always turbulent, but this is not the case as in practice it is used to model regular (low viscous) fluids. Why does this occur?
And second, does this imply that the Euler equation is always more turbulent than the Navier-Stokes, which accounts for viscosity?
| Wiki states on turbulence:
Turbulence is the time-dependent chaotic behaviour seen in many fluid flows. It is generally believed that it is due to the inertia of the fluid as a whole: the culmination of time-dependent and convective acceleration;
So every continuum equation which includes material derivative of flow velocity term :
$$ {\frac {\mathrm {D} \mathbf {u} }{\mathrm {D} t}}= {\frac {\partial \mathbf {u} }{\partial t}}+\mathbf {u} \cdot \nabla \mathbf {u} $$
includes time-dependent flow velocity change and convective acceleration term (flow velocity gradient) and is able to describe how flow parcel speed changes along travel trajectory with passage of time. Thus by definition, continuum equations with flow velocity material derivatives can describe flow turbulence, including but not limited to Navier–Stokes equations and Euler equations. So your intuition based on high Reynolds number in Euler equations was right.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/725566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 0
} |
Can I really see what is on the opposite side of a black hole? This question is only about objects outside the event horizon. Both the observer and the object are just outside the event horizon.
I have read this question:
An observer can see the back side of the neutron star to some extent and can actually see the whole of the neutron star surface if the radius is below 1.76 times the Schwarzschild radius for its mass, $r_s = 2GM/c^2$. See https://physics.stackexchange.com/a/350814/43351 for details and some attempts to visualise this. e.g.
A neutron star with a radius less than $1.5r_s$ would distort a background star field in a similar way to a black hole, including the photon ring at an apparent radius of $2.7r_s$ caused by unstable circular photon orbits at $1.5r_s$.
Neutron star accurate visualization
Now based on this answer, the neutron star can bend light in a similar way to a black hole in certain cases, and using visible light, the whole surface might be visible, which means we can get information from the other (opposite) side of the object via visible light. Now visible light is just EM waves, like radio signals, and my question is about whether we can similarly receive visible light from someone on the opposite side of the black hole (because, as the answer explains, the paths of these EM waves are so bent that they can actually go around the object).
Question:
*
*Can I really see what is on the opposite side of a black hole?
| There is this new video showing the effect on the Messier 87 black hole.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/725728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Methods in Numerical Relativity I have been reading Masaru Shibata's book Numerical Relativity to grasp some of the ideas behind the methods used. I see that at the heart of the method the system of differential equations is "converted" into a set of algebraic equations by means of the finite difference method, and this works fairly well.
On the other hand, comparing with other areas of physics and engineering, I know from my own experience that the finite element method has many advantages over the finite difference method, and, in fact, that is one of the reasons why in many applications it is used much more.
I wonder why this is not also the case in numerical relativity. I suspect the following difficulty: in two and three dimensions there are very good algorithms to divide the domain where the system of differential equations is solved into a partition or set of "finite elements", but in 4D this collides with certain difficulties. I understand that if these practical issues were solved, simulations in numerical relativity could be done using the finite element method, taking advantage of all its benefits.
I would like to know what people who know the subject think and see if they share my suspicion that the difficulty of constructing a proper mesh in 4D is part of the problem.
| One reason is inertia. FDM has worked (and continues to work) for over 50 years, and a lot of development has been done in that time to optimize and build on the early codes. Is there any indication FEM will be better than what we already have? Not that I'm aware, so it would be a huge task to get back to the cutting edge starting from scratch with FEM, and there is no guarantee it will be any better. You risk reinventing the wheel and making it square
You could argue that spectral methods and FEM are the same thing, and spectral methods are quite popular. But any "new" method has to fight against the proven success of finite differences
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/725876",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
How is Newton per meter Cubed related to Newton per meter squared (=Pascal)? Is there a way to relate $\frac{N}{m^3}$ to $\frac{N}{m^2}$?
| I have used force density ($N/m^3$) in calculations of optical forces. That is, when light interacts with a three-dimensional object, how is the optical force distributed in space? A total force ($N$) is applied to the whole object, and then that force is distributed through the object as a density. If there is a direction of interest (say, defined by the normal to a surface), then the optical force density could be integrated along that dimension, resulting in a radiation pressure ($N/m^2$).
So, you relate force density to pressure by integrating the force density over a spatial dimension. Conversely, the force density is the spatial derivative of the pressure. It’s up to you to define exactly how that derivative is calculated based on what is relevant to your problem.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/726122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Questions about Maxwell's demon I've been reading about Maxwell's demon and the current accepted solution for it (deleting information results in an increase in entropy), but there are two things I don't understand about the solution.
*
*Suppose the demon has a large enough memory to store all the information about the system, making deletion unnecessary. Wouldn't the 2nd law of thermodynamics be broken in that case?
*But even that aside, there is necessarily some amount of time between when the data is stored and when it is deleted, so wouldn't the 2nd law be broken during that period of time?
| The Demon's memory store acts like an entropy reservoir. In the process of measuring the speed of each molecule, the Demon reproduces the random pattern of fast and slow gas molecules on either side of the barrier in the memory store, so the entropy for the entire system is exactly the same. When the Demon deletes the data, it is returning it from $2^n$ states to one state, and thus reducing the entropy in exactly the same way as separating the gas molecules would.
For the sake of illustration, let's suppose the Demon is using an abacus to store the data. Initially, the gas is in one of $2^n$ states, while the abacus (initially empty) is in one of one possible states. We have $n$ bits of entropy.
Now the demon measures the speed of each molecule and checks it against the threshold, gaining one bit of information per measurement. It uses this information to sort the molecules. So now the gas is in one of only one possible states, and the abacus is in one of $2^n$ possible states, exactly reproducing (the relevant part of) the initial state of the gas. We can think of the beads on the abacus wires as molecules in one of two compartments, at the top and bottom of the abacus.
The number of states of the system as a whole (gas + Demon) is exactly the same: $2^n$. The process is reversible. The entropy has neither increased nor decreased, and the 2nd law of thermodynamics has not been violated.
When the information is deleted, the Demon has to go through the beads, check whether each is at the top or bottom of the wire, and push it in the appropriate direction to cancel out the information. It is doing exactly the same sort of task as it was in sorting the gas molecules: going from one of $2^n$ states to a single defined state. And so, for exactly the same reason, it has to transfer the information gained from looking at each bead-bit somewhere else, into some other reservoir of entropy.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/726346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Destroying a black hole Is there any (known? theoretical?) way to destroy a black-hole?
*
*"Destroy" means forcing it to disappear - before it evaporates through Hawking radiation.
*"Disappear" means that it stops being a black-hole: no more event horizon, no more impossibility for light to escape it, etc - it becomes just a "regular" object of mass or loses the mass completely. (i.e. releases its mass to energy or loses its properties in some other way)
*"Before" means any time before it would fizzle away through Hawking radiation. Even if it's achieved a split-second earlier, it's a win.
| The standard definition of a black hole in classical GR is that it has an event horizon. By that definition, there is no way to convert the stuff that has fallen into the hole to other stuff that can then be observed from infinity. That would just mean that the spacetime never met the definition of being a black hole spacetime.
If you had something that formed a singularity by gravitational collapse, but the singularity was observable from infinity (at any time, even much later), then that might be somewhat like what you're describing, although it wouldn't be a black hole by the standard definition. However, the statement that that doesn't happen is the cosmic censorship conjecture. (What Aslan Monahov's answer describes sounds like the kinds of scenarios that have been cooked up in attempts to find counterexamples to cosmic censorship.)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/726836",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Why is current finite for point charges? If an electron passes through a flat plane, then there will only be a single point in its entire path which lies on the plane,i.e the entire charge of an electron passes through in an instant (as it is a point charge), then why isn’t the current infinite at that instant and zero at all the others?
| Yes, a single point particle with finite charge crossing a control plane means infinite current through that plane, in that instant of time. However, this infinite current does not last for any finite amount of time; it is there only for that instant, i.e. a zero time interval.
If there are more such particles, we have a current that is zero most of the time, and infinite at a few special instants.
Infinite current for zero time is not really a problem. If it bothers you, don't think in terms of instantaneous current, but in terms of average current, i.e. charge transported through the control plane during some chosen unit of time. This average current is usually finite.
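To make the averaging concrete, here is a small sketch that counts crossings in a time window and divides by its length (the list of crossing times is made-up illustrative data):

```python
E_CHARGE = 1.602176634e-19  # elementary charge in coulombs (exact in SI)

def average_current(crossing_times, t_start, t_end):
    """Average current through the control plane: count the electrons
    that crossed during [t_start, t_end) and divide the total charge by
    the window length.  Each instantaneous spike contributes a finite
    charge but zero duration, so the average stays finite."""
    n = sum(1 for t in crossing_times if t_start <= t < t_end)
    return n * E_CHARGE / (t_end - t_start)
```

Ten electrons per second gives an average current of ten elementary charges per second, no infinities involved.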
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/727199",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Circular motion of two bodies: how to determine when they meet up again?
Let's say that there are two satellites, one of them moves in the red orbit and the other one in the black one. At time $t_0$ they start together at the green point. How can I set up equations to deduce when they are going to meet each other again? Mass is not important.
| If you use Kepler's third law, $T^2/a^3=\textrm{const},$ you will know the relative orbital periods from the relative sizes of the semi-major axes of the two orbits. You can then look for whole numbers $n$ and $m$ satisfying $nT_1=mT_2$; the smallest such pair gives the time of the next meeting. If there are no whole numbers $n$ and $m$ that make the equation true, they will never meet exactly again.
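A sketch of that whole-number check in code (the function name, tolerance, and denominator cutoff are my own choices; the orbits are assumed to share a focus so Kepler III applies directly):

```python
from fractions import Fraction

def first_meeting(a1, a2, T1=1.0, max_den=1000, tol=1e-6):
    """Return the first time both satellites are simultaneously back at
    the starting point, or None if T1/T2 is not a ratio of whole numbers
    (with denominator up to max_den).  Kepler III: T2 = T1*(a2/a1)**1.5."""
    T2 = T1 * (a2 / a1) ** 1.5
    ratio = Fraction(T2 / T1).limit_denominator(max_den)
    n, m = ratio.numerator, ratio.denominator  # T2/T1 ~ n/m in lowest terms
    if abs(m * T2 - n * T1) > tol * T1:        # no small whole-number ratio
        return None
    return n * T1                              # == m * T2, the first meeting
```

For example, semi-major axes in the ratio $a_2/a_1 = 2^{2/3}$ give $T_2 = 2T_1$, so they meet again after satellite 1 completes two orbits; for $a_2/a_1 = 2$ the period ratio is $\sqrt{8}$, which is irrational, and they never meet exactly again.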
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/727318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
How do you visualize the electric field exciting this vibration in a molecule? This image is very common in chemistry, where most people don't really visualize the electric field that produces molecular excitation.
What would be a good first picture to think about? Maybe it could be classically reduced to a dipole interacting with light, correct?
Is there a not overly complex way to understand how would light make a dipole to vibrate ? (I am not a physicist.)
| In the picture the wave is monochromatic and is described by the formula
$$E(x,t)=\sin(\omega t-kx)$$
So if we substitute the coordinate of the dipole, $x_d$, we obtain
$$E(t)=\sin(\omega t-kx_d)=\sin(\omega t + \Delta\varphi),$$
where $\Delta\varphi=-kx_d$ is a constant phase offset set by where the dipole sits along the wave. I think the best way to visualize it is to draw the field vectors, which point in the direction along which the dipole will be oriented:
https://qph.cf2.quoracdn.net/main-qimg-61a9b7da3df652e758498852a14e2101
You should also draw the dipole's orientation from the negatively charged side to the positively charged one.
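As a check on the phase bookkeeping, evaluating $E(x,t)=\sin(\omega t-kx)$ at a fixed dipole position reduces it to a pure oscillation in time (the numerical values are arbitrary illustrative choices):

```python
import math

def field_at_dipole(t, x_d, omega=1.0, k=1.0):
    """Driving field E(x,t) = sin(omega*t - k*x) evaluated at the fixed
    dipole position x_d.  The dipole sees sin(omega*t + phi), a pure
    oscillation in time with phase offset phi = -k*x_d."""
    return math.sin(omega * t - k * x_d)
```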
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/727807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If resistance in an electric circuit is 0 (ideally) then would there even be current flow? From my understanding batteries are used to charge electrons with electric potential which they then use to do work on resistors in a circuit. After doing work the electrons return to the opposite terminal with less potential energy, and the difference between the two potential energies is how batteries create voltage. If resistance is 0, meaning the electrons can't do work on the circuit, thus their potential energy remains the same, the battery wouldn't be able to create voltage. Wouldn't that mean that electrons would be unable to move from one pole to the other?
| No, in fact it would be quite the opposite. The current would be so high that all of the voltage would be dropped over the internal resistance of the battery. Thus the terminals would be at the same voltage even with the large current.
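A quick sketch of that argument with Ohm's law (the EMF and resistance values are made up for illustration):

```python
def terminal_voltage(emf, r_internal, r_load):
    """Battery with internal resistance r driving a load R:
    I = emf / (r + R), and the terminal voltage is emf - I*r.
    As R -> 0 the current is limited only by r and the terminal
    voltage collapses to zero."""
    current = emf / (r_internal + r_load)
    return emf - current * r_internal
```

With a 1.5 V cell and 0.1 Ω internal resistance, shorting the terminals ($R=0$) drops the terminal voltage to zero while the current jumps to 15 A - all of the EMF appears across the internal resistance.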
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/727979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why Is Capacitance Not Measured in Coulombs? I understand that the simplest equation used to describe capacitance is $C = \frac{Q}{V}$. While I understand this doesn't provide a very intuitive explanation, and a more apt equation would be one that relates charge to area of the plates and distance between them, I'm having trouble understanding it in general. Capacitance seems to be describing, well, the capacity of two plates to store charge (I understand that the electric field produced between them is generally the focus more so than the actual charge). Shouldn't it just be measured in units of charge such as coulombs? I'm sure this is due to a lack of more fundamental understanding of electric potential and potential difference but I'm really not getting it.
| Capacitance, as you describe it, is the capacity to store charge - it's not charge itself. So why would you expect it to be measured in units of charge?
For SI Units, it has been decided to measure every physical quantity in terms of only 7 base units, namely, second, meter, kilogram, ampere, kelvin, mole and candela. All other units, called derived units, are to be defined using how they are related to two or more of the base units.
Since charge is current multiplied by time, its SI unit is $A.s$. Next, since electric potential is potential energy divided by charge, its SI unit becomes $kg.m^2.s^{-2}/(A.s) = kg.m^2.s^{-3}.A^{-1}$, which is named the volt to honor Alessandro Volta. Now, since capacitance is charge divided by potential (difference), its SI unit becomes $A^2.s^4.kg^{-1}.m^{-2}$, which is named the farad to honor Michael Faraday.
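The chain of definitions above can be checked mechanically by tracking base-unit exponents; the tuple representation below is just a toy bookkeeping device I'm assuming, with exponents ordered as (kg, m, s, A):

```python
def mul(u, v):
    """Multiply two quantities: add their base-unit exponents."""
    return tuple(a + b for a, b in zip(u, v))

def div(u, v):
    """Divide two quantities: subtract their base-unit exponents."""
    return tuple(a - b for a, b in zip(u, v))

# Exponents of (kg, m, s, A)
AMPERE  = (0, 0, 0, 1)
SECOND  = (0, 0, 1, 0)
JOULE   = (1, 2, -2, 0)           # energy: kg.m^2.s^-2
COULOMB = mul(AMPERE, SECOND)     # charge = current x time
VOLT    = div(JOULE, COULOMB)     # potential = energy / charge
FARAD   = div(COULOMB, VOLT)      # capacitance = charge / potential
```

Running this reproduces the farad as $kg^{-1}.m^{-2}.s^4.A^2$, matching the hand derivation.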
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/728239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 7,
"answer_id": 3
} |
Partial derivative of momentum with respect to position in Poisson bracket representation The representation of a Poisson bracket is given by the following equation:
$$\tag{1} \{f,g\} = \sum_{s=1}^n \sum_{i=1}^{d=3}\left ( \frac{\partial f}{\partial x_i^{(s)}} \frac{\partial g}{\partial p_i^{(s)}} - \frac{\partial f}{\partial p_i^{(s)}} \frac{\partial g}{\partial x_i^{(s)}}\right),$$
where $n$ is the number of particles, and $d$ is the number of dimensions.
Assume we have an arbitrary Hamiltonian $H$ (possibly explicitly time-dependent). Then according to the Hamilton equation we have:
$$\frac{d p_j^{(r)}}{dt} = \{p_j^{(r)}, H\} + \frac{\partial p_j^{(r)}}{\partial t} =\{p_j^{(r)}, H\}.$$
Using the representation given in (1) we can show that $\frac{d p_j^{(r)}}{dt} = -\frac{\partial H}{\partial x_j^{(r)}}$.
In the process of derivation, I get that
$$\tag{2} \frac{\partial p_i^{(r)}}{\partial x_i^{(r)}} = 0.$$
I don't understand why the quantity in (2) is zero. Can't the momentum depend on time as well as on $x$?
P.S.
Unfortunately, I don't have any physics background whatsoever, so I will appreciate an intuitive answer or a mathematical proof which does not rely on Lagrangian mechanics.
| The variables $x_i$ and $p_i$ are independent arguments of the Hamiltonian $H(x_i, p_i)$. Since they are independent variables, the partial derivative of one w.r.t. the other is identically zero.
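Concretely, once the arguments are treated as independent, the only nonzero partials of $p_j^{(r)}$ are $\partial p_j^{(r)}/\partial p_i^{(s)} = \delta_{ij}\delta_{rs}$, and the bracket in (1) collapses:
$$\{p_j^{(r)},H\} = \sum_{s=1}^n \sum_{i=1}^{d}\left(\frac{\partial p_j^{(r)}}{\partial x_i^{(s)}}\frac{\partial H}{\partial p_i^{(s)}} - \frac{\partial p_j^{(r)}}{\partial p_i^{(s)}}\frac{\partial H}{\partial x_i^{(s)}}\right) = -\sum_{s=1}^n \sum_{i=1}^{d}\delta_{ij}\,\delta_{rs}\,\frac{\partial H}{\partial x_i^{(s)}} = -\frac{\partial H}{\partial x_j^{(r)}},$$
where the first term vanishes precisely because of (2).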
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/728359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Varying energy density of photons? I know photon energy density is proportional to the fourth power of the scale factor, because it dilutes and redshifts.
I want to take into account the added photon energy density from astrophysical sources along the scale factor to the CMB energy density from the beginning of the age of the universe. I know for most of the cases, this one can be disregarded, but I still want to calculate it assuming a certain rate of energy density production proportional to the scale factor.
How can I introduce the creation of photons along the universe scale factor in the Friedmann equations?
| For self-consistency, we need the energy to come from somewhere. For astrophysical sources, it of course comes from matter. In that case you can assume that there is some energy exchange rate $\Gamma$ such that
$$
\frac{d}{dt}\rho_m + 3H\rho_m = -\Gamma\rho_m,
\\
\frac{d}{dt}\rho_r + 4H\rho_r = \Gamma\rho_m,
$$
where $\rho_m$ and $\rho_r$ are the energy density of matter and radiation, respectively. $\Gamma$ has dimensions of inverse time and can be viewed as the "decay rate" of matter. It can be time dependent, if you like; if it is constant, that just means that a given mass of matter always produces radiation at the same rate.
The above equations, together with the first Friedmann equation, are now a coupled system of differential equations that you can solve (probably numerically) to obtain $\rho_m$, $\rho_r$, and $H$ as functions of time.
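As a numerical sketch of that system (pure-Python forward Euler; the units are chosen so that $8\pi G/3 = 1$, and the initial data, decay rate, and step size are all illustrative assumptions):

```python
def evolve(rho_m, rho_r, gamma, t_end, dt=1e-4):
    """Integrate the coupled system
        drho_m/dt = -3*H*rho_m - gamma*rho_m
        drho_r/dt = -4*H*rho_r + gamma*rho_m
        da/dt     =  a*H,   with H = sqrt(rho_m + rho_r)
    (first Friedmann equation in units with 8*pi*G/3 = 1) by forward
    Euler, and return (rho_m, rho_r, a) at time t_end."""
    a, t = 1.0, 0.0
    while t < t_end:
        H = (rho_m + rho_r) ** 0.5
        dm = -3.0 * H * rho_m - gamma * rho_m
        dr = -4.0 * H * rho_r + gamma * rho_m
        rho_m += dt * dm
        rho_r += dt * dr
        a += dt * a * H
        t += dt
    return rho_m, rho_r, a
```

With $\Gamma = 0$ the comoving matter energy $a^3\rho_m$ stays constant, while any $\Gamma > 0$ drains it into radiation at the rate $d(a^3\rho_m)/dt = -\Gamma a^3\rho_m$, as the equations above require.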
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/728750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |