Q | A | meta
---|---|---|
The value of $g$ in free fall motion on earth When we release a heavy body from a height above the Earth, we get the value $g=9.8 \ ms^{-2}$. Now, I'm confused about what it means. For example, does it mean that the body's speed increases by $9.8 \ m/s$ every second? Or does it mean that the speed of the body is $9.8 \ m/s$?
| The other answers here (@Thomas Fritsch and @AWanderingMind) are perfectly right; to see it plainly: $g$ is an acceleration, and acceleration is the change of velocity with time, i.e. velocity per unit time, just as velocity itself is distance per unit time. So in free fall the body's speed grows by $9.8 \ m/s$ during every second of the fall.
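A minimal numerical sketch of this reading of $g$ (assuming no air resistance and a body starting from rest; the drop duration is arbitrary, just for illustration):

```python
g = 9.8  # m/s^2: the speed gained per second of free fall

# speed after each of the first 5 seconds of free fall, starting from rest
for t in range(1, 6):
    v = g * t  # v increases by 9.8 m/s every second
    print(f"after {t} s: v = {v:.1f} m/s")
```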
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/716376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
} |
Double slit experiment: Are electrons interacting with other electrons to create a wave? Assume a double slit experiment with electrons and no observer (light source). Can the wave-like behavior and the resulting interference pattern be explained by the idea that the single electron being shot doesn't really travel to the detector, but interacts with other electrons in the medium (e.g. air) between the source (electron gun) and the target (detector), and this creates the wave? I imagine it as if the shot electron is repelled by other electrons, which in turn repel other electrons, and so forth.
Furthermore, consider the case of a light source acting as an observer. Could it interact electromagnetically with all these electrons between the source and target in such a way that the electrons no longer move freely and repel each other, but instead act as a contiguous block, where the shot electron hits the first electron, which transfers the energy to the second electron, then to the next, until the last electron hits the detector, similarly to Newton's cradle?
| No, not like Newton's cradle. Experiments with electron beams are done in vacuum.
The EM field is responsible for the interaction of the electron with its surroundings (the starting electrode, the slit, the detector, the walls of the chamber, etc.). The EM field fills all space! A famous picture is from Richard Feynman: he stated that every photon determines its own path, and the same can be said for the electron. Following Feynman's theory for light, we realize that photons prefer some paths (higher probability) and do not prefer other paths (lower probability). Feynman used his path-integral method to calculate the paths, and it agrees with the double-slit experiment (DSE). In the DSE for photons there are NO photons landing in the dark areas; all photons land in the bright areas, and the same is true for electrons.
The EM field is very dynamic and acts at the speed of light; even before the excited electron leaves the electrode to begin its path, many forces are already occurring, and these forces influence the path.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/716763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Does melting a metal affect its electronic band structure? Given that the band structure of a metal emerges from the periodicity of the crystalline lattice and the corresponding symmetry arguments, what happens to the band structure as the metal is melted into its liquid state? Would some form of a limited band structure remain due to atomic clustering, or would this not exist at all?
In addition, would melting the metal completely remove interband transitions due to the degradation of the band structure?
| What really emerges from the periodicity of the crystalline lattice and symmetry arguments is the so-called dispersion of the energy bands, i.e., the introduction of a (vector) parameter for the electronic states, the wavevector ${\bf k}$, labeling the electronic states that are simultaneously eigenstates of the Hamiltonian and the lattice translation operators.
The concept of energy bands is related but more general than the ${\bf k}$-space dispersion. It clearly emerges from any experiment probing the energy density of states (EDOS) in amorphous and liquid systems, as well as in their crystalline phases.
In every system, including metallic systems, the melting may affect the EDOS to a greater or lesser extent, depending on the ionic rearrangement accompanying the melting. For example, in metallic systems like Nickel, Copper, or Gold, the passage from a compact fcc crystalline structure to a dense simple-liquid structure, almost preserving the number of nearest neighbors, does not introduce dramatic changes in the EDOS. Of course, features specifically related to the ${\bf k}$-space dispersion, like the van Hove singularities, disappear.
In other cases, like Si and Ge, which undergo a transition from an open crystal structure and semiconducting behavior to a more highly coordinated metallic liquid, the EDOS shows the closing of the gap.
An example of the changes induced by the melting on the EDOS can be seen in calculations like this.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/717414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How can quantum tunneling happen conceptually? I have read in Griffiths' Quantum Mechanics that there is a phenomenon called tunneling, where a particle has some nonzero probability of passing through a potential even if $E < V(x)_{max}$.
What I don't understand about this is how to conceptualize how this can happen. I have read on Wikipedia that tunneling means that objects can "in a sense, borrow energy from their surroundings to cross the wall". How can the object "know" that across the wall there's going to be a lower energy and, thus, that the borrowed energy will be restored and not depleted?
| You're treating quantum objects as balls. This is misleading. When working with quantum mechanics, picture the object as the entire wave. So tunneling happens because part of the wave passes through the potential. If there isn't a lower energy across the wall, the wave won't pass.
I like to imagine it as a water wave: it might leak through if there is nothing blocking it behind the wall.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/717902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 3,
"answer_id": 2
} |
Does dusk really remain for a shorter period of time at the equator? It is said that dusk lasts for a shorter time at the equator than at the poles, because the equator rotates faster than the poles. But it is also true that a day is the same length at every latitude, and if that's true, then dusk should last the same at the equator as at the poles. So, does dusk really last for a shorter period of time at the equator?
| Dusk is shorter there because the Sun typically takes a steeper (more nearly vertical) trajectory through the sky at the equator, and so it crosses the horizon more steeply and thus faster.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/718044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 4,
"answer_id": 0
} |
Why doesn't the variation of resistivity with temperature go both ways? I've learnt that the variation of resistivity with temperature for a conductor is:
$\rho=\rho_0(1+\alpha (T−T_0))$
Let's consider resistivity at 0℃ and 100℃.
When heating the conductor from 0℃ to 100℃,
$\rho_{100}=\rho_0(1+\alpha (100-0))$
$\alpha=\displaystyle \frac{\rho_{100}-\rho_0}{100\,\rho_0}$
Now, when cooling the conductor from 100℃ to 0℃,
$\rho_0=\rho_{100}(1+\alpha (0-100))$
$\alpha=\displaystyle \frac{\rho_0-\rho_{100}}{-100\,\rho_{100}} = \frac{\rho_{100}-\rho_0}{100\,\rho_{100}}=\frac{\rho_{100}-\rho_0}{100\,\rho_0(1+\alpha(100-0))}=\frac{\alpha}{1+100\alpha}$
Why does this discrepancy exist? Even if the relation only holds for smaller temperature differences, the discrepancy seems to hold, as the new value of $\alpha$ only seems to depend on the old one, as $\displaystyle \frac{\alpha}{1+T'\alpha}$.
| Maybe we can see this as a purely mathematical misunderstanding, and disregard the discussion about whether such a formula is an approximation (there could in principle exist a material for which the linear relationship was exact, at least in some temperature interval).
So more abstractly, the relation:
$$
y = y_0(1 + \alpha x)
$$
(with $\alpha\ne 0$) represents a straight line in the $(x,y)$ coordinate system going through the fixed point $(0,y_0)$ and having a slope $y_0\alpha$. However, the equation:
$$
y = y_{100}(1 + \alpha(x - 100))
$$
describes a straight line in the $(x,y)$ plane going through $(100,y_{100})$ with slope $y_{100}\alpha$.
So the two equations do not describe the same line: the slopes differ (and if $y_0=y_{100}$, so that the slopes coincide, the equations still differ in the constant, 0th-degree coefficient).
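A quick numerical illustration of the asker's discrepancy (a sketch with made-up values for $\rho_0$ and $\alpha$, assuming the linear law is exact):

```python
rho0 = 1.0      # resistivity at 0 °C (arbitrary units)
alpha = 0.004   # temperature coefficient defined with 0 °C as the reference

# heating: resistivity at 100 °C predicted from the 0 °C reference point
rho100 = rho0 * (1 + alpha * 100)

# cooling: coefficient you would infer if you used 100 °C as the reference instead
alpha_back = (rho0 - rho100) / (-100 * rho100)

print(alpha_back)                  # 0.002857... != 0.004
print(alpha / (1 + 100 * alpha))   # same number, exactly as derived in the question
```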
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/718340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 6,
"answer_id": 0
} |
If we apply a magnetic field to a core saturated by a permanent magnet, what will happen? If we apply a magnetic field to a core saturated by permanent magnet, will the magnetic field of the permanent magnet and electromagnet get combined?
I mean to say, will superposition apply?
| Yes, they will add up. The total field will be stronger if they point in the same direction, and weaker if the electromagnet's magnetic field points opposite to the permanent magnet's magnetic field.
But the material inside won't get magnetized further. The magnetic field from an electromagnet and from a permanent magnet is the same kind of field, so if one cannot increase the magnetization further, neither will the other.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/718913",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can the operator field Dirac equation be expressed as Heisenberg's equation? The Dirac equation of the operator spinor field is:
$$(i\gamma ^{\mu}\partial _{\mu} -m)\psi =0$$
where $\psi$ is interpreted to be a quantum field.
I'm wondering, can this be derived from the Heisenberg equation?
$$\frac{d\psi}{dt}=\frac{-i}{\hbar}[\psi, H].$$
I'm in doubt because the above equation has a commutator while the Dirac quantisation involves anti-commutators.
| Suppose you want to change $\psi$ by a "tiny" amount $\delta \psi$. This $\delta \psi$ has to fulfill the same anti-commutation relations as the $\psi$, in particular
$$
\{ \delta \psi, \psi \} = 0
$$
You can generate such a $\delta \psi$ by use of the commutator (not the anticommutator): Suppose for example that
$$
H = \left(\delta \psi\right) \pi
$$
with
$$\{\psi, \pi \} = i
$$
$\psi$ and $\pi$ satisfy the usual anticommutation relations!
Then we can compute:
$$
-i[\psi, H] = -i[\psi, \left(\delta \psi\right) \pi] = -i \{\psi, \delta \psi \} \pi + i\,\delta \psi \{\psi, \pi \} = \delta \psi
$$
Here we use the properties of commutators and anticommutators (which hold in general). Your Hamiltonian (or any other observable that you will use to generate transformations) will in general look different (or involve sums or integrals over more degrees of freedom), but this calculation should illustrate that one can generate anticommuting operators, using the usual commutator from the Heisenberg equation.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/719066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Worldsheet constraint Bosonic String I am currently studying David Tong's notes on String theory and there’s a step taken in writing out the worldsheet constraint in lightcone coordinates $\sigma^{\pm}$ for the closed string that I’m not sure about. We have the constraint eq 1.38 written out on page 26 as
$$(\partial_{-}X)^{2}=\frac{\alpha'}{2}\sum_{m,p}\alpha_{m}\cdot\alpha_{p}e^{-i(m+p)\sigma^{-}}=\frac{\alpha'}{2}\sum_{m,n}\alpha_{m}\cdot\alpha_{n-m}e^{-in\sigma^{-}}.$$
It looks like my $p$ index was changed to $p=n-m$, but I'm unsure how this step is valid considering I have an exponent hanging around. Also, wouldn't this change in $p$ change my summation? How am I able to have a summation over $n$ after this change in $p$? I'm not sure if I'm overthinking this change, but I can't seem to convince myself why this change in $p$ is valid.
| It is just a dummy variable change, from $p$ to $n:=m+p$. Since $m$ and $p$ run through all integers, $n$ also runs through all integers. $\newcommand{\ex}[1]{\mathrm{e}^{#1}}$ Stripping off the physics we have
$$ \sum_{m=-\infty}^\infty\sum_{p=-\infty}^\infty a_m\ b_p \ c_{m+p} = \sum_{m=-\infty}^\infty\sum_{n-m=-\infty}^\infty a_m\ b_{n-m} \ c_{n} = \sum_{m=-\infty}^\infty\sum_{n=-\infty}^\infty a_m\ b_{n-m} \ c_{n}.$$
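A quick finite-range check of this dummy-index relabelling (a sketch: the sequences are random placeholders, and $a_m$, $b_p$ are given finite support so the truncated double sums can be compared exactly):

```python
import random

random.seed(0)
M = 3  # a_m and b_p are taken nonzero only for |index| <= M

a = {m: random.random() for m in range(-M, M + 1)}
b = {p: random.random() for p in range(-M, M + 1)}
c = {n: random.random() for n in range(-2 * M, 2 * M + 1)}  # plays the role of e^{-i n sigma^-}

lhs = sum(a[m] * b[p] * c[m + p] for m in a for p in b)

# relabel p -> n - m: for each m, n = m + p runs over every value with b[n - m] nonzero
rhs = sum(a[m] * b[n - m] * c[n] for m in a for n in c if (n - m) in b)

print(abs(lhs - rhs) < 1e-12)  # True: the relabelling does not change the sum
```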
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/719471",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What's the importance of all four fundamental forces being "curvature"? I've heard about how, in a gauge theory, the gauge covariant derivative of the field around a closed curve is generally not zero, and this is how you can quantify force or field strength. And that this is the same basic idea as curvature, with the gauge field being equivalent to the connection.
So since gravity is already known to be curvature, we can say that all the forces of nature are curvature in their own way. So what's the significance of that? Is there some deeper reason that we should expect that to be the case? And are the current unification programs based on that similarity?
| In the 1920s–1940s, people developed a unified classical theory of gravity and electromagnetism using just this sort of approach. It's called Kaluza-Klein theory. Some aspects of it even generalize to classical non-abelian Yang-Mills theories (R. Montgomery: Canonical formulations of a classical particle in a Yang-Mills field and Wong's equations). I think I've heard there's some subtle problem with its quantum version that prevents it from describing quantum non-abelian Yang-Mills theories well. That might be one reason it's not talked about much these days.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/719756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
} |
How can Entropy be maximal when it is undefined everywhere else? This question is about classical thermodynamics.
I learned that when an isolated system is not in equilibrium, its thermodynamic variables such as Entropy are undefined.
I also learned that when an isolated system is in equilibrium, its Entropy is maximized.
However, both statements together don't make sense. How can a value be maximized, when everywhere else it is undefined? It's not smaller everywhere else, it's undefined! It can't be maximal or minimal because there isn't anything else nearby to compare it with.
So how am I supposed to interpret these statements?
| Entropy could, in principle, have been minimized in equilibrium, or it could have taken some intermediate value. It doesn't. Instead, it's maximized.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/719989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28",
"answer_count": 6,
"answer_id": 5
} |
Transverse component of distortion tensor in GR On pages 164-165 of Eric Gourgoulhon's lecture notes on Numerical Relativity, the author introduces the decomposition (9.49) for the distortion tensor related to a foliation $(\Sigma_t)_{t\in \mathbb{R}}$ with induced metric $\gamma_{ij}$. This tensor is defined as
$$ Q_{ij}= \dfrac{\partial \gamma_{ij}}{\partial t} - \dfrac13 \gamma^{kl}\dfrac{\partial \gamma_{kl}}{\partial t} \gamma_{ij}.$$
Then, the author introduces the decomposition (9.49), i.e.
$$ Q^{ij}= (LX)^{ij} + Q^{ij}_{TT},$$
with $(LX)^{ij}= D^iX^j + D^j X^i - \dfrac23 D_k X^k \gamma^{ij} $, and $X$ is a vector field. By definition, the $LX$ (longitudinal) part of this decomposition is traceless. As $Q_{ij}$ is also traceless by definition, this makes $Q^{ij}_{TT}$ traceless as well. However, it is stated at the top of page 165 that $Q^{ij}_{TT}$ is also transverse, i.e. $D^iQ_{ij}^{TT}=0$.
How can I see that this statement is true? This decomposition first appeared in this paper, in which the author translates the transverse condition into a constraint on the vector field $X$; the paper does not automatically conclude that the transverse-traceless part is indeed transverse.
| The paper you are referring to proves that any symmetric tensor field can be decomposed into transverse-traceless, longitudinal and trace parts, eq. 2 and 7.
$$ \psi_{ab} = \psi_{ab}^{\rm TT} + \psi_{ab}^{\rm Tr} + \psi_{ab}^{L}$$
In the above,
$$ \psi_{ab}^{\rm Tr} = \frac{1}{3}\Psi g_{ab} = \frac{1}{3}\psi_{cd}g^{cd}g_{ab}\\
\psi_{ab}^{L} = \nabla_{a}W_{b} + \nabla_{b}W_{a} - \frac{2}{3}\nabla_{c}W^{c}g_{ab}
$$
and the transverse-traceless part is defined as:
$$ \psi_{ab}^{\rm TT} = \psi_{ab} - \psi_{ab}^{\rm Tr} - \psi_{ab}^{L}$$
The transversality requirement on the TT part
$$ \nabla^{b}\psi_{ab}^{\rm TT} = 0 $$
is equivalent to the vector field in the longitudinal part satisfying a certain equation that involves the trace-less part of $\psi_{ab}$.
$$ \nabla^{b}\psi^{L}_{ab}= \nabla^{b}(\nabla_{a}W_{b} + \nabla_{b}W_{a} - \frac{2}{3}\nabla_{c}W^{c}g_{ab}) = \nabla^{b}(\psi_{ab} -\frac{1}{3}\Psi g_{ab}) $$
If there exists a vector field $W^{a}$ such that the equation is satisfied, it means that the decomposition as outlined is possible.
To answer the last part of your question - demanding that $ \nabla^{b}\psi_{ab}^{\rm TT} = 0 $ leads to an equation for $W^{a}$. If such a $W^{a}$ exists (which is proven under certain assumptions) and we find such a $W^{a}$, we know $\psi^{L}_{ab}$, and thus the TT part from $ \psi_{ab}^{\rm TT} = \psi_{ab} - \psi_{ab}^{\rm Tr} - \psi_{ab}^{L}$. The conclusion is that the original tensor field can indeed be decomposed into these parts.
By the transversality demand (satisfied once you find a suitable $W^{a}$) the transverse-traceless part is indeed transverse.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/720334",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Product notation for operators If I have a Hamiltonian
$$\mathcal{H} = \prod_j^N Z_j$$
where the $j$'s are different sites on a lattice and the $Z$'s are Pauli $Z$ operators, does that mean that the Hamiltonian can also be written as
$$\mathcal{H} = Z_1 \otimes Z_2 \otimes \cdots \otimes Z_N$$
and if they are all Pauli operators could it just be
$$\mathcal{H} = Z ^{\otimes N}$$
| Short answer: yes. Long answer: yes.
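A small numerical check of the equivalence for $N=3$ (a sketch using NumPy; here "$Z_j$" is taken to mean $Z$ acting on site $j$ and the identity elsewhere, and `site_op` is just a hypothetical helper for building that):

```python
import numpy as np

Z = np.diag([1.0, -1.0])
I = np.eye(2)

def site_op(op, j, N):
    """Kronecker product with `op` on site j (0-based) and identities elsewhere."""
    mats = [op if k == j else I for k in range(N)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

N = 3
H_product = site_op(Z, 0, N) @ site_op(Z, 1, N) @ site_op(Z, 2, N)  # Z_1 Z_2 Z_3
H_tensor = np.kron(np.kron(Z, Z), Z)                                # Z ⊗ Z ⊗ Z

print(np.allclose(H_product, H_tensor))  # True
```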
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/720498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Differential charge existing We define current by $I=\frac{\mathrm{d}q}{\mathrm{d}t}$. Here, $\mathrm{d}q$ is the infinitesimal element of charge. But again, we know that charge is quantised, meaning there is a finite value to the smallest amount of charge, which is $e$. Since $\mathrm{d}q$ is infinitely small, $\mathrm{d}q<e$. Then how can $\mathrm{d}q$ charge even exist?
| You're mixing up two descriptions that are, in practice, separate.
$i=dq/dt$ is usually used in macroscopic physics, when it is understood that you don't study actual individual electrons. In fact, most of the corresponding physics laws predate quantum mechanics, even predate the discovery of the electron.
In other words, whether $dq$ is the "small" charge contained in a "small" volume $dV$ or crossing a section during a "small" duration $dt$, it must be understood as containing a mesoscopic number of charge carriers (large compared to unity, small compared to $\mathcal{N}_A$).
In this context, reducing either $dV$ or $dt$ to the point that $dq$ contains only a few electrons makes you step outside the validity of those laws of physics.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/720639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 0
} |
Can plasmas be black bodies? I have recently heard the claim that the Sun cannot be composed of plasma because a plasma cannot be a black body.
I am an uneducated layman, and I've seen a lot of people (laymen) deviate from accepted scientific consensus. I am skeptical, but I don't have enough knowledge about physics to argue it.
| Plasma in many concrete cases is not a black body, e.g. the plasma in Earth's ionosphere, in a discharge lamp, or in a tokamak. This is because the plasma in these cases is very thin (a rarefied gas) and not a good absorber of radiation: there are not enough layers to make it absorb close to 100% of the incoming radiation and make it opaque.
However, if enough plasma layers are present in a plasma "cloud", it can become a good absorber (opaque) and, with additional assumptions, close to a black body in its radiation characteristics (the black body being an idealized theoretical concept that does not exist in reality). The Sun is very large and the density of its plasma increases towards the center, so beyond some large distance into the Sun there is enough plasma between our eyes and the rest of the Sun to make that plasma non-transparent and a very good absorber. This layer of plasma at the "surface" of the Sun, beyond which we cannot see (due to the strong absorption in the layer), is called the photosphere. Its thickness is claimed to be around 300 km, so this is an estimate of how much solar surface plasma is needed to make it opaque.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/720940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Would water flow from the higher container to the lower one? Two identical open-topped containers contain identical amounts of water, and they are connected by a tube at their bases. Container A is higher than Container B. How much water, if any, will flow from Container A to Container B? How does this change if the height of container A increases or decreases?
I left this out initially, because I didn't want to influence answers. I expected the water to reach an equilibrium and stop flowing from container A to B. When I tried running the experiment, water flowed from container A until it was emptied. Can someone explain why this happened? Did this happen because of poor design, or is it expected?
| I like the willingness to experiment!
The result of the experiment is indeed expected. Basically, because there is a connection this is all one body of water. If the surface of a body of water is higher on one side then the water will flow downhill until the surface is level. So here the water will continue flowing until the top surface in each bucket is at the same height. In the photo the bottom bucket is completely below the top bucket, so there is no surface that could be at the same level in both buckets. Thus the water will all flow downhill to the bottom bucket.
This answer assumes that the tube is filled with water. If the tube is initially filled with air then it is a little more complicated. You will usually get a bubble of air that floats up on each side and a blob of water that flows into the tube. Depending on the length and diameter of the tube that may happen several times until the tube is full of water. Then the previous paragraph happens.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/721063",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Why does the graviton polarization satisfy $\epsilon_{ij}(\mathbf{k},\lambda)\epsilon^{ij}(\mathbf{k},\lambda') = 2 \delta_{\lambda\lambda'}$? I am reading the paper ``Graviton Mode Function in Inflationary Cosmology'' by Ng (link here). The graviton $h_{ij}$ is here expanded (in the TT gauge) where
$$
h_{ij}(x) \sim \epsilon_{ij}(\mathbf{k},\lambda) h_{\mathbf{k}}(\lambda,x)
$$
and in equation 9 it is said that
$$
\epsilon_{ij}(\mathbf{k},\lambda)\epsilon^{ij}(\mathbf{k},\lambda') = 2 \delta_{\lambda\lambda'} \ .
$$
Where does this come from? And how is the relation adjusted for differing momenta $\mathbf{k} \neq \mathbf{k}'$, ie. can one write down a relation for $\epsilon_{ij}(\mathbf{k},\lambda)\epsilon^{ij}(\mathbf{k}',\lambda) = \ldots$?
EDIT: Why does one need this condition? Is it so the Lagrangian is properly normalized when written in terms of $h_{\mathbf{k}}$?
| The 2 is conventional, but it makes sense when you think of the simplest form these tensors take when $\hat{k} \propto \hat{z}$:
$$\epsilon_{ij}^+ =\begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & 0 \\
\end{pmatrix}
$$
and
$$\epsilon_{ij}^\times =\begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 \\
\end{pmatrix}
$$
which both satisfy $\epsilon_{ij} \epsilon_{ij} = 2$; if you wanted a 1 there you'd have to rescale both of them by $ 1 / \sqrt{2}$.
As the other answers have stated, this has no physical consequence: rescaling the basis tensors by some factor will just mean the components will be rescaled by the inverse of that factor.
Still, having plain 1s in the tensor looks nice, which is why this convention is chosen.
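A quick numerical check of the normalization and orthogonality of the two tensors written above (a sketch with NumPy; only the spatial block matters here, but the 4×4 form from the answer is used directly):

```python
import numpy as np

eps_plus = np.zeros((4, 4))
eps_plus[1, 1], eps_plus[2, 2] = 1.0, -1.0

eps_cross = np.zeros((4, 4))
eps_cross[1, 2] = eps_cross[2, 1] = 1.0

# epsilon_ij(k, lambda) epsilon^ij(k, lambda') = 2 delta_{lambda lambda'}
print(np.sum(eps_plus * eps_plus))    # 2.0
print(np.sum(eps_cross * eps_cross))  # 2.0
print(np.sum(eps_plus * eps_cross))   # 0.0
```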
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/721366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Why does a flapping rudder produce net thrust if one half-stroke produces thrust and the other half-stroke produces drag? In a small sailing boat like an Optimist there is a well-known technique for when there is no wind: rudder pumping, which pushes the boat forward. You just need to push-pull the rudder stick left to right with fast movements.
The rudder works completely under the hull, so there is no pressure interaction between the stern and the rudder.
The forward half-stroke is when the rudder rotates from the centerline to the left or right
(from 2 to 1 or from 2 to 3).
Why does a stiff rudder (not flexible like flippers) produce net thrust if the forward half-stroke produces drag?
(Or maybe the forward half-stroke produces thrust as well? I don't think so.)
Please explain your answer with the pressures on the two sides of the rudder for two conditions:
*
*boat speed zero
*boat is moving
Avoid Newton's third law.
| Below the horizontal line is my original answer, submitted 5 hours ago, but there is a better explanation that I overlooked.
In a comment to another answer Gordon McDonald points out that since the rudder hinges right at the stern the rear edge of the rudder sweeps out a sector of a circle. That alone will tend to result in pushing water rearward. Most likely that effect is the main factor.
When the stiff rudder is moved side-to-side it creates vortices in the water. It could be that an overall effect of the side-to-side motion is to create pairs of vortices that are constantly being shed off the rudder edge.
Hypothesis:
The side-to-side motion of the (stiff) rudder tends to continuously create vortex pairs that detach from the rudder. The vortex pair extending from the rudder will tend to lag behind when the rudder is swung the other way, making the vortex pair act somewhat like a very short flipper. That would make the side-to-side motion of the rudder act as a very inefficient flipper.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/721517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Problem trying to graphically represent a 2D vector given angle and intensity I'm very new to physics so forgive me if it's a trivial question, but it's something I have trouble figuring out.
I'm trying to solve an exercise, and the exercise says that we're given
$|\overrightarrow{v}| = 2.8 N$, where the vector and the $x$ axis form an angle of $45$ degrees
and $|\overrightarrow{a}| = 2.3 N$, where the vector and the $x$ axis form an angle of $-30$ degrees.
The vectors are supposed to act on a point particle. I'm not sure how to represent this graphically. If we suppose that the point particle is located in (0,0), I tried to do it like this:
However, I am quite sure that this is not correct but I can't explain why and I'm not sure what would be correct. If the vector $v$ forms an angle of $45$ degrees with the $x$ axis, what would its direction be? Obviously there are two possible ways you can direct the vector, but I'm not sure which one is correct. I'm also not sure about vector $a$ either, but I could maybe figure it out if I figure out vector $v$.
| *
*The angle of a vector is usually measured from the positive x-axis to the vector, with counterclockwise angles counting as positive. Thus a vector in the same direction as the positive y-axis has an angle of 90 degrees and a vector in the same direction as the negative y-axis has an angle of 270 degrees (or -90 degrees).
*Vectors representing forces on a particle are best drawn with their tails at the particle instead of their heads. This makes it easier to see geometrically how the vectors add together.
*The “size” of a force vector is called its magnitude, not its intensity.
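A small sketch of how to turn each (magnitude, angle) pair into components for drawing, using the counterclockwise-positive convention above; the numbers are the ones given in the exercise, and the helper function name is just for illustration:

```python
import math

def components(magnitude, angle_deg):
    """x- and y-components of a vector from its magnitude and CCW angle from +x."""
    theta = math.radians(angle_deg)
    return magnitude * math.cos(theta), magnitude * math.sin(theta)

vx, vy = components(2.8, 45.0)    # v: 45 degrees above the +x axis
ax, ay = components(2.3, -30.0)   # a: 30 degrees below the +x axis

print(f"v = ({vx:.2f}, {vy:.2f}) N")   # (1.98, 1.98)
print(f"a = ({ax:.2f}, {ay:.2f}) N")   # (1.99, -1.15)
```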
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/721824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Computing the maximum force a rod can bear Suppose I had a rod of diameter $d$ composed of some material with tensile strength $T$. If I then exerted a pulling force $F$ on the ends of the bar, how do I compute the force $F$ for which the rod will break apart? Is there some general equation that I can use to compute this?
| If you know the ultimate tensile strength $T$ of the material, then finding the breaking force of the rod is trivial:
$$ F_{\rm br} = T \cdot A, $$
where $A$ is the rod's cross-sectional area.
However, there is no general way to compute the ultimate tensile strength of a material; it can only be known from tensile testing, with some exceptions. For example, if a metal is heated in the process of annealing, the metal changes its properties due to the structural changes of recrystallization. If you then plot ultimate tensile strength vs. Young's modulus for annealed metals, you may see a correlation:
So in this case one can predict an annealed metal's ultimate tensile strength from its Young's modulus by the linear equation:
$$ T[\text{MPa}] = 1.3~E[\text{GPa}] + 34.1$$
Thus if you know that some materials share common properties, you may try to extrapolate the ultimate tensile strength based on some other key variables, such as Young's modulus, density, etc. However, in general no common method exists to predict a material's ultimate strength, and it can only be measured by destructive testing.
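A minimal worked example of the $F_{\rm br} = T \cdot A$ formula (the diameter and strength values below are assumptions chosen for illustration, roughly the order of a structural steel):

```python
import math

d = 0.010   # rod diameter in m (assumed 10 mm)
T = 400e6   # ultimate tensile strength in Pa (assumed value)

A = math.pi * d**2 / 4   # cross-sectional area of the rod
F_break = T * A          # pulling force at which the rod is expected to fail

print(f"A = {A*1e6:.1f} mm^2, F_break = {F_break/1e3:.1f} kN")  # ~78.5 mm^2, ~31.4 kN
```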
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/722380",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Integral of time dependent Hamiltonian Computing the time evolution of a quantum system described by a time-dependent Hamiltonian, $H(t)$, amounts to constructing the time evolution operator
$$U = \mathcal{T} \exp \Biggl( -i \int_{0}^{t} \mathrm{d} \tau \ H(\tau) \Biggr) \ . $$
What if the time-dependence in $H(t)$ can be integrated analytically, e.g., if the Hamiltonian is of the form
$$ H(t) = H_1 + t H_2$$
with $H_1, H_2$ time-independent?
In that case, can I write the following?
$$U = \exp \Bigl( -i (H_1 t + \frac{t^2}{2} H_2) \Bigr)$$
This seems naïve, but take for example the Hamiltonian in this question. Could one not simply compute the $t$-integral over $\sin(\omega_0 t)$?
| You have to be careful about the expression of $U$. The expression you put down mathematically means
\begin{align}
U & = \lim_{n \rightarrow +\infty}{\big(e^{-{i \over \hbar}{t \over n}H(t)}\big)\big(e^{-{i \over \hbar}{t \over n}H(t(1-{1 \over n}))}\big)\big(e^{-{i \over \hbar}{t \over n}H(t(1-{2 \over n}))}\big)\cdots \big(e^{-{i \over \hbar}{t \over n}H({t \over n})}\big)} \\
& = \lim_{n \rightarrow +\infty}{\big(e^{-{i \over \hbar}{t \over n}(H_1+t H_2)}\big)\big(e^{-{i \over \hbar}{t \over n}(H_1+t(1-{1 \over n}) H_2)}\big)\big(e^{-{i \over \hbar}{t \over n}(H_1+t(1-{2 \over n}) H_2)}\big)\cdots \big(e^{-{i \over \hbar}{t \over n}(H_1+{t \over n} H_2)}\big)} \\
& = \lim_{n \rightarrow +\infty}{\Pi_{j=1}^{n}{e^{-{i \over \hbar}{t \over n}(H_1+j{t \over n}H_2)}}}
\end{align}
If you want to integrate over $t$ to get the result you wrote, you will need $[H_1,H_2]=0$.
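A small numerical sketch of this point (assuming $\hbar = 1$, $t = 1$, and two arbitrarily chosen non-commuting 2×2 matrices for $H_1$ and $H_2$): a discretized version of the time-ordered product above is compared with the naive exponential of the integrated Hamiltonian.

```python
import numpy as np
from scipy.linalg import expm

H1 = np.array([[0, 1], [1, 0]], dtype=complex)   # sigma_x
H2 = np.array([[1, 0], [0, -1]], dtype=complex)  # sigma_z; does not commute with H1

t, n = 1.0, 2000
dt = t / n

# time-ordered product: factors at later times multiply from the left
U_ordered = np.eye(2, dtype=complex)
for j in range(1, n + 1):
    tau = j * dt
    U_ordered = expm(-1j * dt * (H1 + tau * H2)) @ U_ordered

# naive guess: exponentiate the integral of H(t) in one go
U_naive = expm(-1j * (H1 * t + 0.5 * t**2 * H2))

print(np.linalg.norm(U_ordered - U_naive))  # not small: the two operators differ
# (if H2 is replaced by something commuting with H1, e.g. H1 itself, they agree)
```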
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/723149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why are Bloch factors orthogonal? The Bloch wave can be expressed as:
$$
\psi_{n\mathbf{k}}(\mathbf{r}) = u_{n\mathbf{k}}(\mathbf{r})\,e^{i\mathbf{k}\cdot \mathbf{r}} \tag{A1}
$$
In this problem on Bloch waves they say that the $u_{n\mathbf{k}}(\mathbf{r})$ are orthogonal. I would like to ask whether the $u_{n\mathbf{k}}(\mathbf{r})$ themselves can be non-orthogonal; but if the Bloch waves are to form an orthonormal basis, the premise is that the $u_{n\mathbf{k}}(\mathbf{r})$ must be orthogonal, so we require:
$$
\int_{\mathrm{unit \,cell}} u^*_{n\mathbf{k}}(\mathbf{r})\,u_{m\mathbf{k}}(\mathbf{r})\,d\mathbf{r} = \delta_{nm} \tag{A2}
$$
$\delta_{nm}$ is the Kronecker delta.
Thanks to a commenter for the reminder that in both answers one and two they give the origin of the orthogonality of the $u_{n\mathbf{k}}(\mathbf{r})$ and state that it is derived from the following equation:
$$
\Bigl[\dfrac{(-i\hbar\nabla + \hbar\mathbf{k})^2}{2m} + V(\mathbf{r})\Bigr] u_{n\mathbf{k}}(\mathbf{r}) = E_{n\mathbf{k}}u_{n\mathbf{k}}(\mathbf{r}) \tag{B1}
$$
I know where this wave equation comes from. First use the momentum operator $\hat{p}=-i\hbar \nabla$:
$$
\begin{align}
\hat{p}\psi_{n\mathbf{k}}(\mathbf{r}) =& e^{i\mathbf{k}\cdot \mathbf{r}}(\hat{p} + \hbar \mathbf{k})u_{n\mathbf{k}}(\mathbf{r}) \\
\hat{p}^2\psi_{n\mathbf{k}}(\mathbf{r}) =& e^{i\mathbf{k}\cdot \mathbf{r}}(\hat{p} + \hbar \mathbf{k})^2u_{n\mathbf{k}}(\mathbf{r})
\end{align}
$$
Substituting this into the Schrödinger equation gives eq. (B1), but I don't know how to derive eq. (A2) from eq. (B1).
| Bloch's theorem tells us that the energy eigenvectors of a Hamiltonian with a periodic potential can be written
$$\psi_{n\mathbf k}(\mathbf x) = e^{i\mathbf k \cdot \mathbf x} u_{n\mathbf k}(\mathbf x)$$
where $n\in \mathbb Z$, $\mathbf k\in \mathrm{BZ}$ (the first Brillouin zone), and $u_{n\mathbf k}(\mathbf x)$ is periodic with the same periodicity as the lattice. Applying the Hamiltonian operator yields
$$\big(H \psi_{n\mathbf k}\big)(\mathbf x)= \left[-\frac{\hbar^2}{2m} \nabla^2 + V(\mathbf x)\right]e^{i\mathbf k \cdot \mathbf x}u_{n\mathbf k}(\mathbf x) $$
$$= e^{i\mathbf k\cdot \mathbf x}\left[-\frac{\hbar^2}{2m}(\nabla + i\mathbf k)^2 + V(\mathbf x) \right]u_{n\mathbf k}(\mathbf x) = E_{n\mathbf k} e^{i\mathbf k\cdot \mathbf x} u_{n\mathbf k}(\mathbf x) \tag{$\star$}$$
Cancelling the factor $e^{i\mathbf k \cdot \mathbf x}$ from both terms in $(\star)$ yields that $u_{n\mathbf k}$ is a solution of the equation $H_\mathbf k u_{n\mathbf k} = E_{n\mathbf k} u_{n\mathbf k}$, where $H_{\mathbf k} \equiv -\frac{\hbar^2}{2m} (\nabla +i\mathbf k)^2 + V(\mathbf x)$, defined on the unit cell with periodic boundary conditions.
More concretely, let $\mathscr u$ denote the unit cell. Consider the Hilbert space $L^2(\mathscr u)$ of square-integrable functions on the unit cell equipped with the standard inner product
$$\langle \psi,\phi\rangle := \int_{\mathscr u} \mathrm d^n x \ \overline{\psi(\mathbf x)} \phi(\mathbf x)$$
Further define the Bloch Hamiltonian $H_\mathbf k$ to act on the twice-weakly differentiable elements of $L^2(\mathscr u)$ with periodic boundary conditions. One can show that $H_{\mathbf k}$ is self-adjoint with discrete spectrum, and therefore that one can construct an orthonormal basis $\{u_{n\mathbf k}\}$ of solutions to the eigenvalue equation $H_\mathbf k u_{n\mathbf k} = E_{n\mathbf k}u_{n\mathbf k}$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/723252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is there any problem in having a stress-strain constitutive relation that relates time-derivative of stress with strain? We usually use two empirical laws to model viscoelastic behaviour:
*
*Hooke's law of elasticity that relates stress with strain
*Newton's law of viscosity that relates stress with time-derivative of strain.
Why isn't there an equation of the form that relates time-derivative of stress with strain?
Or, does it already exist?
| In short
Proving such a model impossible is probably too ambitious, as it does not seem to violate thermodynamic requirements outright.
It is nonetheless absent from the literature.
This makes sense, as it enables a variety of unusual behaviors.
Examples
For the examples below, consider a 1D material whose behavior follows
$$
\dot\sigma = \mu\varepsilon.
$$
*
*The ever-growing stress at rest is a good example proposed in the comment of @Toffomat: holding a constant strain causes the stress to grow without a limit.
It seems unacceptable because it implies the existence of an intrinsic (and infinite!) source of enthalpy, hence some other phenomena at play (maybe chemical or thermal, but a concrete example seems hard to find).
*It would also allow a material to work negatively ($\sigma:\dot\varepsilon<0$) which does not meet any empirical observation. That would for example be a tensile experiment where the material extends when put in tension, but immediately contracts if the tension slows down.
To illustrate that consider the simple cyclic test obtained from the model above and described by:
$$
\sigma = \mu\frac{\epsilon_0}\omega (1-\cos(\omega t))
,\qquad
\varepsilon = \epsilon_0\sin(\omega t)
$$
The corresponding stress-strain curve forms an ellipse and describes a succession of tension/compression deformations for which the stress is never negative (a short numerical check of this is sketched below).
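A short numerical check of the negative-work behavior claimed above (a sketch with arbitrarily chosen parameter values):

```python
import numpy as np

mu, eps0, omega = 1.0, 0.01, 2 * np.pi   # arbitrary model parameters

t = np.linspace(0.0, 1.0, 2001)          # one full loading cycle
sigma = mu * eps0 / omega * (1 - np.cos(omega * t))
eps_rate = eps0 * omega * np.cos(omega * t)

power = sigma * eps_rate                 # sigma : d(epsilon)/dt
print(power.min() < 0)                   # True: the material "works negatively" at times
print(sigma.min() >= 0)                  # True: yet the stress never becomes negative
```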
In conclusion
A relation linking only the stress rate to the strain is not forbidden in theory, but it does not seem to correspond to any real material behavior.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/723449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How can the energy-momentum tensor influence the metric outside an energy-momentum distribution? The elements of the energy-momentum tensor are determined by the mass-energy-momentum distribution as viewed from an inertial frame.
So, if you see a collection of masses with different momenta you can fill in their values in the MEI-tensor and calculate the metric by applying the Einstein field equations.
My question is how that gives you the values of the metric outside of the masses. Outside the masses the components are all zero, so you would expect a flat metric, which is obviously false. Is there a kind of analytical continuation going on?
Let me give an example of what I mean. Take a point in the vacuum around the Earth. Clearly the energy-momentum tensor is zero. How does one calculate the components of the metric at that point? What's different from calculating the metric of a "totally empty" vacuum? You need infinitesimal differences of the metric in the first place to plug in the equations.
|
... how that gives you the values of the metric outside of the masses.
Outside the masses the components are all zero, so you would expect a
flat metric, which obviously is false. Is there a kind of analytical
continuation going on?
The metric is spacetime. Its existence presupposes the presence of matter somewhere. As a solution of the Einstein field equations (EFE) for a given matter distribution, the metric describes the whole spacetime. As second-order differential equations, the EFE are defined locally; however, their boundary conditions act globally, see for example https://physics.stackexchange.com/a/679431/281096 from equation (7).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/724115",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Brightness of bulbs in parallel When adding bulbs in parallel, the bulbs are brighter than when they are connected in series. But does that mean adding bulbs in parallel will increase the brightness of the other bulbs?
My intuition is as follows: When adding a bulb in parallel the current doubles, but that current splits between the two branches such that both bulbs receive the same current and the same voltage, so brightness doesn't increase, but it is still brighter relative to adding bulbs in series. Is this correct?
| You are correct.
When you put them in parallel, each bulb is seeing the full supply voltage. Hence each bulb will get the same current as it did on its own. So, each bulb shines with the same brightness it would have if there was only one bulb. Of course this assumes the supply is able to provide twice the current.
When you put the bulbs in series, the total resistance in the circuit doubles, hence the current halves. This half current flows through both bulbs, so they shine at a reduced brightness.
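A tiny numerical sketch of the comparison (assuming two identical ideal bulbs of resistance $R$ on an ideal supply of voltage $V$; the values are made up):

```python
V, R = 12.0, 6.0   # assumed supply voltage and per-bulb resistance

P_single = V**2 / R              # one bulb alone on the supply
P_parallel_each = V**2 / R       # in parallel, each bulb still sees the full V
P_series_each = (V / 2)**2 / R   # in series, each bulb sees half the voltage

print(P_single, P_parallel_each, P_series_each)  # 24.0 W, 24.0 W, 6.0 W per bulb
```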
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/724239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Why does the opposing force differ when falling on concrete vs. on water, in spite of Newton's third law? If a person jumps from the first floor of a building and lands on a concrete surface, they will suffer serious injury because of Newton's third law.
If the same person jumps the same distance and lands in a swimming pool filled with water, however, then there will not be any serious injury.
The person in both cases lands with the same amount of force. Why doesn't water offer the same amount of force in return as concrete?
| Let's look at energy conservation:
$$\frac{m}{2}\,v_i^2+m\,g\,x_i=\frac{m}{2}\,v_{f}^2+F_{f}\,x_{f}$$
where $f$ denotes the final state and $i$ the initial state.
In both cases the final velocity is $v_f=0$, but the stopping distance satisfies
$x_{fc} \ll x_{fw}$, which means that the force that injures you satisfies $F_{fc} \gg F_{fw}$,
where "c" stands for concrete and "w" for water.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/724438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 9,
"answer_id": 4
} |
If Aristoteles was right and heavier objects fell faster towards the ground, how would Newton's laws of motion be described? It seems like it would be something like:
$a(m)=km$
and maybe $a(m_1,m_2)=K(m_1-m_2)$
Am I making any sense?
Btw I'm no negationist, nor am I trying to create a negationist movement here; I just wonder what physics would be like if we lived in a different physical environment, in order to better understand the physics we have now.
| You can define it in multiple ways. Let's say near the earth, the weight is not
$$-mg$$
(pointing downwards)
rather:
$$-m^2g$$
Then from Newton's second law:
$$ma=-m^2g \implies a=-mg$$
So objects with greater mass accelerate more.
But you can also define the gravitational force as:
$$-me^{m/k}g$$
for a constant k, and then the acceleration would be:
$$a=-e^{m/k}g$$
which increases as mass increases. After that, it's just a matter of finding a function that fits reality.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/724569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Why is it easier to raise AC current to high voltage than DC? In my country (and maybe all around the world, I don't know), once electricity has been generated, it is then raised to 200 kV for transport.
I know this is to reduce the loss. Given $P=U \cdot I$ and $P_{\rm loss}=I^2 \cdot R$, raising $U$ will lower $I$ and so limit the losses due to the Joule effect.
From what I've read, one of the reasons electricity is transported as AC is that it is easier/cheaper to raise AC to 200 kV than it would be for DC.
Why?
| Changing the voltage of AC can be done with a simple iron-core transformer. That's a simple device without moving parts that consists only of a magnetic core, copper wire and some insulation (optionally a cooling fluid). Almost nothing that can break. Good transformers can have amazing efficiencies of well over 95%.
There are other benefits to using AC over DC as well (and also downsides). With AC you have far fewer problems with arcing on switches, because if arcing starts with AC it will often stop at the next zero crossing of the AC; with DC, the arc won't stop by itself. Also, with AC you have fewer problems with material starting to migrate because of electrolytic effects. And running motors with (especially 3-phase) AC is close to trivial, without the need for brushes. With DC you need brushes or some smart electronics (BLDC motors are basically AC motors with some smart electronics attached).
Also, a power grid with AC is self-stabilizing (to some extent) via the frequency of the AC.
The downside of AC is losses due to capacitance (reactive current also causes resistive losses). Phase shift is always an issue as soon as you work with AC.
Converting DC to another voltage takes more effort. One way is to drive a DC motor that is mechanically coupled to a DC generator. Such systems are big, have moving parts and have lower efficiency.
Today, we have the electronics to do that better. We basically chop the DC up into AC, put that through a transformer and rectify the output of that again... voila, a DC-to-DC converter (this is all very simplified).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/724715",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 5,
"answer_id": 1
} |
Dispersion equation with variable wavenumber The wave equation
$$u_{tt}=c^2 u_{xx}$$ is known to have a simple wave solution $u(x,t)=Ae^{i(kx-\omega t)}$ where the dispersion relation is simply $c=\omega/k$. Yet, let the wavenumber be a function of $x$; then the independent variable $x$ will appear in the dispersion relation, because the first and second derivatives are functions of $x$, as follows:
$$ \dfrac{\partial{u}}{\partial x} = (ik+ixk_x) e^{i(kx-\omega t)}$$ and $$ \dfrac{\partial^2 u}{\partial x^2} = \left( (ik_x+ik_x+ixk_{xx}) + (ik+ixk_x)^2 \right) e^{i(kx-\omega t)} .$$
then $$c^2=\dfrac{-\omega^2}{(2ik_x+ixk_{xx})+(ik+ixk_x)^2}$$
Did anyone encounter the independent variable $x$ appearing explicitly in the dispersion relationship, as in the terms $ixk_{xx}$ and $ixk_x$?
| Since you have only one wave, $k=\frac{2\pi}{\lambda}=\text{const}$, so $k_{x}=0$ and your formula is simpler: $$c^{2}=\frac{-\omega^{2}}{(ik)^{2}}$$
i.e.$$k=\frac{\omega}{c}$$
Note: the last relation gives:
$$(2+x^{2})k_{x}i=k+xk_{x}-\frac{\omega^{2}}{c^{2}}$$
Since $k(x)\in \mathbb{R}$, for the equation to be homogeneous,
$k_{x}(x)$ on the left side of the equation must be purely imaginary: CONTRADICTION.
PS: The only theory I know of that takes into account the variation of the frequency, wavelength, wavevector, ..., of a wave with the spatial variable (height $x$) is general relativity: the Einstein shift https://en.wikipedia.org/wiki/Gravitational_redshift
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/724931",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |
What is the fractional frequency stability of a thermal damped harmonic oscillator? Suppose I have a lightly driven (classical) damped harmonic oscillator at temperature $T$. Suppose $\omega$ and $Q$ are specified as well as the mean energy $\bar{E}$ in the oscillator due to the driving/dissipation equilibrium. What will be the fractional frequency stability of this oscillator? What is the formalism to treat this problem?
| The by-now ancient publication, time-tested by billions of well-designed oscillators, that will probably answer your question is Leeson: A Simple Model of Feedback Oscillator Noise Spectrum, Proc. IEEE, 1966, pp. 329-330.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/725102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Friction coefficient between two stacked blocks moving at constant velocity So I came across a problem; it says that there are two masses, $m_1$ and $m_2$, stacked on top of each other, and they are moving at a constant speed. There is also friction between the two blocks, with coefficient $\mu$. It gives us the values of $m_1$ and $m_2$, and it asks us to find the coefficient $\mu$. Is there even friction? If so, how do I find $\mu$?
| As you see in the Free Body Diagram, the equilibrium equations are:
$$
F_y=N-m_2g=0 \to N=m_2g \\
F_x=f_r=0
$$
As you can see, if your assumption is that the block is moving at a constant speed, the friction force $f_r$ on it must be zero.
If $m_1$ starts moving from $v=0$ with a positive acceleration, the friction force $f_r$ on the top block will be nonzero, since the top block accelerates only through the action of friction.
But once you are at a constant speed $v=v_0$, that friction disappears. The friction coefficient, which sets the maximum allowable value of that friction force, $f_r^{max}=\mu N$, is still there. But the friction force is zero.
Finally, if the mass $m_1$ starts slowing down with a negative acceleration, the friction force reappears, pointing against the relative motion, and it grows up to its maximum allowable value $\mu N$.
Obviously, if the required deceleration exceeds what the maximum friction force $\mu N$ can provide, then the friction will be insufficient, the top block will decelerate more slowly than $m_1$, and it will slide off $m_1$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/725257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What happens to resistance of tap water as voltage is increased? In recent days I have done a few experiments measuring the current through tap water as the voltage goes from 9 volts up to 36 volts, and using Ohm's law to convert it to resistance. And I discovered a very interesting trend. Between 9 and 18 volts there is a massive drop in resistance (around a 40% reduction), but going up to 27 volts it's only a 5% reduction, and the reduction is even smaller when reaching 36 volts. I've done this experiment a few times and this has continued to happen. This is it visualised on a graph;
I am curious to know why this happens: why there is such a reduction going from 9 to 18 volts, yet the reduction shrinks at 27 volts and shrinks further at 36. Is there a reason for that? And as I go further up in voltage (I don't want to test higher), does this continue, with the reduction in resistance becoming ever smaller, and if not, at what voltage does it change?
Specifically, I am asking why this is happening and what happens when the voltage gets higher. I kind of want to know what resistance I can expect at around 240 volts.
| You are electrolytically decomposing your test electrodes. They must be made of platinum to prevent this effect.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/725411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why does a rocket engine that produces a constant thrust over a set period of time add less energy if the rocket has more mass? (Zero-$g$) A rocket engine with a thrust of 1 N working for 10 seconds will add more kinetic energy to the rocket if it is attached to a 10 kg rocket and less if it is attached to a 20 kg rocket. The rocket should consume the same amount of fuel if producing the same thrust for the same time with the same engine, and so convert the same energy. So why doesn't it end up with the same energy? (I know the formula, but why?)
| Because you are ignoring the exhaust. Also, I think you are only looking at this in the frame where the rocket starts at rest. Let's look at that frame first.
When we have a force between two masses (like a bullet and a rifle) then:
*
*The changes in momentum of the two masses are equal in magnitude
*The change in KE is greater in the object with lower mass.
If a suspended rifle fired a bullet, both would be given the same momentum, but the bullet would get far more of the energy from the work done by the expanding gases.
In the case of the rocket the same is true. The greater the disparity between the exhaust mass and the rocket mass, the greater the proportion of the energy given to the exhaust (and the less the energy given to the rocket). In the ground frame, the exhaust from the heavier rocket will have more energy (because it is moving faster).
(When you look at it from a frame where the rocket is already moving, this relationship isn't as simple because the rocket already has some KE that can be moved around).
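A small numerical sketch of the momentum/energy bookkeeping for the rocket itself (the thrust and burn time are from the question; the frame is the one where the rocket starts at rest, and the exhaust's share of the energy is not computed here):

```python
thrust, burn_time = 1.0, 10.0   # N, s (from the question)
impulse = thrust * burn_time     # 10 N*s of momentum delivered to the rocket

for m in (10.0, 20.0):           # rocket mass in kg
    v = impulse / m              # final speed in the starting rest frame
    ke = 0.5 * m * v**2          # equals impulse**2 / (2*m)
    print(f"m = {m:.0f} kg: v = {v:.2f} m/s, KE = {ke:.2f} J")
# m = 10 kg: v = 1.00 m/s, KE = 5.00 J
# m = 20 kg: v = 0.50 m/s, KE = 2.50 J  (same momentum, half the kinetic energy)
```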
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/725543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
What reason/evidence do we have to think that the Planck length is the smallest length possible? From what I've gathered, Planck length is the smallest measurable length, though we do not know whether it is the smallest length physically possible. The Planck temperature is called the theoretically highest temperature, meaning this theory assumes that the Planck length is the smallest possible length. Given that it is (presumably) a theory, that makes me wonder what reasons we have to think the Planck length is the smallest length. I get speculating that it is the smallest length, given that it is the smallest measurable length; but calling it a theory means there's actually some reasons, or even evidence. So, what are those reasons?
|
Or are there only amateurs who think so?
Bingo :) The Planck units don’t give sharp limits on the existence of quantities like length, time, etc (or rather, there’s no reason to think that they do). Instead, they provide a scale at which a more complete theory of fundamental physics (which incorporates quantum gravity) will be relevant.
For example, when the mass of a black hole is equal to the Planck mass, then its Compton wavelength will be on the order of its Schwarzschild radius (which is also the order of the Planck length). The weird bits of quantum mechanics and gravity will intersect, and we just don’t know what happens in that scenario.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/725987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How is Newton per meter Cubed related to Newton per meter squared (=Pascal)? Is there a way to relate $\frac{N}{m^3}$ to $\frac{N}{m^2}$?
| As already pointed out, this is the unit of pressure gradient. But it could also be a “weight density”.
From the standpoint of a physicist, it’s conceptually cleaner to express the weight per unit volume of a substance as a mass density (which for incompressible substances is invariant, and thus a more fundamental characteristic) multiplied by the gravitational acceleration, rather than as a “weight density,” which would have units of newtons per cubic meter.
But ~99.9999% of human activity occurs in a region where gravitational acceleration varies by less than half a percent, so for practical purposes, there is nothing wrong with using a weight density. If you know that your rope is rated for 10 $kN$ and your goop weighs 2000 $N/m^3$, then you can easily calculate that it’s not safe to lift more than 5 cubic meters of the goop with the rope. This calculation would be wrong on the moon, but as you don’t have any goop or rope anywhere but the surface of the earth, that doesn’t matter.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/726122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Limit definition of scalar curvature for flat vs curved space in 2D, 3D and so on in Zee In Zee's book, Einstein Gravity in a Nutshell, p. 6 + p. 77, he says that
\begin{equation}
R = \text{lim}_{\text{radius} \rightarrow 0} \frac{6}{(\text{radius})^2} \left(1 - \frac{\text{circumference}}{2\pi \text{ radius}} \right)
\end{equation}
He then proceeds to use that on two 2d metrics in appendix 1, and that all makes sense. My question is how does this generalize to 3d and beyond? For 3d, is it a
\begin{equation}
R = \text{lim}_{\text{radius} \rightarrow 0} \frac{6}{(\text{radius})^2} \left(1 - \frac{\text{Surface Area}}{2\pi \text{ radius}} \right)
\end{equation}
like object? Then generalized,
\begin{equation}
R = \text{lim}_{\text{radius} \rightarrow 0} \frac{6}{(\text{radius})^2} \left(1 - \frac{\text{Spatial Measure in N-1 D}}{2\pi \text{ radius}} \right)
\end{equation}
I would love if someone had a better word than "Spatial Measure in N-1 D" for the analogue of Circumference to 2D, Surface area to 3D and on.
| On p. 6 + p. 77 Ref. 1 is apparently talking about the Gaussian curvature in $d=2$, which is half the scalar curvature in $d=2$.
Later on p. 345 + p. 350 Ref. 1 is talking about the scalar curvature $S=g_{ij}R^{ij}$, so let's do the same here.
The Wikipedia page lists that in $d$ dimensions, the scalar curvature is
$$
S~=~ \text{lim}_{r\to 0} \frac{6d}{r^2} \left(1 - \frac{{\rm Vol}(\partial B(0,r)\subset M)}{{\rm Vol}(\partial B(0,r)\subset\mathbb{R}^d)}\right),
$$
with the volume
$${\rm Vol}(\partial B(0,r)\subset\mathbb{R}^d)~=~2\frac{\pi^{\frac{d}{2}}r^{d-1}}{\Gamma(\frac{d}{2})}$$
of a $(d\!-\!1)$-sphere $\partial B(0,r)$ [which is the boundary of a $d$-ball $B(0,r)$].
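As a quick sanity check of this limit formula (a sketch, not taken from the reference), one can apply it to the unit 2-sphere, where a geodesic circle of radius $r$ has circumference $2\pi\sin r$ and the scalar curvature should come out as $S=2$:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
d = 2  # dimension

# On the unit 2-sphere a geodesic circle of radius r has circumference 2*pi*sin(r);
# in flat R^2 the same circle has circumference 2*pi*r, so the volume ratio is sin(r)/r.
ratio = sp.sin(r) / r

S = sp.limit(6 * d / r**2 * (1 - ratio), r, 0)
print(S)  # 2 = scalar curvature of the unit 2-sphere (twice the Gaussian curvature)
```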
References:
*
*A. Zee, Einstein Gravity in a Nutshell, 2013; p. 6 + 77 + 345 + 350.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/726330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Friedmann equation I've seen in literature
$$\dot{H} + H^2=\ldots$$
Source: https://en.wikipedia.org/wiki/Friedmann_equations
Defining the LHS. Since
$$H = \frac{\dot{a}}{a}$$
And that
$$\left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}(\rho + 3P)$$
Then replacing gives
$$H^2 = \frac{8\pi G}{3}(\rho + 3P)$$
So my question is how do you arrive at the additive Hubble term
$$\dot{H} + H^2 = \frac{8\pi G}{3}(\rho + 3P)?$$
| Note that $H=\frac{\dot{a}}{a}\implies\dot{H}+H^2=\frac{\ddot{a}}{a}$. You're asking about the special case $k=0,\,\Lambda=0$, but seem confused about what results we obtain. In this case, the Friedmann equations are$$H^2=\frac{\dot{a}^2}{a^2}=\frac{8\pi G\rho}{3},\,\dot{H}+H^2=\frac{\ddot{a}}{a}=-\frac{4\pi G(\rho+3p/c^2)}{3}.$$Hence$$\dot{H}=-4\pi G(\rho+p/c^2).$$
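The identity used here is purely algebraic; a one-line symbolic check (a sympy sketch, nothing specific to cosmology) of $H=\dot a/a\implies\dot H+H^2=\ddot a/a$:

```python
import sympy as sp

t = sp.symbols('t')
a = sp.Function('a')(t)

H = sp.diff(a, t) / a
print(sp.simplify(sp.diff(H, t) + H**2 - sp.diff(a, t, 2) / a))  # 0
```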
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/726641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Scaling of thermal resistance to air vs oil For electronic components, thermal resistance from component to ambient (air) is often given, and used with a resistive model of 'thermal resistors' to make temperature rise calculations simple. These values are usually given in some context, such as component to air, component to heat sink, etc.
I have been looking for resources for calculating component temperatures when submerged in oil. I would like to know if, as a rough rule of thumb, I can multiply the thermal resistivity (in air) of an object by the ratio of the thermal conductivity of air to that of oil, for example, in order to derive the thermal resistivity (in oil), since, at a glance, it looks like a thermal resistance has thermal conduction as a linear component when calculated.
Can someone elaborate on whether this rule of thumb is applicable, assuming the object is submerged in an infinite ambient medium of oil instead of air, given that we know the thermal resistance in air?
| No, one cannot generally estimate the thermal resistance of one fluid by scaling the value for another fluid by the ratio of their thermal conductivities.
The reason is that heat transfer in fluids is often dominated by convection, and convection is mediated by many more parameters than just the thermal conductivity.
For example, we usually correlate the convection coefficient with a characteristic length $L$, a thermal conductivity $k$, and certain combinations of the Reynolds, Rayleigh, and Prandtl numbers. Of these, the geometry (and thus $L$) remains unchanged with a fluid switch, and you already propose accounting for differences in $k$ (albeit by proportional scaling only).
But $\text{Re}$, $\text{Ra}$, and $\text{Pr}$ also incorporate the fluid density, viscosity, specific heat, and coefficient of thermal expansion, all of which I'd expect to differ substantially between air and oil. Scaling by the ratio of the thermal conductivities does not address these other dependencies.
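To make this concrete, here is a rough numerical sketch (all property values are assumed, order-of-magnitude numbers for air and a generic mineral oil, and the laminar vertical-plate correlation $\mathrm{Nu}=0.59\,\mathrm{Ra}^{1/4}$ is used only for illustration). The point is that the ratio of the resulting convection coefficients is not the ratio of the thermal conductivities:

```python
# Rough illustration only -- assumed, order-of-magnitude fluid properties and an
# assumed free-convection correlation Nu = 0.59 * Ra**0.25 (laminar vertical plate).
g, L, dT = 9.81, 0.02, 30.0      # gravity [m/s^2], 2 cm characteristic length, 30 K rise

# properties: k [W/m.K], nu [m^2/s], alpha [m^2/s], beta [1/K]
air = dict(k=0.026, nu=1.6e-5, alpha=2.2e-5, beta=3.3e-3)
oil = dict(k=0.13,  nu=2.0e-5, alpha=8.0e-8, beta=7.0e-4)

def h_free_convection(p):
    Ra = g * p['beta'] * dT * L**3 / (p['nu'] * p['alpha'])  # Rayleigh number
    Nu = 0.59 * Ra**0.25                                     # assumed correlation
    return Nu * p['k'] / L                                   # convection coefficient [W/m^2.K]

h_air, h_oil = h_free_convection(air), h_free_convection(oil)
print(h_oil / h_air)         # ~13 with these numbers
print(oil['k'] / air['k'])   # 5 -- the conductivity ratio, a different number
```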
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/726823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Deriving the Vlasov equation in {$\vec r, v_{||}, \mu, \varphi$} coordinates I'm reading some lecture notes on drift kinetics and I'm having trouble with one derivation. The general idea is changing phase space coordinates from {$\vec r, \vec v$} to {$\vec r, v_{||} \text{ (parallel velocity)}, \mu \text{ (magnetic moment)}, \varphi \text{ (gyrophase)}$} and writing the Vlasov equation (2.1):
in these new coordinates. The coordinate transform is done by using the chain rule term by term to obtain these simple relations:
Now, using these relations (2.1) should get the form:
and this is what I'm in trouble with. In the notes it says that this operator has been used to obtain (2.9):
but writing this using (2.6)-(2.8) simply gives me equation (2.1), just with a lot more terms because of the coordinate transformation. If I write it for $f_s$ using (2.6)-(2.8), do I substitute this into (2.1) (in place of the first term), and how can I discard the third term (dot product with $\nabla_v f_s$)? So, to put it simply, how do I obtain (2.9), and what do I do with (2.10)? This should be a relatively simple derivation, but I just can't wrap my head around it, so any help is greatly appreciated.
| Equation (2.9) is a mathematical identity -- it is always true because (in the collisionless limit) $f$ satisfies
$$\frac{df}{dt}=0$$
Fundamental to the calculation in Felix's notes is that
$$\dot{\varphi} = \Omega_s + \text{small corrections}$$
It is not possible to make this derivation comprehensible with any simple tricks or explanations. The key points to make, however, are:
(1) The entirety of these complex notes is to carry out an asymptotic expansion in the limit $\Omega_s \rightarrow \infty$.
(2) Page 5 in the notes is a detailed discussion of the justification for $\partial f/\partial \varphi \sim 0$, meaning the distribution function is independent of gyroangle -- except for $\tilde{f}_{s,1}$ in (2.37).
The reason for employing this ordering, and for the notes, is to write a drift-kinetic equation (2.45) for the gyroangle-independent part of $f$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/727194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Deriving $\langle H\rangle$ from average momentum and position for a LHO Assume that we know the values of $\langle x\rangle$ and $\langle p\rangle$ for a LHO, that is in a random superposition of zeroth and first state. Derive $\langle H\rangle$.
So I tried solving this problem with writing $H=\hbar\omega (a^*a+1/2)$. We know that $a^*a= (\text{Re} a)^2+(\text{Im} a)^2$ and we can derive those from $\langle p\rangle\propto\text{Im}\langle a\rangle$. However if I average this expression for creation and annihilation operator, I end up needing $\langle p^2\rangle$ and not $\langle p\rangle^2$, which I have (same for position).
How should I solve it?
| The system is in a state
$$\lvert\Psi\rangle=\alpha\lvert0\rangle+\beta\lvert1\rangle\qquad \alpha,\beta\in\mathbb{C}$$
The complex constants $\alpha$ and $\beta$ are to be determined up to an arbitrary phase factor using the condition on the average values and the normalization condition
$$|\alpha|^2+|\beta|^2=1.$$
We shall express momentum and position operators in terms of ladder operators to make the math easier
$$\hat{p}=\frac{1}{i}\sqrt{\frac{\hbar m\omega}{2}}(a-a^{\dagger})\qquad \hat{x}=\sqrt{\frac{\hbar}{2m\omega}}(a+a^{\dagger})$$
By definition
\begin{align}
\langle p\rangle&=\frac{1}{i}\sqrt{\frac{\hbar m\omega}{2}}\left\{\langle 0\lvert\alpha^*+\langle1\lvert\beta^*\right\}(a-a^{\dagger})\left\{\alpha\lvert0\rangle+\beta\lvert1\rangle\right\} \\
&=\frac{1}{i}\sqrt{\frac{\hbar m\omega}{2}}\left\{\langle 0\lvert\alpha^*+\langle1\lvert\beta^*\right\}\left\{\beta\lvert0\rangle-\alpha\lvert1\rangle-\sqrt{2}\beta\lvert2\rangle\right\}\\
&=\frac{1}{i}\sqrt{\frac{\hbar m\omega}{2}}(\alpha^*\beta-\beta^*\alpha)
\end{align}
\begin{align}
\langle x\rangle&=\sqrt{\frac{\hbar}{2 m\omega}}\left\{\langle 0\lvert\alpha^*+\langle1\lvert\beta^*\right\}(a+a^{\dagger})\left\{\alpha\lvert0\rangle+\beta\lvert1\rangle\right\}\\
&=\sqrt{\frac{\hbar}{2 m\omega}}\left\{\langle 0\lvert\alpha^*+\langle1\lvert\beta^*\right\}\left\{\beta\lvert0\rangle+\alpha\lvert1\rangle+\sqrt{2}\beta\lvert2\rangle\right\}\\
&=\sqrt{\frac{\hbar}{2 m\omega}}(\alpha^*\beta+\beta^*\alpha)
\end{align}
Isolating $(\alpha^*\beta\pm\beta^*\alpha)$ and summing side by side
$$\beta\alpha^*=\frac{1}{2}\left(\sqrt{\frac{2m\omega}{\hbar}}\langle x\rangle+i\sqrt{\frac{2}{\hbar m\omega}}\langle p\rangle\right)\implies\beta=\frac{1}{2\alpha^*}\left(\sqrt{\frac{2m\omega}{\hbar}}\langle x\rangle+i\sqrt{\frac{2}{\hbar m\omega}}\langle p\rangle\right)$$
Thanks to the arbitrary phase factor, $\alpha$ can be chosen to be real i.e. $\alpha=\alpha^*$
$$\implies\beta=\frac{1}{2\alpha}\underbrace{\left(\sqrt{\frac{2m\omega}{\hbar}}\langle x\rangle+i\sqrt{\frac{2}{\hbar m\omega}}\langle p\rangle\right)}_{\eta}:=\frac{\eta}{2\alpha}. \tag{A}$$
Where the constant $\eta$ is known because it is a linear combination of the given expectation values. Imposing normalization condition
$$\alpha^2+\frac{|\eta|^2}{4\alpha^2}=1$$
This equation admits the solution
$$\alpha^2=\frac{1\pm\sqrt{1-|\eta|^2}}{2} \tag{B}$$
You can choose either the positive or the negative root as it won't change the phase difference between $\alpha$ and $\beta$ and $(A)$ together with $(B)$ gives you the coefficients of the state. Now you can find that
$$\langle H\rangle=\langle\Psi\lvert H\lvert\Psi\rangle=\hbar\omega\left\{\langle 0\lvert\alpha+\langle1\lvert\beta^*\right\}\left(a^{\dagger}a+\frac{1}{2}\right)\left\{\alpha\lvert0\rangle+\beta\lvert1\rangle\right\}=\frac{\hbar \omega}{2}\alpha^2+\frac{3\hbar\omega}{2}|\beta|^2$$
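A small numerical check of this recipe (a sketch with assumed target values, working in units $\hbar=m=\omega=1$ and truncating to the $\{\lvert0\rangle,\lvert1\rangle\}$ subspace):

```python
import numpy as np

hbar = m = w = 1.0
x_avg, p_avg = 0.3, 0.2   # assumed, given expectation values

# eta and the coefficients as derived above (choosing the '+' root, alpha real)
eta = np.sqrt(2*m*w/hbar)*x_avg + 1j*np.sqrt(2/(hbar*m*w))*p_avg
alpha = np.sqrt((1 + np.sqrt(1 - abs(eta)**2)) / 2)
beta = eta / (2*alpha)
psi = np.array([alpha, beta])          # state in the {|0>, |1>} basis

# x, p, H restricted to the two lowest levels
X = np.sqrt(hbar/(2*m*w)) * np.array([[0, 1], [1, 0]])
P = np.sqrt(hbar*m*w/2) * np.array([[0, -1j], [1j, 0]])
H = hbar*w * np.diag([0.5, 1.5])

expect = lambda A: np.real(psi.conj() @ A @ psi)
print(expect(X), expect(P))   # recovers 0.3, 0.2
print(expect(H))              # (hbar w / 2) alpha^2 + (3 hbar w / 2) |beta|^2
```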
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/727315",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Propagators for a generic Lagrangian density Suppose we have a generic Lagrangian density, for example:
$$\mathcal{L} = \alpha A_{\mu\nu}A^{\mu\nu} + \beta B_{\mu}f_\nu(p^2) A^{\mu\nu} + \gamma B_\mu\partial^\mu h$$
where $A_{\mu\nu}$,$B_\mu$ and $h$ are generic fields, $f_\nu(p^2)$ a generic function and $\alpha$,$\beta$,$\gamma$ some real parameters.
I have doubts about the calculation of the propagators for the system of fields $\{A_{\mu\nu},B_\mu,h\}$, correct me if I'm wrong.
I know that the propagator of a field is the green function for its equation and in general to calculate them I put the equations of the fields in Fourier space inside a matrix and look for the inverse.
My question is if, just by looking at the Lagrangian density, I can say something about the propagators.
For example, is it correct to say that since there are no quadratic terms in $B_\mu$ and in $h$, their propagators are null? And since there are no coupling terms between $A_{\mu\nu}$ and $h$ the propagator
$$\langle0|T[A_{\mu\nu}^\dagger(x)h(0)]|0\rangle$$
is also null?
| *
*Formally speaking, if the Hessian matrix of the quadratic action is non-degenerate, the free propagator is given by the inverse matrix, cf. e.g. my related Phys.SE answer here.
*Concerning OP's last question: If a matrix element is zero, the corresponding inverse matrix element might not necessarily be zero.
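A minimal illustration of the second point (a sympy sketch, not tied to any particular Lagrangian): a matrix can have a vanishing entry while its inverse does not, and vice versa.

```python
import sympy as sp

a, b = sp.symbols('a b', nonzero=True)

# quadratic-form (Hessian-like) matrix with a zero entry in position (2, 2)
M = sp.Matrix([[a, b],
               [b, 0]])

print(M.inv())  # the (2, 2) entry of the inverse is -a/b**2, nonzero; the (1, 1) entry is 0
```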
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/727483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
On the proof that $4$-velocity transforms like vector Let $U$ and $U'$ be the $4$-velocities associated to the coordinates $(t,x)$ and $(t',x')$ related through the Poincaré transformation $P:\mathbb R^4\to\mathbb R^4$, i.e. $(t',x')=P(t,x)$.$^1$
Of course the Jacobian $\Lambda\in\mathbb R^{4\times 4}$ of $P$ is a Lorentz transformation. I extracted the following derivation of the tranformation rule for $U$ from this question:
\begin{equation}
U'=\frac{\mathrm dX'}{\mathrm d\tau}=\frac{\mathrm d(P\circ X)}{\mathrm d\tau}=\Lambda\cdot\frac{\mathrm d X}{\mathrm d\tau}=\Lambda\cdot U\in\mathbb R^{4\times 1}
\end{equation}
As far as I understand, we used the fact that
\begin{equation}
\forall \tau:X'(\tau)=(P\circ X)(\tau),
\end{equation}
but this is not trivial, is it? I will explain my reasoning and I hope for a confirmation/verification:
We have $X(\tau):=X(t(\tau))$, where $I\ni t\mapsto X(t)$ is the $4$-position and $\tau\mapsto t(\tau)$ is the inverse of proper time, i.e. the inverse of the function
$\newcommand{\d}{\mathop{}\!\mathrm{d}}$
\begin{align}
I\ni t\mapsto \tau(t)=\int_{t_0}^t\sqrt{1-\frac{v(\widetilde t)^2}{c^2}}\d\widetilde t+c
\end{align}
for some $t_0\in I$ and $c\in\mathbb R$. So what we are really assuming is the following:
\begin{equation}\tag{1}
X'\circ t'=P\circ X\circ t
\end{equation}
Let $\Pi:\mathbb R^4\to\mathbb R$ be the projection to the time component, then $X'=P\circ X\circ(\Pi\circ P\circ X)^{-1}$ and hence $(1)$ follows from the fact that
\begin{equation}
t'=\Pi\circ P\circ X\circ t
\end{equation}
which is equivalent to
\begin{equation}
\tau=\tau'\circ\Pi\circ P\circ X
\end{equation}
and which can be proven through a change of variables.$^2$ Am I right?
$^1$ The reader familiar with manifolds will note that $(t,x)$ is a chart $\phi: M\to\mathbb R^4$ and that $P=\phi'\circ\phi^{-1}$.
$^2$ We are exploiting the fact that $\tau$ and $\tau'$ are only defined up to a constant when we assume that $t$ and $t'$ have the same domains.
| I think you are making this unnecessarily difficult. Here is a proof.
$$
U = \lim_{\delta\tau \rightarrow 0} \frac{X(\tau+\delta\tau) - X(\tau)}{\delta\tau}
$$
Now use that $\delta\tau$ is invariant and the difference of two 4-vectors evaluated at a given event is itself a 4-vector (which is easy to prove). It follows that $U$ is a 4-vector multiplied by a scalar invariant, hence it is a 4-vector.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/727650",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Does the earth’s rotational angular velocity change? This is what is written in The Feynman Lectures on Physics, Vol. 1 (ch.5)
We now believe that, for various reasons, some days are longer than others, some days are shorter, and on the average the period of the earth becomes a little longer as the centuries pass.
Why should some days be longer than the others? There is no “gravitational” source of external torque acting on the earth, so why does its rotational angular velocity change?
| The Earth is not a single rigid body, but consists of at least five separate regions which can move relative to one another. These are the crust (which is the region that we use to measure day length), the mantle, the core, the oceans and the atmosphere. Although the total angular momentum of the Earth may not change, these regions can and do exchange angular momentum between themselves over timescales ranging from days to decades. This leads to fluctuations in the angular velocity of the crust, and hence fluctuations in the length of a day.
This Wikipedia article describes some of the mechanisms by which the different regions exchange angular momentum.
Over long periods of time, the Earth and the Moon exchange angular momentum through tidal effects, leading to a gradual but steady increase in the average length of a day. This effect is of the order of a few milliseconds per century.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/727970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 1
} |
If reference frames are equally valid, then why do teachers say the geocentric view is wrong? If all reference frames are valid, then why is the geocentric model taught as "wrong" in schools?
I've checked many websites but none of them clear the issue. Wiki says that in relativity, any object could be regarded as the centre with equal validity. Other websites and answers make a point on the utility of the heliocentric model (simplicity, Occam's razor...) but just because something is not so easy to deal with doesn't mean it is wrong.
Note: I am not asking for evidence that geocentrism is wrong; I am asking for a way to resolve the contradiction (from what I see) between relativity and this "geocentricism is wrong" idea.
| If 'geocentrism' means that you can regard the Earth as stationary and describe the motion of Sun and planets accordingly, then geocentrism isn't wrong.
But if 'geocentrism' means that the Sun and planets have simple (for example circular) orbits about the Earth, then it is wrong. Almost 2000 years ago, Ptolemy knew that a geocentric solar system based on circles needed the planets to move in circles nested on circles nested on circles in order for theory to match observation – which for some planets even involves their stopping and going backwards for a while. [The nested circle treatment is analogous to a Fourier analysis of a complicated shape of orbit.]
A heliocentric system based on circles rather than ellipses still needs these 'epicycles', but smaller ones and fewer of them. I'd add that I think it's perfectly reasonable to teach children that the Earth and other planets "go round the Sun". There's no reason, though, to say to them that the Sun, any more than the Earth, is stationary.
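For anyone who wants to see the retrograde ("stopping and going backwards") behaviour emerge, here is a small numerical sketch assuming circular, coplanar orbits with approximate radii and periods for Earth and Mars; the apparent longitude of Mars as seen from Earth periodically runs backwards, which is exactly what a circle-based geocentric model needs epicycles to reproduce:

```python
import numpy as np

# assumed circular, coplanar orbits (radius in AU, period in years)
r_earth, T_earth = 1.00, 1.00
r_mars,  T_mars  = 1.52, 1.88

t = np.linspace(0.0, 4.0, 4000)                    # four Earth years
earth = r_earth * np.exp(2j*np.pi*t/T_earth)       # heliocentric positions,
mars  = r_mars  * np.exp(2j*np.pi*t/T_mars)        # encoded in the complex plane

mars_from_earth = mars - earth                     # geocentric position of Mars

# apparent ecliptic longitude of Mars seen from Earth; where it decreases,
# Mars appears to move backwards against the stars (retrograde motion)
longitude = np.unwrap(np.angle(mars_from_earth))
retrograde_fraction = np.mean(np.diff(longitude) < 0)
print(f"fraction of time Mars appears retrograde: {retrograde_fraction:.2f}")
```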
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/728129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 10,
"answer_id": 4
} |
Why Is Capacitance Not Measured in Coulombs? I understand that the simplest equation used to describe capacitance is $C = \frac{Q}{V}$. While I understand this doesn't provide a very intuitive explanation, and a more apt equation would be one that relates charge to area of the plates and distance between them, I'm having trouble understanding it in general. Capacitance seems to be describing, well, the capacity of two plates to store charge (I understand that the electric field produced between them is generally the focus more so than the actual charge). Shouldn't it just be measured in units of charge such as coulombs? I'm sure this is due to a lack of more fundamental understanding of electric potential and potential difference but I'm really not getting it.
| The definition of capacitance, $C=Q/V$, suggests that it should be measured in units of charge per unit of potential.
Remark: What is more amusing is that in some system of units (e.g., in cgs) the units of capacitance turn out to be the units of length.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/728239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 7,
"answer_id": 1
} |
Why does particle leave circular motion after string slacks? If a particle is attached to a string and made to move in a vertical circle with initial velocity of $\sqrt{4gl}$ $m/s$ where l is the length of string, at some angle (approx $131°$ with the initial position), the string slacks and the particle leaves the circular path and undergoes projectile motion. Why does this phenomenon occur even though the component of weight can provide centripetal acceleration?
If we throw the particle at $\sqrt{2gl}$ $m/s$ , the velocity and tension both become 0 when the string is in horizontal direction. How can we know whether the particle will now oscillate or leave the circular path?
| Because tension and weight work in tandem to break the circle this time, and the tension is not supported by any centripetal force, so after some time the circle collapses:
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/728852",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
} |
Pascal's law further simple proof for students of an high school Pascal's law states that a force applied on a surface of a fluid is transmitted within the fluid in all directions of the fluid with the same intensity on equal surfaces. Similarly, it can be stated that pressure exerted at one point of a fluid mass is transmitted with the same intensity to every other point and in all directions.
Stevin's law states that, if only atmospheric pressure $p_{\text{at}}$ acts on the surface of a fluid of density $\rho$ then at a depth $h$ below the surface we have
$$p=p_{\text{at}}+\rho gh$$
Suppose that the atmospheric pressure is increased by an amount $\Delta p$, that is, by
$$p_{\text{at}}\to p_{\text{at}}+\Delta p$$
Then at the depth $h$ will be
$$p=p_{\text{at}}+\Delta p+\rho gh=(p_{\text{at}}+\rho gh)+\Delta p \tag 1$$
so increasing the pressure at the fluid surface by an amount $\Delta p$ increases the pressure at each point in the fluid by the same amount.
I am looking for a simple demonstration for my high school students (15 years old). Is there another one that is a little better? For example, I did not understand the motivation of (1).
| The pressure varies linearly with depth $h$, while the other two pressures are constant, with no depth variation, and are transmitted everywhere by Pascal's Law.
School students who understand histograms as changes in the height of a graph with respect to a horizontally varying independent quantity may also, hopefully, follow the variation of pressure with depth when it is depicted graphically alongside a container holding the liquid.
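A tiny numerical illustration that may help alongside the graph (assumed values for water): increasing the surface pressure by $\Delta p$ shifts the whole pressure-versus-depth line upwards by the same $\Delta p$, at every depth.

```python
# assumed values: water under standard atmospheric pressure
rho, g = 1000.0, 9.8              # density [kg/m^3], gravity [m/s^2]
p_at, delta_p = 101325.0, 20000.0

for h in [0.0, 0.5, 1.0, 2.0]:    # depths in metres
    p_before = p_at + rho*g*h
    p_after  = (p_at + delta_p) + rho*g*h
    print(h, p_after - p_before)  # the difference is delta_p at every depth
```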
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/729026",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 1
} |
The size of the universe and the scale factor of $\Lambda$CDM model I wonder is there a relation between the size of the universe and the scale factor calculated by solving Friedmann equations.
I mean if the volume of the universe nowadays is around $V= 10^{78} m^3$, does this mean the current value of the cosmological scale factor is around $10^{26} m$? Can we say $V=a^3 ~m^3$?
When solving Friedmann equation:
$$\left( \frac{\dot{a}}{a} \right) = H_0 \sqrt{\Omega_ra(t)^{-4}+\Omega_ma(t)^{-3}+\Omega_{\Lambda}}$$
According to this thread: The scale factor of ΛCDM as a function of time
Or according to this code:
The scale factor of ΛCDM
It gives the normalized dimensionless scale factor with $a(t_0)=1$, where $t_0$ is the current age of the universe $\sim 13$ Gyr .
Now I think if we wish to get a dimensionful scale factor with units of length, we should use an alternative formula for Friedmann equations. I tried
$$\dot{a}(\eta) = \frac{H_0}{c} \left(\Omega_m a_0^3 a + \Omega_r a_0^4 + \Omega_\Lambda a^4\right)^{1/2}$$
Where $\eta$ is a dimensionless conformal time . This formula is from Notes equation (28). But when using NDSolve in Mathematica in this Thread the equation has not been solved.
So any help to understand that? I thought when the equation is solved it gives $a(\eta_0) = a(13) = 10^{26}$ meter ?
| The size of the universe and the size of the observable universe are different things. The radius of the observable universe is equal to the conformal time times the scale factor if they're appropriately normalized.
If $k=\pm 1$, the scale factor is the radius of curvature of the spatial slices. Commonly it's called $R$ instead of $a$ in that case. If $k=1$ then $R$ is the size of the universe (or its reduced circumference). If $k=-1$, it at least sets a characteristic scale, though the total volume is of course infinite.
If $k=0$, as it is in ΛCDM, then there is no geometrical basis for assigning units to the scale factor. Your formula for $a'(η)$, although correct, doesn't help since it's invariant under a rescaling of $a$ (keeping in mind that the unitless conformal time also depends on $a_0$).
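A short numerical sketch of the point (assumed, rough flat-ΛCDM parameters): with the dimensionless normalization $a(t_0)=1$ the Friedmann equation happily returns the age of the universe, but nothing in the calculation ever attaches a length unit to $a$.

```python
import numpy as np
from scipy.integrate import quad

H0 = 67.7                              # assumed Hubble constant [km/s/Mpc]
Om_r, Om_m = 9e-5, 0.31                # assumed density parameters (flat model)
Om_L = 1.0 - Om_m - Om_r

H0_per_Gyr = H0 * 1.022e-3             # 1 km/s/Mpc ~ 1.022e-3 per Gyr

def H_over_H0(a):                      # H(a)/H0 for the dimensionless scale factor
    return np.sqrt(Om_r/a**4 + Om_m/a**3 + Om_L)

# age = integral of da / (a H(a)) from a ~ 0 to a = 1
age, _ = quad(lambda a: 1.0 / (a * H0_per_Gyr * H_over_H0(a)), 1e-8, 1.0)
print(f"age ~ {age:.1f} Gyr")          # ~13.8 Gyr; 'a' stays dimensionless throughout
```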
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/729635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How electrolytes conducts electricity? While studying electrochemistry, I came across two key points that I'm unable to understand.
a) Why does DC alone break down the electrolytic liquid?
and
b) Why doesn't AC do the same?
| I'm going to assume breakdown means the reassociation of the dissolved analyte in solution. Suppose the dissolved salt is potassium. The redox potential for K is:
K+ + e− ⇌ K(s) at -2.93 V
Any voltage less than -2.93 V will create solid potassium at the electrode and reduce the number of K ions near the surface. This creates a concentration gradient at the electrode surface. The concentration gradient promotes diffusion of K ions to the electrode surface for reduction to a solid. Since K is conductive, over time, all the K ions will be reduced leaving only water.
Conversely, any voltage greater than -2.93 V oxidizes any K on the electrode surface back into solution and changes the concentration gradient by increasing the ions at the electrode surface.
If an AC voltage is introduced where the voltage oscillates about the redox potential, the result is oxidation and reduction about the electrode and no net change in ion concentration at the electrode surface.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/729830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Work Integral and its derivation The work integral is something I saw a long time ago and I completely understood it.
\begin{align}
W_{12} & =\int F(x)dx=m\int^{t_2}_{t_1}adx=m\int\left(\frac{dv}{dt}\right)dx=m\int\left(\frac{dv}{dx}\right)\left(\frac{dx}{dt}\right)dx\\
&=m\int\left(\frac{dx}{dt}\right)dv=\frac12\left(mv_2^2-mv_1^2\right)
\end{align}
which is clear as day.
But then I saw another version and I cannot follow it:
So my question is:
How did $$m\int\frac{d}{dt}[\dot{x}(t)]\dot{x}(t)dt$$ become $$\frac{m}{2}\int\frac{d}{dt}[\dot{x}(t)]^2dt$$
Where does the $\frac{1}{2}$ come from?
And how did that expression become the change in kinetic energy equation?
I'm sure this is a simple question but I haven't been able to find a solution online, so I'm asking here. Thanks for the help.
| They're using the $2^{nd}$ principle of dynamics $\frac{d \mathbf{Q}}{dt} = \mathbf{F}$ to replace $\mathbf{F}$ with $\frac{d \mathbf{Q}}{dt} = \frac{d}{dt}(m\mathbf{v})$.
With the assumption $\dot m = 0$, you can further manipulate the expression $\mathbf{F} = m \frac{d \mathbf{v} }{dt} $, before writing the work integral as
$W = \displaystyle \int_{t_0}^{t_1} \mathbf{F} \cdot \mathbf{v} dt = m \int_{t_0}^{t_1} \underbrace{\frac{d \mathbf{v} }{dt} \cdot \mathbf{v}}_{=\frac{d}{dt} \left(\frac{1}{2} \mathbf{v} \cdot \mathbf{v} \right)} dt = m \int_{t_0}^{t_1} \dfrac{d}{dt} \left( \dfrac{1}{2} |\mathbf{v}|^2 \right) dt = \int_{0}^{1} dT = T_1 - T_0$.
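The factor of $\tfrac12$ is nothing more than the chain rule run backwards, $\frac{d}{dt}\left(\tfrac12 v^2\right)=v\,\dot v$; a one-line symbolic check (sympy sketch):

```python
import sympy as sp

t = sp.symbols('t')
v = sp.Function('v')(t)

print(sp.diff(sp.Rational(1, 2) * v**2, t))  # v(t)*Derivative(v(t), t)
```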
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/730239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Why does a piece of thread form a straight line when we pull it? Experience tells that if we pull a piece of thread, it forms a straight line, a geodesic in the Euclidean space. If we perform a similar experiment on the surface of a sphere, we will get an arc of a great circle, which is also a geodesic.
How to show this in general, for any geometry?
| As @Fardin pointed out, you only get a straight line in the absence of gravity. While gravity is active, the string will form a catenary.
In general, the string will try to follow the shortest distance between the 2 endpoints. If it didn't do that, the tension on the string would try to shorten it. The definition of a geodesic is that it is the shortest path between 2 points on a surface. In flat space, a straight line is that shortest distance; on a sphere it's a great circle.
EDIT
To answer a comment by @MonsieurPeriné, here's my intuitive answer why tension straightens the string.
Assume there is a point A on the string that is not on the straight line. At A, the tension obviously deviates from the straight line. If the string were a rod, that tension would create a bending moment that would try to twist the rod around one of its endpoints. That would try to move A closer to the straight line. As the string is not a rod, it will deform instead, until all points are on the line (or geodesic).
For the mathematical reason see the answer by @linkhyrule5.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/730469",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 0
} |
Sakurai on the time-evolution operator I have three questions about Sakurai's discussion of the time-evolution operator.
*
*First question:
In equation 2.12, Sakurai requires the composition property of the time-evolution operator:
$$U(t_2,t_0)=U(t_2,t_1)U(t_1,t_0)$$
Why is this required?
*Second question: In equation 2.15, Sakurai asserts the requirements of the time-evolution operator are satisfied by
$$U(t_0+dt,t_0)=1-i\Omega dt$$
with $\Omega$ a Hermitian operator. Where does this come from?
*Third question: Why is time evolution represented by an operator at all when, as Sakurai points out, time is not an observable like position or momentum? Observables are represented by operators.
|
Third question: Why is time evolution represented by an operator at all when, as Sakurai points out, time is not an observable like position or momentum? Observables are represented by operators.
In addition to J. Murray's answer, I'd like to add an answer only to this specific question, because I suspect the OP might miss a crucial point. In quantum mechanics observables are represented by a special class of operators, that of self-adjoint (or Hermitian if you wish) operators. This only implies that operators which are not self adjoint cannot represent observables, not that operators (whether self-adjoint or not) cannot represent other properties.
In particular, since time evolution in quantum mechanics is deterministic, that is, since the state at time $t_0$ uniquely determines the state at time $t$, one can intuitively expect to be able to represent this mapping of the states from $t_0$ to $t$ with an operator (which is, in fact, a mapping of the Hilbert space into itself). This intuition is then confirmed by the mathematics, which also shows that the time evolution is unitary, and not self-adjoint.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/730577",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Confusion about Transforming Christoffel Symbols I'm trying to understand how transforming Christoffel symbols works. Specifically I'm thinking about the transformation between Schwarzschild and Eddington-Finkelstein coordinates,
$$\Gamma^v_{\;vv}=\frac{\partial v}{\partial x^m}\frac{\partial x^n}{\partial v}\frac{\partial x^p}{\partial v}\Gamma^m_{\;np}+\frac{\partial^2 x^m}{\partial v^2}\frac{\partial v}{\partial x^m}$$
With $cv=ct+r+r_{S}\ln(r-r_{S})$, $m$, $n$, and $p$ only get summed through $t$ and $r$. I'm just not sure how to deal with the $\frac{\partial t}{\partial v}$ and $\frac{\partial r}{\partial v}$. I've never done anything like that before. I'm assuming it cannot be the reciprocals of $\left(\frac{\partial v}{\partial t}\right)$ and $\left(\frac{\partial v}{\partial r}\right)$, since it is a multivariable function, but beyond that I'm not sure. Just to be clear, I could just find it normally with the new metric, but I want to understand how the Christoffel symbols transform as well.
| transformation between Schwarzschild and Eddington-Finkelstein coordinates.
the line element of Schwarzschild metric is:
$$ds_S^2=- \left( 1-{\frac {{\it r_s}}{r}} \right) {{\it dt}}^{2}+{{\it dr}}^{2}
\left( 1-{\frac {{\it r_s}}{r}} \right) ^{-1}+{r}^{2}{d\Omega }^{2}
$$
and of Eddington-Finkelstein metric
$$ds_E^2=-\left(1-\frac{r_s}{r}\right)\,dv^2+2\,dr\,dv+r^2\,d\Omega^2$$
from here you can obtain the transformation (Jacobi-Matrix) of the coordinates
$$ \begin{bmatrix}
dt \\
dr \\
d\Omega \\
\end{bmatrix}=\mathbf{T}\,
\begin{bmatrix}
dv \\
dr \\
d\Omega \\
\end{bmatrix}\quad,
\mathbf T= \left[ \begin {array}{ccc} 1&-{\frac {r}{r-{\it r_s}}}&0
\\ 0&1&0\\ 0&0&1\end {array}
\right]
$$
or
$$
\begin{bmatrix}
dv \\
dr \\
d\Omega \\
\end{bmatrix}=\mathbf T^{-1}\,
\begin{bmatrix}
dt \\
dr \\
d\Omega \\
\end{bmatrix}\quad,
\mathbf T^{-1}=\left[ \begin {array}{ccc} 1&{\frac {r}{r-{\it r_s}}}&0
\\ 0&1&0\\ 0&0&1\end {array}
\right]
$$
with those equations you can obtain the transformation of the Christoffel symbols.
to obtain the transformation matrix $~\mathbf T~$ you make this ansatz
$$\mathbf T^T\,\mathbf G_S\,\mathbf T=\mathbf G_E$$
where $~\mathbf G~$ is the metric .
thus for example:
$$dt=dv-\frac{r}{r-r_s}\,dr=\frac{\partial t}{\partial v}\,dv+
\frac{\partial t}{\partial r}\,dr$$
$$dv=dt+\frac{r}{r-r_s}\,dr=\frac{\partial v}{\partial t}\,dt+
\frac{\partial v}{\partial r}\,dr$$
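A quick symbolic check of the ansatz $\mathbf T^T\,\mathbf G_S\,\mathbf T=\mathbf G_E$ on the $(t,r)$ block (a sympy sketch, using the Jacobian quoted above):

```python
import sympy as sp

r, rs = sp.symbols('r r_s', positive=True)
f = 1 - rs/r

G_S = sp.diag(-f, 1/f)                 # (t, r) block of the Schwarzschild metric
T = sp.Matrix([[1, -r/(r - rs)],       # dt = dv - r/(r - rs) dr
               [0, 1]])                # dr = dr

print(sp.simplify(T.T * G_S * T))      # [[-(1 - rs/r), 1], [1, 0]] -> (v, r) block of E-F
```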
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/730744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
I'm having trouble understanding the intuition behind why $a(x) = v\frac{\mathrm{d}v}{\mathrm{d}x}$ I was shown
\begin{align}
a(x) &= \frac{\mathrm{d}v}{\mathrm{d}t}\\
&= \frac{\mathrm{d}v}{\mathrm{d}x}\underbrace{\frac{\mathrm{d}x}{\mathrm{d}t}}_{v}\\
&= v\frac{\mathrm{d}v}{\mathrm{d}x}
\end{align}
However, this feels somewhat unintuitive, and somewhat questionable mathematics-wise. Perhaps it's the best way to explain it, but I was hoping for a more intuitive understanding of this formula.
| Writing for a general case, $v$ can be an explicit function of both $t$ and $x$ (for 1D motion along $x$).
$\therefore$ \begin{equation}
dv=\frac{\partial v}{\partial x}dx+\frac{\partial v}{\partial t}dt \Rightarrow \frac{dv}{dt}=\frac{\partial v}{\partial x}\frac{dx}{dt}+\frac{\partial v}{\partial t}=a \quad ...(\star)
\end{equation}
if $v$ is an explicit function of $x$ only($v=v(x))$, $\frac{\partial v}{\partial t}=0 $ and $\frac{\partial v}{\partial x}=\frac{dv}{dx}$
$\therefore$ ($\star$) becomes: \begin{equation}
a=\frac{dv}{dx}\frac{dx}{dt}=\left(\frac{dv}{dx}\right)v
\end{equation}
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/730923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 7,
"answer_id": 1
} |
Cross product and spinor correspondence I wonder if there is a correspondence between a cross product of two vectors $\vec{x}, \vec{y} \in \mathbb{R}^3$ and their associated spinors $\lambda^\alpha, \tilde{\lambda}^\dot{\alpha}$ and $\omega^\alpha, \tilde{\omega}^\dot{\alpha}$.
Here is what I mean by that:
Given two vectors $\vec{x} = (x_1, x_2, x_3)$ and $\vec{y} = (y_1, y_2, y_3)$ one can associate the two complex matrices
\begin{equation}
\vec{x} \mapsto X^{\alpha \dot{\alpha}} =
\begin{bmatrix}
x_3 & x_1 - i x_2 \\
x_1 + i x_2 & -x_3
\end{bmatrix}
\quad and \quad
\vec{y} \mapsto Y^{\alpha \dot{\alpha}} =
\begin{bmatrix}
y_3 & y_1 - i y_2 \\
y_1 + i y_2 & -y_3
\end{bmatrix}
,
\end{equation}
with
\begin{equation}
det\left|X^{\alpha \dot{\alpha}}\right| = det\left|Y^{\alpha \dot{\alpha}}\right| = 0.
\end{equation}
Since the determinant of the matrices is zero these matrices may be written as an outer product of two complex 2-vectors:
\begin{equation}
X^{\alpha \dot{\alpha}} = \lambda^\alpha \otimes \tilde{\lambda}^\dot{\alpha}
\quad and \quad
Y^{\alpha \dot{\alpha}} = \omega^\alpha \otimes \tilde{\omega}^\dot{\alpha}
\end{equation}
The cross product of $\vec{x}, \vec{y}$ can now be associated with these matrices like:
\begin{equation}
\vec{x}\times\vec{y} = i\frac{1}{2}\left( XY-YX \right)
\end{equation}
My question now is, how can $i\frac{1}{2}\left( XY-YX \right)$ be expressed by means of the spinors $\lambda^\alpha, \tilde{\lambda}^\dot{\alpha}$ and $\omega^\alpha, \tilde{\omega}^\dot{\alpha}$?
| from the Wikipedia
$$\vec x\mapsto X\quad,\vec y\mapsto Y
\quad,\vec z=\vec x\times\vec y\mapsto Z$$
$$\frac 12\left(X\,Y-Y\,X\right)=i\,Z\quad,\rm det(Z)=0$$
with
\begin{align*}
&X=\begin{bmatrix}
\xi_{x1} \\
\xi_{x2}\\
\end{bmatrix}
\begin{bmatrix}
-\xi_{x2} & \xi_{x1} \\
\end{bmatrix}\quad ,\vec x\cdot \vec x=0
\end{align*}
\begin{align*}
&Y=\begin{bmatrix}
\xi_{y1} \\
\xi_{y2}\\
\end{bmatrix}
\begin{bmatrix}
-\xi_{y2} & \xi_{y1} \\
\end{bmatrix}\quad ,\vec y\cdot \vec y=0
\end{align*}
where
\begin{align*}
&\xi_x=\begin{bmatrix}
\xi_{x1} \\
\xi_{x2}\\
\end{bmatrix}\quad,
\xi_y=\begin{bmatrix}
\xi_{y1} \\
\xi_{y2}\\
\end{bmatrix}
\end{align*}
are the spinors
Other solution
\begin{align*}
&\vec x=\begin{bmatrix}
x_1 \\
x_2 \\
x_3 \\
\end{bmatrix}=
\left[ \begin {array}{c} {\xi_{x1}}^{2}-{\xi_{x2}}^{2}
\\ i \left( {\xi_{x2}}^{2}+{\xi_{x1}}^{2}
\right) \\ -2\,\xi_{x1}\xi_{x2}\end {array}
\right] \quad \text{with}~\vec{x}\cdot\vec{x}=0\\
&\vec y=\begin{bmatrix}
y_1 \\
y_2 \\
y_3 \\
\end{bmatrix}=
\left[ \begin {array}{c} {\xi_{y1}}^{2}-{\xi_{y2}}^{2}
\\ i \left( {\xi_{y2}}^{2}+{\xi_{y1}}^{2}
\right) \\ -2\,\xi_{y1}\xi_{y2}\end {array}
\right] \quad \text{with}~\vec{y}\cdot\vec{y}=0\\
\end{align*}
\begin{align*}
\vec{z}&=\vec{x}\times\vec{y}\\
&=\left[ \begin{array}{c} 0\\ -2\,\xi_{x1}\xi_{x2}{\xi_{y1}}^{2}+2\,\xi_{x1}\xi_{x2}{\xi_{y2}}^{2}+2\,{\xi_{x1}}^{2}\xi_{y1}\xi_{y2}-2\,{\xi_{x2}}^{2}\xi_{y1}\xi_{y2}\\ 0\end{array} \right]\\
&+i\,\left[ \begin{array}{c} -2\,{\xi_{x2}}^{2}\xi_{y1}\xi_{y2}-2\,{\xi_{x1}}^{2}\xi_{y1}\xi_{y2}+2\,\xi_{x1}\xi_{x2}{\xi_{y2}}^{2}+2\,\xi_{x1}\xi_{x2}{\xi_{y1}}^{2}\\ 0\\ -2\,{\xi_{x2}}^{2}{\xi_{y1}}^{2}+2\,{\xi_{x1}}^{2}{\xi_{y2}}^{2}\end{array} \right]
\end{align*}
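A direct symbolic check of the quoted identity $\tfrac12\left(X\,Y-Y\,X\right)=i\,Z$ (a sympy sketch; this particular identity holds for arbitrary vectors, with no null condition needed):

```python
import sympy as sp

x1, x2, x3, y1, y2, y3 = sp.symbols('x1 x2 x3 y1 y2 y3')

def to_matrix(v1, v2, v3):
    return sp.Matrix([[v3, v1 - sp.I*v2],
                      [v1 + sp.I*v2, -v3]])

X = to_matrix(x1, x2, x3)
Y = to_matrix(y1, y2, y3)
Z = to_matrix(x2*y3 - x3*y2, x3*y1 - x1*y3, x1*y2 - x2*y1)  # x cross y

print(sp.simplify(sp.Rational(1, 2)*(X*Y - Y*X) - sp.I*Z))  # zero matrix
```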
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/731090",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
A pendulum in a superfluid Imagine to submerge a pendulum in a supefluid. Of course we assume an ideal pendulum, whose joint does not freeze or deteriorate due to the extremely low temperature. We also assume the superfluid to be at zero temperature, so we can neglect its normal component.
What happens to its oscillations? Are they somehow damped? Or do they keep a constant amplitude?
In other words: can some energy be transferred from the pendulum to the superfluid (for example, thanks to the excitation of sound waves in the superfluid)?
| If the pendulum is fully submerged (and the velocity of the sphere is sufficiently small) then the flow around the sphere is Stokes flow, and the drag is proportional to viscosity. As a result there is no drag in a zero temperature superfluid. If the sphere is only partially submerged, then it can excite surface waves which take away energy. This setup is not completely academic — torsion pendulums have been used to measure the viscosity of liquid helium.
Further remarks:
*
*The drag is described by Stokes formula $F=6\pi \eta RV$, where $\eta$ is the viscosity of the normal fluid, $R$ is the radius of the sphere, and $v$ is the velocity relative to the fluid. Drag vanishes if $\eta$ vanishes, or the normal density vanishes.
*If the sphere is only partially submerged then there is a free surface and Stokes solution does not apply. Indeed, more generally, d'Alembert's paradox does not apply. An object will generate a bow wave in an inviscid fluid, and this leads to drag.
*Every superfluid has a critical velocity. If the velocity of the object exceeds that velocity, then superfluidity breaks down, even at very small temperature.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/731339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 1,
"answer_id": 0
} |
How to derive the $vx/c^2$ term from first principles? In Lorentz transforms, the formula for time transformation is
$$t' = \gamma \left( t - \frac{v x}{c^2} \right)$$
I understand that the term $\frac{v x}{c^2}$ represents "time delay" seen by a stationary observer but I don't understand how to derive it from first principles. I understand $v/c$ as speed and $x/c$ as distance. Why multiply speed with distance? I thought time is distance divided by speed?
| I will try with the diagram below. We suppose that the container ijfg is filled with water; the light crosses this container from the face $f$ towards the face $g$ with a speed $v$ and takes a time $t$ to cross it, so we have $$l=vt$$
If there is no water, the light will travel a distance $L=ct$ (during the same time $t$), and we have: $$\frac{L}{c}=\frac{l}{v} $$
which gives $$L=\frac{c}{v}l=nl\;\;\;\;\;(1)$$
which is the optical path.
we can see that : $$l=vt=cT=c(t-t')$$
with: $$ct=vt+ct'$$ $$t'=\left(t-\frac{vt}{c}\right)\;\;\;\;\;(2)$$
from (1), we have :$$\frac{l}{c}=\frac{v}{c^{2}}L=T$$
and (2) becomes :$$t'=\left(t-\frac{vL}{c^{2}}\right)$$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/731491",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
Is there a known closed-form expression for the susceptibility of the 2-D Ising model at $B = 0$? The Onsager solution for the 2-D Ising model allows us to find (among other things) complicated expressions for the internal energy of the system (in the thermodynamic limit and in zero magnetic field):
$$
u \equiv \frac{U}{JN} = - \coth \frac{2}{t} \left\{ 1 + \frac{2}{\pi} \left[ 2 \tanh^2 \left( \frac{2}{t} \right) - 1 \right] K\!\left[4 \, \text{sech}^2 \left( \frac{2}{t} \right) \tanh^2 \left( \frac{2}{t} \right) \right] \right\}
$$
where $t \equiv kT/J$ is the dimensionless temperature and $K(x)$ is a complete elliptic integral of the first kind. We can then (in principle) find a closed-form expression $C = \partial U/\partial T$.
Further, the net mean magnetization is known to be
$$
m = \begin{cases} \left[ 1 - \text{csch}^4 (2/t) \right]^{1/8} & t < 2/\ln(1 + \sqrt{2}) \\ 0 & t > 2/\ln(1 + \sqrt{2})
\end{cases}
$$
The question is then:
Is there a known closed-form expression for the magnetic susceptibility $\chi$ of the 2-D Ising model at zero field?
My (limited) intuition tells me that there should be, because energy and heat capacity are related to the first and second derivatives of the partition function with respect to $\beta$, and we have closed-form expressions for both of those quantities. Similarly, the magnetization and susceptibility are related to the first and second derivatives of the partition function with respect to the external field—but I have not been able to find a source that discusses a closed-form expression for $\chi$, only for $m$. Am I just looking at the wrong sources, or is there not actually a known expression for $\chi$ at zero field?
| There are no explicit expressions, as far as I know, only expressions in the form of (complicated) infinite series, originating from expressing the magnetic susceptibility as a sum over 2-point correlation functions and using the exact expressions known for the latter. These have been used to analyze the remarkable analytic properties of the magnetic susceptibility.
The resulting expressions being very complicated, it seems pointless to reproduce them here. You can find them (together with links to the relevant literature) in McCoy's 2009 book; see Section 10.1.9 therein. You may also have a look at his article on scholarpedia.
In addition, a 2010 review of the history of this problem by some of its main investigators can be found here.
My (limited) intuition tells me that there should be, because energy and heat capacity are related to the first and second derivatives of the partition function with respect to β, and we have closed-form expressions for both of those quantities. Similarly, the magnetization and susceptibility are related to the first and second derivatives of the partition function with respect to the external field—but I have not been able to find a source that discusses a closed-form expression for χ, only for m.
Note that there are no known expressions for the free energy as a function of the magnetic field. This prevents the computation of the susceptibility by differentiating the free energy, which is the reason the available computations rely instead on correlation functions. (This is actually also the typical way the spontaneous magnetization is computed: $m^2=\lim_{n\to\infty} \langle \sigma_{(0,0)}\sigma_{(n,0)}\rangle$.)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/731629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 0
} |
Could you feel your weight falling through the a tube drilled through the center of the earth? Suppose you drill a hole through the center of the earth (assume the earth is uniform and no air resistance) and you jump in. Would you be "weightless" throughout the entire fall?
The reason I ask is that if you jump off a cliff, throughout the entire fall you feel weightless (just like when astronauts train for the weightless feeling in orbit, they practice by going in an airplane and having the airplane fall to approximate the experience). Does this same weightless experience happen when you are falling through the center-of-the-earth tube?
I know that if you are stationary at the center of the earth, then you are weightless; but, I'm interested in falling throughout the entire hole.
The reason why I'm confused is that it's well-known that when you fall, you oscillate (simple harmonic motion) up and down the tube and this oscillation seems to imply that you will feel your weight.
| In order to feel your weight something must be present that prevents you from free falling.
In everyday life the ground we are standing on provides the barrier that keeps us from free falling.
You refer to the implication that an object in free fall inside a corridor straight through the center of gravity of a gravitating mass will undergo oscillation. That is: the direction of gravitational acceleration is in opposite direction on opposite sides of the center of gravitational acceleration.
The point is: if you grant that inertial mass and gravitational mass are equivalent then an accelerometer-strapped-to-the-object will not register change of direction of gravitational acceleration, since given inertial-gravitational mass equivalence the accelerometer will not register any acceleration in the first place!
If equivalence of inertial and gravitational mass is granted then you expect that an accelerometer-strapped-to-the-object will register acceleration only if some barrier is preventing free fall.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/732165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Regarding Lenz's Law presented in hyperphycsics The following diagram is presented in hyperphysics as an introduction of Faraday's Law and Lenz's Law.
If the red arrows represent the direction of current, then what do the positive and negative poles across the resistor means? From my understanding, resistors do not produce a potential difference therefore poles shouldn't be present in the first place.
| It seems you got confused by the drawing about what is cause
and what is effect.
Of course the resistor does not produce the voltage.
The voltage is produced by the coil.
And this voltage is then consumed by the resistor.
For reducing confusion let us first consider the situation without the resistor.
The changing magnetic field $\frac{\Delta B}{\Delta t}$ produces,
according to Faraday's law, a voltage between the two ends of the coiled wire
$$V=-NA\frac{\Delta B}{\Delta t}$$
where $N$ is the number of turns of the coil, and $A$ is
the area enclosed by one turn.
There is no current flowing, because the resistance
between the open ends of the coiled wire is infinite.
You could measure this voltage by connecting a volt-meter
to the two ends of the coil.
You will find the voltage has a polarity as shown
by the $+$ and $-$ in the drawing.
And there is still no current flowing, because the volt-meter
has a very high (ideally an infinitely high) resistance.
Now let us add the resistor (with resistance $R$).
The resistor reacts to the given voltage $V$ by letting
a current $I=\frac{V}{R}$ pass through the resistor from $+$ to $-$,
i.e. in the direction shown by the red arrow.
This current $I$ produces a magnetic field $\color{red}{B_\text{induced}}$
of its own. And according to Lenz's law it
is directed opposite to $\color{blue}{\Delta B}$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/732425",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Various expressions for total amplitude in Frederic Schuller's German QM lectures In this lecture at around 53:31, the equation for total amplitude in terms of elementary amplitude along a path is written down.
$$\phi_{\text{total} } ( \overline{x_0} , \overline{x_n} , t_N- t_0) = \lim_{N \to \infty} \prod_{i=1}^n \int_{\overline{x_0}}^{\overline{x_n } } dx_i \phi_{path(x_1,x_2...,x_N)}^N \tag{1}$$
The set up is that a particle is shot by a source to a detector screen, and the total probability of the particle being at position $\overline{x_n}$, having started at position $\overline{x_0}$ at time $t_0$, is found by considering the integral of the elementary probability amplitude over all possible paths. Later the elementary probability amplitude is shown to be given as
$$ \phi_{path(x_1,x_2...,x_N)}^N = e^{ i \frac{S \left[\text{path} \right]}{\overline{h}}}.\tag{2}$$
Now my confusion happens due to the successive lecture-3 at 15:41. There he introduces two factors $A$ and $\epsilon$ into the equation. $A$ is a normalization factor and $\epsilon$ is the total time and calls the equation the path integral. Here is a picture:
$$ \phi( x_A,x_B; t_A,t_B) = \lim_{N \to \infty} \left( \prod_{i=1}^{N-1} \int dx_i \right) \frac{1}{A(\epsilon)^N} \exp \{ \frac{i}{ \hbar } S\left[ \text{path} (x_0,...,x_N ) \right] \} \tag{3}$$
What is the difference between (1) and (3)? Has prof maybe made typo in (1)?
| Yes, $$A(\epsilon)~=~\sqrt{\frac{2\pi i\hbar \epsilon}{m}}, \qquad \epsilon~=~\frac{t_N-t_0}{N},$$ in eq. (3) is the famous Feynman fudge factor, which needs to be included in the path integral measure. For details, see e.g. this, this & this Phys.SE posts.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/732680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Associativity of covariant derivatives I'm having trouble proving that covariant differentiation is an associative operation.
Essentially I'll have to show
$$\nabla_\mu( \nabla_\nu \nabla_\sigma) = (\nabla_\mu\nabla_\nu) \nabla_\sigma. $$
But is it enough to show that both LHS and RHS yield the same result when acted up on a scalar or a contravariant vector?.
Will this hold for any general tensor?
Is there any other method to show this ?
| Frankly it boils down to function/operator composition, which is associative.
Take a general tensor $T^\alpha_\beta$.
$$(\nabla_\nu \nabla_\sigma)T^\alpha_\beta = \nabla_\nu (\nabla_\sigma T^\alpha_\beta )$$
so
$$\nabla_\mu( \nabla_\nu \nabla_\sigma)T^\alpha_\beta = \nabla_\mu(\nabla_\nu (\nabla_\sigma T^\alpha_\beta ))$$
Similarly for the right hand side
$$ (\nabla_\mu\nabla_\nu) (\nabla_\sigma T^\alpha_\beta)= \nabla_\mu(\nabla_\nu (\nabla_\sigma T^\alpha_\beta ))$$
So both sides of your equation are the same, when expanded.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/733086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Is there a conflict, or is there not a conflict between the Pusey-Barrett-Rudolph (PBR) theorem and the information theory interpretation? In the wikipedia article, it says that the PBR theorem sort of rules out the psi epistemic interpretations. I want to know, is this the end of the information theory interpretation and relational interpretation?
I am thinking that there is no conflict. The statement of this theorem says that one physical reality is not consistent with multiple pure states. But psi epistemic models do not
attribute different pure states to the same physical situation, do they?
For example, in the Wigner's friend experiment, the information theory/relational interpretation says that the friend observes a collapsed state, say $|\text {spin up}\rangle$. But Wigner will describe the experiment using something like $|\text{spin up, friend measured up}\rangle +|\text{spin down, friend measured down}\rangle$.
So it is true that Wigner and Wigner's friend are using different states, but they're not describing the same physical situation. Wigner's friend is only describing the state of the particle. But Wigner is describing the joint system of his friend and the particle.
Is this correct? Is there a conflict or not, between the PBR theorem and the information theory/ relational interpretation?
| Okay, so PBR is solely a statement about hidden variable theories, i.e. theories which say that the pure state is an incomplete description of physical reality, and that a hidden state provides the complete description.
Since the relational interpretation and the information theoretic interpretation do not assert the existence of any hidden variables, I think PBR is indeed not in any conflict with them.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/733291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Weight in Interplanetary Space How is weight zero in interplanetary
space? The Moon is orbiting the Earth because of the gravitational pull of earth. Then gravity must exist in interplanetary space too. So any body in space must also have an acceleration due to gravity ($g$) but $g$ must actually be 0 for weight to be zero.
Can anyone please help me with this?
| Depends how you define weight. Operational weight (which you measure with weighing scales) is of course zero, because the body doesn't exert any force on the scales/support while operating in Earth orbit or in space.
However, gravitational weight defined as $$ W = G \frac {Mm}{r^2} $$
is not zero, because the body $m$ is attracted gravitationally to the bigger body $M$.
Besides, to be technically correct, even in orbit the body and the scales will be attracted towards each other due to the microgravity force acting between them, so even the operational weight may not be exactly zero, but of the order of $\mu N$ or so.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/733428",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Why do we need an earth wire? Apologies if this question has already been asked before.
In this video and other sources, it says that the ground/earth wire is connected to the outside metal casing of an electrical appliance in order to create a low-resistance path back to the live wire in case of a fault. However, if the ground wire wasn't there, how would current be able to flow into the person and cause an electric shock if there was no "other end of the circuit"?
The best explanation I can find so far is that the live wire will "electrify" the casing, or make it become "live", but I'm not sure if this is possible even if the circuit is not completed. Is it possible for electrons to flow along the casing even if there is nowhere for them to go?
Edit: Sorry, my question wasn't phrased very well. If the neutral wire is also connected to the ground, how does the resistance of the circuit inside the machine compare to the ground? My assumption is that the resistance of the ground is higher, but I may be wrong on this.
| The ground the person is standing on would be the return path. The Earth is an effectively infinite sink for current, at 0 V.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/733553",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Is the interaction picture in QFT properly used? $\newcommand{\ket}[1]{\left\vert#1\right\rangle}$In Quantum mechanics, we have the interaction picture. When the Hamiltonian is in the form of $H=H_{0}+V$, we can transform the evolution equation into $\frac{d}{dt}\ket{\phi}_{I}=V_{I}\ket{\phi}_{I}$, where $\ket{\phi}_{I}=e^{iH_{0}t}\ket{\phi}$, and $V_{I}=e^{iH_{0}t}Ve^{-iH_{0}t}$.
In QFT, we apply the interaction picture as well. But for most of the text I have seen, take $V=\frac{g}{4}\phi^{4}$ as an example, when calculating the Dyson series or Feynman amplitudes, the interaction Hamiltonian is still $\frac{g}{4}\phi^{4}$, not $e^{iH_{0}t}Ve^{-iH_{0}t}$. Why is that the case?
| This confused me as well. Remember that the $\phi$ in $\frac{g}{4}\phi^4$ that we used in the Interaction picture has time dependence. More precisely, the $\phi$ that we use is the free field solution, i.e. it is the solution to $\frac{d \phi}{dt}=-i[\phi, H_{0}]$, which is the same as the value of $e^{iH_0t}\phi e^{-iH_0t}$ (You can differentiate this to get the commutator equation)
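Spelling the differentiation step out, with $\phi_I(t)\equiv e^{iH_0t}\,\phi\,e^{-iH_0t}$:
$$\frac{d\phi_I}{dt}=\frac{d}{dt}\left(e^{iH_0t}\,\phi\,e^{-iH_0t}\right)=e^{iH_0t}\,i\left(H_0\phi-\phi H_0\right)e^{-iH_0t}=-i\,[\phi_I,H_0],$$
which is just the free Heisenberg equation of motion, so the field appearing in the interaction-picture $\frac{g}{4}\phi^4$ is the free field, with its usual expansion in creation and annihilation operators.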
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/733702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Can we just take the underlying set of the spacetime manifold as $\mathbb{R}^4$ for all practical purposes? In mathematical GR and also in some informal GR presentations (e.g. MTW), manifolds are always mentioned before talking about GR... but now I am starting to wonder if that is even actually necessary.
In this answer, it is said that it doesn't really matter what topological manifold we use to model a situation in space time because all of them are homeomorphic to some subset of $R^4$ by definition of manifold and it's apparently impossible to actually check the topology at a global level due to the censorship theorem.
All of this tells me that, other than getting physicists and mathematicians to use similar terminology, the manifold formalism in its full generality is probably not relevant to GR except at the highest levels of study in very specialized research (beyond grad school, for instance). Is this conclusion correct, or am I missing something?
| Topological censorship is a theorem from the 1993 paper "Topological censorship" by Friedman, Schleich and Witt. It is a technical statement about certain manifolds (!), and it does not say that "it's apparently impossible to actually check the topology at a global level" as the question claims.
The paper explicitly says on the implications of its theorem:
Thus general relativity prevents one from actively probing the
topology of spacetime. However, note that one can passively observe that topology by detecting light that originates at a past singularity.
What follows in the paper is further discussion of what restrictions, if any, there are on such passive observation.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/733848",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How do we prove that the 4-acceleration transforms as a 4-vector in Special Relativity? In order to define the acceleration of a body in its own frame, we need to first prove that the acceleration is a four-vector so that its dot product with itself can then be labeled as acceleration squared in the rest frame. For velocity and displacement vectors, we can show that they have a constant dot product. But how do we prove that for acceleration?
| Is it not so by definition?
$$
{\bf a}= \frac {d{\bf v}}{d\tau}
$$
where
$$
{\bf v}= \frac{d{\bf x}}{d \tau}
$$
is a 4-vector and $\tau$ is a scalar.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/734014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Experimentally Measuring the Velocity of Water coming out of an Orifice I plan on doing an investigation into Torricelli's Law, where I will be looking at one of the following:
*
*How the cross-sectional area of an orifice affects the velocity of water coming out of it (constant height).
*How the height of an orifice affects the velocity of water coming out of it (constant orifice area).
However, I was unsure about how to accurately measure the velocity of water coming out of the orifice. Videos on YouTube only suggest one method, which is using the horizontal and vertical displacements of the water stream to calculate velocity. However, when I've done this experimentally I've found an error of about $15\%$ compared to expected values. The process is also not very exact per se, i.e. it is hard to judge the exact marking of a ruler that the stream lands on.
Therefore, I was wondering if there were any accurate means to measure the velocity of water coming out of an orifice, using equipment typically found in a school laboratory.
| Place a measurement grid by the stream and start filming. As clear water flows, add food color to the water. Measure the movement of the front of the colored water versus the grid. Preferably, use clear pipes to allow measurement of the water velocity before it leaves the orifice.
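If you do stay with the projectile method from the question, the underlying kinematics is simple enough to script; a minimal sketch (the head of water, orifice height and measured range below are assumed example values, not data):

```python
import math

g = 9.81          # m/s^2
h_head = 0.30     # m, assumed height of the water surface above the orifice
y_fall = 0.20     # m, assumed height of the orifice above the bench
x_range = 0.42    # m, assumed measured horizontal landing distance

# Torricelli's prediction for the efflux speed
v_torricelli = math.sqrt(2 * g * h_head)

# Speed inferred from projectile motion: fall time t = sqrt(2 y / g), v = x / t
t_fall = math.sqrt(2 * y_fall / g)
v_measured = x_range / t_fall

print(f"Torricelli: {v_torricelli:.2f} m/s, projectile estimate: {v_measured:.2f} m/s")
print(f"discrepancy: {100 * abs(v_measured - v_torricelli) / v_torricelli:.1f} %")
```

Scripting it this way also makes it easy to propagate the reading uncertainties in $x$ and $y$ and see how much of the ~15% error they can account for.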
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/734137",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
} |
Why does a sensitive thermometer absorb little heat? In an experiment to measure the specific heat capacity of water I'm trying to make it as accurate as possible. And somewhere I read that a sensitive thermometer absorbs little heat. By "sensitive" I am referring to the amount of change in thermometric property for a unit change in temperature.
| A thermometer that absorbs a lot of heat will change the temperature of what it is measuring and then measure the wrong temperature.
To be sensitive, any sensor must disturb its environment in a very predictable way. For a thermometer that is easiest to achieve with one that minimizes heat absorption.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/734632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Why does using images that are not really formed work in ray optics? It's all in the title. For instance, if I have two lenses, I have been taught to first find the position of the image formed by the first lens, and then use that image to find the final image formed by the 2nd lens, even if the first image is formed beyond the 2nd lens. Why does this work?
edit:- image for reference
| I addressed this before but will elaborate further. Refer to the diagram here.
Suppose there is an object R on the axis a distance r to the left of a lens with focal length f and r<f. When the rays leave the lens they diverge as if coming from a point P a distance p to the left of the lens. So P is a virtual image. We have
$$\frac{1}{r} + \frac{1}{p} = \frac{1}{f}$$
$$\frac{1}{p} = \frac{r-f}{rf}$$
where r>0 and p<0.
Rays are reversible so consider rays from the right heading toward P i.e. P is now a virtual object. They have to converge at R. Your question is basically can we use the same lens equation for this case. Let's see if
$$\frac{1}{p} = \frac{r-f}{rf}$$
works. Well, f is the same, the absolute values of p and r are the same. In this case r > 0 as it's now a real image and still r<f. So we will end up with the right magnitude for p but it will be negative.
So we conclude that we can use a virtual object if the image distance is negative.
Edit: fixed equation for 1/p. Conclusion still holds.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/734707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Local $SU(2)$ symmetry breaking and unitary gauge In a $SU(2)$ gauge field theory with scalar field $\phi$ in the fundamental representation of the $SU(2)$ group with lagrangian $$\mathcal{L} = -\frac{1}{2}TrF_{\mu\nu}F^{\mu\nu} + (D_{\mu}\phi)^\dagger(D^{\mu}\phi) + \mu^2\phi^\dagger\phi - \frac{1}{2}\lambda(\phi^\dagger\phi)^2,$$ we can pick a vev $\phi_{0} = \frac{1}{\sqrt{2}}(0 \;\; v)^T$, with $v^2 = \frac{2\mu^2}{\lambda}$, and break the symmetry doing $\phi = \varphi + \phi_0$.
Then, the new lagrangian has a couple mixing terms $ig\partial_{\mu}\varphi^{\dagger}A^{\mu}_{a}t^a\phi_0$ + h.c., which can be set to zero by choosing the unitary gauge. This implies that the field $\varphi$ satisfies the conditions $\varphi^{\dagger}t^a\phi_0 = 0$, where $t^a$ are the generators of $SU(2)$ picked so that $t_a = \frac{1}{2}\sigma_a$, where $\sigma_a$ are the Pauli matrices.
The unitary gauge imposes constraints on the field $\varphi$. However, if we consider $\varphi^{\dagger}t^1\phi_0 = 0$ and $\varphi^{\dagger}t^3\phi_0 = 0$ and take $\varphi^{\dagger} = (\varphi^*_1 \;\; \varphi^*_2)$, I get that
$$
\begin{pmatrix} \varphi^*_1 & \varphi^*_2 \end{pmatrix}
\frac{1}{2}\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}
\frac{1}{\sqrt2}
\begin{pmatrix} 0 \\ v \end{pmatrix} = \frac{1}{2\sqrt2}\varphi^*_1v = 0
\Rightarrow \varphi^*_1 = 0.
$$
Similarly,
$$
\begin{pmatrix} \varphi^*_1 & \varphi^*_2 \end{pmatrix}
\frac{1}{2}\begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix}
\frac{1}{\sqrt2}
\begin{pmatrix} 0 \\ v \end{pmatrix} = -\frac{1}{2\sqrt2}\varphi^*_2v = 0
\Rightarrow \varphi^*_2 = 0.
$$
I'm confused because this would imply that
$\varphi = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$.
| Your results are right. That is exactly what it should be here. Quoted from Weinberg
$$0=\sum_{nm}\tilde{\phi}_m (t_\alpha)_{mn} v_n\qquad\qquad(21.1.2)$$
Eq. (21.1.2) shows that there are no Goldstone boson fields in unitarity
gauge. Since the theory is gauge-invariant this means that there are no
physical Goldstone bosons, whatever gauge we choose.
(21.1.2) is what you had enforced to let couple mixing term vanish.
When you choose the unitary gauge, you absorb those degrees of freedom into the longitudinal modes of the gauge bosons. Notice that in the unbroken phase you have three gauge bosons, which are massless vector fields, each with only two transverse polarizations. When spontaneous symmetry breaking happens they obtain mass, and the longitudinal polarization mode is no longer unphysical, so the number of physical degrees of freedom stays the same.
As a summary: in the unitary gauge, the 3 Goldstone bosons vanish but 3 longitudinal polarization modes of the gauge bosons appear. The total number of physical degrees of freedom is unchanged.
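A quick bookkeeping check of that statement (using the fact that a fundamental doublet vev breaks all three $SU(2)$ generators, so there are 3 would-be Goldstone modes): counting real degrees of freedom before and after symmetry breaking,
$$\underbrace{3\times 2}_{\text{massless }A^a_\mu}+\underbrace{4}_{\text{complex doublet }\phi}=10=\underbrace{3\times 3}_{\text{massive }A^a_\mu}+\underbrace{1}_{\text{physical Higgs}}.$$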
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/734990",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Definition of momentum We say that momentum is the measure of how a body is moving or the quantity of movement inside a body
But what does this definition really mean?
These terms are very vague.
$p=mv$: why does the movement inside the body depend on its mass?
| (In classical mechanics), the definition of momentum is $\vec{p}=m\vec{v}$.
The reason this is a good definition is because it is useful. In particular, the momentum of a collection of particles that are not in an external potential is conserved. Conserved quantities make it possible to understand aspects of the behavior of a system without solving complicated equations.
My advice would be not to get stuck on any philosophical musings on "why this definition." You can make any definition you want; the reason definitions stick around and make it into textbooks is because they are useful and help us solve problems.
The above reasoning is good enough reason to justify the definition of momentum -- we make a classical mechanics definition, and we see a benefit of using that definition to solve classical mechanics problems. But in fact, momentum is in some sense even better than it needs to be. In particular, when we generalize classical mechanics by (a) introducing relativity, and (b) moving to quantum mechanics, we find that many concepts like force or velocity do not have nice translations into those more general frameworks, but momentum does.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/735142",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
} |
Optical theorem Peskin and Schroeder I'm trying to understand the optical theorem of Peskin and Schroeder
$$\tag{7.50} \text{Im} M(k_1,k_2\rightarrow k_1,k_2)=2E_{cm}p_{cm}\sigma_{tot}(k_1,k_2\rightarrow\text{anything})$$
which Peskin and Schroeder says follows from
$$\tag{7.49} -i[M(a\rightarrow b)-M^\ast(b\rightarrow a)]=\sum_f\int d\Pi_f M^\ast(b\rightarrow f)M(a\rightarrow f)$$
and $$\tag{4.79} d\sigma=\frac{1}{2E_A2E_B|v_A-v_B|}\times\prod_f\frac{d^3 p_f}{(2\pi)^3}\frac{1}{2E_f}\times |M(p_A,p_B\rightarrow\{p_f\})|^2(2\pi)^4\delta^{(4)}(p_A+p_B-\sum p_f)$$
and $$\tag{4.80} \int d\Pi_n=\prod_f\frac{d^3 p_f}{(2\pi)^3}\frac{1}{2E_f}\times(2\pi)^4\delta^{(4)}(P-\sum p_f)$$
But how did we integrate (4.79) to get (7.50)?
| From your comments it looks like you're mainly confused about the $E_1 E_2 | v_1 - v_2 |$ prefactor, so I'll try to be very explicit about that part. Start from Eq. (7.49) and take the initial and final states to be the same (that is, $b = a$). Then we get
$$
2 {\rm \:Im \:} \mathcal{M}(a\rightarrow a) = \sum_f \int d \Pi_f | \mathcal{M}(a \rightarrow f) |^2 .
$$
Now the formula for the cross section (Peskin's Eq. (4.79)) relates the right hand side of the above equation to a cross section:
$$
d\sigma(a \rightarrow f) = \frac{1}{4 E_1 E_2 |v_1 - v_2|} d \Pi_f |\mathcal{M}(a \rightarrow f) |^2 .
$$
Combining these two equations we get
$$
{\rm \:Im \:} \mathcal{M}(a\rightarrow a) = 2 E_1 E_2 |v_1 - v_2| \times \sum_f \sigma(a \rightarrow f) = 2 E_1 E_2 |v_1 - v_2| \times \sigma(a \rightarrow {\rm anything}) .
$$
Now it is just a matter of working out some kinematics. Working in the CM frame, we can express the momenta of the two incoming particles as $p_1^\mu = (E_1, \vec{p})$ and $p_2^\mu = (E_2, -\vec{p})$. Their velocities are $v_1 = |\vec{p}|/E_1$ and $v_2 = -|\vec{p}|/E_2$ (note the relative sign because they point in opposite directions).
Then we can calculate
$$
E_1 E_2 |v_1 - v_2| = E_1 E_2 \left( \frac{|\vec{p}|}{E_1} + \frac{|\vec{p}|}{E_2} \right) \\ = |\vec{p}| (E_1 + E_2)
\\ = p_{cm} E_{cm} .
$$
Note the definitions which Peskin gives immediately below Eq. (7.50): $p_{cm} = |\vec{p}|$ is the magnitude of either momentum in the CM frame, while $E_{cm} = E_1 + E_2$ is the total energy in the CM frame.
Anyway, plugging this equation into the previous one gives the desired result,
$$
{\rm \:Im \:} \mathcal{M}(a\rightarrow a) = 2 E_1 E_2 |v_1 - v_2| \times \sum_f \sigma(a \rightarrow f) = 2 E_{cm} p_{cm} \times \sigma(a \rightarrow {\rm anything}) .
$$
By the way, you can write the cross section formula in a manifestly Lorentz-invariant way (see e.g. Griffith's elementary particles text):
$$
d\sigma(a \rightarrow f) = \frac{1}{4 \sqrt{(p_1 \cdot p_2)^2 - m_1^2 m_2^2}} d \Pi_f |\mathcal{M}(a \rightarrow f) |^2 .
$$
You can also calculate the prefactor using this expression. Instead of $E_1 E_2 |v_1 - v_2|$, we have (still working in the CM frame with $p_1^\mu = (E_1, \vec{p})$ and $p_2^\mu = (E_2, -\vec{p})$):
$$
\left[ (p_1 \cdot p_2)^2 - m_1^2 m_2^2 \right]^{1/2} = \left[ (E_1 E_2 + |\vec{p}|^2 )^2 - (E_1^2 - |\vec{p}|^2)(E_2^2 - |\vec{p}|^2) \right]^{1/2}
\\ = \left[ |\vec{p}|^2 (E_1 + E_2)^2 \right]^{1/2}
\\ = E_{cm} p_{cm} .
$$
As expected we get the same result in the end.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/735269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How to describe the physics process of scintillation? I want to find some references on describing the physics of scintillation. As we know, the light generated by a scintillator comes from atomic excitation and de-excitation, and each material has a spectrum whose intensity varies with wavelength, as shown in the figure below. How can we calculate the number of photons generated by scintillator materials and their distribution? Can we get the light emission spectrum in the figure below from theory?
| To calculate scintillation yield of materials is impractical. It's determined experimentally.
The power of mathematics in physics blinds many to its severe weaknesses. While it may be insightful, even very simple problems are often very difficult to compute. Of course, textbooks avoid these problems, thus perpetuating the illusion.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/735790",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Relation between velocity and mobility of electrons and holes I have been studying band theory and semiconductors in condensed matter physics and I am confused about the relation
between mobility and velocity of electrons and holes in semiconductors.
My standard text book reference, Introduction to Solid State Physics, by Charles Kittel, says this:
i.e., the velocities of electrons and holes are the same in a semiconductor.
However, I was also reading about the dependence of the Hall coefficient on temperature and found this:
Now I can't understand how the mobilities of electrons and holes are different if their velocities are the same. What am I missing here?
Also, intuitively why should the mobilities be different for electrons and holes? Does it depend on doping too? Holes are just the gaps left behind by electrons and can practically be regarded as positive versions of electrons. Is it due to the mass factor coming into play due to electrons having some mass but holes being massless? Even then, holes should be more mobile than the electrons, right?
| The expression
$$
\mathbf{v}(\mathbf{p}) = \nabla\epsilon(\mathbf{p})
$$
is the velocity of an electron with momentum $\mathbf{p}$. This velocity can be calculated for an electron anywhere in the band.
On the other hand, the velocity associated with mobility is the drift velocity,
$$v_d = \mu E,$$
which describes the velocity of carriers in a stationary current, which is obtained by solving the kinetic equation (or some equivalent equation) taking into account the accelerating electric field and the dissipation. The electrons participating in transport are usually the ones close to the Fermi surface. E.g., one could solve a Drude-like equation
$$
\frac{d\mathbf{p}}{dt}=-e\mathbf{E} -\frac{\mathbf{p}}{\tau}
$$
and obtain the drift velocity or momentum.
Another important thing to keep in mind is that, while the hole velocity is the same as that of the missing electron, when discussing Hall effect we are talking not about missing electrons, but about electrons in the conduction band, and holes in the valence band, which have different dispersion relations and hence different velocity, different effective mass, and different mobility.
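To make the drift-velocity picture concrete: setting $d\mathbf{p}/dt=0$ in the Drude-like equation gives $\mathbf{p}=-e\tau\mathbf{E}$, hence a drift speed $v_d=e\tau E/m^*$ and a mobility $\mu=e\tau/m^*$. A minimal numerical sketch; the effective masses and scattering times below are purely illustrative assumptions, not data for any particular semiconductor:

```python
e = 1.602e-19    # C
m0 = 9.109e-31   # kg, free electron mass

# Assumed illustrative values: electrons and holes in a given material generally
# have different effective masses (and possibly different scattering times).
cases = {
    "electrons": {"m_eff": 0.3 * m0, "tau": 2.0e-13},
    "holes":     {"m_eff": 0.5 * m0, "tau": 1.5e-13},
}

for name, p in cases.items():
    mu = e * p["tau"] / p["m_eff"]            # mobility in m^2 / (V s)
    print(f"{name}: mu = {mu * 1e4:.0f} cm^2/(V s)")
```

Different band curvatures mean different $m^*$, which is one simple way to see why the two mobilities need not coincide even though $\mathbf{v}(\mathbf{p})$ is defined by the same formula in both bands.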
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/736250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
How do I catch someone falling from a short height without hurting or bruising them I am the backspot in the stunts for cheer and I keep catching our flyer but I am not absorbing her fall so she has bruises underneath her arms. What is the physics of catching her without hurting her?
| Cheer coaches can give much better practical advice than we can, but from a physics perspective the goal is to have a uniform deceleration over a long time and distance instead of an abrupt one. Once you receive them on the way down you want to slow them as smoothly as possible over the longest possible distance, spreading the force over the largest catch area. In cheer terms, the goal is "Catching high and absorbing well".
Bruises form under a wide variety of conditions, but a plausible rough criterion is that a bruise may form if the peak contact pressure is more than about a MPa and the total transferred energy per unit area is more than about $100\,\textrm{kJ/m}^2$.
If you throw a flyer up a distance $h$, when they come back down they will have a velocity $v=\sqrt{2gh}$. The constant deceleration $a$ needed to slow them down to a stop in a catching distance $s$ is
$$a=\frac{v^2}{2s}=\frac{h}{s}g$$ where $g\approx10\,\textrm{m/s}^2$ is the acceleration due to gravity.
If you are catching the flyer under their arms, then the contact area might be about $300\,\textrm{cm}^2$. If a $50$ kg flyer is thrown up $3$ m, then their energy when they hit your arms is about 1500 J, so the energy transfer is only about $50\,\textrm{kJ/m}^2$.
If you slow them down smoothly over a $30$ cm vertical distance, then they are decelerating at about $10$ g $\approx 100\,\textrm{m/s}^2$, which requires a force ($F=ma$) of $5000$ N, and the pressure over the catch contact area is only about $0.16$ MPa. So a single smooth catch onto wide arms would not be expected to cause bruises.
If, however, you are throwing them higher (e.g. $4$ m), mostly slowing them down over a shorter distance e.g. $15$ cm, and most of the force is over a smaller area, e.g. $100\,\textrm{cm}^2$, then the energy transfer and pressure (e.g.$100\,\textrm{kJ/m}^2$, $1.3$ MPa) could certainly be high enough to cause a bruise.
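A small sketch of the arithmetic above, so the effect of throw height, catch distance and contact area is easy to vary; the inputs are the example values from the text, and the outputs are upper bounds in the sense that all of the load is assumed to pass through the stated contact area:

```python
import math

g = 10.0  # m/s^2, rounded as in the text

def catch(m, h, s, area_cm2):
    """Peak pressure (MPa) and transferred energy per area (kJ/m^2) for a
    uniform deceleration over catch distance s (m) and contact area (cm^2)."""
    v = math.sqrt(2 * g * h)      # speed at the moment of the catch
    a = v**2 / (2 * s)            # required constant deceleration
    force = m * a                 # N
    area = area_cm2 * 1e-4        # m^2
    energy = 0.5 * m * v**2       # J absorbed while stopping
    return force / area / 1e6, energy / area / 1e3

print("smooth, wide catch :", catch(m=50, h=3, s=0.30, area_cm2=300))
print("abrupt, narrow one :", catch(m=50, h=4, s=0.15, area_cm2=100))
```

The first case stays well below the rough bruising thresholds quoted above, while the second exceeds them.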
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/736792",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Magnetic dipole Hamiltonian from current-current interaction In the Coulomb gauge, we can write the electromagnetic Hamiltonian as
\begin{equation}
\label{eq:em-hamiltonian}\tag{1}
H_\mathrm{EM} = - \int d^3 x \, \mathbf{j}(\mathbf{x}) \cdot \mathbf{A}(\mathbf{x})
+ \frac{1}{2} \int d^3 x \, d^3 x' \, \frac{\rho(\mathbf{x}) \rho(\mathbf{x}')}{4\pi \vert \mathbf{x} - \mathbf{x}' \vert},
\end{equation}
where $\mathbf{j}(\mathbf{x})$ is a current density, $\mathbf{A}(\mathbf{x})$ is the vector potential, and $\rho(\mathbf{x})$ is a charge density.
For a magnetic dipole in a uniform magnetic field, we can write the interaction Hamiltonian as
$$
\label{eq:dipole-hamiltonian}\tag{2}
H_\mathrm{dipole} = - \mathbf{\mu}\cdot\mathbf{B}
$$
where we can express the dipole moment $\mathbf{\mu}$ in terms of the current density via
$$
\mathbf{\mu} = \frac{1}{2} \int_{\mathbb{R}^3} d^3 x \, \mathbf{x} \times \mathbf{j}(\vec{x}).
$$
How can we recover \eqref{eq:dipole-hamiltonian} from \eqref{eq:em-hamiltonian}? We know that $\mathbf{B}(\mathbf{x}) = \mathbf{\nabla} \times \mathbf{A}(\mathbf{x})$, and can assume that $\mathbf{j}(\mathbf{x})$ is localized in space.
| Start with the (particular choice) of $\mathbf{A}(\mathbf{x})$ for a uniform field,
$$
\mathbf{A}(\mathbf{x}) = -\frac{1}{2} \mathbf{x} \times \mathbf{B},
$$
and set $\rho(\mathbf{x}) = 0$. Then $H_\mathrm{EM}$ reduces to
$$
H_\mathrm{EM} = \frac{1}{2} \int d^3 x \, \mathbf{j}(\mathbf{x}) \cdot \big( \mathbf{x} \times \mathbf{B} \big).
$$
Using the scalar triple-product identity $\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = - (\mathbf{b} \times \mathbf{a}) \cdot \mathbf{c}$, we can rewrite $H_\mathrm{EM}$ as
$$
H_\mathrm{EM} = -\frac{1}{2} \int d^3 x \, \big( \mathbf{x} \times \mathbf{j}(\mathbf{x}) \big) \cdot \mathbf{B}.
$$
Because $\mathbf{B}$ is independent of $\mathbf{x}$, we can pull it out of the integration, writing
$$
H_\mathrm{EM} = -\left(\frac{1}{2} \int d^3 x \, \mathbf{x} \times \mathbf{j}(\mathbf{x}) \right) \cdot \mathbf{B}.
$$
We then recognize the term inside the brackets as $\mathbf{\mu}$, and recover
$$
H_\mathrm{EM} = - \mathbf{\mu} \cdot \mathbf{B} = H_\mathrm{dipole}.
$$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/737018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Number of free parameters in $SU(5)$ GUT model Lately, I have been studying the potential of scalar fields in this theory. In general, what is the point of this GUT if more free parameters have been added there?
The standard Higgs potential in the Standard Model, with only 2 free parameters (Higgs mass and self-coupling), is
$$V(φ) = \frac{1}{2}m^2φ^2+\frac{λ}{4!}φ^4.$$
For the $SU(5)$ GUT, we have (ignoring possible odd terms),
$$V(Φ,φ) = \frac{1}{2}m_{Φ}^2\operatorname{tr}(Φ^2)+\frac{a}{4!}\operatorname{tr}(Φ^2)^2+\frac{b}{4!}\operatorname{tr}(Φ^4)\\ +\frac{1}{2}m_{φ}^2φ^2+\frac{λ}{4!}φ^4+αφ^\dagger φ\operatorname{tr}(Φ^2)+βφ^\daggerΦ^2 φ,$$
so, in total, 7 free parameters.
I mean, it is clear with the mass and self-coupling of both scalar fields, but why do we need 3 more terms? Are they important?
| There is no fixed significance to free dimensionless parameters in a theory examined; e.g., m, as used in the SM, is not quite a mass, etc.
The GUT action you wrote is the most general renormalizable SU(5)-invariant potential given the fields involved, a 24 and a 5, and the discrete symmetries excluding the odd terms.
See here.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/737294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How can we say that work done by a Carnot engine in a cycle equals the net heat released into it, even when it is operated between 2 bodies and not 2 reservoirs? When a Carnot engine is operated between 2 reservoirs, then after each cycle it returns to its initial state, so the change in internal energy is zero, and so the work done by it equals the net heat released into it. But suppose it is operated between 2 finite bodies: when the higher-temperature body releases heat into the Carnot engine and the engine releases heat into the lower-temperature body, the temperatures of the bodies will change (unlike the reservoirs). So how can the work done by the Carnot engine still equal the net heat released into it, as given in example 13.6 of the book 'Concepts in Thermal Physics'?
| I think the question is asking how—if the Carnot engine's original state was at $T_\mathrm{high}$, corresponding to the initial temperature of a high-temperature finite body—can the engine return to this original state after a cycle that removes heat from that body, bringing it to $T_\mathrm{high}-\delta T$. Is this correct?
If you wish, in the idealization of the Carnot engine (already quite far from reality, as this engine completely lacks friction and operates infinitely slowly, for example), you could also assume that its thermal mass is negligible. Thus, returning to the original state doesn't place any requirements on the actual engine's physical temperature.
In addition, you could dynamically update the (decreasing) Carnot efficiency as the high-temperature finite body cools down and the low-temperature finite body heats up with continued cycling during engine operation.
In all cases, the engine isn't a store or sink of energy or mass, so the net heat input must equal the work output. There's just no other possible mode of energy transfer.
But yes, the use of finite bodies instead of idealized infinitely large reservoirs places another burden on the idealization, since there's no way the engine will ever get to $T_\mathrm{high}$ again.
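For instance, in the usual textbook idealization one can take two identical finite bodies of constant heat capacity $C$ at initial temperatures $T_1 > T_2$ and run the engine reversibly until they reach a common temperature $T_f$ (a sketch assuming constant $C$ and negligible thermal mass for the engine). Reversibility fixes $T_f$, and because the engine stores nothing, the total work is simply the heat extracted minus the heat dumped:
$$\Delta S = C\ln\frac{T_f}{T_1}+C\ln\frac{T_f}{T_2}=0 \;\;\Rightarrow\;\; T_f=\sqrt{T_1T_2},$$
$$W = Q_\mathrm{in}-Q_\mathrm{out} = C(T_1-T_f)-C(T_f-T_2)=C\left(T_1+T_2-2\sqrt{T_1T_2}\right).$$
This is the kind of result the example cited in the question works out: the bodies never return to their initial temperatures, but the engine does return to its own initial state after every (infinitesimal) cycle, so $W = Q_\mathrm{in}-Q_\mathrm{out}$ holds over the whole process.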
Does this get at what you're asking about?
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/737427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Why does high frequency have high energy? The electromagnetic spectrum's wavelengths all travel at the same speed, $c$. Also, the wavelength $\lambda$ and frequency $\nu$ are related by $c = \lambda \cdot \nu$. Since all moving particles here would have the same speed, why would higher frequencies have more energy?
| Massless and massive particles (like photons and electrons respectively) have different dispersion relations, i.e., the relations between the particle momentum and its energy, $\epsilon(p)$. Thus, for electrons we have
$$
\epsilon(\mathbf{p})=\frac{\mathbf{p}^2}{2m}
$$
whereas for photons
$$
\epsilon(\mathbf{p})=c|\mathbf{p}|
$$
The velocity is then defined as the derivative of the dispersion relation with respect to the momentum (in E&M terms this derivative is the group velocity, as distinct from the phase velocity):
$$
\mathbf{v}(\mathbf{p})=\nabla_{\mathbf{p}}\,\epsilon(\mathbf{p})
$$
We thus obtain $\mathbf{v}=\mathbf{p}/m$ for electrons and $\mathbf{v}=c\frac{\mathbf{p}}{|\mathbf{p}|}$ for photons (with magnitude $c$.)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/737563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 8,
"answer_id": 1
} |
Magnetic field modeling with noises I am trying to make a 3d grid of a magnetic field with some noises (which will be added to the ordinary field) for a computer simulation. I have the formula for the ordinary field, also I am using a fast Fourier transform (FFT) to create Gaussian Random Field for noises.
The problem is that the noise field I have created is a scalar field, not a vector field. So I need to find a way of creating a vector-valued Gaussian random field whose divergence is equal to zero.
| You can use the fact that the divergence-free constraint, $\nabla\cdot\mathbf{B}=0$, becomes
$$
\mathbf{k}\cdot\tilde{\mathbf{B}} = 0
$$
in Fourier space, where $\tilde{\mathbf{B}}$ denotes the Fourier transform of $\mathbf{B}$ and $\mathbf{k}$ is the wavevector.
To get a vector-valued field, you could first generate a random field for each of the three components (e.g. $\tilde{\mathbf{B}} =(\tilde{B}_x,\tilde{B}_y,\tilde{B}_z)$ in Fourier space). Once you have these, you can subtract off the projection of $\tilde{\mathbf{B}}$ along $\mathbf{k}$ to satisfy $\mathbf{k}\cdot\tilde{\mathbf{B}} = 0$:
$$
\tilde{\mathbf{B}} \to \tilde{\mathbf{B}} - \frac{\mathbf{k}}{k^2}(\tilde{\mathbf{B}}\cdot\mathbf{k}).
$$
Taking the inverse Fourier transform of this new $\tilde{\mathbf{B}}$ will then give you a divergence-free noise field.
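A minimal numpy sketch of this recipe; the grid size, power spectrum and random seed are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                    # grid points per axis
k1d = 2 * np.pi * np.fft.fftfreq(n)
kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
k_vec = np.array([kx, ky, kz])
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0                         # avoid division by zero at k = 0

# Three independent Gaussian random fields in Fourier space, shaped by an
# arbitrary illustrative amplitude spectrum
amp = k2 ** -0.5
amp[0, 0, 0] = 0.0
Bk = np.array([amp * (rng.normal(size=k2.shape) + 1j * rng.normal(size=k2.shape))
               for _ in range(3)])

# Project out the component along k:  B(k) -> B(k) - k (k . B(k)) / k^2
k_dot_B = np.sum(k_vec * Bk, axis=0)
Bk = Bk - k_vec * k_dot_B / k2

# Real-space field; taking the real part is equivalent to Hermitian-symmetrizing
# the coefficients, and the symmetrized field is still divergence-free
B = np.real(np.fft.ifftn(Bk, axes=(1, 2, 3)))
print("residual max |k . B(k)|:", np.abs(np.sum(k_vec * Bk, axis=0)).max())
```

The resulting $\mathbf{B}$ can then be added to the ordinary field as the noise term.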
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/737663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How can both of these equations for pressure be correct? Consider the Gibbs equation:
$$du=Tds-pdv$$
Identifying partial derivatives, one obtains:
$$-p=\left( \frac{\partial u}{\partial v} \right)_T$$
But you can also show that:
$$p=T\left( \frac{\partial s}{\partial v}\right)_T -\left( \frac{\partial u}{\partial v} \right)_T $$
In fact for an ideal gas, the latter partial derivative is $0$ and therefore it is the first term the one that determines its pressure. But how come both of these equations are true, at the same time?
| Yes, both are true. Let us consider this equation first, $$p=T\left( \frac{\partial S}{\partial V}\right)_T -\left( \frac{\partial U}{\partial V} \right)_T $$ For an ideal gas at constant temperature $\left(\frac{\partial U}{\partial V}\right)_T$ is not zero in general, but since it is derived at constant Gibbs energy it becomes zero.
Now, $\ T\frac{\partial S}{\partial V}_T=-\frac{\partial U}{\partial V}$. That is why both are correct. Although first term in it is, $\frac{-Nk}{V}=\frac{\partial S}{\partial V}$.
Reason: From first law,$$pdV=TdS-dU$$$$p=T\left(\frac{\partial S}{\partial V}\right)_T-\left(\frac{\partial U}{\partial V}\right)_T$$$$\text{Also,}\ \left (\frac{\partial U}{\partial V}\right)_T=kT\left(\frac{\partial N}{\partial V}\right)_T$$Now if, $dG=0=\mu dN$, then $\frac{\partial N}{\partial V}=0$
You can check it also, https://physics.stackexchange.com/a/736889/344834
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/737759",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Peeling theorem for a generic field We know that in asymptotically simple space-times, if the generators of conformal boundary $\mathscr{I}^{\pm}$ satisfies the asymptotic Einstein's condition, then any purely outgoing (incoming) field can be written as polynomial in $1/r$ (Refer to section 9.7 of "Spinors and space-time. Volume-2" by R.Penrose and W.Rindler. A brief derivation for Peeling effect in asymptotically flat space-times can be found in an earlier post)
Is there a peeling theorem for a generic propagating field (which contains both outgoing and incoming field contributions)? How should one modify the line of reasoning presented in the above reference to account for this generic scenario?
|
Is there a peeling theorem for a generic propagating field <…>?
“Peeling” implies that spacetime is smooth near null infinity (in the sense that it possesses conformal compactification with smooth boundary), but suitably generic spacetimes are not, they develop logarithmic singularities around null infinities. So literature calls such phenomena “obstructions to peeling”, “failure of peeling” etc.
For such generic spacetimes the expansions of NP scalars around null infinities are polyhomogeneous, that is, written in powers of both $1/r$ and $\log r$ (with generic term being $r^{-i}\log^{j}r$). Overall, the detailed and rigorous treatments of failure of peeling along null infinities and its connection with behavior of solutions along spacelike and timelike infinities are still missing, but the research area is quite active.
Relatively recent review of the peeling problem could be found here:
*
*Friedrich, H. (2018). Peeling or not peeling — is that the question? Classical and Quantum Gravity, 35(8), 083001, doi:10.1088/1361-6382/aaafdb, arXiv:1709.07709.
A monograph about modern state of the art for conformal methods (that also discusses peeling and its failures):
*
*Kroon, Juan A. Valiente. Conformal methods in general relativity. Cambridge university press, 2017, Open Access pdf.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/737873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why can the coriolis force potential be written as $ E_\text{cor}=m\dot{\theta}\begin{vmatrix} X & Y \\ \dot{X} & \dot{Y} \end{vmatrix}$? I found the following formula for the Coriolis force written here:
$$ E_\text{cor}=m\dot{\theta}\begin{vmatrix}
X & Y \\
\dot{X} & \dot{Y}
\end{vmatrix}=m\dot{\theta}\ (\dot{Y}X-\dot{X}Y)$$
I have the following questions:
*
*Does this matrix have a certain name? Is it related to Jacobian or Hessian?
$\begin{vmatrix}
X & Y \\
\dot{X} & \dot{Y}
\end{vmatrix}$
*Where does it come from?
*Also this seems like a commutator relation: $(\dot{Y}X-\dot{X}Y)$, is that true?
*Can a similar matrix be used for the centrifugal force?
| the potential energy for Coriolis force is:
$$U=-m\,\vec v\cdot (\vec \Omega\times \vec r)$$
with
$$\vec r=\begin{bmatrix}
{x} \\
{y} \\
{z} \\
\end{bmatrix}\quad,
\vec v=\begin{bmatrix}
\dot{x} \\
\dot{y} \\
\dot{z} \\
\end{bmatrix}\quad,
\vec \Omega=\dot\theta
\begin{bmatrix}
0 \\
0 \\
1 \\
\end{bmatrix}\quad\Rightarrow$$
$$U=m\,\dot\theta\,(y\,\dot x-x\,\dot y)=m\,\dot\theta\,\det(\mathbf A)\\
\mathbf A=\begin{bmatrix}
y & x \\
\dot{y} & \dot{x} \\
\end{bmatrix}
$$
It has nothing to do with a determinant in general; it works out that way here only because of the special case
$\vec\Omega=\dot\theta\,\vec e_z$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/738156",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Is it better to average each reading before applying a formula, or apply the formula to each set of readings and then average? If $f$ is some function of independent variables $a,b,\ldots,z$ and the readings of each of them have some inherent (random and systematic) error, would it be better to average the readings of each variable and then apply $f$, or apply $f$ to each set of readings and then average $f$? I.e. which of $$1)\ \ \ f(\bar{a},\bar{b},...\bar{z})$$
$$2)\ \ \overline{f(a,b,...z)}$$ would minimise the error in $f$ in general?
| Ask yourself: What would I like to know? Usually, we are most interested in the end result $y=f(x)$ and not in the intermediate result $x$. Therefore, we are interested in the distribution/average value/standard deviation etc. of the end result. However, $f(\bar x)$ is not the average value the end result, and $f(\bar x \pm 2\cdot\sigma_x)$ is not the 95% confidence interval of the end result. Thus, usually it makes sense to first calculate the end result for each data point $y_i = f(x_i)$, and then to apply the statistics of interests, e.g. the average or the standard deviation.
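A quick numerical illustration that the two orderings really do differ once $f$ is nonlinear, which is why you first have to decide which statistic of the end result you actually want; the function $f(a)=a^2$, the noise level and the number of readings are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
a_true, sigma = 2.0, 0.2
readings = a_true + rng.normal(0.0, sigma, size=100_000)   # noisy readings of a

def f(a):                 # some deliberately nonlinear end-result formula
    return a**2

option1 = f(readings.mean())      # 1) apply f to the averaged reading
option2 = f(readings).mean()      # 2) average the end result over all readings

print(f"f(a_true)  = {f(a_true):.4f}")
print(f"f(mean(a)) = {option1:.4f}")
print(f"mean(f(a)) = {option2:.4f}   (shifted by roughly sigma^2 = {sigma**2:.3f})")
```

For a linear $f$ the two coincide, so the question only bites when the formula is appreciably nonlinear over the spread of the readings.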
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/738404",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why does a small puddle of water evaporate faster at the edges than the center? I have read that ceiling tile stains and coffee ring stains are darker on the edges than the center because the puddles evaporate fastest at the point of contact between the surface, air, and water and water that is evaporated leaves behind its sediments. My question is: why does water evaporate faster at this boundary than in the center or any other part of the water puddle?
| A liquid puddle is not evenly shaped like a rectangular block, but rather like an irregular ellipsoid with a bulge in the center. It's impossible to discern this bulge with the naked eye in water; however, it is very visible in mercury:
This bulge is created in the first place because of surface tension. The liquid tries to have the least surface area possible to minimize its surface energy, and ideally the least surface area is achieved by a sphere. Liquids like water don't have enough surface tension to hold themselves into droplets the way mercury does, but the puddle still tries, and forms a very eccentric (squashed-down) ellipsoid.
Because of the bulge, proportionally more water molecules are exposed to air at the thin edges than under the bulge, which facilitates evaporation there.
I hope it helps.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/738566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Interesting relationship between the 2D Harmonic Oscillator and Pauli Spin matrices I have an isotropic 2D Harmonic Oscillator in cartesian coordinates
\begin{equation}
H = \frac{p_x^2}{2m} + \frac{p_y^2}{2m} + \frac{1}{2} m\omega^2 (x^2 + y^2)
\end{equation}
In terms of the usual creation and annihilation operators for the $x$ and $y$ modes, $a_x$ and $a_y$, this can be written as
\begin{equation}
H = \hbar\omega(a_x^\dagger a_x + a_y^\dagger a_y + 1)
\end{equation}
Now we can 'construct' three operators that commute with the Hamiltonian, apart from the (trivial) number operators $N_x$ and $N_y$:
\begin{align}
V_x &= a_x^\dagger a_y + a_y^\dagger a_x\\
V_y &= i(a_y^\dagger a_x - a_x^\dagger a_y)\\
V_z &= a_x^\dagger a_x - a_y^\dagger a_y
\end{align}
Now these operators can been proved to satisfy the commutation relations of angular momentum. In fact,
\begin{equation}
V_i = a^\dagger \sigma_i a
\end{equation}
Where $a = \begin{pmatrix} a_x & a_y \end{pmatrix}$ and $\sigma_i$ are the Pauli matrices. This is really surprising to me as I don't see why and how this must be true. Can someone shed some light on this? I think for $2s+1$ dimensions, we can 'construct' such operators using the matrix representations of spin $s$. However, I haven't seen a proof of this.
| The Hamiltonian in question has cylindrical symmetry, and can be transformed to cylindrical coordinates (i.e. $x,y\longrightarrow r, \phi$). The wave function then decomposes into a radial and an angular part, and the eigenstates obtained are called Fock-Darwin states. Interestingly, the solution is simple and also works with a magnetic field, which affects only the angular momentum. You can google to find more information, e.g., here.
Perhaps another way to see it is in terms of Schwinger bosons (google again), which is a representation of angular momentum in terms of two bosons, i.e., two oscillators.
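One can also check the claimed angular-momentum algebra of the $V_i$ numerically with truncated oscillator matrices; since all three $V_i$ conserve $n_x+n_y$, the commutators are exact on any fixed-total-number subspace that fits inside the truncation. A minimal sketch (the cutoff is an arbitrary choice), verifying $[V_x,V_y]=2iV_z$:

```python
import numpy as np

d = 8                                        # Fock cutoff per mode (0..d-1 quanta)
a = np.diag(np.sqrt(np.arange(1, d)), k=1)   # single-mode annihilation operator
I = np.eye(d)

ax = np.kron(a, I)     # a_x acts on the first mode
ay = np.kron(I, a)     # a_y acts on the second mode

Vx = ax.conj().T @ ay + ay.conj().T @ ax
Vy = 1j * (ay.conj().T @ ax - ax.conj().T @ ay)
Vz = ax.conj().T @ ax - ay.conj().T @ ay

# Restrict to states with n_x + n_y <= d - 1, where the truncation has no
# edge effects (the V_i never push an occupation past the cutoff there).
nx, ny = np.meshgrid(np.arange(d), np.arange(d), indexing="ij")
P = np.diag(((nx + ny).ravel() <= d - 1).astype(float))

comm = Vx @ Vy - Vy @ Vx - 2j * Vz
print("max |[Vx, Vy] - 2i Vz| on the safe subspace:", np.abs(P @ comm @ P).max())
```

(The factor of 2 simply reflects $V_i = a^\dagger\sigma_i a = 2J_i$ in the Schwinger construction $J_i=\tfrac12 a^\dagger\sigma_i a$.)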
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/739040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
What is the state of an entangled photon after its twin is absorbed? Let's two photons are entangled in polarization after a laser beam passes through a Betha Barium Borate crystal. They take different paths and one of them (1) is absorbed in a black sheet. What is the state of the leftover photon (2)? Is it in superposition of polarization h/v or it must flip spontaneously in a certain polarization? What if the black sheet atoms absorb photons only with a certain polarization (say h)? Will the absorbed photon (1) take h polarization in the process of absorption and hence the second twin flip to v?
| What do the experiments say? With a quantum measurement, the measured state depends on what measurement is performed. If you assume that the photons individually have states before measurement, you get Bell's Inequality, and the experiments falsify this. It thus doesn't make sense to ask what the state of the photon is before measurement: all you can predict is the correlation between measurements.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/739483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 7,
"answer_id": 5
} |
Conservative Force: Translational Invariance I have a question about the following.
Why if there are two masses, $m_1$ and $m_2$ respectively, and the only force acting on them is from their mutual interaction which is conservative and central, the following is true?
$U(\vec{r_1},\vec{r_2})=U(\vec{r_1}-\vec{r_2})$
(It says this is from Translational invariance.)
Also, it says since it is central, meaning it is along the line joining them, why is it
$U(\vec{r_1}-\vec{r_2})=U(\lvert\vec{r_1}-\vec{r_2}\rvert)$?
| Here is a slightly artificial but mathematical proof. I will look at scalars, not vectors, to make the derivation slightly easier. But it should be easily extendable to vectors. Define
\begin{align}
r&=r_2-r_1&\iff &&r_1&=\tfrac 12(R-r)\\
R&=r_2+r_1&&&r_2&=\tfrac 12(R+r)\tag{1}
\end{align}
These quantities carry the same information as $(r_1,r_2)$. If you know $(r_1, r_2)$ you can uniquely determine $(r,R)$ and vice versa. So without loss of generality we can define $U$ in terms of $r,R$
$$U=U(r_1(r,R),r_2(r,R))=U(\tfrac 12(R-r),\tfrac 1 2(R+r))\tag 2$$
Translation invariance is defined by
$$U(r_1+a,r_2+a)=U(r_1,r_2)\quad \text{for all }a.\tag 3$$
Let us now show that $U$ does not depend on $R$ by calculating $\frac{\partial U}{\partial R}$ and seeing that it is zero.
\begin{align}
\frac{\partial U}{\partial R}&=\lim_{h\rightarrow 0}\frac{U\rvert_{R+h}-U\rvert_R}{h}\\
&=\lim_{\color{red}h\rightarrow 0}\frac{U(\tfrac 12(R+\color{red}h-r),\tfrac 1 2(R+\color{red}h+r))-U(\tfrac 12(R-r),\tfrac 1 2(R+r))}{h}\\
&=\lim_{\color{red}h\rightarrow 0}\frac{U(\tfrac 12(R-r),\tfrac 1 2(R+r))-U(\tfrac 12(R-r),\tfrac 1 2(R+r))}{h}&\text{(trans. inv.)}\\
&=0
\end{align}
A more direct proof would be to write $U$ in these new coordinates: $U(r,R)$. If we translate by $a$ we get $r\rightarrow r$ and $R\rightarrow R+2a$. So we should have
$$U(r,R+2a)=U(r,R)\quad\forall a$$
Where $\forall$ means 'for all'. If we focus on the $R$ dependence we can also read this as $f(x+a)=f(x)\ \forall a$. From this we can conclude that $f$ is a constant and likewise $U$ is constant if we vary only $R$.
This last proof we can easily reuse for rotational invariance by writing the potential as $U(r,\theta,\phi)$ and noting that rotational invariance means that
\begin{align}U(r,\theta+\Delta \theta,\phi)&=U(r,\theta,\phi)\ \forall \Delta\theta\\
U(r,\theta,\phi+\Delta \phi)&=U(r,\theta,\phi)\ \forall \Delta\phi
\end{align}
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/739662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
A particle on a ring has quantised energy levels - or does it? The free particle on a ring is a toy example in QM, with the wavefunction satisfying $-\frac {\hbar^2}{2mr^2}\psi_{\phi\phi}=E\psi$ and the cyclic BCs $\psi(\phi+2\pi)=\psi(\phi)$. This problem is easily solved to give $\psi_m=e^{\pm im\phi}$ for $m=0,1,2,\dots$
However, it occured to me that the BCs are too restrictive for the usual quantum mechanical axioms. As only the probability amplitude, $|\psi|^2$, is directly observable, we should strictly only require that the modulus stays the same after a $2\pi$ rotation - i.e., that $\psi(\phi+2\pi)=e^{i\alpha}\psi(\phi)$. This obviously gives rise to a multivalued function in general, but the probability amplitude is still well defined.
However, any function $e^{ik\phi}$ is now a solution to the equation, where $k$ can be any real number, so we do not get quantisation.
Does this discussion relate to the rotation group and spin? And what is the justification for imposing the restrictive cyclic BCs, even though we strictly shouldn't? I believe that it is somehow resolved by relativistic QM.
| $|\psi|^2$ is most certainly not observable: observables are the eigenvalues of Hermitian operators. For a state $\psi$ to be physical, the expectation value of any observable in this state needs to be physical, i.e. for instance $\langle\psi|\hat{H}|\psi\rangle<\infty$. The problem is that for the proposed $\psi$ (which is discontinuous), the energy would have to be infinite.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/739892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 6,
"answer_id": 5
} |
Paradox regarding Young-Laplace equation Recently I've been working with the Young-Laplace equation and came across the following physical paradox that I couldn't wrap my head around. It goes like this:
Imagine a spherical droplet (filled with water) in air. By the Young-Laplace equation, we know that the pressure in the spherical droplet is larger than the pressure in the air:
$$P_{\rm droplet}-P_{\rm air}=\frac{2\gamma}{R}$$
Where $\gamma$ is surface tension and R is the radius of the droplet.
Now, let's take a stationary fluid element that encompasses both sides of the droplet's surface. It is clear that the pressures on the two sides of the fluid element are not equal, and by Newton's third law, all internal forces within the fluid element cancel out.
In this case, what keeps the fluid element stationary?
| Pressure is uniform all around the droplet. Remember that the traction (the stress vector $\mathbf{t_n}$ acting on a surface element) is a vector quantity.
Equilibrium of an elementary part of the interface.
The equilibrium of the elementary surface of the interface reads,
$\mathbf{t_n} dS = \Delta P \ \mathbf{\hat{r}} \ dS + d \boldsymbol{\Gamma}$,
being $\Delta P \ \mathbf{\hat{r}}$ the contributions of the pressure jump across the interface, and $d \boldsymbol{\Gamma}$ the elementary contribution of the surface tension.
Equilibrium of the bubble.
Now, let's consider the equilibrium of the whole interface: the first contribution is the contribution of external actions, while the contribution of the surface tension is an internal contribution that cancels out (by action/reaction principle) when integrating over the whole system, to compute the resultant of the forces acting on the system.
If the bubble has a spherical shape with radius $R$, $\Delta P = \frac{2 \gamma}{R}$ is constant by Young-Laplace equation, and the normal vector points in the radial direction $\mathbf{\hat{n}} = \mathbf{\hat{r}}$.
Integration of this contribution to on the closed surface of the interface to get the resultant on it, gives us
$\mathbf{F}^{ext} = \displaystyle \oint_{\partial V}\mathbf{t_n} dS = \displaystyle \Delta P \oint_{\partial V} \ \mathbf{\hat{n}} \ dS = \mathbf{0} $,
where the last step comes from the gradient theorem (there is the divergence theorem, curl theorem, and also the always forgotten gradient theorem, poor gradient!),
$ \displaystyle \oint_{\partial V} f \mathbf{\hat{n}} \ dS = \displaystyle \int_{ V} \nabla f \ dV $,
with $f(\mathbf{r}) = 1$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/740047",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why is this form of writing the six antisymmetric gamma matrices correct? I encountered the following expression in Ashok Das' QFT Lectures:
$$\sigma_{\mu \nu } =\frac{i}{2}[\gamma ^\mu,\gamma^\nu]=i(\eta ^{\mu \nu}-\gamma^\nu \gamma^\mu)=-i(\eta ^{\mu \nu}-\gamma^\mu \gamma^\nu).$$
I understand why the expression with $+i$ is correct, but I have no idea how we go from that expression to the one with $-i$. I tried something like changing the indexes and using the fact that the metric is symmetric, but that doesn't help me with the negative sign.
| Well, we know that the gamma matrices obey the Clifford algebra $$\{\gamma^\mu,\gamma^\nu\}=\gamma^\mu\gamma^\nu+\gamma^\nu \gamma^\mu=2\eta^{\mu\nu}$$
Because of that we have
$$i(\eta^{\mu\nu}-\gamma^\nu\gamma^\mu)=i\left[\eta^{\mu\nu}-(2\eta^{\mu\nu}-\gamma^\mu\gamma^\nu)\right]=i(-\eta^{\mu\nu}+\gamma^\mu\gamma^\nu)=-i(\eta^{\mu\nu}-\gamma^\mu\gamma^\nu)$$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/740291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is energy "equal" to the curvature of spacetime? When you are solving the Einstein field equations (EFE), you basically have to input a stress–energy tensor and solve for the metric.
$$
R_{\mu \nu} - \frac{1}{2}R g_{\mu \nu} = 8 \pi T_{\mu \nu}
$$
For a vacuum solution we have:
$$
T_{\mu \nu} = 0
$$
This yields:
$$
R_{\mu \nu} = 0
$$
This means that the local curvature of an inertial frame of reference is zero.
But, setting the stress–energy tensor to zero, could be given in multiple situations: In flat spacetime, around a non-rotating black hole, around a planet, etc.
When I read about these equations in popular-science books, they portray the Einstein field equations as:
$$
\text {Space-Time Curvature} = \text{Energy}
$$
But with this interpretation, by setting $T_{\mu \nu} = 0$, you are saying that the energy is zero, hence no curvature, but you are able to get more solutions than the Minkowski metric (which is the only solution with truly no energy and with no curvature).
Are these books' interpretations wrong, or is there something I'm not getting about the true meaning of the equation? How would you distinguish, while solving the EFE, a truly flat spacetime from a locally flat spacetime?
| To provide a more plain answer to your question:
No, energy and the curvature of spacetime are two different things. Energy is a physical quantity that is associated with the motion or arrangement of matter and radiation, while the curvature of spacetime is a property of space and time that is determined by the distribution of matter and energy within it. While the two concepts are related in some ways, they are not the same thing. For example, the curvature of spacetime can be affected by the presence of energy, but energy itself is not equivalent to the curvature of spacetime.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/740401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 5,
"answer_id": 3
} |
Question about uncertainty principle and attempts at simultaneous measurement of position and momentum Uncertainty principle for position and momentum: $$
\Delta x \Delta p \ge \frac{h}{4\pi}$$
So suppose we have a particle... and we have 2 different measuring devices. The first measuring device measures position. The second measuring device measures momentum. The two devices act simultaneously on the particle. What will happen? Will we get a definite value on both measuring devices?
I'm not asking about the practical impossibility of simultaneous measurements. I'm asking, what does the QM formalism say will happen in this situation when these 2 measuring devices act simultaneously on the particle. Or is such a situation impossible for some theoretical reason? If so, what is that reason?
Thanks.
EDIT: I don't think I'm communicating what I want to ask properly. Let's suppose I'm a scientist pre-QM. I want to construct an experimental setup that simultaneously measures position and momentum. Meaning in a single instant I get position and momentum measurement with arbitrary precision. Is such a setup possible in classical physics? What would the same setup actually do when taking QM into account?
| The question you are asking simply does not make any sense in Quantum Mechanics. Quantum Mechanics says that, upon a position measurement, the particle becomes a position eigenstate. And upon a momentum measurement, the particle becomes a momentum eigenstate.
There is no state that is simultaneously both position and a momentum eigenstate. So there is no state that the particle can take after the measurement that you're proposing. Quantum Mechanics says that particles must always be described by some state in the Hilbert space.
Hence, this question cannot be answered in Quantum Mechanics. If Quantum Mechanics is right (which we have no evidence against so far), then this question is meaningless because such a measurement does not exist.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/740531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Why do channels arise from "failing to record measurement outcomes"? In Preskill's notes, the need for quantum channels began with the following situation: System A starts out in a pure state and interacts with system B, therefore forming a joint state of system AB. We then imagine measuring system B (the pointer) but fail to record the measurement outcomes, therefore, forcing us to take the final state being the sum of all post-measurement states weighed by their probability outcome.
My question is, what do we 'mean' by failing to record measurement outcomes? Is this a human error wherein we forget to record a measurement outcome, or is it something entirely different?
(Preskill's Lecture notes 3, page 11)
http://theory.caltech.edu/~preskill/ph219/chap3_15.pdf
| Quantum mechanical systems interact with the environment. If they do so, we can think of the environment as effectively measuring the system -- formally, this is completely equivalent. However, the information is "hidden" in the environment, and we don't have access to it (and it is typically very hard to access it, as the information is "smeared out" over the $\sim 10^{23}$ degrees of freedom of the environment). Thus, we can think of this as "measuring and ignoring the outcome", i.e., averaging over all possible outcomes (with the respective probabilities).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/740892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Given a magnetic field how to find its vector potential? Is there an "inverse" curl operator? For a certain (divergenceless) $\vec{B}$ find $\vec{A} $ such that $\vec{B}= \nabla \times \vec{A} $.
Is there a general procedure to "invert" $\vec{B}= \nabla \times \vec{A} $? An inverse curl?
(I was thinking of taking the curl of the previous equation:
$$ \nabla \times \vec{B}= \nabla \times (\nabla \times \vec{A}). $$
Then using the curl-of-curl identity $ \nabla \times (\nabla \times \vec{V}) = \nabla (\nabla \cdot \vec{V}) - \nabla^2 \vec{V}$, but that does not quite simplify things... I was hoping to get some sort of Laplace equation for $\vec{A}$ involving terms of $\vec{B}$.)
| No, there is no inverse curl operator. In that respect it is just like the ordinary derivative: $f(x)$ cannot be uniquely determined by integrating $f'(x)$. In general the relation $\vec{B} = \nabla \times \vec{A}$ defines a set of differential equations which do not uniquely determine $\vec{A}$: for example, you can add any curl-free vector field to $\vec{A}$ without changing $\vec{B}$. Boundary conditions can reduce this redundancy.
Usually a "gauge" is chosen for $\vec{A}$ which reduces the redundancy in $\vec{A}$. But not all gauges uniquely determine $\vec{A}$, either: Take for example one of the most common gauges, the Coulomb Gauge, whose condition is:
$$\nabla \cdot \vec{A} = 0$$
Notice that you can still add a constant vector to $\vec{A}$ and it will fulfill both its divergence and curl conditions.
Edit: I think the answer by Nullius in Verba is great. Once you have your boundary conditions and/or gauge, this can be used to find the corresponding vector potential. Note that also in many physics problems you can just guess the solutions to these differential equations - not as intimidating as it sounds.
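As a concrete sanity check of the above (my own sketch, not part of the original answer; the uniform-field example is chosen only because it is easy to verify by hand): for $\vec{B} = B_0 \hat{z}$, the standard Coulomb-gauge choice $\vec{A} = \tfrac{1}{2}\vec{B}\times\vec{r}$ reproduces $\vec{B}$ and is divergence-free.

```python
# Sketch: verify that A = (1/2) B x r gives back a uniform B = B0 zhat
# and satisfies the Coulomb gauge condition div A = 0.
from sympy import symbols, Rational
from sympy.vector import CoordSys3D, curl, divergence

N = CoordSys3D('N')
B0 = symbols('B0')

# B x r with B = B0*zhat and r = x*xhat + y*yhat + z*zhat equals B0*(-y, x, 0)
A = Rational(1, 2) * B0 * (-N.y * N.i + N.x * N.j)

print(curl(A))        # B0*N.k  -> the uniform field we started from
print(divergence(A))  # 0       -> Coulomb gauge condition satisfied
```

Adding any constant vector to this $\vec{A}$ leaves both printed results unchanged, which is exactly the residual redundancy mentioned above.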
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/741557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
How to go from Lagrange equations to d'Alembert's principle? All sources I know show how to use d'Alembert's principle and/or Hamilton's principle to derive the Lagrange equations. It is also common to use d'Alembert's principle to derive Hamilton's principle (see Lanczos, "The Variational Principles of Mechanics", p. 112). But what about the opposite direction? If we only have the Lagrange equations, how can we derive d'Alembert's principle?
| On one hand Lagrange equations
$$\frac{d}{dt} \frac{\partial T}{\partial \dot{q}^j} -\frac{\partial T}{\partial q^j}~=~Q_j,\qquad j~\in~\{1, \ldots, n\}, \tag{LE}$$
make sense in pretty much any setting, while on the other hand d'Alembert's principle
$$ \sum_{i=1}^N \left(\dot{\bf p}_i-{\bf F}_i\right)\cdot \delta {\bf r}_i ~=~0\tag{DAP} $$
basically only makes sense within the context of Newtonian point mechanics$^1$. It also seems that information about the point particle positions $${\bf r}_i(q^1,\ldots, q^n,t), \qquad i\in\{1, \ldots, N\},$$ in terms of the generalized coordinates $(q^1,\ldots, q^n)$ and time $t$ is needed.
In that case the equivalence of DAP and LE now follows from the following key identity
$$ \sum_{i=1}^N \left(\dot{\bf p}_i-{\bf F}_i\right)\cdot \delta {\bf r}_i
~=~ \sum_{j=1}^n \left(\frac{d}{dt} \frac{\partial T}{\partial \dot{q}^j} -\frac{\partial T}{\partial q^j}-Q_j\right) \delta q^j, $$
where the kinetic energy is
$$ T~:=~\sum_{i=1}^N\frac{m_i}{2}v^2_i,$$
and the generalized forces are
$$ Q_j~:=~\sum_{i=1}^N{\bf F}_i\cdot \frac{\partial {\bf r}_i}{\partial q^j},\qquad j~\in~\{1, \ldots, n\}, $$
cf. e.g. my related Phys.SE answer here.
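For completeness, the chain-rule ingredients behind this identity (standard, but not spelled out above) are
$$ \delta {\bf r}_i ~=~ \sum_{j=1}^n \frac{\partial {\bf r}_i}{\partial q^j}\,\delta q^j, \qquad \frac{\partial \dot{\bf r}_i}{\partial \dot{q}^j} ~=~ \frac{\partial {\bf r}_i}{\partial q^j}, \qquad \frac{d}{dt}\frac{\partial {\bf r}_i}{\partial q^j} ~=~ \frac{\partial \dot{\bf r}_i}{\partial q^j}, $$
where the virtual displacements are taken at frozen time $\delta t = 0$; inserting these into $\sum_i \dot{\bf p}_i\cdot\delta{\bf r}_i$ and using $\dot{\bf p}_i = m_i\ddot{\bf r}_i$ produces the $T$-derivative terms on the right-hand side.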
References:
* H. Goldstein, Classical Mechanics, Chapter 1.
--
$^1$ A relativistic extension is possible.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/741701",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Impact of distance from the galactic centre on the energy at which the knee in the cosmic-ray spectrum is observed? This question is based on the recommendation and great explanation by @Kyle_Kanos. Is it known what causes the "knee" in the observed Cosmic Ray spectrum?
Accepting that the knee, at a few PeV in the high-energy cosmic-ray spectrum, arises because the dominant source population shifts from galactic to extragalactic once the Larmor radius becomes comparable to the scale height perpendicular to the galactic plane: does that mean observers farther from the centre would measure a lower knee energy than observers closer to the centre, because they are more exposed to lower-energy extragalactic sources?
| The position of the knee should shift with your location in the galaxy, yes, but not to first order. Since the galaxy is disk-like, the relevant length scale to compare the Larmor radius against is the thickness of the disk, which is very roughly independent of distance from the center, or at least to a sufficient approximation, given that this is only a back-of-the-envelope estimate anyway.
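For reference (added here; this formula is not part of the original answer), the estimate uses the ultrarelativistic Larmor radius
$$ r_L = \frac{p_\perp}{ZeB} \approx \frac{E}{ZeBc}, $$
so the crossover condition $r_L \sim h$, with $h$ the disk scale height, ties the corresponding energy to the local field strength $B$ and to $h$; only to the extent that these two quantities vary with galactocentric distance would the knee position shift with the observer's location.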
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/741965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
What would a standing wave of light look like? I want to know what a standing wave of light would look like and what interesting properties it might have.
| This answer is about what a technical diagram of such a wave looks like, since with the naked eye you won't be able to see a standing wave of light, for the simple reason that you can only see the part of the wave travelling in your direction.
If the two wave components of the standing wave have the same wavelength, like the red and blue ones, the standing wave looks like the green one:
If we are moving with v=c/2 to the right relative to the setup so that the wavelengths get shifted and are no longer of the same frequency, the scene would be
so the zero points of the standing green wave would sit at the Lorentz-contracted distance from each other, as they should, and keep the correct positions, but the rest of the wave besides the zeros would not look as uniform as it does when the observer is at rest relative to the two wave emitters.
Due to the relativity of simultaneity, the envelope of the standing waveform (dashed) would be drawn in a wavier way, as in the second image, where the depicted setup (for example the inside of the laser-pointer cavity from here) is moving to the left relative to the observer at c/2.
The colors are of course just for separation and do not represent the color of the light beams.
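For reference, the algebra behind the standing (green) curve, with equal amplitudes assumed for simplicity (my addition, not part of the original answer):
$$ E(x,t) = E_0\sin(kx-\omega t) + E_0\sin(kx+\omega t) = 2E_0\sin(kx)\cos(\omega t), $$
i.e. a fixed spatial profile $\sin(kx)$ whose amplitude oscillates in time, with nodes spaced $\lambda/2$ apart. In the boosted frame the two counter-propagating components are Doppler-shifted to different frequencies, which is why the pattern no longer factorizes so cleanly.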
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/742157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 6,
"answer_id": 3
} |