Why does $H_2$ have $C_V=7/2\,R$ at high temperatures, while the total number of degrees of freedom is 6? The two hydrogen atoms have 6 degrees of freedom in total. Of them, $3$ contribute to translation, $2$ contribute to rotation and $1$ contributes to vibration.
I know that the vibrational motion is frozen out at low temperature due to quantum mechanical effects.
However, then the $C_V$ at high temperature should be $6/2 R$, while experimentally, it is $7/2 R$ (source: Principles of Physics by Walker, Resnick and Halliday)
Edit: The answers reveal that the missing part of the specific heat is due to the potential energy of vibration. So I am extending the question for clarification. $CO_2$ has 9 degrees of freedom in total, of which 3 are translational, 2 are rotational and 4 are vibrational. So, at high temperature, will the $C_V$ of $CO_2$ be $\frac{R}{2} \times [3+2+4+4]$? The two 4s are due to the kinetic and potential energy of the vibrational motion.
| The elastic potential energy of the bond is another degree of freedom which contributes to the heat capacity.
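A quick way to keep the counting straight: each quadratic term in the energy contributes $R/2$ to $C_V$, and each vibrational mode carries two such terms (kinetic plus potential). A minimal sketch of that bookkeeping, assuming full classical excitation at high temperature:

```python
R = 8.314  # gas constant, J/(mol K)

def Cv(trans, rot, vib_modes):
    # Each translational/rotational dof contributes R/2; each vibrational MODE
    # contributes 2*(R/2): kinetic plus potential energy of the oscillation.
    return (trans + rot + 2*vib_modes)*R/2

assert abs(Cv(3, 2, 1) - 7*R/2) < 1e-9     # H2 at high temperature
assert abs(Cv(3, 2, 4) - 13*R/2) < 1e-9    # CO2, counting its 4 vibrational modes
```

For $CO_2$ this gives exactly the $\frac{R}{2}[3+2+4+4]$ conjectured in the edit.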
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/422535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Arithmetic of Hamiltonian in canonical transformation I have the following Hamiltonian:
$$ \mathcal{H} = \frac{p^2}{2m} + V(q-X(t)) + \dot{X}(t)p, $$
and I make the usual canonical transformation for the momentum:
$$ p \rightarrow p' = p + m\dot{X},$$
and complete the square, which should give the following:
$$ \mathcal{H}' = \frac{p'^2}{2m} + V(q - X(t)) - \color{red}{m\ddot{X}(t)q} - \frac{m\dot{X}^2}{2}.$$
I can get most of this expression apart from the one in $\color{red}{red}$.
This has to come from the cross term $\frac{\hat{p}\, m\dot{X}+m\dot{X}\,\hat{p}}{2m}$, but I can't get the $q$ to come out.
Any pointers?
| The Hamiltonian transforms according to the following rule:
$H^\prime = H - \partial_t f$, where $f=f(q,t)$ (1).
We can find this function by using that:
$p = p^\prime-\partial_qf=p^\prime-m \dot{X}$.
So we see that $f=m \dot{X}q +c,\ c \in \mathbb{R}$. (2)
Plugging equation (2) into equation (1) gives:
$H^\prime=H-\partial_t(m\dot{X}q+c)=H-m\ddot{X}q$, since $\partial_t$ here is a partial derivative taken at fixed $q$: the generating function is $f(q,t)$, so $q$ is held constant when differentiating with respect to $t$, and no assumption like $\dot{q}=0$ is needed. Here $H=(p^\prime)^2/2m+V(q-X(t))-m\dot{X}^2/2$ is the part you already derived by completing the square, so $H^\prime$ matches the expression you were asked to show, including the red term.
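The cancellation can be checked numerically. Below is a sketch assuming a concrete driving $X(t)=\sin t$ (any smooth $X$ works); $V(q-X)$ is omitted because the substitution $p \to p' - m\dot{X}$ does not touch it:

```python
import math

def check(t, q, pp, m=1.3):
    """Residual of H' = H - d_t f versus the target expression; should be ~0."""
    X, Xd, Xdd = math.sin(t), math.cos(t), -math.sin(t)   # assumed X(t) = sin t
    p = pp - m*Xd                       # inverse of p -> p' = p + m*Xdot
    H = p**2/(2*m) + Xd*p               # V(q - X) dropped: unchanged by the map
    Hp = H - m*Xdd*q                    # H' = H - d_t f, f = m*Xdot*q, q fixed
    target = pp**2/(2*m) - m*Xdd*q - m*Xd**2/2
    return abs(Hp - target)

assert max(check(t, q, pp) for t in (0.3, 1.1)
           for q in (-2.0, 0.7) for pp in (0.5, 3.0)) < 1e-9
```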
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/422671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Electric field of an infinite sheet of charge I am trying to derive the formula for $E$ due to an infinite sheet of charge with a charge density of $\rho\ \mathrm{C/m^2}$. I assumed the sheet is on the $yz$-plane. I used Coulomb's law to get an equation and integrated that expression over the $yz$-plane. But I have not succeeded in deriving the correct expression; the answer I am getting is $0$.
Below is a picture of my work. Kindly have a look and let me know where I made mistakes.
In fact, $E$ due to a charged sheet is constant, and the correct expression is
$\vec{E} = \frac{\rho}{2\epsilon_0}\,\hat{a}_N$, where $\hat{a}_N$ is the unit vector normal to the sheet.
| Method 1 (Gauss’ law):
Just simply use Gauss’ law:
$$\int_{\partial V} \vec{E} \cdot \vec{da} = \frac{Q}{\epsilon_0}.$$
A pillbox using Griffiths’ language is useful to calculate $\vec{E}$. The pillbox has some area $A$. And due to symmetry we expect the electric field to be perpendicular to the infinite sheet. Imagine putting a test charge above it, in which way does it move? Right, perpendicular to the sheet. Using $Q=\rho A$ for the charge enclosed in the pillbox we get:
$$ \rho A = \epsilon_0 \int_{\partial V} |\vec{E}| |\vec{da}| = \epsilon_0 \int_{\partial V} E da = \epsilon_0 E \int_{\partial V} da = \epsilon_0 (2AE), $$
since we expect $E$ to be constant for fixed distance for the infinite sheet. Note that the sides of the pillbox do not contribute to the integral since $\vec{E} \cdot \vec{da} = 0$ in that case.
All together we find that $E=\frac{\rho}{2 \epsilon_0}$, and the direction is the one we already thought of: some unit vector $\hat{n}$ orthogonal to the infinite sheet:
$$ \vec{E} = \frac{\rho}{2 \epsilon_0} \hat{n} .$$
Method 2: (Coulomb/direct calculation)
Another method goes as follows:
$$E=E_x= k \int \frac{x}{(r^2 + x^2)^{3/2}} r\, dr\, d\theta = 2\pi k \int_0^\infty \frac{xr}{(r^2 + x^2)^{3/2}}\, dr = 2\pi kx \left[ -(r^2 + x^2)^{-1/2}\right]_0^{\infty} = 2\pi k x \cdot \frac{1}{x}= 2\pi k.$$ Here I defined $$k= \frac{\rho}{4 \pi \epsilon_0},$$ so we indeed get $E=\frac{\rho}{2 \epsilon_0}$.
Errors in your calculation:
- the $y$ in the numerator should be an $x$;
- a term is missing in the denominator, namely $z^2$; as written, you are computing the field of an infinite line while integrating over a surface.
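As a numerical sanity check of Method 2, one can integrate the on-axis field of a finite disk of radius $R$ and watch it approach $\rho/2\epsilon_0$ independently of the distance $x$ as $R$ grows. The charge density below is an arbitrary assumed value:

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
rho  = 1e-6               # assumed surface charge density, C/m^2 (illustrative)

def E_disk(x, R, n=1_000_000):
    """On-axis field of a uniformly charged disk of radius R, via Method 2's integrand."""
    k = rho/(4*math.pi*eps0)
    dr = R/n
    s = 0.0
    for i in range(n):
        r = (i + 0.5)*dr                      # midpoint rule
        s += x*r/(r*r + x*x)**1.5 * dr
    return 2*math.pi*k*s

E_sheet = rho/(2*eps0)
# As R grows, the disk result approaches the infinite-sheet value for any x
for x in (0.5, 2.0):
    assert abs(E_disk(x, 1e4)/E_sheet - 1) < 1e-3
```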
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/422953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Significance of Compton scattering being second order? In the Compton scattering equation for changes in wavelength, for small angles the equation is second order in the angle. Is there any significance to this?
To me it seems to say that, if a photon takes a smooth path with no abrupt turns, then it shouldn't have much of a change in its wavelength.
Edit: I guess my big question is: why is this change second order and not first?
| The spin-averaged squared matrix element is
\begin{equation*}
\langle|\mathcal{M}|^2\rangle=
2e^4\left(
\frac{\omega}{\omega'}+\frac{\omega'}{\omega}
+\left(\frac{m}{\omega}-\frac{m}{\omega'}+1\right)^2-1
\right)
\end{equation*}
The Compton formula is
\begin{equation*}
\frac{1}{\omega'}-\frac{1}{\omega}=\frac{1-\cos\theta}{m}
\end{equation*}
It follows that
\begin{equation*}
\cos\theta=\frac{m}{\omega}-\frac{m}{\omega'}+1
\end{equation*}
Then by substitution
\begin{equation*}
\langle|\mathcal{M}|^2\rangle=
2e^4\left(
\frac{\omega}{\omega'}+\frac{\omega'}{\omega}+\cos^2\theta-1
\right)
\end{equation*}
The differential cross section for Compton scattering is
\begin{equation*}
\frac{d\sigma}{d\Omega}
=
\frac{\alpha^2}{2m^2}
\left(\frac{\omega'}{\omega}\right)^2
\left(
\frac{\omega}{\omega'}+\frac{\omega'}{\omega}+\cos^2\theta-1
\right)
\end{equation*}
A complete derivation of $\langle|\mathcal{M}|^2\rangle$ for Compton scattering is here:
http://www.eigenmath.org/compton-scattering.pdf
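The second-order behaviour the question asks about can be read off the Compton formula directly: in wavelength form $\Delta\lambda = \frac{h}{mc}(1-\cos\theta)$, and $1-\cos\theta = \theta^2/2 + O(\theta^4)$, so there is no first-order term in $\theta$. A quick numerical check:

```python
import math

lam_C = 2.42631023867e-12   # electron Compton wavelength h/(m c), metres

def dlam(theta):
    """Compton wavelength shift for scattering angle theta (radians)."""
    return lam_C*(1 - math.cos(theta))

# For small angles, 1 - cos(theta) ~ theta^2/2: the shift is second order
for th in (1e-2, 1e-3):
    assert abs(dlam(th)/(lam_C*th*th/2) - 1) < 1e-4
```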
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/423182",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Dresselhaus linear and cubic terms I've been trying to understand Dresselhaus effect, described here.
I've been looking up references to find when the cubic term becomes more dominant than the linear term and vice versa.
For example, in this paper (or on arXiv), they give $ H_{so} = (\beta-\alpha)p_y \sigma_x + (\beta + \alpha)p_x \sigma_y$, where $\alpha$ and $\beta$ are the Rashba and Dresselhaus parameters.
But here, on page 1235 (or on arXiv), $H_D = \beta[-p_x\sigma_x + p_y\sigma_y]$.
I understand that the former is the cubic term and the latter is the linear term, and I also somewhat see why the linear term becomes dominant in the second case.
I'd like to know what determines which version of the Dresselhaus effect to use and specifically under what conditions the cubic term becomes dominant.
| This is very late, but neither of these is the cubic term (there is no explicit $p^3$). These are both the result of the $\langle p_z^2\rangle$ term, i.e. the linear term, and the confinement direction is assumed to be the $z$-direction.
For a zincblende structure, in its full glory, the Dresselhaus Hamiltonian is:
$H_D = \frac{\gamma}{\hbar}\Big(p_x(p_y^2-p_z^2)\sigma_x + p_y(p_z^2 - p_x^2)\sigma_y + p_z(p_x^2 - p_y^2)\sigma_z\Big)$.
Assuming $p_z^2 \rightarrow \langle p_z^2\rangle$ and $p_z \rightarrow \langle p_z\rangle=0$, due to confinement in a well:
$H_D = \frac{\gamma}{\hbar}\Big(p_x(p_y^2-\langle p_z^2\rangle)\sigma_x + p_y(\langle p_z^2\rangle - p_x^2)\sigma_y + \langle p_z\rangle(p_x^2 - p_y^2)\sigma_z\Big)$
$H_D = \frac{\gamma}{\hbar}\Big(p_x(p_y^2-\langle p_z^2\rangle)\sigma_x + p_y(\langle p_z^2\rangle - p_x^2)\sigma_y \Big)$.
Finally, defining $\frac{\gamma}{\hbar}\langle p_z^2\rangle = \beta$,
$H_D = \underbrace{\frac{\beta}{\hbar}\Big(p_y \sigma_y - p_x \sigma_x \Big)}_\text{linear} + \underbrace{\frac{\gamma}{\hbar}\Big(p_xp_y^2\sigma_x - p_yp_x^2\sigma_y\Big)}_\text{cubic}$.
This reproduces your second equation's form. Your first has the incorporation of the Rashba effect and has a specific, "privileged orientation," as my advisor has said in one of her papers, such that the linear Dresselhaus looks much like the Rashba effect (this better shows how $\alpha = \beta$ generates pure spin current, as well as some other points of interest).
My understanding of when the cubic or linear terms are more appropriate has to do with the width of the confinement. Essentially, when the $\beta$ definition is used, the expectation of the squared wave-vector is $\langle k_z^2\rangle \approx \left( \pi/W \right)^2$, where $W$ is the confinement width. The cubic term and the bulk parameter $\gamma$ are not $k_z$-dependent and will therefore dominate when $W$ is large. Hence the linear term is usually kept for 2D systems, but is less important in 3D bulk systems.
Here are a couple links that may be useful:
* Understanding the point at which the cubic term takes over: Marinescu (2017).
* Learning about the orientation used in the first form as presented in the question: Pan (2019).
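As a rough numerical illustration of that last point (the $\gamma$ value below is an assumed, GaAs-like number, not taken from the question), one can compare $\beta = \gamma\langle k_z^2\rangle$ for two well widths and locate the in-plane $k$ at which $\gamma k^3$ overtakes $\beta k$:

```python
import math

gamma = 0.027  # bulk Dresselhaus coefficient, eV nm^3 (illustrative value only)

def beta(W):
    """Linear Dresselhaus coefficient for well width W (nm): beta = gamma*<k_z^2>."""
    return gamma*(math.pi/W)**2

def crossover_k(W):
    """In-plane k (1/nm) at which the cubic term gamma*k^3 matches the linear beta*k."""
    return math.pi/W

# Wider wells -> smaller beta -> the cubic term dominates at smaller in-plane k
assert beta(20) < beta(5)
assert crossover_k(20) < crossover_k(5)
```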
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/423323",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Nuclear physics and half-life of a radioactive element The half-life of a certain radioactive element is 5 min. Four nuclei of that element are observed at a certain instant of time. After five minutes:
Statement 1: It can definitely be said that two nuclei will be left undecayed.
Statement 2: After one half-life, i.e. 5 minutes, half of the total nuclei will disintegrate, so only two nuclei will be left undecayed.
(A) Statement 1 is true, Statement 2 is true, and Statement 2 is the correct explanation of Statement 1.
(B) Statement 1 is true, Statement 2 is true, and Statement 2 is NOT the correct explanation of Statement 1.
(C) Statement 1 is true, Statement 2 is false.
(D) Statement 1 is false, Statement 2 is false.
The correct answer to this question is (D). What is the reason?
I approached the problem in this way: after one half-life, exactly half of the undecayed atoms will be left, and this depends only on the initial number of undecayed nuclei. So the correct answer, according to me, must be (A).
| Imagine the following experiment:
I have two buckets; in one bucket there are N balls. Every 5 minutes, I take each ball in turn; I toss a fair coin, and if it comes up "heads" I put the ball in the other bucket. If it comes up "tails", I discard the ball.
How many balls will there be in the second bucket after five minutes? On AVERAGE, there will be N/2 (as for each of the N balls, the probability of being discarded is exactly 50%). In reality, we know from the binomial distribution that there is a chance I have 0, 1, 2, 3 or even 4 balls.
In a radioactive sample, each nucleus can be thought of as one of these balls, and the passage of time is the "tossing of a coin". But instead of tossing a fair coin once per half life, we actually "toss a coin" with a VERY small probability of coming up tails, a great number of times - so that the cumulative probability after one half life is exactly 0.5. This results in the observed number of decays following the Poisson distribution. When the population becomes very large, it will look like "exactly half" decayed - but in reality if there are initially 2N atoms, then after a half life the number left will be $N±\sqrt{N}$. This means the relative error is $\frac{1}{\sqrt{N}}$, and as $N$ becomes very large, that error becomes vanishingly small. But when N=4, that's a big uncertainty...
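The coin-toss picture is easy to simulate. For the question's $N=4$ nuclei with survival probability $1/2$ over one half-life, "exactly two left" happens only with probability $\binom{4}{2}/2^4 = 0.375$, which is why Statement 1 is false:

```python
import random
from math import comb

random.seed(1)
N, p, trials = 4, 0.5, 200_000
# Each trial: toss a fair coin for each of the 4 nuclei; "heads" means it survives
survivors = [sum(random.random() < p for _ in range(N)) for _ in range(trials)]
frac_exactly_2 = survivors.count(2)/trials

exact = comb(4, 2)*0.5**4      # 6/16 = 0.375, far from a certainty
assert abs(frac_exactly_2 - exact) < 0.01
```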
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/423409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
In a vacuum can a cooler body radiate infrared radiation to a warmer body? I mentioned vacuum because I want to discount the effects of conduction or convection. I simply want to know if some of the infrared radiation (IR) goes from the cooler body to the hotter body. How does each body know how much to radiate at any particular time? I assume that it ultimately comes down to temperature difference, but how does the hotter body know what the temperature of the cooler body is, and vice versa? We all know that both bodies will radiate IR at the 4th power of their temperatures, and obviously they will eventually be in equilibrium with each other, each of them then radiating an equal amount to the other.
|
can a cooler body radiate Infrared radiation to a warmer body?
Yes, the cooler body will radiate, according to its temperature, as you've mentioned, and some of this radiation energy could be absorbed by the warmer body.
This will depend on the percentage of the cooler body radiation the warmer body is exposed to and on the ability of the warmer body to absorb this radiation (as opposed to reflecting or transmitting it).
obviously they will be eventually in equilibrium with each other, each
of them then radiating an equal amount to each other.
This could be the case only if all radiation energy was bouncing between the two bodies and was not radiated away. If some of the energy did radiate away, it would be more difficult to predict how exactly the temperatures of the bodies would be changing without knowing all the relevant details, but, eventually, the temperature of both bodies would tend to zero.
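A minimal sketch with two ideal blackbodies at assumed temperatures makes the point explicit: the cooler body radiates a nonzero flux toward the hotter one, but the net exchange still runs from hot to cool:

```python
sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiated(T):
    """Power per unit area emitted by an ideal blackbody at temperature T (kelvin)."""
    return sigma*T**4

T_hot, T_cool = 400.0, 300.0               # assumed temperatures, for illustration
net = radiated(T_hot) - radiated(T_cool)   # net exchange between facing blackbody plates

assert radiated(T_cool) > 0   # the cooler body really does radiate toward the hotter one
assert net > 0                # yet the net energy flow is still from hot to cool
```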
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/423524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 5,
"answer_id": 0
} |
How can a transformer produce a high voltage and a low current? I understand that in ideal transformers, power is conserved. Because of this the product of voltage and current in the secondary winding is a constant.
This means that voltage and current are inversely related, which seems unintuitive because they are directly related by Ohm's law.
Shouldn't the emf induced in the secondary winding by the alternating magnetic flux be directly related to the current by some constant, such as the resistance of the secondary winding?
I also came across a term known as impedance that seems to be related to the question; I am wondering if it is of any relevance.
|
I understand that in ideal transformers, power is conserved. Because
of this the product of voltage and current in the secondary winding is
a constant.
This isn't true. The expression of power conservation for an ideal transformer is
$$V_s\cdot I_s = V_p\cdot I_p$$
There is no requirement for $V_s\cdot I_s$ to be equal to a constant.
This means that voltage and current are inversely related, which seems
unintuitive because they are directly related by ohms law.
Power conservation doesn't imply that the secondary voltage and current are inversely related. Further, the secondary voltage and current (for an ideal transformer) are related by Ohm's law only if the load is a resistor but not otherwise.
For example, when the load is a resistor of resistance $R$ then...
* Ohm's law:
$$V_s = R\cdot I_s,\quad V_s\cdot I_s = \frac{V^2_s}{R}$$
* Power conservation:
$$V_p \cdot I_p = V_s\cdot I_s$$
* Ideal transformer voltage relation:
$$V_s = N\cdot V_p$$
where $N = \frac{N_s}{N_p}$. Thus
$$I_p = N^2\frac{V_p}{R} = \frac{V_p}{R/N^2}$$
That is, when the load is a resistor, both the primary and secondary voltage and current are related by Ohm's law and power is conserved. See that the load resistance $R$ connected to the secondary appears as a resistance $R/N^2$ to the circuit connected to the primary.
Shouldn't the emf induced in the secondary winding by the alternating
magnetic flux be directly related to the current by some constant,
such as the resistance of the secondary winding?
The resistance of the secondary winding is zero for an ideal transformer. If the secondary winding has non-zero resistance, power conservation does not hold, i.e., the power delivered to the load is less than the power delivered to the primary.
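A short numerical sketch (with assumed turn counts and load) ties the three relations above together: the secondary carries higher voltage and lower current, power is conserved, and the primary sees the load as $R/N^2$:

```python
# Ideal step-up transformer with a resistive load (all values illustrative)
Np, Ns = 100, 500          # assumed turn counts
N = Ns/Np                  # turns ratio, so V_s = N*V_p
R = 50.0                   # load resistance on the secondary, ohms
Vp = 10.0                  # primary voltage, volts

Vs = N*Vp
Is = Vs/R                  # Ohm's law on the secondary
Ip = Vs*Is/Vp              # power conservation fixes the primary current

assert abs(Vp*Ip - Vs*Is) < 1e-9     # power in = power out
assert abs(Vp/Ip - R/N**2) < 1e-9    # load seen from the primary is R/N^2
assert Vs > Vp and Is < Ip           # step-up: higher voltage, lower current
```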
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/423618",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Why don't non-free electrons in an atom radiate EM waves/photons? Why don't non-free electrons in an atom radiate EM waves/photons, even though they move with acceleration? Take, for example, the 1s electron of titanium: it doesn't emit EM waves, right? Why?
| It's because the classical model of an atom as a little solar system simply doesn't work for atoms. That was the message of the quantum physicists starting with Bohr (1913) and Heisenberg et al (1925).
We don't know what an atom "looks like" inside. Hard little balls going in orbits? Um, we cannot watch them. Some quantum field sloshing about? Some strange state that we cannot observe in detail.
So the early quantum physicists, starting with Bohr, simply declared that electrons in atoms can only have various discrete states, and that no radiation is emitted while an electron remains in a state, only when it transitions between two states. Why is this? Well, we don't know. But it's what the experimental data tells us. Quantum physics tells us how to calculate things, like energies and chemical bonds, but it doesn't tell us why it is so.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/423816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Max Born's statistical interpretation of the wave function How did Max Born derive the following expression for the probability of finding a particle between $x$ and $x+dx$ at instant $t$:
$$ \left |\psi(x,t)\right|^2dx$$
Was this result mathematically derived? Or is it just a postulate, like the Schrödinger equation itself?
| Wave Mechanics was heavily influenced by classical electromagnetic waves, and in classical EM, the intensity of a wave is proportional to the square of the field's strength. Luckily, this turned out to be true in QM as well.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/423883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Woodworking clamps, does force add up? I was watching a woodworking video about glue, and the guy was clamping two pieces of wood together using a total of 8 clamps. He argued that by doing so he would apply 8 times the maximum force of 150N (a property of the clamp), resulting in 1200N in total.
I think he's wrong. I think the force of 150 N is only working locally where the clamp is and will decline drastically radially from that spot. And so the clamping force on any given spot on the board will never exceed the max. force of the clamp.
Who's right?
|
And so the clamping force on any given spot on the board will never
exceed the max. force of the clamp.
This would, quite obviously, be the case, if we assumed that the clamps are evenly distributed around a circle.
Under these conditions, due to symmetry, any redistribution of the reaction force, which, in total, is equal to the total applied force, would not be possible, so the reaction force applied locally by each clamp would have to be $150$N and the pressure under all clamps would have to be the same.
If the clamps are not placed symmetrically, we can still argue that no redistribution of the forces occurs by looking at one pair of clamps at a time: if the two reaction forces differed while the two applied forces were equal, they would create a net torque and the work (the two pieces of wood) would rotate.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/423996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 2
} |
How to put $c$ back into relativistic equations? Many books set the speed of light $c=1$ for convenience. For example, Weinberg in his textbook "Gravitation and Cosmology" (though $G$ is still left as a constant):
$$\begin{align}
\mathrm{d}\tau^2 &= \mathrm{d}t^2 - R^2(t)~[f(r,\theta,\phi)] \tag{11.9.16} \\
t &= \frac{\psi + \sin\psi}{2\sqrt k} \tag{11.9.25}
\end{align}$$
If I now want to run real numbers in SI units, do I
* Assume $t'\,[\mathrm{s}] \rightarrow t = t'/c = t'/299792458$, where the new time unit is $3.34\ \mathrm{ns}$; or
* Multiply each $t$ and $\tau$ by $c$:
$$\begin{align}
c^2 \mathrm{d}\tau^2 &= c^2 \mathrm{d}t^2 - R^2(t)~[f(r,\theta,\phi)] \tag{11.9.16} \\
ct &= \frac{\psi + \sin\psi}{2\sqrt k} \tag{11.9.25}
\end{align}$$
* Something else?
| Setting $c = 1$ essentially causes quantities with dimension of time to be expressed in units of length. If you know the length dimension of a calculated quantity, you can re-express it in terms of time by inserting the corresponding number of factors of $c$, which just act as conversion factors.
This amounts to doing (1) at the very end of the calculation.
Alternatively, you can add in the $c$'s like in option (2) before starting your calculations. This might be more convenient if you are calculating complicated objects in which the dimension of the end result is difficult to see.
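Option (1) is just a unit conversion applied at the end. For example, converting a time quoted in metres (the natural unit when $c=1$) back to seconds:

```python
c = 299_792_458.0   # speed of light, m/s

# With c = 1, times can be quoted in metres ("light-metres"). Restoring SI
# units means dividing out the conversion factor at the end of the calculation.
t_geom = 1.0                 # one metre of time, in c = 1 units
t_SI = t_geom/c              # the same interval in seconds
assert abs(t_SI - 3.34e-9) < 0.01e-9   # ~3.34 ns, the unit quoted in the question
```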
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/424318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Value of the invariant $R_{\mu \nu}F^{\mu \nu}$ Is there a simple way to find the value of $R_{\mu \nu}F^{\mu \nu}$ (where $R_{\mu \nu}$ is the Ricci tensor and $F^{\mu \nu}$ is the electromagnetic tensor), knowing that it is an invariant?
Inputting the definitions of $R_{\mu \nu}=R^{\lambda}_{\mu \lambda \nu}= \partial_\lambda \Gamma^\lambda_{\mu \nu}-\partial_\nu \Gamma^{\lambda}_{\lambda \mu}+\Gamma^\lambda_{\lambda \sigma}\Gamma^{\sigma}_{\mu \nu}-\Gamma^\lambda_{\nu \sigma} \Gamma^\sigma_{\lambda \mu}$ and $F^{\mu \nu}=\partial^\mu A^\nu - \partial^\nu A^\mu$ would be a straightforward way, but I have a hunch that there is a much more simple solution if I could interpret the expression physically or transform it to a specific frame of reference where it is easy to evaluate.
| $R_{\mu \nu}F^{\mu \nu}$ is simply zero. No computations are needed to see this. Just note that $R_{\mu \nu}$ is a symmetric tensor while $F^{\mu \nu}$ is antisymmetric.
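The argument is easy to verify numerically: split a random matrix into its symmetric and antisymmetric parts and contract them. This is a generic sketch, not specific to the Ricci or Faraday tensors:

```python
import random

random.seed(0)
n = 4
M = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
S = [[(M[i][j] + M[j][i])/2 for j in range(n)] for i in range(n)]  # symmetric part
A = [[(M[i][j] - M[j][i])/2 for j in range(n)] for i in range(n)]  # antisymmetric part

# Full contraction S_{ij} A^{ij}: each (i,j) term cancels against (j,i)
contraction = sum(S[i][j]*A[i][j] for i in range(n) for j in range(n))
assert abs(contraction) < 1e-12
```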
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/424538",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
The definition of the hamiltonian in lagrangian mechanics So going through the "Analytical Mechanics by Hand and Finch". In section 1.10 of the book, the Hamiltonian $H$ is defined as: $$H = \sum_k{\dot{q_k}\frac{\partial L}{\partial \dot{q_k}} -L}.\tag{1.65}$$
And then the author affirms that this quantity is conserved and takes the derivative $\frac{dH}{dt}$:
$$\frac{dH}{dt} = \sum_k \left[\ddot{q}_k \frac{\partial L}{\partial \dot{q}_k} + \dot{q}_k\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_k}\right)\right] - \frac{d L}{d t}.\tag{1.66}$$
Now the book writes, "according to the chain rule for differentiating an implicit function of time": $$ \frac{dL}{dt} = \sum_k \left[\frac{\partial L}{\partial q_k}\dot{q}_k + \frac{\partial L}{\partial \dot{q}_k}\ddot{q}_k\right] + \frac{\partial L}{\partial t}.\tag{1.67}$$
And substituting (1.67) into (1.66), then using the Euler-Lagrange equations $\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_k} = \frac{\partial L}{\partial q_k}$ (which make the remaining sum vanish), gives: $$\frac{dH}{dt} = - \frac{\partial L}{\partial t}.\tag{1.68}$$
Now I don't understand how the third equation is derived, and also why the Hamiltonian $H$ is defined the way it is in the first equation.
| In your first equation, the Hamiltonian is defined in terms of the Lagrangian via an operation called a Legendre transformation. They are most commonly seen in thermodynamics and classical mechanics, and are used to convert functions of one variable into functions of another variable without disturbing the physics. Several thermodynamic variables are obtained using this procedure; for example, the Helmholtz free energy $A(T,V,N)$ is obtained by transforming the internal energy $U(S,V,N)$ by the Legendre transformation $A=U-TS$. Likewise, the Gibbs free energy $G(T,P,N)$ is obtained from the enthalpy $H(S,P,N)$ by the Legendre transformation $G=H-TS$. (The enthalpy is obtained from the internal energy using a non-standard Legendre transformation $H=U+PV$.)
In this case, we wish to switch the set of variables from $\{q_i,\dot{q}_i\}$ to $\{q_i,p_i\}$, and so we use the Legendre transformation to create the Hamiltonian $H(\{q_i,p_i\})$ from the Lagrangian $L(\{q_i,\dot{q}_i\})$ via the Legendre transformation in your question.
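For a concrete sketch, take the one-dimensional Lagrangian $L = \tfrac12 m\dot{q}^2 - V(q)$ (with an assumed quadratic potential for illustration); the Legendre transformation in the first equation then reproduces the familiar $H = p^2/2m + V(q)$:

```python
# Legendre transform of L = m*v^2/2 - V(q) into H(q, p)
m = 2.0
V = lambda q: 0.5*q**2            # assumed potential, for illustration only

def L(q, v):
    return 0.5*m*v*v - V(q)

def H(q, p):
    v = p/m                       # invert p = dL/dv = m*v
    return p*v - L(q, v)          # H = p*qdot - L

q, p = 0.7, 3.0
assert abs(H(q, p) - (p*p/(2*m) + V(q))) < 1e-12
```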
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/424617",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Power factor in transmission What is normally the power factor in transmission? Is there some standard or guideline for that?
Also, a low power factor is not desirable as it leads to more power loss. That makes sense. They say the power factor can be improved by adding a capacitor in parallel. Why in parallel?
| Power factor is a control variable used by electric utility transmission system control software (Energy Management System) to provide voltage control in the transmission system. Voltage sags on a node can be lifted up using added capacitor banks that are switched in or out of the circuit in real-time by EMS software.
Capacitors are shunts to ground, so that is sometimes described as a parallel connection; however, in my 40+ years working in this industry, I have never known a power systems engineer to call it a parallel connection.
Capacitance in a transmission grid also occurs in series and this is part of the transmission line lumped circuit model. Capacitor shunts to ground though are reactive voltage power control devices and pose a much larger capacitance in the circuit when switched in at a node.
In the mathematics, the actual control variable is the phase angle between buses separated by a transmission line. The phase angle of course determines the instantaneous voltage difference between two nodes that are nominally at the same kV.
There are other control devices in the transmission grid too. For example, phase angle transformers that are adjusted by software also affect the phase angle control variables. And, Real Power is managed of course by traditional transformers that are used to bridge nominal voltage levels.
As to what the normal power factor in the transmission grid is, I don't know the answer, and I have never thought it that important. I am sure there are those who monitor the actual values, and it is even possible that some of the large regional electric utilities (ISOs or RTOs) have web displays that report such things. That is just a guess, though. There is usually a lot of good information on the websites of such companies, at least in the US, but likely in other countries as well.
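On the "why in parallel" part of the question: a shunt capacitor sees the full node voltage, so its leading branch current cancels the load's lagging reactive current without inserting any series voltage drop. A hedged sketch with purely illustrative numbers:

```python
import cmath, math

V = 230.0                       # rms supply voltage, assumed
w = 2*math.pi*50                # angular frequency, 50 Hz assumed
Z_load = complex(10.0, 7.5)     # inductive load -> lagging power factor

I_load = V/Z_load
pf_before = math.cos(cmath.phase(I_load))

# Size C so the capacitor's leading current cancels the load's reactive current
C = -I_load.imag/(w*V)
I_total = I_load + V*complex(0, w*C)    # shunt branch adds at the same voltage
pf_after = math.cos(cmath.phase(I_total))

assert pf_before < 0.9
assert abs(pf_after - 1.0) < 1e-9       # corrected to unity power factor
```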
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/424934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Do compact symplectic manifolds play a role in physics? In classical mechanics, the phase space is the cotangent bundle of the configuration space, and it is a symplectic manifold, but not compact.
Do compact symplectic manifolds have physical meaning? Or just of mathematical interest?
| Every Calabi-Yau manifold, being Kähler, is symplectic. Compact Calabi-Yau manifolds play an important role in string theory, though their symplectic structures did not initially seem to play an important role (as far as I know).
However, one context in which these do play a role is in homological mirror symmetry, an attempt to formulate the concept of mirror symmetry as observed in string theory in purely mathematical terms. In it, the duality between a Calabi-Yau manifold and its mirror partner is stated in terms of the algebraic/analytic structure of one, and the symplectic structure of the other.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/425012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
What direction does electric current flow in when the voltage drop is negative? We were given the below diagram, and we needed to determine whether the unknown component is supplying energy into the system.
$\hspace{200px}$
I thought the unknown component was supplying energy since charge is flowing to a higher potential, but my teacher said it was the other way around.
What does the $`` -2 \, \mathrm{V} "$ label in the diagram mean? Is it:
* the difference from $\left(+\right)$ to $\left(-\right)$; or
* the difference from $\left(-\right)$ to $\left(+\right)$?
I'm sure there is a convention to this sort of stuff but I'm not aware of it yet.
| I would have thought:
$$
\begin{align}
P &= V \cdot I \\[5px]
P &= -\frac{\mathrm{d}U}{\mathrm{d}t}
\end{align} \\ \, \\
\Delta U = -\int{P\cdot \mathrm{d}t} = - \int{V\cdot {I}\cdot \mathrm{d}t}
$$
$V\cdot I > 0$, since the current enters the device through the positive reference of the voltage and I assumed the passive sign convention, so
$$
-\int{V\cdot I\cdot \mathrm{d}t} < 0
{\qquad} \text{and} {\qquad}
\Delta U < 0
\, ,
$$
so it absorbs energy.
I have also done it by simplifying the system in this way, adjusting the signs:
About Convention
The number that represents the voltage is the number attached to the "+" symbol on the drawing. It means that the written voltage is referenced to the positive pin of the bipole. The difference between the + pin and the - pin is the number written near the symbol: $V(+) - V(-) = $ the written number, in this case $V(+) - V(-) = -2 \, \mathrm{V}$. Reversing the sign of the current reverses its direction.
The passive sign convention is used, but someone can use a different one.
(Please feel free to comment if there are mistakes)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/425194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
Is any particle's energy quantized?
In the above picture, the author is trying to summarize the correlation between particle and wave packet. In doing so, he assumes that frequency is related to energy as: $E=h\nu$. Is this apparent assumption correct?
Because, as far as I know, the quantization was originally for oscillators on a blackbody surface, and Einstein extended it to light waves.
But when it comes to matter waves, is it still true that the particle's energy can only be an integral multiple of Planck's constant times the frequency associated with its matter wave?
In fact, the slide doesn't even mention the integral multiple. So what am I missing?
| Any particle has an energy given by the relativistic equation for the energy:
$$ E^2 = p^2 c^2 + m^2 c^4 \tag{1} $$
In this equation the variable $m$ is the rest mass and $p$ is the (relativistic) momentum. The momentum is related to the de Broglie wavelength by:
$$ p = \frac{h}{\lambda} \tag{2} $$
For photons the rest mass is zero, so equation (1) for the energy simplifies to:
$$ E = pc = \frac{hc}{\lambda} = h\nu $$
And that's why the energy of a photon is always equal to $h\nu$. With a massive particle the same equation applies, but with two big differences. Firstly the rest mass is no longer zero, and secondly the phase velocity is not $c$. So we can write the energy of the particle as:
$$ E = \sqrt{p^2c^2 + m^2c^4} = \sqrt{\frac{h^2c^2}{\lambda^2} + m^2c^4} $$
which sadly is a rather less elegant expression.
However we can define a matter wave frequency:
$$ \nu = \frac{E}{h} $$
and this automatically makes the energy equal to $h\nu$. Proceed with caution though as unlike light, with massive particles this frequency is not something that is directly measured. We can directly measure the de Broglie wavelength by diffracting the particles with some suitable grating, but the de Broglie frequency is rather more abstract.
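As a numeric illustration of these relations, here is a short Python sketch; the constants are rounded CODATA values, and the 500 nm wavelength is an arbitrary choice for the comparison:

```python
import math

h = 6.626e-34    # Planck constant (J s)
c = 2.998e8      # speed of light (m/s)
m_e = 9.109e-31  # electron rest mass (kg)

def total_energy(wavelength, m):
    """Relativistic energy E = sqrt(p^2 c^2 + m^2 c^4), with p = h/wavelength."""
    p = h / wavelength
    return math.sqrt((p * c) ** 2 + (m * c ** 2) ** 2)

def de_broglie_frequency(wavelength, m):
    """Matter-wave frequency defined by nu = E/h."""
    return total_energy(wavelength, m) / h

wavelength = 500e-9
E_photon = total_energy(wavelength, 0.0)    # massless case: reduces to h*c/lambda
E_electron = total_energy(wavelength, m_e)  # massive case: rest energy dominates
```

For the photon the general expression collapses to $E = h\nu$ exactly, while for the electron at the same wavelength the rest-mass term dominates, which is one more reason the de Broglie frequency $E/h$ of a massive particle is not something you measure directly.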
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/425314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is the electron field of QED similar to the Higg's field? If I understand it (I probably don't...) the Higg's field is a field that permeates the universe, and its excitations are the Higg's bosons. It seems that this implies that the Higg's field is a property of the universe itself, and not generated by something like an electric field would be generated by charged particles.
Quantum Electro Dynamics seems to state that the electron is the excitation of the "electron field" - if this is incorrect then stop me there, of course.
Does this mean that the electron field is also some field that simply permeates the universe? Or is the electron field somehow generated by something else? Alternately, is the electron field a mathematical convenience without physical existence?
| The Higgs, electron and electromagnetic fields all permeate space. In QFT their excitations are indeed what we call particles.
They are different types of fields, however. The Higgs field is a scalar, the electron field is a spinor, and the EM field is a vector.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/425686",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Factors affecting Battery Voltage How do batteries produce a certain voltage, such as 1.5V or 9 V? From what I understand, battery EMF comes from oxidation of the anode, which releases electrons that can flow through a circuit. But how do batteries regulate that voltage? What is it about the chemical reaction that creates an electric potential difference of a predictable quantity?
| The voltage generated by an individual electrochemical cell will be in the approximate range of a fraction of a volt to a volt and a half for most cells and is determined by the particulars of the anode oxidation process, which vary from one metal to another as you point out.
However, to get 6 volts or 9 volts or 12 volts we have to string together individual cells in series, so their cell voltages add up. The result is called a battery; a 12 volt car battery consists of 6 lead/acid cells in series and a 9 volt radio battery contains 6 zinc/carbon cells in series.
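The series arithmetic is simple enough to sketch; the per-cell voltages below are the nominal values quoted above (real cell voltages vary with chemistry and state of charge):

```python
def battery_voltage(cell_voltage, n_cells):
    """EMFs of cells in series simply add."""
    return cell_voltage * n_cells

nine_volt = battery_voltage(1.5, 6)    # 6 zinc/carbon cells -> 9 V
car_battery = battery_voltage(2.0, 6)  # 6 lead/acid cells (~2 V nominal) -> 12 V
```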
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/425809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Could black holes be formed by highly energetic gravitational waves? Could the gravitational waves released by two merging black holes contain enough energy to produce another black hole?
| Colliding gravitational waves can indeed form black holes, but the conditions for doing so are fairly strict. As it happens this has just been discussed in the preprint Black Hole Formation from the Collision of Plane-Fronted Gravitational Waves.
But it's exceedingly unlikely this would ever happen. The gravitational waves near merging black holes will be spherical, not plane waves, and it's unlikely they could concentrate enough energy to create a black hole. The waves will become increasingly planar as they move away from their source, but of course their intensity will fall off with distance squared, so far from the black holes they certainly won't have enough energy to form a black hole.
The tl;dr is that yes it's theoretically possible but in practice it's hard to imagine circumstances where it would actually happen.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/426233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Why isn't average speed defined as the magnitude of average velocity? Speed is usually defined as the magnitude of (instantaneous) velocity. So one could assume that average speed would be defined as the magnitude of average velocity. But instead it is defined as
$$s_{\textrm{average}} = \frac{\textrm{total distance traveled}}{\textrm{total time needed}}$$
which generally speaking is not equal to the magnitude of the corresponding average velocity.
What historical, technical or didactic reasons are there to define average speed this way instead of as the magnitude of average velocity?
Given a velocity as a function of time, the speed as a function of time is the magnitude of the velocity at each point in time. The average speed is then the average of this magnitude, as it would be for any function of time, such as density or temperature. The question of whether the average magnitude of the velocity is equal to the magnitude of the average of the velocity then becomes a conjecture to check. Since an object can move around at high speed while returning to the same place, and so have a zero average velocity with a high average speed, this shows by counterexample that the average speed, defined just like any other average, is actually not equal to the magnitude of the average velocity.
It is very common to find that the average of some function is not the function of the average. For example, the average of $x^2$ is not typically the square of the average of $x$.
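The counterexample is easy to check numerically; a hypothetical sketch where a point runs once around a unit circle and so returns to its starting place (the radius and time step are arbitrary choices):

```python
import math

def avg_speed_and_avg_velocity(points, dt):
    """points: (x, y) position samples at uniform time spacing dt."""
    n = len(points) - 1
    total_dist = sum(
        math.hypot(points[i + 1][0] - points[i][0],
                   points[i + 1][1] - points[i][1])
        for i in range(n)
    )
    total_time = n * dt
    avg_speed = total_dist / total_time                 # average of |v|
    disp = (points[-1][0] - points[0][0], points[-1][1] - points[0][1])
    mag_avg_velocity = math.hypot(*disp) / total_time   # |average of v|
    return avg_speed, mag_avg_velocity

N = 1000
circle = [(math.cos(2 * math.pi * k / N), math.sin(2 * math.pi * k / N))
          for k in range(N + 1)]
avg_speed, mag_avg_vel = avg_speed_and_avg_velocity(circle, dt=0.01)
# avg_speed is about 2*pi/10, while |average velocity| is essentially zero
```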
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/426366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 8,
"answer_id": 6
} |
Magnetic field of a solenoid Why, when explaining the magnetic field of a solenoid, are the individual "quasi-rings" split up into two layers with opposite direction of current (see attached pictures; where the x-es and the dots indicate opposite directions). The original picture given clearly states the direction of the current, so why does it "split up into two"?
Just cut the solenoid in image 1 along a horizontal diameter and indicate the direction of the current again in its wires.
Note: the dots mean the current is coming out of the page and the crosses mean it's going in.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/426569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Why is it much more difficult to horizontally throw a toy balloon than a football? If you horizontally throw a sphere of radius $R$ it will feel, in this direction, a drag force due to air. Assume the drag is given by Stokes law, $F_D=6\pi\eta R v$, where $\eta$ is the air viscosity and $v$ is the horizontal speed. This force cannot "see" the internal structure of a toy balloon, a football or even a metal sphere. However, anyone who ever played with balls and toy balloons noticed that for the same throwing, the ball will have higher horizontal reach for the same time interval. Just think about someone kicking toy balloons and footballs and the distances reached in each case. How is the resistive force considerably greater for the toy balloon?
Even if we consider a quadratic drag, $bv^2$, I suppose the coefficient $b$ would depend only on the fluid and the geometry of the bodies. Again the drag would be equal.
Another way to put this question: How does the density of the sphere contribute for the resistive force?
| For a balloon and a football (soccer ball for Americans) of the same size, shape, and initial velocity, the aerodynamic drag will be the same. The effect on the motion of these objects will be very different because of their different masses. A football has a mass of about 0.430 kg while a balloon has a mass of less than 0.010 kg--1/40 of the football. This means that the balloon will decelerate 40x faster than the football.
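Since the drag force is identical, Newton's second law $a = F/m$ tells the whole story; a sketch with the masses quoted above (the 0.1 N drag figure is an arbitrary illustration):

```python
m_football = 0.430  # kg
m_balloon = 0.010   # kg
F_drag = 0.1        # N, identical for both (same size, shape, speed)

a_football = F_drag / m_football
a_balloon = F_drag / m_balloon
ratio = a_balloon / a_football  # the balloon decelerates ~40x faster
```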
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/426744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Is the probability density of an electron in a hydrogen atom static? Probability densities are illustrated in text books and on Wikipedia as static pictures. Is the probability density of an electron within an isolated Hydrogen atom static or does it oscillate in some way?
| In this link, one can see how the plots you show are derived from the solution of the Schrodinger equation for the hydrogen atom. There are checkable options.
There is no time dependence in the wavefunctions, or the corresponding probability distributions for the given eigenvalues (seen by checking next to the solutions).
Emilio Pisanti has given here the case of the time-dependent hydrogen atom solutions, which show oscillations in both the position-space and momentum-space probability distributions.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/426890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Reducing multi body system to a single body using reduced mass A two body system can be treated as a single body using reduced mass and the motion can be described using one generalized coordinate.
Can this concept be somehow used to reduce a body of say 3 or more particles to a single body?
Not in general, no. The "three-body" problem does not even have closed-form solutions in general, let alone a simplification like this that would make it easy. It's not clear how the reduced-mass trick would generalize here, and I don't see how it would help.
See, e.g., https://en.wikipedia.org/wiki/Three-body_problem for more information.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/426964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
$\sqrt{-1}$ coefficient in a function In a simple harmonic oscillator with $\ddot{x} = -\omega^2x$, it can be shown through differentiation that one solution can be given by $\dot{x} = i \omega Ae^{i \omega t}$. What does the factor of $i$ do here? What effect does it have on velocity?
The terms in a differential equation representing an oscillating system may be out of phase. Such equations can be represented by a "phase diagram". Each term in the equation can be considered as representing one component of a vector rotating around the origin in a 2D "phase space". The phase space can be chosen as an xy system, but there are some mathematical advantages to putting it in a complex number plane (if you are comfortable with such a system) where R$e^{iθ}$ = Rcos(θ) + iRsin(θ). (The θ's may be modified with a phase shift.) Generally, the real component of each vector represents the instantaneous value of the corresponding term in the equation, and the angle represents a phase. In your equations, the multiplying $i$ indicates that there is a 90 degree phase difference between the vector representing velocity and the one representing position. (The sine term becomes real and the cosine imaginary.)
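The 90-degree phase relationship the factor $i$ encodes can be verified directly with Python's complex numbers (the values of $A$, $\omega$ and $t$ are arbitrary):

```python
import cmath
import math

A, omega, t = 2.0, 3.0, 0.7
x = A * cmath.exp(1j * omega * t)               # position phasor
v = 1j * omega * A * cmath.exp(1j * omega * t)  # velocity phasor

phase_lead = cmath.phase(v / x)  # phase of velocity relative to position
speed_gain = abs(v / x)          # |v| / |x| = omega
```

The ratio $v/x = i\omega$ regardless of $t$: the $i$ contributes a constant $\pi/2$ (90°) phase lead, while $\omega$ scales the amplitude.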
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/427068",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
The theater, the actors and the 'graviton' In my perception of the universe, there's the theater which is the 'spacetime' and the actors meaning the 'particles'.
If I got it right, GR claims that the 'actors' affect the 'theater' by 'bending' it.
Also, in order to combine GR and QM, some believe in the existence of the 'graviton'.
Does this mean that the 'theater' is no longer needed, since gravity is not a property of spacetime anymore?
Does the hypothetical existence of the graviton cancel out spacetime and its 'curvature'?
If gravity is 'transferred' through 'gravitons' and is not a property of spacetime and its curvature, how can we explain its impact on EM transmission? How can a particle (graviton) influence the path of motion of another particle (photon) in space?
| The theater is still needed in either case simply because we observe and measure the theater. The difference is whether or not the theater is built by the actors.
In GR the theater arises dynamically from the distribution of the actors’ stress energy tensors. In QM the theater is a static background that is added separately.
At this point we don’t have a solid quantum gravity theory, so how that works is still unfinished. Gravitons would naively be starting with a pre-existing theater and then the actors doing a small remodel dynamically.
Of course, this is a fuzzy metaphor and hides a lot of important complications, especially regarding the dynamic nature of the theater in GR.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/427478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Interaction-Picture Field as Solution of Klein-Gordon Equation I am following a problem in a QFT textbook (Srednicki) which asks us to show that the interaction-picture field
$$\phi_I(\textbf{x},t)=e^{iH_0 t}\phi(\textbf{x},0)e^{-iH_0 t}$$
obeys the Klein-Gordon equation. This is implicit in the rest of the text, but the solutions also say that this can be confirmed directly by writing the time derivatives as commutators, and computing them.
Could someone confirm what the author means by it: is it something related to the Heisenberg equation of motion, or the fact that operators commute with their derivatives?
| We take the derivatives of the interaction-picture field and simplify to obtain the Klein-Gordon equation.
$\phi_I(\textbf{x},t)=e^{iH_0 t}\phi(\textbf{x},0)e^{-iH_0 t}$
$\frac{\partial \phi_{I}}{\partial t} = iH_{0} e^{iH_{0}t}\phi(\textbf{x},0)e^{-iH_{0}t} + e^{iH_{0}t}\phi(\textbf{x},0)(-iH_0)e^{-iH_0 t}$
Writing the first derivative as a commutator, $\frac{\partial \phi_I}{\partial t} = e^{iH_0 t}\,i[H_0,\phi(\textbf{x},0)]\,e^{-iH_0 t} = i[H_0,\phi_I]$, and differentiating once more, $\frac{\partial^2 \phi_I}{\partial t^2} = i\left[H_0,\frac{\partial \phi_I}{\partial t}\right] = -[H_0,[H_0,\phi_I]]$
With the free Hamiltonian $H_0 = \int d^3x\left(\frac{1}{2}\Pi^2+\frac{1}{2}(\nabla\phi)^2+\frac{1}{2}m^2\phi^2\right)$ and the canonical commutator $[\phi(\textbf{x}),\Pi(\textbf{y})]=i\,\delta^3(\textbf{x}-\textbf{y})$, one finds $[H_0,\phi_I]=-i\Pi_I$ and $[H_0,\Pi_I]=-i(\nabla^2-m^2)\phi_I$, so $\frac{\partial^2 \phi_I}{\partial t^2} = i[H_0,\Pi_I] = \nabla^2 \phi_I - m^2 \phi_I$, i.e.
$\frac{\partial^2 \phi_I}{\partial t^2} - \nabla^2 \phi_I +m^2 \phi_I = 0.$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/427633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Confusion about how an electron gun works I'm a little unclear about the charge balance aspect of an electron gun. Referring to this diagram and similar diagrams I've seen, what I don't get is wouldn't the target of the electrons have to be connected to the positive anode so that the electrons fired at a target can be recycled if the electron gun is needs to operate continuously? Is the target generally placed on the anode opening so it's connected to the positive?
| This is an Electrical Engineering question. The target is usually not in the hole (opening) of the anode. This is because, first, the hole is small. Second, you often want to be able to manipulate the electron beam, like what people do in the CRT TV. So the target is usually at the right end of your diagram, where your blue arrow points to.
The target is usually connected to the anode so its potential is equal to the anode. You can either connect the anode (and your target) or the cathode to ground. In a microwave oven, the anode of the magnetron (a kind of vacuum tube with anode, cathode) is grounded. In a TV's CRT, the anode is at high positive voltage and other component (maybe the cathode) is grounded.
This diagram might make it clear to you,
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/427735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 0
} |
Is continuum mechanics a generalization or an approximation to point particle mechanics? Newtonian Mechanics is usually presented as a theory of point particles (and forces). My impression of the status of continuum mechanics is that it is mostly taken as an approximate description for certain situations, where many particles are present and where we are interested only in bulk motion. In situations like modeling a string, we see that the continuum limes leads to a very good approximation for the motion of pointlike masses connected by springs under certain conditions. This seems to point in the direction that continuum mechanics is an approximation to point particle mechanics.
But point particles should also be easy to accommodate as a special case in continuum mechanics by admitting delta distributions and the like. Taking this viewpoint, continuum mechanics seems to be a generalization of point particle mechanics.
Therefore my question: Is continuum mechanics a generalization or an approximation to point particle mechanics? Or can it be argued that both are equally valid starting points?
Part of my motivation to ask this question is that I sometimes have difficulties to connect concepts from one viewpoint to the other (see e.g. my question about the point of application of a force).
My guess is that point particles were initially introduced as approximations, to avoid having to describe the effect of mass distribution in inertial and gravitational phenomena. Spring-mass systems usually offer only a very crude model of solids, although they can sometimes be used as a first approach (e.g. phonons in crystals). You can always isolate a finite volume within a medium and apply the equations of mechanics; the difference is that you need to take into account the internal efforts acting on its boundary from the surrounding part of the medium (see the central notion of stress). And no, continuum mechanics is certainly not only concerned with bulk motion: think of the entire field of elasticity!
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/427832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Confusion about the meaning of steady current I am trying to learn some elementary EM, but I have some confusion about the basic concepts of steady current.
Suppose I have a wire of uniform cross section area. The current is always flowing from left to right.
I imagine that I can cut a segment of this wire (with the area vectors at both ends parallel to the flow of current) and compute the surface integral $\int\int_S \mathbf{J}\cdot d\mathbf{A}$, where $\mathbf{J}$ is the current density vector, $\mathbf{A}$ is the area vector, and $S$ is the boundary of the segment. I assume that $\mathbf{J}$ does not vary with time.
I believe that the magnitude of $\mathbf{J}$ can depend on its position, so let's say that its magnitude is greater on the right end, as compared to the left end. Because of that, the surface integral $\int\int_S \mathbf{J}\cdot d\mathbf{A}$ should have a non-zero value (the dot products on both ends do not cancel).
However, $\int\int_S \mathbf{J}\cdot d\mathbf{A}$ is precisely the net change in charge out of this segment, i.e., $-\frac{dq}{dt}$.
Now, for a steady current, $\frac{\partial \rho}{\partial t}$ is zero at every point, thus the $-\frac{dq}{dt}$ should also be zero, which contradicts my understanding that $\int\int_S \mathbf{J}\cdot d\mathbf{A}$ can be non-zero.
This is perhaps a very stupid question, but I just cannot figure out what has gone wrong. Any help would be greatly appreciated~
| $\int\int_S \mathbf{J}\cdot d\mathbf{A}$ represents the current $(\frac {dq} {dt})$ flowing through a cross-section area. It is not "the net change in charge out of this segment".
The "net change in the charge" would be equal to the difference between the current flowing in (through the left cross-section) and the current flowing out (through the right cross-section) of the segment.
Updating the answer based on the comments:
If $S$ is the whole surface of the segment, then $\int\int_S \mathbf{J}\cdot d\mathbf{A}$ represents the net charge flow and, in a steady state, should be zero. That does not change, if the current density through the left cross-section is different from the current density through the right cross-section, since the differences in the current density would be compensated by the differences in cross-section areas, yielding the same currents.
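A numeric sketch of this steady-state bookkeeping (the numbers are illustrative): the same current $I$ crosses both faces, so $J$ scales inversely with area and the net outward flux vanishes.

```python
I = 0.5         # steady current (A)
A_left = 2e-6   # left cross-section area (m^2)
A_right = 1e-6  # narrower right cross-section area (m^2)

J_left = I / A_left    # smaller current density
J_right = I / A_right  # larger current density

# net outward flux of J over the closed surface of the segment
net_flux = J_right * A_right - J_left * A_left
```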
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/427986",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Gravitational Time dilation for motion in a gravitational field An astronaut is travelling towards a black hole in a space ship traveling at 0.8 C. The guy’s position is not fixed relative to the black hole. He is travelling towards it from a distance. How to calculate the gravitational time dilation in this case, as the "relative time" keeps getting slower as one approaches the event horizon?
Please note that I have already factored the ‘time dilation’ due to the space ship traveling at 0.8 C. I just want to understand the ‘gravitational time dilation ‘ part of it.
Can some form of integration be applied on the gravitational time dilation equation for the case of motion in gravitational field?
| A radial free fall to a Schwarzschild black hole from rest a distance $R$ in geometrized units is given by the geodesics
$$ \tau=\dfrac{R}{2}\sqrt{\dfrac{R}{2M}}\left(\arccos\left(\dfrac{2r}{R}-1\right)+\sin\left(\arccos\left(\dfrac{2r}{R}-1\right)\right)\right) $$
And
$$ t=\sqrt{\dfrac{R}{2M}-1}\cdot\left(\left(\dfrac{R}{2}+2M\right)\cdot\arccos\left(\dfrac{2r}{R}-1\right)+\dfrac{R}{2}\sin\left(\arccos\left(\dfrac{2r}{R}-1\right)\right)\right)+ $$
$$ +\, 2M\ln\left(\left|\dfrac{\sqrt{\dfrac{R}{2M}-1}+\tan\left(\dfrac{1}{2}\arccos\left(\dfrac{2r}{R}-1\right)\right)}{\sqrt{\dfrac{R}{2M}-1}-\tan\left(\dfrac{1}{2}\arccos\left(\dfrac{2r}{R}-1\right)\right)}\right|\right) $$
You can figure the time dilation by differentiating these equations by $r$
$$ \dfrac{d\tau}{dt}=\dfrac{d\tau}{dr}\dfrac{dr}{dt}=\dfrac{d\tau}{dr}\cdot \dfrac{1}{\dfrac{dt}{dr}} $$
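These expressions can be evaluated numerically; a sketch in geometrized units with the illustrative choices $M = 1$, $R = 10$, using a central finite difference for the derivative:

```python
import math

M = 1.0   # black hole mass (geometrized units)
R = 10.0  # radius of release from rest

def eta(r):
    return math.acos(2 * r / R - 1)

def tau(r):
    """Proper time along the radial infall geodesic."""
    e = eta(r)
    return (R / 2) * math.sqrt(R / (2 * M)) * (e + math.sin(e))

def t_coord(r):
    """Schwarzschild coordinate time along the same geodesic."""
    e = eta(r)
    a = math.sqrt(R / (2 * M) - 1)
    first = a * ((R / 2 + 2 * M) * e + (R / 2) * math.sin(e))
    second = 2 * M * math.log(abs((a + math.tan(e / 2)) /
                                  (a - math.tan(e / 2))))
    return first + second

def dtau_dt(r, h=1e-5):
    """Time dilation d(tau)/dt via central differences in r."""
    return (tau(r + h) - tau(r - h)) / (t_coord(r + h) - t_coord(r - h))
```

As a cross-check, the result agrees with the conserved-energy form $d\tau/dt = (1-2M/r)\big/\sqrt{1-2M/R}$, and it tends to zero as $r \to 2M$, reflecting the divergence of coordinate time at the horizon.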
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/428213",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why can you assume the current remains the same after changing resistance? In my physics textbook there is a question about a circuit:
In question b) We find out the current in the circuit is 0.003A. This is before the voltmeter is added.
In question c) a voltmeter is added that affects the p.d. across the first 1k resistor, which causes its voltage to drop from 3.0V to 2.0V. In the mark scheme they say R(from B to A) = 666.7 ohms, which means they did 2/0.003. How is it known the current doesn't change at all, or am I missing something?
|
In the mark scheme they say R(from B to A) = 666.7 ohms which is means they did 2/0.003. How is it known the current doesn't change at all, or am I missing something?
The given solution is incorrect.
If we connect the voltmeter as shown and measure 2 V, then we know that
$$ 6 \frac{R'}{R'+1000} = 2$$
where $R'$ is the parallel combination of the voltmeter and the 1 kohm resistor, from the voltage divider rule. This means $R'$ must be 500 ohms.
Therefore
$$\frac{1000 R}{1000+R} = 500,$$
from which we can find
$$R = 1000.$$
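The algebra above can be sketched and checked numerically with the problem's values (6 V source, two 1 kΩ resistors in series):

```python
def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

V_source, R_other, V_measured = 6.0, 1000.0, 2.0

# divider: V_measured = V_source * Rp / (Rp + R_other)  =>  solve for Rp
Rp = V_measured * R_other / (V_source - V_measured)  # 500 ohms

# Rp is the 1 kohm resistor in parallel with the voltmeter resistance
R_voltmeter = R_other * Rp / (R_other - Rp)          # 1000 ohms
```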
Sometimes the problem is just that the guy who wrote the textbook didn't pay much attention when writing the solutions.
How is it known the current doesn't change at all, or am I missing something?
A well-designed voltmeter should have an input resistance of $10^6\ {\rm \Omega}$ or higher, so often it is reasonable to make this assumption. However, it should not have been made in this particular problem.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/428334",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why does work depend on distance? So the formula for work is$$
\left[\text{work}\right] ~=~ \left[\text{force}\right] \, \times \, \left[\text{distance}\right]
\,.
$$
I'm trying to get an understanding of how this represents energy.
If I'm in a vacuum, and I push a block with a force of $1 \, \mathrm{N},$ it will move forwards infinitely. So as long as I wait long enough, the distance will keep increasing. This seems to imply that the longer I wait, the more work (energy) has been applied to the block.
I must be missing something, but I can't really pinpoint what it is.
It only really seems to make sense when I think of the opposite scenario: when slowing down a block that is (initially) going at a constant speed.
| Often it is important to know if a given formula is a simplification of a more general equation and, when you encounter a conceptual problem, check the general formula. In this case it is a simplification of this formula:
$$W=\int_S F\cdot ds $$
Where $S$ is the path over which we are interested in the work and $ds$ is an infinitesimally small segment of $S$.
So back to your question: wherever $F=0$ the integrand is $0$, regardless of how long that segment of the path is. So it is only over that first segment, where you are applying the 1 N, that work is done. Once you stop pushing, the distance increases but the work does not.
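A crude numerical version of the integral makes the point; in this hypothetical sketch a 1 N force acts only over the first metre, after which the block coasts freely:

```python
def work(force, x_max, n=100000):
    """Riemann-sum approximation of W = integral of F(x) dx from 0 to x_max."""
    dx = x_max / n
    return sum(force(i * dx) * dx for i in range(n))

def push(x):
    """1 N over the first metre, zero afterwards."""
    return 1.0 if x < 1.0 else 0.0

W_short = work(push, 2.0)    # watch the block over 2 m
W_long = work(push, 100.0)   # watch it coast for 100 m: same work, ~1 J
```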
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/428525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32",
"answer_count": 11,
"answer_id": 3
} |
Definitions of $\vec{B}$ and $\vec{H}$ From here, I have got the definition of $\vec{H}$. However even in wikipedia and other sites, I cannot find a definition for $\vec{B}$ which shows its similarity with $\vec{H}$.
Similarity:
I know for free currents, $\vec{B}$ and $\vec{H}$ are same. I also know outside the magnet, for bound currents, $\vec{B}$ and $\vec{H}$ are same.
What is the definition for $\vec{B}$ which makes it so similar to $\vec{H}$?
| I would say the more "fundamental" field is actually the magnetic field $\vec B$. The definition of $\vec H$ from this is then
$$\vec H=\frac{\vec B}{\mu_0}-\vec M$$
Where $\vec M$ is the magnetization of the medium your magnetic field is in.
Both are useful depending on the context. Since the magnetization in free space is $0$, $\vec B$ is more useful when looking at fields in a vacuum or media where magnetization can be neglected. $\vec H$ is more useful when your magnetic field is in some medium where a net magnetization arises.
Either way, it's all based on how useful the quantity is. For example, we technically don't need a definition of moment of inertia, angular momentum, etc., but those definitions are very useful for discussing rotational motion. The same is true here. We technically would only need one definition, but they are both useful, so we use both. One is more useful over the other when we are looking at fields in different media (or in lack of a medium).
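As a one-line sketch of the defining relation (magnitudes along a common axis, SI units; the numbers are illustrative):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (T m / A)

def H_field(B, M):
    """H = B/mu0 - M."""
    return B / MU0 - M

H_vacuum = H_field(1.0, 0.0)  # no magnetization: H = B/mu0
H_medium = H_field(1.0, 5e5)  # magnetized medium: same B, smaller H
```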
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/428651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Interference pattern for points where condition for constructive and destructive interference is not met I get the creation of alternate dark and bright fringes in the double-slit light interference experiment. Where I am confused is that there will be points on the screen where the condition for constructive as well as destructive interference is not met, for example where the path difference = 1/3 (wavelength). At those points there should be some light? How come we see only alternate dark and bright fringes and nothing in between?
| It's because of the way your eyes work.
The intermediate areas are actually intermediate. The boundary between the light and dark areas is not a sharp line. But your eye doesn't pick up on that.
What you see is intensity. Leaving out the details, intensity is proportional to the sine squared of something-or-other. (It would be simpler if you projected the light onto the inside of a cylinder instead of a flat surface.)
If you look at the graph of sine-squared, it's exactly like the graph of a sine function except the frequency is doubled and the range is 0 to 1 instead of -1 to 1.
So the places where the light is maximum or minimum change slowest, and the places in between change fastest.
So the in-between areas will look thin. But they're still there and you can see them if you watch for them. Your eye tends to de-emphasize them.
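The questioner's $\lambda/3$ case can be read directly off this curve; a sketch assuming the ideal two-slit pattern $I/I_{\max}=\cos^2(\pi\,\Delta/\lambda)$, where $\Delta$ is the path difference:

```python
import math

def relative_intensity(path_diff_over_lambda):
    """Two-slit intensity normalized to the central maximum."""
    return math.cos(math.pi * path_diff_over_lambda) ** 2

bright = relative_intensity(0.0)       # constructive: 1.0
dark = relative_intensity(0.5)         # destructive: 0.0
third = relative_intensity(1.0 / 3.0)  # path difference lambda/3: 0.25
```

So at a path difference of $\lambda/3$ the screen receives a quarter of the maximum intensity: dim, but not dark, exactly the intermediate light the question asks about.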
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/428916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Which field Passes Through the Polarization Film? Why Isn’t the Perpendicular Field Stopped? When one EM field is aligned so that it can pass through a polarizing lens the other field (E or B) is 90 degrees out. Is only one of the EM fields affected by a polarizing lens or film? How is it that one field is stopped yet the other seems unaffected?
| Both fields are equally affected by the polarizing rods/wires, it is just more conventional to visualize a small wire as being directly interfering with the E-field that is parallel with it than the B-field that is perpendicular to it.
You would never ask this question if you had been taught to view the B-field not as a vector field but as what it really is: a bi-vector, or an anti-symmetric tensor, which is a bit of a mouthful. In some more advanced texts you would be told that B is not really a vector but a pseudo- or axial-vector, and its properties are similar to torque, which is the vector product of two (true) vectors.
In any case, when viewed as such the bi-vector field is actually parallel with the wire, and its source is the current. The bi-vector field itself is a set of surfaces emanating from/ending in the current-carrying wire. The conventional vector B-field consists of the orthogonal rays to these bi-vector surfaces.
If you do not care about all that, then just recall that an infinitely long wire generates a magnetic "vector" field in circles perpendicular to the wire, and therefore the polarizing rods that are parallel with the E-field will also block the B "vectors" perpendicular to it.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/429005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Fresnel's Equations Range of Validity Do Fresnel's equations ever break down at extremely small length scales?
I am wondering if I can apply Fresnel's equations to a very thin film (~10-20nm) at an interface with air with a free-space wavelength in the micron range. I know Fresnel's equations are derived from Maxwell's equations and material properties, and I'm not aware of cases where these equations aren't valid so long as non-idealities of the materials are taken into account and the particle nature of light isn't important.
| The Fresnel equations are based on the plane-wave solutions of the electromagnetic field. As such, there is nothing that forces them to break down for films that are thinner than the wavelength (and, indeed, they are essential in describing anti-reflection coatings, which are films of about $\frac14$-wavelength thickness).
On the other hand, you do need the medium to be describable as a bulk dielectric, and you need the interface to be sharp. If you have some smooth degrading of the medium's density at the interface, or your dielectric has some funky behaviour like e.g. the polarizability increasing at the boundary because of surface effects, and those things happen on length scales that become comparable with the thickness of the film, then those would certainly impact the applicability of the Fresnel formalism.
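To see this concretely, here is a minimal single-layer thin-film reflectance calculation at normal incidence, built from the Fresnel interface coefficients (a sketch of mine; the indices, wavelength and thicknesses are example values, not from the question):

```python
import numpy as np

def r_interface(ni, nj):
    # Fresnel amplitude reflection coefficient at normal incidence
    return (ni - nj) / (ni + nj)

def reflectance(n1, n2, n3, d, lam):
    # Single film of index n2 and thickness d between media n1 and n3
    beta = 2 * np.pi * n2 * d / lam          # one-way phase across the film
    r12, r23 = r_interface(n1, n2), r_interface(n2, n3)
    r = (r12 + r23 * np.exp(-2j * beta)) / (1 + r12 * r23 * np.exp(-2j * beta))
    return abs(r) ** 2

lam = 1.0e-6                 # 1 micron free-space wavelength
n1, n3 = 1.0, 1.9            # air / substrate (example values)
n2 = np.sqrt(n3)             # ideal anti-reflection index for this substrate
print(reflectance(n1, n2, n3, lam / (4 * n2), lam))  # ~0: quarter-wave AR film
print(reflectance(n1, n2, n3, 15e-9, lam))           # 15 nm film: still well-defined
```

Nothing misbehaves as the film thickness drops to the 10–20 nm range; the formalism only fails once the film can no longer be treated as a bulk dielectric with sharp interfaces, as described above.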
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/429198",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Simple DC motor with one bar magnet https://www.instructables.com/id/The-Simple-DC-Motor/
I'm struggling to get my head around how this motor works.
Let's say for the sake of argument that the field lines are running from the LHS of the bar magnet round to the right and that current flows through the coil in a CW direction.
In the bottom right of the coil this would give us a force out of the page. but as the current travels round to the bottom left side of the coil it now has a vertical upward component and the force would be in to the page.
What am I missing?
| You are right that if the magnet's field lines run from left to right in the illustration, the coil will not turn. The type of magnet shown is magnetized in the vertical direction, so that the field lines exit the top and bottom faces. When a current flows through the coil, the coil will rotate so that the magnetic flux through the circle formed by the coil is maximized.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/429409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Calculating expectation values over an observable I have a question pertaining to a seemingly (to me) arbitrary, yet necessary distinction one must make when calculating the expectation value of, say, momentum $\hat p$:
I forgot to ask her, but my lecturer made it very clear that when computing this, the following is necessary:
$$\langle \hat p \rangle \ = \ m \frac{d}{dt} \ \int_{-\infty}^{\infty}\psi^* \ \hat x[\psi] \ dx$$
However, why can't it be this?
$$\langle \hat p \rangle \ = \ m \frac{d}{dt} \ \int_{-\infty}^{\infty}\psi \ \hat x[\psi^*] \ dx$$
It seems oddly arbitrary that the operator must act on the non-complex conjugate version of the function. Why is this necessary?
Note: A commenter told me this more a case-by-case thing, as position doesn't have such a requirement but momentum does. Could someone care to explain why?
| As long as the operator you're calculating the expectation value of is an observable, it will be Hermitian and (hence) the expectation value must be real.
So you might as well take the complex conjugate of the entire expression; it will not change the result.
If the operator is not Hermitian, it does matter on which function the operator acts.
The default setting is to have the operator acting on $\psi$, not $\psi^*$. Acting on $\psi^*$ instead will give the expectation value of the Hermitian adjoint of the operator, which is not at all surprising, but important to keep track of.
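A quick numerical illustration of this point (my own sketch, not part of the original argument): for a Gaussian wavepacket $\psi = g(x)e^{ik_0x}$ with real envelope $g$, the expectation $\int\psi^*\,\hat p\,\psi\,dx$ comes out real and equal to $k_0$ (units with $\hbar=1$), whereas letting $\hat p$ act on $\psi^*$ instead gives a different number:

```python
import numpy as np

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
k0 = 2.0
psi = np.exp(-x**2 / 2) * np.exp(1j * k0 * x)      # wavepacket with momentum k0
psi /= np.sqrt((np.abs(psi)**2).sum() * dx)        # normalize

def p_op(f):
    # momentum operator -i d/dx (hbar = 1), via central differences
    return -1j * np.gradient(f, dx)

on_psi = (np.conj(psi) * p_op(psi)).sum() * dx     # operator acts on psi
on_conj = (psi * p_op(np.conj(psi))).sum() * dx    # operator acts on psi*

print(on_psi)    # ≈ 2.0 + 0j : the physical <p>, real as required
print(on_conj)   # ≈ -2.0 + 0j : not <p>
```

For the multiplicative position operator $\hat x$ the two orderings coincide, which is why the distinction only matters case by case, as the answer says.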
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/429509",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Showing that the exterior surface charge density of two parallel plates with thickness are equal I have some problems with the next exercise. It states:
Two infinite conducting parallel plates I and II, with thicknesses $t_1$ and $t_2$ respectively, are separated by a distance $L$ between their nearer faces. The surface charge density of plate I is equal to $q_1$ ($q_1$ = the sum of the interior surface density plus the exterior surface density) and the surface charge density of plate II is equal to $q_2$.
a) Show that the surface charge density of the interior faces are equal in magnitude and with opposite signs.
b) Show that the surface charge density of the exterior faces are equal.
I will set some notation first. I will denote by $a$ to the exterior surface charge density of the plate I, by $b$ the interior surface charge density of the plate I, by $c$ the interior surface charge density of the plate II, and by $d$ to the exterior surface charge density of the plate II. This is illustrated in the following image:
Then the conditions are $a+b=q_1$ and $c+d=q_2$. And we have to show that $a=d$ and $b=-c$.
By the symmetry of the problem we can deduce that the electric field must be perpendicular to the surface at every point and that the surface charge density must be uniform in all the conductors. Since the surface charge density is uniform we also know that the electric field must be constant in the regions above the plates, between the plates, and below the plates. Since the plates are conductors we also know that the field inside them must be zero.
With all of that in mind, I can show that $b=-c$, by taking as my gaussian surface a cylinder with its faces in the middle of the conductors (as shown by the green rectangle in the last image). Using Gauss' Law we have that the flux must be equal to zero, but since the flux is proportional to the charge inside we conclude that $b= -c$. So far so good.
But when I try to show that $a=d$ I get stuck. All the gaussian surfaces that I take give me the same result $a+d= q_1 + q_2$ or some variation of that. I was hoping that you could help me solve that problem. Thank you ! (:
| The key is to consider a point inside one of the conductors.
Now, we know the field must be $0$ in the conductor. We also know that the field due to an infinite sheet of charge is given by
$$E=\frac{\sigma}{2\epsilon_0}$$
Each surface forms an infinite sheet of charge. So if you add up the field from each (taking direction into account) in the conductor, and use the information presented above, you should be able to show $a=d$. I will leave the details to you.
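If you want to check the details symbolically, here is a sketch (using sympy; the sign convention takes the faces left to right as $a, b, c, d$, each sheet contributing a field $\sigma/2\epsilon_0$ directed away from itself):

```python
import sympy as sp

a, b, c, d, q1, q2 = sp.symbols('a b c d q1 q2')

# E = 0 inside plate I: sheet a pushes right (+), sheets b, c, d push left (-)
eq_I = sp.Eq(a - b - c - d, 0)
# E = 0 inside plate II: sheets a, b, c push right (+), sheet d pushes left (-)
eq_II = sp.Eq(a + b + c - d, 0)

sol = sp.solve([eq_I, eq_II, sp.Eq(a + b, q1), sp.Eq(c + d, q2)], [a, b, c, d])
print(sol[a], sol[d])   # both (q1 + q2)/2  ->  a = d
print(sol[b], sol[c])   # (q1 - q2)/2 and (q2 - q1)/2  ->  b = -c
```

Setting $E=0$ inside each conductor, together with the two charge constraints, forces $a=d$ and $b=-c$.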
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/429765",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Damped oscillations and generalized friction I'm reading damped oscillations from the book Classical Mechanics by Landau and Lifshitz, quoting from the text -
"There exists, however a class of problems where motion in medium can be
approximately described by including certain additional terms in the
mechanical equations of motion. Such cases include oscillations
with frequencies small compared with those of dissipative
processes in the medium. When this is fulfilled we may regard the body
as being acted on by a force of friction which depends (for a homogeneous
medium) only on its velocity."
Can someone explain me how did he argue the portion in italic?
| The paragraph may possibly be interpreted along these lines:
1) Compare an object being pushed on the surface of a rough table with an object being pushed while immersed in fluid.
Both objects experience an opposing frictional force. For the immersed object, frictional force is dependent on velocity. For the table, frictional force does not depend on velocity. The proof of velocity-dependence in fluids is outside the scope of this answer and can be found in many references.
2) The distinction between these two cases can be attributed to dissipation processes within the medium. The table surface experiences microscopic vibrations which quickly convert to heat. The fluid experiences mass displacement, both longitudinal and transverse, which decays slowly. We can say that for fluids, the typical decay time constant is long.
3) An oscillating object introduces an additional complication. An oscillating object on the table (e.g. a mass connected to a spring) will experience the same frictional force whatever the oscillation frequency is. This is not true for an oscillating object in a fluid.
If the typical time constant of the oscillation is shorter than the decay time constant, then the effect of the fluid on the oscillating object is much more complex and does not exhibit a simple velocity dependence (just think of the fluid masses displaced back and forth).
4) To conclude in the paragraph's terms: only if the oscillation time constant is long (oscillation frequency small) even when compared with the typically long dissipative processes in the medium (long decay constant) will standard fluid physics apply, and friction will be strictly velocity dependent.
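In the velocity-dependent regime, the medium's whole effect reduces to an added $-c\dot x$ term in the equation of motion, $m\ddot x = -kx - c\dot x$, whose amplitude decays as $e^{-ct/2m}$. A quick numerical check (parameters are arbitrary example values):

```python
import numpy as np

m, k, c = 1.0, 1.0, 0.1                  # mass, spring constant, friction coefficient
lam = c / (2 * m)                        # decay rate
w = np.sqrt(k / m - lam**2)              # underdamped oscillation frequency

dt, T = 1e-4, 30.0
x, v = 1.0, 0.0
for _ in range(int(T / dt)):
    v += dt * (-k * x - c * v) / m       # semi-implicit Euler step
    x += dt * v

# analytic underdamped solution with x(0)=1, v(0)=0
x_exact = np.exp(-lam * T) * (np.cos(w * T) + (lam / w) * np.sin(w * T))
print(x, x_exact)                        # agree: amplitude decays as exp(-c t / 2m)
```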
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/429883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Irreducible decomposition of Lorentz tensors with Young tableaux I want to understand the irreducible decomposition of Lorentz tensors by using Young tableaux. Let me start with a trivial example. Suppose we work in $n=4$ dimensions, and that we have a rank 2 homogeneous tensor $T_{ab}$. Doing the Young tableau, we find that $4\otimes 4=10\oplus 6$, where the tensor splits into a symmetric and an antisymmetric part with subspace dimensions 10 and 6, respectively. If the antisymmetric part is real, then it is irreducible. Assume that the tensor is not traceless. The symmetric part can be further reduced to a symmetric traceless part and the trace. How is this further reduction visible when using Young operators? Shouldn't the Young tableau directly give the irreducible parts of this tensor, i.e. $4\otimes 4=1\oplus 6\oplus 9$? I know this is probably not the case, because doing Young tableaux we only consider (anti-)symmetrisations. I ask then, what is the logic behind this further reduction of the symmetric part?
Furthermore, increasing the rank, there exists for example the usual irreducible decomposition of the torsion tensor into an axial, a vectorial and a traceless tensorial part. I do not see how this split can be obtained with Young diagrams. Again, while the physical motivation may be clear, I cannot find a general mathematical prescription for this type of decomposition, nor have I found a formula for tensors of arbitrary rank. I would definitely appreciate some good references, since I have some gaps to fill in this topic.
| Another alternative for irreducible decompositions is by the aid of Lorentz group projectors as explained in https://www.mdpi.com/2218-1997/5/8/184 and sections 3 and 4 there.
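As a quick sanity check of the counting in the question (my own sketch): the Young tableaux give the $GL(4)$ split of $4\otimes4$ into symmetric and antisymmetric parts, and the metric of the (pseudo-)orthogonal group then lets the trace be removed from the symmetric part:

```python
n = 4
sym = n * (n + 1) // 2         # tableau [2]: symmetric rank-2 tensors
antisym = n * (n - 1) // 2     # tableau [1,1]: antisymmetric rank-2 tensors
assert sym + antisym == n * n  # 10 + 6 = 16

# Under SO(1,3) the invariant metric allows one further contraction,
# splitting the symmetric part into traceless-symmetric + trace:
traceless_sym = sym - 1
print(antisym, traceless_sym, 1)   # 6 9 1: the decomposition 16 = 6 + 9 + 1
```

This is why the tableaux alone stop at $10\oplus6$: they only encode the permutation symmetry, while the trace subtraction uses the extra invariant tensor (the metric) of the orthogonal subgroup.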
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/429957",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Simple question about change of coordinates Suppose we have two coordinate systems (Cartesian and spherical)
$$x^{\mu} = (t,x,y,z)$$
$$x'^{\mu'} = (t',r,\theta,\phi)$$
where $r= \sqrt{x^2 + y^2 + z^2} , \theta = \cos^{-1}(z/r), \phi = \tan^{-1} (y/x)$. My question is, in general, what are the components of a vector $A_{\mu} = (A_t,A_x,A_y,A_z)_{\mu}$ in the primed coordinates? From GR, I believe the answer is $A'_{\mu'} = (A_{t'},A_{r},A_{\theta},A_{\phi})_{\mu'} = \frac{\partial x^{\mu}}{\partial x'^{\mu'}} A_{\mu}$, with the inverse matrix used for upper-index vectors.
If this is the case, in particular it should work for position vectors. That is, $x'^{\mu'} = \frac{\partial x'^{\mu'}}{\partial x^{\mu}} x^{\mu}$. However, applying this transformation gives $x'^{\mu'} = (t',r,0,0)$, not $(t',r,\theta,\phi)$. Am I doing something wrong?
Edit: The second paragraph incorrectly applies the formula I've cited, as pointed out by mike stone.
As for the first question, since we have $x'_r = \sqrt{x_1^2 +x_2^2 + x_3^2}, x'_{\theta} =\cos^{-1}(x_3/x'_r)$,$x'_{\phi} = \tan^{-1}(x_2/ x_1)$, does it follow for any vector $A'_{\mu}$ (for instance, the EM gauge field) that $A'_r = \sqrt{A_1^2 + A_2^2 + A_3^2}$, $A'_{\theta} = \cos^{-1}(A_3/ A'_r)$, and $A'_{\phi} = \tan^{-1}(A_2/A_1)$?
| Your transformation matrix:
I will ignore the "t" coordinate
\begin{align*}
&\text{The position vector for a sphere is: } \\
&\vec{R_s}=
\begin{bmatrix}
x \\
y \\
z \\
\end{bmatrix}=
\left[ \begin {array}{c} r\cos \left( \vartheta \right) \cos \left(
\varphi \right) \\ r\cos \left( \vartheta \right)
\sin \left( \varphi \right) \\ r\sin \left(
\vartheta \right) \end {array} \right]&(1)
\\\\
&\text{we can now calculate the transformation matrix $R$:}\\\\
&R=J\,H^{-1}\\
&\text{$J$ is the Jacobi matrix }\quad\,, J=\frac{\partial\vec{R_s}}{\partial\vec{q}}\quad \text{with:}\\
&\vec{q}=\begin{bmatrix}
r \\
\varphi \\
\vartheta \\
\end{bmatrix}\quad, H=\sqrt{G_{ii}}\,,H_{ij}=0\quad\text{and } G=J^{T}\,J\quad\text{the metric.}\\\\
&\Rightarrow\\\\
&R=\left[ \begin {array}{ccc} \cos \left( \varphi \right) \cos \left(
\vartheta \right) &-\sin \left( \varphi \right) &-\cos \left(
\varphi \right) \sin \left( \vartheta \right) \\
\sin \left( \varphi \right) \cos \left( \vartheta \right) &\cos
\left( \varphi \right) &-\sin \left( \varphi \right) \sin \left(
\vartheta \right) \\ \sin \left( \vartheta
\right) &0&\cos \left( \vartheta \right) \end {array} \right]&(2)
\end{align*}
\begin{align*}
&\text{We can solve equation (1) for $r\,,\varphi$ and $\vartheta$}\\\\
&r=\sqrt{x^2+y^2+z^2}\\
&\varphi=\arctan\left(\frac{y}{x}\right)\\
&\vartheta=\arctan\left(\frac{z}{\sqrt{x^2+y^2}}\right)\\\\
&\text{and with equation (2):}\\\\
&R= \left[ \begin {array}{ccc} {\frac {xz}{\sqrt {{y}^{2}+{x}^{2}}r}}&-{
\frac {y}{\sqrt {{y}^{2}+{x}^{2}}}}&-{\frac {x}{r}}
\\ {\frac {yz}{\sqrt {{y}^{2}+{x}^{2}}r}}&{\frac {x}
{\sqrt {{y}^{2}+{x}^{2}}}}&-{\frac {y}{r}}\\ {\frac
{\sqrt {{y}^{2}+{x}^{2}}}{r}}&0&{\frac {z}{r}}\end {array} \right]
&(3)\\\\
&\text{The components of a vector can be transformed either with equation (2) or with equation (3) }
\end{align*}
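The construction above can be reproduced with sympy (a sketch; $\vartheta$ is taken in $(-\pi/2,\pi/2)$, so the scale factors $\sqrt{G_{ii}}$ reduce to $1$, $r\cos\vartheta$, $r$):

```python
import sympy as sp

r, phi, theta = sp.symbols('r varphi vartheta', positive=True)
Rs = sp.Matrix([r * sp.cos(theta) * sp.cos(phi),
                r * sp.cos(theta) * sp.sin(phi),
                r * sp.sin(theta)])               # position vector, eq. (1)
q = sp.Matrix([r, phi, theta])

J = Rs.jacobian(q)                                # Jacobi matrix
G = sp.simplify(J.T * J)                          # metric: diag(1, r^2 cos^2, r^2)
H = sp.diag(1, r * sp.cos(theta), r)              # sqrt of the diagonal of G
R = sp.simplify(J * H.inv())                      # transformation matrix, eq. (2)
print(R)
print(sp.simplify(R.T * R - sp.eye(3)))           # zero matrix: R is orthogonal
```

The printed `R` matches equation (2), and its orthogonality is what lets the same matrix (or its transpose) transform vector components in both directions.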
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/430071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Under what circumstances are general relativistic coordinate transformations physically meaningful? Although the field equations of GR are covariant under arbitrary coordinate transformations, such as the transformation given by Dirac (in Princeton Landmarks pp 34) that eliminate the singularities in the Schwartzschild metric, is it necessarily the case that the new coordinate system, with its singularity free metric, is physically meaningful?
This question comes up because there are transformations that are clearly meaningless. For example, if an observer makes a transformation from his coordinate system to that of his virtual image seen in a mirror, general covariance is preserved, but there is no actual space that can be entered by an observer.
Is there a criterion for deciding when the transformation is strictly virtual in the sense above?
Does the world inside a black hole as described, by say Kruskal coordinates, have the operational meaning of consisting of a world that can be entered and explored by an observer, or is it a strictly virtual construct like the mirror image coordinates?
| The coordinate transformation is never physically meaningful. Certain coordinate systems have some specific meaning.
The coordinates are just sets of arbitrary numbers with some convenient mathematical properties (a smooth and invertible map) but they are otherwise arbitrary. Coordinate transformations are just bookkeeping required to go from one set of such arbitrary numbers to another.
Because the numbering is largely arbitrary, a physicist may arbitrarily choose to assign them based on some physically meaningful quantity, but they are not required to do so. Any physical meaning (or lack thereof) comes from that choice.
Edit: regarding the question about the meaningfulness of the coordinate transforms on the interior of the event horizon in Schwarzschild coordinates. Again, the coordinates are not in themselves physically meaningful; instead physical meaning comes from the invariants. All of the curvature invariants are well behaved and finite at the event horizon, so there is no GR-based reason to think that it is any kind of a barrier. Geodesics crossing the horizon are well behaved and so forth. All invariants are reasonable, so absent unknown quantum effects and direct experimental evidence, the assumption is that the interior is physically meaningful based on invariants, not on coordinates.
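A concrete illustration of that last point (example numbers, geometric units $G=c=1$): in Schwarzschild coordinates the metric component $g_{rr}=(1-2M/r)^{-1}$ blows up at the horizon $r=2M$, a pure coordinate artifact, while the Kretschmann curvature invariant $K=48M^2/r^6$ stays finite and smooth there:

```python
M = 1.0
for r in (2.0001 * M, 3.0 * M, 6.0 * M):
    g_rr = 1.0 / (1.0 - 2.0 * M / r)   # coordinate-dependent: diverges as r -> 2M
    K = 48.0 * M**2 / r**6             # invariant: finite at and inside r = 2M
    print(r, g_rr, K)
```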
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/430180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Lorentz Velocity Transform With Tensor Notation So I'm attempting to prove the Lorentz Velocity tranform:
$${v_x}' =\frac{v_x-u}{1-v_xu/c^2} $$
using tensor notation. In this case obviously $\beta = u/c$ and $\gamma=(1-\beta^2)^{-1/2}$. The velocity transform tensor can be represented as
$$\Lambda = \begin{pmatrix}
\gamma & \beta \gamma & 0 & 0 \\
\beta \gamma & \gamma & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix} $$
assuming that the boosted frame $F'$ is moving in the x-direction relative to frame $F$. I'm also using the following two facts:
$${\partial_v}' = {\Lambda^u}_v \partial_u \hspace{20mm} {x^v}'={(\Lambda^{-1})^v}_u x^u$$
where
$$\partial_u = \left(\frac{1}{c}\frac{\partial}{\partial t},\frac{\partial}{\partial x},\frac{\partial}{\partial y},\frac{\partial}{\partial z} \right)$$.
My proof begins as follows:
$${v_x}' = c{\partial_0}'{x^1}' = c({\Lambda^{u}}_0 \partial_u)({(\Lambda^{-1})^{1}}_{\sigma}x^{\sigma}) $$
I use $\sigma$ to keep the summations separate. Now I expand:
$${v_x}' = c({\Lambda^{0}}_0 \partial_0+{\Lambda^{1}}_0 \partial_1)({(\Lambda^{-1})^{1}}_{0}x^{0}+{(\Lambda^{-1})^{1}}_{1}x^{1}) $$
Subbing in the appropriate elements from the tensor yields:
$${v_x}' = c(\gamma \frac{1}{c}\frac{\partial}{\partial t}+\beta \gamma \frac{\partial}{\partial x})(-\beta \gamma c t+\gamma x) $$
$$=c(-\beta \gamma^2 \frac{\partial t}{\partial t}+\frac{\gamma ^2}{c}\frac{\partial x}{\partial t} - \beta^2\gamma^2c\frac{\partial t}{\partial x}+\beta \gamma^2 \frac{\partial x}{\partial x})$$
at this point I make the (perhaps incorrect) assumption that $\partial t/\partial x=1/v_x$. Canceling out the obvious terms leaves me with
$${v_x}'=\gamma^2v_x -\frac{\beta ^2 \gamma^2 c^2}{v_x} $$
which I know to be incorrect.
| 4-velocity is defined as
$$
u^{\mu}=\frac{dx^\mu}{d{\tau}}
$$
$\tau$ does not change under transformation. It is the proper time.
So in your formula, $$v_{x'}=c\partial_{0'}x^{1'}$$ is not the correct way; the derivative must be taken with respect to the proper time $\tau$, not the coordinate time. Do you see the point?
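Carrying out this point numerically (a sketch with example speeds, $c=1$): build the 4-velocity $u^\mu=\gamma_v(c,v_x,0,0)$, boost it, and form $v_x'=c\,u'^1/u'^0$; this reproduces the velocity-addition formula the question is after:

```python
import numpy as np

c = 1.0
u = 0.6 * c                        # boost speed along x (example value)
vx = 0.5 * c                       # particle speed in frame F (example value)

gamma_v = 1 / np.sqrt(1 - vx**2 / c**2)
U = gamma_v * np.array([c, vx, 0, 0])      # 4-velocity u^mu = dx^mu / dtau

g = 1 / np.sqrt(1 - u**2 / c**2)
L = np.array([[g, -g * u / c, 0, 0],
              [-g * u / c, g, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
Up = L @ U
vx_boosted = c * Up[1] / Up[0]             # v'_x = c u'^1 / u'^0
vx_formula = (vx - u) / (1 - vx * u / c**2)
print(vx_boosted, vx_formula)              # both ≈ -0.142857
```

The key is that $\tau$ is invariant, so the 4-velocity components transform linearly under $\Lambda$ and the ordinary velocity is recovered as a ratio of them.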
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/430305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Why can we see the cosmic microwave background radiation? This radiation (CMBR) is said to have its origin at the surface of last scattering that exposed itself when the big bang universe had expanded for less than a million years.
In order to see radiation from a source, one has to be on its future light cone. In a universe that is flat and open, which our Universe is asserted to be at the large scale, we are not on the future light cone of this radiation, but almost maximally remote from it. One can also say that the surface of last scattering is not on our own past light cone.
How is this visibility to be understood within standard big bang cosmology?
(This question is different from an earlier one with the same wording.)
| One has to keep remembering that in the Big Bang model, the origin $(0,0,0,0)$ is located at all points of the present-day universe. Each of us is sitting at the center of the universe.
As the universe expanded all points expanded away from each other.
Light that decoupled from matter 380,000 years after the Big Bang left each point with velocity c in whatever direction it was headed at decoupling. In our instruments we measure photons from the other parts of the universe that were directed at us and which have undergone the Doppler effect of the expansion. This is not radiation from a surface.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/430581",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
UV Preconditioning test I want to build a solar panel with a new material and I want to test the endurance of the material against UV light. I've found a standard for UV tests on the internet. It says the panel has to be irradiated with 15 kWh/m² from 280 nm to 385 nm. Since I only have an 8 W UV lamp, I decided to scale the test down. Instead of 1 m², I made a small sample of 100 cm². To get the same energy density on this surface, I calculate that it must be at least 18 hours under the 8 W UV light. Am I right? If not, please show me how to do it.
Thanks!
| Does your 8W lamp put out 8W of light in that band of wavelengths? or does it use 8W of electricity? With most lamps, the Wattage rating tells you how much electricity it uses, which can be much greater than the amount of optical power that it puts out. That becomes even more of a concern if you are only interested in that fraction of the optical power that falls within a narrow band of wavelengths.
Also, if your lamp is only putting out 8W of optical power, it's going to be nearly impossible to get 100% of that light to evenly irradiate the target. Depending on the geometry of your optical system, it's likely that some, or a lot, of the total power emitted by the lamp will be absorbed by other surfaces in your test fixture.
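For the scaling arithmetic itself, here is the computation under the (optimistic) assumption that the full 8 W is optical power in the 280–385 nm band and all of it lands uniformly on the sample:

```python
dose_kwh_per_m2 = 15.0     # required dose from the standard
area_m2 = 100e-4           # 100 cm^2 sample
optical_power_w = 8.0      # optimistic: treats the full lamp rating as UV output

energy_wh = dose_kwh_per_m2 * 1000 * area_m2   # 150 Wh must land on the sample
hours = energy_wh / optical_power_w
print(hours)   # 18.75
```

So 18 hours is already slightly short even in this best case; with a realistic optical efficiency the required time grows in proportion.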
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/430902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Non-acceleration and 0 force If a mass is moving at a rate of 30 ft per minute for 5 minutes in a straight line and it strikes a second stationary mass and effects a change of position of this second mass, then we know from F=ma that the force is 0, since the acceleration is 0. Then why do we say, because it is 0, that there is no force, when in fact the first mass changed the position of the second?
| When the collision occurs, a force acts on each mass. This force persists as long as they are in contact, and its magnitude is given by
$$|F| = \frac{|m_1v_1 - m_1u_1|}{t} = \frac{|m_2v_2 - m_2u_2|}{t} = \frac{\text{impulse}}{t}$$
Here $v_1, v_2$ are the final velocities, $u_1, u_2$ are the initial velocities, and $t$ is the time during which the force acts, i.e. the time of contact.
We can see that the force is not 0: it acts during the collision for a very short period of time. Here $F$ is the average force acting over the time $t$. This force is equal and opposite for the two particles, and it is what causes the change in momentum.
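For concreteness, a toy evaluation of the average contact force (all numbers are made-up example values, not from the question):

```python
m1, u1, v1 = 2.0, 0.5, 0.2     # kg and m/s: example values only
t_contact = 0.01               # s: assumed duration of contact
F_avg = m1 * (v1 - u1) / t_contact
print(F_avg)   # ≈ -60 N: nonzero while the masses touch, zero before and after
```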
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/431032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Once a black hole is formed, is there anything other than Hawking radiation which shortens its life? Hawking radiation is supposed to very slowly evaporate a black hole (terms and conditions apply :] ).
Apart from Hawking radiation, is there any mechanism or effect that can make a black hole cease to exist? Or once they are formed are they expected to exist in this form "forever"?
| Edit: It seems that the answer is no, as far as we can tell presently.
For relevance, I keep my previous answer, which raised some controversy, below.
--
After searching a little and asking around, I've also come across the Penrose process, and in the comment above Count Iblis mentioned Phantom energy accretion.
So, all in all, it seems that according to our current knowledge there are only these, rather exotic and slow, processes that could shorten the life of a black hole.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/431175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 3
} |
"Iron Core" in Inductive Charigng Inductive charging used for wireless charging often faces the hindrance of being too short ranged for many cases. There appear to be some workarounds such as using a capacitor to resonate them at the same resonant frequency. Please excuse the naiviety of the question, but when looking back at the humble solenoid, a simple iron core can drastically boost it's magnetic field strength. So, why not just stick an iron core into the middle of the inductive charging coils?
| At the high frequencies implied by your question the eddy current heating of the iron core and the hysteresis loss due to the rapid oscillation of the magnetic field in the iron core will result in the Q-value of the circuit being very small.
In other words, the energy losses of an iron-cored inductor would be too high.
As long as the frequencies are not too high, ferrite cores are used: they have a low electrical conductivity (so eddy current losses are small), low hysteresis loss, and a high magnetic permeability.
At higher frequencies no core is required and the losses due to air are very small.
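The scale of the problem can be seen from the skin depth $\delta=\sqrt{2\rho/(\omega\mu)}$; the material values and the operating frequency below are typical assumed figures, not taken from the answer:

```python
import math

f = 140e3                     # typical inductive-charging frequency (assumed)
mu0 = 4e-7 * math.pi

def skin_depth(rho, mu_r):
    # delta = sqrt(2 rho / (omega mu)), with omega = 2 pi f and mu = mu0 mu_r
    return math.sqrt(2 * rho / (2 * math.pi * f * mu0 * mu_r))

print(skin_depth(1e-7, 5000))   # solid iron: ~6 um -> field expelled, big eddy loss
print(skin_depth(1.0, 2000))    # MnZn ferrite: ~3 cm -> eddy currents negligible
```

The field only penetrates microns into solid iron at these frequencies, so the core would mostly heat up instead of carrying flux, which is exactly the answer's point.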
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/431282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Why doesn't using more appliances at home decrease the electricity bill? I know that at home the electric circuits are parallel, and this explains why if one appliance (e.g. a bulb) fails, everything else continues to work. But if more devices are added in parallel to each other, their combined resistance should decrease, and thus the total power drawn by them should increase, as $\mathrm{Power} = V^2/ R$. It doesn't look like that's the case. What am I getting wrong?
| To minimize your power consumption and save on electricity bills, you actually want to maximize your home's equivalent resistance, for the reason you mention. This is done when every appliance is turned off, because each turned-off appliance has (approximately) infinite resistance (up to some current leakage). Adding a new turned-off appliance doesn't really create a new current path or decrease the equivalent resistance. Every time you turn on an appliance, you are closing a new circuit and decreasing your home's equivalent resistance, therefore increasing your power consumption.
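Numerically (example appliances on an assumed 230 V supply): conductances add in parallel, so every switched-on appliance lowers $R_\text{eq}$ and raises the total power $P=V^2/R_\text{eq}$:

```python
V = 230.0
R_heater = V**2 / 1000.0      # resistance of a ~1 kW heater
R_lamp = V**2 / 500.0         # resistance of a ~500 W lamp

def total_power(resistances):
    if not resistances:
        return 0.0            # everything off: R_eq -> infinity, P -> 0
    G = sum(1.0 / R for R in resistances)   # parallel conductances add
    return V**2 * G

print(total_power([R_heater]))           # ≈ 1000 W
print(total_power([R_heater, R_lamp]))   # ≈ 1500 W: lower R_eq, bigger bill
```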
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/431482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What makes the disturbance in Electromagnetic waves move? I get that a changing electric field will have a curly changing magnetic field and a changing magnetic field will have a curly changing electric field. So when we move a charge up and down, the electric field will change, which will produce this changing magnetic field, and this changing magnetic field will in turn produce a new changing electric field. I get it till here. Where I am stuck is: why should this disturbance move? The charge is not travelling; it is simply oscillating. Shouldn't the changing electric and magnetic fields be localized? I know I am missing something very basic here but unfortunately I am not able to put my finger on it.
| I think that it is a natural property of the universe, just like how a disturbance of the water at one point in a lake spreads out to the whole lake.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/431672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
What if... you had a bowl of electrons? My chemistry teacher used to tell us that if you had a soup bowl with only electrons in it, the explosion could make you fly to Pluto. Was he right? Could this happen?
| A uniform sphere of radius $r$ and total charge $Q$ has an electric potential energy of $3Q^2/(20\pi \epsilon_0 r)$.
Let's say your bowl is like a sphere of radius 5 cm. If it's all water ( molecular weight 18$m_u$) and $\mu= 9/5$ of a mass unit for every electron, then the number of electrons is
$ N_e = 1000 \cdot \frac{4\pi r^3}{3}\big/(\mu m_u) \approx 1.8\times 10^{26}$
If all the nuclei were removed, the electric potential energy would be about $8\times 10^{25}$ Joules.
Pluto is at about 40 au from the Sun, effectively to get there, you have to escape from Earth and escape from the Sun. To do this you would need to launch from the Earth with a carefully directed speed of at least 16 km/s. If your mass is 100 kg, that takes a mere $1.3\times 10^{10}$ J.
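Plugging the numbers in (an order-of-magnitude check of the estimate above):

```python
import math

eps0, e, m_u = 8.854e-12, 1.602e-19, 1.661e-27
r = 0.05                                       # bowl modelled as a 5 cm sphere
mass = 1000.0 * (4.0 / 3.0) * math.pi * r**3   # water, ~0.52 kg
N_e = mass / (1.8 * m_u)                       # one electron per 9/5 mass units
Q = N_e * e
U = 3 * Q**2 / (20 * math.pi * eps0 * r)       # self-energy of a uniform sphere

print(N_e)            # ~1.8e26 electrons
print(U)              # ~8e25 J
print(U / 1.3e10)     # many orders of magnitude above the trip-to-Pluto energy
```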
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/431751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Instantaneous velocity So here’s a question I’ve been thinking of for a while. Suppose we say, “an object is having an instantaneous velocity along a particular direction ( say 10 m/s along the $x$-direction)” . Is it fair to conclude that it is traveling in a straight line along the $x$-axis? Well my opinion on this is,
For instance, for a projectile (on earth), the instantaneous velocity (which is constant throughout the journey) is always in the $x$-direction while the body is executing a parabola in the $x$-$y$ plane? Please acknowledge me if I'm wrong.
| Well, a projectile (an oblique projectile) moves along a parabolic path and its velocity is not always along the x axis. That happens only when it reaches its maximum height. At any time in the projectile motion its velocity is given by $\vec v = \vec v_x + \vec v_y$, where $\vec v_x$ and $\vec v_y$ are the velocity components along the x axis and y axis respectively. $\vec v_y$ changes with time because of the gravitational force, but $\vec v_x$ remains constant because no force (in vacuum) acts along the x axis.
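A small numerical illustration (example launch values of my choosing): the horizontal component stays fixed while the vertical one changes, so the total velocity points along the x axis only at the peak:

```python
g = 9.8
vx0, vy0 = 10.0, 20.0              # example launch velocity components (m/s)
t_peak = vy0 / g                   # time at maximum height
for t in (0.0, 1.0, t_peak, 3.0):
    vx, vy = vx0, vy0 - g * t      # no horizontal force; gravity acts vertically
    print(t, vx, vy)               # vy = 0 only at t = t_peak
```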
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/431908",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
} |
Covariance matrix after projection on Gaussian state I cannot understand the proof in the Eisert article (https://arxiv.org/abs/quant-ph/0204052) about finding the covariance matrix of a state after projecting some of its modes onto a Gaussian state.
We have the characteristic function of the remaining (unprojected) modes being
\begin{equation}
\chi(\xi_A)=\frac{1}{\pi^2}\int d\xi_5\dots d\xi_8 e^{-\xi^T\Gamma'\xi/2}e^{-\xi_B^T \gamma\xi_B}
\end{equation}
where $\xi=(\xi_A, \xi_B)$, the labels $A$ and $B$ referring respectively to the subsystem that ''remains'' and to the subsystem that is projected to a gaussian state with covariance matrix $\gamma=\mathrm{diag}(1/d,d,1/d,d )$ and where
\begin{equation}
\Gamma'=\begin{pmatrix}C_1 & C_3\\C_3^T &C_2 \end{pmatrix}
\end{equation}
He then states that the resulting covariance matrix, after carrying out the integration, is
\begin{equation}
M_d=C_1-C_3(C_2+\gamma^2)^{-1}C_3^T
\end{equation}
but I have no idea of how he manages to carry out the integration, since the term $\xi\Gamma'\xi$ has mixed terms in it reading $\xi_B^T C^T \xi_A$ and $\xi_A^T C \xi_B$ and I don't know how to treat them.
Any hint?
| As suggested by @flippiefanus, it’s a case of completing the square. (Or, alternatively, it is a well known formula for Gaussian integration, but this sounds a bit like cheating)
If I’m not mistaken, you made a typo in your first equation, which is equation (8) in the Eisert, Scheel, Plenio article. You should replace $γ$ by $\frac{γ^2}2$. We have then
\begin{align}
χ(ξ_A)=\frac1{π^2}∫dξ_B\exp(-\tfrac12ξ_A^TC_1ξ_A
-\tfrac12ξ_B^TC_3^Tξ_A-\tfrac12ξ_A^TC_3ξ_B-\tfrac12ξ_B^T(C_2+γ^2)ξ_B)\\
=\frac{\exp(-\tfrac12ξ_A^TC_1ξ_A )}{π^2}∫dξ_B\exp(
-\tfrac12ξ_B^TC_3^Tξ_A-\tfrac12ξ_A^TC_3ξ_B-\tfrac12ξ_B^T(C_2+γ^2)ξ_B).
\end{align}
The idea is to express the argument of the integrand as $-\tfrac12(ξ_B+Δ)^TM(ξ_B+Δ)+\tfrac12Δ^TMΔ$, the “$+Δ$” translation of $\xi_B$ being irrelevant in the integration. This expression becomes
$$-\tfrac12ξ_B^T M Δ - \tfrac12 Δ^T M ξ_B - \tfrac12 ξ_B^T M ξ_B $$
which, when identified with the argument of the exponential, leads to
\begin{align}
M& =C_2+γ^2 & M\Delta&=C_3^Tξ_A & Δ^T&=ξ_A^TC_3M^{-1}\\
&& && &=ξ_A^TC_3(C_2+γ^2)^{-1}
\end{align}
We have then
\begin{align}
χ(ξ_A)&=\frac{\exp(-\tfrac12ξ_A^TC_1ξ_A )}{π^2}
∫dξ_B\exp( -\tfrac12(ξ_B+Δ)^TM(ξ_B+Δ)+\tfrac12Δ^TMΔ)\\
&=\frac{\exp(-\tfrac12ξ_A^TC_1ξ_A +\tfrac12Δ^TMΔ)}{π^2}
∫dξ_B\exp( -\tfrac12(ξ_B+Δ)^TM(ξ_B+Δ))\\
&=\frac{\exp(-\tfrac12ξ_A^TC_1ξ_A +\tfrac12Δ^TMΔ)}{π^2}
\underbrace{∫dξ_B\exp( -\tfrac12ξ_B^TMξ_B)}_{\text{scalar independent of $ξ_A$}}\\
&∝\exp(-\tfrac12ξ_A^TC_1ξ_A +\tfrac12ξ_A^TC_3(C_2+γ^2)^{-1}C_3^Tξ_A)\\
&=\exp(-\tfrac12ξ_A^T
\underbrace{\left(C_1-C_3(C_2+γ^2)^{-1}C_3^T\right)}_{M_d}
ξ_A)
\end{align}
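The completing-the-square step can be checked numerically in the simplest case, with one mode per block so that $C_1$, $C_2$, $C_3$ and $γ^2$ become scalars (all values below are arbitrary stand-ins): the integral over $ξ_B$ should reproduce $\exp(-\tfrac12 M_d\, ξ_A^2)$ with $M_d = C_1 - C_3(C_2+γ^2)^{-1}C_3^T$.

```python
import numpy as np

# Scalar stand-ins for the blocks of Gamma' and gamma^2 (illustrative values).
C1, C2, C3, gamma2 = 2.0, 3.0, 0.7, 1.5
Md = C1 - C3 / (C2 + gamma2) * C3            # expected block of the output covariance

b = np.linspace(-15.0, 15.0, 60_001)         # grid for the mode to be integrated out
db = b[1] - b[0]

def chi(a):
    """Integrate out the projected mode b at fixed remaining-mode variable a."""
    integrand = np.exp(-0.5 * (C1 * a**2 + 2.0 * C3 * a * b + (C2 + gamma2) * b**2))
    return integrand.sum() * db

# chi(a)/chi(0) should equal exp(-0.5 * Md * a**2) for every a.
ratios = [chi(a) / chi(0.0) / np.exp(-0.5 * Md * a**2) for a in (0.5, 1.0, 1.7)]
```

The ratios come out equal to 1 to numerical precision, confirming the Schur-complement form of $M_d$.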
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/432146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to measure angles in Minkowsky space, and how do they transform? I want to know how an ordinary angle $\theta$ transforms under a Lorentz boost. For that purpose I consider a 4-vector given by
$$ a ^ \mu = ( t , \cos \theta , \sin \theta , 0 ) .$$
The angle I will analyze is the one between this 4-vector and the $x$ axis, $\theta$. I consider a boost
$$ \Lambda ^\mu _{\ \ \ \nu} = \left( \begin{matrix}\gamma & -\beta\gamma & 0 & 0\\
-\beta\gamma & \gamma & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{matrix} \right) .$$
Thus the transformed 4-vector is $$ a ^{\prime \mu} = ( \gamma (t - \beta \cos \theta), \gamma (\cos \theta - t \beta), \sin \theta, 0 ) $$
Now the angle subtended between $a^\prime$ and the $x^\prime$ axis should be the transformed $\theta^\prime$, i.e.
$$ \theta^\prime = \tan^{-1} \left( \frac{\sin \theta}{\gamma (\cos \theta - t \beta)} \right) .$$
Now I have some doubts:
*
*Which should be the correct value of $t$, the time component of the 4-vector $a$ used to define the angle $\theta$? I would say $t=0$ in order to have the angle $\theta$ defined by a space-like vector, the same idea as when you define the proper distance between two points. In this case I would think of a sort of proper angle.
*In the case $t=0$ I find that $ \theta^\prime \leq \theta $... This is not consistent with the length contraction along the $x$ axis... In fact, the $x$ component of $a^\prime$ is greater than that of $a$...
I was expecting to obtain something like this:
but I didn't although I have transformed the vector using the transformation law... What is wrong?
| Length measurement in the moving frame should be done at simultaneous times in the moving frame, so you need to do a Lorentz transformation fixing $t'= 0$.
This gives
$$ t = \beta \cos \theta $$
So
$$ a'^\mu = ( 0 , \frac {\cos \theta} \gamma , \sin \theta , 0 ) $$
So
$$ \theta' = \tan^{-1} ( \gamma \tan \theta ) $$
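A quick numerical cross-check of this result (a sketch; $\beta$ and $\theta$ are arbitrary): apply the boost matrix to the 4-vector with $t=\beta\cos\theta$ and compare the resulting angle with $\tan^{-1}(\gamma\tan\theta)$.

```python
import numpy as np

beta, theta = 0.6, np.radians(35.0)       # illustrative values
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# Choose t = beta*cos(theta) so that the transformed vector has t' = 0.
t = beta * np.cos(theta)
a = np.array([t, np.cos(theta), np.sin(theta), 0.0])

L = np.array([[ gamma,        -beta * gamma, 0.0, 0.0],
              [-beta * gamma,  gamma,        0.0, 0.0],
              [ 0.0,           0.0,          1.0, 0.0],
              [ 0.0,           0.0,          0.0, 1.0]])
a_prime = L @ a

theta_prime = np.arctan2(a_prime[2], a_prime[1])
predicted = np.arctan(gamma * np.tan(theta))
```

The boosted vector indeed has $t'=0$, and the angle grows ($\theta' > \theta$), consistent with length contraction along $x$.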
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/432427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Confusion with regards to uncertainty calculations Let's say we have a scenario of a ball being released from the top of a building. This can be modeled simply with the kinematics equation $S=ut +\frac{1}{2}at^2$, which reduces to $S=\frac{1}{2}at^2$. We are given $\Delta t, t, \Delta S, S$, and we are to find $a, \Delta a$.
Firstly, I have no problems calculating the absolute portion of the uncertainty.
Here is my problem: Differentiating $S=\frac{1}{2}at^2$ gives me $\frac{\Delta S}{S}=\frac{\Delta a}{a}+2\frac{\Delta t}{t}$. However, substituting these values gives me a wrong value of $\Delta a$.
The correct approach should have been to rearrange the equation to $a=\frac{2S}{t^2}$, and then solve $\frac{\Delta a}{a}=\frac{\Delta S}{S}+2\frac{\Delta t}{t}$. As can be seen, there appears a contradiction.
Further substitution of $S=82m,\Delta S=1m,t=4.1s, \Delta t=0.2s$ to solve for $a, \Delta a$ using the second equation and then putting this value back into the first gives me a contradiction.
I would like to know which one is correct and which should be used because both seem correct to me.
I have discovered that the addition/subtraction of uncertainties is as follows. Let’s say $(A\pm\Delta A)+(B\pm\Delta B)=(C\pm\Delta C)$.
Then $C_{max}=(A+\Delta A)+(B+\Delta B), C_{min}=(A-\Delta A)+(B-\Delta B)$. Referring back to the definition of uncertainty, $C+\Delta C$ is the average of the minimum and maximum of $C$, thus giving us $C=A+B$ and $\Delta C=\Delta A+\Delta B$.
Using this principle, I am however confused by what I get. $C_{max}=(A+\Delta A)(B+\Delta B), C_{min}=(A-\Delta A)(B-\Delta B)$. Expanding, I got $C=AB +\Delta A\Delta B$, which was contradictory to what I have learnt. I got $\Delta C=A\Delta B + B\Delta A$, which was correct though... This raises a new problem, as I am now unsure as to why the rule applies to multiplication.
| Aaron Stevens' answer above has already answered the question but I wanted to add something extra that might help others who see this question.
To calculate uncertainty for a multiplication or division, we add the fractional uncertainties in quadrature.
So, considering $S=\frac{1}{2}at^2$, we have
\begin{align}
\frac{\Delta S}{S} = \sqrt{ \left(\frac{\Delta a}{a}\right)^2 + \left(2\,\frac{\Delta t}{t}\right)^2}
\end{align}
whereas when considering $a=2\frac{S}{t^2}$ we have
\begin{align}
\frac{\Delta a}{a} = \sqrt{ \left(\frac{\Delta S}{S}\right)^2 + \left(2\,\frac{\Delta t}{t}\right)^2}
\end{align}
In both expressions, the multiplicative factor of $2$ comes from the power of $t$. The multiplicative factor in the expressions for $S$ or $a$ has no bearing on the calculation of fractional uncertainty.
Why, if we put $\frac{\Delta S}{S}$ equal in both formulae, do we find a contradiction? As Aaron Stevens explains in his answer, this has to do with which variables were measured. The $\Delta a$ and $\Delta t$ describe uncorrelated, random uncertainty. That is, it's normally distributed. Therefore, the error in the resultant quantity $\frac{\Delta S}{S}$ will have a different value than if you measured $\Delta S$ and $\Delta t$ and tried to determine $\frac{\Delta a}{a}$.
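The quadrature rule for uncorrelated errors can be checked by Monte Carlo, using the numbers from the question ($S = 82\pm1\,$m, $t = 4.1\pm0.2\,$s) and treating both as independent normal variables (a sketch; sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
S = rng.normal(82.0, 1.0, N)       # displacement samples, metres
t = rng.normal(4.1, 0.2, N)        # time samples, seconds

a = 2.0 * S / t**2                 # acceleration for each sampled pair
mc_frac = a.std() / a.mean()       # Monte Carlo fractional uncertainty

# First-order propagation: the power of t enters inside the square.
formula_frac = np.sqrt((1.0 / 82.0) ** 2 + (2.0 * 0.2 / 4.1) ** 2)
```

The sampled spread of $a$ agrees with the quadrature formula to within the first-order approximation.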
Aside:
You mention that you differentiated and saw a contradiction.
Starting with
\begin{align}
S = \frac{1}{2}at^2
\end{align}
and considering the differential $dS$ gives
\begin{align}
dS = \frac{1}{2}t^2da + atdt
\end{align}
which, by dividing through by $S$, can be expressed as
\begin{align}
\frac{dS}{S} = \frac{da}{a} + 2\frac{dt}{t}
\end{align}
Now, rearranging $S$ to make $a$ the subject so that
\begin{align}
a = 2\frac{S}{t^2}
\end{align}
and performing a similar calculation gives
\begin{align}
da = 2\frac{dS}{t^2} -4S\frac{dt}{t^3}
\end{align}
which can be expressed as
\begin{align}
\frac{da}{a} = \frac{dS}{S} - 2\frac{dt}{t}
\end{align}
This differs from your assertion by a minus sign in the second term. So, just looking at differentials, there seems to be no contradiction. However, uncertainties are not calculated by differentiating.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/432831",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
How is momentum conserved in this example? Suppose a sticky substance is thrown at wall. The initial momentum of the wall and substance system is only due to velocity of the substance but the final momentum is 0. Why is momentum not conserved?
| The wall will move a little bit as well as exert a small force on whatever it's attached to, etc., etc., until you get to applying a force to the Earth. Everything else is so massive that we can't see this happening. You are assuming an immovable wall, which is not physically the case.
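A back-of-the-envelope sketch with made-up numbers (a 0.1 kg ball at 10 m/s, wall rigidly attached to the Earth) shows why the recoil is invisible:

```python
m_ball, v_ball = 0.1, 10.0        # kg, m/s  (illustrative)
M_earth = 5.97e24                 # kg

# Momentum conservation: the ball+Earth system keeps its total momentum,
# so the Earth (plus wall) recoils at a tiny speed.
v_recoil = m_ball * v_ball / (M_earth + m_ball)
```

The recoil speed is of order $10^{-25}$ m/s: momentum is conserved, just invisibly so.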
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/433634",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
Why do we use the RMS but not the fourth root mean quad? Why do we use the power of $2$? What is the relation between this and having the same heat energy in both AC and DC?
| The root mean square can be derived from something more general.
First let's look at the space of $T$-periodic, real functions. Its inner product is
$$\langle x,y\rangle = \int_0^T x(t)y(t) \mathrm{d}t.$$
Induced from this inner product, we can define a norm on this space:
$$\lvert\lvert x \rvert\rvert_2 = \sqrt{\langle x, x \rangle} = \sqrt{\int_0^T x(t)^2 \mathrm{d}t}.$$
We already see some similarities between the definition of the norm and the definition of the RMS.
Now if our function $x$ has the unit $\lbrack V \rbrack$, the inner product will have the unit $ \lbrack V^2s \rbrack$ and the norm $\lbrack Vs^\frac12 \rbrack$. As we want the RMS to express a quantity per time division, we have to divide the inner product by the period $T$, not only to match the desired unit $\lbrack V \rbrack$ again but also to relate it to a moment in time rather than a timespan (e.g. the period).
So our RMS is really just $\frac{\lvert\lvert x \rvert\rvert_2}{\sqrt{T}}$.
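For a sine wave of amplitude $A$, this $\frac{\lvert\lvert x \rvert\rvert_2}{\sqrt{T}}$ recipe gives the familiar $A/\sqrt{2}$ (a numerical sketch with an arbitrary amplitude):

```python
import numpy as np

A, T = 3.0, 2.0 * np.pi                    # amplitude and period (illustrative)
t = np.linspace(0.0, T, 200_001)
x = A * np.sin(t)                          # one period of the signal
dt = t[1] - t[0]

# ||x||_2 / sqrt(T): square, integrate over one period, divide by T, take the root.
rms = np.sqrt(np.sum(x**2) * dt / T)
```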
To read more about why the 2-norm is used in a lot of places, I invite you to have a look into 1 and 2.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/433726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
How does the Lorenz gauge eliminate the scalar component of the vector field? Wikipedia states that by using the Lorenz gauge, $\partial_\mu A^\mu=0$, we eliminate the scalar part (spin-0) of the vector potential that previously had spin-1 and spin-0 components${}^1$.
However, this excellent Phys.SE answer by @AccidentalFourierTransform states that for a gauge fixing term in the Lagrangian $\delta\mathcal L = -\frac{1}{2\xi}(\partial_\mu A^\mu)^2$, the propagator for $A^\mu$ is given by${}^2$
$$
\tilde\Delta_{\mu\nu}(p)=\underbrace{\frac{\eta_{\mu\nu}-\frac{p_\mu p_\nu}{\color{red}{m^2}}}{p^2-m^2+\text i\epsilon}}_{j=1} + \underbrace{\frac{\frac{p_\mu p_\nu}{m^2}}{p^2-\xi \,m^2+\text i\epsilon}}_{j=0}\tag{1}
$$
where we can clearly identify the spin-1 and spin-0 parts.
The Lorenz gauge corresponds to setting $\xi=0$, which yields the following propagator:
$$
\tilde\Delta_{\mu\nu}(p)\stackrel{\xi=0}=\dfrac{\eta_{\mu\nu}-\frac{p_\mu p_\nu}{\color{red}{p^2}}}{p^2-m^2+\text i\epsilon}
$$
Due to the statement on Wikipedia I would have assumed that $\xi=0$, i.e. choosing the Lorenz gauge, just eliminates the second (scalar) term in (1). Apparently, it is not that simple. Where lies my misunderstanding?
${}^1$ Because $A^\mu$ belongs to the Lorentz representation $\big(\tfrac{1}{2},\tfrac{1}{2}\big)$.
${}^2$ The textbook source for this is Itzykson & Zuber's QFT book.
| The easiest way to see it is to write it in momentum space
$$p_\mu A^\mu = 0$$
In the center of mass frame, where the spin components are well defined, you have
$$\bar p_\mu=(M,0,0,0) \implies \bar p_\mu A^\mu =M A^0 = 0 \implies A^0 = 0$$
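The $\xi=0$ limit quoted in the question can also be checked as a pure algebraic identity: the spin-1 and spin-0 pieces of (1) recombine into the single expression with $p_\mu p_\nu/p^2$. A numeric sketch with an arbitrary off-shell $p$ and $m$ (kept away from the poles):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])            # mostly-minus metric
p = np.array([1.3, 0.4, -0.2, 0.7])               # arbitrary off-shell momentum
m = 0.9
p2 = p @ eta @ p                                  # p^2, away from 0 and m^2
pp = np.outer(p, p)

# Eq. (1) at xi = 0: the spin-0 pole moves to p^2, i.e. xi*m^2 -> 0.
spin1 = (eta - pp / m**2) / (p2 - m**2)
spin0 = (pp / m**2) / p2
combined = (eta - pp / p2) / (p2 - m**2)
```

The sum of the two pieces matches the single Lorenz-gauge expression component by component, so taking $\xi=0$ does not simply delete the scalar term; it reshuffles it.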
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/434096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Conceptual question in thermodynamics about isothermal processes If a process is isothermal then ΔU is zero, so ΔQ is nonzero. But isn't ΔQ=nCΔT, implying that ΔQ is zero? Where am I wrong? (Considering an ideal gas.)
| In freshman physics, we learned that $Q = nC\Delta T$, but in thermodynamics we change the definition a little to reflect our understanding that, while $Q$ depends on the process path, $C$ is supposed to be a physical property of the material experiencing the temperature change, independent of process path. So we redefine $C$, not in terms of the heat exchanged, but rather in terms of changes in the physical properties internal energy $U$ and enthalpy $H$:
$$C_V=\left(\frac{\partial U}{\partial T}\right)_V\tag{1}$$
$$C_P=\left(\frac{\partial H}{\partial T}\right)_P\tag{2}$$
Eqn. 1 coincides with the old definition in cases where no work is done so that $\Delta V=0$. Eqn. 2 coincides with the old definition in cases where the pressure is constant. Otherwise, they don't; however, they are much more general and widely applicable than the old definition.
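For an ideal gas the distinction plays out concretely in an isothermal expansion: $\Delta U = 0$ yet $Q = W = nRT\ln(V_f/V_i) \neq 0$. A numerical sketch (ideal-gas law assumed, values arbitrary):

```python
import numpy as np

n, R, T = 1.0, 8.314, 300.0            # mol, J/(mol K), K  (illustrative)
Vi, Vf = 1.0e-3, 2.0e-3                # m^3: double the volume at constant T

# W = integral of P dV with P = nRT/V (ideal gas), done numerically.
V = np.linspace(Vi, Vf, 200_001)
P = n * R * T / V
W = np.sum(0.5 * (P[1:] + P[:-1]) * np.diff(V))   # trapezoid rule by hand

dU = 0.0                               # isothermal ideal gas: U depends on T only
Q = dU + W                             # first law: heat absorbed is nonzero
W_exact = n * R * T * np.log(Vf / Vi)
```

So heat flows in ($Q > 0$) even though the temperature, and hence $U$, never changes.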
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/434471",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Why position and momentum operators are both continuous spectrum while angular momentum is discrete? We know that position $\hat{r}$ and momentum $\hat{p}$ are both continuous spectrum operators, i.e.
$$\hat{r}|r'\rangle=r'|r'\rangle, \quad \hat{p}|p'\rangle=p'|p'\rangle.$$
But the angular operator $\hat{L}=\hat{r}\times\hat{p}$ is not:
$$\hat{L}^2|l\rangle=\hbar^2 l(l+1)|l\rangle.$$
Could anyone give some explanation?
| In general, for a bound state the energy spectrum is discrete; free particles, on the other hand, have a continuous energy spectrum. The same is true for rotational motion, since it is also bounded: the configuration returns to itself after a rotation by $2\pi$, so its eigenvalues are likewise discrete.
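The boundedness argument can be made concrete for a particle on a ring: discretizing $L_z = -i\, d/d\phi$ (with $\hbar = 1$) under periodic boundary conditions produces eigenvalues at approximately integer values (a numerical sketch; grid size arbitrary):

```python
import numpy as np

N = 400                                   # grid points around the ring
dphi = 2.0 * np.pi / N

# Central-difference derivative with periodic (ring) boundary conditions.
D = np.zeros((N, N))
for i in range(N):
    D[i, (i + 1) % N] = 1.0 / (2.0 * dphi)
    D[i, (i - 1) % N] = -1.0 / (2.0 * dphi)

Lz = -1j * D                              # hbar = 1
eigs = np.sort(np.linalg.eigvals(Lz).real)

# Periodicity forces a discrete ladder: eigenvalues sit near 0, +-1, +-2, ...
gaps = [min(abs(eigs - m)) for m in (-2, -1, 0, 1, 2)]
```

The discreteness here comes entirely from the periodic boundary condition, i.e. from the boundedness of the motion.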
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/434613",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
How to relate random matrix theory with Quantum mechanics approach In quantum mechanics we first build a Hamiltonian and then find its eigenvalues and eigenvectors. How can I relate this to random matrix theory?
| Random matrices were suggested as Hamiltonians in very complex systems from nuclear physics. Here, you had a lot of different energy levels appearing in your spectra, but unlike with, say, the hydrogen atom, it was no longer clear how to label them in a systematic way. So rather than trying to derive or propose an explicit (and presumably practically unsolvable) model, Wigner proposed to use random matrices which reproduce the correct eigenvalue statistics.
More precisely, Wigner suggested that the entries of the random matrix are i. i. d. Gaußian variables under the constraint that the matrix be hermitian, and to normalize the matrices so that the eigenvalues fall into a pre-determined window. You call the set of such matrices the Gaußian orthogonal ensemble (GOE). Due to the normalization, you can study e. g. the eigenvalue distribution and level spacing distribution as the size of these matrices becomes large. For GOE you get the “semicircle law” where the eigenvalue density approaches the function $P(x) = \frac{2}{\pi} \, \sqrt{1 - x^2}$.
This has been refined to include e. g. symmetries such as even and odd time-reversal symmetries, which leads to the Wigner-Dyson classes. (These are also relevant for the study of topological insulators.) Of course, the presence of these symmetries has an effect on the eigenvalue statistics. Or you could look at “band matrices” where only entries close to the diagonal can be non-zero.
In any case, it is important to keep in mind that within the framework of random matrix theory you do not study individual matrices, but the statistics of distributions of matrices.
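The semicircle law is easy to see numerically (a sketch: one large GOE-type matrix with the normalization described above, eigenvalues rescaled onto $[-1, 1]$):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1000

A = rng.normal(size=(N, N))
H = (A + A.T) / 2.0                 # symmetrized: a GOE-type random matrix

# Off-diagonal variance 1/2 puts the spectrum in [-sqrt(2N), sqrt(2N)];
# rescale so the semicircle sits on [-1, 1].
x = np.linalg.eigvalsh(H) / np.sqrt(2.0 * N)

# P(x) = (2/pi) sqrt(1 - x^2) predicts a fraction ~0.609 inside |x| < 1/2.
frac = np.mean(np.abs(x) < 0.5)
```

Already for a single $1000\times1000$ matrix the empirical eigenvalue density tracks the semicircle well, illustrating the statistical (ensemble) nature of the statements.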
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/434638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Specifying the initial nonequilibrium distribution $f(\textbf{r},\textbf{v},t)$ in Boltzmann equation? Within the single relaxation time approximation, the collision term in the Boltzmann equation is approximated as $$\Big(\frac{\partial f}{\partial t}\Big)_{\rm coll}=-\frac{(f-f_{\rm eq})}{\tau}$$ where $f\equiv f(\textbf{r},\textbf{v},t)$ is the distribution out of equilibrium and $f_{\rm eq}=f_{\rm eq}(\textbf{v})$ is the Maxwell-Boltzmann distribution, for example.
To show that at $t\to \infty$ the $f$ relaxes to $f_{\rm eq}$, one needs to specify the nonequilibrium distribution $f(\textbf{r},\textbf{v},t)$ at some initial time $t=t_0$. How does one specify that?
| Note that this is really a non-linear equation, because $f_{eq}$ depends on the 0'th moment (particle number=chemical potential), 1st moment (mean velocity) and 2nd moment (mean energy=temperature) of $f$. As a result, establishing convergence is not entirely trivial. Also note that $f$ has non-trivial $x$ dependence, so you really expect convergence to local equilibrium, with $T(x,t)$, $\mu(x,t)$ and $\vec{u}(x,t)$ governed by solutions to the Navier-Stokes equation.
Regarding initial conditions we expect that any positive, reasonably smooth $f(x,0)$ provides an acceptable initial condition. Note that existence and uniqueness of the full Boltzmann equation (with a 2-body collision kernel) has been established (the corresponding problem for Navier-Stokes is a Millennium Prize Problem). There is a fair amount of work on the convergence of the latticized version of the Boltzmann equation with a BGK kernel (known as the lattice Boltzmann equation), see, for example, the book by Succi. A little bit of googling also provides references on existence and uniqueness for the continuum case, see here.
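For the space-homogeneous case the BGK equation reduces to $df/dt = -(f - f_{\rm eq})/\tau$ at each velocity, so any initial $f$ relaxes exponentially. A minimal numerical sketch (with $f_{\rm eq}$ frozen, i.e. ignoring the nonlinearity mentioned above; all values arbitrary):

```python
import numpy as np

tau, dt, steps = 1.0, 1.0e-3, 5000          # integrate out to t = 5*tau
v = np.linspace(-5.0, 5.0, 201)

f_eq = np.exp(-v**2) / np.sqrt(np.pi)       # fixed Maxwellian target
f0 = np.where(np.abs(v) < 1.0, 0.5, 0.0)    # arbitrary non-equilibrium start
f = f0.copy()

for _ in range(steps):
    f = f + dt * (-(f - f_eq) / tau)        # explicit Euler step of the BGK kernel

dev0 = np.max(np.abs(f0 - f_eq))            # initial deviation from equilibrium
dev = np.max(np.abs(f - f_eq))              # deviation after t = 5*tau (~e^-5 of dev0)
```

The deviation shrinks by roughly $e^{-t/\tau}$, independently of the shape of the initial distribution.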
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/434723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
What is the experimental evidence that the nucleons are made up of three quarks? What is the experimental evidence that the nucleons are made up of three quarks? What is the point of saying that nucleons are made of quarks when there are also gluons inside it?
|
What is the experimental evidence that the nucleons are made up of
three quarks?
Some strong pieces of evidence for the quark model of the proton and the neutron, not stated in another answer, are the magnetic moment of the proton and the magnetic moment of the neutron, which are consistent with the quark model and are inconsistent with the magnetic moment that would be predicted by quantum electrodynamics in a point particle model.
What is the point of saying that nucleons are made of quarks when
there are also gluons inside it?
The reason this is done is what the field of science communication calls "lies to children".
Complex topics are often initially taught in a manner that oversimplifies the reality in order to develop key salient points.
Emphasizing the quark composition of the nucleons while ignoring the gluon contribution allows one to explain many key conclusions of the quark-gluon model including:
*
*The charge of all of the hadrons
*Beta decay
*The list of all possible baryons and of all possible pseudoscalar and vector mesons
*The full list of Standard Model fermions
*The magnetic moment of hadrons
*Deep inelastic scattering of hadrons
*A formula for hadron spin
The existence of gluons as a constituent doesn't have to be explained in detail to reach these results.
Also, while the valence quark content of a hadron is specific to a particular kind of hadron, the gluon content of a hadron is not useful for hadron taxonomy.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/434985",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31",
"answer_count": 6,
"answer_id": 0
} |
Is Quantum Mechanics Compatible with Conservation of Information? What is exactly the law of conservation of information? In quantum mechanics we have truly random outcomes in experiments, but doesn't this randomness mean that new information is produced and the law of conservation of information is violated?
| Any conservation law -- energy, momentum, you name it -- holds only in an isolated system. If a system interacts with its environment, then neither energy nor information associated with the system will be conserved. Of course, you can consider the system and its environment together as an isolated system, to which the conservation laws apply.
Since quantum measurement of a system involves interacting with it, it therefore should not be surprising that conservation of information associated with the system is violated. Of course, you could try to consider the system and the experimenter doing the measurement as a single isolated quantum system. An outside observer -- a "Wigner's friend" -- to the system-experimenter composite will describe this composite system as an isolated system undergoing unitary evolution. The relationship between that, and the subjective experience of the experimenter, is the "measurement problem" which various interpretations of quantum mechanics try to address in various ways.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/435255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
What part of special relativity is factored in in relativistic redshift velocity equations? While doing research into redshift equations (Doppler redshift and cosmological redshift), both types of redshift had two equations for finding recession velocity: a 'non-relativistic' equation and a 'relativistic' equation. I looked up SR, and found out about time dilation and length contraction. My question is which of these phenomena has been factored into the relativistic equations (maybe both, or possibly a third phenomenon I'm unaware of?)
I'm doing this for a high school project, so please explain in very simplistic language. Also showing a bunch of complicated equations will seriously fly right over my head, so I'd greatly appreciate it if you could please use words only :)
| The simplest case is the "ordinary" relativistic Doppler redshift or blueshift due to relative motion.
Considering the relativistic effect (redshift or blueshift), it is necessary to take time dilation into account. At relativistic velocities all processes in a moving source or moving observer run slower; thus a source oscillates slower, and this effect must be added to the classical one.
In simple terms: take the equation for the classic Doppler effect and "attach" time dilation:
Due to time dilation, the relativistic effect will be more reddish than the classic one. Hence, if you consider the source to be moving, the equation will look like this:
$$ f_r = \frac{f_s}{\gamma\left(1 + \beta \cos\theta_r\right)}$$
Or like this, if you consider the detector to be moving in the reference frame of the source:
$$ f_r = \gamma \left( 1 - \beta \cos \theta_s \right) f_s $$
here $\gamma = \frac{1}{\sqrt{1 - \beta^2}}$ is the Lorentz factor.
If the source and receiver are moving directly towards or away from each other, these equations can be reduced by simple transformations to the general form
$$\frac{f_s}{f_r} = \sqrt{\frac{1 + \beta}{1 - \beta}}$$
The observed frequency shift is invariant; it doesn't depend on the choice of frame, even though the relative contribution of time dilation is frame-dependent.
The Feynman Lectures - Relativistic Effects in Radiation
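Both angle-dependent forms collapse to the same invariant shift for purely radial motion. A numeric sketch (assuming the angle conventions are such that $\cos\theta = 1$ corresponds to recession in both formulas; $\beta$ arbitrary):

```python
import numpy as np

beta = 0.3
gamma = 1.0 / np.sqrt(1.0 - beta**2)
f_s = 1.0

# Moving source, received at cos(theta_r) = 1 (directly receding):
f_r_source_moving = f_s / (gamma * (1.0 + beta))

# Moving detector, emitted at cos(theta_s) = 1:
f_r_detector_moving = gamma * (1.0 - beta) * f_s

# Both should match the general radial formula f_s/f_r = sqrt((1+beta)/(1-beta)).
f_r_radial = f_s * np.sqrt((1.0 - beta) / (1.0 + beta))
```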
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/435492",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Frustrated Ising model Consider a 2D Ising model with nearest-neighbour and second-nearest-neighbour interactions
$\mathcal{H}= -J_1\sum_{\langle ij\rangle}\sigma_i \sigma_j-J_2\sum_{\langle\langle ik\rangle\rangle}\sigma_i \sigma_k$
where $\sigma =\pm 1$. And $|J_1|=|J_2|$
For $J_1>0$ and $J_2<0$ the system is frustrated since $J_1$ prefers ferromagnetic ordering but $J_2$ prefers antiferromagnetic ordering. How do I calculate which state minimizes the energy?
I was thinking one could try different combinations and see which arrangement minimizes the "frustration", but maybe there's a better way?
Seems like a lot of work
Would really appreciate some input
| This model (assuming a square lattice) has been studied in the literature for a while. K Binder and DP Landau Phys Rev B, 21, 1941 (1980) discuss the three possibilities for the ground state: ferromagnetic (F), antiferromagnetic (AF), and superantiferromagnetic (SAF). Basically one considers the likely structures and picks the one with the lowest energy. They also conducted Monte Carlo simulations to determine the phase behaviour at higher temperatures, and never saw any more complicated structures. There's more discussion in J Oitmaa J Phys A, 14, 1159 (1981), and I drew the following diagram based on Fig 1 from that paper.
The SAF phase consists of alternating rows of $+$ and $-$ spins in one direction
(which can be horizontal or vertical, giving an additional degeneracy
on top of the usual up-down spin degeneracy).
The relevant energies per spin are
\begin{align*}
E_\text{F}/N &= -2J_1 -2J_2 \\
E_\text{AF}/N &= 2J_1 -2J_2 \\
E_\text{SAF}/N &= 2J_2
\end{align*}
The lines along which SAF becomes energetically equal to either AF or F
correspond to $J_2=-\frac{1}{2}|J_1|$.
So the case in which you are interested, $J_1>0$ and $J_2=-J_1$,
lies well within the SAF region.
There may be discussions of this model in standard texts, but I don't have any of those to hand, sorry, and the papers I cited both require a subscription. There is a connection with the anisotropic next-nearest-neighbour Ising or ANNNI model (which I saw mentioned in a comment on your earlier question), and that model is of interest both experimentally and theoretically.
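The three candidate energies are easy to confirm by brute force on a small periodic square lattice (a sketch with $J_1 = 1$, $J_2 = -1$, the case asked about):

```python
import numpy as np

J1, J2 = 1.0, -1.0
L = 8

def energy_per_spin(s):
    """Sum each NN and NNN bond once via shifts on a periodic LxL lattice."""
    nn = s * (np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1))
    nnn = s * (np.roll(np.roll(s, 1, axis=0), 1, axis=1)
               + np.roll(np.roll(s, 1, axis=0), -1, axis=1))
    return (-J1 * nn.sum() - J2 * nnn.sum()) / s.size

i, j = np.indices((L, L))
E_F   = energy_per_spin(np.ones((L, L)))           # ferromagnet
E_AF  = energy_per_spin((-1.0) ** (i + j))         # checkerboard
E_SAF = energy_per_spin((-1.0) ** i)               # alternating rows (SAF)
```

With $J_1=1$, $J_2=-1$ the formulas give $E_F/N = 0$, $E_{AF}/N = 4$, $E_{SAF}/N = -2$, so the superantiferromagnet wins, as stated above.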
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/435596",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Global destructive interference and conservation of energy As an engineer I see it like this. Imagine I send a wave and then I send another wave in phase shift to cancel that wave. Unless I am sending the wave from exactly the same point in both instances, I will not have perfect destructive interference everywhere. Now if I do send the wave and the phase-shifted wave from the same location, then it is as if I try to push a cart and pull a cart at the same time. The forces will cancel out and there is no net energy or force. The point I am trying to make is that in practice there will be a small separation between the origins of my two waves and hence there will be areas of destructive and of constructive interference, as I cannot perfectly overlap the waves. Does this make sense to you physicists?
| You are right about what happens in reality. But physical models are simple, and generally, they don't care about practical nuances like this.
So, according to any physical model, even when you emit two waves like that (from the same point, with precisely the phase difference that you want), the mathematical background implies that the model should hold.
Thus it doesn't matter what happens in reality, from the point of view of the theoretical framework. In fact, these kinds of thought experiments can even be used to test the validity of the model, because the model's predictions shouldn't cause any contradictions to the assumptions on which it is based.
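The asker's practical point (a small separation between the two sources leaves zones of constructive and destructive interference and a nonzero total radiated power) can be sketched numerically with two opposite-phase point sources (a toy 2D model, wavelength 1, far-field receivers on a circle; all values illustrative):

```python
import numpy as np

k = 2.0 * np.pi                          # wavenumber for wavelength 1
angles = np.linspace(0.0, 2.0 * np.pi, 721)
R = 100.0                                # far-field receiver ring
rx, ry = R * np.cos(angles), R * np.sin(angles)

def mean_intensity(d):
    """Two opposite-phase monopoles at (+-d/2, 0); average |E|^2 over the ring."""
    r1 = np.hypot(rx - d / 2.0, ry)
    r2 = np.hypot(rx + d / 2.0, ry)
    E = np.exp(1j * k * r1) / np.sqrt(r1) - np.exp(1j * k * r2) / np.sqrt(r2)
    return np.mean(np.abs(E) ** 2)

p_same_point = mean_intensity(0.0)       # perfect cancellation everywhere
p_separated  = mean_intensity(0.25)      # quarter-wavelength separation
```

With zero separation the cancellation is exact everywhere; with a quarter-wavelength offset the average intensity over the ring is clearly nonzero.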
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/435862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why are tall block stacks so hard to make? Consider a stack of wood chips: each 0.5 cm thick and 2 x 2 cm in length and width. There are 200 of them all stacked on each other. For some reason they all instantly fall, even though their centre of gravity is at the centre of the stack and they have the extra added help of friction the further down the chips you go (because the second-to-bottom chip is squashed against the first by the heavy load). So why does it fall?
| An additional factor that I haven't seen mentioned yet is the fact that the environment acts upon a structure such as that. Errant air currents would have a deleterious effect on such a stack the higher it went. Even a tiny breeze would topple a perfect stack if it was beyond a certain height.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/435982",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Is the relative speed of light really invariant, irrespective of the motion of the observer? If 3 observers are on a planet which is 100 light years from a star, and the star goes supernova, and one observer moves towards the star while one moves in the opposite direction, each observer will see the explosion at a different time. The observer who stays on the planet sees the explosion a hundred years after it occurs. The person moving towards the star sees it in less than a hundred years, while the person moving in the opposite direction will see it more than 100 years after it occurred. Doesn't this prove that the speed of light, relative to a moving observer, is not invariant?
| The concise answer: by moving the observers, you changed the distance to the supernova, not the speed of light in your experiment.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/436138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 5
} |
What is the most useful to learn out of complex analysis and differential equations for undergraduate studies in physics? Next year I'm planning to start on my bachelor's in physics; however, I have already started taking some undergraduate courses in mathematics, and next semester I will have to choose between complex analysis and differential equations. I will take the other course at some other time, but that will be at least a year away.
I know both subjects are important in physics, but my question is which subject will have the largest impact when I start on my bachelor's degree.
| As an undergraduate, having a decent understanding of complex numbers, rather than complex analysis, goes far. Complex analysis - complex functions and integrals of complex functions in the complex plane and power series of complex functions and the multiple of theorems, etc... - is not very useful in an undergraduate curriculum, unless it is an advanced undergrad curriculum.
However, what's more useful for an undergrad is a good understanding of the complex plane and how to use it to make problems easier. The main application is in quantum mechanics, where we are dealing with complex numbers and functions ubiquitously. Most of the relevant information necessary for introductory quantum mechanics regarding complex numbers is usually covered by an intro textbook, and I strongly recommend David McIntyre's book, and/or Griffiths's book. For more mathematical treatments of complex numbers, see this post.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/436237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Work done by a gas In the expression for work done by a gas,
$$W=\int P \,\mathrm{d}V,$$
aren't we supposed to use internal pressure?
Moreover, the work done by the gas is the work done by the force exerted by the gas, but everywhere I find people using external pressure instead of internal pressure.
| The work done by the gas on the piston is
$$W_1 = \int P_{\text{int}} \, dV$$
where $P_{\text{int}}$ is the pressure of the gas right next to the piston. This is just a mild rephrasing of the definition of work. The work done by the piston on the outside is
$$W_2 = \int P_{\text{ext}} \, dV$$
where $P_{\text{ext}}$ is the pressure of the external air right next to the piston. These two pressures may be different, so we may have $W_1 \neq W_2$.
For example, the two may differ if the piston has friction, with the difference $W_1 - W_2$ dissipated into heat. (Friction exists no matter how slowly the piston is moving, so this also holds for a quasistatic process.) Or the piston may be accelerating, in which case $W_1 - W_2$ goes into the piston's kinetic energy.
In high school physics, $P_{\text{int}}$ and $P_{\text{ext}}$ are always assumed to be the same, to keep things simple.
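The difference between $W_1$ and $W_2$ is easy to see numerically. Here is a short sketch (all numbers, including the friction force, are illustrative assumptions) of a quasistatic isothermal expansion with piston friction:

```python
import numpy as np

# Quasistatic isothermal expansion of an ideal gas against a piston with
# dry friction. All parameter values are illustrative assumptions.
n, R, T = 1.0, 8.314, 300.0    # mol, J/(mol K), K
f, A = 50.0, 1e-2              # friction force (N), piston area (m^2)
V1, V2 = 1e-3, 2e-3            # initial and final volume (m^3)

V = np.linspace(V1, V2, 100001)
P_int = n * R * T / V          # gas pressure at the inner piston face
P_ext = P_int - f / A          # pressure on the outer face during expansion

def trapz(y, x):
    """Trapezoidal rule (written out to avoid numpy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

W1 = trapz(P_int, V)           # work done by the gas on the piston
W2 = trapz(P_ext, V)           # work done by the piston on the outside

print(W1 - W2)                 # the heat dissipated by friction
print((f / A) * (V2 - V1))     # same number, computed directly (≈ 5.0 J)
```

The difference $W_1 - W_2 = (f/A)\,\Delta V$ is exactly the friction work, which shows up as heat rather than as work delivered to the surroundings.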
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/436339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 7,
"answer_id": 1
} |
Solution of diffusion equation with spherical sink I hope this question is not too basic, but I have no experience with partial differential equations and would like to ask for some hints on how to solve the following problems:
The visual idea is to describe the diffusion of some dilute chemical around a spherical sink or a sink at some point.
It would be nice to obtain a time evolution when starting with a uniform density (this is only possible in problems 1) and 5)), but I would already be satisfied with a "nice" steady-state solution.
1) The simplest case would be a diffusion equation in one dimension with a sink at $x=0$, i.e. $$\frac{\partial{\rho}(x)} {\partial{t}} = - \nabla \cdot J(x), \qquad x \in \mathbb{R}$$ with $J(x) = -D \nabla \rho(x)$ for $x \neq 0$ and $J(x) = -D \nabla \rho(x) -k\rho(x)$ for $x = 0$ with $k$ some large depletion constant.
Moreover, I would like to require that $\rho(x) \rightarrow c$ for $|x| \rightarrow \infty$ for some constant $c >0$.
The initial conditions are not so important, let's say $\rho(x;t=0) = c_0$ for all $x \in \mathbb{R}$. Actually, I am not as much interested in the time evolution as in finding a nice steady-state solution, which does not diverge for $x \rightarrow \infty$. Thus, some more appropriate initial conditions could be chosen.
2) Alternatively, one could just require the boundary condition $\rho(x=0;t) = 0$ for all $t$ in the above situation. Then the initial condition needs to be adjusted accordingly.
3) The problem in 3D is the same: to solve the diffusion equation with $x \in \mathbb{R}^3$ and boundary conditions $\rho(x) = 0$ for $x = 0$ and $\rho(x) \rightarrow c$ for $|x| \rightarrow \infty$ with some $c >0$. I guess with spherically symmetric initial conditions, this case is completely equivalent to 2).
4) Now I would like to have a real sphere in $\mathbb{R}^3$ of radius $r >0$, i.e. the boundary conditions $\rho(x) = 0$ for $|x| < r$ and $\rho(x) \rightarrow c$ for $|x| \rightarrow \infty$ with some $c >0$.
5) Maybe 4) is simpler when formulated with a finite depletion rate $k$ as in 1) instead of the boundary condition around $0$. Physically, this is very similar anyway.
Any literature suggestions are also highly appreciated.
| For steady state diffusion in an unbounded region toward a spherical sink of radius $r_0$, we have $$4\pi r^2 D\frac{dc}{dr}=Q$$where Q is the diffusion rate of the species toward the sink (a constant). The solution to this equation is $$c=c_{\infty}-\frac{Q}{4\pi rD}$$where $c_{\infty}$ is the concentration far from the sphere. Applying the boundary condition $c=c_0$ at $r=r_0$ for the surface of the sphere, we have: $$Q=4\pi r_0D(c_{\infty}-c_0)$$Therefore, the concentration profile is $$\frac{(c_{\infty}-c)}{(c_{\infty}-c_0)}=\frac{r_0}{r}$$
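A quick numerical check of this profile (a sketch; the parameter values are arbitrary assumptions) confirms both boundary conditions and that the diffusive current through every concentric sphere equals $Q$:

```python
import numpy as np

# Illustrative parameters (all values are assumptions, not from the answer)
D, r0, c_inf, c0 = 1.0, 1.0, 1.0, 0.0

Q = 4 * np.pi * r0 * D * (c_inf - c0)   # diffusion rate toward the sink

def c(r):
    """Steady-state concentration around an absorbing sphere of radius r0."""
    return c_inf - Q / (4 * np.pi * r * D)

print(np.isclose(c(r0), c0))            # True: surface boundary condition
print(np.isclose(c(1e9 * r0), c_inf))   # True: recovers c_inf far away

# The flux 4*pi*r^2 * D * dc/dr equals the same constant Q at every radius
for ri in (2 * r0, 5 * r0, 10 * r0):
    h = 1e-6
    dcdr = (c(ri + h) - c(ri - h)) / (2 * h)
    print(np.isclose(4 * np.pi * ri**2 * D * dcdr, Q))  # True
```

The constancy of $4\pi r^2 D\,dc/dr$ is just steady-state mass conservation: whatever diffuses through one sphere must diffuse through every other one.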
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/436470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
The charge of the electron before measurement Knowing that electrons do not have a definite position before they are being measured, how can their charge be described before the measurement? Where is the charge? Does it make sense to talk about their charge before measurement?
| The charge and mass of an electron are natural constants. All electrons have the same charge. The position of an electron is not constant at all. Where is the charge? Wherever the electron is. The statement that a particle "does not have a definite position" before being measured of course means that the position of its charge is similarly undefined, but the charge is still very well defined.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/436564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Determining the acceleration of the Universe from a single star? It occurs to me we might be able to find an entirely independent method of determining the Universe's acceleration using a single source.
If one were to watch a single high-redshift source consistently, one should be able to simply watch for the change in its redshift with time. I know it would be a small effect (perhaps choosing a high-$z$ source would help).
Is such a task feasible?
Maybe something like finding a source that could line up with the recoilless resonant absorption (Mössbauer effect) in some crystal would have enough sensitivity? (i.e. a cosmological-scale Pound-Rebka experiment?)
Anyway, I hadn't heard of it, but maybe someone can tell me why its a bad/good idea. Thanks!
NOTE: I get that ultimately one would use several sources for a proper analysis.
| The idea of measuring the change in redshift over time of a distant galaxy has been around since at least the 1960s. Unfortunately, it remains far beyond our technical capabilities. In a previous post, I derived the equation for $\dot{z}$:
$$
\dot{z} = (1+z)H_0 - H\!\left(\!\frac{1}{1+z}\!\right),
$$
where $H(a)$ is the Hubble parameter, expressed in terms of the scale factor. In a $\Lambda$CDM model with $H_0 = 68\ \text{km}\,\text{s}^{-1}\,\text{Mpc}^{-1}$, this gives the following plot:
As you can see, $\dot{z}\sim 10^{-10}$ per year. With current technology, quasar redshifts can be measured with an accuracy up to $10^{-5}$ (see Davis & Lineweaver (2003), section 4.3). In other words, with current technology it would still take ~100,000 years to measure any change in redshifts. It's a great idea, but we have a long way to go before it becomes feasible.
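For concreteness, the expression for $\dot z$ can be evaluated directly. The sketch below assumes a flat $\Lambda$CDM model with $\Omega_m=0.3$ and $\Omega_\Lambda=0.7$ (density parameters of my choosing; only $H_0$ is taken from the answer):

```python
import numpy as np

H0 = (68e3 / 3.086e22) * 3.156e7   # 68 km/s/Mpc converted to 1/yr
Om, OL = 0.3, 0.7                   # assumed flat LCDM density parameters

def H(a):
    """Hubble parameter H(a) in 1/yr for flat LCDM."""
    return H0 * np.sqrt(Om / a**3 + OL)

def zdot(z):
    """Redshift drift dz/dt per year of observer time."""
    return (1 + z) * H0 - H(1.0 / (1 + z))

for z in (0.5, 1.0, 2.0, 4.0):
    print(f"z = {z}: zdot = {zdot(z):+.2e} per year")
```

The drift vanishes at $z=0$, is positive at moderate redshift, and turns negative at high redshift, at the level of roughly $10^{-11}$ to $10^{-10}$ per year.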
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/436657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Are uncertainties higher than measured values realistic? Whenever I measure a positive quantity (e.g. a volume) there is some uncertainty related to the measurement. The uncertainty will usually be quite low, e.g. lower than 10%, depending on the equipment. However, I have recently seen uncertainties (due to extrapolation) larger than the measurements, which seems counter-intuitive since the quantity is positive.
So my questions are:
*
*Do uncertainties larger than the measurements make sense?
*Or would it be more sensible to "enforce" an uncertainty (cut-off) no higher than the measurement?
(The word "measurement" might be poorly chosen in this context if we are including extrapolation.)
| If your extrapolated results have more than 100% uncertainty, which is possible, it just means that either your sample data was unrepresentative of the population, or that your extrapolation is wrong. Depending on what your experiment is, a linear extrapolation might lead to vastly incorrect results.
I'm not sure what you mean by 'enforce', but an uncertainty that high should tell you something is probably wrong. It makes sense to choose a cut-off point for your uncertainty, if that's what you mean by 'enforce', but your first strategy should probably be to look at how you're extrapolating.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/436747",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 4
} |
Is the classification of (Symetry Protected) Topological Order for 3 band models different than for two band models? I was reading this article: https://arxiv.org/abs/1512.08882 on the 10 fold way which gives a nice explanation of the possible topological phases for each of the symmetry classes. The example explanation(in section III.B.) for symmetry class A (T=S=C=0) seems to be band dependent:
Topological order requires a gap, but allows for continuous deformations. This allows one to break the Hamiltonian (dimension N+M) at a momentum $k$ into two sets of eigenstates (of dimension N and M) such that the first set has energy +1 and the second has energy -1. For symmetry class A, a valid transformation is part of the Lie group U(N+M), while a valid transformation which doesn't mix bands is part of $[U(N)\otimes U(M)]$. Since mixing states within a band doesn't change the topology, topological order was classified by the homotopy groups of $U(N+M)/[U(N)\otimes U(M)]$.
If I generalize this to 3 bands, topological order would be classified by the homotopy groups of $U(N+M+L)/[U(N)\otimes U(M)\otimes U(L)]$. I'm then left at the following questions:
*
*Are these homotopy groups different?
*Does this generalize to the other symmetry classes and give new homotopy groups?
I know that for ground state physics, the third band is irrelevant because mixing the second and third band doesn't change the topology of the first band, but maybe in a dynamical situation the restriction of not mixing the bands would matter. Anyways, we could save the discussion on the physical relevance of this restriction for a separate question.
| That is a good question. Yes, the classification theory will be different, but the answer will be much, much more complicated than the Ten-Fold Way. There are mathematical frameworks (e. g. the K-theoretic framework by Freed and Moore) which are powerful enough to investigate the question you have, but keep in mind that you are asking a difficult question.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/436868",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Why are there only odd numbered harmonics in one closed end resonant tube?
*
*Why do we only have odd-numbered harmonics in tubes closed at one end? However, if we take a frequency spectrum, we see some periodic spikes between the odd harmonic spikes, just like the picture below shows.
*What do these "even" spikes mean?
| *
*Hint: You have to count nodes & antinodes of a single closed end pipe.
*The suppressed even harmonics presumably indicates that the actual "single closed end pipe" can behave partly like a second open end, i.e. an antinode.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/436976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Additive constant in Hamilton-Jacobi theory? In Hamilton-Jacobi theory Hamilton's principal function $S$ is a function of $n+1$ constants. But we take one of the $n+1$ constants as an additive constant. I don't get this step?
| The HJ equation is a non-linear first-order PDE for $S$ in $(n+1)$ variables $(q^1, \ldots, q^n, t)$, but the PDE does not depend on $S$ directly, only its derivatives. Therefore one additive integration constant $S\to S+\alpha_{n+1}$ is trivial.
For more information, see also this related Phys.SE post.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/437141",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Can any body be uniform in the universe? If I take any body in the shape of a rod and stretch that, after it reaches breaking stress it breaks at one point.
Even though we apply the same stress on each and every part of the rod, it broke at one point. If it's uniform, it should break at all points, because the breaking stress is the same for all parts of the body, as it's uniform.
| You could consider it as one more demonstration of the underlying quantum mechanical frame keeping atoms and molecules bonded together. Quantum mechanics is a probabilistic theory, and which bond will "break" depends on the square of the wavefunction describing the rod, with a probability which manifests in this one instance of breakage.
To get the probability curve for a hypothetical completely uniform rod would be a laborious process, as it consists of the order of $10^{23}$ atoms/molecules, and the number of experiments needed to plot the probability of which atom or group of atoms "breaks" would take forever.
In reality, no rod can be completely uniform, as the other answer states. Even crystals have defects.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/437495",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 6,
"answer_id": 3
} |
Where does the time asymmetry come from in Hawking Radiation? Taking General Relativity and Quantum Field Theory, Hawking predicts radiation emitted from a black hole.
Both GR and QFT are CPT symmetric.
Taking just GR by itself, a black hole will stay the same forever.
Taking QFT by itself a vacuum will stay the same forever (on average).
So why, when combining the two do we get something time asymmetric: small black holes evaporating?
Or equally a very large black hole will increase in size due to in coming radiation from the background.
I mean, in classical mechanics I can see how a clump of atoms will generally tend to disperse due to random processes. But a black hole is not a clump of atoms... at least not in terms of GR.
How does time asymmetry enter the equations? Do we have to take into account the history of the black hole and its collapse from a star?
e.g. it is also described as an electron-positron pair created near the horizon with one falling into the black hole and one shooting off to infinity.
The time reverse of this would be an electron coming from infinity and annihilating a positron emitted near the horizon. So does this suggest different boundary conditions for past and future?
Perhaps on the microscopic scale it is time symmetric with black holes randomly being created and evaporating?
Equally, what is the time reverse of the Unruh effect?
| The generic answer to this kind of thing is that the asymmetry comes from the choice of boundary conditions. Here, I imagine the boundary conditions are chosen to be that there are only outgoing photons, no incoming ones.
The justification for this choice of boundary conditions would be that we're really talking about an astrophysical black hole, which formed by gravitational collapse, not a Schwarzschild black hole, which is eternal. The logic is no different from the logic behind the fact that the sun is predicted to emit light but not absorb it.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/437625",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Should loop choice affect induced electric field? Let's say we have a time varying magnetic field, but that it is uniform over a region, for instance $B(x,y)=(t+2)\hat{z}$ for $x,y,z\in[-5,5]$. Since we have a changing magnetic field, there will be an accompanying induced electric field given by $\oint\vec{E}\cdot \mathrm{d}\vec{l}-\frac{\mathrm{d}\Phi_B}{\mathrm{d}t}$, where the electric field is integrated along some closed path, and $\Phi_B$ is the flux through that path. Let's take a simple example, two circles, one centered at $(1,0)$ and the other at $(-1,0)$, both with radius 1. If we draw these out though we get
.
Since $B$ is uniform, and they're the same path just shifted, the magnitudes of the induced fields are the same, but they give opposite directions at the origin where they meet. So from this, it appears as if different choices of loop can give wildly different values for the field at the same point, but in real life that electric field must have some value, so how is this?
In application there would be an actual wire loop in the field we could integrate along, but the induced electric field should still exist and be self consistent, even with a lack of wires.
| When there is a time-varying magnetic field, the electric field is no longer conservative, meaning that $\nabla \times \mathbf{E} \neq \mathbf{0}$. Thus, $\oint \mathbf{E} \cdot \text{d} \mathbf{l}$ will depend on the path chosen.
they're the same path just shifted
This is not true in general. The paths may be geometrically identical, but they are located at different positions in space and have different equations. The red path has equation $(x+1)^2 + y^2 = 1$ and the blue path has equation $(x-1)^2 + y^2 = 1$. The limits of integration and the parameterized paths will also be different.
they give opposite directions at the origin where they meet
You cannot uniquely determine $\mathbf{E}$ just from knowing $\nabla \times \mathbf{E} = - \frac{\partial \mathbf{B}}{\partial t}$. You need to know $\nabla \cdot \mathbf{E}$ as well. Even then, the solutions to these equations aren't necessarily unique, and additional boundary conditions must be applied. $\mathbf{E}$ is not necessarily uniform in this case, as symmetry might suggest. $\oint \mathbf{E} \cdot \text{d} \mathbf{l}$ only tells you the total component of $\mathbf{E}$ along the path $\mathbf{l}$, and the fact that it encloses an area of uniform $\frac{\partial \mathbf{B}}{\partial t}$ tells you nothing about $\mathbf{E}$ itself. In your scenario, $\mathbf{E}$ does have a definite direction everywhere in space, but that is unknown without further information about the field.
P.S. See my comments below for more elaboration.
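The non-uniqueness can be made concrete with a small symbolic check (sympy; both fields below are my own illustrative choices): a "rotational" field with the required curl, and that field plus an arbitrary gradient, have the same curl everywhere, so knowing $\nabla\times\mathbf{E}$ alone cannot fix $\mathbf{E}$.

```python
import sympy as sp

x, y, z, k = sp.symbols('x y z k')  # k stands for dB/dt (uniform, along z)

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

# One solution of curl E = -k zhat: the rotationally symmetric choice
E1 = sp.Matrix([k * y / 2, -k * x / 2, 0])

# Add the gradient of an arbitrary scalar (here x*y): still a solution
chi = x * y
E2 = E1 + sp.Matrix([sp.diff(chi, x), sp.diff(chi, y), sp.diff(chi, z)])

print(curl(E1).T)   # Matrix([[0, 0, -k]])
print(curl(E2).T)   # same curl, even though E2 differs from E1
```

Both fields satisfy Faraday's law, so extra information (divergence and boundary conditions) is needed to single out the physical field.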
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/437743",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Finding the Eigenvalues and Eigenvectors of the Hamiltonian for three spin-1/2 particles coupled antiferromagnetically Problem
Given three spin-1/2 particles with the total spin operator $\vec{S}=\sum\limits_{i=1}^3 \vec{S}_i$ and its $z$ projection $S_z=\sum\limits_{i=1}^3 S_{z,i}$, and the Hamiltonian
$$H = J\sum\limits_{i=1}^3 \vec{S}_i \cdot \vec{S}_{i+1}
$$
(assuming for $i=3$ that $i+1=1$), calculate the eigenstates and the eigenenergies.
Hint: Rewrite $H$ as a function of $S^2$ and $S_i^2$.
Work
I've already calculated the basis for $\vec{S}^2$ and $S_z$
$$
\vert 3/2,3/2\rangle \equiv \vert\uparrow\uparrow\uparrow \rangle \\
\vert3/2,1/2\rangle \equiv \frac{1}{\sqrt{3}}\Big( \vert\downarrow\uparrow\uparrow \rangle + \vert\uparrow\uparrow\downarrow \rangle + \vert\uparrow\downarrow\uparrow \rangle \Big)\\
\vert3/2,-1/2\rangle \equiv \frac{1}{\sqrt{3}}\Big( \vert\downarrow\downarrow\uparrow \rangle + \vert\downarrow\uparrow\downarrow \rangle + \vert\uparrow\downarrow\downarrow \rangle \Big)\\
|3/2,-3/2\rangle \equiv \vert\downarrow\downarrow\downarrow \rangle \\
$$
with eigenvalues according to
$$S^2\vert s,m \rangle = \hbar^2s(s+1)\vert s,m \rangle = \frac{15\hbar^2}{4}\vert 3/2,m \rangle \\
S_z\vert s,m \rangle = \hbar m\vert 3/2,m \rangle \text{ for } m=-\frac{3}{2},-\frac{1}{2},\frac{1}{2},\frac{3}{2}.
$$
I'm now attempting to rewrite the Hamiltonian according to the hint, using
$$\vec{S}_i \cdot \vec{S}_{i+1} = \frac{1}{2}\Big[\Big(\vec{S}_i + \vec{S}_{i+1}\Big)^2 - \Big(\vec{S}_i^2 + \vec{S}_{i+1}^2\Big) \Big].$$
Issue
I'm not certain I'm applying the hint correctly. With the above,
$$H = \frac{J}{2}\Big[ \Big(\vec{S}_1 + \vec{S}_2 \Big)^2 + \Big(\vec{S}_1 + \vec{S}_3 \Big)^2 + \Big(\vec{S}_2 + \vec{S}_3 \Big)^2 -2\Big( \vec{S}_1^2 + \vec{S}_2^2 +\vec{S}_3^2\Big) \Big],$$
which once expanded and using the expansion of $\vec{S}^2 = \Big(\vec{S}_1 + \vec{S}_2 +\vec{S}_3\Big)^2$ gets me to
$$H = J\sum\limits_{i=1}^2 S_i^2,$$
which just seems wrong to me, since it's not written as a function of $S^2$ and $S_i^2$, as the hint suggests.
| Note that
$$S_{1}S_{2}+S_{2}S_{3}+S_{3}S_{1}=\frac{1}{2}\left(S^{2}-S_{1}^{2}-S_{2}^{2}-S_{3}^{2}\right)$$
and that's what you need. Your calculation is almost correct (your last Hamiltonian is wrong), but longer than it should be.
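To confirm the hint numerically, one can diagonalize $H$ directly in the 8-dimensional product space. Below is a sketch with $\hbar = 1$ and $J = 1$ (units of my choosing); from $H=\frac{J}{2}\left(S^2-\sum_i S_i^2\right)$ one expects $+3J/4$ for the $s=3/2$ quartet and $-3J/4$ for the two $s=1/2$ doublets:

```python
import numpy as np

# Single spin-1/2 operators (hbar = 1)
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def site(op, i):
    """Embed a one-spin operator at site i (0, 1, 2) of the 3-spin space."""
    ops = [I2, I2, I2]
    ops[i] = op
    return np.kron(ops[0], np.kron(ops[1], ops[2]))

J = 1.0
H = sum(J * site(s, i) @ site(s, (i + 1) % 3)
        for i in range(3) for s in (sx, sy, sz))

evals = np.sort(np.linalg.eigvalsh(H))
print(np.round(evals, 6))
# [-0.75 -0.75 -0.75 -0.75  0.75  0.75  0.75  0.75]
```

The fourfold degeneracies match the dimension counts: four states with $s=3/2$ and $2\times 2$ states from the two $s=1/2$ doublets.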
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/437898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Color and the absorption of light in quantum mechanics In an answer to a another question, the poster states without sources the following:
From a quantum mechanical perspective, all light scattering is a form
of absorption and re-emission of light energy. Photons don't bounce
off a surface.
If this is true, what is the evidence for it, or is it a theoretical postulate under quantum mechanics?
The reason I ask this is that the statement would seem to be contradicted by the phenomenon of specular reflection. Since specular reflections always faithfully reproduce the spectrum of the incident light, that would suggest that no light is absorbed and that, indeed, photons are "bouncing" off of the surface.
| Reflection is just some type of coherent scattering. Scattering happens because of electric dipoles oscillating and radiating their energy in all directions. But how do they oscillate? They absorb the energy of the incident electromagnetic wave, meaning the scattered wave is the result of energy being absorbed and re-emitted by dipoles.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/438059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Theoretical definition and pratical mesurement of differential cross section In Sakurai's book, the definition of differential cross section is:
$$\frac{d\sigma}{d\Omega}= \frac{\text{transition rate}}{\text{probability flux}}$$
However, this definition doesn't contain any information about the thickness of the material or the density of target particles. How does one compare the experiment with the theory?
I checked Wikipedia but didn't find anything useful.
| Correct, the scattering cross-section is a measure of the intrinsic probability for a given process. It doesn't know anything about the experimental conditions under which the process is actually observed (density of the target material, flux of incoming particles, etc.).
These are instead encoded in what is called the luminosity of the experiment. The rate of a process in a given experiment is given by cross-section times luminosity:
$$R=\sigma \mathcal L.$$
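As a back-of-the-envelope illustration (the cross-section and luminosity below are rough round numbers of my choosing, not values from the answer):

```python
# Event rate = cross-section x instantaneous luminosity
pb_to_cm2 = 1e-36                 # 1 picobarn in cm^2
sigma = 50.0 * pb_to_cm2          # ~50 pb, a rough LHC-scale cross-section
lum = 2e34                        # cm^-2 s^-1, an LHC-class luminosity
rate = sigma * lum                # of order one event per second
print(rate, "events per second")
```

The luminosity absorbs everything experiment-specific (beam intensity, target density, geometry), so the same $\sigma$ can be compared across very different setups.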
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/438335",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
How can a particle in circular motion about a fixed point accelerate, if the point doesn't too? When a particle is performing uniform circular motion attached to a string about a fixed centre, at any instant of time its acceleration is directed towards the centre but the centre has no acceleration. But I was taught in school this is not possible because of the string constraint:
The accelerations of the ends of a string are the same if the string is not slack.
Where am I wrong?
|
accelerations of both ends along the string are the same if the string is not slack
Now, I understand your problem.
If you have a string placed in the shape of an "s" in vacuum and you start pulling it from one end, it finally becomes an "l", i.e. straight. Here you can say that the string isn't slackened because the acceleration of both ends is the same.
In case of circular motion i.e particle rotating about a fixed centre the string provides the necessary force to keep the particle moving around the centre and this force is called as tension.
Now, since the string isn't slackened, is the acceleration of both ends the same?
I want to explain what's happening in terms of forces rather than acceleration:
Newton's third law of action and reaction states that if the string exerts an inward centripetal force on the particle, the particle will exert an equal but outward reaction upon the string, the reactive centrifugal force.
The string transmits the reactive centrifugal force from the particle to the centre, pulling upon the centre. Again according to Newton's third law, the centre exerts a reaction upon the string, pulling upon the string. The two forces upon the string are equal and opposite, exerting no net force upon the string (assuming that the string is massless), but placing the string under tension i.e no slackening of the string.
The reason the centre appears to be "immovable"(not accelerating) is because it is fixed. If the rotating ball was tethered to the mast of a boat, for example, the boat mast and ball would both experience rotation about a central point.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/438421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 7,
"answer_id": 2
} |
How does Newton's corpuscular theory explain the speeding up of corpuscles when entering a denser medium? I can't find an explanation for this anywhere. Intuition would imply that the corpuscle would slow down. I mean a person running at a constant speed enters a crowd of people or a forest. The presence of obstacles would cause a reduction in velocity. How did Newton think it would speed up the particles ?
| They were talking classically, in analogy to sound waves. Sound waves travel faster in metal.
See this post. Did Newton argue that particles speed up when entering a more dense medium? and http://www.physics-and-radio-electronics.com/blog/corpuscular-theory-light/ under the title "Newton’s Corpuscular Theory Statement".
I found another article which makes more sense: see http://galileo.phys.virginia.edu/classes/609.ral5q.fall04/LecturePDF/L20-LIGHTII.pdf, page 3, which explains it with conservation of energy.
It's almost just a physical assumption. In his age, the usual physical intuition was different; he probably picked up the knowledge of mechanical waves from sound, because the theory of electromagnetism didn't exist until hundreds of years later. Basically, he couldn't do the math, because, remember, he had just invented calculus, which was not yet widely accepted.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/438607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Orbital parity of simple bound states in atomic and particle physics The parity operator commutes with the Hydrogen atom Hamiltonian. The energy eigenfunctions are parity eigenstates with orbital parity $(-1)^\ell$ which follows from the fact that $Y_{\ell m}(\theta,\phi)$ is an eigenstate of parity with parity eigenvalue $(-1)^\ell$.
Question 1 What about the Helium (He) atom? The Hamiltonian of the He atom also commutes with parity but is not central. But I am not sure whether the energy eigenfunctions are still given by a product of a radial and an angular part, with the angular part given by $Y_{\ell m}(\theta,\phi)$. So I cannot decide whether or not the He atom energy eigenstates also have the orbital parity $(-1)^\ell$?
Question 2 What about a meson $\bar{q}_1q_2$, i.e., a bound state of a quark and an antiquark? The strong interaction Hamiltonian commutes with parity. Mesons, being strong interaction eigenstates (in absence of weak interaction contamination), should also have definite parity. Can we say that mesons also carry an orbital parity $(-1)^\ell$? Apart from orbital parity there exists a contribution from intrinsic parity, which is not my concern here.
| Basically, yes to both.
When you solve the Schroedinger equation you use separation of variables, getting two equations, one for $r$ and one for $(\theta,\phi)$. The first involves the potential $V(r)$ but the second does not. So you end up with the $Y_{\ell m}(\theta, \phi)$ spherical harmonics whatever the form of $V(r)$: Coulomb (as in hydrogen), screened Coulomb (as in Helium) or confining QCD (as in mesons). Provided the potential depends only on the distance and not on $\theta$ or $\phi$,
it's always the same story as far as the angular dependence is concerned.
So for example the $a_1$ meson is a $u$ or $d$ quark and
$\overline u$ or $\overline d$ antiquark, just like a $\pi$, but orbiting in an $\ell=1$ state and with their spins aligned. The total angular momentum $J$ is also 1 (as $1+1 = 0, 1$ or $2$). The overall parity is $+1$ (in contrast to the $\pi$, for which it is $-1$), as the $-1$ from $(-1)^\ell$ multiplies the intrinsic parities, which are opposite for $q$ and $\overline q$.
s,p,d,f... are just code for $\ell=0,1,2,3... $. They stand for sharp, principal, diffuse and fine in old spectroscopists' notation.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/438663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Are there any general results about the nodes of energy eigenfunctions in higher dimensions? A well-known result of quantum mechanics is that for a single particle in one dimension in a bounding potential $V(x)$ that goes to $+\infty$ as $x \to \pm \infty$, the energy eigenfunctions are discrete and the $n$th eigenfunction has exactly $n-1$ nodes at which $\psi(x) = 0$. (Moreover, we can say more - for example, between any two consecutive nodes in the $n$th eigenfunction, there exists a node in the $(n+1)$th eigenfunction.)
Do any similar results apply for single particles in higher than one dimension, or for multiparticle systems (for which the wave function is defined on configuration space rather than real space)? If not, is there an explicit example of a higher-dimensional or multiparticle system whose ground state wavefunction has a node?
| The result is actually applicable to 1d-equivalent motion, and as such is applicable to the radial part of the Schrodinger equation in any dimension if this radial part can be separated.
In general, the twist is that the equivalent 1d motion depends on the effective potential - in the case of a 3D central potential the effective 1d potential would include the centrifugal part proportional to $\ell(\ell+1)/r^2$ - so the result may be angular-momentum dependent or might depend on other parameters in the effective potential.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/438920",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Why does gauge invariance in electrodynamics mean that there are redundant degrees of freedom? It is possible to choose different gauges in electrodynamics. I am familiar with two of them: Coulomb gauge and Lorenz gauge. Let us stick to the Coulomb gauge. It sets $$\nabla\cdot\vec{A}=0.$$ The wisdom is that with this choice the physical electric and magnetic fields $\vec{E},\vec{B}$ do not change. But there is more to it. It is also important for me to understand why this gauge condition implies that there are superfluous degrees of freedom.
What are these superfluous and non-superfluous degrees of freedom? With which mathematical quantities should we identify them?
First of all, at each spacetime point, we have four numbers $$\phi(\vec{x},t),A_1(\vec{x},t),A_2(\vec{x},t),A_3(\vec{x},t).$$ I understand these four numbers as the four degrees of freedom. Now, the Coulomb gauge means that the latter three can be related, without any loss of generality, through the differential equation $$\partial_1A_1+\partial_2A_2+\partial_3A_3=0.$$ Given this, how should I understand the rest?
| The existence of "superfluous" degrees of freedom is implied not so much by the gauge condition as by the gauge transformations themselves.
The point is that your four "numbers", i.e. functions, $\phi$ and $A_i$ are equivalent to four other functions, $\phi + \dot\alpha$ and $\vec A+\text{grad}\, \alpha$ for any function $\alpha(t,x)$. (I might have mixed up a sign somewhere, but that doesn't matter here.) So, for example, the values of the potentials at any given point in spacetime can be changed to any other value, and physically measurable quantities have to be gauge invariant -- such as the $\vec E$ and $\vec B$ fields and certain contour integrals (cf. Aharonov-Bohm effect).
The gauge conditions you impose serve to fix the gauge transformations. Thus, they
*
*need to be attainable by a gauge transformation (i.e., you should be able to find an $\alpha$ such that the gauge-transformed potentials satisfy the condition), and
*preclude further gauge transformations (i.e., there should be no other such $\alpha$).
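To make the redundancy concrete, here is a small symbolic check (a sketch using sympy; the particular $\vec A$ and $\alpha$ below are arbitrary illustrative choices, not physical fields). The magnetic field $\vec B = \nabla \times \vec A$ is unchanged when $\vec A \to \vec A + \text{grad}\,\alpha$, because the curl of a gradient vanishes identically:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')

# Arbitrary gauge function alpha(t, x, y, z) -- an illustrative choice
alpha = sp.sin(x) * sp.exp(-y) * t

# An arbitrary sample vector potential A (illustrative, not a physical field)
A = sp.Matrix([y * z, x * t, sp.cos(z)])
A_gauged = A + sp.Matrix([sp.diff(alpha, v) for v in (x, y, z)])

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

# B = curl A is identical for both potentials: the "extra" degree of
# freedom carried by alpha is physically invisible
assert sp.simplify(curl(A_gauged) - curl(A)) == sp.zeros(3, 1)
print("B is gauge invariant")
```

Since $\alpha$ is completely arbitrary, a whole function's worth of freedom in the potentials leaves $\vec B$ untouched -- that is the superfluous degree of freedom the gauge condition is meant to fix.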
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/439044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Relativistic space rocks. Are they possible? I was thinking that the Universe is full of extreme events. Colliding galaxies, exploding stars, colliding planets (like in one of models of our Moon formation), colliding black holes...
Can it be possible, that such event would accelerate some rocks to let's say $0.1c$ ?
Do we observe such objects? If not, why?
| Yes, if a rock is on a slingshot trajectory around a black hole or a neutron star. However, that speed would be relative to the black hole or neutron star; the speed of the rock relative to Earth can be very different. I think spaghettification would not be an issue at $0.1c$, but actual calculations need to be done. And even if it is an issue at $0.1c$, there can be rocks that can survive it.
This is not likely to happen as a result of explosive events like supernova etc. Because that causes a sudden acceleration and will likely vaporize the rock or crush it into dust.
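For a rough sense of the speeds available near compact objects, here is a back-of-the-envelope sketch (the neutron-star mass and radius are typical textbook values, and the Newtonian escape-speed formula is only an order-of-magnitude guide this close to such an object, where GR corrections matter):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

# Typical neutron star: ~1.4 solar masses, ~12 km radius
M = 1.4 * M_sun
R = 12e3           # m

# Newtonian escape speed from the surface
v_esc = math.sqrt(2 * G * M / R)
print(f"escape speed ~ {v_esc / c:.2f} c")  # well above 0.1c
```

This only shows the characteristic speed scale near the object; how much of it a rock could actually retain on the way out depends on the details of the encounter.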
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/439315",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 1
} |
How do we know quantum entanglement works no matter the distance? It is said quantum entanglement works regardless of distance. 2 particles can be entangled and information is shared instantaneously, even if they are lightyears away from each other.
But how do we know this still works with such a vast distance between both particles?
I can image experiments in a lab, or even on opposites sides of the planet, but not with light years between them.
So how do we know?
| I consider the question of how far quantum entanglement works this way:
For as long as entanglement is considered a mysterious quantum phenomenon and discussed in terms of wave functions or other such mathematical descriptions, it is hard to think of it as something that can survive great distances across space (and time).
However, if you go right back to the root of it: we discovered entanglement when we were experimenting with correlated properties. Properties we know to be correlated because, if they were not, conserved quantities like angular momentum would not be conserved.
Now, while GR leaves some loopholes and the universe can create energy, there are no indications that conservation is strictly local, or that energy, momentum, and whatever other conserved quantities exist can be trivially violated if you just add a bit more space to the middle of the experiment.
In other words, any distance we can show is too short to cause a symmetry violation is also a distance over which we can expect quantum entanglement to work.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/439450",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28",
"answer_count": 10,
"answer_id": 8
} |
How many eyes are needed to see a four-dimensional world? If a four-dimensional world were to exist, how many eyes would a creature minimally need to see it (in three dimensions)?
Three? Four?
(Bonus question: how should these eyes be spatially configured?)
| You can already see four dimensions with the two eyes you've got. It's just that two of those dimensions happen to be the same, so you don't distinguish them. But it could be otherwise.
In addition to distances up/down and left/right, you can perceive distances ahead of you by the fact that the light rays reaching your two eyes from a single object form an angle, and from that angle you can infer the distance to the object.
Or you can perceive distances ahead of you from the fact that your lenses need to adjust their focal length in order to bring things into focus, and from the amount of adjustment you can again infer the distance to the object.
The info you get from the angle and the info you get from the focal length is, in ordinary circumstances, redundant, so it adds a total of one dimension to what you can perceive. But in principle, if you were in an artificial environment where you got one piece of information from the angle and an independent piece of information from the focal length, you'd have all the information you need to perceive an additional dimension.
Whether you brain would learn to make use of this information in a way that feels subjectively like seeing a fourth dimension is an open question --- but of course any non-standard configuration of eyes leads to the same open question.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/439518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Can a free electron accelerating in a gravitational field absorb photons? A 'free' electron accelerated in an electromagnetic field can both absorb and emit a photon. What about an electron accelerating in a gravitational field?
Edit:
Some users have suggested that the question is a duplicate. However, my question asks about photon absorption, not radiation of photons.
|
A 'free' electron accelerated in an electromagnetic field can both absorb and emit a photon.
Both electrons and photons belong to the table of elementary particles in the standard model of particle physics, i.e. are quantum mechanical entities and have to be modeled as such. Thus, an electron does not absorb a photon; it interacts with a photon according to the rules of quantum mechanics. Feynman diagrams are used to model the integrations necessary to find the probabilities of interacting electrons and photons, in this case called Compton scattering.
What about an electron accelerating in a gravitational field?
If we accept the effective quantization of gravity, i.e. that gravitons will be part of the future standard model of elementary particles, an analogous diagram will exist, where a graviton will replace one of the photons in the diagrams.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/439617",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is the difference between Wien's Displacement Law for peak frequency vs peak wavelength? While doing research for a high school report I came across the fact that WDL actually has two forms, one for peak frequency and one for peak wavelength, and that these two forms are not the same and can not be used interchangeably.
My question is why peak frequency isn't the same as peak wavelength? That is, since wavelength is directly determined by frequency (since frequency = speed of light divided by wavelength), there is a one-to-one correspondence between a given wavelength and its frequency. Therefore why doesn't a peak in frequency correspond to a peak in wavelength, and vice versa (meaning that the two forms of WDL could be used interchangeably)?
I know that this question was already posted elsewhere, but I did not understand the answers. Since I am a high school student, complicated terminology can fly right over my head, so I would greatly appreciate it if someone could take the time to explain it clearly and simply (i.e. no monster equations).
| The blackbody radiation curve presents the spectral intensity density, that is, the radiant power per unit of ... whatever unit you please. Watts per unit interval of wavelength or watts per unit interval of frequency.
The problem is that a unit interval of wavelength is not the same as a unit interval of frequency, and more to the point, the relationship between the intervals changes depending on where you are in the spectrum. What does remain constant is the fractional interval $$\frac{\Delta\lambda}{\lambda}=\frac{\Delta f}{f}$$
As a consequence of the "shifting size" of a unit interval, the maximum value of intensity per unit interval of frequency does not occur at the same place in the spectrum as the maximum value of intensity per unit interval of wavelength. Hope that helps.
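To see this concretely, here is a numerical sketch (assuming $T = 5800\ \mathrm{K}$, roughly the Sun's surface temperature). It evaluates Planck's law per unit wavelength and per unit frequency on dense grids and locates each maximum; the frequency-form peak, converted to a wavelength, lands far from the wavelength-form peak:

```python
import numpy as np

h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K
T = 5800.0      # temperature, K (roughly the Sun's surface)

# Planck's law per unit wavelength interval
lam = np.linspace(50e-9, 3000e-9, 200_000)
B_lam = (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

# Planck's law per unit frequency interval
nu = np.linspace(1e13, 2e15, 200_000)
B_nu = (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))

lam_peak = lam[np.argmax(B_lam)]
nu_peak = nu[np.argmax(B_nu)]

print(f"peak of wavelength form: {lam_peak * 1e9:.0f} nm")
print(f"peak of frequency form, as a wavelength: {c / nu_peak * 1e9:.0f} nm")
```

The two maxima differ because the two spectral densities weight the spectrum by different interval sizes, exactly as described above.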
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/440084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why isn't the emissive power of a black body 1? My textbook has a question which says that the emissive power of a black body isn't one, but the answer states that the absorptive power is 1, considering that $$e=a \tag{Kirchhoff's law}$$
and a black body is defined as an object which has $$e=1$$ Then why isn't $a=1$
*Choose correct options
(a) Good absorbers of a particular wavelength are good emitters of the same wavelength. This statement was given by Kirchhoff.
(b) At low temperature of a body the rate of cooling is directly proportional to the temperature of the body. This statement was given by Newton.
(c) Emissive power of a perfectly black body is 1
(d) Absorptive power of a perfectly black body is 1
The answer is given as (a,d)
Waves and Thermodynamics by DC Pandey 15th edition
| The emissivity ($e$) of a perfectly black body is 1. Emissivity is the ratio of energy emitted by a body to the energy emitted by a black body. The emissive power is the energy emitted per unit area per unit time. The emissive power of a black body is not 1, and it varies with temperature.
It is true that emissivity = absorptive power = 1 for a black body, i.e., $e=a=1$
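As a quick illustration of why the emissive power itself cannot be the dimensionless number 1: for a perfect black body it is given by the Stefan-Boltzmann law, $P/A = \sigma T^4$, a temperature-dependent quantity measured in W/m² (a minimal sketch):

```python
# Emissive power (radiant exitance) of a perfect black body:
# P/A = sigma * T^4 -- it has units of W/m^2 and grows steeply with T,
# so it is clearly not a fixed dimensionless "1".
sigma = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

for T in (300, 1000, 6000):  # K
    print(f"T = {T:5d} K  ->  emissive power = {sigma * T**4:.3e} W/m^2")
```

It is the emissivity (and the absorptive power) of the black body that equals 1, not the emissive power.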
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/440304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Geodesics of anti-de Sitter space It is said that (p. 9), given the anti-de Sitter space $\text{AdS}_2$, let's say in the static coordinates
$$ds^2 = -(1 + x^2) dt^2 + \frac{1}{(1+x^2)} dx^2$$
Every timelike geodesic will cross the same point after a time interval of $\pi$. That is, if $(x_0, t_0) \in \gamma$, then $(x_0, t_0 + \pi) \in \gamma$.
So I've been trying to find out how to show it. The non-zero Christoffel symbols are
$${\Gamma^x}_{xx} = - \frac{x}{1+x^2},\ {\Gamma^x}_{tt} = x + x^3, {\Gamma^t}_{xt} = {\Gamma^t}_{tx} = \frac{x}{1+x^2}$$
So that the geodesic equation is
\begin{eqnarray}
\ddot{x}(\tau) &=& \frac{x}{1+x^2} \dot{x}^2 - \dot{t}^2 (x + x^3)\\
\ddot{t}(\tau) &=& -2 \frac{x}{1+x^2} \dot{x} \dot{t}\\
\end{eqnarray}
We also have the following two equalities: a timelike geodesic is such that $g(u,u) = -1$
$$\frac{1}{(1+x^2)} \dot{x}^2 -(1 + x^2) \dot{t}^2 = -1$$
and since the metric is static, there is a timelike Killing vector $\xi$ such that $g(\xi, u)$ is a constant.
$$(1 + x^2) \dot{t} = E$$
or
$$\dot{t} = \frac{E}{(1 + x^2)}$$
This gives us
$$\dot{x}^2 = -(1 + x^2) + E^2$$
And so
\begin{eqnarray}
\ddot{x}(\tau) + x &=& 0\\
\ddot{t}(\tau) &=& -2 x \dot{x} \frac{E}{(1 + x^2)^2}\\
\end{eqnarray}
Which gives us for a start that $x(\tau) = A \sin(\tau) + B \cos(\tau)$. Not quite periodic in $\pi$ (it should be $2\pi$ here), but more importantly this periodicity is in $\tau$ only and not in $t$, and it doesn't seem that $t = \tau$ in this scenario. Is there something wrong here or did I commit a mistake, either in interpreting the statement or the derivation here?
Given $x(\tau) = \sin(\tau)$, Wolfram Alpha gives out the following solution for $t(\tau)$, for instance :
$$t(\tau) = c_1 \tau + c_2 - \frac{1}{2\sqrt{2}} \arctan(2 \sqrt{2} \tan(\tau))$$
which doesn't seem to be particularly helpful here.
| "Every timelike geodesic will cross the same point after a time interval of $\pi$" will be true if the half-period is $\pi$. You found the general solution for $x(\tau)$, namely
$$x(\tau)=A\sin\tau+B\cos\tau$$ or, alternately, $$x(\tau)=A\sin{(\tau-\tau_0)}.$$ When $\tau$ increases by $\pi$, $x$ does come back to what it was, after a half-period.
But we want to show that, when $x$ comes back, $t$, and not just $\tau$, has increased by $\pi$. So what is $t$ doing?
When you substitute $x(\tau)=A\sin{(\tau-\tau_0)}$ into
$$\frac{\dot{x}^2}{1+x^2}-(1+x^2)\dot{t}^2=-1$$
and solve for $t$, you get
$$t(\tau)=\tan^{-1}{[\sqrt{A^2+1}\tan{(\tau-\tau_0)}]}+t_0.$$
To see what is going on here, let's take $\tau_0$ and $t_0$ to be zero (since they just represent uninteresting time translations) and look at the function $\tan^{-1}{(\sqrt{A^2+1}\tan{\tau})}$. Plotting it for, say, $A=\sqrt{3}$ (just an arbitrary value as an example), the principal branch of the arctangent produces a repeating pattern of rising segments with an apparent downward jump of $\pi$ each time $\tau$ passes an odd multiple of $\pi/2$.
But $t$ isn't really discontinuous like this. The arctangent function is multivalued, and we have to take the appropriate branch of it so that $t$ increases continuously with $\tau$. This means we move the second branch up by $\pi$, the third branch up by $2\pi$, etc., to get a continuous, monotonically increasing function $t(\tau)$.
The result is that whenever $\tau$ increases by $\pi$, so does $t$!
So, to summarize, the timelike geodesics are
$$\begin{align}
x&=A\sin\tau \\
t&=\tan^{-1}{[\sqrt{A^2+1}\tan{\tau}]}
\end{align}$$
where we have dropped the uninteresting time-translation constants.
When $\tau$ increases by $\pi$, $t$ also increases by $\pi$, and $x$ comes back to what it was. This is what you were trying to show.
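As a numerical sanity check (a sketch; the values of $A$ are arbitrary), note that substituting $x = A\sin\tau$ into $\dot{x}^2 = E^2 - (1+x^2)$ gives $E = \sqrt{A^2+1}$. Integrating $\dot{t} = E/(1+x^2)$ over one half-period of $x$ then confirms that $t$ advances by exactly $\pi$, independent of $A$:

```python
import numpy as np

# Verify that t advances by pi whenever tau advances by pi, for the
# timelike geodesics x(tau) = A sin(tau) with E = sqrt(1 + A^2).
results = {}
for A in (0.5, np.sqrt(3.0), 10.0):
    E = np.sqrt(1.0 + A**2)
    tau = np.linspace(0.0, np.pi, 200_001)
    dt_dtau = E / (1.0 + (A * np.sin(tau))**2)   # from (1 + x^2) t' = E
    h = tau[1] - tau[0]
    # trapezoidal rule for delta t = integral of dt/dtau over [0, pi]
    delta_t = h * (0.5 * dt_dtau[0] + dt_dtau[1:-1].sum() + 0.5 * dt_dtau[-1])
    results[A] = delta_t
    print(f"A = {A:6.3f}:  delta t = {delta_t:.6f}   (pi = {np.pi:.6f})")
```

This matches the closed-form integral $\int_0^\pi \sqrt{A^2+1}\,d\tau/(1+A^2\sin^2\tau) = \pi$, which holds for every $A$.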
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/440470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
} |