What happens to the period of a pendulum if a spherical bob were to spin around the axis of the string? Consider a normal pendulum with a spherical bob oscillating back and forth. Would the period of the pendulum be longer, shorter or unchanged if the bob were to spin around the axis of the string that holds it?
| You say the pendulum has a bob "on a string". That's a complicated geometry to analyze, because even the spherical bob that is not spinning will have a complex motion - if you consider the angle of the string to the vertical $\theta_1$ and the angle of the bob to the vertical $\theta_2$, then these two can oscillate either in phase, or out of phase, for the two possible modes.
Alternatively, suppose the bob is rigidly linked to the rod. In that case, when you make the bob spin, it will act like a gyroscope and it will slow down the pendulum. In fact, if you look from the top, the angular momentum needs to remain constant - so as the angle of the pendulum drops, the entire thing needs to start precessing. The motion will be complicated (look up nutation for more on this, or see this earlier question).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/359279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Can a measurement partially "collapse" a wavefunction? Let's say I have a wavefunction $\Psi$ which can be decomposed into a sum of its energy eigenstates:
$$ \Psi = a|1\rangle + b|3\rangle + c|8\rangle + d|10\rangle$$
Where, of course, $|a|^2 + |b|^2 + |c|^2 + |d|^2 =1 $.
And let's say I have a device which can measure the energy of this wavefunction. Unfortunately, the device has an inherent uncertainty of $\pm3$.
I measure $\Psi$ and find it to have an energy of $7\pm3$. After my measurement the wavefunction has "collapsed" (to some extent?). I can think of a few possibilities for post-measurement $\Psi$:
1) $\Psi$ really is in either $|8\rangle$ or $|10\rangle$. The problem statement is wrong: any uncertainty in the measured energy is a laboratory issue. It must have an exact physical answer.
2) $\Psi = c'|8\rangle + d'|10\rangle$
Where, $|c'|^2 + |d'|^2 =1 $.
I might even go so far as to say $c' = c/(|c|^2 + |d|^2)^{\frac{1}{2}}$
My immediate answer: the measurement simply eliminated the possibility of the $|1\rangle$ and $|3\rangle$ eigenstates.
3) $\Psi = e|4\rangle + f|5\rangle + g|6\rangle + h|7\rangle +k|8\rangle + m|9\rangle + n|10\rangle$
The "measurement" isn't really a measurement; just a disruption to the wavefunction.
Would any of my answers be correct? Is what I've described not a 'measurement' according to QM? What would it be then?
| A Positive Operator Valued Measurement (POVM) to describe your measurement could be given by elements $$M_{i}=\frac{1}{3+\min\{i,4\}}\sum_{|j-i|\leq 3} |j\rangle\langle j|$$ for $i=1,2,3,...$, which are positive semidefinite and sum up to the identity. Maybe in practice not all outcomes within your $\pm 3$ uncertainty have the same probability, but I assume that's the measurement you describe in your question.
The post-measurement state following outcome '7' is $$|\Psi'\rangle\propto M_7|\Psi\rangle=c|8\rangle + d |10\rangle,$$ exactly as you suggest in 2).
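As a sanity check, here is a short numerical sketch of this POVM answer. The equal amplitudes $a=b=c=d=1/2$ are made up purely for illustration; the code applies the diagonal element $M_7$ to the state and renormalizes:

```python
import numpy as np

# Hypothetical equal amplitudes for Psi = a|1> + b|3> + c|8> + d|10>
amps = np.zeros(11)                          # basis states |0> ... |10>
amps[[1, 3, 8, 10]] = 0.5                    # |a|^2+|b|^2+|c|^2+|d|^2 = 1

# POVM element for outcome i = 7: (1/(3+min(i,4))) * sum_{|j-i|<=3} |j><j|
i = 7
M7 = np.diag([1 / (3 + min(i, 4)) if abs(j - i) <= 3 else 0.0
              for j in range(11)])

post = M7 @ amps                             # |Psi'> ~ M_7 |Psi>
post = post / np.linalg.norm(post)           # renormalize

# Only the |8> and |10> components survive, as in possibility 2)
surviving = np.flatnonzero(np.abs(post) > 1e-12)
```

With these made-up amplitudes the post-measurement state is $(|8\rangle+|10\rangle)/\sqrt2$, matching the renormalization $c'=c/(|c|^2+|d|^2)^{1/2}$ suggested in the question.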
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/359465",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Do fields describing different particles always commute? Is it true that field operators describing different particles (for example a scalar field operator $\phi (x) $ and a spinor field operator $\psi (x) $) always commute (i.e. $ [\phi (x), \psi (y) ]=0, \forall x,y $) in interacting theory?
Or is it true only at equal times? (i.e. $ [\phi (t,\vec x), \psi (t, \vec y) ]=0, \forall \vec x, \vec y $)
Or is it in general not true even at equal times?
Finally, if the fields in account are both fermionic must the commutator be replaced with an anticommutator?
| No.
In full generality, the super-commutator of two fundamental fields is identical to the Dirac bracket of the corresponding classical variables (modulo the standard obstructions). If the system is unconstrained, then the Dirac bracket agrees with the Poisson bracket, which means that two independent fields super-commute. But if there are non-trivial constraints, then the Dirac bracket may very well mix different fields, so that the corresponding operators will fail to super-commute.
As a trivial example, consider the non-zero commutator of the scalar potential and the Dirac field in QED in the Coulomb gauge (see eq. 15.11 in Bjorken & Drell), or how a Stückelberg field fails to commute with its associated B-field (see eq. 38 in 1510.03213). More generally, the Nakanishi-Lautrup field does not usually commute with the rest of the fields of the theory.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/359610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 3,
"answer_id": 2
} |
Question about Lorentz scalar I have a simple question about Lorentz Scalars.
In my course they are introduced like this:
A function $\phi$ is a Lorentz scalar if it follows the following rules:
$\phi(x)=\phi'(x')$ and $\phi'=\phi$
But what does $\phi'$ mean? To me, for a scalar quantity, $\phi'$ doesn't mean anything.
Indeed, since I have a scalar, the only change of coordinates I can make is in the variable: $x=f(x')$.
And we have $\phi(x)=\phi(f(x'))$.
So, maybe I misunderstood something, but what would $\phi'$ mean in a general case?
Furthermore, do you agree with me if I say that in fact all scalar quantities in physics must be Lorentz scalars (because, as I just wrote, the only thing we do is a change of variable, so we don't need any "property" of the quantity described by the scalar)?
| Lorentz scalars are a subset of Lorentz invariant quantities. A Lorentz scalar is a scalar that is invariant under Lorentz transformation. For example, the dot product of a four-vector with itself is a Lorentz scalar. The 4-velocity is defined as:
$$\textbf{U}=\gamma(c,\mathbf{v})$$
So the dot product of 4-velocity with itself is:
$$\textbf{||U||}^2=U^{\mu}U_{\mu}=\gamma^2\left(c^2-v^2\right)=c^2$$
The 4-momentum is defined as:
$$\textbf{P}=m\textbf{U}=\gamma\left(mc,mv\right)=\gamma\left(\frac{E}{c},\textbf{p}\right)$$
The dot product of 4-momentum with itself is:
$$\textbf{||P||}^2=\frac{E^2}{c^2}-\textbf{p}^2=m^2c^2$$
A Lorentz scalar is a scalar that remains the same in every inertial frame of reference. Energy is a scalar quantity, but it can have different values in different frames of reference, so it is not a Lorentz scalar.
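A quick numerical illustration of this last point (units with $c=1$; the mass $m$, particle speed $v$, and boost speed $u$ are made up): a Lorentz boost changes $E$ and $p$ individually, but leaves $E^2 - p^2 = m^2c^2$ unchanged.

```python
import numpy as np

# Units with c = 1; mass, particle speed, and boost speed are illustrative
m, v, u = 2.0, 0.6, 0.8

gamma_v = 1 / np.sqrt(1 - v**2)
E, p = gamma_v * m, gamma_v * m * v          # 4-momentum components (E, p)

gamma_u = 1 / np.sqrt(1 - u**2)              # boost to a frame moving at u
E2 = gamma_u * (E - u * p)
p2 = gamma_u * (p - u * E)

# E differs between frames, but E^2 - p^2 = m^2 holds in both
inv_before = E**2 - p**2
inv_after = E2**2 - p2**2
```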
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/359802",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Wick contraction corresponding to a connected diagram in $\phi^4$-theory to second order I am trying to understand the diagrams that come from a two-point correlation function, $$\langle \Omega|T\{\phi(x)\phi(y)\}|\Omega\rangle,$$ in $\phi^4$-theory. The zeroth-order contribution, i.e. at $\lambda^0$, is simply $D_F(x-y)$, and at $\lambda^1$ we get an additional internal point such that $$-\underbrace{12}_{\text{possible connected contractions}}\frac{i\lambda}{4!}\int d^4z\, \phi(x)\phi(y)(\phi(z))^4\\ = -12\frac{i\lambda}{4!}\int d^4z\, D_F(x-z)D_F(z-z)D_F(y-z)$$
As for the contributions from the $\lambda^2$ term I can understand 2 of the 3 connected contractions of $\phi(x)\phi(y)(\phi(z))^4(\phi(w))^4$. The first one is $$-P\frac{\lambda^2}{4!4!}\int d^4z d^4w D_F(x-z)D_F(y-z)D_F(z-w)D_F(z-w)D_F(w-w),$$
and the second one is
$$-P\frac{\lambda^2}{4!4!}\int d^4z d^4w D_F(x-z)D_F(y-w)D_F(z-w)D_F(z-z)D_F(w-w),$$ where $P$ is the number of possible configurations of the contraction. The third contraction yields the last diagram in the photo below, while the other diagrams correspond to the contractions above. My question is really how the last diagram would look in terms of contractions when using Wick's theorem. What does the expression look like for that diagram? There's clearly something I have misunderstood!
| Although the third $\lambda^2$ diagram is a little bit unusual in that there are three lines going between the two vertices, the rule continues to apply that there is a factor of $D_F$ for each line, meaning that the third $\lambda^2$ term is
$$-P\frac{\lambda^2}{4!4!}\int d^4z d^4w D_F(x-z)D_F(y-w)[D_F(z-w)]^3\ .$$
(I've continued using your notation of identifying the vertices as $z$ and $w$ in the expression, even though they are identified as $z_1$ and $z_2$ in the diagram.)
You've actually already correctly used multiple identical factors of $D_F$ when multiple lines connect two vertices; your expression for the first $\lambda^2$ term contains two factors of $D_F(z-w)$.
For calculating $P$, there are eight ways that the $\phi(x)$ can contract with one of the four $\phi(z)$'s or four $\phi(w)$'s, four ways that the $\phi(y)$ can contract with whichever of $\phi(z)$ or $\phi(w)$ that $\phi(x)$ didn't contract with, and $3!$ ways that the remaining three $\phi(z)$'s can contract with the remaining three $\phi(w)$'s, so
$$P=8\cdot 4\cdot 3!=192\ .$$
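The counting $P=192$ can also be checked by brute force: enumerate all $9!!=945$ complete contractions of the ten fields $\phi(x)\phi(y)(\phi(z))^4(\phi(w))^4$ and keep those with the three-line "sunset" topology — one external point attached to each vertex, plus three $z$–$w$ lines. A small sketch:

```python
# Fields 0..9: x, y, four copies of phi(z), four copies of phi(w)
fields = ['x', 'y'] + ['z'] * 4 + ['w'] * 4

def pairings(idx):
    """Yield every perfect matching of the index list (9!! = 945 for 10 fields)."""
    if not idx:
        yield []
        return
    first, rest = idx[0], idx[1:]
    for k in range(len(rest)):
        for tail in pairings(rest[:k] + rest[k + 1:]):
            yield [(first, rest[k])] + tail

def is_sunset(matching):
    """x and y each attach to a different vertex; three z-w lines remain."""
    lines = sorted(tuple(sorted((fields[a], fields[b]))) for a, b in matching)
    return lines in (
        sorted([('x', 'z'), ('w', 'y')] + [('w', 'z')] * 3),
        sorted([('w', 'x'), ('y', 'z')] + [('w', 'z')] * 3),
    )

total = list(pairings(list(range(10))))
count = sum(is_sunset(m) for m in total)      # expect 8 * 4 * 3! = 192
```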
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/359939",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
What are the real life examples of Double Dirac-Delta Potential barrier/well?
My question is: do we see in nature any potential which is close to the double Dirac-delta potential barrier/well? If yes, then which are those?
Thanks in advance.
| This is often used as a simplified model for a diatomic molecule. Each $\delta$ represents an atom, and the distance between the wells is the nuclear separation.
The solutions can be identified with the so-called bonding and anti-bonding wave functions. The strength parameter for the potential is often set by considering the dissociation regime, i.e. imagining that the two nuclei are so far apart that the electron is trapped in a single $\delta$ well (and - say - setting the binding energy of the one bound state to some known value for the atom).
For instance, when the nuclei are very far apart, the electron sees only one well. Thus, the value of $V_0$ for a model of the $H_2^+$ ion could be chosen to produce a binding energy of $-13.6$ eV in a single isolated $\delta$-well.
This is a fairly common example in many textbooks. If you are interested in published work, see:

* Ahmed, Zafar, et al. "Revisiting double Dirac delta potential." Eur. J. Phys. 37 (2016): 045406. (arXiv version here.)
* Lapidus, I. Richard. "One-dimensional model of a diatomic ion." American Journal of Physics 38.7 (1970): 905-908. (behind a paywall)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/360201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If I say time is the fourth dimension am I wrong? As far as I know the prevailing view is that time is the fourth dimension, but I've read there is also a spatial fourth dimension and even higher spatial dimensions after that so I hesitate to say that time is the fourth dimension. So, if I say time is the fourth dimension am I wrong?
| You can invent any number of mathematical spaces. There's no reason why time can't be the first dimension of the one you invent, nor why it couldn't be the seventh or that you just have no time dimensions, and infinite other dimensions.
All that matters for whether it makes physical sense to do so or not is whether you can use it to obtain useful [and hopefully at some point falsifiable] results.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/360621",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Planetary motion: with a different nature of potential. Condition for circular orbit
Consider a particle moving in the potential $U (r)= -A/r^n$, where $A>0$. What are the values of $n$ which admit stable circular orbits?
I tried to solve by putting $dr/dt=0$ in the total energy equation $E= T + U_\mathrm{eff}$, but it didn't work. Then I came across a solution which said that for the orbit to be circular, $U_\mathrm{eff}(r)$ needs to have a minimum when plotted against $r$, where $U_\mathrm{eff}$ is the effective potential $(L^2/2mr^2+ U (r))$. But I don't understand why it has to, because when $n=1$, where circular orbits are possible, $U_\mathrm{eff}$ does not have a minimum since it varies with $1/r$.
| I'm not sure where you got this idea:
when $n=1$, where circular orbits are possible, $U_\mathrm{eff}=L^2/2mr^2+ U (r)$ does not have a minimum since it varies with $1/r$.
Here, have a look at that function:
At small $r$, the $+1/r^2$ term dominates and the function is positive and monotonically decreasing. At large $r$, the $-1/r$ term dominates and the function is negative but monotonically increasing. The only way to reconcile those two behaviours is to have a minimum in the middle, which can easily be found by setting $\frac{\mathrm dU_\mathrm{eff}}{\mathrm dr}=0$.
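For the $n=1$ case this is easy to confirm numerically. With made-up constants $L=m=A=1$, the minimum of $U_\mathrm{eff}$ sits at $r=L^2/(mA)$, exactly where $\mathrm dU_\mathrm{eff}/\mathrm dr=0$:

```python
import numpy as np

L, m, A = 1.0, 1.0, 1.0                      # made-up constants, n = 1 case

def U_eff(r):
    return L**2 / (2 * m * r**2) - A / r     # centrifugal barrier plus -A/r

r = np.linspace(0.05, 20.0, 400001)
r_num = r[np.argmin(U_eff(r))]               # numerical location of the minimum
r_exact = L**2 / (m * A)                     # from dU_eff/dr = -L^2/(m r^3) + A/r^2 = 0
```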
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/360756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Terminology confusion - "particle" I am confused about the word "particle" being used in academic contexts. Some professors at my university are adamant on the fact that particles do not exist, and only fields, as per QFT. One of them even showed me a citation from one of Julian Schwinger's QM books where he himself states this supposed fact. I've been going around to different professors asking for explanations, but I'm still a bit confused, so I thought I could make a consultation here. Some of the profs I've asked say there are only simulations of particles, yet "particle physics" is still a valid area of research, and even the Wikipedia page (I know) for QFT defines it as "...the theoretical framework for constructing quantum mechanical models of subatomic particles in particle physics." So I am just confused about the usage of the word "particle" being used today, if QFT is the widely accepted theory, and QFT says that there are no particles, only excitations of fields.
This is an ontological question. Different people may express different points of view on the subject. There is some truth to both particles and fields being fundamental.
In a sense, all our experiments involve particles. We accelerate and collide hadrons and leptons, and are generally interested in the particles that they produce. Viewed this way, one could say that the fields are just a convenient instrument to describe this.
On the other hand, magnetic and electric fields are observable (e.g. through particle tracks in a cloud chamber), which may suggest that the fields are fundamental.
All in all, both views are correct as long as you know how it works from underlying principles.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/360902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 7,
"answer_id": 0
} |
Is energy $E$ in Schrödinger equation an observable/ Can $E$ be measured? Take this quantum approach to estimate mean energy of a molecule:
$$\langle\psi|H|\psi\rangle=\overline E$$
Question:
Is $E$ an observable? How can we compare it to an experimental value? i.e. how can it be measured experimentally, and what are the states involved (as energy is all about differences, there must be two states)?
Edit
This is not a question about how an observable is theoretically defined.
Any help?
| Absolute energy is observable since it is the source of gravity. In practice this is hard to do but its observable status is indisputable. The Schrödinger energy is not absolute but relative to the rest energy of the particles that make up the system. Relative energy is observable by observing the products of a transition between states. For example binding energy can be observed by calorimetry, or if the binding reaction is optical such as in atomic deionisation, by observing emitted light.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/361132",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
Do gravitational sources move along ‘geodesics’? Assume we have a system of, say, two bodies which are orbiting each other. Now assume that we wish to find an equation for the orbits of the two gravitational sources. Do they follow a ‘geodesic’ path, if we assume that the sources may or may not be singularities, which in this case may require a puncture method, or so I’ve heard?
I have also read several articles which suggest that a source does move ‘geodesically’ if one does not take into account the self-interaction of the spacetime field. If we were to take into account the self-interaction, back-reaction and whatnot, will it still move ‘geodesically’ in the metric of the entire manifold?
| I think this is largely a matter of terminology.
If we take for example a test particle moving in a Schwarzschild metric, then we can calculate the geodesics in the usual way. However, a real particle has a non-zero mass (or energy) and therefore it perturbs the metric. So the particle is not moving in a Schwarzschild metric but instead in a time-dependent metric that is similar to the Schwarzschild metric but not identical. So the actual path of the particle will be different from the geodesic calculated assuming a pure Schwarzschild geometry.
Typically we assume the particle is small compared to the mass it orbits, so the perturbation to the background metric is small. We would also assume the perturbation is approximately linear, so we get:
$$ g_{\alpha\beta} = s_{\alpha\beta} + h_{\alpha\beta} $$
where $s_{\alpha\beta}$ is the Schwarzschild metric and $h_{\alpha\beta}$ is the perturbation caused by the non-zero mass of the orbiting particle. In that case:
* the trajectory of the particle is not a geodesic of $s_{\alpha\beta}$
* but the trajectory of the particle is a geodesic of $g_{\alpha\beta}$
How significant the difference between the trajectories is depends on the relative masses. For spacecraft, or even planets, the difference is relatively small. For example the orbit of Mercury is correctly described to within experimental error without considering the perturbation to the metric due to Mercury's mass. However the spacetime geometry of merging black holes is totally different from a simple Schwarzschild calculation.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/361301",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How exactly does vapor pressure relate to saturation pressure? I'm confused as to what exactly vapor pressure and saturation pressure are. From what I understand, vapor pressure is just the equilibrium pressure of a vapor above a liquid at some temperature. Is this not also the definition of the saturation pressure (i.e. an equilibrium pressure at some temperature)? Does a system always tend towards its saturation pressure? Or is it that there is one saturation pressure at some specific temperature, and that upon increasing the temperature, the vapor pressure will eventually reach the saturation pressure? Is the saturation pressure dependent on the overall pressure?
| You can have vapour when there is no liquid present and that vapour would exert a vapour pressure.
If however you have liquid and vapour present in dynamic equilibrium with one another then the pressure exerted by the vapour is the saturated vapour pressure.
So start off with a container with only vapour in it.
The vapour exerts a vapour pressure.
Now do something, e.g. add liquid, cool the vapour, or reduce the volume of the container, so that there is also liquid in the container.
Then the pressure exerted by the vapour is the saturated vapour pressure.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/361465",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Electric field charged disc and L'Hôpital's rule I have been looking at the electric field of a charged disk and have a question about the use of l'Hôpital's rule for the limiting case of the electric field at points along the axis with $z\gg$ the disc radius $R$.
$$E = \frac {q}{2\pi\epsilon R^2} \left(1 - \frac {z}{\sqrt{z^2+R^2}}\right)$$
I have applied l'Hôpital's rule in the limit of $R$ approaching zero, and see that the electric field approaches that of a point charge, as intuition suggests. However, when I use l'Hôpital's rule in the limit that $z$ approaches infinity, I get a repeating loop of indeterminate forms that doesn't arrive at the point-charge expression.
My question is: does this difference in results using l'Hôpital's rule have any physical or mathematical significance?
| Update as the result of a comment from @garyp this time using L'Hôpital's rule.
$$E = \frac{q}{2 \pi \epsilon}\frac{(z^2+R^2)^{\frac 12}-z}{R^2 (z^2+R^2)^{\frac 12}}$$
Now differentiate the numerator and the denominator individually, twice with respect to $R$, to get something like
$$ \frac {(z^2+R^2)^{-\frac 12} + R(.........)}{2(z^2+R^2)^{\frac 12} +R(.........)} $$
which has the limit $\dfrac{1}{2z^2}$ as $R$ tends to zero and gives the desired equation for the electric field due to a point charge.
Original answer: start from the factor $\left(1 - \dfrac {z}{\sqrt{z^2+R^2}}\right)$.
Divide top and bottom of the fraction by $z$ and expand using the binomial theorem as far as the second term, and for the electric field you will find that the $R^2$ cancels, leaving the point-charge field in terms of $z$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/361756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Distribution of loss in a transmission line to minimize power dissipation This post will ask how to distribute loss in a transmission line so that the line has a known total loss, while dissipating the least amount of power.
We'll refer to "gain" of a transmission line, but we're thinking of the case where the line is lossy, so the gain is always less than one.
This post is in a sense a warmup for a somewhat more relevant and complex question that I will post after this one is resolved.
Discrete case
Consider a short section of transmission line with a gain $G$, meaning that if a signal goes into that section with squared amplitude $A^2$, then it comes out with squared amplitude $G \, A^2$.
If this gain is really coming from losses in the line, then $G<1$.
If we cascade many sections of transmission line with gains $\{G_1, G_2,\ldots \}$, then the total gain is
$$\prod_{i=1}^n G_i = \exp \left(\sum_{i=1}^n \ln G_i \right) \, .$$
Each section of line dissipates power $P_i$, where
$$P_i = P_\text{in} - P_\text{out} = A^2 - G_i A^2 = A^2 ( 1 - G_i ) \, ,$$
with $A^2$ the squared amplitude entering section $i$.
Continuous case
Now suppose we have a continuous transmission line of length $L$ where the gain per length at each point $x$ along the line is $g(x)$.
Extending the formula for the discrete case given above, it's clear that the total gain of the line is (remember that $g(x)<1$)$^{[a]}$
$$G = \exp \left( \int_0^L dx \, \ln g(x) \right) \, . \tag{$\star$}$$
The power dissipated in a bit of line of length $\varepsilon$ at position $x$ is
\begin{align}
P(x)
=& A(x)^2 \left[ 1 - \exp \left( \int_x^{x+\varepsilon} dx' \, \ln g(x') \right) \right] \\
\approx & A(x)^2 \left[ 1 - \left( 1 + \varepsilon \ln g(x) \right) \right] \\
=& -A(x)^2 \varepsilon \ln g(x) \, .
\end{align}
The total power dissipation is of course
$$P \equiv \int_0^L dx \, P(x) = - \int_0^L dx A(x)^2 \ln g(x) \, . $$
The problem
Given a fixed value of $G$, calculate $g(x)$ that minimizes $P$.
This is a constrained optimization problem and I think some kind of variational calculus is needed.
Before we get to that, however, we should write the thing we're minimizing, $P$, in a better way by replacing $A(x)$ with an expression involving $g(x)$.
In particular, for an input amplitude $A_\text{in}$, the amplitude at a particular point $x$ along the line is
$$A(x)^2 = A_\text{in}^2 \exp \left( \int_0^x dx' \, \ln g(x') \right) \, .$$
Therefore,
\begin{align}
P
=& - \int_0^L dx \, A(x)^2 \ln g(x) \\
=& -A_\text{in}^2 \int_0^L dx \, \ln g(x) \exp \left( \int_0^x dx' \ln g(x') \right)
\end{align}
How do we minimize $P$ subject to the constraint $(\star)$?
It's pretty obvious that if the total gain is fixed, then the power dissipation is also fixed because they're the same thing.
In other words, the form of $g(x)$ should not matter.
Therefore, I suppose a rewording of this question could be "how do we prove using variational calculus that the form of $g(x)$ doesn't matter?".
$[a]$: It's weird to have $\ln g(x)$ because $g$ has dimensions of length$^{-1}$. I suppose we can imagine multiplying $g$ by some length unit and dividing $dx$ by that same unit.
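The "form of $g(x)$ should not matter" claim is easy to check numerically. Writing $\alpha(x) = -\ln g(x) \ge 0$ for the loss density, the dissipated power integrates to $A_\text{in}^2(1-G)$ for any profile with the same $\int_0^L \alpha\,dx$ (and hence the same $G$). A sketch with two made-up loss profiles:

```python
import numpy as np

def dissipated_power(alpha, L=1.0, A_in2=1.0, n=200001):
    """Integrate P = -int A(x)^2 ln g(x) dx with alpha(x) = -ln g(x)."""
    x = np.linspace(0.0, L, n)
    a = alpha(x)
    # cumulative attenuation C(x) = int_0^x alpha dx' (trapezoid rule)
    C = np.concatenate(([0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) * np.diff(x))))
    A2 = A_in2 * np.exp(-C)                   # A(x)^2 along the line
    integrand = A2 * a
    P = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))
    G = np.exp(-C[-1])                        # total gain of the line
    return P, G

# Two profiles with the same total attenuation integral (= 1), hence same G
P_flat, G_flat = dissipated_power(lambda x: np.ones_like(x))
P_ramp, G_ramp = dissipated_power(lambda x: 2.0 * x)
```

Both profiles give $G = e^{-1}$ and the same dissipated power $P = A_\text{in}^2(1-e^{-1})$, consistent with the argument above.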
| The solution is to take $\ln g(x)$ to be equal to a delta function concentrated at the point where $A(x)^2$ is minimal.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/362163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Use of negative frequency for the sake of simplifying mathematics? How can we use the idea of negative frequency for the sake of simplifying mathematics if negative frequency does not exist (to my knowledge) in nature? For example, when plotting the spectrum of a Fourier series.
| The only time I’ve seen negative frequency used is in quantum field theory, where it is proportional to the energy of a particle (i.e. $E=\hbar\omega$). In QFT, Feynman interpreted the negative frequency/energy solutions of the Klein-Gordon equation (used to find the field for a relativistic particle) as corresponding to an antiparticle with positive energy and frequency moving backwards in time, rather than a normal particle with negative energy and frequency moving forward in time. This makes computation easier, as it allows the particle and antiparticle parts of the field to be incorporated into one field. I have never seen it used elsewhere, although it may have been. I hope this answers your question.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/362719",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Physical explanation of Joule heating The heat $Q$ generated in a wire, for a current $I$ flowing through a wire of a given resistance $R$, for a time $t$ is given by $Q=\mathscr{k}I^2Rt$ where $\mathscr{k}$ is the proportionality constant. For a given wire the resistance R is fixed. Is it possible to explain physically why $Q$ is proportional to the square of $I$?
| Joule heating occurs when the electrons carrying current in a wire lose energy to the metal atoms of the lattice. If there is a current $I$ being driven by a potential difference $V$, that power is $P=VI$.
For a linear resistor, Ohm's law, $I=V/R$, captures how much of the electrical energy is "lost" to heating through a macroscopic constant $R$. Plug them together and $P=I^2R$. Maybe the water analogy helps: more current means more "turbulence", i.e. resistance to flow. But the square just falls out of the math.
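In symbols, $P=VI$ combined with Ohm's law gives $P=I^2R$. A trivial numeric check, with made-up values:

```python
V, R = 12.0, 4.0          # made-up source voltage (V) and resistance (ohm)
I = V / R                 # Ohm's law: I = 3 A
P_vi = V * I              # power delivered: V * I
P_i2r = I**2 * R          # Joule heating: I^2 * R -- the same number
```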
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/363109",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
In physics sometimes we find energy that is negative. What does the negative sign indicate? Sometimes we see negative energy, for example, the energy of an electron in orbit. We know energy is something that can do something. In this view, does negative energy mean something opposite in some way?
| The negative sign simply means that the system is releasing energy. For instance, take the case of the gravitational force. The potential energy function of gravity is,
$$U_g(r)=-G\frac{m_1m_2}{r}; U_g(\infty)=0$$
Gravitational potential energy is defined as the work done in bringing a body of mass $m_2$ towards $m_1$ from $\infty$ to a certain separation $r$. Since the two bodies pull on each other, no external work is done on the system; instead we obtain work from it. This explains the negative sign.
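A numerical illustration of that statement (made-up values: roughly Earth's mass and a 1 kg body): the work obtained from gravity while bringing the body in from effectively infinite separation equals $Gm_1m_2/r = -U_g(r)$.

```python
import numpy as np

G, m1, m2 = 6.674e-11, 5.972e24, 1.0       # SI units; masses are illustrative
r = 7.0e6                                   # final separation (m)

# Work done BY gravity bringing m2 from r_far (~infinity) down to r,
# W = int_r^{r_far} G m1 m2 / s^2 ds, integrated on a log-spaced grid
s = np.geomspace(r, 1e15, 400000)
F = G * m1 * m2 / s**2
W = np.sum(0.5 * (F[1:] + F[:-1]) * np.diff(s))

U = -G * m1 * m2 / r                        # potential energy at separation r
```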
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/363216",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Are there elementary forces acting in directions other than 0 or 90 degrees relative to their fields? Some forces act in the same direction as their field orientation, like gravitation. Other forces, for instance the force acting on a charged particle in a magnetic field, are perpendicular to their field orientation.
Are there elementary forces acting in directions other than 0 or 90 degrees relative to their fields?
If not, is there an explanation other than because that is what the product of the vectors is?
| The question is not well adapted to the modern treatment of general relativity and quantum theory. In these theories, the notion of "force" is not very useful. Also, it supposes that the fields are characterized by a vector. This is not always the case. For example, general relativity uses a tensor field to describe spacetime. Staying with general relativity, the net effect can be different from an attraction along the line between the source and the test particle. For example, there is a frame-dragging effect when the source is rotating, that adds components to the gravitational "force", to use your terms, that result in a total effect that is not purely directed towards the source (and of course not purely perpendicular to it either).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/363465",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How does one calculate frictional force for an object sliding down a wall? Let's say that there is a book sliding down a vertical wall such that the only fundamental force acting on it is gravity. I want to say that there is a frictional force slowing the book down; however, I can't find a normal force acting on it, so I can't calculate the kinetic friction force.
Is there any frictional force?
For reference, the book has a mass of $3.5\ \mathrm{kg}$ and a coefficient of kinetic friction of $0.13$.
| If there is no normal force on the surface, there is no friction. The book will just fall under gravity.
In real life, if the book starts pressed against the wall and is released, it may hit a small protrusion on the wall and pick up a rotation by scattering off it.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/363703",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What determines the energy of a photon emitted by an electron when it changes its energy level? A neutral helium atom is in an excited electronic state, with one electron in the (2p) level and the other in the (3d) level. The atom is initially in a region with zero applied magnetic field.
(a) Can the electron in the (3d) level emit a photon and change to the (2p) level? If so, how many different photon energies would you expect to measure? Explain your answer.
(b) A magnetic field of 1.0 Tesla is applied to the atom. Can the electron in the (2p) level emit a photon and change to the (1s) level, with the electron in the (3d) level remaining in the (3d) level? If so, how many different photon energies would you expect to measure? Explain your answer.
I already know the answers. However, there is something I need help understanding. Since the principal quantum number n can only change by +/-1, and the photon energy is determined by the change in n, how can there be multiple photon energies for a single transition?
| The $n$ is called the principal quantum number because it gives the basic energy level. $l$ and $m$ give corrections on this basic energy level, and this allows for the possibility of many more transitions, as long as quantum numbers are conserved.
If you look at the hydrogen energy levels at extremely high resolution, you do find evidence of some other small effects on the energy. The 2p level is split into a pair of lines by the spin-orbit effect. The 2s and 2p states are found to differ a small amount in what is called the Lamb shift. And even the 1s ground state is split by the interaction of electron spin and nuclear spin in what is called hyperfine structure.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/363927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
With a very intense light on a black object, will it reflect? I was wondering about the nature of an object's colour. I know that an object gets its colour from the absorption of visible electromagnetic radiation, reflecting all the other wavelengths. But if we take the case of a black object that absorbs all visible light, I know that photons will be absorbed by some molecules, then re-emitted with less energy because some of the energy has been "passed on" to the molecule, which then moves faster, thus giving us heat. So, if we put a very intense light on it, does it simply change the amplitude while the object stays black, or does the wavelength shift and give a different result?
My guess would be that no matter the amplitude, the wavelengths are the same and thus the black object will still appear black, but I want to be sure with, maybe, a more scientific explanation. If you also have links of some sort, I would gladly appreciate it!
| As I understand your question, you are asking about an ideal black body. Keep in mind that such a thing does not exist in nature. But as long as we know we are talking about an ideal, first, the body will never reflect any light. What will happen is that the intense light falling on it will drive up the temperature of the black body, and the light emitted will depend, both in amplitude and wavelength, on that temperature. The more intense the light, the higher the black body temperature will become. The resulting spectrum will become both more intense and more blue, following the standard black body spectrum, which you can read about in many sources.
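A rough numerical illustration of the shift described in the answer (not part of the original answer; the constant and temperatures here are illustrative): Wien's displacement law $\lambda_{peak} = b/T$ gives the peak of the ideal blackbody spectrum, which moves to shorter (bluer) wavelengths as the temperature rises.

```python
# Wien's displacement law: peak of the blackbody spectrum vs temperature.
WIEN_B = 2.898e-3  # Wien's displacement constant, in m*K

def peak_wavelength_nm(temperature_k):
    """Peak emission wavelength of an ideal black body, in nanometres."""
    return WIEN_B / temperature_k * 1e9

for T in (300, 1000, 3000, 6000):
    print(f"T = {T:5d} K -> peak near {peak_wavelength_nm(T):7.0f} nm")
```

Heating the body from room temperature toward stellar temperatures moves the peak from the far infrared into the visible, matching the "more intense and more blue" statement.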
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/364041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How is torque about every point on an axis the same? I read somewhere that the torque about every point on an axis is the same. But I am really confused about how this can be. Please help me and give a satisfactory answer.
| I believe you are referring to the following: if the nett resultant of a system forces on a body is zero, then we can say that the moment of that system is independent of the point about which the moment is calculated. In symbols, suppose we have a system of forces $\vec{F}_i$ acting at positions $\vec{r}_i$, relative to our co-ordinate origin. The total moment of this system is $\vec{\tau}=\sum\limits_i \vec{r}_i\times \vec{F}_i$.
Now suppose we shift our co-ordinate origin, so that $\vec{r}_i \mapsto \vec{r}_i+\vec{r}$, for some global displacement $\vec{r}$. Then:
$$\vec{\tau}\mapsto \sum\limits_i(\vec{r}+\vec{r}_i)\times \vec{F}_i = \vec{\tau}+\vec{r}\times \sum\limits_i\vec{F}_i$$
since $\times$ distributes over $+$. But if the resultant is zero, i.e. $\sum\limits_i\vec{F}_i=0$, then the last term on the right vanishes, and we see that $\vec{\tau}$ is unaffected by our shift in origin.
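A quick numerical check of this statement (an illustration added here, not part of the original answer; the positions and forces are arbitrary, with the last force chosen so the resultant vanishes):

```python
# Verify: with zero net force, the total moment is origin-independent.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def vadd(a, b):
    return tuple(x + y for x, y in zip(a, b))

positions = [(1.0, 0.0, 2.0), (-2.0, 1.0, 0.5), (0.3, -1.2, 1.0)]
forces = [(2.0, -1.0, 0.0), (-0.5, 3.0, 1.0)]
# Choose the last force so that the resultant is exactly zero:
forces.append(tuple(-(f1 + f2) for f1, f2 in zip(*forces)))

def total_torque(origin_shift):
    """Total moment with every position measured from a shifted origin."""
    tau = (0.0, 0.0, 0.0)
    for r, F in zip(positions, forces):
        tau = vadd(tau, cross(vadd(r, origin_shift), F))
    return tau

t0 = total_torque((0.0, 0.0, 0.0))
t1 = total_torque((5.0, -2.0, 7.0))
print(all(abs(a - b) < 1e-9 for a, b in zip(t0, t1)))  # True
```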
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/364303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
} |
Co and contravariant: tensors or components? I am learning Special Relativity and have a question: given a four vector $\vec{x}$ whose contravariant components are $x^\mu$, do the covariant components $x_\mu = g_{\mu\nu}x^\nu$ refer to a physical/geometrical object different from $\vec{x}$?
I mean, for the physical/geometrical object $\vec{x}$ we can say
$$ \vec{x} \underset{\text{has components}}{\longrightarrow} x^\mu $$
Then, what is $\vec{?}$ in the following expression? Is it $\vec{x}$ too?
$$ \vec{?} \underset{\text{has components}}{\longrightarrow} x_\mu $$
| Since we are dealing with a finite dimensional metric space, it's fine to just think of a field $\mathbf v$ and two bases $\mathbf e_\mu$ and $\mathbf e^\nu$ such that $\mathbf e_\mu \cdot \mathbf e^\nu = \delta_\mu^\nu$. Then $v^\mu$ and $v_\nu$ are just the coefficients for $\mathbf v$ in the two bases: $\mathbf v = v^\mu \mathbf e_\mu = v_\nu \mathbf e^\nu$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/364487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
What is the form of the $n$-th order term of the perturbation series of an eigenvalue? Suppose I have a matrix given by a sum $A=D+\epsilon B$, where $D$ is diagonal and $\epsilon$ is small, and I want the eigenvalues of $A$ as power series in $\epsilon$. The leading order is just the eigenvalues of $D$, the first corrections are the diagonal elements of $B$, the second order is also well known.
I would like to know what is the particular form of the $n$-th order term in the eigenvalue perturbation series. Apparently it can be written as a sum over partitions, but I can't find this anywhere.
| The answer can be found in Kato's book Perturbation theory for linear operators. I will use Kato's notation. In fact I will answer a more general question where you have an operator which depends analytically on a parameter $x$ (your $\epsilon$). Let such operator be $T(x)$ and let
$$
T(x) = \sum_{n=0}^{\infty} x ^n T^{(n)}
$$
such that the series converges in a neighborhood of $x=0$. I also call $T=T^{(0)}$. In your case you simply have $T^{(n)}=0$ for $n\ge 2$. We seek the perturbation series of an eigenvalue $\lambda$ of $T$. This means that there exists an eigen-projector $P$ of $T$ such that
$$
TP = \lambda P +D,
$$
where $D$ is a nilpotent term that may arise from the Jordan decomposition. If $m = \mathrm{dim}\, P$ is the dimension of the range of $P$, then $D^m=0$. Note that for a non-degenerate eigenvalue ($m=1$) we necessarily have $D=0$.
Define also $Q=1-P$ and the reduced resolvent
$$
S = \lim_{z\to \lambda} Q (T - z)^{-1} Q
$$
Lastly, let's define
$$
S^{(0)} = -P, \ \ S^{(n)} = S^n, \ \ S^{(-n)} = - D^n, \ \mathrm{for}\ n\ge 1.
$$
Let $P(x)$ be the eigenprojector of $T(x)$ analytically connected to $P$. Then one has the following series:
$$
(T(x) - \lambda) P(x) = D + \sum_{n=1}^{\infty} x^n \tilde{T}^{(n)}
$$
with
$$
\tilde{T}^{(n)} = - \sum_{p=1}^{\infty} (-1)^p \sum_{\mathcal{A}} S^{(k_1)} T^{(n_1)} S^{(k_2)} \cdots S^{(k_p)} T^{(n_p)} S^{(k_{p+1})},
$$
where $\mathcal{A}$ corresponds to the indices satisfying the following constraint
$$
\mathcal{A} = \left \{ \sum_{i=1}^p n_i = n ; \sum_{j=1}^{p+1} k_j = p; n_j \ge 1; k_j \ge -m+1 \right \}.
$$
In the non-degenerate case ($m=1$) this provides the final answer, i.e.
\begin{eqnarray}
\lambda(x) &=& \lambda + \sum_{n=1}^{\infty} x^n \lambda^{(n)} \\
\lambda^{(n)} &=& \mathrm{Tr} \tilde{T}^{(n)}.
\end{eqnarray}
Note that in this case one must have $D=0$. Moreover taking the trace already kills many terms because of the cyclic property of the trace and noting that $SP = PS = 0$.
To make contact with possibly more familiar expressions, note that for a self-adjoint unperturbed operator $T$, the reduced resolvent should look familiar:
$$
S = \sum_{\lambda_j \neq \lambda} \frac{ |j\rangle \langle j|}{ \lambda_j - \lambda},
$$
where I called here $\lambda_j$ and $|j\rangle$ the eigenvalues and eigenvector of $T$.
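As a numeric sanity check of the non-degenerate case (an illustration added here, not Kato's general machinery: for a $2\times 2$ example the series reduces to the familiar Rayleigh-Schrödinger result $d_1 + \epsilon B_{11} + \epsilon^2 B_{12}^2/(d_1-d_2) + O(\epsilon^3)$, which we can compare against the exact eigenvalue):

```python
import math

# Unperturbed diagonal entries and a symmetric perturbation (arbitrary values):
d1, d2 = 1.0, 3.0
B11, B12, B22 = 0.4, 0.7, -0.2
eps = 1e-3

# Exact eigenvalue of A = D + eps*B on the branch near d1:
a, b, c = d1 + eps * B11, eps * B12, d2 + eps * B22
exact = ((a + c) - math.sqrt((a - c)**2 + 4 * b**2)) / 2

# Second-order perturbation series:
series = d1 + eps * B11 + eps**2 * B12**2 / (d1 - d2)
print(abs(exact - series))  # residual is O(eps^3)
```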
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/364919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What if the Sun became a black hole? Upon talking to someone about the concepts of black holes, a question arised that I did not know the answer to.
If the Sun became a black hole, but the mass remained the same as it is now, the Earth would orbit in the same manner that it currently does, because the mass does not change (the gravitational field stays constant).
However, does the curvature of spacetime change in the region where the Sun used to be located? And how would the curvature change in the region of the Earth's orbit? It must have an effect on spacetime, but I can't seem to form a reasonable enough argument for why....
| Assuming nothing else changed about the body, the change will have no effect on the curvature of spacetime outside the radius that the sun previously had.
Inside the sun’s radius, however, is a different story. As it currently stands, the curvature of spacetime increases as one approaches the sun’s radius and then, after passing the boundary, decreases until levelling out in the center of mass of the sun.
Note that the above is only true for a sun with uniformly distributed mass, but it’s enough to get a good picture.
In the case of a black hole, rather than starting to decrease, the curvature of spacetime would continue to increase—all the way down to the singularity (because there would be no counteracting mass once one passed through the sun’s radius).
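A Newtonian analogue of this point (an illustration only, not a curvature computation; units with $GM = R = 1$ are assumed): outside the original radius the field is unchanged, while inside, the uniform sphere's field falls off linearly but the point mass's keeps growing.

```python
def g_uniform_sphere(r, GM=1.0, R=1.0):
    """Newtonian field of a uniform sphere of radius R (units GM = R = 1)."""
    return GM * r / R**3 if r < R else GM / r**2

def g_point_mass(r, GM=1.0):
    """Newtonian field of a point mass (or any spherical mass, seen from outside)."""
    return GM / r**2

for r in (2.0, 1.0, 0.5, 0.1):
    print(f"r = {r}: sphere {g_uniform_sphere(r):9.3f}, point {g_point_mass(r):9.3f}")
```

The two fields agree for $r \geq R$ and diverge from each other only inside the original radius, mirroring the statement about curvature above.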
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/365590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why is the Lorentz transformation's time transformation not just the time dilation? In Taylor's Classical Mechanics text, he derives the Lorentz transformation from length contraction which, in turn, uses time dilation. But doesn't the use of length contraction necessitate that the time transformation is just the time dilation?
| If the time coordinate transformation was given simply by
$$
t' = \gamma t,
$$
all events simultaneous in the $x,t$-frame would be simultaneous also in the $x',t'$-frame. That would be in contradiction to relativity of simultaneity, which is an important part of special relativity.
For example, imagine a spherical expanding light wave that reaches two distant observers at the same time as observed in the $x,t$-frame. In a different frame $x',t'$ that moves along the line joining the observers, the wave cannot reach both observers at the same time $t'$, because one is moving towards the wave and the other is moving away from it. So $t'$ cannot depend only on the $t$ coordinate of the event and on $\gamma$; in fact, it depends on the spatial coordinates of the event too.
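A numeric illustration of this point (not part of the original answer; units with $c = 1$ and an arbitrary boost speed are assumed): two events simultaneous in one frame get different times under the full Lorentz rule $t' = \gamma(t - vx/c^2)$, which the naive $t' = \gamma t$ would miss.

```python
import math

c = 1.0
v = 0.6 * c
gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)

def t_prime(t, x):
    """Time coordinate in the boosted frame (full Lorentz transformation)."""
    return gamma * (t - v * x / c**2)

eventA = (1.0, -2.0)   # (t, x)
eventB = (1.0, +2.0)   # simultaneous with A in the unprimed frame
print(t_prime(*eventA), t_prime(*eventB))  # approximately 2.75 and -0.25
```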
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/365802",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Motional EMF and the flux rule contradiction I have a metallic rod which is being rotated in a constant magnetic field. An EMF is produced in it as motional EMF and can be explained using the Lorentz force. But how can we explain the production of EMF in it using Faraday's flux rule? In this case the rod is in a constant magnetic field and, even though the rod is rotating, the flux is not changing. So as per Faraday's law, there shouldn't be any EMF.
|
... how can we explain the production of EMF in it using Faraday's flux rule? In this case the rod is in a constant magnetic field and, even though the rod is rotating, the flux is not changing.
Faraday’s law is derived from the observation of changing electric or magnetic fields like in transformers.
The induced electromotive force in any closed circuit is equal to the negative of the time rate of change of the magnetic flux enclosed by the circuit. (Wikipedia)
More general there are three cases of the involved components:
*
*Lorentz force / electric device: An electric current in a magnetic field gives a deflection of the wire (of the electrons in this wire).
*Electric generator: The movement of a wire in a magnetic field (of course not parallel to the magnetic field) induces a current in the wire.
*Electromagnet: A current in a coil induces a magnetic field.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/366129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Kepler's genius, How? I have a very simple question. How did Kepler know that orbits are elliptical? Say I was living in his time. How would Kepler explain that the orbits are elliptical (since none of his 3 laws explains why orbits are elliptical; I assume he must have had other reasons to believe they are)? Also calculus was not invented, so how did he do that? How did he know that the distance to the Sun was changing, and that the velocity of the planet was changing to compensate for that? Was it solely because of the observational data provided by Tycho Brahe?
| An ellipse was the only thing which fitted the data (without adding the circles within circles special fixes needed for Ptolemy's epicycles)
I suppose his (Kepler's) genius was in trying different mathematical shapes to fit the data rather than arguing from Divine Insight or Ancient Greek authority that orbits must be some special shape because that is what God would do.
ps. You don't need calculus to calculate any of this - it just makes it easier. Newton worked out his gravitational law with calculus but then proved it with the same geometrical tools available to Kepler.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/366401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
How to find the direction of velocity of a reference frame where two events are simultaneous in case of a space-like interval Suppose in a inertial reference frame $S$, an event $A$ occurs at $(ct_A, x_A, y_A, z_A)$ and event $B$ occurs at $(ct_B, x_B, y_B, z_B)$.
Now the invariant interval of these two events is,
$$I = -c^2 (t_A - t_B)^2 + (x_A - x_B)^2 + (y_A - y_B)^2 + (z_A - z_B)^2 = -c^2 \Delta t^2 + \Delta \bar x^2,$$
where I'm using the $(-, +, +, +)$ metric.
Now there can be $3$ particular cases of interest corresponding to time-like, space-like and light-like events.
For $I = 0 \implies c^2 \Delta t^2 = \Delta \bar x^2$, events are light-like.
For $I < 0 \implies c^2 \Delta t^2 > \Delta \bar x ^2$, events are time-like and a reference frame $\bar S$ exists (accessible by an appropriate Lorentz transformation) for which these two events occur at the same location. The velocity (magnitude and direction) can be computed.
For $I > 0 \implies c^2 \Delta t^2 < \Delta \bar x^2$, events are space-like and a reference frame $\bar S$ exists (again accessible by an appropriate Lorentz transformation) for which these two events are simultaneous.
I know how to calculate the velocity (direction and magnitude) of the $\bar S$ frame relative to the $S$ frame in case of a time-like event. I also know how to calculate the magnitude of the velocity of the $\bar S$ frame relative to the $S$ frame for a space-like event.
How to find the direction of the $\bar S$ frame relative to $S$ for a space-like event?
| Here is an image which should correspond to the graphical solution of dmckee:
You have to proceed as follows:
*
*Put event A into the coordinate origin (by coordinate displacement).
*Then connect both events by a line x', which is the space axis of the sought new reference frame
*Then draw accordingly the time axis ct', which is the new space axis x' mirrored about the 45° axis (α=β).
*In your coordinates, this new time axis ct' is the worldline of the sought reference frame, and the direction of its relative velocity.
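Algebraically (an illustration added here, not part of the original graphical answer): the required boost points along the spatial separation $\Delta \bar x$, with speed $v = c^2\,\Delta t / |\Delta \bar x|$, which is subluminal precisely because the interval is space-like. A one-dimensional check, in units with $c = 1$:

```python
import math

c = 1.0
dt, dx = 1.0, 3.0            # c^2*dt^2 < dx^2, so the pair is space-like
assert (c * dt)**2 < dx**2

v = c**2 * dt / dx           # |v| < c exactly because the pair is space-like
gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)
dt_prime = gamma * (dt - v * dx / c**2)
print(v, dt_prime)           # v is 1/3 and dt' vanishes (up to rounding)
```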
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/366727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 1
} |
How does a Higgs triplet transform under $SU(2)_L \times U(1)_Y$ when written as a $2\times 2$ matrix? I recently learned that a Higgs triplet can be written as a $2 \times 2$ matrix:
\begin{equation}
\Delta=\begin{pmatrix} \frac{\Delta^{+}}{\sqrt{2}} & \Delta^{++} \\
\Delta^0 & - \frac{\Delta^{+}}{\sqrt{2}}\end{pmatrix}
\end{equation}
Typically, an $SU(2)$ triplet $\phi$ transform like:
\begin{equation}
\phi \rightarrow\exp(-i\vec{T}\cdot \vec{\theta})\phi
\end{equation}
where $\vec{T}$ are some $3\times3$ matrix representation of the $SU(2)$ generators.
How to modify the $\vec{T}$'s such that $\Delta$ transform like a triplet under $SU(2)_L$?
| Indeed, a triplet $\vec{\phi}$ transforming under the triplet representation matrices T can be dotted into the Pauli vector to yield a formal adjoint, i.e. a traceless 2×2 matrix $$\Phi=\tfrac{1}{2}\vec{\phi}\cdot \vec{\tau}~,$$ transforming as the doublet representation, conjugately on both sides. This is dubbed the adjoint action,
$$
\Phi \mapsto e^{i\vec{\tau}\cdot \frac{\vec{\theta}}{2}}~ \Phi ~e^{-i\vec{\tau}\cdot \frac{\vec{\theta}}{2}}.
$$
In your case,
\begin{equation}
\Delta=\begin{pmatrix} \frac{\Delta^{+}}{\sqrt{2}} & \Delta^{++} \\
\Delta^0 & - \frac{\Delta^{+}}{\sqrt{2}}\end{pmatrix}= \sqrt{2}\Delta^+ \frac{\tau_3}{2} +\Delta^{++} \frac{\tau_1+i\tau_2}{2} + \Delta^0 \frac{\tau_1-i\tau_2}{2} = \sqrt{2}\Delta^+ \frac{\tau_3}{2} +\Delta^{++} \frac{\tau_ +}{2} + \Delta^0 \frac{\tau_-}{2} \\ = \begin{pmatrix}\Delta^{++}+\Delta^0\\i(\Delta^{++}-\Delta^0)\\\sqrt{2}\Delta^+ \end{pmatrix}\cdot \frac{\vec{\tau}}{2},
\end{equation}
where you note the normalization
$$
\vec{\bar{\phi}}\cdot \vec{ \phi }=2(\Delta^{++~~2}+\Delta^{+~~2}+\Delta^{0~~2}).
$$
You can easily see, then, that the Cartesian 3-vector can be unitarily rotated to the more transparent spherical basis vector $\sqrt{2}(\Delta^{++},\Delta^+, \Delta^0)$.
The matrix $U^\dagger$ achieving that Cartesian to spherical basis change is given in footnote nb 3 of the WP article
or this answer. In an innocuously rephased form,
$$
U^\dagger \begin{pmatrix}\Delta^{++}+\Delta^0\\i(\Delta^{++}-\Delta^0)\\\sqrt{2}\Delta^+ \end{pmatrix} = \frac{1}{\sqrt{2}} \left(\begin{matrix} 1 & -i & 0 \\ 0 & 0 & \sqrt{2} \\ 1 & i & 0\end{matrix}\right) ~\begin{pmatrix}\Delta^{++}+\Delta^0\\i(\Delta^{++}-\Delta^0)\\\sqrt{2}\Delta^+ \end{pmatrix}= \sqrt{2}\begin{pmatrix}\Delta^{++}\\\Delta^+ \\ \Delta^0 \end{pmatrix}.
$$
*
*To confirm the normalization of the half-angle and the signs, take only the 3rd component of θ to be non-vanishing and infinitesimal, so the rotation increment of Δ is a simpler matrix with vanishing diagonals. Check that the increment components of $\vec{\phi}$ are now given by commutators (adjoint) and fully comport with the classical cross-product increment of the triplet representation you specified. It is then evident upon commutation with $\tau_3/2$ that the $T_3$ eigenvalues of the triplet $(\Delta^{++},\Delta^+, \Delta^0)$ are (1,0,-1), so that its $Y=Q-T_3=1$. I normalize the weak hypercharge the modern ("alternative") way, i.e. dropping the superfluous strong denominator of 2.
*You may find the derivation of T to all orders through this construction in this answer.
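A numeric spot-check of the quoted basis change (an illustration with arbitrary complex field values, not part of the original answer):

```python
import math, random

random.seed(1)
# Arbitrary complex values standing in for Delta++, Delta+, Delta0:
dpp, dp, d0 = (complex(random.random(), random.random()) for _ in range(3))

# Cartesian 3-vector from the decomposition above:
cartesian = [dpp + d0, 1j * (dpp - d0), math.sqrt(2) * dp]

s = 1 / math.sqrt(2)
U_dagger = [[s, -1j * s, 0],
            [0, 0, 1],          # row (0, 0, sqrt(2)) divided by sqrt(2)
            [s, 1j * s, 0]]

spherical = [sum(U_dagger[i][k] * cartesian[k] for k in range(3))
             for i in range(3)]
expected = [math.sqrt(2) * dpp, math.sqrt(2) * dp, math.sqrt(2) * d0]
ok = all(abs(a - b) < 1e-12 for a, b in zip(spherical, expected))
print(ok)  # True
```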
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/367660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Books on non-perturbative phenomena in quantum field theory I am looking for any good places (preferably textbooks) to study about introductory non-perturbative phenomena in Quantum field theory.
Any suggestion will be appreciated.
| The OP did not explain what "nonperturbative" means, which can vary. So I will go with beyond perturbation theory, i.e., not just talking of correlations of a QFT as formal power series in $\hbar$ or the renormalized coupling constant. In that case, the literature on constructive quantum field theory deserves to be mentioned (although it might be too mathematical for OP's taste).
The classical reference is the book by Glimm and Jaffe "Quantum Physics: A Functional Integral Point of View". It starts from classical mechanics and statistical mechanics and goes through QM and finally QFT. Not an easy read but it is quite thorough and mathematically rigorous.
Another reference is the book "From Perturbative to Constructive Renormalization" by Rivasseau which is more technical and is a better source for topics like cluster expansions.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/367793",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 4,
"answer_id": 1
} |
Does twisting a wire heat it? I was playing with a key chain loop in a (very boring) chemistry class and then I straightened the loop into a wire, keeping the two ends of the loop (now wire) curved so as to easily twist it. It was more or less an S-shaped structure of metal with a longer straight part in the middle of the S.
On twisting a lot, it started getting hotter. Why did that happen?
It was a circular cross section wire and I do not exactly know which metal, if it helps.
| By twisting and untwisting the wire, you did work (in the physics-specific sense of the word) on the wire. Effectively, by exerting a force on the wire, you transfer energy into it. That energy has to go somewhere, and in this case it ended up as heat.
Some of your body heat was probably transferred into the wire in the process as well; but if the wire felt warm to the touch after this process, its temperature was probably above your skin temperature, which would imply that the heating wasn't just due to your own transferred body heat.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/368648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Linear approximation of time dilation. At what point is it? I have watched a video about linear approximation and there was an example, exactly here: https://www.youtube.com/watch?v=BSAA0akmPEU&feature=youtu.be&list=PL590CCC2BC5AF3BC1&t=32m50s about linear approximation of time dilation. I started to calculate and this is the result:
$$ f(t,v) = \frac{t}{\sqrt{1-\frac{v^2}{c^2}}} $$
$$ \frac{df(t,v)}{dv} = \frac{tv}{c^2(1-\frac{v^2}{c^2})^{\frac{3}{2}}} $$
so combining those two things into linear approximation I have:
$$ f(t,v) \approx f(t,v_0) + f(t,v_0)'(v-v_0) $$
then putting real bodies of functions:
$$ f(t,v) \approx \frac{t}{\sqrt{1-\frac{v_0^2}{c^2}}} + \frac{tv_0}{c^2(1-\frac{v_0^2}{c^2})^{\frac{3}{2}}}(v-v_0) \approx \frac{t}{\sqrt{1-\frac{v_0^2}{c^2}}} - \frac{tv_0^2}{c^2(1-\frac{v_0^2}{c^2})^{\frac{3}{2}}} + \frac{tv_0v}{c^2(1-\frac{v_0^2}{c^2})^{\frac{3}{2}}} $$
then I can imagine that:
$$ v_0 << c $$
So I think that I can do something like that:
$$ \frac{1}{\sqrt{1-\frac{v_0^2}{c^2}}} \approx 1 $$
then
$$ f(t,v) \approx t - \frac{tv_0^2}{c^2} + \frac{tv_0v}{c^2} $$
and then comparing to the result from the video it should be equal to
$$ T' = T(1 + \frac{1}{2}\frac{v^2}{c^2}) $$
but unfortunately it is not equal. Can somebody tell me what mistakes I have made? What is wrong with my reasoning? I would really appreciate it. :)
| You effectively killed $f(t,v)$ when you set
$$\frac{1}{\sqrt{1-\frac{v_0^2}{c^2}}} \approx 1$$
The result can be obtained much simpler by using known approximation rules
$$\dfrac{1}{\sqrt{1-\dfrac{v^2}{c^2}}} \approx\dfrac{1}{1-\dfrac{v^2}{2c^2}}\approx 1+\dfrac{v^2}{2c^2}$$
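A numeric sanity check of this chain of approximations (an illustration added here, not part of the original answer): for $v \ll c$ the exact factor $1/\sqrt{1-v^2/c^2}$ and $1 + v^2/2c^2$ agree to $O(v^4/c^4)$.

```python
import math

c = 299_792_458.0                     # speed of light, m/s
errors = []
for v in (1e3, 1e5, 1e7):             # speeds well below c, in m/s
    exact = 1.0 / math.sqrt(1.0 - (v / c)**2)
    approx = 1.0 + v**2 / (2.0 * c**2)
    errors.append(exact - approx)
    print(v, exact - approx)          # discrepancy is O(v^4 / c^4)
```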
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/368816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the Bandgap energy of Rubidium? Could anyone please tell me the bandgap energy for alkali metals like rubidium and cesium?
| The optical spectra of the alkali metals show a kind of gap: interband transitions have an onset. To a first approximation this is explained by the empty-lattice band structure, where the periodicity makes possible vertical transitions out of the conduction band, from states with a wave vector smaller than the Fermi wave vector.
Here is an image with the extended zone scheme. The gap is 0.64 times the Fermi energy.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/368959",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What is the equation of relative motion for two objects moving in straight lines? If two objects, A and B, are moving in the same direction along straight lines in a plane, they might be diverging, converging or moving in parallel.
If we wish to describe B's motion with respect to A, what is the equation of motion?
For example, imagine that A is moving at 10 knots along the line described by the parametric equation:
x = 30t
y = 20t
and B is moving at 9 knots along the line described by the parametric equation:
x = 35t
y = 10 - 15t
what is the motion of B with respect to A? In other words, if we hold A to always be at the origin, what would be the parametric equation (and/or non-parametric equation) for B's motion?
I guess the shape will be a parabola or hyperbola, but I am not sure how to compute it.
| There are two objects that are moving in a straight line. The parametric equation for the first may be:
$ \vec x = \vec a + t \vec b $
The equation for the second may be:
$ \vec y = \vec p + t \vec q $
The relative position of $ \vec y $ as seen from the point $ \vec x $ is:
$$ \bbox[5px,border:2px solid red]
{ \vec r = \vec y - \vec x = ( \vec p - \vec a ) + t ( \vec q - \vec b ) } $$
Seen from the first object, the second object moves in a straight line. If both objects are moving in a plane, we can multiply the last equation by the normal vector to $ \vec q - \vec b $ and the result is the cartesian equation of the movement:
$$ \bbox[5px,border:2px solid red]
{ \vec n\cdot ( \vec r - \vec p + \vec a ) = 0 } $$
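Applying the boxed formula to the numbers in the question (an illustration added here): the track of B relative to A comes out as the straight line $y = 10 - 7x$, not a parabola or hyperbola.

```python
def pos_A(t):
    return (30 * t, 20 * t)

def pos_B(t):
    return (35 * t, 10 - 15 * t)

def rel(t):
    """Position of B relative to A: (p - a) + t(q - b) from the boxed formula."""
    ax, ay = pos_A(t)
    bx, by = pos_B(t)
    return (bx - ax, by - ay)          # = (5t, 10 - 35t)

pts = [rel(t) for t in (0.0, 0.5, 1.0, 2.0)]
# Eliminating t: x = 5t, y = 10 - 35t  =>  y = 10 - 7x, a straight line.
print(all(abs(y - (10 - 7 * x)) < 1e-12 for x, y in pts))  # True
```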
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/369116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 2
} |
Why is flux at an interface purely diffusion? In many textbooks, the flux at the point of interface of two phases/regions is given through Fick's first law, with purely diffusive flux, even when there can be bulk convection in both phases/regions.
Essentially,
$$
N_{A,y}\vert_{\xi=0}=N_{A,y}\vert_{y=\delta}=-D_{AB}\left.\frac{\partial c_A}{\partial y}\right|_{y=\delta}
$$
where $\delta$ is the point of interface. I am wondering why this is. Can you show this mathematically? Intuitively I would say that there is a no-slip condition at the point of interface that allows for no bulk motion, and so flux is purely diffusive. But if we have a moving interface, such as a falling film, I don't understand why this is still valid.
| An interface is by definition a separating surface across which there cannot be bulk flow. So if we follow the interface (a Lagrangian approach, not an Eulerian one), the only way mass can be transferred across it is by diffusion.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/369344",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Heat to work or thermal energy to work? A system consists of different forms of energy like thermal energy, mechanical energy, chemical energy, nuclear energy, etc. If these energies are to be transferred to another system (call it system 2), it can either be done as heat or work (or mass, but here I take the system approach), and again at the other system (system 2) it will be held as (i.e., change) one of the forms of energy (thermal, mechanical, chemical, etc.).
So when the second law implies that heat cannot be completely converted to work, is it actually implying that the thermal energy of a system cannot be completely transferred as work to another system? Or does it mean energy that is transferring as heat cannot be changed to transfer-of-energy-as-work mid transfer?
It cannot be about the quality of energy, because heat and work are not energy; they just denote a transfer of energy.
| The 2nd law implies that heat can't be completely converted to work using a cyclic process. Obviously, heat can be converted to work if the process does not have to be cyclic. An example is isothermal reversible expansion of an ideal gas.
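A worked example of that caveat (an illustration with arbitrary round numbers, not part of the original answer): in an isothermal reversible expansion of an ideal gas, $\Delta U = 0$, so all the heat absorbed goes into work, $Q = W = nRT\ln(V_2/V_1)$.

```python
import math

n, R, T = 1.0, 8.314, 300.0          # mol, J/(mol K), K (illustrative values)
V1, V2 = 0.010, 0.020                # initial and final volumes, m^3

W = n * R * T * math.log(V2 / V1)    # work done by the gas on the surroundings
Q = W                                # dU = 0 at constant T for an ideal gas
print(f"Q = W = {W:.1f} J")
```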
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/369645",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Fock space with mixed anti-commutation/commutation relations? Let's say we have two modes, with the following labeling of occupation number states:
$ \lvert \Psi \rangle = \begin{pmatrix} 0,0 \\ 0,1 \\ 1,0 \\ 1,1 \end{pmatrix} $
An example of (what I assume to be) fermionic creation operators for the two modes is
$\hat a_1^\dagger = \begin{pmatrix} 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \quad
\hat a_2^\dagger = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}$
These operators obey full anti-commutation relations.
$\{\hat a_1,\hat a_1^\dagger\} = \{\hat a_2,\hat a_2^\dagger\} = 1$
$a^\dagger_1 a^\dagger_1 = a^\dagger_2 a^\dagger_2 = 0$
$\{\hat a_1,\hat a_2^\dagger\} = \{\hat a_1,\hat a_2\} = 0$
If we don't include the ($-$) sign, then operators corresponding to the same mode still anti-commute, but those corresponding to different modes commute.
$\hat b_1^\dagger = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \quad
\hat b_2^\dagger = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}$
$\{\hat b_1,\hat b_1^\dagger\} = \{\hat b_2,\hat b_2^\dagger\} = 1$
$b^\dagger_1 b^\dagger_1 = b^\dagger_2 b^\dagger_2 = 0$
$[\hat b_1^\dagger,\hat b_2^\dagger] = [\hat b_1^\dagger,\hat b_2] = 0$
It looks like we started constructing a boson Fock space, but only included states for which the occupation numbers are 0 or 1. Is there some reason these operators aren't suitable, other than the observation that all elementary particles are either fermions or bosons? Are there any quasi-particles in condensed matter physics that behave like this?
| The operators $b_i$ defined by the OP correspond to the algebra of hardcore bosons, that is, bosons that cannot occupy the same site.
Hardcore bosons correspond to the limit of infinite interaction ($U\to\infty$) of the Bose-Hubbard model
$$
H=-t\sum_{\langle i,j\rangle}b^\dagger_i b_j-\mu\sum_i n_i+\frac U2 \sum_i n_i(n_i-1) ,
$$
with $n_i=b^\dagger_i b_i$.
Hardcore bosons are also related to $\frac12$-spins, with the mapping $b=\sigma^-$, $b^\dagger=\sigma^+$ and $b^\dagger b-\frac12=\sigma^z$. In particular, the Bose-Hubbard model at infinite interaction can be mapped onto the XY model in transverse field (up to a constant)
$$
H_{XY}=-J\sum_{\langle i,j\rangle } (\sigma^x_i\sigma^x_j+\sigma^y_i\sigma^y_j)-h\sum_i \sigma^z_i.
$$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/369824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
How to represent a axisymmetric, stationary metric in a coordinate independent way? A classic example of a stationary, axisymmetric metric in GR is the Kerr metric. In Boyer-Lindquist coordinates $(t,r,\theta,\phi)$ it is obvious that the metric is independent of $t,\phi$ and so is stationary and axisymmetric.
Now, often in GR we want to work in a covariant, coordinate independent way and just deal with 4-vectors, tensors etc. In this case the metric is just represented by $g^{\mu \nu}$.
My question is, is there a way to enforce stationarity and axisymmetry onto this metric tensor $g^{\mu \nu}$, without reference to a coordinate system? For instance, can this be done with Killing vectors?
| A spacetime is said to be stationary if it has an (asymptotically) timelike Killing vector.
Similarly, if one has a Killing vector which has closed spacelike trajectories, then we get an ignorable coordinate, which corresponds to the axisymmetry.
In the example of the Kerr metric, the timelike Killing vector is $\frac{\partial}{\partial t}$ and the "axisymmetric" Killing vector is of course $\frac{\partial}{\partial \phi}$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/370071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Why melting temperature of polymers depends on the prior crystallization temperature? This graph should show that the melting temperature of a polymer rises when the crystallization temperature of a polymer was higher (the melting temperature of a 100% crystalline polymer should be equal to crystallization temperature). Basically, the melting temperature depends on the history of the crystallization.
I'm not sure how I should explain this to myself. For now, I'd say that the higher crystallization temperature would mean slower crystallization and higher crystallinity, thus higher temperature required for melting.
| I think you have basically answered your own question. I will expand on it here:
The degree of crystallinity of a polymer will differ, depending on a number of factors. Polymers of higher crystallinity tend to have longer folding lengths.
In general, the longer the length of the individual 'crystals', the higher the melting point will be (so, as you state in your question, higher crystallinity implies a higher melting temperature). The maximum melting temperature will occur at the maximum crystallisation, i.e. in the idealised case where the polymer crystallises into a single crystal. If we denote this hypothetical temperature as $T_m^0$ then the relation is something like
$T_m = T_m^0\left( 1- \frac{2\sigma_e}{lh_f}\right)$,
where $T_m$ is the melting temperature, $\sigma_e$ is the surface free energy per fold, $h_f$ is the enthalpy of fusion, and $l$ is the length of the individual 'crystals', or ordered regions (also called lamellae, I think). We see that if the length $l$ increases the melting temperature increases also.
As to the question in the title: the melting temperature depends on the prior crystallisation temperature to the extent that the crystallisation temperature influenced the degree of crystallinity of the polymer.
Further reading
Melting Temperature and Change of Lamellar Thickness with Time for Bulk Polyethylene
and
Okui, N., 1990. Relationship between crystallization temperature and melting temperature in crystalline materials. Journal of Materials Science, 25(3), pp.1623-1631. (I'm afraid I couldn't find a link for this one).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/370217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Lorentz transform of force
If a particle of mass $m$ and velocity $v$ is moving due to a constant electric force, what would the force be in the frame where the particle's velocity is 0?
To try and solve this I used the four-force and did a Lorentz transform of the four-momentum. However, I got different answers in each component of the force, and if this scenario was taken as one-dimensional I got no change in the force. So I was wondering how to find an equation relating the new force to the old force.
| You cannot just transform $d \bf p \rm/dt=q(\bf E + v \wedge B \rm )$, as it is not a tensorial equation. The tensorial form of this equation is
$$\frac {d p^\mu }{d\tau } = -\frac q mp^\lambda F_\lambda^{\; \mu} $$
The tensorial nature of this equation guarantees it is valid in any coordinate system. Turning back now to your question, we can use this equation to calculate the force in the coordinate system that is momentarily comoving with the particle. In this coordinate system, the momentum four-vector $p^\mu$ reduces to $(m,0,0,0)$ and consequently the equation reduces to $$ \frac {d p^\mu }{d\tau } = -q F_0^{\; \mu}. $$ Replacing the components of the EM field tensor $F_\lambda ^{\; \mu}$ by the corresponding electric and magnetic field components (in the momentarily comoving frame!), we get $$ \frac {d p^0 }{d\tau} =0 \\ \frac {d p^i }{d\tau} = q E_i\ \ ,i=1,2,3 $$ with $E_i$ being the three components of the electric field. This means that the particle will move according to the classical laws in the momentarily comoving frame, but you of course first need to calculate the components of the electric field in this frame. In order to do this, you plug your $\bf E$ and $\bf B$ components into your EM field tensor $F_\lambda^{\; \mu}$. You transform the field tensor using the Lorentz transformation, which will allow you to recover the sought-after $$E_i = -F_0^{\; i}.$$
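The tensor-transformation step can be carried out numerically. A sketch, assuming one common sign convention for the contravariant $F^{\mu\nu}$ with $c=1$ (conventions differ between textbooks):

```python
import numpy as np

def field_tensor(E, B):
    """Contravariant F^{mu nu}, c = 1, in one common sign convention."""
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    return np.array([[0.0, -Ex, -Ey, -Ez],
                     [Ex, 0.0, -Bz, By],
                     [Ey, Bz, 0.0, -Bx],
                     [Ez, -By, Bx, 0.0]])

def boost_x(v):
    """Lorentz boost along x with speed v (c = 1)."""
    g = 1.0 / np.sqrt(1.0 - v * v)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = g
    L[0, 1] = L[1, 0] = -g * v
    return L

# A pure transverse E field in the lab, particle moving along x at v = 0.6
F = field_tensor(E=(0.0, 1.0, 0.0), B=(0.0, 0.0, 0.0))
L = boost_x(0.6)
Fp = L @ F @ L.T                       # F'^{mu nu} = L^mu_a L^nu_b F^{ab}

Ep = (-Fp[0, 1], -Fp[0, 2], -Fp[0, 3])  # read E' back off the tensor
Bp = (Fp[3, 2], Fp[1, 3], Fp[2, 1])
print(Ep)   # E' = (0, 1.25, 0): transverse E grows by gamma = 1.25
print(Bp)   # B' = (0, 0, -0.75): a magnetic field appears in the moving frame

# The Lorentz invariant E^2 - B^2 is frame independent
inv = sum(e * e for e in Ep) - sum(b * b for b in Bp)
print(inv)  # 1.0
```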
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/370692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Why do these patterns form in a captured image while zooming?
This is a GIF video that shows zooming of an image of a computer LCD screen which I captured using my mobile phone. You can see that some fringes form and disappear, and hence some patterns form, as I zoom in or out. How would you explain this phenomenon?
1. I could also see the same kind of pattern formation in a photograph of a mobile screen.
2. Try zooming this image, and check whether you can see it.
3. When I tried to reduce the image file size to upload it here, I noticed that reducing the file size below a certain threshold removed the effect. So I am not sure you will see the effect in the above image; that's why I used a GIF video of zooming.
4. I should also check whether I get the same effect when trying to zoom the original picture on a computer (I will update this when done).
If more details are needed, please ask me.
Anyway, how is this happening?
| What you are seeing is a form of something called a moiré pattern, which is created because, as Bill Oertell points out, the spacing between adjacent pixels in the screen does not match that of the camera photographing the screen. You can learn more about moiré patterns on Wikipedia.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/370817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
induced charges inside a capacitor with two dielectrics Consider that we have two dielectrics inside a capacitor as shown in the picture. Let's consider also that Q is the charge of the capacitor and d the distance between the two plates; the first dielectric occupies a surface of S/3 with a dielectric constant of er1 and the second a surface of 2S/3 with a dielectric constant of er2. The question is to calculate the electric field inside the capacitor and the surface density of the induced charge
1. I have no problem calculating the electric field inside the capacitor, but what puzzled me most was the book's answer: "because the interface between the two dielectrics is parallel to the electric field, E1 = E2 = E, we can say that the electric field is conserved". How come? I could find those results analytically, but how could they spot this just by saying such a sentence?

2. While calculating the induced charges, they only calculate the induced charge on the left side of the dielectrics (the ending point of the electric field). How come? I thought that the induced charge in this case should be on both internal surfaces of the dielectrics.
Many thanks in advance
| You can consider the two dielectrics as two capacitors connected in parallel. Since the two are connected across the same potential difference and the plate separation of both capacitors (upper and lower one) is the same, $E = V/d$, the electric field in both dielectrics is the same.
As the dielectric is neutral, the charge appearing on the right side will be exactly opposite to that appearing on the left side. So, I guess there's no need to mention charges on both sides of the dielectric.
Hope it helps!
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/371055",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Energy gap of $In_{0.53}Ga_{0.24}Al_{0.23}As_{0.77}$ Does anybody know how I can calculate the energy gap of $In_{0.53}Ga_{0.24}Al_{0.23}As_{0.77}$?
| The first place to look would be Akio Sasaki et al., "Energy Band Structure and Lattice Constant Chart of III-V Mixed Semiconductors, and AlGaSb/AlGaAsSb Semiconductor Lasers on GaSb Substrates", Japanese Journal of Applied Physics 19(9) 1695-1702 (1980). They provide the equations and parameters to calculate quaternary band gaps, along with contour plots.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/371179",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What's the purpose of shorting the base and collector of a transistor in current mirrors? I often see this diagram of a current mirror (as shown below).
As far as I know, the purpose of a current mirror is to ensure that the collector currents of both transistors are equal.
This can simply be achieved by making sure that their base-emitter voltage is the same. This can be done without shorting the base and collector of the left hand side transistor... Is shorting it redundant in any ways?
|
This can simply be achieved by making sure that their base-emitter
voltage is the same.
Not quite. The collector current can be written as
$$I_C = \beta_0\left(1 + \frac{V_{CB}}{V_A}\right)I_B$$
and so depends on the base current and the collector-base voltage (Early effect). Connecting the collector and base together $(V_{CB}=0)$ removes this dependence on the collector-base voltage and the relationship simplifies to
$$I_C = \beta_0 I_B$$
Since $V_{BE1} = V_{BE2}$, the base currents are equal (assuming identical transistors etc.) and so we can then write
$$I_{REF} \equiv \frac{V_{DD} - V_{BE1}}{R_1} = I_{C1} + I_{B1} + I_{B2} = I_{B2}\left(2 + \beta_0\right)$$
and it follows that
$$I_{C2}=I_{REF}\frac{\beta_0}{2 + \beta_0}\left(1 + \frac{V_{CB2}}{V_A}\right) $$
If, on the other hand, $V_{CB1} \ne 0$ (for example, place a resistor between the collector and base of Q1 rather than a wire), the equation relating $I_{C2}$ to $I_{REF}$ is more complicated
$$I_{REF} \equiv \frac{V_{DD} - V_{BE1} - V_{CB1}}{R_1}$$
$$I_{C2}=I_{REF}\frac{\beta_0\left(1 + \frac{V_{CB2}}{V_A}\right)}{2 + \beta_0\left(1 + \frac{V_{CB1}}{V_A}\right)}$$
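As a rough numerical illustration of the last expression (the parameter values below are made-up illustrative ones, not from any datasheet):

```python
# Mirror ratio I_C2 / I_REF derived above, for illustrative parameters
beta0, VA = 100.0, 50.0   # current gain and Early voltage (V) -- assumed values

def IC2_over_IREF(VCB2, VCB1=0.0):
    """Ratio of mirrored current to reference current (diode-connected Q1 when VCB1=0)."""
    return beta0 * (1 + VCB2 / VA) / (2 + beta0 * (1 + VCB1 / VA))

# With the diode connection (V_CB1 = 0) the mirror is close to 1:1
print(IC2_over_IREF(VCB2=0.0))   # 100/102 ~ 0.980
# A large V_CB2 skews the copy through the Early effect
print(IC2_over_IREF(VCB2=10.0))  # 120/102 ~ 1.176
```

This makes the point of the answer concrete: shorting base and collector of Q1 removes the $V_{CB1}$ dependence, leaving only the base-current error and the Early effect on Q2.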
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/371360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How does friction increase energy of a system? I had this doubt while thinking through a question about centre of mass. Consider a system consisting of a man standing on one end of a plank which rests on a frictionless surface. Now the man starts running towards the other end of the plank (friction is present between the man and the plank). Once he reaches the end of the plank he jumps down, and both the man and the plank keep sliding endlessly on the surface with equal and opposite momentum. Although the net momentum is still zero, both of them now have some velocities and thus the kinetic energy of the system has increased. Therefore work is done on the system by friction. This has prompted the following questions in my mind:
1)How is the kinetic energy of the system defined ? In this case if we add the individual kinetic energies of the 2 bodies, we get a net increase in the KE of the system. However, if we take it as $\frac{1}{2}m_{sys}v_{cm}^2$ the KE will still be zero as velocity of centre of mass is zero.
2)If friction is doing work on the system, which energy is being converted into mechanical energy ? As this is an isolated system(assuming no form of heat exchange is present between the bodies and the surrounding) the total energy should always remain conserved. I thought that it must be the tiny deformations caused in the bodies by friction resulting in change of potential energy which is converted into KE. Is this right ?? Or will the bodies get cooler to keep the energy conserved ??
| Friction isn't the force doing work. Human bodies are much better than inanimate blocks: it's the man using internal energy from his muscles to kick the plank behind him. Friction sticks his foot to the plank while he kicks and prevents it from slipping behind while he lifts the other leg; ideally, the work done by friction should be zero if the man is walking properly and his shoes don't undergo plastic deformation. It's the man's internal energy that causes the system to gain kinetic energy.
Yes, Kinetic energy is defined as how you stated, however for 2 particle systems of mass $m_1$ and $m_2$, you could write the total kinetic energy as
$KE = \frac{1}{2}M_{sys}v_{cm}^{2} + \frac{1}{2}\mu v_{rel}^{2}$, where $\mu$ is the reduced mass $\frac{m_1m_2}{m_1 + m_2} $.
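The equivalence of the two forms of the kinetic energy is easy to check numerically; a small sketch with illustrative masses and momentum-conserving velocities:

```python
# Two-particle kinetic-energy split for the man (m1) and plank (m2).
# Masses and velocities are illustrative, chosen so total momentum is zero.
m1, m2 = 70.0, 10.0       # kg
v1, v2 = 1.0, -7.0        # m/s

p_total = m1 * v1 + m2 * v2
assert p_total == 0.0      # equal and opposite momenta, as in the question

KE_direct = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2

M = m1 + m2
v_cm = p_total / M                 # zero here
mu = m1 * m2 / M                   # reduced mass
v_rel = v1 - v2
KE_split = 0.5 * M * v_cm**2 + 0.5 * mu * v_rel**2

print(KE_direct, KE_split)  # both 280.0: all the KE sits in the relative motion
```

Since $v_{cm}=0$, the $\frac{1}{2}M_{sys}v_{cm}^2$ term vanishes and the entire kinetic energy is the relative-motion term, which resolves the apparent contradiction in question 1.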
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/371707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Rotation in Higher Dimensions In a world of three spatial dimensions plus time, every atom rotates around a line, the axis of rotation.
In a world of $N$ spatial dimensions where $N$ is greater than 3, must every atom rotate, and if so does it rotate around a line, a plane, or a subspace of smaller number of dimensions?
| In 2d, a rotation matrix has the form
$$
r(\theta)=\left(\begin{array}{cc}
\cos\theta&-\sin\theta\\
\sin\theta&\cos\theta\end{array}\right):=
\left(\begin{array}{cc}
c(\theta)&-s(\theta)\\
s(\theta)&c(\theta)\end{array}\right)
$$
and rotates vectors in a plane.
In 3d a rotation matrix can be written as a product
$$
r_{12}(\psi)r_{13}(\theta)r_{12}(\varphi)
$$
where
\begin{align}
r_{12}(\psi)&=\left(\begin{array}{ccc}
c(\psi)&-s(\psi)&0\\
s(\psi)&c(\psi)&0\\
0&0&1
\end{array}\right)\\
r_{13}(\theta)&=\left(\begin{array}{ccc}
c(\theta)&0&-s(\theta)\\
0&1&0\\
s(\theta)&0&c(\theta)
\end{array}\right)
\end{align}
leaving one axis invariant. This axis can be identified by the row or column containing $0$s everywhere except for one entry.
In SO(4), one can write a rotation matrix as a sequence of $r_{ij}$ matrices. $r_{12}$ would have the form
$$
r_{12}(\psi)=\left(\begin{array}{cccc}
c(\psi)&-s(\psi)&0&0\\
s(\psi)&c(\psi)&0&0\\
0&0&1&0\\
0&0&0&1
\end{array}\right)
$$
and so leaves a 2-dimensional subspace invariant.
An SO(4) matrix can be written in the factored form
$$
r_{34}(\beta_1)r_{23}(\beta_2)r_{12}(\beta_3)
r_{34}(\beta_4)r_{23}(\beta_5)r_{34}(\beta_6)
$$
by restricting to real values the entries of the $SU(4)$ matrix factored as done here. This is not by any means the only possible factorization.
Obviously, an SO(5) rotation can be written in terms of matrices leaving a 3-dimensional subspace invariant etc.
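The invariant-subspace claim is easy to verify numerically; a sketch building the $r_{12}$ factor in four dimensions:

```python
import numpy as np

def r12(psi, n=4):
    """Rotation by psi in the 1-2 coordinate plane of an n-dimensional space."""
    r = np.eye(n)
    c, s = np.cos(psi), np.sin(psi)
    r[0, 0] = r[1, 1] = c
    r[0, 1], r[1, 0] = -s, s
    return r

R = r12(0.7)
assert np.allclose(R.T @ R, np.eye(4))      # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)    # proper rotation, element of SO(4)

v = np.array([0.0, 0.0, 2.0, -3.0])         # any vector in the 3-4 plane
assert np.allclose(R @ v, v)                # left pointwise invariant
print("r12 in SO(4) fixes the whole 3-4 plane")
```

So in four dimensions an elementary rotation fixes a plane rather than a line, matching the discussion above; a generic product of such factors need not fix any vector at all.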
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/371892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Water molecules on the Sun? How does that work? I've been reading about water on the Sun. The water they talk about is supposedly in a gaseous state because the Sun is so hot. But I'm wondering how even that could exist. Wouldn't the extreme temperature of the Sun ($> 5000^\circ{\rm C}$ on the surface) split it into hydrogen and oxygen through thermolysis, which can occur at just $2000^\circ{\rm C}$?
| It's true that the temperature on the Sun will split water molecules very rapidly. However, the research you referred to discovered trace amounts of water on sunspots, not an atmosphere of water vapor (or worse: oceans)
The water molecules that form are then broken down by the temperature very quickly. It is not that no water vapor can form on the Sun; rather, the conditions are unfavorable, thus the quantities are very small.
For reference, at 2200 °C, around 3% of water molecules are dissociated. At 3000 °C, around half of the molecules are split. From a chemistry POV, this thermal energy is required to overcome the bond energy within water.
The rate constant of the reaction is $k=Ae^{-E_a/(RT)}$, where $E_a$ is the activation energy and $T$ is the temperature; $A$ is a constant based on the reaction and $R$ is the ideal gas constant. You can see the correlation between temperature and rate here; $k$ will never reach infinity as long as the temperature is finite.
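A rough numerical sketch of that temperature dependence. The activation energy below is an assumed, order-of-magnitude value (comparable to the O–H bond energy), not a measured rate parameter, and the temperatures are typical sunspot vs. photosphere values:

```python
import math

R_gas = 8.314   # J/(mol K), ideal gas constant
Ea = 5.0e5      # J/mol -- ASSUMED order-of-magnitude value, illustrative only

def rate_ratio(T1, T2):
    """k(T2)/k(T1) from the Arrhenius law; the unknown prefactor A cancels."""
    return math.exp(-Ea / R_gas * (1.0 / T2 - 1.0 / T1))

# Sunspot (~3800 K) vs. the surrounding photosphere (~5800 K)
speedup = rate_ratio(3800.0, 5800.0)
print(speedup)  # a few hundred: dissociation is far faster off-spot
```

Even with a crude $E_a$, the exponential makes the cooler sunspots vastly more hospitable to molecules than the photosphere, which is why the trace water signal shows up there.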
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/372154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Why can the Klein-Gordon field be Fourier expanded in terms of ladder operators? Using the plane wave ansatz
$$\phi(x) = e^{ik_\mu x^\mu}$$
the solution to the Klein-Gordon equation $(\Box + m^2) \phi(x) =0$ can be written as a sum of solutions, since the equation is linear and the superposition principle holds, as
$$\phi(x) = \sum_{{k}} \left( Ae^{ik_\mu x^\mu} + Be^{-ik_\mu x^\mu} \right).$$
How does one find the coefficients? More exactly, why does it turn out they are the annihilation and creation operators with the factor $1/\sqrt{2E}$?
The various books and sources I've checked just confused me even more. Peskin and Schroeder just plug in the integral equation (Fourier modes) by analogy with the harmonic oscillator solution. Schwartz gives a very strange reason that the energy factor is just for convenience. In Srednicki the author writes it as $f(k)$ without an explicit form. In Mandl and Shaw, they just state the equation without any justification.
My best guess is that those come from the quantization process, but how does one do it in this case explicitly?
| Let us start with the ansatz (I'll assume mostly plus metric signature)
\begin{equation}
\hat\phi(x) = \int \frac{\mathrm{d}^3 \mathbf{k}}{(2\pi)^{3/2}}\left(\hat A_\mathbf{k} e^{i k\cdot x} + \hat B_\mathbf{k}e^{-ik\cdot x}\right)
\end{equation}
where $\hat A_\mathbf{k}$ and $\hat B_\mathbf{k}$ are some arbitrary operators and $k^0 = E_k$. We first note that $\phi$ is a real field, which means that we must have $\hat B_\mathbf{k} = \hat A_\mathbf{k}^\dagger$. Also, a quantum field must obey the equal-time commutation relation
\begin{equation}
\left[ \hat \phi(\mathbf{x},t),\hat{\dot\phi} (\mathbf{y},t)\right] = i\delta(\mathbf{x-y}).
\end{equation}
Plugging our ansatz into this relation we get the condition
\begin{equation}
2E_k \left[\hat A_\mathbf{k}, \hat A_\mathbf{k'}^\dagger \right] = \delta(\mathbf{k-k'})
\end{equation}
This is just the ladder operator commutation relation with an extra normalization factor. For convenience we can define a rescaled operator $\hat a_\mathbf{k} \equiv \sqrt{2E_k}\hat A_\mathbf{k}$ which has the conventional commutation relation.
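As a quick sanity check of the conventional ladder algebra that the rescaled $\hat a_\mathbf{k}$ obeys, one can represent a single mode in a truncated Fock space; the truncation artifact shows up only in the last diagonal entry (a sketch, not part of the derivation):

```python
import numpy as np

N = 12                                         # keep occupation numbers 0 .. N-1
a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # a|n> = sqrt(n)|n-1>
a_dag = a.conj().T                             # a^dagger|n> = sqrt(n+1)|n+1>

comm = a @ a_dag - a_dag @ a                   # [a, a^dagger]
print(np.diag(comm))  # 1, 1, ..., 1, -(N-1): canonical except at the cutoff
```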
Detailed calculation
Taking the time derivative of the field we get
\begin{equation}
\hat{\dot \phi}(x) = \int \frac{\mathrm{d}^3 \mathbf{k}}{(2\pi)^{3/2}}\left(-iE_k\hat A_\mathbf{k} e^{-i E_kt + i\mathbf{k\cdot x}} + iE_k\hat A_\mathbf{k}^\dagger e^{i E_kt - i\mathbf{k\cdot x}}\right).
\end{equation}
Then the commutator between the field and its time derivative (more generally its conjugate) is
\begin{eqnarray}
\left[ \hat \phi(\mathbf{x},t),\hat{\dot\phi} (\mathbf{y},t)\right] = \int \frac{\mathrm{d}^3 \mathbf{k}\mathrm{d}^3 \mathbf{k'}}{(2\pi)^{3}}\left\{iE_{k'}\left[\hat A_\mathbf{k},\hat A_\mathbf{k'}^\dagger\right] e^{-i(E_k-E_{k'})t+i(\mathbf{k\cdot x - k'\cdot y)}} + iE_k\left[\hat A_\mathbf{k'},\hat A_\mathbf{k}^\dagger\right] e^{i(E_k-E_{k'})t-i(\mathbf{k\cdot x - k'\cdot y)}}\right\} = \int \frac{\mathrm{d}^3 \mathbf{k}\mathrm{d}^3 \mathbf{k'}}{(2\pi)^{3}}\left\{iE_{k'}\left[\hat A_\mathbf{k},\hat A_\mathbf{k'}^\dagger\right] e^{-i(E_k-E_{k'})t} + iE_k\left[\hat A_\mathbf{-k'},\hat A_\mathbf{-k}^\dagger\right] e^{i(E_k-E_{k'})t}\right\}e^{i(\mathbf{k\cdot x - k'\cdot y)}}
\end{eqnarray}
where we have used the fact that operators commute with themselves and changed variables $\mathbf{k\rightarrow -k}$, $\mathbf{k'\rightarrow -k'}$ in the second term. This should be equal to the delta function
\begin{equation}
i\delta(\mathbf{x-y}) = i \int \frac{\mathrm{d}^3 \mathbf{k}}{(2\pi)^3}e^{i\mathbf{k\cdot(x-y)}} = i \int \frac{\mathrm{d}^3 \mathbf{k}\mathrm{d}^3 \mathbf{k'}}{(2\pi)^3}\delta(\mathbf{k-k'})e^{i(\mathbf{k\cdot x - k'\cdot y)}}
\end{equation}
which means we must have
\begin{equation}
\left[\hat A_\mathbf{k},\hat A_\mathbf{k'}^\dagger\right] + \left[\hat A_\mathbf{-k'},\hat A_\mathbf{-k}^\dagger\right] = \frac{\delta(\mathbf{k-k'})}{E_k}
\end{equation}
to which $\left[\hat A_\mathbf{k},\hat A_\mathbf{k'}^\dagger\right]=\delta(\mathbf{k-k'})/2E_k$ is the solution.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/372425",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 5,
"answer_id": 0
} |
What is so natural and inevitable about the expansion of the universe? The opening paragraph of the section the Introduction to Standard Big-Bang Model in the PDG review claims that
The observed expansion of the Universe is a natural (almost inevitable) result of any homogeneous and isotropic cosmological model based on general relativity.
Why is the expansion of the Universe claimed to be natural and inevitable? In other words, why would a static Universe be unnatural?
| A static universe is not viable in GR. It requires a precarious fine-tuning of the total mass density and the cosmological constant. Furthermore, this balance is unstable: any deviation from it will either cause the universe to start an accelerated expansion, or start contracting towards a big crunch. This is why Einstein called his introduction of a cosmological constant, to allow for a static solution, his biggest blunder.
So we are left with the option of either an expanding or contracting universe. Both could have been the case. We could also have had scenarios where expansion eventually turned into contraction. Instead we (surprisingly) find that the universe experiences accelerated expansion.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/372654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Operation on Complex conjugate Why do we sandwich operators in quantum mechanics in such a way that the operator acts on the wavefunction and not on its complex conjugate?
| Take the kinetic energy operator and ground state wave function for a particle in a box with width of $L$ as an example (in 1D)
$$\hat{T}=-\frac{\hbar^2}{2m}\frac{d^2}{dx^2}$$
and
$$\psi=\sqrt{\frac{2}{L}}\sin\frac{\pi}{L}x$$
Obviously, $\hat{T}\psi$ means something and $\psi^{*}\hat{T}$ means something else.
Please look at my previous answer for a more complete explanation of the notation.
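As a concrete illustration of the operator acting to the right (a sketch using SymPy; this $\psi$ happens to be an eigenfunction of $\hat T$, which makes the result easy to read off):

```python
import sympy as sp

x, L, m, hbar = sp.symbols('x L m hbar', positive=True)
psi = sp.sqrt(2 / L) * sp.sin(sp.pi * x / L)

# The operator acts to the right, on psi (not on psi*)
T_psi = -hbar**2 / (2 * m) * sp.diff(psi, x, 2)

# psi is an eigenfunction of T with the ground-state energy
E1 = sp.simplify(T_psi / psi)     # pi^2 hbar^2 / (2 m L^2)

# <T> = integral of psi* (T psi) over the box (psi is real here, so psi* = psi)
expect_T = sp.simplify(sp.integrate(psi * T_psi, (x, 0, L)))
assert sp.simplify(expect_T - E1) == 0
print(E1)
```

Writing $\psi^*\hat{T}$ instead would leave a differential operator waiting to act on whatever comes next, which is exactly why the sandwich $\langle\psi|\hat T|\psi\rangle$ is evaluated with the operator applied to the ket.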
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/372934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why is this system isolated?
(Image updated for clarity)
Hello,
Assuming that there is no friction anywhere and both box 1 and ramp 2 start at rest, I was wondering why this is an isolated system in terms of momentum calculation. My professor approached calculating the velocities of the box and the ramp assuming that the momentum is conserved before and after the release of box 1. However, I don't understand how the system can be treated as if there is no net force acting on the system. If we draw the two axes parallel and perpendicular to the surface of ramp 2, then the perpendicular component of gravity is canceled out by the normal force, but the parallel component of gravity remains unhindered, and this is precisely why box 1 will move at all. I'm having a hard time understanding how the conservation of momentum approach can be applied here.
| I am sorry but I have had to write this as an answer as it is too long as a comment.
There is no external horizontal force acting on the system comprising the two blocks so for the system of two blocks the total horizontal momentum must stay constant and it is zero if the blocks start from rest.
There is no horizontal component of the weights of the blocks as those forces act vertically downwards and so the weights of the two blocks cannot influence the horizontal momentum of the system.
Block 2 has the normal force (yellow) and the vertical force (red) mg acting on it, and block 1 has a normal force equal in magnitude and opposite in direction to the yellow force on block 2, a vertical weight force acting on it, and a vertical (there is no friction) upward force due to the surface on which it rests.
This arrangement is discussed in this answer.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/373199",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Physical meaning of gauge choice in electromagnetism In electromagnetism, it is often referred to gauges of the electromagnetic field, such as the radiation or Coulomb gauge. As far as I know, the definition of a gauge helps us to redefine the problem in terms of a vector potential and a scalar potential that, since we have some freedom in choosing them, can be chosen in cleverest way it is possible for the given problem.
Here comes my question: is the choice of the gauge a mere mathematical simplification of the given problem? Does this choice have a physical meaning?
My troubles are actually in understanding the physical meaning of this choice of the gauge and what will change if I choose a different gauge.
| Physical observables in a gauge theory$^1$ are independent of gauge-fixing choices$^2$. Conversely, gauge-fixing choices are unphysical.
--
$^1$ Here we have applied a narrow definition of a gauge theory where gauge symmetry represents a redundant description of a physical system, cf. e.g. this Phys.SE question. In other words, we have ignored (large) gauge transformations that actually change the physical configuration, cf. answer by tparker.
$^2$ By a gauge-fixing condition, we assume a condition that intersects each gauge orbit precisely once. Note that some conditions do not actually fulfill this, e.g. they may only partially fix the gauge. Also there might be Gribov problems.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/373336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Quantum State Representation with Commuting Operators Let $[A,B]=0$. Then, we can find a set of eigenvectors $\{|a_n,b_n\rangle\}$ common to both $A$ and $B$. According to this, and my own understanding, it makes sense to write an arbitrary quantum state as
$$\tag{1}|\Psi\rangle=\sum_n \sum_i c_n^i |a_n,b_n,i\rangle,$$
where the sum over $n$ goes over all the eigenvectors, and the sum over $i$ allows for degeneracy to exist.
To me, it seems like we're saying $|a_n,b_n\rangle$ is a single eigenvector common to $A$ and $B$, that could have very well be written as $|w_n\rangle$. This also makes sense.
Yet, Cohen's quantum mechanics text writes
$$\tag{2}|\Psi\rangle=\sum_n \sum_p \sum_i c_{n,p,i}\ |a_n,b_p,i\rangle.$$ This has greatly confused me as it seems like we are dealing with two different sets of eigenvectors, one for $A$ and one for $B$. This representation (at least to me) says for each $n$, we are going over all $p$ eigenvectors and account for their degeneracy. Whereas the representation in Eq. (1) says to simply go over the eigenvectors $|a_n,b_n\rangle$ and account for their degeneracy.
Any help in trying to understand where I'm going wrong is appreciated.
| If $A$ and $B$ are commuting self adjoint operators (more precisely operators with pure point spectrum whose spectral measures commute), then the Hilbert space is decomposed into a direct orthogonal sum of common eigenspaces, where $A$ and $B$ are trivially represented as multiplicative operators: $aI$ and $bI$.
The crucial observation is that common eigenspaces of $A$ and $B$ are one-to-one with the pairs of eigenvalues $(a,b)$. Now there are several ways to bijectively label all these couples, i.e., all these common eigenspaces.
In the first case $n$ labels different common eigenspaces whose dimension is the range of possible values of $i$ (which depends on $n$). So it may happen that $a_n= a_{m}$ for example even if $n\neq m$. But $(a_m,b_m) \neq (a_n,b_n)$ necessarily if $n\neq m$ (that is $b_n\neq b_m$ in the considered example).
In the second case $n$ and $p$ separately label different eigenvalues of $A$ and $B$ respectively, and $i$ (whose range depends on $n$ and $p$) accounts for the dimension of every common eigenspace as before. So, in particular, $a_n \neq a_m$ necessarily if $n\neq m$ and $b_p \neq b_q$ if $p\neq q$.
These two procedures to describe the decomposition of the Hilbert space into a direct orthogonal sum of common eigenspaces of $A$ and $B$ are completely equivalent. Using one or the other is just matter of convenience.
A sort of confusion is generated from the absence of indication of the ranges of indices in the sum, especially that of $i$.
A less sloppy notation would help understand the equivalence of decompositions.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/373415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
For all practical purposes can light be bent (without the help of gravity) or just reflected? For example, if a single beam of light was directed directly at the tangent of a semi circular mirror, would it be considered bending or redirecting many times to form a near circular pattern? When I say bend I mean in a curved trajectory, not at an angle.
| The phenomenon you're describing, of light being bent, is observed when light passes through a medium with a progressively increasing or decreasing refractive index.
You actually observe it when you see a mirage. When it's very hot, the temperature being progressively higher as you approach the ground, the refractive index decreases (because density decreases) and light coming from the sky is bent upwards, explaining why you see "water" on the ground when it's very hot; in fact, what you actually see is light from the sky being bent upwards into your eyes.
Hope my answer helped! :)
This image sums it up pretty well
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/373569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
What if cosmological constant was zero? Physicists always ask why the cosmological constant is not exactly zero!
I would ask here, what if cosmological constant was zero? The universe wouldn't expand and matter would exert gravitational force and shrink the universe into a big crunch!
So, why physicists want the constant to be zero then? I must have missed something here!
Can cosmological constant be zero since we see the universe already expanding? How would the universe support life further as some claim?
| The universe can expand just fine without a cosmological constant. In fact, it was this fact that made Einstein originally add it to the equations when he was making his first cosmological model: he did not know space-time is expanding, so he used a constant as an allowed but ugly fudge-factor to make it static in his model. Later he felt he had made a mistake and should have trusted the math (in an extra heaping of irony current cosmological measurements do find acceleration best described by having a constant). But expansion can happen without it.
Assuming the universe to be spatially homogeneous and isotropic, and combining this with the Einstein field equations, produces the two Friedmann equations $$\left(\frac{\dot{a}(t)}{a(t)}\right)^2 = \frac{8\pi G}{3}\rho - \frac{k}{a^2(t)}+\frac{\Lambda}{3}$$ and $$\frac{\ddot{a}(t)}{a(t)}=-\frac{4\pi G}{3}(\rho+3p)+\frac{\Lambda}{3}$$ where $k=+1,0,-1$ depending on curvature. $\Lambda$ is the cosmological constant.
Note that if we want $\ddot{a}(t)=\dot{a}(t)=0$ (no expansion) and $\Lambda=0$, then the first equation implies $\frac{8\pi G}{3}\rho a^2(t) = k$. This will not work if $k=0, -1$ since the left side is nonzero and positive. The second equation leads to $\rho+3p=0$: for any positive density there has to be negative pressure even if we are just thinking of the contents of the universe as pressure-free dust. So it looks like $\dot{a}(t) \neq 0$... unless one adds a suitable nonzero value of $\Lambda$ to make things stand still.
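As a quick numerical check of this static balance (a sketch, not part of the original answer; the units $G=1$ and the dust density $\rho=1$ are illustrative assumptions):

```python
import math

# Sketch of the Einstein static universe: for pressure-free dust (p = 0),
# demanding a_dot = a_ddot = 0 forces a specific nonzero Lambda and a
# specific closed (k = +1) radius. Units with G = 1 are illustrative.

G = 1.0
rho = 1.0          # assumed dust density (illustrative value)
p = 0.0            # dust: zero pressure
k = 1.0            # closed spatial sections

Lambda = 4.0 * math.pi * G * (rho + 3.0 * p)      # makes the acceleration equation vanish
a = 1.0 / math.sqrt(4.0 * math.pi * G * rho)      # makes the first Friedmann equation vanish

# Residuals of the two Friedmann equations for a static universe
# (a_dot = a_ddot = 0):
first_eq = (8.0 * math.pi * G / 3.0) * rho - k / a**2 + Lambda / 3.0
second_eq = -(4.0 * math.pi * G / 3.0) * (rho + 3.0 * p) + Lambda / 3.0

print(first_eq, second_eq)  # both should vanish
```

Setting `Lambda = 0` instead leaves no static solution, which is the point made above.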
In reality we observe that the universe expands at an accelerating rate, and the best fit to the observations is a nonzero constant.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/373659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Reason for 6π factor in Stokes' law According to Stokes' law, the retarding force acting on a body falling in a viscous medium is given by $$F=k\eta rv$$ where $k=6\pi$.
As far as I know, the $6\pi$ factor is determined experimentally. In that case, how is writing exactly $6\pi$ correct, since we obviously cannot determine the value of the constant experimentally with infinite precision?
| It is not determined experimentally, it is an analytical result. It is verified experimentally.
As @Mick described, it is possible to derive the velocity and pressure field of the flow around a sphere in the Stokes limit of small Reynolds numbers from the Navier-Stokes equations, if the flow is further assumed to be steady and incompressible.
Once the flow field is determined, the stress at the surface of the sphere can be evaluated:
$$\left.\boldsymbol{\sigma}\right|_w = \left[p\boldsymbol{I}-\mu\boldsymbol{\nabla}\boldsymbol{v}\right]_w$$
from which follows the drag force as:
$$\left.\boldsymbol{F}\right|_w = \int_\boldsymbol{A}\left.\boldsymbol{\sigma}\right|_w\cdot d\boldsymbol{A}$$
From this it follows that the normal contribution of the drag force (form drag) is $2\pi\mu R u_\infty$, while the tangential contribution (friction drag) is $4\pi\mu R u_\infty$, where $u_\infty$ is the free-stream velocity measured far from the sphere. The combined effect of these contributions, $6\pi\mu R u_\infty$, is the total drag force.
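These two contributions can be recovered numerically by integrating the classical Stokes-flow surface stresses over the sphere. This is a sketch, not the book's derivation: the surface profiles used below, a dynamic pressure $\propto\cos\theta$ and a shear stress $\propto\sin\theta$, both with amplitude $3\mu u_\infty/2R$, are the standard textbook results, and the numerical values of $\mu$, $R$, $u_\infty$ are illustrative assumptions.

```python
import math

# Assumed Stokes-flow surface stresses on a sphere of radius R in a stream
# of speed u_inf with viscosity mu (theta measured from the flow direction):
#   pressure component along the flow: (3*mu*u_inf/(2*R)) * cos(theta)
#   shear component along the flow:    (3*mu*u_inf/(2*R)) * sin(theta)

mu, R, u_inf = 0.7, 0.3, 1.5   # illustrative values
N = 20000                      # midpoint-rule subdivisions

def integrate(f):
    # midpoint rule over theta in [0, pi], with surface area element
    # dA = 2*pi*R^2*sin(theta)*dtheta
    h = math.pi / N
    total = 0.0
    for i in range(N):
        theta = (i + 0.5) * h
        total += f(theta) * 2.0 * math.pi * R**2 * math.sin(theta) * h
    return total

coef = 3.0 * mu * u_inf / (2.0 * R)
form_drag = integrate(lambda t: coef * math.cos(t) * math.cos(t))      # pressure resolved along the flow
friction_drag = integrate(lambda t: coef * math.sin(t) * math.sin(t))  # shear resolved along the flow

total = form_drag + friction_drag
stokes = 6.0 * math.pi * mu * R * u_inf
print(form_drag, friction_drag, total, stokes)
```

The printed values reproduce the $2\pi\mu R u_\infty$ and $4\pi\mu R u_\infty$ split, summing to the Stokes drag.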
This result is also found by evaluating the kinetic force by equating the rate of doing work on the sphere (force times velocity) to the rate of viscous dissipation within the fluid. This shows nicely there are often many roads to the same answer in science and engineering.
For details I suggest you look at Sections 2.6 and 4.2 of Transport Phenomena by Bird, Stewart & Lightfoot.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/373780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 2,
"answer_id": 0
} |
Why is the helium atom wavefunction a product of the two 1s wavefunctions? From S.M. Blinder's Introduction to QM book, p. 116:
In seeking an approximation to the ground state, we might first work out the solution in the absence of the $1/r_{12}$ term. In the Schrodinger equation thus simplified, we can separate the variables $r_{1}$ and $r_{2}$ to reduce the equation to two independent hydrogen like problems. The ground state wavefunction (not normalized) for this hypothetical helium atom would be:
$$\psi(r_1, r_2) = \psi_{1s}(r_1)\psi_{1s}(r_2) = e^{−Z(r_1 + r_2)}$$
Why is it only the product and not some linear combination of the two wavefunctions? I heard somewhere that it has something to do with "tensor product". Can someone provide a detailed explanation about this?
Reference:
*
*Blinder, S. M. Introduction to Quantum Mechanics: in Chemistry, Materials Science, and Biology; Elsevier, 2012,. ISBN 978-0-08-048928-5.
NB: This question has also been asked on Chemistry Stack Exchange: Why is the electronic wavefunction of helium a product of the two 1s wavefunctions when ignoring electron-electron repulsion?
| Because if
$$\require{cancel}\hat{H}=\hat{H}_{1}+\hat{H}_{2}=\left[-\frac{\hbar^{2}}{2m}\nabla_{1}^{2}-\frac{e^{2}}{4\pi\varepsilon_{0}}\frac{1}{r_{1}}\right]+\left[-\frac{\hbar^{2}}{2m}\nabla_{2}^{2}-\frac{e^{2}}{4\pi\varepsilon_{0}}\frac{1}{r_{2}}\right]+\cancel{\color{red}{\frac{e^{2}}{4\pi\varepsilon_{0}}\frac{1}{r_{12}}}}$$
is your Hamiltonian and $\psi_{0}$ is the ground state of $\hat{H}_{1,2}$, i.e.
$$\hat{H}_{1}\psi_{0}\left(\vec{r}_{1}\right)=E_{0}\psi_{0}\left(\vec{r}_{1}\right)$$
$$\hat{H}_{2}\psi_{0}\left(\vec{r}_{2}\right)=E_{0}\psi_{0}\left(\vec{r}_{2}\right)$$
then you can argue that
$$\hat{H}\psi_{0}\left(\vec{r}_{1}\right)\psi_{0}\left(\vec{r}_{2}\right)=\left[E_{0}+E_{0}\right]\psi_{0}\left(\vec{r}_{1}\right)\psi_{0}\left(\vec{r}_{2}\right)$$
In other words, $\psi_{0}\left(\vec{r}_{1}\right)\psi_{0}\left(\vec{r}_{2}\right)$ is a solution with energy $2E_{0}$ which is of course the lowest possible.
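A discrete toy version of this argument can be checked directly (a sketch, not from the source; the $2\times 2$ matrices and eigenvectors below are arbitrary illustrative choices): if $H_1 v = E_1 v$ and $H_2 w = E_2 w$, then $(H_1\otimes I + I\otimes H_2)(v\otimes w) = (E_1+E_2)(v\otimes w)$.

```python
# Pure-Python check that a product (Kronecker) state is an eigenstate of the
# non-interacting total Hamiltonian with the summed eigenvalue.

def kron(A, B):
    """Kronecker product of two matrices given as lists of rows."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def kron_vec(v, w):
    return [a * b for a in v for b in w]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# Diagonal "Hamiltonians" so the eigenpairs are obvious (illustrative choices):
H1 = [[1.0, 0.0], [0.0, 3.0]]   # eigenvector (1, 0) with E1 = 1
H2 = [[2.0, 0.0], [0.0, 5.0]]   # eigenvector (0, 1) with E2 = 5
I2 = [[1.0, 0.0], [0.0, 1.0]]

v, E1 = [1.0, 0.0], 1.0
w, E2 = [0.0, 1.0], 5.0

H = madd(kron(H1, I2), kron(I2, H2))   # total Hamiltonian without interaction
psi = kron_vec(v, w)                    # the product state

lhs = matvec(H, psi)
rhs = [(E1 + E2) * x for x in psi]
print(lhs, rhs)  # equal: product state is an eigenstate with energy E1 + E2
```

The $1/r_{12}$ interaction would add a term that is not of the form $H_1\otimes I + I\otimes H_2$, which is exactly why the product form is only an approximation for real helium.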
A common hand-waving argument is that this choice satisfies the fact that
$$P\left(A\cap B\right)=P\left(A\right)P\left(B\right)$$
for independent events $A$ and $B$. Here $A=\color{blue}{{\rm electron\:1\:is\:at\:}r_{1}}$ and $B=\color{blue}{{\rm electron\:2\:is\:at\:}r_{2}}$ so
$$P\left(A\right)=\left|\psi_{0}\left(\vec{r}_{1}\right)\right|^{2}$$
$$P\left(B\right)=\left|\psi_{0}\left(\vec{r}_{2}\right)\right|^{2}$$
$$P\left(A\cap B\right)=\left|\psi_{0}\left(\vec{r}_{1}\right)\psi_{0}\left(\vec{r}_{2}\right)\right|^{2}=\left|\psi_{0}\left(\vec{r}_{1}\right)\right|^{2}\cdot\left|\psi_{0}\left(\vec{r}_{2}\right)\right|^{2}=P\left(A\right)P\left(B\right)$$
in agreement with the probabilistic interpretation of quantum mechanics.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/373985",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Matter effects in neutrino oscillation The neutrino oscillation probability in matter is given by a formula that appeared as an image in the original post (not reproduced here), together with a second image defining its symbols.
Now what I do not understand is that "As the energy increases, the probability
of oscillation within the sun through the matter effect increases, so the survival probability decreases". I have read this (page 28) in a couple of books, but I am unable to cross-check it with the formula, because as I increase the energy, both $\sin^2$ factors decrease.
So what's going wrong here?
| I think the crucial point here is that in the sun you have two variables, the neutrino energy $E$ and the electron number density $N_e$. These two enter the matter potential. In order to run into the MSW resonance (and therefore effectively oscillate into muon neutrinos) these two variables have to combine in a way that $\frac{\Delta V}{\Delta m^2} - \cos{2\theta}\approx 0$.
To really check this claim, you would have to plug in the best-fit values for vacuum mixing for $\Delta m^2$ and $\theta$, but most importantly the continuum of electron number densities of the sun $N_e(x)$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/374171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Wave function in tensor product of Hilbert spaces If I had the wave function
$$\Psi\equiv\psi(r,\theta,\phi)\otimes\chi \in \mathscr{L}^2(\mathbb{R}^3)\otimes\mathbb{C}^{2S+1},$$
where $S$ is the spin of the state, is it correct to normalize the spin part of $\Psi$, namely $\chi$, regarding the spatial parameters $(r,\theta,\phi)$ as if they were fixed?
I mean: if $\psi\propto\sum Y_l^m (\theta,\phi)$, is it correct to say that the $Y_l^m(\theta,\phi)$'s are just numbers in $\mathbb{C}^{2S+1}$?
| The norm squared of your wavefunction is
$$\left<\Psi\big|\Psi\right>=\left<\psi\big|\psi\right>\left<\chi\big|\chi\right>$$
and this should be $1$. In particular, you can normalize both $\psi$ and $\chi$ separately and your $\Psi$ will be also normalized.
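This factorization of the norm is easy to verify numerically (a sketch; the vectors below are arbitrary stand-ins for the spatial and spin parts):

```python
import math

# Check that the norm of a product (tensor) state factorizes,
# ||psi (x) chi|| = ||psi|| * ||chi||, so normalizing each factor
# separately normalizes the product state.

def kron_vec(v, w):
    return [a * b for a in v for b in w]

def norm(v):
    return math.sqrt(sum(abs(x)**2 for x in v))

psi = [0.2 + 0.1j, 0.5, -0.3j]   # stand-in for the spatial part
chi = [1.0, 2.0 - 1.0j]          # stand-in for the spin part (S = 1/2)

# normalize each factor separately
psi = [x / norm(psi) for x in psi]
chi = [x / norm(chi) for x in chi]

Psi = kron_vec(psi, chi)
print(norm(Psi))   # 1, as claimed
```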
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/374432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Strange interference pattern of light on top of tower, pattern was seen on air. What was it? I was just looking out of window at night when I saw a tower with a light on top. It had a red light.
When I looked at it through my net curtains, I saw an interference pattern: the main light itself with bands of light on either side of it (like the interference of waves).
Although there was no screen, why did I see it? Did the air act like a screen? Was it because the net of the curtain acted like slits, which produced that pattern? Or is it some other simple diffraction effect of light? I don't think there is a trivial reason, because the interference pattern was clear.
Is there any process that can explain this phenomenon?
| Regarding the "no screen" part: Your retina can be a pretty good "screen" in this case.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/374639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Will movement of charged gaseous particles constitute electrical current? As electrons flow, they constitute current.
So if we manage to supply an extra electron to each of the gaseous molecules present in an enclosure and make the gaseous molecules travel in that enclosure, will that constitute current?
Is this kind of a thing feasible?
| Any spatial movement of charge constitutes an electrical current. Thus also the movement of gas ions, whether positively or negatively charged, produces an electrical current. Currents produced by ionized gas molecules are very common. For example in gas discharge tubes used for illumination.
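As a back-of-envelope sketch (all numbers below are illustrative assumptions, not from the answer), the current carried by drifting singly charged ions through a cross-section $A$ follows from $I = n\,q\,v_{\mathrm{drift}}\,A$:

```python
# Hypothetical numbers for an ionized gas in a tube; only the elementary
# charge is a physical constant here.

e = 1.602176634e-19     # elementary charge, C
n = 1.0e16              # assumed ion number density, 1/m^3
v_drift = 100.0         # assumed drift speed, m/s
A = 1.0e-4              # assumed tube cross-section, m^2 (1 cm^2)

I = n * e * v_drift * A
print(I)   # current in amperes (tens of microamperes for these values)
```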
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/374766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is a canonical transformation equivalent to a transformation that preserves volume and orientation? We have seen the reverse statement: Liouville's Theorem states that canonical transformations preserve volume (and orientation as well). Is the reverse true?
If I demand a map from the phase space to the phase space to preserve volume, is it necessarilly a canoncial transformation?
I couldn't come up with a counterexample; that's why I ask.
| * Counterexample: The transformation
$$Q^1~=~2q^1 ,\qquad P_1~=~p_1,\qquad Q^2~=~\frac{1}{2}q^2 ,\qquad P_2~=~p_2 $$
preserves phase space volume & orientation, but is not a symplectomorphism.$^1$
* For 2D phase space, the canonical phase space volume form $$\Omega~=~\frac{1}{n!}\omega^{\wedge n}$$
is the symplectic 2-form $\omega$ itself, so that the orientation & volume preserving transformations are the symplectomorphisms.
--
$^1$ Here we will assume that OP defines a canonical transformation (CT) as a symplectomorphism. Be aware that several non-equivalent definitions of CTs appear in the literature, cf. e.g. this Phys.SE post.
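The counterexample can be checked numerically (a sketch; the coordinate ordering $(q^1, q^2, p_1, p_2)$ and the matrix representation are choices made here, not part of the answer):

```python
# The linear map Q1 = 2 q1, Q2 = q2/2, P1 = p1, P2 = p2 has the Jacobian
# M = diag(2, 1/2, 1, 1): det M = 1 (volume and orientation preserving),
# but M^T * Omega * M != Omega, so it is not a symplectomorphism.

M = [[2.0, 0.0, 0.0, 0.0],
     [0.0, 0.5, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]

# canonical symplectic form in the (q1, q2, p1, p2) ordering
Omega = [[0, 0, 1, 0],
         [0, 0, 0, 1],
         [-1, 0, 0, 0],
         [0, -1, 0, 0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(A):
    return [list(col) for col in zip(*A)]

det_M = M[0][0] * M[1][1] * M[2][2] * M[3][3]   # M is diagonal
MT_Omega_M = matmul(transpose(M), matmul(Omega, M))

print(det_M)                  # 1.0: volume and orientation preserved
print(MT_Omega_M == Omega)    # False: the symplectic form is not preserved
```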
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/375072",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 0
} |
Extreme life - energy source for living tens of kilometers underground? Living cells were found up to at least 12 miles underground (article), and in other extreme places (BBC survey article), for which beside the problem of just surviving in such extreme conditions, a basic physics thermodynamical question is: what energy source it is based on?
And in such extreme temperatures there is needed a lot of energy just to fight 2nd law of thermodynamics - actively protect cell's structures against thermalization.
Such energy source needs to be relatively stable for past billions of years - what seems to exclude chemical energy sources (?)
One stable energy source in such high temperatures are thermal IR photons, and thermophotovoltaics is generally able to harvest energy from them. However, cell living in such extreme conditions would rather have the same temperature, hence 2nd law seem to forbid harvesting energy from such IR photons?
| One article to which your article points is this one, where they state:
How would these microbes have survived? Counterintuitively, the exceedingly high pressure in a miles-deep habitat — in the neighborhood of 5,000 times the pressure exerted by the atmosphere at sea level — could have helped. High pressures actually can stabilize biomolecules, such as DNA, offsetting the heat’s destructive effects.
They also didn't stress the idea that food was unavailable...
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/375288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
What are the Feynman diagrams for neutrino oscillations? Which Feynman diagrams are at the basis of neutrino oscillations? I find no clear explanation via Google.
| There are none, and the question isn't really even sensible.
Neutrino oscillation is not mediated by force carrying particles any more than any other change of quantum basis is.
This is similar to asking "What are the Feynman diagrams for the uncertainty principle?". In both cases we're talking about what happens when you treat a single quantum system in terms of observable $\hat{A}$, then in terms of observable $\hat{B}$ and then in terms of observable $\hat{A}$ again when $[\hat{A},\hat{B}] \ne 0$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/375905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Venus, Earth and Mars Magnetic fields Why does Earth have a magnetic field, while it appears that Venus and Mars have none or very little?
| The most accepted theory for the existence of a magnetic field in our planet is that the field is generated by electric currents due to the motion of convection currents of molten iron in the Earth's outer core driven by heat escaping from the core, a natural process called a geodynamo.
As you can see here https://nssdc.gsfc.nasa.gov/planetary/factsheet/mercuryfact.html, Mercury has a magnetic field which is smaller than Earth's, but stable. We also believe that the origin of this field is the same as that of Earth's magnetic field.
And I don't think that there's a good theory for Venus's lack of a magnetic field of its own. But the solar wind induces a magnetosphere around Venus.
And about Mars: "Mars has multiple umbrella-shaped magnetic fields mainly in the southern hemisphere, which are remnants of a global field that decayed billions of years ago" (extracted from Wikipedia). I didn't find a better way to explain it. You can also read this for further info on Mars's magnetic field: http://www.space.dtu.dk/english/research/universe_and_solar_system/magnetic_field
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/376032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 2,
"answer_id": 0
} |
Do bubble universes in eternal inflation have edges? I've been having a hard time understanding bubble universes in eternal inflation. So they are just finite regions of space where inflation has stopped and Hubble expansion has taken over? I just can't understand how a finite universe can work with out current understanding of Cosmology without curvature, which I don't think eternal inflation deals with. Is the multiverse itself flat? Am I missing something?
| The bubbles do have edges, because bubble collisions have been sought after (thus far unsuccessfully, I believe) as evidence of field-based inflation, and the collision of objects without any sort of edge would be indeterminate. The terminology varies with the model: Guth, who's usually credited with originating inflationary cosmology, tends to refer to its spatially or spatio-temporally "local" universes as "pocket universes", whereas Vilenkin often refers to them as "bubble universes". (Guth is very favorable to the "eternal inflation" idea sketched by Dr. Kohli, and may favor the term "pocket universes" for visualization, as it's more suggestive of a universe open to the future, which is usually toward the top of the spacetime diagrams originated by Einstein's teacher and colleague, Minkowski.)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/376165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What are the initial conditions associated with solving the geodesic equation in General Relativity? Can we say that initial conditions for solving the geodesic equation in general relativity be intial velocity of a particle?
| The most practical way to solve the geodesic equations is using a dynamical systems approach, where one re-writes the 2nd-order ODEs as a coupled system of first-order ODEs. Let the geodesic equation be given as:
$\ddot{x}^a + \Gamma^{a}_{bc} \dot{x}^{b} \dot{x}^{c} = 0$.
Then, this can be written as a system of first-order equations as follows:
$\dot{x}^{a} = y^{a}$,
$\dot{y}^{a} = -\Gamma^{a}_{bc} y^{b} y^{c}$
Now, this is a dynamical system, and while explicit solutions may not be possible, one can use all the beautiful tools of dynamical systems theory to get a very detailed description of information about the system. (It's also much easier to solve numerically when cast in this form!) The initial conditions are $(x^a_i, y^a_i)$.
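As a concrete sketch (not part of the answer), here is that first-order system integrated with a hand-rolled RK4 for geodesics on the unit 2-sphere, whose nonzero Christoffel symbols give $\ddot\theta = \sin\theta\cos\theta\,\dot\phi^2$ and $\ddot\phi = -2\cot\theta\,\dot\theta\dot\phi$; the initial conditions are exactly a starting point and an initial velocity, as discussed above.

```python
import math

def rhs(state):
    # first-order form: (theta, phi, v_theta, v_phi) -> derivatives
    theta, phi, vtheta, vphi = state
    return [vtheta,
            vphi,
            math.sin(theta) * math.cos(theta) * vphi**2,
            -2.0 * (math.cos(theta) / math.sin(theta)) * vtheta * vphi]

def rk4_step(state, h):
    k1 = rhs(state)
    k2 = rhs([s + 0.5 * h * k for s, k in zip(state, k1)])
    k3 = rhs([s + 0.5 * h * k for s, k in zip(state, k2)])
    k4 = rhs([s + h * k for s, k in zip(state, k3)])
    return [s + (h / 6.0) * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

# initial conditions: start on the equator, moving along it,
# so the geodesic should be the equator (a great circle) itself
state = [math.pi / 2.0, 0.0, 0.0, 1.0]
h, steps = 0.01, 1000
for _ in range(steps):
    state = rk4_step(state, h)

print(state)   # theta stays at pi/2 while phi advances steadily
```

Swapping in the Christoffel symbols of any other metric changes only `rhs`, which is the practical appeal of the first-order formulation.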
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/376296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
What is the relation of the mass of vector field in bulk and the scaling dimension of current operator in CFT? In AdS/CFT correspondence, we know that,
$$m^2=\Delta(\Delta-d)$$
where m is the mass of a scalar field and $\Delta$ is the scaling dimension of the dual operator in CFT. What about the relation of the mass of vector field in bulk and the scaling dimension of current operator in CFT?
| A vector field $A_{\mu}$ with spin-1 in the bulk has a dual spin-1 operator on the field theory, let's call it $J_{\mu}$.
If the vector field is massless, then $\Delta_{J}= d-1$ and $J_{\mu}$ is a conserved current.
In any other case, the current is not conserved.
The relation you wanted, in AdS$_5$ (i.e. $d=4$), is
$$m^2 R^2 = (\Delta - 1)(\Delta - 3)$$
where $R$ is the AdS radius and $\Delta$ is the conformal dimension.
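As a sketch (assuming the standard AdS$_5$/CFT$_4$ relation $m^2R^2=(\Delta-1)(\Delta-3)$ for a bulk vector), one can invert it for the larger root, $\Delta = 2 + \sqrt{1+m^2R^2}$, and check that the massless case reproduces the conserved-current dimension $d-1=3$:

```python
import math

def delta_vector(m2R2):
    # larger root of (Delta - 1)(Delta - 3) = m^2 R^2
    return 2.0 + math.sqrt(1.0 + m2R2)

print(delta_vector(0.0))   # 3.0: massless vector <-> conserved current, d - 1
print(delta_vector(3.0))   # 4.0, since (4 - 1)*(4 - 3) = 3
```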
More generally, in the context of AdS/CFT, gauge symmetries in the gravity theory correspond to global symmetries in the CFT.
One relevant reference is
http://laces.web.cern.ch/Laces/LACES09/notes/dbranes/lezioniLosanna.pdf
and of course, there are many more.
Cheers!!!
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/376768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the actual reason for the effects of fictitious forces? Suppose a person is standing in a bus moving with constant velocity. Assume that static friction between his feet and bus surface is very low (you could assume he is on roller skates) and we are observing this situation from the frame of reference of the bus. Then the driver applies the brakes and thus we feel accelerated or jerked in the forward direction.
We could explain this by saying that there is a fictitious force acting behind his back (which is opposite to the direction of acceleration of the bus) by using Newton's law in an accelerating frame.
But we know that there is no such force acting on a person standing inside the bus. What's the actual reason for him accelerating forward, or backward when the bus accelerates forward? (There would be no help from static friction for the motion of the person in this case.)
| The simple answer is the the person on the roller skates is not accelerating and no force is acting on them.
Suppose I am standing on the pavement outside the bus when the bus starts to move. Clearly I am not accelerating because I am just standing there. If I watch the roller skater through the bus windows then when the bus starts to move I will see the roller skater remain stationary relative to me. Since I am not accelerating, and the skater remains stationary relative to me, that means the roller skater isn't accelerating either.
The reason it appears to you that the roller skater is accelerating is because you are accelerating.
Or we can look at this another way. Most mobile phones have a built-in accelerometer. If I look at my phone now it tells me that my acceleration is zero. If I looked at my phone while I was standing on the pavement watching the bus it would also tell me I wasn't accelerating. And if the roller skater looked at their phone it would tell them that they weren't accelerating. By contrast your phone, and the phone of everyone else on the bus, would show a non-zero acceleration.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/377052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
What is a destructive measurement? What are destructive measurements or incomplete measurements, and what is the (conceptual) difference between them and a usual measurement?
I read somewhere that destructive measurements consume their qubit.
reference: measurement calculus
p.6 we simplified equations 2 and 3 to equations 4 and 5.
| * Destructive measurements are processes that completely destroy the system they are measuring, and they are primarily used when detecting light. As an example, to detect the polarization of a photon you can pass it through a polarizing beam splitter and put detectors on either output port: you get complete information of a projective measurement on the photon, but the photon also ceases to exist.
This describes most measurements performed on light, but not all measurements do this, and the alternative, called quantum non-demolition experiments (example, doi), earned Serge Haroche the 2012 Nobel prize in physics.
* An incomplete measurement is a completely different beast: it's a projective measurement that doesn't fully collapse the wavefunction, i.e. it completely collapses some superpositions but it leaves others untouched. As a simple example, suppose that you had the wavefunction
$$ |\psi⟩ = a |1⟩ + b|2⟩ + c|3⟩ + d|4⟩;$$
an example of an incomplete measurement is one that determines whether the state is in either the pair $|1⟩$ and $|2⟩$ or in $|3⟩$ and $|4⟩$, leaving behind the state
$$ |\psi_{12}⟩ = \frac{a |1⟩ + b|2⟩}{\sqrt{|a|^2+|b|^2}}
\quad\text{or}\quad
|\psi_{34}⟩ = \frac{c|3⟩ + d|4⟩}{\sqrt{|c|^2+|d|^2}},$$
with probabilities $|a|^2+|b|^2$ and $|c|^2+|d|^2$, respectively. This can be achieved by e.g. making the system interact with some auxiliary two-level system which is later projectively measured, but the details there are not particularly important.
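A small numerical sketch of this incomplete measurement (the amplitudes below are arbitrary illustrative choices, not from the answer):

```python
import math

# State |psi> = a|1> + b|2> + c|3> + d|4>. An incomplete measurement projects
# onto span{|1>,|2>} or span{|3>,|4>} and renormalizes, leaving the
# superposition inside each pair untouched.

amps = [0.5, 0.5j, -0.5, 0.5]          # a, b, c, d (already normalized)

p12 = abs(amps[0])**2 + abs(amps[1])**2   # probability of the {1,2} outcome
p34 = abs(amps[2])**2 + abs(amps[3])**2   # probability of the {3,4} outcome

psi12 = [amps[0] / math.sqrt(p12), amps[1] / math.sqrt(p12), 0.0, 0.0]
psi34 = [0.0, 0.0, amps[2] / math.sqrt(p34), amps[3] / math.sqrt(p34)]

def norm(v):
    return math.sqrt(sum(abs(x)**2 for x in v))

print(p12 + p34)                   # probabilities sum to 1
print(norm(psi12), norm(psi34))    # both post-measurement states are normalized
```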
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/377182",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why can large objects at greater distance be treated as a point particle? Why can large objects at greater distance be treated as a point particle?
"The bodies of our solar system are so far apart compared with their diameters that they can be treated as particles to an excellent approximation."
was the statement written in a textbook.
It got me thinking whether apart from mere visualization of these giant bodies as point particles due to the great distance between them, is there any explanation about why they can be treated as point particles (to an excellent approximation).
How does the large distance between the planets influence the way the planets or any other objects are treated?
| You can expand the gravitational field of an arbitrarily shaped body systematically in something called the multipole expansion (this is also done in electromagnetism). The lowest order term will be the slowest to decrease with distance and that lowest order term is just that of a point particle.
For instance, say you have two point masses of mass m at the points (0,0,0) and (0,0,dz).
The gravitational potential is $$U=-Gm\left(\frac{1}{\sqrt{x^2+y^2+z^2}}+\frac{1}{\sqrt{x^2+y^2+(z-dz)^2}}\right)$$
Taylor expanding about $dz=0$, and using $r\equiv\sqrt{x^2+y^2+z^2}$
$$U=-Gm\left(\frac{2}{r}+\frac{z\,dz}{r^3}+\mathcal{O}(dz^2)\right)$$
The first term is just the gravitational potential of a point mass of 2m, and this is the term that is most important when $dz/r\ll 1$.
The second term is the dipole moment term $$\frac{\vec{p}\cdot \hat{r}}{r^2}$$ where $\vec{p}=m\vec{dz}$. And higher order terms are 'multipole moment' terms that have to do with higher order spherical harmonics.
This example I'm using is very similar to user3727079's answer, but the idea is we can do this expansion for an arbitrarily shaped body and it is always the case that the most important term far away is that of a point mass (and the next most important is the dipole moment).
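The expansion can be checked numerically (a sketch; the units $G=m=1$, the separation, and the evaluation point are illustrative choices):

```python
import math

# Two point masses m at (0,0,0) and (0,0,dz). Far away, the exact potential
# approaches the monopole (point-mass) term, and adding the dipole term
# improves the approximation further.

G = m = 1.0
dz = 0.1

def exact(x, y, z):
    r1 = math.sqrt(x * x + y * y + z * z)
    r2 = math.sqrt(x * x + y * y + (z - dz)**2)
    return -G * m * (1.0 / r1 + 1.0 / r2)

def monopole(x, y, z):
    r = math.sqrt(x * x + y * y + z * z)
    return -G * m * 2.0 / r

def monopole_plus_dipole(x, y, z):
    r = math.sqrt(x * x + y * y + z * z)
    return -G * m * (2.0 / r + z * dz / r**3)

pt = (3.0, 4.0, 5.0)   # a point with r = sqrt(50) >> dz
err_mono = abs(exact(*pt) - monopole(*pt))
err_dip = abs(exact(*pt) - monopole_plus_dipole(*pt))
print(err_mono, err_dip)   # the dipole term reduces the residual error
```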
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/377336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Where does the fine structure constant come from? I have this question: Where does the fine structure constant come from? Is it derived? Is it assumed? I will be most thankful if you will also include other detailed info that you think is also good to know, or just suggest a reading on it.
| The fine structure constant is one of the fundamental constants in nature, just like the speed of light or Planck's constant. It is there, and that's all we know for sure. We don't really have a compelling theory on its origin, nor a mechanism that explains its value.
In short, the fine structure constant is not a derived quantity, it is fundamental. You may want to read more about the following aspects of $\alpha$:
* Its origin. Some modern theories, like String theory or AdS/CFT, propose mechanisms on how this constant emerges from more fundamental objects, but -- in practical terms -- they are not really able to predict its value. One could also argue that the anthropic principle partially fixes this object to its observed scale, but to a wide community, this explanation is just as unconvincing and useless as it gets.
* Its measurement. This constant is measured in ion (Penning) traps (by Gabrielse et al.) and by means of the so-called electron anomalous magnetic moment. It is one of the fundamental constants that have been measured to the highest precision. Truly marvellous if you ask me.
* Its constancy. Finally, it bears mentioning that some people have suggested that this constant is not really a constant, that it varies from place to place (and, consequently, from time to time) in the universe. This has been tested not to be the case, to an astonishing precision.
As stressed by other answers, you may write $\alpha$ in terms of $e$, the charge of the electron. In this case, you may want to argue that $\alpha$ is fundamental and $e\sim\sqrt{4\pi\alpha}$ is derived from it, or that $e$ is fundamental and $\alpha\sim e^2/4\pi$ is derived from it. Needless to say, both interpretations are perfectly valid.
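Although $\alpha$ is fundamental rather than derived, its numerical value follows from the measured constants via $\alpha = e^2/(4\pi\varepsilon_0\hbar c)$. A sketch (the CODATA values below are quoted from memory and should be treated as assumptions):

```python
import math

# In the 2019 SI, e and c are exact by definition; hbar follows from the exact
# h, and eps0 is a measured (recommended) value.

e = 1.602176634e-19        # C (exact)
c = 299792458.0            # m/s (exact)
hbar = 1.054571817e-34     # J*s
eps0 = 8.8541878128e-12    # F/m

alpha = e**2 / (4.0 * math.pi * eps0 * hbar * c)
print(alpha, 1.0 / alpha)  # ~0.0072973..., ~137.036
```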
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/377440",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 4,
"answer_id": 0
} |
Why does Griffiths's book say that there can be no surface current since this would require an infinite electric field for an incident wave? In sec. 9.4.2 Griffiths shows the well known boundary conditions for E and B fields, one of them is this:
$$\frac{1}{\mu_{1}}\textbf{B}_{1}^{\parallel}-\frac{1}{\mu_{2}}\textbf{B}_{2}^{\parallel}=\textbf{K}_{f}\times\hat{\textbf{n}}$$
Where $\textbf{K}_{f}$ is the free surface current. Griffiths says in this section:
"... For Ohmic conductors ($\textbf{J}_{f}=\sigma\textbf{E}$) there can be no surface current, since this would require an infinite electric field at the boundary."
I just can't understand it yet. Why is this true?
I have another question:
The boundary for the normal component of E is
$$\epsilon_{1}E_{1}^{\perp}-\epsilon_{2}E_{2}^{\perp}=\sigma_{f}$$
Where $\sigma_{f}$ is the free surface charge.
When we treat with incident EM waves on a conductor, is it necessary to consider $\textbf{K}_{f}$ and $\sigma_{f}$ different to zero?
I am asking this because in this section of the book Griffiths made $\textbf{K}_{f}=0$ and $\sigma_{f}$ vanishes naturally because he only studies normal incidence, but my question goes to the most general case in which the normal component of E is nonzero.
| I think this is so because, for finite conductivity in an ohmic conductor, J = $\sigma$E requires the current density to be parallel to the electric field. Since for conductors the electric field at the surface is perpendicular to the surface, J (the current) would also be normal to the surface. But the boundary condition n $\times$ H = K requires K to lie in the surface (perpendicular to the normal), thus there can be no surface current.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/377652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 1
} |
Small inter nuclear separation limit for Diatomic molecule Let’s take the a simple $H_2^+$ molecule, where there is only electron which is $r_a$ away from the first proton and $r_b$ away from the other one.
Let’s call the separation between the two protons $R$.
As $R\rightarrow \infty$, the electron will stick to one of the two protons, so the wavefunction will be: $$ \phi = N_{\pm}(1s_a \pm 1s_b).$$
I recognise the two solutions as the gerade and ungerade orbitals. The $1s$ means ground state around each proton.
I can work out the normalisation constant to be $$ N_{\pm} = \sqrt{\frac{1}{2(1\pm S)}}, \, S = \int 1s_a 1s_b \mathrm{d}^3 r $$
Now in the limit of $R \rightarrow 0$, the gerade solution becomes just $1s$ which makes sense, but the ungerade is not defined - what happens to it?
| Think physically about the shape of the odd state. It has a nodal plane at the midpoint of the line connecting the two nuclei; this is the only node to the wave function. As the nuclei become coincident, that nodal plane ends up passing through the combined nucleus; since this is the only node, the resulting wave function is the $2p_{z}$ state, if the axis of the molecule was originally oriented along the $z$-axis.
You will not get this by taking a limit of $1s$ wave functions. When the nuclei are very far apart, the overlap integral that determines the energy difference between the odd and even states is very small, and the wave function in the vicinity of each nucleus looks very close to a $1s$. However, as the distance between the nuclei gets comparable to the Bohr radius $a_{0}$ or smaller, the true wave function will need to be constructed out of more than just $1s$ basis states. When the separation is comparable to $a_{0}$, you get a very complicated wave function, although it simplifies to just the $2p$ state once the separation gets very small compared to $a_{0}$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/377792",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Can humans hear transverse sound waves? According to Wikipedia and validated by a clever thought experiment here, sound waves can be transverse as well as longitudinal, if they're propagating through a solid. Consider my mind blown and my curiosity piqued. However, is this a phenomenon we can hear? And is there any reason these waves would be different from longitudinal sound waves?
Understandably, this kind of sound wouldn't travel through the normal path as that requires traveling through the air to get to the eardrum. Bone conduction headphones seem to offer a promising avenue, but I see no reason that those sound waves wouldn't be longitudinal.
| Sound transmission through a solid can occur by either compressive waves or shear (transverse) waves because a solid is capable of sustaining shear stresses. Sound transmission through air is exclusively by compression waves because air cannot sustain shear stresses.
You can certainly hear both sorts of waves by pressing your ear against, for example, a steel girder which is carrying both sorts of waves because someone on the other end of the girder is whacking it with a hammer, but the amount of each you will hear depends on details of how exactly your ear is coupled to the girder.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/378064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Experiment on friction coefficient Here you can see the results of the experiment about a friction coefficient:
The mean of the friction coefficient becomes 0.262 but when I do a linear regression in the form of y=mx the slope is 0.31. Shouldn't it be the same? I used $F_N$ as x values and $F_D$ (friction force) as y values.
regression: https://www.desmos.com/calculator/njj4utvsdk
| Expounding on sammy's comment: when you divide $F_D$ by $F_N$, since $F_D$ is only measured to one significant figure, you can only report $\mu_S$ to one significant figure.
Value of $\mu_S$ from calculating the mean of all experiments: 0.262, which rounds to 0.3
Value of $\mu_S$ from linear regression: 0.311, which rounds to 0.3
So for your case you do get the same answer either way.
If you would take a ton more data points with more significant figures on $F_D$ you should still get the same answer both ways. It would probably be a crapshoot to guess which way will give you a more accurate answer with four data points. Plotting the data points and looking at the regression will at least give you a heads up if the experiment shows non-linearity (which would be an indication of some limitation of the experimental setup to reflect the model).
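To illustrate the point numerically (with made-up numbers, since the actual measurements are in a figure not reproduced here): the mean of the per-trial ratios and the through-origin least-squares slope are genuinely different estimators, yet both round to the same single significant figure.

```python
import numpy as np

# Hypothetical data standing in for the four measurements in the figure
F_N = np.array([1.0, 2.0, 3.0, 4.0])   # normal forces (N)
F_D = np.array([0.3, 0.5, 0.9, 1.3])   # friction forces (N)

# Estimator 1: mean of the per-trial ratios F_D / F_N
mu_mean = np.mean(F_D / F_N)

# Estimator 2: least-squares slope of the through-origin model y = m x,
# which is m = sum(x*y) / sum(x^2)
m = np.sum(F_N * F_D) / np.sum(F_N**2)

# The estimators weight the data points differently, so they differ...
assert not np.isclose(mu_mean, m)
# ...but to one significant figure both give mu = 0.3
assert round(mu_mean, 1) == round(m, 1) == 0.3
```

The through-origin slope weights the high-$F_N$ points more heavily, which is why the two numbers differ in the later digits.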
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/378240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Which vacuum is the Universe really in? There are two types of vacuum in the Standard Model: the vacuum of the Higgs potential, and the vacuum of the Yang-Mills fields labelled by the Chern-Simons number. See figure 5 here.
The Lagrangian of the Standard electroweak theory contains both the gauge fields and Higgs doublet. Through the gauge covariant derivative the Higgs doublet couples to the gauge fields. So are they really different theories? As I understand, after the electroweak symmetry breaking the Universe is locked at one point/direction of the vacuum manifold of the Higgs potential. But I also hear about the Universe being in one of the vacua labelled by the Chern-Simons number.
My question is which vacuum is the Universe really in?
| A "pure" Higgs theory (i.e. containing only the Higgs field) has a vacuum labeled by the VEV of the Higgs field, a pure YM theory has a vacuum labeled by the $\theta$-angle, and the combined theory, i.e. a YM theory with a Higgs field as we find it in the standard model, has a vacuum labeled by both the Higgs VEV and the $\theta$-angle.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/378521",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Divergence of Electric Field Due to a Point Charge I am trying to formally learn electrodynamics on my own (I only took an introductory course). I have come across the differential form of Gauss's Law.
$$ \nabla \cdot \mathbf E = \frac {\rho}{\epsilon_0}.$$
That's fine and all, but I run into what I believe to be a conceptual misunderstanding when evaluating this for a point charge.
I know the math looks better in spherical coordinates, but I will be using Cartesian.
So when I calculate the divergence I obtain:
$$ \nabla \cdot \mathbf E = \nabla \cdot kQ\langle\frac{x}{(x^2+y^2+z^2)^{\frac{3}{2}}},\frac{y}{(x^2+y^2+z^2)^{\frac{3}{2}}},\frac{z}{(x^2+y^2+z^2)^{\frac{3}{2}}}\rangle = \frac{-3(x^2+y^2+z^2)}{(x^2+y^2+z^2)^{\frac{5}{2}}}+\frac{3}{(x^2+y^2+z^2)^{\frac{3}{2}}}.$$
This can further be simplified:
$$\frac{-3(x^2+y^2+z^2)}{(x^2+y^2+z^2)^{\frac{5}{2}}}+\frac{3}{(x^2+y^2+z^2)^{\frac{3}{2}}} = \frac{3}{(x^2+y^2+z^2)^{\frac{3}{2}}}-\frac{3}{(x^2+y^2+z^2)^{\frac{3}{2}}} = \frac{3-3}{(x^2+y^2+z^2)^{\frac{3}{2}}}.$$
Now instinctively I would say that 3-3 is zero and then the whole thing is zero everywhere. I am confused as to why (purely mathematically) this expression is not equal to zero at the origin. I completely understand why it physically has to be that way. And I also understand that it is modeled with the Dirac delta function. But what (again, mathematically) is stopping me from saying that equation is just zero even at the origin?
| What you want to compute is essentially $$\vec\nabla \cdot \frac{\vec x}{\left|\vec x\right|^3}$$
at the origin. Of course, that doesn't exist as a function since the field is singular. On the other hand, you have already shown that it vanishes everywhere else.
So you need to interpret the expression in a weak sense, i.e. as a distribution, and consider the integral $$\int_{B_\epsilon}\vec\nabla \cdot \frac{\vec x}{\left|\vec x\right|^3} \,\text{d}^3x$$
over some volume containing the origin, conveniently chosen as a ball of radius $\epsilon$, convert it to a surface integral which does not include the singularity and see that the result is finite.
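Both facts — the pointwise divergence vanishing away from the origin, and a radius-independent flux of $4\pi$ through any sphere around the singularity — can be checked numerically (a sketch with $kQ$ set to 1; grid sizes are arbitrary choices):

```python
import numpy as np

def field(p):
    """E(x) = x / |x|^3, with kQ = 1."""
    r = np.linalg.norm(p)
    return p / r**3

def divergence(p, h=1e-5):
    """Central-difference divergence of the field at point p."""
    d = 0.0
    for i in range(3):
        dp = np.zeros(3)
        dp[i] = h
        d += (field(p + dp)[i] - field(p - dp)[i]) / (2 * h)
    return d

# Away from the origin the divergence is (numerically) zero
assert abs(divergence(np.array([0.3, -0.7, 1.1]))) < 1e-6

def flux(R, n=200):
    """Surface integral of E . n over a sphere of radius R (midpoint rule)."""
    th = (np.arange(n) + 0.5) * np.pi / n
    ph = (np.arange(n) + 0.5) * 2 * np.pi / n
    TH, PH = np.meshgrid(th, ph)
    nx, ny, nz = np.sin(TH) * np.cos(PH), np.sin(TH) * np.sin(PH), np.cos(TH)
    # field evaluated on the sphere: E(R * nhat) = nhat / R^2
    Ex, Ey, Ez = nx / R**2, ny / R**2, nz / R**2
    En = Ex * nx + Ey * ny + Ez * nz
    dA = R**2 * np.sin(TH) * (np.pi / n) * (2 * np.pi / n)
    return np.sum(En * dA)

# The flux is 4*pi no matter how small the sphere is
assert abs(flux(1.0) - 4 * np.pi) < 1e-3
assert abs(flux(1e-3) - 4 * np.pi) < 1e-3
```

The radius-independence of the flux is exactly what the distributional statement $\vec\nabla\cdot(\vec x/|\vec x|^3) = 4\pi\,\delta^3(\vec x)$ encodes.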
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/378828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Does radiation cause a change in temperature? If yes, then is there a limit to the temperature decrease? If no, then can the body which radiates heat attain an absolute zero temperature?
| Radiation obviously causes a change in temperature. Sit in front of a fire for a while.
The upper limit of the decrease in temperature is the temperature of the cold source which is acting as the counterpart.
A body which only radiates heat could only attain absolute zero if heat were being pumped out of it somehow and concentrated elsewhere - it would not automatically reach absolute zero, because the cold source would heat up from absolute zero.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/379105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Weird sound coming from thermos flask I had a thermos flask that I had never used since I had bought it. Today, I decided to take it out and use it for storing hot water, so that I don't have to heat water every time I feel thirsty.
An interesting incident occurred. I heated water to near about boiling temperature, and filled the thermos flask with it. Then I tightened the lid. And then I started to hear this sound from the flask.
I realised that the sound comes only if I put in hot water, not cold water. It's a weird sound. It stops if the flask is kept undisturbed for some time, but starts again if I move the flask.
Can anyone account for this sound?
The room temperature is about 15°C.
| What is probably taking place is this: when something hot is inside the thermos, the air trapped between the outside of the thermos liner and the inside of the plastic sleeve surrounding it begins expanding due to (slow) heat loss from the thermos liner. Where the liner meets the sleeve around the upper end of the assembly there is a press-fit joint which is probably 1) not totally airtight and 2) has a little water, tea, coffee, etc. sitting in it. When the pressure inside this space rises to a certain level, it pushes air out through that joint against the surface tension of the water in the crevice; the fluid rebuilds the closure, gets pushed out again, and so on, creating that brief bubbling noise. Once the internal pressure falls below that threshold, the bubbling stops and the pressure builds up again.
You can test this idea by filling the thermos with something hot, capping it, and immersing it in a tub of water. Watch for bubbles coming from one of the joints.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/379578",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to measure a static electric field? I looked up google but didn't find any design for measuring electric field that doesn't vary with time.
My own idea is to use two parallel plates (like a capacitor but without the dielectric). In an electric field E a potential difference V = Ed (d is separation between the plates) will develop, which can be measured using a voltmeter. Will this work?
| You can't measure the voltage between two plates in a static electric field because the field will also exist within the wires of your meter. You need to rotate the plane of the plates and use slip rings to feed the voltage to an AC meter. As I recall, there is often a vertical electric field near the surface of the earth, but it usually does not produce noticeable effects.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/379682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Can we consider non-inertial frames in Lagrangian dynamics formulated through d'Alembert's principle? When we derive Euler-Lagrange equations from an action principle, there is no explicit mention of a reference frame, so I assumed that the formulation is correct even in non-inertial frames (is this true?).
But I have trouble in accepting this when we derive Lagrange equations from d'Alembert's principle.
The principle states that the sum of the differences between the forces acting on a system of mass particles and the time derivatives of the momenta of the system itself, projected onto any virtual displacement consistent with the constraints of the system, is zero. Thus, in symbols, d'Alembert's principle is written as follows: $\sum_{i=1}^{n}(\mathbf{F}_i - m_i\mathbf{a}_i)\cdot\delta\mathbf{r}_i = 0$
Now I see that we implicitly assume Newton's equation holds, but for this to be true the frame must be inertial.
*
*So is it possible to formulate the Lagrangian in non-inertial frames using d'Alembert's principle?
*The term applied force is ambiguous to me; I am aware that we cannot consider dissipative forces, but are there any other forces we should disregard?
| Yes, you can safely use non-inertial reference frames provided the reactive forces due to constraints are ideal and provided you include all inertial forces in the set of non-reactive forces.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/379955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can we prepare a superposition of two many-body states efficiently using a quantum circuit? Let's say we have two quantum many-body states $|\psi_1\rangle$ and $|\psi_2\rangle$ (or equivalently, two quantum circuits $U_1$ and $U_2$), and also an ancilla qubit $\alpha|0\rangle+\beta|1\rangle$. The goal is to prepare the state $\alpha|\psi_1\rangle+\beta|\psi_2\rangle$.
Can this be done efficiently using quantum circuits?
If $|\psi_1\rangle$ and $|\psi_2\rangle$ are logical states in quantum error correction, this is called an encoding circuit. My question is: in general, can this be done?
| Yes.
Given a circuit for a unitary $U$ on $\mathbb C^d$, we can always build a circuit for a controlled-$U$ gate acting on $\mathbb C^2\otimes \mathbb C^d$, that is, a gate acting as
\begin{align}
|0\rangle|x\rangle&\mapsto |0\rangle |x\rangle\\
|1\rangle|x\rangle&\mapsto |1\rangle (U| x\rangle)
\end{align}
(i.e., $U$ acts on $\mathbb C^d$ if the control qubit is $|1\rangle$). Such a gate is obtained by replacing any elementary gate $G$ in the circuit for $U$ by a controlled-$G$ gate. (E.g., if the gate set is single-qubit rotations and CNOTs, replace them by controlled unitaries and Toffolis, for which efficient circuits are known.)
Now you can build your state by starting as follows:
*
*Start with $(\alpha|0\rangle + \beta |1\rangle)|x\rangle$.
*Apply $U_1$ to the second register.
*Apply a controlled-$(U_2U_1^{-1})$ gate.
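These three steps can be sketched for a single data qubit with plain linear algebra (the Hadamard and phase gates below are arbitrary stand-ins for the circuits $U_1$ and $U_2$):

```python
import numpy as np

alpha, beta = 0.6, 0.8              # ancilla amplitudes, |alpha|^2 + |beta|^2 = 1
x = np.array([1.0, 0.0])            # initial data state |x> = |0>

U1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # example U1 (Hadamard)
U2 = np.array([[1, 0], [0, 1j]])                # example U2 (phase gate)
I2 = np.eye(2)

# Step 1: (alpha|0> + beta|1>) tensor |x>
state = np.kron(np.array([alpha, beta]), x)

# Step 2: apply U1 to the second register
state = np.kron(I2, U1) @ state

# Step 3: controlled-(U2 U1^{-1}), block-diagonal in the control qubit
Z = np.zeros((2, 2))
cU = np.block([[I2, Z], [Z, U2 @ U1.conj().T]])
state = cU @ state

# The result is alpha|0>|psi1> + beta|1>|psi2>, with |psi_i> = U_i|x>
target = alpha * np.kron([1, 0], U1 @ x) + beta * np.kron([0, 1], U2 @ x)
assert np.allclose(state, target)
```

Since each elementary gate of $U_2U_1^{-1}$ is only promoted to its controlled version, the circuit depth grows by at most a constant factor, which is what makes the construction efficient.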
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/380268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding time period of SHM from equation of displacement Say for example I've got the equation of a SHM as: $$x = A \cos (\omega t + \phi)$$ where $A$ is the amplitude.
How do I find the time period of this motion?
I tried by finding the second order differential of the given equation.
$a = \dfrac {d^2 x}{d t^2} = - A \omega ^2 \cos (\omega t + \phi)$
Comparing it with the general equation for acceleration $a = - \omega ^2 x$, we can find $\omega$ from here.
But that is where the problem arises: it makes no sense if I write $\omega = \omega \sqrt{A}$.
What is the correct method to find the time period of the SHM? What am I missing?
| There is a very simple mistake in your math. Notice that $A$ is part of $x$; once it is factored out you just get $\omega=\omega$ again. If you want to find a meaning in $\omega T = 2\pi$, consider the fact that $\cos$ (or $\sin$) is a periodic function with period $2\pi$. Hence, every time you have a time difference such that $\omega(t_1 - t_2)=2\pi$ you are back at the same point. Hence the period is given by $\omega T = 2\pi$.
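Spelled out, the comparison contains no stray factor of $A$:
$$a = \frac{d^2x}{dt^2} = -\omega^2\underbrace{A\cos(\omega t + \phi)}_{=\,x} = -\omega^2 x,$$
so matching against $a = -\omega^2 x$ just returns $\omega = \omega$, and the period follows from the periodicity of the cosine: $T = \dfrac{2\pi}{\omega}$.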
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/380560",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How do anomalies work in the causal formulation of QFT? In the Epstein-Glaser formulation of a QFT, the would-be divergences are taken care of by meticulously splitting the distributions that appear in the construction of the $S$-matrix (or correlation functions). As a result, there are no divergences anywhere and the theory is perfectly rigorous1.
How do anomalies fit into this picture? These can be understood as the clash between a symmetry of the action and a regulator that refuses to respect it. In more pragmatical terms, the symmetry would be restored if the regulator is removed, so it is $\mathcal O(\epsilon^n)$, while the divergences are $\mathcal O(\epsilon^{-m})$; and, if $n=m$, a finite piece survives the physical limit $\epsilon\to 0$. But in the EG formulation, there are no divergences and no regulators, so how do anomalies arise? What is their precise role?
1: and – naturally – it agrees, in a general sense, with what naïve perturbation theory predicts; formally speaking, in the EG formulation the would-be divergences are recast as polynomials in the external momenta, i.e., they are subtracted in momentum space, in the sense of BPHZ.
| Anomalies may (or may not) appear as obstructions in the proof of the Ward-Takahashi identities, which provide gauge invariance. See
D.R. Grigore, The structure of the anomalies of gauge theories in the causal approach, J. Physics A: Math. Gen. 35 (2002), 1665.
See also Chapter 15 (Interacting quantum fields) from the recent course ''Mathematical quantum field theory'' by Urs Schreiber.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/380669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
} |
Why do Newton's laws have to be used only when working with a particle? I have a small understanding of physics but I am not studying the subject.
Whilst trying to model a plane landing in Differential Equations (an A-level maths module), we were told that you have to assume the plane is a particle to be able to apply Newton's laws to it. Is this the case? If so, why?
| Because Physics, though a precise discipline, often works with approximations.
In the question you're concerned with, the plane can be modelled accurately by thinking of it as a particle. Nothing essential to the question is lost by this approximation.
If we, on the other hand, were concerned about the aerodynamics of the plane, then to approximate as a particle wouldn't do.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/380894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Commutator identities and Fourier transform Is it possible to derive one side of the arrow below from the other by using only the Fourier transform and its reciprocal?
$$[\hat{p},f(\hat{x})]=-i\hbar f'(\hat{x}) \leftrightarrow [\hat{x},f(\hat{p})]=i\hbar f'(\hat{p})$$
| Under some hypotheses on $f$ the answer is positive. I consider the simplest case below.
If $U$ is a unitary operator on the Hilbert space $H$ and $A: D(A) \to H$ is a self-adjoint operator over the same Hilbert space, from spectral calculus it follows that $$Uf(A)U^{-1} = f(UAU^{-1})\tag{1}$$ for every measurable function $f : \mathbb R \to \mathbb R$.
Regarding momentum $P$ and position $X$ operators over $H= L^2(\mathbb R, dx)$, it holds $$U P U^{-1} =-X\:,\quad U X U^{-1} =P \tag{2}$$
where $U : L^2(\mathbb R, dx) \to L^2(\mathbb R, dx)$ is the unitary operator given by the Fourier(-Plancherel) transform. If $f : \mathbb R \to \mathbb R$ is, for instance, in the space ${\cal S}(\mathbb R)$ of Schwartz functions (also using the facts that (1) this space is invariant under $X$, $P$ and under functions $g(X)$, $g(P)$ for $g \in {\cal S}(\mathbb R)$, (2) the explicit form of $P$ over ${\cal S}(\mathbb R)$, and (3) $f \in {\cal S}(\mathbb R)$ entails $f' \in {\cal S}(\mathbb R)$), then
$$[P, f(X)] =-i\hbar f'(X)\:.$$
As a consequence, since $U$ preserves ${\cal S}(\mathbb R)$,
$$[UPU^{-1}, Uf(X)U^{-1}]= U[P, f(X)]U^{-1} =-i\hbar Uf'(X)U^{-1}\:.$$
From (1) and (2), the result found can be written
$$[X, f(P)] =i\hbar f'(P)\:.$$
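As a sanity check, the position-space identity the argument starts from can be verified on a grid (a numerical sketch with $\hbar = 1$ and the example choice $f(x)=x^2$; grid parameters are arbitrary):

```python
import numpy as np

hbar = 1.0
n, L = 4001, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

psi = np.exp(-x**2)        # smooth, rapidly decaying test function
f, fprime = x**2, 2 * x    # example f and its derivative

def P(phi):
    """Momentum operator -i hbar d/dx via central differences."""
    out = np.zeros_like(phi, dtype=complex)
    out[1:-1] = -1j * hbar * (phi[2:] - phi[:-2]) / (2 * dx)
    return out

lhs = P(f * psi) - f * P(psi)      # [P, f(X)] psi
rhs = -1j * hbar * fprime * psi    # -i hbar f'(X) psi

# agreement on interior points, up to discretization error
assert np.allclose(lhs[1:-1], rhs[1:-1], atol=1e-3)
```

The Fourier-side identity $[X, f(P)] = i\hbar f'(P)$ could in principle be checked the same way after transforming to momentum space.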
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/381074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How does the expanding universe affect quantum fields? The universe is expanding. It then seems logical to say that the QM fields are expanding as the universe expands. My question is how does this happen? When I consider an expanding field it forces me to consider the actual properties of the field. Are the QM fields becoming less dense, or are these fields only mathematical constructs that enable predictions to be made? If the fields are not only mathematical, then does their expansion require the creation of new field “stuff” to fill in the new space being created? QM mathematics is way out of my skill level, so an answer that respects my curiosity and understands my limitations is appreciated.
| Photons are quantum particles you get when the electromagnetic field is quantized. So consider the photons which make up the cosmic background radiation. They were generated when the mean temperature was about 3,000 K, but now represent a temperature of about 2.7 K. They didn't "cool off"; instead their wavelengths increased with the cosmic expansion, by a factor easily estimated as 3,000 K / 2.7 K ≈ 1,100, which is the expansion since the cosmos cooled below the hydrogen ionization level.
No fancy math is required, just experimental results.
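In numbers (a trivial sketch; 2.725 K is the measured present-day CMB temperature):

```python
# Expansion factor since photon decoupling, from the temperature ratio
T_decoupling = 3000.0   # K, roughly where hydrogen recombines
T_today = 2.725         # K, present-day CMB temperature

stretch = T_decoupling / T_today   # wavelengths have grown by this factor
assert 1050 < stretch < 1150       # the familiar factor of ~1100
```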
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/381405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Gravity in vector We know that gravity is a force. But what is its direction? Can it be expressed by a vector, and how can we do that? This question can also be asked about Coulomb's Law.
| As a first statement, I like to begin by stating that gravity is always directed towards the mass (i.e. always attractive). In other words, if mass A pulls on mass B, I would state that the direction of the gravitational force is towards mass A. If you set up a coordinate system, you may then put this in by hand. (The same line of reasoning applies to the Coulomb force, except now the direction of the force may be either toward or away depending on the signs of the charges.)
If we wish to be more formal, we can set up a spherical coordinate system centred on mass A. The direction of the gravitational force on mass B will always be in the $-\hat{r}$ direction, where $\vec{r}$ is a vector that describes the location of mass B.
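In components, with mass A at the origin, the force vector on mass B can be written down directly (a sketch with made-up masses and a made-up position):

```python
import numpy as np

G = 6.674e-11                            # m^3 kg^-1 s^-2
m_A, m_B = 5.0e24, 1.0e3                 # hypothetical masses (kg)
r_vec = np.array([3.0e6, 4.0e6, 0.0])    # position of B relative to A (m)

r = np.linalg.norm(r_vec)
r_hat = r_vec / r

# Gravitational force on B: magnitude G m_A m_B / r^2, direction -r_hat
F_on_B = -G * m_A * m_B / r**2 * r_hat

# The force always points back toward mass A (attractive)
assert np.dot(F_on_B, r_hat) < 0
```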
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/381533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Energy density in string wave The total energy density in a harmonic wave on a stretched string is given by
$$\frac{1}{2}\rho A^2 \omega^2 \sin^2(kx-\omega t).$$
We can see that this energy oscillates between a maximum and a minimum. So the energy is maximum at 0 displacement, when the string is stretched and at its maximum speed (both KE and PE density are maximum at the same time), and minimum when the displacement is maximum, as the string is unstretched and doesn't have any velocity.
This makes sense, but I am having trouble merging this with SHM oscillations. In SHM the KE and PE are not in phase. And if we consider each particle of the wave acting as an SHM oscillator, then would the PE not be maximum at the maximum displacement?
| In a travelling wave the total energy of a piece of string between $x$ and $x+dx$ is not constant. This is because each piece of string is doing work on its neighbour to the right at a rate
$$
P= - T\frac{\partial y}{\partial x}\frac{\partial y}{\partial t}.
$$
The local version of the energy conservation law is then
$$
\frac{\partial}{\partial t}\left(\frac 12 \rho \left(\frac{\partial y}{\partial t}\right)^2+ \frac 12 T \left(\frac{\partial y}{\partial x}\right)^2\right)+ \frac{\partial }{\partial x}\left( - T\frac{\partial y}{\partial x}\frac{\partial y}{\partial t}\right)=0.
$$
The expression in parentheses in the first term is the total energy density (KE+PE).
This equation says the local time-rate-of-change of the total energy between $x$ and $x+dx$ is equal to the rate at which work is being done on the string at $x$ minus the rate at which energy flows out due to work being done at $x+dx$.
Assuming I have made no typos, this energy equation can be verified by using the equation of motion
$$
\rho\frac{\partial^2 y}{\partial t^2}- T\frac{\partial^2 y}{\partial x^2}=0.
$$
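For a travelling wave $y = A\sin(kx-\omega t)$ the check can be done symbolically, using the dispersion relation $\omega = k\sqrt{T/\rho}$ that follows from the equation of motion (a sketch with SymPy):

```python
import sympy as sp

x, t, A, k, rho, T = sp.symbols('x t A k rho T', positive=True)
omega = k * sp.sqrt(T / rho)           # dispersion relation of the string
y = A * sp.sin(k * x - omega * t)      # travelling-wave solution

# total energy density (KE + PE) and the power flowing to the right
e = (sp.Rational(1, 2) * rho * sp.diff(y, t)**2
     + sp.Rational(1, 2) * T * sp.diff(y, x)**2)
P = -T * sp.diff(y, x) * sp.diff(y, t)

# local energy conservation: d(e)/dt + d(P)/dx = 0
assert sp.simplify(sp.diff(e, t) + sp.diff(P, x)) == 0
```

Note that $\partial e/\partial t$ alone is not zero for this solution; only the combination with the flux term vanishes, which is exactly the point about energy sloshing along the string.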
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/381641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
Energy-momentum tensor from the variation of action of RNS strings In exercise 4.6 p. 121 of Becker, Becker, Schwarz's book 'String theory and M-theory', they state that under using the variation $\delta_+X=a^+\partial_+X$ and $\delta_+\psi_A=a^+\partial_+\psi_A$ where $A=\pm$, we may identify the components of the energy momentum tensor of the RNS strings from the variation of the action $$\delta_+S=\frac{1}{\pi}\int d^2\sigma \: \delta_+\mathcal{L},$$ where
\begin{align}
\delta_+\mathcal{L}&=\delta_+(2\partial_+X\cdot\partial_-X+i\psi_-\cdot\partial_+\psi_-+i\psi_+\cdot\partial_-\psi_+)\\
&=a^+(-2\partial_-(\partial_+X\cdot\partial_+X)+i\partial_+(\psi_+\cdot\partial_-\psi_+)-i\partial_-(\psi_+\cdot\partial_+\psi_+))\\
&=-2a^+(\partial_-T_{++}+\partial_+T_{-+}),
\end{align}
and similarly using $\delta_-$. So my question is how did the authors go from the first line to the second one, because naively I would think that:
\begin{align}
\delta_+\mathcal{L}=&\delta_+(2\partial_+X\cdot\partial_-X+i\psi_-\cdot\partial_+\psi_-+i\psi_+\cdot\partial_-\psi_+)\\
=&2[\partial_+(\delta_+X)\cdot\partial_-X+\partial_+X\cdot\partial_-(\delta_+X)]+i[(\delta_+\psi_-)\cdot\partial_+\psi_-+\psi_-\cdot\partial_+(\delta_+\psi_-)]\\
&+i[(\delta_+\psi_+)\cdot\partial_-\psi_++\psi_+\cdot\partial_-(\delta_+\psi_+)]\\
=&a^+\partial_+(2\partial_+X\cdot\partial_-X+i\psi_-\cdot\partial_+\psi_-+i\psi_+\cdot\partial_-\psi_+).
\end{align}
Then I would find totally incorrect $T_{-+}$ and $T_{++}$.
| I will just write the variation for the bosonic field; similar logic follows for $\psi$.
Start with the 2nd line of your calculation and substitute the value of $\delta_+ X$. The expression will look like this
$$2 a^+ (\partial_-X) \partial_+ \partial_+ X + 2 a^+ (\partial_+X) \partial_- \partial_+ X $$
Do the integration by parts and throw away boundary terms. After integrating
$$ -2 \partial_+(\partial_-X) \partial_+X -2 \partial_-(\partial_+X) \partial_+X
$$
Using the fact that $\partial_-\partial_+X= \partial_+\partial_-X$, pull $\partial_-$ out of the full expression as a common factor. Hence you will get the correct $T_{++}$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/381802",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Where is humidity? During hot and humid weather, we sweat incessantly due to high humidity. But when we sit under a fan, we feel cold and comfortable. Why do we feel cold and chilled? Why don’t we feel the humidity?
| The moving air produced by the fan causes forced evaporation of the sweat secreted by the sweat glands. It takes energy (heat) to change sweat (liquid water) into water vapor, and that heat energy is taken from the body, thus cooling the body down, which makes one feel more comfortable. This is one of Nature's ways of regulating body temperature.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/381923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Is coherent stimulated emission possible for particles other than photons? By coherent stimulated emission I am referring to any process analogous to stimulated emission in lasers, where one particle interacts with an excited energy state, which leads to a second photon being emitted with the same phase, frequency, and direction as the first. Are there in particles other then photons for which this can happen?
I am curious for what particles it is even theoretically possible. I am not as concerned with whether it has been experimentally confirmed.
I remember reading that it is essential that the particle is a boson which makes some sense, is this true? Could it possibly work for composite bosons like mesons and He-4? Which elementary bosons other than photons could undergo stimulated emission?
| There are "atom lasers", coherent states of propagating atoms that can be emitted from Einstein-Bose condensates. One can quibble about whether it is a laser since the 'L' is for light. In any case, it is experimentally demonstrated. While I have not seen any papers on alpha particles, helium atoms have been used.
Coherent stimulated emission seems to require bosons since it needs a large population inversion: fermions will not crowd into the same energy level, and hence decay from a population inversion will not produce coherence. The atom laser works because the BEC atoms become bosonic.
Are there any other bosons that could lase? I have a feeling it is unlikely to be practical for elementary bosons: W and Z bosons interact weakly (requiring a very dense medium) and quickly decay, gluons are colour confined, and gravitons (besides being hypothetical) interact very weakly with matter. Maybe some mesons may be possible candidates inside the right kind of nuclear matter?
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/382103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Why do gravitational mass and inertial mass appear to be indistinguishable? I have learnt that the heavier an object is (the more gravitational mass it has), the more it resists changes to its motion (the more inertial mass it has).
I can accept this fact but I can't find out the reason behind it. What dynamic, what phenomena could cause this? Does it have something to do with the atomic structure of the object?
| The answer is that more mass is defined to provide more inertia.
Newton noticed that, for any given object, $F\propto a$, that is to say the force on an object and the acceleration of that object are proportional. Whenever we find a proportionality like this, we assign a multiplier to turn that $\propto$ into an $=$. Thus we have $F=ma$. Mass is defined to be the constant of proportionality that converts accelerations into forces.
Once you define mass as such, you can rearrange the equation to $a=\frac{F}{m}$, and that shows how if you push on a more massive object with a specified force, the acceleration is smaller than if you pushed on a less massive object. This is true simply because we defined the idea of "massive" around this equation.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/382345",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 0
} |
Gravitational field strength Can I use $g=GM/r^2$ to calculate the gravitational field strength of a proton, an electron, or any other particle? If not, why? If yes, what would that really mean?
| Usually (as far as I know) the noticeable differences between General Relativity and Newtonian gravity only become apparent on a macroscopic, planetary-or-larger scale. For example the precession of Mercury, light bending, black holes etc...
So for atomic scales, you can use the Newtonian formula, yes.
It tells you the strength of the gravitational field generated by, say, a proton of mass m. So if you place a test mass of mass $m_1$, it will experience a force towards the proton $F = m_1g$. Due to the low masses and the small value of $G$, gravitational effects on atoms are almost always superseded by the electromagnetic interaction - i.e. a proton and an electron will form an EM-bound state before they would form a gravitational bound state (an orbit), or simply scatter from one another.
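As a rough numerical sketch of why the EM interaction dominates (not part of the original answer; constants are rounded CODATA values and the Bohr radius is chosen as a representative atomic separation):

```python
# Compare gravity with the Coulomb force between a proton and an electron
# separated by the Bohr radius. Constants are rounded CODATA values.
G   = 6.674e-11        # m^3 kg^-1 s^-2
k   = 8.988e9          # N m^2 C^-2
m_p = 1.673e-27        # kg
m_e = 9.109e-31        # kg
e   = 1.602e-19        # C
a0  = 5.292e-11        # m, Bohr radius

g_proton = G * m_p / a0**2                 # field strength g = GM/r^2
F_grav   = g_proton * m_e                  # gravitational force on the electron
F_coul   = k * e**2 / a0**2                # Coulomb attraction at the same r

print(f"ratio F_grav/F_coul ~ {F_grav / F_coul:.1e}")   # ~ 4e-40
```

The formula is perfectly usable; the resulting force is simply ~40 orders of magnitude weaker than the electromagnetic one at the same separation.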
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/382622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Time Reversal Operator and Rotations In Sakurai's Modern Quantum Mechanics Chapter 4, in the discussion about the Time Reversal Operator, the following formula is presented
$$\Theta \mathbf{J} \Theta^{-1} = -\mathbf{J} $$
This is a requirement necessary to conserve the canonical commutation relations between the generators of rotations.
Now, when speaking about time reversal in Spin-1/2 systems, we have the following eigenstate of the $\mathbf{S \cdot \hat{n}}$ operator
$$|\mathbf{\hat{n}},+ \rangle = \exp\left( -\frac{i}{\hbar}S_z \alpha \right) \exp\left( -\frac{i}{\hbar}S_y \beta \right) |+ \rangle$$
where $\alpha$ and $\beta$ are the azimuthal and polar angle respectively and $|+\rangle $ is the eigenstate of $S_z$ with eigenvalue $\frac{\hbar}{2}$.
If we now consider the action of the time reversal operator $\Theta$ on the above state, the following equality is presented
$$\Theta |\mathbf{\hat{n}},+ \rangle = \exp\left( -\frac{i}{\hbar}S_z \alpha \right) \exp\left( -\frac{i}{\hbar}S_y \beta \right) \Theta|+ \rangle$$
My question is : How does the $\Theta$ act directly on $|+\rangle$ without also affecting $S_z$ and $S_y$ like this?
$$S_z \rightarrow -S_z $$
$$S_y \rightarrow -S_y $$
which should follow from the first equation I wrote down.
| I am not sure I understand the question. Are you asking why the exponentials do not change as a consequence of $\Theta S_k \Theta^{-1} = -S_k$?
You are forgetting that $\Theta$ is anti linear, so $$\Theta e^{-i a S_k} \Theta^{-1} =e^{\Theta(-i a S_k)\Theta^{-1}} =e^{-(-i) a \Theta S_k\Theta^{-1}}= e^{-i a S_k}\:.$$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/382721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Probability of measuring eigenvalue of non-normalised eigenstate This came up while working on a question about measuring the angular momentum of a particle in a superposition of angular momentum eigenstates:
Given that:
$$\langle\theta,\phi|\psi\rangle \propto \sqrt{2} \cos(\theta) + \sin(\theta)e^{-i\phi} - \sin(\theta)e^{i\phi}$$
What are the possible results and the corresponding probabilities for measurements of $\hat{L}^2$ and $\hat{L}_z$?
$\hat{L}^2$ is simply $2\hbar^2$ as all three terms are eigenstates of $\hat{L}^2$ with eigenvalues $2\hbar^2$
However the three terms are eigenfunctions of $\hat{L}_z$ with different eigenvalues, namely $0$, $\hbar$ and $-\hbar$. Now my question is whether I first have to normalise the eigenfunctions and then take the modulus squared of the coefficients to find the probabilities of measuring the corresponding eigenvalue, or whether it is possible to straight away write down: $$p(L_z=0)=\frac{|\sqrt{2}|^2}{|\sqrt{2}|^2+|1|^2+|-1|^2}$$
So basically my question is:
Given a wave function $|\psi\rangle$ and an operator $\hat{A}$, with eigenvalues $\lambda_i$ and non-normalised eigenfunction $|a_i\rangle$, and: $$|\psi\rangle = \sum_i{c_i|a_i\rangle}$$
Is it still true that the probability of obtaining a measurement $\lambda_i$ is given by $p_i=|c_i|^2$?
| Yes and no. You can just normalize the results with:
$$ \langle \psi | \psi \rangle$$
but you have computed that incorrectly. Remember, the differential solid angle is:
$$ d\Omega = d(\cos{\theta})d\phi, $$
you have used:
$$ d\Omega = d\theta d\phi.$$
I suggest you verify with a table of $l=1$ spherical harmonics.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/382838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Recognizing speech at 1-bit quantisation depth? I found on German Wikipedia an audio example of 1-bit quantisation in which the speech can still be recognized. How is this possible if at 1-bit depth we have just two values, "signal" and "no signal"? Here is the example: https://upload.wikimedia.org/wikipedia/commons/4/43/Ampl1rp.ogg
| A 1-bit quantised signal still contains more than one bit of information. The signal level varies from moment to moment, and this provides extra information.
In the case of speech we tend to recognise the rhythmicity and structure as speech even if we cannot make out individual words. Some formant sounds may be recognisable if they get the signal stream to flip from 0 to 1 and back again at the same frequency: we tend to recognise voice sounds by their peak formant frequencies, so again this helps us recognize the signal as speech.
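A minimal sketch of the mechanism (a toy two-harmonic signal stands in for speech here; it is not the Wikipedia sample): keeping only the sign of each sample preserves the zero-crossing pattern, and hence the frequency information the ear uses.

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs
# Toy "voiced" signal: a 220 Hz fundamental plus a weaker harmonic
x = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 660 * t)

q = np.where(x >= 0, 1.0, -1.0)   # 1-bit quantisation: keep only the sign

# Zero-crossing counts (roughly tracking the dominant frequency)
# survive the quantisation unchanged
zc = lambda s: np.count_nonzero(np.diff(np.signbit(s)))
print(zc(x), zc(q))   # identical counts
```

All amplitude information is destroyed, but the timing of the sign flips, and with it the pitch and rhythm, comes through intact.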
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/382997",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What is so special about the factor $\sqrt{1-{v^2/c^2}}$ in special relativity? I am studying a book about relativistic equations and special relativity, and I keep seeing $\sqrt{1-{v^2/c^2}}$ everywhere. It is not, as with most of the concepts in special relativity, simply a mathematical construct; it is a logical consequence of accepting the experimental fact that the speed of light is the same in every inertial reference frame. Why, then, is this expression so significant?
| 1) At least at low speeds, you expect $x'=x-vt$, just from elementary considerations. ($vt$ is, after all, the distance traveled in time $t$, so a person traveling at speed $v$ will have his origin displaced by the amount $vt$.)
2) If you believe space and time should be treated symmetrically, then you are led to expect something like $t'=t-vx$.
3) So in matrix terms, our first guess is
$$\pmatrix{x'\cr t'}=\pmatrix{1&-v\cr -v&1\cr}\pmatrix{x\cr t}$$
4) But if the transformation matrix is to preserve geometric structure (or, pretty much equivalently, if you want the matrix associated to $-v$ to be the inverse of the matrix associated to $v$) you want its determinant to be $1$, whereas it currently has determinant $\Delta=1-v^2$.
5) So to fix the determinant (while making changes that are negligible when $v$ is small), you multiply the transformation matrix by the appropriate constant, which is $1/\sqrt{\Delta}$. That explains where the constant comes from.
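The rescaling in step 5 can be checked symbolically (units with $c=1$, so $1/\sqrt{\Delta}=1/\sqrt{1-v^2}$ is exactly the factor in question): the rescaled matrix has determinant 1, and boosting by $-v$ undoes boosting by $v$.

```python
import sympy as sp

v = sp.symbols('v', real=True)
gamma = 1 / sp.sqrt(1 - v**2)          # the factor from step 5, with c = 1

L = gamma * sp.Matrix([[1, -v],
                       [-v, 1]])       # rescaled transformation matrix

print(sp.simplify(L.det()))            # -> 1
print(sp.simplify(L * L.subs(v, -v)))  # -> identity matrix
```

So $1/\sqrt{1-v^2}$ is precisely the constant that makes the matrix for $-v$ the inverse of the matrix for $v$.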
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/383290",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
} |