Q | A | meta
---|---|---
When solving problems on linear momentum, when can external forces be neglected? I was recently solving a problem in which one end of a massless string (in vertical orientation) was tied to a block of mass $2m$ and the other end to a ring of mass $m$, which was free to move along a horizontal rod. The block is then given a velocity $v$ (consider that this velocity is not caused by application of an external force).
To calculate the velocity of the ring, we would have to apply momentum conservation. The problem is, momentum conservation would require net external force on the system to be zero, but in the solution I saw, the normal force exerted by the rod on the ring was neglected and so was the force of gravity.
So, when exactly can external forces be neglected in problem-solving?
|
momentum conservation would require net external force on the system to be zero
You are correct - the momentum of a system is conserved when the net external force equals zero, because by Newton's third law of motion all internal forces cancel. This means that there can be external forces; it is just that their vector sum must be zero for momentum to be conserved.
when exactly can external forces be neglected in problem-solving?
In some special cases, when the event happens over a (very) short period of time, such as collisions or explosions, the effect of non-impulsive external forces on the system can be neglected during the event. In these cases the momentum just before and just after the event is not strictly equal, but for all practical purposes it can be treated as conserved.
This follows directly from the impulse-momentum theorem
$$\Delta \vec{p} = \vec{J} \qquad \text{where} \qquad \vec{J} = \int \vec{F}_\text{ext} dt$$
where $\vec{J}$ is the impulse, $\vec{p}$ is the linear momentum, and $\Delta$ denotes change, i.e. final value minus initial value. If the impulse is (very) small compared to the momentum, which happens when either $\vec{F}_\text{ext}$ or the duration $\Delta t$ or both are very small, the change of momentum is approximately zero, $\Delta \vec{p} \approx \vec{0}$, which means that the final momentum equals the initial momentum.
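For a feel of the numbers, here is a minimal Python sketch comparing the impulse of gravity during a brief collision with the momentum scale of the problem (the mass, speed, and contact time are assumed, illustrative values):

```python
# Compare the impulse delivered by gravity during a short collision
# with the momentum scale of the problem (illustrative numbers only).
g = 9.81      # m/s^2, gravitational acceleration
m = 0.5       # kg, mass of the colliding object (assumed)
v = 10.0      # m/s, speed just before the collision (assumed)
dt = 1e-3     # s, duration of the contact (assumed)

p = m * v                  # momentum involved in the collision
J_gravity = m * g * dt     # impulse of the external force (gravity) over dt

print(f"momentum p           = {p:.3f} kg m/s")
print(f"impulse of gravity J = {J_gravity:.5f} kg m/s")
print(f"ratio J/p            = {J_gravity / p:.1e}")   # ~1e-3, i.e. negligible
```

The ratio comes out around $10^{-3}$, which is why treating momentum as conserved across the collision is an excellent approximation.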
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/701392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Different formal definitions of Lorentz Transformations The formal definition for Lorentz Transformation is a matrix $\Lambda$ such that $$\Lambda^\mu_{\ \ \alpha}\Lambda^\nu_{\ \ \beta}\eta_{\mu\nu}=\eta_{\alpha\beta}.$$
In some books I have found a definition that use the transposition: $$(\Lambda^T)\eta\Lambda=\eta.$$
My question is how to link them.
My attempt, so far, is to multiply by the inverse, but I get stuck very soon and I don't know how to reach the second equation. Probably the passages are trivial.
Thanks for any help.
| Kontle's answer contains the main idea, but let me be a bit more specific about your example: all is about matrix notation.
Recall in general linear algebra that, given $n \times n$ matrices $X,A \in \mathbb{R}^{n \times n}$ one can write the $(i,j)$-coefficient of the matrix $B= X^TAX$ as follows:
\begin{equation}
(b_{ij})=\sum_{l=1}^n\left(\sum_{k=1}^n x_{ki}a_{kl}\right)x_{lj}
= \sum_{k,l=1}^nx_{ki}a_{kl}x_{lj},
\end{equation}
To check this, try to write the matrices explicitly and express what you are multiplying in summation notation (remembering that, since at the beginning you have $X^T$, multiplying by the rows of $X^T$ is the same as multiplying by the columns of $X$).
More specifically, in your notation and for a $4 \times 4$ matrix, the $(\alpha,\beta)$-coefficient of
$B= \Lambda^T\eta \Lambda$ reads:
\begin{equation}
(b_{\alpha\beta})=\sum_{\nu=0}^3\left(\sum_{\mu=0}^3 \Lambda_{\mu\alpha}\eta_{\mu\nu}\right)\Lambda_{\nu\beta}
= \sum_{\mu,\nu=0}^3\Lambda_{\mu\alpha}\eta_{\mu\nu}\Lambda_{\nu\beta},
\end{equation}
and now you agree that by standard commutativity of product of real numbers you get: $$= \sum_{\mu,\nu=0}^3\Lambda_{\mu\alpha}\eta_{\mu\nu}\Lambda_{\nu\beta}= \sum_{\mu,\nu=0}^3\Lambda_{\mu\alpha}\Lambda_{\nu\beta}\eta_{\mu\nu}$$
Now, writing it with Einstein's convention you get exactly what you needed:
$$= \sum_{\mu,\nu=0}^3\Lambda_{\mu\alpha}\Lambda_{\nu\beta}\eta_{\mu\nu} = \Lambda^\mu_{\ \ \alpha}\Lambda^\nu_{\ \ \beta}\eta_{\mu\nu}$$
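If it helps to see the two notations agree concretely, here is a small NumPy check; the boost along $x$ with rapidity 0.3 is just an arbitrary example of a Lorentz transformation:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric (either sign convention works here)

# Example Lorentz transformation: boost along x with rapidity 0.3
chi = 0.3
L = np.eye(4)
L[0, 0] = L[1, 1] = np.cosh(chi)
L[0, 1] = L[1, 0] = -np.sinh(chi)

# Matrix form of the defining relation: Lambda^T eta Lambda = eta
print(np.allclose(L.T @ eta @ L, eta))               # True

# Index form: sum_{mu,nu} Lambda[mu,a] Lambda[nu,b] eta[mu,nu] = eta[a,b]
index_form = np.einsum('ma,nb,mn->ab', L, L, eta)
print(np.allclose(index_form, eta))                  # True
```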
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/701547",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Do the components of a force written for a purpose actually exist? On an inclined plane if you put a box, the force of gravity $mg$ is written as sum of two forces $mg\sin\theta$ and $mg\cos\theta$ where $\theta$ is the angle the incline is making with earths surface. Do these forces $mg\sinθ$ and $mg\cosθ$ actually work on the object?
| In the case of no friction, a load cell on the incline surface measures $F_N = mg\cos\theta$. And the box has an acceleration $a = g\sin\theta$, which by Newton's second law implies a force $F_t = mg\sin\theta$.
So, the $2$ forces are real in the sense that they can be measured. Their vector sum happens to be $mg$, which could be verified by placing the same box on a horizontal load-cell surface.
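A quick numerical sanity check of that statement (the mass and angle below are arbitrary example values):

```python
import numpy as np

m, g, theta = 2.0, 9.81, np.radians(30)   # example values

F_N = m * g * np.cos(theta)   # component perpendicular to the incline
F_t = m * g * np.sin(theta)   # component along the incline

# The two components are orthogonal, so their vector sum has magnitude mg
print(F_N, F_t)                      # ~16.99 N and ~9.81 N
print(np.hypot(F_N, F_t), m * g)     # both ~19.62 N
```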
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/701644",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 3
} |
How can they estimate exoplanet radial velocities using Doppler considering spectrograph resolving power? I read that spectrograph resolving powers, the ratio of wavelength uncertainty to wavelength are like 1000 or 10000. Plugging this into the non relativistic Doppler formula gives a velocity uncertainty like 30000 meters per second. So how can they claim one meter per second accuracy? And what about thermal broadening of spectral lines?
| Typical resolving powers for exoplanet-finding spectrographs are 50000-100000, but nevertheless, this still means a resolution element has a FWHM of 3-6 km/s. This is to be compared with the radial velocity amplitudes caused by the planets of anywhere from 100 m/s for close-in hot Jupiters, to less than 1 m/s for Earth-like planets.
Measuring the precision of the shift of a spectral line against a wavelength scale boils down to how well you can estimate the centroid of the line. When you estimate the centroid of a Gaussian, the precision of your answer is not limited to the width of the Gaussian. It can, in principle, be of much higher quality. Roughly speaking, the uncertainty in the Gaussian centroid (or mean) is (to a small numerical factor) something like the FWHM divided by the signal-to-noise ratio of the flux in the Gaussian.
In addition, when measuring the spectral lines of stars there is a further $\sqrt{N}$ gain in precision from measuring $N$ spectral lines, since all the lines are shifted by the same velocity. $N$ can be of order 1000 for a sun-like star in a 100 nm spectral range.
The process is entirely analogous to estimating the mean of some distribution by taking repeated measurements. Whilst the estimated width (standard deviation or FWHM) remains approximately constant as the number of measurements increases, the standard error in the mean shrinks by a factor equal to the square root of the number of measurements.
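Putting rough numbers into these scalings (the FWHM, signal-to-noise ratio, and line count below are assumed, representative values, not the specification of any particular spectrograph):

```python
import numpy as np

fwhm = 3000.0    # m/s, width of one resolution element (R ~ 100000)
snr = 200.0      # per-line signal-to-noise ratio (assumed)
n_lines = 1000   # number of usable spectral lines (assumed)

sigma_one_line = fwhm / snr                      # centroid precision of a single line
sigma_total = sigma_one_line / np.sqrt(n_lines)  # gain from combining N lines

print(f"single line: ~{sigma_one_line:.0f} m/s")
print(f"{n_lines} lines : ~{sigma_total:.2f} m/s")   # of order half a metre per second
```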
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/701868",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Does the Schrödinger equation apply to spinors? I was reading about Larmor precession of the electron in a magnetic field in Griffiths QM when I came across the equation
$$
i\hbar \frac{\partial \mathbf \chi}{\partial t} = \mathbf H \mathbf \chi,
$$
where $\mathbf\chi(t)$ is a 2D vector that represents only the spin state and does not include information of the wave function. The Hamiltonian is
$$
\mathbf H = - \gamma \mathbf B \cdot \mathbf S = - \frac{\gamma B_0 \hbar}{2} \begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix}
$$
for a uniform magnetic field $\mathbf B = B_0 \hat k$. Why should these spinors also obey the Schrödinger equation? The book does not provide any further information as to why this should hold.
| If you want to describe non-relativistic, interacting spins, you need to extend the Schrödinger equation with the Pauli interaction, $H_{spin} = -\gamma {\bf B} \cdot {\bf S}$. This is the simplest of a class of Hamiltonians known as spin Hamiltonians. For non-translating spins, as in your case, you only need the Pauli term.
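As a minimal sketch of what the spinor Schrödinger equation does in this case, the code below evolves $\chi(t)=e^{-iHt/\hbar}\chi(0)$ for the Hamiltonian above (with $\hbar=1$ and arbitrary example values of $\gamma$ and $B_0$) and shows $\langle S_x\rangle$ precessing at the Larmor frequency $\omega=\gamma B_0$:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
gamma, B0 = 1.0, 2.0                             # example values
Sz = hbar / 2 * np.diag([1.0, -1.0])
Sx = hbar / 2 * np.array([[0.0, 1.0], [1.0, 0.0]])
H = -gamma * B0 * Sz                             # H = -gamma B . S for B along z

chi0 = np.array([1.0, 1.0]) / np.sqrt(2)         # spin pointing along +x at t = 0

for t in np.linspace(0.0, 2 * np.pi / (gamma * B0), 5):
    chi = expm(-1j * H * t / hbar) @ chi0        # solution of i hbar dchi/dt = H chi
    Sx_exp = np.real(chi.conj() @ Sx @ chi)
    print(f"t = {t:5.2f}   <Sx> = {Sx_exp:+.3f}")   # follows (hbar/2) cos(gamma B0 t)
```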
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/701978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 2
} |
Why is Avogadro constant used to calculate the number density? My book says:
The number density of particles is $nN/V$, where $n$ is the total amount of molecules in the container of volume $V$ and $N$ is Avogadro's constant.
I can do something with the concentration $n/V$, it tells me how many moles of a particle I have in a certain volume, but why times $N$?
And another thing, where is the difference between number density (according to Wikipedia $n/V$) and molar concentration $n/V$?
| If n is the number of moles then n/V will be the number of moles per volume.
If you want to know the number of molecules per volume, you need to multiply this by Avogadro's constant.
An example:
Suppose you measure a container's volume to be 2 liters, and you know there are 2 moles of gas in this volume. Then you know there is 2 moles / 2 liters = 1 mole per liter.
If you want to know the molecules per liter, you can see it as first converting the 2 moles of gas to the number of molecules in those 2 moles, which is 2N. So in the volume of 2 liters we have 2N molecules.
To get the particle density we then have 2N molecules / 2 liters = N molecules per liter.
Now it doesn't matter whether we compute "2N molecules / 2 liters" or
"N molecules/mole × 1 mole/liter"; they arrive at the same thing.
If that is not clear to you I suggest checking out dimensional analysis.
Hope this helped.
Also, I think number density is usually reserved for gases and molar concentration for solutions.
Number density, as far as I know, usually describes amount of particles per volume.
Molar concentration describes amount of moles per volume.
Indeed they convey a similar concept. There is a difference in that, for a gas, the number density literally describes how many gas particles are in a given volume.
The molar concentration describes how many moles of a substance are dissolved in a given volume of solution.
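For completeness, the same arithmetic as a short Python sketch (using the example numbers above):

```python
from scipy.constants import Avogadro as N_A

n_moles, V_liters = 2.0, 2.0
molar_concentration = n_moles / V_liters      # mol per liter
number_density = molar_concentration * N_A    # molecules per liter

print(molar_concentration, number_density)    # 1.0 mol/L and ~6.022e23 molecules/L
```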
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/702263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Measuring radius vs diameter for a circular ring? Consider an experiment where I have to measure the radius of a circular ring diffraction (or interference) pattern (think Newton's rings).
It is advised to measure the diameter and then divide it by two instead of directly measuring the radius.
One reason for this could be due to the fact that it is difficult to pinpoint the center of the circle.
But is it better to measure the diameter if I know the position of the center with high accuracy?
Consider that I know the position of the center with as much confidence as I know the positions of the points on the circumference. If my measuring device (let's say a Vernier caliper) adds an error of 0.02 mm, then if the true value of the radius is 5 mm, I'll measure $5\,\mathrm{mm}\pm0.02\,\mathrm{mm}$, whereas if I had measured the diameter I would have measured the value as $10\,\mathrm{mm}\pm0.02\,\mathrm{mm}$, and dividing by two would give me $5\,\mathrm{mm}\pm0.01\,\mathrm{mm}$. So it seems (if I am not doing something wrong) that measuring the diameter is the better choice even if I know the center properly.
Is this reasoning correct?
| Yes, your reasoning is correct.
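A quick Monte-Carlo sketch of the same point, using the example numbers from the question (instrument scatter of 0.02 mm, true radius 5 mm):

```python
import numpy as np

rng = np.random.default_rng(0)
true_r, sigma = 5.0, 0.02        # mm
n = 100_000

# Method 1: measure the radius directly (one reading with error sigma)
r_direct = true_r + sigma * rng.standard_normal(n)

# Method 2: measure the diameter (one reading with error sigma) and halve it
r_from_d = (2 * true_r + sigma * rng.standard_normal(n)) / 2

print(r_direct.std())    # ~0.020 mm
print(r_from_d.std())    # ~0.010 mm -- half the uncertainty, as argued above
```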
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/702390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Single gravitational plane wave or their interference can carry spin angular momentum? I would be grateful if anybody could tell me whether, if I had one gravitational wave in the form of a plane wave, it would still carry spin angular momentum. We know that gravitational waves are mostly the interference between many gravitational waves from different sources, like binary black holes. I think they carry spin and angular momentum due to conservation laws, but I do not know: can a single GW carry spin and angular momentum?
| I'm guessing that any angular momentum carried by a (transverse) wave would be associated with a circular (or elliptical) polarization.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/702904",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Warm air syphon to cool down greenhouse I'm thinking about a very energy efficient way of controlling temperature inside a greenhouse when it's too hot.
The goal is to use the syphon effect in order to draw hot air from the top of the greenhouse to the outside, perhaps using a cooler fan as a trigger for this air motion. The supposed advantage of having that fan is to be able to invert rotation and block/revert the airflow (if temperatures are still within working range, of course)
Would that work at all, similar to how water works?
Does the syphon on the outside have to be higher than the inside leg to increase draft? As in, when it's full of rising warm air it will create negative pressure on the inside leg and thus suck in more hot air, which will continue this cycle?
*edit I was thinking about using a 200mm syphon pipe or wider if necessary; the fan location is illustrative;
The pipe goes down to the ground so existing structure does not need any modifications.
How much flow do I need in order to actually cool the greenhouse significantly?
| This might work, but there's another way that does not need a fan- called the chimney effect, which will work for a pipe of diameter ~at least 4 to 6 inches.
Imagine a tall black pipe standing vertically in the sun, open at both ends. The sun's rays make the air inside get hotter than the air outside and the inside air becomes buoyant, and starts to rise upwards. As it does, it draws in air from the bottom of the pipe, which then gets heated by the walls of the hot pipe, becomes buoyant, rises, etc., etc.
Note that for any such scheme to work, you need a fairly tall chimney, and the chimney diameter has to be no smaller than about 3 inches so that air friction does not dominate the dynamics.
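To get a feel for the flow asked about in the question, here is a rough estimate using the standard natural-draft (stack-effect) formula $Q \approx C_d A \sqrt{2 g H\,\Delta T/T_\text{in}}$. The pipe diameter, stack height, temperatures and discharge coefficient below are assumptions, not measurements of your greenhouse:

```python
import numpy as np

# Assumed values -- adjust to your actual setup
d = 0.20        # m, pipe diameter (the 200 mm pipe mentioned above)
H = 2.5         # m, effective height of the warm-air column (assumed)
T_in = 308.0    # K, air temperature at the top of the greenhouse (~35 C, assumed)
T_out = 298.0   # K, outside air temperature (~25 C, assumed)
Cd = 0.65       # discharge coefficient, a typical rough value

A = np.pi * d**2 / 4
Q = Cd * A * np.sqrt(2 * 9.81 * H * (T_in - T_out) / T_in)   # m^3/s, stack-effect draft

print(f"cross-section A = {A*1e4:.0f} cm^2")
print(f"volume flow Q   = {Q*1000:.0f} L/s  (~{Q*3600:.0f} m^3/h)")
```

Comparing $Q$ with the air volume of the greenhouse tells you how many air changes per hour a single pipe would give, which is the number that decides whether the cooling is significant.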
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/703121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is angular momentum just a convenience? I'm wondering whether angular momentum is just a convenience that I could hypothetically solve any mechanics problems without ever using the concept of angular momentum.
I came up with this question when I saw a problem in my physics textbook today. In the problem, a puck with known velocity hits a lying stick. The puck continues without being deflected, and the stick starts both linear and angular motion. There are three unknowns: velocity of puck and stick after collision, and the angular speed of the stick. So, we need three equations: conservation of linear momentum, kinetic energy, and angular momentum.
So, for instance, is it possible to solve this problem without using angular momentum?
Also, how would a physics simulator approach this problem?
| Another small point, implicit in the other answers: it's not exactly that "angular momentum is essential to solve problems", but that it is a convenient concept by which to understand things.
Slightly fancier: neither Lagrangian nor Hamiltonian classical mechanics immediately give "formulas to solve problems", but, rather, amount to discoveries of concepts that apply broadly. In contrast, Newtonian mechanics superficially seems "better" (to a novice?), because it seems to more directly address things. But that turns out not to be the highest virtue. :)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/703268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 11,
"answer_id": 4
} |
What does it really mean to say that a 3-body problem is not solvable? What does it really mean to say that a three-body problem (the Sun, the earth, and the moon) is not solvable? Why is it not possible to solve the differential equations on a computer with adequate initial conditions? What's the real issue here?
| Quoting from wikipedia
The three-body problem is a special case of the n-body problem. Unlike two-body problems, no general closed-form solution exists,[1] as the resulting dynamical system is chaotic for most initial conditions, and numerical methods are generally required.
Of course, you can get a numerical solution for given initial conditions and masses. (Although, as Jon Custer points out, getting accurate numerical solutions is very difficult).
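To illustrate both points - a numerical solution is easy to produce, yet nearby initial conditions diverge - here is a small sketch that integrates a planar three-body system twice with a tiny perturbation. The unit masses, $G=1$, and the particular initial conditions are arbitrary choices made only for this demonstration:

```python
import numpy as np
from scipy.integrate import solve_ivp

G, m = 1.0, np.ones(3)   # units with G = 1, three equal masses

def rhs(t, y):
    """y = [x1, y1, x2, y2, x3, y3, vx1, vy1, ...]; planar Newtonian gravity."""
    r = y[:6].reshape(3, 2)
    a = np.zeros_like(r)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = r[j] - r[i]
                a[i] += G * m[j] * d / np.linalg.norm(d) ** 3
    return np.concatenate([y[6:], a.ravel()])

# Arbitrary bound-ish initial conditions (positions then velocities)
y0 = np.array([-1.0, 0.0,  1.0, 0.0,  0.0, 0.5,
                0.0, -0.4, 0.0, 0.4,  0.3, 0.0])
y0b = y0.copy()
y0b[0] += 1e-9                         # perturb one coordinate by a billionth

kw = dict(rtol=1e-10, atol=1e-10, dense_output=True)
sol_a = solve_ivp(rhs, (0, 40), y0,  **kw)
sol_b = solve_ivp(rhs, (0, 40), y0b, **kw)

for t in (5, 10, 20, 40):
    sep = np.linalg.norm(sol_a.sol(t)[:6] - sol_b.sol(t)[:6])
    print(f"t = {t:3d}   separation between the two runs = {sep:.2e}")
# For generic (chaotic) initial data the separation grows rapidly with time:
# each run is a perfectly good numerical solution for its own initial data,
# but tiny uncertainties are amplified, which is the practical content of
# "no general closed-form solution / chaotic".
```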
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/703420",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Unitary Transformation Taking a 4$\pi$ Periodic Wave Function to 2$\pi$ Periodic Wave Function I am reading the following paper, which discusses Majorana fermions in Josephson junction arrays. Initially, the paper starts with a model such that the wavefunctions are $4\pi$ periodic. These satisfy the following relationship (see equation 5 of the paper too):
$$
\Psi(\phi + 2\pi) = (-1)^{(1 - \hat{P})/2} \Psi(\phi)
$$
I understand why these wavefunctions are $4\pi$ periodic.
Now, equation A.1. of the paper discusses what happens when one performs the unitary transformation on the Hamiltonian:
$$
H' = \Omega^\dagger H \Omega \text{ where } \Omega = \exp[i(1 - \hat{P}) \phi_i/4] \text{ and } \Psi'(\phi) = \Omega^\dagger \Psi(\phi)
$$
The claim is that the transformed wavefunction is $2\pi$ periodic. I do not see why:
$$
\Psi'(\phi + 2\pi) = \Omega^\dagger \Psi(\phi + 2\pi) = \Omega^\dagger(-1)^{(1 - \hat{P})/2} \Psi(\phi)
$$
How is the last term equal to $\Psi'(\phi)$?
| First note (I'm assuming there are no operator ordering issues)
\begin{eqnarray}
\Omega(\phi+2\pi) &=& \Omega(\phi) e^{i(1-\hat{P})\pi/2} = \Omega(\phi)\left(e^{i\pi}\right)^{(1-\hat{P})/2} = \Omega(\phi) (-1)^{(1-\hat{P})/2} \\
\Omega^\dagger(\phi+2\pi) &=& \Omega^\dagger(\phi) \left((-1)^{(1-\hat{P})/2}\right)^\dagger
\end{eqnarray}
So
\begin{eqnarray}
\Psi'(\phi+2\pi) &=& \Omega^\dagger(\phi+2\pi) \Psi(\phi+2\pi) \\
&=& \left[\Omega^\dagger(\phi) \left((-1)^{(1-\hat{P})/2}\right)^\dagger\right]\left[(-1)^{(1-\hat{P})/2} \Psi(\phi)\right] \\
&=& \left|(-1)^{(1-\hat{P})/2}\right|^2 \Omega^\dagger(\phi) \Psi(\phi) \\
&=& \Psi'(\phi)
\end{eqnarray}
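If it helps, the identity used in the first line can be checked numerically with a 2×2 toy model in which $\hat P=\mathrm{diag}(1,-1)$; this is only a stand-in for an operator with eigenvalues $\pm1$, not the operator of the paper:

```python
import numpy as np
from scipy.linalg import expm

P = np.diag([1.0, -1.0])     # toy operator with eigenvalues +1 and -1
I = np.eye(2)

def Omega(phi):
    return expm(1j * (I - P) * phi / 4)

# (-1)^{(1 - P)/2} evaluated on the eigenbasis of P
phase = np.diag([(-1.0) ** ((1 - p) / 2) for p in (1, -1)])

phi = 0.7   # arbitrary test value
print(np.allclose(Omega(phi + 2 * np.pi), Omega(phi) @ phase))            # True
print(np.allclose(Omega(phi + 2 * np.pi).conj().T,
                  Omega(phi).conj().T @ phase.conj().T))                  # True
```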
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/703532",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How can light produce electric and magnetic field when there are no accelerating charged particles? If we see light as a wave, especially in vaccum, there is nothing there, no particles, yet light has an electric and magnetic field. How can this be possible?
|
there is nothing there, no particles,
There is no particle there, but there is a field. A charge at rest has an associated electric field everywhere around it in space, getting smaller with the inverse of the distance squared. If we think of all existing charges, space is full of fields.
If one of the charges is moved to another position, this field changes. The EM wave is the pattern of field change that propagates through space at the speed of light.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/703668",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |
What causes this strange noise in a pair of walkie-talkies? Let us suppose that Bob and Alice both have walkie-talkies. They are both 3m apart from one another. Alice pushes her walkie-talkie to speak but instead of speaking, she starts walking toward Bob. Both Alice's and Bob's walkie talkies are facing each other in the same general direction (speakers are in the same direction). Alice suddenly reaches a point along her path to which Bob's Walkie talkie creates a loud pitch screeching sound, Alice moves, even more, closer and the sound escalates in pitch, it becomes unbearable and she unclicks the speak button.
What is the cause of this high pitched sound coming from Bob's walkie talkie? Why only at a certain distance from Bob does the sound start?
Summary:
*
*Alice is 3 m from Bob when she presses the speak button and walks towards Bob, at a certain point a loud screeching sound can be heard from Bob's walkie talkie.
*She moves closer and the sound gets louder and higher in pitch.
*The walkies talkie speakers are facing each other the whole time.
This was a problem I came up with just out of interest and something I experienced myself.
| It's called feedback. Here is what happens:
When Alice presses TRANSMIT, it turns on the microphone in her radio and hence begins to transmit any noise that hits the mic. With Bob's radio on RECEIVE, its speaker is turned on and it plays out anything that it receives at that moment- which in this case is the audio signal transmitted from the mic in Alice's radio, which responds to any sound source near Alice.
As Alice approaches Bob, her mic begins to detect noises coming from Bob's speaker, which her radio then transmits to Bob's radio, which plays it through Bob's speaker, which is picked up by the mic in Alice's radio, etc., etc. and the signal gets looped around and around and around, getting stronger all the while as Alice gets closer to Bob.
The critical case for best looping happens when a sound in the loop has a wavelength in air approximately equal to the distance between the two radios, which at 10 feet is about 100Hz, at 1 foot it is 1000Hz and at 1 inch it is about 12,000Hz, so the pitch of the feedback will go higher and higher as the radios approach one another.
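The 'wavelength of the loop equals the separation' estimate in the last paragraph is a one-line calculation (speed of sound taken as roughly 343 m/s):

```python
v_sound = 343.0   # m/s, speed of sound in air at about 20 C

for d_feet in (10.0, 1.0, 1.0 / 12.0):    # 10 ft, 1 ft, 1 inch
    d = d_feet * 0.3048                    # metres
    f = v_sound / d                        # frequency whose wavelength equals d
    print(f"{d_feet:6.2f} ft  ->  ~{f:7.0f} Hz")
# ~113 Hz, ~1125 Hz, ~13500 Hz: the squeal rises in pitch as the radios approach.
```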
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/703989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 1
} |
Why can I write $\frac{d}{dt}=\frac{d}{dt'}\frac{dt'}{dt}+\frac{d}{dx'}\frac{dx'}{dt}$? I’m dealing with a Lorentz invariance problem, and in one of the solutions I’ve seen to prove the wave equation the term above was used. However I don’t really understand why it can be written that way. Could someone provide an explanation?
| For a function of two variables $\displaystyle f=f( x,y)$ the total differential is given by
\begin{equation*}
df=\frac{\partial f}{\partial x} dx+\frac{\partial f}{\partial y} dy.
\end{equation*}
The function $\displaystyle f$ depends explicitly only on $\displaystyle x$ and $\displaystyle y$. It can depend on more variables as well, but that dependence has to go through $\displaystyle x$ and $\displaystyle y$. If $\displaystyle x$ and $\displaystyle y$ depend on a variable $\displaystyle t$, we can find the derivative of $\displaystyle f$ with respect to $\displaystyle t$ by dividing the total differential by $\displaystyle dt$:
\begin{equation*}
\frac{df}{dt} =\frac{\partial f}{\partial x}\frac{dx}{dt} +\frac{\partial f}{\partial y}\frac{dy}{dt}
\end{equation*}
We can use this to define a differential operator that works on an arbitrary function of $\displaystyle x$ and $\displaystyle y$:
\begin{equation*}
\frac{d}{dt} =\frac{dx}{dt}\frac{\partial }{\partial x} +\frac{dy}{dt}\frac{\partial }{\partial y}
\end{equation*}
Let me know if there is something I can clear up.
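A small SymPy check of this chain rule for one concrete choice of $f$, $x(t)$ and $y(t)$ (the particular functions are arbitrary examples):

```python
import sympy as sp

t, u, v = sp.symbols('t u v')
x = sp.cos(t)              # example x(t)
y = t**2                   # example y(t)
F = u**2 * sp.sin(v)       # example f, written in terms of placeholders u, v

# Left-hand side: differentiate f(x(t), y(t)) directly
lhs = sp.diff(F.subs({u: x, v: y}), t)

# Right-hand side: (df/dx) dx/dt + (df/dy) dy/dt
rhs = (sp.diff(F, u).subs({u: x, v: y}) * sp.diff(x, t)
       + sp.diff(F, v).subs({u: x, v: y}) * sp.diff(y, t))

print(sp.simplify(lhs - rhs))   # 0
```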
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/704278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why is torque defined as $\vec{r} \times F$? Here I cannot convince myself that it is about units, because saying that torque is defined to be in units of newton-metres is just a reiteration of the law stated above. Why was it not $r^2 \times F$ or $r^3 \times F$ or $r^2 \times F^2$, etc.? The argument "in our experience how much something rotates depends on the lever length and the force applied" is really insufficient. Can someone outline a more rigorous proof or motivation?
| Consider a point particle of mass $m$ with velocity $\vec v$. The particle is located at some position $\vec r$ with respect to the origin $O$.
I will start with the angular momentum calculated about $O$. The angular momentum is $\vec L = I\vec \omega$, where $I=mr^2$.
Since we know $v=\omega r$, you can work out that $\vec \omega = \dfrac{\vec r\times \vec v}{r^2}$.
Next, we can substitute that into our angular momentum to get $$\vec L = m \vec r\times \vec v.$$
Define torque as the time derivative of angular momentum, and we have that $$\vec \tau \equiv \dfrac {d\vec L}{dt}=m\vec r\times \dfrac{d\vec v}{dt},$$ where the $\dfrac{d\vec r}{dt}\times \vec v$ term drops out because $\vec v\times\vec v=\vec 0$.
Since $\vec F=m\vec a$, we then have $$\boxed{\vec \tau =\vec r \times \vec F}$$
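For completeness, a SymPy sketch confirming that the $\dot{\vec r}\times\vec v$ term really drops out, i.e. $\frac{d}{dt}(m\,\vec r\times\vec v)=m\,\vec r\times\vec a$ for an arbitrary trajectory:

```python
import sympy as sp

t, m = sp.symbols('t m')
x, y, z = (sp.Function(n)(t) for n in 'xyz')

r = sp.Matrix([x, y, z])      # arbitrary trajectory r(t)
v = r.diff(t)
a = v.diff(t)

L = m * r.cross(v)            # angular momentum of the point particle
torque = m * r.cross(a)       # m r x a = r x F

print(sp.simplify(L.diff(t) - torque))   # zero vector, since v x v = 0
```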
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/705214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 2
} |
Why doesn’t horizon distance move exactly proportional to the height of the observer? For instance if someone is 8 inches above the surface of the Earth, they can see approximately 1 mile to the horizon. However, if someone is viewing the horizon at an eye level of 5’5 they can only see about 3 miles out. If the height of the observer increases by a factor of about 8, from 8 inches to 65 inches, why does the distance they can see only increase by a factor of 3?(from 1 mile to 3 miles)
| Consider the following image showing the earth (with radius $R$)
and an observer at height $h$ above the ground.
The distance from the observer to the horizon is $s$.
The theorem of Pythagoras applied to the right triangle gives
$$R^2+s^2=(R+h)^2$$
With a little bit of algebra we get
$$R^2+s^2=R^2+2Rh+h^2$$
$$s^2=2Rh+h^2$$
Because $h$ is much smaller than $R$
we can neglect the $h^2$ on the right side.
$$s^2 \approx 2Rh$$
$$s \approx \sqrt{2Rh}$$
Now you see that $s$ is not proportional to $h$,
but proportional to $\sqrt{h}$.
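Plugging the two heights from the question into $s\approx\sqrt{2Rh}$ (Earth radius taken as 6371 km) reproduces the quoted numbers and the factor of roughly 3:

```python
import numpy as np

R = 6371e3                                                    # m, mean Earth radius
heights = {"8 inches": 8 * 0.0254, "65 inches": 65 * 0.0254}  # in metres

for label, h in heights.items():
    s = np.sqrt(2 * R * h)
    print(f"{label:>9s}: horizon at ~{s / 1609.34:.2f} miles")

print("ratio of distances:", np.sqrt(65 / 8))   # ~2.85, i.e. about 3, not 8
```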
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/705354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Is the speed of signal transport via electricity as fast as light? Let us assume a time synchronization system that comprises a sender and a receiver. The sender generates and sends an encoded signal which presents the current time to the receiver periodically, and the receiver calibrates its clock according to this signal. Is the speed of signal transport via electricity as fast as light?
If it is, does it mean no matter which media we use, copper or fiber, even air(WiFi), the time lag between the sender and the receiver is identical theoretically(ignore interference)?
| Electrical signals propagate as electromagnetic waves. If conductors are present, the waves occupy the space around the conductors. The geometry of the conductors is designed to guide the wave.
The nature of the conductor guiding the wave (copper, aluminum, gold, ...) has little influence. The medium in which the wave propagates (polyethylene, PTFE, air, ...) matters: the speed of light in material media is reduced.
The geometry of the conductor matters: waves on straight vacuum-insulated coaxial or balanced lines move at the vacuum speed of light. Other geometries slow the waves down.
Fibers are, of course, material, and therefore the speed of light is slower in them. Electromagnetic modes in fibers are also somewhat slower than free-space modes.
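For a rough idea of the size of the effect, the guided-wave speed in a line or fiber filled with a dielectric of relative permittivity $\varepsilon_r$ is about $c/\sqrt{\varepsilon_r}$. The permittivities below are typical textbook values used only as examples:

```python
import numpy as np

c = 299_792_458.0   # m/s, vacuum speed of light

media = {
    "vacuum / dry air": 1.0006,
    "solid polyethylene (typical coax dielectric)": 2.25,
    "silica fiber (n ~ 1.45, so eps_r ~ n^2)": 1.45**2,
}

for name, eps_r in media.items():
    v = c / np.sqrt(eps_r)
    print(f"{name:45s} v ~ {v/1e8:.2f}e8 m/s  ({v/c:.2f} c)")
```

So copper coax, fiber, and free-space radio generally introduce slightly different propagation delays over the same distance; the lag is identical only in the idealized case where the wave effectively travels at $c$.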
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/705462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What frequency of cord shaking maintains the same vertical motion for a point on the cord after increasing the wave speed on the cord? I'm studying for my upcoming AP Physics 1 exam but can't figure out this problem
A student shakes a horizontally-stretched cord, creating waves. The graph above shows the vertical position $y$ as a function of time $t$ for a point on the cord. The student then tightens the cord so that waves on it will travel faster than before. How should the student now shake the cord to make the graph of $y$ versus $t$ for the point look the same as above?
(A) With fewer shakes per second than before
(B) With the same number of shakes per second as before
(C) With more shakes per second than before
(D) The answer cannot be determined without knowing the wavelength of the waves.
My intuition would tell me that increasing the speed of the waves would cause the point to oscillate at a faster rate vertically, thus fewer shakes per second than before are needed to maintain the same frequency for the particle's oscillation. However, the correct answer is B, so I really need a thorough explanation as to why the answer is B.
| Bottom line intuitive answer: Changing the propagation speed affects the relationship between frequency and wavelength. The wave moves faster, but is correspondingly longer, and those two cancel out, leaving the frequency the same at any point on the cord.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/705865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 3
} |
Is it generally accepted that Field Aligned Currents are caused by Force-Free Fields? I am currently an undergraduate working on a project about FAC (Birkeland Currents) and it seems that most of the sources on the subject are very technical and hard for me to read (particularly because I am not at all familiar with plasma physics). My understanding is that FACs are caused by a force-free magnetic field, but it is not at all clear to me what exactly causes them.
I looked into Peratt's Physics of the Plasma Universe, Donald Scott's paper about a force-free FAC model (not too sure Scott is a good source tbh), and D. Southwood and M. Kivelson's paper "An approximate Description of Field-Aligned Currents in a Planetary Magnetic Field".
|
Is it generally accepted that Field Aligned Currents are caused by Force-Free Fields?
In a plasma, a force-free field is one that satisfies $\mathbf{j} \times \mathbf{B} = 0$, where $\mathbf{j}$ is the electric current density and $\mathbf{B}$ is the magnetic field. One way to guarantee that the system is force-free would be to only allow $\mathbf{j}$ to be parallel to $\mathbf{B}$, i.e., field-aligned.
My understanding is that FACs are caused by a Force-Free magnetic field but it is not at all clear to me what exactly causes them.
No, the field-aligned currents (FACs) are not caused by the force-free magnetic field, at least not in the sense I think you are implying. A FAC corresponds to a force-free configuration of the magnetic field but that doesn't mean it causes itself (see the circular reasoning there if it did?). FACs can be caused by all sorts of things from just the simple Lorentz force making it easier to flow along $\mathbf{B}$ than across it, to large-scale kinetic Alfven waves differentially accelerating particles along $\mathbf{B}$ (e.g., electrons can be accelerated better than ions in some cases resulting in a differential flow between oppositely charged particles, i.e., a current).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/706063",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Two touching surfaces transmitting light: Name of effect When two surfaces are sufficiently close to each other, light travels through the remaining gap as if it did not exist. Effects like total internal reflection no longer occur.
If you look at this candle picture, you can see where the wax "touches" the glass and where it doesn't.
https://www.amazon.de/-/en/Bolsius-103422531800-Starlight-Candle-Transparent/dp/B009S95N1A
I know this physical effect has a strange name but I can't seem to find it by Googling it. What's the name?
| It sounds like you are describing an evanescent wave.
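The 'sufficiently close' part can be made quantitative: in the gap the evanescent field decays roughly as $e^{-\kappa d}$ with $\kappa=\frac{2\pi}{\lambda}\sqrt{n_1^2\sin^2\theta-n_2^2}$, so the coupling (frustrated total internal reflection) dies off over a fraction of a wavelength. A rough sketch, assuming a glass/air interface and a 45° angle of incidence:

```python
import numpy as np

lam = 600e-9             # m, vacuum wavelength (assumed, visible light)
n1, n2 = 1.5, 1.0        # glass and air
theta = np.radians(45)   # beyond the ~41.8 degree critical angle for glass/air

kappa = 2 * np.pi / lam * np.sqrt(n1**2 * np.sin(theta)**2 - n2**2)

for d_nm in (0, 50, 100, 200, 400, 800):
    decay = np.exp(-kappa * d_nm * 1e-9)   # relative field amplitude across the gap
    print(f"gap = {d_nm:4d} nm   relative field ~ {decay:.3f}")
# Within roughly a wavelength the coupling becomes negligible, which is why only
# the spots where the wax optically "touches" the glass transmit light.
```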
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/706186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Confusion with the variational operator $\delta$ and finding variations I have recently started studying String Theory and this notion of variations has come up. Suppose that we have a Lagrangian $L$ such that the action of this Lagrangian is just $$S=\int dt L.$$ The variation of our action $\delta S$ is just $$\delta S=\int dt \delta L.$$ I have read on other posts that the variation is defined to be $$\delta f=\sum_i \frac{\partial f}{\partial x^i}\delta x^i,$$ which seems like an easy enough definition. Having this definition in mind, I proceeded to come across an action $$S=\int dt \frac12m \dot X^2-V(X(t))$$ which implies our Lagrangian is $L(t,X, \dot X)$, which makes our first variation follow as $$\delta S=m \int dt\frac12(2 \dot X)\delta \dot X-\int dt \frac{\partial V}{\partial X} \delta X$$ $$=-\int dt \left(m \ddot X+\frac{\partial V}{\partial X}\right)\delta X.$$ My question is, did that variation of the action follow the definition listed above? That is, $$\delta S=\int dt\frac{\partial L}{\partial t} \delta t+\frac{\partial L}{\partial X} \delta X+\frac{\partial L}{\partial \dot X}\delta \dot X,$$ where the $$\frac{\partial L}{\partial t} \delta t$$ term vanishes because there is no $t$ variable.
| If the Lagrangian only depends on time through $X$ or $\dot{X}$, then we say that the Lagrangian has implicit but not explicit time dependence. So in your example, we would write
\begin{equation}
L(X, \dot{X}) = \frac{1}{2} m \dot{X}^2 - V(X)
\end{equation}
even though $X=X(t)$ depends on time. Given this Lagrangian, your variation is completely fine; the equations of motion are
\begin{equation}
m \ddot{X} + V'(X) =0
\end{equation}
In order to have explicit time dependence, you would need to have time appear explicitly, not simply through $X$ or $\dot{X}$. For example:
\begin{equation}
L(X, \dot{X}, t) = \frac{1}{2} m \dot{X}^2 - V(X) + J(t) X
\end{equation}
for some function of time $J(t)$. In this case, the equation of motion would be
\begin{equation}
m \ddot{X} + V'(X) = J(t)
\end{equation}
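SymPy can carry out this variation mechanically. A sketch for a concrete choice of potential, $V(X)=\tfrac12 kX^2$, together with the explicit source term $J(t)X$ (the harmonic potential is just an example chosen to keep the output readable):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t, m, k = sp.symbols('t m k')
X = sp.Function('X')(t)
J = sp.Function('J')(t)

# Lagrangian with implicit time dependence through X plus explicit dependence via J(t)
L = sp.Rational(1, 2) * m * X.diff(t)**2 - sp.Rational(1, 2) * k * X**2 + J * X

eqs = euler_equations(L, [X], [t])
print(eqs)   # equivalent to  m X'' + k X = J(t), the equation of motion quoted above
```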
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/706688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
In Particle Physics what does the Rest Mass notation: 95$^{+9}_{−3}$ MeV/c$^2$ mean? On the Wikipedia page for the Strange Quark, I came across the following notation for defining its mass:
95$^{+9}_{−3}$ MeV/c$^2$
Following the reference link brings me to this page, which shows a range of published values for the Strange Quark rest mass. Does that mean the $^{+9}_{−3}$ format indicates the value range for the rest mass?
| Yes - the superscript and subscript are the asymmetric upper and lower uncertainties on the central value. This is the way the evaluation of the error for the quark masses is given by the Particle Data Group; see page 10 for the strange quark in the link.
In the corresponding figure for the up quark there is the clarification:
Values above of weighted average, error, and scale factor are based upon the data in this ideogram only. They are not necessarily the same as our 'best' values, obtained from a least-squares constrained fit utilizing measurements of other (related) quantities as additional information.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/706932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Gauss' law in the presence of surface charges Assume $V$ is a volume such that $\rho=0$ in $V$ where $\rho$ is the charge density. Assume further that we have a surface charge density $\sigma$ on the surface $S$ enclosing $V$ such that the total charge $Q_S$ on the surface is $\neq 0$. If we assume that $S$ is an equipotential surface (a conductor), we have the formula
$$\vec{E} =\frac{\sigma}{\epsilon_0}\vec{n} \ \ \text{on }S$$
where $\vec{E}$ is the electric field and $\vec{n}$ the unit normal.
But now, if I apply Gauss' law to this situation, I get this:
$$0=\frac{1}{\epsilon_0}\int_V \rho d^3x=\int_{S} (\vec{E} \cdot \vec{n}) da=\int_S \frac{\sigma}{\epsilon_0} da=\frac{Q_S}{\epsilon_0}\neq 0$$
What is the mistake in this reasoning or overall setting?
| There are two ways of explaining this. The first is that if you're considering a charge density which is concentrated on a surface, then the electric field $\mathbf{E}$ does not satisfy the hypotheses of Gauss' divergence theorem (being continuously differentiable). Indeed, if you assume $\mathbf{E}$ is continuously differentiable, then by Maxwell, $\nabla\cdot\mathbf{E}=\frac{\rho}{\epsilon_0}$ everywhere; since the divergence of a $C^1$ field is continuous, $\rho$ is continuous, so if it vanishes away from the surface, then by continuity it must vanish on the surface as well, i.e. there are no surface charges. Therefore, you cannot invoke the theorem, so your second equality is not justified.
Alternatively, if you insist on pushing the applicability of the divergence theorem beyond the classical hypotheses, then you have to treat things distributionally. So, you can't say $\rho=0$ and thus the first (left) integral is zero. You have to say $\rho=0$ away from the surface, but at the surface, the density is modelled by some Dirac-delta, which therefore gives a non-zero contribution.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/707343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Wouldn't the cosmic background radiation (CMB) produce drag and thus create a preferential inertial frame? Because the CMB is everywhere and is isotropic, if an object had a certain velocity, it could experience a pressure differential produced by the CMB, which would produce drag until it stopped with respect to the CMB.
However, wouldn't this mean that there is a 'universal' reference frame created by the CMB? Wouldn't this be going against special relativity assumptions?
| One thing to note is that the CMB looks different from different locations (and at different times).
So while it provides a local frame of reference for every point in space-time, that frame is not constant over larger amounts of space, and it's likely that observers elsewhere (millions of light years away) observe it differently.
So things are still relative.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/707452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 5,
"answer_id": 2
} |
The entropy given by stefan Boltzmann's law looks remarkably similar to the volume of the sphere; $S(T)=\frac{4}{3}\sigma T^3$ If I am not mistaken the entropy for a blackbody per unit area is given by:
$$S(T)=\frac{4}{3}\sigma T^3.$$
The volume of a sphere is given by:
$$ V(r) =\frac{4}{3}\pi r^3. $$
Is this coincidental? I can't really imagine a hypothetical sphere with volume 'entropy' and radius temperature. It could be that I am misunderstanding the formula.
Where $\sigma$ is
$$ \sigma = \frac{2\pi^5k_{\rm B}^4}{15h^3c^2} = \frac{\pi^2k_{\rm B}^4}{60\hbar^3c^2}\,,$$
| It's a coincidence, as the lack of $\pi$ indicates. The entropy per surface of a blackbody in $D$-dimensional space is $\frac{D+1}{D}\sigma T^D$. (You can deduce it e.g. by generalizing this.) By contrast, the unit $D$-ball has volume $\frac{\pi^{D/2}}{\Gamma(D/2+1)}$, which decays superexponentially at large $D$.
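For the curious, the two $D$-dependences quoted above can be tabulated side by side:

```python
import numpy as np
from scipy.special import gamma

for D in (1, 2, 3, 5, 10, 20, 50):
    ball_volume = np.pi ** (D / 2) / gamma(D / 2 + 1)   # volume of the unit D-ball
    entropy_coeff = (D + 1) / D                          # prefactor of sigma T^D
    print(f"D = {D:3d}   unit-ball volume = {ball_volume:.3e}   (D+1)/D = {entropy_coeff:.3f}")
# The ball volume collapses super-exponentially while the entropy prefactor tends to 1,
# so the 4/3 appearing in both formulas at D = 3 is indeed a numerical coincidence.
```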
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/707610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is it possible to visualise red shift? If a picture of a star or galaxy hurtling away from Earth is taken, does it appear red despite it being a different colour? Would a blue coloured star moving away from us appear red to us or vice versa? If so how do scientists understand if say, the red colour of a star is due to it having a cooler surface temperature (red supergiants like betelgeuse) or if it is due to the red shift?
|
If a picture of a star or galaxy hurtling away from Earth is taken, does it appear red despite it being a different colour? Would a blue coloured star moving away from us appear red to us or vice versa?
Red shift and blue shift are terminology meaning a shift towards lower/higher frequencies, respectively. Usually one applies this terminology in the visible spectrum, bounded by red and blue. Thus, the frequency of interest lies between these two and shifts in the direction of one of them - this does not necessarily involve a noticeable change of the color.
If so how do scientists understand if say, the red colour of a star is due to it having a cooler surface temperature (red supergiants like betelgeuse) or if it is due to the red shift?
The temperature of stars is estimated by fitting their radiation spectrum with a black body curve, which has a maximum at a certain frequency proportional to temperature - hence different colors correspond to different peak frequencies and thus temperatures, with blue hotter than red.
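The 'maximum at a frequency proportional to temperature' statement is Wien's displacement law; in wavelength form, $\lambda_\text{peak}=b/T$ with $b\approx2.898\times10^{-3}\,$m·K. A quick illustration with rough example temperatures:

```python
b = 2.898e-3   # m K, Wien displacement constant

for name, T in [("red supergiant (~3500 K)", 3500.0),
                ("Sun (~5800 K)", 5800.0),
                ("hot blue star (~20000 K)", 20000.0)]:
    lam_peak = b / T
    print(f"{name:26s} peak at ~{lam_peak * 1e9:5.0f} nm")
# ~830 nm (red/near-IR), ~500 nm, ~145 nm (UV). The shape of the fitted curve,
# together with identifiable spectral lines, is what separates an intrinsically
# cool star from a redshifted one.
```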
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/707892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Why is there no temperature difference in the Joule expansion experiment? The whole system is adiabatic, and no heat exchange can take place. If the volume of the gas now doubles, it should actually cool down.
That's why I don't understand $dT=0$
|
If I have a piston and I pull on it really hard and this expands the isolated gas, I have no temperature difference?
In the case of the moving piston, the molecules of the gas are striking a moving wall. The pressure of the gas creates a force and the motion of the wall means this happens over a distance. So work is being done. This work comes at the expense of energy in the gas and it cools down.
In the Joule expansion, there is no moving wall for the gas to strike. There is nowhere for the energy to leave, so the temperature remains constant.
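The contrast can be made concrete for an ideal gas: in a reversible adiabatic expansion against a piston the temperature follows $TV^{\gamma-1}=\text{const}$, while in a free (Joule) expansion it does not change. A sketch with made-up numbers for a diatomic gas:

```python
T1 = 300.0       # K, initial temperature (assumed)
gamma = 1.4      # diatomic ideal gas
expansion = 2.0  # the volume doubles

T_piston = T1 * (1 / expansion) ** (gamma - 1)   # reversible adiabatic: gas does work on the piston
T_joule = T1                                     # free expansion: no work done, no heat exchanged

print(f"adiabatic expansion against a piston: {T_piston:.1f} K")   # ~227 K
print(f"Joule (free) expansion:               {T_joule:.1f} K")    # 300 K
```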
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/708009",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 0
} |
Fake Perpetual Motion Device using an Electromagnet I was watching a video of one of those fake perpetual motion machines where a ball falls down a hole and then flies off a ramp back onto the starting platform.
As suspected, the large base is hiding an electromagnet. Studying frames of one cycle, it seems that the ball suddenly accelerates in an unexpected way around where the blue arrow is pointing.
Here the rail touches the ground, and the electromagnet looks to be switched on at that point by a pressure sensor. However, I am a bit confused about how the magnet is working to accelerate the ball: can a magnet "push" a ball in this way? How is energy loss due to friction being overcome?
| When you switch on a magnetic field in the vicinity of a conductive object, you induce an eddy current in the object. This, in turn, makes its own magnetic field. The polarity of this field opposes the polarity of the inducing field, so repulsion results.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/708154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Slope of constant pressure line on $T$-$S$ plot Is the slope of a constant pressure line the same or different in the liquid region and the superheated region for a pure substance on a $T$-$S$ diagram?
| $$\mathrm{d}U=T\,\mathrm{d}S-P\,\mathrm{d}V+\mu\,\mathrm{d}N$$
We want $\frac{\partial T}{\partial S}$.
We know $T=\frac{\partial U}{\partial S}$, so $\frac{\partial T}{\partial S}$ is the second derivative of the internal energy with respect to entropy.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/708261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How does the combination of lens create a sharper image? There's a line in a book which states that the combination of lens helps create a sharper image, but I don't understand how. Does more magnification mean sharper image?
| The distance between nerve endings in the retina of the eye places a limit on the sharpness of an image that you can observe. A good lens system can bring the image closer and make it larger. This can cause the sharpness observed to be limited by other (smaller) factors.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/708598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Different values for the Normal ordering I've come across 2 examples approaching the ordering of $a^2({a^\dagger})^2$, each reach different results:
*
*$a^2({a^\dagger})^2=\;:\!\sum\text{all contractions}\!:\;=\;:\!aaa^\dagger a^\dagger\!:+\;4:\!aa^\dagger\!:+\;2:\!0\!:$
*$a^2({a^\dagger})^2=a(aa^\dagger)a^\dagger=a(a^\dagger a+1)a^\dagger=(aa^\dagger)(aa^\dagger)+aa^\dagger=(a^\dagger a+1)(a^\dagger a+1)+(a^\dagger a+1)=a^\dagger aa^\dagger a+3a^\dagger a +2=a^\dagger(a^\dagger a+1)a+3a^\dagger a +2=({a^\dagger})^2a^2+4a^\dagger a+2$
Both are Wick ordering the same string, but one returns an additional $+2$ term; which one is incorrect?
| Hint: In Wick's theorem what remains in the fully contracted term (= the double contration) is the unit-operator $\hat{\bf 1}$ not the zero-operator $\hat{\bf 0}$, so OP's 1st calculation (v3) is wrong.
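The correct identity, $a^2(a^\dagger)^2=(a^\dagger)^2a^2+4a^\dagger a+2$ (the $+2$ multiplying the identity operator, not zero), can be checked numerically on a truncated Fock space; the truncation only affects the top few levels, so we compare matrix elements well below the cutoff:

```python
import numpy as np

N = 30                                     # Fock-space cutoff
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator: a|n> = sqrt(n)|n-1>
ad = a.conj().T
I = np.eye(N)

lhs = a @ a @ ad @ ad
rhs = ad @ ad @ a @ a + 4 * ad @ a + 2 * I

# Compare only the block far from the cutoff, where truncation effects are absent
print(np.allclose(lhs[:N - 3, :N - 3], rhs[:N - 3, :N - 3]))   # True
```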
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/708753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is there any end to the electromagnetic spectrum? Is there any theoretical end to the electromagnetic spectrum…
| In theory, the electromagnetic spectrum should extend indefinitely without limit.
You can create an electromagnetic wave with arbitrarily long or short wavelength, and therefore arbitrarily low or high frequency respectively, by accelerating a charged particle back and forth.
Of course the amount of energy required to accelerate a charge back and forth with any extremely high frequency will be limited to how much energy you have available. So with that, the upper bound of the electromagnetic spectrum will be limited to your supply of energy. That being said, the frequency corresponding to the total energy in the universe may be the true upper bound, in principle.
As for the other extreme, the energy can in principle be vanishingly small, with the wavelength perhaps bounded only by the size of the Hubble sphere.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/708866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
} |
Question about the parity violation of weak interaction Lagrangian In the textbook of A. Zee, Quantum Field Theory in a Nutshell, the author states that the following Lagrangian:
$$ \mathcal{L} = G (\overline{\psi}_{1L} \gamma^\mu \psi_{2L})(\overline{\psi}_{3L} \gamma_\mu \psi_{4L})$$
Which describes the weak interaction, violates the parity invariance. I've tried to prove this but with no success. Before I start showing what I've done, it's good to give you some context of the notation I'm familiar with. First, the parity operator is defined as $P\psi := \gamma^0 \psi$, where
$$\gamma^0 :=
\begin{pmatrix}
0 & 1
\\
1 & 0
\end{pmatrix}
.$$
And that
$$ \psi_{L,R} := \frac{1}{2}(1 \mp \gamma^5) \psi = P_{L,R} \psi \quad \text{and} \quad \overline{\psi}_{L,R} : = \overline{\psi} \frac{1}{2} (1 \pm \gamma^5) = \overline{\psi}P_{R,L},$$
where
$$\gamma^5 := \begin{pmatrix}
-1 & 0
\\
0 & 1
\end{pmatrix}
.$$
And $\overline{\psi} := \psi^\dagger \gamma^0$. So, what I've tried to do is, basically:
$$ \begin{split}
P(\overline{\psi}_L \gamma^\mu \psi_L) = \gamma^0 \psi^\dagger \gamma^0 P_R \gamma^\mu \gamma^0 P_L \psi &= \gamma^0 \psi^\dagger P_L \gamma^0 \gamma^\mu \gamma^0 P_L \psi
\\
&= \gamma^0 \psi^\dagger P_L (\gamma^\mu)^\dagger P_L \psi
\end{split}$$
But I really don't know to proceed from here or if there are some mistakes in my calculations. Any help would be really welcomed.
| $$
\gamma^0 P_L \gamma^0 = P_R, \\
\gamma^0 P_R \gamma^0 = P_L,
$$
$$
P: \qquad \psi(x) \longrightarrow \gamma^0 \psi(-x) ~~~~\leadsto \\
P: \qquad \psi(x)^\dagger \longrightarrow \psi(-x)^\dagger \gamma^0 ~~~~\leadsto \\
P: \qquad \overline \psi(x)\gamma^0\psi'(x) \longrightarrow \overline \psi(-x)\gamma^0 \gamma^0 \gamma^0 \psi'(-x) =\overline \psi(-x) \gamma^0 \psi'(-x), \\ P: \qquad \overline \psi(x)\vec \gamma\psi'(x) \longrightarrow \overline \psi(-x)\gamma^0 \vec \gamma \gamma^0 \psi'(-x) =-\bar\psi(-x) \vec \gamma \psi'(-x).
$$
One projector suffices for each spinor bilinear, given the intercalated γ s, so that
$$
P: \qquad \overline \psi(x)\gamma^0P_L\psi'(x) ~~ \overline \psi''(x)\gamma^0P_L\psi'''(x)\longrightarrow \\ \overline \psi(-x)\gamma^0 \gamma^0 P_L\gamma^0 \psi'(-x)~~ \overline \psi(-x)\gamma^0 \gamma^0 P_L\gamma^0 \psi'(-x)=\overline \psi(-x) \gamma^0 P_R \psi'(-x)~~\overline \psi''(-x) \gamma^0 P_R \psi'''(-x),
$$
and similarly for the spacelike γ s... You do it.
You see that the left fermions and right antifermions flipped to right fermions and left antifermions.
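The key identities $\gamma^0P_L\gamma^0=P_R$ and $\gamma^0P_R\gamma^0=P_L$ are easy to verify numerically in the chiral (Weyl) representation, which matches the block form of $\gamma^0$ and $\gamma^5$ used in the question:

```python
import numpy as np

I2 = np.eye(2)
Z2 = np.zeros((2, 2))

g0 = np.block([[Z2, I2], [I2, Z2]])     # gamma^0 in the chiral representation
g5 = np.block([[-I2, Z2], [Z2, I2]])    # gamma^5 = diag(-1, -1, +1, +1)

PL = (np.eye(4) - g5) / 2               # left-chiral projector
PR = (np.eye(4) + g5) / 2               # right-chiral projector

print(np.allclose(g0 @ PL @ g0, PR))    # True
print(np.allclose(g0 @ PR @ g0, PL))    # True
```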
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/708991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Does the standing wave equation proof require $\ell=Nλ$? Consider two identical sources $S_1$ and $S_2$ of waves, separated by a distance $\ell$ (as shown in the figure).
The sources produce waves in opposite directions(and towards each other). Now, suppose we wish to derive the equation for the standing wave produced.
Let us take the origin at $S_1$. The equation of the wave due to $S_1$ is:-
$$ y_1=A\sin(wt-kx_1)$$
where $x_1$ is the distance from $S_1$.
Now the equation of the wave due to $S_2$ is:-
$$ y_2=A\sin(wt-kx_2)$$
where $x_2$ is the distance from $S_2$. Note that here we take $x_2>0$.
Now, applying the principle of superposition, we get:-
$$ y=y_1+y_2=A\sin(wt-kx_1)+A\sin(wt-kx_2)$$
Now, observe $x_2=\ell-x_1$, so we get:-
$$ y=y_1+y_2=A\sin(wt-kx_1)+A\sin(wt-k(\ell-x_1))=A\sin(wt-kx_1)+A\sin(wt+kx_1-k\ell)$$
Using $\sin C+\sin D=2\sin\left(\frac{C+D}{2}\right)\cos\left(\frac{D-C}{2}\right)$, we get:-
$$y=2A \cos\left(kx-\frac {k\ell}{2}\right)\sin\left(wt-\frac{k\ell}{2}\right)$$
Note, that here we replace $x_1$ with $x$ since $x=x_1$ as the origin is at $S_1$.
However, the standard equation of stationary waves in such a case is given as $y=2A \cos(kx)\sin(wt)$. Using the equation we just derived, $\ell$ must equal $N\lambda$ (where $N$ is a non-negative integer) so that $k\ell=\frac{2\pi}{\lambda}\cdot N\lambda=2N\pi$, and now,
$$y=2A \cos\left(kx-\frac {2Nπ}{2}\right)\sin\left(wt-\frac{2Nπ}{2}\right)=2A (-\cos(kx))(-\sin(wt))=2A\cos(kx)\sin(wt))$$ as required.
Therefore, our proof of the standard equation of stationary waves requires $\ell=Nλ$. However, looking at the proof in my textbook, there is no discussion of this and the author uses $y=y_1+y_2=A\sin(wt-kx)+A\sin(wt+kx)$. A superficial explanation is given that a wave traveling in the opposite direction is taken with a '+kx' with no deeper reasoning. Although, my proof helps explain the origin of the '+kx' term (in the wave traveling in opposite direction), it adds a rather restrictive condition i.e. $\ell=Nλ$. Thus, I am skeptical about it.
Please help me out with two things:-
*
*Is my proof correct? If not, then what is the reasoning behind the '+kx'?
*If my proof is correct, then why isn't the $\ell=Nλ$ condition more commonly seen?
| You did the mathematics correctly in deriving the standing wave equation
$$y=2A\cos\left(kx-\frac{k\ell}{2}\right)\sin\left(wt-\frac{k\ell}{2}\right)$$ which comes as the superposition of
$$y_1=A\sin(wt-kx)$$ and $$y_2=A\sin(wt+kx-k\ell)$$
However, the other equation of the standing wave, which is
$$y=2A\sin(wt)\cos(kx),$$
comes as the superposition of the following two wave equations:
$$Y_1=A\sin(wt-kx)$$ and
$$Y_2=A\sin(wt+kx)$$
Notice that $y_1$ and $Y_1$ are identical, whereas $y_2$ has an extra phase with respect to $Y_2$. The two become identical only when that phase difference is $N\times 2\pi$.
Thus
$k\ell = N\times 2\pi$, which gives $$\ell=N\lambda$$
Now coming to your first question:
your derivation of the standing wave is correct; however, there is a phase difference of $k\ell$ between the wave you described as travelling along the negative $x$ axis and the wave defined by just changing $-kx$ to $+kx$, as done by the author.
Secondly, the condition $$\ell=N\lambda$$ is not generally seen because it is not a necessary condition. This condition just makes $y_2$ and $Y_2$ identical.
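A SymPy check that the superposition derived in the question equals the product form identically (no condition on $\ell$ is needed for that step), and that a specific example with $\ell=3\lambda$ reduces it to the textbook expression:

```python
import sympy as sp

A, w, k, x, t, ell = sp.symbols('A w k x t ell', real=True)

y_sum = A * sp.sin(w * t - k * x) + A * sp.sin(w * t + k * x - k * ell)
y_product = 2 * A * sp.cos(k * x - k * ell / 2) * sp.sin(w * t - k * ell / 2)

# The sum-to-product identity holds for any ell
print(sp.simplify((y_sum - y_product).rewrite(sp.exp)))   # 0

# Example: ell = 3*lambda, i.e. k*ell = 6*pi, recovers the textbook form 2A cos(kx) sin(wt)
textbook = 2 * A * sp.cos(k * x) * sp.sin(w * t)
print(sp.simplify(y_product.subs(ell, 6 * sp.pi / k) - textbook))   # 0
```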
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/709341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Does the neutrino interact with the photon? I know that the straight answer is no, but in my EFT course, where we're interested in nonrenormalizable operators of the Lagrangian, things aren't so straightforward.
The non-minimal QED Lagrangian is
$$\mathcal{L}=\mathcal{L}_{ren}+\mathcal{L}_{nonren}=\bar{e}(i\hat{D}-m)e-\frac{1}{4}F^2+\frac{1}{M}\bar{\nu}\sigma^{\mu\nu}\nu F_{\mu\nu}$$
does the last term actually exist? If so, is it "naturally" small? If not, why, since it's allowed by symmetries?
To be clear, I'm asking if this interaction can appear in a theory like the one I wrote above (with abelian $F$, so only electron, photon, and neutrino) and if it can happen at the tree level.
| Yes, the neutrino may have a magnetic moment at the 1 loop level in vacuum, cf here, e.g.
This is summarized by your unrenormalizable effective (fake tree)
dimension 5 operator, since the loop into which neutrinos can resolve involves charged leptons or gauge bosons.
Highly suppressed by the weak/EM coupling factor (Fujikawa & Shrock, 1980), $m_\nu G_F$.
In practical terms, the neutrino mass is so much smaller than the W mass that the question is "purely academic".$^\natural$ Your dimension 5 term you tacked on needs a dimension -1 coupling in front of it, for dimensional consistency. It is what's provided above.
There is a voluminous literature on the subject, but astrophysicists focus on interesting media beyond vacuum, affording enhancements, like intense magnetic fields, as cited in the comments above.
$^\natural$ You do notice your term requires a right-chiral neutrino. You might be alarmed that the Ws don't couple to it, but the Higgs, in similar diagrams, omitted here, does!
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/709513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Photoelectric emission at frequency less then threshold frequency If I shine an EM radiation of frequency $\nu$ on a metal surface which has threshold frequency of $\nu_o$, where $\nu < \nu_o$ then, will the emission occur by multi photon absorption?
My reasoning is this
Since there are quantised energy levels in metals forming a band, it may happen that the incident photon excites the electron (even though the photon does not have enough energy to knock the electron out) from a lower energy level to a higher one, and when a second photon is absorbed by that excited electron, it is knocked out.
Please tell me where am I wrong in my reasoning.
Thanks:)
| This is the experiment that established the photoelectric effect:
Within experimental errors there are no electrons coming out when the frequency is below the threshold for the metal.
so it may happen that the incident photon excites the electron (even though the photon has not enough energy to knock the electron out) from lower energy level to higher one and when the second photon is absorbed by that excited electron, it is knocked out.
Where you are wrong is in considering the probability of this happening. The experiment shows it does not happen: even if this two-step reaction has a nonzero probability, the number of excited electrons is too small, and the probability that a second photon will hit the atom with the already excited electron is very small, as seen in the experiment. The electromagnetic coupling is 1/137, and where multiple interactions enter, the extra powers of the coupling prohibit observing such a phenomenon. The reaction can be computed with the appropriate diagrams, but the small coupling makes the probability of the first absorption small, which leaves a very much smaller number of excited target atoms than the number of atoms struck by photons that give the ordinary photoelectric effect.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/709635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Does gravitation really exist at the particle level? As I understand, we usually talk about gravity at a macro scale, with "objects" and their "centre(s) of mass". However, since gravity is a property of mass generally (at least under the classical interpretation), it should therefore apply to individual mass-carrying particles as well.
Has this ever been shown experimentally? For example, isolating two particles in some manner and then observing an attraction between them not explained by other forces.
To pose the question another way, let's say I have a hypothesis that gravitation is only an emergent property of larger systems, and our known equations only apply to systems above some lower bound in size. Is there any experiment that reasonably disproves this hypothesis?
| That's exactly what we are observing when we look at the night sky and see galaxies form out of nothing but dust.
Ever wondered why we are here to ask this question? Because gravity pulled together nothing more than a dust of particles in the very early universe, forming all the galaxies, solar systems, etc.
As a side note, if gravity did not exist at the particle level, it would be very hard to explain the dark-matter halos around galaxies, since these particles interact essentially only via gravity.
Since the dark matter does not dissipate as it only interacts gravitationally, it remains distributed outside the disk in what is known as the dark halo.
https://en.wikipedia.org/wiki/Galaxy_formation_and_evolution
So at the particle level, the existence of complex, gravitationally bound systems is the very proof that gravity really works on individual particles: it is what binds a simple dust of particles together.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/709780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27",
"answer_count": 5,
"answer_id": 3
} |
Confusion in the showing EM wave exist from Maxwell equation When deriving the mathematical description of a field, we set the current density and charge to zero in Maxwell's equations.
However, this condition is not absolutely true anywhere on earth.
Yet, we are able to apply EM waves to problems in communication, medicine, etc. How is it possible that the fields are evidently still calculated correctly even though the sources of the fields are ignored?
| Free-space solutions of Maxwell's equations show that wave-like solutions can theoretically exist. Plane-wave solutions to the homogeneous wave equation are not created by charges and currents, and thus these solutions don't prove that EM waves are generated by charges.
This is not the only way to show that wave-like solutions exist; for example, one can use the retarded potentials for the Hertzian dipole. This is the best way to show that EM waves are generated by accelerating charges.
A further point: any real EM wave generated by charges and currents will also satisfy the homogeneous wave equation in regions where the charge density and current density are zero. However, these real waves generated by charges will only satisfy the homogeneous equation on a restricted domain.
This is similar to using $\nabla^2 V = 0$ to solve for the electric field on a restricted domain, in regions where there are no charges and currents.
An example of this (I think) is solving the homogeneous equations in spherical coordinates. I believe you get solutions that are caused by a charge distribution, as there is a singularity at the centre. However, I think the plane-wave solutions are the only solutions that are valid on all domains, and for those the charge density is in fact zero everywhere.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/709950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Currently self-studying QFT and The Standard Model by Schwartz and I'm stuck at equation 1.5 in Part 1 regarding black-body radiation. So basically, the equation is a derivation of Planck's radiation law, and I somehow can't find any resources as to how he derived it by bringing in a derivative. Planck says that each mode of frequency $\omega_n$ can be excited $j$ times, which gives an energy of the form $jE_n = j(\hbar\omega_n)$
This is equation 1.5:
$$\langle E_n \rangle = \frac{\sum_{j=0}^\infty (jE_n)e^{-jE_n\beta}}{\sum_{j=0}^\infty e^{-jE_n\beta}} = \frac{-\frac{d}{d\beta}\frac{1}{1-e^{-\hbar\omega_n\beta}}}{\frac{1}{1-e^{-\hbar\omega_n\beta}}} = \frac{\hbar\omega_n}{e^{\hbar\omega_n\beta}-1}\tag{1.5}$$
Where $\beta = \frac{1}{k_BT}$.
In short, I just don't get where the $-\frac{d}{d\beta}$ comes from. I also don't get how the summations disappear. Hope to get answers before I continue this book.
| It is just a clever use of the geometric series:
$$\frac{1}{1-q} = \sum_{j=0}^\infty q^j$$
which is valid for any real number $|q|<1$.
Here $q = e^{-E_n\beta} \equiv e^{-\hbar\omega_n\beta}$. As long as the energies $E_n >0$, which is certainly fulfilled, we have $0<q<1$ and can make use of the geometric series; that is all we need. The parameter $q=q(\beta)$ is a function of $\beta$, so we can take the derivative of it:
$$-\frac{d}{d\beta} \frac{1}{1-q(\beta)} = -\frac{d}{d\beta} \sum_{j=0}^\infty q(\beta)^j=-\sum_{j=0}^\infty \frac{d}{d\beta}q(\beta)^j =-\sum_{j=0}^\infty q'(\beta) j q(\beta)^{j-1}$$
where in the last step the chain rule was used. We compute the derivative of $q(\beta)$: $q'(\beta) = -E_n e^{-E_n\beta} = -E_n q(\beta)$.
In the next step we plug this result into the sum:
$$-\sum_{j=0}^\infty q'(\beta) j q(\beta)^{j-1} = -\sum_{j=0}^\infty (-E_n) q(\beta)j q(\beta)^{j-1} = \sum_{j=0}^\infty jE_n q(\beta)^j = \sum_{j=0}^\infty jE_n e^{-jE_n\beta}$$
quod erat demonstrandum.
One only has to plug it now into the expression for $\langle E_n\rangle$.
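As a quick sanity check, one can sum the two series directly and compare with the closed form. A minimal Python sketch, with units chosen so that $E_n=\hbar\omega_n=1$ and an arbitrary test value of $\beta$ (both are assumptions for illustration):

```python
# Numerical check of <E_n> = sum_j (j E_n) e^{-j E_n beta} / sum_j e^{-j E_n beta}
# against the closed form E_n / (e^{E_n beta} - 1).  Units with E_n = hbar*omega_n = 1.
import math

E_n, beta = 1.0, 0.7          # arbitrary test values
jmax = 200                    # the series converges quickly; 200 terms is plenty

num = sum(j * E_n * math.exp(-j * E_n * beta) for j in range(jmax))
den = sum(math.exp(-j * E_n * beta) for j in range(jmax))

print(num / den)                            # direct ratio of the two series
print(E_n / math.expm1(E_n * beta))         # closed form E_n / (e^{E_n beta} - 1)
```

Both prints should agree to many digits, which confirms the manipulation with $-\frac{d}{d\beta}$ above.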
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/710146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the instant velocity? Velocity is the rate of change of position, correct? So does it make sense to talk about velocity without time?
| Velocity is indeed the rate of change of displacement w.r.t time. You are right that it is paradoxical to consider velocity at a point, as at one point there is no change in time or a change in displacement.
For velocity for any two points in time:
$$ v = \frac {\Delta s}{\Delta t} $$ where s is displacement, and $\Delta t = t_2 - t_1 $.
If velocity is constant, the graph for displacement-to-time is a straight line, so slope is equal for any 2 points on the line (and slope = velocity)
So, $ \frac {\Delta s}{\Delta t} $ gives us the accurate velocity.
However, if the velocity is not constant, then, as @Steeven said, we lose the information about the changes in velocity between the two points in time $ t_1 $ and $ t_2 $.
As we consider smaller and smaller choices of $ \Delta t $, the information lost between the two points in time reduces more and more (because the time between the two points, i.e. $t_2 - t_1 $ reduces).
When we want to find "instantaneous" velocity, we let $\Delta t$ go to zero.
The derivative of displacement with respect to time, or $\frac{ds}{dt}$ is what we call "instantaneous velocity". The $dt$ represents a tiny, tiny change in time, $ \Delta t $ ($ t_1 $ and $ t_2 $ are very close to each other).
Thus, we can say that $$ v = \lim_{\Delta t \to 0} \frac{\Delta s}{\Delta t} $$
In conclusion, the time derivative of displacement does not actually give us instantaneous velocity (there is no velocity if there is no time). Rather, the $dt$ represents smaller and smaller changes in time, until the stage where the two points in time are almost (but not exactly) at the same point. This method of finding "instantaneous" velocity is only an approximation, using very small changes in time. But there is still a change in time; without that, velocity has no meaning.
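A small numerical illustration of this shrinking $\Delta t$, for the assumed example $s(t)=t^2$ at $t=2\,$s (where the exact derivative is $4$ m/s):

```python
# Average velocity Delta_s / Delta_t for ever smaller Delta_t, with s(t) = t^2 at t = 2 s.
# The exact derivative ds/dt = 2t = 4 m/s; the averages approach it, but each one
# still uses a finite (nonzero) change in time.
def s(t):
    return t**2   # displacement in metres, t in seconds (illustrative choice)

t1 = 2.0
for dt in (1.0, 0.1, 0.01, 0.001, 1e-6):
    v_avg = (s(t1 + dt) - s(t1)) / dt
    print(f"dt = {dt:<8} average velocity = {v_avg:.6f} m/s")
```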
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/710296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 7,
"answer_id": 5
} |
Can't understand a statement about motion From the book where I am studying motion, it says
Motion is a combined property of the object under study and the observer. There is no meaning of rest or motion without the viewer.
I know that, for an object, it can be said that 'it is moving' in one frame of reference, and it can be said that 'it is at rest' in another frame of reference, but the sentence I mentioned above seems somewhat confusing. How can a phenomenon be a property of two things? Also, how is it that, when there is no one to see, the topic of motion and rest is irrelevant? I don't know exactly what the second sentence is trying to say, assuming that my understanding of it is wrong. I need assistance.
| A non-accelerating particle can always be said to have a velocity of zero in its own non-accelerating frame of reference, regardless of what an external viewer measures. In this manner, there is no meaning of rest or motion without a viewer of the particle in the viewer's frame of reference.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/710421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 4
} |
Earth is spiraling away from Sun at a rate of 1.5 cm per year due to mass loss of the Sun? How was it calculated? My physics teacher asked if we could calculate the rate at which Earth moves away from the Sun due to the mass loss of the Sun. It makes sense to me that the Earth spirals away from the Sun because of the nuclear fusion reactions occurring within the Sun, which convert $4.289 \times 10^{9}$ kg of mass into energy every second. I searched online and found that this rate is $1.5$ cm/y, but how did scientists know theoretically that Earth moves away at the rate of $1.5$ cm per year?
I tried to work out the calculations with the equation
$$m_E \frac{dv}{dt}=-G\frac{M_s(t)m_E}{r(t)^2}$$
But I found that I couldn’t work anymore using this equation. Can someone help me, please?
| If we assume$^\dagger$ the earth always takes on a circular orbit of $r$ around the sun we get the following equation:
$$\frac{mv^2}r=G\frac{Mm}{r^2}$$
We can solve this for the radius:
$$r=\frac{GM}{v^2}.$$
If we now assume the velocity is constant throughout this process$^{\dagger\dagger}$ we get that the orbital radius changes at a rate of
$$\dot r=\frac{G\dot M}{v^2}$$
If we now plug in the Earth's mean orbital speed $v=29.78\text{ km/s}$ (together with the mass-loss rate quoted in the question) we get a rate of
$$\dot r\approx1.018\text{cm/yr}$$
which is almost 50% off from the cited number but it has the right order of magnitude. So the idea behind this calculation is that we assume a circular orbit characterised by the suns mass and that losing mass just changes the radius of this orbit.
$\dagger$ This is not a good assumption since, as you probably know, the orbit of a planet is more like an ellipse. I'm only trying to get an order-of-magnitude answer, and taking this ellipticity into account would greatly complicate things (we would have to consider the semi-major axis and the semi-minor axis instead of the radius). For a complete/precise treatment you would probably have to solve the equation you mention and perform some analysis.
$\dagger\dagger$ This is again a simplification but if you assume a circular orbit and a slow decrease in mass this is a reasonable assumption.
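For completeness, here is a minimal numerical version of the estimate above (a Python sketch; the constants are standard values and the mass-loss rate is the one quoted in the question):

```python
# Order-of-magnitude estimate of dr/dt = G * (dM/dt) / v^2 for the Earth-Sun system,
# using only the radiative mass-loss rate quoted in the question.
G      = 6.674e-11          # m^3 kg^-1 s^-2
dM_dt  = 4.289e9            # kg/s lost by the Sun
v      = 29.78e3            # mean orbital speed of the Earth, m/s
year   = 3.156e7            # seconds per year

dr_dt = G * dM_dt / v**2    # m/s
print(f"dr/dt ~ {dr_dt * year * 100:.2f} cm per year")   # ~1 cm/yr
```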
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/710628",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
If the escape velocity at the event horizon is the speed of light does it mean that slower bodies won't move away at all? If we say that the escape velocity from a planet is say 10 km/s we think that a slower body will move away from that planet but will be eventually forced to fall back on the planet. In simple words we don't say the body won't move at all but it couldn't leave for ever the planet. What is confusing for me is the escape velocity at the black hole event horizon. If it is the speed of light does it mean that a slower body would leave the horizon but fall down again or that is impossible for that body to make a path at all even 1mm away from the event horizon?
| The people who first postulated the existence of an object like a star or planet massive enough that the escape velocity would be equal to that of light did apparently think in those terms (light struggling to leave the object but inevitably being drawn back in, or light beams hovering motionlessly in space right next to the object).
Then hundreds of years later when general relativity was understood to predict the existence of black holes and they (later) became subjects of study by mathematical physicists, those ways of imagining the workings of a black hole had to be abandoned and replaced with something entirely different.
My understanding of this is that once any object falls through the event horizon of an uncharged, nonspinning black hole, its world line points in only one direction, towards the singularity, and so it gets carried in that direction in the same way that it gets carried forward in time. Since no world lines can bend outward to cross the EH on an exit path, it makes no sense to think of a photon, for instance, on the inside of a black hole trying to head out, slowing down, and falling inwards again.
This is a frightfully complicated business and I hope someone here can give you a more complete answer than this.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/710906",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 0
} |
How to see that the electromagnetic stress-energy tensor satisfies the null energy condition? I am trying to show that the Maxwell stress-energy tensor,
$$T_{\mu\nu} = \frac{1}{4\pi}\left( F_{\mu\rho} F^{\rho}{}_{\nu} - \frac{1}{4}\eta_{\mu\nu}F_{\rho \sigma} F^{\rho\sigma} \right),$$
satisfies the null energy condition, i.e., that
$$T_{\mu \nu}k^\mu k^\nu \geq 0$$
for all null vectors $k^\mu$. I see that the second term vanishes on contraction with $k^\mu k^\nu$, but I'm struggling to see how to manipulate the first term.
| Notice that what you are trying to show is that $k^\mu F_{\mu}{}^\nu$ is a spacelike vector (in this answer, I'm assuming the $-+++$ metric convention). Hence, let us focus on this particular vector.
Given $k^\mu$ at some point, pick a choice of Cartesian coordinates such that $k^\mu = (1,1,0,0)^\intercal$, which is always possible. In this choice of coordinates, the field strength tensor reads (units with $c=1$)
$$F_{\mu\nu} = \begin{pmatrix} 0 & -E_1 & -E_2 & -E_3 \\ E_1 & 0 & B_3 & -B_2 \\ E_2 & -B_3 & 0 & B_1 \\ E_3 & B_2 & -B_1 & 0 \end{pmatrix}.$$
Notice then that
$$k^\mu F_{\mu}{}^{\nu} = \begin{pmatrix} -E_1 \\ -E_1 \\ -E_2 + B_3 \\ -E_3 - B_2 \end{pmatrix}.$$
A straightforward computation then shows that $k^\mu F_{\mu}{}^{\nu}\, k^\rho F_{\rho\nu}$ is the sum of two explicitly non-negative terms.
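If it helps, here is a small numerical check (Python/NumPy) of exactly this statement: for randomly drawn $E$ and $B$ fields and $k^\mu=(1,1,0,0)$, the Minkowski norm of $u_\nu=k^\mu F_{\mu\nu}$ comes out non-negative, i.e. the vector is spacelike or null. The random field values are purely illustrative.

```python
# Check that u_nu = k^mu F_{mu nu} has non-negative Minkowski norm (spacelike or null)
# for arbitrary E, B, with eta = diag(-1,1,1,1) and k = (1,1,0,0) as in the answer.
import numpy as np

rng = np.random.default_rng(1)
eta_inv = np.diag([-1.0, 1.0, 1.0, 1.0])   # inverse metric (numerically equal to eta)
k = np.array([1.0, 1.0, 0.0, 0.0])         # null vector

for _ in range(5):
    E, B = rng.normal(size=3), rng.normal(size=3)
    F = np.array([[    0, -E[0], -E[1], -E[2]],
                  [ E[0],     0,  B[2], -B[1]],
                  [ E[1], -B[2],     0,  B[0]],
                  [ E[2],  B[1], -B[0],     0]])   # F_{mu nu} as in the answer
    u = k @ F                                      # u_nu = k^mu F_{mu nu}
    norm = u @ eta_inv @ u                         # u_nu eta^{nu rho} u_rho
    print(f"{norm:.6f}")                           # always >= 0
```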
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/711065",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
If the EM field is a self-propagating field that doesn't need a medium, why should space expansion make any changes to its wavelength? If it makes changes to the photons' wavelength, should it be considered as the field propagator, as the photons spread away from each other according to how much space stretched along the photons' path?
| In general relativity it is important to separate in your mind coordinate-dependent effects from coordinate-independent effects. This can be tricky, particularly in scenarios where there is a “standard” coordinate system. In those cases it is easy to forget that the standard coordinate system is still just a coordinate system.
In this case, the standard coordinate system is the comoving coordinates. This system of coordinates has the weird property that two distant objects which are each at rest in these coordinates are moving away from each other in a coordinate independent sense. The observed redshift is a reflection of that coordinate independent fact.
It is possible to write the laws of electromagnetism in these coordinates and show that there is a strange wavelength-stretching term. However, the utility of doing that is dubious. I recommend sticking to coordinate independent explanations whenever feasible, as it is here.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/711163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Eyes shut, can a passenger tell if they’re facing the front or rear of the train? Suppose you’re a passenger sitting in one of the carriages of a train which is travelling at a high, fairly steady speed. Your eyes are shut and you have no recollection of getting on the train or the direction of the train’s acceleration from stationary. Can you tell whether you’re facing the front or the back of the train?
This isn’t a theoretically perfect environment - there are undulations, bends and bumps in the track. Not a trick question - you cannot ask a fellow passenger!
Edit: This intentionally lacks rigorous constraints. Do make additional assumptions if it enables a novel answer.
| Bumps and gaps are asymmetric.
They make the cart jump up AND BACK and then, at a slower rate, return to its equilibrium speed and direction. So you will detect rapid accelerations backwards and slower accelerations forwards.
Curve handling (both intentional turns and railroad imperfections) happens at the front wheels first and then at the rear wheels.
This can be used for detection, too.
Of course, a straight, high-quality railway and wheels can make the detection less reliable.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/711352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 9,
"answer_id": 4
} |
Electric potential generated by spherical symmetric charge density I know this question is pretty basic but I found a supposedly wrong formula in my notes and I'm trying to understand where this is coming from. Suppose we have a spherically symmetric charge density $\rho({\boldsymbol{r}})=\rho(r)$, then the formula I was given for the potential is
$$\phi(r)=\frac{1}{r}\int_0^r4\pi\rho(r')r'^2dr'\tag{1}$$
But using Gauss law for electric field one gets
$$\int\boldsymbol{E}\cdot d\boldsymbol{S}=4\pi\underbrace{\int\rho(\boldsymbol{r'})d^3\boldsymbol{r'}}_{Q(r)}\implies \boldsymbol{E}(\boldsymbol{r})=\frac{Q(r)}{r^2}\hat{\boldsymbol{r}}=\frac{\int_0^r4\pi\rho(r')r'^2dr'}{r^2}\hat{\boldsymbol{r}}\tag{2}$$
Taking the gradient of $(1)$
$$\boldsymbol{E}(\boldsymbol{r})=-\nabla\phi=\left[\frac{\int_0^r4\pi\rho(r')r'^2dr'}{r^2}-\frac{4\pi\rho(r)r^2}{r}\right]\hat{\boldsymbol{r}}=\left[\frac{Q(r)}{r^2}-\frac{dQ(r)/dr}{r}\right]\hat{\boldsymbol{r}}$$
That is off by a term from what I got from Gauss Law, so I concluded $(1)$ is wrong.
Is this correct?
| The first formula is true only far away from the source. It is in fact the first term of the multipole expansion.
By using Gauss theorem, you obtain the correct result for the electric field, but if you try to compute it with the gradient, you should use the general expression for the potential of a continuous distribution:
\begin{equation}
\Phi(\textbf{r})=\int_{V} \frac{\rho(\textbf{r}')}{|\textbf{r}-\textbf{r}'|} d^{3}r'
\end{equation}
where the primed coordinates refer to points inside the volume of the source, the unprimed ones to the point at which you evaluate the potential, and $V$ is the total volume of the source. This formula reduces to (1) at the leading (monopole) order of its multipole expansion.
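To see the difference concretely, here is a small Python comparison for an assumed test case, a uniformly charged ball of radius $R$ in Gaussian units: formula (1) agrees with the exact potential wherever all of the charge is enclosed (in particular outside the ball, and hence far away), but not inside the distribution.

```python
# Compare formula (1) with the exact potential of a uniformly charged ball
# (radius R, total charge Q, Gaussian units).  They agree once all the charge
# is enclosed (r >= R); inside the ball, (1) misses the charge at r' > r.
Q, R = 1.0, 1.0

def phi_formula_1(r):
    """(1/r) * integral_0^r 4*pi*rho*r'^2 dr' for the uniform ball."""
    q_enclosed = Q * min(r / R, 1.0) ** 3
    return q_enclosed / r

def phi_exact(r):
    """Full superposition over the whole ball."""
    return Q * (3 * R**2 - r**2) / (2 * R**3) if r < R else Q / r

for r in (0.25, 0.5, 0.75, 1.5, 3.0):
    print(f"r = {r:>4}:  formula (1) -> {phi_formula_1(r):.4f},  exact -> {phi_exact(r):.4f}")
```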
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/712081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Is sand in a vacuum a good thermal insulator? My reason for thinking that sand in a vacuum would be a good insulator is that heat cannot be conducted in a vacuum, and the area of contact between adjacent grains of sand is very small, which means heat would transfer between grains relatively slowly. Is this correct, or is there something I'm missing?
Also, the sand is there instead of pure vacuum for structural support.
| It sounds reasonable. But there are a few more things to consider.
How good a vacuum are you talking about? If you mean just good enough to make a better insulator, likely yes. But if you are pulling a vacuum just to get an insulator, there are likely better ways. A vacuum thermos bottle does this. It does not use sand, just vacuum. To make this work, it supports the inner bottle only at the neck.
If you need a vacuum for other reasons, you likely need a good vacuum with a very low residual pressure. Possibly you want it clean, with very little contamination. Putting sand in it may spoil it.
Sand has a lot of surface area. All that surface can contain contaminants. Sand is like dirt. You don't really know what is in it. Some grains may be contaminants. Typically you heat up a vacuum chamber like an oven to evaporate contaminants. If some grains are made of or contain something that evaporates at high temperature, heating will bring it out.
There are a lot of crevices. Air molecules deep in the sand have a long twisted path to get out, where they can be pumped away. It will take a long time to pump down.
If you really need a bed on which you lay something and want that bed to be insulating, perhaps larger ceramic beads would be better. They can be clean, round, and insulating at high temperature.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/712248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 2
} |
Why is skin depth quoted as when the amplitude has decayed by a factor of $\frac{1}{e}$ The definition of the skin depth is:
"Skin depth defines the distance a wave must travel before its amplitude has decayed by a factor of $1/e$."
My question is: why is the factor of 37% significant here? The EM wave will still have some penetration ability after its amplitude has dropped to 37% of its initial value, won't it? That is, it will still be able to penetrate the conductor after the skin depth is reached.
| Because the decay of an electromagnetic wave is exponential, i.e. it decays as $A_0e^{-z/\delta}$, where $A_0$ is the initial amplitude, $z$ is the distance in the conductor, and $\delta$ is the skin depth. It feels straightforward to then write the skin depth in terms of the natural exponential function.
Of course, the skin depth could also be given as the distance over which the field has decayed by, say, 50%, as is common for the half-life of radioactive atoms. This is simply convention.
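A tiny numerical illustration of the point (a Python sketch): the wave keeps penetrating past the skin depth; its amplitude is simply reduced by another factor of $1/e$ for every additional skin depth travelled.

```python
# Fraction of the initial amplitude remaining after n skin depths: A/A0 = exp(-n).
import math

for n in range(1, 6):
    print(f"after {n} skin depth(s): {math.exp(-n):.3f} of the initial amplitude remains")
```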
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/712525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Invariants of inner product in pseudoreal representation of $SU(2)$ I am reading Peskin's and Schroeder (P&S), "An introduction to Quantum Field Theory", specifically the first paragraph on page 499 in section 15.4 "Basic Facts about Lie Algebras". At some point, the authors claim that the invariant combination of two spinors is:
$$\epsilon^{\alpha\beta}\eta_{\alpha}\xi_{\beta}$$
and I would like to ask what is meant by the above-mentioned inner product? Does the author (secretly) imply that one of the two spinors ($\eta_{\alpha}$ or $\xi_{\beta}$) is actually a complex conjugate of another spinor? Or is the complex conjugate form of one of the two spinors given by contracting the one spinor with the Levi-Civita tensor, i.e.:
$$\epsilon^{\alpha\beta}\xi_{\beta}=\xi^{*\beta}$$
or something like that? And if so, why?
Any help will be appreciated!
| P&S are talking about the spinor/defining/fundamental representation $$\eta,\xi~\in~ V~\cong~ \mathbb{C}^2$$ of $SU(2)$.
*
*The expression $\epsilon^{\alpha\beta}\eta_{\alpha}\xi_{\beta}$ is $SU(2)$-invariant because the determinant of an $SU(2)$-matrix is 1.
*One can use this to show that the complex conjugate spinor representation $\bar{V}$ with complex conjugate matrix $\bar{U}=\epsilon U \epsilon^{-1}$ is equivalent to the spinor representation $V$.
*In fact the spinor representation $V\cong\mathbb{H}$ is a pseudoreal/quaternionic representation of $SU(2)\cong U(1,\mathbb{H})$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/713020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Why is I = $\partial Q / \partial t$ and not $I=-\partial Q / \partial t$? I was playing around with the Maxwell equations and I came across this:
$$\nabla\cdot J =-\frac{\partial \rho}{\partial t}$$
$$\iiint_V{\nabla\cdot J \ dV} = \iint_A{J\cdot dA}$$
$$-\iiint_V{ \frac{\partial \rho}{\partial t} \, dV}=\iint_A{J\cdot dA}=I$$
$$I=-\frac{\partial}{\partial t}\iiint_V{\rho}\, dV$$
$$I=-\frac{\partial Q}{\partial t}$$
However, on Wikipedia, the definition is simply $I=\frac{\partial Q}{\partial t}$, with no minus sign.
So how come every definition I come across doesn't contain the negative sign? Physically, if a current is flowing in the positive direction, shouldn't the charge be decreasing at that point as it becomes more negative?
| Where you're using the divergence theorem, surface $A$ is oriented from the inside to the outside, making it a charge loss for the system: current is positive when charges leave the system.
On the other hand, in electricity, the usual $i=dq/dt$ relies on the opposite convention: $q$ rises when the current enters the system (think of a capacitor).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/713134",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Wick Theorem: number of contractions I have to prove that the number of contractions in Wick's Theorem
is equal to:
$$\frac{n!}{(n/2)!\ 2^{n/2}}\qquad\text{where } n \text{ is even.}$$
I don't know how to start, if someone can help.
| Since $n$ is even, I prefer to work with $2n$ instead. Take the correlator $\langle \phi_1 \cdots \phi_{2n}\rangle$. Start with $\phi_1$: it has exactly $2n-1$ possible contractions; once the $\phi_1$ contraction has been taken care of, consider the next uncontracted field $\phi_2$: it has $(2n-3)$ possible contractions; in general, the $k$-th field to be contracted has $(2n-2k+1)$ possibilities. Hence, the total number of contractions is:
$$\# = (2n-1)(2n-3)\cdots (3)\cdot(1)$$
But this is equal to your equation as:
$$ \frac{(2n)!}{((2n)/2)!\,\,\,2^{(2n)/2}} = \frac{(2n)\cdot(2n-1)\cdots (2)\cdot(1)}{(2n)\cdot(2n-2)\cdots(4)\cdot(2)} = \#$$
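A short Python sketch of exactly this counting argument, checking the recursion $(2n-1)(2n-3)\cdots 1$ against the closed form $(2n)!/(n!\,2^n)$:

```python
# Count complete pairings (Wick contractions) of 2n fields by the recursion used above,
# and compare with the closed form (2n)! / (n! * 2^n).
from math import factorial

def count_pairings(m):
    """Number of ways to pair up m labelled objects (0 if m is odd)."""
    if m == 0:
        return 1
    if m % 2:
        return 0
    # pair the first object with any of the remaining (m-1), then recurse
    return (m - 1) * count_pairings(m - 2)

for n in range(1, 6):
    formula = factorial(2 * n) // (factorial(n) * 2**n)
    print(f"2n = {2*n}:  recursion {count_pairings(2*n)},  formula {formula}")
```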
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/713256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How does current flow in a purely inductive circuit if the net voltage is zero? Considering the equation,
$$E=−L\frac{di}{dt}$$
The negative sign in the above equation indicates that the induced emf opposes the battery's emf.
If we're talking about a purely inductive circuit, the induced emf is equal and opposite to applied emf. Isn't it just like two identical batteries in opposition?
If that's the case, how does the current flow?
| When the source voltage is suddenly made zero, the current will be decreasing at some rate, and if an ideal inductor is present in the circuit, that decaying current will cause the magnetic flux through the loop of the inductor to decrease with time.
In accordance with Faraday's law, if the magnetic flux through a conducting loop is changing with time, then there is an induced EMF in that inductor, and that EMF is induced such that it supports the decaying current (Lenz's law). This induced EMF keeps the current from decaying instantly, and that is why you find the current still present in the circuit even after the removal of the source voltage.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/713443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
How do you calculate the age of the observable Universe if the expansion acceleration is not constant? What makes us believe that the cosmological constant was the same in the past?
And if there is no way to prove this, then could the age of our Universe be different from the currently calculated value, since the Universe could have expanded at a different rate in the past?
Even if the value of the cosmological constant was different in the past, how are the error limits calculated that give a finite tolerance on the current prediction of the age of our observable Universe?
| It is, of course, possible to add dynamical fields to the theory which act as Dark Energy given an appropriate equation of state. In general, different models will indeed lead to a different age of the Universe.
In addition to its simplicity and the good agreement with a large array of observations, the cosmological constant is well-motivated by Lovelock's theorem, which states that under a few reasonable assumptions, the gravitational field equations take the unique form
$$
a G_{\mu\nu} + b g_{\mu\nu} = T_{\mu\nu},
$$
with two constants $a$ and $b$. These are fixed by specifying the Newtonian gravitational constant and the cosmological constant. From this perspective, having no cosmological constant would actually be kind of fine-tuned.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/713566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Higher order terms in Big Bang derivation You can easily prove that an SEC fluid gives a big bang by looking at the second Friedmann equation:
$$
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}(\rho + 3P) \le 0
$$
This implies that $\ddot{a} \le 0$ and thus $a$ continues to get smaller and smaller for smaller t, so at some point a(t)=0. Now we'll get the time relation by looking at Hubble's law:
$$
a(t) = a(t_0)[1+(t-t_0)H_0 + ...]\\
0 = 1 + (t-t_0)H_0 + ...\\
(t-t_0)H_0 < -1\\
(t_0-t)H_0 < 1\\
(t_0 - t)< H_0^{-1}
$$
Why, however, can I drop the higher order terms?
| As put by this course, when using the linear approximation to estimate the age of the universe:
"This result of 14 billion years is surprisingly close to the currently accepted value of
around 13.8 billion years. However, there is a large dose of luck in this agreement, since
the linear approximation is not very good when extrapolated over the full age of
the universe."
So you can drop the higher order terms when considering $t$ near the present day; the Taylor expansion is about the present day. However, this expansion is not meant to be used out to $a(t)=0$.
It is only luck that gives a reasonable age for the universe.
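For reference, the linear estimate is just $1/H_0$ converted to years; a minimal Python evaluation, assuming $H_0\approx 67.7$ km/s/Mpc (an assumed fiducial value):

```python
# The linear (first-order) estimate of the age is just 1/H0.
H0 = 67.7 * 1e3 / 3.086e22      # km/s/Mpc -> 1/s   (1 Mpc = 3.086e22 m)
age_yr = 1 / H0 / 3.156e7       # seconds -> years
print(f"1/H0 ~ {age_yr / 1e9:.1f} billion years")   # ~14.4 Gyr
```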
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/714095",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What actually are microscopic and macroscopic viewpoints in thermodynamics? The microscopic viewpoint of studying a system in thermodynamics is the one in which we consider the system on a molecular/atomic/sub-atomic level. (is that even right?)
The macroscopic viewpoint is the one in which we ignore the molecular nature of the system and treat it as an aggregation of differential volumes, that have a limiting volume so that the system acts as a continuum.
If the above statements are true, then why is temperature considered a macroscopic concept?
Temperature is a measure of the average KE of the molecules of a system. Clearly, we're talking about molecules when we talk about temperature, so why is it a macroscopic concept?
| A full answer to your question is simply reading a book on statistical physics.
The thermodynamic limit
There's no microscopic thermodynamics. At the microscopic level, you simply use ordinary mechanics (classical or quantum) for the particles.
But as the number of particles grows, this approach becomes both inefficient and impractical. So the trick is to resort to statistical analysis. This approach throws out a lot of information, since knowledge about individual particles is lost, but it matches the empirical observation that this knowledge isn't necessary at the precision level required for macroscopic physics.
The limit $N\to+\infty$ with $N$ the number of particles is called the thermodynamic limit. This is where thermodynamics emerges.
Temperature
As for temperature (as well as other notions like pressure or internal energy), its definition makes it undefined at the microscopic level, so it has meaning only in thermodynamics (i.e. at the macroscopic scale).
In a nutshell, temperature is defined as the statistical average value of the kinetic energies of the particles. While an average value can always be computed, no matter the number of particles, standard deviation is another matter.
It's only with a high number of particles that standard deviation goes to zero, meaning that the average value becomes "almost certain" and represents the system correctly. That's when this value is called temperature.
At the microscopic level, temperature is undefined and unnecessary, since we can work directly with kinetic energy.
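A toy numerical illustration of the statement that the fluctuations of the average shrink with $N$ (a sketch; the "particle energies" are drawn from an arbitrary assumed distribution, here an exponential one):

```python
# Relative fluctuation of the mean "energy" of N particles shrinks like 1/sqrt(N),
# which is why the average becomes "almost certain" only for large N.
import numpy as np

rng = np.random.default_rng(0)
n_systems = 100                                     # independent systems per value of N
for N in (10, 1_000, 100_000):
    samples = rng.exponential(scale=1.0, size=(n_systems, N))
    means = samples.mean(axis=1)                    # mean energy of each system
    print(f"N = {N:>7}: relative std of the mean = {means.std() / means.mean():.4f}")
```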
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/714220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 0
} |
What does $\sin(a,b)$ mean in the absorber theory of radiation? I'm doing a revision of the absorber theory of radiation by Wheeler and Feynman (that you can see here: "Interaction with the Absorber as the Mechanism of Radiation" - page 161) and I have encountered the expression $-(e a/r_k c^2) \sin(a,r_k)$, where $a$ is the acceleration of the source and $r_k$ is the position for a particle of the absorber, which gives us the retarded field for a non-relativistic source.
Does anyone understand what the notation $\sin(a,r_k)$ stands for?
| It stands for the sine of the angle between the two vectors in the parentheses, in this case between the retarded radius vector $r_k$ and the charged particle acceleration $\mathscr{U}$ (denoted $a$ in the question).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/714857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is there still disagreement over the mass of the bottom (or beauty) quark, but none of the others? Wikipedia (among other places) lists two values for the alleged mass of the B quark, 4.18 and 4.65 GeV.
Only one of the two possible masses listed has a link to another Wiki page explaining the theoretical framework behind it.
Is there a good reason for the continuing confusion?
| Looking at the wiki article, the difference in the mass values depends on the mathematical model used to extract the value of the bottom quark mass from the data.
One of the values links to this model
In quantum field theory, the minimal subtraction scheme, or MS scheme, is a particular renormalization scheme used to absorb the infinities that arise in perturbative calculations beyond leading order, introduced independently by Gerard 't Hooft and Steven Weinberg in 1973. The MS scheme consists of absorbing only the divergent part of the radiative corrections into the counterterms.
The other value does not have a page linked for the model of the calculation but says: 1S scheme. I cannot find any calculations in this scheme.
The latest paper uses the MS scheme.
We present a new measurement of the bottom quark mass in the $\overline{\rm MS}$ scheme at the renormalization scale of the Higgs boson mass from measurements of Higgs boson decay rates at the LHC: $m_b(m_H)=2.60^{+0.36}_{-0.31}$ GeV.
This is a direct demonstration that quark masses depend on the model and the assumptions used in the model for the given data.
The mass here
The masses should not be taken too seriously, because the confinement of quarks implies that we cannot isolate them to measure their masses in a direct way. The masses must be implied indirectly from scattering experiments. The numbers in the table are very different from numbers previously quoted and are based on the July 2010 summary in Journal of Physics G, Review of Particle Physics, Particle Data Group. A summary can be found on the LBL site.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/714946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
I can't seem to figure out a way to compute a gradient without reference coordinates I'm not sure if this question is better asked here or in Mathematics but here it goes:
I'm studying electric dipoles, and this exercise I'm working on asks for the energy between 2 dipoles, given by $$U_{DD}=-\vec{p}_1\cdot\vec{E}_2\,\,.$$
The thing is that I can't advance from here, since I don't really know how to calculate the gradient of the potential without using a reference coordinate system (something the solutions simply say not to do, but don't explain how to avoid).
What I've got so far is as follows:
$$V_2=K_e\frac{\vec{p}_2\cdot\hat{R}_2}{R_2^2}=K_e\frac{p_2\cos(\theta)}{R_2^2}$$
$$\vec{E}_2=-\vec{\nabla}V_2=\frac{K_e\cdot\vec{p}_2}{R_2^3}(2\cos(\theta)\hat{i}+\sin(\theta)\hat{j})$$
When the electric field shown in the previously referenced solutions is $$\vec{E}=K_e\frac{3(\vec{p}\cdot\hat{r})\hat{r}-\vec{p}}{r^3}\,\,.$$
My question essentially boils down to: how do I go from the first expression for $\vec{E}$ to the second one, or in an even shorter form, how do I prove the following equality $$p(2\cos(\theta)\hat{i}+\sin(\theta)\hat{j})=3(\vec{p}\cdot\hat{r})\hat{r}-\vec{p}$$
| Use $\nabla\left[\frac{{\vec p}\cdot{\vec r}}{r^3}\right]=\frac{\nabla({\vec p}\cdot{\vec r})}{r^3}+({\vec p}\cdot{\vec r})\,\nabla\left(\frac{1}{r^3}\right)$. Here $\nabla({\vec p}\cdot {\vec r})={\vec p}$, since ${\vec p}$ is constant and $\nabla\times{\vec r}=0$, and $\nabla\left(\frac{1}{r^3}\right)=-\frac{3{\vec r}}{r^5}$, which gives the quoted result after collecting terms.
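If you want to convince yourself without picking coordinates by hand, here is a small symbolic check (Python/SymPy, an optional aside) of the resulting identity $\nabla\!\left(\frac{\vec p\cdot\vec r}{r^3}\right)=\frac{\vec p}{r^3}-\frac{3(\vec p\cdot\vec r)\,\vec r}{r^5}$, which is exactly what gives $\vec E=-\nabla V = K_e\,\frac{3(\vec p\cdot\hat r)\hat r-\vec p}{r^3}$:

```python
# Symbolic check of  grad( p.r / r^3 ) = p/r^3 - 3 (p.r) r / r^5  for constant p.
import sympy as sp

x, y, z, px, py, pz = sp.symbols('x y z p_x p_y p_z', real=True)
r2 = x**2 + y**2 + z**2
V = (px*x + py*y + pz*z) / r2**sp.Rational(3, 2)          # p.r / r^3

grad = [sp.diff(V, s) for s in (x, y, z)]
claim = [p / r2**sp.Rational(3, 2) - 3*(px*x + py*y + pz*z)*s / r2**sp.Rational(5, 2)
         for p, s in zip((px, py, pz), (x, y, z))]

print([sp.simplify(g - c) for g, c in zip(grad, claim)])   # -> [0, 0, 0]
```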
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/715071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Will the potential energy be the same in both cases? Suppose there is a charge $Q$. Now bring in another charge $Q'$ from infinity to a position a distance $r$ from charge $Q$. Then the change in potential energy is equal to $kQQ'/r$.
My question is: will the potential energy be the same if the same charge $Q'$ is brought from infinity to a distance $r$ from $Q$, but in small portions $dQ'$? I mean that the first $dQ'$ is brought to a distance $r$ from $Q$, and then additional incremental charges $dQ'$ are also brought to the separation $r$, and so on.
Will the potential energy be the same in both cases?
| Short answer: Not really.
The answer is slightly different when talking about point charges vs distributions.
Given I have some charge distribution $\rho_{1}$ and some other charge distribution $\rho_{2}$ that produces a potential $V_{2}$
The potential energy between the 2 distributions [not the same thing as the total potential energy of the whole distribution] is given as the following:
$$\iiint V_{2} \rho_{1} dv $$
That is, building up distribution 1 in the presence of potential 2. This represents the potential energy between the 2 distributions.
Moving distribution 1 to its location, as a whole or as individual $\rho_{1} dv$ elements, yields the same result. The potential energy between the distributions is independent of the process; it depends only on the final state.
Now... this kind of goes out the window for point charges, sort of.
Point charges cannot be broken up into individual elements, so the question about multiple "dq" of a point charge doesn't really make sense. With that being said, a point charge has a single "dq" element, namely $Q\delta^3(\vec{r}-\vec{r}_{0}) dv$. Plugging it into our formula gives the same result: the potential energy between the distributions is invariant to how you arrive at the final distribution.
Edit:
This is also true for the total field energy: moving the distribution as a whole requires you to build the distribution in the first place, adding work to overcome its own electrostatic forces, and then to overcome the forces from the other charges. Alternatively, you can move the charge piece by piece, taking into account that the work required to move each piece increases because you now need to overcome the forces of the charge already placed.
Both of these methods yield the same total field energy
[May comment on the false infinite energy of a point charge later]
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/715225",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Difference between stable manifold and basin of attraction? In 'Nonlinear Dynamics and Chaos' by S. Strogatz, a distinction is made between a stable manifold and basin of attraction of a fixed point in phase space:
Here, the stable manifold of a saddle point is a line, and the basin of attraction of a stable node is a plane. However, the definitions of the two terms are the same, namely:
The set of initial conditions $\bf x_0$ such that $\bf{x} \rightarrow \bf{x^{*}}$ as $t\rightarrow\infty$ for a fixed point $\bf{x^{*}}$.
Why is a distinction being made between the two terms?
| To some extent it's indeed a convention/definition:
*
*intuitively, if a set has a region of typical initial conditions leading to it (say, a neighborhood, a finite phase space volume, etc.), then this region is by definition its "basin of attraction" and this attracting set is called an "attractor";
*whereas invariant sets (stable or not) can be calculated for any point, including non-attractive ones: and when a non-attracting set has a stable set associated with it and this stable set is a manifold, then we don't call this manifold its basin of attraction — since it's no attractor — but its stable manifold.
That's the basic gist of it, though of course there's more to it than that. For instance, you'll still distinguish the basin of attraction from the (stable) manifolds of a point attractor, since these manifolds are related to (to be more exact, they coincide locally with) the stable eigenvectors of the linearized system on this point.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/715629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How can we experimentally confirm that atoms/molecules in a solid actually "move"? The atoms in a solid are so attracted to each other that they "vibrate" and don't move past each other.
How do scientists "measure" that atomic vibration in a solid (let's say at room temperature)?
As a raw, uneducated person it is easy for me to conclude that the solid is completely at rest and no part of it is "moving". So, what is the experimental evidence which shows that my conclusion is totally wrong and that the tiny invisible atoms are actually "jiggling"?
In the case of the Brownian motion, it is somehow easier (more intuitive and common sense) to assume that the invisible atoms are "moving" and thus "hitting" the colloidal particles. However, regarding a solid... I can't even imagine how I can detect that atomic "vibrations" because I can't see them or feel them.
| For me, the most salient fact arising from molecular “jiggling” is simply thermal radiation. It has the advantage of being relatively easy to observe (using thermal imaging at room temperature, and just your eyes at red-hot temperatures and higher), but I suppose whether you consider it convincing evidence of molecular oscillation depends on how comfortable you are with the fact that accelerating charges produce radiation.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/715752",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 4,
"answer_id": 0
} |
Why, in this solution, is the acceleration constant even when it depends on the distance between the two charges? I used integration of $a=dv/dt$ to solve this.
Question
Two particles have equal masses of $5.0 \ g$ each and opposite charges of $+4.0 \times 10^{-5} C$ and $-4.0 \times 10^{-5} C$. They are released from rest with a separation of $1.0 \ m$ between them. Find the speeds of the particles when the separation is reduced to $50 \ cm$.
This involves Coulomb's law, Newton's 2nd law of motion and kinematics of relative acceleration.
Solution of above question
$$q_1 = q_2 = 4 \times 10^{-5}\ \text{C}, \qquad s=1\ \text{m}, \qquad m=5\ \text{g}=0.005\ \text{kg}$$
$$F=K \frac{q^2}{r^2} = \frac{9 \times 10^9 \times (4 \times 10^{-5})^2}{1^2} = 14.4\ \text{N}$$
$$\text{Acceleration:}\quad a = \frac{F}{m} = \frac{14.4}{0.005}=2880\ \text{m/s}^2$$
$$\text{Now}\quad u = 0,\quad s = 50\ \text{cm} = 0.5\ \text{m},\quad a = 2880\ \text{m/s}^2,\quad v = \,?$$
$$v^2 = u^2 + 2as \quad\Rightarrow\quad v^2 = 0 + 2 \times 2880 \times 0.5$$
$$v = \sqrt{2880} = 53.66\ \text{m/s} \approx 54\ \text{m/s} \quad\text{for each particle.}$$
| The method used in the given solution is completely incorrect. Its only redeeming virtue is that it happens to give the correct answer through a numerical coincidence. The simplest way to solve the problem correctly would involve energy conservation, as described in Visza Sekar's answer.
To be a bit more concrete about how this happened, let's generalize to a situation where initial distance of the charges is $r_i$ and the final distance is $r_f$. Following the method in the given solution, we would find that
$$
mv^2 = 2 \frac{kq^2}{r_i^2} (r_i - r_f)
$$
while the (correct) energy conservation method would yield
$$
mv^2 = kq^2 \left( \frac{1}{r_f} - \frac{1}{r_i} \right).
$$
A bit of algebra shows that these two expressions are equal only when $r_f = r_i/2$, which happens to be the case in this problem. However, if the problem involved an initial and final separation that were not related like this, then the two methods would give different result, and the result from the solution's method would be wrong.
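A short numerical version of both routes (a Python sketch), using the numbers of the problem; the two agree here only because $r_f=r_i/2$:

```python
# Speed of each particle when the separation halves, via energy conservation:
# m v^2 = k q^2 (1/r_f - 1/r_i)  (kinetic energy shared equally by the two equal masses).
import math

k, q, m = 9e9, 4e-5, 0.005
r_i, r_f = 1.0, 0.5

v = math.sqrt(k * q**2 * (1 / r_f - 1 / r_i) / m)
print(f"energy conservation: v = {v:.1f} m/s")        # ~53.7 m/s, i.e. the book's 54 m/s

# The constant-acceleration shortcut of the quoted solution:
v_const_a = math.sqrt(2 * (k * q**2 / r_i**2 / m) * (r_i - r_f))
print(f"constant-a shortcut: v = {v_const_a:.1f} m/s")  # agrees only because r_f = r_i/2
```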
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/716100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
The value of $g$ in free fall motion on earth When we release a heavy body from a height above the Earth, we get the value $g=9.8 \ \mathrm{m\,s^{-2}}$. Now, I'm confused about what it means. For example, does it mean that the body's speed increases by $9.8\ \mathrm{m/s}$ every second? Or does it mean that the speed of the body is $9.8 \ \mathrm{m/s}$?
| You can see that $g$ has units of acceleration, namely $\frac {m}{s^2}$ or $\frac {m}{s} \left (\frac 1s \right )$. The last form gives an easy interpretation: a speed change per second. Additionally, there are a couple of assumptions in use:
*
*The Earth is assumed to be an ideal sphere; otherwise $g$ would depend on the radius $R$ at the exact $(\varphi, \lambda)$ (latitude, longitude) coordinates on the surface.
*Free fall has to be in a pure vacuum; otherwise the gravitational acceleration would be opposed by air resistance (drag), which would make the net acceleration depend on parameters such as the falling body's cross-section, velocity, etc.
*Finally, the mass distribution of the Earth along the axis parallel to the surface normal is assumed to be uniform, namely the density profile $\rho(R)$ is the same at any coordinates $(\varphi, \lambda)$. Otherwise, you would need to account for local mass distributions in the $g$ calculations.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/716376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
} |
Born's Rule for states over supernumbers? For quantum mechanics on a Hilbert space over the complex numbers, the usual scalar product of two states $\langle \phi | \psi \rangle$ gives the transition amplitude between the two states. The absolute square of this quantity then gives the probability that a particular value associated with $|\phi \rangle$ can be measured when the system is in state $| \psi \rangle$.
However, when one constructs states over super-numbers (for example fermionic coherent states), those states do have supernumbers as coefficients, and thus the scalar product yields a super-number as well.
Can this super-numbers still be used as a transition-amplitude?
For example, in a 2 state-system:
$$
|\theta \rangle = | 0 \rangle - \theta | 1 \rangle \\
$$
then
$$
\langle 0 |\theta \rangle = 1 \\
\langle 1 | \theta \rangle = - \theta.
$$
How would we proceed from here?
*
*The absolute square would be $ \bar{\theta} \theta $, which is grassmann even - or would it be $\theta \theta = 0$?
*If the square is zero, does that mean that fermionic coherent states essentially are overlapping with the vacuum state?
*Is the concept of transition probabilities simply not defined for states over super numbers?
*If so, could it in principle be defined in a consistent way?
| In order for the Born rule of a wavefunction or an overlap to produce measurable physical probabilities, i.e. ordinary numbers $\in[0,1]$, all supernumbers must first have been integrated out, cf. e.g. this & this Phys.SE posts.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/716897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why does the dielectric constant of a ferroelectric increase with temperature below $T_C$?
The above figure is taken from C. Kittel.
When a ferroelectric substance (say, BaTiO$_3$) at room temperature is gradually heated, the dielectric constant $\varepsilon_r$ first increases and then attains a peak at a temperature called the Curie temperature $T_C$; above $T_C$, further increase in the temperature causes a rapid decrease in the dielectric constant $\varepsilon_r$. The decrease in $\varepsilon_r$ above $T_C$ can be understood from the ferroelectric-to-paraelectric transition, in which there is a structural phase transition from the tetragonal unit cell structure (carrying a nonzero dipole moment) to the cubic unit cell structure (carrying no net dipole moment).
But what is the reason for the initial growth in the dielectric constant when the temperature is raised from room temperature to $T_C$?
| The response of dielectric constants to temperature is model-dependent; thus, I would say that there is no simple rule of thumb. However, in the specific case of phase transitions, the material always builds up long-range correlations between its parts and fluctuations are very intense (near the critical temperature there is not a well-defined phase of matter, and small perturbations end up producing dramatic responses).
For this reason, all the response properties are generally increased. The dielectric constant is one of them, as it is the degree of response of a material to an applied electric field. In summary, the dielectric constant (and other responses to external perturbations) will in general increase towards the critical temperature both ways, as it is a region of extreme fluctuations.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/716970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Why doesn't $dU=nC_{v}\,dT$ hold for all substances? Consider the following proof for change in internal energy of real gases, liquids and solids(assuming Non-$PV$ work $=0$):
*
*Let X denote real gases, liquids, and solids
*The First law of thermodynamics is $dU=dQ-dW=dQ-PdV$, which also holds for X
*At constant volume, $dU_{v}=dQ_{v}-0$.
*Now, $dQ_{v}=nC_{v}\,dT$ is a trivial expression and thus, will also hold for X.
*So we have $dU=nC_{v}\,dT$.
*Since U is a state function(in terms of V and T), $dU_{v}=dU$ since the path is irrelevant.
*Thus, we get $dU=nC_{v}\,dT$ for all X.
However, some sources indicate that $dU=nC_{v}\,dT$ is applicable only for ideal gases. Are they correct? If so, what is the mistake in this proof?
Addendum:
It seems the issue is in point 6 in that $dU_{v}=du$ cannot be used. This is because the internal energy change does not depend on the path, but if you are choosing an alternative path to calculate $du$ (like isochoric), that path needs to exist between the two states. So $dU=nC_{v}\,dT$ is true for an isochoric process for all X, but not in general for any process. But, why doesn't this issue arise in ideal gases?
| For a constant-volume transformation, the relation is always true: it is simply the definition of $C_v$!
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/717259",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
An extension of von Neumann entropy to observables Suppose we define the "entropy" of a self-adjoint matrix $\rho$ as the real number $S(\rho)$ given by:
$$S(\rho)=-\text{tr}(\rho\log|\rho|)$$
(notice the absolute value on $\rho$, as $\rho$ may have negative eigenvalues). While clearly such an entropy function can be negative (e.g. $S\left(-\frac{1}{n}I_n\right)<0$), I would like to know if such an entropy function is necessarily positive (i.e., non-negative) on bipartite self-adjoint matrices whose marginals are density matrices (states). More precisely, if $\rho_{AB}\in A\otimes B$ is a self-adjoint matrix such that $\rho_A=\text{tr}_B(\rho_{AB})$ and $\rho_B=\text{tr}_A(\rho_{AB})$ are both density matrices, is it necessarily the case that $S(\rho_{AB})\geq 0$?
| Let $\rho=p\lvert a\rangle\langle a\rvert+(1-p)\lvert b\rangle\langle b\rvert$ , with $\lvert a\rangle$ and $\lvert b\rangle$ two orthogonal maximally entanged states.
Then, the reduced density matrices of $\rho$ are maximally mixed states, and thus valid density matrices, independent of the value of $p$.
On the other hand, $S(\rho) = -p\log|p|-(1-p)\log|1-p|$ is negative for all $p\notin[0;1]$.
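A concrete numerical instance of this counterexample (Python/NumPy), using the two Bell states $|\Phi^\pm\rangle$ and the assumed value $p=1.5$: both marginals come out as $I/2$ (valid density matrices), while $S(\rho)<0$.

```python
# rho = p |a><a| + (1-p) |b><b| with two orthogonal Bell states and p outside [0,1]:
# the marginals are I/2, yet S(rho) = -tr(rho log|rho|) < 0.
import numpy as np

phi_plus  = np.array([1, 0, 0, 1]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2)
phi_minus = np.array([1, 0, 0, -1]) / np.sqrt(2)   # (|00> - |11>)/sqrt(2)
p = 1.5
rho = p * np.outer(phi_plus, phi_plus) + (1 - p) * np.outer(phi_minus, phi_minus)

rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)   # partial trace over B
print("rho_A =\n", rho_A)                                     # I/2: a valid state

eigs = np.linalg.eigvalsh(rho)
S = -sum(e * np.log(abs(e)) for e in eigs if abs(e) > 1e-12)  # skip zero eigenvalues
print("S(rho) =", S)                                          # negative
```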
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/717419",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Heat death of the Universe in LCDM I have often read that the heat death of the Universe occurs in cosmologies where its age can be arbitrarily large, even with a cosmological constant. However, the standard LCDM cosmology's conformal age is bounded, even in the arbitrarily far future. It seems to me that for the Universe to necessarily reach equilibrium it must also conformally reach equilibrium, but I don't see how that is a given if the conformal age is bounded. My question is: in the LCDM model, how can the Universe definitely reach heat death?
| The fact that the conformal time is bounded in the future means that there are regions in the universe which we will not be able to get information from, so particles here cannot equilibrate with particles there.
However, the form of equilibrium you reach in LCDM is not one with a bunch of particles colliding and reaching equilibrium. Rather, you reach an equilibrium with no particles, only the cosmological constant. Basically, all the particles will leave the horizon (and the wavelength of any photons will stretch beyond the horizon). The temperature of this universe will be given by the Hubble rate, $T \sim H$, similar to the Hawking temperature of a black hole.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/717572",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How can the Cosmic Neutrino Background (CνB) have a temperature? How can any neutrino have a 'temperature'? The word temperature usually refers to the average velocity of massive particles, correct?
And the Cosmic Microwave Background (CMB) has a 'temperature' based on the temperature of a 'black body' that would emit photons of energies corresponding to those seen in the CMB, correct?
But, how can a neutrino or neutrinos have a temperature? What does it correspond to?
| The temperature of a gas is a parameter that reflects the distribution of energy/momentum of the particles. It is not a characteristic of any individual particle.
Before the cosmic neutrino background was formed (when the early universe was $>10^{11}$ K) neutrinos and anti-neutrinos were produced and destroyed in thermal equilibrium with the rest of the radiation and baryonic matter. That is, the neutrinos had a distribution of energies and momenta that was determined by the temperature of the universe at that time. NB: This is not a blackbody distribution, it is the Fermi-Dirac distribution because neutrinos are spin 1/2 particles with mass.
As the universe expanded and cooled, the density fell, and at about 1 second after the big bang, the interaction timescale for the neutrinos became longer than the expansion timescale of the universe. The neutrinos "decoupled" from the other matter and radiation, but their distribution of momentum was preserved, with a characteristic temperature of a few $10^{10}$ K.
Since then, the universe has expanded by a factor of $\sim 10^{10}$ and the momentum of the individual neutrinos with respect to the comoving frame has decreased by a similar amount. (Even though the neutrinos have a small mass, you can think of the process as the expansion stretching their de Broglie wavelengths). Thus the neutrinos still have a momentum distribution, but it is now the equivalent of a much colder gas - about 2K.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/717782",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Does dusk really remain for a shorter period of time at the equator? It is said that dusk lasts for a shorter time at the equator than at the poles, because the equator rotates faster than the poles. But it is also true that time runs the same at every latitude, and if that's so, then dusk should last just as long at the equator as at the poles. So, does dusk really last for a shorter period of time at the equator?
| "Dusk" is defined as "the darker phases of twilight" (in the evening), so it may be ambiguous. There are in fact 3 different twilights:
which are defined by how far the sun is below the horizon (hence the answer from @tobalt). Since the East-West speed of the sun in the sky is identical across the planet (though it does change over the year), the more perpendicular the celestial equator is to the horizon, the faster the sun goes down.
A good resource for such questions is: https://www.timeanddate.com, which shows the length of daylight (sun above horizon), civil twilight, nautical twilight, astronomical twilight, and night for each day of the year for any significant city.
Here's today's look at Quito ($\phi = -0.18^{\circ}$) and Utqiaġvik, AK ($\phi = 71.17^{\circ}$):
You can see that Quito's twilight is nearly constant over the year and shorter than Utqiaġvik's, where the length of twilight today is zero. In the winter it will be 10 hours long, though that does comprise both dusk and dawn, with no sunrise/sunset between them.
The formulae use to make these figures are involved, e.g. https://gml.noaa.gov/grad/solcalc/solareqns.PDF or https://en.wikipedia.org/wiki/Sunrise_equation .
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/718044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 4,
"answer_id": 1
} |
Why doesn't the variation of resistivity with temperature go both ways? I've learnt that the variation of resistivity with temperature for a conductor is:
$\rho=\rho_0(1+\alpha (T−T_0))$
Let's consider resistivity at 0℃ and 100℃.
When heating the conductor from 0℃ to 100℃,
$\rho_{100}=\rho_0(1+\alpha (100-0))$
$\alpha=\displaystyle \frac{\rho_{100}-\rho_0}{100\rho_0}$
Now, when cooling the conductor from 100℃ to 0℃,
$\rho_0=\rho_{100}(1+\alpha (0-100))$
$\alpha=\displaystyle \frac{\rho_0-\rho_{100}}{-100\rho_{100}} = \displaystyle \frac{\rho_{100}-\rho_0}{100\rho_{100}} = \displaystyle \frac{\rho_{100}-\rho_0}{100\rho_0(1+\alpha(100-0))} = \displaystyle \frac{\alpha}{1+100\alpha}$
Why does this discrepancy exist? Even if the relation only holds for smaller temperature differences, the discrepancy seems to hold, as the new value of $\alpha$ only seems to depend on the old one, as $\displaystyle \frac{\alpha}{1+T'\alpha}$.
| More broadly, it's convenient to postulate that the resistivity $\rho$ changes with temperature $T$ in differential form as
$$d\rho=\alpha(T)\rho\,dT,$$
where $\alpha$ is the (temperature-dependent) thermal coefficient of resistivity.
For the purposes of this question, though, we can idealize $\alpha(T)$ as constant. In other words, the temperature variation in $\alpha$ is not the cause of the specific problem you observed that $\rho_0\neq\rho_0(1+\alpha\Delta T)(1-\alpha\Delta T)$.
Integrating, we obtain
$$\ln\left(1+\frac{\Delta\rho}{\rho_0}\right)=\alpha\Delta T,$$
which is symmetric regarding heating and subsequent cooling. That is, $\rho_1=\rho_0e^{\alpha\Delta T}$ is always consistent for successive temperature excursions and returns (in which $\rho_1=\rho_0e^{\alpha\Delta T}e^{-\alpha\Delta T}=\rho_0$).
However, if we choose to simplify the logarithmic relation for small $\Delta\rho$ using a Taylor series expansion as
$$\ln\left(1+\frac{\Delta\rho}{\rho_0}\right)\approx\frac{\Delta\rho}{\rho_0}\left(\approx\alpha\Delta T\right)$$
to obtain the equation you start with, then we lose the advantage of symmetry in exchange for the advantage of a simpler expression.
(Incidentally, the same thing happens with engineering strain $\varepsilon=\frac{\Delta L}{L_0}$; if we apply a certain engineering strain and then its exact opposite, we don't recover the original length. To ensure proper symmetry, we need to use the true strain.)
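A quick numerical illustration of this asymmetry; the values of $\rho_0$ and $\alpha$ below are just assumed, copper-like numbers used for the demonstration.
```python
# Heat by +100 K and then cool by -100 K, comparing the linear (engineering)
# model with the integrated exponential form.
import math

rho0 = 1.68e-8    # ohm*m, assumed value roughly appropriate for copper at 0 °C
alpha = 4.0e-3    # 1/K, assumed (temperature-independent) coefficient
dT = 100.0

# Linear model: the round trip does not close.
rho_lin = rho0 * (1 + alpha * dT) * (1 - alpha * dT)
print(rho_lin / rho0)          # 1 - (alpha*dT)**2 = 0.84, not 1

# Exponential (integrated) form: the round trip closes exactly.
rho_exp = rho0 * math.exp(alpha * dT) * math.exp(-alpha * dT)
print(rho_exp / rho0)          # 1.0 to machine precision
```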
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/718340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 6,
"answer_id": 1
} |
How does an electron move in the $p$ orbital? This is my first time learning about orbitals and I am very confused over how electrons move around the nucleus in the $p$ orbital.
Wouldn't it have to move out of the orbital where probability of finding an electron is low in order to complete its revolution? Maybe my understanding of orbitals is flawed.
Can somebody please help.
| The electron $p$ orbitals with $\ell,m=1,\pm1$ have nonzero expectation value in a torus around the $z$-axis. As time evolves, the complex phase of the wavefunction increases clockwise or counterclockwise around the $z$-axis, depending on the sign of $m$. Your favorite intro quantum textbook has a paragraph about interpreting this phase evolution as a “probability current.”
The $m=0$ orbital, sometimes called the $p_z$ orbital, corresponds classically to a particle with angular momentum $\ell=1$ projected in some direction onto the $x$-$y$ plane. Because of noncommutativity among angular momentum components, you can’t say which direction in the $x$-$y$ plane. Trying to imagine a classical electron whose motion occupies the $p_z$ orbital is asking for trouble.
Chemists like to use the $p_x$ and $p_y$ orbitals, which are linear combinations of the $\ell,m=1,\pm1$ orbitals so that the wavefunction is real-valued everywhere. These are identical to the $p_z$ orbitals in a rotated coordinate system. If you choose to analyze your wavefunctions in the real-valued basis, you throw away the obvious correspondence with classical angular momentum.
In the limit of large $\ell$, the $|m|=\ell$ states correspond better and better to a particle which is most likely to be found in a ring around the equator. However, the small-$|m|$ states never have a good classical correspondence, because classical physics doesn’t have the same un-interpretability of mutually-perpendicular projections of angular momentum.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/718486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
If water is nearly as incompressible as ground, why don't divers get injured when they plunge into it? I have read that water (or any other liquid) cannot be compressed like gases and it is nearly as elastic as solid. So why isn’t the impact of diving into water equivalent to that of diving on hard concrete?
| Incompressible doesn't mean that it has to keep the same shape.
But, due to viscosity, water can be "slow" to change its shape under external influence. So when a diver arrives too fast, water can't adapt in time and behaves like a brick wall for the duration of the diver's penetration.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/718786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "38",
"answer_count": 13,
"answer_id": 6
} |
Translating Ashcroft and Mermin's "Second Proof" of Bloch's Theorem to Dirac's Notation At the end of this post I attach Ashcroft and Mermin's proof of Bloch's theorem which is not essential per se (the proof using lattice symmetries is more general), but is key in being used later as a jumping off point for the nearly free electron model.
Now I am trying to translate it to Dirac's bra-ket notation, since that always helps me think in more general, coordinate-free terms (once I pull out all of the identity operators which have been tacitly inserted). Essentially, I am trying to arrive at (8.38) by beginning at the coordinate-free eigenvalue equation for $H$.
Thus I thought to write something like
$$H|\psi\rangle=\epsilon|\psi\rangle \implies \langle \mathbf{r}|H|\psi\rangle=\langle \mathbf{r}|\int \mathrm{d}{\mathbf{k}}\left(\frac{\mathbf{p}^2}{2m}+U\right)|\mathbf{k}\rangle\langle \mathbf{k}|\psi\rangle=\int \mathrm{d}{\mathbf{k}}\left(-\frac{\hbar^2\nabla^2}{2m}+U(\mathbf{r})\right)\langle \mathbf{r}|\mathbf{k}\rangle c_\mathbf{k}=\int \mathrm{d}{\mathbf{k}}\left(-\frac{\hbar^2\nabla^2}{2m}+U(\mathbf{r})\right)c_\mathbf{k}e^{i \mathbf{k} \cdot\mathbf{r}}$$
where the integration is over all of momentum (k-space up to a proportionality factor). The first term in the equation above recovers (8.36), but (8.37) I cannot seem to "find". Obviously I want something of the form (8.32), so I thought to insert completeness in coordinate space ($\mathbf{r}$), but then I have some extra $\mathbf{r}$ kets in the second term which aren't there in the first.
Any help in completing the steps which I cannot would be greatly appreciated.
| You need to use the fact that $U$ is periodic with the same periodicity as the lattice, which means that it can be expressed as a sum
$$U(\mathbf r)=\sum_{\mathbf G\in \mathrm{RL}}u_\mathbf G e^{i\mathbf G\cdot\mathbf r}$$
where RL is the reciprocal lattice.
Also, for what it’s worth, note that since Ashcroft and Mermin are working on a torus (with Born-von Karman boundary conditions) the set of possible momenta is discrete, so the integral should be a sum. You may repeat the derivation in the continuum if you wish, but it will look slightly different than it does in the text.
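Concretely, here is a sketch of the missing step, keeping your continuum notation and suppressing normalization factors (the relabeling $\mathbf k'=\mathbf k+\mathbf G$ is the only trick). Inserting the Fourier series of $U$ into the potential term gives
$$U(\mathbf r)\,\psi(\mathbf r)=\sum_{\mathbf G}\int \mathrm{d}\mathbf{k}\; u_{\mathbf G}\, c_{\mathbf k}\, e^{i(\mathbf k+\mathbf G)\cdot\mathbf r}=\int \mathrm{d}\mathbf{k}'\;\Big(\sum_{\mathbf G} u_{\mathbf G}\, c_{\mathbf k'-\mathbf G}\Big) e^{i\mathbf k'\cdot\mathbf r}.$$
Matching the coefficient of each plane wave $e^{i\mathbf k'\cdot\mathbf r}$ against the kinetic term then couples $c_{\mathbf k'}$ only to coefficients whose wavevectors differ by reciprocal lattice vectors, which is essentially how (8.37) and then (8.38) arise.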
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/718918",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Is Newton’s third law of motion formed from Poincare symmetries? So I know that Newton's third law states that every action has an equal reaction, making a symmetry. But just like how Poincare symmetries form conservation laws, do any Poincare symmetries form Newton's third law?
(Side question: If not, what symmetry is Newton's third law based on?)
| The third law states that momentum is conserved. From Noether we know that momentum conservation is a consequence of translational symmetry. The translations are a subgroup of Poincare. So yes.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/719076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How is a state of $|0\rangle$ created experimentally? In the context of quantum computing, many times in textbooks and online courses, they say "we generate a state of $|0\rangle$", and then proceed to apply quantum (logic) gates.
My question is: how would we practically ensure that a generated state is actually $|0\rangle$, rather than $|1\rangle,\,|+\rangle,\,|-\rangle$, or any other state?
| $|0\rangle$ is typically a ground state, separated from the other (excited) states by a gap $\Delta$. One typically lowers the temperature below the gap, $k_BT\ll\Delta$ and waits for long enough to be sure that the system has relaxed to the ground state.
There has been extensive research on the initialization of quantum computers, for various systems, in the early 2000s. The difficulty here is that, in order to perform quantum computation, one typically wants to reduce the decoherence, i.e., one chooses systems with very long lifetimes of the excited states. This means that one needs to wait for a very long time in order to initialize the system in state $|0\rangle$. Alternatively, one may try to control the decoherence (i.e., the coupling to the bath) - making it strong when initializing the system, and switching it off to carry out the computation.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/719323",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
A cylinder rolling down an inclined plane A few questions popped into my mind while studying rotational motion.
Take a cylinder to the top of an inclined plane. Suppose there is friction. Let go of the cylinder. If it is rolling without slipping, is its acceleration constant over the time interval it is rolling down? If so, why? Why does the acceleration depend on the rotational inertia of the body in this case? And the final and most important question that had me struggling: why can't we simply apply $F = ma$ on these objects and get the same result on every object, regardless of their rotational inertia, since all the forces acting on the object in this system are proportional to the mass?
| Newton's second law is stated for point objects where acceleration has no ambiguity. When you are studying a system like the cylinder which is composed of many points, the problem is to choose which point are you going to choose to calculate the acceleration. The centre of mass theorem states that the point to consider if you are considering only the external forces is the centre of mass of the system.
In the case of your cylinder, you can certainly apply the centre of mass theorem. You have three forces acting on the cylinder: weight, normal reaction and friction. The normal reaction is only there to cancel out the normal component of weight. You therefore have a tangential acceleration of the center of mass of the cylinder (which is on its axis if you're assuming rotational symmetry). The subtlety is that the determination of the friction force is not evident, as it originates from the no-slip condition. This is the part that will have a non-intuitive dependence on the moment of inertia.
This is why a moment or energy method is more appropriate, as it easily handles the no-slip condition. It does not need to calculate this tricky friction force, but rather focuses directly on the relevant angular velocity. In fact, once you've solved the equations of motion using these methods, you can revert back to the center of mass theorem to figure out the friction force.
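For reference, a sketch of how the two approaches combine for a uniform incline of angle $\theta$ ($I$ the moment of inertia about the axis, $R$ the radius, rolling without slipping so $\alpha=a/R$):
$$ma = mg\sin\theta - F_{\rm fric},\qquad I\,\frac{a}{R}=F_{\rm fric}R \;\Rightarrow\; a=\frac{g\sin\theta}{1+I/(mR^2)},\qquad F_{\rm fric}=\frac{mg\sin\theta}{1+mR^2/I}.$$
So the acceleration is indeed constant in time, and the dependence on the moment of inertia enters precisely through the friction force needed to enforce the no-slip condition, which is why a naive $F=ma$ with only gravity and the normal force would miss it.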
Hope this helps.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/719767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why does the Lagrangian have $O(4)$ symmetry after Wick rotating (previously Lorentz symmetry)? Pertaining to the answer within link.
Why is it the case that, for a Lorentz invariant Lagrangian $\mathcal{L}$, after Wick rotation the $O(4)$ invariance is established, thus manifesting itself as having a Euclidean metric? Is that a consequence of requiring the four-vector fields to transform as $A_0^E = iA_0$ and $A_j^E=A_j$ or a result of it? So which premise comes first?
| As long as the Minkowski action is constructed from Lorentz-covariant tensors, then under Wick rotation [where the contravariant and covariant $0$-components of the tensors are Wick-rotated in opposite ways], the corresponding Euclidean action becomes constructed from the corresponding $O(4)$-covariant tensors, cf. e.g. this Phys.SE post.
Note in particular that the Minkowski metric tensor [with the signature convention $(-,+,+,+)$] is Wick rotated to the Euclidean metric tensor.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/720158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Functional Derivative Calculation Given the functional:
$$ F[\phi] = \int_V \frac{k_B T}{a^3}\phi\ln(\phi) \ ds = \int_V I(\phi) ds $$
I want to find the functional derivative. I believe this would result in:
$$ \frac{\delta F}{\delta \phi} = \frac{\partial I}{\partial \phi}=\frac{k_B T}{a^3}[\ln(\phi)+1]$$
However, the paper I am following along with has only the first term. Is my calculation correct? Note, in this case I set the functional derivative equal to the partial derivative because the functional doesn't contain any higher derivatives - hence those partials vanish.
| Your attempt points in the right direction, but note that the functional derivative is not the partial derivative, as you're differentiating with respect to a function and not a variable. Nonetheless, they are connected for certain functionals.
Take a compactly supported smooth function $\psi$, then by the definition of the functional derivative:
\begin{align*}
\int_V\frac{\delta F[\phi]}{\delta\phi}\psi\; ds
\stackrel{!}{=}\left[\frac{\mathrm d}{\mathrm d\varepsilon}F[\phi+\varepsilon \psi]\right]_{\varepsilon=0}
=\ldots
=\int_V\frac{k_\mathrm{B}T}{a^3}(\ln(\phi)+1)\psi\mathrm ds,
\end{align*}
from which the result follows with the fundamental theorem of the calculus of variations. I think you can fill in the two missing steps, where you just have to put in your expression of the functional, for yourself.
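For the record, a sketch of those two steps (differentiating under the integral sign):
\begin{align*}
\left[\frac{\mathrm d}{\mathrm d\varepsilon}\int_V\frac{k_\mathrm{B}T}{a^3}(\phi+\varepsilon\psi)\ln(\phi+\varepsilon\psi)\,\mathrm ds\right]_{\varepsilon=0}
&=\int_V\frac{k_\mathrm{B}T}{a^3}\left[\psi\ln(\phi+\varepsilon\psi)+(\phi+\varepsilon\psi)\,\frac{\psi}{\phi+\varepsilon\psi}\right]_{\varepsilon=0}\mathrm ds\\
&=\int_V\frac{k_\mathrm{B}T}{a^3}\big(\ln(\phi)+1\big)\,\psi\;\mathrm ds,
\end{align*}
which confirms that the $\ln(\phi)+1$ result, including the $+1$ term, is correct.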
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/720339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is the black body radiation independent of composition and incident radiation? There have been questions similar to this, but most of them do not explain the mechanism responsible for the phenomena but instead explain through contradiction of second law of thermodynamics, for example this answer https://physics.stackexchange.com/a/130901/324947
Can anyone explain qualitatively the mechanism responsible, i.e. why requiring that an object absorb all the incident radiation while in thermal equilibrium must lead to emission of radiation irrespective of its composition and of the nature of the incident radiation?
| That it must be radiating is clear from the fact that it is absorbing energy from the environment it is in thermal equilibrium with while not heating up.
So the question becomes why this is independent of material type and why it had that particular distribution.
It is important to remember that a black body is an idealization and is only an approximation of the real world. In this sense it is not actually materially independent. But it remains a good approximation. The key word here is black. To be a good approximation the material needs to be able to absorb any wavelength of light.
For a material to absorb light it needs to interact with light in some way. In normal matter this is caused by electrons in the material being able to absorb a wide range of wavelengths into one of many internal degrees of freedom. The electron is then excited and can step down to ground.
Thermodynamics is reversible, so if an object can absorb any wavelength it must be possible for the electrons in that object to emit that wavelength.
But why is it this distribution? This can be obtained from the equipartition theorem, which states that given many possible degrees of freedom the energy is spent equally on each degree of freedom. There are only finitely many possible high-energy wavelengths because the shortest possible is a Planck length. The peak wavelength changes with temperature because energy depends on the square of the velocity of the electrons.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/720504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Differential charge existing We define current by $I=\frac{\mathrm{d}q}{\mathrm{d}t}$. Here, $\mathrm{d}q$ is the infinitesimal element of charge. But again,we know that charge is quantised meaning there is a finite value to the smallest amount of charge which is $e$. Since $\mathrm{d}q$ is infinitely small, $\mathrm{d}q<e$. Then how can $\mathrm{d}q$ charge even exist?
| The same holds true for water. I presume you don't have a problem using $dV/dt$ for the flow of a volume $V$ of water. Yet we know that water is ultimately discrete molecules...
The point is: the discrete nature of the water molecule or the electric charge does not manifest itself much in everyday life so it's much more convenient to think of macroscopic amounts of water or charge as being continuous rather than discrete quantities.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/720639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 1
} |
How can you calculate how long life will last with a regular run in terms of special relativity? Imagine John runs 10 km every day at a speed of 12 km/h towards the east along the equator. If he had not run, he would have died at 70 from the point of view of a motionless observer. How long will John live from the point of view of a stationary observer, given this regular run? Is there a formula for solving such problems?
Let's neglect the improvement of various biological processes in the body with regular running and another kinds of sports. Let's also neglect John's movements, with the exception of such a run.
| The scenario you describe has thousands of potential variables that might play a role in the outcome, such as the weather each day, the clothes John wears each day, the route he follows on each run. However, if you ignore those, and you assume that John does exactly the same run every day, going 5 km directly to the East then returning, while the stationary observer never moves, then you have a low-speed instance of the 'twin paradox', and you can indeed calculate the reduction in John's ageing that arises as a consequence of his motion. You can find the relevant formula by googling the 'twin paradox' (I am too tired to type it). My guess is that the effect of the movement you describe will be less than a trillionth of a second.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/720798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can plasmas be black bodies? I have recently heard the claim that the Sun cannot be composed of plasma because plasma cannot be a black body.
I am an uneducated layman, I've seen a lot of people (laymen) deviate from accepted scientific consensus. I am skeptical and I don't have enough knowledge about physics to argue it.
| The argument is silly if the claim is that plasmas cannot appear anything like blackbodies, since there are observable examples like the Sun.
To be a blackbody, a volume of plasma needs to come into equilibrium at a reasonably uniform temperature and to be thick enough that it will absorb all radiation incident upon it at all wavelengths.
From your comments, it appears that the source you cite disputes that plasmas are capable of producing a continuous spectrum, or equally from absorbing at all wavelengths. This is clear nonsense that can easily be demonstrated in a lab; there are many plasma processes involving free electrons that can emit or absorb a continuous spectrum. Examples include thermal bremsstrahlung, Compton scattering and photoelectric recombination.
Near the solar "surface" the dominant process is the formation of H$^{-}$ ions, created by H-atoms capturing free electrons ionised from alkali atoms. This emits a recombination spectrum across the visible region and its inverse process, the photoionisation of H$^{-}$ ions, provides the continuous absorption that makes a few 100 km thickness of the plasma effectively opaque.
The reason the Sun isn't a perfect blackbody is not because it lacks the basic mechanisms to absorb at all wavelengths, but because it is not isothermal on the length scale at which it becomes opaque to radiation. A better example of an almost-perfect blackbody is the cosmic microwave background, emitted by the cosmic plasma when it was at temperatures of 3000 K. Here, the temperature was almost uniform across the universe and the plasma was opaque to its own radiation.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/720940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Why does a flapping rudder produce net thrust if one half-stroke produces thrust and the second half-stroke drag? In a small sailing boat like an Optimist there is a well-known technique for when there is no wind: rudder pumping, which pushes the boat forward. You just need to push and pull the rudder stick from left to right with a fast movement.
The rudder works completely under the hull, so there is no pressure interaction between the stern and the rudder.
The forward half-stroke is when the rudder rotates from the centerline to the left or right
(from 2 to 1 or from 2 to 3).
Why does a stiff rudder (not flexible like flippers) produce net thrust if the forward half-stroke produces drag?
(Or maybe the forward half-stroke produces thrust as well? I don't think so.)
Please explain your answer with pressures at the rudder sides for two conditions:
*
*boat speed zero
*boat is moving
Avoid Newton's third law.
| I guess that during the forward thrust portion of the stroke the skipper pushes harder and faster creating more turbulence and drag and thus more thrust. During the reverse thrust part they slow down for more laminar flow and thus less drag and less thrust. So over one cycle the net impulse is forward.
But I also guess that the actual situation is more complex and involves more fluid dynamics.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/721517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Does the force between two magnetic poles ever reach zero? If we hold two like magnetic poles together and start to move them apart, would the repelling force reach absolute zero at a certain point?
In that scenario, as a layman, I think that there is something paradoxical :(
We can never reach absolute ZERO in Physics. Theoretically, it will always be bigger than zero... it just gets smaller and smaller... ad infinitum. And that reminds me of Zeno paradox.
| Like gravity or electrostatic attraction, magnetism reduces with distance. However, while the first 2 have an inverse square law (the force diminishes with the square of the distance), the magnetic force diminishes with the 4th power of the distance, or $f\propto{r^{-4}}$. Hence it reduces much faster than electric or gravitational forces.
However, even if magnetism reduces very quickly, we still see that, no matter how large $r$ gets, the resulting force never reaches $0$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/721826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How do quantum probabilities transform under Lorentz transformations? I think I get how scattering probabilities transform under Lorentz transforms. Once the interaction phase is over, the final probabilities become time independent. Hence, every observer could describe the final state using the same probabilities.
But I don't understand how time-dependent probabilities would transform under a change of frame. Suppose there's a quantum system in a box whose probabilistic state at time $t$ is described by some wavefunction/wavefunctional $\psi (t)$. How would a moving observer describe the probabilistic state of the same system? I think the concept of "probability at a time" gets screwed up because of different planes of simultaneties for the two observers.
| There is no universal answer here. Transformation formulas depend on the way you describe (enumerate) system states: it can be done in an invariant or a non-invariant way, consistent with the system symmetry or not. So the only answer to your question is: they transform somehow, as some representation of the Lorentz group.
ADDENDUM
In the general case we have some Hilbert space $\mathcal H$. We can imagine a time-dependent state as a moving point in the $\mathcal T \times \mathcal H$ fiber space, where $\mathcal T$ is the time axis. To put this theory into the special relativity context, some Lorentz group representation $L: \mathcal T \times \mathcal H \to \mathcal T \times \mathcal H$ should be defined.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/722118",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
A nuclear fusion generating cart In an unrealistic thought experiment, suppose I had a $100$ meter track with a cart ontop that had a "pocket" on the front of the cart. Suppose further that this track and cart were in a room of hydrogen gas at 1atm.
How quickly would I need to accelerate the cart down the track so that by the end the compressed and heated hydrogen gas in the front "pocket" of the cart has fused together (i.e. the cart has caused a nuclear fusion reaction)?
|
How quickly would I need to accelerate the cart down the track so that
by the end the compressed and heated hydrogen gas in the front
"pocket" of the cart has fused together (i.e. the cart has caused a
nuclear fusion reaction)?
I believe no amount of speed will suffice.
The problem is that the compression takes place at the speed of sound in the substance, which in this case is hydrogen gas, so about four times the speed of sound in air. As the cart moves the gas will be compressed and heat up, but this will cause it to radiate the energy back out. That radiation loss will occur much faster than you can cram more hydrogen onto the front. You might reach thousands of degrees, but not the ~100 million you need.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/722385",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How can I know who is accelerating? – Inertial reference frames and relative motion Suppose I (observer $I$) am standing somewhere in space. I see a region in which my friend ($F$) is accelerating in some direction $\mathrm{\mathbf{\hat{u}}}$. Suppose I see everything in $F$ accelerating in the same direction.
My question is: how can I know if I am an inertial frame of reference or if they are (in which case I am accelerating in the $-\mathrm{\mathbf{\hat{u}}}$ direction)?
|
how can I know if I am an inertial frame of reference or if he is (in which case I am accelerating in the $-\mathrm{\mathbf{\hat{u}}}$ direction)?
Use an accelerometer. If your accelerometer reads 0 then you know that you are inertial. If your accelerometer reads something other than 0 then you know that you are non-inertial. This is irrespective of anything else, i.e. it doesn't matter what you observe happening elsewhere.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/722522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 2
} |
Shape of fastest spinning rod A one-meter steel rod of variable thickness is attached at one end to a spinning hub. The cross-sectional area of the rod is a function $f(x)$ of the distance $x$ in meters from the hub, x ranging from 0 to 1. My question is: how can I choose the function $f(x)$ to maximize the speed at which the rod can spin without flying apart?
Additional constraints: the rod has a minimum cross-section of 1 cm$^2$ everywhere, and the rod weighs 10 kg. Density of steel = $\rho$ = 8 g/cm$^3$, and the ultimate tensile strength is $F_{tu}$ = 800 MPa.
What I have: assume that at each distance c from the hub, the rod's cross-section at that distance has just enough tensile strength to support the rest of the rod. By setting $F_{tu} f(c)$ equal to the sum of centripetal forces needed for the rest of the rod, with $\omega$ angular velocity, I get
$$F_{tu} f(c) = \int_c^1 \rho \cdot x \cdot f(x) \cdot \omega^2 dx$$
But I do not know how to solve for f.
| Start with this equation:
$$F_{tu} f(c) = \int_c^1 \rho \cdot x \cdot f(x) \cdot \omega^2\, dx$$
hence, differentiating with respect to $c$ (and then renaming $c \to x$),
$$F_{tu} \frac{df(x)}{dx}=-\rho \cdot x \cdot f(x) \cdot \omega^2$$
with $~f(0)=f_0~$ you obtain
$$f(x)=f_0\,e^{-\frac{\rho\omega^2\,x^2}{2\,F_{tu}}}$$
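As a quick sanity check, one can verify numerically that this profile satisfies the local stress balance; the spin rate $\omega$, hub cross-section $f_0$, and step size below are assumed values used only for the check.
```python
# Verify F_tu * f'(x) = -rho * omega^2 * x * f(x) for f(x) = f0*exp(-rho*omega^2*x^2/(2*F_tu)).
import math

rho = 8000.0     # kg/m^3 (steel)
F_tu = 800e6     # Pa
omega = 1000.0   # rad/s, assumed spin rate for the check
f0 = 1e-3        # m^2, assumed cross-section at the hub

def f(x):
    return f0 * math.exp(-rho * omega**2 * x**2 / (2.0 * F_tu))

h = 1e-6
for x in (0.2, 0.5, 0.9):
    lhs = F_tu * (f(x + h) - f(x - h)) / (2.0 * h)   # F_tu * df/dx (central difference)
    rhs = -rho * omega**2 * x * f(x)
    print(x, lhs / rhs)   # ratios ~ 1 confirm the profile solves the ODE
```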
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/722862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How do physicists know that some of a beta ray/particle's 'missing' energy isn't lost to interference with the electron cloud surrounding the atom? Enrico Fermi and Wolfgang Pauli ultimately concluded that beta decay resulted in an electron and an electron antineutrino leaving a nucleus... BUT...
How does the electron leaving a neutron punch its way through the electron cloud surrounding a large nucleus?
How do we know that a slow-moving beta electron didn't just run through a lot of extra electrons surrounding the atom?...
| Beta particles have a typical kinetic energy of about half a million electron volts. This is plenty enough to pass through the electron cloud surrounding the nucleus and completely escape from it.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/723046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Does the intermolecular forces change during phase transition? When water is heated but not yet boiling, I understand that the intermolecular attraction does not change, but the molecules vibrate more.
But when water boils to gas, does the forces of attraction between the molecules change, or are the intermolecular forces simply broken?
| Intermolecular forces are never broken. What can be broken are bonds, although one should add that the precise concept of a bond is not straightforward (there are no hooks joining molecules).
From a fundamental point of view, all the interactions relevant in typical condensed matter systems are basically electrostatic. Different kinds of bonds (covalent, ionic, metallic, van der Waals, etc.) are just other names given to extreme cases. They continuously modify one into another as a function of distances and environmental conditions. An accurate solution to the electronic problem of a set of Hydrogen and Oxygen atoms for each nuclear position would be enough to provide a good description of the molecular interactions within the so-called Born-Oppenheimer approximation.
People still use classical model potentials for many reasons, including the computational cost of accurate ab-initio calculations. There are many of them available for water. I just cite TIP4P/2005 or SPC/E as a couple of popular choices good enough to be used for many computer simulations of different properties of water and aqueous systems. At the level of such model potentials, it may be necessary to modify them as a function of the thermodynamic phase (although it is not always the case).
However, it should be clear that this is just an effect of an approximate description of the interactions. It is not a conceptual necessity.
The problem of accurately determining the liquid-vapor coexistence properties may require some expressly modified adaptation, as seen in a recent paper on this issue.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/723161",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Deriving wave equation of string without approximation When deriving the equation for a standing wave of a string, we often approximate that the tension at all points in the wave is constant. but I want to derive the equation without the approximation. I tried to derive it with lagrangian as below:
*
*kinetic energy of string with unit length is $\frac{1}{2}λf_t^2$, where $λ$ is the linear density of string, $f$ is the wave function.
*potential energy of string with unit length is $mg(\frac{dl}{dx}-1)+\frac{1}{2}k(\frac{dl}{dx}-1)^2$, where $dl$ is the small length of stretched string, $m$ is mass of weight, $k$ is constant according to Hooke's law. The first term is the energy generated by the string being stretched by the weight, and The second term is the energy generated by stretching as the shape of the string changes.
3.lagrangian density $L=T-V=\frac{1}{2}λf_t^2-(mg(\frac{dl}{dx}-1)+\frac{1}{2}k(\frac{dl}{dx}-1)^2)$, and $\frac{dl}{dx}=\sqrt{f_x^2+1}$.
4. By the Euler-Lagrange equation $\frac{\partial}{\partial t}\frac{\partial L}{\partial f_t}+\frac{\partial}{\partial x}\frac{\partial L}{\partial f_x}=\frac{\partial L}{\partial f}$, I get the equation $\lambda\frac{\partial^2f}{\partial t^2}-\frac{\partial}{\partial x}\left(\frac{mgf_x}{\sqrt{1+f_x^2}}\right)-k\left(f_x^2-\frac{f_x}{\sqrt{1+f_x^2}}\right)=0$, or $\lambda\frac{\partial^2f}{\partial t^2}-mg\frac{\partial^2f}{\partial x^2}-\frac{3}{2}k\frac{\partial^2f}{\partial x^2}f_x^2=0$ with the approximation $(1+x)^n=1+nx$ ($x<<1$).
My question is, is this correct? If so, how can I solve this equation?
| You want to derive a non linear 1 D wave equation but still assume the motion to be purely transverse: $ \overrightarrow{v}(x,t) = \frac{\partial f}{\partial t} \overrightarrow{ e_{y} } $. If the tension varies along the string you must include the horizontal component of the velocity: T(x) and T(x+dx) do not cancel out anymore in the x direction.
Your potential energy $ PE=mg \big(\frac{dl}{dx}-1\big)+...$ is false: it has the dimensions of a force (mg).
As a matter of fact, you want to use Lagrangian mechanics to derive your equation, but a string has an infinite, continuous number of degrees of freedom (parametrized by x). You must introduce a potential and kinetic energy density.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/723279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
By the Jordan-Wigner transform, we can transform the spin-$1/2$ model into fermions; then how do we choose the right Hamiltonian so that we can solve the model? I know that by using the Jordan-Wigner transform (JWT), we can transform spin-$1/2$ systems into fermions. My problem is, for example, after the JWT, we have a Hamiltonian of the form
$$\epsilon\left(c_{1}^{\dagger} c_{1}+c_{2}^{\dagger} c_{2}\right)+\lambda\left(c_{1}^{\dagger} c_{2}^{\dagger}+c_{2} c_{1}\right)\tag{1}$$ with $\lambda$ and $\epsilon\in \mathbb R$.
If I directly write it as quadratic form $$\left( \begin{matrix}
c_{1}^{\dagger}& c_2& c_{2}^{\dagger}& c_1\\
\end{matrix} \right) \left( \begin{matrix}
\epsilon& \lambda& 0& 0\\
\lambda& 0& 0& 0\\
0& 0& \epsilon& 0\\
0& 0& 0& 0\\
\end{matrix} \right) \left( \begin{array}{c}
c_1\\
c_{2}^{\dagger}\\
c_2\\
c_{1}^{\dagger}\\
\end{array} \right) $$
the eigenvalues of the middle hermitian matrix will be
$$0,\epsilon ,\frac{\epsilon \pm \sqrt{\epsilon ^2+4\lambda ^2}}{2}$$
But if I change eq.(1) into another form with canonical commutation relation for fermions as follows: $$
\begin{align}
&\epsilon \left( c_{1}^{\dagger}c_1+c_{2}^{\dagger}c_2 \right) +\lambda \left( c_{1}^{\dagger}c_{2}^{\dagger}+c_2c_1 \right)
\\
&=\epsilon \left( c_{1}^{\dagger}c_1+c_{2}^{\dagger}c_2 \right) +\frac{\lambda}{2}\left( c_{1}^{\dagger}c_{2}^{\dagger}+c_2c_1 \right) -\frac{\lambda}{2}\left( c_{2}^{\dagger}c_{1}^{\dagger}+c_1c_2 \right)
\\
&=\frac{\epsilon}{2}\left( c_{1}^{\dagger}c_1+c_{2}^{\dagger}c_2 \right) +\frac{\epsilon}{2}\left[ \left( 1-c_1c_{1}^{\dagger} \right) +\left( 1-c_2c_{2}^{\dagger} \right) \right] +\frac{\lambda}{2}\left( c_{1}^{\dagger}c_{2}^{\dagger}+c_2c_1 \right) -\frac{\lambda}{2}\left( c_{2}^{\dagger}c_{1}^{\dagger}+c_1c_2 \right)
\\
&=\frac{\epsilon}{2}\left( c_{1}^{\dagger}c_1+c_{2}^{\dagger}c_2 \right) -\frac{\epsilon}{2}\left[ c_1c_{1}^{\dagger}+c_2c_{2}^{\dagger} \right] +\frac{\lambda}{2}\left( c_{1}^{\dagger}c_{2}^{\dagger}+c_2c_1 \right) -\frac{\lambda}{2}\left( c_{2}^{\dagger}c_{1}^{\dagger}+c_1c_2 \right) +\epsilon
\end{align}
$$
The same reason, we can write it as quadratic form
$$\mathcal{H}=\frac{1}{2}\left(\begin{array}{cccc}
c_{1}^{\dagger} & c_{2} & c_{2}^{\dagger} & c_{1}
\end{array}\right)\left(\begin{array}{cccc}
\epsilon & \lambda & 0 & 0 \\
\lambda & -\epsilon & 0 & 0 \\
0 & 0 & \epsilon & -\lambda \\
0 & 0 & -\lambda & -\epsilon
\end{array}\right)\left(\begin{array}{c}
c_{1} \\
c_{2}^{\dagger} \\
c_{2} \\
c_{1}^{\dagger}
\end{array}\right)+\epsilon$$
But this time, eigenvalues become $$\pm \sqrt{\epsilon ^2+\lambda ^2}$$
I think this might be the reason: the unitary used to diagonalize the middle hermitian matrix cannot preserve the canonical commutation relations for fermions.
So my problem is: after the JWT, is there some routine by which we can write down the right hermitian matrix, so that the eigenvalues of that hermitian matrix are the answer we want?
| The example you find has a mistake. When you diagonalize the matrix $\mathcal{H}$, you are applying some unitary transformation $U$, which is a $4 \times 4$ matrix, to the vector $(c_1,c^{\dagger}_2,c_2,c^{\dagger}_1)^T$. Let the transformed vector be $(\tilde{c}_1,\tilde{c}^{\dagger}_2,\tilde{c}_2,\tilde{c}^{\dagger}_1)^T$. We must have $\tilde{c}^{\dagger}_1=(\tilde{c}_1)^{\dagger}$ and $\tilde{c}^{\dagger}_2=(\tilde{c}_2)^{\dagger}$, but a $4 \times 4$ matrix $U$ gives extra degrees of freedom to the transformation, and the relation generally no longer holds. Therefore, you will get different eigenvalues for the same Hamiltonian as invalid results.
In your previous edition of the post, you mentioned a link about Jordan-Wigner transformation and its use in $1$-dimensional quantum Ising model. The Hamiltonian after Jordan-Wigner transformation is
$$H=\sum_{k}{\big(-c^{\dagger}_kc_k(2\cos{k})-(c^{\dagger}_kc^{\dagger}_{-k}e^{ik}+c_{-k}c_ke^{ik})+2h_zc^{\dagger}_kc_k\big)}$$
You argue that we can write the Hamiltonian as
$$H=\sum_{k}
\begin{pmatrix} c^{\dagger}_k & c_{-k} \end{pmatrix}
\begin{pmatrix}
-2\cos{k}+2h_z & -e^{ik} \\
-e^{-ik} & 0
\end{pmatrix}
\begin{pmatrix} c_k \\ c^{\dagger}_{-k} \end{pmatrix}$$
This is true, but diagonalizing the $2 \times 2$ matrix in the above equation does not give you superposition of $c_k$ and $c^{\dagger}_{-k}$ which can diagonalize the Hamiltonian. There is the other $-k$ term as contribution in the sum, meaning
\begin{align}
H & = \sum_{k>0}
\begin{pmatrix} c^{\dagger}_k & c_{-k} \end{pmatrix}
\begin{pmatrix}
-2\cos{k}+2h_z & -e^{ik} \\
-e^{-ik} & 0
\end{pmatrix}
\begin{pmatrix} c_k \\ c^{\dagger}_{-k} \end{pmatrix} \\
& + \sum_{-k,k \geq 0}
\begin{pmatrix} c^{\dagger}_{-k} & c_k \end{pmatrix}
\begin{pmatrix}
-2\cos{k}+2h_z & -e^{-ik} \\
-e^{ik} & 0
\end{pmatrix}
\begin{pmatrix} c_{-k} \\ c^{\dagger}_k \end{pmatrix}
\end{align}
Therefore, you can see the diagonalization of $\begin{pmatrix}
-2\cos{k}+2h_z & -e^{ik} \\
-e^{-ik} & 0
\end{pmatrix}$ only diagonalizes the first part of the Hamiltonian. As written in the link, the correct way to do this is to sum up the two parts, and have
$$H=\sum_{k}
\begin{pmatrix} c^{\dagger}_k & c_{-k} \end{pmatrix}
\begin{pmatrix}
-\cos{k}+h_z & -i\sin{k} \\
i\sin{k} & \cos{k}-h_z
\end{pmatrix}
\begin{pmatrix} c_k \\ c^{\dagger}_{-k} \end{pmatrix}$$
The diagonalization of the matrix gives you the superposition of $c_k$ and $c^{\dagger}_{-k}$ which diagonalizes the Hamiltonian.
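As a cross-check of the first example in the question, one can compare the quasiparticle spectrum $\pm\sqrt{\epsilon^2+\lambda^2}$ against a brute-force diagonalization in Fock space; the basis ordering, the sample values of $\epsilon$ and $\lambda$, and the sign convention for the pairing matrix element below are assumptions made only for illustration.
```python
# Brute-force check: H = eps*(n1 + n2) + lam*(c1† c2† + c2 c1) in the 4-state Fock basis.
import numpy as np

eps, lam = 1.0, 0.7   # illustrative values

# H only mixes |00> and |11>; |01> and |10> are eigenstates with energy eps.
# (The sign of the off-diagonal element depends on the ordering convention,
#  but it does not affect the eigenvalues.)
block = np.array([[0.0,  lam],
                  [lam, 2.0 * eps]])
exact = np.sort(np.concatenate([np.linalg.eigvalsh(block), [eps, eps]]))

# Bogoliubov prediction: quasiparticle energy E = sqrt(eps^2 + lam^2),
# ground-state energy eps - E, and spectrum = ground + {0, E, E, 2E}.
E = np.hypot(eps, lam)
bogoliubov = np.sort(eps - E + np.array([0.0, E, E, 2.0 * E]))

print(exact)        # [-0.2207, 1.0, 1.0, 2.2207]
print(bogoliubov)   # identical: the properly structured quadratic form reproduces the exact spectrum
```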
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/723452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Emf induced by a solenoid Could someone please clear my confusion regarding this concept and point out what is wrong with my argument:
Say we have a simple circuit as shown in the image:
Why is the potential difference between points b and a not equal to $L\,di/dt$ but is instead $-L\,di/dt$?
Since the magnetic flux through the solenoid is increasing, the current induced between a and b opposes the direction of the original current. So point b must be at a higher potential than point a and this implies Vb-Va=Ldi/dt .
| Your understanding of the two currents is incorrect. There is only one current. The way the circuit is drawn the current flows clockwise. Current always enters the positive end of passive elements when they are absorbing or dissipating energy.
The voltage is placed across the RL combination such that $$V_{a}>V_{b}>V_{c}$$ So $V_{ab}$ is positive, and then $V_{ba}$ is negative. $$V_{ab}=L\frac {di}{dt}$$ $$V_{ba}=-L\frac {di}{dt}$$
It is a mistake to think the current is induced here. The current is caused by the electric field placed by the voltage source. As the current increases the magnetic flux increases. So the magnetic flux is caused by the current, not the other way around. An opposing voltage is created by the changing flux (Faraday and Lenz): $$V_{ab} =-\frac {d\phi}{dt}=-\frac {d\phi}{di}\frac {di}{dt}$$
Defining $$L=-\frac {d\phi}{di}$$
yields$$V_{ab}=L\frac {di}{dt}$$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/723697",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to calculate the energy of a spring-mass system considering harmonic oscillation of the normal mode? For a spring-mass system, we know that the potential and kinetic energy are
$$E_p = \frac{1}{2}ku^2 \text{ and } E_k = \frac{1}{2}m\dot{u}^2.$$
where $k$, $m$ and $u$ are the spring constant, mass and the displacement of the mass. If we consider harmonic motion of the normal mode, we have that
$$ u(t) = \hat{u}\, \mathrm{e}^{i\omega t},$$
where $\omega = \sqrt{k/m}$ is the natural frequency. If we substitute this equation into the energy formulas above, we get
$$ E_p = \frac{1}{2}k\hat{u}^2\mathrm{e}^{2i\omega t} \text{ and } E_k = -\frac{1}{2}m\omega^2\hat{u}^2\mathrm{e}^{2i\omega t}. $$
Substituting the natural frequency yields
$$ E_p = \frac{1}{2}k\hat{u}^2\mathrm{e}^{2i\omega t} \text{ and } E_k = -\frac{1}{2}k\hat{u}^2\mathrm{e}^{2i\omega t}. $$
However, this is clearly wrong since the total energy $E=E_p+E_k$ would be zero. Besides, we know that $E$ should be constant over time and equal the maximum kinetic or potential energy ($E =\frac{1}{2}k\hat{u}^2$). What am I doing wrong? How can one calculate the energy of a spring-mass system considering harmonic oscillation of the normal mode?
I don't have a solid background in physics, so, please, be as clear as possible.
Motivation: the question's relevance lies in being able to use Euler's representation to calculate energies. Although the stated problem is simple and could be solved using trigonometric functions, Euler's representation is much more convenient when dealing with complex problems -- such as multi-degree-of-freedom systems -- and the answer to the question could be extended to such problems.
| The position of a spring-mass system is real valued function of time, not complex. When you write down the normal mode, you need to specify that you are taking the real part of the complex expression:
$$
u(t) = \text{Re}\left(\hat u e^{i\omega t} \right) = A \cos(\omega t + \delta).
$$
Here, $A = |\hat u|$ is the amplitude of the oscillation and $\delta$ is an unimportant phase shift.
The time derivative of this expression gives
$$
\dot u(t) = \text{Re}\left(i\omega \hat u e^{i\omega t} \right) = - \omega A \sin(\omega t + \delta).
$$
If we compute the potential and kinetic energy using these two expressions we get
$$
E_p = {1 \over 2} k A^2 \cos^2(\omega t+\delta),
~~~~~
E_k = {1 \over 2} m \omega^2 A^2 \sin^2(\omega t+\delta).
$$
Using $\omega^2 = k/m$, the total energy is then
$$
E = E_p + E_k = {1 \over 2} k A^2 (\cos^2(\omega t+\delta)+\sin^2(\omega t+\delta)) = {1 \over 2} k A^2,
$$
which is constant in time, as we expect.
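A short numerical check of this; the values of $k$, $m$, and the complex amplitude $\hat u$ below are arbitrary assumed numbers.
```python
# Take the real part of the complex normal mode and confirm E_p + E_k is constant.
import numpy as np

k, m = 4.0, 1.0                    # assumed spring constant and mass
omega = np.sqrt(k / m)
u_hat = 0.3 * np.exp(1j * 0.5)     # assumed complex amplitude: A = 0.3, phase 0.5

t = np.linspace(0.0, 10.0, 5)
u    = np.real(u_hat * np.exp(1j * omega * t))
udot = np.real(1j * omega * u_hat * np.exp(1j * omega * t))

E = 0.5 * k * u**2 + 0.5 * m * udot**2
print(E)                            # constant array
print(0.5 * k * np.abs(u_hat)**2)   # equals 0.5*k*A^2 = 0.18
```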
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/723853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
What causes light passing through a hole to change direction? On diagrams showing light passing through a hole, the wave of light appears to change direction when it emerges from the hole.
What causes that change of direction? Is it maybe the walls of the hole imparting a pulling force, or does the sudden absence of light next to the emerging beam cause the light to spread?
Or maybe light does this all the time and we only notice when we put a wall with a hole in the way.
Please explain this to like I'm a five year old.
| Photons, including single photons, interact with single edges. The effect is more noticeable when the edge is sharp. Photons are pulled around and behind the edge, but photons also scatter away from the edge. A single slit is created with two sharp edges. Each edge is diffracting and scattering photons on their way to the detection screen. On the screen you have four single-edge patterns overlapping to create a single-slit interference pattern. See “Single Edge Certainty” at billalsept.com
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/723976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
What percentage of sunlight isn't scattered by the atmosphere? What percentage of sunlight isn't scattered by the atmosphere and instead will arrive at your eyes directly from the sun?
It's been asked here before but a proper answer hasn't been given.
I was thinking about the effects looking directly at the sun would have for someone on the ground relative to someone in space.
| It very much depends on the wavelength, the elevation of the Sun, the altitude of the observer and what pollution is in the atmosphere.
A simple example. At zenith, the extinction in the V band (about 550 nm) is about 0.12 astronomical magnitudes at a pristine observatory site, high on a mountain. This means a fraction $1-10^{-0.12/2.5}=0.105$ is scattered or absorbed.
If you observe the Sun at lower altitudes you multiply the extinction at zenith in magnitudes (roughly) by $\sec z$, where $z$ is the angle from zenith, to account for the number of air masses the light travels through. (NB a better approximation is needed as $z$ approaches 90 degrees.)
At bluer wavelengths the extinction is higher - maybe 0.3 magnitudes/airmass at 400 nm and becomes very large of course as you head towards the UV. This wavelength dependence is largely attributable to Rayleigh scattering. At redder wavelengths it is lower maybe 0.06 magnitudes/airmass.
The presence of aerosols, dust and pollutants all can increase the extinction. At sea level in a city, there could easily be 1 magnitude of extinction meaning 60% of the direct light is absorbed or scattered.
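These numbers can be packaged into a small helper; the plane-parallel $\sec z$ airmass used below is an assumption that breaks down close to the horizon, as noted above.
```python
# Fraction of direct sunlight transmitted for a zenith extinction coefficient
# k (magnitudes per airmass) and zenith angle z, using airmass ≈ sec(z).
import math

def transmitted_fraction(k_mag_per_airmass, zenith_angle_deg):
    airmass = 1.0 / math.cos(math.radians(zenith_angle_deg))
    return 10.0 ** (-k_mag_per_airmass * airmass / 2.5)

print(transmitted_fraction(0.12, 0.0))    # ~0.90 in the V band at zenith (10.5% scattered/absorbed)
print(transmitted_fraction(0.12, 60.0))   # ~0.80 at two airmasses
print(transmitted_fraction(1.00, 0.0))    # ~0.40 with 1 magnitude of extinction (city at sea level)
```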
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/724116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Brightness of bulbs in Parallel When adding bulbs in parallel, the total brightness is greater than when the bulbs are in series. But does that mean adding bulbs in parallel will increase the brightness of the other bulbs?
My intuition is as follows: When adding a bulb in parallel the current doubles, but that current splits between the two branches such that both bulbs receive the same current and the same voltage, so brightness doesn't increase, but it is still brighter relative to adding bulbs in series. Is this correct?
| You just get the brightness of two or more bulbs; every single bulb keeps its brightness if they are in parallel.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/724239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |