Q | A | meta
---|---|---
Does the track do work? (second try) When a ball rolls without slipping down a track, it seems like static friction from the track does rotational work on the ball. As explained in this post:
Is work done in rolling friction?,
this work is exactly the same as the work done by gravity around the pivot point. But shouldn't the track also do linear (i.e. translational, not rotational) work on the ball? After all, the ball is moving.
(The fact that the pivot point is not moving does not seem to be a sufficient explanation because $F=ma$ holds and therefore so should the work-energy theorem for the displacement of the ball.)
| Yes, the track does work. You can verify that the final velocity of the ball is less than $\sqrt{2gh}$. This is because the friction force did negative work.
This is exactly what the work-energy theorem says: the sum of the forces on an object times the displacement of the center of mass of the object is equal to the change in $\frac12 mv^2$, where $v$ is the velocity of the center of mass of the object. This is precisely what is happening here: the static friction force acts against the motion of the ball and reduces its final linear kinetic energy.
As for conservation of energy, the work done by the friction force is exactly equal to the final rotational kinetic energy of the ball. The total change in kinetic energy (linear plus rotational) is indeed equal to $mgh$. So everyone goes home happy :).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/676370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
What happens to Kinetic Energy in Perfectly Inelastic Collision? I’m going over a chapter on linear momentum in my physics course right now and am somewhat puzzled by what happens to some of the kinetic energy that is lost in a perfectly inelastic collision.
Imagine a world without sound, heat, or any non-mechanical forms of energy. Now imagine that there are two perfectly square blocks, M1 and M2, in empty space that each have a mass of 1 kilogram. M1 flies toward M2 in a perfectly straight line with a velocity of 1 meter per second. M1 sticks to M2, creating M3, and the new 2 kilogram rectangular block moves with a velocity of 1/2 meter per second. In the collision, 1/4 J of energy was lost.
What happened to that energy, given that no sound or heat was emitted? Does it require a certain amount of energy to form a single 2 kilogram object out of two 1 kilogram objects?
EDIT:
I was able to figure things out thanks in great part to the posts below. In case this bothers someone else in the future, the way that I think about it is, it requires energy to slow down M1 and speed up M2 to the same velocity (you can imagine M3 as two separate particles). Fundamentally, the energy is lost in that speeding up/slowing down process in a world without friction, heat, etc.
| Your question contains a contradiction. You imagine a world without any forms of non-mechanical energy; in such a world, inelastic collisions could not exist. The whole point of an inelastic collision is that mechanical KE is lost to non-mechanical forms of energy.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/676658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
} |
Time averages of complex quantities If an electric field $E$ oscillates as $E_0\sin(ωt)$ then the average value of $E^2$ over one period of oscillation will be
$$E_0^2\left< \sin^2(ωt)\right>=E_0^2/2$$ since the average value of $\sin^2(ωt)$ is well known to be $1/2$.
However if we write $E$ using complex numbers as $E_0e^{iωt}$ and then take real parts, as is often the case, then we have
$$\left< E^2 \right> = E_0^2\left< (e^{iωt})^2 \right> = E_0^2\left< e^{2iωt} \right> $$
$$=E_0^2 \left< \cos(2ωt) + i\sin(2ωt)\right>=0$$ as the average value of an unsquared $\sin$ or $\cos$ is zero over a single period of oscillation, and the real and imaginary parts should be independent of one another.
What's gone wrong here? Is there a fault in my above assumption?
| The real part of the product of two complex quantities is not the product of their real parts, so one must go back to real notation before computing the average.
Another solution is to use the trick:
$$\langle x(t)y(t)\rangle=\frac{1}{2}\Re\{\underline{x}{\underline{y}}^*\},$$
where the superscript $*$ denotes the complex conjugate.
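A quick numerical check of both points, using an assumed amplitude $E_0=3$ and a period $T=1$:

```python
import cmath
import math

w = 2 * math.pi          # ω chosen so one period is T = 1
N = 100_000
E0 = 3.0
ts = [k / N for k in range(N)]                    # one full period

Ez = [E0 * cmath.exp(1j * w * t) for t in ts]     # complex representation

wrong = sum(z * z for z in Ez).real / N           # <E^2> of the complex field: ~ 0
right = sum(z.real ** 2 for z in Ez) / N          # <(Re E)^2>: E0^2 / 2
trick = 0.5 * sum(z * z.conjugate() for z in Ez).real / N  # (1/2) Re <x y*>

print(abs(wrong) < 1e-9)                          # the naive complex square averages to 0
print(math.isclose(right, E0**2 / 2))             # real-first gives E0^2 / 2
print(math.isclose(trick, E0**2 / 2))             # the trick agrees
```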
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/676827",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Wave Equation Simulation I've been recently trying to simulate a wave equation in P5.js.
My approach is to plot a bunch of points with $(x, y)$ coordinates. I've used the wave equation to obtain the movement of those particles.
This is my numerical integration method:
var pos = this.masses[i].pos          // current mass
var nextPos = this.masses[i + 1].pos  // right neighbour
var lastPos = this.masses[i - 1].pos  // left neighbour
var vel = this.masses[i].vel, acc = this.masses[i].acc // for simplicity
acc.x = 0
acc.y = c * c * ((nextPos.y - 2*pos.y + lastPos.y)) / dxdx // discrete c^2 * u_xx
vel.x += acc.x * dt // Euler step: update velocity from acceleration...
vel.y += acc.y * dt
pos.x += vel.x * dt // ...then position from the new velocity
pos.y += vel.y * dt
Edit: variables definition.
dxdx = 1e-5 // Δx squared (a small step)
dt = deltaTime // value from P5js (calculates time between frames)
c = 10
My acceleration is defined like this:
$$
\frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2}
$$
So I convert my equation into a discrete equation like this:
$$
acc = c^2 \frac{u(x + \Delta x, t) - 2 u(x, t) + u(x - \Delta x, t)}{\Delta x^2}
$$
As shown here in my code:
acc.y = c * c * ((nextPos.y - 2*pos.y + lastPos.y)) / dxdx
Then I simply integrate using Euler's method.
But my result is quite strange:
As you can see, the wave grows, when the expected result was something like this:
What am I doing wrong? Could someone help me? Is there something I'm missing?
Thanks :)
| I am not at all an expert in numerical methods, but the stability (CFL) condition for this kind of explicit time stepping is $c<∆x/∆t$.
This condition does not seem to be satisfied in your case, with $c=10$.
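To see this condition at work, here is a minimal Python sketch (not the asker's P5.js code; the grid size, step count, and seed perturbation are assumed for illustration) using the same velocity-then-position Euler update:

```python
import math

def simulate(c, dx, dt, steps=40, n=50):
    """Integrate u_tt = c^2 u_xx on a string with fixed ends,
    mirroring the update order of the P5.js loop in the question."""
    # half-sine initial shape plus a tiny grid-scale ripple to seed the
    # fastest mode (the one the CFL condition is really about)
    u = [math.sin(math.pi * i / (n - 1)) + 1e-3 * (-1) ** i for i in range(n)]
    v = [0.0] * n
    for _ in range(steps):
        a = [c * c * (u[i + 1] - 2 * u[i] + u[i - 1]) / (dx * dx)
             if 0 < i < n - 1 else 0.0 for i in range(n)]
        for i in range(n):
            v[i] += a[i] * dt   # velocity first, then position,
            u[i] += v[i] * dt   # exactly as in the question's code
    return max(abs(x) for x in u)

print(simulate(c=10.0, dx=1.0, dt=0.05))  # c*dt/dx = 0.5: stays bounded near 1
print(simulate(c=10.0, dx=1.0, dt=0.2))   # c*dt/dx = 2:   blows up
```

With the question's frame-time `dt` (tens of milliseconds) and a very small `dx`, $c\,\Delta t/\Delta x$ is far above 1, which is consistent with the growing wave in the screenshots.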
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/676927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Why are there antinodes at both ends of the tube? I learned about stationary/standing waves the other day. For stationary waves in open tubes, the textbook says both ends must have an antinode. Can anyone tell me why? (shown in the figure)
And also, when playing instruments like the guitar, what is the harmonic number on the string, i.e. how many antinodes and nodes are there on the string?
| The air molecules are free to move at the open end of a tube, so there's an antinode. At a closed end, there must be a node as the air molecules don't move there.
That's similar to a wave in a string, at the fixed ends the string can't move and there are nodes there. The first harmonic for a string (the fundamental), has two nodes at the ends and one antinode in the middle. The second harmonic has two antinodes etc...
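As a small illustration of the counting in this answer (the wave speed and length below are assumed; the formula $f_n = nv/2L$ holds both for a string fixed at both ends and for an open-open tube):

```python
v, L = 343.0, 0.5   # assumed values: wave speed (m/s) and length (m)

def string_harmonic(n):
    """nth harmonic of a string fixed at both ends: nodes at the ends."""
    return {"f": n * v / (2 * L), "nodes": n + 1, "antinodes": n}

def open_tube_harmonic(n):
    """nth harmonic of a tube open at both ends: displacement antinodes at the ends."""
    return {"f": n * v / (2 * L), "antinodes": n + 1, "nodes": n}

print(string_harmonic(1))     # fundamental: 2 nodes, 1 antinode
print(open_tube_harmonic(2))  # second harmonic: 3 antinodes, 2 nodes
```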
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/677148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How is electromagnetic induction analogous to gravitational frame dragging? This wiki says: https://en.wikipedia.org/wiki/Frame-dragging
Qualitatively, frame-dragging can be viewed as the gravitational
analog of electromagnetic induction.
I was wondering what exactly this means, and the wiki for electromagnetic induction doesn't seem to go into it. I wasn't able to Google anything that shed light on this analogy either.
What does this sentence mean exactly? How is gravitational frame dragging, analogous to electromagnetic induction?
| The easiest way to see the idea behind this analogy is to think in terms of four-vectors: we know that the electric and magnetic fields can be represented by a single object $A^{\mu}=(\phi/c, \vec{A})$, with $ \vec{B}=\vec{\nabla}\times\vec{A}\;\;,\vec{E}=-\vec{\nabla}\phi-\frac{\partial\vec{A}}{ \partial t} $
If we replace the electric field by the gravitational field ($\vec{g}=-\vec{\nabla}\Phi)$, we need to complete the 4D "representation" for this scalar field by a vector field which is the analog of the vector $\vec{A} $ that is related to the magnetic field.
The analogy and further explanation are here: https://en.wikipedia.org/wiki/Gravitoelectromagnetism
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/677290",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
What formula is this and what does it signify? (Electric Field and Potential) I probably skipped the useful part of the lecture, but while we were being taught about electric potential energy, my professor mentioned an equation which he said we will seldom use, but which is significant. It gives the potential energy per unit volume:
$$
\frac{dU}{dV}=\frac{1}{2}\varepsilon_{0}E^{2}.
$$
He then rearranged and integrated to give
$$
U=\frac{1}{2}\int\varepsilon_{0}E^{2}\, dV,
$$
where $\varepsilon_{0}$ is the permittivity of free space, $E$ is the magnitude of electric field at the point of focus, and $dV$ is a volume element of the space in focus.
He then ended the lecture by saying that this shows electric potential energy is stored in the electric field in free space. So, I want more insight into this equation: what is its significance, and, if possible, a name for either of these two equations.
|
So, I want more insight into this equation: what is its significance, and, if possible, a name for either of these two equations.
Perhaps showing you the connection between the formula and the potential energy stored in the volume containing the electric field between the plates of a parallel plate capacitor may give you some insight.
First we start with the potential energy stored in the capacitor as a function of its voltage $V$ and capacitance $C$.
$$U=\frac{1}{2}CV^2$$
Next we relate the voltage $V$ across the capacitor to the electric field $E$ and the plate separation distance $d$
$$V=Ed$$
Finally, a parallel plate capacitor's capacitance in terms of the area $A$ of the plates and the electrical permittivity $\epsilon$ of the dielectric material between the plates is given by
$$C=\frac{\epsilon A}{d}$$
Substituting the last two equations into the first
$$U=\frac{1}{2}\frac {\epsilon A E^{2}d^{2}}{d}=\frac{1}{2}\epsilon E^{2}V$$
Where $V=Ad$ is now the volume of the space containing the electric field of the capacitor.
For air or vacuum, $\epsilon=\epsilon_{o}$
Since the electric field $E$ is considered constant in a parallel plate capacitor, it would come out of the integral you gave, making the last equation identical to your second equation.
Hope this helps.
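The substitution can be checked numerically; the plate area, separation, and voltage below are assumed purely for illustration:

```python
EPS0 = 8.854e-12   # vacuum permittivity, F/m

# Assumed illustrative capacitor: 1 cm x 1 cm plates, 1 mm apart, charged to 100 V
A, d, V = 1e-4, 1e-3, 100.0

C = EPS0 * A / d           # parallel-plate capacitance, C = eps0 A / d
E = V / d                  # uniform field between the plates, V = E d

U_circuit = 0.5 * C * V**2             # (1/2) C V^2
U_field = 0.5 * EPS0 * E**2 * (A * d)  # (1/2) eps0 E^2 times the field volume

print(U_circuit, U_field)  # equal (up to rounding): the energy "lives in the field"
```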
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/677473",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Averaging over spin phase-space for a cross section In Peskin and Schroeder the Dirac equation is solved in the rest frame for solutions with positive frequency:
$$\psi(x) = u(p) e^{-ip\cdot x}$$
$$u(p_0) = \sqrt{m} \begin{pmatrix} \xi \\ \xi \end{pmatrix},$$
for any numerical two-component spinor $\xi.$ Boosting to any other frame yields the solution:
$$u(p) = \begin{pmatrix} \sqrt{p \cdot \sigma}\ \xi \\ \sqrt{p\cdot \bar{\sigma}}\ \xi \end{pmatrix},$$
where in taking the square root of a matrix, we take the positive root of each eigenvalue.
Then, they summarize:
The general solution of the Dirac equation can be written as a linear combination of plane waves. The positive frequency waves are of the form
$$\psi(x) = u(p)e^{-ip\cdot x}, \ \ \ p^2 = m^2, \ \ \ p^0 >0.$$
And there are two linearly independent solutions for $u(p),$
$$u^s (p) = \begin{pmatrix} \sqrt{p \cdot \sigma}\ \xi^s \\ \sqrt{p\cdot \bar{\sigma}}\ \xi^s \end{pmatrix}, \ \ \ s=1, 2, $$
which are normalized: $$\bar{u}^r (p) u^s (p) = 2m \delta^{rs}.$$
Next, we can consider the unpolarized cross section for $e^+e^{-} \to \mu^+ \mu^-$ to lowest order. The amplitude is given by:
$$\bar{v}^{s'} \left(p'\right) \left(-ie\gamma^{\mu}\right)u^s\left(p\right)\left(\frac{-ig_{\mu \nu}}{q^2}\right)\bar{u}^{r} \left(k\right) \left(-ie\gamma^{\nu}\right)v^{r'}\left(k'\right)$$
Then, I quote
In most experiments the electron and positron beams are unpolarized, so the measured cross section is an average over the electron and positron spins $s$ and $s'$. Muon detectors are normally blind to polarization, so the measured cross section is a sum over the muon spins $r$ and $r'.$
...
We want to compute
$$\frac{1}{2}\sum_s \frac{1}{2} \sum_{s'} \sum_r \sum_{r'}|M(s, s' \to r, r')|^2.$$
Why, in order to take the average and the sum, do we only need to sum, rather than integrate, over the spin phase space? Doesn't each incoming particle have an infinite number of spinors? Assuming it is unpolarized, the probability distribution will be uniform, but in principle it still seems like we should integrate over some $\theta$ for spinors $\xi = \begin{pmatrix} \cos \theta \\ \sin \theta \end{pmatrix}$. Instead, what has been done is to assume each incoming particle is in a definite state of "spin-up" or "spin-down" and assign a $50/50$ probability to each.
| The discrete average is actually exact, not an approximation. An unpolarized beam is described by the density matrix $\rho = \frac{1}{2}\mathbb{1}$ on the two-dimensional spin space, and a uniform average over the continuum of pure states $\xi(\theta)$ reproduces exactly that same $\rho$. Since $|M|^2$ is bilinear in $\xi$ and $\xi^\dagger$, integrating over $\theta$ gives the same result as summing over the two basis states with weight $1/2$ each.
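One can check numerically that averaging the projector $\xi(\theta)\xi(\theta)^T$ over a uniform continuum of spin directions gives the same $\tfrac12\mathbb{1}$ as the discrete 50/50 average over the two basis spinors (a stdlib-only sketch, using the question's real parametrization $\xi=(\cos\theta,\sin\theta)$):

```python
import math

N = 10_000
# Continuum average of the projector xi(theta) xi(theta)^T over uniform theta
avg = [[0.0, 0.0], [0.0, 0.0]]
for k in range(N):
    th = 2 * math.pi * k / N
    xi = (math.cos(th), math.sin(th))
    for i in range(2):
        for j in range(2):
            avg[i][j] += xi[i] * xi[j] / N

# The discrete "50/50 spin-up / spin-down" average is
# (1/2)(e1 e1^T + e2 e2^T) = I/2, and the continuum average matches it:
print(avg)  # ~ [[0.5, 0.0], [0.0, 0.5]]
```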
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/677709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Why Do We Ever Get Interference in a Quantum Eraser? There are many questions on this site about the quantum eraser, but I think mine is not quite answered by any of the other answers on the topic.
Here's the setup:
My understanding of this experiment is the following. If a stream of photons is passed through a double-slit one at a time, but then passed through the BBO crystal, the photon is marked with which-way information due to the nonlinear interaction (basically the act of destroying a photon and making two new ones represents a measurement which collapses the wavefunction). After this the entangled pairs are split; one member of the pair is sent to a detection screen and the other member of the pair is sent into this setup with mirrors and beamsplitters and finally onto some click detectors $D_1, D_2, D_3, D_4$.
Based on this description, it seems obvious that if we look at the photons at the detector screen $D_0$ whose entangled pair photons hit $D_3$ or $D_4$, we'll see no interference pattern: in these cases we have which-way information and there should be no reason for interference.
What confuses me is what $D_0$ registers when you look at the photons whose pair photon hits $D_1$ or $D_2$. I understand that, due to the setup, the two paths have been recombined in the lower portion of the experiment, so that a click in $D_1$ or $D_2$ does not tell you which slit the photon went through anymore. But the upper photons are still marked with this information, so why should we recover an interference pattern? In other words, based on my understanding, the BBO crystal should prevent us from ever seeing an interference pattern.
| At the heart of your question is the supposition that:
"basically the act of destroying a photon and making two new ones represents a measurement which collapses the wavefunction"
This is not necessarily the case. If we consider the photon process per Feynman/Dirac, we have an excited electron, photon creation and path determination (or vice versa), the original electron now de-excited, and an excited receiving electron. With an intermediary BBO crystal we would add another step with an additional excitation.
The wave function for the above processes may be one or two successive wave functions. But the fact that a pattern is seen would be evidence for a single wave function, at least for the photons at $D_1$/$D_2$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/678117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
The change of mechanical into electromagnetic waves and vice versa I know that sound is a type of mechanical wave, so the human eardrum changes mechanical energy into electrical energy (nerve impulses) so that the information may be processed by the brain.
Question: As satellites transfer info by electromagnetic waves that are also electric signals, then can we change these mechanical waves into electromagnetic waves and vice versa?
| Yes, a microphone is a device that takes mechanical vibrations from sound waves and turns them into electrical signals. A loudspeaker takes electrical signals and creates mechanical vibrations of a speaker and that causes pressure oscillations in the air that we hear as sound.
Any such device that has created sound on Earth from sound in a satellite, e.g. the International Space Station, must have done what you asked about, with the information being transmitted through space using electromagnetic waves.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/678207",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Why do we need insulation material between two walls? Consider a slab made of two walls separated by air. Why do we need insulation material between the two walls? Air's thermal conductivity is lower than that of most insulating materials, and convection cannot be an issue in the enclosed volume: hot air rises, so what? It won't go any further than the top of the cavity.
| You can think of thermal conductivity as a measure of how readily heat will flow through the material while it is stationary. The low thermal conductivity of air means that it takes a long time for heat to diffuse through an air pocket.
If the air is permitted to move, however, this intuition goes out the window. The air in contact with one wall gets warm and rises, and the resulting circulation causes it to be brought into contact with the other wall. In this way, the heat doesn't need to diffuse through the air, as it's being transported by bulk air flow.
Insulating materials such as blown fiberglass (or a wool sweater) are good insulators precisely because they trap many small pockets of air, which shuts down convection and forces the heat to flow diffusively. Once there's no convection, the low thermal conductivity of the air pockets makes the material a good insulator. You're right that the thermal conductivity of the trapping material is usually higher than the thermal conductivity of the air itself, but that's the (fairly modest) price we have to pay for killing the convection.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/678462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37",
"answer_count": 4,
"answer_id": 1
} |
Solving this problem using the Work-Energy Theorem
A block is tied to a string wound on a cylinder of mass $m$ and radius $r$, (Through a pulley) which can rotate about its axis on a massless mount placed on a smooth surface.
The system is released from rest, what will be the velocity $v_1$ of the block after it falls by a height $h$?
I was able to do the problem after creating the Newton's 2nd law equations.
I also want to solve this using the work energy theorem. Here's what the equation I ended up with:
$$mgh=\frac{1}{2}mv_1^2+\frac{1}{2}\left(\frac{1}{2}mr^2\right)\omega^2+\frac{1}{2}mv^2$$
As the string would also be moving with velocity $v_1$ as it's directly connected with the block, and the point where it's wound over the cylinder would have the translational velocity $v$ and rotational velocity $\omega r$ this gives us the relation:
$$v_1=v+\omega r$$
This relation isn't enough to get the value of $v_1$. What other kind of motion should I focus on to get another relation? It would be great if someone could hint at what I'm missing.
For reference here's how I did it in the Newton's Law method:
For the block,
$$mg-T=ma_1-(1)$$
For the cylinder system, as the Tension is the only force causing the acceleration of the cylinder,
$$T=ma-(2)$$
Also, the rotational equation for the cylinder (Net torque = $I\alpha$, where $I$ is the moment of inertia and $\alpha$ is the angular acceleration):
$$Tr=(\frac{1}{2}mr^2)\alpha$$
$$T=\frac{mr\alpha}{2}-(3)$$
The constrained equation for the acceleration of the string and angular acceleration of the cylinder is
(Here, $a_1$ is the acceleration of the block hence, it's also the acceleration of the string. And at the point where the string is wound about the cylinder, the acceleration is $a+r\alpha$):
$$a_1=a+r\alpha-(4)$$
Solving (1) and (2) we get:
$$g=a_1+a-(5)$$
Solving (2) and (3) we get:
$$a=\frac{r\alpha}{2}$$
Hence, from (4) we get:
$$a=\frac{a_1}{3}$$
Putting this in (5), $a_1=\frac{3g}{4}$
Since the velocity of the block would be $v_1=\sqrt{2a_1h}$,
We get
$$v_1=\sqrt{\frac{3gh}{2}}$$
| Thanks to BioPhysicist for the hint. I decided to complete my answer now,
As mentioned in the post, the work energy equation we get is:
$$mgh=\frac{1}{2}mv_1^2+\frac{1}{2}\left(\frac{1}{2}mr^2\right)\omega^2+\frac{1}{2}mv^2$$
Also, as mentioned in the post (Equation 5) we have:
$$g=a_1+a \space\space\space-(1)$$
Using the constant-acceleration kinematic equations: as the system was released from rest, the initial velocity is $u=0$.
Hence, as the block travelled a displacement $h$ with acceleration $a_1$,
$$v_1^2=2a_1h \implies a_1=\frac{v_1^2}{2h}$$
Let $t$ be the time in which the block attained speed $v_1$ with acceleration $a_1$ (hence, using $v_1=a_1t$),
$$t=\frac{v_1}{a_1}\implies t=\frac{2h}{v_1}$$
For the cylinder, it attained velocity $v$ in time $t$ with acceleration $a$
$$v=at\implies a=\frac{vv_1}{2h}$$
Putting the values of $a$,$a_1$ in equation (1),
$$2gh=v_1^2+vv_1 \space -(2)$$
We also already have the constrained relation between the velocities (Mentioned in the post),
$$v_1=v+\omega r \space -(3)$$
Solving equations (2) and (3) we get:
$$v_1=\sqrt{\frac{3gh}{2}}$$
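Both routes can be cross-checked numerically. In this sketch the values of $m$, $h$, $r$ are assumed for illustration, and $T=mg/4$ is taken from the Newton's-law solution in the post:

```python
import math

m, g, h, r = 2.0, 9.81, 1.5, 0.1   # assumed illustrative values

# From the Newton's-law solution: T = mg/4
T = m * g / 4
a1 = g - T / m              # block:    mg - T = m a1   ->  3g/4
a = T / m                   # cylinder: T = m a
alpha = 2 * T / (m * r)     # cylinder: T r = (1/2) m r^2 alpha
assert math.isclose(a1, a + r * alpha)   # constraint (4) holds

v1 = math.sqrt(2 * a1 * h)
v = a * (v1 / a1)           # v = a t with t = v1 / a1
omega = (v1 - v) / r        # constraint v1 = v + omega r

# Energy audit: mgh = (1/2) m v1^2 + (1/2)(1/2 m r^2) omega^2 + (1/2) m v^2
ke = 0.5 * m * v1**2 + 0.5 * (0.5 * m * r**2) * omega**2 + 0.5 * m * v**2
print(math.isclose(ke, m * g * h))                 # energy balances
print(math.isclose(v1, math.sqrt(1.5 * g * h)))    # v1 = sqrt(3gh/2)
```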
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/678559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Can we measure the one-way speed of anything at all? I know the one-way speed of light question has been exhausted, and I'm sorry for the naive question, but I would like to understand one thing. Can we measure the one-way speed of anything at all? If we truly can, why can't we synchronize that thing with an emission of light from one place to another to compare their speeds? For instance, and for simplicity's sake, assume 2 cars pass a point at exactly the same time and we know one car is going 60 mph and we do not know the speed of the other car. We could set up a clock 60 miles away, knowing that the car going 60 will take one hour to get there. Then, by using only one clock and checking the difference in arrival times, we could calculate the second car's speed. Why can't we do something similar with light and another medium? Even if it needed to be sent from some space shuttle to the ISS, it seems like with modern equipment we should be able to get some decent approximation of the one-way speed.
| According to Derek Muller from Veritasium, no:
https://www.youtube.com/watch?v=pTn6Ewhb27k&ab_channel=Veritasium
At this point in time, we are measuring the 'average' speed of the roundtrip of light. This is due to the problem of needing two points in space to measure speed:
Speed = Distance/Time
So you would need to send off a beam of light from Point A and simultaneously tell Point B that you've started recording; that signal would itself need to travel faster than what we know as light-speed, which is impossible.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/679026",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 6,
"answer_id": 5
} |
Can a laser beam condense water vapour? Suppose you pump humidity (water vapour) into an enclosed tank fitted with lasers in all directions; will the water vapour turn to liquid due to collisions of photons with $\text{H}_2\text{O}$ molecules?... My thought is that when the high-speed water vapour molecules collide with photons, the photons will reduce their velocity, thus condensing them into liquid water... Am I right?
| Firstly, water vapour is made up of individual, isolated water molecules.
Over a wide electromagnetic spectrum water does absorb electromagnetic radiation, in particular IR/VIS/UV which you can see here.
But what makes you think that such absorptions can make the water molecules aggregate so that condensation occurs?
AFAIK, the excited water molecules shed their absorbed EM energy by an allowed cascade of emissions and nothing else.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/679725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Peskin and Schroeder equation 3.136 (book edition 1995) I'm studying Peskin and Schroeder's QFT and I'm confused by equation 3.136 on page 68:
$\textbf{Previously, on page 48, equation 3.62 says:}$
$\textbf{My question is: how do we deduce 3.136 from 3.62?}$ $\textbf{It doesn't seem to me that} \boldsymbol{\xi^s}\textbf{ and }\boldsymbol{\eta^s}\textbf{ are related to each other?}$
| Try to use the latest version! In the latest edition of the book, equation (3.62) is the spinor solution for negative-energy electrons, while (3.136) is for positive-energy positrons. If you still don't understand, try to interpret it in terms of the Dirac sea (a particle-hole transformation); then you will see that (3.62) and (3.136) are physically equivalent. By the way, (3.62) and (3.136) satisfy the same orthogonality and completeness relations.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/679842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Can current flow in a simple circuit if I enclose the battery in a faraday cage? So suppose I have a regular circuit with a battery connected to a resistor and a lightbulb.
Suppose now somehow the battery is inside a metal box (faraday cage) but the rest of the circuit is outside of it so the wire is maybe poked through a tiny hole in the box.
Since energy flow through a circuit is due to the electromagnetic field as described by the Poynting vector, since the field cannot penetrate through the faraday cage, will current flow through the circuit?
| Yes, it will. The Faraday cage won't stop the current flowing through the wire around the circuit: the conduction path is unbroken, and the surface charges on the wires outside the cage set up the fields, and hence the Poynting flux, that carry energy to the resistor and bulb.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/680144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Does anyone know of an adjustable focusing mirror? Does anyone know of an adjustable-focus mirror, allowing short-sighted and long-sighted people to see clearly in a mirror with no specs on? Is it even possible?
| Most of the glass telescope mirrors currently being manufactured are adaptive: they are made thin enough that their shape can be easily changed by piezoelectric actuators mounted underneath them. This allows almost instantaneous adjustment of the mirror's figure across its area, nulling out the effects of atmospheric distortion moment by moment.
You can also do this in a cruder way by pulling a vacuum behind a metallized mylar film being used as a mirror. I recommend you buy a derelict snare drum shell, seal it to make it airtight, and mount an aluminized mylar sheet in place of the batter head and then carefully change the air pressure inside the drum shell to cause the film to draw in or bulge out slightly, while measuring the resulting focal lengths. Be sure to report your findings here!
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/680291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Planck radiation law of a dielectric layer Suppose we have a rectangular slab of thickness $h$, width $a$ and length $b$. The upper surface of the slab is put at constant temperature $T$ while all the rest is at initial temperature $T_0$. Obviously the temperature of this slab will increase according to the heat equation:
$$\dfrac{\partial T}{\partial t}=K\left(\dfrac{\partial^2 T}{\partial x^2}+\dfrac{\partial^2 T}{\partial y^2}+\dfrac{\partial^2 T}{\partial z^2}\right)$$ where $K$ is the thermal diffusivity of the material.
Because:
$$\dfrac{\partial^2 T}{\partial x^2}=\dfrac{\partial^2 T}{\partial y^2}=0$$
the previous equation becomes:
$$\dfrac{\partial T}{\partial t}=K\left(\dfrac{\partial^2 T}{\partial z^2}\right)$$
What is the total radiance seen from the bottom of the slab vs. time, assuming the refractive index of the material is $n$?
| If I understand your question, you’re asking about the flux of thermal radiation from a heated side of a dielectric slab (index $n$), through the slab, incident to the other side. Correct me if I’m wrong.
I would analyze this starting with the relation
$$A=E,$$
where $A$ is the absorptivity and $E$ is the emissivity of the dielectric material (both as a function of wavelength, etc, depending on the details of $n$). This relation is a consequence of time-reversal symmetry.
Note that if $E$ is large, then actually very little light will make it to the back of a slab with non-negligible thickness because $A$ is also large. The emitted light will be absorbed exponentially into the depth, proceeding to heat up the regions further along. Whether or not the additional heat diffusion process is significant would depend on the details, I suppose.
Note also that if the slab is totally transparent ($A=0$), well then there is no thermal emission anyway.
There are a number of directions you can take the analysis from here, and I’ll leave it as an exercise to the reader to proceed as desired.
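The conduction part of the problem can be sketched numerically. Below is a minimal explicit finite-difference integration of the 1-D heat equation from the question; all material numbers and the insulated-bottom boundary condition are assumptions for illustration, and the radiative part is left aside as the answer suggests:

```python
# Explicit finite-difference sketch of dT/dt = K d2T/dz2, with the top face
# held at a fixed temperature and the rest of the slab starting at T0.
K, h = 1e-6, 0.01            # assumed diffusivity (m^2/s) and thickness (m)
n = 50
dz = h / (n - 1)
dt = 0.4 * dz * dz / K       # explicit scheme needs K*dt/dz^2 <= 1/2

T_top, T0 = 400.0, 300.0
T = [T0] * n
T[0] = T_top                 # heated upper surface (z = 0)

for _ in range(20_000):      # ~3 diffusion times h^2/K
    Tn = T[:]
    for i in range(1, n - 1):
        Tn[i] = T[i] + K * dt * (T[i + 1] - 2 * T[i] + T[i - 1]) / (dz * dz)
    Tn[-1] = Tn[-2]          # assumed insulated bottom face (zero gradient)
    T = Tn

print(T[0], round(T[-1], 1))  # the bottom face warms toward T_top over time
```

From the resulting $T(z,t)$ profile one could then feed the bottom-face temperature into whatever emissivity model is chosen for the radiance.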
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/680496",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Landau levels degeneracy in symmetric gauge I'm reading David Tong's lecture notes on the Quantum Hall Effect.
When the symmetric gauge is taken, a basis of the lowest Landau level wave functions is
$$\psi_{LLL,m}\sim\left(\frac{z}{l_B}\right)^m e^{-|z|^2/4l_B^2},$$
where $z=x-iy$,
and we have
$$J_z\psi_{LLL,m}=\hbar m \psi_{LLL,m}.$$
On page 25, it says that
the profiles of the wavefunctions form concentric rings around the origin. The higher the angular momentum $m$, the further out the ring.
The wavefunction with angular momentum $m$ is peaked on a ring of radius $r=\sqrt{2m}l_B$. This means that in a disc shaped region of area $A=\pi R^2$, the number of states is roughly (the integer part of)
$$N=R^2/2l_B^2=A/2\pi l_B^2$$
I can't understand these two statements. I think the profile of $e^{-|z|^2/4l_B^2}$ does form concentric rings around the origin, but does not when multiplied by $(\frac{z}{l_B})^m$. And why $r_{max}=\sqrt{2m}l_B$?
For the second statement, my understanding is that it divides the area in real space by the area "a wave function occupies", but if this is the case, shouldn't there be an $m$ in the denominator?
| Why is $r_{max}=\sqrt{2m}l_B$? You can square $\psi$ and take the derivative with respect to $r=|z|$ (which is non-negative, so $|z|^2$ is just $r^2$). Solving the resulting equation gives $r_{max}$; this procedure is just finding the maximum.
The other parts of your problem have already been answered very well.
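To make the maximization concrete, here is a small numeric sketch (magnetic length set to 1 for convenience): the density $|\psi_{LLL,m}|^2 \propto r^{2m}e^{-r^2/2l_B^2}$ peaks where $2m/r - r/l_B^2 = 0$, i.e. at $r=\sqrt{2m}\,l_B$.

```python
import numpy as np

l_B = 1.0   # magnetic length, set to 1 for convenience
m = 5       # angular momentum quantum number

# |psi_{LLL,m}|^2 is proportional to r^{2m} exp(-r^2 / (2 l_B^2));
# setting d/dr of its logarithm to zero gives 2m/r - r/l_B^2 = 0,
# i.e. the peak sits at r = sqrt(2m) * l_B.
r = np.linspace(0.01, 10.0, 100_000)
density = r ** (2 * m) * np.exp(-r**2 / (2 * l_B**2))

r_peak = r[np.argmax(density)]          # numeric location of the maximum
r_expected = np.sqrt(2 * m) * l_B       # sqrt(10) * l_B for m = 5
```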
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/680625",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Is there any *global* timelike Killing vector in Schwarzschild geometry? I have been dealing with the following issue related to the Schwarzschild geometry recently. When expressed as:
$$
ds^{2}=-\left(1-\frac{2GM}{r}\right)dt^{2}+\frac{1}{1-\frac{2GM}{r}}dr^{2}+r^{2}d\Omega_{2}^{2}$$
one can find a Killing vector $\xi=\partial_{t}$, since there are no components of the metric depending on $t$. This Killing vector is timelike for $r>2GM$, but spacelike for $r<2GM$ (since $\xi^{\mu}\xi_{\mu}=-\left(1-\frac{2GM}{r}\right)$). My question is:
*
*Can we find any timelike Killing vector for the region $r<2GM$?
*If not, this would imply that the Schwarzschild solution is not stationary for $r<2GM$. But it is usually referred to as a "static spacetime". This wouldn't be true for the region $r<2GM$. So is this an abuse of language?
| Suppose $\xi$ is a Killing field. Then its flow is a local isometry, so for any scalar $K$ we have that the derivative of $K$ in the direction of $\xi$ is zero, i.e. $dK(\xi)=0$. Take the Kretschmann scalar for $K$; since it depends only on $r$, this implies that $dr(\xi)=0$. Therefore inside the horizon you have that $\xi^\mu\xi_\mu>0$, because all the remaining terms are positive and the $dr$ term is zero, hence $\xi$ cannot be timelike.
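A small numeric sanity check of the sign argument (units $G=c=1$; the sampling grid is arbitrary): with the $dr$ component forced to zero, every remaining metric coefficient appearing in $\xi^\mu\xi_\mu$ is positive inside the horizon.

```python
import numpy as np

M = 1.0  # black hole mass, units G = c = 1

# From dK(xi) = 0 the r-component of any Killing field vanishes, so
# xi.xi = g_tt (xi^t)^2 + g_thth (xi^theta)^2 + g_phph (xi^phi)^2.
# Inside the horizon (0 < r < 2M) each of these coefficients is positive:
r = np.linspace(0.01, 2.0 * M, 500, endpoint=False)
g_tt = -(1.0 - 2.0 * M / r)   # positive for r < 2M
g_thth = r**2                 # always positive (g_phph = r^2 sin^2(theta) likewise)
all_positive = bool(np.all(g_tt > 0) and np.all(g_thth > 0))
# so xi.xi > 0 for any nonzero xi there, and no Killing field can be timelike.
```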
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/680739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 0
} |
Does capacitance between two point charges lead to a paradox? Is it possible to have a capacitance in a system of two point charges? Since there is a potential energy between them and they both have charge, we can divide the charge by the potential and get a capacitance.
However, capacitance is supposed to depend only on geometry, and so here it should be zero. How does one resolve this paradox?
| If we talk about capacitors that can be charged electron by electron, then these are an everyday reality in modern nanostructure physics (for the past few decades already): see Coulomb blockade.
Remark: there is some ambiguity in the question, since it attributes capacitance to charge itself, rather than a structure/conductor containing charges.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/680850",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 4,
"answer_id": 2
} |
Varying the Hamiltonians between two fixed states Let us have a Hamiltonian $H_0$ and 2 states which can time evolve into each other via this Hamiltonian. In this particular situation, say one of the states evolves into the other in time $t_0$.
Now let us fix these two states; then we may have infinitely many Hamiltonians (other than $H_0$) that can time evolve one of them into the other. But the time taken for this evolution differs from $t_0$; call it $t_H$ (that is, $t_H$ is the time the Hamiltonian $H$ takes to evolve one of the fixed states into the other). From now on, we shall consider only those Hamiltonians that can time evolve one of the states into the other.
My question is given any small positive number $\epsilon$, will there exist a Hamiltonian H such that $t_H$ = $\epsilon$ ?
| As RoderickLee has commented already, the answer to your question as stated is yes. The reason is that rescaling any Hamiltonian $H$ with a dimensionless constant $\alpha$ speeds up the dynamics by a factor of $\alpha$.
But you might be interested to learn that quantum speed limits are an active area of research. The situation you are asking about is well understood already. The time $\tau$ that a quantum system takes to get from an initial state to an orthogonal final state satisfies the Mandelstam-Tamm bound
$$ \tau \geq \frac \pi 2 \frac \hbar {\Delta H} $$
and the Margolus-Levitin bound
$$ \tau \geq \frac \pi 2 \frac \hbar {\langle H \rangle} . $$
Technical notes: $H$ is normalized so that the ground state energy is zero. $\langle H \rangle$ and $\Delta H$ are the energy expectation value and its standard deviation, which are both constant during the evolution with the constant Hamiltonian $H$. For more info, see e.g. [Deffner 2017] (arXiv link).
These bounds show that, as long as the average energy and its fluctuations are bounded (thus, no rescaling allowed), you can not go arbitrarily fast from one state to another orthogonal state.
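As an illustration, a qubit with level splitting $E$ prepared in the equal superposition of its two energy eigenstates saturates both bounds: $\langle H\rangle = \Delta H = E/2$, and the state first becomes orthogonal to itself at $t=\pi\hbar/E$. A quick numeric check (units $\hbar=1$; $E=2$ chosen arbitrarily):

```python
import numpy as np

hbar = 1.0
E = 2.0  # energy splitting; ground state energy normalized to zero

# Equal superposition of the two energy eigenstates:
# <H> = E/2 and Delta H = E/2, so both bounds reduce to tau >= pi*hbar/E.
mean_H = E / 2.0
delta_H = E / 2.0
mt_bound = (np.pi / 2.0) * hbar / delta_H   # Mandelstam-Tamm
ml_bound = (np.pi / 2.0) * hbar / mean_H    # Margolus-Levitin

# Overlap with the initial state: |<psi(0)|psi(t)>| = |(1 + e^{-iEt/hbar})/2|
# = |cos(E t / (2 hbar))|, which first vanishes at t = pi*hbar/E.
ts = np.linspace(0.0, 5.0, 200_001)
overlap = np.abs(0.5 + 0.5 * np.exp(-1j * E * ts / hbar))
t_orth = ts[np.argmax(overlap < 1e-4)]      # first (near-)orthogonality time
```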
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/680988",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why do guitar strings behave so nicely? To explain the harmonics on a guitar string, we use 2D models of the string. For example we assume that the string can only go up and down. But the string is inherently a 3D object and it could vibrate in a combination of side to side and up and down motions. My question is:
Why does this 3D problem reduce to a 2D problem?
My first idea was that no matter which weird 3D starting condition we set for the string (for example by plucking it in 2 different directions), it quickly settles into a state that can be described by a combination of the two groups of normal modes: up-and-down and side-to-side. Is this right?
Bonus question: In practice, if I pluck the string side to side, will it ever start vibrating up and down after some time?
| Given a very simplistic model of a string in space, fixed at both ends, you're right that the vibration can be in any direction, and furthermore that the vibration can be decomposed as a combination of vibrations in some basis (such as up/down and left/right).
However, due to the rotational symmetry of the setup, when the string is plucked in one direction (without any torque), it will oscillate in a plane, and hence it will resemble a 2D problem.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/681122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is the Newtonian gravitational potential $-\frac{GMm}{R}$ just an approximation? Is $-\frac{GMm}{R}$ just an approximation? I believe that it is since we assume that one of the mass is at rest when deriving it.
| So when we use this potential, we tend to use it in cases where one mass is much larger than the other, so that any movement of the larger mass due to gravity is negligible. For example, here on Earth we pull back on the Earth as the Earth pulls on us, but we give it only a minuscule change in momentum, so in our reference frame it undergoes negligible movement and can be treated as being at rest. This applies to all masses in a Newtonian framework, no matter the velocity.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/681221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If you were invisible, would you also be cold? (Since light passes through you, so should thermal radiation.)
Additionally, I'd like to know if you were wearing invisible clothes, would they keep you warm? In my understanding, the heat radiation from the body would pass through the cloth.
Is it even necessary to be permeable for heat radiation in order to be invisible? Could there be a form of invisibility (hypothetically speaking, of course) that makes you permeable for light in the visible spectrum, but not for heat radiation? Can those two things be separated?
| Two key points to remember:
*
*Radiation is not the only form of heat transfer. There's also conduction and convection.
*Being invisible doesn't only mean that you shouldn't absorb any light. It also means you shouldn't emit any.
From the perspective of heat transfer, making yourself invisible would be similar to wearing a skintight body suit made of reflective mylar (like emergency blankets).
You could definitely be kept warm by invisible clothes. Think about how the inside of a greenhouse stays warm.
The entire EM spectrum carries heat, but if by "heat radiation" you just mean infrared, then yes, you can absorb or reflect it while letting visible light pass.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/681335",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 9,
"answer_id": 6
} |
What happens at $V=Nb$ in the Van der Waals equation (i.e. becomes divergent)? The VdW equation:
$$\left(P+a\left(\frac{N^2}{V^2}\right) \right)\left(V-Nb\right)=Nk_BT$$
When the intermolecular forces are zero, $a=0$, so $P=\frac{Nk_BT}{V-Nb}$, which diverges at $V=Nb$ for fixed temperature.
I'm simulating some hard sphere collisions, with $r_{sphere}=0.5, R_{container}=10$. From a curve fit I got $b \approx 2.2$, so when $N>143$ (or less than 50% of the volume/area), the VdW equation breaks down.
Does that imply a different law holds when the number of particles becomes large? But isn't $b \propto r^3$ anyway, so shouldn't it account for the case when the number of particles is large?
| By definition, $b$ is the volume of a molecule, and $Nb$ is the volume occupied by all the molecules in the gas. So $V\rightarrow Nb$ corresponds to squeezing the gas to a point where the molecules cannot move anymore. The fact that something diverges in this limit is a good indication that, perhaps, the underlying theory (i.e., the van der Waals equation) does not apply there.
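A quick numeric illustration of the divergence (the values of $N$, $b$ and $T$ below are made up; only the trend matters):

```python
k_B = 1.380649e-23  # Boltzmann constant, J/K
N = 1000            # number of molecules
b = 1.0e-28         # excluded volume per molecule in m^3 (illustrative value)
T = 300.0           # K

def pressure(V):
    # Van der Waals with a = 0 (no attraction): P = N k_B T / (V - N b)
    return N * k_B * T / (V - N * b)

V_min = N * b  # the volume at which the molecules can no longer move
# Squeeze toward V = N*b and watch P blow up:
pressures = [pressure(V_min * x) for x in (10.0, 2.0, 1.1, 1.01, 1.001)]
increasing = all(p1 < p2 for p1, p2 in zip(pressures, pressures[1:]))
```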
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/681576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can I conclude that acceleration happens a bit later after force is felt? We take forces like the electric force, magnetic force and gravitational force etc. to be caused by fields such as the electric field, magnetic field and gravitational field respectively. Since these fields take time to reach the object on which the force is applied, the acceleration should occur after the force is applied. Also, does this apply to all cases, or are there interactions that happen only on contact?
What I think is that when object A applies a force on B, A first feels the force and then B feels the force and so accelerates. That means the force acts on B and B accelerates at the same time, but A feels the force first.
| For a particle in a force field (like electric field, magnetic field, gravitational field etc. like you mention) the acceleration of the particle at time $t$ is determined by the value of the field at the particle's position at the same time $t$, in agreement with Newton's laws.
If you change the source of the field (charge a capacitor, move a magnet, etc.), then you are correct that the field values in the rest of space will take some time to "update". But the particle will always move according to the field strength at its own position.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/681734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Spacetime effects on human scale objects? For a human standing upright on the earth, gravity would have a different value at the feet than at the head, and gravity influences the flow of time. Does the difference in the flow of time cause any effects?
I was toying with the idea that gravitational acceleration is just nature trying to compensate for time flowing at different speeds with a preference for moving towards slower timeflow.
Highschool level question.
| It's actually now possible to measure the difference in time flow across a millimetre of height.
In this report by Emily Conover at Science News, she details how:
physicist Jun Ye of JILA in Boulder, Colo., and colleagues used a clock made up of 100,000 ultracold strontium atoms ... after correcting for non-gravitational effects that could shift the frequency, the clock's frequency changed by about a hundredth of a quadrillionth of a percent over a millimeter, just the amount expected according to General Relativity.
They also add:
Previously, scientists have measured this frequency shift, known as gravitational redshift, across a height difference of 33 cm.
So this is an improvement by a factor of around 300.
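The quoted figure is easy to check against the weak-field redshift formula $\Delta\nu/\nu \approx gh/c^2$ (a back-of-the-envelope sketch):

```python
g = 9.81          # m/s^2
c = 2.998e8       # m/s
height = 1.0e-3   # 1 mm

# Weak-field gravitational redshift: fractional frequency shift ~ g*h/c^2
fractional_shift = g * height / c**2
shift_percent = 100.0 * fractional_shift

# "a hundredth of a quadrillionth of a percent" = 0.01 * 1e-15 % = 1e-17 %
quoted_percent = 1e-17
```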
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/682058",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
If $\mathbf{F}_{net} = m\mathbf{a}$ then how is $m\mathbf{a}$ not a force? The question is in the title: If Newton's second law says that the sum of the forces acting on a body in a given direction is the same as the mass of the object times its acceleration in that direction, then how is $m\mathbf{a}$ not a force? Every book I have read on physics (all basic) says that $m\mathbf{a}$ is not a force. Are forces not "closed" under addition? Is this somehow a loose version of equality?
"University Physics" by Young and Freedman says...
Acceleration is a result of a nonzero net force; it is not a force itself.
I guess that's just not enough explanation for me. How can a force be equal to something that is not a force?
| Mathematical equality is not the same thing as physical equality. If you have a mass $m$ undergoing an acceleration $a$, then we know the net force acting on the mass is mathematically equal to $ma$, but an accelerating mass isn't a force itself. You can't take the "$ma$" and use that to accelerate something else.
Most physics equations that are not definitions relate mathematical quantities that are not physically the same thing. This is what makes physics so useful. Saying "forces are forces" won't get you very far.
If this is true, can you perhaps give another example of two things that are mathematically equivalent but not physically equivalent?
The work-energy theorem that relates the net work done on an object to its change in kinetic energy: $W=\Delta K$. This is telling us that the net work done changes the kinetic energy, but work, which is the line integral of force over a path, and kinetic energy, given by $\frac12mv^2$, are two different things physically, and they each have different definitions. They have the same mathematical value, but they are not physically the same thing.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/682553",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 12,
"answer_id": 11
} |
Where is quantum probability in macroscopic world? How can macroscopic objects in the real world have always-true cause-effect relationships when the underlying quantum world is probabilistic? How does it never produce results different from what is predicted by Newtonian physics, except in borderline cases?
| A simple way to understand this is to realize that a macroscopic object consists of trillions of individual quantum systems whose QM properties average out into Newtonian behavior as the number of particles in the system is increased.
The only exceptions to this rule are lasers, superconductors and condensates like liquid helium. In each of these special cases, those quantum properties get writ large for us by arranging for most of the quantum particles to not average themselves out but to instead (roughly speaking) all settle into the same state.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/683695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Does light have mass or not? We know light is made of photons and so it should not have mass, but light is a form of energy (light has energy) and has velocity ($c$), so according to $E=mc^2$, light should have mass... So what is correct?
|
Does light have mass or not
Light is the word we use for classical electromagnetic radiation at optical frequencies.
Electromagnetic radiation is emergent from a large number of photons. The figure in this experiment is a clear proof that classical light is built up by the addition of photons, elementary particles of zero mass and energy equal to $h\nu$, where $\nu$ is the frequency of the classical electromagnetic wave arising from the confluence of very many photons.
Photons are described by four-vectors in the special theory of relativity. The sum of the four-vectors of two non-collinear photons has an invariant mass even though each individual photon has zero mass;
thus the built-up light will have an invariant mass within the formalism of special relativity.
Experimental proof is the decay of the pi0 to two gammas. The added four-vectors of the two gammas have the invariant mass of the pi0. (Related)
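A quick numeric illustration of that four-vector arithmetic, using the $\pi^0\to\gamma\gamma$ decay in the pion rest frame (masses in MeV, units $c=1$):

```python
import numpy as np

# Four-momenta (E, px, py, pz) in MeV, with c = 1.
# pi0 -> gamma gamma in the pion rest frame: two back-to-back photons,
# each carrying half of m_pi0 ~ 134.98 MeV.
E_gamma = 134.98 / 2.0
p1 = np.array([E_gamma,  E_gamma, 0.0, 0.0])  # photon along +x
p2 = np.array([E_gamma, -E_gamma, 0.0, 0.0])  # photon along -x

def invariant_mass(p):
    # m^2 = E^2 - |p|^2 with metric signature (+,-,-,-)
    return float(np.sqrt(p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2))

m_single = invariant_mass(p1)     # 0: each photon is massless
m_pair = invariant_mass(p1 + p2)  # the pair carries the full pi0 mass
```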
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/683919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
Energy of a photon I understand that the energy of a photon is given by $E=h\nu$ where $\nu$ is the frequency of the light. Is this the total energy of the photon? Or its kinetic energy?
| It is the total energy of the photon and in some sense it is also the kinetic energy. The energy-momentum relation says that the energy of an object is given by
$$E=\sqrt{(mc^2)^2+(pc)^2}$$
Here $m$ is the rest mass of the particle. When a particle with mass is moving at non-relativistic speeds (it moves much slower than the speed of light) this relation can be approximated as
$$E\approx mc^2+\tfrac 1 2mv^2+\dots$$
The term $\tfrac 1 2mv^2$ comes from $(pc)^2$ so $pc$ can be called "the kinetic energy"$^\dagger$. For photons $m=0$ so the energy-momentum relation reduces to
$$E=pc$$
So for photons the kinetic energy is the total energy. Another reason to motivate this is that photons that climb out of a gravitational well get redshifted, i.e. their frequency becomes lower. Massive particles slow down when they climb out of a gravitational well but photons, which can't be slowed down, become lower in frequency, as in the gravitational redshift picture whose source is linked below.
Now one result from quantum mechanics is that energy is related to frequency, $E=h\nu$, and momentum is related to wavelength, $p=\frac h\lambda$. This is true for all particles, not just photons. What is special for photons is that
if you plug these expressions into $E=pc$ you get $\nu\lambda=c$.
The reason that $E=h\nu$ is often used for photons is because they don't have rest mass so their frequency is one of their defining features. We are also used to measuring the frequency of light. Your eyes are quite good at this. But to reiterate: $E=h\nu$ is true for any particle, but for massive particles you don't normally compute the frequency.
$\dagger$ The actual relativistic kinetic energy is given by $(\gamma-1)mc^2$ as can be seen here. So this statement is only in the loose sense.
Source of picture: https://en.wikipedia.org/wiki/Gravitational_redshift
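As a quick numeric illustration of these relations (green light at 500 nm chosen arbitrarily):

```python
h = 6.62607015e-34    # Planck constant, J s
c = 2.998e8           # speed of light, m/s
eV = 1.602176634e-19  # joules per electron-volt

lam = 500e-9          # green light, 500 nm
nu = c / lam          # frequency from nu * lambda = c

E = h * nu            # E = h*nu, the (total = kinetic) photon energy
p = h / lam           # p = h / lambda

E_in_eV = E / eV      # roughly 2.5 eV, typical for visible light
# For a massless photon the two expressions agree: E = p*c
```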
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/684220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 1
} |
Do the pupils of an eye emit blackbody radiation? A blackbody, by definition, is an ideal system that absorbs all radiation incident on it.
If a good approximation of a black body is a small hole leading to the inside of a hollow object, then am I right in saying that the pupils of an eye are a good approximation of a black body because they are also holes to a (near) spherical cavity?
If yes, do they also emit blackbody radiation in accordance with the Planck wavelength distribution function, and is this why they appear black?
| Any approximation has a region of applicability, that is, the conditions under which one can apply it:
*
*The human eye obviously absorbs radiation only in a certain range - e.g., it is totally transparent to gamma rays, which are also a part of the Planck spectrum (since the latter includes all frequencies up to infinity).
*The human eye reflects some of the radiation, due to the different refractive index of the lens immediately behind the pupil (see this figure).
*Not all of the absorbed radiation is re-emitted (which is one of the conditions for a black body as related to Planck's law).
Remark: Note also that one can define the black body radiation without resorting to a (largely historical) concept of the black body - as an equilibrium state of a photon gas: see, e.g., the discussion in this thread.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/684451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Sound wave travelling from high impedance to low impedance medium, what will be the reflection & transmission coefficient? Let us assume that a wave propagates in the direction perpendicular to the flat surface of discontinuity. When the characteristic impedance of medium 0 (where the incident and reflected waves exist) is much larger than that of medium 1 (where the transmitted wave exists), how do the pressure and intensity reflection and transmission coefficients behave?
I believe that in this case, the power will not be transmitted. And all the sound waves will turn into reflection.
I don't know how to represent this in mathematical terms for $R$ and $\tau$.
| Assuming that your media have characteristic acoustic impedances $r_0$ and $r_1$ you end up with the following reflection and transmission coefficients
\begin{align}
&R = \frac{r_1 - r_0}{r_1 + r_0}\, , \\
&T = \frac{2r_1}{r_1 + r_0}\, .
\end{align}
The intensity reflection and transmission coefficients are
\begin{align}
&R_I = \left(\frac{r_1 - r_0}{r_1 + r_0}\right)^2\, , \\
&T_I = \frac{4 r_1 r_0}{(r_1 + r_0)^2}\, .
\end{align}
You can see that the intensity reflection coefficient does not depend on the sign of $r_1 - r_0$, that is, it does not matter which one is larger. The same power is reflected. The reflection coefficient does depend on the sign of $r_1 - r_0$ and it tells us something about the (relative) phase of the reflected wave.
For more detail I suggest checking the following reference:
*
*Kinsler, L. E., Frey, A. R., Coppens, A. B., & Sanders, J. V. (2000). Section 6.2 in Fundamentals of Acoustics. John Wiley & Sons.
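A small numeric sketch of these formulas for the hard-to-soft case asked about (the water and air impedance values are rough textbook numbers):

```python
def coefficients(r0, r1):
    # Pressure reflection/transmission and intensity coefficients at
    # normal incidence, from the formulas above.
    R = (r1 - r0) / (r1 + r0)
    T = 2.0 * r1 / (r1 + r0)
    R_I = R**2
    T_I = 4.0 * r1 * r0 / (r1 + r0) ** 2
    return R, T, R_I, T_I

# High impedance -> low impedance, e.g. water (~1.48e6 rayl) into air (~415 rayl):
R, T, R_I, T_I = coefficients(r0=1.48e6, r1=415.0)
# R is close to -1 (near-total reflection, with a phase flip) and T_I is
# close to 0 (almost no transmitted power); R_I + T_I = 1 always holds.
```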
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/684909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How come the number of wandering electrons is the same as the number of positive ions? My book mentions the following:
Cause of resistance: When an ion of a metal is formed, its atoms lose electrons from its outer orbit. A metal (or conductor) has a large number of wandering electrons and an equal number of fixed positive ions. The positive ions do not move, while the electrons move almost freely inside the metal. These electrons are called free electrons. They move at random, colliding amongst themselves and with the positive ions in any direction as shown
The book mentions that "a metal has a large number of wandering electrons and an equal number of fixed positive ions." My doubt is this: let's say the metal is aluminium. Since aluminium has 3 valence electrons, a single atom will lose 3 electrons, which become free electrons in the metal. So, since each atom loses 3 electrons to form a cation, shouldn't the number of wandering electrons be three times the number of positive ions? How come the number of wandering electrons is the same as the number of positive ions?
| I think that paragraph is badly worded and, at face value, wrong.
Most probably the author meant something like:
A metal ( or conductor ) has a large number of wandering electrons and a number of fixed positive ions that amount to the same charge, but of opposite sign.
I guess that in the process of making the sentence more compact and fluid the error crept in.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/685325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
What is the physical intuition behind this energy conservation theorem? I'm reading Quantum Theory for Mathematicians, by Brian C. Hall. Although the book is about Quantum Mechanics, its chapter 2 is actually about Classical Mechanics, in which I encountered the following theorem:
Rephrasing in English, suppose a force acting on a particle has two components. The first component comes from a potential function and the second component is orthogonal to the velocity. Then the energy of the particle is conserved.
What is the physical intuition behind this theorem?
| Not sure if this helps, but...
If you start off with Newtonian mechanics, energy conservation has to be added in as an additional axiom. (As in "... and only forces which conserve energy are found in the wild").
If you start off with Lagrangian mechanics it is simply not possible to write a Lagrangian which does not lead to a conservative force.
At first I thought Newtonian and Lagrangian formulations were totally equivalent, but once I grasped the above it suddenly made sense why people consider Lagrangian to be more fundamental.
So the physical intuition is simply that a formulation of mechanics based around conserved energy is more fundamental than one based around force, and forces are simply derived from that.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/685477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Molecular explanation for pressure According to this post, viscous stress is the result of molecular diffusion. More specifically, it's a transfer of momentum in a direction perpendicular to the direction of a velocity gradient. This got me wondering: is pressure also due to molecular diffusion? Is pressure just the transfer of momentum in the direction of the velocity gradient?
| Statistical physics
In statistical physics one considers mainly the pressure of gas (liquid, solid) against the walls of the container. This indeed arises from the molecular collisions, in which the molecules are scattered from the walls of the container, transferring to the walls their momentum. Any basic statistical physics text provides the calculation (usually as the derivation of the ideal gas law).
Fluid dynamics
In fluid dynamics the situation is a bit trickier, since there is no apparent surface against which the molecules collide. In fact, fluid dynamics is valid only on the scales big compared to the diffusion length, i.e., indeed, we can speak of pressure only when we discuss layers of liquid that are sufficiently thick, so that all the molecules coming from one layer collide and lose their momentum, transferring it to the next layer of the liquid, and thus creating pressure.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/685677",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Factor $1/\sqrt{2\pi}$ in the normalization of wave function packet My book has started using the wave packet definition as follows (time independent form):
$$\Psi(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} A(k) \ e^{ikx}dk$$
I do not understand where the $1/\sqrt{2\pi}$ comes from in this definition. At first I thought it had something to do with normalization; however, I can't seem to prove this to myself.
$$\Psi'(x) = N \int_{-\infty}^{\infty} A(k) \ e^{ikx}dk$$
$$\Psi'(x)^{\ast} = N \int_{-\infty}^{\infty} A^{\ast}(k) \ e^{-ikx}dk$$
$$\Psi'(x) \Psi'(x)^{\ast}= N^2 \int_{-\infty}^{\infty} A(k) \ e^{ikx}dk \int_{-\infty}^{\infty} A^{\ast}(k) \ e^{-ikx}dk = N^2 \int_{-\infty}^{\infty} A(k) A^{\ast}(k)dk = N^2 A(k) A^{\ast}(k)$$
The last step I justify by the condition that the wave functions must approach zero as you go to $\pm \infty$.
$$P = 1 = N^2 \int_{-\infty}^{\infty} A(k) A^{\ast}(k)dk$$
I am not sure where to go from here. Does this term actually come from the normalization? If so, how can I show this.
| For normalization, you want to look at $\int\psi^*\psi\,\text dx=1$. Try this and the result will be much better.
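The $1/\sqrt{2\pi}$ is exactly what makes this work: it makes the Fourier transform unitary (Plancherel's theorem), so if $\int|A(k)|^2dk=1$ then $\int|\Psi(x)|^2dx=1$ automatically. A numeric sketch with a Gaussian $A(k)$ (grid sizes chosen arbitrarily):

```python
import numpy as np

# A Gaussian amplitude A(k), normalized in k-space:
k = np.linspace(-10.0, 10.0, 2001)
dk = k[1] - k[0]
A = np.pi ** -0.25 * np.exp(-k**2 / 2.0)
norm_k = np.sum(np.abs(A) ** 2) * dk          # = 1 by construction

# Build psi(x) = (1/sqrt(2 pi)) * integral of A(k) e^{ikx} dk by quadrature:
x = np.linspace(-8.0, 8.0, 801)
dx = x[1] - x[0]
phases = np.exp(1j * np.outer(x, k))          # e^{ikx} sampled on the grid
psi = (phases * A).sum(axis=1) * dk / np.sqrt(2.0 * np.pi)

norm_x = np.sum(np.abs(psi) ** 2) * dx        # also = 1, thanks to 1/sqrt(2 pi)
```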
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/685808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why in the first Friedmann equation quantity $ρ$ is directly proportional to Hubble's constant despite the fact that gravity counteracts expansion? Here is the first Friedmann equation:
$$H^2 = \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{kc^2}{a^2} + \frac{\Lambda c^2}{3}$$
We know that matter and energy, through gravity, slow down or reverse any expansion of the fabric of spacetime. Yet in some contexts, and especially here with this equation, I encounter the fact that the matter and energy content of the universe increases the expansion rate instead of the opposite, as if there were an anti-gravitational force in effect. How so?
| $\dot a$ is the rate of change of the scale factor i.e. it tells us how fast the universe is expanding. Shortly after the Big Bang the universe was very dense and expanding very rapidly so both $\rho$ and $\dot a$ were high. Then as time went by the universe became less dense as the matter was diluted by the expansion, and at the same time the expansion slowed as the gravitational attraction of all the matter slowed the expansion. The end result is that both $\rho$ and $\dot a$ started high and decreased with time.
So it is not the case that a high matter density causes a high value of $\dot a$, but rather that in an expanding universe they cannot help but be correlated. The first Friedmann equation tells us how they are correlated.
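This correlation can be checked symbolically in the simplest case, a flat ($k=0$, $\Lambda=0$) matter-dominated universe where $a\propto t^{2/3}$ (a sketch; the density normalization below is chosen to satisfy the equation):

```python
import sympy as sp

t, G = sp.symbols('t G', positive=True)

# Matter-dominated flat universe (k = 0, Lambda = 0): a(t) proportional to t^(2/3)
a = t ** sp.Rational(2, 3)
H = sp.diff(a, t) / a                 # H = 2/(3t): large early on, decaying later

rho = (1 / (6 * sp.pi * G)) / a**3    # rho ~ a^-3, with the normalization
                                      # that solves the Friedmann equation
residual = sp.simplify(H**2 - sp.Rational(8, 3) * sp.pi * G * rho)
# residual == 0: H^2 and rho both fall off as 1/t^2 together, which is the
# correlation the first Friedmann equation expresses.
```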
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/686048",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Calculating the speed of a train given its power and weight gives nonsensical results Maybe I am missing something but whenever I calculate the maximum speed attainable by a train knowing only its power and mass I always get values that make no sense. If someone could explain my error, that would be greatly appreciated. As I understand it, the relevant formulas are as follows. I am assuming constant mass for simplicity and ignoring the need to accelerate. All I am interested in is the maximum sustainable speed, analogous to terminal velocity.
$$F_\text{tractive effort}=(M_\text{total} \cdot F_g) \cdot C_\text{friction}$$
$$P=F \cdot v$$
$$\therefore P=((M_\text{total} \cdot F_g) \cdot C_F) \cdot v$$
From my reading, a reasonable coefficient of kinetic friction for steel wheels on steel track is $C_\text{friction}=0.5$
So if we try to calculate the speed of the heaviest train ever run we have the following numbers.
Sources: https://en.wikipedia.org/wiki/Longest_trains,
https://en.wikipedia.org/wiki/GE_AC6000CW
$$M_\text{total}=99734\text{t}$$
$$P=8 \cdot 3500\text{kW}=28000\text{kW}$$
$$\therefore v=\frac{28000000}{99734000\cdot 9.81\cdot 0.5}\approx 0.0572\text{m}/\text{s}$$
For reference, according to google a garden snail moves at $0.013\text{m}/\text{s}$
Now clearly my result of roughly 0.06 meters per second is ridiculous for the speed of even a freight train.
Any insight would be much appreciated.
| The estimation for the coefficient of rolling friction of $C_{\rm friction} = 0.5$ sounds unrealistic. The whole point of a train is that it rolls with very little friction. The above value looks more like the coefficient of sliding friction.
For example if you use a more realistic $C_{\rm friction} = 0.0018$ the resulting rolling resistance force is
$$F_{\rm friction} = C_{\rm friction} M_{\rm total} g = 1760.5\,\mathrm{kN}$$
and the top speed
$$ v_{\rm max} = \frac{P_{\rm total}}{F_{\rm friction}} = 15.9\,\mathrm{m\, s^{-1}}$$
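The two estimates differ only in the friction coefficient; a short Python sketch (using the thread's illustrative numbers, with the total mass taken as 99734 t) reproduces both speeds:

```python
M_total = 99_734_000        # kg (99734 t, heaviest-train example)
P_total = 8 * 3_500_000.0   # W, eight 3500 kW locomotives
g = 9.81                    # m/s^2

def top_speed(c_friction):
    """Steady-state speed where engine power balances the resistance force."""
    resistance = c_friction * M_total * g   # N
    return P_total / resistance             # m/s

v_sliding = top_speed(0.5)      # the OP's (sliding-friction) coefficient
v_rolling = top_speed(0.0018)   # a rolling-resistance coefficient
```

With the rolling-resistance value the top speed comes out near 15.9 m/s, matching the answer.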
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/686151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Thermodynamics (Pressure-Temperature) Graph Analysis I am studying the Graphs in Thermodynamics, and I have found one graph that had made me very curious to know the Real concept of that. Please see it (below here) -
So, here the line with negative slope (AB) is the main one we have to analyze, and along it the volume is constantly increasing. My question is: what is the slope of this line AB telling us? For example, the slopes of the dotted lines (in yellow) tell us the variation of volume (according to the ideal gas equation P = nRT/V; compare it with the straight-line equation y = mx + c). We can see that the slope of AB is also constant, but its volume is increasing, so what does that tell us? Is the slope of AB trying to tell us something else, or am I missing something? Please help; it's a genuine question.
| Most probably you are missing the core idea of Gay-Lussac's Law, which says: if $V=\text{constant}$, then $P\propto T$, and the converse is also true. The dotted lines here are called isochore lines, or simply isochores. If we have to write equations of lines we make the following substitution: $x \rightarrow T, y\rightarrow P$
Now an isochore line has the following equation: $P=mT$, where $m$ is the slope, or we can write $P=\frac{P_1}{T_1}T$. In this equation we can clearly see that $P\propto T$, so the line/process is an isochoric process and $V$ is constant. Let us write the equation of process/line AB:
$$P-P_A=\frac{P_B-P_A}{T_B-T_A}(T-T_A)$$
$$P=\frac{P_B-P_A}{T_B-T_A}(T-T_A)+P_A$$
Clearly, $P$ is NOT directly proportional to $T$, so the line/process AB is definitely not an isochoric process/line. Further, we can substitute $V=nRT/P$ to get the graph of $V$ vs $T$ and see the variation (I guess the calculations yield a downward-curving parabola).
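To see the variation concretely, one can sample $V=nRT/P$ along a straight line of negative slope. The endpoint values below are hypothetical (the figure gives none); the point is only that $V$ grows monotonically along AB even though the slope of AB is constant:

```python
# Hypothetical endpoints for the negative-slope line AB (not taken from the figure):
T_A, P_A = 300.0, 2.0e5   # K, Pa
T_B, P_B = 600.0, 1.0e5
nR = 8.314                # J/K, i.e. n = 1 mol of ideal gas

def P_on_AB(T):
    """Pressure along the straight line through A and B."""
    return P_A + (P_B - P_A) / (T_B - T_A) * (T - T_A)

volumes = [nR * T / P_on_AB(T) for T in range(300, 601, 50)]
increasing = all(v2 > v1 for v1, v2 in zip(volumes, volumes[1:]))
```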
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/686309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If diverging rays never meet, why do parallel rays meet at infinity? I've seen that in the case of concave mirrors, if the object is between the focus and the pole, the reflected rays diverge and never meet.
But if the object is at the focus, it's defined to be meeting at infinity. Why is it so?
| Infinity is not a real distance or an actual number. It's used in mathematics when describing limits as a parameter increases without bound.
Parallel lines, by definition, never actually meet in a flat plane (there are non-Euclidean geometries where they do meet, and these are relevant when General Relativity is taken into effect, but not for classical physics of light rays -- we can approximate space as a flat plane).
The distance from the mirror to the point where the rays meet is a function of the angle between the rays. The smaller the angle, the further the distance. Since angles can get infinitesimally small (ignoring Quantum Mechanics), this means that the distances can get infinitely large. Parallel lines have an angle of 0, so the limit of the distance as the angle approaches 0 is infinity.
In the mathematics, you'll have an equation with the angle in the denominator of a fraction. Dividing by 0 has no actual meaning in arithmetic, so that's why we use limits to deal with it.
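The same limit can be seen numerically with the mirror equation $\frac{1}{v}+\frac{1}{u}=\frac{1}{f}$, i.e. $v=\frac{uf}{u-f}$ for a real object at $u>f$. A minimal sketch with a hypothetical focal length, moving the object toward the focus:

```python
f = 0.10  # m, hypothetical focal length of a concave mirror

def image_distance(u):
    """Mirror equation 1/v + 1/u = 1/f, solved for the image distance v."""
    return u * f / (u - f)

# As the object approaches the focus, the image distance grows without bound.
distances = [image_distance(f + eps) for eps in (1e-2, 1e-3, 1e-4, 1e-5)]
```

The image distance scales roughly like $f^2/\epsilon$, diverging as $\epsilon \to 0$; "meeting at infinity" is exactly this limit.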
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/686588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 6,
"answer_id": 3
} |
Is energy really conserved in rolling motion? Energy is conserved in pure rolling motion. Then why does the ball stop its motion after some time? I think it's not just a matter of air drag. Does all the work get transferred to the surroundings in the form of heat?
| Rolling resistance is an interesting topic. It is clear from experience that a rolling wheel does eventually slow down and stop. So clearly there is some dissipative process that removes the energy from the wheel and transfers it to the environment. However, it is more than that, momentum and angular momentum are also conserved, and you have to consider mechanisms that dissipate them all.
For example, you mention air resistance. Air resistance doesn't produce a torque (or at least not obviously) so by itself it cannot be the sole cause. Also, we would expect that a tire rolling on the moon would stop too.
There is gravity, but on a level surface gravity produces neither torque nor work, and together with the normal force there is no change in momentum either.
There is also the friction force at the surface of the road. This acts backwards so it could account for the decrease in momentum, but naively the point of contact is not moving so the ordinary friction force does no work. Furthermore, the torque produced by the friction force points in the wrong direction and would actually tend to increase the angular momentum.
So the mechanism of rolling resistance is not trivial. The key is to recognize that the material of the tire deforms at the contact patch. Furthermore, this deformation and the forces involved in the deformation are asymmetric as shown in figures 2.8 and 2.11 here. Note how the forces on the leading edge of the contact patch are larger than the forces on the trailing edge.
This gives rise to a torque in the correct direction (reducing angular momentum), a net force in the correct direction (reducing linear momentum), and because the material at the contact patch deforms a negative mechanical power (reducing mechanical energy).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/687054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
} |
What mechanism will force a mechanical watch to tick slower when it goes fast, due to relativistic effects? To make a mechanical watch tick slower, the watch tick rate must be changed; the oscillation of the balance wheel must SOMEHOW be changed. How would speed change the oscillation of the balance wheel, due to relativistic effects?
I don't understand what mechanism between speed and the parts inside a mechanical watch would somehow mysteriously make it start ticking slower.
This video show how watch works.
| As a supplement to Marco Ocram's excellent answer: we are all moving not only in space, but also in time. We have no choice about that: even if we think we are "at rest" in space, we'll be moving forward through time. But different observers may be moving in different directions in spacetime. If we assign $(x, t)$ coordinates to the path of a watch, with the beginning of the path at $(0,0)$, then after one of our seconds a stationary watch will have coordinates $(0, 1)$ whereas a moving watch will have coordinates $(v, 1)$. The vectors $(v, 1)$ and $(0, 1)$ clearly point in different directions and have different lengths. If you mark out 1 unit intervals along the $(v, 1)$ (moving) line, they won't have the same time or space coordinates as they would along the $(0, 1)$ line.
The only complication to all of this is that time and space are not the same thing, and so in practice the "distance" is calculated with $x^2 - t^2$ rather than $x^2 + t^2$. Time comes into it with a "negative" sign (actually the choice of signs for time and space are arbitrary, but they have to be opposite).
This also explains why when you bring the moving watch back it will show a shorter time. The "moving" watch goes around two sides of a triangle, while the "resting" watch moves along the third side (only in time, not in space). The moving watch travels a longer spatial distance. Time and space have opposite signs, so this corresponds to a shorter temporal distance, i.e. the watch that moved will show less elapsed time.
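The triangle argument can be made numerical in units with $c=1$: the proper time along a straight leg is $\sqrt{\Delta t^2-\Delta x^2}$, with the relative minus sign between time and space discussed above. A minimal sketch with a hypothetical speed $v=0.6$:

```python
import math

v, t_total = 0.6, 10.0   # hypothetical speed (in units of c) and round-trip coordinate time

# "Resting" watch: stays at x = 0, so its proper time equals the coordinate time.
tau_rest = t_total

# Moving watch: out for half the coordinate time, back for the other half.
leg_dt = t_total / 2
leg_dx = v * leg_dt
tau_moving = 2 * math.sqrt(leg_dt**2 - leg_dx**2)   # relative minus sign between t^2 and x^2
```

Here `tau_moving` comes out to 8 against `tau_rest = 10`: the watch that moved shows less elapsed time.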
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/687214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Wave Equation energy time independence I'm a bit confused about how the energy of the solution of the wave equation is constant. For a general solution of $\phi_+(x,t)$, the energy in the Hamiltonian formulation is given to be
$$H=\int[\partial_x\phi_+(x-vt)]^2dx=\int[\partial_x\phi_+(x)]^2dx$$
I'm quite confused about how it is clear that the gradient is time independent. My guess is that it is due to the partial $x$, however I'm not convinced that this reasoning is correct. Is there a way to see the time independence without explicitly differentiating the energy with respect to time?
| The spatial derivative has nothing to do with the independence of time. Any integral
$$H(t)=\int_{-\infty}^{\infty}dx\,f(x-vt)$$
is going to be independent of time, by a simple $u$-substitution. Let $u=x-vt$, so that $du=dx$; then $H$ can be rewritten as
$$H(t)=\int_{u\,=\,-\infty}^{\infty}du\,f(u),$$
which is just the value that $H$ takes at $t=0$. Ergo, $H(t)=H(t=0)$ does not depend on $t$.
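The same invariance can be checked numerically for any localized profile; a sketch using a Gaussian pulse and a midpoint Riemann sum over a window wide enough to contain the pulse at every sampled time:

```python
import math

def f(u):
    """Any localized profile works; a Gaussian pulse is convenient."""
    return math.exp(-u**2)

def H(t, v=2.0, L=50.0, n=20000):
    """Midpoint-rule approximation of the integral of f(x - v t) over [-L, L]."""
    dx = 2 * L / n
    return sum(f(-L + (i + 0.5) * dx - v * t) for i in range(n)) * dx

values = [H(t) for t in (0.0, 1.0, 5.0)]   # all equal (= sqrt(pi)) to high accuracy
```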
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/687332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
"The resultant of two forces of equal size, that form an angle, is lowered by 20% when one of the forces is turned in the opposite direction."
"The resultant of two forces of equal size, that form an angle, is lowered by 20% when one of the forces is turned in the opposite direction."
Does anyone know how one would go about trying to find the angle where this happens? I've been reading an old textbook on Mechanics and it has stumped me for quite some time now.
The book is originally written in Swedish so forgive my bad translation skills.
| I think this problem can be solved the easiest without introducing coordinates.
Say we have two forces $\mathbf{f}$ and $\mathbf{g}$, then the question statement can be written as
$$
\alpha\sqrt{(\mathbf{f} + \mathbf{g})^2} = \sqrt{(\mathbf{f} - \mathbf{g})^2} \, ,
$$
with $\alpha = 0.8$.
Squaring and using $\cos \theta = \frac{\mathbf{f} \cdot \mathbf{g}}{\vert\mathbf{f}\vert \, \vert\mathbf{g}\vert}$ directly leads to
$$
\cos \theta = \frac{(\mathbf{f}^2 + \mathbf{g}^2)(1-\alpha^2)}{2\vert\mathbf{f}\vert \, \vert\mathbf{g}\vert(1 + \alpha^2)} \,.
$$
If $\vert\mathbf{f}\vert = \vert\mathbf{g}\vert$, this reduces to
$$
\theta = \arccos \left(\frac{1-\alpha^2}{1+\alpha^2}\right) \, ,
$$
which evaluates to $\theta \approx 77.32^\circ$.
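A quick numeric cross-check of the closed form, building explicit magnitudes of $|\mathbf f+\mathbf g|$ and $|\mathbf f-\mathbf g|$ at the computed angle:

```python
import math

alpha = 0.8
theta = math.acos((1 - alpha**2) / (1 + alpha**2))
theta_deg = math.degrees(theta)

# For equal magnitudes F at angle theta:
#   |f + g|^2 = 2 F^2 (1 + cos theta),  |f - g|^2 = 2 F^2 (1 - cos theta)
F = 1.0
sum_mag  = math.sqrt(2 * F**2 * (1 + math.cos(theta)))
diff_mag = math.sqrt(2 * F**2 * (1 - math.cos(theta)))
ratio = diff_mag / sum_mag   # should reproduce alpha = 0.8
```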
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/687447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Source of randomness Is the random nature of all macroscopic phenomena, for example turbulence or chemical kinetics, ultimately traceable to quantum randomness?
| You are effectively asking "What is the relationship between quantum mechanics and classical chaos?" This is an open question and an intense area of study, known as quantum chaos. Stack exchange questions on quantum chaos have already been asked here (in a general context), here (in the context of predicting the weather), here (in the context of the double pendulum), and here (in the context of predicting the brain).
But in short, I'd answer your question like this:
If quantum mechanics was completely deterministic, chaotic macroscopic phenomena would still appear unpredictable to us, just because we will never know the initial conditions perfectly, and chaos exponentially amplifies uncertainty. But if you ask whether quantum mechanics makes most chaotic systems fundamentally unpredictable, then I'd say yes. Chaos can be one of many mechanisms by which quantum uncertainties / fluctuations have very macroscopic consequences.
However the precise understanding of classical chaos from a quantum mechanical perspective is difficult, because the Schrodinger equation is entirely linear, and so at first glance does not seem to be able to produce chaos. A subtler treatment involving many-body interactions is needed, and is being actively researched.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/687548",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
What causes this frost pattern on my windshield? I was walking back to my car yesterday when I noticed the frost on the back windshield formed these long "straight" lines:
The temperature was about -10C and I was wondering what the mechanism behind these lines was (the horizontal lines I can guess have to do with the wires in the window but that doesn't really explain the other frost lines).
| Additional information:
Just got into my car Dec 23 in Michigan and saw this frost pattern on my windshield. It is 6°F
Wind chill -14 and the frost pattern is on the inside of the glass.
My hypothesis is that the extremely cold temperatures freeze the water vapor at random points on the windshield along the major stress lines that are intentionally designed into tempered glass.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/687971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 2
} |
How to draw the phase plane of this equation? Using various computational tools, it's possible to draw a phase plane from two first-order ODEs or a single second-order ODE. However, when there is a parameter in the equation and we don't know the value of the parameter, is there any way to draw the phase plane and see the changes with respect to the parameter? For example (e-print), if we have two first-order ODEs
$$ \frac{dx}{dt} = \alpha T x - \beta xy$$
$$ \frac{dy}{dt} = \alpha T y - \beta xy$$
can we draw the $x$-$y$ phase plane? We are not given any value of $\alpha$ and $\beta$, but we are given a few constraints:
$$\gamma = \frac{x-y}{x+y}\;,\;\;\;\;\;\;\frac{dT}{dt} = -\left(\frac{dx}{dt}+ \frac{dy}{dt}\right)$$
$$\text{so,}\;\;\;\frac{d\gamma}{dt}=\frac{\beta}{2}(x-y)(1-\gamma^2).$$
| The authors seem to consider a simple (analytic) stability analysis of the obvious equilibrium solutions (from the OP's last equation they are $x=y$ and $\gamma=\pm1$) and then, rather than obtaining the phase space numerically, to draw it schematically.
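For concreteness, one can also integrate the system numerically for chosen (purely illustrative) parameter values and watch $\gamma$ relax toward the $\gamma=\pm1$ equilibria; a minimal forward-Euler sketch (note that $x+y+T$ is conserved by construction):

```python
# Illustrative parameters and initial conditions (not taken from the paper):
alpha, beta = 1.0, 1.0
x, y, T = 0.6, 0.4, 1.0
dt, steps = 1e-3, 20000

trajectory = [(x, y)]
for _ in range(steps):
    dx = (alpha * T * x - beta * x * y) * dt
    dy = (alpha * T * y - beta * x * y) * dt
    x, y, T = x + dx, y + dy, T - dx - dy   # dT = -(dx + dy)
    trajectory.append((x, y))

gamma = (x - y) / (x + y)   # starts at 0.2 and relaxes toward +1
```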
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/688077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Under what condition is an electrostatic field both solenoidal and irrotational? I'm trying to figure out
under what condition is an electrostatic field both solenoidal and irrotational?
A solenoidal field satisfies $\nabla \cdot \mathbf{F}=0$. An irrotational field satisfies $\nabla \times \mathbf{F}=\mathbf{0}$.
From the fundamental postulates in electrostatics we have $\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}\:$ and $\: \nabla \times \mathbf{E}=\mathbf{0}$.
So obviously, an electrostatic field is always irrotational, but when is the field solenoidal?
One condition would be when there is no charge density, $\rho=0$. Then clearly $\nabla \cdot \mathbf{E}=0$. But if there is no charge density, is there even an electric field to begin with?
Another condition would be if the electrostatic field is homogeneous, that is, it doesn't depend on spatial coordinates. For example, $\mathbf{E} = \mathbf{a_x}+\mathbf{a_y}2+\mathbf{a_z}3 \: $ satisfies $\nabla \cdot \mathbf{E} = 0$.
Does anyone know an exact formulation of what the condition should be, for the electrostatic field to be both solenoidal and irrotational?
| You've identified that the E-field can be both solenoidal and lamellar (irrotational) when the charge density is zero, but it is actually possible for the E-field to not be solenoidal at points in space without charge densities. Likewise, it is possible for the E-field to be solenoidal even when there is charge density.
According to Maxwell's equations:
$$\nabla \times \mathbf{E} = 0,$$
$$\nabla \cdot \mathbf{D} = \nabla \cdot (\varepsilon \mathbf{E}) = \varepsilon \nabla \cdot \mathbf{E} + \mathbf{E}\cdot\nabla\varepsilon = \rho,$$
then it follows that
$$ \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon}-\mathbf{E}\cdot \nabla\ln\varepsilon .$$
If the permittivity is not spatially constant it is then possible for the electric field to not be solenoidal despite there being no charge density. Likewise, it is possible to construct a spatially varying permittivity so that the E-field is solenoidal despite there being charges.
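A 1D finite-difference illustration of the first case, with an invented permittivity profile: choose $\varepsilon(x)$ growing linearly and $E(x)$ falling so that $D=\varepsilon E$ is constant. Then $\rho = \partial_x D = 0$ everywhere, yet $\partial_x E \neq 0$:

```python
eps0, E0, L = 8.854e-12, 1.0, 1.0   # illustrative values

def eps(x):
    """Spatially varying permittivity (invented profile)."""
    return eps0 * (1 + x / L)

def E(x):
    """Chosen so that D = eps(x) * E(x) is constant in x."""
    return E0 / (1 + x / L)

h, x = 1e-6, 0.5
rho   = (eps(x + h) * E(x + h) - eps(x - h) * E(x - h)) / (2 * h)  # d(eps E)/dx
div_E = (E(x + h) - E(x - h)) / (2 * h)                            # dE/dx
```

`rho` vanishes (no charge), while `div_E` is about $-4/9$ at $x=0.5$: a non-solenoidal field in a charge-free region.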
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/688152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Understanding ergodicity and what an ergodic system is I am trying to understand the concept of ergodicity/ergodic systems in physics, but because my understanding of phase space and its elements is a bit unclear, I have trouble understanding the former. Regarding ergodicity (in physics), in Wikipedia I read this:
A physical system is said to be ergodic if any representative point of the system eventually comes to visit the entire volume of the system.
A point in phase space, represents a microstate as far as I understand. Also in the case of MCE or CE the microstates are eigenstates of the hamiltonian. That doesn't mean that it can't also be a superposition of the eigenstates of the Hamiltonian (please correct me if my understanding is faulty here). Now if a point (system) gets to visit the entire volume of the system, doesn't that imply that the microstate changes its energy, and isn't that in contradiction to the Liouville theorem where we say that the change of a system is governed by the Hamiltonian mechanics?
Edit:
What does it mean for a system to spend time in a region of phase space?
| The wikipedia article is talking about ergodicity in classical physics. This is where concepts like phase space are most relevant. Allow me to quote the first paragraph of the section you're reading:
The case of classical mechanics is discussed in the next section, on ergodicity in geometry. As to quantum mechanics, although there is a conception of quantum chaos, there is no clear definition of ergodocity; what this might be is hotly debated.
As alluded to, the emergence of ergodicity in quantum mechanics is an active topic of current research. If you are interested in how ergodicity relates to the energy eigenstates of an isolated system's Hamiltonian, you can start by reading about the eigenstate thermalization hypothesis.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/688345",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Terminology referring to the term "quantization" in the Schrödinger and Dirac equations When people write "quantization of the Dirac equation", is the word "quantization" the same as "second quantization"?
As I understand it, both the Schrödinger and Dirac equations describe one particle (or a specific number of particles, using tensor products), and when we "quantize the equation" we introduce a formalism that can describe creation and annihilation of particles.
Is this correct?
| Yes, the two equations describe quantum particles, i.e., the "first quantization" is already done, and the only further quantization possible is second. (Note how it is different for photons, which are classically already described by a wave equation, see this discussion.)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/688470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why does the shape of the Maxwell-Boltzmann distribution depend on temperature, but not mass and number of particles? My physics textbook provided the following probability density function for the speed of a particle in an ideal gas at a certain temperature.
$$f(v)=4\pi N\left(\frac{m}{2\pi k T}\right)^{3/2} v^{2} e^{-mv^{2}/2kT}$$
Below is the graph.
Why does the shape depend on temperature?
Can't I just keep increasing $N$ to an extremely large number and thus increase the probability of extremely high speeds?
| As one can see from the expression in the OP, the shape is independent of the number of particles - increasing $N$ only increases the height of the distribution, i.e., it only changes the scale on the $y$ axis.
However, contrary to what the title suggests, the distribution does depend on the particle mass - the curves would look different if it were a gas other than nitrogen.
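This is easy to check directly from the formula; a sketch with an illustrative N2 molecular mass:

```python
import math

k = 1.380649e-23   # J/K, Boltzmann constant
m = 4.65e-26       # kg, roughly one N2 molecule (illustrative)

def f(v, T, N=1.0):
    """The Maxwell-Boltzmann speed distribution from the question."""
    a = m / (2 * math.pi * k * T)
    return 4 * math.pi * N * a**1.5 * v**2 * math.exp(-m * v**2 / (2 * k * T))

# N only rescales the curve: ratios of values (the "shape") are unchanged.
r1 = f(400, 300, N=1) / f(800, 300, N=1)
r2 = f(400, 300, N=2) / f(800, 300, N=2)

# Temperature (and mass) move the peak: the most probable speed is sqrt(2kT/m).
vp_300 = math.sqrt(2 * k * 300 / m)
vp_600 = math.sqrt(2 * k * 600 / m)
```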
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/688607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the gravitational constant in 5D? I am trying to find the energy density for a given spacetime using Einstein's Equation $G_{\mu\nu}=\kappa T_{\mu\nu}$. I am trying to do this in 5D and with restored SI units, but I am having trouble finding what the constant $\kappa$ should be.
I know that the Einstein tensor $G_{\mu\nu}$ should be in units of m$^{-2}$, and that in the 4D case the stress-energy tensor $T_{\mu\nu}$ has units of energy density J m$^{-3}$. I also know that in 4D $\kappa = \frac{8\pi G} {c^4}$, with units s$^2$kg$^{-1}$m$^{-1}$. However, in 5D, the energy density will be given in units of J m$^{-4}$, so then $\kappa$ must be given in units of s$^2$kg$^{-1}$. I'm not sure, then, what the value of $\kappa$ should be in 5D.
I have seen some papers and textbooks use $\kappa=\frac{8\pi G_5} {c^4}$, where $G_5$ is the "5D gravitational constant", but I can't seem to find what the 5D gravitational constant is meant to be, and none of these papers or textbooks seem to consider the problem of restoring to SI units. I have also seen some papers and textbooks multiply the 4D $G$ by the length of the compactified 5th dimension, but I am looking at a case where the 5th dimension is not necessarily compactified.
What is Einstein's gravitational constant, $\kappa$, in 5D, for both the case of a compactified and a non-compactified 5th dimension?
| For the case of a compactified 5th dimension, you can find your answer from Kaluza-Klein dimensional reduction:
The D-dimensional gravitational constant is related to the (D+1)-dimensional one by the volume of the compact space $$\kappa^2_{D+1}=2\pi R_z\kappa^2_D$$
where $2\pi R_z$ is the volume of the compact dimension. In general, $$\kappa^2_{D+P}=V_P\cdot\kappa^2_D$$
I'm not an expert of large extra dimensions, but reading some papers on the subject it seems to me that a proper definition of $\kappa_5$ is part of the problem. I might be wrong on this last part though.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/688739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How would Newton explain mirages? Suppose we think of light as photon packets with tiny momentum; with this picture in mind, consider the refraction of light in mirages:
We see that the packets of light photons must be continuously under some kind of force, since they change their natural path direction. How would Newton have explained this force, since he had no knowledge of wave theory?
| Newton literally wrote the book on Optics and knew perfectly well how to predict refraction.
He did posit a speculation on the reasons for refraction - that is, light was composed of tiny, very subtle pieces which were subject to kinematic laws and had the tendency to accelerate towards regions of higher density, an interaction in which they exchanged some of their extremely small momentum with the medium subject to the 2nd Law. This speculation is mostly false, although it bears a trivial resemblance to quantum mechanics. However, the light does exchange momentum with the medium as it curves, subject to a revised 2nd Law,
$\frac {d}{dt} \vec P = \frac {d}{dt} (E\vec v/c^2)$
Note how the right hand side simplifies to Newton's $ m\vec a$ for $v \ll c$ and constant $m\gt0$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/688905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Particle density vs. Probability Density in Quantum Mechanics I am currently reading trough "Bose-Einstein Condensation and Superfluidity" by Pitaevksii and Stringari and noticed some inconsistencies in my reasoning.
In Chapter 5 (Non-uniform Bose gases at zero temperature) the authors introduce the condensate wave function $\Psi$.
It is further stated that the normalization of $\Psi$ is given by $N = \int d\vec{r} |\Psi(\vec{r})|^2$, where N is the total number of atoms in the condensate. Up until this point, I think of $\Psi$ as a probability density, as I have been doing when dealing with Quantum Mechanics for the past few years.
The following sentence then really confuses me:
The modulus $|\Psi(\vec{r})|$ determines the particle density $n(\vec{r}) = |\Psi(\vec{r})|^2$ of the condensate.
My question is: How can something that describes a probability density be a quantity that represents a particle density?
| The wave function describing a BEC is in nature quite different to that of a single quantum particle.
Usually in quantum mechanics you have that $|\psi(t,x)|^2$ is the probability density of the particle being around $x$ at time $t$.
However in BECs the description is different, since quantum phenomena are now apparent macroscopically. This requires us to use a different kind of wave function, describing the condensate as a whole. So in this context, $|\Psi(t,x)|^2$ is the spatial distribution of the BEC cloud at time $t$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/689062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Thermodynamic Process in a Thermally Insulated Container Suppose we have a specific amount of gas in a thermally insulated container having a frictionless, massless piston. The piston initially is in the middle, dividing the container into two equal chambers, one of which holds the gas while the other is vacuum. Now the piston is released. Then the gas, being under pressure, must expand quickly and go through an adiabatic process to expand up to the full volume of the container. My question is whether this thermodynamic expansion is a reversible process or not. Why is it reversible or irreversible?
Again, suppose, somehow, we get back to the initial state of the system. Then, instead of releasing the piston, we hold the piston and allow the force that holds it to slowly decrease so that the piston moves slowly. This causes a slow expansion of the gas, not as fast as the one before. As far as I know, this slow process is mainly an isothermal process. Since the container holds an ideal gas, the change of internal energy of the gas is $0$, which means:
$$
dE_{int}=0
$$
Now, according to the first law of thermodynamics, we know that:
$$
dQ=dW+dE_{int}
$$
Now, since $dE_{int}=0$, we find that for the isothermal process:
$$
dQ=dW
$$
Therefore, the heat applied to the gas is the amount of work done by the gas. But as mentioned, the container is thermally insulated, so the gas is not allowed to receive any heat from outside the system, which indicates $dQ=0$. So, finally, $dW=0$ is bound to be true.
But as we can see, the gas here actually expands, so the work done by the gas cannot be $0\,\text{J}$ in this case. Since another probable equation to measure $dW$ is
$$
dW=pdV
$$
the gas has to do work to expand itself. Then what's the mystery here?
| The first process you describe is irreversible. Why? Because there is no process you can devise for returning the system to its original state without resulting in a change in the surroundings.
The second process you describe is not isothermal. So, in this process, the internal energy decreases as a result of the work done by the gas. This process can be carried out reversibly by replacing your manual control with a more energy-conserving method (such as raising tiny weights to different elevations).
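The difference between the two processes can be made quantitative for an ideal gas; a sketch with illustrative numbers (monatomic gas, volume doubling):

```python
gamma = 5.0 / 3.0   # monatomic ideal gas (illustrative)
T1 = 300.0          # K, initial temperature
V_ratio = 2.0       # final volume / initial volume

# Free expansion into vacuum: no heat, no work, so U and hence T are unchanged.
T_free = T1

# Slow reversible adiabatic expansion: T V^(gamma - 1) = const.
T_rev = T1 * V_ratio ** (1 - gamma)   # about 189 K: the gas cools, so not isothermal
```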
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/689276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Why displacement is not used to calculate average potential energy in SHM? We know that the average potential energy of a body executing simple harmonic motion (SHM) is
$$\frac{1}{4}KA^2$$
where $K$ is the spring force constant and $A$ is oscillation amplitude. This was derived using potential energy as a function of time:
$$U(t) = \frac{1}{2}KA^2 \sin^2(\omega t)$$
If we take $$U(x) = \frac{1}{2} K x^2$$
and find the average potential energy by
$$\frac{\int_{-A}^A{\frac{1}{2}Kx^2dx}}{\int_{-A}^Adx} = \frac{1}{6}KA^2$$
we get completely different result.
My questions are:
*
*Why are we getting different answers on approaching in two different ways?
*Which of them is correct?
*Why the other one is incorrect?
| The two integrals do not evaluate to the same value because the velocity is not constant. In a hypothetical situation, if velocity were constant the two averages would evaluate to the same value.
Let the displacement and velocity of the spring be defined as:
$$x(t) = A \sin(\omega t) \quad \text{and} \quad \dot{x}(t) = \omega A \cos(\omega t)$$
where $A$ is amplitude of oscillations and $\omega$ is the oscillation frequency. Note that $\omega = \frac{2\pi}{T}$ where $T$ is the period of oscillations, and $\omega T = 2 \pi$.
In first case the averaging is done in time for one half period:
$$U_\text{av} = \frac{2}{T} \int_{-T/4}^{T/4}{\frac{1}{2} k x(t)^2 dt} = \frac{A^2 k}{T} \int_{-T/4}^{T/4}{\sin^2(\omega t) dt} = \frac{A^2 k}{T} \int_{-T/4}^{T/4}{\frac{1 - \cos(2\omega t)}{2} dt} = $$
$$ = \frac{A^2 k}{2 T} \Bigl. \Bigl( t - \frac{1}{2 \omega} \sin(2\omega t) \Bigr) \Bigr|_{-T/4}^{T/4} = \frac{1}{4} A^2 k$$
You would get the same result if you evaluated the above integral for one full period. Try it!
In second case the averaging is done in distance for one half period:
$$U'_\text{av} = \frac{1}{2A} \int_{-A}^{A}{\frac{1}{2}kx^2 dx} = \frac{k}{4 A} \frac{1}{3} \left. x^3 \right|_{-A}^{A} = \frac{1}{6} A^2 k$$
You could also evaluate the second integral in time as follows:
$$U''_\text{av} = \frac{1}{2A} \int_{-A}^{A}{\frac{1}{2}kx^2 dx} = \frac{1}{2A} \int_{-A}^{A}{\frac{1}{2}kx^2 \frac{dx}{dt} dt} = \frac{1}{2A} \int_{-T/4}^{T/4}{\frac{1}{2}kx(t)^2 \dot{x}(t) dt} = $$
$$ = \frac{A^2 k}{4} \omega \int_{-T/4}^{T/4}{\sin^2(\omega t) \cos(\omega t) dt} = \frac{A^2 k}{4} \frac{\omega}{\omega} \int{u^2 du} = \frac{A^2 k}{12} \left. \sin^3(\omega t) \right|_{-T/4}^{T/4} = \frac{1}{6} A^2 k = U'_\text{av}$$
where the substitution is $u = \sin(\omega t)$ and $\frac{1}{\omega} du = \cos(\omega t) dt$. As expected, the result is the same as in $U'_\text{av}$, which makes sense since that was our starting point.
This brings us to the reason why the final result for the two cases is different. If you compare integrals in time, it is obvious that functions under the integral are not the same:
$$\boxed{\frac{2}{T} \int_{-T/4}^{T/4}{ \frac{1}{2} k x(t)^2 dt} \neq \frac{1}{2A} \int_{-T/4}^{T/4}{ \frac{1}{2} k x(t)^2 \dot{x}(t) dt}} \tag 1$$
unless the motion is uniform at constant velocity
$$\dot{x}(t) = \frac{A - (-A)}{\frac{T}{4} - (-\frac{T}{4})} = \frac{4A}{T}$$
in which case the two sides of Eq. (1) would be equal! Of course, constant velocity is only a hypothetical scenario; it does not apply to the spring. I just wanted to show where the difference comes from.
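The inequality (1) is easy to confirm numerically. Below is a minimal sketch (assuming $k = A = \omega = 1$, so the exact values derived above are $\tfrac14 A^2 k = 0.25$ and $\tfrac16 A^2 k \approx 0.1667$) that simply averages the potential energy over a uniform grid, first in time and then in position:

```python
import numpy as np

# Average the spring potential energy U(x) = k x^2 / 2 for x(t) = A sin(wt),
# once over a uniform grid in time and once over a uniform grid in position.
k = A = omega = 1.0
T = 2.0 * np.pi / omega

t = np.linspace(-T / 4, T / 4, 1_000_001)        # half a period in time
time_avg = np.mean(0.5 * k * (A * np.sin(omega * t)) ** 2)

x = np.linspace(-A, A, 1_000_001)                # full swing in position
pos_avg = np.mean(0.5 * k * x ** 2)

print(time_avg, pos_avg)   # ~0.25 and ~0.1667, i.e. A^2 k / 4 vs A^2 k / 6
```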
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/689385",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Given a formula in Minkowski spacetime, how can we transform it so it works in curved spacetime? To bring a concrete example, let's say I know that the stress-energy tensor related to the electromagnetic field in flat spacetime is
$$T_{\mu\nu}=F_{\mu\lambda}F_{\nu}^{\;\lambda} - \frac{1}{4}\eta_{\mu\nu}F_{\rho\sigma}F^{\rho\sigma}$$
I would like to have this formula working in curved spacetime. I know that I will need to use the appropriate metric ($g_{\mu\nu}$). My question would be: is this a simple switch, as such:
$$T_{\mu\nu}=F_{\mu\lambda}F_{\nu}^{\;\lambda} - \frac{1}{4}g_{\mu\nu}F_{\rho\sigma}F^{\rho\sigma}$$
Or do I need to do something else as well in order be able use the formula in curved spacetime?
| Better yet, you can derive this by starting with the Lagrangian:
$$L = \sqrt{|g|}\left(\frac{1}{16\pi}R + \frac{1}{4}F^{ab}F_{ab}\right)$$
And just finding the equations of motion (I recommend cheating and just "remembering" that the variation of $\sqrt{|g|}R$ with respect to $g_{ab}$ is the Einstein tensor, though).
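For reference, the same expression can be checked by carrying out that variation for the Maxwell term explicitly. A minimal sketch (overall signs and normalizations vary between texts): define
$$T_{\mu\nu} \equiv -\frac{2}{\sqrt{|g|}}\frac{\delta\left(\sqrt{|g|}\,\mathcal{L}_{\text{EM}}\right)}{\delta g^{\mu\nu}}, \qquad \mathcal{L}_{\text{EM}} = -\frac{1}{4}F_{\rho\sigma}F^{\rho\sigma}$$
and use the identities
$$\delta\sqrt{|g|} = -\frac{1}{2}\sqrt{|g|}\,g_{\mu\nu}\,\delta g^{\mu\nu}, \qquad \delta\left(F_{\rho\sigma}F^{\rho\sigma}\right) = 2\,F_{\mu\lambda}F_{\nu}{}^{\lambda}\,\delta g^{\mu\nu}$$
which give
$$T_{\mu\nu} = F_{\mu\lambda}F_{\nu}{}^{\lambda} - \frac{1}{4}g_{\mu\nu}F_{\rho\sigma}F^{\rho\sigma}$$
i.e. exactly the naive $\eta_{\mu\nu}\to g_{\mu\nu}$ replacement from the question, confirming that it is consistent in this case.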
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/689544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is a single photon a plane wave or a wave packet? According to the definition a photon is monochromatic, so it has a unique frequency $\omega$ and thus it can be expressed as
$\psi(x,t)=\exp i(kx-\omega t)$.
This suggests that a photon is a plane wave which occupies the whole space at the same time.
But then why can we say a photon travels from one place to another? In ordinary thinking a photon is more like a wave packet, and its probability density has a non-uniform distribution in space.
So what is a photon, really?
| All you need is some mode with a definite frequency so that in that mode the EM field dynamics is that of a harmonic oscillator having that frequency. In a conducting cavity there are many such modes and a photon can occupy any of them. Similarly there are many confined modes in an optical fibre or waveguide and any of them can be occupied by any number of photons.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/689659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 7,
"answer_id": 2
} |
Uncertainty notation: I am unsure of how the parentheses notation works If I have a value of $5.868709...×10^{−7}$, and an uncertainty of $7.88431...×10^{−12}$, is it correct to write this as $5.86871(8)×10^{−7}$ or $5.8687(8)×10^{−7}$?
A problem I have with the first is that the values used to calculate the $5.868709...×10^{−7}$ were given to five significant figures, so is it wrong for me to have six significant figures in my answer?
| In the first place, I would write
$$ 5.868\,709\cdots×10^{−7} \pm 7.884\,31\cdots×10^{−12} $$
instead as
$$ (5.868\,709 \pm 0.000\,078\,8431 )×10^{−7} $$
with (a) grouped digits, (b) a common exponent, and (c) no ellipses. Next I would start removing “insignificant” digits.
The parenthesis notation gets rid of the leading zeros in the uncertainty. You should include the uncertain digits, rather than truncating or rounding them; think of them as “guard digits” for future computations, so that you can postpone as much rounding as possible for as long as possible. So the correct notation would be one of
\begin{align}
&5.868\,709(79)&{}\times10^{-7}
\\
&5.868\,71(8)&{}\times10^{-7}
\end{align}
depending on whether you also keep a guard digit in your uncertainty. The Particle Data Group has a useful statement (section 5.3) about how many significant figures they include in their displayed uncertainties.
Going the other way, if you see a number like $\pi = 3.146(5)$, that means something like $3.141 < \pi < 3.151$, with appropriate caveats about confidence limits. Here you can see that rounding away the uncertain digit,
$$
3.146 \pm 0.005 \not\to 3.15\pm0.005
$$
would erroneously exclude the correct value.
As far as having “gained” a significant figure: don’t fret about that. (Though you may be miscounting. The string $5\,868\,709$ has seven significant figures; moving the decimal point around using scientific notation doesn’t change that count.) The significant-figure approach is a heuristic for people who, for whatever reason, can’t or won’t keep track of uncertainties quantitatively. The usual rule is that, absent other information, the uncertainty in the least significant digit is one unit. So if you say,
“I trust $5.868\,709$ to five significant figures”
then I read instead
“the value is $5.868\,7(1)$, or maybe $5.868\,71(10)$, not to suggest those are terribly different”
You have “gained a digit” because your result, $5.868\,71(8)$, has a slightly smaller uncertainty than the naïve sig-fig approach.
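If you convert between the two notations often, the rule (“uncertainty expressed in units of the last displayed digit”) is easy to mechanize. The helper below is purely illustrative — the function name and its rounding behaviour are my own, not any library's API:

```python
import math

def paren_notation(value, unc, unc_digits=2):
    # Illustrative helper: format value +/- unc in compact parenthesis
    # notation, keeping `unc_digits` digits of the uncertainty ("guard digits").
    exp_v = math.floor(math.log10(abs(value)))            # exponent of the value
    lsd = math.floor(math.log10(unc)) - (unc_digits - 1)  # last displayed digit
    decimals = exp_v - lsd                                # digits after the point
    mantissa = value / 10.0 ** exp_v
    unc_units = round(unc / 10.0 ** lsd)                  # unc in last-digit units
    return f"{mantissa:.{decimals}f}({unc_units}) x 10^{exp_v}"

print(paren_notation(5.868709e-7, 7.88431e-12, unc_digits=2))  # 5.868709(79) x 10^-7
print(paren_notation(5.868709e-7, 7.88431e-12, unc_digits=1))  # 5.86871(8) x 10^-7
```

Note it reproduces both of the acceptable forms from the answer, depending on whether a guard digit is kept in the uncertainty.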
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/689790",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Possibility of combining photovoltaics and solar thermal energy In a private setting, photovoltaics and solar thermal energy are often harvested on the home's roof and roof area is limited. So, I thought about combining both, i.e. mounting solar collectors underneath solar cells. The rationale behind this is that the solar cells appear almost black and probably heat up considerably under irradiation. So if the collectors are in tight thermal contact to the cells, the water in the collectors might carry away the heat as usable energy, and possibly even increase the lifetime or efficiency of the cells due to the cooling effect (but this is rather engineering and not part of the question). So roof area is exploited twice (in two different wavelength windows). Moreover, if electric energy from the cells exceeds actual consumption and the battery's storage capacity, it might also be used for heating (albeit at a lower total efficiency, of course).
Can the amount of (infrared) radiation that gets absorbed (or possibly transmitted) by solar cells, and which is available as heat at the back side of the cells, be quantified by a rough calculation and either prove or disprove the benefit of such a concept? Does the almost black appearance of the solar cells fool one into thinking that they also absorb in the infrared, although they don't?
| From this site comparing solar panels,
solar panels are usually able to process 15% to 22% of solar energy into usable energy, depending on factors like placement, orientation, weather conditions, and similar.
Since essentially all of the incident energy is absorbed, a collector in good thermal contact with the photovoltaic cells would miss only the roughly 20 percent converted to electricity, so the concept mainly depends on whether that thermal contact can be achieved.
It seems, though, that your idea is already commercial.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/689885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is there a limit of size for superpositions? Could objects always remain in superposition if there were no environment for decoherence to occur?
| According to the laws of quantum mechanics, certainly. However, this assumes the system is still completely isolated - even the presence of a gravitational source can lead to decoherence, as per this article, because the objects might be in superpositions of two different gravitational potentials. As well, it assumes that the laws of quantum mechanics hold for macroscopic objects (that's why people keep trying to push quantum mechanics experiments to larger and more massive systems, to test this assumption).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/690215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Gravitational forces of a building In trying to understand the gravitational forces of a building, I have devised a thought experiment:
A building is floating in space. The building's mass is asymmetrically distributed. Inside it also has a large concentrated mass, not aligned to the center of gravity of the building as a whole.
We let loose a bunch of dust particles inside the building. Given the gravitational forces of the building's mass, where would these particles be collected?
I have considered a couple of scenarios: | A is not correct. Gravitational forces inside the building will not cancel out, except in a few specific locations - and even those locations will not be stable equilibrium points.
C is not correct. The concept of centre of mass is only relevant for calculating gravitational forces on objects outside of the building - and even then, only if the building's design is symmetric.
So you are left with a combination of B and D. Dust close to walls and ceilings will be gravitationally attracted to the nearest surface. Dust floating in the centre of a room will be gravitationally attracted towards the large mass concentration.
This is like our solar system. For objects near to or on the surface of a planet, the most significant gravitational attraction acting on them is the planet. For objects in outer space, not close to a planet, the most significant gravitational attraction acting on them is the Sun, which contains more than 99% of the mass in the solar system.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/690308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Measurement on mixed states I have a conflict between my lecture notes on quantum mechanics, where it is stated that the probability of measuring an eigenvalue $a_i$ on a mixed state with density matrix $\rho$ is
$$
\operatorname{Tr}(P_i \rho P_i)\ ,
$$
where $P_i$ is the projector for the subspace corresponding to $a_i$.
However, all resources out there state that the probability should be $\operatorname{Tr}(\rho P_i)$, and even the professor gave us a solved exam as an example where the latter formula was applied instead of the first one.
Which calculation for the probability is correct? Is it possible that both traces are the same because of $P_i$ being a projection operator?
| As mentioned by the OP both versions are the same. For an observable $A$ of the form
$$A = \sum\limits_k a_k \, P_k \quad , $$
with the projections $P_k^2 =P_k = P_k^\dagger$ on the eigenspace corresponding to the eigenvalue $a_k$, the probability to measure $a_k$ in the state $\rho$ is given by
$$p_\rho(a_k)=\mathrm{Tr}\left(P_k\,\rho\, P_k\right) = \mathrm{Tr}\left(P_k\,\rho\right) \quad ,$$
where we've used the cyclic property of the trace together with the idempotence $P_k^2 = P_k$. One advantage I can see in explicitly writing both projectors is the fact that after the measurement, the state is given by
$$\rho \longrightarrow \rho^\prime=\frac{P_k\,\rho\,P_k^\dagger}{\mathrm{Tr}\left(P_k\,\rho\, P_k^\dagger\right)} \quad ,$$
and it is thus immediately clear that $\rho^\prime$ is properly normalized.
Further, the form of these equations suggests that this notion of a measurement (projective measurement) is a special case of a more general notion of measurement, cf. this and this.
These things are discussed in detail in e.g. Nielsen and Chuang. Quantum Computation and Information. 10th Anniversary Edition, section 2.2. and 2.4. See also this PSE post.
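A quick numerical sanity check of the equality, using a random mixed state and a rank-2 projector (the dimension 4 is arbitrary):

```python
import numpy as np

# Random 4x4 density matrix: rho = A A^dagger / Tr(A A^dagger) is positive
# semidefinite with unit trace, hence a valid mixed state.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho)

# Projector onto the span of the first two basis vectors: P^2 = P = P^dagger.
P = np.diag([1.0, 1.0, 0.0, 0.0]).astype(complex)

p1 = np.trace(P @ rho @ P)   # Tr(P rho P)
p2 = np.trace(rho @ P)       # Tr(rho P)
print(np.isclose(p1, p2))    # True: both expressions give the same probability
```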
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/690484",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
What is the order of the transition for a 2D Ising model? I have been running around the block trying to find answers for this question, and I keep running into caveats. So, I just want to write down the list of things I want to know:
Given that the order parameter is magnetization,
Consider the simple two-spin-state Ising model. Where the Hamiltonian is defined by
$$ \mathcal{H} = J\sum _{i,j} s_i s_j + B \sum s_i. $$
*The 1D Ising model shows no phase transitions as temperature changes: there are no divergences to look out for.
*In the absence of an external field $(B=0)$, will the 2D Ising model show a second-order/continuous phase transition as the temperature passes through the critical point ($T_c$)?
*In the presence of an external field $(B\neq 0)$, will the 2D Ising model show a first-order phase transition as the temperature passes through the critical point ($T_c$)?
| For your 1, suppose we are on a 2D square lattice and $J<0$ (so ferromagnetic interaction), then yes there is a continuous transition at a critical temperature between ferromagnetic and paramagnetic phases. In fact the model was solved exactly by Onsager, whose main motivation was to prove the existence of a phase transition. The lattice geometry is not crucial: the same thing happens on triangular lattice for example. But the sign of $J$ does matter sometimes.
For your 2, the critical point is located at $T=T_c, B=0$. For finite $B$, there is no singularity in free energy anymore. However, there is indeed a first-order transition as you tune $B$ from say positive to negative below the critical temperature crossing $B=0$. This first-order line terminates at the critical point.
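For concreteness, the exact location of that critical point on the square lattice follows from Onsager's solution; in units where $k_B = 1$ and $|J| = 1$:

```python
import math

# Onsager's exact result for the square-lattice Ising model at B = 0:
# the critical point satisfies sinh(2|J| / (kB * Tc)) = 1, i.e.
#   kB * Tc = 2 |J| / ln(1 + sqrt(2))
J = 1.0                                        # |coupling|, units where kB = 1
Tc = 2.0 * J / math.log(1.0 + math.sqrt(2.0))
print(Tc)   # ~2.269
```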
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/690796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is thermal conductivity additive? Suppose I have water in a beaker. I measured, its thermal conductivity by some means to be $\kappa_0$. Next, I added some salt to it so that it dissociate from it. Say,
$$AB\rightarrow A^++B^-$$
Now, I again measured the thermal conductivity and found $\kappa$. Can I say that the thermal conductivity contribution from the ions is $\kappa_0-\kappa$?
This is in the following context.
| In general, no.
First, it would not be expected to be the sum, but rather some kind of average. Otherwise, you could add fifty different materials to your mix and get a huge conductivity. So a weighted average of some kind might apply to some mixtures.
The factors affecting thermal condutivity will change in a mixture such that the result of the combination is not a simple average.
In your example, salt is presumably in a solid form before, where both the pure water and the mixture are liquids. Solids and liquids have very different behavior. Salty water has very different electrical conductivity to pure water. The density of the result will be different to the density of the components.
So no, the resultant thermal conductivity is not a simple average of the components.
Even a mechanical mixture is troublesome. Consider two solids that you ground up into a fine powder, then mixed thoroughly, then compressed. The resultant thermal conductivity might be expected to be the average of the two, possibly weighted in some way to account for the relative amounts of each. But then you would need to account for the contact between grains in the mix. That is a very complicated business indeed, depending on the compression pressure, the nature of grinding, the temperature, and the deformation character of the solids. Just to name a few.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/691013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What exactly is mass? I was looking for a definition of mass and most of the time what I got was that "it is the amount of matter". Now that is very vague. And the way we define matter is "anything that has "mass" and occupies space". so... what exactly is mass?
Can you please answer it to the level where a highschool junior/11th grader can understand?
|
«I was looking for a definition of mass and most of the time what I got was that "it is the amount of matter". Now that is very vague.»
No, that is wrong.
Chemists and physicists agree the «amount of matter» is not mass $m$ (e.g., in the unit kg), but a counting quantity, the amount of substance $n$ (in the unit mol). The mole is tied to Avogadro's constant ($\approx 6.022 \times 10^{23}\,\mathrm{mol}^{-1}$), but conceptually it is similar to, e.g., a dozen items; it is the more practical quantity in chemistry when counting electrons, atoms, molecules, etc., and when balancing reaction equations (keyword: stoichiometry).
A testament to this is the standard established by the BIPM. Though the concept is older, the mole has been one of the SI base units since 1971; it may be found, e.g., in the very first section of the four-page summary of IUPAC's Green Book (link to pdf (open access)) Quantities, Units and Symbols in Physical Chemistry, which was prepared jointly by chemists (IUPAC) and physicists (IUPAP).
Much of IUPAC's nomenclature is summarized in IUPAC's Color Books (by tradition, the color of each printed edition's cover is specific to its topic); see the section References about Nomenclature that the sibling site chemistry.se compiled, many of them freely accessible to anybody interested (open access).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/691147",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 9,
"answer_id": 8
} |
Finding relation between matrix $S$ and matrix $M$ for wave propagation We have the same scattering matrix concept in RF as in quantum physics; however, I couldn't derive an expression for the $S$ matrix using the $M$ matrix elements and vice versa. How can I derive Eq. 1.13 from Eq. 1.10 and Eq. 1.12?
| $$\begin{bmatrix}b_1 \\b_2 \end{bmatrix} =
\begin{bmatrix}
S_{11} & S_{12} \\
S_{21} & S_{22} \\
\end{bmatrix}
\begin{bmatrix}
a_1 \\a_2
\end{bmatrix} \tag{1}\label{1}$$
$$b_1= S_{11}a_1 + S_{12}a_2 \\
b_2= S_{21}a_1 + S_{22}a_2 \tag{2}\label{2}$$
$$\begin{bmatrix}b_2 \\a_2 \end{bmatrix} =
\begin{bmatrix}
M_{11} & M_{12} \\
M_{21} & M_{22} \\
\end{bmatrix}
\begin{bmatrix}
a_1 \\b_1
\end{bmatrix} \tag{3}\label{3}$$
$$b_2= M_{11}a_1 + M_{12}b_1 \\
a_2= M_{21}a_1 + M_{22}b_1 \tag{4}\label{4}$$
Now rearrange $\eqref{2}$ as follows:
$$a_2=-\frac{S_{11}}{S_{12}}a_1+\frac{1}{S_{12}}b_1\\
b_2=(S_{21}-\frac{S_{11}S_{22}}{S_{12}})a_1+\frac{S_{22}}{S_{12}}b_1$$ that is
$$ \begin{bmatrix}
M_{11} & M_{12} \\
M_{21} & M_{22} \\
\end{bmatrix}=\begin{bmatrix}
S_{21}-\frac{S_{11}S_{22}}{S_{12}}& \frac{S_{22}}{S_{12}} \\
-\frac{S_{11}}{S_{12}} & \frac{1}{S_{12}} \\
\end{bmatrix}$$
which is what you wanted to prove.
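The algebra is easy to verify numerically; the sketch below builds $M$ from a random $S$ (any $S$ with $S_{12}\neq 0$ works) and checks the defining relations on arbitrary wave amplitudes:

```python
import numpy as np

# Verify the S -> M conversion derived above on a random 2x2 S-matrix.
rng = np.random.default_rng(1)
S = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
S11, S12, S21, S22 = S[0, 0], S[0, 1], S[1, 0], S[1, 1]

M = np.array([[S21 - S11 * S22 / S12, S22 / S12],
              [-S11 / S12,            1.0 / S12]])

# Arbitrary incident amplitudes; outgoing amplitudes from the S definition:
a1, a2 = 0.3 + 0.1j, -0.7 + 0.4j
b1 = S11 * a1 + S12 * a2
b2 = S21 * a1 + S22 * a2

b2_M, a2_M = M @ np.array([a1, b1])      # M maps (a1, b1) -> (b2, a2)
print(np.isclose(b2_M, b2), np.isclose(a2_M, a2))   # True True
```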
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/691429",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What exactly is meant by wavelength in the de Broglie equation? I'm wondering what exactly is meant by the wavelength in the de Broglie formula $p=\frac{h}{\lambda}$, where $p$ is the momentum of a particle and $\lambda$ is the wavelength. I know that a wave function might very well be messy without a defined wavelength.
Can someone clear my confusion?
| The de Broglie wavelength of a particle is the wavelength of the associated matter wave when the particle behaves as a wave. Simply put, in wave-particle duality, the de Broglie wavelength is the wavelength the particle would have if it were a wave exhibiting the particle's wave-like properties; for a messy wave packet without a single sharp wavelength, it corresponds to the dominant wavelength, set by the mean momentum through $\lambda = h/p$.
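As a concrete illustration (the constants are standard SI/CODATA values; the 100 V accelerating potential is just an assumed example), the de Broglie wavelength of a slow electron comes out near atomic scales, which is why electron diffraction off crystals works:

```python
# de Broglie wavelength of an electron accelerated through an assumed 100 V.
h = 6.62607015e-34       # Planck constant, J s (exact SI value)
m = 9.1093837015e-31     # electron mass, kg
e = 1.602176634e-19      # elementary charge, C (exact SI value)

V = 100.0
p = (2.0 * m * e * V) ** 0.5   # non-relativistic momentum from e V = p^2 / 2m
lam = h / p                    # de Broglie relation, lambda = h / p
print(lam)                     # ~1.23e-10 m, about an angstrom
```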
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/691948",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Is kinetic energy relative or absolute? I only can think of kinetic energy as absolute. I know velocity is relative but I can't see kinetic energy as being relative because that would violate energy conservation. For example, if in some reference frame, the loss of kinetic energy is $60\mathrm{\ J}$, how can, in another reference frame, the loss be different? If kinetic energy is relative then its effects should also be relative. If $60\mathrm{\ J}$ produces an $X$ temperature and a $Y$ wave sound, how can this temperature and sound be different just because of changing the reference frame? It should be the same. Also, I thought that if the classical kinetic energy equation is true then the Galilean relativity is wrong because the more speed or velocity, the more loss of kinetic energy will there be in a collision, so what is true and what is wrong? Are the effects of kinetic energy equal no matter what reference frame is it? Is the kinetic energy equation wrong or is the Galilean relativity wrong?
| There is a fundamental point in the interpretation of the concept of energy: even the kinetic energy (KE) exists only in relation to other kinds of energy, whose exchange is realized through work. In this sense, the KE as an absolute quantity does not make sense. On the other hand, differences that come from energy transformations (of whatever kind) are such that they conserve the overall energy (and momentum) of the system.
Look for instance at the potential energy in its simplest form: $U = mgh$. It is defined only up to an arbitrary choice of reference height. This can be thought of as similar to the kinetic energy.
In both cases, for the transformations through work, the important concept is the difference from the initial/final KE (or potential) in a given process, not how each observer/experimenter would measure that energy.
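The frame-independence of the *difference* is easy to see in a concrete example; the sketch below redoes a perfectly inelastic collision of two 1 kg masses in two Galilean frames:

```python
# Perfectly inelastic collision of two 1 kg masses, analysed in two Galilean
# frames: the kinetic energies differ between frames, but the *loss* does not.
m1 = m2 = 1.0

def ke_lost(u1, u2):
    v = (m1 * u1 + m2 * u2) / (m1 + m2)              # momentum conservation
    before = 0.5 * m1 * u1 ** 2 + 0.5 * m2 * u2 ** 2
    after = 0.5 * (m1 + m2) * v ** 2
    return before - after

print(ke_lost(1.0, 0.0))   # 0.25 J in the rest frame of the second mass
print(ke_lost(2.0, 1.0))   # 0.25 J in a frame moving at -1 m/s
```

So the heat and sound produced (which come from the lost KE) are the same for every inertial observer, even though each observer assigns different kinetic energies before and after.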
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/692210",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 6,
"answer_id": 5
} |
A planet with a square orbit? To understand how gravity influence objects, time and space, I have been thinking of how a planets shape would change the orbits of its moons.
More specifically: can I design a planet whose moon move in a square orbit?
Below is a diagram of my first intuitive try. For simplicity I imagine a two-dimensional shape extruded with the moon moving around it.
1. Is there in theory a shape that would create a square orbit for objects moving around it?
2. If yes, what is that shape?
| You can definitely have a system with almost any shape of orbit. Try designing your square-orbit solar system as a planet orbiting a binary star. Who cares about stability for now. Most non-elliptical orbits eventually become unstable, but that does not mean all sorts of orbits do not have their day. All things are possible. Near-circular orbits are just common because they are stable; the other shapes are destroyed more easily.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/692338",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "55",
"answer_count": 8,
"answer_id": 7
} |
Is the time interval between two events at different points in the same rest frame the proper time? For example, if an observer in the rest frame of a room measures an explosion on one side of the room and then at a time $\Delta t$ later observes another explosion on the opposite side of the room, is this time interval $\Delta t$ the proper time? I know that the length of the room measured by the observer in the room will be the proper length as they are measuring the length in the same frame but I am unsure of the time.
So far, proper time has been explained to me as the time interval between two events that happen at the same point. However, I am confused as to what this means. Does this mean the two events have to happen at the same point in the frame or does it mean that the observer recording the time interval between two events in the frame has to record them at the same point. I say this because the way I see it is that the events could be the observer observing the explosions and these observations happen at the same point in space.
| Three observers standing in a room will have their clocks synchronised, they will see that their private clocks run at the same rate as the other two. If two of them snap their fingers simultaneously then all three will agree that the snaps were simultaneous, after allowing for the speed of light (they don't hear but see the snaps). If one observer snaps his finger one second after another then all three will agree that there was a one second gap.
But if they start tossing a clock at each other then that clock will show a slower time compared to the clocks that stay with them.
Comoving clocks show each other's proper time.
Hope it answers your question
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/692537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Double slit formula derivation. Why not $I_{\theta} = 4 I_m (\cos \beta)^2 \left( \frac{\sin{\alpha}}\alpha \right)^2$? The intensity of the double slits is given by
$$I_{\theta} = I_m (\cos \beta)^2 \left( \frac{\sin{\alpha}}\alpha \right)^2$$
where
$$\alpha = \frac{\pi a}{\lambda}\sin \theta$$
$$\beta = \frac{\pi d}{\lambda} \sin \theta$$
where $d$ is the distance between the centerlines of the slits and $a$ is the width of each slit.
I understand that the intensity of a single slit is $I_\theta = I_m \left( \frac{\sin{\alpha}}\alpha \right)^2$, but if there are two slits, shouldn't the intensity become four times as much: $I_\theta =4 I_m \left( \frac{\sin{\alpha}}\alpha \right)^2$? The two diffracted waves are coherent, so they interfere and the amplitude is twice that of a single slit; since the intensity is proportional to the square of the amplitude, the resultant intensity of the two slits should be multiplied by 4.
Note: I know the proof of all of formulas written here.
| You are correct that when you go from one slit to two slits, the center maximum is 4x greater in intensity.
I don't know which book you are going off of, but I am willing to bet that it is mainly concerned with the relative intensity rather than the absolute intensity. In other words, we aren't interested in calculating the exact value of $I_{\theta}$ based on all the variables (such as the distance from the slits to the target screen or the initial intensity of the laser, etc, etc), and instead we are interested in finding the overall look of the intensity pattern with respect to $\theta$.
Because we are talking about relative intensity, whether we put $I_{m}$ or $4I_{m}$ is going to be a matter of convention.
It seems like the book chose $I_{m}$ to be the intensity of the double-slit pattern at the center maximum. You're asking why can't it be the intensity of the pattern at the center maximum when one of the slits is covered.
The answer is that it can be that, but it ends up not mattering if all you care about is the relative intensity graph. The book simply chose a different convention.
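The factor of 4 at the centre is easy to confirm with a brute-force Fraunhofer sum (all numbers below — wavelength, slit width, separation — are arbitrary illustrative values, not from the question):

```python
import numpy as np

# Brute-force Fraunhofer amplitude: integrate exp(i k x sin(theta)) over the
# open parts of the aperture, for one slit and for two.
lam = 500e-9             # wavelength (assumed)
a, d = 2e-6, 10e-6       # slit width and centre-to-centre separation (assumed)
k = 2.0 * np.pi / lam

x = np.linspace(-2 * d, 2 * d, 400_001)
dx = x[1] - x[0]

def intensity(theta, centres):
    open_ap = np.zeros_like(x)
    for c in centres:                    # mark each open slit
        open_ap[np.abs(x - c) < a / 2] = 1.0
    amp = np.sum(open_ap * np.exp(1j * k * x * np.sin(theta))) * dx
    return abs(amp) ** 2

ratio = intensity(0.0, [-d / 2, d / 2]) / intensity(0.0, [0.0])
print(ratio)   # ~4: with both slits open the central maximum is 4x brighter
```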
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/692678",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What does Griffiths mean by adding a prime on integration variables? In the book "Introduction to Electrodynamics" by Griffiths, the author mentions the electric potential as a point function and writes the equation for the electric potential as
Then in a side note he writes, "To avoid any possible ambiguity, I should perhaps put a prime on the integration variable."
To what 'ambiguity' is he referring, and how does adding the prime clarify it?
| The primed coordinate is what you are integrating over, whereas the unprimed coordinate is the point in space you are computing the potential for. If $r$ was left unprimed in $E(r)$ it could be seen as ambiguous whether we are talking about the variable of integration or not.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/692837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Original BPS state paper by Bogomol'nyi I've been searching for the original paper by E.B. Bogomol'nyi titled "The Stability of Classical Solutions" online, and have yet to find a resource which holds it. So far, the closest I've come is a random website which seems to have part of it at least, but it is quite a low-resolution scan:
https://www.docsity.com/pt/the-stability-of-classical-solutions/4895287/
If anyone has access to a decent quality copy of this, it would be very helpful.
| This answer was initially just a comment, since the resource I mention is not available for free. (At least not in a chivalrous way ;) But as it was pointed out to me it could still be useful, since it is maybe more likely that your local institute library holds the book rather than the original journal article.
The paper "The Stability of Classical Solutions" by E. B. Bogomol'nyi from 1976 can be found in the book "Solitons and Particles" by Claudio Rebbi and Giulio Solani. In the version I was able to look at it is found on page 389.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/692952",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Speed of heat through an object According to the Heat equation (the PDE), heat can travel infinitely fast, which doesn't seem right to me. So I was wondering, at what speed does heat actually propogate through an object?
For example, if I have a really long iron rod at a constant temperature (say 0 Celsius), and one end of it instantenously becomes hot (e.g. 1000 Celsius), how far down the rod will the temperature have changed in 1 second? I don't care how much the temperature changes, only how far a temperature change (however minuscule) happened.
Would changing the material (e.g. steel instead of iron) or the initial temperatures change the answer?
My gut tells me the answer should be the speed of sound for the material, because that's the speed at which movement in the atoms can affect each other.
| If instead of one metal rod we have $2$ of them, made of different metals and joined at one end, a voltage is produced between the other ends when the joint is heated. In this case, the velocity of the propagation is the velocity of the electric current, close to the speed of light.
If we have only one metal rod, the heating of one of the ends cannot, of course, generate a steady electric current, but the modification in the occupied states of the conduction band due to the heating spreads along the rod, and the velocity is again close to the speed of light.
I am not saying that this is the main mechanism of heat transfer by conduction. But even for a really very small $\Delta T$, a transient current carries some heat via the Joule effect.
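As a footnote on the question's premise: within the purely diffusive (heat-equation) picture, the change at any distance is formally nonzero immediately — just absurdly small. A quick sketch for the iron-rod example (the diffusivity is an assumed textbook-style value):

```python
import math

# Semi-infinite rod, end suddenly held at T0 + dT: the heat-equation solution
#   T(x, t) - T0 = dT * erfc(x / (2 * sqrt(alpha * t)))
# is formally nonzero at every x for any t > 0 (the "infinite speed" artefact),
# but it dies off extremely fast with distance.
alpha = 2.3e-5           # thermal diffusivity of iron, m^2/s (assumed value)
t = 1.0                  # one second after the hot end appears
dT = 1000.0              # 1000 C temperature step

for x in (0.001, 0.01, 0.05, 0.1):               # metres down the rod
    rise = dT * math.erfc(x / (2.0 * math.sqrt(alpha * t)))
    print(x, rise)       # the rise collapses from hundreds of C to ~nothing
```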
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/693112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 7,
"answer_id": 3
} |
$I$ proportional to $V$ or vice versa? I am confused about whether voltage depends on current or vice versa. I always thought the latter was correct. I tried to find the answers to some of my other conceptual doubts on the web, but I was not able to understand the answers, as people were saying things beyond school level. But the answers made me confused about the question about $V$ & $I$ mentioned above. Can you please tell which one is correct and why?
Also, please tell me why it is officially stated that $H = I^2Rt$ not $VIt$.
|
I am confused whether voltage depends on current or vice versa. I always thought the latter was correct.
Both are true, but you have to know where the voltage and current are being measured to determine the dependence.
Also, please tell me why it is officially stated that $H = I^2Rt$ not
$VIt$.
Heat is only dissipated in resistance. For example, if the voltage $V$ and current $I$ are for an ideal capacitor or ideal inductor, there is no heating ($H=0$). Only the in-phase components of voltage and current result in heating. So, for heating in an AC circuit, it would be
$$H=V_{rms}I_{rms}t \cos\theta$$
Where $\theta$ is the phase angle between the voltage and current and $rms$ is the root mean square value of the current and voltage. For a purely resistive component, $\theta=0$, $\cos\theta=1$ and then $H=V_{rms}I_{rms}t$.
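For concreteness, here is a minimal sketch of that heating formula in code (the load values are arbitrary, purely illustrative):

```python
import math

def heat_dissipated(v_rms, i_rms, t, theta):
    """Heat (J) dissipated in time t; only the in-phase component of
    voltage and current contributes: H = V_rms * I_rms * t * cos(theta)."""
    return v_rms * i_rms * t * math.cos(theta)

# Purely resistive load: theta = 0, so all of V*I*t becomes heat.
print(heat_dissipated(230.0, 2.0, 60.0, 0.0))          # 27600.0
# Ideal capacitor or inductor: theta = pi/2, so (essentially) no heating.
print(heat_dissipated(230.0, 2.0, 60.0, math.pi / 2))
```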
Hope this helps.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/693245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Finding deflection of an electron through 2 charged plates when given initial velocity I've been trying to relate the initial velocity of an electron to the deflection created based on the electric field between 1 pair of plates. The 2nd half of page 3 of this pdf is what I'm concerned with. Now, I was trying to derive y (the deflection distance), where e is elementary charge, and let $\theta$ be the trajectory of the electron immediately after leaving the plates ($\sin{\theta}$ is equal to $\frac{\Delta y_1}{L}$ and probably equal to $\frac{v_y}{v}$ in the diagram, this could be part of the misunderstanding).
Explaining my approach for change in y-position through the plates only:
using $F=ma$ and $F=qE$ and $a=\frac{qE}{m}$ and $t=\frac{L}{v_x}$ where L is the length of a plate (total horizontal distance that the electric field between the plates acts on the electron) and $v_x$ is the initial velocity I get:
$$\Delta y_1=\frac{1}{2}\frac{qE}{m}\biggl(\frac{L}{v_x}\biggr)^2$$ by kinematics.
Accounting for the remaining distance (not between plates):
My thought was that $\Delta y_1=\frac{eEL^2}{2mv_x^2}$ is the change in the y-position while between the plates, so the remaining $D\sin{\theta}$ would be the extra change in y-position after leaving the plates:
$$y=D\sin{\theta} + \frac{eEL^2}{2mv_x^2}$$
vs what the pdf has:
$$y=\frac{eEL}{mv_x^2}\biggl(D+\frac{L}{2}\biggr)$$
This is almost the same as what I have, the only difference is I have $D\sin{\theta}$ instead of $D\frac{eEL}{mv_x^2}$ and I'm trying to figure out my misunderstanding.
Can someone please explain where my misunderstanding is? I can't tell why my derivation is incorrect.
| You have found yourself the cause of your misunderstanding,
($\sin{\theta}$ is equal to $\frac{\Delta y_1}{L}$ and probably equal to $\frac{v_y}{v}$ in the diagram, this could be part of the misunderstanding).
So let me clarify it for you so you really understand.
The only meaningful $\sin{\theta}$ is in fact $\frac{v_y}{v}$ , the local slope of the trajectory.
Integrating $v_y$ over the trajectory gives you $\Delta y$, but since $v_y$ is not constant over the entire trajectory, the quantity $\frac{\Delta y_1}{L}$ is not simply related to $\frac{v_y}{v}$. So $\frac{\Delta y_1}{L}$ is not a useful object during the calculation. It is the slope of the curve determined by $\frac{v_y}{v}$ at distance $L$ that gives you $\sin{\theta}$.
In fact, just by looking at the picture, you see that if the slope of the trajectory at $L$ is pushed back to the origin in $x$, you don't hit the initial point, but below that. So the slope is clearly larger than $\frac{\Delta y_1}{L}$!
It turns out the slope is exactly twice this latter value, but this is in fact irrelevant, since $\frac{\Delta y_1}{L}$ is not a useful quantity. Of course ${\Delta y_1}$ is important, but dividing it by $L$ does not tell you anything useful.
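A quick numerical check of this (the field strength, plate length and entry speed below are assumed illustrative values, not taken from the linked pdf):

```python
# Illustrative (assumed) numbers, not from the original problem.
q  = 1.602e-19   # elementary charge, C
m  = 9.109e-31   # electron mass, kg
E  = 1.0e3       # field between the plates, V/m
L  = 0.05        # plate length, m
vx = 1.0e7       # entry speed, m/s

a     = q * E / m          # constant acceleration between the plates
t     = L / vx             # time spent between the plates
dy1   = 0.5 * a * t**2     # deflection while between the plates
slope = (a * t) / vx       # true exit slope, v_y / v_x

# The exit slope is exactly twice dy1/L for a parabolic trajectory,
# which is why using dy1/L as the exit angle underestimates the deflection.
print(slope / (dy1 / L))   # 2.0

D = 0.30                   # drift distance to the screen, m (assumed)
y_total = dy1 + D * slope  # deflection built from the true exit slope
y_pdf   = (q * E * L) / (m * vx**2) * (D + L / 2)  # the pdf's formula
print(abs(y_total - y_pdf) < 1e-12)  # True: the pdf formula uses the true slope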
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/693502",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Are there any slow neutrinos? Since we now know that neutrinos have a rest-mass, we ought to be able to observe relatively slow-moving neutrinos. Have we seen any?
| They haven't been seen and I doubt they will ever be. Since they were created they have traveled almost undisturbed. I find it difficult to believe that most neutrinos from the Sun travel straight through the Earth, but it seems to be the case.
Because they have mass and interact (though weak), they should be stoppable. Maybe in between the space between the two stars of a binary system, they could collide head on inelastically. If the stars emit the same number of neutrinos as the Sun, this should happen once in a while, though I'm not sure if this can happen inelastically.
We're not sure though. Of the zillions there might actually be some slow ones.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/693767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Is there a physically meaningful example of a spacetime scalar potential? From Misner, Thorne and Wheeler, page 115.
0-Form or Scalar, $f$
An example in the context of 3-space and Newtonian physics is temperature $T\left(x,y,z\right),$ and in the context of spacetime, a scalar potential, $\phi\left(t,x,y,z\right).$
I'm trying to think of an example of such a scalar potential. Is there one? Electrostatic potential is the time component of the electromagnetic 4-vector potential, so it's really a vector with 0-valued space components.
| Within the Standard Model, the simplest model of the Higgs field is a multiplet of Lorentz scalar fields. This multiplet does have a non-trivial transformation under an underlying gauge group of the Standard Model; but under Lorentz transformations, the Higgs field is invariant, as all good Lorentz scalars should be.
Of course, the Higgs field is not a "potential" in the sense that a "potential" is a field whose derivative is a physically observable field; so if you're strictly looking for a scalar potential this isn't what you're looking for. But to the best of my knowledge it is the only fundamental scalar field we know to date.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/693884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
What would happen if you left Earth and then it was destroyed? (Potential energy and thermodynamics) So I did some research on magnetic generators and what kind of energy magnets lose when they move something. Turns out magnets lose potential energy (you need to expend energy to get the object moved closer to or further away from the magnet again), similarly to how the energy used to get something into space becomes the potential energy stored in the distance that object can then fall. So then I thought, “what if I deleted or blew the Earth up after spending energy to get away from it". So what would happen if I did that?
| If you could delete the Earth, that would cause the concept of gravitational potential energy to be quite problematic, which is one of the reasons why the conservation of mass is considered to be a solid law (there are also other issues, such as Noether's Theorem).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/694021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Can two waves be considered in phase if the phase angle is a multiple of $2\pi$? The question is essentially what the title states. A wavefront is defined as the locus of points that are in phase. So I wanted to know whether the locus would be the points of only a single circle, or of multiple circles whose points all have the same displacement. Or, in other words, can all the points that are at the peak at a specific time be considered as part of a single wavefront / in phase?
Can all the points in all the green circles be said to be in phase? Can they be said to be in the same wavefront?
| Yes. Two waves are in phase when the phase difference is an integer multiple of $2\pi$ (including zero), and out of phase when the phase difference is an odd multiple of $\pi$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/694127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Non-dimensionalizing laser system of diffeqs, Strogatz Nonlinear Dynamics and chaos 3.3.1 D The system of equations in question is
$$ \dot{n} = GnN - kn$$
$$\dot{N} = GnN - fN + p$$
where ${N(t)}$ is the number of excited atoms, ${n(t)}$ is the number of photons, ${G}$ is the gain coefficient, ${f}$ is the rate of decay for spontaneous emission, and ${p}$ is the pump strength. All parameters except ${p}$ are positive; ${p}$ can be either +ve or -ve.
The problem has us assume $\dot{N} \approx 0$, with part D asking us for what range of parameters we can use this approximation. Following examples set earlier in the text, I'd like to non-dimensionalize this equation, but I cannot for the life of me seem to figure out what units each of these coefficients should be, especially ${G}$ and ${N}$. If someone could give me hints without giving away under what conditions we can neglect the parameters (giving the dimensions of all the things or less, ideally) that would be amazing.
| As march already commented, the variables described as "numbers", $n$ and $N$, can safely be taken to be dimensionless (though, of course, we do use pseudo-units "dozen", "mole", etc. — see this question), and thus $\dot{n}=dn/dt$ will have the inverse unit of time, and so on for the other quantities.
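If it helps to see when $\dot{N} \approx 0$ is a good approximation, here is a toy integration. Note that the parameter values are arbitrary, and I use the sign convention of Strogatz's text, where stimulated emission depletes $N$ (i.e. $\dot{N} = -GnN - fN + p$); the approximation works in the regime $f \gg k$, where $N$ relaxes much faster than $n$:

```python
# Toy integration of the laser system, using the sign convention of
# Strogatz's text (stimulated emission depletes N): N_dot = -G*n*N - f*N + p.
# All parameter values are arbitrary; they are chosen so that N relaxes
# much faster than n (f >> k), the regime where N_dot ~ 0 is justified.
G, k, f, p = 1.0, 1.0, 50.0, 100.0
dt, steps = 1e-4, 200_000          # plain Euler, integrate to t = 20

n_full, N = 1.0, 0.0               # full two-variable system
n_qs = 1.0                         # quasi-static (adiabatic) version
for _ in range(steps):
    dn = G * n_full * N - k * n_full
    dN = -G * n_full * N - f * N + p
    n_full += dn * dt
    N += dN * dt
    # quasi-static: set N_dot = 0, i.e. N = p / (G*n + f)
    n_qs += (G * n_qs * p / (G * n_qs + f) - k * n_qs) * dt

# Both versions should settle on the same fixed point, n* = p/k - f/G = 50.
print(n_full, n_qs)
```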
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/694416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to make the Moon spiral into Earth? I recently watched a video of what would happen if the Moon spiraled into Earth. But the video is pretty sketchy on the physics of just what would have to happen for that to occur. At first I thought I understood (just slow the Moon down enough), but my rudimentary orbital mechanics isn't enough to convince me that's sufficient (e.g., wouldn't the Moon just settle into a lower orbit?).
What forces would have to be applied to the Moon to get it to spiral into the Earth, at what times? What basic physics are involved? (And why should I have already known this if I could simply remember my freshman Physics?)
|
wouldn't the Moon just settle into a lower orbit?
Yes. To make the moon spiral into the earth requires continuous application of some drag or retarding force, not just some singular event.
What forces would have to be applied to the Moon to get it to spiral into the Earth, at what times?
If the only (significant) force on the moon is the earth's gravity, then it will move in an ellipse. To change that orbit requires a force. For a spiral, you need a constant drag. Real sources of such drag on moons might be an atmosphere or tidal losses.
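As a toy illustration of that (not a model of the real Earth-Moon system; the units and drag coefficient are arbitrary):

```python
import math

# Toy 2-D orbit with a small velocity-proportional drag, in units where GM = 1.
GM, drag = 1.0, 0.01
x, y = 1.0, 0.0          # start on a circular orbit of radius 1
vx, vy = 0.0, 1.0        # circular speed at r = 1 is sqrt(GM/r) = 1
dt = 1e-3

r0 = math.hypot(x, y)
for _ in range(50_000):   # integrate to t = 50 (several orbits)
    r = math.hypot(x, y)
    ax = -GM * x / r**3 - drag * vx   # gravity plus drag
    ay = -GM * y / r**3 - drag * vy
    vx += ax * dt; vy += ay * dt      # semi-implicit Euler
    x += vx * dt;  y += vy * dt

# With drag the orbit spirals inward; with drag = 0 the radius
# would stay essentially constant.
print(math.hypot(x, y) < r0)   # True
```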
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/694535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 8,
"answer_id": 2
} |
If force depends only on mass and acceleration, how come faster objects deal more damage? As we know from Newton's law, we have that $\mathbf{F} = m\cdot\mathbf{a}$. This means that as long as the mass stays constant, force depends solely on acceleration. But how does this agree with what we can observe in our day-to-day lives?
If I drop a coin on someone's head with my hand standing just a couple centimeters above their hair, they won't be bothered too much; but if I drop the same coin from the rooftop of a skyscraper, then it could cause very serious damage or even split their head open. And yet acceleration is pretty much constant near the surface of the earth, right? And even if we don't consider it to be constant, it definitely has the same value at $\sim1.7\text{ m}$ from the ground (where it hits the person's head) regardless of whether the motion of the coin started from $\sim1.72\text{ m}$ or from $\sim1 \text{ km}$.
So what gives? Am I somehow missing something about the true meaning of Newton's law?
| The force on the victim's head is the rate of change of momentum, i.e. $F = \Delta p/\Delta t$. If the stopping time is tiny, then the rate of change is huge. This is why crumple zones in cars work: they increase the time over which the passengers' momentum changes.
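A sketch with made-up numbers (the coin mass and stopping time are assumptions, and air resistance is ignored, which greatly overestimates the skyscraper case, but the scaling is the point):

```python
import math

# Same coin, same (assumed) stopping time on impact; only the drop height differs.
m, g, dt_stop = 0.005, 9.81, 0.005   # kg, m/s^2, s (illustrative values)

def impact_force(h):
    v = math.sqrt(2 * g * h)         # speed just before impact
    return m * v / dt_stop           # F = (change in momentum) / (stopping time)

f_low, f_high = impact_force(0.02), impact_force(300.0)
# Force scales like sqrt(height): sqrt(300 / 0.02) ~ 122x larger.
print(f_high / f_low)
```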
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/694736",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 14,
"answer_id": 11
} |
Time constant versus half-life — when to use which? In some systems we use half-life (like in radioactivity) which gives us time until a quantity changes by 50% — while in other instances (like in RC circuits) we use time constants. In both cases the rate of change of a variable over time is proportional to the instantaneous value of variable.
What is a simple intuitive way to know the difference between the kind of systems where half-life is useful, versus systems where time constants are more meaningful? (Does it have anything to do with the shape of the curve representing the change in value over time, for example?)
| A wrong but slightly useful heuristic is to use a time constant for events that are repetitive and a half-life for events that are one-off.
More useful is to use whatever is used by others in that particular field. Half-life for radioactivity, time constant for electronic filters, time till failure for reliability calculation, annual percents for money, birth rate in demographics, R value for diseases, bushels per acre in agriculture.
Using values that are not common in that field is unwelcome in general, like a negative half-life for population growth or an R value for money growth. Probably they will get what you mean, but they won't like it for sure.
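For completeness, the two conventions describe the same exponential decay $e^{-t/\tau}$ and convert via a factor of $\ln 2$:

```python
import math

def half_life(tau):
    """Half-life of an exponential decay exp(-t/tau)."""
    return tau * math.log(2)

tau = 3.0                        # arbitrary time constant
t_half = half_life(tau)          # ~2.08 for tau = 3
print(math.exp(-t_half / tau))   # ~0.5: after one half-life, half remains
```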
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/694850",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 2
} |
Acceleration as a function of displacement I am given a question in which a 0.280 kg object has a displacement (in meters) of $x=5t^3-8t^2-30t$. I need to find the average net power input over the interval from $t=2.0$ s to $t=4.0$ s.
I know the formula for average net power is $\frac{\int^{x_2}_{x_1}F \ dx}{t_2-t_1}$, since the force (and hence the acceleration) is not constant.
The acceleration is given by $a = 30t-16$. The force is then given by $F=ma$, but since $m$ is a constant, I intend to ignore it in my calculations. As such, all I need to do is to express acceleration as a function of displacement $x$.
I initially tried to substitute $t=\frac{a+16}{30}$ into the displacement equation, but ended up with a complex expression in $a$ that I could not integrate $x$ against.
I then attempted to use the chain rule, with $\frac{da}{dx} = \frac{da}{dt} \div\frac{dx}{dt} =\frac{30}{15t^2-16t-30}$, but this is an expression in $t$ and I still cannot perform $\int^{x_2}_{x_1} F \ dx$.
Does anyone have any advice on what I can do? Many thanks for any help extended!
| You can use the fact that $\frac{dx}{dt} = 15t^{2} -16t -30$ first of all. Then, you can substitute $dx = v\,dt$ in your work integral and change the limits so that instead of the displacements $x_{i}$ you have the corresponding initial and final times $t_{i}$. That should then work! If not, let me know and I can give you an explicit answer, but do try it yourself first.
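If you want to check the final answer numerically, here is a sketch of exactly that substitution: integrate $P = Fv$ over time, and cross-check with the work-energy theorem:

```python
# Numerical check for this problem: average net power over [2 s, 4 s].
# Integrating F*v over time is the dx = v dt substitution in code form.
m = 0.280

def v(t): return 15 * t**2 - 16 * t - 30   # dx/dt
def a(t): return 30 * t - 16               # dv/dt

t1, t2, n = 2.0, 4.0, 100_000
dt = (t2 - t1) / n
work = sum(m * a(t1 + (i + 0.5) * dt) * v(t1 + (i + 0.5) * dt) * dt
           for i in range(n))              # midpoint rule for W = int F v dt

# Cross-check with the work-energy theorem: W = (1/2) m (v2^2 - v1^2).
w_check = 0.5 * m * (v(t2)**2 - v(t1)**2)
print(work / (t2 - t1), w_check / (t2 - t1))   # both ~1491.8 W
```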
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/695020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to understand the notion of critical temperature in thermodynamics? I just want to verify my understanding of the notion of critical temperature of fluids, because the more I read about it in the literature, the more confused I become.
My main clue for understanding this notion is the statement that latent heat becomes zero at the critical temperature. That is, no heat input is required to cause a phase transition from liquid to gas, and this implies also that if a piece of supercritical fluid is thrown into a low pressure area then the whole piece evaporates immediately.
Therefore I believe the critical temperature to be simply a measure of the natural strength of intermolecular attraction forces - above the critical temperature the molecules have enough kinetic energy to overcome their attractive forces (no external "help" is required). This seems to me consistent with Dmitri Mendeleev's term for critical temperature, "absolute boiling point".
I know my question seems like a bit of a strange question, since it is mainly a verification question, but I really need help to understand this notion. I'll also be glad to hear different approaches to understand/gain intuition about "critical temperature".
| The surface energy of condensed matter generally decreases with increasing temperature because the increasing molecular vibration combats cohesion. The critical temperature is the temperature where the surface energy has dropped to zero. At this and higher temperatures, there's no driving force to form a condensed phase.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/695121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Commutators as contour integrals in 2D CFT, and classical limits In a 2D CFT, the commutator of two operators
$$A_i=\oint a_i(z)dz$$
can be given by
$$[A_1,A_2]=\oint_0dw\oint_wdza_1(z)a_2(w)$$
where the $z$ integral is taken over a contour around $w$ and the $w$ integral is taken over a contour around the origin, and $a_i(z)$ are holomorphic operators.
What does this look like in the classical limit? When the commutator is replaced with the Poisson bracket $$[\cdot,\cdot]\to i\hbar\{\cdot,\cdot\}.$$
What happens to the integral on the right hand side?
| If $$\hat{a}(z)~=~\sum_n z^{-n-h_a}\hat{a}_n
\quad\text{and}\quad
\hat{b}(w)~=~\sum_m w^{-m-h_b}\hat{b}_m,\tag{1}$$
or conversely,
$$ \hat{a}_n~=~\oint_0 \frac{\mathrm{d}z}{2\pi i}z^{n+h_a-1}\hat{a}(z)
\quad\text{and}\quad
\hat{b}_m~=~\oint_0 \frac{\mathrm{d}w}{2\pi i}w^{m+h_b-1}\hat{b}(w),\tag{2}
$$
then OP is essentially interested in the classical limit $\hbar\to 0$ of the OPE formula
$$ [\hat{a}_n,\hat{b}_m]~=~\oint_0 \frac{\mathrm{d}w}{2\pi i}\oint_w \frac{\mathrm{d}z}{2\pi i}z^{n+h_a-1} w^{m+h_b-1}{\cal R}\hat{a}(z)\hat{b}(w).\tag{3} $$
Here the symbol ${\cal R}$ denotes radial ordering,
$${\cal R} \hat{a}(z)\hat{b}(w)~:=~\left\{ \begin{array}{rcl} \hat{a}(z)\circ\hat{b}(w)&{\rm for}&|z|>|w|, \cr \hat{b}(w)\circ\hat{a}(z)&{\rm for}&|w|>|z|.\end{array}\right. \tag{4}$$
The symbol ${\cal R}$ itself is often implicitly implied in CFT texts. Concerning the proof of the formula (3), see e.g. this Phys.SE post.
It seems OP is already well aware of the classical limit $\hbar\to 0$ of the commutator on the LHS of eq. (3) in terms of the Poisson bracket, cf. e.g. this Phys.SE post. Obviously the classical limit $\hbar\to 0$ of the RHS of eq. (3) has to yield precisely the same result.
One idea for further insight is to replace the operators $\hat{a}(z)$, $\hat{b}(w)$ and composition $\circ$ with symbols/functions $a(z)$, $b(w)$ and the star product $\star$, and then expand in $\hbar$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/695343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Simplification of nested time-ordered products I'm trying to progress towards understanding, and perhaps finding a proof for, the "nested" Wick's theorem for time-ordered products $T\{ \ldots \}$ alluded to in part (II) of this answer.
Assuming bosonic operators for now, I've noticed that
$$T\{ T\{ A(t_1)B(t_2) \} T\{ C(t_3)D(t_4) \} \} \equiv T\{ A(t_1)B(t_2)C(t_3)D(t_4) \}\tag{1}$$
through brute force calculation. Is the natural generalisation of this,
$$T\{ T\{ \ldots_1 \} \ldots_2 T\{ \ldots_3 \} \ldots_4 ~~~\ldots~~~ T\{ \ldots_{n-1} \} \ldots_n \} \equiv T\{ \ldots_1 \ldots_2 ~~~\ldots~~~ \ldots_n \},\tag{2}$$
also true?
| *
*Let us first define a $n$-ary Heaviside step function:
$$\begin{align} \theta&(t_1\geq t_2\geq\ldots \geq t_n)\cr
~:=~&\left\{\begin{array}{rl} 0&\text{if ineq. is violated}, \cr
\frac{1}{m_1!\ldots m_r!}&\text{if ineq. holds and there are $r$ sets of equal} \cr
&\text{times with multiplicities } m_1, \ldots, m_r. \end{array} \right.\end{align} \tag{A}$$
It satisfies
$$ \sum_{\pi\in S_n} \theta(t_{\pi(1)}\geq t_{\pi(2)}\geq\ldots \geq t_{\pi(n)})~=~1. \tag{B}$$
*Next define time-ordering $T$ for Grassmann-even$^1$ operators as
$$ T(A_1 \ldots A_n)~:=~\sum_{\pi\in S_n} \theta\left( t(A_{\pi(1)})\geq \ldots \geq t(A_{\pi(n)})\right) A_{\pi(1)} \ldots A_{\pi(n)}. \tag{C}$$
Time-ordering $T$ is a multi-linear map and it may be viewed as a symmetric operad
$$T(A_1 \ldots A_n)~=~T(A_{\pi(1)} \ldots A_{\pi(n)}), \qquad \pi~\in~ S_n.\tag{D} $$
It satisfies a generalized idempotency
$$\begin{align}T&\left(T(A_1 \ldots A_r)T(B_1 \ldots B_s)\ldots Z_1 \ldots Z_u \right)\cr
~\stackrel{(C)}{=}~~&\sum_{\pi\in S_r} \theta\left( t(A_{\pi(1)})\geq \ldots \geq t(A_{\pi(r)})\right)\cr
&\sum_{\sigma\in S_s} \theta\left( t(B_{\sigma(1)})\geq \ldots \geq t(B_{\sigma(s)})\right)\ldots\cr
&T\left(A_{\pi(1)} \ldots A_{\pi(r)}B_{\sigma(1)} \ldots B_{\sigma(s)}\ldots Z_1 \ldots Z_u \right)\cr
~\stackrel{(B)+(D)}{=}&T(A_1 \ldots A_r B_1 \ldots B_s \ldots Z_1 \ldots Z_u )
,\end{align}\tag{E} $$
which is OP's eq. (2).
--
$^1$ There is a straightforward generalization to Grassmann-graded operators.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/695517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Microwave inside-out cooking true/false The wikipedia article on microwave ovens says
Another misconception is that microwave ovens cook food "from the inside out", meaning from the center of the entire mass of food outwards.
It further says that
with uniformly structured or reasonably homogenous food item, microwaves are absorbed in the outer layers of the item at a similar level to that of the inner layers.
However, on more than one occasion I've microwaved a stick of butter, and the inside melts first, then the outside caves in releasing a flood of butter. (It may be relevant that my microwave turntable does not turn - but since I've done it more than once, I would not expect it to be a fluke of placement in the standing wave. And, the resulting butter-softness seemed very strongly correlated with depth, more than I'd expect from accident.) That sure seems consistent with the food absorbing more energy on the inside than on the outside. Given that this takes place over 30 seconds or so, I'd not expect much heat exchange to occur with the butter's environment (nor inside the butter itself), so that would forestall explanations of "the air cools off the outer layer of butter", unless I'm seriously underestimating the ability of air to cool off warm butter. So what's going on?
| The whole reason why microwave ovens have turntables is that they always heat food unevenly because standing waves form inside them:
If a wave's antinode (where most energy is released) happens to be in the middle of your butter stick, the butter will melt in that place first, and may even explode if the temperature inside reaches the boiling point of water (there's often some water in the butter) before the sides of the stick have a chance to melt.
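The hot spots (antinodes) are half a wavelength apart, which you can estimate in one line (2.45 GHz is the usual magnetron frequency, assumed here):

```python
c = 2.998e8          # speed of light, m/s
f = 2.45e9           # typical (assumed) microwave oven frequency, Hz
spacing = c / f / 2  # distance between antinodes of the standing wave
print(round(spacing * 100, 1), "cm")  # ~6.1 cm
```

That spacing is comparable to the size of a stick of butter, which is why one part of it can sit on an antinode while the rest does not.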
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/695681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 7,
"answer_id": 3
} |
Time dependence of generalized coordinates and virtual displacement The Cartesian coordinates of particles are related to the generalized coordinates via a transformation (for the $x$ component of the $j$-th particle) as:
$$x_j = x_j(q_1, q_2, \ldots, q_N, t)$$
What I can't understand is why the virtual displacement, which occurs at constant time (i.e. $\delta t=0$), isn't zero. We can write the virtual displacement as:
$$\delta x_j = \sum_{i=1}^N \frac{\partial{x_j}}{\partial{q_i}}\cdot \delta q_i $$
but because the generalized coordinates can also be considered functions of time, we could then write:
$$\delta x_j = \sum_{i=1}^N \frac{\partial{x_j}}{\partial{q_i}}\cdot \dot{q_i} \cdot \delta t$$
If time is frozen, isn't the virtual displacement also $0$?
| If you want to think of a virtual displacement as a curve $s\mapsto q(s)$, since time $t$ is frozen, you cannot pick time $t$ as a curve parameter, you have to pick something else, say $s$. Hence $\delta q=\frac{dq}{ds}\delta s$. See also e.g. this, this & this Phys.SE posts and links therein.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/695823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Why should the energy of the knocked-out electron depend on the intensity of the light? I know that according to the photoelectric effect this is not true, but why was it believed that it should be, according to classical physics? Could someone explain what the classical-physics hypothesis was?
| Exposure of a metallic surface to light knocks electrons out of the surface at an electric potential difference. The surprising thing was that the intensity of the photoelectric current periodically increased and decreased depending on the wavelength of the light. This was described as early as 1887 by Hertz and Hallwachs.
Einstein's hypothesis was that light consists of quanta, later called photons. Light is not a wave but a stream of photons, which have periodically oscillating electric and magnetic field components and transport energy in packets. The electrons can leave the atom that binds them only if they absorb photons of sufficient energy.
Planck had unexpectedly arrived at the quantization of radiation in explaining blackbody radiation. He considered his result a pure calculational device, not a revival of Newton's hypothesis that light consists of corpuscles. At first, he was also skeptical of Einstein's hypothesis.
However, the quantum hypothesis turned out to be verifiable. Thus, the emission spectra of chemical elements could be recognized as the discrete emission of photons from excited electrons as they relax in the atom.
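A one-line illustration of the quantum result (Einstein's relation $E_k = hf - \phi$); the work function below is an assumed, roughly sodium-like value:

```python
# The ejected electron's maximum kinetic energy depends only on the
# photon energy (i.e. the frequency), never on the intensity.
hc = 1239.84          # eV * nm
work_function = 2.28  # eV (assumed, roughly sodium)

def ke_max(wavelength_nm):
    return hc / wavelength_nm - work_function

print(ke_max(400.0))  # ~0.82 eV, regardless of how bright the lamp is
```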
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/695899",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Does 2D circular wave reduces amplitude as it spreads out in lossless medium? When we throw a pebble into water, a 2D circular wave is generated. Suppose the water here is completely lossless, will the wave amplitude still reduce as the wave front spreads out?
In the case of a 1D lossless and infinitely long string, the wave front will travel forever?
| Yes, the amplitude will decrease due to spreading. Energy must be conserved, and as the radius of the circular wavefront increases the energy at any single point must decrease proportionately. Usually the square of a wave's amplitude is proportional to the energy density, and for circular waves the length of the wave increases with the radius. Thus, the energy density must decay as one over the radius; taking a square root then suggests the amplitude will decay as one over the square root of the radius.
In 3D you follow the same logic to conclude that the amplitude falls off as one over the radius.
In 1D, in the absence of any losses or reflecting surfaces, the amplitude will not decay at all.
It might be worth pointing out that these energy arguments do not apply only to water waves or string waves, but are general statements for all waves.
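The scaling can be summarized in a few lines (relative amplitudes, normalized to 1 at $r = 1$):

```python
import math

# Energy conservation alone fixes the geometric fall-off of amplitude
# with distance from the source in a lossless medium.
def amplitude(r, dim):
    """Relative amplitude at radius r (r = 1 gives amplitude 1)."""
    if dim == 1:   # no spreading at all: no decay
        return 1.0
    if dim == 2:   # energy spread over a circle of circumference 2*pi*r
        return 1.0 / math.sqrt(r)
    if dim == 3:   # energy spread over a sphere of area 4*pi*r^2
        return 1.0 / r

print(amplitude(4.0, 2))  # 0.5: quadrupling the radius halves a 2-D wave
print(amplitude(4.0, 3))  # 0.25
```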
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/696297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How can I calculate the intercept direction of a constant accelerating missile? I'm simulating missiles in 3d space and want the missiles to intercept a target which has constant velocity.
Given the target's velocity is "u" (a vector) and the missile's acceleration rate is "a" (a constant scalar), what is the formula to calculate the direction the missile should be facing to intercept the target?
I have tried using the quadratic formula, but that only works for constant-speed missiles. I tried to run it iteratively: calculate the intercept position, find the average velocity the missile would need to reach that point, and give that as the new speed input, but it didn't work. It's possible I messed up the formula somewhere, or maybe that's just not a proper way to calculate it.
I'm lost; any help will be greatly appreciated.
| From your question
Given the target's velocity is "u" and the missile's acceleration rate is "a", what is the formula to calculate the direction the missile should be facing to intercept the target?
I infer that you are analyzing a simple problem of straight-line motion for both target and missile. This problem would be (much) more complicated to analyze if the missile would follow a trajectory in which it is always directed towards the target. That would be a form of a well-known radiodrome (pursuit curve) problem.
Since the target moves in a straight line, we can orient our coordinate system such that the target is moving along one of the axis. Let $\vec{r}_m(t)$ and $\vec{r}_t(t)$ be missile and target position at time $t$, where
$$\vec{r}_m(0) = \vec{0} \qquad \text{and} \qquad \vec{r}_t(t) = \vec{r}_t(0) + (v_t t) \hat{k} = x_0 \hat{\imath} + y_0 \hat{\jmath} + (z_0 + v_t t) \hat{k}$$
The problem is to find an angle for $\vec{r}_m$ vector such that the missile and the target meet at time $t_0$.
The distance of the missile and the target from the origin at time $t$ is
$$|\vec{r}_m(t)| = \frac{1}{2} a t^2 \quad \text{and} \quad |\vec{r}_t(t)| = \sqrt{x_0^2 + y_0^2 + (z_0 + v_t t)^2} = \sqrt{d_0^2 + (2 z_0 + v_t t) (v_t t)}$$
where it is assumed that the missile starts from rest, and $d_0^2 = x_0^2 + y_0^2 + z_0^2$ is the initial distance between the missile and the target.
At time $t = t_0$ the missile and the target meet, which means $\vec{r}_m(t_0) = \vec{r}_t(t_0)$ and
$$|\vec{r}_m(t_0)| = |\vec{r}_t(t_0)| \quad \rightarrow \quad \frac{a^2}{4} t_0^4 - v_t^2 t_0^2 - 2 z_0 v_t t_0 - d_0^2 = 0$$
Solve the above quartic equation to get $t_0$ and then find the three (or two) angles of the $\vec{r}_t(t_0)$ vector. The missile needs to be launched at exactly these angles.
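A sketch of this recipe in code, using numpy's polynomial root finder for the quartic (the missile is assumed to start from rest, as above):

```python
import numpy as np

def intercept(target_pos, v_t, a):
    """Time and unit direction for a missile starting from rest at the
    origin with constant acceleration a, meeting a target that moves
    with speed v_t along +z starting from target_pos."""
    x0, y0, z0 = target_pos
    d0_sq = x0**2 + y0**2 + z0**2
    # a^2/4 t^4 - v_t^2 t^2 - 2 z0 v_t t - d0^2 = 0  (the quartic above)
    roots = np.roots([a**2 / 4.0, 0.0, -v_t**2, -2.0 * z0 * v_t, -d0_sq])
    # keep the smallest positive real root
    t0 = min(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    meet = np.array([x0, y0, z0 + v_t * t0])  # where the target is at t0
    return t0, meet / np.linalg.norm(meet)

# Stationary target at distance 5 with a = 2: t0 = sqrt(2*d0/a) = sqrt(5),
# and the direction is simply toward the target, (0.6, 0.8, 0).
t0, d = intercept((3.0, 4.0, 0.0), 0.0, 2.0)
print(round(t0, 3))   # 2.236
```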
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/696508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Are one-dimensional tensors of arbitrary rank just scalars? Consider a tensor of arbitrary rank (2 for this case) $A_{ij}$, and dimension one. Granted there are two indices to specify a component, but since each index can only take one value, there is only one component in this entire tensor: $A_{11}$. So, are all one dimensional tensors scalars?
Further. transformation under coordinate transform for this case:
$$(A')^{11}={\left (\frac{\partial x'}{\partial x}\right )}^2A^{11}$$
suggests that since in general $(A')^{11}$ is not equal to $A^{11}$, it is not a scalar.
So what exactly is this non-scalar one component object?
| Perhaps an example is in order. Consider e.g. a 1D charge density $\rho$ in a 1D world. It transforms as a covariant (0,1) tensor $\rho^{\prime}=\frac{\partial x}{\partial x^{\prime}}\rho,$ so it is not a scalar.
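A quick numerical illustration of this (the density and coordinate change below are chosen for illustration, not taken from the answer): the single component changes value pointwise under the transformation, yet the integrated charge is coordinate independent.

```python
import numpy as np

# Illustrative 1D density rho(x) = exp(-x^2) and coordinate change
# x' = 2x + 1, so dx/dx' = 1/2 and rho'(x') = (1/2) exp(-((x'-1)/2)^2).
x = np.linspace(-10, 10, 200001)
dx = x[1] - x[0]
rho = np.exp(-x**2)

xp = 2 * x + 1                              # same physical points, new coordinate
dxp = xp[1] - xp[0]
rho_p = 0.5 * np.exp(-((xp - 1) / 2)**2)    # rho' = (dx/dx') * rho

# The single component changes value pointwise, so it is not a scalar:
print(rho[100000], rho_p[100000])           # at x = 0 (i.e. x' = 1) the values differ

# ...but the total charge (the integral) is coordinate independent:
Q = rho.sum() * dx
Qp = rho_p.sum() * dxp
assert np.isclose(Q, Qp)
print(Q)                                    # ~ sqrt(pi)
```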
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/696747",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Could we estimate the total energy of the universe? So I know that we do not know the sum of all energy in the universe, but why can we not just estimate it with the following logic? (I assume it has some fatal flaw that prevents anyone from guessing the total energy of the universe with it.)
Since the universe is considered uniform at sufficiently large scales (if you zoom out a lot), could we take an average piece of the universe, estimate how many such pieces the universe contains, and then multiply the chunk's energy by that number?
| This has been done. The resulting energy density is called the critical density and is about 5 GeV/$c^2$ per cubic meter (i.e. about 5 proton masses per cubic meter, or roughly $10^{-26}$ kg/m$^3$). So I do not understand why you write that we do not know the sum of all energy in the universe.
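The quoted number follows from the standard formula $\rho_c = 3H_0^2/(8\pi G)$; a back-of-envelope sketch (the Hubble constant is taken as a round assumed value of 70 km/s/Mpc):

```python
import math

# Critical density rho_c = 3 H0^2 / (8 pi G).
G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
H0 = 70e3 / 3.086e22          # ~70 km/s/Mpc expressed in 1/s (1 Mpc ~ 3.086e22 m)

rho_c = 3 * H0**2 / (8 * math.pi * G)     # kg/m^3
m_proton = 1.673e-27                      # kg

print(rho_c)              # ~ 9e-27 kg/m^3, i.e. of order 1e-26 as quoted
print(rho_c / m_proton)   # ~ 5.5 proton masses per cubic meter
```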
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/696863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What is the physical interpretation of the two tree-level Feynman diagrams for $e^-e^- \to e^-e^-$ scattering? At tree level, $e^-e^- \to e^-e^-$ scattering has two Feynman diagrams; the first indicates that one electron emitted a photon which was later absorbed by the other electron:
However, I have yet to understand how we can interpret the other term:
at first glance it would seem that the electrons changed places, or simply that they have opposite 4-momenta compared to the previous diagram, but I am still not sure what the correct interpretation is.
| It is precisely what you said. When you do a scattering experiment, you're throwing in two electrons with momenta $p_1$ and $p_2$ and see two electrons coming out with momenta $q_1$ and $q_2$. However, electrons are indistinguishable, so you can't know whether the electron with momentum $p_1$ is the one with momentum $q_1$ or the one with momentum $q_2$ (to be fair, due to indistinguishability, the question doesn't even make that much sense). The first diagram can be thought of as pictorially describing the case in which $p_1$ becomes $q_1$ and $p_2$ becomes $q_2$, while the second diagram describes the possibility of $p_1$ becoming $q_2$ and $p_2$ becoming $q_1$.
It should be remarked that Feynman diagrams should not be taken too literally. They are mainly computational tools, and interpreting them as what actually, physically happens is an extra philosophical step. They surely provide a pictorial interpretation, but it is important to recall that there is nothing ensuring that is what actually happens. Some physicists do prefer to interpret them that way, but it is important to notice this is an extra philosophical step (just like choosing to interpret them as having no physical meaning).
Also, while splitting the amplitude into the two diagrams is necessary in the computation and this allows the interpretations I explained in the first paragraph, it is important to recall that electrons are indistinguishable, so there is really no way of saying (or even asking) whether the electron $p_1$ turned into $q_1$ or not.
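A small kinematic illustration of the two possibilities (momenta and units below are chosen purely for illustration): the photon four-momentum in the two diagrams is $p_1 - q_1$ and $p_1 - q_2$ respectively, i.e. the $t$- and $u$-channel invariants, which are simply exchanged when the outgoing labels are swapped.

```python
import numpy as np

def mdot(a, b):
    # Minkowski dot product, signature (+, -, -, -)
    return a[0] * b[0] - np.dot(a[1:], b[1:])

# Illustrative elastic e- e- kinematics in the CM frame (MeV units).
m, E, theta = 0.511, 10.0, 0.7
p = np.sqrt(E**2 - m**2)
p1 = np.array([E, 0.0, 0.0,  p])
p2 = np.array([E, 0.0, 0.0, -p])
q1 = np.array([E,  p * np.sin(theta), 0.0,  p * np.cos(theta)])
q2 = np.array([E, -p * np.sin(theta), 0.0, -p * np.cos(theta)])

# Photon four-momentum squared in each diagram:
t = mdot(p1 - q1, p1 - q1)   # first diagram:  p1 "becomes" q1
u = mdot(p1 - q2, p1 - q2)   # second diagram: p1 "becomes" q2
s = mdot(p1 + p2, p1 + p2)

# Relabelling the outgoing electrons q1 <-> q2 exchanges t and u,
# which is why both diagrams must be summed for identical particles.
assert np.isclose(s + t + u, 4 * m**2)
print(t, u)
```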
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/696979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Euler-Lagrange equations of motion of quantum fields in QFT A canonical way of doing quantum field theory is to start with some Lagrangian, for example that of a free scalar field
$$L=\frac{1}{2}\partial_{\mu}\phi \partial^{\mu}\phi-\frac{1}{2}m^2\phi^2$$
Then by employing the Euler-Lagrange equation (which follows from the stationary-action condition $\delta S=0$), one obtains the Klein-Gordon equation for the field
$$(\square+m^2)\phi=0$$
Then we proceed to various quantization procedure that lead us to express $\phi$ in terms of creation/annihilation operator.
However, when I read QFT texts, it is often said that the Euler-Lagrange equation does not hold exactly in QFT, and that there are various quantum fluctuations characteristic of QFT.
I don't understand this statement: didn't we start doing QFT by employing the Euler-Lagrange equation, in this case just the KG equation? Didn't we carry out quantization on the basis of this equation? Why is it said that quantization makes the original E-L equation be violated by quantum fluctuations? Can anyone give an explicit example of a quantum fluctuation violating the E-L equation?
| The Heisenberg field operators $\hat\phi(\mathbf{x},t)$ do in fact obey the E-L equations; however, expectation values of these operators don't and require corrections. For example, an expectation value containing two field operators obeys
$$(\square_x+m^2)\langle0|T\phi(x)\phi(y)|0\rangle = -i\delta^4(x-y)$$
The term on the RHS is called a contact term.
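A quick consistency check of the contact term in momentum space: the Feynman propagator is $i/(p^2-m^2)$, and since each $\partial_\mu$ brings down $-ip_\mu$, the operator $(\square+m^2)$ acts as $(-p^2+m^2)$. The product is the constant $-i$, whose Fourier transform is exactly $-i\,\delta^4(x-y)$. A symbolic sketch:

```python
import sympy as sp

# p2 stands for the invariant p^mu p_mu; this is a purely symbolic check.
p2, m = sp.symbols('p2 m', real=True)

# Feynman propagator in momentum space:
D = sp.I / (p2 - m**2)

# (box + m^2) acts as (-p^2 + m^2) in momentum space:
result = sp.simplify((-p2 + m**2) * D)
print(result)   # -I: constant in p, i.e. -i*delta^4(x - y) in position space
```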
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/697049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 2
} |