Black hole temperature in an asymptotically de Sitter spacetime I am trying to calculate the Hawking temperature of a Schwarzschild black hole in a spacetime which is asymptotically dS. Ignoring the 2-sphere, the metric is given by $ds^2=\left(1-\frac{2M}{r}-\frac{r^2}{L^2}\right)d\tau^2+\left(1-\frac{2M}{r}-\frac{r^2}{L^2}\right)^{-1}dr^2$ where $\tau=it$ the Euclidean time and $L^2=\frac{3}{\Lambda}$. In asymptotically flat space ($\Lambda=0$), one has to require that $\tau$ be periodic in the inverse temperature $\beta$ in order to prevent a conical singularity at the event horizon, from which the temperature follows. In asymptotically de Sitter spacetime however, there are 2 positive roots of $g_{\tau\tau}$: the event horizon of the black hole $r_h$, but also the cosmological horizon $r_c>r_h$. As in the flat case, we can deduce the period of $\tau$ that is needed to prevent a conical singularity at $r_h$, but then we're still left with a conical singularity at $r_c$. Similarly we could make $\tau$ periodic in a way such that the conical singularity at $r_c$ disappears. However, we cannot make both conical singularities disappear! Then how can we derive the black hole's Hawking temperature in this case? Should I just ignore the singularity at the cosmological horizon? Or should I use different coordinate patches?
See L. Rodriguez and T. Yildirim, Class. Quantum Grav. 27, 155003 (2010), arXiv:1003.0026. Section 2.3 has the Schwarzschild-dS calculation. Let's define $f(r)=1-\frac{2M}{r}-\frac{r^2}{L^2}$. The radius of the horizon is given by the largest real root of $f(r)=0$. But of course the limit $L\rightarrow \infty$ is still important: once you obtain the energy-momentum tensor for the fields near the horizon, you need to impose Unruh boundary conditions, which includes taking the limit $L\rightarrow \infty$. In light-cone coordinates, $T_{++}=0$ for $r\rightarrow \infty$, $L\rightarrow \infty$, and $T_{--}=0$ for $r\rightarrow r_+$. This fixes the integration constants. The anomaly is cancelled by the Hawking flux $\langle T_{++} \rangle= \frac{\pi}{12}T_H^2$, where $T_H$ is the Hawking temperature.
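As a quick numerical illustration of the two-horizon structure (a sketch in geometric units $G=c=1$; the values of $M$ and $L$ are arbitrary), one can locate both positive roots of $f(r)$ and compute the "naive" temperature $T=|f'(r_{\text{horizon}})|/4\pi$ associated with regularity at each horizon separately; the two periods generally differ, which is exactly the clash described in the question.

import numpy as np

# f(r) = 1 - 2M/r - r^2/L^2 = 0  is equivalent to  r^3 - L^2 r + 2 M L^2 = 0
M, L = 1.0, 10.0                       # illustrative values; need 27 M^2 < L^2 for two positive roots

roots = np.roots([1.0, 0.0, -L**2, 2.0 * M * L**2])
r_h, r_c = sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)

def fprime(r):
    return 2.0 * M / r**2 - 2.0 * r / L**2

T_bh  = fprime(r_h) / (4.0 * np.pi)    # period fixing regularity at the black-hole horizon
T_cos = -fprime(r_c) / (4.0 * np.pi)   # period fixing regularity at the cosmological horizon (f' < 0 there)

print(r_h, r_c, T_bh, T_cos)           # the two temperatures do not coincide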
{ "language": "en", "url": "https://physics.stackexchange.com/questions/104015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
Using sound to separate molecular structures Sound can be a destructive force. However, could it be used to separate, say, the hydrogen atom from the oxygen atoms?
Molecules are broken apart when they collide with one another with enough force to break the bonds that hold them together. These collisions happen all the time and depend on the density of the gas. This is what defines the mean free path. The frequency of collisions increases with density. The force involved in the collisions increases with temperature (because the temperature of a gas is related to the average kinetic energy of an ensemble of particles -- hotter gas == faster molecules == more energy transferred in collisions). Okay, so that's the background. Very, very crudely explained. To the specific question, sound is just pressure waves. Where the pressure is high in the wave, the density and the temperature of the gas increase relative to the baseline, and where it is low in the wave, the density and temperature decrease. It is therefore possible for a pressure wave to be strong enough to break apart molecules by increasing the temperature of the gas from the adiabatic compression. This happens all the time in hypersonic (Mach number > 5 or so) regimes. This applies to space re-entry bodies, missiles, meteors... The real question is when a pressure wave stops being a "sound wave" and starts being a "shock wave." I'm not an expert in acoustics but typically acoustics implies linear wave theory. This means that shocks are not "sound." Since shocks happen when the flow reaches Mach 1, and chemical dissociation starts at Mach 5 or so in air, that would seem to imply that no, sound cannot cause chemical dissociation by the definition of sound as a linear pressure wave. But shock waves certainly can. The destructive nature of sound is typically more related to exciting natural frequencies of a material than just obliterating it to pieces from the incident wave itself. The frequency of the sound wave is chosen to match the resonance of the material so it self-amplifies and the material destroys itself.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/104140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Write down equations for the electric field and magnetic field components of a linearly polarized plane wave A linearly polarized plane wave at 100 MHz is propagating in the $z$ direction. The electric field vector makes an angle of 30° with the $x$-axis. Its peak amplitude is measured to be $2.0\:\mathrm{ V m}^{-1}$. Write down equations for the electric and magnetic field components of the wave as a function of distance, $z$, and time $t$, measured in meters and seconds respectively. Assume the phase term is zero. Since the phase term is zero, I got that $E(z,t)=2\cos(kz-ωt)$. I think I should use $ω=2πf$ and $k=2πf/c$, but how can I split the electric field into $x$ and $y$ components? Also, I think $B(z,t)=E(z,t)/c$, so is the $x$ component of $B(z,t)$ equal to the $x$ component of $E(z,t)/c$? The $x$ component of the electric field at any time is $|E|\cos(30°)$ and the $y$ component of the electric field at any time is $|E|\sin(30°)$.
The $x$ component of $\mathbf{E}(z,t)$ is $|E|\cos(30°)=2\ \mathrm{V/m}\times\frac{\sqrt{3}}{2}$ and the $y$ component is $|E|\sin(30°)=2\ \mathrm{V/m}\times\frac{1}{2}$. With $k=\frac{2\pi f}{c}=\frac{2\pi}{3}\ \mathrm{rad/m}$ and $\omega=2\pi f=2\pi\times10^{8}\ \mathrm{rad/s}$, $$\mathbf{E}(z,t)=2\left(\hat{\imath}\,\tfrac{\sqrt{3}}{2}+\hat{\jmath}\,\tfrac{1}{2}\right)\cos\!\left(\tfrac{2\pi z}{3}-2\pi\times10^{8}\,t\right)\ \mathrm{V/m}$$ and, with $\hat{k}$ the unit vector along the propagation ($z$) direction, $$\mathbf{B}(z,t)=\frac{\hat{k}\times\mathbf{E}}{c}=\frac{\hat{k}\times\left(\sqrt{3}\,\hat{\imath}+\hat{\jmath}\right)}{3\times10^{8}\ \mathrm{m/s}}\cos\!\left(2\pi\left(\tfrac{z}{3}-10^{8}\,t\right)\right)\ \mathrm{V/m}=\left(\sqrt{3}\,\hat{\jmath}-\hat{\imath}\right)\tfrac{1}{3}\times10^{-8}\cos\!\left(2\pi\left(\tfrac{z}{3}-10^{8}\,t\right)\right)\ \mathrm{T}.$$
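A quick numerical check of these amplitudes (a sketch; the only inputs are the values given in the problem):

import numpy as np

f = 100e6                         # Hz
c = 3.0e8                         # m/s
E0 = 2.0                          # V/m, peak amplitude
theta = np.deg2rad(30)            # angle between E and the x-axis

k = 2 * np.pi * f / c             # ~2.09 rad/m, i.e. 2*pi/3
w = 2 * np.pi * f                 # ~6.28e8 rad/s

Ex, Ey = E0 * np.cos(theta), E0 * np.sin(theta)    # sqrt(3) and 1 V/m
B0 = E0 / c                                        # ~6.7e-9 T
Bx, By = -B0 * np.sin(theta), B0 * np.cos(theta)   # B lies along z_hat x E_hat

print(k, w, (Ex, Ey), (Bx, By))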
{ "language": "en", "url": "https://physics.stackexchange.com/questions/104228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Size of Universe after inflation I read on some website that during the period of inflation the universe underwent an incredibly fast expansion, and its size increased by a factor of $10 ^{50}$, see this link. In this field, I think, there is nothing for sure, but if there really was inflation, what does it mean that the size increased by a factor of $10 ^{50}$? From what initial extent? From 1 mm? Or from the Planck length? Or something else? Has it been established that (whatever the original measure was) the factor is $10 ^{50}$, and not, for example, $10^{40}$ or $10^{60}$?
In the phrase used in the article you link: inflated the size of the cosmos by a factor of $10^{50}$ the word size is misleading and should be replaced by scale factor. Whether the universe has a size or not isn't clear. The universe may well be infinite, in which case its size isn't defined. However the scale factor is precisely defined, and it's the scale factor that changed by $10^n$ during inflation (where the value of $n$ depends on the model you use). You probably learned Pythagoras' theorem at school, and this tells you that if you move a distance $dx$ in the $x$ direction and a distance $dy$ in the $y$ direction then the total distance you've moved, $ds$, is given by: $$ ds^2 = dx^2 + dy^2 $$ General relativity is basically a theory for calculating the distance $ds$ as in the equation above, but the expression used is rather more complicated than Pythagoras' theorem because (a) it includes movements in time and (b) spacetime can be curved. If you make a few simplifying (but physically reasonable) assumptions about the distribution of matter in the universe, general relativity tells us that the analogous expression for calculating $ds$ is: $$ ds^2 = -dt^2 + a^2(t)(dx^2 + dy^2 + dz^2) $$ This equation is called the FLRW metric if you want to investigate it further. As promised, this expression includes time (with a minus sign!) but for our purposes the important bit is $a(t)$, which is called the scale factor. If you ignore $dt$ for the moment, the expression looks much like Pythagoras' theorem, but the total distance $ds$ is multiplied by $a(t)$ so if $a(t)$ increases with time then the distance $ds$ increases with time by the same amount. What happened during inflation is that $a(t)$ increased by $10^{50}$, or $10^{60}$ or whatever number your favoured theory of inflation predicts. So when the article says the size increased by $10^{50}$ what it really means is that if you choose any two points then during inflation the distance between those points increased by $10^{50}$.
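As a side note on the size of that number, growth of the scale factor is often quoted in e-folds. A one-line conversion (hedged: the $10^{50}$ itself is model dependent, as noted above):

import math
# Growth of a(t) by a factor 10^50 expressed as e-folds N = ln(a_end / a_start)
print(50 * math.log(10))   # ~115 e-folds; many inflation models quote N of order 60 or more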
{ "language": "en", "url": "https://physics.stackexchange.com/questions/104302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Temperature of gases I can't find any law that states this (maybe the combined gas law does and I'm misinterpreting it?), but Feynman said that if you compress a gas, the temperature increases. This makes sense: think of a diesel engine (or a gas engine with insufficient octane or too high a compression ratio). Also, just thinking about a piston "hitting" particles as the gas is compressed, it makes sense that energy is imparted. But he goes on to say that when the gas expands, there is a decrease in temperature. This used to make more sense to me, but the more I think about it, it doesn't at all. Why would the particles lose energy if the container expands?
Consider a gas in a container. When the container expands, the gas cools down. The crux is in thinking about why the container expands. The reason the container expands is because there are gas particles hitting the walls, pushing them outward: they do work on the walls! This work on the walls costs them some energy, so that they now have less kinetic energy. The average kinetic energy is proportional to the temperature, so when the kinetic energy goes down, so does the temperature. This, and much more, is all neatly summarized in the ideal gas law: $$PV=nRT$$ where $R$ is the universal gas constant, $n$ the number of moles of particles, and $P$, $V$ and $T$ are the pressure, volume and temperature (in SI units).
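To put a number on the "gas does work on the walls and cools" argument, here is a minimal sketch of a reversible adiabatic (no heat exchange) expansion of a monatomic ideal gas, assuming the textbook relation $TV^{\gamma-1}=\text{const}$ with $\gamma=5/3$; the initial values are made up:

gamma = 5.0 / 3.0            # monatomic ideal gas
T1, V1 = 300.0, 1.0          # assumed initial temperature (K) and volume (m^3)
V2 = 2.0                     # volume doubles
T2 = T1 * (V1 / V2) ** (gamma - 1.0)
print(T2)                    # ~189 K: the gas cools because it did work on the walls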
{ "language": "en", "url": "https://physics.stackexchange.com/questions/104519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Where is the "event horizon" on a basketball hoop? I'm watching a lot of basketball this month. A common event is the ball going part way into the hoop and then coming out again. Announcers sometimes claim that the ball was "halfway through" when it rims out. Thinking about it, with enough rotation and friction I wouldn't be surprised if the ball could fall that far and not go through. How far can a basketball fall without being certain to fall all the way through? Assuming we are talking about NCAA men's basketball the relevant data: Circumference of the ball: 29.5-30 inches (749–762 mm) Weight of the ball: 20-22 ounces (567–624 g) Bounce of the ball: 49-54 inches (1245–1372 mm) when dropped 6 feet (1829 mm) Diameter of the rim: 18 inches (457 mm) Coefficient of friction*: 1.2 The coefficient of friction between the rim and the ball was estimated in The Engineering of Sport 7: Vol. 1 By Margaret Estivalet, Pierre Brisson. I assume the synthetic cover is used. For leather, the coefficient of friction was estimated at 0.5. For the purpose of this question, let's assume any rotational speed is possible. (It should be possible to estimate what humans can achieve, but obviously there is a limit.) Also, assume that gravity (and air pressure, if it matters) is at sea level.
Theoretically, if you don't limit the rotational speed, The ball going halfway through and coming back out is (probably) possible. All you gotta do is spin it fast enough in the right direction and launch it at the right angle so that it hits the rim perpendicularly. The normal impulse due to the collision will generate a frictional impulse upward, which will "kick" the ball upward. So by changing the velocity with which the ball collides, we can probably achieve the appropriate final velocity to get the ball out after going halfway in. One more thing, you have to spin the ball in the "forward" direction (direction in which you are throwing it) which is odd and looks difficult when you have to spin it really fast.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/104595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 1, "answer_id": 0 }
Help understanding proof in simultaneous diagonalization The proof is from Principles of Quantum Mechanics by Shankar. The theorem is: If $\Omega$ and $\Lambda$ are two commuting Hermitian operators, there exists (at least) a basis of common eigenvectors that diagonalizes them both. The proof is: Consider first the case where at least one of the operators is nondegenerate, i.e. to a given eigenvalue, there is just one eigenvector, up to a scale. Let us assume $\Omega$ is nondegenerate. Consider any one of its eigenvectors: $$\Omega\left|\omega_i\right\rangle=\omega_i\left|\omega_i\right\rangle$$ $$\Lambda\Omega\left|\omega_i\right\rangle=\omega_i\Lambda\left|\omega_i\right\rangle$$ Since $[\Omega,\Lambda]=0$ $$\Omega\Lambda\left|\omega_i\right\rangle=\omega_i\Lambda\left|\omega_i\right\rangle$$ i.e., $\Lambda\left|\omega_i\right\rangle$ is an eigenvector with eigenvalue $\omega_i$. Since this vector is unique up to a scale, $$\Lambda\left|\omega_i\right\rangle=\lambda_i\left|\omega_i\right\rangle$$ Thus $\left|\omega_i\right\rangle$ is also an eigenvector of $\Lambda$ with eigenvalue $\lambda_i$... What I do not understand is the statement/argument "Since this vector is unique up to a scale." I do not see how the argument allows to state the equation following it. What axiom or what other theorem is he using when he states "since this vector is unique up to a scale"?
Since the vector $\Lambda | \omega_i \rangle $ has the same eigenvalue as $| \omega_i \rangle $, it must be in the same invariant subspace as $| \omega_i \rangle $, which Shankar takes to be one dimensional.
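A small numerical illustration of the statement (a sketch; the matrices are made up): build two commuting Hermitian matrices, diagonalize only the nondegenerate one, and observe that its eigenvectors diagonalize the other as well.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
U = np.linalg.qr(A)[0]                                  # a random unitary

Omega = U @ np.diag([1.0, 2.0, 3.0]) @ U.conj().T       # Hermitian, nondegenerate spectrum
Lam   = U @ np.diag([5.0, -1.0, 0.5]) @ U.conj().T      # Hermitian, commutes with Omega

assert np.allclose(Omega @ Lam, Lam @ Omega)            # [Omega, Lambda] = 0

w, V = np.linalg.eigh(Omega)                            # eigenvectors of Omega alone
print(np.round(V.conj().T @ Lam @ V, 10))               # diagonal: each |omega_i> is an eigenvector of Lambda too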
{ "language": "en", "url": "https://physics.stackexchange.com/questions/104674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Proving a step in this field-theoretic derivation of the Bogoliubov de Gennes (BdG) equations In derivation of the BdG mean field Hamiltonian as follows, I have a confusion here in the second step: $H_{MF-eff} = \int d^{3}r\psi_{\uparrow}^{\dagger}(\mathbf{r})H_{E}(\mathbf{r})\psi_{\uparrow}(\mathbf{r})+\int d^{3}r\psi_{\downarrow}^{\dagger}(\mathbf{r})H_{E}(\mathbf{r})\psi_{\downarrow}(\mathbf{r}) +\int d^{3}r\triangle^{\star}(\mathbf{r})\psi_{\downarrow}(\mathbf{r})\psi_{\uparrow}(\mathbf{r})+\int d^{3}r\psi_{\uparrow}^{\dagger}(\mathbf{r})\psi_{\downarrow}^{\dagger}(\mathbf{r})\triangle(\mathbf{r})-\int d^{3}r\frac{|\triangle(\mathbf{r})|^{2}}{U}$ $ = \int d^{3}r\psi_{\uparrow}^{\dagger}(\mathbf{r})H_{E}(\mathbf{r})\psi_{\uparrow}(\mathbf{r})-\int d^{3}r\psi_{\downarrow}(\mathbf{r})H_{E}^{\star}(\mathbf{r})\psi_{\downarrow}^{\dagger}(\mathbf{r}) +\int d^{3}r\triangle^{\star}(\mathbf{r})\psi_{\downarrow}(\mathbf{r})\psi_{\uparrow}(\mathbf{r})+\int d^{3}r\psi_{\uparrow}^{\dagger}(\mathbf{r})\psi_{\downarrow}^{\dagger}(\mathbf{r})\triangle(\mathbf{r})-\int d^{3}r\frac{|\triangle(\mathbf{r})|^{2}}{U}$ $= \int d^{3}r\left(\begin{array}{cc} \psi_{\uparrow}^{\dagger}(\mathbf{r}) & \psi_{\downarrow}(\mathbf{r})\end{array}\right)\left(\begin{array}{cc} H_{E}(\mathbf{r}) & \triangle(\mathbf{r})\\ \triangle^{\star}(\mathbf{r}) & -H_{E}^{\star}(\mathbf{r}) \end{array}\right)\left(\begin{array}{c} \psi_{\uparrow}(\mathbf{r})\\ \psi_{\downarrow}^{\dagger}(\mathbf{r}) \end{array}\right)+const. $ with $H_{E}(\mathbf{r})=\frac{-\hbar^{2}}{2m}\nabla^{2}$ In the second step, we have taken $\int d^{3}r\psi_{\downarrow}^{\dagger}(\mathbf{r})\nabla^{2}\psi_{\downarrow}(\mathbf{r}) = -\int d^{3}r\psi_{\downarrow}(\mathbf{r})\nabla^{2}\psi_{\downarrow}^{\dagger}(\mathbf{r})$............(1). I can prove (by integration by parts and putting the surface terms to 0) that $\int d^{3}r\psi_{\downarrow}^{\dagger}(\mathbf{r})\nabla^{2}\psi_{\downarrow}(\mathbf{r}) = \int d^{3}r \nabla^{2}\psi_{\downarrow}^{\dagger}(\mathbf{r})\psi_{\downarrow}(\mathbf{r})$ but how is it justified to now take $\int d^{3}r \nabla^{2}\psi_{\downarrow}^{\dagger}(\mathbf{r})\psi_{\downarrow}(\mathbf{r}) = - \int d^{3}r\psi_{\downarrow}(\mathbf{r})\nabla^{2}\psi_{\downarrow}^{\dagger}(\mathbf{r})$ in order to prove (1) ?
Let's do a Fourier transform of the field operator: $$\Psi_\downarrow(r)=\frac{1}{\sqrt{N}}\sum_p\phi_p a_p$$ where $\phi_p$ is a plane wave, hence an eigenvector of $H_E$. Now it is easy to show that: $$\int d^3r\Psi_\downarrow^\dagger(r)H_E\Psi_\downarrow(r)+\int d^3r\Psi_\downarrow(r)H_E^*\Psi_\downarrow^\dagger(r)=\sum_p \frac{p^2}{2m}=const$$ I have used the anticommutation relation $\{a_p,a_p^\dagger\}=1$. Now you can absorb this constant into the final equation.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/104741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
How to show the invariant nature of some value by the group theory representations? Let's have a Dirac spinor $\Psi (x)$. It transforms in the $\left( \frac{1}{2}, 0 \right) \oplus \left( 0, \frac{1}{2} \right)$ representation of the Lorentz group: $$ \Psi = \begin{pmatrix} \psi_{a} \\ \kappa^{\dot {a}}\end{pmatrix}, \quad \Psi {'} = \hat {S}\Psi . $$ Let's have a spinor $\bar {\Psi} (x)$, which also transforms as $\left( \frac{1}{2}, 0 \right) \oplus \left( 0, \frac{1}{2} \right)$, but as a cospinor: $$ \bar {\Psi} = \begin{pmatrix} \kappa^{a} & \psi_{\dot {a}}\end{pmatrix}, \quad \bar {\Psi}{'} = \bar {\Psi} \hat {S}^{-1}. $$ How can one show formally that $$ \bar {\Psi}\Psi = inv? $$ I mean that if $\Psi \bar {\Psi}$ refers to the direct product (please correct me if I have made a mistake) $$ \left[\left( \frac{1}{2}, 0 \right) \oplus \left( 0, \frac{1}{2} \right) \right]\otimes \left[\left( \frac{1}{2}, 0 \right) \oplus \left( 0, \frac{1}{2} \right) \right], $$ what group operation corresponds to $\bar {\Psi} \Psi$? This question is strongly connected with this one.
You need to work out the tensor product and will find a direct sum of different contributions \begin{multline} [(1/2, 0) \oplus (0, 1/2)] \otimes [(1/2, 0) \oplus (0, 1/2)] =\\ \big((1/2, 0) \otimes (1/2, 0)\big) \oplus \big((1/2, 0) \otimes (0, 1/2) \big)\oplus \quad \\\big((0, 1/2) \otimes (1/2, 0)\big) \oplus \big((0, 1/2) \otimes (0, 1/2)\big) = \\ (0, 0) \oplus (1, 0) \oplus (1/2, 1/2) \oplus (1/2, 1/2) \oplus (0, 1) \oplus (0, 0)\end{multline} The states can now be classified:

* $(0, 0)$ is a scalar or pseudoscalar, i.e. the $\bar \psi \psi$ you are looking for as well as $\bar \psi \gamma_5 \psi$
* $(1/2, 1/2)$ is the vector / pseudovector component $\bar \psi \gamma^\mu \psi$ or $\bar \psi \gamma^\mu \gamma_5 \psi$
* $(1, 0)$ and $(0, 1)$ are the (anti-)self-dual parts of the tensor $\bar \psi \sigma^{\mu \nu } \psi$

All of these transform in a well-defined way under Lorentz boosts. The $(0, 0)$ part tells you that this rep will transform neither under the left-chirality nor the right-chirality $sl(2)$ that you classify the reps by. Edit: Let me add that the distribution law I used above to get from the first to the second line is one of the reasons we speak of a "direct sum" vs. "direct product".
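A trivial bookkeeping check of this decomposition (dimensions only, using $\dim(a,b)=(2a+1)(2b+1)$), just to confirm nothing was lost in the direct sum:

def dim(a, b):
    return int((2 * a + 1) * (2 * b + 1))

lhs = (dim(0.5, 0) + dim(0, 0.5)) ** 2                            # [(1/2,0)+(0,1/2)] tensor itself: 16
rhs = 2 * dim(0, 0) + dim(1, 0) + dim(0, 1) + 2 * dim(0.5, 0.5)   # 1+1+3+3+4+4 = 16
print(lhs, rhs)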
{ "language": "en", "url": "https://physics.stackexchange.com/questions/104822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Spatial bound on the internal electron structure In 2006 the radius of a possible internal structure of the electron was bounded to be below $10^{-18}\ \mathrm{m}$. This validates the approximation of electrons as point particles at long distances, e.g. in an atom. The upper bound on the internal electron radius has been derived from a very precise measurement of the $g$-factor, see New Measurement of the Electron Magnetic Moment Using a One-Electron Quantum Cyclotron. What I don't understand is how they determined the relation between the $g$-factor and the radius of the internal structure? As far as I understand they compare the $g$ factor of a point particle to the $g$ factor of an extended particle. But how do you calculate the $g$ factor of a point particle or an extended particle?
But how do you calculate the g factor of a point particle or an extended particle? This is done for a point particle, and any experimental deviation from the calculated value for a point particle would suggest structure beyond a point particle. Dirac theory predicts g=2. The anomaly (the deviation from g=2) has QED, hadronic and weak contributions, which are each calculated. The hadronic and weak contributions are small and considered to be well understood. The QED contribution to the anomaly is the main contribution and extremely difficult to calculate. Hundreds of Feynman diagrams are involved. See New Determination of the Fine Structure Constant from the Electron g Value and QED for more information.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/104896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Correct formula to express the potential generated by a single layer charge distribution Assume that the closed surface $S$ encircles a volume $V$, and that a surface charge with density $\sigma$ ("single layer") is distributed over $S$. My question regards the electrostatic potential $\phi$ generated inside the volume $V$ by this charge distribution: on the one hand, by using the superposition principle and Coulomb's law, I get $$\tag{1} \phi(x)=\frac{1}{4\pi \epsilon_0} \int_S \frac{\sigma(x')\, dS'}{\lvert x-x'\rvert}. $$ On the other hand, I know that $\phi$ solves Laplace's equation on $V$ with Neumann boundary conditions on $S$: $$\begin{cases} -\nabla^2 \phi =0 & \text{on }V\\ \frac{\partial \phi}{\partial \nu} \propto \sigma &\text{on }S \end{cases} $$ so that it may be expressed in an integral form by means of a suitable Green function (see Jackson, 3rd ed., equation (1.46) pag.39): $$\tag{2} \phi(x)=\langle \phi\rangle_S + C\int_S \sigma(x') \frac{\partial G}{\partial \nu'}(x, x')\, dS(x').$$ Question. Formulas (1) and (2) do not agree in general, otherwise all volumes would share the same Green function, and that's not true. So one of the two must be wrong. Which one is wrong, and for what physical reason?
Your equation (1) is right. Your Neumann boundary value problem is not correct. You actually have a free-space problem (your domain is $\mathbb{R}^3$). In that space some surfaces are charged. You do not have something like ideal conducting surfaces or symmetry conditions which would allow you to reduce the problem to a bounded domain. An example which shows that your Neumann boundary problem is not the right formulation for the problem: Assume $V$ is a cube and set $\sigma(x)=0$ on almost all plane surfaces of the cube except one where you set $\sigma(x)=1 \rm \frac{C}{m^2}$. Your Neumann boundary condition would wrongly imply that the field strength on the surface opposite to the one with $\sigma(x)=1\rm \frac{C}{m^2}$ would be zero.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/105012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Is radioactive decay spontaneous or random? When it comes to radioactive decay, what is the difference between random and spontaneous? For example, when the count rate of a radioactive isotope is measured, the readings fluctuate. Is this a demonstration of the randomness of the process, or of its spontaneous nature?
Spontaneous, to me, means anything that takes place on its own without the influence of any external factor, e.g. temperature, pressure, magnetic field, electric field, etc. Radioactivity, or radioactive decay, is a spontaneous process: radioactive elements continuously emit radiation as a result of processes taking place within them. It can be understood easily by the following experiment. Henri Becquerel (a physicist) once accidentally observed that a uranium salt crystal emitted radiation that turned a photographic plate black. He did nothing during this observation, yet the radiation was still emitted. This shows that radioactivity, i.e. the emission of radiation by these elements, is a spontaneous process.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/105107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 3 }
Why is the Gibbs Free Energy $F-HM$? With magnetism, the Gibbs free energy is $F-HM$, where $F$ is the Helmholtz free energy, $H$ is the auxiliary magnetic field, and $M$ is magnetization. Why is this? Normally, in thermodynamics, we Legendre transform the various free energies into each other to maximize the global entropy. In these cases, we subtract $TS$ when we are imagining a system exchanging heat with a thermal reservoir (i.e. heat bath at constant temperature $T$), add $PV$ when we exchange volume $V$ with a constant pressure reservoir at pressure $P$, and subtract $\mu N$ when we exchange particles with a chemical reservoir at constant chemical potential $\mu$. In every other case, we exchange heat, volume, and particles with the reservoir. How do we justify writing $G=F-HM$? Though it is true that $H$ is maintained constant, we don't exchange magnetization with a "magnetic reservoir".
From the Euler relation (the integrated form of the first law for an extensive system) \begin{align}U=TS+YX+\sum_j\mu_jN_j,\end{align} where $Y$ is the generalized force and $dX$ is the generalized displacement. Helmholtz free energy: \begin{align}F=U-TS=YX+\sum_j\mu_jN_j. \end{align} Gibbs free energy: \begin{align}G=U-TS-YX=\sum_j\mu_jN_j.\end{align} Therefore \begin{align}G=F-YX.\end{align} In your case, $Y=H$ and $X=M$, so we get \begin{align}G=F-HM.\end{align} See the textbook: A Modern Course in Statistical Physics by L. E. Reichl, 2nd ed. (1997), pp. 23, 42, 45.
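A complementary way to see why the $-HM$ term is the right one (a short sketch in differential form, assuming the magnetic work term is written as $H\,dM$ as above):

$$dU = T\,dS + H\,dM \;\Rightarrow\; dF = d(U - TS) = -S\,dT + H\,dM,$$
$$dG = d(F - HM) = -S\,dT - M\,dH.$$

So $G$ is the potential whose natural variables are $T$ and $H$, which is exactly the situation of a sample held at fixed temperature in an externally applied field; the Legendre transform trades the extensive variable $M$ for the intensive one $H$, even though nothing like a "magnetization reservoir" needs to exist.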
{ "language": "en", "url": "https://physics.stackexchange.com/questions/105169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 0 }
Why is titanium dioxide transparent to visible light but not to UV? I wonder why TiO2 thin films are transparent to visible light but not to UV. I did a quick search and found that it is due to the band gap of TiO2: it absorbs UV light but not visible light. I imagine this occurs because of the different wavelengths of these two types of radiation. But what is the relation between the wavelength of a certain type of radiation and the width of the band gap of a semiconducting material? And how does this affect its optical properties?
The energy per photon of light with wavelength $\lambda$ is given by: $$ E = \frac{hc}{\lambda} $$ If the energy per photon is smaller than the band gap the light cannot excite electrons from the valence to the conduction band so it will pass through the material without being absorbed. If the energy is larger than the band gap the light will excite electrons and will be (partially) absorbed. The cutoff wavelength is given by simply rearranging the above formula to get: $$ \lambda \approx \frac{hc}{\Delta E} $$ where $\Delta E$ is the band gap. I've used the approximately equal sign because band gaps are rarely sharp and the light absorption will increase over a wavelength range of around the cutoff wavelength. If you want to establish the band gap accurately you'd use a Tauc plot.
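A back-of-the-envelope number for TiO2 specifically (a sketch; the band-gap values of roughly 3.0 eV for rutile and 3.2 eV for anatase are approximate, assumed inputs):

hc_eV_nm = 1239.84                # h*c expressed in eV*nm
for Eg in (3.0, 3.2):             # approximate band gaps of rutile / anatase TiO2
    print(Eg, "eV ->", round(hc_eV_nm / Eg), "nm")
# cutoffs of ~413 nm and ~387 nm: photons with shorter wavelengths (UV) can be absorbed
# across the gap, while most of the visible range (~400-700 nm) passes through.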
{ "language": "en", "url": "https://physics.stackexchange.com/questions/105262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What happens when we bring an electron and a proton together? I have a couple of conceptual questions that I have always been asking myself. Suppose we have an electron and a proton at very large distance apart, with nothing in their way. They would feel each the other particle's field - however weak - and start accelerating towards each other. Now: 1) Do they collide and bounce off? (conserving momentum) 2) Does the electron get through the proton, i.e. between its quarks? 3) Do both charges give off Brehmsstrahlung radiation while moving towards each other? Different scenario: Suppose I can control the two particles, and I bring them very close to each other (but they are not moving so quickly as before, so they have almost no momentum). Then I let them go: 1) Would an atom be spontaneously formed? 2) If anything else happens: what kind of assumptions do we make before solving the TISE for an Hydrogen atom? Does the fact that the electron is bound enter in it? This is to say: is quantum mechanics (thus solving the Schrödinger equation) the answer to all my questions here?
Via electron capture (the inverse of beta decay), an electron and a proton can combine to form a neutron, with the release of a neutrino.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/105400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 5, "answer_id": 4 }
Magnetism due to relativity? So I have been reading in some books that magnetism does not have to be assumed a priori, but can be obtained from the electric field + special relativity. And I have seen how this leads to the common formula for the magnetic field of a current carrying wire. Fine. What about materials that are inherently magnetic? Such as iron, or magnetite? Surely their magnetic field is not a consequence of relativity? (if yes, who's moving and with respect to whom?)
Magnetism in ferromagnetic materials (e.g. iron) emerge from the spin alignment of valence electrons. Spin is a purely quantum phenomenon that occurs due to relativity, so without relativity one would not have spin in the first place. The presence of spin causes electrons to possess an intrinsic magnetic moment, equal to approximately twice the spin (this factor can actually be derived from relativistic quantum field theory).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/105565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Viewing glass from an oblique angle When I view most glass from the side it's green, which I've found out is due to impurities in the glass, specifically from iron oxide. Why is it that when I view the larger face from an oblique angle, it isn't nearly as green? I cannot personally notice any difference on the piece I have next to me even when I hold it at an angle that would be almost looking at the edge of the glass. It is pretty small (about 2.5" x 5" x .0625" or about 61mm x 127mm x 2mm, l x w x h) but I feel like it's big enough that I'd be looking through enough glass to get the green.
As you stated, the degree of green is directly dependent on the thickness of glass you stare at (Beer-Lambert law). It actually comes from the absorption of the other wavelengths by the glass. Due to refraction, even when you look at the glass from a grazing angle in the air, the light rays bend to a higher angle in the glass which makes the light path through the glass shorter (figure 2). On the contrary, when you stare at the glass from the edge, total internal reflection makes the light rays travel through the whole length of the glass to your eye (figure 3).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/105693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Total electrical potential energy of a two-particle system I have recently been studying electrostatics and I couldn't properly understand how the potential energy of a two-particle system is found. Suppose you have two particles with charges $Q_1$ and $Q_2$ respectively. The distance between them is $r$. What is the total electrical potential energy of the system comprising the two particles? Well, I know the answer. I want to know the reasons behind the answer. My book says "Fix one of the charges and then bring the other from infinity. Hence we get the answer." I am not satisfied with the answer, as I saw that in Wikipedia they find the potential of each charge with respect to the other, then add and divide by $2$. Why is taking the potential with respect to each other valid? Why do they divide by two? What does taking the potential with respect to the other mean? Do they fix one charge and bring another one? What about three charges $Q_1, Q_2$ and $Q_3$ which are situated at the vertices of an equilateral triangle with side length $r$?
The potential energy is the energy required (or work done) to pick up one of the charges from infinitely far away, and push it towards the other particle to the distance you want. Since there is a force acting on these particles (either positive or negative, depending on their relative charges) it takes energy to move them towards or away from each other.
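To make this concrete, a sketch of the standard bookkeeping: for two charges the work to assemble the configuration is
$$U=\frac{1}{4\pi\epsilon_0}\frac{Q_1Q_2}{r},$$
and the Wikipedia form $U=\frac{1}{2}\sum_i Q_iV(\mathbf r_i)$ gives the same thing, because summing each charge times the potential due to the others counts every pair twice; the factor $\frac12$ undoes that double counting. For three charges on an equilateral triangle of side $r$, summing over the three pairs gives
$$U=\frac{1}{4\pi\epsilon_0}\,\frac{Q_1Q_2+Q_2Q_3+Q_3Q_1}{r}.$$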
{ "language": "en", "url": "https://physics.stackexchange.com/questions/105732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
A question about Hamiltonian phase flow Show that if a one-parameter group of diffeomorphisms of a symplectic manifold preserves the symplectic structure then it is a locally Hamiltonian phase flow. Note that a locally Hamiltonian vector field on a symplectic manifold $(M^{2n}, \omega^2)$ is the vector field $I \omega^1$, where $\omega^1$ is a closed 1-form on $M^{2n}$ and $\omega^1(\eta) = \omega^2(\eta, I\omega^1)$.
Hints:

1. Prove that a one-parameter group $(\Phi_t)_{t\in I}$ of diffeomorphisms $\Phi_t: M \to M$ is generated by a vector field $X\in\Gamma(M)$.
2. Prove that if the one-parameter group $(\Phi_t)_{t\in I}$ preserves a form $\omega$, then ${\cal L}_{X}\omega =0$.
3. Prove that ${\cal L}_{X}\omega =0$, together with the fact that $\omega$ is a symplectic two-form (in particular the fact that $\omega$ is non-degenerate), implies that $X$ is locally a Hamiltonian vector field.
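For step 3, the key identity is Cartan's magic formula (a sketch of the missing computation):
$$\mathcal{L}_X\omega^2 = d(\iota_X\omega^2) + \iota_X(d\omega^2) = d(\iota_X\omega^2),$$
since $d\omega^2=0$. So $\mathcal{L}_X\omega^2=0$ exactly when the 1-form $\omega^1(\eta):=\omega^2(\eta, X)$ is closed (it differs from $\iota_X\omega^2$ only by a sign), and a closed 1-form is locally exact by the Poincaré lemma, $\omega^1=dH$; this is precisely the definition of a locally Hamiltonian vector field quoted in the question.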
{ "language": "en", "url": "https://physics.stackexchange.com/questions/105787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why do particles in high pressure air always flow to lower pressure? The title really says it all: Why is this case? A "Feynman type" answer would be really appreciated as I'm more of a layman that a physicist.
The answer to this question is quite intuitive when you think about what pressure is: a force per unit area. In a high pressure zone, particles experience a high force, and in a low pressure zone, they experience a lower force. The high force "overpowers" the lower force, pushing the particles from the high pressure zone to the lower pressure zone. You can also think about this from a statistical thermodynamics standpoint. Consider the following thought experiment: You have two containers, one with high pressure gas and another with lower pressure gas. The high pressure container contains a lot of particles per unit volume (that is, it's relatively "full"), and the lower pressure gas contains few particles per unit volume (it's relatively "empty"). When the two containers are put side to side and gas is allowed to flow, the "full," high pressure container will lose particles to the "empty," low pressure one, causing particles to move from high to low pressure again. Note that this effect is purely due to random movement of the particles. The equilibrium position, of equal particle densities everywhere, is simply the one that has the largest chance of happening (and an overwhelmingly large chance at that) in the long run.
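A toy simulation of the two-container thought experiment (a sketch only: particles just hop at random between the boxes, with a particle more likely to be picked from the fuller box, and random motion alone evens out the counts):

import random

random.seed(1)
left, right = 1000, 10                     # "full" (high pressure) and "empty" (low pressure) boxes
for _ in range(20000):
    if random.random() < left / (left + right):
        left, right = left - 1, right + 1  # a randomly chosen particle hops full -> empty
    else:
        left, right = left + 1, right - 1  # or the other way
print(left, right)                         # ends up fluctuating around ~505 / ~505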
{ "language": "en", "url": "https://physics.stackexchange.com/questions/105848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How can point-like particles in an ideal gas reach thermodynamical equilibrium? Having learned that the particles of an ideal gas must be point-like (for the gas to be ideal) I wonder how they can reach thermodynamical equilibrium (by "partially" exchanging momentum and energy). First, the probability that two point-like particles collide is literally zero, and second, they can only collide head-on, which implies that they can only "swap" their momenta and energies. How is this puzzle to be solved?
An ideal gas in equilibrium cannot be supposed to have reached its equilibrium from a non-equilibrium state by interaction of its particles, because by definition the particles of an ideal gas do not interact:

* Hard spheres of radius $a$ that collide elastically do interact in the time average - even when they interact "almost never". (Among other things they induce a non-vanishing virial coefficient $B_2 = \frac{2}{3}\pi a^3$.) So there is no ideal gas with finite-sized particles.
* A system of point-like particles which collide elastically cannot be distinguished from a system of point-like particles that don't interact at all.

Thus, an ideal gas is - eventually - just supposed to be in an equilibrium state. It cannot have reached it by "internal mechanisms".
{ "language": "en", "url": "https://physics.stackexchange.com/questions/105918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 0 }
Is a scaled-up aircraft carrier 'ski-ramp' a viable system to impart enough velocity to significantly assist a spacecraft to orbit? Not a boost straight up to escape velocity, just sufficient added momentum to make a significant economic saving on fuel cost, mass and complexity of a standard launch. It seems ludicrous to me how much fuel in a standard launch appears to be wasted pushing thousands of tonnes of rocket fuel in a vertical direction from a standing start. The system I envisage would have a horizontal track of possibly 2-3km (distance to be debated), acceleration via external systems (electromag or JATO?), continue up a ramp of slowly increasing gradient, with craft's own engines now running, and exit the ramp at between Mach 1 or 2. Safety run-off track if craft's own engines fail to ignite properly. Winged style craft for stability from ramp & shuttle-style landing. Similar I know to the Phys.SE 'Rail gun' question here, but acceleration could be more survivable. Had a longer script but it appeared to over-run the site limit. Acknowledge AdamRedwine's railgun question -would have added my contribution to that but my Reputation today is only 1. Acknowledge Gerry Anderson's Fireball XL5 puppet show on British TV in the '60s - the launch concept has always stayed with me! Submitted for criticism.
This concept has gotten consideration from NASA. In the NASA MagLifter concept, a speed of 300-600 miles per hour on a superconducting magnetic levitation track approximately 2.5 miles long and going up a mountain to about 10,000 feet is proposed. The option of using a helium-filled tunnel to reduce drag was also proposed.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/105973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can electrons coincidentally flow along a circuit to cause current? My understanding of circuits which are not supplied with an e.m.f. is that the electrons just flow about in random directions, and since there are so many of them, probability dictates that any forwards or useful movement is cancelled almost perfectly by the negative or unwanted movement. If this understanding is correct, is it theoretically possible for the electrons to, with an extremely low probability, mostly flow in the correct direction and generate a current, perhaps lighting up a lamp? I'm talking even probabilities like 1x10^-1000, just wondering if it is at all possible. This seems wrong to me, and if anyone could explain why this is not possible, or the flaw in my logic, that would be greatly appreciated. Apologies if there's some huge hole in my logic, or I'm being extremely stupid; I'm a student studying physics below university level and thus know little more than basic outlines of physical theories.
Yes, it is possible. The simplest qualitative answer to this is that, at the microscopic level, the electrons in a conductor are dictated by quantum mechanics, which is inherently probabilistic. Velocities and positions are rarely ever totally excluded from a given value; it's just insanely unlikely for a single electron to attain that given value. Expecting it to happen for all of them makes it much, much worse. But it is possible. A more classic example of this question is throwing a tennis ball against a brick wall: you can calculate how many times you'd have to throw it for all the atoms in the ball to tunnel through the potential of the wall, but you find that it's much more than the age of the universe.
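To get a feel for just how unlikely "all the electrons drift the same way by chance" is, here is a deliberately crude estimate (a sketch: each conduction electron is treated as an independent coin flip, and the electron count is an assumed round number):

from math import log10

N = 1e22                      # rough number of conduction electrons in a small piece of wire (assumed)
log10_probability = -N * log10(2)
print(log10_probability)      # about -3e21, i.e. a probability of order 10^(-3 x 10^21)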
{ "language": "en", "url": "https://physics.stackexchange.com/questions/106050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Gravity of very distant objects As far as I know, stars emit a finite number of photons in all directions in a given period of time, and as an observer goes further away he experiences fewer and fewer photons, to the point where the expanding shell of photons is so spread out that sometimes he will not experience any photons, making the star appear to be blinking. So my question is: Does the same apply to gravity, and is it also "blinking" at very long distances? And if so, could an observer theoretically (by chance) experience a long enough period between such blinks as to drift away as if he wasn't being influenced by the gravity of this object?
According to my limited understanding: gravity isn't part of the Standard Model and gravitons are purely hypothetical. If they exist then I'd say you're right. But if General Relativity is right then gravity doesn't come in quanta and you would always experience a small but non-zero attraction due to the curvature of spacetime.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/106151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Size of the Universe: Curved vs flat? Finite vs infinite? I have recently heard the theory that the Universe may be smaller than observed but may be curved to the extent that light rays may have looped past us once already and hence appear to have originated from further away than the source actually is located. If this is the case, would we not see light in every direction due to the fact that empty space would simply allow us to see fully around the curved universe to ourselves or other luminous objects? Would this same problem not present if the Universe were infinite?
This is similar to the idea that if the universe was infinite in size and infinitely old, the sky would be as bright as the sun, since every point in the sky would end on a star, somewhere in the infinite universe. Since this is not the case, it led people to conclude that the universe is either finite in size or in age. Similarly, even if the universe was curved, as long as the curvature is sufficiently small, the light (curving around the universe on its way to being seen by us) would not have had enough time to reach us. This agrees with experimental values that say that the universe is flat, within the error bars. Astrophysicists have looked for repeating objects in the night sky in an attempt to confirm a curved universe, but failed to find anything. This means that the universe is not smaller than observed and curved, but it does not rule out very small curvature, such that on a scale much larger than the observable universe we would see the light repeating.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/106229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Centrifugal force when there is no friction Assume that a coin is placed on a circular disk and the disk is rotated with constant angular velocity. If there is no friction between the surfaces of the disk and the coin, according to theory the coin will move away from the centre of the disk. But I am confused here: the centripetal and centrifugal forces are of equal magnitude, so why does the latter come into play effectively?
Here, because the coin is placed at the center, the centrifugal forces balance each other. Every point mass in the coin has its conjugate point at the diameter passing through it, at the same distance from the center on the other side. Hence the coin is under equilibrium and does not fly off.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/106378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Force on a moving charge in a magnetic field I need help in understanding the direction of the magnetic force in a magnetic field! I am totally confused by the directions. Why is the magnetic force perpendicular to the direction of the magnetic field and to the velocity of the charged particle? Why is it (the force) not in the same direction as the magnetic field?
A more comprehensive and deeper explanation of the Lorentz-force is based on relativistic electrodynamics as given in: http://chip-architect.com/physics/Magnetism_from_ElectroStatics_and_SR.pdf by Hans De Vries.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/106521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Clarification of multipole expansion for a point charge In Griffiths' Introduction to Electrodynamics, section 3.4.2, he points out that the monopole term is the exact potential for a single point charge. However, I was under the impression that, by superposition, different configurations of a charge distribution can act as a point charge, thus allowing other multipoles to exist? If not, how do I prove that a single point charge has only a monopole moment?
The multipole coefficients associated with a $1/|r|$ distribution $\rho$ depends on the choice of origin. For example, if you have a point charge and you choose the origin to be at that point charge, then it will have a pure monopole character. However, if you choose the origin to be elsewhere, it will have nonzero expansion coefficients other than the monopole. This is an artifact of your choice of coordinate system. To make this rigorous, let $\mathbf{I}=\{I_0^0,I_1^{-1},I_1^{0},I_1^1,...\}$ and $\mathbf{R}=\{R_0^0,R_1^{-1},R_1^{0},R_1^1,...\}$ be the set of irregular and regular solid harmonics. Then the potential $V(\mathbf{r})$ due to $\rho$ admits the exterior and interior multipole expansions $$V=\sum_{J=0}^\infty \mathbf{I}_J\rangle\left\langle\mathbf{R}_J,\rho\right\rangle\qquad\text{when }|\mathbf{r}|>r_\text{max} \\ V=\sum_{J=0}^\infty \mathbf{R}_J\rangle\left\langle\mathbf{I}_J,\rho\right\rangle\qquad\text{when }|\mathbf{r}|<r_\text{min} $$ or in matrix notation, $$V=\mathbf{I}\mathbf{R}^\dagger\rho\qquad\text{when }|\mathbf{r}|>r_\text{max} \\ V=\mathbf{R}\mathbf{I}^\dagger\rho\qquad\text{when }|\mathbf{r}|<r_\text{min}.$$ In the case where $\rho$ is purely real, we can use the real solid harmonics $\mathbf{I}'$ and $\mathbf{R}'$, which are related to the standard solid harmonics by a unitary block diagonal matrix $\mathbf{U}$ via $\mathbf{I}'=\mathbf{I}\mathbf{U}$ from which we obtain the analogous real expansions $$V=\mathbf{I}\mathbf{R}^\dagger\rho=\mathbf{I}\mathbf{U}\mathbf{U}^\dagger\mathbf{R}^\dagger\rho=[\mathbf{I}'][\mathbf{R}']^\dagger\rho=[\mathbf{I}'][\mathbf{R}']^\mathsf{T}\rho\qquad\text{when }|\mathbf{r}|>r_\text{max} \\ V=\mathbf{R}\mathbf{I}^\dagger\rho=\mathbf{R}\mathbf{U}\mathbf{U}^\dagger\mathbf{I}^\dagger\rho=[\mathbf{R}'][\mathbf{I}']^\dagger\rho=[\mathbf{R}'][\mathbf{I}']^\mathsf{T}\rho\qquad\text{when }|\mathbf{r}|<r_\text{min}$$ which has the advantage that the list of multipole moments $[\mathbf{I}']^\mathsf{T}\rho$ or $[\mathbf{R}']^\mathsf{T}\rho$ are purely real. So, why does a point charge not located at the origin have moments other than a monopole? It's for the same reason why a washing machine with a raccoon inside of it will shake around when it's on a wash cycle: it's not balanced, as the charges (or mass) are not located at the center of the relevant coordinate system. As an explicit proof of why a point charge not located at the origin can't have a pure monopole moment, suppose otherwise. Then a test charge will be uniformly accelerated towards the center of the coordinate system, instead of towards the point charge. This is a contradiction. Therefore, there must be higher moments involved. Alternatively, a detailed justification can also be obtained by applying the addition theorem for spherical harmonics, but hopefully the proof given in the previous paragraph is sufficiently illuminating to show why higher moments will appear when a point charge is not located at the chosen origin. 
Here's a numerical example to compute the moments of a single point charge located at spherical coordinate $(R,\pi/2,0)$ in Mathematica (it also computes the potential $V$ at an arbitrary point and compares it to the potential obtained from direct application of $V=1/|\mathbf{r}-\mathbf{r}_0|$):

SolidHarmonicI[l_, m_, r_, \[Theta]_, \[Phi]_] := Sqrt[(4 \[Pi])/(2 l + 1)] SphericalHarmonicY[l, m, \[Theta], \[Phi]]/r^(l + 1);
SolidHarmonicR[l_, m_, r_, \[Theta]_, \[Phi]_] := Sqrt[(4 \[Pi])/(2 l + 1)] r^l SphericalHarmonicY[l, m, \[Theta], \[Phi]];
SphToCart = CoordinateTransform["Spherical" -> "Cartesian", #] &;
r = {R, \[Pi]/2, 0}; (*Spherical coordinates of the point charge*)
Q[L_, m_] := ((-1)^m SolidHarmonicR[L, -m, ##] & @@ r) q; (*Exterior multipole moment of order (L,m)*)
MatrixForm[Table[Q[L, m], {L, 0, 5}, {m, -L, L}]]
rule = {R -> 1.2, q -> 2, rtest -> 5.2, \[Theta]test -> 1.2, \[Phi]test -> 2.3};
Chop[Sum[SolidHarmonicI[L, m, rtest, \[Theta]test, \[Phi]test] Q[L, m], {L, 0, 5}, {m, -L, L}] /. rule]
q/Norm[SphToCart@r - SphToCart@{rtest, \[Theta]test, \[Phi]test}] /. rule

The last two lines return 0.332219 and 0.332273, and the table of moments is

$$\left( \begin{array}{c} \{q\} \\ \left\{\frac{q R}{\sqrt{2}},0,-\frac{q R}{\sqrt{2}}\right\} \\ \left\{\frac{1}{2} \sqrt{\frac{3}{2}} q R^2,0,-\frac{q R^2}{2},0,\frac{1}{2} \sqrt{\frac{3}{2}} q R^2\right\} \\ \left\{\frac{1}{4} \sqrt{5} q R^3,0,-\frac{1}{4} \sqrt{3} q R^3,0,\frac{1}{4} \sqrt{3} q R^3,0,-\frac{1}{4} \sqrt{5} q R^3\right\} \\ \left\{\frac{1}{8} \sqrt{\frac{35}{2}} q R^4,0,-\frac{1}{4} \sqrt{\frac{5}{2}} q R^4,0,\frac{3 q R^4}{8},0,-\frac{1}{4} \sqrt{\frac{5}{2}} q R^4,0,\frac{1}{8} \sqrt{\frac{35}{2}} q R^4\right\} \\ \end{array} \right)$$

Note that there are nonzero moments of all orders whenever $R\neq 0$. However, the potential at the test location is correct to parts-per-thousand accuracy when the sum runs up to $L=4$. "How do I prove that a single point charge only has monopole?" Set $R=0$ in the above triangle of numbers. Everything vanishes except the monopole term.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/106564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Can a laser be designed to ionize muonic atoms so as to prevent α-sticking? Muon catalyzed fusion is currently little more than a lab curiosity today, in part because of how few hydrogen nuclei can be fused before the muon is carried away by an alpha particle. Deuterium+deuterium reactions are ten times more likely than deuterium+tritium reactions to result in a muon sticking to a helium ion. I am wondering if someone can calculate the ionization energy needed to prevent that from happening, and speculate whether a laser can be built to do it. If it is possible, it may help pave the way to clean low-temperature fusion energy that produces more power than is used to make it.
For what it's worth (I cannot verify the claims): http://www.j.sinap.ac.cn/nst/EN/article/downloadArticleFile.do?attachType=PDF&id=448 (NUCLEAR SCIENCE AND TECHNIQUES 25, 020201 (2014) - I guess this is a Chinese journal). Abstract: "Considering the mixture after muon-catalyzed fusion ($\mu$CF) reaction as overdense plasma, we analyze muon motion in the plasma induced by a linearly polarized two-colour laser, particularly, the effect of laser parameters on the muon momentum and trajectory. The results show that muon drift along the propagation of laser and oscillation perpendicular to the propagation remain after the end of the laser pulse. Under appropriate parameters, muon can go from the skin layer into field-free matter in a time period of much less than the pulse duration. The electric-field strength ratio or frequency ratio of the fundamental to the harmonic has more influence on muon oscillation. The laser affects little on other particles in the plasma. Hence, in theory, this work can avoid muon sticking to $\alpha$ effectively and reduce muon-loss probability in $\mu$CF."
{ "language": "en", "url": "https://physics.stackexchange.com/questions/106644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Majorana mass and fermion number violation How can it be shown that the Majorana mass violates fermion number by two units? Can a Noether charge even be defined in the presence of a Majorana mass term?
It should be pointed out that there is zero experimental or observational evidence for fermion number violation. Fermion number violation is predicted in grand unification theories and SUSY but, at least based on current evidence, there is no reason to think nature is fundamentally supersymmetric or that grand unification occurs in nature. The SUSY models that have useful features, like explaining the Higgs mass, have been ruled out by the LHC. Also, despite the best efforts, proton decay and neutrinoless double beta decay, both predictions of grand unification, have never been observed. And then there is superstring/M-theory, which predicts nothing at all. Maybe it's time to rethink where particle physics has been going.
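For completeness, the textbook argument the question asks about can be sketched in two lines (a sketch using a four-component field and its charge conjugate $\psi^c = C\bar\psi^T$): the Majorana mass term is
$$\mathcal{L}_M = -\tfrac{m}{2}\left(\overline{\psi^c}\,\psi + \bar\psi\,\psi^c\right),$$
and under the fermion-number transformation $\psi \to e^{i\alpha}\psi$ one has $\psi^c \to e^{-i\alpha}\psi^c$, so the two terms pick up factors $e^{\pm 2i\alpha}$ rather than staying invariant (unlike the Dirac mass $m\bar\psi\psi$). The term therefore connects states whose fermion number differs by two units, and because this $U(1)$ is explicitly broken there is no corresponding conserved Noether charge.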
{ "language": "en", "url": "https://physics.stackexchange.com/questions/106866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Are there any QM effects where charged particles are not intimately involved? Are there any QM effects that have been/could be measured from interactions involving non-charged particles? Elementary QM is all about the electron energy levels in the atom, photon - atom interactions, etc. When one looks at the nucleus, its all about quark interactions - which are also charged particles. I can think of some theoretical ones - like neutrinos orbiting a mass, but they would be hard to impossible to measure. Another possibility is the strong / weak nuclear force - but that always happens with particles that are also charged. In the end we always need matter built instruments to see a result - that's fine. You could for instance observe photons coming from some distant (metres to Mpc) away region where some interaction occurred.
Am I missing something here? Photons and the double-slit experiment do the trick. Likewise, you can entangle the polarisation degrees of freedom of photons. This is inherently quantum, and photons aren't charged. True, they are the excitations of electromagnetic fields, but they are not charged. Anyway, I guess the problem here is that, except for the neutrinos and certain gauge bosons, every elementary particle carries some electric charge, hence interacts with electromagnetic fields. However, that doesn't mean that charge is in any way special. You can work out (and "find" in condensed matter systems, I believe) a lot of different quantum field theories without electromagnetic interactions. The framework of QM is completely independent of charged particles - it's just that the electromagnetic force is pretty strong, nearly every particle participates (in contrast to the strong force, under which all leptons are uncharged), and it is the easiest to work with in a lab, which makes it feel really special with respect to QM.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/107031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Calculate the average temperature needed for a hydrogen fusion reaction My question is simple. How can we calculate the temperature needed in order to start nuclear fusion, and also the temperature after the reaction succeeds? If you can describe it, it would be really cool. I just want to know about it. Thanks
Firstly, fusion doesn't happen in the way depicted in the question. Four protons don't participate in a 4-body reaction. Instead there are many intermediate steps (see the proton-proton chain diagram: https://upload.wikimedia.org/wikipedia/commons/thumb/7/78/FusionintheSun.svg/421px-FusionintheSun.svg.png). Each step has its own reaction rate. The overall reaction rate is determined by the rate-limiting step (http://en.wikipedia.org/wiki/Rate-determining_step). The proton-proton reaction is the rate-limiting step in this case. It is important to think in terms of the rate at which a reaction occurs, rather than whether or not it will occur. The reaction rate will depend on temperature and pressure. In the Sun, the pressure is ~265 billion bar, so the reaction can proceed at 10-15 million K. The rate of reaction is actually very low: the center of the Sun only produces energy at a rate of ~277 watts per cubic meter. On Earth, we cannot build a reactor with pressure this high, so higher temperature is needed. Different reactions, such as those starting with deuterium or tritium, are used to avoid the need for the proton-proton reaction. Then fusion can be achieved in the ~100 million K range, for example. Of note, the first step in the reaction is only possible because of the weak force, in which one of the protons spontaneously changes into a neutron. This is an extremely unlikely occurrence, and is in fact the primary reason the Sun lasts for billions of years.
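To make the "temperature needed" part of the question concrete, here is a rough back-of-the-envelope sketch in Python. It only estimates the temperature at which the mean thermal energy (3/2)kT would equal the Coulomb barrier between two protons at an assumed strong-force range of ~1 fm; the huge gap between that number and the Sun's core temperature is what quantum tunnelling and the high-energy tail of the thermal distribution bridge. The range r is an assumption, not a measured quantity.

    from math import pi

    k_B  = 1.381e-23     # Boltzmann constant, J/K
    e    = 1.602e-19     # elementary charge, C
    eps0 = 8.854e-12     # vacuum permittivity, F/m
    r    = 1.0e-15       # rough range of the strong force, m (assumed)

    barrier = e**2 / (4 * pi * eps0 * r)      # Coulomb energy of two protons at distance r
    T_classical = barrier / (1.5 * k_B)       # temperature where (3/2) k T equals that barrier

    print(f"Coulomb barrier ~ {barrier / e / 1e6:.2f} MeV")
    print(f"'classical' T   ~ {T_classical:.2e} K")
    print("Sun's core      ~ 1.5e7 K -> fusion proceeds only via tunnelling and the thermal tail")

Running this gives a barrier of about 1.4 MeV and a "classical" ignition temperature of order 10^10 K, a thousand times hotter than the Sun's core.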
{ "language": "en", "url": "https://physics.stackexchange.com/questions/107113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why do rocket engines have a throat? Diagrams of rocket engines like this one, (source) always seem to show a combustion chamber with a throat, followed by a nozzle. Why is there a throat? Wouldn't the thrust be the same if the whole engine was a U-shaped combustion chamber with a nozzle?
Previous answers have focused on the fluid dynamics angle. However, you can also view it from a purely thermodynamic angle, viewing the rocket engine as a heat engine. In order to get useful work (accelerated exhaust gases), you need some form of thermodynamic cycle with combustion followed by expansion. Due to conservation of energy, the amount of kinetic energy acquired by the gas will then be proportional to the amount of enthalpy (heat + pressure energy) that disappears as the exhaust gas expands and cools. This means you want to maximize the temperature in the combustion chamber and minimize the temperature of the exhaust to maximize your Carnot efficiency. You ensure this by making sure that combustion happens before expansion, with a separate combustion chamber and expansion nozzle. Furthermore, you want the gas to expand by as large a factor as possible to minimize the exhaust temperature - and the expansion ratio is proportional to the area of the nozzle exit divided by the area of the nozzle throat. This means that from thermodynamic considerations alone, we can see that it is preferable to have a very tight throat and a very large exit area. Fluid dynamics determine the exact details of nozzle shapes (de Laval nozzles etc.) that get the thermodynamic efficiency as close to the Carnot efficiency as possible, and whether the exhaust will actually expand or instead separate from the nozzle walls. But the need for a separate combustion chamber and nozzle is much simpler and can be understood without any knowledge of subsonic/supersonic flow.
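As a small numerical illustration of the thermodynamic point, here is a hedged Python sketch of the standard ideal (isentropic) exhaust-velocity formula, showing how the exit velocity grows as the pressure ratio across the nozzle increases. The chamber temperature and gas properties are made-up but representative values, not a real engine design.

    from math import sqrt

    gamma = 1.2          # ratio of specific heats of the exhaust (assumed)
    R     = 350.0        # specific gas constant of the exhaust, J/(kg K) (assumed)
    T_c   = 3500.0       # combustion-chamber temperature, K (assumed)

    def exhaust_velocity(p_ratio):
        """Ideal exhaust velocity; p_ratio = exit pressure / chamber pressure."""
        return sqrt(2 * gamma / (gamma - 1) * R * T_c
                    * (1 - p_ratio**((gamma - 1) / gamma)))

    for p_ratio in (0.5, 0.1, 0.01, 0.001):
        print(f"p_exit/p_chamber = {p_ratio:6.3f} -> v_exhaust ~ {exhaust_velocity(p_ratio):6.0f} m/s")

The smaller the exit-to-chamber pressure ratio (i.e. the more the gas is allowed to expand through a tight throat into a large exit area), the higher the exhaust velocity, which is the thermodynamic argument in code form.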
{ "language": "en", "url": "https://physics.stackexchange.com/questions/107191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52", "answer_count": 4, "answer_id": 1 }
What happens with a tunneling particle when its momentum is imaginary in QM? In classical mechanics the motion of a particle is bounded if it is trapped in a potential well. In quantum mechanics this is no longer the case and there is a non-zero probability for the particle to escape the potential through a process called quantum tunneling. This seems extraordinary from the point of view of classical mechanics because it implies the particle must cross a zone where it has imaginary momentum. I understand that from the point of view of quantum mechanics there is a non-zero probability for the particle to be in such zones. What is known about the behaviour of the particle in this zone? Links to research experiments or papers would be appreciated.
I can give you one example. In a semiconductor reverse-biased p-n junction, a potential barrier exists that prevents electrons from crossing the junction. There is an energetically-forbidden region in the vicinity of the junction. The wave functions of electron states in both the valence and conduction bands are real exponentials in this region. Additionally it's possible that the only spatial overlap between the valence and conduction bands occurs in the forbidden region. Yet optical absorption occurs due to valence-to-conduction band transitions. The interpretation is that electrons in the forbidden region are promoted from the exponential tail of the valence band to the exponential tail of the conduction band. This process is called the Franz-Keldysh effect or tunneling-assisted absorption. Here's a nice figure from the German Wikipedia page. The English page doesn't have such a nice figure.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/107261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How can the tension force be computed to test if a shape is moving or not? Source Given the coordinates of $n$ 3D joints ($1kg$ each) connected by $m$ rods. Assume rods have zero mass and joints with $z=0$ are fixed to the ground while others are free to move; will the shape move or not? If not, will it be stable? The total force at each joint has to be zero for the shape to be stable. The force at each node is * *the weight of the ball, which is $1 \times 9.8067$ *the tension force in the rods connecting that node. How can the tension force be computed?
This is a partial answer that may get you thinking in the right direction. Three rods connected in a triangle are rigid. Four or more rods connected in a square or larger polygon are flexible. This is why high-voltage power lines are supported by structures built entirely of triangles. However, triangles are not enough to ensure stability. E.g., two triangles joined along one edge can open and close like the covers of a book.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/107309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Sum of acceleration vectors If a point mass has some accelerations $\mathbf{a_1}$ and $\mathbf{a_2}$, why is it mathematically true that the "total" acceleration is $\mathbf{a}= \mathbf {a_1}+\mathbf {a_2}$?
While the other answers are all completely correct, I just want to write a more simplified answer. It's much the same as distances. If you walk 1 meter North and 1 meter East, you can add the two distance vectors and get $\sqrt2$m North-East: $$\vec{d}_1=1m[N]=(1,0),~~\vec d_2=1m[E]=(0,1)$$ $$\vec d=\vec d_1+\vec d_2=(1,1)=1m[N]+1m[E]=\sqrt2m[NE]$$ Adding acceleration vectors works the same way as adding distance vectors. You add the corresponding components (x with x, y with y, etc., in whatever coordinates you are using) and the magnitude and direction will work themselves out.
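The same bookkeeping in code — a minimal Python sketch that adds two example acceleration vectors component-wise and reports the magnitude and direction (the numbers are just illustrations):

    import math

    # Component-wise addition of two example acceleration vectors (m/s^2).
    a1 = (1.0, 0.0)          # e.g. 1 m/s^2 along x
    a2 = (0.0, 1.0)          # e.g. 1 m/s^2 along y

    a = (a1[0] + a2[0], a1[1] + a2[1])
    magnitude = math.hypot(*a)
    direction = math.degrees(math.atan2(a[1], a[0]))

    print(f"total a = {a}, |a| = {magnitude:.3f} m/s^2, angle = {direction:.1f} deg from the x-axis")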
{ "language": "en", "url": "https://physics.stackexchange.com/questions/107383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Why is the periodicity of fields in finite temperature QCD a consequence of the trace in the action? In finite temperature QCD, the gauge fields must be periodic in the temporal direction. They say this is a consequence of the trace in the action for the gauge fields. How does the trace imply that the fields must be periodic?
Let's say the trace is the expectation value. The action will be invariant, so by calculating the expectation value of the action one would expect a minimum on the path taken by a particle. This would be independent of time: the same physics will describe the dynamics tomorrow. Hence, periodicity in the temporal direction is a way of saying that if something happens right now, that something is equally likely to happen tomorrow, next week and so on.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/107500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What does it really mean that the power of a number or an exponential function is dimensionless? Is it only the power of a number or of an exponential function that is dimensionless? If the power of anything else can also be dimensionless, then please explain with examples.
To consider an example, take the case of exponential decay $$N=N_\circ e^{-\lambda t}$$ We can write this as \begin{eqnarray*} N & = & \frac{N_{\circ}}{e^{\lambda t}}\\ & = & \frac{N_{\circ}}{\underbrace{e\times e\times e\times e\times\ldots \times e}_{\lambda t\text{ times}}} \end{eqnarray*} So $\lambda t$ must be a dimensionless term that is telling how many times we should multiply $e$ by itself. Thus, $\lambda t$ must be dimensionless "overall". Individually, $\lambda$ has the dimensions of $[T^{-1}]$ which cancels with $t$ to give a net dimensionless quantity. $\underbrace{e\times e\times e\times \ldots}_{10 \text{ meters times}}$ makes no sense mathematically. We could have taken a dimensional quantity instead of $e$ but the exponent $\lambda t$ would still be dimensionless. E.g., in the kinematic equation $s=ut + \frac 12 at^2$, $t^2$ has the dimensions of $[T^2]$ but the exponent $2$ is dimensionless. The same applies to transcendental functions, i.e. logarithmic, trigonometric, etc.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/107684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Electric field in a sphere with a cylindrical hole drilled through it Suppose that you have a sphere of radius $R$ and uniform charge density $\rho$; a cylindrical hole with radius $a$ ($a\ll R$) is drilled through the center of the sphere, leaving it like a "necklace bead". I would like to find a function for the electric field (1) very far away from the sphere ($r\gg R$) and (2) inside the hole, near the center of the bead $r\ll R$. In case (1), I simply treat it as a point charge and calculating the electric field is trivial. However, I am uncertain how to approach part (2) and would appreciate any assistance. The combination of spherical and cylindrical geometries seems to make this quite tricky. I am unsure what approximation or simplification to make from the knowledge that $r\ll R$. Would it perhaps be correct to find the electric field from (1) a complete, uniformly charged sphere and (2) a cylinder of charge density $-\rho$? Summed together, the charge densities would result in our original "bead" system, so then I can just add together the expressions for the electric field. Doing case (1) is quite easy, but (2) is nontrivial for positions that are not along the axis of the cylinder, but perhaps due to our condition that $r\ll R$ and $a\ll R$, we can assume that the field from the cylinder along the $z$-axis is a good enough approximation.
I agree with the result, but I would like to explain another, more general and rapid approach. Because the radius of the hole is negligible with respect to the radius of the sphere, and the only possible direction for E compatible with the symmetry is the z axis, and finally bearing in mind that the tangential components of E are continuous, the solution is exactly the same as the one we obtain when considering only the sphere with a uniform charge distribution.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/107758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
In calculating work done by a constant force over a constant distance, why doesn't the subject's initial velocity matter? Assume a point-mass $m$ is travelling in a straight line, and a force $F$ will act on $m$ (in the same direction as $m$'s velocity) over a constant distance $d$; why doesn't $m$'s velocity matter to the calculation of work done on $m$ by $F$? Work is defined such that, in this example, the work done by $F$ on $m$ is equal to $Fd$, but it seems that if $m$ were moving slower, it would spend more time in the field, allowing $F$ more time to act on $m$, thereby doing more work. In fact, if $m$'s velocity were very great, it would hardly spend time in $F$'s field at all (so very little work done). Maybe I misunderstand work; can someone address this confusion of mine?
Well, you simply need to accept that work is given by force times distance, and it doesn't matter how long it takes. For example, the work done on a mass $m$ lifted a distance $h$ against gravity with an acceleration $g$ is given by:$$W=F\times h=mgh$$ If you are told that someone is going to drop a $1$ kilogram mass on your head from a height of $10$ metres, you may well have a lot of urgent questions, but how long the evil dropper took to get the weight up there is likely not one of them. In the case of your example, suppose you have an object with mass $m$ travelling at velocity $v_o$, when a force $F$ is applied for a distance $D$, after which it is travelling at a velocity $v_f$, having experienced an acceleration $a$. The standard constant-acceleration equations give us:$$v_f^2=v_o^2+2aD$$ Multiply by $m$, divide by $2$, and we get:$$\frac12 mv_f^2=\frac12 mv_o^2+maD=\frac12 mv_o^2+FD$$The LHS is the final kinetic energy, and the RHS is the initial kinetic energy plus the work done.
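A quick numerical check of the point (a Python sketch with made-up numbers): push the same mass over the same distance starting from two different initial speeds. The work F·d and the kinetic-energy gain agree in both cases, even though the force acts for a different length of time.

    from math import sqrt

    # Constant force F over a fixed distance d, for two different initial speeds.
    m, F, d = 2.0, 10.0, 5.0          # kg, N, m (example numbers)
    a = F / m

    for v0 in (1.0, 20.0):
        vf = sqrt(v0**2 + 2 * a * d)                  # constant-acceleration kinematics
        dKE = 0.5 * m * vf**2 - 0.5 * m * v0**2       # kinetic-energy gain
        t = (vf - v0) / a                             # time the force acts
        print(f"v0 = {v0:5.1f} m/s: work = {F*d:.1f} J, dKE = {dKE:.1f} J, time = {t:.2f} s")

Both runs print a 50 J energy gain (equal to F·d = 50 J); only the time over which the force acts differs.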
{ "language": "en", "url": "https://physics.stackexchange.com/questions/107845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 7, "answer_id": 0 }
Galaxies fading away after time Dark energy is constantly pulling all objects away from each other with increasing speed. This in turn causes a red-shift of the light from the most distant objects, where this effect is most pronounced. This red-shift will gradually increase as the objects move away faster. Is there going to be a time when the objects travelling away will emit light of such long wavelength that they will disappear from sight?
In a universe whose expansion is accelerating, there is a last emission time for light from a distant galaxy, after which newly emitted light can never reach us. Light emitted before that time will be received forever, but ever dimmer and more redshifted. See The Long–Term Future of Extragalactic Astronomy for more information.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/107919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Are Verdet Constants Temperature Dependent? The Verdet constant of a magneto-optical material shows up in the calculation of the rotation of polarized light in a medium submerged in a magnetic field. The amount of rotation is given by $$ \theta=VBd, $$ where $\theta$ is the angle of rotation of linear polarized light, $V$ is the Verdet constant, $B$ is the magnetic field (assumed to be constant over the length of the crystal), and $d$ is the path length over which the magnetic field interacts with the light. I was told that there should be a temperature dependence somewhere as well as a dependence on the wavelength of the laser? Where do these dependencies fit in this equation?
The Verdet constant is a coefficient which sums up the magneto-optical properties of the medium. So, the temperature and wavelength dependence are wrapped up in it. Fundamentals of Photonics by B.E.A. Saleh expresses the Verdet constant in terms of the wavelength as $$ V\simeq-\frac{\pi\gamma}{\lambda n} $$ where $\lambda$ is the wavelength of the light and $n$ is the index of refraction of the material, and $\gamma$ is called the magnetogyration coefficient. The magnetogyration coefficient shows up in the equation of motion of the electrons in the material which is given by $$ \mathbf{D}=\mathbf{\epsilon E}+i\epsilon_0\gamma\mathbf{B\times E}. $$ So, $\gamma$ tells how strongly the electrons in the material are curved by the magnetic field when they are driven by the electric field of the light. Determining $\gamma$ from solid state calculations is an arduous task, but the authors of this paper (unfortunately behind a paywall) say that "the effect is most likely associated with shifts in the band gap." In the same paper they measured the temperature dependence of the Verdet constant in three common magneto-optical glasses. As you can see below the temperature dependence of all of them is on the order of $\sim10^{-4}$ of the static Verdet constant. * *For SF-57 they measure $V_0=11.5\ \frac{\text{deg}}{\text{cm}}$ and $\frac{dV}{dT}=1.26\cdot10^{-4}\ \frac{1}{\text{K}}$. *For SiO$_2$ they measure $V_0=2.1\ \frac{\text{deg}}{\text{cm}}$ and $\frac{dV}{dT}=0.69\cdot10^{-4}\ \frac{1}{\text{K}}$. *For BK-7 they measure $V_0=2.3\ \frac{\text{deg}}{\text{cm}}$ and $\frac{dV}{dT}=0.63\cdot10^{-4}\ \frac{1}{\text{K}}$.
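To put those numbers in context, here is a small Python sketch using the SF-57 values quoted above as inputs. Since the quoted temperature coefficient (~1.26e-4 per kelvin) is relative to the static Verdet constant, the fractional drift of the rotation angle follows directly; the "11.5 deg/cm" value is treated simply as the rotation per centimetre under the measurement conditions of the cited paper, and the path length is an assumed example.

    # Drift of Faraday rotation with temperature for SF-57, using the values
    # quoted above.  The field/units are those of the cited measurement.
    V0     = 11.5        # deg per cm at the reference temperature (baseline, as quoted)
    alpha  = 1.26e-4     # relative temperature coefficient (1/V) dV/dT, 1/K (as quoted)
    length = 2.0         # cm of glass (assumed example)

    theta0 = V0 * length                     # rotation at the reference temperature
    for dT in (1, 10, 50):
        theta = theta0 * (1 + alpha * dT)
        print(f"dT = {dT:3d} K: rotation {theta:.4f} deg (shift {theta - theta0:+.4f} deg)")

The drift is a small fraction of a degree even over tens of kelvin, consistent with the ~10^-4 per kelvin scale stated above.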
{ "language": "en", "url": "https://physics.stackexchange.com/questions/107992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Projection operators and their subspaces (of Hilbert space) I've been watching Susskind's lectures on Quantum Entanglement, and something he said regarding (non-)commuting projection operators confused me. Consider two subspaces {$|a\rangle$} and {$|b\rangle$} of Hilbert space, with operators $K$ and $L$ for which: * *$K |a\rangle = \lambda |a\rangle (1)$ *$L |b\rangle = \mu |b\rangle (2)$ Now considers operators $P_K $ and $P_L$ that project any vector in Hilbert space onto their respective subspaces, that is: * *$K (P_K |\psi\rangle) = \lambda (P_K |\psi\rangle) $ *$L (P_L |\psi\rangle) = \mu (P_L |\psi\rangle) $ We want to find simultaneous eigenstates of both $K$ and $L$. If $P_K$ and $P_L$ commute: $P_K (P_L |\psi\rangle) = P_L (P_K |\psi\rangle)$. Now the left-hand satiesfies $(1)$, and the right-hand side satisfies $(2)$, so these are the required states. In fact, if $P_K$ and $P_L$ operators commute, they share a complete set of eigenstates. The eigenstates of projection operators are those that span the subspace they project onto, so apparently $P_K$ and $P_L$ project onto the same subspace, which means they're the same operator? Then, is the statement: "projection operators commute $\rightarrow$ they're the same" correct, or do they somehow project states onto the same subspace in a different way? Furthermore, we can imagine the subspaces geometrically as 'planes', and where these planes intersect we can find states that satisfy both $(1)$ and $(2)$. Now, according to Susskind, if $P_K$ and $P_L$ do not commute, finding such states is impossible. If the previous paragraph holds (does it?), then them commuting implies the intersection of their subspaces is the entire subspace. I don't know what non-commuting means geometrically, but shouldn't there be a case where the intersection of their subspaces isn't the entire subspace (for example, imagine two 2D perpendicular planes intersecting each other on a 1D line)? Susskind's comment seems to contradict that, and can't see exactly where I'm going wrongly.
A complete set of eigenstates spans the whole space, not just the subspace the projection operators project on. In this set of eigenstates you also have a basis of the subspace belonging to the eigenvalue 0.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/108077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Characteristic length for the diffusion equation (temperature) The background: I'm doing some simulation work involving the diffusion equation in 1D. Specifically I have some temperature profile, constant thermal conductivity and fixed temperature at each end of the system. I know that we can write: $$ \tau = \frac{L^2}{\kappa} $$ where $\tau$ is the characteristic time scale, $L$ is the characteristic length scale and $\kappa$ is the thermal conductivity. In this case, $\kappa = 1$ so the time scale is equal to the square of the length scale. I know that in a gas, the time scale corresponds to something like the amount of time it takes a particle to diffuse over the length scale of interest, but I'm not sure what it means in the context of temperature. Could anyone enlighten me? Thanks!
Short answer: $\tau$ is the typical time it takes for heat (energy) to be transported over the distance $L$. I'll try to elaborate a bit on your analogy to particle diffusion. For particle diffusion in one dimension, you may think of the particle as jumping around on the x-axis. Sometimes it jumps to the right, and sometimes to the left. The end result is that it typically takes $\tau = L^2/\mathcal D$ to cover the distance $L$, when the diffusion constant is $\mathcal D$. The diffusion constant is a measure of how large the jumps are (in fact, how large the variance of the jumps is). But for heat transport you may instead think of a chain of beads on the x-axis. Each bead is wiggling around its spot on the axis, and the more it wiggles, the higher the temperature at that position. Every now and then a wiggling bead will smack its neighbor, and exchange some energy with it. Sometimes the energy transfers from left to right, and sometimes from right to left. The "thermal diffusivity" $\kappa$ is a measure of how often the beads collide, and how willing they are to exchange their energy with each other. The end result is that $\tau = L^2/\kappa$ is the typical time it takes for a "packet" of energy to travel over the distance $L$.
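The scaling $\tau \sim L^2$ is easy to see in a toy simulation. Below is a rough Python sketch of a 1D random walker (the "packet of energy" in the bead picture): the mean number of steps to first reach a distance L away from the start grows like L squared. The step size and trial count are arbitrary choices.

    import random

    # Toy 1D random walk: mean number of steps to first reach distance L from the
    # start grows ~ L^2 (diffusive scaling).  Illustrative only.
    def mean_first_passage(L, trials=2000):
        total = 0
        for _ in range(trials):
            x, steps = 0, 0
            while abs(x) < L:
                x += random.choice((-1, 1))
                steps += 1
            total += steps
        return total / trials

    for L in (5, 10, 20):
        t = mean_first_passage(L)
        print(f"L = {L:2d}: mean steps ~ {t:7.1f}   (steps / L^2 ~ {t / L**2:.2f})")

The ratio steps/L^2 stays roughly constant as L grows, which is exactly the statement tau = L^2 / (diffusivity) in discrete form.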
{ "language": "en", "url": "https://physics.stackexchange.com/questions/108159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
First Order Correction to wave function in ground state I am looking at a spin 1/2 particle in a magnetic field. This has Hamiltonian $$H=-\mu s\cdot B_0$$ For simplicity, assume $B_0=B_0\hat z$ so $H=-\mu B_0 s_z$. I then apply a perturbative magnetic field such that $$V'=-\mu B_1 s_x$$ First I wanted to compute $E^{(1)}$ $$E^{(1)}_n=\langle\psi_n^{(0)}|-\mu B_1s_x|\psi_n^{(0)}\rangle=\mp \mu B_1 \hbar/2$$ Now I am looking to find the first order correction to the ground state wavefunction. I know that this is given as $$\psi^{(1)}_n=\sum_{n\neq n'} \psi^{(0)}_{n'}\frac{\langle\psi_{n'}^{(0)}|-\mu B_1s_x|\psi_{n}^{(0)}\rangle}{E_n^{(0)}-E_{n'}^{(0)}}$$ I am confused as to how to treat the summation. The only term I would get is if $n=n'$, but that would be degenerate. So I am thinking that this first order correction is 0. Is this correct?
Spin-1/2 particle Usually, in this kind of Hamiltonian, people use $s=s_z$, where $$s=s_z=\left[ \begin{array}{cc} 1 & 0 \\ 0 & -1\end{array} \right].$$ Then, your unperturbed Hamiltonian $H_0$ is: $$H_0=-\mu s\cdot B_0 = -\mu \left[ \begin{array}{cc} 1 & 0 \\ 0 & -1\end{array} \right]B_{0,z}. $$ Then the energy eigenvectors are: $$|\psi^0_+\rangle=\left[ \begin{array}{c} 1 \\ 0\end{array} \right],$$ $$|\psi^0_-\rangle=\left[ \begin{array}{c} 0\\ 1 \end{array} \right].$$ Perturbation solution Then you want to compute $|\psi_+\rangle$ and $|\psi_-\rangle$ for the perturbed Hamiltonian $H=H_0-\mu B_1 s_x$, where $$s_x=\left[ \begin{array}{cc} 0 & 1 \\ 1 & 0\end{array} \right].$$ As you said, you have to compute the following quantities (note I use $+,-$ instead of $n=0,1$), which become: $$\psi^{(1)}_+=\sum_{n\neq +} \psi^{(0)}_{n'}\frac{\langle\psi_{n'}^{(0)}|-\mu B_1s_x|\psi_{+}^{(0)}\rangle}{E_+^{(0)}-E_{n'}^{(0)}}=\psi^{(0)}_{-}\frac{\langle\psi_{-}^{(0)}|-\mu B_1s_x|\psi_{+}^{(0)}\rangle}{E_+^{(0)}-E_{-}^{(0)}}$$ $$\psi^{(1)}_-=\sum_{n\neq -} \psi^{(0)}_{n'}\frac{\langle\psi_{n'}^{(0)}|-\mu B_1s_x|\psi_{-}^{(0)}\rangle}{E_-^{(0)}-E_{n'}^{(0)}}=\psi^{(0)}_{+}\frac{\langle\psi_{+}^{(0)}|-\mu B_1s_x|\psi_{-}^{(0)}\rangle}{E_-^{(0)}-E_{+}^{(0)}}$$ Insert the vectors and matrices we just found, and let me know if you get zero.
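Carrying out that last step numerically — a small Python/NumPy sketch with ħ, μ and B set to 1 so that only the matrix structure is checked: the off-diagonal matrix elements are non-zero, so the first-order corrections to the wavefunctions do not vanish.

    import numpy as np

    # Matrix elements of s_x between the unperturbed spin states (hbar, mu, B_1 set to 1).
    sx = np.array([[0, 1], [1, 0]])
    psi_plus  = np.array([1, 0])
    psi_minus = np.array([0, 1])

    print("<psi_-|s_x|psi_+> =", psi_minus @ sx @ psi_plus)   # 1, not 0
    print("<psi_+|s_x|psi_-> =", psi_plus  @ sx @ psi_minus)  # 1, not 0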
{ "language": "en", "url": "https://physics.stackexchange.com/questions/108294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Shielding RFID with aluminium foil I've been playing around with some contactless bank cards and an RFID reader app on my phone. As expected, if I wrap the card in foil, the reader no longer detects it. But I was surprised to find that if I place a layer of foil on a flat surface, put the card on top of it and the phone on top of that, it also fails to detect the card. Why does placing a barrier behind the card prevent communication?
The metal is detuning both the tag's antenna and, depending on how close the phone is, the phone's RFID antenna too. When a piece of metal is placed in the near-field area of an antenna it becomes coupled to the antenna and its resonance frequency drops, the impedance decreases (causing a large signal loss) and the bandwidth widens (Q decreases). In an RFID tag, these little radios probably have very little transmitter power and a small reference ground plane. Both of these things mean that once the system is detuned it is very hard to have enough signal to communicate. This is a very common problem in antenna design, as many people around the world witnessed with the iPhone 4, as demonstrated in this video. Anything conductive can do this. So your hand, with water in it, detunes your cell phone. Designers have to be aware of these cases and make sure the design is robust enough to overcome some of these conditions.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/108360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How does the frequency of a particle manifest itself? In terms of wave-particle duality for, let's say, a photon: how would the frequency practically manifest/demonstrate itself? Like, I understand that the frequency is related to the energy a particle has, but frequency in my mind suggests oscillation about a point. Is the photon physically oscillating through space as it travels? I wouldn't imagine so. Which periodic occurrence is referred to when one talks about the frequency of a particle?
Is the photon physically oscillating through space as it travels? I wouldn't imagine so. Which periodic occurrence is referred to when one talks about the frequency of a particle? No, the photon is not oscillating through space. It is an elementary particle of the standard model, which is the quantum mechanical description of most of our experimental knowledge on elementary particles to date. Elementary particles are point particles. The classical wave is built up from an enormous number of photons and, as physics theories have to be consistent when the parameters and variables change from microscopic to macroscopic, the frequency entering E=h*nu for the individual photon is the frequency built up in a coherent classical beam emerging from a large ensemble of photons. Conversely, when one starts with a classical coherent beam of frequency nu, as discussed in the other answer, and one goes to the microscopic level of individual quanta that compose the beam, the photons, the frequency identifies the quantum of energy the photon carries. For example, a large number of excited atoms at the same energy level (lasers, for example) de-exciting by emitting photons will produce photons of energy h*nu that build up the classical wave. For what a particle is in the quantum mechanical microcosm, see my answer here.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/108471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 0 }
2D Gauss law vs residue theorem I used to have a vague feeling that the residue theorem is a close analogy to 2D electrostatics in which the residues themselves play a role of point charges. However, the equations don't seem to add up. If we start from 2D electrostatics given by $$\frac{\partial E_x}{\partial x} + \frac{\partial E_y}{\partial y} = \frac{\rho}{\epsilon_0},$$ where the charge density $\rho = \sum_i q_i \delta(\vec{r}-\vec{r}_i)$ consists of point charges $q_i$ located at positions $\vec{r}_i$, and integrate over the area bounded by some curve $\mathcal{C}$, we find (using Green's theorem) $$\int_\mathcal{C} (E_x\, dy - E_y\, dx) = \frac{1}{\epsilon_0} \sum_i q_i.$$ Now, I would like to interpret the RHS as a sum of residua $2\pi i\sum_i \text{Res}\, f(z_i)$ of some analytic function $f(z_i)$ so that I would have the correspondence $$q_i = 2\pi i\epsilon_0 \text{Res}\, f(z_i).$$ For this to hold, the LHS would have to satisfy $$\int_\mathcal{C} (E_x\, dy - E_y\, dx) = \int_\mathcal{C} f(z)\, dz,$$ however, it is painfully obvious that the differential form $$E_x\, dy - E_y\, dx = -\frac{1}{2}(E_y+iE_x)dz + \frac{1}{2}(-E_y + iE_x)dz^*$$ can never be brought to the form $f(z)dz$ for an analytic $f(z)$. So, it would appear that there really isn't any direct analogy between 2D Gauss law and the residue theorem? Or am I missing something?
There is indeed a connection. The holomorphy is easily seen in the electrostatic potential. In a charge free (two-dimensional) region, the electrostatic potential solves Laplace's equation and hence is a harmonic function. The real and imaginary parts of a holomorphic function are harmonic functions and thus the electrostatic potential can be identified with, say, the real part of a holomorphic function. In more detail, let us write (with $z=x+iy$) $$ f(z) = \phi(x,y) + i\ \psi(x,y)\ , $$ where we choose to identify the real part with the electrostatic potential. You can check that Cauchy-Riemann conditions imply that $\mathbf{E}\cdot \nabla \psi=0$. This implies that the $\psi=$ constant lines are the electrostatic field lines. Adding a point charge, implies that it is harmonic everywhere except at the location of the charge. The relevant function is $f(z) = \lambda \log (z-z_0)$, where $z_0$ is the location of the charge and $\lambda$ is proportional to the charge. The connection with the residue theorem follows since $f'(z)$ has a simple pole at $z_0$ with residue $\lambda$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/108560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Could Charles-Augustin de Coulomb measure the charge in coulombs? * *Did Charles-Augustin de Coulomb know: * *Coulomb's constant *the coulomb (as a unit)? If not, when were they first measured?
Coulombs date back to the 1860s, and even predate CGS units. The connection between the volt-ohm-second system and MKS units was made only in 1904. Coulomb himself used e.s.u. based on the French ft-lb-s system. The Coulomb constant is a feature of the choice of units; if charge is expressed in terms of L, M and T, then the size of the Coulomb constant can be set to one.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/108719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why can colors be mixed? We can combine colored light, creating other colors, at least in terms of visual perception. But how is the result physically "a different color" - if it is at all? Or is all this not a physical question to begin with - but only about our eye and brain? To have an example, we *have an incandescent bulb, showing "white" light, and *combine red, green and blue light in intensities such that it looks roughly the same. It is not central to the question whether it is exactly matching the white light - but certainly interesting to understand whether it could perfectly match, and why.
The cells in our retina that detect by frequency (read: colour) detect most strongly in three slightly different bands we know as Red, Green and Blue. To make a slight correction, I would say an incandescent bulb is quite far from white, so I would rather proceed talking about sunlight on a clear day. The reason why sunlight appears as white as, say, a white flashlight made of RGB LEDs is because light from both sources stimulates all your RGB retinal cells similarly and, more importantly, in similar proportion. To put it into perspective, an incandescent bulb is very similar to sunlight in the RGB distribution by proportion (see the blackbody curve), except that there is a heavier contribution in the frequencies we associate with the color Red. Candle light is similar. Daylight fluorescents are much more white, but upon closer inspection, these actually produce more Blue in comparison to R/G. Another thing to note with non-blackbody light sources like these is that they cheat by having almost no light output in frequencies other than R, G or B. Your eyes can't tell the difference because all RGB cells are stimulated, which is exactly what happens when natural light hits those cells. None of these lights would look the same to a honey bee, which has 4 types of cone cells in its eyes. That extra cell dedicated to UV could very well be what allows a bee to very easily distinguish sunlight from reconstituted RGB light coming from LED flashlights.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/108783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Reconciling total internal reflection and the evanescent wave I understand that light is guided in a dielectric waveguide via total internal reflection. My question is regarding the origin of power contained in the evanescent field traveling along the direction of propagation. From the Fresnel equations we get that the reflection coefficient is 100% for angles above or equal to the critical angle, as shown in the following figure: It is clear from the figure that no power is transmitted from medium 1 to medium 2. However, from dielectric waveguide theory we know that some power is contained in medium 2, and we define the confinement factor, which is a measure of the amount of power confined in the core of the waveguide compared to the power contained in the evanescent field. The confinement factor is defined as follows: $$\Gamma = \dfrac{\int_{-L_x/2}^{L_x/2}| \mathcal{E}_x|^2dx}{\int_{-\infty}^{\infty}|\mathcal{E}_x|^2dx}$$ Therefore my question is how to reconcile the two facts: that no power is transmitted into medium 2, and that there is power in medium 2 carried by the evanescent wave?
It's a "tunneling" behavior. In effect, all the light is "pulled back" into the medium unless there's another body of high-index (well higher than the $n_1 = 1$ ) material within the distance covered by the evanescent wave. If that material is close enough, then that part of the evanescent wave, which you can view as a probability wave, is in a region where the light itself can again manifest (because $n_1 and n_3 $ are such that total internal reflection between these two materials will not happen at the existing incidence angle). So, no power is transmitted by medium 2, but the wave function is still nonzero there. The same mathematics which govern electrons tunneling out of quantum wells works just as well here, BTW edit : I should have written that no power is transmitted perpendicular to the interface. As several comments point out, energy can be carried parallel to the interface in medium 2, as happens in the cladding of optical fibers.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/108878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Electric Field inside a regular polygon with corner charges If we have equal charges located at the corners of a regular polygon, then the electric field at its center is zero. Are there other points inside a polygon where the field vanishes? The simplest case would be an equilateral triangle of equal charges. I thought it would be easy to find the location of such other points, if they exist. I am having a hard time proving or disproving their existence.
In addition to Ali's answer, here are some plots which may be helpful in convincing people that the origin is not the only point inside the polygon where $\mathbf{E}=\mathbf{0}$. Letting the charges be located at $(\cos(2\pi k/N),\sin(2\pi k/N))$ for $k\in\{1,2,...,N\}$, we can generate plots of $|\mathbf{E}|^{-1}$ for various $N$. The zeros of $\mathbf{E}$ will then show up as bright glowing singularities. Example for $N=3$ (in Mathematica):

    X = {x, y}; n[x_] := Sqrt[x.x]; m = 3;
    U = Sum[n[X - {Cos[2 \[Pi] k/m], Sin[2 \[Pi] k/m]}]^-1, {k, m}];
    expr = n[D[U, {X, 1}]]^-1;
    L = 0.45;
    DensityPlot[expr, {x, -L, L}, {y, -L, L}, PlotRange -> {0, 14},
      PlotPoints -> 50, PlotLabel -> "N = " <> ToString[m],
      ColorFunction -> GrayLevel, ImageSize -> 450]

Similarly, one can repeat this for $N\in\{4,5,6,7,8,9\}$ and locate the zeros in each case. Note that in all cases, the zeros lie along the reflection axes of the symmetry group that do not pass through any of the charges.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/108929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 1, "answer_id": 0 }
Studying Quantum Electrodynamics? As an electrical/computer engineer, I already have a relatively thorough understanding of classical electromagnetism. From what I understand though, classical EM is only an approximation to quantum electrodynamics. I'm very curious about how it all really works though. So as an ECE engineer, what would be the best way to approach quantum electrodynamics? (assuming taking a course at a community college is not an option)
What is an ECE engineer, an electronic-computer-engineering engineer? Indeed Classical Electrodynamics is only an approximation to Quantum Electrodynamics. If you just want to get a taste, I would suggest reading Feynman's QED: The Strange Theory of Light and Matter. It describes the theory quite nicely without too much maths. If you want to learn full Quantum ElectroDynamics, you're going to first need to learn tree-level (introductory) Quantum Field Theory, which will require quite a lot of time, effort and maths. If this is indeed what you want, then check out the books list question for some ideas of where to start. I don't know exactly how much education you have so far in terms of maths. It's not easy, but it is a pretty great theory alright!
{ "language": "en", "url": "https://physics.stackexchange.com/questions/109018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What determines the speed required to pull a table cloth? I was watching this show "Street Genius" on National Geographic and the host Tim Shaw demonstrated an experiment about inertia. What he did was, he tied one end of a table cloth to a car through a long rope and started driving the car; when the rope was taut, the cloth was pulled and the things on the table were still on the table. What are the calculations involved in it, and what determines the speed required to pull the table cloth without disturbing the things on it?
Not sure anyone will look back at this, but I'd like to give an answer anyway! How do you not disturb the dishes when pulling a tablecloth out from under them? You're exactly right: this is about the inertia of the dishes and the forces on the dishes from the table cloth while the cloth is being pulled. Remember from physics that if we plot the velocity of the dish vs time, then the slope is the acceleration, and the area under the curve is the distance traveled. We want the distance traveled to be less than the distance to the table edge. To have a small area under the curve, we want a small amount of time that the dish is contacting the table cloth (t*) and also a small effective friction force (the initial slope). So either pull fast or use a slippery table cloth. The effective friction force is a tricky thing to predict, since the dish will surely rattle around and only be in contact part of the time. In fact, predictions about even simple friction coefficients are nearly impossible without just doing the experiment (see citation). Nature 430, 525-528 (29 July 2004) | doi:10.1038/nature02750; The nonlinear nature of friction: Michael Urbakh, Joseph Klafter, Delphine Gourdon & Jacob Israelachvili
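A rough numerical sketch of this estimate in Python, with made-up parameter values: treat the contact phase as constant kinetic friction from the moving cloth for a time t*, after which the dish skids on the bare table until it stops. The total slide distance is what has to stay below the distance to the table edge.

    g = 9.81

    # Phase 1: cloth sliding under the dish for time t_star; kinetic friction from
    # the cloth (mu_cloth) accelerates the dish.
    # Phase 2: dish skids on the bare table (mu_table) until it stops.
    # All parameter values are illustrative guesses.
    mu_cloth, mu_table = 0.2, 0.3
    for t_star in (0.05, 0.1, 0.2):          # seconds the dish is in contact with the moving cloth
        v1 = mu_cloth * g * t_star                   # dish speed when the cloth clears it
        d1 = 0.5 * mu_cloth * g * t_star**2          # slide while still on the cloth
        d2 = v1**2 / (2 * mu_table * g)              # extra skid on the table
        print(f"t* = {t_star:4.2f} s: dish moves ~ {100 * (d1 + d2):.2f} cm")

With these guesses the dish only drifts a centimetre or two for a fast pull, but several centimetres if the pull takes a fifth of a second — which is the quantitative version of "pull fast or use a slippery cloth".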
{ "language": "en", "url": "https://physics.stackexchange.com/questions/109079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How do I simulate this simple quantum circuit in MATLAB? I want to simulate a circuit similar to the one below in MATLAB. If you have a state matrix describing the state of 3 qubits, I understand that you could apply a CNOT matrix tensored with an identity matrix to $\psi_{0}$ to get $\psi_{1}$, but if you want to apply a controlled operation to the 1st and 3rd qubit to get $\psi_2$, how can you do this? It's like you need to "remove" the information about the second qubit, apply a CNOT gate, and then somehow integrate the result back with the superposition of the second qubit... I do not understand how to do this. In general, if I have a superposition of N qubits, how do I apply a controlled operation on qubits i and j?
The answer is $|\psi_{FINAL}\rangle = CNOT_{12} \cdot CNOT_{13} \cdot |\psi_{INITIAL}\rangle$ ; where $|\psi_{INITIAL}\rangle = |\psi\rangle \otimes |00\rangle$. So this operation goes as follows: * *1st) if $|\psi\rangle$ is in state $|1\rangle$, then perform NOT on the 3rd qubit ($|0\rangle$ goes to $|1\rangle$ in the 3rd position). *2nd) if $|\psi\rangle$ is in state $|1\rangle$, then perform NOT on the 2nd qubit ($|0\rangle$ goes to $|1\rangle$ in the 2nd position). The matrix representation is: $$ \begin{matrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ \end{matrix} $$
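The question asks about MATLAB; as an illustrative sketch, here is the same construction in Python/NumPy (MATLAB's kron builds the operators in exactly the same way). The point is that you never "remove" the middle qubit: write the controlled gate as a sum over the control qubit's two projectors, with the identity on every qubit the gate does not touch. The initial state below is an assumed example, (|0>+|1>)/sqrt(2) ⊗ |00>, chosen just to make the output easy to read.

    import numpy as np

    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    P0 = np.array([[1, 0], [0, 0]])   # |0><0|
    P1 = np.array([[0, 0], [0, 1]])   # |1><1|

    def controlled(n, control, target, gate=X):
        """Controlled-`gate` between qubits `control` and `target` (1-indexed)
        in an n-qubit register: CU = P0(control) x I... + P1(control) x ...gate(target)."""
        term0, term1 = [], []
        for q in range(1, n + 1):
            term0.append(P0 if q == control else I)
            term1.append(P1 if q == control else (gate if q == target else I))
        def kron_all(ops):
            out = np.array([[1.0]])
            for op in ops:
                out = np.kron(out, op)
            return out
        return kron_all(term0) + kron_all(term1)

    n = 3
    psi0 = np.kron(np.array([1, 1]) / np.sqrt(2), np.kron([1, 0], [1, 0]))  # (|0>+|1>)/sqrt(2) x |00>
    U = controlled(n, 1, 2) @ controlled(n, 1, 3)   # CNOT_12 . CNOT_13, as in the formula above
    print(np.round(U @ psi0, 3))   # amplitudes 0.707 at indices 0 and 7: (|000> + |111>)/sqrt(2)

The same recipe works for any qubit pair (i, j) in an N-qubit register, which answers the general question at the end.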
{ "language": "en", "url": "https://physics.stackexchange.com/questions/109161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why are there interference patterns inside a diffraction envelope? When double-slit diffraction occurs, there are interference patterns inside, say, the central diffraction maximum (or envelope). I am trying to understand how these interference fringes are created. Here is what I know: each individual slit in the double-slit setup produces a diffraction pattern, and these two diffraction patterns overlap very closely. Assuming Fraunhofer diffraction (the diffraction pattern is viewed far from the diffracting slits), the waves coming from each slit into the central diffraction envelope will interfere and produce a series of bright and dark fringes. My question: can someone explain "easily" how these overlapping diffraction patterns produce interference fringes inside a diffraction envelope? I feel comfortable mathematically integrating the fields using the Fresnel diffraction integral to arrive at the Fraunhofer condition. When I say easily, I am looking for a more physical explanation if possible.
In a simplistic model, you can view destructive interference for a two-slit situation as arising from one of two possible events: either light from a single slit is destructively interfering (and hence light from the other slit will as well, since the off-set is usually ignored), or light leaving both slits interferes with each other. The smaller "inner" interference pattern is caused by interference between the slits, the second option above. This is in contrast to the diffraction envelope which, as you stated, is caused by interference for a single slit. For example, if light leaving the left-most edge of the left slit has a path length difference of $\lambda/2$ with respect to the corresponding edge of the right slit, then light from these paths will completely destructively interfere. Furthermore, every point in one slit has a pair in the other slit that causes destructive interference (with the same path length difference). You'll notice the first inner minimum occurs at a smaller angle than the first single-slit minimum. This is consistent with $d\sin\theta=\lambda/2$ producing a smaller angle than $(a/2)\sin\theta=\lambda/2$ since the slit separation $d$ is larger than the slit width $a$. These equations yield the first "interference" and first "diffraction" minima for two slits, respectively.
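A compact way to see the two angular scales is to evaluate the standard Fraunhofer two-slit intensity — the single-slit sinc² envelope multiplied by the cos² fringe term. A rough Python sketch with example numbers (the wavelength, slit width a and separation d are all just illustrative choices):

    import numpy as np

    # Two-slit Fraunhofer intensity: I(theta) ~ sinc^2(a*sin(theta)/lam) * cos^2(pi*d*sin(theta)/lam)
    # (np.sinc(x) = sin(pi x)/(pi x)).  Example numbers only.
    lam, a, d = 500e-9, 20e-6, 100e-6     # wavelength, slit width, slit separation (m)

    theta_fringe_min = np.arcsin(lam / (2 * d))   # first "interference" minimum
    theta_envelope_min = np.arcsin(lam / a)       # first "diffraction" (envelope) minimum
    print(f"first fringe minimum   ~ {theta_fringe_min*1e3:.2f} mrad")
    print(f"first envelope minimum ~ {theta_envelope_min*1e3:.2f} mrad")

    def intensity(theta):
        s = np.sin(theta)
        return np.sinc(a * s / lam) ** 2 * np.cos(np.pi * d * s / lam) ** 2

    for th in (0.0, theta_fringe_min, 2 * theta_fringe_min, theta_envelope_min):
        print(f"theta = {th*1e3:5.2f} mrad -> I/I0 = {intensity(th):.3f}")

With these numbers the fringe minima sit roughly ten times closer to the axis than the envelope minimum, directly reflecting d being five times larger than a.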
{ "language": "en", "url": "https://physics.stackexchange.com/questions/109243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Is the molecule of hot water heavier than that of cold water? We know that the molecule of hot water ($H_2O$) has more energy than that of cold water (temperature = energy) and, according to Einstein's relation $E=mc^2$, this extra energy of the hot molecule has a mass. Does that make the hot molecule heavier?
$E^2 = m^2c^4 + p^2c^2$, where $m$ is rest mass and $p$ is momentum. If a molecule is moving faster it would have more momentum and more energy, but the same rest mass. Some have defined "relativistic mass" as opposed to "rest mass" as $E=m_rc^2$, so yes the faster moving molecule would have a greater so-called relativistic mass.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/109353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
What is the energy of a black hole? This might be a stupid question, but given Einstein's relation $E = m c^{2}$, what is the energy of a black hole? Isn't the mass of a black hole infinite? Wouldn't that be infinity multiplied by the speed of light squared?
Black holes are just objects which have very large (not infinite) mass concentrated in a very small region of space, characterized by the Schwarzschild radius $$r=\frac{2GM}{c^2}$$ This means the escape velocity at that radius equals the speed of light (that's why a black hole is black: light can't escape it). But a black hole can also be considered a blackbody: it not only absorbs heat but also radiates it (mostly in the form of infra-red light, and the energy of light (a photon) is calculated using the equation $E=hf$). (If you want to calculate what energy would be released if all the matter in a black hole became energy, you can use Einstein's equation $E=mc^2$.) (If you want to calculate the gravitational potential, use this equation: $V=-\frac{GMm}{r}$.) Example: suppose we want to calculate the energy released if all the matter in the black hole at the center of our galaxy became energy, knowing that the mass of that black hole is $8.62\times 10^{36}kg$. Using Einstein's equation $E=mc^2$ we get $E=7.747\times 10^{53}\quad Joules$. More info about that black hole: Sagittarius_A* So, in short, black holes don't have infinite energy. More information: http://en.wikipedia.org/wiki/Hawking_radiation http://en.wikipedia.org/wiki/Black_body_radiation
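For concreteness, a small Python sketch reproducing the arithmetic in the example above (Schwarzschild radius and rest-mass energy for the mass quoted for Sagittarius A*):

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8        # speed of light, m/s
    M = 8.62e36        # kg, mass quoted above for Sagittarius A*

    r_s = 2 * G * M / c**2        # Schwarzschild radius
    E   = M * c**2                # rest-mass energy

    print(f"Schwarzschild radius ~ {r_s:.2e} m (about {r_s/1e9:.0f} million km)")
    print(f"rest-mass energy E = mc^2 ~ {E:.3e} J")

This prints roughly 1.3e10 m for the radius and 7.7e53 J for the energy, matching the figure quoted above.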
{ "language": "en", "url": "https://physics.stackexchange.com/questions/109415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What provides the centripetal force for a glider? Whenever any object follows a circular path, there is always a centripetal force which is provided by something. However, in the case of a glider making a loop in air, what provides the centripetal force? This is the picture that made me think of this question:
There is indeed a centripetal force acting on the glider as it moves through its loop. The size of the force depends on the mass of the glider, the speed of the glider and the radius of the circle the glider is following at the moment. There are two sources for this force: * *The component of the force of gravity acting towards the center of the circle. This is negative in the lower half of the loop, and positive in the upper half. *The lift from the airflow over the wings of the glider. The pilot varies this force by adjusting the controls to maintain the desired path during the loop. This lift cancels the outward component of the force of gravity during the bottom portions of the loop, and combines with gravity during the top half of the loop.
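A small numerical illustration (with made-up glider numbers) of the two extreme points of the loop: at the bottom, the lift must supply the centripetal force plus support the weight; at the top, gravity already provides part of the centripetal force, so much less lift is needed. All values below are assumptions chosen only to show the contrast.

    g = 9.81
    m, r = 400.0, 80.0              # glider mass (kg) and loop radius (m) -- example values
    v_bottom, v_top = 50.0, 30.0    # assumed speeds at bottom / top of the loop (m/s)

    lift_bottom = m * v_bottom**2 / r + m * g   # gravity points away from the centre here
    lift_top    = m * v_top**2    / r - m * g   # gravity already points toward the centre

    print(f"lift needed at bottom ~ {lift_bottom/1000:.1f} kN ({lift_bottom/(m*g):.1f} g)")
    print(f"lift needed at top    ~ {lift_top/1000:.1f} kN ({lift_top/(m*g):.1f} g)")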
{ "language": "en", "url": "https://physics.stackexchange.com/questions/109706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Moment of inertia of a hollow sphere wrt the centre? I've been trying to compute the moment of inertia of a uniform hollow sphere (thin walled) wrt the centre, but I'm not quite sure what was wrong with my initial attempt (I've come to the correct answer now with a different method). Ok, here was my first method: Consider a uniform hollow sphere of radius $R$ and mass $M$. On the hollow sphere, consider a concentric ring of radius $r$ and thickness $\text{d}x$. The mass of the ring is therefore $\text{d}m = \frac{M}{4\pi R^2}\cdot 2\pi r\cdot\text{d}x$. Now, use $r^2 = R^2 - x^2:$ $$\text{d}m = \frac{M}{4\pi R^2}\cdot 2\pi \left(R^2 - x^2 \right)^{1/2}\text{d}x$$ and the moment of inertia of a ring wrt the centre is $I = MR^2$, therefore: $$\text{d}I = \text{d}m\cdot r^2 = \frac{M}{4\pi R^2}\cdot 2\pi\left(R^2 - x^2\right)^{3/2}\text{d}x $$ Integrating to get the total moment of inertia: $$I = \int_{-R}^{R} \frac{M}{4\pi R^2} \cdot 2\pi\cdot \left(R^2 - x^2\right)^{3/2}\ \text{d}x = \frac{3MR^2 \pi}{16}$$ which obviously isn't correct as the real moment of inertia wrt the centre is $\frac{2MR^2}{3}$. What was wrong with this method? Was it how I constructed the element? Any help would be appreciated, thanks very much.
The mass of the ring is wrong. The ring ends up at an angle, so its total width is not $dx$ but $\frac{dx}{sin\theta}$ You made what I believe was a typo when you wrote $$\text{d}m = \frac{M}{4\pi R^2}\cdot 2\pi \left(R^2 - x^2 \right)\text{d}x$$ because based on what you wrote further down, you intended to write $$\text{d}m = \frac{M}{4\pi R^2}\cdot 2\pi \sqrt{\left(R^2 - x^2 \right)}\text{d}x$$ This problem is much better done in polar coordinates - instead of $x$, use $\theta$. But the above is the basic reason why you went wrong. In essence, $sin\theta=\frac{r}{R}$ so you could write $$\text{d}m = \frac{M}{4\pi R^2}\cdot 2\pi \frac{r}{sin\theta} \ \text{d}x \\ = \frac{M}{4\pi R^2}\cdot 2\pi \frac{r}{\frac{r}{R}} \ \text{d}x\\ = \frac{M}{4\pi R^2}\cdot 2\pi R \ \text{d}x\\ = \frac{M}{2 R} \ \text{d}x$$ Now we can substitute this into the integral: $$I = \int_{-R}^{R} \frac{M}{2 R} \cdot \left(R^2 - x^2\right)\ \text{d}x \\ = \frac{M}{2R}\left[{2R^3-\frac23 R^3}\right]\\ = \frac23 M R^2$$
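If you want a quick sanity check of the corrected integrand, a rough numerical sketch in Python (simple midpoint integration, with M and R set to 1) reproduces the 2/3 factor:

    # Numerical check of I = integral_{-R}^{R} (M/(2R)) * (R^2 - x^2) dx  ->  (2/3) M R^2
    M, R, N = 1.0, 1.0, 100_000
    dx = 2 * R / N
    I = sum((M / (2 * R)) * (R**2 - (-R + (i + 0.5) * dx)**2) * dx for i in range(N))
    print(I, "vs exact", 2 / 3 * M * R**2)

Both numbers come out as 0.6667, i.e. (2/3)MR^2 as expected.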
{ "language": "en", "url": "https://physics.stackexchange.com/questions/109761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Wave-particle duality and entanglement By the fundamental definition of an entangled system, we can say that if we know the quantum state of one subsystem then we can describe the state of the other subsystem. A particle possesses wave-particle duality. If one experiment verifies the wave nature of a particle, then we cannot see its particle behaviour in the same experiment, and vice versa. Can we say that the wave and particle behaviours are in some sort of entangled state?
To see the quantum states of Gigi10012's answer: 1. The state of the electrons is $|\psi\rangle=\frac{|+-\rangle+|-+\rangle}{\sqrt{2}}$; when you measure one electron, $|\psi\rangle$ will collapse into $|+-\rangle$ or $|-+\rangle$, so once you know the spin of the first electron you know the second one. Before the measurement you cannot describe either of the electrons individually. This is entanglement. 2. In the double-slit experiment, let $|\phi\rangle$ describe the photon. $$|\phi\rangle=\int |x\rangle \langle x|\phi\rangle\,dx=\int \phi(x)|x\rangle\,dx$$ So when you measure the photon, $|\phi\rangle$ collapses into one of the $|x\rangle$, which looks like a particle at a particular position. The wave-particle duality is just something used to vividly describe some phenomena.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/109853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Does electricity flow on the surface of a wire or in the interior? I was having a conversation with my father and father-in-law, both of whom are in electricity-related work, and we came to a point where none of us knew how to proceed. I was under the impression that electricity travels on the surface, while they thought it traveled through the interior. I said that traveling over the surface would make sense of the fact that they regularly use stranded wire instead of a single large wire to transport electricity. If anyone could please explain this for some non-physics but electrically inclined people, it would be very much appreciated.
Both in the interior (bulk) and at the surface, depending on the source voltage and frequencies. Surface charge is always required on a conducting wire, in order to establish power flow over the wire. There are two types of current density $\boldsymbol J$: $\operatorname{div}\boldsymbol J = 0$ or $\operatorname{div} \boldsymbol J \lessgtr 0$, depending on the surface charge dynamics: $\operatorname{div} \boldsymbol J + \frac{\partial\rho}{\partial t} = 0$. In most systems $\frac{\partial\rho}{\partial t}$ is so small that the conducted current is free of divergence (the typical drift current in wires). There are exceptional systems, however, in which essentially all the current goes into alternating the sign of the surface charge on the wire; the current is then basically a surface current. In principle, such a system might transport power. Thanks for sharing the good question, and for the out-of-the-box thinking.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/109897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "64", "answer_count": 9, "answer_id": 6 }
Fourier transformation I have recently studied the Fourier and Laplace transformations in maths. I want to understand their utility in physics, with some examples that require this change of domain, and the reason why.
Just to give 3 simple examples: Someone is playing piano. Every key he hits will produce not only the desired tone but also a full range of resonances and higher harmonics. Those will show up in Fourier space. In image analysis, sometimes you have periodic patterns overlaying your image (e.g. Moiré fringes) that disturb image quality. In Fourier space, those patterns might show up in a very confined frequency band, where they can be filtered out to enhance image quality. When working in biomedical physics, you deal a lot with projection integrals when it comes to attenuation measurements. Solving those inverse problems is a lot easier in Fourier space. (See for example X-ray CT and the Fourier slice theorem.)
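For the piano example, here is a tiny Python sketch: build a note plus a couple of harmonics in the time domain, take an FFT, and the harmonics show up as separate peaks in frequency space. All amplitudes and frequencies are invented for illustration.

    import numpy as np

    fs = 8000                      # sample rate, Hz (assumed)
    t = np.arange(0, 1.0, 1 / fs)  # one second of signal
    f0 = 220.0                     # fundamental ("A3"), Hz -- example value
    signal = (1.0 * np.sin(2 * np.pi * f0 * t)        # fundamental
              + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)  # 2nd harmonic
              + 0.2 * np.sin(2 * np.pi * 3 * f0 * t)) # 3rd harmonic

    spectrum = np.abs(np.fft.rfft(signal)) / len(t) * 2
    freqs = np.fft.rfftfreq(len(t), 1 / fs)
    peaks = freqs[spectrum > 0.1]
    print("peaks found near:", peaks, "Hz")   # -> ~220, 440, 660 Hz

The single time-domain waveform, which looks like a messy wiggle, separates cleanly into three spikes at 220, 440 and 660 Hz in the frequency domain — which is exactly the kind of information the ear (and the Fourier transform) extracts.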
{ "language": "en", "url": "https://physics.stackexchange.com/questions/109968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Obtain the eigenfunction of Jz for the wave function of an electron in a hydrogen atom? The wave function of an electron in a hydrogen atom is given by * *Is this wave function an eigenfunction of $J_z$, the z-component of the electron's total angular momentum? If yes, find the eigenvalue. (Hint: For this, you need to calculate $J_z \Psi_{2,1,m_l,m_s}$.) *If you measure the z-component of the electron's spin angular momentum, what values will you obtain? What are the corresponding probabilities? *If you measure $J^2$, what values will you obtain? What are the corresponding probabilities? How can I solve this problem, and which rules should be used?
The last spin state is wrong. It should be $\frac{|\uparrow\downarrow\rangle+ |\downarrow\uparrow\rangle}{\sqrt{2}}$, to be the $S_z=0$, $S=1$ spin state. * *The state would then be an eigenstate of $J_z$, with eigenvalue $+1$. *Measurements of $S_z$ could be $S_z=0$ ($P=1/3$) and $S_z=+1$ ($P=2/3$). *$J^2=2\cdot 3=6$, with $P=1$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/110031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Higgs boson production via positron-electron collision One of the suggested diagrams for the Higgs production is the following: so basically an electron-positron pair annihilates and forms an (excited?) Z boson, which then decays into another (less excited?) Z boson and a Higgs boson. Why can't the electron-positron pair decay directly into a Higgs boson? Charge and lepton number would be conserved anyway, and if the pair has enough energy to produce the $Z^*$ boson in the first place it should have enough energy to produce the Higgs boson... ?
The $^*$ notation does not mean excited in this case; it means "off shell" (i.e. virtual or having the "wrong" mass). At the second vertex the $Z^0$ is put "on-shell" by the emission of a Higgs (note, however, that it will decay very quickly in any case). The lepton pair can annihilate directly to the Higgs, but the event is experimentally identical to annihilation to photons or $Z^0$s (because the thing that makes a coupling possible is that both sides have compatible quantum numbers, so that (at tree level) all three possibilities decay to very similar end states). The reaction pictured is experimentally identifiable because the on-shell $Z$ decays to a lepton pair with a mass of 90 GeV and the Higgs decays to a limited choice of end states that are mostly reconstructable and add up to the Higgs mass. A surprising amount of collider physics is not so much about what can happen as about what can be uniquely shown to have happened. Finally, I would certainly not describe this reaction as a "decay".
{ "language": "en", "url": "https://physics.stackexchange.com/questions/110241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Diver view of refraction I'm studying refraction in optics. If a monochromatic beam of red light (700 nm) passes from air to water, its wavelength becomes approximately 526 nm. So, my question is: how will a diver see this beam? Red (700 nm) or something more like green (526 nm)? (Let's suppose that the diver isn't wearing glasses). I think that he is going to see the beam as green? I'm a little confused...
You will see it the same, regardless of the refractive index of your medium. The reason is simply that, when the light hits your retina, it will be travelling through the interior of your eye, so the only refractive index that matters is that of the eye. What is it that we actually detect, wavelength or frequency? Frequency is the one related to energy, so my feeling is that it should be the one influencing chemical reactions, that is, in the end, the way cones detect light. Indeed, the vitreous humour (the interior filling of the eyeball) loses water with age, to the point of getting detached from the retina, something very common among old people (Wikipedia says 75% of those over 65). The main consequences are visual artifacts, but no one has claimed colours suddenly look different. Physics books quote wavelengths because those are usually what one measures in the lab in the optical range. Plus, the numerical values are (and this is subjective) more convenient.
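A quick numeric check of the point (my own illustration, not part of the original answer): the frequency is fixed by the source, and only the in-medium wavelength changes, assuming a refractive index of about 1.33 for water.

```python
c = 3.0e8          # speed of light in vacuum, m/s (rounded)
lam_air = 700e-9   # wavelength in air ~ vacuum, m
n_water = 1.33     # assumed refractive index of water

f = c / lam_air               # frequency, unchanged by the medium
lam_water = lam_air / n_water

print(f"frequency      : {f:.3e} Hz")              # ~4.3e14 Hz in both media
print(f"wavelength air : {lam_air*1e9:.0f} nm")    # 700 nm
print(f"wavelength H2O : {lam_water*1e9:.0f} nm")  # ~526 nm
```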
{ "language": "en", "url": "https://physics.stackexchange.com/questions/110312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is frequency quantized in the black body spectrum? I'm aware that there are some questions posted on this site with respect to this subject, but I still want to make sure: is frequency quantized? Do very fine discontinuities exist in a continuous spectrum like the black body spectrum? (Related: The quantization of photon energies)
"Yes", but the quantisation depends on the size of the box. In practice the 'box' is large and of variable shape, so all sizes are available, so all frequencies are available. Ultimately, it is somewhat of a philosophical question who's answer depends on which axioms and base concepts you (they) are using at the various stages of reasoning. Consider, does time pass for a non-interacting particle? Can a non-interacting particle be kept in a box? etc. Try for a quirky view on the problem. Link Between the P≠NP Problem and the Quantum Nature of Universe
{ "language": "en", "url": "https://physics.stackexchange.com/questions/110463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 0 }
Why does squeezing a water bottle make the water come out? This seems natural, but I can't wrap my head around it when I think about it. When I squeeze an open bottle filled with water, the water will spill out. When I squeeze a bottle, the material collapses where I squeeze it, but expands in other areas, resulting in a constant volume. If the volume is constant, then I would think that the water shouldn't spill out. If I were to guess, there is something related to the pressure my hand is creating inside the bottle, but I'm not entirely sure.
Squeezing the bottle does decrease its volume. Rather than a bottle, it may be more helpful to think of a full toothpaste tube; the mechanics will be the same. If you squeeze the middle of the tube, the middle will collapse, the back will expand, and the front will expand and squirt out some toothpaste. Treating the toothpaste in the tube (or the water in the bottle) as incompressible, if the tube is full then the volume of the inside of the tube = the volume of toothpaste inside the tube. When you squeeze, some of the toothpaste comes out, indicating there is now less toothpaste inside the tube. But the tube is still full of toothpaste. Therefore, the tube volume is decreased by the amount of toothpaste that came out. Likewise, although it may look like the collapsing middle of the water bottle is offset by the expanding top and/or bottom of the bottle, it doesn't quite do it. Assuming the bottle is completely full of water, the decrease in the bottle's volume will be easily measured by capturing the water that spills out and measuring its volume.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/110747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Why electrons have less energy than photons with the same wavelength? I am studying quantum physics and I have a question: what is the physical explanation for electrons having less energy than photons with the same wavelength? Energy of a photon : $E = h c/\lambda$. Energy of an electron: $E = h^2/(2m\lambda^2)$
For the photon we have $$E_\gamma = \frac{hc}{\lambda}$$ and for the electron $$E_e = \frac{h^2}{2m\lambda^2} =\frac{hc}{\lambda} \frac{h}{2mc\lambda} = E_\gamma \frac{h}{2mc\lambda}. $$ You can check that the proportionality factor is dimensionless. So what you are asking is why this quantity is less than unity. But recall that $$\frac{h}{\lambda} =p$$ where $p$ is the momentum. What we are looking at is really (one half) the ratio $$\frac{pc}{mc^2} = \frac{mvc}{mc^2}$$ where I assumed that $v \ll c$, that is, we have a non-relativistic electron. Then we get the result you stated in your question. On the other hand, if we don't make this approximation we have the ratio $$\frac{pc}{mc^2} =\frac{mv\gamma c}{mc^2} = \frac{v\gamma}{c}$$ which is unbounded when $v \to c$. You could also argue from Einstein's $$E^2 = m^2 + p^2$$ (in units where $c = 1$). For $m = 0$ we have of course $E = p$. If you make a Taylor expansion of $E$ for $m\neq 0$, $$E = m + \frac{p^2}{2m} + \ldots$$ you see that the kinetic energy, compared to the energy of a massless particle, has a factor $p/m$ (as we found above). The non-relativistic regime is precisely when this quantity is small, and if it is not, we have to include terms proportional to $p^4/m^3$ and higher, and then the energy can again be larger for a massive particle than for a massless particle with the same momentum. So the answer to your question really is: because you are considering non-relativistic particles.
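For a concrete feel for the numbers (a sketch of mine using standard SI constants; the wavelength is just an example value), take $\lambda = 500\:\mathrm{nm}$: the proportionality factor $h/(2mc\lambda)$ is indeed far below unity for a non-relativistic electron.

```python
h = 6.626e-34     # Planck constant, J s
c = 3.0e8         # speed of light, m/s
m_e = 9.109e-31   # electron mass, kg
lam = 500e-9      # example wavelength, m

E_photon = h * c / lam                  # ~4.0e-19 J (about 2.5 eV)
E_electron = h**2 / (2 * m_e * lam**2)  # ~9.6e-25 J (about 6 micro-eV)

ratio = E_electron / E_photon           # equals h / (2 m c lambda)
print(E_photon, E_electron, ratio)      # ratio ~ 2.4e-6, i.e. far below unity
```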
{ "language": "en", "url": "https://physics.stackexchange.com/questions/110844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
Do I need to take weight of the rocket into account when calculating escape velocity? Here there is the old problem. I know from the old problem that the work $W_v$ that I need to make a rocket fast enough to reach the escape velocity is $$W_v= G \frac{mM}{r}$$ therefore because $$W_v=F\cdot S = G \frac{mM}{r} \rightarrow F_v=\frac{W}{S}=G \frac{mM}{rS} $$ that is the force I need to make a rocket fast enough to reach the escape velocity BUT Do I also have to count the weight of the rocket? If yes then the equation will be like this: $$F_f=F_g - F_v= G \frac{mM}{r^2}-G \frac{mM}{rS}=G \frac{mM}{r}\biggl(\frac{1}{r} \cdot \frac{1}{S}\biggr) = G \frac{mM}{r}(rS)^{-1} $$
Do I also have to count the weight of the rocket? You already have! This is the $m$ in $G\frac{m M}{r}$. I don't understand the line of reasoning for your second equation, and you made a subtraction/multiplication error as DavePHD pointed out. The value $F_g+F_v$ would be the total acceleration experienced by astronauts (or the payload) sitting in your rocket, but I don't know what $F_g-F_v$ would mean if you're accelerating away from your gravitational source, and I don't know what you want it to mean. So I can't really help you on that one. The force needed to reach escape velocity over a certain distance does depend on the object's mass. The velocity needed to escape a planet is independent of the mass of the rocket.
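To make the last point concrete (a quick sketch of mine using standard constants): the escape velocity follows from setting the kinetic energy equal to the work $G\frac{mM}{r}$ from the question, $\tfrac12 m v^2 = \frac{GmM}{r}$, and the rocket's mass $m$ cancels.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # kg
R_earth = 6.371e6    # m

# Escape velocity: the rocket's mass m cancels out entirely
v_esc = math.sqrt(2 * G * M_earth / R_earth)
print(f"{v_esc/1000:.1f} km/s")   # ~11.2 km/s, for any rocket mass
```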
{ "language": "en", "url": "https://physics.stackexchange.com/questions/110976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Magdeburg Hemispheres The Magdeburg Hemisphere experiment was the experiment that showed the effect of pressure differences on a vacuumed sphere. We know that the force caused by pressure is $\Delta p A$ and so you can calculate the force by using the area of the base of one of the hemispheres of the vacuumed ball. It is intuitive to me that the area we use is the area created by the diameter of the sphere, a line through the center of the sphere, but is there a clearer reason that we use this area? I'm not sure how to explain why this is the "important" area.
At all points on the sphere the pressure acts normal to the surface (because that is what pressure does...). That said, this system has a preferred direction: the line between the hoops where we hook the harness on for the horse to pull. And we have built the device such that the joint between the two halves lies at the equator perpendicular to that axis. Now, the system therefore has axial symmetry, and we are only interested in the component of force parallel to the symmetry axis (because the components transverse to the axis cancel out when we add up the bits). Measuring the angle $\theta$ from the + end of the symmetry axis (either one, just pick your favorite), the axial part of the force from the pressure on an infinitesimal area element $\mathrm{d}A$ (or very small element $\Delta A$ if you don't have calculus) is $P \cos (\theta)\, \mathrm{d}A$, i.e. the pressure times the projection of that area onto a plane normal to the axis. Therefore the total force is that exerted on a circle of the same diameter. This can be shown rigorously in a few lines of calculus, which is what DumpsterDoofus set up (I've really just stated the same integral in a lot of words).
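For completeness, here is my own write-up of the short calculation alluded to: integrating the axial component of the pressure force over one hemisphere of radius $R$, with $\theta$ the polar angle from the axis,
$$F_z=\int_0^{2\pi}\!\!\int_0^{\pi/2} P\cos\theta \,\underbrace{R^2\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\phi}_{\mathrm{d}A} = 2\pi P R^2\int_0^{\pi/2}\cos\theta\sin\theta\,\mathrm{d}\theta = \pi R^2 P,$$
i.e. exactly the force the pressure difference would exert on a flat disc of radius $R$ — this is why the cross-sectional area through the centre is the "important" area.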
{ "language": "en", "url": "https://physics.stackexchange.com/questions/111133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why fermions have a first order (Dirac) equation and bosons a second order one? Is there a deep reason for a fermion to have a first order equation in the derivative while the bosons have a second order one? Does this imply deep theoretical differences (like phase space dimension etc)? I understand that for a fermion, with half integer spin, you can form another Lorentz invariant using the gamma matrices contracted with the partial derivative, $\gamma^\nu\partial_\nu $, which is kind of the square root of the D'Alembertian $\partial^\nu\partial_\nu$. Why can't we do the same for a boson? Finally, how is this treated in a Supersymmetric theory? Do a particle and its superpartner share the same order of equation or not?
Fermions obey Fermi-Dirac statistics, while bosons obey Bose-Einstein statistics. This is an experimental fact and we can't do anything about it. You find its first and most famous evidence in the Pauli principle. To mention some more, Bose condensation and Fermi blocking are a fact of everyday science, and we even have confirmation that the wavefunction of a fermion changes its sign after a 360 degree rotation. These things experimentally distinguish fermions from bosons. The Dirac equation brings all the fermionic features into the game. It does so in a first order differential equation. On the other hand, the Klein-Gordon equation brings all the bosonic features into the game. It does this at second order. You cannot get fermionic features from the Klein-Gordon equation, or vice versa. On this, there is a nice paragraph in the Peskin & Schroeder book, in chapter 3.5: "How not to quantize the Dirac field: a lesson in Spin and Statistics." It shows that something goes terribly wrong if you try to quantize the Dirac equation in the same manner you quantize bosonic particles. Statistics is what we observe; statistics is what we try to model.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/111401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46", "answer_count": 7, "answer_id": 4 }
Differences between strong, weak, and micro lensing distinct or subtle? In gravitational lensing, there are three categories of lensing: strong, weak, and micro. As I understand it, strong lensing (just as the name implies) occurs when a source and a gravitational lens are relatively close by and the lens is strong, producing extreme distortions of the light from the source in phenomena such as Einstein rings, weak lensing produces a still distorted image of the source but not as distorted as in strong lensing, and microlensing produces a brightening of an object without any distortion (as described in this question). My question is, are these three fundamentally different phenomena (perhaps caused by separate terms or different limits in whatever equations govern gravitational lensing) or are they arbitrary classifications in a continuum of gravitational lensing effects, much as infrared, microwave, and radio are arbitrary classifications of the electromagnetic spectrum? To repeat, my question is not as much about what the differences between these three classifications of gravitational lensing are as much as whether these differences create three largely distinct and independent phenomena or three classifications of the same phenomenon (such as the difference between a lake and a pond, no fundamental difference in the properties of each, just a size difference).
They are all the same phenomenon and they are basically just arbitrary distinctions. However they are useful ones. Strong lensing normally means we see a clear image. We can then use the shape of the image to precisely calculate the mass distribution in whatever is doing the lensing. For strong lensing we need two things (1) the lens must be very massive to produce a big enough image to see, and (2) the alignment needs to be just right i.e. the object must be almost exactly behind the lens. We get weak lensing when the lens is massive, but there isn't anything exactly behind the lens. In that case the lens produces small changes in the apparent distribution and appearance of objects around it. Again we can use these changes to calculate the mass of the lens, though not as precisely as for strong lensing. Microlensing is somewhat different. Small objects like stars are very weak lenses and the image they produce is too small to be resolved. However the lensing does cause a measurable change in brightness. So if a star passes in front of some other object we may see the brightness rise then fall again, and it's this phenomenon that is referred to as microlensing.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/111543", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Equations of motion for the Yang-Mills $SU(2)$ theory I have an exercise for Yang-Mills theory. I can't find an answer anywhere. Derive equations of motion for the Yang-Mills theory with the gauge group $SU(2)$ interacting with an $SU(2)$ doublet of scalar fields. I don't even know how to derive the EOM for the Lagrangian here. Any help? Or any source for an appropriate handbook? (I've been using Maggiore; it doesn't have the EOM for Yang-Mills here.)
The Lagrangian of Yang-Mills theory coupled to scalars/fermions, etc. takes the form $$ {\cal L}_{YM} = - \frac{1}{2} \text{Tr} F_{\mu\nu} F^{\mu\nu} + (D_\mu \phi) (D^\mu \phi)^* + i {\bar \psi} \gamma^\mu D_\mu \psi + \cdots $$ where the $\cdots$ represents other interactions terms that might be present. Let me explain the notation in the above expression. * *$\phi_i$ and $\psi_i$ are multiplets in some representation $R$ of the gauge group $G$. Here, $i = 1, \cdots, \dim R$ *The generators in representation $R$ are denoted as $T^a$. These are normalized to satisfy $$ [T^a, T^b] = i f^{ab}{}_c T^c,~~~~ \text{Tr} ( T^a T^b )= \frac{1}{2}\delta^{ab} $$ In other words, the $T_{ij}^a$'s are just some set of matrices satisfying the above properties. *The covariant derivatives acting on the fields is \begin{align} (D_\mu \phi)_i &= \partial_\mu \phi_i - i g T^a_{ij} A_\mu^a \phi_j \\ (D_\mu \psi)_i &= \partial_\mu \psi_i - i g T^a_{ij} A_\mu^a \psi_j \\ \end{align} where $(A_\mu)_{ij} = A_\mu^a T_{ij}^a$. *$$ F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu + g [A_\mu, A_\nu ] \\ $$ Explicitly $$ F_{\mu\nu}^a = \partial_\mu A^a_\nu - \partial_\nu A_\mu^a + i g f_{bc}{}^a A^b_\mu A_\nu^c $$ This completely specifies the Lagrangian of Yang Mills theory. You can now use the variational principle to determine the equations of motion.
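A hedged sketch of where the variational principle leads (my own addition; signs, factors of $g$ and of $i$ depend on the conventions chosen above, so treat this as schematic rather than a definitive derivation). Ignoring the $\cdots$ terms, varying with respect to $A^a_\nu$, $\phi$ and $\bar\psi$ gives equations of the form
$$ \partial_\mu F^{\mu\nu\,a} + i g\, f_{bc}{}^{a} A^b_\mu F^{\mu\nu\,c} = -\, g\, j^{\nu\,a},\qquad j^{\nu\,a} \sim i\Big[(D^\nu\phi)^\dagger T^a \phi - \phi^\dagger T^a (D^\nu\phi)\Big] + \bar\psi\,\gamma^\nu T^a\,\psi, $$
$$ D_\mu D^\mu \phi = 0, \qquad i\gamma^\mu D_\mu \psi = 0, $$
i.e. a covariant, non-Abelian version of Maxwell's equations sourced by the scalar and fermion currents, together with the gauge-covariant Klein-Gordon and Dirac equations. For the specific case in the question, $T^a = \sigma^a/2$ (the Pauli matrices) and $\phi$ is the $SU(2)$ doublet.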
{ "language": "en", "url": "https://physics.stackexchange.com/questions/111605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Global Properties of Spacetime Manifolds When solving the Einstein field equations, $$R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R = 8\pi GT_{\mu\nu}$$ for a particular stress-energy tensor, we obtain the metric of the spacetime manifold, $g_{\mu\nu}$ which endows the manifold with some geometric structure. However, how can we deduce global properties of a spacetime manifold with the limited knowledge we usually have (i.e. simply the metric)? For example, how may we deduce: * *Whether the manifold is closed or exact *Homology and de Rham cohomology *Compactness I know if we can establish compactness, one can easily arrive at the Euler characteristic, and hence the genus of the manifold, using the Gauss-Bonnet-Chern theorem, $$\int_M \mathrm{Pf}[\mathcal{R}] = (2\pi)^n \chi(M)$$ where $\chi$ is the Euler characteristic and $n$ half the dimension of the manifold $M$. In addition, the Chern classes of the tangent bundle computed using the metric give some information regarding the cohomology. Note this question is really not limited to spacetime manifolds. There are many scenarios in physics wherein we may only know limited information up to the metric, e.g. moduli spaces. It would be interesting to see how one can deduce global properties. This question is inspired by brief discussions on the Physics S.E. with user Robin Ekman, and I would like to thank Danu for placing a bounty; a pleasant surprise! Resources, especially journal papers, which focus on addressing global properties of spacetimes (or more exotic spacetimes, e.g. orbifolds) are appreciated.
The initial value problem in general relativity only gives you the metric on a patch of the spacetime. Other methods must be used to find the true global extension of that spacetime. Therefore, Einstein's equation alone cannot tell you the topology of the spacetime.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/111670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 2, "answer_id": 0 }
Why doesn't this model plane fly? I have been designing a model plane for Design Technology for the past month or so, and today I laser cut my final design, assembled it, then tested it. Upon testing, the plane does not get any lift, whereas the previous testing model, which was virtually the same, did. The plane is built using balsa wood and assembled with hot glue (I used as little glue as possible to reduce weight :) ) Any ideas? Image:
The shape of the wings does not appear right to produce lift. More convex on top, less on the bottom, would be better. And planes often have dihedral (wingtips raised) for stability. EDIT - This answer was too hasty. The comments below are much better.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/111746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Adiabatic expansion in the atmosphere When an air parcel rises and cools adiabatically, it is said that there is no heat transfer, as work is done on the surrounding atmosphere as the parcel expands. The parcel loses internal energy and condensation occurs. I do understand this concept, but why is it that work is done on the surrounding lower-temperature particles by the higher-temperature particles, and not simply conduction, which would be a heat transfer? Wouldn't the faster-moving particles in the air parcel conduct heat on collision?
The work being done on the surroundings is because the air parcel expands as the pressure decreases. There would be some heat transfer if the parcel of interest has a different temperature than the surrounding air. This effect is smaller because gases are poor conductors of heat, as mentioned by @gerrit. We take advantage of this with fiberglass and styrofoam insulation.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/111818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Estimating the weight of a vehicle moving on a highway I need to know the estimated weight of a truck on a highway without using scales. What do I have? * *Speed of the car at time X *RPM at time X *Horsepower *Estimated distance from a point A to B (calculated from GPS data) *And everything I can get from the computer of the car If you need another variable, feel free to ask whether I can get it. I may also know the default weight of the car from the manual, but this is not something I can be sure of. I started with this simple formula: $F=ma$ OR $m=F/a$, but this assumes a constant acceleration and I can't be sure of that. What if we take the problem as 'I want to know the mass of an object in motion, with a non-constant acceleration', or something more neutral. What variables do I need to know? What laws of physics can help?
My previous answer seems to have been misleading, as I found a way to calculate what you asked from the measurements you said you can take! Assuming that the weight is equally distributed over each tyre, we can equate the torque due to friction with the angular acceleration of the tyre multiplied by the moment of inertia of the tyre. The equations would be as follows: $$ I_{tyre} \alpha = \frac {\mu m g R}{4} $$ $$ m = \frac {4 \alpha I_{tyre}}{\mu g R} $$ $$ m = \frac {4 I_{tyre} ({\omega}_f - {\omega}_i)}{\mu g R t} $$ $$ m = \frac {4 I_{tyre} (v_f - v_i)}{\mu g R^2 t} $$ You can use either the last or the second-last equation for a good approximation. There $\omega $ can be calculated using the rpm, $ t $ is the time between two measurements, $\mu $ is the coefficient of friction between tyre and road, $ R $ is the radius of the tyre and $ I_{tyre} $ is the moment of inertia of the tyre.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/111896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Description of the heat equation with an additional term I have the following equation: $$\frac{\partial U}{\partial t}=k\frac{\partial^2 U}{\partial x^2}-v_{0}\frac{\partial U}{\partial x}, x>0$$ with initial conditions: $$U(0,t)=0$$ $$U(x,0)=f(x)$$ The problem asks for an interpretation of each of the terms in the above equation, and for noting what systems it can model, besides solving it by Fourier transform. The Fourier transform solution is quite simple to do; however, I cannot give a physical interpretation of the terms of the equation, let alone a system that it can model. So I wanted to ask for your help to answer this question. Thank you very much for your help and attention.
If you re-write the equation to take the form $$ \frac{\partial\psi}{\partial t}+v\frac{\partial\psi}{\partial x}=k\frac{\partial^2\psi}{\partial x^2} $$ Then we can note that the first term is the one-dimensional form of the material derivative: $$ \left(\frac{\partial}{\partial t}+\mathbf v\cdot\nabla\right)\equiv\frac{d}{dt}\equiv D_t $$ Using the material derivative, your equation becomes $$ \frac{d\psi}{dt}=k\frac{\partial^2\psi}{\partial x^2} $$ This equation describes the diffusion of a fluid along the flow in the flow's frame of reference (i.e., the Lagrangian description of continuum mechanics). The equation as you have it, then, describes the diffusion of a flow as it flows in the stationary frame (i.e., the Eulerian description)--which, as Bernhard says, is the convection-diffusion equation.
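As an illustration of the behaviour (my own minimal sketch, not part of the answer): a simple explicit finite-difference scheme with upwind advection shows the initial profile both drifting with speed $v_0$ (advection) and spreading out (diffusion). Grid sizes and coefficients are arbitrary demo values, and the explicit scheme is only stable for small enough time steps.

```python
import numpy as np

# u_t = k u_xx - v0 u_x  on x > 0, with u(0, t) = 0
k, v0 = 0.01, 1.0
L, N = 10.0, 400
dx = L / N
dt = 0.4 * min(dx**2 / (2 * k), dx / v0)   # crude stability-motivated time step

x = np.linspace(0, L, N + 1)
u = np.exp(-(x - 2.0)**2 / 0.1)            # initial bump, playing the role of f(x)
u[0] = 0.0                                 # boundary condition u(0, t) = 0

for _ in range(500):
    diff = k * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2   # diffusion term
    adv = -v0 * (u[1:-1] - u[:-2]) / dx                 # upwind advection (v0 > 0)
    u[1:-1] += dt * (diff + adv)

# After t = 500*dt = 5, the bump has drifted from x = 2 to roughly x = 2 + v0*t = 7,
# and has also spread out: advection plus diffusion acting together.
print(x[np.argmax(u)])
```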
{ "language": "en", "url": "https://physics.stackexchange.com/questions/112230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How come this paper clip is " floating" on the subway floor? What's the physics behind the paper clip floating? Technically this was filmed on a subway floor in motion. So I'm guessing it has some Newtonian mechanics involved here, and maybe some other stuff I don't know? Please explain. Thank you. http://www.youtube.com/watch?v=-qJoe8W-N98&feature=youtu.be Sorry if this might not be the proper place to ask this question. I do not know much of physics.
To give a real explanation one would have to repeat the experiment under controlled conditions, which is not simple with a moving metro cab. Steel objects will move and be attracted by magnetic fields: think of magnetized scissors picking up fallen pins. If it is an electric train, which a metro would be, there are strong currents being converted for use in the motion of the cab. Secondary currents could be induced in the metal body of the floor, or in metal components beneath it, varying with the strength and position of the cab under the power lines. Currents are accompanied by magnetic fields. This may produce the behavior observed, as with pins and a pair of scissors moving nearer or further away. It may be a motor that goes on and off underneath the car, for some reason, braking or increasing/decreasing in energy. All this is handwaving guesswork. Here is a link describing how electric trains are powered, and it will all depend on the exact system, the exact line and the exact geometry of the cab.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/112308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the symbol Å? I saw this symbol like: $$\lambda=3000\overset{\circ}{\text{A}}$$ and I don't know what this means. Is it a frequency? (since $\lambda$ is usually used for frequency)
It is an ångström, a unit of length commonly used in chemistry to measure things like atomic radii and bond lengths. Although not an official SI unit, it has a simple relationship to the metric units of length: $$1\:\mathrm{ångström} = 1\:\mathrm{Å} = 10^{−10}\:\mathrm{m} = 0.1\:\mathrm{nm} = 100\:\mathrm{pm}.$$
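For the specific value in the question (my own added remark): $3000\:\mathrm{Å} = 3000\times10^{-10}\:\mathrm{m} = 300\:\mathrm{nm}$, i.e. a wavelength in the near ultraviolet, just below the visible range.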
{ "language": "en", "url": "https://physics.stackexchange.com/questions/112443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Will glass always break in the same way? This question has had me thinking for a while. If I have two large panes of glass and a rock or similar item is thrown in exactly the same place on the glass, would the two panes break in the same way? Does the shattering of glass follow any rules or is it always random and subject to other variables? Could you predict the shattering of glass down to the smallest shards or, again, is it random?
The answer is sort of yes and no. YES: If you have two perfectly identical panes of glass and two perfectly identical projectiles, and you throw the two projectiles in a perfectly identical way, then the two panes will shatter in a perfectly similar fashion. This is really just by construction, you did the same thing twice. NO: Shattering glass involves breaking bonds between atoms/molecules. This leads to two important conclusions. First, two "identical" panes of glass for this experiment must be identical at least down to the arrangement of the atoms (including the placement of any impurities), and possibly as far as the internal configuration of each atom (as the strength of the bonds can depend on the electron configuration, for instance). In practice this means that it is impossible, given current technological constraints, to construct two macroscopic identical panes of glass. Second, predicting the shattering of a given pane of glass would require both a detailed description of the microscopic structure of the pane (which is impractical because of the large amount of data storage required, and because the structure varies quickly enough in time that any measurement would quickly become obsolete), and solving the relevant dynamical equations. I imagine the equations would be reasonably easy to write down, we're talking about a bunch of particles connected by bonds and reasonably well defined forces, after all. But solving them would be computationally prohibitive, given the size of the system. Still, some characteristics of the shattering can be predicted, for instance under suitable conditions the glass will begin to break at the location of the projectile impact, and the smallest shards will form near the impact site, larger shards further away, etc. The coarse properties of the process can be predicted, but we're stuck describing the fine properties as "random".
{ "language": "en", "url": "https://physics.stackexchange.com/questions/112543", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 0 }
Calculating $\mathrm{Tr}[\log \Delta_F]$ I have been stuck on this problem for quite some time. I have a propagator in the momentum representation (from this Phys.SE question), which looks like $$ \widetilde\Delta_F(p) = \frac{1}{(p^0)^2-\left(\left(n\pi/L\right)^2+m^2\right)+i\epsilon} $$ I wish to know how to go about calculating $\mathrm{Tr}[\log \Delta_F]$ for these kinds of propagators in general. The propagator in the position representation would look like, $$ \Delta_F(x-x') = \sum_{n=1}^\infty\int\frac{dp_0}{(2\pi)^2}e^{ip_0(x^0-x'^0)}e^{i\frac{n\pi}{L}(z-z')}\frac{1}{(p^0)^2-\left(\left(n\pi/L\right)^2+m^2\right)} $$ where I have replaced the integral over $p_z$ with a sum over $n$. EDIT 1 : With the given propagator I can write the Trace to be, $$ \text{Tr}\log{\Delta} = - \sum_n \int dp_0 \log{\bigg(p_0^2 - \bigg(\frac{n\pi}{L}\bigg)^2 - m^2\bigg)} $$ but this is divergent in both the limits of $p_0$ I suppose. I have not introduced any cut-off either. How do I renormalise this given the context of this problem? PS: Sorry, I am a beginner with QFT and path integral calculations. It would be helpful if I could get quite an explicit answer. More precisely, I wish to know what is the meaning of $\mathrm{Tr}[\log \Delta_F]$.
Since you want to extract the Casimir force, just take (minus) the derivative wrt $L$ of your expression in momentum space; the result is finite. This is actually a general mechanism to regularize the theory: taking derivatives with respect to some parameters lowers the degree of divergence. You can integrate back after you have gotten to the finite expression. Btw, I don't think your propagator is correct, since it depends only on the difference x−x′ while you have two boundaries that break translations, so that it should be a separate function of x and x′ – TwoBs 16 hours ago
{ "language": "en", "url": "https://physics.stackexchange.com/questions/112689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 2, "answer_id": 1 }
If I'm floating in space and I turn on a flashlight, will I accelerate? Photons have no mass but they can push things, as evidenced by laser propulsion. Can photons push the source which is emitting them? If yes, will a more intense flashlight accelerate me more? Does the wavelength of the light matter? Is this practical for space propulsion? Doesn't it defy the law of momentum conservation? Note: As John Rennie mentioned, all in all the wavelength doesn't matter, but for a more accurate answer regarding that, see the comments in DavePhD's answer . Related Wikipedia articles: Ion thruster, Space propulsion
Can photons push the source which is emitting them? Yes, photons have momentum and momentum must be conserved. The source is pushed in the opposite direction of the photons. If yes, will a more intense flashlight accelerate me more? Yes, more photons means greater momentum. Does the wavelength of the light matter? Yes, shorter wavelength photons have higher momentum. $p = h / \lambda $ Is this practical for space propulsion? Possibly, see Prospective of Photon Propulsion for Interstellar Flight (or use Alternative download site for pre-print version ) The concept of photon recycling is considered, for a potential enhancement of thrust/power ratio by several orders of magnitude. Doesn't it defy the law of momentum conservation? No, photons have momentum in one direction, the source has momentum in the opposite direction, so momentum is conserved.
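To get a feeling for the practicality (my own back-of-the-envelope sketch, with assumed flashlight power and mass): for a light source that radiates all of its power $P$ in one direction, the thrust is $F = P/c$, which is tiny for a hand-held flashlight.

```python
c = 3.0e8        # speed of light, m/s

P = 1.0          # radiated power of a small flashlight, W (assumed)
m = 0.1          # mass of the flashlight plus holder, kg (assumed)

F = P / c        # thrust from the emitted photon momentum flux
a = F / m        # resulting acceleration

print(f"thrust       : {F:.2e} N")      # ~3.3e-9 N
print(f"acceleration : {a:.2e} m/s^2")  # ~3.3e-8 m/s^2
```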
{ "language": "en", "url": "https://physics.stackexchange.com/questions/112866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "60", "answer_count": 4, "answer_id": 2 }
Motion of a car rounding a bank When a car is traveling round a banked track as fast as possible, it has a tendency to slip up the slope. The opposite happens when the car travels slowly: it has a tendency to slip down. Can someone please give me an intuitive reason as to why this "tendency to slip up or down" occurs?
If you need just an intuitive reason, here it is: the primary forces that push the car up or down the track come from the balance between the centripetal force and the tyre friction force. If the friction between the tyre and the road is more than the centripetal force (which in turn depends on the velocity of the vehicle) due to the banking surface (the bank provides the necessary centripetal force for the vehicle to maneuver the curves), the car slips in. And in the other case the car slips away. In case you need a quantitative analysis, leave a comment.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/113019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What would we see in the sky if it weren't blue What would we see during the day when we look into the sky (other than clouds), if it weren't for Rayleigh scattering making the sky blue? Would the sky be dark, like at night?
Consider this picture of the Earth from the Moon: Most other photographs taken from the Moon would do. From how the Earth is illuminated, or the shadows of objects on other photographs, it is clear that we are in broad daylight. The sky is pitch black, however.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/113294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Wrapping plastic in aluminium foil to protect it from heat Does it make any sense to wrap the plastic handle of a pan in aluminium foil to protect it from overheating when placing it in a hot oven?
No. If your saucepan can take the heat of an oven, the handle needs no protection. If it can't take the heat of an oven, there's really nothing you can do to provide any practical protection to the handle. Not only will there be heat transfer from the surrounding air (and radiant heat from the heating element/burner and all the hot interior surfaces of the oven), but heat will also be conducting into the handle from the body of the pan. In all likelihood, to achieve what you want by putting the pan in the oven will require it to be there long enough that the handle is going to approach oven temperature no matter what you do. So, you need a pan with a handle that can withstand such temperatures. There are some materials we commonly describe as plastics, like bakelite, which can take a lot of heat, but still much less than what you'd commonly use in an oven. Bottom line: if it doesn't say oven safe on the label, it doesn't belong there, no matter what you try to do with it or to it.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/113364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Is it possible to produce gamma radiation using a radio emitter? As in the title, I'm wondering whether it is possible. I think it is possible, because we have powerful enough radio techniques and gamma radiation is just EM waves, not particles. However, I think it is useless, because it costs too much. Can anyone say something more about this? * *How much power would such an emitter consume? *Would it be helpful anywhere? *How high would the frequencies of the current in the antenna have to be? *How big would the device be?
Is it possible to produce gamma radiaton using radio emitter? Unlikely. A 'radio emitter' consists of, at least, some type of antenna and a transmitter to drive that antenna. The size of the antenna is related to the wavelength of the transmitted radio wave, e.g., half-wave dipole, quarter-wave monopole. But the wavelength of gamma rays is less than the diameter of an atom.
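A rough numerical sketch of why this fails (my own numbers, using a fairly soft 100 keV gamma ray as the example): the wavelength is of the order of picometres and the frequency of order $10^{19}$ Hz, so a resonant antenna would have to be smaller than an atom and be driven at frequencies no circuit can reach.

```python
h = 6.626e-34      # Planck constant, J s
c = 3.0e8          # speed of light, m/s
eV = 1.602e-19     # J per electronvolt

E_gamma = 100e3 * eV        # a 100 keV gamma ray (example value)
lam = h * c / E_gamma       # wavelength of that photon
f = c / lam                 # corresponding frequency

print(f"wavelength : {lam*1e12:.1f} pm")              # ~12 pm, smaller than an atom
print(f"frequency  : {f:.1e} Hz")                     # ~2.4e19 Hz
print(f"half-wave dipole : {lam/2*1e12:.1f} pm")      # required antenna length
```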
{ "language": "en", "url": "https://physics.stackexchange.com/questions/113434", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
Gravitational force from spherical shell Say we have a point mass $\mu$ located at $(0,0,R)$ and a spherical shell (not considering its volume!) of radius $R$ centred at the origin. So we have a particle standing right on top of the sphere and we want to determine the total gravitational force exerted on the particle by the spherical shell. Since the force is directed along the z-axis, we only have to consider the z-component of the force. Moreover, suppose the shell has homogeneous density $\sigma$. We can then write $dM = \sigma dA = \sigma R^2 \sin \theta d\theta d\phi$. So we have $$ dF_z = \frac {(z-R)G\mu dM}{\sqrt{x^2 + y^2 + (z-R)^2}^{\ 3}} = \sigma \mu G R^2 \frac { (z-R)\sin \theta d\theta d\phi }{ \sqrt{x^2 + y^2 + (z-R)^2}^{\ 3} } $$ We then integrate over the entire sphere: $$ F_z = \iint \sigma \mu G R^2 \frac { (z-R)\sin \theta d\theta d\phi }{ \sqrt{x^2 + y^2 + (z-R)^2}^{\ 3} } $$ This turns out to equal $-2\pi G \sigma \mu$ and $\sigma = \frac M{4\pi R^2}$, where $M$ is the total mass of the spherical shell, which yields $-\frac {GM\mu}{2R^2}$. What I do not understand is why there is a factor 2 in the denominator; should it really be there? Just for the record, we get the classical formula if the particle is outside the shell and zero if it is inside the shell.
The factor of two is correct as far as the integral goes; it comes from the unphysical situation of having your test mass exactly on the thin shell. Intuitively, you get the average of the "just outside" result (as if mass is concentrated at the centre) and the "just inside" result of zero. A more physical thing to do would be to `regulate' the calculation somehow, to find something more meaningful. For example, you could give the shell a small but finite thickness, in which case the force interpolates linearly between the inside and outside results as you pass through it. Alternatively, you could cut a very small circular hole in the shell where you're passing through. Then the force would smoothly change from zero on the inside to the expected outside result in a distance of order the size of the hole. (If the hole has radius $\epsilon<<R$, at a height $h<<R$ above the surface the force will be modified by a factor $\frac{h}{\sqrt{h^2+\epsilon ^2}}+1$).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/113587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Apparent dimensional mismatch after taking derivative Suppose I have a variable $x$ and a constant $a$, each having the dimension of length. That is $[x]=[a]=[L]$ where square brackets denote the dimension of the physical quantity contained within them. Now, we wish to take the derivative of $u = log (\frac{x^2}{a^2})-log (\frac{a^2}{x^2})$. Here, we have taken the natural logarithm. It is clear that $u$ is a dimensionless function. $$\frac{du}{dx} = \frac{a^2}{x^2}.\frac{2x}{a^2} - \frac{x^2}{a^2}.(-2a^2).\frac{2x}{x^3} \\ = \frac{1}{x} - 4. $$ Here, the dimensions of the two terms on the right do not match. The dimension of the first term is what I expected. Where am I going wrong?
You are doing nothing wrong except failing to take the derivative of the second term correctly. Remember, the derivative is "the speed of change" of a function. Now, you take a dimensionless number and want to find how fast it changes with respect to x. Of course the resulting dimension will be ~ [1/m], where m is the thing you measure your distances in (I usually use metres ;)). Imagine you did the same with change over time: the initial function may very well be dimensionless, yet after taking the derivative the dimension would obviously be $s^{-1}$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/113715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
What is Jacobian about the "Jacobian Edge" in $E_\mathrm{T}$ distributions? Particle physicists often talk of a "Jacobian Edge" in distributions, i.e. when looking at the $E_\mathrm{T}$ distribution of $W \to e \nu$ decays at rest. How is this related to the Jacobian determinant we all know about?
Imagine you have a $W^+$ decaying (at rest) in the electron channel, so $W^+\rightarrow e^+ \nu$. The transverse momentum of the electron is given by (neglecting the electron mass, which is very small compared to $M_W$): $$p_t=\frac{M_W}{2}\sin{\theta}$$ Now you want the differential cross section versus $p_t$, i.e. $\frac{d\sigma}{d p_t}$. Following the rules for derivatives and using the previous expression for $p_t$ you get: $$\frac{d\sigma}{d p_t}=\frac{d\sigma}{d \cos{\theta}}\,\frac{d\cos{\theta}}{dp_t}=\frac{d\sigma}{d \cos{\theta}}\,\frac{2p_t}{M_W}\,\frac{1}{\sqrt{(\frac{M_W}{2})^2-p_t^2}}$$ From this formula you can see that you have a maximum of the cross section at $p_t=\frac{M_W}{2}$ and then a fast drop off, so an "edge". The term "Jacobian" for this edge comes from the fact that this peak comes from the transformation of variables described here.
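A quick numeric illustration of the shape of that Jacobian factor (my own sketch; a constant matrix element is assumed and only the kinematic factor is evaluated), showing how it blows up as $p_t$ approaches $M_W/2 \approx 40$ GeV:

```python
import numpy as np

M_W = 80.4                                   # W boson mass in GeV
pt = np.array([10., 25., 35., 39., 40.1])    # transverse momenta below M_W/2 = 40.2 GeV

# The Jacobian factor multiplying dsigma/dcos(theta)
jac = (2 * pt / M_W) / np.sqrt((M_W / 2)**2 - pt**2)
print(jac)   # grows rapidly as pt -> M_W/2, producing the "edge"
```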
{ "language": "en", "url": "https://physics.stackexchange.com/questions/113803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Deriving Bernoulli's equation via conservation of E So I'm not OK with how some people derive this equation. These people consider a pipe whose endings have cross-sectional areas and heights which are different. They then use the conservation of energy principle by saying $dW = dK + dU$ (where $W$ is work, $K$ is kinetic energy, and $U$ is potential energy). For this they consider that the work done on the system would be due to external pressure forces exerted on the whole system of water along the pipe. And here comes the part where I disagree: they use this work to calculate the change in potential and kinetic energy for just a small slab of water within the whole system. This is completely invalid, isn't it? I mean you would have to consider the entire system, I think. My way of interpreting the derivation is to consider just one slab the whole time. Is this a valid way of thinking? Thanks! edit: In fact, in one video I saw, the person just says "the middle chunk of water stays the same the whole time, so we can just ignore it".
You cannot derive the classic Bernoulli Equation from conservation of energy, because, contrary to popular opinion, it is actually not an expression of conservation of energy at all. It is more accurately construed as an integrated expression of the conservation of linear momentum, $F=ma$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/113975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is evaporative cooling more efficient with dry or moist air? I live in India, and in the summer season the temperature can reach up to $45\,^\circ\mathrm{C}$. We use a split 1.5-ton AC in our small office. The idea is to put an evaporative cooler on the inlet side of the AC's heat exchanger to give it more efficient cooling. Will it help to increase the efficiency, or the COP? By how much?
I think it will increase efficiency. Compared to a normal AC there will be a significant change. I don't know the exact change, but you will probably feel a difference.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/114156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
How would one compute the angle of deflection, in a relativistic collision - underspecified system? Consider the simplistic case of two identical mass particles colliding elastically, with the second particle initially stationary and the first particle travelling with energy $E$. By conservation of 4-momentum we have: $$p_{1}^{\mu}+p_{2}^{\mu}=p_{1}'^{\mu}+p_{2}'^{\mu}$$ Taking the inner product of this with itself: $$\left\langle p_{1}^{\mu} \middle| p_{1}^{\mu}\right\rangle + 2\left\langle p_{1}^{\mu} \middle| p_{2}^{\mu} \right\rangle+\left\langle p_{2}^{\mu} \middle| p_{2}^{\mu} \right\rangle = \left\langle p_{1}'^{\mu} \middle| p_{1}'^{\mu}\right\rangle + 2\left\langle p_{1}'^{\mu} \middle| p_{2}'^{\mu}\right\rangle +\left\langle p_{2}'^{\mu} \middle| p_{2}'^{\mu}\right\rangle $$ Using the fact that $\left\langle p_{i}^{\mu} \middle| p_{i}^{\mu}\right\rangle = m_{0}^{2}c^{2}$, we can simplify this: $$2m_{0}^{2}c^{2}+2m_{0}E= 2m_{0}^{2}c^{2}+2\left(\frac{E_{1}'E_{2}'}{c^{2}}-\vec{p}_{1}'\cdot\vec{p}_{2}'\right) \implies m_{0}E=\frac{E_{1}'E_{2}'}{c^{2}}-\vec{p}_{1}'\cdot\vec{p}_{2}'$$ We note that $\vec{p}_{1}'\cdot\vec{p}_{2}'=\|\vec{p}_{1}'\|\|\vec{p}_{2}'\|\cos(\theta)$, where $\theta$ is the inner angle between $\vec{p}_{1}'$ and $\vec{p}_{2}'$. However, this results in a system with $\|p_{1}'\|$, $\|p_{2}'\|$ and $E_{1,2}$ unspecified and I cannot see how we could thus extract $\theta$ from the initial conditions; what have I misunderstood or misapplied here?
Yes, it's underspecified. In real experiments with beams elastically scattering from fixed targets you have scattered particles coming out at all angles; usually the cross section as a function of angle tells you about the energy levels involved.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/114220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Feynman's $i \epsilon$ prescription in loop expansion I have some questions about the $i\epsilon$ factor in Feynman diagrams. First, what is the physical meaning of $i\epsilon$ in loop amplitudes? Second, how does it ensure unitarity? And third, the Dyson series assumes that incoming and outgoing particles are free; this can be implemented by assuming that the interaction Hamiltonian switches off adiabatically, $e^{-\eta\,t}H_{I}(t)$. Is this $\eta$ related to the $i\epsilon$?
First, the $i \epsilon$ prescription: in Feynman diagrams we have a lot of Green's functions connected by some rules, the Feynman rules. Actually, the $i \epsilon$ prescription is really a property of the Green's function. The prescription is responsible for a choice of boundary conditions (asymptotic behaviour). We may put this $i \epsilon$ inside the expression $(\omega + i \epsilon)^2$ or $(\omega - i \epsilon)^2$. The first is related to the retarded Green's function, the second to the advanced Green's function. These choices of boundary conditions are related to correspondingly adapted Feynman rules. Calculating things in these prescriptions is not manifestly relativistic and not so compact. The Feynman Green's function is the best for doing calculations and very natural. Now, let's be more general. These prescriptions arise when we have operators like this: $$ (H-E)G(t)=\delta(t) $$ We want to find an expression for $G(t)$. Applying a Fourier transform, we find that $$ G(t) = \frac{1}{2 \pi} \int d\omega \, \frac{e^{-i\omega t}}{\omega - E\pm i\varepsilon} $$ If you view this function in the complex plane, you can see that there is a cut on the real axis made by the eigenvalues of $H$. The prescription $\epsilon$ determines how far we pass from the singularity $\omega=E$, and the choice of sign tells on which side. This prescription is simply a procedure for obtaining the inverse of operators with real eigenvalues, satisfying some asymptotic behaviour. We can interpret this $\epsilon$ as the inverse of the lifetime of the signal described by the respective Green's function, because we can identify a relativistic Breit–Wigner distribution. Unitarity holds because we take $\epsilon \rightarrow 0$, so the signal persists for infinite time. The adiabatic turning on and off of the interaction is another mathematical device (the adiabatic theorem and the Gell-Mann and Low theorem), but physically it is very close to the $i\epsilon$: $\eta$ is the inverse of a characteristic time that is sent to infinity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/114351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Why Does Change of Magnetic Flux Induce an emf? Why does change in magnetic flux with time through a coil induce an emf across it? Please explain what happens to the charges in the coil when magnetic flux changes? Also, why does a constant magnetic flux not induce an emf?
To produce an EMF we need to drive electrons, i.e. we need to produce a net charge difference at the ends of a conductor. As we all know, a moving charged particle will experience a force from a magnetic field. If we are talking about DC machines, the magnetic field produced by the field coil remains constant, that is, no force is experienced by the stationary electrons of the wire. So, we have to produce a relative motion between the magnetic field and the electrons. Whenever the flux passing through the coil changes in any way (by changing the angle, the magnetic field, or the area of the coil), we are actually producing a relative motion between the electrons and the magnetic field. As a result, the electrons experience a magnetic force and shift to produce an EMF.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/114536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 3 }
Is Newton's third law always correct? Newton's third law states that every force has an equal and opposite reaction. But this doesn't seem like the case in the following scenario: For example, a person punches a wall and the wall breaks. The wall wasn't able to withstand the force, nor provide equal force in opposite direction to stop the punch. If the force was indeed equal, wouldn't the punch not break the wall? I.e., like punching concrete, you'll just hurt your hand. Doesn't this mean Newton's third law is wrong in these cases?
Almost off-topic, it's worth mentioning that Newton's laws only apply in a Galilean (inertial) reference frame, which is rather an idealization (making Newton's laws an approximation of reality… but what else is physics anyway?) In any case, other answers were right: the wall receives a force from you (and it may break) and also applies a force on you (you may feel pain in your hand).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/114669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 14, "answer_id": 7 }
Does the Earth get closer to the Sun? We know that the Sun loses an amount of its mass equivalent to the amount of energy it produces, according to the $E=mc^2$ equation, so the Sun is losing mass every second. Does this affect the space-time curvature it creates? Or does this affect the distance between the Sun and the Earth? Does losing mass affect the gravity of the Sun or the other planets?
The answer is here. There is an effect from the loss of mass, and therefore of the gravitational attraction between the Earth and the Sun, but it is small: If we assume that the Sun's rate of nuclear fusion today is the same as the average rate over those 10 billion years (a bold assumption, but it should give us a rough idea of the answer), then we're moving away from the Sun at the rate of ~1.5 cm (less than an inch) a year. I probably don't even need to mention that this is so small that we don't have to worry about freezing. There is also the even smaller effect of the tides induced on the Sun by the Earth: It turns out that the yearly increase in the distance between the Earth and the Sun from this effect is only about one micrometer (a millionth of a meter, or a ten thousandth of a centimeter). So this is a very tiny effect.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/114735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Starting a nuclear reaction In Chemistry, an amount of energy has to be supplied for a reaction to occur. This energy, known as the "activation energy", breaks up the bonds between molecues in the substance. It is equivalent to the total bond energy of the reactants. However, in high school I learnt that the energy required to start a nuclear reaction is the difference between the binding energy of the reactants and the binding energy of the products. Why is it that the minimum required energy is not the binding energy of the reactants, similar to a chemical reaction?
For nuclear reactions we commonly talk about the Q of the reaction. For the reaction A(a,b)B, where "A" and "a" are the reactants (A is generally the target, "a" is the projectile) and "b" and "B" are the products, $Q = \left(m(\mathrm{A})+m(\mathrm{a})-m(\mathrm{B})-m(\mathrm{b})\right)c^2.$ Here, $m(A)$ is the *nuclear* mass of A. If $Q>0$, the reaction will release energy in the center of mass; if $Q<0$, energy must be supplied in the center of mass frame. This value, however, does not take into account the energy required to overcome any Coulomb repulsion which might be involved. So the answer to your comment is: No, balancing the Q value is not sufficient to make the reaction happen. If the reaction involves bringing two positive nuclei (or particles) together, you must provide energy to overcome the Coulomb repulsion.
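A worked example of the Q-value formula (my own illustration, using the well-known D + T → n + ⁴He reaction; since the electron numbers balance on both sides, atomic masses can be used in place of nuclear masses here):

```python
u_to_MeV = 931.494   # energy equivalent of 1 atomic mass unit, MeV

m_D, m_T = 2.014102, 3.016049     # deuterium, tritium (atomic masses, u)
m_n, m_He4 = 1.008665, 4.002602   # neutron, helium-4 (u)

Q = (m_D + m_T - m_n - m_He4) * u_to_MeV
print(f"Q = {Q:.1f} MeV")   # ~17.6 MeV released in the centre of mass
```

Note that even though Q is large and positive, the deuteron and triton are both positively charged, so energy must still be supplied to overcome the Coulomb repulsion before the reaction can occur — exactly the point made above.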
{ "language": "en", "url": "https://physics.stackexchange.com/questions/114823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why is the ratio of velocity to the speed of light squared in the Lorentz factor? Why is the ratio of velocity to the speed of light squared in the Lorentz factor? $${\left( {{v \over c}} \right)^2}$$ My only guess is the value must be positive.
It derives from the special relativistic version of the Pythagorean theorem. The hypotenuse of a Euclidean triangle is given by $$h^2 = a^2 + b^2$$ In Minkowski space (special relativity) you get a minus sign instead of a plus sign, but you still have to square everything: $$\Delta s^2 = \Delta t^2 - \Delta x^2$$ (and then you work onwards as described in Alfred's answer.)
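Spelling out the last step (my own short addition, with the factors of $c$ restored): for a particle moving at speed $v$ we have $\Delta x = v\,\Delta t$, so
$$\Delta s^2 = c^2\Delta t^2-\Delta x^2 = c^2\Delta t^2\left(1-\frac{v^2}{c^2}\right)\ \Rightarrow\ \Delta\tau=\frac{\Delta s}{c}=\Delta t\sqrt{1-\frac{v^2}{c^2}},\qquad \gamma=\frac{\Delta t}{\Delta\tau}=\frac{1}{\sqrt{1-\left(v/c\right)^{2}}}.$$
The ratio $v/c$ appears squared simply because both sides of this "Pythagorean" relation involve squared intervals.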
{ "language": "en", "url": "https://physics.stackexchange.com/questions/114913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 6, "answer_id": 2 }