How do we know that one particular solution for the velocities of a two-body elastic collision is the correct one over the other? Assuming there is a 1-D collision between two bodies, having masses $m_1$ and $m_2$, if we conserve energy and momentum, we get two solutions. $$ v_{1,i} = v_{1,f} \\ v_{2,i} = v_{2,f} $$ or $$ v_{1,i} = -v_{1,f} \\ v_{2,i} = -v_{2,f} $$ Both of these are valid mathematical solutions under the conservation laws. If so, apart from practical experimentation, how do we decide which one of these is the correct answer? Is there an analysis that we should do locally within the system, rather than just using global laws? Note: Subscripts i and f denote initial and final states.
I mean, the first solution would not change anything in the system at all after the collision, right? The two masses would just pass through each other. Since you want to calculate the final state of a fully elastic collision, why would you consider the "nothing changes" solution?
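A quick way to see the two mathematical branches is to solve the conservation equations explicitly. Below is a small sketch of my own (not from the original post); the masses and initial velocities are purely illustrative:

```python
import sympy as sp

m1, m2 = 2.0, 3.0          # illustrative masses (kg)
u1, u2 = 5.0, -1.0         # illustrative initial velocities (m/s)

v1, v2 = sp.symbols('v1 v2')
momentum = sp.Eq(m1*u1 + m2*u2, m1*v1 + m2*v2)
energy = sp.Eq(m1*u1**2 + m2*u2**2, m1*v1**2 + m2*v2**2)

# Two solution branches: the trivial "nothing happens" branch (v1=u1, v2=u2)
# and the physical elastic-collision branch.
for sol in sp.solve([momentum, energy], [v1, v2], dict=True):
    print(sol)
```

The first branch simply returns the initial velocities unchanged (the bodies pass through each other without interacting); the second branch is the familiar elastic-collision result, which is selected by requiring that a collision actually changes the velocities.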
{ "language": "en", "url": "https://physics.stackexchange.com/questions/540239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Explain how scaling of the inverse square law breaks down at a star's surface If the radiation pressure at distance $d>R$ from the center of an isotropic black body star is found to be $$P_{rad}=\large{\frac{4\sigma T^4}{3c}}\left[1-\left(1-\frac{R^2}{d^2}\right)^{\frac{3}{2}}\right],$$ a) How do I show that $P_{rad}$ obeys an inverse square law for $d \gg R$? b) Why does the inverse square law scaling break down close to the star's surface?
As for (b): Why does the inverse square law scaling break down close to the star's surface? $$ P_{\,d\approx R} = \lim_{d \to R} {\frac{4\sigma T^4}{3c}}\left[1-\left(1-\frac{R^2}{d^2}\right)^{\frac{3}{2}}\right] = {\frac{4\sigma T^4}{3c}} $$ In other words, when you are at the star's surface you receive as much radiation flux as possible, so the radiation pressure depends only on the star's surface temperature.
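Part (a) can be checked directly by expanding the bracket for small $R/d$; here is a short sympy sketch (my addition, not part of the original answer):

```python
import sympy as sp

eps = sp.symbols('epsilon', positive=True)   # eps = R/d, small when d >> R
bracket = 1 - (1 - eps**2)**sp.Rational(3, 2)

# Leading behaviour of the bracketed term in P_rad for d >> R
print(sp.series(bracket, eps, 0, 5))         # 3*epsilon**2/2 - 3*epsilon**4/8 + O(epsilon**5)
```

Keeping only the leading term gives $P_{rad}\approx \frac{2\sigma T^4}{c}\,\frac{R^2}{d^2}\propto d^{-2}$, and the expansion (hence the inverse-square scaling) fails when $R/d$ is no longer small, i.e. close to the surface.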
{ "language": "en", "url": "https://physics.stackexchange.com/questions/540337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How can a red light photon be different from a blue light photon? How can photons have different energies if they have the same rest mass (zero) and same speed (speed of light)?
The only difference between the two is the energy they have. $$ E=\frac{hc}{\lambda} $$ As you can see from the equation above, different energies mean different wavelengths. Different wavelengths mean different colors. It is important to know that even though photons are always massless and always move at the speed of light, that does not mean they always have the same energy.
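As a quick numerical illustration (my own example, with representative wavelengths for red and blue light):

```python
# Photon energy E = h*c/lambda for representative red and blue wavelengths
h = 6.62607015e-34        # Planck constant, J*s
c = 2.99792458e8          # speed of light, m/s
eV = 1.602176634e-19      # joules per electronvolt

for name, lam in [("red", 650e-9), ("blue", 450e-9)]:
    E = h * c / lam
    print(f"{name}: lambda = {lam*1e9:.0f} nm, E = {E/eV:.2f} eV")
# red ~1.9 eV, blue ~2.8 eV: same speed, same (zero) mass, different energy
```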
{ "language": "en", "url": "https://physics.stackexchange.com/questions/540485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29", "answer_count": 6, "answer_id": 4 }
Is Stokes' law, for drag force in fluids, accurate? In high school, I was taught that Stokes' law is based on the assumption that the drag force is proportional to the velocity, viscosity and radius of the sphere (and the powers/exponents are evaluated using dimensional analysis). Is Stokes' law proven or is it just an assumption?
Stokes' law only applies when the inertia forces in the fluid (caused by its acceleration or non-uniform motion) are negligible compared with the viscous forces. The ratio of the two types of forces is described by a non-dimensional number called Reynolds number (usually written as Re). Stokes' law applies when Re is much smaller than $1$. This is only true for very slow "creeping" flows, or for very small objects moving in typical fluids like air or water - for example dust particles "floating" in the air or single-celled animals "swimming" in water. For comparison, a ball being thrown in most sports will have Re of the order of $10^5$ to $10^6$ and a large ship travelling at sea may have Re of the order of $10^9$ to $10^{10}$. Stokes' law can be used to measure the viscosity of fluids, so long as the experiment only involves small Reynolds numbers. For larger Reynolds numbers, the drag force is approximately proportional to the velocity squared, not to the velocity.
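To get a feel for the numbers, here is a small sketch (my addition) that estimates the Reynolds number and, where Stokes' law applies, the drag force $F = 6\pi\eta r v$; the dust-grain and ball parameters are illustrative only:

```python
import math

rho_air, mu_air = 1.2, 1.8e-5       # air density (kg/m^3) and dynamic viscosity (Pa*s)

def reynolds(rho, v, L, mu):
    return rho * v * L / mu

# Dust grain settling in air (illustrative numbers): Re << 1, Stokes' law applies
r_dust, v_dust = 5e-6, 0.003        # 5 micron radius, ~3 mm/s settling speed
Re_dust = reynolds(rho_air, v_dust, 2*r_dust, mu_air)
F_stokes = 6 * math.pi * mu_air * r_dust * v_dust
print(f"dust: Re = {Re_dust:.1e}, Stokes drag = {F_stokes:.1e} N")

# Thrown ball (illustrative numbers): Re ~ 1e5, so Stokes' law does NOT apply
d_ball, v_ball = 0.22, 25.0
print(f"ball: Re = {reynolds(rho_air, v_ball, d_ball, mu_air):.1e}")
```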
{ "language": "en", "url": "https://physics.stackexchange.com/questions/540590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Contradiction using Ampère's law to calculate magnetic field $B$ I am trying to study the influence on the magnetic field $B$ generated by a solenoid in two different cases * *the solenoid is wound around an open iron core *the solenoid is wound around a closed iron core I am trying to use Ampère's law to give an estimation of the magnetic field in both cases, but I seem to arrive at a contradiction: according to the calculation, the magnetic field with an open iron core would be bigger than that of a closed iron core. I am pretty sure this is wrong, because the flux lines in an open iron core need to travel a large distance through the air (which has low permeability), whereas the flux lines in a closed iron core can travel through the highly permeable material all the time, and never need to travel in the air. See the attached picture for my calculation. (I am assuming the coil diameter, wire diameter, number of windings, current and core material to be the same in both cases. The only difference is the iron core being open or closed) What am I doing wrong here?
Thank you ohneVal for your answer. I still had problems understanding why my approach didn't work so I also had a chat with some colleagues about this. I think I now understand my biggest flaw. Ampère's law is actually $$ \oint H\cdot dl=n\cdot I $$ since $ H=\frac{B}{\mu} $ this can be written as $$ \oint \frac{B}{\mu}\cdot dl=n\cdot I $$ Lots of textbooks will now rewrite the law as $ \oint B\cdot dl=\mu \cdot n\cdot I $ however, this can only be done if $\mu$ is constant (for example $\mu_0$, the permeability of free space). In the left case, the path travels partly through the iron core, and partly through the air. In other words, $\mu$ is not constant along the path, so Ampère's law cannot be easily rewritten or simplified; it stays $ \oint H\cdot dl=n\cdot I $, and so $B$ cannot be easily calculated either. In the right case, the path travels all the way through the iron core, so in this case $ \oint H\cdot dl=n\cdot I $ can be rewritten as $ \oint B\cdot dl=\mu_0 \cdot \mu_r \cdot n\cdot I $ and now $B$ can be calculated as done on the right side of the figure.
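A rough way to quantify the difference is the magnetic-circuit (reluctance) picture, in which $\oint H\,dl = nI$ is split into the core portion and the air portion of the path. The sketch below is my own, the geometry is heavily idealized (uniform cross-section, no fringing, the open core modelled crudely as a closed path with an air gap), and all numbers are illustrative:

```python
from math import pi

mu0 = 4e-7 * pi
mu_r = 2000.0            # illustrative relative permeability of the iron
N, I = 200, 1.0          # illustrative number of turns and current (A)
l_core = 0.30            # magnetic path length inside the iron (m)

# Closed core: H * l_core = N*I with H = B / (mu0 * mu_r)
B_closed = mu0 * mu_r * N * I / l_core

# Open path modelled as the same loop with an air gap of length l_gap:
# (B/(mu0*mu_r)) * (l_core - l_gap) + (B/mu0) * l_gap = N*I
l_gap = 0.02
B_open = mu0 * N * I / ((l_core - l_gap) / mu_r + l_gap)

print(f"B (closed core)         ~ {B_closed:.3f} T")
print(f"B (path with air gap)   ~ {B_open:.4f} T")
```

Even a short air segment dominates the line integral, because per unit length its contribution to $\oint H\,dl$ is $\mu_r$ times larger than the iron's; this is the quantitative version of the flux-line argument in the question.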
{ "language": "en", "url": "https://physics.stackexchange.com/questions/540795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Finite barrier. Constant including minus or not? For a finite potential barrier of magnitude $V_0$ between $x=-a$ and $x=a$ we know that the time-independent Schrödinger equation is $\Psi'' +\frac{2m}{\hbar^2}E\Psi=0$ for $x<-a$. Let $E<V_0.$ Normally we set $k_1^2=\frac{2mE}{\hbar^2}$ and get $\Psi''+k_1^2\Psi=0$ which would give $$\Psi=A_1e^{ik_1x} + B_1e^{-ik_1x}.$$ But if we set $k_2^2=\frac{-2mE}{\hbar^2}$ we get $\Psi'' - k_2^2\Psi=0$ and the solution $$\Psi=A_2e^{k_2x} + B_2e^{-k_2x}.$$ Why is the second solution incorrect, while the first one is correct?
First, your Schrödinger equation seems to have some small problems. From $$ - \frac{\hbar^2}{2m}\Psi'' +V(x)\Psi = E\Psi$$ one gets: $$ \Psi'' + \frac{2m}{\hbar^2} (E-V(x)) \Psi =0. $$ For your barrier, and for $0<E<V_0$, one has, inside the barrier: $$ \Psi(x) = A e^{\kappa x} + B e^{-\kappa x},$$ where $\kappa=\sqrt{\frac{2m (V_0-E)}{\hbar^2}}$, and for $|x|>a$: $$ \Psi(x) = A e^{ikx} + B e^{-ik x},$$ with $k= \sqrt{\frac{2m E}{\hbar^2}}$. Hence your first solution is the only correct one, even if it came from a slightly wrong equation. Your problem is that if you use an assumption like $k^2=-K$ where $K$ is positive, you get an imaginary $k$, which converts real exponentials into complex ones and vice versa.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/540866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why do we need Gauss' laws for electricity and magnetism? The source of an electromagnetic field is a distribution of electric charge, $\rho$, and a current, with current density $\mathbf{J}$. Considering only Faraday's law and Ampere-Maxwell's law: $$ \nabla\times\mathbf{E}=-\frac{\partial\mathbf{B}}{\partial t}\qquad\text{and}\qquad\nabla\times\mathbf{B}=\mu_0\mathbf{J}+\frac{1}{c^2}\frac{\partial\mathbf{E}}{\partial t}\tag{1} $$ In an isolated system the total charge cannot change. Thus, we have the continuity equation that is related to conservation of charge: $$ \frac{\partial\rho}{\partial t}=-\nabla\cdot\mathbf{J}\tag{2} $$ From these three equations, if we take the divergence of both equations in $(1)$, and using $(2)$ in the Ampere-Maxwell's law, we can get the two Gauss' laws for electricity and magnetism: $$ \nabla\cdot\mathbf{B}=0\qquad\text{and}\qquad\nabla\cdot\mathbf{E}=\frac{\rho}{\varepsilon_0}\tag{3} $$ Therefore, the assumption of $(1)$ and $(2)$ implies $(3)$. At first glance, it could be said that we only need these three equations. Also, conservation of charge looks like a stronger condition than the two Gauss' laws (it's a conservation law!), but, as the article in Wikipedia says, ignoring Gauss' laws can lead to problems in numerical calculations. This is in conflict with the above discussion, because all the information should be in the first three equations. So, the question is, what is the information content of the two Gauss' laws? I mean, apart of showing us the sources of electric and magnetic field, there has to be something underlying that requires the divergence of the fields. If no, then, what is the reason of the inherently spurious results in the numerical calculations referred? (Also, I don't know what type of calculation is referred in the article.)
This is just an explicit example for @vadim's answer: Pick a function $f(\vec x)$, constant in time, such that $\Delta f =5$. Set $\vec B=\vec\nabla f$, $\vec E=\vec J=0$, $\rho=17$. Then Eqns. (1) and (2) are satisfied, but both equations in (3) are not.
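The counterexample can be verified mechanically; here is a small sympy check (my addition) using the concrete choice $f=\tfrac{5}{2}x^2$, so that $\Delta f = 5$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Rational(5, 2) * x**2                       # Laplacian of f is 5
B = sp.Matrix([sp.diff(f, v) for v in (x, y, z)])  # B = grad f, constant in time
E = sp.Matrix([0, 0, 0])
rho = 17                                           # charge density from the example

curl = lambda F: sp.Matrix([
    sp.diff(F[2], y) - sp.diff(F[1], z),
    sp.diff(F[0], z) - sp.diff(F[2], x),
    sp.diff(F[1], x) - sp.diff(F[0], y)])
div = lambda F: sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))

print(curl(E))           # zero -> Faraday's law holds (B is static)
print(curl(B))           # zero -> Ampere-Maxwell holds with J = 0 and static E
print(div(B), div(E))    # 5 and 0 -> both Gauss laws fail (rho = 17, div E = 0)
```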
{ "language": "en", "url": "https://physics.stackexchange.com/questions/540991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 2 }
Is it possible for a quantum system to evolve out of a determinate state of some observable before a measurement is made? On page 96 of his book, Griffiths explains that determinate states of some observable $Q$ are eigenfunctions of that operator. So if a particle starts out in that state it will continue to be in that state as long as a measurement of an observable is not being made. This is all well and good and seemed to make sense as I went through the book but then I encountered an example (not in Griffiths) of a spin $1$ particle which starts out in the spin state $(1,1)$ which is an eigenstate of $S_z$ and evolved out of that state (with the Hamiltonian being $H = kS_x$ where $k$ is a constant) when $t > 0$. But how can that be? We know that if a particle is in a determinate state it should remain in that state for all time unless a measurement is made. More generally I considered the following scenario: Suppose you have a definite state of angular momentum, call it $\psi(0)$ [we will consider it the initial condition which we need to evolve in time]. Then (suppose for simplicity) you can expand this definite state in terms of two eigenstates of the Hamiltonian so: $\psi(0) = aE_1 + bE_2$ But then to obtain the state at $t>0$ we just tack on the wiggle factor corresponding to each energy eigenstate and we can easily see that the state will evolve out of our initial angular momentum eigenstate! So again: how can that be, given that it was a determinate state of an observable? The conclusion I came to was that given any observable $Q$, it will have determinate states only if they are also energy eigenstates, i.e. only if $Q$ and $H$ are compatible observables in which case they will have a common set of eigenfunctions. But there's not even a hint of that in Griffiths who explicitly defines determinate states as eigenfunctions of observables regardless of whether or not they commute with $H$. So given an observable $Q$, get the eigenfunction and you're done: you got the determinate states. But that contradicts what I've stated above so I must be missing something.
It is a different application of the word determinate. Griffiths simply means that the result of a measurement is determinate if the state is an eigenfunction. That is not the same as determinism in time evolution.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/541063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is heat $\delta Q$ an exact differential for an isochoric process (ideal gases)? Generally speaking, heat and work are path-dependent, thus $\delta Q$ and $\delta W$ are not exact differentials. By the first law of thermodynamics, we know that $dU=\delta Q - \delta W$ but $\delta W=0$ for an isochoric process, which yields $dU=\delta Q$. Does this make heat an exact differential in this specific situation? Am I neglecting something? This sounds weird to me.
Let's assume that the system is in equilibrium throughout the isochoric process and closed. We say the process has to be quasi-static. Under these conditions the work done on the system is $$\delta W=-pdV+\sum_i y_i\,dX_i $$ where the $X_i$ represent different work variables, arising from different physical interactions. For example, the work done on a closed homogeneous system by electric or magnetic fields is $$ \delta W=-pdV+\vec{E}d\vec{P}+\vec{H}d\vec{M} $$ Let's assume the only work coordinate in question is the volume $V$ due to compression of the system. And let's furthermore assume that we only move in the thermodynamic subspace defined by $V=\textrm{const.}$ Then we indeed have $$\delta W=0$$ And by the first law of thermodynamics $$dU=\delta Q$$ As the LHS is integrable the RHS has to be as well. So the state function $Q$ exists for systems undergoing quasi-static processes on the subspace defined by $V=const.$ under the assumption that no other work coordinates are involved. But I don't see how this would be useful.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/541220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why does the range of this integral work out this way? I have a bit of trouble in finding the same limits for the integral in Eq. (17.111) from Peskin & Schroeder. We have something like $$ \int_0^1 dx' \int_0^1 dz f(x',z) \delta(x-zx').$$ Posing $y=zx'$, I find $$\begin{align} \int_0^1 dz \int_0^1 \frac{dy}{z} f\biggl(\frac{y}{z},z\biggr) \delta(x-y) &= \int_0^1 \frac{dz}{z} 1_{[0,z]}f\biggl(\frac{x}{z},z\biggr)\\ &= \int_0^z \frac{dz}{z} f\biggl(\frac{x}{z},z\biggr). \end{align}$$ Instead, P&S find $$ \int_x^1 \frac{dz}{z} f\biggl(\frac{x}{z},z\biggr).$$ I must have overlooked some property of the Delta distribution. Can someone point out my mistake?
The scaling property of the Dirac delta is $$\delta(\alpha x) = {1\over|\alpha|} \delta(x). $$ So you get $$ \int_0^1 dx' \int_0^1 dz f(x',z) \delta(x-zx') = \int_0^1 dz \int_0^z \frac{dy}{z} f({y\over z},z) \delta(x-y) $$ meaning that when you apply the delta function it only contributes for $x\leq z$, so the remaining $z$-integral runs from $x$ to $1$.
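If it helps, the limits can also be checked numerically by smearing the delta into a narrow Gaussian. This is my own sketch, with an arbitrary test function $f$ and an arbitrary value of $x$:

```python
import numpy as np

def f(xp, z):                       # arbitrary smooth test function f(x', z)
    return (1.0 + xp) * np.exp(z)

x, sigma = 0.3, 4e-3                # fixed x; width of the smeared delta
xp = np.linspace(0.0, 1.0, 4001)

def inner(z):                       # integral over x' at fixed z, delta -> narrow Gaussian
    delta = np.exp(-(x - z*xp)**2 / (2*sigma**2)) / (np.sqrt(2*np.pi) * sigma)
    return np.trapz(f(xp, z) * delta, xp)

zgrid = np.linspace(0.0, 1.0, 4001)
lhs = np.trapz([inner(z) for z in zgrid], zgrid)          # original double integral

zz = np.linspace(x, 1.0, 4001)
rhs = np.trapz(f(x/zz, zz) / zz, zz)                      # P&S form: z runs from x to 1

print(lhs, rhs)   # the two numbers agree (up to the delta-smearing error)
```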
{ "language": "en", "url": "https://physics.stackexchange.com/questions/541381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Fourier Optics - Impulse Response of Free Space from Fresnel Transfer Function I am currently reading the chapter "Fourier Optics" in the book "Fundamentals of Photonics" by Saleh and Teich. However, I am not able to follow one specific mathematical derivation. On page 111 the transfer function of free space is derived: $$ H(\nu_x, \nu_y) = \text{exp}(-j 2 \pi d \sqrt{\lambda^{-2} - \nu_x^2 - \nu_y^2}).$$ $d$ is the distance the light travels from the input plane to the output plane, $\lambda$ is the wavelength, and $\nu_x$ and $\nu_y$ are the spatial frequency components. After that this formula is simplified using the Fresnel approximation, for which it is assumed that the frequency components $\nu_x$ and $\nu_y$ in the input wave are much smaller than the system bandwidth $\lambda^{-1}$. The resulting approximated transfer function is $$ H_{\text{Fresnel}}(\nu_x, \nu_y) = \text{exp}(j \pi \lambda d (\nu_x^2 + \nu_y^2)) \cdot \text{exp}(-j k d).$$ This still makes sense to me, everything is fine so far. However, after that they derive the impulse response of the system by applying the inverse Fourier transform to the transfer function $H_{\text{Fresnel}}$. The resulting function is $$h(x,y) \approx \dfrac{j}{\lambda d} \cdot \text{exp}(-j k d) \cdot \text{exp}(-j k \dfrac{x^2+y^2}{2 d}).$$ And honestly, I have absolutely no idea how they arrive at that expression. The inverse Fourier transform is $$h(x, y) \approx \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} H_{\text{Fresnel}}(\nu_x, \nu_y) \cdot \text{exp}(-j 2 \pi (\nu_x x + \nu_y y)) d\nu_x d\nu_y.$$ A small note: for some reason they flipped the signs in the Fourier transform relative to the standard convention. So the core question is: how did they solve this integral? There is a correspondence table at the end of the book, but I have no clue how this should help. Kind regards
I think I was able to solve the problem by applying the same method as mentioned here. However, my solution still differs by a constant factor from the solution in the book, so maybe it is still not completely right. If you look at $h(x, y)$ one can easily see that it can be separated as $$h(x, y) = K \cdot f(x) \cdot f(y),$$ with $f(x) = e^{\dfrac{-j k x^2}{2 d}} = e^{j a x^2}, a = \dfrac{-k}{2 d}$ and $K = e^{-j k d}$. So if we know how to Fourier transform $f(x)$, the problem is more or less solved. If we differentiate $f(x)$, we get the following equation $$\dfrac{d f(x)}{dx} = f(x) \cdot 2 j a x.$$ Let's Fourier transform it using the known correspondences $$j \omega F(\omega) = \dfrac{d F(\omega)}{d \omega} 2ja.$$ This gives us $$\dfrac{d F(\omega)}{d \omega} = F(\omega) \cdot \dfrac{\omega}{2 a j}.$$ We can see that $$F(\omega) = \text{exp}(\dfrac{-j \omega^2}{4a})$$ is a solution of the differential equation. Now we can resubstitute $a$ and $k = \dfrac{2 \pi}{\lambda}$ $$F(\omega) = \text{exp}(\dfrac{j \omega^2 d \lambda}{4 \pi})$$ and with $\omega = 2\pi \nu$ we get $$F(\nu) = \text{exp}(j \pi \lambda d \nu^2).$$ Resubstituting everything gives us $$H(\nu_x, \nu_y) = \dfrac{j}{\lambda d} \text{exp}(-j k d) \text{exp}(j \lambda d \pi (\nu_x^2+\nu_y^2)).$$ For some reason the factor $\dfrac{j}{\lambda d}$ still does not come out right, but that's the best answer I could derive. Kind Regards
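For what it is worth, I believe the missing factor is just the constant that the ODE method cannot determine (a first-order ODE fixes $F(\omega)$ only up to a multiplicative constant). A sketch of my own showing how to recover it, using the book's sign convention and the standard complex Gaussian integral: $$\int_{-\infty}^{+\infty} \text{exp}(j \pi \lambda d \nu^2)\, \text{exp}(-j 2 \pi \nu x)\, d\nu = \text{exp}\left(-j \pi \frac{x^2}{\lambda d}\right) \int_{-\infty}^{+\infty} \text{exp}(j \pi \lambda d\, u^2)\, du = \sqrt{\frac{j}{\lambda d}}\, \text{exp}\left(-j k \frac{x^2}{2 d}\right),$$ using $\int_{-\infty}^{+\infty} e^{j a u^2}\, du = \sqrt{\pi/a}\, e^{j \pi/4}$ for $a>0$ and $e^{j\pi/4}=\sqrt{j}$. One factor of $\sqrt{j/(\lambda d)}$ appears for each transverse dimension, so the 2D transform of $H_{\text{Fresnel}}$ reproduces exactly the prefactor $\dfrac{j}{\lambda d}$ in $h(x,y)$.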
{ "language": "en", "url": "https://physics.stackexchange.com/questions/541467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Physics of Project Orion I was reading the book "Project Orion" by George Dyson. For those who are unaware, Project Orion was basically a plan to launch a spaceship by flinging bombs out the rear and detonating them. The plasma from the explosion would contact a "pusher plate", which was attached to shock absorbers, which themselves are attached to the main spaceship. The shock absorbers are supposed to turn the 10,000 $g$ sledgehammer from a nuclear bomb into a more manageable 2 $g$ acceleration so the crew doesn't liquify. Freeman Dyson says the "peak acceleration on top of a shock absorber is proportional to $\frac{v^2}{L}$ where $v$ is the change in velocity per bomb, and $L$ is the length of the shock absorber. I'd like to know where this formula came from, and I'd also like to know what the constant is. Is the constant just the "k" value of a spring?
I'd like to know where this formula came from... I am guessing it was an assumption about kinematics where one assumes constant acceleration. If this is valid, then we know: $$ V_{f}^{2} - V_{o}^{2} = 2 \ a \ \Delta x \tag{0} $$ where $V_{i}$ is the speed ($i$ = $f$ for final and $o$ for initial), $a$ is the acceleration, and $\Delta x$ is the displacement. In the example you gave, we would have $V_{o}$ = 0 so the acceleration would be roughly given by: $$ a \approx \frac{ V_{f}^{2} }{ 2 \ \Delta x } \tag{1} $$ ...and I'd also like to know what the constant is... For this, we just use Hooke's law and Equation 1 to show: $$ k \approx \frac{ 1 }{ 2 } \ m \ \frac{ V_{f}^{2} }{ \Delta x^{2} } \tag{2} $$ where $m$ is the mass of the object attached to a massless spring (this latter part is obviously not true, thus the $\approx$ instead of $=$). Notice that Equation 2 could also be obtained from energy conservation, up to a factor of order unity. Side note: I am hoping the project described some sort of damper on such a spring assembly, otherwise the spacecraft would be shaking like mad for a long time. Typically this would add a term $\propto - \gamma \tfrac{ \Delta x }{ \Delta t }$ to the simple harmonic oscillator equation.
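Purely to illustrate the $v^2/L$ scaling with made-up numbers (these are not the project's actual design values):

```python
# Illustrative (not historical) parameters: velocity kick per bomb and absorber stroke
v = 10.0      # m/s, assumed change in velocity per bomb
L = 1.0       # m, assumed shock-absorber stroke
g = 9.81      # m/s^2

a = v**2 / (2 * L)               # Equation 1 with Delta x = L
print(f"peak acceleration ~ {a:.0f} m/s^2 ~ {a/g:.1f} g")   # ~50 m/s^2 ~ 5 g
```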
{ "language": "en", "url": "https://physics.stackexchange.com/questions/542009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can a black hole eject plasma? This image from an online Italian newspaper shows photographs of one of the most powerful phenomena in the cosmos. Nothing, not even at the speed of light $c$, can escape a black hole once it has been caught. So how is it possible mathematically that a black hole, which "swallows" the stars and gas approaching its powerful accretion disk, can then eject some of the gas into two thin jets of plasma at speeds $V_{pl}$ close to the speed of light?
The black hole is defined by its event horizon. This is the point at which the escape velocity reaches $c$. But the accretion disc forms outside the event horizon, so stuff can still escape from it. It is this outer stuff that finds its way into the jets, super-accelerated beyond escape velocity by magnetic fields being dragged round the hole. If the escape velocity at the point of acceleration outside the black hole is $V_s$ then $c > V_{pl} > V_s$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/542174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to build a many-body state starting from single-particle states? Suppose that I have 3 non-degenerate single-partilce energy levels $E_1$, $E_2$, and $E_3$, each one associated to eigenstates $|\psi_1\rangle$, $|\psi_2\rangle$, and $|\psi_3\rangle$. How do you build the most general many-body state in the case particles are spinless fermions? How does the answer change if, instead of fermions, particles are bosons?
It is easier to do it in terms of wave functions than in the bra-ket notation. The one-particle states are $$|\psi_1\rangle\leftarrow\psi_1(x),|\psi_2\rangle\leftarrow\psi_2(x), |\psi_3\rangle\leftarrow\psi_3(x)$$. The two-particle states are antisymmetrized combinations of the pairwise products, i.e. $$|\psi_1\psi_2\rangle\leftarrow [\psi_1(x_1)\psi_2(x_2) - \psi_1(x_2)\psi_2(x_1)]/\sqrt{2},\\|\psi_1\psi_3\rangle\leftarrow [\psi_1(x_1)\psi_3(x_2) - \psi_1(x_2)\psi_3(x_1)]/\sqrt{2},\\|\psi_2\psi_3\rangle\leftarrow [\psi_2(x_1)\psi_3(x_2) - \psi_2(x_2)\psi_3(x_1)]/\sqrt{2}. $$ Finally, the three-particle state is antisymmetrized with respect to all the pairwise permutations. The compact representation is by the determinant $$|\psi_1\psi_2\psi_3\rangle\leftarrow \frac{1}{\sqrt{3!}}\left|\begin{matrix} \psi_1(x_1)&\psi_2(x_1)&\psi_3(x_1)\\ \psi_1(x_2)&\psi_2(x_2)&\psi_3(x_2)\\ \psi_1(x_3)&\psi_2(x_3)&\psi_3(x_3) \end{matrix}\right|.$$ Note that the bra-ket notation above is not the same as the bra-kets given in the question, but rather corresponds to the Fock space (second quantized representation). If we really want to construct the many-particle states from the original bras and kets, as a direct product, we need to supplement them with indices indicating which particle occupies the orbital, e.g., we could use notation $$|\psi_i\rangle \rightarrow |\psi_i\rangle_j.$$ Then the construction proceeds exactly as described above, e.g., two-particle states can be written as $$|\psi_{i,j}\rangle_{1,2}=\frac{1}{\sqrt{2}}\left(|\psi_i\rangle_1\otimes |\psi_j\rangle_2 - |\psi_j\rangle_1\otimes |\psi_i\rangle_2\right).$$
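To make the antisymmetry concrete, here is a small sympy sketch (my addition); the orbitals $\sin(n\pi x)$ are just stand-ins for $\psi_1,\psi_2,\psi_3$:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
xs = (x1, x2, x3)

def psi(n, x):                    # placeholder single-particle orbitals
    return sp.sin(n * sp.pi * x)

# Slater determinant |psi_1 psi_2 psi_3> with 1/sqrt(3!) normalization:
# rows labelled by coordinates x_i, columns by orbitals psi_j
M = sp.Matrix(3, 3, lambda i, j: psi(j + 1, xs[i]))
slater = M.det() / sp.sqrt(sp.factorial(3))

# Exchanging particles 1 and 2 flips the sign, as required for fermions
swapped = slater.subs({x1: x2, x2: x1}, simultaneous=True)
print(sp.simplify(swapped + slater))   # -> 0
```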
{ "language": "en", "url": "https://physics.stackexchange.com/questions/542335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Physics of the trikke tricycle I love my trikke, but I still do not understand what propels it forwards. It is very clear that the energy comes from my legs and not from my arms (I only have to touch the handle bar ever so lightly), but I do not see how my shifting weight from side to side can result in a forward pointing force. How is the side to side movement converted into a forward moving force? (And just to be clear: My trikke is not electric).
The front wheel provides a friction force perpendicular to the direction it is pointing. Since the wheel turns the trikke one way, the reactive force of the road on the wheel pushes the other way. Since this is perpendicular to the wheel, there is a component of forward force on the bike (or backwards as the case may be). As with swinging on a swing, your body (or the part of the brain responsible for controlling it) works out what to do. Not so easy to bring that knowledge to the level of conscious thought. You have to lean in to the turn of the trikke or you would fall off (like riding a bicycle). You turn the handlebar with your arms, but the force is transmitted through the frame of the trikke. Your legs supply the energy through shifting your weight.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/542899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
Nicholas Gisin's papers about Time in Physics I recently read the article from QuantaMagazine which says Over the past year, the Swiss physicist Nicolas Gisin has published four papers that attempt to dispel the fog surrounding time in physics. As Gisin sees it, the problem all along has been mathematical I tried searching for these four papers but not able to find them. If anyone could provide DOI for all these papers, it would be of great help for me.
Here are the papers listed with their preprint links: * *Indeterminism in Physics, Classical Chaos and Bohmian Mechanics: Are Real Numbers Really Real? - linked here *Real numbers are the hidden variables of classical mechanics - linked here *Physics without Determinism: Alternative Interpretations of Classical Physics - linked here *Classical and intuitionistic mathematical languages shape our understanding of time in physics - linked here
{ "language": "en", "url": "https://physics.stackexchange.com/questions/542992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How can a photon collide with an electron? Whenever I study the photoelectric effect and the Compton effect, I have always had a question about how a photon can possibly collide with an electron given their unmeasurably small size. Every textbook I've read says that the photo-electrons are emitted because the photons collided with them. But since photons and electrons have virtually no size, how can they even collide? I have searched for the answer on the internet but I couldn't find any satisfying one.
It is very important to understand that you are asking about the absorption of a photon. Now if you try to imagine this as a classical collision of two balls, that is just not correct. You are confused because you think the photon needs to collide head on with a specific electron to get absorbed. What is correct to say is that the whole QM system, the atom/electron system, absorbs the photon. Now you say that the electron that collides head on will absorb the photon. Let's take an atom with multiple electrons that are all able to absorb photons and move to higher energy levels. What is correct to say is that the electron that will absorb the photon and move to a higher orbital will be the one that has an available energy gap that matches the energy of the photon. So these two QM entities, the photon (though the photon does not have a strict position observable) and the electron, both have a probability distribution of being at certain places, and you are saying that if they collide head on, the electron will absorb the photon. In reality, the atom/electron system will absorb the photon, and the specific electron that will move to a higher energy level will be the one that has an available energy gap that matches the photon's energy.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/543056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38", "answer_count": 9, "answer_id": 6 }
Existence conditions for completely positive trace-preserving (CPTP) map Given two separable Hilbert spaces $\mathcal{H}_1$ and $\mathcal{H}_2$, I am wondering: what are the necessary and sufficient conditions for there to be a completely positive trace-preserving (CPTP) map $\Phi:B(\mathcal{H}_1)\to B(\mathcal{H}_2)$?
The map $\Phi(\rho) = \mathrm{tr}(\rho)\sigma$, for some state $\sigma\in B(\mathcal H_2)$ with $\mathrm{tr}(\sigma)=1$ is CPTP and exists for any pair of separable Hilbert spaces. (As always for these questions, let me advertise my list of canonical examples for quantum channels.)
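As a concrete check that this "replace with $\sigma$" channel is CPTP for arbitrary (finite) input and output dimensions, here is a small numpy sketch of my own that builds its Choi matrix and tests complete positivity and trace preservation:

```python
import numpy as np

d_in, d_out = 2, 3                       # arbitrary input/output dimensions
rng = np.random.default_rng(0)

# A fixed output state sigma (here a random density matrix, for illustration)
A = rng.normal(size=(d_out, d_out)) + 1j * rng.normal(size=(d_out, d_out))
sigma = A @ A.conj().T
sigma /= np.trace(sigma)

def channel(rho):                        # Phi(rho) = tr(rho) * sigma
    return np.trace(rho) * sigma

# Choi matrix J = sum_{ij} |i><j|_in (x) Phi(|i><j|)
J = np.zeros((d_in * d_out, d_in * d_out), dtype=complex)
for i in range(d_in):
    for j in range(d_in):
        E = np.zeros((d_in, d_in), dtype=complex); E[i, j] = 1.0
        J += np.kron(E, channel(E))

# Complete positivity <=> J is positive semidefinite
print(np.all(np.linalg.eigvalsh(J) > -1e-12))
# Trace preservation <=> partial trace of J over the output equals the identity
ptr = np.einsum('ikjk->ij', J.reshape(d_in, d_out, d_in, d_out))
print(np.allclose(ptr, np.eye(d_in)))
```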
{ "language": "en", "url": "https://physics.stackexchange.com/questions/543468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Fourier transform of Fermi function As an alternative approach to the Sommerfeld-expansion, my lecturer tries to motivate properties of free fermions, such as temperature dependencies of the chemical potential $\mu(T)$, electron number $N_e(T)$, energy density $U(T)$, etc. by expanding the Fourier transform of the Fermi function for low temperatures: $$\int d\epsilon g(\epsilon)f(\epsilon)=\int d\epsilon g(\epsilon)\int dt \tilde{f}(t)e^{-i\epsilon t}\\ \text{where}~ \tilde{f}(t)=\frac{e^{i\mu t}}{2\pi i}\left(\pi i \delta (t)+\frac{1}{t}\frac{\pi t /\beta}{\sinh(\pi t /\beta)}\right)\\ \text{and} ~ g(\epsilon) ~\text{some arbitrary well-behaved function}. $$ I have only come so far to calculate the Fourier transform $\tilde{f}(t)$ of $f(\epsilon)=\frac{1}{(e^{\beta(\epsilon-\mu)}+1)}$: $$\tilde{f}(t)=\frac{1}{2\pi}\int_{-\infty}^{\infty} d\epsilon \frac{e^{i\epsilon t}}{e^{\beta(\epsilon-\mu)}+1}=\frac{e^{i\mu t}}{2\pi \beta}\int dx \frac{e^{ixt/\beta}}{e^x+1}=\dots= \frac{e^{i\mu t}}{2\pi i}\left(\pi i \delta (t)+\frac{1}{t}\frac{\pi t /\beta}{\sinh(\pi t /\beta)}\right)$$ I have used $x=\beta(\epsilon-\mu)$. Can anybody give me hints for the calculation steps in between?
I have in my notes a related Laplace transform: $$ I=\int_{-\infty}^{\infty} \frac{d\epsilon}{2\pi} e^{\tau\epsilon/2\pi} \left\{ \frac{1}{1+e^{\beta(\epsilon-\mu)}}-\theta(-\epsilon)\right\}= \frac 1{\tau}\left\{ \frac{(\frac{\tau T}{2})}{\sin(\frac{\tau T}{2})} e^{\tau\mu/2\pi}-1\right\}, \quad 0<\tau T/2\pi< 1. $$ I evaluated it as follows: $$ I=\int_0^\infty \frac{d\epsilon}{2\pi} \left\{\frac{e^{\tau\epsilon/2\pi}}{1+e^{\beta(\epsilon-\mu)}}+\frac{e^{-\tau\epsilon/2\pi}}{1+e^{\beta(-\epsilon-\mu)}}- e^{-\tau\epsilon/2\pi}\right\}\\ = \int_{-\infty}^\infty \frac{d\epsilon}{2\pi} \left\{\frac{e^{\tau\epsilon/2\pi}}{1+e^{\beta(\epsilon-\mu)}}\right\}-\frac 1\tau\\ = e^{\mu\tau/2\pi} \int_{-\infty}^\infty \frac{d\epsilon}{2\pi} \left\{\frac{e^{\tau(\epsilon-\mu)/2\pi}}{1+e^{\beta(\epsilon-\mu)}}\right\}-\frac 1\tau\\ = e^{\mu\tau/2\pi} T\int_{-\infty}^\infty \frac{d\xi }{2\pi} \left\{\frac{e^{\xi T\tau/2\pi}}{1+e^\xi}\right\}-\frac 1\tau\\ = e^{\mu\tau/2\pi} T \int_{0}^\infty \frac{dx }{2\pi}\frac{x^{T\tau/2\pi-1}}{1+x} -\frac 1\tau\\ = \frac 1{\tau}\left\{ \frac{(\frac{\tau T}{2})}{\sin(\frac{\tau T}{2})} e^{\tau\mu/2\pi}-1\right\}. $$ I set $x=\exp\{\beta(\epsilon-\mu)\}$ and at the last step used the standard integral $$ \int_0^\infty dx \frac{x^{\alpha-1}}{1+x}= \frac{\pi}{\sin\pi \alpha}, \quad 0<\alpha<1. $$ There may be an easier way! The integral is interesting because it seems to be related to the generating function for the $\hat A$ genus that appears in the Dirac index theorem. I learned this from a paper of Loganayagam and Surówka: arXiv:1201.2812
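Since it is easy to slip up with the limits and convergence conditions here, the closed form can be sanity-checked numerically; a short sketch of my own, with arbitrary parameter values satisfying $0<\tau T/2\pi<1$:

```python
import numpy as np
from scipy.integrate import quad

T, mu, tau = 1.0, 0.4, 2.5           # arbitrary values with 0 < tau*T/(2*pi) < 1
beta = 1.0 / T

def integrand(e):
    fermi = 1.0 / (1.0 + np.exp(beta * (e - mu)))
    theta = 1.0 if e < 0 else 0.0     # theta(-e) subtraction makes the integral converge
    return np.exp(tau * e / (2 * np.pi)) * (fermi - theta) / (2 * np.pi)

# the integrand is exponentially small beyond |e| ~ 60, so finite limits suffice
numeric = quad(integrand, -60, 0)[0] + quad(integrand, 0, 60)[0]
closed = ((tau * T / 2) / np.sin(tau * T / 2) * np.exp(tau * mu / (2 * np.pi)) - 1.0) / tau
print(numeric, closed)                # the two values should agree closely
```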
{ "language": "en", "url": "https://physics.stackexchange.com/questions/543917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is a photon reflected, transmitted or in a superposition? When a photon hits a half-silvered mirror, quantum mechanics says that rather than being reflected OR transmitted it enters into a superposition of transmitted AND reflected (until a measurement takes place). Is there an experiment that demonstrates that this is actually the case and that the photon didn't end up with a single outcome all along? In other words, is the superposition view just a hypothesis that can't be proven either way?
There is a very nice experiment that does this for 2 photons: it is the Hong-Ou-Mandel experiment: C. K. Hong; Z. Y. Ou & L. Mandel (1987). "Measurement of subpicosecond time intervals between two photons by interference". Phys. Rev. Lett. 59 (18): 2044–2046. For simplicity, consider two photons simultaneously entering a beam splitter described by the unitary matrix \begin{align} U=\left(\begin{array}{cc} U_{11}&U_{12} \\ U_{21}&U_{22}\end{array}\right)\, . \end{align} Each photon of the two-photon input state $a_1^\dagger a_2^\dagger \vert 0\rangle$ is then scattered into a superposition \begin{align} a_1^\dagger &\to a^\dagger_1 U_{11} + a^\dagger_2 U_{21}\, ,\\ a_2^\dagger &\to a^\dagger_1 U_{12} + a^\dagger_2 U_{22}\, \end{align} so the output state is a product of superpositions: \begin{align} \left(a^\dagger_1 U_{11} + a^\dagger_2 U_{21}\right) \left(a^\dagger_1 U_{12} + a^\dagger_2 U_{22}\right)\vert 0\rangle\, . \tag{1} \end{align} The experiment then measures the rate at which photons are counted in different detectors, i.e. it excludes from the total amplitude (1) terms in $a^\dagger_1a^\dagger_1$ and $a^\dagger_2a^\dagger_2$. The count rate is then proportional to \begin{align} \vert U_{11}U_{22}+U_{12}U_{21}\vert^2\, , \tag{2} \end{align} and thus detects the interference between the paths. A model where the photons did not end up in a superposition would not produce a sum of products of terms. In the original experiment, HOM used a $50/50$ beam splitter and the relative phase upon reflection leads to $\vert U_{11}U_{22}+U_{12}U_{21}\vert^2=0$: basically the paths destructively interfere. HOM also controlled a relative time delay between the photon pulses by adjusting the position of the beamsplitter in their setup, and the 0-rate only occurs when the pulses perfectly overlap so the photons are exactly indistinguishable. You can find more details with full time delays and various pulse shapes in this paper: Brańczyk, Agata M. "Hong-ou-mandel interference." arXiv preprint arXiv:1711.00080 (2017). The quantity $U_{11}U_{22}+U_{12}U_{21}$ is actually the permanent of the scattering matrix $U$. It need not be $0$ in general but happens to be $0$ for the $50/50$ beam splitter. The notion of permanent is defined for an $n\times n$ matrix, and is at the core of the BosonSampling proposal to show how a (single purpose) quantum computer could outperform a classical computer. As additional material prompted by comments: Assuming for simplicity Gaussian pulses of unit width with maxima separated in time by $\tau$, the rate is given by \begin{align} \textstyle\frac{1}{2}(1+e^{-\tau^2})\vert U_{11}U_{22}+U_{12}U_{21}\vert^2 +\textstyle\frac{1}{2}(1-e^{-\tau^2})\vert U_{11}U_{22}-U_{12}U_{21}\vert^2\, . \end{align} Thus, for exact overlap, $\tau=0$ and only the first term remains. In a 50/50 beam splitter the combination $U_{11}U_{22}+U_{12}U_{21}=0$ so the rate is exactly $0$. For partial overlap and a 50/50 beam splitter, one is left with the second term, which contains the determinant of the scattering matrix. If the scattering matrix is unitary, this determinant is of magnitude 1 so the rate is basically given by $\sim (1-e^{-\tau^2})$, going smoothly to $0$ as $\tau\to 0$.
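For concreteness, here is a small numerical sketch (my addition, not from the HOM paper) that plugs one common convention for the 50/50 beam-splitter matrix into the rate formula above and shows the dip as a function of the delay $\tau$:

```python
import numpy as np

# 50/50 beam splitter (one common convention for the unitary)
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

perm = U[0, 0]*U[1, 1] + U[0, 1]*U[1, 0]   # permanent: 0 for this U
det = U[0, 0]*U[1, 1] - U[0, 1]*U[1, 0]    # determinant: magnitude 1

# Coincidence rate vs. delay tau for unit-width Gaussian pulses (formula above)
tau = np.linspace(-4, 4, 9)
rate = 0.5*(1 + np.exp(-tau**2))*abs(perm)**2 + 0.5*(1 - np.exp(-tau**2))*abs(det)**2
print(rate)   # -> 0 at tau = 0 (perfect overlap), -> 1/2 for large |tau|
```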
{ "language": "en", "url": "https://physics.stackexchange.com/questions/544061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Fourier Analysis for Physicists My professor wanted me to master these topics from Fourier Analysis. I need a resource where these topics are discussed in brief. Although I know many of the topics in the list, I prefer a good resource to brush up my rusty knowledge and learn what I don't know. The topics are: * *Fourier series: sin and cos as a basis set; calculating coefficients; complex basis; convergence, Gibbs phenomenon *Fourier transform: limiting process; uncertainty principle; application to Fraunhofer diffraction *Dirac delta function: Sifting property; Fourier representation *Convolution; Correlations; Parseval's theorem; power spectrum *Sampling; Nyquist theorem; data compression *Solving Ordinary Differential Equations with Fourier methods; driven damped oscillators *Green's functions for 2nd order ODEs; comparison with Fourier methods *Partial Differential Equations: wave equation; diffusion equation; Fourier solution *Partial Differential Equations: solution by separation of variables *PDEs and curvilinear coordinates; Bessel functions; Sturm-Liouville theory: complete basis set of functions
* *If you're into solving a lot of examples and gathering some intuition, I recommend Schaum's outline series. They have nice solved examples. (https://www.amazon.com/Schaums-Analysis-Applications-Boundary-Problems/dp/0070602190) *If you are into more technical mathematical stuff, here is a textbook I used. (https://www.amazon.com/Introduction-Fourier-Analysis-Russell-Herman/dp/1498773702) *For a great way to learn about DFTs and signal processing in general, I recommend going through some coding problems; in that case, technical notes from NI and some coding textbooks helped a lot. (https://www.ni.com/ko-kr/innovations/white-papers/06/using-fast-fourier-transforms-and-power-spectra-in-labview.html)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/544322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why does a rubber band become a lighter color when stretched? I was stretching a pink colored rubber band, and I noticed that the longer I stretch it, the lighter the pink becomes. I haven't found answers to this question anywhere else. Is there a reason for this phenomenon? Why does this happen?
Colour can come from pigment particles embedded in the translucent rubber matrix absorbing light. When you pull the band the particles become separated by a longer distance, but being themselves inelastic remain the same size. Hence the amount of absorption per unit area decreases, and the band becomes lighter in color. (Figure: simulated rubber band with pigment particles embedded in the matrix; as it is extended it becomes more translucent.) Rubber bands are also incompressible ($\nu=1/2$) so the volume is largely unchanged by pulling. This has the effect of reducing the cross section, further reducing absorption.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/544445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "85", "answer_count": 6, "answer_id": 0 }
"A spinning top spins much longer because it experiences less frictional torque" is wrong? The above quote was found in my physics textbook, but it struck me as strange because my understanding of friction is that the surface area doesn't matter in calculating the amount of frictional force. Another question that asked a similar thing on stackexchange was answered basically by saying that a spinning top with a narrow point spins better and longer because of "precession"? Why does a top spin so well? So my question is: is the above statement just flat out wrong? Is the reason it spins much longer not because of torque, but because of other properties of a narrow point?
There are two types of friction, static and kinetic friction. Imagine trying to push a table across a carpet. Initially you need to generate some force to get the table to move, but once the table starts moving you may feel it's easier to push it further. The type of friction that is important for your spinning top example would be the kinetic friction, because it resists the spinning motion of your top, similarly to when you feel some resistance while pushing a table across a carpet. If you let go of the top after you give it some initial velocity, the friction force will cause a torque opposing the top's spinning direction, which will cause the angular velocity of the top to decrease. Remember that friction is caused between materials because they are pretty rough and jagged at a very small scale, even a perfectly flat glass table may be very rough when you look at it with a powerful microscope. The spinning top's point is also not perfectly smooth so there will inevitably be friction between the top and the glass. However, if you would spin the top on a much rougher material such as a sponge or in sand, the effects are amplified and the top will slow down much faster.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/544682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Wavefunction of a photon Does anyone have an explicit closed-form expression for the wavefunction of a single photon from a multipolar source propagating through free space? Any basis is acceptable as long as it is a single photon state. A reference would also be appreciated, but not essential. ———————— A possible duplicate has been suggested: Does a photon have a wave function or not? But this question primarily concerns the existence of the wavefunction and is not what I am looking for. None of the answers provide an explicit expression for the wave function, and neither the question nor the answers discuss a multipole source. The multipole source, in particular, is central to my question.
I define the photon wave function in a covariant formulation which has four polarisation states, two of which are not observable. Some authors use only transverse states, but the other two states would appear on Lorentz transformation, and they appear to be necessary to derive the classical correspondence correctly. For momentum $p=(p^0,\mathbf p)$, define a longitudinal unit 3-vector, $$\mathbf w(\mathbf p,3) = \frac {\mathbf p} {|\mathbf p |} $$ and orthogonal transverse unit 3-vectors $\mathbf w(\mathbf p,1),\mathbf w(\mathbf p,2)$ such that for $r,s= 1,2,3$ $$ \mathbf w(\mathbf p,r) \cdot \mathbf w(\mathbf p,s) = \delta_{rs} $$ Define normalised spin vectors, $$ w(\mathbf p,0) = (1,\mathbf 0)$$ $$ w(\mathbf p,r) = (0,\mathbf w(\mathbf p,r))$$ For momentum $p$ a photon plane wave state is given by the wave function, $$\langle x|p,r\rangle = \lambda(|\mathbf p |,r) w(\mathbf p,r) e^{-ix \cdot p} $$ where $p^2 = 0$ and $ \lambda $ is determined by relativistic considerations $$ \lambda(|\mathbf p|, r) = \frac 1{(2\pi)^{3/2}} \frac 1{\sqrt{2p^0}} $$ You can then express a photon wave function as an integral $$f^a(x) = \frac 1{(2\pi)^{3/2}} \sum\limits_{r=0}^3 \int \frac{d^3\mathbf p}{\sqrt{2p^0}} w(\mathbf p,r) e^{-ix \cdot p} \langle \mathbf p, r|f\rangle$$ I took this from lecture notes at Cambridge, and I am not sure which books do things much the same way (there are some normalisation choices, as well as choice of gauge). I have given more detail in A Construction of Full QED Using Finite Dimensional Hilbert Space and in The Mathematics of Gravity and Quanta
{ "language": "en", "url": "https://physics.stackexchange.com/questions/545184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Inverse of a metric tensor on a Hermitian manifold Let $(M, g)$ be a Hermitian manifold. We have a metric tensor $g_{i \bar j}\, dz^i \otimes d\bar{z}^j$, where $(g_{i \bar j})$ is a Hermitian positive definite matrix. Now we naturally get the inverse of the metric $(g^{i \bar j})$. I have been told that being inverse to each other would imply $g^{p \bar k} g_{q \bar k} = \delta_{pq}$, which makes no sense to me. I think matrix multiplication should give us $g^{p \bar k} g_{k \bar q} = \delta_{pq}$.
Metric tensors are usually assumed to be symmetrical, i.e. $g_{\mu\nu} = g_{\nu\mu}$, so \begin{equation} g_{\mu\nu}g^{\nu\epsilon} = g_{\nu\mu}g^{\nu\epsilon} = \delta_\mu^{\ \ \epsilon} \end{equation} The symmetry is due to the fact that the metric is used to compute the line element $ds^2$ and the following holds \begin{equation} ds^2 = g_{\mu\nu}dx^\mu dx^\nu = g_{\nu\mu} dx^\nu dx^\mu \end{equation} (I used the Einstein convention on repeated indices).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/545331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can Higgs potential provide a cosmological constant? Usually, in particle physics, people do not care about a constant term in scalar field potential. Rather, attentions are paid to the local profile at the minimum. But in the context of cosmology, the absolute value of the potential has a physical meaning; it is a cosmological constant and can cause the Universe to accelerate or decelerate. My impression is that the naive potential for the Higgs field has a negative value at the minima. Do people take it seriously as a negative cosmological constant? Is the dynamical change in the value of the potential at the minimum during EWSB taken into account?
The cosmological constant is part of the mathematical framework of General Relativity. The Higgs field is part of the mathematical framework of quantum field theory and particle physics. At the moment there is only effective quantization of gravity, used in cosmological models, and yes, if you google "Higgs field and the cosmological constant" a number of publications come up, so people are examining the possible relation. A real proof will come only when/if a unifying theory of everything embedding the standard model of particle physics and gravity is found. In this answer I give a link to a related question, on how string theories, which include both the standard model and quantization of gravity, may deal with the problem.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/545509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Confusion in understanding the Lambertian law Lambertian law states that the luminous intensity of light emitted from a perfectly diffusing surface is proportional to $\cos(\theta)$, where $\theta$ is the angle between the surface normal and the direction of observation. I understand it in the following way. If we observe a surface that scatters light equally, the light power per unit area should be the same for all observation angles. Since the observed area decreases with the angle, the scattered light power should also decrease. However, if I think about it in terms of a number of photons I am confused. I can imagine a perfectly diffusing surface as a surface with a lot of very small light sources, each producing a certain constant number of photons per solid angle in all directions. It means that if I place some photodetector at different observation angles I should always measure the same number of photons because it should equal $N_{sources}*n$, where $N_{sources}$ is the number of light sources within the surface and $n$ is the number of photons emitted into a given observation angle. However, according to the Lambertian law, I should observe a drop in the number of photons as the observation angle increases. Please, help me with the confusion. Probably, my problem is terminology. Being a laser physicist I am not used to definitions such as luminous intensity, because for me intensity is energy divided by the illumination area.
each producing a certain constant number of photons per solid angle in all directions. This would be non-Lambertian emission. In Lambertian scattering/reflection most rays are emitted along the normal and fewer as we approach large angles.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/545741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Light’s Behavior in a Rotating Reference Frame Let’s say there is a laser on one side of a very large rotating table, and its beam is shining onto a target on the other side of this table. The target is equipped with a very sensitive buzzer that will sound if the laser moves off of the target. Here’s what I think will happen and why: During acceleration the laser will move off of the target and the buzzer will sound. Once a constant speed is maintained the buzzer will continue to sound. My logic for this is rotating frames are not inertial frames. They are always accelerating. Someone on the table could tell they are moving without looking beyond the table simply by placing a ball onto the table and watching it roll off. The Sagnac Effect influences my answer too. Rotating mirrors can move toward or away from a beam of light to shorten or lengthen its path. The speed of light is constant. In order for the beam to stay on the target it would need to follow a curved path which is longer than a straight path. Is my logic faulty??? Will the buzzer buzz???? Please set me straight. Thanks.
As it will take time for the photons to reach the sensor after leaving the emitter, once the sensor starts moving away from where the emitter was originally pointed, the photons will no longer hit the same point on the sensor. So the buzzer will sound until the table stops spinning. This will happen in your spinning table situation, or in any situation where the sensor moves in any direction other than directly away from the emitter.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/545889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Mach cone geometry from Mach number Given a Mach number, how would I go about determining the geometry of the associated Mach cone? Apologies, I'm not too well versed in physics.
While @R.W.Bird's answer is absolutely correct, I will complement it with a more graphical explanation. Consider an airplane flying with speed $v$, and the spherical sound waves spreading from the airplane with the speed of sound $c$. You see, when $v>c$ (i.e. when the airplane is faster than sound), then the sound waves form a cone with the airplane at the tip of the cone. During a time interval $t$ the airplane flies a distance $vt$. In the same time a spherical sound wave grows to radius $ct$. To calculate the cone angle $\theta$ consider the right-angled triangle whose hypotenuse is the distance $vt$ flown by the airplane and whose side opposite $\theta$ is the wave radius $ct$. From the geometric definition of the sine function you get $$\sin\theta=\frac{ct}{vt} =\frac{c}{v}. \tag{1}$$ On the other hand the Mach number is defined as $$M=\frac{v}{c}. \tag{2}$$ By comparing (1) and (2) you can conclude $$\sin\theta=\frac{1}{M}$$
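In code form the geometry is essentially one line; a trivial sketch I am adding for completeness:

```python
import math

def mach_cone_half_angle(M):
    """Half-angle of the Mach cone (radians) for Mach number M > 1."""
    if M <= 1:
        raise ValueError("A Mach cone only forms for supersonic motion, M > 1")
    return math.asin(1.0 / M)

for M in (1.2, 2.0, 5.0):
    print(M, math.degrees(mach_cone_half_angle(M)))   # ~56.4, 30.0, 11.5 degrees
```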
{ "language": "en", "url": "https://physics.stackexchange.com/questions/546413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is the term to describe when pressure exerted between two objects is balanced? I'm searching for a term here. All materials compress (some more than others). Newton's third law states: ...all forces between two objects exist in equal magnitude and opposite direction: if one object A exerts a force FA on a second object B, then B simultaneously exerts a force FB on A, and the two forces are equal in magnitude and opposite in direction: FA = −FB So for example, if a rubber ball is placed on top of a sponge, both would feel a "constant" force exerted on each other (in this case due to gravity). Now obviously the sponge would compress more, while the rubber ball would hardly compress. What is the term to denote that the force applied by and to each of these objects results in a balance of compression? I'm not even sure if balance is the right word to describe this. I'm trying to describe that the compression of each object will no longer increase or decrease. The closest term I conjured up was "equilibrium of pressure."
Newton’s third law does not tell us what the effect of the equal and opposite forces is on each of the bodies. Newton’s second law applied to each of the bodies individually tells us what, if any, acceleration each experiences based on the net external force applied to each. The mechanics of deformable solids helps us determine the deformation each body experiences as a result of the force it experiences. Hope this helps
{ "language": "en", "url": "https://physics.stackexchange.com/questions/546514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Decoupled Temperature for photons: why is it 0.25 $\rm eV$ rather than 13.6 $\rm eV$? When calculating the decoupled temperature of photons using the Saha equation for the following process: \begin{equation} e^- p\longleftrightarrow H\gamma \end{equation} we find that $T_{dec}=3000$ K$=0.25$ eV. From my understanding, this phenomenon happens when it becomes thermodynamically favourable for protons and electrons to combine into neutral atoms. I was expecting it to be 13.6 eV (Rydberg energy) for this case, which is hydrogen's binding energy. Why is it less than that?
This is because there are vastly more photons than charge carriers per unit volume, roughly 10 billion photons for every electron in the universe. As an example, consider the situation when the universe cooled to a temperature of 1 eV, or around 10,000 K. At this temperature, electrons are no longer relativistic and their density follows the Boltzmann distribution, $$n_e = 2\left(\frac{m_e T}{2\pi}\right)^{3/2} \exp \left(\frac{\mu_e - m_e}{T}\right).$$ At $T = 10^4$ K, the electron density is $n_e \approx 10^4 \,{\rm cm}^{-3}$. Meanwhile, the number density of photons with an energy in excess of 13.6 eV can be found by integrating the Planck spectrum, $$n_\gamma = \frac{1}{\pi^2}\int^\infty_{13.6}\frac{E^2}{\exp(E/T)-1}\, {\rm d}E,$$ giving $n_\gamma \approx 3 \times 10^9 \, {\rm cm}^{-3}$ at $T = 10^4$ K. In other words, there are around $3\times 10^5$ times more photons than electrons per unit volume with energy greater than 13.6 eV! At these temperatures, there is no shortage of energetic photons available to re-ionize neutral hydrogen once it forms.
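The photon number density quoted above can be checked with a short numerical integration (my addition); following the answer, the temperature is taken as $T=1$ eV:

```python
import numpy as np
from scipy.integrate import quad

T = 1.0                      # temperature in eV (roughly 1.2e4 K)
hbar_c = 1.97327e-5          # eV*cm, converts a density in eV^3 to cm^-3

# n_gamma(E > 13.6 eV) = (1/pi^2) * integral of E^2/(exp(E/T)-1), in natural units
integral, _ = quad(lambda E: E**2 / (np.exp(E / T) - 1.0), 13.6, np.inf)
n_gamma = integral / np.pi**2 / hbar_c**3
print(f"n_gamma(E > 13.6 eV) ~ {n_gamma:.1e} cm^-3")    # of order 10^9
```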
{ "language": "en", "url": "https://physics.stackexchange.com/questions/546668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
A question about conformal time I have a function to calculate the Hubble parameter at a given redshift: $$H(z)=\sqrt{\Omega_R(1+z)^4+\Omega_m(1+z)^3+\Omega_k(1+z)^2+\Omega_{\Lambda}}$$ And I have another function to calculate the conformal time between two redshifts: $$\eta(z_1,z_0)=\int_{z_1}^{z_0}\frac{1}{H(z)}dz$$ So now I want to calculate the particle horizon at the time of recombination. I calculate $$D_{PH}=c\space \eta(z_{CMB},\infty )$$ Have I just calculated the particle horizon at $t_{CMB}$ as it would be measured today after the expansion (comoving), or have I calculated the particle horizon as it was at $z_{CMB}$? I want the actual (proper) particle horizon as measured by an observer 380,000 years after the big bang. Do I divide the value returned by the $\eta$ function by $z_{CMB}$? Another question is: is it even valid to integrate between $z_1$ and $z_0$? The reference formula I have shows only integration from 0 (present time) to the given redshift.
I am only familiar with your first equation, except my experience is in using $$a(t)= \frac {1} {1+z(t)}.$$ Your equation seems to have omitted the $H_0$ value as a coefficient of the square-root.
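A minimal numerical sketch of this calculation, assuming flat-ΛCDM parameter values that are purely illustrative ($H_0 = 70$ km/s/Mpc, $\Omega_m = 0.3$, $\Omega_\Lambda \approx 0.7$, $\Omega_R \approx 9\times10^{-5}$; these are my assumptions, not numbers from the question): integrating $c/H(z)$ from $z_{CMB}$ out to very large redshift gives the comoving particle horizon at recombination, and dividing by $(1+z_{CMB})$ (not by $z_{CMB}$) converts it to the proper distance an observer living at that epoch would measure.

```python
# Sketch: comoving and proper particle horizon at recombination (flat LCDM).
# All parameter values are illustrative assumptions, not taken from the question.
import numpy as np
from scipy.integrate import quad

c = 299792.458                        # speed of light, km/s
H0 = 70.0                             # Hubble constant, km/s/Mpc (assumed)
Om, Orad = 0.3, 9e-5                  # assumed matter and radiation densities
OL = 1.0 - Om - Orad                  # flat universe, Omega_k = 0
z_cmb = 1090.0

def H(z):
    """Hubble parameter in km/s/Mpc, including the H0 prefactor."""
    return H0 * np.sqrt(Orad*(1+z)**4 + Om*(1+z)**3 + OL)

# Conformal "distance" from z_cmb back to the big bang (z -> infinity).
eta, _ = quad(lambda z: 1.0 / H(z), z_cmb, np.inf)

D_comoving = c * eta                  # Mpc, comoving (as measured today)
D_proper = D_comoving / (1.0 + z_cmb) # Mpc, proper at the time of recombination

print(f"comoving particle horizon at recombination ~ {D_comoving:.0f} Mpc")
print(f"proper particle horizon at recombination   ~ {D_proper:.3f} Mpc")
```

Integrating between two finite redshifts $z_1$ and $z_0$ is perfectly valid as well; it just gives the conformal time (comoving distance) elapsed between those two epochs rather than since the big bang.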
{ "language": "en", "url": "https://physics.stackexchange.com/questions/546780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Classic Man on a Boat problem To be clear I have indeed reviewed the question asked by helios321 (Classic man on boat problem). But I have something else to ask related to the man on a boat problem. The man on a boat problem goes like this: A man is standing on one side of a boat and the boat is stationary. We ignore friction between water and boat (and air friction). Thus there are no external forces on the man+boat system. So momentum is conserved, and centre of mass does not move. (Copied from helios321's post) I know that if the man moves to the other side of the boat the boat moves in the opposite direction. But what I don't understand is: Let the boat move $x$ m to the left and the man $(L-x)$ m to the right ($L$ is the length of the boat); then how can we say that $M_{man}(L-x) = M_{boat}(x)$?
As the man begins to move, the boat begins to move in the opposite direction. So when the man has moved, say, forward with respect to the boat, the boat meanwhile has drifted backwards. If one calculated their center of mass it would be at the same place as before, and if one summed up the momentum vectors of the two bodies (the man and the boat), the resultant would be zero. That fixed center of mass is exactly what gives your equation: if the boat slides a distance $x$ to the left while the man walks the whole length $L$ relative to the boat, the man has moved only $L-x$ to the right relative to the ground. For the center of mass to stay put, the mass-weighted displacements must cancel, so $M_{man}(L-x)=M_{boat}\,x$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/546856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How do electron wavelengths relate to orbitals and probability density? I'm doing a physics research project and I am a bit confused. We haven't learnt much of this on our course so I'm sorry if this is a stupid question, I couldn't seem to find an explanation that I understood online. I understand stationary waves, and why electron wavelengths (with wave/particle duality) mean that they can only be at certain energy levels, like this: (I still don't really know why they can't / what would happen if they destructively interfered) What I don't understand is how this relates to the electron probability density - what happens at the nodes on this diagram, and why the wave direction in the second diagram is away from the nucleus. If the electron interfering with itself in the diagram (as it moves around the nucleus) is what causes the stationary wave, what is moving both towards and away from the nucleus in the second diagram in order to create a stationary wave with nodes? Again, sorry if this is a stupid question but any help is appreciated! Thank you for your time.
Your first sketch (upper left) represents a resonant condition for a 1D wave wrapped around a circle. (Keep in mind that the "waves" in the sketch are a mathematical graph representing the probability density at points on the circle.) An electron orbital is a resonant 3D standing wave, bounded by the electric field of the nucleus. For a simple hydrogen atom its shape and properties can be described by the solution of the Schrodinger equation, as described by “probably-someone”. I would say that the nodes in the wave are a result of wave interference and are predicted by the wave equation. The energy of each resonant pattern seems to be well defined. I don't know that the same can be said about the wavelength in a 3D pattern.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/547163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is the force of gravity always directed towards the center of mass? This is a pretty basic question, but I haven't had to think about orbital mechanics since high school. So just to check - suppose a [classical] system of two massive objects in a vacuum. If the density of either object is the same at a given distance from the center, and both objects are spherical, then both objects can be treated as point-masses whose position is the [geometric] center of the original sphere. In the case that either object is not spherical or has an irregular distribution of mass (I'm looking at you, Phobos!), both objects can still be treated as point-masses but the center of mass rather than the geometric center must be used. Is this correct?
In order to have gravity always point to the center of mass, your mass must have spherical symmetry (be homogeneous or at least made of homogeneous concentric layers). The approximation can be used (to a certain extent) for bodies that are not symmetrical but are far apart from each other. The more the body deviates from the symmetry, the more its gravity deviates from the "point mass" approximation. Most celestial bodies are in or near a hydrostatic equilibrium that imposes a more or less symmetric distribution of mass. Then again, certain phenomena like tides or sun-synchronous orbits imply non-center-of-mass gravity even for pretty round objects like the Earth, the Sun and the like.
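A small numerical illustration of this point, using a made-up two-point-mass "dumbbell" as the irregular body (masses, positions and test points below are arbitrary assumptions): close to the body, the pull on a test mass differs in both magnitude and direction from what a single point mass sitting at the centre of mass would give, and the discrepancy dies away as you move further out.

```python
# Sketch: gravity of a two-point-mass "dumbbell" vs. a point mass at its centre of mass.
# All masses, positions and distances are invented for illustration.
import numpy as np

G = 6.674e-11
m1, m2 = 3.0e15, 1.0e15                    # kg, the two lumps of the irregular body
r1 = np.array([-1000.0, 0.0, 0.0])         # m, positions of the lumps
r2 = np.array([+3000.0, 0.0, 0.0])
com = (m1*r1 + m2*r2) / (m1 + m2)          # centre of mass

def g_dumbbell(p):
    """Gravitational acceleration at point p from the two lumps."""
    g = np.zeros(3)
    for m, r in ((m1, r1), (m2, r2)):
        d = r - p
        g += G * m * d / np.linalg.norm(d)**3
    return g

def g_point(p):
    """Acceleration if all the mass sat at the centre of mass."""
    d = com - p
    return G * (m1 + m2) * d / np.linalg.norm(d)**3

for dist in (5e3, 5e4, 5e5):               # off-axis test points, increasingly far away
    p = com + dist * np.array([0.6, 0.8, 0.0])
    a, b = g_dumbbell(p), g_point(p)
    cosang = np.clip(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)), -1.0, 1.0)
    print(f"r = {dist:8.0f} m: |dg|/|g| = {np.linalg.norm(a - b)/np.linalg.norm(b):.2e}, "
          f"direction off the CoM line by {np.degrees(np.arccos(cosang)):.4f} deg")
```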
{ "language": "en", "url": "https://physics.stackexchange.com/questions/547505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 6, "answer_id": 2 }
Can massless particle have effective mass? The effective potential is probably very familiar in many contexts. However, what about an effective mass? Suppose a massless particle. For simplicity, suppose it's not some superficial particle, i.e. it has an observable effect. Is it possible for such a massless particle to gain an "effective mass" through dynamical interaction? For example, a photon could well produce an $e^-e^+$ pair in space, but I'm not sure whether that's a meaningful case. Further, what would such an effective mass mean for the four-momentum, if it exists?
Well, if the photon interacts with something and that something effectively slows it down, then the photon will behave as if it has mass. While the photon does not have any intrinsic mass, an interaction can give it an effective mass. This is similar to the case with gluons: gluons can bundle up together, and while the gluons themselves are massless, overall the glueball has mass due to the interaction between the color-charged gluons.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/547610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
What is wrong with this calculation of work done by an agent bringing a unit mass from infinity into a gravitational field? Let us assume that a gravitational field is created by a mass $M$. An agent is bringing a unit mass from $\infty$ to distance $r < \infty$, both measured from mass $M$. The agent is always forcing the unit mass with a continuously changing force $\vec F(\vec x)$, $\vec{x}$ being the distance pointing radially out from $M$. According to classical mechanics, it holds that $\vec F(\vec x) = \frac{GM}{x^2}\hat{x}$, with $G$ being the gravitational constant. The work is calculated as follows: $$W = \int_\infty^r\vec F(\vec x)\cdot d\vec x$$ $$=\int_\infty^r{{F(x)}\,dx\cdot cos(\pi)}$$ $$=-\int_\infty^r{{\frac{GM}{x^2}}dx}$$ $$=-GM[-\frac{1}{x}]_\infty^r$$ $$=GM[\frac{1}{x}]_\infty^r$$ $$=GM[\frac{1}{r}-\frac{1}{\infty}]$$ $$=\frac{GM}{r}$$ The body moved against the force's direction (the angle between them was always $\pi$). So the work should have been negative. But since $r$ is the scalar distance from $M$, it is positive like $G$ or $M$, yielding the result always positive. What is wrong here?
The agent is always forcing the unit mass with a continuously changing force, $\vec{F}(\vec x)$ ... $= \frac{GM}{x^2}\hat{x}$. By your force definition, the agent is not the attractive gravitational force but is something which is restricting the motion to constant velocity, because the mass $M$ is pulling in the $-\hat{x}$ direction with a force equal in magnitude to the gravity but opposite in direction. That's okay, but I wanted to state that explicitly. Also, you are calculating only the work done by that agent. You also have defined the positive direction to be away from $M$, and that's okay, too. Your work integral calculates the work done by the force of the agent which is holding the mass back from accelerating toward $M$. Notice that, with your symbols, $$W = \int_{\infty}^r \frac{GM}{x^2}\hat{x}\cdot dx(\hat{x})= \int_{\infty}^r \frac{GM}{x^2}~ dx.$$ The $\cos \pi$ factor you have is incorrect. The infinitesimal $dx\hat{x}$ in an integral defines the direction of the positive coordinate change, not the direction of the motion. The direction of motion is contained in the integration limits. The result of the integral (for a unit mass being moved) is $$W = \left.\frac{-GM}{x}\right|_{\infty}^r= \frac{-GM}{r}.$$ The negative value makes sense because the agent is restraining the motion and acting in the positive $x$ direction while the motion is in the negative $x$ direction. And because the object is moving at constant velocity, the work done by the gravitational field will be the negative of the above so that the net work is zero, in agreement with the work-energy principle: $$\Delta K = W_{net}$$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/547793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Approximation of the total number of accessible microstates So, here is a system having two subsystems $\alpha$ and $\beta$, where the two subsystems can exchange energy between them. The total number of accessible microstates of the whole system is then given by $$\Omega(E)=\sum_{E_{\alpha}}\Omega_{\alpha}(E_{\alpha})\Omega_{\beta}(E-E_{\alpha})$$ Which approximation did we use to get $$\Omega(E) \approx \Omega_{\alpha}(\tilde E_{\alpha})\Omega_{\beta}(E-\tilde E_{\alpha})$$ where $\tilde E_{\alpha}$ is the most probable value of $E_{\alpha}$?
The approximation is that the largest term dominates the sum, $$ \Omega_\alpha(\tilde{E}_\alpha)\,\Omega_\beta(E-\tilde{E}_\alpha) \gg \Omega_\alpha(E_\alpha)\,\Omega_\beta(E-E_\alpha) \quad \text{for } E_\alpha \neq \tilde{E}_\alpha, $$ and it does so so sharply that on the logarithmic (entropy) scale keeping only that single term makes no difference: $\ln\Omega(E)\approx\ln\left[\Omega_{\alpha}(\tilde E_{\alpha})\Omega_{\beta}(E-\tilde E_{\alpha})\right]$. Or in words: the number of microstates of the most probable macrostate (which is also very close to the one having the mean energy) dominates not just some of the other macrostates, but effectively all of them together. It is surprising at first, but when you look into it, it is indeed the case owing to the very large numbers involved.
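A toy numerical check of this, assuming power-law state counts $\Omega_\alpha(E_\alpha)\propto E_\alpha^N$ and $\Omega_\beta(E-E_\alpha)\propto (E-E_\alpha)^N$ (a crude stand-in for ideal-gas-like subsystems; the form and the numbers are my assumptions): as $N$ grows, the logarithm of the single largest term becomes indistinguishable from the logarithm of the full sum, which is exactly the sense in which the "one term" approximation is used.

```python
# Toy check of the "largest term" approximation with Omega_a(E) ~ E^N and Omega_b(E) ~ E^N.
# The power-law form and the numbers are illustrative assumptions.
import numpy as np
from scipy.special import logsumexp

E_total = 10_000                       # total energy, arbitrary integer units
for N in (5, 50, 500, 5000):
    Ea = np.arange(1, E_total)                           # energy given to subsystem alpha
    log_terms = N*np.log(Ea) + N*np.log(E_total - Ea)    # log of Omega_a * Omega_b
    log_max = log_terms.max()                            # log of the single largest term
    log_sum = logsumexp(log_terms)                       # log of the full sum, done stably
    print(f"N = {N:5d}: ln(sum) - ln(max term) = {log_sum - log_max:6.2f}   "
          f"(vs. ln(max term) = {log_max:.2e})")
```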
{ "language": "en", "url": "https://physics.stackexchange.com/questions/547933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Why is the acceleration of the string connected to the cylinder different from that with which the cylinder is moving forward? The following object 'B' is a cylinder. It is kept mounted horizontally on a massless block. When a tension T is applied by a string passing over the lower end of the cylinder, the acceleration of the string which is tied to the cylinder is different from the acceleration with which the CENTRE OF MASS of the cylinder is moving forward (i.e., the cylinder is experiencing both rotational and translational motion). Please explain to me why this happens. Intuitively I can imagine that they ought to be different, but can you please provide a proof of that.
Firstly, at the point where the string contacts the cylinder, their velocity and acceleration are the same (otherwise the string would slip). Secondly, say that the velocity of the center of the cylinder is $v$ and its angular velocity is $\omega$. The velocity of that contact point will be $v+\omega\times r$. (Why? Hint: Galilean transformation.) The same argument applies to acceleration, which is why the string's acceleration differs from that of the centre of mass.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/548029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Does the light pulse broaden in time when passing through a bandpass filter? What I don't understand about the relationship between laser pulse width in time and frequency is where these rules apply, namely the rules for bandwidth-limited pulses. Say I have a femtosecond laser making 100 fs pulses with central wavelength 900 nm and a FWHM of 20 nm. I pass it through something like an ultra narrow bandpass filter at the central wavelength of 900 nm and a very narrow range. Will my pulse broaden in time more than if I passed it through a clear glass plate of similar material and width as the filter?
Usually, femtosecond pulses are produced by mode-locking. A laser cavity has a certain number of modes at different frequencies, which usually oscillate with random phase relationships between each other. Mode-locking is a process that results in a fixed phase relationship between the modes. Imagine that you have a lot of sinusoidal waves with the same amplitude. When you make them in phase, it can be shown by means of the Fourier transform that the sum of the waves gives a $\mathrm{sinc}$ function in time, which indeed looks like a pulse. Note that the more frequencies you have, the shorter the pulse duration. Now, when you cut several frequencies with a filter, the pulse gets broader. Indeed, imagine you cut all the frequencies except one. Then you end up with one sinusoidal wave which is infinite in time. Moreover, you could disrupt the phase relationship by means of reflection and scattering. The last thing that should be mentioned is material dispersion. If you pass a laser beam through a simple glass plate you don't cut frequencies. However, different frequencies have different velocities in glass, so the pulse again gets broader in this case.
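A small FFT sketch of this (every number below is a made-up illustrative value, and material dispersion is deliberately left out): take a transform-limited 100 fs Gaussian pulse, keep only a narrow slice of its spectrum, and the inverse transform comes back much longer in time, roughly the inverse of the surviving bandwidth.

```python
# Sketch: spectrally filtering a short pulse broadens it in time (pure Fourier argument,
# no material dispersion). All numbers are illustrative assumptions.
import numpy as np

fs = 1e-15
t = np.arange(-5000, 5000) * fs                  # 10 ps window, 1 fs steps
tau = 100 * fs
pulse = np.exp(-2*np.log(2) * (t/tau)**2)        # Gaussian field; intensity FWHM = 100 fs

def fwhm(x, y):
    """Full width at half maximum of the intensity |y|^2 on grid x."""
    inten = np.abs(y)**2
    above = x[inten >= 0.5*inten.max()]
    return above[-1] - above[0]

spec = np.fft.fftshift(np.fft.fft(pulse))
f = np.fft.fftshift(np.fft.fftfreq(t.size, d=t[1] - t[0]))   # offset from the carrier, Hz

# "Ultra narrow" bandpass: keep a 0.5 THz slice around the carrier (an assumed filter width).
filtered = np.fft.ifft(np.fft.ifftshift(spec * (np.abs(f) < 0.25e12)))

print(f"input pulse FWHM    : {fwhm(t, pulse)/fs:6.0f} fs")
print(f"filtered pulse FWHM : {fwhm(t, filtered)/fs:6.0f} fs")
```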
{ "language": "en", "url": "https://physics.stackexchange.com/questions/548109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How does cutting a spring increase the spring constant? I know that on cutting a spring into $n$ equal pieces, the spring constant becomes $n$ times larger. But I have no idea why this happens. Please clarify the reasons.
The spring constant is inversely proportional to the spring's length. You can see why by thinking of the full spring as its $n$ pieces connected in series: under a given force every piece stretches by the same amount, so the whole spring stretches $n$ times as much as one piece, meaning each piece is $n$ times stiffer than the whole. Hence, when a spring of constant $k$ is cut into $n$ equal pieces, each piece's length becomes $\frac1n$ of the initial length, so its spring constant becomes $k/(1/n)=nk$. Therefore $k$ becomes $n$ times larger on cutting the spring.
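A quick numeric sanity check of that series-spring argument (the value of $k$ is an arbitrary assumption): reconnecting the $n$ pieces, each of constant $nk$, in series must give back the original constant $k$, since series compliances $1/k_i$ add.

```python
# Check: n identical pieces of constant n*k, reconnected in series, reproduce the original k.
k = 100.0                                 # N/m, constant of the uncut spring (illustrative)
for n in (2, 3, 10):
    k_piece = n * k                       # claimed constant of each piece
    k_series = 1.0 / sum(1.0 / k_piece for _ in range(n))   # series springs add compliances
    print(f"n = {n:2d}: pieces of {k_piece:6.1f} N/m in series give {k_series:6.1f} N/m")
```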
{ "language": "en", "url": "https://physics.stackexchange.com/questions/548186", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 6, "answer_id": 5 }
What exactly happens when $\rm NaCl$ water conducts electricity? Assume a DC power source with $2$ electrodes made of Fe. We dip those $2$ electrodes into table salt water. What happens exactly? * *Will $H^+$ and $Na^+$ migrate to the negative electrode by the electric field, or by diffusion, or a combination of both? *Will $H^+$ accept electrons first and then $Na^+$, or both? But what if we really amp up the current, are we going to see metallic $Na$ at the negative electrode, and then $Na$ reacting with water violently? *At the positive electrode, should we expect oxygen and chlorine gas, or does the $Fe$ electrode just get eaten away? Although there are many questions, I believe there is one general principle that can explain them all. Something that can explain the priority of all possible reactions.
To describe the diffusion, migration (under an electric field) and convection of species we have the Nernst-Planck equation: $$ \frac{\partial c}{\partial t} = - \nabla \cdot J \quad | \quad J = -\left[ D \nabla c - u c + \frac{Dze}{k_\mathrm{B} T}c\left(\nabla \phi+\frac{\partial \mathbf A}{\partial t}\right) \right] $$ $$ \iff\frac{\partial c}{\partial t} = \nabla \cdot \left[ D \nabla c - u c + \frac{Dze}{k_\mathrm{B} T}c\left(\nabla \phi+\frac{\partial \mathbf A}{\partial t}\right) \right]$$ To determine which reactions happen during electrolysis we use the relation between the Gibbs free energy and the cell potential, $$ \Delta G=-nFE $$ together with the Nernst equation, which corrects $E$ for non-standard concentrations. Basically, a spontaneous reaction occurs when the Gibbs free energy is negative; this is useful for redox reactions. For spontaneous redox reactions the Gibbs free energy is negative, and for electrolysis the minimum cell potential required for a reaction is calculated using the Nernst equation. The thermodynamically favorable reaction can be found using a standard reduction table: the compounds that are more positive (higher value) will be reduced and the compounds with a lower value will be oxidised. We note that the standard reduction table assumes T = 298.15 K and an effective concentration of 1 M for all species. Using a standard reduction table is just a quick way to guess which species will form; if you know the concentrations of the species you can use the Nernst equation. Thermodynamics is not always the sole consideration, however: kinetically some of these reactions are slow, and then the thermodynamics does not matter. The rates of these reactions compared to one another determine the products produced; the thermodynamics determines which ones can potentially occur. Yet another consideration is whether the products will react with the solvent, reversing the reaction. The rates of these reactions can be sped up by increasing the voltage.
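As a small worked example of that bookkeeping (a sketch only: the E° values are ordinary reduction-table numbers, and the chloride concentration is an assumed seawater-ish value): comparing the two candidate anode reactions with the Nernst correction shows that water oxidation (oxygen) is the thermodynamically easier one, especially at neutral pH, even though in practice kinetic overpotentials often let chlorine evolve anyway.

```python
# Sketch: Nernst-corrected electrode potentials for two candidate anode reactions.
# E0 values are standard-table numbers; the NaCl concentration is an assumption.
import math

R, T, F = 8.314, 298.15, 96485.0

def nernst(E0, n, lnQ):
    """E = E0 - (RT/nF) ln Q for a reduction half-reaction."""
    return E0 - (R*T/(n*F)) * lnQ

c_Cl = 0.6          # mol/L chloride, roughly seawater-strength brine (assumed)
# Cl2 + 2e- -> 2Cl-,  E0 = +1.36 V; as a reduction, Q = [Cl-]^2 / p_Cl2 (take p = 1 atm)
E_Cl = nernst(1.36, 2, math.log(c_Cl**2))
# O2 + 4H+ + 4e- -> 2H2O, E0 = +1.23 V; at pH 7, Q = 1 / ([H+]^4 p_O2), [H+] = 1e-7
E_O2 = nernst(1.23, 4, math.log(1.0 / (1e-7)**4))

print(f"Cl2/Cl- reduction potential: {E_Cl:+.2f} V")
print(f"O2/H2O  reduction potential: {E_O2:+.2f} V")
print("The couple with the lower reduction potential is thermodynamically easier to"
      " oxidise at the anode; kinetics can still override this.")
```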
{ "language": "en", "url": "https://physics.stackexchange.com/questions/548428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Mathematically prove that a round wheel rolls faster than a square wheel Let's say I have these equal-size objects (for now thinking in 2D) on a flat surface. At the center of those objects I apply equal positive angular torque (just enough to make the square tire move forward). Of course the round tire will move forward faster and even accelerate (I guess). But how can I mathematically prove/measure how much better the round tire will perform? This is for an advanced simulator I'm working on, and I don't want to just hardcode that round rolls better, square worse, etc. I know the answer could be very complex, but I'm all yours.
I think that in perfect conditions, the square and the circle roll AT THE SAME SPEED. The reason for this is that in real life, a circle will roll faster than a square for friction reasons: the kinetic energy of the square will get lost faster than the energy of the circle because of its shape and go to thermal energy. But in perfect conditions, without friction, there is no reason that the square rolls slower than the circle, except if the energy that you apply to it is less than what is needed for it to flip 45 degrees; but if that isn't the case, the potential energy will go to kinetic energy and vice versa forever, moving the square less regularly but at an average speed that is equal to the circle's. I think that this question is an intuition problem about how things happen in "perfect conditions", in the same way that two objects fall at the same speed when there isn't any air friction.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/548519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "64", "answer_count": 12, "answer_id": 10 }
What is the range of Pauli's exclusion principle? In many introductions to the Pauli exclusion principle, it is only said that two identical fermions cannot be in the same quantum state, but it seems that there is no explanation of the range over which this applies to two fermions. What is the scope of application of the exclusion principle? Can it be all electrons in an atom, or can it be electrons in a whole conductor, or can it be a larger range?
It depends on the system to which the fermions belong. The exclusion principle says that no two fermions can have the same quantum state. The quantum state includes the system to which the fermion belongs. If you are looking at electrons in atoms, for example, the atom is the system, and the exclusion principle applies only to electrons within a particular atom. If you are looking at a Fermi gas, then the range is the volume of the gas. If you are looking at a white dwarf, then it is the size of the white dwarf.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/548642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 6, "answer_id": 4 }
Do things have colors because their electrons are getting excited when photons hit them? Atomic electron transitions can be caused by absorbing a photon with a certain wavelength. An electron jumps to a higher energy level, then it falls back and a photon is emitted. The perceived color of the photon depends on the energy absorbed by the electron. Could we say that electrons in the atoms of different objects are excited when white light hits them, and they release photons which in turn cause the object to have a color?
No, actually what you are talking about is the atomic spectrum of an atom or a system. The colour of an object depends on the crystal structure of the object. As @user12986714 gave as an example, copper has a crystalline structure which causes constructive interference of light waves of a particular frequency between two crystal layers, while copper powder is almost amorphous, so there is no such interference of light waves and it is white in colour.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/548900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Does an object rotating in place have linear momentum? I understand that an object with linear momentum could have angular momentum. However, can the same be in reverse? For example, will a wheel spinning in place be considered to have both angular and linear momenta? It will have tangential velocity, but the wheel itself is not moving in a straight line. Could you use its tangential velocity and say it has linear momentum?
Each particle that the object consists of can carry momentum. And they all except for the particle at the very centre do carry some momentum. $$p_\text{ non-centre-particle}\neq 0$$ The total momentum (the sum of all particles' momentum) will be zero if the object is spinning about its centre-of-mass (CoM), since all particles on one side of the spinning object (on one side of the CoM) cancel out the effect of those on the other side. $$\sum p_\text{ particle}=p_\text{ total}=0 \qquad \text{ if centre-of-rotation is CoM}$$
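A quick numerical illustration of that cancellation (wheel parameters are arbitrary assumptions): summing the momentum vectors of many small mass elements of a uniform disc spinning about its own centre gives essentially zero, even though each individual element carries non-zero momentum.

```python
# Sketch: total linear momentum of a disc spinning about its own centre of mass.
# Disc parameters are arbitrary illustrative values.
import numpy as np

omega = 10.0                     # rad/s
R, M, n = 0.3, 2.0, 200          # radius (m), mass (kg), grid resolution

# Build a uniform grid of mass elements inside the disc.
x, y = np.meshgrid(np.linspace(-R, R, n), np.linspace(-R, R, n))
inside = x**2 + y**2 <= R**2
dm = np.full(inside.sum(), M / inside.sum())      # equal mass elements

# Velocity of each element for rotation about the centre: v = omega x r (in 2D).
vx, vy = -omega * y[inside], omega * x[inside]

p_total = np.array([np.sum(dm * vx), np.sum(dm * vy)])
p_single = dm[0] * np.hypot(vx, vy).max()          # momentum of one fast rim element

print(f"|p| of a single rim element : {p_single:.4f} kg m/s (non-zero)")
print(f"total p of the whole disc   : {p_total}  (essentially zero)")
```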
{ "language": "en", "url": "https://physics.stackexchange.com/questions/549147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does the $U(1)$ vector current flip under charge conjugation? The conserved $U(1)$ current of the Dirac Lagrangian is given by $j^\mu = \bar{\psi} \gamma^\mu \psi$, where $\bar{\psi} = \psi^\dagger \gamma^0$. As this is interpreted as electric current I would expect it to flip sign under charge conjugation. Charge conjugation Of a spinor $\psi$ is defined as $\psi^c = C\psi^*$ where $C$ is the unitary charge conjugation matrix that satisfies $C^\dagger \gamma^\mu C = -(\gamma^\mu)^*$ for all gamma matrices. If I calculate the $U(1)$ current under charge conjugation I find $$ j^\mu_c = \bar{\psi^c}\gamma^\mu \psi^c \\ = (C \psi^*)^\dagger \gamma^0 \gamma^\mu C \psi^* \\ = (\psi^\dagger)^* C^\dagger \gamma^0 C C^\dagger \gamma^\mu C \psi^* \\ = (\psi^\dagger)^* (\gamma^0)^* (\gamma^\mu)^* \psi^* \\ = (\bar{\psi} \gamma^\mu \psi)^*\\ = (j^\mu)^* $$ Which hasn’t flipped sign as I thought it would. Have I made an error in my analysis? Any hints would be appreciated. Thanks!
Starting with your third to last line, we begin by rewriting \begin{equation} \begin{split} (\psi^\dagger)^*(\gamma^0)^* (\gamma^\mu)^* \psi^* &= \psi^T \big[(\gamma^0)^\dagger\big]^T \big[(\gamma^\mu)^\dagger\big]^T (\psi^\dagger)^T \\ &= \big[\psi^\dagger (\gamma^\mu)^\dagger (\gamma^0)^\dagger \psi \big]^T\\ &= \psi^\dagger (\gamma^\mu)^\dagger (\gamma^0)^\dagger \psi \end{split} \end{equation} where in going from penultimate to last line we have used that the components of the current are complex numbers and thus not matrix valued, such that we may drop the transpose. We may then proceed in a way similar to my answer to this question, using the following properties of the gamma matrices \begin{align} (\gamma^0)^\dagger &= \gamma^0, \\ (\gamma^\mu)^\dagger &= \gamma^0 \gamma^\mu \gamma^0, \\ (\gamma^0)^2 &= \mathbb{I}_{4}, \end{align} where $\mathbb{I}_{4}$ is the identity to write \begin{equation} \begin{split} \psi^\dagger(\gamma^\mu)^\dagger(\gamma^0)^\dagger \psi &= \bar{\psi}\gamma^\mu(\gamma^0)^2\psi\\ &= \bar{\psi} \gamma^\mu \psi. \end{split} \end{equation} This is then the result $j^\mu_c = j^\mu$. This is a consequence of the charge conjugation symmetry of quantum electrodynamics.
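A numeric spot-check of the algebra above for commuting (c-number) spinor components, which is what the manipulation assumes. The check uses the Dirac representation of the gamma matrices and the common choice $C = i\gamma^2$, which satisfies $C^\dagger\gamma^\mu C = -(\gamma^\mu)^*$; this representation and this particular $C$ are assumptions for the check, not something fixed by the question.

```python
# Numeric check (Dirac representation, commuting spinor entries): j^mu is unchanged
# under psi -> C psi* with C = i gamma^2. Representation/convention are assumptions.
import numpy as np

I2 = np.eye(2)
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]

g0 = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), -I2]]).astype(complex)
gam = [g0] + [np.block([[np.zeros((2, 2)), s], [-s, np.zeros((2, 2))]]) for s in sig]

C = 1j * gam[2]                                   # one common choice of C

# Check the defining property: C^dagger gamma^mu C = -(gamma^mu)^*
for mu, g in enumerate(gam):
    assert np.allclose(C.conj().T @ g @ C, -g.conj()), f"property fails for mu={mu}"

rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi_c = C @ psi.conj()

j   = [psi.conj() @ g0 @ g @ psi     for g in gam]   # psi-bar gamma^mu psi
j_c = [psi_c.conj() @ g0 @ g @ psi_c for g in gam]

print(np.allclose(j, j_c))   # True: the current built from c-number spinors is unchanged
```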
{ "language": "en", "url": "https://physics.stackexchange.com/questions/549272", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How do gravitons and photons interact? First of all, I am a noob in physics (I'm a computer scientist) and started reading Hawking's "A Brief History of Time". In Chapter 6 he says that “electromagnetic force [...] interacts with electrically charged particles like electrons and quarks, but not with uncharged particles such as gravitons.” My question now: how come extremely massive objects are able to bend light (e.g. we are able to see distant stars that are behind the sun)? I mean, how can gravitation (actually gravitons) affect photons if gravitons are not charged? I know that there are some questions here that go in the same direction, but as I'm a noob in physics, I don't quite get the answers. I'd appreciate it if someone had a layman's explanation for this that doesn't necessarily cover all the different aspects (I might pose some follow-up questions) but explains the essence. Thanks to y'all!
Gravitons should couple to almost every particle. It is just a matter of how strongly they couple to the particle. However, in the particle world gravity is pretty weak compared to the other forces. On the largest of scales, though, gravity wins out. To answer the question: gravitons do couple to photons.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/549411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Why is a pump's head usable for any fluid? As far as I have investigated, a pump has a specific head at a given flow rate (related to its power and rotation speed). Then, considering the formula ($\Delta P=\rho g H$), $\Delta P$ is adjusted for any fluid (with a different density) to obtain the same head. But my question is: how is the extra pressure created for a fluid with higher density, when using a specific pump with a specified power and therefore a maximum head? This makes it confusing, because it seems more logical to say the head is reduced/increased in such a case, not that the pump produces more power to obtain the same head.
There are vanes in the impeller of a centrifugal pump, and the tips of those vanes are moving at a tangential speed that is a function of the impeller diameter and impeller revolutions per minute. In SI units, this tip speed is given by $v=r\omega$ m/s. Individual parcels of liquid come off the impeller vane tips at this speed, and the pump head is equivalent to how high those parcels of liquid would rise if you threw them straight up at this speed. The centrifugal pump in question is coupled to an electric motor that MUST turn at the synchronous frequency of the AC power being supplied to it, which is 60 Hz in the U.S. If the motor becomes more loaded, such as when the specific gravity of the pumped fluid is increasing, the motor draws more amps and hence more power in order to maintain its designed speed (e.g., 3600 rpm). Obviously, if you pump a fluid that has a specific gravity much higher than the pump is designed for, the motor will exceed its amperage rating and either trip an electrical breaker or burn up.
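A short numeric sketch of that tip-speed picture (impeller size, speed and fluid densities below are made-up illustrative values): the ideal head is of order $v^2/2g$, the height a parcel thrown straight up at the tip speed would reach, and it is the same number of metres whatever the fluid, while the pressure rise $\Delta P=\rho g H$ and the power drawn scale with the density.

```python
# Sketch: tip speed -> ideal head -> pressure rise for fluids of different density.
# Impeller diameter/speed and the densities are illustrative assumptions.
import math

g = 9.81
D = 0.25                      # impeller diameter, m
rpm = 3600.0
v = math.pi * D * rpm / 60.0  # tip speed, m/s
H = v**2 / (2*g)              # ideal ("thrown straight up") head, m

print(f"tip speed ~ {v:.1f} m/s, ideal head ~ {H:.0f} m of whatever fluid is pumped")
for name, rho in [("water", 998.0), ("brine", 1200.0), ("light oil", 850.0)]:
    dP = rho * g * H          # pressure rise depends on the fluid density
    print(f"  {name:9s}: dP ~ {dP/1e5:.1f} bar")
```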
{ "language": "en", "url": "https://physics.stackexchange.com/questions/549672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How do metals reflect electromagnetic radiation? Microwaves, for example, can be reflected off metallic surfaces. This seems counter-intuitive, since the metal's electrons could interact with the electric field component of the EM wave and absorb it. In fact, you can use a metal grid to polarise microwaves, and there the metal absorbs the microwaves. So what determines whether an EM wave is absorbed or reflected and how does the reflection happen exactly (I'm assuming in terms of quantum mechanics)
You are right that there are ample electronic transitions in a metal that match optical and lower frequencies. However, these transitions do not satisfy momentum conservation. When you apply a grid, momentum is only conserved up to a reciprocal lattice vector (of the grating). For a suitable choice of grid pitch and duty cycle it is then possible to meet the requirement for a chosen wavelength. Under these conditions light is absorbed.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/549753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
How do electrons in n type (conduction band) fall into holes in p type (valence band with lower energy) in a pn junction? I was studying semiconductor physics. I learned about the concept of holes and so on. If the electrons present in the n type are in the conduction band, how can they fall into holes in the valence band of the p type, which is a much lower energy state? If it is by losing energy while crossing the depletion region, then how, in the first place, on joining p and n type materials, do the electrons on the n side combine with the holes on the other side, forming the depletion layer itself? The question may be utterly foolish but correct me if I am wrong.
I'm hoping the diagram at the bottom may help a little; it's one I made when I was writing up to show the basics of what happens in the p-n junction (without any bias). I would also highly recommend the website pveducation.org, which takes you through step-by-step what happens on the formation of a p-n junction and how the depletion region is formed. The formation of the depletion region occurs chiefly through diffusion of carriers - i.e. the electrons, which are the majority carrier in n-type materials, diffuse towards the p-type side, and vice versa for holes. As a result the 'ion' cores of the opposing charge are left behind, which causes the build-up of an electric field between the cores (positive in n-type, negative in p-type). The recombination of the electrons with the holes can happen through many pathways - for example through defects in the material. It doesn't happen straight away, and the electron lifetime is an important property in a semiconductor! I hope I have helped; if there is anything I've missed, do let me know. (Edit as I realised I had my band labels the wrong way around)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/549951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can even there be a non-zero BMS vector field with zero asymptotic data? Following the BMS approach, one spacetime $(M,g)$ is asymptotically flat when: * *We can find a Bondi gauge set of coordinates $(u,r,x^A)$ characterized by $$g_{rr}=g_{rA}=0,\quad \partial_r\det\left(\dfrac{g_{AB}}{r^2}\right)=0\tag{1}.$$ *The range of the $r$ coordinate is $r_0\leq r < +\infty$ and the $x^A$ coordinates parameterize a two-sphere $S^2$ *The metric has asymptotic behavior (where $\gamma_{AB}$ is the $S^2$ round metric) $$g_{uu}=-1+O(r^{-1}),\quad g_{ur}=-1+O(r^{-2})\quad g_{uA}=O(1),\\\quad g_{AB}=r^2\gamma_{AB}+O(1)\tag{2}.$$ In that scenario a BMS vector field is a vector field in $(M,g)$ which preserves (1) and (2) when we vary the metric as $\delta g = L_X g$. The space of all such vectors is then the $\mathfrak{bms}_4$ algebra. It is possible to show that such a vector field is identified by a pair $(f,Y)$ where $f\in L^2(S^2)$ and $Y$ is a Conformal Killing Vector on $S^2$ such that its leading behavior is: \begin{align} X &= \left(\frac{u}{2}D_A Y^A + f\right)\partial_u + \left(-\frac{r}{2}D_A Y^A -\frac{u}{2} D_A Y^A +\frac{1}{2}D_A D^A f + O(r^{-1})\right)\partial_r\\ & + \left(Y^A -\frac{D^A f + \frac{u}{2}D^A (D_B Y^B)}{r}+O(r^{-2})\right)\partial_A.\tag{3}\end{align} Moreover preservation of (1) still demands two conditions. Preservation of $g_{rA}=0$ demands: $$\partial_r X^A = -g_{ur}g^{AB}\partial_B X^u \tag{4}$$ and preservation of the determinant condition demands $g^{AB}L_X g_{AB} =0$ which becomes: $$X^r g^{AB}\partial_r g_{AB}=-\bigg(X^u g^{AB}\partial_u g_{AB}+X^C g^{AB}\partial_C g_{AB}+2 g^{AB}\partial_A X^u g_{uB}+2g^{AB}\partial_A X^C g_{CB}\bigg)\tag{5}$$ which in effect fully determines $X^r$. Now in "Advanced Lectures on General Relativity" the author says that "Trivial boundary diffeomorphisms $f=Y^A=0$ form an ideal this algebra". But why isn't the set $f = Y^A=0$ comprised of just the zero vector? I mean if $f = Y^A = 0$ then $X^u =0$. If $X^u = 0$ then (4) implies that $X^A = Y^A$ and therefore $X^A =0$. Finally using $X^u,X^A = 0$ in (5) implies that, since $g^{AB}\partial_r g_{AB}\neq 0$, we have $X^r = 0$. What am I missing here? What is my misunderstanding? How can there be BMS vector fields, preserving (1) and (2), with $f = Y^A =0$ which are not identically zero?
Here is the problem: It is possible to show that such a vector field is identified by a pair $(f,Y)$ where … The vector field $X$ is not uniquely identified by this pair $(f,Y)$; the pair only captures its action on the boundary data, while the vector field itself is defined not only near the boundary but everywhere “inside” the manifold. In other words, this pair $(f,Y)$ says nothing about the behavior of $X$ at finite values of the radial coordinate, only about its asymptotic behavior. For example, we can choose $X$ to be arbitrary for $r<r_1$ (for some $r_1<\infty$) and identically zero for $r_2 < r <\infty$ (with $r_2>r_1$), while for $r_1<r<r_2$ we can choose $X$ to interpolate between the two behaviors so as to satisfy the necessary smoothness conditions. It is easy to see that for such an example vector field the pair $(f,Y)$ would be zero. Thus this $X$ is a nontrivial example of a generator of a trivial boundary diffeomorphism. As an aside, such a construction is related to the so-called Einstein hole argument. Also note that fields $X$ corresponding to trivial boundary diffeomorphisms do not have to be identically zero in some vicinity of the boundary; they just have to approach zero fast enough not to alter the boundary data.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/550100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can a very small piece of material be superconducting? The existing theory of superconductivity seems to be based on statistical mechanics. Can an ultrasmall piece of material, like a quantum dot with very few atoms (like a small molecule), be superconducting? For example, can a cube of $3 \times 3 \times 3 = 27$ copper atoms be superconducting? What is the minimum $n$ for a cube of $n\times n\times n$ copper atoms to be able to be superconducting? Can a few unit cells of a complex high temperature superconducting material be superconducting? If so, then maybe some calculation from first principles can be done on such a piece of material as a molecule to understand the exact mechanisms of high temperature superconductivity. If not, can some first-principles calculation on such a small piece of material be done to find some pattern that leads to a possible theory of high temperature superconductivity?
My experience in this field is mostly applied. Here is what I have seen in papers. Superconductors behave as 'macroscopic' only as long as their size is above the coherence length $\xi_0$. For example, in titanium this is nearly 0.5 µm, in niobium it is 20 nm, in YBCO it is at the atomic scale (but anisotropic). Coherence length depends on temperature and applied magnetic field. When the size of a piece of superconducting material is decreased below its coherence length, one gets a decrease in the critical temperature and critical magnetic field. To get tunneling of charge carriers in Josephson junctions, their 'thickness' must be comparable to or below the coherence length.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/550244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Why is the Force of Gravitational Attraction between two “Extended” bodies proportional to the product of their masses? Newton’s Law of gravitation states that the force of attraction between two point masses is proportional to the product of the masses and inversely proportional to the square of the distance between them. I know that the force of attraction between two spheres turns out to be of the same mathematical form as a consequence of Newton’s law. But I am not able to prove how the force between any two rigid masses is only proportional to the product of their masses (as my teacher says) and the rest depends upon the spatial distribution of the mass. So $F$ is ONLY proportional to $Mmf(r)$ where $f(r)$ may be some function based on the specifics of the situation.
It is not true in general that the gravitational force of attraction between extended bodies is proportional to their masses. It happens that we usually deal with gravitational attraction between celestial bodies, and that celestial bodies above a certain size are almost invariably close to spherical (in consequence of the self gravity of the body). In the particular case of spherical bodies, the result is true as a consequence of Newton's shell theorem. In the general case, simply note that the inverse square law of gravity is basically the same (up to the sign of charge) as the Coulomb law of electrostatics, and apply the argument of any number of text book examples, such as the electrostatic attraction/repulsion for a charge uniformly distributed on a long rod, or a large plate. Clearly the force does depend on the distribution of charge/mass. OTOH, with regard to gravity, because gravity is such a weak force, most of the practical examples with rigid bodies in celestial mechanics do involve spherical bodies. One important exception is to treat the gravitational field of a spiral galaxy (it is not rigid, but its mass distribution can be treated as constant). This is not the same as the gravitational field of a central mass. I have shown how it can be treated in The effects of turbulence generated in Big Bang nucleosynthesis
{ "language": "en", "url": "https://physics.stackexchange.com/questions/550387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 9, "answer_id": 6 }
Actual meaning of refraction of light The definition of refraction which I found on Wikipedia is: In physics, refraction is the change in direction of a wave passing from one medium to another or from a gradual change in the medium. But in the case below, there is no change in the direction of the light. So, is this also refraction?
Refraction describes the change of direction of a light beam in geometrical optics. Since your light beam does not change its propagation direction, there is no refraction. I guess you are somehow mixing up the phenomenon and its explanation: the change of angle is the phenomenon. Its explanation is that the speed of light changes according to $c_0/n$. We are not allowed to reverse this "logic".
{ "language": "en", "url": "https://physics.stackexchange.com/questions/550515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Can we use quantities other than temperature to describe thermal equilibrium? From the 0th law, thermal equilibrium is when there is no heat transfer between two objects. So I want to ask: is temperature the only "potential"-esque quantity which should be equalized for heat flow to stop? If temperature is the only one, then why is it the only one? Could we prove this?
In general, thermal equilibrium means maximizing the entropy. The reason we use temperature is that very often, two systems can do this by exchanging energy. Under an exchange of energy $dE$, $$dS_{\text{tot}} = \frac{dS_1}{dE_1} \, dE + \frac{dS_2}{dE_2} \, (-dE) $$ so the maximum entropy is achieved when this is zero, and the systems have the same $$\frac{dS}{dE} = \frac{1}{T}$$ where this is really a definition of $T$. In general, you can exchange other things too. For example, if a container is separated in two by a movable piston, then the total volume of the two pieces is conserved, and we can maximize entropy by exchanging volume. Then in thermal equilibrium, they have the same $$\frac{\partial S}{\partial E} \bigg|_V = \frac{1}{T}, \quad \frac{\partial S}{\partial V} \bigg|_{E} = \frac{p}{T}$$ where the second equation serves as the thermodynamic definition of pressure. If the total number of some kind of particle is conserved, and the systems can exchange particles, we equalize $$\frac{\partial S}{\partial E} \bigg|_{V, N} = \frac{1}{T}, \quad \frac{\partial S}{\partial V} \bigg|_{E, N} = \frac{p}{T}, \quad \frac{\partial S}{\partial N} \bigg|_{V, E} = - \frac{\mu}{T}$$ where the third equation defines the chemical potential. If there were $n$ separate types of such particles, we'd have $n$ separate chemical potentials that would be set equal. There are plenty of more exotic options too. In general, there is a potential for every conserved quantity which is conserved, can be exchanged between the systems, and affects the entropy in the thermodynamic limit. (On the other hand, in an introductory course it's reasonable to focus on systems with only one or two, to avoid too much complication with partial derivatives.)
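A tiny numerical illustration of the energy-exchange case above, with toy entropies $S_i(E_i)=C_i\ln E_i$ (the constants and the total energy are arbitrary assumptions): scanning over how a fixed total energy is split, the maximum of $S_1+S_2$ lands exactly where $\partial S_1/\partial E_1 = \partial S_2/\partial E_2$, i.e. where the two temperatures match.

```python
# Toy model: two subsystems with S_i(E_i) = C_i * ln(E_i) sharing a fixed total energy.
# C1, C2 and E_total are arbitrary illustrative values.
import numpy as np

C1, C2, E_total = 3.0, 7.0, 10.0
E1 = np.linspace(0.01, E_total - 0.01, 100001)
S_total = C1*np.log(E1) + C2*np.log(E_total - E1)

i = np.argmax(S_total)
E1_eq = E1[i]
beta1 = C1 / E1_eq                     # dS1/dE1 = 1/T1
beta2 = C2 / (E_total - E1_eq)         # dS2/dE2 = 1/T2

print(f"entropy is maximised at E1 = {E1_eq:.4f} (analytic: {E_total*C1/(C1+C2):.4f})")
print(f"1/T1 = {beta1:.4f},  1/T2 = {beta2:.4f}  -> equal at the maximum")
```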
{ "language": "en", "url": "https://physics.stackexchange.com/questions/550672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Does bending your arm in space require any energy? Since you are weightless in space, your arm has no weight, right? Does this mean that bending it in space requires no energy? Why or why not?
Well, yes. Movement of your body parts (hands, legs, eyelids, etc.) occurs due to the contraction of muscle fibers. This process requires energy (from cleavage of the ATP molecule to form ADP). This is the only way an astronaut can move his arm. Transforming internal (chemical) energy into mechanical energy requires the expenditure of ATP. So, the answer is yes. But you can make the amount smaller, I guess.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/550771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 1 }
Doppler effect of light when it's windy I think I understand the classical Doppler effect in sound, where the equation is non-symmetric in whether the source or the observer is moving, because the speed of the medium where the sound wave propagates is different according to each of the observers. I think I also understand why the Doppler effect is symmetric with light, since the speed of "the medium" where light propagates is the same for both observers, meaning we need special relativity to explain the Doppler effect of EM waves in a vacuum. But I am struggling to make an equation to describe the Doppler effect of light in an actual, realistic moving medium. What is the frequency shift of light between the source and the observer if wind is blowing at 1/3 of $c_0$, flowing towards the observer? I have to somehow take into account the slowdown of light and the length contraction of space, as well as the fact that for the two observers the light is now travelling at different speeds. The source here is shining its laser beam into a length-contracted medium. It gets even stranger if you change the wind to water and assume the water is moving faster than the speed of light in water. On a nano level, the slowdown of light is caused by the delay between absorption and re-emission of photons in the medium. If the wind is blowing, it is moving those tiny photon-emitting molecules in space, thus causing a classical Doppler shift as well.
I completely agree with Dale, but since the OP talked about both air and water, I decided to generalize the above answer further. The speed of light in any medium is given by $$v = \frac{c}{n}$$ where c is the speed of light in vacuum (the OP denoted this by $c_0$), and $n$ is the absolute refractive index of the medium. Doing a bit of algebraic 'juggling' we get $$w = \frac{u + v}{1 + uv/c^2} = \frac{u + c/n}{1 + \frac{u * c/n}{c^2}}$$$$ = \frac{u + c/n}{1 + \frac{u}{nc}}$$ I know this looks a little messier, but you can just plug in the refractive index to get the final velocity in whichever medium you want.
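Plugging in some numbers as a rough sketch (the refractive indices are typical textbook values, and the medium speed is the question's $u=c_0/3$): the lab-frame light speed is the relativistic combination above rather than the naive $c/n + u$: noticeably different in water, barely different in air, and never above $c$.

```python
# Sketch: lab-frame light speed in a medium that is itself moving at u = c/3.
# Refractive indices are typical assumed values.
c = 299792458.0
u = c / 3.0

def light_speed_in_moving_medium(n, u):
    """Relativistic addition of the medium's speed u and the in-medium speed c/n."""
    v = c / n
    return (u + v) / (1.0 + u*v/c**2)

for name, n in [("vacuum", 1.0), ("air", 1.0003), ("water", 1.333)]:
    w = light_speed_in_moving_medium(n, u)
    naive = c/n + u
    print(f"{name:7s}: w = {w/c:.4f} c   (naive c/n + u would be {naive/c:.4f} c)")
```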
{ "language": "en", "url": "https://physics.stackexchange.com/questions/550828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Should the thermos flask better be half full or half empty? Every evening I am preparing hot water for my two-year-old son, who wakes up in the night to get his milk. We use a rather poorly insulated can for this. It is a typical metal cylinder-shaped can holding half a liter. If I put boiling-hot water into it, I know that about 5 hours later it will already be at room temperature, but it does the job, as my son typically wakes up two or three hours after I go to bed, and so he gets his milk at the right temperature. As I need only about 200 ml then to mix up his milk, I was asking myself if it is better to only fill in that amount of hot water or to fill up the whole can. I guess losing temperature has much to do with the amount of water but also with its surface touching the (colder) room air outside. With no idea anymore of what my old physics teacher told me twenty-five years ago, I hope you could share some wisdom for my little story here. Thanks in advance ;)
It's a metal can so the heat from the water will spread across the whole surface and be lost at approximately the same rate however much water you use. Now, since a larger volume of water will hold more heat to start with, a full can will keep its temperature better, as the temperature loss will be shared across a greater mass of water.
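A back-of-the-envelope lumped-capacitance sketch of this (the heat-loss coefficient hA is a guess, tuned by hand so that a full can is close to room temperature after about 5 hours as described in the question; everything else is an assumption too): with the same can losing the same number of watts per degree, the smaller water mass cools much faster, so filling the can completely is the better strategy for the 2-3 hour mark.

```python
# Newton's-law-of-cooling sketch: full (0.5 L) vs. partial (0.2 L) fill of the same can.
# hA is tuned by hand so the full can is near room temperature after ~5 h; it is a guess.
import math

T_room, T_hot = 22.0, 95.0      # deg C
c_water = 4186.0                # J/(kg K)
hA = 0.40                       # W/K, effective loss coefficient of the can (assumed)

def temp_after(hours, litres):
    m = litres * 1.0            # kg of water (1 L ~ 1 kg)
    tau = m * c_water / hA      # cooling time constant, s
    return T_room + (T_hot - T_room) * math.exp(-hours*3600.0 / tau)

for litres in (0.5, 0.2):
    print(f"{litres:.1f} L fill: " + ", ".join(
        f"{h} h -> {temp_after(h, litres):.0f} C" for h in (2, 3, 5)))
```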
{ "language": "en", "url": "https://physics.stackexchange.com/questions/551001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Does the resistance in the secondary circuit of a potentiometer circuit affect the balance length? I just learnt how potentiometer circuits work, and I was taught that the resistance in the secondary circuit does not change the balance length, as no current flows through the secondary circuit and thus the only potential drop is through the EMF of the unknown battery in the secondary circuit. However my professor mentioned that if the resistance of bulb X was decreased in this particular arrangement, the balance length would be shorter! The reason he gave was "This is because with a lower resistance bulb used for X, the current flowing in the lower circuit increases, the voltage drop across the internal resistor increases, and hence the terminal potential difference across the cell in the lower circuit decreases." However I am struggling to understand why the concept of no current flowing through the circuit does not apply here; I still feel that decreasing the resistance of bulb X would not affect the balance length for the reason stated above. Thank you for clarifying this conceptual error of mine in potentiometer circuits!
The bulbs X Y and Z are not part of a normal potentiometer. Normally the battery E is balanced against AC and CB, but now AC is replaced by the combination of AC, X, Y, and Z. The internal resistance of the battery and meter don't matter, but these extra ones certainly do because they are in parallel with the battery instead of in series, and changing one of them will affect the balance point.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/551128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why does light behave like a wave? When discussing a single or double slit experiment, where light is shined through a very small slit, it is often compared to a water wave going through a similar, if larger, slit. It's my understanding that when a ripple hits a wall with a hole in it the reason the ripple "bends" and spreads out is because of internal attraction between the water molecules, which are polar. So the molecules on the far side of the slit with energy will pull on the ones without and create a diffraction pattern; and I believe that a similar argument could be made for sound waves, that the molecules the wave travels through are at least slightly polar, or at least they have mass and momentum, so they will push/pull each other and create a diffraction pattern . But as far as I know light exhibits none of these properties, So what property of light allows it to diffract? Shouldn't light which passes through the slit be completely unaffected by the light which hits the material? Clearly light sometimes behaves like a physical wave; but I was wondering if this physical behavior can be explained with some intrinsic property of light. Similar to how a wave travelling through a physical medium can be explained with different attractive forces and momentum.
For the water waves the restoring force is gravity, and there is a circular symmetry for any bump. For sound, it is a pressure wave, and there is a spherical symmetry for a region with higher or lower pressure and the surroundings. In both cases, it would be strange if they followed a straight line after the slit, without spreading. The behaviour of light depends on the size of the slit compared to its wavelength.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/551244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Is it possible that a series of Feynman diagrams converges? A bit of maybe unnecessary context: I'm reading "Lecture notes on Diagrammatic Monte Carlo for the Frohlich polaron". It says: It is usually unknown whether a series converges or not. The series is guaranteed to diverge at a phase transition, but it may happen sooner. In fact, most series in physics are asymptotic, which can be established rigorously in a number of cases. Question: I take this as an indication that a series of Feynman diagrams may converge. However, I can't really make sense of it. To me it seems that no matter the system considered, each diagram will carry a power of a small parameter, $u^N$. This parameter suppresses the importance of each diagram exponentially. However, for every(?) diagrammatic series the number of diagrams increases factorially. It now seems to me that for any finite $u$ the series will diverge, because the factorial number of diagrams always "beats" the exponential suppression. I am not really sure how to understand this, but suspect I might have a wrong understanding of what is meant by convergence in this case. In addition, I am aware of Dyson's argument that when the series is not analytic for a coupling constant equal to zero, the series will diverge. Hence, this question is only relevant when Dyson's argument does not apply.
The discussion in question deals with resummation of the diagrammatic series for a partition function. If the Hamiltonian and the phase space are properly defined, the partition function is finite. The non-convergence problem here comes from expanding a function at a point of non-analyticity (e.g., one cannot expand $\log x$ or $1/x$ around $x=0$), and we do not know in advance whether it is analytic or not, as a function of the small parameter. We certainly know that it is non-analytic in the vicinity of a phase transition.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/551375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Amount of electrons in a material? Is there a way to calculate the amount of electrons in a plate of a certain material and certain dimensions? What I want to know is how many electrons are available to remove from a plate when light of appropriate wavelength hits the plate(photoelectric effect).
Yes. In the free electron model (of a metal), it is possible to define an electron density in the conduction band. See the table in this link for example. But to a first approximation you can take the density of atoms in the material (mass density divided by the molar mass, times Avogadro's number) times the valency of the metal in question as the electron density.
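A quick numerical sketch of that first approximation for a concrete case (copper handbook values; the plate dimensions are invented for illustration): the number density of atoms is $\rho N_A/M$, multiplied by the valency, and multiplied by the plate's volume for a total count.

```python
# Rough count of conduction electrons in a metal plate (free-electron picture).
# Copper handbook values; the plate dimensions are made-up illustrative numbers.
N_A = 6.022e23          # 1/mol
rho = 8.96              # g/cm^3, copper
M = 63.55               # g/mol, copper
valency = 1             # ~1 conduction electron per Cu atom

n = valency * rho * N_A / M                 # electrons per cm^3
plate_cm3 = 10.0 * 10.0 * 0.1               # a 10 cm x 10 cm x 1 mm plate
print(f"conduction-electron density ~ {n:.2e} cm^-3")
print(f"electrons in the plate      ~ {n*plate_cm3:.2e}")
```

Keep in mind that for the photoelectric effect only electrons within roughly the light's absorption depth of the surface (tens of nanometres in a metal) have a realistic chance of being emitted, so the practically available number is far smaller than this bulk count.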
{ "language": "en", "url": "https://physics.stackexchange.com/questions/551524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Optimize crossbow I'm currently building a crossbow and was wondering how I might improve its performance. It was suggested that I fine-tune the rubber band more and maybe change the projectile to a zinc alloy one instead of the plastic ones I use. I do understand this is sort of an engineering question, but I think it wouldn't hurt to hear feedback from some physicists, so I would appreciate any insight into this.
"Performance" is a pretty broad term. For example, it can relate to how fast the bolt (the arrow) is launched, or the repeatability of the bolt's trajectory. Plastic bolts are not a very good idea: most plastic can deform permanently, which will lead to a wildly variable trajectory. You might do well to buy some fiberglass or graphite fiber fishing pole blanks and cut the ends off to make your bolts. Light weight is good. The length of the bolt is important: if it is too long, it will flex while being launched, which can result in an unpredictable trajectory. Using rubber bands you've made more of a slingshot than a crossbow. Take a look at the design of a recurved bow or a Mongolian bow if you want a really powerful crossbow.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/551643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A uniformly polarized sphere Say there is a polarized sphere with polarization density $\vec{P} = \alpha \hat{r}$. How can I tell if the electric field outside of the sphere will also be radial? I see in many places that it is taken as obvious, but why is it? *Edit: rephrase
It's because the whole system (including the polarization density) has spherical symmetry. Think of it this way: if I rotate the sphere by an arbitrary angle around an axis passing through its origin, the sphere and the associated polarization density $\mathbf{P}(\mathbf{x}) = \alpha \hat{\mathbf{r}}$ are both going to coincide with the non-rotated case. So the same has to be true for the electric field distribution; i.e. not only is the electric field outside the sphere radial, but so is the field inside.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/551770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why doesn't Kirchhoff's Law work when a battery is shorted with an ideal wire? Kirchhoff's law states that the sum of voltages around any closed loop sum to zero. The law is true as the electric field is conservative in circuits. Why can we not apply the law here? Why doesn't the law hold here despite the fact that the electric field is conservative and the voltages should add up to $0$?
There are a number of points here. First, if you are saying that there is no resistance in the circuit and nothing else is present, then the situation is unphysical and as such you cannot apply Kirchhoff's laws. However, as drawn the circuit is a loop and therefore has a self-inductance $L$. Once inductance is considered there is a problem, because there is a non-conservative electric field generated by the inductor if the current changes, so some would say that Kirchhoff's laws cannot be used. In the end, and assuming that there is no resistance in the circuit, by whatever route you take you end up with an equation of the form $V= L\dfrac {dI}{dt}$, where $\dfrac {dI}{dt}$ is the rate of change of the current in the circuit. So suppose that you have a switch in the circuit and close it at time $t=0$, so the initial current is zero. Integration of the equation yields $I=\dfrac VL \,t$, with the current increasing linearly with time forever, again not a very realistic situation.
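A small numerical sketch of why the ideal case is pathological (component values below are arbitrary assumptions): with any finite series resistance $R$ the current follows $I(t) = \frac{V}{R}\left(1 - e^{-Rt/L}\right)$ and saturates at $V/R$; as $R \to 0$ both the saturation current and the time constant blow up, leaving exactly the forever-growing linear ramp $I = Vt/L$ described above.

```python
# Current growth after closing the switch, for a battery V, loop inductance L and a
# series resistance R (values are illustrative assumptions; R -> 0 is the ideal-wire limit).
V, L = 1.5, 1e-6            # volts, henries
for R in (1.0, 0.1, 0.001):
    tau = L / R
    print(f"R = {R:6.3f} ohm: I_final = {V/R:8.1f} A, tau = {tau*1e6:8.2f} us")

t = 2e-6                    # compare with the ideal ramp at t = 2 microseconds
print(f"ideal-wire ramp at t = {t*1e6:.0f} us: I = V t / L = {V*t/L:.1f} A")
```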
{ "language": "en", "url": "https://physics.stackexchange.com/questions/551876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 12, "answer_id": 0 }
What does CERN do with its electrons? So to get a proton beam for the LHC, CERN prob has to make a plasma and siphon off the moving protons with a magnet. Are the electrons stored somewhere? How? I don’t mean to sound stupid but when they turn off the LHC, all those protons are going to be looking for their electrons. And that’s going to make a really big spark.
The usual thing for a shutdown is to 1) stop injecting fresh particles into the beam tube, and 2) deflect any remaining particles in the main tube and any storage rings into a beam dump which is a very large chunk of metal, a very very large chunk of concrete, or a very very very large pile of earth. Take care not to be standing next to the beam dump- the radiation it produces while stopping the beam will kill you. If your beam is working with electrons, you make them by stripping them off a hot piece of metal or ionizing you some hydrogen. In this case you steer the unwanted protons out of the resulting beam and run them into a dump. There they will find themselves some loose electrons lying about and get happy again.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/551992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35", "answer_count": 5, "answer_id": 2 }
Angular momentum of the earth We know the tidal waves are decreasing the spin rate of the earth, which causes the days to get longer. So as the angular momentum of the earth decreases, its rotational kinetic energy also decreases. Since energy is always conserved, the translational kinetic energy of the earth must increase now, right? Then that would cause the number of days in a year to decrease as well, right?
Then that would cause number of days in a year to decrease as we right? Maybe you should read this article as a lot more goes into the kinematics of the earth around the sun, Earth rotates faster than the moon orbits it, so the watery tidal bulge travels ahead of the moon's relative position. This displaced mass gravitationally tugs the moon forward, imparting energy and giving the satellite an orbital boost, whereas friction along the seafloor curbs Earth's rotation. ..... Hints of inconsistent Earthly timekeeping come through natural calendars preserved in fossils. Corals, for example, go through daily and seasonal growing cycles that form bands akin to growth rings in trees; counting them shows how many days passed in a year. In the early Carboniferous period some 350 million years ago an Earth year was around 385 days, ancient corals indicate, meaning not that it took longer for the planet to revolve around the sun, but that a day–night cycle was less than 23 hours long Etc.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/552355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
What happens to an inductor if the stored energy does not find a path to discharge? Suppose an inductor is connected to a source and then the source is disconnected. The inductor will have energy stored in the form of magnetic field. But there is no way/path to ground to discharge this energy? What will happen to the stored energy, current and voltage of the inductor in this case?
If the coil is in a perfect vacuum, then the induced voltage may become so high that "cold" electron emission from the coil's metallic ends will create an arc for discharge.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/552611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 10, "answer_id": 8 }
Origins of this interesting optical phenomenon? Sunlight reflecting off my glasses seem to disperse into these distinct red and blue bands. The glasses are acting as some sort of a prism to split the light. The glasses do have some reflective coating (if that helps). Any thoughts on what might be causing these?
An anti-reflective coating would explain it. The coating is a thin-film interference filter that is designed to reduce reflections, and its performance changes with wavelength and angle of incidence of the light. So instead of the glasses strongly reflecting white light, the reflection is reduced. But it is not evenly reduced over the whole visible spectrum, resulting in some colors.
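For a single-layer coating (a simplified picture, not necessarily the exact multilayer stack used on these particular glasses), the reflections from the air-coating and coating-glass interfaces cancel when
$$2 n_c d = \frac{\lambda}{2}, \qquad \text{i.e.}\quad d = \frac{\lambda}{4 n_c},$$
where $n_c$ is the coating's refractive index and $d$ its thickness. The cancellation is complete only near one design wavelength (usually in the green, around $550\ \mathrm{nm}$), so the residual reflection is strongest at the red and blue ends of the visible spectrum - which is why anti-reflection coatings typically look faintly purple and why the reflected sunlight shows red and blue bands.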
{ "language": "en", "url": "https://physics.stackexchange.com/questions/552775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why does Flow always occur from Higher Potential to Lower Potential? This is a sort of a generalized question and not just referring to the flow of current. This includes fluids and many other such entities. But why does this flow occur. For example if I consider current, then the definition of potential at any point is the work done by external agent in bringing a unit positive charge from infinity to that point. How can we deduce from this definition that the current will flow from higher to lower potential. In fluids, the fluid flows from higher point to lower point. Why so (referring, again to potential)? Please avoid any analogies in answering the question.
To answer in terms of electric circuits, we know that electric field $\bf E$ is related to the electrical potential $V$ by $${\bf E}=-{\bf \nabla} V$$ That means that a positively charged particle in a region with varying potential will experience a force pointing towards regions of lower potential (and a negatively charged particle will experience a force towards higher potentials). In either case we'd describe the result as a current from higher potential to lower potential. But, it's not correct to say current always flows from high potential to low potential. Every circuit must include some current flowing from high potential to low potential, and some current flowing from low potential to high potential, in order to form a complete circuit. The circuit elements through which current flows from high to low potential consume electrical energy, converting it to some other form (or storing it temporarily). And the circuit elements through which current flows from low to high potential deliver electrical energy to the rest of the circuit, either converting it from some other form (as in a generator or battery) or releasing energy previously stored (as in a capacitor or inductor discharging). In other systems, there are analogous processes of flow in both directions. For example, water only flows downhill (from higher to lower gravitational potential) because it previously was evaporated by solar energy and was transported to the higher potential region as water vapor and rain.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/553108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is it necessary that a capacitor stores energy but not charge? Is it necessary that a capacitor stores charge? The definition of capacitor given in books is that it store electric energy. So is it possible that the capacitor does not store charge but stores energy only?
If you'll take some time to search this site for capacitor related questions, you'll probably find that I and others have often pointed out that capacitors store energy and not electric charge. A charged capacitor has stored energy due to the work required to separate charge, i.e., the plates of the capacitor are individually charged but in the opposite sense ($+Q$ on one plate, $-Q$ on the other). Yes, you'll often read phrases like "A capacitor stores electric charge". This is just plain wrong. However, you'll also read phrases like "$Q$ is the charge on the capacitor". Literally, this is wrong. However, as long you understand that $Q$ is the charge that has flowed from one plate to the other, you'll stay out of trouble. Bottom line, a charged capacitor is electrically neutral (in 'normal' operation). To say that a "capacitor is charged" is to use charged in the same sense as when we say that a "battery is charged". We mean that there is energy stored. Given the good natured push-back in the comments, I thought I would do a quick Google search. So, for what it's worth...
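A worked line behind the "work required to separate charge" statement: transferring charge from one plate to the other against the instantaneous potential difference $v = q/C$ costs
$$W=\int_0^Q \frac{q}{C}\,dq = \frac{Q^2}{2C} = \tfrac12 C V^2,$$
and this is the stored energy, even though the capacitor as a whole remains electrically neutral throughout.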
{ "language": "en", "url": "https://physics.stackexchange.com/questions/553260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Why is surface tension measured in units of milliNewtons per meter? Rather than square meter(s)? Why is liquid surface tension written in units of mN/m, or milliNewtons per meter? The related concept of surface energy for solids uses units of milliJoules per square meter.
Notice, the surface tension of a liquid is the force acting per unit length of an imaginary line drawn on the free surface of the liquid (its unit is $N/m$). Furthermore, the surface tension force is small, hence it is usually quoted in the smaller unit $mN/m$. For example, the surface tension of water is $72mN/m$. The unit of surface energy is $mJ/m^2$, which is the work done to increase the free surface area by $1$ unit. It is measured in $mJ/m^2$, which is equivalent to $mN/m$.
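Spelling out the unit algebra behind that equivalence:
$$1\ \mathrm{mJ/m^2} = 1\ \frac{\mathrm{mN\cdot m}}{\mathrm{m^2}} = 1\ \mathrm{mN/m},$$
so surface energy per unit area and surface tension per unit length carry exactly the same units.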
{ "language": "en", "url": "https://physics.stackexchange.com/questions/553657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Determining the state of a system after a measurement I'm confused about the state of a system after a measurement. Say we have a particle $v$ in the state: $ |\psi\rangle= \sqrt{1/4} \ |0\rangle + \sqrt{3/4} \ |1\rangle $. From my understanding, if one were to measure the state of $v$, one would get the result $|0\rangle$ with probability $|\sqrt{1/4}|^2=1/4$, and similarly, $|1\rangle$ with probability $3/4$. However, I've also learned that a measurement is always done by an observable (a unitary operator), e.g. $Z=|0\rangle \langle 0|-|1\rangle \langle 1|$, and that the outcome of the measurement is an eigenvalue of this operator, and that the state we get after the measurement is always dependent on the observable we use, and similarly for the probability of getting that state. Now, by inspection, I noticed that when I measure $Z$, I do get the state $|0\rangle$ with probability $1/4$, and $|1\rangle$ with probability $3/4$, as expected. But I don't get these results when I measure the Pauli operator $X$, for example. Does that mean that the claim in my second paragraph always assumes a measurement of $Z$?
Yes, you have written the state $| \psi \rangle$ in the eigenbasis of $Z$, $(|0 \rangle, |1 \rangle)$, which is why $Z$ is diagonal. Since $Z$ and $X$ do not commute with each other they cannot be simultaneously diagonalised in one basis. If the operators commute then they can be simultaneously diagonalised and will have the same eigenbasis. Measurement is always done in the eigenbasis of an observable, but it is neither a unitary nor a linear operation. The state after the measurement will be one of the eigenstates of the observable, but it is a random process, which is why you can only come up with the probability of being in a particular state. You have to transform your state $| \psi \rangle$ to some other basis by a unitary operator $U$ and then measure the result.
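As a short worked illustration (my addition, using the usual Pauli-$X$ eigenstates $|\pm\rangle = (|0\rangle \pm |1\rangle)/\sqrt{2}$): rewriting the state gives
$$|\psi\rangle = \frac{1+\sqrt{3}}{2\sqrt{2}}\,|+\rangle + \frac{1-\sqrt{3}}{2\sqrt{2}}\,|-\rangle,$$
so a measurement of $X$ yields $+1$ with probability $(2+\sqrt{3})/4 \approx 0.93$ and $-1$ with probability $(2-\sqrt{3})/4 \approx 0.07$ - different numbers from the $1/4$ and $3/4$ obtained when measuring $Z$, precisely because the given amplitudes refer to the $Z$ eigenbasis.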
{ "language": "en", "url": "https://physics.stackexchange.com/questions/553929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How does a water jet hitting a wall move parallel to the wall if momentum is conserved? Classical mechanics says that if I throw a ball with velocity perpendicular to the wall and it collides elastically with the wall with a velocity $v_0$, then it bounces back with the same velocity $v_0$. However, if I shoot a beam of water perpendicular to the wall, in most cases it will not deflect back perpendicular to the wall instead it gains velocity perpendicular to the initial velocity and continues to move on the surface. Isn't this a violation of conservation of momentum since for any small molecule inside the beam of water we had no momentum in the perpendicular direction to get started with?
Momentum conservation does not work like that. You can conserve momentum only when there is no external force acting in that direction. Actually, you don't have to take an example as complex as this to understand that momentum need not be conserved always. Imagine that you have a ball in your hand. You release it from rest, with obviously $0$ velocity in the vertical direction. If you conserve momentum, then the ball should never fall (since its initial momentum is zero), but it does. This is because momentum is not conserved when an external force comes into play. The simple reason as to why momentum is not conserved when a force acts can be obtained from the very definition of force, which is the rate of change of momentum. A change in momentum points to a force, and the presence of a force (net force, to be more precise) implies a change in momentum.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/554097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
In the equation: $a = dv/dt$ , is $dt$ the time taken to achieve that instantaneous acceleration? If you solve for $dt$ from $a = \frac{dv}{dt}$ , is it the time taken to achieve that instantaneous acceleration? $a$ : acceleration $v$ : velocity $t$ : time
No, it is not. Suppose a body is moving at a uniform velocity $v$; there is no restriction on how much time it remains at that same velocity, and after some time it can accelerate if there is a net force on it. Now, acceleration means a rate of change in velocity, and obviously it will take some time to increase (or decrease) its velocity. It is therefore given as $$a=\frac{\Delta v}{\Delta t}$$ This acceleration may not be uniform and in principle can change. Therefore, you would want to know what exactly the acceleration was at a particular time. This happens when you take the limit of $\Delta t$ tending to zero, so that the time duration over which the change of velocity happens is as short as possible, giving the most accurate result, which is the instantaneous acceleration. This differentially small time is denoted by $dt$. I hope this makes things clearer.
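Written compactly, the point above is
$$a = \lim_{\Delta t \to 0}\frac{\Delta v}{\Delta t} = \frac{dv}{dt},$$
so $dt$ is not "the time taken to achieve the acceleration"; it is an arbitrarily short interval over which the change in velocity is sampled.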
{ "language": "en", "url": "https://physics.stackexchange.com/questions/554170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
C, P and T transformations of $\phi$ that preserves symmetry I have a series of exercises regarding C, P and T symmetry but I am not really sure how to start with the problems. If anyone could help me with one of the problems, or show me a few example problems with full solutions, I would be very grateful. Then I can hopefully solve the remaining problems myself... As an example, we can consider this problem: Given the Lagrangian: $$L = \bar{\Psi}(i\gamma_\mu\partial^\mu - m)\Psi - \frac{1}{2}\partial_\mu\phi\partial^\mu\phi - \frac{1}{2}M^2\phi^2+ig\phi\bar{\Psi}\gamma_5\Psi $$ How should $\phi(x)$ transform under C, P and T such that these are all symmetries of the theory? Should I work directly on the Lagrangian, or should I consider the action? If I find one solution, how do I know it is the sole solution?
You must inspect how the last piece of the Lagrangian transforms; the rest of them are invariant. For example, let's do P. Dirac fields transform as: $$\psi \xrightarrow{\mathcal{P}} \gamma^0 \psi,$$ $$\overline{\psi} \xrightarrow{\mathcal{P}} \overline{\psi}\gamma^0.$$ So the quantity $\overline{\psi}\gamma_5\psi$ transforms as $$\overline{\psi}\gamma_5\psi \xrightarrow{\mathcal{P}} \overline{\psi}\gamma^0\gamma_5\gamma^0\psi=-\overline{\psi}\gamma_5\psi.$$ Then if you want the Lagrangian to preserve the symmetry you can impose $$\phi \xrightarrow{\mathcal{P}} -\phi.$$ So it's a pseudoscalar field. You can find more information on how to perform these discrete transformations in section 3.6 of Peskin and Schroeder.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/554304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Problem in Translational invariance In Shankar's QM (second edition, p. 282), some equations are given, as follows: $$T(\epsilon)|x\rangle = |x + \epsilon \rangle$$ where $T(\epsilon)$ is the translation operator. I understood the equation given above, but Shankar says the "X basis is not unique", so the general result should be given as below: $$T(\epsilon)|x\rangle = e^{i\epsilon g(x)/\hbar}|x+\epsilon\rangle \tag{11.2.10}$$ Here what is $g(x)$? Definitely $e^{i\epsilon g(x)/\hbar}$ is periodic in nature. So how could we relate this periodic function with the non-uniqueness of the basis? Edit: I know in the 7th chapter of Shankar it's given that the basis is not unique, but I don't know how the non-uniqueness of that basis is related to the exponential.
Note that $|x+\epsilon\rangle$ and $e^{i\epsilon g(x)/\hbar}|x+\epsilon\rangle$ represent the same state (both kets belong to the same ray). Here what is g(x) ? From Shankar (2nd edition), exercise 7.4.8 This exercise teaches us that the "X basis" is not unique, given a basis $|x\rangle$, we can get another $|\tilde{x}\rangle$, by multiplying by a phase factor which changes neither the norm nor the orthogonality. Earlier in the exercise, Shankar writes: $$|\tilde{x}\rangle = e^{ig(X)/\hbar}|x\rangle = e^{ig(x)/\hbar}|x\rangle$$ where $$g(x)=\int^xf(x')dx'$$ and then asks you to verify that, in the new X basis $$P\rightarrow -i\hbar\frac{d}{dx} + f(x)$$ Thus, specifying only that the translation operator $T(\epsilon)$ translates the state (ray) from a particle located at $x$ to the state of a particle located at $x + \epsilon$, leaves a degree of freedom since (as written above) $|x+\epsilon\rangle$ and $e^{i\epsilon g(x)/\hbar}|x+\epsilon\rangle$ represent the same state. It must be further specified that the translation takes $\langle P\rangle \rightarrow \langle P\rangle$ to "reduce $g$ to a harmless constant (which can be chosen to be zero)."
{ "language": "en", "url": "https://physics.stackexchange.com/questions/554510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How is it physically possible that the electric field of some charge distributions does not attenuate with the distance? Let's consider for instance an infinite plane sheet of charge: you know that its E-field is vertical and its Absolute value is $\sigma / 2 \epsilon _0$, which is not dependent on the observer position. How is this physically possible? An observer may put himself at an infinite distance from all charges and he will receive the same E-field. It seems strange.
An observer may put himself at an infinite distance from all charges and he will receive the same E-field. I'm compelled to address this misconception just in case it is at the root of your question. When we solve this problem for the electrostatic field, the result is independent of the distance $r$ from the plane. Your intuition seems to inform you that this isn't possible since an observer can get infinitely far away from all charges. But an observer can't get infinitely far away, an observer can get arbitrarily far away but $r$ must have a value - infinity isn't a number that the value of $r$ can take. Through a limit process, one can talk meaningfully about the electric field strength as $r$ goes to infinity but I think it's a conceptual error to think in terms of being an infinite distance away from all charges. My gut tells me that you're imagining that one can get far enough away from the plane that the 'size' of the plane shrinks to zero but that's not the case by stipulation. To say that the plane of charge is infinite is to say that the plane has no edge. Thus, no matter how large $r$ (the distance from the plane) becomes, there is no edge to be seen.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/554594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 4 }
Is my friend right about omitting $c^2$ in world famous tiny equation? I know $E = mc^2$ says that inertial mass of a system is equal to the total energy content of a system in its rest frame. My friend told me the $c^2$ can be omitted from this equation because that's just an 'artifact' when measuring inertia and energy in different units. Is he right?
In Natural Units, the speed of light in vacuum, i.e. $c$, is taken to be the fundamental speed of the universe. Under this system, all the fundamental physical constants are defined in such a way that their value is just 1 (e.g. $\hbar=k_B=1$). However, in the end one has to include the numerical values when switching from one system of units to another, say from natural to S.I. units. This is part of dimensional analysis. Hence, omitting $c^2$ in this sense is just taking $c$ to be 1 and using $E = m$ instead. You can see more here.
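As a quick illustration of how the factor is restored when going back to S.I. units (a dimensional-analysis sketch, not part of the original answer): energy carries dimensions $\mathrm{kg\,m^2\,s^{-2}}$ while mass carries $\mathrm{kg}$, so the missing $\mathrm{m^2\,s^{-2}}$ must be supplied by the square of the only fundamental speed in the relation,
$$E = m \;\;\xrightarrow{\text{restore units}}\;\; E = mc^2 .$$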
{ "language": "en", "url": "https://physics.stackexchange.com/questions/554935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 4 }
Two solutions for a 4-velocity component given 3 other components? The Setup Suppose I know, in some particular coordinate system, three components of the four-velocity vector $u^{\alpha}$ with $\alpha = \{0, 1, 2, 3\}$. For this question I'm going to assume the known components are the spatial components $u^{i}$ with $i = \{1,2,3\}$. I then use the constraint $$-\epsilon = g_{\mu \nu}u^\mu u^\nu,$$ where $\epsilon = c^2$ for timelike and $\epsilon = 0$ for null particles, to find the value of $u^0$. I begin by expanding the above Einstein sum and rearranging, $$0 = g_{00}(u^0)^2 + 2g_{i0}u^iu^0 + g_{ij}u^iu^j + \epsilon.$$ Noticing that this is quadratic in $u^0$, I can solve using the quadratic formula: $$u^0 = \frac{-2g_{i0}u^i \pm \sqrt{(2g_{i0}u^i)^2 - 4g_{00}(g_{ij}u^iu^j + \epsilon)}}{2g_{00}}$$ The Problem From the above equation, it seems there are two solutions for $u^0$. Specifically, in the case that $g_{i0} \neq 0$ (such as the Kerr metric in Boyer-Lindquist coordinates), the equation for $u^0$ implies two solutions with differing magnitude. I had previously interpreted the two solutions of $u^0$ in the Minkowski metric (where $g_{i0} = 0$ and so $u^0_{(1)} = -u^0_{(2)}$) as being the forward-in-time and backwards-in-time descriptions of the same trajectory. This makes sense as their magnitudes are the same and their signs are different, and essentially is a statement of time-reversal symmetry. In the $g_{i0} \neq 0$ case, not only can the magnitudes of the two solutions be different, it's conceivable from looking at the equation that there could exist metrics where both solutions to $u^0$ are positive, both solutions are negative, or even a situation where there are no real solutions, making my statement about time-reversal symmetry clearly incorrect. Also, we can always move to a local frame $S{'}$ where our metric is Minkowski. in these coordinates, the $u^0{'}$ solutions are equal and opposite - so there's definitely something weird going on in coordinates where $g_{i0} \neq 0$. My questions: * *Am I wrong about the time-reversal symmetry being the reason for $\pm u^0$ when $g_{i0}= 0$? *Can there indeed be a metric for which both solutions of $u^0$ have the same sign? *Can there indeed be a metric for which there are no real solutions of $u^0$? Feel free to give answers in math-heavy language if you need to (manifolds, chart mappings, etc.). EDIT: One more question I forgot to mention. *what is the interpretation of the two different values of $||u^0||$ in the $g_{i0}\neq 0$ case?
* *That's correct. *That's incorrect. In your example above you also get different signs for the two solutions, which is obvious from the $-2g_{i0} u^i \pm \sqrt{(2g_{i0} u^i)^2-{...}}$ so you have to choose the positive solution. *Only if you choose unphysical values for the other components, for example if the local velocity is higher than the speed of light, then you have to switch the sign of ${\rm d}s$ in order to describe hypothetical tachyons.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/555038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is really meant by the area of black hole? The area of a black hole is an important parameter in the thermodynamic description of a black hole. In particular, reading popular literature, everyone knows that the entropy of a black hole is proportional to its area, as discovered by Stephen Hawking. Can someone explain with a diagram what is really the area of a black hole? I know what the event horizon and the Schwarzschild radius are, but I have real difficulty visualizing the area of a black hole.
The area of the event horizon is simply $4\pi r_s{}^2$ where $r_s$ is the Schwarzschild radius. However this is because that's how the radial coordinate $r$ is defined. $r$ is not the distance to the centre of the black hole (in fact the radial distance to the singularity is undefined). For any point $r$ is defined as the circumference of the circle passing through that point, and centred on the singularity, divided by $2\pi$. And that automatically makes the area of the sphere passing through the point $4\pi r^2$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/555428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Does the number of accessible microstates decrease overall when heat is transferred? We have two systems of ideal gas with different temperatures. $N$ & $V$ are being kept constant. The number of accessible microstates of each gas is thereby only influenced by a change in $E$. The number of accessible microstates is: $$\Omega = \frac{(N-1+U)!}{(N-1)!\,U!}. $$ In regards to $E$ the function is growing at an increasing pace. Since all the energy is kinetic energy this means that the number of accessible microstates further only depends on the temperature. Now we connect the two systems for only an extremely short amount of time, so that they keep their respective volumes and number of particles. Just a long enough timeframe that a small amount of $Q$ can be transferred from the warm system to the cold system. This decreases the number of accessible MS in the warm system and increases the number of accessible MS in the cold system. Since $\Omega$ increases rapidly with $E$ this means that the change in the warm system is bigger than the change in the cold system. So if the decrease of MS in one system is bigger than the increase in the other the number of accessible MS overall is decreasing. How is that possible if we know the number of accessible MS should always increase as stated by the 2nd law of thermodynamics? kind regards
The formula is valid for discrete units of energy: The multiplicity $Ω$ for q units of energy among N equally probable states is given by the expression $$\Omega = \frac{(q+N-1)!}{q!\,(N-1)!}.$$ This is sometimes called the number of microstates for the system. Organic life exists because it exchanges energy and diminishes entropy by using the environment it finds itself in. It is only in closed systems that entropy always increases. Crystals appear because the total entropy is conserved or grows; it diminishes in the crystal and increases in the environment. The error is in the imaginary experiment: Now we connect the two systems for only an extremely short amount of time, so that they keep their respective volumes and number of particles. Just a long enough timeframe that a small amount of Q can be transferred from the warm system to the cold system. If they "keep their respective volumes and number of particles", how can a unit of energy be transferred?
{ "language": "en", "url": "https://physics.stackexchange.com/questions/555561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Does tangential acceleration change with radius? Do tangential velocity and tangential acceleration change with radius (change of radius on the same object)? For example consider a spinning disk. Does the equation $$a_t = \alpha R$$ (where $a_t$ is the tangential acceleration, $\alpha$ is the angular acceleration and $R$ is the radius of the disk) give me the tangential acceleration of the centre of mass or of a point on the edge of the disk? As you go inwards from a point on the edge of the disk , the radius decreases. So doesn’t that mean the tangential acceleration of the centre of mass is zero? I have the same doubt regarding tangential velocity. What is wrong with my reasoning? Does the centre of mass have the highest tangential velocity and acceleration or the lowest of all points on the disk?
Firstly, defining circular motion: circular motion is when a body moves in a circle or, as they say, keeps a fixed distance from a point (moving or stationary). Now, when we consider rotating rigid bodies, we usually take torques, angular momentum and moment of inertia about the point which is stationary with respect to the ground. As the disc spins freely and doesn't perform pure rolling, the following quantities will be taken about the centre of the disc. Okay, now you may ask why? The formula for angular velocity is the vector relation $$\omega = \frac{|\vec v_1 - \vec v_2|}{d_{12}},$$ where $d_{12}$ is the distance between points 1 and 2. As the velocity of the centre is zero, the centre is taken as the reference; if you wish to take any other point, use this vector equation. The same is valid for angular acceleration if you differentiate the equations with respect to time.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/555649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Why X-ray and radio waves can penetrate walls but light can not? Why can visible light, which lies in the middle between X-ray and radio waves in terms of frequency/energy, not penetrate walls?
X rays penetrate matter because their energy is much higher than that of any matter excitations. The electrons in matter are too slow and too heavy to react and compensate the field, as they do for optical frequencies. For radio waves the opposite applies. They reflect off matter, especially off metals, unless you apply very special coatings. By reflection and diffraction they can go around obstacles and pass through openings.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/555778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Velocity of undamped pendulum On this page, under the heading "Orbit Calculations": http://underactuated.mit.edu/pend.html or here. The author says, "This equation has a real solution when $\cos{\theta} > \cos{\theta_{\rm max}}$" and then they give a piecewise function for $\theta_{\rm max}$. I have no idea how these statement and function were derived from $\dot{\theta}(t) = \pm \sqrt{\frac{2}{I}\dots}$ Can someone show the exact steps to get to this derivation?
If the pendulum has enough energy to go all the way around, then any value for $\theta$ is possible between 0 (hanging down) and $\pi$ (standing straight up). For simplicity, take $\theta$ to be the absolute value of the angle between the pendulum and $-\hat{y}$ since the situation is invariant under $\theta\to -\theta$. If the pendulum does not have enough energy to go all the way around then it will only be able to reach a maximum $\theta_m<\pi$. Given an energy $E$ the pendulum can rise only as high as $$ E=mgh=-mg\ell\cos(\theta_m)\\ \Downarrow\\ \theta_m=\cos^{-1}\left(\frac{-E}{mg\ell}\right) $$ Seems the textbook has a missing negative sign. This is because at the maximum height, all the kinetic energy has been converted to potential energy. Note $h$ is measured from the anchor point, not the bob, as is often done. There is only a real solution when $\cos(\theta)\geq \cos(\theta_m)$, since otherwise the potential energy $-mg\ell\cos(\theta)$ would exceed the total available energy $-mg\ell\cos(\theta_m)$, the quantity under the radical ($E_0-U$) would be negative, and the solution would thus be imaginary.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/556033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Does the angular velocity of a spinning disk increase if it has a completely inelastic collision with an object with a greater tangential velocity? A roller of radius 10cm is spinning with an angular velocity of 15 rad/s. It has a completely inelastic collision with a hunk of clay, with mass m, moving at 3m/s at its very bottom edge. Does the angular velocity of the roller (now with stuck clay) increase, decrease or stay the same? (The picture should clarify) I think it increases because at the moment of the collision there is a torque on the roller, but the moment of inertia also increases, so I am not sure. Thanks in advance for any help!
By using conservation of angular momentum, the angular velocity increases: the clay applies a torque about the centre which spins the disk up, because the clay is moving faster than the rim at the point where it sticks.
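A minimal sketch of the angular-momentum balance behind this (my addition, assuming the roller spins about a fixed axle and that, as the picture suggests, the clay at the bottom edge moves in the same sense as the rim there): taking angular momentum about the axle,
$$I\omega + m v R = \left(I + mR^2\right)\omega' \quad\Rightarrow\quad \omega' = \frac{I\omega + m v R}{I + mR^2},$$
so $\omega' > \omega$ exactly when $v > \omega R$. Here $v = 3\ \mathrm{m/s}$ while $\omega R = 15\ \mathrm{rad/s}\times 0.1\ \mathrm{m} = 1.5\ \mathrm{m/s}$, so the angular velocity indeed increases.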
{ "language": "en", "url": "https://physics.stackexchange.com/questions/556263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Space Time Diagram - world line of a wave My understanding so far: * *A wave is a vector field defined on the space-time, i.e. mathematically a wave is just a mapping which, for every point in the space-time, maps it to a vector. *A world-line is a function which maps an event (or a particle) on the space-time. In case the event (or the particle) "exists" only for an instant, then the world-line will just be a point in the space-time diagram. A few questions now (basically I want to check if I understand the concepts correctly as I self-study these topics): Q1 - Are the above definitions correct (and generic enough)? Q2 - Based on the above (and if they are correct), there is nothing like a world-line of a wave. I'm getting quite confused here (maybe I'm unable to visualize), but it appears to me that only "particles" can have world-lines defined. Thanks
Normally an EM plane wave is taken as a sinusoidal vector field in spacetime. But it is not required to have this form to solve the wave equation. An electric field $E_y = e^{-u^2}$, where $u = k(x \pm ct+a)$, also solves the wave equation: $$\frac {\partial^2 E_y }{\partial t^2} = c^2k^2(4u^2 - 2)e^{-u^2}$$ $$\frac {\partial^2 E_y }{\partial x^2} = k^2(4u^2 - 2)e^{-u^2}$$ It is a world line, except that it has some thickness, because it fades quickly to zero when $u \neq 0$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/556384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What's the debate about Newton's bucket argument? I visited some other QA threads about this topic, and I don't understand why people think it's mysterious that the bucket knows about its rotation. If a non-rotating bucket is all there is in the universe, then, initially, all the parts of the bucket are at rest wrt to each other. But if we want to rotate that bucket with an angular velocity $\omega$, then we basically require the different parts of it to have relative acceleration wrt each other. Because if we divide the bottom of the bucket into many concentric rings, then each ring would've an acceleration $\omega^2 r$ towards the center, depending on the radius $r$ of ring. This means that the rings have relative acceleration wrt to each other. Laws of physics would take different forms for people standing on different rings. Hence, a rotating bucket is a collection of non-inertial frames having relative acceleration. But non-inertial frames are supposed to detect acceleration in Newtonian physics. So what am I missing?
Suppose that instead of talking about the bucket's angular velocity, you talked about its linear velocity. Then it would have indeed been the case that you can't speak of an absolute linear velocity in an empty universe. The paradox is why the same logic doesn't apply to angular velocity, since they're both "velocities". Of course, within the formulation of Newtonian mechanics, this isn't confusing. Newton's laws tell us unambiguously that there's no such thing as absolute linear velocity, but there is such thing as absolute angular velocity. Newton's bucket argument is really a metaphysical question, asking why it is the case that we have laws that seem to treat angular velocity and linear velocity differently.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/556759", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
What does the Problem 14 from Goldstein's book on classical mechanics chapter-7 (special relativity) really mean? I am having difficulty in understanding problem number 14 in Goldstein's Classical Mechanics, 3rd edition, chapter 7 on special relativity. Here is the problem --- A rocket of length $l_0$ in its rest system is moving with constant speed along the $z$ axis of an inertial system. An observer at the origin of this system observes the apparent length of the rocket at any time by noting the $z$ coordinates that can be seen for the head and tail of the rocket. How does this apparent length vary as the rocket moves from the extreme left of the observer to the extreme right? How do these results compare with measurements in the rest frame of the observer? (Note: observe, not measure). How does this differ from the usual length contraction? What is the meaning of the hint given by asking the reader to "observe" not "measure"? What is the difference here?
The difference between measurement and observation is crucial in relativity. When we observe the rocket, the finite speed of light affects our observation. In general, light from the head and the tail of the rocket will take a different amount of time to travel to the observer. When we measure the rocket, we compensate for time delays caused by the finite speed of light. So if we measure two events A & B to be simultaneous we will only observe A & B to be simultaneous if the distances to A & B are identical in our frame. As Alfred Centauri notes in the comments, it's not unusual for writers to use the term "observed" to refer to measured values, not the raw observed data. They assume that the reader knows that light travel time has to be compensated for. This unfortunate ambiguity confuses many people learning relativity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/556910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Does the potential of a charged ring diverge on the ring? I know that the density and potential (in spherical coordinates) of a charged ring are, respectively: $$ \rho(\textbf{r}) = \frac{\lambda}{a} \delta(r-a)\delta(\theta-\tfrac{\pi}{2}) $$ $$ \varphi(\textbf{r})= \frac{2\pi a \lambda}{r_>} \left[ 1+ \sum_{n=1}^\infty (-1)^n \frac{(2n-1)!!}{(2n)!!}\left(\frac{r_<}{r_>}\right)^{2n}P_{2n}(\cos\theta) \right] $$ Where $P_{2n}$ is the $2n$-th Legendre Polynomial, and $r_<=\min\{a,r\},r_>=\max\{a,r\}$. If I evaluate $\mathbf r$ in the ring ($r=a,\theta=\tfrac{\pi}{2}$): $$ \varphi(\mathbf r)\,\propto\, \left[ 1+ \sum_{n=1}^\infty (-1)^n \frac{(2n-1)!!}{(2n)!!} \right] \to\infty $$ So this is a problem (I suppose).
This is actually a fun question, I learnt something new about double factorials while trying to answer it! I don't see why that term diverges. Using the identities on Wikipedia for the "double factorial", we have that for even integers $k$, $$\int_0^{\pi/2} \sin^{k}(x)\text{d}x = \frac{(k-1)!!}{(k)!!}\frac{\pi}{2}.$$ We can use this to calculate the sum term you have explicitly. $$\sum_{n=1}^\infty (-1)^n \frac{(2n-1)!!}{(2n)!!} = \frac{2}{\pi}\sum_{n=1}^\infty (-1)^n \int_0^{\pi/2} \sin^{2n}(x)\text{d}x = \frac{2}{\pi}\int_0^{\pi/2} \text{d}x \sum_{n=1}^\infty (-1)^n \sin^{2n}(x).$$ Where in the last step I've interchanged the sum and the integral. This particular sum is quite easy to do, and I'll leave it as an exercise to show that $$\sum_{n=1}^\infty (-1)^n \sin^{2n}(x) = -\frac{\sin^2(x)}{1+\sin^2(x)}.$$ We can now perform the integral and show that $$-\frac{2}{\pi}\int_0^{\pi/2} \frac{\sin^2(x)}{1+\sin^2(x)} \text{d}x = \frac{-2 + \sqrt{2}}{2}.$$ Thus, $$\sum_{n=1}^\infty (-1)^n \frac{(2n-1)!!}{(2n)!!} = \frac{-2 + \sqrt{2}}{2} < \infty,$$ which should solve your problem.
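As a quick numerical sanity check of that closed form (my addition, not part of the original answer), the partial sums can be evaluated directly; the snippet below is just an illustrative sketch.

```python
from math import sqrt

# Numerically estimate  sum_{n>=1} (-1)^n (2n-1)!!/(2n)!!
# and compare with the claimed closed form (sqrt(2) - 2)/2.
ratio = 1.0        # holds (2n-1)!!/(2n)!! for the current n
s_prev, s_curr = 0.0, 0.0
for n in range(1, 100001):
    ratio *= (2 * n - 1) / (2 * n)
    s_prev, s_curr = s_curr, s_curr + (-1) ** n * ratio

# The series is alternating and converges slowly, so averaging two
# successive partial sums gives a much better estimate than either alone.
print((s_prev + s_curr) / 2)    # ~ -0.2928932
print((sqrt(2) - 2) / 2)        # -0.2928932188...
```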
{ "language": "en", "url": "https://physics.stackexchange.com/questions/557039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Question about Faddeev-Popov gauge-fixing in Schwartz textbook I am trying to understand equation (25.91) from Schwartz's Quantum Field Theory textbook. The goal is to gauge-fix the path integral for Quantum chromodynamics using the Faddeev-Popov trick. Briefly, the argument boils down to multiplying the integral by: $$1=C\sqrt{\det(\partial_{\mu}D^{\mu})^2}\int {\cal D}\pi~ e^{-i\int d^{4}x \frac{1}{2\zeta}(\partial_{\mu}D^{\mu}\pi-\partial_{\mu}A^{\mu})^2}$$ where $C$ is some numerical coefficient. Now, in the second line of (25.91) the author redefines $$A\rightarrow A+ D\pi ,$$ where $D$ is the gauge covariant derivative in the adjoint representation. He claims that this shift results in the dependence of the integrand on $\pi$ dropping out, leading to an extra factor $\int {\cal D}\pi$ which is not significant. I do not understand how the shift $A\rightarrow A+D\pi$ leads to the expression in (25.91). Shouldn't we also shift the $D$ in the factor $\partial D \pi$ living in the argument of the exponential?
OP has a point. Ref. 1 transforms the integration variables$^1$ $$ A^b_{\nu}\quad\longrightarrow\quad A^{\prime a}_{\mu}~=~A^a_{\mu} - \partial_{\mu}\pi^a - gf^{abc} A^b_{\mu}\pi^c~=~A^a_{\mu} - D_{\mu}^{ab}(A)\pi^b$$ upstairs in the exponential function of eq. (25.91) but forgets to also transform the factor $\frac{1}{f[A]}$ downstairs. The result (25.93) is certainly the well-known correct result for the Faddeev-Popov path integral in $R_{\xi}$-gauge, but the derivation$^2$ leading to eq. (25.93) is flawed. The Faddeev-Popov trick usually starts by considering an identity of the form "delta-function times determinant" rather than the identity $\frac{f[A]}{f[A]}=1$. A correct derivation is given in e.g. Ref. 2. References: * *M.D. Schwartz, QFT & the standard model, 2014; eq. (25.91). *M. Srednicki, QFT, 2007; chapter 71. A prepublication draft PDF file is available here. -- $^1$ Notice the minus sign. It should probably also be mentioned that the change of integration variables induces a Jacobian determinant $$ \det\frac{\delta A^{\prime a}_{\mu}}{\delta A^b_{\nu}}~=~\text{function of } \pi \text{ but not a function of }A,$$ which in principle has to be included in the path integral $\int\!{\cal D}\pi$. $^2$ The corresponding abelian derivation in section 14.5 of Ref. 1 is fine because $f$ then doesn't depend on $A$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/557211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Question about fluid flow through a funnel or cone I am an MD, and I am studying the flow of tears through the tear duct system of the eye. The newer view suggests that the first part of the system is a funnel or a cone. I can't seem to understand (without equations): what are the main advantages of fluid flow through a funnel?
The primary characteristic of flow through a funnel is that the ratio of the velocities of flow in one end and out the other is scaled by the ratio of the cross-sectional areas of the two ends. Slow flow in the big end is transformed into fast flow at the small end. A funnel is also called a Bernoulli transformer for this reason.
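In equation form, this is the continuity relation for incompressible flow (which is what the answer is invoking):
$$A_1 v_1 = A_2 v_2 \quad\Rightarrow\quad \frac{v_2}{v_1} = \frac{A_1}{A_2},$$
so, for example, a funnel whose outlet diameter is one third of its inlet diameter speeds the flow up by a factor of about nine.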
{ "language": "en", "url": "https://physics.stackexchange.com/questions/557624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Hydrogen atom and scale transformation for radial variable While solving the Schrödinger equation for the Hydrogen atom we make a scale transformation for the radial variable ($r=\frac{ax}{Z}$; where $a=$ Bohr radius, $x=$ dimensionless variable and $Z=$ atomic number), and this turns out to be a very good scale transformation. But my question is: how do we know the value of the Bohr radius in advance, before solving the Schrödinger equation? Do we just use the Bohr radius that we got from Bohr theory? If we do use the Bohr radius from Bohr theory, then why is that so, given that it is a classical theory?
this turns out to be a very good scale transformation. We don't really care that it's a good or bad choice. We start by choosing a scale factor with just a symbol to move to dimensionless coordinates and follow the math through using it. The physical problem is solved no matter what we choose (as long as it's not silly like zero). We can at any point in going through the math (on this or any problem where we apply this technique) choose a value for the scale factor that is convenient and makes doing the math simpler. But my question is: how do we know the value of the Bohr radius in advance, before solving the Schrödinger equation? We don't. We solve the problem and then we label the value as the Bohr radius. That's what happened when Bohr's original model was published - it did not start out being called the "Bohr radius", it became that by convention. For some problems the scale factor we end up with for mathematical convenience has a straight-forward physical interpretation (a radius, an orbital period, a reduced mass, etc.), but sometimes it's just a constant in the equation because that's how the mathematics works out.
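One way to see how the scale emerges without knowing it beforehand (a sketch of the nondimensionalisation, with the angular terms suppressed): the Coulomb problem reads
$$-\frac{\hbar^2}{2m}\nabla^2\psi - \frac{Ze^2}{4\pi\epsilon_0 r}\,\psi = E\,\psi .$$
Substituting $r = ax/Z$ gives a kinetic coefficient $\hbar^2 Z^2/(2ma^2)$ and a potential coefficient $Z^2 e^2/(4\pi\epsilon_0 a)$. Requiring the two to share the same scale - the conventional choice is $\hbar^2/(ma^2) = e^2/(4\pi\epsilon_0 a)$ - fixes
$$a = \frac{4\pi\epsilon_0\hbar^2}{m e^2},$$
and only after the problem is solved is this combination recognised as the Bohr radius.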
{ "language": "en", "url": "https://physics.stackexchange.com/questions/557724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
What is the torque produced by 2 rotating bodies with a clutch I am trying to simulate a car engine etc., but I have failed to find any equations governing the torque created by $2$ different constant-velocity shafts of different angular momenta joining together with some given slip or friction factor. I know $I_1w_1 + I_2w_2 = I_3w_3$, which gives me the end angular velocities, but it's not giving me the torque acting on each shaft at any moment in time whilst they are joining.
I can see the torque transferred to the engine from the wheels could be limited by the clutch, but I don't know the torque and I don't know the clutch join time! Correct. You need to make assumptions. If you have an inexperienced driver that releases the clutch instantly, the maximum torque that can be applied may well exceed stress limits on parts. The theoretical maximum depends on the specifics of the clutch. The faster it engages, the greater the maximum torque can be. It's like asking the force on an object that bounces. It depends on the materials. A soft ball on a piece of foam will have low force. Two steel balls bouncing might have forces of several thousand newtons. But the total momentum change might be identical. The only difference is the time involved. An automatic transmission's torque converter, or an experienced manual driver, will manipulate things in a way to limit the total torque. But this is a variable, not something that falls out of your equations.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/557938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
"Boiling is to evaporation as melting is to... ?" Or, why aren't 31 degree ice cubes wet? Well before a liquid reaches boiling point, it gradually looses molecules with exceptionally high kinetic energies to its surroundings, which is called evaporation. Does this phenomenon occur to some solids as well, where before their melting points, the lose some of their mass into liquid forms? Why don't ice cubes at 31 degrees have a layer of water sticking to them, but are instead extremely dry?
They do, although it's very thin. The three phases of matter are merely approximations which let us treat a whole bunch of molecules as if they were a bulk object. However, when a liquid molecule gains enough energy to act gas-like, it tends to get away from the liquid body. When a molecule of a solid gets enough energy to act like a liquid, it stays near the solid, giving it an opportunity to transfer some of that energy back into the solid, effectively "refreezing".
{ "language": "en", "url": "https://physics.stackexchange.com/questions/558088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why do we describe probability amplitude rather than probability itself in quantum mechanics? In quantum mechanics, the dynamics of a quantum system are described in terms of probability amplitudes. However, in the end we want to calculate the probability, which is what can be measured. Why don't we develop quantum mechanics directly in terms of the probability instead of the probability amplitude? Wouldn't this make quantum mechanics more interpretable and simpler?
You are right. In proper treatments of the mathematical foundations of quantum mechanics, following von Neumann, the probability amplitude is simply defined from the probability using the Born rule and satisfying the Hilbert space structure. This does indeed make quantum mechanics much easier to understand, and, being rigorous, it actually follows that interpretation is a mathematically solved problem. I have a published paper on the topic, The Hilbert space of conditional clauses.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/558181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Does it make sense to say that something is almost infinite? If yes, then why? I remember hearing someone say "almost infinite" in this YouTube video. At 1:23, he says that "almost infinite" pieces of vertical lines are placed along $X$ length. As someone who hasn't studied very much math, "almost infinite" sounds like nonsense. Either something ends or it doesn't, there really isn't a spectrum of unending-ness. Why not infinite?
"Almost infinite" is a sloppy term that might be used to mean "effectively infinite", in a given context. For example, if a large value of $x$ in $y = 1/x$ produces a value smaller than the accuracy of measurement of $y$, then it's often reasonable to set the value of $y$ to zero, which is equivalent to setting the value of $x$ to infinity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/558491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 11, "answer_id": 3 }
Capacitance of a single straight wire What is the capacitance of a single straight wire? Calculating the electric field using Gauss's law, I get a constant divided by the distance from the wire (r). Integrating 1/r gives me ln(r). Evaluating r at the radius of the wire and at infinity gives me infinity. On the one hand, I am thinking that since I am using the field from an infinitely long straight wire, maybe an infinitely long wire has infinite capacitance. But what is confusing is that using q = CV (charge = capacitance times voltage), I get cL = Cck*ln(r), where c is the charge per unit length and k is the rest of the constants. There I am plugging in L, which means I am not technically using an infinitely long wire.
Using Gauss's law, you should have found that the field strength (radial) at distance $r$ from the central axis of a long straight wire of length $\ell$ and radius $r_1$ carrying charge $Q$ is of magnitude $$E= \frac{Q}{2 \pi \epsilon_0 \ell r}$$ So the pd between the surface of the wire (of radius $r_1$) and a surrounding co-axial conducting surface of radius $r_2$ is $$V=\int_{r_1} ^{r_2} \frac{Q}{2 \pi \epsilon_0 \ell r}dr=\frac{Q}{2 \pi \epsilon_0 \ell} \ln \frac{r_2}{r_1}$$ The capacitance is therefore $$C=\frac QV=\frac{2 \pi \epsilon_0 \ell}{\ln \frac{r_2}{r_1}}$$ So, not surprisingly, the capacitance is proportional to the length of the wire. More interestingly, the capacitance goes to zero as we make the surrounding conducting surface larger and larger ($r_2>>r_1$). In other words, an isolated conducting wire would have zero capacitance. In practice there will be objects at various distances from the wire, and the charged wire will induce charges on these objects, so the system's capacitance, though hard to calculate, will not actually be zero (when it can be defined at all). Hope this helps.
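As a rough numerical illustration (the dimensions below are assumed for the example, not taken from the question), the logarithm makes the capacitance fall off extremely slowly as the surrounding surface is pushed outwards; a minimal sketch:

```python
import math

eps0 = 8.854e-12          # vacuum permittivity, F/m
length = 1.0              # wire length in metres (assumed)
r1 = 0.5e-3               # wire radius, 0.5 mm (assumed)

# Capacitance of the wire with a coaxial outer surface at radius r2
for r2 in (1e-2, 1.0, 1e3, 1e6):
    C = 2 * math.pi * eps0 * length / math.log(r2 / r1)
    print(f"r2 = {r2:.0e} m  ->  C = {C * 1e12:.1f} pF")
```

Pushing the outer surface from a centimetre out to a kilometre only drops the capacitance from roughly 19 pF to roughly 4 pF per metre of wire, which is why the zero-capacitance limit is approached so slowly.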
{ "language": "en", "url": "https://physics.stackexchange.com/questions/558905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How does one determine the $R$-symmetry group? As far as I understand it, the $R$-symmetry group is just the largest subgroup of the automorphism group of the supersymmetry (SUSY) algebra which commutes with the Lorentz group. I know for $\mathcal{N}=1$ SUSY, the $R$-symmetry is $U(1)$, mainly due to there being only one supercharge. However, I was wondering: how does one find the $R$-symmetry group for an extended $\mathcal{N}>1$ supersymmetric theory? Also, does the $R$-symmetry group depend on the dimension and/or geometry (e.g. if we had a compact spacetime manifold) of spacetime?
Properties of the R-symmetry group depend on the spinor structure: here M means Majorana spinors, MW - Majorana-Weyl, S - symplectic. The spinor structure depends on the dimension and signature of the space. For more details one can consult Tools for supersymmetry. Putting a supersymmetric theory on curved space is not a simple task. To carry out such a procedure, one must find generalized Killing spinors on the manifold. Such spinors will depend on non-trivial SUGRA background fields. See for example An introduction to supersymmetric field theories in curved space. A simpler task is to study SCFTs, but in the superconformal algebra the R-symmetry is not an external symmetry; the R-symmetry appears in the commutation relations. Such theories are natural to put on conformally flat manifolds. For spheres we have the following R-symmetry groups:
{ "language": "en", "url": "https://physics.stackexchange.com/questions/559047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }