Is projectile motion an approximation? Doesn't the acceleration vector point towards the center of the Earth and not just downwards along an axis vector? I know that the acceleration vector is essentially acting downwards for small vertical and horizontal displacements, but if the parametrization of projectile motion doesn't trace out a parabola, what is the shape of projectile motion?
Also, the Earth is rotating and is a non-inertial (accelerating) reference frame. For long-range projectiles the effect of the Coriolis force is important. Some of the earliest numerical simulations using computers were calculations of long-range projectile motion.
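As a rough numerical illustration of the approximation in the question, one can compute the angle by which the true (center-pointing) gravity direction differs from a fixed "straight down" axis at the far end of a projectile's range. The Earth radius is the standard mean value; the ranges are made-up examples:

```python
import math

R_EARTH = 6.371e6  # mean Earth radius, m

def gravity_tilt_deg(horizontal_range_m):
    """Angle (deg) between 'down' at the launch point and the true
    centre-pointing gravity direction at the given horizontal range."""
    return math.degrees(math.atan2(horizontal_range_m, R_EARTH))

print(gravity_tilt_deg(100.0))   # ~0.0009 deg: negligible for a thrown ball
print(gravity_tilt_deg(1.11e5))  # ~1 deg over ~111 km
```

Over ranges where this tilt is tiny, the parabola is an excellent approximation; the exact trajectory in a central inverse-square field is a segment of an ellipse with one focus at the Earth's center.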
{ "language": "en", "url": "https://physics.stackexchange.com/questions/611916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 6, "answer_id": 0 }
Will liquid nitrogen evaporate if left in an unopened container? SOS! I left work today and got a horrible feeling that I forgot to put the lid back on a large container of liquid nitrogen which contains many racks of frozen cells in it. If this did happen, how long would it take liquid nitrogen to evaporate? Does it start to evaporate as soon as it is exposed to oxygen? Will all the liquid nitrogen be gone from the container when I go back tomorrow?
Updated Response per comment by @Chemomechanics. This response assumes the nitrogen container is well insulated and considers only evaporation (mass transfer) and not heat transfer as the dominant phenomenon. At atmospheric pressure the saturation temperature for liquid nitrogen is -196 C. The air in the room is probably about 20 C. So, yes, unless the cover space is very small, all the liquid nitrogen will eventually evaporate. The time to evaporate all the liquid nitrogen depends on the mass and the exposed surface area (the size of the opening), which you did not provide. The nitrogen will evaporate until the partial pressure in the room is the saturation pressure at the temperature of the nitrogen. For a small opening the rate of evaporation is low, so the rate of release into the surrounding room is small. I do not have good information on the rate of evaporation; perhaps others can address this. Comments by @Chemomechanics and @J. Murray indicate little evaporation over a day for a small opening. If a large amount of nitrogen is evaporating into a sufficiently small closed cover space, the saturation partial pressure is reached before all the nitrogen has evaporated. Here is some additional information. Release of significant amounts of nitrogen vapor poses significant safety concerns. Search the net for liquid nitrogen accidents. A tremendous amount of force can be generated if liquid nitrogen is vaporized in an enclosed space. The expansion ratio of nitrogen is 1 to 696; vaporization of liquid in a tank where the pressure relief devices failed resulted in a serious accident at Texas A&M on January 12, 2006. If a large amount of nitrogen fills a space, asphyxiation can occur due to lack of oxygen.
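As an order-of-magnitude sketch of the heat-transfer side (which the response above sets aside): if the dewar's insulation leaks heat at some steady rate, the boil-off rate follows from the latent heat of vaporization of nitrogen. The 1 W leak below is an assumed, illustrative number, not a property of any particular container:

```python
LATENT_HEAT = 199e3   # J/kg, latent heat of vaporization of N2 at 1 atm
heat_leak_w = 1.0     # W, assumed heat leak through the insulation

boiloff_kg_per_day = heat_leak_w / LATENT_HEAT * 86400  # 86400 s per day
print(f"{boiloff_kg_per_day:.2f} kg of LN2 per day")    # ~0.43 kg/day
```

With the lid off, convection and moisture ingress would raise the effective heat leak well above such an insulated-dewar figure.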
{ "language": "en", "url": "https://physics.stackexchange.com/questions/612017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Ball in magnetic field, not understanding boundary condition So I am reading the solution of the following exercise: A conducting sphere with radius $R$ moves with constant velocity $v=ve_x$ inside a constant magnetic field $B=Be_y$. Find the induced charge distribution on the sphere to 1st order in $v/c$ in the laboratory inertial reference frame. The solution is on this page: https://www.physicsforums.com/threads/moving-sphere-in-magnetic-field.825426/ The person in this topic solves the Laplace equation with the condition that the potential at infinity must go to $-r\gamma \frac{v}{c}\cos\theta$, and I don't get why.
Because it implements the correct boundary conditions for the electric field in the primed reference system $E'(r) \to E_0\vec{e}_z$. Check that this is true by taking the gradient. Conversely you may ask what potential fulfills $\nabla \Phi(r,\theta) = E_0\vec{e}_z$ which will lead to the expression. Admittedly, it is not obvious at first glance and the author of the question probably put some thought into it.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/612346", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is it even theoretically possible for a perfect clock to exist? I have heard that even atomic clocks lose a second every billion years or so. That raises the question, is it even theoretically possible for a perfect clock to exist, one that never gains or loses time?
I have heard that even atomic clocks lose a second every billion years or so. That would be a small misunderstanding on your part. The second now is defined by atomic clocks. So, if all atomic clocks were consistently slow, then that would mean that the definition of a second was wrong... by definition. That doesn't make sense. What you read probably said that an atomic clock can not be regulated to better accuracy than plus or minus so-many seconds per billion years. That is to say, if you built an ensemble of atomic clocks, and you let them all run for a billion years without ever correcting them, then you could expect their counts to differ by some small number of seconds at the end of that time.
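To connect the quoted figure to how clock stability is usually specified, "one second per billion years" translates into a fractional frequency accuracy. A back-of-envelope sketch (the seconds-per-year value is rounded):

```python
SECONDS_PER_YEAR = 3.156e7           # ~365.25 days, rounded
drift_s = 1.0                        # "one second ..."
interval_s = 1e9 * SECONDS_PER_YEAR  # "... per billion years"

fractional_accuracy = drift_s / interval_s
print(f"{fractional_accuracy:.1e}")  # ~3.2e-17 fractional frequency accuracy
```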
{ "language": "en", "url": "https://physics.stackexchange.com/questions/612447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 1 }
Yukawa matrices It is known that the masses of fermions in the Standard Model are represented in the form of singular values of complex Yukawa matrices (Yukawa couplings). The question is, are the values of the masses/couplings themselves real numbers? If so, what are the values of the imaginary part, what role do they play?
Yes, the values of the masses are real positive numbers. Recall how you find them out of the complex matrix Y. Note first that $Y Y^\dagger$ is hermitian and has positive eigenvalues, and so can be written as $$ Y Y^\dagger = U D ^2 U^\dagger $$ for some unitary U and diagonal real D with no zero entries, for simplicity. Take the positive square roots. Define the hermitian matrix $$ H = UD U^\dagger ~~~~\leadsto ~~~ H^2= UD^2 U^\dagger ,\\ H^{-1}= U D^{-1} U^\dagger. $$ Define the unitary matrix $$ S\equiv H^{-1} Y ~~~ \leadsto ~~~ Y=H S, \\ Y= U D U^\dagger S\\ \equiv U D K^\dagger , $$ where K is unitary. Had we not taken the positive roots for the diagonal D, we could make it positive by incorporating the diagonal matrix of negative signs into $K^\dagger \equiv U^\dagger S$, preserving its unitarity. So Y has been bidiagonalized to a real positive diagonal D, now safely identified with a diagonal M. Now you are ready to apply U or K, the one contiguous to the left-chiral fermion components, to produce the CKM matrix. Note the ensuing companion expression to the first equation, useful in identifying K, $$ Y^\dagger Y = K D^2 K^\dagger . $$ For zero eigenvalues see the Singular Value Decomposition.
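The bidiagonalization constructed above is exactly the singular value decomposition, and it is easy to check numerically; the random complex matrix below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# A generic complex 3x3 "Yukawa" matrix with random entries.
Y = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# SVD: Y = U D K^dagger with U, K unitary and D diagonal, real, non-negative.
U, D, Kh = np.linalg.svd(Y)

assert np.allclose(U @ np.diag(D) @ Kh, Y)     # Y is reconstructed exactly
assert np.all(D >= 0)                          # the "masses" are real and >= 0
assert np.allclose(U @ U.conj().T, np.eye(3))  # U is unitary
```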
{ "language": "en", "url": "https://physics.stackexchange.com/questions/612674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Where is this magnetic rail gun getting its energy? Please watch this youtube video of a magnetic rail gun moving a marble. So as you already know, Conservation of Energy states that "energy can be neither created nor destroyed, but can only change form". Where is this rail gun getting the energy to move the marble? It looks like it's being created out of thin air. Is it not possible to create a perpetual motion machine by modifying this rail gun? For example, what if a funnel and tube was added that can catch the moving marble, which feeds the marble back to the rail gun to continue the loop? I'm not 100% sure, but I can't imagine any kind of friction will cause this loop to stop. Thanks!
There is no deviation from the laws of physics. Magnetic potential energy is another source of energy, apart from gravitational potential energy. As long as the magnetic properties of the metal persist, that stored energy can be released, but it is finite and is used up as the device runs.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/612783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Intrinsic carrier concentration in doped semiconductors For an intrinsic semiconductor, due to thermal energy we get some charge carriers whose concentration is known as the intrinsic carrier concentration. Now if we dope the material we'll have carriers both due to donation and due to thermal generation. In this case is the intrinsic carrier concentration still defined to be equal to the thermally generated carriers? And in what way might this doping affect the intrinsic carrier concentration?
Intrinsic carrier concentration is the concentration of electrons or holes in a pure, undoped semiconductor. Doping a semiconductor changes the concentration of electrons and holes, but it doesn't change the intrinsic concentration. The material just stops being an example of an intrinsic semiconductor. It's the same as how heating water above 0 C doesn't change the freezing point of water. You just no longer have ice.
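Numerically, this is the mass-action law $n p = n_i^2$: doping moves $n$ and $p$ around while their product, fixed by $n_i$, stays put. The values below are the standard textbook numbers for silicon at 300 K, with an assumed donor level:

```python
n_i = 1.5e10   # cm^-3, intrinsic carrier concentration of Si at 300 K
N_D = 1.0e16   # cm^-3, assumed donor doping (fully ionized)

n = N_D            # electrons: dominated by the donors
p = n_i**2 / n     # holes: suppressed so that n*p = n_i^2 still holds
print(f"n = {n:.2e} cm^-3, p = {p:.2e} cm^-3")   # p ~ 2.25e4 cm^-3
```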
{ "language": "en", "url": "https://physics.stackexchange.com/questions/612998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How can we say load is static in case of simple tension test if it is changing with time? In a simple tension test the load acting is called a static load (one which doesn't change with time), but in this test the load does vary with time, i.e., the load increases. So how can we say it is a static load test?
In a static test, the load is nearly constant and acts in only one direction. In a dynamic test, the load varies rapidly and can act in both directions i.e., for example in reversed bending. Static tests are used to determine yield point, ultimate strength, etc. and dynamic tests are used to determine fatigue resistance and vibrational dynamics.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/613254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Finding power stored/given by each element in a circuit I will start with my attempt. I recall my professor saying that $i_1 = 0$ due to the fact that current cannot flow through an open circuit. That makes the current-dependent-voltage-source on the right circuit generate no voltage at all. Looking at the left circuit, I think that no current will flow through $R_2$ because the current will always choose the path of least resistance (through the short circuit); at least that's what I recall from Sadiku's book. So, I could reduce the left circuit to only $R_{eq} = R_1$. On the right circuit, because the current-dependent-voltage-source does not generate any voltage, based on the equation $R = V/I$ I think that the resistance on it should be zero. Hence, I can make the right circuit equivalent to $R_{eq} = \frac{R_3 \cdot R_4}{R_3 + R_4}$. At this point it's just about finding the voltage or the current in each circuit needed to calculate the power via $P = V \cdot I$. Is this approach correct? These are the things that I'm not quite sure of (not explained in the textbook that I use, at least not in the parts I've read yet): * *Is $i_1 = 0$? *Does current flow through $R_2$? *Does the CDVS on the right circuit have zero voltage, thus having $P = 0$, and a resistance of zero so I could make $R_{eq}$ the parallel of $R_3$ and $R_4$? Thank you!
Your approach is correct, for the most part. * *Consider the leftmost circuit. Since, as you said, $R_2$ is short-circuited, we can redraw the leftmost circuit: Thus, applying Kirchhoff's Current Law on the blue node yields $i_1=0$. *No current flows through $R_2$, since it is short circuited. *The voltage across the CCVS is indeed $2i_1=2\cdot0=0V$, so yes, you can replace it with a short circuit, and that would make $R_3$ and $R_4$ be in parallel with each other.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/613360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is Minkowski metric diagonal? Why is the Minkowski metric a diagonal in a 4x4 matrix? What does the diagonal do?
Being diagonal is a coordinate-dependent concept: the components of the matrix associated to the metric tensor depend on the coordinate system you use. Thus a very simple example of a non-diagonal metric is the standard Euclidean metric $\delta = dx^2 + dy^2$ on $\mathbb R^2$ in the coordinate system $(x,z) = (x, x+y)$, where it has the coordinate expression $$\delta = dx^2 + d(z-x)^2 = 2dx^2 + dz^2 - 2 dx dz.$$ In fact, there are some very famous solutions that have non-diagonal metrics, such as the Kerr metric for a rotating black hole in General Relativity.
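The coordinate change in this example can be verified mechanically by pulling the Euclidean metric back through the Jacobian; a small numerical sketch:

```python
import numpy as np

# New coordinates (x, z) with z = x + y, i.e. old coordinates (x, y) = (x, z - x).
# Jacobian d(x, y)/d(x, z):
J = np.array([[1.0, 0.0],    # dx = dx
              [-1.0, 1.0]])  # dy = -dx + dz

g_new = J.T @ np.eye(2) @ J  # pull back the identity (Euclidean) metric
print(g_new)
# [[ 2. -1.]
#  [-1.  1.]]  i.e. delta = 2 dx^2 + dz^2 - 2 dx dz, and the metric is not diagonal
```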
{ "language": "en", "url": "https://physics.stackexchange.com/questions/613457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
What does the "true" visible light spectrum look like? When I google "visible light spectrum", I get essentially the same image. However, in each of them the "width" of any given color is different. What does the "true" visible light spectrum look like, then? It can't be that each and every image search result is correct. I could not find any information about this on the web, so I turn to the experts.
If you're really curious, buy a cheap prism, and take it outside in sunlight. You'll be dispersing the frequencies present in sunlight, and in addition, your eyes are more or less sensitive depending on the frequency, but that's a good start for being able to see what a "real spectrum" of visible light is. A monitor does not produce all frequencies of light, but rather tricks human perception by sending different proportions of red, green, and blue light. A color-calibrated, wide-gamut display can reproduce the effect of broad spectrum light, but it won't be the real thing.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/613798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 8, "answer_id": 6 }
Is a wave function a ket? I just started with Dirac notation, and I am a bit clueless to say the least. I can see Schrödinger's equation is given in terms of kets. Would I be correct to assume if I were given a wavefunction, say $\Psi(x)=A\exp(-ikx)$, would I be able to just use the notation $\lvert \Psi\rangle =A\exp(-ikx)$?
The definition is $$ \psi(x)=\langle x| \psi\rangle, ~~~\leadsto \\ |\psi\rangle= \int dx ~~\psi(x) | x\rangle , ~~\leadsto \\ |\Psi\rangle= \int dx ~~ A e^{-ikx}| x\rangle . $$ Wavefunctions are coefficients of coordinate kets. NB You may also then check $$\langle p|\Psi\rangle= \int dx ~~A e^{-ikx} \langle p|x\rangle ={A\over \sqrt{2\pi \hbar}} \int dx ~e^{-ix(k+p/\hbar)} \\ =A\sqrt{2\pi \over \hbar}~ \delta (k+p/\hbar). $$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/613937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 2 }
Oscillation coil: where is the electric field? Let's assume a simple RF coil fed with an alternating current at RF frequencies, say 100 MHz. I believe that no one doubts that the coil will radiate RF energy in the form of radio waves. A radio wave is classically composed of an electric vector and a magnetic vector orthogonal to one another. Now, let's assume a big low-frequency coil, say for magnetic induction, and assume it is fed by a low-frequency current (e.g. 1 kHz). An oscillating magnetic field will be perceived near the coil, but I've never heard that the coil also radiates a sensible electric field. Yet, there is no difference from the former RF coil but the frequency. So, my question is: why is there no sensible electric field radiated by the low-frequency coil?
One has to distinguish the near field (i.e., the field near the coil) and the far field, i.e., the propagating electromagnetic waves far away from the coil (far away on the scale of the wavelength). The major factors that determine whether the oscillating electric current will produce a propagating electromagnetic wave are: * *the damping should be weak *the energy of the electromagnetic wave should be substantial enough to be detected and to be of interest for practical applications (otherwise you will never hear about it). Kilohertz-frequency waves have wavelengths of several hundred kilometers - too long to be of practical use, and even to be generated by conventional methods (which typically require an antenna of half-wavelength size). Yet, such waves are known to exist in the atmosphere and can be rather easily detected (to the extent that in some universities their detection is the subject of an undergraduate lab experiment). They go under the name of whistlers.
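The wavelength comparison behind this argument is one line of arithmetic:

```python
C = 2.998e8  # speed of light, m/s

def wavelength_m(freq_hz):
    return C / freq_hz

print(wavelength_m(100e6))      # 100 MHz RF coil: ~3 m, easy to radiate
print(wavelength_m(1e3) / 1e3)  # 1 kHz coil: ~300 km wavelength
```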
{ "language": "en", "url": "https://physics.stackexchange.com/questions/614077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Are there fields (of any kind) inside a black hole? It is said that nothing escapes from black holes, not even light. All particles are now thought to be excitation of different fields (electric field, electromagnetic field, photon field, etc). Does it follow that there are no fields (of whatever kind) inside the event horizon of a black hole? IFFY (if and only if) there are fields, are particles created inside the black hole?
Besides the electric field, which, as Jerry Schirmer mentions in his answer, is non-zero everywhere, one can have a scalar field that exists inside the horizon. See for example the (2+1)-dimensional solution reported here: Conformally dressed black hole in 2+1 dimensions. The metric function is found to be: $$f(r) = \cfrac{(r+B)^2(r-2B)}{rl^2} $$ and the scalar field: $$\Psi = \sqrt{\cfrac{8B}{\kappa(r+B)}}$$ where $B>0$. The black hole horizon is located at $r_h=2B$ and the scalar field remains finite for all $r>0$.
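The horizon location and the regularity of the scalar field there can be checked symbolically; in this sketch the coupling $\kappa$ is set to 1 for simplicity (an assumption of the sketch, not of the paper):

```python
import sympy as sp

r, B, l = sp.symbols('r B l', positive=True)
f = (r + B)**2 * (r - 2*B) / (r * l**2)   # metric function from the solution
psi = sp.sqrt(8*B / (r + B))              # scalar field with kappa = 1 (assumption)

assert sp.simplify(f.subs(r, 2*B)) == 0   # f vanishes at the horizon r_h = 2B
print(psi.subs(r, 2*B))                   # finite there: sqrt(8/3)
```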
{ "language": "en", "url": "https://physics.stackexchange.com/questions/614146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Are angles ($\theta$ and $\phi$) in spherical coordinates treated as operators in quantum mechanics? Position is specifically considered as an operator in quantum mechanics. I want to know if $\theta$ and $\phi$ are explicitly considered as operators in quantum mechanics for solutions to the 3D Schrodinger equation. Also, how do they commute with each other and with operators like $r$ or $J$? Also, if they are considered as operators, then how do we calculate the expectation value for, say, the solution to the electron wave function in the Hydrogen atom potential? As far as I understand, the potential is spherically symmetric so there should be no preference for any particular angle. (This seems to be the case for s-orbitals, but for p and higher orbitals the wave function is not spherically symmetric. So what is the origin of this asymmetry?)
You can define such an operator. Let $\{|\mathbf{r}(r,\theta,\varphi)\rangle\}$ be the eigenstates of the position operator $\mathbf{r}$: $$ \mathbf{r}|\mathbf{r}(r,\theta,\varphi)\rangle = \mathbf{r}(r,\theta,\varphi) |\mathbf{r}(r,\theta,\varphi)\rangle. $$ Let us try the following definition of an "angular operator" $\hat{\theta}$: $$ \hat{\theta}|\mathbf{r}(r,\theta,\varphi)\rangle = \theta |\mathbf{r}(r,\theta,\varphi)\rangle . $$ But you will definitely agree with $|\mathbf{r}(r,\theta,\varphi)\rangle = |\mathbf{r}(r,\theta+2 \pi,\varphi)\rangle$. But then we have $$ \hat{\theta}|\mathbf{r}(r,\theta,\varphi)\rangle = (\theta+ 2 \pi n) |\mathbf{r}(r,\theta,\varphi)\rangle , \quad n \in \mathbb N_0. $$ Hence, this operator, or the action of this operator, is not well-defined. An alternative would be a "modulo" definition like this: $$ \hat{\theta}|\mathbf{r}(r,\theta,\varphi)\rangle = (\theta \,\mathrm{mod}\, 2\pi) |\mathbf{r}(r,\theta,\varphi)\rangle. $$ :-)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/614304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
What are the necessary and sufficient conditions for a wavefunction to be physically possible? Often times it is stated in books that a quantum state is physically realizable only if it is square integrable. For example, in Griffiths (2018 edition) page 14 he stated Physically realizable states correspond to the square-integrable solutions to Schrödinger’s equation. But when we have an operator with continuous eigenvalues like the position operator $\hat{X}$ our eigenstates are not square-integrable. Like the eigenfunction for the eigenvalue $x'$ of $\hat{X}$ is $\delta(x-x')$ which is not square-integrable and we also know - $$ \langle x|x'\rangle=\delta(x-x')\Rightarrow \langle x|x\rangle=\infty$$ But clearly $|x\rangle$ is a physical state as wavefunction collapses to that after measurement. So definitely the definition by Griffiths is not completely correct. I came across about Rigged Hilbert space while searching in Stack but I am still not completely sure whether all physical states lie in Rigged Hilbert space or all states in Rigged Hilbert space are physical. So, my question is what are the necessary and sufficient conditions for a wavefunction to be physically possible?
In addition to what John Rennie said in the comments, I would like to highlight another thing using the quote you provided: Physically realizable states correspond to the square-integrable solutions to Schrödinger’s equation. The delta function isn't a solution to Schrödinger’s equation. A physical wavefunction therefore satisfies: $$\left[-\frac{\hbar^2}{2m}\nabla^2+V(\vec{x})\right]\psi (\vec{x})=E\psi (\vec{x}),$$ and must be normalizable: $$\int_{-\infty}^{\infty}d^3x ~ \psi^{*}\psi <\infty$$ The delta functions $\delta(x-x')$ form a basis of the Hilbert space, but they are not a basis of eigenfunctions of the Hamiltonian operator.
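A quick numerical contrast between a normalizable state and a plane wave (the width and wavenumber below are illustrative values):

```python
import numpy as np
from scipy.integrate import quad

# A Gaussian wave packet is square-integrable: its norm integral converges.
gauss_norm, _ = quad(lambda x: np.exp(-x**2), -np.inf, np.inf)
print(gauss_norm)          # converges to sqrt(pi) ~ 1.7725

# A plane wave A*exp(-ikx) is not: |psi|^2 = |A|^2 everywhere, so the norm
# integral over [-L, L] is 2L|A|^2 and diverges as the window widens.
k = 5.0
plane_norms = [quad(lambda x: abs(np.exp(-1j * k * x))**2, -L, L)[0]
               for L in (10.0, 100.0, 1000.0)]
print(plane_norms)         # grows like 2L, with no finite limit
```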
{ "language": "en", "url": "https://physics.stackexchange.com/questions/614496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 4 }
Is entanglement time-symmetric? It is common to describe an experiment as "causing" entanglement. For example, two quantum particles that interact become entangled as a result of their interaction, so we are likely to say that the interaction "caused" the entanglement. However, quantum mechanics is time-symmetric, so a time reversed movie of the experiment would show two entangled particles approaching each other, interacting, and then going on their way unentangled. I suspect that this picture is incorrect, but can't put my finger on why.
The picture is correct. The key fact is that interaction can both create and destroy entanglement, so there is no conflict with time symmetry. Suppose for example that two qubits initially in the product state $|\psi_i\rangle = |+\rangle|0\rangle$ are allowed to interact in a way that performs the CNOT gate from the first to the second qubit. Then the final state is $|\psi_f\rangle=(|00\rangle + |11\rangle)/\sqrt{2}$. Now consider the above process in reverse. Two qubits begin in the Bell state $|\psi_f\rangle=(|00\rangle + |11\rangle)/\sqrt{2}$, interact in a way equivalent to the CNOT gate and finish in the state $|\psi_i\rangle=|+\rangle|0\rangle$. In terms of spins, you can think of $|1\rangle$ as the "spin up" state, of $|0\rangle$ as the "spin down" state and of $|+\rangle=(|0\rangle+|1\rangle)/\sqrt{2}$ as the uniform superposition of the two states with zero relative phase. A process implementing the CNOT gate is then any interaction that flips the second spin if the first spin is up.
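The CNOT example can be checked directly with a few lines of linear algebra; the basis ordering |00>, |01>, |10>, |11> with the first qubit as control is an assumption of this sketch:

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

plus_zero = np.array([1, 0, 1, 0]) / np.sqrt(2)  # |+>|0>, a product state
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # (|00> + |11>)/sqrt(2)

# Forward in time the interaction creates entanglement ...
assert np.allclose(CNOT @ plus_zero, bell)
# ... and, since CNOT is its own inverse, the reversed movie destroys it.
assert np.allclose(CNOT @ bell, plus_zero)
```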
{ "language": "en", "url": "https://physics.stackexchange.com/questions/614802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Will a changing $E$ field induce a current in a loop similar to a changing $B$ field? An induced current in a wire loop caused by a changing B field is a common EM question. However, I couldn't find examples online where the B field was replaced by a changing E field. The following question was given on a test and the goal was to find the current flow caused by a varying B field first, then a varying E field. My answer is illustrated below. While it was simple to deduce the direction of the current with a changing B field (clockwise), when the E field was subbed in below, my answer was completely different. Instead, I ended up with an induced B field that was counterclockwise on the outside of the loop and clockwise on the inside of the loop.
Keep in mind three facts: * *If you look at the Lorentz force, a static magnetic field never imparts kinetic energy onto a charged particle; it only curves its trajectory. You need an electric field to speed a charge up or slow it down. *If you look at Faraday's law, you will see the curl of the electric field is zero when the time derivative of the magnetic field $B$ is zero everywhere. *Maxwell's equations are coupled. A changing electric field also produces a magnetic field. If the electric field is curl-free, then there is no closed loop that accelerates charges the whole way around: so, when exposed to a new, curl-free electric field, charges just rearrange themselves, without developing a net current around the loop. A changing electric field can produce a magnetic field, but this magnetic field cannot directly speed up charges around the loop. You still need the line integral $\oint \mathbf{E}\cdot d\boldsymbol{\ell}$ around the loop to be nonzero, which requires $\nabla\times\mathbf{E}=-\partial\mathbf{B}/\partial t$ to be nonzero somewhere. So unless a changing magnetic field that produces an EMF is produced/present somehow, just changing the electric field is not enough.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/615146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Matrix Representation of Lorentz Group Generators Let $\Lambda^{\alpha}{}_{\beta}$ denote a generic Lorentz transformation. Then, an infinitesimal transformation can be written like $$\Lambda^{\mu}{}_{\nu} = \delta^{\mu}{}_{\nu} + \omega^{\mu}{}_{\nu} $$ where $$\omega^{ij} = \epsilon^{ijk}\theta_k$$ $$\omega^{i0} = - \omega^{0i} = \delta^i$$ where $i,j,k$ run from 1 to 3 and $\delta^i$ is a parameter related to boosts. Then, an infinitesimal transformation has a matrix representation \begin{pmatrix} 1 & -\delta_1 & -\delta_2 & -\delta_3\\ -\delta_1 & 1 & \theta_3 & -\theta_2\\ -\delta_2 & -\theta_3 & 1 & \theta_1\\ -\delta_3 & \theta_2 & -\theta_1 & 1 \end{pmatrix} However, we can also write $$\Lambda^{\mu}{}_{\nu} = \delta^{\mu}{}_{\nu} + i\frac{\omega^{\alpha \beta}}{2}\left(J_{\alpha \beta} \right)^{\mu}{}_{\nu} $$ where $J_{\alpha \beta}$ are the generators of the group. I want to prove that $J_{01}$ is of the form \begin{pmatrix} 0 & -i & 0 & 0 \\ -i & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} My problem is in understanding the notation in $$\Lambda^{\mu}{}_{\nu} = \delta^{\mu}{}_{\nu} + i\frac{\omega^{\alpha \beta}}{2}\left(J_{\alpha \beta} \right)^{\mu}{}_{\nu} $$ For example, I tried to compute $\left(J_{01} \right)^{0}{}_{1}$ by doing $$\Lambda^{0}{}_{1} = \delta^{0}{}_{1} + i\frac{\omega^{01}}{2}\left(J_{01} \right)^{0}{}_{1} $$ $$\Leftrightarrow - \delta_1 = 0 -i \frac{\delta_1}{2}\left(J_{01} \right)^{0}{}_{1}$$ which yields $\left(J_{01} \right)^{0}{}_{1} = -2i$, which is not correct. What am I doing wrong?
In the following text, we use $\eta_{\mu\nu} = \text{diag}(-1,+1,+1,+1)$. For an infinitesimal homogeneous Lorentz transformation, we have $$ {\omega^\mu}_\nu = \begin{pmatrix} 0 & \zeta_1 & \zeta_2 & \zeta_3 \\ \zeta_1 & 0 & -\theta_3 & \theta_2 \\ \zeta_2 & \theta_3 & 0 & -\theta_1 \\ \zeta_3 & -\theta_2 & \theta_1 & 0 \end{pmatrix}, $$ and $$\begin{aligned} {\Lambda^\mu}_\nu &= {\delta^\mu}_\nu + {\omega^\mu}_\nu \\ &= {\delta^\mu}_\nu + \eta^{\rho\mu}\omega_{\rho\nu} \\ &= {\delta^\mu}_\nu + \eta^{\rho\mu}\omega_{\rho\sigma}{\delta^\sigma}_\nu \\ &= {\delta^\mu}_\nu + \frac{1}{2}\omega_{\rho\sigma}(\eta^{\rho\mu}{\delta^\sigma}_\nu - \eta^{\sigma\mu}{\delta^\rho}_\nu) \\ &= {\delta^\mu}_\nu + \frac{i}{2}\omega_{\rho\sigma}{(S_V^{\rho\sigma})^\mu}_\nu \\ \end{aligned}$$ where the vector representation of the generators is defined as $$ {(S_V^{\rho\sigma})^\mu}_\nu \equiv -i({\eta}^{\rho\mu}{\delta^\sigma}_\nu - {\eta}^{\sigma\mu}{\delta^\rho}_\nu). $$ Note that $\omega_{\rho\sigma}$ and $S_V^{\rho\sigma}$ are antisymmetric in the indices $(\rho\sigma)$, and $$ {(S_V^{0i})}^\dagger = - S_V^{0i}, \quad {(S_V^{ij})}^\dagger = S_V^{ij}. $$ If we define $$ \boldsymbol\theta \equiv (\theta_1, \theta_2, \theta_3), \quad \boldsymbol\zeta \equiv (\zeta_1, \zeta_2, \zeta_3), $$ $$ \mathbf{J} \equiv (S_V^{23},S_V^{31},S_V^{12}),\quad \mathbf{K} \equiv (S_V^{01},S_V^{02},S_V^{03}), $$ then $$ {\Lambda^\mu}_\nu = {\delta^\mu}_\nu - i{{(\boldsymbol\theta \cdot \mathbf{J} + \boldsymbol\zeta \cdot \mathbf{K})}^\mu}_\nu. $$ The explicit expressions of the matrices $J_i$ and $K_i$ are $$ J_1 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -i \\ 0 & 0 & i & 0 \end{pmatrix}, \quad J_2 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & i \\ 0 & 0 & 0 & 0 \\ 0 & -i & 0 & 0 \end{pmatrix}, \quad J_3 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & -i & 0 \\ 0 & i & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} $$ $$ K_1 = \begin{pmatrix} 0 & i & 0 & 0 \\ i & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \quad K_2 = \begin{pmatrix} 0 & 0 & i & 0 \\ 0 & 0 & 0 & 0 \\ i & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \quad K_3 = \begin{pmatrix} 0 & 0 & 0 & i \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ i & 0 & 0 & 0 \end{pmatrix} $$
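One can check numerically that exponentiating $K_1$ produces a finite boost that preserves the metric, using SciPy's matrix exponential (the rapidity value below is arbitrary):

```python
import numpy as np
from scipy.linalg import expm

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # metric signature used in the answer

K1 = np.zeros((4, 4), dtype=complex)   # the boost generator K_1 written above
K1[0, 1] = K1[1, 0] = 1j

zeta = 0.7                             # arbitrary rapidity
Lam = expm(-1j * zeta * K1).real       # finite boost: Lambda = exp(-i zeta K_1)

# Lambda preserves the metric, so it is a genuine Lorentz transformation ...
assert np.allclose(Lam.T @ eta @ Lam, eta)
# ... and its t-x block is the familiar cosh/sinh boost matrix.
assert np.allclose(Lam[:2, :2], [[np.cosh(zeta), np.sinh(zeta)],
                                 [np.sinh(zeta), np.cosh(zeta)]])
```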
{ "language": "en", "url": "https://physics.stackexchange.com/questions/615596", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is there a limit to the energy density of a battery? Better battery technology is very important today: improving the energy stored per volume or mass. This led me to wonder whether there is a theoretical limit. (I'm not expecting that we are at all close to it. Real life just inspired the question.) One extreme battery would have a reservoir of anti-matter which it could combine with ordinary matter in a controlled fashion. Could anything beat that for stored energy per mass?
For a battery powered by electrochemistry, there will be a natural limit on its energy density of the following form: batteries work by capturing and diverting the electron transfers occurring in chemical reactions that happen in solution (commonly). This means that a chunk of, say, zinc metal in a zinc-copper battery has a certain number of charge units (of electrons) which it releases at a certain voltage. The transferred charge times the voltage is energy; divide by the mass of the zinc and now you have some number which represents the maximum theoretical electrochemical energy density of zinc metal on a per-kilogram basis. To fully exploit that energy density requires the invention of a battery consisting almost entirely of replaceable chunks of zinc metal and an electrochemical reaction with no resistive losses - neither of which are possible today.
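As an illustration of such a limit, Faraday's constant turns the zinc electrode's charge-per-mole into an energy density. The 1.1 V Daniell-cell voltage and the choice to count only the zinc mass are simplifying assumptions of this sketch:

```python
F = 96485.0     # C/mol, Faraday constant
z = 2           # electrons transferred per Zn atom (Zn -> Zn2+ + 2e-)
V_cell = 1.1    # V, standard Daniell (zinc-copper) cell voltage
M_zn = 0.06538  # kg/mol, molar mass of zinc

energy_per_kg = z * F * V_cell / M_zn                 # J per kg of zinc consumed
print(f"{energy_per_kg/3600:.0f} Wh per kg of zinc")  # ~900 Wh/kg
```

A real cell, with electrolyte, casing, and resistive losses, falls far below this electrode-only ceiling.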
{ "language": "en", "url": "https://physics.stackexchange.com/questions/615712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Newton's 3rd Law Coil in Magnetic Field When a current-carrying rectangular loop is placed in a magnetic field, the forces acting on either side of the loop which is perpendicular to the field provides a torque which rotates the loop. However, according to Newton's 3rd Law, there should be some equal and opposite force acting on the object exerting the force. But in fields, it is the field that exerts forces on objects. Does that mean that the loop exerts an equal and opposite force on the magnet or on the magnet's field? How do these forces play out and what effect do they have on the magnet?
Newton's third law does cause magnets to have equal and opposite effects on each other, whether electromagnets or permanent magnets. The equal and opposite forces are transferred by the magnetic field. In an electric motor or generator, just as the field magnets are pushing/pulling on the armature via their magnetic fields, the armature is also pushing/pulling the field magnets. If you hold a small electric motor in your hand when it is started, you can feel the anti-spinward torque on the motor housing as it applies spinward torque to the armature.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/615888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Kinetic Energy of a Block-Bullet System A bullet of mass $m$ is fired towards a wooden block of mass $M$. At a particular instant of time when the bullet is inside the block, the speed of the block is $V$ and the speed of the bullet, relative to the block, is $v$. I would like to find the total kinetic energy of the system at this point. Considering the bullet and block as separate entities, it should be $$\frac 12 MV^2 +\frac 12 m(v+V)^2$$ But I could also look at the bullet and block as one body with velocity $V$, and then add the extra velocity of the bullet which has not been accounted for: $$\frac 12 (M+m)V^2 +\frac 12 m v^2 $$ Which one of these is correct?
The kinetic energy of a system depends on the choice of reference frame. To compare the energies of different objects or use conservation of energy, each energy must be defined in the same reference frame. The most common choice would be to define a "lab frame." This is your reference frame, standing at rest next to the experiment happening. In this case all velocities used in the kinetic energy equations should be measured relative to you. The total kinetic energy is the sum of the individual energies of each particle $$ K = \frac{1}{2}MV^2 + \frac{1}{2}mu^2,$$ where $V$ is the speed of the block relative to you and $u$ is the speed of the bullet relative to you. Instead of knowing the bullet's speed relative to you directly, you know the bullet's speed relative to the block. We can rewrite the bullet's speed relative to you in terms of the variables you care about: $$u = v + V.$$ Putting it all together, the correct way to account for the kinetic energy of the system is $$K = \frac{1}{2}MV^2 + \frac{1}{2}m(v+V)^2$$
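A quick numeric sanity check of the two candidate expressions, with made-up values for the masses and speeds:

```python
# Numerical check (illustrative values) that the lab-frame kinetic
# energy must use the bullet's speed relative to the lab, u = v + V.
M, m = 2.0, 0.01      # kg: block and bullet (hypothetical values)
V, v = 5.0, 300.0     # m/s: block speed, bullet speed relative to block

u = v + V                                    # bullet speed in the lab frame
K_correct = 0.5*M*V**2 + 0.5*m*u**2          # sum of individual lab-frame energies
K_wrong   = 0.5*(M + m)*V**2 + 0.5*m*v**2    # treats v as if it were a lab-frame speed

print(K_correct, K_wrong)
# The two differ by exactly the cross term m*v*V that the second form drops:
assert abs((K_correct - K_wrong) - m*v*V) < 1e-9
```

Expanding $\frac12 m(v+V)^2$ shows algebraically where that cross term $mvV$ comes from.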
{ "language": "en", "url": "https://physics.stackexchange.com/questions/615968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Lagrangian for a fixed number of non-interacting, non-relativistic bosons In my book on QFT, (during an explanation of superfluidity) the author states that the lagrangian for a fixed number of non-interacting, non-relativistic bosons is $$i\Phi^{\dagger}\partial _{0}\Phi-\frac{1}{2m}\nabla\Phi^{\dagger}\cdot \nabla\Phi + \mu\Phi^{\dagger}\Phi$$ where $\mu$ is the chemical potential. However, I do not understand this - why is there a need for a chemical potential term if the number of bosons remains fixed? Any explanations of the need for this term would be much appreciated.
Since $N$ is constant, $\mu \int \Phi^{\dagger} \Phi = \mu N$ is fixed, so it just sets the zero of energy to be that of the ground state of the $N$-body system. Note that $\mu$ is usually negative for bosons.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/616103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is mist gray but water clear? I was walking outside one cold afternoon with my mask on and my glasses began fogging up. The mist was initially gray. I kept walking without cleaning my glasses and eventually enough mist collected that it transformed into clear water droplets. This got me thinking: why is mist gray but water clear? Or perhaps more specifically, why are smaller water droplets gray and larger droplets clear? I couldn't find any explanation online. What is the physics behind such shenanigans?
Mist is a suspension of tiny water droplets in air. Light traveling through the mist gets randomly scattered, mainly by bouncing off the droplets. That makes mist far less transparent than bulk water. I don't think mist is literally gray in colour, but the fact that mist is far less transparent than pure air (or bulk water) causes it to look the way it does. Other suspensions like smoke (a suspension of tiny solid particles in air) look quite similar, also due to light scattering. Another example is very much diluted milk (an emulsion of fat droplets in water, mainly).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/616258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 1 }
Why does water cast a shadow even though it is considered 'transparent'? If you pour water from a container, the flowing water stream seems to cast a shadow. I am not sure you can call it a shadow, but it definitely is not letting all light through it. How is this possible and what uses can it have?
A large amount of water (i.e. if the path of light through it is long) will simply start absorbing light, as it's not completely transparent. For smaller amounts, as when pouring it from one container to another, this is mostly negligible. However, there is also surface reflection. A small amount of the incident light will be reflected off by the surface. The much larger contribution, however, will come from refractive effects. If you look closely, there are not only areas which are darker than the uniformly lit surroundings, but some will also be brighter. The stream of water forms shapes that act similar to a lens and will divert light off its original path. The patches where the incident light would have gone without the disturbance will then be darker, the places where it is directed to instead will be brighter.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/616364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 8, "answer_id": 1 }
What are the "derivations" of the inverse-square law? Besides the derivation mentioned in this Wiki article, I want to know, if there exists any other derivation of the inverse-square law based on some profound physical/philosophical concepts.
Based on an informal assumption, we could derive the inverse square law for gravitational force and Coulomb force. Assumption Suppose everything in the space is scaled up by a factor of $k$, and time stays the same; then we shouldn't expect anything to change. Derivation Since we are in a 3D space, any volume would be scaled by $k^3$, and so would mass. From $F = ma$: $$F' = (k^3m)(ka) = k^4ma = k^4F$$ Then suppose $F_G = G\frac{m_1m_2}{r^n}$; after scaling up, $F_G' = G\frac{(k^3m_1)(k^3m_2)}{k^nr^n} = k^{6-n}F_G$; also, we know $F_G' = k^4F_G$, so $6-n = 4$, and $n = 2$. Note: this piece of text is composed by one of my friends, Yushun Cheng.
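The exponent matching can also be done symbolically, e.g. with sympy (a minimal sketch of the same argument):

```python
# Symbolic check of the scaling argument: F = m*a scales as k**4
# (mass ~ k**3, acceleration ~ k since time is unchanged), while
# F_G = G*m1*m2/r**n scales as k**(6-n); matching exponents forces n = 2.
import sympy as sp

n = sp.symbols('n')
newton_exponent = 3 + 1        # k-exponent picked up by m*a
gravity_exponent = 3 + 3 - n   # two masses ~ k**3 each, divided by r**n ~ k**n

solution = sp.solve(sp.Eq(newton_exponent, gravity_exponent), n)
print(solution)   # [2]
```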
{ "language": "en", "url": "https://physics.stackexchange.com/questions/616466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Problems deriving the Quantum Hamilton-Jacobi equation This is my first question at Physics SE so please be kind. I am well versed in the etiquette over at Math SE, but not so much here. Anyway, I thought this question was better suited to this site because it is less a problem of understanding mathematical computations and more of a problem about understanding why two results should be the same. MY QUESTION: I am taking a quantum mechanics class and I have been given an assignment. It is as follows: Let $\Psi(\mathbf{r},t)$ be the wavefunction for a single particle satisfying Schrodinger's equation, $$i\hbar \partial_t\Psi=\left(\frac{-\hbar^2}{2m}\nabla^2+V\right)\Psi$$ Write $\Psi$ in polar form, $\Psi=Re^{i\Theta}$. Show that: $$\partial_t R+\frac{\hbar}{2mR}\nabla\boldsymbol{\cdot}\left(R^2\nabla\Theta\right)=0$$ Define $S=\hbar \Theta$. Show that $S$ satisfies the Quantum Hamilton-Jacobi equation: $$\partial_t S+\frac{\Vert\nabla S\Vert^2}{2m}+V=\frac{i\hbar}{2m}\nabla^2S.$$ After some lengthy calculations taking the time derivative and Laplacian of a composite function, I ended up with a big equation $$i\hbar \partial _{t} R-\hbar R\partial _{t} \Theta =\frac{-\hbar ^{2}}{2m}\left( \nabla ^{2} R+i R\nabla ^{2} \Theta -R\Vert \nabla \Theta \Vert ^{2} +2i \nabla R\boldsymbol{\cdot } \nabla \Theta \right) +VR.$$ Taking the imaginary part of both sides of the equation, we do end up with the first of the required equations $$\partial _{t} R+\frac{\hbar }{2mR} \nabla \boldsymbol{\cdot }\left( R^{2} \nabla \Theta \right) =0$$ However, taking the real part, we do not get the equation my professor wants. We instead get (using $S=\hbar \Theta$) $$\partial _{t} S+\frac{\Vert \nabla S\Vert ^{2}}{2m} +V=\frac{\hbar ^{2}}{2m}\frac{\nabla ^{2} R}{R}.$$ So... is my professor wrong? My results agree with this other Physics SE post. If my professor isn't wrong, can anyone explain why $$\hbar \frac{\nabla^2 R}{R}=i\nabla^2S~?$$ might be true?
I have solved this problem myself. If we write $$\Psi=e^Z$$ where $Z$ is now allowed to be complex, we get $$i \hbar \partial _{t} Z=\frac{-\hbar ^{2}}{2m}\left( \nabla ^{2} Z+\Vert \nabla Z\Vert ^{2}\right) +V$$ Letting $Z=\frac{iS}{\hbar}$ will yield the desired result.
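This substitution can be verified symbolically in one dimension with sympy (a sketch; it checks that $\Psi = e^{iS/\hbar}$ turns the Schrodinger equation into the quantum Hamilton-Jacobi equation):

```python
# Symbolic 1D check that psi = exp(I*S/hbar) reduces the Schrodinger
# equation to the quantum Hamilton-Jacobi equation.
import sympy as sp

x, t, hbar, m = sp.symbols('x t hbar m', positive=True)
S = sp.Function('S')(x, t)
V = sp.Function('V')(x)
psi = sp.exp(sp.I * S / hbar)

# Schrodinger equation, moved entirely to one side:
schrodinger = sp.I*hbar*sp.diff(psi, t) \
    - (-hbar**2/(2*m)*sp.diff(psi, x, 2) + V*psi)

# Dividing out psi should leave (minus) the QHJ equation:
residue = sp.simplify(schrodinger / psi)
qhj = -(sp.diff(S, t) + sp.diff(S, x)**2/(2*m) + V
        - sp.I*hbar/(2*m)*sp.diff(S, x, 2))
assert sp.simplify(residue - qhj) == 0
print("QHJ equation recovered")
```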
{ "language": "en", "url": "https://physics.stackexchange.com/questions/616909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can the Auger effect cause a second electron to be just excited instead of ionised and emitted from the atom? From what I understand, the Auger effect is usually defined as when an electron deexcites but instead of releasing its change in binding energy as a photon, it transfers it as kinetic energy to another electron which, if greater than its binding energy, will cause this second electron to be emitted from the atom. My question is why is this process defined with the second electron being emitted from the atom instead of just excited to a higher energy state sometimes. My guess is maybe it has something to do with entropy and the fact that there are so many more possible states for the second electron final state if it is emitted that maybe only in this case will this process actually occur (instead of just emitting a photon as usual).
Let me first note that the Auger process is due to Coulomb interaction between electrons, so it may be beneficial to think of it in terms of the Fermi golden rule: $$ w_{i_1 i_2\rightarrow f_1 f_2}=\frac{2\pi}{\hbar}|\langle i_1, i_2 | V|f_1, f_2\rangle|^2\delta(\epsilon_{i_1} + \epsilon_{i_2} - \epsilon_{f_1} - \epsilon_{f_2}). $$ One needs to be careful when calculating the matrix element to account properly for the exchange term, but what interests us at the moment is the energy conservation, codified here in the delta-function. If the first electron changes its energy between two levels, $\epsilon_{i_1}-\epsilon_{f_1} = E_n-E_m$, then the second electron must increase its energy by the same amount: $\epsilon_{f_2}-\epsilon_{i_2} = E_n-E_m$.

* If, e.g., we were discussing Auger recombination in a semiconductor crystal, then satisfying this condition would be rather easy, since there are many states available in the conduction band. In an atom with discrete levels this could however be tricky, so being ejected into the continuum of states is the only option.
* Another consideration is the pure size of the energy change. If we take the hydrogen-like spectrum $$ E_n=-\frac{E_1}{n^2}, $$ then $$ E_n - E_1 = E_1\left(1-\frac{1}{n^2}\right) > \frac{E_1}{n^2} \text{ for any } n>1. $$
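The claim that the released energy $E_1\left(1-\frac{1}{n^2}\right)$ always exceeds the level-$n$ binding energy $\frac{E_1}{n^2}$ can be checked numerically; a minimal sketch, taking $E_1 = 13.6$ eV (hydrogen, purely illustrative):

```python
# Numeric check: the energy released when an electron drops from level n
# to level 1, E1*(1 - 1/n**2), exceeds the level-n binding energy E1/n**2
# for every n > 1 (equivalent to n**2 > 2).
E1 = 13.6  # eV, hydrogen ionization energy (illustrative)

for n in range(2, 50):
    released = E1 * (1 - 1 / n**2)
    binding_n = E1 / n**2
    assert released > binding_n, n
print("inequality holds for n = 2..49")
```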
{ "language": "en", "url": "https://physics.stackexchange.com/questions/617066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
If radio signals attenuate when travelling through space, then what kinds of emissions are we looking for when searching for extraterrestreal life? My understanding is that radio waves travel forever, like ripples in a pond, but attenuate with distance. They get mixed with other signals and become cosmic noise. I'm looking at these answers, which suggest that the TV and radio signals that originate from earth attenuate and are indistinguishable from background noise after some distance. This makes me ask: How do we hope to detect signals from alien civilizations, if those signals attenuate as well? Are there other types of signals, that can cross vast interstellar distances (> 600 light year) and carry data that is recoverable?
You are right, an omnidirectional radio broadcast would be very faint by the time it reached earth, and very difficult to distinguish from background noise. Projects such as SETI use large amounts of computing power and sophisticated signal analysis algorithms to try to detect a faint signal with a pattern indicating an artificial origin. Instead of broadcasting in all directions, an extra-terrestrial intelligence that wanted to say "hello, here I am" could encode information in a powerful narrow laser beam and direct it at the star systems near its home that it thought most likely to harbour life. There are various "optical SETI" projects that are looking for such laser signals.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/617251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Are there any books/portions of books/particular topics of study that go into great detail to describe the physics of a rope (or any ropelike body)? I don't know why, but lately I can't get the physics of a rope off my mind, specifically what happens to a coiled, or otherwise not taut, rope when it is pulled taut. I don't really know if it's useful in any way, but I can't get it off the brain. For example, here's a couple of laid-out ropes. I think it would be interesting to describe what happens to each point on the rope as the ends of the rope are pulled outward and made taut. These are just some simple 2D examples; things would get pretty complex when ropes are allowed to overlap and are also given a width. I tried googling for any resources on this but I couldn't find any (I did find a cool article on the "Liquid Rope-Coil Effect" though). So I was wondering if anyone knew of some sources that went into detail on this subject or anything similar, if such a thing exists?
Continuum mechanics covers this, though it includes many topics beyond ropes; the principles are the same. Besides, there do exist some books about rope specifically, such as Theory of Wire Rope. Just try googling for books about rope/wire/string mechanics and you will find more. I did some research about this before, and if you are into code, you can find some programs on GitHub for rope modelling, which may be helpful for understanding the numerical approach to dealing with ropes (finite element analysis, of course).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/617384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is perfectly monochromatic light always polarized and vice-versa? Is perfectly monochromatic light always polarized and polarized light always monochromatic? I am not totally sure but I think that the answer to the first is 'YES'. Because if a radiation is unpolarized, its polarization changes randomly with time so that a Fourier analysis would immediately tell us that it composed of many frequencies. What about polarized light? Does it have to be monochromatic? If no, please give a counterexample.
You are correct, but the light does have to be really really monochromatic and the polarization has to be strictly defined and exactly fixed. As you point out, if you examine the light through a fixed polarizing filter and if its amplitude changes over some period of time, then there must be more than one frequency present. Polarization can be linear or circular or something in between, but must be fixed for the duration. The length of the observation and the amount of change that can be measured obviously limit the degree to which you can claim only a single frequency is present. [ To be fair, 'monochromatic' would usually be interpreted to mean a very narrow band of frequencies, as from a laser, rather than just one frequency. Strictly, a single frequency necessarily lasts unchanged for an infinite length of time ] Polarized light is not necessarily monochromatic. A single beam may contain two wavelengths (frequencies) both with the same well-defined fixed polarization. [Edit] The emission or measurement of a single photon is a quantum-mechanical event associated with a certain region of spacetime. As such it cannot be strictly monochromatic. By definition a single frequency would occupy all of spacetime. Nothing in the real world can be considered purely monochromatic
{ "language": "en", "url": "https://physics.stackexchange.com/questions/617600", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Is it impossible to use a 1m by 1m suction cup? I was learning about fluids and I randomly thought of suction cups. I think they're a really cool application of air pressure. When you squeeze the air out, the outside air pressure exerts a huge force on the cup, meaning it will hold in place. However, I was thinking about how I could actually squeeze out the air in the first place, given that the pressure of air is $101kPa$. At first I was horrified - my little arms can't dish out 100,000N! But then I realised that the average suction cup is only a few $cm^2$ so if you multiply the pressure by the total area, it is actually quite a small force so it's quite manageable. However, if I did have a suction cup measuring 1m by 1m, then wouldn't it be impossible for me to be able to exert 100,000N to actually remove the air in the first place? Does this mean that there is a limit on the size of suction cups because otherwise humans are not able to actually squeeze out the air from them?
I don't think the air pressure matters - that's always present. By your logic as stated, you couldn't push all the air out of an air mattress either. All that matters is the resistance of the suction cup material itself, as that is what you're deforming.
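For the arithmetic in the question, the ideal holding (pull-off) force is just pressure times area; a short sketch makes the scale difference concrete:

```python
# Force needed to pull a suction cup straight off a surface against
# atmospheric pressure, assuming a perfect vacuum under the cup:
# a small everyday cup versus the 1 m x 1 m cup from the question.
P_ATM = 101_000.0  # Pa, atmospheric pressure as given in the question

def pull_off_force(area_m2, pressure=P_ATM):
    """Ideal holding force = pressure difference x area."""
    return pressure * area_m2

small_cup = pull_off_force(5e-4)   # a 5 cm^2 cup
big_cup = pull_off_force(1.0)      # the 1 m x 1 m cup

print(f"5 cm^2 cup: {small_cup} N")   # ~50 N
print(f"1 m^2 cup:  {big_cup} N")     # ~100,000 N
```

Note that this is the force needed to pull the cup off, not the force needed to press the air out in the first place, which is set by the cup material's stiffness.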
{ "language": "en", "url": "https://physics.stackexchange.com/questions/617848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How are propagator and two-point function related? Assume that we have a QFT with one scalar field $\phi$ with mass $m$ and the Lagrangian $$\begin{aligned} \mathcal{L}_{\mathrm{EFT}, \mathrm{off}}=& \frac{1}{2}\left(\partial_{\mu} \phi\right)^{2}-\frac{1}{2} m^{2} \phi^{2} \\ &-\frac{C_{4}}{4 !} \phi^{4}-\frac{C_{6}}{6 ! M^{2}} \phi^{6}-\frac{\tilde{C}_{6}}{4 ! M^{2}} \phi^{3} \square \phi-\frac{\hat{C}_{6}}{2 M^{2}}(\square \phi)^{2} \end{aligned}.$$ The propagator for $\phi$ in momentum space will then be something like $$\frac{i}{p^2-m^2 + i\epsilon}.$$ The Feynman rule for this propagator is usually represented by a straight line. In some lecture notes (that I'm unfortunately not allowed to share here) we consider all $1PI$ diagrams at tree-level which contribute to the two-point function, i.e. only one diagram, a straight line. The amplitude of this diagram is written down as $$\mathcal{M}_2 = i (p^2-m^2).$$ Question I don't understand why the propagator and amplitude don't coincide. I mean, just looking at the units these two things don't seem to be related, but we still use the same description in terms of Feynman diagrams, which seems weird. Is there a connection? How can I see it?
The main point is that the 2-pt functions for the generator $W_c[J]$ of connected diagrams and the generator $\Gamma[\phi_{\rm cl}]$ of 1PI diagrams are each other's inverse (up to factors of $i$), cf. e.g. this & this related Phys.SE posts. In particular note that for a 1PI diagram the external legs are stripped/amputated. In this process, the free propagator/2pt-function then turns into its own inverse.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/618099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Standing waves - why do wavelengths fit perfectly? When reading about standing waves it is always said that only certain wavelengths are "allowed". I understand that these wavelengths are a requirement for there to be a standing wave due to the boundary conditions, but what does "allowed" mean in this context? When creating a wave on a fixed string, does this mean that the wavelength will always be an appropriate fraction of the length of the string such that a standing wave exists - i.e it is not possible to create waves with a wavelength that would not create a standing wave on a fixed string? Or are the wavelengths completely dependent on the source that created the wave, and standing waves are simply the special case/coincidence when the wavelength is appropriate?
It is a good question. The single standing waves are the modes that get emphasized in teaching, but the real motion of strings is due to sums of such modes. Consider a Slinky. One can make it swing in the fundamental mode or in modes with one or two nodes by adding energy from your hands at just the right intervals. You will feel that this gives positive feedback to that mode. That also is what happens at the lip of an organ pipe etc. The standing waves are resonances. Or one can look at what happens when you make a pulse on the Slinky, or on this Java simulation by Falstad. The pulse travels back and forth, getting reflected at both ends. So there is a periodic signal, with many harmonics of the fundamental frequency. Falstad's simulation also has the option of adding a driving force with arbitrary frequency. It is like pushing a swing at arbitrary times: it won't add energy to the system.
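For a string of length $L$ fixed at both ends, the standard textbook relation behind the "allowed" wavelengths is $\lambda_n = 2L/n$ (an integer number of half-wavelengths must fit between the two fixed ends); a minimal sketch:

```python
# "Allowed" standing-wave wavelengths on a string of length L fixed at
# both ends: lambda_n = 2*L/n for n = 1, 2, 3, ...
# (Standard relation for fixed-fixed boundary conditions.)
def allowed_wavelengths(L, n_max):
    return [2 * L / n for n in range(1, n_max + 1)]

print(allowed_wavelengths(1.0, 4))   # 2.0, 1.0, 2/3, 0.5 metres
```

Any other wavelength can still be excited momentarily, but it is not reinforced on reflection, so it does not build up into a resonance.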
{ "language": "en", "url": "https://physics.stackexchange.com/questions/618524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How do position operators ($\hat{x}$, $\hat{y}$, $\hat{z}$, $\hat{r}$) act on orbital angular momentum states? Consider an orbital angular momentum state $\vert l,m\rangle$, I am pretty sure when $\hat{r} = \sqrt{\hat{x}^2 + \hat{y}^2 + \hat{z}^2}$ and $\hat{z}$ act on it, the resulting states will still be $\vert l, m\rangle$, since $\hat{r}$ and $\hat{z}$ both commute with $L_z$ and $L^2$. However, I don't know what what are the resulting states of $\hat{y}\vert l, m\rangle$ and $\hat{x}\vert l, m\rangle$, as $\hat{x}$ and $\hat{y}$ both do not commute with $L_z$.
To find the action of $x,y,z$ on $|l,m\rangle$ you need to use the Clebsch-Gordan procedure for combining the $l=1$ vector defined by $x-iy \sim |l=1,m=-1\rangle$, $z\sim |l=1,m=0\rangle$, $x+iy\sim |l=1,m=+1\rangle$ with the $|l,m\rangle$ state. You will get a linear combination of $l-1$, $l$, and $l+1$ states.
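The coupling can be made explicit with sympy's Clebsch-Gordan machinery (a sketch; the quantum numbers below are an illustrative example, not from the question):

```python
# Clebsch-Gordan coefficients arising when the z ~ |1,0> component of
# the position vector acts on an |l=2, m=1> state; nonzero coefficients
# appear for total angular momentum l-1, l, l+1.
from sympy import simplify
from sympy.physics.quantum.cg import CG

l, m = 2, 1  # illustrative choice of |l, m>
for L_tot in (l - 1, l, l + 1):
    c = CG(1, 0, l, m, L_tot, m).doit()   # <1 0; l m | L_tot m>
    print(f"L = {L_tot}: coefficient = {simplify(c)}")
```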
{ "language": "en", "url": "https://physics.stackexchange.com/questions/618682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Disintegration of the deuteron Considering the scattering of gamma rays on a deuteron, which leads to its break up according to: $$ \gamma+ d \longrightarrow p +n $$ we can use the conservation of energy and momentum in order to determine the minimum photon energy in order to make this reaction possible, which happens to be very close to the binding energy of the deuteron nuclide. We assume that the speeds of the proton and neutron after scattering are highly non-relativistic. So, the conservation of energy: $$ p_{\gamma}c + m_dc^2=m_pc^2+m_nc^2+ \frac{1}{2}m_pv_p^2+\frac{1}{2}m_nv_n^2 \tag 1$$ and of momentum: $$ \vec p_{\gamma} = m_p \vec v_{p}+m_n \vec v_{n} \tag 2$$ Squaring equation $(2)$ we obtain: $$ p_{\gamma}^{ \space 2} = m_p^{ \space 2}v_{p}^{ \space 2}+m_n^{ \space 2}v_{n}^{ \space 2} + 2m_pm_n \vec v_{n}\cdot \vec v_p \tag 3$$ But if we are looking for the minimum photon energy, then we must find the case where the proton and neutron speeds are also minimum. We assume $m_p \approx m_n$ and $v_n \approx v_p$. With these approximations and the deuteron mass formula $m_d = m_n + m_p -\frac{B_d}{c^2}$ we should be able to get to this equation: $$ p_{\gamma}^2 = 2 m_d \left( \frac{1}{2}m_pv_p^2 + \frac{1}{2}m_nv_n^2 \right) \tag 4$$ but I'm having trouble getting there. Can someone show me how to do it?
If you analyze the threshold reaction in the center-of-momentum frame, rather than the lab frame where the deuteron is initially at rest, the energy-conservation equation becomes $$ pc + m_d c^2 + \frac{p^2}{2m_d} = m_p c^2 + m_n c^2 \tag{1*} $$ Here $p$ is the magnitude of the initial momentum for both the photon and the deuteron (because that's how we get to the zero-momentum frame), and we have the minimum-energy configuration in the final state. Therefore, when we boost back into the lab frame, a threshold disintegration will be one where $\vec v_p = \vec v_n = \vec v_\text{lab}$ are exactly the same, whether the fragments have similar masses or not. If the fragments have the same velocity, your equation (3) becomes \begin{align} p_\gamma^2 &= \left( m_p \vec v_\text{lab} + m_n \vec v_\text{lab} \right)^2 \tag{$3'$} \\ &= (m_p + m_n)^2 v_\text{lab}^2 \\ &= 2(m_p + m_n) \left(\frac{m_p}2 + \frac{m_n}2\right) v_\text{lab}^2 \tag{$4'$} \end{align} which is the same as your (4) if you make the approximation that the binding energy is small and $m_p+m_n\approx m_d$. Perhaps the author of the text you're following meant to write about temporarily neglecting the binding energy but thinko'd it into a hint that $m_p\approx m_n$ instead. The point of the construction seems to be to set up $p_\gamma^2 / 2m_d$ to replace the ugly kinetic-energy term in (1). It would be more correct to keep that term as $p_\gamma^2/2(m_p+m_n)$. However the next thing the author does is a binomial expansion to find the first-order correction to the threshold energy, and that algebra is quite tedious enough without keeping track of two different mass terms. Note that if you work in the center-of-momentum frame, starting with my (1*), you don't get quite the same quadratic equation for $p$ as your reference gets for $p_\gamma$ in the lab frame. 
The first-order difference between the two results, \begin{align} p &= \frac{B}{c} \left( 1 - \frac12 \frac{B}{m_d c^2} + \mathcal O\left(\frac B{mc^2}\right)^2 \right) \\ p_\gamma &= \frac{B}{c} \left( 1 + \frac12 \frac{B}{m_{(p+n)}c^2} + \mathcal O\left(\frac B{mc^2}\right)^2 \right) \end{align} is because, in the center-of-momentum frame, part of the energy required to destroy the deuteron is included in the deuteron's kinetic energy; meanwhile in the lab frame, the photon has to carry not only the binding energy but also enough kinetic energy to allow the fragments to carry the final momentum. At first order, the two results are connected by the Doppler shift when you boost from the center-of-momentum frame to the lab frame, but the algebra is much more tractable in the $m_p+m_n=m_d$ approximation. At higher orders there starts to be plenty of higher-order funny business.
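The two first-order expressions can be compared numerically; a sketch using approximate values $B = 2.224$ MeV and $m_p + m_n \approx 1877.8$ MeV$/c^2$ (these numbers are assumptions, not given above):

```python
# Numeric comparison of the first-order threshold momenta in the two
# frames, using approximate deuteron data (MeV units, c = 1 bookkeeping).
B = 2.224          # MeV, deuteron binding energy (approximate)
m_sum = 1877.8     # MeV/c^2, m_p + m_n (approximate)
m_d = m_sum - B    # MeV/c^2, deuteron mass

p_com = B * (1 - 0.5 * B / m_d)      # center-of-momentum frame, MeV/c
p_lab = B * (1 + 0.5 * B / m_sum)    # lab frame (deuteron initially at rest)

print(f"p (COM frame): {p_com:.5f} MeV/c")
print(f"p (lab frame): {p_lab:.5f} MeV/c")
# The lab-frame photon needs slightly more momentum than B/c,
# the COM-frame photon slightly less:
assert p_lab > B > p_com
```

The correction is of order $B/(mc^2) \sim 10^{-3}$, which is why the threshold energy is "very close to" the binding energy, as the question notes.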
{ "language": "en", "url": "https://physics.stackexchange.com/questions/619155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What's the difference between "optical amplification" and "magnifying"? Optical amplification used widely in astronomical observatories and "magnifying" used in microscopes. What's the difference between "optical amplification" and "magnifying"?
Optical amplification is a process where one captured photon triggers the release of an electron inside the apparatus which then releases more electrons, etc. in a process called photomultiplication. This can also be done when the incident photon causes the injection of an electron into a semiconductor junction, which is then sent to an amplifier which generates a far larger current in response. In this case, the amplification is performed electronically rather than optically. Magnification is the use of lenses or mirrors which collect photons over a large aperture and focus it down to an imaging apparatus which can be the human eye, a piece of photographic film, or a solid state device or photomultiplier as described above.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/619243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can buoyancy be explained in terms of kinetic and potential energy exchanges between the buoyant object submerged in a fluid? As observed in the above diagram, a wooden block is held submerged in water within a container by an external force (such as a string attaching the block to the bottom of the container) which counteracts the upward force due to buoyancy. Clearly, if the force is terminated (for instance, by cutting the string attaching the block to the bottom of the container), the block would instantly start shooting up, while water of the same volume would fall down on account of gravity. I would like to understand the potential and kinetic energy changes in the above system for both the wood and the water, after the block starts moving upwards. I am an elementary physics student and the question may sound stupid, but I am requesting help to clear my concepts. Any help would be sincerely appreciated. How can buoyancy be explained in terms of the energy exchanges between the water and a wooden block submerged in the water within a container, as the block moves upwards because of the force due to buoyancy?
When the wooden block is released, the net upward force $F$ is the buoyant force (due to the pressure difference between its top and bottom surfaces) minus the weight. That force is constant until the block reaches the surface of the water, so we can say that the potential energy is $Fh$, where $h$ is the depth of the block below the water surface. However, the force field is not conservative, and a drag force resisting the movement is present along the path. So the potential energy doesn't translate (completely) into kinetic energy of the block; part of it becomes kinetic energy of the water, including waves at the surface.
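A minimal numeric sketch of that net force and the $Fh$ energy budget, with assumed illustrative densities and dimensions (none of these values come from the question):

```python
# Net upward force on a submerged wooden block and the associated
# potential energy F*h available as it rises (drag losses ignored).
RHO_WATER = 1000.0  # kg/m^3
RHO_WOOD = 600.0    # kg/m^3 (illustrative wood density)
G = 9.81            # m/s^2

volume = 0.001      # m^3, a 10 cm cube
depth = 2.0         # m, initial depth below the surface

F_net = (RHO_WATER - RHO_WOOD) * volume * G   # buoyancy minus weight
potential_energy = F_net * depth              # energy released over the rise

print(f"net upward force: {F_net:.2f} N")             # ~3.92 N
print(f"available energy: {potential_energy:.2f} J")  # ~7.85 J
```

In practice only part of that energy ends up as kinetic energy of the block; the rest goes into moving the water.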
{ "language": "en", "url": "https://physics.stackexchange.com/questions/619291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Does the magnetic field produced by a current carrying wire, exert a magnetic force on the wire itself? I have to calculate the pressure on a current carrying wire. Since there is a pressure on the wire, there must be a force on it, which is a magnetic force. Does the magnetic field produced by the wire, exert a magnetic force on the wire itself? If this is true, why?
If the wire is straight, then no: due to axial symmetry the magnetic field just compresses the wire a little, but no net force is present. However, if the wire isn't straight, then the net magnetic force of the wire on itself may be non-zero. For example, consider a wire in the shape of an upside-down letter J, in which current flows from bottom to top.

      /
     /
    /
    |
    |
    |
    |
    |
    |
    |
    I

The shorter oblique segment will experience a magnetic force in the north-west direction from the longer vertical segment. At the same time, the longer segment will experience a force in the west direction. These forces add up to a net non-zero magnetic force on the wire due to itself.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/619515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
If the world had four spatial dimensions, then area would be a tensor? In three dimensions area is a vector because two dimensions have a direction relative to the third. If the world had four spatial dimensions then area would be a tensor? And what form then the laws of physics which imply the concept of area, as electromagnetism or the definition of pressure, would have?
Roughly speaking, yes! In 3 spatial dimensions a 2D thing (an area) uses 2 of the 3 available dimensions. So even though it isn't really a vector (an area has units of length squared, not length) an area can be described by picking out the single direction that is orthogonal to it. In 4D an area has a 2D space orthogonal to it, so one needs a 2-index thing (a tensor) to specify its orthogonal space, or you could equally use 2 indices to specify its actual (own) space. For more details this wiki-page is very clear: https://en.wikipedia.org/wiki/Exterior_algebra If you have some number of dimensions then a thing (e.g. a volume) can be denoted using either the number of dimensions it has, or the number it is missing. Not only is this why a 2D area in 3D can be specified by a 1D vector (3-2=1), it is also why a volume in a 3D space can be represented by a scalar (3-3=0). In 4D a 3-volume is a vector quantity (4-3=1).
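A small numerical illustration of this counting argument, as a hypothetical sketch: the antisymmetric tensor $A_{ij} = a_i b_j - a_j b_i$ describes the area spanned by two vectors, and in 3D its independent components reproduce the familiar cross-product vector.

```python
import numpy as np

def area_tensor(a, b):
    """Antisymmetric 'area' tensor A_ij = a_i b_j - a_j b_i."""
    return np.outer(a, b) - np.outer(b, a)

# 3D: the 3 independent components of A reproduce the cross product.
a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, -1.0])
A = area_tensor(a, b)
dual = np.array([A[1, 2], A[2, 0], A[0, 1]])   # Hodge dual of the bivector
assert np.allclose(dual, np.cross(a, b))

# In n dimensions an antisymmetric rank-2 tensor has n(n-1)/2 components.
for n in (3, 4):
    print(n, "dimensions:", n * (n - 1) // 2, "independent area components")
# 3 components fit in a vector; in 4D the 6 components need a true 2-index tensor.
```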
{ "language": "en", "url": "https://physics.stackexchange.com/questions/619777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Approximating the angle between the trajectory I started to learn physics this semester and I found the following task: A contestant is participating in a half-marathon race (straight-line length $L = 21095$ meters), running in a zig-zag manner (constantly overtaking other contestants), holding a constant angle $\alpha$ between his trajectory and the straight path. After finishing the run, the contestant noticed that the distance travelled was 500 meters longer than $L$. Approximate the angle $\alpha$ without using a calculator. I have no idea how to approach this problem so any help would be useful.
The green line is the path of the "zig-zag" runner. Thus: $$s\cos(\alpha)=a\tag 1$$ So if he is doing this $n$ times during the half-marathon you obtain that: $$n\,(s-a)=\Delta L\tag 2$$ With Eq. (1) and (2) you obtain that: $$\cos(\alpha)=\frac{a\,n}{\Delta L+a\,n}$$ Edit with the remarks from @AgniusVasiliauskas: $$a\,n=L~\Rightarrow~\cos(\alpha)=\frac{L}{\Delta L+L}\approx 0.977~\Rightarrow~\alpha\approx 12^\circ$$
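A quick numerical check, plugging the problem's numbers into $\cos(\alpha) = \frac{L}{\Delta L + L}$ (illustrative verification only):

```python
import math

L = 21095.0   # straight-line half-marathon length, m
dL = 500.0    # extra distance run, m

cos_alpha = L / (L + dL)
alpha = math.degrees(math.acos(cos_alpha))
print(f"cos(alpha) = {cos_alpha:.4f}, alpha = {alpha:.1f} degrees")

# Without a calculator: cos(alpha) ~ 1 - dL/L, and cos(alpha) ~ 1 - alpha^2/2
# give alpha ~ sqrt(2*dL/L) radians.
alpha_approx = math.degrees(math.sqrt(2 * dL / L))
print(f"small-angle estimate: {alpha_approx:.1f} degrees")
```

Both routes land near 12 degrees, which is why the small-angle trick works without a calculator.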
{ "language": "en", "url": "https://physics.stackexchange.com/questions/619944", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Can the Hamiltonian be interpreted as the "speed" of unitary evolution? The Schrodinger equation $$i\hslash \frac{d}{dt} \psi = H \psi$$ means a quantum state $\psi(t)$ evolves unitarily, that is, $$\psi(t) = \exp(-\frac{i}{\hslash} H t) \psi(0)$$ where $\psi(0)$ is the initial state at time $t = 0$. Suppose we scale the energy levels of the Hamiltonian by some factor $\zeta > 0$, letting $$\tilde{H} = \frac{1}{\zeta} H$$ then for the evolution from $\psi(0)$ to a target state $\phi$, $$\phi \equiv \psi(\tilde t = \zeta t) = \exp(-\frac{i}{\hslash} \frac{1}{\zeta} H \zeta t) \psi(0)$$ where the time $\tilde t$ to reach $\phi$ has to scale contravariantly to compensate for the change in $H$. So can it be said the Hamiltonian $H$ is the "speed" of unitary evolution?
The Hamiltonian itself is not a speed, but you're right that evolution speed is proportional to the energy scale that defines the dynamics. To define quantum evolution speed, consider first a classical signal with a finite range of frequencies in its Fourier spectrum. The width of the range tightly bounds the number of distinct values that can occur per unit time---this is Nyquist's signaling rate bound. Intuitively, doubling the width lets you double all frequencies in a Fourier sum, making everything happen twice as fast. Time evolution of a quantum wavefunction is similar, with energy playing the role of frequency in determining how fast the state can change. Any well-defined energy width of a wavefunction bounds the rate at which distinct (orthogonal) states can occur in its time evolution. In the quantum literature, these bounds are called quantum speed limits.
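The scaling argument in the question can be checked directly. Here is a minimal sketch with a toy two-level diagonal Hamiltonian (ħ = 1 units, illustrative numbers), showing that $H \to H/\zeta$ is exactly undone by $t \to \zeta t$:

```python
import numpy as np

hbar = 1.0
H = np.diag([0.0, 1.5])                    # toy two-level Hamiltonian
psi0 = np.array([1.0, 1.0]) / np.sqrt(2)   # initial superposition state

def evolve(H, t, psi):
    """exp(-i H t / hbar) psi for a diagonal H (elementwise exponential)."""
    return np.exp(-1j * np.diag(H) * t / hbar) * psi

zeta, t = 3.0, 2.0
psi_a = evolve(H, t, psi0)                 # original Hamiltonian, time t
psi_b = evolve(H / zeta, zeta * t, psi0)   # rescaled Hamiltonian, time zeta*t
assert np.allclose(psi_a, psi_b)
print("scaling H -> H/zeta is exactly compensated by t -> zeta*t")
```

The same identity holds for any Hamiltonian, since $\exp(-i (H/\zeta) (\zeta t)) = \exp(-i H t)$; the diagonal form just keeps the exponential trivial to compute.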
{ "language": "en", "url": "https://physics.stackexchange.com/questions/620306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Could gravity be a weak force because gravitons are absorbing gravitons before they reach a target rest mass? Could gravitons be similar to the gluons in the colour force? Can gravitons absorb other gravitons before they reach their target rest mass?
Classical general relativity has many well-known phenomena involving gravitational fields interacting with themselves, most dramatically the geon solutions which describe gravitational radiation collapsing to a black hole. Since classical general relativity has self-interacting gravity, it would make sense for there to be solutions where gravitons absorb and emit other gravitons in a full quantum gravitational theory, but this is unrelated to why one would expect gravity to be weak (as you said, gluons absorb and emit gluons, and no one would characterize the strong force as "weak").
{ "language": "en", "url": "https://physics.stackexchange.com/questions/620626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Cosmology - an expansion of all length scales From the link Is non-mainstream physics appropriate for this site? "a question that proposes a new concept or paradigm, but asks for evaluation of that concept within the framework of current (mainstream) physics is OK." Here is a concept; evaluation within the framework of current (mainstream) physics would be welcome. Is it possible that an expansion of all length scales can be happening, as in the cartoon below? It shows all lengths increasing: the size of atoms, people, stars and the distances between all objects. Each physical quantity and constant varies depending on the number of length dimensions in it. For example, since Planck's constant has a length dimension of 2, its change with time is $h=h_0e^{2Ht}$ where $H$ is an expansion constant and $t$ is time. \begin{array}{c|c|c} {quantity} & {length-dimension} & {change}\\ \hline length & 1 & e^{Ht}\\ mass & 0 & constant\\ time & 0 & constant\\ h & 2 & e^{2Ht}\\ c & 1 & e^{Ht}\\ G & 3 & e^{3Ht}\\ Area & 2 & e^{2Ht}\\ \end{array} etc... Can this type of expansion be ruled out A) locally or B) by distant measurements, e.g. of distant stars or galaxies, from within mainstream physics? The expansion referred to occurs for the whole universe. It is proposed because there could be another reason for the redshift of light from distant stars. If the energy of a photon is conserved during flight, but was emitted when Planck's constant was lower, then from $E=hf$, the frequency of the received photon would be lower and the light from a distant star would be redshifted. A bounty has now been added. A convincing reason why the above type of expansion cannot be occurring would be welcome. Here is the work done so far. It is to determine the apparent matter density that would be concluded in a flat universe, with a matter density of $1.0$ and the type of expansion above. 
It leads to the conclusion that the matter density would be measured to be $0.25$ or $0.33$ from galaxy clusters and supernovae data respectively. A diagram of supernovae data is below, and then more details of the calculations. The diagrams show the distance modulus predicted by the type of expansion in the question (top curve). Concordance cosmology with a matter density of 0.3 and 1.0 are the middle and bottom curves respectively. The second diagram is an enlargement of the first. Matter density from galaxy clusters etc. Traditionally the scale factor of the universe at redshift $z$ is $a=\frac{1}{1+z}\tag{1}$ If the energy of the photon is conserved during flight, from $E=hf$ and $h=h_0e^{2Ht}$, for an emitted wavelength of $\lambda_1$ $z=\frac{\lambda_1e^{2Ht}-\lambda_1}{\lambda_1}$ $1+z = e^{2Ht}=a^{-2}$ , ($a$ decreases with increasing $z$ in an expanding universe) so $a=\frac{1}{\sqrt{1+z}}\tag{2}$ For small distance $d$ $\frac{v}{c} =z= e^{2H\frac{d}{c}}-1=\frac{2Hd}{c}$ $v=2Hd\tag{3}$ i.e. Hubble's law is still valid, but we identify the expansion parameter $H$ with half of Hubble's constant $H_0$. This leads to the conclusion that the matter density will be measured to be $\frac{1}{4}$ of the true value, as follows. $\Omega_m = \frac{\rho}{\rho_{crit}}\tag{4}$ $\rho_{crit}=\frac{3H(z)^2}{8\pi G}\tag{5}$ If the value for $ H(z)$ used in $\rho_{crit}$ is twice the true value, then the apparent matter density would be measured as $0.25$ instead of $1$. Matter density from supernovae data. 
In LCDM the Hubble parameter is $H(z)=H_0\sqrt{\Omega_m {(1+z)}^3+\Omega_k{(1+z)}^2+\Omega_\Lambda}$ The comoving distance is obtained from $D_M=\int_0^z \frac{c}{H(z)} dz$ Using a flat universe approximation, omitting $\frac{c}{H_0}$ and using $m$ for $\Omega_m$ ,the comoving distance, for small $z$ is $\int_0^z(m(1+3z+3z^2+\dots )+1-m)^{-\frac{1}{2}}dz$ $=\int_0^z(1+3mz+3mz^2)^{-\frac{1}{2}}dz =\int_0^z(1-\frac{3}{2}mz+\dots)dz$ $=z-\frac{3mz^2}{4}\tag{6}$ For the type of expansion that we hope to rule out, The co-moving distance is $D_M=\int_t^0 \frac{c}{a(t)} dt$ $a=\frac{1}{\sqrt{1+z}}$ $\frac{da}{dt}=\frac{da}{dz} \times \frac{dz}{dt} ={-\frac{1}{2}(1+z)^{-\frac{3}{2}}}\times\frac{dz}{dt}$ $H(z)=H=\frac{\dot{a}}{a}=\frac{-1}{2(1+z)}\times\frac{dz}{dt}$ $dt=\frac{-1}{2H(1+z)}dz$ $D_M=\int_0^z \frac{c}{2H}{(1+z)}^{-\frac{1}{2}} dz$ $D_M=\frac{2c}{H_0}(\sqrt{1+z}-1)\tag{7}$ again omitting $\frac{c}{H_0}$ and for small $z$, $(7)$ becomes $2(1+\frac{1}{2}z-\frac{1}{8}z^2-1)$ $=z-\frac{z^2}{4}\tag{8}$ there is a match between $(6)$ and $(8)$ if $m=\frac{1}{3}$ So we conclude from Galaxy and supernovae data, or combinations of data sets, that the matter density would be measured, with the type of expansion in the question, at between $0.25$ and $0.33$. As it is measured at this value, it's concluded that the expansion cannot be ruled out this way. A diagram with supernovae data is above. Is there a convincing reason why the expansion described should be ruled out?
"the frequency of the received photon would be lower" Why would it? Since $c=\lambda f$, and $c$ and $\lambda$ change in the same proportion, $f$ is constant. All you are doing is changing the units in which length is measured. You get exactly the same effect if you measure the wavelength in furlongs instead of metres, and denominate the speed of light in furlongs per second - frequency remains unchanged.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/620794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 1 }
Do humans use the doppler effect to localize sources of sound? Consider a source of sound such as a person speaking, or a party of people, which makes a continual drone of the same frequency. If a human shakes their head side-to-side with sufficient angular speed, they are in effect obtaining different frequencies of the same sound source and should be able to apply the Doppler effect to approximately localize (from prior experience) the sound source. Do humans use the Doppler effect to localize sources of sound, and have there been any studies proving this? Edit: A link to the Weber-Fechner law and a link to the wiki article discussing the just-noticeable-difference (JND) for music applications were added to the OP for reference, based on the accepted answer.
A person would not be able to localize a sound using the Doppler effect created by shaking their head. Say a person shakes their head at 20 cm/s. The speed of sound is about 330 m/s. This gives a frequency change of 0.06%. The "just noticeable difference" to discern two frequencies played in succession is about 0.6% (source), so about an order of magnitude too coarse.
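The arithmetic behind this estimate, as a quick sketch (the head speed is of course only a rough guess):

```python
# Fractional Doppler shift from head motion vs. the pitch-discrimination
# threshold (just-noticeable difference, JND).
v_head = 0.20     # head speed while shaking, m/s (rough guess)
c_sound = 330.0   # speed of sound in air, m/s

shift = v_head / c_sound   # fractional frequency change
jnd = 0.006                # ~0.6% JND for tones played in succession

print(f"Doppler shift: {100 * shift:.3f}%")   # ~0.061%
print(f"JND:           {100 * jnd:.1f}%")
print(f"the shift is ~{jnd / shift:.0f}x smaller than the threshold")
```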
{ "language": "en", "url": "https://physics.stackexchange.com/questions/621253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Singularity in Robertson Walker metric with flat spatial slices In Sean Carroll's GR book, pg. 76, a special case of the Robertson-Walker metric, where the spatial slices are flat is given by $$ds^2=-dt^2+a^2(t)[dx^2+dy^2+dz^2].$$ It was said that $t=0 $ represents a true singularity of the geometry (the 'Big Bang') and should be excluded from the manifold. The range of the $t$ coordinate is therefore $0<t<\infty$. Why is $t=0$ a singularity? What is infinite or undefined when $t=0$ ?
Remember that the solution to the Friedmann equations for the scale factor is $$a(t) = a_0 t^{\lambda}$$ where $\lambda$ is a constant. This is obviously zero at $t=0$. At this point the spatial part of the metric $$ds^2=-dt^2+a^2(t)[dx^2+dy^2+dz^2]$$ vanishes.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/621443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Spherical Lens Instead of Parabolic Lens I know that using the paraxial approximation, spherical lenses behave like parabolic lenses. It seems that there is no reason to use spherical lenses instead of parabolic ones (because they are used in the same way, and parabolic lenses do not require the paraxial approximation) apart from the fact that parabolic lenses are more complicated to make.

* Do spherical lenses have advantages over parabolic lenses?
* Are there any applications that require specifically spherical lenses (and not parabolic ones)?
Once you deal with practical lens systems that operate at higher numerical apertures, for example a microscope objective, the paraxial conditions no longer hold. Spherical surfaces can be made highly accurately and then combined in order to control aberrations precisely, in a way that would be much more expensive with any other type of surface. Spherical surfaces are the natural outcome of polishing two materials against each other, so people have been making spherical surfaces for hundreds of years. There is an entire industry built up around the generation and mounting of spherical surfaces. That is why they are significantly cheaper than aspheric surfaces and usually of higher quality. (In some cases, the design objectives for a lens cannot be achieved with spherical surfaces, and then aspheres will offer an advantage.)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/621585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Is there a way to prove that different angular momentum components anticommute without using a specific matrix representation? I know spin-1/2 Pauli matrices satisfy the anticommutation relationship $\{\sigma_i, \sigma_j\}=2\delta_{ij} \mathbb{I}$. I wonder how this can be proved without writing down the matrix representation of these matrices and performing matrix multiplication. As the matrix representation of angular momentum operator (and hence Pauli matrix) can be written down just using the commutation relationship $[\hat{J_x},\hat{J_y}]=i\hbar\hat{J_z}$ and its cyclic substitutions, I think there should be a way to prove this anticommutation relationship just using the commutation relationship and without using any specific matrix representation of Pauli matrices. I tried to follow a similar fashion as in determining matrix representation of $\hat{J_x}$ and $\hat{J_y}$, but the use of the lowering and raising operators $\hat{J_-}$ and $\hat{J_+}$ (which I believe may be useful in the proof) only occurs when evaluating the matrix entry $\langle s,m'|J_\pm|s,m\rangle$, which is something I want to avoid. As a result, I failed to finish the proof.
It will not be possible to derive the anti-commutators from the commutation relations alone, because not every representation of the commutation relations (i.e., of the algebra $\mathfrak{su}(2)$) satisfies $\{L_i, L_j\} = 2\delta_{ij}$. For example, $\sigma_x$ and $\sigma_y$ anti-commute, but the spin-1 matrices $$ L_x = \frac{\hbar}{\sqrt 2} \pmatrix{ 0 & 1 & 0 \cr 1 & 0 & 1 \cr 0 & 1 & 0 } \qquad \text{and}\qquad L_y = \frac{\hbar}{\sqrt 2} \pmatrix{ 0 & -i & 0 \cr i & 0 & -i \cr 0 & i & 0 } $$ do not. It seems to me that the anti-commutation relations of the Pauli matrices are a coincidence.
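The counterexample can be verified numerically; a quick sketch (ħ = 1 units) checking that both representations satisfy the same commutation algebra while only the spin-1/2 one anticommutes:

```python
import numpy as np

def anticomm(A, B):
    return A @ B + B @ A

# Spin-1/2: the Pauli matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

# Spin-1 matrices (hbar = 1).
s = 1 / np.sqrt(2)
Lx = np.array([[0, s, 0], [s, 0, s], [0, s, 0]], dtype=complex)
Ly = np.array([[0, -1j * s, 0], [1j * s, 0, -1j * s], [0, 1j * s, 0]])
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)
assert np.allclose(Lx @ Ly - Ly @ Lx, 1j * Lz)   # [Lx, Ly] = i Lz holds

# Same commutation algebra, but only the Pauli pair anticommutes:
print("spin-1/2 {sx, sy} is zero:", np.allclose(anticomm(sx, sy), 0))
print("spin-1   {Lx, Ly} is zero:", np.allclose(anticomm(Lx, Ly), 0))
# spin-1/2 {sx, sy} is zero: True
# spin-1   {Lx, Ly} is zero: False
```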
{ "language": "en", "url": "https://physics.stackexchange.com/questions/621736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Newtonian physics and equivalence principle: a doubt on acceleration and gravity First of all, the famous Einstein's elevator experiment is quite clear in my head, in both of its versions. But now, consider the following: Suppose you wake up inside a car that is traveling in a perfectly straight path on an autobahn (but you don't know that). The car has a constant velocity $v$ and is a self-driving car with totally darkened windows. You don't have any information about the outside world. After a time $t$ travelling in the straight path, the car enters a curve. You then feel an acceleration (of exactly $9.8\ \mathrm{m/s^2}$). Now, in my opinion, the person inside the car cannot say that the centrifugal acceleration is different from an artificial constant gravitational field. The equivalence principle states something similar, since a person inside an elevator in a gravitational field is equivalent to a person inside an elevator which is accelerated at $9.8\ \mathrm{m/s^2}$. Furthermore, we can construct a ring-like structure to produce, via circular motion, an artificial gravitational field. So, can I say that any accelerated frame, due to the equivalence principle, is equivalent to a gravitational field?
An accelerated frame is only locally equivalent to a gravitational field. Globally, you will not be able to "fake" the gravitation of a planet by just accelerating. Only if the passenger ignores tidal forces will he be unable to distinguish an accelerated frame from a static gravitational field.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/621890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Newton's Laws of Motion, Pulleys, Rope and tension I was solving some questions to apply my concepts, and I came across the Atwood machine and pulley-block problems. Consider the following for example: The pulley is massless and frictionless; the string, too, is ideal. Why does the book say that the tension in the green string is $2T$ if the tensions in the two sides of the lower string are $T$ and $T$? If we look closely, the strings only apply a normal force on the pulley, so how is it equal to two times the tension $T$? Also, if the pulley had mass and friction (the string being massless but with friction), would the tensions in the lower string be the same throughout? And what about the upper string? And what happens if everything is non-ideal?
Imagine removing the black string and masses and instead just grabbing the pulley with your arms. If you pull with a force of $T$ with each arm, shouldn't the green string in the top then hold back against both? The green string tension should be $2T$. Back to your scenario, the situation is the same. The green string is holding up two masses all in all whereas each half of the black string is only carrying one mass.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/621995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How do speakers vibrate for a complex music? I understand how a speaker could produce simple sound and constant frequency. How does it produce more complex sounds like music? How can you calculate what frequency to oscillate at when there are multiple instruments and voices in a song? There must be a limit to the complexity of the oscillations and the sound that can be produced.
A loudspeaker can be modeled as a linear AC motor driving a flexible membrane. The motor components (in this case, the voice coil and the cone) have a certain amount of mass, and the membrane's clamped circumference possesses compliance, and when taken together they result in a fundamental resonant frequency. When the speaker is driven by any AC waveform whose frequency components are less than or equal to that resonant frequency, the cone will produce an analog representation of the driving waveform no matter how complicated it might be. By inserting compliance into the cone itself, in the form of circular ribs pressed into it, it is possible to extend the frequency response of the speaker above its fundamental resonance by allowing the outermost annular mass elements of the cone to decouple from the centermost elements of the cone at frequencies above the fundamental, progressively turning a 12" diameter cone into an 8" or a 6" or a 4" or a 2" cone with less mass and a higher resonant frequency. This allows the centermost portion of the cone to respond to high frequencies contained in the driving waveform, at the same time the entire cone is still responding as a single unit to frequencies at or below the (original) fundamental. In this way, a single ribbed speaker cone can reproduce a very broad range of frequencies- all of which were contained in the driving signal- simultaneously.
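The key point is that however many instruments are playing, the cone only ever receives a single composite waveform: the sum of all the component tones. A hypothetical sketch of that superposition (made-up frequencies and amplitudes):

```python
import numpy as np

rate = 44100                 # samples per second (CD quality)
t = np.arange(rate) / rate   # one second of signal

# Three simultaneous "instruments": the cone only ever sees their sum.
tones = [(220.0, 1.0), (440.0, 0.5), (1320.0, 0.25)]   # (freq Hz, amplitude)
signal = sum(a * np.sin(2 * np.pi * f * t) for f, a in tones)

# The cone's displacement tracks this one composite waveform; a Fourier
# transform (much like the ear's cochlea) recovers the separate tones.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(t), 1 / rate)
top = sorted(int(round(f)) for f in freqs[np.argsort(spectrum)[-3:]])
print("strongest components:", top, "Hz")   # [220, 440, 1320]
```

There is no need for the cone to "decide" which frequency to oscillate at: it simply follows the one driving waveform, and linearity does the rest.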
{ "language": "en", "url": "https://physics.stackexchange.com/questions/622173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
High school experiments recommendations I am from India and, you know, teachers don't show the experiments; they only give the theory. But I want to at least see videos of experiments so I'm satisfied that the physics is correct. Can you tell me some websites/channels which show high school experiments?
Well, as you are asking about a high school course, I believe that you should try the YouTube videos by Walter Lewin. He was a professor at MIT and has a unique way of teaching the subject, with demonstrations wherever they are required. The videos cover Newtonian Mechanics, Electromagnetism, Vibrations and Waves, Bohr's model and a brief introduction to Quantum Mechanics. Along with this he has quiz questions as well, and he demonstrates them in his own way.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/622308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
If $I \propto V$, then why is $R = V/I$ and not $I/V$? I know that the current flowing through a conductor is directly proportional to the potential difference across its ends (by Ohm's Law). Hence,

* I ∝ V
* V ∝ I
* R = V/I, where R is a constant (Resistance)

But why can't it be derived this way?

* I ∝ V
* I = RV
* R = I/V

Won't these two derivations contradict each other? Thank you
It is true that $V/I$ is a constant for resistors, and also that $I/V$ is a constant. But, of course, they are not the same constant. $R=V/I$ gives the resistance of a resistor, while $G=I/V$ gives the less commonly-used conductance of a resistor. Neither of these is a "derivation" of the resistance- $R=V/I$ is a definition of resistance, and you can use Ohm's law to prove that it is constant.
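A tiny numerical illustration of the two constants, using made-up measurement data for a single 10 Ω resistor:

```python
# (volts, amps) pairs for one ohmic resistor -- illustrative numbers.
pairs = [(1.0, 0.10), (2.0, 0.20), (5.0, 0.50)]

for V, I in pairs:
    R = V / I   # resistance, ohms
    G = I / V   # conductance, siemens
    print(f"V={V} V, I={I} A  ->  R={R:.1f} ohm, G={G:.2f} S")

# R comes out 10.0 ohm in every row and G comes out 0.10 S in every row:
# two different constants, related by G = 1/R, not a contradiction.
```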
{ "language": "en", "url": "https://physics.stackexchange.com/questions/622463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Excess Pressure on a curved surface with two radii of curvature While studying surface tension, I noticed that the following formula for the excess pressure across a curved liquid film makes use of two radii of curvature: $$2T\left(\frac{1}{R_{1}} + \frac{1}{R_{2}}\right)$$ I have not been able to understand the significance of two radii of curvature for a surface.
The best way to visualize the two different radii for a given surface is to cut the surface with a pair of perpendicular planes. Each plane cuts the surface in a plane curve, and each of those curves has its own radius of curvature. Try this out: use a torus to apply the above argument. That will clear your doubt.
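Two limiting cases make the role of the two radii concrete; a numerical sketch with illustrative values (the factor 2 in the formula is for a film with two surfaces, e.g. a soap bubble):

```python
# Excess pressure across a thin liquid film: dP = 2*T*(1/R1 + 1/R2).
T = 0.025    # surface tension, N/m (roughly soapy water)
R = 0.02     # 2 cm radius

# Spherical bubble: the two perpendicular cross-sections are identical
# circles, so R1 = R2 = R.
dP_sphere = 2 * T * (1 / R + 1 / R)   # = 4T/R

# Cylindrical film: one cross-section is a circle of radius R, the
# perpendicular one is a straight line, i.e. R2 -> infinity.
dP_cylinder = 2 * T * (1 / R + 0)     # = 2T/R

print(f"sphere:   {dP_sphere:.1f} Pa")
print(f"cylinder: {dP_cylinder:.1f} Pa")
```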
{ "language": "en", "url": "https://physics.stackexchange.com/questions/622759", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If electrons can be created and destroyed, then why can't charges be created or destroyed? I read on Wikipedia that electrons can be created through beta decay of radioactive isotopes and in high-energy collisions, for instance when cosmic rays enter the atmosphere. Also, that they can be destroyed using pair annihilation. We also know that charge is a physical property which can be associated with electrons. My question is why can't charges be created or destroyed if electrons can?
Feynman once asked more or less the same question (page 129 of "Quantum Field Theory" by Lewis H. Ryder): I remember that when someone had started to teach me about creation and annihilation operators, that this operator creates an electron, I said 'How do you create an electron? It disagrees with conservation of charge'. - R. P. Feynman So you are in good company. I think by now it has become clear to you that whenever an electron appears it must take its charge from other charged particles. An electron can never be created on its own. Either it takes its charge from other particles, or a positron is created at the same time. Likewise, an electron can't be destroyed without another equally, but oppositely, charged particle being created. When the electron is isolated, it can never be destroyed. Charges can be created, like the charges of an electron and a positron in pair production, but their total value must always be zero (i.e., total charge can't be created).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/623146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 8, "answer_id": 5 }
Mesoscopic Bose-Einstein Condensate Bose-Einstein condensates of molecules of a few daltons have been already created, so I was wondering: would making a Bose-Einstein condensate on a system of Quantum Dots, due to their properties, cause the system to display any different effects?
There is no boson condensate. It was imagined to fulfill the idea of big-bang inertial gravity, energy without matter, and all that resulted in dark energy and dark matter. Below is an image to show how the finiteness of the number of particles gives the boson distribution, and that there is no infinite or high energy accumulating in a given region. This is the distribution of bosons derived in a slightly unconventional way. Here I took the total number of particles N as finite. You can't have probability without knowing the total number of arrangements.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/623393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How do I find the approximate surface area of a chicken? I'm working on building a chicken army and I'm trying to find out how much metal or kevlar (still deciding) I need to make armor for the chickens. This measurement does not need to be exact; I'm just trying to get an estimate of how much I will need. You will be spared when my chickens take over the world if you give me a working answer.
The astrophysics answer. Take a representative chicken, put it in a cold room that is lined with infrared detectors measuring the flux in several wavelength bands. Assume the chicken is a blackbody and fit a Planck function to estimate both temperature and emitting surface area.
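Inverting the Stefan-Boltzmann law then gives the area from the fitted temperature and the total measured power. A sketch with guessed numbers (the chicken is of course only approximately a blackbody, and every value below is a rough assumption):

```python
# A = P / (sigma * (T_chicken^4 - T_room^4)) -- back-of-envelope inversion.
sigma = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
T_chicken = 314.0   # chicken body temperature ~41 C, in K
T_room = 275.0      # the cold room, K
P_net = 15.0        # net radiated power seen by the detectors, W (a guess)

A = P_net / (sigma * (T_chicken**4 - T_room**4))
print(f"estimated radiating surface area: {A:.3f} m^2")
```

With these guesses the estimate lands in the few-hundred-square-centimetre range, which at least sets the scale for the armor order.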
{ "language": "en", "url": "https://physics.stackexchange.com/questions/623492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 8, "answer_id": 1 }
Understanding a Poynting vector equation I'm reading this section in the Griffiths Introduction to Electrodynamics book. I'm trying to understand where equation 9.57 comes from (the middle part of the equation at least; I see where the $cu\;\hat{\mathbf{z}}$ part on the right comes from). Does it come directly from calculating $\frac{1}{\mu_0}\mathbf{E} \times \mathbf{B}$ for a wave travelling in the $\hat{\mathbf{z}}$ direction?
For an electromagnetic wave, $\mathbf E$ and $\mathbf B$ are orthogonal to each other, and both are orthogonal to the direction of wave propagation. For the monochromatic plane wave propagating in the $z$ direction, if $\mathbf E$ is in the $x$ direction, we deduce that $\mathbf B$ is in the $y$ direction. We also know that the magnitude $B$ is proportional to $E$: $B=\sqrt{\epsilon_0\mu_0}E$. Thus ${\mathbf E}\times{\mathbf B}$ is a vector in the $z$ direction, with magnitude $\sqrt{\epsilon_0\mu_0}E^2$. The result (9.57) follows. If $\mathbf E$ is in another direction in the $x$-$y$ plane, the result does not change.
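A quick numerical check of this (the field amplitude is an arbitrary illustrative value):

```python
import numpy as np

eps0 = 8.8541878128e-12        # F/m
mu0 = 4e-7 * np.pi             # H/m
c = 1 / np.sqrt(eps0 * mu0)

E0 = 100.0                     # field amplitude, V/m (arbitrary)
E = np.array([E0, 0.0, 0.0])       # E along x
B = np.array([0.0, E0 / c, 0.0])   # B along y, magnitude E/c

S = np.cross(E, B) / mu0                       # Poynting vector
u = 0.5 * eps0 * E0**2 + B[1]**2 / (2 * mu0)   # energy density

print("S =", S, "W/m^2")      # points along +z
print("S_z / u =", S[2] / u)  # equals c: energy density transported at c
```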
{ "language": "en", "url": "https://physics.stackexchange.com/questions/623580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is no acceleration a cause or consequence of no net force? If a body is moving with constant velocity, or is at rest, then the net force on it must be $0$. If the net force on a body is $0$, then it must be moving with constant velocity or must be at rest. Is $0$ net force a consequence of being at rest or moving with constant velocity or is moving at constant velocity or being at rest a consequence of $0$ net force?
The latter. I think it is most intuitive to think about the $F=ma$ equation as a statement about cause and effect. The force on an object arises due to something physical (e.g. a stretched spring connected to your object), and its magnitude depends on the configuration of your system. The acceleration is a consequence of the force. Thus zero acceleration would imply no net force, but the reason there is no net force is because the individual forces acting on your object cancel out.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/623887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 14, "answer_id": 5 }
How much force applied to canal wall from that cargo ship given 220,000 tons and 12.8 knots? In case you've been hiding under a rock, or are reading this in the future: "that cargo ship" is a huge story right now (3/26/2021). A brief summary: well basically a few days ago one of the world's largest cargo ships somehow managed to dig its bulbous bow into the east wall of the Suez canal. The back end of the ship is resting on the west end and no other ships can pass. I read 220,000 tons and 12.8 knots https://www.baltimoresun.com/news/nation-world/ct-aud-nw-cargo-ship-stuck-egypt-suez-canal-20210324-oytkblgh5ngihlwitcy7hdsnwi-story.html and thought it might make a fun little physics question. Another one I thought of is how much volume of water that much weight displaces...
If the stopping distance of the ship can be determined, for example by measuring the depth of penetration of the bow into the canal wall, and if we ignore the resistance of the water to the ship's motion, we can estimate the average impact force using the work-energy theorem, which states that the net work done on an object equals its change in kinetic energy, or $$F_{ave}d=\frac{1}{2}mv^2$$ Where $F_{ave}$ is the average impact force, $d$ is the stopping distance of the ship, and $v$ is the ship's velocity just prior to impact. If we use 50 meters as a guess, we get 4,346,866,404.5 joules / 50 m = 86,937,328 newtons. Hope this helps.
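As a rough cross-check, the arithmetic above can be scripted. The exact figure depends on which ton (metric vs. short) and which knot conversion you assume; the sketch below uses metric tons and the standard 1852 m-per-hour knot, so it lands near, but not exactly on, the numbers quoted above:

```python
# Work-energy estimate of the average impact force: F_avg * d = (1/2) m v^2.
# All inputs are assumed round numbers from the news report.
m = 220_000 * 1000        # ship mass in kg, reading "tons" as metric tons
v = 12.8 * 1852 / 3600    # 12.8 knots in m/s (1 knot = 1852 m per hour)
d = 50.0                  # guessed stopping distance in metres

kinetic_energy = 0.5 * m * v**2   # joules
f_avg = kinetic_energy / d        # average force in newtons

print(f"KE = {kinetic_energy:.3e} J, F_avg = {f_avg:.3e} N")
```

With these conventions the kinetic energy comes out around 4.8 GJ and the average force close to $10^8$ N; using short tons instead gives a figure close to the one quoted above.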
{ "language": "en", "url": "https://physics.stackexchange.com/questions/623977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Does physics explain why the laws and behaviors observed in biology are as they are? Does physics explain why the laws and behaviors observed in biology are as they are? I feel like biology and physics are completely separate and although physics determine what's possible in biology, we have no idea how physics determine every facets of biology. We know roughly how forces in physics may impact biological systems, but not every little connections and relations that exist between physics and biology. Am I wrong?
We are very good at describing small quantum mechanical systems, because today we have QM, and we do know that the world is ultimately quantum mechanical in nature. That being said, when it comes to predicting bigger biological systems (like our own human nature), our capabilities are very limited. We, who are reading this question, are all made up of QM systems, elementary particles, but if we asked a question like "can you explain why you are asking this question at all", then explaining it based just on QM is not possible. There are two main reasons for that: (1) though we are very good at describing small QM systems, the task becomes extremely difficult for larger systems, see What are the primary obstacles to solve the many-body problem in quantum mechanics?; (2) biology adds something extra, you can call it instinct, consciousness, life, nature, whatever you want, but it is governed by a biological program, and described by a programming language, the DNA (this point about the programming language is nicely described in @andrewsteane's answer). We are still in our infancy at describing a biological system's behavior based on DNA, but in the future we might be able to do so with much more efficiency. https://en.wikipedia.org/wiki/DNA_computing So the ultimate answer to your question is that biological systems are qualitatively more than just a "bunch" of elementary particles.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/625503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40", "answer_count": 12, "answer_id": 11 }
How does Stefan Boltzmann law work for absorption? I have come across the Stefan Boltzmann law and I have a couple of doubts based on its use in the net heat flow expression $$dq/dt = e A\sigma T^4$$ Now according to my textbook, if the temperature of the surroundings is $T_1$ and the temperature of the body is $T_2$, and the body has an area $A$ with emissivity $e$, the heat radiation emitted by the body is given by $$dq/dt = e A\sigma T_2^4$$ Now, this is where my problem starts. Next the book states, **the rate at which the surroundings in immediate contact with the body absorb the radiation is given by** $$dq/dt = a A\sigma T_1^4$$ Where $a$ is the absorptivity of the surroundings, which is approximately equal to that of the body. Ok, now my question is: how are we sure that the rate of heat absorption is $dq/dt = a A\sigma T_1^4$? I don't see any reason why this is actually true. I couldn't find any reason on the internet. I believe this might be due to my lack of knowledge in this subject :/ I also wanted to know if there was any kind of law that states that the amount of heat absorbed is equal to the amount of heat radiated.
I believe it was Gustav Kirchhoff who found out that the emissivity $\epsilon(\lambda)$ of a body at wavelength $\lambda$ equals its absorptivity $A(\lambda)$ at the same wavelength, $$\epsilon(\lambda) = A(\lambda)$$ In physics, we are not really able to answer why this is the case. However, let's consider the theoretical case where a body levitates inside a box. If the body and the box are in thermal equilibrium at time $t_0$, the equality $\epsilon(\lambda) = A(\lambda)$ ensures that they remain in thermal equilibrium throughout time. In this example, it does not matter whether or not the body and the box have the same absorptivity, or if they differ (e.g. we use two different materials): The equality ensures that the body absorbs the same amount of energy it emits.
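Combining the emission and absorption rates with Kirchhoff's $a = e$ gives the net flow $dq/dt = eA\sigma(T_2^4 - T_1^4)$, which vanishes in equilibrium. A quick sketch with assumed numbers:

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant in W m^-2 K^-4

def net_radiative_power(e, area, t_body, t_surr):
    """Net rate of heat loss by radiation, using Kirchhoff's a = e:
    emission e*A*sigma*T_body^4 minus absorption e*A*sigma*T_surr^4."""
    return e * area * SIGMA * (t_body**4 - t_surr**4)

# Hypothetical numbers: a 1 m^2 body at 310 K in 293 K surroundings, e = 0.9.
q = net_radiative_power(0.9, 1.0, 310.0, 293.0)
print(f"net heat loss = {q:.1f} W")
# In thermal equilibrium (equal temperatures) the net flow is exactly zero,
# which is the consistency requirement discussed above.
```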
{ "language": "en", "url": "https://physics.stackexchange.com/questions/625633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How would velocity of sound, the fundamental frequency and wavelength of sound vary when the temperature of an organ pipe is increased? Here is my approach to this, neglecting any thermal expansion of the pipe: By the Laplace formula for the speed of sound, $V=\sqrt{\frac{\gamma P}{\rho}}$ where $P$ is the pressure, $\gamma$ is the adiabatic constant and $\rho$ is the density of the medium. Assuming the gas to be an ideal gas, we can use the ideal gas equation. Hence we have: $V=\sqrt{\frac{\gamma RT}{M}}$ where $R$ is the gas constant, $T$ is the absolute temperature, and $M$ is the molar mass of air. So, when we increase the temperature, clearly, the velocity would increase as well. Coming to the fundamental frequency, we know $f_0 \propto V$, where $f_0$ is the fundamental frequency and $V$ the velocity of sound. Hence, the fundamental frequency would also increase. But how would we get the variation of the wavelength with temperature? I thought of using the relation $V=f\lambda$ where $V$ is the velocity of sound, $f$ is the frequency and $\lambda$ is the wavelength. According to that, the wavelength should also increase, but according to my book, that's not right. Why does this happen?
The wavelength is independent of temperature. You can see here that the wavelength depends only on the length of the organ pipe and the harmonic of the resonance, rather than the temperature of the gas itself. The only way of changing the wavelength is by increasing the harmonic or the length of the pipe itself. Hope this helps answer your question.
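A small numerical sketch of this point (assuming an open pipe of length 1 m and ideal diatomic air): raising $T$ raises $v$ and $f_0$ in proportion, while $\lambda$ stays pinned by the geometry:

```python
import math

GAMMA, R, M = 1.4, 8.314, 0.029   # diatomic air: adiabatic index, gas constant, molar mass (kg/mol)

def speed_of_sound(t_kelvin):
    """Laplace formula v = sqrt(gamma * R * T / M)."""
    return math.sqrt(GAMMA * R * t_kelvin / M)

L = 1.0          # assumed pipe length in metres, open at both ends
lam = 2 * L      # fundamental wavelength is fixed by the geometry alone

for t in (280.0, 300.0, 320.0):
    v = speed_of_sound(t)
    print(f"T = {t:.0f} K: v = {v:.1f} m/s, f0 = {v / lam:.1f} Hz, wavelength = {lam:.1f} m")
```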
{ "language": "en", "url": "https://physics.stackexchange.com/questions/625836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Force on the bottom of a tank full of liquid - Hydrostatic Pressure or Gravity Imagine a tank filled with water that has some height $h$ and area $A$ at the bottom, but as it goes up, for example at height $h/2$, its area is now $A/2$. What's the correct way to calculate the force at the bottom of the tank? (Let's ignore atmospheric pressure for now) (1) If I use $W=mg$, we get $F=W=ρVg=ρ(\frac{Ah}{2}+\frac{Ah}{4})g=\frac{3}{4}ρghA$ (2) If I calculate the hydrostatic pressure at the bottom, it's $p=ρgh$, and then $F=pA=ρghA.$ Which one is the correct one and why?
Imagine what would happen if you took a closed tank and put the gas above the liquid under high pressure. The hydrostatic pressure at the bottom could be made arbitrarily large, yet the weight of the water would stay the same. So calculating the pressure force on the bottom (your second case) is a different thing from calculating the weight of the water (your first case).
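With assumed numbers, both calculations can be run side by side; the force on the bottom really is $\rho g h A$, and the extra $\rho g h A/4$ beyond the weight is supplied by the shoulder of the tank pushing down on the water where the cross-section narrows:

```python
# Tank from the question: bottom area A up to height h/2, area A/2 above that.
# Assumed round numbers; only the comparison matters, not the values.
rho, g = 1000.0, 9.81    # water density (kg/m^3), gravitational acceleration
A, h = 1.0, 2.0          # bottom area (m^2), total water height (m)

weight = rho * g * (A * h / 2 + (A / 2) * (h / 2))   # W = (3/4) rho g h A
pressure_force = rho * g * h * A                     # F = p_bottom * A

# The mismatch is carried by the horizontal "shoulder" at h/2: the water below
# pushes up on it, so it pushes down on the water with p(h/2) * (A/2).
shoulder = rho * g * (h / 2) * (A / 2)
print(weight, pressure_force, weight + shoulder)
```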
{ "language": "en", "url": "https://physics.stackexchange.com/questions/625915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 5 }
How to know if the error is in a law or in uncertainty of the measurement? I read these words in a (great) answer to this question: There are errors that come from measuring the quantities and errors that come from the inaccuracy of the laws themselves But how do we know that the errors are in the measuring or in the law about which we make measurements?
But how do we know that the errors are in the measuring or in the law about which we make measurements? Laws in physics theories are extra axioms used to pick out, from all the mathematical solutions, those that are descriptive and predictive of data. Whenever data do not fit predictions, one has found the limits of validity of the theory; a new theory is needed outside those limits. At the accuracies needed for the GPS system to predict positions on earth correctly, Newton's laws fail, and special and general relativity have to be pulled in. So it is the failure of a theory's predictions for the data that exposes the errors in the laws. It is evident that the measurement errors should be small enough to show a discrepancy with the theoretical predictions using the law.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/626034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
What is the physical importance of topological quantum field theory? Apart from the fascinating mathematics of TQFTs, is there any reason that can convince a theoretical physicist to invest time and energy in it? What are/would be the implications of TQFTs? I mean is there at least any philosophical attitude behind it?
TQFTs were not discovered by mathematicians - they were actually discovered by physicists, so one should expect there to be physical motivation for the theory. One reason this is difficult to see is that mathematicians have since taken over the theory, so the physical motivation is hard to recognise. One reason to study them is as toy models for quantum gravity; for example, the Barrett-Crane model. This was first published in 1995. It quantises GR when written in the Plebanski formulation.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/626151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 0 }
Why should $\lim_{V\to\infty} \frac{1}{V} \ln Q(z, V, T)$ have a finite limit? In the book Intro. Statistical Physics by K.Huang, on page 174, it is given that In the thermodynamic limit $V \rightarrow \infty,$ we expect that: $$ \frac{1}{V} \ln Q(z, V, T) \underset{V \rightarrow \infty}{\longrightarrow} \text { Finite limit. } $$ where Q is the grand canonical partition function. This is expected but is there any mathematical or physical reason and/or evidence/explanation for why this is/should be the case?
There is no mathematical proof simply because, in general, it is not true that the limit exists or is finite. Of course, we would expect a finite limit as a precondition for a thermodynamic interpretation of the statistical mechanics formula. The right question is not about the reason for a finite limit, but rather: do we have a good characterization of the Hamiltonians which ensure the existence of the thermodynamic limit? Indeed, a set of sufficient conditions for the existence of the thermodynamic limit, ensuring at the same time the correct convexity properties of the resulting fundamental equation, is known for different classes of systems. For an overview see this paper.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/626310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
In thermodynamic limit, how is $\frac{V}{(2 \pi)^{3}} \int \mathrm{d}^{3} k= \int \frac{\mathrm{d}^{3} r \mathrm{~d}^{3} \boldsymbol{P}}{h^{3}}$? In the book Intro. Statistical Physics by K.Huang, on page 106, it is given that Because of indistinguishability, the $N$-body wave function is labelled by the set $\left\{\alpha_{1}, \cdots, \alpha_{N}\right\}$, in which the ordering of the set is irrelevant. [...] The number $n_{\alpha}$ is called the occupation number of the single-particle state $\alpha$, with the allowed values [...] For an $N$-particle system, they satisfy the condition $$ \sum_{\alpha} n_{\alpha}=N $$ [...] For free particles, it is convenient to choose the single-particle functions to be plane waves. The label $\alpha$ corresponds to the wave vector $\mathbf{k}$: [...] In the thermodynamic limit, we can replace the sum over plane-wave states by an integral: $$ \sum_{k} \rightarrow \frac{V}{(2 \pi)^{3}} \int \mathrm{d}^{3} k=\int \frac{\mathrm{d}^{3} r \mathrm{~d}^{3} \boldsymbol{P}}{h^{3}} $$ I got everything except how the equality $\frac{V}{(2 \pi)^{3}} \int \mathrm{d}^{3} k= \int \frac{\mathrm{d}^{3} r \mathrm{~d}^{3} \boldsymbol{P}}{h^{3}}$ holds. Where does this equality come from?
It is simply a change of variables. With the definition $$\vec{p}=\hbar \vec{k} = \frac{h}{2\pi} \vec{k},$$ we can conclude that for every direction, $i\in\{x,y,z\}$ $$\frac{dk_i}{2\pi} = \frac{dp_i}{h}$$ therefore $$\int \frac{d^3\vec{k}}{(2\pi)^3} \rightarrow \int\frac{d^3\vec{p}}{h^3}$$ the other factor is just volume $$V = \int d^3\vec{r}$$
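A quick numerical check of the replacement itself: count the discrete modes $\mathbf{k}=\frac{2\pi}{L}\mathbf{n}$ below a cutoff and compare with $\frac{V}{(2\pi)^3}\int \mathrm{d}^3k$ over the same sphere (box side chosen arbitrarily here):

```python
import math

# Count plane-wave modes k = (2*pi/L) * (nx, ny, nz) with |k| < K and compare
# with V/(2*pi)^3 * integral d^3k over the same sphere. Box size is arbitrary;
# the agreement improves as L (hence V) grows.
L = 30.0
K = 2.0
n_max = int(K * L / (2 * math.pi)) + 1
step2 = (2 * math.pi / L) ** 2

count = sum(
    1
    for nx in range(-n_max, n_max + 1)
    for ny in range(-n_max, n_max + 1)
    for nz in range(-n_max, n_max + 1)
    if step2 * (nx * nx + ny * ny + nz * nz) < K * K
)

integral = L**3 / (2 * math.pi) ** 3 * (4.0 / 3.0) * math.pi * K**3  # = V K^3 / (6 pi^2)
print(count, integral)
```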
{ "language": "en", "url": "https://physics.stackexchange.com/questions/626581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Series combination of springs When a spring mass system is connected vertically with two massless springs in series, whose spring constants are $k_1$ and $k_2$, to a block of mass $m$, we know that equal forces act on both the springs. Let that force during oscillations be $F$. When we calculate the effective spring constant $k_s$, why don't we say the net force acting on the system is $2F$? Finding the net force acting on the above system: When the block is attached, the system attains equilibrium position through displacements $x'_1$ and $x'_2$. At equilibrium: $2F'=mg$ (where $F'$ is the magnitude of the spring force initially in each spring) So, $k_1x'_1+k_2x'_2=mg$ (equation 1) When the system is pulled down it makes oscillations; now let the total elongation be $x$, the elongation in spring 1 be $x_1$ and the elongation in spring 2 be $x_2$. Total spring force $= -k_1x'_1-k_2x'_2-k_1x_1-k_2x_2$ Total forces acting on the system $= -k_1x'_1-k_2x'_2-k_1x_1-k_2x_2+mg = -mg-k_1x_1-k_2x_2+mg$ (from equation 1) So, total force $= -k_1x_1-k_2x_2 = F_1+F_2=2F$ (as we know that both forces are equal) So the net force acting on the system is $2F$. The way I calculated the effective spring constant is: $x=x_1+x_2$ $2F/k_s = F/k_1 + F/k_2$ $2/k_s = 1/k_1 +1/k_2$ But that is not a correct equation. What's wrong in taking the net force acting on the system as $2F$?
When the springs (assumed to be massless) hang without the load, they have zero extension. When the mass is attached, the same tension acts throughout the series combination, so in fact $$mg = k_1x_1 = k_2x_2.$$ You wrote that the resultant force is $F_r = F_1+F_2 = 2F$, but the two spring forces do not add here: they are internal tensions in the same line, and the net force pulling on the mass comes from the lower spring alone. What you should have done is add the displacements, $$x=x_1+x_2,$$ so that $$\frac{mg}{k_s}=\frac{mg}{k_1}+\frac{mg}{k_2}$$ where $k_s$ is an effective spring constant. This will then give $$k_s=\left(\frac{1}{k_1}+\frac{1}{k_2}\right)^{-1}$$
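A quick numerical check of the series formula (hypothetical spring constants): the tension $mg$ is the same in both springs, and it is the extensions, not the forces, that add:

```python
def series_spring_constant(k1, k2):
    """Effective constant of two massless springs in series."""
    return 1.0 / (1.0 / k1 + 1.0 / k2)

k1, k2 = 100.0, 300.0   # N/m, hypothetical values
m, g = 2.0, 9.81

ks = series_spring_constant(k1, k2)
x1 = m * g / k1         # extension of spring 1 under tension mg
x2 = m * g / k2         # extension of spring 2 under the same tension mg
x_total = m * g / ks

print(ks, x_total, x1 + x2)   # the extensions add; the tensions do not
```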
{ "language": "en", "url": "https://physics.stackexchange.com/questions/626954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Angular momentum commutation relations The operator $L^2$ commutes with each of the operators $L_x$, $L_y$ and $L_z$, yet $L_x$, $L_y$ and $L_z$ do not commute with each other. From linear algebra, we know that if two hermitian operators commute, they admit complete sets of common/simultaneous eigenfunctions. The way I understand this statement is that the eigenfunctions of both operators are the same. So, if that were the case, that would mean that $L_x$ has the same eigenfunctions as $L^2$. The same goes for $L_y$ and $L_z$. That would mean that $L_x$, $L_y$ and $L_z$ all have the same eigenfunctions, which doesn't seem to be true since they do not commute with each other. How is this resolved?
If two observables $A$ and $B$ commute, i.e. if $[A,B]=0$, then there exists a common eigenbasis. In other words, there is a basis $\{|\phi_n\rangle\}_n$ for which $$A|\phi_n\rangle= a_n\, |\phi_n\rangle \quad\text{and}\quad B|\phi_n\rangle= b_n\, |\phi_n\rangle \quad.$$ Now consider the case where $A$ also commutes with another observable $C$. Then this does not imply that $C|\phi_n\rangle= c_n\, |\phi_n\rangle$ for all $n$: The basis $\{|\phi_n\rangle\}_n$ is, in general, not an eigenbasis of $C$. The fact that $L^2$ commutes with all $L_x$, $L_y$ and $L_z$ does not imply that the e.g. $L_x$ and $L_y$ commute. Indeed, as you pointed out, they do not commute and hence do not share a common eigenbasis, although each of them shares a common eigenbasis with $L^2$.
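For $\ell=1$ this can all be checked with explicit $3\times 3$ matrices (taking $\hbar=1$). A small pure-Python sketch: $L^2$ is proportional to the identity, so it commutes with each component, while $[L_x,L_y]=iL_z\neq 0$:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def commutator(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(len(A))] for i in range(len(A))]

s = 1 / 2**0.5
Lx = [[0, s, 0], [s, 0, s], [0, s, 0]]
Ly = [[0, -1j * s, 0], [1j * s, 0, -1j * s], [0, 1j * s, 0]]
Lz = [[1, 0, 0], [0, 0, 0], [0, 0, -1]]

LxLx, LyLy, LzLz = matmul(Lx, Lx), matmul(Ly, Ly), matmul(Lz, Lz)
L2 = [[LxLx[i][j] + LyLy[i][j] + LzLz[i][j] for j in range(3)] for i in range(3)]

print(commutator(Lx, Ly))  # equals i * Lz, so Lx and Ly share no common eigenbasis
print(commutator(L2, Lx))  # the zero matrix: L^2 commutes with Lx
print(L2)                  # 2 * identity, i.e. l(l+1) with l = 1
```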
{ "language": "en", "url": "https://physics.stackexchange.com/questions/627060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 3 }
Why are electronic transitions in atoms modelled as oscillating electric dipole radiation? Sources such as Eugene Hecht and Griffiths claim that oscillating electric dipole radiation is a great approximation for radiation generated from atoms and molecules during electronic transitions. I don't really understand why this is true.
It seems that they refer to the dipole approximation, which is not the same as describing the radiation as that of an oscillating dipole (a description that works well for antennas, but not for atoms). The essence of the approximation is that the interaction between the atom and the electric field can be represented as $$ H=-\mathbf{d}\cdot\mathbf{E}, $$ where $\mathbf{d}$ is the dipole moment and $\mathbf{E}$ is the electric field. The approximation is made possible by the smallness of the atoms (about $0.1$ nm in size) in comparison to the typical electromagnetic wavelengths (e.g., a few hundred nanometers in the visible range). Here is my post that gives a bit more mathematical detail. In some cases, notably when the dipole moment is zero, one has to resort to higher-order approximations (quadrupole, octupole, etc.)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/627419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Hyperbolic isometries in the context of General Relativity In the context of hyperbolic geometry, it is possible to create a classification for isometries. I would like to know if these isometries have any particular meaning in the context of general relativity. Is it possible to understand these isometries from the point of view of general relativity?
There is a Bianchi classification of 3d Lie algebras up to isomorphism. Hence this also classifies all 3d Lie groups. Now, in the ADM formalism of General Relativity, we consider the evolution of a 3d spatial slice. The symmetry group of this slice is a Lie group. When this symmetry group is 3d, we can use the Bianchi classification. This gives us a classification of Bianchi spacetimes.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/627516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What exactly does an exchange of particle labels for identical particle wave functions mean *physically*? We know that the wavefunction of identical particles behaves as follows: $$\Psi(1,2)=\begin{cases}-\Psi(2,1) & \text{for fermions} \\ +\Psi(2,1) & \text{for bosons} \end{cases}$$ Now, what exactly does exchanging particle labels mean physically? The above relation, as far as I know comes due to the observables remaining the same for two identical particle system under the exchange of particle labels and thus- $$\Psi(1,2)=e^{i\delta}\Psi(2,1)$$ or $$\Psi(1,2)=e^{2i\delta}\Psi(1,2)$$giving us $\exp(i\delta)=\pm1$ which respectively accounts for bosonic and fermionic wavefunctions.
The exchange is an artifact of the formalism. When we write it down, we tend to label the particles as 1 and 2 (or something else), but physically there is no difference between the particles. They are not "labeled" in any way in the physical world. Therefore, we need rules for our formalism when we exchange the particles with the different labels. Note, the signs do not come from a phase. It is produced by the statistics, i.e., commutation or anti-commutation relations.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/627641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How does one integrate the Fermi-Dirac distribution using the zeta function? I've seen in my physics book that: $$n=\frac{g}{2\pi^2}\int_0^\infty\frac{E^2dE}{e^{E/T}\pm1}$$ for the number density of a relativistic gas of either bosons ($-1$) or fermions ($+1$). The solution of both of the integrals is given to be: $$n_b=\frac{\zeta(3)}{\pi^2}gT^3$$ $$n_f=\frac{3}{4}\frac{\zeta(3)}{\pi^2}gT^3$$ Where $\zeta$ denotes the Riemann zeta function. I can see how the first one is obtained, since it's very straightforward using: $$\zeta(s)=\frac{1}{\Gamma(s)}\int_0^\infty \frac{x^{s-1}}{e^x-1}dx$$ $$\Gamma(n)=(n-1)!$$ However, I don't see how the one for the fermions is integrated, since the denominator is $e^x \boldsymbol{+} 1$ instead of $e^x \boldsymbol{-} 1$, as Riemann's zeta function has.
As a hint, try to show that: $$\frac{1}{e^{E/T}+1} = \frac{1}{e^{E/T}-1} - \frac{2}{e^{2E/T}-1}.$$
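You can verify both the hinted identity and the resulting value $\int_0^\infty \frac{x^2\,dx}{e^x+1} = \frac{3}{2}\zeta(3)$ numerically; a quick pure-Python check:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# 1) Spot-check the identity 1/(e^x+1) = 1/(e^x-1) - 2/(e^{2x}-1):
x = 1.7
lhs = 1 / (math.exp(x) + 1)
rhs = 1 / (math.exp(x) - 1) - 2 / (math.exp(2 * x) - 1)
print(abs(lhs - rhs))

# 2) The identity converts the fermionic integral into bosonic ones, giving
#    Gamma(3) * zeta(3) * (1 - 2**(-2)) = (3/2) * zeta(3):
zeta3 = sum(1.0 / n**3 for n in range(1, 100_000))
numeric = simpson(lambda t: t * t / (math.exp(t) + 1.0), 0.0, 60.0, 100_000)
print(numeric, 1.5 * zeta3)
```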
{ "language": "en", "url": "https://physics.stackexchange.com/questions/627711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving for transmission coefficient in the finite square well Consider a finite square well of depth $V_0$ which extends from $-a$ to $a$. For $|x|>a$, $V=0$. The wavefunction ansatz one can propose for an incoming wave from the left $Ae^{ikx}$ is: $$ \psi = Ae^{ikx} + Be^{-ikx}, \quad x<-a $$ $$ \psi = Ce^{ik_2 x} + De^{-ik_2 x}, \quad |x|<a $$ $$ \psi = Fe^{ikx}, \quad x>a $$ Where $k=\sqrt{2mE}/\hbar$ and $k_2=\sqrt{2m(E+V_0)}/\hbar$ can be obtained from the Schrodinger equation. Then, if we define the transmission coefficient to be: $$ T = \frac{|F|^2}{|A|^2}$$ one should be able to find the value of this coefficient if $F$ is written in terms of $A$. We can apply boundary conditions to do so: $\psi$ and $\psi'$ must be continuous at $-a$ and at $a$, so: $$ Ae^{-ika}+Be^{ika} = Ce^{ik_2 a}+De^{ik_2 a} $$ $$ -ik(Ae^{-ika}-Be^{ika}) = -ik_2(Ce^{-ik_2 a}-De^{ik_2a}) $$ $$ Ce^{ik_2a}+De^{-ik_2a} = Fe^{ika} $$ $$ ik_2(Ce^{ik_2a}-De^{-ik_2a}) = ikFe^{ika} $$ Now, I have seen derivations of $T$ where $\psi$ is taken to be $C\cos(k_2x)+D\sin(k_2x)$, but it should be the same here and I am not being able to eliminate $C$ and $D$ to get equations only in $B$, $F$ and $A$. Am I not seeing a way to do this, are the boundary conditions wrong, or is the ansatz wrong? (I have been told that either way the value of $T$ should be the same.) Thanks!
You have mistakes in the equations for boundary conditions, they would be: $$\psi\text{ continuous at }x=-a\longrightarrow Ae^{-ika}+B e^{ika}=C e^{-ik_2a}+De^{ik_2 a}$$ $$\psi'\text{ continuous at }x=-a\longrightarrow ik(Ae^{-ika}-B e^{ika})=ik_2(C e^{-ik_2a}-De^{ik_2 a})$$ $$\psi\text{ continuous at }x=a\longrightarrow C e^{ik_2a}+De^{-ik_2 a}=F e^{ika}$$ $$\psi'\text{ continuous at }x=a\longrightarrow ik_2(C e^{ik_2a}-De^{-ik_2 a})=ikFe^{ika}.$$ Then, solving for $D$ using the first equation we get $$D=Ae^{-i(k+k_2)a}+Be^{i(k-k_2)a}-Ce^{-i2k_2a}.$$ Inserting this in the second one and solving for $C$ $$ik(Ae^{-ika}-B e^{ika})=ik_2(2C e^{-ik_2a}-Ae^{-ika}-Be^{ika})\rightarrow\\C=\frac{A}{2k_2}e^{i(k_2-k)a}(k_2+k)+\frac{B}{2k_2}e^{i(k_2+k)a}(k_2-k),$$ so $D$ now is $$D=Ae^{-i(k+k_2)a}(1-\frac{1}{2}-\frac{k}{2k_2})+Be^{i(k-k_2)a}(1-\frac{1}{2}+\frac{k}{2k_2}).$$ Similarly, using the third one you can get $F$ in terms of $A $ and $B$, i.e. $F=F(A,B)$, and from the fourth one, $B=B(A)$. Finally, you can use this last relation between the $A$ and the $B$ to get $F=F(A,B(A))=F(A),$ and compute the transmission coefficient $$\mathbb{T}=\frac{|F|^2}{|A|^2}.$$
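One way to check the algebra is to solve the matching conditions numerically with a transfer matrix and compare against the standard closed-form result for $T$ (units $\hbar = 2m = 1$; this is a sketch of a cross-check, not the elimination the question asks about):

```python
import cmath, math

def interface(kl, kr, x0):
    """2x2 matrix mapping the (A, B) amplitudes just left of x0 to those just
    right of it, from continuity of psi and psi' at x0."""
    p, q = 0.5 * (1 + kl / kr), 0.5 * (1 - kl / kr)
    return [
        [p * cmath.exp(1j * (kl - kr) * x0), q * cmath.exp(-1j * (kl + kr) * x0)],
        [q * cmath.exp(1j * (kl + kr) * x0), p * cmath.exp(-1j * (kl - kr) * x0)],
    ]

def matmul2(M, N):
    return [[M[i][0] * N[0][j] + M[i][1] * N[1][j] for j in range(2)]
            for i in range(2)]

def transmission(E, V0, a):
    """|F/A|^2 for the well of depth V0 on (-a, a)."""
    k, k2 = math.sqrt(E), math.sqrt(E + V0)
    M = matmul2(interface(k2, k, a), interface(k, k2, -a))
    B = -M[1][0] / M[1][1]        # impose no incoming wave from the right
    F = M[0][0] + M[0][1] * B
    return abs(F) ** 2

def transmission_closed_form(E, V0, a):
    """Textbook closed form: 1/T = 1 + V0^2 sin^2(2 k2 a) / (4 E (E + V0))."""
    k2 = math.sqrt(E + V0)
    return 1.0 / (1.0 + V0**2 * math.sin(2 * k2 * a) ** 2 / (4 * E * (E + V0)))

print(transmission(1.0, 2.0, 1.0), transmission_closed_form(1.0, 2.0, 1.0))
```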
{ "language": "en", "url": "https://physics.stackexchange.com/questions/627845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
According to general relativity planets and Sun bend the spacetime (explaining gravity), but does this hold true for smaller objects? According to general relativity planets and the sun bend spacetime, and that is the explanation of gravity. However, does this hold true for smaller objects, like toys, pens, etc.? Do they also bend spacetime?
Recently gravity was measured between two 1 mm gold spheres. (Measurement of Gravitational Coupling between Millimeter-Sized Masses by Westphal et al.) Gravity cannot be separated from "bending spacetime". Any force that affects everything equally in a place can alternatively be described as a bent spacetime. So those 90 mg spheres are bending spacetime, and the experiment was able to measure this. Going to much smaller sizes, quantum effects come into play, and there are open questions about the behavior of gravity there.
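For scale, a rough sketch of the force in that kind of experiment (the numbers here are assumed round values, not the paper's exact geometry):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

# Rough, assumed numbers in the spirit of that experiment: two ~90 mg gold
# spheres a few millimetres apart.
m = 90e-6       # kg
d = 2.5e-3      # centre-to-centre separation in m
F = G * m * m / d**2
print(f"F = {F:.2e} N")   # a force of roughly 1e-13 N, tiny yet measurable
```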
{ "language": "en", "url": "https://physics.stackexchange.com/questions/628115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 9, "answer_id": 5 }
Muon $g-2$ experiment: is there any theory to explain the results? The nature of the experiment has been discussed here, but my main question is this: is there any theory that has predicted the results of this experiment or are we completely clueless about what's happening? In other words, have we come up with a new hypothetical interaction that could explain the results?
In the Standard Model the $α=(g-2)/2$ of an elementary particle should be calculable, the calculations being as accurate as the number of higher orders computed. For the electron the calculations coincide with the experimental value to great accuracy. The muon $α=(g-2)/2$ has different dominant diagrams, so the theoretical value will be different, but it was seen, first in the Brookhaven data, that the theory differed from the data, though not yet statistically significantly. The Fermilab experiment was then designed, at great experimental effort and expense, because a confirmed difference would mean that a new interaction, not in the Standard Model that the calculations use, adds to the effect. As the comments say, there are many extensions of the Standard Model that can try to fit the new results, and the data will thus help choose among extensions or new theories. Hit the link given by G. Smith to get a large number of papers using different theories.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/628266", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
What is momentum? Momentum tells you the mass of the object and how fast it is going right? So if I have a 2 kg ball moving at 2 m/s, then the ball has 4 kg⋅m/s of momentum. My question is why do we multiply mass and velocity to get momentum. (From the example above) Why cant we just say the ball is 2 kg moving at a speed of 2 m/s and that is momentum. Why do we have to multiply it?
The deep reason for introducing the momentum is that momentum is a quantity that in some circumstances is conserved, while this is not the case for the velocity. The case of one particle is not really enlightening, but as soon as we move to systems of more than one particle the advantage of introducing momentum is clear. The special role of momentum can be fully understood only using the full conceptual framework of analytic mechanics. However, some hint of the importance of this concept comes from a few facts: (1) it turns out that it is possible to define and assign a value to the momentum of massless entities like electromagnetic radiation or photons; (2) Relativity introduces a deep revision of the concepts of classical mechanics, but momentum, although defined in a different way, still plays the same role as in classical mechanics; (3) Quantum mechanics does not allow a meaningful definition of velocity (the concept of trajectory becomes meaningless); still, it is possible to define the momentum of a particle, and it plays a key role in the theory.
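A small numerical illustration of why the product $mv$ is the useful quantity: in a collision the total $mv$ is conserved while the velocities alone are not. Standard 1-D elastic-collision formulas, with made-up masses and speeds:

```python
def elastic_collision_1d(m1, u1, m2, u2):
    """Final velocities for a 1-D elastic collision (standard result)."""
    v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
    v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)
    return v1, v2

m1, u1 = 2.0, 3.0   # a 2 kg ball moving at 3 m/s (made-up numbers)
m2, u2 = 1.0, 0.0   # a 1 kg ball at rest

v1, v2 = elastic_collision_1d(m1, u1, m2, u2)
p_before = m1 * u1 + m2 * u2
p_after = m1 * v1 + m2 * v2
print(v1, v2, p_before, p_after)
# The total momentum m*v is unchanged, while the plain sum of velocities
# changes from 3 m/s to 5 m/s: that is why the product, not the separate
# pair (mass, velocity), is the conserved bookkeeping quantity.
```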
{ "language": "en", "url": "https://physics.stackexchange.com/questions/628601", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 8, "answer_id": 4 }
Is there a fundamental reason why in the Standard Model, there is no Feynman rule for a vertex with more than 4 legs? In Standard Model, there are vertices with 3 legs (example: $W l\nu$) and with 4 legs (example: $WWWW$). Is there a fundamental reason why in the Standard Model, there is no Feynman rule where a vertex has more than 4 legs? Is this possible in Beyond-the-Standard-Model theories?
Fundamental? Power counting. The SM is renormalizable, that is, without dimensionful couplings. The lagrangian must have dimension 4, and any vertex with 5 legs or more would dictate a coupling of dimension -1 or less. So the gauge couplings g,g', the Yukawa y, etc, are all dimensionless. (You do have Feynman diagrams of higher dimension, like a 4-fermion diagram with exchange of a vector boson: four fermions are of dimension 6. So in the effective theory for that, it is summarized by 4-Fermi coupling with a dimensionful (-2) effective $G_F$.) Anything is possible in BSM theories, but, normally, one looks for renormalizable theories as well—otherwise one simply tacks on unrenormalizable effective interactions to the SM, which afford little "explanation" of the energy behavior of the amplitudes.
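The power-counting rule above can be turned into a one-line function: in 4 spacetime dimensions a boson field carries mass dimension 1, a fermion field 3/2, a derivative 1, and the coupling must make the total up to 4. A minimal sketch (the derivative argument is included because gauge self-couplings contain derivatives):

```python
def coupling_dimension(n_bosons=0, n_fermions=0, n_derivatives=0):
    """Mass dimension of a vertex coupling in 4 spacetime dimensions.

    Boson fields carry dimension 1, fermion fields 3/2, derivatives 1,
    and the Lagrangian density must total dimension 4."""
    return 4 - n_bosons - 1.5 * n_fermions - n_derivatives

print(coupling_dimension(n_bosons=4))               # WWWW-type vertex: 0
print(coupling_dimension(n_bosons=1, n_fermions=2)) # W-l-nu / Yukawa-type: 0
print(coupling_dimension(n_bosons=5))               # five boson legs: -1
print(coupling_dimension(n_fermions=4))             # 4-Fermi G_F: -2
```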
{ "language": "en", "url": "https://physics.stackexchange.com/questions/628736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Hybridisation of orbitals When we talk about the electronic configuration of boron, sulphur, or nitrogen, what was new to me was their hybridisation. For example, boron has the electronic configuration $1s^2 2s^2 2p^1$. Now, there is one pair of electrons in $2s$ and one unpaired electron in $2p$. During a reaction of boron with another element, one of the paired $2s$ electrons can be promoted to a $2p$ orbital, giving 3 unpaired electrons. I want to know what forces the orbitals to make this happen.
This is called atomic excitation, and there can be multiple sources of energy that lead to this effect; e.g. the atom can absorb the energy of a photon, which then leads to excitation (but any energy source can drive it). You can also take a look on Wikipedia: Excited state. Hope this helps.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/628852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Double slit interference question a level physics The question attached asks you to calculate the angle between the central fringe and the second bright fringe. The mark scheme says that you should use tan to work out this angle using the distance between the slits and the plane and the fringe separation. But I'm wondering why you can't use dsinθ = nλ? Using the second formula, I get 0.19º (to 2 s.f. as is required in the question), but the mark scheme is looking for 0.18º. Why is it wrong to use dsinθ = nλ?
It's actually correct (and, IMHO, better) to use the equation you used. The problem solution uses a couple of approximations that are basically correct but introduce errors large enough that the final answer differs in the second significant figure. Using $s \sin \theta = n \lambda$ as you did, you would obtain $\theta = 0.1865...^\circ$. This is a direct calculation of the angle, and does not rely on any trigonometric approximations (or at least none that significantly affect the answer.) The equations used in the solution, though, rely on a couple of approximations that introduce small errors. Specifically, the distance between the first bright fringe and the central spot (call it $w_1$) would satisfy $$ \frac{w_1}{D} = \tan \theta \approx \theta \qquad \frac{\lambda}{s} = \sin \theta \approx \theta \\ \Rightarrow \frac{w_1}{D} \approx \frac{\lambda}{s}. $$ This introduces a small error relative to the exact solution, though it's pretty close to being true. They then assume that the displacements between the consecutive fringes are evenly spaced, and in particular that the displacement of the second fringe (let's call it $w_2$) is exactly twice that of the first fringe. But this isn't exactly true either. What is instead true is that $$ \sin \theta_2 = n \frac{\lambda}{s} = 2 \sin \theta_1. $$ If $\theta_1$ and $\theta_2$ are both "small", then this reduces to $\theta_2 \approx 2 \theta_1$, which then implies (under the same approximations) that $\tan \theta_2 \approx 2 \tan \theta_1$, which then implies that $w_2 \approx 2 w_1$. So while it is true that $w_2$ is pretty close to twice $w_1$ for small angles, it's not exactly equal to twice $w_1$. The net effect of these approximations is to change the answer in the solution by about 1% relative to the answer that you calculated directly. This is enough to change it from 0.19° to 0.18° after rounding to two significant figures.
The moral of this story is that approximations can be useful in simplifying your life, but you should always keep in mind that you have made them, and remember that your answers are only approximately true. The folks who wrote out that answer key (and anyone who blindly relied on it to mark an assignment without keeping it in mind) apparently forgot that.
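To see the two routes side by side, here is a small numerical sketch. The wavelength, slit separation, and screen distance are made-up illustrative values (the original problem's data is not reproduced in the question), chosen so the exact angle lands near 0.19°:

```python
import math

# Illustrative values only -- the original problem's data is not shown here.
lam = 600e-9   # wavelength, m
s = 0.37e-3    # slit separation, m
D = 2.0        # slit-to-screen distance, m

# Exact route: s sin(theta) = n lambda, with n = 2
theta2 = math.asin(2 * lam / s)

# Mark-scheme route: fringe width w = lam D / s, second fringe at 2w,
# then theta = atan(2w / D) = atan(2 lam / s)
w = lam * D / s
theta_approx = math.atan(2 * w / D)
# Since asin(y) > atan(y) for y > 0, the exact angle is always slightly larger
```

With these clean inputs the gap is tiny; in the actual problem the rounding of intermediate values makes the two answers differ in the second significant figure.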
{ "language": "en", "url": "https://physics.stackexchange.com/questions/629112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Hamiltonian classical electrodynamics After coming across the Lagrangian density of the Maxwell equations $$ \mathcal{L} = -\frac{1}{4\mu_0} F_{\mu\nu}F^{\mu\nu}-J_\mu A^\mu = \frac{\varepsilon_0}{2}||\mathbf{E}||^2-\frac{1}{2\mu_0}||\mathbf{B}||^2 -j_\mu A^\mu $$ I was wondering whether there is a corresponding Hamiltonian for Classical Electrodynamics. I have found that for the source-free case ($j_\mu$=0) it is $$\mathcal{H}=\frac{\varepsilon_0}{2}||\mathbf{E}||^2+\frac{1}{2\mu_0}||\mathbf{B}||^2,$$ yet I have only seen it in a couple of places and never including the source terms. Is there any Hamiltonian that works with sources as well? If not, what could be the reason? PS: there seems to be a similar question Is there a Hamiltonian for the (classical) electromagnetic field? If so, how can it be derived from the Lagrangian? but no Hamiltonian is given in the answers...
According to these lecture notes, the combination of the following Hamiltonian density and constraint gives rise to the Maxwell equations: $$\mathcal{H} = \frac{\varepsilon_0}{2}\mathbf{E}^2 + \frac{1}{2\mu_0}\mathbf{B}^2 - j_\mu A^\mu$$ and $$\nabla\cdot\mathbf{E}=\frac{\rho}{\varepsilon_0}$$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/629200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Different versions of Schwinger parameterization One common used trick when calculating loop integral is Schwinger parameterization. And I have seen two versions among wiki, arxiv and lecture notes. $$\frac{1}{A}=\int_0^{\infty} \mathrm{d}t \ e^{-tA}$$ or, $$\frac{-i}{(-i)A}=-i\int_0^{\infty} \mathrm{d}t \ e^{itA}$$ where $A=p^2-m^2+i\epsilon$. I know the latter is surely true since its real part $Re(-iA)=\epsilon\gt0$ and thus applicable for the equation $$\frac{1}{a}=\int_0^{\infty} \mathrm{d}t \ e^{-at}\ \text{ for } Re(a)\gt0.$$ But as for the former, it doesn't hold true for space-like which the loop momentum probably behaves like , i.e. $p^2-m^2\lt0$. I am very confused why so many people still use the first one and any explanation will be appreciated!
The Schwinger parameter itself is manifestly positive. In particular, it is not Wick-rotated, so there are not different versions of it. Rather it is OP's $A$ operator that is Wick-rotated. OP lists a few references in above comments. * *Ref. 1 works in Euclidean signature, so it's well-defined. *Ref. 2 & 3 only use the Schwinger parameter to derive the Feynman parametrization. They, on the other hand, want to work in Minkowskian signature. In practice one would then have to argue (presumably case by case) if one can analytically continue/Wick-rotate from Euclidean signature to Minkowskian signature (thereby introducing the Feynman $i\epsilon$-prescription). References: * *H. Kleinert & V. Schulte-Frohlinde, Critical Properties of $\phi^4$-Theories; chapter 8, p. 106. *J.A. Shapiro, Schwinger trick and Feynman Parameters, 2007 lecture notes; p. 1. *S. Weinzierl, The Art of Computing Loop Integrals, arXiv:hep-ph/0604068; p. 11.
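As a sanity check on the first (Euclidean) version, one can verify $1/A=\int_0^\infty e^{-tA}\,\mathrm dt$ numerically for $A>0$; for $A<0$ the integrand grows without bound and the integral diverges, which is exactly the OP's point about spacelike momenta. A minimal sketch with an arbitrary positive $A$:

```python
import math
from scipy.integrate import quad

# Euclidean check: 1/A = int_0^inf exp(-t A) dt, valid only for A > 0
A = 3.7  # arbitrary positive sample value
value, _ = quad(lambda t: math.exp(-t * A), 0, math.inf)
# For A < 0 the integrand exp(-t A) blows up as t -> inf and the
# representation fails, so the Wick-rotated version (or Euclidean
# signature throughout) is needed in Minkowskian calculations.
```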
{ "language": "en", "url": "https://physics.stackexchange.com/questions/629490", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Why is the half-wave dipole the most used antenna design? When producing em waves using a dipole antenna (of length L), you could theoretically use any L and adjust the frequency of the oscillating voltage to get the desired wavelength. Then why are most antennas half a wavelength long? I'd also like to know why it's useful in the context of receiving em waves. Thanks.
An antenna is a resonator. If you are not feeding it with its natural frequency, it is not going to oscillate with sufficient amplitude. Of course you can always think about increasing voltage, but usually everything in technology is about efficiency. Imagine you would have to carry a heavy car battery and a high voltage generator with your smartphone in order to get enough sending power. Remember your childhood days, when you had to learn how to ramp up a swing? If you did not move your feet and trunk in the right "rhythm", the oscillation would just starve. That's exactly what would happen if you feed the antenna with a frequency far off its resonance. With receiving it's the same as with sending. If the frequency of the incoming wave does not match the antenna, the signal will be very low. It's actually much worse for reception because you cannot just crank up the voltage. You can increase amplification, but that will usually result in a worse signal-to-noise ratio.
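The swing analogy can be made quantitative with a lumped damped-driven oscillator model. A real antenna is a distributed resonator, so this is only an analogy, and all parameter values below are illustrative:

```python
import math

def steady_state_amplitude(w_drive, w0=1.0, gamma=0.05, F=1.0, m=1.0):
    """Steady-state amplitude of m x'' + m gamma x' + m w0^2 x = F cos(w_drive t)."""
    return F / (m * math.sqrt((w0**2 - w_drive**2)**2 + (gamma * w_drive)**2))

on_resonance = steady_state_amplitude(1.0)   # driving at the natural frequency
off_resonance = steady_state_amplitude(0.5)  # driving far off resonance
# With light damping, the response on resonance dwarfs the off-resonance one
```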
{ "language": "en", "url": "https://physics.stackexchange.com/questions/629584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Perception of simultaneous events I have a two-fold question about the light-cone structure of spacetime, specifically about space-like separated events. As far as I understand it, any two events that happen at the same time in a given reference frame are space-like separated. If so, any two simultaneous events occurring on my arm and leg are space-like separated. What confuses me is this: If I am not mistaken, all the events that we perceive are in our past light-cones. This is because we only perceive events that have emitted light that has reached us, and thus has causally affected us. If so, what happens when I look at my arm and leg? It seems to me that for any two simultaneous events A and B, where A occurs on my arm while B occurs on my leg, A and B must be time-like (since I have perceived them) and not space-like separated. In a nutshell, if simultaneous events are space-like separated, while I only perceive time-like separated events, how can I perceive simultaneous events? And second, could someone recommend me an article or a book explaining how the light-cone structure relates to ordinary perception? I struggle to connect the light cone structure to real-life events, so some kind of graph or an explanation of this would be useful.
The notion of 'perceiving an event' is misleading. Perception happens here and now, so all perception at a given time for a given observer takes place within its own single event (the observer here-and-now as a point in spacetime). What is perceived is causally determined by whatever happened in other events within that observing event's past light cone. So while we can say that we observe what happened elsewhere in the past, in fact we observe what is here and now with us and that is a more or less direct consequence of what happened elsewhere in the past, which is not quite the same thing. In other words, one must not confuse the common notion of 'event' as 'something happening' with the relativistic notion of 'event' as simply 'a point in spacetime'.
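A small sketch may help connect this to the arm/leg example: classify separations by the sign of the invariant interval (units with $c=1$; all distances are made up):

```python
def interval_type(dt, dx, dy=0.0, dz=0.0, c=1.0):
    """Classify a separation using the invariant interval, signature (-,+,+,+)."""
    s2 = -(c * dt)**2 + dx**2 + dy**2 + dz**2
    if s2 < 0:
        return "timelike"
    if s2 > 0:
        return "spacelike"
    return "lightlike"

# Two simultaneous events on arm and leg (dt = 0, ~1 unit apart): spacelike.
sep_arm_leg = interval_type(0.0, 1.0)

# Either of those events relative to the later perception event at the eye:
# light needs a time |dx|/c to arrive, and perception happens after that,
# so dt > |dx|/c and the separation is timelike.
sep_to_eye = interval_type(2.0, 1.0)
```

The arm and leg events are spacelike separated from *each other*, but each is timelike (or lightlike) separated from the single perception event.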
{ "language": "en", "url": "https://physics.stackexchange.com/questions/629785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Commutation relations inconsistent with constraints In section $9.5$ of Weinberg's Lectures on Quantum Mechanics, he uses an example to explain the clasification of constraints. The Lagrangian for a non-relativistic particle that is constrained to remain on a surface described by $$f(\vec x)=0\tag{1}$$ can be taken as $$L=\frac 12 m \dot{\vec x}^2-V(\vec x)+\lambda f(\vec x).\tag{2}$$ Apart from the primary constraint $(1)$ there is also a secondary one, arising from the imposition that $(1)$ is satisfied during the dynamics $$\dot {\vec x}\cdot\vec \nabla f(\vec x)=0.\tag{3}$$ Then he states that imposing $[x_i,p_j]=i\hbar\delta_{ij}$ would be inconsistent with the constraints $(1)$ and $(3)$ (which reads $\vec{p}\cdot\vec{\nabla}f=0$ in the Hamiltonian formalism). How can I see this inconsistency?
On one hand, $$ 0~=~[0,0]~=~[f(x),\vec{p}\cdot\vec{\nabla}f]~=~i\hbar (\vec{\nabla}f)^2.$$ On the other hand, a constraint function $f$ typically satisfies a regularity condition $$ \left .\vec{\nabla}f \right|_{f=0}~\neq~\vec{0}.$$
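The same inconsistency can be checked at the classical level (where $[\cdot,\cdot]\to i\hbar\{\cdot,\cdot\}$ to leading order) with a computer algebra system. Here is a sketch using a sphere $f=\vec x^2-R^2$ as a sample constraint:

```python
import sympy as sp

x1, x2, x3, p1, p2, p3 = sp.symbols('x1 x2 x3 p1 p2 p3')
R = sp.symbols('R', positive=True)
xs, ps = [x1, x2, x3], [p1, p2, p3]

# Sample primary constraint: particle confined to a sphere of radius R
f = x1**2 + x2**2 + x3**2 - R**2

def poisson(a, b):
    """Canonical Poisson bracket {a, b} in the variables (x, p)."""
    return sum(sp.diff(a, x) * sp.diff(b, p) - sp.diff(a, p) * sp.diff(b, x)
               for x, p in zip(xs, ps))

# Secondary constraint p . grad f
g = sum(p * sp.diff(f, x) for x, p in zip(xs, ps))

bracket = sp.expand(poisson(f, g))
grad_sq = sp.expand(sum(sp.diff(f, x)**2 for x in xs))
# {f, p.grad f} = (grad f)^2, which equals 4R^2 != 0 on the constraint surface
```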
{ "language": "en", "url": "https://physics.stackexchange.com/questions/629875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Anticommutation of variation $\delta$ and differential $d$ In Quantum Fields and Strings: A Course for Mathematicians, it is said that variation $\delta$ and differential $d$ anticommute (this is only classical mechanics), which is very strange to me. This is in page 143-144: If we deform $x$ we have $$\delta L = m \langle \dot x, \delta \dot x \rangle dt$$ $$= - m \langle \ddot x, \delta x \rangle dt - d\left(m \langle \dot x,\delta x\rangle \right)\,.$$ Here $\delta$ is the differential on the space $\mathcal F$ of trajectories $x$ of the particle, $d$ is the differential on $\mathbb R$, and the second minus sign arises since $\delta$ and $d$ anticommute on $\mathcal F \times \mathbb R$. As far as I know, we should have $d(\delta x)=\delta(dx)$. It doesn't make sense to me why a "differential" with respect to the trajectory would anticommute with a differential with respect to time.
To elaborate on Qmechanic's answer to show why anticommutation in a bigraded differential algebra is natural, consider a manifold $X$ and its exterior algebra $\Omega(X)$. Suppose that there is a bigrading on $\Omega(X)$ such that $$ \Omega(X)=\bigoplus_{(r,s)\in\mathbb Z^2}\Omega^{r,s}(X), $$ where the sum is a direct sum. Suppose furthermore that when restricted to any pure grade subspace $\Omega^{r,s}(X)$, the exterior derivative goes as $$ d:\Omega^{r,s}(X)\rightarrow\Omega^{r+1,s}(X)\oplus\Omega^{r,s+1}(X). $$ If the bigrading is compatible with the ordinary grading by degree in the sense that each differential form of pure bigrade also has pure "monograde" then this is natural. We then define the splitting of the exterior derivative as $$ d=d_1 +d_2,\quad d_1=\pi^{r+1,s}\circ d,\quad d_2=\pi^{r,s+1}\circ d,$$ where $$ \pi^{r,s}:\Omega(X)\rightarrow\Omega^{r,s}(X) $$ is the projection - which is well-defined because the bigraded decomposition is a direct sum. This decomposition is then well-defined on any element by extending linearly. The nilpotency condition $d^2=0$ of the original exterior derivative now gives $$ 0=d^2=\left(d_1+d_2\right)^2=d_1^2+d_2^2+d_1d_2+d_2d_1. $$ Suppose that we applied $d^2$ to an element of pure bigrade $(r,s)$. Then $d_1^2$ maps to $\Omega^{r+2,s}(X)$, $d_2^2$ maps to $\Omega^{r,s+2}(X)$, and both $d_1d_2$ and $d_2d_1$ map to $\Omega^{r+1,s+1}(X)$. Because of the direct sum decomposition, these subspaces are disjoint and linearly independent, therefore $d_1^2$, $d_2^2$ and $d_1d_2+d_2d_1$ must separately vanish. The first two give $$ d_1^2=0,\quad d_2^2=0, $$i.e. the derived operators $d_1$ and $d_2$ are also nilpotent differentials, and the third condition gives $$ d_1d_2=-d_2d_1, $$ which shows that $d_1$ and $d_2$ anticommute.
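The sign mechanism can be made concrete with a tiny bookkeeping script on $\mathbb R^2$: store a form by its coefficients on ordered wedge products, let sorting the basis covectors supply the sign, and check $d_1d_2=-d_2d_1$ on a sample function. This is only an illustrative sketch, not a general implementation:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(x) * sp.sin(y) + x**2 * y**3   # any smooth sample function

def wedge_key(basis):
    """Sort the basis covectors; count transpositions to get the wedge sign."""
    basis, sign = list(basis), 1
    for i in range(len(basis)):
        for j in range(len(basis) - 1 - i):
            if basis[j] > basis[j + 1]:
                basis[j], basis[j + 1] = basis[j + 1], basis[j]
                sign = -sign
    return tuple(basis), sign

def d_part(form, var, covector):
    """Apply d_i = d(var) ^ d/d(var) to a form stored as {basis tuple: coeff}."""
    out = {}
    for basis, coeff in form.items():
        if covector in basis:
            continue  # d(var) ^ d(var) = 0
        key, sign = wedge_key((covector,) + basis)
        out[key] = out.get(key, 0) + sign * sp.diff(coeff, var)
    return out

f0 = {(): f}  # a (0,0)-form
d1d2 = d_part(d_part(f0, y, 'dy'), x, 'dx')   # d_1 d_2 f
d2d1 = d_part(d_part(f0, x, 'dx'), y, 'dy')   # d_2 d_1 f
# The coefficients on dx^dy are equal and opposite: d_1 d_2 = -d_2 d_1,
# the minus sign coming purely from reordering dy ^ dx into dx ^ dy.
```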
{ "language": "en", "url": "https://physics.stackexchange.com/questions/629992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 2, "answer_id": 0 }
Magnetic flux through circular loop due to infinite wire I’m trying to calculate magnetic flux that’s going through circular loop with radius $R$, due to magnetic field of a infinite wire that is in distance $d$ from the center of the loop. $\vec{B}$ vector is parallel to $\vec{dS}$ vector. I know that magnetic field of that wire is equal to $B=\frac{\mu_0 I}{2\pi r}$ and that flux is equal to $$\int_{S} \vec{B}\cdot \vec{dS}=\int_{S} B\cdot dS$$ where $dS$ Is surface of the loop, but what I don’t know is how to change that $dS$ so is possible to solve.
The convenient infinitesimal surface $\rm dS$ is shown in the Figure-01 : \begin{equation} \mathrm{dS} \boldsymbol{=}\mathrm{hdw}\boldsymbol{=} (2R\sin\theta)( \mathrm d\ell\sin\theta)\boldsymbol{=} (2R\sin\theta)( R\mathrm d\theta\sin\theta) \tag{01}\label{01} \end{equation} so \begin{equation} \mathrm{dS} \boldsymbol{=}2R^2\sin^2\theta\mathrm d\theta\boldsymbol{=}R^2(1\boldsymbol{-}\cos2\theta)\mathrm d\theta \tag{02}\label{02} \end{equation} We could verify that \begin{equation} \int\limits_{\theta\boldsymbol{=}0}^{\theta\boldsymbol{=}\pi}\!\!\!\!\mathrm{dS}=\pi R^2 \tag{03}\label{03} \end{equation} Hence for the magnetic flux through the circle we have \begin{equation} \Phi=\int\!\!\!\int\limits_{\!\!\!\!\!\!\bf circle}\!\!\mathbf{B}\boldsymbol{\cdot}\mathrm{d}\mathbf{S}=\int\!\!\!\int\limits_{\!\!\!\!\!\!\bf circle}\!\!\mathrm{B}\,\mathrm{dS}=\dfrac{\mu_{0}\mathrm{I}R^2}{\pi }\int\limits_{\theta\boldsymbol{=}0}^{\theta\boldsymbol{=}\pi}\!\!\!\!\dfrac{\sin^2\theta\,\rm d\theta}{\mathrm{ L}-R\cos\theta} \tag{04}\label{04} \end{equation} In Figure-02 below we see a detail of Figure-01 corresponding to this comment of OP : Why $\rm dw=\sin\theta d\ell$ ? – cover
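Evaluating the integral in \eqref{04} numerically and comparing with the standard closed form $\Phi=\mu_0 I\,(L-\sqrt{L^2-R^2})$ (valid when the wire does not cross the loop, $L>R$) is a useful check. The current and geometry below are arbitrary sample values:

```python
import math
from scipy.integrate import quad

mu0 = 4e-7 * math.pi   # vacuum permeability, SI
I = 2.0                # wire current (arbitrary sample value), A
R, L = 0.05, 0.12      # loop radius and wire-to-center distance, L > R, in m

# Equation (04): Phi = (mu0 I R^2 / pi) int_0^pi sin^2(th) / (L - R cos(th)) dth
val, _ = quad(lambda th: math.sin(th)**2 / (L - R * math.cos(th)), 0, math.pi)
flux = mu0 * I * R**2 / math.pi * val

# Known closed form for a circular loop not intersecting the wire
closed_form = mu0 * I * (L - math.sqrt(L**2 - R**2))
```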
{ "language": "en", "url": "https://physics.stackexchange.com/questions/630086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Can a body float in the middle of a fluid? Let's say we have a cubic body of side $a$ and made of a material with density $\rho$ and we measure its immersed height in a fluid of density $\rho_f$ by the variable $y$. Then, its potential energy (and considering a gain of potential due to buoyancy) can be written as: $V = -Mgy + \frac{\rho_f}{2}a^2y^2g$ To find the system equilibrium points, one can differentiate the previous expression with respect to $y$, obtaining: $\begin{equation} \frac{\partial V}{\partial y} = 0 \Longleftrightarrow \rho_f a^2gy_{eq} = Mg \Longleftrightarrow y_{eq}=\frac{Mg}{\rho_f a^2 g} = \frac{\rho a^3 g}{\rho_f a^2 g} = \frac{\rho}{\rho_f}a \end{equation}$ Which leads to something that I don't know how to explain. Having $\rho > \rho_f$, one will obtain that the body floats mid-water. How is this even possible if, theoretically with the equations obtained, there isn't any change on the fluid's density with depth?
The buoyancy on an object in a liquid is constant as long as the object and the liquid are constant. Whether the object is completely or only partially submerged is irrelevant. So is whether it is floating on top, somewhere in the middle, or lying on the bottom. What makes it tricky in practice is the fact that it is very hard to get the object and the liquid to stay constant. The nice side to that is that it is exactly this instability that generates an increase in buoyancy below the surface and thereby enables the body to float in the middle of a fluid. On paper you need the pressure on the body to change some characteristic of the body. Pressure relates to depth, so there you have your way to control the depth at which the body floats in the liquid. Buoyancy and pressure at depth are the result of the same force.
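For the specific puzzle in the question it may also help to note that the potential written there is only valid while the cube is partially submerged ($0\le y\le a$); once $y>a$ the buoyant force stops growing with depth, so the formal result $y_{eq}=(\rho/\rho_f)\,a>a$ for $\rho>\rho_f$ lies outside the validity range and does not describe a real mid-water equilibrium. A short symbolic sketch of the equilibrium and its stability:

```python
import sympy as sp

y, a, g = sp.symbols('y a g', positive=True)
rho, rho_f, M = sp.symbols('rho rho_f M', positive=True)

# Potential energy from the question, valid only while 0 <= y <= a
V = -M * g * y + sp.Rational(1, 2) * rho_f * a**2 * g * y**2

y_eq = sp.solve(sp.diff(V, y), y)[0]              # M / (rho_f a^2)
y_eq_sub = sp.simplify(y_eq.subs(M, rho * a**3))  # -> (rho/rho_f) a

stability = sp.diff(V, y, 2)  # rho_f a^2 g > 0: the equilibrium is stable
```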
{ "language": "en", "url": "https://physics.stackexchange.com/questions/630188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Why are constant volume and constant pressure heat capacities basically the same for solids? Are degrees of freedom involved? I know that $C_V=\frac{\frac{f}{2} Nk_B}{m}$ and $C_P=\frac{(\frac{f}{2} +1)Nk_B}{m}$. Since for solids their values are very close to each other, I would assume $\frac{f}{2} +1$ is very close to $\frac{f}{2}$. Namely, I thought I would need to have $\frac{f}{2} >> 1$. This would require a high number of degrees of freedom. However, is this the case? We are talking about a solid, whose particles can move in fewer directions than, say, a gas molecule. Or is it the very constraint that makes the number of degrees of freedom get very high? And is this the case, or is there another explanation for why $C_V$ and $C_P$ have close values for solids?
It is because solids and liquids are very close to being incompressible. So it doesn't matter whether the pressure is changing or not.
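This can be quantified with the general thermodynamic identity $C_P-C_V=TV\alpha^2/\kappa_T$. A sketch with rough handbook numbers for copper near room temperature (all values approximate):

```python
# Rough handbook values for copper near room temperature (all approximate)
T = 298.0          # temperature, K
V_m = 7.11e-6      # molar volume, m^3/mol
alpha = 5.1e-5     # volumetric thermal expansion coefficient, 1/K
kappa_T = 7.3e-12  # isothermal compressibility, 1/Pa (~ 1 / bulk modulus)
c_p = 24.4         # molar heat capacity at constant pressure, J/(mol K)

# General identity: C_p - C_V = T V alpha^2 / kappa_T
diff = T * V_m * alpha**2 / kappa_T
ratio = diff / c_p
# For a nearly incompressible solid (tiny kappa_T but even tinier alpha^2),
# the difference comes out as only a few percent of C_p.
```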
{ "language": "en", "url": "https://physics.stackexchange.com/questions/630317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Definition of electric polarisation and the potential due to a polarised body I've two questions, the second one depends on the first. $\mathbf{1}$ How exactly is polarisation defined? Griffiths says $\mathbf{P} \equiv$ dipole moment per unit volume How exactly do we go about calculating it? For example if I need to find the value of $\mathbf{P}$ at some point do we take a small volume around that point enclosing few hundred/thousand atoms, add up the dipole moments and divide by the volume? And similarly repeat the process to find the value of Polarisation everywhere? $\mathbf{2}$ Suppose I have to find the potential due to a polarised body far away from it. I can find it by adding up the individual contribution of each dipole. Since the field point is far away I can safely assume that the potential of each dipole can be written as $$V_{\mathrm{dip}}(\mathbf{r})=\frac{1}{4 \pi \epsilon_{0}} \frac{\mathbf{p} \cdot \hat{\mathbf{r}}}{r^{2}}$$ All I've to do is to add up the contributions of each individual dipole. However an alternate equation is presented which too gives us the potential and is as $$V(\mathbf{r})=\frac{1}{4 \pi \epsilon_{0}} \int_{\mathcal{V}} \frac{\mathbf{P}\left(\mathbf{r}^{\prime}\right) \cdot \hat{r}}{r^{2}} d \tau^{\prime}$$ How can one justify that the second equation is correct and gives us the value of potential? MORE DETAIL: Dear Urb said that : "Instead of doing a sum $$\sum_{i=1}^N\frac{1}{4 \pi \varepsilon_{0}} \frac{\mathbf{p}_i \cdot \hat{\mathbf{r}}}{r^{2}}$$ over all $N$ dipoles inside the body, we just chop the body into little pieces of volume $d\tau'$, assign to each piece a dipole moment $\mathbf P(\mathbf r')d\tau'$ and integrate over the entire body". But we can't chop the body into $d\tau'$ elements and use the integral of $\mathbf P(\mathbf r')d\tau'$ . Because $d\tau'$ is infinitesimal. 
We know that $P$ was an average over a small but not infinitesimal volume element and if we use $$V(\mathbf{r})=\frac{1}{4 \pi \epsilon_{0}} \int_{\mathcal{V}} \frac{\mathbf{P}\left(\mathbf{r}^{\prime}\right) \cdot \hat{r}}{r^{2}} d \tau^{\prime}$$ we have implicitly assumed $P$ to be an average over an infinitesimal volume element which isn't how we initially defined it.
A general remark, which is too long for a comment: Landau's book on the subject is appropriately called Electrodynamics of continuous media - important thing here is that we are dealing with macroscopic quantities, i.e., the quantities averaged over a "macroscopically small volume", so that they vary smoothly in space (as opposed to actual atomic potentials that vary wildly on the microscopic scale). How one calculates a dielectric response from its microscopic structure of a media is an interesting and important subject, but this is not the point here. Note that similar (but less unexpected) approximation is done in the elasticity theory, where the objects are considered as continuous, neglecting their atomic structure. Thus, instead of dealing with the intermolecular potentials we get something simple, such as Hooke's law. Indeed, the Hooke's law is the elastic equivalent of the relations such as $$ \mathbf{D}=\varepsilon \mathbf{E}, \mathbf{B}=\mu \mathbf{H} $$ Remark: I fully agree with the answer by @Urb.
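The averaging in question can also be illustrated numerically: chop a uniformly polarized cube into a finite grid of point dipoles, sum their far-field potentials, and compare with a single continuum dipole of moment $\mathbf p=\mathbf P V$. All numbers below are illustrative:

```python
import numpy as np

eps0 = 8.854e-12
a, P, N = 1.0, 1.0, 6   # cube side, polarization (along z), grid size
dV = (a / N)**3

# Centers of the N^3 cells
c = (np.arange(N) + 0.5) * a / N - a / 2
xs, ys, zs = np.meshgrid(c, c, c, indexing='ij')
pos = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)

r_field = np.array([0.0, 0.0, 50.0 * a])   # far-away field point on the z-axis

# Sum of individual point-dipole potentials p.r_hat / (4 pi eps0 r^2)
p = np.array([0.0, 0.0, P * dV])
rvec = r_field - pos
r = np.linalg.norm(rvec, axis=1)
V_sum = np.sum((rvec @ p) / (4 * np.pi * eps0 * r**3))

# Continuum limit: one dipole with the total moment P a^3
rF = np.linalg.norm(r_field)
V_cont = P * a**3 * r_field[2] / (4 * np.pi * eps0 * rF**3)
```

Far from the body the discrete sum and the continuum expression agree to a fraction of a percent, which is why replacing the sum over molecular dipoles by the integral over $\mathbf P\,d\tau'$ is harmless.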
{ "language": "en", "url": "https://physics.stackexchange.com/questions/630473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
In special relativity, how do we know that distance doesn't change in the direction perpendicular to velocity? In the theory of special relativity it is said that the distance in the direction of the speed changes by a factor of $$\gamma=\frac{1}{\sqrt{1-\frac{v^{2}}{c^{2}}}}$$ How do we know that the distance perpendicular to the velocity doesn't change?
I think the best way to understand this is to figure out why we needed special relativity in the first place. One of the fundamental principles of Physics is that no matter which frame of reference you go to, the laws of Physics should remain the same (also known as Lorentz invariance). Relativity was introduced by Galileo and Newton, creating what is traditionally known as classical mechanics. The problem that arose was that when classical (Galilean) relativity was applied to Maxwell's equations (more specifically to moving charges) things went haywire. There were asymmetries that started to arise. Classical relativity and electromagnetism were not working together and thus something was wrong. So we needed some new formulation of relativity which can have a transformation (not Galilean) that makes all physical laws look the same + the speed of light the same in all frames. While the second condition may look arbitrary, it comes out from the electrodynamics issue. Let's start the derivation. We have two inertial frames F and F' in which a light front is moving and they have a relative velocity of v (in the x direction). The equation for this would be $ x^2+y^2+z^2=c^2t^2 $ and in the second frame it would be $x'^2+y'^2+z'^2=c^2t'^2$. Once you solve these equations and try to find relationships between x,x' y,y' z,z' t,t' you end up with the so-called Lorentz transformation. These transformation laws show us that length contraction happens in whichever direction the relative velocity is between the two frames. ( $x'=\gamma (x-vt) \mathrm{\: and\: }y'=y\: z'=z$)
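The consistency of keeping $y'=y$, $z'=z$ can be checked symbolically: the Lorentz transformation quoted above leaves $x^2+y^2+z^2-c^2t^2$ invariant, so the light-front equations hold in both frames. A sketch:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
v, c = sp.symbols('v c', positive=True)
gamma = 1 / sp.sqrt(1 - v**2 / c**2)

# Lorentz transformation with unchanged perpendicular coordinates
tp = gamma * (t - v * x / c**2)
xp = gamma * (x - v * t)
yp, zp = y, z

# The quadratic form x^2 + y^2 + z^2 - c^2 t^2 is invariant
interval_diff = sp.simplify((xp**2 + yp**2 + zp**2 - c**2 * tp**2)
                            - (x**2 + y**2 + z**2 - c**2 * t**2))
```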
{ "language": "en", "url": "https://physics.stackexchange.com/questions/630565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
How to find the falling time of an object when acceleration is not a constant? Let's say we are throwing an object from the surface of the earth, this object reaches 70,000km with initial velocity of $10713 \mathrm{m}/\mathrm{s}$ until it reaches the peak high , the g value at 70,000km is $0.068 \mathrm{m}/\mathrm{s}^2$. Now my question is: How to calculate the falling time of the object from the peak height (70,000km) until it reaches the ground? Acceleration is not constant here.
Integrating twice based on acceleration due to gravity. (Kepler could also be used, but the result is the same). m1 = mass of earth m2 = mass of object, so much smaller than mass of earth it can be ignored gravitational constant G = 6.6743 x 10^-11 m1+m2 = 5.9722 x 10^24 r is distance between the two objects v is combined closure rate of the two objects towards a common center of mass. a is combined acceleration of the two objects towards a common center of mass initial distance r0 = 76370000 final distance r1 = 6370000 $$ v = dr/dt \\ a = dv/dt \\ a = (dv / dr) (dr / dt) \\ a = v \ dv/dr = -G (m1 + m2) / r^2\\ v \ dv = -G (m1 + m2) dr / r^2 $$ Integrating both sides, with constant v = 0 at r0 $$ 1/2 v^2 = G(m1+m2)/r - G(m1+m2)/r0 \\ v = -\sqrt{2G(m1+m2)/r - 2G(m1+m2)/r0} \\ v = \frac{dr}{dt} = -\sqrt{\frac{2 \ G\ r0 \ (m1+m2)-2 \ G \ r \ (m1+m2)}{r \ r0}} \\ \frac{- \sqrt{r0 \ r} \ dr} {\sqrt{2 \ G \ r0 (m1 + m2) - 2 \ G \ r \ (m1 + m2)}} = dt \\ {- \sqrt{ \frac{r0}{2 \ G \ (m1 + m2)}}} \ \ \sqrt{\frac{r}{r0 - r}} \ dr = dt \\ $$ Integrate again, using u for substitution. $$ u = \sqrt{\frac{r}{r0-r}} \\ u^2 = \frac{r}{r0-r} \\ r = \frac{r_0 u^2}{1 + u^2} = \frac{r_0 + r_0 u^2 - r_0}{1 + u^2} = \frac{ r_0(1 + u^2) - r_0}{1 + u^2} = r_0 - \frac{r_0}{1 + u^2} \\ dr = \frac{2 \ r0 \ u \ du}{(1 + u^2)^2} \\ when \ r = r0 = 76370000\ m, u0 = \infty \\ when \ r = r1 = 6370000\ m, u1 = \sqrt{0.091} \\ t = \frac{-2 {r0}^{3/2}}{\sqrt{2 G (m1 + m2)}} \ \int_\infty^\sqrt{0.091} \frac{u^{2}}{(1+u^{2})^{2}}du \\ t = \frac{-2 \ {r0}^{3/2}}{\sqrt{2 G (m1 + m2)}} \left [ \frac{1}{2} \left (\tan^{-1}(u)-\frac{u}{1+u^{2}} \right ) \right ] _\infty^\sqrt{0.091} \\ t = \frac{-{r0}^{3/2}}{\sqrt{2 G (m1 + m2)}} \left (.01648041 - \frac{\pi}{2} \right ) \\ t \approx 36740 \ seconds $$
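As a cross-check, one can integrate the equation of motion numerically and compare against the closed-form expression in the last line; both give roughly 36,700 seconds with the stated data:

```python
import math
from scipy.integrate import solve_ivp

G = 6.6743e-11
M = 5.9722e24                        # m1 + m2, kg
r0, r1 = 76_370_000.0, 6_370_000.0   # initial and final distance, m

def rhs(t, state):
    r, v = state
    return [v, -G * M / r**2]

# Stop the integration when r drops to r1
hit_ground = lambda t, state: state[0] - r1
hit_ground.terminal = True
hit_ground.direction = -1

sol = solve_ivp(rhs, [0.0, 1e6], [r0, 0.0], events=hit_ground,
                rtol=1e-10, atol=1e-3)
t_fall = sol.t_events[0][0]

# Closed-form result from the derivation above
u1 = math.sqrt(r1 / (r0 - r1))
t_closed = (r0**1.5 / math.sqrt(2 * G * M)) * (
    math.pi / 2 - (math.atan(u1) - u1 / (1 + u1**2)))
```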
{ "language": "en", "url": "https://physics.stackexchange.com/questions/630719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 6, "answer_id": 5 }
Probability current is zero for normalizable stationary state I'm asked to show that the probability current is zero for a normalizable stationary state of the Shrodinger Equation. So we have that $\Psi(x,t)=\psi(x)e^{-iEt/\hbar}$. Now using the conservation of probability we have $$0=\frac{\partial}{\partial t}|\psi|^2=\frac{\partial}{\partial t}|\Psi|^2=-\frac{\partial j}{\partial x}$$ so $j=j(t)$, but in the definition of $j$ all the $t$ dependence drops out to give $$j=\frac{-i\hbar}{2m}\left(\psi^*\psi'-\psi'^*\psi\right)$$ so $j=j(x)$ and so we must have $j=\text{const}$. I want to say that for the state to be normalizable we must have $j\rightarrow 0$ as $|x|\rightarrow\infty$, and so $j=0$ everywhere. But this argument becomes complex as I have to rule out cases like $\sin(e^x)/x$ where $\psi\rightarrow 0$ but $\psi'\rightarrow\infty$. I know this argument is closely linked to showing that in 1D the SE has no degeneracy, but I am sure the exam question doesn't want me to use such a complex argument. I also don't think I can quote the lack of degeneracy. Is there a simpler way to show this, or is it in fact equivalent to the non-degeneracy of the 1D SE?
I want to say that for the state to be normalizable we must have $j\rightarrow 0$ as $|x|\rightarrow\infty$, and so $j=0$ everywhere. I don't know why that should be the case. Normalisable just means that $|\Psi|^2 \rightarrow 0$ "fast enough". The $j$ takes out the phase, which in $|\Psi|^2$ does not matter, so I don't think that $j \rightarrow 0$ is equivalent to normalisation. Stationary states have zero probability current $j$ just by virtue of being in the form (real function)*(pure phase factor), so that the conjugates in the definition of $j$ do not give you a "net" term.
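The "pure phase factors out" argument is easy to verify numerically for a sample normalizable stationary state (a Gaussian spatial profile here, in units with $\hbar=m=1$; all values illustrative):

```python
import numpy as np

hbar, m = 1.0, 1.0
E, t = 0.5, 1.3                    # arbitrary energy and time
x = np.linspace(-10, 10, 4001)

psi = np.exp(-x**2 / 2)            # real, normalizable spatial profile
Psi = psi * np.exp(-1j * E * t / hbar)   # stationary state psi(x) e^{-iEt/hbar}

dPsi = np.gradient(Psi, x)
j = (-1j * hbar / (2 * m)) * (np.conj(Psi) * dPsi - np.conj(dPsi) * Psi)
# The phase cancels between the two terms, so j vanishes everywhere
```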
{ "language": "en", "url": "https://physics.stackexchange.com/questions/630782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How can parallel rays meet at infinity? I found that in every book (till my 12th) it is written that, in concave mirror, when object is at focus, then reflected rays will be parallel and they meet at infinity to form a real image. But, as we know, parallel rays never meet. Then, does this mean that all books are wrong ? If not, then why?
It means that they don't meet, because as you correctly pointed out parallel lines never meet. Then what's the point in saying "they meet at infinity" if they never meet? Because you can obtain a parabola by an ellipse with focal distance $d$ in the limit where $d\rightarrow\infty$. In the ellipse rays from one focus get reflected to the other one, and the same happens for a parabola, but one focus is at infinity (therefore you'll never reach it, and therefore the rays won't meet). Edit: since many are mentioning it in the comments, I mean in $\mathbb{R}^n$, where Euclidean geometry is safe. There surely are more interesting spaces, but I thought OP was referring to the flat, euclidean case (and also, in our everyday lives, it's the space where the physics of reflection lives).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/630986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 9, "answer_id": 1 }
Does Einstein's Equivalence Principle ignore time dilation? It seems Einstein's equivalence principle is neglecting time dilation. If an observer is at rest in an inertial reference frame, free of any gravitation, she will experience time flow at the "native" rate of a universe empty of mass and energy. However, an observer in free fall at the surface of the earth will experience time flow at a rate determined by earth's gravitation, which is slower than the native time flow rate. This seems to imply non-equivalence?
There is no way for the observer to know about time dilation. The clock in the frame shows the local time, and there is no comparison with other clocks in distant places. Otherwise it is not a local frame.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/631073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Mathematical proof of charging by induction If we bring a positive charge +Q near a neutral conductor, we know that the surface near the source gets -Q and the surface opposite to it gets +Q, but why do these induced charges have to be equal in magnitude to the source charge? Why isn't a charge distribution such as -7Q on the surface near the source and +7Q on the opposite surface possible? Can we show the result mathematically?
Charges are free to flow inside a conductor. If there is an electric field inside a conductor, positive charges will flow with the field and negative charges against it. These displaced charges contribute their own electric field that quickly cancels out the original field. This is why we say the electric field is always zero inside a conductor. Now, getting back to your question. If a charge of +Q induced a charge of -7Q on the nearest surface, then the overall charge in that area would be very negative. Charges of the same sign repel, so the accumulated charges would repel each other and the induced charge in the conductor would decrease. If you want a more mathematical answer, take a look at the explicit examples in this article: https://en.wikipedia.org/wiki/Method_of_image_charges
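For the grounded-plane case treated in the image-charge article linked above, one can verify explicitly that the total induced surface charge equals $-q$, using the standard image-charge result $\sigma(\rho)=-qd/\big(2\pi(\rho^2+d^2)^{3/2}\big)$. (For an isolated neutral conductor the near-side induced charge is instead balanced by opposite charge elsewhere on the same conductor.) A sketch:

```python
import math
from scipy.integrate import quad

q, d = 1.0, 0.25   # charge and its height above a grounded plane (sample values)

# Induced surface charge density from the image-charge solution
sigma = lambda rho: -q * d / (2 * math.pi * (rho**2 + d**2) ** 1.5)

# Integrate over the whole plane in rings of radius rho
total, _ = quad(lambda rho: sigma(rho) * 2 * math.pi * rho, 0, math.inf)
# The total induced charge exactly matches the source: total = -q
```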
{ "language": "en", "url": "https://physics.stackexchange.com/questions/631150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Residual symmetry group of a scalar field theory Given a Lagrangian $$\frac{1}{2} (\partial_\mu \phi)^2 - \frac{\lambda}{4!}(\phi^2 - v^2)^2$$ for a real scalar field theory with $\vec{\phi} = (\phi_1,\phi_2,...,\phi_n)^T$ and $O(n)$ symmetry. Why is the residual symmetry group (or little group) given by $O(n\!-\!1)$ when spontaneous symmetry is broken?
O(n) means you may rotate any n-vector to any other of the same length, or a suitably normalized combination of others. So you make a choice to rotate your reference vector to say, $\phi_1=v(1,0,0,..,0)^T$. Its little group rotating the n-1 components indexed by 2,3,...,n among themselves is thus O(n-1), and it has the obvious $(n-1)(n-2)/2$ generators of that group acting linearly on your fields. The ones you "lost" (not really, the symmetry generators are still there, transforming $\phi_1$ to the other components, in a nonlinear manner) are the $n-1$ ones realized nonlinearly, corresponding to massless Goldstone bosons (show this!). Your Goldstone bosons are $\phi_a$ with a=2,3,...,n, while $\phi_1$ is massive, the σ or Higgs. Specifically, $$ \Delta_{ij}\phi_k= -\Delta_{ji}\phi_k= \theta_{ij}(\delta_{ik}\phi_j - \delta_{jk}\phi_i), $$ So $$ \Delta_{ij}\phi_1= 0 $$ for the O(n-1) Δs involving only indices 2,3,...,n. Further, $$ \Delta_{1j}\phi_1= \theta_{1j}\phi_j, $$ for only one index, j, in that set: these do not leave your reference vector invariant. There are n-1 of them and they shift the $\phi_j$s by a constant when you redefine $\phi'_1$ to have a vanishing vacuum value.
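The generator counting can be checked directly for small $n$: build the antisymmetric generators of $\mathfrak{so}(n)$, act on the reference vector, and count which annihilate it (the unbroken $O(n\!-\!1)$ little group) versus which do not (the broken, Goldstone directions). A sketch:

```python
import numpy as np

def so_generators(n):
    """Basis of antisymmetric matrices T_(ij) = E_ij - E_ji generating so(n)."""
    gens = []
    for i in range(n):
        for j in range(i + 1, n):
            T = np.zeros((n, n))
            T[i, j], T[j, i] = 1.0, -1.0
            gens.append(T)
    return gens

n = 5
vac = np.zeros(n)
vac[0] = 1.0   # reference (vacuum) direction, phi_1 = v

unbroken = [T for T in so_generators(n) if np.allclose(T @ vac, 0.0)]
broken = [T for T in so_generators(n) if not np.allclose(T @ vac, 0.0)]
# len(unbroken) = (n-1)(n-2)/2, the little group O(n-1);
# len(broken) = n-1, one massless Goldstone boson per broken generator.
```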
{ "language": "en", "url": "https://physics.stackexchange.com/questions/631562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why does the force between charges increase when they move, instead of decreasing? Imagine two positive charges in a spaceship moving with a velocity $v$ with respect to an observer on Earth. According to the person in the spaceship, the electrostatic force between the charges is $F'=(\frac{1}{4π\epsilon_0})\times \frac{q_1q_2}{r^2}$. But if you are the observer on Earth, then the equation becomes $F=\gamma \times F' =\gamma\times(\frac{1}{4π\epsilon_0})\times \frac{q_1q_2}{r^2}$. So the force is greater in the frame on Earth. But actually shouldn't the force be smaller, because with respect to Earth the charges are moving and hence there is an inward attractive magnetic force that partially cancels the outward repulsive force?
Since the force F is perpendicular to v, its components are not affected by the Lorentz transformation, and it is the same in both reference frames. You can do yourself a big favour by using the [covariant formulation of electromagnetism](https://en.wikipedia.org/wiki/Covariant_formulation_of_classical_electromagnetism). The force density is given by $$f^\mu = F^{\mu\nu} j_\nu$$ where $F^{\mu\nu}$ is the Lorentz-covariant, antisymmetric field tensor and $j^\nu$ the four-current. From this it is clear that you don't have to worry about the detailed transformation of the fields; you only have to transform the force. The Lorentz transformation is $$ t' = \gamma \left( t - \frac{vx}{c^2} \right) \\ x' = \gamma \left( x - v t \right)\\ y' = y \\ z' = z \,. $$ This shows that if v is along x, then the perpendicular components of a four-vector such as the force, unlike those of the antisymmetric tensor fields, are not affected.
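The covariant bookkeeping can be illustrated explicitly: boosting the field tensor as $F' = \Lambda F \Lambda^T$ reproduces the standard field-mixing rules $E'_y = \gamma(E_y - vB_z)$ and $B'_z = \gamma(B_z - vE_y)$. A sketch of that check (my own illustration, units with c = 1, metric signature (+,−,−,−), and arbitrary field values):

```python
import numpy as np

v = 0.6
g = 1.0 / np.sqrt(1 - v**2)          # gamma (units with c = 1)

# Contravariant field tensor F^{mu nu}, convention F^{0i} = -E_i,
# F^{12} = -Bz, F^{13} = By, F^{23} = -Bx; illustrative field values.
Ex, Ey, Ez = 0.0, 1.0, 0.0
Bx, By, Bz = 0.0, 0.0, 0.5
F = np.array([[0.0, -Ex, -Ey, -Ez],
              [Ex,  0.0, -Bz,  By],
              [Ey,  Bz,  0.0, -Bx],
              [Ez, -By,  Bx,  0.0]])

# Boost along x: F'^{mu nu} = L^mu_a L^nu_b F^{ab}, i.e. F' = L F L^T
L = np.eye(4)
L[0, 0] = L[1, 1] = g
L[0, 1] = L[1, 0] = -g * v
Fp = L @ F @ L.T

Ey_p, Bz_p = -Fp[0, 2], -Fp[1, 2]    # read transformed fields back out
print(Ey_p, Bz_p)  # gamma*(Ey - v*Bz) and gamma*(Bz - v*Ey)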
{ "language": "en", "url": "https://physics.stackexchange.com/questions/631902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Is self-propulsion in Leidenfrost drops placed carefully on a highly heated solid surface possible? As is familiar to anyone who has inadvertently spilt a few drops of water onto a highly heated pan, rather than boiling away in a flash the water gathers itself up into globules and begins to "dance" over the surface as they evaporate away slowly. On momentary contact with the highly heated surface a vapour layer is quickly formed between the droplets and the surface that lies beneath. The vapour layer, being a poor conductor of heat, is able to insulate the drops (to a degree), so that instead of being vaporised they evaporate slowly. Drops in this state I call Leidenfrost drops. Leidenfrost drops are highly mobile due to the reduction in friction (reduction in friction between the liquid drops and the solid surface; there will of course still be a very small amount of friction between the liquid drop and its vapour). If water is thrown onto a surface in a random fashion, given the almost frictionless nature of drops in the Leidenfrost state, I would imagine that an initial velocity component horizontal to the surface is enough to get the drops dancing. Now imagine drops of water are ever so carefully placed onto a highly heated surface. Assume the solid surface is perfectly horizontal and its surface very smooth. Are the drops able to start dancing, that is, be self-propelled, in such a situation? And if so, what would generate the propelling force that acts on the drops causing them to move? Does the size of the drop play a role? I ask the question since it is known that liquid drops carefully placed onto a liquid surface experience self-propulsion. Liquid drops placed onto a so-called ratchet, a solid horizontal surface consisting of a series of saw-teeth, are also able to be self-propelled. But what about in the case of a perfectly smooth solid surface?
A good guess might be symmetry breaking through randomly bursting vapor bubbles or vapor emitted at the sides of the drop, which would produce small fluctuating forces on the drop. The consequence might be Brownian motion, with smaller drops being more strongly influenced due to their lower inertia.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/632049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }