Meaning of the subscripts $L,R$ for the two-component Weyl spinors $\phi_{L,R}$ For a Dirac spinor $\psi$, the chiral projections $\psi_{L,R}$ are defined as $$\psi_{R,L}=\frac{1}{2}(1\mp\gamma^5)\psi.\tag{1}$$ Acting with the chirality operator $\gamma^5$, we find $$\gamma^5\psi_L=-\psi_L,~~\gamma^5\psi_R=+\psi_R.\tag{2}$$ This is why $\psi_L$ and $\psi_R$ are respectively known as the left-handed and right-handed chiral projections of $\psi$. It is to be emphasized that $\psi_L$ and $\psi_R$ are not 2-component spinors; $\psi_L$ ($\psi_R$) is still a 4-component spinor with its lower (upper) two entries zero and its upper (lower) two entries nonzero. Let $$\psi_L=\begin{pmatrix}\chi\\0\end{pmatrix},~~\psi_R=\begin{pmatrix}0\\\zeta\end{pmatrix},\tag{3}$$ where $\chi$ and $\zeta$ are two-component spinors, called Weyl spinors. But sometimes people use a confusing notation, $\phi_L$ for $\chi$ and $\phi_R$ for $\zeta$, i.e., $$\psi_L=\begin{pmatrix}\phi_L\\0\end{pmatrix},~~\psi_R=\begin{pmatrix}0\\\phi_R\end{pmatrix}.\tag{4}$$ For example, see Eq. (8.71) here. Since the chirality projection operators $\frac{1}{2}(1\mp\gamma^5)$ are $4\times4$ matrices, they can only act on $\psi$ to project out $\psi_L$ and $\psi_R$. However, the notation $\phi_L$ and $\phi_R$ for the 2-component spinors $\chi$ and $\zeta$, respectively, suggests that there is also a notion of a $2\times 2$ chirality operator. If there is no such operator, what is the meaning of $\phi_L$ and $\phi_R$?
It's important to distinguish between the Clifford algebra itself versus a matrix representation of the Clifford algebra. The Clifford algebra itself is an abstract associative algebra generated by basis vectors $e^0,e^1,e^2,e^3$ satisfying $e^a e^b+e^be^a=2\eta^{ab}$. The Dirac matrices provide a matrix representation of the Clifford algebra, $\gamma:e^a\mapsto \gamma^a$, which is faithful in the sense that distinct elements of the Clifford algebra are represented by distinct matrices. In four-dimensional spacetime, the smallest matrices that can achieve this feat have size $4\times 4$.

A Dirac spinor is a thing that is acted on by this faithful matrix representation of the whole Clifford algebra. The even part of the Clifford algebra is generated by products $e^a e^b$. It is a proper subalgebra of the full Clifford algebra. When restricted to this subalgebra, the Dirac matrix representation is reducible: using the projection matrices $(1\pm\gamma^5)/2$, we can split a Dirac spinor $\psi$ into two parts $\psi_{L/R}$ that don't mix with each other under the action of the even part of this representation of the Clifford algebra.

Without referring to Dirac spinors at all, Weyl spinors (aka chiral spinors) can be defined directly as things that transform according to an irreducible representation of the even part of the Clifford algebra. There are two inequivalent (mutually conjugate) representations of the even part, which are often distinguished from each other using the subscripts $L/R$, whether or not they were constructed by applying $(1\pm\gamma^5)/2$ to a Dirac spinor. When Weyl spinors are defined directly like this, the chirality operator is still defined: it's still (proportional to) the matrix representation of $e^0e^1e^2e^3$. However, an irreducible representation of the even part of the Clifford algebra is not faithful: the matrix representing $e^0e^1e^2e^3$ is proportional to the identity matrix.
The two inequivalent representations differ from each other in the sign of the matrix that represents $e^0e^1e^2e^3$. So the chirality operator is still defined, but it just multiplies the Weyl spinor by $+1$ or $-1$, depending on which of the two inequivalent representations is being used.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/440478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Contravariant metric in Newton-Cartan spacetime I'm interested in geometrized Newtonian gravitation, or Newton-Cartan theory. Every reference that I have found begins by saying that a Newton-Cartan spacetime is a manifold $M$ with some structures. Among them, there is always a contravariant metric $g^{ab}$ that represents spatial distances. My question is: why is it contravariant? Should it not be a covariant metric to measure the length of vectors? I understand that a contravariant metric measures lengths and angles of covectors or 1-forms.
The signature is (+++) and the metric has rank 3. See e.g. https://www.nikhef.nl/pub/services/biblio/theses_pdf/thesis_R_Andringa.pdf in which this is motivated by calculating which metrics are kept invariant under the Galilei group.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/440588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Doubt about ray diagrams In a ray diagram, 2 rays are considered enough to locate the image of a point on a given object. But how can we say that the rays other than the one we drew will meet at that same point? I guess we can justify this by saying that we get only one image of a given object by a single mirror/lens (right?). So every point on the object must correspond to only one point on the only image. Is this reasoning correct? Also, can somebody provide a more "rigorous" proof ( maybe with some math involved) Thanks
This concerns what assumptions we are making about our optical system. Consider making a rudimentary lens using a flat slab of glass with a prism glued on the side. A ray going through the center goes straight on through; a ray going through the prism will be bent, so these two rays will meet somewhere, but there is no reason why other rays will meet at that same point. On the other hand, if our lens is an ideal lens, then the rays will all meet. The definition of "an ideal lens" is that it is an optical component which has this property. Once we have agreed that definition, then the method of just picking two rays is obviously sufficient to locate the image. Now you can if you like explore what properties will bring about such an ideal lens. One way to define it is to say the focal length is independent of where the ray passes through the lens, and the direction change is the same for all rays passing through a given point on the lens. To realise this with a realistic device, the easiest approach is to adopt the "paraxial approximation" in which all rays under consideration stay close to the optic axis in their entire journey through any lenses under consideration. In this case a thin lens with spherical surfaces will do the job to first approximation.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/440716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Cylinder vs cylinder of double the radius roll down an incline plane, which one wins? A solid cylinder and another solid cylinder with the same mass but double the radius start at the same height on an incline plane with height h and roll without slipping. Consider the cylinders as disks with moment of inertia $I=\frac{1}{2}mr^2$. Which one reaches the bottom of the incline plane first? According to this, the velocity of any body rolling down the plane is $v=\sqrt{\frac{2gh}{1+c}}$, where c is the constant in the moment of inertia (for example, c=2/5 for a solid sphere). My thought process was that since the radius doubled, c=2. So, the velocity of the doubled cylinder would be less, therefore finishing later. Similarly, if its moment of inertia increases, its angular and linear acceleration decrease. However, my peers and even my professor disagree, saying that radius and mass do not play a role in the velocity of the body, since both m and r will cancel in an actual calculation of the velocity. Could anyone elaborate on whether I am right or wrong?
The following equation from @R. Romero's analysis is correct: $$Mgh=\frac{1}{2}Mv^2+\frac{1}{2}I\left(\frac{v^2}{R^2}\right)\tag{1}$$ But the moment of inertia of a cylinder is given by: $$I=M\frac{R^2}{2}\tag{2}$$ So, combining Eqns. 1 and 2 gives: $$Mgh=\frac{1}{2}Mv^2+\frac{1}{4}Mv^2\tag{3}$$ Cancelling M from both sides of the equation yields: $$gh=\frac{1}{2}v^2+\frac{1}{4}v^2\tag{4}$$ So, solving for v, we have: $$v=\sqrt{\frac{4gh}{3}}$$ Note that this is independent of the radius of the cylinder. So, both cylinders roll down the ramp in the same amount of time.
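A quick numeric check of this result (a sketch in Python; the function name and the sample height are mine, not from the answer):

```python
import math

def rolling_velocity(h, c, g=9.81):
    """Final speed after rolling without slipping from height h.

    c is the coefficient in I = c*m*r^2 (c = 1/2 for a solid cylinder).
    Energy conservation, m*g*h = (1/2)*m*v^2*(1 + c), gives
    v = sqrt(2*g*h / (1 + c)); neither m nor r appears.
    """
    return math.sqrt(2 * g * h / (1 + c))

# Any solid cylinder, whatever its mass or radius, has c = 1/2:
v = rolling_velocity(h=2.0, c=0.5)  # equals sqrt(4*g*h/3)
```

Since c depends only on the shape, both cylinders arrive together.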
{ "language": "en", "url": "https://physics.stackexchange.com/questions/440946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Coaxial cable with infinite return conductor If a coaxial cable has a coaxial return conductor with infinite outer radius, will the return conductor experience a voltage build-up due to current flowing through it, or will it stay at ground potential? Here I'm taking the potential at infinity to be zero. I would appreciate it if both the d.c. and a.c. cases are discussed, if the answer depends on the type of excitation. Edit: I think the specific problem where my intuition lets me down is whether there is a scalar potential build-up axially along the return conductor, such that the electric field in the return conductor is given by $$ \mathbf{E} = -\nabla \varphi - \frac{\partial \mathbf{A}}{\partial t} $$ or if the electric field is only due to the changing magnetic field, i.e. $\mathbf{E} = - \frac{\partial \mathbf{A}}{\partial t}$. My reason for thinking it's the latter is that since the return conductor certainly is a good conductor, $\varphi$ must be constant and equal to zero, or else currents would flow radially to cancel the charge/potential build-up. Is this correct?
AC current, due to the skin effect, tends to flow on the inner surface of the outer conductor of a coax cable. The higher the frequency, the thinner the skin depth. For instance, at $1$ MHz, most of the current will flow inside a layer of a couple of hundred microns. DC current will spread out much more, but most of the current will flow within a radius comparable to several lengths of the cable, since the resistance of a path beyond that would substantially exceed the resistance of the direct path. So, in both cases, the return current will cause some voltage drop, but, in the DC case, it will be much smaller.
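For a feel for the numbers, here is a minimal sketch using the standard skin-depth formula (the copper conductivity is an assumed value, not from the answer):

```python
import math

def skin_depth(freq_hz, sigma, mu_r=1.0):
    """Classical skin depth: delta = sqrt(2 / (omega * mu * sigma))."""
    mu0 = 4e-7 * math.pi                 # vacuum permeability, H/m
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * mu_r * mu0 * sigma))

delta = skin_depth(1e6, sigma=5.8e7)     # copper at 1 MHz: ~66 um
# Most of the current flows within a few skin depths, i.e. on the
# order of a couple of hundred microns, consistent with the answer.
```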
{ "language": "en", "url": "https://physics.stackexchange.com/questions/441053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is the normal force equal to weight if we take the rotation of Earth into account? In my physics class we were doing problems such that we set $N$ (normal force) $= mg$. I understand that by Newton's Third Law, if I exert a force on the ground, then the ground will exert an equal and opposite force on me. However, the part that I am slightly confused about is that when the Earth rotates, and thus I rotate too, I am accelerating with the centripetal force towards the center of the earth (assuming I am at the equator). How am I doing this if the normal force equals $mg$? If the normal force doesn't equal mg then why isn't the ground exerting an equal and opposite force?
@Aaron has nicely explained using mathematics. Let me summarize it qualitatively and also give a slightly different way of looking at it.

* Following the line of thought in @Aaron's answer, the normal force does not equal mass multiplied by g.
* So the difference between the gravitational pull and the normal force provides the centripetal acceleration that keeps you going round in a circle with the earth.
* However, in most cases the effective value of g is used instead of using g as the acceleration due to gravity. In the effective g, the centrifugal force (as seen from our frame of reference) and other factors such as height and latitude variations are also accounted for.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/441245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 2 }
Lagrangian of EM field: Why does the $B$-field term have a minus sign in front of it in the Lagrangian? I know that $L = T - U$ and that, in the non-relativistic case, $$L= \frac{1}2mv^2 - q\phi(r,t) + q\vec{v}\cdot\vec{A}(r,t).\tag{1} $$ My lecturer used the following form of the Lagrangian density to derive Maxwell's equations: $$L = \vec{j}(r,t)\cdot\vec{A}(r,t) - \rho(r,t)\phi(r,t) + \frac{\epsilon}2 \vec{E}^2(r,t)-\frac{1}{2\mu}\vec{B}^2(r,t). \tag{2}$$ Comparing the two equations for $L$, I can see that the KE term in the first equation is replaced by the energy density of the EM field. What I do not understand is why the $B$-field term has a minus sign in front of it in the Lagrangian (2). Can someone please shed some light on this for me? P.S. - I have checked the related posts and none of them address my issue.
The only reason why Lagrangians are what they are is that they give the correct equations of motion. In addition you may want to require that certain symmetries are preserved, but there is not much more to it. Actually, one can prove that for many systems there are infinitely many (different) equivalent Lagrangians giving rise to the same equations of motion (so you could pick any of them).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/441516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Do centrifugal force and gravity differ in their effects on objects? If the type of object matters, consider the human body. If the situation matters, consider standing on the inside wall of an O'Neill cylinder compared to standing on the surface of Earth. "Differ in their effects on objects" means: Would the object be able to tell the difference? That is, is there an instrument that could tell whether it is placed in an O'Neill cylinder or on the surface of a planet from the effects (acceleration, I suppose) of centrifugal force and gravity alone?
Yes. An instrument that can sensitively measure the force gradient (for example, the difference between the force at one spot and the force at a nearby spot, say a foot away) could tell the difference. This “tidal” force will be greater for the O’Neill cylinder.
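The size of the effect can be sketched numerically (the habitat radius below is an assumed illustrative value, not from the answer):

```python
# Gradient of apparent gravity with height near the "floor":
#   rotating habitat: g(r) = omega**2 * r  ->  dg/dr =  g / R
#   planet surface:   g(r) = G*M / r**2    ->  |dg/dr| = 2*g / R
g = 9.81              # m/s^2, apparent gravity in both cases
R_cyl = 3000.0        # m, assumed O'Neill cylinder radius
R_earth = 6.371e6     # m, Earth's radius

grad_cyl = g / R_cyl            # ~3e-3 (m/s^2) per metre of height
grad_earth = 2.0 * g / R_earth  # ~3e-6 (m/s^2) per metre of height
ratio = grad_cyl / grad_earth   # habitat gradient is ~1000x larger
```

This is the ratio a sensitive gravity-gradient instrument would exploit.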
{ "language": "en", "url": "https://physics.stackexchange.com/questions/441606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Why does warm air rise? Warm air has more energy than cold air. This means that according to the Einstein equation $E = mc^2$, the warmer air has a greater mass than the cold air. Why does the warm air rise, if it has a greater mass, which means that the gravitational attraction between the Earth and the warm air is greater?
$E=mc^2$ is only valid for particles that are not moving. The full expression should be $$E^2=p^2c^2+m^2c^4$$ where $p$ is the momentum of the particle (which is $0$ when at rest, and we recover the famous $E=mc^2$). The reason warm air rises is to do with the fact that "warm" air has faster moving particles. This means the air becomes less dense, and so it will rise above the slower, colder, more dense air.
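To see why the questioner's effect is negligible, one can compare the two fractional changes directly (typical values assumed; this comparison is mine, not part of the original answer):

```python
# 1 kg of air warmed by 10 K at constant pressure:
c = 2.998e8        # m/s, speed of light
cp = 1005.0        # J/(kg K), specific heat of air (assumed typical)
T = 293.0          # K, ambient temperature
dT = 10.0          # K, warming

mass_gain = cp * dT / c**2       # fractional mass increase, ~1e-13
density_drop = dT / (T + dT)     # fractional density decrease (ideal gas
                                 # at fixed pressure), ~3e-2
# Buoyancy (the density drop) dominates the relativistic mass gain
# by about eleven orders of magnitude.
```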
{ "language": "en", "url": "https://physics.stackexchange.com/questions/441954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Relationship between freefall velocity time dilation and gravitational time dilation in a Schwarzschild metric If you drop an object into a gravitational field, is its final velocity equal to what it would have to be in flat space in order to generate the same time dilation that you get at a given radius for an object that is stationary relative to the gravitational body (sitting on the surface in the case that it isn't a black hole)? I don't have enough GR background to do the calculation myself but this seems consistent with the effects on photons going into a gravitational well. Here's what I've already figured out (mostly from http://jila.colorado.edu/~ajsh/bh/schwp.html) * *The distance toward the black hole is contracted/expanded by an amount $\dfrac{1}{\sqrt{1−r_s/r}}$ where $r$ is "circumferential radius" that you get from dividing the orbit length by $2\pi$ and $r_s=2GM/c^2$ is the Schwarzschild radius. *Time dilatation relative to "Schwarzschild time" is $\sqrt{1−r_s/r}$.
The Schwarzschild metric in Schwarzschild coordinates $(t, r, \theta, \phi)$ reads $$ds^2 = -(1 - 2M/r) dt^2 + (1 - 2M/r)^{-1} dr^2 + r^2 (d\theta^2 + \sin^2\theta d\phi^2)$$ where $c = G = 1$ (natural units), $M$ is the black hole mass, and $r_s = 2M$ is the Schwarzschild radius (event horizon).

The gravitational time dilation measured at infinity (far away from the horizon) vs. the proper time $\tau$ of a stationary observer at a radial coordinate $r$ is $$dt = (1 - 2M/r)^{-1/2} d\tau$$

Let us drop an object at rest from infinity. The time symmetry allows us to write $$-K_\mu p^\mu = \text{constant} = E_\infty = (1 - 2M/r) p^t$$ where $K^\mu = \partial_t = (1, 0, 0, 0)$ is the time Killing vector, $p^\mu$ is the 4-momentum, and $E_\infty = m$ is the energy at infinity (rest energy).

The energy of the object as measured by the stationary observer is $$E = -p_\mu u^\mu = (1 - 2M/r) (1 - 2M/r)^{-1} m (1 - 2M/r)^{-1/2} = (1 - 2M/r)^{-1/2} m \qquad \text{Eq. (1)}$$ where $u^\mu = (dt/d\tau, 0, 0, 0)$ is the stationary observer's 4-velocity.

Applying the equivalence principle, from special relativity we get $$E = \gamma m = (1 - v^2)^{-1/2} m \qquad \text{Eq. (2)}$$ where $\gamma = (1 - v^2)^{-1/2}$ is the Lorentz factor.

By comparing Eq. (1) and Eq. (2) we have $$\gamma = (1 - v^2)^{-1/2} = (1 - 2M/r)^{-1/2}$$ that is $$v = (2M/r)^{1/2}$$ the velocity of a free-falling object (at rest from infinity) relative to a stationary observer.

As you read, the Lorentz factor $\gamma$ (time dilation in Minkowski) equals the gravitational time dilation. Note: If you want the time dilation far away from the horizon vs. the proper time of the free-falling object, you have to compose the two effects.
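The final identity can be checked numerically (a minimal sketch with $G=c=1$ and arbitrary $M$, $r$; the function names are mine):

```python
import math

def gamma_gravitational(M, r):
    """Time-dilation factor of a stationary observer: (1 - 2M/r)^(-1/2)."""
    return (1.0 - 2.0 * M / r) ** -0.5

def gamma_kinematic(v):
    """Special-relativistic Lorentz factor: (1 - v^2)^(-1/2), c = 1."""
    return (1.0 - v * v) ** -0.5

M, r = 1.0, 10.0
v = math.sqrt(2.0 * M / r)   # free-fall speed from rest at infinity
# gamma_kinematic(v) equals gamma_gravitational(M, r): the kinematic
# and gravitational time-dilation factors coincide for this observer.
```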
{ "language": "en", "url": "https://physics.stackexchange.com/questions/442032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Would a supersonic object without a combustion power source leave behind a contrail? Contrails, as far as I understand them, are caused by either a pressure change that forces the condensation of H2O(g) OR by the release of warm H2O from a combustion engine. Most plane contrails, I would assume, operate largely by this second mechanism as they burn jet fuel and release warm CO2 and H2O. My question is, can the first mechanism alone be enough to create a contrail? Would a supersonic object passing through Earth's atmosphere leave behind a contrail? If so, what conditions are required for this - presumably high pressure (close to the surface of the Earth) and high speed? For those curious, this started out as a question on Worldbuilding and has been asked again here per a wise comment.
The Chelyabinsk meteor left a contrail (https://www.nature.com/news/russian-meteor-largest-in-a-century-1.12438) although it was not reported to carry engines:-)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/442267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
If the molecular collisions are elastic, will there be any dissipation in a fluid? Viscosity arises due to collisions of the molecules of one layer of a fluid with those of another layer in contact. But viscosity is a dissipative element leading to heating and dissipation. Where does the heat come from? Does it come from the molecular collisions being inelastic? If the collisions were elastic, would there be no viscosity or dissipation in a fluid?
Viscosity arises due to collisions of the molecules of one layer of a fluid with another in contact. Viscosity is due to intermolecular forces that resist relative motion within a fluid. The viscosity of a liquid can be defined as the force of friction between layers of the liquid that move relative to each other at different velocities and that resists that relative motion. But viscosity is a dissipative element leading to heating and dissipation. Where does the heat come from? Yes, viscosity is a dissipative effect. It is the action of the friction force between the layers of liquid that increases the kinetic energy (temperature) of the liquid molecules at the interface. Heat transfer then occurs between the fluid at the interface and fluid at lower temperature away from the interface. A dry-friction analogy is vigorously rubbing one's hands together to warm them. The friction force acting along the surface of the epidermis (friction work) elevates the temperature of the epidermis. Does it come from the molecular collisions being inelastic? If the collisions were elastic, would there be no viscosity or dissipation in a fluid? I don't believe collision elasticity is the principal factor involved, although there is an exchange of molecular momentum between the layers due to collisions. I believe the main reason for viscosity is the intermolecular forces between the fluid molecules. These forces are responsible for the motion of one layer of fluid attempting to drag along an adjacent layer. The greater these forces are (greater viscosity), the greater the resistance to relative motion between fluid layers. Hope this helps.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/442363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
What is the shape of a gravitational wave form? What is the shape of a gravitational wave as it hits the Earth, particularly the time portion. Does time start at normal speed, then slow slightly, and then return to normal speed? Or does it start at a normal speed, slow down slightly, then speed up slightly, and then return to normal speed? Those other questions only concerned whether time dilation exists. I'm more concerned with the shape of the wave form. So not the same questions at all.
This is written as if the metric for a gravitational wave was something like $ds^2=(1+f(t))dt^2-dx^2-dy^2-dz^2$. It isn't. A metric of that form is just Minkowski space described in unusual coordinates. General relativity doesn't really even offer us any way of describing the notion of whether time slows down or speeds up at a particular point in space or for a particular observer. For instance, if someone asks me whether such a speeding up and slowing down of time occurs for an inertial observer in Minkowski space, I don't think the correct answer is "no, time flows at a uniform rate for that observer," it's more like "the answer is undefined," or "compared to what?" The actual form of a gravitational wave is that it's a tidal (vacuum) distortion of spacetime that is transverse. The wave can in theory be modulated in any way whatsoever. The actual waveforms we see are determined by the properties of the source. A binary black hole inspiral makes a characteristic chirp.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/442588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Can a pair of frictional forces do positive work? My teacher told me that friction can do positive work, which is true. But he said that a pair of friction forces can never do positive work overall. I am not able to think of the reason for this statement. Any help will be really appreciated.
Imagine one block $T$ on top of another block $B$ with both blocks moving with a velocity $\vec v$ to the right, i.e. not moving relative to one another. A force $\vec F$ is being applied to the bottom block $B$ to cause an acceleration of both blocks to the right. The friction force on the top block due to the bottom block is $\vec F_{\rm TB}$, and this force is in the direction of motion of the top block (to the right), so the work done by that force is positive. The friction force on the bottom block due to the top block is $\vec F_{\rm BT}$ ($= -\vec F_{\rm TB}$, Newton's third law), and this force is in the opposite direction to the motion of the bottom block, so the work done by that force is negative. In short, the displacement of both blocks is in the same direction but the frictional forces which are applied to the two blocks are in opposite directions, so the work done by friction on one block is positive whilst the work done by friction on the other block is negative. Overall the work done by the frictional forces is zero.
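In numbers, with assumed illustrative values for the force and displacement:

```python
# Two stacked blocks moving together through displacement d while the
# friction pair (equal magnitude, opposite direction) acts between them.
F = 4.0    # N, magnitude of the friction force (assumed value)
d = 2.5    # m, common displacement of both blocks

W_top = +F * d       # friction pushes the top block along its motion
W_bottom = -F * d    # reaction opposes the bottom block's motion
W_pair = W_top + W_bottom   # total work done by the pair: zero here
```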
{ "language": "en", "url": "https://physics.stackexchange.com/questions/442717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
What is the meaning of the negative sign in $\Delta s^2 = \Delta x^2 + \Delta y^2 + \Delta z^2 - (c\Delta t)^2$? In the equation of the spacetime interval formula $\Delta s^2 = \Delta x^2 + \Delta y^2 + \Delta z^2 - (c\Delta t)^2$ is there meaning for the minus sign before the $(c\Delta t)^2$ or is it just a pure mathematical stuff? Another question, sometimes I see the formula as $\Delta s^2 = (c\Delta t)^2 - \Delta x^2 - \Delta y^2 - \Delta z^2$ so why it have two different forms?
If we measured distance in light-seconds instead of meters, the constant c would be 1, and the metric distance element would simply become $Δs^2 = Δx^2 + Δy^2 + Δz^2 - Δt^2$, or $Δs^2 = Δt^2 - Δx^2 - Δy^2 - Δz^2$ (both forms are equivalent, because multiplying the vector by -1 does not change its squared length) This metric distance element follows out of Maxwell's equations, which can be written in 4-space as one single equation (I reuse here c for the sake of clarity): $((1/c^2) (∂^2/∂t^2 ) - (∂^2/∂x^2) - (∂^2/∂y^2) - (∂^2/∂z^2) )A= μ_0 J$ wherein $A=(φ/c,(A_x,A_y,A_z ))$ is the 4-potential composed of scalar and vector potential and $J=(ρc,(J_x,J_y,J_z ))$ is the 4-current density composed of charge and current This equation is also called the Fundamental Equation of Electrodynamics, and is the 4-space equivalent of Poisson's equation in 3D space: $ ((∂^2/∂x^2) + (∂^2/∂y^2) + (∂^2/∂z^2))φ = -ρ $ wherein φ is a potential and ρ a source density (or charge). These equations follow from the general Stokes conservation (or accounting) law: $ ∫_Vdω = ∫_{dV} ω $ stating that the change of inventory of a quantity dω inside a volume or hypervolume V equals the flow of said quantity ω through the surface or hypersurface dV of said volume. In its differential form, it yields the law of Gauss: $ div (ω(x)) = ρ(x) $ wherein ω is the flowing quantity, and ρ the source density. Expressing the flow ω as a gradient of a potential φ, one obtains: $ ω(x)= -grad(φ(x)) $ This yields then Poisson's equation: $ ∆(φ(x)) = div(grad(φ(x))) = -ρ(x)$ Poisson's equation is a flow-conservation equation in 3-dimensional space of metric signature (+,+,+). The metric distance element therein is $Δs^2 = Δx^2 + Δy^2 + Δz^2 $ The Fundamental Equation of Electrodynamics is a flow-conservation equation in 4-space of metric signature (+,-,-,-). The metric distance element therein is $Δs^2 = Δt^2 - Δx^2 - Δy^2 - Δz^2$ Special Relativity is just about flow in 4-space, as is Electrodynamics.
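The frame-independence that the minus sign buys can be checked directly with a Lorentz boost (a minimal numeric sketch in units with $c=1$; the sample event and boost speed are mine):

```python
import math

def boost(t, x, v):
    """Lorentz boost along x with speed v, in units where c = 1."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (t - v * x), g * (x - v * t)

def interval(t, x, y=0.0, z=0.0):
    """Signature (+,-,-,-): s^2 = t^2 - x^2 - y^2 - z^2."""
    return t * t - x * x - y * y - z * z

t, x = 3.0, 1.0
tp, xp = boost(t, x, 0.6)
# interval(t, x) and interval(tp, xp) agree: only with the relative
# minus sign between t^2 and x^2 is the interval frame-independent.
```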
{ "language": "en", "url": "https://physics.stackexchange.com/questions/442832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Why do we study the Ising model on $\mathbb{Z}^d$ for $d > 3$? I'm a beginner in statistical physics and I'm reading some material about the Ising model. So this might be a silly question. My question is: why do we study the Ising model in high dimensions, given that our physical world has only dimension $2$ or $3$?
Let us start with a quote from a paper by Michael Fisher and David Gaunt in 1964 (Phys. Rev. 133, A224), at a time when it was still necessary to justify such studies: To elucidate the general problem of dependence on dimensionality and coordination number, it seemed worthwhile to investigate the Ising model and self-avoiding walks for lattices of dimensionality higher than three. [...] Of course the behavior of physical systems in four or more space-like dimensions is not directly relevant to comparison with experiment! We can hope, however, to gain theoretical insight into the general mechanism and nature of phase transitions. As they say, it turns out that the spatial dimensionality (and more generally the connectivity properties of the underlying graph) plays a major role in the behavior of macroscopic systems. This is certainly the case at a critical point, where the critical exponents are well-known to depend generally on the spatial dimension, but can also be seen away from the critical point. As one example of the latter, consider the asymptotic behavior of the energy-energy correlations above the critical temperature: $$ \langle \epsilon_0\epsilon_{n\vec e_1} \rangle_\beta - \langle \epsilon_0\rangle_\beta \langle \epsilon_{n\vec e_1} \rangle_\beta \sim \begin{cases} n^{-2} e^{-2n/\xi} & (d=2)\\ n^{-2}(\log n)^{-2} e^{-2n/\xi} & (d=3)\\ n^{-(d-1)} e^{-2n/\xi} & (d\geq 4) \end{cases} $$ where, for any $k\in\mathbb{Z}^d$, $\epsilon_k = \sigma_k\sigma_{k+\vec{e}_1}$ and $\xi$ denotes the correlation length. As can be seen, the corrections to the exponential decay exhibit an interesting dependence on the dimension, which it is very natural for a physicist to try to understand.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/442920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
How should I imagine a multi-particle state in a free QFT? It is reasonable to think of single-particle Fock states as plane waves. Indeed, since $|p\rangle=a^\dagger_p|0\rangle$ and $\langle x|p\rangle\sim \operatorname{e}^{ipx}$, we conclude that the state $|p\rangle$ can be thought of as a plane wave in the position representation. What about multi-particle states, such as $|p_1,p_2\rangle=a^\dagger_{p_1}a^\dagger_{p_2}|0\rangle$? Naturally, I would imagine those to be superpositions of plane waves (since in the free theory we are dealing with equations obeying the superposition principle). But is it really like that? Is the matrix element $\langle x|p_1,p_2\rangle$ really equal to a sum of plane waves?
Is the matrix element $\langle x|p_1,p_2\rangle$ really equal to the sum of plane waves? Nope, it's the product of plane waves, not the sum. Of course, you have to symmetrize or anti-symmetrize the product according to the statistics (boson/fermion) of the particles. In the end, it's a symmetrized/anti-symmetrized sum of products.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/443083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Minimum separation from the spacetime interval I've been working through invariant spacetime interval questions recently, and I came across a question in my lecture notes where; $$\Delta s^2=\Delta x^2 -(c\Delta t)^2 > 0 $$ Now it is clear to me that there is no frame where $\Delta x' = 0$ which I have already proven as the question required. Now, out of curiosity, I'm wondering if there is a way to determine the minimum value that $(\Delta x')^2$ can take? I am assuming that the spacetime interval is the same in every frame, so $$\Delta s'^2=\Delta x'^2 -(c\Delta t')^2 > 0$$ which would give $$\Delta x'^2 > (c\Delta t')^2$$ But since $t'$ can be equal to 0, I'm not sure where to go from here. Is there anybody that can either show me how, or point me in the right direction? Any help is much appreciated.
Let me restate the problem the way I understand it: we have 2 events A and B separated by a space-like interval $$\Delta s^2=\Delta x^2 -(c\Delta t)^2 > 0 $$ now, different observers will measure these 2 events A and B and come up with different $\Delta x$ and $\Delta t$, but what will be the minimum possible $\Delta x$ (if it exists) that one of these observers might measure? Quick answer: from $$ 0\geq-(c\Delta t)^2$$ we obtain $$ \Delta x^2\geq\Delta x^2-(c\Delta t)^2=\Delta s^2$$ So $\Delta x^2\geq\Delta s^2$ always in a spacelike interval, and therefore the minimum value it can attain is $\Delta s^2$, for an observer who sees A and B happen simultaneously, i.e. with $\Delta t=0$. Now let's try a different, more physically insightful, approach. First a little trick which will help better visualize the situation: let's agree that all observers reset their clocks, meters, etc. such that event A has coordinates (0,0) for every observer. This does not change the motion of an observer and in general the physics of any problem. So A=(0,0) for everybody, while B=(t,x) has different coordinates for different observers, but for everybody $\Delta s^2=x^2-t^2$ is the same, say $\Delta s^2=9$ ($c=1$ from now on). Every observer will draw a space-time diagram with event A at the origin (not shown) and event B appearing somewhere. If we overlay all the diagrams we get the following, where each observer has drawn B as a different colored dot at different positions. All these dots belong to the locus $\Delta s^2=x^2-t^2=9$, so it is clear that the green observer will measure the smallest $\Delta x^2$, with $\Delta t^2=0$ PS: the light cone in the diagram is a bonus, I could not resist putting it in ;-)
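A quick numerical check of the quick answer (with $c=1$ and arbitrary sample values): boosting with $\beta = \Delta t/\Delta x$ makes the two events simultaneous, and the spatial separation in that frame comes out equal to $\Delta s$:

```python
import math

dx, dt = 5.0, 4.0                 # spacelike: dx^2 - dt^2 = 9, so ds = 3 (c = 1)
ds = math.sqrt(dx**2 - dt**2)
beta = dt/dx                      # boost velocity that makes dt' = 0
gamma = 1/math.sqrt(1 - beta**2)
dtp = gamma*(dt - beta*dx)        # -> 0: events simultaneous in this frame
dxp = gamma*(dx - beta*dt)        # -> ds: the minimum possible separation
print(dtp, dxp)
```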
{ "language": "en", "url": "https://physics.stackexchange.com/questions/443198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is there a simple way to calculate Clebsch-Gordan coefficients? I was reading angular momenta coupling when I came across these CG coefficients, there is a table in Griffith's but doesn't help much.
Is there a simple way to calculate Clebsch-Gordan coefficients? No, or at least nothing that any working human would qualify as "simple" and that will work for any general Clebsch-Gordan coefficient. The closest you can get in the general case is given in this Wikipedia page, which puts them as \begin{aligned} \langle j_{1},j_{2};m_{1},m_{2}|j_{1},j_{2};J,M\rangle =\ &\delta _{M,m_{1}+m_{2}}{\sqrt {\frac {(2J+1)(J+j_{1}-j_{2})!(J-j_{1}+j_{2})!(j_{1}+j_{2}-J)!}{(j_{1}+j_{2}+J+1)!}}}\ \times \\&{\sqrt {(J+M)!(J-M)!(j_{1}-m_{1})!(j_{1}+m_{1})!(j_{2}-m_{2})!(j_{2}+m_{2})!}}\ \times \\&\sum _{k}{\frac {(-1)^{k}}{k!(j_{1}+j_{2}-J-k)!(j_{1}-m_{1}-k)!(j_{2}+m_{2}-k)!(J-j_{2}+m_{1}+k)!(J-j_{1}-m_{2}+k)!}}. \end{aligned} That's an explicit expression (so, hoorah), but it's not something that's usable by any human metric. If you actually want to calculate them on your own from scratch, then let the DLMF lay out the efficient and stable methods of computation: Methods of computation for 3j and 6j symbols include recursion relations, see [references]; summation of single-sum expressions for these symbols, see [references]; evaluation of the generalized hypergeometric functions of unit argument that represent these symbols, see [references]. But, really: no one calculates CG coefficients, or at least no one who isn't intentionally doing more calculations than they need to or actively developing scientific code on a new platform. Instead, if we want explicit analytical values, we go to the existing tables (like the one in Wikipedia or the one in Griffiths or Edmonds or any other angular-momentum-in-QM book). If we want a software implementation, we just take one of the many existing robust implementations: this not only makes the writing task much simpler, but it also covers the much more important aspect that bugs are minimized and testing of the code is no longer necessary.
For a bird's-eye view of good sources for code, see this list of papers, or the DLMF software index, which includes a bunch of books and open-source and proprietary software suites that can calculate them.
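For instance, SymPy ships such an implementation in `sympy.physics.quantum.cg`; a minimal example for two spin-1/2 particles coupled to $J=1$, $M=0$:

```python
from sympy import S, sqrt
from sympy.physics.quantum.cg import CG

# <j1 m1; j2 m2 | J M> = <1/2 1/2; 1/2 -1/2 | 1 0> = 1/sqrt(2)
cg = CG(S(1)/2, S(1)/2, S(1)/2, -S(1)/2, 1, 0)
print(cg.doit())  # sqrt(2)/2
```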
{ "language": "en", "url": "https://physics.stackexchange.com/questions/443496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Why can't we see images reflected on a piece of paper? Why can't you see a reflected image on a piece of paper? Say you put a pen in front of the paper: even when light rays are coming from other sources, hitting the pen, reflecting back, and hitting the paper, there is no reflection. What's wrong with the following "ray diagram", and why doesn't the image of the pen form on the paper (the right side is the paper)? Why, then, can you see the image of a torch when you shine it on the paper? And when you put a convex lens in front of the pen, why can you now see the image of the pen on the paper?
Light falling on a mirror is reflected in such a direction that the angles of incidence and reflection are the same. Light falling on a white sheet of paper is reflected but scattered, going in all directions from the point where it hits the paper.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/443586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "72", "answer_count": 9, "answer_id": 8 }
Could a microwave oven be tuned to defrost well? Typical microwave ovens do a lousy job of defrosting because liquid water absorbs their radiation far better than ice. So once a spot melts, it will quickly rise to cooking temperature while the rest of the food remains frozen. Would it be possible to build an oven that uses microwaves absorbed preferentially by ice instead, so it would defrost well? Such an oven would presumably be inefficient for cooking, but still valuable.
It would be very difficult to do so. Microwaves heat by adding energy at resonant frequencies of the molecules. Ice and water have very different ranges: The ease of the movement depends on the viscosity and the mobility of the electron clouds. In water, these rely on the strength and extent of the hydrogen-bonded network. In free liquid water, this movement occurs at GHz frequencies (microwaves) whereas in more restricted 'bound' water it occurs at MHz frequencies (short radiowaves) and in ice at kHz frequencies (long radiowaves). A radio wave in the GHz region is less than 1 m long, which makes it easy to work with in the space a microwave oven has available, and easy to generate with reasonably sized antennas. As it turns out, microwave ovens must operate in the 2.450 GHz band because that's the band allocated to them by the FCC. That's a wavelength of roughly 12 cm, so the antennas are very reasonable indeed. Closer to 1 kHz, down toward the VLF region, we find wavelengths of hundreds of kilometres. This means our antennas have to be much shorter than their associated wavelength, which makes them much less efficient. Most of the energy of such an antenna doesn't actually get emitted from the antenna. It is typically sent to ground as "waste." I'd have to consult an antenna expert to get a real answer, but 10% efficiency is not unheard of for VLF antennas.
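The wavelengths quoted above follow directly from $\lambda = c/f$ (order-of-magnitude values only):

```python
c = 3.0e8            # speed of light, m/s
f_oven = 2.45e9      # microwave oven band, Hz
f_low = 1.0e3        # kHz-range frequency relevant to ice, Hz

lam_oven = c/f_oven  # ~0.12 m, i.e. roughly 12 cm
lam_low = c/f_low    # 3e5 m, i.e. hundreds of kilometres
print(lam_oven, lam_low)
```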
{ "language": "en", "url": "https://physics.stackexchange.com/questions/443693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
When to use sine or cosine when computing simple harmonic motion For simple harmonic motion (SHM), I am aware you can start off using either sine or cosine, but I am a bit confused as to when you would start off with sine rather than cosine. I know that a sine graph starts at $y=0$ and a cosine graph starts at $y=1$. So therefore, I would say you use sine for equilibrium positions? However, I came across a question asking to write down the equations for position, velocity and acceleration of a particle starting from rest at time $t=0$, then undergoing SHM with maximum amplitude 0.2 m and period 5 sec. I worked out the angular frequency to be $2\pi/5$ from the period formula. And then used the position formula of the form with sine and differentiated to get the cosine velocity equation, etc. However, the answer says I should have started with cosine and I am now unsure when I should start with sine or cosine.
The function $x(t) = A \sin \omega t$ starts from zero with maximum speed, while the function $x(t) = A \cos \omega t$ starts from $x=A$ (the amplitude) with zero speed, and starts to move towards $x=0$. Starting from rest doesn't imply starting at the equilibrium position: if you start from rest at $x=0$, nothing moves, so this is not an interesting solution! For a harmonic oscillator, starting from rest means starting at the maximum value of $x$, so $\cos \omega t$ is the appropriate solution.
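For the specific problem quoted (amplitude 0.2 m, period 5 s, starting from rest), a small sketch of the resulting equations; the check at $t=0$ confirms that the cosine form starts at maximum displacement with zero velocity:

```python
import math

A = 0.2               # amplitude, m
T = 5.0               # period, s
w = 2*math.pi/T       # angular frequency, rad/s

x = lambda t: A*math.cos(w*t)            # position
v = lambda t: -A*w*math.sin(w*t)         # velocity = dx/dt
acc = lambda t: -A*w**2*math.cos(w*t)    # acceleration = dv/dt

print(x(0), v(0))  # starts at x = A with zero velocity
```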
{ "language": "en", "url": "https://physics.stackexchange.com/questions/444247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Does a rock use up energy to maintain its shape? A rock sitting on land, the ocean floor, or floating in space maintains its shape somehow. Gravity isn't keeping it together because it is too small, so I'm assuming it is chemical or nuclear bonds keeping it together as a solid. If not it would simply crumble apart. So, what type of energy maintains the shape of a rock, where did this energy come from, and is it slowly dissipating? As a corollary, if a large rock is placed on top of a small rock, is the energy required to maintain the shape of the small rock 'used' at a greater rate?
No, the exact opposite is true. The molecules in a rock don't stay together because they're spending energy. They stay together because of attractive chemical bonds. The molecules have lower energy when they're together than when they're not, so you have to spend energy to break the rock apart, not to keep it together.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/444307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 5, "answer_id": 2 }
Can light be compressed? What if we take a cylindrical vessel with an inside surface completely reflecting and attach a piston such that it is also reflecting. What will happen to light if we compress it like this?
Suppose there is an amount of light (electromagnetic radiation) inside the cylinder. Note that electromagnetic radiation is composed of particles called photons, and if we consider that there is a very large number of photons inside the cylinder, we may use statistical mechanics to create a model of a photon gas. Yes, the system you describe will act like a gas, and its properties may be derived from statistics and from the properties of photons. If a photon's frequency is $f$, its energy is $E_γ = hf$, where $h$ is Planck's constant. It is also important to remember that photons have linear momentum $$p = \frac{E_γ}{c} = \frac{hf}{c}$$ But the fact that photons have nonzero linear momentum implies that they will exert pressure against the cylinder's walls. Once the photon reflects off the wall, its momentum will have changed direction, and this implies that the wall has exerted a force on the photon to make it change directions. Therefore, the photon gas exerts pressure against the walls. It can be shown that if the total energy of the photon gas is $U$, then the relationship between the pressure $P$ and the volume $V$ of the gas is $U = 3PV$. If you push the piston, you'll do positive work and therefore give energy to the system. It can also be shown that if you push the piston very slowly (reversible process) while keeping the system isolated (adiabatic transformation), the relationship between pressure and volume will be: $$PV^{4/3} = \text{constant}$$ In other words, yes, light can be compressed and will act just like any other gas inside of a cylinder. Once you push the piston, you will feel an increase in pressure (the pressure of the photon gas increases)! This photon gas can be used to make a simple model of stars, as is discussed in The Feynman Lectures on Physics, Vol. 1. The derivation of the other results presented before can also be found in this same book.
As pointed out in Yly's answer, the increase in energy as you push the piston will cause an increase in the frequency of the radiation, essentially causing a blueshift.
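A small sketch of the adiabatic relation (arbitrary units, with the initial pressure and volume chosen only for illustration): halving the volume raises the pressure by a factor $2^{4/3}\approx 2.52$ and, via $U=3PV$, raises the total energy by the work done on the piston.

```python
P1, V1 = 1.0, 1.0               # initial state, arbitrary units
V2 = V1/2                       # push the piston: halve the volume
P2 = P1*(V1/V2)**(4/3)          # P V^{4/3} = const  ->  P2 = 2^{4/3} P1

U1, U2 = 3*P1*V1, 3*P2*V2       # U = 3 P V
print(P2, U2 > U1)
```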
{ "language": "en", "url": "https://physics.stackexchange.com/questions/444407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37", "answer_count": 2, "answer_id": 0 }
Dielectrics and capacitors Is the electric field reduced after passing through a dielectric? What I mean is: suppose there is a dielectric in between the plates (but NOT IN contact with them). The field begins from the positive side, then passes through the dielectric (where the field intensity is reduced), and then makes its way out on the "other side" (towards the negative plate). Would its intensity there be the same as it was inside the dielectric, or the same as it was before it encountered the dielectric?
A dielectric is a nonconducting, polarisable material. This means that, when an electric field is applied to it, the molecules and atoms which comprise it shift to align with the electric field. This creates another electric field which partially cancels the original one, thus reducing the overall electric field, inside and outside of the dielectric, so the intensity would be lower than without the dielectric.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/444522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is time reversal symmetry true on the microscopic level? I often hear that on the microscopic level, time-reversal symmetry is true for all physical processes. However, I can easily come up with counterexamples that seem to disprove this: * *Two particles of opposite charge being attracted by each other and accelerating towards one another as a result. Furthermore, even barely observed on a microscopic level, the gravitational force surely defies time-reversal symmetry. A movie of an apple accelerating away from the ground is immediately recognizable as the reversed version of the true process. This is a macroscopic example, but you get the point. So where am I wrong?
Furthermore, even barely observed on a microscopic level, the gravitational force surely defies time-reversal symmetry. A movie of an apple accelerating away from the ground is immediately recognizable as the reversed version of the true process. No. When an apple falls from a tree, it starts motionless at a high point, then gains more and more speed as it moves lower. The time reverse of this occurs when you throw an apple up. It starts with high speed at the bottom, then ends up motionless at a high point. In both cases the acceleration is toward the ground. More mathematically, suppose the position of the falling apple is $y(t)$. Then the acceleration must be $-g$. When you time reverse the trajectory to get $y(-t)$, you get two minus signs when differentiating from the chain rule, so the acceleration is still $-g$. So if $y(t)$ is a legal path, so is $y(-t)$. You might say the time reversed process is impossible, because obviously an apple can't jump up from the ground and back into the tree. But that's the entire point of the paradox. Microscopically, Newton's laws allow both processes; it's only thermodynamics that forbids one.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/444845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the function of this complicated tensioning system? I saw this arrangement for tensioning overhead cables from my train window (schematic below). Why not just have one pulley wheel leading directly to the weights? What function do the additional pulleys serve? For that matter, what are the cables for? They're clearly not power lines.
It's to maintain the tension in the overhead powerline. The line acts like violin string, with the collector on the train acting as a bow. If the train is travelling faster than the wave in the power line, then a standing wave may be induced in the power line, causing it to snap. The line will contract and expand with temperature, so a fixed load is placed on it. See the Wikipedia entry
{ "language": "en", "url": "https://physics.stackexchange.com/questions/445062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 5, "answer_id": 4 }
Which of the two events will occur first Consider a bottle partially filled with water, sealed everywhere so that no air can enter or exit. Now make a small hole at the bottom of the bottle and hang it vertically so that the hole faces downward. Since the bottle is sealed everywhere else, the hole is the only place where air or water can enter or exit. Now two events will occur, one after another, repeatedly: * *a drop of water comes out of the hole due to gravity and falls to the ground *a bubble of air enters the bottle through the hole, moves upward against gravity, and mixes with the air present at the top of the liquid in the bottle. I know these two events occur to maintain the atmospheric pressure of the air in the bottle, and due to gravity. But the thing that confuses me is: which of the two events will occur first? I tried this experiment myself but was not able to figure it out. I personally feel that event 1 should occur first.
It depends on the diameter of the hole and the surface tension of the fluid in contact with that hole. For a small hole and/or high surface tension, the liquid sags out the hole and breaks off as a droplet; the remaining liquid recoils back into the hole and folds itself into a bubble which floats upwards. For a large hole and/or low surface tension, the sagging liquid and the rising bubble can squeeze past each other in the hole and the liquid dripping and air entry can occur simultaneously. This is called two-phase flow and its onset as a function of hole diameter and surface tension can be predicted with a mathematical tool called a similitude parameter.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/445164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Movement of fluid in a container filled with that same fluid Suppose a cylinder with the bottom end closed and the top end open were filled with water and then dropped into a pool of water. Would the water inside the cylinder stay in the cylinder?
If both liquids, the one inside the container and the one in the pool, are at the same temperature, then once the cylinder reaches the bottom, the liquid near the opening will start to diffuse into the rest of the liquid, but the part at the bottom will mostly stay inside. While the cylinder is dropping, the liquid near the opening will mix into the pool due to the whirls created in the fall. Watch out if you use inks to check this experiment, as ink usually has a different density than water, which could lead to wrong results.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/445228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Examples of central forces on the path of orbit? In solving a problem from Goldstein (3.13), I solved for multiple properties of a circular orbit with the attractive central force where the path of orbit crosses the point of the force (at origin). The solutions were simple enough to find, but what's been in the back of my mind is what type of physical system does this represent? I am used to Kepler type problems where the central force is located within the orbit and not on it. What system would this be applied to? Or is it merely an exercise?
The orbit here can be taken as the limit as $r_0 \to a$ of the case where the orbit is an eccentric circle with radius $a$ and center a distance $r_0$ from the origin. I solved for the potential and the force law for this general case in this answer. In the limit of $r_0 \to a$ the results simply become $$ U(r) = -\frac{ k}{r^4}, \quad F(r) = - \frac{4k}{r^5} \hat{r}. $$ So this orbit could arise if the force law was proportional to $r^{-5}$ rather than $r^{-2}$ as in the Kepler problem. There are no known two-body forces with this behavior, but one could contrive such a force law by imagining a mass (or charge) distribution spread out through some region of space acting on a massive (or charged) test particle. Alternately, one would expect an $r^{-5}$ force dependence for gravity in a Universe with 6 spatial dimensions. (Whether this seems more or less contrived than the previous example is a matter of taste.) As noted in my previous answer, only particles with special initial conditions (namely $L^2/\mu = k/(2a^2)$) will actually describe these clean circular paths. The general paths for orbits in this potential will be much more complicated. In fact, for a general $r^{-4}$ potential, most orbits will either crash into the origin (as these do) or fly off to infinity.
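As a sanity check (a sketch using SymPy, taking $L^2/\mu = k/(2a^2)$ as stated above), one can verify through the Binet orbit equation $u'' + u = -\mu F(1/u)/(L^2u^2)$ that the circle through the origin, $r = 2a\cos\theta$, solves $u'' + u = 8a^2u^3$, which is exactly the orbit equation for $F = -4k/r^5$:

```python
import sympy as sp

theta, a = sp.symbols('theta a', positive=True)
u = 1/(2*a*sp.cos(theta))        # u = 1/r for the circle r = 2a cos(theta)

# Binet equation for F = -4k/r^5 with L^2/mu = k/(2a^2):  u'' + u = 8 a^2 u^3
residual = sp.simplify(sp.diff(u, theta, 2) + u - 8*a**2*u**3)
print(residual)  # 0
```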
{ "language": "en", "url": "https://physics.stackexchange.com/questions/445323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Momentum Conservation for ARPES I have a question about the principle of momentum conservation in the modeling of ARPES: https://en.wikipedia.org/wiki/Angle-resolved_photoemission_spectroscopy#Theory We split the initial momentum of the electron ${\displaystyle \hbar k_{i}}$ into the component $ \hbar k_{i\parallel }$ parallel to the surface and $\hbar k_{i\perp }$ perpendicular to it. Similarly, we split the momentum of the outgoing electron $\hbar k$ into the component $ \hbar k_{\parallel }$ parallel to the surface and $\hbar k_{\perp }$ perpendicular. The photon with energy $\hbar \omega$ hits the surface at an arbitrary angle and ionizes an atom, thereby producing the outgoing electron. My question is why, concretely, the parallel component of the momentum is conserved while the perpendicular one is not. I don't see any assumption that the photon arrives perpendicular to the surface, so naively I don't see any reason for conservation of the parallel momentum component. One argument I often found was "symmetry breaking", although I don't understand how that concept provides the desired conservation here. So why should $ \hbar k_{i\parallel }=\hbar k_{\parallel }$ hold?
The rough classical idea is that the electrons need to gain a certain velocity away from the surface in order to break free of the crystal; the energy cost is the 'work function' of the material. In terms of symmetry, you can think of it like this: inside the crystal the potential is periodic (i.e. like an array of valleys), and if the electron moves between symmetry-equivalent points in the potential then it will have the same potential energy. If there were no scattering (as assumed for ARPES), then the total energy would be conserved, and so we can see that the kinetic energy (and momentum) would also be conserved. But when you get near the edge of the crystal, the potential begins to change as you transfer to free space, and that change is along the perpendicular direction only. The perpendicular momentum therefore has to change to account for the changing potential, while the parallel direction keeps its periodicity and its momentum component.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/445464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why are protons and neutrons the "right" degrees of freedom of nuclei? This question may sound stupid but why do we visualize nuclei as composed of a bunch of neutrons and protons? Wouldn't the nucleons be too close together to be viewed as different particles? Isn't the whole nucleus just a complicated low energy state of QCD?
This is basically a matter of energy scales. By analogy, you could ask why we don't take into account nuclear structure when we talk about chemistry. The answer is that the eV energy scale of chemistry is mismatched with the MeV energy scale of nuclear structure. Nuclear matter has two phases. One is the phase we normally see, and the other is a quark-gluon plasma. The phase transition happens at a temperature on the order of 200 MeV per nucleon (at standard nuclear densities). Below the temperature of the phase transition, the quarks are strongly correlated, and those correlated groups behave in a way that's very similar to free neutrons and protons. To the extent that they don't quite have those properties, often we can subsume the discrepancies within adjustments to the parameters of the model. It's helpful in terms of practical computation that the fictitious neutrons and protons are nonrelativistic, which makes the theory much more tractable than QCD. If there are small relativistic effects, because the nucleons are moving at a few percent of $c$, these can also be subsumed within adjustments to the parameters. By the way, it is actually possible to consider larger clusters to be the relevant degrees of freedom for nuclear structure. There is a model called the interacting boson approximation (IBA, also known as the interacting boson model, IBM), in which pairs of nucleons coupled to spin 0 or 1 are considered the degrees of freedom. It does pretty well in phenomenologically fitting the properties of many nuclei that are intractable in other models. In a similar vein, there are alpha cluster models and ideas like explaining alpha decay in terms of preformation of an alpha particle, which then tunnels out through the Coulomb barrier. Pictures like these go back to the 1940's, and have considerable utility and explanatory power, although they can't really be microscopically correct, because they violate the Pauli exclusion principle.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/445575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 2, "answer_id": 0 }
Santa and conservation of momentum. How is it possible that conservation of momentum doesn't hold here? Santa wants to deliver his presents but his reindeer are on strike. In order to still be able to get somewhere he decides to sacrifice some of his presents to gain speed. He decides to throw them straight off the back with velocity $v$. Would he be better off throwing all of the presents at once, or should he throw $\frac{1}{2}$ of the presents twice? (Ignore friction) Hint: Mass of the sacrificed presents is $\frac{1}{2}M$ where $M$ is the total mass: $M=m_{santa}+m_{presents}+m_{sleigh}$ This is my solution to the problem: * *Throw everything at once: $$m_sv_s+m_1v_1=m_s v_s'+m_1v_1' \\ \iff 0=m_sv_s'+m_1v_1' \\ \iff v_s'=-\frac{m_1}{m_s}v_1'=-\frac{\frac{1}{2}M}{\frac{1}{2}M}v_1' \\ \iff v'_s=-v_1' $$ Where $m_s=\frac{1}{2}M \space $(mass of the sleigh+Santa) and $m_1=\frac{1}{2}M \space $(mass of presents thrown off the back). This result makes perfect sense and is exactly what I expected. The sleigh and Santa move with a velocity that is equal to the velocity of the sacrificed presents but in the other direction. *Throw twice: This was a pretty long calculation but I arrive at the exact same result (I can write it down in case it helps answering my question) $v_s'=-v_1'$ which I thought to be logical because we have conservation of momentum and it shouldn't matter if I split up the presents into $n$ throws. However, this is not the correct answer. The solution we got says that the second scenario is worse and that the velocity of the sleigh+Santa will be $$\boxed{v_{santa}=\frac{5}{6}v_{presents}}$$ How can that be?
The problem is that by throwing only half of the presents, Santa needs to accelerate himself, the sleigh, and the other half of the presents that are still in the sleigh. If he throws them all at once, he only needs to accelerate himself and the sleigh. Santa is losing out on velocity by accelerating his propellant!
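A short numerical sketch (assuming, as the book solution does, that each batch of presents leaves with speed $v$ relative to the sleigh's rest frame just before the throw) reproduces both results:

```python
def final_speed(total_mass, throws, v):
    """Sleigh speed after throwing the given present masses one by one.

    Each batch leaves backwards with speed v relative to the sleigh's
    rest frame just before that throw (assumed convention).
    """
    u, m = 0.0, total_mass
    for mp in throws:
        m -= mp
        u += mp*v/m        # momentum conservation in the pre-throw frame
    return u

M, v = 1.0, 1.0
print(final_speed(M, [M/2], v))         # 1.0  -> all presents at once
print(final_speed(M, [M/4, M/4], v))    # v/3 + v/2 = 5/6
```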
{ "language": "en", "url": "https://physics.stackexchange.com/questions/445652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there an intuitive reason the resistance that maximizes power dissipation in this simple circuit has a simple form? Consider the following circuit: A textbook problem[2] asks to find the resistance $R$ such that the power dissipated in $R$ is maximized (assuming $R_1$ and $R_2$ are fixed). I found that $R$ should be equal to the equivalent resistance of $R_1$ and $R_2$, were they in parallel. This is a simple enough result that it seems it might have some intuitive explanation that doesn't require calculation. Is there one? [2]: Halliday, David, Robert Resnick, and Kenneth S. Krane. Physics, 5ed. vol. 2. Wiley, 2001. Exercise 31-24
This is a simple enough result that it seems it might have some intuitive explanation that doesn't require calculation. In the circuit given, the equivalent resistance seen by the load $R$ is $R_{eq} = R_1||R_2$. For a Thevenin (Norton) equivalent circuit, this equivalent resistance is in series (parallel) with the load. The power $P_R$ delivered to $R$ is maximum when $P_R = P_{R_{eq}}$. Is there an intuitive way to see why this is? Choose the Thevenin equivalent picture ($R_{eq}$ is in series with $R$). The power delivered to $R$ is quadratic in the series current: $P_R = I^2R,\quad I = \frac{V_{th}}{R_{eq} + R}$ where $V_{th}$ is the Thevenin source voltage ($V_{th} = V\frac{R_2}{R_1 + R_2}$ in the circuit given, but this is irrelevant to the result). It's easy to see from this that $P_R$ goes to zero in the limits $R \rightarrow 0$ and $R \rightarrow\infty$ so there is a maximum for some finite $R$. The question is then: what value of $R$ would one intuitively expect to give the maximum power? I think there's a reasonable case that it's $R = R_{eq}$ because, in that case, $P_R = P_{R_{eq}}$. Think about it this way: (1) increasing $R$ above $R_{eq}$ means that $P_R$ is getting relatively larger but the total power is decreasing (2) decreasing $R$ below $R_{eq}$ means that $P_R$ is getting relatively smaller but the total power is increasing One might then 'intuit' that $P_R = P_{R_{eq}}$ is a critical point. I think a similar intuition informs us that maximizing the area of a rectangle when the sum $L + W$ is held constant requires that $L = W$.
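A quick numerical sweep (with arbitrary illustrative values for $V$, $R_1$ and $R_2$) confirms that the dissipated power peaks at $R = R_1 \| R_2$:

```python
import numpy as np

V, R1, R2 = 10.0, 100.0, 50.0
Rth = R1*R2/(R1 + R2)            # Thevenin resistance = R1 || R2 ~ 33.3 ohm
Vth = V*R2/(R1 + R2)             # Thevenin voltage

R = np.linspace(0.1, 200.0, 200000)
P = (Vth/(Rth + R))**2 * R       # power dissipated in the load R
print(R[np.argmax(P)])           # ~33.3, i.e. R1 || R2
```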
{ "language": "en", "url": "https://physics.stackexchange.com/questions/445897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Can the second law of thermodynamics be derived from Quantum randomness? The second law of thermodynamics says that the entropy of an isolated system continuously increases. Can we say that this is due to Quantum mechanics, which continuously increases the information contained in the system by producing random numbers? Is the entropy of a classical system without randomness always constant?
One of the catchiest Gedankenexperimente for an isolated system with constant entropy is a photon reflected between the surfaces of two perfect mirrors. At first glance it looks like the photon gets re-emitted with the same wavelength as it was absorbed with. But besides the presumption of such an ideal absorption-emission process, there is a second phenomenon that isn't negligible: momentum. Photons carry momentum, and on hitting the mirror a photon creates a recoil. A recoil means a movement of the mirror, or of part of the mirror, or at least a deformation of its surface. Such a displacement has two effects: * *The re-emitted photon has less energy (is redshifted), and *the mirror gets, for example, a phononic excitation, which eventually leads to heat radiation (which is nothing other than photon emission). QM was first established to describe processes at atomic scales, and the older introduction of quanta for electromagnetic radiation is enough to understand the second law of thermodynamics. Can we say that this is due to Quantum mechanics, which continuously increases the information contained in the system by producing random numbers? QM describes something; it does not influence things. The very specific (concrete) description of the photon's interaction with the mirrors given above could be supplemented by a QM description. I prefer, whenever possible, a concrete description of a single process over a statistical one. Is the entropy of a classical system without randomness always constant? Interactions in a classical system (and what, by the way, is a classical system?) are ultimately interactions of photons, and all photon interactions are random processes. A system has constant entropy only if it is a closed system without any exchange with its surroundings. Such isolated systems simply do not exist.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/446092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
How much energy is wasted by a noisy refrigerator? I recently bought a new refrigerator for my kitchen. The feet are adjustable but I've been lazy. Whenever the motor runs and the feet aren't all touching the floor there is a loud buzzing noise. As soon as I move the fridge around to level it up the buzzing stops. One day I'll sort it out properly maybe. Question Clearly the fridge keeps its contents at a roughly even temperature in my warm kitchen and that takes energy. Without buying a multi-meter or doing a long term experiment, can I get a quick idea of how much energy is being wasted by the noise? Maybe the resonance of the noise somehow makes things more efficient and is a good thing? Suppose I borrowed a decibel meter. Could I work it out from that?
One horsepower represents 746 watts. A refrigerator motor develops (typically) 1/4 to 1/3 horsepower of which only a tiny fraction of wattage is dissipated as vibratory noise. The leakage of heat into the refrigerator through its walls is a far more significant loss mechanism than noise generation. By the way, the front-most rubber feet of a refrigerator are mounted on threaded screw shafts which allow you to adjust them so all the refrigerator feet are firmly in contact with the floor. This will make the buzzing stop. Simply grasp the round rubber foot and rotate it to extend or retract it. This also allows you to actually tip the refrigerator out-of-level sideways, so that the door will tend to swing itself shut by gravity if you leave it open.
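To answer the decibel-meter idea: a sound-level reading can be turned into a rough acoustic power via the reference intensity $10^{-12}\,\mathrm{W/m^2}$ and the area the sound passes through. A back-of-envelope sketch (the level and distance below are assumed example values, not measurements, and spherical spreading is a simplification):

```python
import math

# Reference intensity for the decibel scale
I0 = 1e-12          # W/m^2
L_p = 70            # assumed sound-pressure level reading, dB
r = 1.0             # assumed distance from the fridge, m

intensity = I0 * 10 ** (L_p / 10)        # W/m^2 at the meter
power = intensity * 4 * math.pi * r**2   # total radiated acoustic power, W
print(f"acoustic power ~ {power*1e3:.2f} mW")
```

Even a loud 70 dB buzz comes out around a tenth of a milliwatt, tiny next to the roughly 200 W a 1/4 hp motor draws, which supports the point that the noise itself wastes essentially no energy.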
{ "language": "en", "url": "https://physics.stackexchange.com/questions/446251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
What does Ehrenfest's theorem actually mean? I am told that Ehrenfest's theorem, applied to a physical observable $\hat A$, is: $$\frac{d\langle\hat A\rangle}{dt}= \frac{i}{\hbar}\langle[\hat H,\hat A]\rangle$$ I don't understand how to use this equation or what it means intuitively.
The equation describes the time evolution of the expectation value of an operator (which is the expectation value of the value measured when doing many measurements on identically prepared systems). The expectation values are taken with respect to the state of the system at time $t$. This answer will show how to use the Ehrenfest theorem by applying it to an example. As for the intuition: intuition is always a dangerous business with respect to quantum mechanics. In some cases, you can use the Ehrenfest theorem to calculate the evolution of expectation values without solving the Schrödinger equation, if the commutator results in a sufficiently simple operator, or at least get results for the behaviour of sharp wave packets. Consider for example the Hamiltonian of a particle in a potential $V(x)$ $$H = \frac{p^2}{2m} + V(x).$$ Then you can compute $[H, x]$ and $[H, p]$: $$[H, x] = \frac{1}{2m} [p^2, x] = \frac{p}{2m} [p,x] + \frac{1}{2m} [p,x] p = -i\hbar \frac{p}{m},$$ $$[H, p] = [V(x), p] = [V(x), -i\hbar\nabla] = i\hbar \nabla V(x).$$ Plugging this into the Ehrenfest theorem gives the following equations of motion for the expectation values of $x$ and $p$: $$d_t \left< p \right> = - \left<\nabla V(x)\right> $$ $$d_t \left< x \right> = \frac{\left< p \right>}{m} $$ Those are almost the classical equations of motion for position and momentum. The only difference is that the "force" $-\nabla V(x)$ is averaged over the state and not taken at the central position; for a wave packet which is small compared to the variation scale of $V(x)$, this leads to the classical equations of motion in good approximation. So we have used the Ehrenfest theorem to show how, and under which circumstances, Newtonian mechanics arises from quantum mechanics. 
There is a special case where the resulting equations of motion can be solved without reference to the state, namely the harmonic oscillator $V(x) = \frac 1 2 k (x-x_0)^2$. Here we get $$\left< - \nabla V(x)\right> = -k \left<x - x_0\right> = -k\big(\left<x\right> - x_0\big),$$ so the equations of motion for the averages are exactly the classical equations of motion for the harmonic oscillator.
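The two commutators used above can be checked symbolically by letting the operators act on a test function $f(x)$; a minimal sketch with sympy (my choice of tool):

```python
import sympy as sp

x = sp.symbols('x')
m, hbar = sp.symbols('m hbar', positive=True)
f = sp.Function('f')(x)
V = sp.Function('V')(x)

def H(psi):
    # H = p^2/(2m) + V(x), with p = -i*hbar*d/dx in position representation
    return -hbar**2/(2*m)*sp.diff(psi, x, 2) + V*psi

def p(psi):
    return -sp.I*hbar*sp.diff(psi, x)

# [H, x] f = H(x f) - x H(f)  should equal  (-i*hbar/m) * p f
lhs1 = sp.expand(H(x*f) - x*H(f))
rhs1 = sp.expand(-sp.I*hbar/m * p(f))
print(sp.simplify(lhs1 - rhs1))   # 0

# [H, p] f = H(p f) - p H(f)  should equal  i*hbar*V'(x) f
lhs2 = sp.expand(H(p(f)) - p(H(f)))
rhs2 = sp.expand(sp.I*hbar*sp.diff(V, x)*f)
print(sp.simplify(lhs2 - rhs2))   # 0
```

Both differences simplify to zero, confirming the commutators plugged into the Ehrenfest theorem above.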
{ "language": "en", "url": "https://physics.stackexchange.com/questions/446415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Can we ever "measure" a quantum field at a given point? In quantum field theory, all particles are "excitations" of their corresponding fields. Is it possible to somehow "measure" the "value" of such quantum fields at any point in the space (like what is possible for an electrical field), or the only thing we can observe is the excitations of the fields (which are particles)?
The comment by flippiefanus is curious. First, that contributor criticizes that the answer given is - if I understand correctly - too formalistic, and then goes on to refer to the path integral formulation using hypothetical paths, which are even more formalistic. This is especially so if one takes into account that the paths (or here: field configurations) which contribute significantly to (the measure of) the path integral are extremely "rough" and are not smooth or continuous. In my view, that is far more formalistic than the idea that one could measure a field at a point. Yes, that is clearly an idealization. But pointlike idealizations appear elsewhere in physics, like point particles in classical mechanics. This idealization means the dimensions of an object or, in this case, of the measurement process, are very small compared to other reference items used. As the formalism of quantum field theory then shows (and therein lies its strength), if that can be assumed for a quantity like "field strength" (by its effect on test objects) in the sense of expectation values, then it cannot be assumed for the uncertainties. Remember that quantum field theory is a statistical theory, so a single experiment of "field strength measurement in a small extension of space and time" by action on a test object must be repeated many times to have reliable statistics for determining mean values, variances, and such. The divergence of the variance in the true "pointlike" measurement localization limit is then the expression of the quantum mechanical uncertainty relations. This is how the idealization is to be read. Seen in this way, the formal elements of quantum field theory have a clear relation to operational measurement procedures.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/446492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 6, "answer_id": 4 }
Is spin 1 described by $SO(3)$ or $SU(2)$ What spin is described by which rotation group? I always only find information about spin-1/2
Quantum spin in nonrelativistic Quantum Mechanics is generally associated either with the projective unitary representations of the rotation group SO(3) or with the vector unitary representations of the special unitary group SU(2). To be more precise, spin comes naturally from the projective unitary representations of the full 3D Galilei group, but for angular momentum/rotation symmetry purposes it is enough to restrict to a subgroup of it isomorphic to SO(3). Therefore, spin 1 can be described in a proper (rigged) Hilbert space environment by the linear unitary representations of the SU(2) group, which are in 1-1 correspondence with the representations of the su(2) Lie algebra by (essentially) self-adjoint operators.
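The distinction can be made concrete numerically: a $2\pi$ rotation is represented by $-\mathbb{1}$ in the spin-1/2 case (hence only a projective representation of SO(3)) but by $+\mathbb{1}$ for spin 1, so the spin-1 representation of SU(2) descends to a true representation of SO(3). A small sketch (the library choice and the use of the $z$-generator are mine):

```python
import numpy as np
from scipy.linalg import expm

# z-generators: J_z = sigma_z/2 for spin 1/2, J_z = diag(1, 0, -1) for spin 1
Jz_half = np.diag([0.5, -0.5])
Jz_one = np.diag([1.0, 0.0, -1.0])

def rotate_2pi(Jz):
    # rotation by 2*pi about the z axis: exp(-i * 2*pi * J_z)
    return expm(-2j * np.pi * Jz)

print(np.round(rotate_2pi(Jz_half).real, 6))  # -identity: sign flip, projective
print(np.round(rotate_2pi(Jz_one).real, 6))   # +identity: true SO(3) rep
```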
{ "language": "en", "url": "https://physics.stackexchange.com/questions/446633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can we weigh dark matter? We can observe the effect on light as it gets bent in the presence of dark matter, and I wonder how it is possible to measure its mass, given that we can't see it and it doesn't interact with ordinary matter? I know that the speeds of stars at the edge and the center of our galaxy give us a hint that dark matter is present, but I want to know if we can directly measure its mass? For example, LIGO is used to detect gravitational waves and can even pinpoint the source, Super-Kamiokande is mostly used to find neutrinos, which tell us the direction of supernovae, etc.
By definition of dark matter, it is not possible to determine specific masses with astrophysical means, only the gravitational effect they have. The masses may be composed of various things, as this table shows. For example, MACHOs will not have distinctive masses; they would be planet-like: A massive astrophysical compact halo object (MACHO) is any kind of astronomical body that might explain the apparent presence of dark matter in galaxy halos. A MACHO is a body composed of normal baryonic matter that emits little or no radiation and drifts through interstellar space unassociated with any planetary system. Since MACHOs are not luminous, they are hard to detect. So it is an open question for various models. Models that have to do with particle physics are the only ones where a mass could be given by experiment. There are particles called WIMPs which are searched for methodically in accelerator experiments, because they are predicted by supersymmetric and other models; if they are found, it would be a success for the models and would provide candidates for dark matter, to be used in cosmological models. It is an open research field, as the table shows.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/446763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Physical interpretation of FRW normal coordinates The Friedmann-Robertson-Walker metric (I consider for notational simplicity the flat space case): $$\text d s^2 = \text d t^2 - a(t)^2\text d \boldsymbol{x}^2$$ can be brought to normal (Minkowski) form at the origin by a quadratic change of coordinates (see e.g. Eq. (10) of Ref.1): $$\boldsymbol x = \boldsymbol X -H_0\boldsymbol X T, \\t=T-\frac{1}{2}H_0\boldsymbol X^2, $$ where $H_0=\dot a (0)$ and I assume $a(0)=1$. My question is: does the above coordinate transformation have any physical interpretation, for instance in terms of accelerations or Newtonian gravitational fields? The $\boldsymbol {x}$ transformation is telling me that conformal coordinates and locally Minkowski coordinates are related by a simple rescaling $X=a(t)x$, and moreover looks like a Lorentz boost with velocity $\boldsymbol V = H_0 \boldsymbol X$. Which suggests me to rewrite: $$\boldsymbol x = \boldsymbol X -\boldsymbol V T, \\t=T-\boldsymbol V\cdot \boldsymbol X + \frac{1}{2 H_0}\boldsymbol V^2. $$ However, provided I'm on the right track, I don't know how to make sense of the last $V^2$ term in the $t$ transformation.
I calculate the line element: Case I: $\left[\begin{array}{c} t\\ x\end{array}\right]\mapsto \left[\begin{array}{c} T-\frac{1}{2}H_0 X^2\\ X-H_0 XT\end{array}\right]$ Line element with $V=H_0 X$: $ds^2\mapsto \left(1-V^2 a(t)\right)dT^2+\left(-2V+2Va(t)-2Va(t)H_0 T\right)dX\,dT+\left(V^2-a(t)+2a(t)H_0 T-a(t)H_0^2 T^2\right)dX^2$ For $H_0=0$: $ds^2=\left(1-V^2 a(t)\right)dT^2+\left(-2V+2Va(t)\right)dX\,dT+\left(V^2-a(t)\right)dX^2 \qquad (1)$ Case II: $\left[\begin{array}{c} t\\ x\end{array}\right]\mapsto \left[\begin{array}{c} T-H_0 X^2\\ X-H_0 XT\end{array}\right]$ Line element with $V=H_0 X$: $ds^2\mapsto \left(1-V^2 a(t)\right)dT^2+\left(-4V+2Va(t)-2Va(t)H_0 T\right)dX\,dT+\left(4V^2-a(t)+2a(t)H_0 T-a(t)H_0^2 T^2\right)dX^2$ For $H_0=0$: $ds^2=\left(1-V^2 a(t)\right)dT^2+\left(-4V+2Va(t)\right)dX\,dT+\left(4V^2-a(t)\right)dX^2 \qquad (2)$ Case III: $\left[\begin{array}{c} t\\ x\end{array}\right]\mapsto \left[\begin{array}{c} T-VX+\frac{1}{2}\frac{V^2}{H_0}\\ X-VT\end{array}\right]$ but $V=H_0 X$, so we get: $\left[\begin{array}{c} t\\ x\end{array}\right]\mapsto \left[\begin{array}{c} T-\frac{1}{2}H_0 X^2\\ X-H_0 XT\end{array}\right]$ This is the transformation of Case I, so you don't get a new line element! Comparing equation (1) with (2), we have "the same" line element for $H_0=\dot{a}(0)=0$.
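As a cross-check, one can verify with a computer algebra system that the transformation quoted in the question brings the metric to Minkowski form at the origin, up to terms of second order in $(T,X)$. A sketch using sympy, with $a(t)$ expanded to first order as $1+H_0 t$ (my simplification, sufficient for this check):

```python
import sympy as sp

T, X, H0, dT, dX = sp.symbols('T X H_0 dT dX')

# Case I transformation and the FRW metric ds^2 = dt^2 - a(t)^2 dx^2
t = T - sp.Rational(1, 2)*H0*X**2
x = X - H0*X*T
a = 1 + H0*t                      # a(t) to first order: a(0)=1, a'(0)=H0

dt = sp.diff(t, T)*dT + sp.diff(t, X)*dX
dx = sp.diff(x, T)*dT + sp.diff(x, X)*dX

deviation = sp.expand(dt**2 - a**2*dx**2 - (dT**2 - dX**2))

# every surviving monomial is at least second order in (T, X):
orders = [sum(mon) for mon in sp.Poly(deviation, T, X).monoms()]
print(min(orders))   # 2 -> metric is Minkowski at the origin to first order
```

The constant and linear pieces cancel exactly, which is what "normal coordinates at the origin" means.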
{ "language": "en", "url": "https://physics.stackexchange.com/questions/446859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Mass versus rotation curves Is there an equation that describes the relationship between the mass of the Galaxy and the rotation curve? I found V versus R graphs and equations that describe their relationship (kind of). But I wonder how mass would affect the rotation curves. For instance, if the Milky Way had more mass, what would the rotation curve look like, or if it had less mass, etc.
from scipy import special
import matplotlib.pyplot as plt

# Constants
G = 4.302e-3       # gravitational constant in pc M_sun^-1 (km/s)^2
R_halo = 30000     # halo scale radius in pc
M_disk = 1e10      # disk mass in solar masses
M_halo = 3e11      # halo mass in solar masses
R_disk = 3000      # disk scale length in pc

Radius, Velocity, V_H, V_D = [], [], [], []

for R in range(1, 30000, 100):
    y = R / (2 * R_disk)
    # exponential-disk factor built from modified Bessel functions
    F = special.iv(0, y) * special.kv(0, y) - special.iv(1, y) * special.kv(1, y)
    v_halo = (G * M_halo * (R / R_halo)) / (2 * R_halo * (1 + R / R_halo) ** 2)
    v_disk = 2 * G * M_disk * y ** 2 * F / R_disk
    Radius.append(R)
    Velocity.append((v_halo + v_disk) ** 0.5)  # total circular speed, km/s
    V_H.append(v_halo ** 0.5)                  # halo contribution
    V_D.append(v_disk ** 0.5)                  # disk contribution

plt.plot(Radius, Velocity, "r", label="total")
plt.plot(Radius, V_H, "g", label="halo")
plt.plot(Radius, V_D, "b", label="disk")
plt.xlabel("Radius (pc)")
plt.ylabel("Velocity (km/s)")
plt.legend()
plt.minorticks_on()
plt.grid(which='major', color='k', linestyle='-')
plt.grid(which='minor', color='r', linestyle='-', alpha=0.2)
plt.show()
{ "language": "en", "url": "https://physics.stackexchange.com/questions/446920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Does a particle with infinite energy escape an infinite well? Currently, my modern physics class is going over particles in finite and infinite wells, general quantum formalism, and tunneling. What happens to a particle as it gains an infinite amount of energy? Does it stay inside of the infinite well? Does it escape? Can it not be determined? Does it depend? Are there any issues with this question? Is it valid? Is there anything I need to define or presume before I ask it? Do I need to define the rates at which the potential of the walls go to infinity, or the rate at which the particle's energy goes to infinity?
The maths breaks. As said in the comments, it's an "irresistible force meets immovable object" paradox, and the mathematics concedes and concurs with its paradoxical nature by breaking. As mentioned in the other comment, the energy wave functions (without normalization) are $$\psi_n(x) = \sin\left(\frac{\pi n}{L} x\right)\ \mbox{inside the box}$$ If you take $n = \infty$, thus infinite energy, then you get $\sin(\infty)$. This is mathematical nonsense. Usually we extend the definition of a function when including $\infty$ by taking the limit as its input approaches that infinity, but $$\lim_{x \rightarrow \infty} \sin(x)$$ does not exist. This is the math puking its guts out, humbly choosing to offer its life rather than dare pretend it is capable of offering a solution to this age-old philosophical conundrum and so suffer the tragedy and shame of hubris.
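The non-existence of that limit is easy to see numerically: sampling $\sin(\pi n x/L)$ at a fixed interior point for ever larger $n$, the values keep oscillating instead of settling down. A small sketch (the sample point and the $n$ values are arbitrary choices of mine):

```python
import math

L = 1.0
x = 1 / math.sqrt(2)   # arbitrary fixed point inside the box

ns = (10, 100, 1000, 10000, 100000)
values = [math.sin(math.pi * n * x / L) for n in ns]
for n, v in zip(ns, values):
    print(f"n = {n:6d}   psi_n(x) = {v:+.4f}")
# the sequence never converges, so the n -> infinity "state" is undefined
```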
{ "language": "en", "url": "https://physics.stackexchange.com/questions/447199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Stack/Chimney Effect: a physical explanation on how height of chimney affects $\Delta P$ A fellow engineer told me that there are greenhouses which exploit the stack effect, in order to cover some or all of their electrical energy needs. This is achieved by installing small electrical generators with fans mounted on the rotor, on or near a chimney, which has to be large. Due to pressure difference between the greenhouse and the environment, there is a flow of cold air, from outside, which is capable of rotating the fans and therefore producing electrical energy. By reading the wiki article on the effect, I understood why the chimney has to be large, since $\Delta P$ is proportional to the height h: $\Delta P = C\alpha h(\frac{1}{T_0}-\frac{1}{T_i})$ where * *ΔP = available pressure difference [Pa] *C = 0.0342, the temperature gradient [K/m] *a = atmospheric pressure [Pa] *h = height or distance [m] *To = absolute outside temperature [K] *Ti = absolute inside temperature [K] But can someone explain to me why the height is proportional to ΔP in a physical/mechanical way? or recommend an article/book/paper that explains it? EDIT: Something similar (if not exactly the same) is the solar updraft tower. Again, tower has to be as tall as possible, in order to maximize the power generation. But how is this mechanically explained? Is it because of greater pressure, the speed of the air increased, so fans rotate faster?
The pressure difference here depends on the difference in weight of two identical-in-size imaginary air columns that go from the ground to the top of the troposphere: one of them includes the ambient air near the chimney, the other includes the air inside the chimney and everything above it. The taller the chimney, the lighter the second column.
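This weight difference is exactly what the formula in the question quantifies, and it makes the linearity in $h$ explicit: each extra metre of chimney adds the same increment of column-weight difference. A quick numerical sketch (the temperatures and heights below are made-up example values):

```python
# Stack-effect formula from the question: Delta_P = C * a * h * (1/T_o - 1/T_i)
C = 0.0342       # K/m
a = 101325.0     # atmospheric pressure, Pa
T_o = 283.0      # outside temperature, K (10 C)
T_i = 303.0      # inside temperature, K (30 C)

for h in (10, 50, 100):  # chimney heights in metres
    dP = C * a * h * (1/T_o - 1/T_i)
    print(f"h = {h:3d} m -> Delta_P = {dP:.1f} Pa")
```

Doubling the height doubles the available pressure difference, which is why the towers are built as tall as practical.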
{ "language": "en", "url": "https://physics.stackexchange.com/questions/447463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is the "lowest energy"? In many textbooks I come across the term lowest energy. For example, in atomic structures, electrons are placed in orbitals in order for the atom to have the lowest energy. But what is this energy? Potential energy, kinetic energy, or the sum of the two?
In QM the Schrödinger equation gives you the solutions for the wavefunction of a particle in a given potential. Because the energy is quantized, you usually find several possible values for the energy, labelled by an integer number $n$ called the principal quantum number. The state with the lowest value of $E_n$, normally at $n=0$ or $n=1$, is the ground state. This means that the energy of the particle is the lowest, and it is a combination of the potential energy and the kinetic energy.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/448569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Can a battleship float in a tiny amount of water? Given a battleship, suppose we construct a tub with exactly the same shape as the hull of the battleship, but 3 cm larger. We fill the tub with just enough water to equal the volume of space between the hull and the tub. Now, we very carefully lower the battleship into the tub. Does the battleship float in the tub? I tried it with two large glass bowls, and the inner bowl seemed to float. But if the battleship floats, doesn't this contradict what we learned in school? Archimedes' principle says "Any floating object displaces its own weight of fluid." Surely the battleship weighs far more than the small amount of water that it would displace in the tub. Note: I originally specified the tub to be just 1 mm larger in every direction, but I figured you would probably tell me when a layer of fluid becomes so thin, buoyancy is overtaken by surface tension, cohesion, adhesion, hydroplaning, or whatever. I wanted this to be a question about buoyancy alone.
The USS Missouri $5.8 \times 10^7\,\rm kg, \, 270\,\rm m$ long with a fully laden draft of $11.5\,\rm m$ has an underwater surface area in excess of $270\times 11.5\times 2 \approx 6200\,\rm m^2$ and needs to "displace" $5.8 \times 10^7\,\rm kg$ of salt water (density $\approx 1020 \,\rm kg \, m^{-3}$) to float. Assume a custom made tank so that an even thickness of water (total volume $1 \,\rm litre = 0.001 \,m^{-3}$) surrounds the USS Missouri below its waterline. This thickness of the water layer would be smaller than $\frac{0.001}{6200} \approx 1.6 \times 10^{-7} \rm m$. So in theory possible but in practice very highly unlikely. The OP has changed the title from "1 litre" to "a small amount of water". All that needs to be done is to choose a volume of water such that it is practically possible to float the USS Missouri in a suitably shaped dock and the OP's layer of water 3 cm thick might be possible in practice? The picture in this answer gives a flavour of an "apparent lack of water" being able to float a ship.
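The arithmetic behind the estimate is easy to reproduce; a short sketch using the numbers quoted above:

```python
rho_salt = 1020                   # kg/m^3, salt water
displaced = 5.8e7 / rho_salt      # m^3 of water the hull must "displace"

length, draft = 270, 11.5         # m
wetted_area = 2 * length * draft  # m^2, lower bound (sides only)
litre = 1e-3                      # m^3

layer = litre / wetted_area       # upper bound on the water-layer thickness
print(f"displaced volume ~ {displaced:.0f} m^3")
print(f"wetted area ~ {wetted_area:.0f} m^2, layer < {layer:.1e} m")
```

The sub-micron layer is why the 1-litre version is theory-only, while a few centimetres of clearance gives a practically achievable volume.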
{ "language": "en", "url": "https://physics.stackexchange.com/questions/448673", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39", "answer_count": 8, "answer_id": 5 }
Neumark's theorem - equivalence of POVM and projective measurements Let's say we have a system coupled with an ancilla $\vert \psi_{SA}\rangle = \vert\psi_S\rangle\otimes\vert\phi_A\rangle$. The unitary evolution of this state is given by $U_{SA}$. If we perform a projective measurement on the ancilla, the $i^{th}$ outcome occurs with probability $p_i = \langle \psi_{SA} \vert U_{SA}^{\dagger}(\mathbb{1}\otimes\vert m_i\rangle\langle m_i\vert) U_{SA}\vert\psi_{SA}\rangle$. The state after this outcome is $M_{i}\vert\psi_S\rangle\otimes\vert\phi_A\rangle$, where we have $M_{i} = \langle m_i\vert U_{SA}\vert\phi_{A}\rangle$, an operator that only acts on the system $S$. We can verify that $\sum_i M^\dagger_i M_i = \mathbb{1}_S$. The proof uses the fact that $U_{SA}$ is unitary. How does one reverse this argument (essentially the proof of Neumark's theorem)? I would like to start with $M_i$ that satisfy $\sum_i M^\dagger_i M_i = \mathbb{1}_S$ and prove that there exists an ancilla system and corresponding unitaries $U_{SA}$ and projective measurements that follow the argument above. I'm not sure how to do this and the proof on Wikipedia isn't very illuminating.
Without loss of generality, I will assume $|\phi_A\rangle=|0\rangle$ -- otherwise, just rotate your ancilla space into that basis first as part of $U$. Similarly, I will assume $|m_i\rangle=|i\rangle$. The condition $\sum M_i^\dagger M_i=1\!\!1$ says that the matrix $$ V=\begin{pmatrix}M_1\\M_2\\\vdots\\M_k\end{pmatrix} $$ has orthogonal columns. You can thus complete it to a unitary matrix $U$ by adding further columns to it (e.g., by taking states from your favorite basis and orthonormalizing them, unless they are in the span of the existing columns). This way, you arrive at a unitary $U$ with $$ V=\begin{pmatrix} M_1 & * & * & \cdots \\ M_2 & * & * & \cdots \\ \vdots\\ M_k & * & * & \cdots \ , \end{pmatrix} $$ this is, $$ \langle m_i|U|0\rangle = M_i\ . $$
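The column-completion step can be sketched numerically for a concrete channel. The example below uses amplitude-damping Kraus operators (my choice) and completes the stacked isometry to a unitary via a QR decomposition, one standard way to add the missing columns; the block/index convention (ancilla index as the slow, block index) is an assumption matching the stacking order:

```python
import numpy as np

gamma = 0.3   # made-up damping parameter for an amplitude-damping channel
M1 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]])
M2 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
assert np.allclose(M1.conj().T @ M1 + M2.conj().T @ M2, np.eye(2))

# stack the Kraus operators into an isometry V (orthonormal columns), then
# complete its column space to a full unitary via QR
V = np.vstack([M1, M2])                                # 4x2
extra = np.random.default_rng(1).normal(size=(4, 2))   # generic filler columns
Q, _ = np.linalg.qr(np.hstack([V, extra]))
U = np.hstack([V, Q[:, 2:]])   # keep V itself as the first two columns

assert np.allclose(U.conj().T @ U, np.eye(4))          # U is unitary
# the blocks <m_i|U|0> recover the Kraus operators:
assert np.allclose(U[0:2, 0:2], M1) and np.allclose(U[2:4, 0:2], M2)
print("unitary dilation found")
```

Because the columns of $V$ are already orthonormal (that is exactly the condition $\sum_i M_i^\dagger M_i = \mathbb{1}$), the QR step only has to orthonormalize the filler columns against them.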
{ "language": "en", "url": "https://physics.stackexchange.com/questions/448756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How is $\int \frac{d^{3}\mathbf{p}}{(2\pi)^3}\frac{1}{2\sqrt{|\mathbf{p}|^2+m^2}}$ manifestly Lorentz-Invariant? When writing integrals that look like $$ \int \frac{d^{3}\mathbf{p}}{(2\pi)^3}\frac{1}{2\sqrt{|\mathbf{p}|^2+m^2}} \ = \int \frac{d^4p}{(2\pi)^4}\ 2\pi\ \delta(p^2+m^2)\Theta(p^0) $$ it is often said that this is manifestly Lorentz invariant (where $\Theta$ is the Heaviside step function). Why is this true? If I consider a Lorentz transformation $p \to q = \Lambda p$, then $|\det(\Lambda)| =1$ so the Jacobian is just 1, and: $$ \int \frac{d^4p}{(2\pi)^4}\ 2\pi\ \delta(p^2+m^2)\Theta(p^0) = \int \frac{d^4 q}{(2\pi)^4}\ 2\pi\ \delta(q^2+m^2)\ \Theta\big([\Lambda^{-1}]^0_{\ \nu} q^\nu \big) $$ After this transformation $p^0$ gets taken to $[\Lambda^{-1}]^0_{\ \nu} q^\nu$, and things do not look Lorentz invariant to me. If it were invariant wouldn't this get mapped to just $q^0$?
Use the relation $$ \int \frac{d^3{\bf k}}{(2\pi)^3}\frac{1}{2\sqrt{{\bf k}^2+m^2}} = \int \frac{d^4k}{(2\pi)^4}\frac{1}{k^2+m^2} $$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/449122", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
For dimensional regularization, why does the arbitrary mass scale $\mu$ have the meaning of a UV cutoff? For a sharp cut-off regularization, we introduce the UV cutoff $\Lambda$. When we need to do a momentum integral, we integrate over the momentum ball with radius $\Lambda$. This $\Lambda$ has the explicit physical meaning of a UV cutoff. For $\phi^4$ in $4$-dim, when we use dimensional regularization, we introduce an arbitrary mass scale $\mu$ $$S= \int d^Dx \frac{1}{2} (\partial_\mu \phi)^2 + \frac{m^2}{2} \phi^2+ \frac{\lambda \mu^\epsilon}{4!} \phi^4 $$ with $\epsilon = 4-D$. Up to now, the introduction of the arbitrary mass scale $\mu$ is just to keep the parameter $\lambda$ dimensionless. It can be any number. But when we write the RG equation and beta function, we give $\mu$ the physical meaning of a UV cutoff. However, textbooks don't explain why. My question: * *Why does this arbitrary mass scale $\mu$ have the physical meaning of a UV cutoff?
Dude, you are confused! In the dimensional regularization scheme (Feynman used to call the shell game of renormalization dippy hocus-pocus), it's the $\epsilon = 4-d$ which plays the role of the UV cutoff. The renormalization scale $\mu$ is the energy scale ($p^2 \sim \mu^2$) at which you anchor your renormalized parameters, such as the coupling $g_{renor}|_{p^2=\mu^2}$. Usually, $\mu$ is chosen at the scale of the physical process in question, rather than at a UV cutoff scale $\Lambda$, which is assumed to be close to the Planck scale $M_p$.
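The role of $\mu$ as a sliding reference point rather than a cutoff shows up in the running of the coupling. A sketch using the standard one-loop $\phi^4$ result $\beta(\lambda)=3\lambda^2/(16\pi^2)$ (the starting values below are arbitrary):

```python
import math

lam0, mu0 = 0.1, 1.0            # coupling defined at a reference scale mu0
b = 3 / (16 * math.pi**2)       # one-loop beta-function coefficient

def lam(mu):
    # analytic solution of d(lambda)/d(ln mu) = b * lambda^2
    return lam0 / (1 - b * lam0 * math.log(mu / mu0))

for mu in (1.0, 10.0, 100.0):
    print(f"mu = {mu:6.1f}  lambda(mu) = {lam(mu):.5f}")
# lambda(mu) is the coupling appropriate to processes at scale mu;
# nothing is being "cut off" at mu
```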
{ "language": "en", "url": "https://physics.stackexchange.com/questions/449208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Why can't a particle penetrate an infinite potential barrier? I am studying basic quantum theory. My question is: Why can't a particle penetrate an infinite potential barrier? The reasoning that I have applied is that particles under consideration have finite energy. So, to cross an infinite potential barrier the particle requires infinite energy. But I cannot think of the mathematical relation between potential and energy so that indeed I am convinced that to cross an infinite potential barrier the particle needs infinite energy. What is the relation between the potential and energy of quantum mechanical particles?
Imagine a finite potential well of the form $$ V(x) = \begin{cases} 0 & |x| < L/2 \\ V_0 & {\rm otherwise}\end{cases} $$ You can solve the Schrödinger equation in the usual way, by splitting the domain into three parts; the resulting wave function will look something like this $$ \psi(x) = \begin{cases} \psi_1(x) & x < -L/2 \\ \psi_2(x) & |x| \leq L/2 \\ \psi_3(x) & x > L/2\end{cases} $$ Inside the box $\psi_2(x) \sim e^{\pm ikx}$, but outside the box you will find $$ \psi_3(x) \sim e^{-\alpha x} $$ where $$ \alpha = \frac{\sqrt{2m(V_0 - E)}}{\hbar} $$ Now take the limit $V_0\to\infty$ (infinite potential barrier), and you will see that $\psi_3(x)\to 0$, same as $\psi_1(x)$. So in that sense the particle cannot penetrate the barrier and remains confined to the region $|x| \leq L/2$
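The approach to zero can be made quantitative: the evanescent tail $e^{-\alpha x}$ has a penetration depth $1/\alpha$ that shrinks as $V_0$ grows. A small numerical sketch in units with $\hbar = m = 1$ (my choice, with the energy fixed below the barrier):

```python
import math

hbar = m = E = 1.0   # natural units; particle energy fixed below the barrier

depths = []
for V0 in (10.0, 100.0, 1e4, 1e8):
    alpha = math.sqrt(2 * m * (V0 - E)) / hbar
    depths.append(1 / alpha)
    print(f"V0 = {V0:12.0f}   penetration depth 1/alpha = {1/alpha:.2e}")
# 1/alpha -> 0 as V0 -> infinity: the wavefunction is squeezed out of the wall
```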
{ "language": "en", "url": "https://physics.stackexchange.com/questions/449333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
How can two electrons repel if it's impossible for free electrons to absorb or emit energy? There is no acceptable/viable mechanism for a free electron to absorb or emit energy, without violating energy or momentum conservation. So its wavefunction cannot collapse into becoming a particle, right? How do 2 free electrons repel each other then?
Let me start with a simple counter-question: how does a free electron lose kinetic energy in a laser cooling process? The photon, hitting the compliant electron, gets absorbed and is afterwards re-emitted with a higher frequency (a higher energy content). There is no acceptable/viable mechanism for a free electron to absorb or emit energy,... There is. Photons are indivisible particles only between their emission and absorption. And the term "photons" is a summary for a class of particles over all possible frequencies (energy contents). So the re-emission of a photon mostly happens not with the same frequency as the absorbed photon. So I rewrite the equation from another answer as an interaction between the electron and the photon: $$e + \gamma \equiv e \leftrightarrow (\gamma_1 + \gamma_2) \to (e + \gamma_1) + \gamma_2 $$ How do 2 free electrons repel each other then? Beside explanations with virtual photons, another explanation is that for equally charged particles the fields do not exchange energy but work like springs. The electric fields get deformed like springs and relax afterwards by pushing the particles back. But the particles meanwhile lose some amount of their kinetic energy (relative to each other) by emitting photons. You remember, any acceleration is accompanied by photon emission.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/449431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 5, "answer_id": 3 }
Violating Newton's First Law! Suppose you are inside a very large empty box in deep space, floating (i.e. not touching the box from anywhere initially). The box is at complete rest. Now you push the box forward from inside. You would go backwards, but the box will move forward to conserve momentum. However, since you were inside the box, your force is an internal force, yet the box has moved forward. So doesn't this violate Newton's first law, as an internal force made a body move from a state of rest?
You are right, the force is internal to the system and not just the box. Overall the combined center of mass will remain fixed as you and the box exchange momentum. There is no paradox here because you are either considering the entire system of box + human with no external forces, or you are considering the box by itself with an external force applied (by the human).
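The momentum bookkeeping is easy to make explicit; a small sketch with made-up masses and push speed:

```python
m_person, m_box = 70.0, 500.0   # assumed masses, kg
v_person = -2.0                 # person drifts backwards at 2 m/s after the push

# zero total momentum before the push implies zero after it
v_box = -m_person * v_person / m_box
p_total = m_person * v_person + m_box * v_box

print(f"box velocity = {v_box:+.2f} m/s")        # forward
print(f"total momentum = {p_total:.1f} kg m/s")  # 0.0: centre of mass fixed
```

Both bodies move, but the person + box system's total momentum, and hence its centre of mass, stays exactly where it was, so no law is violated.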
{ "language": "en", "url": "https://physics.stackexchange.com/questions/449699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 2 }
Is it the current which creates the magnetic field, or vice versa, or both? Talking about a stationary magnetic field, it is said that if a conductor rotates inside the field, a current is induced. Also, I read that a current (moving charges) generates a magnetic field, too. How are these connected, and what's the best approach to understand both phenomena?
Is it the current which creates the magnetic field, or vice versa, or both? There are always three components involved. * *The best-known case, and the easiest to imagine, is the Lorentz force in its primordial meaning $ \vec F = q \vec v \times \vec B $. A charge, moving non-parallel to an external magnetic field, undergoes a deflection perpendicular to the plane of these two components. In the image below you read it like this: electrons, moving into the image (electric current I), under the influence of a magnetic field B pointing left to right, experience a force F upwards. This is how an electric device (a motor) works. *But you can reread the image as follows: a current-carrying wire is accelerated, and a magnetic field is induced. *The last case: moving a wire inside a magnetic field, a current is induced. This is how an electric generator works. What's the best approach to understand both phenomena? Always think about the three involved components I (qv), B and F.
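The "three components" rule is just the cross product; a minimal numeric sketch with a made-up right-handed configuration (the charge, velocity and field values are mine, not from the image):

```python
import numpy as np

q = 1.0                         # positive test charge, C
v = np.array([1.0, 0.0, 0.0])   # velocity along +x, m/s
B = np.array([0.0, 1.0, 0.0])   # field along +y, T

F = q * np.cross(v, B)          # Lorentz force F = q v x B
print(F)   # [0. 0. 1.] -> force along +z, perpendicular to both v and B
```

Swapping which of the three quantities is "given" reproduces the motor and generator viewpoints in the answer.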
{ "language": "en", "url": "https://physics.stackexchange.com/questions/449916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What was wrong with the old definition of temperature scale in kelvin? Wikipedia's article on the recent change to the definition of the SI base units states, as the reason for changing the definition of the kelvin: A report published in 2007 by the Consultative Committee for Thermometry (CCT) to the CIPM noted that their current definition of temperature has proved to be unsatisfactory for temperatures below 20 K (−253 °C; −424 °F) and for temperatures above 1,300 K (1,030 °C; 1,880 °F). Sure, I understand that tying temperature to a physical artifact, even a highly-reproducible one like the triple point of water, is unsatisfying. But the way it's worded implies there's some more significant problem, as if temperature measurements outside that range are less accurate or less reliable. What is that problem?
I think the problem is that the temperatures you quote are a long way from the triple point of water, and no single thermometer can accurately span from the triple point of water to those temperatures. A definition of the kelvin that fixes the value of Boltzmann's constant makes it possible to design thermometers to suit the temperature range of interest without being compromised by the need to function well at the TPW. The new kelvin definition, in principle, enables all equations of state that include temperature to be used to make traceable temperature measurements. Taken from The Boltzmann constant and the new kelvin, which was written in 2015 and gives a nice overview.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/449975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
From where do electrons flow to make a bulb light? Suppose we have the "basic" stuff: a battery, two pieces of wire and a bulb. The battery has a potential difference. But from where do the electrons flow to make the bulb light? From the wire, from the battery, or from both? Also, if electrons flow from the battery and go through the wire (conductor), then why doesn't this happen in insulators? Insulators don't give electrons, but why don't they let electrons flow?
Every piece of the circuit has a molecular structure in which the electrons can either be bound to their atoms' nuclei, or have enough energy to detach from their atom and roam in the field of the metallic bond. Conducting metals have a low energy threshold that electrons need in order to detach. A force caused by a potential difference can direct their collective movement, and therefore we have a stream of electrons, i.e. an electrical current. Now, insulators are structured in a way that their electrons must have a greater energy in order to be detached from their own atom, so, in standard conditions, the force from a potential difference won't be able to move electrons bound to their atoms, and therefore there is no electrical current through the insulator material.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/450081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is the second law of thermodynamics a "no-go" theorem? As defined here, there are several no-go theorems in theoretical physics. These theorems are statements of impossibility. The second law of thermodynamics may be stated in several ways, some of which describe the impossibility of certain situations. The question is: if we view the second law of thermodynamics as a theorem (that is, a proposition that can be either proved to be true or untrue), then is it a no-go theorem? I understand that the second law of thermodynamics is a physical "law" in the sense that it is axiomatic in thermodynamics (i.e. we don't prove Newton's laws in classical mechanics), however, one can "prove" the second law of thermodynamics from statistical physics considerations. So, if you'd rather not call the second law of thermodynamics a "theorem," then perhaps it is a "no-go law"? Perhaps I'm missing a key or subtle point here, all input is very much appreciated. It may be just a matter of terminology, but I'm curious either way.
Consider for concreteness the Kelvin-Planck statement that 'you cannot extract net average work in a closed cycle from a single heat-bath'. This certainly has a flavour of a no-go statement. To call it a theorem we normally demand that it is derived (non-trivially) from some other definitions/axioms. One can indeed derive Kelvin-Planck statement after defining work, heat-bath and closed cycle mathematically (using stochastic thermodynamics). So it seems fair to call it a no-go theorem. We should bear in mind that the domain of validity is very specific, e.g. many systems around us are not heat-baths as defined in stochastic thermodynamics.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/450313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 6, "answer_id": 5 }
Inverse of a matrix in a Path Integral Good morning! I can't make sense of an inverse of a matrix appearing in a calculation for a Wiener Path Integral. In discretized form: $$\int \prod_{i=1}^N \frac{dx_i}{\sqrt{\pi \epsilon}} e^{-\frac{1}{\epsilon} \sum_{i=1}^N \left( x_i-x_{i-1} \right)^2-\sum_{i=1}^N p_i x^2_i} \delta(x_N-x)$$ where $p_j=p(j\epsilon)$ and $p(\tau)$ is a real function. Now I wrote the delta function as an integral and in the end I have a Gaussian integral where I need the determinant of the matrix $$ a = \begin{bmatrix} a_1 & -\frac{1}{\epsilon} & 0 & \dots & \dots & 0 \\ -\frac{1}{\epsilon} & a_2 & -\frac{1}{\epsilon} & \dots & \dots & 0 \\ 0 & -\frac{1}{\epsilon} & a_3 & -\frac{1}{\epsilon} & \dots & 0 \\ \vdots & & \ddots & \ddots & \ddots & \vdots \\ \vdots & & & & & -\frac{1}{\epsilon} \\ 0 & \dots & \dots & 0 & -\frac{1}{\epsilon} & a_N \end{bmatrix}$$ where $a_j=p_j \epsilon + \frac{2}{\epsilon}$ for $j \neq N$ and $a_N=p_N \epsilon + \frac{1}{\epsilon}$. I made sense of the determinant thanks to the Gelfand-Yaglom method but I have no idea how to compute the inverse matrix element $a^{-1}_{N,N}$. Any clue?
It's a tridiagonal matrix, so the inverse is found as the Green function of the associated three-term recurrence relation. There is no closed-form solution for general $p_i$, but a detailed description of the related math is in the exercises starting on page 86 of my lecture notes at https://courses.physics.illinois.edu/phys508/fa2018/amaster.pdf. Note in particular problem 2.16. This writes the matrix element that you want as a continued fraction, and also comments on the connection to Haydock recursion.
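The continued-fraction form mentioned in this answer can be verified numerically. A sketch (the values of $p_j$ are illustrative stand-ins, not from the paper): for a symmetric tridiagonal matrix with diagonal $a_1,\dots,a_N$ and off-diagonal $b$, the corner inverse element is $(A^{-1})_{NN} = 1/(a_N - b^2/(a_{N-1} - b^2/(\cdots - b^2/a_1)))$, which follows from the same three-term recurrence as the Gelfand-Yaglom determinant:

```python
import numpy as np

N, eps = 8, 0.1
b = -1.0 / eps
p = 1.0 + 0.3 * np.arange(1, N + 1)        # stand-in for p_j = p(j*eps)
a = p * eps + 2.0 / eps
a[-1] = p[-1] * eps + 1.0 / eps            # a_N has the modified entry

# Build the full tridiagonal matrix and invert directly.
A = (np.diag(a)
     + np.diag(b * np.ones(N - 1), 1)
     + np.diag(b * np.ones(N - 1), -1))
direct = np.linalg.inv(A)[-1, -1]

# Continued fraction, evaluated from the innermost level (a_1) outward;
# cf ends up equal to D_N / D_{N-1}, the ratio of principal minors.
cf = a[0]
for k in range(1, N):
    cf = a[k] - b**2 / cf
corner = 1.0 / cf

print(direct, corner)  # the two values agree
```

The continued fraction is just the ratio of successive Gelfand-Yaglom determinants, since $(A^{-1})_{NN} = D_{N-1}/D_N$ with $D_k = a_k D_{k-1} - b^2 D_{k-2}$.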
{ "language": "en", "url": "https://physics.stackexchange.com/questions/450451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Where does the momentum go when an atom absorbs a photon? Imagine an electron around an atom absorbs a photon and becomes excited; it has now jumped to a higher orbital. At this point in time, where does the momentum of the photon go?
Emilio's answer (How does one account for the momentum of an absorbed photon?) is great but quite technical. The simplified version is that some of the momentum is accounted for by the different linear momentum associated with different orbitals. From the Bohr model we have $$p_n=\dfrac{\hbar}{a_0 n}$$ where $n$ is the principal quantum number, which changes when an atom is excited. The rest of the momentum causes the atom itself to recoil. This is the mechanism whereby a photon absorption can excite a phonon (it kicks the atom, which sets the lattice vibrating).
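To get a feel for the recoil part, here is a rough order-of-magnitude sketch (textbook constants, with hydrogen Lyman-alpha absorption as an illustrative example, not specific to the question):

```python
# A Lyman-alpha photon (~10.2 eV) absorbed by a hydrogen atom: the photon
# momentum p = E/c is transferred to the atom, which recoils.
eV = 1.602e-19        # J
c = 2.998e8           # m/s
m_H = 1.674e-27       # kg, hydrogen atom mass

E_photon = 10.2 * eV
p_photon = E_photon / c
v_recoil = p_photon / m_H
print(v_recoil)  # a few m/s -- tiny compared to thermal speeds, but nonzero
```

This recoil is small but real; it is the same effect exploited in laser cooling of atoms.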
{ "language": "en", "url": "https://physics.stackexchange.com/questions/450568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why do macro black holes take ridiculous amounts of time to evaporate—unlike micro black holes which dissolve in even less than a second? Why do macro black holes take ridiculous amounts of time to evaporate, considering that micro black holes dissolve in even less than a second? Does this mass-based behavior imply that the matter within black holes still affects them with some of its physical properties even beyond the event horizon?
One way of looking at Hawking radiation is to think of the usual representation of virtual pairs as oysters, upon which the BH can feed (losing weight in the process, of course). But it is the tidal effect, the gravitational gradient (GG), which allows the BH to separate the virtual pair, eat one and spit the other out. Think of the GG then as an oyster-shucking knife. Now paradoxically the smaller the oyster, the larger its mass-energy, and the sharper the GG must be to open it. Large BHs with big dull knives can only eat large low-energy oysters, and thus lose weight slowly, in the form of low-energy radiation. But small BHs can feast on high-energy small oysters- thanks to a sharp GG- and lose all their mass in a blaze of glory- UV,X, and gamma rays (some of which we may glimpse from billions of LYs away). This is an explanation I devised for a grandniece, and as such is hardly science. But she readily understood it, and I think it does have a certain intuitive appeal.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/450981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Does the half-life of an element mean it will never decay completely? Example: Half life of Polonium-194 is 0.7 seconds. If we supposedly take 50g of Polonium, there will surely be a time when no more of this Polonium will be left because if we consider the decay discretely, in the form of individual atoms, won't there be a time when the last atom decays completely? Does this mean an element can decay completely? If so, why don't we actually 'run out' of natural radioactive elements? Is it so because the elements they decay into combine to form the parent element again?
There will certainly come a time at which we can say "it is more likely than not that not even one atom of the original Polonium sample is left". So, yes, the sample can decay completely. The fact is, the earth is running out of natural radioactive elements. Most of what is left are Uranium, Thorium and Potassium because they have half-lives which are not tiny compared to the age of the solar system. The reason why we had any radioactive elements to start out with is that the solar system formed from a cloud of dilute gas which contained debris from an exploded supernova. In the violence of a supernova explosion smaller nuclei can be slammed together so hard that they fuse into the heavier radioactive elements. In reactors we can make samples of heavy radioactive elements - but usually at the cost of many uranium atoms. Other than that the number of radioactive nuclei is winding down here on earth.
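The first sentence of this answer can be made quantitative: starting from $N_0$ atoms, the expected number remaining falls below one after roughly $\log_2 N_0$ half-lives. A sketch for the 50 g polonium-194 example from the question (molar mass taken as approximately 194 g/mol):

```python
import math

half_life = 0.7                       # s, Po-194
N0 = (50.0 / 194.0) * 6.022e23        # atoms in 50 g

# Expected atoms remaining after time t: N(t) = N0 * 2^(-t / half_life).
# "More likely than not that none is left" happens around N(t) ~ 1:
n_half_lives = math.log2(N0)
t_last = n_half_lives * half_life
print(n_half_lives)  # ~77 halvings
print(t_last)        # under a minute
```

So even an enormous macroscopic sample is, in expectation, completely gone in well under a minute — which is why only long-lived nuclides like uranium, thorium and potassium survive from the solar system's formation.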
{ "language": "en", "url": "https://physics.stackexchange.com/questions/451240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 0 }
Is there a way to separate 2D from 3D? When we see objects around us in space, we can always interpret them in 2D, by considering them to pass through a plane; it's only when we interact with those objects that we realise they are 3D. Is there any significant way of knowing this difference, using mathematics?
Using mathematics can mean different things. I will start explaining with a trivial example. * *I give you a two dimensional image (or even a series of them) and ask you to reconstruct a 3-D object from it. In this case, there is no inherent information in any two dimensional image that tells you what the object could be in three dimensions. No amount of mathematics, with no additional information, can take you from the flat image (say, a square) to the cube. *Luckily, in the real world we usually have some physical information about the object. For example, if I just tell you that the two dimensional object is the face of a cube, then you can immediately reconstruct the cube from just one image! Now this may seem too simplistic, but any reconstruction of a 3-D object from its 2-D representation works in more or less a similar way. Our brain sees two 2D images and, combining this with its experience of the world, reconstructs a 3D representation. In fact, babies cannot perceive depth until around the fifth month. There are numerous ways out there of depth reconstruction from 2D images, all of which heavily depend on mathematics. But there is no mathematical theory (and in my opinion there cannot be) that can, from a 2D representation alone, guess the 3D nature of the object. Below is a publication on depth reconstruction from a single image. http://www.cs.cornell.edu/~asaxena/learningdepth/ijcv_monocular3dreconstruction.pdf
{ "language": "en", "url": "https://physics.stackexchange.com/questions/451335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Fermionic ghost path integral results in $\delta$ function? This is related to a statement in pg 20 of hep-th/9408074 formula (2.39). Suppose $$\mathcal{L}\sim\frac{i}{\lambda^{\prime}}\bar{\eta}^xg_{ij}U_x{}^i\psi^j+\cdots \tag{2.35}$$where $\bar{\eta}$ to my guess is ghost field as it is non-dynamical and assume $\cdots$ does not contain contribution of $\bar{\eta}$. Consider $\int d\eta e^{\mathcal{L}}$. The paper says integration of $\eta$ gives $$\left(\frac{-i}{\lambda^{\prime}}\right)^t\delta(g_{ij}U_x{}^i\psi^j)\tag{2.39}.$$ $\textbf{Q:}$ How do I see this does give rise to Dirac delta function? I had expanded the exponential function to perform fermionic integral but this does not give me $\delta(g_{ij}U_x^i\psi^j)$ but $\sim U_x^i\psi^j$. Normally, Faddeev-Popov ghosts involve 2 fermions (ghost and anti-ghost) to perform gauge fixing. What is the argument here?
Eq. (2.39) is a $t$-dimensional Grassmann-odd delta function$^1$ $$\prod_{x=1}^t \delta(\frac{i}{\lambda^{\prime}} g_{ij}U_x{}^i\psi^j)~=~\prod_{x=1}^t \int \! d\eta^x~\exp\left\{ \frac{i}{\lambda^{\prime}}\eta^x g_{ij}U_x{}^i\psi^j\right\}~=~\prod_{x=1}^t \frac{\pm i}{\lambda^{\prime}} g_{ij}U_x{}^i\psi^j .\tag{2.39}$$ Recall that if $f$ is an arbitrary function of a Grassmann-odd variable $\theta$, then Berezin integration yields$^2$ $$\int d\theta~(\pm\theta)~f(\theta)~=~f(0), $$ which formally satisfy the defining property $$\int d\theta~\delta(\theta)~f(\theta)~=~f(0) $$ of a Dirac delta distribution! So we can identify $$\delta(\theta)~=~\pm\theta.$$ References: * *C. Vafa & E. Witten, arXiv:hep-th/9408074; eqs. (2.35) & (2.39). -- $^1$ There is a typo in the action (2.35) of Ref. 1: The Grassmann-odd $\bar{\eta}$ variable in the fifth term should be an $\eta$ variable. $^2$ The $\pm$ sign denotes different Berezin conventions $\int d\theta~\theta~=\pm 1 $.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/451471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
The reasoning behind doing series expansions and approximating functions in physics It is usual in physics that, when we have a variable that is very small or very large, we do a power series expansion of the function of that variable and eliminate the high order terms. But my question is: why do we usually make the expansion and then approximate? Why don't we just take the limit of that function when the value is very small (tends to zero) or very large (tends to infinity)?
In general, one uses whatever works to learn something about the system. Get an exact solution if you can. But too often that is not possible. So, use any technique you like to learn something about the behaviour. It turns out that the perturbative expansion can often be used, is usually meaningful around a stable state, and is helpful. So it becomes a golden hammer. However, it is good to be skeptical as to validity in a given case. For systems in a state far away from a stable minimum such techniques are often not valid at all.
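A concrete illustration of why the expansion beats the bare limit: for relativistic kinetic energy $E = mc^2(\gamma - 1)$, the limit $v \to 0$ just gives 0 — true but useless — while the leading term of the series recovers Newtonian $\tfrac12 mv^2$. A sketch (unit mass and $c = 1$ for convenience):

```python
import math

def kinetic_exact(v, m=1.0, c=1.0):
    """Relativistic kinetic energy m c^2 (gamma - 1)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return m * c**2 * (gamma - 1.0)

def kinetic_newton(v, m=1.0):
    """Leading term of the series expansion in v/c."""
    return 0.5 * m * v**2

# The expansion keeps the leading *behavior* near v = 0:
for v in (0.1, 0.01, 0.001):
    exact, approx = kinetic_exact(v), kinetic_newton(v)
    print(v, exact, approx, abs(exact - approx) / exact)
# The relative error shrinks like v^2: the expansion tells us *how*
# the quantity behaves as v -> 0, which the bare limit (just 0) cannot.
```

That is the general point: the limit gives one number, while the first nonvanishing terms of the expansion give the functional dependence near that limit.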
{ "language": "en", "url": "https://physics.stackexchange.com/questions/451588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 4, "answer_id": 3 }
$Q$-factor for damped oscillator (not driven)? How would this be defined? Some of the Q-factor definitions I have encountered include: $$Q=2\pi\frac{\text{Energy stored}}{\text{Mean power per cycle}}\\Q=2\pi\frac{\text{Energy stored}}{\text{Energy lost per period of oscillation}}\\Q=2\pi\frac{1}{\text{Fractional power lost per cycle}}$$ However, none of these seem to work for a non-driven, damped oscillator. The first two won't work because energy stored is not a constant, and unless fractional power lost per cycle is a constant (is it, and if it's then how do you show that?) the third won't work either.
A practical way to measure the Q factor for a non-driven oscillator is to measure the logarithmic decrement of the amplitude as the response decays after an impulse, and use that to find the damping ratio and hence Q. Note that the value of Q is only a constant for linear systems. For a nonlinear oscillator, in general it is amplitude dependent, and might not be a very useful concept anyway. For a nonlinear oscillator the resonant frequency may also be amplitude-dependent which makes it even more non-intuitive what (if anything) the Q value means in practice.
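The logarithmic-decrement recipe in this answer can be sketched directly. Here a decaying oscillation with a known (made-up) damping ratio is synthesized, and $Q$ is recovered from two successive peak amplitudes, assuming a linear oscillator:

```python
import math

# Decaying oscillation envelope exp(-zeta*w0*t), with known damping ratio.
zeta_true = 0.02
w0 = 2 * math.pi                      # 1 Hz natural frequency
wd = w0 * math.sqrt(1 - zeta_true**2)
T = 2 * math.pi / wd                  # damped period

def envelope(t):
    return math.exp(-zeta_true * w0 * t)

# Successive peak amplitudes are one damped period apart:
x1, x2 = envelope(0.0), envelope(T)
delta = math.log(x1 / x2)                            # logarithmic decrement
zeta_est = delta / math.sqrt(4 * math.pi**2 + delta**2)
Q = 1.0 / (2.0 * zeta_est)
print(zeta_est, Q)  # recovers zeta = 0.02, Q = 25
```

For a linear oscillator this recovery is exact; for a nonlinear one, as the answer notes, the extracted value would depend on amplitude.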
{ "language": "en", "url": "https://physics.stackexchange.com/questions/451690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Hamiltonian description of a system I know that phase space is the Hamiltonian description of a system, where we deal with position and momentum on an equal footing. My question is: in this phase space, are position and momentum a basis for the system? As far as I know they are independent in Hamiltonian dynamics, but how can I say that they are orthogonal basis functions? After all, we always draw the position and momentum axes perpendicularly!
I'm afraid you're making a soup of concepts which have nothing to do with one another. My suspicion derives e.g. from your use of words like "orthogonal" and "basis", which I would see better in context about QM. As to drawing "position and momentum line perpendicularly" you're attributing weight to an innocent practice: if I have a 2D space, it's usual to draw a map of its coordinates in a Cartesian plane. But you shouldn't give the drawing more properties than it's meant to have, e.g. a euclidean structure (orthogonality). What has instead a meaning in that plane is the area enclosed by a closed curve (the integral $\oint\!p\,dq$). Maybe you don't think of an area without a euclidean structure, but this is actually possible. And that area is interesting because it's invariant under canonical transformations. I assume you'll see that going on with Hamiltonian mechanics.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/451817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Different predictions from differential vs integral form of the Maxwell–Faraday equation? Assume a toroidal solenoid with a variable magnetic field inside (and zero outside) and a circular wire around one of the sides. Because there is no magnetic field outside the solenoid, we have $$\nabla \times E = - \frac{\partial B}{\partial t}=0,$$ which implies that E is conservative, that is, $$\int_{\partial \Sigma} E\cdot d\ell =0.$$ On the other hand, using the integral form we get: $$\int_{\partial \Sigma} E\cdot d\ell = - \frac{\partial}{\partial t}\int_\Sigma B \cdot dS \ne0,$$ because there is a changing B inside the surface. What is wrong with my reasoning?
Your conclusion that the electric field is conservative is wrong; from Stokes' theorem, $$\oint_{\partial \Sigma} \mathbf{E}\cdot d\mathbf{l} = \iint_\Sigma (\nabla \times \mathbf{E})\cdot d\mathbf{S},$$ and the curl of $\mathbf{E}$ is not zero everywhere on $\Sigma$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/451919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
How can we determine if poles are North or South when viewing magnetic field lines formed by iron filings? Iron filings can be used to visualize the magnetic field lines of a magnet. For example, from Wikipedia (image caption): "The magnetic field of a bar magnet revealed by iron filings on paper. A sheet of paper is laid on top of a bar magnet and iron filings are sprinkled on it. The needle shaped filings align with their long axis parallel to the magnetic field. They clump together in long strings, showing the direction of the magnetic field lines at each point." Question: When looking at magnetic field lines formed by iron filings, how can we determine which magnetic pole is North and which is South?
We can't. A magnet's North and South poles don't have any particular intrinsic property by themselves, only in how they interact with another North or South pole. If two poles attract, they are opposite, but we can't say which one is which. If two poles repel, they are either both North or both South, but again, we can't tell which it is. We need to compare against the Earth's magnetic field as a reference. Only then can we identify a magnet's North pole, which is the one that points toward the Earth's magnetic south pole (which is, ironically, located near the Earth's geographic north pole).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/452117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Influence of Lorentz force on Eddy Current A magnetic field exerts a force on a moving charge called Magnetic Lorentz force. How does this force work in case of eddy currents? The following is an extract from Wikipedia: "Another way to understand the current is to see that the free charge carriers (electrons) in the metal sheet are moving with the sheet to the right, so the magnetic field exerts a sideways force on them due to the Lorentz force. The Lorentz force on positive charges F = q(v × B) is toward the rear of the diagram (to the left when facing in the direction of motion v). This causes a current I toward the rear under the magnet, which circles around through parts of the sheet outside the magnetic field, clockwise to the right and counterclockwise to the left, to the front of the magnet again." In order for the charges to go in a circle, the magnetic Lorentz force should act as the centripetal force, correct? But this means that the direction of v and B should be perpendicular to each other. So, do eddy currents occur only when they are perpendicular? If not, then how does the Lorentz force influence the motion of the charge carriers?
One idea may be that the magnetic term $\overrightarrow{v}\wedge \overrightarrow{B}$ is not the only one to put the charges in motion. In an electrical circuit, the electric field associated with the surface charges accumulating at the edges of the circuit must be taken into account. The generalized Ohm's law in this case is written $\overrightarrow{j}=\gamma (\overrightarrow{E}+\overrightarrow{v}\wedge \overrightarrow{B})$. The magnetic term serves as an "emf source" in the region where $\overrightarrow{B}$ is intense, and the electric field allows the return of the current where the field is zero. In a way, it does look like a Laplace rail generator.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/452413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Where does it go wrong when using $v^2 - u^2 = 2as$ down the incline for different objects having different moments of inertia? Well, consider a situation: there is a sphere and a ring, of the same mass $M$ and radius $R$. They both start rolling down the inclined plane. We know their moments of inertia as well, $$I_\text{sphere}=\frac{2}{5}MR^2$$ and $$I_\text{ring}=MR^2$$ respectively. So we know that the sphere will have more translational kinetic energy, so more velocity, so it will take less time to reach the bottom. The question is: while using the equation $$v^2 - u^2 = 2as$$ for both, the initial velocity is $0$ for both and the final velocities are different, but the acceleration and distance traveled appear to be the same. So where is the blunder happening? And also for the equation $$v=u+at:$$ if the velocity of the sphere is greater, then what about the time? Why is the time taken less? Where are the equations going wrong, or is it me getting it wrong?
I get this solution: The equations of motion are: $I\,\ddot{\vartheta}=F_c\,R\qquad (1)$ $M\,\ddot{s}=-F_c +M\,g\,\sin(\alpha)\qquad (2)$ and rolling without slipping $\ddot{s}=\ddot{\vartheta}\,R\qquad (3)$ where $F_c$ is the constraint force. We have 3 equations for 3 unknowns: $\ddot{s}$, $\ddot{\vartheta}$ and the constraint force $F_c$. We get for the incline acceleration: $\ddot{s}=\frac{M\,R^2\,g\,\sin(\alpha)}{M\,R^2+I}$ With $I_s=\frac{2}{5}\,M\,R^2$ and $I_r=M\,R^2$ we get: $\frac{\ddot{s}_{\text{sphere}}}{\ddot{s}_{\text{ring}}}=\frac{10}{7}$ so the incline acceleration of the sphere is $\frac{10}{7}$ times the incline acceleration of the ring.
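The result above is easy to verify numerically — the resolution of the original puzzle being that the two accelerations are *not* the same, so $v^2 = 2as$ gives different final speeds and $v = at$ gives different times. A sketch (incline angle, mass and radius are illustrative):

```python
import math

g, alpha = 9.81, math.radians(30)
M, R = 1.0, 0.1

def incline_acceleration(I, M=M, R=R):
    # a = M R^2 g sin(alpha) / (M R^2 + I), from the equations above
    return M * R**2 * g * math.sin(alpha) / (M * R**2 + I)

a_sphere = incline_acceleration(I=2.0 / 5.0 * M * R**2)   # = (5/7) g sin(alpha)
a_ring = incline_acceleration(I=M * R**2)                 # = (1/2) g sin(alpha)
print(a_sphere / a_ring)  # 10/7

# Same distance s, different a => different final speeds AND different times:
s = 2.0
for a in (a_sphere, a_ring):
    v = math.sqrt(2 * a * s)          # v^2 - u^2 = 2 a s with u = 0
    t = v / a                         # v = u + a t
    print(a, v, t)
```

Note the ratio does not depend on M or R, since both cancel in the acceleration formula.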
{ "language": "en", "url": "https://physics.stackexchange.com/questions/452670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Hamiltonian formulation of general relativity Why is it not possible to find a Hamiltonian formulation of general relativity as easily as in classical mechanics? There was a remark to this in my lecture but no real explanation as to why this is. What stops us from creating a Hamiltonian formulation of GR?
The basic Hamiltonian formulation of GR is the Arnowitt-Deser-Misner (ADM) formalism from 1959. The Legendre transformation of the Einstein-Hilbert Lagrangian density is singular, which leads to constraints. References: * *ADM, arXiv:gr-qc/0405109.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/453327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Is there anything special about ebonite and fur? I'm from the Czech Republic, born 1980. From elementary school, we all remember this mantra: When an ebonite rod is rubbed with fox fur, electrostatic charge is created. Electrostatic charge is created by rubbing an ebonite rod with fox fur. Rubbing an ebonite rod with fox fur creates electrostatic charge. Etc. ad nauseam. So... Is there anything special about the combination of ebonite and fox fur that makes it especially useful for teaching kids about electricity? Does there even exist a clear distinction between things that do and things that don't create electrostatic charge by rubbing? The irony: I can't remember ever hearing the word 'ebonite' in any other context than this particular strange example. (I never even knew what ebonite was until about 15 minutes ago when I googled it.)
The electrons in fur are much less tightly bound than the electrons in ebonite (which binds its electrons very strongly; ebonite sits at the negative bottom of the triboelectric series, see [1]), and hence ebonite gets a strong relative negative charge [1]: "A material towards the bottom of the Triboelectric series table, when touched to a material near the top of the series, will acquire a more negative charge." So to answer the question, this particular combination of materials will cause a very strong electrostatic effect: the ebonite ends up strongly negatively charged.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/453656", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Why do mirrors not follow Brewster's angle? Normally a material will have an angle where the reflected light is completely polarized. Now say we have a mirror (implemented by a conductive silver coating) that reflects most of its incident light. https://physics.stackexchange.com/a/10925 says that this imperfect mirror will be mostly linearly polarized, but not at the Brewster angle. Why is this? The derivation for the Brewster angle assumes non-magnetic materials, but does not assume non-conductive materials, I believe.
Brewster's angle relates the index of refraction to a polarization phenomenon in reflection from a dielectric (insulator) material. Most mirrors are silvered (have a metal coating), and the equivalent dielectric constant for a metal is ... infinity. That predicts a Brewster's angle of 90 degrees, which is geometrically unavailable to an experimenter. $$ \Theta_{B} = \arctan\left({\eta_{metal} \over {\eta_{air}}}\right) = \arctan\left({{\infty} \over 1}\right)$$ The ninety degree angle is simply not a glancing incidence possibility for a reflection to be observed. It is not incorrect to say that metallized mirrors DO follow Brewster's angle.
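A quick numeric illustration of this limiting argument, comparing ordinary glass with an idealized metal of huge effective index (the index values are illustrative, not measured):

```python
import math

def brewster_deg(n2, n1=1.0):
    """Brewster's angle in degrees for light going from medium n1 into n2."""
    return math.degrees(math.atan(n2 / n1))

print(brewster_deg(1.5))   # glass: ~56.3 degrees, easily observed
print(brewster_deg(1e6))   # "metal" limit: approaches 90 degrees, unobservable
```

As the effective index grows, the Brewster angle crowds toward grazing incidence, which is why no practical polarizing angle is seen with a silvered mirror.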
{ "language": "en", "url": "https://physics.stackexchange.com/questions/453820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How is quantum mechanics consistent with statistical mechanics? Let's say we have an harmonic oscillator (at Temperature $T$) in a superposition of state 1 and 2: $$\Psi = \frac{\phi_1+\phi_2}{\sqrt{2}}$$ where each $\phi_i$ has energy $E_i \, .$ The probability of finding each the $i$ state would be 50% in this case. However, approaching this problem with statistical mechanics the probability would be proportional to $e^{-E_i/k_BT}$ What is wrong with my approach?
You're comparing the wrong things - you must take the appropriate limits when using statistical physics (or statistical mechanics as you called it) to recover quantum and classical results. Also, recall that when we use statistical physics, we must consider the system in an ensemble formalism, i.e. the canonical ensemble. That is, we consider a (statistically) large number of identically prepared copies of our system: a single harmonic oscillator. Additionally, we must use the quantum version of statistical physics, where the ensemble is characterized by the density operator. For a nice intro to this, see Sakurai. This article should answer your questions regarding the case of a single 1D harmonic oscillator potential viewed statistically. For a given temperature, we use the quantized energy of the oscillator to find the partition function in the canonical ensemble. Consider the two limits: 1) Classical: the thermal energy is much greater than the spacing of oscillator energies. Here, one recovers the classical result that the energy is $kT$. 2) Quantum: the thermal energy is much less than the spacing of oscillator energies. Here, one recovers the quantum result that for a given frequency the energy is $\frac{1}{2}\hbar \omega$. Further, using the appropriate density operator, you can recover the probabilities that you sought: the density operator, represented as a matrix in the basis $\{\lvert \phi_1 \rangle, \lvert \phi_2 \rangle \}$, for the system given by the pure state $\Psi$ is $$ \rho = \lvert \Psi \rangle \langle \Psi \rvert = \begin{pmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} \end{pmatrix} = \frac{1}{2} \begin{pmatrix}1 & 1 \\ 1 & 1\end{pmatrix}$$ To find the probability of being in state $i$, take $$\mathcal{P}_i = \operatorname{Tr}(\rho P_i),$$ where $P_i$ is the projection operator onto the $i^{th}$ state, and Tr means trace.
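The trace formula at the end can be checked in a few lines, using the same pure state and two-level basis as in the answer:

```python
import numpy as np

# Pure state |Psi> = (|phi_1> + |phi_2>) / sqrt(2) in the {|phi_1>, |phi_2>} basis
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(psi, psi.conj())          # density matrix |Psi><Psi|

P1 = np.diag([1.0, 0.0])                 # projector onto |phi_1>
P2 = np.diag([0.0, 1.0])                 # projector onto |phi_2>

p1 = np.trace(rho @ P1).real
p2 = np.trace(rho @ P2).real
print(p1, p2)  # 0.5, 0.5 -- the superposition probabilities,
               # independent of temperature, unlike thermal Boltzmann weights
```

This makes the contrast in the question concrete: the 50/50 result comes from the pure state itself, while the $e^{-E_i/k_BT}$ weights describe a *thermal* (mixed) ensemble, a different density operator entirely.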
{ "language": "en", "url": "https://physics.stackexchange.com/questions/454158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 0 }
Is the Higgs boson an elementary particle? If so, why does it decay? The Higgs boson is an excitation of the Higgs field and is very massive and short lived. It also interacts with the Higgs field and thus is able to experience mass. Why does it decay if it is supposed to be an elementary particle according to the standard model?
Most fundamental particles in the standard model decay: muons, tau leptons, the heavy quarks, W and Z bosons. There’s nothing problematic about that, nor about Higgs decays. Your question may come from a misconception about particle decay: that it’s somehow the particle ‘coming apart’ into preexisting constituents. It’s not like that. Decays are transformations into things that weren’t there before.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/454292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36", "answer_count": 6, "answer_id": 0 }
Why can blue LEDs be used for generating white light, but red LEDs cannot? LEDs consist of pn-junctions, so why can blue LEDs be used for generating white light, but red LEDs cannot?
"White" light consists of a mixture of at least three colors that should be blue(ish), green(ish) and red(dish). The most common way to get white from basically monochromatic LED light is to use fluorescent material. [Figure: the spectrum of a bare blue LED is a single narrow peak; the fluorescent coating converts part of it into a broad band at longer wavelengths.] In fluorescent material, an electron is excited by a single incident photon and then relaxes over intermediate energy levels. This means that photons emitted during this relaxation can only be of lower energy (shifted to the red end of the spectrum) compared to the exciting photon. So it's easily possible to get red from blue but not the other way round.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/454379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Dirac matrices in 1+1 dimensions Given $\gamma^\mu$ in 1+3 dimensions with signature $(+,-,-,-)$, how can I obtain Dirac matrices in 1+1 dimensions expressed in terms of the $\gamma^\mu$?
The converse problem, constructing the 1+3 dim $\gamma^\mu$s (4 × 4 matrices) out of the 1+1 dim ones (2 × 2 matrices) is solved systematically here in WP . The stated one is straightforward, since the standard Dirac rep 1+3 ones amount to just $$\gamma^0=\sigma^3\otimes I, \qquad \gamma^1=i\sigma^2\otimes \sigma^1, $$ hermitean and antihermitean, respectively. You then observe that, in anticommuting these two, since the second hermitean tensor factors commute, these second factors make no difference whatsoever in satisfying the Clifford algebra, and may be dropped altogether, so that $$ \sigma ^3 \mapsto \Gamma^0, \qquad i\sigma^2 \mapsto \Gamma^1, $$ where I have designated the 1+1 dim $\gamma$s (2 × 2 matrices) as $\Gamma$s, not a little perversely, if only to stick to your original notation. Check the Clifford compliance thereof!
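The suggested "Clifford compliance" check can be done mechanically; a short numpy sketch (my addition) for the 1+1 dim $\Gamma$s named above:

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# 1+1 dim gamma matrices proposed in the answer
G0 = s3
G1 = 1j * s2

eta = np.diag([1.0, -1.0])  # signature (+,-)
gammas = [G0, G1]

# Verify {Gamma^mu, Gamma^nu} = 2 eta^{mu nu} I
for mu in range(2):
    for nu in range(2):
        anticomm = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anticomm, 2 * eta[mu, nu] * I2)
print("Clifford algebra satisfied")
```

$\Gamma^0$ squares to $+I$ and $\Gamma^1$ to $-I$, and they anticommute, exactly as the $(+,-)$ signature requires.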
{ "language": "en", "url": "https://physics.stackexchange.com/questions/454505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there a connection between the energy distribution and time dilation? Can anyone please help me understand what is described below? Scenario 1. We have a pair of atomic clocks. Let's call them clock A and clock B. We switch both of them on at the same time. Clock A will stay on Earth and clock B will go with the astronauts. Astronauts with clock B will accelerate in a direction away from the Sun for 10 years (from the astronauts' perspective) at 1 g. Then they will start a braking process that will take 1 year at 10 g. After the braking process is finished, the astronauts are not moving away from Earth anymore. They turn around and head back to Earth. The travel back will follow the same scenario: 10 years of acceleration at 1 g and 1 year of braking at 10 g. And the astronauts are (back) home on Earth. The time on clock B is 22 years. What time is on clock A? Scenario 2. In the opposite scenario (1 year of acceleration at 10 g and 10 years of braking at 1 g, there and back) the astronauts will "travel to the future", as clock B shows 22 years while clock A shows 372... So to ask a more general question: if astronauts with clock B in scenarios 1 & 2 always spend 22 years, use the same energy and reach the same maximum speed, but the travel distance is different, will it have an impact on time dilation? Does the way the energy is distributed have any impact on the time dilation?
If astronauts with clock B in scenarios 1 & 2 always spend 22 years, use the same energy and reach the same maximum speed, but the travel distance is different, will it have an impact on time dilation? Yes. The proper time experienced by the clock is given by $\int ds$, where $ds^2=dt^2-dx^2-dy^2-dz^2$ (in units with $c=1$). The proper time depends on the detailed shape of the clock's world-line, not just on some parameter like maximum energy.
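A numeric illustration of the last sentence (the velocity profiles here are made up for simplicity, not the scenarios in the question): two 1D worldlines with the same maximum speed accumulate different proper times.

```python
import numpy as np

# Proper time tau = integral of sqrt(1 - v(t)^2) dt  (units with c = 1, 1D motion)
t = np.linspace(0.0, 1.0, 200001)

def trapezoid(y, x):
    """Simple trapezoidal rule, to stay independent of numpy version."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Two worldlines with the same maximum speed 0.6c but different shapes
v_const = np.full_like(t, 0.6)       # constant speed throughout
v_burst = 0.6 * np.sin(np.pi * t)    # speeds up, then slows down

tau_const = trapezoid(np.sqrt(1 - v_const**2), t)
tau_burst = trapezoid(np.sqrt(1 - v_burst**2), t)
print(tau_const, tau_burst)  # 0.8 vs ~0.90, despite the same maximum speed
```

Same coordinate-time interval, same peak speed, different elapsed proper time: the shape of the world-line matters.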
{ "language": "en", "url": "https://physics.stackexchange.com/questions/454607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Vrms for monatomic, diatomic, and polyatomic molecules In my notes from class I have that Vrms = sqrt(3kT/m) for a point molecule, Vrms = sqrt(5kT/m) for a diatomic molecule, and Vrms = sqrt(6kT/m) for a triatomic or higher-order molecule. But RMS Speed of Gas Molecule for Polyatomic Molecules says that Vrms = sqrt(3kT/m) always, and I don't understand why that is. Thanks, I know another post asks about the same question - but I wasn't able to comment or add a related question due to my 0 reputation. I don't understand where the Vrms formula comes from. Is it solely dependent on translational motion? And is that why for all molecules you can use the same exact Vrms formula? RMS Speed of Gas Molecule for Polyatomic Molecules
$\frac 12 mv^2_{\rm rms}$ is the mean translational kinetic energy of an atom/molecule, and it is the mean translational kinetic energy which is proportional to the temperature. Vibrations and rotations may contribute to the total kinetic energy of a molecule, but it is the translational kinetic energy which is linked to temperature. Vrms = sqrt(3kT/m) for a point molecule, Vrms = sqrt(5kT/m) for a diatomic molecule, and Vrms = sqrt(6kT/m) for a triatomic or higher-order molecule. The first equation is correct for all atoms/molecules, but the statements about diatomic and triatomic molecules are not correct, as $\frac 52kT$ and $\frac 62kT$ are equal to the total kinetic energy of a molecule, not just the translational part.
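A quick sketch of the (universally valid) first formula, with illustrative values for a nitrogen molecule:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def v_rms(T, m):
    """Root-mean-square speed from (1/2) m v_rms^2 = (3/2) k T."""
    return math.sqrt(3 * k_B * T / m)

# N2 molecule at room temperature (mass approx. 28 u)
m_N2 = 28 * 1.66054e-27  # kg
print(v_rms(300.0, m_N2))  # approx. 517 m/s
```

The same function applies to monatomic, diatomic, and polyatomic gases alike, because only the translational part of the kinetic energy enters.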
{ "language": "en", "url": "https://physics.stackexchange.com/questions/454733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is it possible to have a planet entirely made out of liquid water? Earth is mostly covered in oceans, but they only go a few kilometres deep. It's obviously not possible to have a planet the size of the Earth made entirely out of water, because of the kind of pressures reached in the interior. a. But say that we did, how far down from the surface would water remain water before presumably turning to ice under the pressure? b. How large a 'planet' could we have made entirely out of water? Would it be able to attain the size of a small dwarf planet like Ceres?
Suppose we assemble a large mass of water in space, then it will form a sphere held together by its gravitational field. The question is then how large this sphere can become before the pressure at the centre causes the water to solidify to ice. The calculation of the pressure at the centre is straightforward in principle, and is described in How to find the force of the compression at the core of a planet? The problem is that to do the calculation precisely we need to know how the density of water changes with pressure. There is no simple equation for this so we would need to do a numerical calculation. However if we make the approximation that the density of the water remains constant we can get an approximate equation for the pressure: $$ P = \frac{2}{3}\pi G \rho ^2 R^2 \tag{1} $$ We can estimate the pressure at which water solidifies by looking at the phase diagram of water. The following phase diagram comes from London South Bank University web site: The pressure at which the water solidifies to ice is strongly temperature dependent. At everyday temperatures it's around 800MPa to a GPa, and since this is an approximate calculation let's take a GPa as being a round number. Then using equation (1) we find that value of the radius $R$ for which the pressure reaches 1GPa is about 2700km. So there's your answer. A ball of water at room temperature larger than 2700km would contain a solid ice core. In practice the density of the water increases with depth so the radius at which ice forms would be less than this. Though the relative density of ice VI is only around $1.3$ so it wouldn't be that much smaller.
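Inverting equation (1) for the radius at which the central pressure reaches the assumed 1 GPa reproduces the quoted figure:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
rho = 1000.0    # kg/m^3, constant-density approximation for water
P_ice = 1.0e9   # Pa, rough solidification pressure at room temperature

# Invert P = (2/3) pi G rho^2 R^2 for R
R = math.sqrt(P_ice / ((2.0 / 3.0) * math.pi * G * rho**2))
print(R / 1000, "km")  # approx. 2700 km
```

This is the constant-density estimate only; as the answer notes, the real radius would be somewhat smaller because water compresses with depth.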
{ "language": "en", "url": "https://physics.stackexchange.com/questions/454932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What if we set Hamilton-Jacobi equation as an axiom? We usually postulate the principle of least action. Then we can get Lagrangian mechanics. After that we can get Hamiltonian mechanics either from postulate or from the equivalent Lagrangian mechanics. Finally we get the Hamilton-Jacobi equation (HJE). But what if we have HJE as postulate instead? How can we get Hamiltonian mechanics from it?
I) The Hamilton-Jacobi equation (HJE) itself sure ain't enough as an axiom without some kind of context, setting, definitions and identifications of various variables. II) Let us assume: * *that Hamilton's principal function $S(q,P,t)$ depends on the old positions $q^i$ and the new momenta $P_j$ and time $t$, *the HJE itself, *the definition of old momenta $p_i:=\frac{\partial S}{\partial q^i},$ *the definition of new positions $Q^i:=\frac{\partial S}{\partial P_i}$, *that the new phase space variables $(Q^i,P_j)$ are all constants of motion (COM). III) Then it is a straightforward exercise to derive Hamilton's equations for the old phase space variables $(q^i,p_j)$ provided pertinent rank-conditions are satisfied for the Hessian of $S$. [This result is expected, because (5) is precisely Kamilton's equations and the function $S$ in (1)-(4) is a type-2 generating function of a canonical transformation (CT).] IV) The Lagrange equations follow next from a Legendre transformation. In turn, the Lagrange equations are Euler-Lagrange (EL) equations from the stationary action principle for $\int \! dt ~L$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/455248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Conservation of momentum if kinetic energy is converted to mass There is a moving object. Through an unspecified (science fiction) mechanism its kinetic energy is converted to mass and the object comes to rest. The mechanism is fictional, but in good sci-fi it is desirable to adhere to the laws of nature. Does the conversion of kinetic energy to mass violate conservation of momentum? Or is conservation of momentum just a case of conservation of energy, which is conserved when converted to mass?
Momentum was non-zero before the conversion and zero after the conversion and so conservation of momentum was violated. Therefore, your described conversion is not consistent with the known laws of physics which require conservation of momentum to always hold.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/455764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 0 }
Dimensional analysis - application to logarithms I read some nice threads about this topic: physics StackExchange maths StackExchange stats StackExchange However, it still puzzles me that the logarithm of some physical quantity has no units. For example, let's assume we have a collection of values of the distance between two cities. In set A, distances are expressed in km, while in set B distances are expressed in m. If I apply a logarithmic transformation to both sets and compute the average and standard deviation, I get two different values for the first parameter and the same for the latter. The latter makes sense because I'm subtracting logarithms and that is equal to a division operation, so I get dimensionless values for the standard deviation. For the average value I get different values. Assuming that the logarithmic transformation returns dimensionless values, in principle I could sum the average values of sets A and B and get a number; however, this doesn't make sense since sets A and B are not expressed in the same length unit. Therefore one can argue that you must still somehow keep track of dimensions when you do logarithmic transformations or you might end up summing apples and oranges. What is your take on the above?
The reason a logarithmic function, or an exponential function, can't have dimensions is easy to see if you consider what the expression for a logarithm is in terms of a power series. $$ \begin{align} \ln x &= (x-1) - \frac{(x-1)^2}{2} + \frac{(x-1)^3}{3} + \cdots\\ &= \sum\limits_{n=1}^\infty \left((-1)^{n-1}\frac{(x-1)^n}{n}\right) \end{align} $$ If $x$ has dimensions (say of length), then it's clear that $\ln x$ is just a nonsensical physical quantity because its dimensions make no sense at all. So $x$, and hence $\ln x$, must be pure numbers. A similar argument applies to exponentials: $$\exp x = \sum\limits_{n=0}^\infty \frac{x^n}{n!}$$ So $x$, and $e^x$, must be dimensionless. This also applies to trig functions of course. In particular this means you can't take the logarithm or exponential of any physical quantity: you can only ever take logarithms or exponentials of ratios of physical quantities, which are pure numbers. Here's an example of taking logarithms in a legitimate way. If we have some quantity with a dimension, $q$, we can express it as $q = xu$ where $x$ is a pure number and $u$ is a unit of the same dimension as $q$. So if we want a quantity with the dimension of length we can express it as $d\,\mathrm{mi}$, where $\mathrm{mi}$ is a mile. So for any quantity with a dimension we can construct a pure number by dividing by the unit: $$\frac{d\,\mathrm{mi}}{1\,\mathrm{mi}} = d$$ And it's fine to take logs of this. And using this technique we can do things like combining logarithms of quantities with different units: $$\ln\left(\frac{x\,\mathrm{chain}}{1\,\mathrm{chain}}\right) + \ln\left(\frac{y\,\mathrm{furlong}}{1\,\mathrm{furlong}}\right) = \ln \left(xy \frac{\mathrm{chain}\,\mathrm{furlong}}{\mathrm{chain}\,\mathrm{furlong}}\right) $$ (A $\mathrm{chain}\,\mathrm{furlong}$ is an acre.)
The units don't have to be dimensionally the same even in cases like this $$\ln\left(\frac{A\,\mathrm{acre}}{1\,\mathrm{acre}}\right) + \ln\left(\frac{m\,\mathrm{month}}{1\,\mathrm{month}}\right) = \ln \left(Am \frac{\mathrm{acre}\,\mathrm{month}}{\mathrm{acre}\,\mathrm{month}}\right) $$ Acre months might be a useful unit for computing rent on land, say.
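The question's observation (equal standard deviations, means offset by the log of the unit ratio) can be checked directly; the distances below are made-up values:

```python
import math

# Same five distances, expressed in kilometres and in metres
d_km = [120.0, 85.0, 300.0, 42.0, 560.0]
d_m = [d * 1000.0 for d in d_km]

def mean(xs):
    return sum(xs) / len(xs)

def std(xs):
    mu = mean(xs)
    return math.sqrt(mean([(x - mu) ** 2 for x in xs]))

log_km = [math.log(d) for d in d_km]
log_m = [math.log(d) for d in d_m]

# Means differ by exactly ln(1000); standard deviations agree
print(mean(log_m) - mean(log_km))  # ln(1000), approx. 6.9078
print(std(log_km), std(log_m))     # identical
```

The unit choice survives as an additive constant $\ln(1000)$ in the mean, which is exactly the bookkeeping the question says must not be dropped, and it cancels in any difference of logs, which is why the standard deviations agree.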
{ "language": "en", "url": "https://physics.stackexchange.com/questions/455843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Possible paradox relating to acceleration, velocity, and work I recently encountered a question that made me think of this "paradox": Imagine the following situation: There are two forces: Force 1 and Force 2. They accelerate a mass $m$ from the same initial velocity to the same final velocity in the same amount of time, but as you can see from the graph, when compared at the same instants in time, the two processes have different accelerations. How does the work done by each process compare? It is easy to see that the work done by both forces must be the same because the change in kinetic energy is equal, which is correct. However, work can also be written as $Fd$. Now Newton's Second Law says $F=ma$, so $W=mad$. From the graph, it seems that the average accelerations are equal for both forces, as they result in the same change in velocity over the same interval of time. But it's also clear that Force 1 results in greater displacement (area under a velocity-time graph). Therefore, shouldn't Force 1 do more work? I have a vague sense as to why the first reasoning was correct and the second incorrect, which has to do with average accelerations vs. instantaneous accelerations, but can anybody give me a definite explanation as to why the work done for both processes is the same?
Another way to look at this. You are correct in saying that the work done by each process is equal, since the two processes produce the same change in final kinetic energy. You are also correct in thinking that the force required to move the particle over the curved path is larger than the force required to move the particle along the straight path. So why doesn't the curved-path force do more work? It is because work is actually defined by $W=\int \overset {\rightarrow } {F}\cdot \, d\overset {\rightarrow } {x} $ The dot product of the force and position element means that only the force parallel to the particle's motion contributes to work. On the curved path there is also a component of force perpendicular to the motion which causes the change in direction of the particle. This extra perpendicular component of the force does no work. By the way, you can get the two particles to the same spot in equal times, or with equal velocities, but not both.
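As a side check of the question's original 1D setup (my addition, separate from the curved-path argument): two different acceleration profiles with the same $\Delta v$ over the same time do the same work, even though the displacements differ, because $W = \int F v\,dt = \Delta KE$ regardless of the profile. A rough numerical sketch:

```python
import numpy as np

m, v0 = 2.0, 1.0
t = np.linspace(0.0, 1.0, 100001)
dt = t[1] - t[0]

# Two acceleration profiles with the same total delta-v = 2 over the same time
a1 = np.full_like(t, 2.0)   # constant acceleration
a2 = 4.0 * t                # ramps up; same time integral

def work(a):
    v = v0 + np.cumsum(a) * dt            # v(t), crude integration
    return float(np.sum(m * a * v) * dt)  # W = integral of F v dt

W1, W2 = work(a1), work(a2)
dKE = 0.5 * m * ((v0 + 2.0) ** 2 - v0 ** 2)
print(W1, W2, dKE)  # all approx. 8
```

The displacements do differ (2 vs 5/3 here), but the larger displacement pairs with the smaller force at high speed, and the work comes out equal.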
{ "language": "en", "url": "https://physics.stackexchange.com/questions/456010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Two black holes colliding (classical mechanics) I had this question come up in my exam where two identical black holes are in orbit around each other. There is a loss of energy via gravitational waves : $$\frac{d E}{d t} = kr^4\omega^6$$ where $k$ is a constant, $r$ is the separation between the black holes and $\omega$ is the angular frequency of the orbit. They asked to show $\frac{dE}{dt}$ is proportional to $\frac{1}{r^2}$. But I found it to be $1/r^5$. I know that this is circular motion where both bodies are on the same orbit revolving about the centre of mass (which would be the centre of the circle) and they both would have the same period so that the centre of mass is in the same place. I substituted $\omega$ = $2\pi/P$ with $P$ being the period of the orbit. I used $$P^2 = \frac{4\pi^2 r^3}{G(m_1+m_2)}$$ to substitute $P$ for $r$. Where have I gone wrong? Is it possible that the dependence is $1/r^5$ after all?
Is it possible that the dependence is $1/r^5$ after all? Yes. For confirmation, see equation (2.38) in [1], equation (3) in [2], and the combination of equations (41)-(42) in [3]. All agree with your result $dE/dt\propto 1/r^5$. However, note that the same result can also be written $dE/dt\propto v^6/r^2$ where $v$ is the orbital speed, because $v^2\propto 1/r$. Equation (20) in [4] writes it both ways. References: [1] Kokkotas (2009), "Gravitational wave physics," http://www.tat.physik.uni-tuebingen.de/~kokkotas/Teaching/NS.BH.GW_files/GW_Physics.pdf [2] Miller (2008), "Binary Sources of Gravitational Radiation," https://www.astro.umd.edu/~miller/teaching/astr498/lecture25.pdf [3] Hirata (2011), "Lecture XV: Gravitational energy and orbital decay by gravitational radiation," http://www.tapir.caltech.edu/~chirata/ph236/2011-12/lec15.pdf [4] Kokkotas (2008), "Gravitational Wave Astronomy," https://arxiv.org/abs/0809.1602
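For a circular two-body orbit, $\omega^2 = G(m_1+m_2)/r^3$, so substituting into $kr^4\omega^6$ should make $dE/dt$ scale as $1/r^5$. A quick numeric confirmation (all constants set to 1, purely illustrative):

```python
import math

def dEdt(r, k=1.0, GM=1.0):
    # Kepler's third law for a circular orbit: omega^2 = G(m1+m2)/r^3
    omega = math.sqrt(GM / r**3)
    return k * r**4 * omega**6

# Ratio test: halving/doubling r should change dE/dt by a factor of 2^5
r1, r2 = 1.0, 2.0
ratio = dEdt(r1) / dEdt(r2)
print(ratio)  # 2^5 = 32
```

The factor of $2^5 = 32$ between $r = 1$ and $r = 2$ confirms the $1/r^5$ dependence the question arrived at.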
{ "language": "en", "url": "https://physics.stackexchange.com/questions/456614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Confusion with Virtual Displacement I have just been introduced to the notion of virtual displacement and I am quite confused. My professor simply defined a virtual displacement as an infinitesimal displacement that occurs instantaneously in the configuration space, but this doesn't make any mathematical sense to me. Is there a way of defining the notion of virtual displacement, and by extension virtual work and generalized forces, in a way that is more mathematically rigorous? Assuming purely geometric holonomic constraints as well as purely conservative forces in the definition is acceptable. As a side note, I have seen some sources give vague statements that the virtual displacements are elements of the tangent space of the manifold of constraint and other sources suggest that the virtual displacements are a result of the total derivative of the position of a particle. A rigorous definition of virtual displacement should make the connection between these two concepts more clear.
* *Let there be given a $3N$-dimensional position manifold $M$ with coordinates $({\bf r}_1, \ldots, {\bf r}_N)$. Let the time axis $\mathbb{R}$ have coordinate $t$. *Let there be given $m\leq 3N$ holonomic constraint functions $$f^a: M\times \mathbb{R} ~\to~\mathbb{R}, \qquad a~\in~\{1,\ldots, m\}. $$ The constraint functions $(f^1,\ldots, f^m)$ are usually assumed to satisfy various regularity conditions, cf. my Phys.SE answer here. Moreover, they are assumed to be functionally independent, and the intersection of their zero-level-sets $$C~:=~\bigcap_{a=1}^m (f^a)^{-1}(\{0\})~\subseteq~ M\times \mathbb{R}$$ is assumed to form a submanifold of dimension $3N+1-m=n+1$, which we will call the constrained/physical submanifold. Let $(q^1, \ldots, q^n,t)$ be coordinates on $C$. The $q$'s are known as generalized coordinates. *Given a point $p\in C$ with coordinates $(q^1_0, \ldots, q^n_0,t_0)$, then the $n$-dimensional submanifold of finite virtual displacements at $p$ is $$ V~:=~C \cap (M\times \{t_0\}). $$ So in a nutshell, a finite virtual displacement is a displacement of position that doesn't violate the constraints and is frozen in time. See also this related Phys.SE post. *The heuristic notion of infinitesimal variations can in this context be replaced with tangent vectors. So the tangent space of infinitesimal virtual displacements at $p$ is $$ T_pV. $$ Concerning the use of infinitesimals in physics, see also this and this Phys.SE posts.
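A concrete example (my addition, not part of the original answer) connects the tangent-space and total-derivative views: take a single particle in the plane constrained to a pendulum rod of prescribed time-dependent length $\ell(t)$.

```latex
% One holonomic constraint on M = \mathbb{R}^2:
f(x,y,t) ~=~ x^2 + y^2 - \ell(t)^2 ~=~ 0.
% A virtual displacement is tangent to the constraint surface
% at frozen time t_0:
x\,\delta x + y\,\delta y ~=~ 0,
% while an actual infinitesimal displacement carries the extra
% term from the time dependence of f:
x\,dx + y\,dy ~=~ \ell\,\dot{\ell}\,dt.
```

The virtual displacements span the one-dimensional tangent space $T_pV$ along the instantaneous circle, while the actual displacement picks up the extra $\dot{\ell}$ term; for time-independent constraints the two notions coincide.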
{ "language": "en", "url": "https://physics.stackexchange.com/questions/456771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
What is the point of generalizing a more specific result to an order of magnitude? In my textbook, an example wants me to find an estimate of the number of cells in a human brain. It gives the volume of the brain as $8 \times 10^{-3}\ \rm m^3$ which it then estimates further as $1 \times 10^{-2}\ \rm m^3$. It follows the same process for the volume of a cell, which it ultimately estimates as $1 \times 10^{-15}\ \rm m^3$. It then divides these two quantities to get $1 \times 10^{13}\ \rm {cells}$. I'm not sure if this example is simply illustrative of orders of magnitude or what, but what is the point of using the order of magnitude estimate for the volume of the brain and the volume of the cell as opposed to just using the original results to find the number of cells? Isn't it being needlessly less accurate? The book also seems to imply that it is commonplace to do this. I understand the how but not the why. Thanks.
An immediate advantage is that the calculation can be done in your head. Quickly, what's $$\frac{(8\times 10^{13})(3\times 10^{-3})(2\times 10^{5})}{(2\times 10^{7})(5\times 10^{8})}+3\times 10^{-2}?$$ How about $$\frac{(10^{14})(10^{-3})(10^{5})}{(10^{7})(10^{9})}+ 10^{-2}?$$ In the second example, I can add the exponents and get 0 (meaning $10^0=1$) and see that $10^{-2}$ is negligible in comparison. The actual answer is 4.83, which is within an order of magnitude of the estimate achieved in a few seconds by mental calculation.
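The same comparison can be checked in a couple of lines, using the answer's numbers:

```python
# Exact vs order-of-magnitude versions of the answer's expression
exact = (8e13 * 3e-3 * 2e5) / (2e7 * 5e8) + 3e-2
rough = (1e14 * 1e-3 * 1e5) / (1e7 * 1e9) + 1e-2
print(exact, rough)  # 4.83 vs 1.01: same order of magnitude
```

The rounded version loses less than one order of magnitude of accuracy while becoming a mental exponent-addition exercise.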
{ "language": "en", "url": "https://physics.stackexchange.com/questions/456934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
What is the meaning of effective density in porous media? Is the density of air inside the pore space not the same as the density of free air? I am trying to understand the physical meaning of using effective density in porous media. Is it a fictitious value? Can't I use the density of the solid and fluid as-is while modeling porous media?
Density in a porous medium is not the same as the density of a pure substance. Consider the general case when several phases are present in a porous medium. By definition, effective density is $$\rho_i = \lim_{\Delta V \to 0 } \frac{m_i}{\Delta V} \tag{1}\label{dens},$$ where $m_i$ is the mass of the $i$ phase (for example the mass of oil or gas in the considered porous medium volume) and $\Delta V$ is the volume of the considered porous medium. By definition $m_i$ is $$m_i =\rho^0_i \cdot S_i \cdot \phi \cdot \Delta V ,\tag{2}\label{mass}$$ where $\rho^0_i$ is the density of the pure substance, $S_i$ is the proportion of the pore space occupied by the phase, and $\phi$ is the porosity. Substitute \eqref{mass} into \eqref{dens} and derive the following relation $$\rho_i = \rho^0_i \cdot S_i \cdot \phi \tag{3}\label{relation}.$$ From \eqref{relation} it can be seen that the concepts of effective density and density of pure substances differ.
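A quick numeric illustration of equation (3), with made-up reservoir values:

```python
# Effective density rho_i = rho0_i * S_i * phi  (equation (3))
rho0_oil = 850.0   # kg/m^3, density of the pure oil phase (illustrative value)
S_oil = 0.7        # fraction of pore space occupied by oil
phi = 0.2          # porosity

rho_oil_eff = rho0_oil * S_oil * phi
print(rho_oil_eff)  # 119.0 kg/m^3, far below the pure-substance density
```

The effective density is the mass of that phase per unit of *bulk* rock volume, which is why it comes out so much smaller than the pure-fluid value.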
{ "language": "en", "url": "https://physics.stackexchange.com/questions/457236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How does pressure cooker increase cooking speed? My teacher mentioned something about pressure cookers making cooking faster and I was wondering what exactly is the reason behind this?
If we heat a liquid, the average kinetic energy of the liquid increases and at a certain stage the energy becomes sufficient to break the molecular attraction. The molecules anywhere in the liquid can form vapor bubbles. These bubbles float to the surface of the liquid and finally come out of the liquid. This phenomenon is called boiling, and the temperature at which it occurs is called the boiling point. The boiling point of a liquid depends on the external pressure over its surface. When pressure decreases, the boiling point decreases. That's why at places with high altitude, water will boil before reaching 100 degrees Celsius. For example, the boiling point of water at 1 atm is 100 degrees Celsius, but at 0.5 atm it is 82 degrees Celsius. A pressure cooker is a sealed pot with a valve that controls the steam pressure inside. As the pot heats up, the liquid inside forms steam, which raises the pressure in the pot. This enables us to cook at a higher temperature, making the process faster.
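The quoted boiling points follow from a standard Clausius-Clapeyron estimate; a sketch with approximate constants (my addition, not from the original answer):

```python
import math

# Clausius-Clapeyron estimate of water's boiling point vs pressure:
#   1/T = 1/T0 - (R/L) * ln(P/P0)
R = 8.314        # gas constant, J/(mol K)
L = 40660.0      # latent heat of vaporization of water, J/mol (approx.)
T0, P0 = 373.15, 1.0  # boiling at 1 atm

def T_boil(P_atm):
    return 1.0 / (1.0 / T0 - (R / L) * math.log(P_atm / P0))

print(T_boil(2.0) - 273.15)   # approx. 121 C, a pressure cooker at ~2 atm
print(T_boil(0.5) - 273.15)   # approx. 82 C, matching the answer's figure
```

At the roughly 2 atm a typical pressure cooker maintains, the water boils near 121 degrees Celsius instead of 100, and cooking reactions proceed much faster at the higher temperature.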
{ "language": "en", "url": "https://physics.stackexchange.com/questions/457540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Derivation of heat capacity at constant pressure and temperature I have a question pertaining to the following derivation: For the heat capacity at constant volume part, we apparently have: $$dQ = C_v dT + P dV$$ But I find this confusing, as if we are to assume volume is constant, then $dV =0 $ so I would say that $$dQ = C_v dT$$ Secondly, I don't understand the part how the previous thing I addressed implies $$C_p = \frac{d Q_p}{dT}$$ As $$C_V = \left(\frac{\partial U}{\partial T}\right)_V = \frac{d U}{dT}$$ So I'd assume $$C_P = \left(\frac{\partial U}{\partial T}\right)_P = \frac{d U}{dT}$$ but this doesn't look to be the case. What's going on here?
I call this equation ($C_V=(\partial U/\partial T)_V$) the cruelest equation in introductory thermodynamics because of how often it trips people up. It looks like the misconception here is thinking that the heat capacity is how much the internal energy $U$ increases for a given increase in temperature $T$. This is not the case. I recommend thinking of the heat capacity as how much you have to heat the system to obtain a given increase in $T$. At constant pressure, this heating corresponds to the increase in enthalpy $H$, not the increase in internal energy $U$. Thus, $C_P=(\partial H/\partial T)_P$.
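The step connecting the two heat capacities can be spelled out in one line: at constant pressure the heat delivered equals the enthalpy change, because $P\,dV = d(PV)$ when $dP = 0$.

```latex
dQ_P ~=~ dU + P\,dV ~=~ d(U + PV)\big|_P ~=~ dH
\qquad\Longrightarrow\qquad
C_P ~=~ \left(\frac{\partial H}{\partial T}\right)_P,
\qquad
C_V ~=~ \left(\frac{\partial U}{\partial T}\right)_V.
```

So the asker's guess $C_P = (\partial U/\partial T)_P$ fails precisely because part of the heat at constant pressure goes into the expansion work $P\,dV$ rather than into $U$.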
{ "language": "en", "url": "https://physics.stackexchange.com/questions/457647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
If I leave a glass of water out, why do only the surface molecules vaporize? If I leave a glass of water out on the counter, some of the water turns into vapor. I've read that this is because the water molecules crash into each other like billiard balls and eventually some of the molecules at the surface acquire enough kinetic energy that they no longer stay a liquid. They become vapor. Why is it only the molecules on the surface that become vapor? Why not the molecules in the middle of the glass of water? After all, they too are crashing into each other. If I put a heating element under the container and increase the average kinetic energy in the water molecules to the point that my thermometer reads ~100°C, the molecules in the middle of the glass do turn into vapor. Why doesn't this happen even without applying the heat, like it does to the surface molecules?
There's a fundamental difference between a liquid changing to a gas at the surface vs. in the bulk: the formation of new surface area, which costs energy. Net evaporation from the surface is spontaneous whenever the relative humidity is less than 100% because energy fluctuations enable surface molecules to detach into the gas phase, as you describe. Here, the total surface area doesn't change. In contrast, gas formation within the bulk (i.e., the formation of an evaporative bubble) requires the formation of an additional liquid-gas interface, which carries an energy cost because bonds tend to be unsatisfied at interfaces. In fact, the energy penalty from having to form the surface of a nucleating vapor bubble is so important that we generally can only achieve boiling by (1) providing an existing surface or (2) waiting a large amount of time for a large gas cluster to randomly assemble or (3) superheating the liquid to increase the driving force for boiling (or a combination of these).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/457717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Why in the BCS ground state the probability amplitudes are taken real? In some references (see for example Ballentine ch. 18.5) the ground state of the BCS theory is assumed to be \begin{equation} |BCS\rangle = \prod_{\bf k} (u_{\bf k}+v_{\bf k}\hat{c}^{\dagger}_{\bf k,\uparrow}\hat{c}^{\dagger}_{-\bf k,\downarrow})|0\rangle. \end{equation} with the normalization constraint $u_{\bf k}^2 + v_{\bf k}^2 = 1$. The same can be found in the original paper by Bardeen, Cooper and Schrieffer where you can read \begin{equation} |BCS\rangle = \prod_{\bf k}( (1-h_{\bf{k}})^{1/2} + h_{\bf{k}}^{1/2} b^{\dagger}_k |0\rangle. \end{equation} But in other books like the one by Annett those parameters are considered complex. Why the parameter $u_{\bf k}$ and $v_{\bf k}$ are taken real? Shouldn't they be taken complex amplitudes? Is this state a classical mixture? My guess is that this choice is justified a posteriori, but I cannot find any detail about it in any reference I have read.
I have found the answer to my question on page 86 of the following document https://www.physik.tu-dresden.de/~timm/personal/teaching/thsup_w11/Theory_of_Superconductivity.pdf There it is clear that we have no reason a priori to assume that the two parameters are real. Therefore we should start with complex amplitudes. Computing the energy of the state we get to \begin{equation} \langle BCS|\hat{H} | BCS \rangle = \sum_{\bf{k}} 2 \xi_{\bf k} |v_{\bf{k}}|^2+\sum_{\bf{k}\bf{k'}} V_{\bf{kk'}} v_{\bf{k}}u^*_{\bf{k}}u_{\bf{k'}}v^*_{\bf{k'}} \end{equation} From this expression it is clear that, in order to avoid a complex energy, $v_{\bf{k}}$ and $u_{\bf{k}}$ should have the same phase. Therefore, without loss of generality, we can take both parameters as real numbers.
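To make the phase argument in the last step explicit (a sketch; the polar decomposition below is introduced here just for illustration), write $u_{\bf k}=|u_{\bf k}|e^{i\varphi_{\bf k}}$ and $v_{\bf k}=|v_{\bf k}|e^{i\chi_{\bf k}}$. Then each term of the interaction sum becomes \begin{equation} v_{\bf k}u^*_{\bf k}\,u_{\bf k'}v^*_{\bf k'} = |v_{\bf k}||u_{\bf k}|\,|u_{\bf k'}||v_{\bf k'}|\, e^{i(\delta_{\bf k}-\delta_{\bf k'})}, \qquad \delta_{\bf k}\equiv\chi_{\bf k}-\varphi_{\bf k}, \end{equation} so the energy depends only on the relative phases $\delta_{\bf k}$. For an attractive interaction the expectation value is minimized when all the $\delta_{\bf k}$ are equal, and the remaining common phase can then be removed by a global gauge transformation, which is why $u_{\bf k}$ and $v_{\bf k}$ can be chosen real without loss of generality.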
{ "language": "en", "url": "https://physics.stackexchange.com/questions/457909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How does the CMB constrain the baryon asymmetry? The CMB contains information about baryon acoustic oscillations in which baryons (I assume protons and electrons) and photons form a plasma exhibiting sound waves. How is information about the baryon asymmetry of the universe contained in this? Why do we not get some combination of baryon and (charged) lepton asymmetries if there are both electrons and protons in the mix? Why does the dependence not go as their density but rather the asymmetry?
Here is a description: The acoustic oscillations arise because of a conflict in the photon–baryon plasma in the early universe. The pressure of the photons tends to erase anisotropies, whereas the gravitational attraction of the baryons, moving at speeds much slower than light, makes them tend to collapse to form overdensities. These two effects compete to create acoustic oscillations, which give the microwave background its characteristic peak structure. The peaks correspond, roughly, to resonances in which the photons decouple when a particular mode is at its peak amplitude. You ask: How is information about the baryon asymmetry of the universe contained in this? Please note the correction to the previous version of the answer, as I found a paper where the acoustic oscillations of the CMB are used to estimate the baryon asymmetry. They have a phenomenological model of how the gamma rays from extra matter-antimatter annihilation would affect the acoustic oscillations at photon decoupling, which produced the CMB 380,000 years after the Big Bang, at the transparency point for light, where the photons of the CMB have their last scatter before reaching our detectors. These oscillations depend on the density of matter prior to the CMB; according to their model, the existence of an asymmetry can be inferred from the oscillations, since baryon-antibaryon annihilation would modify the final photon spectrum at decoupling through the gamma rays generated. The subject is still at a research stage.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/458127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What's the difference in a $P$-$V$ diagram that is curved versus one that is straight? So what would the difference be between the graph above versus one that has the same initial and final points but the path is curved. I'm sure it has something to do with temperature, so does it mean temperature is constant? Or is there something else going on?
The only difference between a straight PV diagram versus a curved PV diagram is the work done in both cases (provided the final and initial points are the same for both diagrams). As you might know, the work done in such a case is the area under the curve of the PV diagram. So the work done will be more or less depending on the shape of the PV curve. The temperature may vary in the process or it may remain constant. As @TrevorKafka has properly shown, an isotherm is the special PV curve where the temperature is maintained constant. But to summarize, the important difference between the two cases is the work done. Hope this helps!
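As a sketch of this point (hypothetical numbers; the curved path is taken to be an ideal-gas isotherm purely for contrast), the work along a straight path and a curved path between the same two endpoints can be compared numerically:

```python
import numpy as np

# Same endpoints for both paths; values are arbitrary, picked so that
# P1*V1 = P2*V2 (i.e. the endpoints lie on one ideal-gas isotherm).
V1, P1 = 1.0, 4.0
V2, P2 = 4.0, 1.0

V = np.linspace(V1, V2, 100_001)
P_line = P1 + (P2 - P1) * (V - V1) / (V2 - V1)  # straight path in the P-V plane
P_iso = P1 * V1 / V                              # curved path: P*V = const

def work(P, V):
    """W = integral of P dV via the trapezoid rule."""
    return float(np.sum(0.5 * (P[1:] + P[:-1]) * np.diff(V)))

print(work(P_line, V))  # 7.5    = (P1 + P2)/2 * (V2 - V1)
print(work(P_iso, V))   # ~5.545 = P1*V1*ln(V2/V1)
```

Same initial and final states, but the straight path does noticeably more work than the curved one, exactly because the areas under the two curves differ.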
{ "language": "en", "url": "https://physics.stackexchange.com/questions/458182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Can superconductors undergo a BKT transition? In the article by Kosterlitz and Thouless (1973) they write in the abstract: "This type of phase transition (BKT) cannot occur in a superconductor for reasons that are given". Later in the paper they say that their argument for the BKT transition cannot be carried through because the energy of the singularities in a superconductor is finite. I do understand why their argument doesn't work for superconductors. However, I feel like their conclusion of disregarding a BKT transition is too strong. Without fully understanding it, https://en.wikipedia.org/wiki/Superconductor_Insulator_Transition seems to suggest there is a BKT transition for superconductors. Now I don't know what to believe, and I would appreciate any clarification! I would be especially interested in the 2 dimensional case.
I found the same statement puzzling given more contemporary discourse. Tony Leggett provides an explanation for this in his lecture notes on 2D materials (https://courses.physics.illinois.edu/phys598PTD/fa2013/L12.pdf), with the argument of enhancement of the London penetration depth in very thin (and dirty) films, and the renormalization of long-range behaviour of the vortex-antivortex interaction, as well as the supercurrent around the vortex. He demonstrates there that something very similar to the neutral superfluid can be recovered.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/458286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Central Forces: Newtonian/Coulomb force vs. Hooke's law We know that a body under the action of a Newtonian/Coulomb potential $1/r$ can describe an elliptic orbit. On the other hand, we also know that a body under the action of two perpendicular Simple Harmonic Motions can also have an elliptic orbit. Hence I was wondering if we can differentiate between a body under the influence of a central potential $1/r$ and a body under the action of two perpendicular SHM’s just by observing the orbits without prior knowledge of the potential they are under. So my question is how can we differentiate between these two potentials?
Your two examples are both central forces. For gravity the potential is: $$ U_g = -\frac{k}{r} $$ while for the simple harmonic motion the potential is: $$ U_s = kr^2 $$ Both of these allow circular orbits, and for a circular orbit you cannot tell which is which. However for an elliptical orbit you can, because with gravity the origin of the force is at one focus of the ellipse, while for SHM the origin of the force is at the centre of the ellipse. As a side note: these are the only two central potentials for which all bounded orbits are closed. This is Bertrand's theorem. The behaviour is also different with respect to the virial theorem. For the gravitational potential the average values of the kinetic energy $T$ and the potential energy $V$ are linked by: $$ 2T = -V $$ while for the SHM potential we get: $$ T = V $$
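The two virial relations can be checked directly by integrating a bound, non-circular orbit in each potential and time-averaging $T$ and $V$ over whole periods (a sketch with $k = m = 1$ and arbitrary initial conditions; the leapfrog integrator and normalizations $U=-1/r$, $U=r^2/2$ are choices made here, not from the answer):

```python
import numpy as np

def leapfrog(accel, r0, v0, dt, steps):
    """Kick-drift-kick leapfrog for r'' = accel(r) in the plane."""
    r, v = np.array(r0, float), np.array(v0, float)
    rs, vs = np.empty((steps, 2)), np.empty((steps, 2))
    a = accel(r)
    for i in range(steps):
        v += 0.5 * dt * a
        r += dt * v
        a = accel(r)
        v += 0.5 * dt * a
        rs[i], vs[i] = r, v
    return rs, vs

dt, r0, v0 = 1e-3, [1.0, 0.0], [0.0, 1.2]   # a bound, non-circular orbit

# Gravity, U = -1/r: average over 10 full periods
# (vis-viva gives 1/a = 2/r - v^2, Kepler gives period 2*pi*a^(3/2)).
a_semi = 1.0 / (2.0 - 1.2**2)
steps = round(10 * 2 * np.pi * a_semi**1.5 / dt)
rs, vs = leapfrog(lambda r: -r / np.linalg.norm(r)**3, r0, v0, dt, steps)
T_grav = 0.5 * np.mean(np.sum(vs**2, axis=1))
V_grav = np.mean(-1.0 / np.linalg.norm(rs, axis=1))

# SHM, U = r^2/2: every orbit has period 2*pi in these units.
rs, vs = leapfrog(lambda r: -r, r0, v0, dt, round(10 * 2 * np.pi / dt))
T_shm = 0.5 * np.mean(np.sum(vs**2, axis=1))
V_shm = 0.5 * np.mean(np.sum(rs**2, axis=1))

print(2 * T_grav, -V_grav)  # ~0.56 each: <2T> = -<V>
print(T_shm, V_shm)         # ~0.61 each: <T> = <V>
```

The gravity averages satisfy $2T = -V$ while the SHM averages satisfy $T = V$, matching the two relations quoted above.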
{ "language": "en", "url": "https://physics.stackexchange.com/questions/458438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why is the far point of human eye infinite? In my exams, the presence of this question, which unfortunately I couldn't answer, made me wonder why is the far point of an eye infinite? First thing that came into my mind was that how come we can see till infinity? Far point of eye is sometimes described as the farthest point from the eye at which images are clear. As stated here There's obviously a limit to a distance where the eye can see. If there is, then why isn't that taken in consideration for accurate measurements?
There's obviously a limit to a distance where the eye can see. But such limits are not a function of the eye itself. For objects on earth the distance we can see depends on atmospheric conditions, curvature of the earth and size of the object. For stars in the sky it ultimately comes down to brightness. Distant objects appear as points of light and the further away they are the dimmer they get, both because of inverse-square losses and because their spectrum gets redshifted. Eventually they become so dim the human eye can't see them. why is the far point of an eye infinite? Because as you get further from an object the rays from any given point on the object get closer to parallel, and they are already very nearly parallel at distances that are biologically relevant. Let's say your lens has an aperture of 1cm (a bit bigger than a human eye) and your subject is 1m away. The angle between the rays is about 0.01 radians or about half a degree. At 1km away the angle between the rays is about 0.00001 radians or about 2 arcseconds. The resolving power of the human eye is about 30 arcseconds. So from a focusing point of view the difference between an object 100m away and an object at infinite distance is negligible.
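A quick numeric check of the figures above (a sketch; the 1 cm aperture is the answer's own illustrative value):

```python
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600  # ~206265 arcseconds per radian

def ray_angle_arcsec(aperture_m, distance_m):
    """Small-angle estimate of the angle between the extreme rays
    entering a lens of the given aperture from a point at this distance."""
    return aperture_m / distance_m * ARCSEC_PER_RAD

print(ray_angle_arcsec(0.01, 1.0))     # ~2063 arcsec, i.e. ~0.57 degrees at 1 m
print(ray_angle_arcsec(0.01, 1000.0))  # ~2.06 arcsec at 1 km
```

Already at 1 km the convergence angle is an order of magnitude below the eye's ~30 arcsecond resolving power, which is why "infinity" is a perfectly good far point.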
{ "language": "en", "url": "https://physics.stackexchange.com/questions/458700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
"Killing leaves" in General Relativity? I now about Killing vector fields in GR but recently stumbled upon the notion of "Killing leaves" and couldn't find any simple explanation of this notion. For example, this paper writes: "integral submanifolds generated by vector fields of a Killing algebra are called Killing leaves." What exactly are Killing leaves and why are they important?
On page 3, it's defined as: integral submanifolds of the distribution, generated by vector fields of a Killing algebra $\mathcal{G}$, are called Killing leaves, A good overview of a (tangent) distribution and how it relates to the foliation of a manifold into "leaves" (hypersurfaces) might be found here, or on notes about the Frobenius theorem, or John Armstrong's blogpost.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/458858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Do black holes have a limit of mass? * *If you have a bunch of gas and increase its mass, gravity combines it into a planet. *If you have a planet and increase its mass, gravity forces the planet to undergo nuclear fusion, turning it into a star. *If you have a star and increase its mass, the gravitational collapse surpasses any thermal or degeneration pressure and turns the star into a black hole. *If you have a black hole and increase its mass, would gravity turn it into another thing? Do black holes have a limit of mass or they can go to infinity without undergoing any change (like nebula > planet > star > black hole > ...black hole?)?
The rules of classical general relativity say that when you add mass to a black hole, you get a larger black hole. If you add angular momentum to a black hole at a greater rate than that at which you add mass, it would theoretically be possible to get a Kerr black hole with $a \gt M$, which would convert the black hole to a naked singularity, but the rules of black hole thermodynamics say that a black hole with $a = M$ has zero temperature, so creating a naked singularity in this way is believed to be impossible.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/459067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Creation operator acting on a coherent state. Occupation number operator For a coherent state $$|\alpha\rangle=e^{-\frac{|\alpha|^{2}}{2}}\sum_{n=0}^{\infty}\frac{\alpha^{n}(a^{\dagger})^n}{n!}|0\rangle$$ I want to find a simplified expression for $a^{\dagger}|\alpha\rangle.$ I can only get this $$\begin{align} a^{\dagger}|\alpha\rangle=e^{-\frac{|\alpha|^{2}}{2}}\sum_{n=0}^{\infty}\frac{\alpha^{n}(a^{\dagger})^{n+1}}{n!}|0\rangle=e^{-\frac{|\alpha|^{2}}{2}}\sum_{n=0}^{\infty}\frac{\alpha^{n}}{\sqrt{n!}}\sqrt{n+1}|n+1\rangle \end{align}$$ or $$a^{\dagger}|\alpha\rangle=e^{-\frac{|\alpha|^{2}}{2}}a^{\dagger}e^{\alpha a^{\dagger}}|0\rangle.$$ Is it possible to get something more "beautiful" and "useful"?(I apologize for the unscientific lexicon.) Ultimately, I want to find a simplified expression for $N|\alpha\rangle=a^{\dagger}a|\alpha\rangle,$ but I don't know such an expression for $a^{\dagger}|\alpha\rangle.$
The following expression can sometimes be useful: $$ a^\dagger |\alpha\rangle = \left( \partial_\alpha + \frac{\alpha^\ast}{2} \right) |\alpha\rangle . $$ To prove this, just calculate $$ \partial_\alpha |\alpha\rangle = \partial_\alpha \left( \mathrm e^{-|\alpha|^2 / 2} \sum_{n=0}^\infty \frac{\alpha^n}{\sqrt{n!}} |n\rangle \right) $$ using the product rule and $\partial_\alpha |\alpha|^2 = \alpha^\ast$.
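As a numerical sanity check in a truncated Fock space (a sketch, with $\alpha$ taken real for simplicity): the $\partial_\alpha$ above is a Wirtinger-type derivative that holds $\alpha^\ast$ fixed, so for real $\alpha$ the identity is equivalent to $a^\dagger|\alpha\rangle = (d/d\alpha + \alpha)|\alpha\rangle$, which an ordinary finite difference can test:

```python
import numpy as np

N = 60  # Fock-space truncation (plenty for |alpha| ~ 1)

def coherent(alpha):
    """Amplitudes c_n = e^{-alpha^2/2} alpha^n / sqrt(n!) for real alpha."""
    c = np.empty(N)
    c[0] = np.exp(-alpha**2 / 2)
    for n in range(1, N):
        c[n] = c[n - 1] * alpha / np.sqrt(n)
    return c

adag = np.diag(np.sqrt(np.arange(1, N)), k=-1)  # a† in the truncated Fock basis

alpha, h = 1.3, 1e-5
lhs = adag @ coherent(alpha)
# Real-alpha form of the identity: a†|alpha> = (d/d_alpha + alpha)|alpha>
rhs = (coherent(alpha + h) - coherent(alpha - h)) / (2 * h) + alpha * coherent(alpha)
err = np.max(np.abs(lhs - rhs))
print(err < 1e-8)  # True
```

The same identity immediately gives the number operator acting on a coherent state, $N|\alpha\rangle = \alpha\, a^\dagger|\alpha\rangle$, which is what the question is ultimately after.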
{ "language": "en", "url": "https://physics.stackexchange.com/questions/459146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How do I derive the angular frequency of a simple pendulum through conservation of energy? Is it possible? I'm not exactly sure what I'm doing wrong. So far I've gotten: $mgl(1-$cos$\theta) = \frac12\omega^2l^2$ Which then gives $\omega = \sqrt\frac{2g(1-cos\theta)}{l}$ which is incorrect. Where am I going wrong??
In your equation, $\omega$ stands for angular speed, not angular frequency. The expression you found gives you the angular speed at the bottom of the swing as a function of the maximum angle to vertical $\theta$. Note: you forgot an $m$ on the right hand side of your original equation, but I figure that was a typo since you cancelled it out correctly in your result. Unfortunately, your equation simply equates initial and final energies, and provides no information about anything in between the initial and final states. Thus, the equation does not contain sufficient information to determine the angular frequency. What could, however be used to determine the frequency, would be an equation that tells you the kinetic energy $K$ as a function of the angle $\theta(t)$, which in turn is a function of time (and thus provides you information about all the points in time between the initial and final states). Notice that I'm now using $\theta$ in a different way than you were in your equation. To distinguish, let's call the maximum angle $\theta_\text{max}$. $$K(\theta) = mgl(1 - \cos \theta_\text{max}) - mgl(1 - \cos \theta) = mgl(\cos \theta - \cos \theta_\text{max})$$ Set this equal to $K = \frac{1}{2} m \omega^2 l^2 = \frac{ml^2}{2} \left( \frac{d\theta}{dt} \right)^2$. $$mgl(\cos \theta - \cos \theta_\text{max}) = \frac{ml^2}{2} \left( \frac{d\theta}{dt} \right)^2$$ $$\left( \cos \theta - \cos \theta_\text{max} \right) = \frac{l}{2g} \left( \frac{d\theta}{dt} \right)^2$$ Assume a small angle oscillation $\theta_\text{max} \approx 0$ and substitute the small angle approximations $\cos \theta_\text{max} \approx 1 - \theta_\text{max}^2/2$ and $\cos \theta \approx 1 - \theta^2/2$. $$\theta_\text{max}^2 - \theta^2 = \frac{l}{g} \left( \frac{d\theta}{dt} \right)^2$$ $$\frac{l}{g}\left( \frac{d\theta}{dt} \right)^2 + \theta^2 = \theta_\text{max}^2$$ This relationship looks reminiscent of the Pythagorean Theorem. 
Note that the two terms on the left hand side are time dependent, whereas the terms on the right hand side are time independent. These two hints suggest that the solution $\theta(t)$ will involve a trig function (since it's an oscillation, this seems exceedingly sensible). So, let's input the trial solution $\theta(t) = \theta_\text{max} \sin \omega t$, where $\omega$ is the angular frequency. $$\frac{l}{g}\left( \theta_\text{max} \omega \cos \omega t \right)^2 + (\theta_\text{max} \sin \omega t)^2 = \theta_\text{max}^2$$ $$\frac{l\omega^2}{g}\left( \cos \omega t \right)^2 = 1 - (\sin \omega t)^2$$ $$\frac{l\omega^2}{g}\left( \cos \omega t \right)^2 = (\cos \omega t)^2$$ $$\frac{l\omega^2}{g} = 1$$ $$\omega^2 = \frac{g}{l}$$ $$\boxed{\omega = \sqrt{\frac{g}{l}}}$$
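The boxed result can be checked against a direct integration of the full equation of motion $\ddot\theta = -(g/l)\sin\theta$ (a sketch; $g$, $l$, the release angle, and the step size are arbitrary illustrative choices):

```python
import math

# Integrate theta'' = -(g/l) sin(theta) with semi-implicit Euler,
# releasing the pendulum from rest at a small angle, and time the
# quarter period (release to first passage through vertical).
g, l = 9.81, 1.0
theta, omega, dt, t = 0.05, 0.0, 1e-5, 0.0

while theta > 0:
    omega += -(g / l) * math.sin(theta) * dt
    theta += omega * dt
    t += dt

T_numeric = 4 * t                                # full period
T_small_angle = 2 * math.pi * math.sqrt(l / g)   # 2*pi/omega from the boxed result
print(T_numeric, T_small_angle)  # agree to ~0.02% at this amplitude
```

The residual discrepancy is the expected amplitude correction, of order $\theta_\text{max}^2/16$, which vanishes in the small-angle limit.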
{ "language": "en", "url": "https://physics.stackexchange.com/questions/459498", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }