Difference between projection and component of a vector in product? This is a very basic question about dot (scalar) products. If I want to move a block and I apply a force parallel to the displacement, the block will move and some work will be done, so the formula will be $W= F\cdot S$; here we don't use the weight $mg$ of the block but the force we applied (the parallel force). Now let's say that the force is not parallel but at some angle from the horizontal. In this case the work done involves the projection of the force $F_1$ onto the $x$ axis, because that is how dot products are defined (as projections). But can't we say that the block moved due to the horizontal component $F_1\cos(\theta)$, and the answer would be the same? And obviously we won't count $F_1\sin(\theta)$, as no work is done by it. So why do we say projection and not component in the dot or scalar product?
As pointed out, the projection and the component actually refer to the same thing. To solve a problem like this it is useful to introduce a coordinate system; as you mentioned yourself, you project onto the x-axis. As soon as you introduce a coordinate system you can talk about the $\textit{components}$ of some vector, e.g. $\vec{F} = F_1 \cos(\theta)\hat{x} + F_1 \sin(\theta) \hat{y}$ and $\vec{S} = S_1 \hat{x}$ if the box in your example is constrained to move horizontally. Here the components of the vectors are the $\textit{projections}$ of the vectors onto the coordinate axes. With this construction you calculate the dot product as $W = \vec{F} \cdot\vec{S} = F_1\cos(\theta) S_1$. However, it is not necessary to introduce a coordinate system, write the vectors in terms of their components, and then apply the rules for the dot product. The dot product is defined in a coordinate-independent way as a projection. So your question is just a matter of terminology: a $\textit{component}$ of a vector along some axis is the $\textit{projection}$ of the vector along that axis, and in this sense the projection is the more fundamental thing.
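A quick numerical illustration (a Python sketch with arbitrary numbers, not part of the original answer) that the component form and the projection form give the same work:

```python
# The dot product computed from components equals |F| cos(theta) * |S|,
# i.e. the projection of F onto the direction of S times |S|.
import numpy as np

F1, S1, theta = 10.0, 3.0, np.radians(30)
F = np.array([F1*np.cos(theta), F1*np.sin(theta)])   # force components
S = np.array([S1, 0.0])                              # horizontal displacement

W_components = np.dot(F, S)          # component form: Fx*Sx + Fy*Sy
W_projection = F1*np.cos(theta)*S1   # projection form: |F| cos(theta) |S|
print(W_components, W_projection)    # both ~25.98 J
```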
{ "language": "en", "url": "https://physics.stackexchange.com/questions/537671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Rotation in molecules I am a bit confused about the rotational motion in molecules. Assuming the bond length is constant, the motion can be described as a rigid rotor. In the center of mass frame the energies are given by $BJ(J+1)$ and the wavefunctions are spherical harmonics. However, when we measure the energies or the angular momenta, we do it in the lab frame. So I am a bit confused. Is the formula for the energy the same in both the lab and CM frames? And if not, what is the formula in the lab frame? Also, is the wavefunction the same in both frames or, in other words, is the angular momentum of the molecule the same in both frames? Actually I am a bit confused about how the angular momentum is defined in the CM frame. Isn't the molecule stationary in that frame? Yet the wavefunctions in the CM frame (spherical harmonics) do show a clear angular momentum dependence. Can someone help me clarify these things? Thank you!
The molecule is not stationary in the center-of-mass frame. Any rigid body dynamics can be separated into motion of the center of mass and motion about the center of mass. The second includes rotational motion, which is what you are considering. Only translational motion is absent in the CM frame. The question of whether the angular momentum or energy is the same in the lab frame depends on what other kinds of motion the molecule has in the lab frame. Does it have translational degrees of freedom, for example? If not, the energy is the same in both frames. The angular momentum here is intrinsic angular momentum, due to rotations within the molecule and intrinsic spins of the constituents. This too should be the same in both frames, unless you are considering some odd situation like the molecule revolving around some axis in the lab frame.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/537773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Can a wormhole be created if it has not always existed? I know there are solutions to Einstein's field equations that give a wormhole geometry. But they are time independent. They are static. Is there a process where empty flat spacetime can evolve into a wormhole by an appropriate flow of matter and energy and negative energy? If so, it would change the topology of spacetime. Does General Relativity permit this? How would a hole in spacetime form? What determines where the other mouth of the wormhole would be located?
Unfortunately the answer is that we don't know, and most likely a classical theory like General Relativity, which concerns itself mostly with local properties of geometry, is inherently unfit to approach the problem of global topology. The clues we get from General Relativity are that some static, adiabatic (i.e. reversible) solutions with spacetimes topologically inequivalent to Minkowski do exist, but they require unphysical negative energy. If we relax the reversibility requirement on trajectories, we might consider static Kerr geometries as unidirectional wormholes, which can exist without any negative energy required. Intuitively, a topology change of spacetime cannot (shouldn't?) occur without at least a transient singularity occurring somewhere in the spacetime, but no one has ever obtained the equivalent of a maximally extended spacetime from a finite-time collapse (see this answer).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/537994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36", "answer_count": 3, "answer_id": 1 }
Why are eggshells so strong? The usual explanation that one can find on the internet is that it is shaped like an arch, but it is not exactly an arch. Does anybody know something more about this?
It is the same principle as applies to an arch. The curve of an arch is such that forces are transmitted along the curve of the arch. Ideal curves for this are the catenary arch and, for a bridge supporting weight, the cycloid, most famously used in the "New" London Bridge (1831–1967). The difference with an eggshell is that it is a 2D surface in three dimensions, and it transmits compression forces in the surface. It is only really strong under compression of the endpoints, where forces are transmitted symmetrically, and when it is squeezed symmetrically from all directions by the output canal when being laid. It is not so strong when squeezed along a single axis from the sides.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/538243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why do antiferromagnets occur at lower temperature than ferromagnets? The minimal model for describing magnets is the Heisenberg Hamiltonian $$H = -\frac{1}{2}J\sum_{i,j} \mathbf{S}_i \cdot \mathbf{S}_j$$ Where $i,j$ are nearest neighbors and the factor of $1/2$ is for double counting. If $J$ is positive, spins will want to align to save energy (ferromagnets), and if it is negative they will anti-align (antiferromagnets). Ultimately $J$ comes about from Pauli exclusion and electrons not wanting to sit in the same orbital (Coulomb repulsion). But if I look at a table of ferromagnets here, I see transition temperatures up to 1400 K. On the other hand, the highest transition temperature for antiferromagnets is a measly 525 K, with most being below room temperature. Why do antiferromagnets generally occur at significantly lower temperatures than ferromagnets? One can argue that maybe $\vert J\vert$ is larger in ferromagnets than antiferromagnets (as one of the current answers does), but this just begs the question. Why should that be the case (assuming it is true)? I don't see an experimentally-verified theoretical basis for asserting $\vert J_{\mathrm{AFM}}\vert < \vert J_{\mathrm{FM}}\vert$. This question came up in a class I am teaching to talented senior undergraduates.
It is not correct that antiferromagnets universally have low Néel temperatures. One recent popular example is Mn2Au, whose (anticipated) Néel temperature is so high that it can't be reached before the material decomposes; I've heard of estimates in the ballpark of 1000 K. There is also the widely commercially applied IrMn, which has Néel temperatures in the 700–800 K ballpark depending on phase and stoichiometry, and which could probably be higher if optimized for Néel temperature. Other than that, an elemental material can achieve better overall exchange strength with all atoms contributing. As there are no elemental high-temperature antiferromagnets, the comparison is slightly unfair. I guess the fairest comparison would be certain phases of CoMn or FeMn alloys, which will probably display Néel temperatures of a similar order of magnitude as elemental Co or Fe.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/538480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Maxwell equations and continuity equation I want to show the following equation using the Maxwell equations: $$\frac{\partial}{\partial t}W+\vec{\nabla} \cdot \vec{S} = 0 $$ The problem is that I don't understand why I can do the following step: $$\partial_t W=\partial_t \frac{1}{8\pi}(\vec{E}^2+\vec{B}^2)=\frac{1}{4\pi}(\vec{E}\cdot\partial_t\vec{E}+\vec{B}\cdot\partial_t\vec{B})$$ Can someone maybe explain it to me?
\begin{align} \frac{\partial}{\partial t} \left( \vec{E} \cdot \vec{E} \right) =\; & \frac{\partial}{\partial t} \left( \sum_{i=1}^3 E_i^2 \right) \\ = & \sum_{i=1}^3 \frac{\partial}{\partial t} E_i^2 \\ = & \sum_{i=1}^3 2E_i \frac{\partial}{\partial t}E_i \\ = & \;2 \vec{E} \cdot \frac{\partial}{\partial t} \vec{E} \end{align}
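A one-line symbolic check of this step (a SymPy sketch added for illustration, not part of the original answer):

```python
# Verify d/dt (E . E) = 2 E . dE/dt for a generic time-dependent 3-vector.
import sympy as sp

t = sp.symbols('t')
E = [sp.Function(f'E{i}')(t) for i in (1, 2, 3)]   # components E1(t), E2(t), E3(t)

lhs = sp.diff(sum(Ei**2 for Ei in E), t)
rhs = 2*sum(Ei*sp.diff(Ei, t) for Ei in E)
print(sp.simplify(lhs - rhs))   # 0
```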
{ "language": "en", "url": "https://physics.stackexchange.com/questions/538621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why are some energies dependent on reference frame, and some are not? And why is transfer between them possible? For example, the chemical energy of a kilogram of gasoline is 44-46 MJ/kg. It depends only on its chemical structure, which stays the same whether the gas tank moves or stays still relative to the observer. But the kinetic energy of a car depends on the reference frame. In the reference frame of a car A moving at the same speed, the car B in question has no kinetic energy. But for a bystander, car B has a lot of kinetic energy. What puzzles me is: 1 - why are some energies relative to the reference frame and some are not? 2 - why can the "absolute" energy from gasoline be changed into the kinetic energy of the car, and therefore change into "relative" energy? I wouldn't be surprised if someone answers "the chemical energy is also relative", but I can't understand why.
Many of the answers are right for the most part, but I think they have too much detail. The simple answer is that velocities are reference frame dependent, while lengths are not. Kinetic energy depends on velocity, so it is reference frame dependent. Potential energies depend on lengths between two objects (or from some reference point), so they are not reference frame dependent. The transfer of energy between these forms requires work to be done, which relies on the change of lengths (or a velocity over time) through the integral $\int\mathbf F\cdot\text d\mathbf x$. Therefore, we can have transfer between these "absolute" and "relative" types of energy through this "relative" type of energy transfer.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/538791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 1 }
The electric field should be in the circular coil. So why does current flow in the whole circuit? We know that a changing magnetic flux induces an electric field which makes current flow. Here, in the picture below, the flux is changing only through the circular coil, not through the rectangular part. So what induces the electric field in the rectangular part that lets the current complete the circuit? Note that an electric field is required inside a wire to make the current flow.
The field induced in the circular part of the circuit causes the free carriers (electrons) in the wire of that part of the circuit to move slightly. The movement of the electrons from one end of the loop to the other creates the field in the other part of the circuit.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/538903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Energy conservation in reflection of light from a perfect mirror I came across a question where a light source is shined on a mirror attached to a spring which is attached to a rigid support. The question goes: A perfectly reflecting mirror of mass $M$ mounted on a spring constitutes a spring-mass system of angular frequency $\Omega$ such that $\frac{4\pi M\Omega}h=10^{24}\ \mathrm{m^{-2}}$ with $h$ as Planck's constant. $N$ photons of wavelength $\lambda=8\pi\times10^{-6}\ \mathrm m$ strike the mirror simultaneously at normal incidence such that the mirror gets displaced by $1\ \mathrm{\mu m}$. If the value of $N$ is $x\times10^{12}$, then the value of $x$ is $\_\_\_$. [Consider the spring as massless] Now, in the solution the question has been solved by equating the change in momentum of the photons to the change in momentum of the mirror. However, what escapes me is how does the mirror move at all if all the photons are being reflected perfectly? If the mirror is indeed perfectly reflecting then the net energy incident must be equal to the net energy reflected. So, how can the mirror move if it does not take any energy from the light? However if I do assume each photon gives up a bit of its energy, thus changing its wavelength, then the incoming momentum will not be equal to the outgoing momentum. But this would lead to a contradiction, as we assumed the mirror was perfectly reflecting. I am puzzled. I think the only plausible answer to this is that 'there cannot be a perfectly reflecting mirror', but if that is the case, what would happen if we imagined one? In the same way that a perfectly black body does not exist but we can always imagine one.
Let's calculate the energy change of the mirror of mass $M$ from a photon with energy $\hbar \omega$ The photon has momentum $p = \hbar \omega/c$, and so the total change in momentum is $\Delta p =-2\hbar \omega/c$ The mirror on the other hand gains momentum $-\Delta p$ by momentum conservation. This change in momentum will give some energy to the mirror, but how much? This is simply the kinetic energy of the mirror $$KE = \frac{\Delta p^2}{2M} = \hbar \omega \frac{2\hbar \omega}{Mc^2}$$ Thus, for a 10 gram mirror and visible light photon, we have a relative change in energy of $\frac{2\hbar \omega}{Mc^2}$ or 1 part in $10^{33}$, which is absolutely negligible. So we can safely assume that the mirror perfectly reflects the light and does not practically change its energy.
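To put numbers on that last estimate, a quick Python check (assuming a 10 g mirror and a ~500 nm photon; the values are illustrative and not from the original answer):

```python
import numpy as np

hbar, c = 1.054571817e-34, 2.998e8   # SI units
M = 0.010                            # 10 g mirror
lam = 500e-9                         # assumed visible wavelength
omega = 2*np.pi*c/lam

dp = 2*hbar*omega/c                  # momentum transferred to the mirror
KE = dp**2/(2*M)                     # kinetic energy picked up by the mirror
print(KE/(hbar*omega))               # relative energy change ~1e-33
```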
{ "language": "en", "url": "https://physics.stackexchange.com/questions/539983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 9, "answer_id": 7 }
How many optical and acoustical branches are in a primitive cell? I am reading Introduction to Solid-State Physics (by Kittel) and I don't understand how he counts the optical and acoustical branches in a primitive cell. It says that if there are $p$ atoms in a primitive cell then we have $3p$ branches, 3 acoustical branches and $3p-3$ optical branches. I understand the physical difference between an optical and an acoustical branch. But I don't understand: * *How do you know there are $3p$ branches? *How do you know only $3$ ($3p-3$) are acoustical (optical)?
For a primitive cell with p atoms in 3d space, there are 3p degrees of freedom, say x, y, and z for each of the p atoms. This leads to 3p total harmonic modes and hence there are a total of 3p branches. For an acoustic mode, the atoms within the primitive cell need to move exactly in phase, giving a dispersion relation where the frequency vanishes linearly with k in the long-wavelength limit. This can happen in 3 ways, where all the atoms in the lattice move in phase along the x, y or z directions (or equivalently along any 3 orthogonal directions). Hence, there are 3 acoustic modes. In all the remaining modes, all atoms within the primitive cell will not move in phase, giving 3p-3 optical modes. Quoting Ashcroft-Mermin, "An acoustic mode is one in which all ions within a primitive cell move essentially in phase, as a unit, and the dynamics are dominated by the interaction between cells; an optical mode, on the other hand, is one in which the ions within each primitive cell are executing what is essentially a molecular vibratory mode, which is broadened out into a band of frequencies by virtue of the intercellular interactions".
{ "language": "en", "url": "https://physics.stackexchange.com/questions/540050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Is De Broglie's formula $\textbf{p}=\hbar \textbf{k}$ applicable to a discrete wave number system? I don't know if my question makes sense at all, but while doing my homework this question came to my mind. Say, for a particle in a box, the confinement makes the wave number k discrete, depending on the integers $\textbf{n}=(n_{x}, n_{y},n_{z})$. If De Broglie's formula holds, doesn't it mean that the momentum $\textbf{p}=\hbar \textbf{k}$ is also discrete?
Yes. But remember, you can think of the wavefunction in quantum physics as a wave in the configuration space of the system, which is isomorphic to the physical space only in the single-particle case. Momentum is discrete. Actually, one of the first observed quantum effects was the discreteness of the energy spectrum, which depends on the momentum and on the electrostatic potential $V(\vec r)$ for the electron in the hydrogen atom, in the approximation where the electron interacts with the positive charge in the nucleus through an electrostatic potential.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/540099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Why are there no even harmonics in a closed pipe? I have seen a diagram on sites such as hyperphysics.com that shows that there is a missing bit every time, so that it makes every harmonic odd. I was hoping I could get a more intuitive explanation. We recently got introduced to harmonics and standing waves in class and I am a bit lost :)
I downloaded a trumpet A sharp and did the Fourier transform in Matlab. I found both odd and even multiples. I have not yet found an actual recording that yields only odd harmonics.
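For anyone who wants to repeat the check, here is a rough Python equivalent of that analysis (a sketch only: the filename is a placeholder and the peak-picking is deliberately crude):

```python
import numpy as np
from scipy.io import wavfile

fs, x = wavfile.read('trumpet_Asharp.wav')     # hypothetical recording
x = x.astype(float)
if x.ndim > 1:                                 # mix stereo down to mono
    x = x.mean(axis=1)

spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
freqs = np.fft.rfftfreq(len(x), 1/fs)

# print the strongest spectral components; compare them to integer
# multiples of the fundamental to see which harmonics are present
peaks = np.argsort(spec)[-10:]
for i in sorted(peaks):
    print(f"{freqs[i]:8.1f} Hz   amplitude {spec[i]:.3g}")
```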
{ "language": "en", "url": "https://physics.stackexchange.com/questions/540335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Moment of inertia tensor and symmetry of the object What information does the moment of inertia tensor give about the structure of an object? I was told that its eigenvectors give the principal axes of the object. Do you know more about this?
The inertia tensor is a bit more descriptive in the spherical tensor basis (so instead of having nine basis dyads made from combinations like $\hat x \hat y$, you have nine basis tensors that transform like the nine $Y_l^m$ for $l \in \{0,1,2\}$). Since $I_{ij}$ is symmetric, all $l=1$ spherical tensors (the antisymmetric part) are zero. The $l=0$ portion is: $$ I^{(0,0)} = \frac 1 3 {\rm Tr}(I)\delta_{ij}$$ and that is the spherically symmetric part of the object. Removing the spherically symmetric part leaves a "natural" (read: symmetric, trace-free) rank-2 tensor: $$ S_{ij} = I_{ij} -I^{(0,0)}$$ The spherical components are: $$ S^{(2,0)} = \sqrt{\frac 3 2}S_{zz} $$ This tells you if your object is prolate or oblate. $$ S^{(2,\pm 2)} = \frac 1 2 [S_{xx}-S_{yy}\pm 2iS_{xy}]$$ You will find that $S^{(2,+2)} = (S^{(2,-2)})^*$, and that if you are in diagonal coordinates, they are real and equal. If the value is 0, then the object is cylindrically symmetric. $$ S^{(2,\pm 1)} = \frac 1 2 [S_{zx}\pm iS_{zy}]$$ Here: $S^{(2,+1)} = -(S^{(2,-1)})^*$, and the term is zero in diagonal coordinates.
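As a numerical illustration of the decomposition described above (a Python sketch that simply evaluates the formulas in this answer for a made-up prolate body):

```python
import numpy as np

I = np.diag([5.0, 5.0, 2.0])                 # principal-axis (diagonal) frame

iso = np.trace(I)/3 * np.eye(3)              # l = 0 (spherically symmetric) part
S = I - iso                                  # symmetric, trace-free remainder

S20  = np.sqrt(3/2) * S[2, 2]
S2p2 = 0.5*(S[0, 0] - S[1, 1] + 2j*S[0, 1])
S2p1 = 0.5*(S[2, 0] + 1j*S[2, 1])

print(S20)    # nonzero: distinguishes prolate from oblate
print(S2p2)   # 0 here: the body is cylindrically symmetric about z
print(S2p1)   # 0 in diagonal coordinates
```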
{ "language": "en", "url": "https://physics.stackexchange.com/questions/540586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Why does frequency remain the same when waves travel from one medium to another? I was reading about reflection and refraction on BBC Bitesize and I can't understand why frequency is a constant in the wave speed equation. I can't visualise the idea of it. I know that wave speed and wavelength are proportional to each other but how can I tell the speed of a wave by looking at a random oscillation? Here's where I got confused: https://www.bbc.co.uk/bitesize/guides/zw42ng8/revision/2 (the bottom of the page about the water)
Loosely speaking, you can think of it this way: a wave propagates in space and time, but at the interface it encounters a change only in the spatial properties of the medium. Why should the time part change?
{ "language": "en", "url": "https://physics.stackexchange.com/questions/540685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Why can't a wave travel in a non-elastic medium? Why can a wave not propagate in a non-elastic medium? We know that a wave is a disturbance and carries energy. In this sense, let's imagine the fall of dominoes, which carries a disturbance and energy. Here the fall of dominoes is non-elastic, and we can see that the wave propagates. Can I call it a wave?
Waves can propagate in an inelastic medium, and the fall of dominoes is a wave. So when people say that a wave can't travel in a non-elastic medium, this statement applies to a particular kind of wave. When talking about waves one often means perturbations that are periodic in space and periodic in time at every point. This is not the case in an inelastic medium, where, due to damping, such a wave would decay. Yet, for the case of dominoes such terms as solitary wave, shock wave or soliton could be quite appropriate - the kind of single "waves" associated with explosions, tsunami, propagation of cracks, etc.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/540791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Phase Difference in mutual coupling I was reading about oscillator circuits, and my textbook states that we get a phase change of π across the mutual coupling. But when I work through the mathematics of the situation (shown below) I get a phase difference of π/2. (Here $M$ is the mutual inductance, $i_0$ is the amplitude of the current in the primary coil, and $\varepsilon_o$ is the emf induced in the second coil.) $$i=i_0\sin(\omega t)$$ $$\phi=Mi_0\sin(\omega t)$$ $$\varepsilon_o=-Mi_0\omega\cos(\omega t)$$ $$\varepsilon_o=Mi_0\omega\sin(\omega t-\pi/2)$$ Please help me figure out my mistake. (Circuit diagram)
I think that you are mistaken about the language the book uses. That $\epsilon$ (in your question) is the emf developed in the second coil due to the flux change in the first. But what is the emf in the first coil that drives the current $i = i_0 \sin(\omega t)$? Assuming a purely inductive circuit (with no resistance), you can work it out to be $e_0 \cos(\omega t)$, where $e_0$ is the maximum value of the emf of the driving source. Now you can see the phase difference between the emf in the first coil and the emf in the second, since $-\cos (\omega t) = \cos(\omega t + \pi)$. That is what your book should have meant.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/540864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why do we need Gauss' laws for electricity and magnetism? The source of an electromagnetic field is a distribution of electric charge, $\rho$, and a current, with current density $\mathbf{J}$. Considering only Faraday's law and Ampere-Maxwell's law: $$ \nabla\times\mathbf{E}=-\frac{\partial\mathbf{B}}{\partial t}\qquad\text{and}\qquad\nabla\times\mathbf{B}=\mu_0\mathbf{J}+\frac{1}{c^2}\frac{\partial\mathbf{E}}{\partial t}\tag{1} $$ In an isolated system the total charge cannot change. Thus, we have the continuity equation that is related to conservation of charge: $$ \frac{\partial\rho}{\partial t}=-\nabla\cdot\mathbf{J}\tag{2} $$ From these three equations, if we take the divergence of both equations in $(1)$, and using $(2)$ in the Ampere-Maxwell's law, we can get the two Gauss' laws for electricity and magnetism: $$ \nabla\cdot\mathbf{B}=0\qquad\text{and}\qquad\nabla\cdot\mathbf{E}=\frac{\rho}{\varepsilon_0}\tag{3} $$ Therefore, the assumption of $(1)$ and $(2)$ implies $(3)$. At first glance, it could be said that we only need these three equations. Also, conservation of charge looks like a stronger condition than the two Gauss' laws (it's a conservation law!), but, as the article in Wikipedia says, ignoring Gauss' laws can lead to problems in numerical calculations. This is in conflict with the above discussion, because all the information should be in the first three equations. So, the question is, what is the information content of the two Gauss' laws? I mean, apart of showing us the sources of electric and magnetic field, there has to be something underlying that requires the divergence of the fields. If no, then, what is the reason of the inherently spurious results in the numerical calculations referred? (Also, I don't know what type of calculation is referred in the article.)
I don't agree that you obtain the Gauss law using the method proposed. What you obtain instead is $$\frac{\partial\nabla\cdot\mathbf{B}}{\partial t} = 0,\\ \frac{1}{c^2}\frac{\partial\nabla\cdot\mathbf{E}}{\partial t} + \mu_0\nabla\cdot\mathbf{J}= \frac{1}{c^2}\frac{\partial\nabla\cdot\mathbf{E}}{\partial t} - \mu_0\frac{\partial\rho}{\partial t}=0.$$ These equations give you only the rate of change of $\nabla\cdot\mathbf{B}$ and $\nabla\cdot\mathbf{E}$, but not their value, which needs to be obtained by time integration and gives you the answer up to a position-dependent constant (whose time derivative is zero). E.g., the Gauss law for electricity is now given by $$\nabla\cdot\mathbf{E}(\mathbf{r},t) = \frac{1}{\epsilon_0}\rho(\mathbf{r},t) +C(\mathbf{r}).$$ So we do need an additional constraint to specify the function $C(\mathbf{r})$, i.e. the Gauss law, which in these terms can be written as: $$C(\mathbf{r}) =0.$$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/540991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 0 }
Why does the dielectric field not cancel out the capacitor's field? When a conductor is in a region with an electric field, free charges will move until they balance out the external electric field. However, in dielectrics this does not happen. I know that the charges are bound to the atoms, and there is only a small portion that will be near the surface of the capacitor, but should we not also consider the small electric fields inside the polarized atoms? They may add up and cancel out the external field.
The charges in the dielectric do rearrange and lessen the field inside the capacitor; the field just isn't completely canceled. This is because, as you said, we are dealing with bound charges. If you compare a vacuum-filled capacitor with charge $\pm Q$ on its plates to a capacitor with the same charge filled with a linear dielectric of dielectric constant $k$, you will find the field inside the second capacitor to be $$E_\text{eff}=E_0-E_\text{polarization}=\frac 1k\cdot E_0$$ The field will only be canceled when $k\to\infty$, which is true for conductors, as you seem to be aware of already.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/541061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Why doesn't a ball stop when it collides with a wall? If a ball collides with a ball of the same mass, the first ball stops and the second ball gets the velocity of the first ball. The first ball stops due to the reaction force acting on it. But when a ball collides with a wall, why doesn't it stop due to the reaction force?
Try considering momentum and energy conservation. The wall (effectively of infinite mass) won't be moving before or after the collision with the ball, and presuming the collision is elastic (no energy lost to heat, sound etc.), the ball's kinetic energy is unchanged, so you should find that the ball must leave with momentum of the same magnitude, just reversed in direction. So it must still be moving.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/541318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Newton's 3rd law of motion Suppose I press both hands together, as if greeting someone (i.e. namaste). Now, I push each hand with the same force, say 10 N, and there is no change in position. But Newton's 3rd law says that the equal and opposite forces act on different bodies and never cancel each other (action-reaction). So why don't they (the hands) move?
On each hand, you really have two forces: one from the other hand (10N, pushing out) and one from the arm (10N, pushing in). These are the equal and opposite forces, hence your hands do not move.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/541557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Tree Level Scattering Amplitudes in Spinor QED I am currently trying to calculate scattering amplitudes of tree-level QED processes and I am a bit confused by the ordering of the factors yielded by the Feynman rules of spinor QED, especially in processes having photons in the initial or final states. Could someone write an explanation of how and why to order the factors? All of the literature I looked at just showed the Feynman rules but did not give further explanation about the order of the factors.
Let's take a simple process, for example $$e^-+e^+ \to \mu^-+\mu^+$$ The only tree level diagram for this process is the following To build up the matrix element of this process you just have to follow the fermion lines backwards, that's all. Beginning from the end of the process you then have $$(\mu^- \to \text{vertex}\to\mu^+)\text{ Photon propagator }(e^+\to\text{vertex}\to e^-)$$ which gives $$\mathcal{M}=\bar u(p_3,\sigma_3)(-ie\gamma^\nu)v(p_4,\sigma_4)\frac{-ig_{\mu\nu}}{q^2+i\epsilon}\bar v(p_2,\sigma_2)(-ie\gamma^\mu)u(p_1,\sigma_1)$$ with $q=p_1+p_2$. This holds for every Feynman diagram in general: you have to follow the fermion lines backwards and use the appropriate spinors for particles and antiparticles. The convention of starting from the end does not matter; you could as well start from the beginning, as long as you follow the fermion lines backwards. Edit: Since the OP asked about the case of Compton scattering, I add it here. The s-channel Compton scattering process is given by this diagram (and the one with the photons' positions exchanged), where the two fermion lines are just electrons. As we said before, let's follow the fermion lines backwards: $$(e^-\to\text{vertex}\to\text{photon})\text{ Fermion propagator }(\text{photon}\to\text{vertex}\to e^-)$$ which gives $$\bar u(p^\prime, \sigma^\prime)(-ie\gamma^\nu)\epsilon_\nu^{*}(k^\prime)\frac{\not p+\not k+m}{(p+k)^2-m^2+i\epsilon}\epsilon_\mu(k)(-ie\gamma^\mu)u(p, \sigma)$$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/541768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why the maximum number of supercharges in supergravity must be $Q=32$? For a supergravity theory not to have particles with spin greater than 2, all books state that $Q\leq 32$, where $Q$ is the number of fermionic supercharges and for a given dimension $D$ it's related to the number of supersymmetries $\mathcal{N}$ through the number of components of the fundamental spinors in that dimension, $C$ as $Q=\mathcal{N} C$. Naively, I would expect each supercharge $Q$ to raise or lower the spin of a given particle by $1/2$, as it does in the 4 dimensional case, but I suspect this is not the case in different dimensions (though all the books I have read are pretty confusing, just analyzing $D=4$ in detail using the chiral properties in that dimension and then hand-waving their way to higher dimensions). If each of the $\mathcal{N}$ supersymmetries could be used once and only once to raise the spin of the states, then I would expect the limit to be $\mathcal{N} \leq 8$ instead of a bound on $Q$, but this is not the case. According to one of my professors, $Q=32$ is just the consequence of wanting $\mathcal{N}=8$ at the most in 4 dimensions, where $C=4$, so that we can compactify the higher dimensional theory and obtain an acceptable model for our world. However, the $D=11$ supergravity should only include the graviton in that case, right? And $D=10$ theories would only have the graviton and $\mathcal{N}$ gravitinos, which is not true. So what's the right justification for $Q \leq 32$?
You first have to understand the construction of massless multiplets, which is found at the beginning of every introduction to supersymmetry, so I won't repeat it here. Then the argument goes like this: * *Given $Q$ supercharges, half of them will be zero for the massless case, thus leaving you with $Q/2$ non-zero supercharges. *From the remaining $Q/2$ supercharges we can construct $Q/4$ lowering operators and $Q/4$ raising operators. *Every raising/lowering operator changes the helicity $\lambda$ by $\pm 1/2$. So avoiding helicities with $|\lambda|$ greater than $2$ requires that \begin{equation} Q/4\leq 8 \end{equation} Therefore the maximum number of supercharges is $Q=32$ for any supergravity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/542151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Massless string vs massless spring in a mass-spring system Two masses are connected by a massless spring on a frictionless surface, and a force of $60$ N is applied to the 15 kg mass such that it accelerates at 2 $\frac{m}{s^2}$. What is the acceleration of the $10$ kg mass? I came across this question. I first thought that the $10$ kg mass was constrained to move at the same acceleration. But when I work it out, I get $a_2$ = 3 $\frac{m}{s^2}$, and it is the correct answer according to the book. What I am unable to understand is, isn't the $10$ kg mass constrained to move at the same acceleration as the $15$ kg mass? I thought we could replace the massless spring by (or treat it as) a massless string and the results would be the same. Am I making a fundamental mistake?
The thing is, an 'inelastic' massless string ensures constrained motion because it has a definite length. If x is the distance moved by the first block, then the second block is also constrained to move so that the net extension of the string is 0. But, a spring can become compressed or stretched. So, if block 1 covers a distance x, three cases arise: i) spring becomes compressed: the second block moves a distance more than x, and hence has greater acceleration ii) spring becomes stretched: the second block moves a distance less than x, and hence has less acceleration iii) spring remains in original shape: the blocks have equal acceleration
{ "language": "en", "url": "https://physics.stackexchange.com/questions/542333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Manipulation of the diffusive term in MHD induction equation I am trying to solve the magnetohydrodynamic (MHD) equations with a spatially varying resistivity, $\eta$. To remove some of the numerical stiffness from my finite volume approach, I am trying to get rid of these curl expressions with some vector calculus identities. The expression that is causing me issues is: $$ \nabla\times\left(\eta\nabla\times B\right) $$ I have also seen this written as: $$ \nabla\cdot\left(\eta\left(\nabla B-\nabla B^{T}\right)\right) $$ such as in the paper: Space–time adaptive ADER-DG schemes for dissipative flows: Compressible Navier–Stokes and resistive MHD equations, Computer Physics Communications. My question is: are these two expressions equal? I can kind of see how they might be, using the cross product rule: https://en.wikipedia.org/wiki/Vector_calculus_identities but I'm a little uneasy about using this identity with the vector operator $\nabla$. Would anybody kindly be able to shed any light on this for me, and possibly take me through the steps to cast the first expression in the second form? Thank you in advance. P.S. This is my first post, I hope it's OK.
There are several vector/tensor calculus rules that will come in handy, so I will define them here first (in no particular order): $$ \begin{align} \nabla \cdot \left[ \nabla \mathbf{A} - \left( \nabla \mathbf{A} \right)^{T} \right] & = \nabla \times \left( \nabla \times \mathbf{A} \right) \tag{0a} \\ \nabla \cdot \left( f \ \mathbf{A} \right) & = f \nabla \cdot \mathbf{A} + \nabla f \cdot \mathbf{A} \tag{0b} \\ \nabla \times \left( f \ \mathbf{A} \right) & = f \nabla \times \mathbf{A} + \nabla f \times \mathbf{A} \tag{0c} \\ \nabla \left( \mathbf{A} \cdot \mathbf{B} \right) & = \mathbf{A} \times \left( \nabla \times \mathbf{B} \right) + \mathbf{B} \times \left( \nabla \times \mathbf{A} \right) + \left( \mathbf{A} \cdot \nabla \right) \mathbf{B} + \left( \mathbf{B} \cdot \nabla \right) \mathbf{A} \tag{0d} \\ \mathbf{A} \cdot \left( \nabla \mathbf{B} \right)^{T} & = \left( \mathbf{A} \times \nabla \right) \times \mathbf{B} + \mathbf{A} \left( \nabla \cdot \mathbf{B} \right) \tag{0e} \\ \left( \nabla \mathbf{B} \right) \cdot \mathbf{A} & = \mathbf{A} \times \left( \nabla \times \mathbf{B} \right) + \left( \mathbf{A} \cdot \nabla \right) \mathbf{B} \tag{0f} \\ \nabla \cdot \left( \mathbf{A} \times \mathbf{B} \right) & = \mathbf{B} \cdot \left( \nabla \times \mathbf{A} \right) - \mathbf{A} \cdot \left( \nabla \times \mathbf{B} \right) \tag{0g} \\ \nabla \times \left( \nabla \times \mathbf{A} \right) & = \nabla \left( \nabla \cdot \mathbf{A} \right) - \nabla^{2} \mathbf{A} \tag{0h} \\ \nabla \times \left( \mathbf{A} \times \mathbf{B} \right) & = \nabla \cdot \left[ \left( \mathbf{A} \mathbf{B} \right)^{T} - \mathbf{A} \mathbf{B} \right] \tag{0i} \end{align} $$ From these relations, one can show the following: $$ \nabla \cdot \left\{ \eta \left[ \nabla \mathbf{B} - \left( \nabla \mathbf{B} \right)^{T} \right] \right\} = \nabla \times \left( \eta \nabla \times \mathbf{B} \right) + \left( \nabla \eta \cdot \nabla \right) \mathbf{B} - \left( \nabla \eta \times \nabla \right) \times \mathbf{B} \tag{1} $$ where we have taken advantage of Maxwell's equations to eliminate the divergence of the magnetic field term. The second term on the right-hand side can be expanded to the following form: $$ \left( \nabla \eta \cdot \nabla \right) \mathbf{B} = \left( \mathbf{B} \cdot \nabla \right) \nabla \eta - \nabla^{2} \eta \mathbf{B} - \nabla \times \left( \nabla \eta \times \mathbf{B} \right) \tag{2} $$ where all the terms on the right-hand side involve second order derivatives of $\eta$. Generally, to simplify this down to make the two expressions of interest in your question equal, one needs to make assumptions about the properties of the system. For instance, the $\nabla \times \left( \eta \nabla \times \mathbf{B} \right)$ term comes from an approximation of Ohm's law and Ampere's law, i.e., $\mathbf{E} \approx \eta \mathbf{j}$ and $\mathbf{j} \propto \nabla \times \mathbf{B}$. If there are no local electric field sources (i.e., no excess charges), then $\nabla \cdot \mathbf{E} = 0$, which implies: $$ \nabla \cdot \left( \eta \mathbf{j} \right) = \eta \nabla \cdot \mathbf{j} + \nabla \eta \cdot \mathbf{j} = 0 \tag{3} $$ If $\mathbf{j} \propto \nabla \times \mathbf{B}$ is true, then the first term is zero, as the divergence of the curl of a vector is always zero, so we are left with: $$ \nabla \cdot \left( \eta \mathbf{j} \right) \approx \nabla \eta \cdot \mathbf{j} = 0 \tag{4} $$ The right-hand side can be rewritten as $\nabla \eta \cdot \left( \nabla \times \mathbf{B} \right) = 0$.
We can then use Equation 0g above to show that the following is also true: $$ \nabla \cdot \left( \nabla \eta \times \mathbf{B} \right) = 0 \tag{5} $$ where we have used the fact that the curl of the gradient of a scalar is always zero. We also know another relationship from Faraday's law where: $$ \begin{align} \nabla \times \mathbf{E} & = - \frac{ \partial \mathbf{B} }{ \partial t } \tag{6a} \\ & = \nabla \times \left( \eta \ \mathbf{j} \right) \tag{6b} \\ & = \eta \nabla \times \mathbf{j} + \nabla \eta \times \mathbf{j} \tag{6c} \\ & = \frac{ 1 }{ \mu_{o} } \left[ \eta \nabla \times \left( \nabla \times \mathbf{B} \right) + \nabla \eta \times \left( \nabla \times \mathbf{B} \right) \right] \tag{6d} \\ & = \frac{ 1 }{ \mu_{o} } \nabla \times \left( \eta \nabla \times \mathbf{B} \right) \tag{6e} \end{align} $$ where $\mu_{o}$ is the permeability of free space. My question is: are these two expressions equal? In general, no. Under the right approximations, yes.
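To address the uneasiness about identity (0a) directly: it can be checked symbolically for a generic field. Below is a small SymPy sketch (not from the original answer; note the result is convention-dependent — here the gradient is taken as $(\nabla\mathbf B)_{ij}=\partial_i B_j$ and the divergence of the tensor is contracted on its second index):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)
B = [sp.Function(f'B{i}')(x, y, z) for i in range(3)]

def curl(A):
    return [sp.diff(A[2], y) - sp.diff(A[1], z),
            sp.diff(A[0], z) - sp.diff(A[2], x),
            sp.diff(A[1], x) - sp.diff(A[0], y)]

# grad(B)[i][j] = d B_j / d x_i  (one common convention)
gradB = [[sp.diff(B[j], X[i]) for j in range(3)] for i in range(3)]
T = [[gradB[i][j] - gradB[j][i] for j in range(3)] for i in range(3)]

# divergence contracted on the second index: div(T)_i = sum_j dT[i][j]/dx_j
divT = [sum(sp.diff(T[i][j], X[j]) for j in range(3)) for i in range(3)]
cc = curl(curl(B))

print([sp.simplify(divT[i] - cc[i]) for i in range(3)])   # -> [0, 0, 0]
```

With the transpose/contraction conventions swapped, the same computation yields the opposite sign, which is worth keeping in mind when comparing against a particular paper's notation.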
{ "language": "en", "url": "https://physics.stackexchange.com/questions/542898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
WKB method to calculate the ground eigenvalue of a Quartic Potential I've come across a problem while studying the WKB method. I want to calculate the eigenvalues of a symmetric quartic double-well potential. It could be any potential; I chose it to be $$V(x) = x^4 - 4x^2 +4$$ The Hamiltonian with $\hbar$ = $m$ = $1$ gives $$H = -\frac{1}{2}\frac{d^2}{dx^2} + V(x)$$ and I plan to find the eigenvalues of the bound states given by the potential $V(x)$ represented below, with turning points $x_2 > x_1$ and roots $x=\pm \sqrt{2}$. The quantization* problem for the double potential well with respect to even (odd) solutions is, for $x>0$, $$\theta \simeq (n + \frac{1}{2}) \pi \mp \frac{1}{2} e^{- \phi} \tag{1}$$ with $$\theta = \int_{x_1}^{x_2} p(x') dx'$$ $$\phi = \int_{0}^{x_1} |p(x')| dx'$$ $$p(x) = \sqrt{2m(E_n - V(x))} = \sqrt{2m(E_n - (x^4 - 4x^2 + 4))}$$ (*Introduction to Quantum Mechanics by David J. Griffiths, problem $8.15$) My problem lies exactly in solving eq. ($1$), since it involves integrals of the square root of a quartic function: $$\int_a^b \sqrt{2m(E_n - (x^4 - 4x^2 + 4))} dx$$ I used Mathematica but it couldn't compute a solution. Is there any approximation or trick I could use to solve it analytically? If not, any software that could do the computation? PS: After numerically solving the Schrödinger equation for the ground state eigenvalue I obtained $E_0 \simeq 1.8$ with $\hbar$ = $m$ = $1$ as stated above. With the WKB method I'm hoping to obtain a similar result.
You can do this in Mathematica using NIntegrate to do numerical integration and either NSolve or FindRoot to find the energy that satisfies equation (1). In this way I found the $n=0$ energies $E_0^\text{even}\approx 1.74646$ and $E_0^\text{odd}\approx 2.07823$ with a few lines of code. For the former, $\theta\approx 1.43953$ and $\phi\approx 1.33739$; for the latter, $\theta\approx 1.73284$ and $\phi\approx 1.12677$. Since this is only an approximation, it seemed pointless to go beyond standard precision. The next energies with $n=1$ appear to be "above the hump", where I don't think your equations apply, because they give nonsense.
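For completeness, here is a rough SciPy version of the same root-finding (a sketch, not the code used above; it should land close to the quoted values):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

V = lambda x: x**4 - 4*x**2 + 4          # = (x^2 - 2)^2, hbar = m = 1

def theta_phi(E):
    # classical turning points for 0 < E < 4 (below the central hump)
    x1, x2 = np.sqrt(2 - np.sqrt(E)), np.sqrt(2 + np.sqrt(E))
    theta = quad(lambda x: np.sqrt(max(E - V(x), 0.0)), x1, x2)[0]
    phi   = quad(lambda x: np.sqrt(max(V(x) - E, 0.0)), 0.0, x1)[0]
    return theta, phi

def wkb_condition(E, n=0, parity=+1):    # parity=+1 even (minus sign), -1 odd
    theta, phi = theta_phi(E)
    return theta - ((n + 0.5)*np.pi - parity*0.5*np.exp(-phi))

E_even = brentq(wkb_condition, 0.5, 3.5, args=(0, +1))
E_odd  = brentq(wkb_condition, 0.5, 3.5, args=(0, -1))
print(E_even, E_odd)   # expect roughly 1.75 and 2.08
```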
{ "language": "en", "url": "https://physics.stackexchange.com/questions/542987", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How can a photon collide with an electron? Whenever I study the photoelectric effect and the Compton effect, I have always wondered how a photon can possibly collide with an electron given their unmeasurably small size. Every textbook I've read says that the photoelectrons are emitted because photons collided with them. But since photons and electrons have virtually no size, how can they even collide? I have searched for the answer on the internet but I couldn't find a satisfying one.
Indeed, the picture that one gets from a typical discussion of the Compton effect is far from realistic. Here are a few points to consider if you are to do an actual experiment (the list is by no means exhaustive). * *Detecting a single photon or a single electron is hard (if not impossible) even with modern equipment. Thus, in reality we are talking about many photons scattered from many electrons. This is in fact consistent with the Copenhagen interpretation of quantum mechanics, with measurements done on a statistical ensemble. *A photon is not a point-like particle, but a field. Many photons are an electromagnetic wave. Thus, we can view the interaction of an electron with a photon as the interaction of a point-like electron with an electromagnetic field periodic in space and time (this is actually the direct answer to the question). *As I have already mentioned, many electrons are involved in an actual experiment, and electrons repel each other. Thus, one cannot really do such an experiment with free electrons, but rather with electrons weakly bound in some material.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/543056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38", "answer_count": 9, "answer_id": 4 }
What's the meaning of a continuity equation with $\nabla^2 \rho$ on the right-hand side? I stumbled upon a continuity equation with a $\nabla^2$ term on the right-hand side: $$ \partial_t \rho + \nabla \cdot (\vec b \rho) = D \nabla^2 \rho , $$ where $\vec b$ denotes the forward velocity and $D$ is a constant. What's the meaning of such a diffusion equation? Some background: Since we have particle number conservation, we have $$ \partial_t \rho + \nabla \cdot (\vec v \rho) = 0 , $$ where $\vec v$ denotes the ordinary flux velocity. Moreover, if there are sources, we have $$ \partial_t \rho + \nabla \cdot (\vec v \rho) = \sigma . $$
It looks like a diffusion equation with advection. Such an equation would be relevant for heat transport in a fluid moving at velocity ${\bf b}$. If $D=0$ the LHS says that the stuff whose density is $\rho$ is being moved about by the flow. If ${\bf b}=0$ the stuff is just diffusing. With both ${\bf b}$ and $D$ non-zero, you have a combination of both processes.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/543341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What are the optical properties that cause the glossy look of wood varnish? Applying varnish to a painting or wood makes the colors more vibrant. Why?
Disclaimer: Since I am not an expert in this field, this is in no way a complete answer. The only intention is to provide some insight into some of the notable developments in the area, from which readers can set out to explore. What an interesting observation! Wooden surfaces offer an incredibly uneven surface for the reflection of light. When you apply varnish to the surface, it seeps into all the irregularities and solidifies into a smooth outer skin. Most of the reflections from this surface will be direct (specular) reflections rather than diffuse reflections, resulting in glossiness. Apart from the glossiness, making the surface more reflective can give it a brighter appearance. But that's not all; applying varnish to an object can change its color. The effect is due to the changes in the reflective and scattering properties of the varnish layer and its pigments. For many artists, varnishing is an essential finishing touch for their artworks as it brings out the fine details in them. It is also used in art restoration projects. Related reading 1, Related reading 2, Related reading 3. Image credit: cowans blog
{ "language": "en", "url": "https://physics.stackexchange.com/questions/543465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculating work of an object moving up a slope If an object is being pushed across a horizontal surface, the equation for the work done is $W = F s$, where $s$ is the horizontal displacement. If an object is being lifted to a height of $h$, the equation for the work done is $W = F h$, where $h$ is the vertical displacement. If an object is being pushed up a slope, or if a human is moving up a flight of stairs, however, only the vertical displacement is concerned in calculating work, while the horizontal displacement is omitted. Why is this so? (With hindsight, I think this is a conceptual misunderstanding that only arises because it is quoted out of context!)
You have to consider the forces acting on that body. If the body moving on a slope experiences no horizontal force, i.e. its velocity in the horizontal direction is zero, no work will be done in the horizontal direction. But if you consider a force acting in the vertical direction, then work will be done by the body to move in the vertical direction.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/543766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
“Bananagrams” under black light? There is a game called “Bananagrams” which includes a bunch of pieces with a letter on each. It seems when I shine a black light flashlight on the letters, the “M” letters glow, but no other pieces do. All the pieces appear the same under normal lights (except of course the letters on each piece). Why would only the M’s glow, can someone explain what may be happening here?
The blocks with the M came from a different batch than the others. There is no way to know why without more information. It could have been that a previous batch of M blocks was defective (e.g. incomplete letters due to a defective M die) and had to be replaced, but that is pure speculation and there could be many other explanations.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/543905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is there any way to know when the wave function collapses? If we take 2 entangled particles and move them away from each other, is there a way to put one of them in some kind of "sensor" that would tell if the entangled partner has been measured? If yes, how does such a sensor function? If not, how do we know that the measurement of one did collapse the other?
In the case of entangled particles, they are usually separated so that a signal cannot pass from one to the other. The wave function is an expression of the probability for the result of a measurement. Probability is a human assessment of likelihood. Since Alice and Bob have different information, it is natural that they assign different probabilities, and consequently different wavefunctions. When Alice performs a measurement of one particle, entanglement ensures that Alice's wavefunction for both particles collapses. This does not affect Bob's wave function in any way. There is no way he can know whether Alice has performed a measurement. Only later, when the results of measurement are brought together, is a correlation found between the measurements of Alice and Bob.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/544042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Power loss in power cable contradiction To minimize the power loss in long-distance power cables it is best to minimize the current and maximize the voltage. This is because the power loss in the cable is calculated by $P=VI$, which we can also express as $I^2R$ by use of Ohm’s law. But by the same reasoning the power loss can also be expressed as $V^2/R$, which clearly shows that the power loss is minimized when the voltage is minimized. How can I explain the contradiction? Why can’t I use $V^2/R$?
Power loss in cables (Joule heating) cannot be calculated as $P=VI$ where $V$ is voltage between the lines. The proper voltage to use for power loss over some wire segment is voltage drop over this segment, which is much smaller than $V$ and is given by Ohm's law (the wire used to transfer current over long distances is usually made of aluminium): $$ V_{drop} = RI. $$ Here $R$ is ohmic resistance of the wire segment and $I$ is current flowing through the wire. Now we can use the formula $$ P_{loss} = V_{drop}I $$ and we get $$ P_{loss} = RI^2. $$ So the lower the current used, the lower the power loss in the wires. We could express this loss in terms of $V_{drop}$, but this is not common in engineering practice. The moral of the usual story is that to minimizes losses, we need to minimize current (or, in the alternative expression, the voltage drop) and the usual way to do that while maintaining net transmitted power $VI$ at hand is to crank up voltage between the power lines $V$.
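To make the scaling concrete, here is a small worked example (a Python sketch with made-up but representative numbers, not taken from the answer above):

```python
# Send P = 100 MW through a line of total resistance R = 1 ohm
# at two different transmission voltages.
P, R = 100e6, 1.0

for V in (10e3, 100e3):          # 10 kV vs 100 kV between the lines
    I = P / V                    # current needed to deliver the same power
    V_drop = R * I               # voltage drop along the wire (not V!)
    P_loss = R * I**2            # = V_drop * I
    print(f"V = {V/1e3:5.0f} kV: I = {I:7.0f} A, loss = {P_loss/1e6:6.2f} MW")
```

Raising the transmission voltage by a factor of 10 cuts the current, and hence the ohmic loss, by a factor of 100.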
{ "language": "en", "url": "https://physics.stackexchange.com/questions/544187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How does a string store information during wave superposition? Given we have two Guassian wave pulses in the same medium (string) but in opposite directions. The principle of superposition states that they should pass through each other without being disturbed, and that the net displacement is the vector sum of the individual displacements. Like this: My question is, how does the string have 'memory' of the information on the shape and velocity of both pulses separately? For instance, consider if the interference were destructive. Then when the string is perfectly flat, it cannot have memory of what brought it to this state; similar to how a particle does not continue to accelerate after a force has stopped acting: it has no memory. I understand that although the displacement is zero, the velocity is non zero and at this point the string has kinetic energy. How can the string know from the velocities of various points which waves will emerge? How do both the waves emerge unchanged? I think my confusion stems from a lack of conceptual clarity on the principle of superposition, I may not completely understand it. Any help is appreciated. Note to moderators: I request that this question not be treated as a duplicate. While there are similar questions here and here, none of the answers are able to justify how this happens in terms that make sense to me.
Wave superposition happens because each point of a string in a superposed state gets a pair of velocities, one from the first impulse and one from the second. It is this addition rule for the velocities of individual points that makes wave interference work.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/544315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Fraunhofer line width What sets the width of Fraunhofer lines in the solar spectrum? I first thought of Doppler broadening, but numerical applications result in much too high temperatures. For instance, using these data, I find a $\Delta \lambda =$ 0.01 nm line width on the 630.25 nm line of iron, corresponding to a temperature of $$ T = \frac{mc^2}{k_B} \frac{\Delta \lambda ^2}{\lambda ^2} \simeq 100\, 000 \, {\rm K} $$ which is way above the Sun's photosphere temperature. Is there something wrong with the above calculation, or is the line width coming from something else?
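For reference, plugging numbers into the formula above (a quick Python check; the exact prefactor depends on whether $\Delta\lambda$ is a FWHM or a $1/e$ width, which changes the result by a factor of a few):

```python
# Doppler-broadening temperature estimate for the Fe line at 630.25 nm,
# using the formula exactly as written in the question.
k_B = 1.380649e-23       # J/K
u = 1.66053907e-27       # kg
c = 2.99792458e8         # m/s

m = 55.85 * u            # iron atom
lam, dlam = 630.25e-9, 0.01e-9

T = m*c**2/k_B * (dlam/lam)**2
print(f"T ~ {T:.2e} K")  # of order 1e5 K, far above the ~5800 K photosphere
```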
To begin with, your reasoning assumes near transparency of the solar atmosphere. This is generally not true. The solar atmosphere is “optically thick” (highly absorbing in the vicinity of a spectral line) as opposed to “optically thin” (nearly transparent). The shape of such a spectral line generally requires radiation transport theory (Foukal “Solar Astrophysics”) to describe adequately. Also, in addition to thermal broadening, there is a “non thermal” contribution to the (Gaussian) width which arises from unresolved random motion along the line of sight. This non thermal velocity component is generally referred to as "microturbulence". I went through the exercise of calculating a solar filament temperature using the H alpha and Ca K lines based on a simple application of transport theory which incorporates the effect of microturbulence . You can read it here: https://solarchatforum.com/viewtopic.php?f=8&t=25215 Hope this helps.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/544554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Is there any operator in quantum mechanics that measures an observable with non-zero uncertainty? What does a measurement do? The answer is: if the detector is designed to measure some observable O, it will leave the measured object, at least for an instant, in a zero-uncertainty state. I want to know whether, in the context of quantum mechanics and based on Hilbert space, we can define an operator that measures an observable with non-zero uncertainty?
Defining an observable $M = \sum_m m P_m$, with $m$ the eigenvalues and $P_m$ the eigenspace projectors, is equivalent to defining a set of projective measurements $\{P_m\}$ such that $\sum_m P_m = I$ and $P_m P_{m'} = \delta_{mm'} P_m$. Since the operators $P_m$ are idempotent ($P_m^2 = P_m$), two consecutive measurements, one immediately after the other, necessarily yield the same result. Any other set of measurements $\{M_m\}$ would be measuring something else. On the other hand, it is possible to define a set of measurements $\{M_m\}$ such that two immediately consecutive measurements can yield different results. This can be expressed using the POVM (positive-operator-valued measure) formalism. In that case, the measurement operators are no longer orthogonal projectors. To give an example: such measurements can be used to unambiguously distinguish non-orthogonal quantum states. By virtue of the no-cloning theorem, this is impossible to achieve with complete reliability. However we can have a measurement that is sometimes inconclusive but never makes an error of mis-identification. Let's take the simple case of a system prepared in one of two states $|\psi_1 \rangle = |0 \rangle$ or $| \psi_2 \rangle = \frac 1 {\sqrt 2}(|0 \rangle + |1 \rangle)$. We will then apply the following POVM: $$\begin{align} E_1 &= \frac {\sqrt 2} {1 + \sqrt 2} |1 \rangle \langle 1| \\ E_2 &= \frac {\sqrt 2} {1 + \sqrt 2} \frac {(|0 \rangle - |1 \rangle) (\langle 0| - \langle 1|)} 2 \\ E_3 &= \mathbf I - E_1 - E_2 \end{align}$$ We can see that if we have state $|\psi_1 \rangle$, there is zero probability of getting the result $E_1$: $\langle \psi_1 | E_1 | \psi_1 \rangle = 0$. Similarly, if we have state $| \psi_2 \rangle$, there is zero probability of getting the result $E_2$. Therefore, if we observe $E_1$, we know that the state was $| \psi_2 \rangle$ and vice versa. However, in both cases, there is a non-zero probability of observing $E_3$, our "inconclusive" result: $$\langle \psi_1 | E_3 | \psi_1 \rangle = \langle \psi_2 | E_3 | \psi_2 \rangle = \frac 1 {\sqrt 2} \approx 0.71$$ Adapted from: * *Quantum Computation and Quantum Information, Nielsen and Chuang (2010) *Unambiguous Quantum State Discrimination, Keyes (2005)
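Since all the operators above are small explicit matrices, the stated probabilities are easy to check numerically. A minimal NumPy sketch (only the numbers above are being verified, nothing beyond the answer):

```python
import numpy as np

s2 = np.sqrt(2)
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
psi1 = ket0                       # |psi_1> = |0>
psi2 = (ket0 + ket1) / s2         # |psi_2> = (|0> + |1>)/sqrt(2)

E1 = (s2 / (1 + s2)) * np.outer(ket1, ket1)
diff = ket0 - ket1
E2 = (s2 / (1 + s2)) * np.outer(diff, diff) / 2
E3 = np.eye(2) - E1 - E2          # completeness: E1 + E2 + E3 = I

prob = lambda E, psi: float(psi @ E @ psi)
print(prob(E1, psi1), prob(E2, psi2))   # both 0: no mis-identification
print(prob(E3, psi1), prob(E3, psi2))   # both ~0.707: the inconclusive outcome
```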
{ "language": "en", "url": "https://physics.stackexchange.com/questions/544836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Are antileptons and antibaryons linked? The recent news about the T2K experiment got me thinking: is there any linkage in the Standard Model between the matter and antimatter categories across the families of Standard Model particles? Are antileptons necessarily linked to antibaryons? As a specific example: In our universe "matter" is made up of electrons $e^-$ and protons $p$. Antimatter particles are positrons $e^+$ and antiprotons $\bar p$. $p$ and $\bar p$ are obviously a matter-antimatter pair, but is there any theoretical reason the $e^-$ is the same type of matter as the $p$? Could there be a universe in which $p$ and $e^+$ are the "matter" particles and $\bar p$ and $e^-$ are the "antimatter" particles?* * Besides the fact that obviously that would be a weird universe where you couldn't make atoms.
Protons and electrons are not composed of the same "type of matter". Electrons, i.e. leptons, are fundamental particles, whereas protons are hadrons, which are composite states of quarks. Quarks and leptons are the fundamental particles. Also, since atoms are held together by the attraction of opposite charges, you could not make an atom by combining $p$ and $e^+$ as your 'matter pair'. You could argue that possibly you could have $\bar{p}$ and $e^+$, i.e. that antimatter is the new matter, but there are theoretical mechanisms by which matter came to dominate antimatter in the early universe to give what we have today. They are grouped together because their charges are ~equal in magnitude and opposite: opposite so that they attract, and experimentally very close in magnitude. If you make the argument that the decay of a proton is to a positron and a neutral pion (which is hypothetical and not observed yet), you can see why their charges are of equal magnitude and hence why they 'form up together'.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/545014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Wavefunction of a photon Does anyone have an explicit closed-form expression for the wavefunction of a single photon from a multipolar source propagating through free space? Any basis is acceptable as long as it is a single photon state. A reference would also be appreciated, but not essential. ———————— A possible duplicate has been suggested: Does a photon have a wave function or not? But this question primarily concerns the existence of the wavefunction and is not what I am looking for. None of the answers provide an explicit expression for the wave function, and neither the question nor the answers discuss a multipole source. The multipole source, in particular, is central to my question.
I think you are asking for the wavefunction of a single photon in a state of sharp total angular momentum $j$. This is, \begin{equation} |jmk\lambda\rangle=\frac{(2j+1)}{4\pi}\int\sin{\beta}d\alpha d\beta \{D^{j}(\alpha,\beta,0)^{m}_{\ \lambda}\}^{*}|\vec{k},\lambda\rangle \end{equation} In this equation the 3-momentum of the photon is the vector $\vec{k}$. The magnitude of $\vec{k}$ is $k$. The vector $\vec{k}$ is defined as being rotated by Euler angles $\alpha,\beta$ from a fiducial momentum $\vec{k}_{0}$ along the z-axis. The actual 3-momentum is $\vec{k}=R(\alpha,\beta,0)\vec{k}_{0}$, where $R(\alpha,\beta,\gamma)$ is the rotation matrix and $\alpha,\beta,\gamma$ are Euler angles. The Euler angle convention is: $\alpha$ is a rotation about z, $\beta$ about the resultant y, $\gamma$ about the resultant z. The matrix $D^{j}(\alpha,\beta,0)^{m}_{\ \lambda}$ is Wigner's D-matrix. $\lambda=\pm 1$ is the photon's helicity. The states $|\vec{k},\lambda\rangle$ are the linear-momentum-helicity eigenvectors. The states $|jmk\lambda\rangle$ are angular-momentum-helicity eigenvectors. This result is equation (8.7.2), page 147 of the book "Group Theory in Physics" by Wu-Ki Tung. It is also equation (28.35) on page 218 of the book "Relativistic Theory of Reactions", by J. Werle.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/545184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
If a charge comes into existence, and there is another charge 20 light years away (which has existed for eternity), what will the interaction be? Our physics teacher just taught us a bit about electric fields, and why the concept was invented. He said that if a charge comes into existence, another charge won't feel its effect instantaneously; rather, the field will travel at the speed of light and forces will arise when the field reaches the other charge. But if a charge comes into existence, let's call it particle 1, while another charge has been 20 light years away from that point in space for a long time, let's call this one particle 2, what will the interaction be like? My best guess was that particle 1 will feel the effect of particle 2 as soon as it comes into existence, while particle 2 will feel the effect from particle 1 after 20 years. I simply don't know, please help?
If a cork is floating on a still pond and then you drop a brick into the pond several metres away, does the cork start bobbing about as soon as the brick touches the water? Or does it have to wait until the waves reach it? So the remote charge has to wait on the electric field to propagate from the initial charge. By the way, a single charge can't come into existence on its own - there has to be an equal and opposite charge as well (Law of Conservation of Charge).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/545508", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Graphical explanation for length contraction I can understand the mathematical explanation for the reason why there should be a length contraction, but I fail to understand it intuitively. That is why I tried to explain it using spacetime diagrams, but for some reason, I was unable to do so. Let us use the following procedure to measure the length of the rod from S's reference frame: S moves with $\vec{v} = v \hat x$ and sets its watch to $t = 0$ when it is at one end of the rod, and looks at its watch when it is at the other end of the rod and reads $t = t_2$. Let us also put one end of the rod, the one which $S$ visits first, at the origin of $S'$. We have two events $$e_1: \quad (t_1', x_1') = (t_1', 0) \quad and \quad (t_1, x_1) = (0,0),$$ $$e_2: \quad (t_2', x_2') = (t_2', L_0) \quad and \quad (t_2, x_2) = (t_2, 0)$$ Note that the rod is stationary wrt S' and both events happen at the origin of S wrt S. Since we are relying on $t_2$ to calculate the length of the rod, graphically (see the above figure), $$t_2 = \sqrt{L_0^2 + (t_2')^2}.$$ If we just cheat a bit (to see whether we are on the right track) and use Lorentz transformations, we can see that $t_2' = \frac{t_2}{\sqrt{1-v^2}}$, which means that the above equation implies $$t_2 = \sqrt{v^2 + 1}t_2' \quad \Rightarrow \quad x_2 = \sqrt{v^2 + 1}L_0,$$ which is clearly wrong. Question: What am I doing wrong?
Let $S'$ be a reference frame fixed on the rod, and let us measure the positions of both ends of the rod (at the same time) when one end is at $x = 0$ wrt (with respect to) $S$. We have two events $$e_1: \quad (t, x_1) = (0, 0)$$ $$e_2: \quad (t, x_2) = (0, L) \quad and \quad (t_2', x_2') = (t_2', L_0)$$ Using the Lorentz transformations, we have $$t_1' = \gamma (0 - v\cdot 0) = 0$$ $$x_1' = \gamma (0 - v\cdot 0) = 0$$ $$t_2' = \gamma (t - v x_2) = \gamma (0 - vL) = -\gamma v L$$ $$L_0 = x_2' = \gamma (L - v\cdot 0) = \gamma L \quad \Rightarrow L = \frac{L_0}{\gamma},$$ as desired. Moreover, this can also be shown graphically on a spacetime diagram.
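A quick numerical check of this derivation, with an assumed speed and rest length (units where $c = 1$):

```python
import numpy as np

v = 0.6                          # relative speed, in units where c = 1
gamma = 1.0 / np.sqrt(1 - v**2)  # 1.25
L0 = 2.0                         # rest length of the rod in S'

L = L0 / gamma                   # length measured in S: 1.6
# Lorentz-transform event e_2 = (t, x) = (0, L) back into S':
x2_prime = gamma * (L - v * 0.0)
print(L, x2_prime)               # 1.6 and 2.0 = L0, as in the derivation
```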
{ "language": "en", "url": "https://physics.stackexchange.com/questions/545736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Where did the work done by smaller force go? Suppose I have a spring of spring constant 150 N/m. One person pulls it with a force of 15 N. The extension produced is 0.1 m. Now another person comes and pulls with a force of 30 N (the first person is still there). The final extension is 0.3 m. The initial and final potential energies of the spring are 0.75 J and 6.75 J respectively. The work done by the bigger force is 6 J, which is exactly equal to the change in P.E of the spring. So where did the work done by the smaller force (when they were pulling it together) go? Why didn't it help to further increase the potential energy of the spring?
When both are pulling it together, the total force is $45\ \mathrm N$, the extension is $0.3\ \mathrm m$, and the work done is still $6.75\ \mathrm J$. When they pull individually, the work done is $6\ \mathrm J$ and $0.75\ \mathrm J$ respectively, irrespective of who pulls first. It doesn't matter how the spring is pulled, together or individually: each contribution remains the same.
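For what it is worth, the extensions and stored energies quoted in the question are easy to verify; a short sketch:

```python
k = 150.0                        # N/m
x1 = 15.0 / k                    # extension with one person pulling: 0.1 m
x2 = (15.0 + 30.0) / k           # extension with both pulling:       0.3 m
U1 = 0.5 * k * x1**2             # 0.75 J stored initially
U2 = 0.5 * k * x2**2             # 6.75 J stored finally
print(x1, x2, U1, U2, U2 - U1)   # change in stored energy: 6.0 J
```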
{ "language": "en", "url": "https://physics.stackexchange.com/questions/546063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Matsubara frequencies and analytic continuation - Domain and uniqueness In the Matsubara formalism, there is a commonly-made statement that an imaginary-time correlation function is related to a retarded correlation function via the replacement $i\omega_n \rightarrow \omega + i0^+$, also called analytic continuation. In a mathematical sense, on the other hand, analytic continuation means to find an analytic function whose values match the "input data", which in this case are the values at the discrete Matsubara frequencies $i\omega_n$. For analytic continuation to retarded correlation functions, I have two questions: * *What is the domain of analytic continuation and why? That is to say, do we only care about analytically continuing in the upper half plane, excluding all frequencies $\omega_n \leq 0$ as part of our input data to analytic continuation? Do we also exclude the real line, so that we ignore the zeroth bosonic frequency? And why is that the "physical" choice? *In what sense is analytic continuation unique? Does one require additional "physical" conditions other than analyticity? There is a strong statement on the uniqueness of complex-analytic functions via the identity theorem; does the infinite Matsubara frequency count as an accumulation point and thus be sufficient for uniqueness?
I hope I am not confusing something; the point is that the retarded propagator $G^R$ is regular in the upper half plane, so by the second statement in your question, two functions that are equal on a set with an accumulation point (here at infinity) are equal at every point in the domain of analyticity. The same statement also holds in the lower half plane, with the advanced propagator $G^A$ instead of the retarded one and $\omega - i 0$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/546408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
Heat transfer in glass top ranges * *The "glass" in glass top ranges apparently has quite low heat conductivity as areas just a short distance from the heating element don't get so hot - especially compared to a metal surface which has excellent heat conductivity. *The glass does not get red-hot. *Presume the glass top is essentially transparent to infrared? *Conclusion: the dominant heat transfer mechanism is radiation. Questions: * *Is #3 above true? *Is #4 above true?
You are correct that the primary mechanism of heat transfer to the cooking utensils on a glass ceramic cook top is infrared radiation from the hot metal coil just below the surface. The low thermal conductivity of glass ceramic helps keep the conductive heating of the glass localized below the utensil. Hope this helps
{ "language": "en", "url": "https://physics.stackexchange.com/questions/546513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why doesn't a backward wave exist? (Huygens principle) Since every point on a wavefront acts as a source of secondary waves (wavelets), why do we get only a forward wavefront and not a backward one? Huygens' principle says that the amplitude of the backward wave is zero, but why and how does that happen?
Huygens' original description of wave propagation did not adequately explain the backward wave. Elimination of the backward wave is the reason for the 'obliquity factor' or 'inclination factor' that was added by Fresnel/Kirchhoff. It adjusts the strength of the Huygens wavelets as a function of the direction of propagation of each segment of the wavelet so that there is no backward wave. It is $1 + \cos \theta$, where $\theta$ is the angle between the normal to the original wavefront and the normal to the secondary wavefront. Google "obliquity factor" and also see this: Intuition of inclination factor in Kirchhoff's diffraction law However the obliquity factor is often considered arbitrary and may not be necessary--see my "Huygens' Principle geometric derivation and elimination of the wake and backward wave" https://www.researchgate.net/publication/340085346
{ "language": "en", "url": "https://physics.stackexchange.com/questions/546665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is the net torque of an only translating rigid body zero, independent of the point chosen? I know that the Newton-Euler equations can be proven using the center of mass as reference, but I was wondering if this is a special case, or if you can provide a counter-example. We know that when a rigid body is only translating the net torque through the center of mass is zero. Is this true when we evaluate the torque using other points too?
No, it is not true; for example, consider a uniform-mass ball of radius $r$, which has a force $\boldsymbol{F}$ acting horizontally through its centre, such that it translates without rolling. The torque about its centre of mass is zero (because it is not rotating), but the torque calculated about its contact point with the ground is not zero: \begin{equation} \boldsymbol{\tau} = \boldsymbol{r}\times\boldsymbol{F} \neq 0\,. \end{equation}
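A small numerical illustration of the same point, with assumed numbers (a 0.5 m radius and a 3 N horizontal force through the centre):

```python
import numpy as np

radius, F_mag = 0.5, 3.0
F = np.array([F_mag, 0.0, 0.0])            # horizontal force through the centre
centre = np.array([0.0, 0.0, 0.0])         # centre of mass of the ball
contact = np.array([0.0, -radius, 0.0])    # contact point with the ground

tau_about_com = np.cross(centre - centre, F)        # lever arm is zero
tau_about_contact = np.cross(centre - contact, F)   # lever arm of length `radius`
print(tau_about_com, tau_about_contact)    # [0 0 0]  vs  [0 0 -1.5]
```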
{ "language": "en", "url": "https://physics.stackexchange.com/questions/546847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is the force of gravity always directed towards the center of mass? This is a pretty basic question, but I haven't had to think about orbital mechanics since high school. So just to check - suppose a [classical] system of two massive objects in a vacuum. If the density of either object is the same at a given distance from the center, and both objects are spherical, then both objects can be treated as point-masses whose position is the [geometric] center of the original sphere. In the case that either object is not spherical or has an irregular distribution of mass (I'm looking at you, Phobos!), both objects can still be treated as point-masses but the center of mass rather than the geometric center must be used. Is this correct?
No. It is not correct. Consider this ridiculously contrived counter-example... Three spherically symmetric bodies (or point masses if you can tolerate this) are at the three vertices of a 45°, 90°, 45° triangle, ABC. The masses of the bodies are: $m_\text{A}=m,\ \ m_\text{B}=M,\ \ m_\text{C}=2M$. Regard the bodies at B and C as a single body, BC; join them, if you like, by a light rod. The centre of mass of body BC is at point P, $\tfrac23$ of the way between B and C. But the pull due to BC experienced by $m$, at A, is not directed towards P, as one can easily show by vector addition of the forces due to B and C. [In this case the forces are of equal magnitude, so the resultant bisects angle BAC and clearly doesn't pass through P!] The reason for the discrepancy is the inverse square law of gravitation.
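The geometry above is easy to check numerically; a sketch with the triangle placed at assumed coordinates (right angle at B, unit legs, $G = m = M = 1$):

```python
import numpy as np

G = m = M = 1.0
A = np.array([0.0, 1.0])               # mass m
B = np.array([0.0, 0.0])               # mass M  (right angle of the triangle)
C = np.array([1.0, 0.0])               # mass 2M

def pull(src, mass):                   # force on m at A due to `mass` at `src`
    d = src - A
    return G * m * mass * d / np.linalg.norm(d)**3

F = pull(B, M) + pull(C, 2 * M)        # total pull of body BC on m
com_BC = (M * B + 2 * M * C) / (3 * M) # centre of mass of BC, at (2/3, 0)

print(F / np.linalg.norm(F))                       # direction of the actual force
print((com_BC - A) / np.linalg.norm(com_BC - A))   # direction towards the COM
# the two unit vectors differ, so the pull does not point at the centre of mass
```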
{ "language": "en", "url": "https://physics.stackexchange.com/questions/547505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 6, "answer_id": 0 }
Can a massless particle have an effective mass? The effective potential is probably familiar from many contexts. However, what about an effective mass? Suppose a massless particle. For simplicity, suppose it's not some superficial particle, i.e. it has observable effects. Is it possible for such a massless particle to gain an "effective mass" through dynamical interaction? For example, a photon could well produce an $e^-\sim e^+$ pair in space, but I'm not sure whether it's a meaningful case. Further, what would the four-momentum mean for such an effective mass, if it exists?
The answer is no; a particle that has zero mass, like the photon, is a relativistic particle, and the algebra of the Lorentz transformations does not allow it, because of the energy and momentum conservation laws. The mass is the length of the energy-momentum four-vector. For example, a photon could well produce an $e^-\sim e^+$ pair in space, but I'm not sure whether it's a meaningful case. No, it is not, again because of the energy-momentum conservation laws. The electrons, if on shell, have a mass of about 0.5 MeV each, so the invariant mass of the produced pair would have to be at least ~1 MeV, whereas the photon has zero mass. This means that the energy-momentum of the outgoing system would have to be different from that of the incoming photon, leading to a violation of energy and momentum conservation. Pair production from photons has to happen through an interaction with another field. In this instance, the field of a nucleus Z.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/547610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
If I apply a constant force to an object until I've reversed its starting velocity, does its final position remain unchanged? A body of mass $m$ has initial velocity $v_0$ in the positive $x$-direction. It is acted on by a constant force $F$ for time $t$ until the velocity becomes zero; the force continues to act on the body until its velocity becomes $−v_0$ in the same amount of time. Write an expression for the total distance the body travels in terms of the variables indicated. I let $t'$ denote the time at which the velocity goes to zero so I can use $t$ as a variable. Then I found $$a = \frac{\Delta v}{\Delta t} = \frac{-2v_0}{2t'} = -\frac{v_0}{t'}$$ Substituting this expression for $a$ into $s(t) = \frac{1}{2} a t^2 + v_0 t$ yields $$\begin{align} s(t) & = \left(-\frac{1}{2} \frac{v_0}{t'}\right) t^2 + v_0 t \\ s(2t') & = \left(-\frac{1}{2}\frac{v_0}{t'}\right)(2t')^2 + v_0 (2t') = 0 \end{align}$$ This makes intuitive sense to me. If the positive $x$ direction is up and the force is gravity, then if we shoot a ball upward with velocity $v_0$, it will decelerate to velocity $-v_0$ exactly when it returns to the launch position. To double check, I tried taking (from $a = -\frac{v_0}{t'}$ and $F = ma$) $a = \frac{F}{m}$ and $v_0 = -\frac{F}{m}t'$. I get the same answer when substituting into $s(t)$: $$\begin{align} s(t) & = \frac{1}{2} a t^2 + v_0 t \\ & = \left(\frac{1}{2}\frac{F}{m}\right) t^2 -\left(\frac{F}{m}t'\right) t \\ \implies s(2t') &= \left(\frac{1}{2}\frac{F}{m}\right) (2t')^2 -\left(\frac{F}{m}t'\right) (2t') =0 \end{align}$$ My textbook, however, says without comment that the answer is $\frac{F}{m}(t)^2$, or in my notation, $\frac{F}{m}(t')^2$. Am I wrong, or is the textbook wrong, or could this be an ambiguity in wording—e.g. is there a way to construe "total distance the body travels" as "the furthest position the body reaches from its starting point"? Or could it be that the question is asking not about the object's location at time $t = 2t'$ but about its location as a function of time $s(t)$? In that case my answer is found above and would still be wrong.
The textbook's answer reflects the total distance traveled, i.e. $$\vert s(t') - s(0) \vert + \vert s(2t') - s(t') \vert = \vert \frac{1}{2} v_0 t' - 0 \vert + \vert 0 - \frac{1}{2} v_0 t' \vert = v_0 t' = -\frac{\vec F}{m}(t')^2 = \frac{F}{m}(t')^2$$ I prefer this way of organizing it because it lets me treat the displacement as a single function $s(t)$ and preserves the distinction between a specific value $t'$ and the variable $t$ that I am accustomed to in mathematics. Others may find the answer from @Anton Baranikov more legible.
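A quick numerical check with assumed values for $m$, $F$ and $v_0$ (the original problem is purely symbolic):

```python
m, F, v0 = 2.0, -4.0, 3.0          # force opposes the initial motion
a = F / m                           # -2.0 m/s^2
t_stop = -v0 / a                    # t', the moment the velocity reaches zero

s = lambda t: 0.5 * a * t**2 + v0 * t
total_distance = abs(s(t_stop) - s(0)) + abs(s(2 * t_stop) - s(t_stop))
print(total_distance, abs(F) / m * t_stop**2)   # both 4.5 m, as the answer states
```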
{ "language": "en", "url": "https://physics.stackexchange.com/questions/547916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is the potential energy stored in a spring proportional to the displacement or the square of it? Suppose a mass of $M$ kg is hanging from a spring in earth. The mass will stretch the spring about $x$ m. So the change in the gravitational potential energy is $mgx$ J (supposing $x$ to be very small compared to the radius of earth). And this amount of energy will be stored in the spring as potential energy. So, Change of gavitational energy = $mgx$ = potential energy stored in the spring And it seems that the potential energy stored in a spring is proportional to displacement $x$. But the potential energy in a spring is $U=\frac{1}{2}kx^{2}$ and so it's proportional to $x^2$, the square of displacement. So surely I am wrong somewhere. But where am I wrong?
You are correct that $$ \text{change of gravitational energy} = mgx \ \ \ \ (= \text{potential energy stored in the spring}). $$ No mistake there. That only gets you halfway there though. It is ALSO true that $$ \text{potential energy stored in the spring} = \frac{1}{2}kx^2 \ \ \ \ (= \text{change of potential energy}). $$ For both to be true, we must have $$ mgx = \frac{1}{2}kx^2. $$ Solving for $x$, $$ x = 2\frac{mg}{k} $$ Voilá. I put the text on the right of those first two equations in parentheses, to emphasize that, while they are technically true, just to stop there is to argue in circles. You need both constraints to arrive at a unique solution. Also, to be strict, the change in the gravitational energy (of the mass) is actually $-mgx$, not $+mgx$, and this is countered by the change in the potential energy of the spring. So really, $$ \text{change of gravitational energy} = - (\text{potential energy stored in spring}) $$ but the end result remains the same (the two minus signs cancel).
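As a quick sanity check with assumed numbers ($m = 1\ \mathrm{kg}$, $g = 9.8\ \mathrm{m/s^2}$, $k = 150\ \mathrm{N/m}$), the two energy expressions indeed agree at $x = 2mg/k$:

```python
m, g, k = 1.0, 9.8, 150.0
x = 2 * m * g / k                    # extension at which m*g*x = (1/2)*k*x^2
print(m * g * x, 0.5 * k * x**2)     # both ~1.28 J
```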
{ "language": "en", "url": "https://physics.stackexchange.com/questions/548021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
How does cutting a spring increase spring constant? I know that on cutting a spring into n equal pieces, spring constant becomes n times. But I have no idea why this happens. Please clarify the reasons
Let us consider that you join these n pieces of spring in series. Now you know that you have got back the original spring, whose spring constant is $k$ (say). Joining springs in series is like joining resistors in parallel (identical formulae), which can easily be proven by balancing forces. Hence, $$\frac{1}{k} = \frac{1}{k'} + \frac{1}{k'} + \frac{1}{k'}+\dots n ~\rm times$$ where $k'$ is the spring constant of an individual cut spring. On solving the above equation you will find that the spring constant becomes $n$ times larger.
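A one-line numerical check of this series-spring argument, with an assumed original constant of 100 N/m cut into four pieces:

```python
k, n = 100.0, 4
k_piece = n * k                                      # each piece: 400 N/m
k_series = 1.0 / sum(1.0 / k_piece for _ in range(n))
print(k_piece, k_series)                             # 400.0 and 100.0: series recovers k
```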
{ "language": "en", "url": "https://physics.stackexchange.com/questions/548186", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 6, "answer_id": 3 }
Massive spin-1 field and Proca Lagrangian In his book Quantum Field Theory and the Standard Model, Matthew D. Schwartz derives the Lagrangian for the massive spin 1 field (section 8.2.2). In eq. (8.23) he finds this to be \begin{align} \mathcal L&=\frac{1}{2}A_\mu\square A^\mu-\frac{1}{2}A_\mu\partial^\mu\partial_\nu A^\nu+\frac{1}{2}m^2A_\mu A^\mu, \end{align} where $\square = \partial_\mu\partial^\mu$. In the very same equation, he equates this to the Proca Lagrangian \begin{align} \mathcal L=\mathcal L_\mathrm{Proca}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+\frac{1}{2}m^2A_\mu A^\mu, \end{align} where $F^{\mu\nu}=\partial^\mu A^\nu-\partial^\nu A^\mu$. I fail to understand however, how the first Lagrangian can be rewritten to this Proca Lagrangian. My attempt was to rewrite the first term of the Proca Lagrangian into something that resembles the first two terms of the first Lagrangian above. It involves the product rule \begin{align} -\frac{1}{4}F_{\mu\nu}F^{\mu\nu}&=-\frac{1}{4}(2\partial_\mu A_\nu\partial^\mu A^\nu-2\partial_\mu A_\nu\partial^\nu A^\mu)\\ &=-\frac{1}{4}(2\partial_\mu[A_\nu\partial^\mu A^\nu]-2A_\nu\partial_\mu\partial^\mu A^\nu-2\partial_\mu[A_\nu\partial^\nu A^\mu]+2A_\nu\partial_\mu\partial^\nu A^\mu)\\ &=\frac{1}{2}A_\mu\square A^\mu-\frac{1}{2}A_\mu\partial^\mu\partial_\nu A^\nu+\frac{1}{2}\partial_\mu(A_\nu\partial^\nu A^\mu)-\frac{1}{2}\partial_\mu(A_\nu\partial^\mu A^\nu), \end{align} having applied some relabelling in the second term of the final expression. The first two terms in this final expression are the first two terms in the Lagrangian, but then I'm stuck with the final two terms. Could someone explain to me what I'm missing here? Also, the equation of motion for the Proca Lagrangian are \begin{align} (\square+m^2)A_\mu=0\\ \partial_\mu A^\mu=0 \end{align} Substituting this in the first Lagrangian would make it vanish. How does that make sense?
This somewhat sloppy statement means that the two Lagrangian expressions are the same up to a total derivative. Such a total derivative does not contribute to the action $A=\int d^4x \cal L$, for suitable conditions at the edge of the integration domain, and therefore is considered to be without physical consequence.
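A worked version of that statement, using the two leftover terms from the question's own computation: they combine into a four-divergence, $$ \frac{1}{2}\partial_\mu\left(A_\nu\partial^\nu A^\mu\right)-\frac{1}{2}\partial_\mu\left(A_\nu\partial^\mu A^\nu\right)=\partial_\mu K^\mu, \qquad K^\mu=\frac{1}{2}\left(A_\nu\partial^\nu A^\mu-A_\nu\partial^\mu A^\nu\right), $$ and by the divergence theorem $$ \int d^4x\,\partial_\mu K^\mu=\oint_{\partial V} dS_\mu\,K^\mu\rightarrow 0 $$ provided the fields fall off fast enough at the boundary of the integration domain. Hence the two Lagrangians give the same action and the same equations of motion.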
{ "language": "en", "url": "https://physics.stackexchange.com/questions/548314", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Mathematically prove that a round wheel rolls faster than a square wheel Let's say I have these equal size objects (for now thinking in 2D) on a flat surface. At the center of those objects I add equal positive angular torque (just enough to make the square tire move forward). Of course the round tire will move forward faster and even accelerate (I guess). But how can I mathematically prove/measure how much better the round tire will perform? This is for my advanced simulator I'm working on, and I don't want to just hardcode that round rolls better, square worse, etc. I know the answer could be very complex, but I'm all yours.
Circular objects are not the fastest. Any other smooth convex shape can roll faster than a circle. As a random example, this shape (picture found on wikimedia) can roll faster than a circle: Start it in the orientation shown. This is the orientation where its center of mass is highest. Then it will generally be rolling faster than the circle due to having converted some of its potential energy to kinetic energy. Only at those instants where its center of mass has returned to the original height will it be going as slowly as the circle. Even your example of a square will go faster than the circle, if you replace the flat sides with slightly bulging sides and round the corners slightly, and rotate it 45° so it starts out "standing on a corner".
{ "language": "en", "url": "https://physics.stackexchange.com/questions/548519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "64", "answer_count": 12, "answer_id": 8 }
What is the range of Pauli's exclusion principle? In many introductions to Pauli's exclusion principle, it is only said that two identical fermions cannot be in the same quantum state, but there seems to be no explanation of the range over which this applies to those two fermions. What is the scope of application of the exclusion principle? Can it be all electrons in an atom, or can it be the electrons in a whole conductor, or can it be a larger range?
The most common way to visualize the range of the exclusion principle comes to us from the study of ultra-dense objects like white dwarf stars and neutron stars. In a white dwarf, gravity squeezes the matter in it so hard that the wave functions of the electrons in it begin to overlap- and that's where the exclusion principle kicks in, and fights back against gravity to support the white dwarf and prevent it from being squeezed down more. This effect is called degeneracy pressure and a complete description of it would be the length of several chapters in an astrophysics text. Degeneracy pressure only kicks in when the atoms are being squeezed together so hard that most of the empty space within the atoms has been compressed away. In effect, this means that the distance range over which degeneracy pressure becomes important is far smaller than the dimensions of a typical atom in its unsqueezed state.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/548642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 6, "answer_id": 2 }
Do things have colors because their electrons are getting excited when photons hit them? Atomic electron transitions can be caused by absorbing a photon with a certain wavelength. An electron jumps to a higher energy level, then it falls back and a photon is emitted. The perceived color of the photon depends on the energy absorbed by the electron. Could we say that electrons in the atoms of different objects are excited when white light hits them, and they release photons which in turn cause the object to have a color?
Could we say that electrons in the atoms of different objects are excited when white light hits them, and they release photons which in turn causes the object have a color? What you describe there is known as fluorescence but that's not what happens when white light hits a coloured object, at least not in most cases. Most objects are made up of molecules. These molecules are in turn made up of atoms, bound together by so-called molecular orbitals (MO, aka bonds). When a photon of the right wavelength (and thus right energy) hits an electron in an MO the photon is absorbed and the electron is moved to a state of higher energy. The white light, minus some of the absorbed photons, causes the reflected (or transmitted, in the case of transparent objects) light now to appear coloured. So the phenomenon is caused by VIS photon absorption, not fluorescence.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/548900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How to model a quantum circuit Let's say we have a system of 2 qubits, which are entangled in an unknown Bell basis configuration. Since the qubits are in a Bell configuration, each state is orthogonal to every other state, and thus must be distinguishable from each other. My understanding is that there are 4 measurement operators (see below) that can be simultaneously applied to a single qubit (because they commute). And this is why we can unambiguously identify which Bell state the qubits are in, even if Bob and Alice measure their respective qubits with the 4 measurement operators, individually and concurrently with each other. Question How does one come up with a quantum circuit for something like this? How do we go about mapping an arbitrary measurement to one of the well known quantum gates? On a related note, let's say I have an arbitrary unitary matrix. How does one map that to a quantum gate? Bell configuration: * *$\vert{T_1}\rangle = \frac{1}{\sqrt{2}} (\vert{10}\rangle - \vert{01}\rangle)$ *$\vert{T_2}\rangle = \frac{1}{\sqrt{2}} (\vert{10}\rangle + \vert{01}\rangle)$ *$\vert{T_3}\rangle = \frac{1}{\sqrt{2}} (\vert{00}\rangle + \vert{11}\rangle)$ *$\vert{T_4}\rangle = \frac{1}{\sqrt{2}} (\vert{00}\rangle - \vert{11}\rangle)$ Distinguishability: * *$\langle T_i \vert T_j \rangle = \delta_{i,j}$ Measurement operators: * *$M_{T_i} = \vert{T_i}\rangle\langle{T_i}\vert$ *Commutation: $[M_{T_i}, M_{T_j}] = 0$
On a related note, let's say I have an arbitrary unitary matrix. How does one map that to a quantum gate? There are several quantum gates you can use to construct other, more complex gates. See the list of gates on Wikipedia. A technique for decomposing an arbitrary unitary gate is described in the article Elementary gates for quantum computation. Moreover, it is possible to approximate any quantum gate by a proper combination of CNOT, Hadamard, the phase gate (also called the $S$ gate) and the $\pi/8$ gate (also called the $T$ gate).
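For the first part of the question (distinguishing the four Bell states), a standard approach is to disentangle them with a CNOT followed by a Hadamard on the first qubit and then measure both qubits in the computational basis. A minimal NumPy sketch, not taken from the original answer (state labels match the question's $|T_i\rangle$, and outputs are defined up to a global sign):

```python
import numpy as np

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
# CNOT with qubit 1 as control, qubit 2 as target; basis order |00>,|01>,|10>,|11>
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

bell = {
    "T1": np.array([0, -1, 1, 0]) / np.sqrt(2),   # (|10> - |01>)/sqrt(2)
    "T2": np.array([0, 1, 1, 0]) / np.sqrt(2),    # (|10> + |01>)/sqrt(2)
    "T3": np.array([1, 0, 0, 1]) / np.sqrt(2),    # (|00> + |11>)/sqrt(2)
    "T4": np.array([1, 0, 0, -1]) / np.sqrt(2),   # (|00> - |11>)/sqrt(2)
}

U = np.kron(H, I) @ CNOT                          # apply CNOT first, then H on qubit 1
for name, psi in bell.items():
    print(name, np.round(U @ psi, 3))             # each Bell state -> one basis state
```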
{ "language": "en", "url": "https://physics.stackexchange.com/questions/548963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is 1 Joule the work done to lift ~100 g through a distance of 1 m? I am seeing many videos saying that 1 Joule is the work done to lift ~100 g through a distance of 1 m (like this one https://youtu.be/BYpZSdSEk4A?t=348). The idea is that 100 g has a gravitational force downward of 1 Newton. So lifting it means applying 1 Newton over 1 m, and therefore is 1 Joule. But if I apply 1 Newton upward, and the gravitational force is 1 Newton downward, the object shouldn't move. So I believe you need to apply a force greater than 1 Newton in order to lift it. Or am I missing something? Would it be more accurate to say that 1 Joule is the work done by gravity to make 100 g fall by 1 meter?
Joule is the work done by a force of 1 N (1 Newton) when moving the object by 1 m (1 meter). It is one of the SI units; that is, it is defined in terms of the basic units of kilogram, meter and second. * *Given the free fall acceleration of $9.8m/s^2\approx 10m/s^2$, the gravity force acting on an object of mass 100g=0.1kg is about 1N, which means that (in terms of the magnitude of the force) the quoted statement is approximately correct. *If the net force acting on an object is zero, it will remain at rest or move with a constant speed (assuming that we are in an inertial reference frame). Thus, if we apply a force that balances the gravitational force, we can move the stone with constant speed. Both the applied force and the gravity force do work of approximately 1 J in magnitude, but these works are of different sign.
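As a one-line check of the "approximately correct" statement above:

```python
m, g, h = 0.1, 9.8, 1.0        # 100 g lifted through 1 m
print(m * g * h)               # 0.98 J, i.e. roughly 1 J
```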
{ "language": "en", "url": "https://physics.stackexchange.com/questions/549135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Gravity at light speed My question is in regard to light. Since gravity warps spacetime, and we have discovered gravitational waves: would the light at the front of the wave be traveling faster than light speed, essentially like a surfboard on a wave? Also, does gravity behave differently at light speed? Would faster-than-light speeds nullify gravity?
Gravitational waves always move at the speed of light, as you say. Light also moves at the speed of light. As for the question of whether light moves faster than the speed of light when riding a gravitational wave: the answer is no, because light can never move faster than the speed of light. The maximum speed through spacetime is the speed of light, $c$. In the popular "speed through spacetime" picture, ordinary objects move through time at the speed of light; sometimes this is split between motion through space and motion through time, but normal matter can never reach the speed of light through space. Light, however, does move at the speed of light, and since it is moving at this speed it experiences no time. Since light will not move any faster, it can only lose or gain energy to or from the gravitational wave; it cannot speed up. The gravitational wave will not speed up either, because it too is already moving at the fastest speed possible, the speed of light.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/549270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
How do gravitons and photons interact? First of all, I am a noob in physics (I'm a computer scientist) and started reading Hawking's "A brief history of time". In Chapter 6 he says that "electromagnetic force [...] interacts with electrically charged particles like electrons and quarks, but not with uncharged particles such as gravitons." My question now: how come extremely massive objects are able to bend light (e.g. we are able to see distant stars that are behind the sun)? I mean, how can gravitation (actually gravitons) affect photons if gravitons are not charged? I know that there are some questions here that go in the same direction, but as I'm a noob in physics, I don't quite get the answers. I'd appreciate it if someone had a layman's explanation for this that doesn't necessarily cover all the different aspects (I might pose some follow-up questions) but explains the essence. Thanks to y'all!
Gravity couples to energy, not just mass as in Newtonian theory (really it couples to energy density, momentum, and stress). Since photons have energy, they feel gravity. As a classical phenomenon, lensing is generally thought about as light interacting with the curvature of space rather than gravitons. A physical process involving the interaction of gravitons with photons would be the electromagnetic scattering of gravitational waves, as hypothetically happened in the early universe when primordial gravitational waves scattered off the cosmic microwave background.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/549411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Why does the ideal gas law exactly match the van't Hoff law for osmotic pressure? The van't Hoff law for osmotic pressure $\Pi$ is $$\Pi V=nRT$$ which looks similar to the ideal gas law $$PV = nRT.$$ Why is this? Also, in biology textbooks, the van't Hoff law is usually instead written as $$\Pi=CRT =\frac{NC_m RT}M$$ where $C_m$ is the mass concentration, $N$ the number of ions, and $R$ the ideal gas constant. Why?
The law $PV = n RT$ gives the pressure $P$ of $n$ moles of ideal gas in volume $V$. Meanwhile, the law $\Pi V = n R T$ describes the osmotic pressure $\Pi$ due to $n$ moles of solute in volume $V$. These are qualitatively very different situations, but there's a simple fundamental reason that they end up looking the same. Both of these laws are derived under the idealized assumption that the ideal gas/solute molecules don't interact with each other at all. So the expressions for the entropy of the ideal gas/solute are the same, and since the pressure of a system can be derived from the entropy, both situations yield the same pressure. The reason that you see $\Pi V = n RT$ expressed in such different units in biology textbooks is simply because they're using the units that are most convenient for them.
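As a small illustration of why the two laws look the same, the osmotic pressure of a dilute non-dissociating solute equals the pressure an ideal gas would have at the same molar concentration and temperature (the numbers below are illustrative assumptions):

```python
R, T = 8.314, 298.0       # J/(mol K), K
C = 100.0                 # mol/m^3, i.e. 0.1 mol/L of a non-dissociating solute

Pi = C * R * T            # van't Hoff osmotic pressure, Pa
P_gas = C * R * T         # ideal-gas pressure at the same molar concentration
print(Pi, P_gas)          # both ~2.48e5 Pa (~2.4 atm)
```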
{ "language": "en", "url": "https://physics.stackexchange.com/questions/549522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 2, "answer_id": 1 }
Biomechanics statics of an push-up So I am really struggling with a statics problem about calculating forces and torques of an exercise. So I need to calculate the forces acting on the feet and hands, as well as the torques, for 2 static push-up positions. I've linked the 2 static push-up positions at the bottom. How do I go about this? BTW: I know I have not given any lengths, angles, and so on, because I'm more interested in the way you would write up the equations step by step rather than a final answer. My thoughts: * *I know there are 3 forces [Hand, feet, and weight(at the center of mass)] *My assumption is that there is static equilibrium. *Since it's a static problem there's no acceleration *I know for the start position(second picture) there is an angle theta which plays a factor Thank you in advance, Thomas
Hint: There are two equations for static equilibrium, (1) the sum of the forces equals zero and (2) the sum of the moments (torques) equals zero. For the first equation you only need the persons weight. For the second equation you need the location of the person's center of mass. You will need to apply the second equation twice, once for each diagram, because the horizontal location of the center of mass of the person will be different for the two positions. The angle comes into play for the second diagram. Hope this helps.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/550078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can a very small piece of material be superconducting? The existing theory of superconductivity seems to be based on statistical mechanics. Can an ultrasmall piece of material, like a quantum dot with very few atoms (like a small molecule), be superconducting? For example, can a cube of 3 * 3 * 3 = 27 copper atoms be superconducting? What is the minimum n for a cube of $n*n*n$ copper atoms to be able to be superconducting? Can a few unit cells of a complex high temperature superconducting material be superconducting? If so, then maybe some calculation from first principles can be done on such a piece of material as a molecule to understand the exact mechanisms of high temperature superconductivity. If not, can some first principles calculation on such a small piece of material be done to find some pattern that leads to a possible theory of high temperature superconductivity?
Tunneling of Cooper pairs through metallic islands has been a subject of research for quite some time now. One does not call these quantum dots, since QDs are usually understood in the context of semiconductors, but many ideas are the same - particularly relevant here is the Coulomb blockade, which permits counting electrons/pairs. On the one hand, the operator of the particle number (i.e. the number of the Cooper pairs) does not commute with the superconducting phase $$[\varphi, N] = 1,$$ i.e. the Coulomb blockade destroys the superconductivity. On the other hand, when the tunneling is possible, the number of particles on the island is not fixed, and the superconductivity does occur. But this is because the small volume in question is not really isolated. To summarize * *Once the system is reduced to a countable number of particles (as suggested in the question), the superconductivity is impossible *One can relieve this constraint by coupling the system to the environment, and thus inducing particle fluctuations. *I would like to stress that this answer is complementary to the one by @fra_pero, who focuses more on the long-range order and the boundary effects.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/550244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Why is the Force of Gravitational Attraction between two “Extended” bodies proportional to the product of their masses? Newton’s Law of gravitation states that force of attraction between two point masses is proportional to the product of the masses and inversely proportional to the square of the distance between them. I know that the force of attraction between two spheres turns out to be of the same mathematical form as a consequence of Newton’s law. But I am not able to prove how the force between any two rigid masses is only proportional to the product of their masses (as my teacher says) and the rest depends upon the spatial distribution of the mass. So $F$ is ONLY proportional to $Mmf(r)$ where $f(r)$ maybe be some function based on the specifics of the situation.
It is not true that the gravitational force between extended mass distributions depends only on the product of the total masses. It is true that the time averaged total force integrated over each body is $$\vec F = Gm_1m_2 \frac{\vec r_{12}}{r_{12}^3} ~.$$ However, unless both mass distributions are spherical, the attraction has higher moments. These higher moment forces cause the bodies to be stressed and to nonuniformly rotate or wiggle. Only for certain relative orientations these higher moment forces exactly cancel the internal stresses. An example is the Earth-Moon system. The moon is deformed but it is almost at rest in the corotating frame. It only wiggles a little. Weirder is the rotation of Mercury. It has a slight permanent dipole deformation causing it to rotate in a tidal 3:2 resonance. See https://en.wikipedia.org/wiki/Mercury_(planet)#Spin-orbit_resonance.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/550387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 9, "answer_id": 4 }
Actual meaning of refraction of light The definition of refraction which I found on wikipedia is In physics, refraction is the change in direction of a wave passing from one medium to another or from a gradual change in the medium. But in the below case, there is no change in direction of light. So, is this also refraction?
I really feel like I genuinely understand your question. In essence you are asking "do we count it as refraction when there is no refraction in virtue of the angle of incidence being 90 degrees (or zero degrees, however you look at it). The example you give is an example of something that is vacuously true. It's a bit like a child claiming that he ate all of his vegetables because there were in fact, no vegetables served on his plate this evening. I suppose that you are correct that there is no refraction, but only in virtue of the fact that the angle of incidence is 90 degrees. But in physics, we try to avoid conundrums involving vacuous statements like this. They don't really improve our understanding of the how and why of the physical world. It's a bit like asking "if a tree falls in the forest, but no one is there to hear it, does it make a sound". The question, by its very construction, is designed to generate endless debate rather than move our understanding forward. Your question is a great one - it shows inquisitiveness and curiosity, and shows your interest in pushing the envelope. When I was working on my undergraduate degree in physics I used to ask questions like this all the time.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/550515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Why does graphene break on mechanical exfoliation? We know that graphene is the strongest material on earth; it has very high tensile strength. The inter-planar bonds are van der Waals forces, but the in-plane covalent C-C bonds are what give it its strength. If the covalent bond is so strong, then, when we mechanically exfoliate graphene, why does it break into several pieces? It is also said that when we write on paper with a graphite pencil, many graphene pieces are formed along the trail of the graphite line. Why? If graphene were that strong, why would it break?
Graphene is strong per bond, but in the end it's still a 2D material. The strength of most objects scales with cross-sectional area (proportional to the number of bonds across it), but a graphene sheet's cross-section can only scale linearly with its size. Also, in the case of a pencil, that's not a pure crystal of graphite; it has defects. Most graphite doesn't have contiguous layers of graphene running through it.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/550769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is electric flux through a cube the same as electric flux through a spherical shell? If a point charge $q$ is placed inside a cube (at the center), the electric flux comes out to be $q/\varepsilon_0$, which is same as that if the charge $q$ was placed at the center of a spherical shell. The area vector for each infinitesimal area of the shell is parallel to the electric field vector, arising from the point charge, which makes the cosine of the dot product unity, which is understandable. But for the cube, the electric field vector is parallel to the area vector (of one face) at one point only, i.e., as we move away from centre of the face, the angle between area vector and electric field vector changes, i.e., they are no more parallel, still the flux remains the same? To be precise, I guess, I am having some doubt about the angles between the electric field vector and the area vector for the cube.
A good way to visualize the problem is to imagine first that the charge is enclosed by a sphere. Draw a small area on the surface of the sphere, ane draw lines from the charge through the small area. Those lines are the flux through the area. Now imagine a larger sphere concentric with the first one. The continued lines trace out an area of the same shape on the second sphere, and the same lines pass through that second area. Now deform the second sphere into a cube, but leave the lines alone. Imagine the area the lines will trace out on the cube. Even though the new area is tilted relative to the corresponding area on the sphere, and the new area is distorted, all the same lines pass through it. In other words, the flux through the (tilted & distorted) area is the same as it was through the corresponding area on the sphere. The mathematical operation is an expression of this fact.
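This can also be checked by brute force: numerically integrating the flux of a central point charge through one face of a cube gives exactly one sixth of $q/\varepsilon_0$. A sketch with assumed numbers (1 nC charge, cube of half-side 1 m):

```python
import numpy as np
from scipy import integrate

eps0 = 8.854e-12
q, a = 1.0e-9, 1.0                 # charge at the centre; top face is the plane z = a

def dflux(y, x):                   # z-component of E on the face z = a
    r2 = x * x + y * y + a * a
    return q / (4 * np.pi * eps0) * a / r2**1.5

flux_one_face, _ = integrate.dblquad(dflux, -a, a, -a, a)
print(flux_one_face, q / (6 * eps0))   # they agree; six faces give q/eps0 in total
```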
{ "language": "en", "url": "https://physics.stackexchange.com/questions/550820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 7, "answer_id": 6 }
Can we have pressure with zero net force on a 2d plane? From Wikipedia: Pressure is the force applied perpendicular to the surface of an object per unit area over which that force is distributed. Suppose we apply 2 equal and opposite forces on a 2d plane with area 'A' perpendicular to it ($F_a$ & $F_b = F$) Will we say that pressure is $0$ or $\frac{2F}{A}$ Suppose we have static fluid in a container, then force applied by water on top of a cross-section (because of its weight) is equal the force being applied from the downside as it is a static fluid Now as in the example of the plane if the answer is that pressure is 0 then the fluid at some depth also has equal and opposite forces then pressure even at some depth should be zero And if it is $\frac{2F}{A}$ then the pressure should be multiplied by 2 which is not done in the derivation of "variation of pressure vertically with depth"
If you insert the plane into a container of gas at pressure P, the plane experiences equal pressure on both sides. That pressure is simply P. You are mixing up force and pressure. If the pressure acts on only one side of the plane, the net force on the plane is $P \times A$ (force per unit area, times area). Since in this case the pressure acts on both sides, and the area normal points in opposite directions on the two sides, the net force on the plane is $P \times A - P \times A = 0$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/550970", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Selection rules for spin What do we mean by the selection rule $\Delta S=0$? Can you give me an example for the hydrogen atom? For example, if I want to go from $1s$ to $2p$, how can I calculate $S$ for $1s$ or for $2p$?
$\newcommand\bs\boldsymbol$ We note that the spin $S$ of an atom is the total spin angular momentum of all the electrons (and nuclei) in the atom, as given by $$\bs{S} = \sum_i \bs{S}^{(i)} .$$ Most often, selection rules help us specify what happens to the atom upon interaction with a photon, which has spin angular momentum of norm $\sqrt{2}\hbar$ and a projection of $\pm\hbar$. (The orbital angular momentum of a photon is beyond the scope needed for atomic physics here.) When the atom absorbs/emits a photon, the orbital angular momentum $\bs{L}$ changes, but the spin angular momentum $\bs{S}$ does not change, and total angular momentum is conserved. This is what we mean by $\Delta S = 0, \Delta L = \pm 1$. For hydrogen, with a single electron, the total spin is $S = 1/2$ in both the $1s$ and the $2p$ state, so $\Delta S = 0$ is automatically satisfied for the $1s \to 2p$ transition.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/551123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why is there an electric potential drop in electric circuits? I know a battery creates a potential difference, making an electric field that exerts a force on the electrons, who start moving. But why is there a potential drop after a resistor for example? How does it go in hand with electric potential being a scalar assigned to a point in space? How can a resistor change the potential of all the points in the conductor succeeding him? Or am I looking at it in a wrong way? I think I'm considering electric potential from an electrostatics point of view and it gets me nowhere.
Think of it in terms of impedance to a wave. Say there is a rope in air with some portion in the middle of it being in water. Now if we move one end of the rope continuously, the wave travels until it reaches the water-air interface. Here some of the wave reflects and other gets transmitted, constrained by energy conservation. And similarly at the second interface. Under equilibrium condition, something called as the steady state is achieved. Here the flow rate is constant. This means that the reflections are coming in constantly to the constant wave generation done at the left end. The energy flow in the system is constant. Here it’s easy to see that in the presence of the water region in between, the energy transferred to the other end has decreased. The flow of energy from left to right of the whole system has decreased due to the presence of impedance at some portion in the middle. This is exactly what happens in the case of resistance present in the circuit. It provides an impedance to the flow (current) and in steady state, the reflections from the interface cause the current in the whole circuit to decrease.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/551242", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How does focal length affect magnification? My answer would be that the longer the focal length, the higher the magnification will be, resulting in a larger image. But in a ray diagram, how does it look? I have been searching for a comparison of ray diagrams between short and long focal lengths but didn't manage to find anything. My high school textbook didn't explain much about focal length either. I would also like to know how different focal lengths affect the image in a telescope. But Google shows only the simple ray diagrams. Thanks in advance.
NOTE: In my sketch the lines are not perfectly straight and the mirror sizes are not to scale; I have used concave mirrors only. In the image the position of the object is fixed (30 cm) and two concave mirrors of focal length 20 and 10 cm are taken. In the image it can be noticed that as the focal length decreased, the magnification also decreased. But that does not always happen. Magnification is related to the focal length of a mirror by the formula m = -(v/u), which can be reduced to m = f/(f-u) (using the sign convention). For lenses it is m = f/(f+u). Thanks for asking.
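As a quick numerical cross-check of that mirror formula (a sketch only; the object distance and focal lengths below are the assumed values from the example, written in a sign convention where distances measured against the incident light are negative):

```python
def magnification(f, u):
    """Mirror magnification m = f / (f - u)."""
    return f / (f - u)

u = -30.0                    # object distance in cm (assumed, as in the example)
for f in (-20.0, -10.0):     # the two concave mirrors from the sketch
    print(f"f = {f:+.0f} cm  ->  m = {magnification(f, u):+.2f}")
# f = -20 cm  ->  m = -2.00   (inverted, magnified)
# f = -10 cm  ->  m = -0.50   (inverted, diminished)
```

This reproduces the claim above: for this particular object distance, the shorter focal length gives the smaller magnification.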
{ "language": "en", "url": "https://physics.stackexchange.com/questions/551638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is meant with overdamped motion? I'm learning about Brownian motion. I use the approximation of overdamped motion. I read that the average acceleration is $0$ then, but I don't really understand the concept. So, what does overdamped exactly mean, especially in the context of Brownian motion, and why is the acceleration $0$? Thanks!
Overdamped means that viscosity forces are much more "relevant" than inertia. When this is the case, essentially any movement will very quickly reach terminal velocity, so the acceleration will be $0$. As a toy model, imagine trying to push an object through a viscous medium. If we apply some constant force $F$, then Newton's second law gives us the following differential equation: $$m\ddot x=F-b\dot x$$ It is easy to see that terminal velocity is $v_T=F/b$, and the relevant time scale to relax to terminal velocity here is $\tau=m/b$. If the system is overdamped, then $b$ is very large, which makes $\tau$ very small. In other words, the more damping you have, the faster you reach terminal velocity where the acceleration is $0$. In this regime you can create a "trick Newton's second law" where the velocity is proportional to the applied force: $$F=b\dot x$$ Don't show introductory physics students this ;)
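Here is a minimal numerical sketch of that toy model (the values of $m$, $b$ and $F$ are made up purely for illustration), using the exact solution of $m\dot v = F - b v$ to show how quickly an overdamped system reaches terminal velocity:

```python
import numpy as np

def velocity(t, m, b, F, v0=0.0):
    """Exact solution of m dv/dt = F - b v:  v(t) = v_T + (v0 - v_T) exp(-t/tau)."""
    v_T = F / b            # terminal velocity
    tau = m / b            # relaxation time
    return v_T + (v0 - v_T) * np.exp(-t / tau)

m, F = 1.0, 2.0                       # assumed mass and constant applied force
t = np.linspace(0.0, 1.0, 5)
for b in (1.0, 100.0):                # weakly vs strongly damped
    print(f"b = {b:6.1f}, tau = {m / b:.3g} s, v(t)/v_T =",
          np.round(velocity(t, m, b, F) / (F / b), 4))
```

For the large-$b$ case the velocity is already at its terminal value on any time scale you care about, which is exactly the "acceleration is $0$" regime described above.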
{ "language": "en", "url": "https://physics.stackexchange.com/questions/551769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Equation of motion for a particle under a potential in special relativity The equation of motion of a particle in Newtonian mechanics in 3D under an arbitrary potential $U$ is written as $$m\frac{\mathrm{d}^2 \mathbf{r}}{\mathrm{d} t^2}=-\nabla U.$$ Now, my question is, how can this be generalised to special relativity? I know that the naive answer, $$m\frac{\mathrm{d}^2 x^{\mu}}{\mathrm{d} \tau^2}=-\partial^{\mu} \Psi,$$ where $\Psi$ is some relativistic generalisation of potential energy, cannot work, since every four-force $K^{\nu}$ has to satisfy $K^{\nu} \dot{x}_{\nu}=0$, where the dot indicates the derivative with respect to proper time; this shows that the above naive generalisation cannot work unless $\Psi$ is a constant, which makes it physically useless. How can one resolve this caveat, in order to obtain a physically useful generalisation that works in special relativity?
As you point out, if $K$ is the force 1-form and $v$ the velocity 4-vector, $K(v) = 0$. This means that we cannot hope to find a scalar field $\Psi$ on space-time that gives $K$ by exterior derivative, that is, $K=\text d\Psi=(\text d_0\Psi, \text d_{(3)}\Psi)$. To see this, assume that the spatial part of $K$ is $\text d_{(3)}U$. Then the temporal part must be of the form $$K_0 = \frac{\mathbf v\cdot\nabla U}{\gamma c}$$ which is not the derivative w.r.t. $t$ of $U$ in general. One can already see this in electrodynamics, where the force 1-form is proportional to the contraction between Faraday's 2-form with the velocity 4-vector, viz. $K= \iota_vF$. Indeed, given that $F$ is a 2-form, $K(v) = (\iota_v F)(v) = 0$ because of the skew-symmetry.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/551866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
What does CERN do with its electrons? So to get a proton beam for the LHC, CERN probably has to make a plasma and siphon off the moving protons with a magnet. Are the electrons stored somewhere? How? I don’t mean to sound stupid but when they turn off the LHC, all those protons are going to be looking for their electrons. And that’s going to make a really big spark.
You're right that CERN gets its protons by ionizing matter and collecting them. But the number of electrons & protons CERN deals with is far smaller than you might think. They get about 600 million collisions a second at CERN. So call it 1.2 billion protons used per second. $1.2 \times 10^9$. That'd be a large number in dollars, but it's not much in Coulombs. For comparison, a wire carrying a 30 amp current has about $2 \times 10^{19}$ electrons flowing through it every second. That's a factor of 10 billion. So there's not really any issue disposing of CERN's unneeded electrons. You probably make a bigger spark when you rub your feet on the carpet. If memory serves, the LHC has been running for its whole history off of a single canister of hydrogen gas.
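As a back-of-the-envelope check of the scale (using the rough numbers quoted above, which are themselves approximate), the total charge in the protons CERN uses per second is tiny:

```python
e = 1.602e-19                  # elementary charge in coulombs
protons_per_second = 1.2e9     # rough figure quoted above
charge_per_second = protons_per_second * e

print(f"charge in CERN's protons: {charge_per_second:.1e} C per second")  # ~2e-10 C
print("charge through a 30 A wire in one second: 30 C")
```

A fraction of a nanocoulomb per second is far below anything that could build up a dangerous imbalance.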
{ "language": "en", "url": "https://physics.stackexchange.com/questions/551992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35", "answer_count": 5, "answer_id": 0 }
Is three dimensional supergravity dynamical? So it is well known that standard $D = 3$ Einstein gravity is non-dynamical in the sense that the graviton has no on-shell degrees of freedom (d.o.f $= D(D-3)/2$ and the theory is topological). However, I have also seen that there are various theories of supergravity with up to $\mathcal{N} = 16$ supersymmetries in $D = 3$. Are any of these theories dynamical (with a massless graviton), and if so, how?
No; which is to say, the 3D SUGRAS most closely analogous to pure Einstein gravity in three dimensions are also topological in the sense you mention. In fact the original reference that showed 3D gravity can be formulated as a Chern-Simons theory actually treated 3D supergravity! (Achucarro & Townsend https://inspirehep.net/literature/21208)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/552496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What happens to an inductor if the stored energy does not find a path to discharge? Suppose an inductor is connected to a source and then the source is disconnected. The inductor will have energy stored in the form of magnetic field. But there is no way/path to ground to discharge this energy? What will happen to the stored energy, current and voltage of the inductor in this case?
But there is no way/path to discharge this energy? What will happen to the stored energy, current and voltage of the inductor in this case? In that case it makes its own circuit with its own path to ground. Often, that is through dielectric breakdown at the switch itself, but the details are highly unpredictable and depend very sharply on environmental conditions. So the breakdown can occur elsewhere. An inductor has a voltage that is proportional to the rate of change in its current. An arbitrarily high rate of change of current produces an arbitrarily high voltage. That high voltage can overcome insulation and create a dangerous path to ground where there should not be one. Circuit breakers that are designed to operate with high currents and inductive loads need to be very carefully designed.
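A rough illustration of why the voltage can climb high enough to break down insulation (the inductance, current and interruption times below are invented for the example, not taken from any real device), applying $V = L\,\mathrm{d}I/\mathrm{d}t$:

```python
L = 10e-3      # assumed 10 mH inductance
I0 = 2.0       # assumed 2 A flowing before the circuit is opened

for dt in (1e-3, 1e-6, 1e-9):          # how quickly the current is interrupted
    V = L * I0 / dt                    # magnitude of the induced voltage
    print(f"current stopped in {dt:.0e} s  ->  |V| ~ {V:.0e} V")
```

The faster the current is forced to zero, the higher the voltage, until something (an arc at the switch, breakdown elsewhere) provides a path and limits it.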
{ "language": "en", "url": "https://physics.stackexchange.com/questions/552611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 10, "answer_id": 6 }
Notation for vector time derivatives So I am self-studying mechanics using Marion and, as many books, it uses the notation of the dot over the function to express a time derivative, as in $$x = x(t)$$ $$\dot{x}= \frac{dx}{dt}(t) $$ The book also uses the bold notation for vectors, like for example the position vector is: $$\textbf{r} = \textbf{r}(t)$$ Putting these together, we get that using this notation the velocity vector is: $$ \textbf{v} (t) = \dot{\textbf{r}} (t)$$ My question is: when it comes to handwriting, which is the most common/ standard notation? Is it something like: $\dot{\vec{r}}$? Or do you stop specifying it as a vector with the arrow on top?
Consistent notation is needed for clear communication, but there is no one "correct" notation for anything. If you can remember what things mean in your own work, you can do whatever you want. If you are writing something that others must read too, like a graded homework assignment, it is important to define any notation. That being said, if you clearly define your notation, you can do whatever seems best to you.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/552973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculation of effective mass from bandstructure The effective mass is defined as $$ \frac{1}{m_{ij}^*} = \frac{1}{\hbar^2} \frac{\partial^2\epsilon}{\partial k_i \partial k_j} $$ where $m_{ij}^*$ is the effective mass, $\hbar$ is the reduced Planck constant, $\epsilon$ is the energy and $k_i,\ k_j$ are components of the wavevector in reciprocal space. Now let us consider that we have the values of energy corresponding to the points on a line connecting the center of the Brillouin zone and a point in the reciprocal space (say $k_x = 0.5$, $k_y = 0.5$, $k_z = 0.5$). Now I would like to know if it is possible to get the value of the effective mass from this data. If yes, how can I do that?
From your data, it seems like you can obtain the longitudinal effective mass in that direction. By the way, that line in the reciprocal space, the [111] direction, is often called the $\Lambda$ axis. I suppose that if you represent the energy as a function of $|\vec{k}|$ in that direction there will be a minimum or a maximum. You could try then to fit the data around the minimum or maximum to a parabola $\varepsilon(k)=Ak^2+Bk+C$ and from there $$\frac{1}{m^*_l}=\frac{1}{\hbar^2}\frac{d^2\varepsilon}{dk^2}=\frac{2A}{\hbar^2}$$
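A minimal numerical recipe for the fit described above (a sketch: it assumes you already have arrays of $|\vec k|$ and $\varepsilon$ sampled along the $\Lambda$ line close to the extremum, in SI units; the synthetic data here are only a placeholder):

```python
import numpy as np

hbar = 1.054571817e-34   # J s

def longitudinal_effective_mass(k, eps):
    """Fit eps(k) ~ A k^2 + B k + C near the extremum and return m* = hbar^2 / (2A)."""
    A, B, C = np.polyfit(k, eps, 2)
    return hbar**2 / (2.0 * A)

# placeholder data: a parabolic band whose effective mass equals the free-electron mass
m_e = 9.109e-31
k = np.linspace(-1e9, 1e9, 51)               # 1/m, a small window around the extremum
eps = hbar**2 * k**2 / (2.0 * m_e)           # J

print(longitudinal_effective_mass(k, eps) / m_e)   # ~1.0
```

For real band-structure data you would restrict the fit to points close enough to the extremum that the parabolic approximation actually holds.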
{ "language": "en", "url": "https://physics.stackexchange.com/questions/553254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Chemical potential in canonical partition function I'm a bit confused on the interpretation of the chemical potential in a canonical ensemble (a system which can only exchange energy with a reservoir but not particles). Here is what I think I know: As far as I understand, when one deals with a system that can exchange energy and particles with a reservoir one is dealing with a grand canonical ensemble. At the end of the day this means that we are either working with Boltzmann factors, $e^{-\epsilon \beta}$, or with Gibbs factors, $e^{-(\epsilon -\mu)\beta}$, depending on whether or not the system exchanges particles with the reservoir. A way to see the Boltzmann factor is as the special case $\mu=0$ of the Gibbs factor. One way of deriving these distributions (the one I've seen) is to start with something like this: $$ \frac{P(s_2)}{P(s_1)} = \frac{\Omega_R(s_2)}{\Omega_R(s_1)}= \frac{e^{S_R(s_2)/k_b}}{e^{S_R(s_1)/k_b}}= e^{(S_R(s_2)-S_R(s_1))/k_b} $$ now one invokes the thermodynamic identity $dU=TdS-PdV+\mu dN$, solves for $dS$, discards the constant terms like $dV=0$ or $dN=0$ for the canonical case and substitutes above. For the grand canonical case $dN\neq 0$ and one obtains the Gibbs factor. Here is my question: I've seen that one can define the chemical potential through the canonical free energy $F=-k_bT \log (Z)$ by $$ \mu = \left( \frac{\partial F}{\partial N}\right)_{T,V}. $$ What is the meaning of this $\mu$? We were dealing with a system that couldn't exchange particles with the reservoir and yet it has a non-zero chemical potential like a grand canonical ensemble!
By a "fixed" value of $N$ here we mean that for the system at equilibrium, the value of $N$ is not allowed to fluctuate. So yes, $N$ is fixed for a canonical ensemble. If you allow particle number to change to new value so that the system finds a new equilibrium state with a new fixed value of $N$, the system has a new and different canonical ensemble. But you can still ask: How did the free energy of the system change when we changed the particle number? The answer to this question is equal to the chemical potential of the system.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/553425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Why is surface tension measured in units of milliNewtons per meter? Rather than square meter(s)? Why is liquid surface tension written in units of mN/m, or milliNewtons per meter? The related concept of surface energy for solids uses units of milliJoules per square meter.
Both are equivalent, since $\text{mJ}=\text{mN}\cdot \text{m}$ where $\text{mJ}$ is millijoules, $\text{mN}$ is millinewtons and $\text{m}$ is metres. Thus $$\text{mJ}\cdot \text{m}^{-2}\equiv \text{mN}\cdot \text{m}^{-1}$$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/553657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Is the zero-point energy of helium stronger than other liquids to disfavour freezing? Under normal atmospheric pressures, liquid helium does not freeze even when cooled very close to absolute zero. This is attributed to the uncertainty principle or due to zero-point energy. But the quantum uncertainty or zero-point energy is not an exclusive feature of liquid helium only. Then, why should it stop the freezing of helium but not that of other liquids? If it is strong in helium, then why?
Helium, at pressures below 25 atm and absolute zero, does not freeze because its zero-point energy is high enough to stop it from going into a solid phase, and hence it is stable as a liquid. Other gases in general do not have such high zero-point energies, and thus transition from liquid to solid as the temperature plunges. As to why helium has a high zero-point energy, the analysis is very complicated, but in 1935, F. London did a (sophisticated back-of-the-envelope) calculation that explained the phenomenon, and in 1950, C.L. Pekeris increased the accuracy of the prediction by one order of magnitude. London had essentially summed this up as: One can roughly take account of the decisive contribution of the zero point energy which is due to the quantization of the mean free path. Closest packed structure has been found to be stable only under pressure and this seems to explain why solid helium can only exist, even at the absolute zero, under pressure. If no external pressure is applied, a configuration with the coordination number four has proved to have considerably lower energy. It seems that this configuration gives a rough model of the liquid modification of helium which is stable at the absolute zero.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/554096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Examples of path integral where path of extremal action does not contribute the most? I have learnt that by doing a saddle point approximation in the path integral formulation of quantum mechanics, the classical action (extremal action where $\delta S=0$) is the one that contributes the most, hence seeing how classical physics arises from quantum physics. The question is: is there any example in quantum physics (especially QFT) where the most contribution does not come from the path of extremal action? I know that the proof using the saddle point approximation seems very general, but I was thinking that there might be some peculiar/interesting terms in the action (such as topological) where the path with $\delta S=0$ do not contribute the most?
In general it seems quite unlikely, unless there is some special symmetry that enforces this. I guess it could also sometimes happen in 0+1 dimensions because here the space of fields is much nicer. But anyway, the closest example that I can think of is $\mathcal N=2$ supersymmetry in four dimensions. Here, due to supersymmetry, the effective action is one-loop exact, and the only corrections are non-perturbative: $$ F=\Psi^2\log\Psi^2+\text{instantons} $$ where the instantons come from classical field configurations with a macroscopic value. These have non-vanishing action $S\sim 1/g^2$, so their contribution is $e^{-1/g^2}$ and thus very small. Anyway, at zero scalar field the classical equations of motion admit non-trivial instanton solutions (cf ref. 1), $$ A\sim \frac{1}{x^2+\rho^2},\qquad \psi\sim\frac{\gamma}{x^2+\rho^2} $$ where $\rho$ is the size of the instanton. For non-zero scalar field, this is no longer a solution to the classical equations of motion, but thanks to supersymmetry one can argue that they still yield the leading contribution to $F$ (essentially, because other contributions vanish!). So this is a situation where field configurations that do not strictly speaking satisfy the classical equations of motion still give the largest contribution to the path integral. The reason is, other contributions are exactly zero due to the large amount of symmetry provided by $\mathcal N=2$. This is a quite non-generic situation, and you would not expect it to happen in more realistic theories. References. * *arXiv:hep-th/9602073.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/554165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 0 }
Lorentz Transformation Proof - Special Relativity This is from A.P. French Special Relativity book, Chapter 3 (page 78) Setup of the proof: $S$ and $S'$ be inertial reference frame. $S'$ move to the right with respect to $S$ at velocity $v$. Let co-ordinates in $S$ be $(x,t)$ and co-ordinates in $S'$ be $(x',t')$ Equation (3-8) in the book, he writes that transformation will be of the form: $x = ax' + bt'$ and by symmetry of the reference frames as implied by relativity principle, $x' = ax - bt$ My question: How does symmetry of reference frame argument lead to the above conclusion? For e.g. why can't the second equation above be of the form $x' = -ax - bt$ or maybe $x' = -ax + bt$. These equations look as symmetric to me (mathematically!) as the one author uses. (I know I'm wrong but want to understand more clearly why am I wrong) Thanks
$b$ has to have the dimension of a velocity to be consistent. When the reference frame $S$ looks at the reference frame $S^\prime$, it sees it moving to the right with velocity $+v$. But if you swap the frames, in the $S^\prime$ frame, if you look at the $S$ frame, you'll see it moving with the same velocity as before but in the opposite direction, so $-v$. There's no reason to let even the $a$ change sign, since swapping reference frames just swaps the sign of the velocity of the other frame as seen in that frame.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/554413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How is it physically possible that the electric field of some charge distributions does not attenuate with the distance? Let's consider for instance an infinite plane sheet of charge: you know that its E-field is vertical and its Absolute value is $\sigma / 2 \epsilon _0$, which is not dependent on the observer position. How is this physically possible? An observer may put himself at an infinite distance from all charges and he will receive the same E-field. It seems strange.
First, by symmetry, you would expect the field to be perpendicular to the plane. All the sideways components cancel. You can calculate the field at a point by adding up the field from all the charges in the plane. We start with a point a distance $z$ from the sheet. We can get the total field by integrating over rings as shown. We will focus on just one ring at angle $\theta$ and angular thickness $\Delta \theta$. We calculate that the field from this ring is $\Delta E$. Now we double the distance to $2z$ and look at another ring using the same angles. This ring is twice as far away, but has twice the circumference and twice the thickness. So the field from it is the same $\Delta E$ as before. If you integrate over all such rings, you can see that the total doesn't change. This shows the result is right, but still leaves the uncomfortable feeling that as you get closer to a collection of charges, the field from each one gets bigger. So the total ought to get bigger. Yet that doesn't happen. Consider our original ring from a distance $z$. Now move in to $z/2$ and look at the same ring. We are closer to it, and the field from each element in it is stronger. But the angle has changed. The fields from elements on opposite sides come closer to canceling each other. The total field from that ring is actually smaller at a closer distance. If we were actually in the plane, the field from that ring would be $0$.
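For completeness, the same conclusion follows from the standard ring integral (a textbook calculation, written here in terms of the ring radius rather than the angle): the vertical field at height $z$ from a ring of radius $r$ and width $dr$ is $dE_z=\frac{\sigma\, 2\pi r\, dr}{4\pi\epsilon_0}\,\frac{z}{(r^2+z^2)^{3/2}}$, so $$E_z=\frac{\sigma z}{2\epsilon_0}\int_0^\infty \frac{r\,dr}{(r^2+z^2)^{3/2}}=\frac{\sigma z}{2\epsilon_0}\left[\frac{-1}{\sqrt{r^2+z^2}}\right]_0^\infty=\frac{\sigma}{2\epsilon_0},$$ with the $z$-dependence cancelling exactly. This is the formal version of the geometric argument above.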
{ "language": "en", "url": "https://physics.stackexchange.com/questions/554594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 2 }
The 1D wave equation with gravity and the catenary I'm in the middle of writing a PDEs assignment and I thought I'd use the wave equation for a horizontal string with gravity. Easy I thought: $u_{tt} = c^2u_{xx} - g$. We solved it without gravity. Then added gravity: I set up the question on what the steady solution looks like, and I got a parabola. But wait, I know that it's a catenary, not a parabola. So... confusion ensues. My mental picture now is that the equation $$u_{tt} = c^2u_{xx} - g$$ is appropriate in the small deflection limit, and the catenary resembles a parabola in that limit. Am I correct on that, and as the deflection gets bigger, what is the (first) correction that comes in?
See Time Independent and Time Dependent Catenary Problem, a short paper that includes diagrams and derivations, and is behind some of what probably_someone posted.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/554698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What means "Standard Sirens" associated to Gravitational Waves observable(s)? We know the Gravitational Waves (GW) provide a new way to observe the Universe. Now, we face a new era for cosmological and astrophysical researches. I know that it is possible to obtain the gravitational redshift from some astrophysical phenomena like merge of black holes and other sources. Furthermore, I understand the meaning of Standard Ruler (@ Baryonic Acustic Oscillations) and Standard Candle (@ Surpernovae etc). However, I do not understand what is the meaning of Standard Sirens for GW observable(s). Thanks!
The meaning is, that by measuring the amplitude and frequency of the gravitational waves received from an inspiralling binary system, and by also measuring the rate of change of that frequency, we can determine both the"chirp mass" and the luminosity distance to the gravitational wave source. In an analogous way, the observed brightness of a standard candle depends on how far away it is. Here, because it is the pitch and the rate of change of pitch that is being measured$^*$, we call these inspiralling binaries "standard sirens". A distance can be estimated directly from the measurements without any appeal to external calibrations or cosmological parameters. This explanation is to first order. There is another complication, which is the amplitude also depends on the inclination of the plane of the binary orbit to the line of sight. However, this can also be estimated/eliminated by measuring the amplitude of each of the two possible gravitational wave polarisations separately. This is best done by having two independent detectors (like LIGO and VIRGO) with different orientations. In addition, if the source redshift is known (e.g. it is located in a galaxy with a known redshift), then cosmological parameters like the Hubble constant can themselves be independently estimated. A good popular account of this can be read here. Abbott et al. (2017) describes how a standard siren can be used to estimate the distance to a merging neutron star binary in an identified galaxy, leading to an estimate of the Hubble constant. $^*$ There is also the fact that the frequency of the binary gravitational wave sources that are detected by ground-based interferometers is actually in the range that would be audible (if they were sound waves).
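To make "pitch and rate of change of pitch" quantitative, the standard leading-order (quadrupole) relations are worth quoting; these are textbook results rather than anything tied to a specific event. The measured gravitational-wave frequency $f$ and its drift $\dot f$ give the chirp mass $$\mathcal{M}=\frac{c^{3}}{G}\left[\frac{5}{96}\,\pi^{-8/3}\,f^{-11/3}\,\dot{f}\right]^{3/5},$$ and the strain amplitude scales roughly as $h\propto \mathcal{M}^{5/3}f^{2/3}/D_L$. So once $\mathcal{M}$ is fixed by $f$ and $\dot f$, the observed amplitude yields the luminosity distance $D_L$ directly, which is what makes the source a "standard" siren.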
{ "language": "en", "url": "https://physics.stackexchange.com/questions/555033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Acceleration as a function of position and time I know if you have an acceleration as a function of $t$, $a(t)$, to find the velocity you simply integrate $a(t)$ with respect to $t$. Moreover, if the acceleration was a function of position, $a(x)$, you use the fact that $a(x) = v(x) \cdot dv/dx$ and solve for $v(x)$. However, what if the function of acceleration was dependent on both $x$ and $t$, $a(x,t)$. How would you solve for a velocity $v(x,t)$ ?
This is a standard mechanics problem. The definition of acceleration is $$a=\frac{\text d^2x}{\text dt^2}$$ So if you know what $a(x,t)$ is, and you have initial conditions like $x(t_0)=x_0$ and $v(t_0)=v_0$, then you just need to solve the differential equation $$\frac{\text d^2x}{\text dt^2}=a(x,t)$$ From there it is straightforward to determine the velocity.
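In practice, when $a(x,t)$ is not integrable in closed form, you solve this numerically by rewriting it as a first-order system. A minimal sketch (the particular $a(x,t)$ and initial conditions are invented for illustration):

```python
import numpy as np
from scipy.integrate import solve_ivp

def a(x, t):
    """Example acceleration depending on both position and time (invented)."""
    return -x + 0.1 * np.sin(t)

def rhs(t, y):
    x, v = y                  # state vector: position and velocity
    return [v, a(x, t)]       # dx/dt = v,  dv/dt = a(x, t)

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], dense_output=True)  # x(0)=1, v(0)=0
t = np.linspace(0.0, 10.0, 5)
x, v = sol.sol(t)
print(np.round(v, 3))         # velocity at the requested times
```

The solver gives you $x(t)$ and $v(t)$ together, which is exactly the $v(x,t)$ along the trajectory that the question asks for.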
{ "language": "en", "url": "https://physics.stackexchange.com/questions/555315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Power consumption of phototubes A phototube$^1$ (or "photoemissive cell") is a simple vacuum tube device that works by the photoelectric effect; it produces a current when light strikes the photocathode. For it to work, a potential difference of 15 V is generally applied between the cathode and the anode, so I only need to know the current to calculate the power. Is the current it consumes equal to the current it generates? I searched the data sheet but did not find the power consumed. $^1$as opposed to a photomultiplier tube
An old-fashioned photo-tube (these days you might use a photo-transistor) has a plate (the cathode) and an anode in a vacuum. A small fraction of the photons striking the plate cause electrons to be ejected into the vacuum. The applied voltage produces an electric field which accelerates these free electrons toward the anode. This flow constitutes the current, which depends on the intensity of the light, and can be quite small. This current times the voltage determines the power, which can also be quite small.
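As a purely illustrative number (the actual photocurrent depends entirely on the light level and the tube's sensitivity, so treat the current here as an assumed value): a photocurrent of $1\ \mu\text{A}$ at the $15\ \text{V}$ bias mentioned in the question corresponds to $$P = IV = (10^{-6}\ \text{A})(15\ \text{V}) = 1.5\times 10^{-5}\ \text{W} = 15\ \mu\text{W}.$$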
{ "language": "en", "url": "https://physics.stackexchange.com/questions/555425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Does the number of accessible microstates decrease overall when heat is transferred? We have two systems of ideal gas with different temperatures. $N$ & $V$ are being kept constant. The number of accessible microstates of each gas is thereby only influenced by a change in $E$. The number of accessible microstates is: $$\Omega = \frac{(N-1+U)!}{(N-1)!\,U!}. $$ In regards to $E$ the function is growing at an increasing pace. Since all the energy is kinetic energy this means that the number of accessible microstates further only depends on the temperature. Now we connect the two systems for only an extremely short amount of time, so that they keep their respective volumes and number of particles. Just a long enough timeframe that a small amount of $Q$ can be transferred from the warm system to the cold system. This decreases the number of accessible MS in the warm system and increases the number of accessible MS in the cold system. Since $\Omega$ increases rapidly with $E$ this means that the change in the warm system is bigger than the change in the cold system. So if the decrease of MS in one system is bigger than the increase in the other the number of accessible MS overall is decreasing. How is that possible if we know the number of accessible MS should always increase as stated by the 2nd law of thermodynamics? kind regards
I think the main confusion here is that you use $\Omega$ and the entropy interchangeably. $\Omega$ is not directly proportion to entropy. Rather $S$, the entropy, is proportional to $\log \Omega$. Let’s call the two systems you have $A$ and $B$. The entropy is additive $S_{tot} = S_A + S_B$. However the total number of states isn’t, rather we have $\Omega_{tot} = \Omega_A \Omega_B$. Now to answer your question, suppose system $B$ has more energy than system $A$ (and hence higher temperature) and let’s allow the two systems to exchange energy and see what happens to the total number of states $\Omega_{tot}$. As you mentioned, as $B$ loses energy and $A$ gain that energy, $\Omega_A \rightarrow \Omega_A + \delta_A$ and $\Omega_B \rightarrow \Omega_B - \delta_B$ and so, $$\Omega_{tot} \rightarrow (\Omega_A + \delta_A)(\Omega_B - \delta_B) = \Omega_{tot} - \Omega_A \delta_B + \Omega_B \delta_A, $$ here I’m only keeping first order terms of $\delta$’s. So you see the all very important point is that we don’t directly compare $\delta_A$ and $\delta_B$ but rather $\Omega_A \delta_B$ and $\Omega_B \delta_A$. Indeed as you mentioned, $$\delta_A < \delta_B,$$ however $$\delta_A \Omega_B > \delta_B \Omega_A,$$ and so the total number of states does increase. You can check this directly from your formula, but I’ll give a more physical reason here. The condition $\delta_A \Omega_B > \delta_B \Omega_A$ also mean $\delta_A / \Omega_A > \delta_B / \Omega_B.$ It’s not hard to see that $\delta / \Omega \propto \frac{d}{dU} \log \Omega(U)$. Not to get much into statistical mechanics details, but $\frac{d}{dU} \log \Omega $ is a decreasing function of the energy, in fact $\frac{d}{dU} \log \Omega \propto 1/T$, and so since system $A$ has lower temperature, it will have the higher ratio of $\delta/\Omega$.
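As a concrete check with the $\Omega$ formula from the question (the values of $N$ and the energy units here are made up, chosen small just so the numbers are printable):

```python
from math import comb

def omega(N, U):
    """Number of microstates (N-1+U)! / ((N-1)! U!) for N units sharing U energy quanta."""
    return comb(N - 1 + U, U)

N = 20
U_A, U_B = 5, 30                 # system A is colder (less energy), B is hotter

before = omega(N, U_A) * omega(N, U_B)
after = omega(N, U_A + 1) * omega(N, U_B - 1)    # one energy unit flows from B to A

delta_A = omega(N, U_A + 1) - omega(N, U_A)      # gain in A's microstates
delta_B = omega(N, U_B) - omega(N, U_B - 1)      # loss in B's microstates

print(delta_A < delta_B)     # True: B loses more states than A gains...
print(after > before)        # ...and yet the product Omega_A * Omega_B still grows
```

This is exactly the point made above: the absolute changes $\delta$ are not what matters; the product (equivalently, the sum of the entropies) is.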
{ "language": "en", "url": "https://physics.stackexchange.com/questions/555561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Does tangential acceleration change with radius? Do tangential velocity and tangential acceleration change with radius (change of radius on the same object)? For example consider a spinning disk. Does the equation $$a_t = \alpha R$$ (where $a_t$ is the tangential acceleration, $\alpha$ is the angular acceleration and $R$ is the radius of the disk) give me the tangential acceleration of the centre of mass or of a point on the edge of the disk? As you go inwards from a point on the edge of the disk , the radius decreases. So doesn’t that mean the tangential acceleration of the centre of mass is zero? I have the same doubt regarding tangential velocity. What is wrong with my reasoning? Does the centre of mass have the highest tangential velocity and acceleration or the lowest of all points on the disk?
To an extent, you are correct in your assumptions. The formula $$\mathbf{\vec a = \vec \alpha \times \vec R \implies a=R \, \alpha }$$ when $\vec \alpha$ and $\vec R$ are perpendicular to each other, which is quite the general case. The aforementioned formula relates the tangential acceleration $\vec a$ of a point particle placed at a distance $R$ from the center of rotation to the angular acceleration $\vec \alpha$ at that point. And if the rotating center has no translatory motion, then the tangential acceleration described by the above equation is equal to the net linear acceleration of the particle. And the same is true for the tangential velocity as well, which goes as: $$\mathbf{\vec v = \vec \omega \times \vec R \implies v=R \, \omega }$$ So, if $R$ decreases in magnitude, that is, you move to points which are closer to the rotation center, then obviously you get lower values of tangential velocity as well as tangential acceleration. And, at the center of rotation, both the tangential velocity and tangential acceleration are zero, at all times. Hope this helps.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/555649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
How to verify the Klein-Gordon field commutation relations, Peskin and Schroeder Equation (2.30) I am trying to verify the commutation relation given in Peskin and Schroeder. In particular, I don't know how to go between these two lines: $$[\phi(\textbf{x}), \pi(\textbf{x}')] = \int \frac{d^3p d^3p'}{(2\pi)^6} \frac{-i}{2}\sqrt{\frac{\omega_{p'}}{\omega_p}}\left([a^\dagger_{-p}, a_{p'}] - [a_p, a^\dagger_{-p'}] \right)e^{i(p\cdot{}x+p'\cdot{}x')}$$ $$[\phi(\textbf{x}), \pi(\textbf{x}')] = i\delta^{(3)}(\textbf{x}-\textbf{x}') \hspace{10mm}(2.30)$$ Using equations (2.27) and (2.28) for $\phi$ and $\pi$: $$\phi(\textbf{x}) = \int \frac{d^3p}{(2\pi)^3}\frac{1}{\sqrt{2\omega_p}}(a_p + a^\dagger_{-p})e^{ip \cdot{} x} \hspace{10mm}(2.27) $$ $$\pi(\textbf{x}) = \int \frac{d^3p}{(2\pi)^3} (-i)\sqrt{\frac{\omega_p}{2}}(a_p - a^\dagger_{-p})e^{ip \cdot{} x} \hspace{6mm}(2.28) $$ And the ladder operator commutation relation: $$[a_p, a^\dagger_{p'}] = (2\pi)^3\delta^{(3)}(\textbf{p} - \textbf{p}') \hspace{10mm}(2.29)$$ My Attempt Using the commutation relation, I sub in for the two ladder operator commutators: $$ 1) \hspace{5mm}[a^\dagger_{-p}, a_{p'}] = -[a_{p'},a^\dagger_{-p}] = -(2\pi)^3\delta^{(3)}(\textbf{p}'- (-\textbf{p)}) = -(2\pi)^3\delta^{(3)}(\textbf{p}' +\textbf{p}) $$ Where I have used a negative $\textbf{p}$ inside the dirac delta, since the commutator is $a_{-p}$ (I am unsure whether this is correct). $$ 2) \hspace{5mm}[a_{p}, a^\dagger_{-p'}] = (2\pi)^3\delta^{(3)}(\textbf{p}-(-\textbf{p}')) = (2\pi)^3\delta^{(3)}(\textbf{p} + \textbf{p}') $$ Using the same thinking as before. Subbing this into the integral: $$[\phi(\textbf{x}), \pi(\textbf{x}')] = \int \frac{d^3p d^3p'}{(2\pi)^6} \frac{-i}{2}\sqrt{\frac{\omega_{p'}}{\omega_p}}\left(-2(2\pi)^3\delta^{(3)}(\textbf{p} + \textbf{p}')\right)e^{i(p\cdot{}x+p'\cdot{}x')} $$ Dealing with the minus and cancelling terms: $$ [\phi(\textbf{x}), \pi(\textbf{x}')] = \int \frac{d^3p d^3p'}{(2\pi)^3} i \sqrt{\frac{\omega_{p'}}{\omega_p}}\delta^{(3)}(\textbf{p} + \textbf{p}')e^{i(p\cdot{}x+p'\cdot{}x')}$$ Here I am stuck: I do not know hot to deal with the dirac-delta in the integral, and I'm unsure whether I'm even right up to here. Any help on how to proceed or corrections thus far are appreciated! I'm told it's important to understand this part for the upcoming chapters.
Well, you're nearly done. That delta function $\delta^3(\mathbf{p}+\mathbf{p}^\prime)$ just sets $\mathbf{p}^\prime = -\mathbf{p}$ when you integrate in $\mathbf{p}^\prime$ (or the opposite if you integrate in $\mathbf{p}$, but it's the same). What you get at this point is the following: $$\begin{align} [\phi(\textbf{x}), \pi(\textbf{x}')] &= \int \frac{d^3p d^3p'}{(2\pi)^3} i \sqrt{\frac{\omega_{p'}}{\omega_p}}\delta^{(3)}(\textbf{p} + \textbf{p}')e^{i(p\cdot{}x+p'\cdot{}x')}\\ &=\int\frac{d^3p}{(2\pi)^3}i\sqrt{\frac{\omega_p}{\omega_p}}e^{ipx-ipx^\prime} = i\int\frac{d^3p}{(2\pi)^3}e^{ip(x-x^\prime)} \end{align}$$ Now by definition the last integral is just a delta function $\delta^3(\mathbf{x}-\mathbf{x}^\prime)$, since it's just the Fourier transform of $1$. At this point you just get the result you are looking for. The fact that $\omega_{-p}=\omega_p$ just comes from the fact that the energy is quadratic in $p$, so the sign does not matter.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/555912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Approaches to Model Building I often see references to 'model building' in the particle physics literature, presumably to refer to creating new QFTs which go beyond the Standard Model. How exactly does this process of model building begin? Does one simply write down a Lagrangian which has the desired properties and then alter it with trial and error, or if not, how does one arrive at the Lagrangian? If one assumes the Standard Model gauge group is embedded in a larger gauge group like $SU(5)$ or makes some extension to the gauge group, how does that change all the terms going into the definition of the Lagrangian and the covariant derivative?
I am writing this in an answer because comments tend to be deleted; it is a sort of answer from an experimental physicist's point of view. One does not start from mathematical impulses to generate models. In physics the process is data-driven: there exist data that the standard model cannot fit. This starts a search for a mathematical model that could do so. It is not always a different Lagrangian. For example, neutrino oscillations were introduced to explain the solar neutrino data, using the standard model Lagrangian. A number of Lagrangians have already been explored in trying to fit data and predict more, so a new model builder who tries to fit a new high-mass resonance in the LHC data can choose among existing solutions that could fit it. But it is not just new QFTs that are explored but also new frameworks. These have to be able to embed the standard model Lagrangian, as it is a summary of most of the data up to now. Example: string theories. Also there are creative proposals to fit strong interactions, like the amplituhedron.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/556030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Does putting a thin metal plate beneath a heavy object reduce the pressure it would have applied without it? My dad bought an earthen pot and kept it on our glass table. Worried that the glass could break on filling the pot with water, I kept a metal plate beneath it. At first it seemed like a good idea, but on further thinking I was unsure if it would actually help in bringing down the pressure on the glass. What if I put three coins beneath the pressure points instead of the plate: would it be any different from placing the plate (assuming the coins to be nearly as thick as the plate)? And if it won't be any different, am I right to think that I could keep reducing the area of the coins until they start looking like extensions of the stand on which the earthen pot rests?
Here's how to think about this case. Imagine a very small glass table, just wide enough so that the three legs of the pot fit inside the circumference of that small table. For such a small table the force exerted by each of the three legs would not be a problem at all. For comparison, imagine a table where the top is made of very soft wood. Then you do want to put coasters underneath each of the legs, otherwise you might get indentations in the wood. Well, glass is in no danger at all of getting indented. I would not be surprised to see coasters that are made out of glass. The relevant failure mode is the glass plate breaking as a whole: an instantaneous, full-length crack. For that failure mode the metal plate shown in the photo will not make much difference, if any. To assess how much load the glass is subjected to, I suggest looking at the reflection in the glass as the earthen pot is filled with water. Without load the glass will be straight/flat, and the reflection will be like a mirror image. When the glass bends, the view reflected in the glass is distorted accordingly. If you put some load on the table and you see no distortion (or very little), then there is very little bending, and the table is presumably strong enough. If you do see bending of the glass as the pot is filled, then that table is just not a good location for that pot.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/556175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
Evaluating sum of torques for different choices of origin when solving equilibrium problem My textbook says, When applying equilibrium conditions for a rigid body, we are free to choose any point as the origin of the reference frame. (source) I am trying to understand this by looking at the following picture (from an exercise problem), in which the ball is in static equilibrium because of the applied horizontal force $\vec F$ and the friction between the ball and the surface. (source) If we pick the point where the surface and the ball are in contact as the origin, then I can see how the torques cancel out. We have a negative torque $F \cos \theta$ acting on the center of the ball at a distance of $R$ and a positive torque from gravity $ mg \sin \theta$ acting again on the center of the ball at distance $R$. So $$\sum \tau = 0 = - F R \cos \theta + mg R \sin \theta$$ and we can begin solving for the external force. Alternatively, let's try defining the center of the ball as the origin. Since the applied force and gravity both act on the center of the ball, they provide zero torque. Likewise, the normal force from the surface is pointed directly at the ball's center and therefore provides zero torque. The only nonzero torque I can see is provided by the frictional force, whose magnitude is $mg \cos \theta$, and acts at a distance of $R$ in the positive (CCW) direction. $$\sum \tau' = 0 = \mu_s mg R \cos \theta$$ Since the friction force is nonzero, what force causes the torque to balance when choosing the center of the ball as the origin?
The maximum static friction force is proportional to the normal force, which depends on both gravity and the applied force; the actual static friction force, however, takes whatever value equilibrium requires. Letting the friction force $F_s$ stand on its own as an unknown, the correct expression for the torque about the center of the ball is $$\sum \tau' = 0 = F_s R$$ From this it follows that the friction force is, in fact, zero. The same can be seen by taking the torque about the point of contact: $$\sum \tau = 0 = - F R \cos \theta + mg R \sin \theta \\ \implies F\cos\theta = mg\sin\theta$$ and the sum of forces in the $x$-direction (parallel to the slope): $$\begin{align}\sum F_x =0 &=F_s + F \cos\theta - mg \sin\theta \\ & = F_s + 0\end{align}$$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/556508", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Applying the principle of Occam's Razor to Quantum Mechanics Wolfgang Demtröder writes this in his book on Experimental Physics, The future destiny of a microparticle is no longer completely determined by its past. First of all, we only know its initial state (location and momentum) within limits set by the uncertainty relations. Furthermore, the final state of the system shows (even for accurate initial conditions) a probability distribution around a value predicted by classical physics. If the quantum probability distribution always lies near the classical prediction, why do we need quantum mechanics in the first place? According to the Feynman interpretation, if an electron has to go from A to B, it can take all the paths, but the weight is more on the path predicted by classical mechanics. We know that it is unlikely that the electron travels through Mars to go from A to B on Earth. Then, isn't that path through Mars unnecessary? Shouldn't we, in the spirit of Occam's razor, exclude such things from a theory?
I don't know what Demtröder meant, but the sentence "Furthermore, the final state of the system shows (even for accurate initial conditions) a probability distribution around a value predicted by classical physics" is wrong. Aspect's famous experiment on Bell's inequality gave a result (as predicted by quantum mechanics) 5 standard deviations away from that of any possible classical description. Aspect's experiment has been confirmed many times. Here is the link to the original paper (free to read)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/556585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 10, "answer_id": 1 }
Identical particle state with spin We construct identical particle state by symmetrizing or antisymmetrizing the tensor product of single partice states. When considering spin, a two fermions state should be $$|\psi\rangle=\frac{1}{\sqrt{2}}(|\psi_1\rangle_{\sigma_1}\otimes|\psi_2\rangle_{\sigma_2}-|\psi_2\rangle_{\sigma_2}\otimes|\psi_1\rangle_{\sigma_1}),$$ where $|\psi_1\rangle_{\sigma_1}=|\psi_1\rangle\otimes|\sigma_1\rangle$. However, many text books said that $|\psi\rangle=|\phi\rangle\otimes|\chi\rangle$, where $|\phi\rangle$ is the spatial part and $|\chi\rangle$ is the spin part. My question is which one is correct, are the two answers coherent?
Neither answer is right in general. * *The only actual requirement is that the state be [anti]symmetric with respect to exchange. That's it. *The first state you give, $|\psi\rangle=\frac{1}{\sqrt{2}}(|\psi_1\rangle_{\sigma_1} \otimes|\psi_2\rangle_{\sigma_2}-|\psi_2\rangle_{\sigma_2} \otimes |\psi_1 \rangle_{ \sigma_1})$, is a special case, but it is not general. This can be thought of as a single 'configuration', or Slater determinant, and those are often good approximations in the absence of strong correlations. But there are valid states which are superpositions of multiple different Slater determinants and cannot be reduced into this form. *The second state you give, $|\psi\rangle=|\phi\rangle\otimes|\chi\rangle$, generally with spatial and spin parts of definite exchange symmetry, is again a special case, in which the spatial and spin parts are factorizable. If the system's hamiltonian has no spin-orbit coupling, then you are guaranteed an eigenbasis of states of this form. But in general (i) states need not have this form, (ii) your system's hamiltonian may well have spin-orbit coupling, and thus (iii) the eigenstates won't have this form.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/556901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Do virtual electron-positron pairs have mass? When a photon produces an electron-positron pair, do both these particles have mass? Why or why not?
A virtual pair is by definition "off shell". That is, it does not obey the usual mass relation for that kind of particle: $$E^2 = m^2 + p^2$$ Rather, it can have any $E$ and $\vec{p}$ such that 4-momentum is conserved at vertices. That means that the meaning of the "mass" of an off-shell system is pretty nebulous. You can, of course, choose to compute a value of $m$ for the virtual pair as though the usual mass relation were accurate. If you do, you will find that it could be anything, including zero. Computing the mass of a virtual pair serves no useful purpose, nor does contemplating whether it is nonzero.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/557037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
How do you hang a bottle off a toothpick? I came across this video: https://www.youtube.com/watch?v=mHsVxNMFWwA How is this possible? I'm guessing the vertical toothpick is exerting an upward force on the table toothpick to balance the torque. But how? I am confused.
Firstly, the bottle creates tension in the string, which is then imparted on the toothpick. This creates a torque on the toothpick, which is neutralized by the truss-like arrangement attached to both strings. The support beam (a toothpick) exerts a normal force on the vertical beam, and this carries over as an opposing torque on the one hung from the table. Namely, the arrangement is created to prevent the toothpick on the table from rotating and tipping over, by exerting this force, through tension, on the section of the string running from the table to the support beam.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/557125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Does energy conservation imply time invariance? This is similar to this question: Is the converse of Noether's first theorem true: Every conservation law has a symmetry?. However, the answer given there is very technical and general. I am only interested in the specific case of energy conservation (mostly because dark energy seems to break energy conservation / time invariance).
It is actually the other way around: time translation symmetry leads to energy conservation. We define the Hamiltonian as $$H = \sum_i p_i\dot{q}_i - L$$ The Hamiltonian, in other words the energy, is conserved when the Lagrangian has no explicit time dependence, because $$\frac{dH}{dt}=-\frac{\partial L}{\partial t}$$ This means that as long as the laws of motion are time-translation invariant, the energy of the system in consideration is conserved. The converse is true as well: the same equation says that if the energy is conserved, then the Lagrangian has no explicit time dependence. Now, even though it might seem in certain systems that the energy is not conserved, we must remember that the system is not necessarily isolated, so when we see that the energy is not conserved it just means that there is a flow of energy to or from the surroundings.
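For reference, the one-line derivation of that relation (standard, using $p_i=\partial L/\partial\dot q_i$ and the Euler-Lagrange equation $\dot p_i=\partial L/\partial q_i$): $$\frac{dH}{dt}=\sum_i\left(\dot p_i\dot q_i+p_i\ddot q_i\right)-\sum_i\left(\frac{\partial L}{\partial q_i}\dot q_i+\frac{\partial L}{\partial\dot q_i}\ddot q_i\right)-\frac{\partial L}{\partial t}=-\frac{\partial L}{\partial t},$$ since the first and second sums cancel term by term on shell.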
{ "language": "en", "url": "https://physics.stackexchange.com/questions/557331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Why don't hovercrafts move West relative to the Earth Suppose that there is a hovercraft floating a few centimetres above the Earth's surface. As it is disconnected from the Earth, which is spinning from West to East, shouldn't it appear to move East to West to observers on the ground? Does this happen? If not, why not?
Let's make it even more interesting! If what you said were true, then all you'd need to do to travel large distances is jump up in the air. The Earth turns at a speed of around 500 m/s (at the equator) and so if you were up in the air for a second, you'd travel 500 metres! Well, obviously, you don't. (If you did, long-jump events would be much more exciting.) The same thing could be said about trains like the TGV that travel at 100 m/s. Imagine you were sitting on one of these trains, and you dropped a coin and it took half a second to fall. It would hit the ground right below where you released it, not 50 m behind you. According to someone on the platform, the train, the coin, and you are all moving at the same horizontal velocity. When you release the coin, the force of gravity accelerates it vertically, but its horizontal velocity is unchanged as there are no forces in that direction. You are currently travelling with the same angular speed as the Earth, and you continue to travel at that speed even when you're not "touching" the Earth's surface. When you jump, you are only changing your "vertical" velocity, not your "horizontal" velocity, which stays the same.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/557462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How does the reversibility of physics interact with nuclear fission? The laws of physics are reversible and quantum information is never destroyed. Given this, how do I time reverse the $U_{235}$ fission reaction, in which ${}^1_0n + {}^{235}_{92}U \rightarrow {}^{141}_{56}Ba + {}^{92}_{36}Kr + 3 {}^1_0n + \gamma +$ 202.5 MeV (in kinetic energy plus gamma ray energy)? That is, would reversibility require that if we bring together the $Ba$, $Kr$, the three neutrons, and the $\gamma$ all at the same instant, a $U$-$235$ and a neutron might just pop out with a certain probability? (EDIT: Here is a fun video from PBS Space Time explaining why quantum information can never be created or destroyed.)
The reverse reaction, in which six particles join to make two, is allowed by time-reversal symmetry but forbidden by entropy considerations arising from classical thermodynamics. You just plain can't get those six particles to meet each other "just so"; there are too many ways for them to miss. The way that thermodynamic entropy breaks time reversal symmetry is a subject of some discussion. Blanket statements about classical and quantum information are a way to lead yourself astray. A rare example of a process with a three-body initial state is the "triple alpha" process by which helium fuses to carbon. It's so improbable that Hoyle proposed there must be an excited state in beryllium-8 to allow the sequential two-body reactions $$\rm 3\alpha \to \alpha + {}^8Be^* \to {}^{12}C +\gamma $$ and successfully predicted the energy of the excited state.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/557614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why do peaks and troughs of a wave cancel each other out? And why peaks and peaks or troughs and troughs add up? I have thought that the cancellation of peaks and troughs is a consequence of Newton's third law of motion that equal and opposite forces cancel each other out. Or it has something to do with conservation of energy or momentum.But I have never truly understood it correctly. I believe there is an obvious and clear explanation that I am simply not aware of. In other words, how does interference work?
If you flick a taut rope from both ends, you will see waves forming and moving. * *A rising peak happens because the particles were flicked upwards by an upwards force by the person, and this upwards force propagates from particle to particle. *A lowering valley happens likewise due to a downwards initial force which propagates. When waves meet, both forces act on the same particles. If they are equal, then a particle feels an upwards and a downwards force that are equal at the same time. It thus doesn't move, due to Newton's 1st law. This same mechanism happens with sound waves, where the moving medium is the air molecules. In other types of waves, such as electromagnetic radiation travelling through space with no medium, we can think of the rising radiation wave as a positive value of the electric or magnetic field or the like. Two meeting waves add together, and if one is upwards and one is downwards, then we have a positive and a negative value added together, causing them to mathematically cancel out.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/557719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why doesn't water boil in the oven? I put a pot of water in the oven at $\mathrm{500^\circ F}$ ($\mathrm{260^\circ C}$ , $\mathrm{533 K}$). Over time most of the water evaporated away but it never boiled. Why doesn't it boil?
The "roiling boil" is a mechanism for moving heat from the bottom of the pot to the top. You see it on the stovetop because most of the heat generally enters the liquid from a superheated surface below the pot. But in a convection oven, whether the heat enters from above, from below, or from both equally depends on how much material you are cooking and the thermal conductivity of its container. I had an argument about this fifteen years ago which I settled with a great kitchen experiment. I put equal amounts of water in a black cast-iron skillet and a glass baking dish with similar horizontal areas, and put them in the same oven. (Glass is a pretty good thermal insulator; the relative thermal conductivities and heat capacities of aluminum, stainless steel, and cast iron surprise me whenever I look them up.) After some time, the water in the iron skillet was boiling like gangbusters, but the water in the glass was totally still. A slight tilt of the glass dish, so that the water touched a dry surface, was met with a vigorous sizzle: the water was keeping the glass temperature below the boiling point where there was contact, but couldn't do the same for the iron. When I pulled the two pans out of the oven, the glass pan was missing about half as much water as the iron skillet. I interpreted this to mean that boiling had taken place from the top surface only of the glass pan, but from both the top and bottom surfaces of the iron skillet. Note that it is totally possible to get a bubbling boil from an insulating glass dish in a hot oven; the bubbles are how you know when the lasagna is ready. (A commenter reminds me that I used the "broiler" element at the top of the oven rather than the "baking" element at the bottom of the oven, to increase the degree to which the heat came "from above." That's probably why I chose black cast iron, was to capture more of the radiant heat.)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/557812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "139", "answer_count": 7, "answer_id": 3 }