Is the Lagrangian density of electromagnetism half-blind? The Lagrangian density of electromagnetism is $$ \mathcal{L}_{EM}=\frac{1}{4\mu_0}F^{ab}F_{ab} $$ This represents one of two fundamental Lorentz invariants of electromagnetism. The second one is: $$ \frac{1}{2}\epsilon_{abcd}F^{ab}F^{cd} $$ Since $\mathcal{L}_{EM}$ contains only one of the two fundamental Lorentz invariants, how is it that $\mathcal{L}_{EM}$ is not "half-blind"? Does the absence of the second fundamental Lorentz invariant from $\mathcal{L}_{EM}$ erase any features of electromagnetism from the solutions that would otherwise be present in nature, which obviously accounts for both invariants?
You can add this to the Lagrangian if you want, but it will have no effect whatsoever. Try running the Lagrangian with the extra term through the Euler-Lagrange equation; it's a bit tedious, but you'll see it has no effect on the equations of motion. The reason why is that this term can be written as a total derivative (see this question), and two Lagrangians differing by the total derivative of a function will describe the same physical system (i.e. will return the same equations of motion).
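For reference, a minimal sketch of the identity behind this (using $F^{ab}=\partial^a A^b-\partial^b A^a$ and the antisymmetry of $\epsilon_{abcd}$): $$\epsilon_{abcd}F^{ab}F^{cd}=4\,\epsilon_{abcd}\,\partial^a A^b\,\partial^c A^d=\partial^a\!\left(4\,\epsilon_{abcd}\,A^b\,\partial^c A^d\right),$$ where the second step uses that the term with two derivatives on $A$ vanishes by antisymmetry. So the second invariant is a total divergence and contributes only a boundary term to the action, leaving the Euler-Lagrange equations unchanged.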
{ "language": "en", "url": "https://physics.stackexchange.com/questions/559325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Describing forces in rolling Consider a wheel on a frictionless horizontal surface. If we apply a horizontal force (parallel to the surface and above the level of the center of mass), what happens to the wheel? Does it roll, slide forward, rotate only, or does some other phenomenon occur? Please guide me, and also draw a free body diagram. Note: This is a thought experiment. If the question is not well posed, I am sorry; please guide me.
The equations of motion are: $$\sum_M F_y=m\frac{d}{dt}\,v-F=0$$ $$\sum_M \tau_x=I\,\frac{d}{dt}\,\omega-F\,d=0$$ Thus: $$\dot{v}=\frac{F}{m}\tag 1$$ $$\dot{\omega}=-\frac{F\,d}{I}\tag 2$$ Roll condition: $$v=\omega\,R$$ Slide condition: $$v \lessgtr \omega\,R$$ $\Rightarrow$ $$\dot{v}\lessgtr \dot{\omega}\,R\tag 3$$ With equations (1), (2) and (3) you get $$F\left(\frac{1}{m}+\frac{R\,d}{I}\right)\lesseqgtr 0\tag 4$$ where $=$ is for rolling and $>$ for sliding. Thus: $F=0$ for rolling $\Rightarrow$ $$v=v_0\quad,\quad\omega=\frac{v_0}{R}$$ where $v_0$ is the initial velocity, and $F> 0$ gives sliding.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/559470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Why does the entanglement of quantum fields depend on their distance? While watching Sean Carroll's "A Brief History of Quantum Mechanics", I heard him mention around the 50th minute (the video I linked to starts at that point) that [about quantum fields in vacuum] ... and guess what! The closer they are to each other, the more entangled they are. Why is it so? I was under the impression that entanglement is not dependent on the distance (two entangled particles getting further from each other are not less entangled). If this is at all possible I would be grateful for an answer understandable by an arts major - just kidding a bit, I simply would like to avoid an answer which starts with [an image, courtesy of Redorbit].
I don't think you should pay any attention to this. Entanglement refers to quantum states, not to quantum fields.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/560399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Is the question asking for the primitive translation vectors of the simple cubic lattice or of the reciprocal lattice? Can anyone please give me a clue about what the question wants? Based on the question, I cannot tell whether it asks for the primitive translation vectors of the simple cubic lattice or of the reciprocal lattice, because the form of the given $\mathbf k_1$, $\mathbf k_2$, $\mathbf k_3$ is very different from the simple cubic one.
From what you wrote I understand the question as follows. You first need to determine what the Brillouin zone is for a simple cubic lattice. I recommend you try to do this yourself, and you should find that for a simple cubic lattice of lattice parameter $a$, then the Brillouin zone is also simple cubic with reciprocal lattice parameter $2\pi/a$. Once you have determined the boundaries of your Brillouin zone cube, then the question is asking: are the given $\mathbf{k}_1$, $\mathbf{k}_2$, and $\mathbf{k}_3$ inside the Brillouin zone? If they already are inside the Brillouin zone, then that's it. If they are not, then you are asked to construct an equivalent $\mathbf{k}$ vector that is inside the Brillouin zone. So the final question you need to ask is: how do I build an equivalent $\mathbf{k}$ vector inside the Brillouin zone? To do this, you need to use the fact that, due to periodicity of the reciprocal lattice, you can add any linear combination of reciprocal lattice vectors to a given $\mathbf{k}$ vector and you will obtain an equivalent $\mathbf{k}$ vector. What you need to do is figure out which linear combination of reciprocal lattice vectors to add to make sure that the resulting vector is inside the Brillouin zone.
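As a concrete illustration of that last step, here is a minimal Python sketch (the lattice parameter and the example vector are made-up values, not taken from the original problem) that folds a given $\mathbf{k}$ vector back into the first Brillouin zone of a simple cubic lattice by subtracting the nearest reciprocal lattice vector:

```python
import numpy as np

def fold_into_bz(k, a):
    """Map k into the first Brillouin zone of a simple cubic lattice.

    The reciprocal lattice is simple cubic with spacing b = 2*pi/a, so the
    first Brillouin zone is the cube |k_i| <= pi/a.  Subtracting the nearest
    integer multiple of b from each component gives an equivalent vector
    inside that cube.
    """
    b = 2 * np.pi / a
    k = np.asarray(k, dtype=float)
    return k - b * np.round(k / b)

# Example (hypothetical numbers): a = 1, k outside the zone along x
a = 1.0
k = np.array([1.3 * np.pi, 0.2 * np.pi, -0.9 * np.pi])
print(fold_into_bz(k, a))   # approximately [-0.7*pi, 0.2*pi, -0.9*pi]
```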
{ "language": "en", "url": "https://physics.stackexchange.com/questions/560545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why doesn't the ergodic hypothesis hold for most systems? Is there a physical (intuitive) explanation for why most systems are not ergodic? As my book states, it is a natural assumption that a system is at least quasi-ergodic; it then proceeds to state that this hypothesis is, in fact, false, and that we need a different basis for statistical mechanics. I don't understand why most systems aren't, and how we can prove this.
I'm not sure how to quantify "most" systems, but off the top of my head, there are many processes that we encounter in our daily lives that break ergodicity within the timeframe under which we consider them.
* Symmetry breaking in general breaks ergodicity. Take, for example, a magnet. Ergodicity would imply that a magnet's magnetization would point in all directions (as every direction of magnetization occupies the same volume of phase space) with equal probability if sampled over a sufficiently long time. However, human timeframes are manifestly not anywhere near "sufficiently long".
* Disordered materials (like wood and glass) also break ergodicity. A block of wood has equal energy in many different configurations, but it only explores a tiny portion of that phase space volume. (I.e., if your block is a cube, it doesn't spontaneously move into a position rotated 90 degrees about one of its axes, again within human timeframes.)

Since statistical mechanics also aims to (and does) explain the behavior of these types of materials as well, it shouldn't need to depend on ergodicity to do so.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/560660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What is the difference between a force and a net force? I read in Newton's first law, it states that an object will continue to have a constant velocity unless acted upon by a force whilst for other articles, it states "unless acted upon by a net force." Which one is correct? Are they both interchangeable? Is there any difference between these two concepts?
Force is a vector quantity. The first law talks of a single object and a force, without going into the details. A net force means the vector addition of forces; two equal and opposite forces add up to zero net force. This is expressed more clearly here.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/561003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 8, "answer_id": 7 }
Harmonic oscillator partition function via Matsubara formalism I am trying to understand the solution to a problem in Altland & Simons, chapter 4, p. 183. As a demonstration of the finite temperature path integral, the problem asks to calculate the partition function of a single harmonic oscillator. The coherent state path integral is $$ \mathcal{Z} = \int D(\overline{\phi},\phi) \exp \Big[ -\int_0^{\beta} d\tau \, \overline{\phi} (\partial_{\tau} + \omega) \phi \Big] \sim [ \det(\partial_{\tau} + \omega) ]^{-1} \tag{4.53}$$ where the $\sim$ follows from simply treating the path integral as if it were an ordinary Gaussian integral. Using the fact that $\phi(\tau)$ must be periodic, we can expand $\phi$ in a Fourier series and find that the Matsubara frequencies are $\omega_n = 2\pi n / \beta$, from which we obtain the expression $$ \mathcal{Z} \sim \prod_{\omega_n} (-i \omega_n + \omega)^{-1} = \prod_{n = 1}^{\infty} \Big[ \Big( \frac{2\pi n}{\beta} \Big)^2 + \omega^2 \Big]^{-1}. $$ We obtain the latter expression by pairing each $n$th term with the $-n$th term. Now, here comes the question: to compute this infinite product, Altland & Simons perform the following steps: $$ \prod_{n = 1}^{\infty} \Big[ \Big( \frac{2\pi n}{\beta} \Big)^2 + \omega^2 \Big]^{-1} \sim \prod_{n = 1}^{\infty} \Big[ 1 + \Big( \frac{\beta \omega}{2\pi n} \Big)^2 \Big]^{-1} \sim \frac{1}{\sinh(\beta \omega / 2)}. $$ It seems to me that to get from the first to the second expression, they are multiplying and dividing by $\prod_{n = 1}^{\infty} (\beta / 2\pi n)^2 $, so as to use the formula $x/ \sin x = \prod_{n = 1}^{\infty} (1-x^2 / (\pi n)^2 )^{-1} $. This seems completely unjustified to me -- not only are you dropping temperature dependence in the $\sim$, but you're effectively multiplying and dividing by zero! Not to mention that the final $\sim$ conveniently ignores a factor of $\beta$ in the numerator in order to get the correct final answer. Is there something I'm missing, or is this calculation completely bogus? And what is the correct means to get the right answer?
OP's partition function for the harmonic oscillator $$\begin{align}Z^{-1} ~=~&\prod_{n\in \mathbb{Z}}\left[ -\frac{2\pi i n}{\beta} + \omega\right] \cr ~=~&\omega\prod_{n\in \mathbb{N}}\left[\left( \frac{2\pi n}{\beta} \right)^2 + \omega^2\right] \cr ~=~&\omega\left[ \prod_{n\in \mathbb{N}}\frac{2\pi }{\beta}\right]^2\left[ \prod_{n\in \mathbb{N}}n\right]^2 \prod_{n\in \mathbb{N}}\left[1 + \left( \frac{\beta \omega}{2\pi n} \right)^2 \right] \cr ~\stackrel{(2)}{=}~&\omega\cdot \frac{\beta}{2\pi }\cdot 2\pi \cdot\frac{\sinh\frac{\beta\omega}{2}}{\frac{\beta\omega}{2}}\cr ~=~&2\sinh\frac{\beta\omega}{2}\cr ~=~&\left(\sum_{n\in\mathbb{N}_0}e^{-(n+1/2)\beta\omega}\right)^{-1} \end{align}\tag{1}$$ can be understood via the following zeta function regularization rules: $$ \prod_{n\in\mathbb{N}} a ~\stackrel{(3)}{=}~\frac{1}{\sqrt{a}} \quad\text{and}\quad \prod_{n\in\mathbb{N}} n ~\stackrel{(3)}{=}~\sqrt{2\pi}, \tag{2}$$ stemming from the zeta function values $$ \zeta(0)~=~-\frac{1}{2} \quad\text{and}\quad \zeta^{\prime}(0)~=~-\ln\sqrt{2\pi} ,\tag{3} $$ respectively. See also e.g. this & this related Phys.SE posts.
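As a quick sanity check on the convergent part of the manipulation (not on the zeta-regularized factors), the infinite product over the $n$-dependent brackets really does reproduce $\sinh(x)/x$ with $x=\beta\omega/2$; here is a small Python sketch with an arbitrarily chosen value of $x$:

```python
import math

def truncated_product(x, n_max):
    """Partial product of prod_{n>=1} (1 + x**2 / (pi*n)**2)."""
    result = 1.0
    for n in range(1, n_max + 1):
        result *= 1.0 + (x / (math.pi * n)) ** 2
    return result

x = 0.75  # stands for beta*omega/2; value chosen arbitrarily
exact = math.sinh(x) / x
for n_max in (10, 100, 10000):
    print(n_max, truncated_product(x, n_max), exact)
# The partial products approach sinh(x)/x as n_max grows.
```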
{ "language": "en", "url": "https://physics.stackexchange.com/questions/561103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Invariance of Lagrangian under rotations in a constant magnetic field The Lagrangian for the motion of a particle with mass $m$ and charge $q$ in a constant magnetic field $B$ is given by $$\mathcal{L}(x,v)=\frac{m}{2}\left|v\right|^2-\frac{q}{2c}\left(v\cdot[x\times B]\right).$$ Show that rotations around the $B$-axis leave the Lagrangian invariant, where each rotation is given by $O_{\eta}:=\exp(\eta\,[B\,\times \,.]),\,\eta\in\mathbb{R}$. I can see that $\left|O_{\eta}(v)\right|^2=\left|v\right|^2$, since rotations are supposed to leave the "length" unchanged, but that's about as far as I've gotten with this. I'm guessing that one needs to apply certain identities here regarding the cross product and the $\exp$ function, which I haven't been able to find on Wikipedia or other websites so far.
Write your Lagrangian in cylindrical coordinates. You will see that the Lagrangian doesn't depend on $\theta$, where $\theta$ is the angle that measures the rotation about the $z$-axis.
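As a sketch of what that computation gives (taking $B=B\hat z$ and using $x=\rho\cos\theta$, $y=\rho\sin\theta$; the overall sign of the magnetic term depends on orientation conventions): $$\mathcal{L}=\frac{m}{2}\left(\dot\rho^2+\rho^2\dot\theta^2+\dot z^2\right)+\frac{qB}{2c}\,\rho^2\dot\theta,$$ which contains $\dot\theta$ but not $\theta$ itself, so rotations about the $B$-axis ($\theta\to\theta+\eta$) leave the Lagrangian invariant.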
{ "language": "en", "url": "https://physics.stackexchange.com/questions/561221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
What is this synthetic molecular motor and what is the energy source? In the "Molecular dynamics" entry of the 2018 version of Wikipedia (it has been removed from the current version), there is such a synthetic molecular motor: you can also find this image by searching for "MD_rotor_250K_1ns" on Bing. (1) Are there any references about this synthetic molecular motor? (2) According to the animation, it seems to be driven by thermal energy instead of chemical energy; is that true? If so, how can this be explained in light of the second law of thermodynamics, since random thermal motion seems to be transformed into more ordered directional rotation?
The wiki article still exists, with the simulation too; its caption reads: "Molecular dynamics simulation of a synthetic molecular rotor composed of three molecules in a nanopore (outer diameter 6.7 nm) at 250 K". In the wiki article: The basic requirements for a synthetic motor are repetitive 360° motion, the consumption of energy and unidirectional rotation. So energy has to be supplied. There are light-driven and chemically driven rotors. The reference for the simulation is: Palma, C.-A.; Björk, J.; Rao, F.; Kühne, D.; Klappenberger, F.; Barth, J.V. (2014). "Topological Dynamics in Supramolecular Rotors". Nano Letters. 148: 4461–4468. The article also states that, as of 2020, the smallest atomically precise molecular machine has a rotor which consists of four atoms. Thermal motion is utilized as follows in the latest experiment: By breaking spatial inversion symmetry, the stator defines the unique sense of rotation. While thermally activated motion is nondirected, inelastic electron tunneling triggers rotations, where the degree of directionality depends on the magnitude of the STM bias voltage. [...] This ultrasmall motor thus opens the possibility to investigate in operando effects and origins of energy dissipation during tunneling events, and, ultimately, energy harvesting at the atomic scales.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/561609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Current and conductance from the Landauer formula The Landauer formula for a one-dimensional quantum system (potential step scattering) can be written as $$ I(V)=\frac{2e}{h}\int_{-\infty}^\infty dE T(E) (f_S(E) - f_D(E)), $$ where $T(E)$ is the transmission probability and $f_i(E)$ is the Fermi function of source $S$ or drain $D$. In Cuevas it is claimed that if the temperature is zero (the Fermi functions become step functions) and if low voltage is assumed, the expression reduces to $$ I = GV, $$ where the conductance is given by $G=(2e^2/h)T$. What is the low-voltage assumption? In other words, if I assume low voltage, along with zero temperature, what is left to compute in the integral?
In general, the relationship $I(V)$ for arbitrary voltages is nonlinear in the voltage difference $V$, and the assumption of low voltage allows one to write the linear approximation $I \approx GV$ by expanding at first order the difference of the Fermi-Dirac functions. At zero temperature, the relationship $I(V)$ is still nonlinear unless $T(E)$ is a constant independent of $E$.
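To make that explicit (a sketch, with the chemical potentials written as $\mu_{S,D}=E_F\pm eV/2$; other conventions simply shift both by a constant): at zero temperature the Fermi functions are unit steps, so $$I=\frac{2e}{h}\int_{\mu_D}^{\mu_S}T(E)\,dE\;\approx\;\frac{2e}{h}\,T(E_F)\,eV=\frac{2e^2}{h}T(E_F)\,V\quad\text{for small }eV,$$ i.e. the only thing left to "compute" in the integral is the transmission evaluated at the Fermi energy.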
{ "language": "en", "url": "https://physics.stackexchange.com/questions/561876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How does an up quark decay into products more massive than itself? According to https://en.wikipedia.org/wiki/Up_quark the up quark can decay into a down quark plus a positron plus an electron neutrino. The problem is that the mass of the by-products is greater than that of the original particle. This would violate conservation of mass/energy unless some source of energy or mass was put into the system to trigger the decay.
The most common example of this is beta plus decay. In this process one of the up quarks in a proton decays into a down quark and a $W^+$, and the $W^+$ then decays into a positron and electron neutrino. As a result of the decay the proton converts to a neutron. As you say, the process violates conservation of energy and that means it cannot occur unless energy can be supplied from some other source. An isolated proton cannot undergo beta plus decay to a neutron. However in a nucleus the rearrangement of the nucleons following the decay of the proton to a neutron can supply the required energy, and some nuclei can undergo this type of decay. So you are quite correct that the decay violates conservation of energy, and therefore it can only happen when that missing energy required can be supplied from elsewhere.
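A sketch of the energy bookkeeping (standard textbook numbers, not taken from the answer above): for $\beta^+$ decay of a nucleus, $^A_Z X \to {}^A_{Z-1}Y + e^+ + \nu_e$, to be energetically allowed one needs, in terms of atomic masses, $$Q_{\beta^+}=\left[M\!\left(^A_Z X\right)-M\!\left(^A_{Z-1}Y\right)-2m_e\right]c^2>0,$$ so the daughter atom must be lighter than the parent by at least $2m_ec^2\approx1.022\ \mathrm{MeV}$. That surplus, supplied by the difference in nuclear binding energies, is what pays for the quark-level mass increase and the positron.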
{ "language": "en", "url": "https://physics.stackexchange.com/questions/562006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Do atoms absorb the same amount of light? I'm currently working on a project on my own where I'm interested in finding information about an object based on a spectrum. Namely, I want to use the spectrum that I input into my program to be able to analyze what atoms are present in the analyzed object. (I know this is probably hard but it's a fun project). However, when I started to work on this my question arose: Do atoms that are exposed to the same amount of light absorb the same amount as well (albeit at different frequencies)? So, when the atoms are exposed to light (uniform over the EM spectrum), will two atoms that absorb different frequencies absorb the same amount of light? And if so, could one infer that the less light of a specific frequency we find (compared to the maximum that would be emitted at that frequency), the more there is of the element that absorbs this specific frequency? (Though it would probably be useful to look at more than one "black line" in the spectrum.)
Different materials will absorb different amounts. You cannot rely on two materials to absorb the same number of watts per mole, or anything like that. The ultimate case study would be the white paint used to coat roofs and charcoal. They both obviously have absorption spectra, at different frequencies. However, it is trivial to show that the white paint fundamentally absorbs less, which is why we use it on roofs in the first place!
{ "language": "en", "url": "https://physics.stackexchange.com/questions/562229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Individual particle states in Fock space I am currently learning QFT, and after watching the wonderful lectures by Leonard Susskind (https://theoreticalminimum.com/courses/advanced-quantum-mechanics/2013/fall), I am still struggling to see the connection between multi-particle (Fock) states and harmonic oscillators. When constructing Fock states, prof. Susskind used the "particle in a box" model for individual particle states. In this model, the particle wave functions are the energy eigenstates (standing waves) of a particle in a box. A Fock state is written as a sequence of occupation numbers for each energy eigenstate (i.e. how many particles exist in each state). However, from other QFT lectures, I recall that adding a particle with a specific momentum corresponds to increasing the excitation number of a harmonic oscillator. This is quite different from the "particle in a box" model. What am I missing here? Is the "particle in a box" model just a simplification, and the actual states should be associated with harmonic oscillators?
Fock space description and second quantization are not specific to harmonic oscillators - this is simply counting how many particles are in each state, whatever the nature of the states. Creation/annihilation operators serve here to increase or reduce the number of particles in a state. What often serves as a source of confusion is that for a one-particle oscillator (not in a Fock space!) one can introduce creation and annihilation operators that increase/reduce the excitation number. Moreover, when we quantize the electromagnetic field, which is interpreted as a collection of oscillators, the excitation numbers are interpreted as the number of photons - quite literally becoming the creation and annihilation operators in the Fock space (note that the second quantization formalism applied to the electromagnetic field is actually the first quantization of this field, since it is already a wave field).
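For concreteness, a standard illustration of that counting (not specific to any particular single-particle basis): labelling the single-particle states by $k$ (momentum modes, particle-in-a-box levels, or anything else), a Fock state and the action of the ladder operators read $$|n_1,n_2,\dots\rangle,\qquad a_k^\dagger|\dots,n_k,\dots\rangle=\sqrt{n_k+1}\,|\dots,n_k+1,\dots\rangle,\qquad a_k|\dots,n_k,\dots\rangle=\sqrt{n_k}\,|\dots,n_k-1,\dots\rangle,$$ which is formally the same algebra as for the harmonic-oscillator ladder operators; it is this shared algebra, rather than any physical oscillator, that makes the two pictures look alike.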
{ "language": "en", "url": "https://physics.stackexchange.com/questions/562312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
In metals, the conductivity decreases with increasing temperature? I am currently studying Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th edition, by Max Born and Emil Wolf. Chapter 1.1.2 Material equations says the following: Metals are very good conductors, but there are other classes of good conducting materials such as ionic solutions in liquids and also in solids. In metals the conductivity decreases with increasing temperature. However, in other classes of materials, known as semiconductors (e.g. germanium), conductivity increases with temperature over a wide range. An increasing temperature means that, on average, there is greater mobility of the atoms that constitute the metal. And since conductivity is due to the movement of electrons in the material, shouldn't this mean that conductivity increases as temperature increases?
In metals, an increase in temperature decreases the average time between collisions of the charge carriers, which increases the resistivity and therefore decreases the conductivity. (Also, an increase in temperature does not appreciably change the number of charge carriers in metals.)
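This is captured by the Drude formula (a standard result, quoted here for reference): $$\sigma=\frac{ne^2\tau}{m},$$ where $n$ is the carrier density, $e$ the electron charge, $m$ the (effective) electron mass and $\tau$ the mean time between collisions. In a metal $n$ is essentially fixed, so the drop of $\tau$ with rising temperature (more scattering off lattice vibrations) lowers $\sigma$; in a semiconductor $n$ itself grows steeply with temperature, which wins over the decrease of $\tau$.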
{ "language": "en", "url": "https://physics.stackexchange.com/questions/562392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
If mass is homogeneously distributed why would there be gravitational attraction between bodies? Assuming the mass of the universe was spread completely evenly throughout space why would gravitational attraction happen? All bodies in the universe would feel gravitational tug equally in all directions so why would they go anywhere?
If one interprets the question in Newtonian dynamics (as distinct from GR) then the answer is that Newtonian gravity for an infinite uniform matter distribution in flat space is inconsistent. This can be shown from the equations of Newtonian gravity, in which the problem is that the integrals over all space do not converge, but a simple argument can also be found from Newton's shell theorem. Let the mass density be constant, $\rho$. Take any two points, $\mathrm A$ and $\mathrm O$, separated by a distance $R=\mathrm {OA}$. According to Newton's shell theorem the gravitational force at $\mathrm A$ due to any spherical shell containing $\mathrm A$ and centred at $\mathrm O$ is zero. The gravitational acceleration due to matter inside a sphere of radius $R$ centred at $\mathrm O$ is $$ \frac {4\pi} 3 G \rho R $$ In other words the gravitational tug does not cancel out, but is towards $\mathrm O$, which is clearly inconsistent because $\mathrm O$ can be any point in the universe.
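For completeness, the quoted acceleration follows in one line from the shell theorem: only the mass inside the sphere of radius $R$ centred at $\mathrm O$ contributes, so $$g(\mathrm A)=\frac{G M_{<R}}{R^2}=\frac{G}{R^2}\cdot\frac{4\pi}{3}\rho R^3=\frac{4\pi}{3}G\rho R,$$ directed from $\mathrm A$ towards $\mathrm O$.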
{ "language": "en", "url": "https://physics.stackexchange.com/questions/562511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Velocity of the touching point between 2 rotating circles I'm trying to solve the following problem that I'm having a hard time with: We have circle ${\Sigma}_1$ with center $O_1$ and radius $a_1$. The center $O_1$ is also the center of the static orthonormal coordinate system $R_0 (O_1, x_0, y_0, z_0)$. ${\Sigma}_1$ rotates at the angular speed ${\omega}_1$. Let the circle ${\Sigma}_2$ with center $O_2$ and radius $a_2<a_1$ roll without slipping on top of ${\Sigma}_1$ at a constant angular speed ${\omega}_2$. We call $I$ the touching point between the two circles. Let ${\Sigma}_3$ be a solid keeping ${\Sigma}_1$ and ${\Sigma}_2$ in contact. The coordinate system $R_0$ defined by $(O_1, x_0, y_0, z_0)$ is fixed and does not rotate. The coordinate system $R$ defined by $(O_1, x, y, z)$ is mobile, fixed to ${\Sigma}_3$, and rotates around $z\equiv z_0$ at the angular speed ${\omega}_3$. I need to find the velocity of $I$ in the $R_0$ coordinate basis when $\omega_1=0$ and then find $\omega_3$ as a function of $\omega_1$ and $\omega_2$. I know how to express the velocity of $I$ when $\omega_2=0$, which I solved, but after trying for more than $2$ hours with different methods like changing coordinate systems and creating a third one centered at $O_2$, I could not find a satisfactory answer. Does any of you have an answer? Thanks!
When ${\omega}_1=0$, ${\omega}_2=(\frac{a_1}{a_2}+1){\omega}_3$ (the factor $1$ appears because ${\Sigma}_2$ makes one extra turn after having rolled one turn around ${\Sigma}_1$ in the $R_0$ coordinate basis). So ${\omega}_3=\frac{{\omega}_2}{(\frac{a_1}{a_2}+1)}$. When ${\omega}_1\neq0$, we have to add ${\omega}_1$ to ${\omega}_3$, so: ${\omega}_3=\frac{{\omega}_2}{(\frac{a_1}{a_2}+1)}+{\omega}_1$. You can use this expression for ${\omega}_3$ to find the linear velocities (involving the $\sin$ and $\cos$ functions).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/562702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Electric field energy density In vacuum, the energy density of the electric field is given by $\mathcal{E}=\epsilon_0\frac{E^2}{2}$ with $E$ the total electric field present. So, if you have a static $E_0$ and dynamic $e(t)$ field, the energy density becomes $$\mathcal{E}=\epsilon_0\frac{\left[E_0+e(t)\right]^2}{2} = \epsilon_0\frac{E_0^2 +2E_0e(t)+e(t)^2}{2}\,.$$ Is this correct? What does the term $2E_0e(t)$ physically represent? It looks like an additional energy contribution from the interaction between the two fields...
The cross term represents interference. It is the term that makes it so that the energy density is reduced when the two fields are in opposite directions and so that the energy density is increased when the two fields are in the same direction.
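As a small worked illustration (assuming a sinusoidal perturbation $e(t)=e_0\cos\omega t$ parallel to $E_0$, an assumption not made in the question): the cross term $\epsilon_0E_0e_0\cos\omega t$ averages to zero over a full cycle, $$\langle\mathcal E\rangle=\frac{\epsilon_0}{2}\left(E_0^2+\langle 2E_0e(t)\rangle+\langle e^2(t)\rangle\right)=\frac{\epsilon_0}{2}\left(E_0^2+\frac{e_0^2}{2}\right),$$ so on time average the interference term adds no net energy; instantaneously it just redistributes energy between the moments when the two fields reinforce and the moments when they oppose each other.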
{ "language": "en", "url": "https://physics.stackexchange.com/questions/562817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Standing waves in optical cavities An optical cavity is "an arrangement of mirrors that forms a standing wave cavity resonator for light waves" (Wikipedia). The possible standing wave patterns for such a structure are like these: As you can see, the vertical black lines (which are the mirrors) are the nodes of the standing waves, since they force the wave to be 0 at those points. Well, I have studied a similar situation for electromagnetic resonant cavities. In such devices, the mirrors were replaced by walls made of perfect electric conductors, and these faces were the nodes of the standing waves because they forced the tangential electric field to be 0 along them (which is the interface condition for perfect electric conductors). But in this case, the walls are generic mirrors, so I do not understand why they force the wave to be 0 along them. So my question is: why do the mirrors force the wave to always have 0 amplitude, i.e. why are the mirrors the nodes of the standing waves?
The reflected wave acquires a phase shift of $\pi$. If 100% of the light is reflected, the amplitude at the mirror vanishes, because the phase shift flips the sign, $e^{i \pi}=-1$. Referring to the comments: The following graph shows the situation, where the mirror reflects only 50% of the incident light: The blue points are the superposition. Personally I would not describe this as a superposition of a standing wave and a propagating wave. Although this formulation is mathematically fine, it does not describe what I see in the graphic.
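A sketch of why the $\pi$ shift puts a node at the mirror (taking a perfect mirror at $x=0$ and an incident wave $E_0\sin(kx-\omega t)$ coming from $x<0$): demanding that the total field vanish at the mirror fixes the reflected wave, and the superposition is $$E(x,t)=E_0\sin(kx-\omega t)+E_0\sin(kx+\omega t)=2E_0\sin(kx)\cos(\omega t),$$ a standing wave with $E(0,t)=0$ for all $t$; the reflected wave written in this form carries exactly the sign flip ($\pi$ phase) mentioned above.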
{ "language": "en", "url": "https://physics.stackexchange.com/questions/562942", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How is the frequency of a wave defined if it propagates in three different directions? Let's consider a wave which propagates in two or three directions, like for instance an electromagnetic wave inside a rectangular waveguide totally closed on two ideal conductor surfaces. The walls of the guide force the wave to assume an integer number of half-wavelengths along x, y, z: $$l_{x,y,z} = m_{x,y,z} \cdot \frac{\lambda}{2}$$, with m integer. When we indicate a certain mode, such as $TM_{2,1,1}$, we mean that there are 2 half-wavelengths along x, 1 along y and 1 along z. Suppose now $$l_{x,y,z} = l$$ (i.e. all dimensions are equal: the waveguide is a cube). Obviously lambda will be different for x, y, z: $$\lambda_x = \frac{2l}{m_x}=l$$ $$\lambda_y = \frac{2l}{m_y}=2l$$ $$\lambda_z = \frac{2l}{m_z}=2l$$ So, the wavelength is different along the different directions. What does it mean? In physics I have always studied that frequency corresponds to wavelength, if the propagation medium is fixed. What is the definition of frequency in this case?
So, the wavelength is different along the different directions. What does it mean? In physics I have always studied that frequency corresponds to wavelength, if the propagation medium is fixed. What is the definition of frequency in this case? The $\text{2D}$ or $\text{3D}$ solution to the wave equation doesn't have a single frequency, it has a spectrum of frequencies. For the $\text{2D}$ case: $$u_{tt}=c^2(u_{xx}+u_{yy})$$ Assume (Ansatz): $$u(x,y,t)=X(x)Y(y)T(t)$$ $$\frac{1}{c^2}XYT''=TYX''+TXY''$$ Divide by $XYT$: $$\frac{1}{c^2}\frac{T''}{T}=\frac{X''}{X}+\frac{Y''}{Y}=-n^2$$ where $n$ is a real number. $$\frac{1}{c^2}\frac{T''}{T}=-n^2$$ $$\frac{X''}{X}+\frac{Y''}{Y}=-n^2$$ $$\frac{X''}{X}=-n^2-\frac{Y''}{Y}=-m^2$$ $$X''+m^2X=0$$ $$X=A\sin mx+B\cos mx$$ Assume a square domain with length $L$ and homogeneous BCs: $$u(0,y,t)=u(L,y,t)=0$$ And: $$u(x,0,t)=u(x,L,t)=0$$ $$\Rightarrow B=0$$ $$mL=\pi p \Rightarrow m=\frac{\pi p}{L}$$ For $p=1,2,3,4,...$ $$X_p(x)=A_p\sin\Big(\frac{\pi px}{L}\Big)$$ Similarly for $Y$: $$Y_q(y)=D_q\sin\Big(\frac{\pi qy}{L}\Big)$$ For $q=1,2,3,4,...$ **Note that** there is equivalence between @Michael Seifert's $k$ values and what we use here, e.g.: $$X_p(x)=A_p\sin k_xx$$ with: $$k_x=\frac{\pi p}{L}$$ For $p=1,2,3,4,...$ We can also show: $$n^2=\frac{\pi^2}{L^2}(p^2+q^2)$$ Going back to: $$\frac{1}{c^2}\frac{T''}{T}=-n^2$$ $$T''(t)=-c^2n^2T(t)$$ $$T''(t)+c^2n^2T(t)=0$$ $$T(t)=c_1\cos(nct)+c_2\sin(nct)$$ Use an initial condition: $$\partial_t u(x,y,0)=0 \Rightarrow \frac{\text{d}T(0)}{\text{d}t}=0\Rightarrow c_2=0$$ So: $$T_n(t)=c_{1,n}\cos(nct)$$ Putting it all together: $$u_{n,p,q}(x,y,t)=c_{1,n}\cos(nct)A_p\sin\Big(\frac{\pi px}{L}\Big)D_q\sin\Big(\frac{\pi qy}{L}\Big)$$ Using the Superposition Principle: $$\boxed{u(x,y,t)=\displaystyle\sum_{p=1}^{\infty}\displaystyle\sum_{q=1}^{\infty}c_{1,n}\cos(nct)A_p\sin\Big(\frac{\pi px}{L}\Big)D_q\sin\Big(\frac{\pi qy}{L}\Big)}$$ The coefficient $c_{1,n}A_p D_q$ can be determined with the initial condition: $$u(x,y,0)=f(x,y)$$ with a Fourier series (not shown). This would give you the amplitude spectrum. We have: $$\cos(nct)=\cos\omega_nt$$ So: $$\boxed{\omega_n=nc=\frac{\pi c}{L}\sqrt{p^2+q^2}}$$ with: $$n=\frac{\pi}{L}\sqrt{(p^2+q^2)}$$ For $p=1,2,3,...$ and $q=1,2,3,...$ So the solution shows an infinity of $\omega_n$ (frequencies). The solution can be extended to the $\text{3D}$ case by adding: $$Z(z)=G_r\sin\Big(\frac{\pi rz}{L}\Big)$$ and: $$n=\frac{\pi}{L}\sqrt{(p^2+q^2+r^2)}$$ For $p=1,2,3,...$ and $q=1,2,3,...$ and $r=1,2,3,...$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/563110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Any boundary conditions missing from this problem? Recently I was solving some boundary value problems in electrostatics. I stumbled upon a problem with an infinitely long cylinder (axis along the $z$-direction and radius $a$) with a plate inside it (centered at $z=0$). The plate is perpendicular to the axis of the cylinder and has the same radius as the cylinder. The plate is maintained at a constant potential $V_0$, and the surface of the cylinder is maintained at a potential $V(\varphi, z)$. It is asked to find $\Phi(\rho,\varphi,z)$ inside the cylinder. Since it is an infinitely long cylinder I've used eigenfunctions of the form $e^{ikz}$ and $e^{-ikz}$ and put the boundary conditions accordingly. But I'm missing a boundary condition. Also, I'm considering one of the modified Bessel functions $I_\nu (x)$, as the region of consideration is restricted to the inside of the cylinder. Can someone help me with this? Edit: $\Phi(\rho,\varphi,z)$ is the electrostatic potential. $\Phi(a,\varphi,z) = V(\varphi,z)$ and $\Phi(\rho,\varphi,0) = V_0$ are the two boundary conditions. Since $V(\varphi,z)$ is a general function, I think I'm missing one boundary condition.
Correct me if I'm missing something @HeyDosa, but for the Laplace equation, $\nabla^2 \phi =0$, $\phi$ has the uniqueness property that if it is specified for the boundary of the region(volume) where you want to find it, then it is uniquely determined. Your region of interest is (in cylindrical coordinates), $S = [0,a]\times [0,2\pi]\times [-\infty,0] \cup [0,a]\times [0,2\pi]\times [0,\infty]$. Its boundary is simply given by $$\partial S = \{(a,\phi,z) \cup (\rho,\phi,0) | z \in \mathbb{R}, \phi \in [0,2\pi], \rho \in [0,a]\}$$ (pardon my sloppy notation, but I hope you get the idea.) The boundary conditions $\phi(a,\phi,z)=V(\phi,z)$ takes care of the first half of $\partial S$ and $\phi(\rho,\phi,0) =V_0$ takes care of the second half. With these two, the whole boundary is taken care of and therefore the solution must be unique (so mathematically no other degree of freedom is left to be fixed by an additional BC).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/563251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Meaning of the Planck Temperature I don't understand what makes the Planck Temperature the "absolute hot". To my understanding Temperature is just a measure of the kinetic energy of the particles, so is the Planck Temperature the temperature at which the particles are moving at a speed so close to the speed of light that their behavior can no longer be understood? If not, what are the formulas that break down as an object is simulated above the Planck Temperature?
The short answer is "we don't know" whether there is an "absolute hot" or, if there is, what it is. This column by Peter Tyson: https://www.pbs.org/wgbh/nova/zero/hot.html is what I point people to when they want to know why I can't explain it better. But here goes my attempt: as the thermodynamic temperature rises from absolute zero, where particles don't exhibit significant movement, matter changes. First the classic phase changes from solid to liquid to gas, then as temperature continues rising, molecules can no longer exist, atoms are broken down until eventually at the Hagedorn temperature hadronic matter (ordinary matter) "evaporates", for lack of a better word. Current theories predict that a similar boundary exists at about $10^{30}K $ where quarks/gluons will similarly no longer exist, although obviously we have no way of actually testing this. The Planck temperature, ~1.42 x $10^{32}K$, is where the models and theories run into the wall. We literally have nothing yet to predict how the universe behaves beyond this point, although what models there are predict that at this point particle energies would be enormous. Gravitational forces would become as strong as the other fundamental forces. In short, someone has to come up with a quantum theory of gravity to start taking a crack at the problem - Holy Grail anyone?
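For reference, the value quoted above comes from combining the fundamental constants: $$T_P=\frac{1}{k_B}\sqrt{\frac{\hbar c^5}{G}}\approx1.42\times10^{32}\ \mathrm{K},$$ i.e. the temperature at which a typical thermal energy $k_BT$ equals the Planck energy, so quantum-gravitational effects can no longer be ignored.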
{ "language": "en", "url": "https://physics.stackexchange.com/questions/563371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Sound Horizon in cosmology I was trying to write the sound horizon in terms of the scale factor; however, I don't understand all the steps in the derivation. I know that I should get: $$r_{s}=\int_{0}^{a_{d}}\frac{da}{a^{2}H(a)}$$ What I tried: $$c_{s}dt=a(t) dr$$ where $a(t)$ is the scale factor as a function of time and $c_{s}$ is the speed of the sound wave. Integrating the last equation: $$r_{s}=\int _{0}^{t_{d}}\frac{dt}{a(t)} $$ If I use the relation $H=\frac{\dot{a}}{a}$, then: $$\mathrm{d} t= \frac{\mathrm{d} a}{Ha}$$ So $$r_{s}=\int_{0}^{a_{d}}\frac{da}{a^{2}H(a)} $$ Question: I don't understand the meaning of the equation $c_{s}dt=a(t)dr$; I just wrote that equation because I found it in a book. So where does the equation $c_{s}dt=a(t)dr$ come from? EDIT: If you have another derivation it would be really helpful if you explain it or provide a link to read about it. I searched but couldn't find anything clear; most of the books just give the formula and don't explain where it comes from.
The equation for the sound horizon is simply the equation for the particle horizon, with the speed of light replaced by the speed of sound, there's nothing more to it. Nevertheless, you have to keep in mind that the speed of sound also changes with time, since the matter density dilutes with a growing scale factor, so you have to treat the speed of sound as a function of the scale factor and integrate over it as well.
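Putting that statement into formulas (a standard parametrization, with $R\equiv 3\rho_b/4\rho_\gamma$ the baryon-to-photon momentum-density ratio): $$r_s=\int_0^{a_d}\frac{c_s(a)\,da}{a^2H(a)},\qquad c_s(a)=\frac{c}{\sqrt{3\,(1+R(a))}},$$ which reduces to the pure photon-gas value $c/\sqrt3$ when the baryon loading is negligible and falls below it as $R$ grows.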
{ "language": "en", "url": "https://physics.stackexchange.com/questions/563567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What do $\ell$ and $A$ precisely mean in the formula for electrical resistance? The formula for resistance is $$R=\rho\frac{\ell}{A}$$ Generally in most textbooks it is simply written that $\ell$ is the length of the conductor and $A$ is its cross-sectional area. But my question is which length and area we need to consider, as a 3D body has many possible lengths and cross-sectional areas. Textbooks simply take an example of a solid cuboid whose opposite faces are supplied with potential difference. But what if I change the faces across which the potential difference is applied (for example if I choose two adjacent faces of the same cuboid), or change the shape of the conductor itself (for example a solid sphere whose two faces, across which the potential difference is applied, are the two opposite hemispherical surfaces)? I'm a beginner in electromagnetism and have a lot of new learning to do. So please help.
Textbooks simply take an example of a solid cuboid whose opposite faces are supplied with potential difference. But what if I change the faces across which potential difference is applied (for example if I choose two adjacent faces of the same cuboid)

It all depends on the direction of current flow. Let's take a cuboid with side lengths $\ell_x$, $\ell_y$, $\ell_z$ (in the $x$, $y$ and $z$ directions).
* Now let's connect a voltage between the left and right faces of the cuboid, so that current flows in the $x$-direction. Then the length is $\ell=\ell_x$ and the cross-section is $A=\ell_y\ell_z$. So the resistance becomes $R=\rho\frac{\ell}{A}=\rho\frac{\ell_x}{\ell_y\ell_z}$.
* As another example, let's connect a voltage between the top and bottom faces of the cuboid, so that current flows in the $z$-direction. Then the length is $\ell=\ell_z$ and the cross-section is $A=\ell_x\ell_y$. So the resistance becomes $R=\rho\frac{\ell}{A}=\rho\frac{\ell_z}{\ell_x\ell_y}$.
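As a tiny numerical illustration (the numbers are made up: a copper-like resistivity and an arbitrary 1 cm x 2 cm x 5 cm block, not values from the question), a short Python sketch comparing the two orientations:

```python
rho = 1.7e-8                     # resistivity in ohm*m (roughly copper)
lx, ly, lz = 0.01, 0.02, 0.05    # side lengths in metres (arbitrary example)

# Current driven along x: length lx, cross-section ly*lz
R_x = rho * lx / (ly * lz)

# Current driven along z: length lz, cross-section lx*ly
R_z = rho * lz / (lx * ly)

print(f"R along x: {R_x:.3e} ohm")
print(f"R along z: {R_z:.3e} ohm")
# The same block has different resistances depending on which pair
# of faces the potential difference is applied across.
```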
{ "language": "en", "url": "https://physics.stackexchange.com/questions/563802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Which force is doing the work here? My text book (Fundamentals of Physics by Halliday, Resnick, and Walker) mentions the following about the work done in internal energy transfers: An initially stationary ice-skater pushes away from a railing and then slides over the ice. Her kinetic energy increases because of an external force F on her from the rail. However, that force does not transfer energy from the rail to her. Thus, the force does no work on her. Rather, her kinetic energy increases as a result of internal transfers from the biochemical energy in her muscles. This is confusing me a lot. The energy transfer is clearly internal but work must be done by the force as work done is defined as the (dot) product of force and displacement and the definition makes no reference to any transfer of energy. I thought work done by a force just means that the force is causing a transfer of energy to (or from) an object, and gives no information about whether the energy is coming from the object exerting the force. My confusion is not over whether work is being done or not but which force is doing the work which ends up causing the change in kinetic energy.
Let's make a simple example. A block with a compressed spring attached to it is on a frictionless horizontal surface against a stationary, immovable wall. The spring is released, and the block is then pushed away from the wall, thus gaining kinetic energy. The relevant forces here are 1) the force between the spring and the block and 2) the force between the spring and the wall. Which force does work here? Force 1 did, because it is applied over a distance. The energy is transferred from the potential energy stored in the spring to the kinetic energy of the block. In your example, the skater is the block, and the arms/muscles are the spring.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/563889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 3 }
The differential cross section and cross section As we know the total cross section can always be obtained from the differential cross section: $$\sigma = \int_0 ^{2 \pi } \int_{0}^{\pi } \frac{d \sigma}{ d \Omega} d \Omega $$ I understand how the integration is done. For example, sometimes I see the differential cross section formula is written as: $$ \frac{d \sigma}{ d^4 q \ d \Omega} = K (1 + \cos ^2 \theta ) $$ The $d^4q = \frac{1}{2}dq_0 \ dq_t^2 \ dq_z$ How would I write the total cross section? $$\sigma = \int dq_0 \ dq_t^2 \ dq_z \int_0 ^{2 \pi } \int_{0}^{\pi } \frac{d \sigma}{ d \Omega} d \Omega \ \sin \phi \ K \ (1 + \cos ^2 \theta ) \ \ \ \ \ ?$$
Your first formula makes no sense. You're integrating over the solid angle, which is OK, but then you're integrating another time over no variable in the $[0,\pi]$ interval, which again makes no sense. In general the differential cross section can be given in a number of different ways depending on how one treats the phase space integral. In fact, the differential cross section for a two-particle scattering process is given by $$d\sigma = \frac{1}{4\sqrt{(p_1p_2)^2-m_1^2m_2^2}}(2\pi)^4\delta^4\left(\sum_f p_f-\sum_i p_i\right)|\mathcal{M}_{fi}|^2\prod_{i=1}^n\frac{d^3p_i}{(2\pi)^3 2E_i}$$ and as you can see there are $n$ integrations to be done, one for the phase space of every final-state particle. One could even not bother to integrate over any phase space and instead study the differential cross section $$\frac{d\sigma}{d^3p_1\,d^3p_2\,\dots d^3p_n} = \frac{1}{4\sqrt{(p_1p_2)^2-m_1^2m_2^2}}(2\pi)^4\delta^4\left(\sum_f p_f-\sum_i p_i\right)|\mathcal{M}_{fi}|^2\prod_{i=1}^n\frac{1}{(2\pi)^3 2E_i}$$ or instead, which is often done, integrate over some specific phase space and leave behind others. One example of that is exactly the differential cross section over the solid angle. As you gave, one often uses the differential cross section $$\frac{d\sigma}{d\Omega}$$ which is what one gets in a two-particle scattering process $a+b\to c+d$ integrating over the phase space of one variable and then integrating the other variable only over the energy. To get back the full cross section from this, one needs to integrate over the solid angle, which in three dimensions gives $d\Omega= \sin\theta d\theta d\phi$ and so $$\sigma = \int_{4\pi} \frac{d\sigma}{d\Omega}\,d\Omega = \int_0^\pi\int_0^{2\pi}\frac{d\sigma}{d\Omega}\sin\theta\, d\phi\, d\theta$$ The same goes for the other integral. Moreover $d^4q$ is not what you gave, but rather $d^4q = dq^0\, d^3q$ where $d^3q$ is the normal Euclidean measure and $dq^0 = dE$. Just a little post scriptum: I'm not sure the second differential cross section you gave us makes any sense. But I could be wrong on this one.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/563990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is the internal resistance of a battery considered outside the terminals although it is present between the terminals inside the battery? In an ideal battery the internal resistance is zero, whereas in a non-ideal battery there is some internal resistance. This internal resistance is due to the battery material (electrolyte) and is present inside the battery, between the terminals. So why do we represent it, and eventually do calculations, by considering that internal resistance to be connected to the battery terminals externally? I'm totally unable to get the point. Please help.
Because there is no potential difference inside the cell, since the battery on its own does not form a closed circuit. The circuit is internally separated, or 'terminated', at the posts. Voltage is defined as the difference between the electric potential at two points, or the work required per unit charge to move charge between those points; connect the battery terminals and you get electron flow across the connecting structure, which is limited by the resistance of the reactive components in the circuit. The connecting mechanism must be included in the calculation, because electromotive conduction is a diabatic (thermally inclusive) process.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/564196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 5 }
How does one (physically) interpret the relationship between the graviton and the vielbein? One can naturally think of the vielbein $e_\mu^a$ as a gauge field corresponding to local translation invariance. Moreover, the metric may be written $$g_{\mu\nu}=e_\mu^a e_\nu^b \eta_{ab}.$$ I have always seen the graviton $h$ given by $$g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}.$$ Obviously, the graviton is the gauge field that carries the force of gravity. So, I suppose that means I could write $$h_{\mu\nu}=e_\mu^a e_\nu^b \eta_{ab}-\eta_{\mu\nu},$$ but my question is really this: how does one (physically) interpret the relationship between the graviton and the vielbein? In particular, I'm interested in how to interpret it from the perspective of quantum fields.
Your first equation is more general than the linearised version of the second equation, so let's focus on the more general relation. In general your vielbein $e^\mu_a(x)$ depends on the coordinates of your manifold. So the vielbein represents a local frame transformation away from flat space. And this phenomenon, after using the geodesic equation, is what we call gravity. The upshot is that the vielbein contains all the degrees of freedom of the graviton, so sometimes it is called the graviton in the literature. And obviously in linearised gravity the dofs are contained in $h_{\mu \nu}$, so we call this one the graviton. Now, to interpret this in quantum terms, well, that's a tall order. Like you said, it is the gauge field of gravity and it is a spin-$2$ particle in terms of representations of the Lorentz group, which is a subgroup of Diff($M$). Edited my mistake about the large group of diffeomorphisms (thanks to reading madmax's comment).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/564544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Why does glass, in spite of being amorphous, often break along very smooth surfaces? When a crystalline material breaks, it often does so along planes in its crystalline structure. As such this is a result of its microscopic structure. When glass breaks however, the shapes along which it breaks are typically very smooth as well, rather than being very irregular or jagged. Being amorphous, one shouldn't expect any smooth surfaces (of more than microscopic size) across which the atoms are bonding more weakly than in other direction to be present at all. One possibility that I can think of is that real glass is locally crystalline, and some surfaces of weaker bonding are actually present in the material, and an ideal glass would behave differently. Another possibility is that unlike in crystalline materials, this is not a result of its microscopic structure, but rather of its macroscopic structure namely its shape: when the glass is hit, it vibrates in a way that is constrained by its shape. We see that harmonic vibrations in a solid typically has very smooth shapes along which the amplitude is 0 (nodal patterns), like in Chladni plates Does anyone know what is the actual reason?
As PM 2Ring has mentioned in a comment, if the crack is due to a mechanical impact (as opposed to gradually increasing stress beyond a critical value), then the shape of the crack is defined by the shape of the shock waves / vibration patterns, in addition to the structure of the material. In crystalline materials with natural planes of separation this effect contributes very little to the final shape of the crack, but in amorphous materials such as glass it leads to clearly visible patterns of shock waves (conchoidal fracture) propagating from the initial impact point. Polycrystalline materials and crystals with no planes of weakness also produce similar cracks on impact.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/564751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38", "answer_count": 3, "answer_id": 0 }
Do Maxwell's equations contain any information on the time evolution of the current density $J$? The answers to Can the Lorentz force expression be derived from Maxwell's equations? make clear that Maxwell's equations contain only information on the evolution of the fields, and not their effects upon charges; the Lorentz force equation is an added equation. Does this imply that any arbitrary time evolution of a current density can be defined beforehand, and the corresponding fields always found that satisfy Maxwell's equations?
Maxwell's equations place a constraint on the current, namely that it be conserved. To see this, take the divergence of Ampère's law; since $\nabla \cdot (\nabla \times \mathbf{B}) = 0$, this gives $$0 = \mu_0 \nabla \cdot \mathbf{J} + \mu_0 \epsilon_0 \nabla \cdot \frac{\partial \mathbf{E}}{\partial t}$$ which, using Gauss's law $\nabla \cdot \mathbf{E} = \rho/\epsilon_0$, is equivalent to $$\nabla \cdot \mathbf{J} = - \epsilon_0 \frac{\partial}{\partial t} (\nabla \cdot \mathbf{E}) = - \frac{\partial \rho}{\partial t}.$$ This is precisely the statement of charge conservation. If you plug in a $\rho(\mathbf{r}, t)$ and $\mathbf{J}(\mathbf{r}, t)$ that aren't conserved, then the equations will have no solutions at all.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/564983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Newtonian Limit of Schwarzschild metric The Schwarzschild metric describes the gravity of a spherically symmetric mass $M$ in spherical coordinates: $$ds^2 =-\left(1-\frac{2GM}{c^2r}\right)c^2 \, dt^2+\left(1-\frac{2GM}{c^2r}\right)^{-1}dr^2+r^2 \,d\Omega^2 \tag{1}$$ Naively, I would expect the classical Newtonian limit to be $\frac{2GM}{c^2r}\ll1$ (Wikipedia seems to agree), which yields $$ds^2 =-\left(1-\frac{2GM}{c^2r}\right)c^2 \, dt^2+\left(1+\frac{2GM}{c^2r}\right)dr^2+r^2 \,d\Omega^2 \tag{2}$$ However, the correct "Newtonian limit" as can be found for example in Carroll's Lectures, eq.(6.29), is $$ds^2 =-\left(1-\frac{2GM}{c^2r}\right)c^2 \, dt^2+\left(1+\frac{2GM}{c^2r}\right)\left(dr^2+r^2 \,d\Omega^2\right) \tag{3}$$ Question: Why is the first procedure of obtaining the Newtonian limit from the Schwarzschild solution incorrect?
Carroll is merely matching the Schwarzschild solution to the linearized weak field solution, treated as a consistent truncated Laurent series in $c^{-1}$, cf. this Phys.SE post. The main point is that the spatial components of the metric are subleading in a $c^{-1}$ expansion and may receive non-trivial contributions in order to maintain the EFE (Einstein field equations).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/565070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Are there any quantum effects which we can see in every day life? I am wondering if there are any natural phenomenon in every-day life that cannot be explained by classical physics but can only be explained by quantum mechanics. By classical physics, I mean Newtonian mechanics and Maxwell's electromagnetic theory. I know that there are macro-scale quantum phenomena such as superconductivity, but that isn't something that we can see in ordinary life.
The whole "color temperature" notion and the finite speed of the radiative heat exchange. A classical blackbody has an infinite power of electromagnetic radiation at any non-zero absolute temperature (see the UV catastrophe). One needs quantized light in order to understand thermal radiation. The whole "chemistry" thing is based on the fact that "atoms" (quanta of matter) do exist. Atoms themselves, and the substances as a whole, have a finite volume because their electrons have quantized energy levels. Classical atoms would have decaying orbits of their electrons, and these electrons would fall onto their nuclei. Shot noise: it appears in low-light photography, in sound processing and in a lot of other places. It wouldn't happen, and the noise as a whole would have different properties, if it wasn't for the finite number of the signal carriers (electrons, photons). Star twinkling... Well, our world is quantum-based. I can add more and more.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/565248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 9, "answer_id": 4 }
Vacuum polarization or electron with structure? Is it possible to construct some charge density $ρ(r)$ to get the Uehling-Potential? $${\displaystyle V_{\text{Uehling}}(r)\approx -Z\alpha \hbar c{\frac {1}{r}}\left(1+{\frac {\alpha }{8\pi ^{2}{\sqrt {2}}}}\left({\frac {\lambda }{r}}\right)^{3/2}e^{-4\pi {\frac {r}{\lambda }}}\right)+{\mathcal {O}}(\alpha ^{3})}$$ The electric potential of a continuous charge distribution $ρ(r)$ is $${\displaystyle V_{\mathbf {E} }(\mathbf {r} )={\frac {1}{4\pi \varepsilon _{0}}}\int {\frac {\rho (\mathbf {r} ')}{|\mathbf {r} -\mathbf {r} '|}}d^{3}r'.}$$ So interpreting the difference from the Coulomb potential not as vacuum polarization, but as some structure in the electron?
The entire theory (quantum electrodynamics) used by Uehling to derive this potential, is based on the assumption that the electron is a point particle. So the mainstream interpretation of the extra charge density “outside” the electron is that it is polarization of the vacuum by the point electron. Any other conclusion contradicts the premises of the theory.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/565324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Probability current density confusion As we all know, the probability current density in quantum mechanics is defined as: $$\textbf{J}=\dfrac{\hbar}{2mi}(\Psi^* \nabla \Psi-\Psi \nabla \Psi^*)$$ For simplicity let us work in one dimension and let us suppose a wave function $\Psi= A\ \text{cos}\ {kx}$. Applying the above definition and thus using $$J=\dfrac{\hbar}{2mi}\Big(\Psi^* \dfrac{\partial \Psi}{\partial x}-\Psi \dfrac{\partial \Psi^*}{\partial x}\Big)\quad\quad \text{we get:}\quad\quad J=0$$ Using the equation of continuity this means that: $$\dfrac{\partial \rho}{\partial t}=0,$$ which after solving gives us: $\rho=f(x)$. Thus the probability density at any point is independent of time. Now, this result will follow even if we take $\Psi= A\ \text{cos}\ {(kx-\omega t)}$. But here we can clearly see that the probability density i.e. $$|\Psi|^2=|A|^2\ \text{cos}^2\ {(kx-\omega t)}$$ is time dependent. Is it $A$ which carries the time dependence and is responsible for this apparent discrepancy?
A solution of the free one-dimensional Schroedinger equation: $$ i\hbar \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m} \frac{\partial^2 \psi}{\partial x^2}\,\,\,\quad \text{(1)} $$ is: $$\psi = A e^{i(kx -\omega t)} \quad\quad\quad \text{(2)} $$ where $\omega$ fulfills the condition $\hbar \omega = \frac{(\hbar k)^2}{2m}$. If one tentatively tries to construct a $\cos$-solution one would write $$\psi = \frac{A}{2} e^{i(kx -\omega t)} + \frac{A}{2} e^{-i(kx -\omega t)} = A \cos (kx -\omega t)$$ Upon checking whether $$\psi = A e^{-i(kx -\omega t)}$$ solves the Schroedinger equation one finds a solution only if the following condition is fulfilled: $$E = \hbar \omega = -\frac{(\hbar k)^2}{2m}$$ However, negative energy solutions are not allowed in the non-relativistic theory, therefore this solution has to be discarded; consequently the $\cos$-solution also has to be discarded. This can, of course, be checked directly by inserting $\cos (kx-\omega t)$ in the free Schroedinger equation (1); it is not a solution. So one cannot expect it to fulfill the continuity equation. The only reasonable solutions in this context are therefore either (2) or $$\psi(x) = \cos(kx)\quad\quad\quad \text{(3)} $$ for the free time-independent Schroedinger equation $$ \frac{\partial^2 \psi}{\partial x^2} +\frac{2m}{\hbar^2}E\,\psi =0$$ with the condition $\frac{(\hbar k)^2}{2m} =E$. Both solutions (2) and (3) fulfill the continuity equation, even if in the case of (3) it turns out to be quite uninteresting. Solution (3) can of course be upgraded to a time-dependent solution by choosing $$\psi(x,t) = e^{-i\omega t} \cos(kx)$$ Of course appropriate superpositions of either (2) or (3) would also be solutions, but using the right sign of $i$ in case of time-dependent solutions. EDIT In case of the time-dependent solution (2) the probability current $J$ is non-zero, but its divergence is zero, therefore the continuity equation $$ \dot{\rho} + \nabla\cdot \vec{J} =0$$ is fulfilled even though $\dot{\rho}=0$.
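As a quick check of the claim (my addition, using sympy), one can insert both candidate wavefunctions into the free Schroedinger equation (1) with the dispersion relation $\hbar\omega=(\hbar k)^2/2m$ and verify that the plane wave (2) solves it while the travelling cosine does not:
```python
import sympy as sp

x, t, A = sp.symbols('x t A')
hbar, m, k = sp.symbols('hbar m k', positive=True)
w = hbar*k**2/(2*m)                                   # hbar*w = (hbar*k)^2 / (2m)

def residual(psi):
    """i*hbar*d_t(psi) + hbar^2/(2m)*d_xx(psi); zero iff psi solves eq. (1)."""
    return sp.simplify(sp.I*hbar*sp.diff(psi, t) + hbar**2/(2*m)*sp.diff(psi, x, 2))

plane  = A*sp.exp(sp.I*(k*x - w*t))
cosine = A*sp.cos(k*x - w*t)

print(residual(plane))    # 0        -> (2) is a solution
print(residual(cosine))   # nonzero  -> A*cos(kx - wt) is not
```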
{ "language": "en", "url": "https://physics.stackexchange.com/questions/565449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Why does higher frequency sound dampen faster in air? I know as a general fact that higher frequency sound is damped more quickly in air, so when music is heard from a distance only the bass part is audible. But I don't know what the physical reasoning behind this is. I couldn't find an answer anywhere on the internet (or the ones I found were too technical for me to understand). I would appreciate any insight on this topic. Note: I'm a second-year physics undergraduate and I know a little wave mechanics and acoustics.
Recall that wave equations will usually have a damping term and acoustic waves are no different. Wave damping is usually modeled with a velocity dependent term. The faster you try to distort the medium, the higher the damping. The viscosity of the fluid through which the sound wave is traveling plays a large role in the damping. The link here supposedly gives an interactive player so you can model attenuation for various parameters such as humidity and temperature. I can't seem to get it to work, but nonetheless the plot shows how absorption (damping) increases with frequency. He also mentions relaxation processes as a factor in sound attenuation. Hope this helps!
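As a rough illustration of the frequency dependence (my addition, not part of the answer above), the classical Stokes expression for viscous attenuation, $\alpha = 2\eta\omega^2/(3\rho c^3)$, already shows the key point that the damping grows as $f^2$. Real air absorbs noticeably more than this at high frequencies because of the molecular relaxation processes mentioned in the answer, so the numbers below are only a lower bound.
```python
import numpy as np

eta, rho, c = 1.8e-5, 1.2, 343.0     # air: shear viscosity [Pa s], density, sound speed

def alpha_stokes(f):
    """Classical (Stokes) viscous attenuation coefficient in Np/m."""
    w = 2*np.pi*f
    return 2*eta*w**2 / (3*rho*c**3)

for f in (100.0, 1_000.0, 10_000.0):
    print(f"{f:8.0f} Hz :  {alpha_stokes(f):.2e} Np/m")   # grows as f^2
```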
{ "language": "en", "url": "https://physics.stackexchange.com/questions/565593", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What is meant by the transverse nature of gravitational waves? Gravitational waves, like electromagnetic waves, are also transverse. By transversality of the EM waves, we mean that ${\vec E}\cdot\vec{k}={\vec B}\cdot\vec{k}=0$ i.e., the accompanying electric and the magnetic field (which are two $3$-vectors) vibrate in a plane perpendicular to the direction of propagation specified by the unit vector along ${\vec k}$. A gravitational wave $h_{\mu\nu}(z,t)=h^{(0)}_{\mu\nu}\sin(kz-\omega t)$ is a rank-$2$ tensor. * *How can I understand what is meant by the transverse nature of gravitational waves $h_{\mu\nu}$, mathematically? Physically/geometrically, which quantities vibrate perpendicular to the spatial direction $\hat{k}$? Note The comment points out that transversality of a gravitational wave means $\partial^\mu h_{\mu\nu}=0\leftrightarrow k^\mu h_{\mu\nu}=0$. But I find it hard to interpret geometrically for two main reasons. First, $k_\mu$ is not a $3$-vector but a $4$-vector and second, all the $16$ components of $h_{\mu\nu}$ are not physical.
The typical image of transverse gravitational waves is that as the wave travels thru spacetime, space itself stretches perpendicular to the direction of travel ($\hat{k}$). So space starts out "square", then get stretched vertically, goes back to square, gets stretched horizontally, goes back to square, etc. You can Google "transverse" "gravitational waves" (with the quotes) and go to their Images section to see many examples of this. I don't know if I can copy-and-paste an image from a website here due to copyright issues.
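A tiny numerical sketch of that picture (my addition, with a hugely exaggerated strain $h$ for visibility): a ring of free test masses in the plane transverse to the propagation direction gets alternately stretched along $x$ and squeezed along $y$ for the "+" polarisation.
```python
import numpy as np

h = 0.3                                      # exaggerated strain amplitude
phi = np.linspace(0, 2*np.pi, 12, endpoint=False)
x0, y0 = np.cos(phi), np.sin(phi)            # ring of test masses; wave travels along z

for wt in (0.0, np.pi/2, np.pi):             # three snapshots of the oscillation
    x = x0*(1 + 0.5*h*np.cos(wt))            # "+" polarisation: stretch along x ...
    y = y0*(1 - 0.5*h*np.cos(wt))            # ... squeeze along y, then the reverse
    print(f"wt = {wt:4.2f} :  max|x| = {np.abs(x).max():.3f},  max|y| = {np.abs(y).max():.3f}")
```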
{ "language": "en", "url": "https://physics.stackexchange.com/questions/565659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
I'm unable to wrap my head around the concept of antennas, and more specifically microstrip antennas. How does a microstrip antenna which is bent work? So, I had come to the conclusion that to understand how antennas work, it is best to assume light/electromagnetic waves act like photons/particles in the presence of receiving antennas. And therefore high frequency waves consume more energy and are less in intensity, but more in energy (as in, each photon has more punch to it). Like a torch light. And by that, antennas are nothing but metal wires which work on the photoelectric effect, and the slight spike in voltage, which imitates the frequency, is amplified and then decoded as intended. I also believe that a photon is released when a complete kink is formed at the transmission antenna, so for every half time cycle, a wave is released. To sum it up: * *A low frequency wave takes a long time to be released (as the half-cycle time period is long), but when it does, a large amount of it is released for a given standard transmission voltage. And for high frequency waves, the half cycle is small, so quick transmission takes place and therefore more information can be packed in. *So, since only half a wavelength needs to be analysed, it makes sense to have quarter wave antennas which use the image of the half wave. I just want to know the mistakes in my understanding. Especially the part where intensity is associated with frequency. Please provide a better understanding. After writing this and re-reading it, ironically, the reception and transmission part sounds more and more like a wave phenomenon than the photoelectric effect. Sorry for the question being all over the place.
The simplest way to think about a transmitting antenna is a surface (or linear shape) with an electric field oscillating at a fixed frequency everywhere on it. Then add in a position-dependent phase offset (possibly zero) that remains fixed, too.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/565817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Electron Capture or K-Capture and Heisenberg's Uncertainty principle I read about Electron Capture or K-Capture in radioactivity. There I found that the electron in the K shell is captured by the nucleus and, as a result, the atomic number of the element decreases by 1 unit. But Heisenberg's Uncertainty Principle states that the electron cannot fall into the nucleus because its speed would exceed the speed of light. So how is this possible?
The electrons are not in fixed orbits around the nucleus; the Bohr model is a semi-classical model of the real quantum mechanical solutions. The electrons are in orbitals, probability loci. Have a look, for simplicity, at the orbitals of the energy levels available to the electron of the hydrogen atom. Note that for s orbitals there is a probability for the electron to overlap the nucleus. Electron orbitals in the K shell have a higher probability of overlapping with the nucleus. This does not defy the Heisenberg uncertainty principle, which is an envelope that constrains the possibility of measuring two variables, like position and momentum, at the same time. There is no measurement when an electron orbital gives a probability for the electron to be in the nucleus, and this allows it to further interact with the nucleons.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/565967", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why do we add the individual quantities to find the total amount of a system's "quantity"? Is this by definition of "total"? Why, to find e.g. the total energy of a system of (non-interacting) particles, do we add their individual kinetic energies? Is total kinetic energy defined to be that sum? It may seem obvious for scalar quantities like energy, but what if we consider vectors? For example, the total momentum of a system of particles is the vector sum of their individual momenta. Is this again a definition? I think it is a silly question but I can't understand why we do such "additions". To make the question more clear: I am asking whether the momentum/energy/mass of a system is defined to be that sum over all particles. I mean, we could define the mass of a system to be: $$M\equiv\frac{1}{2}\sum_{i=1}^{n}m_i$$ But it is not the case. A definition is not right or wrong. It is just a definition.
In short: yes, it's by definition. The longer explanation is: in physics we postulate that certain quantities behave like vectors and others behave like scalars, etc. This allows us to develop mathematical models to understand and predict phenomena. But why are we allowed to postulate such claims? It is simply on the basis of empirical evidence. For example, we observe that forces and velocities behave like vectors, at least at first glance, so we try to understand their behaviour using mathematical models involving vectors.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/566158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Impulse operator on real wave function The impulse operator in quantum mechanics is given by \begin{align} \hat{p} = \frac{\hbar}{i}\nabla \end{align} As a Hermitian operator, the expected value of this operator $\langle{p}\rangle = \langle \psi|\hat{p}\psi\rangle$ should be real. However, for a real wave function $\psi(\vec {r})\in \mathbb{R}$ (a valid solution to the Schrödinger equation) the resulting integral is imaginary: \begin{align} \langle{p}\rangle = \frac{\hbar}{i}\int d^3r \cdot \psi \nabla \psi \end{align} Is there an error in my thinking or is it impossible to calculate the expected value that way? An alternative approach would be to use the Fourier transform.
If your wavefunction $\psi$ is real, as is the case when you are dealing with a solution to the time-independent Schrodinger equation, then indeed the expectation value is automatically $0$: the expectation value must be real, while the integral $-i\hbar\int dx\, \psi^* \nabla\psi$ is purely imaginary for real $\psi$, so it can only be $0$. If the wavefunction is complex, then one cannot say: the expectation can be $0$ or not. For instance, the combination of harmonic oscillator (h.o.) wavefunctions \begin{align} \psi(x)=\alpha \psi_n(x)+i\beta\psi_{n+1}(x)\, ,\qquad \alpha^2+\beta^2=1\, ,\quad \alpha,\beta\in\mathbb{R} \end{align} will have non-zero $\langle p\rangle$. However, \begin{align} \psi(x)=\alpha \psi_n(x)+i\beta\psi_{n+2}(x)\, ,\qquad \alpha^2+\beta^2=1\, ,\quad \alpha,\beta\in\mathbb{R} \end{align} will have $\langle p\rangle=0$ even though it is a complex combination.
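A concrete check with sympy (my addition): for a real Gaussian packet $\langle p\rangle$ vanishes, while multiplying the same packet by $e^{ikx}$ gives $\langle p\rangle=\hbar k$.
```python
import sympy as sp

x = sp.symbols('x', real=True)
hbar, k, a = sp.symbols('hbar k a', positive=True)

def expect_p(psi):
    integrand = sp.conjugate(psi) * (-sp.I*hbar) * sp.diff(psi, x)
    return sp.simplify(sp.integrate(integrand, (x, -sp.oo, sp.oo)))

norm = (1/(sp.pi*a**2))**sp.Rational(1, 4)
psi_real    = norm * sp.exp(-x**2/(2*a**2))     # real Gaussian
psi_boosted = psi_real * sp.exp(sp.I*k*x)       # same packet with a plane-wave factor

print(expect_p(psi_real))      # 0
print(expect_p(psi_boosted))   # hbar*k
```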
{ "language": "en", "url": "https://physics.stackexchange.com/questions/566342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Measuring the Hubble constant in a curved universe In an article from the University of Chicago, July 17, 2020, it is stated that "Judging cosmic distances from Earth is hard. So instead, scientists measure the angle in the sky between two distant objects, with Earth and the two objects forming a cosmic triangle. If scientists also know the physical separation between those objects, they can use high school geometry to estimate the distance of the objects from Earth." That seems straightforward, except for the fact that high school geometry only works in flat space where the angles enclosed by a triangle add up to precisely 180 degrees. In a curved universe, a triangle can enclose either more or less than 180 degrees. Unless the curvature is known, triangulation shouldn't work reliably in a curved space. So my question is: in measurements of the Hubble Constant by the triangulation method, what assumptions are made about curvature of the universe? And, how well-founded are those assumptions?
What appears to be a sufficient answer to the question can be found in this SE answer by @JohnRennie, combined with a few other articles. I had confused "flat space" with "flat spacetime". As John Rennie said in that answer, spacetime is not flat in an expanding universe, but space can be flat. So, indeed, it is necessary to account for the expansion of space when measuring the Hubble constant via triangulation. The link given by @Layla provides the formula used to relate distance, physical separation, and angular separation, and space curvature. The formula is based on the FLRW model, described in this link provided by @Umaxo This NASA article outlines the discrepancies between results obtained by different measurement methods. Various approaches are taken to measure the curvature of space (space, not spacetime), including this article which describes a method using gravitational lensing. So, the answer is this: generally, when calculating the Hubble constant it is assumed that the universe is spatially flat but expanding at a rate that may change over time. The assumption of spatial flatness is well-founded, at least to a close approximation, on several types of astronomical observations.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/566485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What is 'covariant variation'? What is 'covariant variation'? As opposed to the usual variation with respect to a gauge parameter?
Briefly, Ref. 1 considers generalizations of Yang-Mills-type gauge theories based on a Leibniz algebra structure. Concretely, Ref. 1 defines a covariant variation as $$ \Delta {\cal A}~:=~e^{-{\cal A}}\delta e^{\cal A}. \tag{3.33}$$ In physics jargon, ${\cal A}$ is a gauge field; $\Lambda=\delta {\cal A}$ plays the role of an infinitesimal gauge transformation; and $\Delta {\cal A}$ is an infinitesimal covariant gauge transformation. References: * *R. Bonezzi & O. Hohm, arXiv:1910.10399; eq. (3.33).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/566835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why does critical temperature exist? This question has been previously asked over here and the comment and answer there have already answered my original question (the one that I had in mind), but the following question arises: * *Why isn't it possible for a fluid to form a persistent structure$^{\dagger}$ above the critical temperature? I mean, the atoms might be moving fast, but can't we make the molecules come arbitrarily close so that the force of attraction can hold them together? $\dagger$ I don't understand the meaning of persistent structure, so it would be kind if you explained it.
For "persistent structure" read "bound structure by quantum mechanical potential sollutions". At the atomic and molecular level structures arise because the number of atoms can settle at a lower energy level, than when free. This means there is a binding energy that has to be payed for the atoms to be freed from the structure. the atoms might be moving fast but can't we make the molecules come arbitrarily close so that the force of attraction can hold them together? In the quantum mechanical world, the word "close" has little meaning. Molecules scattering off each other do have a quantum mechanical probability to fall into a lower energy level by giving off their extra energy as a photon. It so happens that this probability is small for energetic molecules , the smaller the higher the difference in the bound energy level and the kinetic energy of the free molecule. Please note that the energy levels between molecules that keep liquids as liquid are very small. This is a basic explanation , here you can see the complexity in a particular study.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/566906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Two spherical conductors $B$ and $C$ having equal radii and carrying equal charges on them repel each other with a force $F$ Two spherical conductors $B$ and $C$ having equal radii and carrying equal charges on them repel each other with a force $F$ when kept apart at some distance. A third spherical conductor having the same radius as that of $B$ is then brought in contact with $C$ and finally removed away from both. The new force of repulsion between $B$ and $C$ is ... My question is: Is there any difference between the answers for conductors and for point charges?
Call the new spherical conductor D. As Michael Seifert implies, the charge on D and the remaining charge on C will depend on where C and D touch. When they touch, if the distance from B to D equals the distance from B to C, then half of C's charge will flow to D. If D is between B and C when they touch, then D will have an opposite charge and C will have a stronger charge than before. If C is between B and D when they touch, then C will have the opposite charge and D will have almost as much charge as C would in the other case. The point charge approximation does not predict this.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/567115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Will cutting sand paper with scissors make the scissors sharper or duller? This is a little question that I have been wondering when I need to cut sand paper with scissors. Sand paper can be used to sharpen knives etc. when applied parallel with the blade surface. Also it can be used to dull sharp edges when applied nonparallel with the blade surface. My assumption is that it should dull the scissors since paper is being cut using the sharp edge and nonparallel with the abrasive material. But I still have doubts about the validity of the assumption. How is it?
Duller. Scissors are sharpened by honing the narrow "chisel" edges, not the broad flat edges where the two blades sandwich together. Here's an image of someone sharpening tiny scissors with an even tinier file: So, cutting sandpaper will abrade the point between the chisel edge and the flat edge. You can use the sandpaper to re-sharpen the scissors afterwards, if it makes you feel better!
{ "language": "en", "url": "https://physics.stackexchange.com/questions/567343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 1 }
Does the absorption of photons from the Sun decrease with distance for a point charge? Let's suppose we have a point charge with no dimensions (zero volume and surface area) absorbing photons from the Sun. Does it change the number of photons and the energy absorbed when we move the point charge close to, or far from, the Sun? Since photons do not decrease in energy with distance and come one after another in a line, it really doesn't matter how far we are from the Sun. How does classical electromagnetism explain this in terms of electromagnetic waves?
Quantum mechanics: although the energy of the individual photons does not change with distance, the density of photons (the number of photons per volume) does. This is because every spherical shell around the sun must have the same number of photons, but ones with bigger radius have a greater surface, so the photons are more spread apart. Classical EM: the frequency of the EM waves is the same everywhere, but their amplitude decreases with distance. Again, this happens because the total energy in spherical shells must be the same, but this time it is calculated from the square of the fields, that is, of their amplitude.
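For a sense of scale (my addition; the solar numbers are rough round values), the photon flux falling on any small detector drops as $1/r^2$ even though each photon keeps its full energy:
```python
import numpy as np

L_sun = 3.8e26            # solar luminosity, W (rough value)
E_ph  = 3.6e-19           # typical energy of a visible solar photon, J (rough value)
au    = 1.5e11            # astronomical unit, m

def photon_flux(r):
    """Photons per square metre per second at distance r from the Sun."""
    return L_sun / E_ph / (4*np.pi*r**2)

for r in (au, 2*au, 10*au):
    print(f"r = {r/au:4.0f} AU :  {photon_flux(r):.2e} photons / m^2 / s")
```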
{ "language": "en", "url": "https://physics.stackexchange.com/questions/567561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Is energy required in generating a magnetic field in a simple resistance circuit? Consider a simple resistance circuit with a cell and a resistor. It is stated that, for the ideal circuit as a whole (neglecting EM radiation), the energy stored in the cell appears as heat in the resistance as current flows. POWER/RATE OF HEAT GENERATION = POWER/RATE OF ENERGY CONSUMPTION in CELL = VI However, we also know that a flowing current produces a magnetic field. So my questions are: * *Is energy needed to create a magnetic field in general? *Does the energy of the cell also appear in the energy of the magnetic field? *Is there any such thing as "energy of the magnetic field"? *Any relevant information. P.S. I am an undergrad. I do not know Special Relativity but I understand that feeling the effects of a magnetic field depends on the frame of reference.
1. Yes, energy is needed to create the magnetic field. Once the field has been created, no further energy is needed. The electric current in the circuit preserves the magnetic field. 2. The chemical energy in the cell is converted to electrical energy that moves the charges in the current, and as the charges start to move, they produce the magnetic field and its energy. 3. A magnetic field has energy, which is proportional to $B^2$.
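To attach numbers to point 3 (my addition, not part of the answer): the energy density stored in a magnetic field is $u = B^2/(2\mu_0)$.
```python
import numpy as np

mu0 = 4*np.pi*1e-7                 # vacuum permeability, T m / A

def u_B(B):
    """Magnetic energy density in J/m^3."""
    return B**2 / (2*mu0)

print(u_B(5e-5))   # ~1e-3 J/m^3 : Earth-strength field
print(u_B(1.0))    # ~4e5  J/m^3 : strong laboratory magnet
```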
{ "language": "en", "url": "https://physics.stackexchange.com/questions/567623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
What is the point of a voltage divider if you can't drive anything with it? The voltage divider formula is only valid if there is no current drawn across the output voltage, so how could they be used practically? Since using the voltage for anything would require drawing current, that would invalidate the formula. So what's the point; how can they be used?
You don't have to draw significant current to "use" a voltage. For example, if you want to measure the output voltage, which is a perfectly useful thing to do, then you can just attach a voltmeter. And ideally, voltmeters don't draw current at all. If you wanted to drive something at a lower voltage than the input, you wouldn't use a voltage divider because that would be extremely wasteful; most of the energy would be lost in the resistors.
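A small numerical illustration of both points (my addition; the component values are arbitrary): the ideal divider formula, a divider read by a high-impedance voltmeter, and the same divider dragged down by a heavy load.
```python
def divider(vin, r1, r2, r_load=None):
    """Output of a two-resistor divider, optionally loaded by r_load across R2."""
    if r_load is None:
        return vin * r2 / (r1 + r2)
    r2_eff = r2 * r_load / (r2 + r_load)        # R2 in parallel with the load
    return vin * r2_eff / (r1 + r2_eff)

vin, r1, r2 = 10.0, 10e3, 10e3
print(divider(vin, r1, r2))                     # 5.00 V : ideal, no current drawn
print(divider(vin, r1, r2, r_load=1e6))         # ~4.98 V : voltmeter barely disturbs it
print(divider(vin, r1, r2, r_load=1e3))         # ~0.83 V : heavy load, formula no longer useful
```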
{ "language": "en", "url": "https://physics.stackexchange.com/questions/567978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 8, "answer_id": 6 }
Is magnetic field due to current carrying circular coil, zero everywhere except at its axis? Consider a current ($I$) carrying circular coil of radius$ R$ of $N$ turns.Consider a rectangular loop $ABCD$,where length $AB=CD=\infty$ Performing the integral for axial points, $$\int_ {-\infty}^{\infty}\vec{B}\cdot \vec{dx}=\int_ {-\infty}^{\infty} \frac{\mu_0INR^2dx}{2(R^2+x^2)^{3/2}}=\mu_0IN=\int_ {C}^{D}\vec{B}\cdot \vec{dl}\tag{1}$$ Now applying Ampere's law on loop ABCD, $$\int_ {A}^{B}\vec{B}\cdot \vec{dl} + \int_ {B}^{C}\vec{B}\cdot \vec{dl}+ \int_ {C}^{D}\vec{B}\cdot \vec{dl}+ \int_ {D}^{A}\vec{B}\cdot \vec{dl}=\mu_0NI\tag{2}$$ $$\Leftrightarrow \int_ {A}^{B}\vec{B}\cdot \vec{dl} + \int_ {B}^{C}\vec{B}\cdot \vec{dl}+ \int_ {D}^{A}\vec{B}\cdot \vec{dl}=0\tag{3}$$ My book writes that "Apart from the side along the axis,the integral $\int\vec{B}\cdot\vec{dl}$ along all three sides will be zero since $B=0$". I don't quite get this. Magnetic field lines due to a coil are like, Now, the question, Is magnetic field due to current carrying circular wire zero everywhere except at its axis? Why exactly $$ \int_ {A}^{B}\vec{B}\cdot \vec{dl} + \int_ {B}^{C}\vec{B}\cdot \vec{dl}+ \int_ {D}^{A}\vec{B}\cdot \vec{dl}\tag{4}$$ is zero?
They say more than the sum of those three line integrals being $0$. They correctly say that $\mathbf B$ is $0$. This is because, as stated on your diagram, those ends of the rectangle are at an infinite distance away from the circular loop, and $\mathbf B$ must go to $0$ infinitely far away from the circular loop.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/568056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Photon description of quantum-optical interference experiments I am currently studying the textbook The Quantum Theory of Light, third edition, by R. Loudon. In the introduction, the author says the following: In the customary photon description of quantum-optical interference experiments, it is never the photons themselves that interfere, one with another, but rather the probability amplitudes that describe their propagation from the input to the output. The two paths of the standard interference experiments provide a sample illustration, but more sophisticated examples occur in higher-order measurements covered in the main text. The first sentence is a bit unclear. Is the author saying that it is never the photons themselves that interfere with one another, but rather the probability amplitudes (of the photons) that interfere with each other (which sounds weird, since the photons themselves are probability amplitudes, right?)? Or is the author saying that the photons (in the form of probability amplitudes) never interfere with each other at all, and that the photon propagation from input to output is fully described by the probability amplitude (that is, photons do not affect each other at all)? Or is it saying both? I would greatly appreciate it if people would please take the time to clarify this.
There is no difference between the two interpretations that you list. Photons do not interfere with themselves or other photons. The wave function should not be identified with photons. It gives the probability of detecting photons. It can be seen as the average of a Poisson distribution describing the number of photons that can be detected.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/568542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 5 }
Conservation Laws and What Happens if they go Wrong? I read this excellent article on the conservation laws, and I was also taught in school that conservation laws cannot be proven, only verified. I was wondering what would actually happen if a conservation law turned out to be false? I know it would call into question our measurements as well as our calculations, since we use them almost unknowingly everywhere: unless stated otherwise, for example, every mechanics problem takes the mass to be conserved. So let's say the laws hold true here but break in boundary cases, as most things in physics do, for instance when we approach the speed of light, the edge of the universe, or some other drastic condition. Are there any good discussions on what consequences this may have?
Noether's theorem is the thing to pay attention to here. This theorem basically says that if there is a symmetry in the underlying physics, there is also a conserved quantity. Traditionally the best-known symmetries are: * *Time invariance, i.e. the laws of physics do not change with time. If this is the case, then energy is conserved. *Translational invariance, i.e. the laws of physics do not vary with position. If this is the case, then linear momentum is conserved. *Rotational invariance, i.e. the laws of physics do not vary depending on the direction at which you look at the experiment. If this is the case, then angular momentum is conserved. In the same way, if any of the three quantities above are not conserved, then we know that physics varies with time/position/direction. There are other conserved quantities, such as electric charge, but the relevant symmetries are more technical.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/568908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Work done on object being carried upwards If you carry a book in your hands, and you walk up stairs with a change in height of $h$, the net work on both you and the book would be $-M_{\mathrm{total}}gh$ since $W = - \Delta U$. This would be due to gravity. However, when considering the book alone, the work done by the normal force, i.e. your hands, would be $M_{\mathrm{book}}gh$. Furthermore, the work done by gravity solely on the book would be $-M_{\mathrm{book}}gh$. This means that the net work done on the book through the process of walking up the stairs is $0$. Since work is equal to negative change in gravitational potential energy this means that the change in GPE of the book is $0$? But then doesn't the book have a change in gravitational potential energy of $M_{\mathrm{book}}gh$? Am I missing something regarding the kinetic energy of the book?
This means that the net work done on the book through the process of walking up the stairs is 0. That is correct, because gravity does negative work of $-m_{book}gh$ equal to the positive work done by you of $+m_{book}gh$ for a net work of zero. All that means, per the work energy theorem, is the change in kinetic energy of the book is zero (it starts at rest and ends at rest). It doesn't mean there is no change in potential energy of the book. Since work is equal to negative change in gravitational potential energy this means that the change in GPE of the book is 0?? That is not correct. The energy you transferred to the book in doing positive work is not lost. Gravity, in doing negative work, takes the energy you supplied to the book and stores it as gravitational potential energy of the book-earth system. Hope this helps.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/569042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Effect of Earth's Rotation on Time According to Wikipedia: Earth's rotation is slowing slightly with time; thus, a day was shorter in the past. This is due to the tidal effects the Moon has on Earth's rotation. Atomic clocks show that a modern-day is longer by about 1.7 milliseconds than a century ago, slowly increasing the rate at which UTC is adjusted by leap seconds. The angular velocity of Earth is reducing (although not very much). That means our speed with respect to the earth's axis is dropping down. According to Einstein's special relativity, time goes slower for faster particles (Time Dilation). As our speed is decreasing day by day, time goes faster and faster. I mean that in future angular velocity is less than the present. If time is going fast, we'll reach that future point of less angular velocity faster (making time to go still faster). That means this is a cyclic process making time to go faster and faster (recursion). Is this explanation correct? Will this affect Earth and us (like day and night duration)?
For the speed at which the earth rotates, relativistic time dilation would be pretty much unnoticeable. For the satellites in the GPS system, it does have to be taken into account in order to predict accurate positions on the earth.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/569146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Are resistors, inductors, capacitors the only possible passive components in this universe? I wonder whether or not there is a possibility to find another passive component (with value either $X$ or $Y$) that will change the well known RLC circuit equation below $$ L\; \frac{\mathrm{d}^2 \; i}{\mathrm{d}\; t^2} + R\; \frac{\mathrm{d} \; i}{\mathrm{d}\; t}+\frac{i}{C}=f(t) $$ to either $$ L\; \frac{\mathrm{d}^3 \; i}{\mathrm{d}\; t^3} + R\; \frac{\mathrm{d}^2 \; i}{\mathrm{d}\; t^2} + \frac{1}{C}\; \frac{\mathrm{d} \; i}{\mathrm{d}\; t} + \frac{i}{X}=f(t) $$ or $$ Y\; \frac{\mathrm{d}^3 \; i}{\mathrm{d}\; t^3} + L\; \frac{\mathrm{d}^2 \; i}{\mathrm{d}\; t^2} + R\; \frac{\mathrm{d} \; i}{\mathrm{d}\; t} + \frac{i}{C}=f(t) $$ Question Are resistors, inductors, capacitors the only possible passive components in this universe? Is there a possibility to find another one? From a mechanical point of view, I can rephrase the question into the equivalent mechanical question: Are dampers, springs and masses the only possible mechanical components in this universe?
Look at it this way: what possible forces can be applied to an electric current in a constrained environment, e.g. a wire? Or, more specifically, to the value of the voltage at any location along the wire. About all you can do is apply a lead, a lag, or an attenuation (and all of those as a function of $\omega$ ) . Since physical components can be modeled as combinations of ideal R,L and C elements, unless you can demonstrate a different action which can be taken to modify V(t), there's no need for a new kind of component (passive).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/570371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Do adjacent sarcomeres oppose each other during contraction? A sarcomere is the contractible portion of the muscle cell. And here is a figure of three sarcomeres in series before and after contraction: I was taught that the thick fiber, myosin, pulls on the thin fiber, actin. I am confused as to how contraction can happen because it seems that there is a tug of war going on between myosins on either side of the Z-line. Is there another force vector I am not accounting for? Or is there some additional biophysics going on that I am not aware of? Edit: I guess it could all contract if the outermost sarcomeres had a weaker opposing tension than the internal tension. But it would start from the outside and radiate in. In other words, there would be a gradient of contraction with the shortest (most contracted) to longest from outside to inside until all are equally contracted. I'm not sure if that's how it works in reality.
Yes, there is a tug of war, which results in tension being developed in the muscle fibre. As it has the ability to contract, increased tension will cause it to do so. Once the muscle contracts, it has no mechanism to provide an outward force, so a contracted muscle has to rely on the contraction of an opposing muscle to be stretched back out (e.g., biceps & triceps).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/570469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Different variations of covariant derivative product rule This is a follow-up question to the accepted answer to this question: Leibniz Rule for Covariant derivatives The standard Leibniz rule for covariant derivatives is $$\nabla(T\otimes S)=\nabla T\otimes S+T\otimes\nabla S$$ so for $T\otimes\omega\otimes Y$ this would translate to $$\nabla(T\otimes\omega\otimes Y)=(\nabla T)\otimes(\omega\otimes Y)+T\otimes(\nabla\omega\otimes Y)+T\otimes(\omega\otimes\nabla Y).$$ My question is: given a vector field $X$, how do I get from the above that $$\nabla_X(T\otimes\omega\otimes Y)=(\nabla_X T\otimes\omega\otimes Y)+T\otimes\nabla_X\omega\otimes Y+T\otimes\omega\otimes\nabla_XY$$ as written in that answer?
Use the fact that the tensor product is associative, so $\nabla(T\otimes \omega \otimes Y)=\nabla[(T \otimes \omega ) \otimes Y]$. Then the Leibniz rule $$\nabla(A\otimes B)=\nabla(A)\otimes B+ A\otimes \nabla(B)$$ gives you $$ [\nabla(T\otimes\omega)]\otimes Y + T\otimes \omega \otimes \nabla Y$$ Using the rule again in the first term: $$ \nabla T \otimes \omega \otimes Y+ T\otimes \nabla \omega \otimes Y+T\otimes \omega\otimes \nabla Y$$ Finally just replace $\nabla \rightarrow\nabla_X$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/570585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Why do complex numbers seem to be so helpful in real-world problems? Complex numbers are often used in physics, especially in electrical circuits, to analyze them, as they are easy to manipulate as phasors. They make the calculations easy, but it seems somewhat amusing that something which, to my knowledge, has no real-world analogue is used to solve the most practical real-world problems. What other methods were used prior to the development of complex numbers, and why were they replaced? For example, can every problem where we use complex numbers also be done using other techniques such as matrices? How did the insight come about to use such an obscure entity, or did the operations just seem easier with it?
How do you describe a rotation? One approach is to do $$x' = x\cos\theta - y\sin\theta$$ $$y' = x\sin\theta + y\cos\theta$$ That's unwieldy. Another approach is to declare: I use a complex coordinate $r$ where $x$ equals the real part, and $y$ equals the imaginary part. Now we can write the rotation much more simply: $$r' = re^{i\theta}$$ Physicists hate unwieldy math expressions, so they usually opt to describe anything that has to do with rotations, like waves and vibrations, with complex numbers to keep their formulas simple. It makes reasoning about the physical contents of the equations so much easier. TL;DR: Complex numbers are so helpful because they concisely describe rotations, and rotations crop up just about everywhere in physics.
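A two-line numerical check of that equivalence (my addition): rotating a vector with the $2\times2$ matrix and multiplying the corresponding complex number by $e^{i\theta}$ give the same result.
```python
import numpy as np

theta = 0.7
v = np.array([2.0, 1.0])

R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(R @ v)                                    # rotation as a matrix

z = complex(v[0], v[1]) * np.exp(1j*theta)      # rotation as one complex multiplication
print(np.array([z.real, z.imag]))               # same numbers
```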
{ "language": "en", "url": "https://physics.stackexchange.com/questions/570908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
What do I need to build special relativity? If I postulate the principle of relativity and the constancy of the speed of light for every inertial observer, can I then prove all of SR? Or do I need some other postulate? For example: do I need to also postulate the structure of the Lorentz transformations, or do the Lorentz transformations derive completely from these two basic postulates alone? (Do I have to also postulate, for example, that the transformations are linear to prove them from the two starting postulates?)
You cannot prove all of SR. You can derive the Lorentz transformation using those two postulates plus linearity. The Lorentz transformation then gives you time dilation, length contraction and the relativity of simultaneity. But this is not all of SR. You cannot get the relativistic formula for momentum or the well-known formula $E=mc^2$ without also postulating conservation of momentum and energy, and using some definition of momentum and energy.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/570980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
What makes the electron, as an excitation in a field, discrete? In standard quantum mechanics, the wave function has discrete energy values due to a potential. However, my very limited understanding of QFT is that electrons are excitations in the Dirac field, and the number of electrons is discrete even in free space. What is the reason for this, and why is there a minimum excitation?
There are two facts to be distinguished. Electrons are what we loosely call particles, so they only ever occur in discrete numbers. Millikan demonstrated the discreteness of charge. Secondly, localised states in general have discrete energies. Examples of these are atomic and molecular states. Free electrons have continuous energies. So-called free electrons in metals for all practical purposes have continuous energies if the metal volume is macroscopic. Why electrons are discrete particles or why quantum states are often discrete is not known. A further question is how we account for these properties mathematically, but I believe this is not what you are asking.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/571207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 2 }
Maxwell Coexistence from Entropy Considerations As explained by Maxwell, (1875), a realistic thermodynamical system will, at low temperature, have pressure vs. volume curve that is non-monotonic. In practice, though, the observable states of the system will lie along a straight line of constant pressure, where part of the system is in the condensed liquid state and part of the state is in the gas state (see picture from Wikipedia below). The pressure on the straight line corresponds to the maximum pressure of the gas and the minimum pressure of the liquid. As further explained by Maxwell, this pressure can be determined by requiring that the areas above and below the line be equal. Although the equal area law is necessarily correct as a consistency condition, it does not explain physically what is going on, i.e. what the fluid is doing. After all, the fluid is not sweeping out the PV curve or calculating integrals. I would like there to be some entropic (or other) argument to explain the coexistence pressure $P_{\rm e}$. For example, an argument that a system which is partly in the liquid phase and partly in the gas phase has more entropy than a homogeneous system, and where this maximally entropic pressure agrees with Maxwell's area law.
The vapor and liquid phase coexist along the straight line. Call the left and right intersections with the original curve $v_A$ and $v_B$, and the vapor pressure $p_v$. Coexistence implies the equality of chemical potentials between the phases. If the phases were at different chemical potentials, particles would flow (i.e. condense/vaporize) from one phase to the other until equilibrium is reached. At constant temperature, the chemical potential is the same as the Gibbs free energy $g = f + pv$, where $f$ is the Helmholtz free energy. This implies $f_A-f_B = p_v(v_B-v_A) $ On the other hand, using $\partial_v f|_T = -p$, $f_A - f_B = \int_B^A \partial_v f|_T\,dv = \int_A^B p\, dv = p_v(v_B-v_A) - \text{area under} + \text{area over}$, establishing the result. In the end it is an entropic argument, because all thermodynamic potentials derive from the entropy.
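For readers who want to see the construction in action, here is a small numerical sketch (my addition, not from the original answer) using the reduced van der Waals equation of state $p = 8T_r/(3v-1) - 3/v^2$ as a stand-in for a realistic fluid; the three unknowns $v_A$, $v_B$, $p_v$ are fixed by equal pressures at the endpoints plus Maxwell's equal-area (equal chemical potential) condition.
```python
import numpy as np
from scipy.optimize import fsolve

Tr = 0.9                                        # reduced temperature T/Tc < 1

def p(v):                                       # reduced van der Waals isotherm
    return 8*Tr/(3*v - 1) - 3/v**2

def P(v):                                       # antiderivative of p(v)
    return (8*Tr/3)*np.log(3*v - 1) + 3/v

def equations(z):
    va, vb, pv = z
    return [p(va) - pv,                         # same pressure at both endpoints ...
            p(vb) - pv,
            P(vb) - P(va) - pv*(vb - va)]       # ... and equal areas above/below the line

va, vb, pv = fsolve(equations, x0=[0.6, 2.4, 0.65])
print(va, vb, pv)                               # roughly 0.60, 2.35, 0.65 at Tr = 0.9
```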
{ "language": "en", "url": "https://physics.stackexchange.com/questions/571342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Work done by normal force on a block placed on movable inclined wedge A block is placed on a movable smooth inclined wedge placed on a smooth surface. Both are released and allowed to move. I was told that if the center of mass of the wedge and block system is fixed (in the horizontal direction I assume, otherwise there won't be any motion) then in the journey of the wedge from A to B the normal force does not do any work because the displacement is point-wise perpendicular. I do not understand why this is so. Even if the center of mass of the wedge and block is fixed, the displacement of the block $\vec{d}$ (as in the figure below) is not zero. More formally, if $d\vec{s}$ is a small displacement of the block, we can write it as a sum of a small displacement of the wedge $d\vec{s_1}$ and a small displacement of the block $d\vec{r}$ wrt the wedge. $$\int_{A}^{B}\vec{N}.d\vec{s}=\int_{A}^{B}\vec{N}.d\vec{r}+\int_{A}^{B}\vec{N}.d\vec{s_1}$$ The first term is zero, but the second is not. Hence the work done by the normal force must not be zero. Am I misinterpreting something? Or was what I was told not correct? Any help will be highly appreciated. This problem is quite similar. I read its answer, but that still leaves the question unanswered. The only thing it says about the work done by the normal force is: The vertical component of the force on the block due to the wedge N does negative work on the block but the horizontal component of force N does positive work on the block.
The normal force indeed does work on the block. However, it does zero work on the block + wedge system: suppose the block gets displaced by $\vec{dr}$ relative to the wedge, and the wedge gets displaced by $\vec{dx}$. Then the block, as seen from the ground, is displaced by $\vec{dr}+\vec{dx}$. The work done on the block is $dw_1=\vec{N}\cdot(\vec{dr}+\vec{dx})$, and the work done on the wedge is $dw_2=(-\vec{N})\cdot\vec{dx}$. So after adding, we get $dw_1+dw_2 = \vec{N}\cdot\vec{dr}=0$, since $\vec{N}$ is perpendicular to the block's displacement $\vec{dr}$ relative to the wedge surface.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/571554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to determine the result of 2D elastic collision As shown by the image, a disk of radius $R_1$, mass $M_1$, and initial velocity $V_0$ collides with another, initially stationary disk of radius $R_2$ and mass $M_2$. Both disks have no rotation initially. The direction of $V_0$ is indicated by $\theta$. For three situations there are unique solutions: * *When $\theta = 0$, the problem becomes 1D, and both disks have no rotation afterward. *When there is no friction, both disks have no rotation afterward, and the still disk gains a speed $V_2 = 2 V_0 \frac{\cos \theta}{1 + \frac{M_2}{M_1}}$ along N. *When $\theta$ is sufficiently large so that $f = \mu_0 N$, where $\mu_0$ is the static frictional coefficient. In this case, the momentum transfer along N is $\mu_0$-fold of the momentum transfer along $f$. Both disks rotate, but in opposite directions, afterward. The solution for $V_2$ in the $N$ direction afterward is $$V_2 = 2 V_0 \frac{\cos \theta + \mu_0 \sin \theta}{(1 + \frac{M_2}{M_1})(1 + 3 \mu_0^2)}$$ In the case when $\theta$ is small, how do I find a unique solution? Newtonian mechanics should have a unique solution in all cases. And experimentally the outcome should not be random. So what constraint did I miss?
Friction acts only so long as there is relative slipping between the surfaces of the sphere. For smaller angles, $\mu$ is sufficient for the relative slipping to cease by the end of the collision. For larger angles $f=\mu N$ may not be sufficient to reconcile the tangential velocities of the surfaces by the end of the collision. For smaller angles, after the collision, the relative tangential velocity of the surfaces of the two spheres is zero, thanks to friction. This should give you your final constraint. Do the math assuming friction is sufficient to prevent slipping by the end of the collision, ie the angle is small enough. If this value of friction turns out greater than $\mu N$, solve it with $f=\mu N$ (your initial assumption of a small angle was wrong). The words "smaller" and "larger" are qualitative.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/571728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why can't photons cancel each other? The textbook argument against photons canceling each other draws upon the conservation of energy. Does this mean that energy conservation is a "stronger" principle than superposition? Waves in other media than the EM field, e.g., sound or water, do cancel out---presumably by passing on their energy to some other degree of freedom (e.g., heat). Could this imply that EM waves don't have any alternative channel to pass on the destructed energy and thus can't cancel out?
It is a curious thing, because in the early days of quantum mechanics it was thought that wave interference effects (which is what I think you mean by cancelling) were only possible for a particle and itself (Dirac said exactly that, although I would have to put in a fair amount of work to find a reference). This is indeed true for most particles in quantum mechanics. However, since the advent of quantum field theory, it has been recognised that the wave function for a photon does not describe the probability where where the particle may be found, but rather describes the probability for where it might be found that a photon has been annihilated. It actually does not have to be the same photon. Wave interference effects have actually been observed by astronomers for photons originating in different galaxies (sorry I don't have more in the way of details). What I can say, is that your text book is out of date on this particular topic.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/571824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 4 }
Ward identity of QED - whether the fields are all $c$-number fields I am following Sidney Coleman's lectures of Quantum Field Theory. At the end of ch.32, he derived the Ward identity for the 1PI generating functional $\Gamma[\psi,\bar{\psi},A_{\mu}]$ for QED: \begin{equation} ie \bar{\psi} \frac{\delta \Gamma}{\delta \bar{\psi}(x)} - ie \frac{\delta \Gamma}{\delta \psi(x)} \psi(x)- \partial^{\mu} \frac{\delta \Gamma}{\delta A^{\mu}(x)} = \frac{-1}{\xi} (\partial_{\nu}\partial^{\nu})(\partial_{\mu} A^{\mu}). \end{equation} The term on the RHS is the gauge fixing term in the original QED Lagrangian. I am now wondering that whether all of the fields involved in the Ward identity are $c$-number fields. As $\psi$ and $\bar{\psi}$ represent Fermi field, it seems like we should interpret their classical correspondence as Grassmann fields. However, it is clear on the RHS, we have a $c$-number function. Then this equation seems to have both $c$-number and Grassmann number involved, which I think may not make sense? I am wondering whether we should interpret both $\psi$ and $\bar{\psi}$ also as $c$-number fields? But if that is the case, how does the Fermi minus sign issue be properly handle under the above Ward identity?
No, $\psi$ and $\bar{\psi}$ are $a$-number fields, while $A_{\mu}$ is a $c$-number field. The above Ward-Takahashi identity (WTI) is a supernumber-valued identity. The WTI is often used by differentiating it a number of times wrt. the fields, and then set the remaining fields to zero. See also this related Phys.SE post. References: * *Bryce DeWitt, Supermanifolds, Cambridge Univ. Press, 1992.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/572087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why are the positively charged particles moving in the right direction? The following is the description for this figure provided by my textbook: The paths of different types of radiation in a magnetic field. Using the right-hand slap rule, we see that positively charged particles are forced to the right. [...] Why are the positively charged particles going to the right? I think there isn't enough information. Based on the figure, one can only deduce that the magnetic field is coming out of the screen or page. It still isn't clear to me why the positive charges move to the right. I do know that whatever the direction in which the positive charges move, the electrons will move directly opposite. How can I figure out the direction of the Lorentz force? Subsequently, how can I figure out the direction of motion of an individual charged particle?
The hidden assumption is that the particles enter from the bottom of the diagram and are moving (initially) towards the top. That gives the direction of $\vec v$. You've correctly read the direction of $\vec B$, so you're ready to find $\vec v\times\vec B$.
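Once that assumption is made, the direction follows from one cross product; here is a tiny numerical check (my addition, with the coordinate choice spelled out in the comments):
```python
import numpy as np

v = np.array([0.0, 1.0, 0.0])     # particle moving "up the page" (+y)
B = np.array([0.0, 0.0, 1.0])     # field out of the page (+z)

print(np.cross(v, B))             # [ 1. 0. 0.] : F = q v x B points right for q > 0
print(-1*np.cross(v, B))          # negative charge: force points left instead
```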
{ "language": "en", "url": "https://physics.stackexchange.com/questions/572583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Feynman rules for interactions with derivatives: How exactly do the momentum factors appear? I know how to treat Feynman interactions without derivatives by Wick contraction. But now, take for example $$\mathcal{L}_{int}=\lambda \phi (\partial_{\mu}\phi)(\partial^{\mu}\phi).$$ Now many books write that in momentum space the derivatives turn into momenta. While I can imagine this happening, I don't really know how to write this down explicitly. At what point do I consider the Fourier transform of the field? Am I still using Wick contractions, but now with the field depending on the momenta? I have not found a source doing this explicitly.
For your case, starting with this interaction term, let us substitute the expansion of $\phi$ in Fourier modes: $$ \phi = \sum_k \phi_k e^{i kx} $$ The action of a derivative produces a factor of $ik$. Then, in the action you sum (integrate) over all $x$: $$ \sum_x \sum_{k_1, k_2, k_3} (ik_{2 \mu}) (ik^{3 \mu}) \lambda \phi_{k_1} \phi_{k_2} \phi_{k_3} e^{i (k_1 + k_2 + k_3) x} = \sum_{k_1, k_2, k_3} (ik_{2 \mu}) (ik^{3 \mu}) \lambda \phi_{k_1} \phi_{k_2} \phi_{k_3} \delta (k_1 + k_2 + k_3) $$ where in the last expression we have used the well-known integral representation of the delta function for the exponential. The change from derivatives to momenta is simply the result of changing from the position basis to the momentum basis and has nothing to do with the Wick theorem.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/572786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
What is the difference between the specific heat capacities of water under isobaric and isochoric conditions Can the difference between the specific heat capacities of water under isochoric and isobaric conditions be explained in terms of the internal energy of the system? Most of the videos I have watched base their explanation on ideal gases. I guess it's something to do with the fact that isochoric conditions mean all the heat energy provided goes into the internal energy of the molecules. I also have the graphs of the specific heat capacities plotted against time.
The specific heats diverge mainly after 100 C when at 1 atmosphere water changes phase and begins acting like a gas approaching ideal gas behavior. Based on the first law the internal energy explanation for the specific heat at constant pressure (isobaric) $C_P$ being greater than the specific heat at constant volume (isochoric) $C_V$ is because when heat is added at constant pressure the substance expands and does work. When added at constant volume it does no work. Based on the first law: At constant pressure (isobaric): $$Q=C_{P}\Delta T=\Delta U+W$$ At constant volume (isochoric) where $W=0$: $$Q=C_{V}\Delta T=\Delta U+W=\Delta U$$ Assuming ideal gas behavior internal energy is a function of temperature only, from the above equations it takes more heat to achieve the same increase in internal energy (increase in temperature) in an isobaric process than an isochoric process because some of the heat does work. That requires $C_P$ to be greater than $C_V$. Hope this helps
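A minimal numeric illustration of that bookkeeping (my addition), using an ideal diatomic gas purely as an example, since the answer invokes near-ideal-gas behaviour above 100 C:
```python
R = 8.314                      # gas constant, J/(mol K)
n, dT = 1.0, 10.0              # heat 1 mol by 10 K
Cv = 2.5*R                     # molar Cv of a diatomic ideal gas (illustrative)

Q_isochoric = n*Cv*dT                   # all heat goes into internal energy
W_isobaric  = n*R*dT                    # p dV work done while expanding at constant p
Q_isobaric  = Q_isochoric + W_isobaric  # same dU, plus the expansion work

print(Q_isochoric, W_isobaric, Q_isobaric)   # about 207.9 J, 83.1 J, 291.0 J
print(Q_isobaric/(n*dT) - Cv)                # Cp - Cv = R
```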
{ "language": "en", "url": "https://physics.stackexchange.com/questions/572925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Would a moving elementary particle follow the Heisenberg's Uncertainty Principle with respect to itself? An observer at rest or in motion different from the particle cannot determine its momentum and position to great accuracy at the same time. But what if the observer is on the particle itself or moving with the same velocity as the particle?
As elementary particles are point particles, there cannot be an observer "on them". Observation can only happen through interactions with other particles, and yes, the envelope of the Heisenberg uncertainty principle has to be obeyed, whether the system is at rest (studied in its center of mass) or moving. In mathematical calculations, the interaction is calculated with Feynman diagrams and the result is the probability distribution for the interaction happening, which itself carries the HUP.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/573287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How is melting time affected by flow rate and temperature of surroundings? Suppose you have a solid sphere of m, where m is an element with freezing point of 0 degrees Celsius. In one scenario, you place your sphere in a (“static”) 25 degree Celsius environment and measure time, t, until melting. The sphere is fixed and cannot be displaced. In the other, you place your sphere in environment with temperature, T, and with constant flow rate, v. Again, you measure time, t, until melting. What is the equation that would relate the two scenarios? In other words, at what temperature and flow rate would time required for melting in the second scenario equal time required in the first?
In the static case, you need to give a better definition of the problem. How big is the container that the ice sphere resides in? Are the walls of the container insulated, or can they exchange heat with the environment? If heat exchange occurs with the environment, what are the container walls made of, what is their thermal conductivity, is the container in shade, etc.? Does the melted water "puddle up" around the bottom of the sphere, or is it drained in some way? Is the ice sphere surrounded by air, water, or something else? What is the initial temperature of the material surrounding the ice sphere? For the dynamic case, what is flowing around the sphere, what is its temperature, and how fast is the velocity $v$? At very low velocities, you will have laminar flow, whereas at somewhat higher velocities, you will have turbulent flow. Turbulence is one of the great unsolved problems in physics, and no general analytical solutions currently exist for it. Because of this, practical heat transfer problems are very dependent on the geometry of the situation, flow rates, etc., which means that a lot of empirical equations have been developed for very specific applications. Your problem will almost certainly require the collection of a lot of data for your specific geometry and details, so that you can develop an empirical equation for this one case.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/573513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Gibbs free energy, Helmholtz free energy and their contribution to expansion and non-expansion work In the book "An Introduction to Thermal Physics" by Daniel Schroder, I got the following expressions Helmholtz free energy : F = U - TS and Gibbs Free energy : G = H - TS = U + PV - TS The author explained the intuition behind Gibbs free energy the following way I found in different places (Chemistry StackExchange, Wikipedia etc.) that Gibbs free energy is the capacity to do non-expansion work and Helmholtz free energy is the capacity to do both expansion work (pressure-volume work) and non-expansion work. But in the definition of Gibbs free energy there is a pressure-volume term which Helmholtz free energy does not have. Therefore, my intuition is that it should be the other way around. What am I missing here? I would really appreciate if anyone could help me with this.
But in the definition of Gibbs free energy there is a pressure-volume term which Helmholtz free energy does not have. Therefore, my intuition is that it should be the other way around. The Gibbs free energy definition $G=U+PV-TS$ doesn't add an expansion term, it removes it. The internal energy $U$ is $U=TS-PV+\Sigma_i \mu_i N_i$, where $\mu_i$ is the chemical potential and $N_i$ is the amount of species $i$. Thus, $G=\Sigma_i \mu_i N_i$, which is why we also call $\mu_i$ the partial molar Gibbs free energy of species $i$. The process of defining $G$ thus strips away the terms associated with heating and expansion work.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/573593", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is curved space able to change an object's velocity (vector)? I don't really understand what is meant by curved space. Why does mass warp space? Why does curved space alter the velocity of a massive object? Normally to change an object's direction you have to apply some force to overcome inertia. So how does curved space do it? What is space anyway? Layman's terms, please.
As far as I know, it is not known why mass warps space (one of the biggest problems in physics, uniting general relativity with quantum mechanics). It is just a model, and all observations so far support this model. As for the second question, as far as I understand spacetime does not change an object's velocity, it just appears to change from an outside observer's viewpoint. That is, the object is traveling straight and with constant speed through its surrounding spacetime, but since this spacetime is curved locally, if you are watching from a distance (in a part of spacetime curved differently), it appears as if the object is accelerating or in a curved trajectory.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/573709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 3 }
Can a single-slit experiment demonstrate the particle nature of light? Young's two-slit experiment is generally credited for demonstrating the wave nature of light. But what about a similar experiment with just one slit? My understanding is that this will create an interference pattern. Shouldn't that be enough to demonstrate light's wave nature? Perhaps the technology available at the time wasn't good enough to create interference, or perhaps there's a plausible wave explanation?
I believe you have that backward. The two-slit experiment demonstrates the wave nature of light. Light must be quantum because it interacts with single atoms and either has an effect or does not have an effect. It has to do that in a single place. It is established that this is because of the quantum nature of light and not the quantum nature of atoms. But to create a two-slit interference pattern it must pass through both slits at once, so it has to be in two places. It must be a wave, and be everywhere. Feynman resolved the paradox. Light is a particle and is in exactly one place at a time. But it has a probability function that travels like a wave, that decides the probability that the photon is in each place. So a photon is a particle that appears in every possible way to travel exactly like a wave, except when it interacts with matter and acts like a particle.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/573765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Why is QED renormalizable? My understanding of renormalizability is that a theory is renormalizable if it the divergences in its amplitudes can be cancelled out by finitely many terms. I see that by adding counterterm (in the MS-bar scheme) $$L_{ct}=-\frac{g^2}{12\pi^2}\left(\frac{2}{\epsilon}-\gamma+\ln4\pi\right),$$ the one-loop divergence of QED can be made finite. However, I do not see how this makes QED renormalizable? Surely as we work with diagrams with more loops, we will get more counterterms - given that we can have diagrams with arbitrarily many loops, do we not need an infinite number of counterterms to cancel these out?
We do get an infinite number of counterterms, but they are all of the same form (a closed, finite set of terms); it is just that the coefficients in front of them are built up order by order as a power series in the coupling constant. What "an infinite number of counterterms $\to$ non-renormalizable" means, at least from my understanding, is something like $\phi^5$ theory. There we would need to add infinitely many new kinds of counterterms, like $\phi^6$, $\phi^7$, $\phi^8$, ..., to cancel the divergences, and this goes on forever. This is different from QED, where we need only a finite number of counterterm types, with the coefficients in front of them determined order by order.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/574174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How big (in meters) could the difference be between ECI and ECEF coordinates at midnight UTC? Is the only difference due to leap seconds, etc. or other differences between UTC and updated forms of universal time, such as UT1? In other words, are all earth-centered inertial (ECI) coordinate frames constructed on purpose so they match at approximately midnight each day, and if so, how big of a difference could there be in either the X, Y, or Z coordinates (in meters)?
In other words, are all earth-centered inertial (ECI) coordinate frames constructed on purpose so they match at approximately midnight each day? No, they are not. There are multiple Earth-centered inertial frames. All have one thing in common: an Earth-Centered, Earth-Fixed (ECEF) frame rotates more or less about its z axis, relative to them, at the rate of more or less one rotation per sidereal day -- not one rotation per mean solar day. Your question implicitly assumes that the Earth rotates once per mean solar day. Ignoring the equation of time, the Earth does indeed rotate once per mean solar day with respect to the Sun. But all Earth-centered inertial frames are defined in terms of the "fixed stars" rather than with respect to the Sun; if you asked about the ECI coordinates of a point with fixed ECEF coordinates one sidereal day later, rather than one mean solar day later, the mismatch would be far smaller. Note that I used "more or less" twice. The Earth's instantaneous rotation axis is not quite the same as the ECEF z axis. The rotation axis moves around a bit with respect to the Earth's crust; this is called polar motion. The other "more or less" was the rotation rate. This, too, varies a bit. (It is also slowing down, but this is very gradual.) One final issue is the orientation of the ECEF z axis (ignoring polar motion) and the ECI z axis. In addition to rotation and polar motion, the Earth also undergoes precession and nutation. Precession is slow but rather large. The ECI coordinates of a point with fixed ECEF coordinates change a bit over exactly one sidereal day, and change a lot after exactly 100000 sidereal days.
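To put rough numbers on the mismatch, here is an illustrative sketch that ignores precession, nutation, polar motion and rotation-rate variations, and simply compares rotating a fixed equatorial ECEF point by one sidereal day versus one mean solar day (all values are approximate assumptions, not a precise IERS computation):

```python
import numpy as np

R_earth = 6378137.0               # m, equatorial radius (approximate)
sidereal_day = 86164.0905         # s
solar_day = 86400.0               # s
omega = 2 * np.pi / sidereal_day  # rad/s, Earth rotation rate

p_ecef = np.array([R_earth, 0.0, 0.0])   # point fixed to the crust, on the equator

def ecef_to_eci(p, t):
    """Rotate an ECEF vector into a simplified ECI frame after time t (z-axis rotation only)."""
    a = omega * t
    rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0,        0.0,       1.0]])
    return rot @ p

# After exactly one sidereal day the point returns to its original ECI position...
print(np.linalg.norm(ecef_to_eci(p_ecef, sidereal_day) - p_ecef))   # ~0 m

# ...but after one mean solar day (midnight to midnight) it does not.
print(np.linalg.norm(ecef_to_eci(p_ecef, solar_day) - p_ecef))      # ~1.1e5 m (~110 km)
```

So even in this idealized model there is no daily re-alignment at midnight: the offset is set by the sidereal/solar-day difference, with UT1−UTC, leap seconds, precession and polar motion adding further corrections.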
{ "language": "en", "url": "https://physics.stackexchange.com/questions/574297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Could gravity be much stronger (or weaker) at the atomic scale? If gravity is mediated by particles and you are at a scale where those particles are relatively much larger does that perhaps imply that gravity can't work exactly the same way at very small scales as it does at much larger (like planetary, galactic) scales?
Regarding the broad question which you bring up, yes, it is possible for gravity to have a different strength from expected at extremely small scales. But it doesn't have to do with the size of particles. It involves what's called the ADD model, or the theory of "large extra dimensions" ("large" being a bit of a misnomer). It postulates that our universe may have hidden extra dimensions which we never notice because they're very small relative to our everyday concepts of distance. But at extremely small scales, these extra dimensions would become very important, changing the relationship between distance and the strength of gravity (The inverse-square law would no longer apply.). At these scales, we would find that gravity is much stronger than expected. In fact, this theory was proposed as a possible explanation for why gravity appears to be so much weaker than the other fundamental forces (the so-called "hierarchy problem"). It's so much weaker because much of the gravitational force essentially "bleeds out" into this hidden higher-dimensional space.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/574444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why does the spring constant not depend on the mass of the object attached? It is said that: $$ F = -m\omega^2 x = -kx, $$ so $k=m\omega^2$. Since $k$ is the spring constant it doesn't depend on the mass of the object attached to it, but here $m$ signifies the mass of the object. Then how is $k$ independent of the mass attached?
Then how is $k$ independent of the mass attached? The clue is in: $$F=kx$$ It states simply that the spring, when extended by $x$, will provide a restoring force $F=kx$. The force needed to produce the extension (displacement) $x$ can be provided by almost anything. A mass (via its weight) can do it, but that is just one way of many.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/574540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Why do electrons flow in the opposite direction to current? I'm 15 and just had a question about physics and electric fields. I've read that electrons flow in the opposite direction to current. Isn't current the flow of negative charge and therefore the flow of electrons? Or are they referring to conventional current?
We can safely say that you are talking about electric current. It is defined as the "electric charge which flows through some point or region in a given amount of time" (by Wikipedia): $$ \textbf{I}=\frac{de}{dt}\hat{a} $$ Bear in mind that the differential $d$ means a small change, e.g. a small amount of charge divided by the time needed for it to pass (which is also small). For understanding, you can write it as $I=\frac{e}{t}\hat{a}$. Here $\hat{a}$ is a unit vector: an arrow that fixes the positive direction, like on a number line. Let's say that it points towards positive numbers. We can express the current with the equation above: we divide the amount of charge which was moved in the direction in which $\hat{a}$ points by the time needed for this. This equation and explanation are valid even if you don't know anything about the electrons in the material. Now, in metals (or other conductors where the main carriers of charge are electrons) the change of charge $de$ is negative. When the electrons' velocity points in the direction of $\hat{a}$ (i.e. is positive), the direction of the current $\textbf{I}$ is negative, because of the negative $de$. So you can see that you have a positive direction of travel for the electrons and a negative direction of the current, which means that they are exactly opposite. You could also say that the direction of the velocity is negative, and the current will then become positive, which also leads us to opposite directions.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/574688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
When can the velocity of a car exceed the limit imposed by the static friction? My physics mentor was teaching about circular motion. To explain it, he used an example of a car moving in a circle with constant speed $v$ having mass $m$. Then he talked about a situation when the car would go out of the track. He calculated maximum velocity as followed: $$f = \frac{MV^2}{R}$$ $$\mu_s m g = m\frac{v_{\max}^2}{R}$$ So that one obtains $$v_{max} = \sqrt{\mu_s g R}$$ Then he said, if $v > v_{max}$ then: $$R_c = \frac{v^2}{\mu_s g}$$ Therefore for $R_c > R$, the car would go out of the track. My Question * *How can the speed of the car exceed its maximum value? *Does kinetic friction starts getting applied on the tire?
"Maximum speed" here isn't related to the absolute top speed of the car. Rather, it's the maximum speed at which the car can maintain its circular course. The maximum possible force that friction can exert is governed by the normal force and coefficient of friction - if you try to push off the pavement with greater force, friction cannot oppose it all, and you will slip. If the car exceeds its maximum cornering speed, friction will not be able to supply enough centripetal force to keep the car in a circular path, so the car will skid to the outside of the track. You can imagine a racecar driver taking a turn as tight and fast as possible, but if he goes any faster, he will skid out - it's not the engine that limits his maximum speed around the corner, but the tires' grip on the road (friction). Whenever two surfaces are moving relative to one another, kinetic friction describes their interaction. Static friction is only applicable when there is no sliding/skidding. If the car exceeds its maximum circular velocity, it will start to skid and will be governed by kinetic, rather than static friction.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/574797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What will happen if we try to take a voltage reading by keeping it in current mode in a multimeter? There are different modes present in a multimeter: one is the current mode and another the voltage mode, for their respective measurements. What will happen if one tries to take a voltage reading by keeping it in current mode?
Whenever a multimeter measures a current it is actually measuring the voltage drop across the so-called shunt resistor. A shunt resistor is a resistor with a very low resistance, so that when it is placed in series with the circuit (which is what you do when you try to measure the current of said circuit) it does not affect the current draw of the circuit to any great degree. The meter measures the voltage drop across the shunt, which has a more or less precise resistance, like $R=0.01 \Omega$, and using Ohm's law it finds the current $$I = \frac{\Delta V}{R} = 100\Delta V\; [A]$$ where the final result is found using the assumed resistance of $R=0.01\Omega$. Now that the current measurement is clear, you should be able to understand that if you measure a voltage, which requires the multimeter to be placed in parallel with the circuit, you're just putting the shunt resistor in parallel with your circuit at the point of measurement. Suppose that you're trying to measure the voltage drop across a $100\Omega$ resistor with the multimeter in current mode. What you're actually doing is putting the shunt resistor, suppose again $R = 0.01\Omega$, in parallel with the $100\Omega$ resistor. Suppose that your power supply is $10 V$. Since the shunt is in parallel with the $100\Omega$ resistor you'll have $10V$ across your shunt, which means that the multimeter will display a current of $$I = \frac{\Delta V}{R} = (100*10) A =1000 A$$ So your multimeter will draw a large amount of current from your circuit, and it will just melt the fuse that your multimeter should, hopefully, have.
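The arithmetic of that scenario in a minimal sketch (the shunt resistance and supply voltage are the same illustrative values used above, not the specification of any particular meter):

```python
# Measuring "voltage" with the meter in current mode places the shunt in parallel
# with the component, so essentially the full supply voltage appears across it.
V_supply = 10.0   # V
R_shunt = 0.01    # ohm, assumed shunt resistance

I_shunt = V_supply / R_shunt   # Ohm's law
print(I_shunt)                 # 1000.0 A, far beyond what the meter can handle
```

In practice the meter's fuse blows (or, absent a fuse, the shunt or the supply is damaged) long before a current of that size can actually flow, which is exactly the protection the fuse is there for.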
{ "language": "en", "url": "https://physics.stackexchange.com/questions/574888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Effects of the discrete quotient of chiral symmetry group For a theory of $N_f$ massless Dirac fermions coupled to a Yang-Mills field, the usual story is that we have a $U_L(N_f)\times U_R(N_f)$ symmetry, which is then expressed as $SU_L(N_f)\times SU_R(N_f) \times U_V(1) \times U_A(1)$. However, since $U(n)\cong \left(SU(n)\times U(1)\right)/ \mathbb{Z}_n $, shouldn't the above include the quotient by the centers of $U_L(N_f)$, $U_R(N_f)$? What are the possible physical effects of this?
Keeping track of discrete chiral symmetries is important for discrete anomaly matching, see here. In QCD discrete anomaly matching can be used to exclude the option that spontaneous chiral symmetry does not lead to a chiral condensate $\langle\bar{q}q\rangle$, but is signaled by the vev of a higher dimension operator, such as $\langle(\bar{q}q)^2\rangle$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/575007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Interpreting the Negative Sign in Simple Harmonic Motion What I Know: $$ \vec F = -k \vec x $$ where the negative sign indicates the Force acts in the opposite direction to the displacement. If we were to take the integral so... $$\int_{x_i}^{x_f} Fdx = -\Delta U$$ What would the negative sign in this instance represent? From my understanding, we cannot produce negative energy...or can we? I have attached the image below for the context of my confusion. Thank you.
First of all, potential energy arises when work is done against a conservative force. This means that: change in potential energy = $-$ (work done by the conservative force) $$ΔU = - W$$ That's where the negative sign comes from. What would the negative sign in this instance represent? It means that the work done by the conservative force, taken with a minus sign (i.e. the work done against it), is equal to the change in potential energy. What you are missing here is that we are not producing 'negative energy' but a negative change in energy. The change in energy may be positive, negative or zero. Notice how I have stressed change. P.S. If you still have any doubt, comment below.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/575237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Energy Conservation in Rolling without Slipping Scenario A solid ball with mass $M$ and radius $R$ is placed on a table and given a sharp impulse so that its center of mass initially moves with velocity $v_o$, with no rolling. The ball has a friction coefficient (both kinetic and static) $μ$ with the table. How far does the ball travel before it starts rolling without slipping? The solution I found starts by setting up a conservation of energy and setting $v = rw$: $$ \ \frac{1}{2}m v_o^2 = \frac{1}{2}m v^2 + \frac{1}{2}Iw^2 \to v_o^2 = \frac{7}{5}v^2 \quad{(1)}$$ It then goes on to say $W = \Delta K_{rotation}$ and solves for $D$ : $$ \ \int_{0}^{D} F_{f} dx = μmgD= \frac{1}{2}Iw^2\quad{(2)}$$ There are a couple of things I do not understand about this approach. How does $(1)$ account for the loss of energy due to the friction force which is causing the rotation and the slipping that occurs before it starts rolling purely? Second, how does $(2)$ account for the change in center of mass velocity? Wouldn't $W = \Delta K_{rotation} + \Delta K_{transitional}$ ? I am most likely misunderstanding something and help is greatly appreciated.
Hi there mister Radek Martinez! Good to see you've joined the game! And with a good question too! Here's what I think: The energy lost to kinetic friction makes the ball move slower, so the decrease in kinetic energy equals the energy dissipated by friction; what remains is shared between linear and rotational motion. The kinetic friction force equals the kinetic friction coefficient $\mu$ times the normal force acting on the mass, which is $F_n=Mg$, so: $$F_{friction}=\mu Mg$$ This means the energy lost by the time the ball starts rolling without slipping will be: $$\int _0^D\mu Mg\,dx$$ So now we can write the equation for the conservation of energy: $$\frac{1}{2}M v_o^2=\int _0^D\mu Mg\,dx +\frac{1}{2}I{\omega}^2+ \frac{1}{2}M v_{lin}^2=\mu MgD+\frac{1}{2}I{\omega}^2+ \frac{1}{2}M v_{lin}^2,$$ where $v_{lin}$, with corresponding energy $\frac{1}{2}M v_{lin}^2$, is the linear velocity of the ball once it starts rolling without slipping, and $\frac{1}{2}I{\omega}^2$ is the part of the initial energy that goes into rotation. From this, we can extract $D$, the distance at which the ball starts to roll without slipping (all the constants are known). I'll leave that for you to do. I hope this answers the question (implicitly).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/575347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Do Energy & Momentum also Dilate & Contract respectively? Do energy and momentum also dilate and contract as time and length do, respectively, since energy and time, and momentum and length, are complementary quantities both in relativity & QM?
In a way, yes, one can roughly say so. This can be seen from the way their formulas "change". The non-relativistic momentum (dropping vector signs) $$P=mv$$ turns into $$P=\gamma mv$$ and the non-relativistic kinetic energy $$E=\tfrac{1}{2}mv^2$$ turns into $$E=(\gamma-1)mc^2.$$ These changes in the formulas for momentum and energy are similar to the changes for time dilation $$\Delta t=\gamma \Delta t_0$$ and length contraction $$\ell =\ell _0/ \gamma,$$ where $\gamma$ is defined as $$\gamma=\frac{1}{\sqrt{1-\frac{v^2}{c^2}}}.$$ This is also very clearly visible if you plot these quantities as functions of $v$.
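To see the same behaviour numerically, here is a short sketch comparing the relativistic and Newtonian expressions at a few speeds (the mass value is just an illustrative placeholder):

```python
import math

c = 299_792_458.0   # m/s
m = 1.0             # kg, illustrative mass

def gamma(v):
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

for beta in (0.1, 0.5, 0.9, 0.99):
    v = beta * c
    g = gamma(v)
    p_ratio = (g * m * v) / (m * v)                      # relativistic / Newtonian momentum
    K_ratio = ((g - 1.0) * m * c**2) / (0.5 * m * v**2)  # relativistic / Newtonian kinetic energy
    print(f"beta = {beta}: gamma = {g:.3f}, p ratio = {p_ratio:.3f}, K ratio = {K_ratio:.3f}")
```

At low speeds both ratios are close to 1, and as $v \to c$ they grow without bound, just as $\gamma$ does in time dilation and length contraction.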
{ "language": "en", "url": "https://physics.stackexchange.com/questions/575870", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why do fluids not accelerate? A fluid flowing in a horizontal pipe must be flowing at a constant velocity because of the conservation of mass. However, considering how there would be a pressure and hence force acting behind the fluid, for it to have a constant velocity, there must be an equal force slowing it down (depicted as $F?$). I can't see a force that would be as big as the driving force. Can someone explain to me what this force is and how it's created?
Yes, you have basically constant velocity once you get into the pipe. The pressure difference between the bottom of the tank (pipe inlet) and the atmosphere (pipe outlet) will drive the flow just fast enough that the viscous-drag force equals the pressure-difference force: $$A~(P_i-P_f) = F_{\mu}$$ But $F_{\mu}$ is a function of flow rate, so the flow rate will quickly settle at the point where that equation is true. The acceleration term $ma$ is not a big part of it, and matters only at the beginning while the flow is still speeding up.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/576012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
How does a Foucault pendulum work? How exactly does a Foucault pendulum work? The usual explanation says that the plane of the oscillation of the pendulum is fixed while the earth rotates underneath. On Wikipedia, there is a demonstration of this effect, showing what it's like on the north pole. But surely that can't be right, for this gets at the heart of the hovering helicopter argument, which says that a helicopter hovering above the surface could wait for its destination to arrive (because the earth rotates underneath). But as we know, this doesn't work because of conservation of momentum. So surely the same must apply to the Foucault pendulum? Now, I suspect the phenomenon has something to do with the Coriolis effect, but I can't really understand how. Wouldn't the Coriolis force just be too miniscule? Also, what if we set the pendulum oscillating in the east-west direction (at some point on the northern hemisphere)? Then surely there will be no Coriolis force?
Yes, the point is the Coriolis force. If you want to study the motion of the Foucault pendulum you have to consider the fact that it oscillates in a non-inertial frame, the Earth's surface, so apparent forces have to be taken into account. Wouldn't the Coriolis force just be too minuscule? If you restrict your study to a few oscillations, then it's definitely minuscule and you can set it to zero; but if you want to go beyond this approximation and explain why the oscillation plane rotates, then the Coriolis force is the first-order perturbation you have to account for. What if we set the pendulum oscillating in the east-west direction (at some point in the northern hemisphere)? Then surely there will be no Coriolis force? In this case you still have a non-vanishing Coriolis force, because it vanishes only when the pendulum velocity vector $\vec{v}$ and the Earth's angular velocity $\vec{\omega}$ are parallel. In the case of east-west motion, $\vec{v}$ and $\vec{\omega}$ are perpendicular. They are parallel in a north-south oscillation at the equator.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/576299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why should the particles meet at a common point? I saw a question in my physics book asking for the time when all the three particles (each at the corner of an equilateral triangle and each having constant velocity v along the sides of the triangle) meet at a common point. I can't find the reason why these particles should meet at a common point. What I think is that since they all have the same velocity and each travels the same distance so after some time ($ t = \frac{a}{v}$) ($a$ is the side of the triangle) their corners should be interchanged and this should continue all the time and they should never be at the same point. But it is not the answer and the solution shows that they met at the centroid of the triangle. Why should they follow a curved path? Shouldn't they just go on along the sides of the triangle?
I think it's possible that you either misunderstood the problem or that it was badly phrased. I think you're right that if the particles were simply moving in straight lines with a constant velocity, they would never meet. I believe, however, that the problem was probably intended as a Pursuit Curve-type problem: the particles are initially at the vertices of a triangle, but each particle "pursues" the other, with their velocity directed towards the particle they're pursuing. $A$ pursues $B$, $B$ pursues $C$, and $C$ pursues $A$. (In other words, the points $A$, $B$, and $C$ represent the particles, and not fixed vertices of some triangle.)
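Here is a small numerical sketch of that pursuit-curve reading of the problem: three particles start at the vertices of an equilateral triangle of side $a$, and each always heads toward the next one with speed $v$. The simulation shows them spiralling into the centroid after a time close to the analytic result $t = 2a/(3v)$ (units are illustrative):

```python
import numpy as np

a, v, dt = 1.0, 1.0, 1e-5   # side length, speed, time step

# Vertices of an equilateral triangle of side a
pos = np.array([[0.0, 0.0],
                [a, 0.0],
                [a / 2, a * np.sqrt(3) / 2]])
centroid = pos.mean(axis=0)

t = 0.0
while np.linalg.norm(pos[0] - pos[1]) > 1e-3:
    # Particle i chases particle (i+1) mod 3
    target = np.roll(pos, -1, axis=0)
    direction = target - pos
    direction /= np.linalg.norm(direction, axis=1, keepdims=True)
    pos += v * direction * dt
    t += dt

print(t)                  # close to 2*a/(3*v) = 0.666...
print(pos.mean(axis=0))   # the particles close in on the centroid
print(centroid)
```

With straight-line motion along the sides, as you assumed, they would indeed just keep exchanging corners; it is the continuous re-aiming that bends the paths into spirals ending at the centroid.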
{ "language": "en", "url": "https://physics.stackexchange.com/questions/576424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
Confusing conceptual question An observer $A$ standing on the circumference of a disc rotating with a uniform angular velocity $\omega = 1$ unit, and radius $r=1$ unit, observes a person $B$ at rest w.r.t. the ground. Given the $\angle \theta = 30^\circ$ as shown in the figure, find out the * *relative velocity of $A$ w.r.t. $B$ *relative velocity of $B$ w.r.t. $A$ My approach for the first part was that $|V_{A/B}|=r\omega=1$ unit; for the second part I thought that $|V_{B/A}|=|V_{A/B}|=1$, but I found out that the answer to the second part is wrong and this formula doesn't work for rotating frames. I also tried solving this with proper maths but I always end up at this conclusion only; maybe I am not hitting the right concept. Can anyone please help with the correct maths and concept for the second part? Also, how can we write a general expression for $V_{B/A}$ varying with time? The expression I was deriving is as follows: $\vec{V_{B/A}}=-(\cos{t}\hat{i}+\sin{t}\hat{j})$, where the centre of the circle is the origin. I am really sorry for not typing my work but if users want my work I can share its photo :) . Edit: All units are in the SI system
Let's fix a Cartesian co-ordinate system that is at rest w.r.t the ground and has origin at the centre of the circle. Let's call the co-ordinates of $A$ and $B$ in this co-ordinate system $\vec r_A(t)$ and $\vec r_B(t)$. At time $t=0$ we have $\vec r_A(0) = (0, -1) \\ \vec r_B(0) = (\sqrt 3, -1)$ At a general time $t$, $\vec r_B(t)$ does not change so $\vec r_B(t) = (\sqrt 3, -1)$, but $A$ has moved around the circle by an angle $\omega t$, so $\vec r_A(t) = (\sin (\omega t), - \cos (\omega t))$. At time $t$ $A$'s position with respect to $B$ is $\vec r_A(t) - \vec r_B(t)$, and $A$'s velocity with respect to $B$ is $\displaystyle \frac d {dt} \left( \vec r_A(t) - \vec r_B(t) \right) = \vec v_A(t) - \vec v_B(t) = \vec v_A(t)$ since $\vec v_B(t) = 0$. Similarly, $B$'s position with respect to $A$ is $\vec r_B(t) - \vec r_A(t)$, and $B$'s velocity with respect to $A$ is $\displaystyle \frac d {dt} \left( \vec r_B(t) - \vec r_A(t) \right) = \vec v_B(t) - \vec v_A(t) = - \vec v_A(t)$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/576568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can rockets fly without burning any fuel with the help of gases under extreme pressure only? Why is it necessary to burn the hydrogen fuel coming out of the engine for the lift of rockets? If it is done to create a greater reaction force on the rocket then why can't we get the same lift with just adjusting the speed of the hydrogen gas going out of the engine like we can release them at a great pressure (and also by adjusting the size of the nozzle opening) and thus at a greater speed? Is it possible for rockets to fly without burning the fuel and just releasing the fuel with a great force? (I know the rockets are too massive). How does the ISP of the ordinary rocket engines compare with the one in my question ? Most of the answers have done the comparison (and a great thanks for that), but help me with the numerical difference in the ISP's. (Compare it using any desired values of the amount of fuel and other required things for taking off.)
Releasing compressed gas will produce some thrust. But when the gasses are combusted they expand much more. This produces a much higher exhaust velocity which gives a much greater thrust.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/576645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 12, "answer_id": 4 }
Acceleration transformation in special relativity I am having a hard time understanding the transformation of acceleration when it is not parallel to the instantaneous displacement of the particle, in particular the its dimension. Suppose a particle is in projectile motion. Acceleration is downward because of gravity but I understand "uniform acceleration" depends on frame so we just note it goes downward. Let's transform the acceleration in the stationary frame to the instantaneous frame at the particle. I would expect the transformed acceleration would also point downward, but according to the transformation given in wiki, the direction of the resulting acceleration vector is a combination of the acceleration vector of the stationary frame and instantaneous velocity vector, which does not necessarily mean it accelerates downward. Why does this happen and if the equations are correct, where is the source of the acceleration in the horizontal direction?
If by projectile motion you mean falling only under the action of gravity, the acceleration in the frame of the object is zero. It is in free fall, and any test particle in that frame has no acceleration. But we can suppose a rocket making a curve by using its engines, so that the crew will feel an acceleration. The components of its 3-acceleration vector for an external inertial observer are (in units where $c=1$): $$a_i = \frac{1}{1-v^2}\left(\frac{v_i\,\mathbf {v}\cdot\mathbf{a}}{1-v^2} + \frac{dv_i}{dt}\right)$$ If the inertial frame is momentarily moving with the rocket, $\mathbf v = 0$, and the first term in the parentheses vanishes. The components of the acceleration, $a_i = \frac{dv_i}{dt}$, are then the same as those measured by the rocket. If the inertial frame is momentarily moving transverse to the rocket, $\mathbf {v}\cdot\mathbf{a} = 0$. The acceleration measured by the inertial frame then has the same direction, but differs in magnitude from that measured in the moving frame. In any other situation, the components of the acceleration measured by the inertial frame are different from those measured by the accelerated frame.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/576816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why are voltage and volt both denoted by $V$? Why are voltage and volt both denoted by $V$? Won't it cause confusion?
For this very reason, some texts denote voltage by $v$ and the volt by $V$. Historically, the volt was named after Alessandro Giuseppe Antonio Anastasio Volta to honour his remarkable work. Further, I think the use of $V$ to denote voltage was started by Pierre-Simon Laplace. Since then $V$ has been used for denoting voltage. I don't think it causes much confusion as long as you know what you are talking about.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/576917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Are there examples of dark matter at intra-galactic scales? In articles I've read, evidence of dark matter (rotation of galaxies / gravitational lensing / galaxy collisions etc) is presented at galactic scales. Are there examples of dark matter at smaller scales than this? One possibility I could think of, would be a misidentified 'silent' black hole vs a cold dark matter clump. This example is probably a completely incorrect assumption on my behalf, but I'm curious to know if we do have evidence, or even possibilities at this scale.
One could argue that the rotation curves of galaxies are intra-galactic evidence, since it requires the somewhat continuous presence of dark matter across the galaxy. If you want smaller scales, the smallest systems where dark matter is firmly established are dwarf galaxies; these are dark matter dominated. Finally, there are tidal streams. Some of those exhibit e.g. holes and other structures that some have interpreted as evidence for dark matter interacting with those streams. While that's an extremely interesting possibility, it's not firmly established as evidence for dark matter. Concerning your misidentified silent black hole: primordial black holes are dark matter candidates. But none have yet been found, despite searches for them. More generally, MACHO searches look for such objects.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/577297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Kleppner and Kolenkow, 2nd edition, problem 4.23 - Suspended garbage can I am working through Kleppner and Kolenkow's An Introduction to Mechanics on my own and have a question about the solution of the mentioned problem. Problem Statement: An inverted garbage can of weight $W$ is suspended in air by water from a geyser. The water shoots up from the ground with a speed of $v_0$ at a constant rate $K$ kg/s. The problem is to find the maximum height at which the garbage can rides. Neglect the effect of the water falling away from the garbage can. The book/TA solution I have found is quite nice and uses $\bf{F}_{tot} = \dot{\bf{P}}_{in}$ from the text. It also uses a fully elastic collision of the water and bucket, so that the momentum transfer and force are doubled. My question is how to work this problem using $P(t)$ and $P(t+\Delta t)$, as is done in sections 4.7 and 4.8 of the text. Here is what I have, which doesn't work. I think I probably have setup the problem incorrectly: $P(t) = Mv + \Delta m u$ $P(t+\Delta t) = (M + \Delta m)(v+\Delta v)$ which gives $\frac{dP}{dt} = M \frac{dv}{dt} + (v-u)\frac{dm}{dt} = -Mg = F_{tot}$ and $u = v_0 - gt$. Substituting in $u$, then solving the first order ODE and eventually eliminating $t$ leads to $h = \frac{1}{2g}(\frac{2Mg}{K} + v_0)^2$, which is incorrect. I have also tried $P(t+\Delta t) = M(v+\Delta v) -\Delta m u$, to account for the elastic collision, but this leads to a 3rd power after integration that does not work out either. Any help would be appreciated, thanks.
The two equations you started with in the question seem incorrect to me! Have a look at my approach.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/577373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Proof of a vector calculus identity In https://arxiv.org/abs/hep-ph/0010057 the following vector calculus equality is claimed without proof although in note [4] the cryptic comment is made that "The relation is essentially the momentum space identity $(\mathbf{k}\times\mathbf{A})^2=\mathbf{k}^2\mathbf{A}^2-(\mathbf{k}\mathbf{A})^2$ in position space": Indeed there is the vector relation: \begin{align} \int \mathbf{A}^2(x)d^3 x & = \frac{1}{4\pi} \int d^3 x d^3 x' \frac{[\nabla \times \mathbf{A}(x)] \cdot [\nabla \times\mathbf{A}(x')]}{\vert \mathbf{x}-\mathbf{x'} \vert} \\ & \qquad + \frac{1}{4\pi} \int d^3 x d^3 x' \frac{[\nabla \cdot \mathbf{A}(x)] [\nabla \cdot \mathbf{A}(x')]}{\vert \mathbf{x}-\mathbf{x'} \vert} \tag{6}\label{6} \\ & \qquad + \rm{surface\ terms} \end{align} Each of the two terms is positive; hence (up to the surface term question) we can minimize the integral of $\mathbf{A}^2$ by choosing $\nabla \cdot \mathbf{A} = 0.$ With this choice the integral of $\mathbf{A}^2$ is minimal in accord with our above remarks and is expressed only in terms of the magnetic field $\nabla \times \mathbf{A}$ This $\eqref{6}$ is indeed a very interesting identity and Gubarev, et al, go on to show it also in relativistically invariant form. When $\mathbf{A}$ is the vector potential, $\mathbf{B}=\nabla\times\mathbf{A}$, then in the Coulomb gauge $\nabla\cdot\mathbf{A}=0$ and $$\int \mathbf{A}^2(x)d^3 x = \frac{1}{4\pi} \int d^3 x d^3 x' \frac{\mathbf{B}(x) \cdot \mathbf{B}(x')}{\vert \mathbf{x}-\mathbf{x'} \vert} + \rm{surface\ terms}$$ Ignoring the "surface terms" in the infinity and assuming that the integrals of $\eqref{6}$ are positive indeed then we have the gauge independent minimum on the right side dependent only on the $\mathbf{B}$ field: $$\int \mathbf{A}^2(x)d^3 x \ge \frac{1}{4\pi} \int d^3 x d^3 x' \frac{\mathbf{B}(x) \cdot \mathbf{B}(x')}{\vert \mathbf{x}-\mathbf{x'} \vert}.$$ I have two questions: * *I would like to see a more detailed explanation of the proof based on the momentum space - position space equality *Why is it obvious that on the right side of $\eqref{6}$ each of the two integrals is positive?
As @flevinBombastus has suggested, here is a sketch of the proof of the equality in Equation $(6)$ based on [1]. Start with $$\nabla^2\frac{1}{|\mathbf x - \mathbf x'|}=-4\pi\,\delta(\mathbf x - \mathbf x')$$ and $$\nabla \times (\nabla \times \mathbf v)=\nabla (\nabla\cdot \mathbf v) - \nabla^2 \mathbf v$$ Then $$\mathbf{A}(\mathbf x) = \int d^3x' \mathbf{A}(\mathbf x')\delta(\mathbf x - \mathbf x') =-\frac{1}{4\pi}\int d^3x' \mathbf{A}(\mathbf x')\nabla'^2\frac{1}{|\mathbf x - \mathbf x'|},$$ therefore $$\int d^3x \mathbf{A}(\mathbf x)\cdot\mathbf{A}(\mathbf x)=-\frac{1}{4\pi}\int\int d^3x d^3x' \mathbf{A}(\mathbf x)\cdot \mathbf{A}(\mathbf x')\nabla'^2\frac{1}{|\mathbf x - \mathbf x'|}$$ Now integrate the RHS by parts over $\mathbf x'$. If $\mathbf A$ vanishes at infinity then the surface term will vanish, and after some more rearrangements and partial integration we get the required identity Eq. (6). Interestingly, the same procedure also works for the scalar product of two vector fields. This takes care of the 1st question. [1] Durand, "On an identity for the volume integral of the square of a vector field," Am. J. Phys. 75 (6), June 2007
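As for the momentum-space identity quoted in the question's note [4], it is just the Lagrange identity $(\mathbf{k}\times\mathbf{A})^2=\mathbf{k}^2\mathbf{A}^2-(\mathbf{k}\cdot\mathbf{A})^2$, which a quick numeric check confirms for arbitrary vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
k = rng.normal(size=3)
A = rng.normal(size=3)

lhs = np.dot(np.cross(k, A), np.cross(k, A))
rhs = np.dot(k, k) * np.dot(A, A) - np.dot(k, A) ** 2
print(np.isclose(lhs, rhs))   # True
```

Dividing that identity by $\mathbf{k}^2$ and going back to position space (where $1/\mathbf{k}^2$ becomes the $1/(4\pi|\mathbf x - \mathbf x'|)$ kernel) is essentially the manipulation sketched above.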
{ "language": "en", "url": "https://physics.stackexchange.com/questions/577619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why does gravity act at the centre? Why does gravity act at the centre of earth and how does that happen?
Given a spherically symmetric Earth and a point $P$, the gravitational field vector at $P$ can be determined in two steps: * *Why does the field point toward the Earth's center $C$? Consider rotational invariance about the axis passing through $C$ and $P$. The mass distribution of the Earth is unchanged by this rotation, and so must be the field -- i.e., the field is directed along the axis. *Why is the field magnitude equivalent to concentrating the Earth's mass at $C$? This is a special property of the inverse-square law that is more easily understood via Gauss's law. Consider the spherical surface centered at $C$ and passing through $P$. The gravitational field on this surface is everywhere perpendicular to it, as argued above. Gauss's law says that the integral of the perpendicular field component (i.e., in this case, the field magnitude) over the surface is proportional to the mass $M$ enclosed by the surface. Thus, given spherical symmetry, the field magnitude depends only on $M$ and is the same as if it were a point mass at $C$. Note that if $P$ is at or above the Earth's surface, then $M$ is the Earth's total mass; if $P$ is below the Earth's surface, then $M$ is the amount of mass located deeper than $P$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/577688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Maximizing entropy with Lagrange multipliers This is a problem I saw in a stat mech textbook, and I think it is a fairly common problem. Given the entropy function: $$S = - \sum_{i=1}^N p_i \log p_i$$ Maximize $S$ subject to constraints: $$ \sum_{i=1}^N p_i = 1 \\ \sum_{i=1}^N p_i e_i = c$$ It was suggested to solve this problem using Lagrange multipliers. So this is how I went about it: $$L(p,\lambda, \mu) = - \sum_{i=1}^N p_i \log p_i - \lambda \left(\sum_{i=1}^N p_i -1\right)- \mu \left(\sum_{i=1}^N p_i e_i - c\right) $$ $$\frac{\partial L}{\partial p_i} = -(\log p_i +1) - \lambda - \mu e_i = 0$$ A little arithmetic gives: $$p_i = e^{-\lambda - \mu e_i -1}$$ Then I used the above constraints to solve for $p_i$. $$\sum_{i=1}^N p_i = \frac{\sum e^{-\mu e_i}}{e^{\lambda+1}} = 1 \implies e^{\lambda +1} = \sum e^{-\mu e_i} $$ And $$\sum_{i=1}^N e_i p_i = \frac{\sum_{i=1}^N e_i e^{-\mu e_i}}{e^{\lambda +1}} = c$$ Since I am not sure how to solve this final constraint and get a value for $\mu$, I said $$p_i = \frac{e^{-\mu e_i}}{\sum_i e^{-\mu e_i}}$$ My question is, how do I solve for $\mu$?
One may apply the following trick: let $f(\mu) = \sum e^{-\mu e_i}$; then $$ \frac{\partial f}{ \partial\mu} = -\sum e_i e^{-\mu e_i} $$ and the constraints lead to the following differential equation: $$ \frac{\partial f}{ \partial\mu} = -cf \qquad f(0) = N $$ where $N$ is the number of energy levels $e_i$. This has the solution $$ f(\mu) = N e^{-c \mu} $$ However, in general there is no way to solve the resulting equation for $\mu$ in closed form: $$ \sum e^{-\mu e_i} = N e^{-c \mu} $$ One has to get the solution by some numerical technique.
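In practice one simply solves the mean-energy constraint for $\mu$ numerically. A minimal sketch (the energy levels $e_i$ and the target mean $c$ are arbitrary illustrative numbers; $c$ must lie between the smallest and largest $e_i$):

```python
import numpy as np
from scipy.optimize import brentq

e = np.array([0.0, 1.0, 2.0, 3.0])   # illustrative energy levels e_i
c = 1.2                              # target mean energy

def mean_energy(mu):
    w = np.exp(-mu * e)
    return np.sum(e * w) / np.sum(w)

# mean_energy(mu) decreases monotonically with mu, so a bracketing root-finder works
mu = brentq(lambda m: mean_energy(m) - c, -50.0, 50.0)
p = np.exp(-mu * e) / np.sum(np.exp(-mu * e))

print(mu)
print(p, p.sum(), (p * e).sum())   # probabilities, their sum (= 1), and the mean (= c)
```

The resulting $\mu$ plays the role of an inverse temperature, and $\lambda$ then follows from the normalization condition.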
{ "language": "en", "url": "https://physics.stackexchange.com/questions/577850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is the centripetal force when instead of a mass point we have a physical rotating body? I was wondering what is the centripetal force of a body rotating in a circular motion. I know that the centripetal force of a point mass is $mv^2/r$. I only have done an introductory physics class so I can not find the answer.
You will probably come to this later in your course. In short: for all mass elements at the same $r$ it is just your formula; then you have to add up the forces for the mass elements at different $r$. If you know what integrating is, you integrate over all radii. For simple body shapes you can calculate (or look up) the analogous mass-weighted quantities, such as the "moment of inertia" $I$, and then work out the force. But as a first approximation you take $r$ to the center of mass and use the formula for a point mass.
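To make this concrete, here is a worked sketch for a simple case: a thin uniform rod of mass $M$ and length $L$ rotating about one end with angular velocity $\omega$. Summing $\omega^2 r\,dm$ over the rod gives $M\omega^2 L/2 = M\omega^2 r_{cm}$, the same as for a point mass placed at the centre of mass; the code just checks this numerically with illustrative values:

```python
import numpy as np

M, L, omega = 2.0, 1.5, 3.0   # kg, m, rad/s (illustrative values)

# Split the rod into many small mass elements dm at radii r from the axis
N = 100_000
r = np.linspace(0.0, L, N)
dm = np.full(N, M / N)

F_total = np.sum(dm * omega**2 * r)       # sum of the element-wise centripetal forces
F_point_at_cm = M * omega**2 * (L / 2)    # point mass at the centre of mass

print(F_total, F_point_at_cm)             # both ~13.5 N
```

More generally, the net force on any rigid body in uniform circular motion is $M\omega^2 r_{cm}$ directed toward the axis, because Newton's second law applied to the centre of mass only cares about the total mass and the centre-of-mass acceleration.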
{ "language": "en", "url": "https://physics.stackexchange.com/questions/577965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Gravitational potential energy of an $n$-body In my CIE A level course, the gravitational potential energy of a mass in a gravitational field is defined as the work done in bringing the mass from infinity to that point without changing its kinetic energy. I thought about the gravitational potential energy of a system of more than 2 masses; it would obviously be lower but I cannot compute it with this definition. What would be the proper definition of the g.p.e of an $n$-body system and how would it be calculated? P.S: I know that it is nonsensical to define g.p.e of a mass, but that’s just how they define it for some reason, and it works for a 2-body system.
The gravitational potential energy of a system of $n$ bodies is calculated by a formula similar to the two-body potential energy formula. You just need to apply the two-body formula to every pair of bodies and sum up the results: $$ E = - \sum_{i=1}^n{\sum_{j=i+1}^n}Gm_im_j/r_{ij}$$ Why is it so? Suppose the formula is correct for $n-1$ bodies. Now you bring $m_n$ in from infinity. At any moment the total gravitational force acting on $m_n$ is equal to the sum of the gravitational forces produced by the individual masses $m_i$, so the total work done by the gravitational forces is the sum of the works done by these individual forces, $W = \sum_{i=1}^{n-1}Gm_nm_i/r_{ni}$, and the potential energy added to the system is $-W$.
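A direct translation of that pairwise sum into code, as a minimal sketch (the masses and positions are illustrative):

```python
import numpy as np

G = 6.674e-11  # m^3 kg^-1 s^-2

def gravitational_pe(masses, positions):
    """Total gravitational potential energy of a set of point masses.

    Sums -G*m_i*m_j/r_ij over every unordered pair (i, j) with i < j.
    """
    E = 0.0
    n = len(masses)
    for i in range(n):
        for j in range(i + 1, n):
            r_ij = np.linalg.norm(positions[i] - positions[j])
            E -= G * masses[i] * masses[j] / r_ij
    return E

# Example: three 1000 kg masses at the corners of a right triangle (metres)
m = [1000.0, 1000.0, 1000.0]
pos = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
print(gravitational_pe(m, pos))   # about -1.8e-5 J
```

Each pair is counted exactly once, which is what the $j = i+1$ lower limit in the double sum expresses.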
{ "language": "en", "url": "https://physics.stackexchange.com/questions/578071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why do electric field lines curve at the edges of a uniform electric field? I see a lot of images, including one in my textbook, like this one, where at the ends of a uniform field, field lines curve. However, I know that field lines are perpendicular to the surface. The only case I see them curving is when drawing field lines to connect two points which aren't collinear (like with charged sphere or opposite charges) and each point of the rod is collinear to its opposite pair, so why are they curved here?
These are so-called edge effects. The straight electric field lines connecting the two surfaces are the solution for infinite charged plates. In practice, no plates are infinite: they have edges. Far from the edges (close to the center of the plates) one can still think of the plates as infinite, but at the edges this is clearly not true. Note that the same is true for an infinite charged wire or cylinder: in practice one always has a finite one, but far enough from the edges, one can assume that it is infinite and thus simplify the math.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/578146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 4, "answer_id": 2 }
Can we Predict the Trajectory of a hypersonic missile? I read in a newspaper that we can't predict the trajectory of a hypersonic missile and that this property renders the missile undetectable. However, what I could not understand is why we can't predict its trajectory. What factors do we have to look at for predicting the trajectory of such high-speed missiles? Is this feature associated with its speed? I know that there would be forces like the thrust from propulsion, gravity, and the drag force. Is there anything else affecting the trajectory?
I'll deal here with "classic" hypersonic missiles which travel through the upper reaches of the atmosphere and stay aloft by generating lift and operate under continuous thrust. Such missiles are maneuverable, unlike ballistic missiles whose post-boost phase coasting trajectories can be mapped in real time and for which precise intercept courses can therefore be plotted. This means that when a defensive missile launch against a hypersonic missile is detected, it can quickly change course in response and avoid the incoming missile. It is in this sense that the hypersonic missile's trajectory is "unpredictable". Also note this does not mean it is "undetectable"; it just means it is much harder to shoot down than a ballistic missile.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/578283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Can we represent 4D graphically? Actually I know that axes are always perpendicular, but after three axes we cannot draw any other axis that is perpendicular to all the other three. Can anyone say how we can draw another axis which is mutually perpendicular?
This cannot be done. Humans can only perceive three dimensions, and the axis that you are asking for would require us to visualize a fourth spatial dimension, which we cannot do. However, it is not impossible that there could be other spatial dimensions in addition to the three we have. We just cannot see or sense them. There are actual theories (like string theory) which study dimensionality beyond this 3D setup. Such dimensions are said to be "compactified". Picture a long, thin cylinder extending in the x-direction. Obviously this cylinder has three dimensions. Now consider yourself to be moving far away from this cylinder. At some stage it will appear to be a line, or 1-dimensional. In reality, it has more dimensions but they are so small or "compactified" that they are not perceivable.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/578728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Is there anything such as gravitational field-lines in GR similar to the electric/magnetic field lines in electromagnetism? I sometimes mistake space-time curvature for gravitational field lines. Do geodesics in some way represent $g$-field lines? Why is it not traditional to show $g$-field lines around a massive object in general relativity the same way we do for $E$ or $B$ field lines around an electric charge or a magnet in electromagnetism?
I would encourage anyone to draw any lines that help either them or others to get a good understanding. But the reason field lines are so much used in electromagnetism, and much less used in gravity, is because they nicely capture a mathematical property of a field whose divergence is zero (on a flat spacetime background). The zero divergence of the field translates to the field lines being continuous, and their spacing then expresses the field strength. There are aspects of gravity that are a bit like this. In the weak field limit and for simple cases (e.g. a static case or one with steady rotation) one can express gravitational influences using a pair of fields analogous to electric and magnetic fields. But mostly in GR we are interested in stronger effects where the equations are more complicated and non-linear. Then there is nothing quite so convenient as the field lines of electromagnetism. But there are plenty of things one can do. Drawing sets of light cones can help in getting an impression of a region of spacetime. Or one could draw a selection of null geodesics. I like to add to this a set of timelike lines marked off by proper time along them. But such diagrams can be stretched and squeezed, twisted and distorted in all sorts of ways merely by changing the coordinates being used to plot them, so they have to be interpreted with care. If one merely changes coordinates then the diagram changes but the spacetime does not.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/578862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }