How do photons evade the unavoidable consequence of angular momentum algebra? I think an inescapable consequence of the angular momentum algebra is that a particle with spin-$j$ must have $(2j+1)$ spin projections in any direction. However, photons seem to evade this conclusion. Why?
The technical answer is that spin is a property associated with representations of the Poincaré group P. The representations are induced from the "little group", i.e. the subgroup of P that leaves the particle's four-momentum fixed. For a massive particle we can go to the rest frame, in which the four-momentum is $p=(m,0,0,0)$. This vector is left fixed by the rotation group SO(3), so for a massive particle spin is a property of rotations and their spin-$j$ representations. For a massless particle there is no rest frame and the reference momentum must be a null vector $p =(|{\bf p}_0|, {\bf p}_0)$. The little group now consists of spatial rotations SO(2) about the three-vector ${\bf p}_0$, together with operations generated by infinitesimal Lorentz boosts in directions perpendicular to ${\bf p}_0$ combined with compensating infinitesimal rotations. Remarkably, the combined operations mutually commute, possess all the algebraic properties of Euclidean translations, and the resulting little group is isomorphic to the symmetry group SE(2) of the two-dimensional Euclidean plane. Wigner argued that the translations must act trivially, and so the spin of a massless particle is associated with the one-dimensional representations of SO(2). A more intuitive explanation is that, for something moving at the speed of light, any vector originally having a component perpendicular to the direction of travel will be Lorentz transformed to one pointing in the direction of travel.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/680862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Microstates of the canonical ensemble In the microcanonical ensemble the microstates of a system in an arbitrary macrostate are also eigenstates of the Hamiltonian. Does the same apply to the microstates of the canonical ensemble? Are they eigenstates of the Hamiltonian? I would expect them not to be, since here the energy is not constant. But I am not sure.
At the core of statistical mechanics using ensembles is the possibility of assigning a probability to the set of all possible mechanical states of the system (microstates). Therefore, the starting point is the identification of such microstates. In principle, any complete set of commuting observables could be used. However, for equilibrium macrostates, one knows (von Neumann's equation) that it is possible and convenient to use eigenstates of the energy for all possible energies of the mechanical system. Such a statement does not depend on which energy eigenstates contribute to a macrostate. Therefore, they are used to label the microstates not only in the case of a microcanonical ensemble, where only one particular value of the energy is picked out, but also for canonical and grand-canonical ensembles, where some finite probability is assigned to the microstates of any possible value of the energy.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/681132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
No net generation or recombination of electrons is assumed I am currently studying the textbook Physics of Photonic Devices, second edition, by Shun Lien Chuang. Section 2.1.1 Maxwell's Equations in MKS Units says the following: The well-known Maxwell's equations in MKS (meter, kilogram, and second) units are written as $$\nabla \times \mathbf{E} = - \dfrac{\partial}{\partial{t}}\mathbf{B} \ \ \ \ \text{Faraday's law} \tag{2.1.1}$$ $$\nabla \times \mathbf{H} = \mathbf{J} + \dfrac{\partial{\mathbf{D}}}{\partial{t}} \ \ \ \ \text{Ampére's law} \tag{2.1.2}$$ $$\nabla \cdot \mathbf{D} = \rho \ \ \ \ \text{Gauss's law} \tag{2.1.3}$$ $$\nabla \cdot \mathbf{B} = 0 \ \ \ \ \text{Gauss's law} \tag{2.1.4}$$ where $\mathbf{E}$ is the electric field (V/m), $\mathbf{H}$ is the magnetic field (A/m), $\mathbf{D}$ is the electric displacement flux density (C/m$^2$), and $\mathbf{B}$ is the magnetic flux density (Vs/m$^2$ or Webers/m$^2$). The two source terms, the charge density $\rho$(C/m$^3$) and the current density $\mathbf{J}$(A/m$^2$), are related by the continuity equation $$\nabla \cdot \mathbf{J} + \dfrac{\partial}{\partial{t}}\rho = 0 \tag{2.1.5}$$ where no net generation or recombination of electrons is assumed. I'm curious about this part: where no net generation or recombination of electrons is assumed. What does this mean in simpler terms? Why is this assumption necessary for $\nabla \cdot \mathbf{J} + \dfrac{\partial}{\partial{t}}\rho = 0$?
The number/concentration of electrons in a volume may change due to their flow into / out of the volume (electric current), or due to the electrons appearing/disappearing inside of it. In vacuum, the latter possibility can usually be safely ignored (although not in QFT), so we have the continuity equation: $$\nabla\cdot\mathbf{J}+\partial_t\rho=0\Leftrightarrow \int_S\mathbf{J}\cdot\mathbf{ds} + \frac{dQ}{dt}=0,$$ where the second equation is just the integral form of the continuity equation: the total current flowing out through the surface surrounding the volume equals the rate of decrease of the charge within. If, however, the charge may appear/vanish within the volume – which is a real option in semiconductors' interaction with the electromagnetic field – then we need to augment the continuity equation with a source term: $$\nabla\cdot\mathbf{J}+\partial_t\rho=s(t)$$ It is necessary to point out that total charge conservation still holds (creation of an electron is accompanied by creation of a hole), but we would often want to describe electrons and holes separately – writing a continuity equation for each of them – or one type of carrier may be quickly removed and considered non-existent for the purposes of description (e.g., holes may be localized, but electrons highly mobile).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/681233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If you were invisible, would you also be cold? If you were invisible, would you also be cold? (Since light passes through you, so should thermal radiation.) Additionally, I'd like to know if you were wearing invisible clothes, would they keep you warm? In my understanding, the heat radiation from the body would pass through the cloth. Is it even necessary to be permeable for heat radiation in order to be invisible? Could there be a form of invisibility (hypothetically speaking, of course) that makes you permeable for light in the visible spectrum, but not for heat radiation? Can those two things be separated?
Thermoregulation There are four avenues of heat loss: evaporation, convection, conduction, and radiation. If the skin temperature is greater than the surrounding air temperature, the body can lose heat by convection and conduction. But if the air temperature of the surroundings is greater than that of the skin, the body gains heat by convection and conduction. In such conditions, the only means by which the body can rid itself of heat is by evaporation. So, when the surrounding temperature is higher than the skin temperature, anything that prevents adequate evaporation will cause the internal body temperature to rise.[24] During intense physical activity (e.g. sports), evaporation becomes the main avenue of heat loss.[25] Humidity affects thermoregulation by limiting sweat evaporation and thus heat loss.[26]
{ "language": "en", "url": "https://physics.stackexchange.com/questions/681335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 9, "answer_id": 8 }
Can I conclude that acceleration happens a bit later after force is felt? We define forces like the electric force, magnetic force and gravitational force, etc., to be caused by fields such as the electric field, magnetic field and gravitational field respectively. Since these fields take time to reach the object on which the force is applied, the acceleration should occur after the force is applied. Also, does it apply to all cases, or are there any interactions that happen on contact? What I think is that when object A applies a force on B, A first feels the force and then B feels the force and so accelerates. Meaning that the force acts on B and B accelerates at the same time, but A feels the force first.
For the idealization of point particles, the acceleration of a particle at a particular time is determined by the force at that same instant in time. This is also true for the idealization of rigid bodies, as acceleration of one point means instantaneous acceleration (in general) of all points in the body in order to keep the points fixed relative to each other. For non-rigid bodies, it does take a finite time for the "signal" that a force has been applied to one part of the body to propagate to other parts of the body. So in this sense you can think of there being a delay. However, for each point on the body it is still the case that the acceleration is determined by the force at the same instant in time.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/681734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Electric field given flux through a plane Suppose you have a hidden, arbitrary, static charge distribution below the plane $z=0$ and you know the electric flux through the plane at every point on that plane. There is no charge above the plane. Is it possible to determine the electric field at every point above that plane? I would think so. Heuristically, if you know the flux, then don't you have full knowledge of the field lines? I'd think that would imply some Neumann boundary conditions which then uniquely determines the potential and so the field, but I've only managed to get so far. The differential flux element is $d\Phi=E_zrdrd\theta$. The flux and area elements are known, so $E_z$ is known, consequently the $\theta$ and $r$ derivatives of $E_z$ are known. Since a static electric field is irrotational, the $z$ derivative of $E_r$ and $E_\theta$ are also known on the plane. Can any more be deduced? Not sure where to go from here.
I believe this should be possible. Is the region above the plane a charge-free region? If so, then you should be able to write down a general solution to Laplace's equation using separation of variables. Then apply boundary conditions at $z=0$, and $r\rightarrow \infty$, basically at the boundary of the upper hemisphere. The BC at the surface will involve $E_\perp=-\partial V/\partial n$, and likely you would choose $V(r\rightarrow \infty)=0$. With azimuthal symmetry, you should be able to write $$V(r,\theta) = \sum_l{\left(A_l r^l + \frac{B_l}{r^{l+1}}\right)P_l(\cos{\theta})}$$ Note that $r$ is the distance from the origin and $\theta$ is the polar angle measured from the z-axis. Imposing the boundary conditions in this coordinate system might prove tricky, because you need to evaluate $(\partial V/\partial z)_{z=0}$. However, the boundary condition that $V\rightarrow 0$ at $r\rightarrow \infty$ immediately gives $A_l=0$, so you only have the $B_l$ to determine from the boundary condition at $z=0$.
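To make the form of that series concrete, here is a minimal Python sketch that simply evaluates the exterior solution for a few coefficients; the $B_l$ values are hypothetical placeholders, not derived from any actual flux data:

```python
import numpy as np
from scipy.special import eval_legendre

# Exterior solution V(r, theta) = sum_l B_l P_l(cos theta) / r^(l+1)  (all A_l = 0).
B = [1.0, 0.5, 0.1]   # hypothetical coefficients B_0, B_1, B_2 in suitable units

def V(r, theta):
    return sum(Bl * eval_legendre(l, np.cos(theta)) / r**(l + 1)
               for l, Bl in enumerate(B))

r = np.linspace(1.0, 10.0, 5)
print(V(r, np.pi / 4))   # potential along a ray at 45 degrees from the z-axis
```

The field above the plane then follows from $\mathbf{E} = -\nabla V$, with the $B_l$ fixed by matching $-\partial V/\partial z$ at $z=0$ to the known flux density.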
{ "language": "en", "url": "https://physics.stackexchange.com/questions/681888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding the resonance frequency for forced damped oscillations I have a problem regarding a forced, damped harmonic oscillator, where I'm trying to find the resonance frequency. I have calculated the frequency for free oscillations as $$\omega_{free}=\sqrt{\frac{\kappa}{I}-\left(\frac{b}{2I}\right)^2},$$ where $b$ is the damping coefficient. To the best of my knowledge, $\omega_{free}$ should be the same as the resonance frequency, but when I try to calculate the resonance frequency from the amplitude $$A=\frac{\tau_0}{I \sqrt{(\frac{\kappa}{I}-\omega^2)^2 + (\omega \frac{b}{I})^2 }}$$ by finding the maximum value, I get a slightly different equation: $$\omega_{max}=\sqrt{\frac{\kappa}{I}-\left(\frac{\sqrt{2}\cdot b}{2I}\right)^2}.$$ Which one is correct to use as the resonance frequency, and why is $b$ in $\omega_{max}$ scaled by a factor of $\sqrt{2}$ compared to $\omega_{free}$?
Your equations seem to be correct. There are three types of frequencies to consider: * *$\omega_0$ is the frequency of undamped oscillations, i.e. when $b = 0$, aka the natural frequency *$\omega_d$ is the frequency of damped oscillations, i.e. when $0<b<2m\omega_0$ *$\omega_r$ is the frequency at which the system gain is maximum, aka the resonant frequency The resonant frequency is not equal to the natural frequency except for undamped oscillators, which exist only in theory. Here is a physical (intuitive) explanation: https://physics.stackexchange.com/a/353061/149541 However, for oscillators with a high quality factor the resonant frequency approximately equals the natural frequency, $\omega_r \approx \omega_0$, as I will show here. The differential equation of the forced damped oscillator is: $$m \ddot{x} + b \dot{x} + k x = u$$ where $m$ is the object mass and $b$ is the damping coefficient. This system equation is also often written in the following form: $$\ddot{x} + \gamma \dot{x} + \omega_0^2 x = \frac{1}{m} u$$ where $$\gamma = \frac{b}{m} \quad \text{and} \quad \omega_0^2 = \frac{k}{m}$$ The quality factor is a dimensionless number that describes how underdamped an oscillator is. The higher the number, the more slowly the oscillation amplitude decays: $$Q = \frac{\omega_0}{\gamma}$$ The transfer function of the system is: $$G(s) = \frac{1}{m} \frac{1}{s^2 + \gamma s + \omega_0^2} = \frac{1}{m \omega_d} \frac{\omega_d}{(s+\sigma)^2 + \omega_d^2}$$ where $$\sigma = \frac{\gamma}{2} \quad \text{and} \quad \omega_d = \sqrt{\omega_0^2 - \sigma^2} = \omega_0 \sqrt{1 - \frac{1}{4Q^2}}$$ The system is underdamped when $\omega_0^2 - \sigma^2 > 0$, i.e. when $b < 2 m \omega_0$. When this condition is satisfied the system oscillates with an amplitude that decays with time. Also note the effect the quality factor has on the system: the higher the $Q$ (for $Q > \frac{1}{2}$), the less damped the oscillations and the closer the frequency $\omega_d$ is to $\omega_0$. The response to any input in the Laplace domain is $X(s) = G(s) U(s)$. When the input signal is an impulse $u(t) = \delta(t) \leftrightarrow U(s) = 1$, the corresponding response (impulse response) is $$x(t) = \frac{1}{m\omega_d} e^{-\sigma t} \sin(\omega_d t), \qquad t \geq 0$$ From this it is clear what each parameter does: $\omega_d$ is the frequency of damped oscillations and $\sigma$ is the oscillation amplitude decay rate. We need to find the transfer function in complex representation: $$G(j\omega) = \Bigl. G(s) \Bigr|_{s = j\omega} = \frac{1}{m} \frac{1}{(-\omega^2 + \sigma^2 + \omega_d^2) + j(2\sigma\omega)}$$ The system gain is defined as $$A(\omega) = \left| G(j\omega) \right| = \frac{1}{m} \frac{1}{\sqrt{(\omega^2 - \sigma^2 - \omega_d^2)^2 + (2\sigma\omega)^2}}$$ The maximum gain with respect to frequency can be found from $$\frac{d}{d\omega} A(\omega) = -\frac{1}{2m} \frac{2(\omega^2 - \sigma^2 - \omega_d^2)2\omega + 2(2\sigma\omega)2\sigma}{\Bigl(\sqrt{(\omega^2 - \sigma^2 - \omega_d^2)^2 + (2\sigma\omega)^2}\Bigr)^3} = 0$$ The solution is obtained from $$2(\omega^2 - \sigma^2 - \omega_d^2)2\omega + 2(2\sigma\omega)2\sigma = 0$$ $$\omega^2 = \omega_d^2 - \sigma^2 = \omega_0^2 - \frac{\gamma^2}{2}$$ Therefore, the system gain is at maximum for $$\omega_r = \sqrt{\omega_d^2 - \sigma^2} = \sqrt{\omega_0^2 - \frac{\gamma^2}{2}} = \omega_0 \sqrt{1 - \frac{1}{2 Q^2}}$$ The resonant frequency approximately equals $\omega_0$ for high-Q oscillators. For example, for $Q = 10$ the resonant frequency is $\omega_r = 0.9975 \cdot \omega_0$. The system gain at the resonant frequency is $$\Bigl. A(\omega) \Bigr|_{\omega=\omega_r} = \frac{1}{m} \frac{1}{2\sigma \omega_d} = \frac{1}{k} \frac{Q}{\sqrt{1 - \frac{1}{4Q^2}}}$$ The system gain is proportional to the Q factor.
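As a quick numerical cross-check of those formulas (the values of $m$, $k$ and $b$ below are arbitrary, made-up numbers), the gain $A(\omega)$ indeed peaks at $\omega_r$, not at $\omega_d$ or $\omega_0$:

```python
import numpy as np

m, k, b = 1.0, 100.0, 2.0                      # kg, N/m, kg/s (hypothetical oscillator)
omega0 = np.sqrt(k / m)                        # natural frequency
gamma = b / m
sigma = gamma / 2
omega_d = np.sqrt(omega0**2 - sigma**2)        # damped-oscillation frequency
omega_r = np.sqrt(omega0**2 - gamma**2 / 2)    # predicted resonant frequency
print(omega0, omega_d, omega_r, "Q =", omega0 / gamma)

# Gain A(w) from above, scanned numerically; its argmax lands on omega_r.
w = np.linspace(0.5 * omega0, 1.5 * omega0, 200001)
A = 1 / (m * np.sqrt((w**2 - omega0**2)**2 + (gamma * w)**2))
print("numerical peak at", w[np.argmax(A)])
```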
{ "language": "en", "url": "https://physics.stackexchange.com/questions/682059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
What is opposite to $\mathbf{w}_\parallel$ in a FBD of a box on a ramp? I tried doing research on this but to no avail, so my question is this: If the normal force of an object with mass $m$ on a ramp inclined with angle $0<\theta<90^\circ$ is equal and opposite to the component of gravity pulling the object perpendicularly into the ramp ($\mathbf{w}_\perp$), then what force is "equal and opposite" to the component of gravity that is parallel to the ramp? As Newton's third law says, there has to be an equal and opposite force to $\mathbf{F}_w$. $\mathbf{F}_\perp$ seems to only take care of the perpendicular component $\mathbf{w}_\perp$, so what takes care of the parallel component $\mathbf{w}_\parallel$? Is it friction?
As Newton's third law says, there has to be an equal and opposite force to Fw. Indeed, but you're interpreting it wrong. Put it this way: the force on A due to B is equal and opposite to the force on B due to A. The key point is that the action-reaction pairs are forces on different objects, not the same object (the box). Let us identify the action-reaction pairs: * *The normal force on the box by the ramp is at an inclined angle, perpendicular to the ramp. And the normal force on the ramp by the box is exactly opposite in direction, inclined downwards. *The gravitational force on the box by the earth, which points straight down. And the gravitational force on the earth by the box, which points straight up. So the normal force on the box is not an action-reaction pair with gravity. To put it in line with your question, * *The normal force perpendicular to the ramp felt by the box is equal and opposite to the normal force experienced by the ramp due to the box. *The gravitational force perpendicular to the ramp is equal and opposite to that component of the gravitational force felt by the earth.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/682236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is there no reaction Deuterium + Deuterium $=\rm {}^{4}He$? Why is there no reaction like $D+D={}^{4}He$ specified here and in other places like this? Apparently $2\times2.0141-4.0026=0.0256$ is positive. What is the problem with this reaction?
As far as I understand, the resulting $^4He$ is highly excited and immediately splits into either $^3He$ and a neutron or $^3H$ and a proton. Probably this is due to the fact that (in the center of mass frame) the resulting $^4He$ is at rest, so the excess energy cannot be transferred into kinetic energy and thus has to stay inside the $^4He$ nucleus as excitation energy. By splitting into two parts, which can carry away some of the excess energy in the form of kinetic energy, the nuclei can stabilize.
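To put a number on that excess energy, the mass difference quoted in the question is about 24 MeV; a tiny Python check using the standard atomic masses and the usual conversion 1 u ≈ 931.494 MeV/c²:

```python
u_to_MeV = 931.494    # MeV per atomic mass unit
m_D   = 2.014102      # deuterium atomic mass in u
m_He4 = 4.002602      # helium-4 atomic mass in u

Q = (2 * m_D - m_He4) * u_to_MeV
print(f"excess energy ≈ {Q:.1f} MeV")   # ≈ 23.8 MeV left inside the He-4 nucleus
```

That is well above the roughly 20 MeV needed to remove a nucleon from $^4He$, so the excited nucleus readily breaks up into $^3He+n$ or $^3H+p$ rather than de-exciting.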
{ "language": "en", "url": "https://physics.stackexchange.com/questions/682357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 0 }
Simple oscillator displacement, speed, and acceleration diagram I'm currently studying the textbook Fundamentals of Acoustics (2000) by Kinsler et al. Chapter 1.2 The Simple Oscillator says the following: $$\dfrac{d^2 x}{dt^2} + \omega_0^2 x = 0 \tag{1.2.5}$$ This is an important linear differential equation whose general solution is well known and may be obtained by several methods. One method is to assume a trial solution of the form $$x = A_1 \cos(\gamma t) \tag{1.2.6}$$ Differentiation and substitution into (1.2.5) shows that this is a solution if $\gamma = \omega_0$. It may similarly be shown that $$x = A_2 \sin(\omega_0 t) \tag{1.2.7}$$ is also a solution. The complete general solution is the sum of these two, $$x = A_1 \cos(\omega_0 t) + A_2 \sin(\omega_0 t) \tag{1.2.8}$$ where $A_1$ and $A_2$ are arbitrary constants and the parameter $\omega_0$ is the natural angular frequency in radians per second (rad/s). Chapter 1.3 Initial Conditions says the following: If at time $t = 0$ the mass has an initial displacement $x_0$ and an initial speed $u_0$, then the arbitrary constants $A_1$ and $A_2$ are fixed by these initial conditions and the subsequent motion of the mass is completely determined. Direct substitution into (1.2.8) of $x = x_0$ at $t = 0$ will show that $A_1$ equals the initial displacement $x_0$. Differentiation of (1.2.8) and substitution of the initial speed at $t = 0$ gives $u_0 = \omega_0 A_2$, and (1.2.8) becomes $$x = x_0 \cos(\omega_0 t) + (u_0/\omega_0) \sin(\omega_0 t) \tag{1.3.1}$$ Another form of (1.2.8) may be obtained by letting $A_1 = A\cos(\phi)$ and $A_2 = -A\sin(\phi)$, where $A$ and $\phi$ are two new arbitrary constants. Substitution and simplification then gives $$x = A\cos(\omega_0 t + \phi) \tag{1.3.2}$$ where $A$ is the amplitude of the motion and $\phi$ is the initial phase angle of the motion. The values of $A$ and $\phi$ are determined by the initial conditions and are $$A = [x_0^2 + (u_0/\omega_0)^2]^{1/2} \ \ \ \ \ \ \text{and} \ \ \ \ \ \ \phi = \tan^{-1}(-u_0/\omega_0 x_0) \tag{1.3.3}$$ Successive differentiation of (1.3.2) shows that the speed of the mass is $$u = -U \sin(\omega_0 t + \phi) \tag{1.3.4}$$ where $U = \omega_0 A$ is the speed amplitude, and the acceleration of the mass is $$a = - \omega_0 U \cos(\omega_0 t + \phi) \tag{1.3.5}$$ In these forms it is seen that the displacement lags $90^\circ$ ($\pi/2$ rad) behind the speed and that the acceleration is $180^\circ$ ($\pi$ rad) out of phase with the displacement, as shown in Fig. 1.3.1. (Arrows in figure 1.3.1 are mine.) We can see from figure 1.3.1 that the displacement is out of phase with the acceleration by $\pi$ radians (green arrow), as stated, but it seems to me that, according to figure 1.3.1, displacement is actually $3\pi/2$ radians out of phase with speed (blue arrow), rather than the stated $\pi/2$ radians (red arrow). Is this an error, or am I misunderstanding this?
Note that in the diagram below the velocity leads the displacement by $\dfrac \pi 2$ which is the same as the velocity lagging the displacement by $\dfrac{3\pi}{2}$. So when mentioning phase it is important to state which two quantities are being compared, eg $A$ and $B$, and then whether there is a lead or lag between them, eg $A$ leads/lags $B$. $A$ leading $B$ by $\phi$ is the same as $A$ lagging $B$ by $2\pi -\phi$.
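A short numerical check of that equivalence, using the expressions from the question with arbitrary values of $\omega_0$, $\phi$ and $A$:

```python
import numpy as np

w0, phi, A = 2 * np.pi, 0.3, 1.0                        # arbitrary example values
t = np.linspace(0, 2, 1000)

u      = -w0 * A * np.sin(w0 * t + phi)                 # speed from (1.3.4), with U = w0*A
u_lead = w0 * A * np.cos(w0 * t + phi + np.pi / 2)      # displacement phase shifted forward by pi/2
u_lag  = w0 * A * np.cos(w0 * t + phi - 3 * np.pi / 2)  # ...or backward by 3*pi/2

print(np.allclose(u, u_lead), np.allclose(u, u_lag))    # True True -- same signal either way
```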
{ "language": "en", "url": "https://physics.stackexchange.com/questions/683706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Fitting of experimental data affected by different kinds of errors It is quite easy to evaluate the best-fit curve for a set of $n$ data points when the dependent variable is affected by a statistical error (namely when you have $n$ triplets $(x_i,y_i,\sigma_{y_i})$). I use $\chi^2$ minimization (with ROOT software, mainly) because it also helps me evaluate the goodness of fit. But how should I behave when the $x_i$ variables are affected by a maximum uncertainty? Namely not statistical, just sensitivity uncertainties? How do I tell if their uncertainties can be neglected? If they can't be neglected, how do I treat them?
Instead of fitting the function $y=f(\vec x)$ a single time for fixed input parameters $\vec x$, you could change the input parameters $\vec x$ according to your uncertainty model and perform multiple fittings. This yields a distribution of the fit coefficients. The distribution captures the uncertainty of your inputs. As nu pointed out, in many software packages we have the opportunity to capture an uncertainty in the input parameters. What the software usually does is calculate the residual of the fit using the shortest distance between the data point and the fitted line -- in contrast, the vertical distance is used if the input parameters have no uncertainty. I reckon you should also compare the distribution of the fit coefficients to the result obtained using such a software package. There are of course a bunch of other possible options. You might want to start on Wikipedia and then look for "uncertainty in independent variable".
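Here is a minimal Python sketch of that "jitter the inputs and refit" idea using scipy's curve_fit; the linear model, the data and the size of the $x$ uncertainty are invented placeholders:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def model(x, a, b):                 # hypothetical fit model
    return a * x + b

# Hypothetical data: y carries statistical errors sigma_y, x a maximum (sensitivity) uncertainty dx.
x = np.linspace(0.0, 10.0, 20)
sigma_y = 0.5 * np.ones_like(x)
y = model(x, 2.0, 1.0) + rng.normal(0.0, sigma_y)
dx = 0.1                            # e.g. half the instrument resolution

# Repeat the chi^2 fit, each time drawing x within its (uniform) sensitivity interval.
params = []
for _ in range(1000):
    x_jittered = x + rng.uniform(-dx, dx, size=x.size)
    p, _ = curve_fit(model, x_jittered, y, sigma=sigma_y, absolute_sigma=True)
    params.append(p)
params = np.array(params)

print("spread of fit coefficients due to the x uncertainty:", params.std(axis=0))
```

Comparing that spread with the statistical parameter errors returned by the ordinary fit tells you whether the $x$ uncertainties can be neglected.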
{ "language": "en", "url": "https://physics.stackexchange.com/questions/683930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Do Einstein's two postulates allow for handling acceleration in special relativity or is something else needed? When I was taught special relativity, we started with Einstein's two postulates and worked from there. However we were also taught that a proper resolution of the twin paradox required general relativity - because one twin accelerates. Apparently this was Einstein's opinion as well. However modern texts, such as M,T&W's Gravitation, state that special relativity can handle the paradox. Specifically they state that when a uniformly accelerating observer momentarily passes a non-accelerating observer travelling at the same velocity, they will agree that their clocks are running at the same speed. With that statement, if accepted as part of special relativity, the twin paradox can be resolved. However, I do not see how this last statement follows from Einstein's two postulates. Does it? Or is special relativity, as understood now-a-days, reliant on more than the two postulates?
The twin paradox is a veridical paradox, meaning that the conclusion is correct even if on the face of it it seems paradoxical; it is a consequence of special relativity, even though in the natural formulation accelerations are required and so one might think GR is needed. As you suggest, both Einstein and Born did think of it this way, but we can see that we can do without the acceleration by formulating the problem with the following device: there is another auxiliary spacecraft which travels towards earth at exactly the same speed and in the opposite direction as the outward-going spacecraft. At the moment they pass each other, the clock reading of the outward-going twin is transferred to the auxiliary observer. In this way we replicate an instant turnaround with no acceleration. Analysing this situation in special relativity shows that the paradox relies on the inertial frame switch of the twin in flight. By the way, velocity along a curve on any curved manifold can always be defined as a natural generalisation of the derivative. But to define acceleration requires a connection (equivalently, a covariant derivative), and then acceleration is the covariant derivative of the velocity in the direction of the velocity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/683964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 3 }
Why does mass bend the temporal dimension more than the spatial dimensions of spacetime? From my (limited) understanding of general relativity, most of what we experience as gravity is a result of the distortion of the temporal dimension, and not the spatial dimensions. Therefore, most of the spacetime curvature caused by the earth (and most astronomic objects, with the exception of maybe black holes) occurs along the temporal dimension, with very little on the spatial dimensions. This is why the bent sheet analogy is misleading, if I am not mistaken. Why is this so? Why aren't all four dimensions distorted equally, or the spatial dimensions distorted more than the temporal?
There is actually no such thing as curvature in one dimension, so the premise of the question is based on a misunderstanding. When we talk about curvature in general relativity, we mean intrinsic curvature, such as the curvature of a basketball that can be detected by a bug that never leaves the surface of the basketball and can't conceive of a third spatial dimension. The bug can detect phenomena like the fact that the angles of a triangle add up to more than 180 degrees. Intrinsic curvature can't exist for a one-dimensional curve, e.g., a circle has no intrinsic curvature. For these reasons, curvature always involves at least two dimensions. At a fancier mathematical level, we can see this because the Riemann curvature tensor is antisymmetric, but an antisymmetric tensor in one dimension is zero.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/684074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 3 }
Is the electron a pointlike particle? And if yes, how is that possible, because the energy then would diverge, wouldn't it? My problem is that I read (besides others in this post Why are electrons and quarks 0-dimensional?) that the electron is a point-like particle. My question is on the one hand whether that is true and on the other hand if the electrostatic energy of the electron would not diverge if it was a point-like particle?
This is an excellent question. We know from experiment that the electron behaves like a point charge at all accessible scales, that is, no deviation from the potential of a point charge has been observed. This indeed leads to a diverging self-energy. Mainstream physics has no answer to the question of how this is possible.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/684326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
At what speed would a wind affect a bullet? Suppose we fire a gun loaded with the fastest bullet (.220 Swift, 1,422 m/s, or any bullet that is super fast with excellent aerodynamics) at close range (2 cm) from the tip of an air blower. What would the speed of the air coming out of the air blower have to be to deflect the bullet 90 degrees off course?
Remember that the bullet does not know what the wind speed is. The bullet only knows to travel in its given medium. So if the wind was blowing from the side at the same 1,422m/s, then the bullet would travel sideways in that medium at the same rate that it is travelling forwards. In this case, in one second it would travel forward 1,422 metres and sideways 1,422 metres, so it would travel at a 45 degree angle. This might be the answer you are looking for. About 5,000 kph. Otherwise, to travel in a complete 90 degree angle, it would have to have no forward travel at all, and complete sideways travel from the moment it leaves the gun. So if you said that in the first millisecond after leaving the gun, it travelled directly sideways, then the wind speed would have to be 1,000 x 1,422 = 1,422,000 metres per second. Darned fast. But in general, any wind speed at all changes the path of a bullet. This is a key aspect of being a target shooter, and is the reason why they set up ribbons along the rifle range, to show the wind speed at different points.
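Under that simple "the bullet travels in its moving medium" picture, the deflection angle is set by the ratio of crosswind speed to muzzle speed; a quick illustration (the crosswind values other than 1,422 m/s are arbitrary):

```python
import math

v_bullet = 1422.0                      # m/s, muzzle velocity from the question
for v_wind in (10.0, 100.0, 1422.0, 1_422_000.0):
    angle = math.degrees(math.atan2(v_wind, v_bullet))
    print(f"crosswind {v_wind:>11,.0f} m/s -> about {angle:5.1f} degrees off course")
```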
{ "language": "en", "url": "https://physics.stackexchange.com/questions/684458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
In Srednicki's book, when calculating loop corrections to the propagator, why doesn't he include both diagram topologies at second order? This might be a somewhat basic question, so apologies in advance for that. I've only recently started learning QFT, and so I'd really like to make sure I understand this. In Srednicki's textbook, in chapter 14 Loop Corrections to the Propagator, he discusses the corrections to the full propagator in $\phi^{3}$-theory. This is how he begins: The issue I'm having is understanding why equation $(14.2)$ takes that form. I agree with the $\mathcal{O}(g^{0})$ term, but for the $\mathcal{O}(g^{2})$ term given by $$\frac{1}{i}\tilde{\Delta}(k^{2})\left[i\Pi(k^{2})\right]\frac{1}{i}\tilde{\Delta}(k^{2}).$$ I'm a little unconvinced. For the exact 2-point propagator in $\phi^{3}$-theory, at this order, we have two distinct connected diagram topologies, given in chapter 9: Unless I'm misunderstanding/miscalculating, the aforementioned term only takes into account the first of these diagrams, and not the second. The value of this diagram is explicitly $$\frac{1}{2}(ig)^{2}\left(\frac{1}{i}\right)^{2}\left(\frac{1}{i}\tilde{\Delta}(k^{2})\right)\left[\int\frac{d^{d}l}{(2\pi)^{d}}\,\tilde{\Delta}((l+k)^{2})\tilde{\Delta}(l^{2})\right]\left(\frac{1}{i}\tilde{\Delta}(k^{2})\right),$$ which is indeed the expression in $(14.2),$ but why do we not get a contribution from other diagram here as well? Shouldn't the propagator take into account all the possible diagram topologies? Any help clarifying this would be much appreciated!
The second diagram in Fig. 9.6 is a tadpole diagram, which is zero due to the renormalization condition $$\langle \phi(x)\rangle_{J=0}~=~0 \tag{9.2},$$ cf. e.g. my Phys.SE answer here or the last paragraph on p. 67 in Srednicki.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/684623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is there a name for the type of boundary condition where the initial boundary values are known but are not held constant over time? I'm exploring the heat equation to model a particular 1D scenario, and I understood the Dirichlet and Neumann boundary conditions, but neither are sufficient for my scenario. Assuming a rod of length L, I want the boundaries to have a particular initial value ($U(0,0) = 400$, $U(L,0) = 300$), but the temperatures at the boundaries do not need to be constant across time ($U(0,0) ≠ U(0,t)$, $U(L,0) \ne U(L,t))$. Heat does flow in and out of the boundary, but only towards the rod, not the air. Now, my question is, is there any sort of name for this type of boundary condition, where the initial boundary values are known, and are not held constant over time? I hope the explanation of my scenario was clear. Please drop a comment in case you need clarification on some point.
Well, after consulting my professor, it seems it was a Neumann boundary condition with zero flux at both boundaries ($\phi(0,t) = \phi(L,t) = 0$).
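For concreteness, here is a minimal explicit finite-difference sketch of the 1D heat equation with those zero-flux Neumann boundaries; the rod length, diffusivity and step counts are made-up values, and only the 400/300 initial end values come from the original setup:

```python
import numpy as np

L, nx, alpha = 1.0, 51, 1e-3           # rod length (m), grid points, diffusivity (m^2/s) -- hypothetical
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha               # respects the explicit-scheme stability limit dt <= dx^2/(2*alpha)
u = np.linspace(400.0, 300.0, nx)      # initial profile: 400 at x=0, 300 at x=L

for _ in range(5000):
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    # zero-flux (insulated) Neumann boundaries: no heat leaves through the ends
    u_new[0]  = u_new[1]
    u_new[-1] = u_new[-2]
    u = u_new

print(u[0], u[-1])   # the end temperatures drift toward the interior values over time
```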
{ "language": "en", "url": "https://physics.stackexchange.com/questions/684914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Does Newtonian mechanics work in polar coordinates? Our teacher suggested that Newtonian mechanics only applies in Cartesian coordinates. Is this true? He gave this example. Suppose there is a train moving with constant velocity $\vec{v}=v_0\hat{x}$, with initial position vector $\vec{r}=(0, y_0)$, where $v_0,y_0$ are constants. He argued that Newton's second law would not hold in polar coordinates. Any ideas? (We can assume 2D or 3D cases as well, so spherical or polar, it doesn't really matter)
Newton's laws are vector relations, which are independent of the coordinate system. It is likely that the OP misinterprets the statement made by the professor. E.g., one of the following could be the case: * *That addition of components of vectors in curvilinear coordinates (such as polar coordinates) is not as simple as in rectangular coordinates *That the Newton laws do not work in a rotating frame of reference. It is also possible that the professor said what they actually said simply to guard against the predictable errors that most students make (alas, after a year or two of teaching the same course the errors and questions are very predictable), but this simplification made the statement indeed incorrect when examined more rigorously.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/684991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 10, "answer_id": 8 }
Parallel Dp-branes and force While studying basic Dp-brane dynamics from the A. Giveon & D. Kutasov reference, on page 24 they state the following: Since Dp-branes are BPS saturated objects, parallel branes do not exert forces on each other. It is not clear to me why this must be true even if the statement seems obvious. Why is it that the condition of BPS saturation implies the said statement?
If you have two objects with an associated potential energy $V(x)$, where $x$ is the distance between the objects, then the force attracting them is $V'(x)$. If the objects are stationary then there is no kinetic energy. Therefore, if you know that the energy $H = 0$ is a constant that does not depend on the distance between the objects, then you know that there is no attractive force. Edit: Note that a BPS state has zero energy because it is annihilated by the supersymmetry operators.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/685137", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How come the number of wandering electrons is the same as the number of positive ions? My book mentions the following: Cause of resistance: When an ion of a metal is formed, its atoms lose electrons from their outer orbits. A metal (or conductor) has a large number of wandering electrons and an equal number of fixed positive ions. The positive ions do not move, while the electrons move almost freely inside the metal. These electrons are called free electrons. They move at random, colliding amongst themselves and with the positive ions in any direction as shown. The book mentions that a metal has a large number of wandering electrons and an equal number of fixed positive ions. My doubt is this: let's say the metal is aluminium. Since aluminium has 3 valence electrons, a single atom will lose 3 electrons, which become free electrons in the metal. So, since an atom loses 3 electrons to form a cation, shouldn't the number of wandering electrons be three times the number of positive ions? So how come the number of wandering electrons is the same as the number of positive ions?
The paragraph should be read as "The total charge of the wandering electrons equals the total charge of the positive ions". The actual number of positive ions may be less, but that is offset by any that are doubly or triply ionized. The important idea is that the conductor is usually electrically neutral, with charge separation possible, but not a bulk excess or deficit of charge.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/685325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Atwood machine with cylinder I was trying to solve the following problem: Basically we have a normal Atwood machine, the pulley is negligible and we have two masses, a square that falls normally and a cylinder that also spins. The rope goes round the cylinder so the rope doesn't do weird stuff. They both have mass $m$, and the accelerations of both objects have to be found. My approach was to consider the square, and we see that $mg-T=ma_s$. Then considering the cylinder we can see it spins, so $I\alpha=I\frac{a_c}{R}=TR$ and thus $\frac{M}{2}a=T$. Substituting we get $m\frac{a_r}{2}=mg-ma_s$. The best shot I can think of is assuming $a_r=2a_s$ because the rope acceleration has to split between both right and left. Thus $\frac{a_r}{2}=g-2a_s \Rightarrow a=\frac{2}{5}g$. But the solution given is that they are both $\frac{g}{2}$. I can see it has to be something about rototranslations but I can't get how to solve it, any helping hands? If possible I'd prefer dynamic approaches rather than energetic ones.
Atwood machine with cylinder. Constraint equations: $$x_1+x_2+\frac{\pi}{2}\,r=L_R\\ x_2+r\,x_3=0$$ where $~L_R~$ is the rope length. From here you can apply the Euler-Lagrange equations with the above holonomic constraints. You obtain the accelerations $~\ddot x_1~,\ddot x_2~,\ddot x_3~$ and the generalized constraint forces $~\mathbf\lambda$. The results: $$\ddot x_1=-{\frac {g \left( m_{{R}}-m_{{C}} \right) }{m_{{R}}+m_{{C}}+M}}\\ \ddot x_2={\frac {g \left( m_{{R}}-m_{{C}} \right) }{m_{{R}}+m_{{C}}+M}}\\ \ddot x_3=-{\frac {g \left( m_{{R}}-m_{{C}} \right) }{r \left( m_{{R}}+m_{{C}}+M \right) }} $$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/685927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Are normal force and apparent weight the same? Are normal force and apparent weight the same thing? I'll let ya know the context from which I am asking this question: Is there a normal force on an object submerged in water? So, from what I gathered from this question, you CAN treat buoyancy as the normal force in fluids. But if that was true, then why don't we treat buoyancy explicitly as the apparent weight in fluids? Why do we subtract it instead? (possible duplicate suggestions: Why is weight in a fluid not equal to the buoyant force? Why isn't the apparent weight of a body in a fluid equal to the buoyant force? Why does buoyancy reduce it instead? Look, this is the "context" from which I'm asking the question. You only have to answer them if you think that apparent weight and normal force are the same thing. Then you'd have to explain this particular situation. But if you think otherwise, well, then, case solved. But the question itself is broader than this.) If they are not the same thing, then could you also please mention if there are separate formulas to find them. (It'd also be helpful if you described their relations with 'the weight we feel' and 'the weight a weighing scale would read if you were to stand on it')
The term “normal force” refers to the perpendicular component of the force from a surface. If there is a scale between the surface and the object under study, then the reading on the scale will give the normal force and the apparent weight. If the object is hanging below the scale, there may be no normal force (or the weight might be shared between the two). Buoyancy is generally treated as a separate force.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/686050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why are there just 3 main units ($L$,$T$,$M$) in physics? Most physics books define physical units in terms of length, time and mass. Some books add temperature. And yes, the SI unit system has 7 base units, but some are clearly redundant. Why are exactly three basic units sufficient? Or to make the point even more direct: is the number of units somehow due to the number of dimensions of space? Did anybody speculate about this in the past? And yes, one can get rid of all units altogether, if desired, by setting $c=\hbar=G=1$. Still, the question wants an answer...
This answer is inspired by arXiv: 0711.4276 [physics.class-ph]. The paper I referred to argues that, in fact, there are only two fundamental units: length and time. Mass is not necessary. The reason is that everything we measure is actually space and time intervals, and we never really make any other direct measurements. For example, when you are measuring a mass on a scale made with a spring, you are actually measuring a space interval and using Hooke's law and Newton's law of gravity to convert this space interval to a mass. You never really measured the mass. The paper further elaborates on this and describes other aspects of how you can measure masses with rulers and clocks. As a consequence, notice that the number of fundamental constants does not coincide with the number of spatial dimensions, and hence I'd say there isn't really much to speculate about.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/686164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How exactly do vortices generate sound and cause pressure fluctuations to produce sound waves? I wish to understand qualitatively how vortices generate sound by creating longitudinal sound waves. Vortices are often mentioned as the cause of sound production for things like corrugated whirly tubes, edge tones, etc. But I haven't seen pictures and descriptions explaining how they cause pressure waves, or what their direction is. I have no physical understanding of it, and it is like throwing the word around without understanding the details. First of all, how do vortices cause pressure fluctuations? In the rotating air comprising a vortex, what is the pressure distribution inside it and in the ambient air? Is there a simple qualitative and visual explanation of the generation and propagation of sound?
This is a non-linear process and I do not think that there is a simple explanation or theory. The starting reference is James Lighthill's On Sound Generated Aerodynamically. I. General Theory (the link is to a recent paper citing the original).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/686315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Will a planet rotate if it is the only being in the universe? As a senior student, I have been wondering what the word inertia means. Does inertia lie in the interaction between all the objects, or is it the nature of a space even without anything put into it? In our life it seems like the latter, since wherever you throw a stone into space it will go along a parabola. But that is not the case, for there is still the earth and the sun and all the distant galaxies that interact with the stone outside its moving space. So suppose all the interactions are removed, and there's only a planet thrown into a universe of nothing. Then will it rotate, or can we detect its rotation through, for example, a Foucault pendulum? If not, can we conclude that inertia relies on the interaction of objects, and is thus a consequence of universal gravitation?
If the "planet" is not an elementary particle, irreducible, but made of matter with chemical bonds, etc, governed by EM or EW force then the planet is made of a large number of objects each with its own frame of reference and the rotation would be "felt" by each element of the planet as stress from neighboring elements.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/686772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 7, "answer_id": 1 }
Lag in Direction of Earth-Sun Gravity vector When the earth is orbiting around the sun, it experiences a force vector pulling it towards the sun, which acts as a centripetal force for its elliptical orbit. However, when the earth moves a bit from a given position, wouldn't it take time for the information that it's in a new spot to travel to the sun, thereby delaying the direction of the force vector by approximately 8 minutes?
Newtonian theory is not adequate to answer this question. The answer from General Relativity for the gravitational problem is like the answer to a similar question in electromagnetism, when one charge experiences the fields due to another charge. In either case it is helpful to focus one's thoughts on two events called the source event and the field event. The field event is some event (a place and time) where we want to calculate the field---for example, the gravitational field due to the Sun (or, if you prefer, the effect on spacetime curvature owing to the Sun). The source event is the event where the worldline of the source (taken here as a point source) intersects the backwards light cone from the field event. For the Earth--Sun problem, the source event is about 8 minutes before the field event in the rest frame of either Sun or Earth. So Earth now responds to the field (or the curvature) caused by the Sun 8 minutes ago. But the interesting fact is that that field (caused by the Sun 8 minutes ago) points towards the location of the Sun now! And this is true no matter which frame you pick! Take a look at the electric field due to an inertially moving point charge and you get a similar observation. The field lines point to where the charge is now (in whatever inertial frame you have picked), but that very field configuration was caused by the charge at earlier points on its trajectory.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/686925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to measure the intensity of a pen-type laser beam? I need to measure the intensity of a green pen-type laser, rated at $500\,\mathrm{mW}$, before and after it undergoes Bragg diffraction by a synthetic opal cube. I've basically zero experience with such measurements. The first port of call appeared to me to be an instrument like this one, but I've no idea of its suitability for my purpose. Specifically: * *I'm unsure whether $500\,\mathrm{mW}$ would fall in the advertised $0 - 200,000\,\mathrm{lux}$ range, *whether such an instrument would be able to cope with the 'point-like' shape of the beam, without causing sensor overload (as happens with digital cameras when I try to photograph laser beams), *how does the distance between the laser source and the sensor affect the reading (output)? For context, the experimental work I'm referring to can be found here. Perhaps there are other types of instrument out there that are more suited for my purpose? If I use an ND filter: because of the inevitable reflection $I_R$, how do I determine: $$\frac{I_T}{I_0}$$
If you are sure about the shape of the beam (e.g. Gaussian), it's enough to get a power meter and measure the total power. However, we usually used a so-called beam profiler. We built it ourselves using a CCD chip and MATLAB, but "every" optical lab has one nowadays. So, if you don't have one, why not ask a colleague. An alternative (simpler) method is to use a translation stage, a razor blade and a power meter: recording the power for different cuts of the beam and assuming that the beam is rotationally symmetric, you are able to determine the shape of the beam.
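A rough sketch of how such a razor-blade (knife-edge) scan can be reduced, assuming a Gaussian beam; the blade positions, power readings and starting guesses below are invented example numbers:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def knife_edge(x, P0, x0, w):
    # Power passing the blade for a Gaussian beam of total power P0,
    # centre x0 and 1/e^2 radius w, with the blade edge at position x.
    return 0.5 * P0 * (1.0 - erf(np.sqrt(2.0) * (x - x0) / w))

# Hypothetical measurement: blade position (mm) vs. power-meter reading (mW).
x = np.linspace(-2.0, 2.0, 21)
P = knife_edge(x, 480.0, 0.1, 0.8) + np.random.default_rng(1).normal(0.0, 2.0, x.size)

popt, _ = curve_fit(knife_edge, x, P, p0=[500.0, 0.0, 1.0])
print("total power %.0f mW, 1/e^2 beam radius %.2f mm" % (popt[0], popt[2]))
```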
{ "language": "en", "url": "https://physics.stackexchange.com/questions/687056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What mechanism will force a mechanical watch to tick slower when it goes fast, due to relativistic effects? To make a mechanical watch tick slower, the watch tick rate must be changed; the oscillation of the balance wheel must SOMEHOW be changed. How would speed change the oscillation of the balance wheel, due to relativistic effects? I don't understand the mechanism between speed and the parts inside a mechanical watch that will somehow mysteriously make it start ticking slower. This video shows how a watch works.
Your video shows - very nicely - the balance assembly swinging to and fro due to the balance spring. It has a mass (actually a moment of inertia - look at those weights round the rim) that determines how fast this happens. Suppose I set it up in a laboratory swinging to and fro once every second. You observe this while travelling past me in a fast train (or plane, or space-ship). Because you are moving you will see the mass of the balance wheel increased by a $\gamma$ factor. The spring properties are the same (under certain assumptions...) so - according to you - the balance assembly takes longer to accelerate through its cycles and the watch ticks more slowly.
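For a feel of the size of that $\gamma$ factor at a few (arbitrarily chosen) speeds:

```python
import math

c = 299_792_458.0                      # speed of light, m/s
for v in (300.0, 0.1 * c, 0.6 * c):    # fast train, 10% of c, 60% of c
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    print(f"v = {v:12.3e} m/s -> gamma = {gamma:.12f}; each 1 s swing is observed to take {gamma:.12f} s")
```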
{ "language": "en", "url": "https://physics.stackexchange.com/questions/687214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Does anything in an incandescent bulb actually reach its color temperature (say 2700 K)? This question is inspired by a question about oven lightbulbs over on the DIY stack. It spawned a lengthy comment discussion about whether an incandescent lightbulb with a color temperature of 2500 K actually has a filament at a temperature of 2500 K. The articles I could Google are focused on explaining how other types of bulbs like LEDs are compared to an idealized blackbody to assign a color temperature, which makes sense to me. I couldn't find one that plainly answers my more basic question: Does any component in an incandescent lightbulb actually reach temperatures in the thousands of degrees? If so, how are things like the filament insulated from the filament leads or the glass, which stay so (comparatively) cool? Is this still true of bulbs with crazy high 20000 K color temp such as metal halide-aquatic? Do they actually sustain an arc that hot?
Other answers are good, but it should be noted that the word "incandescent" actually means that the thing is glowing because (or mostly because) of its temperature. The color temperature of incandescent light bulbs (including halogen bulbs) is by definition not cheating: the filament must actually be that temperature.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/687335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "56", "answer_count": 4, "answer_id": 1 }
Isn't AdS/CFT an end to String theory as a fundamental theory? I start with the large $N$ QCD paper by 't Hooft. When 't Hooft published his paper on large $N$ QCD it was clear why the string theory of hadrons due to Gabriele Veneziano could make sense. But at the same time, it was an end to strings as fundamental entities responsible for the strong nuclear force. In fact, it turned out that large $N$ QCD gives rise to flux tubes, and the string picture is only an effective description of QCD. My question is, why don't we interpret AdS/CFT in the same manner, in the sense that the large $N$ limit of super Yang-Mills theory gives rise to a stringy picture that is by definition only an effective field theory of the super Yang-Mills theory in the large $N$ limit? The fact that such a duality is proven in that particular limit and not for arbitrary finite $N$ is the core of my question.
If string theory were really a UV completion of gravity then it would make a prediction for quantum gravity S-matrix elements when Newton's constant is large. But this cannot be done with the perturbative formulation based on the genus expansion. Rather, the main way people have of computing these large $G$ observables is to "believe the duality" and then do a small $N$ calculation in $\mathcal{N} = 4$ Super Yang-Mills using field theory methods (Feynman diagrams, integrability, bootstrap, localization and maybe lattice). So for now, $\mathcal{N} = 4$ Super Yang-Mills with small $N$ defines what we mean by strongly coupled type IIB string theory around $AdS_5 \times S^5$. I.e. it admits a limit which agrees with weakly coupled string theory and stays well defined in principle for all other values of its parameters. In other words, we never understood strings well enough to even know what it would mean for the duality to "break down". It is conceivable that somebody might find another way of doing non-perturbative string theory which disagrees with Super Yang-Mills despite passing the same checks but so far that hasn't happened.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/687448", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Does power consumption vary over time as a projectile travels down the barrel of a rail gun? My assumptions (possibly incorrect) for a rail gun are that the mass of the projectile is constant, the current through the armature is constant, and the magnetic field strength along the barrel is constant. Therefore, according to the Lorentz force law, the force on the projectile should be constant, and thus its acceleration should be constant. But if the acceleration is constant then the power consumption is not constant, according to $P\,(\mathrm{kg\,m^2/s^3}) = m\,(\mathrm{kg}) \cdot a\,(\mathrm{m/s^2}) \cdot v\,(\mathrm{m/s})$. So, presumably, somehow, power consumption increases over time as a projectile travels down the barrel of a rail gun. If this is correct, can someone explain the physics and math of how and why the power consumed by a rail gun varies with time?
There is a back-emf caused by the motion of the projectile through the magnetic field (Faraday's Law of induction). The back-emf is seen as a voltage across the driving terminals. The voltage is proportional to the projectile velocity (ignoring ohmic and contact losses). So, although the current may be relatively constant and held near some maximum value, the driving voltage will rapidly increase as will the power (current x voltage) that must be delivered. Designing the system to efficiently and safely deliver that power is quite a challenge.
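As a rough numerical sketch of why the demand grows (the projectile mass, force and current below are made-up illustrative values, not real railgun parameters): with roughly constant current and force, the speed grows linearly in time, so the mechanical power $Fv$ and the back-emf $P/I$ grow linearly too.

```python
# Hypothetical constant-force shot; all numbers are made-up for illustration.
m = 3.0        # projectile mass, kg (assumed)
F = 2.0e5      # roughly constant Lorentz force, N (assumed)
I = 1.0e6      # drive current, A (assumed)
a = F / m      # constant acceleration

for t in (0.001, 0.002, 0.005, 0.010):       # seconds after launch
    v = a * t                                # projectile speed
    P_mech = F * v                           # mechanical power = m*a*v
    V_emf = P_mech / I                       # back-emf at the terminals (losses ignored)
    print(f"t = {t*1e3:4.1f} ms   v = {v:7.1f} m/s   P = {P_mech/1e6:6.2f} MW   back-emf = {V_emf:6.1f} V")
```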
{ "language": "en", "url": "https://physics.stackexchange.com/questions/687550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How did Ernest Sternglass’ phenomenologically incorrect model of the neutral pion predict its mass and lifetime so accurately? In 1961, Ernest Sternglass published a paper where, using what seems to me to be a combination of relativistic kinematics and Bohr's old quantisation procedure, he looked at the energy levels of a set of metastable electron-positron states, and found the lowest of these to have a mass surprisingly close to the measured mass of the neutral pion. He also calculated its lifetime, through what looks to me to be a form of dimensional analysis, to be close to that of the neutral pion also. We now know, of course, that this is not the correct model of the neutral pion, but how did his analysis manage to produce these curiously close results? Is it understandable in terms of our modern model of neutral pions, a mistake in the argument, a coincidence, or some combination of these?
I don't know about the lifetime, but as for the mass (and the following is just speculation), one of the (very rare) decay modes of the neutral pion is to a gamma and positronium (P.A. Zyla et al. (Particle Data Group), Prog. Theor. Exp. Phys. 2020, 083C01 (2020), p.38). As the decay is very rare, the relevant phase space volume must be very small, and it is possible that it corresponds to low gamma energy, in which case the mass of the resulting positronium would be close to that of the initial neutral pion; this may be the reason Sternglass' calculations for "relativistic positronium" give a good estimate of the neutral pion's mass.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/687820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Why do we need the concept of Gravitational and Electric Potential? I understand that we need potential energy for the concept of energy conservation. However, why would we come up with a definition like 'energy required per unit mass/charge to bring the mass/charge from point A to B'? The 'per unit mass/charge' part is allegedly there to avoid mass/charge dependence, since the potential energy depends on the mass/charge. Why do we need to get rid of the mass/charge dependence and invent a new concept like 'potential' out of potential energy?
It's handy conceptually because it allows you to think about the cause of PE separately from the effect (ie the PE itself), which in turn makes it easier to model physical rules in a way that's more generally applicable. For example, an object gains PE if you raise it above the Earth's surface. If you imagine a cliff 100m high, then ten different objects with ten different masses would gain ten different amounts of PE by moving from the bottom of the cliff to the top. If you ask 'what's the difference in potential energy between the top and the bottom' the answer is that it depends on the mass involved. However, if you work with the concept of potential, you can say that the potential difference between the top and bottom of the cliff is always gh, and you can compare that in a meaningful way with other potential differences. Also, potentials as a function of space or distance crop up in many equations. In quantum mechanics, for example, the Schrödinger equation includes the potential as part of the Hamiltonian.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/687983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Force of photons from the Sun hitting a football field = weight of 1 dime? I read, I think, some time ago that the "weight" of photons from the Sun hitting an area the size of a football field at noon on a sunny day would be about the "weight" of a dime. Would appreciate it if someone could flesh that out and verify whether it is correct or false.
Photons are massless so their weight is 0. However, photons do have momentum so they can exert force. This force is due to their momentum and would occur even in the absence of gravity, so it is not a weight. The solar irradiance during peak hours is approximately $1000 \mathrm{ \ W \ m^{-2}}$ and the size of a football field is about $7200 \mathrm{ \ m^2}$ for a total radiant power of $7.2 \mathrm{ \ MW}$. Since $p=E/c$ and $F=\frac{dp}{dt}$ we get that the force from this energy is $(7.2 \mathrm{\ MW})/c = 0.024 \mathrm{\ N}$. In comparison, a dime has a mass of $2.268 \mathrm{\ g}$ which on the earth turns into a gravitational force, or weight, of $0.022 \mathrm{\ N}$. So the force of the sunlight on a football field during peak solar hours is close to the weight of a dime.
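A quick numerical check of the figures in this answer (constants as quoted above; Python used here simply as a calculator):

```python
c = 299_792_458          # speed of light, m/s
irradiance = 1000.0      # peak solar irradiance, W/m^2
area = 7200.0            # football field, m^2
g = 9.81                 # m/s^2
m_dime = 2.268e-3        # dime mass, kg

P = irradiance * area    # radiant power intercepted, W
F_light = P / c          # radiation force for full absorption, N (from p = E/c)
W_dime = m_dime * g      # weight of a dime, N

print(f"radiation force on the field: {F_light:.3f} N")   # ~0.024 N
print(f"weight of a dime:             {W_dime:.3f} N")    # ~0.022 N
```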
{ "language": "en", "url": "https://physics.stackexchange.com/questions/688085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 3, "answer_id": 0 }
Does a blueshift mean that time goes faster? This is a follow-up question to this answer. The assumption in this answer is that time dilation always causes a small redshift when an observer looks at an object moving at a significant fraction of the speed of light when not taking into account the shifts caused by the directional Doppler effect. So, if time going slower always causes a redshift, does that mean that if we see a blueshift it means that time appears to move faster? In other words, if B, that is far away from A, moves towards A really fast, A will appear to be blueshifted to B due to the relativistic doppler effect and thus B will see A's time moving faster? The confusion I have is linking the concepts of redshifts and blueshifts with time going slower and faster.
So, if time going slower always causes a redshift, does that mean that if we see a blueshift it means that time appears to move faster? Yes. The machine that produces the wave-crests that appear to follow each other extra rapidly appears to work extra rapidly. For example, if wave-crests appear to follow each other at one-nanosecond intervals, then the machine that produces those wave-crests appears to produce one wave-crest each nanosecond. I mean, when looked at very closely through a big telescope, the machine can be seen to do that. But machines that appear to person X to work extra rapidly actually work extra slowly according to person X: person X subtracts the directional blueshift and notes that what is left over is a redshift.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/688163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Dirac-delta-distribution charge density Are the charge distributions $$\rho(\vec{r})=\frac{Q}{2\pi R^2}\delta(r-R)\delta(\vartheta-\pi/2)$$ and $$\rho(\vec{r})=\frac{Q}{2\pi r^2\sin(\vartheta)}\delta(r-R)\delta(\vartheta-\pi/2)$$ of a charged circle the same? I would say yes, because integrating over them gives the same result, but is this true in general?
Two distributions are defined as being equal if, when integrated with respect to an arbitrary test function, they always yield the same result. In other words, $D_1(r,\theta)$ and $D_2 (r, \theta)$ are equal if for all test functions $f(r, \theta)$, we have $$ \int D_1(r,\theta) f(r, \theta) = \int D_2(r,\theta) f(r, \theta). $$ If you have a copy of Griffiths, this is discussed briefly in Section 1.5.2 (Equation 1.93 et seq.) In your case, it is straightforward to evaluate both integrals and show that both are equal to $\frac{Q}{2 \pi R} f(R, \pi/2)$ regardless of your choice of $f$. Thus, the two distributions are equal.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/688479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Would light bend the other way, if I use antimatter instead? Imagine the following setup: an antimatter straw, an antimatter glass filled with antimatter water and we have antimatter atmosphere just in case. My question is: does Snell's law still apply here as though they are regular matter, if I were to observe the straw inside the water?
I would say no. If everything is anti-*, the refractive index will be too. Thus, with both signs flipped, the resulting bending of light will be the same. As an example, take an electric field and throw an electron through it; the e- will be deflected in one direction. Now, if you take the anti- of everything, the E-field will be essentially reversed, but the old e- is an e+ and thus the deflection will be identical.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/688618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Electric field of an electron in motion in a wire How do I correctly model the electric field of an electron in motion in a wire? I could treat the electron as a point charge moving through the wire. If I use the Liénard–Wiechert equations, they will predict radiation if the wire turns, since the electron is being accelerated there. But we know that constant currents don't radiate like this. Alternatively I could view the electron as a wave function distributed over the entire wire. Which equations would I then use to obtain the field? And would the wave function then be used as a charge distribution?
If it's an Ohmic material, $\vec{J}=\sigma \vec{E}$: current density is proportional to electric field. Here $J=I/A$ and $E=V/l$, where $I$ is the current, $A$ is the cross-sectional area, $V$ is the voltage between the endpoints of the wire, and $l$ is the length of the wire. From here we get $\sigma =\frac{Il}{AV}$. Dimensional analysis tells us the numerator is coulomb-metres per second, i.e. charge times velocity. The charge is contributed by many electrons. If we multiply numerator and denominator by length, we have volume times voltage in the denominator. So $\sigma =\frac{(N)evl}{(Al)V}=\frac{ne^2vl}{eV}$, where $n=N/Al$ is the charge-carrier density and numerator and denominator have been multiplied by $e$ to get an energy term $(eV)$ in the denominator. $eV\approx \frac{1}{2}mu^2$, where $m$ is the mass of an individual electron and $u$ is the velocity obtained when an electron traverses that potential difference. Then $\sigma=\frac{ne^2}{m} \left(\frac{2vl}{u^2}\right)$. Here $v$ is the average speed of an individual electron, and $u^2/2l$ is an average acceleration. Average velocity over average acceleration gives you a characteristic time of the system; in this case it is the mean free-flight time $\tau$. So $\sigma=\frac{ne^2\tau}{m}$. For a more rigorous derivation, quantities like momentum and mean free path can be used to deduce a relationship between the conductivity and the motion of the electrons in a wire. This gives you the Drude model.
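As a sanity check on the final formula, here is a small numerical sketch for copper; the carrier density and relaxation time are typical textbook values that I am assuming, not numbers taken from the answer above.

```python
e    = 1.602e-19   # elementary charge, C
m_e  = 9.109e-31   # electron mass, kg
n    = 8.5e28      # free-electron density of copper, m^-3 (assumed textbook value)
tau  = 2.5e-14     # relaxation time, s (assumed order of magnitude)

sigma = n * e**2 * tau / m_e          # Drude conductivity
print(f"sigma ~ {sigma:.2e} S/m")     # ~6e7 S/m, close to copper's measured ~5.96e7 S/m
```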
{ "language": "en", "url": "https://physics.stackexchange.com/questions/688744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why is the size of the image increasing as the observer moves away from the lens? I was using a convex lens and placed the object on the principal axis at a distance from the optical center less than the focal length (between $F_1$ and the optical center). Then I started observing the size of the image from the other side of the lens. At first I placed my eye close to $F_2$, between $F_2$ and $2F_2$, then moved it away towards $2F_2$. I found that as I moved away from the lens, the image was getting bigger and bigger. That's where my confusion comes in. What I understand is that the size of the image formed at any point depends only on the object's position relative to the lens and on the lens itself. It should not depend on the observer; the apparent size of an object seen by an observer gets smaller and smaller as he moves away, just like a tree seen from a distance appears small compared with looking at it from a closer distance. Why is the size of the image increasing?
It's really not easy to judge the absolute angular size of an object (see Moon illusion). The image you see may get larger relative to the lens frame, but a bit smaller in angular size due to perspective. In any case, with a perfect lens you're watching the virtual image at a fixed distance behind the lens, farther from the lens than the object, and magnified. This is equivalent to an experiment with a window (without optical power, just flat glass) and an object behind it. As you go away from the window, the object will seem larger—compared to the window frame. But it actually becomes smaller, as you can confirm if you try to measure its angular size e.g. by using a coin at an arm's length as a reference. I've done the experiment you describe, and indeed the image grew relative to the lens but shrank relative to a SIM card I placed at a fixed distance to my eye.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/689407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Interference of standing waves inside a black body? Do electromagnetic waves inside a cavity (modeling a black body) interfere with each other? And why, in the derivation of the Rayleigh law of black-body radiation, do we add the energies of the different modes (are we supposing constructive interference of the modes inside the cavity)?
It can be shown that the energy of an electromagnetic field is the sum of the energies of its modes. E.g., in free space, although the waves in different modes do interfere, they all have different frequencies and wave vectors, so that the cross-contributions cancel out (in other words, the EM modes are orthogonal).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/689792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Phase difference between two waves in opposite directions Suppose I have two waves travelling along the positive and negative $x$ axis, and are given by : $$y_1=A\sin(kx-\omega t)\,\,\,\,,\,\,\,y_2=A\sin(kx+\omega t)$$ What would be the phase difference between these two waves at a particular point ? If I define the phase difference as the difference between the arguments, then I get : $$\Delta \phi=kx+\omega t-(kx-\omega t)=2\omega t$$ But, I could have easily defined the waves, by keeping a positive sign in front of $\omega t$ instead of $kx$. So in that case, my arguments would have become $\omega t-kx$ and $\omega t+kx$ instead. In this case, the phase difference at any point comes out to be : $$\Delta\phi=\omega t+kx-(\omega t-kx)=2kx$$ At any value of $x=x_0$, this phase difference is constant. So, I get two contradictory answers here. In the previous case, the phase difference at any point, varied over time. In the second case, this phase difference was constant at a given point, and varied from point to point. Which one is correct, and how should I know, which one to choose, in situations such as these ?
Phase difference as a constant, independent on time, can be defined only between two waves with the same wave vector and frequency, which is not the case in the example given in the OP, where the waves propagate in the opposite directions. More generally, the phase difference is defined between two points in space and time. E.g., if we have waves $$y_1(\mathbf{x},t)=\cos\phi_1(\mathbf{x},t), y_2(\mathbf{x},t)=\cos\phi_2(\mathbf{x},t),$$ we could define a phase difference between points $\mathbf{x}_1,t_1$ and $\mathbf{x}_2,t_2$ as $$\Delta \phi(\mathbf{x}_1,t_1;\mathbf{x}_2,t_2 )=\phi_1(\mathbf{x}_1,t_1)-\phi_2(\mathbf{x}_2,t_2).$$ Thus, the phase differences defined in the Op correspond to two different cases: * *same space point, but different time *same time point, but different locations in space Remark One also has to agree about what is considered as a positive/negative frequency and the phase - the paradox in the OP might be simply due to exploiting even symmetry of the cosine function.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/689913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How does potential energy increase with no work? If you're dragging an object up a hill at a constant velocity, work is technically 0 (as acceleration is 0), but potential energy constantly increases. How would you represent this situation mathematically, and how does the potential energy increase despite a lack of work?
The work-energy theorem should always be your starting point: $$\boxed{K_1 + W = K_2} \quad \text{or} \quad \boxed{\Delta K = W}$$ You should read this as: Change in object's kinetic energy equals total work done on the object. Total work means work done by all forces! If change in kinetic energy is zero, that means the total work done on the object is zero. But if gravitational force has done some work, which we know it did because object changed its altitude, then there must have been some other force (or forces) which did exactly the opposite work: $$W_F + W_G = 0$$ Now say you push the object down the hill, i.e. there is no external force except for the initial push which we will neglect at the moment. The object starts at $K_1 = 0$ and along the way the gravitational force did some work. What is the final kinetic energy? Assuming there are no other forces such as friction, the whole gravitational potential energy is converted to the kinetic energy. In general, work done by gravitational force is defined by $$\boxed{W_G = - \Delta U_G = -(U_{G,2} - U_{G,1})}$$ where $U_G$ is the gravitational potential energy.
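A concrete bookkeeping example of this (my own made-up numbers; friction ignored): dragging a crate up a hill at constant speed through a total rise $h$.

```python
m, g, h = 20.0, 9.81, 5.0        # kg, m/s^2, total rise in m (assumed values)

W_gravity = -m * g * h           # gravity does negative work on the way up
W_applied = -W_gravity           # since Delta K = 0, the applied force must do +mgh
delta_K   = W_applied + W_gravity
delta_U   = -W_gravity           # Delta U_G = -W_G = +mgh

# W_applied ~ +981 J, W_gravity ~ -981 J, delta_K = 0 J, delta_U ~ +981 J
print(W_applied, W_gravity, delta_K, delta_U)
```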
{ "language": "en", "url": "https://physics.stackexchange.com/questions/690037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Functional Calculus in QFT Does anybody know some good sources with detailed derivations of the main results we need to compute generating functionals in QFT (and of the functional calculus used in the subject in general)? I find that in mainstream books, such as Peskin and Schroeder Chapter 9, the details are glossed over, and that there are some "hidden" product or chain rules that I would like to get a better grasp of. Note: I'm aware of the product and chain rules for functional derivatives; I'd just like to see more detailed examples with explanations of their application.
While this might sound a bit odd at first, I particularly recommend Nivaldo Lemos' book Analytical Mechanics. In Chapter 10, it deals with classical field theory, and in order to do so the author chooses to deal with a bit of functional calculus. When doing so, he provides more detail on the definition of the functional derivative than most QFT books I've seen, and he provides more examples. This contrasts a bit with P&S's approach of defining the functional derivative in terms of Dirac deltas and pretty much just imposing the main results. Lemos doesn't spend a lot of time on functional derivatives, but he does provide some examples, and I find it clearer than most other texts. This answer of mine to another post might also provide an example of the application of functional derivatives in field theory.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/690230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Conformal Ward identities for local conformal algebra: error in textbook? In Schottenloher's mathematically-oriented CFT textbook, "A Mathematical Introduction to Conformal Field Theory," Proposition 9.8 on page 160 states the conformal Ward identities for 2D CFTs as follows: For all $m \in \mathbb{Z}$, for any primary fields $\phi_j$ with scaling dimensions $h_j$, $$ 0 = \sum_{j=1}^n (z_j^{m+1} \partial_{z_j} + (m+1) h_j z_j^m ) \langle \phi_1(z_1) \ldots \phi_j(z_n) \rangle.$$ For the special case of $m \in \{-1,0,1\}$, these are the Ward identities corresponding to global conformal transformations; see e.g. Di Francesco ("yellow book"), Eq. 5.51. However, the identity is claimed for all $m \in \mathbb{Z}$, and for $m \not \in \{-1,0,1\}$ the formula would appear to be wrong. For instance, when applied to a two-point function $\langle \phi(z_1) \phi(z_2)\rangle = \frac{1}{(z_1-z_2)^{2h}}$, the formula holds only for $m \in \{-1,0,1\}$. (It's not a typo when Schottenloher claims the formula for all $m \in \mathbb{Z}$; cf. his Proposition 9.5.) Is Schottenloher simply wrong about this major point? The statements in this section are proven from formal CFT axioms, but unfortunately the proof is largely omitted for this particular claim (Proposition 9.8). The entire book is careful to distinguish the consequences of the global conformal group versus local conformal algebra (see e.g. discussion at beginning of Section 9.3), which is why I would be surprised by the error.
The equation you quote cannot hold for any integer $m$. A function of finitely many variables cannot obey infinitely many independent PDEs! There are infinitely many local Ward identities but they involve Virasoro descendant fields. See for example Eq. (2.2.15) in my review article: https://arxiv.org/abs/1406.4290
{ "language": "en", "url": "https://physics.stackexchange.com/questions/690661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Elastic potential energy and work equations Elastic potential energy is $\frac{1}{2} k x^2$ and work is $F \cdot d$. Why do these numbers not evaluate to the same value in a problem? The change in potential energy is the work done on a spring: $W = \Delta U$. However, every time I do an example I always get that the work is double the elastic potential energy. What am I missing? If it takes $2 \text{ N}$ of force to displace a spring by $0.2 \text{ m}$ with a spring constant of $10 \text{ N/m}$ then the work is $W_e = 2 \text{ N} \cdot 0.2 \text{ m} = 0.4 \text{ J}$. However, the elastic potential energy stored in the spring is $U_e = \frac{1}{2} 10 \text{ N/m} \cdot (0.2 \text{ m})^2 = 0.2 \text{ J}$.
So if potential energy in a spring is $\frac{1}{2}kx^2$ and work is $F\cdot d$, why do these numbers not come out to the same thing in a problem? You have to start out with the general definition of work, which is not simply force times displacement, but is $$W=\int\vec F\cdot d\vec x$$ It only equals $Fx$ if the force is constant and can come out of the integral. But the force exerted by the spring is not constant; it varies linearly with displacement. So, for the spring, since the force is in the same direction as the displacement and since $F=kx$, $$W=\int (kx)dx=\frac{1}{2}kx^2$$ Hope this helps.
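A short symbolic check of this point with the numbers from the question ($k=10\ \text{N/m}$, $x=0.2\ \text{m}$), using sympy (assumed available):

```python
import sympy as sp

k, x, X = sp.symbols('k x X', positive=True)
W = sp.integrate(k * x, (x, 0, X))       # work done against the spring force F = kx
print(W)                                 # k*X**2/2

print(W.subs({k: 10, X: 0.2}))           # 0.2 J, not F*d = 2 N * 0.2 m = 0.4 J
```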
{ "language": "en", "url": "https://physics.stackexchange.com/questions/690807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Relationship between angular and translational velocity on an inclined surface I have been researching rolling motion and I was looking for a way to predict the translational velocity of the object at the bottom of the incline. I know that the kinetic energy of a cylinder undergoing rolling motion is given as $$E_k = \frac{1}{2} I \omega^2$$ Can the angular velocity $\omega$ be replaced by $v/r$ even if the object is a partially filled cylinder?
You can say that $\omega = v/r$ — but only for the cylinder, not for the water. The cylinder will be rolling without slipping, and so you can only use the rotational kinetic energy equation for the cylinder. The water inside the cylinder will be executing a different sort of motion. The simplest assumption is that the water will not be "sloshing" around and will therefore be at rest relative to the center of the cylinder. In that case, the water will only have translational kinetic energy, with zero rotational KE.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/691151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Do counter rotating galaxies have dark matter? Have counter rotating dark matter galaxies been observed? Counter rotating galaxies, you may already know, are galaxies where some stars or arms rotate in one direction and other stars or arms rotate in an opposite direction, possibly due to the merger of two or more galaxies.
As you probably know, the presence of dark matter in galaxies can be inferred from the analysis of velocity curves. In 1970, Freeman determined the velocity profiles of galaxies using the 21 cm line and found that for NGC300 and M33 there should have been much more gravitational mass outside the last bright point. In the same year, Rubin and Ford (1970) determined the velocity profile for M31: the profile was flat out to 24 kpc, which is much greater than the last photometric radius. The predicted rotation curve of a galaxy should decrease smoothly, following a Keplerian model, after the last luminous radius. As you can see, most studied galaxies show velocity curves that stay flat outside their last visible point. The most accepted idea to resolve this discrepancy between the real and the predicted curves is the hypothesis of dark matter in the galaxy halo. Another important parameter to estimate the presence of dark matter is the mass/luminosity ratio. For our Galaxy it has an approximate value of $\sim 50\, M_{\odot}/L_{\odot}$ (Binney and Tremaine 2008). This means that there should be mass that is not visible, perhaps condensed into dark matter, brown dwarfs or other non-luminous bodies. To answer your question: counter rotating galaxies may have similar velocity curves, and they can have gravitational but not luminous mass in their halo. As you can see in this small paper on the counter rotating Sa galaxy NGC3539, https://ned.ipac.caltech.edu/level5/March14/Corsini/Corsini2.html, there is a plot at the end which clearly shows the velocity profile: it stays flat outside the last radius instead of decreasing as predicted. This can be explained by assuming that the halo is filled with dark matter.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/691456", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Are the electron and hole densities always equal in an intrinsic semiconductor? In my laser physics book (Laser physics by Hooker and Webb) it is stated that the density of electrons injected into the intrinsic active region of a diode laser is equal to the density of holes. I am not convinced because the electrons do not come from the valence band but rather from a neighbouring n-type layer, and similarly holes come from a p-type layer on the other side. These two processes seem quite independent so why should the electron and hole densities match, particularly when the system is not symmetric between the p-type and n-type layers?
In general, the densities of electrons and holes in the depletion region of a pn junction are not equal, but governed by the relation $$pn=p_B n_B e^{-\varphi_B/V_{th}},$$ where $n_B, p_B$ are the bulk carrier densities, $\varphi_B$ is the barrier height and $V_{th}=k_B T/q$ (see here). This relation is further modified when the diode is biased. This means that the junction is charged (positively or negatively), even though the whole circuit remains neutral. However, this is not the carrier density that enters the laser rate equations. Rather, the carrier density entering the semiconductor laser rate equations is the density of the carriers carrying current and recombining in the active region. If the numbers of electrons and holes entering this region were different, there would be constant accumulation of charge, i.e., the charge in the region would continuously grow and create a potential, reducing the current of the excess carriers and increasing the current of the carriers that are lacking. No doubt this is what happens when the laser is turned on. However, in a steady-state regime such charge accumulation should have already stopped (otherwise, we are not in a steady state). Furthermore, it is assumed that these processes are much faster than the emission of photons, and need not be accounted for to describe the lasing dynamics.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/691866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How Would a Car Move in Zero-G? Consider a car floating in a microgravity environment. Assuming the engine can still function (i.e. it is surrounded by normal atmosphere; fuel can still be pumped, etc.), in what ways (if any) will the car move when the accelerator is pressed? There is air moving into the intake and out of the exhaust, will that cause a net acceleration forward? Will air resistance with the wheels cause any sort of net acceleration? Will the torque from the engine cause the car to rotate at all?
in what ways (if any) will the car move when the accelerator is pressed? For conservation of momentum, the momentum of the exhaust gases rearward out the tail pipe must be equal to and opposite to the momentum of the car forwards. So in theory the car can accelerate in the same manner as a rocket in space, though the thrust of a car due to its exhaust gases would likely be weak. Hope this helps.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/691962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Question about the Wave equation I have a question. I was looking at the wave equation (the first equation of this Wikipedia page). I saw a version of this equation for the first time during an Acoustics course, where we obtained it for sound waves by combining the Euler equation, the continuity equation and the general gas equation. So, how is a general wave equation, such as the one described in Wikipedia, derived? Is there a mathematical derivation behind it, or is it just a specific form of differential equation that was found to be the same for several scalar quantities, so we have to take it "as it is"? Thank you in advance
There is no unique answer to this question. Domain-specific derivations In electromagnetism the wave equation arises from the Maxwell equations. In elasticity or hydrodynamics it arises from the corresponding equations for the media. Note that in these latter cases the wave equation is actually an approximation - more general equations for waves can be derived, which are either non-linear or higher order. Theory of second order partial differential equations In general, linear second order partial differential equations can be classified into three types: hyperbolic, parabolic and elliptic. (Note how this classification follows the classification of the conic sections.) The canonical representatives of these types are often referred to as the wave equation, the diffusion equation, and the Laplace equation. So the wave equation is just one of the general second order PDEs. Why is second order more important in physics than anything else? On the one hand, unlike first order, it does not contain an inherent asymmetry/direction. On the other hand, higher order equations often result in non-local theories, which are harder to deal with (although sometimes one has to). See also these threads: Why do wave equations produce single- or few-valued dispersion relations? Why no continuum of possible $\omega$ for one $|k|$? Big misconceptions with the fundamentals of “ waves” Why do we need the Schrödinger equation, if we have wave equation?
{ "language": "en", "url": "https://physics.stackexchange.com/questions/692062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
What is causing this sign difference in the centrifugal term between Lagrangian and Hamiltonian formalism? Consider a central force problem of the form with the Lagrangian $$ L(r, \theta, \dot{r}, \dot{\theta}) = \frac{1}{2} m \left( \dot{r}^2 + r^2 \dot{\theta}^2 \right) - V(r), $$ where $r = |\vec{x}|$. Since $\theta$ is cyclic, we can show that $m r^2 \dot{\theta}$ is a constant of motion, and rewrite the Lagrangian as $$ L(r, \dot{r}) = \frac{1}{2} m \dot{r}^2 + \frac{l^2}{2mr^2} - V(r). $$ If I calculate the Hamiltonian from this, I get $$ H_{1}(r, p_r) = \frac{p_r^2}{2m} - \frac{l^2}{2mr^2} + V(r) $$ Taking another direction, I calculated first the Hamiltonian from the Lagrangian as $$ H_{2}(r, \theta, p_r, p_{\theta}) = \frac{p_r^2}{2m} + \frac{p_{\theta}^2}{2mr^2} + V(r) = \frac{p_r^2}{2m} + \frac{l^2}{2mr^2} + V(r) = H_{2}(r, p_r), $$ where I concluded that $p_\theta = m r^2 \dot{\theta} = l$ is a constant. The problem is, that I get an apparent sign difference between the $\frac{l^2}{2mr^2}$ and $V(r)$ terms in $H_{1}$ and $H_{2}$, which I don't understand. I'm pretty sure that $H_1$ is wrong, but I don't know what kind of conceptual mistake did I make when calculating $H_1$. Conceptual issue Apparently, when I introduce the additional potential term in the Lagrangian formalism first, then calculate the Hamiltonian, I don't get the same Hamiltonian when I do it in reverse order. Why do I get different Hamiltonians?
Just to give a bit of further intuition, what you have done is added a quantity which you expect to be zero “on shell” to the Lagrangian, in order to try to simplify its form. But the Lagrangian is being used in a sort of minimization procedure, we are trying to find a trajectory where nearby trajectories do not change the action. This means that we also care about what manipulations do “off shell” where the Euler-Lagrange equations do not hold, or more precisely we care about what the manipulations do in the vicinity of the shell because it can move the shell through the phase space. Once you see that this is what you are trying to do, it becomes more obvious that it has no mathematical validity. The fact that something is zero at a minimum does not mean that it does not perturb that minimum to add it. So for example the family of curves $y=x^2-cx+1$ has a minimum at $x=c/2$, but if I try to use this observed consequence to simplify the function, say by adding $0=cx-c^2/2$, now the minimum is at $x=0$ not at $x=c/2$. The fact that it happens to be zero along the family of solutions does not matter one bit, because the slope does not happen to be zero along the family of solutions, so when I add this the solutions roll to one side or the other. Hope that helps!
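A tiny symbolic check of the toy example above (sympy assumed): adding $cx - c^2/2$, which vanishes at the old minimum $x=c/2$, still moves the minimizer to $x=0$.

```python
import sympy as sp

x = sp.symbols('x', real=True)
c = sp.symbols('c', positive=True)

y1 = x**2 - c*x + 1
y2 = y1 + (c*x - c**2/2)        # the added piece vanishes at the old minimum x = c/2

print(sp.solve(sp.diff(y1, x), x))   # [c/2]
print(sp.solve(sp.diff(y2, x), x))   # [0]
```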
{ "language": "en", "url": "https://physics.stackexchange.com/questions/692349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
On an infinite plane, with gravity the same as that of Earth, how far could light at an arbitrary angle travel until bending to hit the plane? Now, I'm a complete idiot, so bear with me. I've recently come across the idea that standing on an infinite flat Earth would in theory appear the same as standing inside a hollow Earth, since light would, due to gravity, bend towards the flat Earth. However, I have yet to find any source that has an actual way of telling how far this distance would be. I have found calculations for the gravity of an infinite flat Earth here and a formula for gravitational lensing here, but I'm not smart enough to understand the latter or how one would somehow combine the two. So, as this has started to drive me insane, I've decided to turn to people who know more about this than I do. Basically, there's an infinite flat plane with uniform gravity equivalent to that of Earth. Is there any sort of formula or calculation that one could do to figure out how far along the plane a ray of light would travel if cast at an arbitrary angle?
Keep in mind that if you shine the light horizontally on this long flat planet, the light will hit the ground at exactly the same instant as if you dropped a stone on the ground, or if you shot a bullet horizontally. They will all hit the ground simultaneously. This is because gravity has nothing to do with the falling object, but with the curvature of space time. So, if it takes 1/4 second for the stone to hit the ground from dropping it, then it will take 1/4 second for the light to hit the ground. Then the math is simple, light travels at 300,000,000 meters per second, so 1/4 of that is 75,000,000 meters.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/692546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
What Lorentz symmetries do electric and magnetic fields break? When we turn on an external (non-dynamical) electric or magnetic field in (3+1)-dimensional Minkowski space we break rotational invariance because they pick out a special direction in spacetime. Does this also break boost invariance? What about in (2+1)-dimensions when the magnetic field is a scalar? Now the magnetic field does not seem to break rotations. Does it break boosts? How can I show this?
The electromagnetic field is a bivector field. The components called $E_x,E_y,E_z$ are the $tx,ty,tz$ bivector components, and $B_x,B_y,B_z$ are the $yz,zx,xy$ components. A component of a bivector is unchanged by a rotation in its plane or in any perpendicular plane (perpendicular in the sense that every vector lying in one plane is perpendicular to every vector in the other, which is only possible in 4 or more dimensions). So $E_x$ is unchanged by rotation in the $tx$ and $xy$ planes, and so is $B_x$. A rotation in the $tx$ plane is also known as a boost in the $x$ direction. In three dimensions, the bivector space is spanned by $tx, ty, xy$. You can identify these by Hodge duality with vector components $y,x,t$, and the field breaks the continuous Lorentz symmetry the same way a vector does: the residual symmetry is rotation about the vector axis or in the plane of the bivector, which is spatial rotation for the $t$ axis or $xy$ plane, a boost in the $y$ direction for the $x$ axis or $ty$ plane, etc.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/692681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Adding extra gas to a high velocity burner but the temperature in the vessel won't rise So I'm doing a refractory dryout of a vessel and I'm stuck at 350°C. Normally I would add some more gas to the burner and the temperature would go up, but now it just stays at the same level. I've tried to add some more air to the system, but it resulted in a temperature drop. I cannot reduce the air more than the initial setting because then the burner would jump in safe mode. This is the 3rd time I do this vessel so I know I can go higher. Any suggestions are welcome
I would guess incomplete combustion. To check this, add more gas and check the amount of flame that goes into the exhaust, or try to ignite the exhaust. With incomplete combustion it doesn't help to add more air or gas. To solve it you need to improve the mixing. If you are using pressurised gas, it usually does the air mixing in a venturi-like tube; any debris there can impede the mixing. If you are using a separate air input, you could try to make the jets of air and gas collide, to mix better.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/692848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is the probability of a collision $\frac{dt}{\tau}$ I'm reading about Drude theory in the book Solid State Physics by Ashcroft and Mermin. This book and most other sources I can find simply state that the probability $P$ of an electron "collision" in a time $dt$ is just $$P=\frac{dt}{\tau}$$ where $\tau$ is the relaxation time. I'm having trouble understanding how this follows from the basic definition of probability (i.e. desired outcome / all outcomes). The Feynman lecture on diffusion came very close to what I'm looking for by writing: $$P=\frac{N_{collided}}{N} =\frac{\left(\frac{Ndt}{\tau}\right)}{N}=\frac{dt}{\tau}$$ where $N$ is the total number of particles, $N_{collided}$ is the number of particles that collided within time interval $dt$ Still, once you cancel out the $N$, I can't see why $\frac{dt}{\tau}$ makes sense as a probability.
I think I found the answer to my own question. $P=\frac{1}{\tau}dt$ is a function for computing probability values. Although the function definition has the form of a ratio, it is not a probability value itself, so it doesn't have the form (desired outcomes) / (all outcomes). $P=\frac{1}{\tau}dt$ is essentially the cumulative distribution function (CDF) for the uniform probability density function (PDF) $f=\frac{1}{\tau}$. $P=\frac{1}{\tau}dt$ makes sense as a CDF because the probability of a collision should scale linearly with the value of the time interval $dt$ that you observe. The longer you observe, the higher the probability of getting a collision. Notable values would be at: $$(dt=0) \rightarrow P=0 $$ $$(dt=\tau) \rightarrow P=1 $$ Having a constant function $f=\frac{1}{\tau}$ makes sense as a PDF in this case because the probability of a collision during a given time interval $dt$ starting at t=0 will be the same as for starting at a later time value.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/692976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can we derive Boyle's law out of nothing? My textbook states Boyle's law without a proof. I saw Feynman's proof of it but found it to be too handwavy and at the same time it uses Boltzmann's equipartition theorem from statistical mechanics which is too difficult for me now. So to state roughly what Boyle's law is, it states that at a constant temperature and mass of gas, $$PV=k$$ Where $P$ is pressure and $V$ is the volume and $k$ is constant in this case. Is there a proof for this that isn't based on any other gas law, perhaps based on Newtonian mechanics?
Yes, it's almost all very intuitive. Let's figure out the force $F$ that the gas applies to a (flat) section of its container wall, with area $A$. You know that $F = ma = \frac{d(mv)}{dt} = \frac{dp}{dt}$, that is the rate at which momentum is transferred from the wall to molecules of the gas. This is of course proportional to the rate at which molecules collide with the wall, which is in turn proportional to the area of the wall, $A$, because that's how much is exposed to the gas, the density of gas molecules, $n/V$, and the average speed of those molecules, $|v|$, because if they go faster then they collide more often. It's also proportional to the average momentum imparted per collision. When a particle collides, the component of its momentum normal to the wall is reversed, so the total momentum imparted is proportional to its average momentum, or $m|v|$. Putting that all together, we have, for some constant $R$: $$F = \frac{nRAm|v|^2}{V} \Leftrightarrow P=\frac{nRm|v|^2}{V}\Leftrightarrow PV=nRm|v|^2$$ We're almost there. Temperature, $T$, is average kinetic energy per molecule, and $m|v|^2$ certainly has the right units. But remember that $|v|$ is the average speed of each molecule, and $(\text{average } |v|)^2$ is not the same as $\text{average}(|v|^2)$. However, if we assume that the shape of the distribution of velocities doesn't depend on the average speed, or the molecular mass, then these two quantities will be proportional, and we can fold the factor into our constant $R$, finally leaving $PV = nRT$.
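Here is a rough numerical illustration of the argument (my own sketch, not part of any rigorous proof): non-interacting particles bounce in a 1-D box of length $L$, and the time-averaged force on one wall is compared with $Nm\langle v^2\rangle/L$. Re-running with $L$ halved at fixed speeds roughly doubles the force, so $F\cdot L$ (the 1-D analogue of $PV$) stays fixed.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, m = 2000, 1.0, 1.0
x = rng.uniform(0, L, N)            # initial positions
v = rng.normal(0.0, 1.0, N)         # velocities from some fixed distribution

dt, steps = 1e-3, 20_000            # total simulated time T = 20
impulse = 0.0
for _ in range(steps):
    x += v * dt
    right = x > L
    impulse += np.sum(2 * m * np.abs(v[right]))   # momentum handed to the right wall
    v[right] *= -1
    left = x < 0
    v[left] *= -1
    x = np.clip(x, 0.0, L)          # crude reflection, fine for small dt

T = steps * dt
print("time-averaged force on wall:", impulse / T)
print("N m <v^2> / L              :", N * m * np.mean(v**2) / L)
# the two numbers agree to within about a percent
```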
{ "language": "en", "url": "https://physics.stackexchange.com/questions/693255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 1 }
Why is amplitude going to infinity in forced damped oscillator at resonance? I'm trying to find the amplitude of steady state response of the following differential equation: $$\ddot{x}+2p\dot x + {\omega_0}^2x=\cos(\omega t)$$ A particular solution is $$x_p=\Re{\dfrac{e^{i\omega t}}{\omega_0^2 - \omega^2 + i2p\omega}} $$ The amplitude at steady state is then $$A=\dfrac{1}{\sqrt{(\omega_0^2 - \omega^2)^2 + (2p\omega)^2}}$$ The denominator has minimum value when $\omega^2 =\omega_0^2 - 2p^2 $: $$A=\dfrac{1}{2p\sqrt{\omega_0^2-p^2}}$$ This expression seems to suggest that the amplitude goes to infinity as $p$ approaches $\omega_0$. But amplitude has to be finite(from other examples of LRC tank circuit etc). Pretty sure I'm wrong but not able to see where. Any help?
Your algebra is wrong. If $\omega^2-\omega_0^2=-2p^2$ you get $(-2p^2)^2+(2 p\omega )^2= (2p^2)^2+(2 p\omega )^2$ under the square root and, being the sum of two positive numbers, this can never be zero.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/693365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How Do Pions Mediate The Residual Strong Force? I know that the continuous exchange of gluons between quarks is what holds hadrons together, and that the exchange of pions between nucleons is what creates the strong residual force. However, how exactly does the pion mediate the residual strong force--is it emitted by one nucleon and absorbed by another, equivalently emitting and absorbing the gluons that hold the pion together? How is the pion formed in the first place? The way I reasoned was: the quarks in two nucleons get close enough to each other to pull them away from the quarks in their parent nucleons; it is energetically favorable to form new particles instead of pulling the quarks apart, so pions are formed.
Here is how I visualize this process. Please tell me if it is incorrect and I will delete: we imagine two nucleons sitting right next to one another. Note that a pion has two quarks inside it and the nucleons have three each. A quark that just happens to come close to the "surface" of its nucleon at the same time another quark happens to do the same thing right next door will "see" the other quark and be influenced by it. At the moment the two quarks are thus "looking" (i.e., throwing gluons back and forth) at one another from neighboring nucleons, one could claim that those two quarks constitute a temporary pion being shared in some sense between the two nucleons, which would tend to hold the nucleons together. Note that because the quarks are each confined in their respective nucleons, we cannot reach down in there and actually pluck that pion-like thing away for further study as if it were a real, live pion. Of course the real physics is far, far more complicated and subtle than this crude model; please let me know if this is a sensible (although hand-waving) way to imagine how pion exchange glues nucleons together.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/693669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Electric field of a point very far from a uniformly charged rectangular sheet I was wondering what the electric field is at a point which is very far from a rectangular sheet and also above the center of the rectangle. So, from a mathematical perspective, the electric field due to a finite rectangular sheet of charge on the surface $$ S = \left\{(x,y,z)\in \mathbb{R}^3 \mid -a/2< x < +a/2; -b/2< y < +b/2 ; z = 0 \right\} $$ is $$ E(0,0,r) = \frac{\sigma r}{4\pi\epsilon_0} \int_{x=-a/2}^{x=+a/2}\int_{y=-b/2}^{y=+b/2} \frac{dx\, dy}{(x^2+y^2+r^2)^{3/2}} $$ so $$E(0,0,r) = \frac{\sigma}{\pi \epsilon_0} \arctan\left( \frac{ab}{4r\sqrt{(a/2)^2+(b/2)^2+r^2}} \right).$$ It seems very counterintuitive that for $r\gg a$ and $r\gg b$ the electric field is not $$E(0,0,r) = \frac{\sigma}{\pi \epsilon_0}\arctan\left( \frac{ab}{4r^2} \right)$$ but $E(0,0,r) =k_e\frac{q}{r^2}$ where $q=\sigma ab$. My question is: shouldn't it behave like a point charge if it is very far away from the point where I am calculating the electric field? Why is that not so? What am I doing wrong?
For $a \ll r$ and $b \ll r$ the argument of the arctan function (call it $x \equiv ab/4r^2$) is much less than 1. And for $x \ll 1$, we have $\arctan x \approx x$. Take it from there.
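A quick symbolic check (sympy assumed) that the exact expression from the question really does reduce to the point-charge field at large $r$:

```python
import sympy as sp

a, b, r, sigma, eps0 = sp.symbols('a b r sigma epsilon_0', positive=True)

E_exact = sigma / (sp.pi * eps0) * sp.atan(a*b / (4*r*sp.sqrt((a/2)**2 + (b/2)**2 + r**2)))

# leading large-r behaviour: E ~ sigma*a*b / (4*pi*eps0*r**2) = q / (4*pi*eps0*r**2)
print(sp.limit(E_exact * r**2, r, sp.oo))   # a*b*sigma/(4*pi*epsilon_0)
```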
{ "language": "en", "url": "https://physics.stackexchange.com/questions/693777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Is there a physically meaningful example of a spacetime scalar potential? From Misner, Thorne and Wheeler, page 115. 0-Form or Scalar, $f$ An example in the context of 3-space and Newtonian physics is temperature $T\left(x,y,z\right),$ and in the context of spacetime, a scalar potential, $\phi\left(t,x,y,z\right).$ I'm trying to think of an example of such a scalar potential. Is there one? Electrostatic potential is the time component of the electromagnetic 4-vector potential, so it's really a vector with 0-valued space components.
There are two different things one can mean by a potential. The first is in the sense of a gauge field whose derivatives (in some combination) give the field-strength tensor. For example, the electromagnetic potential $A^\mu(t,\vec{x})$ as you mention. One can certainly write down a theory with a scalar gauge field $B(t,\vec{x})$, however, such a gauge field does not appear in the standard model of particle physics. But, there is nothing wrong with writing down such a theory. See, for example, this paper where the $U(1)$ symmetry is gauged using a scalar gauge field. The second is in the sense of the potential terms, i.e., the interaction/self-interaction terms in the Lagrangian density. For example, we say things like "the shape of the Higgs potential looks like the logo of a famous StackExchange site". All such potential terms are always scalar because a Lagrangian is not allowed to be charged.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/693884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Can two waves be considered in phase if the phase angle is a multiple of 2$\pi$? Question is essentially what the title states. Wavefront is defined as the locus of points that are in phase. So I wanted to know if the locus would be the points of only a single circle or multiple circles whose points all have the same displacement? Or in other words can all the points that are at the peak at a specific time be considered as part of a single wavefront/inphase? Can all the points in all the green circles be said to be in phase? Can they be said to be in the same wavefront?
Can two waves be considered in phase if the phase angle is a multiple of 2π? Yes. Common cases in modular arithmetic: $$ \begin{align} \phi \mod 360^{\circ} &= 0^{\circ} \to \text{in phase}\\ \phi \mod 360^{\circ} &= 180^{\circ} \to \text{out of phase}\\ \phi \mod 360^{\circ} &= 90^{\circ} \to \text{neither in phase, nor out of phase} \end{align} $$ And there are many states in between these extremes.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/694127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
How to make the Moon spiral into Earth? I recently watched a video of what would happen if the Moon spiraled into Earth. But the video is pretty sketchy on the physics of just what would have to happen for that to occur. At first I thought I understood (just slow the Moon down enough), but my rudimentary orbital mechanics isn't enough to convince me that's sufficient (e.g., wouldn't the Moon just settle into a lower orbit?). What forces would have to be applied to the Moon to get it to spiral into the Earth, at what times? What basic physics are involved? (And why should I have already known this if I could simply remember my freshman Physics?)
The Earth-moon binary system is emitting gravitational waves, and so is steadily losing energy and the separation between them is decreasing. Eventually (ignoring the death of the Sun, the heat death of the Universe, the possibility of proton decay or the decay of our vacuum state, etc.), the two bodies will collide. We can estimate how long it will take the Earth and Moon to collide via this mechanism (under the wildly wrong assumption that the Earth and Moon will still exist by the end of this process), using Eq. 16 of http://www.bourbaphy.fr/damourgrav.pdf \begin{equation} t_c = \frac{5 c^5 D^4_{\rm moon}}{256 G^3 \mu M^2} \approx 4 \times 10^{25} \ {\rm years} \end{equation} where $D_{\rm moon}$ is the distance from the Earth to the Moon today, $\mu=m_1 m_2/(m_1 + m_2)$ is the reduced mass, and $M=m_1+m_2$ is the total mass (and $m_1$ and $m_2$ are the mass of the Earth and Moon, respectively). To put it in perspective, this is roughly $3 \times 10^{15}$ times longer than the age of the Universe. I have neglected the finite size of the Earth and Moon, so really they will coalesce earlier, but not by enough to change the qualitative "wow, that's a long time" feeling you should have gotten :). Still, this at least gives an upper bound on how long the Earth-Moon system can remain stable.
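Plugging standard values into the quoted formula (the constants below are assumed here: $D_{\rm moon}=3.844\times10^{8}$ m, $m_1=5.972\times10^{24}$ kg, $m_2=7.342\times10^{22}$ kg):

```python
G  = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c  = 2.998e8       # speed of light, m/s
D  = 3.844e8       # mean Earth-Moon distance, m
m1 = 5.972e24      # Earth mass, kg
m2 = 7.342e22      # Moon mass, kg

mu = m1 * m2 / (m1 + m2)     # reduced mass
M  = m1 + m2                 # total mass

t_c = 5 * c**5 * D**4 / (256 * G**3 * mu * M**2)   # coalescence time, seconds
print(f"{t_c / 3.156e7:.1e} years")                # ~4e25 years
```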
{ "language": "en", "url": "https://physics.stackexchange.com/questions/694535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 8, "answer_id": 4 }
Why doesn't time contract? I'm tutoring a Year 12 (high school) physics subject which requires me to understand special relativity, in particular, time dilation and length contraction. I have only studied 1 semester of 1st year uni physics, so bear with me for sounding ignorant. I've tried reading parts of Physics for Scientists and engineers but have more questions than answers. I get that if you're traveling away from a clock at high speed, the clock will "appear" to slow down with respect to the person. After one tick, you've moved a few more million metres away and so the light has further to travel to reach you, which will take longer. A bit like looking up at the stars and thinking that is it happening now, but it's actually something that happened thousands of years ago. What happens if you move toward the clock. Does time speed up? So why is there only time dilation and not contraction? My only understanding I can lean on here is the doppler effect, but I have a suspicion that has nothing to do with it. PS I have a degree in engineering, but I struggle to get my head around this stuff.
Instead of two observers, it is clearer to think of one observer travelling between two points in the same frame, with the clocks of that frame previously synchronized. For example, a rocket between Earth and a (future) Mars base. If the rocket has a really big velocity, the crew will see on arrival that the time of travel $(t_M - t_E)$ measured by the base clocks is greater than that measured by their own clock. This is time dilation. Any communication from Earth will be in slow motion, and any from Mars fast-forwarded, but that is not time dilation; it's an effect of the relative velocity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/694667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 3 }
Time constant versus half-life — when to use which? In some systems we use half-life (like in radioactivity) which gives us time until a quantity changes by 50% — while in other instances (like in RC circuits) we use time constants. In both cases the rate of change of a variable over time is proportional to the instantaneous value of variable. What is a simple intuitive way to know the difference between the kind of systems where half-life is useful, versus systems where time constants are more meaningful? (Does it have anything to do with the shape of the curve representing the change in value over time, for example?)
Even for radioactive systems the usage can be mixed. Isotopes are reported as half-lives, but individual nucleons or fundamental particles are often reported as mean lifetimes. See for example https://pdg.lbl.gov/2021/web/viewer.html?file=../tables/rpp2021-sum-leptons.pdf where the muon and the tau leptons have their decays quoted as mean lifetimes, despite the decays being similar in character to those of radioactive nuclei! You'll note that many particles have a width measurement instead; this is used for very short-lived systems. The same notation is occasionally used for nuclei with extremely short lifetimes, for example $^8$Be, whose decay is described in units of eV: https://www.nndc.bnl.gov/nudat3/reCenter.jsp?z=4&n=4 Given this I think it's often just historical reasons why one is used vs. the other. But in the case of extremely short lifetimes it's often neither.
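Since half-life, mean lifetime, and width all describe the same exponential decay, converting between them is simple arithmetic: $t_{1/2} = \tau \ln 2$ and $\Gamma = \hbar/\tau$. A small Python illustration (my own, using the muon's well-known mean lifetime as the example):

```python
import math

hbar_eV_s = 6.582119569e-16        # ħ in eV·s

def half_life(tau):                # mean lifetime -> half-life
    return tau * math.log(2)

def width_eV(tau):                 # mean lifetime -> decay width, Γ = ħ/τ
    return hbar_eV_s / tau

tau_muon = 2.197e-6                # muon mean lifetime, s
print(f"muon: t_1/2 = {half_life(tau_muon):.2e} s,  Γ = {width_eV(tau_muon):.1e} eV")
# t_1/2 ≈ 1.52e-06 s and Γ ≈ 3.0e-10 eV -- quoting a width only becomes convenient
# when the lifetime is so short that Γ is measurably large
```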
{ "language": "en", "url": "https://physics.stackexchange.com/questions/694850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 4 }
How to prove time dilatation from the Lorentz transform? How to prove time dilatation from the Lorentz transform formula $$ t' = \gamma\left(t-\frac{Ux}{c^2}\right) $$ (where $U$ is the velocity of the frame $R'$ relative to $R$)? So far I've found this formula: $$ \Delta t' = \gamma\left(\Delta t-\frac{U\Delta x}{c^2}\right) $$ but I don't know how to handle the $\Delta x$ from here. I have seen in the literature that $\Delta t' = \frac{\Delta t}{\gamma}$, but I clearly don't know how to infer this from the Lorentz transform. T.I.A.
In the formula $$\Delta t'=\gamma\left(\Delta t-\frac{U\Delta x}{c^{2}}\right)$$ suppose that the two events occur at the same place in $\mathcal{R}$, i.e. $\Delta x=0$. Then in the moving reference frame $\mathcal{R}'$ we find $$\Delta t'=\gamma\Delta t.$$ The inverse Lorentz transformation gives $$\Delta t=\gamma\left(\Delta t'+\frac{U\Delta x'}{c^{2}}\right).$$ If instead the two events occur at the same place in $\mathcal{R}'$, i.e. $\Delta x'=0$, we have $$\Delta t=\gamma\Delta t'$$ or $$\Delta t'=\frac{\Delta t}{\gamma}.$$
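If it helps, here is a short SymPy check (my own, not part of the answer) that both special cases really do fall straight out of the transformation:

```python
import sympy as sp

U, c, t1, t2, x0 = sp.symbols('U c t1 t2 x0', positive=True)
gamma = 1 / sp.sqrt(1 - U**2 / c**2)

# two events at the same place in R: apply t' = gamma*(t - U*x/c^2)
tprime = lambda t, x: gamma * (t - U * x / c**2)
dt_prime = tprime(t2, x0) - tprime(t1, x0)
print(sp.simplify(dt_prime - gamma * (t2 - t1)))     # 0  ->  Δt' = γ Δt

# two events at the same place in R': apply the inverse, t = gamma*(t' + U*x'/c^2)
tinv = lambda tp, xp: gamma * (tp + U * xp / c**2)
dt = tinv(t2, x0) - tinv(t1, x0)
print(sp.simplify(dt - gamma * (t2 - t1)))           # 0  ->  Δt = γ Δt', i.e. Δt' = Δt/γ
```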
{ "language": "en", "url": "https://physics.stackexchange.com/questions/695031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Stark Effect in Hydrogen Degenerate Perturbation Theory I am going through this example of degenerate perturbation theory. We are examining the Stark effect in hydrogen for $n=2$. After finding the 4 degenerate cases: $|0, 0⟩, |1,0⟩, |1,1⟩, |1,-1⟩$, we apply the perturbation $\hat{V} = eEr\cos{\theta}$. The matrix representation of the perturbation is: $$\hat{V} = \begin{bmatrix}0&-3eEa_0 & 0 & 0\\-3eEa_0&0 &0 &0 \\ 0&0&0&0 \\0&0&0&0\end{bmatrix}$$ Finding the eigenvalues of this perturbation gives: $\Delta E = -3eEa_0,0,0,3eEa_0$. What's throwing me off is knowing what eigenvalue applies to what original degenerate case. In the text, they say that the $|1,1⟩$ and $|1,-1⟩$ degeneracy is not lifted, but provide no real explanation. Basically, how do I keep track of what eigenvalue belongs to what solution?
Your matrix contains a (degenerate) subspace spanned by $\vert 1,1\rangle$ and $\vert 1,-1\rangle$ by simple inspection of the original ordering of the basis states. The similarity transformation $T$ that will bring $\hat V$ to diagonal form $T^{-1}VT$ will only mix $\vert 0,0\rangle$ and $\vert 1,0\rangle$, again by inspection: this similarity transformation will be of the form $$ T=\frac{1}{\sqrt{2}}\left(\begin{array}{cccc} 1&1&0&0\\ -1&1&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{array}\right) $$ since there is only mixing of the $m=0$ states, again by inspection of your $\hat V$. Thus, the subspace spanned by $\{\vert 1,\pm 1\rangle\}$ will remain unchanged by the mixing of the $\vert 0,0\rangle$ and $\vert 1,0\rangle$ states. Your final basis will be $\vert\psi_\pm\rangle=\frac{1}{\sqrt{2}}\left(\vert 0,0\rangle\pm \vert 1,0\rangle\right)$, with energy shifts $\mp 3eEa_0$ respectively, and $\vert\phi_\pm\rangle = \vert 1,\pm 1\rangle$, with shift $0$. The eigenvalues for the $m=\pm 1$ states remain unchanged (at zero) as they were unaffected by $T$.
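One way to keep track of which eigenvalue goes with which state is simply to diagonalize numerically and look at the eigenvectors. A quick NumPy sketch (mine; $3eEa_0$ is set to 1 in arbitrary units), in the basis ordering $|0,0\rangle, |1,0\rangle, |1,1\rangle, |1,-1\rangle$:

```python
import numpy as np

b = 1.0                              # stands in for 3*e*E*a_0
V = np.array([[ 0, -b, 0, 0],
              [-b,  0, 0, 0],
              [ 0,  0, 0, 0],
              [ 0,  0, 0, 0]], dtype=float)

vals, vecs = np.linalg.eigh(V)
for val, vec in zip(vals, vecs.T):
    print(f"shift = {val:+.1f} (in units of 3eEa0),  state = {np.round(vec, 3)}")
# The -1 shift comes with (|0,0> + |1,0>)/sqrt(2), the +1 shift with (|0,0> - |1,0>)/sqrt(2)
# (up to overall sign), and the two zero shifts stay inside the |1,+1>, |1,-1> subspace,
# which is why that degeneracy is not lifted.
```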
{ "language": "en", "url": "https://physics.stackexchange.com/questions/695137", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the acceleration of a ramp on a table when a body slides on it? I found an Olympiad problem: Find the acceleration of a ramp on a table when a body slides on it. Assume there is no friction between the body and the ramp, or between the ramp and the table. I found the final solution to this problem but I do not understand it:

* (i) What is $m \vec{a}_1$, and (ii) why is $m \vec{a}$ parallel to the table in the free-body diagram?
* How do they come up with the equation in the solution?
There is no $ma$ in the diagram or in the text. There is the term $Ma$. You have two bodies in the problem:

* the block on the incline, with mass $m$ and acceleration $a_1$, and
* the inclined plane with mass $M$ and acceleration $a$.

You already have the free-body diagrams for each of the two objects. The inclined plane can only move horizontally: it will not fly up and it will not dive into the ground. The block moves both horizontally and vertically.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/695249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Derivative of operator with respect to parameters From Shankar's QM book pg. 56: For an operator $\theta(\lambda)$ that depends on a parameter $\lambda$ defined by $$\theta(\lambda)=e^{\lambda\Omega}$$ where $\Omega$ is also a constant operator, we can show that $$\frac{d}{d\lambda}\theta(\lambda)= e^{\lambda\Omega}\Omega=\theta(\lambda)\Omega .\tag{1.9.7}$$ Hence if we are confronted with the above differential equation, its solution is given by $$\theta(\lambda)=Ce^{\lambda\Omega}$$ where $C$ is a constant operator. My question is why does the constant operator $C$ appear?
For essentially the same reason that it appears in differential equations of functions. The differential equation $$\frac{\text{d}\theta(\lambda)}{\text{d}\lambda} = \theta(\lambda) \Omega$$ defines a family of operators, given by $$\theta(\lambda) = C e^{\lambda \Omega}.$$ Different choices of the constant operator $C$ lead to different operators $\theta(\lambda)$, all of which satisfy the same differential equation. In other words, the choice of $C=\mathbb{I}$ is just one of the possibilities. This mirrors the case when you're working with functions: the solution to a differential equation of the form $f'(t) = a\times f(t)$ is the family of functions $f_c(t) = c \exp(at)$, where $c$ is a constant that is set by the value of $f(t)$ at $t=0$. Another way to see explicitly that any choice of the operator $C$ satisfies this equation is by explicitly writing out the operator in its power-series form, i.e.: \begin{align} \theta(\lambda) = C e^{\lambda \Omega} &= C + \lambda C \Omega + \frac{\lambda^2}{2!} C \Omega^2 + \frac{\lambda^3}{3!} C \Omega^3 + ... \\ \implies \frac{\text{d}\theta}{\text{d}\lambda} &= 0 + C \Omega + \lambda C \Omega^2 + \frac{\lambda^2}{2!} C \Omega^3 + ...\\ &= C \left( \mathbb{I} + \lambda \Omega + \frac{\lambda^2}{2!} \Omega^2 + ...\right) \Omega \\ &= C e^{\lambda\Omega} \Omega,\\ \text{i.e. }\quad \frac{\text{d}\theta}{\text{d}\lambda} &= \theta(\lambda) \Omega. \end{align} Note that $C$ and $\Omega$ need not commute, so I pulled $C$ out to the left, and $\Omega$ out to the right. Thus, $\theta(\lambda) = C e^{\lambda\Omega}$ satisfies the differential equation for any constant operator $C$.
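A quick numerical check of the same statement (mine, not from Shankar): for random matrices $C$ and $\Omega$ that do not commute, a finite-difference derivative of $\theta(\lambda)=Ce^{\lambda\Omega}$ agrees with $\theta(\lambda)\Omega$.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
C = rng.standard_normal((3, 3))
Omega = rng.standard_normal((3, 3))

theta = lambda lam: C @ expm(lam * Omega)

lam, h = 0.7, 1e-6
lhs = (theta(lam + h) - theta(lam - h)) / (2 * h)   # d(theta)/d(lambda), central difference
rhs = theta(lam) @ Omega
print(np.max(np.abs(lhs - rhs)))   # tiny (set only by the finite-difference step), i.e. equal
```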
{ "language": "en", "url": "https://physics.stackexchange.com/questions/695526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why should a clock be "accurate"? Having read that atomic clocks are more accurate than mechanical clocks as they lose a second only in millions of years, I wonder why it is necessary for a reference clock to worry about this, if the definition of the second itself is a function of the number of ticks the clock makes. Why don't we just use a single simple mechanical clock somewhere with a wound up spring that makes it tick, and whenever it makes a tick, treat it as a second having elapsed? (Assuming this clock was broadcasting its time via internet ntp servers to everyone in the world)
Time doesn't flow, nor is it perceived, according to the ticking of a clock. If you boil an egg while watching a clock that runs slow, you're going to overcook it, regardless of the fact that the clock says you cooked it for exactly the intended duration. "Boil an egg for 10 minutes" is not a useful instruction if the actual duration of 10 minutes is not constant. A wind-up mechanical clock is not terribly precise and can produce "seconds" of different durations depending on environmental factors like temperature, humidity, etc. If your clock doesn't have a constant tick rate, an egg cooked for "10 minutes" may be overcooked or undercooked, since that same "10 minutes" can represent a variable amount of time. We need to know that 10 minutes measured today is the same as 10 minutes tomorrow. A mechanical reference clock could slow down over time, resulting in constant-duration processes appearing to take less time - in 100 years, you might find that a perfectly cooked egg only takes 5 "minutes" according to your slowed clock, when in reality, it's the exact same duration.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/695789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 19, "answer_id": 1 }
How can a boy's mass be greater than a seesaw plank's when it is balanced? I had a question that says a boy plays solitary seesaw by placing a long plank over a small rock and sitting at one end of the plank. When the seesaw is balanced, the boy's mass is likely… the answer is greater than the mass of the seesaw. I don't understand how it could be greater. I understand the concept of how to get a balanced seesaw using distance from the center of gravity, but what I don't understand is why his mass is more.
Since torque is $\text{weight}\times \text{lever arm}$, we can write $$W_b\times l=W_p\times L$$ as the condition for the seesaw to be balanced, where $W_b$ is the weight of the boy (plus the short piece of plank) on one side, $W_p$ is the weight of the plank on the other side, and $l$ and $L$ are the corresponding plank lengths measured from the pivot. Treating the boy's weight as what matters on his side, we can write the equation above as $$m_bgl=m_pgL$$ Now, for the seesaw to balance, there must be more plank length, and therefore more plank weight, on the side without the boy for equilibrium to be possible. So if we write $$\frac{m_b}{m_p}=\frac{L}{l}$$ then, as discussed, $L\gt l$, meaning $\frac Ll \gt 1$, which means $$m_b\gt m_p$$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/695840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can a photon be partially absorbed by an electron? I am aware that there is a similar question, but to elaborate on mine: in the photoelectric effect the electron absorbs only a part of the photon's energy, which is needed for liberating the electron, and the rest is used as the kinetic energy. Why, or how, can this be possible? I believe that if you pass a photon through an atom, either (a) the electron absorbs this energy (keeping in mind that the energy required is exactly equivalent to the energy required to liberate the electron from its ground state), or (b) it passes through the electron. I am a beginner learning quantum mechanics, so I might have some kinks and problems in my understanding that I hope you can resolve.
The basic reason is that the photoelectric effect happens in metals, and depends on the electron solutions within metallic solids. In the band theory of solids, a quantum model useful in studying solid state physics, electrons are bound in bands. In metals, there are electrons in the conduction band, which are considered "free" to move through the whole metal lattice. They are very weakly bound to the solid metal. These electrons, when interacting with an incoming photon, have a large probability of escaping the solid. So the interaction is not with an electron bound in an atom (valence band), but with the almost-free electrons of the conduction band.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/696070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
In radiotherapy, why do normal tissue or organ cells not die of radiation? In radiotherapy, why don't normal tissue cells or organ cells in the way of incoming radiation die, but tumours die instead?
Living cells which are in the process of actively dividing (replicating DNA strands, peeling them apart and sorting them out, rebuilding them into duplicate genes, and so on) are particularly susceptible to any sort of challenge which might create replication errors, most of which would lead to the death of the cell by either jamming the replication machinery itself or leaving the daughter cells unable to function correctly. Since cancer cells are almost always in the process of uncontrolled growth, at any point in time most of them in a tumor will be actively dividing. This puts them at much greater risk of being killed by chemicals or radiation than the noncancerous tissue nearby. Note that since the cells lining your digestive tract and the cells that produce hair growth are also frequently dividing, they will be killed too as a side effect of chemo or radiation. This is why your hair falls out and your digestive system is seriously damaged by chemo agents in particular, since those agents circulate throughout your bloodstream and are not "beamed" specifically at the tumor.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/696333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 0 }
Is the momentum in the infinite square well observable or not? I've read in posts such as this and this that the momentum operator is not self-adjoint in the infinite square well because the geometric space is a bounded region of $\mathbb R$, for example $[0,a]$ for a well of width $a$. As such, it leads to weird stuff happening like momentum not being conserved. What I don't understand is why the domain of the wave functions cannot be extended to $\mathbb R$ and have $\psi$ simply equal $0$ outside the well. That way, instead of integrating from $0$ to $a$, we can integrate from $-\infty$ to $+\infty$. Then $$ \langle \psi | \hat p \psi \rangle = \frac{\hbar}{i}\psi^*\psi \bigg\rvert_{-\infty}^{+\infty} + \int_{-\infty}^{+\infty} \left(\frac{\hbar}{i} \frac{\mathrm d \psi}{\mathrm dx} \right)^* \psi \; \mathrm dx = \langle \hat p \psi | \psi\rangle, $$ and $\hat p$ would still be self-adjoint. Also, is the boundary condition for the stationary states $\psi(0) = \psi(a) = 0$ not predicated on the assumption that $\psi = 0$ outside the well? If $\psi$ were not defined outside the well, $\psi$ would not have to be continuous at the walls of the well, so $\psi(0)$ and $\psi(a)$ could equal any value.
It is not necessary to extend the wavefunctions to the whole real line. As far as I can understand, you are defining an operator $\hat{p}$ in $L^2([0,a], dx)$ with the domain $D(\hat{p}) := \{ \psi\in C^1([0,a])\:|\: \psi(a) = \psi(0)=0\}$ and acting as $$(\hat{p}\psi)(x):= -i\hbar \psi'(x)\:.$$ As you notice, that operator is Hermitian. However, in QM observables need to be selfadjoint operators, which is a much stronger requirement. Selfadjoint operators admit a spectral decomposition; merely Hermitian ones do not. From a mathematical perspective, the posts you found yourself and other posts indicated by Qmechanic focus on related issues. In particular, they ask whether candidate momentum operators defined as above or in a similar way (with Dirichlet boundary conditions) admit a unique selfadjoint extension. The answer is negative. There is no good momentum observable in an infinite square well (i.e., with vanishing boundary conditions).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/696618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
On "cosmic flux" units I've trouble understanding the following graph, taken from Wikipedia: It's supposed to show the cosmic ray flux vs particle energy. I've never seen a "flux" written in these units... Why ${GeV}^{-1}$?
It is a differential flux. If you wanted to know the total particle rate per unit area, per unit solid angle, per unit time, with units $\rm m^{-2}\,sr^{-1}\,s^{-1}$, you would have to choose an energy interval that you care about and integrate the curve in the figure. Some authors would write a monstrous differential symbol rather than $F$, like $$ F = \frac{\mathrm dN}{\mathrm dA\ \mathrm d\Omega\ \mathrm dt\ \mathrm dE} $$ to make explicit that the number of particles observed $N$ depends on your detector’s area $A$, its solid angle acceptance $\Omega$, your experiment’s running time $t$, and your choice of energy window. The horizontal lines on the diagram are probably integrated over $4\pi$ steradians for all particle energies higher than the intersection of the horizontal line and the blue differential flux curve.
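To make the bookkeeping concrete, here is a small Python sketch (mine; the power-law index and normalisation are only rough, typical GeV-range cosmic-ray values and are not read off the figure) that converts a differential flux into the integral flux above a threshold energy:

```python
import numpy as np
from scipy.integrate import quad

F0, gamma = 1.8e4, 2.7                 # F(E) ≈ F0 * E**-gamma, [m^-2 sr^-1 s^-1 GeV^-1], E in GeV
diff_flux = lambda E: F0 * E**(-gamma)

E_min = 100.0                          # GeV threshold
integral_flux, _ = quad(diff_flux, E_min, np.inf)
print(integral_flux)                   # particles m^-2 sr^-1 s^-1 above 100 GeV
print(F0 * E_min**(1 - gamma) / (gamma - 1))   # analytic result of the same integral
```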
{ "language": "en", "url": "https://physics.stackexchange.com/questions/696760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why does the north pole of a magnet always induce a south pole in the near end of a magnetic material? If a magnetic material such as an iron bar is placed near a magnet, the former becomes an induced magnet with the end closest to the north pole of the magnet becoming a south pole and vice-versa for the other end. But why does this happen? Why does a magnet always induce a pole such that it attracts? Or to put it in another way: what causes magnets to always attract magnetic materials such as iron?[1]

[1] “Induced Magnetism.” University of Leicester, https://www.le.ac.uk/se/centres/sci/selfstudy/mam6.htm. Accessed 28 February 2022.

Note: By magnetic materials, I mean metals such as iron, cobalt, nickel and others that can be "turned" into a magnet. (OK, a "why" question, I know, but I'm interested in reading through possible explanations anyway—if there are any.)
Electrons are not only electric charges, they are also magnetic dipoles (the electron has a well-measured intrinsic magnetic dipole moment). By the way, it does not matter at all for the explanation of the magnetic properties of magnetic materials whether the magnetic dipole of the electron is considered an intrinsic property per se or a consequence of a relativistic self-rotation of the particle. It is observable that (any) substance becomes magnetic itself under the influence of a strong enough external magnetic field, for example your bar magnet. This is simply due to the external field acting on the previously more or less chaotic orientations of the magnetic dipoles of the subatomic particles in your iron bar.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/696872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Narrow-bandwidth laser and its beam size on uncertainty principle I read that a single-frequency laser can have a bandwidth as low as a few kHz, but according to the uncertainty principle, $\Delta x \,\Delta p = \Delta x \,\Delta f\, h/c \geq \hbar$, so $\Delta x \sim c/\Delta f$; how come the laser beam can be so narrow spatially?
The $\Delta x$ relevant for your calculation is the longitudinal length of the wave. If you have a narrow bandwidth, then you need a lot of wave cycles to define it, and so the wave is long. The transverse width of a laser beam is limited by diffraction.
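To put numbers on "long" (my own rough figures): for kHz-level bandwidths the longitudinal extent $\sim c/\Delta f$ is of order hundreds of kilometres, while the transverse, diffraction-limited spot can still be well under a millimetre.

```python
c = 2.998e8                           # m/s
for df in (1e3, 1e4, 1e6):            # linewidths in Hz
    print(f"Δf = {df:.0e} Hz  ->  longitudinal (coherence) length ~ c/Δf ≈ {c / df / 1e3:.3g} km")
# a few-kHz laser is of order 10^5 m long along the beam, so Δx there is enormous,
# and nothing in the uncertainty relation constrains the transverse width
```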
{ "language": "en", "url": "https://physics.stackexchange.com/questions/697093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Are black holes spinning balls of quark-gluon plasma? I had this idea a few days ago that the Higgs event might have been a naked singularity, i.e. the colliding protons (very briefly) fall into a state of infinite density and release two gamma-ray photons as decay products. One thing led to another, and I was led to extrapolate that perhaps atomic nuclei can be seen as something akin to quark-gluon plasmas; that is, we tend to think of them as bundles of protons and neutrons, but how often do we really observe nuclei directly (hydrogen nuclei don't count)? Wouldn't quantum mechanics imply that all the 'protons' and 'neutrons' are sort of smeared into one another? And, if so, would that not therefore be a quark-gluon plasma? Wouldn't these rigid categories of 'proton' and 'neutron' have somewhat limited applicability in the nuclear setting? Building on that, I thought perhaps it's possible to thereby imagine a black hole as a sort of giant nucleus, and that the difference between neutron stars and black holes is that one passes the Chandrasekhar limit, forcing this lattice of neutrons and electrons to form around the QGP, whereas in the black hole setting everything collapses into QGP and it forms an event horizon. Does this seem likely?
Spin-1/2 particles like quarks are subject to the Pauli exclusion principle; therefore they cannot contract down to a singularity. However, gluons are spin-1 particles and therefore not subject to the Pauli exclusion principle. Thus it could be possible that the singularity in a black hole is made solely of gluons. The physicist John Wheeler, who coined the term "black hole", believed that an imploding star converts its protons and neutrons into radiation during black hole formation. Thus this is a real possibility.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/697216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is there a name for this sort of thermo relationship? I'm taking physical chemistry at the moment. My textbook does not go over the derivation of the relationship below: $$H_{vap\;/\;sub\;/\;cond}(T')-H_{vap\;/\;sub\;/\;cond}(T)= \int_{T}^{T'}\Delta C_{p,m}\; dT$$ where $T$ is the temperature at which the standard enthalpy value is known, and $T'$ is $(T + dT)$, such that we may find the "new" enthalpy value at that temperature. Can I use a similar process to determine a new $\Delta S_{vap}$ or $\Delta G_{vap}$, provided $C_p(T)$ and the $S$ or $G$ value at standard conditions? Is there a name I can use to look into this topic more deeply?
The relation arises from integrating the general partial-derivative expansion $$dH=\left(\frac{\partial H}{\partial T}\right)_PdT+\left(\frac{\partial H}{\partial P}\right)_TdP,$$ or—replacing the partial derivatives with the corresponding material properties— $$dH=C_P\,dT+V(1-\alpha T)dP,$$ with constant-pressure heat capacity $C_P$, temperature $T$, thermal expansion coefficient $\alpha$, and volume $V$, for the specific cases of an ideal gas, for which $\alpha = 1/T$, or constant pressure ($dP=0$), thus giving $dH=C_P\,dT$ and then $\Delta H=\int C_P\,dT$. (Applied to the enthalpy of a phase change, this temperature dependence is usually called Kirchhoff's law.) To calculate $\Delta S$, for instance, you'd express $dS$ in the variables you wish to use, e.g., $$dS=\left(\frac{\partial S}{\partial T}\right)_PdT+\left(\frac{\partial S}{\partial P}\right)_TdP.$$ Then you'd figure out what material properties those partial derivatives refer to, simplify, and integrate over the temperature range of interest. Here, $C_P\equiv T\left(\frac{\partial S}{\partial T}\right)_P$, so at constant pressure $dS=\frac{C_P}{T}dT$ and then $\Delta S=\int\frac{C_P}{T}dT$.
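As a concrete illustration (my own sketch; the linear heat-capacity fit $C_P(T)=a+bT$ is just an assumed example, not something from the question), SymPy carries out exactly these constant-pressure integrations:

```python
import sympy as sp

T, a, b, T1, T2 = sp.symbols('T a b T1 T2', positive=True)
Cp = a + b * T                           # assumed fit for the (difference in) heat capacity

dH = sp.integrate(Cp, (T, T1, T2))       # ΔH = ∫ Cp dT at constant pressure
dS = sp.integrate(Cp / T, (T, T1, T2))   # ΔS = ∫ (Cp/T) dT at constant pressure
print(sp.expand(dH))   # equals a*(T2 - T1) + b*(T2**2 - T1**2)/2
print(sp.expand(dS))   # equals a*log(T2/T1) + b*(T2 - T1)
```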
{ "language": "en", "url": "https://physics.stackexchange.com/questions/697336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is $\Delta$ notation commonly used for the difference in a quantity between two objects? In a standard Atwood machine, the acceleration is $$a = g\dfrac{m_1 - m_2}{m_1+m_2}.$$ Would writing this as $$a = g \dfrac{\Delta m}{M}$$ where $M$ is the total mass be an abuse of notation to most physicists? Alternatively, suppose that we are analyzing a heat engine and use $\Delta T$ for the difference in temperature between the hot and cold reservoirs. Would this be clear notation, or confusing notation to most physicists? In general, does $\Delta$ indicate only the change in a particular quantity between two times, or can it indicate the difference between two quantities at the same time?
This use is extremely common in thermodynamics, at least. In the Clausius–Clapeyron equation, for instance, $\Delta$ is used to represent simultaneous differences between certain properties of two phases. It would be standard practice to use your $\Delta T$ example to calculate, say, the entropy generation associated with a thermally conducting rod connecting two heat reservoirs. (As a side note, the notation in the Clausius–Clapeyron equation that most confuses students is far from the use of $\Delta$—it’s the fact that $dP/dT$ is used to refer, without any clarifying annotation, to a coexistence curve rather than to the behavior of any single system.)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/697539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why is a relativistic calculation needed unless $pc$ is much smaller than the rest energy of a particle? After introducing the de Broglie wavelength equation, my textbook gives a rather simple example where it asks to find the kinetic energy of a proton whose de Broglie wavelength is 1 fm. In the solution to this problem, it states that "A relativistic calculation is needed unless $pc$ for the proton is much smaller than the proton rest energy." Could someone please explain why this is the necessary condition? I'm not sure what the quantity '$pc$' represents or means here. I know that for massless particles like photons, the total energy $E$ is equal to $pc$. I'm not sure what it means for particles having rest mass like protons.
The momentum of a particle of mass $m$ moving at $v$ is $\gamma m v$, where $\gamma$ is given by the following. $$\gamma = \frac{1}{ \sqrt{ 1 - \frac{v^2}{c^2} } }$$ Let's always take $v$ to be non-negative. If you need a direction then put it in later. Since $v \le c$, $\gamma$ is a real number greater than or equal to 1. For $v$ very small compared to $c$ it is only slightly larger than 1. The rest energy of a proton is just $m c^2$. So the comparison they are making is the following. $$\gamma m v c \ll mc^2 $$ So dividing out the common factors you get the following. $$\gamma v \ll c $$ And for $v$ small compared to $c$, this is just $v \ll c$.
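Applied to the textbook example in the question (my own check, using the standard values $hc \approx 1240~\mathrm{MeV\,fm}$ and $m_p c^2 \approx 938~\mathrm{MeV}$): a de Broglie wavelength of 1 fm gives $pc \approx 1240~\mathrm{MeV}$, which is not small compared with the rest energy, so the relativistic kinetic-energy formula is required.

```python
hc   = 1239.84      # MeV·fm
mc2  = 938.27       # proton rest energy, MeV
lam  = 1.0          # de Broglie wavelength, fm

pc = hc / lam                          # MeV
E  = (pc**2 + mc2**2) ** 0.5           # total relativistic energy, MeV
K_rel    = E - mc2                     # relativistic kinetic energy
K_newton = pc**2 / (2 * mc2)           # what p^2/(2m) would have given
print(f"pc = {pc:.0f} MeV  vs  mc^2 = {mc2:.0f} MeV")
print(f"K(relativistic) ≈ {K_rel:.0f} MeV,  K(non-relativistic) ≈ {K_newton:.0f} MeV")
# pc ≈ 1240 MeV > 938 MeV, and the two kinetic energies differ badly (≈617 vs ≈819 MeV)
```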
{ "language": "en", "url": "https://physics.stackexchange.com/questions/697874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Uncertainty principle manifesting in $j(j+1)$ vs $j^2$ To motivate my question, please consider a system with total angular momentum $j$. The fact that the largest eigenvalue of $J_z$ is $j$, while $J^2 = J_x^2 + J_y^2 + J_z^2$ has all eigenvalues equal to $j(j+1)$ is often ascribed to the uncertainty principle. For example, quoting page $51$ of the textbook "Models of Quantum Matter" by Hans-Peter Eckle, "...However, the eigenvalue of $L^2$ is $l(l+1)$, larger than $l^2$. This implies that the angular momentum operator $\bf{L}$ can never align with certainty with $L_3$ and the uncertainty principle is satisfied. If $\bf{L}$ could be aligned with $L_3$, then $L_1=L_2=0$ and we would have simultaneously sharp values of all three components of the angular momentum operator, in contradiction to Heisenberg's uncertainty relations..." To what extent is this reasoning true in general? It is hard for me to formulate my question more precisely, but I will attempt to do so: Consider some set of Hermitian operators $K_i$ which all pairwise do not commute but instead each commute with the sum of their squares, $K^2 = \sum_i K_i^2$. Is it guaranteed that $K^2$'s largest eigenvalue is strictly greater than any of the eigenvalues of the individual $K_i^2$? The strictly greater is key, as it is greater than or equal by this answer. I hope to see that the failure of the individual $K_i$ to commute amongst themselves imposes a stronger statement.
The following two operators $A$ and $B$ have the properties that: * *$A$ and $B$ do not commute; *$A^2 + B^2$ commutes with both $A$ and $B$; and *the largest eigenvalue of $A^2$ equals the largest eigenvalue of $A^2 + B^2$: $$ A = \begin{bmatrix} \lambda & 0 & 0 \\ 0 & 0 & 1/\sqrt{2} \\ 0 & 1/\sqrt{2} & 0 \end{bmatrix} \qquad B = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & i/\sqrt{2} \\ 0 & -i/\sqrt{2} & 0 \end{bmatrix} $$ for $\lambda \geq 1$. We have $$ [A, B] = \begin{bmatrix} 0 & 0 & 0 \\ 0 & -i & 0 \\ 0 & 0 & i \end{bmatrix} \neq 0 $$ so the two operators do not commute. However, $$ A^2 = \begin{bmatrix} \lambda^2 & 0 & 0 \\ 0 & 1/2 & 0 \\ 0 & 0 & 1/2 \end{bmatrix} \quad \text{ and } \quad A^2 + B^2 = \begin{bmatrix} \lambda^2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}. $$ and we can see that for any $\lambda > 1$, the largest eigenvalues of both $A^2$ and $A^2 + B^2$ are $\lambda^2$. It can also be easily shown that $A^2 + B^2$ commutes with both $A$ and $B$. The "loophole" being exploited here is that $[A, B]$ has a non-trivial null space even though the commutator does not itself vanish. Since $A$ and $B$ are simultaneously diagonalizable on this subspace, we're allowed to have a state with zero uncertainty of $A$ and $B$ lying within this subspace.
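A short NumPy check of these matrices (mine, not part of the answer) confirms all three properties at once:

```python
import numpy as np

lam = 2.0                                    # any λ ≥ 1
s = 1 / np.sqrt(2)
A = np.array([[lam, 0, 0], [0, 0, s], [0, s, 0]], dtype=complex)
B = np.array([[0, 0, 0], [0, 0, 1j * s], [0, -1j * s, 0]], dtype=complex)

comm = lambda X, Y: X @ Y - Y @ X
K2 = A @ A + B @ B

print(np.allclose(comm(A, B), 0))                                     # False: [A, B] != 0
print(np.allclose(comm(K2, A), 0), np.allclose(comm(K2, B), 0))       # True True
print(np.linalg.eigvalsh(A @ A).max(), np.linalg.eigvalsh(K2).max())  # both equal λ² = 4.0
```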
{ "language": "en", "url": "https://physics.stackexchange.com/questions/698178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Can two passing black holes touch without merging? I've read a report about LIGO saying that the ring-down oscillation showed that it took time for the pair of BHs to fully merge, implying there was a momentary bulge as one circled around and into the other. I imagine two black holes going past each other, but coming close enough that the event horizons intersect slightly. Would they have sufficient momentum to NOT merge? Effectively to pull in on each other at something less than acceleration at C? At the intersection zone I picture gravity pulling in both directions, so momentarily the intersection would have zero local gravity. So perhaps this zone would no longer be behind the event horizon. And moving fast enough the black holes would seemingly clip each other before they merged. Perhaps some ring-down noise would happen as each event horizon got perturbed in the close pass.
Once something crosses a black hole event horizon, it can never escape. This means that if the event horizons of two black holes touch or overlap, they will merge.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/698304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Door Slamming Against Wall There is a door with length $r$ and mass $m$ hinged on a wall with angle $\theta$, and an equally distributed gust of wind with constant velocity $v$ pushing on that door in the direction normal to the wall, as shown in the drawing: I wish to find a function of $\theta$ by time. This is what I made of it so far (as someone with no experience in physics): According to this video, for a fluid acting normally on a wall, the force is: $$F = v_a^2\cdot \rho\cdot A$$ Where $\rho$ is the fluid density and $A$ is the area on which the fluid hits the wall. I have silently replaced $A$ with $L$ (representing length) in my calculations since the problem here is 2 dimensional instead of 3. However, I am not sure I can do this. So since the wind is acting down, the force vector is: $ \newcommand\mycolv[1]{\begin{bmatrix}#1\end{bmatrix}} $ $$\overrightarrow{F}=\mycolv{0\\-L\rho \cdot v_a^2}$$ However the wind is not normal to the door. So taking the vector normal to the door, that is $$\mycolv{\cos(\theta-\frac{\pi}{2})\\ \sin(\theta-\frac{\pi}{2})}$$ and taking the dot product with the force vector to get the part that is normal to the door, we get: $$F_n=-L\rho \cdot v_a^2\cdot \sin(\theta-\frac{\pi}{2})$$ L is the projection of the door onto the wall, since the wind hits normal to the wall. Meaning: $$F_n=-r \cos(\theta)\rho \cdot v_a^2\cdot \sin(\theta-\frac{\pi}{2})$$ Simplifying, we finally get: $$F_n=r \rho \cdot v_a^2\cdot \cos^2(\theta)$$ So: $$F=ma \Longrightarrow r \rho \cdot v_a^2\cdot \cos^2(\theta)=m\ddot{\theta}$$ And we reach the ODE: $$\ddot{\theta} = \frac{r \rho v_a^2}{m}\cdot \cos^2(\theta)$$ Which, while to my knowledge cannot be solved using standard mathematical functions, provides a satisfying enough answer for me. When I spoke to my physics teacher about this however, he approached it differently. First he said he didn't think my method would work, because "the moment changes with time". I assume he's referring to moment of inertia. He then approximated the motion of the rotating door with a point particle which was traveling in a straight line. He said this was a standard method of approximation, some term with the word circular. And while the answer he gave is (according to him) a good approximation, I would like to know if my "answer" is the completely correct one, and if not, what is.
Taking the sum of the torques about the point A, you obtain $$I_A\,\ddot\theta-\frac 12 F\,r=0\tag 1$$ where $$I_A=I_{\text{CM}}+m\,\left(\frac r2\right)^2\\ F=v^2\,\cos^2(\theta)\,\rho\,A=v^2\,\cos^2(\theta)\,\rho\,r\,b$$

* $I_A$ is the door's moment of inertia about the hinge ($I_{\text{CM}}$ is the moment of inertia about the center of mass),
* $b$ is the door width,
* CM denotes the center of mass.
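For completeness, equation (1) is easy to integrate numerically. A SciPy sketch (mine; the door mass, size, air density and wind speed are invented sample values, not from the problem) that tracks $\theta(t)$ until the door hits the wall:

```python
import numpy as np
from scipy.integrate import solve_ivp

m, r, b = 20.0, 0.9, 2.0        # door mass (kg), length (m), height (m) -- assumed values
rho, v  = 1.2, 10.0             # air density (kg/m^3), wind speed (m/s) -- assumed values
I_A = m * r**2 / 3              # uniform door hinged along one edge

def rhs(t, y):
    theta, omega = y
    F = rho * b * r * v**2 * np.cos(theta)**2    # wind force on the door
    return [omega, 0.5 * F * r / I_A]            # eq. (1): I_A θ'' = F r / 2

hit_wall = lambda t, y: y[0] - np.pi / 2          # door fully open at θ = π/2
hit_wall.terminal = True

sol = solve_ivp(rhs, (0.0, 30.0), [0.0, 0.0], events=hit_wall, max_step=0.01)
print(f"door reaches the wall at t ≈ {sol.t_events[0][0]:.2f} s")
```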
{ "language": "en", "url": "https://physics.stackexchange.com/questions/698594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Constraint rate change problem Two small rings O and O' are put on two vertical stationary rods AB and A'B', respectively. One end of an inextensible thread is tied at point A'. The thread passes through ring O' and its other end is tied to ring O. Assuming that ring O' moves downwards at constant velocity $v_1$, determine the velocity $v_2$ of the ring O when $\angle$AOO' $=\alpha$.

My approach: A'O'O is the length of the string, which is constant, so
$$A'O' + h = l.$$
Differentiating,
$$\frac{d}{dt}(A'O') + \frac{dh}{dt} = 0,$$
and since $\frac{d}{dt}(A'O') = v_1$, we get $\frac{dh}{dt} = -v_1$. With $h = \sqrt{y^2+d^2}$, we have
$$\frac{1}{2}\cdot\frac{1}{\sqrt{y^2+d^2}}\cdot 2y\,\frac{dy}{dt} = -v_1,$$
so
$$\frac{dy}{dt} = -v_1\,\frac{\sqrt{y^2+d^2}}{y}.$$
Since $\frac{\sqrt{y^2+d^2}}{y} = \sec\alpha$, this gives $\frac{dy}{dt} = -\frac{v_1}{\cos\alpha}$, i.e. $v_2 = -\frac{v_1}{\cos\alpha}$.

But the answer is $v_2 = -v_1\,\frac{\sin^2(\alpha/2)}{\cos\alpha}$. I don't know where I got it wrong and what was the mistake. Any help would be appreciated.
I don't think $dy/dt$ is the velocity $V_2$, as you have taken $y$ from O to a point below A (call it C, where you have drawn the line $d$). Here even AC changes, so $dy/dt$ alone won't account for the velocity of O. We have AC + CO = AO, so $$\frac{d(AC)}{dt} + \frac{dy}{dt} = V_2$$ where $y =$ CO. This is more of a comment than an answer.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/698739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
SAXS vs. X-ray diffraction? Both small-angle X-ray scattering and X-ray diffraction can be used to obtain structure factors, though I imagine the wave vectors accessible to each are different (?). What are the main differences between both, and why are structure factors obtained using the former technique often plotted on a log scale in y (and sometimes in x too?), whereas that's not the case for the diffraction-obtained functions?
Small-angle X-ray scattering is a technique to investigate structure near the surface (1-100 nanometers). X-rays are, of course, highly penetrating and probe the structure of the whole crystal. Therefore, to look into the surface structure, we send the X-rays in at a glancing incidence angle (0.1-5 degrees). For a glancing-angle X-ray, the diffraction intensity is, of course, much smaller than at larger incidence angles. In order to observe diffraction intensities spanning many orders of magnitude, we use a log scale. For example, for two diffraction intensities in the ratio $1$ to $10^{-6}$, the weaker diffraction would be completely suppressed in a linear-scale plot. The log scale lets us observe both diffraction lines.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/698843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
In a capacitor, is there energy in the electric field, is there potential energy, or both? The electric field between two capacitor plates is very simple. $$ \vec{E} = \frac{Q}{\epsilon_0 A} \vec{e}_z $$ I can get the energy stored in the field by integrating the energy density, $u_e$, over the volume (between the plates). $$ U = \int_V u_e \; \text{d}^3\!x = \int_V \frac{\epsilon_0}{2} E^2 \; \text{d}^3\!x $$ Since the field is constant, if I pull the plates apart—say that I double the distance—the integration volume is now twice what it was, and the energy stored in the field doubles. Fine! Simultaneously, we can make an argument from the potential energy of the charges in the plates. The charges in each plate are attracted to the other, so when I pull them apart there is a force, and I'm doing work which gives the charges additional potential energy, in virtue of their increased separation. My question is: Are these two separate processes, with energy stored in the field AND in the potential energy of the charges? Or are these two different ways of describing the same physical fact that the energy of the system is increasing? Cheers!
As Griffiths has said, it is simply a matter of bookkeeping whether you would like to say that the collection of charges has an associated potential energy, or that the E field possesses some energy density. It is the exact same thing, and yes, two different ways of describing it! Look up the derivation of the field energy (in Griffiths): you start with the formula for the potential energy of a general charge distribution, then use Maxwell's equations to eliminate $\rho$ in favour of the fields!
{ "language": "en", "url": "https://physics.stackexchange.com/questions/699033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Necessary and sufficient conditions for operator on $\mathbb C^2$ to be a density matrix Consider a one-qubit system with Hilbert space $\mathscr H\simeq \mathbb C^2$. Define the hermitian operator $$\rho := \alpha\, \sigma_0 + \sum\limits_{i=1}^3 \beta_i\, \sigma_i \quad , \tag{1}$$ where $\alpha,\beta_i \in \mathbb R$, $\sigma_0 = \mathbb I_{\mathbb C^2}$ and $\sigma_i$ are the usual Pauli matrices. What are the necessary and sufficient conditions for $\rho$ to be a density operator, that is a positive semi-definite operator with unit trace? Under which conditions is $\rho$ pure? Can these conditions be derived without using the explicit matrix representation of the Pauli matrices?
A very simple derivation, without using the specific form of the Pauli matrices, can be obtained if one uses that the vector of Pauli matrices transforms as a $\mathrm{SO}(3)$ rotation under the adjoint action of $\mathrm{SU}(2)$ -- i.e., one has that $$ U (\vec r\cdot \vec\sigma) U^\dagger = (R_U\vec r)\cdot \vec\sigma $$ for any $U\in\mathrm{SU}(2)$, where $R_U$ is the $\mathrm{SO(3)}$ rotation corresponding to $U$ (modulo $\pm 1$). Once you know this fact, $\rho = \alpha I + \sum \beta_i\sigma_i$ can, for a suitable choice of $U$, be brought to the form $$ U\rho U^\dagger = \alpha I + |\vec\beta| \sigma_z\ . $$ Now you could use the explicit matrix form of $\sigma_z$ -- but you don't need to; all you need to know is that it has eigenvalues $\pm1$. Then it is immediate to see that $$ \mathrm{eig}(\rho) = \alpha\pm|\vec\beta| $$ and $$\mathrm{tr}(\rho) = 2\alpha\ . $$ This immediately answers all your questions:

* $\rho$ is a density operator iff $2\alpha=1$ and $|\vec\beta|\le \alpha$.
* $\rho$ is pure iff $\alpha = |\vec\beta|$.
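A quick numerical check of the eigenvalue statement (mine; it uses explicit Pauli matrices only to build a random example, which the derivation above deliberately avoids):

```python
import numpy as np

rng = np.random.default_rng(1)
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]

alpha, beta = 0.5, 0.2 * rng.normal(size=3)
rho = alpha * np.eye(2) + sum(b * s for b, s in zip(beta, sig))

print(np.linalg.eigvalsh(rho))                                   # alpha ± |beta|
print(alpha - np.linalg.norm(beta), alpha + np.linalg.norm(beta))
print(np.trace(rho).real)                                        # tr(rho) = 2*alpha = 1
```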
{ "language": "en", "url": "https://physics.stackexchange.com/questions/699132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Will a clock that is isolated and stationary with respect to the CMB report the highest possible value for the age of the universe? We have a very special clock that has existed since the dawn of time. Its purpose is to measure the age of the universe. It is always very far from any massive body or gravitational field and it is always held stationary with respect to the cosmic background radiation. A clock in a gravitational field will run more slowly because of the time dilation due to gravitational potential. A clock that had moved at some point in its history would suffer time dilation because of its non-zero velocity. Is it possible that any other clock could ever run faster than our very special isolated stationary clock? So will our very special clock measure the highest possible value for the age of the universe?
The fastest clock will be one that is at the center of a large void in the universe. This will be slightly faster than a "standard" clock embedded in an extended region of average mass density, which will suffer more gravitational time dilation. (This answer just formalizes the exchange of comments with @PM 2Ring.) Assuming the universe is infinite and homogeneous, etc., with mass density $\rho$, we can take the average gravitational potential as our zero reference. The potential at the surface of a large spherical void is thus $+GM/r$ and the potential at its center is $\Phi = +{3 \over 2}GM/r$. Here $M = \rho\cdot{4 \over 3} \pi r^3$ is the missing mass. This gives $\Phi = +2\pi G\rho r^2$ (which is proportional to the surface area of the void - is that a coincidence?). The ratio of the time-rates is $ t_{fastest}/t_{standard} \approx 1+\Phi/c^2 = 1+ 2\pi (G/c^2)\rho r^2 $. Taking the average density of the universe as $\rho = 6 \times 10^{-27}\ \mathrm{kg/m^3}$ and using the Giant Void as an example (radius ~0.6 billion light years $= 5 \times 10^{24}$ meters), we get $ t_{fastest}/t_{standard} \approx 1.0007$. So the conclusion is that a clock in the middle of the Giant Void would run 0.07% faster than a standard clock. This is 10 million years over the age of the universe (but is still small compared with the uncertainty on that age).
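The final number is easy to reproduce (my own check, with the same inputs as above):

```python
import math

G, c = 6.674e-11, 2.998e8
rho  = 6e-27          # average density of the universe, kg/m^3
r    = 5e24           # Giant Void radius, m (~0.6 billion light years)

phi_over_c2 = 2 * math.pi * G * rho * r**2 / c**2
print(f"rate ratio ≈ 1 + {phi_over_c2:.1e}")                        # ≈ 1 + 7e-4, i.e. ~0.07% faster
print(f"head start over 13.8 Gyr ≈ {phi_over_c2 * 13.8e9:.1e} yr")  # ~1e7 years
```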
{ "language": "en", "url": "https://physics.stackexchange.com/questions/699312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Work of a spaceship in circular motion Say a spaceship is traveling though space in a uniform circular motion. It's not orbiting any planet, it just flies in circles in an empty space. The only force working on the spaceship would be the centripetal force caused by the ship's engine. Thus, the work would be $0$, as the force would always be perpendicular to the ship's path. But that sounds counterintuitive to me, it would seem that the spaceship must do some work, otherwise it would just float in a straight line. Can anyone point out the error in my reasoning?
Without a force the spaceship would be floating in a straight line at constant velocity. The reason why the work is zero even though the object is accelerated comes from the very definition of work. From Newton's second law: $$\mathbf F = m\frac{d\mathbf v}{dt}$$ Taking the dot product with an infinitesimal displacement $d\mathbf r = \mathbf v\,dt$: $$\mathbf {F}\cdot d\mathbf{r} = m\frac{d\mathbf v}{dt}\cdot d\mathbf r = m\,\mathbf v\cdot d\mathbf v = d\left(\frac{1}{2}m\,\mathbf v\cdot\mathbf v\right) = d\left(\frac{1}{2}m|\mathbf v|^2\right)$$ Because $\mathbf F\cdot d\mathbf r = dW$ by definition, it is zero if there is no change in the modulus of the velocity, even if the velocity vector changes direction.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/699425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
AdS$_4$ and $\mathbb{H}^4$: What is the difference between them? This figure (source) shows the embedding of 4D hyperbolic space $\mathbb{H}^4$ and 4D de Sitter space dS$_4$ in 5D Minkowski space $\mathbb{M}^5$. $\mathbb{H}^4$ is a hyperboloid of two sheets and dS$_4$ is a hyperboloid of one sheet. However, I also understand that 4D anti-de Sitter space AdS$_4$ can be embedded in $\mathbb{M}_5$, and that it is also hyperbolic but simply connected everywhere. I want to know why the author calls the figure on the right de Sitter space but he calls the figure on the left hyperbolic space rather than anti-de Sitter space. Is AdS$_4$ just one of the $\mathbb{H}^4$ hyperboloids? Do the two possible hyperbolic embeddings correspond to the $\{\mp\pm\pm\pm\}$ metric signature freedom? Does $\mathbb{H}^4$ have a Lorentzian signature or is it just the Euclidean version of AdS$_4$? If so, how can I reconcile the simply connected property of AdS$_4$ with the disconnected property of $\mathbb{H}^4$?
$\text{AdS}_n$ is a sphere of timelike radius in a space of two timelike and $n-1$ spacelike dimensions.* $\text{AdS}_n$ itself has one timelike dimension. For comparison: $\mathbb H^n$ is a sphere of timelike radius in a space of one timelike and $n$ spacelike dimensions,** and has zero timelike dimensions itself; $\text{dS}_n$ is a sphere of spacelike radius in a space of one timelike and $n$ spacelike dimensions, and has one timelike dimension itself. * Actually, it's usually taken to be the universal cover of that sphere, since otherwise it's periodic in time, i.e., has closed causal loops. ** Usually with opposite points identified, so that there's only one sheet.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/699579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
The kinetic energy of a rotating globe I can find formulas for the kinetic energy of a globe (ball) in motion but not for just rotating. Anyone has the formula to calculate the kinetic energy of a rotating globe?
The total kinetic energy of a body is the sum of its translational and rotational kinetic energies $$K = \frac{1}{2} m v^2 + \frac{1}{2} I \omega^2$$ where $v$ is the translational speed of the center of mass, $\omega$ is the angular velocity, and $I$ is the moment of inertia about the axis of rotation. The moment of inertia of a homogeneous solid sphere is $I = \frac{2}{5} m r^2$ where $r$ is the sphere radius. Note that the Earth is not a homogeneous solid sphere, but it can be approximated as one within some margin of error.
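As a rough worked example (my own numbers, using the uniform-sphere approximation; the real Earth's moment of inertia is about 20% smaller because its density increases toward the centre):

```python
import math

m, r  = 5.972e24, 6.371e6       # Earth's mass (kg) and mean radius (m)
omega = 2 * math.pi / 86164     # one rotation per sidereal day (s)

I = 2 / 5 * m * r**2            # homogeneous solid sphere
K_rot = 0.5 * I * omega**2
print(f"I ≈ {I:.2e} kg·m²,  rotational KE ≈ {K_rot:.1e} J")   # ≈ 9.7e37 kg·m² and ≈ 2.6e29 J
```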
{ "language": "en", "url": "https://physics.stackexchange.com/questions/699787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Notation for rule of thumb, without breaking dimensional homogeneity? I'd like to know how to write rules of thumb in a concise way, without breaking dimensional homogeneity. For example, if a runner has an average speed of ~10 km / h, an approximation of the covered distance would be $\mathrm{distance} \approx \mathrm{duration} * 10 \frac{\mathrm{km}}{\mathrm{h}}$ Is there any shorter way to write it? The goal would be to make it clear that you can simply multiply the number of hours by 10, and you'd get the number of kilometers. $\mathrm{km} = 10 * \mathrm{h}$ is concise, but it's also obviously wrong because it breaks dimensional homogeneity. There was a question on bicycle.stackexchange ("How to convert calories to watts on Strava rides?"), and one of the answers was Calories(kcal) = Watts * Hours * 4. This rule of thumb doesn't break homogeneity, but it still looks weird because one kcal is 1.163Wh, and not 4Wh. What would be a better way to write it?
My advice:

* Have a symbol for each quantity, including coefficients such as your distance-to-duration ratio; the symbols should represent the quantities, not what they become when nondimensionalized on division by a unit.
* State coefficients' values, where known, in separate equations.
* Trust your reader to remember how the arithmetic of ordinary (dimensionless) numbers translates into that of dimensionful quantities.

Your example is $s\approx vt,\ v=10\ \text{km/h}$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/699915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Thrusters in space Suppose we propel ourselves using thrusters in space. If we have two thrusters pushing against each other like this: Nothing will happen. We can imagine the "strain" on the vehicle chassis; the forces are not doing anything useful. All the thrusters' fuel is wasted. Now, if we have two thrusters like this: The resulting force intensity is $\sqrt{2}$, which is less than $2$, the sum of the force magnitudes provided by the thrusters. Does it mean, as in the previous example, that some fuel is wasted? Consequently, does it mean that ideally we would like to orient the thrusters toward the direction we want to push?
Consequently, does it mean that ideally we would like to orient the thrusters toward the direction we want to push? Assuming a spherical cow (which is a big assumption), your conclusion is correct, because there are vector components of the two thrusters that oppose each other and cancel out. In practice, you need to consider the weight, complexity, and cost of mechanisms to orient thrusters, and that of two smaller thrusters versus one larger thruster.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/700008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Data transmission through optical fiber and copper wire I read the following in one reference: "A copper wire (twisted pair), the link traditionally associated with low bit rate transmission, is still in use in modern data centers, transmitting data at 20 Gbit/sec. The secret? It does so only over a few meters (the bandwidth-distance product is constant)." My question: I know that optical fibers, for example, are used for high data rate transmission over long distances. High data rates mean high bandwidth and a high carrier frequency. According to the constant bandwidth-distance product principle, we could decrease the carrier frequencies used with optical fibers and thus increase the transmission distance. My question is: can we use very low frequencies (in the range of kilohertz) with optical fibers to transmit data over very large distances? Of course the bit rate will be very low, but I am asking about the possibility of transmitting very low frequencies over optical fibers.
Optical fibers offer their best transmission around 1310-1550 nm or 850 nm of the wavelength spectrum, depending on the type of fiber. For lower optical frequencies (longer wavelengths), the attenuation rises beyond 10 dB per km. However, there has been an increasing trend of transmitting radio signals over fiber, through radio-over-fiber (RoF), under certain conditions.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/700158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Force between two protons Yesterday my teacher was teaching about the production of photons. He told us that photons are produced when an electron moves from a higher energy level to a lower energy level. Then suddenly an idea struck me: if electrons are responsible for photons, and photons are responsible for the electromagnetic force, then how does the electromagnetic force arise between two individual protons? Are there more ways to generate photons?
Electrons can make photons, but they aren't the only particles that can make photons. Any particle with electric charge can emit or absorb photons.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/700557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Why doesn't the non-degeneracy definition of the metric tensor assure $g(v,v)=0\implies v=0$? We know that a defining property of the metric tensor is that it is non-degenerate, meaning $\forall u,\, g(v,u)=0\implies v=0$. Yet from a textbook I read that $g(v,v)=0$ does not assure $v=0$. Why is this? Can't we simply let $v=u$ in the definition and obtain $g(v,v)=0\implies v=0$? Thanks.
On Lorentzian manifolds there is an obvious counterexample to your claim, namely null vectors. Let $$g = \begin{pmatrix}-1 & 0 \\ 0 & 1\end{pmatrix}$$ be the Minkowski metric in 2D. Consider $$v= \begin{pmatrix}-1 \\ 1 \end{pmatrix}$$ We see that $g(v,v)=0$ although $v \neq 0$. Therefore, your implication is false.
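For concreteness, a two-line numerical check (mine) that this $v$ is null and yet not in the metric's kernel, so non-degeneracy is not violated:

```python
import numpy as np

g = np.diag([-1.0, 1.0])       # 2D Minkowski metric
v = np.array([-1.0, 1.0])      # a null vector
u = np.array([1.0, 0.0])

print(v @ g @ v)   # 0.0 -> g(v, v) = 0 although v != 0
print(v @ g @ u)   # 1.0 -> g(v, u) != 0, so v does not make g degenerate
```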
{ "language": "en", "url": "https://physics.stackexchange.com/questions/700726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Difference between the wave forms in the water and in the Young double slit experiment We can observe that when we cause a slight disturbance at two points on a water surface which is initially totally undisturbed, it forms water waves which look as shown in the image below: We can observe that there is constructive and destructive interference at some places, and also points that lie in between these types of interference (that is, between fully destructive and fully constructive). We notice that no screen is needed to show the interference effects at all, so why in the YDSE do we need a screen to show the interference patterns? Is it because we can't observe the interference happening in air or any other medium with our naked eyes?
Did you know that a water wave in a single slit will NOT make an interference pattern ... but light will! Water and light are similar but also very different. In the DSE there is NO light in the dark areas .... photons will travel to areas that are more "resonant" (path length is a multiple of wavelength), this is quantum mechanics/optics.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/700854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Why do we use different differential notation for heat and work? I just recently started studying thermodynamics, and I am confused by something we were told. I understand we use the inexact differential notation because work and heat are not state functions, but we are told that the '$df$' notation is only for functions and that the infinitesimal heat and work are 'not changes in anything'. Surely they can be expressed as functions of something? And they are still changes, as they do change? What is the thermodynamic reason for describing them as not being changes in anything?
"What is the thermodynamic reason for describing them as not being changes in anything?" Well, what would they be changes in? There isn't some quantity of heat belonging to something of which $\delta Q$ is a change; it's only heat while the energy is flowing. A similar remark applies to work, which is also energy in transit.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/701026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Intuitive explanation for coefficient in the Larmor formula So the Larmor formula tells us the total power radiated by an accelerating point charge that doesn't go too fast with respect to the speed of light is $P=\frac{2}{3}\frac{q^2 a^2}{c^3}$ (written in CGS units). Now my question is: Is there an intuitive explanation behind this expression as to why the coefficient of $\frac23$ is the way it is except for the argument that it came from integrating over solid angle?
The 2/3 comes from the average value of $\sin^2\theta$ in the angular integration.
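For anyone who wants to see it without redoing the solid-angle integral by hand, a one-line SymPy check (mine):

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
avg = sp.integrate(sp.sin(theta)**2 * sp.sin(theta), (theta, 0, sp.pi)) \
      * sp.integrate(1, (phi, 0, 2 * sp.pi)) / (4 * sp.pi)
print(avg)   # 2/3 -- the solid-angle average of sin^2(theta)
```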
{ "language": "en", "url": "https://physics.stackexchange.com/questions/701159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Force acting on a negative particle in a magnetic field I have recently learned about magnetic fields and particles. The most recent thing I have learnt is the right-hand rule. The force F acting on a negative particle is always opposite to the force we get from the right-hand rule, if I understand the right-hand rule correctly. An exercise in my book said that I was wrong about this, so I'll ask the physicists here. My school book says that the force acting on the negative particle in c) will go towards the left. That would be correct if the particle was positive, right? As I understand things, the force acting on the negative particle is towards the right? Edit: I would appreciate it if I could get a second opinion/answer here.
$\vec{V} \times \vec{B}$ follows the right-hand rule, where $\vec{V}$ is the pointer finger and $\vec{B}$ is the middle finger. Pointer finger down, middle finger into the page, results in $\vec{V} \times \vec{B}$ pointing to the right. Multiplying by the negative charge means the force is towards the left.
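A quick way to check such setups is to compute the cross product with explicit components. A short NumPy sketch (mine; the axis convention, with x to the right, y up and z out of the page, and the particular directions of $\vec V$ and $\vec B$ are assumptions matching the description above, since the book's figure is not shown):

```python
import numpy as np

# assumed axes: x to the right, y up, z out of the page
V = np.array([0.0, -1.0, 0.0])    # velocity pointing down (pointer finger down)
B = np.array([0.0, 0.0, -1.0])    # field into the page (middle finger into the page)

F_per_q = np.cross(V, B)
print(F_per_q)        # [1. 0. 0.]  -> to the right for a positive charge
print(-F_per_q)       # [-1. 0. 0.] -> to the left for a negative charge
```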
{ "language": "en", "url": "https://physics.stackexchange.com/questions/701285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }