Why doesn't dark matter gather and form celestial bodies? The only thing we know about dark matter is that it "attracts" and affects the momentum and speed of our baryonic matter, which means that it does have mass of a sort. So why haven't we witnessed dark matter-dark matter interactions in the form of collisions of celestial bodies like stars, black holes or other distinct objects? What do we know about that? PS: it would be very helpful for me if someone with an answer could cite a paper on the topic. Thanks in advance!
The standard answer is that dark matter does not seem to interact strongly with itself (although self-interacting dark matter is an active research topic), and does not emit electromagnetic radiation. The latter property means that a clump of dark matter cannot lose energy by radiating it away, and will remain a diffuse clump. Ordinary matter can coalesce, heat up, radiate away the energy, and coalesce further. Hence dark matter seems to form diffuse halos that do not form celestial bodies.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/471226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why is Earth's gravitational acceleration $9.8 \frac{m}{s^2}$? How was the value of $g$ determined as 9.8 $\frac{m}{s^2}$? I am not requesting the derivation but the factors/parameters that influence this value.
One thing I'd like to add to these excellent answers is that although you didn't ask for a derivation, I think it is necessary in understanding the $g = \frac{GM}{r^2}$ equation. The force due to gravity between two masses is given by Newton's law of universal gravitation, $F=G\frac{m_{1} m_2}{r^2}$, where:

* $F$ = force between the two masses
* $G$ = gravitational constant
* $m_1$ = mass of object 1 (in this example, the mass of the Earth)
* $m_2$ = mass of object 2 (in this example, the object on the surface of the Earth)
* $r$ = distance between the centres of the two masses (in this example, the radius of the Earth, assuming the object is on the surface of the Earth)

The acceleration of the object on the surface of the Earth can then be found using Newton's second law, $F=m_2a_2$. Therefore $m_2a_2 = G\frac{m_{1} m_2}{r^2}$, which simplifies to $a_2 = G\frac{m_{1}}{r^2}$. Replacing the acceleration with $g$ and the mass of the Earth with $M$ gives the equation $g=\frac{GM}{r^2}$. This is the reason why the acceleration at the Earth's surface is always about 9.81 $\mathrm{m\,s^{-2}}$, regardless of the mass of the other object. Therefore, in answer to your question, the only factors affecting the acceleration due to gravity at the Earth's surface are:

* the gravitational constant
* the mass of the Earth
* the radius of the Earth (if we assume the centre of mass is exactly at the centre of the Earth)
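As a quick numerical illustration (my addition, not part of the original answer), here is a short Python sketch that plugs commonly quoted values for $G$, the Earth's mass and the Earth's mean radius into $g = GM/r^2$; the exact decimals depend on which radius you use, since the Earth is not a perfect sphere.

```python
# Hedged sketch: evaluate g = G*M/r^2 with commonly quoted values.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24  # mass of the Earth, kg
R_earth = 6.371e6   # mean radius of the Earth, m

g = G * M_earth / R_earth**2
print(f"g = {g:.2f} m/s^2")  # ~9.82 m/s^2, close to the quoted 9.8-9.81
```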
{ "language": "en", "url": "https://physics.stackexchange.com/questions/471496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Why is normal force at the bottom dependent on normal force on top? Why does the normal force on bottom of the track have anything to do with the normal force on top of the track? Why isn't the normal force at the bottom simply $mg$?
The connection is that the centripetal force that keeps the car going in a circle is the same at the top of the track as at the bottom (because the car is travelling at a constant speed). If this centripetal force is $F$ then at the top of the track we have $F = N_{top} + mg$ where $N_{top}$ is the normal force at the top of the track. At the bottom of the track we have $F = N_{bot} - mg$ where $N_{bot}$ is the normal force at the bottom of the track. We can eliminate $F$ from these two equations to get $N_{top}+mg = N_{bot}-mg \\ \Rightarrow N_{bot} = N_{top}+2mg$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/471561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
Potential by Assembling Charges For finding the electric potential energy of a uniformly charged sphere, we can assemble the sphere by bringing charges from infinity to that point. So to make a uniformly charged sphere of radius $R$ and total charge $Q$, at some instant, charge will be assembled up to a certain radius $x$. In order to find the potential of this sphere at the surface, why are my two approaches giving different answers? Approach 1: $$\rho = \frac{3Q}{4 \pi R^{3}}$$ $$q = \frac{4}{3} \pi x^{3} \rho = Q \frac{x^{3}}{R^3}$$ Potential at the surface would be $$V = \frac{q}{4 \pi \epsilon_0 x} = \frac{Q x^{2}}{4 \pi \epsilon_0 R^{3}}$$ Approach 2: $$\rho = \frac{3Q}{4 \pi R^{3}}$$ $$q = \frac{4}{3} \pi x^{3} \rho = Q \frac{x^{3}}{R^3}$$ $$E = \frac{Q x}{4 \pi \epsilon_0 R^{3}}$$ (From Gauss' Law) Potential at the surface would be $$V = -\int{\vec{E} \cdot \vec{dx}} = -\frac{Q}{4 \pi \epsilon_0 R^{3}} \int_{0}^{x}{xdx} = -\frac{Q x^{2}}{8 \pi \epsilon_0 R^{3}}$$ Why is the answer different in the two cases?
Approach 2 is wrong: you didn't take into account the correct limits for the potential. The potential at the centre of the sphere is not zero, so integrating $E$ from $0$ to $x$ gives $V(x)-V(0)$, not $V(x)$. To find the potential at the surface, integrate the electric field outside the sphere from $x$ out to infinity, using $V(\infty)=0$. Then, if you wish, you can find the potential at any interior point by continuing the integral inward with the interior field.
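Here is a small, hedged sympy check (my addition, not part of the original answer) that carrying out the integral with the correct limits reproduces Approach 1: the field outside the assembled charge $q$ is $q/(4\pi\epsilon_0 r^2)$, and integrating it from $x$ to infinity gives $q/(4\pi\epsilon_0 x)$.

```python
# Sketch: verify V(x) = q/(4*pi*eps0*x) by integrating the exterior field from x to infinity.
import sympy as sp

r, x, q, eps0 = sp.symbols('r x q epsilon_0', positive=True)

E_out = q / (4 * sp.pi * eps0 * r**2)           # field outside the partially assembled sphere
V_surface = sp.integrate(E_out, (r, x, sp.oo))  # V(x) - V(oo), with V(oo) = 0

print(V_surface)                                          # q/(4*pi*epsilon_0*x), matching Approach 1
print(sp.simplify(V_surface - q / (4 * sp.pi * eps0 * x)))  # 0
```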
{ "language": "en", "url": "https://physics.stackexchange.com/questions/471655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Why was M87 targeted for the Event Horizon Telescope instead of Sagittarius A*? The first image of a black hole has been released today, April 10th, 2019. The team targeted the black hole at the center of the M87 galaxy. Why didn't the team target Sagittarius A* at the center of our own galaxy? Intuitively, it would seem to be a better target as it is closer to us.
Of course they targeted Sgr A* as well. I think though that this is a more difficult target to get good images of. The black hole is about 1500 times less massive than in M87, but is about 2000 times closer. So the angular scale of the event horizons should be similar. However Sgr A* is a fairly dormant black hole and may not be illuminated so well, and there is more scattering material between us and it than in M87. A bigger problem may be variability timescales$^{\dagger}$. The black hole in M87 is light days across, so images can be combined across several days of observing. Sgr A* is light minutes across, so rapid variability could be a problem. The penultimate paragraph of the initial Event Horizon Telescope paper says: Another primary EHT source, Sgr A*, has a precisely measured mass three orders of magnitude smaller than that of M87*, with dynamical timescales of minutes instead of days. Observing the shadow of Sgr A* will require accounting for this variability and mitigation of scattering effects caused by the interstellar medium $\dagger$ The accretion flow into a black hole is turbulent and variable. However, the shortest timescale upon which significant changes can take place across the source is the timescale for light (the fastest possible means of communication) to travel across or around it. Because the material close to the black hole is moving relativistically, we do expect things to vary on these kinds of timescales. The photon sphere of a black hole is approximately $6GM/c^2$ across, meaning a shortest timescale of variability is about $6GM/c^3$. In more obvious units: $$ \tau \sim 30 \left(\frac{M}{10^6 M_{\odot}}\right)\ \ {\rm seconds}.$$ i.e. We might expect variability in the image on timescales of 30 seconds multiplied by the black hole mass in units of millions of solar masses. This is 2 minutes for Sgr A* and a much longer 2.25 days for the M87 black hole. EDIT: 12th May. And here it is, an image reconstruction, published by the Event Horizon Telescope consortium (see here) for the black hole at the centre of the Milky Way. The image is a time-averaged composite reconstructed using a novel dynamical imaging process for about 10 hours of VLBI data.
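To make the closing arithmetic concrete, here is a short Python sketch (my addition) evaluating $\tau \sim 30\,(M/10^6 M_{\odot})$ s for roughly the published masses of Sgr A* (about $4\times10^6 M_{\odot}$) and M87* (about $6.5\times10^9 M_{\odot}$).

```python
# Sketch: shortest variability timescale tau ~ 6*G*M/c^3, i.e. ~30 s per 10^6 solar masses.
def tau_seconds(mass_in_million_solar):
    return 30.0 * mass_in_million_solar  # seconds

for name, m6 in [("Sgr A*", 4.0), ("M87*", 6500.0)]:
    t = tau_seconds(m6)
    print(f"{name}: tau ~ {t:.0f} s  (~{t/60:.0f} min, ~{t/86400:.2f} days)")

# Sgr A*: ~120 s (about 2 minutes); M87*: ~195000 s (about 2.25 days),
# matching the numbers quoted in the answer.
```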
{ "language": "en", "url": "https://physics.stackexchange.com/questions/471792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35", "answer_count": 1, "answer_id": 0 }
Why can infinite planes be approximated as Gaussian surfaces? A little background: I'm an undergraduate studying Electrodynamics, currently in Chapter 8 of Griffiths. A question I came across (8.4 part a for those curious) asks for a calculation of the force exerted by one point particle on another point particle of equal charge. This is meant to be done through means of: $\oint_S \bar{T} \cdot d \vec{a}$ $\bar{T}$ being the Maxwell stress tensor. I'd expect that you'd have to create a closed surface around one of the point charges, but this question explicitly wants the surface integral to be done over the plane. Easy enough, but my question is why would this be a viable closed surface? The explanation I've been given so far from the lecturing professor (and wikipedia) is that the plane is an approximation of a closed surface. It seems that as the "bubble" (see the below cross-section illustration) extends to infinite size, its function becomes negligible and what ultimately matters is the infinite plane. This explanation makes intuitive sense, but I feel is a little hand-wavey. Why is this the case? Does it work for all systems of particles (continuous and not continuous), or just for a point particle. My intuition would tell me that if you zoomed in on the very edge of a sphere, it would start to look like a plane (but again this explanation is not very mathematical). Any insight would be greatly appreciated.
Imagine the infinite plane orthogonally intersected by a cylindrical Gaussian pillbox of end area $A$. The field lines are normal to the infinite plane, therefore all of the field lines exiting the cylinder pass through the two ends of the cylinder. If the surface mass density of the infinite plane is $\sigma$, and the integral only needs to be evaluated over the two ends, then $$g(2A) = 4\pi GM = 4\pi G\sigma A$$ hence $$g = 2\pi G\sigma$$ This is a constant, independent of the length of the cylinder.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/471907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Proving that $dS$ is an exact differential mathematically OK...so I hope this is not too dumb a question: We know that we can express $dS$ as $$dS=\frac{dQ}{T}=\frac{C_v}{T}dT+\frac{R}{V}dV,$$ where $C_v$ is the thermal capacity at constant volume and $R$ is the gas constant. However, I recall that for a differential of the form $dz=X(x,y)dx+Y(x,y)dy$ to be exact we must have $$\frac{\partial X(x,y)}{\partial y}=\frac{\partial Y(x,y)}{\partial x}.$$ Now my problem is how do you show that $$\frac{\partial}{\partial V}\left(\frac{C_v}{T}\right)=\frac{\partial}{\partial T}\left(\frac{R}{V}\right).$$ My math skills are kind of rusty, so I'm having trouble here. I hope someone can help me out on this.
For a perfect gas, once one arrives at an expression for $dS$, the integrability condition is a trivial check. $C_v=\alpha R$ with $\alpha$ constant. Thus, $C_v/T$ does not depend on $V$, while $R/V$ does not depend on $T$, and the condition for integrability is trivially satisfied. A little less trivial (but not too difficult task) would be to show for a perfect gas that $\delta q_{rev}/T$ is an exact differential, where $\delta q_{rev} = dU + PdV$. For a general system (not a perfect gas) this would be equivalent to Carnot's theorem and then an expression of the 2nd law.
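As a quick symbolic check (my addition, assuming an ideal gas with constant $C_v$), the mixed-partial condition from the question is indeed satisfied trivially:

```python
# Sketch: verify d/dV (C_v/T) == d/dT (R/V) for an ideal gas with constant C_v.
import sympy as sp

T, V, Cv, R = sp.symbols('T V C_v R', positive=True)

X = Cv / T   # coefficient of dT in dS
Y = R / V    # coefficient of dV in dS

print(sp.diff(X, V))                   # 0  (C_v/T has no V dependence)
print(sp.diff(Y, T))                   # 0  (R/V has no T dependence)
print(sp.diff(X, V) == sp.diff(Y, T))  # True, so dS is exact
```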
{ "language": "en", "url": "https://physics.stackexchange.com/questions/472418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why was the imaging of M87 black hole easier than imaging the Milky way's Sagittarius A*? In the recent EHT press release of the image of the super massive black hole at M87, I am curious to know why the super massive black hole at the centre of milky way has not been imaged yet.
The EHT needs a long exposure time. M87* is rather stable over that time, while Sgr A* is apparently quite variable even during a single exposure. The team is busy deblurring the Sgr A* image. See this link and practice your Dutch at the same time: https://www.astroblogs.nl/2019/04/10/een-opzienbarend-eht-middagje-in-brussel/amp/
{ "language": "en", "url": "https://physics.stackexchange.com/questions/472674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Stress-Energy Tensor and Conformal Invariance in String Theory Since the Euler-Lagrange equations corresponding to the Polyakov action imply no dependence on the auxiliary metric, we arrive at the constraint $T_{ab}=0$. We then change to lightcone coordinates $++$ and $--$ and write $T_{++}$, $T_{+-}$, $T_{-+}$, and $T_{--}$ in terms of the $T_{ab}$, which all vanish due to the vanishing of the $T_{ab}$. One way to see that the trace vanishes is via Weyl symmetry, but since all of the $T_{++}$ etc. vanish isn't it obvious that the trace vanishes? And then isn't the equation $$\partial_{-}T_{++}=0$$ true trivially? Given the importance of these results towards establishing conformal field theory in String Theory I would appreciate any help understanding this reasoning.
The stress-energy-momentum (SEM) tensor $T_{ab}$ doesn't vanish as an operator identity/off-shell. The Virasoro constraints $T_{ab}\approx 0$ are on-shell equations that hold in quantum average $\langle T_{ab}\rangle=0.$ If there is no Weyl-anomaly, we may consistently impose off-shell

* Dilation symmetry $\Rightarrow$ tracelessness of SEM tensor $T_{\pm\mp}=0$.
* World-sheet (WS) translation symmetry $\Rightarrow$ continuity eq. for SEM tensor $\partial_{\mp}T_{\pm\pm}=0$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/472811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is there no jet visible in the M87* black hole radio image? M87 is known to contain a jet that is likely created by the supermassive black hole at its center. The recently published radio image taken by the Event Horizon Telescope does not display any noticeable jet. I can think of a few reasons why, but lack knowledge to give a scientific answer. For example:

* the jet starts further out (how far out?)
* the jet does not radiate enough in the particular radio spectrum the EHT measured
* spacetime is so warped it is hard to discern the jet from the accretion disc

Can someone give a scientific explanation?
In fact, if you look closely at the M87 image you can see the two jets, but they are very weak and cannot be clearly resolved because of the limited resolution of the EHT. If you look at a higher-quality image, you can see that the two jets come off in opposite directions.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/472910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
What is frequency entanglement? If frequency (or energy) is regarded as a quantum property, can one generate a pair of photons with frequency entanglement? What would be the uses of this type of entanglement in, say, sensing? How would you describe it mathematically? And would you be able to give a bit of intuition?
We are basically talking about time-energy entangled photons. This is used for quantum communication. The real term is frequency-bin entanglement. First, a monochromatic laser pump produces frequency-entangled photons, where the frequency of each photon is uncertain but the sum of their frequencies is well defined. Then, narrow-band frequency filters resolve the frequency of each photon. This leads to the concept of a frequency bin: two photons whose frequencies are so close that they cannot be distinguished by the filters. Different frequency bins are then made to interfere using electro-optic phase modulators. Quoting the thesis linked below: "In this chapter, we introduce our method to manipulate energy-time entangled photons. First, we describe classical and single-photon schemes for manipulating the frequency degree of freedom with EOPMs, which create frequency sidebands. These experiments can be viewed as high-dimensional frequency analogues of the spatio-temporal and polarization experiments presented in section 1.3. Correspondence principles imply that classical interference translates to single-photon interference, and single-photon interference translates to two-photon interference. We show how we can transpose the setup to the entangled-photon case, leading to the notion of frequency-bin entanglement. We also show how the high-dimensional interference pattern created by EOPMs can be restricted to interference between effective two-dimensional states. We describe in detail our experimental tools (mainly composed of off-the-shelf components) to implement the method and acquire reliable data. Finally, we present our experimental results, i.e. high-visibility two-photon interference patterns in optical fibers at telecommunication wavelengths, demonstrating that the method can be reliably implemented." Please see here: https://tel.archives-ouvertes.fr/tel-01743877/document
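To address the "how would you describe it mathematically" part of the question, here is a minimal sketch (my addition, using standard notation rather than anything from the linked thesis): for a monochromatic pump of frequency $\omega_p$, the photon pair produced by down-conversion or four-wave mixing is, schematically,

$$|\psi\rangle = \int d\omega\, f(\omega)\, \hat a^\dagger_s(\omega_0+\omega)\, \hat a^\dagger_i(\omega_0-\omega)\,|0\rangle, \qquad \omega_0 = \omega_p/2,$$

where $f(\omega)$ is the joint spectral amplitude. Each photon's frequency on its own is broad (set by the width of $f$), but the sum of the signal and idler frequencies is sharply $\omega_p$, and the state does not factor into (signal state) times (idler state); that is exactly frequency (time-energy) entanglement. Discretizing $\omega$ into bins resolved by the filters gives the frequency-bin form $|\psi\rangle \propto \sum_k c_k |k\rangle_s |k\rangle_i$.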
{ "language": "en", "url": "https://physics.stackexchange.com/questions/473144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
What causes the color difference of lightning flashes? Some flashes of lightning are seen in a blue shade while some have a yellowish/orange appearance. What is the possible cause of the colour difference?
Lightning produces light by ionising the air. For different proportions of gases in the atmosphere, you get different colours emitted during lightning. Learning from the comments, the following are the likely factors that affect the colour of lightning:

* Atmospheric composition varies with the place of interest. The main components like nitrogen and oxygen don't vary much with place and time, but the water vapour composition does vary greatly (Wikipedia). This may give rise to the change in colour.
* The presence of pollutants and atmospheric impurities can change the colour seen by you, mainly through scattering. The yellowish colour may arise when the higher-frequency colours are scattered away by the pollutants in the air.

EDIT: I am not very sure of the answer now, but I don't want to delete it. I think the more knowledgeable users can provide a more accurate explanation.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/473296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Magnesium has no unpaired electrons, then why is it paramagnetic? Paramagnetism is often associated with the presence of unpaired electrons in the atomic orbitals of atoms of the element. But magnesium has no unpaired electrons in its atomic orbitals and still is paramagnetic. Why? Please explain.
Often, but not always. In metals the free electrons contribute to the magnetic properties, and this contribution is not related to atomic orbitals. See "Pauli paramagnetism" for example. The free electrons have a diamagnetic component too (Landau diamagnetism). The balance between all the contributions will determine the behavior of the metal (para- or diamagnetic).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/473445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why does a heavier element have a low specific heat capacity? Lead has an atomic mass of 207 amu and a specific heat of 125 J/(kg·°C), while copper has 63 amu and 376 J/(kg·°C); why is that? If energy is stored in the motion of the particles, heavier particles should move more slowly, so wouldn't this mean that it requires more energy to increase the temperature by 1 degree Celsius? I looked it up on Google and the main reason given is that there are more particles, but how? I am taking 1 kg of lead and 1 kg of copper, and from my understanding, the particles' masses are different because the lead particles are heavier, not because there are more particles in lead than in copper. So please explain why a heavier element tends to have a low specific heat capacity.
I think you mean to say that they have a different number of particles- 1kg of lead and 1kg of copper surely have the same mass. There are $~2.91\times10^{24}$ atoms in a kg of lead, and $~9.45\times10^{24}$ atoms in a kg of copper. Heat (more accurately, thermal energy) is "stored" in the particular degrees of freedom for the motion of the atomic "particles"- and since there are more atoms in a kilogram of copper, it's gonna have more atoms with more degrees of freedom. Thus, more capacity for storage.
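A quick back-of-the-envelope check (my addition): the classical Dulong-Petit law says the molar heat capacity of a solid is about $3R$, so the specific heat per kilogram is roughly $3R$ divided by the molar mass, which reproduces the trend in the question.

```python
# Sketch: Dulong-Petit estimate c ~ 3R / (molar mass), showing why heavier atoms
# give a smaller specific heat per kilogram (fewer atoms per kilogram).
R = 8.314  # J/(mol K)

for name, molar_mass_kg in [("lead", 0.207), ("copper", 0.0635)]:
    c = 3 * R / molar_mass_kg
    print(f"{name}: ~{c:.0f} J/(kg K)")

# lead:   ~120 J/(kg K)  (quoted value ~125)
# copper: ~393 J/(kg K)  (quoted value ~376; copper sits a bit below Dulong-Petit at room temperature)
```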
{ "language": "en", "url": "https://physics.stackexchange.com/questions/473672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
If a satellite speeds up, does that make it move farther away or closer? If a satellite is in a stable circular orbit and speeds up by about 41% (to escape velocity) then it leaves its host forever. I get that. However, what if it speeds up by less than 41%? Intuitively, it would seem that the satellite would move farther away from the host and thus enter a higher (more distant) orbit. However, according to my understanding, a stable orbit requires the satellite to move more slowly the farther away it is from the host. For example, the Earth moves more slowly around the Sun than Venus does, because it is farther from the Sun than Venus. So, if a satellite speeds up then the stable orbit would be closer to the host, not farther away. What am I missing here?
Since the stable velocity v for a given orbital radius $R$ is given by $v = \sqrt\frac{GM}{R}$, I would assume that the satellite spirals outwards at an accelerating rate. Since its velocity is greater than the stable velocity for that orbit, it will start to increase the radius of that orbit. As the radius increases, the magnitude of the gravitational force decreases, and thus the rate of spiraling increases (therefore, an accelerating spiral). I'm not sure myself either because this would also imply that the satellite will eventually leave the host, but this is my best guess.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/473934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 3 }
Calculating refraction between numerous media Last year, our teacher made us an exam to check our knowledge in light reflection and refraction. I don't perfectly remember it, but I know that one of the exercises included a ray with its angle of incidence, and let's say 7 media with their respective indexes of refraction. We had to find the last angle of refraction. Even though it's fairly easy to do, it took my classmates a lot of time. Meanwhile I didn't even have time to reach that question: we were very limited on time. I was thinking if there was another way to find it, a quicker one. Then, while looking at a figure of refraction in my physics book, I came up with an idea: what if we calculate it using only $n_1$, $n_7$, $\Theta_1$ and $\Theta_7$? I'll explain. Let's say the first medium is air, and I'm calling it $n_1$. The seventh medium is glass, and I'm calling it $n_7$. There are 5 other media between air and glass. The angle of incidence will be $\Theta_1$ and the last angle of refraction, the one inside $n_7$, will be $\Theta_7$. Is it possible to calculate $\Theta_7$ as $$\Theta_7 = \frac{n_7 \cdot \Theta_1}{n_1},$$ or is it required to calculate each angle of refraction one by one until we reach $\Theta_7$?
Yes, you are right, except that it is not the ratio of the angles $\theta$ but of $\sin\theta$. The real reason this works can be seen with a bit of observation: $$n_i\sin(\theta_i) = \text{constant}$$ Because: $$\frac{n_2}{n_1} = \frac{\sin\theta_1}{\sin\theta_2}$$ And for the ray going to the third medium from the second, $$\frac{n_3}{n_2}=\frac{\sin\theta_2}{\sin\theta_3}$$ What do you see in common (try rearranging them)? $$n_1\sin\theta_1=n_2\sin\theta_2=n_3\sin\theta_3 = \dots$$ so the intermediate media drop out and you can go straight to $\sin\theta_7 = \frac{n_1}{n_7}\sin\theta_1$.
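A small numerical sketch (my addition, with made-up indices for the intermediate media) confirming that chaining Snell's law through all seven media gives the same final angle as the one-step formula $\sin\theta_7 = (n_1/n_7)\sin\theta_1$:

```python
# Sketch: propagate a ray through 7 media step by step vs. the one-step shortcut.
import math

n = [1.00, 1.31, 1.33, 1.45, 1.36, 1.52, 1.50]  # hypothetical refractive indices
theta1 = math.radians(40.0)                      # angle of incidence in medium 1

# step-by-step: n_i sin(theta_i) = n_{i+1} sin(theta_{i+1})
theta = theta1
for i in range(len(n) - 1):
    theta = math.asin(n[i] * math.sin(theta) / n[i + 1])

# one-step shortcut: n_1 sin(theta_1) = n_7 sin(theta_7)
theta_direct = math.asin(n[0] * math.sin(theta1) / n[-1])

print(math.degrees(theta), math.degrees(theta_direct))  # identical, ~25.4 degrees each
```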
{ "language": "en", "url": "https://physics.stackexchange.com/questions/474162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show two Lagrangians are equivalent I need to show that these two Lagrangians are equivalent: \begin{align} L(\dot{x},\dot{y},x,y)&=\dot x^2+\dot y + x^2-y ,\\ \tilde{L}(\dot x, \dot y, x, y)&=\dot x^2+\dot y -2y^3. \end{align} This is the case iff they differ by a total derivative $\frac{dF}{dt}(x,y)$. In this case, the difference is $x^2+y^3$ and I can't imagine an $F(x,y)$ whose total derivative is the one above. How should I proceed? I tried $F(x,y)=\frac{x^3}{3\dot x} + \frac{y^4}{4\dot y}$, but it shouldn't contain the dotted terms. Actually, I just proved they don't give rise to the same Lagrange equations, so I can conclude they're not equivalent, right?
I leave it to OP and the reader to prove that OP's two Lagrangians are indeed classically inequivalent, but let me make the following general remarks:

* Two Lagrangians $L_1$ and $L_2$ are classically equivalent iff they give the same Euler-Lagrange (EL) equations.
* A sufficient condition is that the difference $L_2-L_1=\frac{dF}{dt}$ is a total derivative, but it should be stressed that it is not a necessary condition, cf. e.g. my Phys.SE answer here.
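A hedged sympy sketch (my addition, not part of the answer above) that computes the Euler-Lagrange equations of both Lagrangians directly, so the reader can check that they do not coincide:

```python
# Sketch: derive the Euler-Lagrange equations of L and L-tilde and compare them.
import sympy as sp

t = sp.symbols('t')
x, y = sp.Function('x')(t), sp.Function('y')(t)

def el_equations(L):
    """Euler-Lagrange expressions d/dt(dL/dq') - dL/dq for q in (x, y)."""
    eqs = []
    for q in (x, y):
        dLdq = sp.diff(L, q)
        dLdqdot = sp.diff(L, sp.Derivative(q, t))
        eqs.append(sp.simplify(sp.diff(dLdqdot, t) - dLdq))
    return eqs

L1 = sp.Derivative(x, t)**2 + sp.Derivative(y, t) + x**2 - y
L2 = sp.Derivative(x, t)**2 + sp.Derivative(y, t) - 2*y**3

print(el_equations(L1))  # something like [2*x'' - 2*x, 1]        -> x'' = x, plus 1 = 0
print(el_equations(L2))  # something like [2*x'' - 2*x, 6*y**2]   -> x'' = x, plus y = 0
```

The $y$-equations differ, so the two Lagrangians give different EL equations and are therefore not classically equivalent, consistent with the OP's own conclusion.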
{ "language": "en", "url": "https://physics.stackexchange.com/questions/474567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What did Feynman mean when he said nobody understands quantum mechanics? And what do we mean by understanding quantum mechanics?
I read the two existing answers (Anna; Valter Moretti) and I judge that whereas they are saying some correct things, they are not identifying correctly the issue which Feynman was pointing out. The situation is that with quantum theory we have a very well-defined and detailed prescription for calculating, correctly and precisely, what a vast range of phenomena are like. In this sense we understand quantum mechanics very well. Feynman knew this as much as anyone. But he also knew that it is very hard, perhaps impossible, to present quantum theory in the style of `here is the state of the system at some initial time, here is the equation describing the evolution, and consequently here is the state of the system at some final time'. The formalism doesn't fall into that neat separation, and nor is there any other straightforward picture for the type of evolution it describes. For example, in the calculation method famously associated with Feynman (the path integral approach), the integrals tell you the value of a quantum amplitude such as $$ \langle B | A \rangle $$ where $A$ could be some initial state of affairs (e.g. particles located at some given places, or having some given momenta and spin etc.), and $B$ some final state of affairs. The modulus-squared of this is (proportional to) the probability for that process. But the problem is that the further evolution of the system might involve a quantum superposition of state $B$ with some other possibility $C$. If this happens then one cannot say the process $\langle B | A \rangle$ happened on its own; one can only say that it is part of a larger process. Thus one is led inexorably into the much-written-about puzzles of exactly what words like "measure" or "observe" mean. Also Bell's inequality indicates the impossibility of describing quantum systems as if each one can be characterized by a set of physical parameters local to itself. This hints at a rather subtle limit to the concept of reductionism, but one should be careful not to overstate that limit. In short, it was these sorts of questions that Feynman was alluding to in his comment.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/475662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Why does my baby's feeding bottle get smashed into fewer pieces when fuller? Time and again my baby has tossed his glass feeding bottle in the air and smashed it on the floor, and I realized that the more milk the bottle had, the fewer fragments I had to collect... This makes me curious, but as my level in fluids and physics is high school, I cannot answer it. My guess is that more milk means more pulling forces on the glass, or that the milk acts as an absorbent material... Thanks for any answers. PS: the milk is powder dissolved in water.
When the bottle is in the air it has a potential energy of $(m_b+m_m)gh$. As you mentioned in the comment, the more the bottle is filled, the more potential energy it has. But in collisions it is not necessary for all the kinetic energy of the bottle to be converted to kinetic energy of the pieces. The energy is converted to break the bottle, to produce some breaking sound, etc. In the case when the bottle is filled with milk, the milk acts as a suspension, i.e. it absorbs the energy. The bottle should be filled to some minimum level before it starts to act like a suspension. If you fill the bottle with very little milk then the increase in kinetic energy due to mass will be more than its ability to absorb energy. So there will be more energy that can be used in breaking the bottle, hence more pieces. When the bottle is filled with enough milk then the increase in kinetic energy due to mass will be less than its absorbed energy. Hence there will be less energy spent in breaking the bottle. Mathematically it will look something like this: $$(m_m+m_b)gh= K_{pieces} + E_{break} + E_{milk} +E_{surrounding}$$ When there is very little milk then $E_{milk}$ is small. Hence $E_{break}$ will be more (since energy is constant). And obviously $E_{break} \propto N_{pieces}$. Similarly $N_{pieces}$ will be less if $E_{milk}$ is more. Hope this helps.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/475922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Why does nature favour the Laplacian? The three-dimensional Laplacian can be defined as $$\nabla^2=\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}.$$ Expressed in spherical coordinates, it does not have such a nice form. But I could define a different operator (let's call it a "Laspherian") which would simply be the following: $$\bigcirc^2=\frac{\partial^2}{\partial \rho^2}+\frac{\partial^2}{\partial \theta^2}+\frac{\partial^2}{\partial \phi^2}.$$ This looks nice in spherical coordinates, but if I tried to express the Laspherian in Cartesian coordinates, it would be messier. Mathematically, both operators seem perfectly valid to me. But there are so many equations in physics that use the Laplacian, yet none that use the Laspherian. So why does nature like Cartesian coordinates so much better? Or has my understanding of this gone totally wrong?
This is a question that haunted me for years, so I'll share with you my view about the Laplace equation, which is the most elemental equation you can write with the Laplacian. If you force the Laplacian of some quantity to 0, you are writing a differential equation that says "let's take the average value of the surrounding". It's easier to see in Cartesian coordinates: $$\nabla ^2 u = \frac{\partial^2 u}{\partial x ^2} + \frac{\partial^2 u}{\partial y ^2} $$ If you approximate the partial derivatives by $$ \frac{\partial f}{\partial x }(x) \approx \frac{f(x + \frac{\Delta x}{2}) - f(x-\frac{\Delta x}{2})}{\Delta x} $$ $$ \frac{\partial^2 f}{\partial x^2 }(x) \approx \frac{ \frac{\partial f}{\partial x } \left( x+ \frac{\Delta x}{2} \right) - \frac{\partial f}{\partial x } \left( x - \frac{\Delta x}{2} \right) } { \Delta x} = \frac{ f(x + \Delta x) - 2 \cdot f(x) + f(x - \Delta x) } { \Delta x ^2 } $$ for simplicity let's take $\Delta x = \Delta y = \delta$, then the Laplace equation $$\nabla ^2 u =0 $$ becomes: $$ \nabla ^2 u (x, y) \approx \frac{ u(x + \delta, y) - 2 u(x, y) + u(x - \delta, y) } { \delta ^2 } + \frac{ u(x, y+ \delta) - 2 u(x, y) + u(x, y - \delta) } { \delta ^2 } = 0 $$ so $$ \frac{ u(x + \delta, y) - 2 u(x, y) + u(x - \delta, y) + u(x, y+ \delta) - 2 u(x, y) + u(x, y - \delta) } { \delta ^2 } = 0 $$ from which you can solve for $u(x, y)$ to obtain $$ u(x, y) = \frac{ u(x + \delta, y) + u(x - \delta, y) + u(x, y+ \delta)+ u(x, y - \delta) } { 4 } $$ That can be read as: "The function/field/force/etc. at a point takes the average value of the function/field/force/etc. evaluated at either side of that point along each coordinate axis." Of course this only works for very small $\delta$ for the relevant sizes of the problem at hand, but I think it does a good job of building intuition. I think what this tells us about nature is that at first sight and at a local scale, everything is an average. But this may also tell us about how we humans model nature, our first model always being "take the average value", and maybe later delving into more intricate or detailed models.
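To make the "take the average of the surroundings" reading concrete, here is a short, hedged numpy sketch (my addition): repeatedly replacing each interior grid point by the average of its four neighbours (Jacobi relaxation) converges to a solution of the discrete Laplace equation with the given boundary values.

```python
# Sketch: Jacobi relaxation for the 2D Laplace equation on a square grid.
import numpy as np

n = 50
u = np.zeros((n, n))
u[0, :] = 1.0          # hold the top edge at 1, the other edges at 0 (boundary conditions)

for _ in range(5000):  # iterate "replace by the average of the 4 neighbours"
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])

# u now approximates a harmonic function: smooth, with no interior maxima or minima.
print(u[n // 2, n // 2])  # value at the centre, close to 0.25 by symmetry
```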
{ "language": "en", "url": "https://physics.stackexchange.com/questions/476175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "101", "answer_count": 4, "answer_id": 0 }
What happens when the volume of a perfect vacuum (0 psi) is increased? I'm an engineer working on a design in which a "plunger" will be pulled out of a sealed vessel, where the starting volume of the vessel is essentially zero, and I need to know what will happen to the surrounding structure. If I'm asking this in the wrong place then please feel free to point me in the right direction. Say you have a hand pump with an outlet that allows you to fully depress the plunger until the volume inside is zero. The outlet is then closed and the plunger is pulled, increasing the volume inside of the pump. At some point a "perfect" vacuum will form inside of the pump and from what I understand the pressure inside the container can never drop below the inverse of the pressure outside the container (1 atm). If this is true, where does the energy that is applied to the system as you continue to pull the plunger go? If the pressure can no longer decrease inside the pump then shouldn't the plunger require zero force to pull?
Suppose the plunger is 1 square inch in diameter. The work you do to pull it out is 14.7 pounds (force) times the distance you pull it out. Where's the energy? It's potential energy you got by lifting a 1 square inch column of air that distance. Let go of it, and that column of air will fall back, converting potential to kinetic energy. Then when the plunger hits the end the kinetic energy will get converted to heat. The vacuum will not be perfect, but that won't make any practical difference. If there are 3 molecules of nitrogen in there, that's practically none.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/476249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Why is radiation dangerous? From Wikipedia: Exposure to radiation causes damage to living tissue; high doses result in Acute radiation syndrome (ARS), with skin burns, hair loss, internal organ failure and death, while any dose may result in an increased chance of cancer and genetic damage Why exactly is radiation dangerous for people and animals? I see a lot of mentions regarding danger of radiation on the Internet but I couldn't find any detailed info on how exactly does it affect cells of an animal in a way that exposes them to a cancer or other health issues. Are different radiation sources (i.e. cosmic radiation or radiation emitted from a nuclear reactor) affect health in different ways? Are some animals more resistant than others and if so, which attributes are responsible for that?
@David White is correct. This is a very broad question. Radiation, or more specifically, electromagnetic radiation, covers a broad range from very low frequency, long wavelength radio waves that pass right through you with no interaction to very high frequency, short wavelength radiation that can cause severe biological damage, and everything else in between. In addition to David White's link, I suggest you Google "interaction of electromagnetic radiation with matter" and check out the Hyperphysics website on the subject. It provides a very good overview of how radiation interacts with matter from a physics standpoint more than a biological standpoint. It covers the interaction of low frequency radio waves, microwaves, infrared, ultraviolet, etc. with matter. The combination of this link and the radiobiology link should give you a good perspective. Hope this helps.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/476463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How is the relative force of the fundamental forces measured? My physics textbook includes the following table: My question is about the fourth row, where it compares the relative strengths of the fundamental interactions. How are these determined? Is the ratio of electromagnetic and gravitational simply the ratio of the force between 2 1kg point masses separated by 1m, and the force between 2 1C point charges separated by 1m? (that was the explanation my teacher gave me) If so, how can this be justified, since the C and kg are just arbitrary units?
Of course anna v's answer is right, here are a few things I would like to add:

* EM force strength is measured from experimental data
* weak force strength is measured from experimental data
* gravitational force strength is not measured, but is only theoretically predicted
* strong force strength is theoretically measured and in experiments too, like with the exotic atoms, like the pionic atom. The pionic atom is an atom where around the proton, the electrons are replaced by pions. Since the pions are made of quarks and antiquarks, they show bosonic characteristics, and thus the nucleus and the pions are not held together by the EM force but by the strong force. This way they can measure the strength of the strong force too. Please see here: https://tel.archives-ouvertes.fr/tel-01674426/document
{ "language": "en", "url": "https://physics.stackexchange.com/questions/476564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
What should be the independent variable in a resistance experiment? I was asked a question by a student today and I have been pondering about it for a while now. In an experiment to measure resistance of a conductor, should we vary voltage across the conductor and measure the current or should we vary current flowing in the conductor and measure voltage across it? Experimentally, which would give us better results? I've been thinking that in either case the drop across the ammeter or voltmeter should be very little (and almost of the same order). So would it make any difference?
In principle it doesn't matter so long as you measure both the voltage and the current accurately (don't just trust that the value of the independent parameter is whatever you set the source to). You ideally want to use Kelvin sense connections to be able to measure the two operating parameters independently. You also usually want to avoid applying a voltage or current that will cause significant self-heating which might alter the properties of the resistor being tested. For low-value resistors (below 1 $\rm \Omega$) it's usually more convenient to use a fixed current source and for high-value resistors (above maybe 10 $\rm M\Omega$) we most often use a fixed voltage source.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/476826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Interpreting questions on Fickian diffusion I am considering the Zimm model for polymer dynamics, and have come across a question: Find an expression for the time it takes for the polymer to diffuse a distance equal to its contour length $L=Nb$, if the drag coefficient for the polymer is $\gamma = N\beta b$, where the polymer consists of $N$ segments of Kuhn length $b$. My thoughts on this question were:

* For Fickian diffusion $\langle R^2 \rangle = 2Dt$
* So if we plug in $D=\frac{k_BT}{\gamma}$ and $R=L$, we should get the right answer?

For some reason, I am not convinced that "diffusing a distance $L$" translates to $\langle R^2 \rangle = \langle L^2 \rangle$. I'm not sure if this should be obvious or not. Unfortunately, I do not have answers to compare with.
I guess it should be that easy; you may swap $L$ with any other length... The only thing that could be a bit fishy for me is that $\sqrt {\langle R^2 \rangle}$ (the root-mean-square displacement) is not necessarily equal to $\langle \vert R \vert \rangle$ (the average diffusion distance). But usually I use them interchangeably.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/476915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Wigner phase space operator correspondence: how to order? According to Gardiner-Zoller (Quantum Noise), operators acting on the density matrix can be mapped via e.g. (I'm taking Wigner space as an example, but the same holds for P and Q) $$a\rho\leftrightarrow\left(\alpha+\frac{1}{2}\frac{\partial}{\partial\alpha^*}\right)W( \alpha,\alpha^*)$$ $$\rho a^\dagger\leftrightarrow\left(\alpha^*+\frac{1}{2}\frac{\partial}{\partial \alpha}\right) W(\alpha,\alpha^*)$$ Below, an example is given using the P-function, from which it is clear that if multiple operators are applied on the left or the right of the density matrix, the same correspondences hold, as long as the operators closest to $\rho$ are applied first (i.e. the phase-space representation most to the right, so closest to $W$). Now my question is: what if there are operators acting on both sides of $\rho$? In the simplest case of $a\rho a^\dagger$ this does not seem to be an issue, because $\left(\alpha+\frac{1}{2}\frac{\partial}{\partial\alpha^*}\right)\left(\alpha^*+\frac{1}{2}\frac{\partial}{\partial \alpha}\right) = \left(\alpha^*+\frac{1}{2}\frac{\partial}{\partial \alpha}\right)\left(\alpha+\frac{1}{2}\frac{\partial}{\partial\alpha^*}\right)$, but it does lead to ambiguity for example for $aa\rho a^\dagger a^\dagger$. I would expect that the proper way of doing this is still from the inside out, alternating operators from the left and the right; also because this way I obtain a result that is real. Is this correct? How to properly see this?
$$a\rho\leftrightarrow\left(\alpha+\frac{1}{2}\frac{\partial}{\partial\alpha^*}\right)W( \alpha,\alpha^*)\equiv D W( \alpha,\alpha^*),$$ $$\rho a^\dagger\leftrightarrow\left(\alpha^*+\frac{1}{2}\frac{\partial}{\partial \alpha}\right) W(\alpha,\alpha^*)\equiv D^* W( \alpha,\alpha^*).$$ You proved $[D,D^*]=0$, and, of course, both D s commute with themselves. Thus there cannot be an ambiguity in $$ DDD^*D^*= D^*D^*DD= D^*DDD^*=... $$ In chiral symmetry, that means the left and right action groups commute--they don't know about each other. Check $$a^n\rho a^{\dagger ~n}\leftrightarrow D^n D^{*~n} W( \alpha,\alpha^*),$$ etc, if you must stick to hermitian/real objects.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/477029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why can I throw a larger stone farther than a smaller stone? Recently I was throwing stones (don't ask me why) when I noticed that there seems to be an optimum weight of stone so that it travels the farthest. If I generate the same amount of force each time (and assuming all other variables like air resistance, angle of projection etc. to be constant) shouldn't a smaller stone be projected with a higher velocity and thus have a higher range? You can try this yourselves. A cricket-ball-sized object goes farther than a small pebble (consider its size to be similar to a coin), and also farther than a basketball-sized object, but that is due to the increased mass. My first thought was that it could be air resistance, but shouldn't a larger body experience more air resistance? (I have doubts whether this is a physics question or more of a biology question.)
Doriano Brogioli has pointed out correctly the important role of air resistance. However, I would like to flesh out one of the details in his answer. If I generate the same amount of force each time... In reality, your muscles can apply more force when they are moving more slowly. This is called the "force-velocity relationship". This is often approximated as a linear law, $$ f = f_0 \left(1- \frac{v}{v_0} \right) \textrm{ for } 0 < v < v_0. $$ When pushing against zero load ($f=0$), your muscles move at velocity $v_0$, and no faster. This is approximately the case for very light objects, so they end up with kinetic energy $T= m v_0^2/2$, as Doriano supposed. Then, when pushing against a heavier load, your muscles cannot move as quickly, and the object gains progressively less kinetic energy.
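Here is a rough, hedged numerical sketch (my addition; the values of $f_0$, $v_0$ and the arm travel $d$ are made-up, order-of-magnitude guesses, and air drag is ignored) combining the linear force-velocity law above with the vacuum projectile range at a 45 degree launch. It illustrates the qualitative point: the launch speed of very light stones saturates near the maximum hand speed $v_0$ instead of growing like $\sqrt{1/m}$, so making the stone lighter stops helping, while very heavy stones launch noticeably slower.

```python
# Sketch: accelerate a stone of mass m over a hand travel d with f(v) = f0*(1 - v/v0),
# then compute the drag-free range at a 45 degree launch angle, R = v^2 / g.
import math

f0, v0, d, g = 300.0, 12.0, 1.0, 9.81  # hypothetical max force (N), max hand speed (m/s), arm travel (m)

def launch_speed(m, steps=100000):
    v, x = 0.0, 0.0
    dx = d / steps
    while x < d:
        f = max(f0 * (1.0 - v / v0), 0.0)      # muscle force drops as the hand speeds up
        v = math.sqrt(v * v + 2.0 * f * dx / m)  # work-energy increment over dx
        x += dx
    return v

for m in [0.02, 0.1, 0.3, 1.0, 3.0]:           # 20 g pebble up to a 3 kg rock
    v = launch_speed(m)
    print(f"m = {m:5.2f} kg: v = {v:5.2f} m/s, range = {v**2 / g:5.1f} m")
```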
{ "language": "en", "url": "https://physics.stackexchange.com/questions/477188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Why are there rings (halos) around street lights? Especially when it's foggy I was in a car that was turned off last night for some time and the windows became foggy via condensation (moisture droplets building up on one side of window). Looking outside, I could see that street lights which were near me had a halo or a ring around them. They would disappear if I wiped the moisture from the window. Why do I see halos around light through a foggy glass? Also, after stepping outside I DID notice a ring around the street light, but it had a larger diameter and it was VERY faint. Why does this light effect occur even without a foggy window? What is going on? Thank you.
The very fine droplets on the window act as a diffraction grating. In principle, a single droplet would produce (very faintly) something called an "Airy's disk" pattern. If they are all the same size, and the light is monochromatic, these will add constructively to make a clear ring. But in reality the droplets are many different sizes, and the light is not monochromatic. Consequently what you see is the sum of many of these patterns summed, each of a different size: this is the halo you see. The faint ring you saw when you stepped out of the car may have a different origin. It may be that you had very small droplets on your glasses (did you step out of an airconditioned car into a humid night?), or it may be there were other objects (small droplets from fog beginning to form?) that were generating this pattern. The smaller the spheres, the larger the ring. The more uniform the spheres, the more it looks like a ring rather than a disk.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/477273", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What is the definition of beam energy in particle physics? For example, the proton beams in the LHC collider have 7 TeV energy. Does this mean that the individual protons in the beam have 7 TeV energy or that the energy of all the protons in the beam add up to 7 TeV?
Collision experiments are done to create particles that can not be studied under normal circumstances. Energy and momentum conservation as well as the famous Einstein equation $E=mc^2$ tell us that a heavier particle can not just "pop out of thin air". But if we let two particles with enough energy collide, they can create a new particle, if the incoming energy is high enough. Therefore the individual particles in the collision beam need to have a high energy.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/477364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is curved spacetime a real thing or just math? I was curious if the curving of spacetime by mass/energy was actually a real thing or is it just a mathematical construct, a way of visualizing the force of gravity and explaining it and that there is not truly a "fabric of spacetime" and this fabric doesn't actually curve. The math simply explains our observations, but isn't literally what is happening. I'm looking for a pure GR/SR answer to this, not asking about newer theories that build on GR. Did Einstein himself believe spacetime and its curvature was a real physical thing or did he know he was just using math and geometry to explain a phenomenon everyone can see/feel?
Spacetime is curved by gravity; this is a very useful model, but this model of curved spacetime is not compatible with quantum mechanics. However, gravity may also be represented in the form of gravitational time dilation in flat, uncurved space. This may be shown easily for the Schwarzschild metric: $$ ds^2 = -(1 - \frac{2GM}{c^2 r}) c^2 dt^2 + \frac{1}{1 - \frac{2GM}{c^2 r} } dr^2 + r^2 (d\Theta^2 + \sin^2 \Theta d\Phi^2)$$ Gravitational time dilation from the point of view of a far-away observer is $$ C = \sqrt{1 - \frac{2GM}{c^2 r}}$$ As you may observe, we can insert the second equation into the first one, and we get: $$ ds^2 = -c^2 (Cdt)^2 + {(\frac {dr}{C})}^2 + r^2 (d\Theta^2 + \sin^2 \Theta d\Phi^2)$$ This is still the Schwarzschild metric of curved spacetime. Now we compare this equation with the Minkowski metric of flat space: $$ ds^2 = - c^2 dt^2 + dr^2 + r^2 (d\Theta^2 + \sin^2 \Theta d\Phi^2)$$ We see that the two equations differ only (twice) by the factor C, which is the gravitational time dilation, and we may conclude three things with respect to the Schwarzschild metric:

* Gravity may be perfectly and completely expressed by gravitational time dilation; both notions are equivalent.
* Gravity may be expressed also in uncurved space as gravitational time dilation.
* Accordingly, instead of by spacetime curvature, the attraction force of gravity may be described as the tendency of particles to maximize their own time dilation.

This model of gravity in flat spacetime does comply with quantum mechanics.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/477699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Current in the inductor at $t=0$ $L_1 = 5H, L_2=0.2H, M=0.5H, R_0=10 Ω$, and $i_g=e^{-10t}-10 A$. I need to find $i_2$. I've started with DE $$i_2R_0+L_2(di_2/dt)+M(di_g/dt)=0$$ and solved it for $i_2$, so $$i_2=0.625e^{-10t}+Ce^{-50t}A,$$ where C is constant. I can't find C because I don't understand how to obtain $i_2(0^+)$. Is it possible to obtain this value? Any help appreciated!
The switch is opened at $t=0$. Before that, a steady current was flowing in the first network, so before $t=0$ the change in current is zero. Thus $\frac{di}{dt} = 0$, and the induced current in the second network is zero at the instant $t=0$, i.e. $i_2(0) = 0$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/477798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Magnetic field at boundary of electromagnet iron core Assume you have an iron core in the interior of the solenoid. It is well known that the strength of the field should increase by a factor of several hundred inside the solenoid as a result of the iron core. However, at the boundary between the iron core and the surrounding air, what happens to the magnetic field strength? Does it instantaneously (with respect to position) drop by a factor of several hundred, or is there a gradual drop (so that the magnetic field immediately surrounding the iron core is stronger than in the air outside)?
At the boundary of air and iron core, the following relations hold: $$\vec {n}\cdot(\vec {B_1}-\vec {B_2})=0,\quad [\vec {n},\vec {H_1}-\vec {H_2}]=0$$ $\vec {n}$ is normal to the surface of the core. The surface-normal induction component is continuous. The tangential component of the magnetic field is also continuous. In practice, in the case of a cylindrical core and with sufficiently tight winding, the magnetic field induction (measured in Tesla) at the ends will be as in a core. And on the lateral surface of the core (adjacent to the winding), the tangential component $\vec {B}$ changes abruptly.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/478340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Griffiths Electrodynamics Problem 9.39: How can $\sin(\theta_T)$ be greater than one? When an electromagnetic wave strikes an interface between two linear media, Snell's law states that $\frac{\sin(\theta_T)}{\sin(\theta_I)} = \frac{n_1}{n_2}$ where $\theta_I$ is the angle of incidence, $\theta_T$ is the angle of transmission, $n_1$ is the index of refraction of the first medium, and $n_2$ is the index of refraction of the second medium. In the case where $n_1 > n_2$, we can then see that $\sin(\theta_T) = \frac{n_1}{n_2} \sin(\theta_I)$ From this we can derive a critical angle $\theta_C$ such that when $\theta_I = \theta_C$ we have $\theta_T = 90 \deg$. Consider, for instance, the case where light travels from water with an index of refraction of $n_1=1.35$ into air with an index of refraction of $n_2=1$. Then we find that $\theta_C=47.8 \deg$. By taking $\theta_I = \theta_C + \varepsilon$ where $\varepsilon$ is some small positive value, we find $\sin(\theta_T) = \frac{1.35}{1} \sin(\theta_C+\varepsilon) > 1$. That is, the sine function is taking a value outside of its range! Normally I would chalk this up to the problem being "out of the bounds" of the mathematical model of Snell's law, but Griffiths uses this fact to derive evanescent waves: The only change is that $\sin(\theta_T) = \frac{n_1}{n_2} \sin(\theta_I)$ is now greater than $1$, and $\cos(\theta_T) = \sqrt{1-\sin^2(\theta_T)} = i\sqrt{\sin^2(\theta_T)-1}$ is imaginary. (Obviously, $\theta_T$ can no longer be interpreted as an angle!) How is it possible that $\cos(\theta_T)$ is imaginary? What does it mean that $\theta_T$ cannot be interpreted as an angle?
Using Euler's formula $$e^{ix}=\cos{x}+i\sin{x}$$ one can write $$\cos{x} = \frac{e^{ix}+e^{-ix}}{2}$$ If $x$ is real, then $\cos{x}$ is real. However if you allow complex numbers $x=u+iv$ then $$\cos{(u+iv)} = \frac{e^{iu-v}+e^{-iu+v}}{2}$$ and you see that the cosine (and sine as well) can be complex valued and greater than one. Once you allow complex numbers you can't interpret the argument as an angle.
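A tiny numerical illustration (my addition) using Python's cmath: solving $\sin\theta_T = 1.2$ over the complex numbers gives a complex "angle" whose cosine is purely imaginary, exactly as Griffiths' formula $\cos\theta_T = i\sqrt{\sin^2\theta_T - 1}$ requires.

```python
# Sketch: a complex "transmission angle" when sin(theta_T) > 1.
import cmath, math

sinT = 1.2
thetaT = cmath.asin(sinT)        # complex-valued arcsine
cosT = cmath.cos(thetaT)

print(thetaT)                        # real part pi/2, nonzero imaginary part
print(cosT)                          # ~0 + 0.663j (up to the sign fixed by the branch cut)
print(1j * math.sqrt(sinT**2 - 1))   # i*sqrt(sin^2 - 1) = 0.663j, Griffiths' expression
```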
{ "language": "en", "url": "https://physics.stackexchange.com/questions/478480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Shape of orbitals in atoms with multiple electrons I found this statement when browsing the Wikipedia article for atomic orbitals: "Orbitals of multi-electron atoms are qualitatively similar to those of hydrogen." Is this true? Googling around I could only find this article, where on page 50 it seems to address how to obtain the wave function of atoms with multiple electrons, but I don't have the necessary background to understand if it proves the statement or not. Please include academic sources or a brief proof if possible. I find it surprising that adding electrons wouldn't change the shape of the orbitals substantially, but that's what is implied when I've studied chemistry.
An approximation that seems to work well for the multi-electron case is the Hartree-Fock method. In Hartree-Fock, we assume the mean-field approximation. Each electron feels the repulsion from other electrons based on their average, not instantaneous, positions. (This assumption prevents Hartree-Fock from predicting van der Waals forces.) We thus modify the hydrogen Hamiltonian by introducing two new operators. One is the average Coulombic repulsion between electrons, and the other is the exchange interaction. However, because we're using the average position of the electrons, then for our spherical atom these operators don't have an angular dependence. Thus the spherical harmonics are still separable as in the hydrogen case, so roughly the shape of the orbitals must remain the same. The only part that can change is the radial part of the wavefunction. Doing the calculations, you'll see that the radial part of the wavefunctions are squeezed or stretched a little bit due to Coulombic repulsion and the exchange interaction between electrons, and the increased Coulombic attraction to the nucleus. But as Wikipedia says, qualitatively they don't change much until you introduce multiple atoms. Without the mean-field approximation, I suppose even the angular shape would change, but that's beyond me.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/478583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can you change the wavelength of light keeping frequency constant and can you do the opposite as well? Can you change the wavelength of light keeping frequency constant and can you do the opposite as well? I understood the basics but please don't hesitate to go deeper into the concept. Also, If you happened to have an elegant explanation please drop it here if you can.
Wavelength times frequency gives the speed of a wave: $\lambda \nu=v$. The speed of light in a vacuum is a constant, but light can move more slowly in media (for example in water). For a photon of fixed energy, the frequency is fixed, so the wavelength of light should change when it goes into a medium in which the speed of light is slower than in vacuum. In fact, the amount that the wavelength changes is related to the index of refraction $n$ of the medium. If the wavelength in vacuum is $\lambda_0$, then the wavelength in the medium will be $\lambda = \lambda_0/n$.
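As a quick numerical illustration (a sketch; the refractive index of water and the sodium-D wavelength below are just typical assumed values):

```python
c = 2.998e8              # speed of light in vacuum, m/s
n_water = 1.33           # assumed refractive index of water
lam_vacuum = 589e-9      # assumed example wavelength (sodium D line), m

f = c / lam_vacuum                 # frequency is fixed by the source (~5.1e14 Hz)
lam_water = lam_vacuum / n_water   # wavelength shrinks to ~443 nm in water
v_water = f * lam_water            # wave speed drops to c/n ~ 2.25e8 m/s
print(f, lam_water, v_water)
```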
{ "language": "en", "url": "https://physics.stackexchange.com/questions/478686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Pauli Exclusion and Black Holes The Pauli exclusion principle states that two identical electrons cannot be in the same state, where the state includes a spatial component. I have heard that, in order to avoid being in the same state, in a white dwarf the de Broglie wavelength of the electrons becomes shorter and shorter, meaning that they have a higher and higher momentum/energy. Eventually, when the gravitational pressure is too high, they form neutron stars, since neutrons have smaller de Broglie wavelengths due to their higher mass. My question is: by applying more and more pressure, can we confine more and more identical neutrons/other fermions in an arbitrarily small space, eventually forming a black hole? Or must fermions, at some point, be converted into bosons?
My question is, by applying more and more pressure, can we confine more and more identical neutrons/other fermions in an arbitrarily small space, eventually forming a black hole? Or must fermions, at some point, be converted into bosons? Black holes are classical entities. Fermions and bosons are quantum mechanical entities. Questions that mix the two frames will not have a definite answer until/when gravity is quantized. In the Standard Model, fermions can couple up and become bosons; for example, pions are made up of two fermions, a quark and an antiquark. So this could be a hypothesis in some specific quantization model. At the moment, cosmological models work with an effective quantization of gravity, which replaces the point singularities with a fuzzy quantum mechanical region. In such a region it is not forbidden to keep the logic of higher and higher energy for the energy carriers (in your question, fermions), because there is no limit to how many energy levels there can be in the fuzzy quantum mechanical region (as, for example, at the beginning of the Big Bang), and thus the Pauli exclusion principle can still be fulfilled.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/478906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why are hydrogen lines less visible in the Sun's spectrum than in supernova clouds? Supernova clouds are very colorful, and if I trust the documentaries I watched, the colors are due to excitation of elements, as in fireworks. Since the Sun is mostly made of hydrogen, I suppose those lines should be very apparent, but they are not so much; its light looks like blackbody radiation. What contributes to the rest of the spectrum, up to the point that it masks the hydrogen lines?
The difference is that in nebulae you have a black background for your hydrogen emission lines, and you can observe this easily with very simple imaging techniques (typically, 3.5-8 nm pass-band filters do just fine). On the Sun you have a "white" background - black body emission from deep layers of the Sun. Hydrogen lines are now visible as absorption lines - darker than the background. But in order to see them with enough contrast on this white background you need much more sophisticated equipment - ~0.1 nm pass-band filters - otherwise there is too little contrast to see anything. So the answer is simple: hydrogen emission lines on a black background have much higher contrast than absorption lines on a white background. Hence emission lines in nebulae are much easier to observe and photograph.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/479047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Why does the wavefunction become exponentially smaller during quantum tunneling? I am interested in quantum tunneling and I am wondering why the wavefunction of a particle becomes smaller, so that there is only a slight possibility of finding it on the other side of a big energy barrier. Is there any interaction? Otherwise, how can the wavefunction know there is a barrier?
Wavefunctions are solutions of quantum mechanical differential equations, with given boundary conditions for the problem at hand, i.e. tunneling: Is there any interaction? Otherwise, how can the wavefunction know there is a barrier? The boundary condition of a barrier defines the wavefunction by construction. It has been found that experiments validate this model and its predictions.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/479161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Does kinetic energy warp spacetime? My interpretation of GR leads me to think that energy (namely kinetic) also adds to the curvature of space-time. This has raised a thought experiment: if a $10000$ kg ship closely passed a $1$ kg glass ball at $0.8c$ relative to the glass ball, would the glass ball be moved in the direction of the ship for the tiny fraction of a second that it is passing by, more so than if the ship popped in and out of existence at rest, for the same time period, relative to the glass ball?
Kinetic energy is part of the time-time component of the stress energy tensor, so by the Einstein field equations it does influence the curvature. However, the relationship is too complicated to justify a straightforward assertion that it adds to the curvature. First, the curvature is a rank 4 tensor, not a scalar. So it has many independent components, and increasing the KE may impact many of those components, often in opposite directions. So while it certainly changes the curvature, what would it even mean to simply "add to the curvature"? Second, an increase in the KE is always accompanied by a change in momentum also. The momentum will alter one or more of the time-space components of the stress energy tensor. Sometimes the momentum changes will roughly cancel the energy-based curvature changes, leading to minimal overall change in curvature.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/479299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 2, "answer_id": 1 }
How did Maxwell figure out the speed of light? The Wiki article is about 2 graduate years of physics beyond my understanding. What is a good high-school rendition of his thought process, regarding his use of the "distributed capacitance and inductance of the vacuum" to reach his conclusion?
The speed of light was accurately measured by Foucault in 1862. See http://www.speed-light.info/measurement.htm for a historical overview of determinations of the speed of light. Maxwell unified electricity and magnetism, and one of the consequences of his equations was that electromagnetic fields could propagate as waves at the speed $1/\sqrt{\mu_0 \epsilon_0}$. Since this value agreed with the measured speed of light, he theorized that light is an electromagnetic wave. Hertz later confirmed the existence of such waves.
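As a quick check, the value of $1/\sqrt{\mu_0 \epsilon_0}$ can be computed from the vacuum constants (a sketch using standard SI values):

```python
import math

mu0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m (classical defined value)
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
c = 1 / math.sqrt(mu0 * eps0)
print(c)                   # ~2.998e8 m/s, matching the measured speed of light
```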
{ "language": "en", "url": "https://physics.stackexchange.com/questions/479628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is light affected by space warping or time warping? Gravity, according to the General Theory of Relativity, is simply the curvature of space-time. Objects in the universe move through space-time along geodesic paths. Also, the most interesting part is that it is impossible to curve/warp space without having an effect on time. They are intricately connected. Space warps can be notably seen near black holes (gravitational lensing), and time dilations are so significant that even GPS systems on Earth have to adjust for them. But my main concern is the difference in the way objects and light behave when subjected to curved space-time. Projectiles follow parabolic paths in uniform gravitational fields. This can be shown using Newton's law of gravitation, but time dilation can also be used to prove this. But doesn't curved space also need to be accounted for? Why is time dilation the only significant factor here? And what about light? I know that light bends when subject to curved space-time, but which part of space-time curvature is more responsible for this phenomenon? I guess that since light travels at the maximum speed limit, time is effectively not running for light from our frame of reference, so light shouldn't be affected by time dilations. Does this mean that light is only affected by space curvature? Any help to rid me of these confusions is greatly appreciated :)
Light is affected by both aspects of gravity; you can account for both time dilation and spatial curvature, and together they produce the Shapiro effect. When light passes next to the Sun, its speed measured from Earth will be less than c because: * *it moves in curved spacetime *clocks near the Sun tick slower (compared to clocks on Earth) Please see here: https://en.wikipedia.org/wiki/Shapiro_time_delay
{ "language": "en", "url": "https://physics.stackexchange.com/questions/479720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Factorising a 4D Dirac delta function in a rest frame I'm working through a QFT problem and at one stage in the solutions we have this step: $$\delta^{(4)}(p - q_1 - q_2) = \delta(E_1 +E_2 - M)\delta^{(3)}(\bf{q_1} - \bf{q_2}).$$ We are working in the rest frame of a meson with mass $M$ and the process is a decay to a nucleon anti-nucleon pair. I cannot quite see why we are allowed to split the delta function this way. Can anyone break this down further for me?
Always $$ \delta^4 (k) = \delta^1(k_0) \delta^1(k_1) \delta^1(k_2) \delta^1(k_3) $$ If the momenta in your question are on-shell, then $\vec p=0$ because of the frame chosen, and $p^0=E_{p}=M$, $q_j^0=E_j$, by the "on-shellness". Putting everything together you get your equality.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/479857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
What will happen to atmospheres of very large planets? Our Earth can hold its atmosphere whereas Mars cannot. So the atmosphere-retention mass must be between the masses of Mars and Earth, but if mass is to be considered, then can an iron ball having the same mass as the Earth hold an atmosphere of its own? Also, since larger planets can hold larger atmospheres, if a planet like Jupiter were like the Earth in composition and the atmosphere contained gases in the same ratio as here on Earth, would it result in the atmosphere being modified to suit the conditions of higher pressure and gravity of the planet? What kind of atmospheric changes could be expected? Can a very large planet result in the liquefaction of gases (I know that planet size is not the prerequisite for the presence of liquefied gases), and if it does, can it be due to gravity alone?
One reason Mars lost its atmosphere is that it lost its magnetic field. Without a magnetic field to protect it, the upper atmosphere is gradually blown away by the charged particles in the solar wind.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/479985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why aren't satellites disintegrated even though they orbit earth within earth's Roche Limits? I was wondering about the Roche limit and its effects on satellites. Why aren't artificial satellites ripped apart by gravitational tidal forces of the earth? I think it's due to the satellites being stronger than rocks? Is this true? Also, is the Roche limit just a line (very narrow band) around the planet or is it a range (broad cross sectional area) of distance around the planet?
When I was a kid I also wondered why artificial satellites within the Roche Limit were not pulled apart by tidal forces. When I was a kid I also wondered: if any body within the Roche Limit would be pulled apart by tidal forces, and since the surface of the Earth is deep within the Roche Limit, why aren't all objects on the surface of the Earth - including Human bodies - pulled apart by tidal forces? Since my body was not being pulled apart by tidal forces, the statement that all bodies within the Roche Limit are pulled apart by tidal forces must not be correct. Therefore the simple statement that all bodies within the Roche Limit are pulled apart by tidal forces must be an oversimplification as stated. But since such statements were made in non-fiction sources, it seemed probable that they were not totally false. Therefore I expected that sometime in the future I would read a fuller and more complex account of the Roche Limit that would explain the seeming paradoxes. And I did. Eventually I learned that the Roche Limit was not a single absolute distance but varied with the sizes, masses, and densities of the larger and the smaller objects. I also learned that the Roche Limit only applies to objects that are held together only by their internal gravitational attraction, and not to objects like artificial satellites or Human bodies. Wondering why the Roche Limit didn't apply to my Human body was an example of using reductio ad absurdum to show that a statement was an oversimplification of a more complex situation.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/480190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39", "answer_count": 6, "answer_id": 4 }
Why can't a low solidity wind turbine be used in high torque applications through gearing? From my understanding, low solidity wind turbines (such as the three blade type) are more efficient due to a higher tip speed ratio, giving a higher coefficient of performance. However, they are not well suited to high torque applications. Whereas, high solidity turbines have a lower tip speed ratio hence they are less efficient, but they produce more torque, making them better suited for applications like pumping water. This is what the text books tell me: low solidity for electricity production vs high solidity for pumping water. My question; why not just use high speed, low solidity turbines and gear them to produce a low speed, high torque output? This makes use of the more efficient turbine design and allows it to be applied to a high torque requirement. There are other factors in choosing a turbine, such as start up speeds but surely in terms of power output gearing can be used to match a turbine to a load? Thanks!
High speed turbines have higher blade stresses, more stringent balancing requirements, and more design issues with the rotordynamics of flexible blades. By comparison, a low speed turbine that works can be cobbled together with any sort of crude technology - it might not be very efficient, but it can do a useful amount of work and it won't fall apart in a high wind! People have been making useful low-speed turbines for literally thousands of years - they called them "windmills." From an engineering point of view, you don't fix designs that aren't broken! It might be worth commenting that large wind turbines (in the MW power range) are usually very low rotation speed and high torque devices (e.g. 5 or 10 RPM) compared with a typical small water pump turbine - but of course the blade tip speeds are high because the tip radius is very big.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/480315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How are atoms supported on each other in a material? Suppose we have a ball made up of iron. There are a "lot" of atoms in the ball. My question is "how" are the atoms supported on top of each other? And, is it due to the repulsion of electrons the atoms maintain distance between themselves?
Atoms in a solid are held in position by bonds which form between the atoms. Those bonds consist of either 1) electrons which are simultaneously shared by two atoms, yielding a covalent bond, 2) electrostatic forces of attraction which arise when one atom donates an electron to another, yielding an ionic bond, or 3) the electrostatic forces of attraction which arise when all the atoms in the solid share electrons with all the other atoms in that solid, giving rise to a metallic bond. The electrons that are involved in these bonding processes are generally in the outermost orbitals surrounding the atoms, which are referred to as the valence electrons.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/480397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Physical intuition behind torque converter A torque converter (also here) is a device used in some cars. It uses several "fans" coupled through a liquid (transmission fluid) in order to perform the function of a clutch, but more importantly it acts as a liquid gear in the sense that it multiplies the torque going from the engine to the wheels. Is there an intuitive way to explain what is happening in the liquid? In particular, is it possible to explain the torque multiplication effect without resorting to numerical analysis?
To me, this is by far the most understandable video that explains this mechanism: https://www.youtube.com/watch?v=bRcDvCj_JPs&feature=emb_rel_pause To summarize, the torque multiplication is the result of the reactor, which helps the pump increase the oil pressure. When the difference in speed between the pump and the turbine is high, the reactor redirects the oil flow in the same direction as the pump. The energy unused by the turbine is thus given back to the pump, which increases the torque. When the turbine speed and the pump speed are close, the reactor starts spinning, and the torque multiplication ceases.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/480479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Which side is convex in my plano convex lens? So, I am building an optical system and have got a Thorlabs plano-convex lens (part # LA1172-C) with a 400mm focal length. This makes the convex side of the lens so flat that it is difficult for me to discern (by naked eye) which side is convex on the lens. I need to know which side is convex in order to build my optical system. Is there a good technique to figure out the convex side of my plano convex lens?
If you are not concerned that it would damage the lens, you can put the lens on a smooth and flat surface and try pushing from the edges. I suppose you should be able to detect a small movement (a slight rocking) if the convex side is at the bottom. As for an optical solution? I am not sure. As plano-convex lenses are non-ideal, you could try setting up some equations and equipment to see what the focal points are and compare them to find which side is planar. But using a simple mechanical setup is easier than optical experiments and equations.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/480636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Why does the potential difference across a type of parallel circuit not act like a potential divider? Image credit (Q3) In this attached circuit, when $R_1=0\Omega$, I am failing to understand how the two cells affect the potential difference across the central resistor R3. I understand that potential difference is constant across different strands in parallel, and so 12 volts should be distributed between R2 and R3 as a potential divider, meaning 9V should appear across R3 from V2. Likewise, 10V from V1 will appear across R3, and so the total potential difference across R3 should be 19V. However, according to the answer sheet, the potential difference across R3 is 10V. Is my misunderstanding here conceptual, or something more basic, and why does the potential difference act in this way?
First, when $R_1 = 0\Omega$, resistor $R_3$ is in parallel with the source $V_1$ (a zero ohm resistor is identical to an ideal wire). Parallel connected circuit elements have identical voltage across and so $V_{R_3} = V_1 = 10\,\mbox{V}$. But, as an exercise, you should work out the general solution for $V_{R_3}$ when $R_1 \gt 0\Omega$ and then look at that solution as $R_1 \rightarrow 0\Omega$. In fact, it would be good form for you to derive that solution and post your work as an answer to your own question.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/480722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Liouville's integrability theorem: action-angle variables For classical dynamical systems, let $I_{\alpha}$ stand for independent constants of motion which commute with each other. 'Remark 11.12' on pg 443 of Fasano-Marmi's 'Analytical Mechanics' suggest that $I_{\alpha}$s can be taken as canonical coordinates. For a conservative system, the Hamiltonian $H$ is a constant of motion. Let's refer to $H$ as $I_1$. Then $I_1$ becomes one of the canonical momenta. Hence $H$ can be written as $H=I_1$. Application of Hamilton's eqns. of motion implies that only one angle variable $\phi_1$ (corresponding to $I_1$) evolves linearly in time while all others stay constant because $$ \dot{\phi_i}=\frac{\partial H}{\partial I_i} = 0 ~~~~~~~~~~~~~~~~~\mathrm{for~}i\neq1. $$ So, is it true that for every Liouville integrable (described here) and conservative system (where Hamiltonian does not depend on time explicitly), Hamiltonian can be written as a function of only one action variable $I_1$ and only one angle variable (corresponding to $I_1$) evolves linearly in time, whereas others stay constant?
* *Given $n$ functionally independent, Poisson-commuting, globally defined functions $(I_1, \ldots, I_n)$, so that the Hamiltonian $H$ is a function of $(I_1, \ldots, I_n)$ with $\mathrm{d}H\neq 0$, there certainly exist locally defined coordinate transformations: $$ (I_1, I_2,\ldots, I_n)\qquad \longrightarrow \qquad (I^{\prime}_1\!\equiv\!H,I^{\prime}_2, \ldots, I^{\prime}_n). \tag{*}$$ However, without further assumptions, it is not clear whether such globally defined coordinate transformation exists. *Moreover, if $(\phi^1,\ldots, \phi^n, I_1, \ldots, I_n)$ are angle-action (AA) variables with a constant (=$I$-independent) period$^1$ matrix $\Pi^{k}_{\ell}$ for the angle variables $(\phi^1,\ldots, \phi^n)$, a coordinate transformation (*) may make the corresponding period matrix [for the new angle variables $(\phi^{\prime 1},\ldots, \phi^{\prime n})$] dependent on the new $(I^{\prime}_1, \ldots, I^{\prime}_n)$ variables. -- $^1$ For the $n$-torus.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/480959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Information content in black holes The Bekenstein-Hawking formula for the entropy of a black hole tells us that the information content in a black hole is proportional to its area, which is in fact proportional to the mass$^2$ of the black hole. The information content before the formation of the black hole can be different, and has nothing to do with the mass. Is there any kind of information loss during the formation, so that we get two black holes with an equal amount of information? Is the information due to the fields lost in the process? So we get just the bare information which comes from the mass.
Information Content in Black Holes Your question is essentially (if I interpret it right): can stars with differing sets of initial detailed information, and thus differing information entropy, converge to identical black holes? The short answer is yes, provided the no-hair hypothesis is correct. The longer answer has to take account of the measurement process applied to a black hole. By definition, measurements cannot be performed by an external observer on elements within the interior of a black hole, beyond the event horizon. We thus do our best to characterise the black hole by partitioning the event horizon into elements, each of minimal area (following Loop Quantum Gravity) proportional to the square of the Planck length, $l_P^2$, and we let N be the total finite number of partitions. The 'no hair' hypothesis states that a black hole can be completely characterised by its classical spin, mass and total electric charge. Thus there is no preferred location on the event horizon, so each partition element must have the same weighting. The von Neumann 'entanglement' entropy of this partition is thus given by $-\sum\limits_{k=1}^N \frac{1}{N}\log\left(\frac{1}{N}\right) = \log N = \log \frac{S}{l_P^2} = \log \frac{c^3 S}{\hbar G}$ with S the surface area of the black hole, which is similar to the Bekenstein-Hawking formula. This relates to classical relativity (ignoring the cosmological constant) via the macro-level Clausius definition of entropy E, $\oint \frac{\delta Q}{T} = \Delta E$, where, integrated over a Carnot cycle, a change in the energy tensor results in a change in entropy corresponding to a change in space-time curvature as measured by the Ricci tensor.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/481059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Does light take the path of least time because it travels in straight lines or vice versa? My question is which of these two feats is a consequence of the other? Light travels in straight lines, mostly. Does it do that as a result of Fermat's principle of least time? and if so, is there a reason as to why it follows the path of least time? or is this another "that's the way the universe works" question? And by reason I mean a physical explanation not mathematical deduction. Or is it the other way around? meaning light taking the path of least time is just an obvious manifestation of the fact that it goes in straight lines?
Light travels in straight lines because it takes the path of least time. The reason it takes the path of least time is because this is the path where the phase, which is proportional to the time, is a minimum. Since it is a minimum, its derivative is zero with respect to small changes in the path. Thus, decomposing the light into many coherent wavelets, each with amplitude $$ A \sim e^{i (\vec{k}\cdot \vec{x} - \omega t)}$$ the path of minimum phase, i.e. the path of least time, is the path where the wavelets all add up in phase because the derivative there (of the phase) is zero. Thus, for this path, all the wavelets add up constructively, and we have a nice light ray. Away from this path, the wavelets add up destructively because all the phases are different, and we get zero. From Feynman QED, pp. 38-45
{ "language": "en", "url": "https://physics.stackexchange.com/questions/481156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Confused concept of " force" in ordinary language versus precise concepts in physics I'd like to be able ( for philosophical purposes) to illustrate this general idea : " science makes clear and precise concepts that are confused in ordinary/prescientific thought". I think that a good example could be the concept of force. I'd like to know in how many ways, in physics, the following sentence could be completed: force should not be confused with ____________ In other words, I'd like to know what are the different cases in which a physicist could say that the word " force" is not used correctly in ordinary language. Also, are there common physics fallacies that can be refuted using precise distinctions relative to force and other physical quantities? Thanks for your help.
Best example I can think of is from the original Star Wars movie, in which Darth Vader intones: "Never underestimate the power of THE FORCE". What he should have said (to make the units come out right) was: "Never underestimate the power of THE FORCE times distance divided by time".
{ "language": "en", "url": "https://physics.stackexchange.com/questions/481271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is motion smooth? It's obvious that for every particle the motion is smooth, i.e. it cannot undergo a sudden finite change in its position in an infinitesimal time. Similarly, a particle's velocity cannot change instantaneously (infinite acceleration can't happen, intuitively). Does this pattern apply to higher time derivatives of position, like jerk? If yes, then up to which higher derivative? The 10th? The 100th? Infinitely many?
In order for jerk to be non-zero, the acceleration $a=\frac{dv}{dt}$ must be time-dependent: $$a=\frac{dv}{dt}=f(t)\tag{1}$$ That's because the derivative of any constant, no matter how large, is always zero. But if $(1)$ applies, the jerk becomes: $$j=\frac{da}{dt}=f'(t) \neq 0$$ If we take a very simple case, where: $$a=a_0+a_1t$$ then: $$j=(a_0+a_1t)'=a_1\tag{2}$$ And all even higher derivatives become zero. $(2)$ also shows that $a$ could be very large (large $a_0$) and yet $j$ might be very small (small $a_1$)
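A quick symbolic check of this (a sketch using SymPy; the linear-in-time acceleration is just the illustrative case from above):

```python
import sympy as sp

t, a0, a1 = sp.symbols('t a0 a1')
a = a0 + a1*t             # acceleration linear in time, as in the example above
j = sp.diff(a, t)         # jerk: a1
snap = sp.diff(a, t, 2)   # next derivative: 0, and so are all higher ones
print(j, snap)            # a1 0
```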
{ "language": "en", "url": "https://physics.stackexchange.com/questions/481535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why haven't we yet tried accelerating a space station with people inside to near light speed? Is that something we could do if we used ion or nuclear thrusters? Wouldn't people in the station reach 0.99993 of the speed of light in just 5 years accelerating at 1g, and effectively travel into the future by 83.7 years? That would be a great experiment and a very effective way to show relativity theory in action. I mean, the people inside the station would have effectively traveled into the future; how cool is that? Why hasn't it been done yet?
I'm no physicist, but, just to add to the list of insurmountable problems with this idea, I've always thought the hardest problem was the "air resistance" in space. The density of interstellar space is about 1 atom per cubic centimeter. If your spaceship is 1 meter cubed, and travels at c for 1 second, you have travelled 300,000 kilometers, encountering 300 trillion atoms. When you are moving at relativistic speeds, each proton you run into is delivering 0.003 joules of energy into you. For the above distance, that's 900 GJ. 100 seconds in, and you have experienced pushback equivalent to a nuclear bomb. Things are a little bit better in the intergalactic medium, where the density is 1 atom per cubic meter, a million times less than in regular interstellar space. That means 900 MJ per second of travel. That's 1 ton of TNT every 5 seconds. Whew, much better! I'm not even taking into account the possibility that fusion will be undergone for many of these atoms on the surface of your spaceship. Good luck finding a material that can withstand that. I'm super amateur so I may be miscalculating here, please correct me if I am!
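The time-dilation figures quoted in the question follow from the standard constant-proper-acceleration ("relativistic rocket") formulas; here is a rough numerical sketch, assuming 1 g ≈ 9.81 m/s² and a year of about 3.156×10⁷ s:

```python
import math

g = 9.81          # proper acceleration, m/s^2
c = 2.998e8       # speed of light, m/s
year = 3.156e7    # seconds in a year
tau = 5 * year    # proper time experienced on board

phi = g * tau / c                   # rapidity
v = c * math.tanh(phi)              # final speed, ~0.9999c
t_earth = (c / g) * math.sinh(phi)  # elapsed Earth time, ~84 years
print(v / c, t_earth / year)
```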
{ "language": "en", "url": "https://physics.stackexchange.com/questions/481634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 2 }
Does pair production happen even when the photon is around a neutron? In order for a photon to decay into an $e^+ e^-$ pair, it must have at least $E_{\gamma}=1.022$ MeV and must be near a nucleus in order to satisfy the conservation of energy-momentum. But would this happen even if the photon is near a neutron and not necessarily a nucleus? Does the fact that the nucleus is charged have anything to do with this decay? What acts upon the photon to induce the interaction?
Quantum mechanics says that everything that is not forbidden is compulsory. Any process that doesn't violate a conservation law will happen, with some rate or cross-section. However, this general principle doesn't tell you what the rate is. For example, it's theoretically possible for 124Te to decay into two 62Ni nuclei plus four electrons and four antineutrinos, but to predict the (very small) rate, you need to know the relevant nuclear physics. In your example, the process probably would go at some rate determined by electromagnetic interactions, because the neutron has a magnetic field. But the rate would presumably be small because the magnetic field of a dipole falls off like $1/r^3$, and magnetic effects are usually down by $\sim v/c$ compared to electric effects.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/481916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 2 }
Can someone explain how magnetic potential energy can exist even though the field is non-conservative? $U=-B\cdot \mu$ is defined to be the magnetic potential energy; I saw this in my lecture notes, but we had already talked about the fact that since the work done to move a charge there is path-dependent, there cannot be a unique potential at a point in a magnetic field. So there is no use in defining a magnetic potential energy. The only thing I have in my notes is that it shows that for a current loop, it has the most potential energy when the magnetic moment is perpendicular to the field, and has 0 potential energy when it is parallel. I can sort of justify this by saying it makes sense that the potential should be a maximum in an orientation where there is the most restoring torque on the loop and minimum where there is none. But what does a potential mean in a non-conservative field?
The problem you're having is that you're confusing a field's scalar "potential" with potential energy. The concepts are related, but in a particular way. Say I have a vector field $\mathbf{V}(\mathbf{x})$. There is a mathematical theorem that says I can break this field down into two parts $$\mathbf{V}(\mathbf{x}) = -\nabla \Phi + \nabla\times\mathbf{A},$$ where $\Phi$ is something we call $\mathbf{V}$'s scalar potential and $\mathbf{A}$ is its vector potential. The connection to potential energy comes in as follows: if the force on some particle is of the form $$\mathbf{F} = a \mathbf{V}$$ and $\mathbf{A} = 0$, then the potential energy of that particle will be $U = a\Phi$. So, in the case you're dealing with you don't have a force of the needed form, so it's not a problem. Granted, we still have the statement, "The magnetic field can do no work," but showing how that works out in the case of a magnetic dipole like this is subtle. In fact, the force and torque on a magnetic dipole are \begin{align} \mathbf{F} & = \left(\mathbf{\mu}\cdot\nabla\right)\mathbf{B} \text{ and} \\ \mathbf{\tau} & = \mathbf{\mu}\times \mathbf{B}. \end{align} I'm less certain here, but I'm pretty sure you'll find that the combination of those two makes it impossible to harvest infinite energy from any static magnetic field.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/482029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Flight Time of Neutrinos between Emission & Absorption At a rough estimate, how long could a neutrino travel before striking a particle which would absorb it, and are there any neutrinos from the early days of the universe still wandering about?
This estimate of the mean free path is more than a light-year of lead! A fairly common qualitative statement in physics texts is that the mean free path of a neutrino is about a light-year of lead. Griffiths makes the statement "a neutrino of moderate energy could easily penetrate a thousand light-years(!) of lead." This cross section can also be used to estimate the number of events which can be expected in a detector of a given size. The cosmic neutrino background is still around, and is estimated theoretically: The CNB is a relic of the big bang; while the cosmic microwave background radiation (CMB) dates from when the universe was 379,000 years old, the CNB decoupled (separated) from matter when the universe was just one second old. It is estimated that today, the CNB has a temperature of roughly 1.95 K. This low energy is not detectable in the laboratory, due to the weak interaction of the neutrino, but there is indirect evidence that the estimates are true.
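A rough order-of-magnitude sketch of that mean-free-path figure (the cross-section below is an assumed, typical weak-interaction value for an MeV-scale neutrino, so the result is only indicative):

```python
rho = 11340.0            # density of lead, kg/m^3
m_nucleon = 1.67e-27     # nucleon mass, kg
n = rho / m_nucleon      # ~6.8e30 nucleons per m^3

sigma = 1e-47            # assumed cross-section per nucleon, m^2 (~1e-43 cm^2)
mfp = 1.0 / (n * sigma)  # mean free path, m

light_year = 9.46e15     # m
print(mfp / light_year)  # on the order of a light-year of lead
```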
{ "language": "en", "url": "https://physics.stackexchange.com/questions/482289", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If an object is travelling near light speed, would its actions seem to be in slow motion? Hypothetically, if we were observing a clock travelling near light speed relative to us, we would see the clock ticking at a much slower rate than ours. If that is true, then would all processes that are at rest relative to the clock seem to be slower too? For example, if the clock were to explode, would we observe the explosion at a slower speed? If my question doesn't make sense then please ask for clarification; I'm having trouble putting my thoughts into words.
From our point of view (our reference frame) - we, who observe a clock traveling near light speed relative to us - all events are much slower, so the clock explosion is much slower, too. It's a fact. (The only problem is how to observe this wonderful slow clock explosion in an object passing us at such an enormous speed. :-))
{ "language": "en", "url": "https://physics.stackexchange.com/questions/482427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Can a wire having a $610$-$670$ THz (frequency of blue light) AC frequency supply, generate blue light? We know that when we give alternating current across a wire then it will generate an electromagnetic wave which propagates outward. But if we have a supply which can generate 610 to 670 terahertz of alternating current supply then does the wire generate blue light?
I second Emilio Pisanty's point: the power supply you are envisioning is a light source. Now the question that remains is: can you propagate this light through a wire, just like you would do with a regular low-frequency electric signal? To get a hint of the answer, look at how people use wires to transport high-frequency signals, in the many-MHz up to the multi-GHz range. A single wire doesn't work, because it has a tendency to radiate all the power you feed it into the air as free electromagnetic waves. The trick is to use two wires carrying opposite currents. You can think of them as one being the signal and the other being the return wire, but their roles could be symmetric. If you keep them close enough, most of the electromagnetic field will be confined between them, and you will be able to transmit the power without too many losses. You can further reduce the losses by twisting the wires together. At the highest frequencies, you would get the best results by putting one wire inside the other which, shaped like a tube, functions like a shield. This is called a coaxial cable, and some of them are good up to tens of GHz. The thing that is not so intuitive is that, while the metal wires carry the current, the actual power is carried by the electromagnetic field that propagates between the wires. The main role of the metal wires is thus to guide the electromagnetic waves and, for this reason, the high-frequency cables are considered to be waveguides. Could you adapt this waveguide technique to the propagation of light? The answer is yes; some people have indeed built nanosized coaxial cables for this very purpose.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/482541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 6, "answer_id": 4 }
Does rotation happen throughout the whole x axis of an object simultaneously? For example, if I draw a line on the side of a pencil top to bottom, then snap one end of it as in launching it due to the pressure of my fingers. Anyways, if I record the pencil launch in slow motion (perhaps it’s my phone that has to do with it) but it will focus on where the line was, and it appears that only some of the actual line is there, or out of focus. So that leads to the question, does rotation happen simultaneously down the pencil axis? Maybe I’m completely clueless and I’m missing something but figured I’d ask.
Your question is a bit difficult to understand, but what you seem to be saying is that if you launch your pencil like a dart and give it some spin as you launch it, will the rotation manifest itself instantaneously along the whole length of the pencil? No, it won't. No signal can travel faster than light, so the rotation imparted to the middle of the pencil by your fingers travels to the ends of the pencil at a finite speed (at most the speed of light, 300,000 km or 186,000 miles per second; in practice a mechanical twist propagates at roughly the speed of sound in the wood). But as your pencil is perhaps 10" long, the delay is far too small to notice, and the rotation will appear to be simultaneous along the entire length of the pencil.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/482788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Difference continuous - discrete symmetry I am trying to understand the difference between the two types of symmetries. Wikipedia says that * *Translation in time : $t \rightarrow t + a$ is a $\textbf{continuous}$ symmetry, for any real $t,a$ but *Time reversal: $t \rightarrow -t$ is a $\textbf{discrete}$ symmetry. But if we choose $a=- 2t$ we get the same transformation - can somebody explain to me why this is not a contradiction?
In a nutshell, $a$ is an arbitrary but fixed 1-parameter, while $t$ is a running time coordinate. One cannot consistently put a fixed parameter equal to a running coordinate. Phrased differently, $a$ is here not allowed to depend on $t$. In particular, the 1-parameter family $(\mathbb{R}\ni t\mapsto t+a\in\mathbb{R})_{a\in\mathbb{R}}$ of time translation maps is a continuous deformation of the identity map $\mathbb{R}\ni t\mapsto t\in\mathbb{R}$. In contrast, the time reversal map $\mathbb{R}\ni t\mapsto -t\in\mathbb{R}$ is a discrete deformation of the identity map $\mathbb{R}\ni t\mapsto t\in\mathbb{R}$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/482981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Initial speed is zero and so is power? If I want to accelerate something from standstill to max speed, with a constant force (acceleration and mass don't change), the equation P = F * v would say that in the beginning we use 0 W power. How is that possible? Since power is the rate of transference of energy to the body (J/s), it would appear that that rate should be constant. Why is it that a fast moving body requires more power to accelerate than a still body? And if we would like to know how much power we need to accelerate that body, should we then use the maximum speed?
If you want to accelerate a body from $v$ to $v+\Delta v$, the associated change in energy is $$\Delta E=E\left(v+\Delta v\right)-E\left(v\right)=\dfrac{1}{2}m\left(v+\Delta v\right)^{2}-\dfrac{1}{2}mv^{2}\approx mv \Delta v$$ You can see that for larger $v$ it costs more energy for the same increment $\Delta v$. Essentially the power is $$\dfrac{\Delta E}{\Delta t}=\underbrace{m\dfrac{\Delta v}{\Delta t}}_{ma=F}v=Fv$$
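A small numerical illustration of this (a sketch with made-up values for the mass and force):

```python
m = 1000.0     # kg (illustrative)
F = 2000.0     # N, held constant, so a = F/m = 2 m/s^2 throughout

for v in [0.0, 10.0, 20.0, 30.0]:
    P = F * v   # instantaneous power needed to sustain the same acceleration
    print(v, P) # 0 W at rest, rising to 60 kW at 30 m/s
```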
{ "language": "en", "url": "https://physics.stackexchange.com/questions/483143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 7, "answer_id": 0 }
Unorthodox way of solving Einstein field equations Usually when we solve field equations, we start with a stress energy tensor and then solve for the Einstein tensor and then eventually the metric. What if we specify a desired geometry first? That is, write down a metric and then solve for the resulting stress energy tensor?
This is sometimes jokingly called Synge's method. Here's an excerpt from Ingemar Bengtsson's A Second Relativity Course describing it (see Chapter 5): We would now like to see a solution describing a physical system that approaches (in some sense) the Schwarzschild solution as it evolves. This can be obtained by means of a method invented by the Irish relativist Synge. Synge’s method is as follows. To solve $$ G_{ab} = 8 \pi T_{ab}, $$ rewrite as $$ T_{ab} = \frac{1}{8 \pi} G_{ab}, $$ choose any metric tensor $g_{ab}$, compute its Einstein tensor $G_{ab}$, and read off the stress-energy tensor $T_{ab}$ from Eq. (5.2). The result is a solution of Eq. (5.1). To avoid any misunderstanding, Synge meant this as a joke (and he did not predict dark matter). A stress-energy tensor computed in this way is not likely to obey any of the positivity conditions that are necessary for it to qualify as physical. Very occasionally the method works though. (Bengtsson then proceeds to describe the Vaidya solutions, which are found by basically writing down a metric that looks vaguely like a time-dependent Schwarzschild solution and then interpreting it.) It's possible that Synge describes his "method" in his 1960 book—the textbook I'm drawing from cites it in the passage above—but I don't have a copy handy.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/483259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 2 }
Is my understanding of vectors correct? I recently learned that a vector in mathematics (an element of a vector space) is not necessarily a vector in physics. In physics, we also need that the components of the vector transform under a coordinate transformation in the same way as the components of the displacement vector do. So, if my understanding is correct, if $\mathbf{c}_1, \mathbf{c}_2, \mathbf{c}_3,\, \ldots \,,\mathbf{c}_n$ are the component vectors of a vector $\mathbf{A}$ and $f$ is the function of transforming coordinates (change of basis), then $$f(\mathbf{A}) = \sum_{i=1}^n{f(\mathbf{c}_i)}$$ where $\mathbf{A} = \sum_{i=1}^n\mathbf{c}_i$. That is to say, the vector transformed by applying $f$ to it should be equal to the vector formed by the component vectors which have been transformed by applying $f$ to them. Am I correct?
I recently learned that a vector in mathematics (an element of vector space) is not necessarily a vector in physics A vector in "physics" is exactly the same thing as you have defined it in "mathematics". Any vector space contains a basis $e_i$ upon which each element can be expanded as $$ v = \sum_k v^k e_k. $$ By definition of basis as tangent vectors to a set of curves, one can show that they must transform in a certain way, say given a transformation matrix $\Lambda$. Since the vector $v$ must be independent of the representation, if the basis transform using $\Lambda$ then the components must transform using the inverse matrix $\Lambda^{-1}$. $\Lambda$ (respectively $\Lambda^{-1}$) are what physicists refer to as covariant (respectively contravariant) transformation laws for the basis (respectively vector components). Same holds for dual forms and tensors mutatis mutandis.
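Here is a small numerical sketch of that statement (NumPy, with an arbitrarily chosen invertible matrix $\Lambda$): the basis vectors are transformed with $\Lambda$, the components with $\Lambda^{-1}$, and the vector itself is unchanged.

```python
import numpy as np

E = np.eye(2)                    # old basis vectors as columns (in background coords)
L = np.array([[2.0, 1.0],
              [0.0, 1.0]])       # an arbitrary invertible change-of-basis matrix Lambda

E_new = E @ L                    # basis transforms with Lambda ("covariantly")
v_old = np.array([3.0, 4.0])     # components of some vector in the old basis
v_new = np.linalg.inv(L) @ v_old # components transform with Lambda^{-1} ("contravariantly")

# the geometric vector is representation-independent:
assert np.allclose(E @ v_old, E_new @ v_new)
```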
{ "language": "en", "url": "https://physics.stackexchange.com/questions/483477", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
What is a Material Called that Translates the Image of a Touching Object's Surface? What is the name of a material that moves light through it in such a way that it appears that the surface of an object has translated through the material? Also, what is an example of this type of material? From very distant memory, I think this does not occur in nature, but is a metamaterial that can be created by binding fiber optic fibers together. But I cannot find a reference or a material on the market.
An example found in nature is ulexite: its parallel fibers transmit light from one face to the other, so an image of a surface in contact with the bottom face appears on the top face. See en.wikipedia.org/wiki/Ulexite.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/483604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why does the Schrödinger equation work so well for the hydrogen atom despite the relativistic boundary at the nucleus? I have been taught that the boundary conditions are just as important as the differential equation itself when solving real, physical problems. When the Schrödinger equation is applied to the idealized hydrogen atom it is separable and boundary conditions are applied to the radial component. I am worried about the $r=0$ boundary near the nucleus. Near the proton, the electron's kinetic energy will be relativistic and looking at the Schrödinger equation itself for how this boundary should behave seems dangerous because its kinetic energy term is only a non-relativistic approximation. Is there any physical intuition, or any math, that I can look at that should make me comfortable with the boundary condition in this region?
The boundary condition at $r=0$ is that the wave function should be finite. The Schrödinger equation for hydrogen-like atoms (and likely all atoms) has solutions with negative powers of $r$ (effectively negative $\ell$), which are rejected because they diverge at $r=0$. See for example Schiff's textbook on quantum mechanics. As for relativistic effects, you may want to compare the hydrogen energy expressions for Dirac (or, ignoring spin, Klein-Gordon) and Schrödinger. Check out another great text, Itzykson and Zuber, for these.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/483702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34", "answer_count": 4, "answer_id": 1 }
Can a Hydraulic System Work on a Moon Robot? Since the Moon has no atmosphere and the temperatures reach a maximum of 123 °C and a minimum of minus 153 °C, how feasible is it to use hydraulic actuators to move the robot's legs? Since my assistant professor insists on going forward with the idea of building a smaller scaled model using the hydraulic system, I was a bit skeptical and wanted to know if it is actually doable. For the record, the robot will run on batteries with a maximum capacity of around 25000 mAh. I thought it was important to mention that because of the need to keep the hydraulic system within its operating temperatures. The robot should also be able to handle around 1000 N of load.
Yes, hydraulics will work - as long as you use a fluid that has a working range relevant to the ambient conditions. Skydrol is used on aircraft as it has the properties deemed necessary... not the nicest stuff if you get it on skin etc...
{ "language": "en", "url": "https://physics.stackexchange.com/questions/483866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can I contract index in this expression? I'm reading Carrol text on general relativity, on page 96 they arrive to the term \begin{equation} \frac{\partial x^{\mu}}{\partial x^{\mu '}}\frac{\partial x^{\lambda}}{\partial x^{\lambda '}}\frac{\partial^2 x^{\nu '}}{\partial x^{\mu}\partial x^{\lambda}}.\tag{1} \end{equation} Can I contract this expression to get \begin{equation} \frac{\partial^2 x^{\nu '}}{\partial x^{\mu '}\partial x^{\lambda '}}~?\tag{2} \end{equation} I'm using the chain rule $$\frac{\partial x^{\mu}}{\partial x^{\mu '}} \frac{\partial}{\partial x^{\mu}}=\frac{\partial}{\partial x^{\mu '}}\tag{3}$$ (which I think is correct).
The chain rule (3) is correct, but expression (1) is only 1 out of 2 terms in expression (2) $$ \frac{\partial^2 x^{\nu ^\prime}}{\partial x^{\mu ^\prime}\partial x^{\lambda ^\prime}}~=~ \frac{\partial x^{\mu}}{\partial x^{\mu ^\prime}} \frac{\partial}{\partial x^{\mu}}\left( \frac{\partial x^{\lambda}}{\partial x^{\lambda ^\prime}}\frac{\partial x^{\nu ^\prime}}{\partial x^{\lambda}}\right)~=~ \frac{\partial x^{\mu}}{\partial x^{\mu ^\prime}}\left( \frac{\partial}{\partial x^{\mu}} \frac{\partial x^{\lambda}}{\partial x^{\lambda ^\prime}}\right)\frac{\partial x^{\nu ^\prime}}{\partial x^{\lambda}}+\frac{\partial x^{\mu}}{\partial x^{\mu ^\prime}}\frac{\partial x^{\lambda}}{\partial x^{\lambda ^\prime}}\frac{\partial^2 x^{\nu ^\prime}}{\partial x^{\mu}\partial x^{\lambda}}, $$ cf. Leibniz rule.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/483984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Is plane altitude limited by engine power and if so does air density cause this? I notice that, for example, human-powered flight operates at low altitudes. This might of course be due to safety but I wonder if in fact the delta in air pressure is greater at lower altitudes and this prevents low-powered aircraft from reaching higher altitudes?
Here's what happens when airplanes go to higher altitudes. First, there's less oxygen, so they have to have ways to get enough, like turbo-charging. Jet engines have built-in turbo-charging, and supersonic engines also use ram effect. Second, since the air is less dense, they need to go faster to get the same lift. That's a good thing. It's why long-distance aircraft go as high as they do. But there's a problem. In colder air the speed of sound comes down, and if their airspeed approaches the speed of sound they can get into problems. This is called the "Q-corner" or "coffin-corner". To go faster than that they need to be designed for supersonic flight. So, for example, subsonic transports are limited to about 40k feet, while the supersonic Concorde cruised at 60k feet. For human-powered flight, it's strictly a matter of having enough power to overcome drag. Niels is right about ground effect. When an aircraft is within one wing-length above the surface, drag is greatly reduced, so much less power is needed. If you take a flying lesson you learn to watch out for this, because you don't want your aircraft to "float along" when you're trying to get it on the ground. But it can help if you're taking off on a poor surface. You get into ground effect and stay there while you accelerate.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/484178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why does choosing prime numbers eliminate vibration? I have read that the spokes of a car wheel are usually five because, besides other substantial reasons, five being a prime number helps to reduce vibrations. The same also happens with the number of turbine blades and the way a microwave grill is spaced. Prime numbers are always preferred.
Gears should have (co)prime numbers of teeth to provide even wear (https://en.wikipedia.org/wiki/Prime_number#Computational_methods), but I don't see why a wheel needs to have a prime number of spokes. On the other hand, it seems that an odd number of spokes might be preferable for manufacturing (https://www.quora.com/Why-do-car-wheels-tend-to-have-an-odd-number-of-spokes), and the smallest odd composite number is 9, which may be too many spokes.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/484288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 2 }
What makes an atom more likely to become a cation (lose an electron) What makes an atom more likely to lose an electron and become a cation? Does the exact location of the electrons maybe influence that? I know that you can't know the exact position of an electron until you measure it. This would seem to prevent such experiments from taking place, as you can't watch the electron before it transfers to some other place: either you would not see the initial state because the transfer is already happening, or you would prevent the transfer from happening altogether.
The environment of the atom plays a role. An isolated atom is likely to lose its electron due to ever-present background EM radiation, even if that radiation is weak. When many other atoms are nearby, they help prevent loss of the electron, but for an isolated atom it is much more probable that the electron is free of the nucleus than bound to it. Hence the less dense a gas is, the more likely its atoms are to become ionized by external radiation. Atoms in interstellar space are very sparse, and they are often strongly ionized.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/484384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Pauli matrices acting on creation operators in the second quantization formalism I'm looking at some lecture notes for electron scattering taking place at a ferromagnet-superconductor junction. The idea is to start from a tight binding model, and eventually obtain the BdG equation. However, I have some problem with the algebra. In the ferromagnet the exchange term in the Hamiltonian reads $$H_{ex} = -\frac{\Delta_{xc}}{2}\sum_{i,\sigma} \vec{M} \cdot\vec{\sigma}c^{\dagger}_{i\sigma} c_{i \sigma}$$ where $\Delta_{ex}$ is just a scalar number, $\mathbf{M}=(\sin\theta \cos\phi, \sin\theta\sin\phi,\cos\theta)$ is the magnetization vector of the ferromagnet expressed in spherical coordinates, $\vec{\sigma}$ is a vector containing the Pauli matrices, and $c_{i,\sigma}$ annihilates an electron on site $i$ with spin $\sigma$. In the lecture notes they write that this becomes $$H_{ex} = -\frac{\Delta_{xc}}{2}\sum_{i,\sigma} \cos\theta c^{\dagger}_{i\sigma}c_{i\sigma}+ \sin\theta e^{-i\phi} c_{i,-\sigma}^{\dagger}c_{i\sigma} + \sin\theta e^{i\phi} c^{\dagger}_{i\sigma}c_{i,-\sigma} - \cos\theta c^{\dagger}_{i,-\sigma} c_{i,-\sigma}.$$ My question is how does one calculate this explicitly? My own attempt Clearly one must expand $$\vec{M}\cdot \vec{\sigma} = \sigma_x\sin\theta\cos\phi + \sigma_y \sin\theta \sin\phi + \sigma_z\cos\theta$$ and then act with this on the operator product $c_{i \sigma}^{\dagger}c_{i\sigma}$. However, I do not understand how the Pauli matrices act on the creation operators. For instance how can you calculate $$\sigma_x c_{i\sigma}^{\dagger}c_{i\sigma} = ?$$ I'm sure I can do the calculation above if only I knew how one handles one of these terms.
I strongly suspect you are merely looking at quadratic forms of Dirac oscillators, tallied in garbled aspirational notation. In the conventional basis of Pauli matrices, $$ \vec M \cdot \vec \sigma = \begin{pmatrix} \cos\theta & e^{-i\phi}\sin\theta \\ e^{i\phi}\sin\theta &-\cos\theta \end{pmatrix}. $$ In this basis, this matrix acts on the 2-vectors $$ v= \begin{pmatrix} c_\sigma \\ c_{-\sigma}\end{pmatrix}, $$ where now the bimodal variables $\sigma=\pm$ have been fixed to a direction, here z, and the sum over them is merely indicated in the bilinear form, $v^\dagger \vec M \cdot \vec \sigma v$, for each i index, still summed over, which I omit. You might well be overthinking an inept notation.
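A quick way to see the four terms explicitly is to expand the bilinear $v^\dagger\,(\vec M\cdot\vec\sigma)\,v$ term by term. Here is a minimal symbolic sketch of that expansion (my own illustration, not from the answer above); the operator names are just noncommutative placeholder symbols standing for $c^\dagger_{i\uparrow}$, $c^\dagger_{i\downarrow}$, $c_{i\uparrow}$, $c_{i\downarrow}$ at a fixed site $i$.

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)
# creation/annihilation operators as noncommutative placeholders (fixed site i)
cdag_up, cdag_dn, c_up, c_dn = sp.symbols('cdag_up cdag_dn c_up c_dn',
                                          commutative=False)

# M.sigma in the conventional Pauli basis
M = sp.Matrix([[sp.cos(theta),                  sp.exp(-sp.I*phi)*sp.sin(theta)],
               [sp.exp(sp.I*phi)*sp.sin(theta), -sp.cos(theta)]])

# the bilinear v^dagger (M.sigma) v, written out element by element
bilinear = (cdag_up*M[0, 0]*c_up + cdag_up*M[0, 1]*c_dn
            + cdag_dn*M[1, 0]*c_up + cdag_dn*M[1, 1]*c_dn)
print(sp.expand(bilinear))
# four terms: cos(theta) cdag_up c_up, e^{-i phi} sin(theta) cdag_up c_dn,
#             e^{+i phi} sin(theta) cdag_dn c_up, -cos(theta) cdag_dn c_dn
```

Up to the overall factor $-\Delta_{xc}/2$ and the sum over sites, these are the four types of terms quoted from the lecture notes (which term carries $e^{+i\phi}$ versus $e^{-i\phi}$ just depends on which spin projection you call $\sigma$). The point is that the Pauli matrix never acts on the operators themselves; its matrix elements simply become the c-number coefficients of the operator bilinears.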
{ "language": "en", "url": "https://physics.stackexchange.com/questions/484487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the definition of functions of Grassmann numbers? I understand there are some relevant questions, but none of them solves my issue. In Altland and Simons (Condensed Matter Field Theory), functions of Grassmann numbers are defined by Taylor expansion. However, I do not understand the exact meaning of this definition. Let me take the function $f(x,y)=\exp(xy)$ as an example. I will use $x,y$ to denote ordinary numbers and $\alpha,\beta$ to denote the generators of the Grassmann algebra. 1) If the definition of $f(\alpha,\beta)$ is: expand $f(x,y)$ as if $x,y$ are ordinary numbers, and then substitute $x\to\alpha$ and $y\to\beta$, then there would be ambiguities in how to order $\alpha$ and $\beta$. For example, $f(x,y)$ may be Taylor expanded to $1+xy+...$ or $1+yx+...$. 2) If the definition of $f(\alpha,\beta)$ is to expand $f(\alpha,\beta)$ directly with $\alpha$ and $\beta$ being Grassmann numbers, then one needs to define the derivative with respect to Grassmann numbers. However, derivatives are defined only for polynomials of Grassmann numbers, and then for arbitrary functions by Taylor expansion. Therefore the definition of derivatives relies on the definition of functions, and I have not been able to form a consistent picture of functions of a Grassmann variable.
* *The main point is that when dealing with non-commutative objects [like Grassmann-odd numbers, which anticommute rather than commute], apart from, say, Taylor coefficients, we also need to specify an order of the objects: the function/symbol itself does not amount to a full characterization. A similar issue arises when we want to replace e.g. the classical Hamiltonian $H(x,p)$ with a Hamiltonian operator because $\hat{x}$ and $\hat{p}$ do not commute: we might also need a choice of operator ordering. *Returning to one of OP's examples, if the quantity $f$ depends on 2 Grassmann-odd variables $\theta^1$ and $\theta^2$, it is customary to write $f(\theta^1,\theta^2)$. We know that there exist 4 (possibly supernumber-valued) coefficients $f_0,f_1,f_2,f_3$ such that $$f~=~f_0 + f_1\theta^1 + f_2\theta^2 +f_3\theta^1\theta^2.\tag{*}$$ Note that the order in each term of (*) is important. E.g. the 2nd term $f_1\theta^1=(-1)^{|f_1|}\theta^1f_1$, where $|f_1|$ denotes the Grassmann parity of $f_1$. So if we choose another ordering convention, the above 4 coefficients may acquire sign factors. *Another of OP's examples considers a function $f(\theta^1\theta^2)$. Here there are no ordering issues because the variable $x=\theta^1\theta^2$ is Grassmann-even, and therefore commutative, so that standard calculus applies [with the additional rule that the square vanishes, $x^2=0$].
{ "language": "en", "url": "https://physics.stackexchange.com/questions/484586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is there a way to add lenses together to shorten their focal length? I need to know if there is a way to put lenses in front of each other to make their focal length shorter. I tried putting them directly in front of one another, but I wasn't sure if there was a better way. I don't know anything about lens physics. Thank you in advance!
If you put the lenses very close together and the thin lens approximation also holds, then their dioptric strengths (the inverses of the focal lengths) are approximately additive, so stacking two converging lenses does give a combination with a shorter focal length than either lens alone (see the sketch below). If they're not very close together, then in general the combination will not show behavior that's equivalent to that of any simple lens.
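As a rough illustration (my own addition, not part of the original answer), here is the standard two-thin-lens combination formula; the focal lengths and separation below are hypothetical values chosen just to show the trend.

```python
def combined_focal_length(f1, f2, d=0.0):
    """Effective focal length of two thin lenses separated by distance d:
    1/f = 1/f1 + 1/f2 - d/(f1*f2).  With d = 0 the powers simply add."""
    return 1.0 / (1.0/f1 + 1.0/f2 - d/(f1*f2))

# hypothetical example: two 200 mm converging lenses (focal lengths in metres)
print(combined_focal_length(0.200, 0.200))        # in contact: 0.100 m
print(combined_focal_length(0.200, 0.200, 0.05))  # 5 cm apart: about 0.114 m
```

So pressing the lenses right up against each other, as you tried, is in fact the arrangement that shortens the focal length the most; increasing the separation weakens the combination, and eventually it stops behaving like a single simple lens at all.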
{ "language": "en", "url": "https://physics.stackexchange.com/questions/484726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Problem with Sudden Approximation in quantum mechanics If the Hamiltonian of a system changes abruptly (over a very short time interval) from one form to another, we would expect the wave function not to change much; its expansions in terms of the eigenfunctions of the initial and final Hamiltonians may be different, but the wave function itself is, at first, the same. Now my problem: consider the famous example of a particle initially in the ground state of an infinite, one-dimensional potential well with walls at $x=0$ and $x=a$. First, if the wall at $x=a$ is suddenly moved to $x=8a$, we can expand the wave function (which does not change at the first moment) in the new eigenfunctions and there is no problem! But if the wall at $x=a$ is suddenly moved to $a/2$, then the wave function in the new situation is not normalized, and we cannot expand it in the new eigenfunctions because it's not zero at $x=a/2$! So what's wrong with this? Does this tell us that we cannot do the second scenario, and it's impossible? Why?
Leaving aside the a-physical approximation of a mathematically infinite potential, the validity of the sudden approximation relies on the alteration of the Hamiltonian imposing sufficiently small changes on the state of the system, and that really isn't the case when you push the boundary far into the original space in the well. How do we know? Because the time-evolution of the wave-function is given by the application of the Hamiltonian to the wave-function $$ \frac{\partial}{\partial t}\Psi(x,t) \propto \left[\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} + V(x) \right] \Psi(x,t) \;, $$ but from the conditions of the problem and the initial state we know that during the time the well is evolving $V(x)\Psi(x,t)$ is a large potential applied to a non-trivial wave-function. So the time-evolution of the state will be non-trivial, so treating the system as un-perturbed by the action is a problem. Contrast this with the "expanding the well" case where $\Psi(x,t)$ is small (exactly zero in the ideal case) in the spatial region where the large change in $V$ occurs.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/484874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
If the pressure inside and outside a balloon balance, then why does air leave when it pops? Sorry for the primitive question, but when we inflate a rubber balloon and tie the end, its volume increases until its inner pressure equals atmospheric pressure. But after that equality is obtained, why does the air go out when we pop the balloon? If there is pressure equality, what causes the air flow?
I had the same doubt. The way I understand it now is that the pressure inside the balloon is actually higher than atmospheric pressure: what has to balance is not the pressures alone, but the total effective force on the balloon skin. Let me explain. Take a normal balloon; when you blow into it you create a higher pressure inside, and to restore equilibrium the balloon expands. As it expands the inner pressure decreases, but it does not decrease all the way to atmospheric pressure; it stays a little higher. Why? Because I was neglecting something: as the balloon expands, the rubber gains elastic tension, a bit like a hand squeezing the balloon from outside (a rough example, but helpful). And remember that the tension of the balloon skin acts on the balloon itself, not directly on the air inside (this was my misconception, that the tension of the balloon wall is applied to the air inside). So the net outside force on the skin is the atmospheric pressure plus the elastic tension of the balloon on itself, and to balance this the inner pressure has to be higher than the atmospheric pressure alone. That excess pressure is what pushes the air out when the balloon pops. Hope it helps. Any expert here, please check whether I am correct; this is just my understanding, so correct me if I am wrong.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/485008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26", "answer_count": 6, "answer_id": 4 }
What was the vertical beam of light in Chernobyl? In the HBO miniseries Chernobyl after the initial explosion we see a clear bright light shooting vertically up from the plant. I presume this was a thing that actually happened and not some creative license they took. What was the cause of this light and what are the mechanisms by which it works?
There are 2 sources: * *Ionized-air glow, caused by gamma radiation from the core (more bluish color). While gamma radiation is emitted in all directions, it is shielded on the sides and escapes to the air directly only in the vertical direction. *Just light scattering (as in regular projectors), where the core is a bright light source (more reddish color, due to the high temperature and fire). Light scattering is enhanced by dust in the air. Again, light can only escape upwards. It seems to me that the effect was somewhat exaggerated in the movie. I am not sure there is anything which could make the light so well collimated; I would expect a much less "focused" beam of light.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/485066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Would setting the ideal gas constant to $1$ yield an attractive natural temperature scale? In this recent question, there was a comment 'The "zero point" of Kelvin is natural, but the scale is not'. This led me to wonder whether setting $R = 1$ in the ideal gas law would be an attractive and more natural temperature scale. I am aware that changing to such a scale is not practical, the investment in the Kelvin is too great.
In natural units Boltzmann's constant, $k$, is normally set to one, rather than $R$. They differ by a factor of Avogadro's number; a mole is an arbitrarily defined unit based on the kilogram and is not "natural". At least in my experience of high energy physics choosing $k = 1$ is common practice; I'm sure it occurs in other branches too.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/485182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Bernoulli's equation's contradiction Using Bernoulli's equation ($P$ pressure, $\rho$ density, $V$ velocity of the fluid) $$P_1+ \rho gy_1+\frac{1}{2}\rho V_1^2 = P_2+\rho gy_2+\frac{1}{2}\rho V_2^2$$ $$V_1^2-V_2^2 =2g(y_2-y_1) +\frac{2(P_2-P_1)}{\rho}$$ $$V_1^2-V_2^2 =K\tag{1}$$ where $K$ is a constant. Using the equation of continuity, $$V_1^2\left(\frac{A_1}{A_2}\right)^2 = V_2^2.$$ Substitution in (1) gives $$V_1^2\left(1-\left(\frac{A_1}{A_2}\right)^2\right)=K.$$ Here, as $A_1$ increases $V_1$ increases, which is opposite to the equation of continuity, by which as $A_1$ increases $V_1$ decreases. Help(・へ・)
The relation is true only for $A_1=A_2$. In essence, a varying area must lead to a varying pressure; in your derivation you assumed that the external pressures stay the same while the area changes, which is wrong. Addendum: the relation also holds trivially for $V_1=V_2=0$, where no pressure difference means no flow; a fluid flows from high pressure to low pressure.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/485317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does the second-order correction to degenerate perturbation theory vanish? Consider a degenerate two-state system with states denoted by $|1\rangle$ and $|2\rangle$. If we apply a perturbation $H^\prime$, the first order correction to the energy is obtained by choosing two linear combinations of $|1\rangle$ and $|2\rangle$ that diagonalize $H^\prime$. So can we say that the second order correction always vanishes in this case because $H^\prime_{12}$ vanishes? I am disturbed by the denominator, which blows up.
According to Sakurai, once you have the first order energy shifts, you no longer deal with a degenerate case and you can use the non-degenerate perturbation formulas to calculate higher order corrections. So, I suppose for the second order you need to use the new energy levels in the denominator and not the unperturbed ones.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/485428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How do phone loudspeakers work? From what I understand, loudspeakers require AC signals to cause an electromagnet to oscillate due to changes in current direction, and thus force direction. How can this happen with a phone's battery/cell? Shouldn't the battery only be capable of producing direct current?
How can this happen with a phone's battery/cell? shouldn't the battery only be capable of producing direct current? First, let's address what is meant here by direct current since the term has different meanings according to context. Direct current may mean unidirectional current which may be constant or time varying. A phone battery is (typically) rechargeable and so the battery current is not unidirectional nor is it constant. Often, in electronic engineering, DC is a synonym for constant, e.g., a DC voltage source is a voltage source with constant voltage across. A battery is an approximation of a DC (constant) voltage source but the battery current is typically time varying depending on the load (and can be bidirectional as pointed out above). Now, it's straightforward to generate a true AC (zero mean) signal with a circuit connected to a single DC (constant) power supply and there is more than one way to do this. Here's an example (analog) circuit from a question at the Electrical Engineering stack exchange site: This circuit is powered by a single DC power supply voltage (+Vs) and so, the output of the amplifier IC (pin 4) is always positive, i.e., it has non-zero mean (AKA, a DC offset). Connecting this output directly to a speaker would be a bad thing as speakers generally do not tolerate (significant) DC offset. The solution is simple - drop the DC offset across a coupling capacitor (C7 in the schematic) so that the speaker sees only the AC portion of the output voltage at pin 4. An entirely different approach is to create both positive and negative DC power supply voltages from the battery voltage using DC-DC converters. An amplifier with both positive and negative power supply voltages can produce true AC signals at the output so that a coupling capacitor is not required.
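To put a rough number on the coupling-capacitor trick described above: the output capacitor (C7 in the schematic) together with the speaker forms an RC high-pass filter, and that is exactly what removes the DC offset. The component values below are purely hypothetical, since the actual schematic values are not given here.

```python
import math

def highpass_cutoff_hz(C_farads, R_ohms):
    """-3 dB cutoff of the RC high-pass formed by the output coupling
    capacitor and the speaker impedance: f_c = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * R_ohms * C_farads)

# hypothetical values: 220 uF coupling capacitor driving an 8 ohm speaker
print(highpass_cutoff_hz(220e-6, 8.0))   # roughly 90 Hz
```

The DC offset (0 Hz) is blocked entirely while most of the audio band passes; a larger capacitor would push the cutoff lower and preserve more bass.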
{ "language": "en", "url": "https://physics.stackexchange.com/questions/485751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Which particle mediates the Aharonov-Bohm effect? BACKGROUND The Aharonov-Bohm (AB) effect induces phase shifts between the two paths that an electron could take around an enclosed magnetic field. In radial coordinates, assume that the magnetic field is localized around the origin and that the two paths traced by the electron form two complementary half-circles at some radius R. Assume further that the magnetic field is initially switched off. QUESTION At the moment the magnetic field is switched on, which particle travels outward from the origin towards the electrons' path so as to mediate the phase shift? And at what speed? Clearly, such a particle can't be a disturbance of the electromagnetic field since the magnetic field is restricted to the origin and its vicinity.
It is the electromagnetic field disturbance. You may have a static magnetic field fully enclosed in the solenoid, but that is not possible for a dynamic field, since $$\mathbf{\dot{B}}=-\boldsymbol{\nabla}\times\mathbf{E}.$$ So switching on the solenoid will produce a spreading electromagnetic wave. The AB effect itself is purely electromagnetic (and quantum).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/486203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Why isn't my calculation that we should be able to see the sun well beyond the observable universe valid? I recently read an interesting article that states that a human being can perceive a flash of as few as 5 or so photons, and the human eye itself can perceive even a single photon. The brain will filter this out, however. I wanted to calculate how far away you'd have to be standing from our sun for not a single one of its photons to be hitting your pupil over a given second. The first thing I did was assume that the sun emits $10^{45}$ photons per second, because, well, that's the only number I could find through internet research. The next step is to assume that the average angle between photons emitted from the sun is pretty much the same, and is equal to $3.6 × 10^{-43}$ degrees. The next step is to assume that the average human pupil diameter is 0.005 meters, and then draw a triangle like so: The length of the white line through the center of the triangle equals the distance at which two photons from the sun would be further apart than your pupil is wide, meaning not even one photon should hit your eye. I broke the triangle into two pieces and solved for the white line by using the law of sines, and my final result is ridiculous. $3.97887×10^{41} $ meters is the length of the white line. For reference, that's over $10^{14}$ times the diameter of the observable universe. My conclusion says that no matter how far you get from the sun within our observable universe, not only should some of the photons be hitting your pupil, but it should be more than enough for you to visually perceive. But if I was right, I'd probably see a lot more stars from very far away every night when I looked up at the sky. Why is my calculation inconsistent with what I see?
Although I don't understand even the tiniest bit of the equations, and I am by no means a physicist, there is one additional factor: The atmosphere of our planet partially filters out what we can see beyond it.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/486300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "70", "answer_count": 6, "answer_id": 5 }
Berry connection in a solid I am having troubles to understand an equation-sign for the Berry connection in a solid. The general formula reads \begin{equation} \vec{A}(\vec{R}) = \mathrm{i} \langle \Psi(\vec{R}) \, | \nabla_{\vec{R}} \, | \, \Psi(\vec{R}) \rangle \text{.} \end{equation} Now assuming that \begin{equation} H_0 \psi_{\vec{k}}^n (\vec{r}) = E_n(\vec{k}) \psi_{\vec{k}}^n (\vec{r}) \text{,} \end{equation} where $u_{\vec{k}}^n$ denotes the function coming from the Bloch-wavefunctions $\psi_{\vec{k}}^n (\vec{r}) = \mathrm{e}^{\mathrm{i} \vec{k} \cdot \vec{r}} u_{\vec{k}}^n(\vec{r})$, it seems (for $\vec{R} \equiv \vec{k}$) to be too obvious to explain why ... \begin{equation} \vec{A^n}(\vec{k}) = \mathrm{i} \cdot \left( \mathrm{i} \cdot \langle u_{\vec{k}}^n \, | \vec{r} \, | \, u_{\vec{k}}^n \rangle + \langle u_{\vec{k}}^n \, | \nabla_{\vec{k}} \, | \, u_{\vec{k}}^n \rangle \right) = \mathrm{i} \cdot \langle u_{\vec{k}}^n \, | \nabla_{\vec{k}} \, | \, u_{\vec{k}}^n \rangle \end{equation} ... the first term vanishes. I would be grateful if someone could help me out.
$\mathbf{k}$ is not a parameter of the Hamiltonian $H_0$ for the eigensystem of your second equation. For the general definition of Berry connection, the Hamiltonian $H(\mathbf{R})$ depends on the parameter $\mathbf{R}$. So for Bloch states, one should use $u_{n\mathbf{k}}$ for the eigensystem $H_{\mathbf{k}}u_{n\mathbf{k}}=E_{n\mathbf{k}}u_{n\mathbf{k}}$, where $H_{\mathbf{k}} = e^{-i \mathbf{k \cdot r}}He^{i \mathbf{k \cdot r}}$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/486501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Difference between 'white light' and spectral light for rings of Newton In my book of waves and optics, there's a chapter about Newton's rings, and one of the questions is: what does the interference pattern look like when we use white light? According to me, there are rings in different colors, and the ring with the smallest diameter is blue. But then they ask what the difference is with spectral light, and I don't really know this.
"Spectral light" is a fairly weird phrasing, but presumably it means monochromatic light of well-defined wavelength. This will produce a well-defined set of rings. White light, by comparison, is formed by multiple wavelengths, all of which will create rings at different diameters, overlapping with each other. At the very center, you'll be able to observe some interference, with light and dark fringes separated by coloured boundaries (caused by the different spacings of the different colours), but these will quickly wash out and the interference will be destroyed as dark fringes in one colour start to overlap completely with light fringes in other colours.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/486758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Baker-Hausdorff for normal ordering exponential Let $A=A^+ +A^-$ where $A^+,A^-$ denote the creation and annihilation portion of the field. Then in Eduardo Fradkin, Field Theories of Condensed Matter Physics, equation (5.284), it states that $$ :e^A::e^B: ~=~ e^{[A^+,B^-]}:e^{A+B}:\tag{5.284} $$ where $::$ denotes normal-ordering of $A^+,A^-$. I'm familiar with the regular Baker-Hausdorff formula, but I'm not sure why this identity is true. EDIT: Here's my attempt. \begin{align} :A^n: &= \text{He}_n(A) \\ :e^A: &= \sum_{n=0}^\infty \frac{1}{n!}\text{He}_n(A) =e^{A-1/2}\\ :e^A::e^B: &= e^{-1} e^A e^B\\ &= e^{-1} e^{A+B} e^{\frac{1}{2} [A,B]} \\ &= :e^{A+B}:e^{-1/2} e^{i \Im[A^+,B^-]} \end{align} where I implicitly assumed that $[A,B]$ is a complex multiple of the identity. However, you can see that my result doesn't quite match the equation.
Ref. 1 contains several$^1$ typos, e.g. the aforementioned eq. (5.284) if we use$^2$ the definition above eq. (5.262): Let $\phi^+(x)$ ($\phi^-(x)$) denote the piece of $\phi(x)$ which depends on the creation (annihilation) operators only, $$\phi(x) ~=~\phi^+(x)+\phi^-(x).\tag{5.262}$$ The corrected eq. (5.284) is derived as follows: $$\begin{align} :e^A::e^B:~=~&e^{A^+}e^{A^-}e^{B^+}e^{B^-}\cr ~=~&e^{A^+}e^{[A^-,B^+]}e^{B^+}e^{A^-}e^{B^-}\cr ~=~&e^{[A^-,B^+]}e^{A^++B^+}e^{A^-+B^-}\cr ~=~&e^{[A^-,B^+]}:e^{A+B}:\end{align} \tag{5.284'}$$ References: * *E. Fradkin, Field Theories of Condensed Matter Physics, 2nd ed. (2013). -- $^1$ Independently, there is a wrong sign in the truncated BCH formula (5.269). $^2$ Alternatively, eq. (5.284) is correct in its printed form if we use the opposite notation for creation & annihilation operators.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/487007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is the drag force proportional to $v^2$ and defined with a factor of $1/2$? $$Drag = \frac{1}{2}C_d \rho Av^2$$ I understand that the strength of the drag depends on the density of the fluid the body passes through, the reference area of the body, the drag coefficient, and the velocity of the object. I don't, however, understand the 1/2 and the $v^2$ in the equation.
Adding just a bit to the previous answer, I believe that the drag coefficient definition is based on the dynamic pressure term in Bernoulli's equation, $\frac{1}{2}\rho v^2$; that is where both the factor of $1/2$ and the $v^2$ come from. Thus a dependence on velocity squared is expected, and is often observed. However, fluid flow is complicated, and the empirically determined $C_d$ in many cases varies with flow velocity and Reynolds number, depending on boundary-layer transition from laminar to turbulent flow, form drag, wake, etc.
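Just to make the scaling concrete, here is a minimal numerical sketch (my own addition; the $C_d$, density and area values are hypothetical) showing how doubling the speed quadruples the drag.

```python
def drag_force(Cd, rho, A, v):
    """Drag = 1/2 * Cd * rho * A * v**2; the 1/2*rho*v**2 factor is the
    dynamic pressure from Bernoulli's equation."""
    return 0.5 * Cd * rho * A * v**2

# hypothetical example: Cd = 1.0, air density 1.2 kg/m^3, frontal area 0.5 m^2
for v in (10.0, 20.0, 40.0):
    print(f"v = {v:5.1f} m/s  ->  drag = {drag_force(1.0, 1.2, 0.5, v):6.1f} N")
# prints 30 N, 120 N, 480 N: each doubling of v multiplies the drag by four
```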
{ "language": "en", "url": "https://physics.stackexchange.com/questions/487220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Is the velocity of the spinning rod constant after it's hit? Say we've got a rod floating around in space, with two masses of $m_0$, one attached at each end. Let's say the rod has a length of $l$. There's another mass, $m_1$, moving at some velocity $v_0$ towards one of the masses. $m_1$ collides and sticks instantaneously to one of the end masses. In the picture below I drew the collided masses as one big blob. After the collision, the rod will have some angular velocity $\omega$, and some linear velocity $v_f$. My question is this...will $v_f$ be constant? For the linear velocity, I want to say: "Well, linear momentum is conserved, so..." $m_1v_0=(2m_0 + m_1)v_f$ $v_f=\frac{m_1v_0}{(2m_0 + m_1)}$ However, now I'm doubting how the linear velocity of the entire rod $v_f$ can be constant at all, and thinking that the situation is a lot more complicated. This is because if it is constant, it seems to me that linear momentum isn't being conserved as the rod spins! Consider the case when the rod is vertical, the heavier side is moving left, and the lighter side is moving right, versus the case when the rod is vertical, the heavier side is moving right, and the lighter side is moving left. If the velocity of the center of mass of the rod is constant, then there's more net momentum when the rod is vertical and the heavy side is moving right than when the rod is vertical and the heavier side is moving left...!!! Which would...disagree with the conservation of linear momentum?
The linear momentum is determined from the velocity of the center of mass, and uses the entire mass of the rod. The mass of the rod in total will not change, and thus we need the velocity to be constant as well. The center of mass in this case will lie somewhere closer to the heavier side of the rod, but when you are discussing the linear momentum it is easiest to just think of the rod as a point of mass located entirely at the center of mass.
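Here is a quick numerical check of this picture (my own sketch, with made-up masses and speeds): compute $v_{cm}$ and $\omega$ from momentum and angular-momentum conservation, then verify that the total linear momentum of the two end masses is the same for every orientation of the spinning rod, which resolves the apparent paradox in the question.

```python
import numpy as np

m0, m1, l, v0 = 1.0, 0.5, 2.0, 3.0   # made-up values (SI units)
M = 2*m0 + m1
v_cm = np.array([m1*v0 / M, 0.0])    # from linear momentum conservation

d = m0*l / M                         # the cm sits a distance d from the heavy (struck) end
I = (m0 + m1)*d**2 + m0*(l - d)**2   # moment of inertia about the cm
omega = m1*v0*d / I                  # from angular momentum about the cm

def spin_velocity(r):
    """Velocity omega x r of a point at position r relative to the centre of mass."""
    return omega * np.array([-r[1], r[0]])

for theta in np.linspace(0.0, 2.0*np.pi, 7):
    u = np.array([np.cos(theta), np.sin(theta)])   # current rod orientation
    r_heavy, r_light = d*u, -(l - d)*u             # end positions relative to the cm
    p = (m0 + m1)*(v_cm + spin_velocity(r_heavy)) + m0*(v_cm + spin_velocity(r_light))
    print(np.round(p, 12))                         # always (m1*v0, 0)
```

The heavier end is closer to the centre of mass, so its rotational contribution to the momentum is smaller in exactly the proportion needed to cancel the lighter end's contribution; that is why $v_f$ of the centre of mass stays constant.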
{ "language": "en", "url": "https://physics.stackexchange.com/questions/487376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Why can't we take space as a universal frame of reference? Suppose we have a ball half filled with water in space, with nothing else around (nothing else in the whole space except the ball), and suddenly it accelerates for a time t. Obviously, there would be movement in the water, which would tell us that the ball underwent motion. But since we have nothing to compare the motion with, how can we say that it was in motion? Can we say that it was at point A inertially (in space) and then at point B (again in the empty space)?
You are asking if we can take space as a universal reference frame. What we usually use as a universal reference frame is the CMB rest frame, but in SR and GR there is, in theory, no universal frame of reference. Let's disregard that, and say you want to move that ball of water. First of all, how would you tell whether the ball was in motion (at constant speed) in the first place? In an empty universe you could not tell. Speed is relative: you need to specify what your speed is relative to, and in an empty universe there is nothing to compare it to. Now, if you want to move the ball, what you really do is accelerate it (at least for a while), and acceleration is absolute. Even in an empty universe you are able to tell whether the ball accelerates or not. First, as you say, the water will move differently from the ball because of the acceleration. But you can also drop beacons from the ball as it accelerates. With a laser you can check the distance between successive beacons, and you will see that this distance increases with each new pair of beacons. Thus the ball is accelerating, and you can tell even in an otherwise empty universe.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/487501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What makes the north pole of a magnet a north pole in the first place? This question might seem absurd and illogical to many, but it just popped into my mind while I was reading about magnetism. -Like in the case of charges, a positive or negative charge on an atom means the absence or presence of extra electrons, respectively. So my question is: what aspect exactly makes a pole of a magnet north or south? Is it the absence or presence of something? -I asked my teacher about this and he simply replied that the north pole is something which attracts a south pole. But to me this is more of a property rather than an exact meaning of what the north pole of a magnet actually is.
It is history, and it is worse than you think. The north pole of a magnet was defined as the pole that is attracted toward the geographic north of the Earth. Compasses were very important to sailors crossing the oceans. A compass is a magnetic dipole, as magnetic monopoles do not exist as far as we know experimentally. This means that the Earth's magnetic dipole is defined the opposite way to the compass dipole: the magnetic pole near geographic north must be a magnetic south pole, since it is a south pole that attracts the compass's north pole. Anyway, the answer is: it is a definition, a labelling convention for the magnetic dipole as seen in magnetic materials, not the presence or absence of something. See the analogy with the electric dipole in my answer here.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/487584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Pion decay as a point-particle The $\pi^-$ meson is a composite particle of $\bar{u}d$ quarks, but for many practical purposes it can be treated as a point particle with an effective interaction. The vertex responsible for the $\pi^-(p)\rightarrow e^-(q_1)+\bar{\nu}_e(q_2)$ can be written as: $$(-i)\sqrt{2}G_FV_{ud}f_{\pi}\gamma^\mu\gamma_Lp_\mu$$ I want to write the amplitude of this process but I don't know what to do with the pion. So far I've written: $$iM=\bar{u}(q1)[(-i)\sqrt{2}G_FV_{ud}f_{\pi}\gamma^\mu\gamma_Lp_\mu ]v(q2)$$ But I still have a pion meson entering the vertex, so how do I take that into account? Does the above expression suffice?
On the face of it, your expression looks fine, and the pion momentum is the only usable trace of the annihilated pion. So you must evaluate the pion momentum you wrote. In your conventions, conservation of momentum gives $p = q_1+q_2$, and you must then apply the equations of motion (the Dirac equations) to your spinors. Hint: the one for the neutrino will collapse and disappear, while the one for the electron leads to $\not{q}_1 \to m_e$, so the spinor bilinear in your expression reduces to $m_e \bar u \gamma_L v$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/487686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Mathematical form of distribution function with a high-energy beam The Maxwell-Boltzmann distribution function (MBDF) has the form $$f(v)=n\left(\frac{m}{2\pi k_BT}\right)^{\frac{3}{2}} \exp\left(-\frac{mv^2}{2k_BT}\right)$$ [Basic Space Plasma Physics by Rudolf A. Treumann & Wolfgang Baumjohann]. The shifted MBDF has the form $$f(v)=n\left(\frac{m}{2\pi k_BT}\right)^{\frac{3}{2}} \exp\left(-\frac{m(v-v_0)^2}{2k_BT}\right).$$ This is shown in the figure below for $m_s = 1.660539040\times 10^{-27}$, $k_B = 1.380\times 10^{-23}$, $T = 100$, $v_0 = 2000$, where the thick line gives the MBDF and the dashed line gives the shifted MBDF. As far as beams in a plasma are concerned, I believe that one should see a figure of the form: Kindly correct me if I am wrong. What is the mathematical expression that describes the above figure?
Since the only plasmas where that bump-on-tail velocity distribution function (VDF) can exist are those that are either weakly collisional or collisionless, it is perfectly okay to add VDFs. That is, those plasmas are neither in thermodynamic nor thermal equilibrium, so there is nothing wrong with adding two VDFs as they do not share a single temperature (e.g., see https://physics.stackexchange.com/a/268594/59023 or https://physics.stackexchange.com/a/375611/59023). The general bi-Maxwellian VDF of species $s$ is given by: $$ f_{s}\left( v_{\parallel}, v_{\perp} \right) = \frac{ n_{s} }{ \pi^{3/2} \ V_{T \parallel, s} \ V_{T \perp, s}^{2} } \ \exp\left[ - \left( \frac{ v_{\parallel} - v_{o, \parallel, s} }{ V_{T \parallel, s} } \right)^{2} - \left( \frac{ v_{\perp} - v_{o, \perp, s} }{ V_{T \perp, s} } \right)^{2} \right] \tag{0} $$ where $\parallel$($\perp$) refer to directions parallel (perpendicular) with respect to a quasi-static magnetic field, $\mathbf{B}_{o}$, $V_{T_{j, s}}$ is the $j^{th}$ thermal speed (actually the most probable speed), $v_{o, j, s}$ is the $j^{th}$ component of the bulk drift velocity of the distribution (i.e., from the 1st velocity moment), and $n_{s}$ is the number density or zeroth velocity moment of species $s$. Typically beams (i.e., the little bump in your merged VDF) are not isotropic, so it is common to have a VDF for the core and a VDF for the beam. It is okay to use a form like Equation 0 for both, each with different densities and thermal speeds. The non-equilibrium nature of the plasma allows two such VDFs to effectively stream past each other. They tend to excite instabilities that lead to fluctuations like Langmuir waves. You could get a little fancier and use a bi-kappa VDF. The bi-kappa distribution function is given by: $$ f_{s}\left( v_{\parallel}, v_{\perp} \right) = A_{s} \left[ 1 + \left( \frac{ v_{\parallel} - v_{o, \parallel, s} }{ \sqrt{ \kappa_{s} - 3/2 } \ \theta_{\parallel, s} } \right)^{2} + \left( \frac{ v_{\perp} - v_{o, \perp, s} }{ \sqrt{ \kappa_{s} - 3/2 } \ \theta_{\perp, s} } \right)^{2} \right]^{- \left( \kappa_{s} + 1 \right) } \tag{1} $$ where the amplitude is given by: $$ A_{s} = \left( \frac{ n_{s} \ \Gamma\left( \kappa_{s} + 1 \right) }{ \left( \pi \left( \kappa_{s} - 3/2 \right) \right)^{3/2} \ \theta_{\parallel, s} \ \theta_{\perp, s}^{2} \ \Gamma\left( \kappa_{s} - 1/2 \right) } \right) \tag{2} $$ and where $\theta_{j, s}$ is the $j^{th}$ thermal speed (also the most probable speed), $\Gamma(x)$ is the complete gamma function, and $\kappa_{s}$ is the kappa index and can be any value larger than 3/2. Further, we can show that the average temperature is just given by: $$ T = \frac{ 1 }{ 3 } \left( T_{\parallel} + 2 \ T_{\perp} \right) \tag{3} $$ if we assume a gyrotropic distribution (i.e., one that is symmetric about $\mathbf{B}_{o}$, so that the two perpendicular components of a diagonalized pressure tensor are equal).
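To connect this directly to the sketched figure in the question: the simplest model of the bump-on-tail shape is just the sum of a dense, slow "core" Maxwellian and a tenuous, fast "beam" Maxwellian. Below is a minimal 1-D numerical sketch of that sum (my own illustration, with made-up parameters, using the 1-D reduction of Equation 0).

```python
import numpy as np

def maxwellian_1d(v, n, v0, vT):
    """1-D drifting Maxwellian, f(v) = n/(sqrt(pi)*vT) * exp(-((v - v0)/vT)**2),
    where vT is the thermal (most probable) speed and v0 the drift speed."""
    return n / (np.sqrt(np.pi) * vT) * np.exp(-((v - v0) / vT)**2)

v = np.linspace(-4e6, 8e6, 2000)                          # speeds in m/s
f_core = maxwellian_1d(v, n=1.0e7, v0=0.0,   vT=1.0e6)    # dense, slow core
f_beam = maxwellian_1d(v, n=5.0e5, v0=4.0e6, vT=5.0e5)    # tenuous, fast beam
f_total = f_core + f_beam                                 # bump-on-tail VDF

# the region of positive slope (df/dv > 0) between core and beam is what
# drives beam instabilities such as Langmuir waves
slope = np.gradient(f_total, v)
print("positive-slope (bump) region present:", bool(np.any(slope[v > 2e6] > 0)))
```

Plotting f_total against v reproduces the qualitative shape of the second figure in the question; replacing either Maxwellian with the bi-kappa form above gives the same picture with heavier tails.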
{ "language": "en", "url": "https://physics.stackexchange.com/questions/487785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Rising sea levels due to thermal expansion According to NASA, one of the main reasons for the rising of sea levels is the increase in ocean temperature. The increase was $0.4^\circ \text{F}\sim 0.2^\circ \text{C}$ for waters down to a depth of $\sim700\text{ m}$. The observed sea level rise in that period was around $\sim 10\text{ mm}$. If the radius of the Earth is $R$, the sea level height is $h$, and $\beta$ is the volumetric temperature coefficient at $17^\circ \text{C}$, a very simple model gives the volume change as $$ \Delta V = 4\pi R^2h\beta\Delta T. $$ The volume of the thin spherical shell due to the volume change is $$ \Delta V = 4\pi R^2\Delta h. $$ Hence $\Delta h = h\beta\Delta T$. Considering that $\beta = 1.7\times 10^{-4}/^\circ \text{C}$, we find $$ \Delta h \sim (700\times 10^3\text{ mm})\times 1.7\times 10^{-4}\times 0.2 =23.8\text{ mm}. $$ This is huge, much larger than the observed sea level rise. What is the greatest source of error of this calculation? I want to make this calculation in class and then present the reasons why it is not precise.
The first source of error I noticed was the temperature difference: it's listed as approximately 0.2 °C (0.4 °F). This is not known very accurately, and your equation is linear in the temperature difference. Also, β is given for 17 °C while the water is much colder; β could also depend on pressure, which is much greater at 700 m depth. The last possible source of error I noticed was based on my assumption: I assumed from your description that you were looking at how much the volume of the first 700 meters of water expanded. To this, I would comment that the volume increase would be less, since parts of the ocean are not even 700 m deep. Or maybe I don't understand how you're doing the calculation.
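To see how sensitive the estimate is to the choice of $\beta$, here is a small sketch of the same $\Delta h = h\beta\Delta T$ calculation with two different expansion coefficients (my own addition; the cold-water value is an assumed, purely illustrative figure, since the expansion coefficient of seawater drops steeply as the temperature falls).

```python
# Delta_h = h * beta * Delta_T, reported in mm
h_m = 700.0   # depth of the warmed layer, m
dT = 0.2      # assumed warming, K

# beta varies strongly with temperature; the cold-water value below is an
# assumed, purely illustrative number
betas = {
    "warm surface water (~17 C)":           1.7e-4,
    "cold deep water (illustrative value)": 0.5e-4,
}

for label, beta in betas.items():
    dh_mm = h_m * beta * dT * 1000.0
    print(f"{label}: about {dh_mm:.0f} mm")   # ~24 mm vs ~7 mm
```

This suggests that taking the warm-surface-water $\beta$ for the whole 700 m column accounts for a large part of the discrepancy.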
{ "language": "en", "url": "https://physics.stackexchange.com/questions/487855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is effective mass a tensor? So I came across the effective mass concept for solids the other day. It was mentioned that the effective mass is a tensor and may have different values in different directions. However, this is in stark contrast to ordinary mass, which is direction independent (as far as I know). So how do we physically explain this direction dependence of the electron (hole) mass inside a solid? Is it called "mass" just because it has the dimensions of mass? Or is it just a mathematical tool without any physical significance?
Effective mass $m_{\ast}$ is just a constant that shows up in the dispersion relation $\epsilon(\mathbf{k})$ of an energy band. Consider a one-dimensional band with dispersion $\epsilon(k)$. Expanding near a minimum of $\epsilon$ \begin{align} \epsilon(k) \approx \epsilon_0 + \frac{\hbar^2 k^2}{2m_{\ast}} \end{align} It's defined by analogy to the quantum mechanical energy of a free particle of mass $m$, \begin{align} E(k) = \frac{\hbar^2 k^2}{2m}. \end{align} For example, consider a one-dimensional tight-binding model with dispersion \begin{align} \epsilon(k) = -2 t\cos(ka). \end{align} Taylor expanding $\epsilon$ near $k=0$ we get \begin{align} \epsilon(k)\approx -2t + tk^2 a^2. \end{align} so that the effective mass is \begin{align} m_{\ast}= \frac{\hbar^2}{2ta^2}. \end{align} In higher dimensions, the dispersion relation can be more complicated. If the dispersion is isotropic (the same in every direction), then \begin{align} \epsilon(\mathbf{k}) = \epsilon_0 + \frac{\hbar^2 |\mathbf{k}|^2}{2m_{\ast}}. \end{align} However, you could have a dispersion where the constants in front of different components of $\mathbf{k}$ are different, e.g. \begin{align} \epsilon(\mathbf{k}) = \frac{\hbar^2}{2}\left(\frac{k_x^2}{m_x} + \frac{k_y^2}{m_y}\right) \end{align} Such a dispersion could arise from, e.g., a tight-binding model in which the hopping parameters or lattice constants in each direction are different. You could even have a dispersion with cross-terms between the components of $\mathbf{k}$, e.g. \begin{align} \epsilon(\mathbf{k}) = \frac{\hbar^2}{2}\left(\frac{k_x^2}{m_1} + \frac{k_x k_y}{m_2} + \frac{k_y^2}{m_3}\right) \end{align} In general, the dispersion (to quadratic order, and dropping the constant) will be \begin{align} \epsilon(\mathbf{k}) = \frac{\hbar^2}{2}\sum_{a,b}h_{ab} k_a k_b \end{align} for a tensor $h_{ab}$ — the inverse of the effective mass tensor.
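As a small check of the one-dimensional example above, one can extract the effective mass directly from the band curvature, $m_{\ast} = \hbar^2/(\partial^2\epsilon/\partial k^2)$ evaluated at the band minimum. The sympy sketch below (my own, not from the answer) reproduces $m_{\ast} = \hbar^2/(2ta^2)$.

```python
import sympy as sp

k, a, t, hbar = sp.symbols('k a t hbar', positive=True)

eps = -2*t*sp.cos(k*a)                     # 1-D tight-binding dispersion
curvature = sp.diff(eps, k, 2).subs(k, 0)  # d^2(eps)/dk^2 at the band minimum k = 0
m_eff = sp.simplify(hbar**2 / curvature)   # m* = hbar^2 / (band curvature)

print(m_eff)                               # hbar**2/(2*a**2*t)
```

In higher dimensions the same recipe gives the inverse effective-mass tensor, $(1/m_{\ast})_{ab} = \hbar^{-2}\,\partial^2\epsilon/\partial k_a\partial k_b$, which is why the "mass" becomes direction dependent as soon as the curvature of $\epsilon(\mathbf{k})$ differs between directions.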
{ "language": "en", "url": "https://physics.stackexchange.com/questions/487969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What is a dimension? What is the size of a dimension? Recently I heard a TED talk by Brian Greene where he was speaking about string theory working in $(10+1)$ dimensions. He also said that we live in only $(3+1)$ dimensions. So where are the others? He explains that they are curled up to small sizes which we cannot perceive. He also gives the analogy of an ant and a man walking on a rope. So my doubt is: what do physicists mean by the word dimension? And what is meant by the size of a dimension? Thanks in advance for your help and support. Edit: Any link to papers discussing philosophical aspects of the above is greatly welcome.
The number of dimensions of space is the number of coordinates required to specify a point in space. The space we see is three-dimensional because we can specify a point in it as $(x, y, z)$. If we needed $(x,y,z,v,w)$ there would be five dimensions. Each dimension is different, independent direction in which we can move: forwards/backwards, left/right, up/down for 3D. The size of a dimension can be loosely thought of as how far you could go in that direction before you come back to where you started. For each of the three directions we see, you can go at least many billions of light-years. These may well be dimensions of infinite size. We’re not sure whether they are infinite in size or just really, really big. Extra dimensions would have to be microscopically tiny, so that we are constantly going “all the way around” in them without even realizing it. If they were macroscopic in size, we would have noticed them.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/488080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Gaussian path integral is equivalent to saddle-point? If we have a path integral involving many fields, $$Z = \int \mathcal D \phi_1 \cdots \mathcal D \phi_n \exp(-S[\phi_1,\ldots, \phi_n]),$$ and $\phi_n$ occurs only quadratically-- i.e. the $\mathcal D \phi_n$ integral is Gaussian-- I've been told that integrating over $\phi_n$ is equivalent to solving for $\phi_n$'s equation of motion $$\phi_n= f(\phi_1,\ldots, \phi_{n-1})$$ using Euler-Lagrange and plugging in. Up to normalization. Can one show in general why this is true?
Gaussian integration is a particularly simple case of the WKB expansion, cf. e.g. this Phys.SE post. Of course, the caveat is that the saddle point may be complex-valued. In other words, in the 1D case, the saddle point may lie in the complex plane, and one has to show that one can close the integration contour between the real axis and the line of steepest descent through the saddle point. Some of these issues are addressed in e.g. this & this related posts.
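A zero-dimensional toy version makes the statement concrete: for a quadratic "action" the Gaussian integral equals, up to the usual normalization prefactor, the exponential of minus the action evaluated at its stationary point, i.e. integrating out the quadratic variable is the same as plugging in its equation of motion. Here is a small sympy check of that (my own illustration).

```python
import sympy as sp

x, J = sp.symbols('x J', real=True)
a = sp.Symbol('a', positive=True)

S = a*x**2/2 - J*x                                  # quadratic "action" for one variable
Z = sp.integrate(sp.exp(-S), (x, -sp.oo, sp.oo))    # exact Gaussian integral

x_star = sp.solve(sp.diff(S, x), x)[0]              # equation of motion: a*x = J
Z_saddle = sp.sqrt(2*sp.pi/a) * sp.exp(-S.subs(x, x_star))

print(sp.simplify(Z - Z_saddle))                    # 0: exact agreement, no corrections
```

For a quadratic action the saddle-point (WKB) evaluation is exact; for non-Gaussian integrands the same saddle point only gives the leading term of the expansion.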
{ "language": "en", "url": "https://physics.stackexchange.com/questions/488189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Idealization of Eletric Field at a point According to Jackson "Classical Electrodynamics", in the first chapter about electrostatics: [...] point charges or electric fields at a point must be viewed as mathematical constructs that permit a description of the phenomena at the macroscopic level, but that may fail to have meaning microscopically. I think I've understood why point charges represent an idealization, but why also the electric field at a point fails to have meaning microscopically? Moreover, what does it mean in practice that in electrostatics we are only interested in macroscopic phenomena? Does this also affect electrodynamics?
We usually talk about the EM field (E and M together) as per QFT, and the electric and magnetic fields are manifestations of the EM force. Now you are asking why the EM field must be viewed as a mathematical construct. This means that the interaction of the EM field with other charges is mathematically modeled by virtual photons. Virtual photons are not real particles; they are a mathematical method that we use to describe the phenomenon whereby, in the region where the EM field exists (the near field), the field affects spacetime in such a way that any particle that interacts with the EM field will have an altered trajectory. These constructs fail to have meaning at the microscopic level because, in reality, we do not know how the EM field interacts there. What we do know is that we do experiments, we use virtual particles as a mathematical model to describe this phenomenon, and our data fit this model (QFT) best.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/488561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Metric tensor in the comoving cordinate frame In the textbook "Gravitation" by Misner, Thorne and Wheeler (page 717), when the isotropy of the universe is considered, it is stated that in the comoving frame the metric tensor can be written as: $g_{\alpha \beta} =\frac{\partial}{\partial x^\alpha}\cdot\frac{\partial}{\partial x^\beta}$ How does one arrive at this conclusion?
As Matt says in a comment, this is just a notation stating the definition of the metric. MTW notate it with a dot, to make it clear that the right-hand side is the inner product of two vectors. $\partial/\partial x^\alpha$ is a notation for the vector corresponding to a unit change in the coordinate $x^\alpha$. In this notation, the partial derivative is just being used as a basis vector for the space of vectors. By taking the dot product of two such basis vectors, you get a component of the metric. This isn't anything special about cosmology. It's generic.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/488671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is it that when a chalk board gets cleaned, the area that used to have chalk is the cleanest? Why is it that when you erase a chalk board, the area where the chalk used to be becomes the cleanest? By that I mean that when you erase a chalk drawing, the board gets smeared with chalk dust, but the area where the drawing used to be has less dust on it than the rest of the board. For example: In the first picture below I draw a simple chalk smiley face. Here the face is noticeable because it is the area with the most chalk. For the second picture, I erase it. You can still make out the picture, but notice that you recognize it because it is now the area with the least chalk. I would expect that if chalk was stuck to a certain region of a chalk board, then after erasing it, some chalk residue would remain, but instead it seems like the opposite happens. I don't have a good answer for this problem.
When you press a chalk stick into the chalkboard to write on it, the chalk particles tend to clump up and stick to each other. Thus, when you erase, it's significantly easier to remove the big clumps of chalk particles together. On the other hand, when you erase, you tend to leave fine chalk dust behind. These fine particles can pass through the chalk eraser, so the fine dust spreads over the rest of the board as you erase, while the bigger clumps that formed the drawing are removed much more easily.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/488780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 4, "answer_id": 2 }