Dataset columns: source_id (int64, 1 to 4.64M), question (string, lengths 0 to 28.4k), response (string, lengths 0 to 28.8k), metadata (dict).
706,859
From the CERN website: "In the first moments after the Big Bang, the universe was extremely hot and dense." I've always heard this about the big bang but I've never thought about it before now. If "heat" is vibration of atoms and "density" is the amount of matter in a unit of volume, what do these terms mean in the context of a universe that doesn't yet have any matter?
In physics, hot is an adjective meaning at high temperature. Heat is a different concept. In the case of ordinary matter, the temperature can be associated with atomic speeds. However, it is possible to generalize the concept of temperature to deal with systems other than moving particles. The most general thermodynamic definition of temperature is based on how the number of microscopic states of a system changes with its energy. It is quite an abstract definition, but sufficiently general to allow using the word temperature in cases where the classical picture of atomic motion cannot be used (quantum systems or electromagnetic fields, to cite a couple of examples). In the context of the Big Bang, the concept of the density of the universe also needs a generalization of the usual idea of mass per unit volume. The proper context for Big Bang theory is General Relativity (GR). Relativity tells us that mass and energy are not separate concepts: an increase in energy implies an increase in mass. Therefore, extremely dense is equivalent to saying that a huge amount of energy was confined in a very small volume. Unfortunately, in many cases, the popularization of science uses common-language words without explaining that they may have a different meaning in physics.
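For reference, the standard statistical-mechanics form of the definition sketched above is $$\frac{1}{T}\;=\;\left(\frac{\partial S}{\partial E}\right)_{V,N}\;=\;k_B\,\frac{\partial \ln \Omega(E)}{\partial E},$$ where $\Omega(E)$ is the number of microscopic states available at energy $E$ and $S=k_B\ln\Omega$ is the entropy. Nothing in this expression refers to moving atoms, which is why it applies equally to quantum systems and to radiation fields.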
{ "source": [ "https://physics.stackexchange.com/questions/706859", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/334574/" ] }
707,081
Given that Mars has about 1/3 of Earth's gravity, and neglecting space suit limitations and also assuming you have maintained your muscle strength, would you run faster on Mars? The answer may not be so straightforward. This is similar to the reduced 1/6th gravity on the Moon compared to the 1/3rd on Mars. We have all seen the video footage from the Moon landings and walks on the lunar surface. The astronauts appear to have increased foot strength, but their general movements are rather slow, especially evident when they are waving their hands. This may be because absent or reduced gravity (microgravity) makes you effectively float, as if inside water. Your limbs' muscles are constantly fighting your own mass's inertia (buoyancy replaced by the word inertia), and your feet may stay longer in the "air", not touching the ground, since you are not assisted by Earth's larger downward gravitational force. So I guess this is an open question. image source: https://www.pinterest.com/pin/763078730595604862/ Update 7 May 2022: It seems this question has raised quite a debate in the last couple of days. I did some more digging in the literature and could find only one publication directly dedicated to this question of running on Mars: https://pubmed.ncbi.nlm.nih.gov/15856558/ (abstract only). Also, about running on the Moon, this research says that experiments have shown that the maximum speeds achieved will be much greater than initially predicted theoretically, mainly due to the extra momentum gained from the hands' movement, but still slower than under Earth's gravity: https://www.theverge.com/2014/9/17/6353517/nasa-astronauts-tested-how-fast-humans-can-run-on-the-moon
The speeds of walking and running depend on the pendulum-like motion of the legs. If you walk at different speeds the power used varies, and it has a minimum roughly corresponding to the free pendulum motion of your legs. That swing time is $T\approx 2\pi \sqrt{L/g}$, and since each step has a length proportional to your leg length $L$, the speed scales as $v\propto L/T = \sqrt{gL}$. So you will tend to walk more slowly in low gravity. This is complicated by people taking longer steps in lower gravity. This can be tested using parabolic flight or weight-reducing spring suspensions. However, running involves moments when both legs are in the air. It becomes energetically favourable for bipeds when the Froude number is $\approx 0.5$, or $v=\sqrt{g L/2}$. So at lower gravity you start running at lower speeds, which also checks out with suspended runners. The energetically most favourable speed is at Froude = 1/4, which means you will tend to run at a speed scaling as $\sqrt{g}$. So Martian runners would tend to run at about 60% of Earth runner speeds. Low-gravity running also involves a flatter trajectory with less bouncing, and has a reduced energy cost compared to walking: on Mars people may be running more, but doing it more slowly.
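As a rough numerical illustration of the Froude-number scaling described above (the leg length is an assumed value and Mars' surface gravity is taken as 3.71 m/s²; this is a sketch, not a biomechanical model):

```python
import math

def froude_speed(froude, g, leg_length):
    """Speed v corresponding to a given Froude number Fr = v^2 / (g * L)."""
    return math.sqrt(froude * g * leg_length)

g_earth, g_mars = 9.81, 3.71   # m/s^2
L = 0.9                        # m, assumed leg length

for name, g in (("Earth", g_earth), ("Mars", g_mars)):
    v_start = froude_speed(0.5, g, L)    # walk-to-run transition, Fr ~ 0.5
    v_run = froude_speed(0.25, g, L)     # energetically favoured running, Fr ~ 0.25
    print(f"{name}: start running near {v_start:.2f} m/s, prefer ~{v_run:.2f} m/s")

# Preferred running speed scales as sqrt(g): Mars/Earth ratio ~ 0.6
print("speed ratio:", round(math.sqrt(g_mars / g_earth), 2))
```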
{ "source": [ "https://physics.stackexchange.com/questions/707081", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/183646/" ] }
707,452
Because the CMB is everywhere and is isotropic, if an object has a certain velocity relative to it, it should experience a pressure differential produced by the CMB, which would produce drag until it comes to rest with respect to the CMB. However, wouldn't this mean that there is a 'universal' reference frame created by the CMB? Wouldn't this go against the assumptions of special relativity?
The CMB does in fact produce a preferential reference frame. Even without pressure, the preferential frame would be the one that equalizes the red and blue shift in all directions. For example Earth's motion around the Sun and the Sun's motion around the galaxy can be extracted from the red and blue shift in CMB data. This does not contradict relativity, though, because the equations of physics are still valid and take the same form in any reference frame, even ones moving with respect to the CMB. Also, comparing your speed to the CMB requires looking far away, and relativity is based on the idea that your local neighborhood behaves the same regardless of your state of (inertial) motion. In the same way, when analyzing motion on Earth, it usually makes sense to define our coordinate system so one axis aligns with gravity (i.e. "up and down"), because gravity is one of the main forces at work, and it reduces the number of sines and cosines needed in the equations. But that doesn't mean you could not obtain equally valid answers in a coordinate system with any orientation. And that "up and down" oriented coord system may not be the most natural choice in other situations, like a long way across the globe, or in the middle of space. EDIT: In terms of "pressure" from the CMB, I think (?) you are referring to the radiation pressure $P$ of the distant source on a moving perfect reflector, as a function of velocity $v$ , which Einstein in 1905 derived to be: $$P=\frac{1}{4\pi}A^2\frac {(1-\frac{v}{c})}{(1+\frac{v}{c})}$$ where $A$ is the EM field amplitude. So this pressure does depend on velocity.
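To make the "extract our motion from the CMB" point concrete: to first order in $v/c$ the sky temperature seen by a moving observer is $T(\theta)\approx T_0(1+\tfrac{v}{c}\cos\theta)$. A few lines of Python, using the commonly quoted Solar System speed relative to the CMB (illustrative numbers, not a fit to data):

```python
T0 = 2.725           # K, mean CMB temperature
c = 299_792_458.0    # m/s
v = 370e3            # m/s, approximate Solar System speed relative to the CMB frame

# Dipole amplitude: the hottest and coldest directions differ from the mean by ~T0*v/c
delta_T = T0 * v / c
print(f"dipole amplitude ~ {delta_T * 1e3:.2f} mK")   # ~3.4 mK, matching the observed CMB dipole
```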
{ "source": [ "https://physics.stackexchange.com/questions/707452", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/318923/" ] }
707,827
One of the most used schemes for solving ordinary differential equations numerically is the fourth-order Runge-Kutta method . Why isn't it used to integrate the equation of motion of particles in molecular dynamics?
In molecular dynamics simulations, the overwhelming part of the computational time is spent evaluating forces. For this reason, since the very beginning of the method, the algorithms of choice have been those requiring the fewest force evaluations per unit of simulated time. Higher-order algorithms are usually not necessary for two reasons. First, in molecular dynamics simulations one is not interested in the maximum accuracy of the trajectories but in an efficient sampling of the phase space compatible with the physical constraints; for this reason, lower-order but symplectic schemes are usually preferred. Second, the increased accuracy of higher-order algorithms does not allow a proportional increase of the time step size, because of the stiff nature of the interatomic forces at short distances. Based on these considerations, the fourth-order Runge-Kutta method is usually excluded. It is neither time-reversible nor symplectic and requires four force evaluations per step. Even the existing symplectic Runge-Kutta methods are not appealing when compared with methods of the same order using fewer force evaluations.
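For contrast with the four force evaluations per step of fourth-order Runge-Kutta, here is a minimal sketch of the velocity Verlet scheme widely used in molecular dynamics: it is time-reversible, symplectic, and needs only one new force evaluation per step (the harmonic force below is just a placeholder for a real interatomic potential):

```python
import numpy as np

def force(x):
    # Placeholder force; a real MD code would evaluate interatomic forces here,
    # which is the expensive part referred to above.
    return -x

def velocity_verlet(x, v, dt, n_steps, m=1.0):
    f = force(x)                      # one force evaluation to start
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * (f / m) * dt**2
        f_new = force(x)              # the only force evaluation per step
        v = v + 0.5 * (f + f_new) / m * dt
        f = f_new
    return x, v

x, v = velocity_verlet(np.array([1.0]), np.array([0.0]), dt=0.01, n_steps=1000)
print(x, v)   # stays close to the exact harmonic solution, with bounded energy error
```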
{ "source": [ "https://physics.stackexchange.com/questions/707827", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/335110/" ] }
707,828
Is there a way to correctly predict the direction of the unit radial vector and the unit transverse vector in problems like the one below, or is it just better to take a guess and solve the problem based on your guess?
{ "source": [ "https://physics.stackexchange.com/questions/707828", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/323276/" ] }
707,836
So, we have a body $P_1$ of mass $m$ that travels from $A$ to $B$ along a straight line, on which is exerted a constant force $F$ dependent only on the position of $P_1$. We know that the work done is $W_{P_1} = \frac{1}{2}m(v_{1A}-v_{1B})^2$, with the expression within the parentheses being the difference in velocity from $A$ to $B$. Now, suppose that a second body $P_2$ with the same mass $m$ travels from $A$ to $B$. We know that the work depends only on the path and the force, so the work $W_B$ on $B$ satisfies $W_B = W_A$, hence $\frac{1}{2}m(v_{1A}-v_{1B})^2 = \frac{1}{2}m(v_{2A}-v_{2B})^2$ and $(v_{1A}-v_{1B})^2=(v_{2A}-v_{2B})^2$. Assuming a force opposite to the direction of motion, we conclude: $v_{1A}-v_{1B}=v_{2A}-v_{2B}$; in other words, the change in velocity over the path $AB$ of both bodies does not depend on their initial velocity. But I know this is false: is there a fallacy in my logic? In my math? Where am I making a mistake?
{ "source": [ "https://physics.stackexchange.com/questions/707836", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/335049/" ] }
708,552
Since a star heats up as it radiates heat (via gravitational compression), and since that's also how protostars turn into stars, I was wondering: what are the chances of Jupiter reaching the point where nuclear fusion kicks off within it, given that it survives long enough?
The smallest objects (given an elemental abundance mixture appropriate to a giant planet like Jupiter) that can attain hot enough interiors to ignite a sustained thermonuclear reaction have about 13 times the mass of Jupiter. The fusion reaction in question is that of deuterium, which burns at lower temperatures than "normal" hydrogen. The equivalent mass for hydrogen (protium) ignition is about 75 times the mass of Jupiter. Jupiter is therefore nowhere near massive enough to instigate nuclear fusion reactions in its interior. The question arises - why cannot Jupiter simply contract until its core becomes hot enough to ignite these reactions? The virial theorem tells us that any contraction will be associated with half the gravitational potential energy being radiated away and half being used to heat the interior. The answer is degeneracy pressure. The electrons in Jupiter's deep interior are dense enough to form an increasingly degenerate Fermi gas. This Fermi gas exerts a pressure that is almost independent of the temperature. This means that as Jupiter radiates away its interior heat, the pressure hardly diminishes, and as a result any contraction of Jupiter is rather small and slow and will not lead to any significant temperature increase. Ultimately, Jupiter will attain a "zero temperature" configuration that is not much smaller than it is today and could cool at nearly constant radius. At no point will it ever become hot enough to ignite nuclear fusion. This is the basic reason for the mass limits quoted above. I found the following plot in Fortney et al. (2006), which shows this general behaviour. These are model calculations of the evolution of radius (in Jupiter radii) for a Jupiter-mass planet either with (dashed lines) or without (solid lines) a solid core. The different colours represent different distances from the parent star (to account for the effects of insolation). The appropriate curve for Jupiter is between the 1 au and 9.5 au curves. What this shows is that the rate at which the radius gets smaller decreases with time (note the time axis is logarithmic).
{ "source": [ "https://physics.stackexchange.com/questions/708552", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/262071/" ] }
708,568
It has been said that the Big Bang started from a singularity. Think about a balloon radially growing over time. Fix a time $t_0, t_1 > 0$ , and let $M_0, M_1$ be two balloons at time $t_0, t_1$ respectively. I can find a two-parameter diffeomorphism $\phi(t_0, t_1): M_0 \rightarrow M_1$ . However, I cannot find a diffeomorphism if I let $t_0 = 0$ and $t_1 > 0$ , i.e. $\phi(0, t_1): \{*\} \rightarrow M_1$ . In what sense should I interpret a homotopy between initial state (big bang) and final state (the current universe)? Is it even true that the Big Bang started from a singularity?
The singularity at the start of the universe in the Big Bang model is not supposed to be understood as part of the smooth manifold of spacetime, precisely for this reason. The time function on spacetime does not actually assign a "point" to $t = 0$ . It's undefined (otherwise spacetime wouldn't be a Lorentzian manifold), and the same is true if you take an FLRW universe and try to keep the initial spatial slice 3d - since the scale factor goes to zero, the manifold is not Lorentzian there. If you want to model the initial singularity of the Big Bang as part of spacetime, you need to consider more general models of spacetime than a Lorentzian manifold.
{ "source": [ "https://physics.stackexchange.com/questions/708568", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/279703/" ] }
709,124
We know that neutrons exert short ranged nuclear forces over other nucleons in a nucleus, and these forces are only attractive in nature. Also this force is universal and doesn't differentiate between charged or uncharged bodies. So why doesn't a nucleus-like body exist, purely made up of neutrons? That would be highly stable because there is no coulombic repulsion between neutrons as they don't have any charge, and they will still be strongly bounded by nuclear forces. So why doesn't this exist? What am I missing?
The reason for this is that unlike the electrostatic force the nuclear force depends on how the spins of the two particles are aligned. The force is stronger when the spins are in the same direction than when they are in opposite directions. To see why this causes a problem, imagine trying to assemble some number of neutrons into a nucleus. We expect there will be energy levels like the energy levels for electrons in an atom, though they'll be more complicated (this is the nuclear shell model) and each energy level will contain two particles. We try to put the first two neutrons into the first energy level, but the problem is the strong force wants the spins to be in the same direction and the Pauli exclusion principle doesn't allow this. We would have to flip one of the spins to make the two spins opposite, but this reduces the nuclear force between the particles and it raises the energy of the level. Then we try to add the third and fourth neutron into the second lowest energy level, and we run into the same problem. We can do it, but the energies will be much higher than they would be if the spins were parallel, so the nucleus would be much less strongly bound. Now this doesn't mean the collection of neutrons wouldn't be bound, but there's a problem. Neutrons freely convert to protons, and you can put a proton and neutron together into a single orbital with their spins in the same direction because they have different isospins. So if you put two neutrons into the lowest orbital with their spins opposite, one of the neutrons will turn into a proton. Now the two particles can have their spins parallel, which lowers the energy of the orbital and therefore makes the nucleus more strongly bound. The same will happen with the next lowest orbital, then the next and so on. Your nucleus made up of neutrons would spontaneously convert into a 50/50 mixture of neutrons and protons because it lowers the energy. This argument implies all nuclei should be a 50/50 mixture of protons and neutrons, and this is approximately right but only approximately. This is because binding in nuclei is more complicated than the rather simple model I've described above. But while the model is wrong in detail it does capture the general principle involved.
{ "source": [ "https://physics.stackexchange.com/questions/709124", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/335762/" ] }
709,547
According to my understanding of General Relativity, gravity is not a force, and an observer who is falling freely under the influence of gravity should be considered inertial. Now, I have come across some texts about black holes that say a body approaching a black hole will eventually be ripped into pieces due to the large difference in gravitational field intensity between, say, head and toe. So my question is: if the observer is an inertial one and is not experiencing any force, why would his body parts be ripped apart?
You don't need GR to see this effect. It's due to tidal forces. Suppose you are 2 meters tall. Then the force of the Earth on your feet is $GMm/r^2$, and the force on your head is $GMm/(r+2)^2$. The difference between the two is the tidal force you feel. Now if you calculate these two forces, you'll find that they are almost the same. That's why you aren't ripped apart. But say the Earth were compressed to a size of about $1$ cm (approximately the Schwarzschild radius of an Earth-mass black hole). Then the same calculation would find two vastly different forces. That's why you are ripped apart by small black holes but not by large ones. (Of course all this can be made more precise in GR, but Newtonian mechanics suffices to answer your question.) Edit: to answer your comment, the observer's frame is inertial as long as the tidal forces aren't large enough. Once they're large enough the frame ceases to be inertial. There are two ways to make tidal forces smaller: the first is to have weak gravity, and the other is to make the observer smaller. You can see both of these in the equations above: weak gravity corresponds to large $r$ or smaller $M$, while a smaller observer corresponds to you having a smaller height. Both will reduce the tidal forces, and will lead to you not getting ripped apart.
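A quick Newtonian check of the head-to-toe argument (the 2 m height is from the answer; the 100 m stand-off from an Earth-mass black hole is just an illustrative choice):

```python
G = 6.674e-11      # m^3 kg^-1 s^-2
M = 5.97e24        # kg, mass of the Earth
h = 2.0            # m, height of the observer

def tidal_accel(r):
    """Difference in gravitational acceleration between feet (at r) and head (at r + h)."""
    return G * M / r**2 - G * M / (r + h)**2

print(tidal_accel(6.371e6))   # standing on Earth: ~6e-6 m/s^2, imperceptible
print(tidal_accel(100.0))     # 100 m from an Earth-mass black hole: ~1.5e9 m/s^2
```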
{ "source": [ "https://physics.stackexchange.com/questions/709547", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/306186/" ] }
709,780
As I understand, we usually talk about gravity at a macro scale, with "objects" and their "centre(s) of mass". However, since gravity is a property of mass generally (at least under the classical interpretation), it should therefore apply to individual mass-carrying particles as well. Has this ever been shown experimentally? For example, isolating two particles in some manner and then observing an attraction between them not explained by other forces. To pose the question another way, let's say I have a hypothesis that gravitation is only an emergent property of larger systems, and our known equations only apply to systems above some lower bound in size. Is there any experiment that reasonably disproves this hypothesis?
For the interaction of one small (atom scale) mass and one large mass, measurements of the Earth's atmosphere that anyone could do with a homemade barometer and a nearby mountain constitute direct experimental confirmation. We find more gas molecules at low altitudes than at high altitudes. Only gravity acting on each gas molecule independently could be responsible for the observed behavior - they behave like a gas, but they don't just float away and uniformly distribute themselves across the cosmos, but instead assemble themselves into a pressure gradient pointing towards the center of the planet. For the interaction of two small masses, rather than one small mass and Earth, or large distributions of small masses (e.g. nebula formation), the smallest I've read about is this one from last year, using ~90mg gold spheres. See arXiv: 2009.09546 [gr-qc] .
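The mountain-and-barometer observation is quantified by the isothermal barometric formula $n(h)=n_0\,e^{-mgh/k_BT}$; a small sketch with round numbers (a real atmosphere is not isothermal, so treat this as an estimate):

```python
import math

k_B = 1.381e-23              # J/K
m_air = 28.97 * 1.661e-27    # kg, mean molecular mass of air
g = 9.81                     # m/s^2
T = 288.0                    # K, assumed temperature

scale_height = k_B * T / (m_air * g)
print(f"scale height ~ {scale_height / 1000:.1f} km")       # ~8 km

for h in (0, 1000, 3000, 5000):                             # altitudes in metres
    print(h, "m:", round(math.exp(-h / scale_height), 2), "of sea-level density")
```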
{ "source": [ "https://physics.stackexchange.com/questions/709780", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/336058/" ] }
709,792
In this article about the EMC effect (in no way relevant to my question), the statement is made: The problem is that the complete QCD equations describing all the quarks in a nucleus are too difficult to solve, Cloët and Hen both said. Modern supercomputers are about 100 years away from being fast enough for the task, Cloët estimated. And even if supercomputers were fast enough today, the equations haven't advanced to the point where you could plug them into a computer, he said. I want the simplest possible example of equations that can't be plugged into a computer (This really should go without saying, but I want an example that typifies the reason the QCD equations specifically can't be plugged into a computer. So I guess in some sense it is relevant to my question contrary to what I said above. Also if you want to talk about QCD or EMC that's fine; I wouldn't have read the article if I wasn't interested in those things, and maybe it's helpful in the context of the question as well?).
{ "source": [ "https://physics.stackexchange.com/questions/709792", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/60849/" ] }
710,111
In Interstellar , wouldn't Miller's planet be fried by blueshifted radiation? The 61,000x time dilation multiplier would make even cosmic background radiation photons into extreme UV photons. I was told that this is off-topic for Science Fiction & Fantasy , so I'm posting it here. For those not in the know - Miller's world orbits an almost maximally spinning massive black hole at just above its Schwarzschild radius. This results in extreme time dilation - 1 hour on Miller's world passes in about 7 years according to a distant observer.
Miller's world would be fried by a strong flux of extreme ultraviolet (EUV) radiation. The cosmic microwave background (CMB) would be blueshifted by gravitational time dilation and then would be very strongly blueshifted and beamed coming from the direction of orbital motion. The overall effect would be a very strong dipolar distribution of temperature that is then distorted by the curved ray paths close to the black hole, whose shadow would fill nearly half the sky. However, the size of the ultra-blueshifted spot is correspondingly very small. A detailed numerical calculation$^\dagger$ comes up with an equilibrium temperature for Miller's world of 890$^{\circ}$C (Opatrny et al. 2016), with a flux of about 400 kW/m$^2$ from an EUV blackbody(!) arriving from the CMB "hotspot". I guess you would classify this as "fried"$^{\dagger\dagger}$. It is hotter than Mercury anyway. $\dagger$ According to Opatrny et al. the peak blueshift in the direction of orbit is $275000$ - i.e. wavelengths are shortened by a factor of $275000+1$. Since the temperature scales with the blueshift factor, a tiny spot on the sky is an intensely bright (brightness goes as $T^4$) blackbody source of soft X-rays and EUV radiation. The source size is of order angular radius $1/275000$ radians due to Doppler beaming. Back of the envelope - the source is 130 times hotter than the Sun but covers a $(1200)^2$ times smaller solid angle in the sky. Thus the power per unit area received should be $130^4/1200^2 \approx 200$ times greater than from the Sun. This is in pretty good agreement with Opatrny et al.'s calculation, which also claims to take into account the lensing effects. $\dagger\dagger$ Apparently, the typical temperature in a frying pan is 150-200$^{\circ}$C.
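The footnote's back-of-the-envelope estimate can be checked in a few lines (the blueshift factor and solid-angle ratio are the answer's own numbers; the solar constant is the standard ~1.36 kW/m²). The rough estimate lands at a couple of hundred kW/m², the same order as the 400 kW/m² from the detailed calculation:

```python
T_cmb = 2.725              # K
blueshift = 275_000 + 1    # wavelength-shortening factor quoted above
T_sun = 5772.0             # K, effective temperature of the Sun
solar_constant = 1361.0    # W/m^2 at Earth

T_spot = T_cmb * blueshift                  # ~7.5e5 K blackbody "hotspot"
hotter = T_spot / T_sun                     # ~130 times hotter than the Sun
flux_ratio = hotter**4 / 1200**2            # brightness ~ T^4, diluted by solid angle
print(round(flux_ratio), "x solar ->", round(flux_ratio * solar_constant / 1e3), "kW/m^2")
```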
{ "source": [ "https://physics.stackexchange.com/questions/710111", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/263465/" ] }
710,606
If perfect rigid bodies were to exist, then consider a scenario in which two rigid bodies of equal masses moving with velocities of equal magnitude but opposite in direction colliding against one another. During the collision, the velocities of both the masses will decrease and they will reach zero for both the bodies (as the net kinetic energy is zero). Since the bodies are rigid, there will be no compression which stores the kinetic energy, which would further accelerate the bodies in opposite directions (as in case of a normal elastic collision). Is it correct to say that the existence of perfect rigid bodies would violate conservation of energy, and hence they cannot exist?
You are right. Perfectly rigid bodies are an idealization, like point particles or massless frictionless pulleys. They do not exist. But they are useful. Plenty of objects exist that are so rigid that you cannot ordinarily tell the difference. A perfectly rigid object would violate other laws as well. For example, if you pushed it on one side, the whole object would instantly begin to move. That is, the forces would have to be transmitted from the near side to the far side faster than light. In a real object, rigidity is caused by atomic bonds, which are electromagnetic forces. Changes in electromagnetism cannot travel faster than light.
{ "source": [ "https://physics.stackexchange.com/questions/710606", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/287156/" ] }
712,288
One easy answer would be that for practical purposes, it is a very special sort of parameter. This point comes up quite clearly when we do the derivation of the work-energy theorem: $$ W = \int\mathbf{ F} \cdot \text d\mathbf s = \int m \left(\frac{\text dv}{\text dx}\cdot \frac{\text dx}{\text dt} \right)\, \text dx = \int m v\, \text dv =\frac12 mv^2 +C$$ When we go from the result of the first equality to that of the second, we are implicitly choosing a very specific parameterization, namely the actual time ticking as the object moves through the path; but, on a mathematical level, no matter how fast we had run through the force field (assuming the force is independent of time), the work integral would give the same answer. So this made me wonder: what properties does time have, beyond just being a parameter, when doing integrals in Newtonian mechanics?
Here is one way to address your question. "Time is defined so that motion looks simple." - Misner, Thorne, and Wheeler in Gravitation , p.23. Continue through to p. 26 where they say "Good clocks make spacetime trajectories of free particles look straight". Misner, Thorne, and Wheeler ( Gravitation , p.26) Look at a bad clock for a good view of how time is defined. Let $t$ be time on a "good" clock (time coordinate of a local inertial frame); it makes the tracks of free particles through the local region of spacetime look straight. Let $T(t)$ be the reading of the "bad" clock; it makes the world lines of free particles through the local region of spacetime look curved (Figure 1.9). The old value of the acceleration, translated into the new ("bad") time, becomes $$0=\frac{d^2x}{dt^2}=\frac{d}{dt}\left(\frac{dT}{dt}\frac{dx}{dT}\right)=\frac{d^2T}{dt^2}\frac{dx}{dT}+\left(\frac{dT}{dt}\right)^2\frac{d^2x}{dT^2}$$ To explain the apparent accelerations of the particles, the user of the new time introduces a force that one knows to be fictitious: $$ F_x=m\frac{d^2x}{dT^2}=-m\frac{\left(\frac{dx}{dT}\right)\left(\frac{d^2T}{dt^2}\right)}{\left(\frac{dT}{dt}\right)^2} $$ It is clear from this example of a "bad" time that Newton thought of a "good" time when he set up the principle that "Time flows uniformly" ( $d^2T/dt^2 = 0$ ). Time is defined to make motion look simple! UPDATE: For a similar argument, refer to p.415 in Trautman's "Comparison of Newtonian and Relativistic Theories of Space-Time" in Perspectives in Geometry and Relativity, Essays in honor of V. Hlavaty, ed. by B. Hoffmann, 1966 http://trautman.fuw.edu.pl/publications/Papers-in-pdf/22.pdf (item 95 from http://trautman.fuw.edu.pl/publications/scientific-articles.html )
{ "source": [ "https://physics.stackexchange.com/questions/712288", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/236734/" ] }
712,633
How much energy can be added to a small volume of space? Perhaps like the focal point of a very high power femtosecond laser for a short time, or are there other examples like the insides of neutron stars that might be the highest possible energy density? Is there any fundamental limit?
There is a limit to how much energy can be contained in a finite volume, beyond which the energy density becomes so high that the region collapses into a black hole. We also know that matter and energy are equivalent according to the Einstein equation $$E=mc^2\tag1$$ So if we can determine the greatest amount of matter that can fit into a volume just before it collapses into a black hole, the corresponding energy should also indicate the greatest energy confined in the volume just before it becomes a black hole. The maximum amount of matter, of mass $M$, that can be contained in a given volume before it collapses into a black hole is given by the Schwarzschild radius $$r_s=\frac{2GM}{c^2}\tag2$$ Using (1) we can then write $$M=\frac{E}{c^2}$$ so that equation (2) becomes $$r_s=\frac{2GE}{c^4}$$ or $$E=\frac{r_sc^4}{2G}$$ Note that this is still an energy, and to get to an energy density we need to define the volume, which is of course $$V=\frac 43 \pi r_s^3$$ so that the energy density is $$\epsilon =\frac{3c^4}{8\pi Gr_s^2}$$ This computation is based on not much more than the equivalence of matter and energy. It represents a bound on the maximum amount of matter, and therefore energy, in a spherical volume of radius $r_s$ before the volume containing the matter collapses into a singularity, which of course has no properly defined volume.
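A short numerical sketch of the bound just derived (the radii are arbitrary example values; note that the limiting density decreases for larger regions):

```python
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
c = 299_792_458.0   # m/s

def max_energy_density(r_s):
    """Energy density at which a sphere of radius r_s reaches its Schwarzschild limit."""
    return 3 * c**4 / (8 * math.pi * G * r_s**2)

for r in (1e-3, 1.0, 1e3):   # metres
    print(f"r_s = {r:g} m -> {max_energy_density(r):.2e} J/m^3")
```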
{ "source": [ "https://physics.stackexchange.com/questions/712633", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/293269/" ] }
713,023
Consider the green material in the figure to be conducting. So, I was wondering how the current will flow in the rod, as the battery is not connected at the ends, but on the surface of the rod. Thus, will there be no current at the ends of the rod and what about voltage?
An approximate, numerically calculated figure is attached. The figure is a two-dimensional result. Although a 3D calculation is possible, the figure is easier to read in 2D. $\phi$ is the potential in volts. $\vec{J}$ is the electric current density. (Edit #1) I have added a figure highlighting the elements in the two left-most columns.
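The kind of calculation behind such a figure — the potential $\phi$ and current density $\vec{J}$ in a conducting bar contacted on one face — can be reproduced with a crude 2-D finite-difference relaxation. Everything below (grid size, contact positions, unit conductivity) is an illustrative assumption, not the setup used for the attached figure:

```python
import numpy as np

ny, nx = 40, 120                 # cross-section of the bar (rows x columns)
phi = np.zeros((ny, nx))
contact_a = slice(20, 30)        # +1 V contact on the top face (assumed position)
contact_b = slice(90, 100)       # 0 V contact on the top face (assumed position)

def apply_boundaries(p):
    # Insulating faces: zero normal derivative (no current leaves the bar)
    p[0, :], p[-1, :] = p[1, :], p[-2, :]
    p[:, 0], p[:, -1] = p[:, 1], p[:, -2]
    # Fixed potentials where the battery terminals touch the surface
    p[0, contact_a] = 1.0
    p[0, contact_b] = 0.0

# Jacobi relaxation of Laplace's equation, nabla^2 phi = 0
for _ in range(20_000):
    apply_boundaries(phi)
    phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                              + phi[1:-1, 2:] + phi[1:-1, :-2])
apply_boundaries(phi)

# Current density J = -sigma * grad(phi), with sigma = 1 in arbitrary units
Jy, Jx = np.gradient(-phi)
J = np.hypot(Jx, Jy)
print("max |J| under the contacts:", J.max())
print("|J| near the far ends of the bar:", J[:, 0].max(), J[:, -1].max())  # small but nonzero
```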
{ "source": [ "https://physics.stackexchange.com/questions/713023", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/276132/" ] }
713,177
The probability of finding an electron outside its atom is never zero. Consider building an electron detector: it should receive constant signals from all electrons in the universe, as they can exist everywhere. Of course the probability decreases with increasing distance from the atom, but given the huge number of atoms in the universe, the probability of finding an electron at any point in space cannot be negligible.
I think this is a very creative question. You are thinking: there are so many atoms in the universe that all of their electron wavefunctions should overlap and lead to a detection everywhere. But remember that the radial wavefunction of the electron very roughly goes as $e^{-r/a_0}$, where $a_0$ is the Bohr radius, which is (also very roughly) $10^{-10}$ m, and $r$ is the distance. The estimated number of atoms in the observable universe is something of the order of $10^{80}$, but it does not matter if it's some other order close to that. Let's now assume they are distributed not as in the observable universe, but that ALL of them are just ONE meter away (which would of course lead to other complications). Even then, the calculation of the detection probability will contain a factor (or even the square of) $10^{80}e^{-1\mathrm{m}/a_0}=10^{80}e^{-10^{10}}(\approx0)$, which for all intents and purposes is zero. You cannot beat the exponential suppression at larger distances even with very large numbers of electrons. Since you seem to be an outside-the-box thinker: keep in mind that even more atoms in the perhaps unobservable universe do not matter, since they are so far away that their exponential suppression is even stronger. It does not matter how many you add.
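Because the exponential underflows ordinary floating-point numbers, the estimate above is easiest to check in log space; a small sketch using the answer's own illustrative numbers (all $10^{80}$ atoms placed one metre away):

```python
import math

a0 = 5.29e-11     # m, Bohr radius
r = 1.0           # m, assumed distance to every atom
n_atoms = 1e80    # rough count of atoms in the observable universe

# log10 of n_atoms * exp(-r / a0); calling exp() directly would just underflow to 0.0
log10_factor = math.log10(n_atoms) - (r / a0) / math.log(10)
print(f"detection factor ~ 10^({log10_factor:.3g})")   # ~10^(-8e9): zero for all practical purposes
```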
{ "source": [ "https://physics.stackexchange.com/questions/713177", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/337868/" ] }
714,520
Why do cumulus clouds have well-defined boundaries? In other words, what are the physical mechanisms that hold a cloud together as an entity separate from other clouds, that prevent it from spreading, etc.? Naively, one could expect the atmospheric vapour to spread homogeneously or (in the presence of nonlinearities inherent in hydrodynamics) form periodic structures or even vortices - these are indeed observed (see, e.g., horizontal convective rolls), but they do not seem to explain cumulus clouds. Dust clouds in outer space, held together by gravity, seem closer phenomenologically, but physically gravity seems a less plausible explanation than fluid/gas dynamics.
Clouds are fuzzier than they look. Clouds get their white colour from Mie scattering of light from water droplets of size comparable to the wavelength of light. But for smaller droplets Rayleigh scattering is the best approximation. The formula for the intensity of that radiation seen by an observer at distance $R$ , scattering angle $\theta$ , wavelength $\lambda$ from a particle of refractive index $n$ with diameter $d$ is $$ I=I_0 \left(\frac{1+\cos^2\theta}{2R^2}\right)\left(\frac{2\pi}{\lambda}\right)^4\left(\frac{n^2-1}{n^2+2}\right)^2\left(\frac{d}{2}\right)^6 $$ Note the last term: the scattering intensity increases with the sixth power of the diameter. That means that as the vapour density increases in the air and droplets start forming, even a completely smooth gradient of droplet sizes will look like it has a sharp edge where the scattering goes from minuscule to dominant. Once droplets are $\sim 10$ % of the light wavelength the full Mie theory is needed, but the effect is roughly the same (and less wavelength dependent). There are doubtless other forces keeping cumulus clouds sharp, like the upper boundary often corresponding to the top of an upwelling convective flow into drier air and hence having a strong vapour gradient, and the cloud base being set by temperature, pressure and the dew point.
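The decisive point, that scattered intensity grows as the sixth power of droplet diameter, fits in two lines (the droplet sizes are arbitrary illustrative values):

```python
# Rayleigh regime: intensity ~ d^6, so a modest change in droplet size
# produces an enormous change in how much light is scattered.
d_small, d_large = 0.05, 0.5      # micrometres, illustrative diameters
print((d_large / d_small) ** 6)   # 10x the diameter -> 1,000,000x the scattered intensity
```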
{ "source": [ "https://physics.stackexchange.com/questions/714520", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/247642/" ] }
715,056
Suppose I am deriving a length contraction formula using natural units. If I arrive at $L = L_0 \sqrt{1 - v^2}$, I know that I should divide $v^2$ by $c^2$ to get the correct answer in SI units. But what if I mistakenly forgot to square the velocity and arrived at $L = L_0 \sqrt{1 - v}$? I would then be inclined to divide $v$ by $c$ and conclude that the answer is $L = L_0 \sqrt{1 - \frac{v}{c}}$. If I had instead kept SI units during the derivation and only forgot to square the velocity, I would have arrived at $L = L_0 \sqrt{1 - \frac{v}{c^2}}$. I could then have kept track of the dimensions and told myself I had made a mistake. But that is not the case when using natural units. Is this a disadvantage of natural units? Or is there a way to get around this problem?
You are quite correct that the use of natural units removes a useful method for detecting errors. This is an example of a more general concept in information theory. If you use the minimum number of symbols to convey a given piece of information (in this example, an equation in physics or something like that) then you have a slimmed-down and efficient notation. However, by building in some extra symbols, in a suitably controlled or designed way, then you build in some error-detection capability. Suppose that you have $k$ symbols and the probability of making a mistake in copying each from one line to another is $p$ . Then the overall probability of making a mistake, for each such copy operation, is approximately $kp$ for small $p$ . Now suppose you add some further symbols such as $c$ or $\hbar$ , so that you have $n$ in total, with $n > k$ . Now the probability of making a copying error is $np$ , so it has gone up. It looks at first as if this makes matters worse. But now you have the error detection capability. An expression such as $1 + v/c^2$ is clearly wrong, and so is $2 + \hbar$ and things like that. This means that many of the mistakes will be detectable, so the overall probability of an error both occurring and also being undetected (by a dimensional check) can easily now be less than $kp$ , and usually is. In my experience, when doing calculations which you are already familiar with (e.g. collision problems in relativity if you have already done many of those), setting $c=1$ is useful to reduce clutter. But when entering into new territory in a calculation (e.g. doing general relativity when you are learning the subject), it is useful to retain $c$ in order to preserve a check and to keep track of what you are doing. Similar statements apply to $\hbar$ in quantum mechanics. In summary, errors can take many forms, not all of which will lead to a dimensional error, so not all are detectable. But the fact that many are detectable by this method is very useful. When doing familiar calculations by familiar methods, natural units are nice to keep things clean and uncluttered. When doing calculations in unfamiliar territory, on the other hand, the dimensional check capability often outweighs the cost of having more symbols. Added note to resolve an issue raised in comments It may be objected that the use of natural units does not entirely preclude a dimensional check. That is true, but it greatly reduces the number of errors that can be detected. For example, if two speed calculations gave the answers $v = x/t$ and $v=t/x$ then which is correct? If units with $c=1$ have been adopted then we can't tell. But if the calculations with $c$ included give the answers $v = x/(c^2 t)$ and $v=c^2 t/x$ then we can at least tell that the first one is not correct. (This example comes up in the case of a body undergoing hyperbolic motion).
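The "extra symbols buy error detection" point can be made concrete with a toy dimensional checker applied to the two candidate answers from the added note above; this is purely an illustration of the principle, not a real package:

```python
# Track dimensions as (length_exponent, time_exponent); a speed is (1, -1).
def mul(a, b): return (a[0] + b[0], a[1] + b[1])
def div(a, b): return (a[0] - b[0], a[1] - b[1])

L, T = (1, 0), (0, 1)          # dimensions of x and t
C = div(L, T)                  # c has the dimensions of a speed
C2 = mul(C, C)

candidates = {
    "v = x/(c^2 t)": div(L, mul(C2, T)),
    "v = c^2 t/x":   div(mul(C2, T), L),
}
for name, d in candidates.items():
    verdict = "dimensionally OK" if d == div(L, T) else "dimensional error"
    print(name, "->", verdict)   # only the second candidate passes the check
```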
{ "source": [ "https://physics.stackexchange.com/questions/715056", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/332123/" ] }
715,865
In the realm of pre-relativistic physics: $$\vec{p}=m\vec{v}$$ $$F= \frac{dp}{dt}= m\vec{a}$$ If there exists an electric field in space, the force experienced by a charge $q$ in it would be $$F= q\vec{E}$$ Applying Newton's laws: $$q\vec{E} = m \vec{a}$$ This is an equation stating a relation between the force and the acceleration of an object. Given $$m=0$$ it follows that $$q\vec{E} = 0$$ Since we assumed in the beginning that there exists an electric field, $$q=0$$ Thus a massless object must have no charge. [Or Newton's laws don't work for massless objects even in pre-relativistic physics.] My question is: is my reasoning correct? And is there a generalisation of this to the relativistic equations?
As was pointed out in the comments above, one has to use relativistic mechanics to talk meaningfully about massless particles; you can't just write $F = ma$ and expect it to work. And, indeed, it turns out that we can come up with a reasonable equation of motion for a charged massless particle if we use the machinery of relativistic dynamics. The equation of motion of a massive charged relativistic particle is $$ \frac{d p^\mu}{d\tau} = \frac{q}{m} F^{\mu}{}_\nu p^\nu, \tag{1} $$ where $m$ is the particle's rest mass, $q$ is its charge, $p^\mu$ is its four-momentum, $F^{\mu \nu}$ is the field strength tensor, and $\tau$ is the proper time along the (massive) particle's world-line. In particular, $\tau$ serves mainly as a parameter that traces out the particle's world-line through spacetime. If we multiply both sides by $m$ and introduce a new parameter $\lambda = \tau/m$, it turns out that the above equation is equivalent to $$ \frac{d p^\mu}{d \lambda} = q F^\mu {}_\nu p^\nu. \tag{2} $$ What's more, under this parametrization we have $$ \frac{d x^\mu}{d \lambda} = m \frac{d x^\mu}{d \tau} = p^\mu. \tag{3} $$ The new parametrization (2) has a perfectly well-behaved limit as $m \to 0$; in fact, this is a conventional parametrization for the worldlines of massless particles. Defining our $x$-direction to be the direction of the field, we have $F^{01} = - F^{10} = E_x$, and so the equations of motion are \begin{align*} \dot{p}^t &= q E_x p^x & \dot{p}^x &= q E_x p^t & \dot{p}^y = \dot{p}^z = 0 \tag{4} \end{align*} where dots denote differentiation with respect to $\lambda$. This can then be solved (in principle) for the four-momentum $p^\mu$ as a function of $\lambda$. If desired, one can then find the trajectories $t(\lambda)$, $x(\lambda)$, etc. in parametric form, and (in principle) invert the first of these to obtain $x(t)$, $y(t)$ and $z(t)$. This means that the energy ($p^t$) and $x$-momentum ($p^x$) of a massless charged particle will increase as it travels. But you can also show from the equations of motion (4) (try it!) that the quantity $p_\mu p^\mu = -(p^t)^2 + (p^x)^2 + (p^y)^2 + (p^z)^2$ is constant with respect to $\lambda$. So these equations ensure that a massless charged particle (with $p_\mu p^\mu = - m^2 = 0$) stays massless as it travels; and that means that it always travels at the speed of light, even as its energy and momentum change.
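A quick numerical check of equations (2)-(4): integrate the system in $\lambda$ and confirm that $p_\mu p^\mu$ stays zero. Units with $qE_x = 1$ and an arbitrary null initial momentum; purely illustrative:

```python
import numpy as np

qE = 1.0                              # q * E_x in arbitrary units
p = np.array([1.0, 0.6, 0.8, 0.0])    # (p^t, p^x, p^y, p^z), chosen so that p.p = 0

def minkowski_norm(p):
    return -p[0]**2 + p[1]**2 + p[2]**2 + p[3]**2

def deriv(p):
    # Equations (4): dp^t/dl = qE p^x, dp^x/dl = qE p^t, transverse components constant
    return np.array([qE * p[1], qE * p[0], 0.0, 0.0])

dl = 1e-4
for _ in range(100_000):              # classic 4th-order Runge-Kutta in lambda
    k1 = deriv(p)
    k2 = deriv(p + 0.5 * dl * k1)
    k3 = deriv(p + 0.5 * dl * k2)
    k4 = deriv(p + dl * k3)
    p += dl / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print("p after lambda = 10:", p)                  # energy and x-momentum have grown enormously
print("p.p (should stay 0):", minkowski_norm(p))  # remains ~0, so the particle stays massless
```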
{ "source": [ "https://physics.stackexchange.com/questions/715865", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/209185/" ] }
716,792
The speed of light is defined as $c=299{,}792{,}458\,\mathrm{m/s}$, and a meter is defined as the distance that light travels in $1/299{,}792{,}458=1/c$ of a second, but then we would have defined the meter in terms of the speed of light while also defining the speed of light in terms of the meter, which seems a bit circular to me. My guess is that we defined the meter as the distance that light travels in $1/299{,}792{,}458$ of a second so that the speed of light would be exactly $299{,}792{,}458\,\mathrm{m/s}$, but then why didn't we define it as the distance light travels in $1/100$ of a second? That would make $c=100\,\mathrm{m/s}$, which is much easier to remember and manage. Please tell me if there are any ambiguities in my question; I'll do my best to fix them, thanks.
Theoretically, we have not defined the speed of light in terms of the metre. Light covers a specific distance in one second. Now take that distance and divide it by $299792458$, and you have a smaller portion of that distance. That portion is defined as a metre. So, there's no circular metre definition here. Why this number? you may reasonably ask. The answer is that while we can change the definitions of fundamental units such as the metre so that they become more future-proof and universally accessible, and thus scrap an old definition, we can't just change their values to something entirely different, because those fundamental units have already been in use in everything from research to daily life for centuries. If we suddenly redefined the metre to be $1/100$ of the distance covered by light in a second (which is an enormously long distance, by the way), then we would have to alter every ruler, every length scale, every textbook in the world, not to speak of altering people's habits, mindsets, traditions and so on. (Also, making the metre as enormously long as you suggest might cause the metre to die out of everyday use, with units better suited to the human scale taking its place.) Such a value redefinition would be an enormously impractical task to implement - to get it through, you might want a better reason than the definition simply becoming easier to remember. Nevertheless, it is an interesting question that goes to the historical roots of how standardisation is done.
{ "source": [ "https://physics.stackexchange.com/questions/716792", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/339669/" ] }
717,069
I was reading about half-life measurements and was curious to understand the experimental setups that allow such minute measurements to be captured. Specifically, I am looking into the half-life of the Higgs boson. I would like to understand one example method of measuring the Higgs boson's half-life. Please feel free to explain any one setup.
The Higgs is a challenging example because the tabulated quantity is the decay width $\Gamma$ , from which a mean life $t≈\hbar/\Gamma$ is inferred. That is, nobody starts a clock when the Higgs is born and then stops it $10^{-22}$ seconds later. Instead, the short lifetime of the Higgs contributes an intrinsic uncertainty to its mass, and those variations in the masses of “Higgs events” show up in energy measurements. Sub-nanosecond timing is a solved problem — consider that you are probably reading this post on a computer whose processor is driven by a sub-nanosecond (multi-gigahertz) clock. For picosecond-level timing, you have to account for the fact that electromagnetic signals travel no faster than 0.3 millimeters per picosecond, about the size of a flake of pepper. So you can’t do reliable picosecond timing on a signal that’s gone through a coaxial cable, because the propagation time through the cable might vary with temperature. However, if you have a cloud chamber photograph of a relativistic particle, you might be able to measure the distance between two events with sub-millimeter precision, and get picosecond timing from your position data. Modern detectors don’t take photographs of vapor trails, but instead collect ionization from relativistic particles on an array of wires. The position and timing resolutions of modern detectors are the subject of lots of PhD theses. You can use $\hbar≈6\times10^{-4}\rm\,eV\,ps≈0.6\,eV\,fs$ to do your own conversions between energy width and lifetime (though there’s a $\ln2$ floating around if you want half-life). The Higgs width is $\Gamma≈4\,\rm MeV$ .
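The width-to-lifetime conversion from the last paragraph, in a couple of lines (using the roughly 4 MeV Standard Model width quoted above):

```python
hbar_eV_s = 6.582e-16    # eV * s
gamma_eV = 4.1e6         # eV, Higgs total width ~4 MeV

tau = hbar_eV_s / gamma_eV            # mean lifetime inferred from the width
half_life = tau * 0.693               # the ln(2) "floating around" for a half-life
print(f"mean life ~ {tau:.1e} s, half-life ~ {half_life:.1e} s")   # ~1.6e-22 s and ~1.1e-22 s
```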
{ "source": [ "https://physics.stackexchange.com/questions/717069", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/311317/" ] }
717,274
I've just started looking at plasmas and I have some confusion. A metal is a lattice of positive ions bathed in a sea of delocalised electrons and conducts electricity. A plasma is a 'gas' of free ions and electrons and can also conduct electricity. Is a metal a solid-state plasma? Are all ionized gases a plasma? Even if they have not been heated?
There are several sorts of plasmas. An ionized gas is a sort of plasma, also called thin plasma. Thin plasmas, while physically different from a metal, share a surprisingly large number of properties with it: They both have two separate charge carriers, cations and electrons. On both, the cations have a negligible contribution to current and the electrons form a gas. They have similar dispersion relations (leading to similar "local" Ohm's law in both). It's because cations don't contribute much that they seem similar, in spite of one being a gas and the other a solid. When you heat a thin plasma, for a while it's still an ionized gas. As it gets hotter, you get the energy to rip more and more electrons from atoms, but the nuclei remain untouched. However, cations start to contribute more so properties differ noticeably from a metal. It's the field of magnetohydrodynamics. When a plasma is hot enough to rip nucleons from the nucleus, it becomes a thermonuclear plasma, and its properties change again. This is the sort of plasma the Sun is made of. Heat such a plasma again (a lot!) and you have the energy to rip quarks and gluons from the nucleons. This is the quark-gluon plasma. Overall, you get a different sort of plasma each time you become able to extract a new sort of particle from the system. The similarity with a metal only exists at low energy.
{ "source": [ "https://physics.stackexchange.com/questions/717274", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/317932/" ] }
717,888
If everything you are working with is in Euclidean 3-space (or $n$-space) equipped with the dot product, is there any reason to bother with distinguishing between 1-forms and vectors, or between covariant and contravariant tensor components? I'm fairly certain that if you do not, then none of your calculations or relations will be numerically wrong, but are they "mathematically wrong"? Example: I'll write some basic tensor relations without distinguishing between vectors and 1-forms/dual vectors or between covariant/contravariant components. All indices will be subscripts. Tell me if any of the following is wrong: I would often describe a rigid transformation of an orthonormal basis (in Euclidean 3-space), ${\hat{\mathbf{e}}_i}$, to some new orthonormal basis ${\hat{\mathbf{e}}'_i}$ as $$ \hat{\mathbf{e}}'_i\;=\; \mathbf{R}\cdot\hat{\mathbf{e}}_i \;=\; R_{ji}\hat{\mathbf{e}}_j \qquad\qquad (i=1,2,3) $$ for some proper orthogonal 2-tensor ${\mathbf{R}\in SO(3)}$ (or whatever the tensor equivalent of $SO(3)$ is, if that's a thing). It's then pretty straightforward to show that the components, $R_{ij}$, of ${\mathbf{R}}$ are the same in both bases, and $\mathbf{R}$ itself is given in terms of ${\hat{\mathbf{e}}_i}$ and ${\hat{\mathbf{e}}'_i}$ by $$ \mathbf{R}=R_{ij}\hat{\mathbf{e}}_i\otimes\hat{\mathbf{e}}_j = R_{ij}\hat{\mathbf{e}}'_i\otimes\hat{\mathbf{e}}'_j = \hat{\mathbf{e}}'_i\otimes\hat{\mathbf{e}}_i \qquad,\qquad\quad R_{ij}=R'_{ij}=\hat{\mathbf{e}}_i\cdot\hat{\mathbf{e}}'_j $$ Then, given the basis transformation in the first equation, the components of some vector $\vec{\mathbf{u}}=u_i\hat{\mathbf{e}}_i=u'_i\hat{\mathbf{e}}'_i$ and some 2-tensor $\mathbf{T}=T_{ij}\hat{\mathbf{e}}_i\otimes \hat{\mathbf{e}}_j = T'_{ij}\hat{\mathbf{e}}'_i\otimes \hat{\mathbf{e}}'_j$ would transform as $$ u'_i = R_{ji}u_j \qquad \text{matrix form: } \qquad [u]'= [R]^{\top}[u] \\ T'_{ij} = R_{ki}R_{sj}T_{ks} \qquad \text{matrix form: } \qquad [T]' = [R]^{\top}[T][R] $$ and for some $p$-tensor we would have $$S'_{j_1j_2\dots j_p} \;=\; \big( R_{ i_1j_1}R_{ i_2j_2} \dots R_{ i_pj_p} \big) S_{ i_1 i_2\dots i_p} $$ and if ${\hat{\mathbf{e}}_i}$ is an inertial basis and ${\hat{\mathbf{e}}'_i}$ is some rotating basis, then the skew-symmetric angular velocity 2-tensor of the ${\hat{\mathbf{e}}'_i}$ basis relative to ${\hat{\mathbf{e}}_i}$ is given by $$ \boldsymbol{\Omega} \;=\; \dot{\mathbf{R}}\cdot\mathbf{R}^{\top} \qquad,\qquad \text{components in } \hat{\mathbf{e}}_i \;: \qquad \Omega_{ij} = \dot{R}_{ik}R_{jk} $$ Or, in matrix form (in the $\hat{\mathbf{e}}_i$ basis) the above would be $[\Omega]=[\dot{R}][R]^{\top}$. The third equation can be used to convert to the $\hat{\mathbf{e}}'_i$ basis.
The familiar angular velocity (pseudo)vector is then given by $$ \vec{\boldsymbol{\omega}}= -\tfrac{1}{2}\epsilon_{ijk}(\hat{\mathbf{e}}_j\cdot \boldsymbol{\Omega}\cdot \hat{\mathbf{e}}_k)\hat{\mathbf{e}}_i \qquad,\qquad \text{components in } \hat{\mathbf{e}}_i \;: \qquad \omega_i = -\tfrac{1}{2}\epsilon_{ijk}\Omega_{jk} $$ where $\epsilon_{ijk}$ are the components of the Levi-Civita 3-(pseudo)tensor, $\pmb{\epsilon}$, which itself may be written in any right-handed orthonormal basis as $$ \pmb{\epsilon} = \epsilon_{ijk}\hat{\mathbf{e}}_i\otimes\hat{\mathbf{e}}_j \otimes \hat{\mathbf{e}}_k = \tfrac{1}{3!}\epsilon_{ijk}\hat{\mathbf{e}}_i\wedge\hat{\mathbf{e}}_j \wedge \hat{\mathbf{e}}_k = \hat{\mathbf{e}}_1\wedge\hat{\mathbf{e}}_2 \wedge \hat{\mathbf{e}}_3 \quad,\quad \epsilon_{123}=1 $$ The time-derivative of some vector $\vec{\mathbf{u}}=u_i\hat{\mathbf{e}}_i=u'_i\hat{\mathbf{e}}'_i$ would then be given in terms of the components in the inertial and rotating bases by the familiar kinematic transport equation $$ \dot{\vec{\mathbf{u}}} = \dot{u}_i\hat{\mathbf{e}}_i = \dot{u}'_i\hat{\mathbf{e}}'_i + \boldsymbol{\Omega}\cdot\vec{\mathbf{u}} \;=\; (\dot{u}'_i + \Omega'_{ij}u'_j)\hat{\mathbf{e}}'_i $$ where $\boldsymbol{\Omega}\cdot\vec{\mathbf{u}} = \vec{\boldsymbol{\omega}}\times\vec{\mathbf{u}}$. End of example. Question: So, I'm pretty sure that none of the above would give me numerically incorrect relations. But I called everything either a vector, 2-tensor, or 3-tensor. Nothing about forms, (1,1)-tensors, (0,2)-tensors, dual vectors, etc. Is the above formulation mathematically "improper"? For instance, do I need to write ${\mathbf{R}}$ as a (1,1)-tensor, ${\mathbf{R}}=R^{i}_{\,j}\hat{\mathbf{e}}_i\otimes\hat{\boldsymbol{\sigma}}^j$, using the basis 1-forms $\hat{\boldsymbol{\sigma}}^j$? Does the angular velocity tensor need to be written as a 2-form or (0,2)-tensor? Context: My BS is in physics and I am currently a PhD student in engineering. Aside from a graduate relativity course I took in the physics department, I have never once seen raised indices or mention of dual vectors/1-forms in any class I have ever taken or in any academic paper I have ever read. That was until I recently started teaching myself some differential geometry in hopes of eventually understanding Hamiltonian mechanics from the geometric view. So far, I have mostly only succeeded in destroying my confidence in my knowledge of basic tensor algebra involved in classical dynamics.
As long as you restrict yourself to orthonormal bases, then that's fine. The reason for this is that indices are "raised" or "lowered" via the metric, and in an orthonormal basis the metric components are $g_{ij}=\delta_{ij}$ . As soon as your basis is non-orthonormal, however, this goes out the window. There are many good reasons to use non-orthonormal bases in various circumstances, but since you've explicitly stated that you'd ultimately like to understand Hamiltonian mechanics from a geometrical standpoint, I'll highlight the most glaring problem: in Hamiltonian mechanics on a symplectic manifold, there is no metric , and so the entire concept of orthonormality goes out the window. It is still useful to define an isomorphism between tangent vectors and their duals on a symplectic manifold, but we need something other than a metric to do so. The structure we use is the symplectic form $\Omega$ , which is by definition antisymmetric; this immediately implies that $\Omega_{ij}=\delta_{ij}$ is ruled out as a possibility in any coordinate system. As a result, vectors and their duals always have different components, and distinguishing between them and their transformation behaviors is crucial.
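A quick numerical illustration of the point above (a minimal sketch; the skewed basis and the test vector are made-up values for the example, not anything from the question): with an orthonormal basis the metric is the identity, so "lowering" an index leaves the components unchanged, while with a non-orthonormal basis the covariant and contravariant components of the very same vector differ.

```python
import numpy as np

# Orthonormal basis: the metric g_ij = e_i . e_j is the identity
e_ortho = np.eye(3)
g_ortho = e_ortho @ e_ortho.T            # = delta_ij

# A non-orthonormal (skewed, stretched) basis, chosen arbitrarily for illustration
e_skew = np.array([[1.0, 0.0, 0.0],
                   [1.0, 1.0, 0.0],
                   [0.0, 0.0, 2.0]])
g_skew = e_skew @ e_skew.T               # metric components g_ij = e_i . e_j

# Contravariant components of some vector
u_up = np.array([1.0, 2.0, 3.0])

# Lowering the index: u_i = g_ij u^j
print(g_ortho @ u_up)   # identical to u_up -> no need to distinguish the two kinds
print(g_skew @ u_up)    # different from u_up -> the distinction now matters
```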
{ "source": [ "https://physics.stackexchange.com/questions/717888", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/338329/" ] }
717,902
I have read in Griffiths' Quantum Mechanics that there is a phenomenon called tunneling, where a particle has some nonzero probability of passing through a potential even if $E < V(x)_{max}$ . What I don't understand about this is how to conceptualize how this can happen. I have read on Wikipedia that tunneling means that objects can "in a sense, borrow energy from their surroundings to cross the wall". How can the object "know" that across the wall there's going to be a lower energy and, thus, the borrowed energy will be restored and not depleted.
You're just beginning your study of quantum mechanics, so I would advise you to be careful not to try to interpret quantum mechanics through the lens of classical mechanics. It's a very reasonable thing to imagine quantum tunneling as a little ball which magically pops through a barrier and emerges on the other side, but that is an outstanding way to develop bad intuition which you'll need to fix down the line. Quantum mechanics is fundamentally different from classical mechanics, and it is the latter which should be understood as a limiting case of the former, not the other way around. In that sense, the real question should be not why quantum particles can tunnel, but why classical particles (whatever that means) apparently cannot. With that being said, the rough idea is the following. We can gain some useful intuition by studying the simpler case of what happens when a particle encounters a potential step of the form $$V(x) = \begin{cases} 0 & x<0 \\V_0 & x\geq 0\end{cases}$$ and then extend this to a potential barrier of width $L$ , because the latter is just a step up followed by a step down. The (generalized) eigenstate corresponding to a particle incident on the barrier from the left with energy $E=\hbar^2k^2/2m<V_0$ takes the form $$\psi_k(x) = \begin{cases} e^{ikx} + r_k e^{-ikx} & x < 0 \\ t_k e^{-q_k x} & x \geq 0\end{cases}$$ where $$\matrix{q_k \equiv \sqrt{\frac{2m(V_0 - E)}{\hbar^2}} \\ r_k \equiv \frac{k-iq_k}{k+iq_k}\\ t_k \equiv 1+r_k = \frac{2k}{k+iq_k}}$$ Based on this picture, we might imagine (correctly) that there is a nonzero probability of measuring a particle with $E<V_0$ within the potential step. However, we need to be a bit careful - this is a non-normalizable (and hence unphysical) state, after all, so if we want to understand what happens dynamically, we should construct a real, physical state. Such states take the form of wavepackets, which may be written $$\Psi(x,t) = \frac{1}{\sqrt{2\pi}}\int \mathrm dk \ A(k) \psi_k(x) e^{-iE_kt/\hbar}$$ for some square-integrable function $A(k)$ (where $E_k \equiv \hbar^2 k^2/2m$ ). In essence, $A(k)$ tells us how much of the state with energy $E_k$ is present in the wavepacket. The take-away is that real states consist of an integral superposition of energy eigenstates, not specific energies, and if we want to understand what happens dynamically when a particle encounters a potential step, we need to consider what happens to one of these wavepackets. The specifics of this are actually rarely covered in detail because while the process is conceptually fairly simple, the calculations are tedious and need to be performed numerically. The qualitative picture goes like this: The components of the wavepacket with energy $E>V_0$ are partially reflected and partially transmitted. The transmitted parts propagate forever in the $+x$ direction. The components of the wavepacket with energy $E<V_0$ are all reflected eventually; however, they penetrate into the barrier by an exponentially small distance ( $\psi_k\sim e^{-x/\ell_k}$ , where $\ell_k=1/q_k$ ) and are delayed by a correspondingly small amount of time before being reflected.
In particular, if all of the components of the wavepacket have energy less than $V_0$ , then the wavepacket will be perfectly reflected - however, it will be distorted because the different components penetrate different depths into the step before being reflected, and during the reflection there will be a nonzero (but exponentially small) chance of measuring the particle to be physically located at some $x>0$ . We can now turn our attention to your main question of what happens when we have a potential barrier of width $L$ , and a wavepacket whose components all have energy less than $V_0$ . From a qualitative and dynamic perspective, everything proceeds exactly as it did with the potential step. As the wavepacket approaches the barrier, its components penetrate into the classically forbidden region by an exponentially small distance before being reflected. However, because the barrier has a finite width $L$ , a fraction $\sim e^{-L/\ell_k}\equiv e^{-q_k L}$ of the components of the wavepacket will make it all the way through the barrier and escape to the other side $^\dagger$ . You can find an animation of such a process here . Note that the mean energy of the wavepacket in this simulation is much lower than $V_0$ , and so essentially none of the wavepacket is able to reach the far end of the barrier. However, observe the exponentially-suppressed penetration of the wavepacket into the front side of the barrier, and then imagine what would happen if the barrier were significantly thinner so the wave amplitude at the back edge was not effectively zero. How can the object "know" that across the wall there's going to be a lower energy and, thus, the borrowed energy will be restored and not depleted. I think the "borrowing energy" metaphor is not really a good way to think about it, for essentially the reason you mention. The particle doesn't need to know that the barrier has finite width; the penetration of the wavepacket into the barrier proceeds the same way in both cases, but if the barrier is not infinitely long then an exponentially small fraction of the wavepacket will reach the back edge and escape. $^\dagger$ In fact, this is an oversimplification. In reality, the components of the wavepacket which reach the back edge of the potential are not perfectly transmitted - some of them reflect backward into the barrier, so the precise expression for the tunneling amplitude is a bit more subtle than simply calculating $e^{-q_k L}$ (though that does provide the right order of magnitude). Remark on Localization (My initial reading of the question was sloppy, and I thought OP was asking about a potential step rather than a potential barrier. As a result, this is no longer particularly relevant, but it is mildly interesting, so I elected to include it as an afterthought.) As an interesting side note, it turns out that a particle which is initially localized to some compact interval $[x_1,x_2]$ to the left of the barrier (by which I mean, $\psi_0(x)=0$ for all $x\notin[x_1,x_2]$ ), then the wavepacket must contain components with energy $E>V_0$ . This is related to a well-known theorem about Fourier transforms which says that a function and its Fourier transform cannot both be compactly-supported; in this context, the interpretation is that the better-localized you want your initial particle to be, the more high-energy components you will need to include in the wavepacket. 
As a result, a wavepacket with average energy $E<V_0$ which is initially localized to a compact interval $[x_1,x_2]$ will always be partially transmitted, even through an infinitely long potential step, because it will contain some high-energy components which exceed the barrier height. Of course, even more of such a wavepacket would be transmitted through a potential barrier of width $L$ , because the high-energy components would be partially transmitted and an exponentially small fraction of the low -energy components would be able to tunnel.
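A rough numerical sketch of the exponential suppression described above (the particle mass, energy, barrier height and width below are arbitrary illustrative values, not anything taken from the question): for an electron hitting a barrier a few eV high and a few tenths of a nanometre wide, the decay length $1/q_k$ and the factor $e^{-q_k L}$ can be estimated directly.

```python
import numpy as np

hbar = 1.054571817e-34      # J s
m_e  = 9.1093837015e-31     # kg
eV   = 1.602176634e-19      # J

E  = 1.0 * eV               # particle energy (illustrative)
V0 = 5.0 * eV               # barrier height (illustrative)
L  = 0.5e-9                 # barrier width, 0.5 nm (illustrative)

q = np.sqrt(2 * m_e * (V0 - E)) / hbar      # inverse decay length q_k
print("decay length 1/q          :", 1 / q, "m")
print("amplitude factor exp(-qL) :", np.exp(-q * L))
print("probability scale exp(-2qL):", np.exp(-2 * q * L))
```

For these numbers the decay length comes out around an angstrom, so a half-nanometre barrier already suppresses the tunnelling amplitude by a few hundred, consistent with the "exponentially small fraction" language above.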
{ "source": [ "https://physics.stackexchange.com/questions/717902", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/337317/" ] }
718,044
It is said that dusk lasts for a shorter time at the equator than at the poles, because the equator rotates faster than the poles. But it is also true that the length of a day is the same at every latitude, and if that's true, then dusk should last just as long at the equator as at the poles. So, does dusk really last for a shorter period of time at the equator?
It is faster at the equator because the sun typically takes a higher trajectory through the sky there, so its path meets the horizon more steeply and it therefore crosses the horizon (and the twilight region just below it) faster.
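To put rough numbers on this (a back-of-the-envelope sketch, assuming an equinox sun with zero declination, ignoring refraction, and using the -6° civil-twilight definition; it understates the duration at high latitudes): near the horizon the sun's altitude changes at roughly 15° per hour times the cosine of the latitude.

```python
import numpy as np

omega = 15.0  # degrees of hour angle per hour (Earth's rotation rate)

for lat in [0, 30, 50, 60, 70]:
    # At the equinox the Sun's altitude near the horizon sinks at
    # roughly omega * cos(latitude) degrees per hour.
    sink_rate = omega * np.cos(np.radians(lat))
    civil_twilight_minutes = 6.0 / sink_rate * 60.0
    print(f"latitude {lat:2d} deg: ~{civil_twilight_minutes:4.0f} min of civil twilight")
```

At the equator this gives roughly 24 minutes of civil twilight, versus about twice that at 60° latitude.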
{ "source": [ "https://physics.stackexchange.com/questions/718044", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/329778/" ] }
718,059
The 1D Schrödinger equation reads: $$\frac{\partial \Psi}{\partial t}=\frac{i\hbar}{2m}\frac{\partial^2 \Psi}{\partial x^2}-\frac{i}{\hbar}V\Psi.$$ Now, generally we have $V=V(x)$ (or depending on any number of other real variables). But consider $V=V(\Psi)$ : suddenly the Schrödinger equation presents a nonlinear term (except for specific cases like $V=0$ or $V=1/\Psi$ ). Essentially my question, then, is: are there any systems where a potential depends partially or completely on the wavefunction, or is this simply a mathematical and non-physical curiosity?
Previous answers focus on the fundamental approach to Quantum Mechanics where the Hamiltonian operator is always a linear operator. However, they miss an extremely important situation where a non-linear Schrödinger-like equation appears in a natural way. That's the case with the so-called self-consistent one-particle approximations to the quantum many-body problem. They are approximations, but still well inside Quantum Mechanics. For example, the Hartree approximation for the electronic problems introduces an effective interaction including the electrostatic interaction in a Schrödinger-like equation for the state $\psi_i({\bf r})$ , due to the charge density (which depends quadratically on the one-particle wavefunctions): $$ \left(-\frac12 \nabla^2 + U_{ion}({\bf r})+ \sum_{j \neq i} \int d {\bf r'} \frac{|\psi_j ({\bf r'})|^2}{| {\bf r}- {\bf r'}|} \right) \psi_i( {\bf r})=\varepsilon_i \psi_i( {\bf r}). $$ Similar equations appear in the Hartree-Fock and Kohn-Sham approaches to Density Functional Theory. In conclusion, although a fundamental quantum Hamiltonian must be a linear operator, important approximate schemes introduce non-linear Schrödinger-like equations. Taking into account the basic importance of Kohn-Sham approximations for applications, the whole issue cannot be considered a curiosity , but it is a pillar of modern computational methods for electronic properties.
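To make the self-consistency idea concrete, here is a minimal toy sketch (not the production Hartree or Kohn-Sham machinery): two electrons in one dimension with a softened Coulomb interaction, where the effective potential is rebuilt from $|\psi|^2$ and the eigenvalue problem is re-solved until the orbital stops changing. All numerical parameters (grid size, softening, "nuclear" charge, initial guess) are arbitrary choices made for illustration.

```python
import numpy as np

# 1D grid
N, Lbox = 400, 20.0
x = np.linspace(-Lbox / 2, Lbox / 2, N)
dx = x[1] - x[0]

# Kinetic energy: -1/2 d^2/dx^2 via finite differences (atomic-like units)
T = (-0.5 / dx**2) * (np.diag(np.ones(N - 1), 1)
                      + np.diag(np.ones(N - 1), -1)
                      - 2.0 * np.eye(N))

# "Ionic" attraction and softened electron-electron repulsion
U_ion = -2.0 / np.sqrt(x**2 + 1.0)                        # helium-like toy nucleus
soft_coulomb = 1.0 / np.sqrt((x[:, None] - x[None, :])**2 + 1.0)

psi = np.exp(-x**2)                                       # initial guess
psi /= np.sqrt(np.sum(psi**2) * dx)

for it in range(100):
    # Hartree potential built from the other electron's density |psi|^2
    V_H = soft_coulomb @ (np.abs(psi)**2) * dx
    H = T + np.diag(U_ion + V_H)
    eps, vecs = np.linalg.eigh(H)
    psi_new = vecs[:, 0] / np.sqrt(np.sum(vecs[:, 0]**2) * dx)
    if np.max(np.abs(np.abs(psi_new) - np.abs(psi))) < 1e-8:
        psi = psi_new
        break
    psi = psi_new

print("converged after", it + 1, "iterations; orbital energy =", eps[0])
```

The loop makes the non-linearity explicit: the operator being diagonalised depends on its own eigenfunction, which is exactly the structure of the Hartree equation quoted above.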
{ "source": [ "https://physics.stackexchange.com/questions/718059", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/295017/" ] }
718,069
Context I am studying special relativity using [1]. In [1], Gray writes that, The Minkowski diagram can help us see what is going on in a given special relativity problem: if we plot the relevant events on the diagram, we can see their relationship more clearly. However, it can also support some arguments directly. For example (this argument is taken from Rindler), imagine a flashing lighthouse beam being swept across a distant shore. If the shore is far enough away and the beam is turned quickly enough, the illuminated points can be made to travel arbitrarily fast---faster than the speed of light . I have searched for Rindler's lighthouse online. I found citations to Rindler---including links to non-inertial frames of reference (of which this turning lighthouse would be one). On this site, I have found [2]. The question in [2] may be related to mine, but I do not know. Question: I do not understand what is written here. Can you explain the example in your own words? In particular, how can the illuminated points be made to travel arbitrarily fast---faster than the speed of light ? Bibliography [1] N. Gray, A Student's Guide to Special Relativity , Cambridge University Press, 2022, p. 55. [2] Rindler Coordinates Derivation
{ "source": [ "https://physics.stackexchange.com/questions/718069", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/194354/" ] }
718,082
In the images captured by the Webb telescope one can see some bright points of light with 6 rays, but most others don't have any. One would expect the optics to transform all light sources at infinity in the same manner. What causes these differences?
These are diffraction spikes . They are an interference pattern caused by the arms and shape of the telescope. They occur around whatever is bright enough in the image, which in this case is all the stars that are within the Milky Way. These stars show up super bright because the Webb is trying to look for super dim objects in the deep field. The spikes very near the star (horizontal and two diagonal ones) are the three arms and the inverted image of those 3 arms. You can think of them as a type of "shadow" the telescope is casting. The larger lines which form the 6 points are caused by the non-circular shape of Webb also causing an interference pattern. The geometry of the arms was chosen so that the diagonal spikes from the arms line up with the spikes from the shape of the telescope. This was done to minimize the effect. It is worth noting that all the galaxies also have diffraction spikes but they are much dimmer due to the amount of light and diffuse due to the non-point-like nature of the extended objects. In particular the interference pattern is a type of Point Spread Function . As Prof Rob explains this is a type of Fourier Transform done on the light due to the missing modes from the shape and arms of the telescope. A more spread out object exhibits these Point Spread Functions much less because the missing information is recovered from a slightly different angle.
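A small numerical sketch of the point-spread-function idea (the aperture model below is a crude stand-in, not JWST's real hexagonal pupil): take a filled circular aperture, block a few thin "spider" struts, and look at $|\mathrm{FFT}|^2$ of the pupil. Each blocked strut produces a bright pair of spikes perpendicular to it, which is the same mechanism as the arms described above.

```python
import numpy as np
import matplotlib.pyplot as plt

N = 512
y, x = np.mgrid[-N//2:N//2, -N//2:N//2]

# Toy pupil: a filled circle...
pupil = (x**2 + y**2 < (N//5)**2).astype(float)

# ...with three thin "spider" struts blocking light (crude stand-in for the arms)
for angle in (90, 210, 330):                      # strut directions, degrees
    a = np.radians(angle)
    d = np.abs(-np.sin(a) * x + np.cos(a) * y)    # distance from the strut line
    along = (np.cos(a) * x + np.sin(a) * y) > 0   # keep only one half-line
    pupil[(d < 2) & along] = 0.0

# Point spread function = |Fourier transform of the pupil|^2
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2

plt.imshow(np.log10(psf + 1e-3), cmap="inferno")
plt.title("log PSF of a toy aperture with struts")
plt.show()
```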
{ "source": [ "https://physics.stackexchange.com/questions/718082", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/30370/" ] }
718,659
The 3B1B YouTube channel has a video The more general uncertainty principle, regarding Fourier transforms which looks at thin peaks in frequency domain corresponding to long-lasting pulses in time domain, and vice versa . The transformation that makes these comparisons possible is the Fourier transform. Unfortunately no equation or mathematical relation is given in the video for this principle. Perhaps there exists a functional equation for this principle, or perhaps it is just something that visual inspection seems to confirm. The Tom Rocks Maths' YouTube channel has a video Heisenberg's Uncertainty Principle with @Michael Penn that derives the famous $\sigma_x \sigma_p \geq \frac{\hbar}{2}$ using (quantum) expectations and Schrodinger's equation. I want to learn how closely related these concepts actually are. On the face of it, the two explanations give me the impression that they are logically independent things because the general uncertainty principle assumes nothing about Schrodinger's equation and could really apply to almost any signal. But within QM we can think about both of these notions, motivating an ability to distinguish them. Certainly the complex exponential functions involved in solutions to Schrodinger's equation entail a relationship to Fourier series via Euler's formula, so it is natural to suspect that a correspondence between the inverse domains of the Fourier transform should feature somewhere in understanding QM. It isn't clear to me whether this general uncertainty principle is a natural generalization of Heisenberg's uncertainty principle , or only under certain constraints, or even that they are still logically independent considerations within QM. What relationship, if any, exists between these two principles?
3B1B's YouTube video mainly talks about the Fourier uncertainty principle between a function $\psi(x)$ in position space and its Fourier transform $\hat\psi(\xi)$ where $\xi$ is the spatial frequency (i.e. the reciprocal of wavelength $\lambda$ ). However, the video concentrates on intuitively explaining this principle and doesn't provide its mathematical formulation. $$\sigma_x \sigma_\xi \geq \frac{1}{4\pi} \tag{1}$$ It basically says that a narrow peak in $x$ -space corresponds to a wide peak in $\xi$ -space, and vice versa. So far this is a purely mathematical statement. There is no physics involved yet. You get something physical by introducing de Broglie's relation , which says that every particle (with momentum $p$ ) behaves like a wave (with wavelength $\lambda=h/p$ ) propagating in space. This can be written in various ways: $$\begin{align} p&=\frac{h}{\lambda} \\ p&=h \xi \\ p&=\hbar k \end{align} \tag{2}$$ These are all identical because of $\xi=\frac{1}{\lambda}$ , $k=\frac{2\pi}{\lambda}$ and $\hbar=\frac{h}{2\pi}$ . Now multiply equation (1) by $h$ , and use $p=h\xi$ from equation (2). Voila, you have derived Heisenberg's uncertainty principle between position $x$ and momentum $p$ . $$\sigma_x \sigma_p \geq \frac{\hbar}{2} \tag{3}$$
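If it helps to see equation (1) in action, here is a small numerical check (a sketch using a Gaussian, which is the shape that saturates the bound; the grid size and width are arbitrary): compute $\sigma_x$ from $|\psi(x)|^2$, take the discrete Fourier transform to get $|\hat\psi(\xi)|^2$ in ordinary spatial frequency $\xi$, and form the product $\sigma_x\sigma_\xi$, which should come out close to $1/(4\pi)\approx 0.0796$.

```python
import numpy as np

N, Lbox = 4096, 200.0
x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
dx = x[1] - x[0]

sigma_target = 1.7                        # arbitrary width of |psi|^2
psi = np.exp(-x**2 / (4 * sigma_target**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

def stddev(values, weights):
    w = weights / np.sum(weights)
    mean = np.sum(values * w)
    return np.sqrt(np.sum((values - mean)**2 * w))

sigma_x = stddev(x, np.abs(psi)**2)

# Fourier transform to spatial frequency xi (cycles per unit length)
psi_hat = np.fft.fftshift(np.fft.fft(psi)) * dx
xi = np.fft.fftshift(np.fft.fftfreq(N, d=dx))
sigma_xi = stddev(xi, np.abs(psi_hat)**2)

print("sigma_x * sigma_xi =", sigma_x * sigma_xi, " (1/4pi =", 1 / (4 * np.pi), ")")
```

Multiplying the printed product by $h$ reproduces the bound $\sigma_x\sigma_p \geq \hbar/2$ of equation (3).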
{ "source": [ "https://physics.stackexchange.com/questions/718659", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/130161/" ] }
718,786
I have read that water (or any other liquid) cannot be compressed like a gas and that it is nearly as elastic as a solid. So why isn't the impact of diving into water equivalent to that of diving onto hard concrete?
Adding another perspective to the existing answers: In your usual diving scenario, water is not confined to the points in space it occupied before, while a slab of ground is – on account of water being liquid and ground being solid. To construct a scenario where you primarily experience the compressibility when diving into water, you would have to exactly encase the body of water with a perfectly rigid wall with only an exactly diver-shaped hole in it – through which the diver needs to enter. (Also, your diver would have to have the same cross-section everywhere along the direction of diving.) In that case, the water cannot escape to the sides anymore and the diver would fully feel that water is incompressible: They would not be able to enter the water at all and crash into it like a wall.
{ "source": [ "https://physics.stackexchange.com/questions/718786", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/340928/" ] }
719,171
Say you have just four radioactive atoms with a half-life of one hour. (I am using a small number of atoms to keep it simple and illustrate my confusion more clearly). So that means one hour from now, two of the atoms will have decayed (on average) and two will remain undecayed (on average). Now, I am struggling to understand why the last two undecayed atoms won't, on average, both decay in the following one hour. After all, if it took one hour for the first two atoms to decay, then surely it should take one more hour for two more atoms to decay... In general, if it takes x years for half of a sample to decay, shouldn't it then logically take another x years for the ENTIRE other half of the sample to decay? Obviously, this isn't the case, but I am struggling to understand why it isn't the case... It's almost as if the atoms in a sample somehow 'know' how many other atoms are in the sample...
It might help you think of it in terms of tossing coins or rolling dice. Say you toss a bunch of coins every minute and get rid of all the ones that turn up tails. The coin collection would have a half-life of a minute. The point is that the coins (atoms) don't know anything about how many others there are or how long they've been waiting around to decay.
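A quick simulation of the coin picture (a sketch; the 50% decay-per-hour probability corresponds to a one-hour half-life, and the sample size is made large just to smooth out the averages): each surviving atom independently decays with probability 1/2 in every hour, and the survivors behave in later hours exactly as they did in the first hour - nothing "remembers" how many atoms there used to be.

```python
import numpy as np

rng = np.random.default_rng(0)
n_atoms = 1_000_000                      # large sample so the averages are smooth
alive = np.ones(n_atoms, dtype=bool)

for hour in range(1, 6):
    decays = rng.random(n_atoms) < 0.5   # each atom: 50% chance of decaying this hour
    alive &= ~decays
    print(f"after hour {hour}: fraction remaining = {alive.mean():.4f}")

# Each hour the *remaining* population halves again; the second hour does not
# finish off the other half, because each atom's chance is independent of history.
```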
{ "source": [ "https://physics.stackexchange.com/questions/719171", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/341151/" ] }
720,787
I'm trying one of the most basic physics home experiments: creating an electromagnet by wrapping electrically-conductive wire around a metal screw. My ingredients: (1) a metal screw - I don't know the material, but it is attracted to magnets; (2) a pipe cleaner - my improvised wire since I don't have a "true" wire; I don't know the material of the metal core but it is attracted to magnets, and the ends are trimmed of chenille (the fuzzy stuff), not 100% clean, but as best as I was able, with scissors; (3) a battery - I tried a 1.5V AA, a 3V CR2450, and a 1.5V D battery; (4) a paper clip - a "target object" to try to attract with the working electromagnet, confirmed as attracted to magnets. The process is the familiar one you'd expect: I wrap my "wire" around the screw, in one direction only, then connect the stripped ends of the wire to the positive and negative ends of the battery. But: no electromagnet. As noted above, I tried different battery types. Question : what could be wrong with my "ingredients" or process? "Getting the thing to work" is actually of secondary importance: I'd like to learn ideas for how to diagnose (or "debug") what could be wrong. As you might be able to tell, I have the notion that "attracted to a magnet" also means "able to conduct electricity" (especially with respect to the pipe cleaner-as-a-wire-substitute), which I'm not certain is true.
The insulation on the pipe cleaner is fine (I tested it) and the only difficulty is getting good electrical contact at the ends. It is best to burn off the end insulation and then scrape the metal with a knife / emery paper until the metal is seen to be shiny. Your null result is due to a number of factors, the main one being that the magnetic field produced by your electromagnet is very small and only realistically detected with a compass or a sensitive magnetometer . You can make one by straightening a paper-clip and then stroking along the paper-clip with a magnet to magnetise the paper-clip. I used a large paper-clip as then as a compass it is more sensitive to changes in the magnetic field around it. You can then either float the paper-clip on water by putting it on an upturned bottle top weighted down with some Blu-Tack, or suspend it from a fine thread, which is what I did. The thread was about $70\,\rm cm$ long anchored on a table top with some Blu-Tack. You will find that if the paper-clip is suspended from its centre it aligns with the Earth's magnetic field even to the extent that it inclines along the line of the non-horizontal Earth's magnetic field lines . For ease of use adjust the point of suspension so that the paper-clip is horizontal. Take a steel screw and test it by bringing it close to the compass; often you will find it is magnetised because one end of the screw repels one end of the compass. Both ends of an unmagnetised steel screw would attract a compass. If you wished you could demagnetise the steel screw by heating it to red heat whilst it is orientated in a magnetic East-West direction. Wind a few turns of pipe cleaner around the screw and connect it to a $1.5 \,\rm V$ C-type battery (or one that is close at hand) using finger and thumb, and you might notice that the ends get warm. Bring the electromagnet close to the compass, note the effect on the compass and then reverse the battery and again note the effect on the compass. Hopefully you will get attraction with the battery connected one way and repulsion with the battery connected the other way around. In some ways all this is "old school" and you can use an iPhone to detect the magnetic field. The magnetometer is at the top right of the iPhone and I used the app Sensor Kinetic Pro which I downloaded from the Apple app store. Here is a screen shot of the recorded magnetic field before and after I switched on the electromagnet which shows how small the field due to the electromagnet is. As a final point think of what you had at hand, a battery, a pipe cleaner etc and what you did not have, eg a compass, a reel of insulated copper wire, etc and then remember that Faraday and others also lacked basic "off the shelf" apparatus. They had to make the apparatus as they went along and to me it makes their discoveries all the more remarkable.
{ "source": [ "https://physics.stackexchange.com/questions/720787", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/273818/" ] }
720,800
The SI Brochure says that each angle is the ratio of two lengths - i.e. a number - and therefore a derived and dimensionless quantity. On the other hand, the papers [ 1 ] and [ 2 ] suggest that angles are inherently neither length ratios nor dimensionless, but rather have a dimension which is linearly independent from the base dimensions of the ISQ . That being said, I believe that you may consider the convention from the SI Brochure less natural for a very simple reason: On the one hand, I think that most of you would intuitively distinguish between angles (e.g. one radian or one degree) and numbers. On the other hand, I think that you cannot consider angles dimensionless and at the same time distinguish between angles and numbers without being inconsistent: Since each dimensionless quantity is a number, the convention that angles are dimensionless is equivalent to the convention that each angle is a number. Note that this claim is consistent with the fact that the SI Brochure literally says that 1 rad = 1 (see page 151 of the English edition ). Is there a mistake in my reasoning/am I missing something? My knowledge of dimensionless quantities is very restricted, so I would like to hear your opinions.
{ "source": [ "https://physics.stackexchange.com/questions/720800", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/237923/" ] }
720,842
It's a well known fact that acceleration due to gravity is independent of the mass of the accelerating body, and only depends on the mass of the body it is accelerating towards and the distance from it. One can prove this mathematically very easily. $$F = \frac{GMm}{r^2}\tag1,$$ $$F = ma\tag2.$$ So, $ma = \frac{GMm}{r^2}$ and $m$ cancels out giving $$a = \frac{GM}{r^2}.\tag3$$ But what if we are to consider the acceleration acting on a massless object (like a photon)? From equation $(3)$ , there would still be an acceleration due to gravity, but from equation $(1)$ , the product of the masses is zero, and therefore the force would be zero. This means that the massless particle will experience acceleration with zero net force. What is the contradiction here? Is it because we cannot divide by $m$ when $m$ is zero?
There are no mass-less particles in Newtonian mechanics and generally classical mechanics. A photon belongs to the realm of quantum mechanics and special relativity. It cannot be accelerated because by mathematical construction of special relativity it always moves with speed c , the speed of light (as for all mass-less particles ). At the quantum level force is represented by dp/dt in the interactions between particles, and a photon interacting with an effective quantum gravitational field has a momentum and it can change , but its speed will always be c.
{ "source": [ "https://physics.stackexchange.com/questions/720842", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/300818/" ] }
720,845
There are many sources online which say that magnetic field lines are imaginary, such as Toppr , Vedantu and CBSE Academic . I do know that magnetic fields are real and do exist. But if we can see magnetic field lines using Magnetic Field Viewing Film , why are they called imaginary?
Most of us will have experimented with placing iron filings around a magnet to get this sort of thing: This particular example is taken from Why iron filings sprinkled near a bar magnet aggregate into separated chunks? The iron filings line up in the direction of the magnetic field and this nicely shows us what the field looks like. Your magnetic field viewing film works in a similar way. It contains flakes of nickel that line up with the field in the same way as the iron filings, and this produces a pattern that shows us what the field looks like. The magnetic field is certainly real, and it has a direction at every point in space, but the field lines are just lines that trace out the direction of the field. They are no more real than contour lines on a map are real.
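As a small illustration of "lines that trace out the direction of the field" (a sketch using an idealised point dipole, not a real bar magnet; the field formula is unnormalised): the field itself is a vector at every point, and the familiar line pattern is just what you get by letting a plotting routine trace curves along those vectors, exactly as the iron filings and the viewing film do physically.

```python
import numpy as np
import matplotlib.pyplot as plt

# Grid of points around an idealised magnetic point dipole at the origin
Y, X = np.mgrid[-2:2:200j, -2:2:200j]
R2 = X**2 + Y**2 + 1e-6              # avoid division by zero at the origin

# 2D dipole-like field with the moment along +y (unnormalised, illustration only)
Bx = 3 * X * Y / R2**2.5
By = (3 * Y**2 - R2) / R2**2.5

plt.streamplot(X, Y, Bx, By, density=1.4, color="k", linewidth=0.7)
plt.gca().set_aspect("equal")
plt.title("Field lines: curves traced along the field vectors")
plt.show()
```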
{ "source": [ "https://physics.stackexchange.com/questions/720845", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/342014/" ] }
721,393
According to Newtonian gravity, when dealing with multiple sources of gravity, the net gravitational field equals the sum of the individual gravitational fields of each source. Does the same hold for general relativity (GR)? Because gravity is described by a spacetime metric, and not by a field, is it accurate to say that given multiple sources of gravity, each source corresponding to some metric, the "net metric" at a given point equals the sum of the individual metrics of each source?
One of the most interesting, and complicated, features of general relativity is the fact it is a non-linear theory, i.e., adding solutions together won't yield a solution. One example of that behavior is a Schwarzschild black hole , which means a black hole with no charge and no rotation (a spinning black hole is a bit more complicated, but it would work as well). The Schwarzschild solution is what is known as a vacuum solution : there is no matter in the spacetime. At any point in spacetime you look, there won't be matter. Still, there certainly is gravity. One of the pictorial ways of interpreting this is by noticing that the gravitational field itself possesses "energy" (in quotation marks, because the notion of energy in general relativity is complicated, as mentioned in this blog post by Sean Carroll). By means of $E=mc^2$ , having energy means, in some sense, having mass, and hence the gravitational field is a source of even more gravitational field. The gravitational energy "creates" more gravity, which leads to more gravitational energy, and then more gravity and... And this is essentially what we call non-linearity. The effects start piling up on each other and the description gets quite complicated. Far more complicated than what one has in Newtonian gravity or Maxwellian electromagnetism, both of which are linear theories. Notice then that if you add two solutions together, you'll be increasing the amount of gravitational energy. That is a source of more gravity, and hence you'll need to account for this extra gravity, which was not present in the two original solutions. Disclaimer: notice that this "energy creates more gravity" view is pictorial, and meant only to bring more intuition. There is no way of assigning an "adequate" notion of energy to the gravitational field (see the Sean Carroll blog post for some more detail). While this picture can be used as a way of getting intuition and interpretation, it does have limitations and should be taken with a grain of salt. Black Holes are made of Vacuum I noticed this bit of the answer caused some discomfort on the comments, so maybe I should add some more resources on it. I believe Kip Thorne is someone particularly famous who quite often mentions how black holes are made of warped spacetime, instead of compact matter. His comments appear in this site now and again. Here are some instances: Why does Kip Thorne claim spacetime warping itself contains energy? Mass without Matter? If a black hole is just warped spacetime, then where is the electric charge? I should also have added before that all of my answer should be understood in the context of General Relativity , which means I'm neglecting all quantum effects. Within the framework of General Relativity, spacetime is a differentiable Lorentzian manifold, which means it must have a well-behaved metric at all points. This prevents the singularity of Schwarzschild spacetime from being a point in the manifold, since a curvature scalar blows up "there" . Hence, in the description provided by General Relativity, there is not a single point in Schwarzschild spacetime where there is matter. All points are at vacuum. "But near the singularity, quantum gravity effects should kick in and—" I agree. This description is not necessarily final, and most likely it will be modified by quantum effects. 
However, it is important to distinguish what happens in the actual Universe—in which the Schwarzschild solution doesn't even exist, since we had a Big Bang and we have a positive cosmological constant—and what is described by General Relativity. I discussed similar issues (the difference between theory and reality) in this post about Classical Electrodynamics . It is one thing to ask whether an actual black hole corresponds to complete vacuum, and another thing to ask whether a GR solution is a vacuum solution.
{ "source": [ "https://physics.stackexchange.com/questions/721393", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/327405/" ] }
721,809
On July the 29th 2022, the Earth completed its rotation about 1.5 milliseconds short of the full 24 hours. Scientists link this to climate change, saying that a possible reason could be the melting of polar glaciers. I do not know for sure what would dictate this, but what came to my mind first was the law of conservation of angular momentum. If glaciers melt, then the water gets spread out across the oceans, so the mass located away from the rotation axis increases. This means that there is an increase in the moment of inertia. But doesn't this mean there should be a decrease in the rotational speed? I wonder whether there is some larger physical phenomenon at play, something with greater influence on the rotational speed.
Glaciers are water that is frozen and high up in the mountains. If you thaw that ice and the water flows back to sea-level, then it would seem that mass in the water would get closer to the rotation axis; the moment of inertia of the Earth would decrease and the angular speed increase . However, it is not such a simple calculation because ice is also melting at the poles, the sea level can rise more at the equator and in addition, the density of water is temperature dependent, the weight of water can deform the crust and the rotation axis of the Earth is also shifting in response to the distribution of water (e.g., Deng et al. 2021 ).
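To see roughly how the conservation-of-angular-momentum argument in the question translates into a change in the length of day (a toy order-of-magnitude sketch; the redistributed mass below is a made-up illustrative figure, and all the complications listed above are ignored): since $L = I\omega$ is conserved, the fractional change in the day length equals the fractional change in the moment of inertia, whichever direction the mass moves.

```python
# Toy estimate: move a mass m from near the rotation axis out to the equator.
I_earth = 8.0e37          # kg m^2, Earth's moment of inertia (approximate)
R_earth = 6.371e6         # m
T_day   = 86400.0         # s

m = 4.0e14                # kg of redistributed water -- illustrative only

dI = m * R_earth**2       # crude upper bound on the change in moment of inertia
dT = T_day * dI / I_earth # from L = I*omega = const  =>  dT/T = dI/I

print(f"dI/I ~ {dI / I_earth:.1e}, day lengthens by ~ {dT * 1e6:.1f} microseconds")
```

The point of the toy numbers is only the scale: plausible redistributions of water change the day length by microseconds, not milliseconds, which is why the observed 1.5 ms fluctuation cannot be pinned on a single simple mechanism.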
{ "source": [ "https://physics.stackexchange.com/questions/721809", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/259747/" ] }
721,815
We have defined the electric field to easily calculate the force exerted by a system of charges on another charge. What happens to the electric field if the test charge is removed? Does it still exist or does it vanish? Is there a way to find out and prove its existence in the absence of test charges?
{ "source": [ "https://physics.stackexchange.com/questions/721815", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/312651/" ] }
721,935
I always thought that raindrops look like this emoji . But today, I shot some rain in slow-mo (see on YouTube ), and the drops look more like sticks. Was it some light effect of my camera, or do they really look like that? And if it's the real deal, why do they look like that? Edit: I found another video that I took of that rain. I didn't see it at first, but after looking again, it shows raindrops as little balls. I took the videos 10 min apart. The second video (with balls) is when the rain just started. Why does the rain look like sticks in one video and like balls in another? I used the same camera to record it. Screenshot at 00:00:18
Your shutter speed is too slow, and you are seeing the raindrops travel within each video frame. Falling raindrops are approximately spherical. The teardrop shape sometimes occurs in droplets moving across a surface, such as a raindrop on glass (or, I suppose, a teardrop on a person’s face).
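A quick back-of-the-envelope check of the streak explanation (a sketch; the fall speed and the exposure times are typical assumed values, not measured from the videos): the streak length is just the distance a drop falls during one frame's exposure.

```python
v_drop = 8.0   # m/s, typical terminal speed of a large raindrop (assumed)

for exposure in (1 / 30, 1 / 250, 1 / 2000):      # seconds per frame (assumed shutter times)
    streak_cm = v_drop * exposure * 100
    print(f"exposure 1/{round(1 / exposure):>4d} s  ->  streak ~ {streak_cm:5.1f} cm")
```

With a slow shutter the drop smears into a stick tens of centimetres long; with a fast shutter it is frozen into the near-spherical ball you saw in the second video.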
{ "source": [ "https://physics.stackexchange.com/questions/721935", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/57376/" ] }
721,941
If we observe a charged particle like an electron passing us at some high speed $u$ , then as $u \to c$ the field we observe looks like a superposition of plane waves normal to the trajectory of the electron. The field can be Fourier transformed, and the modes can be associated with virtual photons. See for example the discussion in chapter 19 of Classical Electricity and Magnetism by Panofsky and Phillips. Is this virtual photon we talk about in classical electrodynamics the same as the virtual photon that is the force carrier in quantum electrodynamics?
{ "source": [ "https://physics.stackexchange.com/questions/721941", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/331720/" ] }
722,403
Based on the Maxwell equations we know that (1) a time-varying magnetic field induces an electric field, and (2) a time-varying electric field induces a magnetic field. Suppose that an electric field, which is induced by a time-varying magnetic field, is itself time-varying (not stationary). Does the induced time-varying electric field then itself induce another time-varying magnetic field? Time-varying magnetic field -> Time-varying electric field -> Time-varying magnetic field -> Time-varying electric field -> ... If we measure the net magnetic (or electric) field at a point, do we actually measure the resultant of all the individually induced magnetic (or electric) fields in this chain?
You are thinking about it in the wrong way. What Maxwell's equations tell you is that when you have a time-varying magnetic field, there must also be an electric field present that satisfies the Maxwell-Faraday equation. The two fields co-exist (and indeed are different aspects of THE electromagnetic field). Similarly, there must be the right value of current density present such that both sides of the Ampere-Maxwell equation agree. In each case, the equality sign does not imply a causal relationship, it merely says that the left and right hand sides must be equal.
{ "source": [ "https://physics.stackexchange.com/questions/722403", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/321026/" ] }
722,421
I was told that the entropy change is $0$ during adiabatic expansion. However, according to $\mathrm dS=\frac{C}{T}\,\mathrm dT$ , $\delta S$ is not zero because the temperature is not constant during an adiabatic process. What is wrong in my derivation?
{ "source": [ "https://physics.stackexchange.com/questions/722421", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/342755/" ] }
722,966
From Maxwell's equations, we can find out that certain waves exist. However, it's unclear to me why 19th-century people thought that what they had called light was a wave. As far as I know, 19th-century people weren't able to detect light with the electric or magnetic devices of that era. So I wonder how they could connect these two together. How could they know? Or why did they think so?
As you already said, using all four Maxwell equations in a vacuum ( $\rho=0$ , $\mathbf{j}=\mathbf{0}$ ), we get the wave equations: $$\Delta\mathbf{E} =\nabla(\underbrace{\nabla\cdot\mathbf{E}}_{=0}) -\nabla\times\left(\nabla\times\mathbf{E}\right) =\nabla\times\frac{\partial\mathbf{B}}{\partial t} =\mu_0\varepsilon_0\frac{\partial^2\mathbf{E}}{\partial t^2}$$ $$\Delta\mathbf{B} =\nabla(\underbrace{\nabla\cdot\mathbf{B}}_{=0}) -\nabla\times\left(\nabla\times\mathbf{B}\right) =-\mu_0\varepsilon_0\nabla\times\frac{\partial\mathbf{E}}{\partial t} =\mu_0\varepsilon_0\frac{\partial^2\mathbf{B}}{\partial t^2},$$ which describe a wave propagating with the velocity $1/\sqrt{\mu_0\varepsilon_0}$ , whose value is exactly that of the speed of light, hence $\mu_0\varepsilon_0c^2=1$ . James Maxwell commented this result with: " This velocity is so nearly that of light, that it seems we have strong reason to conclude that light itself (including radiant heat, and other radiations if any) is an electro­magnetic disturbance in the form of waves propagated through the electromagnetic field according to electromagnetic laws ."
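One can repeat Maxwell's numerical check directly with modern CODATA values (a trivial sketch, assuming SciPy is available for the constants):

```python
from scipy.constants import mu_0, epsilon_0, c

v = (mu_0 * epsilon_0) ** -0.5
print(f"1/sqrt(mu0*eps0) = {v:.6e} m/s")
print(f"speed of light c = {c:.6e} m/s")
```

The two numbers agree, which is exactly the coincidence that led Maxwell to identify light with electromagnetic waves.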
{ "source": [ "https://physics.stackexchange.com/questions/722966", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/272784/" ] }
722,986
The continuity equation states that if the cross-sectional area decreases (ex. a tube getting narrower), speed has to increase. If the tube is horizontal, this speed increase has to be provided by a force exerted by the surrounding fluid. In symbols, $pA > p'A'$ , where $pA$ is the force that is causing the acceleration, and $p'A'$ the force due to pressure on the other side of the fluid element. We know that the tube has become narrower, so $\frac{A'}{A} < 1$ . But then, couldn't $p'$ be larger than $p$ ? For example, if $A = 2, A' = 1$ , we need that $2p > p'$ . Something like $p = 1, p' = 1.5$ would be a valid solution. Edit: the second answer of the suggested question assumes the cross-section is constant.
{ "source": [ "https://physics.stackexchange.com/questions/722986", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/332598/" ] }
723,247
The double-slit experiment requires wave-like interference of possible paths according to the quantum action principle, but it doesn't require entanglement since it's a single-particle phenomenon. Isn't entanglement arguably even more mysterious, since it violates local realism and thus can't be explained by de Broglie's pilot wave theory? Is there some reason Feynman wasn't concerned by entanglement?
I think what he was getting at is that we don't yet have a single agreed interpretation of what is 'really' happening at a quantum level, and the two-slits experiment typifies the nature of the conceptual gap we have yet to fill. Entanglement in that context can be considered as 'just' another example of the strange effects that arise from the quantum nature of matter. The fundamental issue with the two slits experiment- and all the other experiments that exhibit quantum interference effects- is that we don't really understand the link between a particle and its associated wave function. We know the wave function tells us probabilistically where the particle might be found, and we know that if we model the two slits experiment by assuming that the wave function of an incident electron interferes with itself, then we get a result that agrees with experiment. But why should the wave function behave in that way? What causes the wave function to be blocked by the screen and pass only through the slits, given that: a) the screen is in any case largely empty space at a microscopic level; b) it doesn't seem to matter what the screen is made of; c) the effect happens whether the incident particle is electrically charged or a neutron, say; and d) quite large objects with hundreds of atoms can be diffracted. So why, in all those disparate cases, can we model what happens simply by assuming that the inbound object has an associated wavelength and the screen blocks the propagation of the incident wave in exactly the same way that a macroscopic screen with two slits would affect a water wave? And bear in mind that wave-functions are abstract mathematical entities with imaginary components, so why should they be physically diffracted? I think that when we have a crystal clear and universally agreed answer to questions like that, we will also know how best to conceptualize entanglement.
{ "source": [ "https://physics.stackexchange.com/questions/723247", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/230823/" ] }
724,438
If a person jumps from the first floor of a building and lands on a concrete surface, they will suffer serious injury because of Newton's third law. If the same person jumps the same distance and lands in a swimming pool filled with water, however, then there will not be any serious injury. The person in both cases lands with the same amount of force. Why doesn't water offer the same amount of force in return as concrete?
It is not the case that you "land with the same amount of force" - you land with the same amount of kinetic energy , the difference is how long it takes to dissipate that energy. It all comes down to the "stopping time" - when you land on concrete, you go from your impact velocity to zero velocity in a fraction of a second. When you land in water, you plunge below the surface and come to a stop quite a bit slower, over the course of many fractions of a second. $F=ma$ , and $a = \Delta v/\Delta t$ . In both cases, $\Delta v$ is the same (you go from impact velocity to 0), but when you land in water, $\Delta t$ is much greater, making $a$ and therefore $F$ much lower. This is the same principle behind crumple zones in cars, or why you should bend your knees when landing a jump - by extending the deceleration time, you decrease the force exerted. The reason why the deceleration times are different between concrete and water is related to the fact that concrete is a solid and water is a liquid. The molecules in concrete are locked into a rigid configuration. Concrete molecules don't move around freely - when you push on concrete, the concrete doesn't move, it pushes back to resist even large forces. Molecules in water, on the other hand, freely flow past one another - when you push on water, it accelerates out of the way. When confronted with a large force, a material can either resist it (like concrete), or yield to it (like water). Imagine being on ice skates - you can push off a rigid wall to accelerate yourself backwards, but if you push off another person on skates, you won't move as quickly, since the thing you're pushing off of yielded to the force of the push.
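A rough worked example of the stopping-time argument (a sketch with assumed numbers, not data): a 70 kg person falling from about 5 m hits at roughly 10 m/s; compare stopping in a few milliseconds on concrete with stopping over half a second as they plunge into water.

```python
import math

m, h, g = 70.0, 5.0, 9.81          # kg, m, m/s^2 (assumed values)
v = math.sqrt(2 * g * h)           # impact speed, ~10 m/s

for surface, dt in (("concrete", 0.005), ("water", 0.5)):   # assumed stopping times
    F_avg = m * v / dt             # average decelerating force, F = m * dv/dt
    print(f"{surface:8s}: stop in {dt * 1000:5.0f} ms  ->  average force ~ {F_avg / 1000:6.1f} kN"
          f"  (~{F_avg / (m * g):4.0f} times body weight)")
```

The same change in velocity spread over a hundred times longer stopping time gives a hundred times smaller average force, which is the whole difference between the two landings.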
{ "source": [ "https://physics.stackexchange.com/questions/724438", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/343936/" ] }
724,694
I would expect two deuterium nuclei to fuse straight into a single helium-4 nucleus, because that's by far the most stable way to arrange 2 protons and 2 neutrons. But instead, any two fusing deuterons have a 50-50 chance of producing either a tritium nucleus and a neutron or a helium-3 nucleus and a proton. Why is this?
The deuterium fusion reaction is extremely exothermic. It releases about a million times more energy than a typical chemical reaction, and that energy has to go somewhere. If we had two deuterium nuclei fusing to form a helium-4 nucleus there is nowhere for the energy to go and the helium nucleus would just split up again. So the newly formed helium nucleus has to get rid of all that energy, and there are three ways to do this: the helium nucleus could release the energy as a gamma ray and form ${}^4\mathrm{He}$ directly the helium nucleus could release a proton to form ${}^3\mathrm{H}$ . Then the energy is carried away as the kinetic energy of the proton and the ${}^3\mathrm{H}$ nucleus. the helium nucleus could release a neutron to form ${}^3\mathrm{He}$ . Then the energy is carried away as the kinetic energy of the neutron and the ${}^3\mathrm{He}$ nucleus. But these three branches have very different probabilities. About 55% of the time reaction (3) occurs and we end up with helium-3. About 45% of the time reaction (2) occurs and we end up with tritium. Reaction (1) happens only about 0.0001% of the time so it's very rare for the fusion to form helium-4 in one step. Now the next question is why emitting a photon is so much less probable than emitting a proton or neutron, and as Chris commented below we can answer this in a handwaving way. Creating a photon involves an electromagnetic interaction, while ejecting a proton or neutron requires only a strong force interaction. The EM force is much, much weaker than the strong force so in general any process involving the EM force is much slower than interactions involving the strong force.
{ "source": [ "https://physics.stackexchange.com/questions/724694", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/310823/" ] }
724,715
In my country (and maybe all around the world, I don't know) once electricity has been generated, it is then raised to 200 kV for transmission. I know this is to reduce the loss. Given $P=U \cdot I$ and $P=I^2 R$ , raising $U$ will lower $I$ and so limit the loss by the Joule effect . From what I've read, one of the reasons electricity is transmitted as AC is that it is easier/cheaper to raise AC to 200 kV than it would be for DC. Why?
Changing the voltage of AC can be done with a simple iron core transformer. That's a simple device without moving parts that only consists of a magnetic core, copper wire and some insulation (optionally a cooling fluid). Almost nothing that can break. Good transformers can have amazing efficiencies of well over 95%. There are other benefits to using AC over DC as well (and also downsides). With AC you have far fewer problems with arcing on switches: if arcing starts with AC, it will often stop at the next zero crossing of the AC. With DC, the arc won't stop by itself. Also, with AC you have fewer problems with material starting to wander because of electrolytic effects. And running motors with (especially 3 phase) AC is close to trivial without the need for brushes. With DC you need brushes or some smart electronics (BLDC-Motors are basically AC motors with some smart electronics attached). Also, a power grid with AC is self stabilizing (to some extent) via the frequency of the AC. Downside of AC is losses due to capacitance (reactive current also causes resistive losses). Phase shift is always an issue as soon as you work with AC. Converting DC to another voltage takes more effort. One way is to drive a DC motor that is mechanically coupled with a DC generator. Such systems are big, have moving parts and have lower efficiency. Today, we have the electronics to do that better. We basically chop the DC up into AC, put that through a transformer and rectify the output of that again... voila, a DC to DC converter (this is all very simplified).
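Coming back to the numbers in the question (a toy example with made-up line parameters): for a fixed power delivered over a line of fixed resistance, the resistive loss $I^2R$ drops with the square of the transmission voltage, which is the whole reason for stepping the voltage up, and a transformer does that step-up with nothing more than a turns ratio, $V_2/V_1 \approx N_2/N_1$.

```python
P = 100e6        # W, power to transmit (assumed)
R_line = 10.0    # ohms, total line resistance (assumed)

for U in (20e3, 200e3):          # transmission voltage in volts
    I = P / U                    # current needed to carry the same power
    loss = I**2 * R_line         # Joule loss in the line
    print(f"U = {U / 1e3:5.0f} kV: I = {I:6.0f} A, loss = {loss / 1e6:6.1f} MW "
          f"({100 * loss / P:.1f}% of the transmitted power)")
```

At the lower voltage the toy line would dissipate more than it delivers; at ten times the voltage the loss falls by a factor of one hundred.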
{ "source": [ "https://physics.stackexchange.com/questions/724715", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/344072/" ] }
725,457
Because water is denser than air, sound waves travel faster and with more energy in water than in air. However, we are worse at hearing in water than in air. Why is this? To clarify, I was comparing these two cases:

1. Having both the sound source and the listener (a human) underwater
2. Having both the sound source and the listener (a human) above water

Supposedly, sound waves carry 'better' in denser media, but we humans cannot hear very well underwater.
Impedance mismatch. The impedance ratio or the admittance ratio (admittance = inverse of impedance) describes how much of a wave is reflected or transmitted at the boundary of two media depending on the frequency. In principle, the ear is an impedance transducer that converts sound waves hitting the eardrum into smaller, more powerful vibrations by means of the auditory ossicles, which act on the cochlea. If the medium acting on the ear is water instead of air, to which the eardrum is optimised, there is an impedance mismatch and the waves are largely reflected instead and lead to only minor vibrations of the eardrum.
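To attach a number to the mismatch, one can compute the intensity transmission coefficient for a plane sound wave hitting a boundary between two media, $T = 4Z_1 Z_2/(Z_1+Z_2)^2$. The impedance values below are rough textbook figures for air and water, and the calculation ignores the ear's actual geometry, so this is only an order-of-magnitude sketch:

```python
import math

# Characteristic acoustic impedance Z = density * sound speed (approximate values)
Z_air   = 1.2 * 343        # kg/(m^2 s), ~ 4e2 rayl at room temperature
Z_water = 1000 * 1480      # kg/(m^2 s), ~ 1.5e6 rayl

T = 4 * Z_air * Z_water / (Z_air + Z_water) ** 2   # fraction of intensity transmitted
R = 1 - T                                          # fraction reflected

print(f"transmitted: {T:.4%}   reflected: {R:.4%}")
print(f"transmission loss: {-10 * math.log10(T):.1f} dB")
```

Roughly 0.1% of the incident intensity crosses such a boundary (a loss of about 30 dB), which is the scale of mismatch the middle ear normally has to overcome and fails to overcome when the outside medium is water instead of air.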
{ "source": [ "https://physics.stackexchange.com/questions/725457", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/341491/" ] }
725,471
https://youtu.be/EIqKG5TiSYs . I have some confusion regarding group and phase velocity. Group velocity exists for a group of waves; it's stated in the video that the group velocity is the sum of the phase velocities of the individual waves (phase velocity is given by $\dfrac{ω}{k}$). If the group velocity $\dfrac{dω}{dk}$ is the sum of the phase velocities of the individual waves, then why is it a differential quantity and not an integral quantity?
Impedance mismatch. The impedance ratio or the admittance ratio (admittance = inverse of impedance) describes how much of a wave is reflected or transmitted at the boundary of two media depending on the frequency. In principle, the ear is an impedance transducer that converts sound waves hitting the eardrum into smaller, more powerful vibrations by means of the auditory ossicles, which act on the cochlea. If the medium acting on the ear is water instead of air, to which the eardrum is optimised, there is an impedance mismatch and the waves are largely reflected instead and lead to only minor vibrations of the eardrum.
{ "source": [ "https://physics.stackexchange.com/questions/725471", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/240797/" ] }
725,572
I've been reading up on photons, and find myself puzzled by an element of them... How can photons have an electric field without having a charge? Correct me if I am wrong but I believe electric fields can only be created by charged particles, which photons aren't. When I did some research most other sources also told me this, so what is going on here?
I believe electric fields can only be created by charged particles

There are two things that can produce a (disturbance of the) electric field:

1. A charged particle
2. A changing magnetic field

Since an electromagnetic wave has a changing magnetic field component, it can produce a (disturbance of the) electric field without a charged particle being present. Initially, there was a charged particle involved at the source of the electromagnetic wave, but this particle doesn't travel with the wave. This is similar to how if you drop a rock in a pond and produce a water wave, the rock doesn't have to travel along the wave to sustain the wave.
{ "source": [ "https://physics.stackexchange.com/questions/725572", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/344424/" ] }
725,582
So when we use light bulbs...

1. The heat excites an electron
2. The energy makes the electron go to a higher orbital - a higher energy level
3. The electron comes back to a lower energy state
4. Light is emitted in the process

Now: Does the simple movement of the electron produce a disturbance in the EM field, thus generating a photon? Or does the energy released from transitioning to a lower state disturb the EM field, thus generating a photon? None of the above? I've found explanations on the web claiming that the photon is produced out of nothing, but it sounds strange... Is the EM field 0 with no disturbance? Then you could say the photon comes from nothing... but the field is there, it just has a 0 value.
I believe electric fields can only be created by charged particles There are two things that can produce a (disturbance of the) electric field: A charged particle A changing magnetic field Since an electromagnetic wave has a changing magnetic field component, it can produce a (disturbance of the) electric field without a charged particle being present. Initially, there was a charged particle involved at the source of the electromagnetic wave, but this particle doesn't travel with the wave. This is similar to how if you drop a rock in a pond and produce a water wave, the rock doesn't have to travel along the wave to sustain the wave.
{ "source": [ "https://physics.stackexchange.com/questions/725582", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/344512/" ] }
726,654
Apparently expressions such as $$ \int \delta (x) f(x)dx = f(0)\tag{1}$$ are widely used in Physics. After a little discussion on Math SE, I realized that these expressions are absolutely wrong from the mathematical point of view. My question is: can this lack of rigour in Physics affect results (i.e. lead to mistakes or wrong results), or is it OK to use these expressions as long as we keep in mind that the Dirac delta "function" is a distribution? PS: I'm adding another example of a fallacious expression: the Fourier transform of the $\delta$ function being $1$ ( $\hat \delta = 1$ ). It turns out the Fourier transform of the Dirac delta is not the function $1$, but the regular distribution associated with the function $1$.
The expression $$ \int \delta(x)f(x)\mathrm{d}x = f(0)$$ is not wrong, you simply need to read the left-hand side of the equation as what a mathematician would write something like $\langle \delta, f\rangle$ , i.e. the application of the $\delta$ -distribution to a function. That is, given a function or distribution $g$ , we write its application/inner product with a test function $f$ as $\int g(x)f(x)\mathrm{d}x$ - because for an actual function that literally is how the associated distribution is defined. It's just notation - what is wrong is to believe that $g(x)$ on its own has any meaning in general, since distributions do not have values at points. This means that this notation does not really distinguish between a function $f(x)$ and the distribution defined by $\int f(x) \cdots \mathrm{d}x$ , and so saying that the Fourier transform of $\delta$ is the constant function 1 is perhaps sloppy, but not wrong. There is a difference between being a bit sloppy (as in these cases) and being meaningfully wrong , as when saying things like $\delta(x) = 0$ for all $x\neq 0$ . And even then you might sometimes find instances in physics where things like $\delta(0)$ are written and you might want to say it's all wrong because that doesn't mean anything but again, there are interpretations of this notation that make sense (in that example that $\delta(0)$ is essentially code for an infinite volume limit of a finite theory that we don't really want to spell out in detail). Being rigorous in the mathematical sense is not a binary state - we're not either completely rigorous or completely wrong, but almost always something in between, and walking on the boundary between physics and mathematics requires us to be careful with our judgements: Many things that seem "wrong" can be reinterpreted in terms of shorthand notation, others are secretly right but simply have unstated hypotheses, some might really be unsalvageable.
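A quick numerical way to see why the shorthand $\int \delta(x)f(x)\,\mathrm{d}x = f(0)$ is harmless is to replace $\delta$ by a narrow normalized Gaussian and watch the integral approach $f(0)$ as the width shrinks. This is only an illustration of the distributional limit, not part of the answer above; the test function and widths are arbitrary choices:

```python
import numpy as np

def nascent_delta(x, eps):
    """Normalized Gaussian of width eps; tends to the delta distribution as eps -> 0."""
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

f = lambda x: np.cos(x) + x**2        # any smooth test function, f(0) = 1

x = np.linspace(-1.0, 1.0, 400001)    # fine grid so the narrow peak is resolved
dx = x[1] - x[0]
for eps in (0.3, 0.1, 0.03, 0.01, 0.003):
    integral = np.sum(nascent_delta(x, eps) * f(x)) * dx   # simple Riemann sum
    print(f"eps = {eps:6.3f}: integral = {integral:.6f}   (f(0) = {f(0.0):.6f})")
```

As `eps` decreases the printed value converges to `f(0)`, which is exactly the pairing $\langle\delta, f\rangle$ that the integral notation abbreviates.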
{ "source": [ "https://physics.stackexchange.com/questions/726654", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/328424/" ] }
727,970
This is what is written in The Feynman Lectures on Physics, Vol. 1 (ch.5) We now believe that, for various reasons, some days are longer than others, some days are shorter, and on the average the period of the earth becomes a little longer as the centuries pass. Why should some days be longer than the others? There is no “gravitational” source of external torque acting on the earth, so why does its rotational angular velocity change?
The Earth is not a single rigid body, but consists of at least five separate regions which can move relative to one another. These are the crust (which is the region that we use to measure day length), the mantle, the core, the oceans and the atmosphere. Although the total angular momentum of the Earth may not change, these regions can and do exchange angular momentum between themselves over timescales ranging from days to decades. This leads to fluctuations in the angular velocity of the crust, and hence fluctuations in the length of a day. This Wikipedia article describes some of the mechanisms by which the different regions exchange angular momentum. Over long periods of time, the Earth and the Moon exchange angular momentum through tidal effects, leading to a gradual but steady increase in the average length of a day. This effect is of the order of a few milliseconds per century.
{ "source": [ "https://physics.stackexchange.com/questions/727970", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/271783/" ] }
728,129
If all reference frames are valid, then why is the geocentric model taught as "wrong" in schools? I've checked many websites but none of them clear the issue. Wiki says that in relativity, any object could be regarded as the centre with equal validity. Other websites and answers make a point on the utility of the heliocentric model (simplicity, Occam's razor...) but just because something is not so easy to deal with doesn't mean it is wrong. Note: I am not asking for evidence that geocentrism is wrong; I am asking for a way to resolve the contradiction (from what I see) between relativity and this "geocentricism is wrong" idea.
If 'geocentrism' means that you can regard the Earth as stationary and describe the motion of the Sun and planets accordingly, then geocentrism isn't wrong. But if 'geocentrism' means that the Sun and planets have simple (for example circular) orbits about the Earth, then it is wrong. Almost 2000 years ago, Ptolemy knew that a geocentric solar system based on circles needed the planets to move in circles nested on circles nested on circles in order for theory to match observation – which for some planets even involves their stopping and going backwards for a while. [The nested circle treatment is analogous to a Fourier analysis of a complicated shape of orbit.] A heliocentric system based on circles rather than ellipses still needs these 'epicycles', but smaller ones and fewer of them. I'd add that I think it's perfectly reasonable to teach children that the Earth and other planets "go round the Sun". There's no reason, though, to say to them that the Sun, any more than the Earth, is stationary.
{ "source": [ "https://physics.stackexchange.com/questions/728129", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/311516/" ] }
728,212
In quantum mechanics and quantum field theory it is especially common to work in both position and momentum space. Passing to momentum space is sometimes crucial, as one usually finds that the Hamiltonian of the system is diagonal there. However, it seems that the ultimate objects of the theory are always position-dependent. We are always looking for fields $\psi({\bf{x}},t)$ , propagators $C(x-y)$ , Schwinger functions $\langle 0|\psi(x_{1})\cdots\psi(x_{n})|0\rangle$ and so on. Is there any reason for always aiming at position-dependent objects, even when calculations are easier in momentum space? Maybe the reason is that real-world experiments usually measure things in position space. If so, does it imply that momentum space is "less physical" in some sense?
Your question can be equivalently phrased as whether position space is more physical than momentum space. In a sense, yes. One of the basic facts about our universe (as far as we can tell) is that it exhibits locality in position space, which informally means that what is going to happen at a particular point in space over an infinitesimal interval of time depends only on the state of the universe in an infinitesimal neighbourhood of that point. Formally, it means the laws of physics are expressible as differential equations (there is a lot of nuance here that I am not qualified to discuss). You can't have nontrivial laws of physics that are local in both position space and momentum space. I guess if the laws of physics had been local in momentum space, then we would have just called momentum space "position space" instead. Of course, you can imagine hypothetical universes that don't have any locality. Anyway, that's why position space is special.
{ "source": [ "https://physics.stackexchange.com/questions/728212", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/283053/" ] }
728,239
I understand that the simplest equation used to describe capacitance is $C = \frac{Q}{V}$ . While I understand this doesn't provide a very intuitive explanation, and a more apt equation would be one that relates charge to area of the plates and distance between them, I'm having trouble understanding it in general. Capacitance seems to be describing, well, the capacity of two plates to store charge (I understand that the electric field produced between them is generally the focus more so than the actual charge). Shouldn't it just be measured in units of charge such as coulombs? I'm sure this is due to a lack of more fundamental understanding of electric potential and potential difference but I'm really not getting it.
An analogy here would be a pressure vessel, and asking what mass of air will fit inside. While the tank has a fixed volume, the amount of air that will go inside depends on the pressure that you use to force it in. Over quite a wide range the relationship is linear: at double the pressure, you have double the mass of air. Similarly, the capacitor doesn't have a fixed amount of charge that will fit. The amount depends on the electrical "pressure" (voltage) that is used. Actually your initial equation is the useful one. Unless we're constructing one, we usually do not care about the physical particulars of a capacitor. Instead we want to know how much charge will move if we change the voltage. For a "larger" capacitor (higher capacitance), more charge will fit at a given voltage.
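To make the "amount stored depends on the pressure" point concrete, here is a tiny numeric sketch using $C = Q/V$ together with the parallel-plate formula $C = \varepsilon_0 A/d$. The plate size, gap and voltages are arbitrary example numbers:

```python
# How much charge fits on a capacitor depends on the applied voltage.
eps0 = 8.854e-12            # F/m, vacuum permittivity
A = 0.01                    # m^2, assumed plate area (10 cm x 10 cm)
d = 1e-4                    # m, assumed plate separation (0.1 mm)

C = eps0 * A / d            # parallel-plate capacitance, about 0.9 nF
print(f"C = {C*1e9:.3f} nF")

for V in (1.0, 5.0, 12.0):  # different electrical "pressures"
    Q = C * V               # charge stored at that voltage
    print(f"V = {V:5.1f} V  ->  Q = {Q*1e9:.3f} nC")
```

The capacitance (in farads, i.e. coulombs per volt) is the fixed property of the device; the stored charge scales with whatever voltage you apply, which is why charge alone cannot be the unit.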
{ "source": [ "https://physics.stackexchange.com/questions/728239", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/329127/" ] }
728,921
When I pull a cork out of a wine bottle, it usually expands slightly in circumference. This makes sense: you want the cork to be slightly compressed relative to its natural diameter when it's inside the bottle in order to make a tight seal. But I've noticed that the cork usually expands asymmetrically, with the end that was originally toward the inside of the bottle expanding significantly more than the end that was toward the outside of the bottle. After the cork is pulled out, it usually ends up noticeably asymmetrical, with the inner end wider than the outer end. This isn't just true for mushroom-shaped champagne-style corks, but also for corks that (as far as I can tell) are perfectly cylindrical and symmetric when they're initially inside the bottle. In fact, the asymmetry in the expanded cork is often so pronounced that the end that was initially toward the inside of the bottle has expanded so much that I can't fit it back into the bottle, but I can fit the end that was initially toward the outside into the bottle. That is, I can't fit the cork back into the bottle in its initial orientation, but if I flip it upside-down then I can fit it in. What causes this asymmetric expansion (for still wine corks)? Do the manufacturers actually make the corks asymmetric for some reason? Or does it have something to do with the fact that the corkscrew is penetrating the outer end of the cork but not the inner end, so the force is applied unevenly? Or maybe something to do with the difference in air pressure on the two ends as the cork is pulled out? Or maybe I'm just making up this whole phenomenon? (Although other people I've spoken to have noticed it as well.) It seems to me that the asymmetry is much more pronounced for synthetic corks than for natural ones, so I suspect that it may have something to do with corks' material properties.
Wine corks are cylindrical before they go into the bottle. When you remove the cork it may have a different shape. Not all corks have the tapered shape; only some do.

Champagne corks The tapered shape is especially present with champagne corks. Below you see the before and after image of a Champagne cork. The top is wider because the cork does not fully enter the bottle. But the bottom is tapered. The reason for this is that the bottom is made out of two dense cork disks rather than agglomerated material. The material is different. The degree to which the bottom part expands depends on the age, and the shape is given names: juponne (strongly tapered like a petticoat) for young corks that expand a lot, and cheville (little tapered, almost straight like a wall plug) for old corks that expand less.

Regular wine corks The tapered shape can still be present with other corks that do not have these disks on the bottom.

Plasticiser A reason for this can be that the water activity of the cork is different on the inside than on the outside, and this changes the properties of the cork material. On the one hand, water works as a plasticiser and the Young's modulus decreases when corks are more humid (this is possibly why people sometimes soak wine corks in water before bottling). So you would expect the wet bottom side of the cork to expand less strongly, but possibly the hysteresis is stronger for the drier top side (although I can't find any sources about this). An ageing effect like with the champagne corks might also play a role: the wine cork will not return fully to its original shape after having been compressed for such a long time, and the drier side might be more rigid and rearrange more slowly, or structures could have been broken there that were not broken as much on the wet side.

Swelling Another effect is that cork expands when it is more humid. This effect is described in this article: Rosa, M. Emília, and M. A. Fortes. "Water absorption by cork." Wood and fiber science (1993): 339-348. "Water absorption in the cell walls causes the expansion of cork"

Bottle shape These images from https://winemakersacademy.com/cork-closures-oxygenation/ show that the bottle neck does not appear to cause the shape. In this image the neck is straight and the cork still comes out tapered. It is also visible up to what point the wine entered the pores of the cork, and it is at that line that the difference in diameter is most pronounced. However, the effect that causes this correlation might be reversed (the cork shape causes the wine to go up, instead of the other way around). If the bottleneck is tapered then the bottom part is compressed less, and it may be that which causes the wine to move up the cork. As explained in the answer by FourW, bottlenecks are not completely cylindrical but slightly conical. Other sources for this are in Prades López, C., and M. Sánchez-González (2019). They refer to European standard EN 12768, which states that the diameter at the top should be 18.5 ± 0.5 mm and 45 mm below the top it can be 20 ± 1 mm. In addition the mean diameter should be at most 1 mm less than the diameter at the top. Prades López, C., and M. Sánchez-González "Behavior of Natural Cork Stoppers when Modifying Standard Corking Parameters: Three Practical Cases." Cork Science and its Applications II 14 (2019): 20. google-books link
{ "source": [ "https://physics.stackexchange.com/questions/728921", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/92058/" ] }
728,927
What is a concise definition for electric potential? I'm interested to know because there are many different definitions. Any help is much appreciated.
Wine corks are cylindrical before they go into the bottle. When you remove the cork then they may have different shapes. Not all corks have the tapered shape. Only some. Champagne corks The tapered shape is especially present with champagne wine corks. Below you see the before and after image of a Champagne cork. The top is wider because the cork does not fully enter the bottle. But the bottom is tapered. The reason for this is that the bottom is made out of two dense cork disks rather than agglomerated material. The material is different. The degree to which the bottom part expands depends on the age and the shape is given names juponne (strongly tapered like a petticoat) for young corks that expand a lot, and cheville (little tapered almost straight like a wall plug) for old corks that expand less. Regular wine corks The tapered shape can still be present with other corks that do not have these disks on the bottom. Plasticiser A reason for this can be that the water activity of the cork is different on the inside than on the outside and this changes the properties of the cork material . One the one hand water works as a plasticiser and the Young's modulus decreases when corks are more humid (this is possibly why people sometimes soak wine corks in water before bottling). So you would expect that the wet bottom side of the cork is expanding less extremely but possibly the hysteresis is stronger for the dryer top side (although I can't find any sources about this). Also an aging effect like with the champagne corks might play a role. The wine cork will not return fully to it's original shape after having been compressed for such a long time and the dryer side might be more rigid and rearrange less fast or structures could have been broken which did not happen as much on the wet side. Swelling Another effect is that cork expands when it is more humid. This effect is described in this article: Rosa, M. Emília, and M. A. Fortes. " Water absorption by cork ." Wood and fiber science (1993): 339-348. "Water absorption in the cell walls causes the expansion of cork" Bottleshape These images from https://winemakersacademy.com/cork-closures-oxygenation/ show that it appears that the bottle neck does not cause the shape. In this image the neck is straight and the cork still comes out tapered. It is also visible untill what point the wine entered into the pores of the cork and it is that line where the difference between the diameter is more pronounced. However, the effect that causes this correlation might be reversed (the cork shape causes the wine to go up, instead of the other way around). If the bottleneck is tapered then the bottom part is compressed less and it is that what might cause the wine to move up the cork. As explained in the answer by FourW bottlenecks are not completely cylindrical and instead slightly conical. Other sources for this are in Prades López, C., and M. Sánchez-González (2019). They refer to European standard EN 12768 which states that the diameter at the top should be 18.5 ± 0.5 mm and 45 mm below the top it can be 20 ± 1 mm. In addition the mean diameter should be at most 1mm less than the diameter at the top. Prades López, C., and M. Sánchez-González "Behavior of Natural Cork Stoppers when Modifying Standard Corking Parameters: Three Practical Cases." Cork Science and its Applications II 14 (2019): 20. google-books link
{ "source": [ "https://physics.stackexchange.com/questions/728927", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/346454/" ] }
729,115
I understand that we can use the Heisenberg picture to show, for a Hamiltonian of the form $$ \hat{H}=\frac{\hat{P}^{2}}{2m}+\hat{V}(\hat{X}) $$ the Ehrenfest theorem: $$ m\partial_{t}\langle \hat{X}\rangle=\langle \hat{P}\rangle\ \text{ and } \partial_{t}\langle \hat{P}\rangle=-\langle \nabla\hat{V}(\hat{X})\rangle $$ thus we recover the classical equations of motion if we let $\langle \hat{X}\rangle$ correspond to the classically measured position and $\langle \hat{P}\rangle$ correspond to the classically measured momentum. I don't understand why this means it is necessary for $\langle \hat{X}\rangle$ to correspond to the classically measured position and $\langle \hat{P}\rangle$ to correspond to the classically measured momentum. It seems like the expectation values could still obey this relation without corresponding to the classical values. Any ideas?
In general, there is no such thing as a "classically measured position" for a generic quantum system/state. Some situations are simply not well-modeled by classical physics, and Ehrenfest's theorem itself is not about the classical limit of quantum physics. No one is saying that there is a general link between quantum expectation values and classical measurements. What you're looking for is the correspondence principle : There is a certain class of quantum states (heuristically those with "large quantum numbers", in modern approaches technically often coherent states with high particle number) for which the uncertainties of the operators get small enough - compared to a relevant quantity such as the precision of the measurement apparatus - that the quantum nature of the states becomes invisible and their expectation value hence effectively the sole possible result of measurement. It is for these "corresponding states" that Ehrenfest's theorem implies that the classically measured values obey the same equation of motion as the quantum expectation values.
{ "source": [ "https://physics.stackexchange.com/questions/729115", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/315787/" ] }
729,238
If a body $A$ exerts a force on body $B$, then $B$ exerts a reaction force on $A$. Is there an explanation of why this happens?
Yes, the explanation is the conservation of momentum. In Newtonian mechanics the third law produces conservation of momentum in mechanical systems. Later on you will see cases (matter interacting with fields) where Newton's 3rd law is violated in some sense, but in these cases conservation of momentum still holds (the fields carry momentum). Conservation of momentum (and the spatial translation symmetry associated with it) has no further explanation for why it is true. We have lots of solid experimental evidence that it is true, but no explanation of why our universe behaves that way instead of some other way. This is what makes conservation/symmetry laws fundamental explanations. There are no further explanations in physics, just evidence that makes us believe this explanation.
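The link the answer describes can be written out in one line for an isolated pair of bodies that interact only with each other, using Newton's second law in the form $\vec F = d\vec p/dt$ for each body (those two assumptions are doing the work here):

$$
\frac{d}{dt}\left(\vec p_A + \vec p_B\right) = 0
\;\;\Longrightarrow\;\;
\frac{d\vec p_A}{dt} = -\frac{d\vec p_B}{dt}
\;\;\Longrightarrow\;\;
\vec F_{B\to A} = -\vec F_{A\to B}.
$$

In that sense the third law and momentum conservation are two sides of the same statement.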
{ "source": [ "https://physics.stackexchange.com/questions/729238", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/58828/" ] }
729,289
When I rub quartz together, it glows due to triboluminescence but it also creates a burnt smell. What causes that smell?
A good question that has long been asked, as shown by a 1900 Nature letter inquiring about The Smell emitted by Quartz when Rubbed . Rubbed quartz can produce more than one smell, so this is an extension of the original answer in light of comments from others and new information from the original poster that the smell is not specific to triboluminescent quartz. As noted in the answer to Milky quartz stones give off odour after being struck - persistent tribo-odorosity(?) , an odour from rubbed quartz could be due to the release of hydrogen sulfide (H 2 S) trapped in tiny inclusions in the quartz. When you rub together two pieces of quartz, you break open some of the microscopic inclusions and release the hydrogen sulphide. As @akhmeteli's answer citing an 1858 Scientific American article shows, the basics of this have long been known, but the details have been more recently confirmed. The presence of hydrogen sulfide in inclusions has been noted by studies such as this 1984 article on " Gas chromatographic analysis of volatiles in fluid and gas inclusions " which reported that hydrogen sulfide was the dominant gas released when a sample of "fetid quartz" was crushed. Another study the same year reported on Characterization of H 2 S bearing fluid inclusions in quartz, fluorite, and calcite. As correctly noted in @user346760's answer and @kryan's comment , the question describes the smell as "burnt", not the usual "rotten egg" description of low concentrations of hydrogen sulfide. The 1858 cited Scientific American article mentions a rotten egg smell, but Thomas Wedgwood described it simply as "strong" in a 1791 Royal Society report , Delius referred to it as "sulphurous" in 1748, and other common descriptions include " metallic ". If the smell is "burnt" and not "rotten eggs", then also as noted by @user346760, the triboluminscence could be igniting contents of the inclusions. Hydrogen sulfide is a volatile flammable gas which burns to sulphur dioxide and water. Sulphur dioxide is a pungent gas which causes the smell of burnt matches. It is also possible that any elemental sulphur inclusions could be ignited producing sulphur dioxide, but such inclusions seem to be less frequently mentioned than H $_2$ S inclusions in the scientific literature. As noted by @KRyan, it is also possible other compounds could contribute to the smell. The most common other volatile components of quartz inclusions are odorless, i.e. water, methane, and carbon dioxide , but the surface of a quartz pebble could absorb organic materials from its environment. Although I believe all the above discussion is accurate, it may be moot in light of new helpful information from the original poster that the same smell is produced when two random rocks are rubbed together. Both the original poster and myself have confirmed that the same smell is produced by rubbing together a variety of rocks and even from simply vigorously rubbing two glass jars together, but not when metal or plastics are rubbed together. Since the smell can be produced by rubbing glass together, this is clearly not a sulphur related smell due to inclusions. Personally, the smell makes me think of electric motors, e.g. ozone. Others with more sensitive and experienced noses may be able to identify it better. That ozone may be significant component of the smell is certainly plausible. Ozone is produced when granite blocks are crushed or when grinding granite, basalt, schist, rhyolite, or gneiss . 
If ozone is being produced, then as noted in the answer to Where does the smell of electrostatic charge come from?, we can also expect nitrogen oxides. This is similar (but on a much smaller scale) to gas production by lightning. The smell of ozone has been variously described as "chlorine, metal, burnt wire". Nitrogen oxide is a "sharp sweet-smelling gas" and nitrogen dioxide has "a strong, harsh odor". It seems likely that the general smell of rubbing rocks (or glass) together is mostly due to some combination of these.
{ "source": [ "https://physics.stackexchange.com/questions/729289", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/346670/" ] }
729,462
What hasn't worked I thought of elementary particles, but with antiparticles they can be converted into photons (thus the particles and antiparticles vanish), and photons can be converted into heat (thus the photons no longer exist). I have also thought of information, but it is not confirmed that information is actually indestructible (if it is thrown into a black hole) and I have observed in my own experience from accidentally wiping my computer that information can be permanently lost (not to mention that since phones, hard drives, computers, monitors, CDs, DVDs, servers, and paper can be shredded, cut, burned, etc, the data on them can be lost permanently). I have thought of Black Holes, but Hawking radiation effects can make the Black Hole evaporate eventually. Space and time can be ripped/warped by a strong enough gravitational pull, and even things like depleted uranium, tungsten carbide, vanadium steel, and other relatively strong materials all melt if you throw them in the sun (which is why I think probes have never been to the surface or core of the sun, as no currently available material will maintain its integrity at the surface of the sun, much less the core. I have thought of atoms, but atoms can be split or fused. Also, protons and neutrons have been split into quarks by the LHC. What I mean by Indestructible When I say indestructible, I mean that it cannot be split, cut, burned, disintegrated, vanished with antimatter, warped, broken, erased, deleted, blown up, or otherwise made to not exist. Since indestructible is an absolute term, if there is any possibility of the thing in question ceasing to exist, it does not count as "indestructible." The essential question According to physics, is there any possible thing, object, particle, etc. that is or could be indestructible? Note: I was unsure what tags to use, so any edits to the tags would be welcome.
The fundamental laws of physics are time reversible. So if something cannot be destroyed then it follows that it cannot be created. And if it cannot be created then either it doesn't exist or it has always existed. As far as I know, we don't know of anything that has always existed. So then we don't know of anything that is indestructible.
{ "source": [ "https://physics.stackexchange.com/questions/729462", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/346750/" ] }
732,749
It seems like two eyes would be enough "wetware" to do interferometry inside the brain. Can you see some definite reason why this could not be happening, or some way to test whether it does happen?
To do interferometry in post-processing after detection of radiation, the detector must be able to record the phase of the radiation. The eye cannot do this: the photochemical reactions that record the radiation are insensitive to phase. In instrumentation, radio interferometry may be done post-detection because phase-sensitive radio detectors are practical. Optical interferometry is done pre-detection, using mirrors.
{ "source": [ "https://physics.stackexchange.com/questions/732749", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/21456/" ] }
732,752
I've recently done an experiment where I analyzed a mass moving in circular motion around a point, with a spring attached to the mass on one end and to the point around which the mass moves on the other end. This system was set on top of an air table to reduce friction as much as possible, and it was recorded using a camera mounted above the table. I analyzed the video using the program Tracker. Then, using Kaleidagraph, I got the polar coordinates as well as the time derivatives of the radius and the angle. With this data I computed the Hamiltonian of the system, and it doesn't remain constant. According to the theory, as the holonomic constraints are stationary in this system, the Hamiltonian should be exactly equal to the mechanical energy, which should remain constant. Instead, what I get is an oscillating value which decreases. The fact that it drifts downwards makes sense, as this could be attributed to a friction force we are neglecting. However, it doesn't make sense that the energy of the system oscillates. In fact, this oscillation correlates with the oscillation of the radius (the distance between the point the mass orbits and the mass). This correlation makes me think that there could be some reason why the energy doesn't remain constant, but I still can't explain why this happens. One thing I've thought of while plotting some of the variables is that the velocity of the system could have something to do with the change in the Hamiltonian, not from a physics point of view but in a more technical way: when the velocity is larger the camera couldn't capture the image properly, and this could create some error. Other than this last interpretation I have no idea why the energy would change and why this change would be correlated with the change in r. If anyone has some insights on why this happens I would appreciate the help. Sorry for any bad grammar or spelling; English is not my first language.
To do interferometry in post-processing after detection of radiation, the detector must be able to record the phase of the radiation. The eye cannot do this: the photochemical reactions that record the radiation are insensitive to phase. In instrumentation, radio interferometry may be done post-detection because phase-sensitive radio detectors are practical. Optical interferometry is done pre-detection, using mirrors.
{ "source": [ "https://physics.stackexchange.com/questions/732752", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/334891/" ] }
733,040
I've always found that using the right-hand rule to remember how forces, B-fields, and particle velocities relate feels like intellectually cheating myself a bit. It feels like being able to multiply numbers by using your fingers without knowing what multiplication really is. Yet, when I try the "just think about it" approach I still end up just imagining what my hands would do in my head. How can I think about the directions of the electromagnetic right-hand-rule vectors to intuitively understand what's going on, rather than using the equivalent of a cheat sheet?
How can I think about the directions of the electromagnetic right hand rule vectors to intuitively understand what's going on, rather than using the equivalent of a cheat sheet? You can't intuitively understand it because it is a convention. We chose the "right-hand" rule and it is now the norm world-wide. We could have gone with the "left-hand" rule, but we didn't. As I mentioned in the comments, I believe that old (pre-WWII) German physics/chemistry papers often used the left-hand rule (or left-handed coordinate systems).
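One way to see that the rule is pure convention is to note that it is already baked into the definition of the cross product appearing in $\vec F = q\,\vec v \times \vec B$. A three-line numerical check (with made-up unit values for the charge, velocity and field) reproduces exactly what your right hand tells you:

```python
import numpy as np

q = 1.0                          # C, example positive charge
v = np.array([1.0, 0.0, 0.0])    # velocity along +x
B = np.array([0.0, 0.0, 1.0])    # magnetic field along +z

F = q * np.cross(v, B)           # Lorentz force in the right-handed convention
print(F)                         # -> [ 0. -1.  0.], i.e. along -y

# Had physics adopted a left-handed convention, every such result would simply
# flip sign; nothing observable would change as long as everyone used the same
# convention consistently.
```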
{ "source": [ "https://physics.stackexchange.com/questions/733040", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/337240/" ] }
734,513
In a racing video game series named Trackmania, there is a game mechanic where, when you hit a jump and the car is in mid-air, you can stop the car from pitching downward by tapping the brakes. I am curious whether this mechanic has some basis in real physics. What would happen if a car were pitching nose down while in the air and you hit the brakes, thus stopping the rotating wheels? Would that affect the car's pitch, and if so, how?
It will affect pitch, but not the way it works in the game. In the game, braking while in the air stops your pitch axis rotation. In real life, it does something completely different. The wheels have angular momentum. When the wheels are slowed, this angular momentum must be conserved. This results in a transfer of angular momentum from the wheel to the vehicle. Since the wheels are moving clockwise in relation to a vehicle travelling left-to-right (i.e. the top of the wheel is moving toward the front of the car, and the bottom of the wheel is moving toward the rear) the car will also start to rotate clockwise, pitching the nose down. For a car already starting to pitch down after a jump, this will cause it to pitch down even faster, the opposite of what it does in-game. We can even calculate a rough magnitude of the effect! For simplicity, let's state some assumptions: this is a rear-wheel drive vehicle with stationary front wheels both rear wheels are moving at the same angular velocity (no slip diff) both rear wheels are the same size and have the same mass the rear axle is a rigid balanced cylinder (i.e. its center of mass is the center of the axle) the brakes bring the wheels to a complete stop we ignore the motion of the driveshaft, flywheel, clutch, gearbox, and other parts of the drivetrain we ignore air resistance, lift, and all other aerodynamics Each of the two rear wheels can be approximated as a cylindrical mass whose center of mass is the axle joint. The same can be said for the rear driveshaft - it's a long cylinder. As such, the momentum of each of these bodies can be described by spin angular momentum, which is angular momentum about the center of mass. This is in contrast to orbital angular momentum, which is angular momentum about an arbitrary point. Angular momentum is expressed as $L=I\omega$ , where $I$ is the angular moment of inertia and $\omega$ is the angular velocity in radians per second. You can think of the angular moment of inertia as a way to describe the mass distribution of an object about its axis (or axes) of rotation. A cube, a cylinder, and a sphere all have different moments of inertia, and those moments also change depending on where you put the axis of rotation (through the center, on an edge, etc.) A cylinder with mass $m$ and radius $r$ rotating about its $z$ axis has an angular moment of inertia described by $I = \frac 1 2 mr^2$ . As such, the moment of inertia for each wheel can be approximated by $I = \frac 1 2 mr^2$ , where $m$ is the mass of the wheel and $r$ is the radius of the wheel. The moment of inertia for the axle can be described similarly, since we can model that as a cylinder too. Given that we have two wheels rotating about the same axis, we can think of them as a combined cylinder of the same radius but with twice the mass, which cancels out the $\frac 1 2$ term. We can then add the moment of inertia for the axle to get the total moment of inertia: $$I_T = m_W {r_W}^2 + \frac 1 2 m_A {r_A}^2$$ (with $T$ meaning total, $W$ meaning wheels, and $A$ meaning axle) This can then be plugged into the angular momentum equation, $L=I\omega$ , where $\omega$ is the angular velocity in radians per second. $$L = \omega \left(m_W {r_W}^2 + \frac 1 2 m_A {r_A}^2\right)$$ If we assume that the vehicle's wheels have remained at a constant angular velocity since leaving the ground, we can estimate $\omega$ from the vehicle's land speed at the time of take-off and the radius of the wheel including the tyre. 
One revolution of the wheel moves the vehicle forward by the circumference of that wheel, and the circumference is $2\pi r$ . If we take the car's velocity in meters per second (1mph ≈ 0.447m/s) and divide it by the wheel circumference, that tells us how many times the wheel was rotating per second. One rotation is 360°, or $2\pi$ radians. As such: $$\omega \approx \frac {v_C} {2\pi {r_W}} \times 2\pi = \frac {v_C} {r_W}$$ Where $v_C$ is the car's velocity at the point of take-off, and $r_W$ is the wheel radius. Substituting this into our previous equation, we get: $$L = \frac {v_C} {r_W} \left(m_W {r_W}^2 + \frac 1 2 m_A {r_A}^2\right)$$ Where $L$ is the angular momentum, $v_C$ is the velocity of the car at take-off (for the purposes of angular velocity estimation), $r_W$ is the radius of the rear wheels including the tyre, $m_W$ is the mass of each of the two rear wheels including the tyre, $m_A$ is the mass of the rear axle, and $r_A$ is the radius of the rear axle. For the sake of simplicity in this worked example, we'll assume that the front wheels aren't spinning, even though in practice it would make sense for the front wheels to be rotating at the same angular velocity as the rear wheels. While it is entirely possible to calculate the resultant angular velocity of the car as a result of both the front and rear wheels, including the case where the front wheels are not facing straight forward, the calculations are much easier to follow in a system with angular momentum being transferred between two bodies in a single axis. Let's try a quick test-case: Each wheel weighs 25 kg including the tyre. The rear wheels have a radius of 25 cm (approximating a 16" diameter alloy with 2" thick tyres). The rear axle is 6 cm in diameter and weighs 50 kg. The car was travelling at 40 m/s (roughly 90 mph) when it left the ground. Plugging these numbers in, we get: $$L = \frac {40~\mathrm{m~s}^{-1}} {0.25~\mathrm{m}} \left(25~\mathrm{kg} \times (0.25~\mathrm{m})^2 + \frac 1 2 50~\mathrm{kg} \times (0.06~\mathrm{m})^2\right) = 265.4~\mathrm{kg}⋅\mathrm{m}^2⋅\mathrm{s}^{-1}$$ Note that kg⋅m 2 ⋅s −1 are the units for momentum. This is all well and good, but what does this mean in terms of the movement of the car? Since angular momentum must be conserved, the change in momentum in the wheels is passed on to the body of the car. The equations we used above can be used in reverse - we can start with angular momentum and a moment of inertia and use it to find the resulting angular velocity! However, there's a bit of a hitch: the angular momentum isn't being applied at the center of mass of the car, but instead at the location of the rear axle. This means that the car's movement is described by orbital angular momentum, not spin angular momentum. The car also isn't a cylinder, so we need a different equation. To keep things simple, let's imagine the car is a cuboid of uniform mass with the real axle running along one of the bottom edges: The moment of inertia for such a cuboid is described by: $$I = \frac {m(a^2 + b^2)} {12}$$ where $m$ is the mass, $a$ is the side of length a in meters, and $b$ is the side of length b in meters. We can now derive the equation for estimating the angular momentum of the car, using $L=I\omega$ : $$L_C \approx \omega \times \frac {m_C(l^2 + h^2)} {12}$$ where $L_C$ is the angular momentum of the car, $\omega$ is the angular velocity of the car, $m_C$ is the mass of the car, and $l$ and $h$ are the length and height of the car respectively in meters. 
The equation above is written in terms of $L_C$ , so to find the resulting angular velocity of the car we need to rearrange it in terms of $\omega$ : $$\omega \approx \frac {L_C} {\left(\frac {m_C(l^2 + h^2)} {12}\right)} = \frac {12L_C} {m_C(l^2 + h^2)}$$ Let's continue with our test case by defining the last few parameters: The car is approximately 1.25 m tall and 4.75 m long. The car weighs 1600 kg. After subtracting the mass of the wheels and rear axle, that's 1500 kg . (edit: thanks to nitsua60 for pointing out that since the wheels and axle move as part of the car, their mass counts as part of the overall moment of inertia and should not be subtracted) Since we know that the angular momentum being transferred from the wheels and axle to the car is 265.4 kg⋅m 2 ⋅s -1 , we can now plug everything in: $$\omega \approx \frac {12 \times 265.4~\mathrm{kg}⋅\mathrm{m}^2⋅\mathrm{s}^{-1}} {1600~\mathrm{kg} \times (4.75^2 + 1.25^2)~\mathrm{m}^2} = 0.0825~\mathrm{rad/s}$$ This is equivalent to 4.73°/s of nose-down rotation - small, but fairly noticeable! The approximations here are crude, but they give you a good idea of how the conservation of angular momentum results in the downward pitch when the brakes slow down the spinning wheels. It is possible to calculate the system's behaviour more accurately by considering the three-dimensional moment of inertia around the rear axle, angular momentum transfer of the flywheel and drivetrain (the car will tend to tilt slightly to one side as the drivetrain slows), non-uniform mass distribution of the car, air resistance, lift, and other aerodynamic effects, but the calculations are significantly more complicated and beyond the scope of this answer. As a final wrap-up point, if you express the conservation of angular momentum between two objects ( $a$ and $b$ ) as a single equation, you can gain some intuition for the behaviour of the objects as a function of their mass, size, and velocity. $$I_a\omega_a = I_b\omega_b$$ If we rearrange for $\omega_b$ we can see how a change in angular velocity on object $a$ affects the angular velocity of object $b$ : $$\omega_b = \frac {I_a} {I_b} \omega_a$$ When angular momentum is transferred, the change in angular velocity in object $b$ is a function of the ratio between the two angular moments of inertia. Since the moment of inertia of an object is proportional to its mass and size, a smaller lighter object imparts less angular velocity to a larger heavier object.
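The worked example above condenses into a short script. The numbers below are the same assumed figures used in the answer (wheel mass/radius, axle mass and the value plugged in for its radius, take-off speed, car mass and dimensions); small differences from the quoted 265.4 kg⋅m²⋅s⁻¹ and 0.0825 rad/s come only from rounding:

```python
import math

# Assumed figures from the worked example above
m_wheel, r_wheel = 25.0, 0.25      # kg, m   (each rear wheel incl. tyre)
m_axle,  r_axle  = 50.0, 0.06      # kg, m   (rear axle, value as plugged in above)
v_car            = 40.0            # m/s, speed at take-off
m_car            = 1600.0          # kg, total car mass
length, height   = 4.75, 1.25      # m, car modelled as a uniform cuboid

omega_wheel = v_car / r_wheel                          # rad/s, rolling without slip
I_spin = m_wheel * r_wheel**2 + 0.5 * m_axle * r_axle**2
L = omega_wheel * I_spin                               # angular momentum the brakes remove
print(f"L     = {L:.1f} kg m^2 / s")

I_car = m_car * (length**2 + height**2) / 12.0         # cuboid formula used in the answer
omega_car = L / I_car                                  # conservation of angular momentum
print(f"omega = {omega_car:.4f} rad/s = {math.degrees(omega_car):.2f} deg/s nose-down")
```

Running this gives roughly 264 kg⋅m²/s and about 4.7°/s of nose-down rotation, i.e. the same order as the figures quoted in the answer.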
{ "source": [ "https://physics.stackexchange.com/questions/734513", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/349573/" ] }
734,825
I don't understand how, as a black hole gets smaller and smaller from the emission of Hawking radiation, it retains its ability to capture photons. I imagine there would be a point in its life cycle where its mass/gravity just isn't enough for it to be able to do so, and its body could be revealed, and perhaps it would gain back some of its previous volume once gravity is no longer able to hold it together as tightly? I have no formal education in physics yet.
After writing this answer, I noticed there are a couple of alternative explanations that might be interesting to mention, so I'll add them as well.

Explanation 1 What makes something into a black hole isn't exactly how much mass it has, but also how compactified it is. In principle, any amount of mass can form a black hole, as long as you compactify it enough. The size needed for some amount of mass to form a black hole is known as the Schwarzschild radius. Roughly speaking, if you pick an amount of mass and manage to compress it below the Schwarzschild radius, you'll have a black hole. It is given by a simple expression. Namely, $$R_S = \frac{2 G M}{c^2},$$ where $M$ is the mass, $c$ is the speed of light, and $G$ is Newton's gravitational constant (which sort of measures how intense gravity is). For example, for something with the mass of the Earth, the Schwarzschild radius is roughly $0.88$ cm, while for the Sun it is about $2.9$ km (I must admit I didn't double check the computation, I'm trusting Google on these numbers, but they are pretty much what I remember). Hence, the black hole stays a black hole while it evaporates because it is shrinking while it is losing mass, always shrinking enough so that it stays at the correct size.

Explanation 2 The second way of thinking is a bit less familiar. It turns out that black holes aren't really objects, but rather regions in spacetime. In fact, this is so true that black holes are what we call vacuum solutions: there isn't matter anywhere in the spacetime. All of the mass of the black hole is there due to effects of gravity itself. Another way of thinking about it is that a black hole is so collapsed that its mass is entirely due to gravitational energy. It is a bit harder to grasp this concept, but once you get it, the rest is simpler. The black hole stays there because it isn't "made" of anything. There isn't a star just below the event horizon waiting to come out. There is nothing there but gravity. As it loses mass, gravity weakens and it gets smaller, but there isn't anything behind the horizon to come out. Edit: the question "What do we mean when we say that black holes aren't made of anything?" later asked for a more technical discussion of parts of this explanation. I suggest checking it out.

Explanation 3 The third explanation might be a bit simpler than the second. Once something falls into a black hole, that's it. It can't come out. Ever. By the very definition of what a black hole is. Hence, as the hole shrinks, there is no way something could come out of the hole to be its "body". That would violate the very meaning of what a black hole is. This is a simplified answer. Since OP doesn't have formal education in Physics, I might have overlooked a few details and nuances here, but I did my best to keep the answer as faithful as possible.
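The quoted figures are easy to check. In the sketch below the masses are the standard textbook values for the Earth and the Sun, so any small differences from the numbers in the answer are just rounding:

```python
G = 6.674e-11        # m^3 kg^-1 s^-2, Newton's gravitational constant
c = 2.998e8          # m/s, speed of light

def schwarzschild_radius(mass_kg):
    """R_S = 2 G M / c^2 : the size below which a given mass forms a black hole."""
    return 2 * G * mass_kg / c**2

for name, M in [("Earth", 5.972e24), ("Sun", 1.989e30)]:
    print(f"{name:5s}: R_S = {schwarzschild_radius(M):.3e} m")
```

This prints about 8.9 mm for the Earth and about 2.95 km for the Sun, consistent with the values quoted above.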
{ "source": [ "https://physics.stackexchange.com/questions/734825", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/349757/" ] }
735,441
Typing my question directly so people know what I am asking, afterwards providing background and context. Q: What does it mean when space is falling , faster than light? (I am specifically wondering about just that "space falling" which is what is confusing me the most). I mean it does this after it falls inside the horizon of a black hole. (A Schwarzschild black hole to be specific, so a "static", no electric charge & no spin) this is stated in both the documentary, and in the book(both provided below). Citing the book space is falling into the black hole. Outside the horizon, space is falling less than the speed of light; at the horizon space is falling at the speed of light; and inside the horizon, space is falling faster than light, carrying everything with it. This is why light cannot escape from a black hole: inside the horizon, space falls inward faster than light, carrying light inward even if that light is pointed radially outward. General Relativity, Black Holes, and Cosmology by Andrew J. S. Hamilton 4 December 2021 Available here at chapter 7. "Schwarzschild Black Hole" section '7.6 Horizon' - page 137 And, I include a screenshot of this Wikipedia link about the Event horizon of a black hole, which also illustrates this: Background and what I have tried Background I have no university level education in Physics yet, but I know some terms and have read some articles and books(such as Wikipedia and *edu sites, like jila.colorado.edu ). And I am reading books(like Sean Carroll's "spacetime and geometry" intro to general relativity, and Taylor/Wheeler's book as well, and many others). And so I am not completely "new" to Physics but, not at all an expert. What I have tried Before asking this question I thoroughly tried to find my question in various, numerous sites(including this one), and books. But didn't find anything that would explain the stating that "space" would fall. (excluding the one I typed above) Before this question was asked I read other questions and answers on this site, and specifically this question really inspired me how I should write my question, as well as of course first of all I read the following: don't-ask how to ask I want to be very clear that I "Keep an open mind", and I try to be on-topic and specific. Which is why I typed the question directly above, and, to be sure it doesn't become a vague or more discussion kind of question, I try to keep things short. Since this is also, my first question . Citing the Documentary This documentary about black holes - at timestamp 33:13 Hamilton, Andrew J S says the following: space is falling faster than light My question is about just that, and I do know this is from YouTube, which is why I provided the link to the book and checked the topic discussed about in here . Resources I have tried If spacetime can expand faster than the speed of light, then can a black hole do that too? why didn't that article help? the question is about "spacetime", whereas my question is about "space" itself. Related and External links Andrew Hamilton's Homepage - jila.colorado.edu Andrew's book - jila.colorado.edu black_holes - math.ucdr Can space expand with unlimited speed? - Physics.stackexchange and sub-links Frame-dragging - Wikipedia Thirring_effect - Wikipedia Schwarzschild_metric - Wikipedia Black_hole - Wikipedia
This is the "river" picture of black holes, as Dale said, but I disagree strongly with his statement that it is a "nice heuristic". Rivers flow at a certain speed. If you fall into a river, friction with the water will accelerate you until you're moving at the same speed as the water. Gravity doesn't work like that at all. Gravity doesn't give you a certain velocity, but a certain acceleration. The time reversal of a river is a river flowing in the opposite direction, but the time reversal of a gravitational field is a gravitational field in the same direction. If you film a ball being tossed in the air and play the film backward, the ball is still attracted toward the earth in the reversed film. Gravity is a conservative, not a dissipative, force. The river picture fails to capture any of those properties. The river picture leads people to wonder why black holes don't just suck up everything around them, and how the space that flows into them is replenished. Those would be reasonable questions if the river picture made sense, but it just doesn't. The river picture is inconsistently used only for black holes when it could equally be applied to any other gravitating body. Black holes are unique in that time symmetry is broken at the event horizon and inside it, but that's irrelevant to the observable behavior of astronomical black holes, which depends only on the physics outside the horizon. The physics outside is time-symmetric, and doesn't fundamentally differ from that of other gravitating bodies. The river picture tries to explain the broken time symmetry by breaking it everywhere, even outside the horizon. But it isn't broken outside the horizon. The explanation is wrong.
{ "source": [ "https://physics.stackexchange.com/questions/735441", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/276316/" ] }
736,316
The kinetic energy of a fluid occupying a region $\Omega \subset \mathbb{R}^3$ is given by $$T = \frac{1}{2}\int_\Omega |v(x)|^2 dx.$$ I am looking for some physical intuition on where the above comes from, given that the classical definition of kinetic energy is $T = \frac{1}{2}mv^2$ . More specifically, I have two questions: What happened to the mass $m$ ? Why are we integrating? My best guess is that the mass is taken to instead be a density, $\rho$ , and we further assume that $\rho = 1$ (why is this justified?). In which case, for an infinitesimal amount of fluid, we find its average velocity $v(x)$ in that region and multiply by the volume to get something like $T = \frac{1}{2}mv^2$ . Repeating this over all infinitesimally small regions in $\Omega$ and summing them up amounts to taking the integral. If this is indeed correct, then my question reduces to: why is taking $\rho = 1$ justified?
The kinetic energy of a fluid is the same as normal mechanics, $T=mv^2/2$ . However, that's not generally useful as we don't usually have masses but densities, so we instead consider the kinetic energy density , $$\mathcal{T}\equiv\frac{1}{2}\rho v^2.$$ Then in order to get the total kinetic energy, you must integrate over all space, $$T=\int\mathcal{T}\,\mathrm{d}\mathcal{V}=\frac{1}{2}\int\rho v^2\,\mathrm{d}\mathcal{V}.$$ So to answer your enumerated questions, $m$ is absorbed by $\rho$ and you integrate over all space to get the total kinetic energy of the domain. As to the subsequent question, I do not know why the source of your equation has neglected $\rho$ . Presumably it was a typographical error, but that's not something someone can determine without seeing the source first hand. You may want to check other sources to confirm the equation.
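To make the bookkeeping concrete, here is a minimal numerical sketch of that sum, assuming a made-up uniform density and a toy velocity field on the unit cube (the grid size, the density value and the velocity profile are all arbitrary choices, not taken from the question):

    import numpy as np

    # Grid over the unit cube [0, 1]^3 (resolution chosen arbitrarily)
    n = 50
    x, y, z = np.meshgrid(*(np.linspace(0.0, 1.0, n),) * 3, indexing="ij")
    dV = (1.0 / (n - 1)) ** 3          # volume of one grid cell

    rho = 1000.0 * np.ones_like(x)     # uniform density, e.g. water in kg/m^3
    vx = np.sin(np.pi * x)             # toy velocity field (m/s), only an x-component
    v2 = vx**2

    # T = (1/2) * integral of rho * |v|^2 dV, approximated as a Riemann sum
    T = 0.5 * np.sum(rho * v2) * dV
    print(T)                           # close to the exact value of 250 J for this field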
{ "source": [ "https://physics.stackexchange.com/questions/736316", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/288281/" ] }
736,496
We know that the sun uses nuclear fusion to generate sunlight and heat energy. If we are using solar panels to harvest solar energy, aren't we putting some electrical load (resistance) on the sun? If yes, does it have any effect on it? Edit: Does the presence of a receiver affect the working of the broadcasting sender? It is more of a philosophical question.
There is no electrical connection between our solar panels and the Sun. The Sun radiates electromagnetic energy (photons) out into space. If we catch some of those photons, the Sun neither knows nor cares about it. In fact, we'll catch those photons regardless of whether we set up a solar panel or not. If you took away the solar panel from somebody's rooftop, then the same photons that would have been absorbed by the panel will be absorbed by the rooftop instead. Whereas some of their energy would have been converted into electrical energy by the panel, all of their energy will be converted to heat in the roof when the panel is taken away.* * Except, these days, people who live in warm sunny climates are learning to paint their roofs white. A white roof reflects a lot of the photons back out into space instead of absorbing them. But even then, the Sun still neither knows nor cares. As far as the Sun is concerned, once the photons leave its surface, they are gone forever.
{ "source": [ "https://physics.stackexchange.com/questions/736496", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/347072/" ] }
738,208
Angular momentum is defined as $L = r \times p$ . By the definition of the cross product, $L$ is going to be orthogonal to both $r$ and $p$ , and the magnitude $|L|$ tells us how much angular momentum the object has. What I am unclear of is what does the vector $L$ itself tell us? For example, if $r$ and $p$ are confined in the $xy$ plane in $\mathbb{R}^3$ , then clearly $L$ must parallel to the $z$ axis. Does the vector $L$ tell us anything more than the magnitude of the angular momentum and the direction of its spin, or is there some real physical meaning to it, for example is there really momentum in the $z$ direction? To put it differently, what does the direction of $L$ really tell us?
Angular momentum is the generator of rotations. Linear momentum is the generator of linear translations. This can be given a precise mathematical meaning, but intuitively it means something like the following: if an object has angular momentum, this causes the object to rotate. If an object has linear momentum, this causes the object to move (“translate”). Technically speaking, angular momentum is best thought of as a “2-form.” In other words, you can think of the angular momentum as being defined in a plane. However in 3 dimensions there is a coincidence: every plane can be uniquely labeled (up to scaling) by a vector. The vector which labels the plane is taken to be the vector perpendicular to the plane. This coincidence is because a 3D space has 3 dimensions, and $3-2=1$ , so the plane with 2 dimensions can be labeled with the leftover 1 dimension. If we lived in a 4-dimensional spatial universe, this coincidence would no longer apply to us. So we wouldn't represent angular momentum with a vector, we would have to use a 2-form. The takeaway here is that if you find it helpful, you should think about angular momentum as properly being defined by a plane , together with a sign which gives the direction of rotation in the plane. In 3D, notice that a plane can just be labeled by the vector normal to it.
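A small numerical illustration of the two descriptions, using arbitrary example vectors (none of the numbers mean anything physical):

    import numpy as np

    r = np.array([1.0, 2.0, 0.0])   # position (arbitrary)
    p = np.array([3.0, 0.5, 0.0])   # momentum (arbitrary)

    # Usual 3D description: angular momentum as the axial vector r x p
    L_vec = np.cross(r, p)

    # 2-form description: the antisymmetric matrix L_ij = r_i p_j - r_j p_i,
    # whose only independent component here lives in the xy-plane of the motion
    L_form = np.outer(r, p) - np.outer(p, r)

    print(L_vec)          # [ 0.   0.  -5.5] : perpendicular to the plane of motion
    print(L_form[0, 1])   # -5.5 : the same number, now read off as the xy component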
{ "source": [ "https://physics.stackexchange.com/questions/738208", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/288281/" ] }
739,831
NASA's DART impactor made a head-on collision with the asteroid Dimorphos on September 26, 2022. A real-time video feed gave immediate confirmation of the direct hit. But according to this press release , NASA had to observe Dimorphos for two more weeks before being able to confirm that Dimorphos's trajectory was indeed noticeably altered (as planned). Why? It seems to me that determining the collision's effect on Dimorphos's orbit would be a very simple exercise in Newtonian mechanics. I assume that Dimorphos's total mass was well-known from its orbital dynamics with Didymos. I know that its internal composition wasn't well understood, but is that really so important for understanding its post-collision dynamics? Conservation of momentum means that the subsequent overall motion of Dimorphos's center of mass should not be affected by the details of its internal composition. I know that the collision ejected some material off of Dimorphos's surface, so there's a bit of a semantic question as to whether after the collision, the term "Dimorphos" should refer to "all of that material that made up Dimorphos before the collision" or "what's left on the largest connected component of that material after the collision". But it doesn't seem to me that this would make a big difference regarding Dimorphos's overall dynamics. It seems to me that approximating the collision as a perfectly inelastic collision between two point particles would probably give a pretty good model. Even if the impactor did knock off a significant fraction of Dimorphos's mass (which seems unlikely), then it seems to me that this outcome would count as "significantly changing its trajectory" almost by definition. Was there ever really any genuine uncertainty whether DART would redirect Dimorphos given that DART directly impacted Dimorphos? What kind of plausible internal composition of Dimorphos could have led to a failure to be redirected? Edit to clarify question scope: As is often the case, many people are interpreting the title of my question too literally. (My understanding is that Stack Exchange's convention is that the "official" version of an SE question is found in the question body, and the purpose of the question's title is to draw attention rather to precisely state the question.) I'm not trying to have a general philosophical debate about how much you should trust theory vs. experiment. Nor am I trying to understand why NASA actually did observationally confirm the redirection, as a lot of complicated non-physics factors enter into that decision. (So any speculation about NASA's political incentives, etc. are out of scope for this question.) I'm just asking, very concretely, what were the main sources of scientific uncertainty in the extent to which Dimorphos would be redirected given a successful collision, and how those uncertainties would affect the extent of redirection. "The composition of Dimorphos" would not be a concrete enough uncertainty; I'd like to know how the composition of Dimorphos would change the redirection. Of the many comments and answer to this question so far, only John Doty's answer addresses my question within the scope that I intended it.
The spacecraft had a large amount of energy, but not a lot of momentum. Most of the impulse delivered to the target was due to the momentum of the ejecta. Energy scales as $mv^2$ , but momentum scales as $mv$ . For a given energy, cut the ejecta velocity in half, eject four times as much, and deliver twice the impulse. But whether the energy produces a small quantity of fast ejecta or a large quantity of slow ejecta depends on the material properties of the target. These were poorly known.
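A back-of-the-envelope sketch of that scaling, with a completely made-up energy budget and ejecta speeds (none of these numbers come from the mission):

    E = 1.0e9                       # joules delivered to the ejecta, made up

    def ejecta_momentum(v):
        """Momentum carried by ejecta of speed v that soaks up all of E."""
        m = 2.0 * E / v**2          # from E = (1/2) m v^2
        return m * v                # p = m v = 2 E / v

    for v in (1000.0, 500.0, 250.0):     # m/s, illustrative speeds
        print(v, ejecta_momentum(v))
    # Halving the ejecta speed quadruples the ejected mass and doubles the momentum,
    # so the delivered impulse hinges on the (poorly known) target material.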
{ "source": [ "https://physics.stackexchange.com/questions/739831", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/92058/" ] }
741,192
As far as I can see from this Wikipedia article on the incandescent light bulb , there have been only four types of light bulb filaments: those made of carbon , those made of osmium , those made of tantalum and those made of tungsten (wolfram). I wonder why it is impossible to use any other chemical element for that purpose. Is there a simple explanation of that reason?
Essentially: carbon, osmium, tantalum, and tungsten, along with rhenium , are the only (known, stable) elements that have melting points high enough to remain solid at the high temperatures required to achieve the colors of standard incandescent light bulbs. Why? Incandescent light bulbs produce light by heating a filament, which gets so hot that it emits enough radiation to light up the room. We can model the color of the light bulb well by approximating it as a blackbody at thermal equilibrium, governed by Planck's law . When electricity is passed through it, the light bulb filament heats up until it reaches, at equilibrium, the temperature referred to on the bulb label. The colder the temperature, the redder the bulb, and the higher the temperature, the bluer the bulb. Standard incandescent light bulbs have temperatures between 2700 K and 3000 K. This is because the temperature that produces a peak at the red-most end of the visible spectrum (with $\lambda=750$ nm) is ~3800 K. Thus, as the filament temperature is reduced below 3800 K, the peak shifts further out of the visible range and the light produced appears dimmer (due to Wien's displacement law ), so colder temperatures appear too dim to function as a light bulb. As the filament temperature is increased, however, the operational temperature of the light bulb approaches the melting point of the filament. Since the temperature is a measure of the mean kinetic energy of the molecules, many of the molecules in the filament will individually have more energy than that mean, severely reducing the lifetime of a light bulb with a filament whose operational temperature is too close to its melting point. This is why incandescent bulbs are rarely rated above 3000 K (at least those without some kind of special coating that makes them bluer). However, the only elements with melting points above 3000 K are the elements you mention: carbon, osmium, tantalum, and tungsten, together with rhenium, one of the rarest elements on earth. See the elements in red (which have melting points at or above 3000 K) in the following periodic table (from ptable.com ): That doesn't mean there aren't other materials with high melting points that might function well as filaments for an incandescent bulb, such as tantalum carbide (which can melt at around 3900 K), for example. Here's a 1935 patent for a tantalum carbide lamp. Even rhenium has been considered. Here's a 2001 patent for a tungsten-rhenium alloy filament, although given the rarity of rhenium, it is likely not economical to use as a filament for consumer-grade light bulbs. Non-incandescent light bulbs produce light through entirely different mechanisms, or else don't use a solid filament, which is why LED light bulbs don't require tungsten et al. at all.
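A quick check of the temperatures quoted above, using Wien's displacement law with its standard constant (this is textbook input, not data from the linked sources):

    # Wien's displacement law: lambda_peak = b / T
    b = 2.898e-3                               # Wien's constant, m*K

    for T in (2700.0, 3000.0, 3800.0):         # filament temperatures in kelvin
        print(T, b / T * 1e9)                  # peak wavelength in nm
    # 2700-3000 K peaks near 970-1070 nm (infrared); ~3800 K is needed to bring
    # the peak up to roughly the 750-760 nm red edge of the visible range.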
{ "source": [ "https://physics.stackexchange.com/questions/741192", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/249169/" ] }
741,544
In my textbook, Physics, Part II—Textbook for Class XI , there's a line which talks about why stress is not a vector: Stress is not a vector quantity since, unlike a force, stress cannot be assigned a specific direction. Force acting on the portion of a body on a specified side of a section has a definite direction. It does not elaborate why stress cannot be assigned a specific direction. I know that the stress on a body is the restoring force (applicable when the body is deformed) per unit area. Has it got something to do with the fact that area itself is a vector? Moreover, we often say that tensile (or compressive) stress is applied perpendicularly to the surface. That's specifying direction, isn't it? An intuitive explanation (instead of a rigorous mathematical one) is highly appreciated.
Draw a square on an elastomer strip and stretch it: "OK, I get this:" The lengthwise load (comprising two force vectors, to the left and to the right) applies a stress state on the shape. What kind of stress? "I'll assume the stress state can be expressed as a vector. I guess the vector corresponds to normal stresses to the left and to the right (i.e., forces acting perpendicular to the left and right sides). This is consistent with normal stresses changing side lengths of infinitesimal elements. I guess I'll call the vector [1 0 0], where I've normalized by the load magnitude." Now consider drawing not a square but a diamond, for the same load. "OK, now I get this:" What is the stress state? "It now includes some shear stress, since interior angles are now changing. Effectively, some forces on the sides are now parallel instead of solely perpendicular. The vector [1 0 0] doesn't capture this change, nor can I transform it rotationally to scale with [1 1 0] or [1 -1 0], say, because the diamond doesn't deform that way either; it stretches more to the left and right than it shrinks up and down. Hmm. "Nature doesn't care which way we draw our coordinate systems, so we need a mathematical representation that transforms correctly. I have to conclude that a vector is incapable of representing the stress. However, a tensor would work: $$\left[\begin{array}{ccc} 1 &0 &0\\0& 0& 0\\0& 0 &0\end{array}\right]$$ would transform upon a 45° rotation into $$\left[\begin{array}{ccc} 1/2 &1/2 &0\\1/2& 1/2& 0\\0& 0 &0\end{array}\right],$$ which is consistent with the observed deformation of the diamond. Specifically, the side lengths stretch equally from an equibiaxial stress (from the diagonal elements) of 1/2, and this is superimposed on a shape change from a shear stress (from the off-diagonal elements) of 1/2. "Furthermore, the tensor satisfies the standard requirements, such as invariance of the trace (here, 1) and two other invariants. These invariants capture the true essence of the stress state, which must be coordinate independent." Why not just list those indices as, say, [½ ½ 0 ½ ½ 0 0 0 0] to make a vector? "That's not a true Cartesian vector, which has three elements and a well-defined direction. It's just a list." One more question. When we apply a load on a surface, the resulting stress state has a well-defined direction that corresponds to the load. Why isn't a tensor needed here? "A tensor is still needed to describe the stress state because of the above reasoning, but neither the surface nor the load are free to rotate, so any infinitesimal element aligned with the surface is constrained. Although this appears to suggest that stress has a single direction, it's a particular result of the constraint and doesn't hold in general." (Images from my site , adapted from a photograph by Nelson Fitness.)
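For readers who want to verify the 45° numbers, here is a short check using the standard tensor transformation rule (the rotation matrix is the usual one about the z axis; nothing else is assumed):

    import numpy as np

    sigma = np.array([[1.0, 0.0, 0.0],     # normalized uniaxial stress along x
                      [0.0, 0.0, 0.0],
                      [0.0, 0.0, 0.0]])

    theta = np.pi / 4                       # 45 degree rotation about z
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])

    sigma_rot = R @ sigma @ R.T             # tensor transformation: sigma' = R sigma R^T
    print(np.round(sigma_rot, 3))
    # [[0.5 0.5 0. ]
    #  [0.5 0.5 0. ]
    #  [0.  0.  0. ]]  -> equal normal stresses of 1/2 plus a shear stress of 1/2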
{ "source": [ "https://physics.stackexchange.com/questions/741544", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/346838/" ] }
741,557
For a certain (divergenceless) $\vec{B}$ find $\vec{A} $ such that $\vec{B}= \nabla \times \vec{A} $ . Is there a general procedure to "invert" $\vec{B}= \nabla \times \vec{A} $ ? An inverse curl? (I was thinking of taking the curl of the previous equation: $$ \nabla \times \vec{B}= \nabla \times \nabla \times \vec{A} = 0. $$ Then using the triple cross product identity $ \nabla \times \nabla \times \vec{V} = \nabla (\nabla \cdot V) - \nabla^2 V$ but that does not quite simplify things... I was hoping to get some sort of Laplace equation for $\vec{A}$ involving terms of $\vec{B}$ .)
{ "source": [ "https://physics.stackexchange.com/questions/741557", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/212834/" ] }
742,157
I want to know what a standing wave of light would look like and what properties it might have that are interesting.
The resonant cavity of a laser is a standing wave. It doesn't really look like anything in particular because the standing waves are not travelling to your eyes. However, you can let some of the wave escape the cavity to do all sorts of interesting things. The most interesting property is the ability to control cats due to the tight collimation: Other less interesting properties include that the light is coherent and monochromatic. It can also be very intense and easily focused. It can produce more heat at a target than in the cavity, thus allowing to thermally ablate materials.
{ "source": [ "https://physics.stackexchange.com/questions/742157", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/352315/" ] }
742,232
Heat conduction is not considered to be a form of work, since there is no macroscopically measurable force, only microscopic forces occurring in atomic collisions. This excerpt is from a 2007 Wikipedia selection, and I was curious as to why a distinction between microscopic forces and macroscopic forces results in something not being considered a form of work.
This is more or less a definition. The question is then why heat and work are defined as distinct channels for increasing the internal energy. In my understanding the distinction is in the following sense. Let $(S,V)$ be our thermodynamic variables. What does this mean microscopically? At the end of the day volume has to do with microscopic energy levels in a (quantum) system. Fixing the volume you fix the microscopic energy spectrum $\{ E_i \}$ of your system. Entropy $S$ on the other hand is related to how these levels $\{ i \}$ are populated, $S=-\sum_i \frac{N_i}{N} \log\left( \frac{N_i}{N} \right)$ in units of $k_B$ (we assume we have some fixed number of particles $N$ that can populate the levels in whichever way $\{ N_i\}$ ). The total energy of the system is $E_{tot}= \sum_i E_i N_i$ . In general it can change either when $\{ E_i \}$ or $\{ N_i\}$ are changed (through volume or entropy change respectively): $$\delta E = X d V + Y d S$$ Clearly these two channels are qualitatively different. In the first case the energy changes due to "drift" of the levels. In the second, the particles hop around from level to level. The former is then a reversible (= macroscopically traceable) process while the latter is not. The convention is that the first term $X dV$ is referred to as work $\delta W$ and the second $Y dS$ as heat $\delta Q$ (up to a sign, $X$ and $Y$ are just the pressure and the temperature: $X=-p$ and $Y=T$).
{ "source": [ "https://physics.stackexchange.com/questions/742232", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/340090/" ] }
744,462
I've seen this question asked multiple times, and the answer is never detailed. I initially assumed that either hydrogen or oxygen had relatively large neutron absorption cross sections, however that is not the case, so what actually makes water a good absorber?
Water is useful for neutron shielding , even though water is not an especially good neutron absorber. Oxygen nuclei are basically invisible to neutrons, since oxygen-16 is a spinless doubly-magic nucleus. However, hydrogen has a both large scattering cross section and a low mass. Basically, in every hydrogen-neutron scattering event, the outgoing neutron momentum is spherically symmetric in a reference frame with half of the neutron’s initial speed. Since the neutron momentum is roughly halved with every scatter, neutrons with basically any energy reach thermal equilibrium with the water quickly. High-scattering barriers are opaque for the same reasons that the undersides of clouds (which are made of transparent water droplets or ice crystals) are dark. There is some thickness of scatterer where the incident radiation is equally likely to be transmitted or backscattered. Many of these thicknesses, and the incident radiation is exponentially attenuated, even with negligible absorption. Thermal neutrons in water are about a thousand times more likely to scatter than to capture — which sounds like a small capture cross section, but really isn’t. Water is also extremely cheap. Liquid water has the property that a water barrier (unlike, say, a brick barrier) has no chinks or gaps where the radiation can shine through unimpeded. The mass of the water makes it pretty okay at absorbing the gamma photons emitted when the neutrons capture on the hydrogen. And to top it off, the “activation products” produced by neutron radiation in water are deuterium and oxygen-17. These are both non-radioactive; they aren’t even poisonous.
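To put a rough number on how fast this happens, assume (as a crude average, and purely as an illustration) that each scatter off hydrogen cuts the neutron energy in half; the starting and ending energies below are typical textbook values, not taken from the answer:

    import math

    E_fast = 2.0e6       # eV, a typical fission-spectrum neutron
    E_thermal = 0.025    # eV, room-temperature thermal energy

    n_scatters = math.log2(E_fast / E_thermal)
    print(round(n_scatters))     # about 26 scatters, so thermalization is quick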
{ "source": [ "https://physics.stackexchange.com/questions/744462", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/355244/" ] }
744,520
I have learned about general relativity and how gravity arises from spacetime curvature. And I have always been taught that gravity is not a real force in the sense that $$\frac{dp}{dt} = 0$$ And from this, gravity does not accelerate objects while they are in freefall. They are only accelerated when they are on the ground at rest. On the other hand, when a spacecraft needs to reach a destination more quickly, they can use planets as velocity boosters. They use a gravitational assist from the planet to accelerate them to a greater velocity. How can this be if gravity does not accelerate objects in freefall since it is not a force? I am seeing a contradiction here and it is confusing me. What am I missing in my conceptual understanding of gravity?
Well, gravity is a force and it isn't . What is a force anyway? It's what makes you accelerate, which is already a statement about a second-order derivative of one variable with respect to another, and now all of a sudden your coordinate system is important. The point being made when someone says "gravity isn't a force" is that, if you express a body's location in spacetime, not space as a function of proper, not "ordinary" time along its path, gravity doesn't appear in the resulting generalization of Newton's second law in the same way as other forces do. In that coordinate system, the equation can be written as $\color{blue}{\ddot{x}^\mu}+\color{red}{\Gamma^\mu_{\nu\rho}\dot{x}^\nu\dot{x}^\rho}-\color{limegreen}{a^\mu}=0$ , where the red (green) part is gravity (other forces). But this red/green distinction looks different, or disappears, if you look at things another, mathematically equivalent way. In particular: Putting on Newton's hat This is the less elegant of two options I'll mention, one that uses pre-relativistic coordinates. If you look at the body's location in space, not spacetime as a function of ordinary, not proper time, the red term looks like the green term, hence like the stuff you learned from Newton. In particular, $\frac{dp^i}{dt}\ne0$ . Putting on Einstein's hat Even more elegantly, we don't need to leave behind the coordinates I suggested first to change our perspective. As @jawheele notes in a comment, we unlock the real power of GR if we use a covariant derivative as per the no-red formulation $\color{blue}{\dot{x}^\nu\nabla_\nu\dot{x^\mu}}-\color{limegreen}{a^\mu}=0$ . This time, the equation's terms manifestly transform as a tensor, making the blue term the unique simplest coordinate-invariant notion of acceleration. The main advantage of the $\Gamma$ -based version is doing calculations we can relate back to familiar coordinates. This not only recovers Newtonian gravity in a suitable limit, it computes a correction to it. Regarding the first bullet point above, have you ever spun on a big wheel? There's a similar perspective-changing procedure that says the dizziness you're feeling is due to something that's "not a force". You're still dizzy, though. This isn't a contradiction; they're just two different ways of deciding what counts as a force. The good news is we don't need to "forget" GR to understand a gravity assist. How does it work? It exploits the fact that, if a planet's in the right place at the right time for you, the red term is very different from what the Sun alone would normally give you there. This has implications for the blue part even without wasting fuel on the green part. Or you can explain it without GR; your choice.
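For the record, here is the standard weak-field, slow-motion sketch of how the red term alone reproduces Newtonian gravity in familiar coordinates (textbook material; the signs and factors of $c$ depend on the metric convention chosen): $$g_{00}\simeq-\left(1+\frac{2\Phi}{c^{2}}\right),\qquad \Gamma^{i}{}_{00}\simeq\frac{1}{c^{2}}\,\partial_{i}\Phi,$$ $$\ddot{x}^{i}+\Gamma^{i}{}_{00}\,\dot{x}^{0}\dot{x}^{0}\simeq 0\quad\Longrightarrow\quad\frac{d^{2}x^{i}}{dt^{2}}\simeq-\partial_{i}\Phi,$$ which is just Newton's $\vec{a}=-\nabla\Phi$ for slow motion in a weak, static field.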
{ "source": [ "https://physics.stackexchange.com/questions/744520", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/231892/" ] }
744,548
I have read at many places that the pure Maxwell theory (without any matter) is self-dual. This is the general form for Maxwell Lagrangian density: $$\mathcal{L} = - \frac{1}{4} F_{\mu\nu} F^{\mu\nu},$$ where $F_{\mu\nu} \equiv \partial_\mu A_\nu - \partial_\nu A_\mu.$ There are two ways to see the self-duality that I know of. Either we can write it in terms of electric and magnetic fields, $$ \mathcal{L} = \frac{1}{2}\left(\mathbf{E}^2 - \mathbf{B}^2\right);$$ under the duality transformation $\mathbf{E}\rightarrow\mathbf{B}$ and $\mathbf{B} \rightarrow-\mathbf{E}$ , the form of the Lagrangian remains the same (up to a negative sign), and so the equations of motion also remain the same. Alternatively, we can also write (in the language of differential forms): $$\mathcal{L} = -\frac{1}{2} F \wedge *F,$$ where $*F$ is the Hodge dual of $F$ . This is invariant under the duality transformation $ F \rightarrow *F \ ,\ *F \rightarrow **F = -F $ which again lands us with the same form of Lagrangian (with a negative sign). My actual problem concerns the duality in presence of monopoles. But I think my confusion can be answered at the level of pure Maxwell theory itself. As described in this answer , we can introduce an extra gauge field (call it $A^m$ , and call the electric gauge field $A^e$ ) that couples with magnetic monopoles. I am assuming that under the self-duality, the $A^m$ and $A^e$ fields will be swapped (up to a negative sign). But then I don't see how the kinetic term will remain invariant. So perhaps I can word my question like this: 1) How can I express the Maxwell Lagrangian in terms of $A^e$ and $A^m$ , so that the self-duality is obvious? Let me also phrase this question in slightly different way: 2) Where is the $A^m$ field hidden in Maxwell Lagrangian? (Is it hidden as a constraint, for example?) My understanding might be completely incorrect about this. If you have any comment or reference about it, that will be appreciated.
{ "source": [ "https://physics.stackexchange.com/questions/744548", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/239645/" ] }
746,147
Exactly what the question says; If all the protons and electrons in every single atom in the universe were swapped for their anti-particles, what would essentially change?
Changing all particles into antiparticles and vice versa is known as a charge symmetry operation (C) and for a long time it was believed this would leave everything totally unchanged. Similarly mirroring the universe (a parity symmetry operation, P) would seem to leave everything unchanged. However, in 1956 it was found that there are weak interactions that do not obey parity symmetry - the left and right handed forms have different probability. It was believed that the combined CP symmetry was still true, but in 1964 that was also found to fail: CP violation . So the antimatter universe would be very slightly different from ours. The effect is small, and only occurs in weak interactions involving nonzero strangeness number. That means that the effects in everyday physics will be very small: while normal protons and neutrons have a pinch of strange quark presence it is tiny. It would be felt most strongly for weak interactions - that are also just a small part of what is going on. Presumably it would change energy levels in nuclear transitions to a tiny degree, which may change a few fusion pathways in stars. So there would be a difference, but it would likely be almost imperceptible, in the form of different elemental abundances of heavier elements.
{ "source": [ "https://physics.stackexchange.com/questions/746147", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/340090/" ] }
746,211
I was reading Classical Mechanics : The theoretical minimum by Leonard Susskind , and he says Assume that two clocks at different places can be synchronised. I don't understand why one should do that. Can't one clock at the origin be enough? Whenever I try to start on special relativity, this crops up. Can someone explain this to me or at least point me towards any resources which explain such issues in detail? Especially special relativity related.
All of special relativity is based on the assumption that any observer can set up a coordinate system and then label spacetime events with their coordinates in that system. Then we can use the Lorentz transformations to transform between the coordinate systems of different observers. The positions of events are easy because I have an infinite number of rulers and simply by laying them one after the other I can create a grid that fills all of space. Then when some event happens my colleague who happened to be standing where the event happened can just look at my rulers and note down the position. But the time is trickier. Time measurements are easy for events at my position because I just look at my clock and note the time. But for any distant event I have to ask my colleague next to the event to note the time on their clock. I could wait for the light from the event to reach me, and subtract off the travel time to get the original time of the event, but this is now an indirect measurement of the time. This is workable in SR, but in GR light travel times are impossible to calculate unless I know the exact trajectory the light took, and indeed the light could reach me by multiple paths, as happens in gravitational lensing. So the only safe option is to put a clock at each point of my grid of rulers and then synchronise them all. That way the event coordinates can be recorded by a colleague standing at that point. But this only works if all the clocks can be synchronised, and this is harder than it appears at first sight. If I move my clock to yours so we can synchronise them, my clock will be time dilated by the motion and this spoils the timing. That's why we resort to protocols like Einstein synchronisation . Now this is all conceptual rather than realistic and we clearly don't actually measure events this way. However it is a concept that is at the heart of special relativity, and that's why books on SR tend to labour the point.
{ "source": [ "https://physics.stackexchange.com/questions/746211", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/158099/" ] }
746,531
If I understood it correctly, the shape of the wings and/or propellers generates lift/thrust through the difference in pressure on the two sides of the wings/propellers, where the lower side has higher-pressure airflow and the upper side has lower-pressure airflow. With this in mind, I was wondering if it is possible to generate an area of low pressure around the upper part of an aircraft without balloons, moving wings, or propellers/rotors. A "static lift" is the best way I could put it. So, would such a thing be possible? Or can lift only be achieved with the airflow that wings already work around?
The cartoon is missing a key feature: the flow beyond the wing is downward. This is necessary to create lift. The lift force is balanced by a force on the air, Newton's third law in action. This force accelerates the air downward. So, no, you cannot generate lift statically.
{ "source": [ "https://physics.stackexchange.com/questions/746531", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/296235/" ] }
747,202
Since $W=Fs$ , $F=\frac{W}{s}$ . When you substitute this in the formula for acceleration, $a=\frac{F}{m}$ , you will get that $a=\frac{W}{ms}$ . Then, when work equals zero, acceleration will be zero.
Can there be acceleration without work? Yes. An object going round in a horizontal circle at constant speed is accelerating because the direction of its velocity is changing. However, the magnitude of its velocity (its speed) is constant, and so its potential and kinetic energy are constant. Therefore it neither does work nor has work done on it. Although there is a force acting on the object (the centripetal force which keeps it moving in a circle) this force is always at right angles to the velocity of the object and so it does no work on the object.
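A tiny numerical check of this, for a made-up uniform circular motion (mass, radius and angular speed are arbitrary):

    import numpy as np

    m, R, omega = 2.0, 3.0, 1.5                       # arbitrary values
    t = np.linspace(0.0, 2 * np.pi / omega, 10001)    # one full revolution

    vx, vy = -R * omega * np.sin(omega * t), R * omega * np.cos(omega * t)
    ax, ay = -R * omega**2 * np.cos(omega * t), -R * omega**2 * np.sin(omega * t)

    power = m * (ax * vx + ay * vy)                   # P = F . v
    work = np.sum(power) * (t[1] - t[0])              # W = integral of P dt
    print(np.abs(power).max(), work)                  # both vanish (up to rounding)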
{ "source": [ "https://physics.stackexchange.com/questions/747202", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/355431/" ] }
747,546
According to Einstein, energy is equal to mass. Consider a planet that is in gravitational attraction to two stars. Normally I would say that the gravitational attraction is proportional to the masses of the two stars. But if they are orbiting each other, they possess energy. Is it correct to say that this star system therefore has a gravitational pull that is greater than what just the two added masses of the stars would produce?
A double star has less energy than if the two stars were separated. It is fairly easy to see why this is. If you have two stars orbiting each other you would need to add energy to separate them. That is, assuming you had some form of Star Trek-esque tractor beam, you'd have to use that to grab the stars and physically pull them away from each other. Then you'd be putting work into the system and that added work means the two separated stars have a greater combined energy than the original double star system. That means the gravitational field of a double star system is slightly smaller than you'd expect from the masses of the two stars, though in practice the difference is far too small ever to be measured.
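To put a rough number on "far too small ever to be measured", here is a sketch using the Newtonian binding energy of a circular binary; the two Sun-like stars separated by 1 AU are invented inputs chosen only for illustration:

    G = 6.674e-11        # m^3 kg^-1 s^-2
    c = 2.998e8          # m/s
    M_sun = 1.989e30     # kg
    AU = 1.496e11        # m

    M1 = M2 = M_sun      # two Sun-like stars (illustrative)
    a = 1.0 * AU         # orbital separation (illustrative)

    E_binding = G * M1 * M2 / (2 * a)   # energy needed to pull the binary apart
    dM = E_binding / c**2               # equivalent mass deficit
    print(dM, dM / (M1 + M2))           # ~1e22 kg, a fractional change of ~2.5e-9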
{ "source": [ "https://physics.stackexchange.com/questions/747546", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/337868/" ] }
747,560
Supposing in Young's double slit experiment I cover one slit with a red filter and the other slit with a blue filter. The light coming out from the first slit would be red and from the second slit would be blue. Would there be any interference fringes? I tried googling this question but all the answers say that two different monochromatic lights cannot interfere and hence no interference pattern. But we do know that if we did the experiment using white light, there is a pattern (for a few fringes). So what should be the right answer?
{ "source": [ "https://physics.stackexchange.com/questions/747560", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/356863/" ] }
747,645
Do you know of an elementary proof for the second law of thermodynamics, for example, from Newton's laws, or perhaps some particular model in which it is equivalent to, or reduces to, them? My naive concept of entropy was that it is some concave function of all the velocities of a system, for example, $min(\{v\})$ . Is that correct at all?
You cannot derive the second law of thermodynamics from Newton's laws. Boltzmann's H-theorem was intended to do that, but it's not an actual theorem: the proof is flawed. However, although it's not a theorem, it works in reality. Go figure.
{ "source": [ "https://physics.stackexchange.com/questions/747645", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/330871/" ] }
747,661
I am reading the book "Decoherence and the Quantum-to-Classical Transition" by Maximilian A. Schlosshauer, and I have come to understand that for a two-level system with eigenstates $|a\rangle$ , $|b\rangle$ for which the environment respectively adopts eigenstates $|E_a\rangle$ , $|E_b\rangle$ corresponding to the system states, the suppression of the coherent terms in the $a,b$ basis is proportional to $\langle E_a|E_b\rangle$ . I also learned the general formula for how the scattering of light environmental particles carries away information about a system, thereby reducing the environmental wavefunction overlap and decohering it. My question is: what about the kind of decoherence caused by photons either being absorbed or not being absorbed in a given molecule? For instance, if a molecule A absorbs at frequency $\omega$ in its ground state, and the excited molecule $A*$ absorbs at frequency $\omega_1$ , it seems reasonable to me that irradiating the sample with $\omega$ -frequency photons should suppress the ability of $A$ to exist in a coherent superposition of excited and ground states. However, I can't think of a mathematical theorem for what this decoherence would look like, and I can't find one in my book either. Does anyone know what theorem describes this specific type of decoherence?
{ "source": [ "https://physics.stackexchange.com/questions/747661", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/344051/" ] }
747,673
From what I know, wherever there is an electric field that is propagating, there will be a magnetic field present too, because that's what an EM wave comprises: if it is going to carry energy, we will have both of them at any instant. But in any problem or application, there is this notion of applying the "electric field" or the "magnetic field". What exactly does that mean? How are we ignoring the effects of the other? I do not get how that would work, because even for a stationary charge the field is propagating, but we only consider it to have an electric field; but then what about the magnetic field?
{ "source": [ "https://physics.stackexchange.com/questions/747673", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/213851/" ] }
747,868
I was scrolling Instagram and saw this Reel, which at first seemed normal, but when I started to think about how the child was able to move, I got confused. The video shows a child standing inside of a crib and repeatedly bouncing their body against the rail of the crib, which causes the crib to move. Here is the logic behind the confusion: The center of mass of any system, if initially at rest, will move only if there is an external force acting on the system. In this case the child is applying a force on the bed in the forward direction by hitting it continuously, so friction should be acting on it in the backward direction; but if that's the case, then they should move in the backward direction and not forward. Can someone give an explanation for this kind of motion of the child? Edit :- As per the comment I am adding the screenshots of the child and the crib initially and how it ended up after some time (though they will not show how the child made this happen)
The kid moves their crib by expertly exploiting the difference between static and kinetic friction. Initially they pull themselves towards the bars relatively slowly, or more crucially with a relatively small acceleration. The force on them required for this acceleration comes from the crib. They pull the crib to the right and, by Newton's 3rd law, the crib pulls them to the left. The key thing is that the force they apply on the crib is small enough that the force of static friction between the floor and the crib is large enough to keep the crib in place, so the crib does not move to the left during this stage. Then the kid collides with the bars. Their speed to the left decreases quickly, which means, for a short time, they have a large acceleration to the right. For this to happen there has to a be a large force on them to the right. This comes from the bars of the crib but it means that the kid simultaneously applies a large force on the crib to the left. This force on the crib is now large enough to overcome the static between the crib and the floor and it begins to slide to the left. When sliding, the coefficient of friction between the floor and the crib is less than when it's not sliding which helps the crib move a bit farther left before coming to rest. Now the kid pushes the bars away to the left keeping the crib moving to the left. After that the kid pulls themselves towards the bars again but, not with enough force to cause the crib to slide to the right. Since there is a force of friction between the crib and the floor, the crib can not be considered an isolated system. The whole floor/room/building/Earth is now part of the system so considering the centre of mass is not that useful.
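A rough order-of-magnitude sketch of why the slow pull does not move the crib while the abrupt collision does; every number below (masses, speeds, times, friction coefficient) is invented purely for illustration:

    g = 9.81
    m_kid, m_crib = 10.0, 8.0            # kg, made up
    mu_static = 0.5                      # crib feet on floor, made up

    # Friction can supply at most this much horizontal force before the crib slides
    f_max = mu_static * (m_kid + m_crib) * g
    print(f_max)                         # ~88 N

    # Phase 1: kid pulls themselves up to 0.5 m/s over ~0.5 s -> small reaction force
    F_pull = m_kid * 0.5 / 0.5
    # Phase 2: kid is stopped by the rail in ~0.05 s          -> large reaction force
    F_hit = m_kid * 0.5 / 0.05

    print(F_pull)   # ~10 N, below the limit: the crib stays put
    print(F_hit)    # ~100 N, above the limit: the crib slides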
{ "source": [ "https://physics.stackexchange.com/questions/747868", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/271783/" ] }
748,421
This might be a slightly naive question, and if so I apologize, but I am currently a little confused as to why the Heisenberg Uncertainty principle should apply to particles, i.e. our system (say an electron) after we observe it and collapse it’s wave function. From what I understand, the Heisenberg Uncertainty principle just comes from the fact that momentum is the Fourier transform of position (wave number technically I think, but all the same since momentum is related to wavelength which is related to wave number). The more localized one is, the less localized the other will be because ‘localized’ things require a larger distribution of frequencies to localize them. Nonetheless, it seems as those this should only hold, if our object is treated as a wave, but if we treat it like a particle, it feels like this should just go away. Even if you represent a particle like a wave by using something like the Dirac delta function or whatnot, you would get essentially an infinite number of corresponding wave numbers, in other words total uncertainty on the momentum which seem strange if we think of things like particles classically. It just feels like in order for Heisenberg to hold, things always need to be ‘wave-like’ in some sense. I apologize for the long winded question, but any help would be appreciated. Edit: Thank you all for your responses. I think my confusion has been cleared up.
I understand your confusion. It is due to an old-fashioned way of introducing uncertainty relations based on the wave formalism, dating back to Heisenberg, but probably quite misleading. Quantum mechanics (QM) does not say that particles are waves. That was de Broglie's original point of view, but today it is untenable. Particle dynamics may be described using waves. But this is not the same as saying particles are waves. There are many reasons for that. I mention a couple of them: quantum wavefunctions for more than one particle are not functions of a single space point; in measurements, nobody has ever measured a fraction of charge, spin, or any other property of the particle, as would happen if the physical properties were spread over an extended field. QM is a probabilistic theory from which we can extract consequences on the statistical behavior of many measurements on equally prepared systems. However, in most cases, the outcome of an individual measurement is a random variable. Moreover, QM can be formulated differently, and wavefunctions in a Hilbert space are just one of the possibilities. The real issue is the calculation of probabilities. The actual content of the Heisenberg relations is captured by the Robertson-Schrödinger theorem : $\Delta x \Delta p_x \geq \frac{\hbar}{2}$ is a statement about the spreads (standard deviations) of the random variables corresponding to independently measured position and momentum in an ensemble of equally prepared particles. As such, it is neither a statement about the measurement of both momentum and position of a single particle nor an effect of the interaction with a measurement device. Limits for combined measurements on the same system exist, but that is a different story, and there are strong indications that such limits differ from the usual Robertson-Schrödinger result.
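As a concrete illustration of the statistical reading, here is a small numerical sketch that evaluates the position and momentum spreads of a Gaussian wave packet with a discrete Fourier transform and checks the bound (the grid, the packet width and the units with $\hbar=1$ are all arbitrary choices):

    import numpy as np

    hbar = 1.0
    x = np.linspace(-40.0, 40.0, 2**12)
    dx = x[1] - x[0]
    sigma = 1.7                                   # arbitrary packet width

    psi = np.exp(-x**2 / (4 * sigma**2))
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

    prob_x = np.abs(psi)**2
    x_mean = np.sum(x * prob_x) * dx
    dx_spread = np.sqrt(np.sum((x - x_mean)**2 * prob_x) * dx)

    k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)  # wavenumbers, with p = hbar * k
    prob_k = np.abs(np.fft.fft(psi))**2
    prob_k /= np.sum(prob_k) * (k[1] - k[0])      # normalize as a density in k
    p = hbar * k
    p_mean = np.sum(p * prob_k) * (k[1] - k[0])
    dp_spread = np.sqrt(np.sum((p - p_mean)**2 * prob_k) * (k[1] - k[0]))

    print(dx_spread * dp_spread, hbar / 2)        # the product comes out at ~hbar/2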
{ "source": [ "https://physics.stackexchange.com/questions/748421", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/357392/" ] }
748,453
I get the grade-school explanation that "the number of electrons equals the number of protons", but the electric field drops off with distance. If the protons are concentrated in the nucleus and the electrons are nebulously around the atom in orbitals, shouldn't the atom have a complicated electrical field that depends on both the position of the electrons and the distance from the nucleus?
Being neutral does not exclude having an electric field. Any dipole is neutral but still has an electric field. Your question is based on a wrong premise. "Neutral" simply means zero net charge. It does not mean no field. If you consider the average field, for a spherically symmetric distribution the field outside the distribution will be zero even though the negative charge is distributed over a larger volume. The instantaneous field may be non-zero due to the fluctuations in the charge distribution. This may produce an instantaneous dipole field. This is the origin of the Van der Waals force between neutral atoms and molecules. If the distribution is non-spherical then a permanent dipole field will exist even though the net charge is zero, so the system is neutral.
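A tiny numerical example of that: a neutral pair of point charges (+q at one spot, -q at another; the charge and the roughly atomic-scale positions are invented) still produces a nonzero field at a distant point:

    import numpy as np

    k = 8.988e9                                # Coulomb constant, N m^2 / C^2
    q = 1.6e-19                                # one elementary charge

    r_plus = np.array([0.0, 0.0, 5.0e-11])     # +q position
    r_minus = np.array([0.0, 0.0, -5.0e-11])   # -q position

    def E_field(r):
        """Field of the neutral pair at the point r."""
        E = np.zeros(3)
        for q_i, r_i in ((q, r_plus), (-q, r_minus)):
            d = r - r_i
            E += k * q_i * d / np.linalg.norm(d)**3
        return E

    print(E_field(np.array([0.0, 0.0, 1.0e-9])))   # nonzero: the dipole field survives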
{ "source": [ "https://physics.stackexchange.com/questions/748453", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/120723/" ] }
749,290
So assuming we know all the laws of physics in differential equation form, and I have an estimate for the current large scale state of the universe (whatever standard assumptions/data cosmologists use about the current large scale state of the universe in order to extrapolate the state of the universe on the large scale far into the future or far into the past... whatever standard assumptions are used to estimate that there was a big bang in the past) It seems to me that I could plug these into my differential equations and find out the state of the universe infinitely far back or infinitely in the future. So why couldn't I plug in a time 100 billion years before today (before the big bang) and find out the state of the universe far before the big bang? Is there something in the theory/mathematics that forces the equations to begin at a certain time t(big bang)... and not allow us to extrapolate prior to that?
Let's not even talk about big bangs yet. Consider a simple non-linear ODE $\frac{dx}{dt}=-x^2$ with the condition $x(1)=1$ . There is a unique maximal solution defined on a connected interval , which in this case is easily seen to be $x(t)=\frac{1}{t}$ for $t\in (0,\infty)$ . Ouch. Even for such a simple-looking ODE, a simple non-linearity already implies that our solution blows up in a finite amount of time, and we can't continue 'backwards' beyond $t=0$ . You as an observer living in the 'future', i.e. living in $(0,\infty)$ , can no longer ask "what happened at $t=-1$ ?" The answer is that you can't say anything. Note that you can also cook up examples of ODEs for which solutions only exist for a finite interval of time $(t_1,t_2)$ , and blow up as $t\to t_2^-$ or as $t\to t_1^+$ . The Einstein equations (which are PDEs, not merely ODEs) are a much bigger nonlinear mess. It is actually a general feature of nonlinear equations that solutions usually blow up in a finite amount of time. Of course, certain nonlinear equations have global-in-time existence of solutions, but a priori, there's no reason you should expect them to have that nice property. For instance, in the FRW solution of Einstein's equations, the scale factor $a(t)$ vanishes as $t\to t_0$ (if you plug in some simple matter models you can even see this analytically), and doing a bunch more calculations, you can show this implies some of the curvature components blow up. What this says is the Lorentzian metric cannot be extended in a $C^2$ sense. We can try to refine our notion of solution and singularity , but that would require a deep dive into the harshness of Sobolev spaces etc, and I don't want to open that can of worms here or now. Anyway, my simple point is that it is very common to have ODEs which only have solutions that exist for a finite amount of time, so your central claim of It seems to me that I could plug these into my differential equations and find out the state of the universe infinitely far back or infinitely in the future. is just not true. Edit: @jensenpaull good point, and I was debating whether or not I should have elaborated on it originally, but since you asked, I'll do so now. Are there functions that satisfy the ODE $\frac{dx}{dt}=-x^2$ which are defined on a larger domain? Absolutely! On $(0,\infty)$ the initial condition forces $x(t)=\frac{1}{t}$ , but on $(-\infty,0)$ we can glue on any solution of the ODE we like, for example $x(t)\equiv 0$ or $x(t)=\frac{1}{t-c}$ for any constant $c\geq 0$ . So, we have completely lost uniqueness . But, why is this physically (and even mathematically in some regards) such a big deal? In Physics, we do experiments, and that means we only have access to things 'here and now' (let's gloss over technical (but fundamental) issues and say we have the ability to gather perfect experimental data). One of the goals of Physics is to use this information, and predict what happens in the future/past. But if we lose uniqueness, then it means our perfect initial conditions are still insufficient to nail down what exactly happened/will happen, which is a sign that we don't know everything. We are talking about dynamics here, so our perfect knowledge 'initially' should be all that we require to talk about existence and uniqueness of solutions (otherwise, our theory is not well-posed). So, anything which is not uniquely predicted by our initial conditions cannot in any sense be considered physically relevant.
Btw, such ‘well-posedness’ (in a certain class) questions are taken for granted in Physics, and occupy Mathematicians (heck, the Navier-Stokes Millennium problem is roughly speaking a question of well-posedness in a smooth setting). Dynamics is everywhere: Newton’s laws are 2nd-order ODEs and require two initial conditions (position, velocity). From there, we turn on our ODE solver and see what the result is. Maxwell’s electrodynamics: although in elementary E&M we simply solve various equations using symmetry, the fundamental idea is that these are (linear, coupled) evolution equations for a pair of vector fields, which means we prescribe certain initial conditions (and boundary conditions) and then solve. GR: initially, there was lots of confusion regarding what exactly a solution is. It wasn’t until the work of Choquet-Bruhat (and Geroch) that we finally understood the dynamical formulation of Einstein’s equations, and that we had a good well-posedness statement and a firm understanding of how the initial conditions (a 3-manifold, a Riemannian metric, and a symmetric $(0,2)$-tensor field which becomes the second fundamental form of the embedding) give rise to a unique maximal solution (which is globally hyperbolic). So, my first reason for why we don’t continue past $t=0$ (though of course, the reasoning is not really specific to that ODE alone) has been that dynamics should be uniquely predicted by initial conditions. Hence, it makes no physical sense to go beyond $t=0$. The second reason is that in physics, nothing is ‘truly infinite’, and if it is, then our interpretation is that we don’t yet have a complete understanding of what’s going on. So, rather than trying to fix our solution, we should fix our equations (e.g. maybe the ODE isn’t very physical). But before we throw out our equations, we may wonder: have we been too restrictive in our notion of solution? For instance, maybe it is too much to require solutions to be $C^1$. Could we, for instance, require only the weaker regularity of $L^2=H^0$ or $H^1$? Well, $H^1$-regularity is indeed more natural for many physical purposes (because $H^1$-regularity means ‘energy stays finite’). However, for this solution, we can see that $\frac{1}{2}\int_0^{\infty}\left(|x(t)|^2+|\dot{x}(t)|^2\right)dt=\infty$. In fact, this is so bad that for any $\epsilon>0$, $\int_0^{\epsilon}[\dots]\,dt=\infty$, so the origin is a truly singular point at which even the energy blows up. So, there’s no physical sense in continuing past that point.
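For completeness, with the explicit solution $x(t)=\frac{1}{t}$ that divergence is a one-line computation (this just spells out the claim above; nothing new is assumed): $$\frac{1}{2}\int_0^{\epsilon}\Big(|x(t)|^2+|\dot{x}(t)|^2\Big)\,dt=\frac{1}{2}\int_0^{\epsilon}\Big(\frac{1}{t^2}+\frac{1}{t^4}\Big)\,dt\geq\frac{1}{2}\lim_{\delta\to 0^+}\int_{\delta}^{\epsilon}\frac{dt}{t^2}=\frac{1}{2}\lim_{\delta\to 0^+}\Big(\frac{1}{\delta}-\frac{1}{\epsilon}\Big)=\infty.$$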
{ "source": [ "https://physics.stackexchange.com/questions/749290", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/62211/" ] }
749,907
I study mathematics but I have a deep interest in physics as well. I have taken a course in smooth manifolds where a tensor is defined as an alternating multilinear function. Recently I have learned about electrodynamics and how Maxwell's equations can be written in relativistic form. We introduce the "(anti-symmetric) 2-tensor" $F_{\mu\nu}$, which, from what I have understood so far, has the benefit that it allows us to easily calculate how the fields transform under arbitrary Lorentz transformations. (As an aside, is there really any other benefit?) I've understood how $F_{\mu\nu}$ is derived, but I've been stuck on why/how physicists call this object a tensor. How can an object such as $F_{\mu\nu}$ be seen as a bilinear function?
"where a tensor is defined as an alternating multilinear function": I think you may be confusing the general concept of tensors with the specific case of differential forms, which indeed by definition are always alternating. But if you drop the "alternating", this would be completely correct. Now, perhaps the confusion arises from the fact that in physics we may be a bit "sloppy" at times and represent something like the electromagnetic Faraday tensor as a matrix: \begin{equation} \left\{ F^{\mu \nu} \right\} = \begin{pmatrix} 0 & -E^1 & -E^2 & -E^3 \\ E^1 & 0 & -B^3 & B^2 \\ E^2 & B^3 & 0 & -B^1 \\ E^3 & -B^2 & B^1 & 0 \end{pmatrix} \end{equation} In fact, this isn't a great way to represent a bilinear function. We know that a matrix is a reasonable representation of a $(1,1)$ tensor, since it maps a vector (which is a $(1,0)$ tensor) to another vector. Suppose that $A$ is a $(1,1)$ tensor; then: $$A^{i}_jV^j = U^i$$ However, a matrix isn't a very good way to represent a bilinear function like $F^{\mu\nu}$. Depending on where its indices sit, a bilinear function maps either a co-vector to a vector, a vector to a co-vector, or a pair (two vectors or two co-vectors) to a scalar: $$F^{\mu\nu}V_{\mu}U_{\nu} = r$$ where for example we may assume $r\in\mathbb{R}$. I apologize for the non-physicality of the example; this is only for illustration purposes :) (You can find much more about this index notation if you're interested, and about how it's related to the more straightforward notation of a multilinear function. Suffice it to say that this is just a more economical notation for familiar operations from (multi)linear algebra.) But apart from some differences in notation, tensors in physics are exactly the same objects as they are in math: multilinear maps. Perhaps most importantly: in physics those multilinear maps often depend on physically significant parameters, such as position in spacetime. A good example of that would be the metric tensor in relativity. So while at each point of the spacetime manifold the metric tensor acts as a multilinear map, we still identify it as the same tensor from point to point, despite this dependence. This is related to the fact that the metric tensor is properly defined as a tensor field on the manifold, via the related notion of a fiber bundle, which you may be familiar with.
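To make the 'multilinear map' reading concrete, here is a small NumPy sketch (the field values and co-vectors are made-up numbers, purely for illustration; the component layout follows the matrix above). Feeding the same array two co-vectors returns a scalar, feeding it one returns a vector, and it is linear in each slot:

```python
import numpy as np

# Made-up field components, purely for illustration
E = np.array([1.0, 2.0, 3.0])   # E^1, E^2, E^3
B = np.array([0.5, -1.0, 2.5])  # B^1, B^2, B^3

# Contravariant Faraday tensor F^{mu nu}, laid out as in the matrix above
F = np.array([
    [0.0,  -E[0], -E[1], -E[2]],
    [E[0],  0.0,  -B[2],  B[1]],
    [E[1],  B[2],  0.0,  -B[0]],
    [E[2], -B[1],  B[0],  0.0 ],
])

# Antisymmetry check: F^{mu nu} = -F^{nu mu}
assert np.allclose(F, -F.T)

# Two arbitrary co-vectors V_mu and U_nu (again, made-up numbers)
V = np.array([1.0, 0.0, 2.0, -1.0])
U = np.array([0.0, 3.0, 1.0,  1.0])

# Bilinear map: two co-vectors in, one scalar out  (r = F^{mu nu} V_mu U_nu)
r = np.einsum('mn,m,n->', F, V, U)
print(r)

# Partial application: one co-vector in, one vector out  (W^mu = F^{mu nu} U_nu)
W = np.einsum('mn,n->m', F, U)
print(W)

# Linearity in each slot, e.g. doubling the first argument doubles the output
assert np.isclose(np.einsum('mn,m,n->', F, 2 * V, U), 2 * r)
```

None of this involves a metric or any physics; the point is only that the $4\times 4$ array of components is most naturally read as a machine that eats (co-)vectors, rather than as a linear map the way a $(1,1)$ tensor's matrix is.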
{ "source": [ "https://physics.stackexchange.com/questions/749907", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/288281/" ] }
2,655
I came to this website, just wanting to exercise my freedom of speech rights and tell everyone my opinion about [issue]. But people keep downvoting my questions and answers. Is this website infiltrated by the uninformed sheep who blindly believe [other opinion] and keep downvoting everyone they don't agree with? Do people use downvotes to suppress my opinion and further their own?
People don't downvote you because they disagree that your opinion should be posted on this website; they disagree that any opinion should be posted on this website. Many people here will downvote any question or answer which is one-sided and opinionated, even when they completely agree with the opinion. The reason is that such contributions do not fit the purpose of this website. The "What topics can I ask about here?" section of the help center says: "Politics Stack Exchange is for objective questions about governments, policies and political processes. It is not a place to advance opinions or debate, but rather for exchanging objective information about the policies, processes, and personalities that comprise the political arena." To avoid getting downvoted for being too opinionated, try to stay calm even when writing about topics you have very strong feelings about. Any question should have the goal of learning more about how politics works. Any answer should explain the workings of politics from a neutral and objective standpoint. When you are not here to learn or teach about governments and political processes and instead just want to spread the word about a political cause, then you are using the wrong website. See also these related FAQs: What is on-topic for this site? Should we encourage questions to be rewritten in nonpartisan terms?
{ "source": [ "https://politics.meta.stackexchange.com/questions/2655", "https://politics.meta.stackexchange.com", "https://politics.meta.stackexchange.com/users/3135/" ] }
2,828
I asked two questions that did not receive very good feedback. Both questions were meant either to draw answers that showed the ridiculousness of the situation, or to let me hear an argument that I've never heard before. I've done this on other SE sites, and no matter how uncomfortable my questions may be, they've always received great feedback. I'm just curious whether this is something that is unacceptable here. I'm willing to compromise, but if it's acceptable I'd like to continue asking my questions. One of the questions has been deleted, and the other is: Why aren't drug dealers required to ID customers? Is it okay to ask questions with the intention of proving a point?
I am not a member of this community, but I did comment on the question, so I thought I'd expand on my comment here. (In a broader sense, I am active on the Stack Exchange network in general and so what I say is based on that.) To me, asking a question to make a point doesn't seem right. The Help Center for all sites says: "You should only ask practical, answerable questions based on actual problems that you face." Now, I think this isn't as clear as it could be. We run into issues of e.g. what makes something a "practical" question. But to me, part of what this means is that your question should be honest and not motivated by something like "trying to make people I disagree with look bad". The Help Center also says to avoid making posts where your question is just a rant in disguise: "______ sucks, am I right?" Reading your comments, I felt like your post is kind of like this: "It just seems strange that kids can buy any drug they want, but purchasing a regulated product is very hard for them to obtain." "if Marijuana was legal, then legislators could begin making rules to regulate whether it is sold to minors? But as it stands, such a rule is neither logical nor enforceable?" "who knows, maybe someone will provide an answer that explains why drugs are better left in the hands of people who don't care whether someone is a child or not. Then I'll learn something new and everyone wins" It seems like your post is "Prohibition of drugs sucks, am I right?" in disguise. Asking "Why isn't there legislation that requires (illegal) drug dealers to ID customers?" is a pretty silly question, as people have pointed out in the comments. If you post a silly question, you should be prepared for it to get downvotes. If what you really want to ask is "How do drug prohibitionists respond to arguments about legalization allowing increased regulation of drug dealers?" then you should say that outright. Your question in its current format seems disingenuous, and I don't like that. Note that there's no way for me to actually tell if a question-asker is being "honest" or not. So perhaps what I really want is just the appearance of honesty. I think it's fine to ask a question based on premises that you don't believe in, but you should try to make it look like you are asking the question from some understandable viewpoint that shows some research effort. The question you posted, "Why aren't drug dealers required to ID customers?", doesn't seem "plausible" in this way, which I think is why several commentators expressed confusion about the question.
{ "source": [ "https://politics.meta.stackexchange.com/questions/2828", "https://politics.meta.stackexchange.com", "https://politics.meta.stackexchange.com/users/11316/" ] }
4,021
I've noticed a string of questions from single-use accounts that are generally either ill-informed or offensive. The ones I can see from my meager 123-rep account: "Is fascism an older, less liberal form of the welfare state?", "Marxism and land ownership", and "Shouldn't countries like Russia and Canada support global warming?" (this last one got a relatively high score after the edit, but the original is more poorly worded). But I remember there are at least a half dozen questions that have been deleted recently, and a couple of other examples are alluded to in the comments on the posted questions. What should I be doing here? I've been leaving custom mod flags on these pointing out that they seem to be from the same person, but it hasn't shut down the three I pointed out, and the flag from the global warming question was explicitly declined. And since I only get one flag, doing this doesn't impact the question score and I can't also flag as rude/abusive.
When it comes to suspected trolls, my policy is to follow Hanlon's Razor: "Never attribute to malice that which is adequately explained by incompetence." So instead of assuming that the user has got to be a troll to post such an inappropriate question, try to assume that the user is just misguided about what kind of questions are and are not appropriate. Use the tools you have available to teach the user how to post better questions. The tools you have are: comments which explain to the user what's wrong with their question and how it can be improved; edits to make those improvements yourself; downvotes; votes to close; and flags as "rude or abusive" (but only if the post clearly violates the Code of Conduct). Even if you are absolutely sure that the user just wants to troll, remember that the user isn't the only one who sees your actions. You might not be able to teach them, but you can still teach others. New users also learn the do's and don'ts of this community by observing how we treat posts of other people.
{ "source": [ "https://politics.meta.stackexchange.com/questions/4021", "https://politics.meta.stackexchange.com", "https://politics.meta.stackexchange.com/users/10183/" ] }