source_id | question | response | metadata
---|---|---|---|
564,224 | As the tide approaches in the lower part of some rivers (e.g. the Ganges), a tidal wave several feet high enters from the sea against the flow of the river (making a great noise), and the water level suddenly rises as the wave proceeds. I would expect the water level to rise slowly and gradually, as the angle between the moon and the zenith continuously changes from $90^{\circ}$ to $0^{\circ}$ (or $180^{\circ}$). Why does the tidal wave appear so suddenly? I am unsure whether this occurs in seas as well, but I have seen this phenomenon in rivers near the sea. | What you are describing is called a tidal bore. Quoted from Wikipedia - Tidal bore - Description : Bores occur in relatively few locations worldwide, usually in areas
with a large tidal range (typically more than 6 meters (20 ft) between
high and low tide) and where incoming tides are funneled into a shallow,
narrowing river or lake via a broad bay. The funnel-like shape not only
increases the tidal range, but it can also decrease the duration of the
flood tide, down to a point where the flood appears as a sudden increase
in the water level. A tidal bore takes place during the flood tide and
never during the ebb tide. In the ocean the sea level rises quite slowly (only a few feet per hour).
But in the shallow water of rivers (and also in funnel-like bays)
this will result in a sudden wave-like rise of the water-level.
Due to the small slope of the river the slow vertical rise (a few feet per hour)
is converted to high horizontal speed (several kilometers per hour) of the wave-front.
And due to the funnel-like shape (from a wide sea bay to a narrow river)
the height of the wave front piles up on the way. (image from Fisheries and Oceans Canada - Phenomena - Tidal bores ) Here is a real image of the phenomenon. (image from Spectacular tidal bore surges up China river ) | {
"source": [
"https://physics.stackexchange.com/questions/564224",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/60646/"
]
} |
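A rough numeric sketch of the answer's slope argument (the numbers are illustrative assumptions, not taken from the answer): if the water surface rises at a given vertical rate, the waterline on a gently sloping riverbed must sweep inland at that rate divided by the slope.

```python
# Illustrative numbers only: a tidal rise of ~1 m/h on a riverbed sloping
# ~1 part in 10,000 forces the waterline to advance at rise_rate / slope.
rise_rate_m_per_h = 1.0      # assumed vertical tidal rise
slope = 1e-4                 # assumed riverbed slope (dimensionless)

front_speed_m_per_h = rise_rate_m_per_h / slope
print(f"wave-front speed: {front_speed_m_per_h / 1000:.1f} km/h")  # ~10 km/h
```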
564,304 | I've found the following paradox, and I wonder how to resolve it. Two discs are floating in space, call them A and B. They are at a fixed distance D, coaxial, and rotate at the same speed. Each of them has a hole near the border. The position of the hole in disc B lags behind the position of the hole in disc A, by a small amount of time. This time is exactly equal to the time it takes light to traverse D. This means that a laser pulse that gets through hole A is going to get through hole B, and hit a detector on the other side, but the size of the holes is such that there is very little margin for error. Now: an observer passes along this contraption, moving in the axial direction at a sizeable fraction of the speed of light. Due to Lorentz contraction, the distance between A and B is going to be smaller in the observer's frame of reference. Plus, the rotation of the discs is going to be slower, due to time dilation. Either of these effects would be enough to prevent the laser pulse from passing through hole B: it's still traveling at the same speed in the observer's frame of reference, but it has less ground to cover, and on top of that the other disc won't have rotated enough to put the hole in its path. So the detector doesn't get hit! It's illogical for the detector to be hit or not hit depending on the observer. What am I missing? How to resolve this? | Expanding on Dale's answer: by shifting your frame of reference, the relative alignment of the two disks changes, since what is "simultaneous" changes! If we take disk A as the origin, then the relative-simultaneous (undilated) time of disk B shifts under a frame-velocity shift of $v$ by $\beta \frac{x}{c}$, where $x$ is the (non-contracted) displacement to disk B and we use the usual Lorentz-transformation definitions $\beta = v/c, \gamma=1/\sqrt{1-\beta^2}$. Disk B therefore is "now rotated ahead" of what it was before the coordinate transformation by the amount it rotated in a time of $\beta \frac{x}{c}$. The time it takes for the beam to traverse from A to B is now reduced by length contraction (by a factor of $1/\gamma$) and by the movement of disk B during the travel time (by a factor of $1/(1+\beta)$); the rotation of disk B is also slowed by time dilation (by a factor of $1/\gamma$). The pre-transformation rotation time of disk B while the beam was traversing the distance was $\frac{x}{c}$, while the new time is $\frac{1}{\gamma^2}\frac{1}{1+\beta}\frac{x}{c}=\frac{1-\beta^2}{1+\beta}\frac{x}{c}=(1-\beta)\frac{x}{c}$, which is a reduction of $\beta \frac{x}{c}$ - this exactly cancels the relativity-of-simultaneity shift above! This cancellation is guaranteed by the invariance under any Lorentz transformation of the spacetime interval between the beam passing through the hole in disk A and the hole in disk B - that is, the beam passing through hole A and then hole B always aligns with what happens during the traversal from hole A to hole B, no matter what your inertial frame of reference is. | {
"source": [
"https://physics.stackexchange.com/questions/564304",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/269362/"
]
} |
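The cancellation claimed in the answer is easy to check numerically; here is a minimal sketch in units where $c = 1$ (the variable names are mine):

```python
# For any boost beta, the simultaneity shift of disk B (beta*x/c) should equal
# the reduction in light travel time: x/c - (1/gamma^2)*(1/(1+beta))*x/c.
x = 1.0  # A-to-B distance in the original frame (c = 1)
for beta in (0.1, 0.5, 0.9, 0.999):
    gamma_sq = 1.0 / (1.0 - beta**2)
    new_time = (1.0 / gamma_sq) * (1.0 / (1.0 + beta)) * x   # = (1 - beta)*x
    simultaneity_shift = beta * x
    travel_time_reduction = x - new_time
    assert abs(simultaneity_shift - travel_time_reduction) < 1e-12
    print(f"beta={beta}: shift={simultaneity_shift:.6f}, "
          f"reduction={travel_time_reduction:.6f}")
```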
564,307 | Suppose a particle of negligible mass is placed at e.g. $x=1$ inside a one dimensional space with a force field generated by the gravitational attraction of a point mass at the origin $0$. I.e. the force is (normalized) $$-\frac{{\rm sgn}(x)}{x^2}.$$ Then after finite time, the particle will move to the origin. However, at the origin, its speed is infinite, and I'm not sure how to calculate what happens after it has reached the origin. Can I get a hint for how to do this? | | {
"source": [
"https://physics.stackexchange.com/questions/564307",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/127307/"
]
} |
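A sketch of the first step in the question's setup, assuming (as the question implies) the particle starts at rest at $x=1$ in the normalized units given: energy conservation yields $v(x)=\sqrt{2(1/x-1)}$, and the fall time $\int_0^1 dx/v$ is finite, equal to $\pi/(2\sqrt{2})$.

```python
import math

# Fall time t = integral_0^1 dx / sqrt(2*(1/x - 1)).  The substitution
# x = sin(theta)^2 removes the endpoint singularities; midpoint rule below.
n = 200_000
total = 0.0
for i in range(n):
    theta = (i + 0.5) * (math.pi / 2) / n
    total += 2.0 * math.sin(theta) ** 2 / math.sqrt(2.0)
t_fall = total * (math.pi / 2) / n
print(t_fall, math.pi / (2 * math.sqrt(2)))   # both ~1.1107
```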
565,092 | When I was in school, I learned about alchemists, a group of scientists who sought a way to convert other materials into gold. They were never successful, so whenever I studied or read about them, they were portrayed as failures or foolish people, and that’s the impression I had about them. However, more recently, I was preparing for something else, and I read a question which said: Copper can be converted into gold by artificial radioactivity I’m not a science guy, but when I looked into this, it seemed to be true. Is it? And if it's true, were the alchemists right all along, not foolish people as history portrays them? | There are ways that gold can be produced by radioactivity: Chrysopoeia, the artificial production of gold, is the symbolic goal of alchemy. Such transmutation is possible in particle accelerators or nuclear reactors, although the production cost is currently many times the market price of gold. Since there is only one stable gold isotope, 197Au, nuclear reactions must create this isotope in order to produce usable gold. Italics mine. In a sense the goal of the alchemists has been reached, but they were aiming at getting gold for its value, not for the fun of it. Most readers probably are aware of several common claims about alchemy—for example, ... that it is akin to magic, or that its practice then or now is essentially deceptive. These ideas about alchemy emerged during the eighteenth century or after. While each of them might have limited validity within a narrow context, none of them is an accurate depiction of alchemy in general. They were the "chemists" of their time, in their various pursuits. | {
"source": [
"https://physics.stackexchange.com/questions/565092",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/269663/"
]
} |
565,238 | I was playing table tennis the other day when my ball fell off the table. I placed my paddle above it in order to slow it down, and then I brought the paddle to the ground so that the ball would come to a stop. A diagram of what I did is below: Why did the velocity of the ping pong ball increase so much at the end? I did not apply much force while lowering the paddle, so I didn't think it was because I applied a greater force to the ball. | There are three parts to the phenomenon, two real and one illusory. While you are lowering the bat, its relative velocity to the approaching ball increases that little bit. The ball bounces off it that bit harder, gaining twice that extra velocity relative to the floor. Repeat for several bounces and the difference might become noticeable. This is one real part. The other arises because the ball slows as it rises and accelerates again as it falls. Lowering the bat cuts out the bit where it slows down, so even though the local speed at any given point may not increase, the average speed does increase. The illusion is to do with the scale and period of the bouncing. As you lower the bat, the period of each bounce shortens, increasing the frequency of the bouncing. This combines with the shrinking scale to create an illusion of going faster. (Credit to user Accumulation for pointing this one out in another answer). A similar illusion takes place when you watch a scurrying insect. Compare, say, a horse, a cat and an insect walking along. The big horse seems slow and lazy, the tiny insect in a mad hurry, the cat somewhere in between. But in reality the horse is going the fastest and the insect the slowest. | {
"source": [
"https://physics.stackexchange.com/questions/565238",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/244496/"
]
} |
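A gravity-free toy model of the first ("real") effect in the answer, with purely illustrative numbers: a ball bouncing elastically between a fixed floor and a paddle descending at constant speed u gains 2u per paddle hit, while the round-trip time shrinks as the gap closes.

```python
u = 0.05          # paddle descent speed (illustrative)
gap = 1.0         # initial floor-to-paddle gap
v = 1.0           # initial ball speed, moving up
for hit in range(1, 8):
    t_up = gap / (v + u)          # ball and paddle close at speed v + u
    gap -= u * t_up               # paddle height at the moment of impact
    v += 2 * u                    # elastic bounce off a wall moving at u
    t_down = gap / v              # ball returns to the floor
    gap -= u * t_down             # paddle keeps descending meanwhile
    print(f"hit {hit}: ball speed {v:.2f}, round-trip {t_up + t_down:.3f}")
```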
565,553 | Does stable mean that an isotope has a very long half-life, for example xenon-124 has a half-life of $1.8 \times 10^{22}$ years, or does it mean that fission is theoretically not possible, or does it mean that the isotope has a very long half-life, but the exact number is unknown? | Does stable mean that an isotope has a very long half-life... or does it mean that fission is theoretically not possible, or does it mean that the isotope has a very long half-life, but the exact number is unknown? "Stable" effectively means that there is no experimental evidence that it decays. However, there are nuances within that statement. Most of the "stable" light nuclei can also be shown to be theoretically stable. Such nuclei would have to absorb energy to decay via any of the known decay modes, and so such decay cannot happen spontaneously. Many heavier nuclei are energetically stable to most known decay modes (alpha, beta, double beta, etc.) but could potentially release energy via spontaneous fission. However, they have never been observed to do so; so for all practical purposes they are considered stable. Some nuclei could potentially release energy via emission of small particles (alpha, beta, etc.), but have never actually been observed to do so. Such nuclei are often called "observationally stable". Several nuclides are radioactive, but have half-lives so long that they don't decay significantly over the age of the Earth. These are the radioactive primordial nuclides; your example of xenon-124 is one of them. Note that nuclides can in principle be moved from categories 2 or 3 into category 4 via experimental observations. For example, bismuth was long thought to be the heaviest element with a stable isotope. However, in 2003, its lone primordial isotope (bismuth-209) was observed to decay via alpha emission, with a half-life of $\approx 10^{19}$ years. One could defensibly claim that the nuclei in categories 2 & 3 are radioactive but their half-life is unknown; after all, the totalitarian principle says that any quantum-mechanical process that is not forbidden is compulsory. If you want to take this perspective, though, you have to assume that we have a good enough grasp on nuclear physics to know what is forbidden or not. | {
"source": [
"https://physics.stackexchange.com/questions/565553",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/35006/"
]
} |
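To put numbers on "don't decay significantly": the fraction of a sample that decays in time $t$ is $1-2^{-t/T_{1/2}}$. The sketch below uses the half-lives mentioned in this entry (the bismuth-209 value is the measured $\approx 1.9\times10^{19}$ yr, consistent with the answer's $\approx 10^{19}$).

```python
import math

# Fraction decayed over the age of the universe for two very long half-lives.
age_universe_yr = 1.38e10
for name, half_life_yr in [("bismuth-209", 1.9e19), ("xenon-124", 1.8e22)]:
    frac = -math.expm1(-math.log(2) * age_universe_yr / half_life_yr)
    print(f"{name}: fraction decayed ~ {frac:.1e}")   # ~5e-10 and ~5e-13
```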
565,554 | I was studying the derivation of the relation between pressure ($P$) and root mean square speed ($v_{\text{RMS}}$) of an ideal gas from Fundamentals of Physics by Halliday, Resnick, Walker. (The same derivation can be found in the Wikipedia page on the kinetic theory of gases under the "Equilibrium properties" section.) One of the steps in the derivation process bothers me, which is illustrated below: When a gas molecule collides with the wall of the container perpendicular to the x axis and bounces off in the opposite direction with the same speed (an elastic collision), the change in momentum is given by: $$\Delta p=p_{i,x}-p_{f,x}=p_{i,x}-(-p_{i,x})=2p_{i,x}=2mv_{x}$$ where p is the momentum, i and f indicate initial and final momentum (before and after collision), x indicates that only the x direction is being considered, and v is the speed of the particle (which is the same before and after the collision). The particle impacts one specific side wall once every $$\Delta t={\frac {2L}{v_{x}}},$$ where L is the distance between opposite walls. So far no problem, but now: The force due to this particle is $$F={\frac {\Delta p}{\Delta t}}={\frac {mv_{x}^{2}}{L}}.$$ How/why did they substitute the time period between two successive collisions for a given molecule with the time for which the force is applied? I might be wrong, but aren't the two things (the time period between two successive collisions and the duration of the collision) different and hence shouldn't be substituted for one another? | | {
"source": [
"https://physics.stackexchange.com/questions/565554",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
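For what it's worth, the usual reading of that step is that $\Delta p/\Delta t$ is the time-averaged force on the wall (one impulse of $2mv_x$ delivered every $2L/v_x$), not the force during a single collision. A quick arithmetic check, with arbitrary illustrative values:

```python
# Over many round trips, total impulse / total time equals m*vx**2 / L.
m, vx, L = 1.0, 3.0, 2.0
n_hits = 1000
total_impulse = n_hits * 2 * m * vx
total_time = n_hits * (2 * L / vx)
print(total_impulse / total_time, m * vx**2 / L)   # both 4.5
```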
565,564 | Given a metric tensor $g_{\mu\nu}$ it is possible to calculate the geodesic equations from: $$\dfrac{d^2x^{\mu}}{ds^2}=-\Gamma^\mu_{\nu \eta}\dfrac{dx^\nu}{ds}\dfrac{dx^\eta}{ds}$$ where the $\Gamma^\mu_{\nu \eta}$ are the Christoffel symbols. How is it possible to know whether there are runaway solutions of the geodesic equations? If the Ricci scalar is zero, does this mean there are no such solutions? | | {
"source": [
"https://physics.stackexchange.com/questions/565564",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/25083/"
]
} |
Generally, textbooks say that when an electron goes from a higher energy state to a lower energy state it emits photons.
My question is: is it possible that a proton that goes from a higher energy state to a lower energy state emits photons too? | Nuclei emit gamma rays, which are high energy photons. The photons emitted when an electron in an atom changes its energy state are usually in the optical spectrum, which is more frequently encountered in technology and real life, which is why they receive more attention in textbooks. Remarks To underscore why photons are less "important" for nuclei than for atoms: Nature of interactions It is worth noting that atoms, solids and molecules are held together by the Coulomb interaction (i.e. by electromagnetic forces), which is why their structural dynamics is strongly coupled to photons - the particles carrying this interaction. Nuclear forces are of a different nature - although photons play a role, they are but one of many particles involved. Size Being charged particles, protons should be coupled to the EM field. The strength of this interaction is proportional to the dipole moment $d=r_n e$, where $r_n\approx10^{-15}\,\text{m}$ (one fermi) is the nuclear radius, which is much smaller than the radius of an atom, $r_a\approx10^{-10}\,\text{m}$ (one angstrom). In other words, the coupling of protons to the photon field is $10^{5}$ times weaker. Mass Protons and neutrons both carry spin and could couple to the electromagnetic field via Zeeman coupling. However, their mass is about a thousand times bigger than that of electrons, resulting in a thousand times smaller gyromagnetic ratio (i.e. the nuclear magneton is a thousand times smaller than the Bohr magneton), i.e. the coupling is weak. Finally, here is an authoritative reference on the subject: Interaction of nuclei with electromagnetic radiation Quote The following quote is from the book "Fundamentals in nuclear physics" by Basdevant, Rich and Spiro : While the numbers (A, Z) or (N, Z) define a nuclear species, they do not
determine uniquely the nuclear quantum state. With few exceptions, a nucleus
(A, Z) possesses a rich spectrum of excited states which can decay to the ground
state of (A, Z) by emitting photons. The emitted photons are often called
gamma-rays. The excitation energies are generally in the MeV range and their
lifetimes are generally in the range of $10^{-9}$–$10^{-15}$ s. Because of their
high energies and short lifetimes, the excited states are very rarely seen on Earth
and, when there is no ambiguity, we denote by (A, Z) the ground state of the
corresponding nucleus. | {
"source": [
"https://physics.stackexchange.com/questions/565913",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/103009/"
]
} |
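The "Size" and "Mass" remarks reduce to two ratios; a trivial sketch using the answer's round numbers plus standard particle masses:

```python
# Length ratio sets the relative dipole coupling; electron/proton mass ratio
# sets the nuclear-to-Bohr magneton ratio.
r_nucleus = 1e-15   # ~1 fm, from the answer
r_atom = 1e-10      # ~1 angstrom, from the answer
m_e, m_p = 9.109e-31, 1.673e-27   # kg

print(f"dipole coupling ratio:   {r_nucleus / r_atom:.0e}")   # 1e-05
print(f"nuclear/Bohr magneton:   {m_e / m_p:.2e}")            # ~5.4e-04 (~1/1836)
```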
565,998 | I got this question in school: Explain, based on the properties of an ideal gas, why the ideal gas law only gives good results for hydrogen. We know that the ideal gas law is $$P\cdot V=n\cdot R\cdot T$$ with $P$ being the pressure, $V$ the volume, $n$ the amount of substance, $R$ the gas constant and $T$ the temperature (Source: Wikipedia - "Ideal gas"). An ideal gas must fulfill the following: The particles have an infinitely small volume (or no volume), the particles do not interact with each other through attraction or repulsion, and the particles can interact through elastic collisions. Now, why does only hydrogen sufficiently fulfill these conditions? I initially assumed that the reason is that it has the smallest volume possible, as its nucleus only consists of a single proton. However, two things confuse me: (Let's first assume that my first idea was correct and the reason is the nucleus' scale/volume.) Helium's nucleus consists of two protons and two neutrons. It is therefore four times as large as hydrogen's nucleus. However, hydrogen's nucleus is infinitely many times larger than an ideal gas molecule (which would have no volume), so why does the factor of $4$ significantly affect the accuracy of the ideal gas law, while the infinite factor between hydrogen's nucleus and a point particle doesn't? My first idea is not even true, as atoms do not only consist of their nucleus. In fact, most of their volume comes from their electrons. In both hydrogen and helium, the electrons are in the same atomic orbital, so the volume of the atoms is identical. The remaining possibilities for explaining why the ideal gas law only works for hydrogen are the collisions or the interactions. For both of these, I do not see why they should be any different for hydrogen and helium (or at least not to such a degree that it would significantly affect the validity of the ideal gas law). So where am I wrong here? Note: I do not consider this a homework question. The question is not directly related to the actual problem; rather, I question whether the initial statement of the task is correct (as I tested every possible explanation and found none to be sufficient). Update I asked my teacher and told them my doubts. They agreed with my points (and yours from the answers, of course!) but still were of the opinion that hydrogen is the closest to an ideal gas (apparently, they were taught so in university). They also claimed that the mass of the gas is relevant (which would be the lowest for hydrogen; but I doubt that since there is no $m$ in the ideal gas equation) and that apparently, when measuring, hydrogen is closest to an ideal gas. As I cannot do any such measurements by myself, I would need some reliable sources (a research paper would be best: Wikipedia and Q&A sites including SE - although I do not doubt that you know what you are talking about - are not considered serious or reliable sources). While I believe that asking for specific sources is outside the scope of Stack Exchange, I still would be grateful if you could provide some sources. I believe it is in this case okay to ask for reference material since it is not the main point of my question. Update 2 I asked a new question regarding the role of mass for the elasticity of two objects. Also, I'd like to mention that I do not want to speak badly of my teacher, since I like their lessons a lot and they would never tell us something wrong on purpose. This is probably just a misconception.
| The short answer is that ideal gas behavior is NOT valid only for hydrogen. The statement you were given in school is wrong. If anything, helium acts more like an ideal gas than any other real gas. There are no truly ideal gases, only gases that approach ideal gas behavior sufficiently to enable the application of the ideal gas law. Generally, a gas behaves more like an ideal gas at higher temperatures and lower pressures. This is because the internal potential energy due to intermolecular forces becomes less significant compared to the internal kinetic energy of the gas as the size of the molecules becomes much less than their separation. Hope this helps. | {
"source": [
"https://physics.stackexchange.com/questions/565998",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/253179/"
]
} |
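One way to make "approach ideal gas behavior" quantitative is the compressibility factor $Z = PV/nRT$ ($Z = 1$ is ideal). The sketch below uses a crude van der Waals virial estimate with textbook vdW constants pulled from standard tables (both the constants and the crudeness of vdW are assumptions worth double-checking); the only point is the broad picture: H2 and He both sit very close to $Z = 1$ at room temperature, while a more strongly interacting gas like CO2 does not.

```python
# Rough Z from the van der Waals virial estimate Z ~ 1 + (b - a/(R*T))*P/(R*T).
# a in L^2*bar/mol^2, b in L/mol -- verify against a data table before reuse.
R = 0.083145          # L*bar/(mol*K)
T, P = 300.0, 10.0    # K, bar
gases = {"H2": (0.2476, 0.02661), "He": (0.0346, 0.0238), "CO2": (3.640, 0.04267)}
for name, (a, b) in gases.items():
    Z = 1 + (b - a / (R * T)) * P / (R * T)
    print(f"{name}: Z = {Z:.4f}")   # H2 and He within ~1% of 1; CO2 ~4% off
```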
566,011 | So I was just reading a bit about magnetic dipole moments, Larmor precession, angular momentum etc., but there was one little thing that was bothering me. As far as I know, any angular momentum will precess around any magnetic field, no matter how big the angular momentum and the magnetic field is. So the angular momentum can be as small as you like. So then I thought about bar magnets, which I thought had a very tiny amount of angular momentum due to their magnetic dipole moments, for as we know, the magnetic dipole moment is the gyromagnetic ratio (gr) times the angular momentum. But of course, the gr is really big for bar magnets because it's so large for electrons (and as we know, it is the electrons that make up the currents that are creating the magnetic field of the magnet). Thus, the angular momentum of bar magnets must be microscopically small. But again, as I said, any angular momentum will do, meaning that bar magnets should actually precess. What is wrong with my thinking here? | | {
"source": [
"https://physics.stackexchange.com/questions/566011",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/267796/"
]
} |
566,503 | I was reading this article from Ethan Siegel and I got some doubts about a sentence about entropy, specifically when Ethan explains the irreversibility of the conditions of the hot-and-cold room, as in this figure: In his words: It's like taking a room with a divider down the middle, where one side is hot and the other is cold, removing the divider, and watching the gas molecules fly around. In the absence of any other inputs, the two halves of the room will mix and equilibrate, reaching the same temperature. No matter what you did to those particles, including reverse all of their momenta, they'd never reach the half-hot and half-cold state ever again. My question is: Is the spontaneous evolution from the equilibrium temperature (right side of the image) to the half-hot and half-cold state (left side) physically and theoretically impossible/forbidden, or is it simply so astronomically unlikely (from a statistical perspective) that in reality it never happens? The article seems to suggest the former, but I was under the impression of the latter. | The appropriate mathematical tool to understand this kind of question, and more particularly Dale's and buddy's answers, is large deviation theory. To quote wikipedia, "large deviations theory concerns itself with the exponential decline of the probability measures of certain kinds of extreme or tail events". In this context, "exponential decline" means: probability that decreases exponentially fast with the increase of number of particles. TL;DR: it can be shown that the probability to observe an evolution path for a system that decreases entropy is non-zero, and it decreases exponentially fast with the number of particles; thanks to a statistical mechanics of "trajectories", based on large deviation theory. Equilibrium statistics In equilibrium statistical mechanics, working in the appropriate thermodynamical ensemble, for instance the microcanonical ensemble in this case, one could relate the probability to observe a macrostate $M_N$ for the $N$ particles in the system, to the entropy of the macrostate $S[M_N]$ : $\mathbf{P}_{eq}\left(M_N\right)\propto\text{e}^{N\frac{\mathcal{S}[M_N]}{k_{B}}}.$ Naturally, the most probably observed macrostate, is the equilibrium state, the one which maximizes the entropy. And the probability to observe macrostates that are not the equilibrium state decreases exponentially fast as the number of particles goes to infinity, this is why we can see it as a large deviation result, in the large particle numbers limit. Dynamical fluctuations Using large deviation theory, we can extend this equilibrium point of view: based on the statistics of the macrostates, to a dynamical perspective based on the statistics of the trajectories. Let me explain. In your case, you would expect to observe the macrostate of your system $(M_N(t))_{0\leq t\leq T}$ , evolving on a time interval $[0,T]$ from an initial configuration $M_N(0)$ with entropy $S_0$ to a final configuration $M_N(T)$ with entropy $S_T$ such as $S_0 \leq S_T$ , $S_T$ being the maximal entropy characterizing the equilibrium distribution, and the entropy of the macrostate at a time $t$ , $S_t$ being a monotonous increasing function (H-Theorem for the kinetic theory of a dilute gas, for instance). However, as long as the number of particles is finite (even if it is very large), it is possible to observe different evolutions, particularly if you wait for a very long time, assuming your system is ergodic for instance. 
By long, I mean large with respect to the number of particles.
In particular, it has recently been established that one can formulate a dynamical large deviation result which characterizes the probability of any evolution path for the macrostate of the system ( https://arxiv.org/abs/2002.10398 ).
This result allows one to evaluate, for a large but finite number of particles, the probability of observing any evolution path of the macrostate $(M_N(t))_{0\leq t\leq T}$, including evolution paths for which $S_t$, the entropy of the system at time $t$, is non-monotonic. This probability will become exponentially small with the number of particles, and the most probable evolution, the one that increases entropy, will have an exponentially overwhelming probability as the number of particles goes to infinity. Obviously, for a classical gas, $N$ is so large that such evolution paths that do not increase entropy won't be observed: you would have to wait longer than the age of the universe to observe your system doing this. But one could imagine systems where we use statistical mechanics, where $N$ is large but not enough to "erase" dynamical fluctuations: biological systems, or astrophysical systems for instance, in which it is crucial to quantify fluctuations from the entropic fate. | {
"source": [
"https://physics.stackexchange.com/questions/566503",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/270210/"
]
} |
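To make "astronomically unlikely" concrete with the simplest possible estimate (mine, not the answer's): the equilibrium probability that $N$ independently placed molecules all sit in the left half of the box is $2^{-N}$.

```python
import math

# Work in log10 to avoid underflow; the last case is roughly a mole of gas.
for N in (10, 100, 6.022e23):
    log10_p = -N * math.log10(2)
    print(f"N={N:g}: probability ~ 10^({log10_p:.3g})")
# N=6e23 gives ~10^(-1.8e23): not forbidden, just never observed.
```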
566,712 | One can draw/imagine as many unique (curved/straight) lines as he/she wants in some specified finite area (assuming that each line is unique if it doesn't overlap with another line). Then how can the number of field lines in a particular area be a fixed quantity? This statement is contradicted by the fact that a particle will experience a magnetic force at each and every point in space, which would not be possible if at some specific points there were no magnetic field lines. The surface integral approach is clearer, as some limits are taken into account and also there is no such thing as 'number of lines', but I find it very confusing when people say that the strength of the magnetic field is proportional to the number of field lines per area. Why is this terminology still used? Is it because we assume that no magnetic field lines exist at places where the forces are very weak? EDIT: Then why are there gaps between the iron filing lines? Is it because of my previous statement, i.e. because we assume that no magnetic field lines exist at places where the forces are very weak, and hence the iron filings align themselves with stronger field lines? Is this a reason why this terminology is still used? | Why are there gaps between the iron filing lines? Iron filings are ferromagnetic. They don't just show the field, they change it. ...hence the iron filings align themselves with stronger field lines. The filings self-organize into distinct lines because their presence concentrates the field. Magnetic field lines prefer to go through a ferromagnetic body rather than through empty space. The field actually is stronger inside the iron particles than in the gaps between them. If you drop a new filing into the gap between two of the visible "lines," it will feel attraction toward either of the surrounding lines. It will only stay put, and become the seed for a new line, if the magnetic force that it feels is too weak to overcome the static friction between the particle and the paper (or whatever) underneath. | {
"source": [
"https://physics.stackexchange.com/questions/566712",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/191417/"
]
} |
566,792 | Well, we know that it is impossible to say exactly when a radioactive atom will decay. It is a random process. My question is: why, then, does a collection of them decay in a predictable way (exponential decay)? Does the randomness disappear when they get together? What is the cause of this drastic change in their behaviour? | Law of large numbers This law simply states that if you repeat a trial many times, the result tends to be the expected value. For example if you roll a 6-sided die, you could get any of the six results 1, 2, 3, 4, 5, 6. But the average of the six results is 3.5, and if you roll the 6-sided die a million times and take the average of all of them, you are extremely likely to get an average of about 3.5. But you 1) might not get a number close to 3.5, in fact there's a non-zero chance you get an average of, for example, 2 or 1, and 2) still can't predict which result you will get when you roll a single die. In the same way, you might not be able to predict when a single atom will decay (i.e. when you roll a single die), but you can make very good predictions when you have lots of atoms (i.e. equivalent to rolling the die millions of times). | {
"source": [
"https://physics.stackexchange.com/questions/566792",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/89673/"
]
} |
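A minimal simulation of the point above (parameters are arbitrary): each atom decays independently with probability $p$ per step, yet the ensemble count tracks the smooth curve $N_0(1-p)^t$, which for small $p$ is the exponential $N_0 e^{-\lambda t}$ with $\lambda \approx p$, even though no single decay is predictable.

```python
import random

random.seed(1)
N0, p, steps = 100_000, 0.05, 40
alive = N0
for t in range(1, steps + 1):
    alive -= sum(1 for _ in range(alive) if random.random() < p)
    if t % 10 == 0:
        print(f"t={t}: simulated {alive}, predicted {N0 * (1 - p) ** t:.0f}")
```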
566,951 | If a large mass causes curvature of spacetime, then why don't we see a gravity lens around our planets? | Contrary to the other answers, I will point out that gravitational lensing due to planets in the solar system is a significant and measurable effect. The measured positions of stars, as seen from a point in the solar system near the Earth, are altered by the gravitational deflection due to the fields of the Sun and then, in order of decreasing effect, Earth, Jupiter, Saturn, Venus, Uranus, Neptune. The size of the effect depends on the angular separation of a star from the solar system object and can be anything from 0 to 70 microarcseconds. This sounds very small, but is easily within the precision of positional measurements by the Gaia spacecraft. In fact, the effects of these deflections have to be taken into account in order to provide accurate positions as intermediate data for the calculation of the parallaxes and proper motions of stars. A brief information sheet on the topic has been produced by ESA. However, this lensing and these kinds of deflections are not strong enough to produce multiple images or Einstein rings. The angular size of a perfect Einstein ring is given by $$\theta_E \simeq \left[ \left(\frac{4GM}{c^2}\right) \frac{D_{LS}}{D_L D_S}\right]^{1/2},$$ where $M$ is the mass of the lens, $D_L$ is the distance from observer to lens, $D_S$ is the distance to the source and $D_{LS}$ is the distance between the lens and the source. Since in this case $D_{S} \gg D_{L}$ and $D_{S} \simeq D_{LS}$, then $$\theta_E \simeq \left(\frac{4GM}{c^2D_L}\right)^{1/2} = 0.071 \left(\frac{M}{M_{\rm Earth}}\right)^{1/2} \left( \frac{D_L}{1{\rm AU}}\right)^{-1/2}\ {\rm arcsec}$$ Taking Jupiter as an example, this yields $\theta_E \sim 0.6$ arcsec. But this is a lot smaller than the angular diameter of the planet ( $\sim 50$ arcsec). In other words, to see multiple images or rings you would need to be much further away from Jupiter than a few AU, and a star that is directly behind Jupiter is completely hidden when viewed from the Earth. | {
"source": [
"https://physics.stackexchange.com/questions/566951",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/270355/"
]
} |
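A quick check of the answer's two quoted numbers, using rounded standard constants and assuming Jupiter at roughly 4.2 AU from Earth (near opposition):

```python
import math

# theta_E = sqrt(4*G*M / (c^2 * D_L)) for D_S >> D_L, converted to arcsec.
G, c, AU = 6.674e-11, 2.998e8, 1.496e11
M_earth = 5.972e24
arcsec_per_rad = math.degrees(1) * 3600

def theta_E(M_kg, D_L_m):
    return math.sqrt(4 * G * M_kg / (c**2 * D_L_m)) * arcsec_per_rad

print(f"Earth mass at 1 AU: {theta_E(M_earth, AU):.3f} arcsec")              # ~0.071
print(f"Jupiter at ~4.2 AU: {theta_E(317.8 * M_earth, 4.2 * AU):.2f} arcsec")  # ~0.6
```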
567,053 | In the Maxwell's demon thought experiment, initially, the gases in both boxes have the same temperature. The demon uses the door in the middle to allow the fast (hot) molecules on the left to pass to the right. But, we said the gases in both boxes have the same temperature. So, the right box is not completely hot. There still exist cold gas molecules on the right. But according to the thought experiment, the cold gas molecules are all collected on the left side of the chamber. Is this a contradiction in the paradox? How do slow molecules move as fast as other hot gas molecules? | Individual gas molecules are neither cold nor hot: they have kinetic energy. The absolute temperature of a gas is proportional to the average of the kinetic energies of its molecules, and what's important here is that the kinetic energies are not all the same. There is a statistical distribution of different energies in any given body of gas. This is so even when the gas is all at one "temperature." https://en.wikipedia.org/wiki/Maxwell%E2%80%93Boltzmann_statistics | {
"source": [
"https://physics.stackexchange.com/questions/567053",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/222576/"
]
} |
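A sketch of that statistical distribution (choosing, purely for illustration, nitrogen at 300 K; each velocity component is Gaussian with $\sigma=\sqrt{kT/m}$, which yields Maxwell-Boltzmann speeds):

```python
import math
import random

random.seed(0)
k, T, m = 1.381e-23, 300.0, 4.65e-26   # J/K, K, kg (one N2 molecule)
sigma = math.sqrt(k * T / m)
speeds = sorted(
    math.sqrt(sum(random.gauss(0, sigma) ** 2 for _ in range(3)))
    for _ in range(100_000)
)
# One temperature, a wide spread of molecular speeds:
print(f"10th percentile: {speeds[10_000]:.0f} m/s")
print(f"median:          {speeds[50_000]:.0f} m/s")
print(f"90th percentile: {speeds[90_000]:.0f} m/s")
```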
567,343 | This is a little question that I have been wondering about when I need to cut sandpaper with scissors. Sandpaper can be used to sharpen knives etc. when applied parallel with the blade surface. It can also be used to dull sharp edges when applied nonparallel with the blade surface. My assumption is that it should dull the scissors, since the paper is being cut using the sharp edge and nonparallel with the abrasive material. But I still have doubts about the validity of the assumption. Which is it? | Sandpaper removes material. When used properly, that removal of material can make a blade sharper. However, when cutting the sandpaper, there is no attempt to structure the removal of the material. It will simply dull the scissors. It will remove material in a relatively haphazard manner, taking off the sharp edge. If you have any doubts about this, ask someone who sews whether you can use their nice fabric scissors, and let them know you're going to go cut some paper with them. Find out how quickly they respond in an effort to avoid dulling their scissors. Perhaps it's not the most scientific approach, but it is a well-documented one, and very evidence-based! And that's just normal paper! | {
"source": [
"https://physics.stackexchange.com/questions/567343",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/270492/"
]
} |
567,596 | There was a fill-in-the-blank question in my university test. It was something like: Quantum mechanics deals with ____ I wrote "everything" and my lecturer gave me no marks. He was expecting something like "small", "nano" or something. I tried to convince him that quantum mechanics deals with everything in the universe and its effects are obvious only in smaller things. But he was so certain that quantum mechanics, if applied to big things, will give incorrect results. So, am I wrong? Won't quantum mechanics work on bigger things? | The relationship between quantum and classical descriptions is somewhat tricky, unlike the relationship between relativity and classical mechanics. Classical mechanics can be simply thought of as the limiting form of relativity at small velocities. Thinking of macroscopic objects as if they were quantum objects with very short de Broglie wavelengths, and therefore low quantum uncertainty, is however not satisfactory. For one, these objects usually consist of many small objects interacting among themselves and with their surroundings, so one cannot avoid discussing decoherence/dephasing and adopting some kind of statistical physics description. Secondly, measurement is an essential element of quantum theory, which implies a microscopic (small) object coming in contact with a macroscopic one (a big thing), which may generate some logical paradoxes. All this complexity does not negate the fact that macroscopic objects are also quantum objects, although describing them with quantum laws is by far more difficult than applying these laws to atoms and molecules. Nevertheless, it is an active field of research. The examples that come to mind are: nanomechanical systems - these can be C60 molecules or carbon nanotubes containing thousands of atoms, or similar-size nanorods made of other materials, that exhibit quantum behavior. These objects are still microscopic, but far bigger than what is usually seen as quantum. macromolecules, such as proteins or DNA - there have been claims that they exhibit quantum behavior, tunneling through each other. My evidence might be anecdotal, but there is research in this direction. Still, these are studied. everything related to superconductivity, superfluidity - this may happen at visible scales, although at very low temperatures. | {
"source": [
"https://physics.stackexchange.com/questions/567596",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/207197/"
]
} |
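The "effects are obvious only in smaller things" point can be made with one formula, $\lambda = h/(mv)$; the cases below are illustrative choices of mine, not from the answer:

```python
# de Broglie wavelengths: atom-scale for an electron, absurdly small for
# even a tiny macroscopic object.
h = 6.626e-34  # J*s
cases = {
    "electron at 1e6 m/s": (9.109e-31, 1e6),
    "dust grain, 1e-9 kg at 1 mm/s": (1e-9, 1e-3),
}
for name, (m, v) in cases.items():
    print(f"{name}: lambda = {h / (m * v):.1e} m")   # ~7e-10 m vs ~7e-22 m
```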
567,920 | This is a thought experiment where I have made a "C" shaped hole inside diamond. The refractive index $(\mu)$ of diamond is 2.45. Say we shine a laser from the top of the "C" as shown. My calculations show that light leaving A can reach B in the least possible time if it goes through the "C", but I'm pretty sure the perpendicular laser beam travels undeflected, straight down. Though I don't have experimental evidence, I see something wrong either with my intuition or with the theory.
It would be great if someone could clarify. * 'a' in the diagram is the thickness of the cutout and all comparable distances can be taken 'a'. | As others have said, Fermat's principle says that the path which light follows is stationary rather than a minimum of optical path length (though in fact it typically is a bona fide local minimum). The more important point, however, is that this is a necessary but not sufficient condition for a given path to be that followed by light. This is a mathy way of saying that there might be several paths which are local extrema of path length, but light need not follow all of them. This is a typical issue with variational arguments. The same thing can happen with a massive particle which has the option of following either of two paths to an endpoint. Feynman considered such scenarios in developing his path integral approach to quantum mechanics, but even for classical mechanics it is an interesting case study. If you solve the Euler-Lagrange equation for such a system, you'll find that there are two paths which make the action stationary, i.e. two paths which the particle can follow to get from its starting point to its ending point. But we know that a classical particle will only follow one path, so which will it take? Mathematically, the issue here is that variational problems are typically posed as boundary value problems—we specify where the particle needs to start and where it needs to end up. Unlike initial value problems, boundary value problems need not have unique solutions. But in real life, we don't actually control where the particle ends up. What we really control is the particle's initial position and velocity—i.e. we set up an initial value problem, a differential equation for which there is a unique mathematical solution. After we send off the particle and see where it ends up, we can then use its ending location and the Euler-Lagrange equation to see which path it took to arrive at the endpoint, but there can be multiple solutions. The same thing happens in optical systems. When you shoot a laser, you specify the initial conditions of the laser beam by the position of the laser and the direction it points. This sets up an initial value problem which has a unique solution. After you find out where the beam goes, you can then use the starting and ending points of the beam together with Fermat's principle to figure out the path it took to get there. But you may find that there are multiple solutions to Fermat's principle, and you need to use either common sense or some discrete data about the orientation of the laser to figure out which one is the right one. Some final remarks about the particular case you are considering. The actual shortest path in the system drawn in the OP would be that going straight from point A to the inside corner of the "C", then down the boundary between air and diamond to the other corner, then straight to point B. A curious feature of this path is that infinitesimal perturbations to the segment of the path along the boundary of air and diamond would result in discontinuous changes in the path length, because if you push the path from the air side to the diamond side the length gets 2.45 times longer. This means that usual variational calculus arguments (like those used in deriving the Euler-Lagrange equation) don't work, as they assume smooth variation of the action (i.e. optical path length) with small perturbations to the path. So you have to be more careful in this case. 
In fact, physically no light will typically ever follow this path (at least at the level of geometric optics), because there is nothing to "bend the light around the corner". Another interesting feature of this system is that there might (depending on exact positions of A and B) be another locally extremal path from A to B, namely that which enters the diamond at an angle, undergoes total internal reflection at the air-diamond interface, and then bounces back to B. So if you have a light bulb (which sends light in all directions) at point A and someone sitting at point B, the person at B would see two lights, one from the line straight to A and another coming at an angle from the left. This is another illustration of the caveats on Fermat's principle—if the light does not have a well defined initial direction, it may follow multiple stationary paths! | {
"source": [
"https://physics.stackexchange.com/questions/567920",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/262016/"
]
} |
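A toy comparison of optical path lengths for this entry; the geometry (depth $d$, lateral detour $w$) is an assumption of mine, not read off the figure:

```python
import math

n_diamond = 2.45
d, w = 1.0, 0.5   # illustrative: A-to-B depth, lateral detour (units of 'a')

straight = n_diamond * d               # straight down through diamond
around = 2 * math.hypot(w, d / 2)      # crude two-leg detour through air (n = 1)
print(f"straight through diamond: OPL = {straight:.2f}")
print(f"detour through air:       OPL = {around:.2f}")
# The air detour can have the smaller optical path length, yet a laser aimed
# straight down still goes straight: Fermat's principle selects stationary
# paths consistent with the initial direction, not a global minimum over all
# conceivable paths.
```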
567,934 | The propagation of light in optical fibers is governed by the equation $i\frac{\partial A}{\partial z} = \frac{\beta_2}{2}\frac{\partial^2A}{\partial T^2} - \gamma|A|^2A$ where $A(z,T)$ represents the amplitude of the field envelope and $\beta_2$ and $\gamma$ are constants. My textbook (Nonlinear Fiber Optics by Agrawal) says that the amplitude $A$ is independent of $T$ in the case of CW radiation at the input end of the fiber ($z=0$). Why is $A$ independent of $T$? | | {
"source": [
"https://physics.stackexchange.com/questions/567934",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/114907/"
]
} |
567,978 | The voltage divider formula is only valid if there is no current drawn from the output, so how can voltage dividers be used practically? Since using the voltage for anything would require drawing current, that would invalidate the formula. So what's the point; how can they be used? | Oh, but you can. You can drive a high-impedance input with it...including a buffer, which can then in turn be used to drive whatever you want. The more current you draw, the more the voltage will droop, so you just make sure to draw as little current as possible, so that the output is, for example, 99.9% of what the divider formula says it should be. The divider formula is simply an equation that holds true under certain ideal conditions. If you want to mathematically analyze it under real conditions, the equation gets complicated and case-specific, so often it is just easier to force your real-world usage such that the equation's assumptions are approximated very closely. | {
"source": [
"https://physics.stackexchange.com/questions/567978",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/270732/"
]
} |
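The droop is easy to quantify with the loaded-divider formula $V_{out} = V_{in}\,\frac{R_2 \parallel R_L}{R_1 + R_2 \parallel R_L}$; the resistor values below are illustrative:

```python
def loaded_vout(vin, r1, r2, rl):
    r2_eff = r2 * rl / (r2 + rl)      # R2 in parallel with the load
    return vin * r2_eff / (r1 + r2_eff)

vin, r1, r2 = 10.0, 10e3, 10e3        # unloaded (ideal) output: 5.000 V
for rl in (10e3, 100e3, 1e6, 10e6):
    print(f"RL = {rl:10.0f} ohm -> Vout = {loaded_vout(vin, r1, r2, rl):.3f} V")
# The lighter the load (larger RL), the closer Vout sits to the ideal value.
```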
568,126 | Imagine two objects of identical mass under two different gravitational fields, let's say on two different planets (with different values of gravity). We can easily notice that one body will be easier to move than the other (the object on the planet with less gravity will move more easily, obviously). But the mass of both objects is identical, as mentioned above, which means that the inertia of both objects should be equal; yet one body is easier to move than the other, which suggests they have different inertia (as inertia is the property of a body to resist a change in motion). So does this mean that weight, rather than mass, is the measure of inertia? I would like to mention that this problem was also highlighted by Richard P. Feynman, but I was not able to find an appropriate solution anywhere. Edit: I removed 'sir' before Feynman's name because I never knew that 'sir' is added only for people who have received a knighthood. (This went off-topic.) | Imagine a 10kg curling stone on a flat ice surface on Earth. If we apply 10N of horizontal force, the stone will accelerate at about 1 meter per second per second. On the Earth, a 10kg stone weighs approximately 98N. Now imagine the same 10kg stone on a flat ice surface on the Moon. If we apply 10N of horizontal force in this scenario, the stone will still accelerate at about 1 meter per second per second. On the Moon, a 10kg stone weighs approximately 16N. As you can see, the inertia of the stone is the same in both cases, but the weight of the stone is very different. This shows that it is the mass, not the weight, that is the appropriate measure of inertia. (There are two reasons your intuition tells you that heavier gravity will make it harder to move a weight; one is that when you are carrying an object, you have to lift it against the force of gravity, and the other is that when you are pushing an object, the heavier it is, the greater the force of friction that has to be overcome. But in both cases this is because there are other forces involved, not because of inertia. In the example given above, we are dealing with horizontal motion on a surface with very little friction, so to a good approximation no other forces are involved.) | {
"source": [
"https://physics.stackexchange.com/questions/568126",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/257018/"
]
} |
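A quick numeric restatement of the curling-stone example above (a sketch; the rounded surface gravities of 9.8 and 1.6 m/s² are standard figures I am supplying):

```python
m = 10.0   # kg, mass of the curling stone
F = 10.0   # N, horizontal push on near-frictionless ice
g = {"Earth": 9.8, "Moon": 1.6}  # m/s^2, approximate surface gravities

for place, g_local in g.items():
    a = F / m             # Newton's 2nd law: response to the push depends on mass only
    weight = m * g_local  # weight depends on the local gravity
    print(f"{place}: a = {a:.1f} m/s^2, weight = {weight:.0f} N")
```

The acceleration (the inertia side) is identical in both rows; only the weight changes.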
568,414 | Let's say that we have defined a certain physical quantity from a particular relationship and then we find another relationship and define the physical quantity again. For example, $$v = u + at$$ $$\text{and }v = \sqrt{u^2+2as}$$ where $v$ denotes the final velocity, $u$ denotes the initial velocity, $a$ is the acceleration, $s$ denotes the displacement and $t$ denotes the time. Why are the dimensions of the physical quantity, when evaluated using the first relation, the same as those when evaluated using the second relation? I know that this might sound like a silly question and the answer to this is most likely trivial but it seems like I have some misconception that is preventing me from fully grasping it, which I hope to clarify. | The answer is to think of it backwards. We don't start by saying $u+at$ and $\sqrt{u^2+2as}$ have equivalent units. We start by saying that, fundamentally, we think of "velocity" as a thing which is a physical quantity . If two expressions for the same physical quantity yield different units, we strongly suspect that one of them is fundamentally wrong. Over the years, we have developed an axiomatic model of how units work. The traditional calculus for quantities defines the concept of a unit Z, and a quantity, which is $\mathbb R \times [Z]$ (a real number "multiplied" by a unit). From there, they define how that multiplication should distribute over other arithmetic operations, such as $$x\times[Z_1] + y\times[Z_1] = (x + y) \times [Z_1]$$ $$x\times[Z_1] \cdot y\times[Z_2] = (xy) \times ([Z_1]\times[Z_2])$$ $$\sqrt{x\times[Z_1]^2} = \sqrt x \times [Z_1]$$ and so forth. And, of course, we defined the concept of unit multiplication and division that we are now used to. We defined " dimensionality " to capture whether it was meaningful to treat units as different "spellings" of the same quantities, or if they were fundamentally different. Several common dimensionalities are length, time, area (length squared), and speed (length divided by time). Over time, what we found was that equations which were consistent with this particular treatment of units could be "right," while those which were found inconsistent basically never were. So we declared these to be the "right" way to handle units, and added constants to handle any oddities that might occur. Now I do note that these are incomplete. There are two corner cases where people disagree on the best way to handle units. One of them is angles. Technically radians are dimensionless -- they are a length divided by a length. However, many people have found it convenient to treat radians as having a dimensionality of "angle."
This catches more mistakes, but runs into problems like the small angle approximation $\sin(x\times[rad]) \approx x$ for small $x$ . This obviously runs into trouble if radians have a dimensionality that we can't just handwave away. The second area that causes problems is transcendentals. Decibels (dB) are a famously troublesome case because there is a logarithm in the equations for them. To date, we do not have an axiomatization for such extended units, only the 7 major dimensions that we are used to from SI, so we have to admit that our quantity calculus is incomplete. For a treatment of these issues, I recommend the article from Metrologia, On quantity calculus and units of measurement if you can access it. So in the end, the math works because we spent a lot of time finding math that fit reality. And, when necessary, we fudge it and create incomplete rules to keep it in line with reality. I wish there was a more precise answer, but that's the best we've managed over hundreds of years of scientific inquiry! | {
"source": [
"https://physics.stackexchange.com/questions/568414",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/257479/"
]
} |
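The axioms quoted in the answer above can be made concrete with a toy quantity type (my own minimal illustration, not the Metrologia formalism): addition demands matching dimensions, while multiplication adds the dimension exponents.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float
    dims: tuple  # exponents of (length, time); speed is (1, -1)

    def __add__(self, other):
        if self.dims != other.dims:  # adding mismatched dimensions is an error
            raise TypeError(f"cannot add dimensions {self.dims} and {other.dims}")
        return Quantity(self.value + other.value, self.dims)

    def __mul__(self, other):
        return Quantity(self.value * other.value,
                        tuple(a + b for a, b in zip(self.dims, other.dims)))

u = Quantity(3.0, (1, -1))        # u, in m/s
at = Quantity(2.0, (1, -1))       # a*t also reduces to m/s, so the sum is legal
print(u + at)                     # Quantity(value=5.0, dims=(1, -1))
print(u * Quantity(4.0, (0, 1)))  # speed * time -> length: dims=(1, 0)
```

This is the same bookkeeping that makes $u + at$ and $\sqrt{u^2 + 2as}$ come out with identical dimensions.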
568,833 | I have studied physics up to 12th grade and I noticed that whenever new equations are introduced for certain entities, such as a simple harmonic wave, we never prove that it's continuous everywhere or differentiable everywhere before using these properties. For instance we commonly use this property that $v^2\cdot \frac{\partial^2f}{\partial x^2} = \frac{\partial^2f}{\partial t^2}$ holds for the equation to be a wave, and personally I've used this condition dozens of times to check if a function is a wave or not, but I've never been asked to check whether the function I'm analyzing itself is defined everywhere and has a defined double derivative everywhere. Is there a reason for this? There are many more examples but this is the one I get off the top of my head. | Short answer: we don't know, but it works . As the commented question points out, we still don't know if the world can be assumed to be smooth and differentiable everywhere. It may as well be discrete. We really don't have an answer for that (yet). And so what do physicists do, when they don't have a theoretical answer for something? They use Newton's flaming laser sword , a philosophical razor that says that "if it works, it's right enough". You can perform experiments on waves, harmonic oscillators, and the equation you wrote works. As one learns more physics, there are other equations, and for now we can perform experiments on pretty much all kinds of things, and until you get really, really weird, as in black holes or things smaller than electrons, the equations that we have give us the correct answer, therefore we keep using them. Bonus question: let's suppose that, next year, we have a Theory of Everything that says that the universe is discrete and non-differentiable. Do you think the applicability of the wave equation would change? And what about the results, would they be less right? | {
"source": [
"https://physics.stackexchange.com/questions/568833",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/248422/"
]
} |
568,836 | I'm trying to learn the concept of quantum dots and coulomb diamonds, and I'm trying to read this but I have encountered a problem which I couldn't find answer online. On the 4th page it mentioned: By tuning the gates it is possible to tune $\tilde{\mu}_{N+1}$ to lie between the electro chemical potentials in source and drain, allowing electrons to tunnel on and off the dot one at a time I think I can understand the meaning of chemical potential of the island/dot, as it is the energy required to put another electron onto the island/dot. But here the chemical potential of the source and drain doesn't seem to be clearly defined and I couldn't find any related information about it. Can anyone explain with more details? Also, are there any recommended textbooks that cover these details? Thanks! | | {
"source": [
"https://physics.stackexchange.com/questions/568836",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/256572/"
]
} |
568,841 | There is a mass A attached to a horizontal spring fixed at an end. An equal mass B comes along and hits the mass with velocity $v$ . Can I use the equation of motion $v^2=u^2+2as$ , to calculate the acceleration of the masses after the moving mass hits the one on the spring and the spring gets compressed? I got the final velocity as $v/2$ by using equation of conservation of linear momentum of a system. I took initial velocity as $0$ , since the mass A was at rest initially. I was wondering whether this formula would work because initially there was the mass A and now the mass is doubled? The u=0 was for mass A but now final velocity was for the system of two masses. | | {
"source": [
"https://physics.stackexchange.com/questions/568841",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/148222/"
]
} |
569,082 | I have read that heat radiation happens in the form of infrared, which is an EM radiation with a longer wavelength than visible light. So the heat radiation that you can feel in an oven or under the sun is actually the infrared portion of the total radiation. This is why fluorescent or LED lights are so bright but they don't heat up a lot - they mostly produce radiation in the visible spectrum with negligible infrared, whereas incandescent bulbs used to produce a lot of infrared as a byproduct (some would say the visible light was the byproduct in this case). My question is why does electromagnetic radiation in some wavelengths heat things up, whereas others, with either longer or shorter wavelengths (RF, Microwave, UV, Gamma), don't have the same effect? Is it because of the size of the atoms/molecules, or inter-atomic distance, or the distance between nucleus and electrons? Are some wavelengths better suited to increase the vibration of the atoms than others? | In a solid, "heat" consists of random vibrations of the atoms in that solid around their equilibrium positions. If the radiation striking that solid has a wavelength component that is close to one of those possible vibration modes, then the radiation will couple strongly with that vibratory mode and the solid will accept energy from the incident radiation and its temperature will rise. If the incident radiation has too high a frequency (X-ray or gamma) the coupling is poor and the radiation just goes right through without interacting much. If the frequency is too low (radio frequencies lower than radar) the radiation bounces off and also doesn't interact much. This leaves certain specific frequency bands (like infrared and visible light wavelengths) where the interaction is strong. Note that this picture is somewhat simplified in that there are frequency bands in the gigahertz range where the RF energy bounces off electrically conductive materials like metal (this gives us radar) but interacts strongly with dielectrics and materials containing water molecules (this gives us microwave ovens). Note also, as pointed out below by Frederic, that molecules possess resonant modes that their constituent atoms do not, and these can be excited by RF energy as well. Many of these molecular modes lie within the infrared range, giving rise to the field of IR spectroscopy. | {
"source": [
"https://physics.stackexchange.com/questions/569082",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/243892/"
]
} |
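The frequency-matching argument in the answer above is, at bottom, driven-oscillator resonance. Here is a sketch (the natural frequency, damping, and drive strength are made-up parameters) of the average power a damped oscillator absorbs in steady state; it peaks when the drive is near the natural frequency and falls away on either side:

```python
def absorbed_power(omega, omega0=1.0, gamma=0.1, f0=1.0, m=1.0):
    # Mean power delivered to a damped, sinusoidally driven oscillator
    denom = (omega0**2 - omega**2)**2 + (gamma * omega)**2
    return (f0**2 / (2 * m)) * gamma * omega**2 / denom

for omega in (0.2, 0.5, 0.9, 1.0, 1.1, 2.0, 5.0):
    print(f"drive/natural = {omega:.1f} -> relative absorbed power {absorbed_power(omega):.3f}")
```

Radiation far above or far below a material's vibrational resonances couples weakly, which is loosely the "goes right through" and "doesn't interact much" behaviour described above.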
569,156 | Newton came up with gravity to explain an apple falling from the tree; what would Gibbs have thought? The second law states that the entropy of the universe, in general, increases. This is relatively easy to see when you expand a gas or dissolve salt in water, but how is entropy increasing by the falling of the apple? Of course the potential energy of the apple is decreasing, but the second law has no mention of potential energies or forces. So how do you explain this, or like charges repelling, or any other macroscopic phenomena not related to gases or chemical reactions and processes? | The second law says that the entropy of the universe cannot decrease . In this situation Gibbs would say that the entropy of a freely falling apple does not change. Indeed the situation is completely time reversible. If you reverse time in this situation you go from an apple accelerating down under gravity to an apple decelerating under gravity as it travels up, exactly as if it had been thrown from the ground. There is no thermodynamics here, which is what you would expect. Now an apple breaking off a tree or hitting the ground does generate some entropy, related to the breaking of the apple's stem, the deformation of the ground and the dissipation of whatever energy is left over as thermal energy. | {
"source": [
"https://physics.stackexchange.com/questions/569156",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/270106/"
]
} |
569,167 | Assume a perfectly spherical conductor with a spherical cavity inside it. Say we place a charge $+Q$ , at the center of the cavity. It induces a charge $-Q$ on the inner wall of the cavity, and a $+Q$ charge on the outer wall of the conductor. The question is regarding the flux through the "surface" of the cavity. Would it be $+Q/ε_0$ ? Or would the induced $-Q$ charge be involved in the equation, bringing the total enclosed charge and the flux to zero? Please do correct me wherever wrong. | | {
"source": [
"https://physics.stackexchange.com/questions/569167",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/233006/"
]
} |
569,173 | A block of mass $2kg$ is kept at the origin at $t=0$ and has velocity $4{\sqrt 5}$ m/s in the positive x-direction. The only force on it is conservative and its potential energy is defined as $U=-x^3+6x^2+15$ (SI units). Its velocity when the force acting on it is minimum (after the time $t=0$ ) is: And the solution is: At $x=0$ , $K= \frac{1}{2} (2)(80)$ and $U= 15J$ . Total energy is $E=K+U=95J$ . Force: $ F= \frac{-dU}{dX}$ , $F= 3x^2-12x$ . For F to be minimum, $\frac{dF}{dX}=0$ , giving $x=2m$ . At $x=2m$ , $E=K+U$ , $95= \frac{1}{2}(2)(v^2) + (-8+24+15)$ , $v=8 m/s$ . Now my doubt is why $\frac{dF}{dX}$ needs to be zero for minimum force? I know I have some doubt in my basic concept so it would be very helpful if you could elaborate. | | {
"source": [
"https://physics.stackexchange.com/questions/569173",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/270761/"
]
} |
569,273 | I have been very interested in this question since reading Electricity Misconceptions by K-6 There are two perspectives I have come across for how energy flows in a circuit: Electrons carry charge. As the charges move, they create an electromagnetic field that carries the energy around the circuit. The electrons do not act as carriers of electrical energy themselves. This explains the very fast nature of energy flow in the circuit in comparison with the slower drift velocity of the electrons. Electrons bump into nearby electrons, transferring energy to their neighbours through collisions. These neighbour-to-neighbour collisions explain the fast nature of energy flow in the circuit in comparison with the slower drift velocity of the electrons. As electrons pass through a bulb filament, for example, collisions between the bulb and these moving electrons excite the filaments atoms. De-excitation leads to the bulb lighting up. I have seen both of these explanations given by various sources. The 2nd explanation is the one that I see the most often, but the notes on Electricity Misconceptions convincingly put forth the 1st explanation. However, what I am struggling to understand is how this electromagnetic field generated by the current actually leads to energy being transferred in the circuit - how the field leads to bulb glowing. Or perhaps both explanations work together, but I can't see the whole picture. | This is a fantastic question, that indeed has a fantastic answer. I would like to answer your question by answering 3 other apparently disconnected questions, but then we'll connect them that will finally lead to your answer. Question 1:- Do mutually perpendicular moving charges violate Newton's 3rd Law? Assume 2 individually positive charges are moving perpendicular to each other as shown in the figure. One of the charges is moving along the x-axis, while the other moves along the y-axis. Now, due to their motion, they create a magnetic field according to the right hand rule. So, the magnetic field lines created by one charge will affect the other and vice-versa. If you calculate the magnetic forces acting on each charge, you will find that they are equal in magnitude but NOT opposite in direction , as shown in the figure. Now this is strange, since it is a direct hit to Newton's 3rd Law of Motion (which also implies a direct hit to the Law of Conservation of Momentum). Or Is it? Well you see, the magnetic force that we observe is a result of velocity (or motion) of the charges in a magnetic field. So, this force is due to the rate of change of "mechanical" momentum of the particle, i.e., momentum due to mass and motion. But hold on, aren't all kinds of momentum due to motion and mass only? Don't we know it directly from $\mathbf{p} = m\mathbf{v}$ ? Yes, but not always. Turns out, that not all momentum are due to motion and mass. There also exists all different sorts of momentum. One is due the momentum that is carried by the Electromagnetic field itself. (For a point charge Q in EM field, this momentum carried by the fields = $Q\mathbf{A}$ , where $\mathbf{A}$ is the vector potential). So, Newton's 3rd Law is actually not violated, since total momentum (Mechanical + EM field momentum) is actually conserved. Only that mechanical momentum is separately not conserved, hence the apparent violation. Okay, but so what? Hold on to this answer we'll need it. Question 2:- What is the significance of the Poynting Vector, and how is it connected to your 1st Explanation? 
For completeness, I am showing a small derivation of the Poynting Vector. If it's difficult to understand, simply skip it. There would not be any difficulty in continuing with the flow. Assume a small density of charge $\rho$ , moving at a velocity $\mathbf{v}$ in an EM field. The total force on this charge is $$\mathbf F\ = \int_V \rho(\mathbf{E+v\times B})\ d^3r$$ Thus, the work done per unit time within volume V is $$\frac{dW}{dt} = \mathbf{F\cdot v} = \int_V \mathbf{E\cdot J}\ d^3r$$ Substituting the Ampère–Maxwell law, $\mathbf{J} = \frac{1}{\mu_{0}}\nabla\times \mathbf{B} - \varepsilon_0\frac{\partial \mathbf{E}}{\partial t}$ , a little calculation shows $$\frac{dW}{dt} = -\frac{d}{dt}\left\{\int_V \left(\frac{\varepsilon_0}{2} E^2 + \frac{1}{2\mu_0} B^2\right) d^3r\right\} - \oint\frac{1}{\mu_0} (\mathbf{E\times B})\cdot d\mathbf{a}$$ The 1st term on the R.H.S. is the rate of decrease of EM Field Energy within V, and the second term is the energy of the field that is moving out of the surface 'a', enclosing V, per unit time . Thus, the work done on charges per unit time equals the energy decreased in the fields minus the energy that left the surface 'a'. The Poynting Vector is given as $\frac{1}{\mu_0} (\mathbf{E\times B})$ , and it signifies the energy that leaves per unit area of a surface per unit time. Let's calculate the magnitude and direction of the vector for a wire with uniform current I flowing through it, as shown. The Electric field E inside the wire points along the direction of I, and is equal to $\frac{V}{L}$ , where V is the potential applied, and L is the length of the wire.
The Magnetic field is always perpendicular to the Electric field at all points on the surface, and is equal to $\frac{\mu_0 I}{2\pi r}$ (denoted in the diagram by H). The cross product therefore always points perpendicular to the surface inwards .
The magnitude of $\oint\frac{1}{\mu_0} (\mathbf{E\times B})\cdot d\mathbf{a}$ surprisingly yields $VI$ , which is indeed the power consumed by a wire having uniform current flow. Thus, we find that some sort of energy is flowing into the wires . But from where? Now look at this diagram. The current in a circuit always flows in the same direction, inside as well as outside a battery. So, the magnetic field lines always remain the same. However, the electric field inside the battery must reverse its direction, as shown (ignore the writings). So, the Poynting Vector must remain the same in magnitude but change its direction, now pointing perpendicular outwards from the surface of the battery . Aaah, there we are finally! Energy transfer thus takes place in the following manner: Battery deposits the energy per unit time into the surrounding EM field (= $VI$ ) Each section of the rest of the wire in the circuit draws little bits of energy from the field such that the entire wire draws a total of $VI$ units of energy per unit time. The Energy flows through the EM field at the speed of light (in vacuum) and hence, it can easily propagate from the battery to the bulb even if the current has not completely been developed throughout the circuit . The process is illustrated in the GIF below. I hope this answers your 1st Explanation. Question 3:- The Joule Heating produced due to the power consumption of the wires is nowhere to be seen in Explanation 1. So how to explain Joule Heating? Also, in order for the magnetic field to exist throughout the wire, the current needs to flow throughout the circuit. How does the current start flowing in the bulb end of the circuit even before the EM field inside the circuit could reach there? Here is where your Explanation 2 comes into play. You see, recall what we had discussed in Question 1. The total momentum is due to Mechanical + EM Field momentum. But as of now, we have only discussed the flow of energy due to EM Fields, which carry their field momentum. We are still left with our Mechanical momentum. As you know, mechanical momentum is due to mass and motion, so physical motion is absolutely needed for this transfer. However, what happens is that there are so many electrons in a circuit that a single particle cannot travel very far without "colliding" with its neighboring electrons or the fixed atoms. Thus, all the energy that individual electrons carry gets converted into the kinetic energy of the atoms and electrons, leading to Joule heating of the wires. Also, these collisions provide the "push" needed to set up the current throughout the circuit. Similarly, from Question 2, we find that the energy propagating through the EM field (as an oscillating EM Wave) from the battery can easily reach the bulb, travelling at the speed of light. This wave, after reaching the bulb, sets up a current inside the filaments of the bulb, even if the current has not been set up throughout the wire connecting the battery to the bulb . So, to conclude: Explanation 1 does take place and it explains the way Electro-Magnetic Energy flows from the source to the wires and bulbs. Explanation 2 does take place and explains the Joule Heating and the Mechanical part of the momentum carried by individual particles, and how the current starts flowing at the bulb end of the circuit where the EM field inside the circuit has not had enough time to reach . Hope it helps! | {
"source": [
"https://physics.stackexchange.com/questions/569273",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/148271/"
]
} |
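As a numeric check of the claim above that the Poynting flux through the wire's surface equals $VI$ (a sketch; the voltage, current, and wire geometry are arbitrary values I chose):

```python
import math

mu0 = 4e-7 * math.pi  # vacuum permeability, T*m/A
V, I = 1.5, 2.0       # volts across and amps through the wire (assumed)
L, r = 0.10, 1e-3     # wire length and radius in metres (assumed)

E = V / L                          # axial electric field at the surface
B = mu0 * I / (2 * math.pi * r)    # circumferential magnetic field at the surface
S = E * B / mu0                    # |E x B| / mu0, pointing radially inward

power_in = S * (2 * math.pi * r * L)  # integrate over the cylindrical surface
print(power_in, V * I)                # both print 3.0 (watts)
```

The geometry drops out: the $2\pi r$ and $L$ in the fields cancel against the surface area, leaving exactly $VI$.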
570,486 | Imagine a closed loop in the shape of a trefoil knot ( https://en.wikipedia.org/wiki/Trefoil_knot ). How should one calculate the flux through this loop? Normally we define an arbitrary smooth surface, say, $\mathcal{S}$ whose boundary $\partial{\mathcal{S}}$ is the given loop and calculate the flux using its integral definition as $$\Phi_B = \int_{{\mathcal{S}}} \mathbf{B}\cdot d\mathbf{S}\tag{1}\label{1}$$ It is clear how to use $\eqref{1}$ when the loop is a simple loop and the surface is also a simple one, but how can one spread a surface on a trefoil and it be still true that for such surfaces the flux is always the same because $\nabla \cdot \mathbf{B}=0$ , in other words how does Gauss' theorem hold for surfaces whose boundary is a trefoil? Alternatively, one could introduce the vector potential $\mathbf{B}=\nabla \times \mathbf{A}$ and using Stokes' theorem derive from the definition of flux $\eqref{1}$ that $$\Phi_A = \int_{{\mathcal{S}}} \nabla \times \mathbf{A}\cdot d\mathbf{S}\\
=\oint_{\partial\mathbf{S}} \mathbf{A}\cdot d \ell \tag{2}\label{2}$$ So, whenever we can use Stokes' theorem we also have $\Phi_A=\Phi_B$ . How does Stokes' theorem hold if the loop is a trefoil? If in fact the application of either Gauss' or Stokes' theorem has a problem then does the fact that the line integral via $\eqref{2}$ can always be used to define the flux $\Phi_A$ mean that at least in this sense $\mathbf{A}$ is more fundamental than $\mathbf{B}$ ? | Every knot is the boundary of an orientable surface. Such a surface is called a Seifert surface . $^\dagger$ For any given knot (with a given embedding in 3-d space), the flux is the same through two such surfaces. As usual, the flux can be calculated either by integrating $\mathbf{B}$ over the surface, or by integrating $\mathbf{A}$ around the knot. Figure 6 in "Visualization of Seifert Surfaces" by van Wijk and Cohen ( link to pdf ) shows this nice picture of an orientable surface whose boundary is a trefoil knot: The boundary (the trefoil knot) is highlighted in yellow. To see that this really is a trefoil knot, imagine smoothing out the kinks and then looking down on the figure from above. The fact that the surface is orientable is clear by inspection (an insect on one side cannot walk to the other side without crossing the boundary), as is the fact that it does not intersect itself. Intuitively, we can see that Stokes' theorem will still work in this case by subdividing the surface into small cells, each with the unknot as its boundary, and applying Stokes' theorem to each individual cell. The contributions from the cell-surfaces add up to the flux over the full surface, and the contributions from the cell-boundaries cancel each other wherever two boundaries are adjacent, leaving only the integral over the trefoil. We can also see intuitively that the flux must be the same through any two such surfaces, because those two surfaces can be joined into a single closed surface over which the total flux must be zero because of $\nabla\cdot\mathbf{B}=0$ . The fact that the closed surface might intersect itself is not a problem, just like it's not a problem for two intersecting surfaces sharing the same unknot as the boundary. $^\dagger$ The idea behind the proof that a Seifert surface exists is sketched in "Seifert surfaces and genera of knots" by Landry ( link to pdf ). | {
"source": [
"https://physics.stackexchange.com/questions/570486",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/31748/"
]
} |
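Equation (2) is also easy to check numerically for a knotted loop. Below is a sketch (my own test case, not from the answer) for a uniform field $\mathbf{B} = B\hat{z}$ with $\mathbf{A} = \frac{1}{2}\mathbf{B}\times\mathbf{r}$: the line integral of $\mathbf{A}$ around a standard trefoil parametrization comes out as $7\pi B$, and that single number is the flux through every Seifert surface spanning the knot.

```python
import numpy as np

# Trefoil: x = sin t + 2 sin 2t, y = cos t - 2 cos 2t, z = -sin 3t.
# For B along z with A = (B/2) z_hat x r, A . dl = (B/2)(x dy - y dx),
# so z(t) drops out of the line integral.
B = 1.0
t = np.linspace(0.0, 2 * np.pi, 100_001)
x = np.sin(t) + 2 * np.sin(2 * t)
y = np.cos(t) - 2 * np.cos(2 * t)
dxdt = np.cos(t) + 4 * np.cos(2 * t)
dydt = -np.sin(t) + 4 * np.sin(2 * t)

flux = np.trapz(0.5 * B * (x * dydt - y * dxdt), t)
print(flux, 7 * np.pi * B)  # both ~21.9911
```

(The closed form follows because the integrand reduces to $\frac{B}{2}(7 - 2\cos 3t)$, whose average over a period is $\frac{7B}{2}$.)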
570,927 | The ideas of dark matter and dark energy are mind blowing.
Why is it said that dark matter overcomes dark energy in galaxies but it loses the battle in intergalactic space? In other words, why is dark energy dominant between galaxies but not inside galaxies? | These aspects of astronomy and cosmology are indeed very interesting and very significant, but don't allow the names to get in the way of your understanding. Dark matter is a form of matter made (most likely) of particles which don't interact very much with the matter we are more familiar with (i.e. protons, neutrons, electrons etc.). The evidence for it has several strands (rotation curves of galaxies, gravitational lensing, calculations of structure formation, calculations of matter content from nucleosynthesis in the early universe, etc.) The evidence for dark energy is summarised here: What is the evidence that dark energy exists? (as of 2020) ) "Dark energy" is a rather confusing name, in my opinion. It refers to the behaviour of the expansion of the universe at the largest scales. Ordinary matter tends to pull things together by gravitational attraction and therefore always slows the expansion. But the equations of general relativity allow that there might be effects which accelerate the expansion. Such effects get the name "dark energy". I wish the cosmologists had settled on a better name. But there it is. The name arises because this contribution to the overall dynamics of the universe enters the equations in two places, one of which behaves like energy and the other of which behaves like stress, in fact a form of tension (the opposite of pressure). But in physics if something behaves like X then we say it is X. So it is called energy. Dark because it does not emit electromagnetic radiation. The most significant thing about this contribution called dark energy is that it enters the equations of general relativity as a term which just gets added on, irrespective of where the matter in the universe may be. It is added on in exactly the same way everywhere. And most of the universe is vast empty voids between filaments of dark matter. Therefore the dark energy contribution adds up to a large total effect on average, even though it is tiny compared to the ordinary matter and dark matter at any given place where matter is present. The reason why the gravitational attraction of ordinary matter and of dark matter easily wins against the repulsive effects of this other term, wherever the matter is actually present, is simply that the dark energy per unit volume is so small. But after averaging over the whole volume of the universe it nevertheless makes the biggest contribution to the dynamics of the whole universe on average, because it is present throughout the otherwise empty voids, and those voids make up most of the volume. | {
"source": [
"https://physics.stackexchange.com/questions/570927",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/271429/"
]
} |
570,966 | I can understand why, if the speed of light is invariant, a photon clock would tick slower. I find this explanation very useful in terms of introducing the idea of time dilation (also because it allows for the Lorentz formula to be derived intuitively, only using Pythagora's Theorem). But this approach has one important missing concept. A student might say; Okay I get why the photon clock would tick slower, but why is it an intrinsic property of time itself? Why is this not some effect of the mechanics of this specific clock? How are a pendulum clock, an atomic clock, circadian rhythms, a chemical clock, etc... all equivalent to the photon clock? Why the slowdown of the ticking of the photon clock is a probe on the very nature of all clocks and time itself and not just a probe on the nature of this particular clock (more so if we consider that the explanation relies on the specific mechanism of this clock to work)? For example some students might reason; a pendulum clock would slow down on the lunar surface, since the gravity is lower and therefore the pendulum would have a larger period, but we don't immediately jump to the conclusion that time itself has slowed down on the Moon with respect to Earth (in fact, ironically, in general relativity it is the other way around), just that the technical features of this particular clock make this happen because we have altered its functionality by altering the physical enviroment where it operates. The same could be said of a spring clock submerged in water for example. But if we don't think that the Moon gravity slows time with respect to Earth's just because the pendulum clock ticks slower, or that water slows time just because the spring clock ticks slower, then why should we think that moving at a certain relative speed slows the flow of time just because the photon clock ticks slower? | Invoke the principle of relativity. An inertial observer carries both a light clock and a mechanical wristwatch,
which agree when all are at rest.
If they don't agree when the inertial observer is moving [with nonzero constant velocity] carrying these clocks,
then that observer can distinguish being at rest from traveling with nonzero constant velocity. UPDATE: Q: What makes the photon clock special among all other clocks? A: Simplicity. It's easier to formulate, analyze, and interpret than other clocks. If the principle of relativity holds, it must turn out that one can eventually analyze any clock and get the same result as the light-clock---it probably takes a lot more analysis and interpretation [of the device, the surroundings, and the interactions]. | {
"source": [
"https://physics.stackexchange.com/questions/570966",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/235183/"
]
} |
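For the student objection in the question, it can also help to see the light-clock geometry worked through numerically (a sketch; the one-metre arm and $0.6c$ are arbitrary choices): the tick period of a transversely moving light clock, computed purely from Pythagoras, reproduces the Lorentz factor.

```python
import math

c = 299_792_458.0  # m/s
L = 1.0            # light-clock arm length, metres (arbitrary)
v = 0.6 * c        # transverse speed of the clock (arbitrary)

T0 = 2 * L / c  # tick period at rest
# Moving clock: light runs along the hypotenuse, c*T/2 = sqrt(L^2 + (v*T/2)^2),
# which solves to:
T = 2 * L / math.sqrt(c**2 - v**2)
gamma = 1 / math.sqrt(1 - (v / c) ** 2)

print(T / T0, gamma)  # both 1.25: the moving clock ticks slower by exactly gamma
```

The principle of relativity then forces every co-moving clock, mechanical or biological, to slow by the same factor; otherwise the comparison would reveal absolute motion.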
570,975 | I was doing Kleppner-D.-Kolenkow-R.J. and I came across the following problem:- A pendulum is tied vertically to a car at rest, the car suddenly accelerates at a rate A. Find the maximum angle of deflection $\phi$ through which the weight swings. MY TRY: I saw the solution of this problem in the book which uses car's frame of reference, which was fairly simple. I tried to do it in the ground frame of reference. Deflection of the pendulum will be maximum when the angular velocity of the mass hung to pendulum relative to the hanged point will be zero, hence the velocity of mass relative to the car, perpendicular to the string is zero. But the constraint of a taut string doesn't allow velocity of mass relative to the car along the string also. So, velocity of mass relative to car is zero at the point of maximum deflection. I have the following two tools to solve the problem:- Apply Work energy theorem to the mass. Use the string constraint i.e. the acceleration of the mass and the topmost point along the string will be equal at any instant i.e. $T-mgcos(\theta)=masin(\theta)$ The tension force and gravity are only two forces acting on the mass. But, how could one find the work done by tension on the mass in the journey from $A$ to $B$ . Any hint would be a great help! | | {
"source": [
"https://physics.stackexchange.com/questions/570975",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/257380/"
]
} |
571,019 | Air has a density of about $\mathrm{1.3 kg/m^3}$ . From Carbon aerogels by Marcus A. Worsley and Theodore F. Baumann : Though silica aerogels held the title of "world's lightest material" for a long time at $\sim \mathrm{ 1 mg/cm^3}$ , recently, carbon-based aerogels have shattered that record with a density of less than $\mathrm{200 \mu g/cm^3}$ . So the above-named aerogels would have densities of $\sim \mathrm{1 kg/m^3}$ and $\mathrm{0.2 kg/m^3}$ respectively. How can they be lighter than air if a part of them is a solid (silica or carbon) that is heavier than air? | While the summary you cited is a convenient and easy to understand phrase, it is a paraphrase of another cited paper: Sun H., Xu Z., Gao C., "Multifunctional, Ultra-Flyweight, Synergistically Assembled Carbon Aerogels", Adv. Mater. 25 (2013) 2554–2560 . The paper says: The density was calculated by the weight of solid content without
including the weight of entrapped air divided by the volume of aerogel
(the density measured in a vacuum is identical to that in the air) So indeed the other answers are correct: the air is not factored into the density, presumably so aerogels can be compared objectively (despite those at higher altitudes and lower humidity being measured less dense). | {
"source": [
"https://physics.stackexchange.com/questions/571019",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/207398/"
]
} |
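The unit conversions in the question are worth a two-line sanity check (numbers taken from the question itself):

```python
# 1 g/cm^3 = 1000 kg/m^3, so 1 mg/cm^3 = 1 kg/m^3
silica = 1e-3 * 1000     # 1 mg/cm^3   -> 1.0 kg/m^3
carbon = 200e-6 * 1000   # 200 ug/cm^3 -> 0.2 kg/m^3
air = 1.3                # kg/m^3, sea-level air
print(silica, carbon, air)  # both aerogel figures sit below air's density
```

which is only possible because, as the answer explains, the quoted densities exclude the entrapped air.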
571,412 | I just studied Heisenberg's uncertainty principle in school and I came up with an interesting problem. Assume an electron which is moving very slowly and we observe it with a distance uncertainty of say $\Delta x=1\times10^{-13} \text{ m}$ if we try finding uncertainty of velocity using the formula $$\Delta x \cdot \Delta v\ge \dfrac{h}{4\pi m}$$ $$\Delta v=578838179.9 \text{ m/s}$$ Which is clearly greater than the speed of light but that is not possible. How did physicists overcome this challenge? | The right formula is $$\Delta X \Delta P \geq h/4\pi$$ where $P$ is the momentum which is approximatively $mv$ only for small velocities $v$ when compared with $c$ . Otherwise you have to use the relativistic expression $$P = mv/ \sqrt{1-v^2/c^2}.$$ If $\Delta X$ is small, then $\Delta P$ is large but, according to the formula above, the speed remains of the order of $c$ at most. That is because, in the formula above, $P\to +\infty$ corresponds to $v\to c$ . With some details, solving the above identity for $v$ , we have $$v = \frac{P}{m \sqrt{1+ P^2/m^2c^2}}\:,$$ so that $$v\pm \Delta v = \frac{P\pm \Delta P}{m \sqrt{1+ (P\pm \Delta P)^2/m^2c^2}}.$$ We have obtained the exact expression of $\Delta v$ : $$\pm \Delta v = \frac{P\pm \Delta P}{m \sqrt{1+ (P\pm \Delta P)^2/m^2c^2}} - \frac{P}{m \sqrt{1+ P^2/m^2c^2}},$$ where $$\Delta P = \frac{\hbar}{2\Delta X}\:.$$ This is a complicated expression but it is easy to see that the final speed cannot exceed $c$ in any cases.
For a fixed value of $P$ and $\Delta X \to 0$ , we have $$v\pm \Delta v = \lim_{\Delta P \to + \infty}\frac{P\pm \Delta P}{m \sqrt{1+ (P \pm \Delta P)^2/m^2c^2}}= \pm c\:.\tag{1}$$ Finally, it is not difficult to see that (using the graph of the hyperbolic tangent function) $$-1 \leq \frac{(P\pm \Delta P)/mc}{ \sqrt{1+ (P \pm \Delta P)^2/m^2c^2}}\leq 1\tag{2}\:.$$ We therefore conclude that $$-c \leq v\pm \Delta v \leq c,$$ where the boundary values are achieved only for $\Delta X \to 0$ according to (1).
Relativity is safe... | {
"source": [
"https://physics.stackexchange.com/questions/571412",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/263982/"
]
} |
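A numeric version of the answer's resolution, using the question's $\Delta x$ (a sketch; the constants are rounded CODATA values):

```python
import math

hbar = 1.054_571_817e-34  # J*s
m = 9.109_383_7015e-31    # electron mass, kg
c = 299_792_458.0         # m/s
dx = 1e-13                # m, from the question

dp = hbar / (2 * dx)  # momentum uncertainty from Delta x * Delta p >= hbar/2
v_naive = dp / m      # non-relativistic reading of dp as m*dv: exceeds c!
v_rel = dp / (m * math.sqrt(1 + (dp / (m * c))**2))  # invert p = gamma*m*v

print(f"naive dv       = {v_naive:.3e} m/s (c = {c:.3e} m/s)")
print(f"relativistic v = {v_rel:.3e} m/s (always below c)")
```

The first line reproduces the question's $5.79\times10^8$ m/s; the second shows that feeding the same momentum uncertainty through $p=\gamma m v$ keeps the speed safely below $c$.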
572,565 | I'm building an autonomous boat, to which I now add a keel below it with a weight at the bottom. I was wondering about the shape that weight should get. Most of the time aerodynamic shapes take some shape like this: The usual explanation is that the long pointy tail prevents turbulence. I understand that, but I haven't found a reason why the front of the shape is so stumpy. I would expect a shape such as this to be way more aerodynamic: Why then, are shapes that have good reason to be aero-/hydrodynamic/streamlined (wings/submarines/etc) always more or less shaped like a drop with a stumpy front? | You are correct if your boat will only travel in a straight line. In real life the motion of the boat will often have a yaw angle, so that it is moving slightly "sideways" relative to the water. For example it is impossible to make a turn and avoid this situation. If the front is too sharp, the result will be that the flow can not "get round the sharp corner" to flow along both sides of the boat, without creating a lot of turbulence and waves which increase the drag on the boat. | {
"source": [
"https://physics.stackexchange.com/questions/572565",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/271800/"
]
} |
573,618 | At high temperatures, atoms interact more intensely with each other, and emitted photons could also make the nucleus vibrate. In these circumstances, is the radioactive material likely to decay faster? Can this be used to get rid of radioactive waste? | In the years following the discovery of radioactivity, physicists and chemists (recall that Rutherford was given the Nobel prize for Chemistry!) investigated the effect of heating radioactive substances. They could detect no effect on the activity, and therefore none on the half life. This was interpreted (as soon as the atom had been established as a nucleus surrounded by electrons) as evidence that the radiation came from the nucleus. The argument was – and still is – that even at furnace temperatures (say up to 3000 K) there will be disturbance to the electron configurations but it will be rare for atoms to be totally stripped of electrons, and violent internuclear collisions will be very rare. Only such collisions would be likely to influence the emission of a particle from an unstable nucleus. At much higher temperatures and densities (e.g. in a tokamak or in a star) violent internuclear collisions will be common, and I'd guess that the half lives of unstable nuclei would be reduced, but this is not, as far as I know, detectable at 'ordinary' terrestrial temperatures. | {
"source": [
"https://physics.stackexchange.com/questions/573618",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/253499/"
]
} |
573,755 | So, based on this question , a molecule containing a radioactive atom will break when the atom decays. But suppose you need a lot energy to break the compound apart --- as in, more energy than the decay of the atom will release (obviously, a molecule this stable isn't actually possible... right?). Will the atom just be forced to stay static, or would something else happen? I can't think of a way for the compound to break, since that would probably need free energy. But maybe the compound can "soak up" energy, so a sharp jolt or high heat can cause the atom to decay and the bonds to break? | In principle, yes. If the would-be decay products have a higher energy than the original molecule, the decay cannot occur. In practice, chemical binding energies (typically in the $\rm eV$ range) are much, much smaller than nuclear decay energies (typically in the $\rm MeV$ range), and so this does not occur in any cases that I am aware of. This is not a coincidence, but just a natural consequence of the relative strength of nuclear and electromagnetic interactions. | {
"source": [
"https://physics.stackexchange.com/questions/573755",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/269791/"
]
} |
574,214 | I know there are a lot of similar questions but I don't believe this to be a copy. I understand that if two people lived far apart they could not transfer information through quantum entangled particles, because forcing a particle into a particular spin breaks the entanglement, and simply observing the particle to collapse the other part of the pair will give a perfectly random result. But what about using entanglement to sort of indirectly coordinate plans from far away? I know this is wrong somehow and uses a childish interpretation of the idea of communicating information, but this is just to make this as clear as possible: let's say the year is 3050, and there are 2 leaders of an allied war who want to attack a planet. They are currently on opposite sides of the planet and have 2 plans they can decide upon: 1) both attack from the east and west at once, or 2) both attack from the north and south at the same time. Using an atomic clock the leaders coordinate to check the state of quantum entangled particles (or a qubit, doesn't matter) at 12pm. If the qubit collapses as a (1/0) they go with plan A while (0/1) means plan B. I believe that this does not constitute faster than light communication because both plans were conceived ahead of time and the particle or quantum state was just used as a random number generator, but it still seems as though the plan of attack was being transferred. My questions are: could this scheme actually be used, or is there something I'm missing? Why does this not constitute faster than light communication? I would also just like to hear the thoughts of people smarter than me on the physics around this hypothetical. | Going off of WillO's answer, while this scheme would work it would be no more effective than using a printer and two pieces of paper. Yes, your scheme is different in that it involves quantum nonlocality, but nevertheless it does not constitute faster-than-light communication because no information is being transferred between the two leaders. Their respective observations are correlated, but are nevertheless random. Hence, there's no problem. Is it weird? Yes. Is it a threat to causality? No. :) | {
"source": [
"https://physics.stackexchange.com/questions/574214",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/250009/"
]
} |
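Operationally, the scheme in the question is identical to sharing a random bit in advance; the toy simulation below (entirely my own illustration, not a quantum computation) makes the point that both leaders see correlated but uncontrollable outcomes, so no chosen message ever moves between them:

```python
import random

def measure_shared_pair(seed):
    # Toy stand-in for a maximally correlated pair: both parties, measuring
    # in the agreed basis at the agreed time, read out the same random bit.
    bit = random.Random(seed).randint(0, 1)
    return bit, bit  # (leader A's outcome, leader B's outcome)

plan = {0: "attack east-west", 1: "attack north-south"}
a, b = measure_shared_pair(seed=3050)
print(f"Leader A sees {a} -> {plan[a]}; Leader B sees {b} -> {plan[b]}")
# Neither leader can choose which bit comes out, which is precisely why
# no information travels faster than light here.
```

Replace `measure_shared_pair` with two sealed envelopes printed from the same coin flip and the protocol works just as well.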
574,217 | For rigid bodies, all the particles can have different linear velocities but the same angular velocity, so it makes it convenient to talk about the angular velocity instead. From there, we get to ideas like angular momentum and torque, which work the same way for angular motion as momentum and force do for linear motion. However, if we have a system of $n$ particles freely moving around, say a gas, do we still use these ideas? In that case, the moment of inertia is constantly changing. If we apply a constant force to any of the particles, then it'll result in a non-constant torque because of the continuously changing position vector. In case of rigid bodies, this is not the case because, in at least the 'axis of rotation frame', the torque due to a constant force on a particle is constant because the angle between the force and the position vector of the particle remains constant because of the rigid nature of the body. For non-rigid bodies, there is no 'axis of rotation' frame, so torque is also very inconvenient to talk about. So are these ideas only used for motions where the whole system can be ascribed the same angular velocity at all times? | | {
"source": [
"https://physics.stackexchange.com/questions/574217",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/156987/"
]
} |
574,223 | An object is in free fall when the force acting on it is exclusively gravitational. But why then is the moon in free fall? Isn't there a centrifugal force acting on it? | | {
"source": [
"https://physics.stackexchange.com/questions/574223",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/271049/"
]
} |
574,593 | Why am I able to see the top of the pictures even though they aren't facing the reflective surface? The light would have to travel down through the picture. | As FGSUZ said, an object doesn't have to face a reflective surface to be seen as a reflection in it. I made the following picture to illustrate it in two dimensions: Two dimensions are sufficient to illustrate it, and it seems to be clearer that way. You can see now that an object doesn't have to face a reflective surface for its image to be reflected from it. The sufficient condition is that the object is visible from a point at the reflective surface. | {
"source": [
"https://physics.stackexchange.com/questions/574593",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/272810/"
]
} |
574,736 | What is the principle of physics used in this popular stunt?
Initially, I thought it was aerodynamics due to an increase in the angle of attack, but its magnitude is not sufficient to balance the whole body and skateboard. Please, can anyone help me understand it? Animation: | The skateboard is able to lift off the ground because of the momentum imparted to it by the skateboarder pushing down on the kicktail. The skateboard acts as a lever around the rear wheels, so when the kicktail is pushed down, the center of mass of the skateboard rises up. If you do this fast enough, the skateboard's center of mass gets enough upward momentum to lift the entire skateboard off the ground. To set up a similar experiment, lay a ruler or pencil so it hangs over the edge of a table a small amount, hit down on the free end, and watch it fly up into the air. You may notice that the object not only flies up but also across the room toward the end you hit. The impulse imparts both vertical and horizontal momentum, which you can see in the first part of the skateboard clip as the center of the board moves both upward and backward. The skateboarder then uses their front foot to stop this horizontal/rotational motion of the board and keep it under their feet, which is possible because the skateboarder has much more mass/inertia than the board. Because the skateboarder is tens of times as massive as the board, they are easily able to manipulate its momentum with their body, while changing their own momentum relatively little (if you look closely, you can see that both the skateboard and skateboarder do, in fact, land slightly behind the point of liftoff). If they just stomped on the kicktail without doing anything else, the board would arc upwards and backward, flipping end over end through the air. There is nothing related to aerodynamics at play here; this trick could be performed exactly the same way in a vacuum. EDIT: There seem to be some other factors at play that I've missed here. In particular, the front foot can add some lift to the board as it slides forward to the nose. As the board leaves the ground and rotates up into the front foot, it produces a normal force, which allows the front foot to impart a frictional force parallel to the surface of the board. This won't get the board off the ground in the first place (since friction is always parallel to the board), but once the board is oriented somewhat upright, the board can be pulled further upward by the front foot. Thanks to @Todd Wilcox for pointing this out. | {
"source": [
"https://physics.stackexchange.com/questions/574736",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/269574/"
]
} |
574,866 | I looked at a few of the other posts regarding the accuracy of atomic clocks, but I was not able to derive the answer to my question myself. I've seen it stated that atomic clocks are accurate on the order of $10^{-16}$ seconds per second. However, if there is no absolute reference frame with which to measure "real-time”, what is the reference clock relative to which the pace of an atomic clock can be measured? Is the accuracy of an atomic clock even meaningful? Can't we just say the atomic clocks are perfectly accurate and use them as the reference for everything else? | This is a good and somewhat tricky question for a number of reasons. I will try to simplify things down. SI Second First, let's look at the modern definition of the SI second . The second, symbol s, is the SI unit of time. It is defined by
taking the fixed numerical value of the caesium frequency ∆νCs, the unperturbed ground-state hyperfine transition frequency of the caesium
133 atom, to be 9192631770 when expressed in the unit Hz, which is
equal to s−1. Emphasis mine The key word here is unperturbed . This means, among other things, that the Cs atom should have no motion and there should be no external fields. We'll come back to why these systematic effects are very important shortly. How an Atomic Clock Works How do we build a clock based on this definition of the second? We do it as follows. The Cs transition frequency is about 9.19 GHz. This is a microwave signal. Using analog electronics, engineers are able to make very very precise electric signals at these frequencies and these frequencies can be tuned to address the Cs atomic transition. The basic idea is to bathe the Cs atoms in microwave radiation in the vicinity of 9.192631770 GHz. If you are on resonance the atoms will be excited to the excited state. If not they will stay in the ground state. Thus, by measuring whether the atoms are in the ground or excited state you can determine if your microwave signal is on or off resonance. What we actually end up using as the clock (the thing which ticks off periodic events that we can count) is actually the 9.19 GHz microwave signal which is generated by some electronics box*. Once we see 9192631770 oscillations of this microwave signal (counted by measuring zero crossings of the microwave signal using electronics) we say that one second has passed. The purpose of the atoms is to check that the microwave frequency is just right. This is similar to how you might reset your microwave or oven clock to match your phone occasionally. We calibrate or discipline one clock to another. So an atomic clock works by disciplining a microwave signal to an atomic transition frequency. Now, suppose you build a clock based on this principle and I also build one and we start our clocks at the same time (turn on our microwave oscillators and start comparing to the atoms occasionally). There are two possibilities. The first is that our two clocks always tick at the exact same time. The second is that there is noise or fluctuations somewhere in the system that cause us to get ticks at slightly different moments in time. Which do you think happens? We should be guided by the principle that nothing in experimental physics is ever exact. There is always noise. Atomic clock physics is all about learning about and understanding noise. Clock Accuracy This is the main topic of the OP's question. This is also where the key word unperturbed comes back into play. The Zeeman effect says that if the atom is in a magnetic field its transition frequency will shift slightly. This means a magnetic field constitutes a perturbation. This is one reason why your clock and my clock might tick at different moments in time. Our atoms may experience slightly different magnetic fields. Now, for this reason you and I will try really hard to ensure there is absolutely no magnetic field present in our atomic clock. However, this is difficult because there are magnetic materials that we need to use to build our clock, and there are magnetic fields due to the Earth and screwdrivers in the lab and all sorts of things. We can do our best to eliminate the magnetic field, but we will never be able to remove it entirely. One thing we can do is we can try to measure how large the magnetic field is and take this into account when determining our clock frequency. Suppose that the atoms experience a linear Zeeman shift of $\gamma = 1 \text{ MHz/Gauss}$ **. That is $$
\Delta f = \gamma B
$$ Now, if I go into my atomic clock I can do my best to measure the magnetic field at the location of the atoms. Suppose I measure a magnetic field of 1 mG. This means that I have a known shift of my Cs transition frequency of $\Delta f = 1 \text{ MHz/Gauss} \times 1 \text{ mG} = 1 \text{ kHz}$ . This means that, in the absence of other perturbations to my atoms, I would expect my atoms to have a transition frequency of 9.192632770 GHz instead of 9.192631770 GHz. Ok, so if you and I both measure the magnetic fields in our clocks and compensate for this linear Zeeman shift, we now get our clocks ticking at the same frequency, right? Wrong. The problem is that however we measure the magnetic field, that measurement itself will have some uncertainty. So I might actually measure the magnetic field in my clock to be $$
B = 1.000 \pm 0.002\text{ mG}
$$ This corresponds to an uncertainty in my atomic transition frequency of $$
\delta f = 2 \text{ Hz}
$$ So, because of the uncertainty in my systematic shifts, I don't exactly know the transition frequency of my atoms. That is, I don't have unperturbed ground state Cs atoms, so my experiment doesn't exactly implement the SI definition of the second. It is just my best guess. But we do have some information. What if we could compare my atoms to perfect unperturbed Cs atoms? How much might my clock differ from that ideal clock? Suppose I decrease the frequency of my clock by 1 kHz to account for the magnetic field shift so that my clock runs at $$
f_{real} = 9192631770 \pm 2 \text{ Hz}
$$ While the ideal Cs clock runs (by definition of the SI second) at exactly $$
f_{ideal} = 9192631770 \text{ Hz}
$$ Let’s run both of these for $T= 1 \text{ s}$ . The ideal clock will obviously tick off $$
N_{ideal} = f_{ideal} T = 9192631770
$$ oscillations since that is the definition of a second. How many times will my clock tick? Let's assume the worst case scenario that my clock is slow by 2 Hz. Then it will tick $$
N_{real} = f_{real} T = 91926317\textbf{68}
$$ It was two ticks slow after one second. Turning this around we can ask if we used my clock to measure a second (that is if we let it tick $N_{real} = 9192631770$ under the assumption - our best guess - that the real clock's frequency is indeed 9.192631770 GHz) how long would it really take? $$
T_{real} = 9192631770/f_{real} \approx 1.00000000022 \text{ s}
$$ We see that my clock is slow by about 200 ps after 1 s. Pretty good. If you run my clock for $5 \times 10^9 \text{ s} \approx 158.4 \text{ years}$ then it will be off by one second. This corresponds to a fractional uncertainty of about $$
\frac{1 \text{ s}}{5 \times 10^9 \text{ s}} \approx \frac{2 \text{ Hz}}{9192631770 \text{ Hz}} \approx 2\times 10^{-10} = 0.2 \text{ ppb}
$$
Frequency Uncertainty to Seconds Lost
Here I want to do some more mathematical manipulations to show the relationship between the fractional frequency uncertainty for a clock and the commonly quoted "number of seconds needed before the clock loses a second" metric. Suppose we have two clocks: an ideal clock which has unperturbed atoms and runs at frequency $f_0$ , and a real clock which we've calibrated so that our best guess is that it runs at $f_0$ , but there is an uncertainty $\delta f$ , so it really runs at $f_0 - \delta f$ .
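Before the algebra, here is a minimal numerical sketch (Python; my addition, not part of the original answer) that redoes the worked example above and converts the 2 Hz frequency uncertainty into a "time to lose a second":

```python
# Sketch of the worked example above; all input numbers are from the text.
f0 = 9_192_631_770.0  # Hz, SI-defined Cs hyperfine frequency
gamma = 1e6           # Hz/Gauss, the made-up linear Zeeman coefficient
dB = 0.002e-3         # Gauss, uncertainty of the 1 mG field measurement

df = gamma * dB       # frequency uncertainty: 2 Hz
frac = df / f0        # fractional frequency uncertainty, about 2e-10

print(f"frequency uncertainty: {df:.1f} Hz")
print(f"fractional uncertainty: {frac:.1e}")
print(f"offset after 1 s: {frac:.1e} s (about 200 ps)")
print(f"time to lose 1 s: {1 / frac:.1e} s (about {1 / frac / 3.156e7:.0f} years)")
```

(This prints about 146 years; the 158 years quoted above comes from first rounding the duration up to $5\times10^9\text{ s}$ .)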
We are now going to run these two clocks for time $T$ and see how long we have to run it until they are off by $\Delta T = 1 \text{ s}$ . As time progresses, each clock will tick a certain number of times. The $I$ subscript is for the ideal clock and $R$ is for real. \begin{align}
N_I =& f_0T\\
N_R =& (f_0 - \delta f)T
\end{align} This relates the number of ticks to the amount of time that elapsed. However, we actually measure time by counting ticks! So we can write down what times $T_I$ and $T_R$ we would infer from each of the two clocks (by dividing the observed number of oscillations by the presumed oscillation frequency $f_0$ ). \begin{align}
T_I =& N_I/f_0 = T\\
T_R =& N_R/f_0 = \left(\frac{f_0 - \delta f}{f_0}\right) T_I = \left(1 - \frac{\delta f}{f_0}\right)T_I
\end{align} These are the key equations. Note that in the first equation we see that the time inferred from the ideal clock, $T_I$ , is equal to $T$ , which of course had to be the case because time is actually defined by $T_I$ . Now, for the real clock we estimated its time reading by dividing its number of ticks, $N_R$ (which is unambiguous), by $f_0$ . Why didn't I divide by $f_0 - \delta f$ ? Remember that our best guess is that the real clock ticks at $f_0$ ; $\delta f$ is an uncertainty, so we don't actually know whether the clock is ticking fast or slow by the amount $\delta f$ , we just know that it wouldn't be statistically improbable for us to be off by this amount. It is this uncertainty that leads to the discrepancy in the time reading between the real and ideal clocks. We now calculate \begin{align}
\Delta T = T_I - T_R = \frac{\delta f}{f_0} T_I
\end{align} So we see \begin{align}
\frac{\Delta T}{T_I} = \frac{\delta f}{f_0}
\end{align} So we see that the ratio of the time difference $\Delta T$ to the elapsed time $T$ is given exactly by the ratio of the frequency uncertainty $\delta f$ to the clock frequency $f_0$ .
Summary
To answer the OP's question, there isn't any perfect clock against which we can compare the world's best atomic clocks. In fact, the world's most accurate atomic clocks (optical clocks based on atoms such as Al , Sr , or Yb ) are actually orders of magnitude more accurate than the clocks which are used to define the second (microwave Cs clocks). However, by measuring systematic effects we can estimate how far a given real clock is from an ideal clock. In the example I gave above, if we know the magnetic field to within 0.002 mG, then we know that the clock is less than 2 Hz from the ideal clock frequency. In practice, every clock has a whole zoo of systematic effects that must be measured and constrained to quantify the clock accuracy. And one final note. Another important clock metric which we haven't touched on here is clock stability. Clock stability is related to the fact that the measurement we use to determine if there is a frequency detuning between the microwave oscillator and the atomic transition frequency will always have some statistical uncertainty to it (different from the systematic shift I described above), meaning we can't tell with just one measurement exactly what the relative frequency between the two is. (In the absence of drifts) we can reduce this statistical uncertainty by taking more measurements, but this takes time. A discussion of clock stability is outside the scope of this question and would require a separate question.
Reference Frames
Here is a brief note about reference frames because they're mentioned in the question. Special and general relativity stipulate that time is not absolute. Changing reference frames changes the flow of time and even sometimes the perceived order of events. How do we make sense of the operation of clocks, especially precision atomic clocks, in light of these facts? Two steps. First, see this answer that convinces us we can treat the gravitational equipotential surface at sea level as an inertial frame. So if all of our clocks are in this frame there will not be any relativistic frequency shifts between those clocks. To first order, this is the assumption we can make about atomic clocks. As long as they are all within this same reference frame, we don't need to worry about it. Second, however, what if our clocks are at different elevations? The atomic clocks in Boulder, CO are over 1500 m above sea level. This means that they would have gravitational shifts relative to clocks at sea level. In fact, just like the magnetic field, these shifts constitute systematic shifts to clock frequencies which must be estimated and accounted for. That is, if your clock is sensitive (or stable) enough to measure relativistic frequency shifts then part of the job of running the clock is to estimate the elevation of the clock relative to the Earth's sea-level equipotential surface. Clocks are now so stable that we are able to measure two clocks running at different frequencies if we lift one clock up just a few cm relative to another one in the same building or room. See this popular news article . So the answer to any question about reference planes and atomic clocks is as follows. When specifying where "time" is defined we have to indicate the gravitational equipotential surface or inertial frame that we take as our reference frame.
This is conventionally the surface of the Earth. For any clocks outside of this reference (remember that the GPS system uses atomic clocks on satellites) we must measure the position and velocity of these clocks relative to the Earth reference frame so that we can estimate and correct for the relativistic shifts these clocks experience. These measurements will of course come with some uncertainty, which results in additional clock inaccuracies as per the rest of my answer.
Footnotes
*You might wonder: Why do we need an atomic clock then? Can't we just take our microwave function generator and set it to 9.192631770 GHz and use that as our clock? Well sure, you can dial in those numbers on your function generator, but what's really going to bake your noodle is "how do we know the function generator is outputting the right frequency?" The answer is we can't truly know unless we compare it to whatever the modern definition of the second is. The microwave signal is probably generated by multiplying and dividing the frequency of a mechanical oscillator such as a quartz oscillator or something which has some nominal oscillation frequency, but again, we can't truly know what the frequency of that thing is unless we compare it to the definition of the second, an atom. **I made this number up. The Cs transition which is used for Cs atomic clocks actually doesn't have a linear Zeeman shift, just a quadratic Zeeman shift, but that doesn't matter for purposes of this calculation. | {
"source": [
"https://physics.stackexchange.com/questions/574866",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/11146/"
]
} |
575,349 | A couple of days ago, I noticed that the torque unit used by my teachers is $mN$ , and while reading on the internet it came to my notice that in all textbooks the official unit is $Nm$ .
I asked one teacher about it and he insisted that I'm wrong, and while I told him that I read it on Wikipedia, he said that the sources or references used by Wikipedia aren't necessarily correct, and I think I agree with him on that. I checked my book and the only time it's mentioned is while discussing string torsion (I explained how it appears at the end of the post), but while solving problems, all our physics teachers use $mN$ . All of my teachers convince us using the idea that torque should be distinguished from energy, since their units have the same dimensions and they represent different quantities (and I agree on this one), and that torque is the cross product between the position vector and force, to further support their point that it must be $mN$ and not $Nm$ , even though the second point doesn't make any sense.
I kept looking online and eventually found the official SI units brochure published by the International Bureau of Weights and Measures, and it clearly states that the unit must be $Nm$ , and it couldn't be wrong (since it's the official units reference by definition). My problem is that my school textbooks, while not in English, write their equations in English letters and notation (also left to right), so it couldn't be a matter of translation. I'm not even trying to argue with him (because that's impossible; even if I had proof, he's too stubborn)
and in fact, he told me to look into more trustworthy references (he suggested old French textbooks/literature since he thought they were more dependable than others, even though I couldn't find any and they're probably outdated by now).
So is this choice of units purely conventional, or does it matter mathematically, or are both units correct? | Just like $2\times3=3\times2$ , there is no difference between newton-meters and meter-newtons. They're two different ways of saying the same thing. Probably your book is trying to avoid confusion when you learn about energy, which is also measured in newton-meters, although we normally rename the unit, when referring to energy, as joules . You should go ahead and call the unit of torque "meter newtons" in your class, because that's what your instructor expects. But be prepared to see other people call it "newton meters". I'd even strongly recommend using newton-meters anywhere except in your class, since the unit ${\rm m\cdot N}$ (meter-newtons) is much too easily confused with $\rm mN$ (millinewtons).
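To see that nothing mathematical is at stake, here is a tiny dimensional-analysis sketch (Python; my addition, not part of the answer) that tracks units as exponents of the SI base dimensions, so that multiplying units is just adding exponents, which commutes:

```python
# Units as exponent dictionaries over the SI base dimensions kg, m, s.
def mul(a, b):
    out = dict(a)
    for dim, power in b.items():
        out[dim] = out.get(dim, 0) + power  # adding exponents commutes
    return out

newton = {"kg": 1, "m": 1, "s": -2}  # N = kg·m/s^2
meter = {"m": 1}
joule = {"kg": 1, "m": 2, "s": -2}   # J = kg·m^2/s^2

print(mul(newton, meter) == mul(meter, newton))  # True: N·m and m·N are identical
print(mul(newton, meter) == joule)               # True: same dimensions as energy
```

The only practical trap, as the answer notes, is typographical: ${\rm m\cdot N}$ collides with the millinewton symbol $\rm mN$ . | {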
"source": [
"https://physics.stackexchange.com/questions/575349",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/257588/"
]
} |
575,354 | Question: Could an aquatic animal weighing 5,000 kg and traveling at 55 km/hr break through solid 11cm-thick ice? Context : I am writing a story and want the physics to be as accurate as possible. I'd like to describe an animal attack where a creature, approximately the size and strength of an orca, rams through the ice beneath an ice fishing hut. I imagine the creature swimming straight upwards, perpendicular to the ice. However, I will not use this as a plot device if it is physically unrealistic. Please presume the ice has a 20 cm diameter hole drilled into it. This is the hole used for ice fishing. The creature is aiming for that hole. What I tried: I know nothing of physics, but I did attempt to figure this out. I used an online impact force calculator. It suggests that the peak impact force is 76.389 kN. I just have no idea if that is enough to bust through ice from underneath. I also have no idea if this impact would be too injurious to be a decent hunting strategy, but that's another question. | | {
"source": [
"https://physics.stackexchange.com/questions/575354",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/273068/"
]
} |
575,422 | I recently saw this awesome video by Steve Mould where he explained that a sugar solution in water will turn polarized light in the clockwise direction. The explanation basically boils down to sugar molecules (glucose) having a handedness (they are chiral) and that linearly polarized light can be thought of as a superposition of circularly polarized light in opposite directions, which experience a different refractive index when interacting with the sugar solution. Now to my question: if I want to replicate this experiment at home, will regular table sugar work, or do I need pure glucose, and if that is the case, where can I get it? Many thanks! Edit 1 : I will get back with the results I get from using table sugar when I have performed the experiment. Edit 2 : I did the experiment using half water half sugar, basically simple syrup, and the result was excellent. The optical rotation was very apparent. | Chemically, table sugar is sucrose , whose molecule is basically a unit of glucose and a unit of fructose connected together. To know the expected amount of rotation of polarization for a given substance, see the table of specific rotations . In particular, for D-glucose the specific rotation is $+52.7°\,\mathrm{dm}^{-1}\,\mathrm{cm}^3\,\mathrm{g}^{-1}$ , while for D-sucrose it's $+66.37°\,\mathrm{dm}^{-1}\,\mathrm{cm}^3\,\mathrm{g}^{-1}$ , which is actually even larger than that of D-glucose. So yes, you should be able to succeed with the experiment using table sugar instead of glucose.
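For a rough idea of what to expect at home, here is a small sketch (Python; my addition) applying the standard relation $\alpha = [\alpha]\, l\, c$ . The syrup density is an assumed handbook-style value for roughly 50% w/w sucrose, so treat the result as an estimate:

```python
# Expected rotation angle: alpha = [alpha] * path_length * concentration
specific_rotation = 66.37  # deg dm^-1 cm^3 g^-1 for D-sucrose (from the answer)
rho_syrup = 1.23           # g/cm^3, assumed density of ~50% w/w sugar syrup
conc = 0.5 * rho_syrup     # g of sucrose per cm^3 of solution
path = 1.0                 # dm, i.e. a 10 cm jar

alpha = specific_rotation * path * conc
print(f"expected rotation: about {alpha:.0f} degrees")  # ~41 deg, easily visible
```

A rotation of tens of degrees over a 10 cm path is consistent with the "very apparent" effect reported in Edit 2. | {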
"source": [
"https://physics.stackexchange.com/questions/575422",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/118609/"
]
} |
575,471 | Throughout my high school classes, I have been made to learn that metals have free electrons and that's why they are able to conduct electricity. But I never understood why. Is that related to metallic bonding... Correct me if I am wrong, but even if that's the case... I am just not able to understand the concept of free electrons. | Without getting into the quantum mechanical details, here's a cartoon depiction of what's going on. The vertical axis represents energy. Like other answers have already pointed out, metals don't have actual free electrons. In the cartoon this is given by the grey region. If electrons have enough energy to be in the grey region, they're free. In individual independent atoms (gaseous state), the energy levels below a certain energy are discrete. This is depicted by the lines in the cartoon. This means the energy is fixed, rigid. The electrons in this state can't conduct electricity. In solids however, the discrete states of multiple neighbouring atoms "merge" into a continuum and create what are called bands (a numerical illustration follows below). For further details you may look at my answers here . With this, there exists a continuum of states called the conduction band where the electrons are not bound to any single atom of the solid. They are mobile . The fascinating property of these states is that it is possible for electrons to respond to an external electric field. These states are called Bloch waves . In insulators there is a big energy gap between the filled states (valence) and the empty states (conduction). So without a sufficient external field, they are unable to conduct electricity. In metals however, the energy gap is absent and thus electrons can easily go into the conduction band and respond to an external electric field. Some details The reason why mobile electrons seem like free electrons has to do with crystal symmetries. Specifically translational symmetry. In a crystal the atoms are arranged in a regular periodic manner. In the bulk (non-boundary) of the metal, if you go from one atom to another, the neighbourhood looks identical. This is known as translational symmetry. And a consequence of this is that the electrons have well defined momentum, just as a free electron does. This is encapsulated in the band structure .
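The "discrete levels merge into a band" step can be seen numerically. Below is a small sketch (Python; my addition, not from the answer) of a 1D tight-binding chain of N identical sites with nearest-neighbour hopping t: two sites give two split levels, and as N grows the levels fill the band from -2t to +2t:

```python
import numpy as np

def chain_levels(n_sites, t=1.0):
    """Eigenvalues of an n-site tight-binding chain with hopping t."""
    h = np.zeros((n_sites, n_sites))
    for i in range(n_sites - 1):
        h[i, i + 1] = h[i + 1, i] = -t  # nearest-neighbour coupling
    return np.linalg.eigvalsh(h)

for n in (1, 2, 5, 50):
    e = chain_levels(n)
    spacing = np.diff(e).mean() if n > 1 else 0.0
    print(f"N={n:3d}: levels from {e.min():+.2f}t to {e.max():+.2f}t, "
          f"mean spacing {spacing:.3f}t")
```

The mean level spacing shrinks roughly as 1/N, so for a macroscopic number of atoms the band is effectively continuous, which is what lets the electrons respond to arbitrarily small fields. | {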
"source": [
"https://physics.stackexchange.com/questions/575471",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/273129/"
]
} |
576,300 | Earth orbits the Sun because the Sun's mass curves spacetime. But the Sun is 150 million kilometers away from here; how can mass curve spacetime that it's not actually in? Is that a form of action at a distance? | The curvature of spacetime can be separated mathematically into two components, Ricci curvature and Weyl curvature . They are locally independent, but their joint variation over spacetime is constrained by mathematical relations (the second Bianchi identity). General relativity says that the Ricci curvature is determined by the local matter density (stress-energy), but there is no direct constraint on the Weyl curvature. So, in vacuum regions (Schwarzschild field, gravitational waves, etc.), the Ricci curvature is zero while the Weyl curvature can be nonzero. The physical value of the Weyl curvature is determined by the mathematical curvature relations and the boundary conditions. The Weyl curvature represents the propagating degrees of freedom of the gravitational field, which can exist without matter. This spreads the influence of gravity beyond the immediate location of matter, but does not represent action at a distance because it still acts causally (limited by the speed of light). Typically we solve for the gravitational field of the Sun as a steady state , which makes it seem like a global result that appears all at once. However, if we pose an initial-value problem containing a central mass (thus determining Ricci curvature) with a different initial configuration of Weyl curvature, the "extra" Weyl curvature would break up into gravitational waves and ultimately disperse to large distances, leaving the steady-state (Schwarzschild) solution. That is, roughly speaking, the Weyl curvature is indirectly determined by matter, as the effect of Ricci curvature on Weyl curvature propagates outward at the speed of light. | {
"source": [
"https://physics.stackexchange.com/questions/576300",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/196306/"
]
} |
576,306 | Say we have a particle in an infinitely deep well, with $V(x)=\left\{\begin{array}{ll}0 & 0 \leq x \leq L \\ \infty & \text { elsewhere }\end{array}\right.$ . The energies corresponding to the various states are given as $E_n=\frac{n^{2} \pi^{2} \hbar^{2}}{2 m L^{2}}$ . This means that the particle can have different energies upon different measurements.
But this goes against the rule that the total energy of a system remains constant, because if I measure the energy now it's something and later on it may be something else, violating conservation of energy!
Where am I wrong? | | {
"source": [
"https://physics.stackexchange.com/questions/576306",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/256066/"
]
} |
576,325 | In Kleppner's Mechanics, there is a problem given as: A rope of mass $M$ and length $l$ lies on a frictionless table, with
a short portion, $l_0$ , hanging through a hole. Initially the rope is at rest. Find the general equation for the length of rope hanging through the hole. In the solution, the problem is solved by using the momentum equation, given as: suppose at time $t$ a length $x$ of rope is hanging. Momentum at time $t$ : $P_t = Mv$ . Momentum at time $t+dt$ : $P(t+dt) = M(v+dv)$ . Rate of change of momentum: $M\,dv/dt$ . Setting $dp/dt$ equal to the force on the rope gives $M\,dv/dt = Mxg/l$ . Then we can solve for the expression for $x$ . The question is that while the rope is hanging from the table, the hanging part moves with velocity $v$ in the downward direction, the part which rests on the table moves with velocity $v$ in the horizontal direction, and the force of the weight of the hanging part acts in the downward direction. So how can we write the momentum of the rope as $Mv$ and $M(v+dv)$ ? Shouldn't the momentum involve separate x and y velocity components? How do we write the initial and final momentum of the rope in vector notation? How can we write the momentum of the whole rope using a single velocity with only a y-component (not including the x-component) and equate the change of momentum to the downward force of the weight? Please explain. | | {
"source": [
"https://physics.stackexchange.com/questions/576325",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/259524/"
]
} |
576,396 | Let's assume there is an astronaut with a very long rope trailing behind him. As he approaches a very large black hole, he can look back and see the rope behind him trailing off into the distance. What would he see after he crosses the event horizon and looks back along the rope while a portion of the rope is still outside the event horizon? | Dale's answer is correct, but I want to further emphasize that nothing special happens in the vicinity of an event horizon. It's just like any other region of spacetime. Here's an analogy. Suppose you're in a building that's rigged to explode at a certain time. If you're in the building and too far from an exit at a late enough time, you won't be able to escape before the explosion even at your top speed. If it's a single-story, square building and you can exit at any point on the edge, then the region from which you won't be able to escape at a given time is square. It starts in the center of the building and expands outward at your maximum running speed. The boundary of that region is the "escape horizon". If you don't escape and die in the explosion, then the escape horizon will necessarily sweep over you at some point before your death. When it passes you, nothing special happens. You don't notice it passing. You can't detect it in any way. It isn't really there. It's just an abstract concept that we defined based on our foreknowledge of the future. The event horizon of a black hole is defined in the same way, with a singularity in place of the explosion and the speed of light in place of your running speed. If your worldline ends at the singularity, then the event horizon will sweep over you, at the speed of light, at some earlier time. But you won't notice. You can't detect it in any way. It isn't really there. People get confused about this because there's phenomenology associated with black hole horizons: the closer you get to them, the faster you have to accelerate to avoid falling through, the slower your clock runs, the hotter you get from the Hawking radiation, and so on. They also behave like electrical conductors for some purposes , though it's not mentioned as often. The thing is, if you mispredict where the singularity is going to be, and try to escape from what you think is the horizon but actually isn't, all of those same things happen. Any event horizon defined by any future spacetime points, whether singular or not, has these properties, even in special relativity. (See Rindler coordinates and Unruh effect for more about the special-relativistic case.) So the answer to any question about what you'd see while falling through an event horizon is always the same as if the event horizon was somewhere else. | {
"source": [
"https://physics.stackexchange.com/questions/576396",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/261150/"
]
} |
576,645 | Why is it necessary to burn the hydrogen fuel coming out of the engine for the lift of rockets? If it is done to create a greater reaction force on the rocket, then why can't we get the same lift by just adjusting the speed of the hydrogen gas going out of the engine, like releasing it at a great pressure (and also by adjusting the size of the nozzle opening) and thus at a greater speed? Is it possible for rockets to fly without burning the fuel and just releasing the fuel with a great force? (I know the rockets are too massive.) How does the $I_{sp}$ of ordinary rocket engines compare with the one in my question? Most of the answers have done the comparison (and a great thanks for that), but help me with the numerical difference in the $I_{sp}$ 's. (Compare it using any desired values of the amount of fuel and other required things for taking off.) | Why is it necessary to burn the hydrogen fuel coming out of the engine for the lift of the rockets? Hydrogen isn't the only fuel possible, so I presume your question is more general: why is any fuel burned? If it is done to create a greater reaction force on the rocket then why can't we do the same lift with just adjusting the speed of the hydrogen gas going out of the engine like we can release them at a great pressure and thus at a greater speed? You need two things for a rocket: a reaction mass to expel, and a source of power to accelerate it. Combustion rockets combine these two into a single source. The fuel/oxidizer burns, generating energy. The energy from combustion heats and then, via the nozzle configuration, accelerates the combustion products as the reaction mass. Just about anything could be put onboard as the reaction mass, but getting the power to accelerate it is much harder. Batteries and compressed gas hold a bit of energy, but the density is much lower than rocket fuels. Solar panels can gather a nearly unlimited amount of energy, but you have to wait for a long time to collect it. Nuclear fuels could release a lot of power, but putting a nuclear reactor on a rocket takes a lot of mass, and it is difficult to convince everyone that it can be done safely. Even if you had sufficient electrical power, converting it into thrust isn't simple. Ion engines can be used, but they have orders of magnitude less thrust than a chemical rocket. The acceleration can be useful in space, but is too small to help lift a rocket off the surface of the earth. So the fuel is burned because it can be stored on the rocket with a fairly high energy density, and the reaction can take place at a high rate, giving large amounts of thrust.
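Since the question asks for numbers, here is a back-of-envelope sketch (Python; my addition, using typical textbook values rather than figures from the answer) comparing a cold-gas thruster, which just releases pressurized gas, with a hydrogen/oxygen engine via the Tsiolkovsky rocket equation:

```python
import math

g0 = 9.81  # m/s^2, standard gravity

# Representative specific impulses (typical published ballpark values):
engines = {
    "cold gas (N2, unburned, just released under pressure)": 70,  # s
    "LH2/LOX combustion (vacuum-class engine)": 450,              # s
}

mass_ratio = 10.0  # initial mass / dry mass, same propellant fraction for both

for name, isp in engines.items():
    v_e = isp * g0                   # effective exhaust velocity
    dv = v_e * math.log(mass_ratio)  # Tsiolkovsky: dv = v_e * ln(m0/mf)
    print(f"{name}: Isp ~ {isp} s, v_e ~ {v_e:.0f} m/s, delta-v ~ {dv / 1000:.1f} km/s")
```

With the same propellant fraction, burning the propellant buys roughly six to seven times the delta-v (about 10 km/s versus about 1.6 km/s here); the unburned cold-gas rocket cannot come close to the roughly 9.4 km/s needed for orbit. | {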
"source": [
"https://physics.stackexchange.com/questions/576645",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/271783/"
]
} |
576,651 | As per the energy band diagram of metals, the conduction band and valence band overlap. But if we assume there is overlapping, then the total number of energy bands will be reduced, and hence electrons will have a smaller number of energy states to occupy, which will violate the Pauli exclusion principle. For example, suppose there are N states in the valence band, M states in the conduction band, and M+N electrons in total. Assume 10% overlapping of states; we will then have 0.9M+N states remaining. But since the total number of electrons is still M+N even after overlapping, electrons will have to occupy the same energy state, i.e. more than 1 electron will occupy the same energy state. But this is not allowed by the Pauli exclusion principle... My question is: is overlapping of energy bands allowed? If allowed, then where am I wrong? | | {
"source": [
"https://physics.stackexchange.com/questions/576651",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/273356/"
]
} |
576,791 | According to Wikipedia, the energy released in a TNT explosion is $4 \times 10^6$ J/kg. https://en.wikipedia.org/wiki/TNT According to the web, combustion of coal is around $24 \times 10^6$ J/kg. https://www.world-nuclear.org/information-library/facts-and-figures/heat-values-of-various-fuels.aspx This looks rather counter-intuitive: TNT is famous for the explosion, thus I would expect that it releases a lot of energy, but actually, it seems much smaller than coal combustion... How is it possible that combustion of coal releases energy similar to a TNT explosion while intuitively we would not expect that? | There are some marked differences that make $\text{TNT}$ far more suitable than the combustion of coal for explosive purposes. Firstly, the decomposition reaction of $\text{TNT}$ : $$2 \text{C}_7\text{H}_5\text{N}_3\text{O}_6 \to 3 \text{N}_2 + 5 \text{H}_2 + 12 \text{CO} + 2 \text{C}$$ proceeds far faster than the combustion reaction of coal: $$\text{C}+\text{O}_2 \to \text{CO}_2$$ Secondly, the decomposition of $\text{TNT}$ produces far more gaseous reaction products than the combustion of coal: respectively $10\text{ mol}$ of gas per $\text{mol}$ of $\text{TNT}$ versus $1\text{ mol}$ of gas per $\text{mol}$ of coal (and the latter requires $1\text{ mol}$ of $\text{O}_2$ for the combustion to take place). It's the production of gaseous reaction/decomposition products that makes a good explosive: the super-fast build-up of gas inside the shell makes the pressure increase until the shell bursts, releasing all its energy at once. | {
"source": [
"https://physics.stackexchange.com/questions/576791",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/230627/"
]
} |
577,022 | What do black holes spin relative to? In other words, what is black hole spin measured in relation to? Spinning black holes are different from non-spinning black holes. For instance, they have smaller event horizons. But what are the spins of black holes measured relative to? Let's first look at an example with common objects. Example Let's say there's a disc on a table that rotates at 60 rpm. When you are standing still it spins at 60 rpm. But if you start running around it, it will move faster or slower relative to you. In this case, the disc has a ground speed, 60 rpm, because it has something to spin in relation to, in this case, the table. Black Holes Now, let's say that there is a spinning black hole. Because there is no control for the black hole to spin relative to, its spin must be relative to an object, for example, you. If you stand still, it spins at a constant rate. But if you start moving around the black hole in the same direction as the rotation, according to Newtonian physics, the black hole would spin at a slower rate relative to you. Since a faster spinning black hole has a smaller event horizon, in the first case, there would be a smaller event horizon. Then how do scientists say that there are spinning and non-spinning black holes? Is that just in relation to Earth? Ideas First Idea My first idea is also one that is more intuitive. When I move around the black hole, the black hole spins slower relative to me and consequently has a larger event horizon. In this idea, the black hole would just behave like a normal object. This would mean that if you went really fast around a black hole, you could get a lot closer to the black hole than if you were standing still. This is kind of like a satellite that orbits Earth. The slower it moves, the easier it is to fall to the Earth. (I know this is a horrible analogy.) Nothing special happens here. Second Idea My second idea is that when you move faster around the black hole, the relative rotational speed of the black hole doesn't change. Because of how fast it is/how dense it is and special relativity, moving around the black hole doesn't affect its speed. This is like trying to accelerate past the speed of light. No matter how much energy you spend, your speed barely changes. I don't understand how this one would work. Why won't the rotational speed of the black hole stay the same? Conclusion What do black holes spin relative to? And what happens if you move around it? There are lots of questions that ask how black holes spin, or how fast they spin, but as far as I know, none of them address this question. | But if you start running around it, it will move faster or slower relative to you.
In this case, the disc has a ground speed, 60 rpm, because it has something to spin in relation to, in this case, the table. Actually, this is fundamentally incorrect. The spinning of the disk has nothing to do with the table in principle. Acceleration, including spinning, is not relative. It can be measured without reference to any external object. For example, using a ring interferometer, or a gyroscope. It does not matter if the object is a disk or a black hole or anything else, spinning is not relative like inertial motion is. When I move around the black hole, the black hole spins slower relative to me, and consequently has a larger event horizon. The event horizon is a global and invariant feature of the spacetime. Your motion does not change it. Of course, you can use whatever coordinates you like and make the coordinate size change as you wish. However, which events are on the event horizon is unchanged by your motion. | {
"source": [
"https://physics.stackexchange.com/questions/577022",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/184767/"
]
} |
577,411 | Problem: I am locked in a room with no windows and I need to tell if the room is moving; I only have a lamp.
According to my professor, I cannot tell if the room is moving, because of the theory of special relativity. My thought: what if the room moves at a speed very close to the speed of light, and I can see that the beam of light takes more time to reach the wall in the direction the room is moving? The speed of light is always the same no matter the initial frame of reference.
Can I tell the room is moving? I guess my idea does not work because of Galilean relativity, but I still don't understand. | You assume that the room is moving close to the speed of light with respect to some absolute frame of reference (this would be the Galilean frame), and thus that the light would appear to you to move slower in one direction than the other. This is the assumption that was proved false by the Michelson-Morley experiment . That experiment was supposed to dot one of the last 'i's of physics, so that science could close the book on it before the close of the 19th century, and move on to more interesting things. Instead, it absolutely broke the physics of the time. It utterly disproved the existence of a Galilean frame of reference, as well as the luminiferous aether. To you, light always travels at the same speed. You are always at rest relative to yourself -- it's only relative to other things in the universe that you're moving. So the whole point of Special Relativity is that if the speed of light is constant, then time and space must be variable. So if you and your room are zipping by me at close to the speed of light, it looks to me like your room is foreshortened in the direction of our relative travel, and like your clock is slow -- and it looks to you like I am foreshortened, and that my clock is slow. All the other stuff in Special Relativity is there to make the math work out, and make it consistent with physical measurement.
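To put a number on the asker's intuition, here is a small sketch (Python; my addition) of the pre-relativistic prediction for light bouncing between the walls of a room of length L drifting at speed v through a supposed absolute frame; this is exactly the asymmetry the Michelson-Morley experiment failed to find:

```python
c = 299_792_458.0  # m/s
L = 10.0           # m, room length
v = 30_000.0       # m/s, assumed drift speed (Earth's orbital speed, for scale)

# Classical (aether-theory) round-trip times for a light pulse:
t_parallel = L / (c - v) + L / (c + v)          # bouncing along the motion
t_perpendicular = 2 * L / (c**2 - v**2) ** 0.5  # bouncing across the motion

print(f"along motion:  {t_parallel:.18f} s")
print(f"across motion: {t_perpendicular:.18f} s")
print(f"classical difference: {t_parallel - t_perpendicular:.2e} s")
# ~3e-16 s: tiny, but within interferometric reach -- and not observed.
```

Relativity removes the difference entirely for the observer inside the room; only an outside observer sees the room length-contracted and its clock slowed, as described above. | {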
"source": [
"https://physics.stackexchange.com/questions/577411",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
577,653 | I think in some media, light can be significantly slowed down; but even if only slightly, where would the momentum go when the light slows down, and where does it get the extra momentum when it leaves that medium? An example is water. | This question is a very long-standing one, and is sometimes known as the Abraham-Minkowski controversy. Both Abraham and Minkowski derived expressions for the energy-momentum tensor of electromagnetic waves in matter. Each author’s tensor is based on sound theoretical arguments. Unfortunately, they disagree. Abraham’s tensor shows that the momentum decreases, as you suggest, but Minkowski’s actually shows that it increases in matter. These two views were resolved in this paper: https://arxiv.org/abs/0710.0461 where it is shown that the key is to pay attention to the corresponding energy momentum tensor of the matter also. The sum of the EM and the matter tensor is the same for both. Any extra momentum comes from the matter and any missing momentum goes to the matter. Also, all experimental predictions are identical for both tensors. So the choice of how to partition the total momentum into an EM and a matter tensor is arbitrary. An EM wave propagating through matter cannot be uniquely identified or separated from the matter through which it propagates.
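For scale, here is a quick sketch (Python; my addition, not from the paper) of the two competing single-photon momentum assignments the controversy is about, the commonly quoted forms $p_{Mink} = n\hbar\omega/c$ and $p_{Abr} = \hbar\omega/(nc)$ , for green light in water:

```python
h = 6.62607015e-34  # J s, Planck constant
lam = 532e-9        # m, vacuum wavelength of a green laser
n = 1.33            # refractive index of water

p_vacuum = h / lam           # photon momentum in vacuum
p_minkowski = n * p_vacuum   # momentum increases inside the medium
p_abraham = p_vacuum / n     # momentum decreases inside the medium

for name, p in [("vacuum", p_vacuum), ("Minkowski", p_minkowski), ("Abraham", p_abraham)]:
    print(f"{name:9s}: {p:.3e} kg m/s")
# Per the resolution above, the difference is carried by the water itself;
# the total field-plus-matter momentum is the same in both pictures.
```
| {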
"source": [
"https://physics.stackexchange.com/questions/577653",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/208473/"
]
} |
577,662 | I was told that if the net external moment of some forces is zero about a point, then the net external force passes through the point. I know it's not true in general; what was the special condition imposed, which I forgot to include, to make it true? Also, is there anything close to the statement we can say? Some of my friends guess that if the additional condition that the net external force on the body is zero were also imposed, the statement might become true, while some say that the forces were meant to be coplanar. I do not understand why that would be true. It will be a great help if someone could provide some insight into what exactly is happening here. Please ask for clarifications if something is not clear. | | {
"source": [
"https://physics.stackexchange.com/questions/577662",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/257380/"
]
} |
577,954 | This is a question I’ve been thinking about for a while. If I’m standing in the middle of a straight road at night, I can see a car coming towards me because of its lights, even if it is kilometers away. However, the driver cannot see me, because the car brightens the road only a few hundred meters ahead. What physical properties of light cause this phenomenon? | Mainly because the driver sees the much brighter road right in front of the car, which is reflecting a greater portion of the light from the headlights. The light reflected from kilometers away is much less intense. The driver's eyes are not sensitized to the less intense light from farther away when they are mainly seeing the more intensely lit road in front of them. There is also the fact that the light you see from the headlights is much more intense than the light that the driver sees reflected off of you. This is due to the doubled distance the reflected light travels, with intensity falling off according to the inverse-square law https://en.wikipedia.org/wiki/Inverse-square_law#Light_and_other_electromagnetic_radiation , and because part of the light that reaches you is absorbed and part of it is not reflected directly back to the driver.
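To make the asymmetry concrete, here is a rough sketch (Python; my addition, with an assumed Lambertian pedestrian and made-up albedo and area) comparing the one-trip flux the pedestrian receives with the round-trip flux that comes back to the driver:

```python
import math

d = 1000.0    # m, distance between car and pedestrian
area = 0.5    # m^2, assumed illuminated cross-section of the pedestrian
albedo = 0.3  # assumed fraction of light diffusely reflected (lower for dark clothing)

# Relative to the flux arriving at the pedestrian (one 1/d^2 trip), the light
# scattered back to the driver makes a second 1/d^2 trip:
ratio = albedo * area / (math.pi * d**2)  # Lambertian back-scatter, order of magnitude

print(f"flux back at driver / flux at pedestrian ~ {ratio:.1e}")
# ~5e-8 at 1 km: the pedestrian stares into the headlight directly,
# while the driver gets back only a vanishing round-trip fraction.
```
| {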
"source": [
"https://physics.stackexchange.com/questions/577954",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/274030/"
]
} |
578,146 | I see a lot of images, including one in my textbook, like this one, where at the ends of a uniform field, field lines curve. However, I know that field lines are perpendicular to the surface. The only case I see them curving is when drawing field lines to connect two points which aren't collinear (like with charged sphere or opposite charges) and each point of the rod is collinear to its opposite pair, so why are they curved here? | I have taken your image and created a few additional field lines at one end of the plates in the first diagram below. When you come to the ends of the plates, the field starts to resemble that associated with two point charges instead of a sheet of charge. The second diagram below shows the field lines between two point charges. Note that as you move away from the two point charges an equal distance apart, the lines look like those at the ends of your parallel plate capacitor (curved lines). Towards the center between the charges, the field lines start to look straight and evenly spaced (parallel lines). Hope this helps. | {
"source": [
"https://physics.stackexchange.com/questions/578146",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/269924/"
]
} |
578,153 | I'm quite confused. If it's a state function, it is dependent on the properties of the state, so after we loop through a cycle we return to the same point, and hence evaluating the entropy at that state and subtracting the original entropy at the start should total zero. However, the Clausius inequality states that it's less than or equal to zero... Does this mean that entropy is not a state function for irreversible cyclic processes? | | {
"source": [
"https://physics.stackexchange.com/questions/578153",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/236734/"
]
} |
578,229 | Part of the definition of a virtual image is that it cannot be formed on a screen. I understand this is the case when the screen is right next to the image, since there are no physical rays that can hit the screen. But what I don't understand is why an image can't form on the screen if the screen is located sufficiently far away from the image and/or lens so that rays do physically hit the screen? The 'explanation' usually given is that real rays converge while virtual rays don't, but how is the screen supposed to know if the rays it's seeing actually converged at some point or not? The only apparent difference compared to real rays I can see is that rays for virtual images would have greater angular divergence, which would create an image on the screen, just blurry. | There seems to be some fundamental confusion here. An image is formed on a screen when light rays emanating from an object converge there. If there is no convergence of rays, then there is no image on a screen. Think about a portrait positioned on the left side of the lens. The light emanating from a point on the tip of the nose focuses to the (single) corresponding point on the image. The same is true of all the neighboring points, so there is a one-to-one correspondence between points on the image and points on the object, and the image is clear. On the other hand, if you position a screen at a different location, then the light emanating from the tip of the portrait's nose will be spread over a whole region of the screen. The light from the neighboring points on the object will overlap, and the result will be a blurred image. The conclusion is that the calculated image distance is where you will get a clear image; if you put your screen anywhere else, then an image will not form. Now consider what you'd get with a diverging lens. The blue dotted lines are obtained by tracing the rays on the right hand side backward and pretending the lens wasn't there. The virtual image is the location from which the rays appear to be emanating from the perspective of somebody on the right-hand side of the lens. However, there are no actual light rays which converge there. If you place a screen at the location of the virtual image, can you see why you don't get a nice picture?
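A quick thin-lens sketch (Python; my addition, using the Cartesian sign convention with distances positive to the right of the lens) shows the diverging-lens case: the computed image sits on the same side as the object, where no actual rays ever converge:

```python
def thin_lens_image(s_obj, f):
    """Cartesian convention: real object at s_obj < 0; 1/s_img - 1/s_obj = 1/f."""
    s_img = 1.0 / (1.0 / f + 1.0 / s_obj)
    magnification = s_img / s_obj
    kind = "real: rays converge there" if s_img > 0 else \
           "virtual: rays only appear to come from there"
    return s_img, magnification, kind

# Diverging lens with f = -10 cm and a real object 20 cm to its left:
s_img, m, kind = thin_lens_image(s_obj=-20.0, f=-10.0)
print(f"image at {s_img:.2f} cm, magnification {m:.2f} ({kind})")
# image at -6.67 cm, magnification 0.33 (virtual)
```

A screen placed at -6.67 cm would simply block the incoming light, while a screen on the right just catches the still-diverging bundle, giving the blur the question describes. | {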
"source": [
"https://physics.stackexchange.com/questions/578229",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/114852/"
]
} |
578,282 | If a uranium atomic bomb directly hit a stockpile of weapons grade uranium, would the chain reaction also detonate the stockpile? what about a stockpile of nuclear reactor fuel rods? what about a stockpile of various nuclear weapons? what about a plutonium bomb or a hydrogen bomb? what about all possible permutations of these? | To supplement niels's answer , the hardest part about a nuclear bomb is to prevent it from blowing itself apart before it has completed fission (or fusion). If a nuclear bomb does detonate, any nearby potential fission/fusion fuel will simply be blown away and not fizz. To understand why we need to understand how nuclear bombs work. An explosion is a rapid increase in volume and energy causing high pressure, usually due to gas rapidly expanding. A bomb is basically a device which uses extreme heat to vaporize material turning it into a rapidly expanding gas. If this expansion causes a supersonic shock wave we call it a detonation . High explosives are explosives which detonate. Normal explosives happen via combustion , the technical term for burning. Once sufficient energy is applied the chemicals in the explosive combine with oxygen. This reaction is exothermic meaning it gives off energy. That energy then starts the nearby material reacting with oxygen. Once one bit starts burning it can provide energy to the next bit and the next bit and so on to make a chain reaction. This is why a burning fuse works; so long as there is combustible material, oxygen, and energy to start the reaction, any amount of material will burn. Nuclear fission works very differently. When fissile material is bombarded with neutrons, some of them will smack into a nucleus and break it apart. This reaction is very exothermic, and it also produces more neutrons which then fly off and break apart more nuclei producing more energy. However, the density of neutrons required to sustain a reaction is very high, so the fissile material must be kept packed together. The point at which the reaction is producing enough energy and neutrons to sustain itself is the critical mass . Nuclear reactors must sit between too dense and not dense enough while using the waste heat to produce electricity. Various safety mechanisms manage this. Not dense enough and the reaction cannot sustain itself and it fizzles out. Too dense and the reaction runs away, it goes supercritical, and you get a nuclear meltdown ... or a bomb. A nuclear fission bomb is, basically, a deliberate nuclear meltdown. It's a way to very precisely smash hunks of sub-critical fissile material together to make a critical mass; a prompt criticality . This must happen very precisely because as soon as some fission starts energy will be produced which will vaporize material raising pressure rapidly. This can shove the fissile pieces apart shutting down the reaction before much fission has happened. The most basic fission bomb is a gun-type which literally shoots a sub-critical pellet of uranium into a sub-critical cylinder of uranium making a critical mass. But it's also very inefficient since as soon as fission starts the uranium blows itself apart stopping the fission. Most of the uranium is never used. Little Boy was a gun-type bomb. A more efficient, and much more complex, fission bomb is the implosion type . This uses very, very, very precise conventional explosives arranged around many pieces of sub-critical material to simultaneously crush them into a sphere. 
The force and precision of the conventional explosion holds the fissile material together in a super-critical state for as long as possible to fizz as much material as possible, making them very efficient. Fat Man was an implosion-type bomb. Fusion bombs work on basically the same principle: you need to squeeze the material together very quickly and very precisely before it blows itself apart. They do this with a fission bomb. A fusion ("hydrogen") bomb is basically a conventional bomb that sets off a fission bomb which sets off a fusion bomb. Now we can see why a nuclear bomb will not set off a nearby stockpile of nuclear material or bombs. An insignificant fraction will fizz because of the particle bombardment, but the blast will simply blow the extra material away before it can achieve critical mass. If you were to take two pieces of sub-critical fissile material and smack them together with your hands to make a critical mass, it would be very bad for you and everyone nearby, but it would not cause a nuclear blast. As Wikipedia dryly puts it... The prompt-critical excursion is characterized by a power history with an initial prompt-critical spike as previously noted, which either self-terminates or continues with a tail region that decreases over an extended period of time. In layman's terms: the two halves would blow themselves apart. This is known as a "criticality accident" or "critical power excursion". This happened at Los Alamos twice when experimenting with the "Demon Core" in terrifyingly unsafe manners; though in both cases the scientists shut down the reactions before they blew themselves apart. And this is why nuclear weapons are considered "safe". Unlike conventional explosives which can be detonated by a simple fire, a nuclear bomb must work perfectly to go off. This is why they are often referred to as a "device". Any damage to the bomb makes it safer. Safety mechanisms effectively remove critical parts of the device; like how one can ensure a car will not start by pulling out the carburetor or the fuel pump. Arming a nuclear bomb basically finishes putting the device together. The worst that is likely to happen to an unarmed nuclear device is the conventional explosives will detonate scattering fissile material into the environment. While that's indeed very bad it's much better than a nuclear explosion. We're very sure of this because it has happened a very distressing number of times . | {
"source": [
"https://physics.stackexchange.com/questions/578282",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/133736/"
]
} |
578,819 | Or is there a bias toward a specific angle with regard to the direction of the current? | These types of questions are tempting to ask in a yes-no way, and you currently have an answer that says "yes" and an answer that says "no."
The physicist's approach is to ask how big the biggest effect might be; let's try that. The current answer that proposes an aligning effect suggests an interaction between the electric dipole moment of the water molecule, $p \approx 6\times10^{-30}\,\mathrm{C\,m} \approx 0.4\,e\,Å$, and the Earth's magnetic field, via the Lorentz force, $\vec F = q \vec v \times \vec B$. The energy associated with this interaction is what you get if the force acts over the length scale of the molecule, which has a bond length of about $1\,Å$. The typical thermal velocities obey $kT \approx mv^2$, or \begin{align}
v^2 \sim \frac{kT}{m} = \frac{25\rm\,meV}{18\,\mathrm{GeV}/c^2}
&\approx \frac 43\times10^{-12}\ c^2
\\
v &\sim 10^{-6}\ c \approx 300 \rm\,m/s
\end{align} So a typical Lorentz-force polarization energy would be \begin{align}
U &\approx | p v B |
%\\ &= 6\times10^{-30} \mathrm{C\,m}\cdot 3\times10^2\mathrm{m/s} \cdot \frac12\times10^{-4}\mathrm T
%\\ &\approx 9\times10^{-32}\,\mathrm J
%\times\frac{1\rm\,eV}{1.6\times10^{-19}\rm\,J}
\\ &\approx 6\times10^{-13} \rm\,eV
\approx 600 \rm\,feV
\end{align} Those are femto-eV.
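If you want to re-run this estimate numerically, here is a minimal Python sketch; the inputs are the same rough values used above (room temperature, a field of about half a gauss), nothing more precise:

```python
import math

k_B = 1.381e-23     # Boltzmann constant, J/K
eV  = 1.602e-19     # joules per electron-volt

p = 6e-30           # water dipole moment, C*m
B = 0.5e-4          # Earth's magnetic field, T
m = 18 * 1.66e-27   # mass of a water molecule, kg
T = 300             # room temperature, K

v = math.sqrt(k_B * T / m)   # thermal speed scale, ~370 m/s
U = p * v * B                # Lorentz-force polarization energy, J

print(f"v  ~ {v:.0f} m/s")
print(f"U  ~ {U / eV:.1e} eV")        # ~7e-13 eV: hundreds of femto-eV
print(f"kT ~ {k_B * T / eV:.3f} eV")  # ~0.026 eV, the milli-eV scale
```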
But the water molecule's rotational degree of freedom also has milli-eV energy sloshing around. The ratio of the aligned and un-aligned populations will go like the Boltzmann factor for this energy difference, $$
e^{\Delta E/kT} \approx 1 + 2\times10^{-11},
$$ that is, a few-tens-of-parts-per-trillion difference. I've been involved in several experiments looking for part-per-billion asymmetries; each one took ten years. A few tens of parts per trillion is a small effect, even if you go back through my arithmetic and futz around with some missing factors of two. What's more, the preferred direction $\vec v \times \vec B$ is only well-defined if most of the water molecules are moving in generally the same direction. That only happens if the rate of flow is much faster than the typical thermal velocity, which doesn't really happen unless the flow approaches the speed of sound in water. If you tried to enhance the effect (by, say, shooting a hypersonic jet of water through the bore of a ten-tesla magnet, to bring the asymmetry up to the part-per-million range), you'd probably just learn something sneaky about hydrogen bonding. | {
"source": [
"https://physics.stackexchange.com/questions/578819",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/230132/"
]
} |
579,440 | Consider a box floating on water with a coin on top. Now suppose that after some time, by some external influence, the coin is dropped into the water. After doing the calculations, to my surprise, I found that the water level actually drops... I just can't understand this phenomenon. For what reason does this happen? I think that my whole conception of buoyancy is wrong after this, because intuitively I think the water level rises, since the coin would both apply its weight and take up some volume while in the water. Notes for future answers/existing answers: After some deep thinking, I started to realize the problem was that I had this intuition that the water would have been compressed by the weight somehow. How exactly does that effect vary in the scenario displayed above? After even more deep thinking, how does the behaviour of a fluid change as we relax/impose the condition of compressibility? Specifically, how would the results in this case differ? (A diagram would be nice.) Suppose we constantly applied a force onto the surface of the water, like a pressure; by the logic given in most answers, the water level should rise! But intuitively it is often said that the water level drops as we apply more pressure on the face open to the atmosphere. My question is different from that post because my confusion is mostly about the actual structure of the fluid and the response of fluids to load, rather than the directly calculated volume changes. | When submerged, the coin displaces as much water as it has volume (logical). When floating on the box, the coin displaces as much water as corresponds to its weight. As metal has a higher density than water, it means that the coin in the box displaces more water than when the coin is submerged.
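A tiny worked example makes this concrete (all numbers are assumed for illustration, not taken from the question):

```python
rho_water = 1.0   # g/cm^3
rho_coin  = 8.9   # g/cm^3, assumed copper-like coin
m_coin    = 10.0  # g, assumed example mass

displaced_floating  = m_coin / rho_water  # 10.0 cm^3: enough water to match its weight
displaced_submerged = m_coin / rho_coin   # ~1.1 cm^3: just the coin's own volume

print(displaced_floating, displaced_submerged)
```

Either way you run the numbers, the coin displaces roughly nine times more water while riding on the box than while sitting on the bottom, so the level drops when it goes in. | {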
"source": [
"https://physics.stackexchange.com/questions/579440",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/236734/"
]
} |
579,452 | In Dirac's Principles of QM the following is stated: $$
\langle x | A + B | x \rangle = \langle x | A | x \rangle + \langle x | B | x \rangle
$$ but $$
\langle x | AB | x \rangle \ne \langle x | A | x \rangle \langle x | B | x \rangle
$$ and so $\langle x|A|x \rangle$ is not an exact value but the average value of the observable $A$; otherwise, in the second relation, both sides would have to be equal.
I don't understand the second relation. Shouldn't both sides be equal, like this: $\langle x|AB|x \rangle = \langle x|A(B|x \rangle) = b \langle x|A|x \rangle = ba \langle x|x \rangle = ba = \langle x|A|x \rangle \langle x|B|x \rangle$, where $a$ and $b$ are the corresponding eigenvalues? What is wrong here? Edit: It is embarrassing. I indeed was thinking $|x\rangle$ was an eigenvector of both $A$ and $B$, out of sleep deprivation I suppose, which I only realised this morning. So anyway I am going to ask a moderator to delete this question. | {
"source": [
"https://physics.stackexchange.com/questions/579452",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/135125/"
]
} |
579,525 | Photons are energy. According to general relativity, they should bend space.
Assuming two photons pass one another in a large void of empty space, how would they gravitationally affect each other, exactly? Would there be a change in their path, a change in color, both, neither, or something entirely different? | One can quantize linearized spacetime perturbations in General Relativity and compute the effect of photons scattering elastically by exchanging virtual gravitons. This theory isn't consistent at Planck-scale photon energies but is believed to be fine at the energies of photons we observe... even very high-energy gamma rays. All the energy coming in has to come out. In the center-of-momentum frame the two photons each enter with energy $E$ and exit with energy $E$. Thus in this frame there is no change in their frequency ("color"). Their direction does change (but the effect is tiny). There is a probability of scattering through different angles, and this is described as usual by a differential cross-section $d\sigma/d\Omega$ which depends on the scattering angle $\theta$. The details of the calculation are in this 1967 paper: Gravitational Scattering of Light by Light. The differential cross section for unpolarized photons found in this paper (and then corrected in an erratum) is $$\frac{d\sigma}{d\Omega}=\frac{32G^2E^2}{c^8\sin^4{\theta}}\left(1+\cos^{16}{\frac{\theta}{2}}+\sin^{16}{\frac{\theta}{2}}\right).$$ As you can guess, $G$ is Newton's gravitational constant and $c$ is the speed of light. Try computing the area $G^2E^2/c^8$ for a visible photon (or a gamma-ray photon) to see how tiny and unmeasurable this scattering effect is!
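Taking up that invitation, here is a quick order-of-magnitude sketch in Python (the two photon energies are assumed, typical choices):

```python
G  = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c  = 2.998e8     # speed of light, m/s
eV = 1.602e-19   # joules per electron-volt

for label, E in (("visible photon, ~2 eV", 2 * eV),
                 ("gamma ray, ~1 GeV", 1e9 * eV)):
    area = G**2 * E**2 / c**8   # the cross-section scale in the formula above
    print(f"{label}: G^2 E^2 / c^8 ~ {area:.1e} m^2")
```

This gives about $7\times10^{-126}\rm\,m^2$ for visible light and $2\times10^{-108}\rm\,m^2$ for a GeV gamma ray; for comparison, a proton presents an area of order $10^{-30}\rm\,m^2$. | {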
"source": [
"https://physics.stackexchange.com/questions/579525",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/245323/"
]
} |
579,737 | The only things I have read about so far in astrophysics are either black holes, developing black holes, or not black holes at all. So I am wondering: is it physically possible to have an object that is almost a black hole, but not a black hole? What I mean by that is an object that would have a gravitational pull almost as strong as a black hole's, but not equal, so light would be bent and considerably slowed down, among other effects, yet still able to escape. I am not a physicist, so I use my own words. The point of my question, if this helps, is to know if we can/could observe and study such objects as intermediaries between non-black-holes and black holes, with their own properties. Again, this is NOT about the formation of black holes. So maybe such an intermediary object is impossible because things are binary (as if the process of black hole formation, once started, would not stop). Also, I know there are massive objects that are not black holes, for example neutron stars, but they do not seem to have "almost black hole" radiation. | The conformal limit For simplicity, consider non-rotating compact objects. A non-rotating object with mass $M$ becomes a black hole when its radius $R$ is $$
R = 2\frac{GM}{c^2}
\tag{1}
$$ where $G$ is Newton's gravitational constant and $c$ is the speed of light. Equation (1) is the Schwarzschild radius. According to ref 1, in order to avoid becoming a black hole, the radius of a compact object must be $$
R\gtrsim 2.83 \frac{GM}{c^2}.
\tag{2}
$$ Equation (2) is the conformal limit (ref 4), sometimes also called the causality constraint (but beware that the latter name is also used for something different). $^\dagger$ It comes from the equation of state for ultrarelativistic particles (ref 2), where the pressure $P$ and density $\rho$ are related to each other by $P=\rho c^2/3$. This, in turn, means that the speed of "sound" in a compact object is limited by $v\equiv \sqrt{dP/d\rho}\leq c/\sqrt{3}$, which limits how quickly one part of the object can react to changes in another part, which in turn leads to the bound (2). This bound is consistent with observation (ref 1). $^\dagger$ The condition $v<c/\sqrt{3}$ is called the "causality constraint" in ref 2 and is called the "conformal limit" in ref 4. Other papers use the name "causality constraint" for the looser condition $v<c$. This puts a limit on (non-rotating) "almost black holes": the radius must be at least 40% greater than the radius of a black hole. Presumably a similar limit can be derived for the more realistic case of rotating compact objects, but I'm not familiar with it. Both the Schwarzschild radius (1) and the conformal limit (2) are indicated near the upper-left corner of this mass-versus-radius figure from ref 3: The Schwarzschild radius is the boundary of the dark blue region (labeled "GR" for "General Relativity"), and the conformal limit (labeled "causality") is the boundary of the upper-left green region. The black curves are various models for neutron stars, and the green curves are models for quark stars. The Buchdahl bound Equation (2) comes from considering the equation of state for ultrarelativistic particles. If a realistic equation of state could exceed the conformal limit, then maybe the conformal limit (2) could be beaten. Table 2 in ref 4 suggests that this might be possible. I'm not familiar enough with that work to comment on how realistic that is, but in any case we still have the Buchdahl bound. The Buchdahl bound comes from requiring that the pressure at the center of the object is finite and that the density decreases away from the center (ref 2). The Buchdahl bound is $$
R > \frac{9}{4}\,\frac{GM}{c^2},
\tag{3}
$$ which says that the radius of an "almost black hole" must be at least 12% greater than the Schwarzschild radius. This again assumes a non-rotating object. I don't know what the generalization is for a rotating object. Bending of light As explained in ref 5, if light comes close to a certain critical radius of a sufficiently compact object, the gravity can be so strong that the light loops around the object arbitrarily many times before leaving the vicinity, and it can leave in any direction (depending on the precise details of how close to the critical radius). That critical radius is $3GM/c^2$, 50% larger than the Schwarzschild radius, so an object as compact as (2) or (3) would show this effect. Here's an example from figure 3 in ref 5: The shaded area is a circle with the Schwarzschild radius (so the compact object will be a bit larger than this), the dashed line shows the critical radius (equations (2) and (3) represent objects smaller than this), and the solid line is the trajectory of the light. The same paper also includes several other figures illustrating various light-bending effects due to such a compact object. The idea of searching for neutron stars (and other compact objects) using their gravitational-lensing effect has received some attention. Ref 6 is one example. References: Lattimer and Prakash, "Neutron Star Observations: Prognosis for Equation of State Constraints", https://arxiv.org/abs/astro-ph/0612440 Eksi, "Neutron stars: compact objects with relativistic gravity", https://arxiv.org/abs/1511.04305 Lattimer, "The Nuclear Equation of State and Neutron Star Masses", https://arxiv.org/abs/1305.3510 Li et al, "Neutron star equation of state: Exemplary modeling and applications", https://www.sciencedirect.com/science/article/pii/S2214404820300355 Kraus (1998), "Light Deflection Near Neutron Stars", https://www.spacetimetravel.org/licht/licht.html (includes a link to download the PDF file) Dai et al, "Gravitational microlensing by neutron stars and radio pulsars", https://arxiv.org/abs/1502.02776 | {
"source": [
"https://physics.stackexchange.com/questions/579737",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/230132/"
]
} |
580,579 | Consider an object which has been given a speed $v$ on a rough horizontal surface. As time passes, the object covers a distance $l$ until it stops because of friction. Now, the initial kinetic energy is $\frac{1}2mv^2$ and the final kinetic energy is zero. Therefore, the work done by friction on the object is equal in magnitude to $\frac{1}2mv^2$. Now here is the part that I found weird: consider another frame moving with a speed $v_0$ in the same direction with respect to the ground frame. Now, the kinetic energy of the original object with respect to this new frame is $\frac{1}2m(v-v_0)^2$, and the final kinetic energy is equal to $\frac{1}2mv_0^2$. So this means that the work done by the frictional force, in this case, will have a magnitude of $\frac{1}2m[(v-v_0)^2-v_0^2]$, which is obviously different from the value we get with respect to a stationary frame. And this part seems very unintuitive to me. How is it possible for the same force to do different amounts of work in two different inertial frames? (I would consider it unintuitive even in non-inertial frames, after accounting for pseudo-forces.) And if we were to do more calculations based on the two values of the work done by friction, we would land on different values of some quantities which aren't supposed to be different in any frame. For example, the coefficient of friction would be different, as the frictional force is constant, acting over a distance $l$. We can say that the work done by the frictional force is $\alpha mgl$, where $\alpha$ is the coefficient of friction and $g$ is the acceleration due to gravity. We can clearly see that $\alpha mgl$ would have to equal two different values. So, is this just how physics works, or is there something wrong here? | You have correctly discovered that power, work, and kinetic energy are all frame variant. This has been well known for centuries, but it is always surprising to a student when they first discover it. For some reason, it is not part of a standard physics curriculum. So, the reason that this is disturbing to every student who encounters it is that it seems irreconcilable with the conservation of energy. If the work done is different in different reference frames, then how can energy be conserved in all frames? The key is to recognize that the force doing work acts on two bodies, in this case the object and the horizontal surface. You must include both bodies to get a complete picture of the conservation of energy. Consider the situation in your example from an arbitrary frame where the horizontal surface (hereafter the "ground") is moving at a velocity $u$, the ground frame then being the frame $u=0$. Let the ground have mass $M$. The initial kinetic energies are: $$KE_{obj}(0)=\frac{1}{2}m (v+u)^2$$ $$KE_{gnd}(0)=\frac{1}{2} M u^2$$ Now, the friction force $-f$ acts on the object until $v_{obj}(t_f)=v_{gnd}(t_f)$. Solving for the time gives $$t_f=\frac{m M v}{(m+M) f}$$ and, by Newton's 3rd law, a force $f$ acts on the ground for the same time. At $t_f$ the final kinetic energies are: $$KE_{obj}(t_f)=\frac{1}{2} m \left(\frac{Mu+m(u+v)}{m+M} \right)^2$$ $$KE_{gnd}(t_f)=\frac{1}{2} M \left(\frac{Mu+m(u+v)}{m+M} \right)^2$$ so $$\Delta KE_{obj}+\Delta KE_{gnd}=-\frac{m M v^2}{2(m+M)}$$ Note importantly that the total change in KE is independent of $u$, meaning that it is frame invariant. This is the amount of energy that is converted to heat at the interface.
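To make the frame independence concrete, here is a small numerical check in Python (the masses, speed, and frame velocities are arbitrary example values, not anything from the problem):

```python
m, M, v = 1.0, 100.0, 5.0   # object mass, ground mass, initial relative speed

def total_delta_KE(u):
    """Change in KE of object + ground, in a frame where the ground moves at u."""
    v_final = (M * u + m * (u + v)) / (m + M)   # common final velocity
    dKE_obj = 0.5 * m * v_final**2 - 0.5 * m * (u + v)**2
    dKE_gnd = 0.5 * M * v_final**2 - 0.5 * M * u**2
    return dKE_obj + dKE_gnd

for u in (0.0, 3.0, -7.0):
    print(u, total_delta_KE(u))   # same total for every u: -m*M*v**2/(2*(m+M))
```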
So even though the change in KE for the object itself is frame variant, when you also include the ground you find that the total change in kinetic energy is frame invariant, which allows energy to be conserved, since the amount of heat generated is frame invariant. | {
"source": [
"https://physics.stackexchange.com/questions/580579",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/270701/"
]
} |
581,685 | Carlo Rovelli is interviewed in this article (The illusion of time): Alongside and inspired by his work in quantum gravity, Rovelli puts forward the idea of 'physics without time'. This stems from the fact that some equations of quantum gravity (such as the Wheeler–DeWitt equation, which assigns quantum states to the Universe) can be written without any reference to time at all. From Wikipedia: In theoretical physics, the problem of time is a conceptual conflict between general relativity and quantum mechanics in that quantum mechanics regards the flow of time as universal and absolute, whereas general relativity regards the flow of time as malleable and relative. In this article (Quanta Magazine, by Natalie Wolchover; Does time really flow?) the issue is further discussed. I'm not sure I understand doing physics without time. Why does it stem from a conflict between GR and QM? Let it be clear that I'm not referring to block time. I have little understanding of the Wheeler–DeWitt equation. So the question is: can physics really be done without using time? It seems impossible to me. GR and QM may be putting forth different notions of time (absolute vs. relative), but nevertheless, the difference between time as used in GR and QM (I do know that the Wheeler–DeWitt equation tries to reconcile the two: the equation is trying to unite GR with QM) doesn't mean that time is superfluous. I know this sounds all quite philosophical, but I have brought in the physics. | Rovelli's The Order of Time is an excellent book and well worth reading if you haven't already. I don't think Rovelli is advocating doing physics without time (the article adds spice by exaggerating his position somewhat). Instead, he is suggesting that time is an emergent phenomenon rather than one of the fundamental attributes of reality. But it is a useful shorthand, and trying to do physics without making any use of that shorthand would be at best tedious and at worst incomprehensible. In a similar way, we know that colour (in its everyday sense) is an emergent phenomenon and not an attribute of fundamental particles. However, we still talk about "redshift", for example, as a useful shorthand for "photons with longer wavelengths and lower energy", and describing a van Gogh painting without referring to its colours would be a pointless and uninformative exercise. The question of whether time is fundamental to reality or an emergent phenomenon is not settled, which is what makes it so interesting. Julian Barbour's The End of Time presents a more extreme and definitely less mainstream view than Rovelli's. On the other hand, Lee Smolin's Time Reborn puts the opposite case and argues that time really is fundamental. | {
"source": [
"https://physics.stackexchange.com/questions/581685",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/98822/"
]
} |
582,749 | Imagine a horse is tethered to a cart. According to Newton's third law, when the horse pulls on the cart, the cart will also pull backwards on the horse. Since the two objects are attached together, they are technically the same object, and they cannot accelerate. This doesn't make any sense. In the real world, horse carts are able to move, even though they are attached together. Am I overlooking something here? | You are overlooking the force between the horse and the ground. Yes, if you had only the horse and the cart with nothing else, then the system could not accelerate as a whole (i.e., the center of mass could not accelerate). However, friction between the horse and the ground pushes on the horse, thus allowing for an acceleration of the system. | {
"source": [
"https://physics.stackexchange.com/questions/582749",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/274945/"
]
} |
582,842 | I often see and hear people claiming that "the gravitational force is much weaker than the electromagnetic force".
Usually, they justify it by comparing the universal gravitational constant to Coulomb's constant. But obviously, such a comparison is meaningless, as the two constants differ in dimensions.
I'll make myself clear: of course you can say it is true for the electron-electron interaction, but I'm talking about whether they can be compared fundamentally somehow in any area of physics. | Yes, they can. Both interactions can be modeled using perturbative quantum field theory, where their strength is parametrized by a dimensionless coupling constant. Electromagnetic repulsion between two electrons can be written as a power series in $\alpha$, the fine structure constant, which is dimensionless and has a value of roughly 1/137. Meanwhile, the gravitational attraction between two electrons can be expanded in a similar way in a power series in $\alpha_G$, which is a dimensionless constant with a value of roughly $10^{-45}$. The precise value of $\alpha_G$ depends somewhat on which particle you're comparing, since ultimately it's the square of the ratio of the particle's mass to the Planck mass. However, for fundamental particles, this ratio does not vary by more than ten orders of magnitude, which still places $\alpha_G$ far smaller than $\alpha$ no matter which fundamental particle you choose to compare.
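As a rough numerical illustration of those two numbers (a sketch using the CODATA constants shipped with scipy; the electron is the comparison particle here):

```python
import scipy.constants as sc

hbar_c  = sc.hbar * sc.c
alpha   = sc.e**2 / (4 * sc.pi * sc.epsilon_0 * hbar_c)  # fine structure constant
alpha_G = sc.G * sc.m_e**2 / hbar_c                      # gravitational analogue

print(f"alpha   = {alpha:.3e} (1/alpha = {1/alpha:.1f})")  # ~7.3e-3, i.e. ~1/137
print(f"alpha_G = {alpha_G:.3e} (for the electron)")       # ~1.8e-45
```

The forty-odd orders of magnitude between the two printed numbers are exactly the comparison being made. | {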
"source": [
"https://physics.stackexchange.com/questions/582842",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/275924/"
]
} |
583,111 | Suppose I have some material in the solid state (say), and I cut it into two parts. Take the first part and cut it into two parts; take the first of those and cut it into two parts; then repeat this again and again. There will be a point when the substance loses its solid properties. I'm interested in this point. I realize we don't have to go all the way until we break it into two molecules; that point will come a little sooner. This thought experiment is a little bit crazy (and at a point impossible), but please consider it and correct me if there is a flaw in my thinking. | Generally speaking, there isn't a hard line: the borders between regimes are fuzzy, and their positions can be quite different depending on what properties you're looking at. Moreover, it is generally rare to have a direct border between "molecule" behaviour and "solid" behaviour: the intermediate size regimes typically behave very differently to both of those extremes, and they need to be handled separately. Depending on their size, these materials are known as atomic clusters or nanoparticles (though several other related, more technical terms are also important). Both of those regimes are the focus of active, dedicated fields of current cutting-edge research. | {
"source": [
"https://physics.stackexchange.com/questions/583111",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/247238/"
]
} |
583,179 | I've seen, in a lot of movies and animations, bubbles forming when something moves underwater (e.g., fish swimming). Is it theoretically possible (under any temperature and pressure circumstances possible in oceans) that bubbles could form miles underwater, just because of mere movement? | Certainly; the mechanism is called cavitation, and it works like this: As an object moves through water, a pressure distribution gets built up around the walls of the object, and depending on its shape, it is possible for pressures at some points to be lower than ambient pressure; the best example of this is on the backwards face of a rotating propeller blade. If that propeller blade moves fast enough through the water, that "negative pressure" grows until it equals the vapor pressure of the water at that ambient condition, and the water there explodes into vapor; in essence, it boils. As soon as those cavitation bubbles full of vapor form, any dissolved air in the water nearby will diffuse into the vapor bubbles and collect there. Because the pressure in the cavitation bubble is below ambient, as soon as the moving object sheds a bubble it tends to quickly collapse as the vapor inside the bubble condenses back into liquid water, but the re-dissolution process for the air in the bubble is much slower, and so after the vapor bubble is gone a tiny air-filled bubble remains. Note that the lifetime of a cavitation bubble is measured in milliseconds, and those air remnants are far tinier than anything portrayed in a cartoon. | {
"source": [
"https://physics.stackexchange.com/questions/583179",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/275573/"
]
} |
583,621 | We know the composition of stars by spectroscopic analysis. The EM waves generated by them are blue- or redshifted. We could have said, "Look, the wavelength is slightly different, so the star may be constituted of quarks and leptons which are slightly different from ours." How do we ascertain that it is made up of the same particles? | Fairy Physics It is entirely possible to construct a theory of the universe which states: "All effects are caused by fairies. Each effect has its own fairy, and every fairy is unique. When two fairies produce the same outcome, that is just a happy coincidence." Unfortunately, it is basically impossible to disprove this theory. Also, this theory lacks explanatory power. If I ask the question: "Where will Venus be in 3,000 years?" Fairy Physics can only answer: "Wherever the Venus fairies decide it will be!" And this is the real problem with Fairy Physics. The problem is not that it is "wrong", because in some sense, it can't be wrong. The problem is that it is useless. And it is useless because it is, in a way, infinitely powerful. It is a theory which allows anything to happen, because its answer is always: "A fairy caused that." And thus, we see that a useful theory is one in which we can separate "explainable" events from "miraculous" ones. There are no "fairy miracles." But if we witnessed Jupiter teleport to the other side of the Sun, that would pretty clearly violate the Standard Model of physics. A theory actually becomes more powerful the more constraints it imposes on how the universe can evolve. That's because more constraints mean a greater ability for us as human beings to predict the future. Physics has proceeded by imposing ever more detailed limitations on how the physical world is predicted to behave. A universe in which quarks and leptons differ across space and/or time has fewer constraints than one in which quarks and leptons are identical. And thus, a theory which describes such a universe is weaker than one which forbids it, because it allows more behavior. Standard Model The nice property of a constraint is that it gives you a way to invalidate a theory. The more specific and precise a prediction, the more ways it can fail. And if a prediction succeeds, you thus have more confidence in the prediction under tighter constraints. Fairy Physics is "true" because it cannot be falsified. But such theories are, as we have established, utterly useless. We want a theory with the tightest possible constraints we can impose, because such a theory offers the sharpest predictions and the most opportunities for falsification. If observations are then compatible with the resulting theory, we have much greater confidence in it. The Standard Model is widely accepted because it offers the strongest possible predictions we know how to make, and our observation of the universe does not provide any strong counter-examples to falsify it. A Standard Model with bespoke quarks and leptons elsewhere in the universe would be a weaker theory, and also an unnecessary one. So why downgrade to a Ford Fiesta when you can drive around in a Lamborghini? To give a more explicit example, consider MOND: Modified Newtonian Dynamics. This is a possible explanation for the Dark Matter observations, in which gravity behaves differently on large distance scales. But even this theory avoids letting gravity simply vary arbitrarily across space, because such a relaxation would be giving up too many constraints. An isotropic, homogeneous universe with respect to the laws of physics offers the strongest constraints for a physical theory, which is why almost no modern theory will give it up. Doing so cripples the theory to a level that few find acceptable. | {
"source": [
"https://physics.stackexchange.com/questions/583621",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/276176/"
]
} |
583,628 | The definition that I have concluded is that: no two fermions can exist in the same state, or quantum state, unless they have opposite spins. Am I right in saying this? They can have the same azimuthal number, the same principal quantum number, and thus the same magnetic number, but if they have a different spin it's alright? Any clarification appreciated.
Thanks! | {
"source": [
"https://physics.stackexchange.com/questions/583628",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/276053/"
]
} |
583,647 | Mathematically, it is obvious that the total orbital angular momentum $L^2$ commutes with the spin-orbit Hamiltonian $\propto\boldsymbol{L}\cdot\boldsymbol{S}$. However, is there an intuitive physical reason for this? For example, the total angular momentum $J^2$ must commute because there is no external torque, and the total spin $S^2$ must commute because the spin of the electron is constant, but I can't think of any similar argument for $L^2$. | {
"source": [
"https://physics.stackexchange.com/questions/583647",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/137185/"
]
} |
584,058 | I find the term "microgravity" to be misleading; how was it coined? NASA provides this definition: Microgravity is the condition in which people or objects appear to be
weightless. The effects of microgravity can be seen when astronauts
and objects float in space. Presumably the word "micro" is not being used in its mathematical sense, and is being used to express something that is small. However, the above article goes on to state that: The International Space Station orbits Earth at an altitude between
200 and 250 miles. At that altitude, Earth's gravity is about 90
percent of what it is on the planet's surface. I would not describe 90% as small. It seems a poorly constructed term. | Microgravity is used because zero gravity is inaccurate. The ISS, at 400 km, experiences an average atmospheric density of 4 nanograms per cubic meter. Its frontal area varies from 700–2300 square meters. At 1000 m$^2$, the drag force is $\frac 1 4$ N. With a mass around 250,000 kg, that's $10^{-6}$ m/s$^2$, or 0.1 $\mu$g. Hence: microgravity, literally. If you leave an object at the back of the space station, it will fall forward, falling 100 m (the length scale of the ISS) in 4 hours, with an impact speed of $\sqrt 2$ cm/s.
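Those last two figures follow from constant-acceleration kinematics; a minimal check using the round numbers above:

```python
import math

F = 0.25      # drag force, N (order-of-magnitude figure from above)
M = 250_000   # ISS mass, kg
d = 100       # length scale of the station, m

a = F / M                  # ~1e-6 m/s^2, i.e. ~0.1 micro-g
t = math.sqrt(2 * d / a)   # time to drift 100 m from rest
v = a * t                  # speed at "impact"

print(f"a = {a:.1e} m/s^2")
print(f"t = {t / 3600:.1f} hours")    # ~3.9 h
print(f"v = {100 * v:.2f} cm/s")      # ~1.41 cm/s, i.e. sqrt(2) cm/s
```

This reproduces the round numbers quoted above. | {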
"source": [
"https://physics.stackexchange.com/questions/584058",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/66413/"
]
} |
584,228 | If I swing my arm really fast, the matter in my arm should experience time slower than the matter in my body. So how do the parts of my body stay in sync with each other? And a more general question that derives from this: a lot of matter moves at different speeds inside our body, so how does anything ever stay synced? | How does anything ever stay synced? Not sure what you mean by "stay synced". Different parts of your body maintain their structural integrity at the atomic level because of the electromagnetic forces between atoms and molecules. This simply involves the exchange of photons (the force carrier for the electromagnetic force) over very short distances; no "syncing" is required. Similarly, nerve impulses to and from different parts of your body are chemical signals sent down nerves, which also ultimately depend on the exchange of photons at an atomic level. Again, no "syncing" required. In computer science terms, the body is an asynchronous system. There is no master clock in the body that says "hey, arm, you're a femtosecond behind everyone else".
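For scale, the time dilation involved is absurdly small; a one-line estimate (the arm speed is an assumed example):

```python
c = 2.998e8   # speed of light, m/s
v = 10.0      # assumed arm speed while waving, m/s

lag = v**2 / (2 * c**2)   # low-speed limit of 1 - 1/gamma: lag per second
print(f"{lag:.1e} s of lag per second of waving")   # ~5.6e-16 s: sub-femtosecond
```

That sub-femtosecond-per-second figure is the scale of the "lag" being dismissed above. | {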
"source": [
"https://physics.stackexchange.com/questions/584228",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/276456/"
]
} |
584,683 | In every physics textbook that I've read, protons are mentioned as particles that are bigger, way bigger (2000 times, to be precise), than electrons... I believed that until a few minutes ago, when I googled "what is the radius of an electron" and then read somewhere that it was 2.5 times bigger than the radius of a PROTON... the radius of an ELECTRON is bigger than that of a PROTON. This goes completely against every physics textbook that I've read... Any help towards explaining why protons are 2000 times bigger while their radius is 2.5 times smaller than that of an electron will be valued... thanks in advance. | Quantum mechanical particles have well-defined masses, but they do not have well-defined sizes (radius, volume, etc.) in the classical sense. There are multiple ways you could assign a length scale to a particle, but if you think of them as little balls with a well-defined size and shape, then you're making a mistake. de Broglie Wavelength: Particles which pass through small openings exhibit wavelike behavior, with a characteristic wavelength given by $$\lambda_{dB} = \frac{h}{mv}$$ where $h$ is Planck's constant, $m$ is the particle's mass, and $v$ is the particle's velocity. This sets the length scale at which quantum effects like diffraction and interference become important. It also turns out that if the average spacing between particles in an ideal gas is on the order of $\lambda_{dB}$ or smaller, classical statistical mechanics breaks down (e.g. the entropy diverges to $-\infty$). Compton Wavelength: One way to measure the position of a particle is to shine a laser on the region where you think the particle will be. If a photon scatters off of the particle, you can detect the photon and trace its trajectory back to determine where the particle was. The resolution of a measurement like this is limited to the wavelength of the photon used, so smaller-wavelength photons yield more precise measurements. However, at a certain point the photon's energy would be equal to the mass energy of the particle. The wavelength of such a photon is given by $$\lambda_c = \frac{hc}{mc^2} = \frac{h}{mc}$$ Beyond this scale, position measurement stops being more precise because the photon-particle collisions start to produce particle-antiparticle pairs. "Classical" Radius: If you want to compress a total amount of electric charge $q$ into a sphere of radius $r$, it takes energy roughly equal to $U = \frac{q^2}{4\pi\epsilon_0 r}$ (this is off by a factor of 3/5, but never mind; we're just looking at orders of magnitude). If we set that equal to the rest energy $mc^2$ of a (charged) particle, we find $$r_0 = \frac{q^2}{4\pi\epsilon_0 mc^2}$$ This is sometimes called the classical radius of a particle with charge $q$ and mass $m$. It turns out that this is of the same order of magnitude as the length scale set by the Thomson scattering cross section, and so this length scale is relevant when considering the scattering of low-energy electromagnetic waves off of particles. Charge Radius: If you model a particle as a spherical "cloud" of electric charge, then you can perform very high precision scattering experiments (among other things) to determine what effective size this charge cloud has. The result is called the charge radius of the particle, and is a very relevant length scale to consider if you are thinking about the fine details of how the particle interacts electromagnetically.
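Before moving on to the composite-particle picture, here is a quick numerical comparison of the three mass-dependent scales defined so far; this is just a sketch using the CODATA constants from scipy, with an assumed illustrative speed for the de Broglie case:

```python
import scipy.constants as sc

h, c, e = sc.h, sc.c, sc.e
v = 1e6   # assumed illustrative speed for the de Broglie scale, m/s

for name, m in (("electron", sc.m_e), ("proton", sc.m_p)):
    l_dB = h / (m * v)                                   # de Broglie wavelength
    l_C  = h / (m * c)                                   # Compton wavelength
    r_0  = e**2 / (4 * sc.pi * sc.epsilon_0 * m * c**2)  # classical radius
    print(f"{name:8s} dB {l_dB:.2e} m, Compton {l_C:.2e} m, classical {r_0:.2e} m")
```

Each of the three comes out roughly 1,836 times smaller for the proton, which is exactly the mass-ratio point made below.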
Fundamentally, the charge radius arises in composite particles because their charged constituents occupy a non-zero region of space. The charge radius of the proton is due to the quarks of which it is composed, and has been measured to be approximately $0.8$ femtometers; on the other hand, the electron is not known to be a composite particle, so its charge radius would be zero (which is consistent with measurements). Excitation Energy: Yet another length scale is given by the wavelength of the photon whose energy is sufficient to excite the internal constituents of the particle into a higher energy state (e.g. of vibration or rotation). The electron is (as far as we know) elementary, meaning that it doesn't have any constituents to excite; as a result, the electron size is zero by this measure as well. On the other hand, the proton can be excited into a Delta baryon by a photon with energy $E\approx 300$ MeV, corresponding to a size $$\lambda = \frac{hc}{E} \approx 4\text{ femtometers}$$ In the first three examples, note that the mass of the particle appears in the denominator; this implies that, all other things being equal, more massive particles will correspond to smaller length scales (at least by these measures). The mass of a proton is unambiguously larger than that of an electron by a factor of approximately 1,836. As a result, the de Broglie wavelength, Compton wavelength, and classical radius of the proton are smaller than those of the electron by the same factor. This raises the question of where the meager 2.5x claim came from. A quick google search shows that this claim appears on the site AlternativePhysics.org. The point being made is that the classical electron radius mentioned above is 2.5 times the "measured" proton radius, by which they mean the measured proton charge radius. This is true, but not particularly meaningful: being quantum mechanical objects, neither the electron nor the proton has a radius in the sense that a classical marble does. Comparing two particles by using two completely different measures of size is comparing apples to oranges. As a final note, I would caution you against taking any of the claims you find on AlternativePhysics.org too seriously. To borrow a saying from the medical community, there's a name for the subset of "alternative physics" which actually makes sense. It's called physics. | {
"source": [
"https://physics.stackexchange.com/questions/584683",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/275573/"
]
} |
585,118 | I found here that the Planck constant is defined as an exact number: $6.626\,070\,15\times10^{-34}\ \mathrm{J/Hz}$. How could this be done? Shouldn't it be a quantity with uncertainty, measured by experiments? | Planck's constant relates two different types of quantities, namely energy and frequency. That means it is a conversion factor which converts the units of quantities from one form to another. If the units of these two quantities are separately defined, then one can use measurements to determine the value of the conversion factor. That value would then have some uncertainty due to the experimental conditions. That is what was done before. However, recently it was decided to define the units of one of the quantities in terms of the other, by setting the conversion factor (Planck's constant) to a fixed value without uncertainty. This came about with the redefinition of the kilogram. Now it does not have any uncertainty anymore. The same thing was done for the speed of light some time ago. | {
"source": [
"https://physics.stackexchange.com/questions/585118",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/110264/"
]
} |
585,261 | (For context, I originally thought of this question in the context of electromagnetic Doppler shift, but I'm also curious if the same logic applies for acoustic Doppler shift.) Assume you are watching an object approaching you at relativistic speeds, for example fast enough that the measured frequency of its emissions is shifted by $10\%$. The object is not on a collision course, but the point of closest approach is a reasonably short, i.e. non-relativistic, distance away. If the object emits a continuous-wave radio signal, over what timescale does the measured frequency of that signal change as it passes the point of closest approach? I cannot intuitively accept that it changes by $20\%$ of a potentially very large number (e.g. $1\text{ GHz}$) instantly, because classical mechanics really dislikes discontinuities. But the transition between moving towards and moving away is in some sense instantaneous, given that the boundary between the two is infinitesimal. What actually happens then? | The instantaneous change occurs when you consider the Doppler shift in only one dimension. In three dimensions you can consider the correction when the velocity vector and the separation vector are not parallel. Usually such corrections go like $\cos\theta$, where $\theta$ is the angle between the two vectors, but more complicated things are possible. Years ago I sat down and computed the speeds for which acoustic Doppler shifts correspond to musical intervals. That gave me the superpower of being able to stand on a sidewalk, listen to the WEEE-ooom as a car drove past, and say to myself "a major third? They're speeding!" But because of the $\cos\theta$ dependence, the trick gets harder as you get further from the road.
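Here is a sketch of that smooth transition in Python; the speed and closest-approach distance are assumed example values, and only the classical line-of-sight term is kept (the relativistic transverse factor is omitted for clarity):

```python
import math

c  = 2.998e8
f0 = 1.0e9      # emitted frequency, Hz
v  = 0.1 * c    # assumed example speed
d  = 10.0       # assumed distance of closest approach, m

def f_obs(t):
    # t = 0 at closest approach; cos(theta) > 0 while approaching
    cos_theta = -v * t / math.hypot(v * t, d)
    return f0 / (1.0 - (v / c) * cos_theta)

for t in (-1e-5, -1e-6, 0.0, 1e-6, 1e-5):   # seconds around closest approach
    print(f"t = {t:+.0e} s -> f = {f_obs(t):.4e} Hz")
```

The frequency sweeps smoothly from blueshifted to redshifted over a timescale of order $d/v$, so the closer the flyby and the faster the object, the quicker (but never discontinuous) the change. | {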
"source": [
"https://physics.stackexchange.com/questions/585261",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/152379/"
]
} |
585,384 | We have two concepts: energy and momentum. To me, momentum is more fundamental than energy, and I think that momentum was the thing we really wanted to discover as energy. Now momentum can describe several things that energy does, and if it is not able to describe something, then it can somehow be extended to describe that thing. For example, momentum cannot describe the potential energy due to, let's say, the gravitational field of the Earth on an object, but it can be easily twisted to be able to describe it. Momentum can describe quantum stuff as well. Also, we know that momentum is conserved, just as energy is. In short, I want to know the physical difference between momentum and energy. | Already some good answers here, but also let's add the following important idea which no one has mentioned yet. Suppose two particles are in a collision. The masses are $m_1$, $m_2$, the initial velocities are ${\bf u}_1$ and ${\bf u}_2$, the final velocities are ${\bf v}_1$ and ${\bf v}_2$. Then conservation of momentum tells us $$
m_1 {\bf u}_1 + m_2 {\bf u}_2 = m_1 {\bf v}_1 + m_2 {\bf v}_2.
$$ That is a useful and important result, but it does not completely tell us what will happen. If the masses and the initial velocities are known, for example, then there would be infinitely many different combinations of ${\bf v}_1$ and ${\bf v}_2$ which could satisfy this equation. Now let's bring in conservation of energy, assuming no energy is converted into other forms such as heat. Then we have $$
\frac{1}{2}m_1 u^2_1 + \frac{1}{2}m_2 u^2_2 = \frac{1}{2}m_1 v^2_1 + \frac{1}{2} m_2 v^2_2.
$$ Now we have some new information which was not included in the momentum equation. In fact, in the one-dimensional case these two equations are sufficient to pin down the final velocities completely, and in the three-dimensional case almost completely (up to rotations in the CM frame; see below). This shows that energy and momentum are furnishing different insights, both of which help to understand what is going on. Neither can replace the other. There are plenty of other things one might also say. The most important is the connection between energy and time on the one hand, and between momentum and position on the other, but other answers have already mentioned that. It may also interest you to know that the two most important equations in quantum theory are a relationship between energy and development in time (Schrödinger's equation) and a relationship between momentum and position (the position-momentum commutator). Further info The general two-body collision can be analysed in the CM frame (variously called centre of mass frame; centre of momentum frame; zero momentum frame). This is the frame where the total momentum (both before and after the collision) is zero. The conservation laws fix the sizes but not the directions of the final velocities in this frame, except to say that the directions are opposite to one another.
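To see the two conservation laws pinning down a one-dimensional collision together, here is a small sketch (the masses and initial velocities are arbitrary example values):

```python
def elastic_1d(m1, m2, u1, u2):
    """Final velocities fixed jointly by momentum and energy conservation (1D)."""
    v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
    v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)
    return v1, v2

m1, m2, u1, u2 = 2.0, 1.0, 3.0, -1.0
v1, v2 = elastic_1d(m1, m2, u1, u2)

print(v1, v2)                                   # 0.333..., 4.333...
print(m1*u1 + m2*u2, m1*v1 + m2*v2)             # momentum: 5.0 both times
print(0.5*m1*u1**2 + 0.5*m2*u2**2,
      0.5*m1*v1**2 + 0.5*m2*v2**2)              # energy: 9.5 both times
```

Momentum conservation alone would allow infinitely many $(v_1, v_2)$ pairs; adding energy conservation leaves only this outcome (besides the trivial no-collision one). | {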
"source": [
"https://physics.stackexchange.com/questions/585384",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/236799/"
]
} |
585,401 | According to this source, there are $5077$ visible stars in the night sky, and a full sky area of $41253$ square degrees of sky. This makes for a density of $0.12$ stars per square degree of the sky. Suppose I hold up a square picture frame that is $1$ square meter in size ($1\ {\rm m} \times 1\ {\rm m}$), $2$ meters away from me. How many stars can I expect to find contained within that frame? What is the formula if I want to vary the picture frame size (let's say rectangular), or distance away from me (let's say $3$ meters)? | {
"source": [
"https://physics.stackexchange.com/questions/585401",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/276927/"
]
} |
585,690 | So, I have been watching some science videos regarding Einstein's theory of general relativity, and until today the predictions based on his equations have been proven to stand. My question would be: what happens in the scientific community if one experiment proves it wrong (not only Einstein's theory, but other laws and theories of physics too)? Do we automatically get rid of these centuries-old theories, or do they get rewritten to fit the new experiment, or do they stay the same, but with exceptions? | "My question would be, what happens in the scientific community if one experiment proves it wrong?" We have already seen what happens in this circumstance by looking at what happened to Newtonian gravity. First, well before the development of general relativity there were observations that did not fit with Newtonian gravity. For example, Uranus' orbit did not match Newtonian predictions. It was found that by modifying the predictions to include an unobserved source of gravity, the data could be coerced into fitting the observations. Subsequent observations confirmed the planet Neptune. As another example, Mercury's orbit also did not fit, and a similar additional planet named Vulcan was proposed. The planet Vulcan was never observed through other means. Now, afterward general relativity was developed. It explained the orbit of Mercury without requiring Vulcan. In addition, many other phenomena were predicted and discovered. Many of these phenomena were not predicted by Newtonian gravity, or the wrong value was predicted. Through the course of these observations Newtonian gravity was explicitly falsified. However, after Newtonian gravity was falsified it still continued to be taught in schools. The Apollo space program and other spacecraft successfully reached their destinations using the falsified Newtonian gravity theory. The thing is that although the theory was falsified, it had also been verified for centuries, and none of that verification was removed by the falsification. Newtonian gravity continued to accurately predict all of the phenomena that it had ever been shown to accurately predict. If you were only interested in those previously verified phenomena, then you could continue to use Newtonian gravity with confidence, and there is a strong incentive to do so because it is computationally far simpler than general relativity. So, at some point, when an experiment falsifies general relativity, new sources will be sought, and if they cannot be found then that will place limits on its domain of validity, but it will not reverse any of the evidence that validates it within its domain of validity. Furthermore, just as general relativity needed to reduce to Newtonian gravity in the appropriate domain, so any future theory will need to reduce to general relativity in the appropriate domain. If the future theory is computationally more difficult than general relativity, then we would continue to use general relativity just as we have continued to use Newtonian gravity. Thus, we would fully expect future students to learn general relativity just as current students still learn Newtonian gravity. General relativity will not go away, even after such an experiment. | {
"source": [
"https://physics.stackexchange.com/questions/585690",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/274664/"
]
} |
585,691 | In this paper 1 the following band structure of Bi $_2$ Se $_3$ is shown: In "a" they show the bands without spin-orbit coupling (SOC) and in "b" they include SOC.
It is said that: "Figure 2a and b show the band structure of Bi $_2$ Se $_3$ without and with SOC, respectively. By comparing the two figure
parts, one can see clearly that the only qualitative change induced
by turning on SOC is an anti-crossing feature around the $\Gamma$ point,
which thus indicates an inversion between the conduction band
and valence band due to SOC effects, suggesting that Bi $_2$ Se $_3$ is a
topological insulator" What is meant by the "anti crossing around the $\Gamma$ point after SOC is turned on?" Also before SOC is turned on there is no crossing between valence band and conduction band!? And what is meant by the "inversion between conduction and valence band"? Am I supposed to see that conduction and valence bands are mirrored at the Fermi level (dashed line) when going from the left figure to the right? And why does this indicate that we have a topological insulator? 1 H. Zhang, C.-X. Liu, X.-L. Qi, X. Dai, Z. Fang & S.-C. Zhang, "Topological insulators in $\require{mhchem}\ce{Bi2Se3}$ , $\ce{Bi2Te3}$ and $\ce{Sb2Te3}$ with a single Dirac cone on the surface", Nat. Phys. 5 , 438–442 (2009). | My question would be, what happens in the scientific community if one experiment proves it wrong We have already seen what happens in this circumstance by looking at what happened to Newtonian gravity. First, well before the development of general relativity there were observations that did not fit with Newtonian gravity. For example, Uranus’ orbit did not match Newtonian predictions. It was found that by modifying the predictions by including an unobserved source of gravity, the data could be coerced into fitting the observations. Subsequent observations confirmed the planet Neptune. As another example, Mercury’s orbit also did not fit, and a similar additional planet named Vulcan was proposed. The planet Vulcan was never observed through other means. Now, afterward general relativity was developed. It explained the orbit of Mercury without requiring Vulcan. In addition, many other phenomena were predicted and discovered. Many of these phenomena were not predicted by Newtonian gravity or the wrong value was predicted. Through the course of these observations Newtonian gravity was explicitly falsified. However, after Newtonian gravity was falsified it still continued to be taught in schools. The Apollo space program and other spacecraft successfully reached their destinations using the falsified Newtonian gravity theory. The thing is that although the theory was falsified it had also been verified for centuries and none of that verification was removed by the falsification. Newtonian gravity continued to accurately predict all of the phenomena that it had ever been shown to accurately predict. If you were only interested in those previously verified phenomena then you could continue to use Newtonian gravity with confidence, and there is a strong incentive to do so because it is computationally far simpler than general relativity. So, at some point when an experiment falsifies general relativity then new sources will be sought and if they cannot be found then that will place limits on its domain of validity, but it will not reverse any of the evidence that validates it within its domain of validity. Furthermore, just as general relativity needed to reduce to Newtonian gravity in the appropriate domain, so any future theory will need to reduce to general relativity in the appropriate domain. If the future theory is computationally more difficult than general relativity, then we would continue to use general relativity just as we have continued to use Newtonian gravity. Thus, we would fully expect future students to learn general relativity just as current students still learn Newtonian gravity. General relativity will not go away, even after such an experiment | {
"source": [
"https://physics.stackexchange.com/questions/585691",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
586,835 | I think I understand the idea of thinking about gravity not as a force pulling an object towards another object but instead as a warping of space, so that an object moving in a straight line ends up following a path that brings it closer to the object, like two people at the equator both heading north and ending up at the same point even though all they did was move forward. What I'm not following is why the speed the object is traveling at would affect the path it takes, if all it is doing is moving forward and it is in fact spacetime that is bending around the planet. I can easily understand this in classical mechanics as two forces counteracting each other, but I can't visualise what is happening in a model of gravity as warped space. Imagine a large planet and two objects passing by the planet, both on the same course. One is slower than the other. The slow object gets captured by the planet and falls into an orbit (or to the planet itself if it is too slow to make an orbit). If I understand correctly, this object is simply moving forward in space, but space itself bends around so that its path now takes it towards the planet. But nothing has pulled the object off its original course. The other, a fast-moving object, has its path bent slightly but flies past the planet and off into space. Same thing: it simply moves forward, and again its path is bent by virtue of space itself being bent. If these two objects are both simply moving in a straight line through the same bent spacetime, both going only "forward", how would the speed of one object cause a path that is less bent towards the planet than the other? Surely one just travels through the same equally bent spacetime faster than the other. I'm sure I'm missing something, but I can't find a good explanation; most explanations I can find online about viewing gravity as curved spacetime completely ignore the speed at which the object caught by gravity is traveling. Follow Up Just want to say thank you to everyone who answered this question; I'm blown away by how much people were prepared to put into formulating answers. I've not picked an accepted answer since I don't feel qualified to know which is the best explanation, but they are all really good and have all really helped expand my understanding of this topic. | You're using the wording "curved spacetime", but you're still only thinking "curved space" with an independent, linear time. In your curvature model, you're assuming that moving through some 3D spatial point in one spatial 3D direction will produce the same 3D path curvature independent of speed (as if you'd shot a ball through a curved tube). You'd certainly agree that a different initial 3D direction will result in a different path. Now we are in 4D, meaning that two different initial speeds are two different 4D directions, and since time cannot be treated as an independent component, but is curved together with space, this easily results in a different path. | {
"source": [
"https://physics.stackexchange.com/questions/586835",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/277063/"
]
} |
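For concreteness (this standard equation is added for illustration and is implicit in the answer above): a freely falling object follows a geodesic, $$\frac{d^2 x^\mu}{d\tau^2} = -\Gamma^\mu_{\alpha\beta}\,\frac{dx^\alpha}{d\tau}\,\frac{dx^\beta}{d\tau},$$ where the Christoffel symbols $\Gamma^\mu_{\alpha\beta}$ encode the curvature of spacetime. The right-hand side is quadratic in the 4-velocity $dx^\alpha/d\tau$ , so two objects passing the same point in the same spatial direction but at different speeds have different 4-velocities and therefore experience different "bending" of their paths.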
586,847 | Consider the problem of determining the equations of motion in 2D for a point mass sliding down the quarter unit circle lying in the 3rd quadrant. That is, at $t=0$ , it is at position $(-1,0)$ , and we wish to determine its position $x(t)$ for $t>0$ , as it slides to $(0,-1)$ . Let $\alpha(t)$ denote the angle between the tangent line to the quarter circle at position $x(t)$ , and the x-axis, so that essentially $\alpha(0) = \frac{\pi}{2}$ . Then $x(t) = -(\sin \alpha(t), \cos \alpha(t))$ . Moreover, the instantaneous force acting on the particle is $F = m \ddot{x} = mg \sin \alpha (\cos \alpha, -\sin \alpha)$ . Edit: The calculations from here onwards are not correct, as pointed out in the answers, although coincidentally the end result and the plot are. See my answer below. Differentiating $x$ twice and equating it to $\frac{1}{m} F$ gives $\begin{pmatrix}\sin \alpha & -\cos \alpha \\ \cos \alpha & \sin \alpha\end{pmatrix}
%
\begin{pmatrix} \dot{\alpha} \\ \ddot{\alpha} \end{pmatrix}
=
g \begin{pmatrix} \sin \alpha \cos \alpha \\ -\sin^2 \alpha \end{pmatrix}
$ so that by assuming $\alpha$ is never exactly $\frac{\pi}{2}$ , hence being able to invert the matrix, after some cancellations we get $\begin{pmatrix} \dot{\alpha} \\ \ddot{\alpha} \end{pmatrix}
%
= \begin{pmatrix} 0 \\ -g \sin \alpha \end{pmatrix}$ . Now solving $\ddot{\alpha} = -g \sin \alpha$ alone using Euler integration with initial conditions $\alpha(0) = \frac{\pi}{2}$ and $\dot{\alpha}(0) = 0$ , and then plotting $x(t)$ , produces something that looks reasonable: the spacings increase as the particle slides down and gains speed. However, $\dot{\alpha} = 0$ forces $\alpha$ to be constant, and $\ddot{\alpha}$ to be zero, constraining away the expected trajectories. Why does this problem occur with this solution? | | {
"source": [
"https://physics.stackexchange.com/questions/586847",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/195358/"
]
} |
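For reference, a minimal Python sketch of the Euler integration the question above describes (the step size, stopping rule, and variable names are my choices, not from the original post):

```python
import numpy as np

g, dt = 9.81, 1e-4                 # gravity; arbitrary small Euler step (unit-radius circle)
alpha, alpha_dot = np.pi / 2, 0.0  # initial conditions from the question

path = []
while alpha > 0:                   # stop once the mass reaches (0, -1)
    alpha_ddot = -g * np.sin(alpha)        # the ODE derived in the question
    alpha_dot += alpha_ddot * dt           # explicit Euler update
    alpha += alpha_dot * dt
    path.append((-np.sin(alpha), -np.cos(alpha)))  # x(t) = -(sin a, cos a)
```

Sampling `path` at regular time intervals indeed shows the spacings growing as the mass speeds up, as stated in the question.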
586,848 | An electromagnetic wave propagates in air toward a reflective plate and reflects off the plate. Is there any difference in the wave's reflection if the plate is charged versus uncharged? Related: an electromagnetic wave propagates through a medium. Is there any difference in the propagation or any change in the wave if an electric field (either static or changing) is applied in the medium through which the wave propagates? | | {
"source": [
"https://physics.stackexchange.com/questions/586848",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/277082/"
]
} |
587,050 | Let's say that we have a gaseous or liquid compound (I don't know if elements or compounds make a difference, take this as a thought experiment), and we have a tungsten or steel block that's 5 cm (or less, you choose) thick. Is there any physical method for that gas or liquid to pass through that thick heavy metal block (not by drilling etc.)? Maybe vibrating or something else; I am asking because I have no information about this. Any quantum-mechanical or unorthodox ideas and theories are accepted. Maybe some solid-state physicist could help me. Maybe some proposal which works like diffusion, I don't know. I am here to listen and learn. Thanks. | Yes, some gases can diffuse into and through metal. It is the bane of the high-vacuum engineer's life. Hydrogen is the worst because it tends to dissociate into atoms at the surface, and the nucleus, a single proton, can then leave its electron behind and wander through the metal lattice until it picks up another electron when it leaves. For example, mu-metal, favoured for some applications, typically has to be annealed in hydrogen at high temperature. Once that is over, it can take weeks or months for the residual hydrogen to diffuse out of the metal before a high enough vacuum can be achieved and the work can proceed. A "virtual leak" occurs where a small bubble of gas is embedded in the material inside a vacuum chamber. The leak usually happens because a tiny hole exists for the gas to diffuse out through, but sometimes the "hole" is no more than an ultra-thin skin of metal (invisible to the frustrated technician) and the gas diffuses through it. These little horrors can keep going for months or even years and generally mean replacing suspected parts and pumping down over and over again until the dodgy one is finally stumbled on. Helium is both monatomic and the physically smallest atom. It can diffuse more easily than any other neutral atom or molecule, making certain metal foils unsuitable as, say, gas-tight liners for airships. As noted in another answer, in quantity it can also affect the bulk properties of the metal. On a more energetic scale, hydrogen and helium nuclei (protons and alpha particles) can pass through thin metal foils if fired with sufficient energy, and this has been used to establish the crystalline structures of some metals and alloys (where, for whatever reason, electrons were unsuitable). Other gases have much larger atoms (neon and other noble gases) or molecules (nitrogen and other diatomic molecules, water and other hydrides), but they can still diffuse extremely slowly through some metals. This can limit the lifetime of some microchips. A related phenomenon occurs where there is a defect in the lattice at the surface, such as a grain boundary, and a gas atom attaches to it. Defects are sometimes quite mobile and can migrate through the lattice; the gas atom will stabilise the defect and may be able to hitch a ride. Quantum processes such as tunnelling are not really relevant, as they work over distances smaller than the atomic wavelength, which in turn is typically far smaller than the thickness of any metal atom or foil. The probability of a gas atom tunnelling across is so infinitesimal as to be effectively zero. | {
"source": [
"https://physics.stackexchange.com/questions/587050",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/251995/"
]
} |
587,214 | It makes sense to me that we can find some operator that gives us eigenfunctions that correspond to definite values for some desired observable. However, I do not see how the eigenvalues happen to give you the actual measurable values. I feel like there is something obvious I'm missing. | Suppose we don't know quantum mechanics yet and we want to calculate the expectation value of an observable $A$ . It could be momentum, spin, whatever. It is given by $$\mathbb E(A)=\sum_i a_i\,p(a_i)$$ where $a_i$ are the possible outcomes and $p(a_i)$ are the probabilities of those outcomes. When the outcome is continuous this becomes an integral. To each of these outcomes we can associate a state vector $|a_i\rangle$ , and it is possible to make these states orthonormal such that $\langle a_i|a_j\rangle=\delta_{ij}$ . Quantum mechanics is linear, so if we have two solutions $|a_1\rangle,|a_2\rangle$ then the state $|\psi\rangle=\alpha|a_1\rangle+\beta|a_2\rangle$ is also a valid solution. How do we interpret this new state? It is a postulate (the Born rule) that the probability of finding $a_1$ is given by $p(a_1)=|\alpha|^2$ . This means we have to normalize $|\psi\rangle$ such that $|\alpha|^2+|\beta|^2=1$ in order for it to be a valid state. If we then define Dirac notation as usual, we get $\alpha=\langle a_1|\psi\rangle$ and $\alpha^*=\langle \psi|a_1\rangle$ , which you can check using orthonormality. After some manipulation we can get the expectation value in the following form \begin{align}
\mathbb E(A)&=|\alpha|^2a_1+|\beta|^2a_2\\
&=\alpha^*\alpha\ a_1+\beta^*\beta\ a_2\\
&=\langle \psi|a_1\rangle \langle a_1|\psi\rangle a_1+\langle \psi|a_2\rangle \langle a_2|\psi\rangle a_2\\
&=\langle \psi|\left(\sum_i a_i\,|a_i\rangle\langle a_i|\right)|\psi\rangle
\end{align} If we then define $\hat A=\sum_i a_i\,|a_i\rangle\langle a_i|$ then we get $\mathbb E(A)=\langle \psi|\hat A|\psi\rangle$ . So what's the link with eigenvectors/eigenvalues? It turns out that, according to the spectral theorem, any Hermitian matrix can be written as $\hat A=\sum_i \lambda_i\,|\lambda_i\rangle\langle \lambda_i|$ where $\lambda_i$ are its eigenvalues and $|\lambda_i\rangle$ its eigenvectors. Notably, these eigenvectors form an orthonormal basis. This implies that only the eigenvectors of $\hat A$ can give the outcome of a measurement. This is because $|a_i\rangle\langle a_i|$ is a projection along $|a_i\rangle$ . Any vectors that are orthogonal to $|a_i\rangle$ will be projected out. If a state is orthogonal to all eigenvectors of $\hat A$ , which means it can't be written as a sum of eigenvectors, then it will automatically give zero contribution in the expectation value because it is projected out. As a final note I would like to add that my reasoning has been a bit backwards from how you would usually do it, but I hope this made it more clear why this eigenvalue/eigenvector construction actually makes a lot of sense. | {
"source": [
"https://physics.stackexchange.com/questions/587214",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/153727/"
]
} |
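A minimal numerical check of the construction above (a sketch; the eigenvalues and state coefficients below are arbitrary choices for illustration):

```python
import numpy as np

# Build a Hermitian operator from its spectral decomposition, A = sum_i a_i |a_i><a_i|
a = np.array([1.0, 2.0, 5.0])                  # arbitrary eigenvalues (possible outcomes)
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
A = Q @ np.diag(a) @ Q.conj().T                # columns of Q are the eigenvectors |a_i>

psi = Q @ np.array([0.6, 0.8j, 0.0])           # normalized state expanded in that eigenbasis
probs = np.abs(Q.conj().T @ psi) ** 2          # Born rule: p(a_i) = |<a_i|psi>|^2

lhs = np.real(psi.conj() @ A @ psi)            # <psi|A|psi>
rhs = np.sum(probs * a)                        # sum_i a_i p(a_i)
print(lhs, rhs)                                # the two agree to machine precision
```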
587,656 | Imagine that a ball of mass $m$ is launched at a block, which also has mass $m$ . Attached to the block, facing the ball, is a massless spring with a massless board at the end. Alternatively, we can assume that the block, spring, and board taken together have mass $m$ . Assume there is no gravity and no dissipative forces. Suppose that the length of the spring and the spring constant are sufficient to stop the ball. When the velocity of the ball is zero, the spring becomes locked; it has some mechanism that physically prevents it from expanding or contracting, which is activated using an arbitrarily small amount of energy. After the interaction, how much potential energy is stored in the spring? In solving this problem, an apparent paradox appears. Because the block-spring system and the ball have the same mass, and because the ball is at rest after the interaction, momentum conservation implies that the final velocity of the block should equal the initial velocity of the ball. This would imply that the final kinetic energy of the block is the same as the initial kinetic energy of the ball. But then there can be no potential energy stored in the spring, even though it has been compressed. Obviously such a setup is impossible because of the massless spring and the lack of energy dissipation. However, these assumptions are fairly standard in physics, so one would not expect them to lead to a contradiction. Also, I am aware of the principle that an idealized ratchet-like mechanism cannot exist because it could be used to violate the second law of thermodynamics. I understand that argument, but the problem here is not a violation of the second law, but rather a contradiction of energy and momentum conservation. Which of the premises of the problem is responsible for this contradiction, and why? | If the spring locks when the ball is at rest in the lab frame, then by the arguments you give it follows that the spring must not be compressed at all. This is indeed the case. As the ball slows down, the block begins to speed up. Eventually they are traveling at the same speed, at which point the spring has reached its maximum compression. As the spring begins to expand, the block's velocity becomes greater than that of the ball. When the spring attains its uncompressed length, the ball comes to rest and the block is traveling with speed $v$ . This can be shown directly. Let $x(t)$ be the position of the ball and $y(t)$ be the position of the block, and let us consider left to be the positive direction in accordance with your figure. At time $t=0$ the ball makes contact with the spring. Let the initial position and velocity of the ball be $x(0)=0$ and $x'(0) = v$ , and the initial position and velocity of the block be $y(0)=L$ and $y'(0) = 0$ where $L$ is the (unimportant) unstretched length of the spring. The dynamics of the system are governed by the equations $$m x'' = k(y-x-L)$$ $$m y'' = -k(y-x-L)$$ We can define the auxiliary variables $u = \frac{x+y}{2}$ and $w = \frac{x-y+L}{2}$ to obtain $$ u'' = 0 \implies u(t)= \frac{L+vt}{2}$$ $$w'' = -\frac{k}{m} w \implies w(t)= \frac{v}{2\omega}\sin(\omega t)$$ where $\omega=\sqrt{k/m}$ and I've applied the initial conditions stated above. We can invert these relations to find $x$ and $y$ to be $$x(t) = u+w-\frac{L}{2} = \frac{vt}{2}+\frac{v}{2\omega}\sin(\omega t)$$ $$y(t) = u-w+\frac{L}{2} = L + \frac{vt}{2} - \frac{v}{2\omega}\sin(\omega t)$$ The ball comes to rest when $x'(t) = \frac{v}{2}(1+\cos(\omega t)) = 0 \implies \omega t = \pi$ . 
However, at this time we have that $y-x = L$ and $y'= v$ . An alternative is that the spring locks when the maximum compression is achieved, i.e. when $y'=x'$ . This occurs when $\cos(\omega t)=0 \implies \omega t = \pi/2$ . At this moment, the velocity of the ball and the block are both $v/2$ , in accordance with the conservation of momentum in a completely inelastic collision. | {
"source": [
"https://physics.stackexchange.com/questions/587656",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/199698/"
]
} |
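A quick numerical check of the closed-form solution in the answer above (the parameter values are arbitrary choices of mine):

```python
import numpy as np

m, k, L, v = 1.0, 1.0, 1.0, 1.0        # arbitrary mass, spring constant, length, speed
w = np.sqrt(k / m)                     # omega = sqrt(k/m) as in the answer
t = np.pi / w                          # the moment the ball comes to rest

x = v * t / 2 + (v / (2 * w)) * np.sin(w * t)
y = L + v * t / 2 - (v / (2 * w)) * np.sin(w * t)
x_dot = (v / 2) * (1 + np.cos(w * t))
y_dot = (v / 2) * (1 - np.cos(w * t))

print(y - x, x_dot, y_dot)             # prints L, 0, v: spring relaxed, block carries speed v
```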
587,657 | As photons are quantum particles and basically waves in a quantum field, could an infinite number of photons exist in a closed space described by finite numbers? Does the answer to this apply to other fundamental particles as well? | In principle you could fit a very large number of photons into a finite volume, but only up to a limit. Even though photons are waves, they have energy, and from general relativity you can only have so much energy in a certain region before the energy density is so high that the region will collapse into a black hole. At this point the region will be infinitely dense and infinitely small. So you probably could not fit an infinite number in a finite volume, as the energy density would be infinite. This applies to other fundamental particles as well (assuming they have no well-defined volume), since they have mass and therefore energy. Furthermore, if you were to continually put more photons/matter into it, the "stuff" inside the black hole (given a sufficient length of time) would gradually dissolve by radiating away the energy of the stuff that was there to begin with, once again meaning that no finite region can hold infinite photons/particles. For more on this last part, see Hawking Radiation . | {
"source": [
"https://physics.stackexchange.com/questions/587657",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/211721/"
]
} |
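To make the limit in the answer above quantitative (a rough order-of-magnitude estimate added for illustration, under that answer's assumptions): a region of radius $R$ collapses once its energy reaches the Schwarzschild bound $E \lesssim \frac{Rc^4}{2G}$ , so for photons of frequency $\nu$ the maximum number that fits is roughly $$N \lesssim \frac{Rc^4}{2Gh\nu},$$ which is astronomically large for any laboratory-scale $R$ but always finite.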
587,671 | For $V_2$ , why is there no current flowing from the positive terminal from $V_2$ ? In other words, why does $I_3$ win out from the current that would be flowing from $V_2$ if it was the only battery in the circuit? Also, why is the Kirchoff loop ebcde in this diagram oriented this way? Could we not argue that the loop should be in the direction edcbe given the Kirchoff loop points in the direction of current flow coming out of the positive terminal of the battery and starts where it ends by convention? | | {
"source": [
"https://physics.stackexchange.com/questions/587671",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/271734/"
]
} |
588,086 | I'm no expert on string theory, but I've been reading about it. I've been quite surprised because of how it appears to be inconsistent with observations, but hasn't been rejected yet. Examples: From "On the cosmological implications of the string Swampland": Criterion 2: The current B-mode constraint $\epsilon < 0.0044$ corresponds to $|\nabla_\phi V|/V < 0.09$ , in tension with the second Swampland criterion $|\nabla_\phi V|/V > c \sim O(1)$ . Near-future measurements will be precise enough to detect values of $r$ at the level of $0.01$ ; failure to detect would require $|\nabla_\phi V|/V \lesssim 0.035$ . The plateau models, favored by some cosmologists as the simplest remaining that fit current observations, require $|\nabla_\phi V|/V \lesssim 0.02$ during the last 60 e-folds, which is in greater tension with the second Swampland criterion. This seems to imply that this second Swampland criterion is inconsistent with observations by at least one order of magnitude, possibly two. Example #2 : The conjectured formula — posed in the June 25 paper by Vafa, Georges Obied, Hirosi Ooguri and Lev Spodyneiko and further explored in a second paper released two days later by Vafa, Obied, Prateek Agrawal and Paul Steinhardt — says, simply, that as the universe expands, the density of energy in the vacuum of empty space must decrease faster than a certain rate. The rule appears to be true in all simple string theory-based models of universes. But it violates two widespread beliefs about the actual universe: It deems impossible both the accepted picture of the universe's present-day expansion and the leading model of its explosive birth. So string theory is inconsistent with inflation, dark energy, and Big Bang theory. Even if one argues that the observational evidence behind inflation is not rock solid, surely the other two should be on very firm ground. Why hasn't string theory been rejected yet? Or, even if string theory itself hasn't been rejected, why haven't these problematic swampland conjectures been rejected? It's weird to me how string theorists are apparently excited by developments (as in Example #2 above) when they are seemingly fatal to the theory. The only possible explanation I can see is that string theory hasn't been falsified, it's just encountered difficulties - but if that's the case then it reminds me somewhat of steady state cosmology vs. Big Bang theory of the past, and being able to appeal to one of the $10^{500}$ possible universes in string theory as the "solution" doesn't seem appealing at all. | You surely know that string theory has zillions of vacua. Most of these vacua can immediately be ruled out e.g. because they have the wrong number of macroscopic dimensions, or for similar reasons. But among those that remain possibilities - possessing the right qualitative possibilities - it is exceedingly difficult to calculate anything testable. The interest in the "swampland hypotheses" - hypotheses that certain things are impossible in string theory - is that they might dramatically speed up the understanding of the theory, and its application to reality. For example, if a metastable de Sitter space lasting for cosmological durations really is impossible in string theory, then dark energy needs to be explained in some other way, e.g. via quintessence. Swampland hypotheses can also potentially have sharp implications for the allowed values of the parameters in effective field theory. But the keyword is, potentially . None of these hypotheses have been proven.
It's a little like in mathematics, where there are various high-powered propositions (generalized Riemann hypothesis, abc conjecture...) which have never been proven, but most people think they are true, and have figured out many of the further consequences, if they are true. The swampland research still has this conjectural character, and the swampland hypotheses are still challenged e.g. by the people who constructed a landscape of putative de Sitter vacua for string theory in the 2000s. Those constructions have some heuristic, not entirely rigorous ingredients, which the swampland hypotheses imply must actually be flawed. So there is a technical debate underway about whether or not they are viable. (The implications of swampland hypotheses for the reality of the string theory landscape, and the paradigm of anthropic selection within eternal inflation, would be another reason why there is lively interest. After all, the swampland is defined as the space of field theories that aren't in the landscape.) You could say that without the swampland debate, string theory would be stuck just with either handwaving anthropic justifications for the observed features of the world, or the slow technical improvement in the ability to calculate particle properties. The swampland debate is an opportunity to move ahead on a third front. | {
"source": [
"https://physics.stackexchange.com/questions/588086",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/177855/"
]
} |
588,299 | As far as I have understood, the case is that there is nothing that argues that time or space is continuous, but at the same time we must assume this in order to be able to calculate derivatives or integrals with respect to these; how can we justify this? | Let's say space is really a lattice with spacing $\Delta x$ . It turns out that this idea has more trouble with experiment than you might think, but we can plow ahead for the purposes of this question. You might propose replacing integrals in physics with discrete sums over individual lattice points. To take a concrete example, let's think about the work needed to move a particle from point $A$ to point $B$ \begin{equation}
W = \int_A^B \vec{F} \cdot {\rm d} \vec{x} \rightarrow \sum_{i=1}^N \vec{F}(\vec{x}_i) \cdot \hat{e}_{i,i+1} \Delta x
\end{equation} where $i=1,2,...,N$ labels the lattice points that the particle follows in going from $A$ to $B$ and $\hat{e}_{i,i+1}$ is a vector pointing from the lattice point at $i$ to the lattice point at $i+1$ . If $\Delta x$ is small enough so $N$ is large enough, these two quantities will be quite close (since in the limit of infinite $N$ the two quantities are actually exactly the same). To see a difference (if there is one) we need to probe distances of the same order as or smaller than $\Delta x$ , or else have high enough precision to tell the difference between these two expressions. Here's the point. No one has ever found any disagreement between experiment and theory that can be attributed to the failure of the continuum limit. If there is such a $\Delta x$ , it must be so small that it is a very good approximation to use integrals instead of sums over the lattice in all experiments done to date. You can think of the LHC as probing energy scales of order 1-10 TeV, which amounts to $10^{-18}-10^{-19}$ meters -- so $\Delta x$ , if it is nonzero, must be smaller than this. There are other problems with having a lattice, but this is already a powerful argument that the world is at least effectively continuous at the scales we can probe. | {
"source": [
"https://physics.stackexchange.com/questions/588299",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/234261/"
]
} |
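A minimal sketch of the comparison in the answer above (the 1D force profile is an arbitrary choice of mine; the point is only how the lattice sum approaches the integral as $\Delta x$ shrinks):

```python
import numpy as np

def F(x):
    return np.sin(x)                   # an arbitrary 1D force profile

A, B = 0.0, np.pi                      # endpoints of the path
exact = 2.0                            # the integral of sin(x) from 0 to pi

for N in (10, 100, 1000, 10000):
    dx = (B - A) / N
    x = A + dx * np.arange(N)          # lattice points along the path
    lattice_sum = np.sum(F(x)) * dx    # sum_i F(x_i) dx
    print(N, abs(lattice_sum - exact)) # discrepancy shrinks with the spacing
```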
588,845 | Why is the combination of two light waves (red, yellow) perceived as the same color as the arithmetic mean of their frequencies (orange) while we perceive two musical notes at the same time as just those two waves stacked on top, and not the mean of those frequencies? | Because 16,000 is greater than 3. We only have 3 sorts of detector (called cones) in the eye, sensitive broadly to red, green and blue light.
So a mix of red and green light excites the red and green cones. But yellow light, in between red and green, also excites the red and green cones, and the brain can't tell the difference. So it's not the mean of the frequencies - a red-blue mix is different from green - but there is some averaging going on. But the ear has 16,000 hair cells each sensitive to a particular frequency, so the brain gets a whole lot more information. A 256 Hz C will excite the 256 Hz hair cell, but a mix of 242 Hz B and 271 Hz C# will excite those two receptors and not the one in between. | {
"source": [
"https://physics.stackexchange.com/questions/588845",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/277859/"
]
} |
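A toy illustration of the metamerism described above (the Gaussian cone curves, peak wavelengths, and mixing weights below are crude stand-ins of mine, not real physiological data):

```python
import numpy as np

def cone_response(wavelengths, intensities):
    """Total excitation of idealized L/M/S cones modeled as Gaussians."""
    peaks, width = np.array([565.0, 535.0, 445.0]), 40.0  # rough peak sensitivities, nm
    w = np.asarray(wavelengths)[:, None]
    i = np.asarray(intensities)[:, None]
    return (i * np.exp(-((w - peaks) ** 2) / (2 * width ** 2))).sum(axis=0)

yellow = cone_response([580.0], [1.0])             # monochromatic yellow light
mix = cone_response([630.0, 532.0], [2.46, 0.39])  # red + green mix, tuned by hand

print(yellow)  # the dominant L and M excitations...
print(mix)     # ...nearly match, so the eye cannot tell the two stimuli apart
```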