source_id | question | response | metadata
---|---|---|---|
281,828 | If Earth had rings, would they center on the equator like Saturn's rings do on its equator? | Great question. If Earth had rings, and they had been there as long as the Moon has, they would most likely lie parallel to Earth's equator and be visible in the sky in an east-to-west orientation. So how would Earth acquire a ring? Our Moon is, in reality, slowly moving away from the Earth, but if it were instead moving inwards, eventually it would break apart due to differential gravitational forces between the side nearest us and the far side, roughly 3,500 km away. Obviously, a large amount of the Moon would bombard the Earth, but this answer assumes we survive. Typically, the Roche limit applies to a satellite disintegrating due to tidal forces induced by its primary, the body about which it orbits. Parts of the satellite that are closer to the primary are attracted more strongly by the primary's gravity than parts that are farther away; this disparity effectively pulls the near and far parts of the satellite apart from each other, and if the disparity (combined with any centrifugal effects due to the object's spin) is larger than the force of gravity holding the satellite together, it can pull the satellite apart. View of ring from Washington. Since, within the Roche limit, tidal forces overwhelm the gravitational forces that might otherwise hold the satellite together, no satellite can gravitationally coalesce out of smaller particles within that limit. Indeed, almost all known planetary rings are located within their Roche limit, Saturn's E Ring and Phoebe ring being notable exceptions. They could either be remnants from the planet's proto-planetary accretion disc that failed to coalesce into moonlets, or conversely have formed when a moon passed within its Roche limit and broke apart. The Roche limit can be viewed as an imaginary Earth-shaped "border", on average 9,492 km from the centre of Earth (1.49 times Earth's radius) for rigid bodies. 
So around the equator it "moves" outwards a little: it follows the oblate-spheroid shape of Earth. Earth may have had a ring just after its formation. The view of these rings from Earth would vary; it would all depend on your latitude and which direction you were facing. Near the equator, the rings would be like thin slices of light that erupted from distant Earth horizons and stretched into the sky as far as the eye could see. Thanks to Emilio Pisanty for correctly pointing out that the depiction of the rings from mid and high latitudes is not completely accurate: the plane of the ground is not orthogonal to the plane of the rings, so they would appear at an angle. All I can do is ask for some personal latitude in the presentation of this "what if" scenario. The pictures assume the ring around Earth would be in the same proportion as Saturn's ring is to that planet. View of ring from the equator. Why does the ring form around the equator as opposed to some other axis?
It's due to the effect of the Central Force Law, the same basic reason the planets are situated in a plane around the Sun. The Sun is nearly spherical,
so objects such as Pluto can "get away" with being 8 degrees out of line. If the Earth (and Saturn) were perfect spheres, then the axis of the ring could be at any angle. Because both planets are oblate spheroids, with an equatorial bulge, over time the particles composing the ring would collect there. Saturn's rings have an estimated local thickness of as little as 10 metres and as much as 1 kilometre, so they are extremely "thin". View of rings from the mid latitudes. View of rings at 23° south latitude: a 180° panorama gives an idea of what a magnificent sight the rings would be; the Earth itself is casting the shadow. Image source: If Earth Had a Ring Like Saturn | {
"source": [
"https://physics.stackexchange.com/questions/281828",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/148704/"
]
} |
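The rigid-body Roche limit figure quoted in the answer above (9,492 km, 1.49 Earth radii) can be checked with the standard formula $d = R_p\,(2\rho_p/\rho_s)^{1/3}$; the density values below are standard reference numbers, not taken from the answer itself, so the result differs slightly:

```python
# Rigid-body Roche limit: d = R_p * (2 * rho_p / rho_s)**(1/3)
# Densities are standard reference values (an assumption of this sketch).
R_earth = 6371.0      # mean Earth radius, km
rho_earth = 5514.0    # mean density of Earth, kg/m^3
rho_moon = 3344.0     # mean density of the Moon, kg/m^3

d = R_earth * (2.0 * rho_earth / rho_moon) ** (1.0 / 3.0)
print(round(d), round(d / R_earth, 2))  # roughly 9,490 km, ~1.49 Earth radii
```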
282,047 | Temperature is the average kinetic energy of all the molecules in a system. Thus
\begin{align}
T
&= \frac{1}{n}\sum \frac{1}{2} m v^2 \\[6pt]
&= \frac{1}{N}\sum \frac{1}{2} m v^2 \cdot 6.022 \times 10^{23} \, .
\end{align}
where $N$ is the number of particles and $n$ is the amount of substance. The unit here is J/mol. Why isn't this used in place of temperature (instead of K, °C or °F)? | "Temperature is the average kinetic energy of all the molecules in a system." It’s not that simple. Let’s first look at an ideal, monatomic gas. Here we have: $$ T = \frac{2}{3 k_\text{B}} \bar{E}_\text{kin},$$ where $k_\text{B}$ is the Boltzmann constant and $\bar{E}_\text{kin}$ shall be the average kinetic energy of each particle. Apart from the proportionality constants, this is what you were describing. However, the $3$ in that equation is the number of degrees of freedom of each particle. If we consider an ideal gas whose particles are molecules composed of two atoms (e.g., oxygen), we have five degrees of freedom and thus: $$ T = \frac{2}{5 k_\text{B}} \bar{E}_\text{kin}.$$ In real situations (i.e., with non-ideal gases, fluids, and solid states), things become much more complicated, as there are various degrees of freedom with differing properties, and all of this becomes temperature-dependent on top. This is what is usually expressed in the heat capacity of a material. Therefore there is no simple way to define temperature using kinetic energy. Now, you may ask: “Why is temperature defined such that we have to bother about heat capacities in the first place? Couldn’t we just define it via the average kinetic energy despite all of this?” In theory, we could of course do this, but then we would lose one of the most practical properties of temperature, namely that two bodies with the same temperature are in thermal equilibrium. For example, two materials which melt under the same conditions would very likely have different melting "temperatures"¹, or every object in your room would have a vastly different "temperature". Also, it would be very tedious to measure that "temperature". ¹ in the new sense | {
"source": [
"https://physics.stackexchange.com/questions/282047",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/105888/"
]
} |
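The monatomic and diatomic relations in the answer above can be checked numerically by inverting the equipartition formula; the sample kinetic energy below is an illustrative value, not taken from the answer:

```python
k_B = 1.380649e-23  # Boltzmann constant, J/K

def temperature_from_mean_ke(mean_ke, dof):
    # Invert E_kin = (dof / 2) * k_B * T  =>  T = 2 * E_kin / (dof * k_B)
    return 2.0 * mean_ke / (dof * k_B)

mean_ke = 6.21e-21  # J per particle (illustrative, roughly room temperature)
T_mono = temperature_from_mean_ke(mean_ke, 3)  # monatomic ideal gas
T_di = temperature_from_mean_ke(mean_ke, 5)    # diatomic ideal gas
print(round(T_mono), round(T_di))  # same mean KE, different temperatures
```

The same mean kinetic energy corresponds to a temperature lower by a factor 3/5 for the diatomic gas, which is exactly why a kinetic-energy "temperature" would not be material-independent.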
282,067 | It seems to me there are two types of balances. When you balance rocks, it seems like as you're getting closer to the equilibrium point, the stones tend less and less to fall. But when you balance for example this spoon on the tape, it's impossible to me to find the equilibrium point and the spoon seems to move from pushing one side to the state of dominance of the force in the other direction without any intermediate state (I tried it on several similar objects and I tried to fix the spoon tight to the tape as well. Also, the objects had al). What is the difference between these two systems and what's the official name for it? | | {
"source": [
"https://physics.stackexchange.com/questions/282067",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/41078/"
]
} |
282,286 | So, I've been trying to teach myself about quantum computing, and I found a great YouTube series called Quantum Computing for the Determined. However, why do we use ket/bra notation? Normal vector notation is much clearer (okay, clearer because I've spent a couple of weeks versus two days with it, but still). Is there any significance to this notation? I'm assuming it's used for a reason, but what is that reason? I guess I just don't understand why you'd use ket notation when you have perfectly good notation already. | Indeed, I agree with you: standard notation is, in my personal view, already sufficiently clear, and bra-ket notation should be used when it is really useful. A typical case in QM is when a state vector is determined by a set of quantum numbers, like this $$\left|l m s \right\rangle$$
Another case concerns the use of the so-called occupation numbers $$\left|n_{k_1} n_{k_2}\right\rangle$$ in QFT. Also, the qubit notation for the states $\left|0\right\rangle$, $\left|1\right\rangle$ in quantum information theory is meaningful...
Finally, bra-ket notation permits one to denote orthogonal projectors onto subspaces in a very effective manner:
$$\sum_{|m|\leq l}\left|l m \right\rangle \left\langle l m\right|\:.$$ One reason for its use, in my view not completely justified nowadays, is historical and due to P.A.M. Dirac's famous textbook. In the 1930s, mathematical objects like Hilbert spaces, dual spaces, and self-adjoint operators were not very familiar tools to physicists. (The modern notion of Hilbert space was invented in 1932 by J. von Neumann in his less famous textbook on the mathematical foundations of QM.)
Dirac proposed a very nice notation which embodies a fundamental part of the formalism. However, it also has some drawbacks. In particular, manipulating non-self-adjoint operators, e.g. symmetries, turns out to be very cumbersome within the bra-ket formalism.
If $A$ is self-adjoint, the operator in $\left\langle \psi\right| A\left| \phi\right\rangle$ can be viewed, indifferently, as acting on the left or on the right, preserving the final result. If the operator is not self-adjoint, this is false. I think bra-ket notation is a very useful tool, but it should be used "cum grano salis" in QM. In my view, $\left|\psi\right\rangle$, where $\psi$ is a quantum-mechanical wavefunction, may be a dangerous notation, especially for students, as it generates misleading questions like this: is $A\left|\psi\right\rangle = \left|A\psi \right\rangle$? ADDENDUM. I understand that I interpreted the question in a broader sense, regarding the use of bra-ket notation in QM rather than the restricted field of quantum information theory. | {
"source": [
"https://physics.stackexchange.com/questions/282286",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/121464/"
]
} |
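The projector sum $\sum_{|m|\leq l}|lm\rangle\langle lm|$ from the answer above can be illustrated numerically. The tiny dimension-3 example below is purely a toy stand-in (two orthonormal basis kets playing the role of $|lm\rangle$), just to show that a sum of outer products really is an orthogonal projector:

```python
import numpy as np

# Projector onto the subspace spanned by two orthonormal kets of C^3,
# built as a sum of outer products |k><k|  (toy stand-in for sum |lm><lm|).
kets = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
P = sum(np.outer(k, k.conj()) for k in kets)

print(np.allclose(P @ P, P))       # idempotent: P^2 = P
print(np.allclose(P, P.conj().T))  # self-adjoint: P = P^dagger
```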
282,410 | Consider a Stern-Gerlach machine that measures the $z$ -component of the spin of an electron. Suppose our electron's initial state is an equal superposition of $$|\text{spin up}, \text{going right} \rangle, \quad |\text{spin down}, \text{going right} \rangle.$$ After going through the machine, the electron is deflected according to its spin, so we get $$|\text{spin up}, \text{going up-right} \rangle, \quad |\text{spin down}, \text{going down-right} \rangle.$$ In a first quantum mechanics course, we say the spin has been measured. After all, if you trace out the momentum degree of freedom, we no longer have a spin superposition. In simpler words, you can figure out the spin by which way the electron is going. In a second course, sometimes you hear this isn't really a measurement: you can put the two beams through a second, upside-down Stern-Gerlach machine, to combine them into $$|\text{spin up}, \text{going right} \rangle, \quad |\text{spin down}, \text{going right} \rangle.$$ Now the original spin superposition is restored, just as coherent as before. This point of view is advanced in this lecture and the Feynman lectures . Here's my problem with this argument. Why doesn't the interaction change the state of the Stern-Gerlach machine? I thought the two states would be $$|\text{spin up}, \text{going up-right}, \text{SG down} \rangle, \quad |\text{spin down}, \text{going down-right}, \text{SG up} \rangle.$$ That is, if the machine pushes the electrons up, it itself must be pushed down by momentum conservation. After recombining the beams, the final states are $$|\text{spin up}, \text{going right}, \text{SG down} \rangle, \quad |\text{spin down}, \text{going right}, \text{SG up} \rangle.$$ and the spins cannot interfere, because the Stern-Gerlach part of the state is different! Upon tracing out the Stern-Gerlach machine, this is effectively a quantum measurement. 
This is a special case of a general question: under what circumstances can interaction with a macroscopic piece of lab equipment not cause decoherence? Intuitively, there is always a backreaction from the spin onto the equipment, which changes its state and destroys coherence, so it seems that every particle is always continuously being measured. In the case of a magnetic field acting on a spin, like in NMR, there is a resolution: the system state is a coherent state, because it's a macroscopic magnetic field, and coherent states are barely changed by $a$ or $a^\dagger$. But I'm not sure how to argue it for the Stern-Gerlach machine. | It's a very good question, since indeed if the original Stern-Gerlach machine had a well-defined momentum, then you are right that there could be no coherence upon rejoining the beams! The rule of thumb for decoherence: a superposition is destroyed/decohered when information has leaked out. In this setting that would mean that if by measuring, say, the momentum of the Stern-Gerlach machine you could figure out whether the spin had curved upwards or downwards, then the quantum superposition between up and down would have been destroyed. Let's be more exact, as it will then become clear why in practice we can preserve the quantum coherence in this kind of set-up. Let us for simplicity suppose that the first Stern-Gerlach machine simply imparts a momentum $\pm k$ (in units where $\hbar = 1$) to the spin, with the sign depending on the spin's orientation. By momentum conservation, the Stern-Gerlach machine gets the opposite momentum, i.e. (using that $\hat x$ generates translation in momentum space)
$$\left( |\uparrow \rangle + |\downarrow \rangle \right) \otimes |SG_1\rangle \to \left( e^{- i k \hat x} |\uparrow \rangle \otimes e^{ i k \hat x} |SG_1\rangle \right) + \left( e^{i k \hat x} |\downarrow \rangle \otimes e^{- i k \hat x} |SG_1\rangle \right) $$
Let us now attach the second (upside-down) Stern-Gerlach machine, with the final state
$$\to \left( |\uparrow \rangle \otimes e^{ i k \hat x} |SG_1\rangle \otimes e^{-i k \hat x} |SG_2\rangle \right) + \left( |\downarrow \rangle \otimes e^{- i k \hat x} |SG_1\rangle \otimes e^{ i k \hat x} |SG_2\rangle \right) $$ For a clearer presentation, let me now drop the second SG machine (afterwards you can substitute it back in since nothing really changes). So we now ask the question: does the final state $\boxed{ \left( |\uparrow \rangle \otimes e^{ i k \hat x} |SG_1\rangle \right) + \left( |\downarrow \rangle \otimes e^{- i k \hat x} |SG_1\rangle \right) }$ still have quantum coherence between the up and down spins? Let us decompose $$ e^{ -i k \hat x} |SG_1\rangle = \alpha \; e^{i k \hat x} |SG_1\rangle + |\beta \rangle $$ where by definition the two components on the right-hand side are orthogonal, i.e. $\langle SG_1 | e^{ -2 i k \hat x} | SG_1 \rangle = \alpha$. Then $|\alpha|^2$ is the probability we have preserved the quantum coherence! Indeed, the final state can be rewritten as
$$\boxed{ \alpha \left( |\uparrow \rangle +| \downarrow \rangle \right) \otimes e^{ i k \hat x} |SG_1\rangle + |\uparrow\rangle \otimes | \gamma \rangle + |\downarrow \rangle \otimes |\beta\rangle }$$ where
$\langle \gamma | \beta \rangle = 0$. In other words, tracing out over the Stern-Gerlach machine, we get a density matrix for our spin-system: $\boxed{\hat \rho = |\alpha|^2 \hat \rho_\textrm{coherent} + (1-|\alpha|^2) \hat \rho_\textrm{decohered}}$. So you see that in principle you are right: the quantum coherence is completely destroyed if the overlap between the SG machine states with different momenta is exactly zero, i.e. $\alpha = 0$. But that would only be the case if our SG machine had a perfectly well-defined momentum to begin with. Of course that is completely unphysical, since it would mean our Stern-Gerlach machine is smeared out over the universe. Analogously, suppose our SG machine had a perfectly well-defined position: then the momentum-translation is merely a phase factor, and $|\alpha|=1$, so in this case there is zero information loss! But of course this is equally unphysical, as it would mean our SG machine has completely random momentum to begin with. But now we can begin to see why in practice there is no decoherence due to the momentum transfer: in practice we can think of the momentum of the SG machine as being described by some mean value and a Gaussian curve, and whilst it is true that the momentum transfer from the spin slightly shifts this mean value, there will still be a large overlap with the original distribution, and so $|\alpha| \approx 1$. So there is strictly speaking some decoherence, but it is negligible. (This is mostly due to the macroscopic nature of the SG machine. If it were much smaller, then the momentum of the spin would have a much greater relative effect.) | {
"source": [
"https://physics.stackexchange.com/questions/282410",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/83398/"
]
} |
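The overlap $\alpha = \langle SG_1|e^{-2ik\hat x}|SG_1\rangle$ from the answer above can be made concrete for a Gaussian wavepacket: for a zero-mean Gaussian of position spread $\sigma_x$, the characteristic function gives $\langle e^{iq\hat x}\rangle = e^{-q^2\sigma_x^2/2}$, so with $q = 2k$ we get $|\alpha| = e^{-2k^2\sigma_x^2}$. The numeric values below are purely illustrative:

```python
import math

def coherence_overlap(k, sigma_x):
    # |alpha| = |<e^{-2 i k x}>| for a zero-mean Gaussian of spread sigma_x:
    # <e^{i q x}> = exp(-q^2 sigma_x^2 / 2), with q = 2k
    return math.exp(-2.0 * (k * sigma_x) ** 2)

k = 1.0e6  # momentum kick / hbar, in 1/m (illustrative value)
print(coherence_overlap(k, 1e-12))  # tightly localized machine: ~1, coherent
print(coherence_overlap(k, 1e-6))   # spread comparable to 1/k: exp(-2) ~ 0.14
```

This mirrors the answer's conclusion: when the machine's quantum position spread is tiny compared with $1/k$, the momentum kick barely distinguishes the two branches and $|\alpha|\approx 1$.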
282,459 | This may be a basic question, but I have never understood it completely: why is an earthed conductor always at zero potential? I would say it is because theoretically one can suck up charge from the earth without doing work, hence it is at zero potential (all earth charge is at infinity), though I'm not sure if this makes sense. | The electrical potential has a gauge freedom because we can arbitrarily set the zero anywhere we want. This is because we can only ever measure differences in the potential and not the absolute value of the potential. Conductors that are earthed are all at the same potential (because they are earthed to the same planet), so it is usually convenient to choose this as our zero. Any potentials that we measure are then differences from the potential of the earth. | {
"source": [
"https://physics.stackexchange.com/questions/282459",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/20281/"
]
} |
282,745 | If light is faster in vacuum medium than in air medium, does it mean that we are seeing everything in a delayed manner since we live in air medium? Is there any way to see things in actual speed i.e. in vacuum? P.S. I'm not a physics grad, so I'm sorry if my question is trivial. | If you mean "do we see things in slow motion", the answer is "no". We see things with a slight delay, but at the same speed as if the medium were a vacuum. The easiest way to see this is to think about what would happen over time. Let's assume we are looking at a clock, and the light from the clock gets to us slowly - say it takes a second longer than it would in a vacuum. Then when the second hand reaches "1 second past the hour", I see it at the top of the hour. But a second later, the information "it is now one second later" must reach me. Otherwise, all that information would end up piled up between the clock and me - and a person who just walked into the room would either see a different time than I see (they see the one-second delay), or for them the situation would be different than it was for me when I walked into the room. Neither of those things makes sense. So - a constant delay due to the extra time the signal takes; but other than that, no difference in the speed with which observed events unfold. As was pointed out by @hobbs, the actual difference in speed between light in vacuum and in air is tiny. With the refractive index of air at STP around 1.0003, the difference is not something you would normally notice. Light travels 1 meter in about 3 nanoseconds; on that scale, an extra 0.03% adds about 1 picosecond. | {
"source": [
"https://physics.stackexchange.com/questions/282745",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/118921/"
]
} |
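The closing numbers of the answer above can be reproduced directly (the refractive index is the value quoted in the answer):

```python
c = 299_792_458.0  # speed of light in vacuum, m/s
n_air = 1.0003     # approximate refractive index of air at STP (from the answer)

t_vacuum = 1.0 / c          # time for light to cross 1 m of vacuum
extra = (n_air - 1.0) / c   # additional delay per metre of air
print(t_vacuum * 1e9)   # ~3.34 ns
print(extra * 1e12)     # ~1 ps
```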
283,305 | I feel more tired walking 1 km than cycling 1 km at the same speed. However when cycling I am moving the extra weight of the cycle along! | This is one of those cases where the physics definition of "work done" does not match your experience. If you move an object that is subject to gravity along a horizontal surface, physics tells us the only work done is the work done to accelerate the object and to overcome the force of friction. Biologically, "walking" is a complex action that involves many muscles contracting and stretching. But muscles are not a reversible system (unlike a spring) - work done contracting is not returned when they stretch again (incidentally, this is why some animals like kangaroos have highly elastic tendons... this greatly improves the efficiency of their jumping). So if you do a deep squat, returning to the same position, you will have expended (chemical) energy, even though you "did no net work" in the physics sense. Walking involves continuous (small) changes in the height of your center of mass, and so a lot of "shallow squats". Even if you could walk smoothly without bouncing your center of mass up and down, your legs will bend and stretch as you absorb the shocks of the road (your leg has to be straight when it is placed in front of you, and bent when it is directly underneath you - or you have to move your center of gravity). When you look at cycling, you don't have to carry your entire weight on your legs - the only work you need to do is the work required to overcome the small rolling friction (at walking speed) and air drag (if you go a little faster). Also, your center of mass stays at a constant height, so there is no energy lost in "bouncing". If you look at calories burned, this is confirmed. Riding a fast mile on a bicycle you will burn about 50 kcal, and much less if you go more slowly; running a mile will burn about 130 kcal (depending on how heavy you are - this is for a 170-pound runner). 
UPDATE In Umberger et al., "A model of human muscle energy expenditure", Computer Methods in Biomechanics and Biomedical Engineering, 2003, Vol. 6 (2), pp. 99–111, the authors give a detailed model of the energy expenditure of different muscles, showing clearly that load, not just extension, plays a big role. And obviously when you carry your entire weight, you are carrying more of the load. They include the following diagram showing which muscles are loaded during which part of the walking cycle: It occurred to me that the earliest "bicycle" that I am aware of was the velocipede, a contraption that allowed one to "walk" while part of the body weight was carried by the "bike". This immediately reduced the effort required for locomotion and provides further evidence for the above. Image from that article: | {
"source": [
"https://physics.stackexchange.com/questions/283305",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/29849/"
]
} |
283,682 | Me and my friend were arguing. I think there could theoretically be different types of protons, but he says not. He says that if you have a different type of proton, it isn't a proton, it's something else. That doesn't make sense to me! There are different types of apples, but they're still called apples! He says that's how protons work, but can we really know that? | It is an experimental fact that all electrons and also all protons (and this often applies also to nuclei, atoms and even molecules) are indistinguishable from one another, i.e., they are identical particles. Imagine performing the following experiment: you take two objects A and B, perform as many measurements as you want on them, put them into a "black box", shake the box and then take them out. At this point, you want to be able to tell which object is A and which is B. Let's say that A and B are two...apples. You can then measure their mass, their volume, take photographs of them etc.: you will obtain different results (taking into account experimental errors). Therefore, the only thing you have to do is take note of these results and you will be able to tell which is A and which is B. However, if you try to do the same thing with two electrons, you will discover that all the quantities you can measure (mass, charge, spin, etc.) are identical within experimental error. Therefore, you will not be able to tell one electron from the other. This is an experimental fact, and as far as I know there is no theoretical reason why it should be so. Maybe one day we will be able to perform more precise measurements and we will discover that electron charges are actually slightly different from each other! P.S. I would like to stress that it is pointless to say that protons are identical because they are made of identical quarks, because this only shifts the problem from protons to quarks (we could then ask "why are all quarks identical?"). | {
"source": [
"https://physics.stackexchange.com/questions/283682",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/131873/"
]
} |
284,444 | Given Planck's energy-frequency relation $E=hf$, since energy is quantized, presumably there exists some quantum of energy that is the smallest possible. Is there truly such a universally-minimum quantum of $E$, and does that also mean that there is a minimum-possible frequency (and thus, a maximum-possible wavelength)? | "since energy is quantized" - You have a misunderstanding here of what quantization means. At present, in our theoretical models of particle interactions, all the variables are continuous, both space-time and energy-momentum. This means they can take any value from the field of real numbers. It is the specific solution of quantum mechanical equations, with given boundary conditions, that generates quantization of energy. The same is true for classical differential equations, as far as frequencies go. Sound frequency can take any value, and its quantization into specific modes depends on the specific problem and its boundary conditions. There do exist limits, given by the values of the constants used in the quantum mechanical equations of elementary particles: the Planck length and the Planck time. The reciprocal of the Planck time can be interpreted as an upper bound on the frequency of a wave; this follows from the interpretation of the Planck length as a minimal length, and hence a lower bound on the wavelength. These scales are at the limits of what we can see in experiments and study in astrophysical observations, but that is another story. | {
"source": [
"https://physics.stackexchange.com/questions/284444",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/26704/"
]
} |
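The bound mentioned at the end of the answer above can be made quantitative with standard CODATA constants (the numeric values are not given in the answer): the Planck time is $t_P = \sqrt{\hbar G/c^5}$, and its reciprocal is the heuristic maximum frequency.

```python
hbar = 1.054571817e-34  # reduced Planck constant, J s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458.0       # speed of light, m/s

t_planck = (hbar * G / c**5) ** 0.5  # Planck time, s
f_max = 1.0 / t_planck               # heuristic upper bound on frequency, Hz
print(t_planck)  # ~5.39e-44 s
print(f_max)     # ~1.85e43 Hz
```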
284,450 | What is the difference between a functional and an operator? When we define an operator in physics, e.g. the momentum operator as $\hat{p} = i \frac{d}{dx}$, it is said this operator acts on the wave functions. But isn't something that takes a function as an argument also called a functional? Why do we call $\hat{p}$ the momentum operator and not the momentum functional? | Loosely, an operator (acting on a function space) takes functions to functions (e.g., $f(x)$ to $-i f'(x)$). On the other hand, a functional takes functions to numbers (think of a certain integral, or the derivative evaluated at a certain point). | {
"source": [
"https://physics.stackexchange.com/questions/284450",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1648/"
]
} |
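The distinction in the answer above can be mirrored in code (a loose analogy, not a rigorous definition): an operator returns a new function, while a functional returns a number.

```python
import math

def derivative(f, h=1e-6):
    # Operator: maps a function to another function (central difference)
    return lambda x: (f(x + h) - f(x - h)) / (2.0 * h)

def value_at_zero(f):
    # Functional: maps a function to a single number
    return f(0.0)

df = derivative(math.sin)       # df is itself a function
print(round(df(0.0), 6))        # ~1.0, i.e. cos(0)
print(value_at_zero(math.cos))  # 1.0
```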
286,360 | Newton's Law of Universal Gravitation tells us that the potential energy of object in a gravitational field is $$U ~=~ -\frac{GMm}{r}.\tag{1}$$ The experimentally verified near-Earth gravitational potential is
$$U ~=~ mgh.\tag{2}$$ The near-Earth potential should be an approximation for the general potential energy when $r\approx r_{\text{Earth}}$, but the problem I'm having is that they scale differently with distance. $(1)$ scales as $\frac 1r$. So the greater the distance from the Earth, the less potential energy an object should have. But $(2)$ scales proportionally to distance. So the greater the distance from the Earth, the more potential energy an object should have. How is this reconcilable? | Your equation (2) is the change in potential energy when the object moves vertically by a distance $h$ i.e. when the object moves from $r$ to $r+h$. Let's use equation (1) to calculate this: $$ \Delta U = GMm\left(\frac{1}{r}-\frac{1}{r+h}\right) $$ Subtracting the two fractions inside the bracket gives: $$\begin{align}
\Delta U &= GMm\left(\frac{r+h}{r(r+h)}-\frac{r}{r(r+h)}\right) \\
&= GMm\frac{h}{r(r+h)}
\end{align}$$ Since $h \ll r$ that means $r+h\approx r$ and our equation becomes: $$\begin{align}
\Delta U &\approx GMm\frac{h}{r^2} \\
&\approx \frac{GM}{r^2}mh \\
&\approx gmh
\end{align}$$ Footnote: I've just noticed that in your comment to Itachí's answer you ask if you can use a Taylor series. You can use a binomial expansion to make the approximation more obvious. You rewrite: $$ \Delta U = GMm\frac{h}{r(r+h)} $$ as: $$ \Delta U = \frac{GM}{r^2}mh\left(1+\frac{h}{r}\right)^{-1} $$ then a binomial expansion gives: $$ \Delta U = \frac{GM}{r^2}mh\left(1-\frac{h}{r} + O\left(\frac{h}{r}\right)^2 \right) $$ And as before since $h \ll r$ the term in the brackets is approximately one and we once again get: $$ \Delta U = \frac{GM}{r^2}mh $$ | {
"source": [
"https://physics.stackexchange.com/questions/286360",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/109338/"
]
} |
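The approximation derived in the answer above can be checked numerically; the values of $G$, $M$, and $r$ below are standard reference figures, not part of the answer:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24   # mass of Earth, kg
r = 6.371e6    # radius of Earth, m
m = 1.0        # test mass, kg
h = 100.0      # height change, m

exact = G * M * m * (1.0 / r - 1.0 / (r + h))   # from equation (1)
approx = (G * M / r**2) * m * h                 # m g h, with g = GM/r^2
print(exact, approx)  # agree to about one part in 10^5 (of order h/r)
```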
286,374 | A body is undergoing uniform circular motion. Imagine a mass swung in a circle on a string in the absence of gravity or the motion of the Earth about the sun. At some point, the string is cut or, in the other example, gravity is instantaneously turned off. Is there any reasonable frame of reference that depicts the body moving radially away from the center of rotation? [This question is prompted by an overly extensive comment section in another question. I feel obliged to bring it here since it seems to need more elaboration than comments provide.] Edit: I'm considering deleting this question. As mentioned, it was intended to bring an ancillary discussion from the comments of another question her where they could be more fully discussed. The person(s) involved there seem not interested in participating. Missing the context there, the question here, in isolation, seems rather deflated of meaning. | Your equation (2) is the change in potential energy when the object moves vertically by a distance $h$ i.e. when the object moves from $r$ to $r+h$. Let's use equation (1) to calculate this: $$ \Delta U = GMm\left(\frac{1}{r}-\frac{1}{r+h}\right) $$ Subtracting the two fractions inside the bracket gives: $$\begin{align}
\Delta U &= GMm\left(\frac{r+h}{r(r+h)}-\frac{r}{r(r+h)}\right) \\
&= GMm\frac{h}{r(r+h)}
\end{align}$$ Since $h \ll r$ that means $r+h\approx r$ and our equation becomes: $$\begin{align}
\Delta U &\approx GMm\frac{h}{r^2} \\
&\approx \frac{GM}{r^2}mh \\
&\approx gmh
\end{align}$$ Footnote: I've just noticed that in your comment to Itachí's answer you ask if you can use a Taylor series. You can use a binomial expansion to make the approximation more obvious. You rewrite: $$ \Delta U = GMm\frac{h}{r(r+h)} $$ as: $$ \Delta U = \frac{GM}{r^2}mh\left(1+\frac{h}{r}\right)^{-1} $$ then a binomial expansion gives: $$ \Delta U = \frac{GM}{r^2}mh\left(1-\frac{h}{r} + O\left(\frac{h}{r}\right)^2 \right) $$ And as before, since $h \ll r$, the term in the brackets is approximately one and we once again get: $$ \Delta U = \frac{GM}{r^2}mh $$ | {
"source": [
"https://physics.stackexchange.com/questions/286374",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/123046/"
]
} |
286,392 | In physics and chemistry, I learned that energy is directly proportional to the frequency of a wave $E=hf$ for light. However, in biology, the opposite is true - energy is high when frequency is low. (For example, in sound waves). Why does this discrepancy exist and why isn't there only one relationship between frequency and energy? | Your equation (2) is the change in potential energy when the object moves vertically by a distance $h$ i.e. when the object moves from $r$ to $r+h$. Let's use equation (1) to calculate this: $$ \Delta U = GMm\left(\frac{1}{r}-\frac{1}{r+h}\right) $$ Subtracting the two fractions inside the bracket gives: $$\begin{align}
\Delta U &= GMm\left(\frac{r+h}{r(r+h)}-\frac{r}{r(r+h)}\right) \\
&= GMm\frac{h}{r(r+h)}
\end{align}$$ Since $h \ll r$ that means $r+h\approx r$ and our equation becomes: $$\begin{align}
\Delta U &\approx GMm\frac{h}{r^2} \\
&\approx \frac{GM}{r^2}mh \\
&\approx gmh
\end{align}$$ Footnote: I've just noticed that in your comment to Itachí's answer you ask if you can use a Taylor series. You can use a binomial expansion to make the approximation more obvious. You rewrite: $$ \Delta U = GMm\frac{h}{r(r+h)} $$ as: $$ \Delta U = \frac{GM}{r^2}mh\left(1+\frac{h}{r}\right)^{-1} $$ then a binomial expansion gives: $$ \Delta U = \frac{GM}{r^2}mh\left(1-\frac{h}{r} + O\left(\frac{h}{r}\right)^2 \right) $$ And as before, since $h \ll r$, the term in the brackets is approximately one and we once again get: $$ \Delta U = \frac{GM}{r^2}mh $$ | {
"source": [
"https://physics.stackexchange.com/questions/286392",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/94698/"
]
} |
286,576 | I am wondering about this. When salty water in the ocean evaporates, we get clean, distilled water. Why is that?
I was trying to think about this, and maybe the ratio of the size/mass of water molecules to that of different salt molecules plays a role here, but it is not likely. The size/mass of e.g. alcohol molecules is much higher, but they still evaporate quite well.
Could you explain intuitively? | Water molecules are polar; this basically means that they have a "positive side" and a "negative side". Salt is composed of Na$^+$ and Cl$^-$ ions held together by electrostatic forces (it is an ionic compound). When salt is put into water, it dissociates, i.e. the Na$^+$ ions are separated from the Cl$^-$ ions. When in the water, such ions are surrounded by water molecules facing them with the side which has charge opposite to that of the ion; this way they can reach a lower energy state, since their electrostatic field is screened by that of the water molecules (picture below [source]). Water evaporates when the thermal energy of the molecules is high enough to break about half the hydrogen bonds between them [source]. For the ions, it is much more difficult to evaporate, because their thermal energy would have to be enough to compensate for the effect of the water molecules which surround them. Basically, both the water molecules and the ions are in what is called a potential energy well: to "kick them out" of the well, we have to provide them with an energy as high as the depth of the energy well $\Delta E$ (picture below). The depth of the well in which the water molecules are (due to hydrogen bonding) is much lower than the depth of the well where the ions are. So, a far higher thermal energy is needed to take an ion out of the water. Since thermal energy is proportional to $k_B T$, where $k_B$ is Boltzmann's constant, this means that a far higher temperature is needed. Update: Some numbers To get an idea of the orders of magnitude of the energies involved, we should consider the following: At room temperature ($T_r\simeq298$K), $k_B T_r = 0.026$ eV (however, we should keep in mind that this is just an order of magnitude...)
The energy of a hydrogen bond (hydrogen bond enthalpy) in water is around $23.3$ kJ/mol = $0.24$ eV = $9.3 \ k_B T_r$, and in order for a volume of water to evaporate, about half of all the hydrogen bonds in the volume must be broken: There is no standard
definition for the hydrogen bond energy. In liquid water, the energy of attraction between water
molecules (hydrogen bond enthalpy) is optimally about $23.3$ kJ/mol (Suresh and Naik, 2000) and
almost five times the average thermal collision fluctuation at $25$°C. This is the energy required for
breaking and completely separating the bond, and equals about half the enthalpy of vaporization ($44$ kJ/mol
at $25$°C), as an average of just under two hydrogen bonds per molecule are broken when water
evaporates. [source] The energy gained by putting an ion in water (technical term: "hydrating" the ion) is known as the hydration energy or hydration enthalpy ($\Delta H_{hyd}$). Since we are interested in the opposite process (the removal, or de-hydration, of the ion), we have to take $-\Delta H_{hyd}$. Here we can find some numbers. We can see that $$\Delta H_{hyd}(\text{Na}^+) = -406 \ \text{kJ/mol} = -4.2 \ \text{eV} = -162\ k_B T_r$$ $$\Delta H_{hyd}(\text{Cl}^-) = -363 \ \text{kJ/mol} = -3.8 \ \text{eV} = -145 \ k_B T_r$$ | {
"source": [
"https://physics.stackexchange.com/questions/286576",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/126917/"
]
} |
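The unit conversions quoted in the answer above (kJ/mol to eV and to multiples of $k_B T_r$) can be reproduced in a few lines. A rough sketch; the physical constants are standard CODATA values and the enthalpies are the ones cited in the answer:

```python
# Convert the quoted molar energies to eV per molecule and to multiples
# of k_B*T at room temperature (298 K).
N_A = 6.02214e23     # Avogadro constant, mol^-1
e = 1.602177e-19     # elementary charge; also J per eV
k_B = 1.380649e-23   # Boltzmann constant, J/K
T_r = 298.0          # room temperature, K

def kJmol_to_eV(E_kJmol):
    # energy per molecule in eV
    return E_kJmol * 1e3 / N_A / e

kT_eV = k_B * T_r / e            # thermal energy scale, ~0.026 eV
hbond_eV = kJmol_to_eV(23.3)     # hydrogen-bond enthalpy, ~0.24 eV
na_eV = kJmol_to_eV(406.0)       # Na+ hydration enthalpy (magnitude), ~4.2 eV

print(kT_eV, hbond_eV, na_eV, hbond_eV / kT_eV, na_eV / kT_eV)
```

The ratios come out near 9.4 and 164; the 9.3 and 162 quoted in the answer reflect rounding of the intermediate values.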
286,599 | I'm a programmer and I'm trying to create a physically accurate game. I'm not an expert on physics, so if I'm missing the correct terms please excuse me. What I'm trying to do is to simulate an arrow hitting and deflecting off a wall. So imagine an arrow, and an impulse is applied to it at any point. I need to find out how much of that impulse will cause the arrow to rotate and how much of it to translate separately. I know the rotational kinetic energy is (1/2) I w^2 and I for a rod is (1/12) M L^2 around the center, but I'm not sure what to do if the axis is around a point between the center and the end, which is the case in my game. Also I don't know what the rotational and translational kinetic energies add up to. All I have is a velocity change that I can equate. | Water molecules are polar; this basically means that they have a "positive side" and a "negative side". Salt is composed of Na$^+$ and Cl$^-$ ions held together by electrostatic forces (it is an ionic compound). When salt is put into water, it dissociates, i.e. the Na$^+$ ions are separated from the Cl$^-$ ions. When in the water, such ions are surrounded by water molecules facing them with the side which has charge opposite to that of the ion; this way they can reach a lower energy state, since their electrostatic field is screened by that of the water molecules (picture below [source]). Water evaporates when the thermal energy of the molecules is high enough to break about half the hydrogen bonds between them [source]. For the ions, it is much more difficult to evaporate, because their thermal energy would have to be enough to compensate for the effect of the water molecules which surround them. Basically, both the water molecules and the ions are in what is called a potential energy well: to "kick them out" of the well, we have to provide them with an energy as high as the depth of the energy well $\Delta E$ (picture below).
The depth of the well in which the water molecules are (due to hydrogen bonding) is much lower than the depth of the well where the ions are. So, a far higher thermal energy is needed to take an ion out of the water. Since thermal energy is proportional to $k_B T$, where $k_B$ is Boltzmann's constant, this means that a far higher temperature is needed. Update: Some numbers To get an idea of the orders of magnitude of the energies involved, we should consider the following: At room temperature ($T_r\simeq298$K), $k_B T_r = 0.026$ eV (however, we should keep in mind that this is just an order of magnitude...) The energy of a hydrogen bond (hydrogen bond enthalpy) in water is around $23.3$ kJ/mol = $0.24$ eV = $9.3 \ k_B T_r$, and in order for a volume of water to evaporate, about half of all the hydrogen bonds in the volume must be broken: There is no standard
definition for the hydrogen bond energy. In liquid water, the energy of attraction between water
molecules (hydrogen bond enthalpy) is optimally about $23.3$ kJ/mol (Suresh and Naik, 2000) and
almost five times the average thermal collision fluctuation at $25$°C. This is the energy required for
breaking and completely separating the bond, and equals about half the enthalpy of vaporization ($44$ kJ/mol
at $25$°C), as an average of just under two hydrogen bonds per molecule are broken when water
evaporates. [source] The energy gained by putting an ion in water (technical term: "hydrating" the ion) is known as the hydration energy or hydration enthalpy ($\Delta H_{hyd}$). Since we are interested in the opposite process (the removal, or de-hydration, of the ion), we have to take $-\Delta H_{hyd}$. Here we can find some numbers. We can see that $$\Delta H_{hyd}(\text{Na}^+) = -406 \ \text{kJ/mol} = -4.2 \ \text{eV} = -162\ k_B T_r$$ $$\Delta H_{hyd}(\text{Cl}^-) = -363 \ \text{kJ/mol} = -3.8 \ \text{eV} = -145 \ k_B T_r$$ | {
"source": [
"https://physics.stackexchange.com/questions/286599",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/133189/"
]
} |
286,721 | What actually are virtual particles ? In various places around physics SE, documentaries and occasional news headlines, I see the term "virtual particles", normally virtual photons. I have tried researching it, but I'm not at a level of understanding yet to be able to grasp whats going on, if someone could explain it in a simple manner that would be great. | This is the table of particles on which the standard model of elementary particle physics is founded: These particles are completely and uniquely characterized by their mass and quantum numbers , like spin, flavour, charge... The standard model is a mathematical model based on a Lagrangian which contains the interactions of all these particles, and it is framed in the four dimensions of special relativity. This means that the mass of each particle, called rest mass (because it is the invariant mass the particle has in its rest frame) in the energy-momentum frame is given by : $$m_0^2c^2 = \left(\frac Ec\right)^2 - ||\mathbf p||^2$$ in natural units where $c= 1,$ $$m_0^2 = E^2 -||\mathbf p||^2$$ The standard model Lagrangian allows the calculation of cross-sections and lifetimes for elementary particles and their interactions, using Feynman diagrams which are an iconic representation of complicated integrals: Only the external lines are measurable and observable in this model, and the incoming and outgoing particles are on the mass shell. The internal lines in the diagrams carry only the quantum numbers of the exchanged named particle, in this example a virtual photon. These "photons" instead of having a mass of zero, as they do when measured/observed have a varying mass imposed by the integral under which they have "existence". The function of the virtual line is to keep the quantum number conservation rules and help as a mnemonic. 
It does not represent a "particle" that can be measured, but a function necessary for the computation of cross-sections and lifetimes according to the limits of integration entering the problem under study. p.s. my answer to this other question might be relevant in framing what a particle is. | {
"source": [
"https://physics.stackexchange.com/questions/286721",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/123649/"
]
} |
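The mass-shell relation quoted in the answer above, $m_0^2 = E^2 - ||\mathbf p||^2$ in natural units, is what distinguishes external lines from internal ones, and it is trivial to check directly. A small sketch; the four-momentum components are made-up illustrative numbers:

```python
# Energy-momentum relation in natural units (c = 1): m0^2 = E^2 - |p|^2.
# Real (external-line) particles satisfy it; virtual (internal-line)
# particles need not.
def invariant_mass_sq(E, px, py, pz):
    return E**2 - (px**2 + py**2 + pz**2)

# A real photon is on the mass shell: E = |p|, so m0^2 = 0.
m2_real = invariant_mass_sq(5.0, 3.0, 4.0, 0.0)

# A virtual photon's four-momentum is fixed by the vertices it connects,
# so its "mass" squared can take any off-shell value (an arbitrary example).
m2_virtual = invariant_mass_sq(2.0, 0.0, 0.0, 1.0)
print(m2_real, m2_virtual)
```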
286,738 | Total internal reflection is used to send light signals long distances in fiber-optic cables. If the cable is shaped into a ring, the light will run in a circle until its intensity dwindles due to attenuation (optical ring resonators are examples of this). Light doesn't attenuate in a vacuum, however. If you were to create a ring-shaped vacuum chamber lined with a metamaterial with an index of refraction less than one, could you build up arbitrary light intensities without losses to attenuation? Edit: My question references materials with indices of refraction less than unity. I believe the speed which can be calculated using this index is the phase velocity of light which does not carry information and can exceed c . For example, to x-rays most solid materials are optically less dense than a vacuum and have a refractive index slightly less than one. You can get total external reflection this way. | This is the table of particles on which the standard model of elementary particle physics is founded: These particles are completely and uniquely characterized by their mass and quantum numbers , like spin, flavour, charge... The standard model is a mathematical model based on a Lagrangian which contains the interactions of all these particles, and it is framed in the four dimensions of special relativity. This means that the mass of each particle, called rest mass (because it is the invariant mass the particle has in its rest frame) in the energy-momentum frame is given by : $$m_0^2c^2 = \left(\frac Ec\right)^2 - ||\mathbf p||^2$$ in natural units where $c= 1,$ $$m_0^2 = E^2 -||\mathbf p||^2$$ The standard model Lagrangian allows the calculation of cross-sections and lifetimes for elementary particles and their interactions, using Feynman diagrams which are an iconic representation of complicated integrals: Only the external lines are measurable and observable in this model, and the incoming and outgoing particles are on the mass shell. 
The internal lines in the diagrams carry only the quantum numbers of the exchanged named particle, in this example a virtual photon. These "photons", instead of having a mass of zero, as they do when measured/observed, have a varying mass imposed by the integral under which they have "existence". The function of the virtual line is to keep the quantum number conservation rules and help as a mnemonic. It does not represent a "particle" that can be measured, but a function necessary for the computation of cross-sections and lifetimes according to the limits of integration entering the problem under study. p.s. my answer to this other question might be relevant in framing what a particle is. | {
"source": [
"https://physics.stackexchange.com/questions/286738",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/129063/"
]
} |
286,964 | Today at a teachers' seminar, one of the teachers asked for fun whether zero should be followed by units (e.g. 0 metres/second or 0 metre or 0 moles). This question became a hot topic, and some teachers were saying that, yes, it should be while others were saying that it shouldn't be under certain conditions. When I came home I tried to find the answer on the Internet, but I got nothing. Should zero be followed by units? EDIT For Reopening: My question is not just about whether there is a dimensional analysis justification for dropping the unit after a zero (as a positive answer to Is 0m dimensionless would imply), but whether and in which cases it is a good idea to do so. That you can in principle replace $0\:\mathrm{m}$ with $0$ doesn't mean that you should do so in all circumstances. | This is actually a really interesting question. In principle, "zero" doesn't need units. You can think of units as a multiplier - but multiplying zero by anything still leaves you with zero. However, when you are talking about a physical quantity, it is very reasonable and appropriate to use units, even if the quantity is zero. And you have to use the correct units. It's important to think about the situations in which it even makes sense to speak of "zero anything" - because the absence of a certain property has different implications in different situations. Think about this statement: "The photon has zero rest mass" - in this case, there is no need to specify units. The mass is zero - it is simply a property that the photon does not have. On the other hand, there are times where you are trying to determine whether something is really zero or not. For example, you might want to determine whether the charge of a neutron is truly zero. A careful experiment might conclude that the charge is $0 ± 1.234\cdot 10^{-34} ~\rm{C}$. The units are necessary - because while the number itself is zero, the uncertainty in the number is finite, and has units. 
Finally, it is patently wrong to say "the neutron has 0 kg of charge" - which shows that although it is "nominally" the same as saying "the neutron has 0 charge", the units do matter. Of course, in situations where the scale is arbitrary (that is, where 0 "units" does not correspond to the absence of the property) you always need to use the units. The example given in several of the answers of temperature (°C, K, F) is a good one. In general I believe this can only be true of intrinsic properties (that is, properties that are independent of the quantity of material). | {
"source": [
"https://physics.stackexchange.com/questions/286964",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/128861/"
]
} |
287,101 | Consider a rocket in deep space with no external forces. Using the formula for linear kinetic energy
$$\text{KE} = mv^2/2$$
we find that adding $100\ \text{m/s}$ while initially travelling at $1000\ \text{m/s}$ will add a great deal more energy to the ship than adding $100 \ \text{m/s}$ while initially at rest:
$$(1100^2 - 1000^2) \frac{m}{2} \gg (100^2) \frac{m}{2}.$$
In both cases, the $\Delta v$ is the same, and is dependent on the mass of fuel used, hence the same mass and number of molecules is used in the combustion process to obtain this $\Delta v$.
So I'd wager the same quantity of chemical energy is converted to kinetic energy, yet I'm left with this seemingly unexplained $200,000\ \text{J/kg}$ more energy, and I'm clueless as to where it could have come from. | You've noted that at high velocities, a tiny change in velocity can cause a huge change in kinetic energy. And that means that the thrust due to burning fuel seems to be able to contribute an arbitrarily high amount of energy, possibly exceeding the chemical energy of the fuel itself. The resolution is that all of this logic applies to the fuel too! When the fuel is exhausted, it loses much of its speed, so the kinetic energy of the fuel decreases a lot. The extra kinetic energy of the rocket comes from this extra contribution, which can be arbitrarily large. Of course, the kinetic energy of the fuel didn't come from nowhere. If you don't use gravity wells, that energy came from the fuel you burned previously, which was used to speed up both the rocket and all the fuel inside it. So everything works out -- you don't get anything for free. For those that want more detail, this is called the Oberth effect , and we can do a quick calculation to confirm it. Suppose the fuel is ejected from the rocket with relative velocity $u$ , a mass $m$ of fuel is ejected, and the rest of the rocket has mass $M$ . By conservation of momentum, the velocity of the rocket will increase by $(m/M) u$ . Now suppose the rocket initially has velocity $v$ . The change in kinetic energy of the fuel is $$\Delta K_{\text{fuel}} = \frac12 m (v-u)^2 - \frac12 mv^2 = \frac12 mu^2 - muv.$$ The change in kinetic energy of the rocket is $$\Delta K_{\text{rocket}} = \frac12 M \left(v + \frac{m}{M} u \right)^2 - \frac12 M v^2 = \frac12 \frac{m^2}{M} u^2 + muv.$$ The sum of these two must be the total chemical energy released, which shouldn't depend on $v$ . And indeed, the extra $muv$ term in $\Delta K_{\text{rocket}}$ is exactly canceled by the $-muv$ term in $\Delta K_{\text{fuel}}$ . 
Sometimes this problem is posed with a car instead of a rocket. To understand this case, note that cars only move forward because of friction forces with the ground; all that a car engine does is rotate the wheels to produce this friction force. In other words, while rockets go forward by pushing rocket fuel backwards, cars go forward by pushing the Earth backwards. In a frame where the Earth is initially stationary, the energy associated with giving the Earth a tiny speed is negligible, because the Earth is heavy and energy is quadratic in speed. Once you switch to a frame where the Earth is moving, slowing the Earth down by the same amount harvests a huge amount of energy, again because energy is quadratic in speed. That's where the extra energy of the car comes from. More precisely, the same calculation as above goes through, but we need to replace the word "fuel" with "Earth". The takeaway is that kinetic energy differs between frames, changes in kinetic energy differ between frames, and even the direction of energy transfer differs between frames. It all still works out, but you must be careful to include all contributions to the energy. | {
"source": [
"https://physics.stackexchange.com/questions/287101",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/133392/"
]
} |
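The bookkeeping in the answer above, where the $+muv$ term in $\Delta K_{\text{rocket}}$ cancels the $-muv$ term in $\Delta K_{\text{fuel}}$, can be checked numerically. A sketch with arbitrary illustrative values for $M$, $m$ and $u$:

```python
# Total kinetic energy released by the burn, evaluated in frames where the
# rocket initially moves at different speeds v. The sum must not depend on v.
M = 1000.0   # rocket mass after the burn, kg (illustrative)
m = 10.0     # mass of ejected fuel, kg (illustrative)
u = 3000.0   # exhaust speed relative to the rocket, m/s (illustrative)

def total_dK(v):
    dK_fuel = 0.5 * m * (v - u)**2 - 0.5 * m * v**2
    dK_rocket = 0.5 * M * (v + (m / M) * u)**2 - 0.5 * M * v**2
    return dK_fuel + dK_rocket

released = [total_dK(v) for v in (0.0, 1000.0, 10000.0)]
print(released)  # all three agree: 0.5*m*u**2 + 0.5*(m**2/M)*u**2
```

The three values coincide, confirming that the chemical energy released is frame-independent even though the rocket's share of it grows with $v$.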
287,126 | I have heard various definitions of the uncertainty principle. Yet I cannot quite comprehend how it is true. Nevertheless, something tells me it is a consequence of the wave nature of light/electrons, which gives an intrinsic uncertainty even if we don't measure it. Is it true that this principle is a consequence of the wave nature of particles, and that the uncertainty pops up due to the fact that a particle acts as a wave (I find no answer which states the exact implication of the wave characteristics which should give the uncertainty principle)? Will it be true to assume that, if an electron acts only like a particle and not as a wave, the uncertainty principle will not be necessary (this part of the question is not asked anywhere)? Can you please tell me without much mathematics why this is so? Just as we understood how the photoelectric effect contradicts the wave nature of light, could you please guide me to an intuitive explanation, with formal reasoning, of why we cannot simultaneously know the exact position and momentum of a particle? | From the comments, you seem to want the minimum possible math. There are 4 things you have to know first: First, what you have to know is that a basic quantum wavefunction can be imagined as exactly just a sine wave: Second, you should know that the amplitude of the wave across an interval is related to the probability of measuring your particle's position within that interval. (This is an approximate analogy of what a probability density function does.) Third, the wavelength of the wave is related to your particle's measured momentum. (If we want to be strict, it should be the frequency and it should also be a probability across an interval in frequency space, but it helps to imagine it with just a wavelength.) Fourth, you can compose a more complicated quantum wavefunction just by adding together waves of different wavelengths.
(This is called superposition -- see this gif; image from Wikipedia.) Now that you know these four things, we're ready to tackle the idea of Heisenberg's uncertainty principle. Note the 4th thing we said (re: superposition). Take a look at the gif. What do you notice? When we add more and more waves of different wavelengths, a prominent central peak starts to appear. Now remember the 2nd thing we said: amplitude is related to position. If we have a peak with a prominent amplitude, our particle's position becomes more likely to be measured within that peak. The more we make the central peak prominent, the more precisely we can predict the particle's position! However, to make the central peak more prominent, we have to keep adding more waves of different wavelengths. Remember the 3rd thing we said? Wavelength is related to momentum. If we keep adding different wavelengths, we expect a larger range for our momentum to be measured in, which means our particle's momentum cannot be predicted as easily. The more we add waves of different wavelengths, the less precisely we can predict the particle's momentum! And therein lies the heart of the uncertainty principle: if you try to measure position more precisely, you will consequently measure momentum less precisely, and vice versa. So to answer your question: yes, the uncertainty principle is a necessary consequence of the 'wave-nature of particles'. And to answer your second question (thank you for bringing it up in the comments!): yes, if the electron were a particle instead of a quantum mechanical object, the uncertainty principle wouldn't be necessary, or at least wouldn't necessarily apply. This is because the 4 basic concepts behind the uncertainty principle are uniquely wave concepts, especially the 2nd and 3rd concepts, which are uniquely quantum mechanical wavefunction concepts, neither of which apply to particles. | {
"source": [
"https://physics.stackexchange.com/questions/287126",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/122455/"
]
} |
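The superposition argument in the answer above can be made concrete with a small numeric experiment: sum cosine waves whose wavenumbers (momenta) span a narrow or a wide band and compare how localized the resulting packet is. A rough sketch; the half-maximum "width" measure here is an ad-hoc illustrative choice:

```python
# Summing many cosine waves with a spread of wavenumbers produces a
# localized wave packet; a wider spread in k gives a narrower packet in x.
import math

def packet_width(dk, n=200, x_max=50.0, steps=2000):
    """Sum n cosine waves with wavenumbers spread over [1-dk, 1+dk] and
    return a crude width: the x-extent where the amplitude exceeds half max."""
    ks = [1.0 - dk + 2.0 * dk * i / (n - 1) for i in range(n)]
    xs = [x_max * j / steps for j in range(steps + 1)]
    amp = [abs(sum(math.cos(k * x) for k in ks)) / n for x in xs]
    half = max(amp) / 2.0
    above = [x for x, a in zip(xs, amp) if a > half]
    return max(above) - min(above)

narrow_k = packet_width(0.05)   # small momentum spread -> broad packet
wide_k = packet_width(0.5)      # large momentum spread -> localized packet
print(narrow_k, wide_k)
assert wide_k < narrow_k
```

Widening the band of wavenumbers by a factor of ten shrinks the packet's extent by roughly the same factor, which is the position-momentum trade-off in miniature.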
287,142 | My first post here and I'm a complete beginner on this. So please excuse me if I'm asking too basic a question. This question is about the classical boat and river problem. Say a boat travels at 10 m/s in a water channel. The water speed relative to the ground is 0, so the boat travels at 10 m/s relative to the ground. Now suddenly, the water in the channel starts to flow at 10 m/s in the opposite direction (say this happened in 10 seconds, so the acceleration is 1 m/s^2). As after a while the boat's speed relative to the ground has become 0,
then from the ground-based observer's point of view, the boat has undergone a deceleration. My question is; Is this deceleration always necessarily equal to minus the water acceleration? In other words whats the velocity of the boat with respect to the ground, infinitesimal time dt after the water has started to accelerate ? PS: What I'm trying to understand is what happens when an aircraft or watercraft gets hit by a gust or similar disturbance? | From the comments, you seem to want the minimum possible math. There are 4 things you have to know first: First, what you have to know is that a basic quantum wavefunction can be imagined as exactly just a sine wave: Second, you should know that the amplitude of the wave across an interval is related to the probability of measuring your particle's position within that interval. (This is an approximate analogy of what a probability density function does.) Third, the wavelength of the wave is related to your particle's measured momentum . (If we want to be strict, it should be the frequency and it should also be a probability across an interval in frequency space , but it helps to imagine it with just a wavelength.) Fourth, you can compose a more complicated quantum wavefunction just by adding together waves of different wavelengths. (This is called superposition -- see this gif: ( Image from Wikipedia ) Now that you know these four things, we're ready to tackle the idea of Heisenberg's uncertainty principle. Note the 4th thing we said (re: superposition). Take a look at the gif. What do you notice? When we add more and more waves of different wavelengths, a prominent central peak starts to appear. Now remember the 2nd thing we said: amplitude is related to position . If we have a peak with a prominent amplitude, our particle's position becomes more likely to be measured within that peak. The more we make the central peak prominent, the more precisely we can predict the particle's position! 
However, to make the central peak more prominent, we have to keep adding more waves of different wavelengths. Remember the 3rd thing we said? Wavelength is related to momentum. If we keep adding different wavelengths, we expect a larger range for our momentum to be measured in, which means our particle's momentum cannot be predicted as easily. The more we add waves of different wavelengths, the less precisely we can predict the particle's momentum! And therein lies the heart of the uncertainty principle: if you try to measure position more precisely, you will consequently measure momentum less precisely, and vice versa. So to answer your question: yes, the uncertainty principle is a necessary consequence of the 'wave-nature of particles'. And to answer your second question (thank you for bringing it up in the comments!): yes, if the electron were a particle instead of a quantum mechanical object, the uncertainty principle wouldn't be necessary, or at least wouldn't necessarily apply. This is because the 4 basic concepts behind the uncertainty principle are uniquely wave concepts, especially the 2nd and 3rd concepts, which are uniquely quantum mechanical wavefunction concepts, neither of which apply to particles. | {
"source": [
"https://physics.stackexchange.com/questions/287142",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/133406/"
]
} |
287,160 | When driving a car on ice, there is a danger of slipping, thereby losing control of the car. I understand that slipping means that as the wheels rotate, their circumference covers a total distance larger than the actual distance traveled by the car. But why does that result in a loss of control? | Because friction is your method of steering (and of braking and accelerating)! As @MasonWheeler comments: This is such an important principle that there's a special name for it: in the specific context of using applied friction to direct motion, friction is also known as traction. Turning / steering Friction is what makes you turn left at a corner: you turn the wheels, which directs the friction the correct way. In fact, by turning your wheels you turn the direction of friction so that it has a sideways component. Friction then pushes your wheels gradually sideways, and this results in the whole car turning. Without friction you are unable to do this steering. No matter how you turn your wheels, no force will appear to push you sideways and cause a turn. Without friction the car drifts randomly according to how the surface tilts, regardless of what you do and how the wheels are turned. Braking and accelerating Accelerating and braking (negative acceleration) requires something to push forward from or something to hold on to. That something is the road. And friction is the push and the pull. No friction means no pull or push, and braking and accelerating become impossible. So, friction is very, very important in any kind of controlled motion of vehicles that are in contact with the ground. Even when ice skating, you'd have no chance if the ice were 100% smooth. It should now be easy to grasp that it's a problem to go from static friction (no slipping of the tires) to kinetic friction (the tires slip and skid), simply because kinetic friction is lower than maximum static friction.
When braking, for example, it is better to have static friction, because it can reach higher values than kinetic friction and can thus stop you more effectively. | {
"source": [
"https://physics.stackexchange.com/questions/287160",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/23677/"
]
} |
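The last point — that kinetic friction is lower than maximum static friction — can be made quantitative with a stopping-distance estimate. A minimal Python sketch; the speed and the friction coefficients (0.7 static, 0.5 kinetic) are illustrative assumptions, not values from the thread:

```python
g = 9.81      # gravitational acceleration, m/s^2
v = 20.0      # initial speed, m/s (about 72 km/h), assumed

# assumed illustrative coefficients for a dry-ish road
mu_static, mu_kinetic = 0.7, 0.5

def stopping_distance(v, mu):
    # constant deceleration a = mu * g gives d = v^2 / (2 * mu * g)
    return v**2 / (2 * mu * g)

d_rolling = stopping_distance(v, mu_static)    # braking at the static-friction limit
d_skidding = stopping_distance(v, mu_kinetic)  # wheels locked, tires skidding
print(d_rolling, d_skidding)  # the skid takes noticeably longer to stop
```

With these numbers the locked-wheel skid needs roughly 40% more road, which is why anti-lock brakes try to keep the tires at the static-friction limit.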
287,326 | Back in 2002 there was some research published hinting that $c$ may have been faster at some distant point. It was based on measurements of the fine-structure constant,
$$
\alpha = \frac1{4\pi\epsilon_0} \frac{e^2}{\hbar c} \approx \frac 1{137},
$$
in light from distant (and thus ancient) quasars. Have there been any recent developments on this? I know that at the time there was considerable doubt as to whether $c$ was inconstant. Have there been further measurements? Is it accepted now that alpha is changing? What's the current thinking on whether that means $c$ has changed? http://www.theage.com.au/articles/2002/08/07/1028157961167.html | That result has been controversial since the beginning. A comparable survey looking at a different part of the sky saw no effect, but the original authors and some new collaborators combined data from a most-of-the-sky survey and found hints that the fine-structure constant might be large in one direction of space and small in another. One of the strengths of the quasar observation was that it was based on spectroscopic observations of atomic transitions. Since a slight change to the fine-structure constant pushes some energy levels up and others down, there were transitions from the same sources which were both redder and bluer than predicted. This was the main argument against the effect being some sort of redshift miscalibration. If the fine-structure constant is changing over time, or if Earth is moving through regions of space where the fine-structure constant has different values, those same sorts of energy-level shifts would occur on Earth. A long-running experiment has compared the atomic-clock transition in cesium, which should be relatively insensitive to changes in α, to a particular transition in dysprosium which should have enhanced sensitivity to changes in α. So far, no earthbound effect has been seen. Conclusion: still an open question. Stay tuned. | {
"source": [
"https://physics.stackexchange.com/questions/287326",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/133289/"
]
} |
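The formula in the question is easy to check numerically. A small Python sketch using CODATA 2018 constant values (the numeric values are my assumptions, not quoted in the thread):

```python
from math import pi

# CODATA 2018 values (assumed here, not given in the thread)
e    = 1.602176634e-19    # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
hbar = 1.054571817e-34    # reduced Planck constant, J s
c    = 2.99792458e8       # speed of light, m/s

# fine-structure constant, alpha = e^2 / (4 pi eps0 hbar c)
alpha = e**2 / (4 * pi * eps0 * hbar * c)
print(alpha, 1 / alpha)   # 1/alpha comes out close to 137.036
```

Being dimensionless, $\alpha$ is the same in any unit system, which is why claims about its variation are better posed than claims about a varying $c$ alone.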
287,622 | Stars can be crushed by gravity and create black holes or neutron stars. Why doesn't the same happen with any planet if it is in the same space time? Please explain it in simple way. Note: I am not a physicist but have some interest in physics. | In very simple terms which I hope you will understand. The gravitational force of attraction depends on mass and distance. For the atoms which make up the Earth there are two forces acting on them, the gravitational attraction due to all the other atoms and the Coulomb/electrostatic repulsive force between the electrons orbiting the atoms. The electron shells repel one another. As mass increases the gravitational attractive force increases and the atoms come closer together and the repulsion between the electron shells increases to balance the increased gravitational attraction. If the mass increases even more the Coulomb repulsive force cannot balance the increased gravitational attractive force and the atom collapses with protons and electrons combining to form neutrons. You then have an entity composed of neutrons - a neutron star. There is still the gravitational attractive force between neutrons but now the repulsive force is provided by the strong nuclear force between the neutrons - neutrons do not like to be "squashed". Increase the mass even more and the gravitational attractive force increases and so does the repulsive force between neutrons by the neutrons coming closer together. Eventually if you increase the mass even more the repulsive force between the neutrons is not sufficient to balance the gravitational attractive force between the neutrons and so you get a further collapse into a black hole. So the simple answer to your question is that the gravitational forces between the atoms which make up a planet are not large enough to initiate catastrophic collapse because the mass of a planet is not large enough. | {
"source": [
"https://physics.stackexchange.com/questions/287622",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/133633/"
]
} |
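To put rough numbers on "the mass of a planet is not large enough": the Schwarzschild radius indicates how far a body would have to be compressed before gravity wins outright. A quick Python sketch (the mass figures are standard textbook values, assumed here rather than taken from the answer):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

def schwarzschild_radius(M):
    # radius below which a mass M would form a black hole: r_s = 2GM/c^2
    return 2 * G * M / c**2

M_earth = 5.97e24    # kg, assumed standard value
M_sun   = 1.99e30    # kg, assumed standard value

r_earth = schwarzschild_radius(M_earth)
r_sun   = schwarzschild_radius(M_sun)
print(r_earth, r_sun)   # roughly 9 mm for Earth, roughly 3 km for the Sun
```

Earth would have to be squeezed to under a centimetre across, which the Coulomb repulsion described above prevents by an enormous margin.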
287,624 | You drop an object. It lands before hitting terminal velocity. What does the velocity graph look like? Jump discontinuity or continuous? (ie: Does it instantly go from moving to not moving? Or does it "gradually" slow down, as molecules collide & shift, etc?) What about Newton's 3rd? Does the thing technically "bounce" many times? There may be a difference between what humans can perceive vs. the reality of what is happening. I want to graph the reality of what is happening, even if it's not able to be measured. | In very simple terms which I hope you will understand. The gravitational force of attraction depends on mass and distance. For the atoms which make up the Earth there are two forces acting on them, the gravitational attraction due to all the other atoms and the Coulomb/electrostatic repulsive force between the electrons orbiting the atoms. The electron shells repel one another. As mass increases the gravitational attractive force increases and the atoms come closer together and the repulsion between the electron shells increases to balance the increased gravitational attraction. If the mass increases even more the Coulomb repulsive force cannot balance the increased gravitational attractive force and the atom collapses with protons and electrons combining to form neutrons. You then have an entity composed of neutrons - a neutron star. There is still the gravitational attractive force between neutrons but now the repulsive force is provided by the strong nuclear force between the neutrons - neutrons do not like to be "squashed". Increase the mass even more and the gravitational attractive force increases and so does the repulsive force between neutrons by the neutrons coming closer together. Eventually if you increase the mass even more the repulsive force between the neutrons is not sufficient to balance the gravitational attractive force between the neutrons and so you get a further collapse into a black hole. 
So the simple answer to your question is that the gravitational forces between the atoms which make up a planet are not large enough to initiate catastrophic collapse because the mass of a planet is not large enough. | {
"source": [
"https://physics.stackexchange.com/questions/287624",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/133566/"
]
} |
287,628 | Suppose we have a Lagrangian that with fields that are acted on by a symmetry group, e.g. $$\mathcal{L} = \partial_{\mu}\phi \partial^{\mu}\phi^* - m^2 \phi \phi^*$$ with $G=U(1)$ (i.e. $\phi \to e^{i \alpha}\phi$). Then this symmetry group has a representation acting on the physical Hilbert space - to find these operators we can use Noether's first theorem to find a conserved current and integrate to get a conserved charge operator $\hat{Q}$ and then a representation of $U(1)$ by acting on the Hilbert space with $\hat{U} = e^{i \theta \hat{Q}}$. My question now is what happens if we now have a gauge symmetry $G$ - how do we find the Hilbert space operator corresponding to gauge symmetries $G$? We can no longer use Noether's first theorem. (Of course we expect that the subspace of physical states will transform as a singlet under the Hilbert space operators corresponding to elements of $G$.) | In very simple terms which I hope you will understand. The gravitational force of attraction depends on mass and distance. For the atoms which make up the Earth there are two forces acting on them, the gravitational attraction due to all the other atoms and the Coulomb/electrostatic repulsive force between the electrons orbiting the atoms. The electron shells repel one another. As mass increases the gravitational attractive force increases and the atoms come closer together and the repulsion between the electron shells increases to balance the increased gravitational attraction. If the mass increases even more the Coulomb repulsive force cannot balance the increased gravitational attractive force and the atom collapses with protons and electrons combining to form neutrons. You then have an entity composed of neutrons - a neutron star. There is still the gravitational attractive force between neutrons but now the repulsive force is provided by the strong nuclear force between the neutrons - neutrons do not like to be "squashed". 
Increase the mass even more and the gravitational attractive force increases and so does the repulsive force between neutrons by the neutrons coming closer together. Eventually if you increase the mass even more the repulsive force between the neutrons is not sufficient to balance the gravitational attractive force between the neutrons and so you get a further collapse into a black hole. So the simple answer to your question is that the gravitational forces between the atoms which make up a planet are not large enough to initiate catastrophic collapse because the mass of a planet is not large enough. | {
"source": [
"https://physics.stackexchange.com/questions/287628",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/47059/"
]
} |
288,227 | Probably a silly question, but something that came to mind yesterday. I couldn't find anything when searching. Why is there an Energy mass equivalence principle but not an Energy charge equivalence principle? In other words, why do our field theories have a Gauge invariance which allow for charge conservation, but not mass conservation, or why is it that charge has escaped from being a term in the energy mass equivalence? Is it just because we have observed these to be the case and made our field theories around this, or because that's just how the mathematics works out? | You're making some category errors in the question. Energy can't be converted into mass, mass is a form that energy can take. In other words, when energy is "converted" into mass it never stops being energy. It's kind of like if I have a mass on a spring hanging vertically in a gravitational field, and I make it start bouncing. The energy moves back and forth from kinetic energy to the gravitational and spring stretch potential energies, and back. At no point in this process do any of these quantities not qualify as "energy". Mass, likewise, is just another way energy can be stored. If you study quantum field theory, you'll even learn that mass is one of the types of potential energies a field can store. Charge, on the other hand, is about how a particle couples to a force. That gravity couples to mass is simply an observational fact that didn't, necessarily, have to be the case. When that distinction is being made physicists will refer to gravitational mass versus inertial mass. One of the strongest arguments for general relativity is the observed fact that gravity doesn't just couple to mass, it couples directly to energy/momentum in a way that is consistent with Einstein's equations. 
See: gravitational lensing (observed many times by gravity from galaxies, galaxy clusters, microlensing, and even stars near the sun during a solar eclipse), gravitational redshift (the observed frequency shift of light directed upward), etc. Charge, on the other hand, is how various fermion fields, like the electron, up, and down fields, couple to the electromagnetic field. Note that total energy is conserved, so that which the gravitational field couples to is just as conserved as electric charge. | {
"source": [
"https://physics.stackexchange.com/questions/288227",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/76385/"
]
} |
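The point that mass is just one form energy can take is easy to quantify: annihilating an electron-positron pair releases exactly the pair's rest energy. A quick Python check with standard CODATA values (the numbers are my assumptions, not from the answer):

```python
m_e = 9.1093837015e-31    # electron mass, kg (assumed CODATA value)
c   = 2.99792458e8        # speed of light, m/s
MeV = 1.602176634e-13     # joules per MeV

# rest energy of an electron-positron pair, E = 2 m_e c^2
E_pair = 2 * m_e * c**2
E_pair_MeV = E_pair / MeV
print(E_pair_MeV)         # close to 1.022 MeV (two 511 keV photons)
```

The energy never stops being energy: it merely moves from the "rest mass" ledger of the fermion fields to the electromagnetic field.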
288,234 | I have come across a problem from "Biological Physics" by Philip Nelson (pg212) which involves finding the equilibrium position of a spring, $x_{eq}$, which is compressing an ideal gas whilst in contact with a heat reservoir of temperature, $T_{res}$. A diagram of the setup is shown below: I believe that solving it involves the minimisation of the system's free energy $F$: $$F = U - TS $$
$$dF = dU - TdS - SdT $$ which simplifies to $$ dF = -SdT -pdV$$ using $dU = TdS - pdV$. Since the system is at thermal equilibrium with a head bath, $dT = 0$. Also, the pressure and volume can be written in terms of the spring extension and constant: $V = A(L-x_{1})$ and $pA = \frac{1}{2} kx^{2}$ (when at equilibrium) so that $dV = -Adx_{1}$ and $ dp = \frac{k}{A}x$. However, setting the equilibrium condition $dF=0$ leaves me with $$ dF (=0) = dW = -pdV = \frac{1}{2}kx^{3}dx $$ which gives an extension of 0 as a (somewhat unhelpful) result. A second attempt involved writing $dW$ as $$ dW = fdx_{1} $$ where $f$ is the force on the compression plate so that $$ f(x_1) = \frac{1}{2}x_{1}^{2} - pA $$ with the solutions of $f(x_1) = 0$ giving the equilibrium position of the plate assuming the ideal gas law $pV = pA(L-x_{1}) = nRT$. I believe this involves solving $$ x_{eq}^{3} - Lx_{eq}^{2} - \frac{2nRT}{k} = 0 $$ However, this solution appears to avoid all thermodynamic arguments, and just simplify to solving a simple mechanical problem assuming ideal gas behavior, which makes me believe I have make a mistake somewhere along the derivation. Any advice with the exercise, and on understanding the application of free energies to thermodynamic problems would be much appreciated. ---------------------EDIT----------------------- I realise that I erred on the penultimate equation ( $f(x_{1}) = kx_{1} - pA$), which then leads to the equation $kx_{eq}(L - x_{eq}) = nRT$. This seems to suggest that the total internal energy of the system can be split into the energy of the spring, and the energy of the gas so that $$dU_{total} = dU_{gas} +dU_{spring} = -pdV + kxdx$$ It appears that my mistake was to treatthe spring as a seperate entity, and forgetting its energy addition to the total internal energy of the system | You're making some category errors in the question. Energy can't be converted into mass, mass is a form that energy can take. 
In other words, when energy is "converted" into mass it never stops being energy. It's kind of like if I have a mass on a spring hanging vertically in a gravitational field, and I make it start bouncing. The energy moves back and forth from kinetic energy to the gravitational and spring stretch potential energies, and back. At no point in this process do any of these quantities not qualify as "energy". Mass, likewise, is just another way energy can be stored. If you study quantum field theory, you'll even learn that mass is one of the types of potential energies a field can store. Charge, on the other hand, is about how a particle couples to a force. That gravity couples to mass is simply an observational fact that didn't, necessarily, have to be the case. When that distinction is being made physicists will refer to gravitational mass versus inertial mass. One of the strongest arguments for general relativity is the observed fact that gravity doesn't just couple to mass, it couples directly to energy/momentum in a way that is consistent with Einstein's equations. See: the gravitational lensing (observed many times by gravity from galaxies, galaxy clusters, microlensing, and even stars near the sun during a solar eclipse), gravitational redshift (observed frequency shift of light directed upward), etc. Charge, on the other hand, is how various fermion fields, like the electron, up, and down fields, couple to the electro-magnetic field. Note that total energy is conserved, so that which the gravitational field couples to is just as conserved as electric charge. | {
"source": [
"https://physics.stackexchange.com/questions/288234",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/133900/"
]
} |
288,241 | I am facing the following problem. We have an object of mass $M$ (for example a car) that can move in one direction (initially not moving) and we are throwing a stream of balls with constant velocity $u$ on it and a rate $\sigma\ kg/s$. What is the velocity and position of the object $M$ (in time) in two cases: balls can bounce off the object $M$ elastically. balls can stick to the object $M$ increasing its mass (elastically). This can also be a simplification of the boat on a water acceleration. If the wind can be viewed as a collection of hard-core particles then they elastically collide with the sail increasing its velocity. How does this velocity changes in time? | You're making some category errors in the question. Energy can't be converted into mass, mass is a form that energy can take. In other words, when energy is "converted" into mass it never stops being energy. It's kind of like if I have a mass on a spring hanging vertically in a gravitational field, and I make it start bouncing. The energy moves back and forth from kinetic energy to the gravitational and spring stretch potential energies, and back. At no point in this process do any of these quantities not qualify as "energy". Mass, likewise, is just another way energy can be stored. If you study quantum field theory, you'll even learn that mass is one of the types of potential energies a field can store. Charge, on the other hand, is about how a particle couples to a force. That gravity couples to mass is simply an observational fact that didn't, necessarily, have to be the case. When that distinction is being made physicists will refer to gravitational mass versus inertial mass. One of the strongest arguments for general relativity is the observed fact that gravity doesn't just couple to mass, it couples directly to energy/momentum in a way that is consistent with Einstein's equations. 
See: the gravitational lensing (observed many times by gravity from galaxies, galaxy clusters, microlensing, and even stars near the sun during a solar eclipse), gravitational redshift (observed frequency shift of light directed upward), etc. Charge, on the other hand, is how various fermion fields, like the electron, up, and down fields, couple to the electro-magnetic field. Note that total energy is conserved, so that which the gravitational field couples to is just as conserved as electric charge. | {
"source": [
"https://physics.stackexchange.com/questions/288241",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/53030/"
]
} |
288,351 | It is sometimes said that if you stand still (in space), you travel through time at the speed of light. On the other hand, light never stands still, so it always only travels through space (at the speed of light), but not through time. Does that mean that if our universe were filled with light only, no time would exist? Is the existence of mass therefore necessary for the existence of time? | A universe containing only light is simply a radiation-dominated FLRW universe. Indeed our universe had approximately this geometry in its radiation-dominated era. The FLRW metric is a perfectly good spacetime, so time certainly exists. Moreover, the geometry is time dependent, so we can use the energy density as a measure of time. I concede that it's hard to build a device capable of measuring time in a universe containing only light, but to claim that time does not exist in such a universe would be plain wrong. | {
"source": [
"https://physics.stackexchange.com/questions/288351",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1648/"
]
} |
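The time-dependence claim can be made concrete: in a radiation-dominated FLRW universe the Friedmann equation gives $\dot a \propto 1/a$, so $a(t) \propto t^{1/2}$ and the energy density $\rho \propto a^{-4}$ falls monotonically, which is exactly why it can serve as a clock. A toy numerical check (the units and the constant `C` are arbitrary assumptions):

```python
import numpy as np

C = 1.0                          # arbitrary constant in da/dt = C / a
t = np.linspace(0.1, 1.0, 1000)  # toy time grid, arbitrary units
a = np.sqrt(2 * C * t)           # radiation-era scale factor, a ∝ t^(1/2)

dadt = np.gradient(a, t)                 # numerical derivative of a(t)
residual = np.max(np.abs(dadt - C / a))  # ≈ 0: a(t) solves da/dt = C/a
rho = a**-4                              # radiation energy density dilutes as a^-4
print(residual, rho[0] > rho[-1])        # density strictly decreases: usable as a clock
```

Reading off the density then plays the role that a physical clock would in a matter-filled universe.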
288,472 | I am having some difficulty accepting the implications of the equation governing the intensity of light passing through polarization filters,
$$ I = I_0 \space\cos^2\theta $$
with $\theta$ being the angular difference between the two filters. Here's the difficulty. If I put two filters at an angle of $\frac{\pi}{2}$, then no light makes it through to the other side. But if I then put another filter in between the original two, then we apply the above equation twice successively, neither time getting a result of zero. That is, if you have two filters that don't allow any light through, you can force them to allow light through by placing a filter in between. It seems to me that, in general, a filter blocks light, so the result is counterintuitive. What happens to the light when the second filter is placed to allow some light to pass through the three-filter system? | Indeed, it can be counterintuitive that adding a polarizing filter can increase the transmitted intensity, when each filter only 'removes' light. Here's a slightly more intuitive way of thinking about it. A filter doesn't strictly remove light -- what it really does is add a light wave which destructively interferes with part of the incoming light. For example, a vertical polarizing filter removes the horizontal part by emitting an additional horizontally polarized wave out of phase by $180^\circ$. (This is true on a microscopic level, too. In the polarizing filter, electrons are driven by the incoming wave and, since they accelerate, they emit radiation of their own.) Now let's say we add a diagonal polarizing filter in between horizontal and vertical filters. Thinking about the horizontal and vertical filters as "destroying light" in the usual way, you are correct in that none of the original light wave will make it through, regardless of what you put in between. But the new diagonal filter adds a new diagonally polarized wave! It's part of this wave that makes it out. | {
"source": [
"https://physics.stackexchange.com/questions/288472",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/132789/"
]
} |
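The arithmetic in the question is easy to verify. A short Python sketch applying $I = I_0\cos^2\theta$ across a chain of ideal polarizers, assuming the incoming light is already polarized along the first filter's axis:

```python
import math

def malus_chain(I0, angles_deg):
    """Intensity after a sequence of ideal linear polarizers.

    angles_deg lists each filter's transmission axis in degrees; the
    incoming light is assumed polarized along the first filter's axis.
    """
    I, prev = I0, angles_deg[0]
    for ang in angles_deg[1:]:
        I *= math.cos(math.radians(ang - prev))**2  # Malus's law per filter pair
        prev = ang
    return I

print(malus_chain(1.0, [0, 90]))       # crossed filters: essentially 0
print(malus_chain(1.0, [0, 45, 90]))   # a 45° filter in between: 0.25
```

Inserting the diagonal filter turns $\cos^2 90^\circ = 0$ into $\cos^2 45^\circ \cdot \cos^2 45^\circ = \tfrac14$, so a quarter of the polarized intensity now gets through.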
288,485 | If the question seems strange, here's the wikipedia snippet that drives it: The Big Bang theory is the most widely accepted scientific theory to explain the early stages in the evolution of the Universe.[133] For the first millisecond of the Big Bang, the temperatures were over 10 billion Kelvin and photons had mean energies over a million electronvolts. These photons were sufficiently energetic that they could react with each other to form pairs of electrons and positrons. Likewise, positron-electron pairs annihilated each other and emitted energetic photons: γ + γ ↔ e+ + e− An equilibrium between electrons, positrons and photons was maintained during this phase of the evolution of the Universe. After 15 seconds had passed, however, the temperature of the universe dropped below the threshold where electron-positron formation could occur. Most of the surviving electrons and positrons annihilated each other, releasing gamma radiation that briefly reheated the universe.[134] (where citation 134 is Silk, J. (2000). The Big Bang: The Creation and Evolution of the Universe (3rd ed.).) Is this how electrons were originally formed? Is it a fundamental process where, if you get two photons hot enough and smash them together, you produce an electron and a positron? | Indeed, it can be counterintuitive that adding a polarizing filter can increase the transmitted intensity, when each filter only 'removes' light. Here's a slightly more intuitive way of thinking about it. A filter doesn't strictly remove light -- what it really does is add a light wave which destructively interferes with part of the incoming light. For example, a horizontal polarizing filter removes the horizontal part by transmitting an additional horizontally polarized wave out of phase by $180^\circ$. (This is true on a microscopic level, too. In the polarizing filter, electrons are driven by the incoming wave and, since they accelerate, they emit radiation of their own.) 
Now let's say we add a diagonal polarizing filter in between horizontal and vertical filters. Thinking about the horizontal and vertical filters as "destroying light" in the usual way, you are correct in that none of the original light wave will make it through, regardless of what you put in between. But the new diagonal filter adds a new diagonally polarized wave! It's part of this wave that makes it out. | {
"source": [
"https://physics.stackexchange.com/questions/288485",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/36146/"
]
} |
288,489 | I am doing two tasks: flipping a card and lifting a load. I am collecting data for these two tasks for a week. I want to find the mean acceleration for each day. As the data is 3-axis, I thought it would be good to take the magnitude. I have a 3-axis accelerometer sensor and have collected 100 readings. I found the acceleration magnitude for each record using the formula:
$$\sqrt{a_x^2 + a_y^2 + a_z^2}$$ I would like to find the total average acceleration of my readings. Should I find the average of the magnitude of acceleration? Can I find it by dividing the sum of values by 100 or (sum of magnitude values)/time? Is it valid ? Any suggestions or help is greatly appreciated. | Indeed, it can be counterintuitive that adding a polarizing filter can increase the transmitted intensity, when each filter only 'removes' light. Here's a slightly more intuitive way of thinking about it. A filter doesn't strictly remove light -- what it really does is add a light wave which destructively interferes with part of the incoming light. For example, a horizontal polarizing filter removes the horizontal part by transmitting an additional horizontally polarized wave out of phase by $180^\circ$. (This is true on a microscopic level, too. In the polarizing filter, electrons are driven by the incoming wave and, since they accelerate, they emit radiation of their own.) Now let's say we add a diagonal polarizing filter in between horizontal and vertical filters. Thinking about the horizontal and vertical filters as "destroying light" in the usual way, you are correct in that none of the original light wave will make it through, regardless of what you put in between. But the new diagonal filter adds a new diagonally polarized wave! It's part of this wave that makes it out. | {
"source": [
"https://physics.stackexchange.com/questions/288489",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/134016/"
]
} |
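The magnitude formula in the question can be applied per reading and the results then averaged with a simple mean over the N readings (dividing by the number of readings, not by time). A minimal NumPy sketch with made-up sample values:

```python
import numpy as np

# toy 3-axis accelerometer readings (rows = samples), units assumed m/s^2
a = np.array([[0.1, 0.2, 9.8],
              [0.0, 0.3, 9.7],
              [0.2, 0.1, 9.9]])

mag = np.linalg.norm(a, axis=1)  # sqrt(ax^2 + ay^2 + az^2) for each reading
mean_mag = mag.mean()            # sum of the magnitudes / number of readings
print(mean_mag)
```

Note that averaging magnitudes is not the same as taking the magnitude of the averaged components; the former is usually what is meant by "mean acceleration magnitude" for activity data like this.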
I found my problem on Yahoo Answers: here. I do not understand where $$-\Delta U = \Delta K$$ comes from. In class, we learned that
$$W_\textrm{net}=\Delta(K)$$ But we didn't see any of what the guy uses in his answer. Could someone provide an answer using the work-energy theorem, please? | Indeed, it can be counterintuitive that adding a polarizing filter can increase the transmitted intensity, when each filter only 'removes' light. Here's a slightly more intuitive way of thinking about it. A filter doesn't strictly remove light -- what it really does is add a light wave which destructively interferes with part of the incoming light. For example, a horizontal polarizing filter removes the horizontal part by transmitting an additional horizontally polarized wave out of phase by $180^\circ$. (This is true on a microscopic level, too. In the polarizing filter, electrons are driven by the incoming wave and, since they accelerate, they emit radiation of their own.) Now let's say we add a diagonal polarizing filter in between horizontal and vertical filters. Thinking about the horizontal and vertical filters as "destroying light" in the usual way, you are correct in that none of the original light wave will make it through, regardless of what you put in between. But the new diagonal filter adds a new diagonally polarized wave! It's part of this wave that makes it out. | {
"source": [
"https://physics.stackexchange.com/questions/288519",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/132577/"
]
} |
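For context on the question itself: $-\Delta U = \Delta K$ follows from the work-energy theorem whenever only a conservative force acts, because $W_\textrm{net} = \Delta K$ and, by the definition of potential energy, $W_\textrm{conservative} = -\Delta U$. A free-fall check in Python (the mass, height, and $g$ are assumed values):

```python
import math

m, g, h = 2.0, 9.81, 5.0     # assumed mass (kg), gravity (m/s^2), drop height (m)

W_net = m * g * h            # net work done by gravity over the drop
v = math.sqrt(2 * g * h)     # speed after falling a height h from rest
dK = 0.5 * m * v**2          # change in kinetic energy (starts at rest)
dU = -m * g * h              # change in gravitational potential energy

print(W_net, dK, -dU)        # all three agree: W_net = ΔK = -ΔU
```

The identity fails as soon as a non-conservative force (e.g. friction) also does work, since then $W_\textrm{net} \ne W_\textrm{conservative}$.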
288,614 | I know the Many Worlds interpretation is controversial among physicists, but it's been a pop culture hit nonetheless. I frequently see people making statements like, "Well in another universe I'm a rock star", where you can substitute rock star for any given fantasy. But one thing that's always bothered me about that kind of statement: does Many Worlds really imply Every World? You can have an infinite set of numbers that doesn't include every number. So even assuming MWI is true, is it necessarily true that one of those universes contains a rockstar version of myself? | No, it doesn't. For example, since charge is conserved, every "world" in the wavefunction must have the same charge. This goes for any other conserved quantity, too. (This doesn't rule out you being a rockstar, though.) | {
"source": [
"https://physics.stackexchange.com/questions/288614",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/29177/"
]
} |
288,762 | What does the Pauli Exclusion Principle mean if time and space are continuous? Assuming time and space are continuous, identical quantum states seem impossible even without the principle. I guess saying something like: the closer the states are the less likely they are to exist , would make sense, but the principle is not usually worded that way, it's usually something along the lines of: two identical fermions cannot occupy the same quantum state | Real particles are never completely localised in space (except possibly in the limit case of a completely undefined momentum), due to the uncertainty principle. Rather, they are necessarily in a superposition of a continuum of position and momentum eigenstates. Pauli's Exclusion Principle asserts that they cannot be in the same exact quantum state, but a direct consequence of this is that they tend to also not be in similar states.
This amounts to an effective repulsive effect between particles. You can see this by remembering that to get a physical two-fermion wavefunction you have to antisymmetrize it.
This means that if the two single wavefunctions are similar in a region, the total two-fermion wavefunction will have nearly zero probability amplitude in that region, thus resulting in an effective repulsive effect. To see this more clearly, consider the simple 1-dimensional case, with two fermionic particles with partially overlapping wavefunctions.
Let's call the wavefunctions of the first and second particles $\psi_A(x)$ and $\psi_B(x)$ , respectively, and let us assume that their probability distributions have the form: The properly antisymmetrized wavefunction of the two fermions will be given by: $$
\Psi(x_1,x_2) = \frac{1}{\sqrt2}\left[ \psi_A(x_1) \psi_B(x_2)- \psi_A(x_2) \psi_B(x_1) \right].
$$ For any pair of values $x_1$ and $x_2$, $\lvert\Psi(x_1,x_2)\rvert^2$ gives the probability of finding one particle in the position $x_1$ and the other particle in the position $x_2$.
Plotting $\lvert\Psi(x_1,x_2)\rvert^2$ we get the following: As you can clearly see from the picture, for $x_1=x_2$ the probability vanishes. This is an immediate consequence of Pauli's exclusion principle: you cannot find the two identical fermions in the same position state.
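The vanishing at $x_1 = x_2$ can be checked directly in a few lines. A minimal Python sketch with two illustrative Gaussian single-particle wavefunctions (the centres and widths are my assumptions, not taken from the text):

```python
import numpy as np

def psi_A(x):
    return np.exp(-(x + 1.0)**2)   # Gaussian centred at x = -1 (illustrative)

def psi_B(x):
    return np.exp(-(x - 1.0)**2)   # Gaussian centred at x = +1 (illustrative)

def Psi(x1, x2):
    # antisymmetrized (unnormalised) two-fermion wavefunction
    return (psi_A(x1) * psi_B(x2) - psi_A(x2) * psi_B(x1)) / np.sqrt(2)

print(abs(Psi(0.3, 0.3)))    # exactly 0 whenever x1 == x2
print(abs(Psi(-1.0, 1.0)))   # clearly nonzero for well-separated positions
```

The zero at $x_1 = x_2$ holds for any choice of single-particle states, since the two terms in the antisymmetrized combination cancel identically there.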
But you also see that the closer $x_1$ is to $x_2$, the smaller the probability, as it must be due to continuity of the wavefunction. Addendum: Can the effect of Pauli's exclusion principle be thought of as a force in the conventional $F=ma$ sense? The QM version of what is meant by force in the classical setting is an interaction mediated by some potential, like the electromagnetic interaction between electrons.
This corresponds to additional terms in the Hamiltonian, which say that certain states (say, same charges very close together) correspond to high-energy states and are therefore harder to reach, and vice versa for low-energy states. Pauli's exclusion principle is conceptually entirely different: it is not due to an increase of energy associated with identical fermions being close together, and there is no term in the Hamiltonian that mediates such an "interaction" (an important caveat here: these "exchange forces" can be approximated to a certain degree as "regular" forces). Rather, it comes from the inherently different statistics of many-fermion states: it is not that identical fermions cannot be in the same state/position because there is a repulsive force preventing it, but rather that there is no physical (many-body) state associated with them being in the same state/position.
There simply isn't: it's not something compatible with the physical reality described by quantum mechanics.
We naively think of such states because we are used to reasoning classically, and cannot wrap our heads around what the concept of "identical particles" really means. Ok, but what about things like degeneracy pressure then?
In some circumstances, like in dying stars, Pauli's exclusion principle really seems to behave like a force in the conventional sense, contrasting the gravitational force and preventing white dwarves from collapsing into a point.
How do we reconcile the above described "statistical effect" with this? What I think is a good way of thinking about this is the following:
you are trying to squish a lot of fermions into the same place.
However, Pauli's principle dictates a vanishing probability of any pair of them occupying the same position. The only way to reconcile these two things is that the position distribution of any fermion (say, the $i$ -th fermion) must be extremely localised at a point (call it $x_i$ ), different from all the other points occupied by the other fermions.
It is important to note that I just cheated for the sake of clarity here: you cannot talk of any fermion as having an individual identity: any fermion will be very strictly confined in all the $x_i$ positions, provided that all the other fermions are not.
The net effect of all this is that the properly antisymmetrized wavefunction of the whole system will be a superposition of lots of very sharp peaks in the high dimensional position space.
And it is at this point that Heisenberg's uncertainty comes into play: a very peaked distribution in position means a very broad distribution in momentum, which means very high energy, which means that the more you want to squish the fermions together, the more energy you need to provide (that is, classically speaking, the harder you have to "push" them together). To summarize: due to Pauli's principle the fermions try so hard not to occupy the same positions that the resulting many-fermion wavefunction describing the joint probabilities becomes very peaked, greatly increasing the kinetic energy of the state, thus making such states "harder" to reach. Here (and links therein) is another question discussing this point. | {
"source": [
"https://physics.stackexchange.com/questions/288762",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/74427/"
]
} |
289,041 | The Mponeng Gold Mine is nearly $4$ km deep. It has the largest elevators in the world and is considered one of the most dangerous mines in the world. The geothermal gradient is $25$ degrees Celsius per kilometer, which would be $100$ degrees. Therefore, it would be well over boiling temperature at the deepest part of the mine, at least theoretically. Why don't the miners get boiled to death? Also, I have read that the temperature in the mine is only $150$ °F [ $66$ °C] which would seem to conflict with the geothermal gradient. Why is that? | As noted in CountTo10's answer, the main answer is simple - miners don't "boil" because the mines use suitable cooling and ventilation equipment, plain and simple. That said, there is a contradiction, at least if you go only by Wikipedia and explicitly ignore its caveats. The Wikipedia page for the Mponeng gold mine makes the maximum rock temperature at 66 °C, and if all you read from the Wikipedia page on the geothermal gradient is the stuff in bold, then yes, a 25 °C/km gradient over a 4 km depth would give you 100 °C on top of the surface temperature. However, the actual text in that page reads Away from tectonic plate boundaries, it is about 25 °C per km of depth (1 °F per 70 feet of depth) near the surface in most of the world. and makes it clear that there can be local variations. With that in mind even some very mild digging turns up this map of the geothermal heat flow in South Africa: (Taken from S. Afr. J. Sci. 110 no. 3-4, p. 1 (2014) .) This makes it clear that the Mponeng mine is right on top of a cold spot in the Wits basin. The stated heat flows are not enough to reconstruct the thermal gradient (you need the thermal conductivity for that), and I'm not going to go on an expedition for fully trustworthy sources for that gradient. 
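The arithmetic behind these gradient figures is simple enough to spell out; in this Python sketch the 15 °C surface temperature in the first line is an assumed round number, not a figure from the answer:

```python
def rock_temp(surface_temp_c, gradient_c_per_km, depth_km):
    """Expected rock temperature for a constant geothermal gradient."""
    return surface_temp_c + gradient_c_per_km * depth_km

# Naive worldwide-average gradient: 25 C/km over the mine's 4 km depth
print(rock_temp(15.0, 25.0, 4.0))   # 115.0 -> "well over boiling", as the question notes

# Backtracking the ~9 C/km Wits-basin gradient from the 66 C maximum
# rock temperature instead implies a ~30 C surface temperature
print(66.0 - 9.0 * 4.0)             # 30.0
```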
However, some more cursory digging unearthed this source , which looks reasonable (if not particularly scientific), and which claims that mining at these depths is only feasible in South Africa’s Wits Basin due to a relatively low geothermal gradient (nine degrees Celsius/km) and the presence of gold reefs in hard competent country rocks. This is enough of an agreement to call it a day. Backtracking a 9 °C/km gradient over 4 km gives a ~36 °C difference, and taking that away from the 66 °C (maximal!) rock temperature in the mine gives a ~30 °C average surface temperature. This is relatively high, but it is within a reasonable envelope, and there's plenty of leeway on the numbers (e.g. making the gradient 10 or 11 °C/km) to take away any glaring contradictions. | {
"source": [
"https://physics.stackexchange.com/questions/289041",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/57925/"
]
} |
289,058 | There was a talk at my school by Rocky Kolb and he claimed that they derived (or it might have been ''experimentally found'', I don't remember) that the mass of empty space must be/is on the order of $10^{-30}g/cm^3$. This made me curious - what would happen if I somehow took a unit volume of empty space (cube) and pulled on it from all sides, increasing its volume by $dV$? I'm curious as to what that would mean with regards to the new volume of space $V+dV$ and what that would mean to the space around it and the rest of the universe? Do we even have ideas/theories as to what changes as you change empty space (I explicitly ask this because we are probably nowhere near having the ability to modify space itself)? | | {
"source": [
"https://physics.stackexchange.com/questions/289058",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/103052/"
]
} |
289,101 | I was standing outside in very light drizzle, sun behind me. I saw a rainbow. I know why they occur but... I was wearing polarized sunglasses. As an experiment, I turned my sunglasses through 90 degrees. The rainbow got brighter. I partially expected this, since rainbows are very directional. What I did not expect is that my naked eye (without sunglasses) saw a dimmer rainbow than the eye looking through the polarized dark glass. I had expected it to be about the same. More than that, through the sunglasses the colours were more vibrant. What are the reasons for this? I didn't think refracted light could be that polarized. | The rainbow did not become brighter through the polarized sunglasses (PS). Rather, the PS enhanced the contrast between the rainbow and the background light of the sky: The PS decreased the brightness of the sky, while the effect on the rainbow, if any, was much smaller. While the eyes have adjusted to the absolute level of brightness, the relative brightness of the rainbow (i.e., contrast) became higher. The reason why the colors appear more vibrant through the PS is the same: higher contrast. And, just in case, the reason why PS enhance the contrast is described in detail in the section “Sky polarization and photography” of the Wikipedia article on polarization . | {
"source": [
"https://physics.stackexchange.com/questions/289101",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/134340/"
]
} |
289,108 | I'm having difficulty understanding the bra-ket notation used in quantum mechanics. For instance, take the notation used in the question Is there a relation between quantum theory and Fourier analysis? Let $O$ be an operator on a (wave)function, $f,g$ be (wave)functions, and $x$ be a dummy variable (representing a basis for $f$, I suppose). If I'm understanding the notation correctly, then $|f\rangle =$ a function independent of basis, i.e., $|\psi\rangle =$ the state vector $\langle x|f\rangle = f = |f\rangle$ transformed to a position basis $\langle x|O\rangle =$ operator on an eigenvalue of $O$ that produces the corresponding eigenfunction under a position basis $|O\rangle =$ operator on an eigenvalue of O that produces the corresponding eigenfunction independent of basis $\langle g(x)\rangle = \langle \psi|g(x)|\psi\rangle = $expectation of g(x) on measure $|\langle x|\psi\rangle|^2$ $\langle f|g\rangle$ is the projection of $g$ onto $f$, i.e. $\langle f,g\rangle$ for normalized $f$ $\langle f|x\rangle$ is undefined $\langle x|x\rangle$ is undefined $|x\rangle$ is undefined The bra portion of the bra-ket is always a dummy variable ($x$ for position, $p$ for momentum, etc). The ket portion is always a function/operator ($p$ for the momentum operator, etc) Does this look right? Also, how does the three-argument version $\langle a|b|c\rangle$ work? Same question for the bra version $\langle a|$ - if the bra is the basis, then what does it mean to take a basis without a function? | Let me work in mathematicians' notation for a bit and then switch back to Dirac notation. Suppose you start with a Hilbert space $\mathscr H$, which you can understand as a space of functions from some coordinate space $R$ into $\mathbb C$, i.e. if $f\in\mathscr H$ then $f:R\to \mathbb C$, and that you have some suitable notion of inner product $(·,·):\mathscr H\times \mathscr H\to\mathbb C$, like e.g. an integral over $R$.
(Note that here $(·,·)$ should be linear in the second argument.) Given this structure, for every vector $f\in\mathscr H$ you can define a linear functional $\varphi_f:\mathscr H\to \mathbb C$, i.e. a function that takes elements $g\in \mathscr H$ and assigns them complex numbers $\varphi_f(g)\in \mathbb C$, whose action is given specifically by $\varphi_f(g) = (f,g)$. As such, $\varphi_f$ lives in $\mathscr H^*$, the dual of $\mathscr H$, which is the set of all (bounded and/or continuous) linear functionals from $\mathscr H$ to $\mathbb C$. There's plenty of other interesting functionals around. For example, if $\mathscr H$ is a space of functions $f:R\to \mathbb C$, then another such functional is an evaluation at a given point $x\in R$: i.e. the map $\chi_x:\mathscr H\to\mathbb C$ given by
$$\chi_x(g) = g(x).$$
In general, this map is neither bounded nor continuous (w.r.t. the topology of $\mathscr H$), but you can ignore that for now; most physicists do. Thus, you have this big, roomy space of functionals $\mathscr H^*$, and you have this embedding of $\mathscr H$ into $\mathscr H^*$ given by $\varphi$. In general, though, $\varphi$ may or may not cover the entirety of $\mathscr H^*$. The correspondence of this into Dirac notation goes as follows: $f$ is denoted $|f\rangle$ and it's called a ket. $\varphi_f$ is denoted $\langle f|$ and it's called a bra. $\chi_x$ is denoted $\langle x|$, and it's also called a bra. Putting these together you start getting some of the things you wanted: 2. $\langle x |f\rangle$ is $\chi_x(f) = f(x)$, i.e. just the wavefunction. 6. $\langle f | g \rangle$ is $\varphi_f(g) = (f,g)$, i.e. the inner product of $f$ and $g$ on $\mathscr H$, as it should be. Note in particular that these just follow from juxtaposing the corresponding interpretations of the relevant bras and kets. 7. Somewhat surprisingly, $\langle f | x\rangle$ is actually defined - it just evaluates to $f(x)^*$. This is essentially because, in physicists' brains, 9. $|x\rangle$ is actually defined. It's normally understood as "a function that is infinitely localized at $x$", which of course takes a physicist to make sense of (or more accurately, to handwave away the fact that it doesn't make sense). This ties in with 8.' $\langle x' | x\rangle$, the braket between different positions $x,x'\in R$, which evaluates to $\delta(x-x')$. Of course, this then means that 8. $\langle x | x\rangle$, with both positions equal, is not actually defined. If this looks like physicists not caring about rigour in any way, it's because it mostly is. I should stress, though, that it is possible to give a rigorous foundation to these states, through a formalism known as rigged Hilbert spaces , where you essentially split $\mathscr H$ and $\mathscr H^*$ into different "layers".
On balance, though, this requires more functional analysis than most physicists really learn, and it's not required to successfully operate on these objects. Having done that, we now come to some of the places where you've gone down some very strange roads: 3. $\langle x| O\rangle$ does not mean anything. Neither does "operator on an eigenvalue of $O$ that produces the corresponding eigenfunction under a position basis". 4. $|O\rangle$ is not a thing. You never put operators inside a ket (and certainly not on their own). Operators always act on the outside of the ket. So, say you have an operator $O:\mathscr H\to\mathscr H$, which in mathematician's notation would take a vector $f\in \mathscr H$ and give you another $O(f)\in \mathscr H$. In Dirac notation you tend to put a hat on $\hat O$, and you use $\hat O|f\rangle$ to mean $O(f)$. In particular, this is used for the most fundamental bit of notation: $\langle f |\hat O|g\rangle$, which a mathematician would denote $\varphi_f(O(g)) = (f,O(g))$, or alternatively (once you've defined the Hermitian conjugate $O^*$ of $O$) $\varphi_{O^*(f)}(g) = (O^*(f),g)$. This includes as a special case 5. $\langle f |G(\hat x)|f\rangle$. This is sometimes abbreviated as $\langle G(\hat x)\rangle$, but that's a good recipe for confusion. In this case, $G:R \to \mathbb C$ is generally a function, but $G(\hat x)$ is a whole different object: it's an operator, so e.g. $G(\hat x)|f\rangle$ lives in $\mathscr H$, and its action is such that this vector has wavefunction
$$ \langle x| G(\hat x) | f \rangle = G(x) f(x).$$
The general matrix element $\langle g |G(\hat x)|f\rangle$ is then taken to be the inner product of $|g\rangle$ with this vector, i.e. $\int_R g(x)^*G(x) f(x)\mathrm dx$, and similarly in the special case $g=f$. Finally, this brings us to your final two questions: 10. The statement that "the bra portion of the bra-ket is always a dummy variable" is false. As you have seen, $\langle f|$ is perfectly well defined. (Also, $x$ and $p$ are not "dummy" variables, either, again as you have seen above.) 11. Similarly, the statement that "the ket portion is always a function/operator" is also false. You never put operators inside a ket (you put them to the left), and it's generally OK to put $x$'s in there (though, again, this does require either more work to bolt things down, or a willingness to handwave away the problems). I hope this is enough to fix the problems in your understanding and get you using Dirac notation correctly. It does take a while to wrap one's head around but once you do it is very useful. Similarly, there's plenty of issues in terms of how we formalize things like position kets like $|x\rangle$, but they're all surmountable and, most importantly, they make much more sense once you've been using Dirac notation correctly and comfortably for a while. | {
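All of this can be made concrete on a computer by discretizing the position basis: kets become arrays of samples and $\langle f|g\rangle$ becomes a Riemann sum. A sketch (the Gaussian choices for $f$, $g$ and $G(x)=x^2$ are arbitrary illustrations):

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

f = np.pi**-0.25 * np.exp(-0.5 * x**2)        # <x|f>, a normalized Gaussian
g = np.pi**-0.25 * np.exp(-0.5 * (x - 1)**2)  # <x|g>, the same Gaussian shifted

def braket(bra, ket):
    """<bra|ket> as the Riemann sum of bra(x)* ket(x) dx."""
    return np.sum(np.conj(bra) * ket) * dx

print(round(float(braket(f, f)), 6))       # 1.0 -> <f|f> = 1
# <f|G(x_hat)|f>: in the position basis G(x_hat) acts by multiplication
G = x**2                                   # arbitrary choice G(x) = x^2
print(round(float(braket(f, G * f)), 6))   # 0.5 -> <x^2> for this Gaussian
```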
"source": [
"https://physics.stackexchange.com/questions/289108",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/134195/"
]
} |
289,495 | E.g. If we had a jar of marbles or something else of different densities and shook it, the most dense ones would go to the bottom and the less dense ones to the top. ( Image Source ) If I put a cube of lead in water it would sink all the way to the bottom. But for ice : what I am trying to understand is why doesn't the water (being denser than the ice) seek to reach the bottom, and the ice sit flat on top of it (as in the left image)? Instead, some part of the ice is submerged in the water (as in the right image), and some sits on top of it. | When put in water, an object sinks to the point where the volume of water it displaces has the same weight as the object. Archimedes was the one who discovered this. When you put lead in water, the weight of the lead is much greater than that of the same volume of water. Hence it sinks to the bottom. As ice only weighs about 90% of its volume of water, 90% of the ice will be under water, the rest above. The actual figure is 91.7%, given by the specific gravities of water (0.9998) and ice (0.9168) at 0 °C. Actually, in the case of lead, if the water were deep enough, the lead would sink to the point where its weight equals that of the water under pressure at depth. As lead will compress as well as the water, that may never happen, but for other objects and/or fluids it might. This is also the reason why helium-filled balloons float up: their weight is less than that of the same volume of air. As they float up, the balloon expands, while the air gets rarer and hence lighter. At a certain altitude the two will be equal and the balloon will stop rising. | {
"source": [
"https://physics.stackexchange.com/questions/289495",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/126999/"
]
} |
289,508 | Suppose we have a simple pendulum damped by air resistance, proportional to the velocity of the pendulum. By using the small angle approximation of sin, we are able to solve a second order differential equation and arrive at the conclusion that the angle from the vertical, $\theta$, is equal to a trig function multiplied by a decaying exponential $$\theta(t) = A~\left(e^{-bt/2m}\right)\sin (ft + \omega)$$ It is evident that the amplitude of successive swings become smaller, yet the frequency of the oscillation $f$ remains constant, according to this. Evidently wrong, how would one be able to quantify such a change in period of such a damped pendulum, as a function of time? | | {
"source": [
"https://physics.stackexchange.com/questions/289508",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/134531/"
]
} |
289,924 | If we say that an instant of time has no duration, why does a sum of instants add up to something that has a duration? I have a hard time understanding this. I think of one instant as being a 'moment' of time. Hence, the sum of many instants would make a finite time period (for example 10 minutes). EDIT:
Since I got so many great answers, I was wondering if someone could also give an illustrative example, besides the pure math? I am just being curious... | I believe you're asking about a paradox in the style of Zeno's paradoxes . Your paradox is most similar to the 'Paradox of the Grain of Millet'. You want to know how an infinite sum of infinitesimal instants could equal a finite length of time,
$$
\int~\mathrm dt = t.
$$
Well, the above is nothing but
$$
\int~\mathrm dt =\lim_{N\to\infty} \sum_{n~=~1}^N t/N = t,
$$
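For the illustrative, non-mathematical example the question's addendum asks for, chop the question's 10-minute span into ever more, ever shorter "instants" (a plain Python sketch):

```python
# Each "instant" t/N shrinks towards zero duration as N grows,
# yet the N of them always add back up to the full t = 10 minutes
t = 10.0
for N in (10, 1_000, 1_000_000):
    dt = t / N
    total = sum(dt for _ in range(N))
    print(N, dt, round(total, 6))   # total stays 10.0 for every N
```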
This, and many of Zeno's paradoxes, are resolved by understanding calculus and infinite sums. | {
"source": [
"https://physics.stackexchange.com/questions/289924",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/134504/"
]
} |
290,567 | I am not sure if this is the right forum for the question but failing to have any better location to ask it, I have come here. In the UK we have a tradition of lighting bonfires on the 5th of November (or the weekend closest to it). With numerous large bonfires being created, and knowing that forest fires in Australia can raise the temperature of the area there, is it feasible that the temperature of the UK slightly increases on bonfire night due to the number of bonfires that have been lit, or is the increase so infinitesimally small that it couldn't be measured, let alone felt by the populace? I have tried to find an answer to this but the best I can come up with is the current weather conditions for bonfire night, which isn't even remotely close to what I am hoping to see. As an addendum to this question... ironically it snowed in a lot of the UK only 5 days later. | With numerous large bonfires being created, and knowing that forest fires in Australia can raise the temperature of the area there, is it feasible that the temperature of the UK slightly increases on bonfire night due to the number of bonfires that have been lit, or is the increase so infinitesimally small that it couldn't be measured, let alone felt by the populace? I doubt very much if the overall land area of the UK would be affected by the presence of an arbitrary number of bonfires. I have no idea of the number of bonfires, but let's do an estimation. The population of the UK is around 60 million people. Assume that the people most likely to light bonfires are in the 15 to 40 age group; the UK Official Population Figures produce a guesstimate of 20 percent. So you have possibly 12 million potential firestarters; let's cut that down to 6 million actually available to light fires, and assume that 100 people attend each event.
So 6 million firestarters at 100 people per fire gives 60,000 large bonfires (or a lot more smaller ones; it evens out), and say the area of each bonfire base is 10 square metres. That is 600,000 square metres occupied by burning material. The total land area of the UK is 243.61 billion square metres. Enough already, I think; I am ignoring the heat output of each fire. A better way to estimate, in my opinion, is to consider whether 243.61 billion square metres of air at an average 5 degrees Celsius will be affected in any detectable way by 600,000 square metres at an average temperature of, say, 600 degrees Celsius (the average temperature of burning wood). No, it won't. Whatever the fire number is, it still has to contend with 244 billion square metres of 5 degree Celsius air. In addition, we may have to allow for 2 to 5 million hot car exhausts and 3 million plus domestic heating fires, which would have far more effect than bonfires. So locally, within a radius of 200 metres of each fire, the air temperature increase is detectable; beyond that, no chance of detection seems likely. A better way to think about it, after the fires all go out around two in the morning, is to ask yourself: will you notice a distinct change in the weather the next day? Will all this energy affect you? You could find out yourself by looking at the weather pattern over London during the 1940–43 Blitz, for example. I put this part as the real answer, because I believe that rather than saying it depends on various factors, research can be carried out to establish whether this idea is true, by checking weather patterns shortly after the fires and looking for correlations.
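The estimate can be spelled out in a few lines (a sketch using the answer's own rough figures, which work out to 60,000 fires):

```python
population = 60_000_000               # UK population
firestarters = population // 5 // 2   # 20% in the 15-40 age group, then halved
bonfires = firestarters // 100        # 100 people attending each fire
burning_area_m2 = bonfires * 10       # 10 m^2 of burning base area per fire
uk_area_m2 = 243.61e9                 # total UK land area

print(bonfires)                       # 60000
print(burning_area_m2 / uk_area_m2)   # ~2.5e-06: a few millionths of the UK's area
```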
More supporting (but indirect) evidence is the deadly smog that regularly covered London due to domestic coal fires, but it's far too complicated as regards the number of variables to easily establish correlations between the bonfires and the weather. The focus then moves from PhysicsSE to EarthSciencesSE, imo. | {
"source": [
"https://physics.stackexchange.com/questions/290567",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/133795/"
]
} |
290,690 | I've been studying the postulates of QM and seeing how to derive important ideas from them. One thing that I haven't been able to derive from them, however, is the identity of the momentum operator. For simplicity, I'm only thinking about no relativistic effects, no spin, no time-dependent potentials, and one spatial dimension. Also I'm assuming the position operator is simply multiplication by $x$, as in, I'm in position space. So the Hamiltonian operator is
$ H = -\frac{\hbar^2}{2m}\nabla^2+V$. I know that the momentum operator is $p = -i\hbar \frac{\partial}{\partial x}$. But how do I get there from the postulates? I know that it makes sense , as it results in the Ehrenfest Theorem, the De Broglie wavelength hypothesis, the Heisenberg Uncertainty Principle (for $x$ and $p$), the momentum operator being the generator of the translation operator, and possibly many other desirable theorems, and correlations with classical momentum. But none of these are postulates (at least, not in the various formalisms I encountered), so you can't derive $p = -i\hbar \frac{\partial}{\partial x}$ from them. Rather, they are consequences of it. You need to know the operator beforehand to see that they are correct. Yes, this is just semantics, but that is the core issue for me: Regardless of how much sense it makes, is the identity $p = -i\hbar \frac{\partial}{\partial x}$ (under the assumptions I made) a Postulate, meaning that you can't derive it from other postulates, or can it in fact be obtained from them? And in the latter case, could you show me how? Note: I know that there are many different and equivalent sets of postulates for QM. But in none that I saw did they name it as a postulate nor properly derived it. | First it is a bit scrappy to write something like: $$\hat{P} = -i\hbar\partial /\partial x.$$ It's more rigorous to write: $$\langle x|\hat{P}|\phi\rangle = -i\hbar\frac{\partial}{\partial x}\langle x|\phi\rangle,$$ and it should be interpreted as the momentum operator in spatial representation. Derivations: The physical meaning behind momentum is that: 1. It is the conserved quantity corresponding to spatial translation symmetry. 2. Because of 1, the momentum operator (Hermitian) is the generator of the spatial translation operator (unitary). In terms of equations: Define the spatial translation operator $D(a)$ s.t. 
$$|x+a \rangle = D(a)|x \rangle,$$ and: $$D(a) = e^{-ia\hat{p}/\hbar}.$$ I assume you have no problem deriving this. Please note that this only depends on the quantization condition $[x,p] = i\hbar$, which is one of the postulates of quantum mechanics. Take an arbitrary state $|\phi\rangle$ and apply $D(a)$ on it: $$D(a)|\phi\rangle = \int D(a)|x\rangle \langle x|\phi\rangle \,dx = \int |x+a\rangle \langle x|\phi\rangle \,dx.$$ Change of variable, RHS = $$\int |x\rangle \langle x-a|\phi\rangle \,dx.$$ Take $a\to 0$ and expand to first order in $a$; in the RHS: $$\langle x|D(a)|\phi\rangle = \phi(x-a) = \phi(x) - a\frac{\partial}{\partial x}\phi(x),$$ and in the LHS: $$D(a) = 1-ia\hat{p}/\hbar.$$ Equating the two, you can recover $\langle x|\hat{P}|\phi\rangle = -i\hbar\frac{\partial}{\partial x}\langle x|\phi\rangle$ | {
"source": [
"https://physics.stackexchange.com/questions/290690",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/123183/"
]
} |
290,906 | It is often said that, according to general relativity, spacetime is curved by the presence of matter/energy. But isn't it simply the coordinate lines of the coordinate system that are curved? | Congratulations! You stumbled upon an important question of differential geometry: How can I know whether the curvature is caused by my choice of coordinates or the space I live in? As has been mentioned in other answers, the word “curvature” is used to refer both to a property of the space and to a property of the coordinates. Let me call the latter “variation” instead. To illustrate both cases, imagine: being in “flat” Euclidean space, but using spherical coordinates; or living on the sphere, using any kind of coordinates. In the first case, obviously a change to Cartesian coordinates eliminates all variation in your coordinates. In the latter, you can choose any representation you want – you will not get variation-free coordinates! For instance, the closer you get to the poles, the “denser” your coordinates are forced to get, if they are to stay continuous. This means it must be caused by the space itself – if coordinates fail to get straight, we say the “space has curvature”. Curvature is also said to be an “intrinsic property of the space”, meaning exactly that this property does not depend on its representation by coordinates. To answer your question briefly: No. When saying “spacetime is curved”, we mean “spacetime has curvature”, and not only “the coordinates vary”. Some definitions Note however that the vocabulary is extremely vague. To be more precise, we need to use the mathematical terms: Our “space” or “spacetime” becomes a “Riemannian manifold”, namely an abstract mathematical set with some nice properties and the ability to measure distances locally. The latter is called the “metric tensor field”. “Coordinates” are actually maps from our Manifold to $\mathbb R^n$, in the case of spacetime $n=4$.
Wherever you are, you will find a map giving you a set of real numbers. Once you introduce a coordinate map, you have a basis for the metric tensor and can represent it by multiple components which are real numbers. That is extremely useful, since we can now easily take derivatives of it (in the directions of our coordinate basis). If these derivatives are zero everywhere, you already know you are in a flat space. “Curvature” is not so easy to define, however. We need to find tools to measure the failure of our coordinate maps to become constant. Luckily, there have been people such as Gauss and Riemann doing the hard work for you. Gauß' Approach Gauß' approach is to compare how “circles grow”. If you are on a sphere, the “perceived radius” of a circle is slightly larger than the radius corresponding to its circumference / area, so you know you are in a curved space – more precisely, in a space with positive curvature. The radius can be shorter than expected as well! Consider a saddle: since the circle is “stretched”, the circumference and area are larger than expected – this would be an example of negative curvature. A nice mental picture for $n=2$ is to try mounting a sheet of paper onto the surface and observe what happens: it rips (negative curvature), it fits nicely (zero curvature), or it squeezes (positive curvature). The problem with Gauß' approach is that although it is intuitive when looking from “outside” at the manifold, determining it from inside the manifold involves taking a limit, and it is not so easy to compute and generalize. Well, not as easy as the way Riemann did it at least: Riemann's Approach Take the sphere: a most famous effect of our world's curvature is the fact that you can span a triangle with angles of $\frac \pi 2$ only. Another possibility is parallel transport – if you take a vector and go straight up to the north pole, then straight to your right to the equator, and straight down, your vector has shifted by $\frac\pi 2$.
This can be generalized: Take a vector, parallel transport it some distance up, some distance to the right, then go back down and back left. In a flat space, the vector wouldn't have changed. In a curved space, however, we would observe a shift. Now note that the notion of “up” and “right” can easily be generalized into the idea of following two coordinate vectors! This is the idea of the Riemann tensor:
$$R(u,v)w=\nabla_u\nabla_v w - \nabla_v \nabla_u w - \nabla_{[u,v]} w$$
This is essentially implementing the following protocol: Take a vector $w$ Transport it in the direction of the vector (in our case: a coordinate vector) $u$, then $v$ Transport the same vector in the direction of $v$, then $u$, plus a correction term that's there for technical reasons Observe how the difference in paths made our vector differ. However, not quite: since the displacement vector depends on the distance, and we want to define a value of the curvature locally – in this case as a property of the point – shrinking the distance makes the displacement vector go to zero. So our argument is not quite correct – we are interested in the linear change of said displacement vector when changing the distance. We can compute the quantity for each pair of the $n$ coordinates (indices: $\mu, \nu$), and can then observe the $\rho$-component of a unit vector in direction $\sigma$ – let's denote this quantity by $R^\rho{}_{\sigma\mu\nu}$. It has some symmetries, so we actually have $\frac{n^2(n^2-1)}{12}$ independent components (I trust Wikipedia on that one).
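None of this is hard to mechanize. As a rough illustration (Python with sympy; not part of the original answer, and assuming the standard index convention $R^\rho{}_{\sigma\mu\nu}=\partial_\mu\Gamma^\rho{}_{\nu\sigma}-\partial_\nu\Gamma^\rho{}_{\mu\sigma}+\Gamma^\rho{}_{\mu\lambda}\Gamma^\lambda{}_{\nu\sigma}-\Gamma^\rho{}_{\nu\lambda}\Gamma^\lambda{}_{\mu\sigma}$), one can grind out the curvature of a round 2-sphere directly from its metric:

```python
import sympy as sp

theta, phi, r = sp.symbols('theta phi r', positive=True)
x = [theta, phi]                              # coordinates on the sphere
g = sp.diag(r**2, r**2 * sp.sin(theta)**2)    # round metric of radius r
ginv = g.inv()
n = 2

def Gamma(a, b, c):
    """Christoffel symbol Gamma^a_{bc} of the Levi-Civita connection."""
    return sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                      - sp.diff(g[b, c], x[d]))
        for d in range(n))

def Riem(a, b, c, d):
    """Component R^a_{bcd} of the Riemann tensor."""
    expr = sp.diff(Gamma(a, b, d), x[c]) - sp.diff(Gamma(a, b, c), x[d])
    expr += sum(Gamma(a, c, e) * Gamma(e, b, d)
                - Gamma(a, d, e) * Gamma(e, b, c) for e in range(n))
    return sp.simplify(expr)

# Contract twice: Ricci tensor R_{bd} = R^a_{bad}, then Ricci scalar R = g^{bd} R_{bd}
Ricci = sp.Matrix(n, n, lambda b, d: sum(Riem(a, b, a, d) for a in range(n)))
R = sp.simplify(sum(ginv[b, d] * Ricci[b, d] for b in range(n) for d in range(n)))
print(R)   # -> 2/r**2
```

The Ricci scalar comes out as $2/r^2$, twice the Gaussian curvature $1/r^2$ of the sphere, consistent with the two-dimensional relation the answer mentions.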
This tensor can be contracted to a smaller one by summing over equal $\rho$ and $\mu$, leaving two indices, which can be contracted once again, leaving a scalar $R$, also known as the Ricci scalar, which is – surprise – in two dimensions twice the Gaussian curvature. So Riemannian curvature does seem to capture the right intuition nonetheless! The equation you saw above can be reduced to first and second partial derivatives of the metric tensor – which is really easy to evaluate (at least if you know the closed form). Remember that the tensor (and obviously derived contractions such as the Ricci scalar) contains a lot of terms; calculating the Riemann tensor is a well-beloved exercise for the eager student (or the poor soul trying to pass a class on differential geometry). Summary What is meant is the intrinsic curvature of the space, meaning it is independent of the choice of coordinates. There are clever methods of determining whether and to what extent your space deviates from flat Euclidean space, namely Gaussian curvature and, more importantly, the Riemann tensor.
"source": [
"https://physics.stackexchange.com/questions/290906",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/68279/"
]
} |
291,186 | In the ideal gas, the volume of a molecule and the interactions between the molecules are assumed to be negligible. Why aren't the interactions between the molecules and the walls of the container assumed to be negligible? Why are they much larger than those between molecules? | One important answer is simply that experimentally ignoring the interaction with the walls is clearly a terrible approximation. If that were true any gas would instantly escape from any container we put it in. More theoretically, an ideal gas does not assume there are no interactions between particles; it assumes that the interactions have zero range (i.e. the particles have to be in contact). We can apply the same idea to the wall of the container, but we get a very different result: because the wall has a finite cross-sectional area, rather than being point-like, the particle will always hit the wall if it travels far enough, but it will almost never hit another particle.
"source": [
"https://physics.stackexchange.com/questions/291186",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
291,491 | Walter Lewin's first lecture (at 22:16) analyzes the time $t$ for an apple to fall to the ground, using dimensional analysis. His reasoning goes like this: It's natural to suppose that height of the apple to the ground ($h$), mass of the apple ($m$), and the acceleration due to gravity ($g$) may impact (pardon the pun) the time it takes for the apple to reach the ground. Then
$$t \propto h^\alpha m^\beta g^\gamma.$$
On both sides, the units must be equivalent, so
$$[T] = [L]^\alpha [M]^\beta \left[\frac{L}{T^2}\right]^\gamma = [L]^{\alpha + \gamma} [M]^\beta [T]^{-2\gamma}.$$
Therefore,
$$1 = -2\gamma, \quad \alpha + \gamma = 0, \quad \beta = 0.$$
Solving, we have
$$\gamma = -\frac{1}{2}, \quad \alpha = \frac{1}{2}, \quad \beta = 0.$$
Then we conclude $t = k\sqrt{\frac{h}{g}}$, where $k$ is some unit-less constant. Lewin concludes that the apple falls independently of its mass, as proved in his thought experiment and verified in real-life. But I don't agree with his reasoning. Lewin made the assumption that $k$ is unit-less. Why could he come to this conclusion? After all, some constants have units, like the gravitational constant ($G$). Why isn't the following reasoning correct? The constant ($k$) has the unit $[M]^{-z}$; therefore, to match both sides of the equation, $\beta = z$. So indeed, the mass of the apple does impact its fall time. | Your example shows a fundamental idea: even though the units agree, this does not mean that the resulting equation is a law of physics. This is why physicists only 'accept' laws that have been tested experimentally. This idea is nicely explained in the following XKCD comic: Here, we get a more extreme example than just 'changing the units of $k$'. It turns out we could arbitrarily add different quantities to an equation, and end up with a new equation that is completely valid. This does not mean that this actually makes any sense! Your new 'law' needs to be validated with experiment, and as you can see in the comic, a single experiment may not be enough. Dimensional analysis, then, is not used to derive new laws of physics through pure reasoning alone. Instead, your professor already knew, through whatever reason, that $t\propto h^\alpha m^\beta g^\gamma$. Even that is already a bit of a leap of faith - there is nothing + that keeps you from assuming $t\propto \ln h$. To quote your own post: [...] as proved in his thought experiment and verified in real-life. Instead, then, you should see dimensional analysis as a very useful tool, part of a larger toolset to derive certain laws and equations. For example, if you derive an equation through whatever other means, you can use dimensional analysis to check whether your new equation is possible at all.
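As an aside, the exponent-matching step is just a small linear system and can be solved mechanically; a quick sketch (Python with sympy; not part of the original answer):

```python
import sympy as sp

alpha, beta, gamma = sp.symbols('alpha beta gamma')
# Match exponents of T, L and M on both sides of [T] = [L]^a [M]^b [L/T^2]^c
eqs = [sp.Eq(-2 * gamma, 1),     # powers of T
       sp.Eq(alpha + gamma, 0),  # powers of L
       sp.Eq(beta, 0)]           # powers of M
sol = sp.solve(eqs, [alpha, beta, gamma])
print(sol)   # -> {alpha: 1/2, beta: 0, gamma: -1/2}
```

recovering $t \propto \sqrt{h/g}$, with no dependence on the mass.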
Or, in the case of this professor, you may have a general idea where your equation should be going, and you can then use dimensional analysis to get a reasonable idea of the final form of that equation. (Note: this may save you in a closed-book exam one day). + That's not actually true. There are good reasons why you will probably never see $\ln h$, but that's for some other time. | {
"source": [
"https://physics.stackexchange.com/questions/291491",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/113809/"
]
} |
291,886 | What I mean is, suppose a ball is fired from a cannon. Suppose the ball is moving at 100 m/s in the first second. Would the ball have started from 1m/s to 2m/s and gradually arrived at 100m/s? And is the change so fast that we are not able to conceive it? Or does the ball actually start its motion at 100m/s as soon as the cannon is fired? Suppose a 40-wheeler is moving at the speed of 100 kmph. And it collides with a car and does not brake (The car is empty in my hypothesis ;-) ). If a body does not reach a certain speed immediately and only increases speed gradually, does that mean that as soon as the truck collides with the car, the truck momentarily comes to rest? Why I ask this is because the car is not allowed to immediately start at the truck's speed. So when both of them collide, the car must start from 0 to 1kmph to 2kmph and finally reach the truck's speed. Does this not mean that the truck must 'restart' as well? Intuition tells me I'm wrong but I do not know how to explain it physically being an amateur. When is it possible for a body to accelerate immediately? Don't photons travel at the speed of light from the second they exist? | The answer to your question is "No" as this would require an infinite acceleration and hence an infinite force to be applied. However to simplify problems it is sometimes convenient to assume an instantaneous jump in velocity as in the left hand graph. It might be that the actual change in velocity is as per the green line in the right hand graph but because the change in velocity $\Delta t$ takes place over such a short period of time, perhaps $\Delta t \ll 1$ in this example, the approximation to an "instantaneous" change has little bearing on the final outcome. 
So every time you see a velocity-against-time graph with some sort of "corner", where a gradient (= acceleration) cannot be found, you have to remember that the corner is actually rounded; because the rounding occurs over a very short period of time compared with the time scale of the whole motion, it matters very little. Another example is problems where a ball rebounds from the ground and you have to find the time it takes to reach a certain height after the rebound. It is unlikely that you consider the time that the ball is in contact with the ground: slowing down with an acceleration much greater than $g$ (the acceleration of free fall), stopping, and then accelerating upwards at an acceleration greater than $g$. Usually the sums are done assuming that the acceleration is $g$ all the time, because the time that the ball is in contact with the ground is so much smaller than its time of flight through the air.
"source": [
"https://physics.stackexchange.com/questions/291886",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/135753/"
]
} |
291,991 | Intro: In completing Walter Lewin's 6th lecture on Newton's Laws , he presents an experiment (go to 42:44) which leaves me baffled. Experiment: (I recommend watching the video; see link above.) There is a $2$ kg block with 2 identical strings attached to it: one at the top, the other at the bottom. The top string is attached to a "ceiling", and the bottom to a "floor". Professor Lewin "stretches" the system (by pulling on the bottom string) with the block not accelerating. One string snaps. Prediction: Initially, the top string has a tension of approximately $20$ N, to counter the force of gravity. The bottom string has no tension at all. Then, when Lewin pulls the bottom string, it gains some tension $n$ N. To counteract the force exerted by the bottom string, the top string now exerts $20 + n$ N. I assume that the string with more force will give out sooner, leading me to conclude that the top string will break. Results: (This was conducted by Lewin, not me; see link above.) Trial 1: Bottom string breaks. Trial 2: Top string breaks. Trial 3: Bottom string breaks. Additional Notes: The results don't seem consistent. If I was right, I'd expect all 3 experiments to be right; conversely, if I was wrong, I'd expect all 3 experiments wrong, with one exception: the results are more or less random and one result isn't preferred over the other. Question: Why was my prediction incorrect? Was there a flaw in my logic? Why were the results inconsistent? | While I haven't seen the video, the description matches an old science trick using inertia: if you want the top string to snap, pull slowly. To snap the bottom string, pull suddenly - the inertia of the weight will “protect" the upper string for a brief moment.
"source": [
"https://physics.stackexchange.com/questions/291991",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/113809/"
]
} |
292,309 | Today I was watching Professor Walter Lewin's lecture on Newton's laws of motion . While defining Newton's first, second and third law he asked "Can Newton's laws of motion be proved?" and according to him the answer was NO ! He said that these laws are in agreement with nature and experiments follow these laws whenever done. You will find that these laws are always obeyed (to an extent). You can certainly say that a ball moving with constant velocity on a frictionless surface will never stop unless you apply some force on it, yet you cannot prove it. My question is that if Newton's laws of motion can't be proved then what about those proofs which we do in high school (see this , this )? I tried to get the answer from previously asked question on this site but unfortunately none of the answers are what I am hoping to get. Finally, the question I'm asking is: Can Newton's laws of motion be proved? | If you want to prove something, you have to start with axioms that are presumed to be true. What would you choose to be the axioms in this case? Newton's Laws are in effect the axioms, chosen (as others have pointed out) because their predictions agree with experience. It's undoubtedly possible to prove Newton's Laws starting from a different set of axioms, but that just kicks the can down the road. | {
"source": [
"https://physics.stackexchange.com/questions/292309",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/128861/"
]
} |
292,958 | I know the spent fuel is still radioactive. But it has to be more stable than what was put in and thus safer than the uranium that we started with. That is to say, is storage of the waste such a big deal? If I mine the uranium, use it, and then bury the waste back in the mine (or any other hole) should I encounter any problems? Am I not doing the inhabitants of that area a favor as they will have less radiation to deal with than before? | Typical nuclear power reactions begin with a mixture of uranium-235 (fissionable, with a half-life of 700 Myr) and uranium-238 (more common, less fissionable, half-life 4 Gyr) and operate until some modest fraction, 1%-5%, of the fuel has been expended. There are two classes of nuclides produced in the fission reactions: Fission products, which tend to have 30-60 protons in each nucleus. These include emitters like strontium-90 (about 30 years), iodine-131 (about a week), cesium-137 (also about 30 years). These are the main things you hear about in fallout when waste is somehow released into the atmosphere. For instance, after the Chernobyl disaster, radioactive iodine-131 from the fallout was concentrated in people's thyroid glands using the same mechanisms as the usual concentration of natural iodine, leading to acute and localized radiation doses in that organ. Strontium behaves chemically very much like calcium, and there was a period after Chernobyl when milk from dairies in Eastern Europe was discarded due to high strontium content. (Some Norwegian reindeer are still inedible.) Activation products. The reactors operate by producing lots of free neutrons, which typically are captured on some nearby nucleus before they decay. For most elements, if the nucleus with $N$ neutrons is stable, the nucleus with $N+1$ neutrons is radioactive and will decay after some (possibly long) time.
For instance, neutron capture on natural cobalt-59 in steel alloys produces cobalt-60 (half-life of about five years); Co-60 is also produced from multiple neutron captures on iron. In particular, a series of neutron captures and beta decays, starting from uranium, can produce plutonium-239 (half-life 24 kyr) and plutonium-240 (6 kyr). What sometimes causes confusion is the role played by the half-life in determining the decay rate. If I have $N$ radionuclides, and the average time before an individual nuclide decays is $T$, then the "activity" of my sample is
$$
\text{activity, } A= \frac NT.
$$ So suppose for the sake of argument that I took some number $N_\mathrm{U}$ of U-238 atoms and fissioned them into $2N_\mathrm{U}$ atoms of cobalt-60. I've changed my population size by a factor of two, but I've changed the decay rate by a factor of a billion. The ratio of the half-lives $T_\text{U-238} / T_\text{Pu-240}$ is roughly a factor of a million. So if a typical fuel cycle turns 0.1% of the initial U-238 into Pu-240, the fuel leaves the reactor roughly a thousand times more radioactive than it went in --- and will remain so for thousands of years.
"source": [
"https://physics.stackexchange.com/questions/292958",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/118188/"
]
} |
293,280 | A friend and I are hobby physicists. We don't really understand that much but at least we try to :) We tried to understand what the recently discovered gravitational waves at LIGO are, how they are created and how they have been measured. If I remember correctly, the information we found was that only large/massive objects, for example colliding black holes or neutron stars, emit these. What about smaller objects, e.g. a basketball hitting the ground or an asteroid hitting the earth? Do they also emit gravitational waves? And if not, at which threshold of mass is this happening? daniel | Gravitational waves (GW) are emitted by all systems which have an 'accelerating quadrupole moment' --- which means that the systems have to be undergoing some sort of acceleration (i.e. a constant velocity is not enough), and they have to be asymmetric. The perfect example is a binary system, but something like an asymmetric supernova is also expected to emit GW. The total mass of the system doesn't matter [1] in determining whether GW are produced or not. It does determine how strong the GW are. The more massive the system and the more compact they are, the stronger the GW, and the more likely they are to be detectable---of course, how often an event happens nearby is also very important. The examples you give, black holes (BH) and neutron stars (NS), are some of the best sources because they are the most compact objects in the universe. Another aspect to consider is the detection method. LIGO for example is only sensitive to GW in a certain frequency range (kilohertz-ish), and roughly stellar-mass systems (like binaries of NS and stellar-mass BH) emit at those frequencies. Something like supermassive BH binaries, in wide-separation orbits, emit GW at frequencies of (often) nanohertz --- which are expected to be detected by an entirely different type of method: by Pulsar Timing Arrays.
There is a proposed mission called the Laser-Interferometer Space Antenna (LISA) which would detect objects at frequencies intermediate between Pulsar Timing Arrays and ground-based interferometers (like LIGO), and which is expected to detect tremendous numbers of white-dwarf binaries. [1] General Relativity (GR), the theory which describes gravity and gravitational waves, has a property called "scale invariance". This means that no matter how massive things are, all of the properties of the system look the same if you scale by the mass. For example, if I run a GR simulation of a 10 solar-mass BH, the results would be identical to those of a 10 million solar-mass BH --- except one million times smaller in length-scales (for example the radius of the event horizon). This means that no matter the total mass of the binary, GW are still produced. It's also very convenient for running simulations... one simulation can apply to many situations!
"source": [
"https://physics.stackexchange.com/questions/293280",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/136508/"
]
} |
293,359 | In another post , I claimed that there was obviously an oscillating charge in a hydrogen atom when you took the superposition of a 1s and a 2p state. One of the respected members of this community (John Rennie) challenged me on this , saying: Why do you say there is an oscillating charge distribution for a hydrogen atom in a superposition of 1s and 2p states? I don't see what is doing the oscillating. Am I the only one that sees an oscillating charge? Or is John Rennie missing something here? I'd like to know what people think. | In this specific instance you are correct. If you have a hydrogen atom that is completely isolated from the environment, and which has been prepared in a pure quantum state given by a superposition of the $1s$ and $2p$ states, then yes, the charge density of the electron (defined as the electron charge times the probability density, $e|\psi(\mathbf r)|^2$) will oscillate in time. In essence, this is because the $2p$ wavefunction has two lobes with opposite sign, so adding it to the $1s$ blob will tend to shift it towards the positive-sign lobe of the $p$ peanut. However, the relative phase of the two evolves over time, so at some point the $p$ signs will switch over, and the $1s$ blob will be pushed in the other direction. It's worth doing this in a bit more detail. The two wavefunctions in play are
$$
\psi_{100}(\mathbf r,t) = \frac{1}{\sqrt{\pi a_0^3}} e^{-r/a_0} e^{-iE_{100}t/\hbar}
$$
and
$$
\psi_{210}(\mathbf r, t) = \frac{1}{\sqrt{32\pi a_0^5}} \, z \, e^{-r/2a_0} e^{-iE_{210}t/\hbar},
$$
both normalized to unit norm. Here the two energies are different, with the energy difference
$$\Delta E = E_{210}-E_{100} = 10.2\mathrm{\: eV}=\hbar\omega = \frac{2\pi\,\hbar }{405.3\:\mathrm{as}}$$
giving a sub-femtosecond period. This means that the superposition wavefunction has a time dependence,
$$
\psi(\mathbf r,t)
= \frac{\psi_{100}(\mathbf r,t) + \psi_{210}(\mathbf r,t)}{\sqrt{2}}
=
\frac{1}{\sqrt{2\pi a_0^3}}
e^{-iE_{100}t/\hbar}
\left(
e^{-r/a_0}
+
e^{-i\omega t}
\frac{z}{a_0}
\frac{
e^{-r/2a_0}
}{
4\sqrt{2}
}
\right)
,
$$
and this goes directly into the oscillating density:
$$
|\psi(\mathbf r,t)|^2
=
\frac{1}{2\pi a_0^3}
\left[
e^{-2r/a_0}
+
\frac{z^2}{a_0^2}
\frac{
e^{-r/a_0}
}{
32
}
+
z
\cos(\omega t)
\,
\frac{e^{-3r/2a_0}}{2\sqrt{2}a_0}
\right]
.
$$ Taking a slice through the $x,z$ plane, this density looks as follows: Mathematica source through Import["http://halirutan.github.io/Mathematica-SE-Tools/decode.m"]["http://i.stack.imgur.com/KAbFl.png"] This is indeed what a superposition state looks like, as a function of time, for an isolated hydrogen atom in a pure state. On the other hand, a word of warning: the above statement simply states: "this is what the (square modulus of the) wavefunction looks like in this situation". Quantum mechanics strictly restricts itself to providing this quantity with physical meaning if you actually perform a high-resolution position measurements at different times, and compare the resulting probability distributions. (Alternatively, as done below, you might find some other interesting observable to probe this wavefunction, but the message is the same: you don't really get to talk about physical stuff until and unless you perform a projective measurement.) This means that, even with the wavefunction above, quantum mechanics does not go as far as saying that "there is oscillating charge" in this situation. In fact, that is a counterfactual statement, since it implies knowledge of the position of the electron in the same atom at different times without a (state-destroying) measurement. Any such claims, tempting as they are, are strictly outside of the formal machinery and interpretations of quantum mechanics. Also, and for clarity, this superposition state, like any hydrogen state with support in $n>1$ states, will eventually decay down to the ground state by emitting a photon. However, the lifetime of the $2p$ state is on the order of $1.5\:\mathrm{ns}$, so there's room for some four million oscillations of the superposition state before it really starts decaying. A lot of atomic physics was forged in a time when a nanosecond was essentially instantaneous, and this informed a lot of our attitudes towards atomic superposition states. 
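These numbers are quick to re-derive. As a rough check (plain Python; the only inputs are the Rydberg energy and the ~1.5 ns lifetime quoted above, with constants rounded from CODATA):

```python
h = 4.135667696e-15            # Planck constant in eV*s
Ry = 13.605693                 # hydrogen Rydberg energy in eV
dE = Ry * (1 - 1 / 4)          # E_2p - E_1s = Ry(1/1^2 - 1/2^2), about 10.2 eV
T = h / dE                     # beat period of the 1s + 2p superposition
n_osc = 1.5e-9 / T             # oscillations fitting inside the ~1.5 ns 2p lifetime
print(f"dE = {dE:.2f} eV, period = {T * 1e18:.1f} as, n_osc = {n_osc:.1e}")
# -> dE = 10.20 eV, period = 405.3 as, n_osc = 3.7e+06
```

which reproduces both the $405.3\:\mathrm{as}$ period and the "some four million oscillations" figure.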
However, current technology makes subpicosecond resolution available with a modest effort, and femtosecond resolution (and better) is by now routine for many groups. The coherent dynamics of electrons in superposition states has been the name of the game for some time now. It's also important to make an additional caveat: this is not the state that you will get if you initialize the atom in the excited $2p$ state and wait for it to decay until half of the population is in the ground state. In a full quantum mechanical treatment, you also need to consider the quantum mechanics of the radiation field, which you usually initialize in the vacuum, $|0⟩$, but that means that after half the population has decayed, the state of the system is
$$
|\Psi⟩= \frac{|1s⟩|\psi⟩+|2p⟩|0⟩}{\sqrt{2}},
$$
where $|\psi⟩$ is a state of the radiation field with a single photon in it, and which is therefore orthogonal to the EM vacuum $|0⟩$. What that means is that the atom and the radiation field are entangled, and that neither can be considered to even have a pure quantum state on its own. Instead, the state of the atom is fully described (for all experiments that do not involve looking at the radiation that's already been emitted) by the reduced density matrix obtained by tracing out the radiation field,
$$
\rho_\mathrm{atom}
= \operatorname{Tr}_\mathrm{EM}\mathopen{}\left(|\Psi⟩⟨\Psi|\right)\mathclose{}
=\frac{|1s⟩⟨1s|+|2p⟩⟨2p|}{2},
$$
and this does not show any oscillations in the charge density. Foundational issues about interpretations aside, it's important to note that this is indeed a real, physical oscillation (of the wavefunction, at least), and that equivalent oscillations have indeed been observed experimentally. Doing it for this hydrogen superposition is very challenging, because the period is blazingly fast, and it's currently just out of reach for the methods we have at the moment. (That's likely to change over the next five to ten years, though: we broke the attosecond precision barrier just last week.) The landmark experiment in this regard, therefore, used a slightly slower superposition, with a tighter energy spacing. In particular, they used two different fine-structure states within the valence shell of the Kr$^+$ ion, i.e. the states $4p_{3/2}^{-1}$ and $4p_{1/2}^{-1}$, which have the same $n$ and $L$, but with different spin-orbit alignments, giving different total angular momenta, and which are separated by
$$\Delta E=0.67\:\mathrm{eV}=2\pi\hbar/6.17\:\mathrm{fs}.$$
That experiment is reported in Real-time observation of valence electron motion. E. Goulielmakis et al. Nature 466, 739 (2010). They prepared the superposition by removing one of the $4p$ electrons of Kr using tunnel ionization, with a strong ~2-cycle pulse in the IR, which is plenty hard to get right. The crucial step, of course, is the measurement, which is a second ionization step, using a single, very short ($<150\:\mathrm{as}$) UV burst of light. Here the superposition you're probing is slightly more complicated than the hydrogen wavefunction the OP asks about, but the essentials remain the same. Basically, the electron is in a superposition of an $l=1,m=0$ state, and an $l=1,m=1$ state, with an oscillation between them induced by the difference in energy given by the spin-orbit coupling. This means that the shape of the ion's charge density is changing with time, and this will directly impact how easy it is for the UV pulse to ionize it again to form Kr$^{2+}$. What you end up measuring is absorbance: if the UV ionizes the system, then it's absorbed more strongly. The absorption data therefore shows a clear oscillation as a function of the delay between the two pulses: The pictures below show a good indication of how the electron cloud moves over time. (That's actually the hole density w.r.t. the charge density of the neutral Kr atom, but it's all the same, really.) However, it's important to note that the pictures are obviously only theoretical reconstructions. Anyways, there you have it: charge densities (defined as $e|\psi(\mathbf r)|^2$) do oscillate over time, for isolated atoms in pure superposition states. Finally, the standard caveats apply: the oscillations caused in quantum mechanics by superpositions are only valid for pure, isolated states. If your system is entangled with the environment (or, as noted above, with the radiation it's already emitted), then this will degrade (and typically kill) any oscillations of local observables.
If the overall state of the world is in some meaningful superposition of energy eigenstates, then that state will indeed evolve in time. However, for heavily entangled states, like thermal states or anything strongly coupled to the environment, any local observables will typically be stationary, because each half of an entangled state doesn't even have a proper state to call its own. | {
"source": [
"https://physics.stackexchange.com/questions/293359",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/3253/"
]
} |
293,802 | In reviewing some problems in an elementary book, I ran across a reference to the reaction $p+n\rightarrow d$ + "energy". Is that possible? I don't see any reason why not, but I don't find any mention of this reaction at all using Google. It seems to me that the "energy" would have to be a combination of deuteron kinetic energy and a gamma. | Of course the reaction is possible. It doesn't even require special environmental conditions. Having no charge the neutrons don't need to overcome a strong Coulomb barrier to interact with atomic nuclei and will happily find any nuclei that can capture them at thermal energies. KamLAND (for instance) relies on this reaction as the delayed part of the delay-coincidence in detecting anti-neutrino events in the detector.
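The energy released in this capture can be read straight off the rest masses; a minimal sketch (plain Python, with rounded CODATA rest energies; these numbers are not stated in the answer itself):

```python
m_p = 938.272    # proton rest energy, MeV
m_n = 939.565    # neutron rest energy, MeV
m_d = 1875.613   # deuteron rest energy, MeV
Q = m_p + m_n - m_d   # energy carried off (mostly by the gamma) in p + n -> d
print(f"Q = {Q:.3f} MeV")   # -> Q = 2.224 MeV
```

consistent with the 2.2 MeV binding energy of the deuteron.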
In the mineral oil environment of KamLAND the free neutrons have a mean lifetime around $200 \,\mathrm{\mu s}$. Neutron capture even on a proton releases 2.2 MeV. Chlorine, boron and gadolinium are all better neutron capture agents than hydrogen bearing molecules like water and oils, and captures to those absorbers release even more energy per event. So why isn't everyone jumping around cheering for room temperature fusion and prognosticating a beautiful future full of safe and abundant energy? Because there is no adequate supply of free neutrons. With their roughly 15 minute beta-decay lifetime there is no naturally occurring reserve and you can't store them in any case. | {
"source": [
"https://physics.stackexchange.com/questions/293802",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/5739/"
]
} |
293,873 | I always thought the non-linearity of Einstein's field equations implies that there should be direct graviton-graviton interactions. But I stumbled upon Wikipedia which argues: If gravitons exist, then, like photons and unlike gluons, gravitons do not interact with other particles of their kind. That is, gravitons carry the force of gravitation but are not affected by it. This is apparent due to gravity being the only thing which escapes from black holes, in addition to having an infinite range and traveling in straight lines, similarly to electromagnetism. Is Wikipedia correct? If not, why not? And what then are the arguments that there must be graviton-graviton interactions? (As of this question being asked, the above paragraph has been removed from Wikipedia.) | I'm pretty sure that you are right and Wikipedia is wrong. In the linearized gravity approximation at weak curvature, you ignore the gravitational self-back-reaction, but in general gravitons carry energy (as evidenced by the work done by gravitational waves on the LIGO detectors) and therefore contribute to the stress-energy tensor of general relativity, thereby sourcing more gravitons. Also, some quick Googling finds lots of references to multiple-graviton vertices in effective quantum gravity field theories, whereas the Wikipedia article paragraph you quote has no references. The issue of how gravitons can "escape" from a black hole without needing to travel faster than light is discussed at How does gravity escape a black hole? The short answer is that gravitons can't escape from a black hole, but that's okay because they only carry information about gravitational radiation (which also can't escape from inside a black hole), not about static gravitational fields. | {
"source": [
"https://physics.stackexchange.com/questions/293873",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1648/"
]
} |
294,271 | Why do we use the electron volt ? Why did it come to be the electron volt and not, say, just a prefix of the joule, like the nanojoule? Does the electron volt represent anything particular as far as the mathematics goes? I am guessing that it does, and if so, what is it that the electron volt exactly represents in terms of the mass of a particle, as I have seen it used for both the energy of a photon and the mass of subatomic particles? | The electron-volt is a convenient unit of energy when considering electrons moving between points at different potentials. The convenience came from having numerical values which are around or greater than one, $1 \rm eV = 1.6 \times 10^{-19} \rm J$. It was first used in the 1930s. So one perhaps has a better "feel" for the difference between 1 and 100 eV than $1.6 \times 10^{-19} \rm J$ and $1.6 \times 10^{-17} \rm J$ and the value in electron volts is easier to write. Electron energy levels are conveniently quoted in electron-volts and then nuclear energy levels in MeV show a clear difference in terms of scale. Then using eV/c² with the appropriate prefix as a unit of mass also becomes convenient; e.g. the mass of the electron as 500 keV/c² and that of the proton as 1 GeV/c². It is not an SI unit but is retained because as well as being convenient it was and still is in widespread use in the scientific community. | {
"source": [
"https://physics.stackexchange.com/questions/294271",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/136407/"
]
} |
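The conversions in the answer are easy to reproduce. A minimal sketch using CODATA values (in the current SI, 1 eV is defined exactly via the elementary charge):

```python
e_charge = 1.602176634e-19   # elementary charge in coulombs; 1 eV = this many joules

def ev_to_joule(ev):
    return ev * e_charge

one_ev = ev_to_joule(1.0)        # ~1.6e-19 J, as in the answer
hundred_ev = ev_to_joule(100.0)  # ~1.6e-17 J

# Mass quoted in eV/c^2 via E = m c^2
c = 299792458.0                  # speed of light, m/s
m_electron = 9.1093837015e-31    # electron mass, kg
m_electron_ev = m_electron * c**2 / e_charge   # ~511 keV: the answer's rounded "500 keV"
```

The 511 keV/c² result is the precise value behind the answer's round "500 keV/c²".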
294,279 | I'm reading a solid state physics book and there's something which is confusing me, related to the free electron gas. After solving Schrodinger's equation with $V = 0$ and with periodic boundary conditions, one finds out that the allowed values of the components of $\mathbf{k}$ are: $$k_x = \dfrac{2n_x\pi}{L}, \quad k_y=\dfrac{2n_y \pi}{L}, \quad k_z = \dfrac{2n_z\pi}{L}.$$ In the book I'm reading the author says that it follows from this that: there is one allowed wavevector - that is, one distinct triplet of quantum numbers $k_x,k_y,k_z$ - for the volume element $(2\pi/L)^3$ of $\mathbf{k}$ space . After that he says that this implies that in the sphere of radius $k_F$ the total number of states is $$2 \dfrac{4\pi k_F^3/3}{(2\pi/L)^3}=\dfrac{V}{3\pi^2}k_F^3 = N,$$ where the factor $2$ comes from spin. Now, why is that the case? Why does it follow from the possible values of $k_x,k_y,k_z$ that this is the density of points in $k$-space? I really can't understand this properly. | The allowed wavevectors form a simple cubic lattice in $k$-space: each triplet of integers $(n_x,n_y,n_z)$ gives one allowed point, and neighbouring points are separated by $2\pi/L$ along each axis. You can therefore tile all of $k$-space with little cubes of side $2\pi/L$, each cube containing exactly one allowed point — which is precisely the statement that one state (per spin orientation) occupies the volume element $(2\pi/L)^3$. Equivalently, the density of allowed points is uniform and equal to $V/(2\pi)^3$ states per unit volume of $k$-space, with $V=L^3$. To count the states inside the Fermi sphere you then just divide its volume by the volume per point, $$\frac{4\pi k_F^3/3}{(2\pi/L)^3}=\frac{V k_F^3}{6\pi^2},$$ and multiplying by $2$ for the two spin states gives $N=\dfrac{V}{3\pi^2}k_F^3$. The counting is only approximate for the cubes cut by the surface of the sphere, but for macroscopic $L$ the spacing $2\pi/L$ is so fine that this surface error is negligible. | {
"source": [
"https://physics.stackexchange.com/questions/294279",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/21146/"
]
} |
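The counting can be checked by brute force: enumerate the integer triplets inside a Fermi sphere and compare with the continuum estimate. A minimal sketch (the box size and Fermi radius are arbitrary illustrative choices):

```python
import math

L = 1.0                 # box side; V = L**3
step = 2 * math.pi / L  # k-space lattice spacing 2*pi/L
nF = 40                 # Fermi radius measured in units of the spacing
kF = nF * step

# Brute-force count of allowed wavevectors (integer triplets) with |k| <= kF
count = 0
for nx in range(-nF, nF + 1):
    for ny in range(-nF, nF + 1):
        for nz in range(-nF, nF + 1):
            if nx * nx + ny * ny + nz * nz <= nF * nF:
                count += 1

# Continuum estimate: sphere volume divided by the volume (2*pi/L)^3 per point
estimate = (4 / 3) * math.pi * kF**3 / step**3
N_with_spin = 2 * count   # factor 2 for the two spin states
```

The agreement improves as $k_F L$ grows, which is why the formula is essentially exact for macroscopic samples.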
294,966 | I have seen simulations of antimatter on TV. Has antimatter ever been photographed? | The total amount of antimatter ever created on earth is not even sufficient to be visible by eye, so it is hard to answer. However, if a bunch of antimatter was available as stable solid or liquid material, there is no reason to think it would look different. Indeed, its interaction with visible light is pretty much exactly the same as usual matter, so it would look the same. Update:
As the comments explain, a piece of antimatter would look the same as its matter counterpart: it might have any colour, texture, shine, etc. | {
"source": [
"https://physics.stackexchange.com/questions/294966",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/133249/"
]
} |
295,282 | Lets take the following example According to above examples it means that velocity at the above portion is max while the velocity at lower portion is min. But I think it should be the same at both parts (just opposite in direction). Why are both different? | You have to remember that the entire wheel is also moving. Think of this. Where the wheel meets the ground, the velocity of the contact point must be 0, otherwise the wheel would be skidding. Another way of looking at it is that at the contact point the forward velocity of the wheel is cancelled by the backward velocity of the point. On the other hand, at the top of the wheel these velocities add together: the velocity of the entire wheel with respect to the ground, plus the velocity of that point with respect to the centre of the wheel. I once tested this, when I drove behind a truck that was trailing a rope on the road. I drove one of my front wheels over the rope and instantly the rope broke. It had to break because one end of the rope was moving at the speed of the truck, while the other was stationary between the road and my tyre. | {
"source": [
"https://physics.stackexchange.com/questions/295282",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/137035/"
]
} |
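The answer's two observations — zero velocity at the contact point, double speed at the top — drop out of adding the translational and rotational velocities of a rolling wheel. A minimal sketch (the speed and radius are arbitrary illustrative values):

```python
import math

v = 10.0        # forward speed of the axle, m/s (illustrative)
R = 0.3         # wheel radius, m (illustrative)
omega = v / R   # rolling without slipping

def point_velocity(angle):
    """Velocity of a rim point; angle measured from the top of the wheel.

    Total velocity = translation of the axle + rotation about the axle.
    """
    vx = v + omega * R * math.cos(angle)  # rotational part points forward at the top
    vy = omega * R * math.sin(angle)
    return vx, vy

top = point_velocity(0.0)         # top of the wheel: 2v forward
bottom = point_velocity(math.pi)  # contact point: zero, no skidding
```

This is why the trailing rope in the answer's anecdote had to break: one end moved with the truck while the other was pinned at a zero-velocity contact point.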
295,365 | Quantum mechanics says that if a system is in an eigenstate of the Hamiltonian, then the state ket representing the system will not evolve with time. So if the electron is in, say, the first excited state then why does it change its state and relax to the ground state (since it was in a Hamiltonian eigenstate it should not change with time)? | The atomic orbitals are eigenstates of the Hamiltonian
$$
H_0(\boldsymbol P,\boldsymbol R)=\frac{\boldsymbol P^2}{2m}-\frac{e^2}{R}
$$ On the other hand, the Hamiltonian of Nature is not $H_0$: there is a contribution from the electromagnetic field as well
$$
H(\boldsymbol P,\boldsymbol R,\boldsymbol A)=H_0(\boldsymbol P+e\boldsymbol A,\boldsymbol R)+\frac12\int_\mathbb{R^3}\left(\boldsymbol E^2+\boldsymbol B^2\right)\,\mathrm d\boldsymbol x
$$
(in Gaussian units, where $\boldsymbol B\equiv\nabla \times\boldsymbol A$ and $\boldsymbol E\equiv -\dot{\boldsymbol A}-\nabla\phi$) Therefore, atomic orbitals are not stationary: they depend on time and you get transitions between different states. The problem is that what determines time evolution is the total Hamiltonian of the system, and in Nature, the total Hamiltonian includes all forms of interactions. We usually neglect most interactions to get the overall description of the system, and then add secondary effects using perturbation theory. In this sense, the atom is very accurately described by $H_0$, but it is not the end of the story: there are many more terms that contribute to the real dynamics. | {
"source": [
"https://physics.stackexchange.com/questions/295365",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/135989/"
]
} |
295,478 | So a diamond has an energy band gap of $\approx 5$ eV. Since that is too much for visible light ($\approx 1.6$ eV) to be absorbed, the light travels straight through a diamond. Yet I still see a diamond. I know that when light goes through, there is a change in index of refraction. If visible light does not interact with the diamond, then how does it reflect off the surface? | Light does not have to make outer shell electrons leap the full band gap to interact with them. Electrons can be excited to virtual states, whence photons of the same energy and momentum are emitted. So although it is true that the absorption loss for pure diamond is very small as you've rightly inferred, a phase delay arises from this interaction, as I discuss further in this answer here . In diamond, this phase delay is big: diamond has a refractive index of about 2.4 for visible light. So you see all the effects of the strong difference between the diamond's refractive index and that of the air around it: you see a diamond plate shift transmitted light sideways relative to the background, you see a strong specular reflexion from surfaces (the power reflexion ratio is about 17% for diamond) and, for white light, you see strong dispersion into colors for glancing reflexions and transmissions. | {
"source": [
"https://physics.stackexchange.com/questions/295478",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/123260/"
]
} |
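The 17% figure quoted in the answer is just the standard normal-incidence Fresnel power reflectance for n = 2.4 against air; a quick sketch, with ordinary glass added for comparison:

```python
def fresnel_normal_reflectance(n1, n2):
    """Power reflectance at normal incidence between media of indices n1 and n2."""
    r = (n1 - n2) / (n1 + n2)
    return r * r

R_diamond = fresnel_normal_reflectance(1.0, 2.4)  # air -> diamond: ~17%
R_glass = fresnel_normal_reflectance(1.0, 1.5)    # air -> glass: ~4%, for contrast
```

The roughly fourfold difference against glass is a large part of why diamond looks so much "shinier" than a glass imitation.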
295,569 | The sun is strong enough to keep gas giants close, but why not people? | The Sun is keeping you close. After all, you are orbiting it just like the Earth. You don't fly off into space because the Earth and you experience the same acceleration due to the Sun's gravitational force, so you orbit together; this is sometimes called the equivalence principle. If, however, you were floating near Earth but closer to the Sun, you would experience stronger gravity. You would be in a smaller orbit which would make you drift away from the Earth. You wouldn't fall into the Sun, though. Edit: I forgot to say something about the outer planets, something which the other answers touch on but I think get wrong. First, we should speak of acceleration rather than force, because like I said earlier all objects at a given distance from the Sun experience different forces but the same acceleration. You ask "how come the Sun is strong enough to keep the distant planets in orbit but I don't fall into it?". The important point is that you don't need such a huge acceleration to keep the planets in orbit, because they are far away and move very slowly. But , the smallness of the acceleration isn't the reason you don't feel it. The reason is that you're in free fall around the Sun; even if you were zipping around kilometers from the Sun's surface, you would not feel the huge gravitational force, because it affects everything around you in exactly the same way (disregarding tidal effects). | {
"source": [
"https://physics.stackexchange.com/questions/295569",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/137553/"
]
} |
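The key quantitative point in the answer — everything at a given distance from the Sun feels the same acceleration, and that acceleration is tiny at planetary distances — can be put in numbers. A sketch using rounded constants:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def solar_acceleration(r):
    """Gravitational acceleration toward the Sun at distance r: the same for
    you, the Earth, or Neptune, independent of the falling body's mass."""
    return G * M_sun / r**2

a_earth = solar_acceleration(1 * AU)     # ~6 mm/s^2: you and the Earth both feel this
a_neptune = solar_acceleration(30 * AU)  # 900x weaker, but a slow distant orbit needs no more
```

About 6 mm/s² at Earth's distance is a thousand times weaker than Earth's surface gravity, yet it is exactly enough to keep both you and the planet on the same orbit.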
296,189 | What are BMS supertranslation symmetries? I am studying soft hair on black holes and I need to learn BMS supertranslation symmetries. | The supertranslations are just direction-dependent time translations on the boundary of asymptotically flat spacetimes . Specifically, if the "future null infinity" boundary of such a spacetime is given retarded-time coordinate $u$ , then a supertranslation transforms that coordinate according to \begin{equation}
u \to u + \alpha(\theta, \phi),
\end{equation} where $\alpha$ is any (differentiable) function of the angular coordinates $(\theta, \phi)$ . The idea is actually pretty simple, though the explanation is a bit long. For simplicity, I'll just stick to the future in this answer, but all the same ideas apply on past null infinity if you replace the retarded time $u$ with advanced time $v$ , and future-directed null cones with past-directed null cones. In asymptotically flat spacetimes, the supertranslations are one of the asymptotic symmetries of the metric. Naively, you might expect just the Poincaré group (rotations, boosts, time translation, and space translations), since that describes all the symmetries of Minkowski spacetime. But it turns out that at the boundary of asymptotically flat spacetime, you get the Poincaré group, but you also get "generalized" translations. Putting those all together, you get the whole BMS group. The supertranslations include simple time translation, and space translations, as well as these "generalized" translations. The first full exposition of the BMS group was by Sachs in this paper , which is worth reading if you want to understand the history of this this group. He called it the "generalized Bondi-Metzner group", because it wouldn't be appropriate to insert his own name; other people did that for him later, and it's now called the Bondi-Metzner-Sachs group. Okay, now on to the explanation... There's a pretty simple interpretation that lets you understand the supertranslations, starting with time translation, building to space translations, and finally ending up at general supertranslations. I think a really nice pedagogical explanation of this is given in section II.B of this paper (though I'm a little biased, because I wrote that paper). For simplicity, I'll just talk about the most basic example: Minkowski space. 
But any asymptotically flat spacetimes that we discuss look more-or-less like Minkowski asymptotically, and an even larger class of spacetimes are close enough for this example to help understand. To start off with, we have to consider the entire Minkowski spacetime along with its actual asymptotic limits, which is done with the "compactified" spacetime. Basically, we just draw a Penrose diagram , which changes coordinates so that we can draw infinitely distant points on a finite diagram. The interesting part of this diagram for our purposes is future null infinity, $\mathscr{I}^+$ , which is the (future) asymptotic limit of where null signals go to in an asymptotically flat spacetime. The Minkowski spacetime is a nice model for explaining this because we can construct coordinates on $\mathscr{I}^+$ by shooting light rays from inertial emitters inside the spacetime. (More complicated spacetimes might have black holes or other complications that would make it impossible to define coordinates covering $\mathscr{I}^+$ in this way, but don't change the essential features we care about — the asymptotic structure.) Suppose an inertial observer $\mathscr{A}$ has the usual spherical coordinates $(\theta, \phi)$ , and has some clock for their proper time $\tau$ . That observer shoots a light ray in the direction $(\theta, \phi)$ at time $\tau$ , and the light ray eventually approaches some point on $\mathscr{I}^+$ . So now, we just label that point with the angular coordinates $(\theta, \phi)$ and a retarded time $u = \tau$ . But maybe we decide that observer $\mathscr{A}$ 's clock was wrong by some amount $\delta \tau$ . The light ray that was emitted at $\tau$ should have been emitted at $\tau - \delta \tau$ . I've copied Figure 3 from my paper, showing exactly this, below. If we make this adjustment, all that happens to the coordinates on $\mathscr{I}^+$ is that \begin{equation}
\tag{1}
u \to u - \delta \tau.
\end{equation} That's the first type of supertranslation — a simple time translation . And it shouldn't surprise you that the asymptotic metric "on" $\mathscr{I}^+$ is not altered by the coordinate transformation caused by this time translation. This situation is illustrated in figure 3 from my paper , which is also shown below. Now suppose we have another observer, $\mathscr{B}$ , that is stationary with respect to $\mathscr{A}$ , but is displaced by some $\delta \boldsymbol{x}$ . This observer also gets to shoot off light rays and label $\mathscr{I}^+$ . But as we see in figure 4 (below), the light rays that $\mathscr{B}$ shoots off in two opposite directions reach points on $\mathscr{I}^+$ that $\mathscr{A}$ could only reach by shooting photons at different times depending on the direction: $\mathscr{A}$ has to shoot off the one going left much earlier than the one going right. In fact, if their clocks are synchronized and $\mathscr{B}$ emitted both photons at $\tau=0$ , then it's not hard to see that $\mathscr{A}$ had to emit to the left at $\tau=-\lvert \delta \boldsymbol{x} \rvert$ and to the right at $\tau=\lvert \delta \boldsymbol{x} \rvert$ , to account for the extra propagation times. Of course, this diagram is simplified because it suppresses two dimensions of our four-dimensional spacetime. But it's easy to figure out that in any direction $\hat{\boldsymbol{r}}$ , the retarded time on $\mathscr{I}^+$ is transformed by this spatial translation as \begin{equation}
\tag{2}
u \to u + \delta \boldsymbol{x} \cdot \hat{\boldsymbol{r}}.
\end{equation} That's the second type of supertranslation — a simple space translation . And again it shouldn't surprise you that the asymptotic metric "on" $\mathscr{I}^+$ is not altered by the coordinate transformation caused by this space translation, since the Minkowski metric is not altered by it. You might observe that in terms of the usual spherical-harmonic index $\ell$ , transformation (1) above is an $\ell=0$ function of coordinates on the sphere (independent of direction), while transformation (2) is an $\ell=1$ function. That is, if we think of them as being expanded in spherical harmonics $Y_{\ell, m}$ , then we can write those transformations as \begin{gather}
\tag{1'}
u \to u + \alpha^{0,0}Y_{0,0}(\theta, \phi), \\
\tag{2'}
u \to u + \sum_{m=-1}^{1} \alpha^{1,m}Y_{1,m}(\theta, \phi),
\end{gather} where we just have a simple condition on $\alpha^{0,0}$ and the $\alpha^{1, m}$ to ensure that the result is purely real. It is natural to wonder what a general transformation like \begin{gather}
\tag{3}
u \to u + \sum_{\ell=0}^\infty \sum_{m=-\ell}^{\ell} \alpha^{\ell,m} Y_{\ell,m}(\theta, \phi)
\end{gather} would mean. (Again, assuming the $\alpha^{\ell,m}$ obey the reality condition.) This is what we mean by a general supertranslation . This generalization is interesting, but it wouldn't really mean much unless we had some physically relevant fact about it. It turns out that a supertranslation is also a symmetry of the asymptotic metric . Obviously, it's not generally a symmetry of the metric in the interior of the spacetime, but something important happens once we get to the boundary $\mathscr{I}^+$ . There's a pretty simple intuitive explanation of why supertranslations are relevant asymptotically. Very roughly speaking, "neighboring" points on $\mathscr{I}^+$ with infinitesimally different $(\theta, \phi)$ coordinates are actually infinitely "far apart" in space. More precisely, they are causally disconnected (light rays from one can't reach another), so there's no way we could synchronize their clocks, which means that we could add an arbitrary time offset to the clocks in each $(\theta, \phi)$ direction — which is exactly what this equation describes. In fact, for any point on $\mathscr{I}^+$ with a given $(\theta, \phi)$ , the only other points on $\mathscr{I}^+$ that it is causally connected to are points with the same $(\theta, \phi)$ coordinates — but maybe different $u$ coordinates. So this explains all the supertranslations except the $\ell=0$ time offset, but that's just explained by the fact that there's no physically meaningful way to set any observer's clock; the origin of coordinates is arbitrary. Now, this is all easiest to understand in Minkowski space using the nice, simple pictures I've shown. But it's important to remember that asymptotic flatness is a much broader idea than just Minkowski, and includes systems with much more complicated and seriously non-flat geometries inside, as well as spacetimes that may not be complete enough to draw an entire Penrose diamond like I've shown. 
Still, the BMS symmetries show up any time the asymptotic behavior is reasonable and flat near future null infinity. Specifically, at least some part of the limit $\mathscr{I}^+$ still exists — and near that limit, the other spacetime "looks like" the same limit of Minkowski spacetime, so the same basic rules apply. | {
"source": [
"https://physics.stackexchange.com/questions/296189",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/138000/"
]
} |
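As a small numerical illustration of the claim above that a rigid space translation induces a pure $\ell=1$ supertranslation: for a translation of size $d$ along $z$ the shift is $u \to u + d\cos\theta$, and projecting $d\cos\theta$ onto the first few Legendre polynomials shows the $\ell=0$ and $\ell=2$ overlaps vanish. A sketch using midpoint-rule quadrature (the grid size is an arbitrary choice):

```python
# Supertranslation induced by a space translation of size d along z:
# u -> u + d*cos(theta).  Project onto Legendre polynomials P_0, P_1, P_2
# (in the variable ct = cos(theta)) to confirm it is purely ell = 1.
d = 1.0
f = lambda ct: d * ct
P0 = lambda ct: 1.0
P1 = lambda ct: ct
P2 = lambda ct: 0.5 * (3 * ct * ct - 1)

def overlap(g, h, n=20001):
    """Midpoint-rule integral of g*h over ct = cos(theta) in [-1, 1]."""
    dx = 2.0 / n
    return sum(g(-1.0 + (i + 0.5) * dx) * h(-1.0 + (i + 0.5) * dx) * dx
               for i in range(n))

c0 = overlap(f, P0)  # ell = 0 overlap: vanishes
c1 = overlap(f, P1)  # ell = 1 overlap: the exact integral is (2/3) * d
c2 = overlap(f, P2)  # ell = 2 overlap: vanishes
```

The same projection applied to a constant shift would return only an $\ell=0$ component, matching the time-translation case in equation (1').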
296,336 | It is pretty straight forward how light is redshifted in an expanding universe, yet I still can't understand why the De'Broglie wavelength of a massive particle isn't redshifted in an expanding universe. There is no proper notion of conserved mass energy in the expanding universe (without considering gravitational energy that is).
anything that doesn't involve handwaving would be great (I'm not afraid of getting my hands dirty in the math). thanks!! | The de Broglie wavelength of a massive particle is redshifted in an expanding universe. The de Broglie wavelength is given by: $$ \lambda = \frac{h}{p} $$ so a red shift of the de Broglie wavelength simply means that the momentum is decreasing, which for a massive particle means that its velocity relative to us is decreasing. And that is exactly what we see. Suppose someone on a distant galaxy fires a particle towards us with an initial velocity (relative to us) of $v$. As the particle crosses the space towards us the space expands under its feet so the particle slows down. We would see the particle slow down and in the absence of dark energy eventually come to a halt - in the presence of dark energy the particle can reverse direction and then accelerate away from us. The result is that we observe the de Broglie wavelength of the particle to increase as the universe expands. | {
"source": [
"https://physics.stackexchange.com/questions/296336",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/100917/"
]
} |
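A tiny sketch of the bookkeeping in the answer: taking as given the standard result that the peculiar momentum of a free particle falls as $1/a$ as the universe expands (assumed here rather than derived), the de Broglie wavelength stretches in proportion to the scale factor, just like a photon's wavelength:

```python
h = 6.62607015e-34   # Planck constant, J s

def de_broglie_wavelength(p):
    return h / p

m = 9.109e-31        # electron mass, kg (illustrative choice of particle)
a0, a1 = 1.0, 2.0    # scale factor doubles between emission and observation
p0 = m * 1.0e5       # initial peculiar momentum (v = 100 km/s, illustrative)
p1 = p0 * a0 / a1    # p is proportional to 1/a

lam0 = de_broglie_wavelength(p0)
lam1 = de_broglie_wavelength(p1)   # wavelength has stretched by a1/a0
```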
296,347 | What is a lay-terms explanation of the meaning of compactness and noncompactness of a surface $S$ or of a horizon?
In particular, I don't understand what a noncompact partial Cauchy surface is and what distinguishes it from a compact one. | In lay terms, a compact surface is one that is finite in extent and has no edge — closed and bounded, like a sphere or a torus. You could paint all of it with a finite amount of paint, and however far you travel along it you never run off to infinity. A noncompact surface does extend to infinity, like an infinite plane. The same intuition applies to horizons: the event horizon of an isolated black hole, intersected at one instant of time, is a compact surface (topologically a sphere), while something like a Rindler (acceleration) horizon in flat spacetime is a noncompact plane. A Cauchy surface is a spatial slice on which initial data determine the evolution of the spacetime; a partial Cauchy surface is one whose data determine the evolution only inside its domain of dependence, not necessarily the whole spacetime. A compact Cauchy surface describes a spatially closed universe — for instance a 3-sphere — whereas a noncompact one extends to spatial infinity, like a $t=\mathrm{const}$ slice of Minkowski space. The distinction matters technically: the singularity theorems treat the compact and noncompact cases differently, and notions like total energy defined at spatial infinity make sense only in the noncompact case. | {
"source": [
"https://physics.stackexchange.com/questions/296347",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/127414/"
]
} |
296,391 | In a book, it says, Fock space is defined as the direct sum of all $n$-body Hilbert Space: $$F=H^0\bigoplus H^1\bigoplus ... \bigoplus H^N$$ Does it mean that it is just "collecting"/"adding" all the states in each Hilbert space? I am learning 2nd quantization, that's why I put this in Physics instead of math. | Suppose you have a system described by a Hilbert space $H$ , for example a single particle. The Hilbert space of two non-interacting particles of the same type as that described by $H$ is simply the tensor product $$H^2 := H \otimes H$$ More generally, for a system of $N$ particles as above, the Hilbert space is $$H^N := \underbrace{H\otimes\cdots\otimes H}_{N\text{ times}},$$ with $H^0$ defined as $\mathbb C$ (i.e. the field underlying $H$ ). In QFT there are operators that intertwine the different $H^N$ s, that is , create and annihilate particles. Typical examples are the creation and annihilation operators $a^*$ and $a$ . Instead of defining them in terms of their action on each pair of $H^N$ and $H^M$ , one is allowed to give a "comprehensive" definition on the larger Hilbert space defined by taking the direct sum of all the multi-particle spaces, viz. $$\Gamma(H):=\mathbb C\oplus H\oplus H^2\oplus\cdots\oplus H^N\oplus\cdots,$$ known as the Fock Hilbert space of $H$ and sometimes also denoted as $e^H$ . From a physical point of view, the general definition above of Fock space is immaterial. Identical particles are known to observe a definite (para)statistics that will reduce the actual Hilbert space (by symmetrisation/antisymmetrisation for the bosonic/fermionic case etc...). | {
"source": [
"https://physics.stackexchange.com/questions/296391",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/130611/"
]
} |
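The direct-sum construction, and the reduction by statistics mentioned at the end of the answer, can be illustrated by counting dimensions for a finite-dimensional one-particle space. The dimension d = 4 and the particle-number cutoff N = 4 are arbitrary illustrative choices:

```python
from math import comb

d = 4  # dimension of the one-particle Hilbert space H (illustrative)

def dim_full(n):
    """Dimension of the full n-fold tensor product H^n."""
    return d ** n

def dim_bosonic(n):
    """Symmetric subspace: multisets of n single-particle modes."""
    return comb(d + n - 1, n)

def dim_fermionic(n):
    """Antisymmetric subspace: subsets of n distinct modes (Pauli exclusion)."""
    return comb(d, n)

# Truncated Fock-space dimensions, summing the direct sum up to N particles
N = 4
full = sum(dim_full(n) for n in range(N + 1))
bose = sum(dim_bosonic(n) for n in range(N + 1))
fermi = sum(dim_fermionic(n) for n in range(N + 1))
```

Note how drastic the reduction by statistics is: the fermionic Fock space built on d modes has total dimension 2^d (each mode is occupied or not), far smaller than the unrestricted direct sum.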
296,521 | If the buoyant force acting on a body submerged in a liquid, say water, does not depend on depth, why does it become increasingly difficult to push an object deeper and deeper?
I know that the buoyant force is just the pressure difference between the bottom and the top of an object, and since the only forces acting are the force F( which you are applying on the body to push it) , the buoyant force and the weight of the object and also since the latter 2 are constant shouldn't F also be constant?
Could someone please point out to me where i am going wrong? | The force required to push an object into water increases as the object submerges, i.e. as the amount of water the object displaces steadily increases. But I think if you do the experiment carefully you will find that, once the object is fully submerged, the force required should be almost constant. Thereafter, many objects get easier to push down with increasing depth, as the water pressure crushes them and they therefore displace less water. Wetsuits, for example, become greatly less buoyant with depth for this reason, which is why divers usually wear a buoyancy compensator. At extreme depths, if something is less compressible than water, it will become harder to push down owing to the increasing density of water with depth. Factors such as this are important in the design of deep sea submersibles and bathyscaphes such as Alvin and the Trieste. | {
"source": [
"https://physics.stackexchange.com/questions/296521",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/101386/"
]
} |
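The turning point the answer describes can be made quantitative with Archimedes' principle for a simple shape. A sketch for a vertical cylinder (all numbers are illustrative assumptions, and the object is taken as incompressible, so the crushing effects the answer mentions are ignored):

```python
rho = 1000.0   # water density, kg/m^3
g = 9.81       # gravitational acceleration, m/s^2
area = 0.01    # cylinder cross-section, m^2 (illustrative)
height = 0.5   # cylinder height, m (illustrative)

def buoyant_force(depth_of_bottom):
    """Buoyant force = weight of displaced water; it stops growing once
    the cylinder is fully submerged."""
    submerged = min(depth_of_bottom, height)  # can't displace more than its own volume
    return rho * g * area * max(submerged, 0.0)

F_half = buoyant_force(0.25)  # halfway under: buoyancy still growing
F_full = buoyant_force(0.5)   # just fully submerged
F_deep = buoyant_force(5.0)   # much deeper: same buoyant force
```

So the push-down force you must supply grows only during submersion; past that point it is constant, exactly as the answer states.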
296,601 | I know amplitudes cancel (destructive) or combine (constructive) as per image below: ( Source ) But how do frequencies cancel out or combine? For some context: a question from my textbook A song is played off a CD. One set of speakers is playing the note at $512$ Hz, but the presence of the second set of speakers causes beats of frequency $4$ Hz to be heard at a point equidistant from the four speakers. The possible frequencies being played by the additional speakers are: Answer: $508$ and $516$ Hz. I am not sure if I understand the concept correctly, but amplitudes cancelling out makes sense as it's a matter of e.g. a negative amplitude cancelling out an equal-magnitude positive amplitude and combining into one wave (or variations of this depending on the magnitude of each wave). So to me this seems like a matter of distances/displacements (in the form of amplitude, or distance above or below the centerline) cancelling out. But I don't see how this works for frequencies. Frequency is waves/second. So wouldn't playing a $512$ Hz frequency and a $516$ Hz frequency just cause both of them to be heard separately, rather than cancel out to $4$ Hz? I don't understand how "speeds" can cancel out. | Beats can be thought of as the next level of complication from constructive/destructive interference. To demonstrate this best, we should visualize what actually happens when we sum two sine waves of different frequencies: There's no magic going on here, this is just straight-up addition. What is happening is that sometimes the two signals are constructively interfering, and sometimes they are destructively interfering. The rate at which they go back and forth between constructive and destructive is set by the difference in frequencies, and is called the "beat frequency." You can see that there is still a high-frequency sine wave there... you still hear the "correct" note (it is the average of the two frequencies), but you also hear what we call an "envelope," making that high frequency go louder and softer. Those are the beats. | {
"source": [
"https://physics.stackexchange.com/questions/296601",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/126999/"
]
} |
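The textbook numbers follow from the product-to-sum identity behind the answer: the sum of the two tones is a carrier at the average frequency modulated by an envelope, and the loudness peaks at the difference frequency |f2 − f1| = 4 Hz. A quick numerical check of the identity (the sample times are arbitrary):

```python
import math

f1, f2 = 512.0, 516.0  # the two speaker frequencies from the question

# Identity: sin(a) + sin(b) = 2 * cos((a - b)/2) * sin((a + b)/2)
# -> a carrier at the average frequency, modulated by a slow envelope;
#    loudness peaks twice per envelope cycle, so the heard beat is |f2 - f1|.
beat = abs(f2 - f1)        # 4 Hz, as in the textbook problem
carrier = (f1 + f2) / 2    # 514 Hz: the pitch you actually hear

ok = True
for i in range(1000):
    t = i * 1e-4
    lhs = math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)
    rhs = 2 * math.cos(math.pi * (f1 - f2) * t) * math.sin(math.pi * (f1 + f2) * t)
    ok = ok and abs(lhs - rhs) < 1e-9
```

The same arithmetic explains the textbook answer: an unknown tone beating at 4 Hz against 512 Hz must sit 4 Hz away, at 508 or 516 Hz.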
296,612 | I would like to know how can we model a simple spool that unrolls at speed close to light (in the frame of the wire remaining on the ground AND in the frame of the translatory motion of the rolling spool)? (See attached image) "Isn't there a paradox? The spool unrolls in the wire's frame, but what about the spool where time should be dilated? If we count the number of turns, isn't there a problem by comparing the two frames, once the spool unwound? (See attached image) According to different interpretations of relativity, either the spool does not make the same number of turns in each frames (which is curious), or it rotates faster in the frame of the dilated time (which is contradictory). I try to understand my mistake: A: person or frame of a thread left on the ground
B: person or frame in rectilinear motion following the spool. Let's put 4 breaths (equivalent to time) of an individual per turn in his reference frame.
Imagine that a huge spool makes a single full turn at speed 0.999c. Isn't B's breathing slower compared with A's (as seen in A)?
The number of turns should be the same.
Is the number of breaths less than 4 for B in B?
If the breathing of B in B is normal (proper time), has the spool not stopped before? Or did it turn faster (perception of a faster speed for B in B than A in A) to finish its turn? (Both last questions are linked and any answer would be paradoxical) I hope my thought experiment is interesting and will lead you to question relativity. | There is no paradox, because the number of turns is not a frame-dependent quantity. Paint a spot on the rim: each completed turn is registered by a local coincidence event — the spot touching the wire — and all observers agree on whether two worldlines meet at an event, so A and B necessarily count the same number of turns. The same goes for the breaths. B rides along with the spool, so "breaths per turn" is a ratio of two co-located counts and is likewise invariant: if B breathes 4 times per turn in B's frame, then A also records 4 breaths per turn, because in A's frame B's breathing and the spool's rotation are slowed together. What the two frames genuinely disagree about is the time elapsed between those events, and hence the rotation rate — not the count. (The detailed geometry of a relativistically rolling body is subtle, a cousin of the Ehrenfest paradox, but none of those subtleties changes the invariance of event counts.) | {
"source": [
"https://physics.stackexchange.com/questions/296612",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/138199/"
]
} |
296,650 | Why does bench pressing your own bodyweight feel so much harder than doing a push-up ? I have my own theories about the weight being distributed over multiple points (like in a push-up) but would just like to get a definite answer. | While doing push-ups, you don't push your whole body weight. You have your toes on the ground, so your body weight is distributed between your feet and your arms. While benching, you have no support from feet. You hold the whole weight with your arms, so benching your body weight is always tougher. | {
"source": [
"https://physics.stackexchange.com/questions/296650",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/138220/"
]
} |
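The answer's point about weight sharing can be sketched with a crude static model: treat the body as a rigid beam pivoted at the toes and supported at the hands, and balance torques. All the lengths and the weight below are illustrative guesses, not data:

```python
weight = 750.0        # body weight, N (~76 kg person; assumed)
toes_to_com = 1.0     # distance from toes to centre of mass, m (assumed)
toes_to_hands = 1.5   # distance from toes to hands, m (assumed)

# Torque balance about the toes: hand_force * toes_to_hands = weight * toes_to_com
hand_force = weight * toes_to_com / toes_to_hands
foot_force = weight - hand_force           # vertical force balance
fraction_on_hands = hand_force / weight    # fraction of body weight on the arms
```

With these proportions the arms carry only about two-thirds of body weight in a push-up, whereas a bodyweight bench press loads them with all of it.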
296,904 | I have recently started to learn about the electric field generated by a moving charge. I know that the electric field has two components; a velocity term and an acceeleration term. The following image is of the electric field generated by a charge that was moving at a constant velocity, and then suddenly stopped at x=0: I don't understand what exactly is going on here. In other words, what is happening really close to the charge, in the region before the transition, and after the transition. How does this image relate to the velocity and acceleration compnents of the electric field? | The electric $\:\mathbf{E}\:$ and magnetic $\:\mathbf{B}\:$ parts of the electromagnetic field produced by a moving charge $\:q\:$ on field point $\:\mathbf{x}\:$ at time $\;t\;$ are (1) \begin{align}
\mathbf{E}(\mathbf{x},t) & \boldsymbol{=} \frac{q}{4\pi\epsilon_0}\left[\frac{(1\boldsymbol{-}\beta^2)(\mathbf{n}\boldsymbol{-}\boldsymbol{\beta})}{(1\boldsymbol{-} \boldsymbol{\beta}\boldsymbol{\cdot}\mathbf{n})^3 R^2}\vphantom{\dfrac{\dfrac{a}{b}}{\dfrac{a}{b}}} \right]_{\mathrm{ret}}\!\!\!\!\!\boldsymbol{+} \frac{q}{4\pi}\sqrt{\frac{\mu_0}{\epsilon_0}}\left[\frac{\mathbf{n}\boldsymbol{\times}\left[(\mathbf{n}\boldsymbol{-}\boldsymbol{\beta})\boldsymbol{\times} \boldsymbol{\dot{\beta}}\right]}{(1\boldsymbol{-} \boldsymbol{\beta}\boldsymbol{\cdot}\mathbf{n})^3 R}\right]_{\mathrm{ret}}
\tag{01.1}\label{eq01.1}\\
\mathbf{B}(\mathbf{x},t) & = \left[\mathbf{n}\boldsymbol{\times}\mathbf{E}\right]_{\mathrm{ret}}
\tag{01.2}\label{eq01.2}
\end{align} where \begin{align}
\boldsymbol{\beta} & = \dfrac{\boldsymbol{\upsilon}}{c},\quad \beta=\dfrac{\upsilon}{c}, \quad \gamma= \left(1-\beta^{2}\right)^{-\frac12}
\tag{02.1}\label{eq02.1}\\
\boldsymbol{\dot{\beta}} & = \dfrac{\boldsymbol{\dot{\upsilon}}}{c}=\dfrac{\mathbf{a}}{c}
\tag{02.2}\label{eq02.2}\\
\mathbf{n} & = \dfrac{\mathbf{R}}{\Vert\mathbf{R}\Vert}=\dfrac{\mathbf{R}}{R}
\tag{02.3}\label{eq02.3}
\end{align} In equations \eqref{eq01.1},\eqref{eq01.2} all scalar and vector variables refer to the $^{\prime}$ ret $^{\prime}$ arded position and time. Now, in case of uniform rectilinear motion of the charge, that is in case that $\;\boldsymbol{\dot{\beta}} = \boldsymbol{0}$ , the second term in the rhs of equation \eqref{eq01.1} cancels out, so \begin{equation}
\mathbf{E}(\mathbf{x},t) \boldsymbol{=} \frac{q}{4\pi\epsilon_0}\left[\frac{(1\boldsymbol{-}\beta^2)(\mathbf{n}\boldsymbol{-}\boldsymbol{\beta})}{(1\boldsymbol{-} \boldsymbol{\beta}\boldsymbol{\cdot}\mathbf{n})^3 R^2} \right]_{\mathrm{ret}} \quad \text{(uniform rectilinear motion : } \boldsymbol{\dot{\beta}} = \boldsymbol{0})
\tag{03}\label{eq03}
\end{equation} In this case the $^{\prime}$ ret $^{\prime}$ arded variable $\:\mathbf{R}\:$ , and so the unit vector $\:\mathbf{n}\:$ along it, can be expressed as functions of the present variables $\:\mathbf{r}\:$ and $\:\phi$ , see Figure-01. Then equation \eqref{eq03} expressed by present variables is (2) \begin{equation}
\mathbf{E}\left(\mathbf{x},t\right) =\dfrac{q}{4\pi \epsilon_{0}}\dfrac{(1\boldsymbol{-}\beta^2)}{\left(1\!\boldsymbol{-}\!\beta^{2}\sin^{2}\!\phi\right)^{\frac32}}\dfrac{\mathbf{{r}}}{\:\:\Vert\mathbf{r}\Vert^{3}}\quad \text{(uniform rectilinear motion)}
\tag{04}\label{eq04}
\end{equation} That is : In case of uniform rectilinear motion of the charge the electric field is directed towards the position at the present instant and not towards the position at the retarded instant from which it comes. For the magnitude of the electric field \begin{equation}
\Vert\mathbf{E}\Vert =\dfrac{\vert q \vert}{4\pi \epsilon_{0}}\dfrac{(1\boldsymbol{-}\beta^2)}{\left(1\!\boldsymbol{-}\!\beta^{2}\sin^{2}\!\phi\right)^{\frac32}r^{2}}
\tag{05}\label{eq05}
\end{equation} Without loss of generality let the charge be positive ( $\;q>0\;$ ) and instantly at the origin $\;\rm O$ , see Figure-02. Then \begin{equation}
r^{2} = x^{2}+y^{2}\,,\quad \sin^{2}\!\phi = \dfrac{y^2}{x^{2}+y^{2}}
\tag{06}\label{eq06}
\end{equation} so \begin{equation}
\Vert\mathbf{E}\Vert =\dfrac{q}{4\pi \epsilon_{0}}\left(1\boldsymbol{-}\beta^2\right)\left(x^{2}\!\boldsymbol{+}y^{2}\right)^{\boldsymbol{\frac12}}\left[x^{2}\!\boldsymbol{+}\left(1\!\boldsymbol{-}\beta^{2}\right)y^{2}\right]^{\boldsymbol{-\frac32}}
\tag{07}\label{eq07}
\end{equation} For given magnitude $\;\Vert\mathbf{E}\Vert\;$ equation \eqref{eq07} is represented by the closed curve shown in Figure-02. More exactly, the set of points with this magnitude of the electric field is the surface generated by a complete revolution of this curve around the $\;x-$ axis. Now, suppose the charge that has been moving with constant velocity reaches the origin $\;\rm O\;$ at $\;t_{0}=0$ , is abruptly stopped, and remains at rest thereafter. At a later instant $\;t>t_{0}=0\;$ the Coulomb field (from the charge at rest on the origin $\;\rm O$ ) has expanded out to a circle (sphere) of radius $\;\rho=ct$ . Outside this sphere the field lines are as if the charge had continued to move uniformly to a point $\;\rm O'\;$ , which is at a distance $\;\upsilon t =(\upsilon/c)c t=\beta\rho\;$ inside the Coulomb sphere as shown in Figure-03 (this Figure is produced with $\beta=\upsilon/c=0.60$ ). Note that the closed oval curves (surfaces) refer to constant magnitude $\;\Vert\mathbf{E}\Vert\;$ and must not be confused with the equipotential ones. When the charge stops abruptly, the second term in the rhs of equation \eqref{eq01.1} dominates the first one. Furthermore, since the velocity $\;\boldsymbol{\beta}\;$ and the acceleration $\;\boldsymbol{\dot{\beta}}\;$ are collinear we have $\;\boldsymbol{\beta}\boldsymbol{\times}\boldsymbol{\dot{\beta}}=\boldsymbol{0}\;$ so \begin{equation}
\mathbf{n}\boldsymbol{\times}\left[(\mathbf{n}\boldsymbol{-}\boldsymbol{\beta})\boldsymbol{\times} \boldsymbol{\dot{\beta}}\right]=\mathbf{n}\boldsymbol{\times}(\mathbf{n}\boldsymbol{\times}\boldsymbol{\dot{\beta}})=\boldsymbol{-}\boldsymbol{\dot{\beta}}_{\boldsymbol{\perp}\mathbf{n}}
\tag{08}\label{eq08}
\end{equation} that is, the projection of the acceleration on a direction normal to $\;\mathbf{n}$ , see Figure-06. But don't forget that this unit vector is the one on the line connecting the field point with the retarded position. However, during the abrupt deceleration, when the velocity is already very close to zero, the retarded position is very close to the rest point. So it is reasonable for a field line inside the Coulomb sphere to continue as a circular arc on the Coulomb sphere and then to a field line outside the sphere, as shown in Figure-04. To find the correspondence $^{\prime}$ inside line-circular arc-outside line $^{\prime}$ we apply Gauss's law to the closed surface $\:\rm ABCDEF\:$ shown in Figure-05. Don't forget that we mean the closed surface generated by a complete revolution of this polyline around the $\;x-$ axis. The electric flux through the surface $\:\rm BCDE\:$ is zero since the field is tangent to it. So applying Gauss's law amounts to equating the electric flux through the spherical cap $\:\rm AB\:$ to the electric flux through the spherical cap $\:\rm EF$ . The final result of this application, derived analytically, is a relation (3) between the angles $\:\phi, \theta\:$ as shown in Figure-04 or Figure-05 \begin{equation}
\tan\!\phi=\gamma \tan\!\theta=\left(1-\beta^2\right)^{\boldsymbol{-}\frac12}\tan\!\theta
\tag{09}\label{eq09}
\end{equation} (1)
From J.D.Jackson's $^{\prime}$ Classical Electrodynamics $^{\prime}$ , 3rd Edition, equations (14.14) and (14.13) respectively. (2)
From W.Rindler's $^{\prime}$ Relativity-Special, General, and Cosmological $^{\prime}$ , 2nd Edition. Equation \eqref{eq04} here is identical to (7.66) therein. (3)
From E.Purcell, D.Morin $^{\prime}$ Electricity and Magnetism $^{\prime}$ , 3rd Edition 2013, Cambridge University Press. The derivation is given as Exercise 5.20 and equation \eqref{eq09} here is identical to (5.16) therein. $\textbf{Proof of equation}$ \eqref{eq04} $\textbf{from equation}$ \eqref{eq03} In Figure-01 the triangle formed by vectors $\:\mathbf{n},\boldsymbol{\beta},\mathbf{n}\boldsymbol{-}\boldsymbol{\beta}\:$ is similar to the triangle $\:\rm AKL$ , that is to that formed by the vectors $\:\mathbf{R},\overset{\boldsymbol{-\!\rightarrow}}{\rm KL},\mathbf{r}$ . Note that $\:\mathbf{R}\:$ is the vector from the retarded position $\:\rm K\:$ to the field point $\:\rm A$ . A light signal of speed $\:c\:$ travels along this vector, that is along the straight segment $\:\rm KA$ , from the retarded time moment $\:t_{\mathrm{ret}}\:$ to the present time moment $\:t\:$ so \begin{equation}
\mathrm{KA}\boldsymbol{=}\Vert\mathbf{R}\Vert\boldsymbol{=}R\boldsymbol{=}c\, \Delta t\boldsymbol{=}c\left(t\boldsymbol{-}t_{\mathrm{ret}}\right)
\tag{q-01}\label{q-01}
\end{equation} On the other hand the triangle side $\:\rm KL\:$ is the straight segment along which the charge $\:q\:$ travels with speed $\:\upsilon\:$ from the retarded time moment $\:t_{\mathrm{ret}}\:$ to the present time moment $\:t\:$ so \begin{equation}
\mathrm{KL}\boldsymbol{=}\upsilon\left(t\boldsymbol{-}t_{\mathrm{ret}}\right)
\tag{q-02}\label{q-02}
\end{equation} Now, the aforementioned triangle similarity is valid since \begin{equation}
\dfrac{\mathrm{KL}}{\mathrm{KA}}\boldsymbol{=}\dfrac{\upsilon\left(t\boldsymbol{-}t_{\mathrm{ret}}\right) }{c\left(t\boldsymbol{-}t_{\mathrm{ret}}\right)}\boldsymbol{=}\dfrac{\upsilon}{c}\boldsymbol{=}\dfrac{\beta}{1}\boldsymbol{=}\dfrac{\Vert\boldsymbol{\beta}\Vert}{\Vert\mathbf{n}\Vert}
\tag{q-03}\label{q-03}
\end{equation} So, the vector $\:\left(\mathbf{n}\boldsymbol{-}\boldsymbol{\beta}\right)\:$ in the numerator of the rhs of equation \eqref{eq03} is parallel to the vector $\:\mathbf{r}\:$ and from the triangle similarity \begin{equation}
\dfrac{\left(\mathbf{n}\boldsymbol{-}\boldsymbol{\beta}\right)}{\Vert\mathbf{n}\Vert}\boldsymbol{=} \dfrac{\mathbf{r}}{R}
\tag{q-04}\label{q-04}
\end{equation} so \begin{equation}
\left(\mathbf{n}\boldsymbol{-}\boldsymbol{\beta}\right)\boldsymbol{=} \dfrac{\mathbf{r}}{R}
\tag{q-05}\label{q-05}
\end{equation} Note that $\:\mathbf{r}\:$ is the vector from the present position $\:\rm L\:$ to the field point $\:\rm A$ . Using \eqref{q-05} equation \eqref{eq03} yields \begin{equation}
\mathbf{E}(\mathbf{x},t) \boldsymbol{=} \frac{q}{4\pi\epsilon_0}\frac{(1\boldsymbol{-}\beta^2)}{(1\boldsymbol{-} \beta\sin\theta)^3 R^3} \mathbf{r}
\tag{q-06}\label{q-06}
\end{equation} omitting the subscript $^{\prime}$ ret $^{\prime}$ since the variables $\:\theta, R\:$ are already referred to the retarded position, see Figure-01. If we want this equation to have variables of the present position we must express $\:\theta, R\:$ in terms of them, for example in terms of $\:\phi, r$ . Indeed this is possible due to the geometry of this configuration, see Figure-07. From this Figure \begin{equation}
(1\boldsymbol{-} \beta\sin\theta) R \boldsymbol{=} \mathrm{AM}\boldsymbol{=}r\cos(\phi\boldsymbol{-}\theta)
\tag{q-07}\label{q-07}
\end{equation} But from triangles $\:\rm AKN\:$ and $\:\rm LKN\:$ we have respectively \begin{equation}
R\sin(\phi\boldsymbol{-}\theta) \boldsymbol{=} \mathrm{KN}\boldsymbol{=}\beta R\sin\phi \quad \boldsymbol{\Longrightarrow} \quad \sin(\phi\boldsymbol{-}\theta) \boldsymbol{=}\beta \sin\phi
\tag{q-08}\label{q-08}
\end{equation} so \begin{equation}
\cos(\phi\boldsymbol{-}\theta)\boldsymbol{=}[1\boldsymbol{-} \sin^2(\phi\boldsymbol{-}\theta)]^{\frac12}\boldsymbol{=}(1\boldsymbol{-} \beta^2\sin^2\phi)^{\frac12}
\tag{q-09}\label{q-09}
\end{equation} and from \eqref{q-07} \begin{equation}
(1\boldsymbol{-}\beta\sin\theta)^3 R^3 \boldsymbol{=} r^3(1\boldsymbol{-} \beta^2\sin^2\phi)^{\frac32}
\tag{q-10}\label{q-10}
\end{equation} Replacing this expression in equation \eqref{q-06} we prove equation \eqref{eq04} \begin{equation}
\mathbf{E}\left(\mathbf{x},t\right) =\dfrac{q}{4\pi \epsilon_{0}}\dfrac{(1\boldsymbol{-}\beta^2)}{\left(1\!\boldsymbol{-}\!\beta^{2}\sin^{2}\!\phi\right)^{\frac32}}\dfrac{\mathbf{{r}}}{\:\:\Vert\mathbf{r}\Vert^{3}}\quad \text{(uniform rectilinear motion)}
\nonumber
\end{equation} $\textbf{Proof of equation}$ \eqref{eq09} Equation \eqref{eq09} is proved by equating the electric flux through the spherical cap $\:\rm AB\:$ to the electric flux through the spherical cap $\:\rm EF$ , see Figure-05 and the discussion in the main section. $\boldsymbol{\S a.}$ Spherical cap $\:\rm AB\:$ of angle $\:\theta$ Let $\;\:\mathrm{OA}=r\;$ be the radius of the cap. The flux of the electric field through the cap is \begin{equation}
\Phi_{\rm AB}=\iint\limits_{\rm AB}\mathbf{E}\boldsymbol{\cdot}\mathrm d\mathbf{S}
\tag{p-01}\label{eqp-01}
\end{equation} The field, of constant magnitude \begin{equation}
\mathrm E\left(r\right)=\dfrac{q}{4\pi\epsilon_{0}}\dfrac{1}{r^2}
\tag{p-02}\label{eqp-02}
\end{equation} is everywhere normal to the spherical surface. So taking the infinitesimal ring formed between angles $\;\omega\;$ and $\;\omega\boldsymbol{+}\mathrm d \omega\;$ we have for its infinitesimal area \begin{equation}
\mathrm dS=\underbrace{\left(2\pi r \sin\omega\right)}_{length}\underbrace{\left(r\mathrm d \omega\right)}_{width}=2\pi r^2 \sin\omega \mathrm d \omega
\tag{p-03}\label{eqp-03}
\end{equation} and \begin{equation}
\Phi_{\rm AB}=\int\limits_{\omega=0}^{\omega=\theta}\mathrm E\left(r\right)\mathrm dS=\dfrac{q}{2\epsilon_{0}}\int\limits_{\omega=0}^{\omega=\theta}\sin\omega \mathrm d \omega=\dfrac{q}{2\epsilon_{0}}\Bigl[-\cos\omega\Bigr]_{\omega=0}^{\omega=\theta}
\nonumber
\end{equation} so \begin{equation}
\boxed{\:\:\Phi_{\rm AB}=\dfrac{q}{2\epsilon_{0}}\left(1-\cos\theta\right)\:\:}
\tag{p-04}\label{eqp-04}
\end{equation} $\boldsymbol{\S b.}$ Spherical cap $\:\rm EF\:$ of angle $\:\phi$ Let $\;\:\mathrm{O'F}=r\;$ be the radius of the cap. The flux of the electric field through the cap is \begin{equation}
\Phi_{\rm EF}=\iint\limits_{\rm EF}\mathbf{E}\boldsymbol{\cdot}\mathrm d\mathbf{S}
\tag{p-05}\label{eqp-05}
\end{equation} The field, of variable magnitude as in equation \eqref{eq05} of the main section \begin{equation}
\mathrm E\left(r,\psi\right)=\dfrac{q}{4\pi \epsilon_{0}}\dfrac{(1\boldsymbol{-}\beta^2)}{\left(1\!\boldsymbol{-}\!\beta^{2}\sin^{2}\!\psi\right)^{\frac32}r^{2}}
\tag{p-06}\label{eqp-06}
\end{equation} is everywhere normal to the spherical surface. So taking the infinitesimal ring formed between angles $\;\psi\;$ and $\;\psi\boldsymbol{+}\mathrm d \psi\;$ we have for its infinitesimal area \begin{equation}
\mathrm dS=\underbrace{\left(2\pi r \sin\psi\right)}_{length}\underbrace{\left(r\mathrm d \psi\right)}_{width}=2\pi r^2 \sin\psi \mathrm d \psi
\tag{p-07}\label{eqp-07}
\end{equation} and \begin{align}
\Phi_{\rm EF} & =\int\limits_{\psi=0}^{\psi=\phi}\mathrm E\left(r,\psi\right)\mathrm dS=\dfrac{q(1\boldsymbol{-}\beta^2)}{2\epsilon_{0}}\int\limits_{\psi=0}^{\psi=\phi}\dfrac{\sin\psi \mathrm d \psi}{\left(1\!\boldsymbol{-}\!\beta^{2}\sin^{2}\!\psi\right)^{\frac32}}
\nonumber\\
& \stackrel{z\boldsymbol{=}\cos\psi}{=\!=\!=\!=\!=}\boldsymbol{-}\dfrac{q(1\boldsymbol{-}\beta^2)}{2\epsilon_{0}}\int\limits_{z=1}^{z=\cos\phi}\dfrac{\mathrm d z}{\left(1\!\boldsymbol{-}\!\beta^{2}+\beta^{2} z^2\right)^{\frac32}}
\tag{p-08}\label{eqp-08}
\end{align} From the indefinite integral \begin{equation}
\int\dfrac{\mathrm d z}{\left(1\!\boldsymbol{-}\!\beta^{2}+\beta^{2} z^2\right)^{\frac32}}=\dfrac{z}{\left(1\!\boldsymbol{-}\!\beta^{2}\right)\left(1\!\boldsymbol{-}\!\beta^{2}+\beta^{2} z^2\right)^{\frac12}}+\text{constant}
\tag{p-09}\label{eqp-09}
\end{equation} equation \eqref{eqp-08} yields \begin{equation}
\Phi_{\rm EF} = \boldsymbol{-}\dfrac{q}{2\epsilon_{0}}\Biggl[\dfrac{ z}{\left(1\!\boldsymbol{-}\!\beta^{2}+\beta^{2} z^2\right)^{\frac12}}\Biggr]_{z=1}^{z=\cos\phi}
\nonumber
\end{equation} so \begin{equation}
\boxed{\:\:\Phi_{\rm EF} = \dfrac{q}{2\epsilon_{0}}\Biggl[1\boldsymbol{-}\dfrac{ \cos\phi}{\sqrt{\left(1\!\boldsymbol{-}\!\beta^{2}+\beta^{2} \cos^2\phi\right)}}\Biggr]\:\:}
\tag{p-10}\label{eqp-10}
\end{equation} Equating the two fluxes, \begin{equation}
\Phi_{\rm EF} = \Phi_{\rm AB} \quad \stackrel{\eqref{eqp-04},\eqref{eqp-10}}{=\!=\!=\!=\!=\!\Longrightarrow}\quad \cos\theta=\dfrac{ \cos\phi}{\sqrt{\left(1\!\boldsymbol{-}\!\beta^{2}+\beta^{2} \cos^2\phi\right)}}
\tag{p-11}\label{eqp-11}
\end{equation} Squaring and inverting \eqref{eqp-11} we have \begin{equation}
\cos^2\theta=\dfrac{\cos^2\phi}{1\!\boldsymbol{-}\!\beta^{2}+\beta^{2} \cos^2\phi}\quad\Longrightarrow\quad 1+\tan^2\theta =\beta^{2}+\left(1\!\boldsymbol{-}\!\beta^{2}\right)\left(1+\tan^2\phi\right)
\tag{p-12}\label{eqp-12}
\end{equation} and \begin{equation}
\boldsymbol{\vert}\tan\phi\,\boldsymbol{\vert}=\left(1\!\boldsymbol{-}\!\beta^{2}\right)^{\boldsymbol{-}\frac12}\boldsymbol{\vert}\tan\theta\,\boldsymbol{\vert}
\tag{p-13}\label{eqp-13}
\end{equation} Now $\;\theta,\phi \in [0,\pi]\;$ so $\;\sin\theta,\sin\phi \in [0,1]\;$ while from \eqref{eqp-11} $\;\cos\theta\cdot\cos\phi \ge 0\;$ so $\;\tan\theta\cdot\tan\phi \ge 0\;$ and finally \begin{equation}
\boxed{\:\:\tan\!\phi=\gamma \tan\!\theta=\left(1-\beta^2\right)^{\boldsymbol{-}\frac12}\tan\!\theta\:\:\vphantom{\dfrac{a}{b}}}
\tag{p-14}\label{eqp-14}
\end{equation} Note that for $\;\theta=\pi=\phi\;$ equations \eqref{eqp-04},\eqref{eqp-10} give as expected \begin{equation}
\Phi_{\rm AB} =\dfrac{q}{\epsilon_{0}}= \Phi_{\rm EF}
\tag{p-15}\label{eqp-15}
\end{equation} while from equation \eqref{eqp-14} we have also \begin{equation}
\theta =\dfrac{\pi}{2} \quad \Longrightarrow \quad \phi =\dfrac{\pi}{2}
\tag{p-16}\label{eqp-16}
\end{equation} as shown in Figure-08. | {
"source": [
"https://physics.stackexchange.com/questions/296904",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/102482/"
]
} |
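The boxed relation $\tan\phi=\gamma\tan\theta$ derived in the answer above comes from equating the two cap fluxes; it can be spot-checked numerically by integrating the outside-cap flux of eq. (p-08) directly and comparing it with the inside-cap flux of eq. (p-04). A minimal sketch, with arbitrary test values of $\beta$ and $\theta$ and both fluxes in units of $q/2\epsilon_0$:

```python
import numpy as np

beta = 0.6                           # arbitrary test speed, v/c
gamma = 1.0 / np.sqrt(1.0 - beta**2)
theta = 0.5                          # inside-cap angle in radians (arbitrary)

# Inside-cap flux, eq. (p-04), in units of q / (2 epsilon_0):
flux_ab = 1.0 - np.cos(theta)

# Outside-cap angle predicted by the boxed relation (p-14):
phi = np.arctan(gamma * np.tan(theta))

# Outside-cap flux by direct trapezoidal integration of eq. (p-08):
psi = np.linspace(0.0, phi, 200_001)
f = (1.0 - beta**2) * np.sin(psi) / (1.0 - beta**2 * np.sin(psi)**2) ** 1.5
flux_ef = np.sum((f[1:] + f[:-1]) / 2.0) * (psi[1] - psi[0])

# Gauss's law bookkeeping checks out: the two fluxes agree.
assert np.isclose(flux_ab, flux_ef, atol=1e-8)
```

Changing `beta` and `theta` to other values in $(0,1)$ and $(0,\pi/2)$ leaves the agreement intact, as the analytic proof guarantees.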
297,004 | I have been introduced to Feynman diagrams in QFT after following the tedious Wick-Dyson formalism. Two things are unclear, though, about the Feynman shortcut to compute scattering amplitudes. What are the horizontal and vertical axes in these diagrams? Are they $x$ and $t$ respectively, or $x$ and $y$? If yes, then we are drawing a particle with an exact momentum $p$. But doesn't this violate the uncertainty principle, since we are assuming exactly measured $x$ and $p$? | There are no axes in Feynman diagrams. The only important part of a diagram is what is connected to what, and not the relative orientation. You can move around the pieces of a diagram and, as long as you don't break any line, the value of the diagram remains unchanged. | {
"source": [
"https://physics.stackexchange.com/questions/297004",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
297,146 | My six-year-old daughter asked me this morning, 'how many dimensions does electricity have?' What would be the best answer bearing in mind the age!? | Here's an idea for what you maybe could say: Well, there are kind of two "types" of things in the world. First, there are physical objects, like you, me, this house, and so on (here she might chime in with the toaster, or her doll, or something). These physical objects have the property of dimension, which we were talking about. A sheet of paper looks 2-d at first glance, but is actually 3-d, because it has a thickness, however tiny. You and I have three dimensions, height, length, and width (here you might illustrate with some object). There can be higher dimensions than 3, actually, but those are kind of hard to wrap our minds around. The second "type" of thing is a force. Forces move the physical objects around, like when you push in a chair, or pull your doll out of a bin. There's a bunch of forces acting on you right now, like gravity, which pulls you down toward the center of the Earth. There's also things like electricity, which powers lightbulbs, and engines, and your iPad. These forces don't really have the property of dimension. They aren't physical things like you and me; they just push us and pull us around. So electricity isn't a thing that has the property of dimension. In other words, asking what dimension electricity has is a bit of a meaningless question. It's kind of like asking what kind of food is in the microwave when there is no food in the microwave. It hasn't been defined yet. Hope this helps! I'll see if I can think up an explanation for lightning =) Teach your daughter the ways of the Force (ahem, physics). EDIT: In response to the comments, yes, you could get into electricity being carried by electrons.
If you wanted to go this route, then here's what I'd say on top of the previous explanation: Forces themselves aren't physical, but there are physical things that can "carry" forces. For example, let's say you shuffle your feet along the carpet and then touch a doorknob. You might get a shock, right? Well, you, a physical object, just "carried" a force, electricity, which allowed you to get that shock (side note: this obviously isn't completely how that works, but you can't really explain how that works without understanding the next part). Electricity is carried by electrons, which are tiny, tiny little particles that can hook up with other particles called protons and neutrons to make an atom, sort of like you and I make up a family, or they can just float free, kind of like how you and I can be separated. It is when electrons are floating free, moving where they please, that they carry electricity. So electrons transfer, or carry, the force of electricity between physical objects. But electrons themselves are physical, so they do have the property of dimension. They are 3-d. Don't look around you for electrons, though - they are so small that we can't even see them with the most powerful microscopes. Now, remember how I said earlier that you carry the force of electricity when you shuffle across the carpet and then touch a doorknob and get shocked? Well, that isn't quite right. Atoms, those things made up of protons, neutrons, and electrons, themselves make up every other physical object. They make up you, and me, and everything else. Normally, in an atom, the number of electrons and protons is the same. Electrons and protons each have a property called charge: electrons have a -1 charge and protons have a +1 charge. When you add these two together, they cancel out to zero, right? (If she doesn't know about negative numbers, that might be a nice side lesson. I forget what I knew when I was six.)
Well, this total charge of zero means the atom is something called neutral, which means it doesn't carry the force of electricity. However, the electrons in an atom can sometimes be pulled away from the atom, or pushed into the atom. This leaves the atom with a non-zero charge, which means it can carry the electric force. When you rub your feet across the carpet, electrons are being pulled off and pushed on, giving you a charge, and when you touch the doorknob, electrons are pulled and pushed again (in metal, it is easier to pull and push electrons), and this time, you feel the shock. Note that this isn't a complete explanation; I'm still working on making it clearer (and closer to a higher-level explanation). | {
"source": [
"https://physics.stackexchange.com/questions/297146",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/138509/"
]
} |
297,386 | I have seen several questions and good answers on the link between reversible and quasistatic processes, such as here or here . However, these questions only address one side of the problem: a reversible process is necessarily quasistatic. I am interested in the other side of the equivalence: is there a process that is quasistatic, yet not reversible? It looks to me that an irreversible process cannot be made perfectly quasistatic. The Wikipedia article about quasistatic processes takes as an example the very slow compression of a gas with friction. As the compression occurs very slowly, the transformation is quasistatic, and the friction makes it irreversible. I am not convinced by this example: if you press on the piston with a vanishingly small force you will have to reach the threshold of the Coulomb law for solid friction before moving the piston anyway. That makes the process non-quasi-static, however small the Coulomb threshold might be. Another example I've heard of is the reaction between a strong acid and a strong base. It is always an irreversible process, and you could make it quasistatic by adding very small drops of base into the acid, one at a time. But in trying to do that, you would inevitably reach a limit on the size of the drop imposed by surface tension. Even if "reversible" and "quasistatic" mean very different things, is it true to say that in practice, a reversible process and a quasistatic process are essentially the same thing? | Most quasi-static processes are irreversible. The issue comes down to the following: the term quasi-static applies to the description of a single system undergoing a process, whereas the term irreversible applies to the description of the process as a whole, which often involves multiple interacting systems. In order to use the term quasi-static, one has to have a certain system in mind. A system undergoes a quasi-static process when it is made to go through a sequence of equilibrium states.
A process is irreversible if either (a) the system undergoes a non-quasi-static process, (b) the system undergoes a quasi-static process but is exchanging energy with another system that is undergoing a non-quasi-static process, or (c) two systems are exchanging energy irreversibly, usually via heat flow across a finite temperature difference. One can imagine an (admittedly idealized, as is much of basic thermodynamics in physics) process in which two systems undergo quasi-static processes while exchanging energy via heat due to a finite temperature difference between them. The irreversibility comes about due to the heat flow across the temperature difference between them, rather than due to irreversibilities inside each system. | {
"source": [
"https://physics.stackexchange.com/questions/297386",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/100142/"
]
} |
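Case (c) in the answer above can be illustrated with numbers: let two bodies with finite heat capacity equilibrate slowly enough that each stays internally in equilibrium. Each body's entropy change is then the quasi-static expression $\int C\,\mathrm{d}T/T$, yet the total entropy strictly increases, so the overall process is irreversible. A sketch with made-up capacities and temperatures:

```python
import math

# Two bodies with equal, constant heat capacities (assumed values),
# brought into contact and allowed to equilibrate slowly enough that each
# stays internally in equilibrium (quasi-static for each body separately).
C = 1000.0                     # heat capacity of each body, J/K (assumed)
t_hot, t_cold = 400.0, 300.0   # initial temperatures, K (assumed)

t_final = (t_hot + t_cold) / 2  # equal capacities -> arithmetic mean, 350 K

# Each body's entropy change uses the equilibrium formula dS = C dT / T,
# legitimate precisely because each body's process is quasi-static:
ds_hot = C * math.log(t_final / t_hot)    # negative: the hot body cools
ds_cold = C * math.log(t_final / t_cold)  # positive: the cold body warms
ds_total = ds_hot + ds_cold

# The process as a whole is irreversible: total entropy strictly increases.
assert ds_total > 0.0
```

Here `ds_total` comes out to about 21 J/K: positive, even though neither body individually ever left equilibrium.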
298,022 | For complete dummies when it comes to space-time, what is a manifold and how can space-time be modelled using these concepts? | What is a manifold? A manifold is a concept from mathematics that has nothing to do with physics a priori. The idea is the following: You have probably studied Euclidean geometry in school, so you know how to draw triangles, etc. on a flat piece of paper. In contrast to common parlance, let's take "space" to mean anything with a number of points. The Euclidean plane ( $\mathbb{R}^2$ ) or your piece of paper are a "space", the 3d-space around you is a "space" or the surface of the world is a "space" (caveat: Actually, I want to define a topological space, which is not "everything with a number of points", but let's not get distracted here). Now, if you look at the surface of the sphere, it's definitely not a Euclidean space: In Euclidean geometry, the sum of the angles in a triangle is 180°, which is not true for the surface of a ball, a sphere. However, if you only look at a small patch of the sphere, it is approximately true. For instance, you perceive the earth as flat although it isn't if you look from above. A manifold is every "space" with this property: locally, it looks like a Euclidean plane. The circle is a manifold (it looks like a line locally, which is the one-dimensional Euclidean space $\mathbb{R}$ ), the sphere (it looks like a plane locally), your room (it looks like a 3d-Euclidean space $\mathbb{R}^3$ locally - forget about the boundaries here), etc. The cool thing about manifolds is that this property of looking like Euclidean space locally makes it possible to describe them completely using only Euclidean spaces. Since we know Euclidean space very well, that's a good thing. For instance, you can take a map of England - since the word "map" is used differently in mathematics, let's call it a "chart". This is a perfectly good way of describing England, although it really is part of a round object.
You can patch a lot of these charts together to get a whole atlas covering the earth, which gives you a nice description of the earth using only 2d pieces of paper. Obviously, you'll need more than one chart to cover the whole earth without doubling certain points, and obviously, if the chart covers a very large area, it will look very distorted in some places, but it's possible, as you can see. And that's a manifold. It's some space where you can create an atlas of charts, each of which is a (part of a) Euclidean space describing a part of the space. Okay, not quite: what you want of the manifold is that you can get from chart to chart with a nice operation. For instance, in your atlas of the earth, some charts will overlap, and points in the overlap that are close together on one chart will be close together on the other chart. In other words, you have a map between the overlapping regions of any two charts, and that map is continuous (at that point you get a topological manifold) or even differentiable (at that point you get a differentiable manifold). By now, it should be obvious to you that it should be possible to say that the space around us is a differentiable manifold. It seems perfectly accurate to describe it using $\mathbb{R}^3$ locally, as you have probably done in school. And that's also how manifolds enter relativity: If you add the time dimension, it turns out to be a good guess that you can still model space + time as a four-dimensional manifold (meaning every chart looks like $\mathbb{R}^4$ locally). Why model spacetime with manifolds? Now you know what a manifold is, but even if you get an idea of how you could model spacetime as a manifold, this doesn't really tell you why you should model spacetime as a manifold. After all, just because you can do something, that doesn't always make it particularly useful. Consider the following problem: Given two points, what's their shortest distance?
[Aside: Before answering this question, I want to mention that although I talked about things like distances and angles before, you don't necessarily have these concepts on an arbitrary manifold because it might be impossible to define something like this for your underlying "space", but if you have a "differentiable manifold" (meaning that the functions that get you from chart to chart in the overlapping regions are differentiable), then you do. At that point, it becomes possible to speak about distances. For physics, especially general relativity, you always have a notion of distances and angles.] Back to the problem of shortest distance: In $\mathbb{R}^n$ , the answer is pretty simple. The shortest path between two points is the straight line between them. But on a sphere? In order to define this, you first need a distance on the sphere. But how to do this? At that point I'd already know what the shortest distance is! Here is one idea: If you consider a flight from London to Buenos Aires (for example), what's the "shortest path"? Well, the earth is more or less a sphere in some $\mathbb{R}^3$ . That's a Euclidean space, so you know how to compute distances there, so the shortest path is just the one of smallest length among all possible paths. Easy. However, there is a problem: This only works because we have some ambient three-dimensional space. But that doesn't have to be the case - indeed, our own "space" doesn't seem to be embedded in some four-dimensional spatial hyperspace (or whatever you want to call it). Here is another idea: Your manifold locally looks like a Euclidean space, where the answer is simple. What if you only define your distance locally and then somehow patch it together so that it makes sense? The beautiful thing is that a differentiable manifold gives you the tools to do that. This way, you can create a measure of distance (called a Riemannian metric), which allows you to calculate shortest paths between points even without an ambient space.
But it doesn't stop there. What are parallel lines? What happens to a local coordinate system? For instance, if you fly with your plane, it seems that you are always looking ahead, yet your field of view doesn't go in a straight line; how does your field of view change going along a path? Once you have your metric, it's all straightforward. It should be clear that all of these questions are questions that you can ask about the space(time) surrounding you - and you'd want the answer to them! It also seems natural that you should actually be able to answer these questions for our universe. So, what's the metric of our space? Can we just patch it together locally? Well, we could, but it's not going to be unique, so how do we decide what is the right metric? That's exactly what general relativity is about: The fundamental equations of general relativity tell us how the distance measure in spacetime is related to matter and energy. A little bit more about topology (in case you are interested) Finally, if you want to learn more about the "space" aspect that I left out above, let's have a closer look there. What you want is not any set of points, but a set of points which has neighbourhoods for every point. You can think of a neighbourhood of a point as a number of points which are somehow "near" the point. Just like in real life, your neighbourhood could be really big, it could comprise all of the space, and it needn't even be connected, but it must somehow always comprise the points immediately "next to" you. In fact, if you have a distance measure such as the usual Euclidean distance in $\mathbb{R}^n$ , then a set of neighbourhoods is given by all balls of all sizes around any point. However, you can define these neighbourhoods also without having a distance measure, and you can still somehow think of "nearness".
These spaces are enough to let you define "continuous functions", where a function is continuous at a point, if all points "near" this point (meaning in some neighbourhood) stay "near" to the point after the mapping (meaning they are mapped into some neighbourhood again). Usually and especially for all manifolds we really want to talk about in relativity, you'd add some more conditions to the spaces to have nicer properties, but if you want to know about this, I suggest beginning to learn the true mathematical definitions. There are a lot of other answers that cover the basics! | {
"source": [
"https://physics.stackexchange.com/questions/298022",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/138894/"
]
} |
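To make the sphere example in the answer above concrete, here is a small sketch (using approximate coordinates for London and Buenos Aires) of the intrinsic great-circle distance given by the haversine formula - the geodesic length computed from the sphere's own metric - compared with the straight-line chord through the ambient $\mathbb{R}^3$, which is shorter but is not a path on the sphere at all.

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Haversine formula: geodesic distance on a sphere of the given radius."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# London -> Buenos Aires (approximate coordinates)
d = great_circle_km(51.5074, -0.1278, -34.6037, -58.3816)

# The straight-line chord through the ambient R^3 is shorter --
# but it is not a path on the manifold (the sphere) at all.
theta = d / 6371.0                     # central angle in radians
chord = 2 * 6371.0 * math.sin(theta / 2)
print(round(d), round(chord))          # roughly 11100 km vs roughly 9800 km
```

The geodesic distance comes out around 11,000 km, noticeably longer than the chord - the price of being confined to the manifold.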
298,907 | If the Michelson-Morley experiment hadn't been conducted, are there any other reasons to think, from the experimental evidence available at that time, that Einstein could think of the Special Theory of Relativity? Is there any other way to think why the speed of light is the ultimate speed limit? | A lot of people find it somewhat surprising, but Einstein's initial formulation of special relativity was in a paper, On the electrodynamics of moving bodies, that makes very little reference to the Michelson-Morley result; instead, it is largely based on the symmetry of electromagnetic analyses in different frames of reference. From a more modern perspective, there is a strong theoretical case to be made that special relativity is, at the very least, a strong contender for the description of reality. These are beautifully summed up in Nothing but Relativity (doi), but the argument is that under some rather weak assumptions, which are essentially the homogeneity and isotropy of space, and the homogeneity of time, plus some weak linearity assumptions, you are essentially reduced to either galilean relativity, or special relativity with some (as yet undetermined) universal speed limit $c$, with no other options. To get to reality, you need to supplement this theoretical framework with experiment - there's no other way around it. The Michelson-Morley experiment is, of course, the simplest piece of evidence to put in that slot, but in the intervening century we have made plenty of other experiments that fit the bill. From a purely mechanical perspective, the LHC routinely produces $7\:\mathrm{TeV}$ protons, which would be moving at about $120c$ in Newtonian mechanics: it is very clear that $c$ is a universal speed limit, because we try to accelerate things faster and faster, but (regardless of how much kinetic energy they hold) they never go past $c$. 
If you want something from further back, this is precisely the reason we developed the isochronous cyclotron in the late 1930s and then switched to synchrotrons back in the 1950s - cyclotrons require particles to keep in sync with the driving voltage, but if they approach the speed of light they can no longer go fast enough to keep up. We have upwards of eighty years of history of being able to mechanically push things to relativistic regimes. If you wish for an answer inscribed within "experimental physics as of 1888, minus the Michelson-Morley result" then, as I said, the symmetry properties of electromagnetism (which are directly compatible with SR as derived from $v\ll c$ experiments, but require aether theories to make sense in galilean relativity) were plenty to convince Einstein that SR was the right choice. Edit: As pointed out in a comment, Einstein's original paper does make some reference to Michelson-Morley(-type) experiments, in his second paragraph: Examples [like the reciprocal electrodynamic action of a magnet and a conductor], together with the unsuccessful attempts to discover
any motion of the earth relatively to the “light medium,” suggest that the
phenomena of electrodynamics as well as of mechanics possess no properties
corresponding to the idea of absolute rest. However, apart from this small nod, he makes no substantive references to the aether or its equivalents: the paper starts with the relativity postulates (based on the constancy of the speed of light), uses those to construct special relativity (as pertains transformations between moving frames, and so on), and then builds his case for it on the transformation properties of the equations of electromagnetism: these provide the deeper fundamental insight that underlies the symmetry of analysis of electromagnetic situations performed on different moving frames of reference. | {
"source": [
"https://physics.stackexchange.com/questions/298907",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/128588/"
]
} |
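The "$120c$" figure from the answer above is easy to check. The numbers below are round textbook values (proton rest energy of about 938 MeV), not LHC specifications:

```python
import math

mc2 = 938.272e6      # proton rest energy in eV
Ek  = 7e12           # kinetic energy per proton, ~7 TeV, in eV

# Newtonian: Ek = (1/2) m v^2  =>  v/c = sqrt(2 Ek / mc^2)
beta_newton = math.sqrt(2 * Ek / mc2)

# Relativistic: Ek = (gamma - 1) mc^2  =>  gamma = 1 + Ek/mc^2
gamma = 1 + Ek / mc2
beta_rel = math.sqrt(1 - 1 / gamma**2)

print(f"Newtonian v/c ~ {beta_newton:.0f}")   # ~122: nonsense, far above c
print(f"Relativistic v/c = {beta_rel:.9f}")   # just below 1
```

Newtonian kinematics would put the proton at roughly $122c$; relativistically it sits a hair below $c$ no matter how much kinetic energy is added.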
299,608 | If we neglect the danger of unsuccessful lift-off of the rocket and the cost, would it be physically possible to send all nuclear waste on Earth to the Sun?
Will there be an obstacle that prevents this? For example, solar winds? | Sending nuclear waste to the sun is of course physically possible, yet there is one major obstacle: energy, and thus money. Let's consider the launch of a barrel of nuclear waste to the sun. You don't want the waste to start orbiting the sun - eventually falling back to Earth - so you must send it straight to the sun. However, Earth is travelling around the sun at around $30$ km/s so you would have to give the barrel an initial speed of at least around 30 km/s for it to stand still in the heliocentric frame of reference - the effects of the rotation of the Earth are negligible. This is two times the maximum speed of an Ariane 5 rocket. Now, say you want to send a ton of waste to the sun. For a four-stage rocket to reach this speed, with this payload, using the best known fuel - that is liquid hydrogen and liquid oxygen - it needs to weigh around $44\times 10^3$ tons: this is more than 10 times the mass of Saturn V. Now, let's assume that your rocket's mass is more realistic, say $3,000$ tons. Then, the payload that finally reaches the Sun would weigh around 100 kg, and it would cost around 4 M\$ per kilogram. In comparison, based on the Yucca Mountain nuclear waste repository, it seems that storing nuclear waste underground costs around 1000\$/kg. Finally, as you said, the rocket could be highly damaged by solar winds, so you would have to protect the nuclear waste in a steel canister. Then, only half of the payload would be nuclear waste. | {
"source": [
"https://physics.stackexchange.com/questions/299608",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/136363/"
]
} |
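The mass figures in the answer above can be sanity-checked with the Tsiolkovsky rocket equation. This sketch uses a textbook hydrolox specific impulse of about 450 s and ignores structural mass and staging, so it is an idealized lower bound - real vehicles come out far heavier, as the answer's four-stage estimate shows:

```python
import math

dv  = 30e3            # m/s: cancel Earth's orbital speed around the Sun
isp = 450.0           # s: roughly the best chemical (LH2/LOX) specific impulse
g0  = 9.81
ve  = isp * g0        # effective exhaust velocity, ~4.4 km/s

# Tsiolkovsky: dv = ve * ln(m0 / mf)  =>  mass ratio m0/mf = exp(dv / ve)
mass_ratio = math.exp(dv / ve)
print(f"ideal mass ratio: {mass_ratio:.0f}")   # ~900 even before structure and staging
```

Even in this best case, roughly 900 tons of vehicle at liftoff per ton delivered; adding tankage, engines and staging inefficiencies drives the total toward the tens of thousands of tons quoted in the answer.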
299,723 | When I am on earth, the weight of my body is countered by the reaction of the ground. So, there is no net force acting on me. But I am spinning with earth. But if there is no centripetal force then why am I spinning? And the equal air pressure on both side of my body won't be enough for me to stay in the same angular velocity as the earth. Is it just conservation of angular momentum? | Actually, this is rather insightful. The normal force from the ground does not quite cancel out the effect of gravity. The difference between them is precisely the centripetal force that keeps you rotating around with the Earth's surface. Of course, you won't notice this because the centripetal force is so small compared to the gravitational force on you. The centripetal acceleration at the equator is
$$a_c = \omega^2 r \approx \biggl(\frac{2\pi}{86400\ \mathrm{s}}\biggr)^2\times 6.378\times 10^{6}\ \mathrm{m} \approx 0.034\ \frac{\mathrm{m}}{\mathrm{s}^2}$$
which is a paltry one-third of a percent of the gravitational acceleration, and at higher latitudes it is correspondingly less. | {
"source": [
"https://physics.stackexchange.com/questions/299723",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/106713/"
]
} |
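The centripetal acceleration quoted in the answer above can be reproduced in a couple of lines, using the sidereal rotation period (one turn relative to the stars) and the equatorial radius:

```python
import math

T = 86164.1            # s, sidereal day (one rotation relative to the stars)
r = 6.378e6            # m, Earth's equatorial radius

omega = 2 * math.pi / T
a_c = omega**2 * r     # required centripetal acceleration at the equator
g = 9.81

print(f"a_c = {a_c:.4f} m/s^2, i.e. {100 * a_c / g:.2f}% of g")
```

This gives about 0.034 m/s², roughly a third of a percent of $g$ - the small imbalance between gravity and the normal force.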
299,834 | What are the reasons why we usually treat Quantum Field Theory in momentum space instead of position space? Are the computations (e.g. of Feynman diagrams) generally easier and are there other advantages of this formulation? | The most important reasons we use momentum space Feynman rules are: In position space, the Feynman rules generate convolutions of propagators. Because of the convolution theorem, the momentum space rules generate products of propagators, which are clearly easier to handle. Moreover, in position space you have an integral for each vertex, while in momentum space you have one integral per loop, and in a general diagram there are many more vertices than loops, thus making the momentum space rules easier to use. What's more, the LSZ theorem in momentum space is trivial to implement: we just drop the propagators on the external lines; in position space you'd have to evaluate some exponential integrals (which are straightforward, but cumbersome). Finally, the renormalisation conditions are naturally imposed in momentum space, and therefore you want the diagrams in momentum space. | {
"source": [
"https://physics.stackexchange.com/questions/299834",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/76892/"
]
} |
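The convolution-theorem point in the answer above isn't specific to field theory; it is the Fourier identity that turns position-space convolutions into momentum-space products, and it can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(64)
g = rng.standard_normal(64)

# Direct convolution in "position space"
direct = np.convolve(f, g)                     # length 64 + 64 - 1 = 127

# Product in "momentum space", padded to the full linear-convolution length
n = len(f) + len(g) - 1
via_fft = np.fft.irfft(np.fft.rfft(f, n) * np.fft.rfft(g, n), n)

print(np.allclose(direct, via_fft))            # True
```

One pointwise product replaces the full convolution sum - the discrete analogue of a single momentum-space propagator product replacing a position-space integral.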
300,046 | What is the mechanism by which increasing $\rm CO_2$ (or other greenhouse gases) ends up increasing the temperature at (near) the surface of the Earth? Mostly what I'm looking for is a big-picture explanation of how increasing $\rm CO_2$ affects the Earth's energy transfer balance that goes a step or two beyond Arrhenius's derivation. I've read Arrhenius's 1896 derivation of the greenhouse effect in section III here . It assumes that
there is non-negligible transmission of the long wavelength radiation from the surface through the full thickness of the atmosphere to space. In the band of $\rm CO_2$ vibrational lines (wavenumbers between about $\rm 600\,cm^{-1}$ and $\rm 800\,cm^{-1}$), it is my impression that for most (some? almost all?) of the wavelengths in this band, the atmosphere is optically thick, so the outgoing long wave radiation, e.g. as observed by IRIS on Nimbus 4 had its "last scattering" somewhere up in the atmosphere, and thus Arrhenius's "the surface can't radiate into space as efficiently" doesn't apply uniformly across this band. How does this kind of saturation effect modify Arrhenius's description of the greenhouse effect? If this line of reasoning is correct, then the net outgoing long wave emissions in the $\rm CO_2$ band of vibrational lines are some complicated mix of radiation from different altitudes. If my inference is correct, how does this affect the response of the Earth to changes in CO2 concentration? Maybe there is some sort of statistical-mechanics picture in terms of the photons doing a random walk to escape the atmosphere (for wavelengths where the atmosphere is optically thick), but I don't know how to connect that idea to overall radiative efficiency. The issue in my understanding that I'm trying to resolve is that Arrhenius's derivation assumes a non-negligible amount of transmission from the surface directly to space. My, admittedly cursory and thus potentially incorrect, understanding of the absorption spectrum of CO2 is that for a range of IR wavelengths the atmosphere (taken as a whole) is effectively opaque. For the portions of the spectrum where there is only some absorption, Arrhenius's argument applies; is the best model to describe the impact of small changes to CO2 concentration to only consider the portions of the IR spectrum that are (partially) transparent and basically ignore the bands that are opaque? 
I'm mostly interested in the direct effect of $\rm CO_2$ on an Earth-like planet, so we're dealing with a planet whose blackbody temperature is $\rm \approx 250K$ (in order to emit the short wavelength (visible and above) radiation it absorbed from the Sun), but whose surface temperature is more like $\rm 280K$, and has concentrations of $\rm CO_2$ in the $\rm 300ppm-400ppm$ range, but I'm willing to ignore the effects of water vapor (I figure that might overly complicate things), so assuming a dry atmosphere, i.e. just $\rm N_2/O_2$ and $\rm CO_2$, would be fine. I'm not being cheeky with the "physics grad", assume I know, or can learn, any of the relevant physical or mathematical relationships required to understand the relationship between greenhouse gas concentrations and the heat transfer properties of the Earth. | Executive summary: Carbon dioxide in the atmosphere absorbs some of the energy radiated by the Earth; when this energy is re-emitted, part of that is directed back to Earth. More carbon dioxide $\rightarrow$ more energy returns to Earth. This is the "greenhouse effect". The full answer is very very complex; I will try a slight simplification. The sun can be treated as a black body radiator, with the emission spectrum following Planck's Law: $$H(\lambda, T) = \frac{2hc^2}{\lambda^5}\frac{1}{e^{\frac{hc}{\lambda kT}}-1}$$ The integral of emission over all wavelengths gives us the Stefan-Boltzmann law, $$j^* = \sigma T^4$$ where $j^*$ is the radiance, $\sigma$ is the Stefan-Boltzmann constant ($5.67\times10^{-8} ~\rm{W~ m^{-2}~ K^{-4}}$). If we considered the Earth to be itself a black body radiator with no atmosphere (like the moon), then it is receiving radiation from just a small fraction of the space surrounding it (solid angle $\Omega$ ), but emitting radiation in all directions (solid angle $4\pi$ ). Because of this, the equilibrium temperature for a black sphere at 1 a.u. 
from the sun can be calculated from Stefan-Boltzmann: $$4\pi \sigma T_e^4 = \Omega \sigma T_s^4\\
T_e = T_s \sqrt[4]{\frac{\Omega}{4\pi}}$$ Now the solid angle of the sun as seen from Earth is computed from the radius of the sun and the radius of the Earth's orbit: $$\Omega = \frac{\pi R_{sun}^2}{R_o^2}$$ With $R_{sun}\approx 7\times 10^8 ~\rm{m}$ and $R_o\approx 1.5\times 10^{11}~\rm{m}$ we find $\Omega \approx 6.8\times 10^{-5}$ ; given the sun's surface temperature of 5777 K, we get the temperature of the "naked" earth as $$T_e = 278~\rm{K}$$ [updated calculation... removed a stray $4\pi$ that had snuck in to my earlier expression. Thanks David Hammen!] Note that this assumes that the Earth is spinning sufficiently fast that the temperature is the same everywhere on the surface - that is, the sun is heating all parts of the Earth evenly. That is not true of course - the poles consistently get less than their "fair share" and the equator more. Taking that into account, you would expect a lower average temperature, as the hotter equator would emit disproportionately more energy (the correct value for the "naked earth black body" is 254.6 K as David Hammen pointed out in a comment); but the (relatively) rapid rate of rotation, plus presence of a lot of water and the atmosphere does prevent some of the extreme temperatures that you see on the moon (where the difference between "day" and "night" can be as high as 276 K...) Now we need to look at the role of the atmosphere, and how it modifies the above. Clearly, we are alive on Earth, and temperatures are much higher than would be calculated absent an atmosphere. This means the "greenhouse effect" is a good thing. How does it work? Clouds in the atmosphere reflect part of the incoming sunlight. This means less solar energy reaches Earth, keeping us cooler. As Earth's surface heats up, it re-emits energy back into the atmosphere. Because Earth is much cooler than the sun, the spectrum of radiation of the surface is shifted towards the IR part of the spectrum. 
Here is a plot of the spectrum of the Sun and Earth (assumed at 20 °C), with their peaks normalized for easy comparison, and with the visible light range overlaid: Now for the "greenhouse effect". I already mentioned that clouds stopped some of the Sun's light from reaching the Earth's surface; similarly, the radiation from Earth will in part be absorbed/re-emitted by the atmosphere. The critical thing here is absorption followed by re-emission (when there is equilibrium, the same amount of energy that is absorbed must be re-emitted, although not necessarily at the same wavelength). When there is re-emission, some of the photons "return" to Earth. This has the effect of making the fraction of "cold sky" that the Earth sees smaller, so the expression for the temperature (which had $\sqrt[4]{\frac{\Omega}{4\pi}}$ in it) will be modified - we no longer "see" $4\pi$ of the atmosphere. The second effect is absorption. The absorption spectrum of $\rm{CO_2}$ can be found for example at Clive Best's blog. As you can see, much of the energy emitted by Earth is absorbed by the atmosphere: $\rm{CO_2}$ is not the only culprit, but it does have an absorption peak that is quite close to the peak emission of Earth's surface, so it plays a role. Increase the $\rm{CO_2}$ and you increase the amount of energy that is captured by the atmosphere. Now when that energy is re-emitted, roughly half of it will be emitted towards the Earth, and the other half will be emitted to space. As energy is re-emitted back to Earth, the effective mean temperature that the surface has to reach before there is equilibrium (given a constant influx of energy from the Sun) goes up. There are many complicating factors. A hotter surface may mean more clouds and thus more reflected sunlight; on the other hand, increased water vapor also implies increased absorption in the IR. 
But the basic idea that absorption of IR by the atmosphere will lead to an increased equilibrium temperature of the surface should be pretty clear. Update The question "If the atmosphere is already so opaque to IR radiation, why does it matter if we add more CO2?" deserves more thought. There are three things I can think of. Spectral broadening First - there is the issue of spectral broadening. According to [this lecture](http://irina.eas.gatech.edu/EAS8803_Fall2009/Lec6.pdf) and references therein, there is significant pressure broadening of the absorption lines in $\rm{CO_2}$ . Pressure broadening is the result of frequent collisions between molecules - if the time between collisions is short compared to the lifetime of the decay (which sets a lower limit on the peak width), then the absorption peak becomes broader. The link gives an example of this for $\rm{CO_2}$ at 1000 mb (sea level) and 100 mb (about 10 km above sea level): This tells me that as the concentration of $\rm{CO_2}$ in the atmosphere increases, there will be more of it in the lower (high pressure) layers, where it effectively has no "windows". At lower pressures, the gaps between the absorption peaks would let more of the energy escape without interaction. This will be more important in the upper atmosphere - not so much near Earth's surface where pressure broadening is significant. Near IR absorption bands In the analysis above, I was focusing on the radiation of Earth, and its interaction with $\rm{CO_2}$ absorption bands around 15 µm - what is usually called the "greenhouse effect". However, there are also absorption bands in the near-IR, at 1.4, 1.9, 2.0 and 2.1 µm (see [Carbon Dioxide Absorption in the Near Infrared](http://jvarekamp.web.wesleyan.edu/CO2/FP-1.pdf)). These bands will absorb energy of the sun "on the way down", and result in atmospheric heating. Increase the concentration of carbon dioxide, and you effectively make the earth a little better at capturing the sun's energy. 
In the higher layers of the atmosphere (above the clouds) this is particularly important because this is energy absorbed before clouds get a chance to reflect it back into space. Since these bands have lower absorption (but the incident flux of sunlight is so much higher), they play a role in atmospheric modeling (as described more fully in the paper linked above). More absorption from "side bands" This is really well explained in [the answer by @jkej](https://physics.stackexchange.com/a/300125/26969) but worth reiterating: besides the spectral broadening that I described above, given the shape of a spectral peak, the lower absorptivity as you move away from the center frequency becomes more significant as the total number of molecules increases. This means that the part of the spectrum that was only 10% absorbed will become 20% absorbed when the concentration doubles. As the linked answer explains, this only leads to a "square root of concentration" effect for a single line in the spectrum, and an even smaller amount when spectral lines overlap - but it should not be ignored. I think there may also be an argument that can be made regarding treating the atmosphere as a multi-layered insulator, with each layer at its own temperature (with lapse rate controlled mostly by convection and gravity); as carbon dioxide concentration increases, this will change the effective emissivity of different layers of the atmosphere, and this might expose the surface of the earth to different amounts of heat flux depending on the concentration. But this is something I will have to give some more thought to... and maybe run some simulations for. Finally, in a nod to "the other side", here is a link to a website that attempts to argue that carbon dioxide (let alone man-made carbon dioxide) cannot possibly explain global warming - and that global warming in fact does not exist at all. Writing a full refutation of the arguments in that site is beyond the scope of this answer... 
but it might make a good exercise for another day. | {
"source": [
"https://physics.stackexchange.com/questions/300046",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/10635/"
]
} |
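The "naked Earth" equilibrium temperature from the answer above can be reproduced directly, using standard values for the solar radius, the astronomical unit and the solar surface temperature:

```python
import math

R_sun = 6.96e8         # m, solar radius
R_orb = 1.496e11       # m, 1 au
T_sun = 5777.0         # K, solar surface temperature

# Solid angle of the Sun seen from Earth, and the resulting equilibrium
# temperature of a fast-spinning black sphere:  T_e = T_s * (Omega / 4 pi)^(1/4)
omega = math.pi * R_sun**2 / R_orb**2
T_e = T_sun * (omega / (4 * math.pi)) ** 0.25

print(f"Omega = {omega:.2e} sr, T_e = {T_e:.0f} K")
```

This recovers a solid angle of about $6.8\times10^{-5}$ sr and an equilibrium temperature near 278 K, matching the answer's figure for the zero-albedo, evenly heated case.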
300,146 | The Earth takes 24 hours to spin around its own axis and 365 days to spin around the Sun. So in approximately half a year the Earth will have spun around its axis 182.5 times. Now take a look at the following picture: Assuming that the Earth is in the position on the left on, say, 1st of Jan. 2017 and in the position on the right half a year after. The Earth will be roughly on the opposite side of the Sun given that half a year passed, is that correct? If at noon, half a year earlier, that part of the Earth was facing the Sun, then why wouldn't the opposite part of the Earth be facing the Sun now, after 182 complete rotations and the Earth being on the opposite side of the Sun? We expect the noon-time to occur on the dark side instead of the lighted side. Shouldn't this cause the AM/PM to switch, since the rotations made are consistent with 182 passing days? Assuming it's noon at both dates, why does the Earth face the Sun at the same time on both sides of the Sun? | The Earth takes 24 hours to spin around its own axis. Depending on the specifics (such as what it means to "spin around"), this is incorrect. To spin around exactly once with respect to distant stars (aka sidereal day) requires 236 seconds less than 24 hours. Over half a year, this nearly 4 minute difference every day adds up to about 12 hours, the time it takes to rotate half way around and face the sun again. 24 hours is the length of the average solar day (synodic day), the time it takes the earth to rotate so that (on average) it is facing the sun at the same angle. Because the time period derives from a sun-referenced rotation, not a star-referenced rotation, the same spot on the earth faces the sun at approximately the same time every solar day. (Ignoring additional changes from axial tilt and orbital eccentricity) | {
"source": [
"https://physics.stackexchange.com/questions/300146",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/139956/"
]
} |
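A quick check of the arithmetic in the answer above - the ~236 s daily gap between the solar and sidereal day accumulates to roughly 12 hours over half a year:

```python
solar_day = 86400.0        # s, mean solar day
sidereal_day = 86164.1     # s, one rotation relative to the stars

daily_gap = solar_day - sidereal_day       # ~236 s per day
half_year_gap = daily_gap * 182.5          # accumulated over half a year

print(f"{daily_gap:.0f} s/day -> {half_year_gap / 3600:.1f} h in half a year")
```

That accumulated half-rotation is exactly what keeps noon pointed at the Sun on both sides of the orbit.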
I did search the question on Physics S.E considering it would be previously asked. I found this How come Wifi signals can go through walls, and bodies, but kitchen-microwaves only penetrate a few centimeters through absorbing surfaces? But in this question, the answers are w.r.t. or in comparison with microwaves, their absorption and certain other things. I didn't find a sort of general answer that could be the answer to the question. So the question is - wifi or radio waves reach us through concrete walls. They also reach us through the ceiling (if someone is using it in the flat above ours). Even through the air they travel such a lot, bending around corners or doors. Now I would not compare them to microwaves (because I don't want the answer in terms of properties of the material but physics).
Visible light, which is so much more powerful than them, can't penetrate black opaque paper, let alone the walls. The same is true for gamma rays (penetration through a very thick wall). So why are radio waves, being so much less powerful than light waves, able to travel through walls? There should be a general concept as to why the radio waves are able to pass through walls but microwaves or light waves cannot!
A linked question is also that sound travels much faster in solids (walls) but is not audible through them, though it is through air. After reading @BillN's answer, it would be really helpful if anyone could explain it in terms of molecular resonance or crystalline structure or electrical conductivity, or explain how molecular resonance, crystalline structure, or electrical conductivity causes this. | Different molecules and different crystalline structures have frequency-dependent absorption/reflection/transmission properties. In general, light in the human visible range can travel with little absorption through glass, but not through brick. UV can travel well through plastic, but not through silicate-based glass. Radio waves can travel through brick and glass, but not well through a metal box. Each of these differences has a slightly different answer, but each answer is based on molecular resonance or crystalline structure (or lack thereof) or electrical conductivity. Bottom line: There isn't one general answer for why $\lambda_A$ goes through material X but $\lambda_B$ doesn't. | {
"source": [
"https://physics.stackexchange.com/questions/300551",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/113699/"
]
} |
301,242 | I have read that destructive interference between water waves always leads to the creation of smaller waves which eventually die out. Why, in particular for water waves, is it hard for them to cancel each other? | Interference requires exactly the same frequency in both sources and also needs them to be coherent, i.e. their phase relation must remain the same throughout. It's very hard to create such things for macroscopic water bodies. Nevertheless, in a laboratory environment, you can see perfect interference in water waves. | {
"source": [
"https://physics.stackexchange.com/questions/301242",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/110669/"
]
} |
301,267 | My current understanding is this: When current passes through a resistor heat is generated which the resistor then gives off to the surrounding air. That way the resistor is kept at roughly the same temperature so $R $ is constant which makes Ohm's law ($U=IR$) linear. But in space the resistor doesn't have anywhere to put off the heat! In that case, will the resistor keep heating? Will that change its resistance, in turn affecting the $U=IR$ relation? | But in space the resistor doesn't have anywhere to put off the heat! Actually, it does. Heat transfer can occur by three means: conduction, convection, and radiation. Very basically, heat conduction is about solid materials touching each other; convection is about gases or liquids touching the heat source; and radiation is about transmission of energy by means of releasing waves or particles. (This doesn't capture all of it, but since you are asking this question, I get the feeling that you aren't very familiar with the subject, and this is hopefully good enough to get you started for the purposes of this answer.) In an atmosphere, convection is commonly a major mode of heat transfer. It's how every air cooled gadget (whether forced air cooling or ambient air) remains at an appropriate temperature, and it's mostly the way everything eventually ends up at the ambient temperature. In space, there is no atmosphere, so convection doesn't work for cooling. But there's still conduction and radiation. Conduction basically just means that if you leave your spacecraft somewhere far away from any heat source, or in an area of uniform heat sources surrounding it, everything within it will eventually have the same temperature. That's not particularly useful for our purposes; in a spacecraft, it's more about heat transfer within the spacecraft structure than to outside of it. But even with convection and conduction not providing any useful heat transfer to keep our resistor cool, there is still radiation! 
And in fact, that's how spacecraft maintain an appropriate temperature: By carefully controlling the heat and energy budget, not uncommonly ensuring that all sides of the spacecraft are exposed roughly equally over time to the heat source (which in our real world cases means the Sun) and matching heat dissipation against heat generation through radiation of excess heat. For this reason, spacecraft designs include radiators which take the heat generated and radiate it into space. In that case, will the resistor keep heating? Yes, unless the spacecraft includes radiators or some other way to dump excess heat; which it will, at least if it is intended to work for any length of time. Will that change its resistance, in turn affecting the $U=IR$ relation? Yes and no! This has been pointed out several times in comments, but I see no answer capturing it. Regardless of how exactly it is phrased, Ohm's law is valid only for a snapshot in time. This means that for $U=IR$ to hold as stated, you must simultaneously measure two or three of the quantities involved (voltage, current and resistance); if you measure two, you can calculate the third. The voltage that is lost through resistance across the resistor becomes heat, which (unless it is somehow released) increases the temperature of the resistor. Real-world resistors have a tendency to change their resistance when their temperature changes, which means that $R$ changes. In turn, either the voltage across the resistor ($U$), or the current through the resistor ($I$), must change for the equality $U=IR$ to remain valid. But if you were to measure these quantities again a microsecond later, you would find that the equality still holds, albeit with slightly different values for each. | {
"source": [
"https://physics.stackexchange.com/questions/301267",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/140361/"
]
} |
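As a toy illustration of the radiative-equilibrium point in the answer above: if radiation were a resistor's only heat sink, its steady-state temperature follows from $P = \epsilon\sigma A T^4$. The resistor dimensions, power and emissivity below are made-up illustrative values, not data for any real part:

```python
import math

sigma = 5.67e-8        # W m^-2 K^-4, Stefan-Boltzmann constant

# Illustrative (made-up) resistor: 0.25 W dissipated, small cylinder, emissivity 0.9
P = 0.25                                   # W, dissipated power
d, L = 2.3e-3, 6.3e-3                      # m, diameter and length
A = math.pi * d * L + 2 * math.pi * (d / 2) ** 2   # cylinder surface area
eps = 0.9

# Steady state in vacuum: dissipated power = radiated power = eps * sigma * A * T^4
T = (P / (eps * sigma * A)) ** 0.25
print(f"A = {A:.2e} m^2, equilibrium T ~ {T:.0f} K")
```

With these numbers the component settles around 550 K - hot, but finite: radiation alone eventually balances the dissipated power.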
301,275 | it can be related to What is the condition for a body on an inclined plane(θ degree with horizontal) to fall freely? | But in space the resistor doesn't have anywhere to put off the heat! Actually, it does. Heat transfer can occur by three means: conduction, convection, and radiation. Very basically, heat conduction is about solid materials touching each other; convection is about gases or liquids touching the heat source; and radiation is about transmission of energy by means of releasing waves or particles. (This doesn't capture all of it, but since you are asking this question, I get the feeling that you aren't very familiar with the subject, and this is hopefully good enough to get you started for the purposes of this answer.) In an atmosphere, convection is commonly a major mode of heat transfer. It's how every air cooled gadget (whether forced air cooling or ambient air) remains at an appropriate temperature, and it's mostly the way everything eventually ends up at the ambient temperature. In space, there is no atmosphere, so convection doesn't work for cooling. But there's still conduction and radiation. Conduction basically just means that if you leave your spacecraft somewhere far away from any heat source, or in an area of uniform heat sources surrounding it, everything within it will eventually have the same temperature. That's not particularly useful for our purposes; in a spacecraft, it's more about heat transfer within the spacecraft structure than to outside of it. But even with convection and conduction not providing any useful heat transfer to keep our resistor cool, there is still radiation! 
And in fact, that's how spacecraft maintain an appropriate temperature: By carefully controlling the heat and energy budget, not uncommonly ensuring that all sides of the spacecraft are exposed roughly equally over time to the heat source (which in our real world cases means the Sun) and matching heat dissipation against heat generation through radiation of excess heat . For this reason, spacecraft designs include radiators which take heat generated and radiates it into space. In that case, will the resistor keep heating? Yes, unless the spacecraft includes radiators or some other way to dump excess heat; which it will, at least if it is intended to work for any length of time. Will that change its resistance in turn affect the $U=IR$ relation? Yes and no! This has been pointed out several times in comments, but I see no answer capturing it. Regardless of how exactly it is phrased, Ohm's law is valid only for a snapshot in time. This means that for $U=IR$ to hold as stated, you must simultaneously measure two or three of the quantities involved (voltage, current and resistance); if you measure two, you can calculate the third. The voltage that is lost through resistance across the resistor becomes heat, which (unless it is somehow released) increases the temperature of the resistor. Real-world resistors have a tendency to change their resistance when their temperature changes, which means that $R$ changes. In turn, either the voltage across the resistor ($U$), or the current through the resistor ($I$), must change for the equality $U=IR$ to remain valid. But if you were to measure these quantities again a microsecond later, you would find that the equality still holds, albeit with slightly different values for each. | {
"source": [
"https://physics.stackexchange.com/questions/301275",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/129187/"
]
} |
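To illustrate the radiator point in the answer above: in vacuum the steady state is set by the Stefan–Boltzmann law, $P = \epsilon \sigma A (T^4 - T_{env}^4)$, so the radiating area needed to reject a given heat load follows directly. The power level, radiator temperature and emissivity below are illustrative assumptions, not values from the answer.

```python
# Radiator sizing sketch: in vacuum, the only steady-state way to reject
# heat is radiation, P = eps * sigma * A * (T^4 - T_env^4).
# Power level, temperatures and emissivity are illustrative assumptions.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def radiator_area(power_w, T_radiator, T_env=3.0, emissivity=0.9):
    """Radiating area (m^2) needed to reject power_w at T_radiator (K)."""
    return power_w / (emissivity * SIGMA * (T_radiator**4 - T_env**4))

area_300 = radiator_area(500.0, 300.0)   # area for 500 W at a 300 K radiator
area_350 = radiator_area(500.0, 350.0)   # a hotter radiator can be smaller
```

The strong $T^4$ dependence is why running the radiator hotter shrinks the required area so quickly.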
301,672 | So, if all the bodies are embedded in space-time and move through it, is there some kind of 'friction' with space-time of the planets? For example, does the Earth suffer friction when moving near the Sun due to the curvature and General Relativity, and lose energy? If a planet loses energy due to friction, can this energy loss be measured? | I think the question suggests you are thinking of space-time as if it were e.g. a substance, like a fluid, that we move through. That's not how we view space-time, at least in pure general relativity. But the question you ask is a deceptively simple one and it raises some complex questions. And I don't think we actually can answer them exactly, because I'm not sure we have a definitive answer to the most basic question hidden in your question: What is space-time? is there some kind of 'friction' with space time of the planets? There is a "kind" of friction, but perhaps "interaction" would be a better choice of word, as I'd prefer to avoid the notion of classical friction forces. We say that when an object moves through space-time it distorts space-time - stretches it, compresses it. Mass creates distortions we describe as gravity. It's a little deeper than that. We also know, thanks to the wonderful LIGO experiments, that these gravitational effects do distort space in a wave-like way. And an object can lose energy (has to, in fact) when it creates such waves. Which leads us to this: if a planet loses energy due to friction can this energy loss be measured? No (I suppose I should say, not at our technological level). It's tiny. The gravitational waves we have measured (which represent the closest thing to your friction loss) are due to the collisions of huge black holes, and the disturbance they make is so small that LIGO scientists are pushing the boundaries of measurement to detect them at all. A planet is a tiny thing compared to those black holes and it barely makes a dent, as it were, in space-time by comparison.
But it's worth saying that our current understanding of space-time is a little basic. We don't have a clear idea of how the quantum world fits into the grand scale of relativistic space-time. At present we have two models, one of a small scale space-time filled with a sea of virtual particles and the other of a pure, clean empty space time with the odd idealized gravitational mass in it. We don't have a single theory connecting them, so we don't really have a proper theory of space-time (or perhaps something deeper than that is needed - no one knows). | {
"source": [
"https://physics.stackexchange.com/questions/301672",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4435/"
]
} |
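To put a rough number on "it's tiny": the standard quadrupole formula for the gravitational-wave power radiated by a circular two-body orbit (not quoted in the answer above; added here purely for illustration) gives on the order of a couple of hundred watts for the Earth–Sun system, utterly negligible next to orbital energies.

```python
# Quadrupole-formula estimate of gravitational-wave power from a circular
# two-body orbit: P = (32/5) * (G^4 / c^5) * (m1*m2)^2 * (m1 + m2) / r^5
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light, m/s

def gw_power(m1, m2, r):
    return 32.0 / 5.0 * G**4 / C**5 * (m1 * m2) ** 2 * (m1 + m2) / r**5

# Earth orbiting the Sun: masses in kg, orbital radius in m
P_earth_sun = gw_power(1.989e30, 5.972e24, 1.496e11)   # on the order of 200 W
```

A few hundred watts against an orbital kinetic energy of order $10^{33}$ J is why the effect is far below any current measurement, as the answer says.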
301,676 | While studying rotational mechanics, I came across a section where it mentioned that angular momentum may not necessarily be parallel to angular velocity. My thoughts were as follows: Angular momentum ($L$) has the relation $L=I\omega$ where $\omega$ is angular velocity and $I$ is the moment of inertia, so following this relation, it seems they should be in the same direction. Why are they not? | Consider a thin rectangular block with width $w$, height $h$ resting along the xy plane as shown below. The mass of the block is $m$. The mass moment of inertia (tensor) of the block about point A is $$ {\bf I}_A = m \begin{vmatrix} \frac{h^2}{3} & -\frac{w h}{4} & 0 \\ -\frac{w h}{4} & \frac{w^2}{3} & 0 \\ 0 & 0 & \frac{w^2+h^2}{3} \end{vmatrix} $$ This was derived from the definition (as seen on https://physics.stackexchange.com/a/244969/392 ) If this block is rotating along the x axis with a rotational velocity $$ \boldsymbol{\omega} = \begin{pmatrix} \Omega \\ 0 \\ 0 \end{pmatrix} $$ then the angular momentum about point A is $${\bf L}_A = m \Omega\,\begin{pmatrix} \frac{h^2}{3} \\ -\frac{w h}{4} \\ 0 \end{pmatrix} $$ As you can see, there is a component of angular momentum in the y direction. The angular momentum vector forms an angle $\psi = -\tan^{-1} \left( \frac{3 w}{4 h} \right)$ In the figure below you see the direction of angular momentum, and the circle about which the center of mass is going to orbit due to precession. | {
"source": [
"https://physics.stackexchange.com/questions/301676",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/134947/"
]
} |
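The answer's calculation can be reproduced numerically (the values of $m$, $w$, $h$, $\Omega$ below are arbitrary illustrative choices): multiplying the inertia tensor by $\boldsymbol\omega = (\Omega, 0, 0)$ yields an angular momentum with a nonzero $y$ component, at the angle $\psi = -\tan^{-1}(3w/4h)$ quoted above.

```python
import math

# Reproducing the answer's result numerically: thin rectangular block,
# inertia tensor about corner A, spinning about the x axis.
# The values of m, w, h and Omega are arbitrary illustrative choices.
m, w, h, Omega = 2.0, 3.0, 4.0, 5.0

I_A = [[m * h * h / 3.0,  -m * w * h / 4.0,  0.0],
       [-m * w * h / 4.0,  m * w * w / 3.0,  0.0],
       [0.0,               0.0,              m * (w * w + h * h) / 3.0]]

omega = [Omega, 0.0, 0.0]
L = [sum(I_A[i][j] * omega[j] for j in range(3)) for i in range(3)]
# L = m*Omega*(h^2/3, -w*h/4, 0): not parallel to omega because of the
# off-diagonal (product-of-inertia) term in the tensor.

psi = math.atan2(L[1], L[0])                  # angle of L from the x axis
expected = -math.atan(3.0 * w / (4.0 * h))    # the answer's closed form
```

The off-diagonal entries are what break the parallelism: only when $\boldsymbol\omega$ lies along a principal axis of the tensor is $\mathbf L$ parallel to $\boldsymbol\omega$.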
302,269 | I have recently read that an orbital node in an atom is a region where there is a 0 chance of finding an electron. However, I have also read that there is an above 0 chance of finding an electron practically anywhere in space, and such is that orbitals merely represent areas where there is a 95% chance of finding an electron for example. I would just like to know if there truly is a 0 probability that an electron will be within a region defined by the node. Many thanks. | The probability of finding the electron in some volume $V$ is given by: $$ P = \int_V \psi^*\psi\,dV \tag{1} $$ That is we construct the function called the probability density : $$ F(\mathbf x, t) = \psi^*\psi $$ and integrate it over our volume $V$, where as the notation suggests the probability density is generally a function of position and sometimes also of time. There are two ways the probability $P$ can turn out to be zero: $F(\mathbf x, t)$ is zero everywhere in the volume $V$ - note that we can't get positive-negative cancellation as $F$ is a square and is everywhere $\ge 0$. we take the volume $V$ to zero i.e. as for the probability of finding the particle at a point Now back to your question. The node is a point or a surface (depending on the type of node) so the volume of the region where $\psi = 0$ is zero. That means in our equation (1) we need to put $V=0$ and we get $P=0$ so the probability of finding the electron at the node is zero. But (and I suspect this is the point of your question) this is a trivial result because if $V=0$ we always end up with $P=0$ and there isn't any special physical significance to our result. Suppose instead we take some small but non-zero volume $V$ centred around a node. Somewhere in our volume the probability density function will inevitably be non-zero because it's only zero at a point or nodal plane, and that means when we integrate we will always get a non-zero result. 
So the probability of finding the electron near a node is always greater than zero even if we take near to mean a tiny, tiny distance. So the statement the probability of finding the electron at a node is zero is either vacuous or false depending on whether you interpret it to mean precisely at a node or approximately at a node. But I suspect most physicists would regard this as a somewhat silly discussion, because we would generally mean that the probability of finding the electron at a node or nodal surface is negligibly small compared to the probability of finding it elsewhere in the atom. | {
"source": [
"https://physics.stackexchange.com/questions/302269",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/136407/"
]
} |
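A numerical illustration of the integral argument above, using a toy 1D wavefunction with a node at $x=0$ (a made-up example, not any particular atomic orbital): the probability density vanishes exactly at the node, yet integrating over any finite interval around it gives a small but strictly positive probability that shrinks as the interval does.

```python
import math

# Toy 1D wavefunction with a node at x = 0: psi(x) = x * exp(-x^2).
# This illustrates the integral argument; it is not a real orbital.
def density(x):
    psi = x * math.exp(-x * x)
    return psi * psi

def prob(a, b, n=10000):
    """Trapezoid-rule integral of |psi|^2 over [a, b]."""
    step = (b - a) / n
    total = 0.5 * (density(a) + density(b))
    total += sum(density(a + i * step) for i in range(1, n))
    return total * step

norm = prob(-10.0, 10.0)                  # normalization integral
p_wide = prob(-0.01, 0.01) / norm         # small interval around the node
p_narrow = prob(-0.001, 0.001) / norm     # narrower interval: smaller still
```

The density is exactly zero at the node itself, while the probability over any finite interval around it is positive; shrinking the interval toward zero width sends the probability to zero, which is the trivial $V \to 0$ limit discussed in the answer.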
302,279 | I surmise that there is no equation defining the speed of light, as there is with the speed of sound. Presumably, we assume that, at the event horizon, it falls to zero. What is its value at the centre of the hole? | The probability of finding the electron in some volume $V$ is given by: $$ P = \int_V \psi^*\psi\,dV \tag{1} $$ That is we construct the function called the probability density : $$ F(\mathbf x, t) = \psi^*\psi $$ and integrate it over our volume $V$, where as the notation suggests the probability density is generally a function of position and sometimes also of time. There are two ways the probability $P$ can turn out to be zero: $F(\mathbf x, t)$ is zero everywhere in the volume $V$ - note that we can't get positive-negative cancellation as $F$ is a square and is everywhere $\ge 0$. we take the volume $V$ to zero i.e. as for the probability of finding the particle at a point Now back to your question. The node is a point or a surface (depending on the type of node) so the volume of the region where $\psi = 0$ is zero. That means in our equation (1) we need to put $V=0$ and we get $P=0$ so the probability of finding the electron at the node is zero. But (and I suspect this is the point of your question) this is a trivial result because if $V=0$ we always end up with $P=0$ and there isn't any special physical significance to our result. Suppose instead we take some small but non-zero volume $V$ centred around a node. Somewhere in our volume the probability density function will inevitably be non-zero because it's only zero at a point or nodal plane, and that means when we integrate we will always get a non-zero result. So the probability of finding the electron near a node is always greater than zero even if we take near to mean a tiny, tiny distance. 
So the statement the probability of finding the electron at a node is zero is either vacuous or false depending on whether you interpret it to mean precisely at a node or approximately at a node. But I suspect most physicists would regard this as a somewhat silly discussion, because we would generally mean that the probability of finding the electron at a node or nodal surface is negligibly small compared to the probability of finding it elsewhere in the atom. | {
"source": [
"https://physics.stackexchange.com/questions/302279",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/140202/"
]
} |
302,461 | To keep the question brief: in bridge design, why is the arch structure favoured compared to a simple flat one? In other words, how does the curved platform alter the force decomposition of the load on the bridge, such that it can uphold larger loads? I imagine that intuitively the load is no longer applied in a fully normal manner (orthogonal) onto the bridge, but I cannot convince myself. | Fracture happens under tension - that is, when you pull on something hard enough, it rips. The key to the arc design is that it lowers the maximum tensile force. Take a simple beam, support it at the ends, and hang something off the center: Tension at the bottom, and compression at the top, are needed to balance the torque created by the vertical forces of the supports, and the load in the middle. Obviously, the further apart the supports are, or the greater the load, the greater the tension. When that tension reaches a critical value the beam will fail. Now if we shape the bridge into an arc, we get this: The additional lateral forces on the arc cause compression in the beam; this reduces the net tension at the bottom and makes the beam better able to support the load. You can make things even better by spreading the load more evenly, designing the shape of the arc to better optimize the load distribution, etc - but the diagram should give you a sense of the underlying principle. Update The lateral forces are perhaps most easily understood by looking at a V-shaped structure: you know intuitively that such a structure would collapse unless you provide some torque at the apex to keep the legs together, or provide sufficient friction at the base of the legs to keep them together. You can also see that the force needed near the hinge (which is provided by the red "tension" stress in my upper diagram) would need to be much greater than the force provided by friction at the bottom (lateral forces from the support on the arch). | {
"source": [
"https://physics.stackexchange.com/questions/302461",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/62173/"
]
} |
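A small statics sketch of the V-structure point in the update above: for a symmetric V of two pin-ended legs carrying a load $W$ straight down at the apex, each leg carries an axial compression $W/(2\cos\theta)$ and its support must resist a lateral thrust $(W/2)\tan\theta$, where $\theta$ is the leg's inclination from vertical. The load and angles below are illustrative numbers.

```python
import math

# Statics sketch for the V-shaped structure in the update: two pin-ended
# legs meeting at the apex, load W applied straight down at the apex.
# theta_deg is each leg's inclination from the vertical (illustrative numbers).
def v_leg_forces(load_w, theta_deg):
    theta = math.radians(theta_deg)
    vertical = load_w / 2.0                   # each support carries half the load
    compression = vertical / math.cos(theta)  # axial force along the leg
    thrust = vertical * math.tan(theta)       # lateral force the support must resist
    return compression, thrust

c30, t30 = v_leg_forces(1000.0, 30.0)   # steep V
c60, t60 = v_leg_forces(1000.0, 60.0)   # shallow V: larger lateral forces
```

The flatter the V (or the shallower the arc), the larger both the axial compression and the lateral thrust the supports must supply; without that lateral restraint the legs splay apart, which is the failure mode the answer describes.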
302,555 | The root mean square velocity of hydrogen gas at room temperature is: Gas constant: $R=8.31\ \mathrm{J\ K^{-1}\ mol^{-1}}$ Molar mass of hydrogen gas: $M=2.02\times10^{-3}\ \mathrm{kg/mol}$ $$\begin{align}
v &= \left(\frac{3\times8.31\ \mathrm{J\ K^{-1}\ mol^{-1}}\times300}{2.02\times10^{-3}\ \mathrm{kg/mol}}\right)^{\frac12}\\
&= 3356.8377\ \mathrm{m/s}\\
&= 3.356\ \mathrm{km/s}
\end{align}$$ The escape speed of Earth is $11.2\ \mathrm{km/s}$ ,
which is larger than the root mean square velocity of hydrogen gas.
But still, hydrogen gas doesn't exist in Earth's atmosphere. Why?
Have I made any mistakes in my calculations? | The answer to your question comes from Maxwell distribution of speed of the hydrogen molecules. If you take a look at this graph, about the speed of a particle $v$ and the probability of that speed $w$, you can see that there is a non-zero probability that the speed of a certain molecule is greater than the root mean square speed $v_{\mathrm{qm}} $ of that distribution. In particular, you can calculate the probability that the speed of a certain molecule is greater than the escape velocity of Earth $v_{\mathrm{esc}} = 11000\,\mathrm{m/s}$. Under the hypothesis of ideal gas, this probability is:
$$\mathcal{P} = \int_{v_{\mathrm{esc}}}^{\infty} w(v) dv $$
defining the probability density function:
$$ w(v) = 4 \pi \left( \frac{m}{2 \pi k_{\mathrm{B}} T}\right)^{3/2} e^{-\frac{mv^2}{2 k_{\mathrm{B}} T}} v^2 $$
where $m$ is the mass of the hydrogen molecule, $k_{\mathrm{B}}$ is the Boltzmann constant, $T$ is the absolute temperature (in Kelvin) and $v$ the speed. By doing this calculation (please let me use other values I have already calculated, at this point it should be easy to apply that formula for every value) at a temperature $T=270\,\mathrm{K}$ and with a mass $m_{H_2} = 2 \cdot 1.67 \times 10^{-27}\,\mathrm{kg}$, we get that the root mean square speed is $v_{\mathrm{qm}}=1830\,\mathrm{m/s}$. On the other hand, the probability that a particle has a speed six times greater than this value (it is approximately the escape velocity of Earth) is $2 \times 10^{-9}$. This value is small, but not negligible; in a long enough time, every molecule of hydrogen will escape from Earth's atmosphere. For a last example, you can consider the mass of the molecule of oxygen. Its mass is 16 times bigger than that of the hydrogen molecule, so its root mean square speed is 4 times lower, about 24 times lower than the escape speed. The probability of getting enough speed to escape Earth's atmosphere is approximately $10^{-40}$: really, really small. This is an intuitive, approximate explanation of why the molecular hydrogen concentration in Earth's atmosphere is really low, while the concentration of other, heavier molecules is higher. There is a difference in the probability, for a certain molecule, of having a speed greater than or equal to the escape speed of the Earth. This influences the rate at which each kind of molecule escapes the atmosphere and therefore leads to a different equilibrium (a different concentration) for each molecule. (source: F. Ciccacci, "Fondamenti di Fisica Atomica e Quantistica") | {
"source": [
"https://physics.stackexchange.com/questions/302555",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/140925/"
]
} |
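The quantities in the answer above can be checked directly. For a Maxwell–Boltzmann speed distribution the fraction of molecules faster than $V$ has the closed form $\mathrm{erfc}(a) + (2a/\sqrt{\pi})\,e^{-a^2}$ with $a = V/v_p$ and most probable speed $v_p = \sqrt{2k_{\mathrm{B}}T/m}$; the sketch below uses it to compare H₂ and O₂ at the answer's 270 K.

```python
import math

# Checking the answer's numbers: Maxwell-Boltzmann rms speed and the
# fraction of molecules faster than Earth's escape speed.
K_B = 1.381e-23   # Boltzmann constant, J/K

def v_rms(mass_kg, T):
    return math.sqrt(3.0 * K_B * T / mass_kg)

def frac_faster_than(V, mass_kg, T):
    """Closed-form Maxwell-Boltzmann tail: fraction of molecules with speed > V."""
    a = V / math.sqrt(2.0 * K_B * T / mass_kg)   # V over the most probable speed
    return math.erfc(a) + (2.0 * a / math.sqrt(math.pi)) * math.exp(-a * a)

V_ESC = 11200.0            # Earth escape speed, m/s
T = 270.0                  # temperature used in the answer, K
m_h2 = 2 * 1.67e-27        # H2 molecule mass, kg
m_o2 = 32 * 1.67e-27       # O2 molecule mass, kg

v_h2 = v_rms(m_h2, T)                       # ~1830 m/s, matching the answer
p_h2 = frac_faster_than(V_ESC, m_h2, T)     # tiny but nonzero tail fraction
p_o2 = frac_faster_than(V_ESC, m_o2, T)     # vastly smaller than for H2
```

The key qualitative point survives any choice of constants: the H₂ tail fraction is nonzero while the O₂ tail is many orders of magnitude smaller, which is why hydrogen leaks away and oxygen stays.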
302,564 | I have a problem understanding how to reconcile the particle antiparticle annihilation vertex with the $SU(2)$ gauge theory, in the context of the weak interaction. Let me explain better : Invoking $SU(2)$ gauge invariance we deduce there must be three gauge bosons, associated to the three Pauli matrices. We take, as usual, the linear combination yielding $\sigma_+, \sigma_-$ and $\sigma_z$ that are respectively associated to $W^+, W^-$ and $Z$. I am aware that I should be considering $U(1)_Y\times SU(2)_L$, but in the context of this question I believe it is irrelevant. Now consider the SU(2) doublets,$\begin{pmatrix}l^+\\ l^- \end{pmatrix}$, where $l^+$ has weak isospin $1/2$ and $l^-$ has isospin $-1/2$. Let's take $\begin{pmatrix}v_e\\ e^- \end{pmatrix}$, we find that the weak current by coupling to the $Z$ boson is:
$$j^{\mu}_Z \propto \begin{pmatrix}\overline{v}_e & \overline{e}^- \end{pmatrix}\gamma^{\mu}\sigma_z \begin{pmatrix}v_e\\ e^- \end{pmatrix}$$
Where $\overline{u} = u^{\dagger}\gamma^0$. Expanding this, we find that :
$$j^\mu_Z=\frac{1}{2}\overline{v}_e\gamma^{\mu}v_e-\frac{1}{2}\overline{e}^-\gamma^{\mu}e^-$$ Where, $v_e$ and $\overline{v_e}$ stands for the spinors of the neutrino, and likewise for the electron. As we can see from this, it seems that the Z-boson couples particles of same weak isospin. However, we can have an annihilation vertex where $e^-$ and $e^+$ annihilate into a Z boson, despite the fact that $e^-$ has $I_w^{(3)} = -1/2$ while $e^+$ has $I_w^{(3)} = 1/2$. How can this reconciled with the representation of Z as $\sigma_z$ ? I know that there is some problem with my current, since obviously an $e^-$ cannot annihilate with an $e^-$, in a vertex such as : , but only in a vertex such as : . However, in my derivation, there does not seem to be a distinction in which one of these vertex I'm considering, so I'm confident that there lies my mistake, but I am unable to figure it out. I think somehow, in an annihilation vertex, particles of opposite weak isospin should interact while in a scattering vertex particle of same weak isospin should interact. This is also consistent with conservation of weak isospin, but I am unable to understand how to make this distinction in the currents using $\sigma_Z$ as the Z boson coupling. | The answer to your question comes from Maxwell distribution of speed of the hydrogen molecules. If you take a look at this graph, about the speed of a particle $v$ and the probability of that speed $w$, you can see that there is a non-zero probability that the speed of a certain molecule is greater than the root mean square speed $v_{\mathrm{qm}} $ of that distribution. In particular, you can calculate the probability that the speed of a certain molecule is greater than the escape velocity of Earth $v_{\mathrm{esc}} = 11000\,\mathrm{m/s}$. Under the hypothesis of ideal gas, this probability is:
$$\mathcal{P} = \int_{v_{\mathrm{esc}}}^{\infty} w(v) dv $$
defining the probability density function:
$$ w(v) = 4 \pi \left( \frac{m}{2 \pi k_{\mathrm{B}} T}\right)^{3/2} e^{-\frac{mv^2}{2 k_{\mathrm{B}} T}} v^2 $$
where $m$ is the mass of the hydrogen molecule, $k_{\mathrm{B}}$ is the Boltzman constant, $T$ is the absolute temperature (in Kelvin) and $v$ the speed. By doing this calculation (please let me use other values I have already calculated, at this point it should be easy to apply that formula for every value) at a temperature $T=270\,\mathrm{K}$ and with a mass $m_{H_2} = 2 \cdot 1.67 \times 10^{-27}\,\mathrm{kg}$, we get that the root mean square speed is $v_{\mathrm{qm}}=1830\,\mathrm{m/s}$. On the other hand, the probability that a particle has a speed six times greater than this value (it is approximately the escape velocity of Earth) is $2 \times 10^{-9}$. This value is small, but not negligible; in a long enough time, every molecule of hydrogen will escape from Earth's atmosphere. For a last example, you can consider the mass of the molecule of oxygen. Its mass is 16 times bigger than the hydrogen molecule and its root mean square speed is 4 times lower and 24 times lower than the escape speed. The probability to get enough speed to escape Earth's atmosphere is approximately $10^{-40}$: really, really small. This is an intuitive, approximate explanation of why the molecular hydrogen concentration in Earth's atmosphere is really low, while the concentration of other, heavier molecules is higher. There is a difference in the probability, for a certain molecule, to have a speed greater or equal to the escape speed of the Earth. This influences the rate at which these kind of molecule escape the atmosphere and therefore will lead to a different equilibrium (a different concentration) for each molecule. (source: F. Ciccacci, "Fondamenti di Fisica Atomica e Quantistica") | {
"source": [
"https://physics.stackexchange.com/questions/302564",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/99654/"
]
} |
302,566 | High school physics student here. I was reading an explanation of why we work with torque and the moment of inertia the way we do. It was this one. However I got stuck on a certain concept.
The explanation made sense to me in places, but i must have missed something. Let's take a rod fixed on a point on it's end as an example. Suppose we apply a force to a point on the rod. Well, the translational acceleration of a point of the rod depends on its distance from the center and the angular acceleration. To the best of my understanding, this should mean that the amount of force needed to accelerate the whole thing with an angular acceleration of α is (average distance from center) * α * (mass of whole rod). Given all of this, i fail to see how pushing at a particular distance from the center is going to affect anything (as in, why we use torque in the equations). In the end, isn't the force kind of distributed along the whole rod or something? Please explain what are my misconceptions and why the distance from the fixation point actually matters. (I would prefer if you didn't analyze energies, i desire an explanation based on basic Newtonian laws to better comprehend the topic). | The answer to your question comes from Maxwell distribution of speed of the hydrogen molecules. If you take a look at this graph, about the speed of a particle $v$ and the probability of that speed $w$, you can see that there is a non-zero probability that the speed of a certain molecule is greater than the root mean square speed $v_{\mathrm{qm}} $ of that distribution. In particular, you can calculate the probability that the speed of a certain molecule is greater than the escape velocity of Earth $v_{\mathrm{esc}} = 11000\,\mathrm{m/s}$. Under the hypothesis of ideal gas, this probability is:
$$\mathcal{P} = \int_{v_{\mathrm{esc}}}^{\infty} w(v) dv $$
defining the probability density function:
$$ w(v) = 4 \pi \left( \frac{m}{2 \pi k_{\mathrm{B}} T}\right)^{3/2} e^{-\frac{mv^2}{2 k_{\mathrm{B}} T}} v^2 $$
where $m$ is the mass of the hydrogen molecule, $k_{\mathrm{B}}$ is the Boltzman constant, $T$ is the absolute temperature (in Kelvin) and $v$ the speed. By doing this calculation (please let me use other values I have already calculated, at this point it should be easy to apply that formula for every value) at a temperature $T=270\,\mathrm{K}$ and with a mass $m_{H_2} = 2 \cdot 1.67 \times 10^{-27}\,\mathrm{kg}$, we get that the root mean square speed is $v_{\mathrm{qm}}=1830\,\mathrm{m/s}$. On the other hand, the probability that a particle has a speed six times greater than this value (it is approximately the escape velocity of Earth) is $2 \times 10^{-9}$. This value is small, but not negligible; in a long enough time, every molecule of hydrogen will escape from Earth's atmosphere. For a last example, you can consider the mass of the molecule of oxygen. Its mass is 16 times bigger than the hydrogen molecule and its root mean square speed is 4 times lower and 24 times lower than the escape speed. The probability to get enough speed to escape Earth's atmosphere is approximately $10^{-40}$: really, really small. This is an intuitive, approximate explanation of why the molecular hydrogen concentration in Earth's atmosphere is really low, while the concentration of other, heavier molecules is higher. There is a difference in the probability, for a certain molecule, to have a speed greater or equal to the escape speed of the Earth. This influences the rate at which these kind of molecule escape the atmosphere and therefore will lead to a different equilibrium (a different concentration) for each molecule. (source: F. Ciccacci, "Fondamenti di Fisica Atomica e Quantistica") | {
"source": [
"https://physics.stackexchange.com/questions/302566",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/140929/"
]
} |
302,568 | In my class materials I've come to this issue twice. The first time I thought they solved it wrong, but now that I see a similar situation again I'm more keen on the thought that I'm misunderstanding mechanics.
Here's a picture: So to solve what I need to solve, I need to first figure out the force that is acting vertically down on the surface I painted in blue. But here's the problem On which leg do I put the force F=6kN? To my logic, the second case should be correct, because the beam is transferring the force that I am investing. I was very surprised when I saw that they solved it the first way . Because now, the force that the beam is transferring (the diagonal one) is larger than the force I invested. Which I don't think makes sense. So which is the correct way to solve this, and if it is the first way, how does it make any sense? | The answer to your question comes from Maxwell distribution of speed of the hydrogen molecules. If you take a look at this graph, about the speed of a particle $v$ and the probability of that speed $w$, you can see that there is a non-zero probability that the speed of a certain molecule is greater than the root mean square speed $v_{\mathrm{qm}} $ of that distribution. In particular, you can calculate the probability that the speed of a certain molecule is greater than the escape velocity of Earth $v_{\mathrm{esc}} = 11000\,\mathrm{m/s}$. Under the hypothesis of ideal gas, this probability is:
$$\mathcal{P} = \int_{v_{\mathrm{esc}}}^{\infty} w(v) dv $$
defining the probability density function:
$$ w(v) = 4 \pi \left( \frac{m}{2 \pi k_{\mathrm{B}} T}\right)^{3/2} e^{-\frac{mv^2}{2 k_{\mathrm{B}} T}} v^2 $$
where $m$ is the mass of the hydrogen molecule, $k_{\mathrm{B}}$ is the Boltzman constant, $T$ is the absolute temperature (in Kelvin) and $v$ the speed. By doing this calculation (please let me use other values I have already calculated, at this point it should be easy to apply that formula for every value) at a temperature $T=270\,\mathrm{K}$ and with a mass $m_{H_2} = 2 \cdot 1.67 \times 10^{-27}\,\mathrm{kg}$, we get that the root mean square speed is $v_{\mathrm{qm}}=1830\,\mathrm{m/s}$. On the other hand, the probability that a particle has a speed six times greater than this value (it is approximately the escape velocity of Earth) is $2 \times 10^{-9}$. This value is small, but not negligible; in a long enough time, every molecule of hydrogen will escape from Earth's atmosphere. For a last example, you can consider the mass of the molecule of oxygen. Its mass is 16 times bigger than the hydrogen molecule and its root mean square speed is 4 times lower and 24 times lower than the escape speed. The probability to get enough speed to escape Earth's atmosphere is approximately $10^{-40}$: really, really small. This is an intuitive, approximate explanation of why the molecular hydrogen concentration in Earth's atmosphere is really low, while the concentration of other, heavier molecules is higher. There is a difference in the probability, for a certain molecule, to have a speed greater or equal to the escape speed of the Earth. This influences the rate at which these kind of molecule escape the atmosphere and therefore will lead to a different equilibrium (a different concentration) for each molecule. (source: F. Ciccacci, "Fondamenti di Fisica Atomica e Quantistica") | {
"source": [
"https://physics.stackexchange.com/questions/302568",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/136596/"
]
} |
302,633 | Why do we add a minus sign in our formula for gravity, when we might as well choose the unit vector $r_{21}$, instead of $r_{12}$? I'm just wondering why we choose this convention. Is it because it's easier to remember that $F_{12}$ goes with $r_{12}$? Edit: Actually... Is wikipedia right? My syllabus says the following: Still, my question holds... Why go through this trouble of adding a minus sign? | The minus sign is to indicate that the force is attractive: if there was no minus sign, two masses would repel . By the way, Wikipedia's article is correct: the vector F must point away from the mass on which the force is acting ($m_2$), and $\textbf{r}_{12}$ points to the mass $m_2$, so with a minus sign you need $\textbf{r}_{12}$; otherwise you would have repulsion. If it seems redundant to you (using the vector pointing to $m_2$ and introducing a minus sign - why not just use the opposite vector $\textbf{r}_{21}$ and cancel the minus sign?) well you are right in a sense, because when speaking of force this indeed looks a bit of a cramped reasoning.
The real reason is that physicists like to think of forces in terms of fields. You can take a look at the Wikipedia article about the gravitational field. In this light, it makes much more sense to use the vector $\textbf{r}_{12}$, because it can be identified with the position vector (which has nothing to do with the mass $m_2$ anymore - this is the power of the concept of field as opposed to the force-between-objects concept). | {
"source": [
"https://physics.stackexchange.com/questions/302633",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/140553/"
]
} |
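The sign convention in the answer above can be made concrete in code (the masses and positions are arbitrary illustrative numbers): with $\hat{\mathbf r}_{12}$ pointing from body 1 toward body 2, the force on body 2, $\mathbf F = -G m_1 m_2 \hat{\mathbf r}_{12}/r^2$, comes out pointing back toward body 1, i.e. attraction.

```python
import math

# Sign-convention sketch: with r12 pointing from body 1 toward body 2,
# F_on_2 = -G * m1 * m2 * rhat12 / r^2 points back toward body 1 (attraction).
# Masses and positions are arbitrary illustrative numbers.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def force_on_2(m1, pos1, m2, pos2):
    r12 = [b - a for a, b in zip(pos1, pos2)]              # from body 1 to body 2
    r = math.sqrt(sum(c * c for c in r12))
    return [-G * m1 * m2 * c / (r * r * r) for c in r12]   # minus sign -> attraction

# Body 2 sits 10 m to the right of body 1: the force on it points left
F = force_on_2(5.0e10, [0.0, 0.0, 0.0], 1.0, [10.0, 0.0, 0.0])
```

Dropping the minus sign (or equivalently swapping in $\hat{\mathbf r}_{21}$ without it) flips the result into repulsion, which is the whole content of the convention.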
302,811 | The curl in cylindrical coordinates is defined: $$\nabla \times \vec{A}=\left({\frac {1}{\rho }}{\frac {\partial A_{z}}{\partial \varphi }}-{\frac {\partial A_{\varphi }}{\partial z}}\right){\hat {\boldsymbol {\rho }}}+\left({\frac {\partial A_{\rho }}{\partial z}}-{\frac {\partial A_{z}}{\partial \rho }}\right){\hat {\boldsymbol {\varphi }}}{}+{\frac {1}{\rho }}\left({\frac {\partial \left(\rho A_{\varphi }\right)}{\partial \rho }}-{\frac {\partial A_{\rho }}{\partial \varphi }}\right){\hat {\mathbf {z} }}$$ For vector fields of the form $\vec{A}=\frac{k }{\rho}\hat{\varphi}$ (plotted below), $A_z=A_\rho=0$ and $A_\varphi = k\rho^{-1}$, so the resulting field has zero curl. But choosing $k=\frac{\mu_o I}{2\pi}$ results in the correct solution for the magnetic field around a wire: $$\vec{B}=\frac{\mu_o I}{2\pi R}\hat{\varphi}$$ This field cannot be curl-free because of Maxwell's equations, Ampere's law, etc. So I must have made a mistake somewhere: Why am I calculating this field to be curl-free? | The vector $\hat \varphi$ is not defined at the origin, because the coordinate transformation $$(x,y) \mapsto (r,\varphi) = \left(\sqrt{x^2 + y^2}, \arctan(y/x)\right)$$
is singular there. Hence your field $\mathbf B$ is singular at the origin. The theorem that $$\nabla \times \mathbf B = 0 \Rightarrow \oint_C \mathbf B \cdot d\mathbf r = 0$$
requires that the curve $C$ in the line integral be contractible to a point without passing through any singularities. This is not the case for the plane with the origin excluded when the curve winds around the origin. The singularity of course arises because you have an infinitely thin wire. Try finding the magnetic field for a wire of radius $R_1$ carrying a uniform current density, and take the limit $R_1 \to 0$ while the total current stays constant. The curl is zero outside the wire but equal to $\mu_0 \mathbf J$ inside it, and it diverges there as $R_1 \to 0$, as Maxwell's equations dictate. | {
"source": [
"https://physics.stackexchange.com/questions/302811",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/138661/"
]
} |
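The answer's point can be checked numerically: away from the axis the curl vanishes, yet the circulation $\oint_C \mathbf B \cdot d\mathbf r$ around a loop that winds around the wire is $2\pi k$, not zero. Below is a quick sketch of my own (with $k$ set to 1 for convenience; the code is an illustration, not part of the original answer):

```python
import numpy as np

k = 1.0  # stands in for mu_0 * I / (2 * pi)

def B(x, y):
    """B = (k / rho) * phi_hat, written out in Cartesian components."""
    rho2 = x**2 + y**2
    return -k * y / rho2, k * x / rho2

# Parametrise a circle of radius 2 around the wire and integrate B . dr.
theta = np.linspace(0.0, 2.0 * np.pi, 2001)
x, y = 2.0 * np.cos(theta), 2.0 * np.sin(theta)
Bx, By = B(x, y)
integrand = Bx * (-2.0 * np.sin(theta)) + By * (2.0 * np.cos(theta))
# Trapezoidal rule, written out to avoid version-specific numpy helpers.
circulation = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(theta))
print(circulation)  # ~ 2*pi*k, even though curl B = 0 everywhere on the loop
```

A loop that does not enclose the origin gives zero, which is exactly the statement that the domain with the origin removed is not simply connected.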
303,132 | Reading about photons I hear different explanations like "elementary particle", "probability cloud", "energy quanta" and so forth. Probably no one has ever seen a photon (if "seen", it supposedly - and rather conveniently - ceases to exist), yet many experiments seem to verify its properties (or are they maybe adjusted to fit the experiment?). I thus can't help wondering if the "photon" is then just a physical/mathematical tool with inexplicable properties (like zero mass - but affected by gravity fields - and constant speed c in space) invented to explain some otherwise unexplainable phenomena and to supplement the elementary particles and their interactions. In short: are they real or imaginary? Does anybody know? Or maybe the answer is "blowing in the wind", because to most physicists it probably just doesn't matter as long as it works (as the alternative healers say). Sorry if I seem a little sarcastic here and there. | There is lots of experimental evidence that the electromagnetic field exchanges energy with atoms in discrete chunks, and if we call these chunks photons then photons exist. Which is all very well, but my guess is that you’re really interested to know if the photon exists as a little ball of light speeding through space at $c$, and if so then, well, that’s a complicated question. Actually all particles are more elusive than you might think. Many of us will have started our journey into quantum mechanics with the wave equation of a free particle, and been surprised that the solution was a plane wave that didn’t look anything like a particle. Then the teacher tells us we can build a wave packet to make a particle but, well, this isn’t all that convincing. Making a particle by constructing a wave packet seems awfully arbitrary for objects that are supposed to be fundamental. In fact non-relativistic quantum mechanics doesn’t tell us anything about why particles exist and where they come from.
It isn’t until we get to quantum field theory that we get a reason why particles exist and an explanation for their properties, but even then particles turn out to be stranger things than we thought. When you learn QFT you traditionally start out by quantising a scalar free field. If we do this we find the field states are Fock states, and we interpret these states as containing a well defined number of particles. Acting on the vacuum state with a creation operator adds a particle to a state, and likewise acting on a state with the annihilation operator removes a particle. All this may sound a bit abstract, but it actually gives us a concrete description of what particles are. The particle properties, like mass, spin, charge, etc., are properties of the quantum field, and all the particles are identical because they are all described by the same field. So the theory immediately tells us why e.g. all electrons are identical, and it describes how particles can be created and destroyed in colliders like the LHC. Right now quantum field theory is the definitive theory for describing what particles are and how they behave. But these field modes that represent particles look awfully like the plane waves that we started with back when we first learned QM. So the particles described by QFT still don’t really resemble particles in the intuitive sense of a little ball. And worse is to come. Fock states only exist for the free field, i.e. one in which particles don’t interact with each other. And that’s obviously a useless model for particles like electrons and photons that interact strongly. In an interacting theory the field states aren’t Fock states, and they aren’t even superpositions of Fock states. In fact right now we don’t know what the states of an interacting field are. The best we can do is calculate their properties using a perturbative approach or a lattice approximation. But let’s get back to photons.
We don’t quantise the electromagnetic field because it isn’t manifestly Lorentz covariant, so instead we construct a field called the electromagnetic four-potential and quantise that. And now we have a definition of the photon in terms of the states of this field. As long as we are dealing with situations where interactions can be ignored we have a nice clean definition of a photon. And we can describe the creation of photons by adding energy to the modes described by the quantum field, and annihilation of photons can take energy out of the modes and add it to e.g. a hydrogen atom. In this sense photons are real things that definitely exist. But this photon doesn’t look like a little ball of light. In fact it doesn’t look like a light ray at all. Constructing a light ray involves taking a coherent state of photons in a way that I confess I don’t understand but I know is complicated. This is the domain of quantum optics and I wish you many happy hours attempting to learn it. This is the point made in the paper by W. E. Lamb that I mentioned in a comment. There is a long and ignoble history of people imagining that light rays are just hails of photons, then getting confused as a result. The only time we really see light behaving as a photon is when it exchanges energy with something. So when an excited hydrogen atom decays a photon is emitted. Likewise a photon can be absorbed by an atom and excite it. As the light propagates to or from the atom it is rarely useful to describe it in terms of photons. I feel like I’ve gone on and on at some length without really answering your question, but that’s because your question doesn’t really have an answer. QFT, specifically quantum electrodynamics, gives us a very, very precise description of what photons are and I suspect most of us would say that of course photons really exist. They just aren’t the simple objects that most people think. | {
"source": [
"https://physics.stackexchange.com/questions/303132",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/105027/"
]
} |
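The Fock-state bookkeeping the answer describes (creation and annihilation operators adding and removing quanta) can be made concrete for a single field mode. Below is a toy sketch of my own, truncating the infinite number basis to six levels; the variable names and the truncation are my assumptions, not something from the answer:

```python
import numpy as np

N = 6  # truncation of the (infinite) number basis, for illustration only

# Annihilation operator in the number basis: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)
adag = a.T  # creation operator: adag|n> = sqrt(n+1)|n+1>

vacuum = np.zeros(N)
vacuum[0] = 1.0

one_photon = adag @ vacuum                     # a-dagger adds one quantum
two_photon = adag @ one_photon / np.sqrt(2.0)  # adag|1> = sqrt(2)|2>, normalise

number_op = adag @ a  # photon-number operator
print(one_photon @ number_op @ one_photon)  # expectation value ~ 1.0
print(two_photon @ number_op @ two_photon)  # expectation value ~ 2.0
```

Each application of `adag` climbs one rung of the number ladder, which is exactly the "well defined number of particles" interpretation of Fock states; for an interacting field no such clean ladder exists, as the answer notes.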
303,412 | When jumping a car battery the standard advice is to connect the red (positive) cable first. What's the physics explanation for this? | This is more of an automotive question but... The reason you connect the reds first is to minimize the likelihood of a short. Remember that you're typically in control of one clip at a time, so one of them is not fully in your control. The particular trouble case is the last clip that you put in place. If you attach the negative sides first, then one positive clip, the other positive clip is now at 12V with respect to the "ground" of the other car's body. If the other red clip touches the other car almost anywhere in the engine, you'll create a short circuit. This is really easy because there's a lot of metal in an engine compartment! If you connect the positive clips first, and then one negative clip, the other clip is now at roughly 0V with respect to the "ground" of the other car's body. If it touches anything in the engine, it's no big deal. The only way to create a short circuit here is if you explicitly touched the black clip to the red one, and that's a lot harder to do by accident. | {
"source": [
"https://physics.stackexchange.com/questions/303412",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4836/"
]
} |