source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
446,889 | In an answer to another question of mine, concerning gravity, there was a link to a video about the creation of artificial gravity, based on rotation. The question I have might be silly (or have an obvious answer), but it puzzles me nonetheless. As I understand it, in order for the centrifugal force (which is responsible for creating gravity, in this case) to work, the object it works upon should be attached to the wheel's 'spoke' or 'rim'. If an astronaut walks on the inside of the 'rim' (like here in the video), the contact with the 'rim' is maintained via the legs, thus the centrifugal force is in action. Now, the question: if, while being inside a rotating space station, an astronaut would jump really high, wouldn't he in that case experience zero gravity until he again touches some part (wall or floor) of the station? Am I missing something in my understanding? | Now, the question: if, while being inside a rotating space station, an astronaut would jump really high, wouldn't he then experience zero gravity until he again touches some part (wall or floor) of the station? Am I missing something in my understanding? Well, here's a related question. Suppose you find yourself in an elevator at the top floor of a skyscraper when the cable suddenly snaps. As the elevator plummets down, you realize you'll die on impact when it hits the bottom. But then you think: what if I jump just before that happens? When you jump, you're moving up, not down, so there won't be any impact at all! The mistake here is the same as the one you've made above. When you jump in the elevator, you indeed start moving upward relative to the elevator, but you're still moving at a tremendous speed downward relative to the ground, which is what matters. Similarly, when you are at the rim of a large rotating space station, you have a large velocity relative to somebody standing still at the center. When you jump, it's true that you're going up relative to the piece of ground you jumped from, but you still have that huge tangential velocity. You don't lose it just by losing contact with the ground, so nothing about the story changes. | {
"source": [
"https://physics.stackexchange.com/questions/446889",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
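
A minimal numerical sketch of the point made in the answer above, assuming a hypothetical station of radius 50 m spun to give 1 g at the rim: in the inertial frame the jumping astronaut has no forces on him, so he coasts in a straight line, keeps his tangential speed, and meets the rim again a fraction of a second later.

```python
import numpy as np

R, g = 50.0, 9.81                  # assumed station radius (m) and rim "gravity"
v_rim = np.sqrt(g * R)             # rim speed giving v**2/R = g, about 22 m/s

pos = np.array([R, 0.0])           # astronaut starts on the rim
vel = np.array([-2.0, v_rim])      # jumps "up" (radially inward) at 2 m/s

# No forces act after the jump: he moves in a straight line until his
# distance from the axis grows back to R, i.e. he lands on the floor.
dt, t = 1e-4, 0.0
while True:
    pos = pos + vel * dt
    t += dt
    if np.linalg.norm(pos) >= R:
        break

print(f"time off the floor: {t:.2f} s")                      # ~0.4 s
print(f"speed at landing  : {np.linalg.norm(vel):.1f} m/s")  # unchanged, ~22 m/s
```

The tangential speed never changes, which is exactly the answer's point: losing contact with the floor does not remove the velocity the floor gave him.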
446,893 | I'm having some issues understanding what happens microscopically when there are sudden changes in pressure. The microscopic idea is that particles randomly bounce off each other. Entropy arguments even allow all the air to go to one side of the room, leaving a vacuum on the other side, though of course this is very improbable. But if particles just randomly bounce around, why are you sucked out of a space station if the doors are suddenly opened to the vacuum? How can the molecules around you 'feel' that a door has been opened somewhere else, if they just randomly bounce around? | | {
"source": [
"https://physics.stackexchange.com/questions/446893",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/169351/"
]
} |
446,974 | I have never found experimental evidence that measuring one entangled particle causes the state of the other entangled particle to change, rather than just being revealed. Using the spin-up/spin-down example, we know that one of the particles will be spin up and the other spin down, so when we measure one and find it is spin up we know the other is spin down. Is there any situation after creation in which the particle with spin up will change to spin down? | The assumption that a measurable property exists whether or not we measure it is inconsistent with the experimental facts. Here's a relatively simple example. Suppose we have four observables, $A,B,C,D$, each of which has two possible outcomes. (For example, these could be single-photon polarization observables.) For mathematical convenience, label the two possible outcomes $+1$ and $-1$, for each of the four observables. Suppose for a moment that the act of measurement merely reveals properties that would exist anyway even if they were not measured. If this were true, then any given state of the system would have definite values $a,b,c,d$ of the observables $A,B,C,D$. Each of the four values $a,b,c,d$ could be either $+1$ or $-1$, so there would be $2^4=16$ different possible combinations of these values. Any given state would have one of these 16 possible combinations. Now consider the two quantities $a+c$ and $c-a$. The fact that $a$ and $c$ both have magnitude $1$ implies that one of these two quantities must be zero, and then the other one must be either $+2$ or $-2$. This, in turn, implies that the quantity $$(a+c)b+(c-a)d$$ is either $+2$ or $-2$. This is true for every one of the 16 possible combinations of values for $a,b,c,d$, so if we prepare many states, then the average value of this quantity must be somewhere between $+2$ and $-2$. In particular, the average cannot be any larger than $+2$. This gives the CHSH inequality $$\langle{AB}\rangle+\langle{CB}\rangle+\langle{CD}\rangle-\langle{AD}\rangle\leq 2,$$ where $\langle{AB}\rangle$ denotes the average of the product of the values of $a$ and $b$ over all of the prepared states. In the real world, the CHSH inequality can be violated, and quantum theory correctly predicts the observed violations. The quantity $\langle{AB}\rangle+\langle{CB}\rangle+\langle{CD}\rangle-\langle{AD}\rangle$ can be as large as $2\sqrt{2}$. Here are a few papers describing experiments that verify this: Kwiat et al, 1995, “New high-intensity source of polarization-entangled photon pairs,” http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.75.4337 ; Kwiat et al, 1998, “Ultra-bright source of polarization-entangled photons,” http://arxiv.org/abs/quant-ph/9810003 ; Weihs et al, 1998, “Violation of Bell’s inequality under strict Einstein locality conditions,” http://arxiv.org/abs/quant-ph/9810080 . The fact that the CHSH inequality is violated in the real world implies that the premise from which it was derived cannot be correct. The CHSH inequality was derived above by assuming that the act of measurement merely reveals properties that would exist anyway even if they were not measured. The inequality is violated in the real world, so this assumption must be wrong in the real world. Measurement plays a more active role. | {
"source": [
"https://physics.stackexchange.com/questions/446974",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/212255/"
]
} |
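
A quick sketch verifying the counting step in the answer above: enumerating all $2^4=16$ assignments of $\pm1$ to $a,b,c,d$ confirms that $(a+c)b+(c-a)d$ is always $\pm2$, so no average of it can exceed $2$, while quantum theory reaches $2\sqrt{2}$.

```python
import itertools, math

outcomes = (+1, -1)
values = {(a + c) * b + (c - a) * d
          for a, b, c, d in itertools.product(outcomes, repeat=4)}
print(values)            # {2, -2}: every one of the 16 assignments gives +/-2

# Any average over prepared states therefore obeys the CHSH bound <= 2,
# while quantum mechanics can reach the Tsirelson bound:
print(2 * math.sqrt(2))  # 2.828...
```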
447,287 | It is a known fact that inertial and gravitational masses are the same thing, and therefore are numerically equal. This is not an obvious thing, since there are even experiments trying to find a difference between the two kinds of masses. What I don't understand is: why is this not obvious? Usually when something that isn't considered obvious seems obvious to me, there's something deep that I'm not getting. Here's my line of thought: The inertial mass is defined by $$ {\bf{F}} = m_i {\bf{a}} \tag{1} $$ The gravitational mass is derived from the fact that the gravitational force between two objects is proportional to the product of the masses of the objects: $$ {\bf{F_g}} = -G \frac{m_{G1} m_{G2}}{|{\bf{r}}_{12}|^2} \hat{{\bf{r}}} \tag{2} $$ Now if the only force acting on object $1$ is the gravitational force, I can equate equations $(1)$ and $(2)$, and I can always fix the constant $G$ in such a way that the gravitational mass and the inertial mass are numerically equal. What's wrong with this line of thought, and why is the equivalence not really so obvious? | To make it clear that it is not obvious, it is better to stop using the word "mass" in both cases. So it is better to say that it is not obvious that the inertial resistance, meaning the property that scales how different objects accelerate under the same given force, is the same as the "gravitational charge", meaning the property that scales the gravitational field that different objects produce. Just to make it clear, the problem with the equivalence of masses is not "does $m_i=1$ imply $m_{G}=1$?" in whatever units. The real question is, "does doubling the inertial mass actually double the gravitational mass?". So your procedure of redefining $G$, while keeping it as a constant, is only valid if the ratio ${m_i}/{m_G}$ is constant, meaning if there is a constant scale factor between the inertial mass and the gravitational mass. By the way, this scale factor would actually never be noticed since from the beginning it would already be accounted for in the constant $G$. You could do the same thing with electrical forces, using Coulomb's law. You can check if electrical charge is the same as inertial mass, since you have: $${\bf{F}} = m_i {\bf{a}} \tag{1}$$ $${\bf{F_e}} = -K \frac{q_1 q_2}{|{\bf{r}}_{12}|^2} \hat{{\bf{r}}} \tag{2}$$ You can ask, is $q_1$ the same as $m_i$? And it is true that for one specific case you could redefine $K$ such that it would come out that $q_1$ and $m_i$ are the same, but it would not pass the scaling test, since doubling the inertial mass does not double the charge. | {
"source": [
"https://physics.stackexchange.com/questions/447287",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/121554/"
]
} |
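
A small sketch of the "scaling test" described above, with hypothetical numbers: if the ratio of gravitational to inertial mass were not universal, different objects would fall with measurably different accelerations, which no redefinition of $G$ could hide (this is what Eötvös-type experiments check).

```python
G = 6.674e-11             # gravitational constant (SI units)
M, R = 5.972e24, 6.371e6  # Earth's mass (kg) and radius (m)

def fall_acceleration(m_inertial, m_grav):
    # a = F / m_i with F = G M m_G / R^2
    return G * M * m_grav / R**2 / m_inertial

print(fall_acceleration(1.0, 1.0))  # ~9.82 m/s^2
print(fall_acceleration(2.0, 2.0))  # identical: doubling both changes nothing
print(fall_acceleration(2.0, 1.9))  # hypothetical ratio violation: different
```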
448,573 | In this YouTube video it is claimed that electrons orbit their atom's nucleus not in well-known fixed orbits, but within "clouds of probability", i.e., spaces around the nucleus where they can lie with a probability of 95%, called "orbitals". It is also claimed that the further away one looks for the electron from the nucleus, the more this probability decreases, yet it never reaches 0. The authors of the video conclude that there is a non-zero probability for an atom to have its electron "on the other side of the Universe". If this is true, then there must be a portion of all atoms on Earth whose electron lies outside the Milky Way. Which portion of atoms has this property? | The quantity you should consider first is the Bohr radius; this gives you an idea of the relevant atomic scales: $$ a_0 = 5.29\times 10^{-11} ~{\rm m} $$ For hydrogen (the most abundant element), in its ground state, the probability of finding an electron beyond a distance $r$ from the center looks something like (for $r \gg a_0$) $$ P(r) \approx e^{-2r/a_0} $$ Now let's plug in some numbers. The virial radius of the Milky Way is around $200 ~{\rm kpc} \approx 6\times 10^{21}~{\rm m}$, so the probability of finding an electron outside the galaxy from an atom on Earth is around $$ P \sim e^{-10^{32}} $$ That's ... pretty low. But you don't need to go that far to show this effect: the probability that an electron of an atom in your foot is found in your hand is $\sim 10^{-10^{10}}$. | {
"source": [
"https://physics.stackexchange.com/questions/448573",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/81683/"
]
} |
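
A quick check of the exponents quoted above, a sketch using the ground-state formula $P(r)\approx e^{-2r/a_0}$; the probabilities underflow ordinary floats, so the code reports the exponents instead (the 1 m foot-to-hand distance is an assumed round number).

```python
import math

a0 = 5.29e-11    # Bohr radius (m)
r_galaxy = 6e21  # ~200 kpc in metres
r_hand = 1.0     # assumed foot-to-hand distance (m)

# P(r) ~ exp(-2 r / a0); print the exponent rather than the number itself.
print(f"outside the galaxy: P ~ exp(-{2 * r_galaxy / a0:.1e})")               # exp(-2.3e32)
print(f"foot to hand      : P ~ 10^(-{2 * r_hand / a0 / math.log(10):.1e})")  # 10^(-1.6e10)
```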
448,673 | Given a battleship, suppose we construct a tub with exactly the same shape as the hull of the battleship, but 3 cm larger. We fill the tub with just enough water to equal the volume of space between the hull and the tub. Now, we very carefully lower the battleship into the tub. Does the battleship float in the tub? I tried it with two large glass bowls, and the inner bowl seemed to float. But if the battleship floats, doesn't this contradict what we learned in school? Archimedes' principle says "Any floating object displaces its own weight of fluid." Surely the battleship weighs far more than the small amount of water that it would displace in the tub. Note: I originally specified the tub to be just 1 mm larger in every direction, but I figured you would probably tell me when a layer of fluid becomes so thin, buoyancy is overtaken by surface tension, cohesion, adhesion, hydroplaning, or whatever. I wanted this to be a question about buoyancy alone. | Yes, it floats. And it has displaced its "own weight of water" in the sense that if you had filled the container with water and only then lowered the ship into the container, nearly all that water would have been displaced and would now be sloshing around on the floor. | {
"source": [
"https://physics.stackexchange.com/questions/448673",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/134671/"
]
} |
448,724 | Many people think of the water analogy to try to explain how electromagnetic energy is delivered to a device in a circuit. Using that analogy, in a DC circuit, one could imagine the power-consuming device is like a water wheel being pushed by the current. In the case of an actual water wheel, the more water that flows per unit of time, the more energy gets delivered to the wheel per unit of time: power = current. But in electric circuits, power = voltage x current. Why is this? | Power to a water-wheel depends both on the current (amount of water delivered) and the head (vertical drop of water as it turns the wheel). So, the water analogy does have TWO variables that multiply together to make power: current, measuring (for instance) the water flow at Niagara, and vertical drop (like the height of Niagara Falls). Current is NOT the same as power, in a river, because long stretches of moving water in a channel don't dissipate energy as much as a waterfall does. Siting a hydroelectric power plant at Niagara Falls makes sense. In the analogy to electricity, a wire can deliver current at little voltage drop (and has tiny power dissipation), but a resistor which carries that same current will be warmed (it has a substantial terminal-to-terminal voltage drop). | {
"source": [
"https://physics.stackexchange.com/questions/448724",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/214901/"
]
} |
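
A worked numerical sketch of the two-variable analogy above, using assumed round Niagara-like figures: hydraulic power is flow times head ($P=\rho g Q h$), exactly parallel to electrical $P = VI$.

```python
rho, g = 1000.0, 9.81  # water density (kg/m^3), gravitational acceleration (m/s^2)

Q, h = 2800.0, 51.0    # assumed flow (m^3/s) and vertical drop (m)
print(f"hydraulic power : {rho * g * Q * h / 1e9:.1f} GW")  # flow x head, ~1.4 GW

V, I = 120.0, 10.0     # an ordinary household load for comparison
print(f"electrical power: {V * I:.0f} W")                   # voltage x current
```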
449,201 | I've seen this question asked before but I can't find an answer to the specific point I'm troubled with. From the kinetic theory of gases, pressure results from molecules colliding with the walls of a container enclosing a gas, imparting a force upon the wall. Now, if we split the container into two halves, I am told that the pressure remains the same on either side of the partition, assuming the gas has uniform density throughout the container. But if we split the container into two, isn't there effectively half the number of molecules striking the wall on each side, so the pressure should also be halved? Shouldn't pressure be dependent on the number of molecules? | If we split the container into two, isn't there effectively half the number of molecules striking the wall on each side so the pressure should also be halved? Shouldn't pressure be dependent on the number of molecules? You are right that if we only halved the number of particles we would have a smaller pressure. But you have also halved the volume of the container. Each of the (now fewer) particles hits the walls more frequently due to the smaller volume. In other words, the number of particles goes down, but the number of collisions per particle goes up. The two effects cancel out, leading to the same pressure as before you put in the partition. | {
"source": [
"https://physics.stackexchange.com/questions/449201",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
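
The cancellation described above can be checked directly with the ideal gas law $P = NkT/V$; a minimal sketch with arbitrary illustrative numbers:

```python
k = 1.380649e-23               # Boltzmann constant (J/K)

def pressure(N, V, T=300.0):
    return N * k * T / V       # ideal gas law, P = N k T / V

N, V = 1e23, 0.1               # arbitrary particle number and volume (m^3)
print(pressure(N, V))          # whole container
print(pressure(N / 2, V / 2))  # each half after the partition: same value
```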
449,635 | I have noticed this several times. When I am boiling water, a few seconds before it reaches its boiling point, vapours form as usual. But if I turn the gas off before it boils, the moment it turns off I see a lot of vapour suddenly being formed from the hot water for a second or two. Can anyone tell me why this happens? | What you are seeing is not actually vapor - vapor is invisible. The mist seen above boiling water, commonly but inaccurately called vapor, is actually made of tiny droplets of liquid water, formed when the vapor cools down and condenses. While the stove is on, the constant influx of vapor from the boiling water keeps the air above it hot, so condensation is minimal and there is little visible mist. When the gas is turned off, boiling stops, the air above the water cools down, and the vapor it contains suddenly condenses, creating a large plume of mist. | {
"source": [
"https://physics.stackexchange.com/questions/449635",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/217702/"
]
} |
449,958 | As far as I’m aware, gravity in general relativity arises from the curvature of spacetime and is equivalent to an accelerated reference frame. Objects accelerating in a gravitational field are in fact inertial and are moving through geodesics in spacetime. So it could be said then that it is not really a force, but a pseudoforce much like the Coriolis effect. If so, why is it necessary to quantise gravity with a gauge boson, the graviton? And why is it necessary to unify it with the other forces? | While it's common to describe gravity as a fictitious force we should be cautious about the use of the adjective fictitious as this is a technical term meaning the gravitational force is not fundamental but is the result of an underlying property. The force itself most certainly exists as anyone who has been sat on by an elephant can attest. There is a sense in which all forces are fictitious since they are all the emergent long range behaviour of quantum fields, so gravity is not unique in this respect. For more on this see Can all fundamental forces be fictitious forces? Anyhow, the object responsible for the gravitational force is a tensor field called the metric, and when we quantise gravity we are quantising the metric not the force. The graviton then emerges as the excitation of the quantum field that describes the metric. As with other quantum fields we can have real gravitons that are the building blocks of gravitational waves and virtual gravitons used in scattering calculations. Finally, you ask why it's necessary to quantise gravity, and this turns out to be a complicated question and one that ignites much debate about what it means to quantise gravity. However, the question has already been thoroughly discussed in Is the quantization of gravity necessary for a quantum theory of gravity? While it's not directly related I can also recommend A list of inconveniences between quantum mechanics and (general) relativity? as interesting reading. The principal reason that we want to quantise gravity is that Einstein's equation relates the curvature to the matter/energy distribution, and the matter/energy is quantised. Einstein's equation tells us: $$ \mathbf G = 8 \pi \mathbf T $$ where $\mathbf G$ is the Einstein tensor that describes the spacetime curvature while $\mathbf T$ is the stress-energy tensor that describes the matter/energy distribution. The problem is that $\mathbf T$ could describe matter that is in a superposition of states or an entangled state, and that implies that the curvature must also be in a superposition of states or entangled. And this is only possible if the spacetime curvature is described by a quantum theory, or some theory whose low energy limit is quantum mechanics. | {
"source": [
"https://physics.stackexchange.com/questions/449958",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/203798/"
]
} |
450,006 | According to many high school textbook sources, water perfectly wets glass. That is, the adhesion between water and glass is so strong that it is energetically favorable for a drop of water on glass to spread out and coat the entire surface. Students are often made to memorize this fact, and many physics questions use it as an assumption. However, it's perfectly obvious that this doesn't actually happen. If you put a drop of water on glass, it might spread out a little, but it doesn't remotely coat the whole thing. In fact, I've never seen anything like the phenomenon described as "perfect wetting". Can perfect wetting actually be observed for everyday materials? If not, what are the main additional factors preventing it from happening, as the textbooks say? | In everyday life glass surfaces are always covered by a layer of, well, crud. Glass surfaces are exceedingly high energy surfaces due to the high density of polar hydroxyl groups and they attract pretty much anything. This means that outside of a colloid science laboratory you will never encounter a clean glass surface. I spent many years carrying out experiments involving interactions with glass surfaces, and to get the surface clean we had to clean it with chromic acid. A quick Google found instructions for doing this here, but if you ever feel tempted to try this at home do note the comment in that article: The dichromate should be handled with extreme care because it is a powerful corrosive and carcinogen. If you survive the cleaning process then you will find a water drop placed on the glass does have an effectively zero contact angle and the drop will spread out almost completely. But it's only under these extreme conditions that you will see this. Just leaving the glass exposed to the air for a few hours is enough to coat it with a monolayer of whatever organic detritus is floating around (which if humans are present is quite a lot :-). Once this happens you aren't measuring the contact angle on glass, you are measuring it on whatever organic film is coating the glass. | {
"source": [
"https://physics.stackexchange.com/questions/450006",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/83398/"
]
} |
450,054 | Title is self-explanatory. I'm assuming both water and air are transparent. So, if that's true, how can I clearly distinguish an air bubble under water? | Air and water are both transparent to a good enough approximation. However, light travels more slowly in water: the speed of light in air is about 33% faster than in water. As a result, when light passes from one medium to the other, it is partly reflected and partly refracted (bent). For the refracted part, the general rule for determining the bending angle is called Snell's law, which can be expressed like this: $$ \frac{\sin\theta_\text{w}}{\sin\theta_\text{a}}=\frac{v_\text{w}}{v_\text{a}} \approx \frac{1}{1.33} \tag{1} $$ where $v_\text{w}$ and $v_\text{a}$ are the speed of light in water and air, respectively, and where $\theta_\text{w}$ and $\theta_\text{a}$ are the angles of the light ray relative to a line perpendicular to the surface, on the water side and on the air side, respectively. If the angle on the water side is $\theta_\text{w} \gtrsim 49^\circ$, then equation (1) does not have any solution: there is no air-side angle $\theta_\text{a}$ that satisfies the equation. In this case, as niels nielsen indicated, light propagating inside the water will be completely reflected at the water-air interface. So the rim of the bubble acts like a mirror: if you do a reverse ray-trace from your eye back to near the rim of an air bubble in the water, the angle between the ray and the line perpendicular to the surface of the bubble will be greater than $49^\circ$ (this defines what "near the rim" means), so that part of the bubble acts like a mirror for light coming from those angles, as illustrated here: | {
"source": [
"https://physics.stackexchange.com/questions/450054",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/217880/"
]
} |
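
A quick computation of the critical angle used in the answer above (a sketch; $n=1.33$ is the usual refractive index of water):

```python
import math

n_water = 1.33
theta_c = math.degrees(math.asin(1.0 / n_water))
print(f"critical angle at a water-air interface: {theta_c:.1f} degrees")  # ~48.8
# Rays inside the water that meet the bubble's surface beyond this angle are
# totally internally reflected, which is why the bubble's rim looks silvery.
```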
450,326 | I am reading Hawking's "Brief answers". He complained that black holes destroy information (and was trying to find a way to avoid this). What I don't understand: Isn't deleting information quite a normal process? Doesn't burning a written letter or deleting a hard disk accomplish the same? | (The answers by Mark H and B.fox were posted while this one was being written. This answer says the same thing in different words, but I went ahead and posted it anyway because sometimes saying the same thing in different words can be helpful.) The key is to appreciate the difference between losing information in practice and losing information in principle . If you write "My password is 12345" on a piece of paper and then burn it, the information might be lost for all practical purposes , but that doesn't mean that the information is lost in principle . To see the difference, compare these two scenarios: Scenario 1: You write "My password is 12345" on a piece of paper and then burn it. Scenario 2: You write "My password is ABCDE" on a piece of paper and then burn it. Exactly what happens in either scenario depends on many details, like the specific arrangement of molecules in the piece of paper and the ink, the specific details of the flame that was used to ignite the paper, the specific arrangement of oxygen molecules in the atmosphere near the burning paper, etc, etc, etc. The variety of possible outcomes is equally vast, with possible outcomes differing from each other in the specific details of which parts of the paper ended up as which pieces of ash, which molecules ended up getting oxidized and drifting in such-and-such a direction, etc, etc, etc. This is why the information is lost in practice. However, according to the laws of physics as we understand them today, all of the physically possible outcomes in Scenario 1 are different than all of the physically possible outcomes in Scenario 2. There is no way to start with a piece of paper that says "My password is 12345" and end up with precisely the same final state (at the molecular level) as if the piece of paper had said "My password is ABCDE." In this sense, the information is not lost in principle. In other words, the laws of physics as we understand them today are reversible in principle, even though they are not reversible in practice. This is one of the key ideas behind how the second law of thermodynamics is derived from statistical mechanics. The black hole information paradox says that our current understanding of physics is necessarily flawed. Either information really is lost in principle when a black hole evaporates, or else spacetime as we know it is only an approximately-valid concept that fails in this case, or else some other equally drastic thing. I think it's important to appreciate that the black hole information paradox is not obvious to most people (certainly not to me, and maybe not to anybody). 
As a testament to just how non-obvious it is, here are a few review papers mostly written for an audience who already understands both general relativity and quantum field theory: [1] Marolf (2017), “The Black Hole information problem: past, present, and future,” http://arxiv.org/abs/1703.02143 [2] Polchinski (2016), “The Black Hole Information Problem,” http://arxiv.org/abs/1609.04036 [3] Harlow (2014), “Jerusalem Lectures on Black Holes and Quantum Information,” http://arxiv.org/abs/1409.1231 [4] Mathur (2011), “What the information paradox is not,” http://arxiv.org/abs/1108.0302 [5] Mathur (2009), “The information paradox: A pedagogical introduction,” http://arxiv.org/abs/0909.1038 Section 2 in [1] says: conventional physics implies the Hawking effect to differ fundamentally from familiar thermal emission from hot objects like stars or burning wood. To explain this difference, ... [technical details] Section 4.2 in [2] says: The burning scrambles any initial information, making it hard to decode, but it is reversible in principle. ... A common initial reaction to Hawking’s claim is that a black hole should be like any other thermal system... But there is a difference: the coal has no horizon. The early photons from the coal are entangled with excitations inside, but the latter can imprint their quantum state onto later outgoing photons. With the black hole, the internal excitations are behind the horizon, and cannot influence the state of later photons. The point of listing these references/excerpts is simply to say that the paradox is not obvious. The point of this answer is mainly to say that burning a letter or deleting a hard disk are reversible in principle (no information loss in principle) even though they make the information practically inaccessible, because reconstructing the original message from its ashes (and infrared radiation that has escaped to space, and molecules that have dissipated into the atmosphere, etc, etc, etc) is prohibitively difficult, to say the least. Note added: A comment by the OP pointed out that the preceding answer neglects to consider the issue of measurement. This is an important issue to address, given that measurement of one observable prevents simultaneous measurement of a mutually non-commuting observable. When we say that the laws of physics as we currently know them are "reversible", we are ignoring the infamous measurement problem of quantum physics, or at least exploiting the freedom to indefinitely defer application of the "projection postulate." Once the after-effects of a measurement event have begun to proliferate into the extended system, the extended system becomes entangled with the measured quantity in a practically irreversible way. (The impossibility of simultaneously measuring non-commuting observables with perfect precision is implicit in this.) It is still reversible in principle, though, in the sense that distinct initial states produce distinct final states — provided we retain the full entangled final state. This is what physicists have in mind when they say that the laws of physics as we currently know them are "reversible." The black hole information paradox takes this into account. The paradox is not resolved by deferring the effects of measurement indefinitely in the black hole case, nor is it resolved by applying the "projection postulate" as soon as we can get away with it in the burning-piece-of-paper case.
(Again, the BH info paradox is not obvious , but these things have all been carefully considered, and they don't resolve the paradox.) Since we don't know how to resolve the measurement problem, either, I suppose we should remain open to the possibility that the black hole information paradox and the measurement problem might be related in some yet-undiscovered way. Such a connection is not currently clear, and it seems unlikely in the light of the AdS/CFT correspondence [6][7], which appears to provide a well-defined theory of quantum gravity that is completely reversible in the sense defined above — but in a universe with a negative cosmological constant, unlike the real universe which has a positive cosmological constant [8]. Whether the two mysteries are connected or not, I think it's safe to say that we still have a lot to learn about both of them. [6] Introduction to AdS/CFT [7] Background for understanding the holographic principle? [8] How to define the boundary of an infinite space in the holographic principle? | {
"source": [
"https://physics.stackexchange.com/questions/450326",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/218030/"
]
} |
450,743 | I'm taking a course on quantum mechanics and I'm getting to the part where some of the mathematical foundations are being formulated more rigorously. However when it comes to Hilbert spaces, I'm somewhat confused. The way I understand it is as follows:
The Hilbert space describing for example a particle moving in the $x$ -direction contains all the possible state vectors, which are abstract mathematical objects describing the state of the particle. We can take the components of these abstract state vectors along a certain base to make things more tangible. For example: The basis consisting of state vectors having a precisely defined position. In which case the components of the state vector will be the regular configuration space wave function The basis consisting of state vectors having a precisely defined momentum. In which case the components of the state vector will be the regular momentum space wave function The basis consisting of state vectors which are eigenvectors of the Hamiltonian of the linear harmonic oscillator It's also possible to represent these basis vectors themselves in a certain basis. When they are represented in the basis consisting of state vectors having a precisely defined position, the components of the above becoming respectively Dirac delta functions, complex exponentials and Hermite polynomials. So the key (and new) idea here for me is that state vectors are abstract objects living in the Hilbert space and everything from quantum mechanics I've learned before (configuration and momentum space wave functions in particular) are specific representations of these state vectors. The part that confuses me, is the fact that my lecture notes keep talking about the Hilbert space of 'square integrable wave functions'. But that means we are talking about a Hilbert space of components of a state vector instead of a Hilbert space of state vectors? If there is anybody who read this far and can tell me if my understanding of Hilbert spaces which I described is correct and how a 'Hilbert space of square integrable wave functions' fits into it all, I would be very grateful. | This is a good question, and the answer is rather subtle, and I think a physicist and a mathematician would answer it differently. Mathematically, a Hilbert space is just any complete inner product space (where the word "complete" takes a little bit of work to define rigorously, so I won't bother). But when a physicist talks about " the Hilbert space of a quantum system", they mean a unique space of abstract ket vectors $\{|\psi\rangle\}$ , with no preferred basis. Exactly as you say, you can choose a basis (e.g. the position basis) which uniquely maps every abstract state vector $|\psi\rangle$ to a function $\psi(x)$ , colloquially called "the wave function". (Well, the mapping actually isn't unique, but that's a minor subtlety that's irrelevant to your main question.) The confusing part is that this set of functions $\{\psi(x)\}$ also forms a Hilbert space, in the mathematical sense. (Mumble mumble mumble.) This mathematical Hilbert space is isomorphic to the "physics Hilbert space" $\{|\psi\rangle\}$ , but is conceptually distinct. Indeed, there are an infinite number of different mathematical "functional representation" Hilbert spaces - one for each choice of basis - that are each isomorphic to the unique "physics Hilbert space", which is not a space of functions. When physicists talk about "the Hilbert space of square-integrable wave functions", they mean the Hilbert space of abstract state vectors whose corresponding position-basis wave functions are square integrable. 
That is: $$\mathcal{H} = \left \{ |\psi\rangle\ \middle|\ \int dx\ |\langle x | \psi \rangle|^2 < \infty \right \}.$$ This definition may seem to single out the position basis as special, but actually it doesn't: by Plancherel's theorem, you get the exact same Hilbert space if you consider the square-integrable momentum wave functions instead. So while "the Hilbert space of square integrable wave functions" is a mathematical Hilbert space, you are correct that technically it is not the "physics Hilbert space" of quantum mechanics, as physicists usually conceive of it. I think that in mathematical physics, in order to make things rigorous it's most convenient to consider functional Hilbert spaces instead of abstract ones. So mathematical physicists consider the position-basis functional Hilbert space as the fundamental object, and define everything else in terms of that. But that's not how most physicists do it. | {
"source": [
"https://physics.stackexchange.com/questions/450743",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/218227/"
]
} |
451,267 | If I have two asteroids. One dead still in space and one whizzing by at 10,000mph. What is the difference between the two, physically? If I freeze time and look at the two of them - what differences would they have? How could one tell that one was moving really quickly, and the other not? Is there some sort of Quantum difference with the particles in front or behind the asteroid? Does it have a different gravitational force on spacetime surrounding it? Nothing online seems to suggest this as Earth moves quickly through space but has a simple, aligned bump beneath. I'm just a programmer who is interested in Physics & has been watching too many Leonard Susskind lectures recently. | I'll choose to interpret the question as specifying that one of the objects is stationary with respect to an observer, and the other object is moving with respect to the observer. Then the question goes on to ask if the observer can discern any difference between the two in an instantaneous "snapshot" of the two objects. It's really a good question, and the answer is yes, there is a discernable difference . Suppose that the objects are charged particles with no intrinsic magnetic moments. The observer will see no magnetic field around the "stationary" particle, but will see a magnetic field around the "moving" particle. Even in the case of uncharged particles, there is a difference. An observer who sees a mass moving relative to himself sees an additional field besides the gravitational field of the mass. The additional field is a consequence of general relativity and is analogous to the magnetic field that's observed around a moving charged particle. | {
"source": [
"https://physics.stackexchange.com/questions/451267",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/218470/"
]
} |
451,588 | It is usual in physics that when we have a variable that is very small or very large, we do a power series expansion of a function of that variable and drop the high-order terms. But my question is: why do we usually make the expansion and then approximate? Why don't we just take the limit of the function when that value is very small (tends to zero) or very large (tends to infinity)? | The key reason is that we want to understand the behavior of the system in the neighborhood of the state rather than at the state itself. Take the equation of motion for a simple pendulum, for example: $$\ddot{\theta} = -\frac{g}{\ell}\sin(\theta)$$ If we take the limit where $\theta \rightarrow 0$, we find $\ddot{\theta}= 0$, and we would conclude that the pendulum angle increases or decreases linearly with respect to time. If we instead take a Taylor expansion and truncate at the linear term, we find $\ddot{\theta} = -\frac{g}{\ell}\theta$, which is a simple harmonic oscillator! This expansion shows us that in the neighborhood of $0$, the system returns back to $0$ as if it were a simple harmonic oscillator: completely unlike what we could state in the limit approximation above. In fact, you could consider the limiting behavior around a state to be the zeroth-order component of a local expansion, which holds true straightforwardly for the example above since the limit term contributes no terms to the dynamics of the pendulum (but correctly notes that the angle increases/decreases linearly very close to $0$). | {
"source": [
"https://physics.stackexchange.com/questions/451588",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/205686/"
]
} |
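
A minimal numerical sketch of the contrast drawn above (assuming $g/\ell = 1$ for convenience): integrating the full pendulum for a small initial angle reproduces the harmonic-oscillator linearization, while the naive $\ddot\theta = 0$ limit would predict the angle never changes.

```python
import math

g_over_l = 1.0               # assume g/l = 1
theta, omega = 0.1, 0.0      # small initial angle (rad), released from rest
dt, T = 1e-3, 5.0

for _ in range(int(T / dt)):  # semi-implicit Euler on theta'' = -(g/l) sin(theta)
    omega -= g_over_l * math.sin(theta) * dt
    theta += omega * dt

print(f"full pendulum : theta(5) = {theta:+.4f}")
print(f"linearized SHO: theta(5) = {0.1 * math.cos(math.sqrt(g_over_l) * T):+.4f}")
# The naive limit (theta'' = 0, starting at rest) would give +0.1000 forever.
```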
451,979 | Ever since special relativity we've had this equation that puts time and space on an equal footing: $$ds^2 = -dt^2 + dx^2 + dy^2 + dz^2.$$ But they're obviously not equivalent, because there's a sign difference between space and time. Question: how does a relative sign difference lead to a situation where time only flows forward and never backward? We can move back and forth in space, so why does the negative sign mean we can't move back and forth in time? It sounds like something I should know, yet I don't - the only thing I can see is, $dt$ could be positive or negative (corresponding to forwards and backwards in time), but after being squared that sign difference disappears so nothing changes. Related questions: What grounds the difference between space and time? , What is time, does it flow, and if so what defines its direction? However I'm phrasing this question from a relativity viewpoint, not thermodynamics. | We can move back and forth in space, so why does the negative sign mean we can't move back and forth in time? As illustrated in the answer by user4552 and acknowledged in other answers, that relative sign doesn't by itself determine which is future and which is past. But as the answer by Dale explains, it does mean that we can't "move back and forth in time," assuming that the spacetime is globally hyperbolic (which excludes examples like the one in user4552's answer). A spacetime is called globally hyperbolic if it has a spacelike hypersurface through which every timelike curve passes exactly once (a Cauchy surface ) [1][2]. This ensures that we can choose which half of every light-cone is "future" and which is "past," in a way that is consistent and smooth throughout the spacetime. For an explicit proof that "turning around in time" is impossible, in the special case of ordinary flat spacetime, see the appendix of this post: https://physics.stackexchange.com/a/442841 . References: [1] Pages 39, 44, and 48 in Penrose (1972), "Techniques of Differential Topology in Relativity," Society for Industrial and Applied Mathematics , http://www.kfki.hu/~iracz/sgimp/cikkek/cenzor/Penrose_todtir.pdf [2] Page 4 in Sanchez (2005), "Causal hierarchy of spacetimes, temporal functions and smoothness of Geroch's splitting. A revision," http://arxiv.org/abs/gr-qc/0411143v2 | {
"source": [
"https://physics.stackexchange.com/questions/451979",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/177855/"
]
} |
451,997 | Imagine a sea of electrons which is so tightly packed that the exclusion principle comes into play. Next, I remove 1 electron from this sea; this hole should behave like a particle and is positively charged. So my question is: can it create an interference pattern when passing through a double slit? | | {
"source": [
"https://physics.stackexchange.com/questions/451997",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/75502/"
]
} |
452,237 | Since the tension force is a centrally acting force, the torque on an orbiting ball about that center is zero. But if the rope is cut during the motion, the torque would still remain zero. This would mean that the angular momentum of the orbiting ball should be conserved, but I find everywhere that the ball will move in a straight line tangentially. What is wrong in my reasoning? | The flaw in your reasoning is thinking that straight-line motion at constant velocity does not constitute constant angular momentum about some point, but it actually does. Angular momentum is given by $^*$ $$\mathbf L=\mathbf r\times\mathbf p$$ Without loss of generality, let's assume that after the rope is cut our object is moving along the line $y=1$ in the x-y plane, and we are looking at the angular momentum about the origin. Then our angular momentum must always be perpendicular to the x-y plane, so it will be sufficient to just look at the magnitude of the angular momentum $$L=rp\sin\theta$$ where $\theta$ is the angle between the position vector and the momentum vector (which is the angle between the position vector and the x-axis based on the set up above). Now, since there are no forces acting on our object, $p$ is constant. Also, $r\sin\theta$ is just the constant $y=1$ value given by the line the object is moving along. Therefore, it must be that $L$ is constant. This shows that absence of a net torque (conserved angular momentum) is not enough to uniquely determine the motion. While in circular motion, there is still a net force acting on our object. Without the rope, there is no net force. The motions are different. $^*$ Note that this applies to any type of motion, not just circular motion. | {
"source": [
"https://physics.stackexchange.com/questions/452237",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/214136/"
]
} |
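
A quick numerical sketch of the claim above: for unit mass moving along the line $y=1$ at constant velocity, the angular momentum about the origin comes out the same at every instant.

```python
import numpy as np

v = np.array([1.0, 0.0])                  # constant velocity, unit mass assumed
for t in [-2.0, 0.0, 1.0, 3.0]:
    r = np.array([t, 1.0])                # position on the line y = 1
    L = r[0] * v[1] - r[1] * v[0]         # z-component of r x p (with m = 1)
    print(f"t = {t:+.0f}: L = {L:+.1f}")  # always -1.0: conserved with no force
```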
452,592 | Light waves are a type of electromagnetic wave and they fall between 400-700 nm long. Microwaves are less energetic but seem to be more dangerous than visible light. Is visible light dangerous at all and why not? | Your question contains a premise that is false: Microwaves do not have less energy than visible light per se. They only have less energy per photon , as per the Planck–Einstein relation , $E = hf$ . In other words, you can raise the power of electromagnetic radiation to a dangerous level at any wavelength, if only you generate enough photons – as your microwave oven does. That very much includes visible light. You can easily verify this by waiting for a sunny day, getting out your magnifying glass, and using it to focus sunlight on a piece of paper. Watch it char and maybe even burn. (Make sure there's nothing around that piece of paper that can burn.) In conclusion, then, sunlight is dangerous! | {
"source": [
"https://physics.stackexchange.com/questions/452592",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/176677/"
]
} |
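
A quick sketch of the per-photon comparison above, with assumed representative wavelengths (12.2 cm for a microwave oven, 550 nm for green light): the energy per photon differs hugely, but a kilowatt is a kilowatt either way; the difference is only how many photons per second carry it.

```python
h, c = 6.626e-34, 2.998e8  # Planck constant (J s), speed of light (m/s)

for name, lam in [("microwave, 12.2 cm ", 0.122),
                  ("green light, 550 nm", 550e-9)]:
    E = h * c / lam        # energy per photon, E = h f = h c / lambda
    n_per_kW = 1000.0 / E  # photons per second in a 1 kW beam
    print(f"{name}: {E:.2e} J/photon, {n_per_kW:.2e} photons/s per kW")
```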
453,001 | I've been reading about Arago spots and the Aragoscope . Some fascinating concepts! Basically, due to wave diffraction, there is a bright spot in the centre of the shadow behind any circular object. It strikes me that there are some similarities between a black hole and the disc used for the Arago spot experiments. Black holes (or their event horizons) are spherical, so should cast a circular shadow. Have there been any observations of bright spots in the centre of a black hole? If so, would these be useful for astronomy, like a supersized Aragoscope? I have read that the shadowing object has to be very precisely circular, so I am not sure if a black hole is circular enough. Oblateness caused by rotation might distort the event horizon? There might also be issues with relativity around the fringes of the black hole. I don't have a clear concept of how gravitational lensing and Fresnel diffraction would work together. | A proper analysis of this should treat light as an EM wave propagating in the curved spacetime background of the black hole. The papers (1) "Wave optics and image formation in gravitational lensing," https://arxiv.org/abs/1207.6846 (2) "Viewing Black Holes by Waves," https://arxiv.org/abs/1303.5520 (3) "Wave Optics in Black Hole Spacetimes: Schwarzschild Case," https://arxiv.org/abs/1502.05468 present analyses for the case of a non-rotating (Schwarzschild) black hole. The papers (1) and (2) use numerical techniques (solving the wave equation for the massless scalar field using a finite difference method) and the paper (3) uses analytic techniques. The abstract of (2) says We study scattering of waves by black holes. Solving a massless scalar field with a point source in the Schwarzschild spacetime, waves scattered by the black hole is obtained numerically. We then reconstruct images of the black hole from scattered wave data for specified scattering angles. For the forward and the backward directions, obtained wave optical images of black holes show rings that correspond to the black hole glories associated with existence of the unstable circular photon orbit in the Schwarzschild spacetime. Figure 7 in (2) shows "Images of black holes reconstructed from scattering waves...", and here is an excerpt from that figure: Notice the relatively bright spot in the center. To help interpret the picture, page 12 says: In the geometric optics limit, images of black holes can be obtained by solving null geodesics. For the observer at $\theta_0=0$ , the primary null rays, which are deflected by the black hole but do not go around it, result in the Einstein ring. The secondary and the higher degrees of null rays that go around the black hole many times also form ring images with smaller angular radius compared to the Einstein ring. This is called the glory effect. The words "Arago spot" are not used in the paper, but the words "Airy disk" are used. The papers (2) and (3) both say that an extension of this analysis to rotating (Kerr) black holes is underway, but I don't know if it has been published yet. | {
"source": [
"https://physics.stackexchange.com/questions/453001",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/45944/"
]
} |
453,410 | In inflationary cosmology, primordial quantum fluctuations in the process of inflation are considered responsible for the asymmetry and lumpiness of the universe that was shaped. However, according to the Copenhagen interpretation, any random quantum phenomenon only occurs when the system is observed; before observation, the quantum state is symmetric. So the question is: who observed the universe while it was inflating? Obviously, there was no conscious creature at that time. Actually, this problem is discussed in the paper The Bohmian Approach to the Problems of Cosmological Quantum Fluctuations (Goldstein, Struyve and Tumulka; arXiv:1508.01017), and the proposed solution to the problem is said to be an observer-independent interpretation (the pilot-wave theory). | “Observe” oftentimes causes a lot of confusion for this exact reason. It doesn’t actually refer to some conscious entity making an observation. Rather, think about how we actually make an observation about something. You have to interact with the system in some way. This can be through the exchange of photons, for example. This interaction is what constitutes an observation having taken place. Obviously, particles can undergo their fundamental interactions without a nearby sentient entity. For the sake of analogy, consider measuring air pressure in a tire. In the process of doing so, you let out some air — changing the tire pressure in the process. | {
"source": [
"https://physics.stackexchange.com/questions/453410",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/207927/"
]
} |
453,657 | I recently read an old physics news story about the Higgs boson, where it was observed to decay into 2 photons, and I was wondering why it wouldn't have decayed into a single photon with the combined energy of the 2 photons. | No massive particle can decay into a single photon. In its rest frame, a particle with mass $M$ has momentum $p=0$. If it decayed to a single photon, conservation of energy would require the photon energy to be $E=Mc^2$, while conservation of momentum would require the photon to maintain $p=0$. However, photons obey $E=pc$ (which is the special case of $E^2 = (pc)^2 + (mc^2)^2$ for massless particles). It's not possible to satisfy all these constraints at once. Composite particles may emit single photons, but no massive particle may decay to a photon. | {
"source": [
"https://physics.stackexchange.com/questions/453657",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/75502/"
]
} |
453,669 | What is the number of degrees of freedom of a three-dimensional polyatomic molecule when only one vibrational mode is excited? | | {
"source": [
"https://physics.stackexchange.com/questions/453669",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/132069/"
]
} |
454,068 | Speaking of what I understood, spacetime is three dimensions of space and one of time. Now, if we look at general relativity, spacetime is generally reckoned as a 'fabric'. So my question is whether spacetime is real or just a mathematical construct to understand various things. In addition to this, if it's a mathematical thing, then what does its distortion/bending mean? | TL;DR This is a complicated question and anyone who tells you a definitive answer one way or another is either a philosopher or is trying to sell you something. I justify arguments either way below, and conclude with the AdS/CFT correspondence, in which two theories on two vastly different spacetime manifolds are in fact equivalent physically. First, let’s clear things up: Now, if we look at general relativity, spacetime is generally reckoned as a 'fabric'. No. This is simply how easy-to-digest PBS documentaries and popular science books explain the idea of spacetime. In reality, it is a (pseudo-Riemannian) manifold, meaning that it locally (for small enough observers) looks like the regular flat spacetime that we are used to dealing with in special relativity. The main difference here is that for larger observers, the geometry may start to look a bit foreign/strange when compared to the “flat” case (for instance, one might find a triangle whose angles don’t add up to 180 degrees). These are just geometrical properties of the world in which the observer lives, and it happens that the strange geometrical measurements coincide with areas of concentrated mass/energy. This effect of wonky geometry, coupled with the fact that observers naturally follow the path of least spacetime “distance” (proper time), accounts for what we’re used to calling gravity. In addition to this, if it's a mathematical thing, then what does its distortion/bending mean? Again, this is a PBS documentary image that anyone wishing to actually understand physics needs to abandon. Spacetime doesn’t “warp,” “bend,” “stretch,” “distort,” or any other words popular science books care to tell you. What these terms are really referring to is the geometrical properties of different parts of spacetime being different than in the special spacetime of special relativity. In particular, they refer to the geometrical notion of curvature, which is simply a value measurable by local observers that is zero for flat spacetime and nonzero for others, but has nothing to do with stretching, pulling, distorting, or what-have-you. Finally, let’s get to the meat of the question: Is spacetime real, or is it a mathematical construct? Short answer: Yes to both. Spacetime is, from a mathematical viewpoint, a manifold, which is a set of points equipped with a certain structure (being locally flat). Physically, each point corresponds to an event (a place for something to happen, a time when it happens), and local flatness simply means that small enough observers can find a reference frame in which they would locally feel like they are in flat spacetime (this is Einstein’s equivalence principle). Mathematically, spacetime has a little more structure. It has a metric tensor, which is the fundamental geometric variable in relativity, and physically corresponds to being able to measure distances between nearby “events” and angles between nearby “lines.” These both certainly seem physical. As you can see, each mathematical property of spacetime manifests itself to the observer as a physically measurable property of the world.
In this sense, spacetime is very physical. However, one could argue the other way as well. I really like the way that Terry Gannon puts it in his book “Moonshine Beyond the Monster.” ...we access space-time only indirectly, via the functions (‘quantum fields’) living on it. (Gannon, 117) And this is the sense in which spacetime is just a mathematical tool. We never interact with “spacetime.” What we interact with are the functions whose domains are the abstract manifold we call spacetime when describing them (gravitational fields, electromagnetic fields, etc.), and any measurement of spacetime occurs only indirectly through measurements of these fields. Even something as simple as measuring distance requires a ruler, which can only be read through electromagnetic interaction (light). The truth is, this is a completely and utterly complicated question which we may never know the answer to. Instead, I leave you with this: Holographic theories (AdS/CFT and variations thereupon) suggest that a gravitational system (spacetime + curvature) and a certain non-gravitational quantum theory of fields (spacetime + fields + no curvature) in one dimension fewer have exactly the same physics . That is, no measurement could meaningfully tell you which system you’re in, because, physically, they are the same system. So where did the extra dimension of spacetime come from, and where did all the curvature come from? If spacetime is real , then why can I describe two identical theories on two very different spacetime manifolds? As a final thought: Physics does not aim to find “truth” (so much is the subject of philosophy or metaphysics). Physics seeks only to find models of reality which are useful in predicting the outcomes of experiments or processes. Thus, physics can say nothing of the “reality” of spacetime, so long as there are two different theories on two different spacetimes which give the same result for every possible experiment. | {
"source": [
"https://physics.stackexchange.com/questions/454068",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/211040/"
]
} |
454,292 | The Higgs boson is an excitation of the Higgs field and is very massive and short lived. It also interacts with the Higgs field and thus is able to experience mass. Why does it decay if it is supposed to be an elementary particle according to the standard model? | Most fundamental particles in the standard model decay: muons, tau leptons, the heavy quarks, W and Z bosons. There’s nothing problematic about that, nor about Higgs decays. Your question may come from a misconception about particle decay: that it’s somehow the particle ‘coming apart’ into preexisting constituents. It’s not like that. Decays are transformations into things that weren’t there before. | {
"source": [
"https://physics.stackexchange.com/questions/454292",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/75502/"
]
} |
454,328 | Let's consider I have a particle moving on the $x,y$ plane. On this particle acts the Lorentz force, however it does not necessarily perform a rotation about the axes origin $(x,y)=(0,0)$ . I would like to re-write the Lorentz force in the "global" cylindrical coordinate system where the origin $(x,y)=(0,0)$ is the same as $r=0$ . Of course the particle motion will be described by $ R=r,\phi,z$ and $u=u_r, u_{\phi}, u_z$ in that coordinate system. Should I add an inertial force in this system, such as $m\frac{du}{dt} = F_L + F_{int}$ , in my description? | {
"source": [
"https://physics.stackexchange.com/questions/454328",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/185853/"
]
} |
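For the cylindrical-coordinates question above, a hedged sketch of the standard textbook resolution: as long as the $(r,\phi,z)$ system is fixed in an inertial frame (only the coordinates are curvilinear, the frame itself does not rotate), no inertial force is needed. The curvilinear terms appear on the kinematic side, in the components of the acceleration:

$$
m\left(\ddot r - r\dot\phi^{\,2}\right) = F_{L,r},\qquad
m\left(r\ddot\phi + 2\dot r\dot\phi\right) = F_{L,\phi},\qquad
m\ddot z = F_{L,z}.
$$

Only if one transforms to a frame that co-rotates with the particle do centrifugal and Coriolis contributions migrate to the force side as the $F_{int}$ of the question.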
454,332 | According to Newton's third law of motion that states that every action has an equal and opposite reaction. So, if the Earth exerts a gravitational pull on us (people) then even we should exert a force equal and opposite (in terms of direction) on the Earth. It is intuitive to think that this force is really small to get the Earth to move. But, if we take a look at the second law of motion that states that F = ma we see that however small the force, there will be some amount of acceleration. Therefore, even though we exert a very small gravitational force on the Earth it should be enough to get the Earth to move even though the acceleration is a very small amount. But what I said clearly does not happen. So there must be some flaw with my reasoning. What is that flaw? | The acceleration that your gravitational pull causes in the Earth is tiny, tiny, tiny because the Earth's mass is enormous. If your mass is, say, $70\;\mathrm{kg}$ , then you cause an acceleration of $a\approx 1.1\times 10^{-22}\;\mathrm{m/s^2}$ . A tiny, tiny, tiny acceleration does not necessarily mean a tiny, tiny, tiny speed, since, as you mention in comments, the velocity accumulates. True. It doesn't necessarily mean that - but in this case it does . The speed gained after 1 year at this acceleration is only $v\approx 3.6×10^{-15}\;\mathrm{m/s}$ . And after a lifetime of 100 years it is still only around $v\approx 3.6×10^{-13}\;\mathrm{m/s}$ . If all 7.6 billion people on the planet suddenly lifted off of Earth and stayed hanging in the air on the same side of the planet for 100 years, the planet would reach no more than $v\approx 2.8\times 10^{-3}\;\mathrm{m/s}$ ; that is around 3 millimeters-per-second in this obscure scenario of 100 years and billions of people's masses. Now, with all that being said, note that I had to assume that all those people are not just standing on the ground - they must be levitating above the ground. Because, while levitating (i.e. during free-fall), they only exert the gravitational force $F_g$ : $$\sum F=ma\quad\Leftrightarrow\quad F_g=ma,$$ causing a net acceleration according to Newton's 2nd law. If they were standing on the ground, on the other hand, they apart from their gravitational force also exert a downwards pushing force equal to their weight , $w$ : $$\sum F=ma\quad\Leftrightarrow\quad F_g-w=ma.$$ Then there are two forces exerted on the planet, pushing in opposite directions. And in fact, the weight exactly equals the gravitational force (because those two correspond directly to an action-reaction pair via Newton's 3rd law). So the pressing force exerted on the planet cancels out the gravitational pull in the planet. Then the above formula results in zero acceleration. The forces cancel out and nothing accelerates any further. In general, no system can ever accelerate solely via it's own internal forces. If we consider the Earth-and-people as one system, then their gravitational forces on each other are internal. Each part of the system may move individually - the Earth can move towards the people and the free-falling people can move towards the Earth. But the system as a whole - defined by the centre-of-mass of the system - will not move anywhere. So, the Earth can move a tiny, tiny, tiny bit towards you while you move a much larger distance towards the Earth during your free-fall so the combined centre-of-mass is still stationary. 
But when standing on the ground, nothing can move because that would require you to break through the ground and move inwards into the Earth. If the Earth was moving but you weren't, then the centre of mass would be moving (accelerating) and that is simply impossible both because of the momentum conservation law as well as the energy conservation law. The system would be gaining kinetic energy without any external energy input; creating free energy out of thin air is just not possible. So this never happens. | {
"source": [
"https://physics.stackexchange.com/questions/454332",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/203222/"
]
} |
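The arithmetic in the answer above is easy to check. Here is a short Python sketch reproducing its numbers; the $70$ kg body mass and the $7.6$ billion population are the answer's own assumptions:

```python
# Acceleration of the Earth caused by a person's gravitational pull.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # kg
R_earth = 6.371e6    # m
m_person = 70.0      # kg, assumed body mass

# One levitating (free-falling) person at the surface
a = G * m_person / R_earth**2
print(f"a = {a:.2e} m/s^2")                          # ~1.1e-22 m/s^2

year = 3.156e7                                        # seconds in a year
print(f"v after 1 year:    {a*year:.1e} m/s")         # ~3.6e-15 m/s
print(f"v after 100 years: {a*100*year:.1e} m/s")     # ~3.6e-13 m/s

# All 7.6 billion people hovering on the same side for 100 years
a_all = a * 7.6e9
print(f"v (everyone, 100 yr): {a_all*100*year:.1e} m/s")  # ~2.8e-3 m/s
```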
454,554 | The BBC News article Cern plans even larger hadron collider for physics search says: The difficulty with Cern's proposals for a larger Large Hadron Collider is that no one knows what energies will be needed to crash large hadrons together to discover the enigmatic, super particles that hold the keys to the new realm of particles. Cern hopes that its step-by-step proposal, first using electron-positron and then electron-large hadron collisions will enable its physicists to look for the ripples created by the super particles and so enable them to determine the energies that will be needed to find the super particles. Do hadrons fall nicely into the two categories large and small ? Is the way that the term large hadron is used in the article how particle scientists generally discuss experiments? I'm (creatively) imagining the following sentence "We're not going to be able to do the experiment with these small hadrons, we're going to have to use the large ones." updates: As of 24-Jan-2018 it still hasn't been fixed. When/if it is in the near future, I'll make a note if it here to be fair to the BBC. As of 30-Jan-2018 the first occurrence has been corrected but the second has not... | The article is either a joke or a gross misinterpretation of the name “Large Hadron Collider”. The name refers to the physical size of the device. It is a “large” hadron collider, not a “large hadron” collider. There is no categorization of hadrons into “large” and “small”. | {
"source": [
"https://physics.stackexchange.com/questions/454554",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/83380/"
]
} |
455,526 | I know that a supernova can mess up the heliosphere of nearby stars, but I'm wondering if it could physically push neighboring stars off their trajectories. It's fun to imagine all the stars surrounding a supernova being propelled outward and tumbling out of the galactic arm! I would expect that a really close star, such as a partner in a binary pair, would get really messed up. I'm thinking more about the neighbors a few light-years away. I realize that a supernova involves both the initial EM burst and the mass ejection which arrives later. I'm open to the effects of any of these things. | Consider a star of mass $M$ and radius $R$ at a distance $r$ from the supernova. For a back-of-the-envelope estimate, consider how much momentum would be transferred to the star by the supernova. From that, we can estimate the star's change in velocity and decide whether or not it would be significant. First, for extra fun, here's a review of how a typical core-collapse supernova works [1]: Nuclear matter is highly incompressible. Hence once the central part of the core reaches nuclear density there is powerful resistance to further compression. That resistance is the primary source of the shock waves that turn a stellar collapse into a spectacular explosion. ... When the center of the core reaches nuclear density, it is brought to rest with a jolt. This gives rise to sound waves that propagate back through the medium of the core, rather like the vibrations in the handle of a hammer when it strikes an anvil. .. The compressibility of nuclear matter is low but not zero, and so momentum carries the collapse beyond the point of equilibrium, compressing the central core to a density even higher than that of an atomic nucleus. ... Most computer simulations suggest the highest density attained is some 50 percent greater than the equilibrium density of a nucleus. ...the sphere of nuclear matter bounces back, like a rubber ball that has been compressed. That "bounce" is allegedly what creates the explosion. According to [2], Core colapse liberates $\sim 3\times 10^{53}$ erg ... of gravitational binding energy of the neutron star, 99% of which is radiated in neutrinos over tens of seconds. The supernova mechanism must revive the stalled shock and convert $\sim 1$ % of the available energy into the energy of the explosion, which must happen within less than $\sim 0.5$ - $1$ s of core bounce in order to produce a typical core-collapse supernova explosion... According to [3], one "erg" is $10^{-7}$ Joules. To give the idea the best possible chance of working, suppose that all of the $E=10^{53}\text{ ergs }= 10^{46}\text{ Joules}$ of energy goes into the kinetic energy of the expanding shell. The momentum $p$ is maximized by assuming that the expanding shell is massless (because $p=\sqrt{(E/c)^2-(mc)^2}$ ), and while we're at it let's suppose that the collision of the shell with the star is perfectly elastic in order to maximize the effect on the motion of the star. Now suppose that the radius of the star is $R=7\times 10^8$ meters (like the sun) and has mass $M=2\times 10^{30}$ kg (like the sun), and suppose that its distance from the supernova is $r=3\times 10^{16}$ meters (about 3 light-years). If the total energy in the outgoing supernova shell is $E$ , then fraction intercepted by the star is the area of the star's disk ( $\pi R^2$ ) divided by the area of the outgoing spherical shell ( $4\pi r^2$ ). So the intercepted energy $E'$ is $$
E'=\frac{\pi R^2}{4\pi r^2}E\approx 10^{-16}E.
$$ Using $E=10^{46}$ Joules gives $$
E'\approx 10^{30}\text{ Joules}.
$$ That's a lot of energy, but is it enough? Using $c\approx 3\times 10^8$ m/s for the speed of light, the corresponding momentum is $p=E'/c\approx 3\times 10^{21}$ kg $\cdot$ m/s. Optimistically assuming an elastic collision that completely reverses the direction of that part of the shell's momentum (optimistically ignoring conservation of energy), the change in the star's momentum will be twice that much. Since the star has a mass of $M=2\times 10^{30}$ kg, its change in velocity (using a non-relativistic approximation, which is plenty good enough in this case) is $2p/M\approx 3\times 10^{-9}$ meters per second, which is about $10$ centimeters per year . That's probably not enough to eject the star from the galaxy. Sorry. References: [1] Page 43 in Bethe and Brown (1985), "How a Supernova Explodes," Scientific American 252 : 40-48, http://www.cenbg.in2p3.fr/heberge/EcoleJoliotCurie/coursannee/transparents/SN%20-%20Bethe%20e%20Brown.pdf [2] Ott $et al$ (2011), "New Aspects and Boundary Conditions of Core-Collapse Supernova Theory," http://arxiv.org/abs/1111.6282 [3] Table 9 on page 128 in The International System of Units (SI), 8th edition , International Bureau of Weights and Measures (BIPM), http://www.bipm.org/utils/common/pdf/si_brochure_8_en.pdf | {
"source": [
"https://physics.stackexchange.com/questions/455526",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/134671/"
]
} |
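The back-of-the-envelope chain above is straightforward to reproduce. A Python sketch with the same optimistic assumptions (all $10^{53}$ erg in a massless shell, an elastic "bounce" doubling the momentum transfer):

```python
import math

E = 1e46     # J  (10^53 erg of supernova energy)
R = 7e8      # m, stellar radius (sun-like)
M = 2e30     # kg, stellar mass (sun-like)
r = 3e16     # m, ~3 light-years from the supernova
c = 3e8      # m/s

frac = (math.pi * R**2) / (4 * math.pi * r**2)   # intercepted fraction
E_int = frac * E                                  # energy hitting the star
p = E_int / c                                     # momentum of that radiation
dv = 2 * p / M                                    # velocity kick (elastic bounce)

print(f"fraction intercepted: {frac:.1e}")        # ~1e-16
print(f"E' = {E_int:.1e} J, dv = {dv:.1e} m/s")   # ~1e30 J, a few times 1e-9 m/s
print(f"~{dv * 3.156e7 * 100:.0f} cm per year")   # order 10 cm per year
```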
455,531 | I am studying loop quantum gravity using the book by Pullin and Gambini . I am having some trouble understanding and getting past the chapter on Yang Mills theory, mainly because I am confused about some of the notation and concepts. I am hoping someone can shed some light on this for me. First of all, the Yang Mills covariant derivative is defined and written as: $$
D_{\mu} \equiv \partial_{\mu} - ig/2\sigma^iA^i_{\mu}
$$ I understand that the superscripted indices, $i$ , on $\sigma^i$ and $A^i_{\mu}$ are internal indices of the theory that run from 1 to 3. But is the $i$ in front of the coupling parameter, $g$ , meant to indicate this internal index as well, or is this the imaginary unit? I am guessing the second, but the notation is a little ambiguous to me so I would like to have some clarification. Secondly, I don't understand the relation between the commutator of the covariant derivative with itself and the field tensor. In the book the following is written: $$
\left[D_{\mu},D_{\nu}\right] = -ig/2F^i_{\mu\nu}\sigma^i
$$ Where I understand that $F^i_{\mu\nu}$ is the field tensor of the theory. I have a few questions about this. Firstly: if I write out the commutator explicitly, using the definition of the covariant derivative from above, I get: $$\begin{align}
\left[D_{\mu},D_{\nu}\right] = D_{\mu}D_{\nu} - D_{\nu}D_{\mu} &= (\partial_{\mu} - ig/2\sigma^iA^i_{\mu})(\partial_{\nu} - ig/2\sigma^iA^i_{\nu}) - (\partial_{\nu} - ig/2\sigma^iA^i_{\nu})(\partial_{\mu} - ig/2\sigma^iA^i_{\mu}) \\
&=\partial_{\mu}\partial_{\nu} - \partial_{\mu}ig/2\sigma^iA^i_{\nu} - ig/2\sigma^iA^i_{\mu}\partial_{\nu}+i^2g^2/4\sigma^iA^i_{\mu}\sigma^iA^i_{\nu} \\
&- \partial_{\nu}\partial_{\mu}+\partial_{\nu}ig/2\sigma^iA^i_{\mu}+ig/2\sigma^iA^i_{\nu}\partial_{\mu}-i^2g^2/4\sigma^iA^i_{\nu}\sigma^iA^i_{\mu}
\end{align}$$ Now, I understand that the first and fifth terms on the right cancel, because the partial derivatives, $\partial_{\mu}$ and $\partial_{\nu}$ , commute. But I don't understand how to manipulate the remaining terms to get to a something of the form $-ig/2F^i_{\mu\nu}\sigma^i$ . Can someone please show me the full derivation for this? Then the book goes on to say that if one indeed works out the commutator explicitly, one gets that the field tensor is given by $$
F^i_{\mu\nu} = \partial_{\mu}A^i_{\nu}-\partial_{\nu}A^i_{\mu}+g\epsilon^{ijk}A^j_{\mu}A^k_{\nu},
$$ where, I assume, $\epsilon^{ijk}$ are the Levi-Civita symbols, right? Maybe the full derivation of the commutator will already explain this too, but how does one get from the commutator to this last equation? Finally, and this is a purely conceptual question. Why is it that the commutator of the covariant derivatives yields the field tensor? Is this just a definition in gauge theory? Any help would be greatly appreciated! | {
"source": [
"https://physics.stackexchange.com/questions/455531",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/220429/"
]
} |
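Two quick clarifications for the Yang-Mills question above, sketched in standard textbook notation (a reconstruction, not the book's own text). First, the $i$ multiplying $g/2$ is the imaginary unit; only the superscripts on $\sigma^i$ and $A^i_\mu$ are internal indices, and in the quadratic terms the two summed internal labels must be kept independent, e.g. $\sigma^i A^i_\mu\,\sigma^j A^j_\nu$. With independent labels, the single-derivative pieces in which $\partial$ passes through to act further right cancel between the two orderings, leaving

$$
[D_\mu, D_\nu] = -\frac{ig}{2}\left(\partial_\mu A^i_\nu - \partial_\nu A^i_\mu\right)\sigma^i - \frac{g^2}{4}\,A^i_\mu A^j_\nu\,[\sigma^i,\sigma^j].
$$

Using the Pauli-matrix algebra $[\sigma^i,\sigma^j] = 2i\,\epsilon^{ijk}\sigma^k$ (with $\epsilon^{ijk}$ indeed the Levi-Civita symbol) and relabelling the summed indices gives

$$
[D_\mu, D_\nu] = -\frac{ig}{2}\left(\partial_\mu A^i_\nu - \partial_\nu A^i_\mu + g\,\epsilon^{ijk} A^j_\mu A^k_\nu\right)\sigma^i = -\frac{ig}{2}F^i_{\mu\nu}\sigma^i .
$$

Conceptually, $[D_\mu, D_\nu]$ measures the failure of parallel transport around an infinitesimal loop to bring a field back to itself, i.e. the curvature of the gauge connection; that this curvature is the field strength is a derived fact of gauge theory, not an extra definition.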
455,791 | So the backstory is that I purchased a reusable drinking straw that is copper coloured, but is advertised to be stainless steel. That got me thinking about whether I could be sure it was one or the other without having access to a laboratory. I saw this answer that mentions that the conductivity of steel is much lower, but I don't think my voltmeter could really measure something this small. The other effect I know of would be the Hall effect (nice because it's a tube, so it's easy to demonstrate), but I wasn't able to find what the predicted behaviour is for a steel tube. My question is then: are there any at-home/readily available ways to differentiate copper vs. stainless steel. | Take advantage of the large difference in thermal conductivity between copper and stainless steel ( approximately $400$ and $16$ $\mathrm{Wm^{-1}K^{-1}}$ respectively). If you put one end of a metal rod into contact with something held at a constant high or low temperature $T_C$ , you would expect the other end to asymptotically approach that temperature like: $$
T(t) \sim T_C + A e^{-\lambda t}
$$ where $$
\lambda = \frac{k}{\rho c_p L^2}
$$ where $k$ is the thermal conductivity, $\rho$ is the mass density, $c_p$ is the specific heat capacity and $L$ is the length of the metal rod. Assuming the straw is approximately $0.1$ m long, you should get $\lambda$ values of approximately $0.011$ $\mathrm{s^{-1}}$ for copper and approximately $0.0004$ $\mathrm{s^{-1}}$ for stainless steel. A simple experiment would consist of putting one end of your straw into contact with a container of ice water, or maybe a pot of boiling water, while you carefully measure and record the temperature at the other end as a function of time. (If the straw is longer than $0.1$ m, immerse the extra length of the straw into the water.) The best thing would be if you had some kind of digital thermometer that allows you to log data, but it could probably also be done with an analog thermometer, a clock, and a notebook. After taking the measurements it should be relatively easy to determine whether the temperature difference decreases with a half-life of one minute or half an hour. There are many potential error sources in a simple experiment like this, but since the difference between copper and stainless steel is more than an order of magnitude, it should be relatively easy to tell them apart despite these errors. The experiment could also be carried out for other rods that are known to be made of copper or stainless steel (or of some other metal), to validate that the experiment gives approximately the expected result for them. | {
"source": [
"https://physics.stackexchange.com/questions/455791",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/220558/"
]
} |
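A small Python sketch of the lumped estimate above. The material property values are typical handbook numbers, assumed here for illustration (the original answer quotes only the conductivities):

```python
import math

props = {
    # name: (k [W/m/K], rho [kg/m^3], c_p [J/kg/K])
    "copper":          (400.0, 8960.0, 385.0),
    "stainless steel": ( 16.0, 8000.0, 500.0),
}
L = 0.1  # m, assumed straw length

for name, (k, rho, cp) in props.items():
    lam = k / (rho * cp * L**2)       # decay rate lambda = k / (rho c_p L^2)
    half_life = math.log(2) / lam     # time for the temperature gap to halve
    print(f"{name:15s}: lambda = {lam:.4f} 1/s, half-life ~ {half_life/60:.1f} min")
```

Running this gives roughly a one-minute half-life for copper and about half an hour for stainless steel, matching the answer's claim that the two cases are easy to tell apart.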
455,800 | Assumption: The bodies are moving on the same axis, along their center of mass, and the bodies are perfectly spherical. There is no loss of kinetic energy after collision. m2 >>> m1
v2 = -v1 Will the bodies stick together and move with the velocity v2, or will the lighter body move with a velocity v >>> v2? | {
"source": [
"https://physics.stackexchange.com/questions/455800",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/220572/"
]
} |
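For the head-on collision above, a standard one-dimensional sketch (textbook formulas, stated here under the question's own assumptions): with no loss of kinetic energy the bodies cannot stick together, since sticking is the perfectly inelastic limit. The elastic result is

$$
v_1' = \frac{(m_1-m_2)\,v_1 + 2m_2 v_2}{m_1+m_2} \;\xrightarrow{\;m_2\gg m_1\;}\; 2v_2 - v_1 ,
$$

so with $v_2 = -v_1$ the light body rebounds at $v_1' = -3v_1$, three times its approach speed (the familiar tennis-ball-on-basketball result), while the heavy body's velocity is essentially unchanged. The light body therefore ends up faster than $v_2$, but only by a factor of $3$, not $v \ggg v_2$.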
455,869 | In the physics literature, you can often find the term "two-ion crystal", when talking about two ions that are confined in a e.g. Paul trap. How is this possible? Shouldn't a crystal be a structure which repeats in space multiple (>2) times? Otherwise, what are the necessary requirements to define something as a crystal? EDIT: one of the first ≈5k results found by Googling "two-ion crystal" https://arxiv.org/abs/1202.2730 | Coulomb crystals are the structures formed by ions in a trap when they are sufficiently cold: once they stop jiggling around, they come down to equilibrium positions which need to balance the need to get down to the center of the trap, where the trapping potential is at its minimum, with the mutual repulsion between the ions. This usually results in an orderly stacking of the ions, often with very clear local symmetries in a bunch of places. Here's one example, formed in an elongated ion trap (with experiment on the left and a simulation on the right; the lines are blurry because the whole thing is rigidly rotating about its vertical axis): Image source Within an ion-trapping context, the phrase "two-ion crystal" is a perfectly natural phrase to use for the case where you have coulomb-crystal dynamics, with a trapping potential and a Coulomb repulsion balancing out to give the equilibrium positions, and you have $N=2$ ions in the structure. If the phrase doesn't make sense to you, then that's just an indication that you're not within that text's intended audience. Now, is the word "crystal" being used correctly here? The real answer is that it doesn't matter , at all: this is unambiguous notation, and lack of ambiguity is the single requirement that we make of notation. | {
"source": [
"https://physics.stackexchange.com/questions/455869",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/160397/"
]
} |
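The equilibrium described above (trap potential balancing Coulomb repulsion) can be illustrated with a minimal two-ion example. The sketch below assumes a one-dimensional harmonic trap with a $1$ MHz axial frequency and Ca$^+$ ions; these are illustrative choices, not values from the original answer:

```python
import numpy as np
from scipy.optimize import minimize

e = 1.602e-19               # C, elementary charge
kC = 8.988e9                # N m^2 C^-2, Coulomb constant
m = 40 * 1.661e-27          # kg, a Ca+ ion (assumed)
omega = 2 * np.pi * 1e6     # rad/s, assumed 1 MHz axial trap frequency

def energy_scaled(x_um):
    """Total potential energy; positions in micrometres, energy in units of 1e-22 J."""
    x = np.asarray(x_um) * 1e-6
    trap = 0.5 * m * omega**2 * np.sum(x**2)      # harmonic trap
    coulomb = kC * e**2 / abs(x[1] - x[0])        # mutual repulsion
    return (trap + coulomb) / 1e-22               # rescaled to O(1) for the optimizer

res = minimize(energy_scaled, x0=[-1.0, 1.0], method="Nelder-Mead")
d_num = abs(res.x[1] - res.x[0])                          # micrometres
d_ana = (2 * kC * e**2 / (m * omega**2)) ** (1 / 3) * 1e6 # analytic spacing
print(f"numeric spacing  ~ {d_num:.2f} um")
print(f"analytic spacing ~ {d_ana:.2f} um")               # both ~5.6 um
```

The few-micrometre spacing is typical of real trapped-ion "crystals", and the same minimization idea generalizes to the many-ion structures shown in the image.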
456,808 | I've done a bit of research, and have learned that computers "solve" the three-body-problem by using "Numerical methods for ordinary differential equations", but I can't really find anything about it other then Wikipedia. Does anyone have any good sources about this topic that isn't any kind of Wikipedia? My thoughts: Currently I'm using simulations of three bodies flying around each other, using Newton's gravitational law, and at a random time in the simulation, everything goes chaotic. I though that this was the only way to kind of "solve" it, but how does this "Numerical methods for ordinary differential equations" method work? And what does the computer actually do? | Numerical analysis is used to calculate approximations to things: the value of a function at a certain point, where a root of an equation is, or the solutions to a set of differential equations. It is a huge and important topic since in practice most real problems in mathematics, science and technology will not have an explicit closed-form solution (and even if they have, it might not be possible to compute with infinite precision - computers after all represent numbers with finite precision). In general there are trade-offs between accuracy and computational speed. For the three-body problem we have three point masses in starting locations $\mathbf{x}_i(0)$ with velocities $\mathbf{v}_i(0)$ that we want to calculate for later times $t$ . Mathematically we want to find the solution to the system $$\mathbf{x}'_i(t)=\mathbf{v}_i(t),$$ $$\mathbf{v}'_i(t)=\mathbf{f}_i(t)/m_i,$$ $$\mathbf{f}_i(t)=Gm_i \sum_{j\neq i} \frac{m_j(\mathbf{x}_j-\mathbf{x}_i)}{||\mathbf{x}_i-\mathbf{x}_j||^3}.$$ The obvious method is to think "if we move forward a tiny step $h$ in time, we can approximate everything to be linear", so we make a formula where we calculate the state at time $t+h$ from the state at time $t$ (and so on for $t+2h$ and onwards): $$\mathbf{x}_i(t+h)=\mathbf{x}_i(t)+h\mathbf{v}_i(t),$$ $$\mathbf{v}_i(t+h)=\mathbf{v}_i(t)+h\mathbf{f}_i(t).$$ This is called Euler's method . It is simple but tends to be inaccurate; the error per step is $\approx O(h^2)$ and they tend to build up. If you try it for a two body problem it will make the orbiting masses perform a precessing rosette orbit because of the error build-up, especially when they get close to each other. There is a menagerie of methods for solving ODEs numerically. One can use higher order methods that sample the functions in more points and hence approximate them better. There are implicit methods that instead of trying to find a state at a later time only based on the current state look for a self-consistent late and intermediate state. Most serious methods for solving ODEs will also reduce the step-size $h$ when the forces grow big during close encounters to ensure that accuracy remains acceptable. As I said, this is a big topic. However, for mechanical simulations you may in particular want to look at methods designed to preserve energy and other conserved quantities ( symplectic methods - these are the ones used by professionals for long-run orbit calculations). Perhaps the simplest is the semi-implicit Euler method . There is also the Verlet method and leapfrog integration . I like the semi-implicit Euler method because it is super-simple (but being a first order-method it is still not terribly accurate): $$\mathbf{v}_i(t+h)=\mathbf{v}_i(t)+h\mathbf{f}_i(t),$$ $$\mathbf{x}_i(t+h)=\mathbf{x}_i(t)+h\mathbf{v}_i(t+h).$$ Do you see the difference? 
You calculate the updated velocity first, and then use it to update the positions - a tiny trick, but suddenly 2-body orbits are well behaved. The three body problem is chaotic in a true mathematical sense. We know there are situations where tiny differences in initial conditions get scaled up to arbitrarily large differences in later positions (even if we rule out super-close passes between masses). So even with an arbitrarily fine numerical precision there will be a point in time where our calculated orbits will be totally wrong. The overall "style" of trajectory may still be correct, which is why it is OK to play around with semi-implicit Euler as long as one is not planning any space mission based on the results. | {
"source": [
"https://physics.stackexchange.com/questions/456808",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/220705/"
]
} |
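The semi-implicit Euler update quoted above is short enough to demonstrate directly. A Python sketch in arbitrary units with $G=1$ (the masses and initial conditions are illustrative choices for a near-circular two-body orbit):

```python
import numpy as np

G = 1.0
m = np.array([1.0, 0.001])                  # heavy "star", light "planet"
x = np.array([[0.0, 0.0], [1.0, 0.0]])      # positions
v = np.array([[0.0, 0.0], [0.0, 1.0]])      # planet on a near-circular orbit

def forces(x):
    f = np.zeros_like(x)
    for i in range(len(m)):
        for j in range(len(m)):
            if i != j:
                d = x[j] - x[i]
                f[i] += G * m[i] * m[j] * d / np.linalg.norm(d)**3
    return f

h = 1e-3
for step in range(int(2 * np.pi / h)):      # roughly one orbital period
    v += h * forces(x) / m[:, None]         # update velocities first...
    x += h * v                              # ...then positions with the NEW velocities

print("planet position after ~one period:", x[1])   # roughly back near (1, 0)
```

Swapping the two update lines (positions first, with the old velocities) turns this into explicit Euler, and the orbit visibly spirals outward, which is exactly the energy drift the answer describes.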
457,686 | I only have high school physics knowledge, but here is my understanding: Fusion: 2 atoms come together to form a new atom. This process releases the energy keeping them apart, and is very energetic. Like the sun! Fission: Something fast (like an electron) smashes into an atom breaking it apart. Somehow this also releases energy. Less energy than fusion, and it's like a nuclear reactor. Now my understanding is that the lowest energy state is when everything is tightly stuck together (as per fusion), and it costs energy to break them apart. So why do both fusion and fission release energy? | In general, both fusion and fission may either require or release energy. Purely classical model: Nucleons are bound together with the strong (and some weak) nuclear force.
The nuclear binding is very short range; this means that we can think of nucleons as "sticking" together due to this force.
Additionally the protons repel due to their electric charge. As geometry means that a nucleon has only a limited number of other nucleons it can "stick" to, the attractive force per nucleon is more or less fixed. The repulsive electric field is long range. That means that as the nucleus grows, the repulsion grows, so that eventually that repulsion exceeds the attractive effect and one cannot grow the nucleus further. Hence a limited number of possible elements. Effectively this means the attractive force per nucleon increases rapidly for a small number of nucleons, then tops out and starts to fall. Equivalently, the binding energy per nucleon behaves similarly. As @cuckoo noted, iron and nickel have the most tightly bound nuclei; iron-56 having lowest mass per nucleon and nickel-62 having most binding energy. This image (from Wikipedia) illustrates the curve in the typically presented manner: However, I prefer to think of binding energy as negative and therefore better visualize iron as being the lowest energy state: For lighter elements: Fission requires energy Fusion releases energy For heavier elements, the opposite is true. The reason we mainly observe the release energy cases is because: It is easier to do It is more "useful" | {
"source": [
"https://physics.stackexchange.com/questions/457686",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/221343/"
]
} |
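The binding-energy-per-nucleon curve in the answer above can be roughly reproduced with the semi-empirical (Bethe-Weizsäcker) mass formula. The coefficients below are typical fitted values in MeV, assumed here for illustration:

```python
a_V, a_S, a_C, a_A = 15.8, 18.3, 0.714, 23.2   # volume, surface, Coulomb, asymmetry

def B_per_A(A, Z):
    """Binding energy per nucleon in MeV (pairing term omitted for simplicity)."""
    B = (a_V * A - a_S * A**(2/3)
         - a_C * Z * (Z - 1) / A**(1/3)
         - a_A * (A - 2*Z)**2 / A)
    return B / A

for A, Z, name in [(16, 8, "O-16"), (56, 26, "Fe-56"),
                   (120, 50, "Sn-120"), (238, 92, "U-238")]:
    print(f"{name:7s} B/A = {B_per_A(A, Z):5.2f} MeV")
# Output peaks near iron (~8.7 MeV) and falls toward uranium (~7.6 MeV),
# which is why fusion pays for light nuclei and fission pays for heavy ones.
# (The formula is crude for the lightest nuclei, where shell effects dominate.)
```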
458,045 | I understand that significant figures is a term used for "reliably known digits". However, what I don't understand is why the 0's are not counted among these in numbers such as 0.002. Surely, if we know that the units digit is 0, and that the tenths digit is 0, and that the hundredths digit is 0, then we know these digits reliably? In other words, we know that the units digit is not 1 or 2 or 3, but 0. Thus, we know this digit reliably. Why then is it not counted as a significant figure? Why do all physics textbooks say that 0.002 only has 1 significant figure? The "related" question is different from the one I am asking. The one there is asking about 1500 whereas my one is about 0.002, ie when the zeros come to the left of the number. | One of the logical rules for significant figures is that expressing a given number in a different order of magnitude should not make you sound like you know more or less about the number. If you start with $0.002$ , we can only say that it's equal to $2\times 10^{-3}$ , since you probably already appreciate the implications of adding zeros to the left of a decimal place. Regarding the claim, we know that the units digit is not 1 or 2 or 3 Yes, but those are extremely trivial bits of knowledge. Try saying " $002$ has three significant figures". It's obvious that there's no other constant in those places, because then we'd be dealing with a completely different number; you wouldn't call it "two". Significant figures are only a relevant thing to consider when you're debating between options which can be rounded to the same value, within reason. | {
"source": [
"https://physics.stackexchange.com/questions/458045",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/160255/"
]
} |
458,587 | According to QM, we know that The act of measurement forces a particle to acquire a definite (up to
experimental errors) position, so in a particle collider, like the one in CERN, by which means do we force particles to acquire a definite position, so that they "collide"? My guts says the answer first point out that we are not actually colliding anything, but rather we are forcing the probability distribution of two particles, say protons, to overlap, and at the end they "somehow" acquire a position, hence "collide", but, this is just an educated guess. | The answer is basically the one you've suggested. When we collide particles in e.g. the LHC we are not colliding point particles. We are colliding two wavefunctions that look like semi-localised plane waves. The collision would look something like: So classically the two particles would miss each other, but in reality their positions are delocalised so there is some overlap even though the centres (i.e. the average positions) of the two particles miss each other. I've drawn a green squiggle to vaguely indicate some interaction between the two particles, but you shouldn't take this too literally. What actually happens is that both particles are described as states of a quantum field. When the particles are far from each other they are approximately Fock states i.e. plane waves. However when the particles approach each other they become entangled and now the state of the quantum field cannot simply be separated into states of two particles. In fact we don't have a precise description of the state of the field when the particles are interacting strongly - we have to approximate the interaction using perturbation theory, which is where those Feynmann diagrams come in. So to summarise: we should replace the verb collide with interact , and the interaction occurs because the two particles overlap even when their centres are separated. We calculate that interaction using quantum field theory, and the interaction strength will depend on the distance of closest approach. The OP asks in a comment: So, that interaction causes two particles to "blow up", and disintegrate into its more elementary particles? I mentioned above that the particles are a state of the quantum field and that when far apart that state is separable into the two Fock states that describe the two particles. When the particles are close enough to interact strongly the state of the field cannot be separated into separate particle states. Instead we have some complicated state that we cannot describe exactly. This intermediate state evolves with time, and depending on the energy it can evolve in different ways. It could for example just evolve back into the two original particles and those two particles head off with the same total energy. But if the energy is high enough the intermediate state could evolve into states with different numbers of particles, and this is exactly how particles get create in colliders. We can't say what will happen, but we can calculate the probabilities for all the possible outcomes using quantum field theory. The key point is that the intermediate state does not simply correspond to a definite number of specific particles. It is a state of the field not a state of particles. | {
"source": [
"https://physics.stackexchange.com/questions/458587",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/99217/"
]
} |
458,778 | I was very interested in gravitational wave detectors and how they work exactly. From what I know they set up two lasers with mirrors set up to cancel each other out. Then if movement is detected from gravitational wave changes then light from the laser can pass through. So can't vibrations of the earth or some other medium of energy be affecting this like changes in air pressure, earth plate movement, etc.? | Summary Yes they can. False positives arising from the acoustic sources you name are ruled out by seismological analysis and the examination of correlation between the separate gravitational wave detection stations. Pretty obviously, the LIGO detectors are probably the most sensitive microphones ever built. So how do we know they are detecting gravitational waves? Part of the answer is exquisitely sophisticated vibration isolation engineering. The kind of issues that are addressed by this vibration are explained in the Einstein Telescope design document: M. Abernathy (& about 30 others), "Einstein gravitational wave Telescope conceptual design study", Published by the Einstein Telescope collaboration comprising the institutions listed in Table 1 of this document, 2011 in chapter 4 "Suspension Systems" and Appendix C. However, it is impossible to get rid of all the effects of vibration. So a major part of the experimental design is the fact that there are two LIGO detector stations almost as far apart as one can make them in the United States: one in Louisiana in the South East and one in Washington State in the North West. Therefore, we look for signals that are present in both interferometers at once. If we accept the relativistic conclusion that no signal can travel faster than $c$ , then no the effect on or near to the ground (mining, traffic on roads, and all the human made clatter that we make as a species) that influences one interferometer can possibly influence the other in the time it takes to glean a reading, because those influences travel much, much slower than lightspeed. This "correlation test" rules most false positives arising from effects other than gravitational waves out. There are then only two possible sources for correlated signals arising at both detector stations with a delay less than 100 milliseconds: (1) acoustic signals arising from a common source within the Earth on the bisecting plane midway between the two stations or (2) from outer space. Careful and thorough seismometry continuously monitors all seismic waves that arrive at the detector stations. This is enough to rule false positives from source (1) out. The only other possible source of a signal common to both stations is therefore a disturbance from outer space. When we see such a signal that is absent from the seismometers, we know it is something that modulates the interferometer path lengths simultaneously (or near enough thereto that all other Earthly effects are ruled out) that is coming from outer space. There are no other known natural effects that can easily explain such correlated detections. Furthermore, the spectacular gravitational wave event GW170817 was the simultaneous observation of a gravitational wave event by both the LIGO detectors in the United States and the Virgo detector in Italy as well as a gamma ray burst observation within 1.7 seconds by the Fermi telescope. Given that gamma ray bursts are detected by Fermi about once every few days and are observed to arrive as a Poisson process ( i.e. 
they are equally likely to arrive at all times and the statistics conditioned on any event are exactly the same as the unconditioned statistics), the probability of a gamma ray burst within 1.7 seconds of the gravitational wave detection by pure co-incidence is of the order of $10^{-5}$ . So GW170817 was a spectacular corroboration of the hypothesis that our gravitational wave detectors are indeed detecting gravitational waves. It is almost certain that the gamma rays and whatever it was that "shook" the LIGO and Virgo detectors in GW170817 was the same source. | {
"source": [
"https://physics.stackexchange.com/questions/458778",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/221816/"
]
} |
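The order-of-magnitude coincidence estimate quoted above is a one-liner to check. A Python sketch assuming a representative burst rate of one every two days (the answer says only "once every few days"):

```python
import math

rate = 1 / (2 * 24 * 3600)      # bursts per second, ~one every two days (assumed)
window = 2 * 1.7                # s, +/- 1.7 s around the gravitational-wave time
p = 1 - math.exp(-rate * window)  # Poisson probability of >=1 burst in the window
print(f"chance-coincidence probability ~ {p:.1e}")   # order 1e-5
```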
458,788 | My prof. told me that using differential forms the Proca equation reduces to solving a scalar field equation. How is that? I can’t see how one relates it to the scalar equation using differential forms. Proca equation: $$\mathcal{L} = \frac{-1}{16}F^{\mu \nu}F_{\mu \nu} + \frac{1}{8\pi}m^2A_\mu A^\mu.$$ Equation of motion for Proca: $$\partial_\mu F^{\mu \nu} + m^2 A^\nu = 0.$$ | {
"source": [
"https://physics.stackexchange.com/questions/458788",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
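For the Proca question above, here is a compact sketch of the standard reduction (textbook steps, not tied to any particular source). Taking the divergence of the equation of motion,

$$
\partial_\nu\left(\partial_\mu F^{\mu\nu} + m^2 A^\nu\right) = 0 \quad\Rightarrow\quad m^2\,\partial_\nu A^\nu = 0 ,
$$

since $\partial_\nu\partial_\mu F^{\mu\nu} = 0$ by the antisymmetry of $F^{\mu\nu}$. For $m \neq 0$ the Lorenz condition $\partial_\nu A^\nu = 0$ is therefore automatic rather than a gauge choice, and inserting it into $\partial_\mu F^{\mu\nu} = \Box A^\nu - \partial^\nu\left(\partial_\mu A^\mu\right)$ gives

$$
\left(\Box + m^2\right) A^\nu = 0 ,
$$

i.e. each component of the massive vector field separately obeys the Klein-Gordon (scalar field) equation, subject to the constraint $\partial_\nu A^\nu = 0$. In the language of differential forms the same two steps read $\mathrm{d}{\star}F = \pm m^2\,{\star}A$ (sign depending on conventions) together with $\mathrm{d}^2 = 0$, which forces $\mathrm{d}{\star}A = 0$.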
458,828 | I recently answered a question on the WorldBuilding forum about grenades and bullets. One of the things that came up was that I argued smokeless powder in a rifle round could detonate, but was challenged on that. Commenters said that smokeless powder only deflagrates during normal use. This, however, leaves me with a question. How can we accelerate a bullet to supersonic speeds using only a sonic speed pressure wave? As the bullet approaches the speed of sound, shouldn't the pressure wave be pushing the bullet less effectively? It strikes me that a bullet traveling at the speed of sound should not be able to be pushed by a pressure wave at the speed of sound. How does this work? | The speed of sound increases with increasing pressure. Assuming ideal behaviour the relationship is: $$ v = \sqrt{\gamma\frac{P}{\rho}} $$ or equivalently: $$ v = \sqrt{\frac{\gamma RT}{M}} $$ where $M$ is the molar mass. In a gun barrel just after the charge has gone off the gas is under very high pressure and very hot, so the speed of sound is much higher than under ambient conditions. | {
"source": [
"https://physics.stackexchange.com/questions/458828",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/47472/"
]
} |
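The second formula above makes clear that, at fixed composition, it is really the temperature that sets the sound speed. A quick Python sketch; the propellant-gas values ($T \sim 2500$ K, mean molar mass $\sim 0.025$ kg/mol, $\gamma \sim 1.25$) are illustrative assumptions, not measured data:

```python
import math

R = 8.314  # J/(mol K), gas constant

def v_sound(gamma, T, M):
    """Ideal-gas speed of sound, v = sqrt(gamma R T / M)."""
    return math.sqrt(gamma * R * T / M)

print(f"air, 293 K:         {v_sound(1.4, 293.0, 0.029):.0f} m/s")    # ~343 m/s
print(f"propellant, 2500 K: {v_sound(1.25, 2500.0, 0.025):.0f} m/s")  # ~1000 m/s
```

So the pressure wave inside the barrel can comfortably outrun a bullet that is supersonic with respect to the ambient air outside.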
459,592 | For example, nucleons in a nucleus are in motion with kinetic energies of 10 MeV. Their rest energies are about 1000 MeV. The kinetic energy of the nucleons is small compared to their rest energy. They are hence considered non-relativistic. | When we say a particle is non-relativistic we mean the Lorentz factor $\gamma$ is close to one, where $\gamma$ is given by: $$ \gamma = \frac{1}{\sqrt{1 - v^2/c^2}} $$ So saying $\gamma$ is close to one means that the velocity $v$ must be much less than $c$ . With a bit of algebra we can show that the kinetic energy of a particle is given by: $$ T = (\gamma - 1)mc^2 $$ And the rest mass energy is the usual $mc^2$ , so if we take the ratio of the kinetic energy to the rest mass energy we get: $$ \frac{T}{E} = \frac{(\gamma - 1)mc^2}{mc^2} = \gamma - 1 $$ And if this ratio is small that means $\gamma \approx 1$ , which was our original criterion for non-relativistic behaviour. | {
"source": [
"https://physics.stackexchange.com/questions/459592",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/208833/"
]
} |
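Plugging the question's own numbers into the ratio above takes a few lines of Python (using $mc^2 \approx 938$ MeV for a nucleon):

```python
import math

T = 10.0      # MeV, nucleon kinetic energy from the question
E0 = 938.0    # MeV, nucleon rest energy
gamma = 1 + T / E0
beta = math.sqrt(1 - 1 / gamma**2)
print(f"T/E0 = {T/E0:.3f}, gamma = {gamma:.4f}, v/c = {beta:.3f}")
# gamma ~ 1.011, v ~ 0.14 c: at most mildly relativistic, so treating
# nucleons non-relativistically is a reasonable first approximation.
```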
460,125 | I have read multiple explanations of escape velocity, including that on Wikipedia , and I don't understand it. If I launch a rocket from the surface of the Earth towards the sun with just enough force to overcome gravity, then the rocket will slowly move away from the Earth and we see this during conventional rocket launches. Let's imagine I then use slightly excessive force until the rocket reaches 50 miles per hour and then I cut back thrust to just counterbalance the force of gravity. Then my rocket will continue moving at 50 mph toward the sun. I don't see any reason why I can't just continue running the rocket at the same velocity and keep pointing it towards the sun. The rocket will never orbit earth (by "orbit" I mean go around it). It will just go towards the sun at 50 mph until it eventually reaches the sun. There seems to be no need whatsoever to ever go escape velocity (25,000 mph). Answer : just to clarify the answers from below and the other linked question... Escape velocity is not necessary to leave the earth, unless the object has no thrust or other means of propulsion. In other words, if you throw a baseball, it has to go escape velocity to leave the earth, but if you have a spaceship with engines, then you leave the earth at any speed you want as long as you have the fuel necessary. | Escape velocity is the velocity an object needs to escape the gravitational influence of a body if it is in free fall , i.e. no force other than gravity acts on it. Your rocket is not in free fall since it is using its thruster to maintain a constant velocity so the notion of "escape velocity" does not apply to it. | {
"source": [
"https://physics.stackexchange.com/questions/460125",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/57925/"
]
} |
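To tie a number to the question's 25,000 mph figure: for an unpowered object in free fall from the Earth's surface, $v_{esc} = \sqrt{2GM/r}$. A quick Python check with standard values for the Earth's mass and radius:

```python
import math

G = 6.674e-11   # m^3 kg^-1 s^-2
M = 5.972e24    # kg, Earth
r = 6.371e6     # m, Earth's radius

v_esc = math.sqrt(2 * G * M / r)
print(f"v_esc = {v_esc:.0f} m/s = {v_esc * 2.23694:.0f} mph")  # ~11,186 m/s ~ 25,000 mph
```

As the answer notes, this threshold simply does not apply to a continuously thrusting rocket, which can leave at any speed it can sustain.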
460,855 | Let's say we are doing the double slit experiment with electrons. We get an interference pattern, and if we put detectors at slits, then we get two piles pattern because we measure electrons' positions when going through slits. But an electron interacts with other particles in a lot of different ways, e.g. electric field, gravity. Seems like the whole universe is receiving information about the electron's position. Why is it not the case and the electron goes through slits "unmeasured"? Bonus question: in real experiments do we face the problem of not "shielding" particles from "measurement" good enough and thus getting a mix of both patterns on the screen? | Seems like the whole universe is receiving information about the electron's position. Yes, the influence that an electron exerts on the rest of the universe does depend on the location of the electron, but that's not enough to constitute a measurement of the electron's location. We need to consider the degree to which the electron's influence on the rest of the universe depends on its location. Consider something analogous to but simpler than a double slit experiment: consider an electron in deep space, in a superposition of two different locations $A$ and $B$ . Even in deep space, the electron is not alone, because space is filled with cosmic microwave background (CMB) radiation. CMB radiation has a typical wavelength of about $1$ millimeter. When CMB radiation is scattered by an electron, the resulting state of the radiation depends on the electron's location, but the key question is how much it depends on the electron's location. If the locations $A$ and $B$ differ from each other by $\gg 1$ millimeter, then the CMB radiation will measure the electron's location very effectively, because an electron in location $A$ will have a very different effect on the CMB radiation than an electron in location $B$ would have. But if locations $A$ and $B$ differ from each other by $\ll 1$ millimeter, then an electron in location $A$ will not have a very different effect on the CMB radiation than an electron in location $B$ would have. Sure, the electron has a significant effect on the CMB radiation regardless of its location, but that key is whether the effect differs significantly when the location is $A$ versus $B$ . The CMB radiation measures the electron's location, but it does so with limited resolution . Widely-spaced locations will be measured very effectively, but closely-separated locations will not. For this to really make sense, words are not enough. We need to consider the math. So here's a version that includes a smidgen of math. Let $|a\rangle$ denote the state of the universe (including the electron) that would result if the electron's location were $A$ , and let $|b\rangle$ denote the state of the universe that would result if the electron's location were $B$ . If the electron started in some superposition of locations $A$ and $B$ , then the resulting state of the universe will be something like $|a\rangle+|b\rangle$ . Whether or not the electon's location is effectively measured, these two terms will be essentially orthogonal to each other, $\langle a|b\rangle\approx 0$ , simply because they differ significantly in the location of the electron itself. So the fact that the final state is $|a\rangle+|b\rangle$ with $\langle a|b\rangle\approx 0$ doesn't tell us anything about whether or not the electron's location was actually measured . 
For that, we need a principle like this: The electron's location has been effectively measured if and only if the states $|a\rangle$ and $|b\rangle$ are such that $\langle a|\hat O|b\rangle\approx 0$ for all feasibly-measurable future observables $\hat O$ . (Quantifying " $\approx 0$ " requires some care, but I won't go into those details here.) For an operator $\hat O$ to be "feasibly measurable", it must be sufficiently simple, which loosely means that it does not require determining too many details over too large a region of space. This is a fuzzy definition, of course, as is the definition of measurement itself, but this fuzziness doesn't cause any problems in practice. (The fact that it doesn't cause any problems in practice is frustrating, because this makes the measurement process itself very difficult to study experimentally!) In the example described above, the suggested condition is satisfied if locations $A$ and $B$ differ by $\gg 1$ millimeter, because after enough CMB radiation has been scattered by the electron, the states $|a\rangle$ and $|b\rangle$ differ significantly from each other everywhere , and no operator $\hat O$ that is simple enough to represent a feasibly-measurable observable can possibly un-do the orthogonality of the states $|a\rangle$ and $|b\rangle$ . Loosely speaking, the state $|a\rangle$ and $|b\rangle$ aren't just orthogonal; they're prolifically orthogonal, in a way that can't be un-done by any simple operator. In contrast, if locations $A$ and $B$ differ by $\ll 1$ millimeter, then we can choose an operator $\hat O$ that acts just on the electron (and is therefore relatively simple) to obtain $\hat O|a\rangle\approx |b\rangle$ , thus violating the condition $\langle a|\hat O|b\rangle\approx 0$ . So in this case, the electron's location has not been effectively measured at all. The states $|a\rangle$ and $|b\rangle$ are orthogonal simply because they differ in the location of the electron itself, but they are not prolifically orthogonal because the effect on the rest of the universe doesn't depend significantly on whether the electron's location was $A$ versus $B$ . What I'm doing here is describing "decoherence" in a different way than it is usually described. The way I'm describing it here doesn't rely on any factorization of the Hilbert space into the "system of interest" and "everything else." The way I'm describing it here (after quantifying some of my loose statements more carefully) can be applied more generally. It doesn't solve the infamous measurement problem (which has to do with the impossibility of deriving Born's rule within quantum theory), but it does allow us to determine how effectively a given observable has been measured. Some quantitative calculations — including quantitative results for the specific example I used here — are described in Tegmark's paper "Apparent wave function collapse caused by scattering" ( https://arxiv.org/abs/gr-qc/9310032 ), which is briefly reviewed in https://physics.stackexchange.com/a/442464 . Those calculations use the more traditional description of decoherence, but the results are equally applicable to the way I described things here. | {
"source": [
"https://physics.stackexchange.com/questions/460855",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/222770/"
]
} |
461,605 | Must a star belong to a galaxy, or could it be completely isolated? In case it can be isolated (not belong to a galaxy), could it have a planet orbiting around it? | No, stars do not need to be inside a galaxy. It is estimated that about 10% of stars do not belong to a galaxy [ 1 ]. While most intergalactic stars formed inside a galaxy and were ejected by gravitational interactions, stars can form outside of galaxies as well [ 2 ]. We assume that such stars could have planets, just like stars in a galaxy, although no specific examples have been detected yet. [1] "Detection of intergalactic red-giant-branch stars in the Virgo cluster" , Ferguson et al. Nature 391.6666 (1998): 461. [2] "Polychromatic view of intergalactic star formation in NGC 5291" ,
M. Boquien et al. A&A, 467 1 (2007) 93-106. | {
"source": [
"https://physics.stackexchange.com/questions/461605",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/223117/"
]
} |
461,672 | I found this question in my physics textbook: From a certain height a cat is dropped back-side down. The cat rotates his body while falling and lands on his four legs. Does the cat's angular momentum change during the fall? The answer is no, but I said yes, because I thought the gravitational force will change the angular momentum? Am I missing something? | To change angular momentum, a torque must be applied. Since gravity pulls every part of the cat with a force proportional to its mass (that is, with the same acceleration), there is no net torque on the free falling cat, and thus no change in angular momentum. (About the centre of mass, the net gravitational torque is $\sum_i \mathbf{r}_i \times m_i \mathbf{g} = \left(\sum_i m_i \mathbf{r}_i\right) \times \mathbf{g} = \mathbf{0}$ , since $\sum_i m_i \mathbf{r}_i = \mathbf{0}$ there.) This is true for any free falling object, but not necessarily if it is supported at any point. The support together with the gravitational force can apply a torque and therefore change angular momentum. As to how the cat manages to turn around even with no net torque, this is known as the Falling cat problem , and is visualized nicely in this very disturbing animation from Wikipedia. The rotation is based on the fact that the cat is not a rigid body, and can thus bend in a way that results in its reorientation. | {
"source": [
"https://physics.stackexchange.com/questions/461672",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/208374/"
]
} |
461,677 | In my (university) particle physics course, I am asked to find the values of $r$ for which the function $P(r)$ of a $2s$ Hydrogen electron has its maximum values.
Here, $r$ denotes the distance in Bohr radii, and $P(r)$ denotes the radial function for a hydrogen-like ion. In the case of a $2s$ hydrogen electron, we have $$P(r)=2\sqrt{Z/a}\,\frac{Zr}{a}\,e^{-Zr/a}.$$ $Z=1$ in the case of hydrogen, and $a$ stands for the Bohr radius.
My question is, how do I determine the value of $r$ for the maximum values of $P(r)$ ? I know there must be 1 maximum. Do I just differentiate the function with respect to $r$ ?
I feel like it's a rather simple problem; however, I'm stuck on where to start. Any help would be really appreciated! | | {
"source": [
"https://physics.stackexchange.com/questions/461677",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/174521/"
]
} |
461,695 | In general relativity we see that lengths around a massive object are elongated, because the mass changes the geometry of spacetime and stretches the distances within it.
Is there any experiment to prove this?
Or is it still only theoretical? | | {
"source": [
"https://physics.stackexchange.com/questions/461695",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/210956/"
]
} |
461,833 | Based on the Lorentz factor $\gamma = \frac{1}{\sqrt {1-\frac{v^2}{c^2}}}$ it is easy to see $v < c$ , since otherwise $\gamma$ would be either undefined or a complex number, which is non-physical. Also, as far as I understand, this equation was known before Einstein's postulates were published. My question is: why didn't Lorentz himself conclude that no object can go faster than the speed of light? Or maybe he did, I do not know. I feel I am missing some context here. | If I had to sum up my findings in a sound bite it would be this: Einstein was the first to derive the Lorentz transformation laws based on physical principles --namely that the speed of light is constant and the principle of relativity. The fact that Lorentz and Poincaré were not able to do this naturally leads to why they were not able to justify making any fundamental statements about the nature of space and time--namely that nothing can go faster than light. This is seen by a careful reading of the Einstein (1905) – Special relativity section of the History of Lorentz Transformations Wikipedia article: On June 30, 1905 (published September 1905) Einstein published what is now called special relativity and gave a new derivation of the transformation, which was based only on the principle of relativity and the principle of the constancy of the speed of light. [Emphasis mine] Furthermore, it is stated that (idem) While Lorentz considered "local time" to be a mathematical stipulation device for explaining the Michelson-Morley experiment, Einstein showed that the coordinates given by the Lorentz transformation were in fact the inertial coordinates of relatively moving frames of reference. My reading of this seems to indicate that, at the time of publishing, Lorentz considered the notion of "local time" (via his transformations) to be just a convenient theoretical device, but didn't seem to have a justifiable reason for why it should be physically true. It looks obvious in hindsight, I know, but model building is tough. So the reason, in short, seems (to me) to be this: As far as Lorentz saw it, he was able to "explain" the Michelson-Morley experiment in a way not unlike the way that Ptolemy could explain the orbits with epicycles. Did it work? Yes, but its mechanism lacked physical motivation. That is, he didn't have a physical reason for such a transformation to arise. Rather it was Einstein who showed that these transformation laws could be derived from a single, physical assumption--the constancy of the speed of light. This insight was the genius of Einstein. Picking up at the end of the last blockquote, we further have that (idem) For quantities of first order in v/c, this was also done by Poincaré in 1900; while Einstein derived the complete transformation by this method. Unlike Lorentz and Poincaré who still distinguished between real time in the aether and apparent time for moving observers, Einstein showed that the transformations concern the nature of space and time. This implies that Lorentz and Poincaré were able to derive the Lorentz transformations to first order in $\beta$ , but since they believed that the Aether existed they failed to make the fundamental connection to space, time and the constancy of the speed of light. The failure to make this connection means that there would have been no justifiable reason to take it physically seriously.
So, to Lorentz and Poincaré the Lorentz transformation laws would remain ad-hoc mathematical devices to explain the Michelson-Morley experiment within the context of the Aether, not saying anything fundamental about space and time. This failure to draw any fundamental conclusions about the nature of spacetime rules out, by implication, any statements such as that no moving object can surpass the speed of light. Edit : @VladimirKalitvianski has pointed me to this source , which provides the opinions of historians on the matter. Poincaré's work in the development of special relativity is well recognised, though most historians stress that despite many similarities with Einstein's work, the two had very different research agendas and interpretations of the work. Poincaré developed a similar physical interpretation of local time and noticed the connection to signal velocity, but contrary to Einstein he continued to use the Aether in his papers and argued that clocks at rest in the Aether show the "true" time, and moving clocks show the local time. So Poincaré tried to keep the relativity principle in accordance with classical concepts, while Einstein developed a mathematically equivalent kinematics based on the new physical concepts of the relativity of space and time. Indeed this resource is useful, as it adds an additional dimension as to why Lorentz didn't publish any claims about a maximum signal velocity. It reads rather clearly, so I won't bother summarizing it.
"source": [
"https://physics.stackexchange.com/questions/461833",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/222579/"
]
} |
461,971 | When you go to the sauna you may sit in a room at 90°C+. If it is a "commercial" sauna it will be on for the whole day. How is it that when you sit on the wood you don't get burned? I believe this question is different from the "classical" one concerning the "feeling" of heat, which may be explained with a low heat transfer. After a much shorter time other objects seem much "hotter", and the heat transfer is not different (as it's still a room filled with the same air). My guess would be that the reason is the heat capacity but I cannot really explain it. In my understanding a capacity is the ability to store something (heat, charge, ...). Why should an object be cooler if it can store less heat? Also, cannot this be ignored in this case, as the wood is exposed to the temperature for a very long time? | First of all, I hope you sit on a towel. But even when you touch wood with your bare skin, you don't get burned. This indeed has to do with thermal conductance. The point is not the heat transfer between the wood and your skin, but rather the heat flowing within the wood. When you touch the surface, your skin and the wood at the very surface equalize their temperature. But because it's only a thin film of wood at the surface, not much heat is transferred. This relatively small amount of heat is quickly transported away from the skin into the body by the high thermal conductance of the human body (many processes play a role here, including blood flow carrying heat away). To further heat up your skin, heat from deeper down in the wood needs to get to the surface, so it can be transferred to your skin. This is the process that is slow whenever a material has low heat conductance, like wood, and allows the skin to transport energy away quicker than it can come from the bulk to the surface, so you don't get burned. Compare this to touching metal, where the heat stored deep in the bulk of the material can rush to the surface rather quickly, if something cool is touching the surface. Much more heat is transferred and you will burn your hand. The low heat capacity of a wooden bench certainly also plays a role, simply because if there's little heat stored in the material, it has less energy to heat up your skin with. | {
"source": [
"https://physics.stackexchange.com/questions/461971",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/99071/"
]
} |
462,089 | Why do we interpret the accelerated expansion of the universe as proof of the existence of dark energy? The accelerated expansion only tells us that the Einstein field equation must contain a cosmological constant, but I can put the constant on either side of the equation. Usually we put it on the right-hand side and interpret it as an additional contribution to energy, namely dark energy. But if I put it on the left-hand side instead, I would have a theory of gravity that explains both the "small scale" success of general relativity and the large scale expansion of the universe, without needing an element like dark energy. So why do we keep saying that there is this dark energy and trying to identify it as the vacuum energy, while we could more elegantly (in my opinion) use a theory of gravity with a cosmological constant and explain all observations? | The accelerated expansion of the universe is not direct evidence for dark energy, i.e. a perfect fluid contribution to the stress-energy tensor with $w = -1$ . Dark energy is just by far the simplest thing that fits the data well. It's simple to cosmologists because they are used to dealing with matter in the form of perfect fluids, and dark energy is just another one. And it's simple to particle physicists because it can be sourced by a constant term in the Lagrangian, the simplest possible term. At the classical level, the distinction you're making is not really important. A cosmological constant, which you call a "modification of gravity", amounts to adding a constant term to the Lagrangian. But the standard description of dark energy also amounts to adding a constant term to the Lagrangian. They're the exact same thing -- a constant is a constant, it doesn't come with a little tag saying if it's "from" gravity or something else. Neither is more elegant because functionally all the equations come out the same. It's a philosophical difference, not a real difference. But the situation changes dramatically when you account for quantum effects. That's because we know that QFT generically produces vacuum energy, i.e. sources a constant term in the Lagrangian, whether or not we put that term in classically. So even if you do explain the accelerated expansion by some other mechanism, you have to explain why this one isn't in effect. This is a difficult argument to make, because the contribution from QFT is already too big even if you only trust it up to tiny energy scales like $1 \text{ eV}$ ! Of course, there is room to work on alternative theories, such as quintessence and phantom energy; these are functionally different because they correspond to a perfect fluid with $w \neq -1$ . At present, observational constraints show that the acceleration can only be explained by one additional perfect fluid if that component has $w = -1$ to within about 20% accuracy. If you wish, you can think of these theories as a modification of gravity by just moving their contributions to the other side of the Einstein field equation. My impression is that these theories simply ignore the QFT vacuum energy without explanation. That's the real elephant in the room, and probably the reason so few physicists work on explaining the accelerated expansion. We have an automatic mechanism to explain the expansion, and that mechanism appears to be $10^{120}$ times more powerful than it should be.
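To put a rough number on that mismatch, here is a quick order-of-magnitude sketch (my own illustration, taking a Planck-scale cutoff for the naive QFT vacuum energy):

```python
import math

c    = 3.0e8      # speed of light, m/s
G    = 6.674e-11  # Newton's constant, m^3 kg^-1 s^-2
hbar = 1.055e-34  # reduced Planck constant, J s
H0   = 2.27e-18   # Hubble constant (~70 km/s/Mpc), in 1/s

rho_planck = c**5 / (hbar * G**2)           # Planck density, ~5e96 kg/m^3
rho_crit   = 3 * H0**2 / (8 * math.pi * G)  # critical density, ~9e-27 kg/m^3
rho_lambda = 0.7 * rho_crit                 # observed dark-energy density

print(f"mismatch ~ 10^{math.log10(rho_planck / rho_lambda):.0f}")  # ~10^123
```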
It seems premature to start to consider additional mechanisms before understanding this one first, and there seems to be no possible understanding of it now except via the much-hated anthropic principle. The ultimate explanation of the expansion will be a job for physicists of another millennium. | {
"source": [
"https://physics.stackexchange.com/questions/462089",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/121554/"
]
} |
462,245 | The defining property of SHM (simple harmonic motion) is that the force experienced at any value of displacement from the mean position is directly proportional to it and is directed towards the mean position, i.e. $F=-kx$ . From this, $$m\left(\frac{d^2x}{dt^2}\right) +kx=0.$$ Then I read from this site: Let us interpret this equation. The second derivative of a function of x plus the function itself (times a constant) is equal to zero. Thus the second derivative of our function must have the same form as the function itself. What readily comes to mind is the sine and cosine function. How can we assume so plainly that it should be sin or cosine only? They do satisfy the equation, but why are they brought into the picture so directly? What I want to ask is: why can the SHM displacement, velocity etc. be expressed in terms of sin and cosine? I know the "SHM is the projection of uniform circular motion" proof, but an algebraic proof would be appreciated. | This follows from the uniqueness theorem for solutions of ordinary differential equations , which states that for a homogeneous linear ordinary differential equation of order $n$ , there are at most $n$ linearly independent solutions. The upshot of that is that if you have a second-order ODE (like, say, the one for the harmonic oscillator) and you can construct, through whatever means you can come up with, two linearly-independent solutions, then you're guaranteed that any solution of the equation will be a linear combination of your two solutions. Thus, it doesn't matter at all how it is that you come to the proposal of $\sin(\omega t)$ and $\cos(\omega t)$ as prospective solutions: all you need to do is verify that they are solutions, i.e. just plug them into the derivatives and see if the result is identically zero; and check that they're linearly independent. Once you do that, the details of how you built your solutions become completely irrelevant. Because of this, I (and many others) generally refer to this as the Method of Divine Inspiration: I can just tell you that the solution came to me in a dream, handed over by a flying mass of spaghetti, and $-$ no matter how contrived or elaborate the solution looks $-$ if it passes the two criteria above, the fact that it is the solution is bulletproof, and no further explanation of how it was built is required. If this framework is unclear or unfamiliar, then you should sit down with an introductory textbook on differential equations. There's a substantial bit of background that makes this sort of thing clearer, and which simply doesn't fit within this site's format. | {
"source": [
"https://physics.stackexchange.com/questions/462245",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/205249/"
]
} |
462,266 | Hey to whomever is reading this! I'm currently trying to solve a problem given to the class in a hydrodynamics course. I have two main questions. The following describes the problem: We are considering a parallel disk viscometer (type of rheometer with plate-plate geometry) with disk radius $R = 10mm$ and disk spacing $H = 0.1mm$ (z-axis). An incompressible Newtonian fluid is placed between the disks. The top disk is rotating at a constant angular speed $\Omega = 10 rad/s$ , generating a velocity gradient where the velocity at $z=0$ is $u = 0$ and at $z=H$ is $u = R\Omega$ . We consider the fluid in question to be silicone oil with a kinematic viscosity of $\nu = 8.3\ \cdot 10^{-5} m^2/s$ . We want to calculate the time scale $\tau$ over which the motion induced by the upper plate will be transmitted to the bottom plate (time necessary to generate steady flow) based on the given parameters. In another subquestion of this problem (later stage of the exercise) we are asked to derive the expression for the velocity $u_\theta (r,z)$ . Based on the result of that question, namely $u_\theta (r,z)=\frac{\Omega}{H}zr$ , I can calculate the time scale as the reciprocal of the shear strain rate $\dot{\gamma} = \frac{du_\theta}{dz}$ giving me a $t = 10^{-3} s$ . 1st question: How can I calculate (estimate?!) this time scale based on the parameters given (radius, disk spacing, angular velocity, kinematic viscosity), without knowing the expression for $u_\theta$ ? I just found an answer purely by looking at the units and trying to get seconds. So I calculated this: $t = \frac{\Omega R^2 H^2}{\nu^2} = 1.4\ \cdot 10^{-3} s$ , which is very close to the value I get from the reciprocal of the shear strain rate. However, we are supposed to explain our calculations and I do not know how to justify what I did in a physical sense. 2nd question: If I want to calculate the Reynolds number for my problem, what do I choose as my characteristic length? I thought of using the form $Re = \frac{\Omega L^2}{\nu}$ . As the characteristic length, I calculated the geometric mean of the disk spacing and two times the radius. Is this correct? I would not know what else to use as the characteristic length for this kind of geometry. Any help is greatly appreciated! Thanks :) |
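One standard way to justify an estimate here: the motion imposed at the top plate reaches the bottom plate by viscous diffusion of momentum across the gap, so the time scale is $\tau \sim H^2/\nu$ , independent of $\Omega$ and $R$ . A quick numerical sketch (my own illustration):

```python
H  = 0.1e-3  # disk spacing, m
nu = 8.3e-5  # kinematic viscosity of the silicone oil, m^2/s

tau = H**2 / nu  # momentum-diffusion time across the gap
print(f"tau ~ {tau:.1e} s")  # ~1.2e-4 s
```

For the Reynolds number, one common narrow-gap choice is $Re = \Omega R H/\nu \approx 0.12$ , built from the plate speed $\Omega R$ and the gap $H$ , though conventions for this geometry vary.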
| {
"source": [
"https://physics.stackexchange.com/questions/462266",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/223516/"
]
} |
463,293 | There are skyscrapers sitting and pushing on the ground with tremendous weight. Is it possible to convert this weight/force to harness energy to power the building? Maybe build the building on top of some type of pendulum that will rotate under the pressure, and when one cycle of rotation reaches the equilibrium point we could give it a kick from the stored energy to continue rotation. Was something like this created or tested and found useless? Note: maybe my question should be, is it possible to convert the potential energy of a building into kinetic? | In classical mechanics, absolute values of potential energy are meaningless. In your case of a skyscraper just sitting there, we could say it has a large positive amount of potential energy, no potential energy, or even negative potential energy. It doesn't matter at all. What is important is a change in potential energy. is it possible to convert the potential energy of a building into a kinetic? Based on what is said above, you would need to decrease the potential energy of the building and find a way to harness that change in potential energy. The issue is that for gravity, the potential energy just depends on the distance from the Earth, so this would mean that you would have to move the building (or at least parts of the building) closer to the Earth. The utility of buildings is typically that they remain stationary so people can use them consistently and for a long time, so I don't see this being feasible. To see how gravitational potential energy can be converted to other types of energy in other systems, see some of the other posted answers. | {
"source": [
"https://physics.stackexchange.com/questions/463293",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/53057/"
]
} |
463,327 | Throughout my life, I have always been taught that gravity is a simple force, however now I struggle to see that being strictly true. Hence I wanted to ask what modern theoretical physics suggests about this: is gravity the exchange of the theoretical particle graviton or rather a 'bend' in space due to the presence of matter? I don't need a concrete answer, but rather which side the modern physics and research is leaning to. | Both. General relativity describes gravity as curvature of spacetime, and general relativity is an extremely successful theory. Its correct predictions about gravitational waves, as verified directly by LIGO, are especially severe tests. Gravity also has to be quantum-mechanical, because all the other forces of nature are quantum-mechanical, and when you try to couple a classical (i.e., non-quantum-mechanical) system to a quantum-mechanical one, it doesn't work. See Carlip and Adelman for a discussion of this. So we know that gravity has to be described both as curvature of spacetime and as the exchange of gravitons. That's not inherently a contradiction. We do similar things with the other forces. We just haven't been able to make it work for gravity. Carlip, "Is Quantum Gravity Necessary?," http://arxiv.org/abs/0803.3456 Adelman, "The Necessity of Quantizing Gravity," http://arxiv.org/abs/1510.07195 | {
"source": [
"https://physics.stackexchange.com/questions/463327",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/223905/"
]
} |
464,379 | I'm struggling to understand the relativity of simultaneity and position. If my conception and birth are separated by time but not space, a frame of reference in which my birth and conception are simultaneous should exist right? If another observer moves in the opposite direction, will he see my birth before my conception? | Suppose we take the spacetime point of your conception as the origin, $(t=0, x=0)$ , then the spacetime point for your birth would be $(t=T, x=uT)$ . The time $T$ is approximately $9$ months, and we are writing the spatial position of your birth as $x=uT$ where $u$ is a velocity. The velocity $u$ can be any value from zero (i.e. born in the same spot as conception) up to $c$ (because your mother can't move faster than light). Now we'll use the Lorentz transformations to find out how these events appear for an observer moving at a speed $v$ relative to you. The transformations are: $$ t' = \gamma \left( t - \frac{vx}{c^2} \right ) $$ $$ x' = \gamma \left( x - vt \right) $$ though actually we'll only be using the first equation as we're only interested in the time. Putting $(0,0)$ into the equation for $t'$ gives us $t'=0$ so the clocks of the observer and your mother both read zero at the moment of your conception. Now feeding the position of your birth $(T,uT)$ into the equation for $t'$ we get: $$ t' = \gamma \left( T - \frac{vuT}{c^2} \right ) $$ For you to be born before you were conceived we need $t'\lt 0$ and that gives us: $$ T \lt \frac{vuT}{c^2} $$ or: $$ vu \gt c^2 $$ We know that the observer's velocity $v$ cannot be greater than $c$ , and your mother's velocity $u$ cannot be greater than $c$ , so this inequality can never be satisfied. That is, there is no frame in which you were born before you were conceived. The rule is that two events that are timelike separated, i.e. their separation in space is less than their separation in time times $c$ , can never change order. All observers will agree on which event was first. For the order to change the events have to be spacelike separated. In this case this would mean $uT \gt cT$ i.e. your mother would have to have moved at a speed $u$ faster than light between your conception and birth. | {
"source": [
"https://physics.stackexchange.com/questions/464379",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/206192/"
]
} |
464,451 | It is said that atoms with the same number of electrons as protons are electrically neutral, so they have no net charge or net electric field. A particle with charge cannot exist at the same position and time as another; an electron cannot be positioned at the location of a proton, at any single point in time, without displacing the proton. Assuming the above is correct, how can a single electron cancel out the entire electric field of a proton? I don't think there is any position a single electron can take, that would result in the entire electric field of the proton being cancelled out - it seems like it will always be only partially cancelled out. For simplicity, let's look at a single hydrogen atom that we consider to be electrically neutral. It has one proton and one electron, so at any single point in time, there will be a partial net electric field (because the electron will never be in a position where its field can completely cancel out the proton's field), and the electric field from the electron will only cancel out part of the field from the proton. So at this single point in time, there will be a net electric field from the proton. So how can this atom be considered to be electrically neutral, with no net charge or field? Here is a graphical representation of two sources of electric fields interacting: As you can see from the image, only part of the (equal but opposite) electric fields produced by both sources are affected by each other. To have the field from one source cancel out the other, completely, we would need to position the sources in the same location, at the same time, which is not possible. I know that I'm wrong, so please correct me. | It is said that atoms with the same number of electrons as protons are electrically neutral, so they have no net charge or net electric field. This is a great over-simplification, which I am sure you have already determined (based on why you are asking this question). You can have objects that are polarized where, overall, they have no "net charge", yet the distribution of charge is very important. The example you give is an excellent one of a dipole , where the net charge is $0$ , yet $\mathbf E\neq 0$ at distances away from the dipole. Really, the idea of electrically neutral is a macroscopic description meaning that if we look in this general area we will see that the number of positive charges exactly balances our the number of negative charges. However, as we "zoom in" we will find this to not be the case for an "electrically neutral" body, since (neglecting QM) we will have point charges at specific locations, and most of the "charge density" will be $0$ due to no charges being present at all, and then "infinite" (or at least really large) at the locations of the charges. | {
"source": [
"https://physics.stackexchange.com/questions/464451",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/224506/"
]
} |
464,559 | Obviously, a perpetuum mobile isn't possible by any law in physics, because energy can't be "created" or "destroyed", only transformed.
This said, I've had an idea for a perpetuum mobile and can't seem to find my mistake (I'm actually very close to building and trying it). Here's the plan: Take a piece of wood and attach its upper end to a screw so it can swing like a pendulum. Then, attach a magnetic metal at the lower end of the wood. Now, place a magnet at each side of the pendulum, so that at the maximum amplitude, the metal will barely touch the magnets and will swing back, due to its weight.
Now, in my head, if you give the pendulum a little impulse, it will swing up in one direction and get attracted by the magnet just a tiny bit. Thus, on the "way back", it will have a slightly higher amplitude. So it swings to the other side, closer to the magnet, which will pull the pendulum a bit more upwards, thus increasing the amplitude furthermore.
This could theoretically go on and the pendulum will never stop; it will actually gain momentum at the start. So the conditions are: (1) the metal must be heavy enough that it doesn't stick to the magnets; (2) the metal must be magnetic enough that we gain amplitude instead of losing it each swing. And that's basically it. I am aware that the construction couldn't work, but I struggle to find where I made my mistake.
Anyways, if it does work and you guys build it before I do: I want 50% of all profits and want you to name it Perpenduluum Mobile :D | Now, in my head, if you give the pendulum a little impulse, it will swing up in one direction and get attracted by the magnet just a tiny bit. You've neglected to account for the magnetic attraction as the pendulum bob goes back to its central position. On the outwards leg, you are correct that the magnet's attraction will pull on the bob and give it more energy than it would have in the absence of the magnet. However, in the return leg, the pendulum bob is trying to get away from the magnet's attractive force, and this will claim back all of the additional energy. (... if the system is perfect, that is. Real-world magnetic materials will show some amount of hysteresis, so the bob will lose slightly more energy on the way back than it gained on the way out.) This type of mistake is quite common when you have a core dynamics which is known to be conservative, and still seems to be producing energy - you're just conveniently neglecting to take into account the parts of the cycle where that force performs work against your system. For a similar example in action, see What prevents this magnetic perpetuum mobile from working? . | {
"source": [
"https://physics.stackexchange.com/questions/464559",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/224559/"
]
} |
464,911 | So, we all know that if you shine some light on a CD (compact disc) or DVD, you can see all the colors from red to violet. What is bothering me is: why do we see these colors on a compact disc? I know that a CD has tiny pits/bumps (their size is of the order of micrometers).
So when I shine a light on a CD, I see different colors, and they change as I move the disc, because I change the angle $\theta$ between my eyes and the disc, following the diffraction condition $d\sin\theta = m\lambda$ . So my main concern is: are the colors there because light travels different distances when it comes to the surface of the CD, due to the bumps/pits? Can I interpret the diffraction colors on a CD the same way as the colors on thin films? Because although one part of the CD is a quite reflective surface, the other is almost fully transparent.
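As a quick numerical illustration of that diffraction condition (my own sketch, using the $1.6$ micrometer track spacing quoted in the answer below):

```python
import math

d = 1.6e-6  # CD track spacing, m
m = 1       # first diffraction order

for name, lam in (("violet", 400e-9), ("green", 550e-9), ("red", 700e-9)):
    theta = math.degrees(math.asin(m * lam / d))  # from d*sin(theta) = m*lambda
    print(f"{name}: first-order angle ~ {theta:.1f} deg")
# violet ~14.5, green ~20.1, red ~25.9 degrees: the colors fan out with angle
```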
Is this like a combination of reflection and diffraction?
I mean, reflection is there because of the reflective surface, and even because of the thickness of the aluminum I might see colors (as with thin films)? And then there are the bumps/pits on the CD: do I also see colors because of them? | The pits are in parallel circular tracks with a spacing of 1.6 micrometers. These act as a diffraction grating. Normally this is a reflection grating, but one can make a transmission grating by removing the metal layer (easiest in recordable disks). Here is an image that I made in transmission with a cover disk without a recording layer. The mercury street light can be seen in the middle. | {
"source": [
"https://physics.stackexchange.com/questions/464911",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/205463/"
]
} |
465,017 | I have read various answers, on PSE and elsewhere, and most of them explain that the point of contact of the rolling object undergoes no instantaneous displacement in the direction of friction. I agree, but then there is a feeling that the friction does provide a torque, so how can it do no work? (The pure rolling here is under a constant external force on the object, so friction does act.) | In a scenario of pure rolling of a rigid wheel on a flat plane, you don't need any friction. Once the wheel is rolling, it will continue to do so, even if the friction coefficient becomes zero. If this were not the case, you'd be violating conservation of angular momentum. There is no force, no torque and therefore no work done. The more interesting case is that where the object is rolling under an external force, say down an inclined plane. See the diagram below. Now, you can analyze it in two ways. One is similar to Farcher's answer where the point of contact moves perpendicular to the frictional force and hence, there is no work done. But you were interested in it from the point of view of torques (where we consider the whole wheel, not just the point of contact) so let's do that. Friction does two things to the wheel as a whole. It does negative work when you look at the linear motion of the wheel. Indeed, $$W_1 = -f\,dS,$$ where $f$ is the force of friction and the wheel has moved a linear distance $dS$ . Next, the friction provides a torque about the center of the wheel and the wheel has angular displacement. Hence, it does positive rotational work i.e. $$W_2 = \tau\, d\alpha,$$ where $\tau$ is the torque and $d\alpha$ is the angular displacement of the wheel. But note that $\tau = fR$ and $R\, d\alpha = dS$ . Hence $W_2 = f\,dS$ and you get $$W_{tot} = W_1 + W_2 = 0$$ | {
"source": [
"https://physics.stackexchange.com/questions/465017",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/205249/"
]
} |
465,280 | I've noticed it is difficult to turn the wheels of a car when the car is stationary, especially cars without power steering, which is why the power steering was invented. However, I've noticed it becomes feather light when traveling at speed (some models even stiffen the steering wheel electronically at speed). So, why does a car's steering wheel get lighter with increasing speed? | Imagine the car stationary. The tire sits on the ground with the contact patch touching. As you start turning the wheel, the linkage to the wheels starts to rotate the contact patch on the ground. (There are also more complex motions because of the non-zero caster angle of the front wheel). This rotation is opposed by the static friction of the tire. As you continue turning, portions of the tread on the contact patch are pulled over and stressed. Now imagine holding the steering wheel at that angle and allowing the car to roll forward a bit. The tread at the rear of the contact patch lifts away from the road and the stress in that portion of the tire is released. Meanwhile new tread rolls onto the contact patch in front, but at the correct angle. Once the contact patch is covered by new tread, the stress from the turn is gone and steering wheel is back to a near-neutral force (again, modulo several effects from the suspension angles). The faster the car is moving forward, the faster it can put tread into the contact patch with no side stress. So the steering becomes easier. | {
"source": [
"https://physics.stackexchange.com/questions/465280",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/169763/"
]
} |
465,965 | I can't understand why $\left(\frac{\partial P}{\partial V} \right)_T=-\left(\frac{\partial P}{\partial T} \right)_V\left(\frac{\partial T}{\partial V}\right)_P$ . Why does the negative sign arise? I can easily write $\left(\frac{\partial P}{\partial V}\right)_T=\left(\frac{\partial P}{\partial T}\right)_V\left(\frac{\partial T}{\partial V}\right)_P$ from the rule of partial derivative, but what's the negative sign for? | This is not a simple application of partial derivatives, since the variables that are being held constant vary here, but an instance of the triple product rule , which says that for any three quantities $x,y,z$ depending on each other, the relation $$ \left(\frac{\partial x}{\partial y}\right)_z\left(\frac{\partial y}{\partial z} \right)_x\left(\frac{\partial z}{\partial x}\right)_y = -1$$ holds. | {
"source": [
"https://physics.stackexchange.com/questions/465965",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/197118/"
]
} |
466,335 | According to press releases, researchers have reversed time in a quantum computer and violated the second law of thermodynamics. What does that mean for physics? Will it allow time travel? Further information: "Arrow of time and its reversal on the IBM quantum computer" (Scientific Reports, 2019-03-13) "Physicists reverse time using quantum computer" (Phys.org, 2019-03-13) | First of all, let's get some important 'sociological' aspects out of the way: While website you've linked to, phys.org, tries to pass itself off as a science-journalism site, it is nothing of the sort . Instead, its core business is to aggregate press releases written by universities themselves. For most of the press releases they publish, phys.org does not do any vetting at all of the contents of the documents, nor do they do any independent journalism or check with independent experts. This means that the text you've linked to was written by someone (a university press office) with a direct financial stake in the impact of the piece , and it was not verified by anyone, and neither the writer nor the editor checked with any independent experts to confirm what they were publishing. That's fine if it's presented like what it is (i.e. promotional material), but it is unethical to present it as science journalism (which it is not). If it sounds like a "newspaper" taking claims about steel production in Russia from the Soviet Union's government and running them unchecked, that's because there's no difference between the two. The same is true, by the way, with EurekAlert, and with Science Alert, whose takes on the subject are here and here . Notice any similarities? Moreover, a word about the journal. The paper in question was published in Scientific Reports, whose refereeing process only checks for correctness, and not for interest or impact. (And, frankly, I wouldn't say that it has a great reputation for its correctness checks.) This doesn't impact the paper itself, but it does bear keeping in mind. Anyways, on to the paper. Suppose I throw an elastic ball at a wall, such that it hits the wall horizontally and bounces back: Assuming that the ball is completely elastic, then the collision with the wall will reverse its velocity, and it will follow exactly the same arc back to my hand. Why is that? Basically, because newtonian mechanics doesn't have an arrow of time - its laws of motion are completely reversible, which means that if you keep all of the particles' positions intact, but you reverse all of their velocities, the system will track back on exactly the same trajectory it came from, only backwards. The same is true within quantum mechanics: the laws of microscopic motion are completely time-reversible, though the operation of "reverse all of the velocities" is somewhat more complicated. Technically, you need to take the wavefunction $\psi(x)$ and replace it with its complex conjugate, $\psi(x)^*$ , and this is what Lesovik and his coworkers have done: they've devised a way to take a wavefunction and reverse its complex phase. And once you do that, the system will follow back on its previous track, exactly like the bouncing ball in classical mechanics. Image source: Sci. Rep 9 , 4396 (2019) So what does the current paper have to do with the arrow of time? Nothing at all, except for hype. 
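To see the conjugation trick at work, here is a toy numpy sketch (my own illustration of the general idea, not the paper's hardware protocol; it assumes a real symmetric Hamiltonian, for which complex conjugation is exactly the time-reversal operation):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
H = (A + A.T) / 2                        # real symmetric Hamiltonian

U = expm(-1j * H * 1.7)                  # forward evolution for time t = 1.7
psi0 = np.array([1.0, 0, 0, 0], dtype=complex)

psi_t    = U @ psi0                      # evolve forward
psi_rev  = np.conj(psi_t)                # "time reversal": complex conjugation
psi_back = U @ psi_rev                   # evolve forward again

print(abs(np.vdot(psi0, psi_back))**2)   # fidelity ~ 1.0: back where we started
```

Since $H$ is real, conjugation turns $U = e^{-iHt}$ into $U^\dagger$ , so the second forward evolution undoes the first, exactly like the bounced ball retracing its arc.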
The arrow of time emerges as a concept of statistical mechanics, in which the complexity of the processes in a large system is such that, although the individual microscopic laws of motion are reversible, the large-scale dynamics are not, since it is too difficult and unlikely to fully time-reverse the state of the system at any one time. This is primarily framed within classical mechanics, but there are equivalent versions within quantum mechanics, once you introduce the concepts of mixed states and thermodynamic ensembles. However, the current paper does nothing of the sort. They work with pure states and with fully coherent dynamics (instead of mixed states and partially-coherent dynamics, which would be required to deal with thermodynamic concepts or the arrow of time), and this makes them unable to address any issues with entropy increasing or decreasing, or any of the interesting stuff on the topic. The authors talk a big game about entropy and surrounding topics in the introduction, but that's where it stops - they do not measure any entropies, so they're completely unable to say anything meaningful about "reversing the arrow of time". So, let's have a final run through your specific questions: researchers have reversed time in a quantum computer No, they haven't. They reversed the direction of travel of a quantum evolution and watched it travel back, exactly like a classical elastic ball bouncing from a wall. There is some technical merit in the practical implementation, but nothing more. and violated the second law of thermodynamics. They did nothing of the sort. The paper effectively works at zero temperature and with pure states, so the entropy is zero throughout. What does that mean for physics? Exactly the same as a classical ball bouncing from a wall. Will it allow time travel? Not at all. | {
"source": [
"https://physics.stackexchange.com/questions/466335",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/192788/"
]
} |
466,889 | We all know where iron comes from. However, as I am reading up on supernovas, I started to wonder why there is as much iron as there is in the universe. Neither brown dwarfs nor white dwarfs deposit iron. Type I supernovas leave no remnant so I can see where there would be iron released. Type II supernovas leave either a neutron star or a black hole. As I understand it, the iron ash core collapses and the shock wave blows the rest of the star apart. Therefore no iron is released. (I know some would be made in the explosion along with all of the elements up to uranium. But would that account for all of the iron in the universe?) Hypernovas will deposit iron, but they seem to be really rare. Do Type I supernovas happen so frequently that iron is this common? Or am I missing something? | The solar abundance of iron is a little bit more than a thousandth by mass. If we assume that all the baryonic mass in the disc of the Galaxy (a few $10^{10}$ solar masses) is polluted in the same way, then more than 10 million solar masses of iron must have been produced and distributed by stars. A type Ia supernova results in something like 0.5-1 solar masses of iron (via decaying Ni 56), thus requiring about 20-50 million type Ia supernovae to explain all the Galactic Fe. Given the age of the Galaxy of 10 billion years, this requires a type Ia supernova rate of one every 200-500 years. The rate of type Ia supernovae in our Galaxy is not observationally measured, though there have likely been several in the last 1000 years. The rate above seems entirely plausible and was probably higher in the past. | {
"source": [
"https://physics.stackexchange.com/questions/466889",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/189760/"
]
} |
466,916 | We know that the speed of light depends on the density of the medium it is travelling through. It travels faster through less dense media and slower through more dense media. When we produce sound, a series of rarefactions and compressions are created in the medium by the vibration of the source of sound. Compressions have high pressure and high density, while rarefactions have low pressure and low density. If light is made to propagate through such a disturbance in the medium, does it experience refraction due to changes in the density of the medium? Why don't we observe this? | Actually, this effect was discovered in 1932, with light diffracted by ultrasound waves.
In order to get observable effects you need ultrasound with wavelengths in the μm range (i.e. not much longer than light waves), and thus sound frequencies in the MHz range. See for example:
On the Scattering of Light by Supersonic Waves by Debye and Sears in 1932.
Propriétés optiques des milieux solides et liquides soumis aux vibrations élastiques ultra sonores (Optical properties of solid and liquid media subjected to ultrasonic elastic vibrations) by Lucas and Biquard in 1932, translated from French: Abstract: This article describes the main optical properties presented by solid and liquid media subjected to ultrasonic elastic vibrations whose frequencies range from 600,000 to 30 million per second. These ultrasounds were obtained by Langevin's method using piezoelectric quartz excited at high frequency. Under these conditions, and according to the relative sizes of the elastic wavelengths, the light wavelengths, and the opening of the light beam passing through the medium studied, different optical phenomena are observed. In the case of the smallest elastic wavelengths of up to a few tenths of a millimeter, grating-like light diffraction patterns are observed when the incident light rays run parallel to the elastic wave planes. ...
The Diffraction of Light by High Frequency Sound Waves: Part I by Raman and Nagendra Nath in 1935: A theory of the phenomenon of the diffraction of light by sound waves of high frequency in a medium, discovered by Debye and Sears and by Lucas and Biquard, is developed. | {
"source": [
"https://physics.stackexchange.com/questions/466916",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/181963/"
]
} |
467,613 | Apparently spectral solar radiation is approximated by a black body at 5800 K. The spectral black body distribution (Planck distribution) is shown below (from Incropera, Fundamentals of Heat and Mass Transfer), with different temperatures including solar radiation at 5800 K. The heat flux on an Earth surface perpendicular to sun rays (i.e., the solar constant), is roughly 1.36 kilowatts per square meter. Is this simply the integral of the above distribution? Why is the sun approximated as a black body at 5800 K? Does this mean that the surface of the sun which is emitting radiation has a temperature of 5800 K? That seems kind of low. Is solar radiation approximated as a black body at 5800 K only on Earth, or is it the same everywhere? Why don't atmospheric effects and scattering change measurements of the solar spectrum on Earth? EDIT: The solar constant is approximated by considering the Stefan-Boltzmann law (i.e., the integration of the spectral solar emission), the size of the sun, and the distance from the sun to Earth. A good derivation is shown here: https://www.youtube.com/watch?v=DQk04xqvVbU | Yes - the integral of the spectrum you refer to gives the total power per unit area emitted from the surface of the sun. If you multiply that by a factor of $\left(\frac{\text{Solar Radius}}{1\text{ AU}}\right)^2$ to account for the $1/r^2$ dependence of intensity on distance, then you'll get the solar constant you quote. Yes. The sun is not at a single uniform temperature - the radiation which reaches Earth is mostly emitted from the photosphere (~6000 K) but the temperature varies dramatically between the different layers of the sun. Source Everywhere. The sun is very nearly an ideal black body. This is a property of the sun, not of a particular vantage point from which you're observing it. Furthermore, atmospheric effects dramatically change measurements of the solar spectrum on Earth. The upper atmosphere blocks nearly all of the radiation at higher frequencies than UV, and quite a lot of the IR spectrum is absorbed and scattered by greenhouse gases. Visible light passes through without much trouble (which is a substantial part of why we evolved to be sensitive to those frequencies) but the facts that the sky is blue and that sunsets are beautiful demonstrate that the atmosphere scatters visible light as well. Source | {
"source": [
"https://physics.stackexchange.com/questions/467613",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/98414/"
]
} |
468,891 | Nature article on reproducibility in science. According to that article, a (surprisingly) large number of experiments aren't reproducible, or at least there have been failed attempted reproductions. In one of the figures, it's said that 70% of scientists in physics & engineering have failed to reproduce someone else's results, and 50% have failed to reproduce their own. Clearly, if something cannot be reproduced, its veracity is called into question. Also clearly, because there's only one particle accelerator with the power of the LHC in the world, we aren't able to independently reproduce LHC results. In fact, because 50% of physics & engineering experiments aren't reproducible by the original scientists, one might expect there's a 50% chance that if the people who originally built the LHC built another LHC, they will not reach the same results. How, then, do we know that the LHC results (such as the discovery of the Higgs boson) are robust? Or do we not know the LHC results are robust, and are effectively proceeding on faith that they are? EDIT: As pointed out by Chris Hayes in the comments, I misinterpreted the Nature article. It says that 50% of physical scientists have failed to reproduce their own results, which is not the same statement as 50% of physics experiments aren't reproducible. This significantly eases the concern I had when I wrote the question. I'm leaving the question here however, because the core idea - how can we know the LHC's results are robust when we only have one LHC? - remains the same, and because innisfree wrote an excellent answer. | That's a really great question. The ' replication crisis ' is that many effects in social sciences (and, although to a lesser extent, other scientific fields) couldn't be reproduced. There are many factors leading to this phenomenon, including Weak standards of evidence, e.g., $2\sigma$ evidence required to demonstrate an effect Researchers (subconsciously or otherwise) conducting bad scientific practice by selectively reporting and publishing significant results. E.g. considering many different effects until they find a significant effect or collecting data until they find a significant effect. Poor training in statistical methods. I'm not entirely sure about the exact efforts that the LHC experiments are making to ensure that they don't suffer the same problems. But let me say some things that should at least put your mind at ease: Particle physics typically requires a high-standard of evidence for discoveries ( $5\sigma$ ). To put that into perspective, the corresponding type-1 error rates are $0.05$ for $2\sigma$ and about $3\times10^{-7}$ for $5\sigma$ The results from the LHC are already replicated! There are several detectors placed around the LHC ring. Two of them, called ATLAS and CMS, are general purpose detectors for Standard Model and Beyond the Standard Model physics. Both of them found compelling evidence for the Higgs boson. They are in principle completely independent (though in practice staff switch experiments, experimentalists from each experiment presumably talk and socialize with each other etc, so possibly a very small dependence in analysis choices etc). The Tevatron, a similar collider experiment in the USA operating at lower-energies, found direct evidence for the Higgs boson. The Higgs boson was observed in several datasets collected at the LHC The LHC (typically) publishes findings regardless of their statistical significance, i.e., significant results are not selectively reported. 
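As a quick check of the $2\sigma$ and $5\sigma$ error rates quoted above (my own sketch of the one-sided tail probabilities):

```python
from scipy.stats import norm

for z in (2, 5):
    print(f"{z} sigma: one-sided p = {norm.sf(z):.2e}")
# 2 sigma: p = 2.28e-02 (two-sided ~0.05); 5 sigma: p = 2.87e-07 (~3e-7)
```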
The LHC teams are guided by statistical committees, hopefully ensuring good practice. The LHC is in principle committed to open data, which means a lot of the data should at some point become public. This is one recommendation for helping the crisis in social sciences. Typical training for experimentalists at the LHC includes basic statistics (although in my experience LHC experimentalists are still subject to the same traps and misinterpretations as everyone else). All members (thousands) of the experimental teams are authors on the papers. The incentive for bad practices such as $p$ -hacking is presumably slightly lowered, as you cannot 'discover' a new effect and publish it only under your own name, and have improved job/grant prospects. This incentive might be a factor in the replication crisis in social sciences. All papers are subject to internal review (which I understand to be quite rigorous) as well as external review by a journal. LHC analyses are often (I'm not sure who plans or decides this) blinded. This means that the experimentalists cannot tweak the analyses depending on the result. They are 'blind' to the result, make their choices, then unblind it only at the end. This should help prevent $p$ -hacking. LHC analyses typically (though not always) report a global $p$ -value, which has been corrected for multiple comparisons (the look-elsewhere effect). The Higgs boson (or similar new physics) was theoretically required due to a 'no-lose' theorem about the breakdown of models without a Higgs at LHC energies, so we can be even more confident that it is a genuine effect.
The other new effects that are being searched for at the LHC, however, arguably aren't as well motivated, so this doesn't apply to them. E.g., there was no a priori motivation for a 750 GeV resonance that was hinted at in data but ultimately disappeared. If anything, there is a suspicion that the practices at the LHC might even result in the opposite of the 'replication crisis': analyses that find effects that are somewhat significant might be examined and tweaked until they decrease. In this paper it was argued this was the case for SUSY searches in run-1. | {
"source": [
"https://physics.stackexchange.com/questions/468891",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/177855/"
]
} |
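The σ-to-p conversion quoted in that answer is easy to check. A minimal sketch assuming scipy is available (note that the 0.05 figure for $2\sigma$ is two-sided, while the particle-physics $5\sigma$ discovery convention is one-sided):

```python
from scipy.stats import norm

for n_sigma in (2, 5):
    one_sided = norm.sf(n_sigma)       # P(Z > n_sigma)
    two_sided = 2 * norm.sf(n_sigma)   # P(|Z| > n_sigma)
    print(f"{n_sigma} sigma: one-sided p = {one_sided:.2e}, "
          f"two-sided p = {two_sided:.2e}")

# 2 sigma: one-sided p = 2.28e-02, two-sided p = 4.55e-02  (the ~0.05 figure)
# 5 sigma: one-sided p = 2.87e-07                          (the ~3e-7 figure)
```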
469,345 | Imagine we managed to squeeze light into a very tiny region of space so that the energy concentration at that point becomes a black hole. Can this black hole then move at the speed of light? | No. I assume you're thinking that a black hole made from light would have a zero rest mass and could therefore travel at the speed of light. However the rest mass of any black hole is due not only to the mass that went into it but also the energy (e.g. photons) that went into it. The increase in mass due to the energy is given by Einstein's famous equation $E = mc^2$ . So if we create the black hole purely from mass $m$ , the rest mass of the black hole is just $m$ . If we create the black hole purely from energy $E$ , e.g. from photons with a total energy $E$ , then the rest mass of the black hole is $E/c^2$ . Or for completeness we could use a mixture of mass $m$ and energy $E$ in which case the rest mass would be $m + E/c^2$ . So a black hole made from just photons would not have a zero rest mass and therefore could not travel at the speed of light. This conversion of photons to mass isn't unique to a black hole. For example suppose we start with a hydrogen atom in the ground state, $1s$ , and let it absorb a 10.2eV photon to excite it to the $2p$ state. This would increase the mass by $10.2\textrm{eV}/c^2$ , i.e. even though the photon is massless absorbing it increases the mass of the hydrogen atom. As a general rule mass is not a conserved quantity either in special or general relativity. | {
"source": [
"https://physics.stackexchange.com/questions/469345",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/75502/"
]
} |
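A quick numerical sketch of the mass bookkeeping in the answer above (plain Python; the constants are rounded values):

```python
c  = 2.998e8      # speed of light, m/s (rounded)
eV = 1.602e-19    # joules per electronvolt

# Mass gained by a hydrogen atom absorbing a 10.2 eV photon
print(f"dm(hydrogen + 10.2 eV photon) = {10.2 * eV / c**2:.2e} kg")  # ~1.8e-35 kg

# Rest mass of a black hole formed from 1 J of pure light
print(f"rest mass per joule of photons = {1 / c**2:.2e} kg")         # ~1.1e-17 kg
```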
469,780 | Herbert Spencer somewhere says that the parabola of a ballistic object is actually a portion of an ellipse that is indistinguishable from a parabola--is that true? It would seem plausible since satellite orbits are ellipses and artillery trajectories are interrupted orbits. | The difference between the two cases is the direction of the gravity vector. If gravity is pulling towards a point (as we see in orbital mechanics), ballistic objects follow an elliptical (or sometimes hyperbolic) path. If, however, gravity points in a constant direction (as we often assume in terrestrial physics problems: it pulls "down"), we get a parabolic trajectory. On the timescales of these trajectories that we call parabolic, the difference in direction of gravity from start to end of the flight is so tremendously minimal, that we can treat it as a perturbation from the "down" vector and then ignore it entirely. This works until the object is flying fast enough that the changing gravity vector starts to have a non-trivial effect. At orbital velocities, the effect is so non-trivial that we don't even try to model it as a "down" vector plus a perturbation. We just model the vector pointing towards the center of the gravitational body. | {
"source": [
"https://physics.stackexchange.com/questions/469780",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/56930/"
]
} |
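The claim above can be tested numerically: integrate the same low-speed launch once with uniform 'down' gravity and once with an inverse-square pull towards the Earth's centre. A crude sketch (rounded constants, simple semi-implicit Euler; the launch parameters are arbitrary illustrative values):

```python
import math

GM = 3.986e14   # Earth's gravitational parameter, m^3/s^2 (rounded)
R  = 6.371e6    # Earth's radius, m
g  = GM / R**2  # surface gravity, ~9.82 m/s^2

def fly(central, t_max=6.0, dt=1e-3):
    """Semi-implicit Euler integration of a 30 m/s, 45-degree lob."""
    x, y, vx, vy = 0.0, R, 30.0, 30.0
    path = []
    for _ in range(int(t_max / dt)):
        if central:                      # inverse-square pull towards the centre
            r = math.hypot(x, y)
            ax, ay = -GM * x / r**3, -GM * y / r**3
        else:                            # uniform gravity, straight 'down'
            ax, ay = 0.0, -g
        vx += ax * dt; vy += ay * dt
        x += vx * dt;  y += vy * dt
        path.append((x, y))
    return path

gap = max(math.hypot(xa - xb, ya - yb)
          for (xa, ya), (xb, yb) in zip(fly(True), fly(False)))
print(f"max separation of the two trajectories: {gap:.4f} m")
# A few millimetres over a ~180 m flight: the 'interrupted orbit' ellipse and
# the textbook parabola are practically indistinguishable at these speeds.
```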
469,782 | The spacetime around a rotating massive body is distorted (in addition to the distortion caused by the body's gravity) by frame dragging . My question is simple. Is this distortion around a massive rotating body (at constant angular velocity) changing in time, i.e. does the spacetime get "wound up" more and more, or doesn't it? If so, can we tell from this difference how long two equally massive spherical bodies have already been rotating? | The difference between the two cases is the direction of the gravity vector. If gravity is pulling towards a point (as we see in orbital mechanics), ballistic objects follow an elliptical (or sometimes hyperbolic) path. If, however, gravity points in a constant direction (as we often assume in terrestrial physics problems: it pulls "down"), we get a parabolic trajectory. On the timescales of these trajectories that we call parabolic, the difference in direction of gravity from start to end of the flight is so tremendously minimal, that we can treat it as a perturbation from the "down" vector and then ignore it entirely. This works until the object is flying fast enough that the changing gravity vector starts to have a non-trivial effect. At orbital velocities, the effect is so non-trivial that we don't even try to model it as a "down" vector plus a perturbation. We just model the vector pointing towards the center of the gravitational body. | {
"source": [
"https://physics.stackexchange.com/questions/469782",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/98822/"
]
} |
469,869 | The longest solar day of the year is approximately 24 hours 0 min 30 s (occurring in mid to late December), while the shortest solar day of the year is approximately 23 h 59 min 38 s. If I average these two I come up with an average solar day of 24 hours + 4 seconds. Why, then, is it said to be exactly 24 hours 0 min 0 s? Wouldn't using a 24-hour solar day as the definition of the day cause an offset of 4 seconds each solar year? | You can't calculate the length of a mean solar day just by taking the mean of the shortest & longest apparent solar days. That would work if the apparent day lengths varied in a simple linear fashion, but that's not the case. From Wikipedia's article on the Equation of Time , The equation of time describes the discrepancy between two kinds
of solar time. [...] The two times that differ are the apparent solar
time, which directly tracks the diurnal motion of the Sun,
and mean solar time, which tracks a theoretical mean Sun with uniform
motion. This graph shows the cumulative differences between mean & apparent solar time: The equation of time — above the axis a sundial will
appear fast relative to a clock showing local mean time, and below the
axis a sundial will appear slow. To correctly calculate the mean solar day length you need to integrate the apparent day lengths over the whole year. (And you need to decide exactly how to define the length of the year, which is a whole complicated story in its own right). There are two primary causes of the Equation of Time. 1. The obliquity of the plane of Earth's orbit (the ecliptic plane), which is tilted approximately 23° relative to the equatorial plane. This tilt is also responsible for the seasons. 2. The eccentricity of Earth's orbit, which causes the Earth's orbital speed to vary over the year. The following graph shows how these two components combine to create the Equation of Time. Equation of time (red solid line) and its two main components plotted
separately, the part due to the obliquity of the ecliptic (mauve
dashed line) and the part due to the Sun's varying apparent speed
along the ecliptic due to eccentricity of the Earth's orbit (dark blue
dash & dot line) Please see the linked Wikipedia article for further details. I have further info, with better and additional graphs, on our sister site. https://astronomy.stackexchange.com/a/48253/16685 This answer has more details, graphs, and some Python code: https://astronomy.stackexchange.com/a/49546/16685 | {
"source": [
"https://physics.stackexchange.com/questions/469869",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/205782/"
]
} |
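For a feel for the numbers, here is a sketch using a common textbook approximation to the equation of time (the coefficients are the ones usually quoted in solar-engineering references; accuracy is only of order a minute, so treat the output as illustrative):

```python
import math

def eot_minutes(n):
    """Approximate equation of time on day-of-year n (sundial minus clock)."""
    b = 2 * math.pi * (n - 81) / 365.0
    return 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)

def apparent_day_seconds(n):
    """Apparent solar day: 24 h minus the EoT's change over that day."""
    return 86400.0 - (eot_minutes(n + 1) - eot_minutes(n)) * 60.0

for n in (258, 355):   # mid-September and late December
    print(f"day {n}: {apparent_day_seconds(n):.1f} s")
# -> roughly 86378 s and 86428 s, close to the extremes quoted in the question.

# Averaged over a full year the EoT changes telescope away exactly:
print(sum(apparent_day_seconds(n) for n in range(1, 366)) / 365)  # 86400.0
```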
469,917 | In the particular problem I encountered, an electric field was zero at the origin and we were meant to prove that a particle at the origin was in an unstable state of equilibrium. Is it enough to state that for any non-null coordinates, the electric field isn't zero, ergo the equilibrium is unstable? Or is there a more elegant way of proving it? | In the centre of a bowl there is equilibrium. Put a ping pong ball in it. If this ball is ever shaken slightly away from equilibrium, it will immediately roll back. A "shake-proof" equilibrium is called stable . Now, turn the bowl around and put the ball on the top. This is an equilibrium. But at the slightest shake, the ball rolls down. This "non-shake-proof" equilibrium is called unstable . It is all about the potential energy : systems always tend towards the lowest potential energy. The bottom of the bowl is of lowest potential energy, so the ball wants to move back when it is slightly displaced. The top of the flipped bowl is of highest potential energy, and any neighbour point is of lower energy. So the ball has no tendency to roll back up. Mathematically, it is thus all about figuring out if the equilibrium is a minimum or a maximum . Only a minimum is stable. You might for many practical/physical purposes be able to determine this by simply looking at the graph of the potential energy. But mathematically, this can be solved directly from the potential energy expression $U$ . Just look at the sign of the second derivative (with respect to position). If it is positive, $U''_{xx}>0$ , then the potential increases as you move away from the equilibrium - so it is a minimum. If it is negative, $U''_{xx}<0$ , then the potential decreases as you move away from the equilibrium - so it is a maximum. If you have a 2D function, then you have more than one second derivative, $U''_{xx}$ , $U''_{xy}$ , $U''_{yx}$ and $U''_{yy}$ . In this case, you must collect them into a so-called Hessian matrix and look at the eigenvalues of that matrix. If both are positive, then the point is a minimum; if both are negative, then the point is a maximum. (And if a mix, then the point is neither a minimum nor a maximum, but a saddle point). This may be a bit more than you expected - but it is the rather elegant, mathematical method. | {
"source": [
"https://physics.stackexchange.com/questions/469917",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/214638/"
]
} |
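A minimal numerical sketch of the Hessian recipe described above (numpy assumed; the two example potentials are made up for illustration):

```python
import numpy as np

def hessian_at_origin(U, h=1e-5):
    """Second derivatives of U(x, y) at (0, 0) by central differences."""
    def d2(i, j):
        e = np.eye(2) * h
        return (U(*(e[i] + e[j])) - U(*(e[i] - e[j]))
                - U(*(-e[i] + e[j])) + U(*(-e[i] - e[j]))) / (4 * h * h)
    return np.array([[d2(0, 0), d2(0, 1)], [d2(1, 0), d2(1, 1)]])

U_bowl   = lambda x, y: x**2 + y**2   # a bowl: should come out stable
U_saddle = lambda x, y: x**2 - y**2   # a saddle: should come out unstable

for name, U in [("bowl", U_bowl), ("saddle", U_saddle)]:
    eig = np.linalg.eigvalsh(hessian_at_origin(U))
    print(name, eig.round(3), "->", "stable" if np.all(eig > 0) else "unstable")
```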
470,202 | The entropy $S$ of a system is defined as $$S = k\ln \Omega.$$ What precisely is $\Omega$ ? It refers to "the number of microstates" of the system, but is this the number of all accessible microstates or just the number of microstates corresponding to the system's current macrostate? Or is it something else that eludes me? | Entropy is a property of a macrostate, not a system. So $\Omega$ is the number of microstates that correspond to the macrostate in question. Putting aside quantization, it might appear that there are an infinite number of microstates, and thus the entropy is infinite, but for any level of resolution, the number is finite. And changing the level of resolution simply multiplies the number of microstates by a constant amount. Since it is almost always the change in entropy, not the absolute entropy, that is considered, and we're taking the log of $\Omega$ , it actually doesn't matter if the definition of S is ambiguous up to a constant multiplicative factor, as that will cancel out when we take dS. So with a little hand waving (aka "normalization"), we can ignore the apparent infinity of entropy. | {
"source": [
"https://physics.stackexchange.com/questions/470202",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
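To make "number of microstates corresponding to the macrostate" concrete, a small sketch for $N$ two-state spins, where the macrostate is the total number of 'heads':

```python
from math import comb, log

k_B = 1.380649e-23   # Boltzmann constant, J/K
N = 100              # number of two-state 'spins' (coins)

for n_heads in (0, 25, 50):
    omega = comb(N, n_heads)            # microstates in this macrostate
    S = k_B * log(omega)
    print(f"macrostate n={n_heads:3d}: Omega = {omega:.3e}, S = {S:.3e} J/K")
# The 50/50 macrostate has overwhelmingly the most microstates, so it has
# the highest entropy; Omega is never 'all states of the system'.
```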
470,316 | It is often said that, while many fermions cannot occupy the same state, bosons have the tendency to do that. Sometimes this is expressed figuratively by saying, for example, that "bosons are sociable" or that "bosons want to stay as close as possible". I understand that the symmetry of the wavefunction allows many bosons to be in the same one-particle state, but I can't see why they should prefer to do that rather than occupying different states. Anyway, according to many science writers, bosons not only can be in the same state, but they also tend to do that. Why is it like that? | Suppose you have two distinguishable coins that can either come up heads or tails. Then there are four equally likely possibilities, $$\text{HH}, \text{HT}, \text{TH}, \text{TT}.$$ There is a 50% chance for the two coins to have the same result. If the coins were fermions and "heads/tails" were quantum modes, the $\text{HH}$ and $\text{TT}$ states wouldn't be allowed, so there is a 0% chance for the two coins to have the same result. If the coins were bosons, then all states are allowed. But there's a twist: bosons are identical particles. The states $\text{HT}$ and $\text{TH}$ are precisely the same state, namely the one with one particle in each of the two modes. So there are three possibilities, $$\text{two heads}, \text{two tails}, \text{one each}$$ and hence in the microcanonical ensemble (where each distinct quantum state is equally probable) there is a $2/3$ chance of double occupancy, not $1/2$ . That's what people mean when they say bosons "clump up", though it's not really a consequence of bosonic statistics, just a consequence of the particles being identical. Whenever a system of bosonic particles is in thermal equilibrium, there exist fewer states with the bosons apart than you would naively expect, if you treated them as distinguishable particles, so you are more likely to see them together. | {
"source": [
"https://physics.stackexchange.com/questions/470316",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/160509/"
]
} |
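The state counting above can be checked by brute force. A small sketch for 2 particles in 2 modes (a bosonic state is represented as an unordered multiset of occupied modes):

```python
from itertools import product, combinations_with_replacement

modes, n = ["H", "T"], 2

dist = list(product(modes, repeat=n))                  # labelled particles
bose = list(combinations_with_replacement(modes, n))   # unordered occupations

frac_same = lambda states: sum(len(set(s)) == 1 for s in states) / len(states)
print("distinguishable:", len(dist), "states, P(same mode) =", frac_same(dist))
print("bosons:         ", len(bose), "states, P(same mode) =", frac_same(bose))
# -> 4 states with P = 0.5, versus 3 states with P = 2/3, as in the answer.
```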
470,675 | I'm learning about infinite-dimensional systems from a mathematical viewpoint and trying to understand them from a physical perspective. I would like to understand if infinite-dimensional systems make sense in physics, especially when they become necessary in quantum control theory. Are there any simple and intuitive examples? | Welcome to Stack Exchange! I do not know much about quantum control theory, but I can give you a simple example from regular quantum mechanics: that of a particle in a box. This is one of the simplest systems one can study in QM, but even here an infinite dimensional space shows up. Indexing the energy eigenstates by $n$ so that $$H|n\rangle=E_n|n\rangle$$ there is an infinite number of possible states, one for every integer. Every state has its own energy: $$E_n=\frac{n^2\pi^2\hbar^2}{2mL^2}.$$ Thus if you want to describe a general quantum state in this system you would write it down as $$|\psi\rangle=\sum_{n=1}^\infty c_n|n\rangle,$$ where $c_n$ is a complex number. $|\psi\rangle$ is then an example of a vector in an infinite dimensional space where every possible state is a basis vector, and the $c_n$ are the expansion coefficients in that basis. You can of course imagine the $n$ indexing some other collection of states of some other system. Indeed, in most cases the space of all possible states of a quantum system will be infinite dimensional. | {
"source": [
"https://physics.stackexchange.com/questions/470675",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/218240/"
]
} |
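A minimal sketch of that energy ladder for an electron in a 1 nm box (SI constants hard-coded; the box width is an arbitrary illustrative choice):

```python
import math

hbar = 1.054571817e-34    # J s
m_e  = 9.1093837015e-31   # kg
L    = 1e-9               # box width, m (an arbitrary choice)
eV   = 1.602176634e-19    # J

def E(n):
    """Energy of level n of a particle in an infinite square well."""
    return n**2 * math.pi**2 * hbar**2 / (2 * m_e * L**2)

for n in range(1, 5):
    print(f"E_{n} = {E(n) / eV:.3f} eV")
# One basis state |n> for every positive integer n: the state space built
# on them is infinite dimensional.
```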
470,714 | When we drop a ball, it bounces back to the spot where we dropped it, due to the reaction forces exerted on it by the ground. However, if a person falls down (say, if we push them), why don't they come back to their initial position where they started their fall? According to Newton's 3rd law of motion, to every action there is always an equal but opposite reaction. If we take the example of a ball, then it comes back with the same force as it falls down. But in the case of a human body, this law seems not to be applicable. Why? | Newton's third law just says that when the person is hitting the floor, the force the person exerts on the ground is equal to the force the ground exerts on the person at all times, i.e. all forces are interactions. Newton's third law does not say that all collisions are elastic, which is what you are proposing. When someone hits the floor, most of the energy is absorbed by the person through deformation (as well as the floor, depending on what type of floor it is), but there is barely any rebound since people tend to not be very elastic, i.e. the deformation does not involve storing the energy to be released back into kinetic energy. Contrast this with a bouncy ball where much of the energy goes into deforming the ball, but since it is very elastic it is able to spring back and put energy back into motion. However, it is unlikely the collision is still perfectly elastic, as you seem to suggest in your question. In summary, Newton's third law tells us that action-reaction force pairs must have equal magnitudes and opposite directions, but it doesn't tell us anything about what the magnitudes of these forces actually are. Your misunderstanding likely comes from the imprecise usage of the words "action" and "reaction". In this case, these words refer to just forces, not entire processes. You can get some confusing questions if you don't understand this. For example, why is it that when I open my refrigerator, my refrigerator doesn't also open me? | {
"source": [
"https://physics.stackexchange.com/questions/470714",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/227378/"
]
} |
470,728 | If I do a handstand, everything looks upside down (which is originally what we see if we're upright but the brain switches this). I assume this is because of the change in how light rays enter our eye. Now if I use my camera and turn it upside down, the image on screen is still upright. Why does the same principle not apply to cameras? | Newton's third law just says that when the person is hitting the floor, the force the person exerts on the ground is equal to the force the ground exerts on the person at all times, i.e. all forces are interactions. Newton's third law does not say that all collisions are elastic, which is what you are proposing. When someone hits the floor, most of the energy is absorbed by the person through deformation (as well as the floor, depending on what type of floor it is), but there is barely any rebound since people tend to not be very elastic, i.e. the deformation does not involve storing the energy to be released back into kinetic energy. Contrast this with a bouncy ball where much of the energy goes into deforming the ball, but since it is very elastic it is able to spring back and put energy back into motion. However, it is unlikely the collision is still perfectly elastic, as you seem to suggest in your question. In summary, Newton's third law tells us that action-reaction force pairs must have equal magnitudes and opposite directions, but it doesn't tell us anything about what the magnitudes of these forces actually are. Your misunderstanding likely comes from the imprecise usage of the words "action" and "reaction". In this case, these words refer to just forces, not entire processes. You can get some confusing questions if you don't understand this. For example, why is it that when I open my refrigerator, my refrigerator doesn't also open me? | {
"source": [
"https://physics.stackexchange.com/questions/470728",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/165639/"
]
} |
470,922 | Questions of the form "An electron and a positron collide with E MeV of energy; what is the frequency of the photons released?" quite often come up in my A Level course (often for fairly arbitrary E). But this got me thinking. There is energy stored in the separation of an electron and a positron, which, as they get closer and closer together, should all be converted into kinetic energy. As the potential is of the form $\frac{1}{r}$ , this implies that at arbitrarily small distances an arbitrarily high amount of energy is given off. Given that both electrons and positrons are typically regarded as point particles, in order for them to collide, they would have to be arbitrarily close together, which would imply that over the course of their collision they should have released arbitrarily high amounts of energy, in the form of kinetic energy. As this would imply photons of arbitrarily high frequency given off, I assume that I must have missed out some piece of physics somewhere, but I am uncertain where. Ideas I have had so far include: Energy should be given off, anyway, by an accelerating electron, in the form of light, according to classical EM, although I don't know how this changes from classical to quantum ideas of EM - we certainly can't have all the energy given off in a continuous stream, because we need quantised photons, so does the electron itself experience quantised energy levels as it accelerates inwards (my only issue with treating the electron in such a quantised way is that, to my mind, it'd be equivalent to treating it mathematically as a hydrogen-like atom, where the probability of the electron colliding with the positron is still extremely low, and unlike electron capture, there'd be no weak force interaction to mediate this 'electron-positron atom'). The actual mechanism for the decay occurs at a non-zero separation distance, perhaps photons pass between the two particles to mediate the decay at non-infinitesimal distances. At relativistic speeds our classical model of electrodynamics breaks down. Now, I know this to be true - considering the fact that magnetism is basically the relativistic component of electrodynamics. However, given the fact that magnetism is the only relativistic force which'd be involved, I don't see how it'd act to counteract this infinite release of energy - so is there another force which I'm forgetting? These are just ideas I've come up with whilst thinking about the problem, and I don't know if any of them have any physical significance in this problem, so any advice is appreciated! | This is a great question! It can be answered on many different levels. You are absolutely right that if we stick to the level of classical high school physics, something doesn't make sense here. However, we can get an approximately correct picture by "pasting" together a classical and a quantum description. To do this, let's think of when the classical picture breaks down. In relativistic quantum field theory, the classical picture of point particles breaks down when we go below the Compton wavelength $$\lambda = \frac{\hbar}{mc}.$$ It isn't impossible to localize particles smaller than this length, but generically you will have a significant probability to start creating new particles instead. Now, the electric potential energy released by the time we get to this separation is $$E = \frac{e^2}{r} = \frac{e^2 m c}{\hbar}$$ in cgs units. 
Here one of the most important constants in physics has appeared, the fine structure constant which characterizes the strength of electromagnetism, $$\alpha = \frac{e^2}{\hbar c} \approx \frac{1}{137}.$$ The energy released up to this point is $$E \approx \alpha m c^2$$ which is not infinite, but rather only a small fraction of the total energy. Past this radius we should use quantum mechanics, which renders the $1/r$ potential totally unapplicable -- not only do the electrons not have definite positions, but the electromagnetic field doesn't even have a definite value, and the number of particles isn't definite either. Actually thinking about the full quantum state of the system at this point is so hairy that not even graduate-level textbooks do it; they usually black-box the process and only think about the final results, just like your high school course is doing. Using the full theory of quantum electrodynamics, one can show the most probable final outcome is to have two energetic photons come out. In high school you just assume this happens and use conservation laws to describe the photons long after the process is over. For separations much greater than $\lambda$ , the classical picture should be applicable, and we can think of part of the energy as being released as classical electromagnetic radiation, which occurs generally when charges accelerate. (At the quantum level, the number of photons released is infinite , indicating a so-called infrared divergence, but they are individually very low in energy, and their total energy is perfectly finite.) As you expected, this energy is lost before the black-boxed quantum process starts, so the answers in your school books are actually off by around $1/137$ . But this is a small enough number we don't worry much about it. | {
"source": [
"https://physics.stackexchange.com/questions/470922",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/126714/"
]
} |
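A small sketch of the scales quoted in the answer (the answer works in Gaussian units; the sketch below uses SI, so the Coulomb constant appears explicitly):

```python
hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m/s
m_e  = 9.1093837015e-31  # kg
e    = 1.602176634e-19   # C
k    = 8.987551787e9     # Coulomb constant, N m^2 C^-2

lam   = hbar / (m_e * c)            # reduced Compton wavelength
alpha = k * e**2 / (hbar * c)       # fine structure constant

print(f"lambda_C = {lam:.3e} m")                              # ~3.86e-13 m
print(f"alpha    = 1/{1/alpha:.1f}")                          # ~1/137
print(f"Coulomb energy at r = lambda_C: "
      f"{k * e**2 / lam / e / 1e3:.2f} keV")                  # ~3.7 keV
print(f"alpha * m_e c^2:                "
      f"{alpha * m_e * c**2 / e / 1e3:.2f} keV")              # the same number
```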
470,927 | I learned that the damping force in forced damped oscillations is given by
F = -bv, where v is the velocity of the object measured from the ground frame. Suppose we are in a frame which is moving with a velocity v'. Will the damping force in this new frame be F = -b(v-v') or will it remain -bv? This doubt came when I was solving a problem related to the forced damped oscillation of a pendulum with its suspension point oscillating. I'm arriving at the correct solution by assuming the force is -bv in the non-inertial frame of the moving suspension point instead of -b(v-v'), but I'm not able to understand why. | This is a great question! It can be answered on many different levels. You are absolutely right that if we stick to the level of classical high school physics, something doesn't make sense here. However, we can get an approximately correct picture by "pasting" together a classical and a quantum description. To do this, let's think of when the classical picture breaks down. In relativistic quantum field theory, the classical picture of point particles breaks down when we go below the Compton wavelength $$\lambda = \frac{\hbar}{mc}.$$ It isn't impossible to localize particles smaller than this length, but generically you will have a significant probability to start creating new particles instead. Now, the electric potential energy released by the time we get to this separation is $$E = \frac{e^2}{r} = \frac{e^2 m c}{\hbar}$$ in cgs units. Here one of the most important constants in physics has appeared, the fine structure constant which characterizes the strength of electromagnetism, $$\alpha = \frac{e^2}{\hbar c} \approx \frac{1}{137}.$$ The energy released up to this point is $$E \approx \alpha m c^2$$ which is not infinite, but rather only a small fraction of the total energy. Past this radius we should use quantum mechanics, which renders the $1/r$ potential totally unapplicable -- not only do the electrons not have definite positions, but the electromagnetic field doesn't even have a definite value, and the number of particles isn't definite either. Actually thinking about the full quantum state of the system at this point is so hairy that not even graduate-level textbooks do it; they usually black-box the process and only think about the final results, just like your high school course is doing. Using the full theory of quantum electrodynamics, one can show the most probable final outcome is to have two energetic photons come out. In high school you just assume this happens and use conservation laws to describe the photons long after the process is over. For separations much greater than $\lambda$ , the classical picture should be applicable, and we can think of part of the energy as being released as classical electromagnetic radiation, which occurs generally when charges accelerate. (At the quantum level, the number of photons released is infinite , indicating a so-called infrared divergence, but they are individually very low in energy, and their total energy is perfectly finite.) As you expected, this energy is lost before the black-boxed quantum process starts, so the answers in your school books are actually off by around $1/137$ . But this is a small enough number we don't worry much about it. | {
"source": [
"https://physics.stackexchange.com/questions/470927",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/101386/"
]
} |
471,753 | After the revelation of the first black hole images , it seems there is a bias towards the south side. Is it because of measuring it from Earth, or is it something more fundamental in the understanding of gravitation? | The reason is almost entirely due to Doppler beaming and boosting of radiation arising in matter travelling at relativistic speeds. This in turn is almost entirely controlled by the orientation of the black hole spin. The black hole sweeps up material and magnetic fields almost irrespective of the orientation of any accretion disk. The pictures below from the fifth event horizon telescope paper make things clear. The black arrow indicates the direction of black hole spin. The blue arrow indicates the initial rotation of the accretion flow. The jet of M87 is more or less East-West (when projected onto the page - in fact I believe the projected jet axis position angle should be more like 72 degrees rather than 90 degrees), but the right hand side is pointing towards the Earth, with an angle of about 17 degrees between the jet and the line of sight. It is assumed that the spin vector of the black hole is aligned (or anti-aligned) with this. The two left hand plots show agreement with the observations. What they have in common is that the black hole spin vector has a component into the page (anti-aligned with the jet), so that its projected spin vector is to the left. Gas is forced to rotate in the same way and results in projected relativistic motion towards us south of the black hole and away from us north of the black hole. Doppler boosting and beaming does the rest. As the paper says: "the location of the peak flux in the ring is controlled by the black hole spin: it always lies roughly 90 degrees counterclockwise from the projection of the spin vector on the sky." EDIT: Having read a bit more, there is a marginal (1.5 sigma) discrepancy between the large scale jet orientation, which should be at about a PA of 72 degrees (measuring to the right) from North in the observations and the deduced orientation of the black hole spin axis which is around $145 \pm 55$ degrees measuring from the same datum line. | {
"source": [
"https://physics.stackexchange.com/questions/471753",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/6336/"
]
} |
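To see how strong Doppler beaming is, a minimal sketch of the boost factor $\delta = 1/\gamma(1-\beta\cos\theta)$ and the usual $\delta^{3+\alpha}$ flux scaling — the speed, angles and spectral index below are illustrative guesses, not values fitted by the EHT papers:

```python
import math

def doppler_factor(beta, theta_deg):
    """delta = 1 / (gamma * (1 - beta cos(theta)))."""
    gamma = 1 / math.sqrt(1 - beta**2)
    return 1 / (gamma * (1 - beta * math.cos(math.radians(theta_deg))))

beta, spectral_index = 0.5, 1.0                 # illustrative values only
d_approach = doppler_factor(beta, 20)           # plasma moving towards us
d_recede   = doppler_factor(beta, 160)          # same speed, moving away

print(f"flux ratio = {(d_approach / d_recede) ** (3 + spectral_index):.0f}")
# ~60 even for these modest numbers: the approaching (south) side easily
# outshines the receding side.
```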
471,920 | The latest photos of the M87 black hole capture light from around a black hole at the center of the Messier 87 galaxy, which is 16.4 Mpc ( $5.06 \times 10^{20}$ km) from our Milky Way. Why couldn't / didn't the scientists involved take photos of black holes less distant, for example those at the center of our Milky Way or Andromeda ( 0.77 Mpc ) or Triangulum Galaxy? Wouldn't these black holes appear larger and the photos have greater detail / resolution and be easier to capture? My intuition would be that maybe black holes at the center of closer galaxies aren't as large, or maybe they have more matter in the way / aren't directly aligned with our view from Earth making it harder to capture them, but I don't know for sure. | The "size" (Schwarzschild radius) of a black hole is directly proportional to its mass. The figure of merit that has to be considered, in order to resolve any spatial detail, is the angular diameter of the black hole as viewed from Earth. This will scale as $M/d$ , where $d$ is the distance. My understanding is that the black hole in the centre of M87 and Sgr A* at the centre of our Galaxy are the two black holes with the largest value of $M/d$ . Sgr A* : $M/d \sim 4\times 10^{6}/8 = 5\times 10^{5}\ M_{\odot}$ /kpc; M87 : $\ \ $ $M/d \sim 6\times 10^{9}/16\times 10^3 = 3.8\times 10^5\ M_{\odot}$ /kpc. Your suggested alternatives. Andromeda : $M/d \sim 2\times 10^8/8\times 10^2 = 2.5\times 10^5\ M_{\odot}$ /kpc; Triangulum: doesn't have a known central supermassive black hole? So Andromeda is not a crazy target. Its angular size is only 2/3 that of M87. However, another issue is how many of the 8 telescopes in the network can view Andromeda at any one time? It's impossible for the South Pole (as was M87), but also not visible for very long from Chile, so there is a reduced baseline coverage. A further crucial argument is to consider the timescale of variability in the object. In order to co-add images you need to be sure the image is stable on long enough timescales. But the timescale of variability for accretion-illumination in a black hole is proportional to its mass (see Why was M87 targeted for the Event Horizon Telescope instead of Sagittarius A*? ) and this timescale is only about 2 minutes for Sgr A* and an hour or so for Andromeda. This makes less massive black holes harder to image with this interferometric technique. | {
"source": [
"https://physics.stackexchange.com/questions/471920",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/227914/"
]
} |
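The $M/d$ figure of merit is easy to tabulate. A quick sketch reproducing the rounded numbers from the answer:

```python
candidates = {
    # name: (mass / M_sun, distance / kpc) -- rounded values from the answer
    "Sgr A*":    (4e6, 8),
    "M87":       (6e9, 16e3),
    "Andromeda": (2e8, 8e2),
}

for name, (M, d) in sorted(candidates.items(), key=lambda kv: -kv[1][0] / kv[1][1]):
    print(f"{name:10s} M/d = {M / d:.2e} M_sun/kpc")
# Sgr A* and M87 top the list; Andromeda sits at about two-thirds of M87.
```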
471,934 | I'm busy writing a train simulator. I need to calculate the fuel consumption of the locomotive. I know that Work = Force x distance. From that, I can easily use energy density and work out how many kg of fuel was used. I have a total resistance force (F res ) for the train (gradient, aerodynamic resistance, rolling resistance). I also have the force produced by the locomotive (F loc ). I can easily calculate the train's acceleration by knowing F acc , where F acc = F loc - F res . The question is this: in order to calculate the fuel used by the locomotive, do I use F acc or F loc ? I feel that the locomotive needs to use energy (its fuel) to overcome resistance. But Wikipedia ( https://en.wikipedia.org/wiki/Work_(physics) ) says that work is only done by the resultant force. This would mean that at terminal speed, the locomotive would consume no fuel, the forces are balanced, and there is no change in E K (Kinetic energy). Clearly, this sounds wrong. Please advise. | The "size" (Schwarzschild radius) of a black hole is directly proportional to its mass. The figure of merit that has to be considered, in order to resolve any spatial detail, is the angular diameter of the black hole as viewed from Earth. This will scale as $M/d$ , where $d$ is the distance. My understanding is that the black hole in the centre of M87 and Sgr A* at the centre of our Galaxy are the two black holes with the largest value of $M/d$ . Sgr A* : $M/d \sim 4\times 10^{6}/8 = 5\times 10^{5}\ M_{\odot}$ /kpc; M87 : $\ \ $ $M/d \sim 6\times 10^{9}/16\times 10^3 = 3.8\times 10^5\ M_{\odot}$ /kpc. Your suggested alternatives. Andromeda : $M/d \sim 2\times 10^8/8\times 10^2 = 2.5\times 10^5\ M_{\odot}$ /kpc; Triangulum: doesn't have a known central supermassive black hole? So Andromeda is not a crazy target. Its angular size is only 2/3 that of M87. However, another issue is how many of the 8 telescopes in the network can view Andromeda at any one time? It's impossible for the South Pole (as was M87), but also not visible for very long from Chile, so there is a reduced baseline coverage. A further crucial argument is to consider the timescale of variability in the object. In order to co-add images you need to be sure the image is stable on long enough timescales. But the timescale of variability for accretion-illumination in a black hole is proportional to its mass (see Why was M87 targeted for the Event Horizon Telescope instead of Sagittarius A*? ) and this timescale is only about 2 minutes for Sgr A* and an hour or so for Andromeda. This makes less massive black holes harder to image with this interferometric technique. | {
"source": [
"https://physics.stackexchange.com/questions/471934",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/227917/"
]
} |
472,159 | What force causes entropy to increase? I realize that the second law of thermodynamics requires the entropy of a system to increase over time. For example, gas stored in a canister, if opened inside a vacuum chamber, will expand to fill the chamber. But I'm not clear on what force, exactly, is acting upon the molecules of gas that causes them to fly out of the opened canister and fill the chamber. Just looking for a concise explanation as to what is going on at the fundamental level, since obviously, the second law of thermodynamics is not a force and therefore does not cause anything to happen. | This might not be as detailed as you want, but really all the second law says is that the most likely thing will happen. The reason we can associate certainty with something that seems random is because when we are looking at systems with such a large number of particles, states, etc. anything that is not the most likely is essentially so unlikely that we would have to wait for times longer than the age of the universe to observe it happen by chance. Therefore, as you say in your last paragraph, there is no force associated with entropy increase. It's just a statement of how systems will move towards more likely configurations. For the specific example you give of Joule expansion the (classical) gas molecules are just moving around according to Newton's laws as they collide with each other and the walls of the container. There is no force "telling" the gas to expand to the rest of the container. It's just most likely that we will end up with a uniform gas concentration in the container. | {
"source": [
"https://physics.stackexchange.com/questions/472159",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/70192/"
]
} |
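The "most likely thing happens" point can be seen in a toy model: an Ehrenfest-urn-style sketch in which a random particle hops between the two halves of the chamber each step, with no force pushing towards uniformity:

```python
import random

random.seed(1)
N, steps = 1000, 20000
left = N                     # all particles start in the opened canister

history = []
for t in range(steps):
    if random.randrange(N) < left:   # a uniformly random particle hops:
        left -= 1                    # it happened to be on the left
    else:
        left += 1                    # it happened to be on the right
    if t % 4000 == 0:
        history.append(left)

print(history)
# The count drifts from 1000 towards ~500 and then just fluctuates there:
# uniformity emerges from random motion alone, with no force involved.
```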
472,215 | In the comments for the question Falsification in Math vs Science , a dispute arose around the question "Have Newtonian mechanics been falsified?" That's a bit of a vague question, so attempting to narrow it a bit: Are any of Newton's three laws considered to be 'falsified theories' by any 'working physicists'? If so, what evidence do they have that they believe falsifies those three theories? If the three laws are still unfalsified, are there any other concepts that form a part of "Newtonian Mechanics" that we consider to be falsified? | "Falsified" is more a philosophical than a scientific distinction.
Newton's laws have been falsified somehow, but we still use them, since usually they are a good approximation, and are easier to use than relativity or quantum mechanics. The "action at a distance" of Newtonian potentials has been falsified (finite speed of light...) but again, we use it every day. So, in practical terms, no, Newton's laws are still not falsified, in the sense that they are not totally discredited in the scientific community. Classical mechanics is still in the curriculum of all universities, in a form more or less identical to that of 200 years ago (before relativity, quantum mechanics, field theory). Most concepts in physics fit more in the category of "methods" rather than "paradigms", so they can be used over and over again. And all current methods and laws fail and give "false" results when used outside their range of applicability. The typical example of a "falsified" theory is the Ptolemaic system of the Sun & planets rotating around the Earth. However, philosophers usually omit the facts that: the Ptolemaic system was experimentally pretty good at calculating planet motions; most mathematical and experimental methods of the new heliocentric paradigm are the same as those of the old Ptolemaic one. So the falsification was more about the point of view than about the methods. | {
"source": [
"https://physics.stackexchange.com/questions/472215",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/45805/"
]
} |
472,256 | Suppose I apply some force on one side of a hydraulic lift where the area is less, and the fluid in the lift raises some heavier object on the other side where the area is more. Now, work done is $\text{Force}\times \text{Displacement}$ and the displacement on both sides is the same (incompressible liquid) but the force on one side is less, so we get more energy on the other side. Then why doesn't the law of conservation of energy fail here? | The displacement on the two sides is not the same. If on one side of the lift the area is $A_1$ , and on the other side it is $A_2$ , and we apply a force $F_1$ on one side through a distance $d_1$ , then the volume decrease on that side is $A_1 d_1$ . An equal volume will rise on the other side. So $$A_1 d_1=A_2 d_2$$ $A_1 \not= A_2$ , so $d_1 \not=d_2$ . Actually, we need to apply the smaller force $F_1$ over a greater distance $d_1$ . | {
"source": [
"https://physics.stackexchange.com/questions/472256",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/226827/"
]
} |
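A quick numerical check of this bookkeeping (the areas, force and stroke are made-up illustrative values):

```python
A1, A2 = 0.01, 0.25    # piston areas, m^2
F1, d1 = 100.0, 0.50   # force (N) and stroke (m) applied on the small side

d2 = A1 * d1 / A2      # incompressibility: A1*d1 = A2*d2
F2 = F1 * A2 / A1      # equal pressure on both sides: F1/A1 = F2/A2

print(f"output: {F2:.0f} N over only {100 * d2:.1f} cm")
print(f"work in  = {F1 * d1:.2f} J")
print(f"work out = {F2 * d2:.2f} J")   # identical -- energy is conserved
```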
472,323 | How can we confirm it just by looking at the image? I heard the news saying "Einstein was right! Black hole image confirms GTR." The image is so lacking in detail that I can't draw any firm conclusions from it. Please correct me if I'm wrong on any aspect. Please provide a link if this question sounds like a duplicate... | I think it's fair to say that the EHT image definitely is consistent with GR, and so GR continues to agree with experimental data so far. The leading paper in the 10th April 2019 issue of Astrophysical Journal letters says (first sentence of the 'Discussion' section): A number of elements reinforce the robustness of our image and the conclusion that it is consistent with the shadow of a black hole as predicted by GR. I'm unhappy about the notion that this 'confirms' GR: it would be more correct to say that GR has not been shown to be wrong by this observation: nothing can definitively confirm a theory, which can only be shown to agree with experimental data so far. This depends of course on the definition of 'confirm': above I am taking it to mean 'shown to be correct' which I think is the everyday usage and the one implied in your question, and it's that meaning I object to. In particular it is clearly not the case that this shows 'Einstein was right': it shows that GR agrees with experiment (extremely well!) so far , and this and LIGO both show (or are showing) that GR agrees with experiment in regions where the gravitational field is strong. (Note that, when used informally by scientists, 'confirm' very often means exactly 'shown to agree with experiment so far' and in that sense GR has been confirmed (again) by this observation. I'm assuming that this is not the meaning you meant however.) At least one other answer to this question is excellent and very much worth reading in addition to this. | {
"source": [
"https://physics.stackexchange.com/questions/472323",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
473,061 | I noticed that I have been bending my book all along, when I was reading it with one hand. This also works for plane flexible sheets of any material. (Illustration using an A4 sheet: one photo without bending the sheet, and one with a bend along the perpendicular axis.) How do you explain this sturdiness, that comes only when the object is bent along the perpendicular axis ? I feel that this is a problem related to the elastic properties of thin planes. But any other versions are also welcome. | Understanding why this works turns out to be quite deep. This answer is kind of a long story, but there's no maths. At the end ('A more formal approach') there is an outline of how the maths works: skip to that if you don't want the story. Insect geometry Consider a little insect or something who lives on the surface of the paper. This insect can't see off the paper, but it can draw straight lines and measure angles on the paper. How does it draw straight lines? Well it does it in two ways: either it takes two points, draws lines between them on the paper, and finds the shortest line between them, which it calls 'straight'; or alternatively it draws a line in such a way that it is parallel to itself and calls this 'straight'. There is a geometrical trick for constructing such 'parallel-to-themselves' lines which I won't go into. And it turns out that these two sorts of lines are the same. I'm not sure how it measures angles: perhaps it has a little protractor. So now our insect can do geometry. It can draw various triangles on the paper, and it can measure the angles at the corners of these triangles. And it's always going to find that the angles add up to $\pi$ ( $180^\circ$ ), of course. You can do this too, and check the insect's results, and many people do just this at school. The insect (let's call it 'Euclid') can develop an entire system of geometry on its sheet of paper, in fact. Other insect artists will make pictures and sculptures of it, and the book on geometry it writes will be used in insect schools for thousands of years. In particular the insect can construct shapes out of straight lines and measure the areas inside them and develop a bunch of rules for this: rectangles have areas which are equal to $w \times h$ for instance. I didn't specify something above: I didn't tell you if the paper was lying flat on a desk, or if it was curved in your hand. That's because it does not matter to the insect : the insect can't tell whether we think the paper is curved, or whether we think it's flat: the lines and angles it measures are exactly the same . And that's because, in a real sense, the insect is right and we're wrong: the paper is flat, even when we think it's curved . What I mean by this is that there is no measurement you can do, on the surface of the paper which will tell you if it is 'curved' or 'flat'. So now shake the paper, and cause one of the insects to fall off and land on a tomato. This insect starts doing its geometry on the surface of the tomato, and it finds something quite shocking: on a small scale everything looks OK, but when it starts trying to construct large figures things go horribly wrong: the angles in its triangles add up to more than $\pi$ . Lines which start parallel, extended far enough, meet twice, and there is in fact no global notion of parallelism at all . And when it measures the area inside shapes, it finds it is always more than it thinks it should be: somehow there is more tomato inside the shapes than there is paper. 
The tomato, in fact, is curved : without ever leaving the surface of the tomato the insect can know that the surface is somehow deformed. Eventually it can develop a whole theory of tomato geometry, and later some really smart insects with names like 'Gauss' and 'Riemann' will develop a theory which allows them to describe the geometry of curved surfaces in general: tomatoes, pears and so on. Intrinsic & extrinsic curvature To be really precise, we talk about the sheet of paper being 'intrinsically flat' and the surface of the tomato being 'intrinsically curved': what this means is just that, by doing measurements on the surface alone we can tell if the rules of Euclidean geometry hold or not. There is another sort of curvature which is extrinsic curvature: this is the kind of curvature which you can measure only by considering an object as being embedded into some higher-dimensional space. So in the case of sheets of paper, the surfaces of these are two dimensional objects embedded in the three dimensional space where we live. And we can tell whether these surfaces are extrinsically curved by constructing normal vectors to the surfaces and checking whether they all point in the same direction. But the insects can't do this: they can only measure intrinsic curvature. And, critically, something can be extrinsically curved while being intrinsically flat. (The converse is not true, at least in the case of paper: if it's intrinsically curved it's extrinsically curved as well.) Stretching & compressing There's a critical thing about the difference between intrinsically flat and intrinsically curved surfaces which I've mentioned in passing above: the area inside shapes is different . What this means is that the surface is stretched or compressed: in the case of the tomato there is more area inside triangles than there is for flat paper. What this means is that, if you want to take an intrinsically flat object and deform it so that it is intrinsically curved, you need to stretch or compress parts of it: if we wanted to take a sheet of paper and curve it over the surface of a sphere, then we would need to stretch & compress it: there is no other way to do it. That's not true for extrinsic curvature: if I take a bit of paper and roll it into a cylinder, say, the surface of the paper is not stretched or compressed at all. (In fact, it is a bit because paper is actually a thin three-dimensional object, but ideal two-dimensional paper is not.) Why curving paper makes it rigid Finally I can answer the question. Paper is pretty resistant to stretching & compressing: if you try and stretch a (dry) sheet of paper it will tear before it has streched really at all, and if you try and compress it it will fold up in some awful way but not compress. But paper is really thin so it is not very resistant to bending (because bending it only stretches it a tiny tiny bit, and for our ideal two dimensional paper, it doesn't stretch it at all). What this means is that it's easy to curve paper extrinsically but very hard to curve it intrinsically . And now I will wave my hands a bit: if you curve paper into a 'U' shape as you have done, then you are curving it only extrinsically: it's still intrinsically flat. So it doesn't mind this, at all. But if it starts curving in the other direction as well, then it will have to curve intrinsically : it will have to stretch or compress. 
It's easy to see this just by looking at the paper: when it's curved into a 'U' then to curve it in the other direction either the top of the 'U' is going to need to stretch or the bottom is going to need to compress. And this is why curving paper like that makes it rigid: it 'uses up' the ability to extrinsically curve the paper so that any further extrinsic curvature involves intrinsic curvature too, which paper does not like to do. Why all this is important As I said at the start, this is quite a deep question. The mathematics behind this is absolutely fascinating and beautiful while being relatively easy to understand once you have seen it. If you understand it you get some kind of insight into how the minds of people like Gauss worked, which is just lovely. The mathematics and physics behind it turns out to be some of the maths that you need to understand General Relativity, which is a theory all about curvature. So by understanding this properly you are starting on the path to understanding the most beautiful and profound theory of modern physics (I was going to write 'one of the most ...' but no: there's GR and there's everything else). The mathematics and physics behind it also is important in things like engineering: if you want to understand why beams are strong, or why car panels are rigid you need to understand this stuff. And finally it's the same maths : the maths you need to understand various engineered structures is pretty close to the maths you need to understand GR: how cool is that? A more formal approach: a remarkable theorem The last section above involved some handwaving: the way to make it less handwavy is due to the wonderful Theorema Egregium ('remarkable theorem') due to Gauss. I don't want to go into the complete detail of this (in fact, I'm probably not up to it any more), but the trick you do is, for a two dimensional surface you can construct the normal vector $\vec{n}$ in three dimensions (the vector pointing out of the surface), and you can consider how this vector changes direction (in three dimensions) as you move it along various curves on the surface. At any point in the surface there are two curves which pass through it: one on which the vector is changing direction fastest along the curve, and one along which it is changing direction slowest (this follows basically from continuity). We can construct a number, $r$ , which describes how fast the vector is changing direction along a curve (I've completely forgotten how to do that, but I think it's straightforward), and for these two maximum & minimum curves we can call the two rates $r_1$ and $r_2$ . $r_1$ & $r_2$ are called the two principal curvatures of the surface. Then the quantity $K = r_1r_2$ is called the Gaussian curvature of the surface, and the theorema egregium says that this quantity is intrinsic to the surface: you can measure it just by measuring angles et cetera on the surface. The reason the theorem is remarkable is that the whole definition of $K$ involved things which are extrinsic to the surface, in particular the two principal curvatures. Because $K$ is intrinsic, our insects can measure it ! Euclidean geometry is true (in particular the parallel postulate is true) for surfaces where $K = 0$ only. And we can now be a bit more precise about the whole 'stretching & compressing' thing I talked about above. 
If we're not allowed to stretch & compress the sheet of paper, then all the things we are allowed to do to it don't alter any measurement that the insects can do: lengths or angles which are intrinsic, that is to say measured entirely in the surface of the paper, can't change unless you stretch or compress the paper. Changes to the paper which preserve these intrinsic properties are called isometries . And since $K$ is intrinsic it is not altered by isometries. Now consider a sheet of paper which is flat in three dimensions. It's obvious that $r_1 = r_2 = 0$ (the normal vector always points in the same direction). So $K = 0$ . Now fold the paper in a 'U' shape: now it's clear that $r_1 \ne 0$ -- if you draw a curve across the valley in the paper then the normal vector from that curve changes direction. But this folding is an isometry: we didn't stretch or compress the paper. So $K$ must still be $0$ : the paper is still intrinsically flat. But since $K = r_1r_2$ and $r_1 \ne 0$ this means that $r_2 = 0$ . And what this means is that the other principal curvature must be zero. This principal curvature is along the line that goes down the valley of the 'U'. In other words the paper can't bend in the other direction without becoming intrinsically curved ( $K \ne 0$ ), which means it needs to stretch. (I have still handwaved a bit here: I have not defined how you compute $r$ , and I've not shown that there is not some other curve you can draw along the paper which has $r = 0$ apart from the obvious one.) One of the reasons that this is all quite interesting is that this maths is the beginning of the maths you need to understand General Relativity, which also is about curvature. Failure and folding Of course, if you take the U-shaped bit of paper and try to bend it in the other direction at some point it will fail suddenly and become folded in some complicated way. I think there's a whole area of study which thinks about that. I suspect that when this happens (during the sudden failure, not after it I think) there must be, locally, non-zero intrinsic curvature at places on the paper. I'm sure there is a lot of interesting maths about this (apart from anything else it must be very interesting for engineered structures), but I don't know it. | {
"source": [
"https://physics.stackexchange.com/questions/473061",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/155230/"
]
} |
473,330 | Let's say I was at the very center of the enormous Boötes void , way out in deep, deep space. What could I see with the naked eye? I assume I could see no individual stars, but could I resolve any galaxies? If I gazed in the direction of a super-cluster of galaxies would it seem brighter than other directions? How dark would it be compared to, say, the far side of the moon when it is a full moon on earth? I am told there are, in fact, a few galaxies in the void. So let's say I pick a spot in the void that is as far from any of those galaxies as possible. | Individual sources The number density of galaxies in a void is typically an order of magnitude lower than the average in the Universe (e.g. Patiri et al. 2006 ). In this astronomy.SE post , I estimate the number density of galaxies of magnitude $M=-17$ or brighter in the Boötes Void to be $n \sim 0.004\,\mathrm{Mpc}^{-3}$ , or $10^{-4}\,\mathrm{Mlyr}^{-3}$ (i.e. "per cubic mega-light-year"). Hence, the typical distance to a galaxy from a random point in the Boötes Void is $$
d = \left( \frac{3}{4\pi n} \right)^{1/3} \simeq 13\,\mathrm{Mlyr}.
$$ Although some galaxies will be brighter than $M=-17$ , the number density declines fast with brightness; for instance, galaxies that are 10 times brighter are roughly 100 times rarer, meaning that the nearest one is on average 5 times more distant, and the factor of 25 dimming from that extra distance outweighs the factor of 10 in luminosity, so it still appears fainter. On the other hand, the number density of galaxies fainter than $M=-17$ doesn't increase that fast (in astronomish: $-17$ is close to $M^*$ ; "M-star"). So for the sake of this calculation, let's assume that the closest galaxy is an $M=-17$ galaxy at a distance of $13\,\mathrm{Mlyr}$ . That distance corresponds to a distance modulus of $\mu \simeq 28$ , so the apparent magnitude of the galaxy would be $$
m = M + \mu \simeq 11.
$$ Typically, humans cannot see objects darker than $m \simeq 6.5$ (the magnitude scale is backwards, so darker means "larger values than 6.5"), although some have claimed to be able to see $m\simeq8$ — still an order of magnitude brighter than the $m=11$ estimated above. Moreover, this threshold assumes point sources, whereas a galaxy has its brightness smeared out over a quite large area, lowering its surface brightness significantly!$^\dagger$ Note also that, as in the rest of the Universe, galaxies in voids are not completely randomly scattered throughout space, but tend to cluster in clumps and filaments, and that the number density is smaller in the center of the void, meaning that here the typical distance to the next galaxy is larger. Hence, you would — at a random position in the Boötes Void — be most likely to be floating in complete darkness. $^\ddagger$ Background radiation The combined light from all astrophysical and cosmological sources comprises a cosmic background radiation (CBR), meaning that at any time your eye does indeed receive photons across the entire electromagnetic spectrum. Thus the term "complete darkness" may be debated. On average, this background is dominated by the cosmic microwave background (if you're close to a star or a galaxy, those sources will dominate, but then it isn't really a "background" any longer). In this answer , I estimate the total background in the visible region (from sources outside the Milky Way) to be roughly $3.6\times10^{-8}\,\mathrm{W}\,\mathrm{m}^{-2}$ . If I've done my maths right, this corresponds to the light from a 25 W light bulb, smeared out over a 15 km diameter sphere with you in the center. The Boötes Void would have an even lower background than this. I'm not a physiologist, but I think this qualifies as "complete darkness" (to the human eye; not to a telescope). $^\dagger$ For instance, the Andromeda galaxy has an apparent magnitude of $m=3.44$ which, if its light were concentrated in a point, would make it easily visible even under light-polluted conditions. $^\ddagger$ Your eye might be able to detect individual photons, as stated in Árpád Szendrei's answer, but that hardly counts as "seeing anything".
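The arithmetic above is easy to reproduce; here is a quick sketch (the constants and rounding are my own):

    import math

    n = 1e-4                                     # galaxies per cubic mega-light-year
    d_Mlyr = (3 / (4 * math.pi * n)) ** (1 / 3)  # typical distance: ~13 Mlyr
    d_pc = d_Mlyr * 1e6 / 3.26                   # 1 pc is about 3.26 lyr
    mu = 5 * math.log10(d_pc / 10)               # distance modulus: ~28
    m = -17 + mu                                 # apparent magnitude: ~11

    # the 25 W light-bulb comparison for the visible background
    flux = 3.6e-8                                # W / m^2
    r = math.sqrt(25 / (4 * math.pi * flux))     # sphere radius giving that flux
    print(d_Mlyr, mu, m, 2 * r / 1000)           # ~13.4, ~28.1, ~11.1, ~15 km

| {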
"source": [
"https://physics.stackexchange.com/questions/473330",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/218022/"
]
} |
473,566 | Light exhibits wave behaviour in phenomena such as interference but particle behaviour in the photoelectric effect. How does light 'choose' where to be a wave and where to be a particle? | In fact, light is not really a wave or a particle. It is what it is; it's this strange thing that we model as a wave or a particle in order to make sense of its behaviour, depending on the scenario of interest. At the end of the day, it's the same story with all theories in physics. Planets don't "choose" to follow Newtonian mechanics or general relativity. Instead, we can model their motion as Newtonian if we want to calculate something like where Mars will be in 2 weeks, but need to use general relativity if we want to explain why the atomic clock on a satellite runs slow compared to one on the ground. Light doesn't "choose" to be a wave or a particle. Instead, we model it as a wave when we want to explain (or calculate) interference, but need to model it as a particle when we want to explain (or calculate) the photoelectric effect. | {
"source": [
"https://physics.stackexchange.com/questions/473566",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/228644/"
]
} |
473,569 | I've started asking myself: What is the value of the electric field at the surface of a shell with uniform distribution? In particular, is it infinite, as this point is over an (infinitesimal) charge? After some googling, it seems the correct answer is $E=\frac{k_eQ}{2R^2}$ , where R is the radius of the shell and Q its total charge. It is not infinite (why?), nor $0$ nor $\frac{k_eQ}{R^2}$ . Still on the same topic, I've tried to compare the expression for the total energy of a system of charged particles with the expression for a continuous system. According to wiki and others, the expression for particles is: $$
U = \frac{1}{2} k_e\sum_{i=1}^N q_i \sum_{j=1}^{N(j\ne i)} \frac{q_j}{r_{ij}}.
$$ Note the $j \ne i$ in second summatory, that means that, for a single charge, $U=0$ , as explained in the comment: A common question arises concerning the interaction of a point charge
with its own electrostatic potential. Since this interaction doesn't
act to move the point charge itself, it doesn't contribute to the
stored energy of the system. However, for a continuous system, the expression is: $$
U = \frac{1}{2} \int_V \rho \Phi dV
$$ (is there no equivalent of the $i \ne j$ restriction here?) a) if I describe a particle as $\rho(v)=Q\delta(v)$ , then: $$
U = \frac{1}{2} \int_V \rho \Phi dV = \frac{1}{2} \int_V Q\delta(v) \frac{k_eQ}{r} dV = \frac{k_eQ^2}{2r}\Big|_{r=0} = \infty
$$ b) avoiding Dirac's delta , I try to define a particle as a shell in the limit $R \rightarrow 0$ . Inside and on the surface of a shell the potential is constant, $\frac{k_eQ}{R}$ : $$
U = \frac{1}{2} \int_V \rho \Phi dV = \frac{1}{2} \int_V \rho \frac{k_eQ}{R} dV = \frac{k_eQ^2}{2R}
$$ that, again, gives $U \rightarrow \infty$ when $R \rightarrow 0$ . (Same result, with a longer proof, if I define a particle as a uniformly charged sphere.) Thus, my questions are: why is the field at the surface of a charged sphere (uniform or shell) not infinite? Why don't the continuous and discrete expressions for the total energy give the same result? | | {
"source": [
"https://physics.stackexchange.com/questions/473569",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/164884/"
]
} |
474,105 | Does the universe have a fixed centre of mass?
If it does, doesn't it necessarily mean that every action of ours has to be balanced by a counteraction somewhere in the universe so as to neutralize the imbalance of mass? | As far as we know, the universe does not have a centre of mass because it does not have a centre. One of the basic assumptions we use when describing the universe is that, on average, it is the same everywhere. This is called the cosmological principle . While this is only an assumption, the evidence we have from observing the universe suggests that it is true. This can seem a bit odd if you have the idea that the Big Bang happened at a point and the Big Bang blasted the universe outwards from that point. But the Big Bang did not happen at a point; it happened everywhere in the universe at the same time. For more on this, see Did the Big Bang happen at a point? It is certainly true that every action of ours has to be balanced by a counteraction because this is just Newton's third law. If I apply a force on you then you apply an equal and opposite force on me, so if we were floating in space our combined centre of mass would not change. So while it does not make sense to ask about the centre of mass of the universe, we can ask what happens on a smaller scale, and we find that unless some external force is being applied the centre of mass of a system cannot change. | {
"source": [
"https://physics.stackexchange.com/questions/474105",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/228864/"
]
} |
474,449 | I am taking AP Physics right now (I'm a high school student) and we are learning about circuits, current, resistance, voltage, Ohm's Law, etc. I am looking for exact definitions of what current, voltage, and resistance are. My teacher, as I'm sure most physics teachers do, compared a wire with current flowing through it to a pipe with water flowing through it. The thinner the pipe, the more 'resistance'. The more water pressure, the more 'voltage'. And the faster the water travels, the higher the 'current'. I took these somewhat literally, and assumed that current is literally the velocity of electrons, voltage is the pressure, etc. My physics teacher said that the analogy to the water pipe is only really used for illustrative purposes. I'm trying to figure out exactly what current, resistance, and voltage are, because I can't really work with a vague analogy that kind of applies and kind of doesn't. I did some research, and found this page which provided a decent explanation, but I was slightly lost in the explanation given. Let me know if this question has already been asked (again, remember: I don't want an analogy, I want a concrete definition). Edit 1: Note, when I say 'exact definition' I simply mean a definition that does not require an analogy. To me, an exact definition for a term applies to every use case. Whether I am talking about a series circuit, a parallel circuit, or electric current within a cell, the 'exact definition' should apply to all of them and make sense. | Before explaining current, we need to know what charge is, since current is the rate of flow of charge. Charge is measured in coulombs. Each coulomb IS a large group of electrons: roughly 6.24 × 10^18 of them. The "rate of flow" of charge is simply charge/time, and this calculation for a circuit gives you the number of coulombs that went past a point in a second. This is just what current is. Resistance is a circuit's resistance to current; it is, like you said, measured in ohms, but it is caused by the vibrations of atoms in a circuit's wire and components, which result in collisions with electrons, making charge passage difficult. This increases with an increase in temperature of the circuit, as the atoms of the circuit have more kinetic energy to vibrate with. Voltage is the energy in joules per coulomb of electrons. This is shown through the equation E=QV, where the ratio of energy to charge equals voltage. This is provided by the battery, which pushes coulombs of electrons with what we call electromotive force. However, when it is said that the potential difference across a component is X volts, it means that each coulomb is giving X joules of energy to that component. Note: if an equation doesn't make intuitive sense to you, chances are it is a complicated derivation, and to understand it you'll have to learn its derivation.
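To make these definitions concrete, here's a toy calculation (the numbers are invented for illustration):

    ELECTRONS_PER_COULOMB = 6.24e18

    charge = 3.0                  # coulombs that passed a point in the wire
    time = 2.0                    # seconds it took them
    current = charge / time       # definition of current: 1.5 coulombs/second = 1.5 A

    voltage = 9.0                 # joules given to each coulomb (a 9 V battery)
    energy = charge * voltage     # E = QV: 27 J delivered in total

    print(current, energy, charge * ELECTRONS_PER_COULOMB)

| {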
"source": [
"https://physics.stackexchange.com/questions/474449",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/229027/"
]
} |
476,175 | The three-dimensional Laplacian can be defined as $$\nabla^2=\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}.$$ Expressed in spherical coordinates, it does not have such a nice form. But I could define a different operator (let's call it a "Laspherian") which would simply be the following: $$\bigcirc^2=\frac{\partial^2}{\partial \rho^2}+\frac{\partial^2}{\partial \theta^2}+\frac{\partial^2}{\partial \phi^2}.$$ This looks nice in spherical coordinates, but if I tried to express the Laspherian in Cartesian coordinates, it would be messier. Mathematically, both operators seem perfectly valid to me. But there are so many equations in physics that use the Laplacian, yet none that use the Laspherian. So why does nature like Cartesian coordinates so much better? Or has my understanding of this gone totally wrong? | Nature appears to be rotationally symmetric, favoring no particular direction. The Laplacian is the only translationally-invariant second-order differential operator obeying this property. Your "Laspherian" instead depends on the choice of polar axis used to define the spherical coordinates, as well as the choice of origin. Now, at first glance the Laplacian seems to depend on the choice of $x$ , $y$ , and $z$ axes, but it actually doesn't. To see this, consider switching to a different set of axes, with associated coordinates $x'$ , $y'$ , and $z'$ . If they are related by $$\mathbf{x} = R \mathbf{x}'$$ where $R$ is a rotation matrix, then the derivative with respect to $\mathbf{x}'$ is, by the chain rule, $$\frac{\partial}{\partial \mathbf{x}'} = \frac{\partial \mathbf{x}}{\partial \mathbf{x}'} \frac{\partial}{\partial \mathbf{x}} = R \frac{\partial}{\partial \mathbf{x}}.$$ The Laplacian in the primed coordinates is $$\nabla'^2 = \left( \frac{\partial}{\partial \mathbf{x}'} \right) \cdot \left( \frac{\partial}{\partial \mathbf{x}'} \right) = \left(R \frac{\partial}{\partial \mathbf{x}} \right) \cdot \left(R \frac{\partial}{\partial \mathbf{x}} \right) = \frac{\partial}{\partial \mathbf{x}} \cdot (R^T R) \frac{\partial}{\partial \mathbf{x}} = \left( \frac{\partial}{\partial \mathbf{x}} \right) \cdot \left( \frac{\partial}{\partial \mathbf{x}} \right)$$ since $R^T R = I$ for rotation matrices, and hence is equal to the Laplacian in the original Cartesian coordinates. To make the rotational symmetry more manifest, you could alternatively define the Laplacian of a function $f$ in terms of the deviation of that function $f$ from the average value of $f$ on a small sphere centered around each point. That is, the Laplacian measures concavity in a rotationally invariant way. This is derived in an elegant coordinate-free manner here . The Laplacian looks nice in Cartesian coordinates because the coordinate axes are straight and orthogonal, and hence measure volumes straightforwardly: the volume element is $dV = dx dy dz$ without any extra factors. This can be seen from the general expression for the Laplacian, $$\nabla^2 f = \frac{1}{\sqrt{g}} \partial_i\left(\sqrt{g}\, \partial^i f\right)$$ where $g$ is the determinant of the metric tensor. The Laplacian only takes the simple form $\partial_i \partial^i f$ when $g$ is constant. Given all this, you might still wonder why the Laplacian is so common. 
It's simply because there are so few ways to write down partial differential equations that are low-order in time derivatives (required by Newton's second law, or at a deeper level, because Lagrangian mechanics is otherwise pathological ), low-order in spatial derivatives, linear, translationally invariant, time invariant, and rotationally symmetric. There are essentially only five possibilities: the heat/diffusion, wave, Laplace, Schrodinger, and Klein-Gordon equations, and all of them involve the Laplacian. The paucity of options leads one to imagine an "underlying unity" of nature, which Feynman explains in similar terms : Is it possible that this is the clue? That the thing which is common to all the phenomena is the space, the framework into which the physics is put? As long as things are reasonably smooth in space, then the important things that will be involved will be the rates of change of quantities with position in space. That is why we always get an equation with a gradient. The derivatives must appear in the form of a gradient or a divergence; because the laws of physics are independent of direction, they must be expressible in vector form. The equations of electrostatics are the simplest vector equations that one can get which involve only the spatial derivatives of quantities. Any other simple problem—or simplification of a complicated problem—must look like electrostatics. What is common to all our problems is that they involve space and that we have imitated what is actually a complicated phenomenon by a simple differential equation. At a deeper level, the reason for the linearity and the low-order spatial derivatives is that in both cases, higher-order terms will generically become less important at long distances. This reasoning is radically generalized by the Wilsonian renormalization group, one of the most important tools in physics today. Using it, one can show that even rotational symmetry can emerge from a non-rotationally symmetric underlying space, such as a crystal lattice. One can even use it to argue the uniqueness of entire theories, as done by Feynman for electromagnetism . | {
"source": [
"https://physics.stackexchange.com/questions/476175",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/165384/"
]
} |
476,554 | So I was going through Callen's thermodynamics book and there he says that thermodynamics is only applicable to systems which are in equilibrium, and that naturally raised a few questions in my mind. Is thermodynamics really never applicable to systems which are not in equilibrium, and if so, why should such a restriction exist? And also, it might sound silly, but why is the theory called "thermodynamics" - specifically the "dynamics" part? | It entirely depends on what you think "thermodynamics" is. The traditional idea of thermodynamics dealing with systems whose macrostate can be fully described by e.g. temperature, pressure and volume indeed only applies to systems in equilibrium. Of course, as an approximation it also applies to systems "not far" from equilibrium, for some suitable notion of "not far", which explains its success in describing a plethora of phenomena that occur in the real world. However, non-equilibrium thermodynamics also exists, and is alive and well as a subfield of both classical and quantum physics. Its methods, however, differ strongly from what is commonly referred to as "thermodynamics" in introductory textbooks. | {
"source": [
"https://physics.stackexchange.com/questions/476554",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/222265/"
]
} |
477,214 | It is being said that gravitational-wave detectors are now able to distinguish neutron star waves from those originating from black holes. Two Questions: How do LIGO and Virgo know that a gravitational wave has its origin in a neutron star or a black hole, if their gravitational fields, except for their intensity, are identical in that space beyond the radius that defines them? Is this identification accurate and reliable? | The most obvious — though possibly least convincing — way is by noting the "mass gap": the heaviest neutron stars we know of (by other means) are lighter than 3 solar masses, while the lightest black holes we know of (by other means) are heavier than 5 solar masses. So if the constituents of a binary that LIGO detects have masses in one group or the other, LIGO/Virgo folks sort of expect that the objects are really in that group. If you look at the current confirmed detections (shown in the image below), you'll notice that there is indeed a significant gap between the masses of the neutron stars and the masses of the black holes. But part of LIGO/Virgo's job is to look for things that we can't find by other means, which might show us that there are lighter black holes (BHs) or heavier neutron stars (NSs) than we expect otherwise. So they don't stop there. It's also possible to look for "tidal effects". Before two NSs (or one NS and one BH) actually touch, the matter in the neutron star will get distorted in ways that a black hole can't. The build up of this distortion takes energy, which comes out of the orbital energy of the binary, and that loss of energy imprints itself on the orbital motion — most prominently, on the "phase" of the binary which is the most accurately detected aspect of the inspiral. So when the OP says the BH and NS "gravitational fields, except for their intensity, are identical in that space beyond the radius that defines them", that's not quite true. It's true for isolated nonspinning objects (thanks to Birkhoff's theorem , which I guess is what the OP was thinking of), but it's not true for objects in binaries, and not once you get below the radius of the NS. That brings up another important difference: NSs merge (basically) when their matter comes into contact with the other member of the binary, which is significantly earlier than BHs come into contact with each other. The BH radius is much smaller than the NS radius, so essentially a pair of BHs get to keep going for a while, going faster and faster than they would if a NS were present. This talk of distances is a bit imprecise, so it's better to talk about the GW signal observed at large distances from the binary (e.g., on Earth). You could — in principle — see this effect in the GW signal where the BH signal would just keep getting faster and stronger after the NS signal "shuts off". Of course, it's not really shut off; complicated stuff happens after NSs merge. After the objects merge, they continue to exhibit huge differences. For example, if there's a NS involved, some matter can get flung out in a "tail" or into a disc around the central remnant. This extra motion of the matter (that wouldn't happen if there were only BHs) can generate its own gravitational waves, which could possibly be detected directly. More likely, the NS will "smear out" and just not be as good at emitting gravitational waves, so the peak amplitude will be smaller. However, after BHs merge, we know that they "ringdown" exponentially quickly. 
Basically, BHs have a very fast, simple, and well understood ringdown phase, whereas NSs have a messy and non-exponential aftermath. For example, we frequently talk about "mountains" on NSs afterward, which will continue to spin and give off sort of mildly damped but mostly continuous waves. Of course, it is possible that a merger with one or two NSs will end up with a single BH at the end, which will also ringdown, but before or in addition to that, we expect a lot of other complicated stuff to happen. [Note that the binary NS merger shown in the figure below ends up in a question mark, meaning that we're not entirely sure whether the remnant is a huge NS or a tiny BH.] I should explain that these merger and post-merger effects happen at pretty high frequencies (because NSs are relatively low-mass objects), whereas LIGO and Virgo start to become much less sensitive as you go to higher frequencies (because at high frequencies there just aren't enough photons arriving at the interferometer's output; the number of photons per period, say, becomes quite random and therefore noisy). So it's not entirely clear whether or not we'll be able to see the "shutoff" or "mountains" with current detectors. A lot depends on unknown physics, and our ability to create good models for the signals given off by merging NSs. But it is true that we have not yet seen any direct evidence for them as of early 2019. So the last two items I described have not yet featured in claims about whether any source involves NSs or BHs. But one thing that will tell us for sure if there was much matter involved — and was the reason we were so sure about the binary NS LIGO/Virgo announced in 2017 — is the presence of electromagnetic signals. Obviously, a pair of BHs on their own won't give off any obvious electromagnetic signal, whereas those huge amounts of matter when a NS is involved should give off some signal. If we detect an electromagnetic "counterpart", we can be much more confident that there was a lot of matter involved; if we don't detect any, it's unlikely that there was much matter in the system. So there's no one piece of evidence that proves beyond doubt that there were only NSs or only BHs involved, but a collection of evidence that points in that direction. And really, how sure we are of the conclusion depends on a lot of factors. If the signal is very "loud" and clear, and the masses are very far from the mass gap, we can be particularly sure about our conclusions. But if the signal is from a source that's very far away, or is otherwise hard to measure, and if the masses are close to that mass gap, then we wouldn't be too sure about our conclusions. For all the systems confirmed so far, I think it's fair to say that most GW astronomers are extremely confident in the conclusions, but there are certainly more detections on the way that will be much more uncertain. | {
"source": [
"https://physics.stackexchange.com/questions/477214",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/198107/"
]
} |
477,228 | When we apply a constant force on the massless string as described in the left image, the tension
T = F. My doubt: isn't applying a constant force similar to having a block $m_2$ hanging, as in the second image? What is the difference between applying a constant force and hanging a block instead of it? | | {
"source": [
"https://physics.stackexchange.com/questions/477228",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/218058/"
]
} |
477,403 | I'm learning a bit about sound and was wondering: If the speed of sound is determined by the amount of matter the source is surrounded with, why doesn't it go through a wall? Example: Speed of sound in air is 343 m/s but in water, it moves at 1500 m/s because of the increase of matter surrounding it. And since iron has more tightly packed matter, sound moves even faster through it, because the matter passes the vibrations along. If this is true, why doesn't the sound go through walls? Is it because it loses its "strength" over the distance it travels? | Sound doesn't go through walls? Please tell my neighbor. In electromagnetism, a medium has a property called an "impedance" which is related to the index of refraction and the speed of waves in the medium. At an interface between two media, the relative impedances determine how much of an incoming wave is transmitted or reflected, so that the entire power of the incoming wave goes somewhere. At an "impedance-matched" interface the reflection coefficient goes to zero. In signal cables and waveguides for electromagnetic waves, this leads to people adding "terminating resistors" in various places, so that an incoming signal doesn't get reflected back from a cable junction. Conversely, at a junction with an impedance mismatch, the reflection coefficient is generally nonzero and not all of the power is transmitted. You can do the same sort of analysis for sound waves moving from one medium to another. The reflection and transmission coefficients can depend on the frequency of the wave, as well, which is why my neighbor complains when I have my music turned up too loud: they can hear the low-frequency bass sounds just fine through the wall, but the high-frequency components (that they'd need to follow the lyrics) don't reach them.
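To put numbers on this: for sound at normal incidence on an interface between media with acoustic impedances $Z_1 = \rho_1 c_1$ and $Z_2 = \rho_2 c_2$ , the fraction of the incoming intensity that gets through is $T = 4Z_1Z_2/(Z_1+Z_2)^2$ . A rough sketch with ballpark handbook values I've assumed for air and concrete (this ignores the wall's second interface and any resonances, so it's only an order-of-magnitude illustration):

    import math

    def impedance(rho, c):      # acoustic impedance in kg m^-2 s^-1
        return rho * c

    air = impedance(1.2, 343.0)           # density kg/m^3, sound speed m/s
    concrete = impedance(2400.0, 3500.0)

    T = 4 * air * concrete / (air + concrete) ** 2   # intensity transmission
    print(T, 10 * math.log10(T))                     # ~2e-4, i.e. about -37 dB

So even before absorption, the impedance mismatch alone reflects all but a few hundredths of a percent of the sound power back into the room.

| {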
"source": [
"https://physics.stackexchange.com/questions/477403",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/220705/"
]
} |
477,602 | I've completed a full year QM course (undergraduate level) and I am left confused on where to draw the line between quantum mechanics theory and its interpretation(s). I would personally like to stick to no interpretation at all, but since I do not know what is interpretation and what isn't, it is extremely hard to stick to this rule. Many introductory books do not mention if they use a particular interpretation at all, and I suspect they do use some interpretation(s) here and there, without any warning nor notice. From what I have read on the Internet, the "collapse" or "reduction" of the wave function, is part of interpretations of QM. Not all interpretations assume there is even such a thing as a collapse of $\Psi$ . Good, that's an easy one. But what about what $\Psi$ represents for example? I've commonly read that its modulus squared represents the probability density of finding the particle(s) at a particular position(s) and time(s). But does such a description already assume an interpretation? What about the QM postulates? Is there any interpretation hidden in one or more of these postulates? I've read several Lubos Motl's posts (here on PSE and on his own blog) and to him (and apparently many others such as John Rennie and Zurek), $\Psi$ is entirely subjective and two observers of the same quantum system need not to use the same $\Psi$ to describe the system. But no mention of any interpretation is ever done. I suspect they use some interpretation to make such claims, but I couldn't get the information from skimming through many books (including one by Zurek called "Quantum theory and Measurement" which is a package of many QM papers and one such paper by London around page 250 seemed to agree with the Motl's description). I have heard of the "Shut up and calculate!" approach, but I have read on Wikipedia that it's associated to the Copenhagen interpretation. Is that really so? I have read from the member alephzero that QM works perfectly well without any interpretation. Quoting him: "Wave function collapse" is not part of QM. It is only part of some
interpretations of QM (in particular, the Copenhagen interpretation).
The fact that this interpretation is used in a lot of pop-science
writing about QM doesn't make it an essential part of QM - to quote
David Mermin, "just shut up and calculate!" Note: AFAIK there is no
so-called "standard interpretation" of QM - it works perfectly well as a theory of physics with no "interpretation" at all. My question is, how on Earth do we draw the line between QM theory and its interpretations? The books seem completely blurry in that aspect and almost any other sources I could find too. | Interpretation is whatever people don't have to agree on to have the same accurate predictions about the observable. Classical mechanics is empirically wrong in ways quantum mechanics isn't. For example, only quantum mechanics predicts discrete energies for atomic electrons, and discrete changes in these energies from the absorption and emission of radiation. How do you get these energies? Empirically, you measure them; theoretically, you reduce it to a calculus problem. These agree; there's no "interpretation" at work there. Meanwhile, there are experiments you can do that vary in their results from time to time, and the frequencies of the results are, again, available both empirically and theoretically. The latter comes from the same calculus apparatus. What's that? You have a formula for something called $\psi$ , whose square modulus gets us the answers we want? Great, our theory is predictive (insofar as anything probabilistic deserves that label.) But what's this $\psi$ that crops up in both of those exercises? Well, it's not a thing classical mechanics makes claims about, or experiments detect; so whatever answer you give to that question, it amounts to an interpretation of quantum mechanics. Oh, you need $\psi$ or some alternative to get the predictions, and the predictions are right; no-one disputes either of those statements. But when you ask what these items "are", or "what they do unobserved", that's interpretation. Get 20 QM experts in the room, each of them subscribing to a different interpretation. They'll all make the same predictions about experiments' observable outcomes. And if, in an experiment that leaves an electron's position unmeasured, one of these experts says the electron is "somewhere specific we don't know", and another says the electron is "everywhere at once", and another that it "doesn't have a location", they've found something they disagree on. It's just not an observable thing. This doesn't mean interpretation is bunkum, or interpretations are wrong, or you shouldn't think about interpretations. (Fun fact: philosophy of physics is not limited to awkward questions about quantum mechanics.) But since your question is about where the line exists between interpretations and the rest of a QM textbook's contents, well... see the bold sentence up top. Trust me, I understand the urge to put as little philosophy into things as possible. I do, I love me some number-crunching. But that should cut both ways, i.e. you don't want too many philosophical opinions about how little philosophy physicists should be doing either. For example, "shut up and calculate" doesn't have to mean "don't have an interpretation"; to me it means, "it's 9 am and we're predicting experimental outcomes; you can wonder what's going on 'behind the scenes' when we're at the bar". (Or vice versa!) "Philosophy" isn't necessarily worse than "physics". It's just you can discern which is which from the fact that we know better how to get everyone on the same page for some questions than for others. Maybe that's not a bad thing. You don't have to agree the lack of an interpretative consensus is "embarrassing" , but it's worth knowing that consensus is lacking. | {
"source": [
"https://physics.stackexchange.com/questions/477602",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/75916/"
]
} |
477,910 | As far as I understand it, a stochastic process is a mathematically defined concept: a collection of random variables which describe outcomes of repeated events, while a deterministic process is something which can be described by a set of deterministic laws. Is playing (classical, not quantum) dice then a stochastic or deterministic process? It needs random variables to be described, but it is also inherently governed by classical deterministic laws. Or can we say that throwing dice is a deterministic process which becomes a stochastic process once we use random variables to predict its outcome? It seems to me only a descriptive switch, not an ontological one. Can someone tell me how to discriminate better between the two notions? | Physics models rarely hint at the ontological level. Throwing dice can be modelled as a deterministic process, using initial conditions and equations of motion. Or it can be modelled as a stochastic process, using assumptions about probability. Both are appropriate in different contexts. There is no proof of "the real" model.
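A toy illustration of the two descriptions (both models are cartoons I've made up; a real deterministic model would integrate the equations of motion):

    import random

    # stochastic model: the outcome is a random variable with assumed probabilities
    def stochastic_roll(rng):
        return rng.randint(1, 6)

    # deterministic model: the outcome is a fixed function of the initial conditions
    def deterministic_roll(launch_speed, spin):
        return int(1000 * launch_speed * spin) % 6 + 1

    rng = random.Random(0)
    print([stochastic_roll(rng) for _ in range(5)])   # five rolls drawn from the assumed distribution
    print(deterministic_roll(3.14, 2.72))             # same inputs always give the same face

| {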
"source": [
"https://physics.stackexchange.com/questions/477910",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4460/"
]
} |
477,920 | What happens to the motion of electrons in their respective orbits when a substance is cooled down to zero kelvin? Assuming they stop moving, are they going to stick to the nucleus? If yes, what happens when it's brought back to ordinary temperature; will it still be stuck? What about the entropy in the whole process? | | {
"source": [
"https://physics.stackexchange.com/questions/477920",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/212400/"
]
} |