238,855
In QM it is sometimes said that electrons are not waves but that they behave like waves, or that waves are a property of electrons. Perhaps it is better to speak of a wave function representing a particular quantum state. But in the slit experiment it seems obvious that electrons really are an (interfering) wave. So can you say that an electron is a wave? And is that valid for other particles, like photons? Or is it wrong to say an electron is a wave because it can also be a particle, and because something can't be both (a behaviour and a property)?
What is a wave? From sound and water waves we come to an association with sine and cosine variational behavior. Wave equations are differential equations whose elementary solutions are sinusoidal. In water waves, sound waves, and even electromagnetic waves, what is "waving", i.e. has a sinusoidal variation in time and space, is the energy of the wave, represented by its amplitude. When dimensions become very small, comparable with h, the Planck constant, the individual "particles" (electrons etc.) can sometimes be described like classical billiard balls, and at the same time they exhibit a randomness which, when accumulated, displays interference and other wave characteristics. The single-electron-at-a-time double-slit experiment shows both effects. The individual electrons leave a point on the screen which seems random. The accumulation gives a probability distribution that has sinusoidal variations. One can only give a probability for the electron to appear at the (x,y) of the screen, which depends on the quantum mechanical solution of the boundary value problem "electron scattering from two slits". So it is not classical particle behavior, because even though the energy is carried by the single electron, its (x,y) is controlled by a probability distribution; and it is not a classical wave, i.e. it is not a single electron that is "waving" its mass all over the screen's interference pattern. Each electron is a quantum mechanical entity.
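The two effects described here, individually random hits that accumulate into a sinusoidal probability distribution, can be sketched numerically. This is only an illustrative toy distribution (the fringe spacing and envelope are made-up numbers, not a solution of the actual two-slit boundary value problem):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-slit probability density on the screen (arbitrary units):
# sinusoidal fringes modulated by a smooth envelope.
x = np.linspace(-1, 1, 400)
p = np.cos(8 * np.pi * x) ** 2 * np.exp(-x**2 / 0.3)
p /= p.sum()

# Draw single-"electron" hits one at a time from this distribution;
# each hit is random, but the histogram shows the interference pattern.
hits = rng.choice(x, size=20000, p=p)
counts, edges = np.histogram(hits, bins=80)
print(counts.max(), counts.min())  # bright fringes vs. deep minima
```

Any single draw is an unpredictable point, just as each electron leaves a seemingly random dot; only the accumulated histogram reveals the sinusoidal variation.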
{ "source": [ "https://physics.stackexchange.com/questions/238855", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/103999/" ] }
238,893
Can someone intuitively explain how time dilation physically happens? Please don't explain the invariant speed of light and the mathematical background; I am familiar with that. I just can't imagine how this time dilation process happens physically, and I can't understand how to distort my mind to understand it! Sorry for this question, it probably sounds like "Why are there positive and negative charges?", but I have to ask! What could make the question clear is the equivalent one: how can it be that two moving observers do not agree on the simultaneity of events, and one sees the other in slow motion while also knowing that the other sees him in slow motion? How to imagine the reason for this time dilation, probably something similar to the bending of space leading to contracting lengths, but for time? It would have been clearer if the time difference were asymmetrical: if observer A sees me in slow motion then I should see him in faster motion, so OK, I agree that my time has been delayed and he agrees too. But this is not the case.
One result of special relativity is that the magnitude of every 4-velocity vector $\vec{u}$ is the speed of light. Written with the (-,+,+,+) signature: $$\vec{u}\cdot\vec{u} = -c^2$$ One way to think of this is that everything is always moving at the speed of light in some direction. When I stand still, I move at the speed of light in the time direction. My clock advances as fast as possible. When I look at other observers in other moving frames, their clocks all advance more slowly than mine. Imagine someone moving close to the speed of light: their clock seems to hardly advance at all relative to mine. If I start running, my 4-velocity vector is still "the speed of light" long, but now it has some non-zero component pointing in the space direction to account for my motion. That means the time component of my 4-velocity must have shrunk: my clock does not advance as fast as it used to, relative to everyone else who remained standing still.
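The normalization $\vec{u}\cdot\vec{u} = -c^2$ is easy to check numerically. A minimal sketch in 1+1 dimensions, in units where $c=1$ (the component split into $(u_t, u_x)$ is just the simplest case of the statement above):

```python
import math

c = 1.0  # work in units where c = 1

def four_velocity(v):
    """4-velocity (u_t, u_x) of an observer moving at speed v along x."""
    gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)
    return (gamma * c, gamma * v)

def minkowski_norm(u):
    """u.u with the (-,+,+,+) signature, restricted to (t, x)."""
    return -u[0] ** 2 + u[1] ** 2

# The norm is -c^2 at every speed: faster through space means the
# time component grows so that the total "speed through spacetime"
# stays fixed, while the clock rate relative to others changes.
for v in (0.0, 0.5, 0.9, 0.999):
    print(v, minkowski_norm(four_velocity(v)))  # always -1.0, i.e. -c^2
```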
{ "source": [ "https://physics.stackexchange.com/questions/238893", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/103346/" ] }
238,976
Imagine there is a proton confined in a box and we put an electron at 10 cm distance: it gets an acceleration of thousands of meters/second^2 along the straight line joining the two CMs. One would expect the electron to hit the positive particle in a fraction of a second, and stick there glued by a huge force, but this does not happen, even if we shoot the electron providing extra KE and velocity/momentum. Is there a plausible explanation for that? Why doesn't the electron follow the straight force line that leads to the proton? Edit: my question has been misunderstood: it is not about orbitals or collisions. If it has an answer/explanation, it is irrelevant whether it refers to classical or QM physics. No explanation has been presented. We know that a) two protons can stick together even though they repel each other via the Coulomb force; it is then legitimate, a fortiori, to suppose that b) two particles that do not repel each other can comfortably sit side by side, almost touching each other: 2a) proton-proton, 2b) proton-electron. We also know that in a TV tube electrons leave the guns and hit the screen following incredibly precise trajectories, producing pictures in spite of the HUP and the fact that the electron is "... a point particle having no size or position". Now the situation I envisaged is very simple, and probably can be adequately answered step by step with yes/no or (approximate) figures: 0) When the electron is in the gun/box, is it a point mass/charge or a probability wave smeared over a region? When it hits the screen does it have a definite size/position? 1) Do electrostatics and the Coulomb law apply here? Do we know with tolerable precision what acceleration the electron will get when it is released, and what KE and velocity it will acquire when it gets near the proton? 2) If we repeat the experiment billions of times, can those figures change?
3) According to electrostatics, the electron should follow the force line of the electric field leading to the CM of the proton and, when it gets there, remain as near as possible, glued by an incredibly huge Coulomb force (picture 2b). This does not happen, ever, not even by a remote chance. What happens; what prevents this from happening? Physics says that only a very strong force can alter the outcome of other laws. An answer states that QM has solved this long-standing mystery but does not give the solution.
The electron and proton aren't like pool balls. The electron is normally considered to be pointlike, i.e. it has no size, but what this really means is that any apparent size we measure is a function of our probe energy, and as we take the probe energy to infinity the measured size falls without limit. The proton has a size (about 1 fm) but only because it's made up of three pointlike quarks - the size is actually just the size of the quark orbits, and the proton isn't solid. Classically two pointlike particles, an electron and a quark, can never collide, because if they're pointlike their frontal area is zero and you can't hit a target that has zero area. What actually happens is that the electron and quark are quantum objects that don't have a position or a size. They are both described by some probability distribution. Quantum mechanics tells us that a reaction between the electron and quark can occur, and indeed this is what happens when you collide particles in an accelerator like the LHC. However in your experiment the colliding electron and proton don't have enough energy to create new particles, so they are doomed to just oscillate around each other indefinitely. If you accelerate the electron you can give it enough energy for a reaction to occur. This process is known as deep inelastic scattering, and historically this experiment has been an important way of learning about the structure of protons.
{ "source": [ "https://physics.stackexchange.com/questions/238976", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
238,987
Is it only because of the reduced friction involved, or are there other reasons?
{ "source": [ "https://physics.stackexchange.com/questions/238987", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/101812/" ] }
238,988
I. The Green's Function Method The Green's function is immensely useful as a tool in solid state physics. Using a Green's function, one can compute all relevant data of a physical system. For example, the Green's function for the time-independent Schrödinger equation (TISE), $$G(E):=\frac{1}{E-H}$$ yields both the density of states , $$-\frac{1}{\pi}\lim_{\epsilon\to 0^+}\text{Im}\,\text{Tr}\,G(E+i\epsilon)=\sum_n\delta(E-E_n)$$ that is, the eigenvalues, and, letting $\{\psi_n\}$ denote the associated eigenstates, the Green's function also yields the projected density of states , $$-\frac{1}{\pi}\lim_{\epsilon\to 0^+}\text{Im}\,(f_0,G(E+i\epsilon)f_0)=\sum_n|(f_0,\psi_n)|^2\delta(E-E_n)$$ which is equivalent to the eigenstate data. Moreover, Green's functions allow us to efficiently formulate effective field theory, perturbation theory, and the renormalization group in the Hamiltonian picture, which is indispensable. II. Many-Body Green's Functions: Where Everything Falls Apart However, this is all misleading: when physicists mention "the Green's function of a non-interacting Hamiltonian $H$", which is, explicitly, $$(\psi_0, T\{\Psi^\dagger(x,t)\Psi(x',t')\}\psi_0),$$ where $\psi_0$ is the ground state of $H$, a literalist would think that they mean the Green's function for the many-body, time-dependent Schrödinger equation (TDSE): $$\frac{1}{\frac{i}{\hbar}H-\partial_t},~~~~~~ H=\sum_{ij}A_{ij}c^\dagger_ic_j.$$ However, close calculation shows that this is instead (the kernel of) the Green's function of the associated single-particle Hamiltonian: $$(\psi_0, T\{\Psi^\dagger(x,t)\Psi(x',t')\}\psi_0)=\frac{1}{\frac{i}{\hbar}\mathcal H-\partial_t},~~~~~~\mathcal H= \sum_{ij}A_{ij}\,\left|f_i\right>\left<f_j\right|.$$ However, this does not generalize straightforwardly to interacting systems. In particular, for an interacting system there is no such single-particle Hamiltonian ! So the above virtues of the Green's function method no longer hold.
We do not have the density of states, and we don't have the projected density of states. So here's my question: what use is this method if it only characterizes non-interacting systems, which we already know how to solve? (Also, this gives a very boring renormalization group flow.)
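As an aside, the trace formula for the density of states quoted above can be checked numerically on a small Hermitian matrix, using the convention $G(z)=(z-H)^{-1}$. A sketch; the random matrix and the broadening $\epsilon$ are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# A small random Hermitian "Hamiltonian".
n = 6
M = rng.standard_normal((n, n))
H = (M + M.T) / 2
eigvals = np.linalg.eigvalsh(H)

def dos(E, eps=1e-3):
    """-1/pi * Im Tr G(E + i*eps), with G(z) = (z - H)^(-1)."""
    G = np.linalg.inv((E + 1j * eps) * np.eye(n) - H)
    return -np.trace(G).imag / np.pi

# At an eigenvalue the smeared DOS peaks near 1/(pi*eps);
# far from every eigenvalue it is close to zero.
print(dos(eigvals[0]), 1 / (np.pi * 1e-3))  # comparable magnitudes
print(dos(eigvals.max() + 5.0))             # nearly zero
```

As $\epsilon\to 0^+$ each Lorentzian peak sharpens into the delta function $\delta(E-E_n)$ of the formula above.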
{ "source": [ "https://physics.stackexchange.com/questions/238988", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/33698/" ] }
239,426
The possibility of randomness in physics doesn't particularly bother me, but contemplating the possibility that quarks might be made up of something even smaller leads me to think there are likely (or perhaps certainly?) thousands of particles and forces, perhaps layers and sub-layers of forces, at play that we do not know about. So this got me thinking about quantum mechanics. I'm no physicist, but I do find it interesting to learn and explore the fundamentals of physics, so I'm wondering: could the randomness found in radioactive decay as described by quantum mechanics be the result of forces and/or particles too weak or small for us to know about yet, resulting in the false appearance of randomness? Or rather, can that be ruled out?
As noted in the comments, this is a much-studied question. Einstein, Podolsky and Rosen wrote a paper on it, "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?", published in Physical Review in 1935, and universally known today as the EPR paper. They considered a particular situation, and their paper raised the question of "hidden variables", perhaps similar to the microstates which undergird thermodynamics. Several hidden-variable theories have been proposed, including one by David Bohm which resurrected de Broglie's "pilot wave" model. These are attempts to create a quantum theory which gets rid of the random numbers at the foundations of quantum mechanics. In 1964 Bell analyzed the specific type of situation which appears in the EPR paper, assuming that it met the conditions Einstein et al. had stipulated for "physical reality". Using this analysis he showed that, for certain specific measurements, any such hidden-variable, classical theory would satisfy a set of inequalities; these are today known as the Bell inequalities. They are classical results. He then showed that for ordinary quantum mechanics the Bell inequalities are violated for certain settings of the apparatus. This means that no hidden-variable theory can replace quantum mechanics if it also meets Einstein's conditions for "physical reality". The EPR abstract reads: "In a complete theory there is an element corresponding to each element of reality. A sufficient condition for the reality of a physical quantity is the possibility of predicting it with certainty, without disturbing the system. In quantum mechanics in the case of two physical quantities described by non-commuting operators, the knowledge of one precludes the knowledge of the other. Then either (1) the description of reality given by the wave function in quantum mechanics is not complete or (2) these two quantities cannot have simultaneous reality. Consideration of the problem of making predictions concerning a system on the basis of measurements made on another system that had previously interacted with it leads to the result that if (1) is false then (2) is also false. One is thus led to conclude that the description of reality as given by a wave function is not complete." In fact, one can run quantum mechanical experiments that routinely violate Bell's inequalities; I'm currently involved in setting one up which will be validated by violating Bell's inequalities. People have been doing this for over 40 years. The main argument against closing this chapter is the various "loopholes" in the experiments. Recently it has been claimed that a single experiment has simultaneously closed all of the loopholes. If that is true, then there are no classical hidden-variable theories which can replace regular quantum mechanics unless they are grossly non-local. Einstein certainly would not have thought that these were an improvement!
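The violation Bell found can be reproduced in a few lines. For the spin-singlet state and the standard CHSH combination of correlations, quantum mechanics gives $|S| = 2\sqrt{2}$, above the classical bound of 2. A sketch; the measurement angles below are the conventional optimal CHSH choice, not something specified in the answer above:

```python
import numpy as np

# Pauli matrices for spin measurements in the x-z plane.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def meas(theta):
    """Spin observable along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state of two spin-1/2 particles: (|01> - |10>)/sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Correlation <psi| A(a) x B(b) |psi>; quantum prediction -cos(a-b)."""
    op = np.kron(meas(a), meas(b))
    return np.real(psi.conj() @ op @ psi)

# CHSH combination with the standard optimal angles.
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828, above the classical bound of 2
```

Any local hidden-variable model is constrained to $|S| \le 2$, which is exactly what the experiments test.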
{ "source": [ "https://physics.stackexchange.com/questions/239426", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/75876/" ] }
239,549
Can radioactivity be slowed using the effect of time dilation? If you put cesium, tritium or uranium in a cyclotron at relativistic speeds, do their half-lives become longer in our frame? Could this be used as a means to store radioactive material?
Yes. The classic example is that this is the only reason muons produced by cosmic radiation high up in the atmosphere live long enough to reach the ground.
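A rough back-of-the-envelope version of the muon example (the speed and altitude are illustrative assumptions; the rest-frame mean lifetime of 2.197 μs is the standard value):

```python
import math

c = 299_792_458.0   # speed of light, m/s
tau = 2.197e-6      # muon mean lifetime at rest, seconds

def gamma(v_over_c):
    """Lorentz factor for a given fraction of the speed of light."""
    return 1.0 / math.sqrt(1.0 - v_over_c**2)

# A cosmic-ray muon produced at roughly 15 km altitude, moving at 0.999c:
v = 0.999 * c
mean_path_naive = v * tau                      # no dilation: ~660 m
mean_path_dilated = v * gamma(0.999) * tau     # with dilation: ~15 km

print(f"{mean_path_naive / 1000:.2f} km without time dilation")
print(f"{mean_path_dilated / 1000:.2f} km with time dilation")
```

Without time dilation the typical muon would decay long before reaching the ground; with it, a sizeable fraction survives the trip, which is what we observe.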
{ "source": [ "https://physics.stackexchange.com/questions/239549", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/36061/" ] }
240,543
Consider a quantum system described by the Hilbert space $\mathcal{H}$ and consider $A,B\in \mathcal{L}(\mathcal{H},\mathcal{H})$ to be observables. If these observables do not commute, there is no simultaneous basis of eigenvectors of both of them. In that case, in general, if $|\varphi\rangle$ is an eigenvector of $A$ it will not be one of $B$. This leads to the problem of not having a definite value of some quantity in some states. Now, this is just a mathematical model. It works because it agrees with observations. But it makes me wonder about something. Concerning the physical quantities associated to $A$ and $B$ (if an example helps, consider $A$ to be position and $B$ momentum), what is really behind non-commutativity? Do we have any idea whatsoever about why two observables do not commute? Is there any idea about an underlying reason for that? Again, I know one might say "we don't care about that because the theory agrees with observation", but I can't really believe there's no underlying reason for some physical quantities to be compatible while others are not. I believe this comes down to the fact that a measurement of one quantity affects the system in some way that interferes with the other quantity, but I don't know how to elaborate on this. EDIT: I think it's useful to emphasize that I'm not saying "I can't accept that there exist observables which don't commute". That would enter the rather lengthy discussion about whether nature is deterministic or not, which is not what I'm trying to get at here. My point is: suppose $A_1,A_2,B_1,B_2$ are observables, and suppose that $A_1$ and $B_1$ commute while $A_2$ and $B_2$ do not. My whole question is: do we know today why the physical quantities $A_1$ and $B_1$ are compatible (can be simultaneously known) and why the quantities $A_2$ and $B_2$ are not? In other words: accepting that there are incompatible observables, and given a pair of incompatible observables, do we know currently, or at least have a guess about, why those physical quantities are incompatible?
Observables don't commute if they can't be simultaneously diagonalized, i.e. if they don't share an eigenvector basis. If you look at this condition the right way, the resulting uncertainty principle becomes very intuitive. As an example, consider the two-dimensional Hilbert space describing the polarization of a photon moving along the $z$ axis. Its polarization is a vector in the $xy$ plane. Let $A$ be the operator that determines whether a photon is polarized along the $x$ axis or the $y$ axis, assigning a value of 0 to the former option and 1 to the latter. You can measure $A$ using a simple polarizing filter, and its matrix elements are $$A = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.$$ Now let $B$ be the operator that determines whether a photon is $+$ polarized (i.e. polarized southwest/northeast) or $-$ polarized (polarized southeast/northwest), assigning them values 0 and 1, respectively. Then $$B = \begin{pmatrix} 1/2 & -1/2 \\ -1/2 & 1/2 \end{pmatrix}.$$ The operators $A$ and $B$ don't commute, so they can't be simultaneously diagonalized and thus obey an uncertainty principle. And you can immediately see why from geometry: $A$ and $B$ are picking out different sets of directions. If you had a definite value of $A$, you'd have to be either $x$ or $y$ polarized. If you had a definite value of $B$, you'd have to be $+$ or $-$ polarized. It's impossible to be both at once. Or, if you rephrase things in terms of compass directions, the questions "are you going north or east" and "are you going northeast or southeast" do not have simultaneously well-defined answers. This doesn't mean compasses are incorrect, or incomplete, or that observing a compass 'interferes with orientation'. They're just different directions . Position and momentum are exactly the same way. A position eigenstate is sharply localized, while a momentum eigenstate has infinite spatial extent.
Thinking of the Hilbert space as a vector space, they're simply picking out different directions; no vector is an eigenvector of both at once.
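The two matrices from the answer make this concrete: their commutator is nonzero, and their eigenvector bases pick out different directions in the plane.

```python
import numpy as np

# The two polarization measurements from the answer.
A = np.array([[0.0, 0.0],
              [0.0, 1.0]])        # x vs y polarization
B = np.array([[ 0.5, -0.5],
              [-0.5,  0.5]])      # + vs - (diagonal) polarization

commutator = A @ B - B @ A
print(commutator)                  # nonzero => no shared eigenbasis

# Eigenvectors of A are the x/y axes; eigenvectors of B are the diagonals.
print(np.linalg.eigh(A)[1])        # columns: (1,0) and (0,1)
print(np.linalg.eigh(B)[1])        # columns: ~(1,1)/sqrt(2) and ~(1,-1)/sqrt(2)
```

No column of the first eigenvector matrix appears (up to sign) in the second, which is exactly the geometric statement that a state cannot have definite values of both observables.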
{ "source": [ "https://physics.stackexchange.com/questions/240543", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/21146/" ] }
240,852
I have read that "thanks to conservation of momentum" there is no dipole gravitational radiation. I am confused about this, since I cannot see the difference from the e.m. case. Is this due to the non-existence of positive and negative gravi-charges? Is this due to the non-linearity of the Einstein equations? How does the conservation of momentum enter here? An example of my confusion below. Q: Why can't I shake a single mass to produce dipole gravi-radiation? A: You need another mass to shake it. Q: Isn't it the same with electromagnetism?
The simple Newton-like explanation of the non-existence of dipole gravitational radiation is the following. The gravitational analog of the electric dipole moment is $$ \mathbf d = \sum_{\text{particles}}m_{p}\mathbf r_{p} $$ Its first time derivative is the total momentum, $$ \dot{\mathbf d} =\sum_{\text{particles}}\mathbf p_{p}, $$ while the second one vanishes, $$ \ddot{\mathbf d} = \sum_{\text{particles}}\dot{\mathbf p}_{p} = 0, $$ precisely due to momentum conservation. "Magnetic" dipole gravitational radiation is analogously impossible due to conservation of angular momentum. Indeed, by definition the gravitational analog of the magnetic dipole moment is the sum of cross products of the particle positions with the corresponding mass currents, which is nothing but the total angular momentum: $$ \mathbf{M} = \sum_{\text{particles}}\mathbf r_{p}\times \mathbf{p}_{p} = \sum_{\text{particles}}\mathbf{J}_{p} \Rightarrow \dot{\mathbf M} = \ddot{\mathbf M} = 0 $$ What about general relativity? As you know, the propagation of gravitational waves is described by the linearized Einstein equations for the metric perturbation $h_{\mu \nu}$, and in this limit they coincide with the EOM for helicity-2 massless particles in the presence of the stress-energy pseudotensor $\tau_{\mu \nu}$: $$ \square h_{\mu \nu} = -16 \pi \tau_{\mu \nu}, \quad \partial_{\mu}h^{\mu \nu} = 0, \quad \partial_{\mu}\tau^{\mu \nu} = 0 $$ Since $\tau^{\mu \nu}$ is conserved, this protects $h_{\mu \nu}$ from contributions from the monopole or dipole moments of the sources, as well as from additional helicities.
Formally, the deep difference between gravitational and EM radiation is that in general relativity we associate the symmetry $g_{\mu \nu} \to g_{\mu \nu} + D_{(\mu}\epsilon_{\nu )}$ (the infinitesimal version of the transformation of $g_{\mu \nu}(x)$ under $x \to x + \epsilon$) with covariant stress-energy tensor conservation (that is, conservation of a tensor current, from which we extract conservation of the 4-momentum vector), while the EM gauge symmetry is associated with vector current conservation (from which we extract conservation of the electric charge, a scalar quantity). The corresponding conservation laws therefore constrain different quantities; the nature of the radiation in the EM and GR cases is different, the first governed primarily by the Maxwell equations (so conservation of charge plays the key role), the second by the linearized Einstein equations (so momentum conservation is what matters). For example, heuristically speaking, due to conservation of EM charge, EM monopole radiation is impossible (it would be expressed through the time derivative of the total charge), but nothing restricts dipole radiation. In GR, due to conservation of the momentum vector, which is related to the metric (and so to gravitational waves, in the sense shown above), dipole radiation is impossible. This, as anna v said in the comments, is connected with the fact that the EM field represents helicity-1 particles, while the linearized gravitational field represents helicity-2 particles. As you see, this line of reasoning doesn't require the presence of positive and negative masses.
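The Newton-like argument can be checked on a toy two-body system: with purely internal forces, the second derivative of the mass dipole (the net force) vanishes identically, and the first derivative (the total momentum) is conserved from step to step. A sketch with arbitrary masses and initial conditions:

```python
import numpy as np

# Two point masses interacting only through their mutual (Newtonian) gravity.
G = 1.0
m = np.array([1.0, 3.0])
r = np.array([[0.0, 0.0], [2.0, 0.0]])    # positions
v = np.array([[0.0, 0.5], [0.0, -0.1]])   # velocities

def forces(r):
    d = r[1] - r[0]
    f = G * m[0] * m[1] * d / np.linalg.norm(d) ** 3
    return np.array([f, -f])  # Newton's third law: internal forces cancel

# Mass dipole d = sum_p m_p r_p; its second time derivative is the sum of
# all forces, which vanishes for purely internal forces:
ddot_d = forces(r).sum(axis=0)
print(ddot_d)  # zero vector: no dipole "source" for gravitational waves

# The first derivative (total momentum) is constant under a time step:
p_before = (m[:, None] * v).sum(axis=0)
dt = 0.01
v = v + dt * forces(r) / m[:, None]
r = r + dt * v
p_after = (m[:, None] * v).sum(axis=0)
print(p_before, p_after)  # equal, up to floating-point roundoff
```

The electromagnetic analogue fails at exactly this point: replacing $m_p$ by charges $q_p$ in the dipole makes $\ddot{\mathbf d} = \sum_p (q_p/m_p)\,\mathbf F_p$, which does not cancel unless every particle has the same charge-to-mass ratio.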
{ "source": [ "https://physics.stackexchange.com/questions/240852", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/77255/" ] }
241,050
Last night my daughter was asking why a mirror "always does that" (referring to reflecting a spot of light). To help her figure it out, I grabbed my green laser pointer so she could see the light traveling from the source and reflecting off the mirror. But as we were playing, I noticed something strange. Rather than one spot, there were several. When I adjusted the angle to something fairly obtuse, the effect became quite pronounced, and when you looked closely, you could actually see several beams. (Of course, the beams actually looked like beams in real life. The picture gives the beams an elongated hourglass shape because those parts are out of focus.) I made these observations: The shallower the angle, the greater the spread of the split beams and resulting dots. The directionality of the reflection is due to the orientation of the mirror, not the laser pointer itself. Indeed, by rotating the mirror 360° the string of dots would make a full rotation as well. I can count at least 8 individual dots on the wall, but I could only see 6 beams with the naked eye. If you look at the split-beam picture you can see a vertical line above the most intense dots; I didn't observe any intense spots of light there. And when I looked closely at the spot where the beam hit the mirror you can see a double image. This was not due to camera shake, just the light reflecting off the dust on the surface of the glass, and a reflection of that light from the rear surface of the mirror. It's been a few years since college physics; I remember doing things like the double-slit experiment. I also remember that light seems to do some strange things when it enters liquids/prisms. I also know that the green laser has a certain wavelength, and that you can measure the speed of light with a chocolate bar and a microwave. Why does the mirror split the laser beam? How does that explain the effects that I saw? Is there any relation to the double-slit experiment, or to the wavelength/speed of light?
You are getting reflections from the front (glass) surface and the back (mirrored) surface, including (multiple) internal reflections. It should be obvious from this diagram that the spots will be further apart as you move to a more glancing angle of incidence. Depending on the polarization of the laser pointer, there is an angle (the Brewster angle) where you can make the front (glass) surface reflection disappear completely. This takes some experimenting. The exact details of the intensity as a function of angle of incidence are described by the Fresnel equations. From that Wikipedia article, here is a diagram showing how the intensity of the (front) reflection changes with angle of incidence and polarization. This effect is independent of wavelength (except inasmuch as the refractive index is a weak function of wavelength, so different colors of light will have a slightly different Brewster angle); the only way in which laser light is different from "ordinary" light in this case is the fact that laser light is typically linearly polarized, so the reflection coefficient for a particular angle can be changed simply by rotating the laser pointer. As Rainer P pointed out in a comment, if there is a coefficient of reflection $c$ at the front face, then $(1-c)$ of the intensity makes it to the back; and if the coefficient of reflection at the inside of the glass/air interface is $r$ , then the successive reflected beams will have intensities that decrease geometrically: $$c, (1-c)(1-r), (1-c)(1-r)r, (1-c)(1-r)r^2, (1-c)(1-r)r^3, ...$$ Of course the reciprocity theorem tells us that when we reverse the direction of a beam we get the same reflectivity, so $r=c$ . This means the above can be simplified, but I left it in this form to show better what interactions the rays undergo. The above also assumes perfect reflection at the silvered (back) face; it should be easy to see how you could add that term...
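The geometric intensity series can be tabulated directly. A sketch assuming, as the answer does, a perfectly reflecting back face; the 4% reflectivity is a typical figure for uncoated glass near normal incidence, used here only as an illustration:

```python
# Intensities of the successive spots for front-surface reflectivity c and
# internal (glass-to-air) reflectivity r, following the series in the answer:
# c, (1-c)(1-r), (1-c)(1-r)r, (1-c)(1-r)r^2, ...
def spot_intensities(c, r, n=8):
    spots = [c]                   # direct front-surface reflection
    entering = 1.0 - c            # fraction transmitted into the glass
    for k in range(n - 1):
        spots.append(entering * (1.0 - r) * r**k)
    return spots

# By reciprocity r = c; try 4% reflectivity at each glass/air crossing.
spots = spot_intensities(c=0.04, r=0.04)
print([f"{s:.4f}" for s in spots])

# With a lossless back mirror the series sums to 1 (energy conservation):
print(sum(spot_intensities(0.04, 0.04, n=500)))  # ~ 1.0
```

Note how quickly the spots dim: the second spot carries about 92% of the energy, and each subsequent one is down by another factor of $r$, which matches seeing only a handful of beams by eye while the camera picks up more dots.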
{ "source": [ "https://physics.stackexchange.com/questions/241050", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/7456/" ] }
241,772
Please will someone explain what time dilation really is and how it occurs? There are lots of questions and answers going into how to calculate time dilation, but none that give an intuitive feel for how it happens.
Introduction This answer will use ideas discussed in the answers to What is time, does it flow, and if so what defines its direction? , so you really need to read the answers to that question before tackling this one. The key concept you need in order to understand time dilation is that a clock does not measure the flow of time - time doesn't flow in relativity (see the What is time...? question for more on this). A clock measures distances. To explain what I mean I'll use the analogy of the odometer in your car. If you start at some point $A$ and drive to some point $B$ then the odometer tells you how far in space you've moved: the change in the odometer reading is the distance in space from $A$ to $B$ measured along the route you took. The clock in your car measures the distance in time between the spacetime points $A$ and $B$ , i.e. the change in the clock reading is the number of seconds between you leaving point $A$ and arriving at point $B$ , and this number of seconds is also measured along the route you took in spacetime . This last point matters, because as we'll see the distance you move in time depends on your route, just like the distance you move in space. The reason we have to treat time as a distance is because in relativity there isn't a hard and fast distinction between time and space. You may split spacetime into three spatial dimensions and one time dimension, but a different observer might make this split in a different way, and the two of you wouldn't agree on what was time and what was space. In relativity we have to treat the time dimension just like the space dimensions. It is just a coordinate running from (in principle) $-\infty$ to $\infty$ just like the $x$ , $y$ and $z$ coordinates run from $-\infty$ to $\infty$ . See the What is time...? question for more on this. The point of all this is that it gives us a very specific definition of time dilation.
If two different observers measure the distance between two spacetime points $A$ and $B$ then this distance will be a four-vector with time and spatial components. Time dilation simply means that different observers will disagree on the magnitude of the time component of this distance, i.e. they will observe a different amount of time between the two points.

An example of time dilation

To explain why this happens let's take a specific example. Suppose I am watching you moving; then in my coordinates your trajectory is a line in spacetime. Because I can't draw four-dimensional graphs, let's assume you're only moving along the $x$ axis, so all I have to draw is your trajectory in $x$ and $t$ (time). Suppose your trajectory looks like this:

Figure 1

So we both start at the point $A$. Because I am stationary in these coordinates my trajectory is straight up the time axis to $B$, while your trajectory (the red line) heads off to increasing $x$, then stops, turns round and comes back to my position. The distance I have moved in time is just the distance straight up the time axis from $A$ to $B$; we'll call this distance $t_{ab}$. The distance you have moved in time is, well, let's see how to calculate that. Figure 1 shows what happens in my coordinate system, but now let's draw the same diagram in your coordinate system, i.e. the coordinates in which you remain stationary at the origin and I move:

Figure 2

In your coordinates it's me that moves (shown by the black line) and you remain stationary, so in your coordinates your trajectory (the red line) is straight up the time axis and the distance you move is just the distance in time between $A$ and $B$. We'll call this distance $\tau_{ab}$. Now this is the point where things get strange, but actually it's the only point where things get strange, so if you can get past this point you're home. The distance $\tau_{ab}$ in figure 2 has a special significance in relativity.
It's called the proper time, and it's a fundamental principle in relativity that the proper time is an invariant. This means the proper time is the same for all observers, and specifically it is the same for both you and me. This means that (and here's the key point):

The length of the red line is the same in both figure 1 and figure 2

Let's go back to figure 1 for a moment and see why this means there must be time dilation:

Figure 3

The length of my line from $A$ to $B$, $t_{ab}$, is obviously different from the length of the red line from $A$ to $B$, $\tau_{ab}$. But we've already agreed that the length of the red line is the time you measure between the two points, and that means the time I measure between $A$ and $B$ is different from the time you measure between $A$ and $B$: $$ t_{ab} \ne \tau_{ab} $$ And that's what we mean by time dilation. If my aim was to give an intuitive idea of how time dilation arises then I've probably failed, because it is far from intuitively obvious why the length of the red line should be the same in figure 1 and figure 2. But at least I've narrowed it down to one unintuitive step, and if you're prepared to accept this then the rest follows in a straightforward way. To make this quantitative, and explain exactly what I mean by the length of the red line, we need to get stuck into some math.

And now some math

The situation I've drawn in figures 1 and 2 is actually somewhat complicated because it involves acceleration, i.e. you speed away from me, decelerate to a halt, then accelerate back towards me. To get started we'll use the simpler case where you just head off at constant velocity and don't accelerate. Our two spacetime diagrams look like this:

Figure 4

In my frame you are travelling at velocity $v$, so after some time $t$ measured on my clock your position is $(t, vt)$. In your frame you are stationary, so after some time $T$ measured on your clock your position is $(T, 0)$.
And remember we said that the length of the red line must be the same for both you and me. To calculate the length of the red line we use a function called the metric. You probably remember being taught Pythagoras' theorem at school, which tells you that for a right-angled triangle the length of the hypotenuse is given by: $$ s^2 = a^2 + b^2 $$ This equation tells one how to measure total (that is, in this case diagonal) distances, given the displacements in each coordinate direction. That is precisely the information contained in a metric: it tells you how to measure distances. The above equation does this by giving an explicit formula for the length of a line, resulting from coordinate displacements in the horizontal and vertical directions (let's call those $x$ and $y$). Now, one can of course also think about infinitesimal (infinitely small, in a limiting sense) distances. The formula then simply becomes $$ \mathrm ds^2=\mathrm dx^2+\mathrm dy^2$$ This is called the line element for two-dimensional Euclidean space, and it encodes the corresponding (Euclidean) metric.

For special relativity we need to extend this idea to include all three spatial dimensions plus time. There are various ways to write the line element for special relativity, and for the purposes of this article I'm going to write it as: $$ \mathrm ds^2 = -c^2\mathrm dt^2 + \mathrm dx^2 + \mathrm dy^2 +\mathrm dz^2 $$ where $\mathrm dt$ is the distance moved in time and $\mathrm dx$, $\mathrm dy$, $\mathrm dz$ are the distances moved in space. This equation encodes the Minkowski metric, and the quantity $\mathrm ds$ is called the proper distance. It looks a bit like Pythagoras' theorem, but note that we can't just add time to distance because they have different units (seconds and meters), so we multiply time by the speed of light $c$ so the product $ct$ has units of meters.
Also note that we give $ct$ a minus sign in the equation — as you’ll see, this minus sign is what explains the time dilation. Since we are only considering two dimensions our equation becomes: $$ \mathrm ds^2 = -c^2\mathrm dt^2 + \mathrm dx^2 $$ OK, let’s do the calculation. Since all motion is in a straight line we don't need the infinitesimal line element and instead we can use: $$ \Delta s^2 = -c^2\Delta t^2 + \Delta x^2 $$ Start in your frame — you don’t move in space so $\Delta x = 0$ and you move a distance $\tau$ in time, so $\Delta t = \tau$ , giving us: $$ \Delta s^2 = -c^2 \tau^2 $$ Now let’s do the calculation in my frame. In my frame you move a distance in space $\Delta x=vt$ and a distance in time $\Delta t = t$ so the equation for the length of the red line is: $$ \Delta s^2 = -c^2t^2 + (vt)^2 = -t^2c^2\left(1 - \frac{v^2}{c^2}\right) $$ Since the lengths $\Delta s$ are equal in both frames we combine the two equations to get: $$ -c^2 \tau^2 = -t^2c^2\left(1 - \frac{v^2}{c^2}\right) $$ And rearranging gives: $$ \tau = t\sqrt{1 - \frac{v^2}{c^2}} = \frac{t}{\gamma} $$ where $\gamma$ is the Lorentz factor : $$ \gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}} $$ And that’s the result we need showing the time dilation. The distance you have moved in time $\tau$ is less than the distance I have moved in time $t$ by a factor of $\gamma$ .
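To get a feel for the size of the effect, here is a short Python sketch (my own illustration, not part of the original answer) that evaluates $\tau = t\sqrt{1 - v^2/c^2} = t/\gamma$ for a few speeds:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_gamma(v: float) -> float:
    """Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def proper_time(t: float, v: float) -> float:
    """Elapsed time tau on a clock moving at constant speed v,
    given a coordinate time t in my frame: tau = t / gamma."""
    return t * math.sqrt(1.0 - (v / C) ** 2)

t = 10.0  # ten seconds on my (stationary) clock
for v in (0.1 * C, 0.5 * C, 0.9 * C, 0.99 * C):
    print(f"v = {v / C:.2f}c  gamma = {lorentz_gamma(v):6.3f}  "
          f"tau = {proper_time(t, v):.4f} s")
```

At $0.1c$ the moving clock loses only about 0.05 s out of 10, while at $0.99c$ it records well under 2 s: the effect is negligible at everyday speeds and dramatic only close to $c$.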
{ "source": [ "https://physics.stackexchange.com/questions/241772", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/1325/" ] }
241,943
A tomato is travelling very fast towards a 1 cm thick steel plate. Let's say this happened in a vacuum, so that the air resistance wouldn't rip the tomato apart before it even hit the steel plate. Obviously the tomato would get destroyed too, but the question is whether there would be a hole in the steel plate, given enough speed. I guess a more general way to phrase the question is: Can a soft object create a hole through a hard surface, as long as the soft object is traveling fast enough? If yes, is there a limit to this concept? For example, could the tomato even penetrate a wall made of diamond, as long as it was traveling fast enough? Edit: A comment on one of the answers used this video to show that tomatoes can't exist for very long in a vacuum. If this is correct, the situation needs to be that the tomato is stationary, the plate moves, and the tomato is put into the vacuum shortly before impact. I believe the impact scenario would be the same in that case?
The notion of a soft or hard object depends on the velocity of interaction. Water can be soft or as hard as rock depending on how fast you fall in (or surf upon it). For a shock, the main thing that matters is momentum. In space, where relative speeds can be very high, a simple bolt can cause serious damage to the ISS, and simple flakes of paint cause deep scratches. So, yes, the tomato would create a hole (and evaporate in the shock).
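To put rough numbers on this, the energy delivered in the impact grows with the square of the speed. A back-of-envelope Python sketch (my own illustration; the 0.1 kg tomato mass and the chosen speeds are assumptions, not values from the answer):

```python
def kinetic_energy(m: float, v: float) -> float:
    """Classical kinetic energy E = 1/2 m v^2, in joules."""
    return 0.5 * m * v ** 2

m_tomato = 0.1  # kg, assumed

# thrown by hand, rifle-bullet speed, low-Earth-orbit speed (m/s)
for v in (30.0, 1_000.0, 8_000.0):
    print(f"v = {v:7.0f} m/s  ->  E = {kinetic_energy(m_tomato, v):12.1f} J")
```

At 30 m/s the tomato carries about 45 J, a harmless splat; at 8 km/s it carries 3.2 MJ, comparable to the energy released by roughly three quarters of a kilogram of TNT, at which point the "softness" of the projectile stops mattering.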
{ "source": [ "https://physics.stackexchange.com/questions/241943", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/110509/" ] }
242,043
The paradox in the twin paradox is that the situation appears symmetrical, so each twin should think the other has aged less, which is of course impossible. There are a thousand explanations out there for why this doesn't happen, but they all end up saying something vague like "it's because one twin is accelerating" or "you need general relativity to understand it". Please will someone give a simple and definitive explanation for why both twins agree on which twin is younger when they meet for the second time?
Introduction

This is the third (and last) in a series of posts explaining time dilation, and it is going to assume you've read the preceding posts What is time dilation really? and What is time, does it flow, and if so what defines its direction? . Much of what follows won't make sense unless you're familiar with the topics discussed in the previous two questions. This is also going to be the hardest of the three posts by quite some way, but it just isn't possible to gain a real understanding of the twin paradox without exploring some hard ideas. You have been warned!

In what follows I'm going to assume I am the stationary twin, i.e. I remain on Earth while you go zooming off on your return trip in your spaceship. Remember that when you see me or my it refers to the stationary twin, and you and your refer to the accelerating twin. So as not to keep you in suspense, I'm going to explain that the asymmetry arises because the geometry of spacetime looks different for the two twins. To calculate the elapsed time we need a function called the metric, and in the coordinate system of an accelerating observer the metric looks different from normal flat spacetime. When we take this into account both twins agree about their respective ages.

My version of events

In the question on time dilation I explained what we mean by time dilation and how we calculate it. In particular I showed this spacetime diagram:

Figure 1

This shows our two trajectories through spacetime using my rest coordinates, i.e. the coordinates in which I remain stationary at the origin. In these coordinates I remain at $x=0$ and simply travel up the time axis from the starting point $A$ to the finishing point $B$, as shown by the black arrow. You go hurtling away along the $x$ axis from point $A$, then stop, reverse and scream back to meet me again at point $B$, as shown by the red arrows. So the red line shows your trajectory through spacetime as measured using my coordinates.
From the time dilation question we know that the elapsed time shown by a clock carried by an observer, $\Delta\tau$, is related to the length of the observer's trajectory, $\Delta s$, by: $$ \Delta s^2 = -c^2 \Delta\tau^2 $$ And we know that the length $\Delta s$ is calculated using a function called the metric. In flat spacetime this function is the Minkowski metric, and it tells us that if you move a distance $\mathrm dx$ along the $x$ axis, $\mathrm dy$ along the $y$ axis and $\mathrm dz$ along the $z$ axis in a time $\mathrm dt$ then the total distance you have moved in spacetime is given by the Minkowski metric: $$\mathrm ds^2 = -c^2\mathrm dt^2 + \mathrm dx^2 +\mathrm dy^2 +\mathrm dz^2 $$ Since it's hard to draw 4D graphs it's usual to assume all motion is on the $x$ axis, so $\mathrm dy =\mathrm dz = 0$, in which case the metric simplifies to: $$\mathrm ds^2 = -c^2\mathrm d\tau^2 = -c^2\mathrm dt^2 + \mathrm dx^2 \tag{1} $$

To calculate the length of the red curve we use the cunning trick of noting that velocity is defined by $v = \mathrm dx/\mathrm dt$, so $\mathrm dx = v\,\mathrm dt$, and if we take equation (1) and substitute for $\mathrm dx$ we end up with: $$ \mathrm d\tau = \sqrt{1 - \frac{v^2(t)}{c^2}}\,\mathrm dt $$ So the elapsed time $\tau_{AB}$ is given by the integral: $$ \tau_{AB} = \int_{t_A}^{t_B} \, \sqrt{1 - \frac{v^2(t)}{c^2}} \,\mathrm dt \tag{2} $$ where $v(t)$ is your velocity as a function of time.

The exact form of $v(t)$ will depend on how you choose to accelerate, but since $v^2$ is always positive the term inside the square root is always less than or equal to one: $$ 1 - \frac{v^2(t)}{c^2} \le 1 $$ And therefore the integral from $t_A$ to $t_B$ must be less than or equal to $t_B-t_A$. This means your elapsed time $\tau_{AB}$ must be less than my elapsed time $t_{AB}$, i.e. when we meet again you have aged less than I have.
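Equation (2) is easy to evaluate numerically for a concrete trip. The sketch below (my own illustration) assumes the smooth velocity profile $v(t) = v_{\max}\sin(2\pi t/T)$, which takes you away during the first half of the trip and brings you back with zero net displacement, and integrates $\mathrm d\tau = \sqrt{1 - v^2/c^2}\,\mathrm dt$ with the midpoint rule:

```python
import math

c = 1.0      # work in units where c = 1
T = 10.0     # my elapsed time between A and B
v_max = 0.8  # assumed peak speed, 0.8c

def v(t: float) -> float:
    """Assumed velocity profile: outbound during the first half of
    the trip, inbound during the second (net displacement is zero)."""
    return v_max * math.sin(2.0 * math.pi * t / T)

n = 100_000
dt = T / n
# your elapsed time, equation (2), by midpoint-rule integration
tau = sum(math.sqrt(1.0 - (v((i + 0.5) * dt) / c) ** 2) * dt
          for i in range(n))

print(f"my elapsed time:   {T:.4f}")
print(f"your elapsed time: {tau:.4f}")  # always less than T
```

With these numbers you age about 8.1 units of time to my 10; any other profile with $v \ne 0$ somewhere on the trip likewise gives $\tau_{AB} < t_{AB}$.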
So far so good, but the paradox is that we could draw the spacetime diagram in figure 1 using your coordinates, i.e. the coordinates in which you are at rest, to give something like:

Figure 2

In these coordinates you remain stationary, so your trajectory, shown by the red line, goes straight up your time axis, while my trajectory, shown by the black line, heads off in the $-x$ direction before returning. If we use the same argument as above we would conclude that I should have aged less than you, but we can't both have aged less than each other. And that's the paradox.

Your version of events

The resolution to the paradox turns out to be very simple. When I calculated the length of your trajectory in the previous section I used the Minkowski metric, equation (1), and after some algebra ended up with the equation for your path length in equation (2): $$ \Delta t_\text{you} = \int_{t_A}^{t_B} \, \sqrt{1 - \frac{v^2(t)}{c^2}}\,\mathrm dt $$ The resolution to the paradox is simply that in your rest frame the metric is not the Minkowski metric, and therefore the equation you have to use to calculate my path length is not the same as equation (2): $$ \Delta t_\text{me} \ne \int_{t'_A}^{t'_B} \, \sqrt{1 - \frac{v'^2(t)}{c^2}}\,\mathrm dt' $$ and that's why, when you calculate my path length, we both agree that my path length is longer than yours, i.e. we both agree that I age more than you do.

So what is your metric?

The form of your metric will depend on exactly how you accelerate, and in general will not be a simple function. However there is a special case that is reasonably simple, and that is what I'm going to assume for the rest of this answer.
I'll assume that your acceleration (or rather deceleration) is constant, so your motion consists of the following:

- at time zero you pass me with some positive velocity $v$ and constant deceleration $a$ (constant deceleration means you are accelerating towards me, in the opposite direction to your velocity)
- the constant deceleration eventually slows you to a stop at some distance $x$ away from me
- you maintain the constant deceleration and now you start moving back towards me, i.e. your velocity becomes negative
- eventually you pass me again, now with a velocity of $-v$

For motion at constant acceleration your metric is a function called the Rindler metric: $$\mathrm ds^2 = -\left(1 + \frac{a\,x}{c^2} \right)^2 c^2\mathrm dt^2 +\mathrm dx^2 \tag{3} $$ For now I won't attempt to justify this (I may do so in an appendix); I'll just make a few comments on it before showing how to use it to calculate the trajectory length. The Rindler metric doesn't look completely different to the Minkowski metric that I used before. Indeed at the point $A$, where we part company, the value of $x$ is zero for both of us, and if we set $x=0$ the Rindler metric reduces to: $$\mathrm ds^2 = -c^2\mathrm dt^2 +\mathrm dx^2 $$ which is just the Minkowski metric. Likewise if we take the acceleration $a$ to zero, equation (3) just reduces to the Minkowski metric. However when $a \ne 0$ and $x \ne 0$ the two metrics are different, and the further $a$ and $x$ are from zero the more different the metrics become.

OK, let's attempt the calculation

Now we can calculate my elapsed time in your rest frame using the correct metric, i.e. the Rindler metric. Let's remind ourselves of the spacetime diagram:

In your frame I pass you at time zero with a negative velocity, and I head off to negative $x$ before turning round to come back. What is perhaps not obvious is that the acceleration $a$ is negative. This is because $a$ is your acceleration.
In the diagram above my acceleration relative to you is obviously positive, so your acceleration relative to me must be negative. We start as before by writing down the metric: $$\mathrm ds^2 = -c^2\mathrm d\tau^2 = -\left(1 + \frac{a}{c^2}x \right)^2 c^2 \mathrm dt^2 +\mathrm dx^2 $$ And we use the same trick of substituting $\mathrm dx = v(t)\,\mathrm dt$. After rearranging we end up with: $$ \Delta t_\text{me} = \int_{t_A}^{t_B} \, \sqrt{\left(1 + \frac{a\,x(t)}{c^2}\right)^2 - \frac{v^2(t)}{c^2}}\,\mathrm dt \tag{4} $$ This is actually pretty similar to equation (2), which I used to calculate your elapsed time, apart from the extra term $a\,x(t)/c^2$. But it's that extra term that makes the difference. To see why, consider the leftmost point on my trajectory in figure 2. At this point my velocity is zero, so the term in the square root becomes: $$ 1 + \frac{a\,x(t)}{c^2} $$ But the product $a\,x(t)$ is positive, which means $1+ax(t)/c^2 \gt 1$ and therefore at this point $\mathrm d\tau \gt\mathrm dt$. Doing the integration in this region gives my elapsed time as greater than your elapsed time. And this is the key to understanding the twin paradox. When you use equation (4) to calculate the length of my trajectory you're going to find that my elapsed time is greater than your elapsed time, which is exactly what I found when I did the calculation in my frame. The resolution to the paradox is that the metric you use to do the calculation is not the same as the metric I use to do the calculation.
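As a quick numerical check of this key point (my own sketch; the values of $a$ and $x$ are arbitrary assumptions), at the turnaround point $v = 0$ and the product $a\,x$ is positive, so the integrand of equation (4) exceeds 1, something the Minkowski integrand of equation (2) can never do:

```python
import math

c = 1.0
a = -1.0  # your acceleration in your own frame (negative, as argued above)
x = -0.5  # my position in your coordinates at the turnaround (negative x)
v = 0.0   # my velocity in your frame vanishes at the turnaround

# integrand of equation (4): rate of my proper time per your coordinate time
dtau_dt_rindler = math.sqrt((1.0 + a * x / c**2) ** 2 - (v / c) ** 2)

# the Minkowski integrand of equation (2) is bounded above by 1
dtau_dt_minkowski = math.sqrt(1.0 - (v / c) ** 2)

print(dtau_dt_rindler)    # 1.5: my clock runs fast in your coordinates here
print(dtau_dt_minkowski)  # 1.0
```

Integrating equation (4) along my whole trajectory, this region near the turnaround is what makes my total elapsed time come out larger than yours.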
{ "source": [ "https://physics.stackexchange.com/questions/242043", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/1325/" ] }
242,387
Why does matter on the earth exist in three states? Why cannot all matter exist in only one state (i.e. solid/liquid/gas)?
The premise is wrong. Not all materials exist in exactly three different states; this is just the simplest schema, and it is applicable for some simple molecular or ionic substances. Let's picture what happens to a substance if you start at low temperature and add ever more heat.

Solid

At very low temperatures there is virtually no thermal motion to prevent the molecules sticking together. And they stick together because of various forces (the simplest: oppositely charged ions attract each other electrostatically). If you picture this with something like lots of small magnets, it's evident enough that you get a solid phase, i.e. a rigid structure where nothing moves. Actually though:

- Helium won't freeze at any temperature: its ground state in the low-temperature limit at atmospheric pressure is a superfluid. The reason is that microscopically, matter does not behave like discrete magnets or something, but according to quantum mechanics.
- There is generally not just one solid state. In the magnet analogy, you can build completely different structures from the same components. Likewise, what we just call "ice" is actually just one possible crystal structure for solid water, more precisely called ice I$_h$. There are quite a lot of other solid phases.

Liquid

Now, if you increase temperature, that's like thoroughly vibrating your magnet sculpture. Because these bonds aren't infinitely strong, some of them will release every once in a while, allowing the whole to deform without actually falling apart. This is something like a liquid state. Actually though:

- Not all materials have a liquid phase (at least not at all pressures). For instance, solid CO$_2$ (dry ice) sublimates at atmospheric pressure if you increase the temperature, i.e. it goes immediately into the gas state.
- Many materials have huge molecules, i.e. the size of the chemical structure approaches the size of the physical structure.
Now, that chemical structure can also be shaken loose by heat, but this isn't called melting; it's called decomposition. For instance, plastics decompose at some point between 200°C and 350°C. Some melt before that, i.e. they have two states; some stay solid all the way, so they basically have just one state (solid). A decomposed material hasn't entered a new state of matter; it has simply ceased to be the original material. Furthermore, materials that aren't purely composed of one kind of molecule also generally don't have a single fixed melting point: there's a certain range in which two phases may coexist. (More generally, you can have all sorts of emulsions, dispersions, gels etc.)

Gaseous

Small and sturdy molecules or single atoms aren't so bothered by high temperatures, though. They also don't have such strong forces between molecules. So, if you shake strongly enough, they simply start fizzing all around independently. That's a gas then. Actually though:

- Even the most sturdy molecules won't survive if you make the temperature high enough. Even single atoms will at some point lose their hold on the electrons. This results in a further phase, a plasma.
- At high enough pressure (above a critical point) the gas phase won't really be distinguishable from the liquid one: you only have a supercritical fluid. (IMO this could still be labelled a gas, but it does have some properties which are more like a liquid.)
A solid has little entropy, but if there's not much energy available this is the only feasible state. A liquid has higher entropy but requires some energy to temporarily unstick the molecules. A gas requires enough energy to keep the particles apart all the time, but is completely disordered and therefore has a lot of entropy. But how much energy and entropy a given state has exactly varies a lot between materials, therefore you can't simply say solid-liquid-gas.
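The energy/entropy competition described above can be sketched with a toy calculation (entirely my own illustration; the $U$ and $S$ values are made up, in arbitrary units): give each phase an internal energy $U$ and an entropy $S$, and at temperature $T$ pick the phase that minimises the free energy $F = U - TS$:

```python
# Toy phase data: (internal energy U, entropy S), arbitrary units.
# A solid is low-energy and low-entropy; a gas is high-energy, high-entropy.
phases = {
    "solid":  (0.0, 1.0),
    "liquid": (1.0, 2.5),
    "gas":    (3.0, 4.0),
}

def stable_phase(T: float) -> str:
    """Phase minimising the free energy F = U - T*S at temperature T."""
    return min(phases, key=lambda p: phases[p][0] - T * phases[p][1])

for T in (0.3, 1.0, 2.0):
    print(f"T = {T}: {stable_phase(T)}")
```

At low $T$ the energy term dominates and the solid wins; at high $T$ the $-TS$ term dominates and the gas wins. Shifting the assumed $U$ and $S$ values moves, or even removes, the intermediate liquid window, mirroring the point that not every material has all three states.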
{ "source": [ "https://physics.stackexchange.com/questions/242387", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/110448/" ] }
242,499
I couldn't help but notice that the expression for the magnetic component of the Lorentz force, $$\mathbf F = q\,\mathbf v \times \mathbf B\,,$$ is very similar in its mathematical form to the Coriolis force, $$\mathbf F = 2m\,\mathbf v \times \boldsymbol\omega\,,$$ provided that we replace electric charge with mass, and angular velocity with the magnetic induction. Even though I am aware of the physical differences between those two forces (Coriolis is a fictitious force, which acts on objects that are in motion relative to a rotating frame of reference, whereas the magnetic force is caused by a magnetic field), I do remember reading that magnetism is a "relativistic effect of electricity" (Feynman lectures), and wonder whether this analogy is pure coincidence or points to a deeper connection. Could it have something to do with Lorentz transformations? On a more general level, could the magnetic force be viewed as "fictitious", and might this have some relation to the apparent non-existence of magnetic monopoles?

Edit: I would like to point out that the analogy can be extended to the two other inertial forces, the centrifugal force and the Euler force, as is shown here and here . My question could then be restated as: why is there an analogy between inertial and electromagnetic forces?
As nobody has done this yet, let's try to give an answer to your question in the right framework, i.e. through the formalism of differential geometry (and of action principles, as far as the physics is concerned). This formalism has the advantage of allowing for the use of arbitrary coordinate systems, so that the problem of the arising of "fictitious" force terms such as the Coriolis force can be addressed in a rigorous way. Moreover, it allows us to generalize the standard description of electromagnetism in such a way that magnetic monopoles indeed are permitted to exist. I will show and motivate why, in my opinion, there is no connection between the Coriolis force and the magnetic force, explain what it means for the magnetic force to be a relativistic effect of the electric force, and show how to introduce monopole fields in the magnetic field. I'll try to make myself as clear as possible, as I understand that you are not familiar with the formalism.

First of all, the Lagrangian for a point-like, massive particle in an arbitrarily curved spacetime, subject to the electromagnetic field, can be expressed as $$\mathscr{L}=-m\sqrt{g_{\mu\nu}\frac{dx^{\mu}}{ds}\frac{dx^{\nu}}{ds}}-e\,A_{\mu}(x^{\mu})\frac{dx^{\mu}}{ds}$$ (factors of $c$ missing). Here $g_{\mu\nu}$ is the spacetime metric, the object which encodes the spacetime curvature, $A_{\mu}$ is the covariant form of the electromagnetic four-potential, $A_{\mu}=(\phi,-\vec{A})$, $s$ is an arbitrary parameter, $m$ and $e$ are the mass and charge of the particle, and $x^{\mu}(s)$, with $\mu=0,1,2,3$, is the trajectory of the particle in spacetime. We will be interested in flat spacetimes, i.e. Minkowski spacetime, but one needs the full generalization to extract useful results from the formalism.

One finds the equations of motion for the particle by minimizing the action integral $S$, that is $$ S[x]=\int_{a}^{b}\mathscr{L}(x^{\mu}(s),\dot{x}^{\mu}(s))\ ds $$ where the dot denotes a derivation with respect to the parameter $s$.
Finding a minimum for $S$ is completely equivalent to the procedure shown in Frédéric's answer: the curve which minimizes the action is the curve which solves the Euler-Lagrange equations, or equivalently the Hamilton equations (those in the cited answer). Notice that the action $$ S=\int_{a}^{b}\bigg\{-m\sqrt{g_{\mu\nu}\frac{dx^{\mu}}{ds}\frac{dx^{\nu}}{ds}}-e\,A_{\mu}\frac{dx^{\mu}}{ds}\bigg\}\ ds $$ is invariant with respect to three different kinds of transformation. The first one is a monotone, increasing change of parametrization $s\to s'$ (i.e. one with $ds'/ds>0$), as the transformation gets absorbed in the measure of integration $ds$. The second one is an arbitrary change of coordinates: every time you see two indices contracted, as the two objects involved in the contraction transform in opposite ways (this is symbolically expressed by the positioning of the indices), the overall object remains invariant under a change of coordinates. The last one is the transformation $$ A_{\mu}\to A_{\mu}+\partial_{\mu}\chi $$ where $\chi$ is an arbitrary function of the variables $x^{\mu}$, known as a gauge transformation of the electromagnetic potential. Under such a transformation, the action gains the additional term $$ \delta S=\int_{a}^{b}-e\ \partial_{\mu}\chi\ \frac{dx^{\mu}}{ds}\ ds=\int_{a}^{b}-e\ \frac{d\chi}{ds}\ ds=-e\ \bigg[\chi(b)-\chi(a)\bigg] $$ which is a constant. So the action may not actually be invariant under such a transformation, but as $S$ only gets shifted by a constant amount, its minima are preserved by the transformation. The property of $S$ being invariant under such transformations has one important consequence: the general form of the dynamical equations one gets from the minimization of $S$ is valid with respect to any parametrization of the curve of the particle, which in turn can be expressed in any coordinate system you like, and the equations are not changed by a gauge transformation. 
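One concrete consequence of this gauge freedom can be checked numerically: the magnetic field $\vec B = \nabla\times\vec A$ is untouched by $\vec A \to \vec A + \nabla\chi$, because the curl of a gradient vanishes. A finite-difference sketch (my own illustration; the particular $\vec A$ and $\chi$ are arbitrary assumptions, not taken from the answer):

```python
import math

h = 1e-5  # finite-difference step

def A(x, y, z):
    """An arbitrary assumed vector potential."""
    return (y * z, -x * z, x * y + math.sin(x))

def chi(x, y, z):
    """An arbitrary assumed gauge function."""
    return x * y + z ** 3

def grad(f, x, y, z):
    """Central-difference gradient of a scalar function."""
    return ((f(x + h, y, z) - f(x - h, y, z)) / (2 * h),
            (f(x, y + h, z) - f(x, y - h, z)) / (2 * h),
            (f(x, y, z + h) - f(x, y, z - h)) / (2 * h))

def curl(F, x, y, z):
    """Central-difference curl of a vector field F."""
    comp = lambda i: (lambda x, y, z: F(x, y, z)[i])
    dFx, dFy, dFz = (grad(comp(i), x, y, z) for i in range(3))
    return (dFz[1] - dFy[2], dFx[2] - dFz[0], dFy[0] - dFx[1])

def A_gauged(x, y, z):
    """The gauge-transformed potential A + grad(chi)."""
    return tuple(a + g for a, g in zip(A(x, y, z), grad(chi, x, y, z)))

p = (0.3, -1.2, 0.7)
B1 = curl(A, *p)
B2 = curl(A_gauged, *p)
print(B1)  # B computed from A
print(B2)  # B from A + grad(chi): the same, up to finite-difference error
```

The two curls agree component by component, which is the numerical counterpart of the statement that the minima of the action, and hence the physics, are unchanged by the gauge transformation.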
To give the equations a simpler look, I will use the parametrization in which $$ \sqrt{g_{\mu\nu}\frac{dx^{\mu}}{ds}\frac{dx^{\nu}}{ds}}=1 $$ Please notice that this condition does not affect the choice of the coordinates through which you express the dynamical curve: the condition itself is unaffected by a change of coordinates.

From the above action $S$ and given the former condition on $s$, it can be shown that the dynamical equations have the following form: $$ \frac{d^{2} x^{\mu}}{ds^{2}}=-\Gamma^{\mu}_{\nu\tau}\ \frac{dx^{\nu}}{ds}\frac{dx^{\tau}}{ds}+\frac{e}{m}\ F^{\mu}_{\ \nu}\ \frac{dx^{\nu}}{ds} $$ Here $$ F^{\mu}_{\ \nu}=g^{\mu\sigma}(\partial_{\sigma}A_{\nu}-\partial_{\nu} A_{\sigma}) $$ with $g^{\mu\sigma}$ the inverse of the matrix $g_{\mu\sigma}$, is the electromagnetic field tensor, and the functions $$ \Gamma^{\mu}_{\nu\tau}=\frac{1}{2}g^{\mu\sigma}\ \big[\partial_{\nu}g_{\sigma\tau}+\partial_{\tau}g_{\sigma\nu}-\partial_{\sigma}g_{\nu\tau}\big] $$ are called the "Christoffel symbols" related to the metric $g$ in the $x^{\mu}$ coordinate system.

The Christoffel symbols encode two kinds of information: first of all, if the metric $g$ describes a curved spacetime, they encode the effect of curvature on the particle, i.e. they encode the gravitational field; second of all, they encode the fictitious forces due to the choice of a specific coordinate system. It can be shown that, when spacetime is not curved, there exist coordinates with respect to which every $\Gamma$ is zero. These are the (in)famous inertial frames, in which $\eta_{\mu\nu}=\text{diag}(1,-1,-1,-1)$ and the equations take the form $$ \frac{d^{2} x^{\mu}}{ds^{2}}=\frac{e}{m}\ F^{\mu}_{\ \nu}\ \frac{dx^{\nu}}{ds} $$ $$ F^{i}_{\ 0}=-\vec{\nabla}\phi-\partial_{0}\vec{A}=\vec{E} $$ $$ F^{i}_{\ j}=-\partial_{i}A_{j}+\partial_{j}A_{i}=\epsilon_{ijk}(\vec{B})_{k} $$ If we substitute $ds$ with $dt\,ds/dt$, where $t$ is the time coordinate of the inertial observer, i.e.
$t=x^{0}$, we find for the $\mu=1,2,3$ equations $$ \frac{d}{dt}\bigg(\frac{dt}{ds}\frac{d\vec{x}}{dt}\bigg)=\frac{e}{m}\ \left(\vec{E}+\vec{v}\times\vec{B}\right) $$ As $ds/dt$ turns out to be $\sqrt{1-v^{2}/c^{2}}$, the former is exactly the Lorentz equation for a relativistic particle.

Now let's work in more general coordinates. Let's call these coordinates $y^{\mu}$, with equations of motion $$ \frac{d^{2} y^{\mu}}{ds^{2}}=-\Gamma^{\mu}_{\nu\tau}\ \frac{dy^{\nu}}{ds}\frac{dy^{\tau}}{ds}+\frac{e}{m}\ F^{\mu}_{\ \nu}\ \frac{dy^{\nu}}{ds} $$ It can be shown that the general relation between the Christoffel symbols with respect to the $y$ and the $x$ coordinates is $$ \Gamma^{\mu\ (y)}_{\nu\tau}=\frac{\partial y^{\mu}}{\partial x^{\alpha}}\frac{\partial x^{\sigma}}{\partial y^{\nu}}\frac{\partial x^{\lambda}}{\partial y^{\tau}}\ \Gamma^{\alpha\ (x)}_{\sigma\lambda}+\frac{\partial y^{\mu}}{\partial x^{\alpha}}\frac{\partial^{2} x^{\alpha}}{\partial y^{\nu}\partial y^{\tau}} $$ where $\partial y/\partial x$ and its inverse are the matrices of the change of coordinates. In our specific case ($\Gamma^{\alpha\ (x)}_{\sigma\lambda}=0$), $$ \Gamma^{\mu}_{\nu\tau}=\frac{\partial y^{\mu}}{\partial x^{\alpha}}\frac{\partial^{2} x^{\alpha}}{\partial y^{\nu}\partial y^{\tau}} $$ So the dynamical equations can be written as $$ \frac{d^{2} y^{\mu}}{ds^{2}}=-\frac{\partial y^{\mu}}{\partial x^{\alpha}}\frac{\partial^{2} x^{\alpha}}{\partial y^{\nu}\partial y^{\tau}}\ \frac{dy^{\nu}}{ds}\frac{dy^{\tau}}{ds}+\frac{e}{m}\ F^{\mu\ (y)}_{\ \nu}\ \frac{dy^{\nu}}{ds} $$

Now let's focus on each term of the equation. $\frac{d^{2} y^{\mu}}{ds^{2}}$ plays the role of an acceleration with respect to the parameter $s$ (which does not need to be time). $\frac{e}{m}\ F^{\mu\ (y)}_{\ \nu}\ \frac{dy^{\nu}}{ds}$ is the ordinary electromagnetic acceleration, expressed in an arbitrary coordinate system and with respect to the parameter $s$.
But what about the functions $-\frac{\partial y^{\mu}}{\partial x^{\alpha}}\frac{\partial^{2} x^{\alpha}}{\partial y^{\nu}\partial y^{\tau}}\ \frac{dy^{\nu}}{ds}\frac{dy^{\tau}}{ds}$? It is useful to notice that these functions depend on second derivatives, i.e. they do not appear if the relation between the coordinates $y$ and $x$ is linear, of the form $$ x^{\mu}=\Lambda^{\mu}_{\ \nu}y^{\nu}+a^{\mu} $$ It is easy to realize that the former is the correct relation between two inertial coordinate systems. In a special-relativistic setting, we choose $\Lambda$ to be a Lorentz transformation, in order to keep the metric $g_{\mu\nu}=\text{diag}(1,-1,-1,-1)$, and in turn the $g^{\mu\nu}$ in the definition of the tensor $F^{\mu}_{\ \nu}$, invariant. Following a Lorentz transformation, then, the equations of motion do not change, as required by the theory of special relativity. But if we were to make different coordinate changes, the equations would indeed be different. In particular, new terms proportional to the velocities $dy^{\mu}/ds$ would arise. This is how Coriolis and other fictitious forces arise: $\vec{v}\times\vec{\omega}$ is none other than the product between a velocity and a reference frame parameter $\vec{\omega}$, which you can see in the general equation given above. The other velocity disappears when you choose $s$ to be time.

Now that we have the needed machinery and conceptual rigour, let's go back to your questions. First of all, what does it mean for the magnetic force to be a relativistic effect of the electric force? The electric and magnetic fields are related by coordinate transformations through the equations $$ F_{\mu\nu}^{(y)}=\frac{\partial x^{\sigma}}{\partial y^{\mu}}\frac{\partial x^{\lambda}}{\partial y^{\nu}}\ F_{\sigma\lambda}^{(x)} $$ On the right side of the equation, the electric and magnetic fields get mixed due to the change of coordinates. Are there coordinate systems in which the field is entirely electric or magnetic? Yes (and in principle they can even be inertial frames).
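The transformation law for $F_{\mu\nu}$ can be checked concretely: take a purely electric field in the $x$ frame, use a boost as the change of coordinates, and a magnetic component appears in the $y$ frame. A minimal numeric sketch with $c=1$; the field value and boost speed are illustrative:

```python
import numpy as np

# Covariant field tensor F_{mu nu} with F_{0i} = E_i and F_{ij} = -eps_{ijk} B_k.
# Start from a purely electric field along y (illustrative value):
E_y = 3.0
F = np.zeros((4, 4))
F[0, 2], F[2, 0] = E_y, -E_y

# Change of coordinates x = Lambda y: a boost along x with speed beta,
# so the Jacobian dx^sigma/dy^mu is just the boost matrix Lambda.
beta = 0.8
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.eye(4)
Lam[0, 0] = Lam[1, 1] = gamma
Lam[0, 1] = Lam[1, 0] = gamma * beta

# F^{(y)}_{mu nu} = (dx^sigma/dy^mu)(dx^lambda/dy^nu) F^{(x)}_{sigma lambda}
F_y = Lam.T @ F @ Lam

print(F_y[0, 2])  # electric part, now gamma * E_y
print(F_y[1, 2])  # spatial-spatial entry: a magnetic field has appeared
```

In the rest frame of the source there is only the $F_{0i}$ (electric) block; the boost populates the spatial-spatial (magnetic) block, which is exactly the mixing described above.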
This is because if the electric source is static in some reference frame, then in that frame the field is entirely electric. Thus, for example, in any frame related to this frame by a space rotation or translation the field will remain totally electric (Lorentz boosts, on the other hand, mix the two). On the other hand, imagine setting the source into motion. Then the field produced by the source in that frame is both electric and magnetic, but the source itself hasn't changed a bit! This means that the splitting of the EM field between an electric component and a magnetic component is not an intrinsic property of the source, but rather an artifact of the choice of a coordinate system, relative to the state of motion of the source itself. This is seen from the very definition of the E and B fields, which are deduced from the tensor $F_{\mu\nu}$, which in turn depends on the choice of coordinates. So in this case "relativistic effect" means "relative to the state of motion of the source, with respect to a chosen coordinate system". (The question of magnetic fields due to spin currents, on the other hand, must be addressed in a different, quantum-mechanical, setting).

Coriolis forces, too, are an effect of the choice of a specific coordinate system. As seen, they arise from second derivatives of the coordinate transformation from inertial coordinates. Their "coordinate dependence" status, however, is quite different from that of the E/M splitting. First of all, they do not depend upon the state of motion of any kind of source relative to a specific coordinate system (here we don't regard spacetime itself as one). Second of all, they only appear in non-inertial frames, whereas the E/M splitting is an issue even in inertial frames. Last but not least, Coriolis forces depend upon the mass of the particle, whereas the magnetic force does not.
This means that particles with different masses whose motion is described in the same coordinate system will experience different Coriolis forces, but they will experience the very same E/M splitting (the opposite happens with respect to the accelerations). So magnetic forces and Coriolis forces should not be compared to one another: they are two very different objects, and as such no deep connection can exist between them.

You noticed, though, that their mathematical form is similar. The mathematical form of a dynamical equation (apart from generally covariant equations such as those I wrote above, but this is not the case here), though, depends very much on the choice of a coordinate system, so one must be careful when comparing force terms of dynamical equations, especially when one is going from one coordinate system to another. The choice of coordinates is unphysical, in the sense that it needn't be connected to underlying physical principles, nor does it affect the physics of the system. In this case, though, a comparison can be made on a solid basis and turns out to be useful to answer your question. Now, a Coriolis force term of the form $2m\vec{v}\times\vec{\omega}$ appears in coordinate systems (rotating coordinates wrt a given inertial frame) where the magnetic force need not be of the form $e\,\vec{v}\times \vec{B}$.

Consider a non-relativistic charged particle in a uniform, constant magnetic field (description given in an inertial frame). The particle will circle around some axis parallel to the magnetic field with frequency $\omega=\frac{eB}{m}$ (factors of $c$ missing, depending on convention on the definition of the magnetic field). If you go to a frame which uniformly rotates around that axis with angular frequency $\omega=\frac{eB}{m}$, there will seem to be no magnetic field at all acting on the particle.
This is because the frame is moving together with the component of the motion of the particle which changes due to the magnetic field, thus no motion induced by the magnetic field can be observed in that frame. This does not signal a deep connection between the Coriolis and the magnetic force, it only confirms that there exist specific frames in which specific forces do not appear to act on specific systems. This, of course, is valid for any kind of force, if you don't restrict yourself to simple inertial frames. In this case, magnetic fields make electrically charged particles circle around, so it is obvious that a coordinate system in which the magnetic field does not appear must be a rotating one.

Let us see this for the case of interest. I'll use the same formulas as in Wikipedia's "Rotating reference frame" article (with $\vec{\omega}$ taken in the opposite direction). The acceleration for a charged particle in a uniformly rotating frame about the center of the trajectory, subject to a uniform constant magnetic field, takes the form: $$ \vec{a}_{r}=-\vec{\omega}\times\vec{\omega}\times \vec{r}_{i}+2\vec{\omega}\times\vec{v}_{r}+\frac{e}{m}\ \vec{v}_{i}\times \vec{B} $$ where the subscript $r$ denotes a quantity in the rotating reference frame, while the subscript $i$ denotes the same quantity in an inertial frame. We have: $$ \vec{v}_{i}=\vec{v}_{r}-\vec{\omega}\times\vec{r}_{i} $$ so that $$ \frac{e}{m}\ \vec{v}_{i}\times \vec{B}=\frac{e}{m}\ \vec{v}_{r}\times \vec{B}-\frac{e}{m}\ \vec{\omega}\times\vec{r}_{i}\times \vec{B} $$ As you can see, the mathematical form of the magnetic force changes in a uniformly rotating reference frame.
Moreover, as $$ -\vec{\omega}\times\vec{\omega}\times \vec{r}_{i}=-\vec{\omega}(\vec{\omega}\cdot\vec{r}_{i})+\vec{r}_{i}(\vec{\omega}\cdot\vec{\omega})=\frac{e^{2}B^{2}}{m^{2}}\ \vec{r}_{i} $$ where the last identity follows from $$ \vec{\omega}=\frac{e}{m}\ \vec{B}\ ,\qquad\quad \vec{\omega}\cdot\vec{r}_{i}=0 $$ and $$ -\frac{e}{m}\ \vec{\omega}\times\vec{r}_{i}\times \vec{B}=-\frac{e}{m}\ \vec{r}_{i}(\vec{\omega}\cdot\vec{B})+\frac{e}{m}\ \vec{B}(\omega\cdot \vec{r}_{i})=-\frac{e^{2}B^{2}}{m^{2}}\ \vec{r}_{i} $$ we have $$ \vec{a}_{r}=-2\vec{v}_{r}\times \vec{\omega}+\frac{e}{m}\ \vec{v}_{r}\times\vec{B}=-2\vec{v}_{r}\times \vec{\omega}+\vec{v}_{r}\times\vec{\omega}=-\vec{v}_{r}\times \vec{\omega} $$ Again, $$ \vec{a}_{r}=-\vec{v}_{r}\times \vec{\omega}=-\vec{v}_{i}\times\vec{\omega}-\vec{\omega}\times\vec{r}_{i}\times\vec{\omega} $$ which is zero due to the fact that, having solved the equation in the inertial system, $\vec{v}_{i}=\vec{r}_{i}\times\vec{\omega}$. Hence no uniform constant magnetic field nor Coriolis forces appear in the equations expressed in the uniformly rotating reference frame. This is why the form of the magnetic force term in the inertial frame coincides with the form of the Coriolis force term in the rotating frame: one must compensate the other (together with the centrifugal force) in the transition between the two coordinate systems, given the right rotation frequency. The same can be done for the electric force: it can be made to disappear in simple accelerating coordinate systems, as the force term has the form of a simple acceleration (in contrast with the $\vec{v}\times$ form of the magnetic force). A magnetic force is not fictitious in the following sense. Magnetic and electric forces have very different effects on the motion of point-like charges. This statement is supported by the very form of the Lorentz equation. 
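The cancellation derived above can also be seen numerically: integrate the inertial-frame motion of a non-relativistic charge in a uniform field $\vec{B}=B\hat{z}$, then rotate the trajectory about the center of the orbit at the cyclotron frequency $\omega=eB/m$; in the co-rotating frame the particle should simply sit still. A sketch with illustrative values ($e/m=1$, $B=2$, $v_{0}=1$, none of them from the text):

```python
import numpy as np

q_over_m, Bz = 1.0, 2.0
omega = q_over_m * Bz              # cyclotron frequency e*B/m
B = np.array([0.0, 0.0, Bz])

def acc(v):
    return q_over_m * np.cross(v, B)

# Integrate dv/dt = (e/m) v x B with classic RK4, recording the trajectory.
r = np.array([0.0, 0.0, 0.0])
v = np.array([1.0, 0.0, 0.0])
dt, steps = 1e-4, 20000
trajectory = [(0.0, r.copy())]
for n in range(1, steps + 1):
    k1r, k1v = v,              acc(v)
    k2r, k2v = v + 0.5*dt*k1v, acc(v + 0.5*dt*k1v)
    k3r, k3v = v + 0.5*dt*k2v, acc(v + 0.5*dt*k2v)
    k4r, k4v = v + dt*k3v,     acc(v + dt*k3v)
    r = r + dt/6*(k1r + 2*k2r + 2*k3r + k4r)
    v = v + dt/6*(k1v + 2*k2v + 2*k3v + k4v)
    trajectory.append((n*dt, r.copy()))

center = np.array([0.0, -1.0/omega, 0.0])   # guiding center for this initial data

def rotate_z(p, angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([c*p[0] - s*p[1], s*p[0] + c*p[1], p[2]])

# Position as seen in the frame co-rotating at the cyclotron frequency:
start = trajectory[0][1] - center
spread = max(np.linalg.norm(rotate_z(p - center, omega*t) - start)
             for t, p in trajectory)
print(spread)   # ~0: in the rotating frame the particle does not move at all
```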
Thus, as previously stated, one may find a coordinate system such that the magnetic force does not appear in the equations, but one cannot remove the effect of the magnetic field on the trajectories of charged particles: the trajectories themselves do not depend on the description you give of them. The curve $x^{\mu}(s)$ is an actual collection of points in spacetime, and the location of these points does not depend on the way in which you parametrize it. The equations will change in such ways as to produce the very same effect on the dynamical trajectories, only through a different coordinate description of the dynamics. This is the statement of the coordinate-transformation invariance of the action $S$. It is the description which changes, not the physics. Thus a magnetic field will always have an influence on the system, whether you call it magnetic or (in a different description, i.e. in a different coordinate system) not. One should keep in mind, though, that magnetic fields arise from the perturbation of the EM field by non-static (wrt some inertial frame) electric charges, with emphasis on the electric (i.e. non-magnetic) nature of the charges. (Again, we are totally ignoring the role of spin currents in the production of magnetic fields).

As a bonus, here is the potential for a monopole magnetic field. Define a coordinate system that covers the entire $\Bbb{R}^{4}$ spacetime except for the non-positive $z$ axis and take $\vec{A}$ to be $$ \vec{A}^{N}=\left(g\ \frac{y}{r(r+z)},-g\ \frac{x}{r(r+z)},0\right) $$ Then define a coordinate system that covers $\Bbb{R}^{4}$ except for the non-negative $z$ axis and take $\vec{A}$ to be $$ \vec{A}^{S}=\left(-g\ \frac{y}{r(r-z)},g\ \frac{x}{r(r-z)},0\right) $$ $g$ is the magnetic coupling constant, and the magnetic field which corresponds to $\vec{A}^{N}$ and $\vec{A}^{S}$ is the same; it equals $$ \vec{B}=\vec{\nabla}\times\vec{A}=-g\frac{\vec{r}}{r^{3}} $$ and it is a monopole field.
Differential geometry teaches us that it is OK to choose local coordinate systems, i.e. coordinate systems which cover spacetime only in part. It also teaches us that, thanks to the gauge invariance of the action, we can choose the potentials to be different in those local coordinate systems, as long as they are related by a gauge transformation in the overlapping regions. In this case, we have $$ \vec{A}^{N}=\vec{A}^{S}-2g\vec{\nabla} \arctan\frac{y}{x} $$ so that an overall gauge potential is well defined, and it correctly reproduces a monopole field. So no, magnetic monopoles are not forbidden, not even in a classical setting.
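Both claims — that the two patches give the same monopole field, and that they differ only by a pure gauge term — can be verified numerically with finite differences (the sample point and $g=1$ are illustrative choices):

```python
import numpy as np

g = 1.0  # magnetic coupling (illustrative value)

def r(p):
    return np.linalg.norm(p)

def A_N(p):
    x, y, z = p; R = r(p)
    return np.array([g*y/(R*(R+z)), -g*x/(R*(R+z)), 0.0])

def A_S(p):
    x, y, z = p; R = r(p)
    return np.array([-g*y/(R*(R-z)), g*x/(R*(R-z)), 0.0])

def curl(A, p, h=1e-6):
    """Numerical curl of a vector field A at point p, by central differences."""
    J = np.zeros((3, 3))          # J[i, j] = dA_i / dx_j
    for j in range(3):
        dp = np.zeros(3); dp[j] = h
        J[:, j] = (A(p + dp) - A(p - dp)) / (2*h)
    return np.array([J[2, 1]-J[1, 2], J[0, 2]-J[2, 0], J[1, 0]-J[0, 1]])

p = np.array([1.0, 0.7, 0.5])     # sample point away from the z axis
B_N = curl(A_N, p)
B_S = curl(A_S, p)
B_monopole = -g * p / r(p)**3

print(B_N, B_S, B_monopole)                 # all three agree
print(curl(lambda q: A_N(q) - A_S(q), p))   # ~0: the difference is pure gauge
```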
{ "source": [ "https://physics.stackexchange.com/questions/242499", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/110834/" ] }
243,144
Why is a second equal to the duration of 9,192,631,770 periods of radiation corresponding to the transition between two hyperfine levels of the ground state of the caesium-133 atom? Why is the number of periods so complicated? It could be any simple number, why is it exactly 9,192,631,770?
That number, 9192631770, was chosen to make the new definition of the second as close as possible to the less precise old second definition. This means that--except for the most precise measurements--instruments calibrated before the new second was defined would not have to be recalibrated.
{ "source": [ "https://physics.stackexchange.com/questions/243144", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/111173/" ] }
243,565
When I'm in a dark environment, and I turn on a torch, I can see the beam of light from the torch. To the best of my understanding, the main reason why I can see the beam of light is that the light from the torch scatters off dust and other miscellaneous particles in random directions, allowing us to actually see the beam of light. If this were the case, then I would expect the beam of light to decrease with intensity as it travels further from the torch, and the beam would sort of smoothly fade out of existence. However, recently at a lights festival held in Australia , I noticed something quite strange. Instead of smoothly fading out of existence, the beams of light at the festival continued into the night sky for a set distance, and were abruptly cut off. (The image on the left is just a random image showing the beams of light. The bright white lights were (I think) moths and other insects.) As can be seen on the image on the right, the beams of light were abruptly cut off after a set distance, instead of fading out of existence smoothly. The effect didn't come out that great on the picture, but in real life it was incredibly pronounced. Most times, I can always come up with some explanation for a phenomenon I observe, but this time round I legitimately have no idea. For a time I thought maybe it was due to human perception, (the way we perceive light), but I don't think that it can explain the effect, it was just that pronounced.
This effect is due to a change in the density of aerosols and dust particles at the top of the planetary boundary layer , the border between the part of the atmosphere which is turbulent due to surface details like trees, buildings, and topography, and the part of the atmosphere in which those details are ignored and wind flows can be laminar even at high speeds. You know how sometimes on summer days you'll see a patch of fair-weather cumulus clouds with irregular fluffy tops but flat bottoms, and the flat bottoms are all at the same low-ish altitude? That's the edge of the planetary boundary layer. The intensity of light backscattered by aerosols at a distance $r$ goes like $1/r^4$, because you lose a factor of $r^2$ both on the way out and on the way back in.$^\dagger$ A relatively sudden change in the density of scatterers can drop the intensity of the scattered beam below the threshold of your visible sensitivity. (This is part of the reason why it's a felony in the US to point a laser at an airplane, even if the airplane looks "farther away than the laser beam.") Don't let my simple description here fool you: the atmosphere and its motions are complicated. Sometimes, for instance, there are multiple haze layers which are visible if illuminated correctly. Last year, when poor weather interrupted an astronomy event, I successfully spotted a double-haze layer using a laser pointer from the ground: the beam was bright from the ground, went dark, then continued further up with a bright spot on the second layer. $^\dagger$ Two commenters protest that the drop in the intensity of the backscattered light should be proportional to $2r^2$ or $(2r)^2$ rather than proportional to $r^2 \cdot r^2 = r^4$. It's not a typo or an error. The intensity of the laser falls off like $1/r^2$ as long as $r$ is much larger than the distance to any waist in the laser beam. That determines the absolute brightness of the dust grain.
The backscattered light from the dust grain isn't collimated at all, so you get another factor of $r^2$. This $r^4$ falloff in reflected or backscattered intensity is why the amazing lunar laser ranging experiment won't ever be repeated with retroreflectors on Mars.
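As a rough illustration of why the cutoff looks so sharp: the eye sees brightness $\propto n(r)/r^{4}$, so the smooth geometric fade changes brightness by a fraction of a percent per metre, while a drop in aerosol density $n$ at the boundary layer changes it by orders of magnitude over a few metres. A sketch with made-up numbers (a 100x density drop at 1.5 km, purely illustrative):

```python
# Brightness of the scattered beam seen from the ground, as a function of the
# distance r along the beam: proportional to n(r) / r^4.
# Illustrative model: aerosol density drops 100x above a 1.5 km boundary layer.
def density(r_m):
    return 1.0 if r_m < 1500.0 else 0.01

def brightness(r_m):
    return density(r_m) / r_m**4

# Smooth geometric fade just below the layer, over a 2 m step...
fade = brightness(1497.0) / brightness(1499.0)
# ...versus the jump across the layer itself:
jump = brightness(1499.0) / brightness(1501.0)
print(fade, jump)  # ~1.005 vs ~100: the layer looks like a hard edge
```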
{ "source": [ "https://physics.stackexchange.com/questions/243565", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/60080/" ] }
243,775
For an oscillating string that is clamped at both ends (I am thinking of a guitar string specifically) there will be a standing wave with specific nodes and anti-nodes at defined $x$ positions. I understand and can work through the maths to obtain the fact that the frequency is quantised and is inversely dependent on $L$, the length of the string, and $n$, some integer. If I pluck a guitar string, this oscillates at the fundamental frequency, $n=1$. If I change to a different fret, I am changing $L$ and this is changing the frequency. Is it possible to get to higher modes ($n=2$, $n=3$ etc)? I don't understand how by plucking a string you could get to 1st or 2nd overtones. Are you just stuck in the $n=1$ mode? Or would the string need to be oscillated (plucked) faster and faster to reach these modes?
When you pluck the string you excite many many overtones, not just the fundamental. You can observe this by suppressing the fundamental. Pluck the string while holding a finger lightly at the center of the string. That point is an antinode for the fundamental and all odd harmonics, but a node for the even harmonics. Putting your finger at that point damps the odd harmonics (especially the fundamental), but has little effect on the even harmonics. (There's a node at that point.) You may have to experiment a little to find exactly the right spot and pressure. Guitar players do this all the time to get a different sound out of the instrument.
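The overtone content of a pluck can be made quantitative: for an ideal string plucked into a triangle at $x=a$, the amplitude of the $n$-th harmonic is proportional to $\sin(n\pi a/L)/n^2$ (the standard Fourier sine series of the initial shape). A short sketch, with the string length, pluck position and height chosen for illustration:

```python
import numpy as np

L, a, h = 1.0, 0.2, 0.005   # string length, pluck position (L/5), pluck height

def amplitude(n):
    """Fourier sine coefficient of the initial triangular string shape."""
    return 2*h*L**2 / (np.pi**2 * a*(L - a)) * np.sin(n*np.pi*a/L) / n**2

amps = np.array([amplitude(n) for n in range(1, 9)])
print(np.round(amps / amps[0], 3))
# Many overtones are excited at once; the only harmonics missing are those
# with a node at the pluck point (here n = 5, since a = L/5).
```

This is also why plucking near the bridge sounds brighter: the $\sin(n\pi a/L)$ factor feeds relatively more energy into the high harmonics.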
{ "source": [ "https://physics.stackexchange.com/questions/243775", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/111525/" ] }
244,158
I would really like to give an explanation similar to this one . Here's my current recipe: (i) Mine uranium, for example take a rock from here (picture of uranium mine in Kazakhstan). (ii) Put the rock in water. Then the water gets hot. (iii) [Efficient way to explain that now we are done with the question] This seems wrong, or the uranium mine would explode whenever there is a rainfall. Does one need to modify the rock first? Do I need some neutron source other than the rock itself to get the reaction started? As soon as I have a concrete and correct description of how one actually does I think I can fill in with details about chain reactions et.c. if the child would still be interested to know more.
Everything is made of tiny things called atoms. All atoms have a tiny center part called the nucleus. Some atoms have an unusual type of nucleus that, every once in a long while, randomly explodes, sending tiny pieces in all directions. Normally those tiny pieces just bounce around until they join another atom. However, if you have a bunch of the right kind of exploding nuclei together, the exploding pieces of one nucleus can hit other exploding nuclei, and make them explode immediately, then those pieces hit even more exploding nuclei, and you get a chain reaction, sort of like dominoes. To make a nuclear reactor, you dig up a bunch of rocks with the right kind of exploding atoms, and you carefully remove many of the other atoms so the exploding atoms are close enough together to make a chain reaction, then you put them in water*. All the exploding nuclei produce a lot of heat, which boils the water. The steam turns a fan, which spins a magnet, and creates electricity. You have to be very careful that you don't put too many of the pieces with exploding atoms together, or the atoms will explode too fast, and reactor will get too hot. *If you want to get into more detail, you could explain that the exploding bits are going so fast, that they usually pass right through the other atoms, cartoon-style, unless you have other atoms, like those in water (a moderator), for them to bounce off of and slow down. You could also explain that reactors use "control rods", which are made of atoms that easily absorb the exploding bits, and therefore slow down the chain reaction. So, if they push the control rods further into the reactor, the chain reaction slows down more. If you want to include more terminology: Rocks = Uranium ore Removing all the other atoms = enrichment Nucleus exploding = nuclear fission Exploding atoms = radioactive atoms (often Uranium) Exploding pieces = neutrons (and some other particles) Fan = turbine
{ "source": [ "https://physics.stackexchange.com/questions/244158", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/111697/" ] }
244,183
Mass, apart from the amount of material, is also a measure of the inertia of an object, i.e. the resistance to a change in its motion. In zero gravity, does mass still count as the amount of inertia? In other words, is the resistance to moving a rock the same in zero gravity and on Earth, if there is no friction etc.?
Yes. Mass measures inertia everywhere, with or without gravity. Weight is the force $W = mg$ that gravity exerts on an object; it changes from place to place and vanishes in free fall, but inertia does not. To give a rock of mass $m$ an acceleration $a$ you must push with a force $F = ma$, whether you are on Earth, on the Moon, or floating in orbit. This is why astronauts still have to push hard to move massive equipment around a space station, even though it is "weightless" — and stopping it once it is moving is just as hard. If you could eliminate friction on Earth, pushing the rock horizontally would take the same force as pushing it in deep space; the only difference gravity makes is the extra vertical force needed to support or lift it.
{ "source": [ "https://physics.stackexchange.com/questions/244183", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/97870/" ] }
244,650
According to Wikipedia's description of torsion springs and according to my understanding of physics the energy of a torsional spring can be written as $$U=\frac{1}{2}k \varphi^2$$ where $k$ is a constant with units of $\rm N\,m/rad$. I am freaking out here because if the energy of a torsional spring is really $\frac{1}{2}k \varphi^2$ then the units are $\rm (N\,m/rad) \cdot rad^2=Joule\cdot rad$. ?? What on earth am I missing here?
Radians are a pure number, so they do not contribute to your dimension considerations. The units of the torsion constant are $\mathrm{Nm}$ which are equivalent to Joules. https://en.wikipedia.org/wiki/Radian#Definition
{ "source": [ "https://physics.stackexchange.com/questions/244650", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/86846/" ] }
244,685
Consider an adiabatic box with an adiabatic board in the middle, which separates the box into two parts. There is a small hole in the board next to a coil, and the hole has a door which opens when the current in the coil reaches a certain value. Now, if I put some gas in the right half of the box, where each molecule has a magnetic dipole moment, only fast molecules will produce enough current in the coil by induction to open the door. After some time, the faster molecules will come to the left side and the slower molecules will be left on the right side, so the entropy in this isolated system decreases spontaneously. Does this violate the second law of thermodynamics? What's the problem with this setup?
This ratchet-like Maxwell's demon has the same problem as all of the other ones: the door/coil mechanism itself will heat up, and become useless. Before thinking about this one, think about the simpler scenario where there's just a door, that opens if a fast particle hits it hard enough. Since particles have energy on the order of $kT$, the door must require around that much energy to open. But by the equipartition theorem , once the door itself is at temperature $T$, it will have $kT$ of thermal energy! So after a while the door will be wildly swinging open and shut on its own, and become totally useless. This machine adds a second stage: now, your particle's $kT$ of kinetic energy goes into making a current in the coil, and the current in the coil opens the door. However, thermal noise applies to circuits, too; after a while, your coil will reach temperature $T$ and have $kT$ of Johnson noise , giving a randomly fluctuating current. As in the previous case, this will make your door randomly open and close, making the device fail.
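To put numbers on this: the energy scale of the gas — and, by equipartition, of every mechanical or electrical degree of freedom of the door-and-coil mechanism once it thermalizes — is $kT$, and the corresponding random voltage in the coil is the Johnson noise $V_{\rm rms}=\sqrt{4k_{B}TR\,\Delta f}$. A quick estimate; the resistance and bandwidth are illustrative values, not from the question:

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0               # room temperature, K

# Typical energy scale of a gas molecule, and (by equipartition) of any
# thermalized degree of freedom of the door at temperature T:
kT = k_B * T
print(kT)               # ~4.1e-21 J

# Johnson noise in the detection coil: V_rms = sqrt(4 k_B T R * bandwidth).
# Illustrative values for the coil resistance and detection bandwidth:
R, bandwidth = 1e3, 1e6
v_rms = math.sqrt(4 * k_B * T * R * bandwidth)
print(v_rms)            # ~4e-6 V of random signal, with no particle in sight
```

Any trigger threshold low enough to fire on a single molecule's $\sim kT$ of energy will also fire constantly on this noise once the mechanism reaches the gas temperature.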
{ "source": [ "https://physics.stackexchange.com/questions/244685", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/99868/" ] }
244,922
Moonlight has a color temperature of 4100K , while sunlight has a higher color temperature of more than 5000K. But objects illuminated by moonlight don't look yellower to the eye. They look bluer. This holds for indoor scenes (like my hall) and for outdoor. I find it counter-intuitive that moonlight has a lower color temperature. I thought the sun is the yellowest natural source of light we have. Is that because of the poor color sensitivity of the eye in dim light? In other words, moonlight is actually yellower, but our eyes can't see the intense yellow color? If one were to use a giant lens to concentrate moonlight to reach the brightness of sunlight, will objects illuminated by this light appear yellower to the eye than the same objects under sunlight? Has anyone done such an experiment? I looked, but couldn't find any. Alternatively, if I take a long-exposure photo of a landscape illuminated by the full moon, and another one illuminated by sunlight, and equalise the white balance and the exposure, will the moonlit photo look yellower?
I refer you to the picture below, taken from Ciocca & Wang (2013) . This clearly shows that the spectrum of the moon (normalised to have a similar overall strength as sunlight) is redder than sunlight and so has a lower "colour temperature". This is a fact, not a perception. EDIT: Just to clear up some confusion - the OP talks about "yellower" because that is how the eye perceives a redder spectrum (in the Physics sense of the word, meaning shifted to longer wavelength - see picture). In this sense yes, moonlight is "yellower" than sunlight because it has a redder spectrum. The reason for the redder spectrum is that the reflectance of the moon gets larger at redder wavelengths, so as moonlight is reflected sunlight, it must be redder than sunlight. As for our perception of moonlight, opinions vary. Whilst the light is probably too bright for true scotopic vision , it is likely not bright enough for full colour vision to be operative and therefore inferior mesopic vision takes over, with eye cells that are more sensitive to blue light - a.k.a. the Purkinje effect . This is exactly what Ciocca & Wang suggest in their paper. However, it must be pointed out that the difference between the solar and moon spectrum is not that big, especially considering that the eye works as a logarithmic intensity detector. It is entirely possible that the difference is not big enough to be perceived by the eye, so that the broad spectrum of the moon basically appears white and that this is enhanced if it is seen against a dark sky.
{ "source": [ "https://physics.stackexchange.com/questions/244922", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/112041/" ] }
245,214
The following facts are what I think I know about gravitational waves: Distortion of space-time moving away from a source at light speed. Produced by very powerful event in the universe such as merging black holes. What I still don't know is what are they made of? Are they empty?
A wave is a traveling distortion. This goes for any type of wave. An ocean wave is a distortion of the water surface. A sound wave is a distortion in air pressure. A light wave is a distortion in electromagnetic fields. A wave is made of the thing that is vibrating--ocean waves are made of water, etc. So, a gravitational wave is made of space and time, since gravity is the effect of space and time warping due to nearby masses.
{ "source": [ "https://physics.stackexchange.com/questions/245214", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/75502/" ] }
245,898
My question is concerning wikipedia article on Oh-my-God particle , to be precise, this paragraph: This particle had so much kinetic energy it was travelling at 99.99999999999999999999951% the speed of light. This is so near the speed of light that if a photon were travelling with the particle, it would take 220,000 years for the photon to gain a 1 centimeter lead. Also, due to special relativity effects, from the proton's reference frame it would have only taken it around 10 seconds to travel the 100,000 light years across the Milky way galaxy. [ 1 ] I would like to see demonstration how the special relativity effect allows the particle to travel the distance in 10 seconds. EDIT: Thanks for all responses, I have one more question: you all explain the situation from the "Proton reference frame". What about from "Observer reference frame"? We can imagine the observer (and also the whole universe around) moving at 99.99999999999999999999951% the speed of light comparing to the "stationary" proton. How will the proton look from this reference frame? EDIT2: This was not a homework, just a weekend curiosity while reading wikipedia :-)
Key point in your quote is: "from protons reference frame". In a reference frame travelling at a relativistic speed, length contraction is experienced. All the lengths in the direction of travel of the particle are contracted by the Lorentz factor: $$ l'=\frac{l}{\gamma}$$ $$ \gamma = \frac{1}{\sqrt{1- \frac{v^2}{c^2}}}$$ So $ \gamma = \frac{1}{\sqrt{1-(0.9999999999999999999999951)^2}}=3.19\times10^{11}. $ In the reference frame of the particle, the Milky Way is contracted by this factor, so the proton sees it only $2.96 \times 10^9\ \mathrm{m}$ long. Now you can do the usual calculation to find the time using the new contracted length and see that it would take only $2.96 \times 10^{9}/(3\times10^{8}) \approx 10$ seconds to cross the Milky Way. Length contraction is kind of a consequence of the 4D space-time we live in. If you look at time dilation (which can also be used to derive this result but is less intuitive in my opinion), length contraction naturally arises from it. If you want to know more about length contraction you can easily find more information on it. It is a topic which is usually well explained in any special relativity book, and I bet there are a lot of questions on the topic on this website; search the tags length-contraction and special-relativity.
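You can reproduce these numbers directly. One numerical caveat: a double-precision float cannot hold that many nines ($v/c$ rounds to exactly $1.0$), so it is better to work with the deficit $\delta = 1-v/c$ and use $1-(v/c)^2=\delta(2-\delta)$:

```python
import math

# v/c = 0.9999999999999999999999951 cannot be stored in a double
# (it rounds to exactly 1.0), so work with the deficit delta = 1 - v/c.
delta = 4.9e-24

# gamma = 1/sqrt(1 - (v/c)^2) = 1/sqrt(delta * (2 - delta))
gamma = 1.0 / math.sqrt(delta * (2.0 - delta))
print(gamma)            # ~3.2e11, the Lorentz factor quoted above

ly = 9.4607e15                      # one light year in metres
L_contracted = 1e5 * ly / gamma     # Milky Way, 100,000 ly, in the proton frame
t = L_contracted / 2.998e8          # crossing time at essentially c
print(t)                # ~10 seconds
```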
{ "source": [ "https://physics.stackexchange.com/questions/245898", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/93764/" ] }
246,050
At the start of The Martian movie the astronauts collect samples, targeting "100 grams". Hence the question: what units do astronauts actually use? I did some quick google searching but all I get is either how astronauts weigh themselves in zero gravity or how planet weight is calculated. I think they could use either earth kilograms adjusted to mars gravity, yielding a different number but the same mass, or the same number with mars gravity, yielding a different mass on each planet. Either way I find this confusing and expect they have a more elegant solution.
First of all, the astronauts would be measuring mass . This is the property of a thing that determines how hard it is to accelerate the thing. It does not change with the local gravity, or lack of gravity. A 450 Magnum slug will hurt just as much if it hits you on Earth, Mars, Luna, or the ISS. There's a lump of stuff kept very safe in a lab in France. The mass of that lump is *defined* as exactly 1 kilogram. Most people will frequently use the terms weight and mass interchangeably. Only nerdy types like Physics teachers will insist on a distinction. As it happens, an object's mass also determines how much force the local planet exerts on it. Thus, you can guesstimate the mass of something by lifting it and using experience to tell you the mass. If you lack the experience, like The Martian, the guess can be off. You can measure the mass of an object by using a spring to pull up the object. This is unsatisfactory for a number of reasons: gravity is different in different parts of the world (and on different worlds) and the spring may change its properties with temperature, age, or mistreatment. If you look at the scale used in a store to measure stuff for sale, you may see something like: "Honest weight; no springs." The simplest way to measure mass accurately is to use the local pull of gravity on two masses: the one you want to measure, and one or more standard masses that are already determined. For example, a chemical balance uses a set of calibrated masses piled on one pan, and the unknown mass on the other pan, until the pans balance. This balance, with the same set of standard masses, can be used anywhere in the world, or on Mars, or the Moon, with exactly the same results in every location. The only assumption is that gravity is the same at both pans. Another type of balance is slightly different: instead of having a set of standard masses, the balance has two or more standard masses that move from notch to notch in a horizontal bar. Here, the accuracy depends on the size of the standard masses and on the location of the notches in the bar. Worn notches, or smears of peanut butter and jelly on the masses, can degrade the accuracy. In short, any scale that measures mass by comparing the pull of gravity on standard and unknown masses will work to give the same mass in any gravity field...
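As a quick numerical illustration of why a spring scale fails off Earth while a balance does not, here is a sketch (the 100 g sample and the surface-gravity values are illustrative): the spring reads the force $mg$, which depends on $g$, while the balance compares two forces in the same field, so $g$ cancels.

```python
g_earth, g_mars = 9.81, 3.71          # m/s^2, approximate surface gravities
m_sample, m_standard = 0.100, 0.100   # kg: unknown sample and reference mass

# Spring scale: reads the force m*g, so the same mass reads differently
f_earth = m_sample * g_earth          # ~0.981 N on Earth
f_mars = m_sample * g_mars            # ~0.371 N on Mars: "weighs" less

# Beam balance: compares two forces in the SAME gravity field; g cancels
balanced_on_earth = abs(m_sample * g_earth - m_standard * g_earth) < 1e-12
balanced_on_mars = abs(m_sample * g_mars - m_standard * g_mars) < 1e-12
print(f_earth, f_mars, balanced_on_earth, balanced_on_mars)
```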
{ "source": [ "https://physics.stackexchange.com/questions/246050", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/27291/" ] }
246,061
The popular description of black holes, especially outside the academia, is that they are highly dense objects; so dense that even light (as particle or as waves) cannot escape it once it falls inside the event horizon. But then we hear things like black holes are really empty, as the matter is no longer there. It was formed due to highly compact matter but now energy of that matter that formed it and whatever fell into it thereafter is converted into the energy of warped space-time. Hence, we cannot speak of extreme matter-density but only of extreme energy density. Black holes are then empty, given that emptiness is absence of matter. Aren't these descriptions contradictory that they are highly dense matter as well as empty? Also, if this explanation is true, it implies that if enough matter is gathered, matter ceases to exist. (Sorry! Scientifically and Mathematically immature but curious amateur here)
The phrase black hole tends to be used without specifying exactly what it means, and defining exactly what you mean is important to answer your question. The archetypal black hole is a mathematical object discovered by Karl Schwarzschild in 1915 - the Schwarzschild metric . The curious thing about this object is that it contains no matter. Technically it is a vacuum solution to Einstein's equations. There is a parameter in the Schwarzschild metric that looks like a mass, but this is actually the ADM mass i.e. it is a mass associated with the overall geometry. I suspect this is what you are referring to in your second paragraph. The other important fact you need to know about the Schwarzschild metric is that it is time independent i.e. it describes an object that doesn't change with time and therefore must have existed for an infinite time in the past and continue to exist for an infinite time into the future. Given all this you would be forgiven for wondering why we bother with such an obviously unrealistic object. The answer is that we expect the Schwarzschild metric to be a good approximation to a real black hole, that is, a collapsing star will rapidly form something that is in practice indistinguishable from a Schwarzschild black hole - actually it would form a Kerr black hole since all stars (probably) rotate. To describe a real star collapsing you need a different metric. This turns out to be fiendishly complicated, though there is a simplified model called the Oppenheimer-Snyder metric . Although the OS metric is unrealistically simplified we expect that it describes the main features of black hole formation, and for our purposes the two key points are:

1. the singularity takes an infinite coordinate time to form
2. the OS metric can't describe what happens at the singularity

Regarding point (1): time is a complicated thing in relativity.
Someone watching the collapse from a safe distance experiences a different time from someone on the surface of the collapsing star and falling with it. For the outside observer the collapse slows as it approaches the formation of a black hole and the black hole never forms. That is, it takes an infinite time to form the black hole. This isn't the case for an observer falling in with the star. They see the singularity form in a finite (short!) time, but ... the Oppenheimer-Snyder metric becomes singular at the singularity, and that means it cannot describe what happens there. So we cannot tell what happens to the matter at the centre of the black hole. This isn't just because the OS metric is a simplified model, we expect that even the most sophisticated description of a collapse will have the same problem. The whole point of a singularity is that our equations become singular there and cannot describe what happens. All this means that there is no answer to your question, but hopefully I've given you a better idea of the physics involved. In particular matter doesn't mysteriously cease to exist in some magical way as a black hole forms.
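The "infinite coordinate time" for the outside observer can be illustrated with the Schwarzschild time-dilation factor $\sqrt{1-r_s/r}$: to a distant observer, a clock near the forming horizon runs ever slower, tending to zero rate. A rough sketch, with an assumed 10 solar mass collapsing star:

```python
import math

G, c = 6.674e-11, 2.998e8   # SI units
M = 10 * 1.989e30           # assumed 10-solar-mass star, kg
r_s = 2 * G * M / c**2      # Schwarzschild radius, ~29.5 km

# Rate of a static clock at radius r, as seen from far away.
# It tends to zero as r approaches r_s, so the distant observer
# never sees the surface cross the horizon.
for r in (10 * r_s, 2 * r_s, 1.001 * r_s):
    print(r / r_s, math.sqrt(1 - r_s / r))
```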
{ "source": [ "https://physics.stackexchange.com/questions/246061", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/112650/" ] }
246,438
I know that stars having a mass greater than or equal to 8 solar masses are termed "massive stars". But why is the cut-off 8 solar masses?
The division is conventionally made at the boundary between where stars end their lives as white dwarf stars and where more massive stars will end their lives in core collapse supernovae. The boundary is set both empirically, by observations of white dwarfs in star clusters, where their initial masses can be estimated, and also using theoretical models. The division is not arbitrary, it is of fundamental significance in studying the chemical evolution of a galaxy. The nucleosynthetic products of massive stars are fundamentally different to those of lower mass stars. The products also get recycled into the interstellar medium in a rather different way. Further, massive stars will affect the interstellar medium through supernova explosions in a manner that just doesn't occur in lower mass stars. The reason for the 8 solar mass division (it is uncertain by about 1 solar mass and also depends to a certain extent on rotation and the initial metallicity of the star, so is not a sharp threshold) is that this is where the carbon/oxygen core (during He shell burning)$^{1}$ becomes hot enough to ignite further fusion. Core burning continues through to iron-peak elements, then there is a core mass collapse, a violent supernova and large quantities of processed material (O, Mg, Ne, Si, r-process elements) are ejected at high speeds. A neutron star or black hole remnant is formed. In lower mass stars, the core becomes degenerate, supported by electron degeneracy pressure, and core nucleosynthesis halts. The star ends its life by expelling the majority of its envelope (mostly H and He, with some enrichment with C, N and s-process elements) at low speeds through stellar winds. The degenerate core becomes a white dwarf. $^{1}$ Actually it may be possible to go a step further along the fusion ladder and still avoid a supernova. 
Stars with a mass just a little more than 8 solar masses (and possibly even as high as 10.5 solar masses - Garcia-Berro 2013 ) may produce Oxygen/Neon white dwarfs as the final outcome.
{ "source": [ "https://physics.stackexchange.com/questions/246438", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/77279/" ] }
246,758
My first assumption was based on "evaporation causes cooling" but I realized that is not the case, as it is cooler even if I am submerged under it. Are all liquids generally cooler than air? Just curious.
Firstly, to make a valid comparison between how water and air 'feels' on your skin, two conditions would need to be met: Both water and air would have to be at exactly the same temperature. That temperature would have to be lower than human body temperature (strictly speaking, skin temperature). If those conditions are met then water would certainly feel cooler than air. Several factors are responsible for this. Water has a much higher Specific Heat Capacity than air, making it a far better coolant than air. More intimate contact between water and skin, compared to air and skin, results in a higher Heat Transfer Coefficient , which again makes water a better coolant. In the case of fairly thin layers of water on the skin, evaporative cooling also takes place. As Latent Heat of Vaporisation is carried off, the water will cool down and eventually the skin will cool too, due to heat transfer.
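A back-of-the-envelope comparison using Newton's law of cooling, $q = hA\,\Delta T$, shows the size of the effect. The heat transfer coefficients below are assumed order-of-magnitude values for free convection, not measured data:

```python
# Assumed typical free-convection coefficients; real values vary widely
h_air, h_water = 10.0, 500.0   # W/(m^2 K): still air vs still water
A, dT = 1.8, 10.0              # m^2 of skin, K below body temperature

q_air = h_air * A * dT         # ~180 W lost to air
q_water = h_water * A * dT     # ~9000 W lost to water: feels far colder
print(q_air, q_water)
```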
{ "source": [ "https://physics.stackexchange.com/questions/246758", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/82451/" ] }
246,808
My professor asked an interesting question at the end of the last class, but I can't figure out the answer. The question is this (recalled from memory): There are two travelling wave pulses moving in opposite directions along a rope with equal and opposite amplitudes. Then when the two wave pulses meet they destructively interfere and for that instant the rope is flat. Why do the waves continue after that point? Here's a picture I found that illustrates the scenario. I know it's got to have something to do with the conservation laws, but I haven't been able to reason it out. So from what I understand, waves propagate because the front of the wave is pulling the part of the rope in front of it upward and the back of the wave is pulling downward, and the net effect is a pulse that propagates forward in the rope (is that right?). But then, to me, that means that if the rope is ever flat then nothing is pulling on anything else, so the wave shouldn't start up again. From a conservation perspective, I guess there's excess energy in the system and that's what keeps the waves moving, but then where's that extra energy when the waves cancel out? Is it just converted to some sort of potential energy? This question is really vexing! :\
What you cannot see by drawing the picture is the velocity of the individual points of the string. Even if the string is flat at the moment of "cancellation", the string is still moving in that instant. It doesn't stop moving just because it looked flat for one instant. Your "extra" or "hidden" energy here is plain old kinetic energy. Mathematically, the reason is that the wave-equation is second-order, hence requires both the momentary position of the string as well as the momentary velocity of each point on it to yield a unique solution.
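You can see this directly from the d'Alembert solution $y(x,t) = f(x-vt) - f(x+vt)$: at the instant of exact overlap the displacement vanishes identically, but the velocity field $\partial y/\partial t$ does not. A small sketch, assuming a Gaussian pulse shape:

```python
import numpy as np

def f(u):                 # assumed Gaussian pulse shape
    return np.exp(-u**2)

def fp(u):                # its derivative
    return -2.0 * u * np.exp(-u**2)

x = np.linspace(-5.0, 5.0, 1001)
v = 1.0
t = 0.0   # the instant the two opposite pulses exactly overlap

# d'Alembert: y(x,t) = f(x - v t) - f(x + v t)
y = f(x - v * t) - f(x + v * t)            # displacement: identically zero
vy = -v * fp(x - v * t) - v * fp(x + v * t)  # dy/dt: NOT zero, the hidden KE
print(np.max(np.abs(y)), np.max(np.abs(vy)))
```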
{ "source": [ "https://physics.stackexchange.com/questions/246808", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/111561/" ] }
247,487
We can see buildings, doors, cars etc. as light falls on it gets reflected to us. but why doesn't the same thing happen with sound? I mean why don't we hear sound reflecting that much?
We do. Normally the reflections are too quick to hear distinctly, and in an environment like a room they rapidly become diffused into a mush which a sound engineer would call reverberation. In larger spaces you can often hear distinct echoes as well or instead: a good way to play with this is to clap your hands (once) in a quiet hall: you will hear the first echo and then hear the subsequent echoes mix into reverb. The reflective and absorbent properties of rooms and halls are absolutely critical to how pleasant they are to be in and how usable they are for music and so on: people spend a lot of time worrying about this, and if they get it wrong you know. One reason people are not very aware of this is that it happens all the time, wherever you are. You can build spaces which do not reflect sound -- anechoic chambers -- and it is very odd indeed being in one. If you record music electronically (so, from an electronic source with no microphone) as is now common, then it is critical to add simulated reverberation to the sound: reverb units (often now done in software of course) are probably the most common effect in recording studios. So reflected sound is absolutely pervasive.
{ "source": [ "https://physics.stackexchange.com/questions/247487", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/98751/" ] }
248,649
I have been fascinated by a very intriguing question - can lasers push objects up? I have done the math below to find out. Let's say we have a $1000~\text{mW}$ laser and we would like to lift an object of weight $100~\text{g}$. By definition: $1~\text{W} = 1 \frac{~\text{J}}{~\text{s}}$. That means the laser is emitting $1~\text{J}$ of energy per second. On the other hand, the energy required to lift an object off the ground is given by $m \cdot g \cdot h$. Putting in the numbers and solving for $h$: $0.1~\text{kg} \cdot 9.8 \frac{~\text{m}}{~\text{s}^{2}} \cdot h = 1~\text{J}$, so $h \approx 1~\text{m}$. You see, if we had a $1000~\text{mW}$ laser we could lift an object of $100~\text{g}$ weight up to 1 meter in one second. I can't see anything wrong with the above math. If this is correct, can anyone tell me then why on Earth we use heavy rockets to send objects into space?
Your approach is incorrect. You cannot do this calculation by considering that the energy absorbed by the object is converted into a change in gravitational potential energy. For one thing the object would just get hot and radiate away most of the energy and for another this is a dynamical problem, you have to be able to accelerate the object upwards. What is important is the product of the power per unit area of the laser and the area over which it is incident. More precisely, to "levitate" an object by shining a laser onto its underside requires that the force exerted upwards by the laser is equal to the force $mg$ acting downwards. A general expression one could use is $$\frac{1+r}{c}\int \vec{S} \cdot d\vec{A} \geq mg,$$ where $\vec{S}$ is the time-averaged Poynting vector of the laser, with a magnitude equal to the power per unit area in the beam, and the component of this normal to the surface is integrated over the surface area of the object to be levitated. The term $r$ is the reflectivity of the surface. $r=0$ for a black surface, but the upward force would be doubled for a perfectly reflective surface with $r=1$. Hence assuming I had a completely black cube of surface area $A$ oriented so that a surface was perpendicular to a laser beam with Poynting vector $S$: $$ \frac{SA}{c} \geq m g$$ $$ m \leq \frac{SA}{cg}$$ and if $SA = 1$ W, then $m \leq 3.4 \times 10^{-10}$ kg is the mass which it could accelerate upwards against gravity.
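For the numbers in the question, the radiation-pressure force of a 1 W beam is tiny. A quick sketch of the calculation above, assuming a perfectly black surface ($r=0$):

```python
c, g = 2.998e8, 9.81   # m/s, m/s^2
P = 1.0                # W: the 1000 mW laser from the question
r = 0.0                # perfectly absorbing black surface

F = (1 + r) * P / c    # radiation-pressure force, ~3.3e-9 N
m_max = F / g          # ~3.4e-10 kg: about a third of a microgram
print(F, m_max)
```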
{ "source": [ "https://physics.stackexchange.com/questions/248649", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/86759/" ] }
249,239
In my textbook it is given that 'The wave function describes the position and state of the electron and its square gives the probability density of electrons.' Can someone give me a very simple example of a wave function with explanation? (Note: This question is not a duplicate. I have searched for other questions of this type but the answers were overwhelmingly difficult to understand.)
A wave function is a complex-valued function $f$ defined on ${\mathbb R}^1$ (if your electron is confined to a line) or on ${\mathbb R}^2$ (if your electron is confined to a plane) or ${\mathbb R}^3$ (if your electron ranges over three-space), and satisfying $$\int |f|^2=1$$ (where the integral is defined over the entire line or plane or 3-space). Every electron has an associated wave function, and any function satisfying the above can be the wave function associated to some electron. The wave function tells you everything there is to know about the electron. For example, if $A$ is any set, and if you perform an experiment that answers the question "is the electron in the set $A$?", then the probability you'll get a "yes" answer is given by $$\int_A |f|^2$$ (So in particular, if $A$ is the entire space, you're asking "Is the electron anywhere at all?", and the probability of a yes answer is $1$.) The next steps are to learn: 1) How do I use this wave function to predict the outcomes of questions about something other than the electron's location, such as, for example, its momentum? and 2) How does this wave function change over time? I don't think you're quite yet at the point of addressing those questions (though you will be soon enough).
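As a concrete example, here is a sketch using a normalized Gaussian wave function on the line (the Gaussian shape is just an assumed example, nothing special about it), checking that the total probability is 1 and computing the probability of finding the electron in the set $A=[0,1]$:

```python
import numpy as np

# A normalized 1-D Gaussian wave packet as an example wave function
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
sigma = 1.0
f = (1.0 / (np.pi * sigma**2))**0.25 * np.exp(-x**2 / (2 * sigma**2))

norm = np.sum(np.abs(f)**2) * dx       # integral of |f|^2: ~1
mask = (x >= 0.0) & (x <= 1.0)         # the set A = [0, 1]
pA = np.sum(np.abs(f[mask])**2) * dx   # P(electron found in A), ~0.42
print(norm, pA)
```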
{ "source": [ "https://physics.stackexchange.com/questions/249239", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/107112/" ] }
249,258
I am a physics undergraduate student currently studying electromagnetics. I have previously studied electrostatics and magnetostatics, yet the concepts of the scalar potential $V$ and the vector potential A have eluded me. I understand Maxwell's equations and the relevant formulas to calculate these quantities in certain situations, and how to go between them and the E and B fields, but I do not understand them conceptually. I would like to understand their meaning and purpose. If somebody has a good analogy for viewing these quantities and how they relate to the E and B fields, that would be even better. If possible, please avoid too math-heavy an approach (some math is expected), so that I can clearly read and understand the concepts presented.
{ "source": [ "https://physics.stackexchange.com/questions/249258", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
249,259
Is there a material known to man that I can tape to a Tsar-Bomba-yield nuclear warhead and find kilometers away after detonation? This question is quite similar but a nuclear explosion is quite instantaneous. The sun, on the other hand, exposes a material to the same conditions continuously until it disintegrates.
{ "source": [ "https://physics.stackexchange.com/questions/249259", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/84153/" ] }
249,606
The photons are the propagators for QED, and we rely on photons to see the world around us. The gluon is the propagator in QCD. Why have our eyes not evolved to see gluons (either on top of being able to "see" photons, or instead of)?
In short, the answer is: because gluons behave in a way that makes them useless for this purpose. To understand why, let's back up a little and look at how photons are useful, and then see how gluons behave differently. We (animals pretty broadly) evolved to see photons because they allow us to move around in and respond to our environment more efficiently. This, in turn, is because our environment is pretty well supplied with photons from the sun (and other sources, in some cases). It so happens, as ulidtko rightly points out in the comments, that we only use a select range of photons for vision. In fact, we (humans) can only see photons from a fairly narrow range right around the peak emission of the sun , which incidentally corresponds to a range over which the atmosphere is fairly transparent. They interact with electrons, which are everywhere, so they bounce off of things in our environment (or are produced by things, in some cases). Yet they travel in fairly straight lines through air, so they can transmit very precise information to us that we can use to adapt to that environment. Photons can tell us about distant threats to avoided, nearby obstacles to be negotiated, food, water, potential mates, etc. Now, the main reason we would not see gluons is because there aren't many — or any — gluons bouncing around in our environment. This is primarily because of a phenomenon called confinement . Gluons don't typically travel freely away from quarks, and quarks aren't exactly flying around as readily as photons. In fact quarks are also subject to confinement , so you won't see one outside of a hadron (proton or neutron, typically). But those are generally charged or short-lived, and stuck in a nucleus, which is stuck in an atom, which is stuck in some sort of molecule in our environment. So you'd only get any benefit from "seeing" these things if molecules and nuclei were routinely broken down and sent flying all over the place with great momentum. 
And even then, it would probably be easier to "see" these flying hadrons with something other than the gluons. In any case, that wouldn't be the healthiest place to be, and the photons would typically have told you to get out of that situation much earlier — thus preserving your molecules, which is a distinct evolutionary advantage. It is possible that things called "glueballs" exist, which are just what they sound like: particles that are just balls of gluons stuck to each other. They could travel away from quarks, and would move in pretty straight lines. But they have not yet been observed; they are rare, difficult to produce, and hard to unambiguously identify. Their theoretical mass (unlike the massless gluon itself) is in the neighborhood of 1GeV — heavier than most of the elementary particles — which means they would only be produced in very energetic processes (e.g., nuclear reactions, rather than chemical reactions). So they certainly wouldn't be common enough to transmit much information about that saber-toothed tiger that's coming to eat us. So to recap, photons are plentiful in our environment, and they travel long distances in more-or-less straight lines through the atmosphere, so they transmit information efficiently. Gluons are hard to produce in a form that travels long distances (with or without atmosphere), and so cannot transmit information usefully. Basically, gravitons are too weak to be useful, and gluons are too strong — but photons are juuust right.
{ "source": [ "https://physics.stackexchange.com/questions/249606", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/94761/" ] }
249,866
I often see many-body systems in QM represented in terms of a tensor products of the individual wave functions. Like, given two wave functions with basis vectors $|A\rangle$ and $|B\rangle$, belonging to the Hilbert spaces $\mathcal{H}_A^n$ and and $\mathcal{H}_B^m$ respectively, the basis $|C\rangle$ of the combined Hilbert space $\mathcal{H}_{AB}=\mathcal{H}_A \otimes \mathcal{H}_B$ is then \begin{equation} |C\rangle = |A\rangle \otimes |B\rangle. \end{equation} However, in QM the tensor product (or outer product) may be written as $|A \rangle \langle B |$. What is the difference between $|A \rangle \langle B |$ and $|A\rangle \otimes |B\rangle$?
$\lvert A\rangle \langle B \rvert$ is the tensor of a ket and a bra (well, duh). This means it is an element of the tensor product of a Hilbert space $H_1$ (that's where the kets live) and of a dual of a Hilbert space $H_2^\ast$, which is where the bras live. Although for Hilbert spaces their duals are isomorphic to the original space, this distinction should be kept in mind. So we can "feed" a ket $\lvert \psi\rangle$ from $H_2$ to the bra in $\lvert \phi\rangle\otimes \langle\chi\rvert \in H_1\otimes H_2^\ast$, and are left with a state in $H_1$ given by $\langle \chi \vert \psi\rangle \lvert \phi\rangle$. The usual use case for such a tensor product is when $H_1=H_2$, to construct a map from $H_1$ to itself, e.g. the projector onto a state $\lvert \psi \rangle$ is given by $\lvert\psi\rangle \langle \psi \rvert$. In general, a tensor in $H_2 \otimes H_1^\ast$ corresponds to a linear operator $H_1\to H_2$. In the finite-dimensional case, these are all linear operators; in the infinite-dimensional case, this is no longer true, e.g. $H^\ast \otimes H$ gives precisely the Hilbert-Schmidt operators on $H$. In contrast, a tensor $\lvert A\rangle\otimes \lvert B\rangle$ (also just written $\lvert A \rangle \lvert B\rangle$) in $H_1\otimes H_2$, although it corresponds to a bilinear map $H_1\times H_2\to\mathbb{C}$ by definition, is usually not meant to denote an operator, but a state . Given two quantum systems $H_1$ and $H_2$, $H_1\otimes H_2$ is the space of the states of the combined system (as for why, see this question ).
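In finite dimensions the distinction is easy to see numerically: $\lvert A\rangle\langle B\rvert$ is an outer product (a matrix, i.e. an operator), while $\lvert A\rangle\otimes\lvert B\rangle$ is a Kronecker product (a longer vector, i.e. a state). A sketch with assumed two-dimensional example states:

```python
import numpy as np

A = np.array([1.0, 0.0])                 # |A> in H1 (2-dimensional)
B = np.array([1.0, 1.0]) / np.sqrt(2.0)  # |B> in H2 (2-dimensional)

op = np.outer(A, B.conj())   # |A><B| : a 2x2 matrix, maps H2 -> H1
state = np.kron(A, B)        # |A> (x) |B> : a 4-component state in H1 (x) H2

# Feeding a ket |psi> to the bra gives <B|psi> |A>
psi = np.array([0.0, 1.0])
print(op @ psi)              # equals <B|psi> |A| = A / sqrt(2)
print(state)                 # a normalized vector, not an operator
```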
{ "source": [ "https://physics.stackexchange.com/questions/249866", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
250,363
I am myopic (I don't really know if this is relevant or not) and I usually swim without contact lenses. My vision is clearly better underwater when I am using swimming goggles. I have tried to understand why this happens. I think it is probably due to the presence of air (which should be more or less "still") between the goggles and my eyes, avoiding this way the turbulent flow of water that would otherwise somehow affect my vision, and due to the fact that air has a different refraction index ($n_1$) than that of water ($n_2$), with the material of the goggles being irrelevant. But it also could be that the material of the goggles is making the difference. What is the actual physical explanation of this fact?
Is the blurred effect due to turbulence? No, it is not. Turbulence has little effect here. Even if there is no turbulence, one sees everything blurred underwater. The reason is explained below. An eye is a natural lens. A clear image of something you see depends on how well it is focused in your eye. Most of the refraction in the eye occurs at the cornea and a little bit at the lens of the eye. The image is then focused on the retina. But when you are under water, the optical densities of the cornea and water are almost the same (or say both have similar refractive indices: 1.376 for the cornea and 1.333 for water). If you open your eyes underwater, there will hardly be any refraction because the light is now passing between two media of nearly the same density. If light entering the cornea is not properly refracted, it will not be focused on the retina to give you a clear image. This is why you see everything blurred underwater. The use of swimming goggles overcomes this defect. Your eyes work perfectly if light enters them from air; that principle is made use of in swimming goggles. When you use goggles, you have some air between the cornea and the glass of the goggle, so even if the light is coming from underwater it first passes through the air and only then reaches the eye. So you feel exactly like on the ground and see things well. This is for a person with normal vision. You are myopic. The air-glass interface in front of your eye helps you, much as your contact lenses do on the ground, by providing better refraction, though this may not work equally well for every myopic person. Try using goggles while on the ground: this will not help in the same way, because the light is already entering your eye from air, so there will only be a slight difference in focusing. The same applies when you use your contact lenses underwater.
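Snell's law, $n_1\sin\theta_1 = n_2\sin\theta_2$, makes this quantitative: with the indices quoted above, a ray hitting the cornea at 30 degrees bends strongly when arriving from air but hardly at all when arriving from water. A small sketch:

```python
import math

n_air, n_water, n_cornea = 1.000, 1.333, 1.376  # refractive indices

def refracted_angle(n1, n2, incident_deg):
    """Snell's law: n1 sin(t1) = n2 sin(t2); returns t2 in degrees."""
    s = n1 / n2 * math.sin(math.radians(incident_deg))
    return math.degrees(math.asin(s))

a_air = refracted_angle(n_air, n_cornea, 30.0)      # ~21.3 deg: strong bending
a_water = refracted_angle(n_water, n_cornea, 30.0)  # ~29.0 deg: barely bends
print(a_air, a_water)
```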
{ "source": [ "https://physics.stackexchange.com/questions/250363", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/92700/" ] }
250,379
I have a past paper question which gives me the external charge of this capacitor 7.1μC and the internal charge caused by the resultant field as 1.3μC, how do I calculate the relative permittivity? I thought it would have just been Q1/Q2 but according to the mark scheme that's not the case.
{ "source": [ "https://physics.stackexchange.com/questions/250379", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/114695/" ] }
250,688
Taking the curl of the electric field must be possible, because Faraday's law involves it: $$\nabla \times \mathbf{E} = - \partial \mathbf{B} / \partial t$$ But I've just looked on Wikipedia , where it says The curl of the gradient of any twice-differentiable scalar field $\phi$ is always the zero vector: $$\nabla \times (\nabla \phi)=\mathbf{0}$$ Seeing as $\mathbf{E} = - \nabla V$ , where $V$ is the electric potential, this would suggest $\nabla \times \mathbf{E} = \mathbf{0}$ . What presumably monumentally obvious thing am I missing?
The fact is that, in the general case, $$ \vec{E} = -\vec{\nabla}V - \frac{\partial\vec{A}}{\partial t}; $$ (signs depend on the conventions used) where $\vec{A}$ is called the vector potential . You can consult, for example, Wikipedia . Let us consider the homogeneous Maxwell equations: $$ \begin{cases} \vec{\nabla}\cdot\vec{B} = 0,\\ \vec{\nabla}\times\vec{E} + \frac{\partial\vec{B}}{\partial t} = 0; \end{cases} $$ It is well known that every divergence-free field on $\mathbb{R}^3$ can be written as the curl of another vector field, just as every curl-free field can be written as the gradient of a scalar function on $\mathbb{R}^3$ . Thus from the first equation, $$ \vec{B} = \vec{\nabla}\times\vec{A}, $$ and substituting this in the second equation, $$ \vec\nabla\times\left(\vec{E} + \frac{\partial\vec{A}}{\partial t}\right)=0, $$ since one can exchange the curl with the derivative w.r.t. time. One can therefore set $$ \vec{E} + \frac{\partial\vec{A}}{\partial t} = -\vec\nabla V, $$ from which $$ \vec{E} = -\vec{\nabla}V - \frac{\partial\vec{A}}{\partial t}. $$ Note that if your magnetic field is time-independent, you recover the well-known formula $$ \vec{E} = -\vec\nabla V. $$
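If you want to check the algebra, a short SymPy sketch (my addition, not part of the derivation above) confirms that with these potentials Faraday's law holds identically for arbitrary $V$ and $\vec{A}$:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
# arbitrary smooth potentials V(t,x,y,z) and A(t,x,y,z)
V = sp.Function('V')(t, x, y, z)
A = sp.Matrix([sp.Function(f'A{i}')(t, x, y, z) for i in 'xyz'])

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

E = -grad(V) - sp.diff(A, t)   # E = -grad V - dA/dt
B = curl(A)                    # B = curl A

# Faraday's law: curl E + dB/dt vanishes identically
residual = sp.simplify(curl(E) + sp.diff(B, t))
print(residual)  # zero vector
```

The curl of a gradient cancels by equality of mixed partials, and the curl commutes with the time derivative, which is exactly the structure of the derivation above.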
{ "source": [ "https://physics.stackexchange.com/questions/250688", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/37779/" ] }
250,698
How does one derive using, say, the operator formula for reflections $$ R(r) = (I - 2nn^*)(r),$$ the reflection representation of a vector $$ R(r) = R(x\hat{i} + y\hat{j} + z\hat{k}) = xR(\hat{i}) + yR(\hat{j}) + zR(\hat{k}) = xs_x + ys_y + zs_z \\ = x \left[ \begin{array}{ c c } 0 & 1 \\ 1 & 0 \end{array} \right] + y\left[ \begin{array}{ c c } 0 & -i \\ i & 0 \end{array} \right] + z \left[ \begin{array}{ c c } 1 & 0 \\ 0 & - 1 \end{array} \right] = \left[ \begin{array}{ c c } z & x - iy \\ x+iy & - z \end{array} \right] $$ that comes up when dealing with spinors in 3-D? Intuitively I can see the matrices are supposed to come from the following geometric picture: The first Pauli matrix is like a reflection about the "y=x" line. The third Pauli matrix is like a reflection about the "x axis". The second Pauli matrix is like a 90° counterclockwise rotation and scalar multiplication by the imaginary unit (https://en.wiktionary.org/wiki/Pauli_matrix), but why and how did we make these choices? I know we're doing it to end up using a basis of $su(2)$, but assuming you didn't know anything about $su(2)$, how could you set this up so that it becomes obvious that what we end up calling $su(2)$ is the right way to represent reflections? The usual ways basically postulate them, or show that they work through an isomorphism, or say they come from the fact that a vector is associated with the matrix I've written above, without explaining where that came from. The closest thing to an explanation is that they come from the quaternionic product, whose link to all this, especially something as simple as reflections through lines, escapes me.
{ "source": [ "https://physics.stackexchange.com/questions/250698", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/25851/" ] }
250,800
[General Relativity] basically says that the reason you are sticking to the floor right now is that the shortest distance between today and tomorrow is through the center of the Earth. I love this, not least because it sounds nonsensical. (From an unassessed comment on the internet) OK, so I love this too, but is it a completely looney description or does it make any sense? In the latter case I'm in for some serious enlightenment today, since last time I checked, light cones allowed me to move somewhat freely unless in significant proximity to a singularity.
That is awesome! And it makes complete sense too! (other than a possible misusage of the word "distance"). Let's have a look at the equations of motion of you in Earth's curved spacetime, assuming that your feet are not touching the ground: $$ \frac{\mathrm d^{2}x^{\mu}}{\mathrm ds^{2}}+\Gamma^{\mu}_{\nu\sigma}(x(s))\ \frac{\mathrm dx^{\nu}}{\mathrm ds}\frac{\mathrm dx^{\sigma}}{\mathrm ds}=0 $$ where $x^{\mu}(s)$ is your world line, $s$ is some parameter, $$ \Gamma^{\mu}_{\nu\sigma}=\frac{1}{2}\ g^{\mu\tau}(\partial_{\nu}g_{\sigma\tau}+\partial_{\sigma}g_{\nu\tau}-\partial_{\tau}g_{\sigma\nu}) $$ with $g^{\mu\tau}$ the inverse of the metric and $$ g=\left( 1 - \frac{r_{s} r}{\rho^{2}} \right) c^{2}\, \mathrm dt^{2} - \frac{\rho^{2}}{\Delta} \mathrm dr^{2} - \rho^{2} \,\mathrm d\theta^{2}+ \\ - \left( r^{2} + \alpha^{2} + \frac{r_{s} r \alpha^{2}}{\rho^{2}} \sin^{2} \theta \right) \sin^{2} \theta \,\mathrm d\phi^{2} + \frac{2r_{s} r\alpha \sin^{2} \theta }{\rho^{2}} \, c \,\mathrm dt \, \mathrm d\phi $$ where $$ r_{s}=\frac{2GM}{c^{2}}\ ,\quad\alpha=\frac{J}{Mc} \ ,\quad \rho^{2}=r^{2}+\alpha^{2}\cos^{2}\theta\ ,\quad \Delta=r^{2}-r_{s}r+\alpha^{2} $$ with $M$ and $J$ Earth's mass and angular momentum. The equations of motion can be derived from the action functional $$ S[x(s)]=-mc\int_{a}^{b}\sqrt{g_{\mu\nu}(x(s))\,\frac{\mathrm dx^{\mu}}{\mathrm ds}\frac{\mathrm dx^{\nu}}{\mathrm ds}}\ \mathrm ds $$ where $m$ is your mass and, as gravity goes, it plays no role at all in how you fall to the ground. 
You find the equations of motion by minimizing S with respect to the curve $x(s)$, which amounts to minimizing the (proper) time you spend on your worldline, times $-mc^{2}$ (this is why you are minimizing rather than maximizing): \begin{align} S[x(\tau)]&=-mc^{2}\int_\textrm{today}^\textrm{tomorrow}\sqrt{g_{\mu\nu}(x(\tau))\,\frac{\mathrm dx^{\mu}}{\mathrm d\tau}\frac{\mathrm dx^{\nu}}{\mathrm d\tau}}\,\mathrm d\tau\\ &= \text{the distance between today and tomorrow}\,. \end{align} As you'll fall in the direction that connects you to the center of the Earth, the shortest distance between today and tomorrow is indeed through the center of the Earth. The reason why you are sticking to the floor right now is really that the ground is preventing you from taking the shortest path from today to tomorrow, which passes through the center of the Earth.
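Just to get a feel for the numbers (my own back-of-the-envelope addition): plugging Earth's mass and angular momentum into the definitions of $r_s$ and $\alpha$ above shows how weakly curved this spacetime is. The value used for $J$ is the commonly quoted figure for Earth and should be treated as an assumption here:

```python
G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s
M = 5.972e24    # kg, Earth's mass
J = 7.07e33     # kg m^2 s^-1, Earth's angular momentum (assumed value)

r_s = 2 * G * M / c**2   # Schwarzschild radius of Earth
alpha = J / (M * c)      # Kerr spin parameter a = J/(Mc)

print(f"r_s   = {r_s * 1e3:.2f} mm")  # about 9 mm
print(f"alpha = {alpha:.2f} m")       # a few metres
```

Both lengths are tiny compared with Earth's radius of about 6400 km, so the metric is extremely close to flat, yet still enough to keep you on the floor.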
{ "source": [ "https://physics.stackexchange.com/questions/250800", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4359/" ] }
251,001
Gravitational waves were discovered 35 years ago without fanfare in 1981/2 by Zumberge, R L Rinker and J E Faller, then completely ignored. See: "A Portable Apparatus for Absolute Measurements of the Earth's Gravity", M A Zumberge, R L Rinker and J E Faller, Metrologia, Volume 18, Number 3, http://iopscience.iop.org/article/10.1088/0026-1394/18/3/006/meta The variation in g over the course of a day due to the sun and moon was carefully measured from the Earth's surface in 1981 as shown below. This is the SAME gravitational wave effect measured by the LIGO researches recently (reported 11Feb2016). LIGO actually detects, then filters out, this local gravitational wave in order to detect the remote ones producing the ultra weak gravitational waves from binary black holes. Although the sensitivity required to detect them is 3 orders of magnitude higher in both frequency and amplitude, the LIGO "gravitational" waves are otherwise exactly the same "gravitational" waves already discovered in 1981 in our own solar system. The proof is in the fact that LIGO detects gravitational waves but can NOT detect "tidal" gravity waves. Thus categorization of Zumberge's waves as "tidal" gravity is incorrect as tidal waves are those between two surfaces as a consequence of gravity. These cannot be detected by an interferometer. Zumberge's measurements are of gravitational variations itself, hence gravitational. Why has the 1981 work of Zumberge, Rinker and Faller been ignored? (See also Leading-order cause of diurnal (not semidiurnal) variations in $g$? )
This represents a major misunderstanding of what a gravitational wave is. The effect presented is simply the semi-static gravitational field at Earth due to the Earth, Moon and Sun. It is predicted by Newtonian gravity. There is no 'wave' that propagated; it's the instantaneous positions of the 3 bodies that change over 1 day (and over 1 year also). It does not show that the change moved at the speed of light, which gravitational waves do. Nothing in Newton's equations talks about the speed of light. The GR equations for 3 bodies moving like the Earth-Sun-Moon system can only be solved approximately, and in this case it'd be through a post-Newtonian approximation. The pseudo-static term(s) would be the same, possibly with some GR correction (and I'm not sure whether the strongest correction term might not be something like the term for the perihelion of Mercury, or something else; in any case it is extremely small and not measurable in their g measurement). But that's not even a grav wave. The grav waves would be even smaller still: you'd have to compute the rate of change of the quadrupole moment of the configuration, and do some other calculations. The simpler problem of just the grav radiation of the Earth and Sun rotating around each other gives a resultant dissipated power that translates into the Earth's orbit losing altitude ('altitude' above the Sun) of about the size of a proton per day. The g change they measured in your graph is about 10 to the minus 7 g's. It isn't even dissipative, as the bodies keep doing the same thing over and over, in your approximation. If you don't see that dissipation you are not seeing gravitational waves. There are probably many other ways to see that what you're discussing, what the graphic measured, is not a gravitational wave, but rather a very slow change in a static gravity field, the one produced by the 3 bodies.
Grav waves produce something different from just a change in gravity in one direction: they act in 2 directions at once, an asymmetrical squeezing of a circle first along one axis and then along the other, like squeezing a balloon in one direction and making it bulge in the other. Like Nathaniel said, it's like comparing a (semi-)static electric field (say, one produced by rubbing a couple of rags together) and moving the rags around some, with light. Note: yes, even changing static fields cannot produce a change in what's observed at a distance faster than the speed of light, but that doesn't come into your graphic at all; the differential effect is far too small for it to see.
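For the dissipation mentioned above, here is a hedged sketch (my addition, using the standard leading-order quadrupole formula for a circular two-body orbit, with rounded constants) that reproduces the famous answer of roughly 200 W for the whole Earth-Sun system:

```python
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
m_sun = 1.989e30    # kg
m_earth = 5.972e24  # kg
r = 1.496e11        # m, mean Earth-Sun distance

# leading-order gravitational-wave luminosity of a circular two-body orbit:
# P = (32/5) G^4 (m1 m2)^2 (m1 + m2) / (c^5 r^5)
P = (32 / 5) * G**4 * (m_sun * m_earth)**2 * (m_sun + m_earth) / (c**5 * r**5)
print(f"P ~ {P:.0f} W")  # roughly 200 W, utterly negligible
```

Two hundred watts radiated by the entire Earth-Sun orbit is what produces the proton-sized orbital decay per day; it is hopelessly below anything a surface gravimeter could register.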
{ "source": [ "https://physics.stackexchange.com/questions/251001", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/57591/" ] }
251,385
Could someone please provide an explanation for the origin of Hawking Radiation ? (Ideally someone who I have been speaking with on the h-bar) Any advanced maths beyond basic calculus will most probably leave me at a loss, though I do not mind a challenge! Please assume little prior knowledge, as over the past few days I have discovered that much of my understanding surrounding the process as virtual particle pairs is completely wrong.
To answer this we need to talk a bit about how particles are described in quantum field theory . For every type of particle there is an associated quantum field. So for the electron there is an electron field, for the photon there is a photon field, and so on. These quantum fields occupy all of spacetime i.e. they exist everywhere in space and everywhere in time. It’s important to realise that a quantum field is a mathematical object not a physical one - more precisely it is an operator field - however it’s common to talk as if quantum fields are real objects and I’m going to commit this sin in my answer. Just be cautious about taking it too literally. Anyhow, quantum field theory describes particles as excitations of a quantum field. If we add a quantum of energy to the electron field it appears as an electron, or if we take out a quantum of energy from a quantum field that makes an electron disappear. Incidentally this explains how matter can turn into energy and vice versa. For example in the Large Hadron Collider the kinetic energy of the colliding protons can go into excitations of quantum fields where that energy appears as new particles. The vacuum state of a quantum field is the state that has no particles. For a quantum field there is a function called the particle number operator that returns the number of particles present, and the vacuum state is the state for which the number operator returns zero. So when we talk about the vacuum in physics we are really referring to a specific state of quantum fields. Quantum field theory is designed to be compatible with special relativity, and the vacuum state is Lorentz invariant. That means all observers in constant motion in flat spacetime will agree what the vacuum state of the field is. The problem is that the vacuum state is not invariant in general relativity i.e. in curved spacetime. 
In a curved spacetime different observers will disagree about how many particles are present and therefore will disagree about the vacuum state. Specifically, and this is step one in our attempt to explain Hawking radiation, observers near and far from a massive body will disagree about the vacuum state. Suppose you are hovering near a massive body like a black hole while I’m hovering a long way away from the body. The quantum field state that looks like a vacuum to you will look to me as if it contains a non-zero number of particles. I’m not sure it’s possible to explain simply why the vacuum state looks different to different observers in a curved spacetime because it’s related to the procedure used to quantise a field (expanding it as a sum of oscillatory modes) and that’s too complicated a process to do justice to here. Maybe that could be the subject of a future question, but for now we’ll just have to take it on trust. Anyhow, you’ll note that a couple of paragraphs back I mentioned that the disagreement about the vacuum was just the first step to explaining Hawking radiation. That is because the fact two observers disagree about the vacuum state does not necessarily mean energy will flow from one observer to the other i.e. a flow of radiation. Indeed, unless an event horizon is present there will be no flow of energy - for example a neutron star does not emit Hawking radiation, and neither does any other massive object unless a horizon is present. The next step is to explain the role of the horizon in the Hawking process. For a black hole to evaporate, energy has to completely escape from its potential well. To make a rather crude analogy, if we fire a rocket from the surface of the Earth then below the escape velocity the rocket will eventually fall back. The rocket has to have a velocity greater than the escape velocity to completely escape the Earth. 
When we are considering a black hole, rather than the escape velocity we consider the gravitational red shift . The red shift reduces the energy of any outgoing radiation, so it reduces the energy of any radiation emitted by the hotter vacuum state near the event horizon. If the red shift is infinite then the emitted radiation gets red shifted away to nothing and in this case there will be no Hawking radiation. If the red shift remains finite then the emitted radiation still has a non-zero energy as it approaches spatial infinity. In this case some energy does escape from the black hole, and this is what we call the Hawking radiation. This energy comes ultimately from the mass energy of the black hole, so the mass/energy of the black hole is decreased by the amount or radiation that has escaped. The problem is that at this point I find myself completely lost for a way to describe this that is comprehensible to the layman. In Hawking’s original paper from 1975 he calculates the scattering of the particles emitted in the Hawking process, and he shows that in the presence of a horizon the scattering is modified because everything inside the horizon cannot contribute. The result of this is that the red shift remains finite and as a result we observe Hawking radiation i.e. a steady stream of radiation completely escaping from the black hole. Without the horizon the red shift becomes infinite so no energy escapes and no Hawking radiation is seen. That’s why objects without a horizon, e.g. neutron stars, do not produce Hawking radiation no matter how strong their gravitational field is. Hawking himself uses the analogy of virtual particles in his paper. He says: One might picture this negative energy flux in the following way. Just outside the event horizon there will be virtual pairs of particles, one with negative energy and one with positive energy. 
However he goes on to say: It should be emphasized that these pictures of the mechanism responsible for the thermal emission and area decrease are heuristic only and should not be taken too literally. What he is actually calculating is how a wavepacket (which a free scalar quantum field is) behaves when scattered off a black hole in the process of forming, and then comparing the old and new frequencies of oscillation, which are how we get a notion of particles and vacuum, as noted in passing above. Given that Hawking said this in his original paper in 1975 it is something of a shame that the pairs of virtual particles analogy is still being trotted out as an explanation for the process some thirty years later. Footnote I'm not altogether happy that I have done justice to the Hawking process and radiation. In particular I don't think I've really explained why a horizon is necessary - maybe it is simply impossible to explain this at the layman level. However since I have run out of steam I've decided to post this in the hope it will be helpful. I've made this answer community wiki because it is the result of contributions from many people, mainly in the hbar chat room. If anyone thinks they can improve on this I encourage them to post their updated version as an additional answer, and we can edit it into this answer to hopefully come up with something both authoritative and comprehensible. Finally we should note that although Hawking's original paper was met with some debate, for example due to the use of trans-Planckian modes, the phenomenon is now well understood and the mathematical treatment is universally accepted. We even have an exact solution for the simplified case of a free scalar field (though this doesn't include the effects of back reaction). If experiment (assuming we are ever able to do the experiment) fails to find Hawking radiation that will require a root and branch re-examination of our understanding of QFT in curved spacetimes.
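As a small numerical aside (my addition, not part of the original answer): the end result of Hawking's calculation is a thermal spectrum with temperature $T_H = \hbar c^3/(8\pi G M k_B)$, and plugging in a solar mass shows just how cold, and hence how hard to detect, the radiation is:

```python
import math

hbar = 1.0546e-34  # J s
c = 2.998e8        # m/s
G = 6.674e-11      # m^3 kg^-1 s^-2
k_B = 1.3806e-23   # J/K
M_sun = 1.989e30   # kg

def hawking_temperature(M):
    """T_H = hbar c^3 / (8 pi G M k_B) for a Schwarzschild black hole of mass M."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

print(f"T_H for one solar mass: {hawking_temperature(M_sun):.2e} K")  # ~6e-8 K
```

Sixty nanokelvin is far below the 2.7 K cosmic microwave background, so a stellar-mass black hole today absorbs far more radiation than it emits.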
{ "source": [ "https://physics.stackexchange.com/questions/251385", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/114841/" ] }
251,762
Do all equations have $$\text{left hand side unit} = \text{right hand side unit}$$ for example, $$\text{velocity (m/s)} = \text{distance (m) / time (s)},$$ or is there an equation that has different units on the left- and right-hand sides? I would like to consider empirical equations (determined from experimental results) and theoretical equations (derived from basic theory).
It doesn't matter where the equation came from - a fit to experimental data or a deep string-theoretic construction - or who made the equation - Albert Einstein or your next-door neighbour - if the dimensions don't agree on the left- and right-hand sides, it's nonsense. Consider e.g. my new theory that the mass of an electron equals the speed of light. It's just meaningless nonsense from the get-go. This isn't that restrictive - there are lots of equations with correct dimensions (though in some cases you can derive equations or estimates by so-called dimensional analysis, where you just make sure the units agree). But it is useful for checking your work: if you derive a result and the dimensions don't agree, you know you must have made a mistake. There is a subtle distinction between unit and dimension. A dimension represents a fundamental quantity - such as mass, length or time - whereas a unit is a man-made measure of a fundamental quantity or a product of them - such as kilograms, metres and seconds. Arguably, one can write meaningful equations such as 60 seconds = 1 minute, with matching dimensions but mismatching units (as first noted by Mehrdad).
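To make the checking idea concrete, here is a toy Python sketch (my own illustration, with made-up class names) of the kind of bookkeeping that catches dimension mismatches automatically:

```python
class Quantity:
    """Toy dimension bookkeeping: dims = (mass, length, time) exponents."""

    def __init__(self, value, dims):
        self.value, self.dims = value, tuple(dims)

    def __add__(self, other):
        # addition only makes sense between quantities of the same dimension
        if self.dims != other.dims:
            raise TypeError(f"cannot add dims {self.dims} and {other.dims}")
        return Quantity(self.value + other.value, self.dims)

    def __mul__(self, other):
        return Quantity(self.value * other.value,
                        tuple(a + b for a, b in zip(self.dims, other.dims)))

    def __truediv__(self, other):
        return Quantity(self.value / other.value,
                        tuple(a - b for a, b in zip(self.dims, other.dims)))

metre = Quantity(1.0, (0, 1, 0))
second = Quantity(1.0, (0, 0, 1))

speed = metre / second  # fine: dims (0, 1, -1), i.e. m/s
try:
    metre + second      # dimensions disagree: rejected
except TypeError as e:
    print("rejected:", e)
```

Multiplication and division combine exponents, so a valid equation always balances dimension-by-dimension, which is exactly the check described above.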
{ "source": [ "https://physics.stackexchange.com/questions/251762", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/112701/" ] }
251,775
As far as I understand, for the field of a uniformly moving charge, curl of $\mathbf E$ is zero everywhere. Since $\nabla \times \mathbf E = -\dfrac{\partial\mathbf B}{\partial t}$, magnetic field should be constant in every point in space. This sounds wrong, since $\mathbf B$ is supposed to fall off proportionally to $r^2$, and $r$ is changing in time for a moving charge. What is wrong with this reasoning? Even worse, $\nabla \times \mathbf B = \dfrac{\partial\mathbf E}{\partial t}$ , and since $\dfrac{\partial\mathbf E}{\partial t}$ is not constant (because $\dfrac{\partial^2\mathbf E}{\partial t^2}$ is not zero), curl of $\mathbf B$ keeps changing. But how can $\nabla \times \mathbf B$ keep changing if $\mathbf B$ itself stays the same?
{ "source": [ "https://physics.stackexchange.com/questions/251775", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/111154/" ] }
251,803
From my humble (physicist) mathematics training, I have a vague notion of what a Hilbert space actually is mathematically, i.e. an inner product space that is complete , with completeness in this sense heuristically meaning that all possible sequences of elements within this space have a well-defined limit that is itself an element of this space (I think this is right?!). This is a useful property as it enables one to do calculus in this space. Now, in quantum mechanics Hilbert spaces play an important role in that they are the spaces in which the (pure) states of quantum mechanical systems "live". Given a set of orthonormal basis vectors, $\lbrace\lvert\phi_{n}\rangle\rbrace$ for such a Hilbert space, one can express a given state vector, $\lvert\psi\rangle$ as a linear combination of these basis states, $$\lvert\psi\rangle=\sum_{n}c_{n}\lvert\phi_{n}\rangle$$ since the basis states are orthonormal, i.e. $\langle\phi_{n}\lvert\phi_{m}\rangle =\delta_{nm}$ we find that $c_{n}=\langle\phi_{n}\lvert\psi\rangle$, and hence $$\lvert\psi\rangle=\sum_{n}c_{n}\lvert\phi_{n}\rangle =\sum_{n}\langle\phi_{n}\lvert\psi\rangle\lvert\phi_{n}\rangle =\left(\sum_{n}\lvert\phi_{n}\rangle\langle\phi_{n}\lvert\right)\lvert\psi\rangle$$ which implies that $$\sum_{n}\lvert\phi_{n}\rangle\langle\phi_{n}\lvert =\mathbf{1}$$ This is referred to as a completeness relation , but I'm unsure what this is referring to? I've also read that the basis must be complete. Is this referring to the notion of completeness associated with limits of sequences, or is there something else I'm missing?
A Hilbert space $\cal H$ is complete which means that every Cauchy sequence of vectors admits a limit in the space itself. Under this hypothesis there exist Hilbert bases also known as complete orthonormal systems of vectors in $\cal H$. A set of vectors $\{\psi_i\}_{i\in I}\subset \cal H$ is called an orthonormal system if $\langle \psi_i |\psi_j \rangle = \delta_{ij}$. It is also said to be complete if a certain set of equivalent conditions hold. One of them is $$\langle \psi | \phi \rangle = \sum_{i\in I}\langle \psi| \psi_i\rangle \langle \psi_i| \phi \rangle\quad \forall \psi, \phi \in \cal H\tag{1}\:.$$ (This sum is absolutely convergent and must be interpreted if $I$ is not countable, but I will not enter into these details here.) Since $\psi,\phi$ are arbitrary, (1) is often written $$I = \sum_{i\in I}| \psi_i\rangle \langle \psi_i|\tag{2}\:.$$
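A finite-dimensional numerical illustration of condition (2) (my addition): for an orthonormal basis of $\mathbb{C}^4$ the sum of projectors is the identity, while dropping one vector, i.e. an incomplete orthonormal system, leaves only a projector:

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Q, _ = np.linalg.qr(Z)  # columns of Q form an orthonormal basis of C^4

# complete system: sum_i |psi_i><psi_i| equals the identity
P_full = sum(np.outer(Q[:, i], Q[:, i].conj()) for i in range(4))
print(np.allclose(P_full, np.eye(4)))  # True

# incomplete system (one vector dropped): a projector, not the identity
P_part = sum(np.outer(Q[:, i], Q[:, i].conj()) for i in range(3))
print(np.allclose(P_part, np.eye(4)))  # False
```

This is the finite-dimensional shadow of the completeness relation: equality with the identity is precisely what fails when the orthonormal system misses part of the space.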
{ "source": [ "https://physics.stackexchange.com/questions/251803", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/35305/" ] }
251,971
A vertical rod, a usual dipole, produces radio waves in the horizontal plane, mostly in two opposite directions (figure of the dipole radiation pattern omitted). If that is possible, how do you produce spherical EM radiation? Should the antenna be an (expanding and contracting) globe, or a circle? How should the charges oscillate? And, lastly, would its energy decrease by $1/4 \pi r^2$, and so its range would be rather short? P.S. Someone said in a comment to How is a spherical electromagnetic wave emitted from an antenna described in terms of photons? : For some reason, my instinct is that a spherical electromagnetic wave cannot be emitted by an antenna. Instead, they can only be emitted by a charge. I guess that's cause I always think of an antenna as an object that has no net charge. – Carl Brannen Is this true? Can you explain how a charge, say an electron, can produce a spherical wave? Also, does the section (the area) of a charge carry any info about its force or anything else?
A result known as Birkhoff's theorem forbids spherical electromagnetic radiation. The statement of the theorem is that any spherically symmetric vacuum solution to Maxwell's equations must be static. It is rather simple to prove. In a spherically symmetric solution $\mathbf E$ and $\mathbf B$ must be radial. Make an Ansatz, $$\mathbf E = E_0 \exp(i(\mathbf k\cdot\mathbf r-\omega t)) \hat r \quad \mathbf B = B_0 \exp(i(\mathbf k\cdot\mathbf r-\omega t)) \hat r $$ The wavevector $\mathbf k$ must be $\mathbf k = k\hat r$ for spherical symmetry. Now Ampere's law is $$\nabla\times \mathbf B = i\mathbf k \times \mathbf B = 0 = \partial_t \mathbf E = -i\omega \mathbf E$$ which implies $\omega = 0$, so that the field is static, or $E_0 = 0$. From Faraday's law $\nabla\times\mathbf E =- \partial_t \mathbf B$ you can see that if $E_0 = 0$ but $\omega \neq 0$, then also $B_0 = 0$. The most general result for electromagnetic radiation is that in Coulomb gauge, in the radiation zone, the vector potential is $$\mathbf A(\mathbf x, t) = \frac{\mu_0}{4\pi }\frac{e^{i(kr-\omega t)}}{r} \int \mathbf J(\mathbf x') e^{-ik\hat{x} \cdot \mathbf x'} \, dx'$$ where $\mathbf J(\mathbf x')$ is the current in the source region, e.g., your antenna, and the current is assumed to have sinusoidal (harmonic) time dependence. [This is not a restriction because Maxwell's equations are linear and Fourier transform exists.] The angular dependence is entirely in the integral over the source current. Thus to achieve some desired angular profile of the radiation, one needs to design $\mathbf J$ appropriately. Your particular case of an oscillating sphere of charge actually does not radiate because it has only a monopole moment and there is no monopole radiation. A spheroidal charge distribution is treated by Jackson Classical Electrodynamics , Sec. 9.3. There Jackson shows that this arrangement leads to quadrupole radiation with a four-lobed distribution of radiated power. 
For a more in-depth discussion, read Ch. 9 in Jackson, which treats radiation in detail, including the angular distribution of radiated power from various sources.
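One small piece of the argument above, that a spherically symmetric (hence purely radial) field is automatically curl-free, can be checked symbolically; this SymPy sketch is my addition:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
g = sp.Function('g')

F = sp.Matrix([x, y, z]) * (g(r) / r)  # purely radial field g(r) r_hat

curl_F = sp.Matrix([
    sp.diff(F[2], y) - sp.diff(F[1], z),
    sp.diff(F[0], z) - sp.diff(F[2], x),
    sp.diff(F[1], x) - sp.diff(F[0], y),
])
print(sp.simplify(curl_F))  # zero vector: any radial field is curl-free
```

With $\nabla\times\mathbf B = 0$ for a radial $\mathbf B$, Ampere's law immediately forces $\partial_t\mathbf E = 0$, which is the first step of the Birkhoff-type argument above.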
{ "source": [ "https://physics.stackexchange.com/questions/251971", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
252,101
I am something of a dilettante in physics, so please forgive me if the answer to this question is painfully obvious. The question is simple: can something that theoretically has no mass exert a force? I have been tossing around this and other similar questions in my head for a while now and have not really found any concrete answers to my inquiry. I am thinking about how light seems to be able to push objects despite having no mass; however, I expanded the question to be more encompassing in hopes of further learning.
Yes, photons can. See https://en.wikipedia.org/wiki/Radiation_pressure (and photons are certainly massless). PS In fact, any massless particle has momentum(*) and if it is scattered on a body, it changes its own and the body's momentum, which is what a force does. (*) $p = \hbar k = E/c$ where $E$ is its energy and $c$ is speed of light
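As a rough numerical illustration of the momentum-transfer argument (assuming a perfectly absorbing target; the power figure is just an example), the force from a light beam is $F = P/c$:

```python
# Force exerted by a light beam of power P on a perfectly absorbing
# surface: each photon carries momentum E/c, so F = dp/dt = P/c.
c = 299_792_458.0          # speed of light, m/s
P = 1000.0                 # beam power in W -- an illustrative strong laser
F = P / c                  # force in newtons
print(F)                   # a few micronewtons: tiny, but nonzero
# For a perfect mirror the photons bounce back, doubling the momentum
# transfer: F_mirror = 2 * P / c.
```

So even a kilowatt of light pushes with only microneutons of force, which is why radiation pressure is negligible in everyday life yet measurable in careful experiments.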
{ "source": [ "https://physics.stackexchange.com/questions/252101", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/99285/" ] }
252,111
From SR, we know that the speed of light in a vacuum is the same for all observers, regardless of the motion of the light source. But in GR, does it still hold for all observers? I mean: is the constancy of the speed agreed upon only by local inertial observers, or by any other observers as well?
{ "source": [ "https://physics.stackexchange.com/questions/252111", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/101467/" ] }
252,288
I know mathematically the answer to this question is yes, and it's very obvious to see that the dimensions of a ratio cancel out, leaving behind a mathematically dimensionless quantity. However, I've been writing a C++ dimensional analysis library (the specifics of which are out of scope), which has me thinking about the problem because I decided to handle angle units as dimensioned quantities, which seemed natural to enable the unit conversion with degrees. The overall purpose of the library is to disallow operations that don't make sense because they violate the rules of dimensional analysis, e.g. adding a length quantity to an area quantity, and thus provide some built-in sanity checking to the computation. Treating radians as units made sense because of some of the properties that dimensioned quantities seemed to me to have:

1. The sum and difference of two quantities with the same dimension have the same physical meaning as both quantities separately.
2. Quantities with the same dimension are meaningfully comparable to each other, and not meaningfully comparable (directly) to quantities with different dimensions.
3. Dimensions may have different units that are scalar multiples of one another (sometimes with a datum shift).

If the angle is treated as a dimension, my 3 made-up properties are satisfied, and everything "makes sense" to me. I can't help thinking that the fact that radians are a ratio of lengths (SI defines them as m/m) is actually critically important, even though the length is cancelled out. For example, though radians and steradians are both dimensionless, it would be a logical error to take their sum. I also can't see how a ratio of something like (kg/kg) could be described as an "angle". This seems to imply to me that not all dimensionless units are compatible, which seems analogous to how units with different dimensions are not compatible.
And if not all dimensionless units are compatible, then the dimensionless "dimension" would violate made-up property #1 and cause me a lot of confusion. However, treating radians as having dimension also has a lot of issues, because now your trig functions have to be written in terms of $\cos(\text{angleUnit}) = \text{dimensionless unit}$ even though they are analytic functions (although I'm not convinced that's bad). Small-angle assumptions in this scheme would be defined as performing implicit unit conversions, which is logical given our trig function definitions but incompatible with how many functions are defined, especially since many authors neglect to mention they are making those assumptions. So I guess my question is: are all dimensionless quantities, but specifically angle quantities, really compatible with all other dimensionless quantities? And if not, don't they actually have dimension or at least the properties of dimension?
The answers are no and no. Being dimensionless or having the same dimension is a necessary condition for quantities to be "compatible", but it is not a sufficient one. What one is trying to avoid is called a category error. There is an analogous situation in computer programming: one wishes to avoid putting values of some data type into places reserved for a different data type. But while having the same dimension is certainly required for values to belong to the same "data type", there is no reason why they cannot be demarcated by many other categories in addition to that. The newton-meter is a unit of both torque and energy, and joules per kelvin of both entropy and heat capacity, but adding them is typically problematic. The same goes for adding proverbial apples and oranges measured in "dimensionless units" of counting numbers. Actually, the last example shows that the demarcation of categories depends on context: if one only cares about apples and oranges as objects it might be ok to add them. Dimension is so prominent in physics because it is rarely meaningful to mix quantities of different dimensions, and there is a nice calculus ( dimensional analysis ) for keeping track of it. But it also makes sense to introduce additional categories to demarcate values of quantities like torque and energy, even if there may not be as nice a calculus for them. As your own examples show, it also makes sense to treat radians differently depending on context: take their category ("dimension") viz. steradians or counting numbers into account when deciding about addition, but disregard it when it comes to substitution into transcendental functions. Hertz is typically used to measure wave frequency, but because cycles and radians are officially dimensionless it shares dimension with the unit of angular velocity, radian per second; radians are also the only difference between amperes for electric current and ampere-turns for magnetomotive force.
Similarly, dimensionless steradians are the only difference between lumen and candela , while luminous intensity and flux are often distinguished. So in those contexts it might also make sense to treat radians and steradians as "dimensional". In fact, radians and steradians were in a class of their own as "supplementary units" of SI until 1995. That year the International Bureau on Weights and Measures (BIPM) decided that " ambiguous status of the supplementary units compromises the internal coherence of the SI ", and reclassified them as " dimensionless derived units, the names and symbols of which may, but need not, be used in expressions for other SI derived units, as is convenient ", thus eliminating the class of supplementary units. The desire to maintain a general rule that arguments of transcendental functions must be dimensionless might have played a role, but this shows that dimensional status is to a degree decided by convention rather than by fact. In the same vein, ampere was introduced as a new base unit into MKS system only in 1901, and incorporated into SI even later. As the name suggests, MKS originally made do with just meters, kilograms, and seconds as base units, this required fractional powers of meters and kilograms in the derived units of electric current however. As @dmckee pointed out energy and torque can be distinguished as scalars and pseudo-scalars, meaning that under the orientation reversing transformations like reflections, the former keep their value while the latter switch sign. This brings up another categorization of quantities that plays a big role in physics, by transformation rules under coordinate changes. Among vectors there are "true" vectors (like velocity), covectors (like momentum), and pseudo-vectors (like angular momentum), in fact all tensor quantities are categorized by representations of orthogonal (in relativity Lorentz) group. 
This also comes with a nice calculus describing how tensor types combine under various operations (dot product, tensor product, wedge product, contractions, etc.). One reason for rewriting Maxwell's electrodynamics in terms of differential forms is to keep track of them. This becomes important when say the background metric is not Euclidean, because the identification of vectors and covectors depends on it. Different tensor types tend to have different dimensions anyway, but there are exceptions and the categorizations are clearly independent. But even tensor type may not be enough. Before Joule's measurements of the mechanical equivalent of heat in 1840s the quantity of heat (measured in calories) and mechanical energy (measured in derived units) had two different dimensions. But even today one may wish to keep them in separate categories when studying a system where mechanical and thermal energy are approximately separately conserved, the same applies to Einstein's mass energy. This means that categorical boundaries are not set in stone, they may be erected or taken down both for practical expediency or due to a physical discovery. Many historical peculiarities in the choice and development of units and unit systems are described in Klein's book The Science of Measurement .
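The "category" idea running through this answer can be sketched in code. This is an illustrative toy (not the questioner's actual C++ library): a quantity type that tracks a category tag and refuses to add values whose categories differ, even when both are dimensionless in the SI sense:

```python
# Toy illustration: track a "category" tag alongside the value, and forbid
# addition across categories -- even for quantities that are dimensionless
# in the SI sense (radians, steradians, counts).
class Quantity:
    def __init__(self, value, category):
        self.value = value
        self.category = category

    def __add__(self, other):
        if self.category != other.category:
            raise TypeError(
                f"cannot add {self.category} to {other.category}")
        return Quantity(self.value + other.value, self.category)

angle = Quantity(1.57, "radian")
solid_angle = Quantity(0.5, "steradian")

total = angle + Quantity(0.01, "radian")   # fine: same category
try:
    angle + solid_angle                    # both "dimensionless", still an error
except TypeError as e:
    print(e)
```

The same mechanism could keep torque and energy apart despite their shared dimension; deciding which categories to enforce is exactly the context-dependent judgment the answer describes.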
{ "source": [ "https://physics.stackexchange.com/questions/252288", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/115586/" ] }
252,802
Every advert I come across for LED bulbs advertises them as the equivalent of a higher-wattage incandescent bulb. This makes no sense to me: if the room requires 40W to light it up, then it'll always require 40W of energy. How is it possible for 6W of energy to do the job? What am I missing here?
A 40W incandescent light bulb has a luminous efficiency of 1.9% . That means only 1.9%, or 0.76W, of the energy consumed by the bulb ends up as visible light. LED bulbs have an efficiency of around 10% - the efficiency depends on the design and can be as high as 15% or as low as 8%. So a 6W LED bulb will produce between 0.48W and 0.9W of visible light. The claim that a 6W LED bulb produces as much light as a 40W incandescent bulb requires the efficiency of the LED bulb to be 12.7%, which is well within the range of efficiencies that LED bulbs can achieve.
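The arithmetic in this answer can be checked directly:

```python
# Check the equivalence claim: visible-light output of a 40 W incandescent
# bulb at 1.9% luminous efficiency vs. a 6 W LED bulb.
incandescent_power = 40.0      # W consumed
incandescent_eff = 0.019       # 1.9% ends up as visible light
visible = incandescent_power * incandescent_eff
print(visible)                 # 0.76 W of visible light

led_power = 6.0
required_eff = visible / led_power
print(required_eff)            # ~0.127, i.e. 12.7%, within the 8-15% LED range
```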
{ "source": [ "https://physics.stackexchange.com/questions/252802", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/62835/" ] }
254,858
When I was doing research on General Relativity, I found Einstein's equation for Gravitational Time Dilation. I discovered that when you plugged in a large enough value for $M$ (around $10^{19}$ kilograms), and plugged in $1$ for $r$, then the equation would give an imaginary answer. What does this mean?
Nice discovery! The formula for time dilation outside a spherical body is $$\tau = t\sqrt{1-\frac{2GM}{c^2r}}$$ where $\tau$ is the proper time as measured by your object at coordinate radius $r$, $t$ is the time as measured by an observer at infinity, $M$ the mass of the spherical body, and $G$ and $c$ the gravitational constant and the speed of light. You have noticed that when $r$ gets small enough, the square root can become imaginary. To get a real result you must have $$r>\frac{2GM}{c^2}=r_S$$ where I have defined $r_S$, the Schwarzschild radius. Well, there's a simple reason for this. If your body has a radius smaller than $r_S$, then it's a black hole, and the formula doesn't apply because objects inside the black hole (that is, with $r<r_S$) can't send signals to the outside, so the notion of time dilation of a signal (also called redshift in this context) doesn't make sense. Indeed, as $r$ approaches the Schwarzschild radius (from above) the redshift approaches infinity; this is why it is said that if you observe from far away a probe falling into a black hole, you will see it getting redder and moving slower as it falls; you'll never actually see it get into the black hole. To answer the question in the title: no, there's no such thing as imaginary time dilation. Getting an imaginary result here is a sign that the formula doesn't always make sense.
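A quick numerical check of when the square root turns imaginary, using the Sun's mass as an illustrative example (the function name and choice of mass are just for this sketch):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458.0      # speed of light, m/s

def dilation_factor(M, r):
    """sqrt(1 - 2GM/(c^2 r)); real only outside the Schwarzschild radius."""
    x = 1.0 - 2.0 * G * M / (c**2 * r)
    return math.sqrt(x) if x >= 0 else None  # None: inside the horizon

M_sun = 1.989e30                         # kg
r_s = 2.0 * G * M_sun / c**2
print(r_s)                               # ~2950 m: the Sun's Schwarzschild radius

print(dilation_factor(M_sun, 10 * r_s))  # real, slightly below 1
print(dilation_factor(M_sun, 0.5 * r_s)) # None: the formula no longer applies
```

At ten Schwarzschild radii the factor is $\sqrt{0.9} \approx 0.95$, i.e. clocks there tick about 5% slower than at infinity; inside $r_S$ the formula simply stops making sense, as explained above.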
{ "source": [ "https://physics.stackexchange.com/questions/254858", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/115161/" ] }
255,340
A box containing photons gravitates more strongly than an empty box, and thus the equivalence principle dictates that a box containing photons has more inertia than an empty box. The inescapable conclusion seems to be that we can ascribe the property of inertia to light. Is this a correct deduction?
Yes! In fact, this is very common. For example, the mass of a proton is much greater than the sum of the masses of the constituent quarks. Much of the extra mass comes from the gluons that bind the quarks together; each gluon is massless, but collectively they contribute to the inertia. The point is that the mass of a system is not the same as the sum of the masses of its constituents. Of course, this is just a rephrasing of $E = mc^2$ . If you have photons bouncing back and forth in a box, their energy contributes to the total mass.
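A rough number makes the point concrete: by $E = mc^2$, the extra inertia contributed by the trapped light is its energy divided by $c^2$ (one joule is just an illustrative amount):

```python
# Extra inertia of a box containing photons: m = E / c^2.
c = 299_792_458.0       # m/s
E = 1.0                 # one joule of trapped light, as an example
m = E / c**2
print(m)                # ~1.1e-17 kg -- utterly negligible, but nonzero
```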
{ "source": [ "https://physics.stackexchange.com/questions/255340", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/11633/" ] }
255,353
Why does a system like to minimize its total energy? For example, the total energy of a $H_2$ molecule is smaller than that of two isolated hydrogen atoms, and that is why two $H$ atoms try to form a covalent bond. According to classical mechanics, it is the potential energy of a conservative system that is minimum in equilibrium, not the total energy.
The anthropomorphic formulation "tries to" is misleading. Under the effect of ambient noise, matter explores the possible configurations around its current state: e.g., two single hydrogen atoms wiggle around and meet. If they happen to bind, this releases energy which goes away, and we say that the energetic state of this new $H_2$ molecule is lower than what we had. Unless the ambient noise or some experimentalist gives back this energy to the $H_2$ molecule, it will remain bound, so there is a net bias toward the states that we describe as having a lower (free) energy. Let's add that the traditional way to explain this bias (meaning that you need more energy, and thus have a smaller chance, to move from a lower energy state to a higher one than the other way around) is with the familiar schematic of a potential energy landscape.
{ "source": [ "https://physics.stackexchange.com/questions/255353", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/36793/" ] }
255,640
I apologize if this is a really silly question. In the (textbook) quantum teleportation algorithm, in the step right after Alice has measured her system but before she has sent her classical information to Bob, she is about to send one of the following values: 00,01,10,11. What if Bob doesn't want to wait and simply takes a guess? Wouldn't there then be superluminal communication 25% of the time?
This is really a subtle point. You are right that in 25% of the cases, Bob will randomly choose the "correct" measurement basis and thus get the correct value. However, there is no way for Bob to know when he has actually chosen the right basis and when he has chosen the wrong basis, so his measurement outcome does not contain more information than a random coin-toss. It is only when the information from Alice (regarding which basis to measure in) has reached him, that he can make use of his earlier (75% erroneous) measurements. It is in this sense that information cannot propagate faster than the speed of light.
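The coin-toss claim can be checked with a small calculation. This is a sketch assuming the standard teleportation convention, where Bob's pre-correction state for Alice's four equally likely outcomes is $|\psi\rangle$, $X|\psi\rangle$, $Z|\psi\rangle$, $XZ|\psi\rangle$: averaged over those outcomes, Bob's measurement statistics are exactly 50/50, no matter how biased the input state is.

```python
import numpy as np

# Input state a|0> + b|1> with |a|^2 = 0.9 -- a clearly biased state.
a, b = np.sqrt(0.9), np.sqrt(0.1)

# Bob's pre-correction state for Alice's four (equally likely) Bell
# outcomes 00, 01, 10, 11: psi, X psi, Z psi, XZ psi.
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
psi = np.array([a, b])
branches = [psi, X @ psi, Z @ psi, X @ Z @ psi]

# Averaged over Alice's outcomes, Bob's P(measure 0) is exactly 1/2,
# independent of a and b: no information arrives before her message.
p0 = np.mean([abs(s[0])**2 for s in branches])
print(p0)   # 0.5
```

The same averaging argument holds if Bob applies a random guessed correction first; guessing just permutes the four branches, so his statistics stay uniform until Alice's classical bits arrive.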
{ "source": [ "https://physics.stackexchange.com/questions/255640", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/81625/" ] }
255,909
I couldn't find any simple texts explaining the Kosterlitz-Thouless transition. More specifically, can someone explain the role of vortices in the transition? edit: links explaining the transition in a simple manner are also appreciated. Also the explanation does not have to be mathematically rigorous; a qualitative explanation is just fine. edit 2: To be clear, I know generally why it happens. I believe it has to do with the crystal lattice having 4-fold symmetry (lattice atoms arranged in square fashion), and thus, the atoms do not have an easy axis to align their magnetic moments. As a result, some weird vortices are created.
The Berezinskii-Kosterlitz-Thouless (BKT) scenario is one of the most beautiful transitions: it is ubiquitous in 2d systems (though it can also occur in higher dimensions for particular kinds of models) and, surprisingly, requires non-perturbative effects (i.e. topological defects) to be realized. To understand all the fuss (and the Nobel Prize) around this transition, perhaps a bit of context might be useful. There is a celebrated theorem in equilibrium statistical mechanics, the Mermin-Wagner-Hohenberg-Coleman theorem , that essentially tells us that a continuous symmetry cannot be broken spontaneously at any finite temperature in dimensions two or lower. This is because the Goldstone modes generated upon breaking a continuous symmetry have strong fluctuations in $d=1,2$ leading to the symmetry being restored at long distances (for $T>0$ ). Now for a 2d superfluid or superconductor the relevant order parameter is a complex scalar field $\psi=|\psi|e^{i\phi}$ with a phase shift $U(1)$ symmetry. So immediately, one would imagine that the 2d superconducting or superfluidity transition would never occur at finite temperature (and hence these states would never exist in the thermodynamic limit). The same conclusion is reached for the XY ferromagnet ( $O(2)$ classical spins on a 2d lattice) or a 2d nematic liquid crystal. What Kosterlitz and Thouless went on to show was that the theorem was true in that no continuous symmetry is spontaneously broken at finite temperature, but there still was a continuous phase transition (with a diverging correlation length) at some finite temperature in these systems. This is an important discovery, as until then the Landau-Ginzburg paradigm used to describe continuous phase transitions and critical phenomena always associated spontaneous symmetry breaking with the transition (note, however, that it was rather well known that a first-order transition didn't require any such symmetry breaking, cf. the regular liquid-gas transition).
Later on, Polyakov extended this scenario to gauge theories (in the hope of describing confinement in QCD), resulting in some very nice work showing, for example, 2+1 "compact" QED has a gapped spectrum in the IR due to topological excitations ( Phys. Lett. B 59 , 1975 , Nucl. Phys. B 120 , 1977 ) and the SU( $N$ ) Thirring model has fermions condensing with finite mass in the IR without breaking the chiral symmetry of the theory ( E. Witten, Nucl. Phys. B 145 , 1978 ). It was also further extended by D. Nelson and B. Halperin in the context of 2d melting of crystalline solids ( Phys. Rev. B 19 , 1979 ) leading to the prediction of a new liquid crystalline hexatic phase. After this very long preamble, let us now look at what the transition is really all about. The simplest model that exhibits the BKT transition is the XY model. Consider a 2d lattice with unit 2d vectors at each site. Each vector $\vec{S}_i$ (at site ' $i$ ') being in the plane is specified by a single angle $\theta_i$ $$ \vec{S}_i=(\cos\theta_i,\sin\theta_i) $$ The model is now specified by the Hamiltonian of the system, which includes nearest neighbour interactions that prefer to align nearby spins. In the absence of an external field, we have \begin{align} \beta\mathcal{H}&=-\dfrac{J}{k_B T}\sum_{\langle i,j\rangle}\vec{S}_i\cdot\vec{S}_j\\ &=-\dfrac{J}{k_B T}\sum_{\langle i,j\rangle}\cos(\theta_i-\theta_j) \end{align} where $J>0$ is the interaction coupling constant. Now at low temperatures, as the fluctuations in the angles are going to be small, at long distances, we take the continuum limit of the lattice model assuming the angle field is slowly varying.
Therefore writing $\theta_i-\theta_j=a\nabla\theta(x)\cdot\hat{e}_{ij}+O(a^2)$ , where $a\rightarrow0$ is the lattice spacing and $\hat{e}_{ij}$ is the unit vector along the lattice bond joining sites $i$ and $j$ , we get $$ \beta\mathcal{H}_{\mathrm{cont.}}=\dfrac{\beta J}{2}\int\mathrm{d}^2x\ |\nabla\theta|^2 $$ For small fluctuations ( $\theta\ll 1$ ), the fact that $\theta(x)$ is an angular variable is irrelevant, allowing us to compute the two-point correlation function as the partition function is given by a Gaussian integral. $$ \langle|\theta(q)|^2\rangle=\dfrac{k_B T}{J\ q^2} $$ Inverse Fourier transforming this gives us \begin{gather} \langle\theta(x)^2\rangle=\dfrac{k_B T}{2\pi J}\ln\left(\dfrac{L}{a}\right)\\ \langle[\theta(x)-\theta(0)]^2\rangle=\dfrac{k_B T}{\pi J}\ln\left(\dfrac{x}{a}\right) \end{gather} $L$ is the system size (IR cutoff) and $a$ the lattice spacing (UV cutoff). Hence as $L\rightarrow\infty$ , $\langle\vec{S}(x)\rangle=0$ implying the absence of long-ranged order and $$ \langle\vec{S}(x)\cdot\vec{S}(0)\rangle=\left(\dfrac{x}{a}\right)^{-\frac{k_BT}{2\pi J}} $$ The two-point spin correlation goes to 0 as $x\rightarrow\infty$ denoting the absence of long ranged order (this is once again just the Mermin-Wagner theorem) though the decay is very slow. It is a temperature-dependent power law, instead of the usual exponential decay (with a finite correlation length) expected for a disordered phase. So the low temperature phase of the XY model has what is called quasi-long ranged order (QLRO) with infinite correlation length ( $\xi=\infty$ ). As additional non-linearities coming from the gradient expansion can be shown to be irrelevant at long distances (in the RG sense), one is immediately led to believe that this power law decay persists for all temperatures.
This is evidently wrong, as common sense (and also high temperature loop expansions of the lattice model) would tell us that at high temperatures, the interaction is irrelevant, leaving each spin essentially independent and random, leading to decorrelations over a few lattice spacings. The resolution is then obtained by noting that in forgetting the angular nature of $\theta(x)$ , the continuum Gaussian "spin wave" theory does not account for windings of the angular phase field from $0$ to $2\pi$ . These are called vortices (and anti-vortices), and they correspond to topological defects in the $\theta(x)$ field (which is then not defined at the core of the defect). They are perfectly reasonable configurations on the lattice whose continuum limit corresponds to point singularities in the angle field. Note that these configurations never appear in a perturbative gradient expansion and are hence non-perturbative in nature. At the continuum level, the vortex is a singular solution of the Euler-Lagrange equation. $$ \nabla^2\theta=0\\ \oint_{\Gamma}\mathrm{d}s\cdot\nabla\theta=2\pi q $$ where $\Gamma$ is a closed loop encircling the origin and $q$ is the integer "charge" of the vortex. This basically says that as you go once around the origin, the phase field $\theta$ goes from $0$ to $2\pi q$ (which is the same as $0$ for a periodic function as $q$ is an integer). Now for a single such defect, we have $|\nabla\theta|=q/r$ ( $r$ being the radial coordinate), so we can compute its energy to be $$ E_q=\pi J q^2\ln\left(\dfrac{L}{a}\right) $$ which diverges logarithmically in the thermodynamic limit. Hence single defects are never excited, but defect pairs with opposite charges (dipoles) have a finite energy and can be excited at finite temperature. Neglecting interactions for the time being, there is a very simple hand-waving argument for the existence of a phase transition.
The energy of a single free defect diverges, but at finite $T$ one must look at the free energy which includes entropic contributions too. The number of ways a single defect of size $\sim a^2$ can be placed in an area of $L^2$ is roughly $(L/a)^2$ . Taking the logarithm to get the entropy, we have for the free energy $$ F=E_q-TS=(\pi J q^2-2 k_B T)\ln\left(\dfrac{L}{a}\right) $$ As the lowest charge excitations correspond to $q=1$ , for $T>T_c=\pi J/(2 k_B)$ the free energy becomes negative, which means that there is a proliferation of free defects in the system as entropy wins over the defect energetics. Including defect interactions doesn't change this picture (even $T_c$ remains the same). At $T=T_c$ , one obtains a universal power law decay (up to log corrections) $$ \langle\vec{S}(x)\cdot\vec{S}(0)\rangle=\left(\dfrac{x}{a}\right)^{-\eta} $$ with $\eta(T_c)=1/4$ . Above $T_c$ , we have a finite correlation length ( $\langle\vec{S}(x)\cdot\vec{S}(0)\rangle\sim e^{-x/\xi}$ ), which diverges exponentially fast as one approaches the transition from above. So, here we have a model in which both low and high temperature phases are disordered, but there is a phase transition at finite $T$ that involves the proliferation and unbinding of pairs of topological defects. Thinking of the defects as electric charges, the transition is then from an insulating low temperature phase to a conducting plasma with freely moving ions at higher temperature.
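The free-energy argument above is easy to check numerically. A sketch in units with $J = k_B = 1$ (so $T_c = \pi/2$; the system size $L/a$ is an arbitrary illustrative value), showing the sign change of $F$ at $T_c$ and the universal exponent $\eta(T_c) = 1/4$:

```python
import math

# Units with J = k_B = 1. Free energy of a single q = 1 vortex:
# F = (pi J q^2 - 2 k_B T) ln(L/a); the sign flips at T_c = pi J / (2 k_B).
def free_energy(T, L_over_a=1e6, q=1):
    return (math.pi * q**2 - 2.0 * T) * math.log(L_over_a)

T_c = math.pi / 2.0
print(free_energy(0.9 * T_c) > 0)   # True: isolated vortices suppressed
print(free_energy(1.1 * T_c) < 0)   # True: entropy wins, vortices proliferate

# Spin-wave correlation exponent eta = k_B T / (2 pi J) evaluated at T_c:
eta = T_c / (2.0 * math.pi)
print(eta)   # 0.25, the universal value at the transition
```

Note that $F$ vanishes exactly at $T_c$ regardless of the system size, which is why the simple entropy-vs-energy estimate pins down the transition temperature.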
{ "source": [ "https://physics.stackexchange.com/questions/255909", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/117333/" ] }
255,947
There is a device available for about $\$40$, which fits in the palm of the hand, runs on two AA batteries, and can measure distances up to $50\,{\rm ft}$ to an accuracy of $\sim \frac{1}{8}''$ ($\sim 3\,{\rm mm}$). Light travels $300,000\,{\rm km}\,{\rm s}^{-1}$, so it takes about $0.01\,{\rm ns}$ to travel the $3\,{\rm mm}$ equivalent to the device accuracy. If the device was counting ticks of a clock to measure the laser round-trip time (and therefore round-trip distance), the timing circuit would have to be running at about $50\,{\rm GHz}$. That's a pretty fast clock! Is that how these devices actually work? Run a very fast clock to measure off round-trip time, or is there some other principle of optics which is used in conjunction with cheap and simple electronics?
EDIT updated (improved) description of phase detection circuit There are two principles used in these systems. The first is the time-of-flight principle. As you noted, if you wanted to get down to 3 mm accuracy, you need timing resolution of 20 ps (20, not 10, because you would be timing the round trip of the light). That's challenging - certainly not the realm of cheap consumer electronics. The problem is not only the need to detect a fast edge - you have to detect the actual reflected pulse, and not every other bit of noise around. Signal averaging would be your friend: sending a train of pulses and timing their average round trip time helps. This immediately suggests that continuous modulation would probably work better - it has an inherent filtering characteristic. That leads to the second way to get an accurate measurement: by comparing the phase of the emitted and returned signal. If you modulate your laser at a modest 300 MHz, the wavelength of one complete "wave" is 1 m; to measure a change in distance of 3 mm (6 mm round trip), it is sufficient to detect a phase shift of $\frac{6}{1000}\times 2\pi$. This is quite trivial with a circuit that squares the transmitted and reflected wave, then takes the XOR of the two signals and averages the result. Such a circuit will give minimum voltage when the two signals are exactly in phase, and maximum voltage when they are exactly out of phase; and the voltage will be very linear with phase shift. You then add a second circuit that detects whether signal 2 is high when signal 1 has a rising edge: that will distinguish whether signal 1 or signal 2 is leading. Putting the output of the logic gates into a low-pass filter (resistor and capacitor) and feeding it into a low-speed 12-bit ADC is sufficient to determine the phase with high accuracy.
There are ready made circuits that can do this for you - for example, the AD8302 The only problem with the phase method is that you will find the distance modulo half the wavelength; to resolve this, you use multiple frequencies. There is only a single distance that has the right wavelength for all frequencies. A possible variation of this uses a sweeping frequency source, and detects the zero crossings of the phase - that is, every time the phase detector output is zero (perfectly in phase) you record the modulation frequency at which this occurred. This can easily be done very accurately - and has the advantage that "detecting zero phase" doesn't even require an accurate ADC. A wise man taught me many years ago that "the only thing you can measure accurately is zero". The distance would correspond to the round trip time of the lowest frequency which has a zero crossing - but you don't necessarily know what that frequency is (you may not be able to go that low). However, each subsequent zero crossing will correspond to the same increase in frequency - so if you measure the $\Delta f$ between zero crossings for a number of crossings, you get an accurate measure of the distance. Note that a technique like that requires very little compute power, and most of the processing is the result of very simple signal averaging in analog electronics. You can read for example US patent application US20070127009 for some details on how these things are implemented. A variation of the above is actually the basis of an incredibly sensitive instrument called the lock-in amplifier. The principle of a lock-in amplifier is that you know there is a weak signal at a known frequency, but with unknown phase (which is the case for us when we look at the reflected signal of a modulated laser). Now you take the input signal, and put it through an IQ detector: that is, you multiply it by two signals of the same frequency, but in quadrature (90° phase shift). 
And then you average the output over many cycles. Something interesting happens when you do that: the circuit acts, in effect, as a phase-sensitive bandpass filter, and the longer you wait (the more cycles' output you average over), the narrower the filter becomes. Because you have both the I and the Q signals (with their phase shift), you get both amplitude and phase information - with the ability to recover a tiny signal on top of a huge amount of noise, which is exactly the scenario you will often have with a laser range finder. See for example the wiki article . The quadrature detection becomes quite trivial when you use a clock at twice the modulation frequency, and put two dividers on it: one that triggers on the positive edge, and one that triggers on the negative edge. A couple of (fast, electronic) analog switches and a simple RC circuit complete the project. You can now sweep the driving frequency and watch the phase on the two outputs "wrap" - and every time it makes a full circle, you have increased the frequency by an amount $\Delta f = \frac{c}{2d}$ where $c$ is the speed of light, and $d$ is the distance to the target. This turns a very hard measurement into a really easy one.
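The numbers in this answer are easy to reproduce. A sketch of the phase arithmetic only (the target distance is an illustrative value, and this is of course not the actual device firmware):

```python
import math

c = 299_792_458.0          # speed of light, m/s

# Phase method: modulate at f, and measure the phase of the round trip 2d.
f = 300e6                  # 300 MHz modulation -> ~1 m modulation wavelength
d = 0.003                  # a 3 mm change in distance
phase_shift = 2 * math.pi * (2 * d) * f / c
print(phase_shift / (2 * math.pi))   # ~6/1000 of a full cycle, as in the text

# Sweep method: successive zero-phase crossings are spaced by
# delta_f = c / (2 d); inverting this recovers the distance.
target = 12.0              # illustrative target distance, m
delta_f = c / (2 * target)
print(c / (2 * delta_f))   # recovers the 12.0 m distance
```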
{ "source": [ "https://physics.stackexchange.com/questions/255947", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/27898/" ] }
255,964
Is it possible to levitate something inside a solenoid? If yes, is it possible in any direction (horizontal, vertical)? Also, is it possible when an object is traveling through? Thanks in advance!
{ "source": [ "https://physics.stackexchange.com/questions/255964", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/117367/" ] }
255,968
This is my first time posting here so bear with me. Also, the circumstances are strange so I am posting quite a bit of detail because I don't know what is relevant and what is not. The short story is I went snorkeling tonight in Southern California. Afterwards I went back to my car, took off my wet suit, and drove to starbucks (a mile away). After parking I went to the pay station, and when putting my credit card in I felt something very similar to what I would call an electric shock. Not the feeling of a static discharge, more like the feeling I get when replacing a light switch after turning off the wrong circuit breaker. It was a continuous sensation and very jolting (I jumped back a few feet), but definitely not as strong as the few times I have been shocked doing home repairs. I then went to get my parking receipt from the bottom of the machine and once again I got shocked, this time even stronger. Both parts of the machine that caused the shock were metal. I then started to try and problem solve. My wife came over and asked what was going on. I told her and she touched the same places as me without anything happening. She then had me hand to her my phone and the digital camera that I was holding. I touched the machine again and whoa, same result. She tried touching the machine in various places, again nothing. I inadvertently touched her hand while she was touching the machine and then suddenly she felt it too. We were able to repeat that several times, and then, scratching our heads, we headed into starbucks. We came back out 15 minutes later after drinking our hot chocolate and tried to reproduce the phenomenon with no luck. Now I am going to give you all the relevant (and probably many irrelevant) details I can think of: I was wearing flip flops from the time I stripped off my neoprene wet suit at the car until the time I started getting shocked (my wife was wearing Birkenstocks). 
I had been snorkeling for about an hour in the Pacific Ocean wearing a full body wet-suit, booties, and gloves (no hood). I had been camping the night before and consumed quite a bit of Gatorade. My wife had only been wearing a spring suit and gloves, no booties. There was another receipt that had been left in the machine (maybe someone else had been shocked as well and decided it wasn't worth the risk of going after it?) I can't think of anything else relevant. Any insights into what was going on here would be welcome. I tried calling the maintainers of the machine but couldn't get through (this was before I found out that I seemed to be the only one affected). Thanks!
{ "source": [ "https://physics.stackexchange.com/questions/255968", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/117371/" ] }
256,285
As far as I understand the definition of a second , the Cs-133 atom has two hyperfine ground states (which I don't really understand what they are but it's not really important), with a specific energy difference between them. Whenever the atom transitions from the higher-energy to the lower energy state, the difference in energy is released as a photon. A photon with that energy is equivalent to EM radiation of a specific frequency. A second is then defined as 9192631770 divided by this frequency. In many places I see people claiming that the Cesium atom oscillates between the two states, transitioning from one to the next 9192631770 times per second, and that this is what the definition is based on. This makes no sense to me, and seems incompatible with the interpretation above - which is based on the energy of a single transition, not to rapid transitions. So I usually just dismiss it and/or correct the person claiming this. When I saw the "oscillations" interpretation repeated in a video by the hugely popular Vsauce, I started to think maybe I got it all wrong. Maybe the second is defined by oscillations after all? Or maybe the two interpretations are somehow equivalent? So, is there any truth to Vsauce's description? And if not, why is the misconception of oscillations so popular?
You're correct and the video is mistaken. In fact, if cesium atoms were constantly oscillating between the two hyperfine states, cesium beam clocks wouldn't work at all! In its simplest form, a cesium beam clock uses a magnet to separate a stream of atoms into two streams based on their hyperfine state; one state is selected to continue down the tube to be exposed to an oscillating magnetic field in the microwave range, and the others are wasted. After the microwave chamber, the stream is magnetically separated again, with one state (differing from the state that was selected the first time by a certain energy) hitting a target that responds to cesium atoms by producing an electrical signal. The effect is something like the crossed polarizers of an LCD display. Since one state is selected before the microwave chamber, and a different state is selected afterwards, there is no signal unless atoms changed state in between. "Ordinarily", this doesn't happen, but if the microwave tube is bombarding the atoms with energy that corresponds to the desired hyperfine transition, then some of the atoms will absorb energy, make the transition, and be detected at the far end. By incorporating the beam and detector into the control loop of a variable oscillator, the microwave frequency can be maintained at the frequency that causes the hyperfine transition, independent of outside conditions. The part of this that's crucial to your question is the statement that the cesium atoms don't change state between the A and B selectors unless something causes them to. If they were changing states at >9 GHz, then small variations in the travel times for the atoms (which move at hundreds or thousands of m/s, but nowhere near the speed of light) would result in a completely random signal at the detector. Instead, we get a coherent signal because the rate of spontaneous hyperfine transitions is small compared to the time the atoms spend in the tube. 
Any type of interaction that can scramble the hyperfine state of an atom reduces the sensitivity of the clock, and eliminating these interactions is a big part of maximizing accuracy.
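To make the numbers concrete (Python sketch; the constants are the exact SI values): one period of the defining radiation lasts about 109 ps, and a single transition photon carries only a few tens of micro-electronvolts. The definition counts periods of this radiation - it does not require the atom to flip state billions of times per second.

```python
F_CS = 9_192_631_770        # defining hyperfine frequency, Hz (exact)
H = 6.62607015e-34          # Planck constant, J*s (exact in SI)
EV = 1.602176634e-19        # joules per electronvolt (exact in SI)

period = 1.0 / F_CS                 # duration of one cycle of the radiation, s
photon_energy_ev = H * F_CS / EV    # energy released by ONE transition, eV

# One second is, by definition, 9 192 631 770 of these periods.
```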
{ "source": [ "https://physics.stackexchange.com/questions/256285", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/114200/" ] }
256,396
When painting miniatures (like RPG fantasy miniature soldiers)... why is it necessary to paint lights and shadows? Being a 3D object, shouldn't the natural light be enough to create lights and shadows if the figure is simply painted with plain colours?
When objects are very small, every source of illumination will appear to be "extended" - which softens the shadows and makes it harder to see the contours of the surface. By painting highlights and shadows, you reduce the impact of the extended source. See for example Why don't fluorescent lights produce shadows?
{ "source": [ "https://physics.stackexchange.com/questions/256396", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/117578/" ] }
256,570
When thinking about how the WiFi signal propagates through a household, can I use the following thought experiment? Assume absolute darkness. Place a strong lightbulb where the WiFi access point is. The illumination that reaches various places in the house is approximately proportional to the strength of the WiFi signal in that place. How precise is this mental image? I know that the radio waves can penetrate some objects / walls that the light cannot. Is this at least somewhat representative?
It's more like the walls were semi-transparent glass, if you want to imagine it as light (and even then, you neglect diffraction effects). It would actually be better to imagine it as sound! But this seems to be exactly what you're looking for: http://arstechnica.com/gadgets/2014/08/mapping-wi-fi-dead-zones-with-physics-and-gifs/
{ "source": [ "https://physics.stackexchange.com/questions/256570", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/117668/" ] }
256,674
I have an Ikea candle which has sat on my bookshelf in the sun for >5 years. Aside from an hour or two shortly after I bought the candle, I have not burned the candle regularly (in fact, the wick is broken off at the moment). After sitting in the sun for a few years, the wax has begun to crawl up the sides of the glass jar the candle is sold in. How does the wax crawl up the jar? I know that the outsides are certainly growing up, rather than the centre falling down. Originally after the candle was lit, the centre was depressed but no wax had begun to climb up the sides. It was only several years afterwards that I noticed the wax crawling behaviour. So it is not anything to do with candle being lit. The candle looked like this when I bought it: My candle now looks like this: How does the wax have such a strong attraction to the walls of the jar, and how does it flow, given that it is a solid? Where does it get the energy from?
Candle wax expands considerably when hot and molten. So while burning the candle the level in the glass rises. But when the candle is extinguished the outer region (nearest the glass) cools down quicker (candle wax doesn't conduct heat very well) and solidifies first, becoming immobile. The molten remainder then shrinks before solidifying. So it's the temperature gradient (from outside to inside) and the preferential solidifying from outside to inside that causes the outside material to be higher up in the glass, after full solidification. Here's a corroborating experiment almost anyone can carry out. Allow a cup candle (even a small tealight candle will work) to burn for a sufficiently long time, so a large molten puddle has formed. Now gently extinguish the flame and allow the candle to cool down and solidify undisturbed. The originally flat solid surface will have become convex.
{ "source": [ "https://physics.stackexchange.com/questions/256674", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/104866/" ] }
256,680
UV radiation isn't visible to the human eye, so how come we can see it as a purple/violet light from a UV lamp? Is it just because the lamps aren't perfect and end up emitting some light at a higher frequency? Or do they add some purple light intentionally? Or is there some more complex mechanism going on?
{ "source": [ "https://physics.stackexchange.com/questions/256680", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/112636/" ] }
257,018
There are plenty of well-answered questions on Physics SE about the mathematical differences between gauge symmetries and global symmetries, such as this question. However I would like to understand the key differences between the transformations in terms of what they mean physically. Say we have the Lagrangian for a scalar field interacting with the electromagnetic field, \begin{equation} L = -\frac{1}{4} F_{\mu\nu} F^{\mu\nu} + (D_{\mu} \phi)^* D^{\mu}\phi - m^2|\phi|^2, \end{equation} where $D^{\mu} = \partial^{\mu} + ieA^{\mu}$. This is invariant under both a local gauge symmetry $A^{\mu} \rightarrow A^{\mu} + \partial^{\mu} \chi$ with $\phi \rightarrow e^{-ie\chi(x)} \phi$ and a global symmetry $\phi \rightarrow e^{i\chi} \phi$. I am aware that by requiring the gauge symmetry we have introduced interaction terms coupling the scalar and vector boson fields, while the global symmetry gives us the conservation of particle number by Noether's theorem. But now what do the local and global phase shifts mean physically? Or are their physical meanings defined purely by their introduction of field couplings and of particle conservation, respectively?
The first answer to such a question must always be: A gauge symmetry has no "physical" meaning, it is an artifact of our choice for the coordinates/fields with which we describe the system (cf. Gauge symmetry is not a symmetry?, What is the importance of vector potential not being unique?, "Quantization of gauge systems" by Henneaux and Teitelboim). Any gauge symmetry of the Lagrangian is equivalent to a constraint in the Hamiltonian formalism, i.e. a non-trivial relation among the coordinates and their canonical momenta. In principle, any gauge symmetry may be eliminated by passing to the reduced phase space that has fewer canonical degrees of freedom. The gauge symmetry has no physical meaning in the sense that we may get rid of it by passing to a (classically) equivalent description of the system. A gauge transformation has no physical meaning because all states related by a gauge transformation are physically the same state. Formally, you have to quotient the gauge symmetry out of your space of states to get the actual space of states. In contrast, a global symmetry is a "true" symmetry of the system. It does not reduce the degrees of freedom of the system, but "only" corresponds to conserved quantities (either through Noether's theorem in the Lagrangian formulation or through an almost trivial evolution equation in the Hamiltonian formalism). It is physical in the sense that states related by it may be considered "equivalent", but they are not the same. Interestingly, for scalar QED, the global symmetry gives a rather inconvenient "Noether current" - one that depends on the gauge field (cf. this answer)! So the statement that "Noether's theorem" gives us charge/particle number conservation is not naively true in the scalar case (but it is in the Dirac case). Getting charge conservation from the gauge symmetry is also discussed in Classical EM: clear link between gauge symmetry and charge conservation. 
Why then use such a "stupid" description in the first place, you might ask. The answer is that, in practice, getting rid of the superfluous degrees of freedom is more trouble than it's worth. It might break manifest invariance under other symmetries (most notably Lorentz invariance), and there can be obstructions (e.g. Gribov obstructions ) to consistently fix a gauge. Quantization of gauge theories is much better understood in the BRST formalism where gauge symmetry is preserved and implemented in the quantum theory than in the Dirac formalism that requires you to be able to actually solve the constraints in the Hamiltonian formalism. So the key difference between a gauge and a global symmetry is that one is in our theoretical description , while the other is a property of the system . No amount of shenanigans will make a point charge less spherically symmetric (global rotation symmetry). But e.g. the electromagnetic gauge symmetry simply vanishes if we consider electric and magnetic fields instead of the four-potential. However, in that case we lose the ability to write down the covariant Lagrangian formulation of electromagnetism - the current $J^\mu$ must couple to some other four-vector, and that four-vector is simply the potential $A^\mu$. There is one further crucial aspect of gauge symmetries: Every massless vector boson necessarily is associated to a gauge symmetry (for a proof, see Weinberg's "Quantum Theory of Fields" ). There is no other way in a consistent quantum field theory: You want massless vector bosons like photons - you get a gauge symmetry. No matter how "unphysical" this symmetry is - in the covariant framework of quantum field theory we simply have no other choice than to phrase such particle content in terms of a gauge field. This you might see as the true "physical" meaning of gauge symmetries from the viewpoint of quantum field theory. 
Going one step further, it is the spontaneous breaking of such symmetries that creates massive vector bosons. A theory of vector bosons is almost inevitably a theory of gauge symmetries. As an aside: In principle, one might try to make any non-anomalous global symmetry into a gauge symmetry (cf. When can a global symmetry be gauged? ). The question is whether gauging it produces any new physical states, and whether these states fit to observations.
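As a concrete check of the local transformation (note that with the convention $D_\mu = \partial_\mu + ieA_\mu$ and $A_\mu \to A_\mu + \partial_\mu\chi$, the compensating phase on the matter field has to be $e^{-ie\chi}$):

```latex
\begin{aligned}
A_\mu &\to A_\mu + \partial_\mu\chi, \qquad \phi \to e^{-ie\chi(x)}\phi,\\
D_\mu\phi &\to \left(\partial_\mu + ieA_\mu + ie\,\partial_\mu\chi\right)e^{-ie\chi}\phi\\
&= e^{-ie\chi}\left(\partial_\mu\phi - ie(\partial_\mu\chi)\phi + ieA_\mu\phi + ie(\partial_\mu\chi)\phi\right)
= e^{-ie\chi}\,D_\mu\phi ,
\end{aligned}
```

so $|D_\mu\phi|^2$, and with it the Lagrangian, is unchanged. Setting $\chi$ constant recovers the global transformation, under which $A_\mu$ is untouched and the Lagrangian is again invariant - that is the symmetry Noether's theorem acts on.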
{ "source": [ "https://physics.stackexchange.com/questions/257018", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/105680/" ] }
257,019
I am trying to build a simulation of gravity in LibGDX using Bullet physics. To simplify it, I just want to apply a force on some body toward the point (0,0,0). I have my body's mass and its location, and I want to use the applyForce method from the Bullet API. I need to give it two parameters: the force and the direction. The direction is easy to calculate - it's just a vector opposite to the location - but how do I calculate the force? Also, the API requires the force to be a vector, while I thought it should be a number; can you explain why it is this way?
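A minimal sketch of the computation the question describes (Python rather than Java, and the central mass `m_central`, the function name, and SI units are assumptions for illustration): Newton's law $F = G m_1 m_2 / r^2$ gives the magnitude, and scaling the negated unit position vector by it yields the single vector that an applyForce-style API expects - the vector form encodes magnitude and direction in one value, which is why the API asks for a vector rather than a number.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_toward_origin(pos, m_body, m_central):
    # Newtonian attraction toward a hypothetical point mass at (0, 0, 0).
    x, y, z = pos
    r = math.sqrt(x * x + y * y + z * z)
    if r == 0.0:
        return (0.0, 0.0, 0.0)  # force is undefined at the origin; avoid /0
    f = G * m_body * m_central / (r * r)      # scalar magnitude
    # Multiply the magnitude by -pos/r (unit vector pointing at the origin).
    return (-f * x / r, -f * y / r, -f * z / r)
```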
{ "source": [ "https://physics.stackexchange.com/questions/257019", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/57376/" ] }
257,153
I am confused about the four kinds of fundamental interactions: I don't see why the electric force and the magnetic force should be grouped into one big class called electromagnetism. Here is my evidence: Gauss's law for the electric field involves a surface integral, whereas Ampère's law involves a path integral. An electric field can be produced by a single static charge, while a magnetic force requires a moving charge or two infinitesimal current elements. Electric field lines are never closed, but magnetic field lines (except those running off to infinity) are closed curves.
Consider this: A charged particle at rest creates an electric field, but no magnetic field. Now if you walk past the charge, it will be in motion from your point of view, that is, in your frame of reference. So your magnetometer will detect a magnetic field. But the charge is just sitting on the table. Nothing about the charge has changed. Evidently the space around the charge is filled with something that at times appears to be a pure electric field, and at other times appears to have a magnetic field. We conclude that the field is something other than an electric field or a magnetic field. It is another type of field which combines the two into one entity.
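The field-mixing described above can be made quantitative. In the low-velocity limit, an observer moving at speed $v$ past a static charge measures a magnetic field of magnitude $B \approx vE/c^2$. A small sketch (the charge value and walking speed below are made-up illustration numbers):

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m
C = 2.998e8       # speed of light, m/s

def fields_seen_by_walker(q, r, v):
    """E and B measured by an observer moving at speed v past a static
    charge q, at perpendicular distance r (low-velocity limit, where
    B = v*E/c^2)."""
    E = q / (4 * math.pi * EPS0 * r**2)  # Coulomb field of the charge
    B = v * E / C**2                     # magnetic field in the moving frame
    return E, B

# a 1 microcoulomb charge on the table, observer walking past at 1.5 m/s:
E, B = fields_seen_by_walker(1e-6, 1.0, 1.5)
print(E, B)  # ~9 kV/m electric field, but only ~1.5e-13 T magnetic field
```

The magnetic field is tiny at walking speed, but nonzero: exactly the frame-dependence the answer describes.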
{ "source": [ "https://physics.stackexchange.com/questions/257153", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/109123/" ] }
257,247
Suppose you measure quantity $x$ with an uncertainty ${\rm d}x$. Quantity $f$ is related to $x$ by $f=x^2$ . By error propagation the uncertainty on $f$ would be ${\rm d}f=2x{\rm d}x$. If a certain point $x$ equals zero then the uncertainty on $f$ would be zero, even if $x$ carries an uncertainty. Is there a special procedure in these cases?
Use the second derivative (or third, or whatever). The reason we use that formula is that $$ df \approx \frac{df}{dx} dx $$ is the first order Taylor approximation to df. If the first order term vanishes, you should include higher terms: $$ df \approx \frac{df}{dx} dx+\frac{1}{2}\frac{d^2f}{dx^2} dx^2+... $$ In your case, with $f=x^2$, and $x=0$, we'd have $$ df \approx dx^2 $$
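This can be checked numerically. The sketch below (the test function and step sizes are my own illustration) estimates the derivatives by finite differences and keeps both Taylor terms, so the uncertainty no longer collapses to zero when the first derivative vanishes:

```python
def propagate(f, x, dx, h=1e-5):
    """Propagate the uncertainty dx through f using a second-order
    Taylor expansion: |f'(x)|*dx + 0.5*|f''(x)|*dx^2."""
    d1 = (f(x + h) - f(x - h)) / (2 * h)           # central-difference f'(x)
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2   # central-difference f''(x)
    return abs(d1) * dx + 0.5 * abs(d2) * dx**2

f = lambda x: x**2
print(propagate(f, 0.0, 0.1))  # first-order term vanishes at x=0; ~0.01 = dx**2
```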
{ "source": [ "https://physics.stackexchange.com/questions/257247", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/116677/" ] }
257,383
When the humidity in the air is high, we sweat more and feel it's hotter than when the humidity is lower. So why don't we feel it's hotter when we go inside water, where the water content is much higher than in the air, than when we're not inside the water? Is it just because it's liquid and not a gas?
You feel cold when heat is flowing from you to the surroundings; your body tries to burn more energy to keep up your temperature, so you shiver. Water conducts heat much more effectively than air (by a factor of roughly 25 in thermal conductivity alone, and far more once convection is included), so even in water at the same temperature as the air you will lose a lot more heat and feel cold. When your body is too hot it loses energy most efficiently by sweating. It releases water which evaporates; the energy needed for the water to go from liquid to gas comes from your skin, which is then cooled. In humid conditions it is harder for the water to evaporate (because there is already a lot of gaseous water in the air), so you can't cool as efficiently and so feel hotter.
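As a rough order-of-magnitude sketch of why immersion feels so different, one can compare convective heat loss $Q = hA\,\Delta T$ for the two media. The heat-transfer coefficients, body area, and temperature difference below are ballpark assumptions, not measured figures:

```python
def heat_loss(h, area=1.8, dT=10.0):
    """Convective heat loss Q = h * A * dT in watts.
    h: heat-transfer coefficient (W/m^2/K), area in m^2, dT in kelvin."""
    return h * area * dT

# ballpark coefficients: still air ~10 W/m^2/K, water ~500 W/m^2/K
print(heat_loss(10.0))   # air: 180 W, roughly what a body can keep up with
print(heat_loss(500.0))  # water: 9000 W, hence the immediate chill
```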
{ "source": [ "https://physics.stackexchange.com/questions/257383", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/18720/" ] }
257,476
In order to get dark energy to dominate, wouldn't you first need another form of energy to push the expansion until dark energy could dominate? Otherwise I don't understand how the universe could shift from having a decelerating expansion to an accelerating expansion. Is there any analogy that could help understand this?
The title and the text actually ask two different questions. While Kyle Oman and Thriveth answer the title excellently, I'll address the question in the text which asks " Why did the Universe expand in the first place, before dark energy (DE) started to dominate ". The answer to this is inflation (we think). For the first fraction of a second after the creation of space, it was dominated by "something" that mimicked the effect of DE, causing space to expand by a factor of $\sim e^{60}$. The epoch of inflation lasted until the Universe was some $10^{-32}\,\mathrm{s}$ old. The expansion continued, but was slowed down by the mutual attraction of radiation, and later matter. If the ratio of DE-to-matter had been smaller, this attraction might have slowed it down sufficiently to halt the expansion before DE started dominating, but that was just not the case in our Universe. Now what caused the inflation is another question, which someone other than me is better at answering. But I think the most accepted theory, or rather hypothesis, is some scalar field consisting of inflatons. Analogy You request an analogy. I can give you the following: Throw a rock into the air. Your push is inflation. The distance from Earth to the rock is the size of the Universe. The gravitational force between Earth and the rock is the mutual attraction between various forms of energy in the Universe. The speed of the rock is the expansion rate of the Universe. Now if your pitch was too weak, the rock will eventually fall back (Big Crunch), while if you throw hard enough (11 km/s), the rock will escape Earth's pull (Big Freeze). But even if the initial speed was less than 11 km/s, if the rock comes sufficiently close to the Moon (dark energy), it will start picking up speed and eventually escape.
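The 11 km/s figure in the analogy is just the Earth's escape speed, $v_{esc}=\sqrt{2GM/R}$, which is easy to verify:

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M_E = 5.972e24   # Earth mass, kg
R_E = 6.371e6    # Earth radius, m

# speed at which kinetic energy equals the depth of the potential well:
v_esc = math.sqrt(2 * G * M_E / R_E)
print(f"{v_esc / 1000:.1f} km/s")  # ~11.2 km/s
```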
{ "source": [ "https://physics.stackexchange.com/questions/257476", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/43783/" ] }
257,693
By the end of the 19th century all gasses had been liquefied apart from helium (He) . What is it about helium that makes it so hard to liquefy compared to the other gases? And why does it need to be pre cooled in the Joule-Kelvin expansion ?
The next approximation beyond the ideal gas is given by the Van der Waals fluid equation . It is a phenomenological law which takes into account the finite size of the molecules and their interactions with themselves. When you plot several Van der Waals isotherms for a given substance, you observe that some of them show a phase transition from gas to liquid while others do not. The ones which do not show a phase transition are above a so-called critical temperature $T_c$. Above this temperature you can decrease the volume or increase the pressure of the gas and it will not liquefy. Actually, the isotherms below the critical temperature need a correction given by Maxwell . To avoid instability (lower pressure giving lower volume giving lower pressure...) the actual path in the $PV$ diagram must avoid the "bumps" and follow the dashed line, as in the figure below The dashed line is the phase transition region. To see this, notice that if you keep decreasing the volume further below $V_L$ you will need a huge amount of pressure. This means we got a liquid. Also notice that if the substance is above the critical temperature there is no need to apply that Maxwell correction. So there is no phase transition. The phase transition prediction by Van der Waals earned him the 1910 Nobel prize in Physics. Examples of critical temperatures are (in degrees Celsius): \begin{align} T_c(H_2O)&=+374.35,\\ T_c(O_2)&=-118.55,\\ T_c(N_2)&=-147.15,\\ T_c(H_2)&=-240.17,\\ T_c(He^4)&=-267.96. \end{align} As you can see, we are only able to liquefy helium when it is below $-267.96\,^\circ\mathrm{C}$. For a long time chemists called the gases $O_2$, $N_2$, $H_2$ and $He^4$ the permanent gases, since they were not able to drop the temperature enough to turn them liquid. Edit: I basically said that the great difficulty in liquefying helium is due to its extremely low critical temperature. The next question would be: Why is the helium critical temperature so low? Let me try to answer that question too.
The van der Waals equation for one mol of gas reads $$\left(P+\frac{a}{v^2}\right)(v-b)=RT.$$ The parameter $a$ characterizes the strength of the attractive intermolecular interaction while $b$ is related to the effective volume occupied by the molecules. The critical temperature can be calculated in terms of these parameters (remember the temperatures are always given in Kelvin), $$T_c=\frac{8a}{27bR}.$$ So a small $T_c$ means either small $a$ (weak interaction) or high $b$ (big molecules) or a combination of both. For the gases mentioned above we have, \begin{array}{|c|c|c|} \hline & a(Pa\cdot m^6/mol^2) & b(m^3/mol) \\ \hline H_2O & 554\cdot 10^{-3} & 3.05\cdot 10^{-5} \\ \hline O_2 & 138\cdot 10^{-3} & 3.19\cdot 10^{-5} \\ \hline N_2 & 137\cdot 10^{-3} &3.87\cdot 10^{-5} \\ \hline H_2 & 24.8\cdot 10^{-3}& 2.66\cdot 10^{-5} \\ \hline He^4 & 3.46\cdot 10^{-3} & 2.38\cdot 10^{-5} \\ \hline \end{array} These data suggest that the extremely weak (compared to the others) intermolecular interaction is the reason helium has such a low critical temperature.
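The critical-temperature formula and the table can be cross-checked with a few lines of code (the $a$, $b$ values are copied from the table; $R = 8.314\ \mathrm{J\,mol^{-1}\,K^{-1}}$):

```python
R = 8.314  # gas constant, J/(mol K)

def critical_temperature(a, b):
    """Van der Waals critical temperature T_c = 8a / (27 b R), in kelvin."""
    return 8 * a / (27 * b * R)

gases = {  # a in Pa m^6/mol^2, b in m^3/mol (values from the table above)
    "H2O": (554e-3, 3.05e-5),
    "O2":  (138e-3, 3.19e-5),
    "H2":  (24.8e-3, 2.66e-5),
    "He4": (3.46e-3, 2.38e-5),
}

for name, (a, b) in gases.items():
    Tc = critical_temperature(a, b)
    print(f"{name}: T_c = {Tc:.1f} K ({Tc - 273.15:+.1f} C)")
```

The helium row reproduces $T_c \approx 5.2\ \mathrm{K} \approx -268\,^\circ\mathrm{C}$, matching the value quoted earlier.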
{ "source": [ "https://physics.stackexchange.com/questions/257693", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/70392/" ] }
257,694
Is the Oort cloud changing the position of the barycenter of the solar system? Does it affect in a significant way? How significant?
{ "source": [ "https://physics.stackexchange.com/questions/257694", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/112073/" ] }
257,848
We often see films with spinning space station that create artificial gravity by having the astronauts pulled outwards by centrifugal force. I'd like to know if this would really happen, and if so, why is the following scenario not true: Take an astronaut in open space. He doesn't move. Put a big open spinning cylinder around him - surely he still doesn't move. Close the cylinder. I still see no reason for him to be pulled outwards.
Put a stationary astronaut in a small room inside a large spinning cylinder. After an instant the walls of that room will hit him, and suddenly he will have the same velocity as the room. Because of its circular motion, the room accelerates towards the axis of the cylinder, and through the support force from the floor (the floor being the inner surface of the cylinder) the astronaut is accelerated towards the center too. If the room accelerates at $9.81~\rm{ms^{-2}}$ towards the center, this will feel like regular gravity. Note that one cannot feel gravity or acceleration as such (except for tidal forces). The 'weight' one feels is the support force from surfaces. In other words, gravity feels like being constantly pushed by the floor, which accelerates you at the rate of $9.81~\rm{ms^{-2}}$. If you stand, your organs will be pushed down, etc.
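The required spin rate follows from setting the centripetal acceleration $\omega^2 r$ at the rim equal to $g$. A quick sketch (the 100 m radius is a hypothetical station size, not from the question):

```python
import math

def spin_for_gravity(radius, g=9.81):
    """Angular speed (rad/s) and rotation period (s) such that the
    centripetal acceleration omega^2 * r at the rim equals g."""
    omega = math.sqrt(g / radius)
    return omega, 2 * math.pi / omega

# hypothetical station with a 100 m radius:
omega, period = spin_for_gravity(100.0)
print(f"{omega:.3f} rad/s, one revolution every {period:.1f} s")
```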
{ "source": [ "https://physics.stackexchange.com/questions/257848", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/118295/" ] }
257,887
I should say first that I don't believe this is a feasible launch method, otherwise NASA and other space agencies would be using it by now. It's based on this BBC news story Saddam Hussein's Supergun but, luckily this monstrosity was never completed or even fully tested. These giant cylinders are one of the few remaining pieces of a contender for one of the most audacious pieces of engineering ever designed: a “supergun” called Big Babylon, which could have fired satellites into orbit from a 156m-long barrel (512ft) embedded inside a hill. Rather than thinking of the engineering aspects of the gun, what are the physics based reasons why we cannot arrange a series of linear explosions, with a valve type device to prevent blowback down the barrel at each stage and thereby maximising the upward boost to the payload to escape velocity. Again, I would stress that I believe there are physical (rather than engineering) reasons this idea is not used today. I just don't know what they are. Is it as simple as the barrel would need to be unfeasibly long, even using the most powerful explosives we have available today? The Project Harp Launch Gun was tested in the 1960s but never achieved more than half the escape velocity required. Merci beaucoup, Jules Verne (1828-1905). From The Earth To The Moon
Other answers don't mention the fact that no single impulse (e.g, like being fired from a gun) can launch a projectile into orbit. A purely ballistic projectile fired from a gun must either crash back into the planet, or it must escape from the planet altogether. In order to achieve orbit, at least two impulses must be applied to the projectile. The first one (from the gun) launches it into an elliptical trajetory that returns to the surface, and then the second impulse must be applied by a rocket motor to "circularize" the orbit at the moment when the projectile reaches the apogee of the initial ellipse.
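The size of that second impulse can be sketched with the vis-viva equation. Assuming (my numbers, purely for illustration) a gun-launched arc whose apogee reaches 300 km altitude while its perigee stays at the Earth's surface:

```python
import math

MU = 3.986e14   # Earth's gravitational parameter GM, m^3/s^2
R_E = 6.371e6   # Earth's radius, m

def circularization_dv(apogee_altitude):
    """Delta-v needed at apogee to circularize an ellipse whose perigee
    sits at the Earth's surface (the arc a gun launch would produce)."""
    r_a = R_E + apogee_altitude
    a = (R_E + r_a) / 2                           # semi-major axis
    v_apogee = math.sqrt(MU * (2 / r_a - 1 / a))  # vis-viva at apogee
    v_circular = math.sqrt(MU / r_a)              # circular-orbit speed there
    return v_circular - v_apogee

print(f"{circularization_dv(300e3):.0f} m/s")  # burn needed at a 300 km apogee
```

The burn is modest for this geometry, but without it the trajectory re-intersects the surface, which is the answer's point.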
{ "source": [ "https://physics.stackexchange.com/questions/257887", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
258,008
Gravitational wave detectors and particle accelerators have at least one thing in common -- they require long vacuum tubes through which a narrow beam is fired (a laser in the gravitational wave case, a particle beam in the accelerator case). In both cases, the vacuum tube is many orders of magnitude wider than the beam itself. But interestingly, while the LHC's vacuum tubes are 6.3 cm in diameter , LIGO's are about 20 times wider at 1.2 m in diameter . So my question is: why are LIGO's vacuum tubes so wide? This must have been a conscious design consideration, since it means that a much larger volume of vacuum must be maintained, and more material must be used to construct the tube. The main consideration for tube width that I can think of is that you have to be able to aim your beam within the width allotted, but surely on these grounds LIGO could have gotten away with a much narrower tube. (Actually, I have no idea -- is this even the deciding factor for the tube width at the LHC?)
The LIGO beam is 200 W as generated at the input mode cleaner; the beam is then recycled multiple times in the arms, increasing the power density significantly. This requires large optics with near perfect coatings in order to avoid "hot spot/cold spot" damage from various types of possible defects. But there is an additional reason for the large beam size, and I quote from Advanced LIGO , section 2.1: " In order to reduce test mass thermal noise, the beam size on the test masses is made as large as practical so that it averages over more of the mirror surface. The dominant noise mechanism here is mechanical loss in the dielectric mirror coatings, for which the displacement thermal noise scales inversely with beam size. This thermal noise reduction is balanced against increased aperture loss and decreased mode stability with larger beams. " Inspecting LIGO's optics for contaminants. When I was a grad student in the early 1990s, we worked on extremely sensitive, non-destructive techniques based on non-linear optics which could find the coating defects: location and classification. Our detector scanned the surface, and recorded amplitude and phase changes based on the photothermal effect, so I always take a personal interest in the success of LIGO; after all, they helped pay my way! See LIGO's laser here . LIGO Hanford.
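One contribution to the tube diameter is plain diffraction: over a 4 km arm, even an optimally focused Gaussian beam is several centimetres across, and the tube must clear the beam's tails and internal baffling by a wide margin. A rough estimate using the textbook Gaussian-beam formulas (these are not LIGO's actual design numbers):

```python
import math

LAM = 1.064e-6  # Nd:YAG laser wavelength, m
L = 4000.0      # LIGO arm length, m

# Gaussian beam: w(z) = w0 * sqrt(1 + (z/zR)^2), with zR = pi*w0^2/lambda.
# The waist that minimizes the spot size at the mirrors (z = L/2):
w0 = math.sqrt(LAM * L / (2 * math.pi))
zR = math.pi * w0**2 / LAM
w_mirror = w0 * math.sqrt(1 + (L / 2 / zR) ** 2)
print(f"waist {w0 * 100:.1f} cm, spot radius at the mirrors {w_mirror * 100:.1f} cm")
```

A ~4 cm spot radius already implies a beam many centimetres wide; combined with the deliberately enlarged beams described above, a metre-scale tube is not extravagant.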
{ "source": [ "https://physics.stackexchange.com/questions/258008", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/25750/" ] }
258,553
Other than knowing which direction is east and which direction is west, or observing for a sufficient timespan (to determine the direction of motion), is there any way of telling whether what one is seeing is a sunset or a sunrise? A priori it seems not but I was wondering if there are some subtler effects beyond Rayleigh. Note. I just came across an article that mentions that the green flash can occur only at sunset, and provides additional references: Broer, Henk W. Near-horizon celestial phenomena, a study in geometric optics. Acta Appl. Math. 137 (2015), 17–39.
In real life: A sunset is "redder" than a sunrise, which makes people feel more romantic. It's mostly because the atmosphere is warmer in the evening (not because of pollution: the Earth is warmer in the evening simply because it was warmed up during the day). However, there's also a very small contribution of the Doppler shift, one that you could in principle measure accurately. When you're looking to the East, your point on the Earth is moving towards the Sun at a speed of up to 1,500 km/h or so (on the equator). This small velocity still exceeds the radial component of the velocity around the Earth, I think, so if you measure the Doppler shift accurately, you may learn something about the motion. You may also watch where the Sun is moving. If it is setting (dropping, approaching the horizon), it is a sunset, and if it is rising, it is a sunrise. ;-)
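The size of that Doppler effect can be estimated: the fractional shift is just $v/c \sim 10^{-6}$, a tiny but spectroscopically measurable line displacement. (The sodium D line and the ~460 m/s equatorial rotation speed below are my illustrative inputs.)

```python
C = 2.998e8  # speed of light, m/s

def doppler_shift(wavelength_nm, v):
    """First-order Doppler shift (nm) for radial velocity v in m/s
    (positive v = moving towards the source)."""
    return wavelength_nm * v / C

# Earth's equatorial rotation speed is ~460 m/s towards the rising Sun;
# shift of the 589 nm sodium D line:
print(f"{doppler_shift(589.0, 460.0):.2e} nm")
```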
{ "source": [ "https://physics.stackexchange.com/questions/258553", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/100943/" ] }
258,613
While in my physics classes, I've always heard that the simple harmonic motion formulas are inaccurate e.g. In a pendulum, we should use them only when the angles are small; in springs, only when the change of space is small. As far as I know, SHM came from the differential equations of Hooke's law - so, using calculus, it should be really accurate. But why it isn't?
The actual restoring force in a simple pendulum is not proportional to the angle, but to the sine of the angle (i.e. angular acceleration is equal to $-\frac{g\sin(\theta)}{l}$, not $-\frac{g~\theta}{l}$ ). The actual solution to the differential equation for the pendulum is $$\theta (t)= 2\ \mathrm{am}\left(\frac{\sqrt{2 g+l c_1} \left(t+c_2\right)}{2 \sqrt{l}}\bigg|\frac{4g}{2 g+l c_1}\right)$$ Where $c_1$ is the initial angular velocity and $c_2$ is the initial angle. The term following the vertical line is the parameter of the Jacobi amplitude function $\mathrm{am}$, which is a kind of elliptic integral. This is quite different from the customary simplified solution $$\theta(t)=c_1\cos\left(\sqrt{\frac{g}{l}}t+\delta\right)$$ The small angle approximation is only valid to a first order approximation (by Taylor expansion $\sin(\theta)=\theta-\frac{\theta^3}{3!} + O(\theta^5)$). And Hooke's Law itself is inaccurate for large displacements of a spring, which can cause the spring to break or bend.
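One can see how quickly the small-angle approximation degrades by integrating the full equation $\ddot\theta = -(g/l)\sin\theta$ numerically and comparing the period with the SHM value $2\pi\sqrt{l/g}$. A simple Euler-Cromer sketch (the step size and amplitudes are my own choices):

```python
import math

def pendulum_period(theta0, g=9.81, l=1.0, dt=1e-5):
    """Integrate theta'' = -(g/l)*sin(theta) (Euler-Cromer), starting at
    rest at angle theta0; return the full period (4x the time to first
    reach theta = 0)."""
    theta, omega, t = theta0, 0.0, 0.0
    while theta > 0:
        omega -= (g / l) * math.sin(theta) * dt
        theta += omega * dt
        t += dt
    return 4 * t

T_shm = 2 * math.pi * math.sqrt(1.0 / 9.81)  # SHM prediction, ~2.006 s
print(pendulum_period(math.radians(5)))      # almost identical to T_shm
print(pendulum_period(math.radians(90)))     # ~18% longer than T_shm
```

At 5 degrees the two periods agree to a fraction of a percent; at 90 degrees the true period is nearly a fifth longer, which is why the approximation is restricted to small angles.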
{ "source": [ "https://physics.stackexchange.com/questions/258613", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/118678/" ] }
258,706
So recently we were taught in school that tides are formed because the moon 'cancels out' some of the earth's gravity, and so the water rises because of the weaker force. But if water is not compressible, surely any difference in Gravity shouldn't make the water rise?
incompressible : not able to be compressed. compressible : In thermodynamics and fluid mechanics, compressibility is a measure of the relative volume change of a fluid or solid as a response to a pressure (or mean stress) change. So water does not change its volume. The same volume of water can take many shapes. When no other gravitational force except the earth's $1/r$ potential acts on the oceans, the equipotential surface is defined by the single potential, and the oceans settle into the shape of the equipotential. Because the earth is not really a sphere, these equipotential surfaces vary, but the idea is the same. When an opposing $1/r$ potential, such as the moon's potential, is strong enough at time $t$ to add a $-1/r'$ ($r$ from center of the earth, $r'$ from center of moon) potential, then the equipotential form into which the water will settle is disturbed at that time of closest approach. This is what the solution of the problem gives: Figure 2: The Moon's gravity differential field at the surface of the Earth is known (along with another and weaker differential effect due to the Sun) as the Tide Generating Force. This is the primary mechanism driving tidal action, explaining two tidal equipotential bulges, and accounting for two high tides per day. In this figure, the Earth is the central blue circle while the Moon is far off to the right. The outward direction of the arrows on the right and left indicates that where the Moon is overhead (or at the nadir) its perturbing force opposes that between the earth and ocean. So it is the ability of water to change shapes that generates bulges, and the motion of the moon that changes in time moves these bulges with respect to the earth. Please note that there exist also earth tides, i.e. the ground also bulges as far as the elasticity of the solids it is composed of allows it.
Also note that water tides can appear differently in different locations due to the geology of the ocean bottom and the land boundaries, and just because water is a fluid and obeys equations of fluid flow.
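The differential pull described in the figure is tiny: to leading order, the tidal acceleration across the Earth is $2GM_{moon}R_\oplus/d^3$. A quick estimate with standard textbook values:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_MOON = 7.342e22  # Moon mass, kg
D = 3.844e8        # mean Earth-Moon distance, m
R_E = 6.371e6      # Earth radius, m

# leading-order tidal (differential) acceleration across the Earth:
a_tidal = 2 * G * M_MOON * R_E / D**3
print(a_tidal)  # ~1.1e-6 m/s^2, roughly 1e-7 of surface gravity
```

Even this minuscule differential acceleration is enough to reshape the equipotential surface, because the water is free to flow along it.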
{ "source": [ "https://physics.stackexchange.com/questions/258706", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/116303/" ] }
258,737
The LIGO group has a team that periodically produces fake data indicating a possible gravitational wave without informing the analysts. A friend of mine who works on LHC data analysis told me that none of the LHC groups do this. Why does one of these data-heavy projects use blind data injections but not the other?
After they told me about their impressive "LHC Olympics" in which physicists (often hardcore theorists) were reverse engineering a particle physics model from the raw (but fake) LHC data, I proposed the same idea in a circle of physicists at Harvard, including Nima Arkani-Hamed, sometime in 2005, and we worked on those LHC ideas in some detail. We were thinking how amusing it would be to inject some signs of extra dimensions and lots of other things. We were also acknowledging the increase of the excitement that it could bring to the particle physics community. The main reason why this "drill" probably isn't as important for the LHC as it was for LIGO is that particle physicists – experimenters and phenomenologists – are doing lots of similar exercises, anyway, even if they're not told that "it is real (but fake) data from the LHC". Phenomenologists preemptively think about lots of "possible signals" etc. They don't need an extra "training" of the same kind. Moreover, LIGO detects boring noise at almost all times, so if some of this noise is overwritten, LIGO doesn't lose much valuable data. However, even if the LHC is expected to create Standard-Model-like processes all the time, their structure is more complex than just some nameless "noise". So by overwriting the real data with something containing a fake signal, one could really contaminate the data for many analyses. Real work by many people that takes too much time could be useless and it's too much to ask. Here, the difference really is that LIGO was pretty sure that it wouldn't get any real signal around 2010. So the physicists in LIGO didn't have anything of the kind to work on, and not to lose their skills, a "drill" was a good idea. On the other hand, the LHC is analyzing real LHC data from previously untested energies such as 13 TeV and there is a significant probability that they discover something even without injections.
So the injections are not needed – people work hard on interesting, structured, data, anyway. A related difference is that the strength of the LIGO signal builds up quickly during those 0.2 seconds that the black hole merger took. On the other hand, the strength of the LHC signal builds up for a whole year or more. If all the interesting new physics events at the LHC took place too quickly (in a day) and then disappeared, the experimenters could see that something is suspicious. The LHC would need to contaminate the signal in the whole run and it wouldn't know how strong the contamination per unit time of the drill should be. The signal always gets stronger if one records more LHC collisions – but a single event detected by LIGO can't be "strengthened" by such waiting. So the LIGO drill is a well-defined campaign that takes some finite time while the LHC drill would be an "undetermined time" campaign. As CuriousOne basically said, but I will say it differently, there are also many more possible discoveries at the LHC . So inventing one particular "fake signal" could be a very problematic thing – what is the best signal to inject? The LIGO case was very different. The fake 2010 signal was actually a black hole merger extremely similar to the actual 2015-2016 discovery. So there was basically "a single most likely first discovery" – a scenario as unique and specific as a fire in a skyscraper – so a particular drill for that scenario made some sense.
{ "source": [ "https://physics.stackexchange.com/questions/258737", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/92058/" ] }
259,334
As I understand, the energy-time uncertainty principle can't be derived from the generalized uncertainty relation. This is because time is a dynamical variable and not an observable in the same sense momentum is. Every undergraduate QM book I have encountered has given a very rough "proof" of the time-energy uncertainty relation, but not something that is rigorous, or something even remotely close to being rigorous. So, is there an actual proof for it? If so, could someone please provide me a link to it or even provide me with a proof? Keep in mind that I am not looking a proof using quantum mechanical principles, as comments below pointed out. EDIT: All the proofs I have found take the generalized uncertainty relation and say "let $Δτ=σ_q/|dq/dt|$", cf. e.g. this Phys.SE post. But this for me does not suffice as a rigorous proof. People give that Δτ a precise meaning, but the relation is proven just by defining Δτ, so I am just looking for a proof(if there is any) that shows that meaning through mathematics. If no better proof exists, so be it. Then I will be happy with just the proof through which we define that quantity. By defining it in this way, there is room for interpretation, and this shows from the multiple meaning that researchers have given to that quantity (all concerning time of course).
The main problem is, as you say, that time is no operator in quantum mechanics. Hence there is no expectation value and no variance, which implies that you need to state what $\Delta t$ is supposed to mean, before you can write something like $\Delta E \Delta t\geq \hbar$ or similar. Once you define what you mean by $\Delta t$, relations that look similar to uncertainty relations can be derived with all mathematical rigour you want. The definition of $\Delta t$ must of course come from physics. Mostly of course, people see $\Delta t$ not as an uncertainty but as some sort of duration (see for instance the famous natural line widths, for which I'm sure there exist rigorous derivations). For example, you can ask the following questions: Given a signal of temporal length $t$ (it takes $t$ from "no signal" to "signal has completely arrived"), what is the variance of energy/momentum? This can be mapped to the usual uncertainty principle, because the temporal length is just a spread in position space. It is also related to the so-called Hardy uncertainty principle , which is just the Fourier uncertainty principle in disguise and completely rigorous. If you do an energy measurement, can you relate the duration of the measurement and the energy uncertainty of the measurement? This is highly problematic (see e.g. the review here: The time-energy uncertainty relation . Choosing a model of measurement, you can probably derive rigorous bounds, but I don't think a rigorous bound will actually be helpful, because no measurement model probably captures all of what is possible in experiments. You can ask the same question about preparation time and energy uncertainty (see the review). You can ask: given a state $|\psi\rangle$, how long does it take for a state to evolve into an orthogonal state? 
It turns out that there is an uncertainty relation between energy (given from the Hamiltonian of the time evolution) and the duration - this is the Mandelstam-Tamm relation referred to in the other question. This relation can be made rigorous ( this paper here might give such a rigorous derivation, but I cannot access it). other ideas (also see the review)... In other words: You first need to tell me what $\Delta t$ is supposed to mean. Then you have to tell me what $\Delta E$ is supposed to mean (one could argue that this is clear in quantum mechanics). Only then can you meaningfully ask the question of a derivation of an energy-time uncertainty relation. The generalised uncertainty principle does just that, it tells you that the $\Delta$ quantities are variances of operators so you have a well-defined question. The books you are reading seem to only offer physical heuristics of what $\Delta t$ and $\Delta E$ mean in special circumstances - hence a mathematically rigorous derivation is impossible. That's not in itself a problem, though, because heuristics can be very powerful. I'm all in favour of asking for rigorous proofs where the underlying question can be posed in a rigorous manner, but I doubt that's the case here for a universally valid uncertainty relation, because I doubt that a universally valid definition of $\Delta t$ can be given.
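For a two-level system the Mandelstam-Tamm bound can even be checked numerically: an equal superposition of two energy eigenstates first becomes orthogonal to itself at exactly $t = \pi\hbar/(2\Delta E)$, saturating the bound. A sketch in units with $\hbar = 1$ (the energy splitting is an arbitrary choice):

```python
import numpy as np

omega = 2.0                      # energy splitting E1 - E0 (hbar = 1)
dE = omega / 2                   # energy spread of the equal superposition
t = np.linspace(0.0, 2.0, 200001)
# |<psi(0)|psi(t)>| for psi = (|E0> + |E1>)/sqrt(2) is |cos(omega*t/2)|
overlap = np.abs(0.5 * (1.0 + np.exp(-1j * omega * t)))
t_orth = t[np.argmin(overlap)]   # first time the state becomes orthogonal
print(t_orth, np.pi / (2 * dE))  # both ~ pi/2: the bound is saturated
```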
{ "source": [ "https://physics.stackexchange.com/questions/259334", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/75628/" ] }
259,501
How come mineral oil is a better lubricant than water, even though water has a lower viscosity? When two surfaces slide over each other with the gap filled with a fluid, the different layers of the fluid are dragged at different speeds. The very top layer, touching the top metal surface, has the same speed as the surface itself, while the bottommost layer is stationary. The speed in the layers in between is distributed linearly, and there exist friction forces between those layers that slow the movement. Those frictional forces should be reduced, however, if a fluid with a lower viscosity is chosen. How come this is not so? Does it have to do with water's polarity, so that it sticks to surfaces in a different way than oil does?
Your derivation is composed of correct statements and indeed, if something is known to act as a lubricant, we want the viscosity to be as low as possible because the friction will be reduced in this way. For example, honey is a bad lubricant because it's too viscous. However, your derivation isn't the whole story. The second condition is that the two surfaces must stay apart. If you use a lubricant with too low a viscosity, the surfaces will come in contact and the original friction will reappear. So the optimum lubricant is the least viscous liquid that is viscous enough to keep the surfaces apart. Which of them is the optimal one depends on the detailed surfaces and other conditions. For example, there exist situations in which water is a better lubricant than oil – for example when ice slides on ice. Some of the ice melts and the water is why the ice slides so nicely.
{ "source": [ "https://physics.stackexchange.com/questions/259501", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/73010/" ] }
259,732
I have seen many folks doing this Moon shadow experiment, concluding that a shadow can travel from point A on the Moon to point B on the Moon faster than light. What I fail to understand here (and I am sure I am wrong) is that nothing can travel faster than light. I also read about relativity, where Einstein stated that every event happening for you depends on how fast the light gets from where the event is happening to you. This is my particular problem with this Moon shadow experiment: you cannot make a shadow travel faster than light, because in order for the "disappearance" of light to get to the Moon, it has to travel at light speed. Here is one way to understand my point. Suppose we shine a flashlight on the Moon, as the guy in the linked video did, and we move a finger across the face of the flashlight. Before we moved the finger it was casting a shadow at point A on the Moon; after we finish moving our finger, the shadow is at point B on the Moon. When the finger reaches the end of its travel, the light waves that left the flashlight before the finger got there will have to hit point B, and it will take them a second and a half to get there, and another second and a half for us to see that shadow at point B, which makes it 3 seconds + finger-moving time for us to actually see the shadow at point B. That means the shadow traveled (say) across the diameter of the Moon in 3 seconds. So, about 3000 km in ~3 seconds makes about 1000 km per second, which is tiny compared to the speed of light. How can these experiments conclude that a shadow could be made to travel faster than light? What is the obvious clue that I am missing here?
Imaginary things can "travel" faster than light
A shadow or a light spot can seem to travel faster than light, because it's not a particular physical thing, but a series of separate things – separate physical particles emitted at different times and at different locations. Imagine that you have launched a lot of tiny bots into space, each with a very accurate clock and a single LED, spaced out in a straight line with a 1 km distance between each of them. If you program them to blink their LED at particular times – say, the first one blinks at midnight, the second one at midnight + 1 second, the third one at midnight + 2 seconds – then you'd see a spot of light moving at 1 km/s along this line. If you program them to always be on except for a particular moment, arranged in the same manner, then you'd see a "shadow" moving at 1 km/s. If you did the same but set the intervals between your bots lighting up to 1 millisecond instead, you'd see the signal "moving" at 1 000 km/s. If you had them light up with a 1 microsecond difference between neighboring bots, almost at the same time, then you'd see the signal "moving" at 1 000 000 km/s, much faster than the speed of light – but note that nothing is actually moving there; the bots are stationary. The same applies to true shadows – the light around them is reflected off something that's not moving (as much), the reflected photons at each moment are different photons, and the fact that a moment ago the shadow was somewhere much further away – that the "shadow has moved" at above the speed of light – describes only the distance and time between two separate events, not an entity that has moved anywhere.
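The arithmetic above can be sketched in a couple of lines (the numbers are the ones from the bot example; nothing here simulates real optics):

```python
# The blink pattern's apparent speed is just spacing / time-offset between
# neighbouring bots; no physical object moves at that speed.
C = 299_792.458  # speed of light, km/s

def apparent_speed(spacing_km, offset_s):
    """Speed (km/s) at which the light spot appears to sweep along the line."""
    return spacing_km / offset_s

print(apparent_speed(1.0, 1.0))       # 1 km/s for 1 s offsets
print(apparent_speed(1.0, 1e-3))      # ~1 000 km/s for 1 ms offsets
print(apparent_speed(1.0, 1e-6) > C)  # "faster than light", yet nothing moves
```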
{ "source": [ "https://physics.stackexchange.com/questions/259732", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/43177/" ] }
261,246
Electric monopoles do exist, but magnetic monopoles don't. Why? The question is closed because I need to clarify it, but I don't know how I could ask it another way. However, I've received many answers that were appropriate and added something to my knowledge, so I consider it answered.
There is no theoretical reason why magnetic monopoles cannot exist and indeed there are good reasons for supposing that they should exist . It's just that we have never observed one. In the past there have been various experiments to detect magnetic monopoles, though I think everyone has given up on the idea by now. If you're asking why we can't get monopoles out of a magnet that's because the magnetic field of a magnet is built up from the individual magnetic fields of the unpaired electrons in the magnet, and those electrons have a dipole field. There isn't any way to combine the dipole fields of the electrons to create a monopole, though it's possible to make things that look locally approximately like monopoles.
{ "source": [ "https://physics.stackexchange.com/questions/261246", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/118109/" ] }
261,571
What is the farthest object of which we can get a direct, detailed visual image using visible light – one which appears as more than just a dot – and which falls into one of the following categories: planet, satellite, star, asteroid? I think Pluto is the farthest we've imaged visually, using New Horizons. Can the Hubble telescope take detailed images of, say, a star?
To address your last point, there are several stars of which we have been able to resolve images i.e. see the star as more than just a featureless point. There is a list of these stars on Wikipedia (I love that they put the Sun at the top of the list - true but pedantic :-). The farthest away of the stars in the list is Epsilon Aurigae at about 2000 light years, so this probably answers the main point in your question. However there is some ambiguity in your phrase direct visual image . We can detect supernovae in distant galaxies, though they cannot be resolved and appear as a featureless point. I'm guessing you mean to exclude objects like this, in which case Epsilon Aurigae holds the crown.
{ "source": [ "https://physics.stackexchange.com/questions/261571", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/89504/" ] }
261,580
I was able to deduce the following information from the problem: acceleration of police = 2m/s^2 x0 = 50 meters t= 5 seconds final velocity = 0 m/s However, I'm not sure what to do next.
{ "source": [ "https://physics.stackexchange.com/questions/261580", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/120238/" ] }
261,650
If speed is the path traveled in a given time, and that speed is constant, as it is for $c$, why can't light escape a black hole? It may take a long time to happen, but shouldn't there be some light escaping every so often? I'm guessing that one possible reason is that time is infinitely dilated inside a black hole, but wouldn't that require infinite mass? That would contradict measuring black holes in solar masses, which means they don't contain infinite mass. So how can this be?
The speed $c$ that is constant is so when measured locally relative to a freefalling frame (i.e. one for which all points follow spacetime geodesics with respect to the metric $g$). Local means that the frame's extent must be "small" enough that it can be thought of as flat: think of this as zooming in on the spacetime manifold, which is a smooth object, with enough magnification that you can't see any appreciable deviation from Minkowski spacetime (which is the spacetime analogue of flat Euclidean space, which you've probably encountered). In contrast, the speed of light as measured by a distant observer can vary in generally curved spacetime. The wording of your question suggests that you imagine sitting at some point within the horizon, and since your laser pointer's output must squirt out at the ever-constant $c$, and the horizon is only a finite distance above you, it must reach the horizon and leave. But the geometry is not like this everyday thought picture. The point about an event horizon is that it is not in the future of any event inside the horizon. The spacetime distortion from flatness is so severe that even the future branch of lightlike geodesics will not intersect it. You can only reach the horizon from an event within it by travelling backwards in time.
Some Q and A from comments: User PeterA.Schneider asks: "the speed of light as measured by a distant observer can vary in generally curved spacetime": That's the first time I have heard that. You sure? (Considering that essentially all of space time is curved.) Which question user Jan Dvorak eloquently answers: don't worry, it will regain the speed of c once it gets close enough to you - if it does. Its wavelength when it meets you might differ drastically from its wavelength when it left its source, however. I'd like to explain Jan's answer a bit more fully. You infer something's speed by comparing the changes in your spatial and temporal co-ordinates for that object.
Let's begin in special relativity, where at first both observers chart the Universe by Minkowski co-ordinates. The fact that your clock and rulers measure the same intervals differently from what the distant one does doesn't lead to any surprises (at least to someone who has studied SR thoroughly), because there is a unique, well defined transformation that will map your co-ordinates for events to the distant observer's co-ordinates, and contrariwise. That transformation is the (proper, orthochronous) Lorentz transformation, which has the property that $c$ is measured to be the same from both observers' standpoints. In general curved spacetime it is impossible to define a unique transformation between two local frames that would allow us to directly compare measured speeds of things in this way. Let's look at why this is so. Let's re-imagine our scenario above: we're still in Minkowski spacetime with the same physics and doing SR, but with new co-ordinates. At every point in that spacetime, we rotate and boost the "reference" frames a bit, so that nearby points have their reference directions and time intervals slightly different. This is altogether analogous to charting Euclidean 3-space by, say, spherical co-ordinates. Locally, the reference directions (of increasing $r$, $\theta$ and $\phi$) are rotated from the Cartesian ones, and that rotation varies smoothly with position. Now there's a very big infinity of ways to do such a gauge transformation: we can choose directions and unit time intervals any way we like, as long as the variation is smooth and the limiting transformation as the distance between the points shrinks is a Lorentz transformation. So now, how do we compare measured speeds if we were given only these co-ordinates?
Well, we could simply move through space and time along a chosen smooth path, making the little Lorentz transformations between neighboring reference frames and multiplying them all together to get an overall transformation for this path. But we could choose an infinity of smooth paths to do this along. So, if we're given only these co-ordinates, it's not immediately obvious that we wouldn't get a different answer from this procedure if we took a different smooth path between the two points. But in fact the answer is path-independent, because that's what flat means, by definition. We can always make a transformation of our weird co-ordinates back to Minkowski spacetime if and only if the result of our calculation does not depend on the path. The result of so-called parallel transport of a vector around a loop is always the identity transformation. A corollary of this fact is that there is a well defined transformation between the two observers that allows us to compare measured speeds: it doesn't matter whether we compute it along path A or path B between two points; the answer must be the same, since the transformation along one path must undo that along the other to achieve the identity transformation around the loop. Thus, in theory, we can still compute what the other observer would observe locally from afar in our weird co-ordinates. If you've made it this far, then General Relativity is now only a small conceptual step away. In curved spacetime, the transformation wrought on vectors by parallel transport around a loop is in general not the identity transformation. So there is no well defined way of comparing speeds from afar, at least from one's own co-ordinate frame. That's what "curved" means, by definition: nontrivial "holonomy" in parallel transport around closed paths. And this is what people mean when they say the "co-ordinate speed of light can be anything in GR".
But if a distant observer measures the speed of light continually, repeatedly and at regular time intervals, as measured by their clock, in a laboratory they carry with them, and then sends the results to you, all their reports will say that their measurement hasn't changed – even though the reports, sent at regular intervals by their clock, may reach us at wildly varying intervals by our clock. Another analogy that might help you is the $2$-sphere – the surface of what we call a "ball" in everyday language – compared with the plane. On the plane, tangent planes to the plane are everywhere the same vector space: there is an unambiguous way to parallel transport the tangent plane at any point to that at any other point. On the ball, not so. Tangent planes at different points are not the same plane. They are isomorphic as vector spaces, but they are not the same. In particular, there is no well defined universal way of comparing them, or of assigning reference bases at all the points in any patch of finite extent, because, on the sphere, parallel transport of vectors around loops always leads to a change to the vector when it arrives back at the beginning point. Indeed, a sphere has constant curvature, which means that the rotation of the vector wrought by loop parallel transport is proportional to the area enclosed by the loop.
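The closing claim – that parallel transport around a loop on a sphere rotates a vector by the enclosed area – can be checked numerically. This is an illustrative sketch only (discrete transport by repeatedly projecting onto the local tangent plane, a standard approximation), carrying a vector around one octant of the unit sphere, whose area is $\pi/2$:

```python
import math

def arc(a, b, n):
    """Return n+1 points along the great-circle arc from unit vector a to b."""
    ang = math.acos(max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b)))))
    s = math.sin(ang)
    return [tuple(math.sin((1 - i / n) * ang) / s * x + math.sin(i / n * ang) / s * y
                  for x, y in zip(a, b)) for i in range(n + 1)]

def transport(v, path):
    """Discrete parallel transport: at each point, drop v's normal component."""
    for p in path:
        d = sum(vi * pi for vi, pi in zip(v, p))
        w = [vi - d * pi for vi, pi in zip(v, p)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        v = tuple(wi / norm for wi in w)
    return v

A, B, N = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
v0 = (0.0, 0.0, 1.0)                    # tangent vector at A, pointing "north"
v = v0
for a, b in [(A, B), (B, N), (N, A)]:   # around one octant (enclosed area = pi/2)
    v = transport(v, arc(a, b, 2000)[1:])
angle = math.acos(max(-1.0, min(1.0, sum(x * y for x, y in zip(v, v0)))))
print(angle, math.pi / 2)               # the two agree: holonomy = enclosed area
```

The transported vector comes back rotated by $\pi/2$ even though it was never "turned" along the way – the nontrivial holonomy the answer describes.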
{ "source": [ "https://physics.stackexchange.com/questions/261650", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/84895/" ] }
261,958
I'm not experienced in physics yet (if it helps, I've covered as much as acceleration, momentum and energy transfer, plus ionic and covalent bonding in chemistry), but I've heard that the way people compare the destructive force of nuclear weapons by megatonnes or kilotonnes is wrong. This does seem to make some sense, because the energy will turn into a mix of gamma (?) radiation, light radiation, heat radiation and other things, but is there an accurate way to compare nuclear weapons' destructive force? Say I wanted to compare today's weapons to Little Boy.
The so-called TNT equivalent of a nuclear weapon is an unambiguous way of quantifying how much energy is released by the nuclear weapon. There's nothing 'wrong' about it. The only caveat is that the damage caused by, say, Little Boy versus 15 kilotons of TNT would not be identical despite having an equivalent yield (for various practical reasons). Generally, 10-20% of nuclear yield is emitted in the form of ionizing or residual radiation, unlike conventional weapons. Related: effects of nuclear explosions .
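A small sketch of why the measure is unambiguous: a kiloton of TNT equivalent is defined as $4.184\times10^{12}$ J, so any stated yield converts directly to energy. (The 15 kt figure for Little Boy is the commonly quoted estimate, used here only for illustration.)

```python
# "TNT equivalent" is defined, not measured: 1 kiloton = 4.184e12 joules.
J_PER_KILOTON = 4.184e12

def yield_joules(kilotons):
    """Convert a stated nuclear yield to energy in joules."""
    return kilotons * J_PER_KILOTON

little_boy = yield_joules(15)  # ~15 kt, the commonly quoted estimate
print(f"Little Boy: ~{little_boy:.2e} J")
# Per the answer, roughly 10-20% of that emerges as ionizing/residual radiation:
print(f"radiation share: {0.1 * little_boy:.1e} .. {0.2 * little_boy:.1e} J")
```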
{ "source": [ "https://physics.stackexchange.com/questions/261958", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
262,530
In Stanislaw Lem's novel Solaris the planet is able to correct its own trajectory by some unspecified means. Assuming its momentum and angular momentum are conserved (it doesn't eject or absorb any mass), would this be possible (in Newtonian mechanics), and how? If not, can it be proven? The assumption is that the planet orbits a star (or perhaps a binary star) system. Intuitively this seems possible to me. For example, tidal forces result in a planet losing its rotational energy, so it seems possible that by altering its shape, a body could alter at least its rotation speed. My ideas go as follows: assume we have an ideal rod consisting of two connected mass points. The rod rotates and orbits around a central mass. When one of the points moves towards the central body, we extend the rod, getting it closer to the center, thus increasing the overall gravitational force that acts on the rod. When one of the points is getting away from the center, we shrink the rod again, thus decreasing the combined gravitational force. I haven't run any simulations yet, but it seems this principle could work. Update: an even more complex scenario (conserving momentum and angular momentum) would be if the planet ejected a piece of matter and absorbed it again after some time.
If you allow for non-Newtonian gravity (i.e., general relativity), then an extended body can "swim" through spacetime using cyclic deformations. See the 2003 paper "Swimming in Spacetime: Motion by Cyclic Changes in Body Shape" ( Science , vol. 299, p. 1865) and the 2007 paper "Extended-body effects in cosmological spacetimes" ( Classical and Quantum Gravity , vol. 24, p. 5161). Even in Newtonian gravity, it appears to be possible. The second paper above cited "Reactionless orbital propulsion using tether deployment" ( Acta Astronautica , v. 26, p. 307 (1992).) Unfortunately, the paper is paywalled and I can't access the full text; but here's the abstract: A satellite in orbit can propel itself by retracting and deploying a length of the tether, with an expenditure of energy but with no use of on-board reaction mass, as shown by Landis and Hrach in a previous paper. The orbit can be raised, lowered, or the orbital position changed, by reaction against the gravitational gradient. Energy is added to or removed from the orbit by pumping the tether length in the same way as pumping a swing. Examples of tether propulsion in orbit without use of reaction mass are discussed, including: (1) using tether extension to reposition a satellite in orbit without fuel expenditure by extending a mass on the end of a tether; (2) using a tether for eccentricity pumping to add energy to the orbit for boosting and orbital transfer; and (3) length modulation of a spinning tether to transfer angular momentum between the orbit and tether spin, thus allowing changes in orbital angular momentum. If anyone wants to look at the article and edit this answer accordingly with a more detailed summary, feel free. As pointed out by Jules in the comments, the "previous paper" mentioned in the abstract appears to be this one, which is freely available. The idea of "swimming in spacetime" was also discussed on StackExchange here and here.
{ "source": [ "https://physics.stackexchange.com/questions/262530", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/11459/" ] }
262,563
How did a gravitational wave travel from Livingston, Louisiana to Hanford, Washington in 7 milliseconds, when they are separated by 10 milli-light seconds (3002 km)?
The time delay depends on the direction the wave is travelling. If it is travelling along the line connecting Livingston and Hanford then the delay time would indeed be the Livingston-Hanford distance divided by $c$: However suppose the wave was travelling normal to the line connecting the two detectors. In that case the wave would arrive at both of them at exactly the same time and the delay would have been zero: So the delay can be anything from zero up to $d/c$ depending on the direction the wave is travelling. The only real upset would be if the delay was greater than $d/c$ as that would mean the wave was travelling slower than light.
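A quick sketch of the geometry above: with delay $=(d/c)\cos\theta$, the observed 7 ms fixes the angle between the wave's propagation direction and the detector baseline (up to the usual sky-position degeneracies):

```python
import math

d_km = 3002.0               # Livingston-Hanford separation (from the question)
c_km_s = 299_792.458        # speed of light
max_delay = d_km / c_km_s   # ~10 ms: wave travelling along the baseline
measured = 7.0e-3           # observed arrival-time difference, seconds

# delay = (d/c) * cos(theta), where theta is the angle between the wave's
# direction of travel and the line joining the two detectors
theta = math.degrees(math.acos(measured / max_delay))
print(f"max delay {1e3 * max_delay:.2f} ms -> ~{theta:.0f} deg off the baseline")
```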
{ "source": [ "https://physics.stackexchange.com/questions/262563", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/121053/" ] }
262,703
This is a statement (presumably about mass, longevity, and energy output) that many people I've met have heard in school, and it is known in pop culture. However, according to Wikipedia, about 75% of the stars in the universe are red dwarfs, which greatly differ from the sun. I've tried doing a little bit of research and I've found that the sun is "average" only if you exclude all the dwarf stars from your calculations. Is there a good reason why this is done?
Describing the sun as an average star is probably more of a reaction against the idea that there is something unique about it. Obviously there is for us, since it is the star that we happen to be in orbit around, and much closer to than any other star, and hence historically the sun has been considered rather unique. But over the centuries we've discovered that neither the sun nor the earth is the center of the universe, that the stars we see in the night sky are just like our own sun , and that some of them are much brighter and/or much larger (in mass or volume). So saying the sun is an average star is mostly a historical artifact. It is saying that we've discovered that there is nothing particularly unusual about our star compared to any other star in our galaxy. It isn't a claim that the sun is average in any particular mathematical sense. It is using 'average' in the sense of 'typical' or 'unexceptional'. As it happens, it turns out the majority of stars are in fact smaller and less luminous than our sun, so it is somewhat un-average in that sense.
{ "source": [ "https://physics.stackexchange.com/questions/262703", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/21904/" ] }
262,734
In Transmission Lines, does the electric field exist between the two conducting wires or does it exist inside a wire, pushing the flow of electrons? And do all antennas radiate energy by spark?
{ "source": [ "https://physics.stackexchange.com/questions/262734", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/121128/" ] }
262,813
I was looking at eclipse footage and I noticed that it doesn't get noticeably darker until the very end, when suddenly all the light is gone. As the Moon blocks out the Sun, I would expect the brightness to gradually decrease as less of the Sun became visible (e.g. 50% as bright when the Moon covers half of it); however, judging from all the videos out there, this is not true! I took a look at the Wikipedia article, and it says: "Partial eclipses are virtually unnoticeable, as it takes well over 90% coverage to notice any darkening at all." "Even at 99% it would be no darker than civil twilight." Why would this be the case? I also found this diagram that may help illustrate my question. I would expect the graph to be more of a linear shape rather than so exponential!
Human perception is generally logarithmic. For example, the perceived loudness of a sound is measured using decibels, where a decrease of $10 \text{ dB}$ divides the sound intensity by $10$. So if the eclipse were heard instead of seen, "90% coverage" might mean reducing the intensity from $120 \text{ dB}$ to $110 \text{ dB}$ – a small change. Perceived brightness is the same way. There's a huge range of light intensities that we see every day: direct sunlight is ~100 times brighter than indoor lighting, though both look fairly bright to us. So a 90% reduction wouldn't make the sky look dark at all. The shape of the graph "looks like an exponential" because the $y$-axis is the log of the intensity. This is done so the graph somewhat represents "perceived brightness" vs. time.
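A minimal sketch of the logarithmic point: blocking 90% of the light is a single factor-of-10 "decade", the visual analogue of a mere 10 dB drop in sound:

```python
import math

def db_change(fraction_remaining):
    """Level change in decibels when intensity drops to the given fraction."""
    return 10.0 * math.log10(fraction_remaining)

print(db_change(0.10))  # -10 dB: "90% coverage" is one decade, barely dimmer to us
print(db_change(0.01))  # -20 dB: even 99% coverage is only two decades
```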
{ "source": [ "https://physics.stackexchange.com/questions/262813", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/116303/" ] }
262,917
I always read in modern physics textbooks and articles about the need for physical theories to be mathematically self-consistent, which implies that the theories must not produce contradictions or anomalies. For example, string theorists are proud of the fact that string theory itself is self-consistent. But what does this really mean? Physical theories are not a collection of mathematical axioms, they are attempts at describing Nature. I understand the need for rigorous foundations in mathematics, but in physics, we have experiments to decide what is true and what isn't. It's also weird (for me) to say that a theory is mathematically self-consistent. For example, Newton's Laws of Dynamics encode empirically known facts in a mathematical form. What does it mean to say that Newton's Laws are mathematically self-consistent? The same can be said for the Laws of Thermodynamics. There is no logical need for Nature to abhor perpetual motion machines, but from experiments, we believe this is true. Does it make sense to talk about thermodynamics as being self-consistent?
Ever since the time of Newton, physics has been about observing nature, quantifying observations with measurements, and finding a mathematical model that not only describes/maps the measurements but, most importantly, is predictive. To attain this, physics uses a rigorous, self-consistent mathematical model, imposing extra postulates as axioms to relate the measurements to the mathematics, thus picking a subset of the mathematical solutions of the model. The mathematics is self-consistent by the construction of the mathematical model. Its usefulness in physics is that it can predict new phenomena to be measured. If the mathematics were patched together and inconsistent, how could the predictions of the model have any validity? It is the demand for self-consistency that allows a proposed mathematical model to be falsified, by predicting invalid numbers. The consistent Euclidean model of the flat earth is falsified on the globe of the earth, for example. This led to spherical geometry as the model of the globe. The whole research effort of validating the standard model at the LHC, for example, is in the hope that it will be falsified and open a window for new theories.
{ "source": [ "https://physics.stackexchange.com/questions/262917", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/61945/" ] }
263,191
Imagine two massive objects with the same mass ($M$) circling around their center of mass (C.M.). Let's assume that the distance between them is 1 light hour. Don't the two bodies get accelerated and move away from each other, because each feels the gravity of the other as it was 1 hour ago, so that a force component tangent to the direction of motion develops?
They actually don't separate in distance, but Laplace himself asked the same question and made the exact same mistaken assumption that you did, so don't feel bad - you're in good company. The best way to approach the problem is through an expansion in powers of $v/c$, where $v$ is the velocity at which the particles move and $c$ is the speed of light (or, more relevantly for this problem, the speed of gravity). At order $(v/c)^2$, we need to factor in the gravitational radiation emitted because the masses are accelerating. The $o((v/c)^2)$ contribution to the gravitational interaction actually causes the particles to spiral inward as they lose kinetic energy due to the radiated gravitational energy. However, seeing this analytically requires the full machinery of general relativity, and it's a quite complicated calculation - in fact, it can't be solved analytically in exact form. However, the $o(v/c)$ effect is more tractable, and analyzing the problem at this order is enough to clear up your misconception. The particles' acceleration is $o(v^2/c^2)$, so to this order we can neglect it and only consider the effect of the velocity. In effect, we can consider the effect on particle B, assuming that particle A had always had its current instantaneous velocity ${\bf v}$ (i.e. assuming it had always been traveling in a straight line tangent to the orbit circle). It turns out that at this order the problem becomes mathematically equivalent to a familiar problem in special relativity: that of the electric field generated by a charge moving at constant relativistic velocity. It's a standard result from classical E&M (see, for example, Purcell) that in this situation the electric field actually points exactly to the particle's present position, not to its retarded position as one might expect.
(This is because in a relativistic context, the fields generated by a particle depend on its velocity and acceleration as well as its position (as can be seen from Jefimenko's equations). It turns out that in the case where acceleration can be neglected, the fields simply point to the particle's "future projected position" based on its position and velocity at the retarded time, which is also its instantaneous present position.) So it turns out that the $o(v/c)$ correction from general relativity is exactly zero: the acceleration on particle B is $GM/(2R)^2 + o((v/c)^2)$, and its direction is precisely radially inward, despite the apparent violation of causality. See here for more information.
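For reference, the "standard result" invoked here is the field of a charge in uniform motion, written in terms of its present position (found in e.g. Purcell or Griffiths; the notation below is the conventional one, not the answer's):

```latex
\mathbf E \;=\; \frac{q}{4\pi\epsilon_0}\,
   \frac{1-\beta^2}{\left(1-\beta^2\sin^2\theta\right)^{3/2}}\,
   \frac{\hat{\mathbf R}}{R^2},
\qquad \beta = v/c,
```

where $\mathbf R$ runs from the charge's instantaneous (present) position to the field point and $\theta$ is the angle between $\mathbf R$ and the velocity. The field is exactly radial about the present position; the velocity only distorts its magnitude, and that distortion enters at order $\beta^2$ – which is why the $o(v/c)$ correction discussed above vanishes.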
{ "source": [ "https://physics.stackexchange.com/questions/263191", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/98822/" ] }
263,192
At 59:14 in this video , the expectation value of the energy of a harmonic oscillator is $$ \langle E \rangle = \int ||\tilde{\Psi}(p)||^2 \frac{p^2}{2m}\ \mathrm dp + \int ||\Psi(x)||^2\frac{m\omega^2}{2}x^2\ \mathrm dx\tag 1$$ My question is how was this equation reached? This was my attempt:$$\langle E \rangle = \int {\Psi}^*(x)~\hat{E}~\Psi(x)\ \mathrm dx=\int {\Psi}^*(x)\left(-\frac{\hbar ^2}{2m}\frac{\partial ^2 \Psi(x)}{\partial x ^2}\right)\ \mathrm dx + \int {\Psi}^*(x)\frac{m\omega^2}{2}x^2\,\Psi(x)\ \mathrm dx $$ but I can't get any further. How can I reach equation $(1)$?
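One way to bridge the gap between the attempt and equation $(1)$ is Plancherel's theorem: in momentum space the kinetic-energy operator becomes plain multiplication. A sketch, with the usual convention $\tilde{\Psi}(p) = \frac{1}{\sqrt{2\pi\hbar}}\int \Psi(x)\,e^{-ipx/\hbar}\,\mathrm dx$:

Since differentiation in $x$ corresponds to multiplication by $ip/\hbar$ in momentum space,
$$-\frac{\hbar^2}{2m}\frac{\partial^2 \Psi(x)}{\partial x^2} \;\longleftrightarrow\; \frac{p^2}{2m}\,\tilde{\Psi}(p),$$
and Plancherel's theorem then converts the kinetic term of the attempt into the momentum-space integral of $(1)$:
$$\int {\Psi}^*(x)\left(-\frac{\hbar ^2}{2m}\frac{\partial ^2 \Psi(x)}{\partial x ^2}\right)\mathrm dx = \int \tilde{\Psi}^*(p)\,\frac{p^2}{2m}\,\tilde{\Psi}(p)\,\mathrm dp = \int ||\tilde{\Psi}(p)||^2\,\frac{p^2}{2m}\,\mathrm dp.$$
The potential term is already in the desired form, since ${\Psi}^*(x)\frac{m\omega^2}{2}x^2\Psi(x) = ||\Psi(x)||^2\frac{m\omega^2}{2}x^2$.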
{ "source": [ "https://physics.stackexchange.com/questions/263192", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/83421/" ] }
263,197
Context In one of his most popular books, Guards! Guards! , Terry Pratchett makes an entropy joke: "Knowledge equals Power, which equals Energy, which equals Mass." Pratchett is a fantasy comedian and every third phrase in his book is a joke, therefore there is no good reason to believe it. Pratchett uses that madness to make up that a huge library has a tremendous gravitational push. The question I work with computers and mostly with encryption . My work colleagues believe Terry Pratchett's statement because of entropy. On the other hand, I believe it is incorrect, since entropy of information is a different entropy than the one used in thermodynamics. Am I correct? And if so, why do we use the same name ( entropy ) to mean two different things? Also, what would be a good way to explain that these two "entropies" are different things to non-scientists (i.e. people without a chemistry or physics background)?
So Pratchett's quote seems to be about energy, rather than entropy. I suppose you could claim otherwise if you assume "entropy is knowledge," but I think that's exactly backwards: I think that knowledge is a special case of low entropy. But your question is still interesting. The entropy $S$ in thermodynamics is related to the number of indistinguishable states that a system can occupy. If all the indistinguishable states are equally probable, the number of "microstates" associated with a system is $\Omega = \exp(S/k)$, where the constant $k\approx\rm25\,meV/300\,K$ is related to the amount of energy exchanged by thermodynamic systems at different temperatures. The canonical example is a jar of pennies. Suppose I drop 100 coins on the floor. There are 100 ways that I can have one heads-up and the rest tails-up; there are $100\cdot99/2$ ways to have two heads; there are $100\cdot99\cdot98/6$ ways to have three heads; there are about $10^{28}$ ways to have forty heads, and $10^{29}$ ways to have fifty heads. If you drop a jar of pennies you're not going to find them 3% heads up, any more than you're going to get struck by lightning while you're dealing yourself a royal flush: there are just too many other alternatives. The connection to thermodynamics comes when not all of my microstates have the same energy, so that my system can exchange energy with its surroundings by having transitions. For instance, suppose my 100 pennies aren't on the floor of my kitchen, but they're in the floorboard of my pickup truck with the out-of-balance tire. The vibration means that each penny has a chance of flipping over, which will tend to drive the distribution towards 50-50. But if there is some other interaction that makes heads-up more likely than tails-up, then 50-50 isn't where I'll stop. Maybe I have an obsessive passenger who flips over all the tails-up pennies. 
If the shaking and random flipping over is slow enough that he can flip them all, that's effectively "zero temperature"; if the shaking and random flipping is so vigorous that a penny usually flips itself before he corrects the next one, that's "infinite temperature." (This is actually part of the definition of temperature .) The Boltzmann entropy I used above, $$ S_B = k_B \ln \Omega, $$ is exactly the same as the Shannon entropy, $$ S_S = k_S \ln \Omega, $$ except that Shannon's constant is $k_S = \frac1{\ln 2}\rm\,bit$ , so that a system with ten bits of information entropy can be in any one of $\Omega=2^{10}$ states. This is a statement with physical consequences. Suppose that I buy a two-terabyte SD card ( apparently the standard supports this ) and I fill it up with forty hours of video of my guinea pigs turning hay into poop. By reducing the number of possible states of the SD card from $\Omega=2\times2^{40}\times8$ to one, Boltzmann's definition tells me I have reduced the thermodynamic entropy of the card by $\Delta S = 2.6\rm\,meV/K$ . That entropy reduction must be balanced by an equal or larger increase in entropy elsewhere in the universe, and if I do this at room temperature that entropy increase must be accompanied by a heat flow of $\Delta Q = T\Delta S = 0.79\rm\,eV = 10^{-19}\,joule$ . And here we come upon practical, experimental evidence for one difference between information and thermodynamic entropy. Power consumption while writing an SD card is milliwatts or watts, and transferring my forty-hour guinea pig movie will not be a brief operation --- that extra $10^{-19}\rm\,J$ , enough energy to drive a single infrared atomic transition, that I have to pay for knowing every single bit on the SD card is nothing compared to the other costs for running the device. The information entropy is part of, but not nearly all of, the total thermodynamic entropy of a system. 
The thermodynamic entropy includes state information about every atom of every transistor making up every bit, and in any bi-stable system there will be many, many microscopic configurations that correspond to "on" and many, many distinct microscopic configurations that correspond to "off." CuriousOne asks, How comes that the Shannon entropy of the text of a Shakespeare folio doesn't change with temperature? This is because any effective information storage medium must operate at effectively zero temperature --- otherwise bits flip and information is destroyed. For instance, I have a Complete Works of Shakespeare which is about 1 kg of paper and has an information entropy of maybe a few megabytes. This means that when the book was printed there was a minimum extra energy expenditure of $10^{-25}\rm\,J = 1\,\mu eV$ associated with putting those words on the page in that order rather than any others. Knowing what's in the book reduces its entropy. Knowing whether the book is sonnets first or plays first reduces its entropy further. Knowing that "Trip away/Make no stay/Meet me all by break of day" is on page 158 reduces its entropy still further, because if your brain is in the low-entropy state where you know Midsummer Night's Dream you know that it must start on page 140 or 150 or so. And me telling you each of these facts and concomitantly reducing your entropy was associated with an extra energy of some fraction of a nano-eV, totally lost in my brain metabolism, the mechanical energy of my fingers, the operation energy of my computer, the operation energy of my internet connection to the disk at the StackExchange data center where this answer is stored, and so on. 
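Both the penny counting and the conversion between the two entropy unit systems are easy to check numerically. A minimal sketch (Python's exact integer `math.comb` does the counting):

```python
import math

# Multiplicities for k heads among 100 distinguishable coins, as quoted
# in the penny example above:
print(math.comb(100, 1), math.comb(100, 2), math.comb(100, 3))  # 100 4950 161700

# Entropy of the 50-50 macrostate in both unit systems:
Omega = math.comb(100, 50)        # ~1.01e29 equally likely microstates
k_B = 1.380649e-23                # Boltzmann's constant, J/K

S_bits = math.log2(Omega)         # Shannon entropy, ~96 bits
S_thermo = k_B * math.log(Omega)  # Boltzmann entropy, ~9.2e-22 J/K
print(S_bits, S_thermo)
```

The two results describe the same $\Omega$; only the constant out front differs, which is the whole content of the $k_B$-versus-$k_S$ comparison above.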
If I raise the temperature of this Complete Works from 300 K to 301 K, I raise its entropy by $\Delta S = \Delta Q/T = 1\,\rm kJ/K$, which corresponds to many yottabytes of information; however the book is cleverly arranged so that the information that is disorganized doesn't affect the arrangements of the words on the pages. If, however, I try to store an extra megajoule of energy in this book, then somewhere along its path to a temperature of 1300 kelvin it will transform into a pile of ashes. Ashes are high-entropy: it's impossible to distinguish ashes of "Love's Labours Lost" from ashes of "Timon of Athens." The information entropy --- which has been removed from a system where information is stored --- is a tiny subset of the thermodynamic entropy, and you can only reliably store information in parts of a system which are effectively at zero temperature. A monoatomic ideal gas of, say, argon atoms can also be divided into subsystems where the entropy does or does not depend on temperature. Argon atoms have at least three independent ways to store energy: translational motion, electronic excitations, and nuclear excitations. Suppose you have a mole of argon atoms at room temperature. The translational entropy is given by the Sackur-Tetrode equation , and does depend on the temperature. However the Boltzmann factor for the first excited state at 11 eV is $$ \exp\frac{-11\rm\,eV}{k\cdot300\rm\,K} \approx 10^{-185} $$ and so the number of argon atoms in the first (or higher) excited states is exactly zero and there is zero entropy in the electronic excitation sector. The electronic excitation entropy remains exactly zero until the Boltzmann factors for all of the excited states add up to $10^{-24}$, so that there is on average one excited atom; that happens somewhere around the temperature $$ T = \frac{-11\rm\,eV}{k\,\ln 10^{-24}} \approx 2300\rm\,K. 
$$ So as you raise the temperature of your mole of argon from 300 K to 500 K the number of excited atoms in your mole changes from exactly zero to exactly zero, which is a zero-entropy configuration, independent of the temperature, in a purely thermodynamic process. Likewise, even at tens of thousands of kelvin, the entropy stored in the nuclear excitations is zero, because the probability of finding a nucleus in the first excited state around 2 MeV is many orders of magnitude smaller than the reciprocal of the number of atoms in your sample. Likewise, the thermodynamic entropy of the information in my Complete Works of Shakespeare is, if not zero, very low: there are a small number of configurations of text which correspond to a Complete Works of Shakespeare rather than a Lord of the Rings or a Ulysses or a Don Quixote made of the same material with equivalent mass. The information entropy ("Shakespeare's Complete Works fill a few megabytes") tells me the minimum thermodynamic entropy which had to be removed from the system in order to organize it into a Shakespeare's Complete Works, and an associated energy cost with transferring that entropy elsewhere; those costs are tiny compared to the total energy and entropy exchanges involved in printing a book. As long as the temperature of my book stays substantially below 506 kelvin, the probability of any letter in the book spontaneously changing to look like another letter or like an illegible blob is zero, and changes in temperature are reversible. This argument suggests, by the way, that if you want to store information in a quantum-mechanical system you need to store it in the ground state, which the system will occupy at zero temperature; therefore you need to find a system which has multiple degenerate ground states. A ferromagnet has a degenerate ground state: the atoms in the magnet want to align with their neighbors, but the direction which they choose to align is unconstrained. 
Once a ferromagnet has "chosen" an orientation, perhaps with the help of an external aligning field, that direction is stable as long as the temperature is substantially below the Curie temperature --- that is, modest changes in temperature do not cause entropy-increasing fluctuations in the orientation of the magnet. You may be familiar with information-storage mechanisms operating on this principle.
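For completeness, the argon estimate above is a two-line computation. This sketch takes the 11 eV excitation energy used in the answer and treats "one excited atom per mole on average" as the threshold (i.e. a Boltzmann factor of $1/N_A$; the exact onset temperature shifts slightly with the constants chosen):

```python
import math

k = 8.617333e-5     # Boltzmann constant, eV/K
E = 11.0            # first electronic excitation of argon, eV (as quoted above)
N_A = 6.02214076e23

# Boltzmann factor at room temperature: astronomically small, so the
# electronic-excitation sector carries exactly zero entropy at 300 K.
factor_300K = math.exp(-E / (k * 300.0))

# Temperature at which a mole first holds ~one excited atom on average:
T_onset = E / (k * math.log(N_A))   # roughly 2300 K
print(factor_300K, T_onset)
```

The point survives any reasonable choice of constants: the factor at 300 K is so small that "exactly zero excited atoms" is not an approximation in any practical sense, and nothing happens in this sector until the temperature climbs into the thousands of kelvin.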
{ "source": [ "https://physics.stackexchange.com/questions/263197", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/118790/" ] }
263,791
“Every action has an equal and opposite reaction.” I have a query about the word every in that sentence. Suppose we have two objects A and B. A pushes B with a force of 5N and B will push A with a force of 5N. However, won’t the reaction that B has caused on A serve as an action for A, causing A to again push B with its reaction and thus making a total of 10N? (And then, of course, B will also apply a force of 10N on A.)
The way we are all taught Newton's Laws (by reciting them like mantras as children) is unfortunate because the traditional wording is misleading in many ways. A big problem (though not the only one) with the traditional wording of both Newton's second and third laws is that they incorrectly suggest cause and effect (and hence imply a chain of events, as you put it). Newton's second law, for example, suggests that a force 'causes' an acceleration, implying it happens first . It doesn't. The force and the acceleration occur jointly and concurrently, despite the persistent misconception and stubborn illusion of a temporal sequence. But let's not get distracted with the second law right now, because you are understandably perplexed by the third ... Again, the wording of the third law suggests that an 'action' happens first and then it 'causes' a 'reaction'. If this were literally true, you'd have every right to cry infinite regress ! The truth is, the forces occur jointly and simultaneously, and are not the causes of each other. If you want a better way to think about it, you can hardly do better than the way Newton himself came up with the third law. He argued for it as follows: Suppose you had a system of two objects interacting with each other, with no external forces acting on the system. Then you should be able to consider that system as a 'whole' if you want to, and from that perspective the system as a whole must not accelerate as it has no net force acting on it. But this can only be the case if the two objects making up the system have equal and opposite forces between them (i.e. all internal forces of the system must cancel out). Do you see how this argument does not involve any 'causal sequence' or 'chain' of forces? It is just an observation about what must be the case in order for Newton's force-based scheme to work consistently. Not convinced? Let me try an analogy. You and your friend each have a certain amount of money. 
You buy something from your friend. Your balance goes down and your friend's goes up. Was there a time-delayed causal sequence here? Nope. Your balance decreased (as you handed over the money) at the same moment your friend's balance increased. Looking at the system as a whole, we know that since no money flowed into or out of the system during the transaction, the net change in balance must be zero. Every payment entails a receipt and every receipt entails a payment, but, despite the illusion, there is no sequence (much less a perpetual one!). Note: You could also translate this argument into the language of momentum conservation, but I have tried to answer the question in the same language in which you phrased it. UPDATE: The 'infinite regress' problem highlighted here is not the only confusion that arises when we use the suggestive language of 'action' and 'reaction'. I've identified two other problems this language causes along with my proposed solution here .
{ "source": [ "https://physics.stackexchange.com/questions/263791", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/82377/" ] }
264,029
The Wikipedia article on momentum defines momentum as in classical mechanics: … momentum is the product of the mass and velocity of an object. However, an electromagnetic field has momentum, which is how solar sails work . I would not suppose that this is a product of the ‘mass’ and ‘velocity’ of the field. So, what is momentum, conceptually?
Momentum / energy are the conserved Noether charges that correspond, by dint of Noether's Theorem, to the invariance of the Lagrangian description of a system with respect to translation. Whenever a physical system's Lagrangian is invariant under a continuous transformation (e.g. shift of spatial / temporal origin, rotation of co-ordinates), there must be a conserved quantity, called the Noether charge for that transformation. We then define the conserved charges for spatial and temporal translation as momentum and energy, respectively; angular momentum is the conserved Noether charge corresponding to invariance of a Lagrangian with respect to rotation of co-ordinates. One can derive the more usual expressions for these quantities from a Lagrangian formulation of Newtonian mechanics. When Maxwell's equations and electromagnetism are included in a Lagrangian formulation, we find that there are still invariances under the above continuous transformations, and so we need to broaden our definitions of momentum to include those of the electromagnetic field. User ACuriousMind writes: I think it would be good to point out that the notion of "canonical momentum" in Hamiltonian mechanics need not coincide with this one (as is the case for e.g. a particle coupled to the electromagnetic field) When applied to the EM field, we use a field theoretic version of Noether's theorem and the Lagrangian is a spacetime integral of a Lagrangian density; the Noether currents for a free EM field are the components of the stress-energy tensor $T$ and the resultant conservation laws $T_\mu{}^\nu{}_{,\,\nu}=0$ follow from equating the divergence to nought. This includes Poynting's theorem - the postulated statement of conservation of energy (see my answer to this question here) and the conservation of electromagnetic momentum (see the Wiki article). 
On the other hand, the Lagrangian $T-U$ describing the motion of a lone particle in the EM field is $L = \tfrac{1}{2}m \left( \vec{v} \cdot \vec{v} \right) - qV + q\vec{A} \cdot \vec{v}$, yielding for the canonical momentum conjugate to co-ordinate $x$ the expression $p_x=\partial L/\partial \dot{x} = m\,v_x+q\,A_x$; likewise for $y$ and $z$ with $\dot{x}=v_x$. A subtle point here is that the "potential" $U$ is no longer the potential energy, but a generalized "velocity dependent potential" $q\,V-\vec{v}\cdot\vec{A}$. These canonical momentums are not in general conserved; they describe the evolution of the particle's motion under the action of the Lorentz force and, moreover, are gauge dependent (meaning, amongst other things, that they do not correspond to measurable quantities). However, when one includes the densities of the four-force on non-EM "matter" in the electromagnetic Lagrangian density, the Euler-Lagrange equations lead to Maxwell's equations in the presence of sources and all the momentums, EM and those of the matter, sum to give conserved quantities. Also note that the term "canonical momentum" can and often does speak about any variable conjugate to a generalized co-ordinate in an abstract Euler-Lagrange formulation of any system evolution description (be it mechanical, electromagnetic, or even a nonphysical financial system) and whether or not the "momentum" corresponds in the slightest to the mechanical notion of momentum or whether or not the quantity be conserved. It's simply a name for something that mathematically looks like a momentum in classical Hamiltonian and Lagrangian mechanics, i.e. "conjugate" to a generalized co-ordinate $x$ in the sense of $\dot{p} = -\frac{\partial H}{\partial x}$ in a Hamiltonian formulation or $p = \frac{\partial L}{\partial \dot{x}}$ in a Lagrangian setting. Even some financial analysts talk about canonical momentum when Euler-Lagrange formulations of financial systems are used! 
They are (as far as my poor physicist's mind can fathom) simply talking about variables conjugate to the generalized co-ordinates for the Black-Scholes model. Beware, they are coming to a national economy near you soon, if they are not there already!
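The canonical-momentum expression $p_x = \partial L/\partial \dot x = m\,v_x + q\,A_x$ quoted above can be checked symbolically. A sketch with SymPy, restricted to a 1-D slice for brevity (the function names `V` and `A` are just placeholders for the scalar potential and the $x$-component of the vector potential):

```python
import sympy as sp

t = sp.symbols('t')
m, q = sp.symbols('m q', positive=True)
x = sp.Function('x')(t)
v = sp.Derivative(x, t)           # \dot{x}

V = sp.Function('V')(x)           # scalar potential evaluated on the path
A = sp.Function('A')(x)           # x-component of the vector potential

# L = T - U with the velocity-dependent potential U = qV - v A q:
L = sp.Rational(1, 2) * m * v**2 - q * V + q * A * v

p = sp.diff(L, v)                 # canonical momentum conjugate to x
print(sp.simplify(p - (m * v + q * A)))   # 0
```

Differentiating with respect to `v` (a `Derivative` object) is the standard SymPy idiom for Euler-Lagrange manipulations; the result confirms that the canonical momentum picks up the gauge-dependent $qA_x$ term on top of the kinetic $mv_x$.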
{ "source": [ "https://physics.stackexchange.com/questions/264029", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/45504/" ] }
264,165
In the movies, arrows shot into the air rotate so that during the descent, the arrow head hits ground first. What is the source of this angular momentum? It would seem that the bow string exerts a force directly in line with the arrow.
The same reason objects which are heavier on one side tend to fall with the heavy side down: the tip of the arrow is denser than the rest of the arrow. The center of gravity is offset from its geometrical center, so the air drag, which is based on the object's geometry, causes a torque together with gravity as seen in this very professional picture of a body falling straight down.
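A toy estimate (all numbers made up) makes the geometry concrete: the heavy tip pulls the center of mass forward of the geometric center, where, to a crude approximation, the drag acts. Drag applied behind the center of mass torques the tail back into line, so the arrow falls tip-first:

```python
# Toy arrow: a point-mass tip plus a uniform shaft. Positions measured
# from the tip along the shaft.
L = 0.8                        # arrow length, m (illustrative)
m_tip, m_shaft = 0.020, 0.010  # tip mass vs shaft mass, kg (illustrative)
x_tip = 0.0
x_shaft_cm = L / 2             # the uniform shaft's own CM is at its middle

x_cm = (m_tip * x_tip + m_shaft * x_shaft_cm) / (m_tip + m_shaft)
x_cp = L / 2                   # crude center of pressure: geometric center

print(x_cm, x_cp)              # CM (~0.13 m) sits ahead of the CP (0.4 m)
```

The gap `x_cp - x_cm` is the moment arm for the drag force; the larger it is (heavier tip, or fletching pushing the center of pressure rearward), the stronger the restoring torque.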
{ "source": [ "https://physics.stackexchange.com/questions/264165", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/121760/" ] }