source_id | question | response | metadata
---|---|---|---|
55,833 | I recently stumbled upon this interesting image of a wasp, floating on water: Assuming this isn't photoshopped, I have a couple of questions: Why do you see its image like that (what's the physical explanation; I'm sure there is an interesting one)? Why are the spots surrounding the wasp's legs circle shaped? Would they be square shaped if the 'feet' of the wasp were square shaped? | The mechanism at play here is surface tension. The cohesion of the molecules of water is what keeps the wasp afloat. Due to this cohesion, the surface of the water behaves like a membrane and is curved inwards. The light rays that would be refracted from the perfectly flat surface are now incident at an altered angle and are reflected or refracted at altered angles around the tips of the wasp's legs, hence the shadow. The curvature of the surface traces the shape of the object that touches the surface. As you can see though, the area of the shadow is much larger than the tips of the wasp's legs. The shape of the shadow will therefore always be rounded. The radius of curvature can also be calculated, given the difference in pressure between air and water. | {
"source": [
"https://physics.stackexchange.com/questions/55833",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/20533/"
]
} |
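The "surface tension keeps the wasp afloat" claim is easy to sanity-check numerically: the maximum vertical force the surface can exert is of order the surface tension times the total contact-line length. A minimal sketch (every number below is an assumed, illustrative value, not a measurement from the image):

```python
# Order-of-magnitude check: maximum vertical force from surface tension
# is roughly sigma * (total contact-line length). All numbers below are
# assumed, illustrative values, not measurements from the image.
sigma = 0.072        # N/m, surface tension of clean water near room temperature
leg_contact = 3e-3   # m, assumed contact-line length per leg
n_legs = 6
mass = 1e-4          # kg, an assumed ~100 mg wasp
g = 9.81             # m/s^2

max_support = n_legs * sigma * leg_contact   # upper bound on supporting force, N
weight = mass * g                            # N
print(max_support, weight)                   # support exceeds weight: floating is plausible
```

With these assumed numbers the available force exceeds the weight, which is consistent with the answer's picture of a curved, membrane-like surface carrying the insect.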
56,202 | Almost every time somebody talks about atoms, at some point they mention something like this: If we remove the spaces between the atoms and atomic components, we can fit the solar system in a thimble. Or: If we remove the spaces between the electrons and the nucleus, we can fit the universe in a baseball. I know that atoms are mostly empty, but I've always thought that those statements were exaggerating. Can we really fit the solar system in a thimble (if we remove all those spaces)? | Neither of those statements is true. It's an easy approximation to make: a neutron star has all of that 'space' removed from between nucleons --- so we just need to know how big a neutron star of mass equal to the solar system would be. Well, the only significant mass is the sun (Jupiter is about 1% the mass of the sun---negligible). If the sun were compressed into a neutron star, it would have a radius of about 10 km (to within 50% or so). See this nice talk about neutron star radii. Solar System: So if you removed all of the 'space' between all of the atoms in the solar system, it would form an object about the size of a large town, or a small city. Universe: Obviously collecting all of this mass would yield a black hole. But conceptually, using some very rough order-of-magnitude estimates for the universe as a whole, if we assume there are roughly $10^{20}$ - $10^{22}$ stars (I think this estimate is quite high), then the radius would be something like 1-100 Mpc, or roughly 10 million to 1 billion light-years. Edit (To address the question itself): The concept of 'size' for atoms and nuclei has some grey area, but you can define the size of a hydrogen atom, or the size of a proton/neutron, to an order of magnitude. A statement like 'remove all of the empty space' is much more nebulous, and ends up being largely a question of semantics. 
A more accurate way of phrasing the underlying concept being addressed might be something like: 'Roughly how much volume do the dominant mass-constituents of matter take up?' The idea is that nucleons (protons and/or neutrons) are 2000 times more massive than electrons, and thus the important component of mass. At the same time, the electrons are the dominant volume-fillers (by a factor of about $10^{15}$). | {
"source": [
"https://physics.stackexchange.com/questions/56202",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/12878/"
]
} |
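The ~10 km figure can be reproduced with a one-line estimate. A sketch assuming a uniform sphere at a rough nuclear density of $4\times10^{17}\,\mathrm{kg/m^3}$ (this deliberately ignores the real neutron-star equation of state, which is why the answer quotes 50% accuracy):

```python
# Back-of-envelope version of the ~10 km figure: compress one solar mass
# to a rough nuclear density (this ignores the real equation of state).
import math

M_sun = 1.99e30        # kg
rho_nuclear = 4e17     # kg/m^3, rough density of nuclear matter
R = (3 * M_sun / (4 * math.pi * rho_nuclear)) ** (1 / 3)
print(R / 1e3)         # radius in km: on the order of 10 km
```

Solving $M = \frac{4}{3}\pi\rho R^3$ for $R$ gives roughly 10 km, the "large town" scale the answer describes.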
56,293 | I'm a huge fan of mathematical physics and I know what the formal definitions of those two areas are, I've seen them. But I still get completely baffled when someone asks me to explain it simply. The difference is obvious to me, but I just can't seem to put it into words in a satisfying enough manner. So I'm asking you to help me... What is the essential difference between theoretical physics and mathematical physics ? or if you prefer the rephrased version... What was the motivation for introducing the name "mathematical physics" as a separate entity? (It doesn't matter if you're one of those people who don't like labels, the thing is that we do have these two separated very often in academia.) | Theoretical physics is the field that develops theories about how nature operates. It is fundamentally physics, in that the ultimate goal is to describe reality. It is informed by experiment, and at the same time it extends the results of experiments, making predictions about what has not been physically tested. This is accomplished using the language of mathematics, and often the demands of theoretical physicists force mathematicians to extend this language in new directions, but it is not concerned with developing the language of math. Theoretical physicists are, among other things, physicists who are very well-versed in math (which is not to say other physicists are not - please don't hurt me). Mathematical physics , on the other hand, is a branch of mathematics. It explores relations between abstract concepts, proves certain results contingent upon certain hypotheses, and establishes an interlinked set of tools that can be used to study anything that happens to match the relations and hypotheses on hand. This branch in particular is motivated by the theories used in physics. 
It may seek to prove certain truths that were simply assumed by physicists, or carefully delineate the conditions under which certain theories hold, or even provide generally applicable tools to physicists, who can in turn apply them to nature. Mathematical physicists are mathematicians who are intrigued/inspired by physics. One could say that mathematical physics is concerned with the internal, logical consistency of physical theories, while theoretical physics is concerned with finding the right model to describe the world around us. Very roughly , one might diagram these things as shown below.
$$ \text{Mathematical physics} \Longleftrightarrow \text{Theoretical physics} \Longleftrightarrow \text{Experimental physics} $$ | {
"source": [
"https://physics.stackexchange.com/questions/56293",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
56,338 | In the textbook, it said a wave in the form $y(x, t) = A\cos(\omega t + \beta x + \varphi)$ propagates along the negative $x$ direction and $y(x, t) = A\cos(\omega t - \beta x + \varphi)$ propagates along the positive $x$ direction. This statement looks really confusing because when it says the wave is propagating along the $\pm x$ direction, to my understanding, we can drop the time term and ignore the initial phase $\varphi$ while analyzing the direction, i.e. $y(x, 0) = A\cos(\pm\beta x)$; however, because of the symmetry of the cosine function, $\cos(\beta x)\equiv \cos(-\beta x)$, so how can we determine the direction of propagation from that? I know my reasoning must be incorrect but I don't know how to determine the direction. So if we don't go over the math, how can we figure out the direction of propagation from a physical point of view? Why does $-\beta x$ correspond to propagation in the positive $x$ direction and not the opposite? | For a particular section of the wave which is moving in any direction, the phase must be constant. So, if the equation says $y(x,t) = A\cos(\omega t + \beta x + \phi)$, the term inside the cosine must be constant. Hence, if time increases, $x$ must decrease to make that happen. That makes the section of the wave under consideration, and hence the wave itself, move in the negative direction. The opposite happens when the equation says $y(x,t) = A\cos(\omega t - \beta x + \phi)$. If $t$ increases, $x$ must increase to make up for it. That makes a wave moving in the positive direction. The basic idea: for a moving wave, you consider a particular part of it, and it moves. This means that the same $y$ would be found at other $x$ for other $t$, and if you change $t$, you need to change $x$ accordingly. Hope that helps! | {
"source": [
"https://physics.stackexchange.com/questions/56338",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/16191/"
]
} |
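The constant-phase argument is easy to check numerically: track a crest of $y = \cos(\omega t - \beta x)$ on a grid and watch it move toward positive $x$ at the phase velocity $\omega/\beta$. A small sketch (the parameters $\omega = 2$, $\beta = 1$ are arbitrary illustrative choices):

```python
# Track a crest of y = cos(w t - b x) on a grid; it moves toward +x
# at the phase velocity w / b. Parameters are arbitrary illustrative choices.
import numpy as np

w, b = 2.0, 1.0
x = np.linspace(0.0, 20.0, 4001)

def crest(t):
    """Position of the highest point of the wave at time t."""
    return x[np.argmax(np.cos(w * t - b * x))]

x0, x1 = crest(0.0), crest(1.0)
print(x0, x1)   # the crest has moved in the +x direction
```

Flipping the sign to $\cos(\omega t + \beta x)$ and rerunning shows the crest moving the other way, which is exactly the "keep the phase constant" bookkeeping in the answer.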
56,496 | One of the great unsolved problems in physics is turbulence but I'm not too clear what the mystery is. Does it mean that the Navier-Stokes equations don't have any turbulent phenomena even if we solve it computationally? Or does it mean we simply don't have a closed-form solution to turbulent phenomena? | Turbulence is indeed an unsolved problem both in physics and mathematics. Whether it is the "greatest" might be argued, but for lack of good metrics it will probably remain arguable for a long time. For why it is an unsolved problem from a mathematical point of view, read Terry Tao (Fields medal) here. For why it is an unsolved problem from a physical point of view, read Ruelle and Takens here. The difficulty lies in the fact that if you take a dissipative fluid system and begin to perturb it, for example by injecting energy, its states will qualitatively change.
Above some critical value the behaviour will begin to be more and more irregular and unpredictable.
What is called turbulence is precisely those states where the flow is irregular.
However, as this transition to turbulence depends on the constituents and parameters of the system and leads to very different states, there exists so far no general physical theory of turbulence.
Ruelle and Takens attempt to establish a general theory, but their proposal is not accepted by everybody. So, in answer to exactly your questions: yes, solving Navier-Stokes numerically leads to irregular solutions that look like turbulence; no, it is not possible to solve Navier-Stokes numerically by DNS on a large enough scale with a high enough resolution to be sure that the computed numbers converge to a solution of N-S.
A well known example of this inability is weather forecasting - the scale is too large, the resolution is too low, and the accuracy of the computed solution decays extremely fast. This doesn't prevent establishing empirical formulas valid for certain fluids in a certain range of parameters at small spatial scales (e.g. meters) - typically air or water at very high Reynolds numbers. These formulas allow one, for example, to design water pumping systems, but they are far from explaining anything about Navier-Stokes and chaotic regimes in general. While it is known that numerical solutions of turbulence will always become inaccurate beyond a certain time, it is unknown whether the future states of a turbulent system obey a computable probability distribution. This is certainly a mystery. | {
"source": [
"https://physics.stackexchange.com/questions/56496",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/21600/"
]
} |
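The rapid loss of forecast accuracy mentioned above can be demonstrated, not for Navier-Stokes itself, but for the Lorenz system, a drastically truncated convection model. A sketch (the parameters are the classic chaotic ones; the crude fixed-step Euler integrator is for illustration only):

```python
# Two Lorenz trajectories started 1e-9 apart (classic chaotic parameters;
# the crude fixed-step Euler integrator is for illustration only).
import numpy as np

def lorenz_step(s, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])
for _ in range(40000):                    # integrate to t = 40
    a, b = lorenz_step(a), lorenz_step(b)

print(np.linalg.norm(a - b))              # compare with the initial 1e-9 separation
```

A perturbation far below any realistic measurement error grows by many orders of magnitude, which is the sensitive dependence that makes long-range weather prediction hopeless regardless of computing power.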
56,833 | Why does the base of this slinky not fall immediately to gravity? My guess is tension in the springs is a force > mass*gravity but even then it is dumbfounding. | What an awesome question! By the way, as far as I know, the original video is here for those interested. One key to understanding this is the following fact from classical mechanics that is a version of Newton's second law for systems of particles: The net external force acting on a system of particles equals the total mass $M$ of the system times the acceleration of its center of mass $$
\mathbf F_{\mathrm{ext},\mathrm{net}} = M\mathbf a_\mathrm{cm}
$$
In the case of the slinky, which we can model as a system of many particles, the net external force on the system is simply the weight of the slinky. This is just given by its mass multiplied by $\mathbf g$, the acceleration due to gravity, so from the statement above, we get
$$
M\mathbf g = M\mathbf a_\mathrm{cm}
$$
so it follows that
$$
\mathbf a_\mathrm{cm} = \mathbf g
$$
In other words we have shown that The center of mass of the slinky must move as if it is a particle falling under the influence of gravity. However, there is nothing requiring that the individual particles in the system must move as though they are each falling freely under influence of gravity. This is the case because there are interactions between the particles that affect their motion in addition to the force due to gravity. In particular, there is tension in the slinky, as you point out. You are absolutely correct that the bottom of the slinky does not move because the tension of the rest of the slinky pulling up balances the force due to gravity pulling down until the moment that the slinky is fully compressed and the whole thing falls with the acceleration due to gravity. Regardless, the center of mass is moving as though it is freely falling the whole time. By the way, there are some nice comments about this experiment from the angle of wave propagation on physics.SE user @Mark Eichenlaub's blog which can be found here . | {
"source": [
"https://physics.stackexchange.com/questions/56833",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/21963/"
]
} |
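The center-of-mass argument can be illustrated with a toy simulation: a chain of point masses joined by zero-natural-length springs stands in for the slinky (all parameters below are assumed, illustrative values). Released from its hanging equilibrium, the center of mass drops by $\frac{1}{2}gt^2$ while the bottom mass stays essentially still:

```python
# Mass-spring-chain toy model of the falling slinky (all parameters are
# assumed, illustrative values). Downward is positive; x[0] is the top mass.
import numpy as np

N, m, k, g = 20, 0.1, 50.0, 9.8
dt, steps = 1e-4, 500                      # simulate t = 0.05 s

# Hanging equilibrium for zero-natural-length springs:
# spring i carries the weight of the N - i masses below it.
x = np.zeros(N)
for i in range(1, N):
    x[i] = x[i - 1] + (N - i) * m * g / k
x0, v = x.copy(), np.zeros(N)

for _ in range(steps):                     # symplectic Euler
    f = np.full(N, m * g)                  # gravity on every mass
    ext = x[1:] - x[:-1]                   # spring extensions
    f[:-1] += k * ext                      # spring pulls the upper mass down...
    f[1:] -= k * ext                       # ...and the lower mass up (Newton III)
    v += f / m * dt
    x += v * dt

t = steps * dt
com_drop = np.mean(x - x0)
print(com_drop, 0.5 * g * t**2)            # center of mass is in free fall
print(x[-1] - x0[-1])                      # bottom mass has barely moved
```

Because the spring forces are applied in equal and opposite pairs, they cancel exactly in the center-of-mass sum, so the simulation reproduces $\mathbf a_\mathrm{cm} = \mathbf g$ while the tension at the bottom keeps balancing gravity until the collapse wave arrives.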
56,851 | In the second chapter of Peskin and Schroeder, An Introduction to Quantum Field Theory, it is said that the action is invariant if the Lagrangian density changes by a four-divergence.
But if we calculate any change in the Lagrangian density, we observe that under the condition that the equations of motion are satisfied, it only changes by a four-divergence term. If ${\cal L}(x) $ changes to $ {\cal L}(x) + \alpha \partial_\mu J^{\mu} (x) $ then the action is invariant. But isn't this only in the case of extremization of the action to obtain the Euler-Lagrange equations? Comparing this to $ \delta {\cal L}$ $$ \alpha \delta {\cal L} = \frac{\partial {\cal L}}{\partial \phi} (\alpha \delta \phi) + \frac{\partial {\cal L}}{\partial \partial_{\mu}\phi} \partial_{\mu}(\alpha \delta \phi) $$ $$= \alpha \partial_\mu \left(\frac{\partial {\cal L}}{\partial \partial_{\mu}\phi} \delta \phi \right) + \alpha \left[ \frac{\partial {\cal L}}{\partial \phi} - \partial_\mu \left(\frac{\partial {\cal L}}{\partial \partial_{\mu}\phi} \right) \right] \delta \phi. $$ The second term vanishes once the equations of motion are applied. Doesn't this imply that Noether's current itself is zero, rather than its derivative? That is: $$J^{\mu} (x) = \frac{\partial {\cal L}}{\partial \partial_{\mu}\phi} \delta \phi .$$ I add that my doubt is why changing ${\cal L}$ by a four-divergence term leads to invariance of the action globally, when that idea itself was derived while extremizing the action, which I assume is a local extremization and not a global one. | Here's what I perceive to be a mathematically and logically precise presentation of the theorem, let me know if this helps. Mathematical Preliminaries First let me introduce some precise notation so that we don't encounter any issues with "infinitesimals" etc. Given a field $\phi$ , let $\hat\phi(\alpha, x)$ denote a smooth one-parameter family of fields for which $\hat \phi(0, x) = \phi(x)$ . We call this family a deformation of $\phi$ (in a previous version I called this a "flow"). Then we can define the variation of $\phi$ under this deformation as the first order approximation to the change in $\phi$ as follows: Definition 1. 
(Variation of field) $$
\delta\phi(x) = \frac{\partial\hat\phi}{\partial\alpha}(0,x)
$$ This definition then implies the following expansion $$
\hat\phi(\alpha, x) = \phi(x) + \alpha\delta\phi(x) + \mathcal O(\alpha^2)
$$ which makes contact with the notation in many physics books like Peskin and Schroeder. Note: In my notation, $\delta\phi$ is NOT an "infinitesimal", it's the coefficient of the parameter $\alpha$ in the first order change in the field under the deformation. I prefer to write things this way because I find that it leads to a lot less confusion. Next, we define the variation of the Lagrangian under the deformation as the coefficient of the change in $\mathcal L$ to first order in $\alpha$ ; Definition 2. (Variation of Lagrangian density) $$
\delta\mathcal L(\phi(x), \partial_\mu\phi(x)) = \frac{\partial}{\partial\alpha}\mathcal L(\hat\phi(\alpha, x), \partial_\mu\hat\phi(\alpha, x))\Big|_{\alpha=0}
$$ Given these definitions, I'll leave it to you to show Lemma 1. For any variation of the fields $\phi$ , the variation of the Lagrangian density satisfies \begin{align}
\delta\mathcal L
&= \left(\frac{\partial \mathcal L}{\partial\phi} - \partial_\mu\frac{\partial\mathcal L}{\partial(\partial_\mu\phi)}\right)\delta\phi + \partial_\mu K^\mu,\qquad K^\mu = \frac{\partial\mathcal L}{\partial(\partial_\mu\phi)}\delta\phi
\end{align} You'll need to use (1) The chain rule for partial differentiation, (2) the fact $\delta(\partial_\mu\phi) = \partial_\mu\delta\phi$ which can be proven from the above definition of $\delta\phi$ and (3) the product rule for partial differentiation. Noether's theorem in steps Let a particular flow $\hat\phi(\alpha, x)$ be given. Assume that for this particular deformation , there exists some vector field $J^\mu\neq K^\mu$ such that $$
\delta\mathcal L = \partial_\mu J^\mu
$$ Notice, that for any field $\phi$ that satisfies the equation of motion , Lemma 1 tells us that $$
\delta \mathcal L = \partial_\mu K^\mu
$$ Define a vector field $j^\mu$ by $$
j^\mu = K^\mu - J^\mu
$$ Notice that for any field $\phi$ satisfying the equations of motion steps 2+ 3 + 4 imply $$
\partial_\mu j^\mu = 0
$$ Q.E.D. Important Notes!!! If you follow the logic carefully, you'll see that $\delta \mathcal L = \partial_\mu K^\mu$ only along the equations of motion . Also, part of the hypothesis of the theorem was that we found a $J^\mu$ that is not equal to $K^\mu$ for which $\delta\mathcal L = \partial_\mu J^\mu$ . This ensures that $j^\mu$ defined in the end is not identically zero ! In order to find such a $J^\mu$ , you should not be using the equations of motion. You should be applying the given deformation to the field and seeing what happens to it to first order in the "deformation parameter" $\alpha$ . Addendum. 2020-07-02 (Free scalar field example.) A concrete example helps clarify the theorem and the remarks made afterward. Consider a single real scalar field $\phi:\mathbb R^{1,3}\to\mathbb R$ . Let $m\in\mathbb R$ and $\xi\in\mathbb R^{1,3}$ , and consider the following Lagrangian density and deformation (often called spacetime translation): $$
\mathcal L(\phi, \partial_\mu\phi) = \frac{1}{2}\partial_\mu\phi\partial^\mu\phi - \frac{1}{2}m^2\phi^2, \qquad \hat\phi(\alpha, x) = \phi(x + \alpha\xi)
$$ Computation using the definition of $\delta\mathcal L$ (plug the deformed field into $\mathcal L$ , take the derivative with respect to $\alpha$ , and set $\alpha = 0$ at the end) but without ever invoking the equation of motion (Klein-Gordon equation) for the field gives $$
\delta \mathcal L = \partial_\mu(\xi^\nu\delta^\mu_\nu \mathcal L), \qquad \frac{\partial\mathcal L}{\partial(\partial_\mu\phi)}\delta\phi = \xi^\nu\partial_\nu\phi\partial^\mu\phi
$$ It follows that $$
J^\mu = \xi^\nu\delta^\mu_\nu \mathcal L, \qquad K^\mu = \xi^\nu\partial_\nu\phi\partial^\mu\phi
$$ and therefore $$
j^\mu = \xi^\nu(\partial_\nu\phi\partial^\mu\phi -\delta^\mu_\nu\mathcal L)
$$ If e.g. one chooses $\tau > 0$ and sets $\xi = (\tau, 0, 0, 0)$ , then the deformation is time translation, and conservation of $j^\mu$ yields conservation of the Hamiltonian density associated with $\mathcal L$ as the reader can check. Suppose, instead, that in the process of computing $\delta \mathcal L$ , one were to further invoke the following equation of motion which is simply the Euler-Lagrange equation for the Lagrangian density $\mathcal L$ : $$
\partial^\mu\partial_\mu\phi = -m^2\phi,
$$ Then one finds that $$
\delta\mathcal L = \partial_\mu(\xi^\nu\partial_\nu\phi\partial^\mu\phi)
$$ so $J^\mu = K^\mu$ and therefore $j^\mu = 0$ , which is uninformative. | {
"source": [
"https://physics.stackexchange.com/questions/56851",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/17123/"
]
} |
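The free scalar field example can be verified symbolically. A sketch using sympy, restricted to 1+1 dimensions with metric signature $(+,-)$ for brevity: the divergence of the time-translation current reduces to $\partial_t\phi$ times the Klein-Gordon equation, and so vanishes on-shell.

```python
# Symbolic check, in 1+1 dimensions with metric (+,-), that the Noether
# current for time translation is conserved on-shell (sketch using sympy).
import sympy as sp

t, x, m = sp.symbols('t x m')
phi = sp.Function('phi')(t, x)

# Free scalar Lagrangian density
L = (sp.diff(phi, t)**2 - sp.diff(phi, x)**2) / 2 - m**2 * phi**2 / 2

# j^mu = partial^mu(phi) * partial_t(phi) - delta^mu_t * L  (xi = time translation)
j_t = sp.diff(phi, t)**2 - L
j_x = -sp.diff(phi, x) * sp.diff(phi, t)      # raised spatial index flips the sign

div = sp.diff(j_t, t) + sp.diff(j_x, x)
eom = sp.diff(phi, t, 2) - sp.diff(phi, x, 2) + m**2 * phi  # Klein-Gordon = 0

# The divergence is exactly (d phi/dt) times the equation of motion,
# so it vanishes when the field is on-shell:
print(sp.simplify(div - sp.diff(phi, t) * eom))   # 0
```

Note that $j^t$ is just the Hamiltonian density, consistent with the remark that time translation yields energy conservation.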
56,892 | Did Hilbert publish general relativity field equation before Einstein? | 1915 On November 25, nearly ten years after the foundation of special relativity, Einstein submitted his paper The Field Equations of Gravitation for publication, which gave the correct field equations for the general theory of relativity (or general relativity for short). Actually, the German mathematician David Hilbert submitted an article containing the correct field equations for general relativity five days before Einstein. Hilbert never claimed priority for this theory. [Bold mine.] The Official Web Site of the Nobel Prize Edit 1. But... Many have claimed that in 1915 Hilbert discovered the correct field equations for general relativity before Einstein but never claimed priority. The article [11] however, shows that this view is in error. In this paper the authors show convincingly that Hilbert submitted his article on 20 November 1915, five days before Einstein submitted his article containing the correct field equations. Einstein's article appeared on 2 December 1915 but the proofs of Hilbert's paper (dated 6 December 1915) do not contain the field equations. As the authors of [11] write:- In the printed version of his paper, Hilbert added a reference to Einstein's conclusive paper and a concession to the latter's priority: "The differential equations of gravitation that result are, as it seems to me, in agreement with the magnificent theory of general relativity established by Einstein in his later papers". If Hilbert had only altered the dateline to read "submitted on 20 November 1915, revised on [any date after 2 December 1915, the date of Einstein's conclusive paper]," no later priority question would have arisen. [11] L Corry, J Renn and J Stachel, Belated Decision in the Hilbert-Einstein Priority Dispute, Science 278 (14 November, 1997). Source Edit 2. Haha, butbut... :) Source Edit 3. Roundup. Recent controversy, raised by a much publicized 1997 reading of Hilbert's proof-sheets of his article of November 1915, is also discussed [on pp. 11-13; presumed included in this answer]. Einstein and Hilbert had the moral strength and wisdom - after a month of intense competition, from which, in a final account, everybody (including science itself) profited - to avoid a lifelong priority dispute (something in which Leibniz and Newton failed). It would be a shame to subsequent generations of scientists and historians of science to try to undo their achievement. Einstein and Hilbert: The Creation of General Relativity | {
"source": [
"https://physics.stackexchange.com/questions/56892",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/21108/"
]
} |
57,057 | Why can any fermion be written as a combination of two Majorana fermions? Is there any physical meaning in it? Why can Majorana fermions be used for topological quantum computation? | I put an extra answer, since I believe Jeremy's first question is still unanswered. The previous answer is clear, pedagogical and correct. The discussion is really interesting, too. Thanks to Nanophys and Heidar for this. To answer directly Jeremy's question: you can ALWAYS construct a representation of your favorite fermion modes in terms of Majorana modes. I'm using the convention "modes" since I'm a condensed matter physicist. I never work with particles, only with quasi-particles. Perhaps it is better to talk about modes. So the unitary transformation from fermion modes created by $c^{\dagger}$ and destroyed by the operator $c$ to Majorana modes is
$$
c=\dfrac{\gamma_{1}+\mathbf{i}\gamma_{2}}{\sqrt{2}}\;\text{and}\;c{}^{\dagger}=\dfrac{\gamma_{1}-\mathbf{i}\gamma_{2}}{\sqrt{2}}
$$
or equivalently
$$
\gamma_{1}=\dfrac{c+c{}^{\dagger}}{\sqrt{2}}\;\text{and}\;\gamma_{2}=\dfrac{c-c{}^{\dagger}}{\mathbf{i}\sqrt{2}}
$$
and this transformation is always allowed, being unitary. Having done this, you just changed the basis of your Hamiltonian. The quasi-particles
associated with the $\gamma_{i}$'s modes verify $\gamma{}_{i}^{\dagger}=\gamma_{i}$,
a fermionic anticommutation relation $\left\{ \gamma_{i},\gamma_{j}\right\} =\delta_{ij}$,
but they are not particles at all. A simple way to see this is to try to construct a number operator with them (if we cannot count the particles, are they particles? I guess not). We would guess $\gamma^{\dagger}\gamma$ is a good one. This is not true, since $\gamma^{\dagger}\gamma=\gamma^{2}=\tfrac{1}{2}$ is always the same constant... The only correct number operator is $c^{\dagger}c=\tfrac{1}{2}+\mathbf{i}\gamma_{1}\gamma_{2}$.
To verify that the Majorana modes are anyons, you should braid them
(know their exchange statistic) -- I do not want to say much about
that, Heidar made all the interesting remarks about this point. I
will come back later to the fact that there are always $2$ Majorana
modes associated to $1$ fermionic ($c{}^{\dagger}c$) one. Most has
been already said by Nanophys, except an important point I will discuss
later, when discussing the delocalization of the Majorana mode. I
would like to finish this paragraph by saying that the Majorana construction
is no more than the usual construction for boson: $x=\left(a+a{}^{\dagger}\right)/\sqrt{2}$
and $p=\left(a-a{}^{\dagger}\right)/\mathbf{i}\sqrt{2}$: only $x^{2}+p^{2} \propto a^{\dagger} a$
(with proper dimension constants) is an excitation number. Majorana
modes share a lot of properties with the $p$ and $x$ representation
of quantum mechanics (symplectic structure, among others). The next question is the following: are there some situations when
the $\gamma_{1}$ and $\gamma_{2}$ are the natural excitations of
the system ? Well, the answer is complicated, both yes and no. Yes, because Majorana operators describe the correct excitations of
some topological condensed matter realisation, like the $p$-wave
superconductivity (among a lot of others, but let me concentrate on this specific one, which I know better). No, because these modes are not excitations at all! They are zero-energy modes, which is not the definition of an excitation. Indeed,
they describe the different possible vacuum realisations of an emergent
vacuum (emergent in the sense that superconductivity is not a natural
situation, it's a condensate of interacting electrons (say)). As pointed out in the discussion associated to the previous answer ,
the normal terminology for these pseudo-excitations is zero-energy mode. That's what they are: energy modes at zero energy, in the middle of
the (superconducting) gap. Note also that in condensed matter, the
gap provides the entire protection of the Majorana-mode, there is
no other protection in a sense. Some people believe there is a kind
of delocalization of the Majorana, which is true (I will come to that
in a moment). But the delocalization comes along with the gap in fact:
there is no allowed propagation below the gap energy. So the Majorana modes are necessarily localized because they lie at zero energy, in
the middle of the gap. More words about the delocalization now -- as I promised. Because
one needs two Majorana modes $\gamma_{1}$ and $\gamma_{2}$ to each
regular fermionic $c{}^{\dagger}c$ one, any two associated Majorana
modes combine to create a regular fermion. So the most important challenge
is to find delocalized Majorana modes ! That's the famous
Kitaev proposal arXiv:cond-mat/0010440 -- he said unpaired Majorana instead of delocalised, since delocalization comes for free once again. At the
end of a topological wire (for me, a $p$-wave superconducting wire)
there will be two zero-energy modes, exponentially decaying in space
since they lie at the middle of the gap. These zero-energy modes can
be written as $\gamma_{1}$ and $\gamma_{2}$ and they verify $\gamma{}_{i}^{\dagger}=\gamma_{i}$
each ! To conclude, an actual vivid question, still open: there are a lot
of pseudo-excitations at zero-energy (in the middle of the gap). The
only difference between Majorana modes and the other pseudo-excitations
is the definition of the Majorana $\gamma^{\dagger}=\gamma$, the
other ones are regular fermions. How to detect for sure the Majorana
pseudo-excitation (zero-energy mode) in the jungle of the other ones? | {
"source": [
"https://physics.stackexchange.com/questions/57057",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/11831/"
]
} |
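The algebra of the Majorana construction can be checked directly with 2×2 matrices for a single fermionic mode (note the factors of $\tfrac{1}{2}$ that come with the $1/\sqrt{2}$ normalization used in the answer):

```python
# Matrix check of the Majorana construction for a single fermion mode,
# using the 1/sqrt(2) normalization convention from the answer above.
import numpy as np

c = np.array([[0, 1], [0, 0]], dtype=complex)   # annihilation, basis (|0>, |1>)
cd = c.conj().T                                 # creation operator

g1 = (c + cd) / np.sqrt(2)
g2 = (c - cd) / (1j * np.sqrt(2))

id2 = np.eye(2)
assert np.allclose(g1, g1.conj().T) and np.allclose(g2, g2.conj().T)  # self-adjoint
assert np.allclose(g1 @ g2 + g2 @ g1, 0 * id2)   # {g1, g2} = 0
assert np.allclose(g1 @ g1 + g1 @ g1, id2)       # {g1, g1} = 1
# gamma^dagger gamma is a constant, so it cannot count anything:
assert np.allclose(g1.conj().T @ g1, id2 / 2)
# the genuine number operator:
assert np.allclose(cd @ c, id2 / 2 + 1j * (g1 @ g2))
print("all identities hold")
```

This makes the "cannot count the particles" point concrete: $\gamma^\dagger\gamma$ is proportional to the identity, while $\tfrac{1}{2} + \mathbf{i}\gamma_1\gamma_2$ reproduces the occupation numbers 0 and 1.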
57,060 | When studying classical mechanics using the Euler-Lagrange equations for the first time, my initial impression was that the Lagrangian was something that needed to be determined through integration of the Euler-Lagrange equations$^1$. Of course, now I know it's something that's a given for a mechanical system, and we integrate it via the Euler-Lagrange equations to get the evolution of the coordinates of the mechanical system as a function of time. But are there alternative uses for the Euler-Lagrange equations where the Lagrangian isn't known before hand? To all the teachers out there, for god's sake, explain right at the start that the Lagrangian is a functions known before hand for a system. | I put an extra answer, since I believe the first Jeremy's question is still unanswered. The previous answer is clear, pedagogical and correct. The discussion is really interesting, too. Thanks to Nanophys and Heidar for this. To answer directly Jeremy's question: you can ALWAYS construct a representation of your favorite fermions modes in term of Majorana's modes. I'm using the convention "modes" since I'm a condensed matter physicist. I never work with particles, only with quasi-particles. Perhaps better to talk about mode. So the unitary transformation from fermion modes created by $c^{\dagger}$ and destroyed by the operator $c$ to Majorana modes is
$$
c=\dfrac{\gamma_{1}+\mathbf{i}\gamma_{2}}{\sqrt{2}}\;\text{and}\;c{}^{\dagger}=\dfrac{\gamma_{1}-\mathbf{i}\gamma_{2}}{\sqrt{2}}
$$
or equivalently
$$
\gamma_{1}=\dfrac{c+c{}^{\dagger}}{\sqrt{2}}\;\text{and}\;\gamma_{2}=\dfrac{c-c{}^{\dagger}}{\mathbf{i}\sqrt{2}}
$$
and this transformation is always allowed, being unitary. Having doing
this, you just changed the basis of your Hamiltonian. The quasi-particles
associated with the $\gamma_{i}$'s modes verify $\gamma{}_{i}^{\dagger}=\gamma_{i}$,
a fermionic anticommutation relation $\left\{ \gamma_{i},\gamma_{j}\right\} =\delta_{ij}$,
but they are not particle at all. A simple way to see this is to try
to construct a number operator with them (if we can not count the
particles, are they particles ? I guess no.). We would guess $\gamma{}^{\dagger}\gamma$
is a good one. This is not true, since $\gamma{}^{\dagger}\gamma=\gamma^{2}=1$
is always $1$... The only correct number operator is $c{}^{\dagger}c=\left(1-\mathbf{i}\gamma_{1}\gamma_{2}\right)$.
To verify that the Majorana modes are anyons, you should braid them
(know their exchange statistic) -- I do not want to say much about
that, Heidar made all the interesting remarks about this point. I
will come back later to the fact that there are always $2$ Majorana
modes associated to $1$ fermionic ($c{}^{\dagger}c$) one. Most has
been already said by Nanophys, except an important point I will discuss
later, when discussing the delocalization of the Majorana mode. I
would like to finnish this paragraph saying that the Majorana construction
is no more than the usual construction for boson: $x=\left(a+a{}^{\dagger}\right)/\sqrt{2}$
and $p=\left(a-a{}^{\dagger}\right)/\mathbf{i}\sqrt{2}$: only $x^{2}+p^{2} \propto a^{\dagger} a$
(with proper dimension constants) is an excitation number. Majorana
modes share a lot of properties with the $p$ and $x$ representation
of quantum mechanics (simplectic structure among other). The next question is the following: are there some situations when
the $\gamma_{1}$ and $\gamma_{2}$ are the natural excitations of
the system ? Well, the answer is complicated, both yes and no. Yes, because Majorana operators describe the correct excitations of
some topological condensed matter realisation, like the $p$-wave
superconductivity (among a lot of others, but let me concentrate on this specific one, which I know better). No, because these modes are not excitations at all! They are zero
energy modes, which is not the definition of an excitation. Indeed,
they describe the different possible vacuum realisations of an emergent
vacuum (emergent in the sense that superconductivity is not a natural
situation; it's a condensate of interacting electrons, say). As pointed out in the discussion associated with the previous answer,
the standard terminology for these pseudo-excitations is zero-energy mode.
That's what they are: energy modes at zero energy, in the middle of
the (superconducting) gap. Note also that in condensed matter, the
gap provides the entire protection of the Majorana-mode, there is
no other protection in a sense. Some people believe there is a kind
of delocalization of the Majorana, which is true (I will come to that
in a moment). But the delocalization comes along with the gap in fact:
there is no allowed propagation below the gap energy. So the Majorana
modes are necessarily localized because they lie at zero energy, in
the middle of the gap. More words about the delocalization now -- as I promised. Because
one needs two Majorana modes $\gamma_{1}$ and $\gamma_{2}$ for each
regular fermionic $c{}^{\dagger}c$ one, any two associated Majorana
modes combine to create a regular fermion. So the most important challenge
is to find delocalized Majorana modes ! That's the famous
Kitaev proposal arXiv:cond-mat/0010440 -- he said unpaired Majorana instead of delocalised, since delocalization comes for free once again. At the
end of a topological wire (for me, a $p$-wave superconducting wire)
there will be two zero-energy modes, exponentially decaying in space
since they lie at the middle of the gap. These zero-energy modes can
be written as $\gamma_{1}$ and $\gamma_{2}$ and they verify $\gamma{}_{i}^{\dagger}=\gamma_{i}$
each! To conclude, here is a vivid question, still open: there are a lot
of pseudo-excitations at zero-energy (in the middle of the gap). The
only difference between Majorana modes and the other pseudo-excitations
is the definition of the Majorana $\gamma^{\dagger}=\gamma$, the
other ones are regular fermions. How can one detect for sure the Majorana
pseudo-excitation (zero-energy mode) in the jungle of all the other ones? | {
"source": [
"https://physics.stackexchange.com/questions/57060",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4485/"
]
} |
57,129 | Don't be a $\frac{d^3x}{dt^3}$ What does it all mean? | It means don't be a jerk. The third derivative of position (i.e. the change in acceleration) is called " jerk ", though it's a little used quantity. It's called jerk because a changing acceleration is felt as a "jerk" in that direction. | {
"source": [
"https://physics.stackexchange.com/questions/57129",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
57,140 | electrons on earth moving with us, due to rotation of earth, revolution of earth, sun and our galaxy right? Then, why is there no magnetic field around a piece of copper wire? | {
"source": [
"https://physics.stackexchange.com/questions/57140",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/22098/"
]
} |
57,228 | My knowledge on this particular field of physics is very sketchy, but I frequently hear of a theoretical "graviton", the quantum of the gravitational field. So I guess most physicists' assumption is that gravity can be described by a QFT? But I find this weird, because gravity seems so incredibly different from the other forces (yes, I know "weirdness" isn't any sort of scientific deduction principle). For relative strengths: Strong force: $10^{38}$ Electromagnetic force: $10^{36}$ Weak force: $10^{25}$ Gravity: $1$ Not only does gravity have a vastly weaker magnitude, it also has a very strange interaction with everything else. Consider the Standard Model interactions: No particle (or field) interacts directly with all other fields. Heck, gluons only barely interact with the rest of them. So why is it then that anything that has energy (e.g. everything that exists ) also has a gravitational interaction? Gravity seems unique in that all particles interact through it. Then there's the whole issue of affecting spacetime. As far as I'm aware, properties such as charge, spin, color, etc. don't affect spacetime (only the energy related to these properties). | The short answer for why gravity is unique is that it is the theory of a massless, spin-2 field. To contrast with the other forces, the strong, weak and electromagnetic forces are all theories of spin-1 particles. Although it's not immediately obvious, this property alone basically fixes all of the essential features of gravity. To begin with, the fact that gravity is mediated by massless particles means that it can give rise to long-range forces. "Long-range" here means that gravitational potential between distant masses goes like $\dfrac{1}{r}$, whereas local interactions most commonly fall off exponentially, something like $\dfrac{e^{-mr}}{r}$, where $m$ is the mass of the force particle (this is known as a Yukawa potential ).
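To make the contrast concrete, here is a small numeric sketch of the two potentials (the mediator mass $m=1$ in arbitrary units is an illustrative choice, not a physical value):

```python
import math

def coulomb(r):
    # massless mediator: long-range 1/r potential
    return 1.0 / r

def yukawa(r, m=1.0):
    # mediator of mass m: exponentially screened e^{-mr}/r potential
    return math.exp(-m * r) / r

# relative suppression of the massive case at increasing separation
for r in (1.0, 5.0, 10.0):
    print(r, yukawa(r) / coulomb(r))
# the ratio is e^{-r}: already ~4.5e-5 by r = 10, while 1/r alone only fell tenfold
```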
Another important feature of massless particles is they must have a gauge symmetry associated with them. Gauge symmetry is important because it leads to conserved quantities. In the case of electromagnetism (a theory of a massless, spin-1 particle), there is also gauge symmetry, and it is known that the conservation of electric charge is a consequence of this symmetry. For gravity, the gauge symmetry puts even stronger constraints on the way gravity interacts: not only does it lead to a conserved "charge" (the stress energy tensor of matter), it actually requires that the gravitational field couple in the same way to all types of matter. So, as you correctly noted, gravity is very unique in that it is required to couple to all other particles and fields. Not only that, but gravity also doesn't care about the electric charge, color charge, spin, or any other property of the things it is interacting with, it only couples to the stress-energy of the field. For people who are familiar with general relativity, this universal coupling of gravity to the stress energy of matter, independent of internal structure, is known as the equivalence principle. A more technical discussion of the fact that massless, spin-2 implies the equivalence principle (which was first derived by Weinberg) is given in the lecture notes found at the bottom of this page. Another consequence of this universal coupling of gravity is that there can only be one type of graviton, i.e. only one massless, spin-2 field that interacts with matter. This is much different from the spin-1 particles, for example the strong force has eight different types of gluons. So since gravity is described by massless, spin-2 particles, it is necessarily the unique force containing massless spin-2 particles. In regards to the geometric viewpoint of gravity, i.e.
how gravity can be seen as causing curvature in spacetime, that property also follows directly (although not obviously) from the massless spin-2 nature of the gravitons. One of the standard books treating this idea is Feynman's Lectures on Gravitation (I think at least the first couple of chapters are available on google books). The viewpoint that Feynman takes is that gravity must couple universally to the stress tensor of all matter, including the stress tensor of the gravitons themselves. This sort of self-interaction basically gives rise to the nonlinearities that one finds in general relativity. Also, the gauge symmetry that was mentioned before gets modified by the self-interactions, and turns into diffeomorphism symmetry found in general relativity (also known as general covariance). All of this analysis comes from assuming that there is a quantum field theoretic description of gravity. It may be concerning that people generally say we don't have a consistent quantum theory of gravity. This is true, however, it can more accurately be stated that we don't have an ultraviolet complete theory of quantum gravity (string theory, loop quantum gravity, asymptotically safe gravity are all proposed candidates for a full theory, among many others). That means that we don't believe that this theory of massless spin-2 particles is valid at very high energies. The cutoff where we think it should break down is around the Planck mass, $M_p \approx 10^{19}$ GeV. These energies would be reached, for example, at the singularity of a black hole, or near the big bang. But in most regions of the universe where such high energies are not present, perturbative quantum general relativity, described in terms of gravitons, is perfectly valid as a low energy effective field theory. Finally, you noted that the extremely weak coupling of gravity compared to the other forces also sets it apart. 
This is known as the hierarchy problem , and to the best of my knowledge it is a major open problem in physics. Regardless, I hope this shows that even hierarchy problem aside, gravity plays a very special role among the forces of nature. | {
"source": [
"https://physics.stackexchange.com/questions/57228",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/960/"
]
} |
58,136 | As implied from the question, does the sun rotate? If so, do other stars not including the sun also rotate? Would there be any consequences if the sun and other stars didn't rotate? Me and my friends have differing views on this, and would like some clarification. Thanks! | Yes, the sun and nearly all other stars do rotate. One can see the rotation of the sun by looking at the motion of sunspots on its surface. Over time, the sunspots will move across the sun's surface - proof of its rotation. Furthermore, the rate of the sun's rotation is not constant throughout the sun; it is higher near the equator and slower near the poles. Other stars rotate as well. To imagine why this would be requires some thought about the creation of a star. A star begins as an enormous cloud of dust and gas. When these clouds form, they always have some rotation - even if this rotation is incredibly small and imperceptible. Gravity, however, begins pulling the cloud together into a smaller and more compact object (a star). The shrinking of the large cloud into a smaller body hugely decreases its moment of inertia, causing its angular velocity to significantly increase by the conservation of angular momentum. (This is much like how a figure skater increases her rate of spinning by pulling in her arms.) Because of this, even the slightest hint of rotation of the large gas cloud is amplified into a rapid spinning once the compact star forms. Rotation is notable in pulsars, which are rapidly rotating neutron stars. Rapidly spinning neutron stars produce magnetic fields, causing electromagnetic radiation (often in the form of X-rays). Beams of the radiation can strike Earth, allowing observatories to observe the rapidly pulsating stars. | {
"source": [
"https://physics.stackexchange.com/questions/58136",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/20168/"
]
} |
58,185 | So, in one science fiction story, that tries to be as realistic as possible apart from a few space magics, humanity has a contingency plan to blow up Jupiter. As in, totally destroy it in one massive nuclear explosion. I'd like to know the effects of such an event. Would it totally wreck the Solar System or would the whole thing be a non-issue? | The mass of Jupiter is about $10^{27}$ kg which, via $E=mc^2$, translates to $10^{44}$ joules. If one turned the planet into thermonuclear fuel in some way and detonated it immediately, about 1% or $10^{42}$ joules would be released. Because the diameter of Jupiter is about 130,000 km, the blast would last at least half a second or so. So we have $10^{42}$ joules per half a second. It's $2\times 10^{42}$ watts. The Sun only releases $4\times 10^{26}$ watts of power, so the blast would be $2\times 10^{16}$ times stronger than the Sun. However, looking at the effects on the Earth, we must realize that Jupiter is about 5 times further from the Earth than the Sun, reducing the energy flux by a factor of $5^2=25$. So the half-second blast seems about $10^{15}$ times stronger than the sunshine. The equilibrium temperature is, because of the $\sigma T^4$ law, about $10^4$ times higher than that from the sunshine, about a million degrees. The Sun warms the Earth by a degree in hours or so. A source that is $10^{15}$ times stronger obviously needs a tiny fraction of a second to reach thousands of degrees and evaporate the matter on the surface. So no doubt about it, the thermonuclear blast of Jupiter would burn and evaporate all nearby sides of all the planets – all of them are comparably far from the ground zero. On the other hand, would the incoming energy be able to evaporate the whole Earth? We would be getting $10^{15}\times 342\times 4\pi \times 6,378,000^2\sim 2\times 10^{32}$ watts for half a second, about $10^{32}$ joules per the blast and per the surface of the Earth. 
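For the record, the chain of estimates up to this point can be reproduced in a few lines (rounded constants; everything here is order-of-magnitude only, following the answer's assumptions of ~1% fusion efficiency and a ~half-second blast):

```python
M_jup = 1.9e27             # kg, mass of Jupiter
c_light = 3.0e8            # m/s
E_rest = M_jup * c_light**2        # ~1.7e44 J via E = mc^2
E_blast = 0.01 * E_rest            # ~1% fusion efficiency -> ~1e42 J
P_blast = E_blast / 0.5            # released over ~0.5 s -> ~3e42 W

L_sun = 4.0e26                     # W, solar luminosity
ratio_source = P_blast / L_sun     # ~1e16 times the Sun's output
ratio_earth = ratio_source / 5**2  # Jupiter is ~5x farther from Earth than the Sun
T_factor = ratio_earth**0.25       # sigma T^4: equilibrium T scales as flux^(1/4)
print(f"flux at Earth ~{ratio_earth:.0e} x sunshine; T boosted ~{T_factor:.0f}x")
```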
The specific heats of materials are comparable to $1,000$ joules per Celsius degree and kilogram so we have $10^{29}$ kilogram-degrees to be heated. Divide it by the Earth mass below $10^{25}$ kg to see that you may still heat the material by tens of thousands of degrees by the incoming light. So I do think that this could evaporate the whole Earth but not the largest planets like Saturn. Needless to say, the Sun itself would be pretty much untouched. Its surface already has 6,000 degrees or so. The strong radiation from Jupiter could bring it to a million of degrees, by the calculation above, but it's the same as the temperature of the interior layers. So the Sun would get destabilized a bit but it would quickly converge back to the Sun we know, I guess. The calculations above are completely unrealistic because at most, one could think about turning Jupiter into a small star that would still burn very slowly and would be far weaker than the Sun. | {
"source": [
"https://physics.stackexchange.com/questions/58185",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/22463/"
]
} |
58,208 | I asked my Dad this once when I was about 14, and he said that no matter how short the amount of time you were exposed to such a great temperature, you would surely die. The conversation went something like: Me: What about a millisecond?
Dad: Of course not.
Me: A nanosecond?
Dad: Nope.
Me: 10^-247 seconds?
Dad: Death. 15 years later I still think that you could survive for some amount of time even though you'd surely get a few layers cooked. Thus, what's the longest period that you could theoretically withstand 1,000,000 degrees F heat and live to tell about it? Let's assume that you are wearing normal clothes (a t-shirt, jeans, sneakers, sunglasses, and a baseball cap), that the air around you is the only thing that is heated, and that after the time is up your friend sprays you with a fire extinguisher and the air magically returns to a nice 73 degrees F. | Let me mention that the air heated to 1 million degrees isn't a gas. It's probably plasma. Just order-of-magnitude estimates. It takes seconds for your exterior skin temperature to drop or increase by 1 deg C if the body-air temperature difference is 10 deg C or so. If it were 1 million deg C, i.e. 100,000 times higher, the heating of the skin would be roughly 100,000 times faster but it seems likely to me that even microseconds would still be fine: that may be the time scale at which you would start to feel burns. However, one would have to analyze the actual behavior of the "hot air", as you call it, namely the plasma. At a million of degrees, it is ionized and emits thermal X-rays. This is a very harmful, ionizing radiation. The plasma itself is ionized, too. These conditions probably lead to the creation of cancer more quickly than you get burned by the heat. So a more relevant calculation than heat answer would probably deal with the number of X-ray photons that you can "sort of non-lethally" absorb from the "air". The thermal radiation is unavoidable and dominant. The first paragraph only discussed heat conduction. But the thermal radiation goes like $\sigma T^4$ so if the temperature is 160 times greater than the Sun's surface temperature, the radiation carries $(160)^4\sim 10^9$ times larger energy flux than if you're sitting next to the Sun's surface. 
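The $\sigma T^4$ step is easy to reproduce (a rough check using ~5800 K for the Sun's surface temperature):

```python
T_air = 1.0e6     # K, temperature of the "air" (really a plasma)
T_sun = 5.8e3     # K, the Sun's surface temperature
flux_ratio = (T_air / T_sun) ** 4   # Stefan-Boltzmann: radiated flux ~ T^4
print(T_air / T_sun, flux_ratio)    # ~170 ("160 times"), flux_ratio ~ 1e9
```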
And the energy fluxes near the Sun's surface are about $10^{5}$ times greater than on a sunny beach, so the thermal radiation is about $10^{14}$ times more intense than the solar radiation. So even if you only view this radiation as a non-carcinogenic heat, the safe time may be reduced to something like $10^{-14}$ seconds. When you realize that the energy counting heavily understates the harmful influence of the X-rays, you will get to a shorter timescale of safety, perhaps $10^{-20}$ seconds. At any rate, I am confident that you will never get to something like $10^{-247}$ seconds. From a practical viewpoint, nothing happens at these extremely short time scales, they're really unphysical. You would probably not absorb a single thermal X-ray photon during such a ludicrously short timescale so such short exposures may be called "totally safe". They're also totally impossible, of course. You can never design a switch that would only "turn something on" for $10^{-247}$ seconds. | {
"source": [
"https://physics.stackexchange.com/questions/58208",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9130/"
]
} |
59,213 | By conservation of energy, the solid is left in a lower energy state following emission of a photon. Clearly absorption and emission balance at thermal equilibrium, however, thermodynamic equilibrium is a statement of the mean behaviour of the system, not a statement that the internal energy is constant on arbitrarily short timescales. The energy has to come from somewhere during emission, and go somewhere during absorption. Energy in a solid can be stored as kinetic and potential energy of electrons and nuclei, either individually or in collective modes such as phonons and plasmons. In thermal equilibrium energy will be stored more or less in various forms depending on the temperature and material. However, even if most of the thermal energy in a particular solid at temperature $T$ is stored in the form of phonons, it could be that phonons primarily interact with light indirectly via electrons, e.g. a phonon excites an electron in a phonon-electron interaction, which can interact with light via the EM field. Given that light is an EM field, it makes sense to me that it is emitted and absorbed by charged particles. The electron-photon interaction is probably dominant for visible and ultraviolet light, given that metals are opaque, while semiconductors and insulators are transparent to (visible and UV) light with energy lower than their bandgap. However, once you get into energies in the IR and below, or X-rays and above, other mechanisms apparently take over. For example, on the high-energy end of the spectrum I've heard that gamma rays can interact directly with nuclear degrees of freedom, which is reasonable considering that gamma rays are emitted during a lot of nuclear reactions. A review of absorption spectroscopy might give clues to important light-matter interactions over a broad range of wavelengths. Whether all of these processes are involved in blackbody emission is a somewhat different question.
What physical processes mediate energy transfer during blackbody emission, and in which energy ranges are the various processes dominant? | This is a fantastic question, and a topic I was very confused about when I first took a class on Radiative Processes . The ultimate answer, as hinted at by @LubošMotl, is anything ---if you start with a 'white-noise' of radiation (i.e. equal amounts of every frequency), it will equilibrate with the medium/material into a black-body distribution because of its thermal properties (see: Kirchhoff's Law , and the Einstein Coefficients ). This is just like if you gave each molecule in a gas the same energy, they would settle to a Boltzmann Distribution . In practice (and hopefully a more satisfying answer), it's generally a combination of line-emission and Bremsstrahlung , with Bremsstrahlung 1 dominating at high temperatures ( $T \gtrsim 10^6 -10^7 K$ ). Lines are produced at myriads of frequencies depending on the substance of interest, and the thermodynamic properties (e.g. temperature). For everyday objects, I think the emission is primarily from molecular-vibrational lines. Individual lines are spread out by numerous thermodynamic broadening effects to cover larger portions of the spectrum. Finally, as per Kirchhoff's law, equilibrated objects can only emit up to the black-body spectrum. In practice, you'll still see emission/absorption lines imprinted, and additional sources of radiation. Let's look at a breakdown of the relevant transitions as a function of energy level:
radio : nuclear magnetic energy levels (also cyclotron emission in the presence of moderate magnetic fields)
microwave : rotational energy levels
infrared : vibrational energy levels (molecules)
visible : electronic (especially outer electron transitions)
ultraviolet : electronic (especially outer/valence electron ejection/combination)
x-ray : electronic (inner electron transitions)
gamma-ray : nuclear transitions
1: Bremsstrahlung (German for 'braking radiation') is radiation produced by the acceleration of charged particles---most often electrons. This can happen between any combination of bound (in atoms) or unbound (free or in plasma) charges. | {
"source": [
"https://physics.stackexchange.com/questions/59213",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/22399/"
]
} |
59,333 | The definitions between on- and off-shell are given in Wikipedia . Why is it so important in QFT to distinguish these two notions? | It's important to distinguish them because on-shell and off-shell are opposite to each other, in a sense. On-shell describes fields that obey the equations of motion and real particles; off-shell describes fields that don't have to obey the equations of motion and virtual particles. On-shell are momenta with $p^2=m^2$ with the right $m^2$ for the given field; off-shell are momenta that don't obey the condition. Amplitudes with external particles that are on-shell, i.e. on-shell amplitudes, express the scattering amplitudes and may be directly used to calculate cross sections etc. Off-shell amplitudes i.e. amplitudes with external off-shell momenta encode much more general correlators. In some theories, i.e. quantum gravity i.e. string theory in the flat space, only on-shell amplitudes for the external particles such as gravitons may be calculated. On the contrary, the analogous quantities to these on-shell amplitudes in the AdS space may be expressed by off-shell correlators in the corresponding CFT. It's always important to know whether the 4-momenta etc. we are attaching are obliged to be on-shell or not, i.e. whether they're on-shell or off-shell. If we substitute off-shell momenta to on-shell-only formulae, we get meaningless or ill-defined results. If we only substitute on-shell momenta to off-shell formulae, we miss a significant portion of the physical information. | {
"source": [
"https://physics.stackexchange.com/questions/59333",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/16689/"
]
} |
59,359 | After going through several forums, I became more confused whether it is DC or AC that is more dangerous. In my text book, it is written that the peak value of AC is greater than that of DC, which is why it tends to be dangerous. Some people in other forums were saying that DC will hold you, since it doesn't have zero crossing like that of AC. Many others also say that our heart tries to beat with the frequency of ac which the heart cannot support leading to people's death. What is the actual thing that matters most? After all, which is more dangerous? AC or DC? | The RMS (root-mean square) value of an AC voltage, which is what is represented as "110 V" or "120 V" or "240 V" is lower than the electricity's peak voltage. Alternating current has a sinusoidal voltage, that's how it alternates. So yes, it's more than it appears, but not by a terrific amount. 120 V RMS turns out to be about 170 V peak-to-ground. I remember hearing once that it is current, not voltage, that is dangerous to the human body. This page describes it well. According to them, if more than 100 mA makes it through your body, AC or DC, you're probably dead. One of the reasons that AC might be considered more dangerous is that it arguably has more ways of getting into your body. Since the voltage alternates, it can cause current to enter and exit your body even without a closed loop, since your body (and what ground it's attached to) has capacitance. DC cannot do that. Also, AC is quite easily stepped up to higher voltages using transformers, while with DC that requires some relatively elaborate electronics. Finally, while your skin has a fairly high resistance to protect you, and the air is also a terrific insulator as long as you're not touching any wires, sometimes the inductance of AC transformers can cause high-voltage sparks that break down the air and I imagine can get through your skin a bit as well. 
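For reference, the RMS-to-peak conversion for a sinusoid is just a factor of $\sqrt{2}$ (a trivial check of the 170 V figure):

```python
import math

v_rms = 120.0                    # the quoted mains voltage is an RMS value
v_peak = v_rms * math.sqrt(2)    # ~169.7 V peak-to-ground
v_pp = 2 * v_peak                # ~339 V peak-to-peak
print(v_peak, v_pp)
```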
Also, like you mentioned, the heart is controlled by electric pulses and repeated pulses of electricity can throw this off quite a bit and cause a heart attack. However, I don't think that this is unique to alternating current. I read once about an unfortunate young man that was learning about electricity and wanted to measure the resistance of his own body. He took a multimeter and set a lead to each thumb. By accident or by stupidity, he punctured both thumbs with the leads, and the small (I imagine it to be 9 V) battery in the multimeter caused a current in his bloodstream, and he died on the spot. So maybe ignorance is more dangerous than either AC or DC. | {
"source": [
"https://physics.stackexchange.com/questions/59359",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/21068/"
]
} |
59,377 | I'm developing an application using accelerometer sensor. I'm not good at physics so forgive me if the question is trivial. If I have 3 values of acceleration: $x$, $y$, $z$, I find acceleration magnitude by taking square root of $x^2+y^2+z^2$, but how do I find it's sign? Example reading: x: -0.010020584 y: 0.010257386 z: -0.04910469 The magnitude will be around 0.05115, but how do I know if it is deceleration or acceleration? | {
"source": [
"https://physics.stackexchange.com/questions/59377",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/22556/"
]
} |
59,469 | When light passes from one medium to another its velocity and wavelength change. Why doesn't frequency change in this phenomenon? | The electric and magnetic fields have to remain continuous at the refractive index boundary. If the frequency changed, the light at each side of the boundary would be continuously changing its relative phase and there would be no way to match the fields. | {
"source": [
"https://physics.stackexchange.com/questions/59469",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/16941/"
]
} |
59,502 | Does gravity slow the speed that light travels? Can we actual measure the time it takes light from the sun to reach us? Is that light delayed as it climbs out of the sun's gravity well? | This is one of those questions that is more subtle than it seems. In GR the velocity of light is only locally equal to $c$, and we (approximately) Schwarzschild observers do see the speed of light change as light moves to or away from a black hole (or any gravity well). Famously, the speed that radially moving light travels falls to zero at the event horizon. So the answer to your first question is that yes gravity does slow the light reaching us from the Sun. To be more precise about this, we can measure the Schwarzschild radius $r$ by measuring the circumference of a circular orbit round the Sun and dividing by 2$\pi$. We can also measure the circumference of the Sun and calculate its radius, and from these values calculate the distance from our position to the Sun's surface. If we do this we'll find the average speed of light over this distance is less than $c$. However suppose we measured the distance to the Sun's surface with a (long) tape measure. We'd get a value bigger than the one calculated in the paragraph above, and if we use this distance to calculate the speed of the light from the Sun we'd get an average speed of $c$. So I suppose the only accurate answer to your question is: it depends . Re your other question, assuming the spacetime around the Sun is described by the Schwarzschild metric, the time dilation at the surface of the Sun is given by: $$ \text{time dilation factor} = \frac{1}{\sqrt{1 - r_s/r}} $$ where $r_s$ is the radius of a black hole with the mass of the Sun and $r$ is the radius of the Sun. The former is about 3,000m and the latter about 700,000,000m so I calculate the time dilation factor to be around 1.000002 and this is too small to measure directly. 
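Plugging the quoted numbers into this formula (a quick check with rounded values, $r_s \approx 3\times10^{3}$ m and $r \approx 7\times10^{8}$ m):

```python
import math

r_s = 3.0e3      # m, Schwarzschild radius for one solar mass
r = 7.0e8        # m, radius of the Sun
dilation = 1.0 / math.sqrt(1.0 - r_s / r)
print(dilation)  # ~1.000002, i.e. about two parts per million
```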
However you can interpret gravitational lensing to be due to changes in the speed of light, and since we can measure the gravitational lensing due to the Sun you can argue we have measured its effect on the speed of light. This isn't really true as what gravitational lensing measures is the spacetime curvature. However the change in the speed of light (measured by a Schwarzschild observer) is an aspect of this. | {
"source": [
"https://physics.stackexchange.com/questions/59502",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/532/"
]
} |
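The numbers quoted at the end of the answer above are easy to check in a few lines of Python; the solar mass and radius used below are standard reference values, not taken from the answer itself.

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg
R_sun = 6.957e8     # solar radius, m

# Schwarzschild radius of a solar-mass black hole: r_s = 2GM/c^2
r_s = 2 * G * M_sun / c**2          # ~2.95e3 m, the "about 3,000 m" in the answer

# Gravitational time dilation factor at the Sun's surface
factor = 1 / math.sqrt(1 - r_s / R_sun)
print(factor)                        # ~1.0000021, i.e. "around 1.000002"
```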
59,513 | Yesterday, I understood what it means to say that the moon is constantly falling (from a lecture by Richard Feynman ). In the picture below there is the moon in green which is orbiting the earth in grey. Now the moon wants to go at a tangent and travel along the arrow coming out of it. Say after one second it arrives at the red disc. Due to gravity it falls down toward the earth and ends up at the blue disc. The amount that it falls makes it reach the orbital path. So the moon is constantly falling into the orbital path, which is what makes it orbit. The trouble I'm having is: shouldn't the amount of "fall" travelled by the moon increase over time? The moon's speed toward the earth accelerates but its tangential velocity is constant. So how can the two velocities stay in balance? This model assumes that the moon will always fall the same distance every second. So is the model wrong or am I missing something? Extra points to whoever explains: how come when you do the calculation that Feynman does in the lecture, to find the acceleration due to gravity on earth's surface, you get half the acceleration you're supposed to get (Feynman says that the acceleration is $16 ~\mathrm{ft}/\mathrm{s}^2$, but it's actually twice that). | What's actually happening is something more like this: Here, $x_0$ and $v_0$ are the initial position and velocity of the moon, $a_0$ is the acceleration experienced by the moon due to gravity at $x_0$, and $\Delta t$ is a small time step. In the absence of gravity, the moon would travel at the constant velocity $v_0$, and would thus move a distance of $v_0 \Delta t$ during the first time step, as shown by the arrow from the green circle to the red one. However, as it moves, the moon is also falling under gravity. 
Thus, the actual distance it travels, assuming the gravitational acceleration stays approximately constant, is $v_0 \Delta t + \frac12 a_0 \Delta t^2$ plus some higher-order terms caused by the change in the acceleration over time, which I'll neglect. However, the moon's velocity is also changing due to gravity. Assuming that the change in the gravitational acceleration is approximately linear, the new velocity of the moon, when it's at the blue circle marking its new position $x_1$ after the first time step, is $v_1 = v_0 + \frac12(a_0 + a_1)\Delta t$. Thus, after the first time step, the moon is no longer moving horizontally towards the gray circle, but again along the circle's tangent towards the spot marked with the second red circle. Over the second time step, the moon again starts off moving towards the next red circle, but falls down to the blue circle due to gravity. In the process, its velocity also changes, so that it's now moving towards the third red circle, and so on. The key thing to note is that, as the moon moves along its circular path, the acceleration due to gravity is always orthogonal to the moon's velocity. Thus, while the moon's velocity vector changes, its magnitude does not. Ps. Of course, the picture I drew and described above, with its discrete time steps, is just an approximation of the true physics, where the position, velocity and acceleration of the moon all change continuously over time. While it is indeed a valid approximation, in the sense that we recover the correct differential equations of motion from it if we take the limit as $\Delta t$ tends towards zero, it's in that sense no more or less valid than any other such approximation, of which there are infinitely many. However, I didn't just pull the particular approximation I showed above out of a hat. I chose it because it actually corresponds to a very nice method of numerically solving such equations of motion, known as the velocity Verlet method.
The neat thing about the Verlet method is that it's a symplectic integrator , meaning that it conserves a quantity approximating the total energy of the system. In particular, this means that, if we use the velocity Verlet approximation to simulate the motion of the moon, it actually will stay in a stable orbit even if the time step is rather large, as it is in the picture above. | {
"source": [
"https://physics.stackexchange.com/questions/59513",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/22570/"
]
} |
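The discrete scheme described in the answer above is exactly the velocity Verlet update. A minimal sketch in Python (the units, time step and step count are my own illustrative choices): a body in an inverse-square field, started on a circular orbit, keeps an essentially constant orbital radius over many revolutions, which is the stability the answer attributes to the method.

```python
import math

def accel(x, y, gm=1.0):
    """Inverse-square gravitational acceleration toward the origin."""
    r3 = (x * x + y * y) ** 1.5
    return -gm * x / r3, -gm * y / r3

def velocity_verlet(x, y, vx, vy, dt, steps):
    ax, ay = accel(x, y)
    for _ in range(steps):
        # position update: v0*dt + (1/2)*a0*dt^2, as in the answer
        x += vx * dt + 0.5 * ax * dt * dt
        y += vy * dt + 0.5 * ay * dt * dt
        ax_new, ay_new = accel(x, y)
        # velocity update: v1 = v0 + (1/2)*(a0 + a1)*dt, as in the answer
        vx += 0.5 * (ax + ax_new) * dt
        vy += 0.5 * (ay + ay_new) * dt
        ax, ay = ax_new, ay_new
    return x, y, vx, vy

# Circular orbit in units where GM = 1: radius 1, speed 1.
x, y, vx, vy = velocity_verlet(1.0, 0.0, 0.0, 1.0, dt=0.01, steps=100_000)
print(math.hypot(x, y))  # stays close to 1 even after ~160 orbits
```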
59,579 | The actual number: How far apart are galaxies on average? An attempt to visualize such a thing: If galaxies were the size of peas, how many would be in a cubic meter? | The simple answer is that the average galaxy spacing is around a few megaparsecs, while the biggest galaxies are around 0.1 megaparsecs in size. So the average spacing is somewhere in the range of 10 - 100 times the size of the biggest galaxies. The peas I had for lunch today were (at a guess - I didn't measure them!) 5mm in diameter so the interpea spacing would be 5 - 50cm, or between 8 and 8,000 per cubic metre. But this is a very misleading statistic. Galaxies are not distributed uniformly, but instead are grouped into clusters, which are themselves grouped into superclusters. Also galaxies vary enormously in size, with dwarf galaxies around a thousand times smaller than the biggest galaxies. I would resist the temptation to assign any significance to my figures above. However there is a take home message i.e. galaxies are much, much , much closer relative to their size than stars are. That's why galaxy collisions are quite frequent while stellar collisions are rare to the point of non-existence. | {
"source": [
"https://physics.stackexchange.com/questions/59579",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/10352/"
]
} |
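The pea arithmetic in the answer above is easy to reproduce (the 5 mm pea is the answer's own guess):

```python
# Pea diameter 5 mm; inter-pea spacing 10-100 pea diameters (5-50 cm).
pea = 0.005  # m
for factor in (10, 100):
    spacing = factor * pea                 # m between pea centres
    per_cubic_metre = (1 / spacing) ** 3   # peas per m^3 on a cubic grid
    print(factor, per_cubic_metre)
# spacing of 5 cm gives 8000 peas/m^3; 50 cm gives 8 peas/m^3
```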
59,581 | After reading a recent news item, " Stargazers capture first picture of a planet with two suns – just like Luke Skywalker’s home planet of Tatooine in Star Wars ", I am wondering: can we calculate the probability of extrasolar planets, binary stars and so on? Several years ago, we were not sure of the existence of extrasolar planets. But now, we have found many of them. I think we should be able to calculate the probability from first principles when we focus on galaxy formation. Note that this problem is different from the Drake-Sagan equation for estimating the number of detectable extraterrestrial civilizations. And the result should be useful as a guide for astronomical observations. | {
"source": [
"https://physics.stackexchange.com/questions/59581",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/20690/"
]
} |
59,719 | Some physical quantities, like position, velocity, momentum and force, have precise definitions even in basic textbooks; however, energy is a little confusing for me. My point here is: using our intuition we know what momentum should be, and we also know that defining it as $p = mv$ is a good definition. Also, based on Newton's laws we can intuit and define what forces are. However, when it comes to energy many textbooks become a little "circular". They first try to define work, and after some arguments they just give a formula $W = F\cdot r$ without motivating or giving intuition about this definition. Then they say that work is the variation of energy, and they never give a formal definition of energy. I've heard that "energy is a number that remains unchanged after any process that a system undergoes", however I think that this is not so good for three reasons: first, because momentum is also conserved, so it fits this definition and it's not energy; second, because I've recently heard that in general relativity some conservation laws are lost; and third, because conservation of energy can be derived as a consequence of other definitions. So, how is energy defined formally, in a way that fits both classical and modern physics, without falling into circular arguments? | The Lagrangian formalism of physics is the way to start here. In this formulation, we define a functional called the action, which maps each possible path a particle can take to a real number; it is the time integral of a function of the path called the Lagrangian. Then, the [classical] path traveled by a particle is the one for which the action is stationary (has zero derivative) with respect to small variations of the path. It turns out, due to a result known as Noether's theorem, that if the Lagrangian remains unchanged under a symmetry, then the motion of the particles will necessarily have a conserved quantity. Energy is the conserved quantity associated with time translation symmetry in the Lagrangian of a system.
So, if your Lagrangian is unchanged after substituting $t^{\prime} = t + c$ for $t$, then Noether's theorem tells us that the Lagrangian will have a conserved quantity. This quantity is the energy. If you know something about Lagrangians, you can explicitly calculate it. There are numerous googlable resources on all of these words, with links to how these calculations happen. I will answer further questions in edits. | {
"source": [
"https://physics.stackexchange.com/questions/59719",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/21146/"
]
} |
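The conserved quantity that Noether's theorem attaches to time translations can be checked numerically. A sketch (my own example, not from the answer): for the time-independent Lagrangian $L = \frac12 m v^2 - \frac12 k x^2$, the Noether charge is $E = v\,\partial L/\partial v - L = \frac12 m v^2 + \frac12 k x^2$, and it stays constant along the integrated trajectory.

```python
m, k = 1.0, 1.0
dt, steps = 1e-3, 10_000

x, v = 1.0, 0.0                      # start at rest at x = 1

def energy(x, v):
    # Noether charge for time translation: E = v * dL/dv - L
    return 0.5 * m * v * v + 0.5 * k * x * x

E0 = energy(x, v)
a = -k * x / m
for _ in range(steps):               # velocity Verlet integration
    x += v * dt + 0.5 * a * dt * dt
    a_new = -k * x / m
    v += 0.5 * (a + a_new) * dt
    a = a_new

print(abs(energy(x, v) - E0))        # tiny: E is conserved along the motion
```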
59,782 | In Wikipedia it's said that time is a scalar quantity. But its hard to understand that how? As stated that we consider only the magnitude of time then its a scalar. But on basis of time we define yesterday, today and tomorrow then what it will be? | To pick up on twistor59's point, time is not a vector but a time interval is. The confusion arises because you have to define carefully what you mean by the word time . In special relativity we label spacetime points by their co-ordinates $(t, x, y, z)$, where $t$ is the time co-ordinate. The numbers $t$, $x$, etc are not themselves vectors because they just label positions in spacetime. So in this sense the time co-ordinate, $t$, is not a vector any more than the spatial co-ordinates are. But we often use the word time to mean a time interval, and in this sense the time is the vector joining the spacetime points $(t, x, y, z)$ and $(t + t', x, y, z)$, where $t'$ is the time interval you measure with your stopwatch between the two points. The interval between the two points is $(t', 0, 0, 0)$ and this is a vector. | {
"source": [
"https://physics.stackexchange.com/questions/59782",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/20592/"
]
} |
59,921 | A falling object with no initial velocity with mass $m$ is influenced by a gravitational force $g$ and the drag (air resistance) which is proportional to the object's speed. By Newton´s laws this can be written as: $mg-kv=ma$ (for low speeds) $mg-kv^2=ma$ (for high speeds). I assume that $k$ is a positive constant that depends on the geometry of the object and the viscosity. But how can one explain that the air resistance is proportional to the velocity? And to the velocity squared in the second equation? | One's naive expectation would be that as the object moves through the medium, it collides with molecules at a rate proportional to $v$. The volume swept out in time $t$ is $A v t$, where $A$ is the cross-sectional area, so the mass with which it collides is $\rho A v t$. The impulse in each collision is proportional to $v$, and therefore the drag force should be proportional to $\rho A v^2$, with a constant of proportionality $C_D$ (the drag coefficient) of order unity. In reality, this is only true for a certain range of Reynolds numbers, and even in the range of Reynolds numbers for which it's true, the independent-collision picture above is not what really happens. At low Reynolds numbers you get laminar flow and $C_D\propto 1/v$, while at higher Reynolds numbers there's turbulence, and you get $C_D$ roughly constant. | {
"source": [
"https://physics.stackexchange.com/questions/59921",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/20330/"
]
} |
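One concrete consequence of the quadratic law in the answer above: at terminal velocity the drag balances gravity, $mg = \frac12 \rho C_D A v^2$, so $v = \sqrt{2mg/(\rho C_D A)}$. The skydiver numbers below are illustrative assumptions of mine, not from the answer.

```python
import math

rho = 1.2    # air density near sea level, kg/m^3
g = 9.81     # m/s^2

def terminal_velocity(m, cd, area):
    """Speed at which quadratic drag (1/2)*rho*cd*A*v^2 balances the weight m*g."""
    return math.sqrt(2 * m * g / (rho * cd * area))

# Belly-down skydiver: ~80 kg, C_D ~ 1.0, frontal area ~ 0.7 m^2 (assumed values)
v = terminal_velocity(80.0, 1.0, 0.7)
print(v)  # ~43 m/s, on the order of 150 km/h, the right ballpark for freefall
```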
59,978 | I'm making a table where columns are labelled with the property and the units it's measured in: Length (m) |||| Force (N) |||| Safety Factor (unitless) ||| etc... I'd like not to write "unitless" on several columns...and I'm quite surprised I can't seem to find a symbol for it. Any suggestions? | Straight from the horse 's mouth: Source: Bureau International des Poids et Mesures (Search for "dimensionless" for all guidelines.) The International Bureau of Weights and Measures (French: Bureau international des poids et mesures ), is an international standards organisation, one of three such organisations established to maintain the International System of Units (SI) under the terms of the Metre Convention ( Convention du Mètre ). The organisation is usually referred to by its French initialism, BIPM. Wikipedia | {
"source": [
"https://physics.stackexchange.com/questions/59978",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2456/"
]
} |
60,519 | According to this article on the European Space Agency web site, just after the Big Bang and before inflation the currently observable universe was the size of a coin. One millionth of a second later the universe was the size of the Solar System, which is an expansion much, much faster than the speed of light. Can space expand with unlimited speed? | There are quite a few common misconceptions about the expansion of the universe, even among professional physicists. I will try to clarify a few of these issues; for more information, I highly recommend the article
" Expanding Confusion: common misconceptions of cosmological horizons and the superluminal expansion of the Universe " from Tamara M. Davis and Charles H. Lineweaver. I will assume a standard ΛCDM-model, with
$$
\begin{align}
H_0 &= 67.3\;\text{km}\,\text{s}^{-1}\text{Mpc}^{-1},\\
\Omega_{R,0} &= 9.24\times 10^{-5},\\
\Omega_{M,0} &= 0.315,\\
\Omega_{\Lambda,0} &= 0.685,\\
\Omega_{K,0} &= 1 - \Omega_{R,0} - \Omega_{M,0} - \Omega_{\Lambda,0} = 0.
\end{align}
$$ The expansion of the universe can be described by a scale factor $a(t)$, which can be thought of as the length of an imaginary ruler that expands along with the universe, relative to the present day, i.e. $a(t_0)=1$ where $t_0$ is the present age of the universe. From the standard equations, one can derive the Hubble parameter $$
H(a) = \frac{\dot{a}}{a} = H_0\sqrt{\Omega_{R,0}\,a^{-4} + \Omega_{M,0}\,a^{-3} + \Omega_{K,0}\,a^{-2} + \Omega_{\Lambda,0}},
$$
such that $H(1)=H_0$ is the Hubble constant . In a previous post , I showed that the age of the universe, as a function of $a$, is
$$
t(a) = \frac{1}{H_0}\int_0^a\frac{a'\,\text{d}a'}{\sqrt{\Omega_{R,0} + \Omega_{M,0}\,a' + \Omega_{K,0}\,a'^2 + \Omega_{\Lambda,0}\,a'^4}},
$$
which can be numerically inverted to yield $a(t)$, and consequently $H(t)$. It also follows that the present age of the universe is $t_0=t(1)=13.8$ billion years. Now, another consequence of the Big Bang models is Hubble's Law ,
$$
v_\text{rec}(t_\text{ob}) = H(t_\text{ob})\,D(t_\text{ob}),
$$
describing the relation between the recession velocity $v_\text{rec}(t_\text{ob})$ of a light source and its proper distance $D(t_\text{ob})$, at a time $t_\text{ob}$. In fact, this follows immediately from the definition of $H(t_\text{ob})$, since $v_\text{rec}(t_\text{ob})$ is proportional to $\dot{a}$ and $D(t_\text{ob})$ is proportional to $a$. However, it should be noted that this is a theoretical relation: neither $v_\text{rec}(t_\text{ob})$ nor $D(t_\text{ob})$ can be observed directly. The recession velocity is not a "true" velocity, in the sense that it is not an actual motion in a local inertial frame; clusters of galaxies are locally at rest. The distance between them increases as the universe expands, which can be expressed as $v_\text{rec}(t_\text{ob})$. Some cosmologists therefore prefer to think of $v_\text{rec}(t_\text{ob})$ as an apparent velocity, a theoretical quantity with little physical meaning. A related quantity that is observable is the redshift of a light source, which is the cumulative increase in wavelength of the photons as they travel through the expanding space between source and observer. There is a simple relation between the scale factor and the redshift of a source, observed at a time $t_\text{ob}$:
$$
1 + z(t_\text{ob}) = \frac{a(t_\text{ob})}{a(t_\text{em})},
$$
such that the observed redshift of a photon immediately gives the time $t_\text{em}$ at which the photon was emitted. The proper distance $D(t_\text{ob})$ of a source is also a theoretical quantity. It's an "instantaneous" distance, which can be thought of as the distance you would obtain with a (very long!) measuring tape if you were able to "stop" the expansion of the universe. It can however be derived from observable quantities,
such as the luminosity distance or the angular diameter distance . The proper distance to a source, observed at time $t_\text{ob}$ with a redshift $z_\text{ob}$ is
$$
D(z_\text{ob},t_\text{ob}) = a_\text{ob}\frac{c}{H_0}\int_{a_\text{ob}/(1+z_\text{ob})}^{a_\text{ob}}\frac{\text{d}a}{\sqrt{\Omega_{R,0} + \Omega_{M,0}\,a + \Omega_{K,0}\,a^2 + \Omega_{\Lambda,0}\,a^4}},
$$
with $a_\text{ob} = a(t_\text{ob})$. The furthest objects that we theoretically can observe have infinite redshift; they mark the edge of the observable universe , also known as the particle horizon . Ignoring inflation, we get:
$$
D_\text{ph}(t_\text{ob}) = a_\text{ob}\frac{c}{H_0}\int_0^{a_\text{ob}}\frac{\text{d}a}{\sqrt{\Omega_{R,0} + \Omega_{M,0}\,a + \Omega_{K,0}\,a^2 + \Omega_{\Lambda,0}\,a^4}}.
$$
In practice though, the furthest we can see is the CMB, which has a current redshift $z_\text{CMB}(t_0)\approx 1090$. A source that has a recession velocity $v_\text{rec}(t_\text{ob})=c$ has a corresponding distance
$$
D_\text{H}(t_\text{ob})=\frac{c}{H(t_\text{ob})}.
$$
This is called the Hubble distance . Almost there, just a few more quantities need to be defined. The photons that we observe at a time $t_\text{ob}$ have travelled on a null geodesic called the past light cone . It can be defined as the proper distance that a light source had at a time $t_\text{em}$ when it emitted the photons that we observe at $t_\text{ob}$:
$$
D_\text{lc}(t_\text{em},t_\text{ob})= a_\text{em}\frac{c}{H_0}\int_{a_\text{em}}^{a_\text{ob}}\frac{\text{d}a}{\sqrt{\Omega_{R,0} + \Omega_{M,0}\,a + \Omega_{K,0}\,a^2 + \Omega_{\Lambda,0}\,a^4}}.
$$
There are two special cases: for $t_\text{ob}=t_0$ we have our present-day past light cone (i.e. the photons that we are observing right now), and for $t_\text{ob}=\infty$ we get the so-called cosmic event horizon :
$$
D_\text{eh}(t_\text{em})= a_\text{em}\frac{c}{H_0}\int_{a_\text{em}}^\infty\frac{\text{d}a}{\sqrt{\Omega_{R,0} + \Omega_{M,0}\,a + \Omega_{K,0}\,a^2 + \Omega_{\Lambda,0}\,a^4}}.
$$
For light emitted today, $t_\text{em}=t_0$, this has a special significance: if a source closer to us than $D_\text{eh}(t_0)$ emits photons today, then we will be able to observe those at some point in the future. In contrast, we will never observe photons emitted today by sources further than $D_\text{eh}(t_0)$. One final definition: instead of proper distances, we can use co-moving distances . These are distances defined in a co-ordinate system that expands with the universe. In other words, the co-moving distance of a source that moves away from us along with the Hubble flow, remains constant. The relation between co-moving and proper distance is simply
$$
D_c(t) = \frac{D(t)}{a(t)},
$$
so that both are the same at the present day $a(t_0)=1$. Thus
$$
\begin{align}
D_\text{c,ph}(t_\text{ob}) &= \frac{D_\text{ph}(t_\text{ob})}{a_\text{ob}},\\
D_\text{c,lc}(t_\text{em},t_\text{ob}) &= \frac{D_\text{lc}(t_\text{em},t_\text{ob})}{a_\text{em}},\\
D_\text{c,H}(t_\text{ob}) &= \frac{D_\text{H}(t_\text{ob})}{a_\text{ob}}.
\end{align}
$$
In fact, it would have been more convenient to start with co-moving distances instead of proper distances; in case you've been wondering where all the above integrals come from, those can be derived from the null geodesic of the FLRW metric:
$$
0 = c^2\text{d}t^2 - a^2(t)\text{d}\ell^2,
$$
such that
$$
\text{d}\ell = \frac{c\,\text{d}t}{a(t)} = \frac{c\,\text{d}a}{a\,\dot{a}} = \frac{c\,\text{d}a}{a^2\,H(a)},
$$
and $\text{d}\ell$ is the infinitesimal co-moving distance. So, what can we do with all these tedious calculations? Well, we can draw a graph of the evolution of the expanding universe (after inflation). Inspired by a similar plot in the article from Davis & Lineweaver, I made the following diagram: This graph contains a lot of information. On the horizontal axis, we have the co-moving distance of light sources, in Gigalightyears (bottom) and the corresponding Gigaparsecs (top). The vertical axis shows the age of the universe (left) and the corresponding scale factor $a$ (right). The horizontal thick black line marks the current age of the universe (13.8 billion years). Co-moving sources have a constant co-moving distance, so that their world lines
are vertical lines (the black dotted lines correspond with sources at 10, 20, 30, etc Gly). Of course, our own world line is the thick black vertical line, and we are currently situated at the intersection of the horizontal and vertical black line. The yellow lines are null geodesics, i.e. the paths of photons. The scale of the time axis is such that these photon paths are straight lines at 45° angles. The orange line is our current past light cone. This is the cross-section of the universe that we currently observe: all the photons that we receive now have travelled on this path. The path extends to the orange dashed line, which is our future light cone. The particle horizon, i.e. the edge of our observable universe, is given by the blue line; note that this is also a null geodesic. The red line is our event horizon: photons emitted outside the event horizon will never reach us. The purple dashed curves are distances corresponding with particular redshift values $z(t_\text{ob})$, in particular $z(t_\text{ob}) = 1, 3, 10, 50, 1000$. Finally, the green curves are lines of constant recession velocity, in particular $v_\text{rec}(t_\text{ob}) = c, 2c, 3c, 4c$. Of course, the curve $v_\text{rec}(t_\text{ob}) = c$ is nothing else than the Hubble distance. What can we learn from all this? Quite a lot: The current (co-moving) distance of the edge of the observable universe is 46.2 billion ly . Of course, the total universe can be much bigger, and is possibly infinite. The observable universe will keep expanding to a finite maximum co-moving distance at cosmic time $t = \infty$, which is 62.9 billion ly. We will never observe any source located beyond that distance. Curves of constant recession velocity expand to a maximum co-moving distance, at $t_\text{acc} = 7.7$ billion years, and then converge again. This time $t_\text{acc}$, indicated by the horizontal black dashed line, is in fact the moment at which the expansion of the universe began to accelerate. 
Curves of constant redshift also expand first, and converge when $t$ becomes very large. This means that a given source, which moves along a vertical line, will be observed with an infinite redshift when it enters the particle horizon, after which its redshift will decrease to a minimum value, and finally increase again to infinity at $t = \infty$. In other words, every galaxy outside our local cluster will eventually be redshifted to infinity when the universe becomes very old. This is due to the dominance of dark energy at late cosmic times. Photons that we currently observe from sources at co-moving distances of 10, 20, 30 and 40 Gly have redshifts of 0.87, 2.63, 8.20 and 53.22 respectively. The edge of the observable universe is receding from us with a recession velocity of more than 3 times the speed of light: $3.18c$, to be exact. In other words, we can observe sources that are moving away from us faster than the speed of light. Sources at co-moving distances of 10, 20, 30 and 40 Gly are receding from us at 0.69, 1.38, 2.06 and 2.75 times the speed of light, respectively. Sources outside our particle horizon are moving away even faster. There is no a priori limit to the maximum recession velocity: it is proportional to the size of the total universe, which could be infinite. The Hubble distance lies completely inside the event horizon. It will asymptotically approach the event horizon (as well as the curve of constant redshift 1) as $t$ goes to infinity. The current Hubble distance is 14.5 Gly (corresponding with $z=1.48$), while the current distance to the event horizon is 16.7 Gly ($z=1.87$). Photons emitted today by sources that are located between these two distances will still reach us at some time in the future. Although the difference between the Hubble distance and the event horizon today is rather small, this difference was much larger in the past.
Consider for example the photons that we observe today, emitted by a source at a co-moving distance of 30 Gly. It emitted those photons at $t=0.62$ Gy, when the source was moving away from us at $3.5c$. The source continued its path along the vertical dotted line, while the photons moved on our past light cone. At $t=0.83, 1.64, 4.06$ Gy those photons passed regions that were moving away from us at $3c, 2c, c$ respectively. Along the way, those photons accumulated a total redshift of 53.22. From all the above, it should be clear that the Hubble distance is not a horizon. I should stress again that all these calculations are only valid for the standard ΛCDM-model. Apologies for the very lengthy post, but I hope it has clarified a few things. | {
"source": [
"https://physics.stackexchange.com/questions/60519",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/22943/"
]
} |
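The age integral $t(1)$ quoted in the answer above can be reproduced numerically with the stated parameters; the unit-conversion constants below (Mpc in km, Gyr in s) are standard values I have added.

```python
import math

# Parameters from the answer (flat LambdaCDM, Omega_K = 0)
H0_kms_Mpc = 67.3
Om_R, Om_M, Om_L = 9.24e-5, 0.315, 0.685

# Hubble time 1/H0 in Gyr
Mpc_km = 3.0857e19
Gyr_s = 3.1557e16
hubble_time_Gyr = Mpc_km / H0_kms_Mpc / Gyr_s   # ~14.5 Gyr

def integrand(a):
    # a / sqrt(Om_R + Om_M*a + Om_L*a^4), the integrand of t(a) in the answer
    return a / math.sqrt(Om_R + Om_M * a + Om_L * a**4)

# Composite Simpson's rule on [0, 1]; the integrand tends to 0 as a -> 0
n = 100_000                      # even number of subintervals
h = 1.0 / n
s = integrand(0.0) + integrand(1.0)
for i in range(1, n):
    s += (4 if i % 2 else 2) * integrand(i * h)
integral = s * h / 3

t0 = hubble_time_Gyr * integral
print(t0)  # ~13.8 Gyr, the present age quoted in the answer
```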
60,690 | Whatever happens in there is not falsifiable nor provable to the outside. If for (amusing) example the interior consisted of 10^100 Beatles clones playing "Number Nine" backwards, do we know how to unscramble the Hawking radiation to divine this? The same question applies to this new firewall furor. So of what use is a description of the interior to our physics on the outside? The only possibility of usefulness I can see is if our own universe can be described as an interior up to the cosmic horizon in de Sitter space. But that's only an "if". | Simply because our final goal is a set of laws of physics that describes any part of the universe equally well. Let's say a physicist jumped into a black hole and saw that the interior of the black hole was composed entirely of John Lennon clones. His last thoughts before getting spaghettified would be "why?". From his perspective, physics is incomplete. Sure, we probably can't use it to predict anything -- but modern physics is much less about predictions and much more about having a beautiful, mathematically rigorous model of the universe. Mathematical models with discontinuities usually aren't "beautiful", and John Lennon in black holes counts as a discontinuity if we take general relativity as our mathematical model of the universe. Which would mean that we will eventually have to replace our model (which is why knowing as much as we can about the inside is important). Besides, if our current theories partially fail inside a black hole, we need to patch that up. | {
"source": [
"https://physics.stackexchange.com/questions/60690",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/11633/"
]
} |
60,830 | If we set the Boltzmann constant to $1$, then entropy would just be $\ln \Omega$, temperature would be measured in $\text{joules}$ ($\,\text{J}\,$) , and average kinetic energy would be an integer times $\frac{T}{2}$. Why do we need separate units for temperature and energy? | One reason you might think $T$ should be measured in Joules is the idea that temperature is the average energy per degree of freedom in a system. However, this is only an approximation. That definition would correspond to something proportional to $\frac{U}{S}$ (internal energy over entropy) rather than $\frac{\partial U}{\partial S}$ , which is the real definition. The approximation holds in cases where the number of degrees of freedom doesn't depend much on the amount of energy in the system, but for quantum systems, particularly at low temperatures, there can be quite a bit of dependence. If you accept that $T$ is defined as $\frac{\partial U}{\partial S}$ then the question is about whether we should treat entropy as a dimensionless quantity. This is certainly possible, as you say. But for me there's a very good practical reason not to do that: temperature is not an energy, in the sense that it doesn't, in general, make sense to add the temperature to the internal energy of a system or set them equal. Units are a useful tool for preventing you from accidentally trying to do such a thing. In special relativity, for example, it makes sense to set $c=1$ because then it does make sense to set a distance equal to a time. By doing that, you're simply saying that the path between two points is light-like. But $T=\frac{\partial U}{\partial S}$ measures the change in energy with respect to entropy. Entropy and energy are extensive quantities, whereas temperature is an intensive one. This means that it doesn't very often make sense to equate them without also including some non-constant factor relating to the system's size. 
For this reason, it's very useful to keep Boltzmann's constant around. My personal favorite way to do it is to measure entropy in bits, so that $k_B = \frac{1}{\ln 2} \,\mathrm{bits}$ and the units of temperature are $\mathrm{J\cdot bits^{-1}}$ . Having entropy rather than temperature as the quantity with the fundamental unit tends to make it much clearer what's going on, and bits are a pretty convenient unit in terms of building an intuition about the relationship to probability theory. | {
"source": [
"https://physics.stackexchange.com/questions/60830",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/23056/"
]
} |
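With entropy measured in bits, as the answer suggests, temperature carries units of joules per bit. A quick illustration (room temperature is my own choice of example): at $T = 300\,\mathrm{K}$ the energy cost of one bit of entropy is $k_B T \ln 2$, which is also the Landauer bound for erasing a bit.

```python
import math

k_B = 1.380649e-23   # J/K (exact SI value)
T = 300.0            # K

# Temperature expressed in joules per bit: k_B * T * ln 2
joules_per_bit = k_B * T * math.log(2)
print(joules_per_bit)  # ~2.87e-21 J per bit at room temperature
```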
60,869 | What is the definition of a timelike and spacelike singularity ? Trying to find, but haven't yet, what the definitions are. | A singularity is a condition in which geodesics are incomplete. For example, if you drop yourself into a black hole, your world-line terminates at the singularity. It's not just that you're destroyed. You (and the subatomic particles you're made of) have no future world-lines. A careful definition of geodesic incompleteness is a little tricky, because we want to talk about geodesics that can't be extended past a certain length, but length is measured by the metric, and the metric goes crazy at a singularity so that length becomes undefined. The way to get around this is to use an affine parameter, which can be defined without a metric. Geodesic incompleteness means that there exists a geodesic that can't be extended past a certain affine parameter. (This also covers lightlike geodesics, which have zero metric length.) There are two types of singularities, curvature singularities and conical singularities. A black hole singularity is an example of a curvature singularity; as you approach the singularity, the curvature of spacetime diverges to infinity, as measured by a curvature invariant such as the Ricci scalar. Another example of a curvature singularity is the Big Bang singularity. A conical singularity is like the one at the tip of a cone. Geodesics are incomplete there basically because there's no way to say which way the geodesic should go once it hits the tip. In 2+1-dimensional GR, curvature vanishes identically, and the only kind of gravity that exists is conical singularities. I don't think conical singularities are expected to be important in our universe, e.g., I don't think they can form by gravitational collapse. Actual singularities involving geodesic incompleteness are to be distinguished from coordinate singularities, which are not really singularities at all. 
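The curvature-invariant test mentioned above can be made concrete with the Kretschmann scalar of the Schwarzschild metric, $K = 48 G^2 M^2 / (c^4 r^6)$ (a standard result, not derived in the answer): it is finite at the event horizon but blows up as $r \to 0$, signalling the genuine singularity there.

```python
def kretschmann(r, M, G=6.674e-11, c=2.998e8):
    """Kretschmann curvature invariant of the Schwarzschild metric, in 1/m^4."""
    return 48 * G**2 * M**2 / (c**4 * r**6)

M_sun = 1.989e30                            # kg
r_s = 2 * 6.674e-11 * M_sun / 2.998e8**2    # Schwarzschild radius, ~3 km

K_horizon = kretschmann(r_s, M_sun)         # finite at the event horizon
K_inner = kretschmann(r_s / 10, M_sun)      # 10^6 times larger at r_s/10
print(K_inner / K_horizon)                  # grows as 1/r^6 toward r = 0
```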
In the Schwarzschild spacetime, as described in Schwarzschild's original coordinates, some components of the metric blow up at the event horizon, but this is not an actual singularity. This coordinate system can be replaced with a different one in which the metric is well behaved. The reason curvature scalars are useful as tests for an actual curvature singularity is that since they're scalars, they can't diverge in one coordinate system but stay finite in another. However, they are not definitive tests, for several reasons: (1) a curvature scalar can diverge at a point that is at an infinite affine distance, so it doesn't cause geodesic incompleteness; (2) curvature scalars won't detect conical singularities; (3) there are infinitely many curvature scalars that can be constructed, and some could blow up while others don't. A good treatment of singularities is given in the online book by Winitzki, section 4.1.1. The definition of a singularity is covered in WP and in all standard GR textbooks. I assume the real issue you were struggling with was the definition of timelike versus spacelike. In GR, a singularity is not a point in a spacetime; it's like a hole in the topology of the manifold. For example, the Big Bang didn't occur at a point. Because a singularity isn't a point or a point-set, you can't define its timelike or spacelike character in quite the way you would with, say, a curve. A timelike singularity is one that is in the future light cone of some point A but in the past light cone of some other point B, such that a timelike world-line can connect A to B. Black hole and big bang singularities are not timelike, they're spacelike, and that's how they're shown on a Penrose diagram. (Note that in the Schwarzschild metric, the Schwarzschild r and t coordinates swap their timelike and spacelike characters inside the event horizon.) There is some variety in the definitions, but a timelike singularity is essentially what people mean by a naked singularity. 
It's a singularity that you can have sitting on your desk, where you can look at it and poke it with a stick. For more detail, see Penrose 1973. In addition to the local definition I gave, there is also a global notion, Rudnicki, 2006, which is essentially that it isn't hidden behind an event horizon (hence the term "naked"). What's being formalized is the notion of a singularity that can form by gravitational collapse from nonsingular initial conditions (unlike a Big Bang singularity), and from which signals can escape to infinity (unlike a black hole singularity). Penrose, Gravitational radiation and gravitational collapse; Proceedings of the Symposium, Warsaw, 1973. Dordrecht, D. Reidel Publishing Co. pp. 82-91, free online at http://adsabs.harvard.edu/full/1974IAUS...64...82P Rudnicki, Generalized strong curvature singularities and weak cosmic censorship in cosmological space-times, http://arxiv.org/abs/gr-qc/0606007 Winitzki, Topics in general relativity, https://sites.google.com/site/winitzki/index/topics-in-general-relativity | {
"source": [
"https://physics.stackexchange.com/questions/60869",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/23071/"
]
} |
61,174 | Most people can ride 10 km on their bike. However, running 10 km is a lot harder to do. Why? According to the law of conservation of energy, bicycling should be more intensive because you have to move a higher mass, requiring more kinetic energy to reach a certain speed. But the opposite is true. So, to fulfill this law, running must generate more heat. Why does it? Some things I can think of as (partial) answers: You use more muscles to run. While running, you have more friction with the ground; continuously pouncing it dissipates energy to it. While you move your body at a slow speed, you need to move your arms and legs alternately at higher and lower speeds. | One word: inertia. When you're riding a bike on a level gradient you just need to give it a push to get going, then you can coast for quite a while before friction and air resistance slow you down. In other words, the relatively frictionless wheels mean the bicycle's kinetic energy doesn't dissipate quickly. But the human body doesn't have wheels, so while running you have to give a good kick to get going, and then another kick to keep going on the next step, and so on. When hills are involved the difference is even more pronounced, since we run downhill the same way we do on the level, by continually pushing ourselves forward; whereas on a bicycle you can take advantage of the slope and just coast down it. I suspect that raising and lowering your centre of mass isn't as inefficient as the other answers have suggested. This is because your legs are springy, so at least to some extent you're just converting energy back and forth between gravitational potential and the spring force in your legs. Humans are possibly the most efficient long-distance runners in the animal kingdom. There is a school of thought that says the reason we are bipeds is that we evolved as endurance hunters, chasing our prey until it collapsed from exhaustion rather than trying to outrun it over short distances. 
Whether that's true or not, we probably wouldn't do all that bouncing up and down if there wasn't a good reason for it. You might ask why, if using wheels is so much more efficient, didn't we evolve that instead? I don't know, but it seems no animal has been able to evolve wheeled locomotion. | {
"source": [
"https://physics.stackexchange.com/questions/61174",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9780/"
]
} |
61,347 | Though often heard, often read, often felt being overused, I wonder what are the precise definitions of invariance and covariance. Could you please give me an example from field theory? | The definitions of these terms are somewhat context-dependent. In general, however, invariance in physics refers to when a certain quantity remains the same under a transformation of things out of which it is built, while covariance refers to when equations "retain the same form" after the objects in the equations are transformed in some way. In the context of field theory, one can make these notions precise as follows. Consider a theory of fields $\phi$. Let a transformation $T$
$$
\phi \to\phi_T
$$
on fields be given. Let a functional $F[\phi]$ of the fields be given (consider the action functional for example). The functional is said to be invariant under the transformation $T$ of the fields provided
$$
F[\phi_T] = F[\phi]
$$
for all fields $\phi$. On the other hand, the equations of motion of the theory are said to be covariant with respect to the transformation $T$ provided that, if the fields $\phi$ satisfy the equations, then so do the fields $\phi_T$; the form of the equations is left the same by $T$. For example, the action of a single real Klein-Gordon scalar $\phi$ is Lorentz-invariant meaning that it doesn't change under the transformation
$$
\phi(x)\to\phi_\Lambda(x) = \phi(\Lambda^{-1}x),
$$
and the equations of motion of the theory are Lorentz-covariant in the sense that if $\phi$ satisfies the Klein-Gordon equation, then so does $\phi_\Lambda$. Also, I'd imagine that you'd find this helpful. | {
"source": [
"https://physics.stackexchange.com/questions/61347",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/11220/"
]
} |
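The invariance definition in entry 61,347 above can be illustrated with a toy numerical check (my own sketch, not part of the original answer): on a periodic one-dimensional box, the functional $F[\phi]=\int (\partial_x\phi)^2\,dx$ is invariant under the translation $\phi(x)\to\phi(x-a)$.

```python
import numpy as np

# F[phi] = integral of (dphi/dx)^2 over a periodic box, evaluated with a
# periodic central difference; it is invariant under phi(x) -> phi(x - a).
n = 1024
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]

def functional(phi):
    dphi = (np.roll(phi, -1) - np.roll(phi, 1)) / (2.0 * dx)  # periodic derivative
    return np.sum(dphi**2) * dx

phi = np.sin(3.0 * x) + 0.5 * np.cos(x)      # some field configuration
phi_shifted = np.roll(phi, 17)               # translation by 17 grid points

print(abs(functional(phi) - functional(phi_shifted)))  # ~0 (machine precision)
```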
61,353 | Is the definition of $$d s^2=-d \tau^2$$ assuming that $c=1$, so that we always have $$\left({ds\over d\tau}\right)^2=-1$$?
Is there a reason for this definition?
Don't we get an imaginary ${ds\over d\tau}$? | | {
"source": [
"https://physics.stackexchange.com/questions/61353",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/23182/"
]
} |
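On the sign question raised in entry 61,353 above, a standard convention note (my own addition, stating only the usual textbook conventions, not from the source):

```latex
\text{Mostly-plus signature } (-,+,+,+),\ c=1:\quad
ds^2 < 0 \text{ for timelike intervals},\qquad
d\tau^2 \equiv -\,ds^2
\;\Rightarrow\;
\left(\frac{ds}{d\tau}\right)^2 = \frac{ds^2}{d\tau^2} = -1 .
```

The formally imaginary $ds/d\tau$ is harmless: only the real quantities $ds^2$ and $d\tau$ enter physical formulas. In the opposite signature $(+,-,-,-)$ one instead has $ds^2>0$ for timelike intervals and defines $d\tau^2=+ds^2$.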
61,445 | Given that everything else is equal (model of fridge, temperature settings, external temperature, altitude), over a given duration of having the door closed, does it require more electricity to cool an empty refrigerator AND maintain that temperature, than a full one? | The two "no" answers you've already received are correct for all practical purposes. In real-world cases there can be a difference though. The difference depends on when the refrigerator decides to cycle on and cool. If the fridge cycles on a timer or based on heat energy then there will be a difference due to the added heat capacity. The outside of the refrigerator will acquire heat due to conduction, convection, and radiation from external sources. All heat transfer depends on $\Delta T$. The greater the difference in temperature between two systems the faster heat will flow. When you add heat energy to a full refrigerator, the system has greater heat capacity so the temperature changes more slowly and $\Delta T$ is greater than it would be in an empty refrigerator. If the refrigerator could keep the temperature absolutely constant at all times the difference would not matter. Because a real system cools and then stops cooling in discrete steps, a loaded refrigerator acquires heat from the environment slightly faster because it stays colder for longer. The difference is so small though that I'm sure it couldn't be measured. But, if the refrigerator only cycles on at a specific temperature there will be no difference. I wrote a numerical simulation to test this. Here is the plot: The simulation is a simplified, idealized, purely numerical one, hence the lack of units. The Y axis is temperature and the X axis is time. The red curve is the loaded refrigerator and green curve is the unloaded one. For the simulation I made the loaded fridge have double the heat capacity. 
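The numerical simulation described above can be sketched as follows (my own minimal reconstruction with made-up parameters, not the author's original code): with a Newton-law heat leak, doubling the heat capacity doubles the time to warm through the thermostat band, so the average rate of heat inflow, and hence the energy the compressor must remove, is unchanged.

```python
def time_to_warm(heat_capacity, t_low=2.0, t_high=5.0, t_ambient=20.0,
                 leak_coeff=1.0, dt=1e-3):
    """Time for the interior to drift from t_low up to the thermostat set point."""
    temp, elapsed = t_low, 0.0
    while temp < t_high:
        heat_in = leak_coeff * (t_ambient - temp) * dt  # Newton-law heat leak
        temp += heat_in / heat_capacity
        elapsed += dt
    return elapsed

empty = time_to_warm(heat_capacity=1.0)
loaded = time_to_warm(heat_capacity=2.0)   # loaded fridge: double heat capacity

# Twice the heat leaks in before the compressor restarts, but it takes twice
# as long, so the average heat flow (and energy use) is the same.
print(loaded / empty)   # ~2.0
```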
Assuming the refrigerator only cycles on when the temp reaches the top of the graph then there is no difference between the two. Although twice as much heat energy must be put into the loaded refrigerator, the simulation shows that it takes exactly twice as long. That is, the average rate of heat flow is identical. So, the answer to the question depends on how the refrigerator works. If the refrigerator is time interval or heat energy interval based, a loaded fridge takes more energy to maintain a cold temperature. If the fridge is purely thermostat based, there is no difference in energy consumption. | {
"source": [
"https://physics.stackexchange.com/questions/61445",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/20633/"
]
} |
61,449 | I'm working through a derivation for Curie paramagnetism and hope someone could help clarify a couple of steps. The way that makes sense to me (although now I have seen the wikipedia derivation below I realise this way is pretty long) is to not take any high temperature approximations until near the end of the derivation where I have: $M=g_J\mu_B[(J+1/2)\coth[g_J\mu_B\beta(J+1/2)]-\frac{1}{2}\coth(g_J\mu_B\beta/2)]$ now to get to the curie susceptibility it seems that when taking the high T limit of the above expression the leading $\frac{1}{x}$ term of the coth expansion is ignored and the second $\frac{1}{3}x$ term is considered (this pops out the correct answer $\chi_{curie}=\frac{n(g_J\mu_B)^2}{3}\frac{J(J+1)}{k_BT}$). I can't find or think of any sensible reason for this apart from when we take the limit of zero B the infinity is just a constant which has no temperature dependence (which is what we're interested in) so we happily ignore it. The wikipedia method below almost makes sense aside from the last equality where I can't follow how they've simplified the sums (I can see that you could drop the first parts of the sums as they have no H dependence so won't matter when it comes to finding $\chi_{curie}$ but this still doesn't seem to work) $\bar{m}=\frac{\sum\limits_{M_{J}=-J}^{J}{M_{J}g_{J}\mu _{B}e^{{M_{J}g_{J}\mu _{B}H}/{k_{B}T}\;}}}{\sum\limits_{M_{J}=-J}^{J}{e^{{M_{J}g_{J}\mu _{B}H}/{k_{B}T}\;}}}\simeq g_{J}\mu _{B}\frac{\sum\limits_{M_{J}=-J}^{J}{M_{J}\left( 1+{M_{J}g_{J}\mu _{B}H}/{k_{B}T}\; \right)}}{\sum\limits_{M_{J}=-J}^{J}{\left( 1+{M_{J}g_{J}\mu _{B}H}/{k_{B}T}\; \right)}}=\frac{g_{J}^{2}\mu _{B}^{2}H}{k_{B}T}\frac{\sum\limits_{-J}^{J}{M_{J}^{2}}}{\sum\limits_{M_{J}=-J}^{J}{\left( 1 \right)}}$ (Copied from Wikipedia paramagnetism article) | | {
"source": [
"https://physics.stackexchange.com/questions/61449",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/22876/"
]
} |
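Returning to the Curie-paramagnetism question above (entry 61,449): a quick symbolic check (my own sketch) shows that the leading $1/x$ pieces of the two coth expansions are not really ignored; they cancel between the two terms, and the surviving linear pieces produce exactly the $J(J+1)/3$ Curie form.

```python
import sympy as sp

# Small-beta (high-temperature) expansion of
#   M = g*[(J + 1/2)*coth(g*b*(J + 1/2)) - (1/2)*coth(g*b/2)],
# writing g for g_J*mu_B and b for the small expansion parameter.
b, g, J = sp.symbols('b g J', positive=True)
half = sp.Rational(1, 2)
M = g * ((J + half) * sp.coth(g * b * (J + half)) - half * sp.coth(g * b / 2))

leading = sp.series(M, b, 0, 2).removeO()
# The 1/b pole terms of the two coth expansions cancel; what survives is the
# Curie form  M ~ g^2 * b * J*(J + 1) / 3.
print(sp.simplify(leading - g**2 * b * J * (J + 1) / 3))   # 0
```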
61,452 | I often read about s-wave and p-wave superconductors. In particular a $p_x + i p_y$
superconductor - often mentioned in combination with topological superconductors. I understand that the overall Cooper pair wavefunction may have orbital angular momentum = 0 (s-wave)
or orbital angular momentum = 1 (p-wave) where the first one is spherically symmetric. Now what does the splitting in a real ($p_x$) and imaginary ($p_y$) part mean? Why
is it written in this form and why is that important (e.g. for zero Majorana modes)? | Symmetry of the superconducting gap. First of all, a bit of theory. Superconductivity appears due to the
Cooper pairing of two electrons, making non-trivial correlations between
them in space. The correlation is widely known as the gap parameter $\Delta_{\alpha\beta}\left(\mathbf{k}\right)\propto\left\langle c_{\alpha}\left(\mathbf{k}\right)c_{\beta}\left(-\mathbf{k}\right)\right\rangle $ (the proportionality is merely a convention that will not matter for
us) with $\alpha$ and $\beta$ the spin indices, $\mathbf{k}$ some
wave vector, and $c$ the fermionic destruction operator. $\Delta$ corresponds to the order parameter associated to the general recipe
of second order phase transition proposed by Landau. Physically, $\Delta$ is the energy gap at the Fermi energy created by the Fermi surface
instability responsible for superconductivity. Since it is a correlation function between two fermions, $\Delta$ has to satisfy the Pauli exclusion principle, which imposes that $\Delta_{\alpha\beta}\left(\mathbf{k}\right)=-\Delta_{\beta\alpha}\left(-\mathbf{k}\right)$ . You can derive this property from the anti-commutation relation of the fermion operators and the definition of $\Delta_{\alpha\beta}\left(\mathbf{k}\right)$ if you wish.
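As a quick numerical sanity check of this antisymmetry constraint (my own sketch; the toy gap functions below are arbitrary illustrative choices): the standard parametrization $\Delta(\mathbf{k}) = [\Delta_0(\mathbf{k}) + \mathbf{d}(\mathbf{k})\cdot\boldsymbol{\sigma}]\,(\mathbf{i}\sigma_2)$ with $\Delta_0$ even and $\mathbf{d}$ odd in $\mathbf{k}$ satisfies $\Delta_{\alpha\beta}(\mathbf{k})=-\Delta_{\beta\alpha}(-\mathbf{k})$ automatically.

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def gap(k):
    kx, ky, kz = k
    delta0 = kx**2 + ky**2 - 2.0 * kz**2          # arbitrary even function of k
    d = [kx + 1j * ky, 1j * (kx + 1j * ky), kz]   # arbitrary odd functions of k
    return (delta0 * np.eye(2) + d[0] * s1 + d[1] * s2 + d[2] * s3) @ (1j * s2)

k = np.array([0.3, -0.7, 0.2])
# Pauli principle: Delta_{ab}(k) must equal -Delta_{ba}(-k)
print(np.allclose(gap(k), -gap(-k).T))   # True
```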
When there is no spin-orbit coupling, both the spin and the momentum
are good quantum numbers (you need an infinite system for the second, but this
is of no importance here), and one can separate $\Delta_{\alpha\beta}\left(\mathbf{k}\right)=\chi_{\alpha\beta}\Delta\left(\mathbf{k}\right)$ with $\chi_{\alpha \beta}$ a spinor matrix and $\Delta\left(\mathbf{k}\right)$ a function.
Then, there are two possibilities: $\chi_{\alpha\beta}=-\chi_{\beta\alpha}\Leftrightarrow\Delta\left(\mathbf{k}\right)=\Delta\left(-\mathbf{k}\right)$, a situation referred to as spin-singlet pairing; and $\chi_{\alpha\beta}=\chi_{\beta\alpha}\Leftrightarrow\Delta\left(\mathbf{k}\right)=-\Delta\left(-\mathbf{k}\right)$, referred to as spin-triplet pairing. Singlet includes $s$ -wave, $d$ -wave, ... terms, triplet includes
the famous $p$ -wave superconductivity (among others, like $f$ -wave, ...). Since the normal situation (say, the historical BCS one) was for singlet
pairing, and because only the second Pauli $\sigma_{2}$ matrix is
antisymmetric, one conventionally writes the order parameter as $$
\Delta_{\alpha\beta}\left(\mathbf{k}\right)=\left[\Delta_{0}\left(\mathbf{k}\right)+\mathbf{d}\left(\mathbf{k}\right)\boldsymbol{\cdot\sigma}\right]\left(\mathbf{i}\sigma_{2}\right)_{\alpha\beta}
$$ where $\Delta_{0}\left(\mathbf{k}\right)=\Delta_{0}\left(-\mathbf{k}\right)$ encodes the singlet component of $\Delta_{\alpha\beta}\left(\mathbf{k}\right)$ and $\mathbf{d}\left(\mathbf{k}\right)=-\mathbf{d}\left(-\mathbf{k}\right)$ is a vector encoding the triplet state. Now the main important point: what is the exact $\mathbf{k}$ -dependency
of $\Delta_{0}$ or $\mathbf{d}$ ? This is a highly non-trivial question,
to some extent still unanswered. There is a common consensus supposing
that the symmetry of the lattice plays a central role for this question.
I highly encourage you to open the book by Mineev and Samokhin (1998), Introduction to unconventional superconductivity , Gordon and
Breach Science Publishers, to have a better idea about that point. The $p_{x}+\mathbf{i}p_{y}$ superconductivity. For what bothers you, the $p_{x}+\mathbf{i}p_{y}$ superconductivity
is the superconducting theory based on the following "choice" $\Delta_{0}=0$ , $\mathbf{d}=\left(k_{x}+\mathbf{i}k_{y},\mathbf{i}\left(k_{x}+\mathbf{i}k_{y}\right),0\right)$ such that one has $$
\Delta_{\alpha\beta}\left(\mathbf{k}\right)\propto\left(\begin{array}{cc}
1 & 0\\
0 & 0
\end{array}\right)\left(k_{x}+\mathbf{i}k_{y}\right)\equiv\left(k_{x}+\mathbf{i}k_{y}\right)\left|\uparrow\uparrow\right\rangle
$$ which is essentially a phase term (when $k_{x}=k\cos\theta$ and $k_{y}=k\sin\theta$ )
on top of a spin-polarized electron pair. This phase
accumulates around a vortex, and has non-trivial properties then. Note that the notation $\left|\uparrow\uparrow\right\rangle $ refers
to the spins of the electrons forming the Cooper pair. A singlet state
would have something like $\left|\uparrow\downarrow\right\rangle -\left|\downarrow\uparrow\right\rangle $ , and for $s$ -wave $\Delta_0$ is $\mathbf{k}$ independent, whereas $\mathbf{d}=0$ . Note that the $p$ -wave also refers to the angular momentum $\ell=1$ as you mentioned in your question. Then, in complete analogy
with conventional composition of angular momentum (here it's for two
electrons only), the magnetic moment can be $m=0,\;\pm1$ . The natural
spherical harmonics for these states are then $Y_{\ell,m}$ with $Y_{1,\pm1}\propto k_{x}\pm\mathbf{i}k_{y}$ and $Y_{1,0}\propto k_{z}$ , so it should be rather natural to find
the above mentioned "choice" for $\mathbf{d}\left(\mathbf{k}\right)$ .
I nevertheless say a "choice" since this is not a real choice:
the symmetry of the gap should be imposed by the material you consider,
even if it is not yet satisfactorily understood. Note also that only the state $m=+1$ appears in the $p_{x}+\mathbf{i}p_{y}$ superconductivity. You might wonder about the other magnetic momentum contributions... Well, they are discarded, being less favourable (having a lower transition temperature for instance) under specific conditions that you have to know / specify for a given material. Here you may argue about the Zeeman effect for instance, which polarises the Cooper pair. [NB: I'm not sure about the validity of this last remark.] Relation between $p_{x}+\mathbf{i}p_{y}$ superconductivity and emergent unpaired Majorana modes. Now, quickly, I'll try to answer your second question: why is this
state important for emergent unpaired Majorana fermions in the vortex excitations? To understand that, one has to remember that the emergent unpaired
Majorana modes in superconductors are non-degenerate particle-hole
protected states at zero-energy (in the middle of the gap if you prefer).
Particle-hole symmetry comes along with superconductivity, so we already
validate one point of our check list. To make non-degenerate mode,
one has to fight against the Kramers degeneracy. That's the reason
why we need spin-triplet state. If you would have a singlet state
Cooper pair stuck in the vortex, it would have been degenerate, and
you would have been unable to separate the Majorana modes, see also Basic questions in Majorana fermions for more details about the difference between Majorana modes and
unpaired Majorana modes in condensed matter. A more elaborate treatment about the topological aspect of $p$ -wave
superconductivity can be found in the book by Volovik, G. E. (2003), Universe in a Helium Droplet , Oxford University Press, available
freely from the author's website http://ltl.tkk.fi/wiki/Grigori_Volovik . Note that Volovik mainly discusses superfluids, for which $p$ -wave has been observed in $^{3}$ He. The $p_{x}+\mathbf{i}p_{y}$ superfluidity is also called the $A_{1}$ -phase [Volovik, section 7.4.8]. There is no known $p$ -wave superconductor to date. Note also that the two above-mentioned books (Samokhin and Mineev, Volovik) are
not strictly speaking introductory materials for the topic of superconductivity.
More basics are in the de Gennes, Tinkham, or Schrieffer books (they are all named blabla... superconductivity blabla... ). | {
"source": [
"https://physics.stackexchange.com/questions/61452",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/23291/"
]
} |
61,884 | In modern electromagnetism textbooks, electric fields in the presence of stationary currents are assumed to be conservative, $$
\nabla \times E~=~0
~.$$ Using this we get$$
E_{||}^{\text{out}}~=~E_{||}^{\text{in}}
~,$$ which means we have the same amount of electric field just outside of the wire! Is this correct? Is there any experimental proof? | Outside a current-carrying conductor, there is, in fact, an electric field. This is discussed, for example, in " Surface charges on circuit wires and resistors play three roles " by J. D. Jackson, in American Journal of Physics – July 1996 – Volume 64, Issue 7, pp. 855. To quote Norris W. Preyer quoting Jackson: Jackson describes the three roles of surface charges in circuits: to maintain the potential around the circuit, to provide the electric field in the space around the circuit, and to assure the confined flow of current. Experimental verification was provided by Jefimenko several decades ago. A modern experimental demonstration is provided by Rebecca Jacobs, Alex de Salazar, and Antonio Nassar, in their article " New experimental method of visualizing the electric field due to surface charges on circuit elements ", in American Journal of Physics – December 2010 – Volume 78, Issue 12, pp. 1432. | {
"source": [
"https://physics.stackexchange.com/questions/61884",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/21108/"
]
} |
61,899 | Firstly I think shades of this question have appeared elsewhere (like here , or here ). Hopefully mine is a slightly different take on it. If I'm just being thick please correct me. We always hear about the force of gravity being the odd-one-out of the four forces. And this argument, whenever it's presented in popular science at least, always hinges on the relative strength of the forces. Or for a more in depth picture this excellent thread . But, having had a single, brief semester studying general relativity, I'm struggling to see how it is viewed as a force at all. A force, as I understand it, involves the interaction of matter particles with each other via a field. An energy quantisation of the field is the force carrying particle of the field. In the case of gravity though, particles don't interact with one another in this way. General relativity describes how space-time is distorted by energy. So what looked to everyone before Einstein like two orbiting celestial bodies, bound by some long distance force was actually two lumps of energy distorting space-time enough to make their paths through 3D space elliptical. Yet theorists are still very concerned with "uniting the 4 forces". Even though that pesky 4 th force has been well described by distortions in space time. Is there a reason for this that is understandable to a recent physics graduate like myself? My main points of confusion: Why is gravity still viewed as a force? Is the interaction of particles with space time the force-like interaction? Is space-time the force field? If particles not experiencing EM/weak/strong forces merely follow straight lines in higher-dimensional space (what I understand geodesics to be) then how can there be a 4 th force acting on them? Thanks to anyone who can help shed some light on this for me! | Gravity is viewed as a force because it is a force. A force $F$ is something that makes objects of mass $m$ accelerate according to $F=ma$. 
The Moon or the ISS orbiting the Earth or a falling apple are accelerated by a particular force that is linked to the existence of the Earth and we have reserved the technical term "gravity" for it for 3+ centuries. Einstein explained this gravitational force, $F=GMm/r^2$, as a consequence of the curved spacetime around the massive objects. But it's still true that: Gravity is an interaction mediated by a field and the field also has an associated particle, exactly like the electromagnetic field. The field that communicates gravity is the metric tensor field $g_{\mu\nu}(x,y,z,t)$. It also defines/disturbs the relationships for distances and geometry in the spacetime but this additional "pretty" interpretation doesn't matter. It's a field in the very same sense as the electric vector $\vec E(x,y,z,t)$ is a field. The metric tensor has a higher number of components but that's just a technical difference. Much like the electromagnetic fields may support wave-like solutions, the electromagnetic waves, the metric tensor allows wave-like solutions, the gravitational waves. According to quantum theory, the energy carried by frequency $f$ waves isn't continuous. The energy of electromagnetic waves is carried in units, photons, of energy $E=hf$. The energy of gravitational waves is carried in the units, gravitons, that have energy $E=hf$. This relationship $E=hf$ is completely universal. In fact, not only "beams" of waves may be interpreted in terms of these particles. Even static situations with a force in between may be explained by the action of these particles – photons and gravitons – but they must be virtual, not real, photons and gravitons. Again, the situations of electromagnetism and gravity are totally analogous. You ask whether the spacetime is the force field. To some extent Yes, but it is more accurate to say that the spacetime geometry, the metric tensor, is the field. 
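As a back-of-the-envelope check that this Newtonian force law indeed accounts for the Moon's orbit (my own sketch, using standard textbook values): the gravitational acceleration $GM/r^2$ at the Moon's distance matches the Moon's centripetal acceleration $\omega^2 r$ to about one percent.

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M_earth = 5.972e24     # kg
r = 3.844e8            # mean Earth-Moon distance, m
T = 27.32 * 86400.0    # sidereal month, s

a_gravity = G * M_earth / r**2               # from F = G M m / r^2
a_centripetal = (2.0 * math.pi / T)**2 * r   # from circular-orbit kinematics

print(a_gravity, a_centripetal)   # both ~2.7e-3 m/s^2
```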
Concerning your last question, indeed, one may describe the free motion of a probe in the gravitational field by saying that the probe follows the straightest possible trajectories. But where these straightest trajectories lead – and, for example, whether they are periodic in space (orbits) – depends on what the gravitational field (spacetime geometry) actually is. So instead of thinking about the trajectories as "straight lines" (which is not good as a universal attitude because the spacetime itself isn't "flat" i.e. made of mutually orthogonal straight uniform grids), it's more appropriate to think about the trajectories in a coordinate space and they're not straight in general. They're curved and the degree of curvature of these trajectories depends on the metric tensor – the spacetime geometry – the gravitational force field. To summarize, gravity is a fundamental interaction just like the other three. The only differences between gravity and the other three forces are an additional "pretty" interpretation of the gravitational force field and some technicalities such as the higher spin of the messenger particle and non-renormalizability of the effective theory describing this particle. | {
"source": [
"https://physics.stackexchange.com/questions/61899",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/23158/"
]
} |
61,918 | If light is passed through two polarizing filters before arriving at a target, and both of the filters are oriented at 90° to each other, then no light will be received at the target. If a third filter is added between the first two, oriented at a 45° angle (as shown below), light will reach the target. Why is this the case? As I understand it, a polarized filter does nothing except filter out light--it does not alter the light passing through in any way. If two filters exist that will eliminate all of the light, why does the presence of a third, which should serve only to filter out additional light, actually act to allow light through? | This link: http://alienryderflex.com/polarizer/ has an excellent explanation; much better than anything I could write here. Essentially, it says that this occurs because the 45 degree filter outputs a projection of the vertical rays at 45 degrees. This, in turn, has a horizontal component, which the final filter projects in its output. | {
"source": [
"https://physics.stackexchange.com/questions/61918",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/12907/"
]
} |
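The projection argument in entry 61,918 above can be checked numerically (my own sketch, using ideal polarizers and light that is already polarized along 0°): a crossed pair transmits nothing, while inserting a 45° filter in between transmits a quarter of the intensity.

```python
import numpy as np

def transmitted(intensity, angles_deg):
    """Intensity after a chain of ideal linear polarizers (input polarized at 0 deg)."""
    amp = np.array([np.sqrt(intensity), 0.0])       # field amplitude along 0 deg
    for a in np.radians(angles_deg):
        axis = np.array([np.cos(a), np.sin(a)])
        amp = axis * np.dot(axis, amp)              # project field onto polarizer axis
    return float(np.dot(amp, amp))

print(transmitted(1.0, [90]))        # ~0: the crossed pair blocks everything
print(transmitted(1.0, [45, 90]))    # ~0.25: the extra 45 deg filter lets light through
```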
62,664 | Ohm's law states that the relationship between current ($I$), voltage ($V$), and resistance ($R$) is $$I = \frac{V}{R}$$ However, superconductors cause the resistance of a material to go to zero, and as I understand it, as $R \to 0$, $I \to \infty$. Does this present a problem for Ohm's law? | Ohm's law is generally NOT correct; it's called a law for historical reasons only! It's a law in the same sense in which Hooke's law is a law... it holds only for certain systems under certain conditions, but it's widely known because it's simple and linear! It's not just superconductors: diodes are a neat everyday example of Ohm's law failing to hold. But it fails for every material under sufficiently extreme circumstances. Check out this I-V graph for a diode. | {
"source": [
"https://physics.stackexchange.com/questions/62664",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/17173/"
]
} |
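The diode counterexample in the answer above can be made quantitative with the Shockley ideal-diode equation, $I = I_s\,(e^{V/V_T} - 1)$. The parameter values below are illustrative assumptions, not measurements of any particular device:

```python
import math

# Shockley ideal-diode law: I = I_s * (exp(V / V_T) - 1).
I_s = 1e-12       # saturation current in amperes (assumed typical value)
V_T = 0.02585     # thermal voltage kT/q near room temperature, volts

for V in (0.3, 0.5, 0.7):
    I = I_s * (math.exp(V / V_T) - 1)
    print(f"V = {V:.1f} V  ->  I = {I:.2e} A,  V/I = {V / I:.2e} ohm")
# The effective "resistance" V/I falls by orders of magnitude as V grows,
# so no single constant R satisfies I = V/R.
```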
62,755 | Sound can't be polarized because vibration of that type can't be polarized, i.e., it can't be limited or controlled by any barriers, and so polarization is not possible for it. This is what my teacher answered when I asked the question. But I didn't understand what he meant by "the vibration can't be controlled or limited." Does the phrase "can't be limited or controlled" make sense here? Moreover, can anybody explain it to me in more detail and more clearly? | It sounds like your teacher's explanation might have been a little misleading. The reason sound can't be polarised is that it is a longitudinal wave , unlike light which is a transverse wave . (Those links have some animated diagrams that should help to make the difference clear.) "Transverse" means that if a beam of light is coming towards you, the electromagnetic field is vibrating either from side to side or up and down. Unpolarised light is doing a mixture of those two things, but a polarising filter puts it into a more "pure" state, so that it's only going side to side, or only going up and down. (Or diagonally or whatever. There's also a third possibility, called circular polarisation , which is a special combination of the two.) On the other hand, "Longitudinal" means that if a sound wave is coming towards you, the air molecules are vibrating forwards and backwards, not side to side or up and down. Sound waves cannot be polarised because they don't have any side-to-side or up-and-down motion, only front-to-back. | {
"source": [
"https://physics.stackexchange.com/questions/62755",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/21068/"
]
} |
62,974 | Inspired by the wording of this answer , a thought occurred to me. If a photon and a neutrino were to race along a significant stretch of our actual galaxy, which would win the race? Now, neutrinos had better not be going faster than the speed of light in vacuum . However, an energetic enough neutrino can have a velocity arbitrarily close to $c$. Say we took a neutrino from a typical core-collapse supernova. It would have a speed
$$ v_\nu = (1 - \epsilon_\nu) c $$
for some small $\epsilon_\nu > 0$. What is the order of magnitude for $\epsilon_\nu$? At the same time, photons can also travel slower than $c$. The interstellar medium is not completely devoid of matter, and in fact much of this matter is ionized plasma. As such, it should have a plasma frequency $\omega_\mathrm{p}$, and so it should effectively have an index of refraction depending on the ratio $\omega/\omega_\mathrm{p}$. Then the speed of a photon will be
$$ v_\gamma = (1 - \epsilon_\gamma) c, $$
where $\epsilon_\gamma$ is in general frequency-dependent. What is the order of magnitude for this deviation? I know it comes into play at radio frequencies, where in fact even the variation of $v_\gamma$ with frequency is detected: Pulses from pulsars suffer dispersion as they travel over hundreds to thousands of parsecs to reach us. For simplicity, let's assume there are no obstructions like giant molecular clouds or rogue planets to get in the way of the photon. Is it possible that some photons will be outpaced by typical neutrinos? How big is this effect, and how does it depend on photon frequency and neutrino energy? | Cute question! For a neutrino with mass $m$ and energy $E\gg m$, we have $v=1-\epsilon$, where $\epsilon\approx (1/2)(m/E)^2$ (in units with $c=1$). IceCube has detected neutrinos with energies on the order of 1 PeV, but that's exceptional. For neutrinos with mass 0.1 eV and an energy of 1 PeV, we have $\epsilon\sim10^{-32}$. The time of flight for high-energy photons has been proposed as a test of theories of quantum gravity. A decade ago, Lee Smolin was pushing the idea that loop quantum gravity predicted measurable vacuum dispersion for high-energy photons from supernovae. The actual results of measurements were negative: http://arxiv.org/abs/0908.1832 . Photons with energies as high as 30 GeV were found to be dispersed by no more than $\sim 10^{-17}$ relative to other photons. What this tells us is that interactions with the interstellar medium must cause $\epsilon \ll 10^{-17}$, or else those interactions would have prohibited such an experiment as a test of LQG. According to WP, the density of the interstellar medium varies by many order of magnitude, but assuming that it's $\sim 10^{-22}$ times the density of ordinary matter, we could guess that it causes $\epsilon\sim 10^{-22}$. This would be consistent with the fact that it wasn't considered important in the tests of vacuum dispersion. 
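The order-of-magnitude estimates above are easy to reproduce from $\epsilon \approx \tfrac{1}{2}(m/E)^2$; a quick sketch using the same illustrative 0.1 eV neutrino mass:

```python
# 1 - v/c ~ (1/2)(m/E)^2 for an ultrarelativistic particle of mass m, energy E.
m_eV = 0.1  # illustrative neutrino mass from the text, in eV

def epsilon(E_eV):
    return 0.5 * (m_eV / E_eV) ** 2

print(f"E = 1 PeV:  epsilon ~ {epsilon(1e15):.0e}")  # order 1e-32
print(f"E = 10 GeV: epsilon ~ {epsilon(1e10):.0e}")  # order 1e-22
```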
For a neutrino with a mass of 0.1 eV to have $\epsilon\sim 10^{-22}$, it would have to have an energy of 10 GeV. This seems to be within but on the high end of the energy scale for radiation emitted by supernovae. So I think the answer is that it really depends on the energy of the photon, the energy of the neutrino, and the density of the (highly nonuniform) interstellar medium that the particles pass through. | {
"source": [
"https://physics.stackexchange.com/questions/62974",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
63,164 | From a discussion in the DMZ (security stack exchange's chat room - a place where food and drink are important topics) we began to question the difference between how ice and whisky stones work to cool drinks. Both are frozen, but when ice is placed in a drink it slowly melts, using energy from the drink, thus cooling it. But whisky stones don't melt, so how do they cool the drink? | Ice cubes have three distinct cooling effects: The cube, initially at sub-zero temperature, absorbs some heat to reach fusion point (0⁰C). The cube absorbs more heat to switch phase: it takes some energy to turn 1 kg of ice at 0⁰C into 1 kg of liquid water at 0⁰C. The water absorbs some heat to become warmer than 0⁰C. The three effects occur more or less successively, although not necessarily simultaneously throughout the ice cube. But the idea remain the same. For ice, the bulk of the cooling comes from the melting. Let's put some figures on it. Heat capacity of ice is 2.06 kJ·kg -1 ·K -1 , meaning it takes 2.06 kJ to transform 1 kg of ice at -12⁰C into 1 kg of ice at -11⁰C. For liquid water, that's 4.217 kJ·kg -1 ·K -1 . The latent heat , i.e. energy used for turning ice into liquid water (at constant temperature) is 333 kJ·kg -1 . Imagine that you have some beverage at room temperature, which you want to lower to 8⁰C with ice cubes. The ice cubes come from the freezer and are initially at -18⁰C. The three cooling effects amount to, per kg of ice: Raising ice temperature to 0⁰C: 18×2.06 = 37.08 kJ. Melting the ice: 333 kJ. Raising the water temperature to 8⁰C: 8×4.217 = 33.736 kJ. So, in this example, the melting contributes to about 82% of the cooling. Non-melting stones work only on heat capacity. So they are effective only insofar as a material with high heat capacity is used -- but, in practice, water (and ice) have a quite high heat capacity, higher than stones, so the cooling effect of such stones is necessarily quite reduced compared to ice cubes. 
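The per-kilogram arithmetic in this example can be bundled into a few lines (same figures: ice starting at -18 ⁰C, drink cooled to 8 ⁰C):

```python
# Cooling contributions per kg of ice, using the values quoted above.
c_ice = 2.06      # kJ/(kg*K)
c_water = 4.217   # kJ/(kg*K)
L_fusion = 333.0  # kJ/kg

warm_ice = 18 * c_ice       # -18 °C ice up to 0 °C
melt = L_fusion             # phase change at 0 °C
warm_water = 8 * c_water    # meltwater up to 8 °C

total = warm_ice + melt + warm_water
print(f"total: {total:.1f} kJ/kg, melting share: {melt / total:.0%}")
```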
On the other hand, since there is no dilution effect, you can put a lot of stones in your glass. Reusable ice cubes are actually much better at cooling things, because they do melt -- but they do so internally, in a sealed envelope, thus not spilling into your drink. Since they use latent heat of phase transitions, they are as good as true ice cubes. Although they do lack in aesthetics. But what business have you torturing perfectly fine Whisky with unnatural coolness ? | {
"source": [
"https://physics.stackexchange.com/questions/63164",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2498/"
]
} |
63,177 | 1) First of all, let us consider a particle of light, also known as a photon. One of the interesting properties of photons is that they have momentum and yet have no mass. This was established in the 1850s by James Clerk Maxwell. However, if we recall our basic physics, we know that momentum is made up of two components: mass and velocity. How can a photon have momentum and yet not have a mass? Einstein’s great insight was that the energy of a photon must be equivalent to a quantity of mass and hence could be related to the momentum. 2) Einstein’s thought experiment runs as follows. First, imagine a stationary box floating in deep space. Inside the box, a photon is emitted and travels from the left towards the right. Since the momentum of the system must be conserved, the box must recoil to the left as the photon is emitted. At some later time, the photon collides with the other side of the box, transferring all of its momentum to the box. The total momentum of the system is conserved, so the impact causes the box to stop moving. 3) Unfortunately, there is a problem. Since no external forces are acting on this system, the centre of mass must stay in the same location. However, the box has moved. How can the movement of the box be reconciled with the centre of mass of the system remaining fixed? 4) Einstein resolved this apparent contradiction by proposing that there must be a ‘mass equivalent’ to the energy of the photon. In other words, the energy of the photon must be equivalent to a mass moving from left to right in the box. Furthermore, the mass must be large enough so that the system centre of mass remains stationary. My questions:
1) I'm not able to grasp the concept of centre of mass in paragraph (3).
2) What's the center of mass of the system of the box and photon?
3) If no external forces are acting on the system, does the location of the center of mass remain fixed? Then what does it mean for the location of the center of mass to be fixed if the box has moved?
4) What's the relation between the mass being large enough and the center of mass to remain stationary in paragraph 4? | {
"source": [
"https://physics.stackexchange.com/questions/63177",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/20263/"
]
} |
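For reference, the first-order bookkeeping behind the thought experiment in the preceding question (a sketch of the standard treatment, with box mass $M$, box length $L$, and photon energy $E$):

```latex
\begin{align*}
  p_{\gamma} &= \frac{E}{c}, \qquad
  M v = \frac{E}{c} \;\Rightarrow\; v = \frac{E}{Mc},\\
  \Delta x &= v\,t \approx \frac{E}{Mc}\cdot\frac{L}{c} = \frac{EL}{Mc^{2}}
  \quad \text{(recoil of the box while the photon is in flight)},\\
  m L &= M\,\Delta x = \frac{EL}{c^{2}}
  \;\Rightarrow\; m = \frac{E}{c^{2}}
  \quad \text{(fixed centre of mass, with mass $m$ carried across the box)}.
\end{align*}
```

The last line is the "mass equivalent" of paragraph 4: demanding a fixed centre of mass forces $m = E/c^2$.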
63,383 | Free neutrons are known to undergo beta decay with a half-life of slightly above 10 minutes . Binding with other nucleons stabilizes the neutrons in an atomic nucleus, but only if the fraction of protons is high enough (at least a third or so). But what keeps a neutron star stable against beta decay? Apparently, this is extra pressure due to gravity in contrast to "negative pressure" of proton Coulomb repulsion in a nucleus, but how do we know that this is enough to stabilize the degenerate neutronic fluid? I am aware of a closely related question but not really happy with the answers there. There is a lot of dazzling detail here , but I am looking for an answer suitable for an 8-year-old with enhanced curiosity towards astrophysics. | Conservation of energy and electron degeneracy pressure. For the neutron to decay you must have
$$ n \to p + e^- + \bar{\nu}$$
or
$$ n + \nu \to p + e^-. $$ In either case that electron is going to stay around, but in addition to the neutrons being in a degenerate gas, the few remaining electrons are also degenerate, which means that adding a new one requires giving it momentum above the Fermi surface and the energy is not available. | {
"source": [
"https://physics.stackexchange.com/questions/63383",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1739/"
]
} |
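The Pauli-blocking argument in the answer above can be given a rough scale. The electron fraction below is purely an illustrative assumption; the point is only that the electron Fermi energy at such densities dwarfs the roughly 0.78 MeV released by free-neutron beta decay:

```python
import math

# Rough scale of the electron Fermi energy in a neutron star, treating the
# electrons as an ultrarelativistic degenerate gas (E_F ~ p_F * c).
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
eV = 1.602176634e-19     # J

n_nucleon = 1.6e44       # nucleons per m^3, ~nuclear saturation density
x_e = 0.01               # assumed electron fraction (illustrative only)
n_e = x_e * n_nucleon

p_F = hbar * (3 * math.pi**2 * n_e) ** (1 / 3)  # Fermi momentum
E_F_MeV = p_F * c / eV / 1e6

print(f"electron Fermi energy ~ {E_F_MeV:.0f} MeV (vs ~0.78 MeV from n decay)")
```

With these assumed numbers the Fermi energy comes out at tens of MeV, so adding one more electron costs far more energy than the decay releases.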
63,394 | I'm not sure what the exact definition of a Casimir operator is. In some texts it is defined as the product of generators of the form:
$$X^2=\sum X_iX^i$$ But in other parts it is defined as an operator that commutes with every generator of the Lie group. Are these definitions equivalent?
If the answer is yes, how could I prove it (I'm thinking of using Jacobi's identity)? | I'll give you enough hints to complete the proof yourself. If you're desperate, I'm following the notes by Zuber, which are available online, IIRC. Let's start with some notation: pick some basis $\{t_a\}$ of your Lie algebra, then
$$ [t_a,t_b] = C_{ab}{}^c t_c$$
defines the structure constants. If you define
$$ g_{ab} = C_{ad}{}^e C_{be}{}^d,$$
then this gives you an inner product
$$(X,Y) := g_{ab} x^a y^b, \quad X = x^a t_a \text{ and } Y = y^b t_b.$$
Indeed this "Killing form" is related to the adjoint representation, as
$$(X,Y) = \text{tr}(\text{ad } X \,\text{ad } Y)$$
(exercise!). Similarly,
$$g_{ab} =\text{tr}(\text{ad } t_a \text{ ad } t_b).$$
In this language, the Casimir $c_2$ is given by
$$ c_2 = g^{ab} t_a t_b, \qquad \text{ so}$$
$$[c_2,t_e] = g^{ab} [t_a t_b,t_e].$$
Now you need to do some basic work (expand the first factor of the commutator, work out the resulting brackets) and you'll see that this gives you
$$ \ldots = g^{ab} g^{dk} C_{bek} \{ t_a,t_d \}.$$
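A sketch of the omitted steps, using $[t_a t_b, t_e] = t_a[t_b,t_e] + [t_a,t_e]\,t_b$:

```latex
\begin{align*}
  [c_2, t_e] &= g^{ab}\bigl(t_a[t_b,t_e] + [t_a,t_e]\,t_b\bigr)
              = g^{ab} C_{be}{}^{d}\, t_a t_d + g^{ab} C_{ae}{}^{d}\, t_d t_b\\
             &= g^{ab} C_{be}{}^{d}\,\{t_a, t_d\}
              = g^{ab} g^{dk} C_{bek}\,\{t_a, t_d\}.
\end{align*}
```

In the second line the dummy pair $a \leftrightarrow b$ is relabelled in the last term (using the symmetry of $g^{ab}$), and in the final equality the index is lowered with $g^{dk}$.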
This vanishes (why?), so you're done! Edit (regarding Peter Kravchuk's remark): when you write $c_2 \sim t_a t_b$, it's not really part of the Lie algebra. The only multiplication that "works" in Lie algebras is the commutator $[t_a,t_b]$. So these guys live in some richer structure, which is called the "universal enveloping algebra." Indeed you often hear that "the Casimir is a multiple of the identity matrix," but the identity matrix is seldom part of the Lie algebra (the identity in a Lie algebra is 0). In practice everything is self-evident, because you do calculations in some vector space. | {
"source": [
"https://physics.stackexchange.com/questions/63394",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/21817/"
]
} |
63,558 | This is really bugging me. When you look up some educational texts about the lives of stars, this is what you find out: Gravity creates the temperature and pressure to start fusion reactions. The fusion proceeds to heavier and heavier cores ending with iron, which remains in the centre of the star. One moment, all light cores are depleted and the gravity wins over the power of the fusion reactions, now absent. The core of the star collapses into a high-density object, which may vary depending on the star mass. And the top layers of the star explode . And I just cannot find a clear explanation why. According to what I imagine, the top layers of the star should just fall into the collapsing core. Is that because of Newton's third law? Or do the stars have some need to end with a cool boom? | There are lots of possible ways that stars can end their life, even in the subset of cases where the end is violent. Eloff has given an excellent answer, but I wanted to add a few points. Summary (tl;dr): You need the right conditions (mass, angular momentum, metallicity, etc) to produce a proto-neutron-star which is able to resist complete collapse to a black-hole. The bounce from hitting that proto-neutron-star surface, and the heating from neutrinos, is what drives the explosion of material. Radioactivity is eventually the source of the light we see from supernovae. The basic picture for producing a supernova from a massive star 1 : The star burns progressively heavier elements on shorter timescales until producing iron (Fe) on the timescale of seconds. After iron, fusion in the core ceases, and pressure support is lost. Gravity is unhindered, and the star begins dynamical collapse . As the Fe-core contracts, electron-capture begins to convert protons + electrons into neutrons, emitting MeV neutrinos . The Fe-core, now largely composed of neutrons, is stabilized against further collapse by neutron degeneracy pressure at nuclear densities.
Material further out, which is still collapsing, hits the incredibly hard proto-neutron-star surface - causing a bounce (see video analog) : the launch of a powerful shockwave outwards through the star. Because the neutrinos produced from electron-capture are so energetic (as dmckee points out), and because the densities are so high - the neutrinos are able to deposit significant amounts of energy into the outer-material, accelerating it beyond escape velocities . This is the supernova explosion . Due to the hot, dense, nucleon-rich nature of the ejecta, r(apid)-process nucleosynthesis produces radioactive Nickel (Ni) and Cobalt (Co) . After roughly tens of days, the expanding supernova ejecta becomes optically thin - allowing the radiation produced by Ni and Co decay to escape - this causes the optical emission we call a supernova . from http://arxiv.org/abs/astro-ph/0612072 Why does a supernova explode? Not all massive stars are believed to produce supernovae when they die. In the following figure (which is intended to convey the basic idea - but not necessarily the quantitative aspects), regions titled 'direct black-hole formation' are regions of initial mass where the neutron-degeneracy pressure (stage '4' above) is insufficient to halt collapse. The Fe core is massive enough that it continues collapsing until a black-hole is formed, and most of the material further out is rapidly accreted. The region in this plot between about 8 and 35 solar masses is where the vast majority of observed supernovae are believed to come from. To answer why supernovae explode: Consider the schematic process outlined above. The reason why some deaths-of-massive-stars explode and others don't is that you need the right conditions (mass, angular momentum, metallicity, etc) to produce a proto-neutron-star which is able to resist complete collapse. The bounce from hitting that proto-neutron-star surface, and the heating from neutrinos, is what drives the explosion of material.
Radioactivity is eventually the source of the light we see from supernovae. from http://rmp.aps.org/abstract/RMP/v74/i4/p1015_1 Footnote 1: This discussion is constrained to 'core collapse' supernovae - the collapse of massive stars, observed as type Ib, Ic, and type II supernovae Additional References Basically any paper by or with Stan Woosley, e.g. Woosley & Janka 2006 - The Physics of Core-Collapse Supernovae Lecture Notes by Dmitry A. Semenov - "Basics of Star Formation and Stellar Nucleosynthesis" | {
"source": [
"https://physics.stackexchange.com/questions/63558",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/20590/"
]
} |
63,811 | I'm not sure if this is the right place to ask this question. I realise that this may be a borderline philosophical question at this point in time, therefore feel free to close this question if you think that this is a duplicate or inappropriate for this forum. Anyway, I'm an electrical engineer and I have some basic knowledge of quantum mechanics. I know that Schrödinger's equation is deterministic. However, quantum mechanics is much deeper than that and I would like to learn more. If this question is not clearly answerable at this point, then can anyone point out some recognized sources that try to answer this question. I would appreciate it if the source is scientific and more specifically, is related to quantum theory. | You're right; the Schrödinger equation induces a unitary time evolution, and it is deterministic. Indeterminism in Quantum Mechanics is given by another "evolution" that the wavefunction may experience: wavefunction collapse. This is the source of indeterminism in Quantum Mechanics, and is a mechanism that is still not well understood at a fundamental level (this is often called the "Measurement Problem"). If you want a book that talks about these kinds of problems, I suggest "Decoherence and the Appearance of a Classical World in Quantum Theory" by Joos, Zeh et al; it is a good book on this and other modern topics in Quantum Mechanics. It's understandable with some effort, assuming you know basic things about Hilbert Spaces and the basic mathematical tools of QM. | {
"source": [
"https://physics.stackexchange.com/questions/63811",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/24197/"
]
} |
64,240 | I posted this picture of someone on a zipline on Facebook. One of my friends saw it and asked this question , so he could try to calculate the speed at which someone on the zipline would be going when they hit the water. The only answer, which was accepted, includes the disclaimer, " Assuming the pulley being used to slide to be friction less.Though not possible.Also the rope is assumed to be in-extensible and straight. " I used to have a zipline of about the same length in my back yard as a kid and even when I was young, I noticed that we could never straighten the line completely, even when it was slack, we could not make it completely straight. And, naturally, once weight was added, there was a curve where the weight pulled the line down. One of the comments from the member providing the answer is " Well i can show you why the string cannot be ever straight. " I know that from experience. We could never make it completely straight with no sagging. I asked the reason for this and was directed to a book on Amazon. Having just spent $50 on a number of books for summer reading, my book budget is gone for a while. So can someone answer that? Why will the line never be straight when it's set up (and when there is no load on it)? | Imagine a heavy cord raised off the ground between two blocks. Rather than consider all of the mass pieces of the rope, and the forces on them, we can simplify the problem a little bit by considering a slightly different one. The cord can be represented by a heavy ball (in the middle of the cord) connected by two massless strings to the blocks. From experience, we know that this mass/string combination forms a triangle with two sides that are the same length and one other side. Each of the slanted sides forms some angle with respect to the ground. When you pull tighter on the strings (in the picture of the tightrope walker below, for example) the ball (or tightrope walker) goes up a little.
The angle between the angled sides and the ground gets smaller. But the tighter you make the strings, and the higher the ball goes, the more the tension in the string (which always points along a string) is going into pulling the ball sideways. So consider this. If the ball were hanging at its highest point, that is, with the strings forming a straight line instead of a triangle, then the tension force of the rope would be pulling totally horizontally on the ball. But this doesn't cancel the force of gravity, which pulls the ball downward -- regardless of the tension in the rope. So the ball will sink a little. Therefore there can never be a configuration where a ball hangs on a straight rope. The rope must have a kink in it. This is the same reason why if you have a "straight" rope and a tightrope walker walks on it, the rope must sag a little. See the diagram below. In a static problem, all of the components of the arrows (left-right, up-down) have to add up to zero. Notice how the tension arrows point a tiny bit upward. For a heavy rope (a rope with mass but without a ball) the hanging doesn't form a kink, but a gently sloping curve. The principle is the same. | {
"source": [
"https://physics.stackexchange.com/questions/64240",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/24233/"
]
} |
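The force balance in the answer above can be made quantitative. With a load of mass $m$ at the middle and the rope making angle $\theta$ with the horizontal, vertical equilibrium gives $2T\sin\theta = mg$; a sketch with illustrative numbers:

```python
import math

# T = m*g / (2*sin(theta)): the tension needed to support a mid-rope load.
m, g = 10.0, 9.81  # illustrative 10 kg load

for theta_deg in (10, 1, 0.1, 0.01):
    T = m * g / (2 * math.sin(math.radians(theta_deg)))
    print(f"theta = {theta_deg:>5}°  ->  T ≈ {T:,.0f} N")
# T diverges as theta -> 0, so a loaded (or merely heavy) rope can never
# be pulled perfectly straight with any finite tension.
```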
64,250 | If $r=r(t)$, why is $\frac{r'(t)}{(r(t))^2}$ = $\frac{1}{r(t)}$ where $'$ denotes the derivative? I saw it in a lecture. Can you please explain? | {
"source": [
"https://physics.stackexchange.com/questions/64250",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/24375/"
]
} |
64,530 | First post to this site, and I've got at most a high school background in physics - I really appreciate any answer, but I may not be able to follow you if you're too advanced. I suppose this goes for regular planes too, but I'm especially interested in supersonic planes. I read some reports in the news about various people working on commercial supersonic travel, but there were a lot of comments attached to these news posts listing essentially what were physics constraints that would make such travel severely cost ineffective: "skin friction," causing high heat and stress, leading to different metaled (thus more expensively researched/manufactured) and heavier (thus less efficient) airplanes. increased drag, requiring more fuel to overcome. sonic booms. I'll leave alone sonic booms - I understand as well as I can why this could be hard to engineer around, and why various countries have made generating them over land illegal. The other two I don't get. After spending some time on wikipedia this evening, if I've got this right, it seems that, holding the shape of the airplane constant, skin friction, lift, and drag are each equal to a scalar times density times velocity squared. Density drops as the altitude raises, which seems to mean to me that you could keep drag, lift, and skin friction constant when increasing speed by merely increasing altitude. I assume this is right, so I guessed that the "gas mileage" issue had to do with needing to burn too much more gas to achieve thrust required for the higher velocity. And yet, the wikipedia article on jet engines states that the concorde was actually more fuel efficient than some conventional subsonic turbofan engines used in 747's. Given all this, what did I get wrong? Why can't supersonic planes just fly higher to be as cost effective, or more, than conventional subsonic commercial jetliners, using the same construction materials? 
Relatedly, why do current jets have service ceilings and max speeds (assuming it's not just about the high stress of breaking through the sound barrier)? Thanks! | There are lots of questions here that I will try to answer, hopefully I'll get to them all... Creature Comforts It's hard to "just fly higher" when you consider passenger planes. Supersonic military aircraft like the SR-71 do fly ridiculously high. Its service ceiling is 85,000 feet! But it has the advantage that it doesn't need to keep anybody but the pilot comfortable. The issue deals with pressurization. As you increase altitude, the aircraft must also be able to withstand a larger pressure differential if the cabin will be kept at a comfortable pressure. Most very high altitude military aircraft do not pressurize the cabin; rather, the pilot wears a pressure suit. Imagine if you had to suit up for a flight to visit relatives! It's not that we can't build a plane that can withstand the pressure difference, but doing so would require very heavy or very expensive materials. The former makes it much harder to fly while the latter makes it not very commercially viable. Increased Drag There's a reason going past the speed of sound was called "breaking the sound barrier." There is a magic number called the Drag Divergence Mach Number (Mach number is the fraction of the speed of sound at which you are traveling). Beyond this number, the drag increases tremendously until you are supersonic, at which point it decreases quite rapidly (but is still higher than subsonic). Therein lies one of the biggest problems. You need very powerful engines to break the barrier, but then they don't need to be very powerful on the other side of it. So it's inefficient from a weight/cost standpoint because the engines are so over-engineered at cruise conditions (note: this does not imply the engines are inefficient on their own). Increased Heat There's no denying that it will get hot.
It is storied that the SR-71 would get so hot and the metal would expand so much, that when it was fully fueled on the runway, the fuel would leak out of the gaps in the skin. The plane would have to take off, fly supersonic to heat the skin enough to close the gaps in the metal, then be refueled mid-air again because it used it all up. Then it would go about its mission. At the Mach numbers for a commercial aircraft, the heating would not be as extreme. But it would require some careful engineering, which makes it more expensive. So why can't it just fly higher? Ignoring international law for a moment, there are several reasons why flying higher just isn't as viable: Cabin pressure issues Emergency procedures: Let's assume for a moment we could pressurize the cabin. In the event it loses pressure, what do we do? The normal procedure would be to dive down to a safe altitude, which takes considerably longer from 60,000 feet than 30,000 feet. Drag is proportional to density, but so is lift. This means to fly higher, an aircraft needs bigger wings. But bigger wings mean more drag, so it gets into a vicious cycle. There is a sweet-spot that can be optimized for an ideal balance, but that means that "just go higher" may not be a good option. Ceilings and Speeds This one doesn't have to do entirely with legal issues, but that's part of it. A service ceiling is defined as the maximum altitude at which the aircraft can operate and maintain a specified rate of climb. This is entirely imposed by the aircraft design (laws may require a minimum ceiling, but not a maximum... although they may restrict a plane from flying at the maximum). Likewise, an absolute ceiling is the altitude at which the aircraft can maintain level flight at maximum thrust. Naturally, as the plane burns fuel and becomes lighter, it needs less lift to stay at the same altitude. But the lift force is based solely on the geometry and speed, so actual lift will exceed what is needed and the plane will climb.
As it climbs, the air density drops and so does lift. This means as the plane flies, its absolute ceiling actually increases. Now for the speeds... Commercial aircraft fly as close as they can to the Drag Divergence Mach Number because it's the most economical point to fly. The plane goes as fast as it can go without the drag coefficient increasing tremendously. This is usually around Mach 0.8. But they can, and often do, go faster than that. It's not unusual for an airplane that is delayed taking off to land on time or even early. This happens because they can still go faster than they normally operate (not significantly of course, perhaps Mach 0.83-0.85). It may cost some more fuel because the drag coefficient is likely increasing as it approaches Mach 1, but a delayed plane is more expensive for the airline than the extra fuel used (maybe not in direct dollars, but in PR, reputation, etc.) | {
"source": [
"https://physics.stackexchange.com/questions/64530",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/24466/"
]
} |
64,555 | I have never understood the meaning of the sentence "rolling without slipping". Let me explain. I'll give an example. Yesterday my mechanics professor introduced some concepts of rotational dynamics. When he came to talk about spinning wheels he said something like: "If the wheel is rolling without slipping, what's the velocity of the point at the base of the wheel?? It is... zero! Convince yourself that the velocity must be zero. Since if it wasn't zero, the wheel wouldn't be rolling without slipping. So the wheel is rolling without slipping if and only if the point at the base has velocity zero, i.e. if and only if the tangential speed equals the speed of the center of mass." Well, what I really don't understand is this: is the "rolling without slipping" condition defined as "Point at the base has zero speed"? If not, what's the proper definition for that kind of motion? Looking across the internet, I have found more or less the same ideas expressed in the quotation. Furthermore, if it was a definition, then it would be totally unnecessary to say "convince yourself" and improper to talk about necessary and sufficient conditions. I'd like to point out that I'm not really confused about the mathematics behind this or with the meaning of the condition above. What puzzles me is why are those explanations always phrased as if the condition $v'=0$ (where $v'$ is the relative velocity between the point at the base and the surface) is some necessary and sufficient condition to be "rolling without slipping". Seems to me that this is exactly the definition of "rolling without slipping" and not an "iff". Any help is appreciated, thanks. | You can always decompose a motion like this into two parts: (1) rolling without slipping and (2) slipping without rolling. What is slipping without rolling? It means the object moves uniformly in one direction along the surface, with no angular velocity about the object's own center of mass.
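That decomposition is easy to make concrete: the material point at the base of a wheel of radius $R$ moves at $v_{\rm cm} - \omega R$ relative to the ground, and "rolling without slipping" is exactly the case where this vanishes. A small numeric sketch (illustrative numbers):

```python
R = 0.25                         # wheel radius (m)

def contact_point_speed(v_cm, omega):
    """Speed of the material point at the base: translation minus rotation."""
    return v_cm - omega * R

rolling = contact_point_speed(5.0, 20.0)   # v_cm = omega*R: rolling without slipping
sliding = contact_point_speed(5.0, 0.0)    # omega = 0: slipping without rolling

print(rolling, sliding)   # 0.0 5.0
```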
For instance, a box that is pushed along the ground can easily slip without rolling. Unfortunately, most people seem to assume that you can infer some physically important information from your own notion of what slipping is, without having to define it. I believe this is done to try to connect to intuition, but in the process, things get a lot more nebulous and ill-defined. To me, it's easier to think about this in terms of the object's rotation--it was never obvious to me that the point in contact with the ground doesn't have velocity at the instant it touches. I prefer to think instead that an object that rolls without slipping travels 1 circumference along the ground for every full rotation it makes. An object that travels more than this distance (or that doesn't rotate at all) is slipping in some way. Then, eventually, we can get to the notion that the point in contact during rolling cannot have nonzero velocity through whatever logical or physical arguments necessary. But as is usual in physics, it's not really clear what definition should be considered "fundamental" with other results stemming from it. This emphasizes that physics is not built axiomatically. | {
"source": [
"https://physics.stackexchange.com/questions/64555",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/21006/"
]
} |
64,869 | Hm, this just occurred to me while answering another question: If I write the Hamiltonian for a harmonic oscillator as
$$H = \frac{p^2}{2m} + \frac{1}{2} m \omega^2 x^2$$
then wouldn't one set of possible basis states be the set of $\delta$-functions
$\psi_{x_0}(x) = \delta(x - x_0)$, and that indicates that the size of my Hilbert space is that of $\mathbb{R}$. On the other hand, we all know that we can diagonalize $H$ by going to the occupation number states, so the Hilbert space would be $|n\rangle, n \in \mathbb{N}_0$, so now the size of my Hilbert space is that of $\mathbb{N}$ instead. Clearly they can't both be right, so where is the flaw in my logic? | This question was first posed to me by a friend of mine; for the subtleties involved, I love this question. :-) The "flaw" is that you're not counting the dimension carefully. As other answers have pointed out, $\delta$-functions are not valid $\mathcal{L}^2(\mathbb{R})$ functions, so we need to define a kosher function which gives the $\delta$-function as a limiting case. This is essentially done by considering a UV regulator for your wavefunctions in space. Let's solve the simpler "particle in a box" problem, on a lattice. The answer for the harmonic oscillator will conceptually be the same. Also note that solving the problem on a lattice of size $a$ is akin to considering rectangular functions of width $a$ and unit area, as regulated versions of $\delta$-functions. The UV-cutoff (smallest position resolution) becomes the maximum momentum possible for the particle's wavefunction and the IR-cutoff (roughly max width of wavefunction which will correspond to the size of the box) gives the minimum momentum quantum and hence the difference between levels. Now you can see that the number of states (finite) is the same in position basis and momentum basis. The subtlety is when you take the limit of small lattice spacing. Then the max momentum goes to "infinity" while the position resolution goes to zero -- but the position basis states are still countable! In the harmonic oscillator case, the spread of the ground state (maximum spread) should correspond to the momentum quantum i.e. the lattice size in momentum space.
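The lattice argument can be checked directly: discretize the box into $N$ sites and Fourier transform, and the position basis and the momentum basis contain the same finite number of states. A small sketch in pure Python ($N$ and the chosen site are arbitrary):

```python
import cmath

N = 8                                   # lattice sites = position basis states
psi = [0.0] * N
psi[3] = 1.0                            # regulated "delta function": one lattice site

# Discrete Fourier transform: the same state expressed in the momentum basis
phi = [sum(psi[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N)) / N**0.5
       for k in range(N)]

print(len(psi), len(phi))               # 8 8: same finite number of states either way
norm = sum(abs(c)**2 for c in phi)
print(round(norm, 12))                  # 1.0: the change of basis is unitary
```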
The physical intuition When we consider the set of possible wavefunctions, we need them to be reasonably behaved i.e. only a countable number of discontinuities. In effect, such functions have only a countable number of degrees of freedom (unlike functions which can be very badly behaved). IIRC, this is one of the necessary conditions for a function to be fourier transformable. ADDENDUM: See @tparker's answer for a nice explanation with a slightly more rigorous treatment justifying why wavefunctions have only countable degrees of freedom. | {
"source": [
"https://physics.stackexchange.com/questions/64869",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/661/"
]
} |
64,872 | Why does each individual photon have such a low amount of energy? I am hit by photons all day and I find it amazing that I am not vaporized. Am I simply too physically big for the photons to harm me much, or perhaps the Earth's magnetic field filters out enough harmful causes such as gamma rays? | Individual photons are very small and don't have much energy. If you put a lot of them together in one place you can hurt somebody - by simply supplying enough power to melt an object (ask any spy on a table underneath a laser beam). There is another very odd feature of photons . Although lots of them can provide a lot of energy and heat an object, it takes an individual photon of enough energy to break a chemical bond. So while a single high-energy ultraviolet photon can break a molecule in your skin and cause damage, a billion lower energy visible photons hitting the same point can't break that single bond. Even though they together carry much more energy, it is the energy that is delivered in a single photon that matters in chemistry. Fortunately the Earth's atmosphere shields us from the photons with enough energy to break most chemical bonds. | {
"source": [
"https://physics.stackexchange.com/questions/64872",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/24579/"
]
} |
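As a numeric aside to the photon answer above, the per-photon energy $E = hc/\lambda$ makes the bond-breaking point quantitative (the constants are standard values; the wavelengths are illustrative):

```python
h = 6.626e-34             # Planck constant (J s)
c = 2.998e8               # speed of light (m/s)
eV = 1.602e-19            # joules per electron-volt

def photon_energy_eV(wavelength_m):
    return h * c / wavelength_m / eV     # E = h*c/lambda, converted to eV

green = photon_energy_eV(550e-9)   # visible green light
uv = photon_energy_eV(250e-9)      # short-wavelength UV
print(f"green: {green:.2f} eV, UV: {uv:.2f} eV")
# Typical covalent bonds cost a few eV to break, so one ~5 eV UV photon can do
# what any number of ~2 eV visible photons, arriving one at a time, cannot.
```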
64,916 | There was an atomic bomb dropped in Hiroshima, but today there are residents in Hiroshima. However, in Chernobyl, where there was a nuclear reactor meltdown, there are no residents living today (or very few). What made the difference? | While they work on the same principles, the detonation of an atomic bomb and the meltdown of a nuclear plant are two very different processes. An atomic bomb is based on the idea of releasing as much energy from a runaway nuclear fission reaction as possible in the shortest amount of time. The idea being to create as much devastating damage as possible immediately so as to nullify enemy forces or intimidate the opposing side into surrender. Both effectively ensuring the conflict ends quickly. Thus, it would be important that the area bombed does not remain uninhabitable long after the two sides make peace (Ok, that's my own speculation, but I think it's a nice ideal to work with). A nuclear reactor is based on the idea of producing low amounts of power using a controlled and sustained nuclear fission reaction. The point being that it does not release all of the energy at once and slower reaction processes are used to ensure maximum lifetime of the nuclear fuel. Moving beyond the ideas behind each, the radioactive isotopes created in an atomic blast are relatively short-lived due to the nature of the blast and the fact that they are normally detonated above the ground to increase destructive power of the concussive wave. Most radioactive materials from an atomic blast have a maximum half-life of 50 years. However, in the Chernobyl meltdown, most of the actual exploding was due to containment failure and explosions from steam build-up. Chunks of fuel rods and irradiated graphite rods remained intact. Furthermore, the reaction has, both initially and over its life, produced a far higher amount of radioactive materials. 
This is partly due to the nature of the reaction, the existence of intact fuel to this date, and that the explosion happened at ground level. A fission explosion at ground level creates more radioactive isotopes due to neutron activation in soil. Furthermore, the half-lives of the isotopes made in the Chernobyl accident (because of the nature of the process) are considerably longer. It is estimated that the area will not be habitable for humans for another 20 000 years (Edit: to prevent further debate I rechecked this number. That is the time before the area within the cement sarcophagus - the exact location of the blast - becomes safe. The surrounding area varies between 20 years and several hundred due to uneven contamination). Long story short, an atomic bomb is, like other bombs, designed to achieve the most destructive force possible over a short amount of time. The reaction process that accomplishes this ends up creating short-lived radioactive particles, which means the initial radiation burst is extremely high but falls off rapidly. Whereas a nuclear reactor is designed to utilize the full extent of fission in producing power from a slow, sustained reaction process. This reaction results in the creation of nuclear waste materials that are relatively long-lived, which means that the initial radiation burst from a meltdown may be much lower than that of a bomb, but it lasts much longer. In the global perspective: an atomic bomb may be hazardous to the health of those nearby, but a meltdown spreads radiation across the planet for years. At this point, everyone on Earth has averaged an extra 21 days of background radiation exposure per person due to Chernobyl. This is one of the reasons Chernobyl was a level 7 nuclear event. All of this contributes to why even though Hiroshima had an atomic bomb detonate, it is Chernobyl (and Fukushima too I'll wager) that remains uninhabitable. Most of the relevant info for this can be found in Wikipedia.
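The half-life comparison can be put into numbers with the decay law $N(t) = N_0 (1/2)^{t/T_{1/2}}$; a quick sketch using the standard half-lives of two isotopes mentioned above (purely to illustrate why "longer-lived" matters):

```python
def fraction_remaining(t_years, half_life_years):
    """Exponential decay: N(t)/N0 = (1/2) ** (t / T_half)."""
    return 0.5 ** (t_years / half_life_years)

t = 50  # years after release
print(f"Cs-137 (T_half ~ 30 y):    {fraction_remaining(t, 30.2):.1%} remaining")
print(f"I-129  (T_half ~ 15.7 My): {fraction_remaining(t, 1.57e7):.4%} remaining")
```

After 50 years roughly a third of the cesium is left, while the iodine-129 inventory is essentially untouched.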
One further thing: As pointed out, one thing I forgot to mention is that the amount of fissionable material in an atomic bomb is usually considerably less than the amount housed in a nuclear reactor. A standard nuclear reactor can consume $50000lb$ ($\sim22700kg$) of fuel in a year, whereas Little Boy held significantly less (around $100-150lb$ or $45-70kg$). Obviously, having more fissionable material drastically increases the amount of radiation that can be output as well as the amount of radioactive isotopes. For example, the meltdown at Chernobyl released 25 times more Iodine-129 isotope than the Hiroshima bomb (an isotope that is relatively long-lived and dangerous to humans) and 890 times more Cesium-137 (not as long lived, but still a danger while it is present). | {
"source": [
"https://physics.stackexchange.com/questions/64916",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/24548/"
]
} |
65,165 | Consider the following statement: If we know that the system is in the ground state, then the temperature is zero. How does this follow from the statistical definition of temperature? | Mad props for a cool question. I'm going to justify essentially the converse of the statement because it doesn't make much sense to talk about the temperature of a system that is in a pure state. Let's assume that we're talking about a quantum system with discrete energy spectrum (with no accumulation points) in thermal equilibrium. Let $\beta = 1/kT$ be the inverse temperature. Then recall that the Boltzmann distribution tells us that the population fraction of systems in the ensemble corresponding to energy $E_i$ is given by
$$
p_i = \frac{g_ie^{-\beta E_i}}{Z}
$$
where $g_i$ is the degeneracy of the energy level. In particular, note that the relative frequency with which energies $E_i$ and $E_j$ will be found in the ensemble is
$$
p_{ij}(\beta) = \frac{g_ie^{-\beta E_i}}{g_je^{-\beta E_j}} = \frac{g_i}{g_j}e^{-\beta(E_i - E_j)}
$$
In particular, let $i=0$ correspond to the ground level, then the frequency of any level relative to the ground level is
$$
p_{i0}(\beta) = \frac{g_i}{g_0}e^{-\beta(E_i - E_0)}
$$
Notice that since the ground level has the lowest energy by definition, we have $E_i - E_0 \geq 0$, but zero temperature corresponds to the limit $\beta \to \infty$, and we have
$$
\lim_{\beta \to \infty } p_{i0}(\beta) = \delta_{i0}
$$
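This limit is easy to check numerically; a sketch with made-up energy levels and degeneracies (all values illustrative):

```python
import math

E = [0.0, 1.0, 2.5, 4.0]     # energy levels, ground level first (arbitrary units)
g = [1, 2, 2, 3]             # made-up degeneracies

def populations(beta):
    """Boltzmann populations p_i = g_i exp(-beta (E_i - E_0)) / Z."""
    w = [gi * math.exp(-beta * (Ei - E[0])) for gi, Ei in zip(g, E)]
    Z = sum(w)
    return [wi / Z for wi in w]

for beta in (0.1, 1.0, 10.0, 100.0):
    print(beta, [round(p, 6) for p in populations(beta)])
# As beta -> infinity the ensemble collapses onto the ground level:
# at beta = 100 the populations are [1.0, 0.0, 0.0, 0.0] to machine precision.
```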
In other words, at zero temperature, every member of the ensemble must be in the ground energy level; the probability that a system in the ensemble will have any other energy becomes vanishingly small compared to the probability that a member of the ensemble has the lowest energy. Addendum - September 18, 2017 In response to a question in the comments about whether or not at zero temperature the system is in a pure state: Recall that a quantum state (density matrix) $\rho$ is said to be pure if and only if $\rho^2 = \rho$. We now show that as $T\to 0$, or equivalently as $\beta\to+\infty$, the thermal density matrix $\rho$ approaches a density matrix $\rho_*$ that is pure if the ground level is non-degenerate and not pure otherwise. We will rely on an argument quite similar in character to the one given above in which we compared the probabilities of finding a system in a given energy level when we approach zero temperature. For any positive integer $d$, let $I_d$ denote the $d\times d$ identity matrix. As above, we consider a system with discrete energy levels $E_0<E_1<\dots$ and with corresponding degeneracies $g_0, g_1, \dots$. Recall that the thermal density matrix, namely the density matrix for a system in thermal equilibrium with a heat bath at inverse temperature $\beta$, is given in the energy eigenbasis by: \begin{align}
\rho = \frac{1}{Z}
\begin{pmatrix}
e^{-\beta E_0}I_{g_0} & & & \\
& e^{-\beta E_1} I_{g_1} & & \\
& & e^{-\beta E_2} I_{g_2} & \\
& & &\ddots
\end{pmatrix}, \qquad Z = \sum_j g_j e^{-\beta E_j}.
\end{align} Let $i\geq 0$ be given, and let us concentrate on the scalar factor in front of the identity matrix $I_{g_i}$ in the $i^\mathrm{th}$ block of the density matrix: \begin{align}
\frac{e^{-\beta E_i}}{Z} = \frac{e^{-\beta E_i}}{\sum_j g_je^{-\beta E_j}} = \frac{e^{-\beta(E_i -E_0)}}{\sum_jg_j e^{-\beta(E_j - E_0)}} = \frac{e^{-\beta(E_i -E_0)}}{g_0 + \sum_{j>0}g_j e^{-\beta(E_j - E_0)}}.
\end{align} Therefore, we have \begin{align}
\lim_{\beta\to +\infty} \frac{e^{-\beta E_i}}{Z} = \frac{\lim_{\beta\to +\infty}e^{-\beta(E_i -E_0)}}{g_0 + \lim_{\beta\to +\infty}\sum_{j>0}g_j e^{-\beta(E_j - E_0)}} = \frac{\delta_{i0}}{g_0 + 0} = \frac{\delta_{i0}}{g_0}.
\end{align}
Care may need to be taken in the case of an infinite-dimensional Hilbert space in asserting that the limit of the sum in the denominator approaches zero since it's an infinite series. It follows that
\begin{align}
\rho_* = \lim_{\beta\to+\infty} \rho =
\frac{1}{g_0}\begin{pmatrix}
I_{g_0} & & & \\
& 0 & & \\
& & 0 & \\
& & &\ddots
\end{pmatrix}
\end{align} and therefore in particular $\rho_*^2 = \rho_*$ if and only if $g_0 = 1$. | {
"source": [
"https://physics.stackexchange.com/questions/65165",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/20327/"
]
} |
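As a numerical aside to the addendum above: the claim that the $\beta\to\infty$ thermal state is pure exactly when the ground level is non-degenerate ($g_0 = 1$) can be spot-checked. Since the thermal density matrix is diagonal in the energy eigenbasis, $\rho^2 = \rho$ is equivalent to the purity $\sum_i p_i^2$ equaling 1. A sketch with hand-picked spectra (purely illustrative):

```python
import math

def thermal_probs(energies, beta):
    """Diagonal of the thermal density matrix in the energy eigenbasis."""
    w = [math.exp(-beta * (E - min(energies))) for E in energies]
    Z = sum(w)
    return [x / Z for x in w]

def is_pure(probs, tol=1e-9):
    # For a diagonal rho, rho^2 = rho iff the purity sum(p^2) equals 1.
    return abs(sum(p * p for p in probs) - 1.0) < tol

beta = 200.0
nondeg = thermal_probs([0.0, 1.0, 2.0], beta)   # g0 = 1
deg = thermal_probs([0.0, 0.0, 1.0], beta)      # g0 = 2: degenerate ground level

print(is_pure(nondeg))   # True:  the limit state is the ground-state projector
print(is_pure(deg))      # False: the limit state is I/2 on the ground block
```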
65,177 | Once and for all: Is the preferred basis problem in the Everettian Interpretation of QM considered solved by decoherence or not?
There are a few people who claim that it's not, but it seems the vast majority of the literature says it has been solved by Zurek, Joos, Zeh, Saunders and Wallace. So which is true and why? | Unfortunately, physicists and philosophers disagree on what exactly the preferred basis problem is, and what would constitute a solution. Wojciech Zurek was my PhD advisor, and even he and I don't agree. I wish I could give you an objective answer, but the best I can do is state the problem as I see it. In my opinion, the most general version of the problem was best articulated by Adrian Kent and Fey Dowker near the end of their 1996 article "On the Consistent Histories Approach to Quantum Mechanics" in the Journal of Statistical Physics. Unfortunately, this article is long so I will try to quickly summarize the idea. Kent and Dowker analyzed the question of whether the consistent histories formalism provided a satisfactory and complete account of quantum mechanics (QM). Contrary to what is often said, consistent histories and many-worlds need not be opposing interpretations of quantum mechanics [1]. Instead, consistent histories is a good mathematical framework for rigorously identifying the branch structure of the wavefunction of the universe [2]. Most many-world'ers would agree that unambiguously describing this branch structure would be very nice (although they might disagree on whether this is "necessary" for QM to be a complete theory). In my opinion, the situation is almost exactly analogous to the question of whether an abstract formulation of classical mechanics (e.g. Lagrangian mechanics ) is satisfactory in the absence of a clear link between the mathematical formalism and our experiences. I could write down the math of Lagrangian mechanics very compactly, but it would not feel like a satisfactory theory until I told you how to link it up with your experiences (e.g.
this abstract real scalar x = the position coordinate of a baseball) and you could then use it to make predictions. Similarly, a unitarily evolving wavefunction of the universe is not useful for making predictions unless I give you the branch structure which identifies where you are in the wavefunction as well as the possible future, measurement-dependent versions of you. I would claim that the Copenhagen cookbook for making predictions that is presented in introductory QM books is a correct but incomplete link between the mathematical formalism of QM and our experiences; it only functions correctly when (1) the initial state of our branch and (2) the measurement basis are assumed (rather than derived). Anyways, Dowker and Kent argue that consistent histories might be capable of giving a satisfactory account of QM if only one could unambiguously identify the set of consistent histories describing the branch structure of our universe [3]. They point out that the method sketched by other consistent historians is often circular: the correct "quasi-classical" branch structure is said to be the one seen by some observer (e.g. the "IGUSes" of Murray Gell-Mann and Jim Hartle), but then the definition of the observer generally assumes a preferred set of quasi-classical variables. They argue that either we need some other principle for selecting quasi-classical variables, or we need some way to define what an observer is without appealing to such variables. Therefore, the problem of identifying the branch structure has not been solved and is still open. I like to call this "Kent's set-selection problem". I consider it the outstanding question in the foundations of quantum mechanics, and I think of the preferred basis problem as a sort of special case.
The reason I say special case is that the preferred basis problem answers the question: how does the wave function branch when there is a preferred decomposition into system and environment (or into system and measuring apparatus). However, the boundaries of what we intuitively identify as systems (like a baseball) are not always well-defined. (What happens as atoms are removed from the baseball one by one? When does the baseball cease to be a useful system?) In this sense, I say that the decoherence program as led by Zeh, Zurek, and others is an improvement but not a complete solution. Sorry that's not as clear as you would like, but that's the state of the field as I see it. There's more work to be done! [1] Of course, some consistent histories make ontological claims about how the histories are "real", where as the many-worlders might say that the wavefunction is more "real". In this sense they are contradictory. Personally, I think this is purely a matter of taste. [2] Note that although many-worlders may not consider the consistent histories formalism the only way possible to mathematically identify branch structure, I believe most would agree that if, in the future, some branch structure was identified using a completely different formalism, it could be described at least approximately by the consistent histories formalism. [3] They argue that this set should be exact , rather than approximate, but I think this is actually too demanding and most Everettians would agree. David Wallace articulates this view well. | {
"source": [
"https://physics.stackexchange.com/questions/65177",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9822/"
]
} |
65,191 | When we find the electric field between the plates of a parallel plate capacitor we assume that the electric field from both plates is $${\bf E}=\frac{\sigma}{2\epsilon_0}\hat{n.}$$ The factor of two in the denominator comes from the fact that there is a surface charge density on both sides of the (very thin) plates. This result can be obtained easily for each plate. Therefore when we put them together the net field between the plates is $${\bf E}=\frac{\sigma}{\epsilon_0}\hat{n}$$ and zero everywhere else. Here, $\sigma$ is the surface charge density on a single side of the plate, or $Q/2A$, since half the charge will be on each side. But in a real capacitor the plates are conducting, and the surface charge density will change on each plate when the other plate is brought closer to it. That is, in the limit that the two plates get brought closer together, all of the charge of each plate must be on a single side. If we let $d$ denote the distance between the plates, then we must have $$\lim_{d \rightarrow 0}{\bf E}=\frac{2\sigma}{\epsilon_0}\hat{n}$$ which disagrees with the above equation. Where is the mistake in this reasoning? Or more likely, do our textbook authors commonly assume that we are in this limit, and that this is why the conductor behaves like a perfectly thin charged sheet? | When discussing an ideal parallel-plate capacitor, $\sigma$ usually denotes the area charge density of the plate as a whole - that is, the total charge on the plate divided by the area of the plate. There is not one $\sigma$ for the inside surface and a separate $\sigma$ for the outside surface. Or rather, there is, but the $\sigma$ used in textbooks takes into account all the charge on both these surfaces, so it is the sum of the two charge densities. 
$$\sigma = \frac{Q}{A} = \sigma_\text{inside} + \sigma_\text{outside}$$ With this definition, the equation we get from Gauss's law is $$E_\text{inside} + E_\text{outside} = \frac{\sigma}{\epsilon_0}$$ where "inside" and "outside" designate the regions on opposite sides of the plate. For an isolated plate, $E_\text{inside} = E_\text{outside}$ and thus the electric field is everywhere $\frac{\sigma}{2\epsilon_0}$. Now, if another, oppositely charged plate is brought nearby to form a parallel plate capacitor, the electric field in the outside region (A in the images below) will fall to essentially zero, and that means $$E_\text{inside} = \frac{\sigma}{\epsilon_0}$$ There are two ways to explain this: The simple explanation is that in the outside region, the electric fields from the two plates cancel out. This explanation, which is often presented in introductory textbooks, assumes that the internal structure of the plates can be ignored (i.e. infinitely thin plates) and exploits the principle of superposition. The more realistic explanation is that essentially all of the charge on each plate migrates to the inside surface. This charge, of area density $\sigma$, is producing an electric field in only one direction, which will accordingly have strength $\frac{\sigma}{\epsilon_0}$. But when using this explanation, you do not also superpose the electric field produced by charge on the inside surface of the other plate. Those other charges are the terminators for the same electric field lines produced by the charges on this plate; they're not producing a separate contribution to the electric field of their own. Either way, it's not true that $\lim_{d\to 0} E = \frac{2\sigma}{\epsilon_0}$. | {
"source": [
"https://physics.stackexchange.com/questions/65191",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/11625/"
]
} |
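As a numeric aside to the capacitor answer above, the superposition explanation is easy to tabulate: each thin sheet contributes a field of magnitude $\sigma/2\epsilon_0$ pointing away from it (for positive $\sigma$), and the fields add between the plates and cancel outside. A sketch with illustrative numbers:

```python
eps0 = 8.854e-12    # vacuum permittivity (F/m)
sigma = 1e-6        # total charge per unit area on each plate (C/m^2), illustrative
d = 1e-3            # plate separation (m), illustrative

def sheet_field(x, x_sheet, s):
    """x-component of the field of an infinite thin sheet at x_sheet with
    area density s: magnitude s/(2*eps0), pointing away from the sheet."""
    return (s / (2 * eps0)) * (1 if x > x_sheet else -1)

def total_field(x):
    # positive plate at x = 0, negative plate at x = d
    return sheet_field(x, 0.0, sigma) + sheet_field(x, d, -sigma)

inside = total_field(d / 2)    # the two sheet fields add up to sigma/eps0
outside = total_field(-d)      # the two sheet fields cancel
print(inside, sigma / eps0)    # same number
print(outside)                 # 0.0
```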
65,197 | Consider the following situation. A certain quantity of ideal monatomic gas (say one mole) is confined in a cylinder by a piston and is maintained at constant temperature (say $T_0$ ) by thermal contact with a heat reservoir. Then the gas slowly expands from $V_1$ to $V_2$ while being held at the same temperature $T_0$ . Question : Is this process reversible or irreversible? Attempt : When the gas expands, the temperature must decrease, so the heat reservoir gives energy to the gas so the gas is maintained at the same temperature, right? Then if we do work on the gas so that it returns to the initial volume $V_1$ , we know that due to $\Delta T=0$ , then $\Delta U=0$ , right? So, the work done on the gas is going to transform itself to heat that would be absorbed by the heat reservoir. My question is: how do we know if the heat given by the heat reservoir is the same as the heat absorbed by it? If this is true, then I guess the process will be reversible, but if it's not true, would the process be irreversible? Or does it not matter, given that we have a heat reservoir? | | {
"source": [
"https://physics.stackexchange.com/questions/65197",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/15672/"
]
} |
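For the isothermal question above: since $\Delta U = 0$ for an ideal gas at constant $T$, the reversible heat exchanged in each step is $Q = W = nRT\ln(V_b/V_a)$, so the heat absorbed on slow expansion exactly matches the heat returned on slow compression. A quick numeric sketch (one mole; the volumes are illustrative):

```python
import math

R_gas = 8.314             # gas constant, J/(mol K)
n, T0 = 1.0, 300.0        # one mole at the reservoir temperature (illustrative)
V1, V2 = 0.010, 0.020     # volumes in m^3, illustrative

def heat_absorbed(Va, Vb):
    """Heat absorbed by the gas in a reversible isothermal step Va -> Vb.
    Delta U = 0 for an ideal gas at constant T, so Q = W = n R T ln(Vb/Va)."""
    return n * R_gas * T0 * math.log(Vb / Va)

Q_expand = heat_absorbed(V1, V2)     # > 0: gas takes heat from the reservoir
Q_compress = heat_absorbed(V2, V1)   # < 0: gas returns heat to the reservoir
print(Q_expand, Q_compress)          # equal magnitudes, opposite signs
```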
65,335 | I'm tutoring high school students. I've always taught them that: A charged particle moving without acceleration produces an electric as well as a magnetic field . It produces an electric field because it's a charge particle. But when it is at rest, it doesn't produce a magnetic field. All of a sudden when it starts moving, it starts producing a magnetic field. Why? What happens to it when it starts moving? What makes it produce a magnetic field when it starts moving? | If you are not well-acquainted with special relativity, there is no way to truly explain this phenomenon. The best one could do is give you rules steeped in esoteric ideas like "electromagnetic field" and "Lorentz invariance." Of course, this is not what you're after, and rightly so, since physics should never be about accepting rules handed down from on high without justification. The fact is, magnetism is nothing more than electrostatics combined with special relativity . Unfortunately, you won't find many books explaining this - either the authors mistakenly believe Maxwell's equations have no justification and must be accepted on faith, or they are too mired in their own esoteric notation to pause to consider what it is they are saying. The only book I know of that treats the topic correctly is Purcell's Electricity and Magnetism , which was recently re-released in a third edition . (The second edition works just fine if you can find a copy.) A brief, heuristic outline of the idea is as follows. Suppose there is a line of positive charges moving along the $z$-axis in the positive direction - a current. Consider a positive charge $q$ located at $(x,y,z) = (1,0,0)$, moving in the negative $z$-direction. We can see that there will be some electrostatic force on $q$ due to all those charges. But let's try something crazy - let's slip into $q$'s frame of reference. After all, the laws of physics had better hold for all points of view. 
Clearly the charges constituting the current will be moving faster in this frame. But that doesn't do much, since after all the Coulomb force clearly doesn't care about the velocity of the charges, only on their separation. But special relativity tells us something else. It says the current charges will appear closer together. If they were spaced apart by intervals $\Delta z$ in the original frame, then in this new frame they will have a spacing $\Delta z \sqrt{1-v^2/c^2}$, where $v$ is $q$'s speed in the original frame. This is the famous length contraction predicted by special relativity. If the current charges appear closer together, then clearly $q$ will feel a larger electrostatic force from the $z$-axis as a whole. It will experience an additional force in the positive $x$-direction, away from the axis, over and above what we would have predicted from just sitting in the lab frame. Basically, Coulomb's law is the only force law acting on a charge, but only the charge's rest frame is valid for using this law to determine what force the charge feels. Rather than constantly transforming back and forth between frames, we invent the magnetic field as a mathematical device that accomplishes the same thing. If defined properly, it will entirely account for this anomalous force seemingly experienced by the charge when we are observing it not in its own rest frame. In the example I just went through, the right-hand rule tells you we should ascribe a magnetic field to the current circling around the $z$-axis such that it is pointing in the positive $y$-direction at the location of $q$. The velocity of the charge is in the negative $z$-direction, and so $q \vec{v} \times \vec{B}$ points in the positive $x$-direction, just as we learned from changing reference frames. | {
"source": [
"https://physics.stackexchange.com/questions/65335",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/447/"
]
} |
65,339 | Let's consider it case by case: Case 1: Charged particle is at rest. It has an electric field around it. No problem. That is its property. Case 2: Charged particle started moving (it's accelerating). We were told that it starts radiating EM radiation. Why? What happened to it? What made it do this? Follow up question:
Suppose a charged particle is placed in uniform electric field. It accelerates because of electric force it experiences. Then work done by the electric field should not be equal to change in its kinetic energy, right? It should be equal to change in K.E + energy it has radiated in the form of EM Waves. But then, why don't we take energy radiated into consideration when solving problems? (I'm tutoring grade 12 students. I never encountered a problem in which energy radiated is considered.) How do moving charges produce magnetic field? | A diagram may help: Here, the charged particle was initially stationary, uniformly accelerated for a short period of time, and then stopped accelerating. The electric field outside the imaginary outer ring is still in the configuration of the stationary charge. The electric field inside the imaginary inner ring is in the configuration of the uniformly moving charge. Within the inner and outer ring, the electric field lines, which cannot break, must transition from the inner configuration to the outer configuration. This transition region propagates outward at the speed of light and, as you can see from the diagram, the electric field lines in the transition region are (more or less) transverse to the direction of propagation. Also, see this Wolfram demonstration: Radiation Pulse from an Accelerated Point Charge | {
"source": [
"https://physics.stackexchange.com/questions/65339",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/447/"
]
} |
65,340 | The slip-stick phenomenon is present all around us, be it the noise of car brakes or in earthquakes. But does it have any real-life application? | A diagram may help: Here, the charged particle was initially stationary, uniformly accelerated for a short period of time, and then stopped accelerating. The electric field outside the imaginary outer ring is still in the configuration of the stationary charge. The electric field inside the imaginary inner ring is in the configuration of the uniformly moving charge. Within the inner and outer ring, the electric field lines, which cannot break, must transition from the inner configuration to the outer configuration. This transition region propagates outward at the speed of light and, as you can see from the diagram, the electric field lines in the transition region are (more or less) transverse to the direction of propagation. Also, see this Wolfram demonstration: Radiation Pulse from an Accelerated Point Charge | {
"source": [
"https://physics.stackexchange.com/questions/65340",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/24753/"
]
} |
65,724 | I have read the thread regarding 'the difference between the operators $\delta$ and $d$ ' , but it does not answer my question. I am confused about the notation for change in Physics. In Mathematics, $\delta$ and $\Delta$ essentially refer to the same thing, i.e., change. This means that $\Delta x = x_1 - x_2 = \delta x$ . The difference between $\delta$ and $d$ is also clear and distinct in differential calculus. We know that $\frac{dy}{dx}$ is always an operator and not a fraction, whereas $\frac{\delta y}{\delta x}$ is an infinitesimal change. In Physics, however, the distinction is not as clear. Can anyone offer a clearer picture? | The symbol $\Delta$ refers to a finite variation or change of a quantity – by finite, I mean one that is not infinitely small. The symbols $d,\delta$ refer to infinitesimal variations or numerators and denominators of derivatives. The difference between $d$ and $\delta$ is that $dX$ is only used if $X$ without the $d$ is an actual quantity that may be measured (i.e. as a function of time) without any ambiguity about the "additive shift" (i.e. about the question which level is declared to be $X=0$). On the other hand, we sometimes talk about small contributions to laws that can't be extracted from a well-defined quantity that depends on time. An example, the first law of thermodynamics .
$$dU = \delta Q - \delta W$$
The left hand side has $dU$, the change of the total energy $U$ of the system that is actually a well-defined function of time. The law says that it is equal to the infinitesimal heat $\delta Q$ supplied to the system during the change minus the infinitesimal work $\delta W$ done by the system. All three terms are equally infinitesimal but there is nothing such as "overall heat" $Q$ or "overall work" $W$ that could be traced – we only determine the changes (flows, doing work) of these things. Also, one must understand the symbol $\partial$ for partial derivatives – derivatives of functions of many variables for which the remaining variables are kept fixed, e.g. $\partial f(x,y)/\partial x$ and similarly $y$ in the denominator. Independently of that, $\delta$ is sometimes used in the functional calculus for functionals – functions that depend on whole functions (i.e. infinitely many variables). In this context, $\delta$ generalizes $d$ and has a different meaning, closer to $d$, than $\delta$ in the example of $\delta W$ and $\delta Q$ above. Just like we have $dy=f'(x)dx$ for ordinary derivatives in the case of one variable, we may have $\delta S = \int_a^b dt\,C(t)\delta x(t)$ where the integral is there because $S$ depends on uncountably many variables $x(t)$, one variable for each value of $t$. In physics, one must be ready that $d,\delta,\Delta$ may be used for many other things. For example, there is a $\delta$-function (a distribution that is only non-vanishing for $x=0$) and its infinite-dimensional, functional generalization is called $\Delta[f(x)]$. That's a functional that is only nonzero for $f(x)=0$ for every $x$ and the integral $\int {\mathcal D}f(x) \,\Delta[f(x)]=1$. Note that for functional integrals (over the infinite-dimensional spaces of functions), the integration measure is denoted ${\mathcal D}$ and not $d$. | {
"source": [
"https://physics.stackexchange.com/questions/65724",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/24300/"
]
} |
65,794 | I have come across the equation which comes out of the nothing in Zettili's book Quantum mechanics concepts and applications p. 167: $$\psi(\vec{r},t) ~=~ \langle \vec{r} \,|\, \psi(t) \rangle.$$ How do I know that I get function of position and time if i take an inner product of a state vector with an vector $\vec{r}$? Where is the proof? | The proof is probably not the right word since the expression $\Psi(x,t) = \langle{x}|{\Psi(t)}\rangle$ is actually the definition of position space wave function. Basis in finite dimensional vector space Any vector $|v\rangle$ from some finite dimensional vector space $V(F)$ can be written as a linear combination of basis vectors $|e_{i}\rangle$ from an ordered basis $( |e_{i}\rangle )_{i=1}^{n}$
$$
|v\rangle = \sum\limits_{i}^{n} c_{i} |e_{i}\rangle \, , \quad (1)
$$
where $c_{i} \in F$. And we say that with respect to chosen ordered basis $( |e_{i}\rangle )_{i=1}^{n}$ any vector $|v\rangle$ can be uniquely represented by an ordered collection, or $n$-tuple of the coefficients in its linear expansion over the basis
$$
|v\rangle \longleftrightarrow ( c_{i} )_{i=1}^{n} \, .
$$ Finite dimensional inner product spaces Now if we have a special kind of a vector space - an inner product space - a vector space with an inner product, i.e. a map $I(|v\rangle, |w\rangle)$ of the following form
\begin{equation}
I(|v\rangle, |w\rangle): V(F) \times V(F) \to {F} \, ,
\end{equation}
with some properties defined for any two vectors $|v\rangle$ and $|w\rangle$ from the vectors space, we know how the coefficients $c_{i}$ looks like.
Taking the inner product of both sides of equation (1) with some basis vector $|e_{j}\rangle$ gives
$$
I(|e_{j}\rangle, |v\rangle) = \sum\limits_{i}^{n} c_{i} I(|e_{j}\rangle, |e_{i}\rangle) \, . \quad (2)
$$
Now since we can always orthonormalize our basis set so that
$$
I(|e_{j}\rangle, |e_{i}\rangle) = \delta_{ji} \, ,
$$
where $\delta_{ji}$ is the Kronecker delta
\begin{equation}
\delta_{ji} =
\left\{
\begin{matrix}
0, & \text{if } j \neq i \, ; \\
1, & \text{if } j = i \, .
\end{matrix}
\right.
\end{equation}
equation (2) becomes
$$
I(|e_{j}\rangle, |v\rangle) = c_{j} \, .
$$ That is basically it. Each coefficient $c_{i}$ in equation (1) in an inner product space is given by $I(|e_{i}\rangle, |v\rangle)$. Oh, and if the inner product space is complete, i.e. it is a Hilbert space, then we almost always use the following notation for an inner product
$$
I(|v\rangle, |w\rangle) = \langle v | w \rangle \, ,
$$
which is another story and so with respect to chosen ordered basis $( |e_{i}\rangle )_{i=1}^{n}$ any vector $|v\rangle$ in Hilbert space can be uniquely represented by an ordered collection, or $n$-tuple, of the coefficients $c_{i}$ given by $\langle e_{i} | v \rangle$
$$
|v\rangle \longleftrightarrow ( \langle e_{i} | v \rangle )_{i=1}^{n} \, .
$$ Infinite dimensional Hilbert space We have an infinite dimensional complex Hilbert space $H(\mathbb{C})$ and we would like to expand the state vector $|\Psi(t)\rangle$ over a set of eigenvectors of some self-adjoint operator which represents some observable. If the spectrum of the self-adjoint operator $\hat{A}$ is discrete then one can label the eigenvalues using some discrete variable $i$
\begin{equation}
\hat{A} |a_{i}\rangle = a_{i} |a_{i}\rangle \, ,
\end{equation}
and expansion of the state vector has the following form
$$
|\Psi(t)\rangle = \sum\limits_{i}^{\infty} c_{i}(t) |a_{i}\rangle, \quad \text{where} \quad c_{i}(t) = \langle a_{i} | \Psi(t) \rangle \, .
$$
So you have a set of discrete coefficients $c_{i}(t)$ which can be used to represent the state vector. Each $c_{i}$ is basically a complex number, but this number is different at different times and so it is written as $c_{i}(t)$. But if the spectrum of the self-adjoint operator is continuous then it is not possible to use a discrete variable to label the eigenvalues; rather, $a$ in the eigenvalue equation
\begin{equation}
\hat{A} |a\rangle = a |a\rangle \, ,
\end{equation}
should be interpreted as continuous variable which is used to label eigenvalues and corresponding eigenvectors and expansion of the state vector looks like
\begin{equation}
|\Psi(t)\rangle = \int\limits_{a_{min}}^{a_{max}} c(a,t) |a\rangle \, \mathrm{d}a, \quad \text{where} \quad c(a,t) = \langle a | \Psi(t) \rangle \, .
\end{equation}
So the coefficients in the expansion are given not by a set of complex numbers labeled using a discrete variable but rather by a complex-valued function of a continuous variable. But this function $c(a,t)$ plays the same role: it determines the coefficients in the expansion. This time, however, you need a coefficient for each and every value of the continuous variable $a$, and that's why they are given by a function. And again, these coefficients are different at different times. The position operator $\hat{X}$ has a continuous spectrum.
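As a toy numerical sketch of the discrete case above (a made-up three-dimensional example), the coefficients $c_{i} = \langle a_{i} | \Psi \rangle$ can be computed for a finite orthonormal basis and used to rebuild the state:

```python
import numpy as np

# Illustrative 3-dimensional "Hilbert space" with the standard orthonormal basis.
basis = [np.array([1, 0, 0], dtype=complex),
         np.array([0, 1, 0], dtype=complex),
         np.array([0, 0, 1], dtype=complex)]

psi = np.array([1 + 2j, 0.5j, -1.0], dtype=complex)   # arbitrary state vector

# c_i = <a_i | psi>  (np.vdot conjugates its first argument, as an inner product should)
coeffs = [np.vdot(a_i, psi) for a_i in basis]

# Expanding over the basis with these coefficients recovers the state.
reconstructed = sum(c * a_i for c, a_i in zip(coeffs, basis))

print(np.allclose(reconstructed, psi))  # True
```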
In the simplest case - one particle in one spatial dimension - the variable $x$ in
$$
\hat{X} |x\rangle = x |x\rangle \, ,
$$
represents the position of the particle and runs over all possible values of position in one spatial dimension, i.e. $x \in \mathbb{R}$, and the state vector is expanded over the set of eigenvectors $|x\rangle$ as
\begin{equation}
|\Psi(t)\rangle = \int\limits_{-\infty}^{+\infty} \Psi(x,t) |x\rangle \,\mathrm{d}x, \quad \text{where} \quad \Psi(x,t) = \langle{x}|{\Psi(t)}\rangle \, .
\end{equation} | {
"source": [
"https://physics.stackexchange.com/questions/65794",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/6764/"
]
} |
65,812 | It is well known that a prism can "split light" by separating different frequencies of light: Many sources state that the reason this happens is that the index of refraction is different for different frequencies. This is known as dispersion . My question is about why dispersion exists. Is frequency dependence for refraction a property fundamental to all waves? Is the effect the result of some sort of non-linearity in response by the refracting material to electromagnetic fields? Are there (theoretically) any materials that have an essentially constant, non-unity index of refraction (at least for the visible spectrum)? | Lorentz came with a nice model for light matter interaction that describes dispersion quite effectively. If we assume that an electron oscillates around some equilibrium position and is driven by an external electric field $\mathbf{E}$ (i.e., light), its movement can be described by the equation
$$
m\frac{\mathrm{d}^2\mathbf{x}}{\mathrm{d}t^2}+m\gamma\frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t}+k\mathbf{x} = e\mathbf{E}.
$$
The first and third terms on the LHS describe a classical harmonic oscillator, the second term adds damping, and the RHS gives the driving force. If we assume that the incoming light is monochromatic, $\mathbf{E} = \mathbf{E}_0e^{-i\omega t}$ and we assume a similar response $\xi$, we get
$$
\xi = \frac{e}{m}\mathbf{E}_0\frac{e^{-i\omega t}}{\Omega^2-\omega^2-i\gamma\omega},
$$
where $\Omega^2 = k/m$.
Now we can play with this a bit, using the fact that for dielectric polarization we have $\mathbf{P} = \epsilon_0\chi\mathbf{E} = Ne\xi$ and for index of refraction we have $n^2 = 1+\chi$ to find out that
$$
n^2 = 1+\frac{Ne^2}{\epsilon_0 m}\frac{\Omega^2-\omega^2+i\gamma\omega}{(\Omega^2-\omega^2)^2+\gamma^2\omega^2}.
$$
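As an illustrative sketch (the parameter values are arbitrary, in units where $Ne^2/\epsilon_0 m = 1$), evaluating this expression numerically makes the frequency dependence explicit:

```python
import numpy as np

# Arbitrary illustrative parameters.
plasma_term = 1.0      # N e^2 / (eps0 m)
Omega = 2.0            # resonance frequency
gamma = 0.1            # damping rate

def n_squared(omega):
    """Complex n^2 from the Lorentz oscillator model above."""
    denom = (Omega**2 - omega**2)**2 + gamma**2 * omega**2
    return 1.0 + plasma_term * (Omega**2 - omega**2 + 1j * gamma * omega) / denom

# The refractive index differs at different frequencies: dispersion.
n1 = np.sqrt(n_squared(0.5))
n2 = np.sqrt(n_squared(1.5))
print(n1.real, n2.real)   # two different real indices
```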
Clearly, the refractive index is frequency dependent. Note that the damping of the electron movement is what gives $n^2$ its imaginary, absorptive part; even in the undamped limit $\gamma = 0$ the real part still depends on frequency through the resonance denominator $\Omega^2-\omega^2$. There is another possible approach to this, using the impulse method, which assumes that the dielectric polarization is given by the convolution
$$
\mathbf{P}(t) = \epsilon_0\int_{-\infty}^t\chi(t-t')\mathbf{E}(t')\mathrm{d}t'.
$$
Using the Fourier transform, we have $\mathbf{P}(\omega) = \epsilon_0\chi(\omega)\mathbf{E}(\omega)$. If the susceptibility $\chi$ is given by a Dirac-$\delta$-function, its Fourier transform is constant and does not depend on frequency. In reality, however, the medium has a finite response time and the susceptibility has a finite width. Therefore, its Fourier transform is not a constant but depends on frequency. | {
"source": [
"https://physics.stackexchange.com/questions/65812",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/22494/"
]
} |
66,234 | What is the proof, without leaving the Earth, and involving only basic physics, that the earth rotates around its axis? By basic physics I mean the physics that the early physicists must've used to deduce that it rotates, not relativity. | Foucault pendulum . I don't know how the ancients did it, but it is surely pure classical mechanics. The animation describes the motion of a Foucault Pendulum at a latitude of 30°N. | {
"source": [
"https://physics.stackexchange.com/questions/66234",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/25089/"
]
} |
66,350 | The tensor of moment of inertia contains six off-diagonal matrix elements, which vanish if we choose a reference frame aligned with the principal axes of the rotating rigid body; the angular momentum vector is then parallel to the angular velocity. But while considering the general case, what are the off-diagonal moment of inertia matrix elements? That is, do they have any physical significance as [say] the components of a vector? Or is it merely a mathematical construction with no definite physical meaning (which seems rather wrong to me)? A similar thread exists here but they are more interested in the principal axes of the body. It also says: The physical significance of the off-diagonal components is that
you're using a coordinate system not aligned with the principal
directions of the object. They tell us nothing interesting about the
object itself. Is that all or is there more to it, perhaps related to properties of tensors in general? | The moment-of-inertia (MOI) tensor is real (no imaginary terms), symmetric, and positive-definite. Linear algebra tells us that for any (3x3) matrix that has those three properties, there's always a set of three perpendicular axes such that the MOI tensor can be expressed as a diagonal tensor in the basis of those axes. These are called the principal axes (or eigenvectors) of rotation, and the physical meaning behind them is that if you rotate the object around one of those axes, the angular momentum will lie along the axis. So one important thing to realize is that there is nothing fundamentally meaningful about off-diagonal elements; you can always rotate your coordinates to get rid of them. If the object has a symmetry axis, then that will be a principal axis. (Though, having a principal axis does not imply any symmetry of the object.) On the other hand, what if the body is rotating about an axis that isn't one of the principal axes? This is equivalent to writing your MOI in a basis where the rotation axis is one of the basis vectors, in which case there are off-diagonal elements, which is what your question is asking about. So, off-diagonal elements in your MOI are equivalent to having a rotation axis that is not aligned with any of the principal axes. Again, this only happens when your body is not symmetric about the rotation axis. And what does this mean physically? For one thing, the angular momentum is not aligned with the angular velocity. For example, imagine your object spinning inside a nicely symmetric little satellite in space. You can see its rotation axis, but if the satellite grabs onto the object, it will absorb the angular momentum, and you'll find the satellite spinning on a different axis. Alternatively, you can think of the expression relating torque and angular acceleration $\vec{\tau} = I \cdot \vec{\alpha}$. 
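A toy numerical sketch of this last relation (the tensor entries are made up): with an off-diagonal element present, a torque about one axis produces angular acceleration with components about the others, and diagonalizing recovers the principal axes:

```python
import numpy as np

# A made-up MOI tensor with an off-diagonal xy element (symmetric, positive-definite).
I = np.array([[2.0, 0.5, 0.0],
              [0.5, 3.0, 0.0],
              [0.0, 0.0, 4.0]])

tau = np.array([1.0, 0.0, 0.0])    # torque purely about x

alpha = np.linalg.solve(I, tau)    # angular acceleration from tau = I . alpha
print(alpha)                       # has a nonzero y component

# In the principal-axis (eigenvector) basis the tensor is diagonal again.
principal_moments, axes = np.linalg.eigh(I)
print(principal_moments)
```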
An off-diagonal element in the MOI means that if I apply a torque about a certain axis, the object will accelerate its rotation about a different axis. | {
"source": [
"https://physics.stackexchange.com/questions/66350",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/17341/"
]
} |
66,818 | I found the following picture on the Internet and I am curious how to explain it from a physicist's point of view. Basically the idea is the following. If you are a normal person - you are supposed to see Einstein here, but if you are myopic, then you will see Marilyn Monroe. If you move far away from the computer, you will still be able to see Monroe. Can anyone explain it from a physical point of view? | The reason "myopic" people see Monroe and others see Einstein is that the high frequency information in the image says Einstein and the low frequency says Monroe. When looking at the image closely, you see the high frequencies and therefore Einstein. By looking at it out of focus (presumably what is meant by "myopic"), the high frequencies are filtered out and you see Monroe. Here is the original: Here are versions successively more low-pass filtered (the high frequency content was removed): | {
"source": [
"https://physics.stackexchange.com/questions/66818",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/25319/"
]
} |
66,941 | At room temperature, play-dough is solid(ish). But if you make a thin strip it cannot just stand up on its own, so is it still solid? On a more general note, what classifies or differentiates a solid from a liquid? | Play-Doh is mostly flour, salt and water, so it's basically just (unleavened) dough. There are a lot of extra components like colourings, fragrances, preservatives etc, but these are present at low levels and don't have a huge effect on the rheology. The trouble with saying it's basically just dough is that the rheology of dough is fearsomely complicated. In a simple flour/salt/water dough you have a liquid phase made up of an aqueous solution of polymers like gluten, and solid particles of starch. So a dough is basically a suspension of solid particles in a viscous fluid. To make things more complicated the particles are flocculated, so you end up with a material that exhibits a yield stress unlike the non-flocculated particles in e.g. oobleck. At low stresses dough behaves like a solid because the flocculated particles act like a skeleton. However the bonds between flocculated particles are weak (they're only Van der Waals forces) so at even moderate stresses the dough flows and behaves like a liquid. Dough, and Play-Doh, are best described as non-Newtonian fluids. | {
"source": [
"https://physics.stackexchange.com/questions/66941",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/18693/"
]
} |
67,211 | I have often seen statements on physics.SE such as, The only consistent theory of everything which we know of to date (2013) is string theory. Why exactly is this so? Adding the Loop Quantum Gravity Lagrangian Density (the Einstein-Hilbert-Palatini-Ashtekar Lagrangian density) to the Standard Model Lagrangian Density should be able to describe all the interactions and fermions, in my opinion. Maybe it isn't as elegant as string theory since it doesn't really unify all the forces/interactions and fermions but it is still a complete description, right? Because once the Lagrangian Densities are added, one obtains the following "Complete Lagrangian Density":
$${{{\cal L}}_{\operatorname{complete}}} = - \frac{1}{4}{H^{\mu \nu \rho }}{H_{\mu \nu \rho }} + i\hbar {c_0}\bar \psi \not \nabla \psi + {c_0}\bar \psi \phi \psi + \operatorname{h.c.} + {\left\| {\not \nabla \phi } \right\|^2} - U\left( \phi \right){\rm{ }}+\Re \left( {\frac{1}{{4\kappa }}\mbox{}^ \pm\Sigma _{IJ}^\mu {{\rm{ }}^ \pm }F_{IJ}^\mu} \right) $$ | Because the "theory" you write down doesn't exist. It's just a logically incoherent mixture of apples and oranges, using a well-known metaphor. One can't construct a theory by simply throwing random pieces of Lagrangians taken from different theories as if we were throwing different things to the trash bin. For numerous reasons, loop quantum gravity has problems with consistency (and ability to produce any large, nearly smooth space at all), but even if it implied the semi-realistic picture of gravity we hear in the most favorable appraisals by its champions, it has many properties that make it incompatible with the Standard Model, for example its Lorentz symmetry violation. This is a serious problem because the terms of the Standard Model are those terms that are renormalizable, Lorentz-invariant, and gauge-invariant. The Lorentz breaking imposed upon us by loop quantum gravity would force us to relax the requirement of the Lorentz invariance for the Standard Model terms as well, so we would have to deal with a much broader theory containing many other terms, not just the Lorentz-invariant ones, and it would simply not be the Standard Model anymore (and it would be infinitely underdetermined, too). And even if these incompatible properties weren't there, adding up several disconnected Lagrangians just isn't a unified theory of anything. Two paragraphs above, the incompatibility was presented from the Standard Model's viewpoint – the addition of the dynamical geometry described by loop quantum gravity destroys some important properties of the quantum field theory which prevents us from constructing it.
But we may also describe the incompatibility from the – far less reliable – viewpoint of loop quantum gravity. In loop quantum gravity, one describes the spacetime geometry in terms of some other variables you wrote down and one may derive that the areas etc. are effectively quantized so the space – geometrical quantities describing it – are "localized" in some regions of the space (the spin network, spin foam, etc.). This really means that the metric tensor that is needed to write the kinetic and other terms in the Standard Model is singular almost everywhere and can't be differentiated. The Standard Model does depend on the continuous character of the spacetime which loop quantum gravity claims to be violated in Nature. So even if we're neutral about the question whether the space is continuous to allow us to talk about all the derivatives etc., it's true that the two frameworks require contradictory answers to this question. | {
"source": [
"https://physics.stackexchange.com/questions/67211",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/23119/"
]
} |
67,221 | When we stand on a weighing scale the reading we get is in $\mathrm{kg}$. Does it refers to mass or weight ? | Because the "theory" you write down doesn't exist. It's just a logically incoherent mixture of apples and oranges, using a well-known metaphor. One can't construct a theory by simply throwing random pieces of Lagrangians taken from different theories as if we were throwing different things to the trash bin. For numerous reasons, loop quantum gravity has problems with consistency (and ability to produce any large, nearly smooth space at all), but even if it implied the semi-realistic picture of gravity we hear in the most favorable appraisals by its champions, it has many properties that make it incompatible with the Standard Model, for example its Lorentz symmetry violation. This is a serious problem because the terms of the Standard Model are those terms that are renormalizable, Lorentz-invariant, and gauge-invariant. The Lorentz breaking imposed upon us by loop quantum gravity would force us to relax the requirement of the Lorentz invariance for the Standard Model terms as well, so we would have to deal with a much broader theory containing many other terms, not just the Lorentz-invariant ones, and it would simply not be the Standard Model anymore (and if would be infinitely underdetermined, too). And even if these incompatible properties weren't there, adding up several disconnected Lagrangians just isn't a unified theory of anything. Two paragraphs above, the incompatibility was presented from the Standard Model's viewpoint – the addition of the dynamical geometry described by loop quantum gravity destroys some important properties of the quantum field theory which prevents us from constructing it. But we may also describe the incompatibility from the – far less reliable – viewpoint of loop quantum gravity. 
In loop quantum gravity, one describes the spacetime geometry in terms of some other variables you wrote down and one may derive that the areas etc. are effectively quantized so the space – geometrical quantities describing it – are "localized" in some regions of the space (the spin network, spin foam, etc.). This really means that the metric tensor that is needed to write the kinetic and other terms in the Standard Model is singular almost everywhere and can't be differentiated. The Standard Model does depend on the continuous character of the spacetime which loop quantum gravity claims to be violated in Nature. So even if we're neutral about the question whether the space is continuous to allow us to talk about all the derivatives etc., it's true that the two frameworks require contradictory answers to this question. | {
"source": [
"https://physics.stackexchange.com/questions/67221",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/22550/"
]
} |
67,445 | There's already a question like this here so that my question could be considered a duplicate, but I'll try to make my point clear that this is a different question. Is there a way to derive the Biot-Savart law from the Lorentz force law or just from Maxwell's Equations? The point is that we usually define, based on experiments, that the force felt by a moving charge in the presence of a magnetic field is $\mathbf {F} = q\mathbf{v}\times \mathbf{B}$, but in that case the magnetic field is usually left to be defined later. Now can that force law be used in some way to obtain the Biot-Savart law like we obtain the equation for the electric field directly from Coulomb's Force law? I wanted to know that because as pointed out in the question I've mentioned, although Maxwell's Equations can be considered more fundamental, those equations are obtained after we know Coulomb's and Biot-Savart's laws, so if we start with Maxwell's Equations to obtain Biot-Savart's having used it to find Maxwell's Equations then I think we'll fall into a circular argument. In that case, without recourse to Maxwell's Equations the only way to obtain Biot-Savart's law is through observations or can it be derived somehow? | $\def\VA{{\bf A}}
\def\VB{{\bf B}}
\def\VJ{{\bf J}}
\def\VE{{\bf E}}
\def\vr{{\bf r}}$The Biot-Savart law is a consequence of Maxwell's equations. We assume Maxwell's equations and choose the Coulomb gauge, $\nabla\cdot\VA = 0$.
Then
$$\nabla\times\VB
= \nabla\times(\nabla\times\VA)
= \nabla(\nabla\cdot\VA) - \nabla^2\VA
= -\nabla^2\VA.$$
But
$$\nabla\times\VB - \frac{1}{c^2}\frac{\partial\VE}{\partial t} = \mu_0 \VJ.$$
In the steady state this implies
$$\nabla^2\VA = -\mu_0 \VJ.$$
Thus, we have Poisson's equation for each component of the above equation.
The solution is
$$\VA(\vr) = \frac{\mu_0}{4\pi}\int \frac{\VJ(\vr')}{|\vr-\vr'|}d^3 r'.$$
Now we need only calculate $\VB = \nabla\times\VA$.
But
$$\nabla\times\frac{\VJ(\vr')}{|\vr-\vr'|}
= \frac{\VJ(\vr')\times(\vr-\vr')}{|\vr-\vr'|^3}$$
and so
$$\VB(\vr) = \frac{\mu_0}{4\pi}\int
\frac{\VJ(\vr')\times(\vr-\vr')}{|\vr-\vr'|^3}
d^3 r'.$$
This is the Biot-Savart law for a wire of finite thickness.
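As a quick numerical sanity check (illustrative current and geometry), discretizing a long straight wire and summing the integrand above reproduces the textbook field $B = \mu_0 I/(2\pi r)$:

```python
import numpy as np

mu0 = 4e-7 * np.pi          # vacuum permeability, T*m/A
I_current = 2.0             # illustrative current, A
r = 0.05                    # distance of the field point from the wire, m

# Discretize a long straight wire on the z-axis into current elements I*dl.
z = np.linspace(-50.0, 50.0, 200_001)
dz = z[1] - z[0]
src = np.column_stack([np.zeros_like(z), np.zeros_like(z), z])               # r'
dl = np.column_stack([np.zeros_like(z), np.zeros_like(z), np.full_like(z, dz)])

field_point = np.array([r, 0.0, 0.0])
sep = field_point - src                                                      # r - r'
dist = np.linalg.norm(sep, axis=1, keepdims=True)
B = mu0 * I_current / (4 * np.pi) * np.sum(np.cross(dl, sep) / dist**3, axis=0)

B_exact = mu0 * I_current / (2 * np.pi * r)   # infinite straight wire result
print(np.linalg.norm(B), B_exact)             # should nearly agree
```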
For a thin wire this reduces to
$$\VB(\vr) = \frac{\mu_0}{4\pi}\int
\frac{I d{\bf l}\times(\vr-\vr')}{|\vr-\vr'|^3}.$$ Addendum :
In mathematics and science it is important to keep in mind the distinction between the historical and the logical development of a subject.
Knowing the history of a subject can be useful to get a sense of the personalities involved and sometimes to develop an intuition about the subject.
The logical presentation of the subject is the way practitioners think about it.
It encapsulates the main ideas in the most complete and simple fashion.
From this standpoint, electromagnetism is the study of Maxwell's equations and the Lorentz force law.
Everything else is secondary, including the Biot-Savart law. | {
"source": [
"https://physics.stackexchange.com/questions/67445",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/21146/"
]
} |
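The thin-wire integral derived in the answer above is easy to sanity-check numerically. The sketch below is not part of the original answer; the current, field-point distance, and wire length are arbitrary illustrative values. It compares a Riemann sum of the Biot-Savart integral for a long straight wire against the closed-form long-wire result $B = \mu_0 I / 2\pi s$ from Ampère's law:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability in SI units

def biot_savart_straight_wire(I, s, half_length=100.0, n=80001):
    """Riemann-sum the thin-wire Biot-Savart integral for a wire on the
    z-axis carrying current I, evaluated at distance s on the x-axis."""
    z = np.linspace(-half_length, half_length, n)
    dz = z[1] - z[0]
    # dl = dz zhat and r - r' = (s, 0, -z), so dl x (r - r') = s dz yhat
    integrand = s / (s**2 + z**2) ** 1.5
    return MU0 * I / (4 * np.pi) * np.sum(integrand) * dz

I, s = 2.0, 0.05                     # 2 A current, field point 5 cm away
B_num = biot_savart_straight_wire(I, s)
B_exact = MU0 * I / (2 * np.pi * s)  # long-wire result from Ampere's law
print(B_num, B_exact)
```

The two numbers agree closely; the small residual comes from truncating the wire at a finite length and discretizing the integral.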
67,616 | I study physics and am attending a course on quantum field theory. It is hard for me to draw connections from there to the old conventional theories. In quantum field theory spin originates from the Dirac equation in the case of fermions. I don't remember where it comes from in quantum mechanics. I just remember that there was the Stern-Gerlach experiment where you shoot neutral spin-1/2 Ag atoms in. Is there also an electrically neutral elementary particle? If this is the case, what would this experiment look like in quantum field theory?
Of course I ask this for the lowest order, otherwise we would have to calculate an infinite number of Feynman graphs, won't we? | Fundamentally, the spin originates from the fact that we want our quantum fields to transform in a well-behaved way under Lorentz transformations. Mathematically, one can start to construct the representations of the Lorentz group as follows:
The generators $M^{\mu \nu}$ can be expressed in terms of the generators of rotations $J^{i}$ and those of boosts $K^{i}$. They fulfill
$$ [J^{i}, J^{j}] = i \epsilon_{ijk} J^k, \, [K^i, K^j] = -i \epsilon_{ijk} J^k,\, [J^i, K^j] = i \epsilon_{ijk} K^k.$$
From them one can construct the operators $M^i = \frac{1}{2} (J^i + i K^i)$ and $N^i = \frac{1}{2} (J^i - i K^i)$. They fulfill
$$[M^i, N^j] = 0,\, [M^i, M^j] = i \epsilon_{ijk} M^k,\, [N^i, N^j] = i \epsilon_{ijk} N^k$$
These are just the relations for angular momentum that you should know from your QM introductory course. Group theoretically this means that every representation of the Lorentz group can be characterized by two integer or half-integer numbers $(m, n)$. If you construct the transformations explicitly you will find:

- $(m = 0, n = 0)$ is a scalar, i.e. it does not change under LT.
- $(m = 1/2, n = 0)$ is a left-handed Weyl spinor.
- $(m = 0, n = 1/2)$ is a right-handed Weyl spinor.
- $(m = 1/2, n = 1/2)$ is a vector.

A Dirac spinor is a combination of a right-handed and a left-handed Weyl spinor. Actually, one can now use these objects and try to find Lorentz invariant terms in order to construct a Lagrangian. From that construction (which is too lengthy for this post) one finds that the Dirac equation is the only sensible equation of motion for a Dirac spinor, simply from the Lorentz group's properties! Similarly one finds the Klein-Gordon equation for scalars, and so forth. (One can even construct higher-spin objects than vectors, but those have no physical application except maybe in supergravity theories.) So, as you can see now, spin is fundamentally a property of the Lorentz group. It is only natural that we find particles with non-zero spin in our Lorentz invariant world. Sidenote: Since we found the Dirac and Klein-Gordon equations from Lorentz invariance alone, and their low-energy limit is the Schroedinger equation, we get a 'derivation' of the Schroedinger equation, too. Most of the time the SE is simply postulated and worked with: this is where it comes from!
"source": [
"https://physics.stackexchange.com/questions/67616",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/25617/"
]
} |
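The commutation relations quoted in the answer above can be verified concretely. The sketch below is my addition, not part of the original answer: it builds the rotation and boost generators in the 4x4 vector representation (one explicit choice among many) and checks that the combinations $M = (J+iK)/2$ and $N = (J-iK)/2$, in the standard normalization, give two commuting copies of the angular-momentum algebra:

```python
import numpy as np

# Levi-Civita symbol
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

# 4x4 vector-representation generators acting on (t, x, y, z)
J = np.zeros((3, 4, 4), dtype=complex)
K = np.zeros((3, 4, 4), dtype=complex)
for i in range(3):
    J[i, 1:, 1:] = -1j * eps[i]        # rotations: (J_i)_{ab} = -i eps_{iab}
    K[i, 0, i + 1] = K[i, i + 1, 0] = 1j  # boosts mix time with one space axis

def comm(A, B):
    return A @ B - B @ A

# the so(3,1) relations quoted in the answer
assert np.allclose(comm(J[0], J[1]),  1j * J[2])
assert np.allclose(comm(K[0], K[1]), -1j * J[2])
assert np.allclose(comm(J[0], K[1]),  1j * K[2])

# the two decoupled su(2) copies
M = (J + 1j * K) / 2
N = (J - 1j * K) / 2
assert np.allclose(comm(M[0], N[1]), 0)          # the copies commute
assert np.allclose(comm(M[0], M[1]), 1j * M[2])  # each is an angular momentum
assert np.allclose(comm(N[0], N[1]), 1j * N[2])
print("so(3,1) splits into two commuting su(2) algebras")
```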
67,826 | I learned that the force from a magnetic field does no work. However I was wondering how magnets can be used to pick up pieces of metal like small paperclips and stuff. I also was wondering how magnets can stick to pieces of metal like a refrigerator? | The Lorentz force $\textbf{F}=q\textbf{v}\times\textbf{B}$ never does work on the particle with charge $q$. This is not the same thing as saying that the magnetic field never does work. The issue is that not every system can be correctly described as a single isolated point charge. For example, a magnetic field does work on a dipole when the dipole's orientation changes. A nonuniform magnetic field can also do work on a dipole. For example, suppose that an electron, with magnetic dipole moment $\textbf{m}$ oriented along the $z$ axis, is released at rest in a nonuniform magnetic field having a nonvanishing $\partial B_z/\partial z$. Then the electron feels a force $F_z=\pm |\textbf{m}| \partial B_z/\partial z$. This force accelerates the electron from rest, giving it kinetic energy; it does work on the electron. For more detail on this scenario, see this question. You can also have composite (non-fundamental) systems in which the parts interact through other types of forces. For example, when a current-carrying wire passes through a magnetic field, the field does work on the wire as a whole, but the field doesn't do work on the electrons. When we say "the field does work on the wire," that statement is open to some interpretation because the wire is composite rather than fundamental. Work is defined as a mechanical transfer of energy, where "mechanical" is meant to distinguish an energy transfer through a macroscopically measurable force from an energy transfer at the microscopic scale, as in heat conduction, which is not considered a form of work.
In the example of the wire, any macroscopic measurement will confirm that the field exerts a force on the wire, and the force has a component parallel to the motion of the wire. Since work is defined operationally in purely macroscopic terms, the field is definitely doing work on the wire. However, at the microscopic scale, what is happening is that the field is exerting a force on the electrons, which the electrons then transmit through electrical forces to the bulk matter of the wire. So as viewed at the macroscopic level (which is the level at which mechanical work is defined), the work is done by the magnetic field, but at the microscopic level it's done by an electrical interaction. It's a similar but more complicated situation when you use a magnet to pick up a paperclip; the magnet does work on the paperclip in the sense that the macroscopically observable force has a component in the direction of the motion of the paperclip. | {
"source": [
"https://physics.stackexchange.com/questions/67826",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/25671/"
]
} |
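The claim in the answer above, that $q\mathbf{v}\times\mathbf{B}$ bends a point charge's path but never changes its kinetic energy, can be seen directly in a simulation. A minimal sketch (my addition; the units and field values are arbitrary):

```python
import numpy as np

q, m = 1.0, 1.0
B = np.array([0.0, 0.0, 1.0])          # uniform magnetic field along z

def accel(v):
    return (q / m) * np.cross(v, B)    # Lorentz force q v x B, no E field

v = np.array([1.0, 0.5, 0.2])
speed0 = np.linalg.norm(v)
dt = 1e-3
for _ in range(20000):                 # RK4 steps: ~3 full gyrations
    k1 = accel(v)
    k2 = accel(v + 0.5 * dt * k1)
    k3 = accel(v + 0.5 * dt * k2)
    k4 = accel(v + dt * k3)
    v = v + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# ~0: the force changed the direction of v, never its magnitude,
# so the magnetic force did no work on the point charge
print(np.linalg.norm(v) - speed0)
```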
67,957 | I've noticed something curious about the rotation of a rectangular prism. If I take a box with height $\neq$ width $\neq$ depth and flip it into the air around different axes of rotation, some motions seem more stable than others. The 3 axes which best illustrate what I mean are: (1) Through the centre of mass, parallel to the longest box edge. (2) Through the centre of mass, parallel to the shortest box edge. (3) Through the centre of mass, parallel to the remaining box edge. It's "easy" to get the box to rotate cleanly around (1) and (2), but flipping the box around (3) usually results in extra twisting besides the rotation around (3) that I'm trying to achieve (obviously a "perfect" flip on my part would avoid this twisting, which is why I call it an instability). If you're not quite sure what I'm talking about, grab a box or book with 3 different side lengths and try it out (but careful not to break anything!). What's special about axis (3)? Image taken from Marsden and Ratiu. | The rectangular prism is a rigid body. The equations of motion of a rigid body around its center of mass are given by (see, for example, Marsden and Ratiu, page 6): $$I_1\dot\Omega_1=(I_2-I_3)\Omega_2\Omega_3$$
$$I_2\dot\Omega_2=(I_3-I_1)\Omega_3\Omega_1$$
$$I_3\dot\Omega_3=(I_1-I_2)\Omega_1\Omega_2$$ Where $\Omega_{1,2,3}$ are the angular velocity components around the
body axes and $I_{1,2,3}$ are the corresponding moments of inertia. Given that the moments of inertia are different, we may assume without
loss of generality that: $I_1>I_2>I_3$. The fact is that the steady motion around the intermediate axis $2$ is not stable, while around the two other axes, the motion is stable. This fact is explained by Marsden and Ratiu on page 30. Also, various other explanations are given in the answers of a related question asked on mathoverflow. Here I'll describe the details of a
linearized stability analysis. A steady state in which the angular velocity vector has only one
nonvanishing constant component is a solution of the equations of motion. For example: $$\Omega_1=\Omega = const.$$
$$\Omega_2=0$$
$$\Omega_3=0$$ is a solution describing rotation around the first axis. Also $$\Omega_1=0$$
$$\Omega_2=\Omega = const.$$
$$\Omega_3=0$$ is also a solution describing rotation around the second axis. Now, we can analyze the stability of small perturbations around these
solutions. A perturbation of the first solution is given by: $$\Omega_1=\Omega + \epsilon \omega_1$$
$$\Omega_2=\epsilon \omega_2$$
$$\Omega_3=\epsilon \omega_3$$ With $\epsilon<<1$. Substituting in the equations of motion and keeping
only terms up to the first power of $\epsilon$, we obtain: $$I_2\dot\omega_2=\epsilon \Omega(I_3-I_1)\omega_3$$
$$I_3\dot\omega_3=\epsilon \Omega(I_1-I_2)\omega_2$$ Differentiating the second equation with respect to time and substituting the first equation, we obtain: $$I_2I_3\ddot\omega_3=\epsilon ^2 \Omega^2 (I_3-I_1)(I_1-I_2)\omega_3$$ Since $I_3<I_1$ and $I_1>I_2$, the coefficient on the right hand side is negative and the perturbation satisfies a harmonic oscillator equation of motion of the form: $$\ddot\omega_3 + k^2 \omega_3 =0$$ Repeating the perturbation analysis for the second solution (rotation about the second axis) we obtain: $$I_1I_3\ddot\omega_3=\epsilon ^2 \Omega^2 (I_2-I_3)(I_1-I_2)\omega_3$$ Since $I_3<I_2$ and $I_1>I_2$, this coefficient is now positive and the solution describes a harmonic oscillator with a negative spring constant of the form: $$\ddot\omega_3 - k^2 \omega_3 =0$$ which describes an unstable perturbation. | {
"source": [
"https://physics.stackexchange.com/questions/67957",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/11053/"
]
} |
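The linearized analysis in the answer above can be cross-checked against the full nonlinear Euler equations. The sketch below is my addition; the moments of inertia are arbitrary values satisfying $I_1 > I_2 > I_3$. It integrates the equations numerically and compares a small perturbation of spin about axis 1 with one about the intermediate axis 2:

```python
import numpy as np

I1, I2, I3 = 3.0, 2.0, 1.0            # I1 > I2 > I3, as in the answer

def rhs(w):
    """Euler's equations for the body-frame angular velocity."""
    return np.array([(I2 - I3) * w[1] * w[2] / I1,
                     (I3 - I1) * w[2] * w[0] / I2,
                     (I1 - I2) * w[0] * w[1] / I3])

def max_wobble(w0, spin_axis, t_end=40.0, dt=2e-3):
    """RK4-integrate Euler's equations and return the largest excursion
    of the angular velocity away from the chosen spin axis."""
    w = np.array(w0, float)
    others = [i for i in range(3) if i != spin_axis]
    dev = 0.0
    for _ in range(int(t_end / dt)):
        k1 = rhs(w); k2 = rhs(w + 0.5 * dt * k1)
        k3 = rhs(w + 0.5 * dt * k2); k4 = rhs(w + dt * k3)
        w += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        dev = max(dev, np.linalg.norm(w[others]))
    return dev

# tiny 1e-4 perturbations on top of unit spin about each axis
print(max_wobble([1, 1e-4, 1e-4], 0))  # axis 1: stays tiny (stable)
print(max_wobble([1e-4, 1, 1e-4], 1))  # axis 2: grows to order 1 (unstable)
```

The perturbation about axis 1 stays at the level it started, while the one about the intermediate axis grows until the body tumbles, exactly the behavior the sign of the linearized "spring constant" predicts.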
67,970 | An incredible news story today is about a man who survived for two days at the bottom of the sea (~30 m deep) in a capsized boat, in an air bubble that formed in a corner of the boat. He was eventually rescued by divers who came to retrieve dead bodies. Details here . Since gases diffuse through water (and are dissolved in it) the composition of air in the bubble should be close to the atmosphere outside, if the surface of the bubble is large enough; so the excessive carbon dioxide is removed and oxygen is brought in to support life of a human. Question: How large does the bubble have to be so that a person in it can have indefinite supply of breathable air? | Summary: I find a formula for the diameter of a bubble large enough to support one human and plug in known values to get $d=400\,{\rm m}$. I'll have a quantitative stab at the answer to the question of how large an air bubble has to be for the carbon dioxide concentration to be in a breathable steady state, whilst a human is continuously producing carbon dioxide inside the bubble. Fick's law of diffusion is that the flux of a quantity through a surface (amount per unit time per unit area) is proportional to the concentration gradient at that surface, $$\vec{J} = - D \nabla \phi,$$ where $\phi$ is concentration and $D$ is the diffusivity of the species. We want to find the net flux out of the bubble at the surface, or $\vec{J} = -D_{\text{surface}} \nabla \phi$. $D_{\text{surface}}$ is going to be some funny combination of the diffusivity of $CO_2$ in air and in water, but since the coefficient in water is so much lower, really diffusion is going to be dominated by this coefficient: it can't diffuse rapidly out of the surface and very slowly immediately outside the surface, because the concentration would then pile up in a thin layer immediately outside until it was high enough to start diffusing back in again. So I'm going to assume $D_{\text{surface}} = D_{\text{water}}$ here. 
To estimate $\nabla \phi$, we can first assume $\phi(\text{surface})=\phi(\text{inside})$, fixing $\phi(\text{inside})$ from the maximum nonlethal concentration of CO2 in air and the molar density of air ($=P/RT$); then assuming the bubble is a sphere of radius $a$, because in a steady state the concentration outside is a harmonic function, we can find $$\phi(r) = \phi(\text{far}) + \frac{(\phi(\text{inside})-\phi(\text{far}))a}{r},$$ where $\phi(\text{far})$ is the concentration far from the bubble, assumed to be constant. Then $$\nabla \phi(a) = -\frac{(\phi(\text{inside})-\phi(\text{far}))a}{a^2} = -\frac{\phi(\text{inside})-\phi(\text{far})}{a}$$ yielding $$J = D \frac{\phi(\text{inside})-\phi(\text{far})}{a}.$$ Next we integrate this over the surface of the bubble to get the net amount leaving the bubble, and set this $=$ the amount at which carbon dioxide is exhaled by the human, $\dot{N}$. Since for the above simplifications $J$ is constant over the surface (area $A$), this is just $JA$. So we have
$$\dot{N} = D_{\text{water}} A \frac{\phi(\text{inside})-\phi(\text{far})}{a} = D_{\text{water}} 4 \pi a (\phi(\text{inside})-\phi(\text{far})).$$ Finally assuming $\phi(\text{far})=0$ for convenience, and rearranging for diameter $d=2a$ $$d = \frac{\dot{N}}{2 \pi D_{\text{water}} \phi(\text{inside})}$$ and substituting $D = 1.6\times 10^{-9}\,{\rm m}^2\,{\rm s}^{-1}$ (from wiki) $\phi \approx 1.2\,{\rm mol}\,{\rm m}^{-3}$ (from OSHA maximum safe level of 3% at STP) $\dot{N}= 4\times 10^{-6}\,{\rm m}^3\,{\rm s}^{-1} = 4.8\times 10^{-6}\,{\rm mol}\,{\rm s}^{-1}$ (from $\%{\rm CO}_2 \approx 4\%$, lung capacity $\approx 500\,{\rm mL}$ and breath rate $\approx \frac{1}{5}\,{\rm s}^{-1}$) I get $d \approx 400\,{\rm m}$. It's interesting to note that this is independent of pressure: I've neglected pressure dependence of $D$ and human resilience to carbon dioxide, and the maximum safe concentration of carbon dioxide is independent of pressure, just derived from measurements at STP. Finally, a bubble this large will probably rapidly break up due to buoyancy and Plateau-Rayleigh instabilities. | {
"source": [
"https://physics.stackexchange.com/questions/67970",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/17215/"
]
} |
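Plugging the values quoted in the answer above into its final formula $d = \dot N / (2\pi D_{\text{water}} \phi(\text{inside}))$ reproduces the headline estimate. This is a check of the arithmetic only, using the numbers as stated in the answer:

```python
import math

# the values quoted in the answer above
D_water = 1.6e-9   # m^2/s, diffusivity of CO2 in water
phi_in  = 1.2      # mol/m^3, ~3% CO2 at STP
N_dot   = 4.8e-6   # mol/s, CO2 production rate of one person

d = N_dot / (2 * math.pi * D_water * phi_in)
print(f"required bubble diameter: {d:.0f} m")   # roughly 400 m, as stated
```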
68,147 | Lamb 1969 states, A misconception which most physicists acquire in their formative years is that the photoelectric effect requires the quantization of the electromagnetic field for its explanation. [...] In fact we shall see that the photoelectric effect may be completely explained without invoking the concept of "light quanta." The paper gives a description in which an atom is ionized by light, with the atom being treated quantum-mechanically but the light being treated as a classical wave. Is it true that all the standard treatments in textbooks are getting this wrong? Lamb and Scully "The photoelectric effect without photons," in "Polarization, Matière et Rayonnement," Volume in Honour of A. Kastler (Presses Universitaires de France, Paris, 1969) -- can be found online by googling | Yes, the photoelectric effect can be explained without photons! One can read it in L. Mandel and E. Wolf,
Optical Coherence and Quantum Optics,
Cambridge University Press, 1995, a standard reference for quantum optics. Sections 9.1-9.5 show that the electron field responds to a classical external electromagnetic radiation
field by emitting electrons according to Poisson-law probabilities, very much
like that interpreted by Einstein in terms of light particles. Thus the
quantum detector produces discrete Poisson-distributed clicks, although
the source is completely continuous, and there are no photons at all in
the quantum mechanical model. The state space of this quantum system
consists of multi-electron states only. So here the multi-electron
system (followed by a macroscopic decoherence process that leads to the
multiple dot localization of the emitted electron field) is responsible
for the creation of the dot pattern. This proves that the clicks cannot
be taken to be a proof of the existence of photons. An interesting collection of articles explaining different current views is in The Nature of Light: What Is a Photon?
Optics and Photonics News, October 2003
https://www.osa-opn.org/home/articles/volume_14/issue_10/ Further discussion is given in the entry ''The photoelectric effect''
of my theoretical physics FAQ at http://arnold-neumaier.at/physfaq/physics-faq.html . See also the slides
of my lectures http://arnold-neumaier.at/ms/lightslides.pdf and http://arnold-neumaier.at/ms/optslides.pdf . QED and photons are of course needed to explain special quantum effects of light revealed in modern experiments (discussed in the Optics and Photonics News issue cited above) such as nonclassical states of light or parametric down conversion, but not for the photoelectric effect. | {
"source": [
"https://physics.stackexchange.com/questions/68147",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
68,721 | Yesterday my wife asked me this question, and I couldn't answer. Consider a car that, on a sunny day, consumes x gallons per mile. Suppose everything else is equal, except that it's traveling on a rainy day at the same temperature as the sunny day, so that the air density is the same. Will the lower friction of the tires make it consume more or less fuel? And what about the fact that rain drops are falling over and in front of it? I answered that it'll consume more fuel, since friction is what makes the car move and the rain will act against it... but I'm not sure. | There is an additional loss of energy when driving through puddles on a wet road, because the tire treads have to exert work in order to eject water. One way to look at it is that the tire keeps trying to glide on top of the water, but is continuously sinking into it to meet the pavement, which is equivalent to driving slightly uphill. | {
"source": [
"https://physics.stackexchange.com/questions/68721",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/10098/"
]
} |
69,448 | To measure the lifetime of a specific particle one needs to look at very many such particles in order to calculate the average. It cannot matter when the experimentalist actually starts his stopwatch to measure the time it takes for the particles to decay. Whether he measures them now or in 5 minutes makes no difference, since he still needs to take an average. If he measures later there will be particles out of the picture already (those which have decayed in the last 5 min), which won't contribute, and the ones he's measuring now behave (statistically) the very same, of course. I have just read the following in Introduction to Elementary Particles by Griffiths: Now, elementary particles have no memories, so the probability of a given muon
decaying in the next microsecond is independent of how long ago that muon was
created. (It's quite different in biological systems: an 80-year-old man is much more likely to die in the next year than is a 20-year-old, and his body shows the signs of eight decades of wear and tear. But all muons are identical, regardless of when they were produced; from an actuarial point of view they’re all on an equal footing.) But this is not really the view I had. I was imagining that a particle that has existed for a while is analogous to the 80-year-old man, since it will probably die (decay) soon. It just doesn't matter because we are looking at a myriad of particles, so statistically there will be about as many old ones as babies. On the other hand it is true that I cannot see if a specific particle has already lived long or not; they are all indistinguishable. Still I am imagining particles as if they had an inner age, but one just can't tell by their looks. So is the view presented in Griffiths truer than mine, or are maybe both valid? How can one argue why my view is wrong? | It's impossible to say whether you are correct or Griffiths is correct a priori -- that is, before having any experience of how the world works. You need to do experiments, and Griffiths' version agrees with experiments better than yours. The basic experiment involves detecting the products of decayed particles. Suppose we have some process that happens pretty quickly and produces a certain amount of some unstable particle. The particles were all produced at basically the same time, so they all have basically the same "age". Now, if we just measure how many decays happen per second, Griffiths' version predicts that you'll see an exponential falloff in that number -- as Griffiths does a fine job of explaining. And that's what we actually see when we do experiments like this.
In your version, you would expect to see very few decays until some fixed time after the production, then a sudden rush of decays, and then the decays would mostly stop because the particles would basically all be gone. But that's just not what we see. Again, there's no reason the laws of the universe have to work the way Griffiths says, and not work the way you say. It's just that the predictions of the two versions are testable, and only Griffiths' version agrees with experiments. That's science! | {
"source": [
"https://physics.stackexchange.com/questions/69448",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/26062/"
]
} |
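The memoryless property that separates Griffiths' picture from the "inner age" picture is specific to the exponential distribution, and is easy to demonstrate by simulation. This sketch is my addition; it assumes the well-known muon mean lifetime of about 2.2 microseconds:

```python
import random

random.seed(1)
tau = 2.2e-6                         # muon mean lifetime in seconds
lifetimes = [random.expovariate(1 / tau) for _ in range(200_000)]

dt = 0.5e-6
# decay probability within the next dt for newborn muons...
frac_young = sum(t < dt for t in lifetimes) / len(lifetimes)
# ...versus muons that have already survived 3 microseconds
old = [t - 3e-6 for t in lifetimes if t > 3e-6]
frac_old = sum(t < dt for t in old) / len(old)

print(frac_young, frac_old)          # nearly equal: survivors show no "age"
```

Repeating this with any non-exponential lifetime distribution (the "inner age" picture) would make the two fractions differ, which is exactly what the decay-rate experiments rule out.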
69,733 | There are two ways to transmit the same amount of power, 1 amp at 1 million volts or 1 million amps at 1 volt. Conceptually what is the difference? How can I think about it conceptually? I would prefer it if an analogy were made. | Suppose you are using a waterwheel to do some form of work (e.g. grind corn). You need a head of water to make the wheel move, and you could use either 1kg of water at a height of a million metres or you could use a million kg of water at a height of one metre. In both cases the water would do the same amount of work as it flowed through your wheel. The pressure of the water (i.e. the height) is analogous to the voltage, so the water at a height of a million metres (OK, OK, that would be above Earth's atmosphere but it's just an analogy :-) has a voltage a million times greater than the water at a height of one metre. The current is analogous to the water flow rate. If all the water flows through the wheel in the same time then obviously the million kg of water at 1 metre has a flow rate a million times as great as the one kg of water at a million metres. Although in both cases the total energy of the water is the same, they would behave very differently in practice. Your water wheel probably has an ideal flow rate (i.e. current) at which it's most efficient. So in practice the two cases almost certainly wouldn't grind the same amount of corn. You'd choose whichever was best suited to your purpose. The same is true of electricity. For example, resistive losses scale with the square of the current, so for transmission across the country you want a high voltage and low current. In the home a high voltage would lead to lots of fried customers, so you want a low voltage and correspondingly higher current. | {
"source": [
"https://physics.stackexchange.com/questions/69733",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/22302/"
]
} |
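The point about resistive losses in the answer above is worth making quantitative: with a fixed line resistance $R$, the loss is $I^2 R$, so delivering the same power at a million times the voltage cuts the loss by a factor of $10^{12}$. A sketch with made-up numbers (my addition, not from the original answer):

```python
def line_loss_w(power_w, voltage_v, line_resistance_ohm):
    current_a = power_w / voltage_v            # I = P / V
    return current_a**2 * line_resistance_ohm  # P_loss = I^2 R

P, R = 1e6, 0.5   # deliver 1 MW over a line with 0.5 ohm resistance
print(line_loss_w(P, 1e6, R))   # 1 A at 1 MV:    0.5 W lost
print(line_loss_w(P, 1e3, R))   # 1000 A at 1 kV: 500,000 W lost
```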
69,929 | I haven't yet gotten a good answer to this: If you have two rays of light of the same wavelength and polarization (just to make it simple for now, but it easily generalizes to any range and all polarizations) meet at a point such that they're 180 degrees out of phase (due to path length difference, or whatever), we all know they interfere destructively, and a detector at exactly that point wouldn't read anything. So my question is, since such an insanely huge number of photons are coming out of the sun constantly, why isn't any photon hitting a detector matched up with another photon that happens to be exactly out of phase with it? If you have an enormous number of randomly produced photons traveling random distances (with respect to their wavelength, anyway), that seems like it would happen, similar to the way that the sum of a huge number of randomly selected 1's and -1's would never stray far from 0. Mathematically, it would be: $$\int_0 ^{2\pi} e^{i \phi} d\phi = 0$$ Of course, the same would happen for a given polarization, and any given wavelength. I'm pretty sure I see the sun though, so I suspect something with my assumption that there are effectively an infinite number of photons hitting a given spot is flawed... are they locally in phase or something? | First let's deal with a false assumption: similar to the way that the sum of a huge number of randomly selected 1's and -1's would never stray far from 0. Suppose we have a set of $N$ random variables $X_i$, each independent and with equal probability of being either $+1$ or $-1$. Define
$$ S = \sum_{i=1}^N X_i. $$
Then, yes, the expectation of $S$ may be $0$,
$$ \langle S \rangle = \sum_{i=1}^N \langle X_i \rangle = \sum_{i=1}^N \left(\frac{1}{2}(+1) + \frac{1}{2}(-1)\right) = 0, $$
but the fluctuations can be significant. Since we can write
$$ S^2 = \sum_{i=1}^N X_i^2 + 2 \sum_{i=1}^N \sum_{j=i+1}^N X_i X_j, $$
then more manipulation of expectation values (remember, they always distribute over sums; also the expectation of a product is the product of the expectations if and only if the factors are independent, which is the case for us for $i \neq j$) yields
$$ \langle S^2 \rangle = \sum_{i=1}^N \langle X_i^2 \rangle + 2 \sum_{i=1}^N \sum_{j=i+1}^N \langle X_i X_j \rangle = \sum_{i=1}^N \left(\frac{1}{2}(+1)^2 + \frac{1}{2}(-1)^2\right) + 2 \sum_{i=1}^N \sum_{j=i+1}^N (0) (0) = N. $$
The standard deviation will be
$$ \sigma_S = \left(\langle S^2 \rangle - \langle S \rangle^2\right)^{1/2} = \sqrt{N}. $$
This can be arbitrarily large. Another way of looking at this is that the more coins you flip, the less likely you are to be within a fixed range of breaking even. Now let's apply this to the slightly more advanced case of independent phases of photons. Suppose we have $N$ independent photons with phases $\phi_i$ uniformly distributed on $(0, 2\pi)$. For simplicity I will assume all the photons have the same amplitude, set to unity. Then the electric field will have strength
$$ E = \sum_{i=1}^N \mathrm{e}^{\mathrm{i}\phi_i}. $$
Sure enough, the average electric field will be $0$:
$$ \langle E \rangle = \sum_{i=1}^N \langle \mathrm{e}^{\mathrm{i}\phi_i} \rangle = \sum_{i=1}^N \frac{1}{2\pi} \int_0^{2\pi} \mathrm{e}^{\mathrm{i}\phi}\ \mathrm{d}\phi = \sum_{i=1}^N 0 = 0. $$ However , you see images not in electric field strength but in intensity , which is the square-magnitude of this:
$$ I = \lvert E \rvert^2 = \sum_{i=1}^N \mathrm{e}^{\mathrm{i}\phi_i} \mathrm{e}^{-\mathrm{i}\phi_i} + \sum_{i=1}^N \sum_{j=i+1}^N \left(\mathrm{e}^{\mathrm{i}\phi_i} \mathrm{e}^{-\mathrm{i}\phi_j} + \mathrm{e}^{-\mathrm{i}\phi_i} \mathrm{e}^{\mathrm{i}\phi_j}\right) = N + 2 \sum_{i=1}^N \sum_{j=i+1}^N \cos(\phi_i-\phi_j). $$
Paralleling the computation above, we have
$$ \langle I \rangle = \langle N \rangle + 2 \sum_{i=1}^N \sum_{j=i+1}^N \frac{1}{(2\pi)^2} \int_0^{2\pi}\!\!\int_0^{2\pi} \cos(\phi-\phi')\ \mathrm{d}\phi\ \mathrm{d}\phi' = N + 0 = N. $$
The more photons there are, the greater the intensity, even though there will be more cancellations. So what does this mean physically? The Sun is an incoherent source, meaning the photons coming from its surface really are independent in phase, so the above calculations are appropriate. This is in contrast to a laser, where the phases have a very tight relation to one another (they are all the same). Your eye (or rather each receptor in your eye) has an extended volume over which it is sensitive to light, and it integrates whatever fluctuations occur over an extended time (which you know to be longer than, say, $1/60$ of a second, given that most people don't notice faster refresh rates on monitors). In this volume over this time, there will be some average number of photons. Even if the volume is small enough such that all opposite-phase photons will cancel (obviously two spatially separated photons won't cancel no matter their phases), the intensity of the photon field is expected to be nonzero. In fact, we can put some numbers to this. Take a typical cone in your eye to have a diameter of $2\ \mathrm{µm}$, as per Wikipedia . About $10\%$ of the Sun's $1400\ \mathrm{W/m^2}$ flux is in the $500\text{–}600\ \mathrm{nm}$ range, where the typical photon energy is $3.6\times10^{-19}\ \mathrm{J}$. Neglecting the effects of focusing among other things, the number of photons in play in a single receptor is something like
$$ N \approx \frac{\pi (1\ \mathrm{µm})^2 (140\ \mathrm{W/m^2}) (0.02\ \mathrm{s})}{3.6\times10^{-19}\ \mathrm{J}} \approx 2\times10^7. $$ The fractional change in intensity from "frame to frame" or "pixel to pixel" in your vision would be something like $1/\sqrt{N} \approx 0.02\%$. Even give or take a few orders of magnitude, you can see that the Sun should shine steadily and uniformly. | {
"source": [
"https://physics.stackexchange.com/questions/69929",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
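The central claim of the answer above, that $\langle E \rangle = 0$ while $\langle I \rangle = N$, is easy to reproduce with a random-phasor simulation. This sketch is my addition; the photon number per shot is kept small so it runs quickly:

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 2_000, 2_000             # phasors per shot, independent shots

phases = rng.uniform(0, 2 * np.pi, size=(trials, N))
E = np.exp(1j * phases).sum(axis=1)  # total field: sum of random unit phasors
I = np.abs(E) ** 2                   # measured intensity

print(abs(E.mean()))                 # tiny compared with N: fields do cancel
print(I.mean())                      # close to N: the intensity does not vanish
```

The mean field averages away, but the intensity, which is what a detector or an eye records, comes out proportional to the number of photons, just as the expectation-value calculation shows.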
69,997 | This might be a completely wrong question, but this has been bothering me for many days. Given that the mass (the Sun) curves the space around it, gravitation is the result of such curved space (correct me if I am wrong; source: a documentary film). Given any point on a circle with center the same as the center of the mass, the curvature of the space should be equal (intuition). Planets rotate around the Sun because of the curve in the space; they should follow a circular path, and the distance between planet and Sun should stay constant. But in fact the Earth has an elliptical orbit around the Sun, and the distance between Earth and Sun varies according to the position of the Earth. Why do we have an elliptical orbit and not a circular orbit? | Because orbits are general conic sections. Why this is true is another fascinating question in and of itself, but for now I'll just assume it. The point is that circular orbits are special examples of general orbits. It's perfectly possible to get a circular orbit, but the relationship between the bodies' velocities and separation needs to be exactly right. In practice it rarely is, unless we plan it that way (e.g., for satellites). If you threw a planet around the sun really hard its path would be bent by the sun's gravity, but it would still eventually fly off at a tangent. Throwing it really hard would make it almost go straight, since it moves by the sun so quickly. As you reduce the speed, the sun gets to bend it more and more, and so the tangent it flies off on gets angled more and more towards moving backwards. So general hyperbolas are possible orbits. If you move it at the right speed, then it'll be just slow enough that the other tangent points 'exactly backwards', and here the motion will be a parabola. Less than this and the planet will be captured. It doesn't have enough energy at this point to escape at all. A key realization here is that the path should change continuously with the initial speed.
Imagine the whole path traced out by a planet with a high velocity. An almost-straight hyperbola, say. Now as you continuously lower the velocity, the hyperbola bends more and more (continuously) until it bends "all the way around" and becomes a parabola. After this point, you'll have captured orbits. But they have to be steady changes from the parabola. All captured orbits magically being circles (of what size anyway, since they have to start looking like parabolas at some point?) wouldn't make any sense. Instead you get ellipses that get shorter and shorter as you get slower. Keep doing this, and those ellipses will come to a circle at some critical speed. So circular orbits are possible, they're just not general. In fact, I'd say the real question is why the orbits are often so close to circular, since there are so many other options! | {
"source": [
"https://physics.stackexchange.com/questions/69997",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/26630/"
]
} |
70,047 | The Hubble constant, which roughly gauges the extent to which space is being stretched, can be determined from astronomical measurements of galactic velocities (via redshifts) and positions (via standard candles) relative to us. Recently a value of 67.80 ± 0.77 (km/s)/Mpc was published. On the scale of 1 A.U. the value is small, but not infinitesimal by any means (I did the calculation a few months ago, and I think it came out to about 10 meters / year / A.U.). So, can you conceive of a measurement of the Hubble constant that does not rely on any extra-galactic observations? I ask because, whatever the nature of the expansion described by the Hubble constant, it seems to be completely absent from sub-galactic scales. It is as though the energy of gravitational binding (planets), or for that matter electromagnetic binding (atoms) makes matter completely immune from the expansion of space. The basis for this claim is that if space were also pulling atoms apart, I would naively assume we should be able to measure this effect through modern spectroscopy. Given that we are told the majority of the universe is dark energy, responsible for accelerating the expansion, I wonder, how does this expansion manifest itself locally? Any thoughts would be appreciated. | Everything doesn't expand equally because of cosmological expansion. If everything expanded by the same percentage per year, then all our rulers and other distance-measuring devices would expand, and we wouldn't be able to detect any expansion at all. Actually, general relativity predicts that cosmological expansion has very little effect on objects that are small and strongly bound. Expansion is too weak an effect to detect at any scale below that of distant galaxies. Cooperstock et al. have estimated the effect for systems of interest such as the solar system. 
For example, the predicted general-relativistic effect on the radius of the earth's orbit since the time of the dinosaurs is calculated to be about as big as the diameter of an atomic nucleus; if the earth's orbit had expanded according to the cosmological scaling function $a(t)$ , the effect would have been millions of kilometers. To see why the solar-system effect is so small, let's consider how it can depend on $a(t)$ . There is a cosmology called the Milne universe, which is just flat, empty spacetime described in silly coordinates; $a(t)$ is chosen to grow at a steady rate, but this has no physical significance, since there is no matter that has to expand like this. The Milne universe has $\dot{a}\ne 0$ , i.e., a nonvanishing value of the Hubble constant $H_o$ . This shows that we should not expect any expansion of the solar system due to $\dot{a}\ne 0$ . The lowest-order effect requires $\ddot{a}\ne 0$ . For two test particles released at a distance $\mathbf{r}$ from one another in an FRW spacetime, their relative acceleration is given by $(\ddot{a}/a)\mathbf{r}$ . The factor $\ddot{a}/a$ is on the order of the inverse square of the age of the universe, i.e., $H_o^2\sim 10^{-35}$ s $^{-2}$ . The smallness of this number implies that the relative acceleration is very small. Within the solar system, for example, such an effect is swamped by the much larger accelerations due to Newtonian gravitational interactions. It is also not necessarily true that the existence of an anomalous acceleration leads to the expansion of circular orbits over time. An anomalous acceleration $(\ddot{a}/a)\mathbf{r}$ just acts like a slight repulsive force, which is equivalent to reducing the strength of the gravitational attraction by some small amount. The actual trend in the radius of the orbit over time, called the secular trend, is proportional to $(d/dt)(\ddot{a}/a)$ , and this vanishes, for example, in a cosmology dominated by dark energy, where $\ddot{a}/a$ is constant. 
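For a sense of scale, the relative acceleration $(\ddot{a}/a)\mathbf{r}$ can be compared directly with the Newtonian attraction at Earth's orbit. A rough numerical sketch (the $\ddot{a}/a \sim H_o^2 \sim 10^{-35}\ \mathrm{s^{-2}}$ figure is the order-of-magnitude value quoted above; the solar parameters are standard):

```python
# Order-of-magnitude comparison: anomalous cosmological acceleration
# (a''/a) * r versus the Sun's Newtonian acceleration at Earth's orbit.
H0_sq = 1e-35            # s^-2, rough value of a''/a ~ H0^2
r = 1.496e11             # m, 1 AU
GM_sun = 1.327e20        # m^3 s^-2, standard gravitational parameter of the Sun

a_cosmo = H0_sq * r      # anomalous relative acceleration
a_newton = GM_sun / r**2 # ordinary Newtonian attraction

print(f"cosmological: {a_cosmo:.1e} m/s^2")
print(f"Newtonian:    {a_newton:.1e} m/s^2")
print(f"ratio:        {a_cosmo / a_newton:.1e}")
```

The anomalous term is smaller by roughly twenty orders of magnitude, which is why it is utterly swamped within the solar system.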
Thus the nonzero (but undetectably small) effect estimated by Cooperstock et al. for the solar system is a measure of the extent to which the universe is not yet dominated by dark energy. The sign of the effect can be found from the Friedmann equations. Assume that dark energy is describable by a cosmological constant $\Lambda$ , and that the pressure is negligible compared to $\Lambda$ and to the mass-energy density $\rho$ . Then differentiation of the Friedmann acceleration equation gives $(d/dt)(\ddot{a}/a)\propto\dot{\rho}$ , with a negative constant of proportionality. Since $\rho$ is currently decreasing, the secular trend is currently an increase in the size of gravitationally bound systems. For a circular orbit of radius $r$ , a straightforward calculation (see my presentation here , sec. 8.2) shows that the secular trend is $\dot{r}/r=\omega^{-2}(d/dt)(\ddot{a}/a)$ . This produces the undetectably small effect on the solar system referred to above. In "Big Rip" cosmologies, $\ddot{a}/a$ blows up to infinity at some finite time, so cosmological expansion tears apart all matter at progressively smaller and smaller scales. Cooperstock, Faraoni, and Vollick, "The influence of the cosmological expansion on local systems," http://arxiv.org/abs/astro-ph/9803097v1 | {
"source": [
"https://physics.stackexchange.com/questions/70047",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/26438/"
]
} |
70,186 | If you consider them as laws, then there must be independent definitions of force and mass, but I don't think there are such definitions. If you consider them as definitions, then why are they still called laws? | In my view, standard statements of Newton's laws are usually overly concise, and this lack of detail causes confusion about what is a definition, and what is an empirical fact. To avoid this confusion, let's proceed in a systematic way that makes the distinctions between these definitions and empirical statements clear. What follows certainly is not the original statement of the laws made by Newton himself; it is a modern interpretation meant to clarify the foundations of Newtonian mechanics. As a result, the laws will be presented out of order in the interest of logical clarity. To start off, we note that the definitions of mass and force given below will require the concept of a local inertial frame . These are frames of reference in which when an object is isolated from all other matter, its local acceleration is zero. It is an empirical fact that such frames exist, and we'll take this as the first law: First Law. Local inertial reference frames exist. How is this in any way related to the first law we know and love? Well, the way it is often stated, it basically says "if an object isn't interacting with anything, then it won't accelerate." Of course, this is not entirely correct since there are reference frames (non-inertial ones) in which this statement breaks down. You could then say, all right, all we need to do then is to qualify this statement of the first law by saying " provided we are making observations in an inertial frame, an object that doesn't interact with anything won't accelerate," but one could then object that this merely follows from the definition of inertial frames, so it has no physical content.
However, going one step further, we see that it's not at all clear a priori that inertial frames even exist, so the assertion that they do exist does have (deep) physical content. In fact, it seems to me that this existence statement is kind of the essence of how the first law should be thought of, because it basically is saying that there are these special frames in the real world, and if you are observing an isolated object in one of these frames, then it won't accelerate just as Newton says. This version of the first law also avoids the usual criticism that the first law trivially follows from the second law. Equipped with the first law as stated above, we can now define mass. In doing so, we'll find it useful to have another physical fact. Third Law. If two objects, sufficiently isolated from interactions with other objects, are observed in a local inertial frame, then their accelerations will be opposite in direction, and the ratio of their accelerations will be constant. How is this related to the usual statement of the third law? Well, thinking a bit "meta" here to use terms that we haven't defined yet, note that the way the third law is usually stated is "when objects interact in an inertial frame, they exert forces on each other that are equal in magnitude, but opposite in direction." If you couple this with the second law, then you obtain that the product of their respective masses and accelerations is equal up to sign; $m_1\mathbf a_1 = -m_2\mathbf a_2$ . The statement of the third law given in this treatment is equivalent to this, but it's just a way of saying it that avoids referring to the concepts of force and mass which we have not yet defined. Now, we use the third law to define mass. Let two objects $O_0$ and $O_1$ be given, and suppose that they are being observed from a local inertial frame. By the third law above, the ratio of their accelerations is some constant $c_{01}$ ; \begin{align}
\frac{a_0}{a_1} = c_{01}
\end{align} We define object $O_0$ to have mass $m_0$ (whatever value we wish, like 1 for example if we want the reference object to be our unit mass), and we define the mass of $O_1$ to be \begin{align}
m_1=-c_{01}m_0
\end{align} In this way, every object's mass is defined in terms of the reference mass. We are now ready to define force. Suppose that we observe an object $O$ of mass $m$ from a local inertial frame, and suppose that it is not isolated; it is exposed to some interaction $I$ to which we would like to associate a "force." We observe that in the presence of only this interaction, the mass $m$ accelerates, and we define the force $\mathbf F_{I}$ exerted by $I$ on $O$ to be the product of the object's mass and its observed acceleration $\mathbf a$ ; \begin{align}
\mathbf F_{I} \equiv m\mathbf a
\end{align} In other words, we are defining the force exerted by a single interaction $I$ on some object of mass $m$ as the mass times acceleration that a given object would have if it were exposed only to that interaction in a local inertial frame. Second Law. If an object $O$ of mass $m$ in a local inertial frame simultaneously experiences interactions $I_1, \dots, I_N$ , and if $\mathbf F_{I_i}$ is the force that would be exerted on $O$ by $I_i$ if it were the only interaction, then the acceleration $\mathbf a$ of $O$ will satisfy the following equation: \begin{align}
\mathbf F_{I_1} + \cdots +\mathbf F_{I_N} = m \mathbf a
\end{align} | {
"source": [
"https://physics.stackexchange.com/questions/70186",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/25575/"
]
} |
70,248 | As far as I know, today most of the computers are made from semiconductor devices, so the energy consumed all turns into the heat emitted into space. But I wonder, is it necessary to consume energy to perform computation? If so, is there a theoretical numerical lower bound of the energy usage ? (I even have no idea about how to measure the amount of "computation") If not, is there a physically-practical Turing-complete model that needs no energy? edit: Thank @Nathaniel for rapidly answering the question and pointing out it's actually Landauer's principle . Also thank @horchler for referring to the Nature News and the related article . There are lots of useful information in the comments; thank every one! This whole stuff is really interesting! | What you're looking for is Landauer's principle . You should be able to find plenty of information about it now that you know its name, but briefly, there is a thermodynamic limit that says you have to use $k_\mathrm BT \ln 2$ joules of energy (where $k_\mathrm B$ is Boltzmann's constant and $T$ is the ambient temperature) every time you erase one bit of computer memory. With a bit of trickery, all the other operations that a computer does can be performed without using any energy at all. This set of tricks is called reversible computing . It turns out that you can make any computation reversible, thus avoiding the need to erase bits and therefore use energy, but you end up having to store all sorts of junk data in memory because you're not allowed to erase it. However, there are tricks for dealing with that as well. It's quite a well-developed area of mathematical theory, partly because the theory of quantum computing builds upon it. The energy consumed by erasing a bit is given off as heat. 
When you erase a bit of memory you reduce the information entropy of your computer by one bit, and to do this you have to increase the thermodynamic entropy of its environment by one bit, which is equal to $k_\mathrm B \ln 2$ joules per kelvin. The easiest way to do this is to add heat to the environment, which gives the $k_\mathrm BT \ln 2$ figure above. (In principle there's nothing special about heat, and the entropy of the environment could also be increased by changing its volume or driving a chemical reaction, but people pretty much universally think of Landauer's limit in terms of heat and energy rather than those other things.) Of course, all of this is in theory only. Any practical computer that we've constructed so far uses many orders of magnitude more energy than Landauer's limit. | {
"source": [
"https://physics.stackexchange.com/questions/70248",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/26749/"
]
} |
70,316 | When there's dense mist/fog outside and I open a window, why doesn't the inside of my house fill with mist? | It's a matter of temperature and dew point. If it's foggy outside, that means the temperature outside is at or below the dew point. Chances are it's warmer inside your house. If so, no fog. On the other hand, if the temperature inside is the same as outside, it may actually be foggy inside, but you don't notice because sight lines inside may not be long enough to notice the fog. P.S. Dew point is the temperature at which the rate of condensation of water vapor into droplets (releasing heat) equals the rate of evaporation of droplets into vapor (absorbing heat). It depends, of course, on the amount of water vapor in the air. | {
"source": [
"https://physics.stackexchange.com/questions/70316",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9243/"
]
} |
70,376 | Just as background, I should say I am a mathematics grad student who is trying to learn some physics. I've been reading "The Theoretical Minimum" by Susskind and Hrabovsky and on page 134, they introduce infinitesimal transformations. Here's the first example they use: Consider a particle moving in the x,y plane under the influence of a potential, $V$, which depends only on the radius, with Lagrangian: $L=\frac{m}{2}(\dot{x}^2+\dot{y}^2)-V(x^2+y^2)$ This is clearly invariant under rotations: $x \rightarrow x\cos \theta + y\sin \theta$ $y \rightarrow -x\sin \theta + y\cos \theta$ All well and good. Now they say "consider what happens ... when the angle $\theta$ is replaced by an infinitesimal angle $\delta$." Already I could say "What the heck is $\delta$ really?", but I'm willing to play along with my intuition. Since $\delta$ is infinitesimal, we work to first order and say $\cos \delta=1$ and $\sin \delta= \delta$. Plugging this into our rotation formulas above, we obtain: $x \rightarrow x+y\delta$ $y \rightarrow y-x\delta$ By differentiating, we see that: $\dot{x} \rightarrow \dot{x}+\dot{y}\delta$ $\dot{y} \rightarrow \dot{y}-\dot{x}\delta$ Plugging these into the Lagrangian and ignoring terms higher than first order, we see that the Lagrangian is invariant under this transformation. My main problem with all of this is that I don't understand what the physical nature of an infinitesimal transformation actually is. All I got from the above was that if you do this formal calculation by following rules like "only work to first order in $\delta$," then the Lagrangian is invariant. This is in contrast to the case where we have an actual transformation, like a rotation, where there is no question about what is going on physically. I would also like to know how all of this relates to rigorous mathematics. 
In mathematics, I can't recall ever using infinitesimals in an argument or calculation, so it would be useful if there was some way of formulating the above in terms of limits/ derivatives/ differential forms (for example). I sense a connection to Lie Algebras, as the infinitesimal version of the rotation is $(I+A)$ where $I$ is the identity matrix and $A$ is an element of the Lie Algebra of $SO(2)$. Here are some questions whose answers I believe may be useful to me (feel free to answer some or all): -What is an infinitesimal quantity like $\delta$ to the physicist? -Why do physicists argue using infinitesimals rather than "standard" calculus? -What is the physical meaning of infinitesimal transformation? How does it relate to Lie Algebras? -Is there a rigorous theoretical apparatus for justifying the computations shown above? -What is meant by the Lagrangian being invariant under infinitesimal transformations? If any of the questions seem too vague, please say so. Thanks in advance for your insights! | When I asked my undergrad analytic mechanics professor "what does it mean for a rotation to be infinitesimal?" after he hand-wavily presented this topic in class, he answered "it means it's really small." At that point, I just walked away. Later that day I emailed my TA who set me straight by pointing me to a book on Lie theory. Fortunately, I don't intend to write an answer like my professor's. In general, whenever you see the term "infinitesimal BLANK" in physics, you can be relatively certain that this is merely a placeholder for "first order (aka linear) approximation to BLANK." Let's look at one of the most important examples. Infinitesimal transformations. To be more rigorous about this, let's consider the special case of "infinitesimal transformations." If my general terminological prescription above is to be accurate, we have to demonstrate that we can make the concept of a "first order approximation to a transformation" rigorous, and indeed we can. 
For concreteness, let's restrict the discussion to transformations on normed vector spaces. Let an open interval $I=(a,b)$ containing $0$ be given, and suppose that $T_\epsilon$ is a transformation on some normed vector space $X$ such that $T_0(x)$ is the identity. Let $T_\epsilon$ depend smoothly on $\epsilon$; then we define the infinitesimal version $\widehat T$ of $T_\epsilon$ as follows. For each point $x\in X$, we have
$$
\widehat T_\epsilon(x) = x + \epsilon\frac{\partial}{\partial\epsilon}T_{\epsilon}(x)\bigg|_{\epsilon=0}
$$
The intuition here is that we can imagine expanding $T_\epsilon(x)$ as a power series in $\epsilon$;
$$
T_\epsilon(x) = x + \epsilon T_1(x) + \mathcal O(\epsilon^2)
$$
in which case the above expression for the infinitesimal version of $T_\epsilon$ gives
$$
\widehat {T}_\epsilon(x) = x+\epsilon T_1(x)
$$
so the transformation $\widehat T$ encodes the behavior of the transformation $T_\epsilon$ to first order in $\epsilon$. Physicists often call the transformation $T_1$ the infinitesimal generator of $T_\epsilon$. Example. Infinitesimal rotations in 2D Consider the following rotation of the 2D Euclidean plane:
$$
T_\epsilon = \begin{pmatrix}
\cos\epsilon& -\sin\epsilon\\
\sin\epsilon& \cos\epsilon\\
\end{pmatrix}
$$
This transformation has all of the desired properties outlined above, and its infinitesimal version is
$$
\widehat T_\epsilon =
\begin{pmatrix}
1& 0\\
0& 1\\
\end{pmatrix}
+ \begin{pmatrix}
0& -\epsilon\\
\epsilon& 0\\
\end{pmatrix}
$$
If we act on a point in 2D with this infinitesimal transformation, then we get a good approximation to what the full rotation does for small values of $\epsilon$ because we have made a linear approximation. But independent of this statement, notice that the infinitesimal version of the transformation is rigorously defined. Relation to Lie groups and Lie algebras. Consider a Lie group $G$. This is essentially a group $G$ that can also be thought of as a smooth manifold in such a way that the group multiplication and inverse maps are also smooth. Each element of this group can be thought of as a transformation, and we can consider a smooth, one-parameter family of group elements $g_\epsilon$ with the property that $g_0 = \mathrm{id}$, the identity in the group. Then as above, we can define an infinitesimal version of this one-parameter family of transformations;
$$
\widehat g_\epsilon = \mathrm{id} + \epsilon v
$$
The coefficient $v$ of $\epsilon$ in this first order approximation is basically (this is exactly true for matrix Lie groups) an element of the Lie algebra of this Lie group. In other words, Lie algebra elements are infinitesimal generators of smooth, one-parameter families of Lie group elements that start at the identity of the group. For the rotation example above, the matrix
$$
\begin{pmatrix}
0& -1\\
1& 0\\
\end{pmatrix}
$$
is therefore an element of the Lie algebra $\mathfrak{so}(2)$ of the Lie group $\mathrm{SO}(2)$ of rotations of the Euclidean plane. As it turns out, transformations associated with Lie groups are all over the place in physics (particularly in elementary particle physics and field theory), so studying these objects becomes very powerful. Invariance of a Lagrangian. Suppose we have a Lagrangian $L(q,\dot q)$ defined on the space (tangent bundle of the configuration manifold of a classical system) of generalized positions $q$ and velocities $\dot q$. Suppose further that we have a transformation $T_\epsilon$ defined on this space; then we say that the Lagrangian is invariant under this transformation provided
$$
L(T_\epsilon(q,\dot q)) = L(q, \dot q)
$$
The Lagrangian is said to be infinitesimally invariant under $T_\epsilon$ provided
$$
L(T_\epsilon(q,\dot q)) = L(q, \dot q) + \mathcal O(\epsilon^2)
$$
In other words, it is invariant to first order in $\epsilon$. As you can readily see, infinitesimal invariance is weaker than invariance. Interestingly, only infinitesimal invariance of the Lagrangian is required for certain results (most notably Noether's theorem) to hold. This is one reason why infinitesimal transformations, and therefore Lie groups and Lie algebras, are useful in physics. Application: Noether's theorem. Let a Lagrangian $L:\mathscr C\times\mathbb R\to\mathbb R$ be given, where $\mathscr C$ is some sufficiently well-behaved space of paths on configuration space $Q$, and let a one-parameter family of transformations $T_\epsilon:\mathscr C\to\mathscr C$ starting at the identity be given. The first order change in the Lagrangian under this transformation is
$$
\delta L(q,t) = \frac{\partial}{\partial\epsilon}L(T_\epsilon(q),t)\Big |_{\epsilon=0}
$$
One (not the strongest) version of Noether's theorem says that if $L$ is local in $q$ and its first derivatives, namely if there is a function $\ell$ such that (in local coordinates on $Q$) $L(q,t) = \ell(q(t), \dot q(t), t)$ and if
$$
\delta L(q,t) = 0
$$
for all $q\in\mathscr C$ that satisfy the equation of motion, namely if the Lagrangian exhibits infinitesimal invariance, then the quantity
$$
G = \frac{\partial \ell}{\partial \dot q^i}\delta q^i, \qquad \delta q^i(t) = \frac{\partial}{\partial\epsilon}T_\epsilon(q)^i(t)\bigg|_{\epsilon=0}
$$
is conserved along solutions to the equations of motion. The proof is a couple lines; just differentiate $G$ evaluated on a solution with respect to time and use the chain rule and the Euler-Lagrange equations to show it's zero. | {
"source": [
"https://physics.stackexchange.com/questions/70376",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/26804/"
]
} |
70,400 | NASA just announced that they detected the first radio bursts from outside of our galaxy . Astronomers, including a team member from NASA's Jet Propulsion Laboratory in Pasadena, Calif., have detected the first population of radio bursts known to originate from galaxies beyond our own Milky Way. The sources of the light bursts are unknown , but cataclysmic events, such as merging or exploding stars, are likely the triggers. The new radio-burst detections -- four in total -- are from billions of light-years away , erasing any doubt that the phenomenon is real. If we don't know what the source is, then how do we know how far away the source is? How can we tell how far the light waves have traveled? | The relevant Science summary is "Radio Bursts, Origin Unknown" by Cordes, abstract here . Briefly, it notes that a burst of radio waves will undergo dispersion in the interstellar and intergalactic medium (ISM and IGM), the amount of which is indicative of how much matter the signal has passed through. Some further background not given in that summary: In general, the ISM/IGM consists largely of free protons and electrons. As a plasma, this medium has a frequency-dependent plasma frequency , so different frequency components will propagate at slightly different speeds (just under the speed of light). (In fact, radio waves that are too low in frequency cannot propagate at all through the "emptiness" of space.) The more material the signal has passed through, the more spread-out the pulse will be, with lower frequencies arriving after higher ones. Assuming the original shape of the pulse is known (perhaps it is just a very sharp spike, at least compared to how broad it is by the time we detect it), then one can take the shape of the detected pulse and figure out how much dispersion has occurred. Then one asks, "How much of this is due to material in our own galaxy?" 
which is answerable based on maps astronomers have constructed of the interstellar (intra-galactic) medium along various sight lines. Next one guesses (in an educated fashion) how much is due to the host galaxy of the source. The rest is attributed to the IGM, which, assuming some uniform density, yields a distance. A summary of this technique can also be found on Wikipedia; see Dispersion in pulsar timing . As it turns out, this is indeed the method used in the Science paper by Thornton et al., "A Population of Fast Radio Bursts at Cosmological Distances," abstract here , to which the NASA press release was referring. These particular events were found at high galactic latitude, meaning they were seen looking "up" or "down" out of the plane of the galaxy, rather than through the bulk of the disk, so not much of the measured dispersion can be attributed to the ISM in the Milky Way. That second Wikipedia article defines dispersion measure. The Thornton paper reports that the dispersion measures for these four objects are $553$, $723$, $944$, and $1104\ \mathrm{pc/cm^3}$. After subtracting the effect of our own ISM, they conclude the extragalactic dispersion measures (including any contribution from host galaxies) are $521$, $677$, $910$, and $1072\ \mathrm{pc/cm^3}$. They then assume host dispersion measures are $100\ \mathrm{pc/cm^3}$ in all cases, subtract that off, and divide by a value for the IGM number density of electrons (in $\mathrm{cm^{-3}}$) to get a distance in parsecs. Actually, this last step is a little more complicated due to the fact that the universe has expanded over the long time those radio waves have been traveling, but the authors take that into account. In summary, radio astronomy uses the fact that the space between galaxies is not completely empty, and that radio waves, like all forms of light, slow down in various ways when traveling through matter.
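To make that naive last step concrete, a minimal sketch (the mean IGM electron density used here is an assumed round number, and no expansion correction is applied, so this deliberately reproduces only the crude estimate, not the paper's corrected distances):

```python
# Naive distance estimate from a dispersion measure: subtract the (assumed)
# host-galaxy contribution, then divide by an assumed mean IGM electron density.
dm_extragalactic = [521.0, 677.0, 910.0, 1072.0]  # pc/cm^3, values quoted above
dm_host = 100.0                                   # pc/cm^3, assumed host contribution
n_e_igm = 2e-7                                    # cm^-3, assumed mean IGM density

for dm in dm_extragalactic:
    d_pc = (dm - dm_host) / n_e_igm   # distance in parsecs
    print(f"DM = {dm:6.0f} pc/cm^3  ->  d ~ {d_pc / 1e9:.1f} Gpc")
```

Even this crude arithmetic lands the bursts at gigaparsec, i.e. cosmological, distances.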
This is particularly useful when the signal doesn't have sharp spectral features from which to obtain redshifts (this technique being common in optical, UV, and IR astronomy). | {
"source": [
"https://physics.stackexchange.com/questions/70400",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
70,496 | I understand mathematically how one can obtain the conservation equations in both the conservative
$${\partial\rho\over\partial t}+\nabla\cdot(\rho \textbf{u})=0$$ $${\partial\rho{\textbf{u}}\over\partial t}+\nabla\cdot(\rho \textbf{u}\otimes\textbf{u})+\nabla p=0$$ $${\partial E\over\partial t}+\nabla\cdot(\textbf{u}(E+p))=0$$ and non-conservative forms. However, I am still confused: why do we call them conservative and non-conservative forms? Can anyone explain from a physical and mathematical point of view? Many off-site threads deal with this question ( here and here ), but none of them provides a good enough answer for me! If anyone can provide some hints, I will be very grateful. | What does it mean? The reason they are conservative or non-conservative has to do with the splitting of the derivatives. Consider the conservative derivative: $$ \frac{\partial \rho u}{\partial x} $$ When we discretize this, using a simple numerical derivative just to highlight the point, we get: $$ \frac{\partial \rho u}{\partial x} \approx \frac{(\rho u)_i - (\rho u)_{i-1}}{\Delta x} $$ Now, in non-conservative form, the derivative is split apart as: $$ \rho \frac{\partial u}{\partial x} + u \frac{\partial \rho}{\partial x} $$ Using the same numerical approximation, we get: $$ \rho \frac{\partial u}{\partial x} + u \frac{\partial \rho}{\partial x} = \rho_i \frac{u_i - u_{i-1}}{\Delta x} + u_i \frac{\rho_i - \rho_{i-1}}{\Delta x} $$ So now you can see (hopefully!) there are some issues. While the original derivative is mathematically the same, the discrete form is not the same. Of particular difficulty is the choice of the terms multiplying the derivative. Here I took it at point $i$, but is $i-1$ better? Maybe at $i-1/2$? But then how do we get it at $i-1/2$? Simple average? Higher order reconstructions? Those arguments just show that the non-conservative form is different, and in some ways harder, but why is it called non-conservative? For a derivative to be conservative, it must form a telescoping series.
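The telescoping test is easy to run numerically; a minimal sketch (the grid values are random, purely for illustration):

```python
import random

# A 1D grid of density rho and velocity u; compare the sum of the
# conservative discrete derivatives with the non-conservative split.
random.seed(0)
n = 8
dx = 0.1
rho = [random.uniform(1.0, 2.0) for _ in range(n)]
u = [random.uniform(-1.0, 1.0) for _ in range(n)]
flux = [r * v for r, v in zip(rho, u)]  # rho*u at each node

# Conservative: sum_i ((rho u)_i - (rho u)_{i-1}) / dx  -- telescopes
conservative_sum = sum((flux[i] - flux[i - 1]) / dx for i in range(1, n))

# Non-conservative: sum_i [rho_i (u_i - u_{i-1}) + u_i (rho_i - rho_{i-1})] / dx
non_conservative_sum = sum(
    (rho[i] * (u[i] - u[i - 1]) + u[i] * (rho[i] - rho[i - 1])) / dx
    for i in range(1, n)
)

boundary = (flux[-1] - flux[0]) / dx  # what a conserved quantity should give
print("conservative sum:    ", conservative_sum)   # equals the boundary difference
print("boundary difference: ", boundary)
print("non-conservative sum:", non_conservative_sum)
```

The conservative sum collapses to the boundary flux difference exactly, while the non-conservative sum picks up an extra $\sum_i (\Delta\rho)_i(\Delta u)_i/\Delta x$ that does not cancel.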
In other words, when you add up the terms over a grid, only the boundary terms should remain and the artificial interior points should cancel out. So let's look at both forms to see how those do. Let's assume a 4 point grid, ranging from $i=0$ to $i=3$. The conservative form expands as: $$ \frac{(\rho u)_1 - (\rho u)_0}{\Delta x} + \frac{(\rho u)_2 - (\rho u)_1}{\Delta x} + \frac{(\rho u)_3 - (\rho u)_2}{\Delta x} $$ You can see that when you add it all up, you end up with only the boundary terms ($i = 0$ and $i = 3$). The interior points, $i = 1$ and $i = 2$ have canceled out. Now let's look at the non-conservative form: $$ \rho_1 \frac{u_1 - u_0}{\Delta x} + u_1 \frac{\rho_1 - \rho_0}{\Delta x} + \rho_2 \frac{u_2 - u_1}{\Delta x} + u_2 \frac{\rho_2 - \rho_1}{\Delta x} + \rho_3 \frac{u_3 - u_2}{\Delta x} + u_3 \frac{\rho_3 - \rho_2}{\Delta x} $$ So now, you end up with no terms canceling! Every time you add a new grid point, you are adding in a new term and the number of terms in the sum grows. In other words, what comes in does not balance what goes out, so it's non-conservative. You can repeat the analysis by playing with altering the coordinate of those terms outside the derivative, for example by trying $i-1/2$ where that is just the average of the value at $i$ and $i-1$. How to choose which to use? Now, more to the point, when do you want to use each scheme? If your solution is expected to be smooth, then non-conservative may work. For fluids, this is shock-free flows. If you have shocks, or chemical reactions, or any other sharp interfaces, then you want to use the conservative form. There are other considerations. Many real world, engineering situations actually like non-conservative schemes when solving problems with shocks. The classic example is the Murman-Cole scheme for the transonic potential equations. It contains a switch between a central and upwind scheme, but it turns out to be non-conservative. 
At the time it was introduced, it got incredibly accurate results. Results that were comparable to the full Navier-Stokes results, despite using the potential equations which contain no viscosity. They discovered their error and published a new paper, but the results were much "worse" relative to the original scheme. It turns out the non-conservation introduced an artificial viscosity, making the equations behave more like the Navier-Stokes equations at a tiny fraction of the cost. Needless to say, engineers loved this. "Better" results for significantly less cost! | {
"source": [
"https://physics.stackexchange.com/questions/70496",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/26864/"
]
} |
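The telescoping argument in the answer above is easy to check numerically. A minimal sketch, assuming the same backward differences used there and a 4-point grid with made-up field values:

```python
# Backward-difference sums over a 1-D grid: the conservative form
# telescopes down to boundary terms only; the non-conservative form does not.

def conservative_sum(rho, u, dx):
    # sum over i of [(rho*u)_i - (rho*u)_{i-1}] / dx
    f = [r * v for r, v in zip(rho, u)]
    return sum((f[i] - f[i - 1]) / dx for i in range(1, len(f)))

def nonconservative_sum(rho, u, dx):
    # sum over i of [rho_i*(u_i - u_{i-1}) + u_i*(rho_i - rho_{i-1})] / dx
    return sum(
        (rho[i] * (u[i] - u[i - 1]) + u[i] * (rho[i] - rho[i - 1])) / dx
        for i in range(1, len(rho))
    )

dx = 0.1
rho = [1.0, 1.2, 0.9, 1.5]  # made-up values on a 4-point grid
u = [2.0, 2.5, 1.8, 2.2]

cons = conservative_sum(rho, u, dx)
boundary = (rho[-1] * u[-1] - rho[0] * u[0]) / dx  # what telescoping predicts
noncons = nonconservative_sum(rho, u, dx)
```

Here `cons` matches `boundary` to rounding error (the interior terms cancel), while `noncons` differs: the interior contributions survive, which is exactly why the split form is called non-conservative.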
70,541 | The GPS is a very handy example in explaining to a broad audience why it is useful for humanity to know the laws of general relativity. It nicely bridges the abstract theory with daily life technologies! I'd like to know an analogous example of a technology which could not have been developed by engineers who didn't understand the rules of quantum mechanics. (I guess that I should say quantum mechanics , because asking for a particle physics application could be too early.) To bound the question: No future applications (e.g. teleportation). No uncommon ones (for, who has a quantum computer at home?). A less frequently-cited example than the laser, please. If possible, for sake of simplicity, we'll allow that the quantum theory appears in form of a small correction to the classical one (just like one doesn't need the full apparatus of general relativity to deduce the gravitational red-shift). | How about diagnostic methods in modern medicine? Nuclear magnetic resonance (NMR) - it wouldn't even make sense to talk about it without quantum mechanics, because it depends on the quantum mechanical concept of spin Positron emission tomography - hey, the name says it all, not only do you apply quantum mechanics, but you have a direct application of antimatter X-ray scanning , scintigraphy and many, many more... nuclear medicine is full of direct applications of nuclear, particle and quantum physics... It's even common to find particle accelerators in oncology departments for cancer therapy! And what's a better application to mention to a common layman than "curing cancer"? I'm sure you'll find lots of examples from medicine on the Internet :) | {
"source": [
"https://physics.stackexchange.com/questions/70541",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9662/"
]
} |
70,651 | I always thought of current as the time derivative of charge, $\frac{dq}{dt}$. However, I found out recently that it is the ampere that is the base unit and not the coulomb. Why is this? It seems to me that charge can exist without current, but current cannot exist without charge. So the logical choice for a base unit would be the coulomb. Right? | Because it was defined by measurements (the force between two wire segments) that could be easily made in the laboratory at the time. The phrase is "operational definition", and it is the cause of many (most? all?) of the seemingly weird decisions about fundamental units. It is why we define the second and the speed of light but derive the meter these days.
"source": [
"https://physics.stackexchange.com/questions/70651",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/23651/"
]
} |
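The "force between two wire segments" measurement in the answer above can be made concrete. Under the pre-2019 SI definition, two long parallel wires 1 m apart, each carrying 1 A, exert exactly $2\times10^{-7}$ N per metre on each other; a sketch of that textbook formula $F/L = \mu_0 I_1 I_2 / (2\pi d)$:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, N/A^2 (exact pre-2019 value)

def force_per_metre(i1, i2, d):
    """Force per unit length between two long parallel wires (N/m)."""
    return MU0 * i1 * i2 / (2 * math.pi * d)

f = force_per_metre(1.0, 1.0, 1.0)  # the old definitional setup: 2e-7 N/m
```

Measuring such a force on a current balance was far easier in a 19th/20th-century lab than counting individual charges, which is the operational reason the ampere, not the coulomb, became the base unit.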
70,823 | I've seen the phrase "Trace Anomaly" refer to two seemingly different concepts, though I assume they must be related in some way I'm not seeing. The first way I've seen it used is in the manner, for example, that is relevant for the a-theorem and c-theorem. That is, given a CFT on a curved background, the trace of the energy-momentum tensor is non-zero due to trace anomalies which relate $T^\mu_\mu$ to different curvatures, that is (in 4D) $T^\mu_\mu\sim b\square R+aE_4+cW^2$, where $E_4$ is the Euler density and $W^2$ is the square of the Weyl tensor. The second manner in which I've seen it used is in the context of relating $T^\mu_\mu$ to beta functions as the presence of a non-zero beta function indicates scale dependence, and hence breaks conformal invariance. For example, Yang-Mills is classically conformally invariant but it is quoted as having a trace anomaly which seems to be of different character than that of the previous paragraph. As in the Peskin and Schroeder chapter on scale anomalies, it's quoted that since the gauge coupling, $g$, depends on scale due to RG the theory is not quantum mechanically scale invariant (or more generally, I guess, Weyl invariant) and hence $T^\mu_\mu$ is non-zero. Slightly more precisely, given the YM lagrangian $\mathcal{L}\sim \frac{1}{g^2}{\rm Tr}F^2$, one finds $T^\mu_\mu\sim \beta(g){\rm Tr}F^2$, or something to that effect. It's my understanding that this second kind of trace anomaly is important for explaining the mass of nuclei, as most of their mass comes from gluonic energy. That might be wrong, though, and it's not so relevant anyway. What's the relation between these two types of anomalies? Are they the same thing in disguise? | A good analogy for the difference between the two can be given in terms of two other examples of anomalies, that are possibly more familiar. Consider a field theory with a global symmetry, take $U(1)$ for simplicity. 
At the classical level, the equations of motion lead to the existence of a conserved current (Noether's theorem). At the quantum level, the conservation of the current is valid as an operator equation, namely it is valid in correlators at separated points . The two effects, related but very different in nature, that are referred to as anomalies, are: 1) There can exist contact terms in correlators (i.e. terms that are non-zero only when two or more of the operators in the correlator are evaluated at the same point) that do not respect the operator equation. In 4D field theory this typically happens in correlators of three current operators. This is what sometimes is referred to as an 't Hooft anomaly. It does not represent a breaking of the symmetry, because the conservation of the current operator is still valid at separated points, and one still gets a conserved charge. However, it leads to interesting constraints (the coefficients of such contact terms must match between the UV and IR, if the symmetry is not broken along the RG flow). 2) There can be quantum effects (you can think about them as loop corrections, assuming we are in a perturbative setting) that violate the operator equation even at separated points . In this case the symmetry is broken, much like if you add a term in the Lagrangian that does not respect the symmetry. There is no conserved charge any more. The relation between 1) and 2) can be explained in a slightly refined example. Take the global symmetry to be $U(1)^2$ . Than you could have an anomaly of type 1) in a correlator involving one current of the first $U(1)$ , and two currents of the second $U(1)$ . Now suppose modifying the theory by gauging the second $U(1)$ , i.e. coupling the current of the second $U(1)$ to dynamical gauge fields. In the new gauged theory, the first $U(1)$ is broken by an anomaly of type 2). 
The divergence of its current is now non-zero, and given by the Pontryagin density of the gauge fields of the second $U(1)$ . The first example of trace-anomaly that you discuss is the analogue of 1), while the second is the analogue of 2), when instead of a global $U(1)$ we consider the dilatation symmetry. The first example does not represent a violation of the symmetry, it is just the statement that certain contact terms in the correlators with multiple insertions of the energy-momentum tensor are not compatible with the traceless-ness condition. The second example instead is a genuine violation of the symmetry. The analogy with the $U(1)$ symmetry does not go through when we try to relate 1) with 2), because the equivalent of "coupling the current to gauge field" would be introducing dynamical gravity, which brings us away from the domain of quantum field theory. This analogy becomes very concrete in supersymmetric theories. There, the energy-momentum tensor belongs to the same multiplet of the current associated to the so-called R-symmetry. Supersymmetry relates the 't Hooft anomaly of this current to the first kind of trace-anomaly that you discuss (i.e. they have the same coefficient). Moreover, when dilatation symmetry is broken by a gauge coupling via the trace anomaly of second type that you discuss, then the current has an anomaly of type 2). Again, the trace anomaly and the current anomaly have the same coefficient by supersymmetry. | {
"source": [
"https://physics.stackexchange.com/questions/70823",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/26866/"
]
} |
70,839 | I read some methods but they're not accurate. They use the Archimedes principle and they assume uniform body density, which of course is far from true. Others are silly, like this one: Take a knife, then remove your head. Place it on some scale. Take the reading. Re-attach your head. I'm looking for some ingenious way of doing this accurately without having to lie on your back and put your head on a scale, which isn't a good idea. | Get someone to relax their neck as much as possible, stabilize their torso, then punch them in the head with a calibrated fist and measure the initial acceleration. Apply $\vec F=m \vec a$. | {
"source": [
"https://physics.stackexchange.com/questions/70839",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/25575/"
]
} |
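The tongue-in-cheek scheme above reduces to Newton's second law. A minimal sketch with entirely made-up numbers for the calibrated impulse and the measured initial acceleration:

```python
def head_mass(force_newtons, acceleration):
    """m = F / a, valid only for the instant before the neck transmits force."""
    return force_newtons / acceleration

# hypothetical readings: a 50 N calibrated tap, 10 m/s^2 initial acceleration
m = head_mass(50.0, 10.0)  # 5.0 kg, in the right ballpark for a human head
```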
70,882 | Many (if not all) of the materials I've read claim Ward identity is a consequence of gauge invariance of the theory, while actually their derivations only make use of current conservation $\partial_\mu J^\mu=0$ (which is only equivalent to a global phase symmetry). I'm aware of the fact that a gauge field has to be coupled to a conserved current to keep gauge invariance, but a non-gauge field can also be (though it need not be) coupled to a conserved current, and in that case the Ward identity should still hold. So do you think it is at least misleading, if not wrong, to claim the Ward identity is a consequence of gauge invariance? | This answer partially disagrees with Motl's. The crucial point is to consider the difference between the abelian and non-abelian case. I totally agree with Motl's answer in the non-abelian case, where these identities are usually called Slavnov-Taylor rather than Ward identities, so I will restrict to the abelian case. First, a few words about terminology: Ward identities are the quantum counterpart to (first and second) Noether's theorem in classical physics. They apply to both global and gauge symmetries. However, the term is often reserved for the $U(1)$ gauge symmetry in QED. In the case of gauge symmetries, Ward identities yield real identities, such as $k^{\mu}\mathcal M_{\mu}=0$, where $\mathcal M_{\mu}$ is defined by $\mathcal M=\epsilon_{\mu}\,\mathcal M^{\mu}$, in QED, which tell us that photon polarizations parallel to the photon's propagation don't contribute to scattering amplitudes. In the case of global symmetries, however, Ward identities reflect properties of the theory. For example, the S-matrix of a Lorentz invariant theory is also Lorentz invariant, or the number of particles minus antiparticles in the initial state is the same as in the final state in a theory with a global (independent of the point in space-time) $U(1)$ phase invariance.
Let's study the case of a massive vectorial field minimally coupled to a conserved current: \begin{align}\mathcal L&=-\frac{1}{4}\,F^2+\frac{a^2}{2}A^2+i\,\bar\Psi\displaystyle{\not}D\, \Psi - \frac{m^2}{2}\bar\Psi\Psi \\
&=-\frac{1}{4}\,F^2+\frac{a^2}{2}A^2+i\,\bar\Psi\displaystyle{\not}\partial \, \Psi - \frac{m^2}{2}\bar\Psi\Psi-e\,A_{\mu}\,j^{\mu}\end{align} Note that this theory has a global phase invariance $\Psi\rightarrow e^{-i\theta}\,\Psi$ , with a Noether current $$j^{\mu}={\bar\Psi\, \gamma^{\mu}}\,\Psi$$ such that (classically) $\partial_{\mu}\,j^{\mu}=0$ . Apart from this symmetry, it is well-known that the Lagrangian above is equivalent to a theory that i) doesn't have an explicit mass term for the vectorial field and that ii) contains a scalar field (a Higgs-like field) with a non-zero vacuum expectation value, which spontaneously break a $U(1)$ gauge symmetry (this symmetry is not the gauged $U(1)$ global symmetry mentioned previously). The equivalence is in the limit where vacuum expectation value goes to infinity and the coupling between the vectorial field and the Higgs-like scalar goes to zero. Since one has to take this last limit, the charge cannot be quantized and therefore the $U(1)$ gauge symmetry must be topologically equivalent to the addition of real numbers rather than the multiplication of complex numbers with unit modulus (a circumference). The difference between both groups is only topological (does this mean then that the difference is irrelevant in the following?). This mechanism is due to Stueckelberg and I will summarize it at the end of this answer. 
In a process in which there is a massive vectorial particle in the initial or final state, the LSZ reduction formula gives: $$\langle i\,|\,f \rangle\sim \epsilon _{\mu}\int d^4x\,e^{-ik\cdot x}\, \left(\eta^{\mu\nu}(\partial ^2-a^2)-\partial^{\mu}\partial^{\nu}\right)\cdots\langle 0|\mathcal{T}A_{\nu}(x)\cdots|0\rangle$$ From the Lagrangian above, the following classical equations of motion may be obtained $$\left(\eta^{\mu\nu}(\partial ^2-a^2)-\partial^{\mu}\partial^{\nu}\right)A_{\nu}=ej^{\mu}$$ Then, quantumly, $$\left(\eta^{\mu\nu}(\partial ^2-a^2)-\partial^{\mu}\partial^{\nu}\right)\langle 0|\mathcal{T}A_{\nu}(x)\cdots|0\rangle = e\,\langle 0|\mathcal{T}j^{\mu}(x)\cdots|0\rangle + \text{contact terms, which don't contribute to the S-matrix}$$ and therefore $$\langle i\,|\,f \rangle\sim \epsilon _{\mu}\int d^4x\,e^{-ik\cdot x}\cdots\langle 0|\mathcal{T}j^{\mu}(x)\cdots|0\rangle +\text{contact terms}\sim \epsilon_{\mu}\mathcal{M}^{\mu}$$ If one replaces $\epsilon_{\mu}$ with $k_{\mu}$ , one obtains $$k_{\mu}\mathcal{M}^{\mu}\sim k _{\mu}\int d^4x\,e^{-ik\cdot x}\cdots\langle 0|\mathcal{T}j^{\mu}(x)\cdots|0\rangle$$ Making use of $k_{\mu}\sim \partial_{\mu}\,,e^{-ik\cdot x}$ , integrating by parts, and getting rid of the surface term (the plane wave is an idealization, what one actually has is a wave packet that goes to zero in the spatial infinity), one gets $$k_{\mu}\mathcal{M}^{\mu}\sim \int d^4x\,e^{-ik\cdot x}\cdots \partial_{\mu}\,\langle 0|\mathcal{T}j^{\mu}(x)\cdots|0\rangle$$ One can now use the Ward identity for the global $\Psi\rightarrow e^{-i\theta}\,\Psi$ symmetry (classically $\partial_{\mu}\,j^{\mu}=0$ over solutions of the matter, $\Psi$ , equations of motion) $$\partial_{\mu}\, \langle 0|\mathcal{T}j^{\mu}(x)\cdots|0\rangle = \text{contact terms, which don't contribute to the S-matrix}$$ And hence $$k^{\mu}\mathcal M_{\mu}=0$$ same as in the massless case. 
Note that in this derivation, it has been crucial that the explicit mass term for the vectorial field doesn't break the global $U(1)$ symmetry. This is also related to the fact that the explicit mass term for the vectorial field can be obtained through a Higgs-like mechanism connected with a hidden (the Higgs-like field decouples from the rest of the theory) $U(1)$ gauge symmetry. A more careful calculation should include counterterms in the interacting theory, however I think that this is the same as in the massless case. We can think of the fields and parameters in this answer as bare fields and parameters. Stueckelberg mechanism Consider the following Lagrangian $$\mathcal L=-{1\over 4}\,F^2+|d\phi|^2+\mu^2\,|\phi|^2-\lambda\, (\phi^*\phi)^2$$ where $d=\partial - ig\, B$ and $F$ is the field strength (Faraday tensor) for $B$ . This Lagrangian is invariant under the gauge transformation $$B\rightarrow B + (1/g)\partial \alpha (x)$$ $$\phi\rightarrow e^{i\alpha(x)}\phi$$ Let's take a polar parametrization for the scalar field $\phi$ : $\phi\equiv {1\over \sqrt{2}}\rho\,e^{i\chi}$ , thus $$\mathcal L=-{1\over 4}\,F^2+{1\over 2}\rho^2\,(\partial_{\mu}\chi-g\,B_{\mu})^2+{1\over 2}(\partial \rho)^2+{\mu^2\over 2}\,\rho ^2- {\lambda\over 4}\rho^4$$ We may now make the following field redefinition $A\equiv B - (1/g)\partial \chi$ and noting that $F_{\mu\nu}=\partial_{\mu}B_{\nu}-\partial_{\nu}B_{\mu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$ is also the field strength for $A$ $$\mathcal L=-{1\over 4}\,F^2+{g^2\over 2}\rho^2\,A^2+{1\over 2}(\partial \rho)^2+{\mu^2\over 2}\,\rho ^2-{\lambda\over 4}\, \rho^4$$ If $\rho$ has a vacuum expectation value different from zero $\langle 0|\rho |0\rangle = v=\sqrt{\mu^2\over \lambda}$ , it is then convenient to write $\rho (x)=v+\omega (x)$ . 
Thus $$\mathcal L=-{1\over 4}\,F^2+{a^2\over 2}\,A^2+g^2\,v\,\omega\,A^2+{g^2\over 2}\,\omega ^2\,A^2+{1\over 2}(\partial \omega)^2-{\mu^2\over 2}\,\omega ^2-\lambda\,v\omega^3-{\lambda\over 4}\, \omega^4+{v^4\,\lambda^2\over 4}$$ where $a\equiv g\times v$ . If we now take the limit $g\rightarrow 0$ , $v\rightarrow \infty$ , keeping the product, $a$ , constant, we get $$\mathcal L=-{1\over 4}\,F^2+{a^2\over 2}\,A^2+{1\over 2}(\partial \omega)^2-{\mu^2\over 2}\,\omega ^2-\lambda\,v\omega^3-{\lambda\over 4}\, \omega^4+{v^4\,\lambda^2\over 4}$$ that is, all the interactions terms between $A$ and $\omega$ disappear so that $\omega$ becomes an auto-interacting field with infinite mass that is decoupled from the rest of the theory, and therefore it doesn't play any role. Thus, we recover the massive vectorial field with which we started. $$\mathcal L=-{1\over 4}\,F^2+{a^2\over 2}\,A^2$$ Note that in a non-abelian gauge theory must be non-linear terms such as $\sim g A^2\,\partial A\;$ , $\sim g^2 A^4$ , which prevent us from taking the limit $g\rightarrow 0$ . | {
"source": [
"https://physics.stackexchange.com/questions/70882",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/6848/"
]
} |
70,887 | As above... Surely it must go somewhere? My child asked me this question and it was a difficult one to answer. | Light travels at 300,000,000 meters per second. There is a very small period of time after switching it off where there still are photons from the bulb in the room. But these get absorbed/scattered by the wall and thus you don't see them. This is what happens to all of the light that came out of the bulb significantly before you turned it off as well -- the photons (or EM radiation, take your pick) are no longer present in the room in their initial form. When the light hits the wall, some of it is absorbed, which heats up the wall ever so slightly (just as sunlight can heat things. For that matter, you can warm your hand even with a flashlight if you hold it in place for a couple of minutes. You can do the same by placing your hand near a lightbulb). This heat dissipates through the wall. Also, some of it is scattered back with a shift in wavelength -- the electromagnetic radiation is still present in the room, but it is no longer visible. It could be radio waves/infrared waves or even ultraviolet waves. It's harmless, though. Eventually, it all leaves the room in the form of heat. Remember, a light bulb doesn't "contain" light. It contains a filament, which glows when you pass electricity through it. This is exactly what happens when you heat a metal -- the electrons get excited and start emitting visible light. Here, you are supplying energy to the bulb (in the form of electricity), and it gets heated, giving off light. For CFL bulbs the system is more complicated, but it still involves the excitation of electrons. | {
"source": [
"https://physics.stackexchange.com/questions/70887",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/26979/"
]
} |
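A rough sketch of why the light in the answer above vanishes "instantly": assuming a made-up 5 m room where only 10% of the light survives each wall bounce, the remaining fraction drops below any perceptible level in well under a microsecond:

```python
C = 3.0e8           # speed of light, m/s
ROOM = 5.0          # assumed wall-to-wall distance, m
REFLECTIVITY = 0.1  # assumed fraction of light surviving each bounce

def time_to_fade(threshold):
    """Time (s) until the remaining light fraction drops below `threshold`."""
    t, remaining = 0.0, 1.0
    while remaining >= threshold:
        t += ROOM / C          # one crossing of the room (~17 ns here)
        remaining *= REFLECTIVITY
    return t

t = time_to_fade(1e-30)  # time until essentially nothing is left
```

Even demanding an absurdly tiny remaining fraction of $10^{-30}$, the whole process is over in a few hundred nanoseconds, far too fast for the eye to notice.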
70,915 | The Abraham-Lorentz force gives the recoil force, $\mathbf{F_{rad}}$, back on a charged particle $q$ when it emits electromagnetic radiation. It is given by: $$\mathbf{F_{rad}} = \frac{q^2}{6\pi \epsilon_0 c^3}\mathbf{\dot{a}},$$ where $\mathbf{\dot{a}}$ is the rate of change of acceleration. If a particle has a constant acceleration, $\mathbf{\dot{a}}=0$, then there is no reaction force acting on it. Therefore the particle does not lose energy. Does this mean that a constantly accelerating charged particle does not emit electromagnetic radiation? | This is an old, hard, controversial question. It is in some sense not well defined, because there are subtle ways in which it can be difficult to pin down the distinction between a radiation field and a non-radiative field. Perhaps equivalently, there are ambiguities in the definition of "local." If an accelerating charge did radiate, it would cause a problem for the equivalence principle. There are arguments by smart people who claim that an accelerating charge doesn't radiate (Harpaz 1999; Feynman's point of view is presented at http://www.mathpages.com/home/kmath528/kmath528.htm ). There are arguments by smart people who claim that an accelerating charge does radiate (Parrott 1993). There are other people who are so smart that they don't try to give a yes/no answer (Morette-DeWitt 1964, Gralla 2009, Grøn 2008). People have written entire books on the subject (Lyle 2008). A fairly elementary argument for the Feynman point of view is as follows. Consider a rigid blob of charge oscillating (maybe not sinusoidally) on the end of a shaft. If the oscillations are not too violent, then in the characteristic time it takes light to traverse the blob, all motion is slow compared to c, and we can approximate the retarded potentials by using Taylor series (Landau 1962, or Poisson 1999). 
This procedure will lead us to compute a force and therefore the lower derivatives (x'') from the higher derivatives (x'''); but this is the opposite of how the laws of nature normally work in physics. Even terms in the Taylor series are the same for retarded and advanced fields, so they don't contribute to radiation and can be ignored. In odd terms, x' obviously can't contribute, because that would violate Lorentz invariance; therefore the first odd term that can contribute is x'''. Based on units, the force must be a unitless constant times $kq^2x'''/c^3$ ; the unitless constant turns out to be 2/3; this is the Lorentz-Dirac equation, $F=(2/3)kq^2x'''/c^3$ . The radiated power is then of the form $x'x'''$ . This is nice because it vanishes for constant acceleration, which is consistent with the equivalence principle. It's not so nice because you get nasty behavior such as exponential runaway solutions for free particles, and causality violation in which particles start accelerating before a force is applied. Integration by parts lets you reexpress the radiated energy as the integral of $x''x''$ , plus a term that vanishes over one full cycle of periodic motion. This gives the Larmor formula $P=(2/3)kq^2a^2/c^3$ , which superficially seems to violate the equivalence principle. Note that starting from the expression $x'x'''$ for the radiated power, you can integrate by parts and get $x''x''$ plus surface terms. On the other hand, if you believe that $x''x''$ is more fundamental, you can integrate by parts and get $x'x'''$ plus surface terms. So this fails to resolve the issue. The surface terms only vanish for periodic motion. In a comment, Michael Brown asks the natural question of whether the issue can be resolved by experiment. I don't know that experiments can resolve the issue, since the issue is really definitional: what constitutes radiation, and how do we describe the observer-dependence of what constitutes radiation? 
In particular, if observers A and B are accelerated relative to one another, it's not obvious that what A calls a radiation field will also be a radiation field according to B. We know that bremsstrahlung exists and that it's the process responsible for the x-rays that produce an image of my broken arm. There doesn't seem to be much controversy over whether the power generated by the x-ray tube can be calculated according to $x''x''$ . What about the frame of the decelerating electron, in which $x''=0$ ? The question then arises as to whether this frame can be extended far enough to encompass the photographic film or CCD chip that forms the image. It gets even tougher when we deal with gravitational accelerations. To a relativist, a charge sitting on a tabletop has a proper acceleration of 9.8 m/s2. Does this charge radiate? How about a charge orbiting the earth (Chiao 2006) or free-falling near the earth's surface? Lyle 2008 has this clear-as-mud summary (gotta love amazon's Look Inside! feature): To a first approximation, remaining close enough to the charge for curvature effects to be negligible, in the sense that the metric components remain roughly constant, GR+SEP tells us that there should not be electrogravitic bremsstrahlung for a charge following a geodesic, although there will when the charge follows curves [satisfying the equations of motion], due to its deviation from the geodesic. Unfortunately, calculations show that the electromagnetic radiation from a free-falling charge, if it exists as suggested by the Larmor $x''x''$ formula, would be many, many orders of magnitude too small to measure. 
Chiao, http://arxiv.org/abs/quant-ph/0601193v7 Gralla, http://arxiv.org/abs/0905.2391 Grøn, http://arxiv.org/abs/0806.0464 Harpaz, http://arxiv.org/abs/physics/9910019 Landau and Lifshitz, The classical theory of fields Lyle, "Uniformly Accelerating Charged Particles: A Threat to the Equivalence Principle," http://www.amazon.com/Uniformly-Accelerating-Charged-Particles-Equivalence/dp/3540684697/ref=sr_1_1?ie=UTF8&qid=1373683154&sr=8-1&keywords=Uniformly+Accelerating+Charged+Particles%3A+A+Threat+to+the+Equivalence+Principle C. Morette-DeWitt and B.S. DeWitt, "Falling Charges," Physics, 1,3-20 (1964); copy available at https://journals.aps.org/ppf/abstract/10.1103/PhysicsPhysiqueFizika.1.3 Parrott, http://arxiv.org/abs/gr-qc/9303025 Poisson, http://arxiv.org/abs/gr-qc/9912045 | {
"source": [
"https://physics.stackexchange.com/questions/70915",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/22307/"
]
} |
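The closing claim of the answer above — that radiation from a charge free-falling at $g$ would be many orders of magnitude too small to measure — is easy to check with the Larmor formula $P=(2/3)kq^2a^2/c^3$, i.e. $P=q^2a^2/(6\pi\epsilon_0 c^3)$ in SI units, taking an electron as the charge:

```python
import math

Q = 1.602e-19     # electron charge, C
EPS0 = 8.854e-12  # vacuum permittivity, F/m
C = 2.998e8       # speed of light, m/s
G = 9.8           # gravitational acceleration, m/s^2

def larmor_power(q, a):
    """Larmor radiated power of a point charge with acceleration a (watts)."""
    return q**2 * a**2 / (6 * math.pi * EPS0 * C**3)

p = larmor_power(Q, G)  # on the order of 1e-51 W
```

The result is around $10^{-51}$ W, hopelessly below the reach of any detector, which is why the equivalence-principle dispute cannot be settled by simply watching a falling electron.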
71,069 | Air is 6 times denser than aerographite but looking at pictures or videos presenting the material, I see it resting on tables rather than rising to the ceiling. Also, since the material is made of carbon nanotubes, I assume there are empty spaces between those tubes. Why are those spaces not occupied by air? | Your two questions are connected. There is a huge amount of empty space in aerographene (and other aerogels). However this space is filled with air, and precisely because it is filled with air it doesn't float. This is because the density reported is the density the material would have if the air was sucked out (i.e. in vacuum), and it is so low because the material is extremely porous. But in the atmosphere, the air fills the immense empty space. The effective volume of air displaced by aerographite now takes up only the volume of the constituent nanotubes of aerographite, which is extremely small. The tiny weight of this displaced air presents the buoyant force, which is not sufficient to counter the weight of the structure. Effectively, because it is so porous, the aerographite's density increases when not in vacuum. On the other hand, given that graphene is known to be non-permeable for atoms, if you sucked the air out of aerographene and encased it in graphene, and if outside air didn't squish the whole thing so that its density surpassed that of air, then the resulting balloon might float. | {
"source": [
"https://physics.stackexchange.com/questions/71069",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/27064/"
]
} |
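The bookkeeping in the answer above can be made explicit. Assuming the roughly 0.18 kg/m³ vacuum density implied by the question's "air is 6 times denser" and standard air at 1.2 kg/m³ (the tiny nanotube volume is neglected):

```python
RHO_AIR = 1.2    # density of air, kg/m^3
RHO_AERO = 0.18  # reported (in-vacuum) density of aerographite, kg/m^3
V = 1.0          # sample volume, m^3
g = 9.81         # gravitational acceleration, m/s^2

# In the atmosphere the pores fill with air, so the sample's weight
# includes the enclosed air, while buoyancy only supplies the weight
# of the air the full volume displaces.
buoyancy = RHO_AIR * V * g
weight_in_air = (RHO_AERO + RHO_AIR) * V * g
net_in_air = buoyancy - weight_in_air        # negative: rests on the table

# Evacuated and sealed (and assumed rigid, which real aerogels are not):
net_evacuated = buoyancy - RHO_AERO * V * g  # positive: it would float
```

The sign flip between the two cases is the whole story: porous-but-open material sinks, a sealed evacuated shell of the same vacuum density would rise.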